November 20, 2019

qsim - a quantum computing simulator

I’d like to introduce my latest project – qsim – a Go package for simulating quantum computations. This is a very rudimentary project that I’d like to expand considerably, but I think the core design has reached a good enough stage to share and garner feedback.

Here is the code:

What is qsim, and what does it mean to simulate quantum computations? In brief, we write a software program to run on a classical von Neumann computer that approximates the behavior of algorithms that could run on a quantum computer. Let’s dive in…

Classical Computation versus Quantum Computation

A classical computer reduces to binary operations, where both the program and the data under manipulation can be represented as physical bits 0 and 1. Sometimes this gets described as an “off” and an “on” state of a machine’s transistors; a computation occurs when a set of bits is manipulated through sequential operations, or logic gates. A machine can execute a classical computation only as fast as it can flip these bits.

A quantum computer adheres to many of the same restrictions in terms of computability and sequential execution – with the exception that, in certain problem domains, the quantum computer can speed up the arrival at a solution by encoding more than one state on a single bit. So, instead of representing data as a binary 0 or 1, a quantum computer can represent data as a linear combination of the basis states |0> and |1>. The key concept in quantum theory is this notion of a linear combination. Without the tools of linear algebra, we would not be able to describe things like superposed states, and we would be stuck with the binary 0 and 1 of classical computation.

I’ll say more about linear combinations in a moment, but before I get there, let me introduce the code I wrote for qsim.

Go structs, interfaces

The elementary qbit – the quantum bit over the basis states |0> and |1> – has a very simple struct representation:

// QuantumBit represents a single quantum bit
type QuantumBit struct {
	vec linalg.Vector // field name is a placeholder here; it holds the amplitudes of a|0> + b|1>
}

Basically, the QuantumBit wraps a Vector struct imported from a linear algebra subpackage, linalg, that I wrote specifically to perform the necessary mathematics. This matches standard notation in quantum computing, where the basis state |0> has the vector representation [ 1 ; 0 ] and the basis state |1> has the vector representation [ 0 ; 1 ]. In general, the state of a qbit may be derived from any linear combination of states a|0> + b|1> with vector representation [ a ; b ]. The variables a and b are complex numbers, and as a rule, the sum of the absolute values squared abs(a)^2 + abs(b)^2 must equal 1. This rule, along with the linear algebra of complex-valued vectors, is the backbone of quantum physics and quantum computing.
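To make the normalization rule concrete, here is a minimal sketch that checks abs(a)^2 + abs(b)^2 = 1 for a state vector. It uses a plain array in place of qsim’s linalg.Vector, so the names are hypothetical:

```go
package main

import (
	"fmt"
	"math"
	"math/cmplx"
)

// qbitVec holds the amplitudes a and b of the state a|0> + b|1>.
// (A stand-in for the linalg.Vector that QuantumBit wraps.)
type qbitVec [2]complex128

// normalized reports whether abs(a)^2 + abs(b)^2 equals 1, within a tolerance.
func (v qbitVec) normalized() bool {
	sum := cmplx.Abs(v[0])*cmplx.Abs(v[0]) + cmplx.Abs(v[1])*cmplx.Abs(v[1])
	return math.Abs(sum-1) < 1e-9
}

func main() {
	zero := qbitVec{1, 0} // the basis state |0> = [ 1 ; 0 ]
	h := complex(1/math.Sqrt2, 0)
	plus := qbitVec{h, h} // the superposition (|0> + |1>)/sqrt(2)
	fmt.Println(zero.normalized(), plus.normalized()) // true true
}
```

Both the basis state and the equal superposition pass the check; a vector like [ 1 ; 1 ] would not, since its squared amplitudes sum to 2.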

To manipulate quantum bits, we need the notion of a linear operator, defined in code by

type Operator struct {
	mat linalg.Matrix // field name is a placeholder here; it wraps the linalg Matrix
}

which, as you can see, wraps a Matrix struct from the linalg subpackage. The purpose here is simple: you can describe the alteration of a QuantumBit’s underlying Vector representation by successively scaling, stretching, and rotating it, and all of these actions can be represented as a Matrix. By way of example, if we want to take the |0> basis vector [ 1 ; 0 ] and transform it into the |1> basis vector [ 0 ; 1 ], we can multiply it by the matrix [ 0 1 ; 1 0 ], which has the effect of swapping rows. All possible quantum states – which is to say, 2^N x 1 vectors that obey the rule that the sum of absolute values squared equals one – can be constructed by applying matrix operations.
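The row-swapping example can be sketched directly as a 2x2 matrix-vector multiplication. This is independent of qsim’s Operator type; the function and variable names here are my own:

```go
package main

import "fmt"

// apply multiplies a 2x2 matrix by a 2x1 state vector – the core
// operation behind applying an Operator to a QuantumBit.
func apply(m [2][2]complex128, v [2]complex128) [2]complex128 {
	return [2]complex128{
		m[0][0]*v[0] + m[0][1]*v[1],
		m[1][0]*v[0] + m[1][1]*v[1],
	}
}

func main() {
	X := [2][2]complex128{{0, 1}, {1, 0}} // the row-swapping matrix [ 0 1 ; 1 0 ]
	zero := [2]complex128{1, 0}           // |0> = [ 1 ; 0 ]
	fmt.Println(apply(X, zero))           // [(0+0i) (1+0i)], i.e. |1>
}
```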

In quantum computing, there’s a set of standard operators that we always start with. These operators are considered universal, as by repeatedly applying operators from this set, we can perform any valid quantum computation. The standard operators are:

  • the Pauli gates X, Y, and Z
  • the Hadamard gate H
  • the controlled NOT gate CNOT
  • the Toffoli or CCNOT gate Toffoli

Each of these gates has an implementation in qsim through its respective matrix representation.

Lastly, I supply the StateSpace interface

// StateSpace interface provides a broad abstraction for manipulating the underlying vectors and matrices involved in running simulated QC calculations.
type StateSpace interface {
	Equal(StateSpace) bool
	Len() int
	AtVec(int) complex128
	Data() linalg.Matrix
}

which serves as a template for me to expand the application of Operator-backed gates to multiple qbits, represented by the Register struct.

Simulating Quantum Behavior

I have not yet explained the phrase “the sum of the absolute values squared must equal one”. What’s behind this phrase is key to quantum theory in general. There’s a notion from the very start of quantum theory that, rather than physics being based on a collection of purely deterministic rules and processes, the physics of the world we live in has a non-deterministic, which is to say probabilistic or stochastic, description. These probabilities generally do not come into play when observing common physical materials like wood tables, or ice water. The table is there, or it is not – a simple classical observation. But the stochastic nature of very small or very high energy systems – such as a single electron or a plasma flow – does have a significant impact on being able to describe the system accurately. The electron of known velocity has a probability of being located in a given position. Or equivalently, an electron found at a known location is probably moving somewhere else. The phrase “the sum of the absolute values squared must equal one” merely expresses the idea that we must account for all the probabilities.

In quantum computing, the two key states we have an interest in are the basis states |0> and |1>. We’re interested in these precisely because we can use them to express a correspondence with classical data. The stochastic interpretation of quantum theory then allows us to talk about the probability of a 0 or 1 through the correspondence with the basis states |0> and |1> and their amplitudes.

In code, I implement the probability of a classical bit through a naive random number generation

func shot() float64 {
	return rand.New(rand.NewSource(time.Now().UnixNano())).Float64()
}

and then perform a weighted random choice across the amplitudes to select a possible value.

This random number generation – perhaps with a more sophisticated choice algorithm or a more nuanced probability distribution – is the only way a classical computer can approximate the behavior of an actual quantum computation. A real quantum machine would have the qbit itself act in this role. A real quantum machine would also not need to use matrix math to achieve transitions between states; such behavior would occur naturally through the machine itself rather than through the algebraic representation.

Future plans

Please do look at the code I’ve put up on GitLab and offer critiques. There are two important things I will be working to improve:

  1. Improved manipulation of individual qbit amplitudes. As the code stands at this moment, once multiple qbits interact, even deterministically, I haven’t provided a good way to apply operations to individual qbits again. This is a serious design flaw and an impediment to progressing with the project.

  2. A more flexible quantum circuit modeling interface. Actually, since QASM is a pretty good standard for describing quantum algorithms, I’d like to write an interpreter in Go that uses qsim to run a given QASM described circuit.

Both of these will be very educational, at least for myself. (1) will teach me the design efficiencies that make Python the implementation language of choice for other simulators, including Qiskit, Cirq, and QuTiP. (2) will teach me how to write an interpreter – along with its underlying parser/lexer – from scratch in Go. Both will be extremely valuable lessons in computing, even without the quantum.

Content by © Jared Davis 2019-2020
