Can the difference between a qubit and a bit be explained with a simple code example?

The only places I know of where you can play with quantum computing are Google's Quantum Computing Playground and IBM's Quantum Experience. While the former uses QScript and the latter QASM-style languages (which are easy to learn), using them still doesn't feel much different from ordinary programming (apart from a few special functions). Here is the Wikipedia explanation:

A qubit has several similarities with a classical bit, but is overall very different. There are two possible outcomes for measuring a qubit, usually 0 and 1, just like a bit. The difference is that while a bit's state is either 0 or 1, a qubit's state can also be a superposition of both. You can fully encode one bit in one qubit. However, a qubit can carry even more information, for example up to two bits using superdense coding.

For a system of n components, a full description of its state in classical physics requires only n bits, while in quantum physics 2^n - 1 complex numbers are required.
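The scaling in that quote can be made concrete with a tiny sketch (numpy is just an assumed convenience here): describing n classical bits takes n booleans, while the state vector of n qubits takes 2^n complex amplitudes.

```python
import numpy as np

n = 3
classical_state = [False] * n                     # n bits: n booleans
quantum_state = np.zeros(2 ** n, dtype=complex)   # n qubits: 2**n amplitudes
quantum_state[0] = 1.0                            # the basis state |000>

print(len(classical_state))  # 3
print(len(quantum_state))    # 8
```

The "minus one" in the quote comes from normalization: the amplitudes must have squared magnitudes summing to 1, which removes one degree of freedom.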

Which is more or less clear. But how can this be shown with sample code?

1 answer

Classically, flipping a bit at random 500 times and counting how often it ends up set looks like this:

from random import random

def coin_count():
    bit = False
    counter = 0
    for _ in range(500):
        bit ^= random() < 0.5  # False → 50% False, 50% True
                               #  True → 50% False, 50% True
        if bit:
            counter += 1
    return counter
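For completeness, here is a self-contained run of the classical counter (repeating the function above, with a fixed seed so the sketch is reproducible). Each iteration leaves the bit uniformly random, so the count is binomial(500, 0.5): mean 250, standard deviation about 11.2.

```python
from random import random, seed
from statistics import mean, stdev

def coin_count():
    bit = False
    counter = 0
    for _ in range(500):
        bit ^= random() < 0.5  # fair re-flip: bit is uniform each step
        if bit:
            counter += 1
    return counter

seed(42)  # fixed seed so repeated runs give the same numbers
results = [coin_count() for _ in range(1000)]
print(mean(results), stdev(results))  # close to 250 and 11.2
```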

If you run this many times and plot how often each count occurs, you get:

(Image: a classic binomial distribution.)

The quantum version replaces the random flip with a Hadamard operation, which acts like a "half flip": it sends each basis state into an equal-weight superposition of both. Note that nothing inside the loop measures the qubit.

def hadamard_coin_count():
    qubit = qalloc()
    counter = 0
    for _ in range(500):
        apply Hadamard to qubit  # |0⟩ → √½|0⟩ + √½|1⟩
                                 # |1⟩ → √½|0⟩ - √½|1⟩
        if qubit:  # (not a measurement; controls nested operations)
            counter += 1  # (happens only in some parts of the superposition)
    return measure(counter)  # (note: counter was in superposition)
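The pseudocode above can't run directly on a classical machine, but the state it builds is small enough to simulate exactly. Here is a minimal sketch, assuming numpy, that tracks the joint amplitudes of (qubit, counter) and applies a Hadamard followed by a controlled increment on each step:

```python
import numpy as np

def hadamard_walk_distribution(steps=500):
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard matrix
    # amp[b, c] = amplitude of the joint state |qubit=b, counter=c>
    amp = np.zeros((2, steps + 1), dtype=complex)
    amp[0, 0] = 1.0  # start as |qubit=0, counter=0>
    for _ in range(steps):
        amp = H @ amp             # apply Hadamard to the qubit
        amp[1, 1:] = amp[1, :-1]  # controlled increment: counter += 1 ...
        amp[1, 0] = 0             # ... only in the qubit=1 branch
    # measuring the counter: probability of each value, summed over the qubit
    return (np.abs(amp) ** 2).sum(axis=0)

probs = hadamard_walk_distribution()
```

Plotting `probs` reproduces the quantum-walk shape: the probability spreads far beyond the binomial's width, with interference peaks away from the center, because amplitudes in different branches cancel as well as add.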

If you run this on a quantum computer (or a simulator) and plot the measured counts, you get something quite different:

(Image: a quantum walk distribution.)

The counts spread out much further and pile up away from the center instead of forming a bell curve around 250. That happens because the branches of the superposition interfere: amplitudes can cancel as well as add, which classical probabilities can never do. That interference is the essential difference between a qubit and a bit.


Source: https://habr.com/ru/post/1679840/

