Getting started#

import sys
# to import quask from a local checkout, add the src directory
# (relative to docs/source/notebooks) to the path
sys.path.append('../../../src')

Are you curious to know if quask is the right library for your project? Here is a quick and straightforward guide on how to get started using this tool. We will see:

  1. how to quickly install the framework;

  2. what the main components of quask are and how to use them;

  3. how to solve a toy classification problem using quantum kernels via the quask application programming interface.

Warning

This first tutorial illustrates the functionalities of the framework. It presumes pre-existing knowledge of kernel methods and quantum computing. For a gentler introduction, take a look at the Intro to classical kernels page.

Fast installation#

The easiest way to use quask is by installing it in your Python3 environment (version >= 3.10) via the pip package manager:

python3 -m pip install quask==2.0.0-alpha1

You also need at least one quantum SDK installed on your system; for example, we can install Qiskit. For more information about the installation process, see the Installation section.

python3 -m pip install qiskit qiskit_ibm_runtime
python3 -m pip install qiskit_ibm_runtime --upgrade
python3 -m pip install qiskit-aer

You can check whether the installation was successful by running:

import quask
print(quask.__version__)

quask can also be used as a standalone application, meaning it can be run from the command line without the need for coding. This modality is explored in depth later.
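The same check can also be performed from the shell, without opening an interactive session:

python3 -c "import quask; print(quask.__version__)"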

Hello world, quantum kernels!#

from quask.core import Ansatz, Kernel, KernelFactory, KernelType
import numpy as np

Here, we can see the main objects of quask. The class Ansatz represents the function that maps classical data to the Hilbert space of the quantum system using a parametric quantum circuit. This class is parameterized by the number of qubits in the underlying quantum circuit, which often corresponds to the number of features (although it’s not a strict rule), the number of gates applied to the quantum circuit, and the number of features that the classical data point has.

The class Kernel represents a kernel object, which is essentially an ansatz along with additional information on how to effectively implement the quantum circuit for the entire procedure. The kernel object must be executed using one of the available backends (Qiskit, Pennylane, Qibo, etc.). To achieve this, the kernel class has been designed as an abstract object, meaning it cannot be used directly. Instead, we can use one of its subclasses, with each subclass interfacing with a particular backend. We can instantiate the concrete (non-abstract) kernel objects using the KernelFactory class.

Since there are several ways to design a quantum kernel using a single ansatz, the KernelType class is an enumeration whose values indicate the kind of kernel to be implemented.
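The exact set of kernel types depends on the installed version of quask; you can inspect the enumeration directly to see what is available (this tutorial only uses KernelType.FIDELITY):

# List the kernel types provided by the installed version of quask
print(list(KernelType))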

# Number of features in the data point to be mapped in the Hilbert space of the quantum system
N_FEATURES = 1

# Number of gates applied to the quantum circuit
N_OPERATIONS = 1

# Number of qubits of the quantum circuit
N_QUBITS = 2

# Ansatz object, representing the feature map
ansatz = Ansatz(n_features=N_FEATURES, n_qubits=N_QUBITS, n_operations=N_OPERATIONS)

The ansatz class is not immediately usable when instantiated. It needs to be initialized so that all of its operations correspond to valid gates; here, every operation is initialized to the identity.

ansatz.initialize_to_identity()

Each operation acts on two qubits and is defined as

\[U(\beta \theta) = \exp(-i \beta \frac{\theta}{2} \sigma_1 \sigma_2),\]

where each of the generators \(\sigma_1\) and \(\sigma_2\) is one of the Pauli matrices \(X, Y, Z\) or the identity \(\mathrm{Id}\). When one of the generators is the identity, the gate acts nontrivially on a single qubit only.

All the gates are parameterized by a single real-valued parameter, \(\theta\), which can optionally be rescaled by a global scaling parameter \(0 < \beta < 1\). We can characterize each parametric gate by the following (a worked example follows the list):

  • The feature that parameterizes the rotation, with \(0 \le f \le N\_FEATURES - 1\), or the constant feature \(1\). The constant feature allows us to construct non-parameterized gates.

  • A pair of generators, represented by a 2-character string.

  • The qubits on which the operation acts, denoted as \((q_1, q_2)\), where \(0 \le q_i < N\_QUBITS\) and \(q_1 \neq q_2\). For ‘single-qubit gates’ with the identity as one or both generators, the qubit on which the identity is applied has no effect on the transformation.

  • The scaling parameter \(\beta\).
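For instance, the operation configured in the next cell uses feature \(x_0\), generator pair \(XX\), qubits \((0, 1)\), and \(\beta = 0.9\), and therefore implements the unitary

\[U(0.9\, x_0) = \exp\left(-i \, 0.9 \, \frac{x_0}{2} \, X \otimes X\right).\]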

ansatz.change_operation(0, new_feature=0, new_wires=[0, 1], new_generator="XX", new_bandwidth=0.9)

The ansatz serves as the feature map for our quantum kernel. To calculate kernel values, however, we also need to specify how they are computed, either via the fidelity test or via the expectation value of some observable, as well as the backend to be used.

Currently, we support Qiskit, Pennylane, and Braket. More detailed information is available in the Backends in quask tutorial. Here, we use Qiskit as the backend, which has to be installed separately. To create the commonly used fidelity kernel, we provide the ansatz, the basis on which we perform the measurement (typically the computational basis), and the type of kernel.

from quask.core_implementation import QiskitKernel
kernel = QiskitKernel(ansatz, "Z" * N_QUBITS, KernelType.FIDELITY)

To test whether the kernel object works correctly, we can evaluate the kernel function on a pair of data points.

x1 = np.array([0.001])
x2 = np.array([0.999])
similarity = kernel.kappa(x1, x2)
print("The kernel value between x1 and x2 is", similarity)
The kernel value between x1 and x2 is 0.4033203125

We can decouple the actual backend used from the high-level API. The decoupling is managed by the KernelFactory class and its create_kernel method. By default, KernelFactory creates objects that rely on the noiseless, infinite-shot simulation of Pennylane as a backend. To use a different backend, you first have to register it with KernelFactory and set it as the current implementation.

def create_qiskit_noiseless(ansatz: Ansatz, measurement: str, type: KernelType):
    return QiskitKernel(ansatz, measurement, type, n_shots=None)

KernelFactory.add_implementation('qiskit_noiseless', create_qiskit_noiseless)
KernelFactory.set_current_implementation('qiskit_noiseless')
kernel = KernelFactory.create_kernel(ansatz, "Z" * N_QUBITS, KernelType.FIDELITY) # QiskitKernel
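As a quick sanity check, the factory-created kernel can be evaluated on the same pair of points as before; since this implementation simulates without shot noise, the value may differ slightly from the earlier finite-shot estimate:

similarity = kernel.kappa(x1, x2)
print("The kernel value between x1 and x2 is", similarity)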

Solve the Iris dataset classification using quask#

We demonstrate how to integrate quask into a machine learning pipeline based on the library scikit-learn. This package allows us to effortlessly set up a toy classification problem that can be solved using kernel machines with quantum kernels.

from sklearn.datasets import load_iris
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
import numpy as np

First, we load the dataset. It can be retrieved directly from the Python package of scikit-learn.

It contains 150 samples associated with the three different subspecies of the Iris flower, with 50 samples for each subspecies. To simplify the task, we classify only the first two classes, selecting 20 samples per class.

Each sample has 4 real-valued features.

N_ELEMENTS_PER_CLASS = 20
iris = load_iris()
X = np.vstack([iris.data[0:N_ELEMENTS_PER_CLASS], iris.data[50:50+N_ELEMENTS_PER_CLASS]])
y = np.array([0] * N_ELEMENTS_PER_CLASS + [1] * N_ELEMENTS_PER_CLASS)

We preprocess our data and divide the dataset into training and testing sets.

# Standardize the features
scaler = StandardScaler()
X = scaler.fit_transform(X)

# Split the data into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=5454)

We then define the machine learning model to solve the classification task. Among the possibilities, we choose the Support Vector Machine. In order to use the quantum kernel, we specify, via the precomputed option, that we will supply the kernel machine with the kernel Gram matrix instead of the original features.

# Instantiate a machine learning model
model = SVC(kernel='precomputed')

We then calculate the kernel Gram matrices and train the model.

# Create a quantum kernel
ansatz = Ansatz(n_features=4, n_qubits=4, n_operations=4)
ansatz.initialize_to_identity()
ansatz.change_operation(0, new_feature=0, new_wires=[0, 1], new_generator="XX", new_bandwidth=0.9)
ansatz.change_operation(1, new_feature=1, new_wires=[1, 2], new_generator="XX", new_bandwidth=0.9)
ansatz.change_operation(2, new_feature=2, new_wires=[2, 3], new_generator="XX", new_bandwidth=0.9)
ansatz.change_operation(3, new_feature=3, new_wires=[3, 0], new_generator="XX", new_bandwidth=0.9)
kernel = KernelFactory.create_kernel(ansatz, "ZZZZ", KernelType.FIDELITY)

# Fit the model to the training data
K_train = kernel.build_kernel(X_train, X_train)
model.fit(K_train, y_train)
SVC(kernel='precomputed')

We then use the model to predict the labels of the elements in the testing set. Again, we need to compute a kernel Gram matrix, this time between the elements of the testing set and those of the training set.

# Predict the labels for the test data
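# the precomputed matrix for prediction must have one row per test point and one column per training point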
K_test = kernel.build_kernel(X_test, X_train)
y_pred = model.predict(K_test)

Finally, we can calculate the accuracy with respect to the testing set.

# Calculate the accuracy
accuracy = np.sum(y_test == y_pred) / len(y_test)
print("Accuracy:", accuracy)
Accuracy: 0.4

Among the features of quask is the ability to evaluate a kernel according to criteria known in the literature. Here we demonstrate one of them, the Centered Kernel Target Alignment. The lower the cost, the better the kernel is suited for the task.

from quask.evaluator import CenteredKernelAlignmentEvaluator
ce = CenteredKernelAlignmentEvaluator()
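# the first argument (a Kernel object) can be None here, since the precomputed Gram matrix K_train is supplied directly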
cost = ce.evaluate(None, K_train, X_train, y_train)
print("The cost according to the Centered-KTA is:", cost)
The cost according to the Centered-KTA is: -0.0441181503241057