|
160 | 160 | 2, |
161 | 161 | None, |
162 | 162 | 'the-iris-data-and-classical-svm'), |
| 163 | + ('Small addendum, $F1$-score', |
| 164 | + 2, |
| 165 | + None, |
| 166 | + 'small-addendum-f1-score'), |
163 | 167 | ('Iris Dataset', 2, None, 'iris-dataset'), |
164 | 168 | ('Qiskit implementation', 2, None, 'qiskit-implementation'), |
165 | 169 | ('Credit data classification', |
|
338 | 342 | <!-- navigation toc: --> <li><a href="#pennylane-implementations" style="font-size: 80%;">PennyLane implementations</a></li> |
339 | 343 | <!-- navigation toc: --> <li><a href="#steps-in-quantum-kernel-svm" style="font-size: 80%;">Steps in Quantum Kernel SVM</a></li> |
340 | 344 | <!-- navigation toc: --> <li><a href="#the-iris-data-and-classical-svm" style="font-size: 80%;">The Iris data and classical SVM</a></li> |
| 345 | + <!-- navigation toc: --> <li><a href="#small-addendum-f1-score" style="font-size: 80%;">Small addendum, \( F1 \)-score</a></li> |
341 | 346 | <!-- navigation toc: --> <li><a href="#iris-dataset" style="font-size: 80%;">Iris Dataset</a></li> |
342 | 347 | <!-- navigation toc: --> <li><a href="#qiskit-implementation" style="font-size: 80%;">Qiskit implementation</a></li> |
343 | 348 | <!-- navigation toc: --> <li><a href="#credit-data-classification" style="font-size: 80%;">Credit data classification</a></li> |
@@ -971,7 +976,7 @@ <h2 id="quantum-svm-algorithms-large-scale-vs-nisq" class="anchor">Quantum SVM A |
971 | 976 | <p>Early work by Rebentrost, Mohseni, and Lloyd (2014) formulated an SVM |
972 | 977 | in terms of quantum linear algebra. They showed that one can invert |
973 | 978 | the kernel matrix (a positive semidefinite matrix) using quantum |
974 | | -algorithms (HHL algorithm) in time polylogarithmic in \( N \) and \( d \) . |
| 979 | +algorithms in time polylogarithmic in \( N \) and \( d \) . |
975 | 980 | Concretely, they assumed quantum RAM (QRAM) access to data and used a |
976 | 981 | quantum subroutine to solve the dual SVM as a linear system, yielding |
977 | 982 | the vector of \( \alpha_i \) in superposition. Under ideal conditions |
@@ -1008,7 +1013,7 @@ <h2 id="and-nisq-quantum-kernels" class="anchor">And NISQ Quantum Kernels </h2> |
1008 | 1013 | <h2 id="quantum-neural-network" class="anchor">Quantum neural network </h2> |
1009 | 1014 |
|
1010 | 1015 | <p>Another variation is the quantum variational classifier, sometimes |
1011 | | -called a quantum neural network. Instead of precomputing a fixed |
| 1016 | +called a quantum neural network (to be discussed below). Instead of precomputing a fixed |
1012 | 1017 | kernel, one trains a parameterized quantum circuit to output labels. |
1013 | 1018 | Interestingly, Schuld (2021) shows that variational quantum models, |
1014 | 1019 | when trained by minimizing a loss, are mathematically equivalent to |
@@ -1360,7 +1365,10 @@ <h2 id="training-svm-with-precomputed-quantum-kernels" class="anchor">Training S |
1360 | 1365 | on the test set. |
1361 | 1366 | </p> |
1362 | 1367 |
|
1363 | | -<p>It is also possible to integrate PennyLane’s differentiable capabilities by defining a parameterized kernel and optimizing parameters via gradient descent, but here we keep a fixed feature map.</p> |
| 1368 | +<p>It is also possible to integrate PennyLane’s differentiable |
| 1369 | +capabilities by defining a parameterized kernel and optimizing |
| 1370 | +parameters via gradient descent, but here we keep a fixed feature map. |
| 1371 | +</p> |
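A fixed feature map of this kind can be sketched with plain NumPy, without any quantum-computing library. The two-qubit angle-encoding map and function names below are illustrative assumptions, not the notes' PennyLane code: each 2D point is encoded as \( |\phi(x)\rangle = (R_y(x_0)\otimes R_y(x_1))|00\rangle \) and the kernel entry is the state fidelity.

```python
import numpy as np

def ry(theta):
    """Single-qubit R_y rotation matrix."""
    c, s = np.cos(theta / 2.0), np.sin(theta / 2.0)
    return np.array([[c, -s], [s, c]])

def feature_state(x):
    """Angle-encode a 2D point: |phi(x)> = (Ry(x0) (x) Ry(x1)) |00>."""
    zero = np.array([1.0, 0.0])
    return np.kron(ry(x[0]) @ zero, ry(x[1]) @ zero)

def fidelity_kernel(x, z):
    """Quantum kernel entry k(x, z) = |<phi(z)|phi(x)>|^2."""
    return abs(np.dot(feature_state(z), feature_state(x))) ** 2

# Gram matrix for a few points; k(x, x) = 1 by construction,
# and K is symmetric and positive semidefinite
X = np.array([[0.1, 0.5], [1.2, -0.3], [0.7, 0.9]])
K = np.array([[fidelity_kernel(a, b) for b in X] for a in X])
```

Such a precomputed Gram matrix can then be handed directly to a classical SVM solver (e.g. scikit-learn's `SVC(kernel="precomputed")`).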
1364 | 1372 |
|
1365 | 1373 | <!-- !split --> |
1366 | 1374 | <h2 id="discussion-of-implementation" class="anchor">Discussion of Implementation </h2> |
@@ -1467,6 +1475,32 @@ <h2 id="the-iris-data-and-classical-svm" class="anchor">The Iris data and classi |
1467 | 1475 | </div> |
1468 | 1476 |
|
1469 | 1477 |
|
| 1478 | +<!-- !split --> |
| 1479 | +<h2 id="small-addendum-f1-score" class="anchor">Small addendum, \( F1 \)-score </h2> |
| 1480 | + |
| 1481 | +<p>The \( F1 \) measure (or \( F1 \)-score) in machine learning is a metric used to |
| 1482 | +evaluate the accuracy of a classification model, particularly in |
| 1483 | +situations where class distribution is imbalanced. |
| 1484 | +It is the harmonic mean of precision and recall and is defined as |
| 1485 | +</p> |
| 1486 | +$$ |
| 1487 | +\mathrm{F1\ score} = 2 \times \frac{\mathrm{precision} \times \mathrm{recall}}{\mathrm{precision} + \mathrm{recall}}, |
| 1488 | +$$ |
| 1489 | + |
| 1490 | +<p>where we have defined</p> |
| 1491 | +$$ |
| 1492 | +\mathrm{precision} = \frac{\mathrm{True\ Positives}}{\mathrm{True\ Positives} + \mathrm{False\ Positives}}, |
| 1493 | +$$ |
| 1494 | + |
| 1495 | +<p>and</p> |
| 1496 | +$$ |
| 1497 | +\mathrm{recall} = \frac{\mathrm{True\ Positives}}{\mathrm{True\ Positives} + \mathrm{False\ Negatives}}. |
| 1498 | +$$ |
| 1499 | + |
| 1500 | +<p>The \( F1 \)-score ranges from \( 0 \) to \( 1 \), where \( 1 \) means perfect precision and recall, while |
| 1501 | +\( 0 \) means either precision or recall is zero. |
| 1502 | +</p> |
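The definitions above translate directly into a small helper; this is a minimal sketch, and the function name and example counts are illustrative assumptions.

```python
def f1_score(tp, fp, fn):
    """F1-score from true-positive, false-positive and false-negative counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2.0 * precision * recall / (precision + recall)

# Example: 8 true positives, 2 false positives, 4 false negatives
# precision = 0.8, recall = 2/3, so F1 = 8/11, roughly 0.727
```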
| 1503 | + |
1470 | 1504 | <!-- !split --> |
1471 | 1505 | <h2 id="iris-dataset" class="anchor">Iris Dataset </h2> |
1472 | 1506 |
|
@@ -2036,7 +2070,7 @@ <h2 id="mathematical-example" class="anchor">Mathematical example </h2> |
2036 | 2070 |
|
2037 | 2071 | <p>and a variational layer is</p> |
2038 | 2072 | $$ |
2039 | | -V(\boldsymbol\theta)=R_y(\theta_1)\otimes R_y(\theta_2),\text{CNOT}(0,1), |
| 2073 | +V(\boldsymbol\theta)=\mathrm{CNOT}(0,1)\,\bigl(R_y(\theta_1)\otimes R_y(\theta_2)\bigr), |
2040 | 2074 | $$ |
2041 | 2075 |
|
2042 | 2076 | <p>(apply \( R_y \) on each qubit then entangle). After |
|