
Commit 90a431b

Merge branch 'master' of https://github.com/PeilinZHENG/tensorcircuit into tqlpr163

2 parents 467399f + f4cb4f8

File tree

1 file changed: +3 -3 lines


docs/source/tutorials/imag_time_evo.ipynb

Lines changed: 3 additions & 3 deletions
@@ -414,7 +414,7 @@
 {
  "cell_type": "markdown",
  "source": [
-  "We use two methods to calculate $\\boldsymbol{\\delta}$: one computes it directly from the expressions of $\\boldsymbol{A}$ and $\\boldsymbol{C}$, and the other calls the existing API to obtain $\\boldsymbol{A}$ and $\\boldsymbol{C}$. The former evaluates $|\\partial_{\\boldsymbol{\\theta}_{j}}\\psi\\rangle$ only once, while the latter evaluates it twice, but the latter's code is more concise. In each method, we set the parameter $\\text{fixed_global_phase}$ to decide whether to fix the global phase, that is, whether the second term of $\\boldsymbol{A}$ vanishes.\n",
+  "We use two methods to calculate $\\boldsymbol{\\delta}$: one computes it directly from the expressions of $\\boldsymbol{A}$ and $\\boldsymbol{C}$, and the other calls the existing API to obtain $\\boldsymbol{A}$ and $\\boldsymbol{C}$. The former evaluates $|\\partial_{\\boldsymbol{\\theta}_{j}}\\psi\\rangle$ only once, while the latter evaluates it twice, but the latter's code is more concise. In each method, we set the parameter fixed_global_phase to decide whether to fix the global phase, that is, whether the second term of $\\boldsymbol{A}$ vanishes.\n",
   "\n",
   "Then we choose an existing optimizer, SGD, to implement the update step. Since the approximate imaginary-time evolution already corrects the update step size relative to naive gradient descent, adaptive optimizers designed to improve on naive gradient descent, such as Adam, are not suitable here: when the parameters are updated by an adaptive optimizer, the loss function fluctuates greatly. On the other hand, SGD without momentum performs the naive update, which is convenient for comparison with the exact imaginary-time evolution."
  ],
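
The cell above reduces each step to solving a linear system built from A and C, followed by a momentum-free step. A minimal sketch of that structure (not code from this commit; compute_delta and sgd_step are hypothetical names, and the sign of the step follows the tutorial's definitions of A and C):

import numpy as np

def compute_delta(A, C):
    # Solve A @ delta = C for the update direction delta; lstsq is used
    # because A may be singular for redundant parametrizations.
    delta, *_ = np.linalg.lstsq(A, C, rcond=None)
    return delta

def sgd_step(theta, delta, eta=0.01):
    # Momentum-free SGD: move theta along delta with step size eta.
    # No adaptive state is kept, so the corrected step size from the
    # imaginary-time evolution is applied as-is.
    return theta - eta * delta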
@@ -576,7 +576,7 @@
 {
  "cell_type": "markdown",
  "source": [
-  "We first show the overlap between the final states obtained by different methods. The final states obtained by different methods but with the same value of $\\text{fixed_global_phase}$ are almost the same, and they are also close to the exact final state. The final states obtained by the same method but with different values of $\\text{fixed_global_phase}$ differ by a global phase."
+  "We first show the overlap between the final states obtained by different methods. The final states obtained by different methods but with the same value of fixed_global_phase are almost the same, and they are also close to the exact final state. The final states obtained by the same method but with different values of fixed_global_phase differ by a global phase."
 ],
 "metadata": {
  "collapsed": false
@@ -915,7 +915,7 @@
 {
  "cell_type": "markdown",
  "source": [
-  "We also use two methods to calculate $\\boldsymbol{\\delta}$, but with some changes to the API-based method and to the update rule. When calculating $\\boldsymbol{A}$, we call $\\text{qng2}$ instead of $\\text{qng}$, and when calculating $\\boldsymbol{C}$, we call $\\text{dynamics_rhs}$ instead of computing the energy gradient via $\\text{value_and_grad}$. For the update, we do not call an existing optimizer but directly apply the naive update rule."
+  "We also use two methods to calculate $\\boldsymbol{\\delta}$, but with some changes to the API-based method and to the update rule. When calculating $\\boldsymbol{A}$, we call qng2 instead of qng, and when calculating $\\boldsymbol{C}$, we call dynamics_rhs instead of computing the energy gradient via value_and_grad. For the update, we do not call an existing optimizer but directly apply the naive update rule."
 ],
 "metadata": {
  "collapsed": false
