How do I train a model on the GPU? #32
loss is a QTensor variable. Use loss.to_numpy() to convert it to numpy.
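A minimal sketch of that conversion inside a training loop, assuming `toCPU()` returns a host-side copy of the QTensor (the error message quoted later in this thread says to call it first); the loop variables are hypothetical and only mirror the snippet below:

```python
# Sketch, not verbatim library usage: loss is a QTensor that may live on
# the GPU, so copy it to the CPU before the numpy conversion.
# toCPU() and to_numpy() are the names given in this thread.
loss = loss_func(y_train, output)   # hypothetical variables from the snippet below
loss_np = loss.toCPU().to_numpy()   # device copy first, then convert
train_loss_list.append(loss_np.item())
```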
The model part of the code is as follows:
`model = Net().toGPU()`
@lingjiajie Could you attach your complete code? I'll take a look.
@lingjiajie You are quite right; that example cannot be converted to a GPU version that simply.
Thank you very much. Your reply solves my problem. Thanks again for your help!
@lingjiajie If this solved your problem, please close this issue.
Hello!
My code is modified from the "Hybrid quantum-classical neural network model" example. The modified code is as follows:
```python
x_train, y_train, x_test, y_test = data_select(1000, 100)
model = Net().toGPU()
optimizer = Adam(model.parameters(), lr=0.005)
# the classification task uses the cross-entropy loss
loss_func = CategoricalCrossEntropy()
# number of training epochs
epochs = 10
train_loss_list = []
val_loss_list = []
train_acc_list = []
val_acc_list = []
for epoch in range(1, epochs):
    ...  # loop body omitted in the original post
```
But the following error occurs:

    terminate called after throwing an instance of 'std::invalid_argument'
      what():  tensor --> numpy device not supported, toCPU() first.
    Aborted (core dumped)
Where did this go wrong? Thank you!
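The abort comes from converting a GPU-resident QTensor straight to numpy. A hedged sketch of the failing call and the fix the error text itself suggests, assuming `toCPU()` returns a host-side QTensor on which `to_numpy()` is valid:

```python
loss.to_numpy()            # fails on a GPU tensor: "device not supported"
loss.toCPU().to_numpy()    # copy to host memory first, then convert
```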