
How to train the model on a GPU? #32

Open
lingjiajie opened this issue Dec 3, 2023 · 6 comments

lingjiajie commented Dec 3, 2023

Hello!

My code is based on the example "Hybrid quantum-classical neural network model", with the following modifications:

```python
x_train, y_train, x_test, y_test = data_select(1000, 100)
model = Net().toGPU()

optimizer = Adam(model.parameters(), lr=0.005)
# cross-entropy loss for the classification task
loss_func = CategoricalCrossEntropy()

# number of training epochs
epochs = 10
train_loss_list = []
val_loss_list = []
train_acc_list = []
val_acc_list = []

for epoch in range(1, epochs):
    total_loss = []
    model.train()
    batch_size = 1
    correct = 0
    n_train = 0

    for x, y in data_generator(x_train, y_train, batch_size=1, shuffle=True):
        x = x.reshape(-1, 1, 28, 28)
        x = QTensor(x)
        y = QTensor(y)
        x = x.to_gpu(DEV_GPU_0)
        y = y.to_gpu(DEV_GPU_0)
        optimizer.zero_grad()
        output = model(x)
        loss = loss_func(y, output)
        loss_np = np.array(loss.data)
        np_output = np.array(output.data, copy=False)
        mask = (np_output.argmax(1) == y.argmax(1))
        correct += np.sum(np.array(mask))
        n_train += batch_size
        loss.backward()
        optimizer._step()
        total_loss.append(loss_np)

    train_loss_list.append(np.sum(total_loss) / len(total_loss))
    train_acc_list.append(np.sum(correct) / n_train)
    print("{:.0f} loss is : {:.10f}".format(epoch, train_loss_list[-1]))
```
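The accuracy bookkeeping in this loop can be sanity-checked in isolation with plain NumPy (this sketch is independent of VQNet; the arrays are made-up examples):

```python
import numpy as np

# Made-up logits for 4 samples, 2 classes, and the matching one-hot labels.
np_output = np.array([[0.9, 0.1],
                      [0.2, 0.8],
                      [0.4, 0.6],   # mispredicted: true class is 0
                      [0.3, 0.7]])
y = np.array([[1, 0],
              [0, 1],
              [1, 0],
              [0, 1]])

# Same pattern as the loop above: compare predicted and true class indices.
mask = (np_output.argmax(1) == y.argmax(1))
correct = int(np.sum(mask))
accuracy = correct / len(y)
print(correct, accuracy)  # 3 0.75
```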

But the following error occurs:

```
terminate called after throwing an instance of 'std::invalid_argument'
  what(): tensor --> numpy device not supported, toCPU() first.
Aborted (core dumped)
```

Where is the problem here? Thank you!

Contributor

kevzos commented Dec 4, 2023

```python
    loss = loss_func(y, output)
    loss_np = np.array(loss.data)
    np_output = np.array(output.data, copy=False)
```

`loss` is a QTensor. Use `loss.to_numpy()` to convert it to numpy.
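For intuition about the error message itself ("tensor --> numpy device not supported, toCPU() first"): NumPy arrays live in host memory, so a tensor whose data sits on the GPU cannot be viewed as a NumPy array directly; it has to be copied back to the CPU first. The toy class below (plain Python, not the VQNet API) mimics that contract:

```python
import numpy as np

class ToyTensor:
    """Toy stand-in (not VQNet's QTensor) with a device tag."""
    def __init__(self, data, device="cpu"):
        self.data = np.asarray(data)
        self.device = device

    def to_gpu(self):
        return ToyTensor(self.data, device="gpu")

    def to_cpu(self):
        return ToyTensor(self.data, device="cpu")

    def to_numpy(self):
        # Mirrors the library error: numpy buffers must live in host memory.
        if self.device != "cpu":
            raise ValueError("tensor --> numpy device not supported, toCPU() first")
        return self.data

loss = ToyTensor([0.25]).to_gpu()
try:
    loss.to_numpy()                   # raises: data is on the GPU
except ValueError as e:
    print(e)
loss_np = loss.to_cpu().to_numpy()    # copy back to host first, then convert
print(loss_np)
```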

@lingjiajie
Author

That is not where the problem is; I believe the issue lies elsewhere.

The model code is as follows:
```python
class Net(Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = Conv2D(input_channels=1, output_channels=6, kernel_size=(5, 5), stride=(1, 1), padding="valid")
        self.maxpool1 = MaxPool2D([2, 2], [2, 2], padding="valid")
        self.conv2 = Conv2D(input_channels=6, output_channels=16, kernel_size=(5, 5), stride=(1, 1), padding="valid")
        self.maxpool2 = MaxPool2D([2, 2], [2, 2], padding="valid")
        self.fc1 = Linear(input_channels=256, output_channels=64)
        self.fc2 = Linear(input_channels=64, output_channels=1)
        self.hybrid = Hybrid(np.pi / 2)
        self.fc3 = Linear(input_channels=1, output_channels=2)

    def forward(self, x):
        x = F.ReLu()(self.conv1(x))  # 1 6 24 24
        x = self.maxpool1(x)
        x = F.ReLu()(self.conv2(x))  # 1 16 8 8
        x = self.maxpool2(x)
        x = tensor.flatten(x, 1)   # 1 256
        x = F.ReLu()(self.fc1(x))  # 1 64
        x = self.fc2(x)    # 1 1
        x = self.hybrid(x)
        x = self.fc3(x)
        return x

model = Net().toGPU()
```
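The shape comments in `forward` can be verified with simple arithmetic, assuming "valid" padding, stride-1 convolutions, and non-overlapping 2x2 pooling:

```python
# Shape bookkeeping for the Net above.
def conv_out(n, k):
    """Output size of a 'valid', stride-1 convolution with kernel size k."""
    return n - k + 1

def pool_out(n, p):
    """Output size of non-overlapping p x p pooling."""
    return n // p

h = 28
h = conv_out(h, 5)   # conv1:    28 -> 24
h = pool_out(h, 2)   # maxpool1: 24 -> 12
h = conv_out(h, 5)   # conv2:    12 -> 8
h = pool_out(h, 2)   # maxpool2:  8 -> 4
flat = 16 * h * h    # 16 channels * 4 * 4 = 256, matching fc1's input_channels
print(h, flat)
```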

Contributor

kevzos commented Dec 6, 2023

@lingjiajie Could you attach your complete code? I'll take a look.

Contributor

kevzos commented Dec 6, 2023

@lingjiajie You are right; that example cannot simply be changed into a GPU version.
Here is GPU code for that example; it only supports batch=1:
hybird_gpu_b1.txt
You can also use the QuantumLayerV2 interface, which is better encapsulated and supports multiple devices and batch sizes:
hybird_gpu_qlayer.txt


@lingjiajie
Author

Thank you very much!

Your reply solved my problem.

Thanks again for your help!

Contributor

kevzos commented Dec 7, 2023

@lingjiajie If this solved your problem, please close this issue.


2 participants