When & where to transfer a tensor to CUDA?


Answers (1)

manpreet · Best Answer · 3 years ago

I transferred the output tensor to the GPU at different positions in my code (before or after modifying its values in place) but got different results. What's the reason?

The failing code can be simplified as:

def Network(self):
    ........
    A = self.model(input)                 # A lives on the GPU when the model does
    indexlist = self.indexlist
    output = torch.zeros(A.size(0))       # output starts on the CPU
    for i, li in enumerate(indexlist):
        if li:
            s, e = li
            output[i] += A[i, s:e].sum()  # in-place indexed write mixes CPU and GPU tensors
    # transfer to the GPU only after the in-place writes
    # (non_blocking replaces the old async keyword argument)
    output = output if self.no_cuda else output.cuda(device=self.gpu, non_blocking=True)
    return output

pred = Network()
loss = F.nll_loss(pred, target)
loss.backward()

And I get the RuntimeError: Function torch::autograd::CopySlices returned an invalid gradient at index 1 - expected type torch.cuda.FloatTensor but got torch.FloatTensor

If I move that one line so the transfer happens before the loop, it runs normally:

def Network(self):
    ........
    A = self.model(input)
    indexlist = self.indexlist
    output = torch.zeros(A.size(0))
    # transfer to the GPU *before* the in-place writes, so output and A
    # live on the same device for every operation autograd records
    output = output if self.no_cuda else output.cuda(device=self.gpu, non_blocking=True)
    for i, li in enumerate(indexlist):
        if li:
            s, e = li
            output[i] += A[i, s:e].sum()
    return output

pred = Network()
loss = F.nll_loss(pred, target)
loss.backward()
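
Presumably the difference is that the in-place indexed additions are recorded by autograd (the CopySlices node named in the error), and they only behave when output already lives on the same device as A. As a sanity check, here is a minimal, self-contained sketch of the working pattern on a recent PyTorch build; the shapes, the index list, and the dummy .sum() loss are made up for illustration, and torch.zeros(..., device=A.device) is just another way of moving the accumulator to the GPU before the loop:

import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

A = torch.randn(4, 10, device=device, requires_grad=True)   # stand-in for self.model(input)
indexlist = [(0, 3), None, (2, 7), (5, 10)]                  # made-up index list

# Create the accumulator on the same device as A *before* any in-place writes.
output = torch.zeros(A.size(0), device=device)
for i, li in enumerate(indexlist):
    if li:
        s, e = li
        output[i] += A[i, s:e].sum()

output.sum().backward()      # dummy loss in place of F.nll_loss(pred, target)
print(A.grad.device)         # gradients arrive on the same device as A

Creating output on A's device (or calling .cuda()/.to(A.device) right after torch.zeros) before the loop keeps every tensor autograd records on one device, which is what the second, working version of Network does.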