Running out of memory during evaluation in PyTorch

I am training a model in PyTorch. Every 10 epochs, I evaluate the training and test error on the entire training and test datasets. For some reason the evaluation function causes an out-of-memory error on my GPU. This is strange because I use the same batch size for training and evaluation. I believe it is because the net.forward() method gets called repeatedly and keeps all of the hidden activations in memory, but I am not sure how to work around this.

def evaluate(self, data):
    correct = 0
    total = 0
    loader = self.train_loader if data == "train" else self.test_loader
    for step, (story, question, answer) in enumerate(loader):
        story = Variable(story)
        question = Variable(question)
        answer = Variable(answer)
        _, answer = torch.max(answer, 1)  # one-hot target -> class index

        if self.config.cuda:
            story = story.cuda()
            question = question.cuda()
            answer = answer.cuda()

        pred_prob = self.mem_n2n(story, question)[0]
        _, output_max_index = torch.max(pred_prob, 1)  # predicted class index
        toadd = (answer == output_max_index).float().sum().data[0]
        correct = correct + toadd
        total = total + answer.size(0)  # was: captions.size(0) -- captions is undefined here

    acc = correct / total
    return acc
1 answer

I would suggest setting the volatile flag to True for all variables used during evaluation:

    story = Variable(story, volatile=True)
    question = Variable(question, volatile=True)
    answer = Variable(answer, volatile=True)
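As an aside: in PyTorch 0.4 and later the volatile flag is deprecated and has no effect; the equivalent there is to run the forward pass inside the torch.no_grad() context manager, which likewise prevents the operation history from being recorded. A minimal sketch, reusing the mem_n2n call from the question:

    with torch.no_grad():
        pred_prob = self.mem_n2n(story, question)[0]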

With volatile=True, no gradients or operation history are stored, which saves a lot of memory. You can also delete the references to those variables at the end of each batch, so that the memory they hold can be freed before the next one:

del story, question, answer, pred_prob
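Note that del only drops the Python references; the CUDA memory itself is reclaimed once nothing else, in particular the stored operation history, still points at those tensors, which is why it is most effective combined with volatile=True. If you additionally want the cached blocks returned to the driver (for example, to share the GPU with another process), torch.cuda.empty_cache() does that, but it is not required to avoid out-of-memory errors within a single process.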

Also, don't forget to switch the model to evaluation mode before running the evaluation, for example:

model.eval()
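Note that model.eval() only switches layers such as dropout and batch normalization to their evaluation behaviour; it does not disable gradient tracking by itself, so it complements volatile (or torch.no_grad()) rather than replacing it. For completeness, here is a minimal sketch of the whole evaluation loop in the modern (PyTorch >= 0.4) idiom; the attribute names self.mem_n2n, self.train_loader, self.test_loader and self.config.cuda are taken from the question:

    def evaluate(self, data):
        self.mem_n2n.eval()              # dropout/batchnorm -> eval behaviour
        correct, total = 0, 0
        loader = self.train_loader if data == "train" else self.test_loader
        with torch.no_grad():            # no graph is built, activations are freed
            for story, question, answer in loader:
                _, answer = torch.max(answer, 1)  # one-hot target -> class index
                if self.config.cuda:
                    story, question, answer = story.cuda(), question.cuda(), answer.cuda()
                pred_prob = self.mem_n2n(story, question)[0]
                _, output_max_index = torch.max(pred_prob, 1)
                correct += (answer == output_max_index).sum().item()
                total += answer.size(0)
        self.mem_n2n.train()             # restore training mode for the caller
        return correct / total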
