
Given groups=1, weight of size [16, 16, 3, 3], expected input[16, 64, 222, 222] to have 16 channels, but got 64 channels instead?


I am trying to run the following image-classification program in PyTorch. I am new to PyTorch and I can't see what is wrong with the code. I tried reshaping the images, but that did not help. I am running this code with CUDA. I have around 750 classes with 10-20 images per class. My dataset is a benchmark dataset, and every image has a size of 60×160.

import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from tqdm import tqdm

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.ConvLayer1 = nn.Sequential(
            nn.Conv2d(3, 64, 3),   # inp (3, 512, 512), changed here; original (3, 64, 3)
            nn.Conv2d(8, 16, 3),   # original (8, 16, 3)
            nn.MaxPool2d(2),
            nn.ReLU()              # op (16, 256, 256)
        )
        self.ConvLayer2 = nn.Sequential(
            nn.Conv2d(16, 32, 5),  # inp (16, 256, 256)
            nn.Conv2d(32, 32, 3),
            nn.MaxPool2d(4),
            nn.ReLU()              # op (32, 64, 64)
        )
        self.ConvLayer3 = nn.Sequential(
            nn.Conv2d(32, 64, 3),  # inp (32, 64, 64); original (32, 64, 3)
            nn.Conv2d(64, 64, 5),
            nn.MaxPool2d(2),
            nn.ReLU()              # op (64, 32, 32)
        )
        self.ConvLayer4 = nn.Sequential(
            nn.Conv2d(64, 128, 5), # inp (64, 32, 32)
            nn.Conv2d(128, 128, 3),
            nn.MaxPool2d(2),
            nn.ReLU()              # op (128, 16, 16)
        )
        self.Lin1 = nn.Linear(15488, 15)
        self.Lin2 = nn.Linear(1500, 150)  # defined but unused in forward
        self.Lin3 = nn.Linear(150, 15)    # defined but unused in forward

    def forward(self, x):
        x = self.ConvLayer1(x)
        x = self.ConvLayer2(x)
        x = self.ConvLayer3(x)
        x = self.ConvLayer4(x)
        x = x.view(x.size(0), -1)  # flatten all but the batch dimension
        x = self.Lin1(x)
        return F.log_softmax(x, dim=1)
model = Net()
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=0.0001, momentum=0.5)

for epoch in tqdm(range(2)):  # loop over the dataset multiple times
    running_loss = 0.0
    for i, data in enumerate(dataloaders['train']):
        # get the inputs; data is a list of [inputs, labels]
        inputs, class_names = data
        # zero the parameter gradients
        optimizer.zero_grad()
        # forward + backward + optimize
        outputs = model(inputs)
        loss = criterion(outputs, class_names)
        loss.backward()
        optimizer.step()
        # print statistics
        running_loss += loss.item()
        # if i % 10 == 0:  # print every 10 mini-batches
        #     print('[%d, %5d] loss: %.3f' %
        #           (epoch + 1, i + 1, running_loss / 2000))
        #     running_loss = 0.0
    break
print('Finished Training')

I am getting this error and I don't know where to make changes: Given groups=1, weight of size [16, 16, 3, 3], expected input[16, 64, 222, 222] to have 16 channels, but got 64 channels instead.
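One way to localize this kind of mismatch is to push a dummy batch through the model's blocks one at a time and print the shape after each: the first block that raises is the one whose channel counts disagree. A minimal sketch, assuming the Net class from the question and the 16×3×224×224 input implied by the error message (224 is inferred from the 222×222 in the error, not stated in the question):

import torch

model = Net()
# dummy batch of 16 RGB images; 224x224 is an assumption inferred from the error
x = torch.randn(16, 3, 224, 224)
for name in ["ConvLayer1", "ConvLayer2", "ConvLayer3", "ConvLayer4"]:
    x = getattr(model, name)(x)  # raises inside the first block with mismatched channels
    print(name, "->", tuple(x.shape))

With the model as posted, this raises inside ConvLayer1, which is exactly where the channel counts break.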

The number of output channels in a conv layer needs to match the number of input channels in the next conv layer. Say you have nn.Conv2d(3, 64, 3); then the next conv layer needs to begin nn.Conv2d(64, ...). Right now the issue is that you're trying to pass a 64-channel result into a conv layer which you've defined to expect an 8-channel input.
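For the posted model that means changing the second conv of ConvLayer1 so its in_channels matches the 64 channels the first conv produces. A minimal sketch of the corrected block (shown standalone here; drop it back into __init__ as self.ConvLayer1):

import torch.nn as nn

ConvLayer1 = nn.Sequential(
    nn.Conv2d(3, 64, 3),   # (3, H, W) -> (64, H-2, W-2)
    nn.Conv2d(64, 16, 3),  # was nn.Conv2d(8, 16, 3); the incoming tensor has 64 channels
    nn.MaxPool2d(2),
    nn.ReLU()
)

Note that this only fixes the channel chain. Once the model runs end to end, you will still need to confirm that Lin1's in_features (15488) equals the flattened size ConvLayer4 actually produces, since that value depends on the input resolution you feed in.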
