concat network
2020.08.27 00:19
https://discuss.pytorch.org/t/concatenate-layer-output-with-additional-input-data/20462
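The linked thread is about concatenating a layer's output with additional input data (e.g. tabular metadata alongside image features). A minimal sketch of that pattern, where the branch sizes, names, and the tiny conv stem are my own illustration rather than code from the thread:

import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical two-branch model: pooled image features are concatenated
# with a small vector of extra inputs before the final classifier.
class ConcatNet(nn.Module):
    def __init__(self, meta_dim=5, num_classes=2):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, kernel_size=3, padding=1)
        self.pool = nn.AdaptiveAvgPool2d(1)          # -> (N, 8, 1, 1)
        self.fc = nn.Linear(8 + meta_dim, num_classes)

    def forward(self, image, meta):
        x = self.pool(F.relu(self.conv(image))).flatten(1)  # (N, 8)
        x = torch.cat((x, meta), dim=1)                     # (N, 8 + meta_dim)
        return self.fc(x)

model = ConcatNet()
out = model(torch.randn(4, 3, 32, 32), torch.randn(4, 5))  # -> (4, 2)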
3 comments
-
WHRIA
2020.09.05 11:08
-
WHRIA
2020.09.05 11:17
import torch
import torch.nn as nn
import torch.nn.functional as F

class MyModelA(nn.Module):
    def __init__(self):
        super(MyModelA, self).__init__()
        self.fc1 = nn.Linear(10, 2)

    def forward(self, x):
        x = self.fc1(x)
        return x


class MyModelB(nn.Module):
    def __init__(self):
        super(MyModelB, self).__init__()
        self.fc1 = nn.Linear(20, 2)

    def forward(self, x):
        x = self.fc1(x)
        return x


class MyEnsemble(nn.Module):
    def __init__(self, modelA, modelB):
        super(MyEnsemble, self).__init__()
        self.modelA = modelA
        self.modelB = modelB
        self.classifier = nn.Linear(4, 2)

    def forward(self, x1, x2):
        # Run each branch, concatenate the two 2-dim outputs into a
        # 4-dim feature, and classify the combined vector.
        x1 = self.modelA(x1)
        x2 = self.modelB(x2)
        x = torch.cat((x1, x2), dim=1)
        x = self.classifier(F.relu(x))
        return x


# Create models and load state_dicts
modelA = MyModelA()
modelB = MyModelB()
modelA.load_state_dict(torch.load(PATH))
modelB.load_state_dict(torch.load(PATH))

model = MyEnsemble(modelA, modelB)
x1, x2 = torch.randn(1, 10), torch.randn(1, 20)
output = model(x1, x2)
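If the two pretrained branches should stay fixed and only the new classifier should train, a common follow-up (my assumption; the snippet above does not do this) is to freeze their parameters and give the optimizer only the classifier's weights:

for param in model.modelA.parameters():
    param.requires_grad = False
for param in model.modelB.parameters():
    param.requires_grad = False

# Only the new classifier head remains trainable.
optimizer = torch.optim.SGD(model.classifier.parameters(), lr=1e-3)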
-
WHRIA
2020.10.08 13:08
https://gist.github.com/andrewjong/6b02ff237533b3b2c554701fb53d5c4d
https://discuss.pytorch.org/t/combining-trained-models-in-pytorch/28383/2
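This second thread covers the related pattern of combining already-trained models at the feature level. A minimal sketch under my own assumptions (torchvision ResNet backbones whose final fc layer is swapped for nn.Identity so their penultimate features can be concatenated):

import torch
import torch.nn as nn
from torchvision import models

# Hypothetical feature-level ensemble: strip each backbone's head and
# classify the concatenated penultimate features.
class FeatureEnsemble(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        self.backboneA = models.resnet18(pretrained=False)
        self.backboneB = models.resnet34(pretrained=False)
        feat_dim = (self.backboneA.fc.in_features
                    + self.backboneB.fc.in_features)      # 512 + 512
        self.backboneA.fc = nn.Identity()  # expose 512-dim features
        self.backboneB.fc = nn.Identity()  # expose 512-dim features
        self.classifier = nn.Linear(feat_dim, num_classes)

    def forward(self, x):
        f = torch.cat((self.backboneA(x), self.backboneB(x)), dim=1)
        return self.classifier(f)

model = FeatureEnsemble()
out = model(torch.randn(1, 3, 224, 224))  # -> shape (1, 2)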