concat network
2020.08.27 00:19
https://discuss.pytorch.org/t/concatenate-layer-output-with-additional-input-data/20462
3 comments
-
WHRIA
2020.09.05 11:08
-
WHRIA
2020.09.05 11:17
import torch
import torch.nn as nn
import torch.nn.functional as F

class MyModelA(nn.Module):
    def __init__(self):
        super(MyModelA, self).__init__()
        self.fc1 = nn.Linear(10, 2)

    def forward(self, x):
        x = self.fc1(x)
        return x

class MyModelB(nn.Module):
    def __init__(self):
        super(MyModelB, self).__init__()
        self.fc1 = nn.Linear(20, 2)

    def forward(self, x):
        x = self.fc1(x)
        return x

class MyEnsemble(nn.Module):
    def __init__(self, modelA, modelB):
        super(MyEnsemble, self).__init__()
        self.modelA = modelA
        self.modelB = modelB
        self.classifier = nn.Linear(4, 2)

    def forward(self, x1, x2):
        x1 = self.modelA(x1)
        x2 = self.modelB(x2)
        # Concatenate the two 2-dim outputs along the feature dimension -> (N, 4)
        x = torch.cat((x1, x2), dim=1)
        x = self.classifier(F.relu(x))
        return x

# Create models and load state_dicts
modelA = MyModelA()
modelB = MyModelB()

# Load state dicts
modelA.load_state_dict(torch.load(PATH))
modelB.load_state_dict(torch.load(PATH))

model = MyEnsemble(modelA, modelB)
x1, x2 = torch.randn(1, 10), torch.randn(1, 20)
output = model(x1, x2)
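The thread linked above is about concatenating a layer's output with additional input data, not just two model outputs. A minimal sketch of that variant, where raw metadata is concatenated with a hidden-layer activation before the classifier (the class name and feature sizes here are illustrative, not from the post):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureWithMetadata(nn.Module):
    """Concatenate a hidden-layer output with additional input data."""
    def __init__(self, in_features=10, hidden=8, meta_features=3, out_features=2):
        super().__init__()
        self.fc1 = nn.Linear(in_features, hidden)
        # The classifier sees the hidden features plus the raw metadata
        self.fc2 = nn.Linear(hidden + meta_features, out_features)

    def forward(self, x, meta):
        h = F.relu(self.fc1(x))
        h = torch.cat((h, meta), dim=1)  # shape (N, hidden + meta_features)
        return self.fc2(h)

model = FeatureWithMetadata()
x, meta = torch.randn(4, 10), torch.randn(4, 3)
out = model(x, meta)
print(out.shape)  # torch.Size([4, 2])
```

The only requirement for torch.cat along dim=1 is that both tensors share the batch dimension; the metadata can also be passed through its own small Linear first, as in the two-model ensemble above.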
-
WHRIA
2020.10.08 13:08
https://gist.github.com/andrewjong/6b02ff237533b3b2c554701fb53d5c4d
https://discuss.pytorch.org/t/combining-trained-models-in-pytorch/28383/2
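When combining already-trained models as in the ensemble snippet, a common follow-up in threads like this one is to freeze the pretrained submodels so that only the new classifier head trains. A hedged sketch (the nn.Linear stand-ins and sizes are illustrative, not the actual trained models):

```python
import torch
import torch.nn as nn

# Stand-ins for the two pretrained submodels (illustrative sizes)
modelA = nn.Linear(10, 2)
modelB = nn.Linear(20, 2)

# Freeze the pretrained parts: no gradients flow into their weights
for p in list(modelA.parameters()) + list(modelB.parameters()):
    p.requires_grad = False

classifier = nn.Linear(4, 2)  # the new trainable head

# Pass only trainable parameters to the optimizer
optimizer = torch.optim.SGD(
    (p for p in classifier.parameters() if p.requires_grad), lr=0.1
)

frozen = sum(1 for p in modelA.parameters() if p.requires_grad)
print(frozen)  # 0
```

Setting requires_grad = False both saves memory during backward and guarantees the loaded weights stay untouched while the head is fit.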