
pytorch mini batch size

2019.12.19 06:12

WHRIA · Views: 82

https://stackoverflow.com/questions/52518324/how-to-compensate-if-i-cant-do-a-large-batch-size-in-neural-network/52523847

 

 


In PyTorch, when you perform the backward step (by calling loss.backward() or similar), the gradients are accumulated in place. This means that if you call loss.backward() multiple times, the previously computed gradients are not replaced; instead, the new gradients are added onto the previous ones. That is why, when using PyTorch, you usually need to explicitly zero the gradients between minibatches (by calling optimiser.zero_grad() or similar).
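A minimal sketch of this accumulation behaviour (the scalar tensor and the trivial loss here are purely illustrative, not part of the original answer):

import torch

# One trainable scalar with a trivial loss: loss = x, so d(loss)/dx = 1.
x = torch.tensor(1.0, requires_grad=True)

loss = x * 1.0
loss.backward()
print(x.grad)   # tensor(1.) -- gradient from the first backward pass

loss = x * 1.0
loss.backward()
print(x.grad)   # tensor(2.) -- the new gradient was added, not substituted

x.grad.zero_()  # this is what optimiser.zero_grad() does for each parameter
print(x.grad)   # tensor(0.)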

If your batch size is limited, you can simulate a larger batch size by breaking a large batch up into smaller pieces, and only calling optimiser.step() to update the model parameters after all the pieces have been processed.

For example, suppose you are only able to do batches of size 64, but you wish to simulate a batch size of 128. If the original training loop looks like:

optimiser.zero_grad()
loss = model(batch_data)  # batch_data is a batch of size 128; the model here is assumed to return the loss directly
loss.backward()
optimiser.step()

then you could change this to:

optimiser.zero_grad()

smaller_batches = batch_data[:64], batch_data[64:128]
for batch in smaller_batches:
    loss = model(batch) / 2  # divide by the number of pieces so the summed gradients match
    loss.backward()          # gradients from each piece accumulate in place

optimiser.step()  # a single parameter update after every piece has been processed

and the updates to the model parameters would be the same in each case (apart perhaps from some small floating-point error). Note that you have to rescale the loss to make the update the same: the gradients from the two pieces are summed, so dividing each piece's loss by 2 makes the accumulated gradient equal to the gradient of the mean loss over the full batch of 128.
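As a follow-up, here is a self-contained sketch that generalises the trick to any number of accumulation steps; the toy model, random data, and the accumulation_steps value are assumptions for illustration, not part of the original answer:

import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy setup: a linear regression model and random data standing in for a real task.
model = nn.Linear(10, 1)
criterion = nn.MSELoss()
optimiser = torch.optim.SGD(model.parameters(), lr=0.1)

batch_data = torch.randn(128, 10)   # the "large" batch that does not fit in memory at once
batch_targets = torch.randn(128, 1)

accumulation_steps = 2              # 128 / 64 = 2 pieces of size 64
piece_size = batch_data.size(0) // accumulation_steps

optimiser.zero_grad()
for i in range(accumulation_steps):
    start, end = i * piece_size, (i + 1) * piece_size
    outputs = model(batch_data[start:end])
    # Divide by the number of pieces so the accumulated gradient equals
    # the gradient of the mean loss over the full batch of 128.
    loss = criterion(outputs, batch_targets[start:end]) / accumulation_steps
    loss.backward()                 # gradients accumulate across the pieces

optimiser.step()                    # one parameter update for the whole simulated batch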

