CPU Memory and Cores
Request CPU Memory
- Standard Partition (no GPU): you will get 9GB RAM per core
- Interactive Partition: you will get 6GB RAM per core
- GPU Partition: you can specify how much RAM you want
You should have enough RAM to comfortably work with your GPU. In other words, request at least as much CPU memory as the memory of the GPU you select.
- If you select multiple GPUs, request as much RAM as the memory of the largest GPU you selected.
- If you are using a large dataset and/or want to do extensive preprocessing, more RAM is probably helpful.
- How much more? Depends! You can experiment and check your memory efficiency.
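As a concrete illustration, a Slurm batch script might request CPU memory to match the GPU. The partition name, GPU type, and core count below are hypothetical; substitute your cluster's actual values:

```shell
#!/bin/bash
#SBATCH --partition=gpu          # hypothetical partition name
#SBATCH --gres=gpu:a100:1        # one A100 GPU with 80 GB of GPU memory
#SBATCH --mem=80G                # at least as much CPU RAM as the GPU has memory
#SBATCH --cpus-per-task=8        # a starting guess; adjust after checking efficiency
```

After the job finishes, `seff <jobid>` (if available on your cluster) reports the job's CPU and memory efficiency, which you can use to tune these numbers.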
Visit this Deep Learning Hardware Guide for more information.
Request CPU Cores
How many CPU cores to request depends on your workload! Generally, make your best guess to start. Then check the CPU and GPU efficiency of your script and adjust from there.
- Are you doing any data preprocessing on the CPU prior to training the network on the GPU?
- Is the preprocessing code serial or parallel?
- NOTE: Even if your code is written as a serial program, NumPy automatically uses multiple cores for linear algebra operations!
- Are you using a single core or multiple cores to load data from the CPU to the GPU during training?
- Use enough CPU cores to keep the GPU busy
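Because NumPy's linear algebra routines use multiple threads via the underlying BLAS library, it can help to cap the thread count to match the cores you requested. A minimal sketch, assuming a hypothetical request of 4 cores (the environment variables must be set before NumPy is imported):

```python
# Cap BLAS thread pools to match the requested core count (4 is a
# hypothetical value; use your actual --cpus-per-task).
# These must be set BEFORE the first `import numpy`.
import os
os.environ["OMP_NUM_THREADS"] = "4"
os.environ["OPENBLAS_NUM_THREADS"] = "4"
os.environ["MKL_NUM_THREADS"] = "4"

import numpy as np

a = np.random.rand(2000, 2000)
b = a @ a  # this matrix multiply runs on up to 4 threads
```

Without this cap, NumPy may try to use every core on the node, which oversubscribes the CPUs when your job was only allocated a few of them.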
PyTorch
- PyTorch’s `DataLoader` has a `num_workers` parameter, which sets the number of worker processes used for data loading.
- The default is `num_workers=0`, which loads data in the main process and may not load data fast enough to keep the GPU busy.
- Try increasing `num_workers` to improve GPU efficiency and speed up DL code.
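A minimal sketch of the idea, using a toy in-memory dataset as a stand-in for real training data (the batch size and worker count here are illustrative):

```python
# Assumes PyTorch is installed. The dataset is a toy stand-in.
import torch
from torch.utils.data import DataLoader, TensorDataset

features = torch.randn(1024, 16)
labels = torch.randint(0, 2, (1024,))
dataset = TensorDataset(features, labels)

# num_workers=0 (the default) loads batches in the main process;
# num_workers=4 spawns 4 worker processes to load batches in parallel.
loader = DataLoader(dataset, batch_size=64, num_workers=4)

for batch_features, batch_labels in loader:
    pass  # training step would go here
```

A common starting point is to set `num_workers` to roughly the number of CPU cores you requested, then check GPU utilization and adjust.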
Keras
- Keras will use multiple cores for data loading automatically
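With TensorFlow-backed Keras, parallel loading is typically expressed through a `tf.data` input pipeline, where `num_parallel_calls=tf.data.AUTOTUNE` lets TensorFlow spread preprocessing across the available cores. A hedged sketch, with a placeholder preprocessing function standing in for real work:

```python
# Assumes TensorFlow is installed. `preprocess` is a hypothetical
# stand-in for real per-example preprocessing.
import tensorflow as tf

def preprocess(x):
    return x * 2.0  # placeholder for real preprocessing

dataset = (
    tf.data.Dataset.from_tensor_slices(tf.range(1000, dtype=tf.float32))
    .map(preprocess, num_parallel_calls=tf.data.AUTOTUNE)  # parallel across cores
    .batch(32)
    .prefetch(tf.data.AUTOTUNE)  # overlap loading with GPU training
)
```

A pipeline like this can then be passed directly to `model.fit(dataset, ...)`.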