Do a scalability test before launching a series of calculations
For HPC software, scalability (also referred to as parallel efficiency) describes how much speedup you gain when putting more computing resources into a job.
The scalability of a given piece of software depends on the nature of your calculations (the size of the system, the algorithms involved, etc.). Blindly increasing the number of nodes does not necessarily deliver the speedup you expect: as the number of nodes grows, the overhead of inter-node communication becomes more and more dominant, and the computing power of each node is used less and less efficiently.
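The two quantities usually measured in a scalability test are speedup and parallel efficiency. The short Python sketch below shows the standard definitions; the function names are only illustrative:

```python
def speedup(t_1node: float, t_n_nodes: float) -> float:
    """Speedup S(n) = T(1 node) / T(n nodes)."""
    return t_1node / t_n_nodes

def parallel_efficiency(t_1node: float, t_n_nodes: float, n: int) -> float:
    """Parallel efficiency E(n) = S(n) / n; 1.0 means perfect linear scaling."""
    return speedup(t_1node, t_n_nodes) / n
```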
Taking a Quantum ESPRESSO job (5 atoms, 15 k-points) as an example: it takes 286 seconds on 1 node (32 cores). Assuming linear scaling, you might expect a time-to-solution of ~143 seconds on 2 nodes (64 cores), but the job actually takes 246 seconds.
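Plugging the timings above into the definitions makes the loss to communication overhead explicit (a minimal, self-contained sketch using only the numbers from this example):

```python
t_1node = 286.0   # seconds on 1 node (32 cores)
t_2nodes = 246.0  # seconds on 2 nodes (64 cores)

speedup = t_1node / t_2nodes   # ~1.16x instead of the ideal 2x
efficiency = speedup / 2       # ~58% parallel efficiency on 2 nodes

print(f"speedup:    {speedup:.2f}x")
print(f"efficiency: {efficiency:.0%}")
```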
Therefore, before launching a series of calculations of similar size, please run a scalability test to find the optimal number of nodes, so that the computing resources are used more efficiently.
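One simple way to carry out such a test is to run the same representative job on an increasing number of nodes, record the wall times, and keep the largest node count whose parallel efficiency stays above a threshold you are comfortable with. The sketch below assumes a dictionary of measured wall times and a 70% efficiency threshold; both the 4-node timing and the threshold are only illustrative placeholders, not measured values:

```python
# Wall times (seconds) measured for one representative job; the 1- and 2-node
# values are from the example above, the 4-node value is hypothetical.
wall_time = {1: 286.0, 2: 246.0, 4: 230.0}

MIN_EFFICIENCY = 0.70  # illustrative threshold; choose what suits your allocation

t_1node = wall_time[1]
optimal_nodes = 1
for nodes in sorted(wall_time):
    efficiency = (t_1node / wall_time[nodes]) / nodes
    print(f"{nodes} node(s): parallel efficiency = {efficiency:.0%}")
    if efficiency >= MIN_EFFICIENCY:
        optimal_nodes = nodes

print(f"Optimal node count for this job size: {optimal_nodes}")
```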