Common issues/gotchas

Below we list common issues and gotchas encountered when using SuperScreen, along with the recommended solutions. If you encounter an issue not listed here, or the recommended solution does not work for you, please open an issue on GitHub.
`Device.make_mesh()` hangs for a `superscreen.Device` with transport terminals.

Tip: In order to correctly set the boundary conditions for films with transport terminals (see Terminal currents), SuperScreen generates a mesh for such films in which the boundary vertices of the mesh are exactly the same as the boundary vertices of the associated `superscreen.Polygon`. As a result, `Device.make_mesh()` can hang if the distance between vertices in the film `superscreen.Polygon` is greater than the `max_edge_length` requested in `Device.make_mesh()`. To fix this, simply increase the number of points in the `superscreen.Polygon` defining the film. For example, to double the number of points in the polygon, run:

```python
polygon.points = polygon.resample(2 * len(polygon.points))
```
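As a quick sanity check before meshing, you can compare the spacing of the polygon's boundary vertices against the `max_edge_length` you intend to pass to `Device.make_mesh()`. Below is a minimal sketch using NumPy, with a hypothetical square outline standing in for the `points` of a real film `superscreen.Polygon`:

```python
import numpy as np

# Hypothetical closed polygon outline (a unit square), standing in for
# polygon.points of a superscreen.Polygon.
points = np.array(
    [[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0], [0.0, 0.0]]
)

# The value you plan to request in Device.make_mesh() (assumed for illustration).
max_edge_length = 0.6

# Distance between consecutive boundary vertices.
edge_lengths = np.linalg.norm(np.diff(points, axis=0), axis=1)

# If any boundary edge is longer than max_edge_length, meshing may hang,
# so the polygon should be resampled to add more points first.
needs_resampling = bool(edge_lengths.max() > max_edge_length)
print(needs_resampling)
```

If `needs_resampling` is `True`, resample the polygon as shown above until all boundary edges are shorter than `max_edge_length`.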
Poor performance when running SuperScreen in multiple Python processes.

Tip: SuperScreen uses numba to automatically perform some computations in parallel across multiple CPUs. To avoid competition between multiple Python processes running SuperScreen, you can set the number of threads available to `numba` in each process. For example, if your computer has 8 physical CPU cores and you are running SuperScreen in 2 different Python processes, you should tell `numba` to use 8 / 2 = 4 threads in each Python process.

```python
import joblib
import numba

# Number of Python processes in which you will run SuperScreen
number_of_python_processes = 2

# Number of physical CPU cores available
physical_cpus = joblib.cpu_count(only_physical_cores=True)

# Tell numba how many threads to use in each Python process
numba.set_num_threads(int(physical_cpus / number_of_python_processes))
```
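The snippet above assumes `joblib` is installed. If it is not, a rough standard-library fallback is to halve `os.cpu_count()`, since it reports *logical* cores, which is typically twice the physical core count on machines with hyper-threading. A minimal sketch of the same division logic (the process count is an assumption for illustration):

```python
import os

# Number of Python processes you plan to run (assumed for illustration).
number_of_python_processes = 2

# os.cpu_count() counts logical cores; halving it roughly approximates
# joblib.cpu_count(only_physical_cores=True) on hyper-threaded machines.
logical_cpus = os.cpu_count() or 1
physical_cpus_estimate = max(1, logical_cpus // 2)

# Never request fewer than one thread per process.
threads_per_process = max(1, physical_cpus_estimate // number_of_python_processes)
print(threads_per_process)
```

The resulting `threads_per_process` would then be passed to `numba.set_num_threads()` as shown above.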
For more details, see Setting the Number of Threads in the `numba` documentation.

A similar problem can occur when running SuperScreen on a cluster using a scheduler such as Slurm. `numba` sets the default number of threads according to `multiprocessing.cpu_count()`, which does not know about the number of CPUs requested from the scheduler. On the other hand, `joblib.cpu_count()` does know about the number of CPUs allocated by the scheduler. Therefore, when running SuperScreen on such a cluster you should always run

```python
import joblib
import numba

numba.set_num_threads(joblib.cpu_count(only_physical_cores=True))
```

or set the environment variable `NUMBA_NUM_THREADS` prior to importing `numba`/`superscreen`.
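The environment-variable route must take effect before `numba` is first imported, since `NUMBA_NUM_THREADS` is read at import time. A minimal sketch (the thread count of 4 is an assumption for illustration; in practice you would use the value allocated by your scheduler):

```python
import os

# NUMBA_NUM_THREADS must be set before numba is first imported.
os.environ["NUMBA_NUM_THREADS"] = "4"

# Only after setting the variable should numba/superscreen be imported:
# import numba
# import superscreen
```

Equivalently, you can set the variable in the shell or in your Slurm batch script, e.g. `export NUMBA_NUM_THREADS=4`, before launching Python.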