9th international ABINIT developer workshop
20-22nd May 2019 - Louvain-la-Neuve, Belgium
Two different approaches:
Both approaches share the same codebase (AbinitInput, factory functions, AbiPy objects, TaskManager...).
The number and type of calculations matter ➝ choose the approach according to your needs.
Programmatic interface to generate input files:
The AbinitInput can also invoke ABINIT to obtain important parameters such as the k-points in the IBZ, autoparal configurations, and irreducible DFPT perturbations:
from abipy import abilab

inp = abilab.AbinitInput(structure="si.cif", pseudos="si.psp8")
inp["ecut"] = 8
"ecut" in inp
True
inp.set_vars(kptopt=1, ngkpt=[2, 2, 2],
shiftk=[0.0, 0.0, 0.0, 0.5, 0.5, 0.5] # 2 shifts in one list
);
inp.set_autokmesh(nksmall=2)
{'ngkpt': array([2, 2, 2]), 'kptopt': 1, 'nshiftk': 4, 'shiftk': array([[0.5, 0.5, 0.5], [0.5, 0. , 0. ], [0. , 0.5, 0. ], [0. , 0. , 0.5]])}
Once we have an AbinitInput, it is possible to execute Abinit to:
Methods invoking Abinit start with the abi prefix followed by a verb:
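For example (a minimal sketch; the abiget_* methods are shown in the examples below, abivalidate is assumed from the AbiPy API):

inp.abivalidate()                                   # Ask Abinit to validate the input without running the calculation.
ibz = inp.abiget_ibz()                              # k-points in the IBZ.
pconfs = inp.abiget_autoparal_pconfs(max_ncpus=4)   # Possible MPI/OpenMP distributions (autoparal).
perts = inp.abiget_irred_phperts(qpt=(0, 0, 0))     # Irreducible phonon perturbations.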
To call Abinit from AbiPy, one has to prepare a configuration file (manager.yml) providing all the information required to execute/submit Abinit jobs:
environment setup ($PATH, $LD_LIBRARY_PATH, modules to load), the MPI runner, resource limits and queue options.
For further info, consult the documentation.
qadapters:
    # List of qadapter objects.
    - priority: 1
      queue:
          qtype: shell
          qname: localhost
      job:
          mpi_runner: mpirun
          pre_run:
              # abinit exec must be in $PATH
              - export PATH=$HOME/git_repos/abinit/_build/src/98_main:$PATH
      limits:
          timelimit: 30:00
          max_cores: 2
      hardware:
          num_nodes: 1
          sockets_per_node: 1
          cores_per_socket: 2
          mem_per_node: 4Gb
Examples of configuration files for different clusters are available here.
Use abirun.py doc_manager to get documentation inside the shell
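For example, from the terminal:

$ abirun.py doc_manager           # Print the TaskManager documentation.
$ abirun.py doc_manager slurm     # Print a template for the Slurm qadapter (assumption: available in recent AbiPy versions).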
hardware: &hardware
    num_nodes: 80
    sockets_per_node: 2
    cores_per_socket: 12
    mem_per_node: 95Gb

job: &job
    mpi_runner: mpirun
    modules: # Load modules used to compile Abinit
        - intel/2017b
        - netCDF-Fortran/4.4.4-intel-2017b
        - abinit_8.11
    pre_run: "ulimit -s unlimited"

# Slurm options.
qadapters:
    - priority: 1
      queue:
          qtype: slurm
          qname: large
      limits:
          timelimit: 0-0:30:00
          min_cores: 1
          max_cores: 48
          min_mem_per_proc: 1000
          max_mem_per_proc: 2000
          max_num_launches: 10
      hardware: *hardware
      job: *job
ibz = inp.abiget_ibz()
print("Number of k-points:", len(ibz.points))
print("Weights normalized to:", ibz.weights.sum())
n = min(5, len(ibz.points))
for i, (k, w) in enumerate(zip(ibz.points[:n], ibz.weights[:n])):
    print(i, "kpt:", k, "weight:", w)
if n != len(ibz.points): print("...")
Number of k-points: 2
Weights normalized to: 1.0
0 kpt: [-0.25  0.5   0.  ] weight: 0.75
1 kpt: [-0.25  0.    0.  ] weight: 0.25
inp["paral_kgb"] = 1
pconfs = inp.abiget_autoparal_pconfs(max_ncpus=5)
print("Best efficiency:\n", pconfs.sort_by_efficiency()[0])
#print("Best speedup:\n", pconfs.sort_by_speedup()[0])
Best efficiency: {'efficiency': 0.788, 'mem_per_cpu': 0.0, 'mpi_ncpus': 2, 'omp_ncpus': 1, 'tot_ncpus': 2, 'vars': {'bandpp': 1, 'npband': 1, 'npfft': 1, 'npimage': 1, 'npkpt': 2, 'npspinor': 1}}
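The selected configuration can then be used to update the input, e.g. (a sketch assuming the configuration exposes the ABINIT variables via the vars entry shown above):

best = pconfs.sort_by_efficiency()[0]
inp.set_vars(best["vars"])   # npkpt, npband, npfft, npspinor, npimage, bandpp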
inp.abiget_irred_phperts(qpt=(0.25, 0, 0))
[{'qpt': [0.25, 0.0, 0.0], 'ipert': 1, 'idir': 1}, {'qpt': [0.25, 0.0, 0.0], 'ipert': 1, 'idir': 2}]
inp.abiget_irred_strainperts()
[{'qpt': [0.0, 0.0, 0.0], 'ipert': 1, 'idir': 1}, {'qpt': [0.0, 0.0, 0.0], 'ipert': 5, 'idir': 1}, {'qpt': [0.0, 0.0, 0.0], 'ipert': 5, 'idir': 2}, {'qpt': [0.0, 0.0, 0.0], 'ipert': 5, 'idir': 3}, {'qpt': [0.0, 0.0, 0.0], 'ipert': 6, 'idir': 1}, {'qpt': [0.0, 0.0, 0.0], 'ipert': 6, 'idir': 2}, {'qpt': [0.0, 0.0, 0.0], 'ipert': 6, 'idir': 3}]
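These lists can be used to generate one DFPT input per irreducible perturbation, along these lines (a sketch based on standard ABINIT variables, not the actual AbiPy factory functions):

for pert in inp.abiget_irred_phperts(qpt=(0.25, 0, 0)):
    ph_inp = inp.new_with_vars(
        rfphon=1,                                   # Activate the phonon perturbation.
        rfatpol=[pert["ipert"], pert["ipert"]],     # Atom to displace.
        rfdir=[1 if i + 1 == pert["idir"] else 0 for i in range(3)],  # Direction of the perturbation.
        nqpt=1, qpt=pert["qpt"],
        kptopt=3,                                   # Do not use symmetries for the k-mesh.
    )
    # ph_inp can now be registered in a Work or written to file.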
Each format has pros and cons:
- YAML
- Netcdf4
MSG_ERROR("Fatal error!")
MSG_BUG("This is a bug!")
ABI_CHECK(natom > 0, sjoin("Input natom must be > 0 but was:", itoa(natom)))
--- !ERROR
src_file: m_invars1.F90
src_line: 314
mpi_rank: 0
message: |
    Input natom must be > 0, but was -42
...
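On the Python side, these tagged documents can be extracted from the log file with a few lines of string processing plus a YAML parser, e.g. (a simplified sketch, not the actual AbiPy event parser):

import yaml

def extract_yaml_docs(log_path, tag="!ERROR"):
    """Return the list of YAML documents marked with `tag` found in an Abinit log file."""
    docs, inside, lines = [], False, []
    with open(log_path) as fh:
        for line in fh:
            if line.startswith("--- " + tag):
                inside, lines = True, []
            elif inside and line.startswith("..."):
                docs.append(yaml.safe_load("".join(lines)))
                inside = False
            elif inside:
                lines.append(line)
    return docs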
MSG_ERROR_CLASS(dilatmx_errmsg, "DilatmxError")
These classes bring metadata and/or imply some action at the Fortran level
The presence of the YAML error in the log file triggers specialized Python logic:
call chkdilatmx(dt_chkdilatmx, dilatmx, rprimd, rprimd_orig, dilatmx_errmsg)

if (len_trim(dilatmx_errmsg) /= 0) then
  ! Write the last structure before aborting so that we can restart from it.
  if (my_rank == master) then
    NCF_CHECK(crystal%ncwrite_path("out_DILATMX_STRUCT.nc"))
  end if
  call xmpi_barrier(comm_cell)

  write(dilatmx_errmsg, '(a,i0,3a)') &
    'Dilatmx has been exceeded too many times (', nerr_dilatmx, ')', ch10, &
    'Restart calculation from larger lattice vectors and/or a larger dilatmx'
  MSG_ERROR_CLASS(dilatmx_errmsg, "DilatmxError")
end if
If a !DilatmxError is reported in the log file, AbiPy expects a netcdf file with the last structure so that the calculation can be restarted from it.
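A sketch of what that restart logic might look like on the Python side (the helper below is illustrative, not the actual AbiPy error handler):

from abipy import abilab

def handle_dilatmx_error(inp, task_dir):
    """Restart from the structure dumped by Abinit when dilatmx has been exceeded."""
    # Read the last structure written by the Fortran code before aborting.
    structure = abilab.Structure.from_file(task_dir + "/out_DILATMX_STRUCT.nc")
    # Build a new input with the updated lattice so that the relaxation can be resubmitted.
    return inp.new_with_structure(structure)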
--- !FinalSummary
program: abinit
version: 8.11.6
start_datetime: Sat Mar 30 23:01:14 2019
end_datetime: Sat Mar 30 23:04:04 2019
overall_cpu_time: 168.8
overall_wall_time: 169.7
exit_requested_by_user: no
timelimit: 0
pseudos:
    Li: 9517c0b7d24d4898578b8627ce68311d
    F: 14cf65a61ba7320a86892d2f062b1f44
usepaw: 0
mpi_procs: 1
omp_threads: 1
num_warnings: 2
num_comments: 73
...
ncerr = nctk_open_create(ncid, "si_scf_GSR.nc", xmpi_comm_self)
! Write hdr, crystal and band structure
ncerr = hdr%ncwrite(ncid, fform_den, nc_define=.True.)
ncerr = crystal%ncwrite(ncid)
ncerr = ebands%ncwrite(ncid)
! Add energy, forces, stresses
ncerr = results_gs%ncwrite(ncid, dtset%ecut, dtset%pawecutdg)
ncerr = nf90_close(ncid)
NCF_CHECK(nctk_open_create(ncid, "out_PHDOS.nc", xmpi_comm_self))
! PHDOS.nc has a crystalline structure + DOS values
NCF_CHECK(cryst%ncwrite(ncid))
NCF_CHECK(phdos%ncwrite(ncid))
class GsrFile(AbinitNcFile, Has_Header, Has_Structure, Has_ElectronBands):
    """This file contains ground-state results"""

class FatBandsFile(AbinitNcFile, Has_Header, Has_Structure, Has_ElectronBands):
    """This file contains LM-projected bands"""

for path in ("out_GSR.nc", "out_FATBANDS.nc", "out_WFK.nc", "out_SIGEPH.nc"):
    abifile = abilab.abiopen(path)
    abifile.ebands.plot()
$ cat job.sh
#SBATCH --time=12:00:00
abinit --timelimit 12:00:00 < run.file > run.log 2> run.err
from numpy.testing import assert_almost_equal
from abipy import abilab

# Test whether the projected DOSes computed by anaddb integrate to 3 * natom.
ddb = abilab.abiopen("out_DDB")

for dos_method in ("tetra", "gaussian"):
    # Get phonon bands and DOS with anaddb.
    phbst_nc, phdos_nc = ddb.anaget_phbst_and_phdos_files(dos_method=dos_method)
    phbands, phdos = phbst_nc.phbands, phdos_nc.phdos

    # The total PHDOS should integrate to 3 * natom.
    assert_almost_equal(phdos.integral_value, len(phbands.structure) * 3)

    # Summing the projected DOSes over atom types should give the total DOS.
    pj_sum = sum(pjdos.integral_value for pjdos in phdos_nc.pjdos_symbol.values())
    assert_almost_equal(phdos.integral_value, pj_sum)

    # Summing the projected DOSes over types and directions should give the total DOS.
    values = phdos_nc.reader.read_value("pjdos_rc_type").sum(axis=(0, 1))
    tot_dos = abilab.Function1D(phdos.mesh, values)
    assert_almost_equal(phdos.integral_value, tot_dos.integral_value)
for mpi_procs in range(1, 10**23):
    ddb.anaget_phbst_and_phdos_files(dos_method=dos_method, mpi_procs=mpi_procs)
Stress testing involves testing beyond normal operational capacity, often to a breaking point in order to:
- determine breaking points or safe usage limits
- determine how exactly a system fails
(Wikipedia)
from functools import reduce
from itertools import product
import operator

from abipy import flowtk

# Titanium with 256 atoms and k-point sampling.
# GS calculations with paral_kgb == 1 and different values of wfoptalg.

# List of MPI distributions.
pconfs = [
    dict(npkpt=2, npband=8,  npfft=8),   # 128 cores
    dict(npkpt=2, npband=8,  npfft=16),  # 256 cores
    dict(npkpt=2, npband=16, npfft=16),  # 512 cores
]
omp_list = [1, 2, 4]  # List of OpenMP threads

flow = BenchmarkFlow()          # Flow subclass defined elsewhere in the benchmark script.
template = generate_input()     # Function building the GS input (defined elsewhere).
manager = flowtk.TaskManager.from_user_config()   # Build the TaskManager from manager.yml.

for wfoptalg in [1, 10, 14]:
    work = flowtk.Work()
    for d, omp_threads in product(pconfs, omp_list):
        mpi_procs = reduce(operator.mul, d.values(), 1)
        new_manager = manager.new_with_fixed_mpi_omp(mpi_procs, omp_threads)
        inp = template.new_with_vars(d, wfoptalg=wfoptalg)
        work.register_scf_task(inp, manager=new_manager)
    flow.register_work(work)

flow.allocate()
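Once allocated, the flow is built and executed with the standard AbiPy machinery (usage sketch):

flow.build_and_pickle_dump()      # Write the flow directories and the pickle file to disk.

$ abirun.py FLOWDIR scheduler     # Launch and monitor all the tasks of the flow.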