- Clean up any old data.
- Unpack the .zip archive.
- Run the PWSCF executable (pw.x).
For each step, we write a simple Condor submit description (.cmd) file. Condor can handle parameter sweeps on its own, but running jobs with dependencies between them requires DAGMan. We will start with a data file called __CC5f_7.zip.
Here is clean_pwscf.cmd:
universe = vanilla
executable = /bin/rm
arguments = -rf __CC5f_7
output = clean.out
error = clean.err
log = clean.log
queue
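Each step can be tested on its own before wiring up the DAG, using the ordinary submission command (standard Condor, nothing DAGMan-specific):

condor_submit clean_pwscf.cmd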
Now unpack_pwscf.cmd:
universe = vanilla
executable = /usr/bin/unzip
arguments = __CC5f_7.zip
output = unpack.out
error = unpack.err
log = unpack.log
queue
Finally, run_pwscf.cmd:
universe = vanilla
executable = pw.x
input = Pwscf_Input
output = pw_condor.out
error = pw_condor.err
log = pw_condor.log
initialdir = __CC5f_7
queue
All of these jobs run in the same directory, which contains the pw.x executable. The only interesting part of any of these files is the initialdir directive in the last one. It specifies that the job executes in the newly unpacked __CC5f_7 directory, which contains the Pwscf_Input file and becomes the location of the .out, .err, and .log files.
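To make the layout concrete, here is roughly what the working directory looks like after the whole workflow finishes (a sketch; any other files inside __CC5f_7 are omitted):

pw.x
__CC5f_7.zip
__CC5f_7/
    Pwscf_Input
    pw_condor.out
    pw_condor.err
    pw_condor.log
    tmp/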
As with most scientific codes, PWSCF creates more than one output file; in this case they end up in the __CC5f_7/tmp directory. Presumably these would be preserved and copied back if I were running this on a cluster rather than on a single machine, although I may need to include the directives
should_transfer_files = IF_NEEDED
when_to_transfer_output = ON_EXIT
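If those two directives did not bring the tmp directory back on their own, Condor also has a transfer_output_files directive for naming outputs explicitly; something like this (untested here, and assuming everything of interest really does land in tmp) could go in run_pwscf.cmd:

transfer_output_files = tmp

Directories as well as individual files can be listed there, so this should pick up tmp and its contents.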
Finally, we need to create our DAG. Here is the content of pwscf.dag:
Job Clean clean_pwscf.cmd
Job Unpack unpack_pwscf.cmd
Job Run run_pwscf.cmd
PARENT Clean CHILD Unpack
PARENT Unpack CHILD Run
The "job" portion associates each script with a nickname. The PARENT portion than defines the DAG dependencies. Submit this with
condor_submit_dag -f pwscf.dag
The -f option forces condor_submit_dag to overwrite any preexisting log and other files associated with pwscf.dag from earlier runs.
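While the DAG runs, condor_q shows the DAGMan job along with whichever node is currently executing, and DAGMan writes its own log to pwscf.dag.dagman.out. If a node fails (say, unzip exits nonzero because the archive is missing), DAGMan stops and never runs that node's children. For flaky steps, DAGMan's standard RETRY keyword can be added to pwscf.dag; for example

RETRY Run 2

would rerun the Run node up to two more times if pw.x exits with an error.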