Friday, November 30, 2007
CIMA Portal Link
Here's a link to the CIMA Crystallography Portal: http://cimaportal.indiana.edu:8080/gridsphere/gridsphere. For some reason this was hard to find with Google. The keyword combination "cima portal" did not work well, but it was the top hit if I googled "cimaportal".
Saturday, November 17, 2007
Condor, DAGMan for Running VLAB Jobs
Here's how to run PWSCF codes with Condor. Assume you have the executables and that the input data comes to you in a .zip file. We need to do these steps:
- Clean up any old data.
- Unpack the .zip.
- Run PWSCF's executables.
For each step, we need to write a simple Condor .cmd file. Condor can do parameter sweeps, but you must use DAGMan to run jobs with dependencies. We will start with a data file called __CC5f_7.zip.
Here is clean_pwscf.cmd:
universe = vanilla
executable = /bin/rm
arguments = -rf __CC5f_7
output = clean.out
error = clean.err
log = clean.log
queue
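A submit file like this can be tested on its own before wiring up the DAG, assuming a local Condor pool is running:

condor_submit clean_pwscf.cmd
condor_q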
Now unpack_pwscf.cmd:
universe = vanilla
executable = /usr/bin/unzip
arguments = __CC5f_7.zip
output = unpack.out
error = unpack.err
log = unpack.log
queue
Finally, run_pwscf.cmd:
universe = vanilla
executable = pw.x
input = Pwscf_Input
output = pw_condor.out
error = pw_condor.err
log = pw_condor.log
initialdir = __CC5f_7
queue
All of these are run in the same directory that contains the pw.x executable. The only interesting part of any of these scripts is the initialdir directive in the last one. It specifies that the job executes in the newly unpacked __CC5f_7 directory, which contains the Pwscf_Input file and will be the location of the .out, .err, and .log files.
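After a successful run you would expect the unpacked directory to look roughly like this (exact contents depend on your input deck):

ls __CC5f_7
Pwscf_Input  pw_condor.err  pw_condor.log  pw_condor.out  tmp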
As with most scientific codes, PWSCF creates more than one output file. In this case they are located in the __CC5f_7/tmp directory. Presumably these would be preserved and copied back if I were running this on a cluster rather than a single machine, although I may need to include the directives
should_transfer_files = IF_NEEDED
when_to_transfer_output = ON_EXIT
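If those directives turn out to be needed, run_pwscf.cmd would grow to something like this (a sketch, untested, since everything here ran on one machine):

universe = vanilla
executable = pw.x
input = Pwscf_Input
output = pw_condor.out
error = pw_condor.err
log = pw_condor.log
initialdir = __CC5f_7
should_transfer_files = IF_NEEDED
when_to_transfer_output = ON_EXIT
queue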
Finally, we need to create our DAG. Here is the content of pwscf.dag:
Job Clean clean_pwscf.cmd
Job Unpack unpack_pwscf.cmd
Job Run run_pwscf.cmd
PARENT Clean CHILD Unpack
PARENT Unpack CHILD Run
The "Job" lines associate each submit file with a nickname. The PARENT/CHILD lines then define the DAG dependencies. Submit this with
condor_submit_dag -f pwscf.dag
The -f option forces Condor to overwrite any preexisting logs and other files associated with pwscf.dag.
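You can then watch the DAG's progress with condor_q, or follow DAGMan's own debug log, which it names after the .dag file:

condor_q
tail -f pwscf.dag.dagman.out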
Saturday, November 10, 2007
Running Condor on the Airplane
Or anywhere without an internet connection. A condensed config sketch follows the checklist.
- Install with condor_install as best you can.
- Use 127.0.0.1 for your condor host.
- Edit the annoying HOSTALLOW_WRITE to change the broken default. Why can't I set this during the installation phase?
- Uncomment DEFAULT_DOMAIN_NAME. The value doesn't matter.
- Uncomment NO_DNS=TRUE.
- And don't forget to replace $(UWCS...) with $(TESTINGMODE...) in Part 3.
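Pulling those edits together, the relevant condor_config lines look roughly like this (a sketch from memory of a circa-2007 Condor config; HOSTALLOW_WRITE became ALLOW_WRITE in later versions):

CONDOR_HOST = 127.0.0.1
HOSTALLOW_WRITE = $(FULL_HOSTNAME), 127.0.0.1
# any value works here; it just has to be set
DEFAULT_DOMAIN_NAME = localdomain
NO_DNS = TRUE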
Monday, November 05, 2007
Maven 2 Dependency Plugin
The wonderful Maven2 Dependency Plugin lets you grab your project's dependency jars and put them into a local directory. Thanks to Srinath for showing me this.
This grabs all the jars in your dependency list and puts them in target/.../lib.
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-dependency-plugin</artifactId>
  <executions>
    <execution>
      <id>copy-dependencies</id>
      <phase>package</phase>
      <goals>
        <goal>copy-dependencies</goal>
      </goals>
      <configuration>
        <outputDirectory>${basedir}/target/${pom.artifactId}-${pom.version}/lib</outputDirectory>
        <overWriteReleases>false</overWriteReleases>
        <overWriteSnapshots>true</overWriteSnapshots>
      </configuration>
    </execution>
  </executions>
</plugin>
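With that execution in place, a plain package build triggers the copy. Assuming a hypothetical artifact called myproject at version 1.0, the jars land in target/myproject-1.0/lib:

mvn package
ls target/myproject-1.0/lib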
Emacs goto-column function
Found this at http://www.icce.rug.nl/edu/1/cygwin/extra/dot.emacs.
Put it in your .emacs file.
(defun goto-column-number (number)
  "Untabify, and go to a column number within the current line
(1 is the beginning of the line)."
  (interactive "nColumn number (- 1 == C) ? ")
  (beginning-of-line)
  (untabify (point-min) (point-max))
  (while (> number 1)
    (if (eolp)
        (insert ? )
      (forward-char))
    (setq number (1- number))))
To use it in Emacs, type "M-x goto-column-number" (Esc then x), press Enter, and enter the column number within the current line.
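If you use this often, a key binding in the same .emacs file saves typing the full name; C-c g below is just an arbitrary choice:

(global-set-key (kbd "C-c g") 'goto-column-number)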
Maven 1 Repositories with Maven 2
Add the following to your Maven 2 POM's repositories section to download jars from an older Maven 1 repository. This is useful since Maven 1 repositories are much simpler to create than Maven 2 repositories.
<repository>
  <id>ogce-maven1-legacy-repo</id>
  <url>http://www.collab-ogce.org/maven</url>
  <layout>legacy</layout>
</repository>
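With the legacy layout, Maven 2 maps its group/artifact/version coordinates onto the flat Maven 1 directory scheme. A hypothetical dependency like this one:

<dependency>
  <groupId>ogce</groupId>
  <artifactId>some-artifact</artifactId>
  <version>1.0</version>
</dependency>

would be fetched from http://www.collab-ogce.org/maven/ogce/jars/some-artifact-1.0.jar.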
Thursday, November 01, 2007
Maven: Making a War and Jar at the Same Time
Maven 2 nicely automates building WAR files, but it places your compiled classes in WEB-INF/classes instead of making a new jar in WEB-INF/lib.
If you want your code packaged as a .jar as well as a .war, you can do this by specifying the jar goal on the command line:
mvn clean jar install
Note this will make myproject.jar in target/, not in target/myproject/WEB-INF/lib, so you will need to use the Ant plugin to move this stuff around.
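One way to do that move without leaving Maven is the maven-antrun-plugin; a sketch (the myproject-1.0 names are placeholders for your project's finalName):

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-antrun-plugin</artifactId>
  <executions>
    <execution>
      <id>copy-jar-into-war</id>
      <phase>package</phase>
      <goals>
        <goal>run</goal>
      </goals>
      <configuration>
        <tasks>
          <!-- myproject-1.0 is a placeholder finalName -->
          <copy file="${basedir}/target/myproject-1.0.jar"
                todir="${basedir}/target/myproject-1.0/WEB-INF/lib"/>
        </tasks>
      </configuration>
    </execution>
  </executions>
</plugin>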
But this is not always an option: for deep, modular builds using the reactor, you may want to build your whole thing using one "mvn install". To do this, do the following:
- Specify "war" packaging at the top of your pom.xml.
- Then add the following to your build section.
<build>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-jar-plugin</artifactId>
      <executions>
        <execution>
          <id>make-a-jar</id>
          <phase>compile</phase>
          <goals>
            <goal>jar</goal>
          </goals>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>
And that's it. Do a "mvn install" and you get a jar and a war.
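After the build, both artifacts should end up in target/. With the hypothetical myproject-1.0 again:

mvn install
ls target/
myproject-1.0.jar  myproject-1.0.war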