Monday, December 31, 2007

Installing and Configuring Globus Services on Mac OS X

* Download Mac binaries from Globus site. Probably the VDT will work fine also.

* Do the usual configure/make/make install business.

* Unfortunately, all Macs that I tried are missing Perl XML parsers and so will fail at the "make install" step. For some reason, Globus's ./configure isn't set up to catch this. Not sure why. I found the following nice instructions for installing Perl::XML and other stuff at

http://ripary.com/bundle-xml.html

This will take a while.

* Set up Simple CA. See

http://vdt.cs.wisc.edu/releases/1.8.1/installation_post_server.html#simpleca

and

http://www-unix.globus.org/toolkit/docs/4.0/admin/docbook/ch07.html#s-simpleca-admin-installing.

The VDT page above actually points to some older (v 3.2) instructions, but these are still OK also.

* Make your host certificate and sign it. I did all of this as root.

$GLOBUS_LOCATION/bin/grid-cert-request -host `hostname`

(note the backticks around hostname).

cd /etc/grid-security/
~/Globus-Services/bin/grid-ca-sign -in hostcert_request.pem -out hostsigned.pem

* Create your xinetd services.

cd /etc/xinetd.d/
touch gsigatekeeper
touch gsiftp

* Here is my gsigatekeeper:
service gsigatekeeper
{
socket_type = stream
protocol = tcp
wait = no
user = root
env = LD_LIBRARY_PATH=/Users/mpierce/Globus-Services/lib
env = DYLD_LIBRARY_PATH=/Users/mpierce/Globus-Services/lib
server = /Users/mpierce/Globus-Services/sbin/globus-gatekeeper
server_args = -conf /Users/mpierce/Globus-Services/etc/globus-gatekeeper.conf
disable = no
}

And my gsiftp:
service gsiftp
{
instances = 100
socket_type = stream
wait = no
user = root
env += GLOBUS_LOCATION=/Users/mpierce/Globus-Services
env += LD_LIBRARY_PATH=/Users/mpierce/Globus-Services/lib
env += DYLD_LIBRARY_PATH=/Users/mpierce/Globus-Services/lib
server = /Users/mpierce/Globus-Services/sbin/globus-gridftp-server
server_args = -i
log_on_success += DURATION
nice = 10
disable = no
}

Note the LD_LIBRARY_PATH is not useful for Macs--you need DYLD_LIBRARY_PATH instead (see below). But I left it in anyway--you will need this for Linux installations.

* Start your services:

service gsiftp start
service gsigatekeeper start

* You may want to add these also to /etc/services
tail /etc/services
# Carstein Seeberg
# 48004-48555 Unassigned
com-bardac-dw 48556/udp # com-bardac-dw
com-bardac-dw 48556/tcp # com-bardac-dw
# Nicholas J Howes
# 48557-49150 Unassigned
# 49151 IANA Reserved
#gsiftp 2811/tcp
#gsigatekeeper 2119/tcp


* Check these with telnet:
telnet localhost 2811
telnet localhost 2119

* Note you must use DYLD_LIBRARY_PATH on the Mac or else the service will not start even though the "service" commands above will not complain. You will get errors like this if you telnet to the ports:
/Users/condor/execute/dir_8492/userdir/install/lib/libglobus_gss_assist_gcc32.0.dylib

* Requisite Globus Complaint: I had to do all the above configuration stuff by hand. Why not have a configuration post-installation "flavor" called "my first globus installation" that does all of this for you?

* Create a grid-mapfile and some usercerts, or just use your favorite grid-mapfile from some place else.

Monday, December 17, 2007

Condor-G, Birdbath and TransferOutputRemaps

Here is the classad attribute structure you need for transferring your output back from the remote globus machine to specific files on your condor-g job broker/universal client. By default, the output goes to your cluster's spool directory. The example below shows how to move these to /tmp/ on the condor host.

Note that TransferOutput files are separated by commas, and TransferOutputRemaps files are separated by semicolons.

new ClassAdStructAttr("TransferOutput",
ClassAdAttrType.value2,
"\"testgeoupdate.index,testgeoupdate.node,testgeoupdate.tetra\""),

new ClassAdStructAttr("TransferOutputRemaps",
ClassAdAttrType.value2,"\"testgeoupdate.index=/tmp/testgeoupdate.index;
testgeoupdate.node=/tmp/testgeoupdate.node;testgeoupdate.tetra=/tmp/testgeoupdate.tetra\""),

...

The trick as usual is to get the condor true-names for the parameters you want to use. These are typically NOT the familiar names from condor_submit. Always create a command script for what you want to do before writing your birdbath api code, and then use condor_q -l to see the internal names that condor actually uses. These are the ones you will need to use in your classAdStructAttr array.

Saturday, December 15, 2007

File Retrieval with BirdBath and Condor-G

As I wrote earlier, output files (other than standard out) of Globus jobs submitted via Condor-G can be retrieved using the TransferOutput attribute. To reproduce this with the BirdBath Java API, you need an incantation something like below.

ClassAdStructAttr[] extraAttributes =
{
new ClassAdStructAttr("GridResource", ClassAdAttrType.value3,
gridResourceVal),
new ClassAdStructAttr("Out", ClassAdAttrType.value3,
outputFile),
new ClassAdStructAttr("UserLog", ClassAdAttrType.value3,
logFile),
new ClassAdStructAttr("Err", ClassAdAttrType.value3,
errFile),
new ClassAdStructAttr("TransferExecutable",
ClassAdAttrType.value4,
"FALSE"),
new ClassAdStructAttr("when_to_transfer_output",
ClassAdAttrType.value2,
"\"ON_EXIT\""),
new ClassAdStructAttr("should_transfer_files",
ClassAdAttrType.value2,
"\"YES\""),
new ClassAdStructAttr("StreamOut",
ClassAdAttrType.value4,
"FALSE"),
new ClassAdStructAttr("StreamErr",
ClassAdAttrType.value4,
"FALSE"),
new ClassAdStructAttr("TransferOutput",
ClassAdAttrType.value2,
"\"testgeoupdate.index,testgeoupdate.node,testgeoupdate.tetra\""),
new ClassAdStructAttr("Environment",ClassAdAttrType.value2,
"\"PATH=/home/teragrid/tg459247/geofest.binaryexec:/bin:/usr/bin\""),
new ClassAdStructAttr("x509userproxy",
ClassAdAttrType.value3,
proxyLocation)
};


The key problem with BirdBath is that it seems to require the parameter "true-names" rather than the names documented in the command-line condor_submit man page. For example, to specify the file to use for logging, you have to use the parameter "UserLog", not "log" or "Log". To see the actual "true-names" used by Condor internally, submit a job with condor_submit and then use condor_q -l. They will also be in the job_queue.log file in the spool directory.

With birdbath+condor-g there is a similar problem returning files from the remote globus server. You have to use "TransferOutput" as one of the attributes. Also I noticed birdbath did not set StreamOut and StreamErr, so I manually set these as well.

The output files (testgeoupdate.index, etc) will be uploaded to your job's spool directory. Note birdbath does not clean this up, unlike the condor_submit invocations (which delete your job's clusterxy.proc0.subproc0 directory on completion). To make birdbath mimic this behavior, you can set LeaveJobInQueue=TRUE in your classadstructattrs above.
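
For reference, that attribute can be added to the extraAttributes array in the same way as the others. A minimal sketch (the value4/boolean type is my guess, based on the other TRUE/FALSE attributes above):

new ClassAdStructAttr("LeaveJobInQueue",
ClassAdAttrType.value4,
"TRUE"),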

Thursday, December 13, 2007

Condor-G Plus TeraGrid Example

Below is an example condor command script for running a job from my local grid client (a PC running FC7, globus, condor). I'm running a set of codes that I installed on the machine (lonestar) that I wanted to use. These codes are run by a perl script, so I had to set the PATH. I want to upload the input files from my desktop at submission time--the paths here are for my client machine. Note Condor-G will put these in $SCRATCH_DIRECTORY on lonestar, which doubles as your working directory (that is, autoref.pl will be executed here). To get the files back from lonestar to the PC, I used "transfer_output_files" and listed each file. Full paths for these aren't necessary. Condor will pull them back from $SCRATCH_DIRECTORY on the remote machine to your local directory.

# Here it is. Please gaze at it only through a welder's mask.
executable = /home/teragrid/myaccnt/geofest.binaryexec/autoref.pl
arguments = testgeoupdate rare
transfer_executable = false
should_transfer_files=yes
when_to_transfer_output=ON_EXIT
transfer_input_files =/home/mpierce/condor_test/Northridge2.flt,/home/mpierce/condor_test/Northridge2.params,/home/mpierce/condor_test/Northridge2.sld,/home/mpierce/condor_test/NorthridgeAreaMantle.materials,/home/mpierce/condor_test/NorthridgeAreaMantle.sld,/home/mpierce/condor_test/NorthridgeAreaMidCrust.materials,/home/mpierce/condor_test/NorthridgeAreaMidCrust.sld,/home/mpierce/condor_test/NorthridgeAreaUpper.materials,/home/mpierce/condor_test/NorthridgeAreaUpper.sld,/home/mpierce/condor_test/testgeoupdate.grp
transfer_output_files=testgeoupdate.index,testgeoupdate.node,testgeoupdate.tetra
universe = grid
grid_resource = gt2 tg-login.tacc.teragrid.org/jobmanager-fork
output = test.out.$(Cluster)
log = test.log.$(Cluster)
environment = PATH=/home/teragrid/myaccnt/geofest.binaryexec/:/bin/:/usr/bin
queue

Returning Standard Output in Condor-G

[The problems have been hard to reproduce, so none of this may be necessary. But it doesn't hurt.]

Here's a nugget of wisdom: start Condor-G as root and set CONDOR_IDS=(uid).(gid) (that is, see /etc/passwd and /etc/group for the values; it should be something like 501.501) if you want to run as a user other than "condor".

I had the following problem: condor_submit worked correctly but standard output went to /dev/null. Checking the logs and spool, I saw that TransferOut was also FALSE and couldn't override this.

I solved this by restarting condor as root (it resets the process id to the CONDOR_IDS user). It also seems to work if you run condor as the correct user (that is, the one specified by CONDOR_IDS).

Wednesday, December 12, 2007

Globus Client+MyProxy+Pacman on FC7 Notes

Here are my "hold your mouth right" instructions. Deviate at your own risk.

* Get pacman, untar, cd, and run setup.sh.

* Make an installation directory (like mkdir $HOME/vdt) and cd into it. Don't run Pacman in $HOME--it copies everything in the current working directory into post-install, which could be awful if you run from $HOME.

* Fedora Core 7 is not supported by the VDT, so do the pacman pretend:

cd $HOME/vdt (if not there already)
pacman -pretend-platform Fedora-4

* Install Globus clients (you are still in $HOME/vdt):
pacman -get http://vdt.cs.wisc.edu/vdt_181_cache:Globus-Client

Answer no to installing VDT certs. We will get TG certs in a minute, and the VDT certs seem to have some problems.

* Install MyProxy (still in $HOME/vdt):
pacman -get http://vdt.cs.wisc.edu/vdt_181_cache:MyProxy

* Install GSIOpenSSH (still in $HOME/vdt):
pacman -get http://vdt.cs.wisc.edu/vdt_181_cache:GSIOpenSSH

* Set your $GLOBUS_LOCATION and source $GLOBUS_LOCATION/etc/globus-user-env.sh.

* Get the TG certificates:
* wget http://security.teragrid.org/docs/teragrid-certs.tar.gz
Unpack these in $HOME/.globus/certificates/

* I didn't like the Condor installation I got from Pacman. For now, this should be done with the old condor_configure command on a direct download.

* Also make sure pacman didn't set $X509_CERT_DIR. This had some self-signed certs that caused myproxy-logon to myproxy.teragrid.org to fail and overrode my .globus and /etc/grid-security certificate locations.

Monday, December 10, 2007

MyProxy, Globus Clients on Mac OSX: Use the VDT

[NOTE: Read the comments. Mac binaries are available from www.globus.org; missing links were added.]

I spent a couple of hours trying to compile Globus source on my Mac OS X, only to have it fail with some mysterious error. Some things never change. Why don't they provide a pre-built Mac OS X binary? Here is the error that I get:

make: *** [globus_rls_server-thr] Error 2

Luckily, the VDT seems to do things the right way. I used Pacman to install globus clients (the command line tools only) and MyProxy. See the VDT documentation for instructions. Why doesn't Globus do this? Unbelievable. Anyway, here are the Pacman commands that I used:

pacman -get http://vdt.cs.wisc.edu/vdt_181_cache:Globus-Client

and

pacman -get http://vdt.cs.wisc.edu/vdt_181_cache:MyProxy

I answered "no" for installing certificates--I have the ones I needed, and answering "yes" caused a failure, probably because of write permission problems.

Set $GLOBUS_LOCATION and source $GLOBUS_LOCATION/etc/globus-user-env.sh as usual.

Thursday, December 06, 2007

Condor Birdbath Compromises

I have not been able to find the right incantation for using the "initialdir" attribute with the Birdbath web service. I'm trying to do the equivalent of the following submission script:

universe = vanilla
executable = /path/to/my/bin/a.out
output = theOutput
error = theError
log = theLog
arguments = 10 21
environment = "PATH=/path/to/my/bin/:/bin/:/usr/bin"
initialdir = /path/to/my/data
should_transfer_files = IF_NEEDED
when_to_transfer_output = ON_EXIT
queue

Looking at spool/job.queue.log, you will see that this attribute's proper name is "Iwd". However, the classad structure (using Birdbath java helper classes) like below didn't work:

//Create a classad for the job.
ClassAdStructAttr[] extraAttributes =
{
new ClassAdStructAttr("Out", ClassAdAttrType.value3,
outputFile),
new ClassAdStructAttr("Err", ClassAdAttrType.value3,
errFile),
new ClassAdStructAttr("Log", ClassAdAttrType.value3,
logFile),
new ClassAdStructAttr("Environment", ClassAdAttrType.value2,
"\"PATH=/path/to/my/bin/:/bin:/usr/bin\""),
new ClassAdStructAttr("ShouldTransferFiles", ClassAdAttrType.value2,
"\"IF_NEEDED\""),
new ClassAdStructAttr("WhenToTransferFiles", ClassAdAttrType.value2,
"ON_EXIT"),
new ClassAdStructAttr("Iwd", ClassAdAttrType.value3,
"/path/to/my/data/")
};

The Iwd parameter is set correctly in the logs, but the executables can't find the input files in the initial directory. My workaround was to upload all the data files at submission time instead:

File[] files={ new File("/local/data/path/file1.dat"),
new File("/local/data/path/file1.dat") };

And then run things the usual way, except specify the files:

xact.submit(clusterId, jobId, userName, universeType,
executable,arguments,"(TRUE)", extraAttributes, files);
xact.commit();
schedd.requestReschedule();

This will put the data (input and output) in condor's spool directory.

Tuesday, December 04, 2007

Running GeoFEST with Condor

This is a prelude to using Condor-G. I used the following incantation:

-bash-3.00$ more meshgen.cmd
universe = vanilla
executable = /globalhome/gateway/geofest.binaryexec/autoref.pl
output = autoref.out
error = autoref.err
log = autoref.log
arguments = /globalhome/gateway/condor_test/testgeoupdate rare
environment = "PATH=/globalhome/gateway/geofest.binaryexec/"
#getenv = true
initialdir = /globalhome/gateway/condor_test
should_transfer_files = IF_NEEDED
when_to_transfer_output = ON_EXIT
queue

The key thing here was the "environment" attribute, since the autoref.pl script is of course a perl script that calls other executables. These executables need to be in the PATH.

You can also do this by setting the PATH environment variable in your shell and then using the getenv attribute. This will work OK for the command line but will not be much use in a web service version. OK, you could get it to work, but this is the kind of thing that will break 6 months later when you move to a new machine.

None of the condor examples I could find discuss this, but it is well documented in the condor_submit manual page.

Monday, December 03, 2007

Compiling RDAHMM Correctly

The correct value for the LDLIBS variable is

LDLIBS = -lda -lnr -lut -lcp /usr/lib/liblapack.a -lblas -lm

Friday, November 30, 2007

CIMA Portal Link

Here's a link to the CIMA Crystallography Portal: http://cimaportal.indiana.edu:8080/gridsphere/gridsphere. This was hard for me to find with Google for some reason. The keyword combination "cima portal" did not work well, but it was the top hit if I googled "cimaportal".

Saturday, November 17, 2007

Condor, Dagman for Running VLAB Jobs

Here's how to run PWSCF codes with Condor. Assume you have the executables and that the input data comes to you in a .zip file. We need to do these steps:
  1. Clean up any old data
  2. Unpack the .zip
  3. Run PWSCF's executables.
As always, see my del.icio.us "condor" tags (right side of blog) for useful links.

For each step, we need to write a simple Condor .cmd file. Condor can do parameter sweeps but you must use DAGMan to run jobs with dependencies. We will start with a data file called __CC5f_7.zip.

Here is clean_pwscf.cmd:

universe=vanilla
executable = /bin/rm
arguments= -rf __CC5f_7
output = clean.out
error = clean.err
log = clean.log
queue

Now unpack_pwscf.cmd:

universe = vanilla
executable = /usr/bin/unzip
arguments = __CC5f_7.zip
output = unpack.out
error = unpack.err
log = unpack.log
queue

Finally, run_pwscf.cmd:

universe=vanilla
executable = pw.x
input = Pwscf_Input
output = pw_condor.out
error = pw_condor.err
log = pw_condor.log
initialdir= __CC5f_7
queue

All of these are run in the same directory that contains the pw.x executable. The only interesting part of any of these scripts is the initialdir directive in the last one. This specifies that the script is executed in the newly unpacked __CC5f_7 directory, which contains the Pwscf_Input file and will be the location of the .out, .err, and .log files.

As with most scientific codes, PWSCF creates more than one output file. In this case they are located in the __CC5f_7/tmp directory. Presumably these would be preserved and copied back if I was running this on a cluster and not just one machine, although I may need to include the directives

should_transfer_files = IF_NEEDED
when_to_transfer_output = ON_EXIT

Finally, we need to create our DAG. Here is the content of pwscf.dag:

Job Clean clean_pwscf.cmd
Job Unpack unpack_pwscf.cmd
Job Run run_pwscf.cmd

PARENT Clean CHILD Unpack
PARENT Unpack CHILD Run

The "job" portion associates each script with a nickname. The PARENT portion than defines the DAG dependencies. Submit this with
condor_submit_dag -f pwscf.dag

The -f option forces condor to overwrite any preexisting log and other files associated with pwscf.dag.

Saturday, November 10, 2007

Running Condor on the Airplane

Or anywhere without internet connection.
  1. Install with condor_install as best you can.
  2. Use 127.0.0.1 for your condor host.
  3. Edit the annoying HOSTALLOW_WRITE to change the broken default. Why can't I set this during the installation phase?
  4. Uncomment DEFAULT_DOMAIN_NAME. The value doesn't matter.
  5. Uncomment NO_DNS=TRUE.
  6. And don't forget to replace $(UWCS...) with $(TESTINGMODE...) in Part 3.

Monday, November 05, 2007

Using Maven 1 Repositories with Maven 2

The wonderful Maven2 Dependency Plugin lets you grab your project's dependent jars and put them into a local directory. Thanks to Srinath for showing me this.

This grabs all the jars in your dependency list and puts them in target/.../lib.
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-dependency-plugin</artifactId>
<executions>
<execution>
<id>copy-dependencies</id>
<phase>package</phase>
<goals>
<goal>copy-dependencies</goal>
</goals>
<configuration>
<outputDirectory>${basedir}/target/${pom.artifactId}-${pom.version}/lib</outputDirectory>
<overWriteReleases>false</overWriteReleases>
<overWriteSnapshots>true</overWriteSnapshots>
</configuration>
</execution>
</executions>
</plugin>

Emacs goto-column function

Found this at http://www.icce.rug.nl/edu/1/cygwin/extra/dot.emacs.


Put it in your .emacs file.


(defun goto-column-number (number)
"Untabify, and go to a column number within the current line (1 is beginning
of the line)."
(interactive "nColumn number ( - 1 == C) ? ")
(beginning-of-line)
(untabify (point-min) (point-max))
(while (> number 1)
(if (eolp)
(insert ? )
(forward-char))
(setq number (1- number))))


To use it in emacs, type "Esc-X goto-column-number", press enter, and enter the column number within the current line.

Maven 1 Repositories with Maven 2

Add the following to your Maven 2 POM's repository section to download jars from an older Maven 1 repository. This is useful since Maven 1 repositories are much simpler to create than Maven 2 repositories.

<repository>
<id>ogce-maven1-legacy-repo</id>
<url>http://www.collab-ogce.org/maven</url>
<layout>legacy</layout>
</repository>

Thursday, November 01, 2007

Maven: Making a War and Jar at the Same Time

Maven 2 nicely automates building WAR files, but it places your compiled classes in WEB-INF/classes instead of making a new jar in /WEB-INF/lib.

If you want your stuff to be compiled as a .jar as well as a .war, you can do this by specifying the jar goal in the command line:

shell> mvn clean jar install

Note this will make myproject.jar in target/, not in target/myproject/WEB-INF/lib, so you will need to use the Ant plugin to move this stuff around.

But this is not always an option: for deep, modular builds using the reactor, you may want to build your whole thing using one "mvn install". To do this, do the following:
  1. Specify "war" packaging at the top of your pom.xml.
  2. Then add the following to your build section.
<build>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-jar-plugin</artifactId>
<executions>
<execution>
<id>make-a-jar</id>
<phase>compile</phase>
<goals>
<goal>jar</goal>
</goals>
</execution>
</executions>
</plugin>
</plugins>
</build>

And that's it. Do a "mvn install" and you get a jar and a war.

Wednesday, October 10, 2007

Condor Collector Didn't Start

OK, installing Condor on a local set of linux machines with NFS, no firewall problems, and real IP addresses. What could possibly go wrong? Used condor_install, edited the HOSTALLOW_WRITE param, tried to start with condor_master, but condor_status gave the error

-bash-3.00$ ./bin/condor_status
CEDAR:6001:Failed to connect to
Error: Couldn't contact the condor_collector on myhost.myplace.myuniversity.edu.
....

The answer is found in the link below: you have to enable the additional daemons (particularly COLLECTOR) in the condor_config file.

https://www-auth.cs.wisc.edu/lists/condor-users-rc/2005-February/msg00119.shtml

Note this does not apply to firewalled machines with no access to DNS, no real IPs, etc. See my earlier post for this solution.

Why in the world was this not set in the config file for the master node by default?

Sunday, October 07, 2007

NVO Summary

I bookmarked several NVO related links on del.icio.us. See my tag role on this blog, or check http://del.icio.us/marlon_pierce. This blog post is intended to tie these together.

It looks like the Grid Summer Schools in 2004 and 2006 are the best sources for software. Ray Plante's nice but somewhat outdated summary, http://us-vo.org/pubs/files/PublishHowTo.html, is the best primer.

In general, it looks like one needs to do the following:
  • Get some data.
  • Set up a registry. NCSA's VORegistry-in-a-box looks like the best available software. They also interestingly seem to build on the Digital Library's OAI software.
  • Set up some data services. The simplest is a Cone Search service. This is available from the 2004 Summer School. This uses HTTP GET/URLS/REST, and apparently returns a list of links to existing whole images in the archive.
  • The SAIP service (aka SAI service) seems to be also available in the 2004 download. This implements the Simple Image Access Protocol. The services are typically HTTP GET/REST based but support more arguments than the Cone Search service. For example, you can request image cutouts and other things that are created and returned on the fly, not just whole images.
  • The OpenSkyNode service is yet another service interface. This looks like it supports more complicated, SQL-like queries and can query multiple servers. But the JSkyNode software that Ray Plante mentions is no longer available.
For actually building the database, Roy Williams suggested http://amwdb.u-strasbg.fr/saada/.

Tuesday, October 02, 2007

Better TeraGrid Single Sign On

Through the invisible hand of market forces, the world inevitably becomes optimized over time. Witness http://www.teragrid.org/userinfo/access/tgsso_native.php. This supersedes my earlier post, http://communitygrids.blogspot.com/2007/06/teragrid-single-sign-on.html.

Friday, September 21, 2007

Quote

"ALL men of whatsoever quality they be, who have done anything of excellence, or which may properly resemble excellence, ought, if they are persons of truth and honesty, to describe their life with their own hand; but they ought not to attempt so fine an enterprise till they have passed the age of forty."

Benvenuto Cellini, one of Tom Sawyer's favorite authors, from Cellini's autobiography.

Tuesday, September 18, 2007

Windows Vista, Sockets, Java NIO, and TIME_WAIT

Blogging on behalf of Shrideep Pallickara:

When you are working with NIO sockets on Microsoft Vista you may run into a
problem where you will sometimes be able to connect to a specific host/port
combination, and sometimes the following error related to a bind exception
would be thrown:
java.net.SocketException: Invalid argument: sun.nio.ch.Net.setIntOption
...


The reason this happens is that when a connection is closed it goes into a
TIME_WAIT state. This can be checked with a tool such as netstat. Sometimes,
it can take up to a couple of minutes to get out of this state. When you try
to establish connections to the same host/port combination at a later time
you may not be able to establish this connection, because the previous
connection is still in a timeout state.

To get around this problem you need to configure your socket so that it can
reuse addresses. With this fix you can bind a socket to the specified
Address even if there is a connection in the timeout state that utilizes the
socket's address or port.

Please note that on the client side you will need to configure the setup
BEFORE you bind the socket. The code below shows how you do this for NIO.
You will need to wrap this code-fragment in the appropriate try-catch block.


SocketChannel sc = SocketChannel.open();
sc.socket().setReuseAddress(true);
sc.socket().setKeepAlive(true);
sc.configureBlocking(false);

InetSocketAddress ia = new InetSocketAddress(_hostName, _portNum);
sc.connect(ia);

You will also need to do a similar configuration on the Server side of the
socket as well.
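
Here is a sketch of what that server-side setup might look like (my addition, not from the original code; the port variable is just for illustration). The key point is to set the reuse-address option before calling bind():

// Uses java.nio.channels.ServerSocketChannel and java.net.InetSocketAddress.
ServerSocketChannel ssc = ServerSocketChannel.open();
ssc.socket().setReuseAddress(true); // must be set before bind()
ssc.socket().bind(new InetSocketAddress(_portNum));
ssc.configureBlocking(false);

Wrap this in the appropriate try-catch block, just as with the client-side fragment above.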

Wednesday, September 12, 2007

Restarting the ISCSI SAN

The ISCSI SAN periodically dies and the file systems become inaccessible. Logwatch error symptoms can look like this:

 --------------------- Kernel Begin ------------------------

WARNING: Kernel Errors Present
connection0:0: iscsi: detected conn error (1011) ...: 3 Time(s)
Buffer I/O error on device sdc1, ...: 29 Time(s)
EXT2-fs error (device sdc1): e ...: 171 Time(s)
end_request: I/O error, dev sdc, sector ...: 2421 Time(s)
lost page write due to I/O error on sdc1 ...: 29 Time(s)
sd 3:0:0:0: SCSI error: return code = 0 ...: 2421 Time(s)
sd 3:0:0:0: SCSI error: return code ueu ...: 1 Time(s)

To restart, follow these steps:

1. Log into the local NavSphere Express web utility running on the SAN.
2. Restart the SAN via NavSphere
3. Reboot the Linux server that mounts the SAN. If you don't want to reboot, try restarting the iscsi daemon and remounting the file system. Use fdisk -l to make sure that the iscsi devices are visible.

service iscsi restart
fdisk -l
mount /my/iscsi/partition

Thursday, August 09, 2007

db4o and NFS problems

Strange, unexpected errors running db4o on an NFS file system. Everything worked fine until a mysterious exception appeared:

com.db4o.ext.DatabaseFileLockedException: Database locked: '/globalhome/gateway/queueservice3.db'
at com.db4o.internal.JDK_1_4.lockFile(Unknown Source)
at com.db4o.internal.Platform4.lockFile(Unknown Source)
at com.db4o.io.RandomAccessFileAdapter.<init>(Unknown Source)
at com.db4o.io.RandomAccessFileAdapter.open(Unknown Source)
at com.db4o.internal.IoAdaptedObjectContainer.open(Unknown Source)
at com.db4o.internal.IoAdaptedObjectContainer.<init>(Unknown Source)
at com.db4o.internal.ObjectContainerFactory.openObjectContainer(Unknown Source)
at com.db4o.Db4o.openFile(Unknown Source)
at com.db4o.Db4o.openFile(Unknown Source)
at org.apache.jsp.junk_jsp._jspService(junk_jsp.java:49)


This only happens on NFS directories. Changing to a local directory, the same code works fine.

Friday, July 27, 2007

Google Maps: EGeoXML

This is a wonderful little JS that allows you to work with KML files in your JavaScript code. Also overcomes the rendering limits of KML, so you can show many more placemarks.

http://www.econym.demon.co.uk/googlemaps/egeoxml.htm

Thursday, July 19, 2007

Drupal Mac Install

I'm doing this in a unixie way rather than looking for Mac specific packages.

0. sudo -H -u root /bin/bash

1. Get Mysql and install. Put this in /usr/local/mysql. The mysql manual is pretty good, so I won't duplicate. Make a db user ("drupaloompa") and db as drupal tells you.

2. Get Apache 2 and install. You'll need DSO support, so ./configure --enable-so. I wonder why they don't make this a default. Anyway, Apache 2.2 gave an error:

configure: error: Cannot use an external APR-util with the bundled APR

I decided to punt and go back to Apache 2.0. This worked fine. Probably I have another apache server pre-installed by Steve Jobs. Looks so, so be careful that your paths point to /usr/local/apache2.

3. Get PHP 5. Configure/make/make install had one variation: ./configure failed unless I gave the full path to mysql, so I ran (in /usr/local/php-X.Y)

./configure --with-apxs2=/usr/local/apache2/bin/apxs --with-mysql=/usr/local/mysql


Noticed an unusual problem with PHP's "make install": it looked for libs in

/usr/local/mysql/lib/mysql/

but my libs were in

/usr/local/mysql/lib/

I rectified this straightforwardly (it's late) but probably this is some ./configure issue.

Checked php by creating a little file (phptest.php) with the line <? phpinfo() ?> in htdocs. This worked.

4. Copied drupal directory (after gnutarring) into htdocs. Note you want to do this:

cp -r /path/to/drupal-5.1/* /usr/local/apache/htdocs

You don't want the directory /usr/local/apache/htdocs/drupal-5.1. For some reason, my PHP files did not get parsed in this directory, but everything worked fine when I moved things to the top level htdocs directory.

5. Start apache and point to http://localhost/index.php. You will see a complaint about write access to settings.php. Do this:

chmod a+w sites/default/settings.php

Run this, use your Mysql user account and password, and you are good. Remove write permissions on settings.php.

chmod a-w sites/default/settings.php

Point your browser to http://localhost/index.php again and get going. Note that you will need to clear your browser cache if you loaded any Apache pages. Use shift-reload or just quit the browser and start a new one. In any case, http://localhost/ should show the index.php pages, not "It works" or any Apache feathers.

Wednesday, July 11, 2007

Some condor problems and solutions

As in my previous post, here were some problems I encountered along the way to getting condor installed on a set of lab PCs running linux with no shared file systems and no DNS.

* Condor complains about missing libstdc++.so.5: install the compatibility patch for your OS. See earlier post.

* You get something like

Failed to start non-blocking update to <10.237.226.81:9618>.
attempt to connect to <10.237.226.81:9618 > failed: Connection refused (connect errno = 111).
ERROR: SECMAN:2003:TCP connection to <10.237.226.81:9618> failed

This probably means you have not set HOSTALLOW_READ and HOSTALLOW_WRITE correctly. Use the IP addresses you want to connect as the values. Also be sure you have enabled the NO_DNS and DEFAULT_DOMAIN_NAME parameters in your condor_config file.

* Jobs seem stuck in "idle". This could just mean that condor is really being unobtrusive. Try changing the START, SUSPEND, etc attributes in condor_config (see previous post and link).

* Jobs only run on the machine they are submitted from. Try adding

should_transfer_files=YES
when_to_transfer_output=ON_EXIT

to your job submission script.

* Need some general debugging help: here is a nice little link
http://www.cs.wisc.edu/condor/CondorWeek2004/presentations/adesmet_admin_tutorial/#DebuggingJobs

In short, look at the log files in /home/condor/log. Here are some useful commands:

1. condor_q -better-analyze
2. condor_q -long
3. condor_status -long

#1 will tell you why your job is failing to get matched.

Beating Condor Like a Rented Buzzard

OK, trying to get Condor running on a few linux boxes on a private network with no DNS. There were lots of pitfalls that took a while to debug. Also, try searching for my del.icio.us links tagged with "condor" and "ecsu" for sites I found helpful.

* Make sure you have the libstdc++.so compatibility patch installed, since condor needs the obsolete libstdc++.so.5. See the previous post.

* Turn off your iptables firewalls (/etc/init.d/iptables stop).

* I did not find using /etc/hosts entries to be helpful. This will conflict with the NO_DNS attribute that I used below.

Get the tar.gz version of condor, since we will need to babystep our way through the complicated configuration file. Do the following as the root user. Unpack the tar, and use

./condor_install

for the interactive installation on all the machines you want to use. Pick one of these to be the master. Don't use condor_configure --install unless you know what you are doing. Answer the questions as best you can. Note you should answer "No" when asked about the shared file system if you are just using a random collection of machines with no NFS, etc.

Next, set your $CONDOR_CONFIG environment variable and then edit this file. Make sure that CONDOR_HOST points to the master node. You must change HOSTALLOW_WRITE and you may want to change HOSTALLOW_READ. I set these to IP addresses since I don't have a DNS. Don't forget to include 127.0.0.1 in the allowed list.

Next, scroll down in condor_config to find the parameters DEFAULT_DOMAIN_NAME and NO_DNS. Uncomment these, since we assume there is no DNS. The value of the first can be anything, the value of the second should be TRUE.

Still editing condor_config, cruise down to part 3 and set the attributes START=TRUE, SUSPEND=FALSE, CONTINUE=TRUE, PREEMPT=FALSE, and KILL=FALSE as described at https://www-auth.cs.wisc.edu/lists/condor-users-rc/2005-February/msg00259.shtml.
This will cause condor to submit your jobs right away.

Now, start condor with condor_master on your main condor host. This should come up in a few minutes, so use condor_status to check.

Next, from each machine in your cluster, try "telnet 10.20.30.40 9618". Replace 10.20.30.40 with the main host's IP address. 9618 is the port of the collector. If you can't even get to this port, you probably have firewall issues, so turn off iptables and check /etc/hosts.deny and /etc/hosts.allow. If you get connection refused, it is probably a condor_config issue, so make sure that you followed the steps above.

I actually found it useful to install Tomcat and modify it to run on port 9618 just to verify that other machines could reach the main host's port 9618. This tells you whether connection problems are in your network configuration or in condor's config. Be sure to turn tomcat off when you are done.

Repeat the condor installation steps for the other condor cluster members. When you bring them up, they should be added to the output of condor_status.

Tuesday, July 10, 2007

Condor on FC7

Trying to get condor running on a fresh Redhat Fedora Core 7. Got the following error while using the FC5 release from Condor:
[condor@localhost condor-6.8.5]$ ./sbin/condor_master
./sbin/condor_master: error while loading shared libraries: libstdc++.so.5: cannot open shared object file: No such file or directory

Luckily, just needed to add a compatibility library. First, used yum to install apt and synaptic:

yum install apt synaptic

Synaptic had a nice GUI but apt-cache was more my speed. I did have to update the file listings first ("reload" button on synaptic).

[root@localost ~]# apt-cache search libstdc++
You don't seem to have one or more of the needed GPG keys in your RPM database.
Importing them now...
Error importing GPG keys
compat-libstdc++-33 - Compatibility standard C++ libraries
libstdc++ - GNU Standard C++ Library
compat-libstdc++-296 - Compatibility 2.96-RH standard C++ libraries
libstdc++-devel - Header files and libraries for C++ development

[root@localhost ~]# apt-get install "compat-libstdc++-33"
You don't seem to have one or more of the needed GPG keys in your RPM database.
Importing them now...
Error importing GPG keys
Reading Package Lists... Done
Building Dependency Tree... Done
The following NEW packages will be installed:
compat-libstdc++-33 (3.2.3-61)
0 upgraded, 1 newly installed, 0 removed and 0 not upgraded.
Need to get 237kB of archives.
After unpacking 733kB of additional disk space will be used.
Get:1 http://download.fedora.redhat.com fedora/linux/releases/7/Everything/i386/os/ compat-libstdc++-33 3.2.3-61 [237kB]
Fetched 237kB in 0s (309kB/s)
Checking GPG signatures... ########################################### [100%]
Committing changes...
Preparing... ########################################### [100%]
1:compat-libstdc++-33 ########################################### [100%]
Done.

Bang, condor_master now starts. Of course, I get the classic
[condor@localhost condor-6.8.5]$ ./sbin/condor_master
ERROR "The following configuration macros appear to contain default values that must be changed before Condor will run. These macros are:
hostallow_write (found on line 215 of /home/condor/Desktop/condor-6.8.5/etc/condor_config)
" at line 223 in file condor_config.C

I'm surprised the condor people don't post more warnings here. Just edit the condor_config file and change the offending hostallow_write line (line 215 in the error above; it should be obvious from the nearby comments).

Condor Authentication Error

Got something like this below:
[username@host:~] condor_submit my_job.cmd
Submitting job(s)
ERROR: Failed to connect to local queue manager
AUTHENTICATE:1003:Failed to authenticate with any method
AUTHENTICATE:1004:Failed to authenticate using KERBEROS

Distressing, since I'm not using Kerberos or GSI in a vanilla condor installation.

Solution: I had two condor installations, probably with different versions. My
.bashrc file set CONDOR_CONFIG to point to installation #1, but I started condor_master
using binaries from installation #2. Used the same installation consistently and
everything worked.

up2date obsolete on rhel5 and fc7

Redhat has decided to go with yum instead. Just use "yum update".

Sunday, July 08, 2007

ECSU Summer School

Basic Agenda

This isn't set in stone so we can change depending on the initial discussions.

Getting started (Monday Morning):
* Everyone will create a blog if they don't have one already. Post every useful nugget of information or observation here. This is a good practice. Also, del.icio.us should be used to bookmark useful web sites. Tag everything with "ECSU". You can embed a del.icio.us badge in your blog.

* Introductions

* Expectations: what do we want to accomplish for the week? I will overview a lot of technologies, but we need to find ways to tie everything together.

Monday: Introductory/overview lecture and Condor lab
- Installing and using Condor on your linux cluster
- I will probably use some lecture material from the Condor group and Grid summer schools.
- We should identify some interesting and relevant projects for the students that would use Condor--my guess is the image analysis project mentioned below would be an excellent candidate. These could be done over several days.
- Need to make sure this installs OK on your version of Linux.

Tuesday: Web Services Lab
- I have all of this material in hand.
- Introduction to WSDL, SOAP, and REST
- May also provide some lectures on XML
- Hands on lab with Apache Axis 2
- Build workflows with Taverna

Wednesday: Globus and Portals/Gateways Lab
- Lectures on Globus basics
- Install and use Globus command line stuff
- Install and use OGCE portal
- Build a Web gateway prototype for CERSER
- Need to make sure Globus installs OK on your version of Linux

Thursday: Web 2.0 Lab
- Web 2.0 overview and tutorial
- Mashup lab: go to programmableweb.com, brainstorm ideas, and implement.

Friday: Wrap up projects

Friday, July 06, 2007

Condor Commands Short List

Following are the most frequently used condor commands.

Before running these commands, you need to set the following environment variables:

export CONDOR_CONFIG=/home/condor_user/condor_installation_directory/etc/condor_config
export PATH=$PATH:/home/condor_user/condor_installation_directory/bin
export PATH=$PATH:/home/condor_user/condor_installation_directory/sbin



(1) To start condor, use the 'condor_master' command

(2) Use condor_status to see all the machines included in the condor pool

(3) You can also use the condor_off command to kill the condor processes and the condor_on command to start them again

(4) If you have a simple job to submit, for example a submit file named "test" containing:

universe = vanilla
executable = /bin/ls
output = test.out
error = test.error
log = test.log
queue

you can run "condor_submit test" command

(5) Check the status of job using "condor_q" command

(6) To remove job from queue
use "condor_rm JobID"

(7) you can permenently remove job from local machine using "condor_rm -forcex jobID" command

Thursday, July 05, 2007

Linux ISCSI with EMC2 AX150 SAN

Finally got my new external SAN file system up and running and attached to the linux server farm using ISCSI (i.e. SCSI over a standard network interface--all devices are attached to the same network switch, but otherwise no special equipment needed).

The Problems
1. EMC^2 needs to be told they are missing an equal sign.
2. Their documentation is terrible.
3. Their SAN management software doesn't yet work on RHEL5, but my version of RHEL4 did not support the linux ISCSI initiator software.

Resolution
First, review the ISCSI terminology a bit. See the links from my previous post for nice reviews. In short, your SAN device is your ISCSI target. You will mount it as a SCSI device over a network connection to a linux server (or Windows 2003 if you prefer), where it can be treated like an ordinary file system. The linux host runs the ISCSI initiator and acts as a client, if you are more comfortable with client-server terminology.

The basic steps (besides powering up the SAN) are as follows:
1. Install iscsi initiator software (use up2date or yum--"yum iscsi-initiator") on your linux server. You probably want to update your kernel as well. This was my first problem, since up2date on RHEL4 refused to update my kernel (linux 2.6.9-5), and attempting to start the ISCSI daemon on linux failed because of missing kernel modules. So I upgraded to RHEL5. This whole set of sub-steps was wildly unclear in the EMC2 documentation.

2. Set up your SAN's network configuration from your linux server. EMC2's tools (Navisphere, Powerpath) worked fine on RHEL4 but not RHEL5, so I wound up doing this step from a second linux server that still ran RHEL4. I just used IP addresses from our pool.

3. Configure your SAN's disk pool, RAID, etc. EMC2 provides a nice web tool, Navisphere Express, to do this.

4. Fire up the ISCSI client on your linux server. See the README at http://www.open-iscsi.org for everything you need, although it took a while for me to digest it. At the end, run "fdisk -l" and you should see your new /dev/sdX devices. If you don't, but all the iscsiadm commands are working fine, it could mean that you completed step #2 but not #3.

5. Partition and format your devices in the usual way with fdisk, etc. See for example http://tldp.org/HOWTO/Partition. Mount the file systems and you should be in business.





Wednesday, July 04, 2007

Creating Bootable CDs with a Mac

The situation: I was making some installation CDs of RHEL 5 from downloaded .iso files on my Mac workstation for our servers. There are a few different ways to burn CDs on the Mac, so my first efforts made readable file copies but not bootable CDs.

The solution: use the Disk Utility (under Applications/Utility). Drag your .iso file into the DU's left-hand navigation area and then click the (now enabled) "burn" icon on the top of the DU.

Here's your 4th of July bonus: http://hypem.com/search/ray%20charles/1/

Monday, July 02, 2007

Useful RedHat ISCSI Links

For the attic:

http://www.cuddletech.com/articles/iscsi/index.html
http://mail.digicola.com/wiki/index.php?title=User:Martin:iSCSI
http://fedoranews.org/mediawiki/index.php/Going_Enterprise_-_setup_your_FC4_iSCSI_target_in_5_minutes
http://kbase.redhat.com/faq/FAQ_85_7905.shtm

The open-iscsi has a nice README: http://www.open-iscsi.org/

Here is a good doc on linux partitioning: http://tldp.org/HOWTO/Partition/index.html


I guess I should tag these on del.icio.us or something but this is easier.

JSF Managed Bean Properties and Constructors

I think I keep discovering and forgetting this: don't initialize anything in a managed bean's constructor if you want to use property values injected from faces-config.xml.

For example,

public SimplexBean() throws Exception {
simplexService = new SimpleXServiceServiceLocator()
.getSimpleXExec(new URL(simpleXServiceUrl));

System.out.println("Simplex Bean Created");
}

will NOT use the value for simpleXServiceUrl from faces-config.xml. It will use any value set in the code instead.

To work around this, just make a little method such as

private void initSimplexService() throws Exception {
simplexService = new SimpleXServiceServiceLocator()
.getSimpleXExec(new URL(simpleXServiceUrl));
}

and call it before you invoke the web service. I assume on the scale of things, this is a cheap call, so no need to put it in the constructor even if it is redundant.

Make sure of course that you have get/setSimpleXServiceUrl() methods.
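
Putting it together, the relevant parts of the bean might look something like this (a sketch using the names above; the "submit" action method is just a placeholder for wherever you actually invoke the service):

// Property injected by JSF from the managed-property entry in faces-config.xml.
private String simpleXServiceUrl;

public String getSimpleXServiceUrl() { return simpleXServiceUrl; }
public void setSimpleXServiceUrl(String simpleXServiceUrl) {
    this.simpleXServiceUrl = simpleXServiceUrl;
}

// Initialize the service stub lazily, after JSF has injected the URL property.
public String submit() throws Exception {
    initSimplexService();
    // ... invoke the web service via simplexService here ...
    return "success";
}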

Thursday, June 28, 2007

Misc Mac X11 Stuff

I like iTerm, but I have yet to waste enough time figuring out how to get display forwarding for remote logins to work, so when logging into a remote machine, I drop back to xterms. Here are a couple of tips:

1. From the xterm command line, use "ssh -Y yourname@your.remote.host" for easy X11 forwarding.

2. The default window is black and white and has tiny fonts. Create a .Xdefaults file in your home directory and give it some reasonable values, such as here: http://lists.apple.com/archives/x11-users/2005/May/msg00112.html

Monday, June 11, 2007

Web 2.0 Tutorial Slides and Survey Doc

Some time ago, I promised to put my Web 2.0 musings for science gateways on this blog. These actually turned into a tutorial and a short book chapter, linked below.

Web 2.0 Tutorial slides are here: http://grids.ucs.indiana.edu/ptliupages/presentations/Web20Tutorial_CTS.ppt

My related survey document is here:
http://grids.ucs.indiana.edu/ptliupages/publications/CIWeb20Chapter.doc

If I was in charge, Web 2.0 would be called Web 2.0 Beta.

Finally, anyone who thinks the Semantic Web is "Web 3.0" should think again.

Importing and Exporting Thunderbird Message Filters

Here's a nice little tool for importing and exporting. Files are text, so it works between Windows and Mac.

https://addons.mozilla.org/en-US/firefox/addon/2474

Note you don't want to click the install button, since you are installing in Thunderbird and not Firefox. Right click the link, save to desktop, and then install as an extension/add-on.

Friday, June 01, 2007

TeraGrid Single Sign On

I don't understand why the TeraGrid brain trust doesn't have this more clearly documented. You'd think they'd also want to automate all of the following steps.

If you get a Teragrid roaming account, you will get access to ~ a dozen clusters and supercomputers. You'll also get several different usernames and passwords for each site. It's a big mess.

Luckily, you can use your Grid credentials as a single sign-on (just like Kerberos!). This will save you from remembering 10 different usernames and passwords. Here are the steps:
  1. Login to one of the teragrid machines (I used one at SDSC) and create a grid public and private key pair. Use the "cacl" command.
  2. Copy your key pair (should be in your $HOME/.globus directory) to all other machines you plan to use. This can include your Linux/Mac desktop. If you use Windows, try Gregor's Java CoG kit.
  3. Run the gx-map command on all the machines you plan to use. This will automatically update the /etc/grid-security/grid-mapfile and add your local username and global DN. The update is probably done with a cron script, so it may take an hour or so.
You're done. Well, you may want to install Globus on your local desktop machine. This is a little heavy (you don't need all the services, probably), but it gives you access to all the command line clients, including GSI-enabled ssh.

GSI-enabled SSH is where the magic happens. This should be located somewhere under your globus installation, like $GLOBUS_LOCATION/bin/ssh.d/ssh. Do the following shell commands:
  1. export GLOBUS_LOCATION=/path/to/your/globus
  2. source $GLOBUS_LOCATION/etc/globus-user-env.sh
  3. grid-proxy-init
  4. which ssh
Step #3 will get you a grid proxy credential that is good for a few hours (just like Kerberos!). Step #4 is a check to make sure that you are using the GSI enabled ssh. If not, modify your $PATH or use the full path as shown above.

You can now ssh to any machine in the TeraGrid as long as you have gx-mapped your self into the grid-mapfile. The grid-mapfile will take care of mapping your global DN identity (part of your grid keys) to your account name on the local machine.

Tuesday, May 29, 2007

Disabling and Re-Enabling Bugzilla

Posted here since I am bound to forget this again:

To disable your bugzilla, point your browser to http://yourhost/bugzilla/editparams.cgi and log in as administrator. Next, put some text or html in the "shutdownhtml" field, preferably something informative.

To re-enable your bugzilla, go to the same URL above, login, and remove any text/html in the shutdownhtml field.

Thursday, May 03, 2007

Example Google Map with Clickable Polygons

Thanks to Pamela Fox for pointing me to this: http://www.veamadrid.com/googlemaps/recorridoscallejeros.htm.

View source and take a look at the javascript to see how to make polygons listen for events. I wonder why this is not in the official documentation.

Sunday, April 22, 2007

Interesting Amazon S3+EC2+SQS Article

See this one: http://developer.amazonwebservices.com/connect/entry.jspa?entryID=691&ref=featured

Awful title but interesting. Somebody should show this to the NSF TeraGrid people.

Thursday, March 29, 2007

db4o Java Bean Database

The db4o http://www.db4o.com/ project makes a very simple, light-weight object database. It's free, too. Maybe too free, as they use GPL. But see comment below. Obviously this sort of thing is great if you do a lot of POJO development (say, with JSF) and need some persistent storage.

The downloadable documentation is pretty good, as is this article: http://www-128.ibm.com/developerworks/java/library/j-db4o1.html

Two issues with the docs. First, their examples always assume that your bean has a complete constructor that sets all fields. You may not wish to do this, so you can just use the appropriate set method instead.

Assume you have a MyBean class with the fields beanName, beanCreationDate, beanCreator, and so on. Then:

MyBean myBean=new MyBean();
myBean.setBeanName("Junk");
ObjectSet results=db.get(myBean);

The returned results set will have all the MyBean instances that you have stored with the name "Junk".
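
For completeness, a minimal version of the MyBean class assumed in these examples might look like the sketch below (the field types are guesses from the names; db4o simply stores whatever fields the object actually has).

import java.util.Date;

public class MyBean {
    private String beanName;
    private Date beanCreationDate;
    private String beanCreator;

    public String getBeanName() { return beanName; }
    public void setBeanName(String beanName) { this.beanName = beanName; }

    public Date getBeanCreationDate() { return beanCreationDate; }
    public void setBeanCreationDate(Date beanCreationDate) { this.beanCreationDate = beanCreationDate; }

    public String getBeanCreator() { return beanCreator; }
    public void setBeanCreator(String beanCreator) { this.beanCreator = beanCreator; }
}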

Second, you can only delete specific bean instances. That is, the corresponding db.delete(myBean) method for the above code will not delete all the matching beans.

//Won't work if your bean has multiple fields.
MyBean myBean=new MyBean();
myBean.setBeanName("Junk");
//Won't work
db.delete(myBean);

To do this, you have to get the specific beans that you matched in the get() and then delete them one by one. For example,

MyBean myBean=new MyBean();
myBean.setBeanName("Junk");
ObjectSet results=db.get(myBean);
//Reassign the myBean.
while(results.hasNext()) {
myBean=(MyBean)results.next();
db.delete(myBean);
}

Denying Anonymous Reading in Media Wiki

See http://meta.wikimedia.org/wiki/Access_Restrictions. Don't forget to add the white-list pages to specify exceptions. Otherwise, no one will be able to read the login page. The following blog had a full example: http://dimer.tamu.edu/simplog/archive.php?blogid=7&pid=4015

In summary, you need to edit LocalSettings.php with settings like those below.

$wgGroupPermissions['user' ]['read'] = true;


$wgGroupPermissions['*' ]['createaccount'] = false;
$wgGroupPermissions['*' ]['read'] = false;
$wgGroupPermissions['*' ]['edit'] = false;
$wgWhitelistRead=array("Special:Userlogin");


One other note on creating accounts: to create accounts for other users, log in with sysop privileges (that is, login using the account that you created when you set up the wiki) and then point your browser to index.php/Special:Userlogin. This should give you the forms for new account creation.

Monday, March 19, 2007

Resizing Linux File Systems

I hadn't done this in a while and got some new linux servers, so I'm pleased to learn that it has gotten much easier to change disk partitions with LVM. I'm doing the following on RHEL ES 4.

I'll assume logical groups and volumes have already been created, say by Dell. They'll probably do this because of the immutable physical law that it is easier to get bigger than to get smaller. If you are happy with these predefined logical volumes, you'll just need to increase the space on the existing file systems.

All of the following commands should be done as root. I'll assume your PATH is reasonable, but if not you can probably find them in /sbin/ or /usr/sbin.

  1. fdisk -l: This will show you the available devices and partitions. Use this to decide how much to increase any of the existing files.
  2. df -k: shows existing file systems. This will also show you the names of any logical volumes.
  3. vgdisplay: this will display info on your volume groups (i.e. see how much size is available).
  4. pvcreate /dev/sba: use this to initialize a new partition. You will need to do this if you have added new disks to your RAID. Get the actual device name (/dev/xyz) from the df command.
  5. lvextend -L100G /dev/VolGroup00/LogVol00: this will set the logical volume 100 GB in size. You can also use -L+10G to increase the size further. This will take a few moments to run.
  6. Use lvreduce to reduce logical volume sizes. Use this if you accidentally extend the wrong volume in the previous step (doh!).
  7. ext2online /dev/VolGroup00/LogVol00: increases the actual file system size (but see below).
The last step can depend on your file system type (mine was ext3). Look in /etc/fstab to see. This step caused some uncertainty at first, since many pages that I googled used resize2fs, but this was not in my path and so made me suspect this info may be obsolete or not appropriate for RHEL. http://kbase.redhat.com/faq/FAQ_96_4842.shtm was helpful.

[Looks like resize2fs is back in RHEL5 and Fedora Core 7, and ext2online's functionality has been incorporated in it. You just need to replace "ext2online" with "resize2fs".]

That's it. You probably will want to verify that everything worked, so do the following:
  1. umount /dev/VolGroup00/LogVol01: unmounts the volume.
  2. e2fsck /dev/VolGroup00/LogVol01: verifies the file system.
  3. mount /dev/VolGroup00/LogVol01: brings the volume and associated file system back up.
  4. df: verify the file system is mounted as expected.

Thursday, March 15, 2007

Java Libraries for Creating Atom Feeds

We're using Atomsphere to generate atom feeds from databases as a simpler version of Web services. So far, so good. I will post some examples later. See also Carol's notes here.

The above link is for their SourceForge downloads. The project page is here, although it looks like they are having bad gateway problems this morning.

Tuesday, March 13, 2007

Preventing new lines in JSP's generated servlets

Good tip from Dr. Chuck: don't use the following for your JSPs:

<%@ page import="cgl.gis.wfs.webservice.*"%>
<%
String beginDate=request.getParameter("beginDate");
...
out.println("stuff");
%>

The generated servlet will include

out.println('\n');

which will put a blank line at the top of your output. This can cause problems if you are using JSP to generate (for example) XML feeds, since the XML header line must be the first line.

To fix, use instead

<%@ page import="cgl.gis.wfs.webservice.*"%><%
String beginDate=request.getParameter("beginDate");
...
out.println("stuff");
%>

Sakai JSR 168 Tests

* svn co https://source.sakaiproject.org/svn/sakai/trunk sakai

* You must create a build.properties file in your $HOME:

[gateway@gridfarm008 WebFeatureService_Deploy]$ more ~/build.properties
maven.repo.remote = http://source.sakaiproject.org/mirrors/ibiblio/maven/,http://source.sakaiproject.org/maven/,http://www.ibiblio.org/maven,http://horde.planetmirror.com/pub/maven/,http://www.collab-ogce.org/maven/cog-common/

maven.tomcat.home = /home/gateway/SakaiJsr168/apache-tomcat-5.5.20/

maven.tomcat.version.major=5

maven.compile.source= 1.5
maven.compile.target= 1.5

maven.test.skip=true

* For a fast build, use "~/maven-1.0.2/bin/maven bld dpl" the first time to avoid downloading unnecessary jars.

* To run sakai's maven targets in any subdirectory, use

~/maven-1.0.2/bin/maven plugin:download -DgroupId=sakaiproject -DartifactId=sakai -Dversion=2.2



* To completely clean everything out,

~/maven-1.0.2/bin/maven cln

Note the usual "maven clean" will not use the reactor. I noticed this because we had some problems with
incompatible Java versions (1.6 versus 1.5, trying to use 1.6 classes with a 1.5 compiler).

* Or just use "find . -name target|xargs rm -r" (this is faster, since you won't have to check remote repositories
for jars that don't exist).

* Don't use Java 1.4. Use Java 1.5. Java 1.6 may work or not, but be sure to be consistent--Java 1.5 compilers
will not understand java 1.6 classes.

* After installing,
[gateway@gridfarm008 SakaiJsr168]$ mkdir apache-tomcat-5.5.20/sakai/
[gateway@gridfarm008 SakaiJsr168]$ cp sakai/reference/demo/sakai.properties apache-tomcat-5.5.20/sakai/

* Be sure to use extra memory:
export CATALINA_OPTS="-server -Xms512m -Xmx512m -XX:NewSize=128m -XX:PermSize=16m -XX:MaxPermSize=128m -XX:+UseConcMarkSweepGC -XX:+UseParNewGC"

* Point browser to http://.../portal after it starts (tail -f your catalina.out file to track--could take a few
seconds).

* Make your web.xml for your portlet:
<?xml version="1.0" encoding="ISO-8859-1"?>

<!DOCTYPE web-app
PUBLIC "-//Sun Microsystems, Inc.//DTD Web Application 2.3//EN"
"http://java.sun.com/dtd/web-app_2_3.dtd">

<web-app>
<display-name>IFrame Portlet</display-name>
<description>
This application is a portlet. It can not be used outside a portal.
</description>


<servlet>
<servlet-name>PortletServlet</servlet-name>
<servlet-class>org.apache.pluto.core.PortletServlet</servlet-class>
<!-- This param value should match the value of the <portlet-name> in
portlet.xml
-->
<init-param>
<param-name>portlet-name</param-name>
<param-value>IFramePortlet</param-value>
</init-param>
<load-on-startup>1</load-on-startup>
</servlet>

<servlet-mapping>
<!-- Note the servlet-name must match the servlet-name param above -->
<servlet-name>PortletServlet</servlet-name>
<url-pattern>/PlutoInvoker/IFramePortlet</url-pattern>
</servlet-mapping>

<welcome-file-list>
<welcome-file>index.html</welcome-file>
</welcome-file-list>

</web-app>
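
For reference, the portlet class that this descriptor ultimately points at can be very small. Here is a minimal, hypothetical JSR 168 sketch (the class name and output below are made up; the only real requirements are that the class extend GenericPortlet and that the portlet-name values line up across web.xml and portlet.xml):

//Hypothetical minimal JSR 168 portlet, not the actual IFrame portlet code.
import java.io.IOException;
import java.io.PrintWriter;

import javax.portlet.GenericPortlet;
import javax.portlet.PortletException;
import javax.portlet.RenderRequest;
import javax.portlet.RenderResponse;

public class IFramePortlet extends GenericPortlet {
    //doView is called when the portlet is rendered in "view" mode.
    protected void doView(RenderRequest request, RenderResponse response)
            throws PortletException, IOException {
        response.setContentType("text/html");
        PrintWriter out = response.getWriter();
        //A real IFrame portlet would emit an <iframe> pointing at a configured URL.
        out.println("<p>Hello from a JSR 168 portlet running in Sakai.</p>");
    }
}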

* To add portlet,
1. Click "Worksite setup" and create a new workspace
2. In the workspace you just created, add the portlet using "Site Info"

* Tested: Velocity-based Iframe portlet (worked) and JSF-bridges based Google Map portlet (also worked). Also tested
the OGCE myproxy portlet and found a bug in the Sakai bridge (it accidentally called the portlet twice, which caused
problems since the portlet's state had been flushed). Turned out to be an easy fix.

Tuesday, March 06, 2007

Web Desktop Master List

From Wikipedia, linked here so that I can remember it:

http://en.wikipedia.org/wiki/Web_desktop

Thursday, February 22, 2007

Reading Property Files with Axis 1.x

From the past: here is a little code for loading a config.properties file into Axis 1.x. The first branch is useful for debugging when you are running things outside the servlet container (i.e. firing from the command line rather than inside Tomcat). The second branch shows how to do this from within the servlet container.

//Needed to get the ServletContext to read the properties.
import org.apache.axis.MessageContext;
import org.apache.axis.transport.http.HTTPConstants;
import javax.servlet.ServletContext;
import javax.servlet.http.HttpServlet;

//Standard classes for the properties and file handling.
import java.util.Properties;
import java.io.FileInputStream;

...

//Do this if running from the command line.
if(useClassLoader) {
System.out.println("Using classloader");
//This is useful for command line clients but does not work
//inside Tomcat.
ClassLoader loader=ClassLoader.getSystemClassLoader();
properties=new Properties();

//This works if you are using the classloader but not inside
//Tomcat.
properties.load(loader.getResourceAsStream("geofestconfig.properties"));
}
else {
//Extract the Servlet Context
System.out.println("Using Servlet Context");
MessageContext msgC=MessageContext.getCurrentContext();
ServletContext
context=((HttpServlet)msgC.getProperty(HTTPConstants.MC_HTTP_SERVLET))
.getServletContext();

String propertyFile=context.getRealPath("/")
+"/WEB-INF/classes/config.properties";
System.out.println("Prop file location "+propertyFile);

properties=new Properties();
properties.load(new
FileInputStream(propertyFile));
}

serverUrl=properties.getProperty("service.url");
baseWorkDir=properties.getProperty("base.workdir");
baseDestDir=properties.getProperty("base.dest.dir");
....
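
For completeness, the config.properties file that this code reads is just a standard Java properties file. A hypothetical example (only the key names come from the code above; the values are placeholders):

# Hypothetical config.properties for the snippet above -- values are placeholders.
service.url=http://some.host:8080/axis/services/SomeService
base.workdir=/tmp/workdir
base.dest.dir=/tmp/output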

Wednesday, February 21, 2007

Aligning JSF Panel Grids

Found this at http://forum.java.sun.com/thread.jspa?threadID=736808&messageID=4233027 but the answer was a little mangled.

Suppose you want to put two tables of different sizes side-by-side in two columns of a containing table. The JSF you need is:
<%@ taglib uri="http://java.sun.com/jsf/core" prefix="f"%>
<%@ taglib uri="http://java.sun.com/jsf/html" prefix="h"%>

<style>
.alignTop {
vertical-align:top;
}
</style>

<f:view>
<h:panelGrid columns="2" columnClasses="alignTop, alignTop">
<h:panelGrid columns="3" >

<h:outputText value="1"/>
<h:outputText value="2"/>
<h:outputText value="3"/>
<h:outputText value="4"/>
<h:outputText value="5"/>
<h:outputText value="6"/>

</h:panelGrid>

<h:panelGrid columns="3">

<h:outputText value="1"/>
<h:outputText value="2"/>
<h:outputText value="3"/>
<h:outputText value="4"/>
<h:outputText value="5"/>
<h:outputText value="6"/>
<h:outputText value="7"/>
<h:outputText value="8"/>
<h:outputText value="9"/>

</h:panelGrid>

</h:panelGrid>
</f:view>

That is, note you need two "alignTop" strings separated by a comma for the value of the columnClasses attribute. If you have only one "alignTop" it will only apply to the first column. To see this, switch the two inner panel grids.

Probably there is a better way to do this.

Useful Tomcat JVM Settings

Dr. Chuck recommends using

JAVA_OPTS='-Xmx512m -Xms512m -XX:PermSize=16m -XX:MaxPermSize=128m -XX:NewSize=128m -XX:+UseConcMarkSweepGC -XX:+UseParNewGC' ; export JAVA_OPTS


See http://www.dr-chuck.com/csev-blog/000222.html.

You can also export these as $CATALINA_OPTS.

Wednesday, February 14, 2007

Some Notes on Axis2 Version 1.1

This is probably the first in a series.

* See http://ws.apache.org/axis2/ of course. Get the code and unpack.

* To run inside Tomcat, go to axis2-1.1.1/webapp and run "ant create.war" (you need ant). Then get the war from axis2-1.1.1/dist and drop into Tomcat's webapp directory.

* I'll start by just summarizing the documentation, so these are my "cut to the chase" notes.

* Start with a POJO service like the one below.

package pojo.magic;
public class PojoBean {
String name="Jojomojo";
public PojoBean() {
}
public String getName() {
return name;
}
public void setName(String name) {
this.name=name;
}
}

* To deploy this service by hand, you must do the following steps.
1. Compile it (use javac -d . PojoBean.java).
2. Copy the resulting directory (pojo) to the webapps/axis2/WEB-INF/services.
3. Mkdir webapps/axis2/WEB-INF/services/pojo/META-INF
4. Create a services.xml file with your service metadata in the META-INF directory you just created.

* Your services.xml file will look like this:

<service name="PojoService" scope="application">
<description>
Sample POJO service
</description>
<messageReceivers>
<messageReceiver
mep="http://www.w3.org/2004/08/wsdl/in-only"
class="org.apache.axis2.rpc.receivers.RPCInOnlyMessageReceiver"/>
<messageReceiver
mep="http://www.w3.org/2004/08/wsdl/in-out"
class="org.apache.axis2.rpc.receivers.RPCMessageReceiver"/>
</messageReceivers>
<parameter name="ServiceClass">pojo.magic.PojoBean</parameter>
</service>

* I created this file BY HAND. Why, oh why, do they not have a tool to do this? Note also that you will need to restart this webapp (or the whole Tomcat server) if you make a mistake writing the XML or put anything in the wrong place. Apparently there's no more JWS.

* Also note you'd better not get confused (as I initially did) about the format for this file. There seem to be several different versions of services.xml, depending on how you develop your service--that is, if you build your service with Axis2's AXIOM classes (see below), then your services.xml will be completely different.

* You can also make .aar files for your services. In the above example, this would just be the contents of the pojo directory that I copied into WEB-INF/services.

* Take a quick look at what you have wrought:

- WSDL is here: http://localhost:8080/axis2/services/PojoService?wsdl
- Schema is here: http://localhost:8080/axis2/services/PojoService?xsd
- Invoke REST: http://localhost:8080/axis2/rest/PojoService/getName

Examine the WSDL. You will note that they explicitly define messages using an XML schema (the ?xsd response) even for this very simple service, which only gets/sets with strings.

* Note of course the limitations of my POJO: I only have String I/O parameters, which map easily to
XSD-defined types. I did not try to send/receive structured objects (i.e. other JavaBeans), and of course I did not try to get/set arbitrary Java objects.

* It is interesting that they support REST as well as WSDL/SOAP style invocation by default. For the above service, Axis is definitely a sledgehammer, since you could easily make the REST version of this in two seconds with a JSP page. And for more complicated inputs, REST would be really difficult.
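
* To check the service from Java rather than the browser, a small RPC client will do. Here is a minimal sketch using Axis2's RPCServiceClient. I'm assuming the endpoint URL shown above, and the target namespace is a guess (Axis2 derives it from the package name, so check the generated WSDL for the actual value):

//Minimal Axis2 1.1 client sketch for the PojoService deployed above.
//The namespace below is an assumption--verify it against the ?wsdl response.
import javax.xml.namespace.QName;

import org.apache.axis2.addressing.EndpointReference;
import org.apache.axis2.client.Options;
import org.apache.axis2.rpc.client.RPCServiceClient;

public class PojoServiceClient {
    public static void main(String[] args) throws Exception {
        RPCServiceClient client = new RPCServiceClient();
        Options options = client.getOptions();
        options.setTo(new EndpointReference(
                "http://localhost:8080/axis2/services/PojoService"));

        //Operation name matches the POJO method; namespace is taken from the WSDL.
        QName getName = new QName("http://magic.pojo", "getName");

        Object[] response = client.invokeBlocking(
                getName, new Object[] {}, new Class[] { String.class });
        System.out.println("getName returned: " + response[0]);
    }
}

Compile and run this with the Axis2 jars from the lib directory on your classpath.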

Using AXIOM

* This would be really overkill for our simple little POJO.

* Presumably this is useful if you need to send more complicated structured messages (which need to be serialized as XML). But you don't really want to do this if you can use XML Beans instead. That is, use the tool that specializes in serialization/deserialization of XML<->Java.

Tuesday, February 13, 2007

Notes on Amazon S3 File Services

Getting Started

* The best thing to do is go straight to the end and download the code, since the code snippets in the tutorial are not actual working programs. Go through the program s3Driver.java and then go back and read the tutorial:

http://docs.amazonwebservices.com/AmazonS3/2006-03-01/gsg/?ref=get-started

* Then read the technical documentation,

http://docs.amazonwebservices.com/AmazonS3/2006-03-01/

It is all pretty simple stuff--all the sophistication is on the server-side, I'm sure.

Building a Client

* Interestingly, (for Java) there is no jar to download. Everything works with standard Java classes if you use the REST version of the service. If you use the WSDL, you will need jars for your favorite WSDL2Java tool (e.g. Axis).

* Java 1.5 worked fine, but 1.4.2 gave a compilation error. The readme has almost no instructions on how to compile and run. To build and run on Linux:
  1. Unpack it and cd into s3-example-libraries/java.
  2. Edit both S3Driver.java and S3Test.java to use your access ID key and secret key.
  3. Compile with find . -name "*.java"| xargs javac -d .
  4. Run tests with "java -cp . S3Test" and then run the example with "java -cp . S3Driver"

This will compile various helper classes in the com.amazon.s3 package. These are all relatively transparent wrappers around lower level but standard HTTP operations (PUT, GET, DELETE), request parameters, and custom header fields, so you can easily invent your own API if you don't like Amazon's client code.

* Some basic concepts:
- To get started, you need to generate an Access ID key and a Secret Access Key.
- Your access id key is used to construct unique URL names for you.
- Files are called "objects" and named with keys.
- Files are stored in "buckets".
- You can name the files and buckets however you please.
- You can have 100 buckets, but the number of objects in a bucket is unlimited.

* Both buckets and s3objects can be mapped to local Java objects. The s3object is just data and arbitrary metadata (stored as a Java Map of name/value pairs). The bucket has more limited metadata (name and creation date). Note again this is all serialized as HTTP, so the local programming language objects are just there as a programming convenience.

* Buckets can't contain other buckets, so to mimic a directory structure you need to come up with some sort of key naming convention (i.e. try naming /mydir1/mydir2/file as the key mydir1.mydir2.file).

* In the REST version of things, everything is done by sending an HTTP command: PUT, GET, DELETE. You then write the contents directly to the remote resource using standard HTTP transfer.

* Security is handled by a HTTP request property called "Authorization". This is just a string of
the form

"AWS "+"[your-access-key-id]"+":"+"[signed-canonical-string]".

The canonical string is a sum of all the "interesting" Amazon headers that are required for a particular communication. This is then signed by the client program using the client's secret key. Amazon also has a copy of this secret key and can verify the authenticity of the request.
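
Here is a rough sketch of that signing step in plain Java (javax.crypto plus the standard library Base64 encoder). The canonical string below is just an illustrative stand-in--the exact layout of verb, headers, date, and resource is spelled out in the S3 documentation--and the keys are obviously placeholders:

//Sketch of S3 request signing: HMAC-SHA1 over the canonical string, Base64 encoded.
//The canonical string format and both keys here are placeholders--see the S3 docs for the real layout.
import java.util.Base64;

import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class S3SignatureSketch {
    public static void main(String[] args) throws Exception {
        String accessKeyId = "ABC123DEF456";              // public identifier
        String secretAccessKey = "your-secret-key-here";  // shared secret, never sent on the wire

        //Illustrative canonical string: HTTP verb, empty MD5/Content-Type, date, and resource.
        String canonicalString = "GET\n\n\nTue, 13 Feb 2007 12:00:00 GMT\n/my-bucket/test-file-key";

        //Sign the canonical string with HMAC-SHA1 using the secret key.
        Mac hmac = Mac.getInstance("HmacSHA1");
        hmac.init(new SecretKeySpec(secretAccessKey.getBytes("UTF-8"), "HmacSHA1"));
        String signature = Base64.getEncoder()
                .encodeToString(hmac.doFinal(canonicalString.getBytes("UTF-8")));

        //This is the value of the Authorization HTTP header described above.
        System.out.println("AWS " + accessKeyId + ":" + signature);
    }
}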

* Your files will be associated with the URL

https://s3.amazonaws.com/[your-access-key-id]-[bucket-name]/

That is, if your key is "ABC123DEF456" and you create a bucket called "my-bucket" and you create a file object called "test-file-key", then your files will be in the URL

https://s3.amazonaws.com/ABC123DEF456-my-bucket/test-file-key

* By default, your file will be private, so even if you know the bucket and key name, you won't be able to retrieve the file without also including a signed request. This URL will look something like this:

https://s3.amazonaws.com:443/ABC123DEF456-test-bucket/test-key?Signature=xelrjecv09dj&AWSAccessKeyId=ABC123DEF456

* Also note that this URL can be reconstructed entirely on the client side without any communication to the server--all you need is to know the name of the bucket and the object key and have access to your Secret Key. So even though the URL looks somewhat random, it is not, and no communication is required between the client and Amazon S3 to create this.

* Note also that this URL is in no way tied to the client that has the secret key. I could send it in
email or post it to a blog and allow anyone to download the contents. It is possible I suppose to actually guess this URL also, but note that guessing one file URL this way would not help you guess another, since the Signature is a hash of the file name and other parameters--a file with a very similar name would have a very different hash value.

* You can also set access controls on your file. Amazon does this with a special HTTP header,
x-amz-acl. To make the file publicly readable, you put the magic string "public-read" as the value of this header field.

* If your file is public (that is, can be read anonymously), then the URL

https://s3.amazonaws.com/ABC123DEF456-my-bucket/test-file-key-public

is all you need to retrieve it.

Higher Level Operations

* Error Handling: The REST version relies almost entirely on HTTP response codes. As noted in the tutorial, the provided REST client classes have minimal error handling, so this is something that would need to be beefed up.

For more information, see the tech docs:

http://docs.amazonwebservices.com/AmazonS3/2006-03-01/

It is not entirely clear that the REST operations give you as much error information as the SOAP faults. Need to see if REST has additional error messages besides the standard HTTP error codes.

* Fine-Grained Access Control: Both objects and their container buckets can have ACLs. The ACLs themselves are expressed in XML and can be sent as stringified XML. For more information on this, you have to go to the full developer documentation:

http://docs.amazonwebservices.com/AmazonS3/2006-03-01/


* Objects can have arbitrary metadata in the form of name/value pairs. These are sent as "x-amz-meta-" HTTP headers. If you want something more complicated (say, structured values or RDF triplets) you will have to invent your own layer over the top of this.
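
* Pulling the above pieces together, a hypothetical PUT with an ACL and a metadata header looks like the sketch below, using only standard HttpURLConnection. The URL, the Authorization value (computed as in the HMAC-SHA1 sketch above), and the metadata are all placeholders:

//Sketch of an S3 REST PUT with ACL and metadata headers, using only standard Java.
//URL, Authorization value, and metadata are placeholders.
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class S3PutSketch {
    public static void main(String[] args) throws Exception {
        URL url = new URL("https://s3.amazonaws.com/ABC123DEF456-my-bucket/test-file-key");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("PUT");
        conn.setDoOutput(true);

        //Signed Authorization header, computed as in the signing sketch above.
        //A real request also needs a date header that matches the signed canonical string.
        conn.setRequestProperty("Authorization", "AWS ABC123DEF456:signature-goes-here");

        //Make the object publicly readable and attach a custom metadata pair.
        conn.setRequestProperty("x-amz-acl", "public-read");
        conn.setRequestProperty("x-amz-meta-author", "some-user");

        //Write the object contents directly to the remote resource.
        OutputStream out = conn.getOutputStream();
        out.write("hello from the S3 REST interface".getBytes("UTF-8"));
        out.close();

        System.out.println("HTTP response code: " + conn.getResponseCode());
    }
}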

Security Musings

* Amazon provides X509 certs optionally, but all Amazon services work with Access Key identifiers. The Access Key is public and is transmitted over the wire as an identification mechanism. You verify your identity by signing each request with your Secret Access Key. This is basically a shared secret, since Amazon also has a copy of this key. Based on the incoming Access Key Identifier, they can look up the associated secret key and verify that the message indeed comes from the proper person and was not tampered with by reproducing the message hash.

* You use your secret key to sign all commands that you send to the S3 service. This authorizes you to write to a particular file space, for example.

* This is of course security 101, but I wonder how they handle compromised keys. These sorts of non-repudiation issues are the bane of elegant security implementations, so presumably they have some sort of offline method for resolving these issues. I note that you have to give Amazon your credit card to get a key pair in the first place, so I suppose on the user side you could always dispute charges.

* Imagine, for example, that someone got access to your secret key and uploaded lots of copies of large home movies. This is not illegal material (like pirated movies or software), but it will cost you money. So how does Amazon handle this situation? Or imagine that I uploaded lots of stuff and then claimed my key was stolen in order to get out of paying. How does Amazon handle this?

Monday, February 12, 2007

A Quick Look at One Click Hosting

There are an enormous number of so-called one-click web hosting services, some offering (unbelievably) unlimited storage for an unlimited time for free. For a nice summary table, see of course, Wikipedia: http://en.wikipedia.org/wiki/Comparison_of_one-click_hosters.

Here are a few of the best-featured free services:
  • DivShare: Has a limit on the size of individual files, but no limit on total disk usage. Their blog reveals they have been in operation since late 2006 and have just passed the terabyte storage mark--i.e. they are really still a small operation.
  • FileHO: No limits on file sizes or disk usage, and no limit on time. Some interesting comments here. Sounds too good to be true. Provides FTP services as well as web interfaces. Not much other information about the company.
  • in.solit.us: Damned by its blog. They reveal that they were shut down temporarily in early 2007 by their host, Dreamhost, for violating terms of usage--probably for sharing illegal files. Overall, they look very amateurish.
All of these allow you to make files either public or private.

Usually, you get what you pay for, so my guess is that if your files really must be preserved, you should pay for this service. Amazon S3 is the most famous of these, with fees of $0.15/GB/month for storage and $0.20/GB of data transferred. Several smaller companies attempt to provide higher level services on top of S3.

Sunday, February 11, 2007

Displaying RSS and Atom Feeds in MediaWiki

I used SimplePie to display Blogger Atom feeds in MediaWiki (using the ancient 1.5.2 version). Here's an example:


http://www.chembiogrid.org/wiki/index.php/SimplePieFeedTest


To add another feed, all you need to do is edit a page and add this:

<feed>http://chembiogrid.blogspot.com/atom.xml</feed>

(Replacing chembiogrid's atom.xml with the alternative feed URL).


More info is here: http://simplepie.org/. It is as easy as advertised. See particularly here: http://simplepie.org/docs/plugins/mediawiki. Or just google "simplepie mediawiki".

Avoid Magpie RSS. I spent the whole afternoon trying to get it to work with Atom with no luck. SimplePie worked in about 5 minutes.

Mediawiki's Secret RSS/Atom Feeds

Several Mediawiki "special pages" can be viewed as RSS and Atom feeds. See http://swik.net/MediaWiki/Documentation.

For example, our CICC wiki's recent changes are published via the URL
http://www.chembiogrid.org/wiki/index.php?title=Special:Recentchanges&feed=atom.

Friday, February 09, 2007

Fun with YUI's Calendar

* Thanks to Jason Novotny for pointing me to this.

* Download code from Yahoo's web site: http://developer.yahoo.com/yui/.

* To get started, you must first put the JavaScript libraries in an appropriate path on
your web server; for some reason, Yahoo does not supply a globally hosted URL. I did this by copying
the YUI zip file into my Tomcat's ROOT webapp and unpacking it there. More below.

* First trick: adding a calendar to a Web Form. Read the instructions. One variation on the
instructions is that the src attributes need to be set correctly. If you unpacked the
YUI zip in ROOT as described above, then you should use the following minimal page.

<html>
<head>

<script type="text/javascript" src="/yui_0.12.2/build/yahoo/yahoo.js"></script>
<script type="text/javascript" src="/yui_0.12.2/build/event/event.js"></script>
<script type="text/javascript" src="/yui_0.12.2/build/dom/dom.js"></script>

<script type="text/javascript" src="/yui_0.12.2/build/calendar/calendar.js"></script>
<link type="text/css" rel="stylesheet" href="/yui_0.12.2/build/calendar/assets/calendar.css">

<script>
YAHOO.namespace("example.calendar");
function init() {
YAHOO.example.calendar.cal1=new YAHOO.widget.Calendar("cal1","cal1Container");
YAHOO.example.calendar.cal1.render();
}
YAHOO.util.Event.addListener(window,"load",init);
</script>

</head>

<body>
Here is the calendar <p>
<div id="cal1Container"></div>
<p>
</body>

</html>


Note this code seems to work just fine in the HTML <body> so I will put it there henceforth.

* Now let's look at how to get this value into a Web Form. We'll do the web form with JSF just
to make it more difficult. The particular example I'll use is a front end to a data
analysis service (RDAHMM) that will look for modes in a selected GPS station over the selected
date range, but these details are not part of the web interface.

This will actually put the calendar right next to the form (ie not the way we would like it)
but that's fine for now.

<%@ taglib uri="http://java.sun.com/jsf/html" prefix="h"%>
<%@ taglib uri="http://java.sun.com/jsf/core" prefix="f"%>
<html>
<head>
<title>RDAHMM Minimalist Input</title>

<script type="text/javascript" src="/yui_0.12.2/build/yahoo/yahoo.js"></script>
<script type="text/javascript" src="/yui_0.12.2/build/event/event.js"></script>
<script type="text/javascript" src="/yui_0.12.2/build/dom/dom.js"></script>

<script type="text/javascript" src="/yui_0.12.2/build/calendar/calendar.js"></script>
<link type="text/css" rel="stylesheet" href="/yui_0.12.2/build/calendar/assets/calendar.css">

</head>


<body>
<script>
//Set up the object and add a listener.
YAHOO.namespace("example.calendar");
function init() {
YAHOO.example.calendar.cal1=new YAHOO.widget.Calendar("cal1","cal1Container");

//Handler that copies the selected date into the form's beginDate field.
var mySelectHandler=function(type,args,obj) {
var dates=args[0];
var date=dates[0];
var year=date[0],month=date[1],day=date[2];
var startDate=year+"-"+month+"-"+day;

var newStartDateVal=document.getElementById("form1:beginDate");
newStartDateVal.setAttribute("value",startDate);
}

YAHOO.example.calendar.cal1.selectEvent.subscribe(mySelectHandler,YAHOO.example.calendar.cal1, true);
YAHOO.example.calendar.cal1.render();
}
YAHOO.util.Event.addListener(window,"load",init);
</script>

Here is the calendar <br>
<div id="cal1Container"></div>

The input data URL is obtained directly from the GRWS web service
as a return type.

<f:view>
<h:form id="form1">
<b>Input Parameters</b>
<h:panelGrid columns="3" border="1">

<h:outputText value="Site Code"/>
<h:inputText id="siteCode" value="#{simpleRdahmmClientBean.siteCode}"
required="true"/>
<h:message for="siteCode" showDetail="true" showSummary="true" errorStyle="color: red"/>


<h:outputText value="Begin Date"/>
<h:inputText id="beginDate" value="#{simpleRdahmmClientBean.beginDate}"
required="true"/>
<h:message for="beginDate" showDetail="true" showSummary="true" errorStyle="color: red"/>


<h:outputText value="End Date"/>
<h:inputText id="endDate" value="#{simpleRdahmmClientBean.endDate}"
required="true"/>
<h:message for="endDate" showDetail="true" showSummary="true" errorStyle="color: red"/>

<h:outputText value="Number of Model States"/>
<h:inputText id="nmodel" value="#{simpleRdahmmClientBean.numModelStates}"
required="true"/>
<h:message for="nmodel" showDetail="true" showSummary="true" errorStyle="color: red"/>
</h:panelGrid>
<h:commandButton value="Submit"
action="#{simpleRdahmmClientBean.runBlockingRDAHMM2}"/>
</h:form>
</f:view>
</body>
</html>

The basic thing to see is that the YUI event listener connects the "mySelectHandler" Javascript to
calendar mouse clicks, which then update the form item labeled "form1:beginDate".
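
For completeness, the simpleRdahmmClientBean referenced in the JSF page is just a managed bean with matching properties. A hypothetical sketch (only the property and method names come from the form above; everything else is made up):

//Hypothetical sketch of the JSF managed bean behind the form above.
//Only the properties and action name referenced by the page are real; the body is a placeholder.
public class SimpleRdahmmClientBean {
    private String siteCode;
    private String beginDate;
    private String endDate;
    private String numModelStates;

    public String getSiteCode() { return siteCode; }
    public void setSiteCode(String siteCode) { this.siteCode = siteCode; }

    public String getBeginDate() { return beginDate; }
    public void setBeginDate(String beginDate) { this.beginDate = beginDate; }

    public String getEndDate() { return endDate; }
    public void setEndDate(String endDate) { this.endDate = endDate; }

    public String getNumModelStates() { return numModelStates; }
    public void setNumModelStates(String numModelStates) { this.numModelStates = numModelStates; }

    //Action method wired to the commandButton; a real bean would call the RDAHMM service here.
    public String runBlockingRDAHMM2() {
        return "success";  //navigation outcome defined in faces-config.xml
    }
}

The bean would also have to be registered under the name simpleRdahmmClientBean in faces-config.xml, as usual for JSF 1.x managed beans.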

Friday, February 02, 2007

Rethinking Science Gateways

Note: I intend to make this a series of articles on Web 2.0 technologies that follow my usual focus on nuts and bolts issues. However, I will start with 1-2 polemics.

Science gateways are essentially user-centric Web portals and Web Services for accessing Grid resources (primarily Globus and Condor in the US). Java-based portals are typically (but not always) built using the JSR 168 portlet standard. Numerous examples can be found in the proceedings of the Science Gateways Workshop at Global Grid Forum 14 and the associated special issue of Concurrency and Computation (the articles are available online but not yet published; search at http://www3.interscience.wiley.com/cgi-bin/jtoc/77004395/). See also GCE06 for a more recent overview.

Java-based gateways are characterized by their reliance on Enterprise standards, particularly numerous Java Specification Requests, and related Enterprise Web application development frameworks such as Struts, Java Server Faces, Velocity, and so on. This has the advantage of allowing portals to work with numerous third-party jars and also (by implementing standards) allows the science gateway community to interact with a larger community of developers than one would expect from the relatively specialized “web science” community.

This has come with a price. Java Web frameworks are designed to encourage beautiful coding practices (such as the Model-View-Controller design pattern) that are necessary for a team of professional developers. However, many casual Web developers find these frameworks very difficult to learn, and furthermore (as many work in very small groups or individually) the benefits of beautiful code bases are outweighed by the difficulty of getting anything working. This high entry barrier has created a “portal priesthood” possessing specialized knowledge.

While the enterprise approach and associated development practices may be appropriate for software development teams building science gateways for entire institutions (i.e. the TeraGrid User Portal), many science gateways would undoubtedly benefit from “agile development” rather than “enterprise development” practices. This is because many science teams cannot afford specialized Web and Grid developers. They would instead like to build gateways themselves. Such would-be gateway developers typically possess a great deal of programming knowledge but are more likely to be driven by practical applications rather than the desire to build elegantly architected software.

One of the recurring problems in building science gateways is that it is a very time-consuming activity if one follows the traditional software design approach: a gateway development team must extract requirements from a set of scientific users, who may not be directly involved in the gateway coding. Typically, the resulting prototype interfaces must go through many evolutions before they are useful to a working scientist or grad student.

Clearly many of these problems could be alleviated if the scientists could just build the interfaces themselves. Such user interfaces may be non-scaling, highly personal, or ephemeral, but the scientist gets what he or she wants. Well engineered software tools are still needed, but these must be wrapped with very easy to learn and use developer libraries. Google's Map API is the prime example of this sort of approach: very nice services (interactive online maps, geo-location services, direction services, interactive layer add-ons) can be built using JavaScript and thus in the classic scripting sense, “agilely” developed.

One of the useful lessons of the Programmable Web is that there is a distinction between the Enterprise and the Internet, but the two are complementary rather than opposed. Obviously Google, Flickr, et al. need to build very sophisticated services (with such things as security for Grids and ACID transactions for the commercial world), but these internal guts should not be exposed in the service interface, or at least to the “Internet” face of the service, any more than they would be exposed in the user interface. Again, Google serves as an example: calculating Page Ranks for the Internet is a very computationally demanding process, and Google undoubtedly has much specialized software for managing this distributed computing task. But this “enterprise” part of Google does not need to be exposed to anyone outside. Instead, we can all very easily use the results of this service through much simpler interfaces.

It seems clear that the Enterprise-centric portal interface portion of Science Gateways will need to be rethought, and I plan to do this in a series of posts examining technical issues of Web 2.0 development. The Web Service portion of the gateway architecture is, on the other hand, conceptually safe, but this does not mean that these services do not also need to change. The lesson from the Programmable Web should be very clear: we must do a better job building Web services. Too many gateways have collections of Web Services that are unusable without their portal interfaces. That is, the services have not been designed and documented so that anyone can (by inspecting the WSDL and reading up a bit) create a useful client to the service. Instead, the services are too tightly coupled to their associated portal. I propose the acid test for gateway services should be simply "Can you put it up on programmableweb.com?"