Friday, July 27, 2007

Google Maps: EGeoXML

This is a wonderful little JavaScript library that lets you work with KML files in your JavaScript code. It also overcomes the rendering limits of KML, so you can show many more placemarks.

Thursday, July 19, 2007

Drupal Mac Install

I'm doing this in a unixie way rather than looking for Mac specific packages.

0. sudo -H -u root /bin/bash

1. Get MySQL and install. Put this in /usr/local/mysql. The MySQL manual is pretty good, so I won't duplicate it. Make a db user ("drupaloompa") and db as drupal tells you.

2. Get Apache 2 and install. You'll need DSO support, so ./configure --enable-so. I wonder why they don't make this a default. Anyway, Apache 2.2 gave an error:

configure: error: Cannot use an external APR-util with the bundled APR

I decided to punt and go back to Apache 2.0. This worked fine. Probably I have another apache server pre-installed by Steve Jobs. Looks so, so be careful that your paths point to /usr/local/apache2.

3. Get PHP 5. Configure/make/make install had one variation: ./configure failed unless I gave the full path to mysql, so I ran (in /usr/local/php-X.Y)

./configure --with-apxs2=/usr/local/apache2/bin/apxs --with-mysql=/usr/local/mysql

Noticed an unusual problem with PHP's "make install": it looked for libs in


but my libs were in


I rectified this straightforwardly (it's late), but probably this is some ./configure issue.

Checked php by creating a little file (phptest.php) with the line <? phpinfo() ?> in htdocs. This worked.
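As a sketch, here is the same check scripted. A scratch directory stands in for the real htdocs here so the snippet is self-contained; in practice you would write into your actual htdocs (e.g. /usr/local/apache2/htdocs from step 2):

```shell
# Illustration only: a scratch directory stands in for the real htdocs.
HTDOCS="$(mktemp -d)"
printf '<? phpinfo() ?>\n' > "$HTDOCS/phptest.php"
cat "$HTDOCS/phptest.php"
```

With the file in your actual htdocs, loading http://localhost/phptest.php should show the PHP info page.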

4. Copied drupal directory (after gnutarring) into htdocs. Note you want to do this:

cp -r /path/to/drupal-5.1/* /usr/local/apache/htdocs

You don't want the directory /usr/local/apache/htdocs/drupal-5.1. For some reason, my PHP files did not get parsed in this directory, but everything worked fine when I moved things to the top level htdocs directory.
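A tiny demonstration of the difference, using throwaway directories in place of the real paths:

```shell
# Stand-ins for /path/to and /usr/local/apache/htdocs.
SRC="$(mktemp -d)"
HTDOCS="$(mktemp -d)"
mkdir -p "$SRC/drupal-5.1"
touch "$SRC/drupal-5.1/index.php"

# Copying the *contents* puts the drupal files at the top level of htdocs,
# which is what you want; no drupal-5.1 subdirectory is created.
cp -r "$SRC/drupal-5.1/"* "$HTDOCS"
ls "$HTDOCS"
```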

5. Start apache and point to http://localhost/index.php. You will see a complaint about write access to settings.php. Do this:

chmod a+w sites/default/settings.php

Reload the page and run the installer with your MySQL user account and password, and you are good. Then remove write permissions on settings.php:

chmod a-w sites/default/settings.php

Point your browser to http://localhost/index.php again and get going. Note that you will need to clear your browser cache if you loaded any Apache pages. Use shift-reload or just quit the browser and start a new one. In any case, http://localhost/ should show the index.php pages, not "It works" or any Apache feathers.

Wednesday, July 11, 2007

Some condor problems and solutions

As in my previous post, here were some problems I encountered along the way to getting condor installed on a set of lab PCs running linux with no shared file systems and no DNS.

* Condor complains about a missing library: install the compatibility patch for your OS. See earlier post.

* You get something like

Failed to start non-blocking update to <>.
attempt to connect to < > failed: Connection refused (connect errno = 111).
ERROR: SECMAN:2003:TCP connection to <> failed

This probably means you have not set HOSTALLOW_READ and HOSTALLOW_WRITE correctly. Use the IP addresses you want to connect as the values. Also be sure you have enabled the NO_DNS and DEFAULT_DOMAIN_NAME parameters in your condor_config file.
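For reference, those settings might look like this in condor_config (the IP wildcard and domain name below are made-up placeholders for your own values):

```
HOSTALLOW_READ = 192.168.1.*
HOSTALLOW_WRITE = 192.168.1.*
NO_DNS = TRUE
DEFAULT_DOMAIN_NAME = cluster.local
```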

* Jobs seem stuck in "idle". This could just mean that condor is really being unobtrusive. Try changing the START, SUSPEND, etc attributes in condor_config (see previous post and link).

* Jobs only run on the machine they are submitted from. Try adding


to your job submission script.

* Need some general debugging help? Here is a nice little link:

In short, look at the log files in /home/condor/log. Here are some useful commands:

1. condor_q -better-analyze
2. condor_q -long
3. condor_status -long

#1 will tell you why your job is failing to get matched.

Beating Condor Like a Rented Buzzard

OK, trying to get Condor running on a few linux boxes on a private network with no DNS. There were lots of pitfalls that took a while to debug. Also, try searching for my links tagged with "condor" and "ecsu" for sites I found helpful.

* Make sure you have the compatibility patch installed, since condor needs an obsolete library. See the previous post.

* Turn off your iptables firewalls (/etc/init.d/iptables stop).

* I did not find using /etc/hosts entries to be helpful. This will conflict with the NO_DNS attribute that I used below.

Get the tar.gz version of condor, since we will need to babystep our way through the complicated configuration file. Do the following as the root user. Unpack the tar, and use


for the interactive installation on all the machines you want to use. Pick one of these to be the master. Don't use condor_configure --install unless you know what you are doing. Answer the questions as best you can. Note you should answer "No" when asked about the shared file system if you are just using a random collection of machines with no NFS, etc.

Next, set your $CONDOR_CONFIG environment variable and then edit this file. Make sure that CONDOR_HOST points to the master node. You must change HOSTALLOW_WRITE and you may want to change HOSTALLOW_READ. I set these to IP addresses since I don't have a DNS. Don't forget to include in the allowed list.

Next, scroll down in condor_config to find the parameters DEFAULT_DOMAIN_NAME and NO_DNS. Uncomment these, since we assume there is no DNS. The value of the first can be anything, the value of the second should be TRUE.

Still editing condor_config, cruise down to part 3 and set the attributes START=TRUE, SUSPEND=FALSE, CONTINUE=TRUE, PREEMPT=FALSE, and KILL=FALSE. This will cause condor to start your jobs right away.
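Collected in one place, the part 3 edits in condor_config are:

```
START = TRUE
SUSPEND = FALSE
CONTINUE = TRUE
PREEMPT = FALSE
KILL = FALSE
```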

Now, start condor with condor_master on your main condor host. This should come up in a few minutes, so use condor_status to check.

Next, from each machine in your cluster, try "telnet your-master-ip 9618", replacing your-master-ip with the main host's IP address. 9618 is the port of the collector. If you can't even get to this port, you probably have firewall issues, so turn off iptables and check /etc/hosts.deny and /etc/hosts.allow. If you get "connection refused", it is probably a condor_config issue, so make sure that you followed the steps above.

I actually found it useful to install Tomcat, modified to run on port 9618, just to verify that other machines could reach the main host's port 9618. This tells you whether connection problems are in your network configuration or in condor's config. Be sure to turn Tomcat off when you are done.

Repeat the condor installation steps for the other condor cluster members. When you bring them up, they should be added to the output of condor_status.

Tuesday, July 10, 2007

Condor on FC7

Trying to get condor running on a fresh Redhat Fedora Core 7. Got the following error while using the FC5 release from Condor:
[condor@localhost condor-6.8.5]$ ./sbin/condor_master
./sbin/condor_master: error while loading shared libraries: cannot open shared object file: No such file or directory

Luckily, just needed to add a compatibility library. First, used yum to install apt and synaptic:

yum install apt synaptic

Synaptic had a nice GUI but apt-cache was more my speed. I did have to update the file listings first ("reload" button on synaptic).

[root@localost ~]# apt-cache search libstdc++
You don't seem to have one or more of the needed GPG keys in your RPM database.
Importing them now...
Error importing GPG keys
compat-libstdc++-33 - Compatibility standard C++ libraries
libstdc++ - GNU Standard C++ Library
compat-libstdc++-296 - Compatibility 2.96-RH standard C++ libraries
libstdc++-devel - Header files and libraries for C++ development

[root@localhost ~]# apt-get install "compat-libstdc++-33"
You don't seem to have one or more of the needed GPG keys in your RPM database.
Importing them now...
Error importing GPG keys
Reading Package Lists... Done
Building Dependency Tree... Done
The following NEW packages will be installed:
compat-libstdc++-33 (3.2.3-61)
0 upgraded, 1 newly installed, 0 removed and 0 not upgraded.
Need to get 237kB of archives.
After unpacking 733kB of additional disk space will be used.
Get:1 fedora/linux/releases/7/Everything/i386/os/ compat-libstdc++-33 3.2.3-61 [237kB]
Fetched 237kB in 0s (309kB/s)
Checking GPG signatures... ########################################### [100%]
Committing changes...
Preparing... ########################################### [100%]
1:compat-libstdc++-33 ########################################### [100%]

Bang, condor_master now starts. Of course, I get the classic
[condor@localhost condor-6.8.5]$ ./sbin/condor_master
ERROR "The following configuration macros appear to contain default values that must be changed before Condor will run. These macros are:
hostallow_write (found on line 215 of /home/condor/Desktop/condor-6.8.5/etc/condor_config)
" at line 223 in file condor_config.C

I'm surprised the condor people don't post more warnings here. Just edit the condor_config file and fix the hostallow_write line the error names (line 215 in my copy; it should be obvious from the nearby comments).

Condor Authentication Error

Got something like this below:
[username@host:~] condor_submit my_job.cmd
Submitting job(s)
ERROR: Failed to connect to local queue manager
AUTHENTICATE:1003:Failed to authenticate with any method
AUTHENTICATE:1004:Failed to authenticate using KERBEROS

Distressing, since I'm not using Kerberos or GSI in a vanilla condor installation.

Solution: I had two condor installations, probably with different versions. My
.bashrc file set CONDOR_CONFIG to point to installation #1, but I started condor_master
using binaries from installation #2. Used the same installation consistently and
everything worked.
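A quick sanity check for this situation, as a sketch: compare where CONDOR_CONFIG points with where the condor binaries on your PATH live, and make sure they belong to the same installation tree.

```shell
# The config you point at and the condor_master you run should come from
# the same installation tree; eyeball the two paths.
echo "CONDOR_CONFIG=${CONDOR_CONFIG:-unset}"
command -v condor_master || echo "condor_master not on PATH"
```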

up2date obsolete on rhel5 and fc7

Redhat has decided to go with yum instead. Just use "yum update".

Sunday, July 08, 2007

ECSU Summer School

Basic Agenda

This isn't set in stone so we can change depending on the initial discussions.

Getting started (Monday Morning):
* Everyone will create a blog if they don't have one already. Post every useful nugget of information or observation here. This is a good practice. Also bookmark useful web sites. Tag everything with "ECSU". You can embed a badge in your blog.

* Introductions

* Expectations: what do we want to accomplish for the week? I will overview a lot of technologies, but we need to find ways to tie everything together.

Monday: Introductory/overview lecture and Condor lab
- Installing and using Condor on your linux cluster
- I will probably use some lecture material from the Condor group and Grid summer schools.
- We should identify some interesting and relevant projects for the students that would use Condor--my guess is the image analysis project mentioned below would be an excellent candidate. These could be done over several days.
- Need to make sure this installs OK on your version of Linux.

Tuesday: Web Services Lab
- I have all of this material in hand.
- Introduction to WSDL, SOAP, and REST
- May also provide some lectures on XML
- Hands on lab with Apache Axis 2
- Build workflows with Taverna

Wednesday: Globus and Portals/Gateways Lab
- Lectures on Globus basics
- Install and use Globus command line stuff
- Install and use OGCE portal
- Build a Web gateway prototype for CERSER
- Need to make sure Globus installs OK on your version of Linux

Thursday: Web 2.0 Lab
- Web 2.0 overview and tutorial
- Mashup lab: brainstorm ideas and implement them.

Friday: Wrap up projects

Friday, July 06, 2007

Condor Commands Short List

Following are the most frequently used condor commands.

Before running them, you need to set these environment variables:

export CONDOR_CONFIG=/home/condor_user/condor_installation_directory/etc/condor_config
export PATH=$PATH:/home/condor_user/condor_installation_directory/bin
export PATH=$PATH:/home/condor_user/condor_installation_directory/sbin

(1) To start condor, use the condor_master command.

(2) Use condor_status to see all the machines included in the condor pool.

(3) You can also use the condor_off command to kill the condor processes and the condor_on command to start them again.

(4) If you have a simple job to submit, e.g. a file named test containing

universe = vanilla
executable = /bin/ls
output = test.out
error = test.error
log = test.log
queue

you can run the "condor_submit test" command. (Note the queue statement at the end; without it, no jobs are submitted.)

(5) Check the status of the job using the "condor_q" command.

(6) To remove a job from the queue, use "condor_rm JobID".

(7) You can permanently remove a job from the local machine using the "condor_rm -forcex JobID" command.
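As a sketch, here is the submit file from item (4) created from the shell. Note the trailing "queue" statement: without it, condor_submit will not schedule any jobs.

```shell
# Write the submit description file into the current directory.
cat > test <<'EOF'
universe = vanilla
executable = /bin/ls
output = test.out
error = test.error
log = test.log
queue
EOF
grep -c '=' test   # the five attribute lines
```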

Thursday, July 05, 2007

Linux ISCSI with EMC2 AX150 SAN

Finally got my new external SAN file system up and running and attached to the linux server farm using ISCSI (i.e. SCSI over a standard network interface--all devices are attached to the same network switch, but otherwise no special equipment needed).

The Problems
1. EMC^2 needs to be told they are missing an equal sign.
2. Their documentation is terrible.
3. Their SAN management software doesn't yet work on RHEL5, but my version of RHEL4 did not support the linux ISCSI initiator software.

First, review the ISCSI terminology a bit. See the links from my previous post for nice reviews. In short, your SAN device is your ISCSI target. You will mount it as a SCSI device over a network connection to a linux server (or Windows 2003 if you prefer), where it can be treated like an ordinary file system. The linux host runs the ISCSI initiator and acts as a client, if you are more comfortable with client-server terminology.

The basic steps (besides powering up the SAN) are as follows:
1. Install the ISCSI initiator software on your linux server (use up2date or yum; on RHEL5 the package is iscsi-initiator-utils, so "yum install iscsi-initiator-utils"). You probably want to update your kernel as well. This was my first problem, since up2date on RHEL4 refused to update my kernel (linux 2.6.9-5), and attempting to start the ISCSI daemon on linux failed because of missing kernel modules. So I upgraded to RHEL5. This whole set of sub-steps was wildly unclear in the EMC2 documentation.

2. Set up your SAN's network configuration from your linux server. EMC2's tools (Navisphere, Powerpath) worked fine on RHEL4 but not RHEL5, so I wound up doing this step from a second linux server that still ran RHEL4. I just used IP addresses from our pool.

3. Configure your SAN's disk pool, RAID, etc. EMC2 provides a nice web tool, Navisphere Express, to do this.

4. Fire up the ISCSI client on your linux server. See the README at for everything you need, although it took a while for me to digest it. At the end, run "fdisk -l" and you should see your new /dev/sdX devices. If you don't, but all the iscsiadm commands are working fine, it could mean that you completed step #2 but not #3.

5. Partition and format your devices in the usual way with fdisk, etc. Mount the file systems and you should be in business.

Wednesday, July 04, 2007

Creating Bootable CDs with a Mac

The situation: I was making some installation CDs of RHEL 5 from downloaded .iso files on my Mac workstation for our servers. There are a few different ways to burn CDs on the Mac, so my first efforts made readable file copies but not bootable CDs.

The solution: use the Disk Utility (under Applications/Utility). Drag your .iso file into the DU's left-hand navigation area and then click the (now enabled) "burn" icon on the top of the DU.

Here's your 4th of July bonus:

Monday, July 02, 2007

Useful RedHat ISCSI Links

For the attic:

The open-iscsi project has a nice README:

Here is a good doc on linux partitioning:

I guess I should tag these on or something but this is easier.

JSF Managed Bean Properties and Constructors

I think I keep discovering and forgetting this: don't initialize anything in a managed bean's constructor that depends on managed properties from faces-config.xml. JSF injects those property values after the constructor runs, so they are not available yet.

For example,

public SimplexBean() throws Exception {
    simplexService = new SimpleXServiceServiceLocator()
        .getSimpleXExec(new URL(simpleXServiceUrl));

    System.out.println("Simplex Bean Created");
}

will NOT use the value for simpleXServiceUrl from faces-config.xml. It will use any value set in the code instead.

To work around this, just make a little method

private void initSimplexService() throws Exception {
    simplexService = new SimpleXServiceServiceLocator()
        .getSimpleXExec(new URL(simpleXServiceUrl));
}
and call it before you invoke the web service. I assume on the scale of things, this is a cheap call, so no need to put it in the constructor even if it is redundant.

Make sure of course that you have get/setSimpleXServiceUrl() methods.