Sunday, June 28, 2009

You can do the DB schema work online!

I'm working on a project with colleagues from other institutes. Within this project we decided to do some work on a client-server application with a database back-end.

We had many mail exchanges, nice figures describing workflows, phone meetings and video conferences, but it was time to start doing some actual work.

The first thing we had to agree on was the DB schema we were going to work with, and I was surprised by a tool that one of my colleagues used to share his SQL model. The tool is called wwwsqldesigner and it is open source. Of course, there is a demo installation you can use if you don't want to install it yourself.

What I liked most is that you can export your design in XML format. You can then send the XML file to the rest of the development team, who can load it into their local installation or the demo one, make their changes, publish a new version and so on. You can also save your model on the server, so that others can simply select it from a list to view and edit it.

I liked it so much that I'm thinking of installing it locally and uploading our local projects' schemas.

Friday, June 26, 2009

I don't want all these mails on my iPhone!

A week ago I wrote on this blog about the iPhone 3.0 OS update, and one of the disadvantages I listed was that there are no mail filters yet.

I really hate it when my laptop is not connected to the IMAP server (to filter all my mail) and I get all these spam and mailing list messages on my iPhone. It makes it totally useless.

My first thought was to use procmail, which seems powerful but has one "show-stopper" for me: it requires access to the mailbox server in order to upload your procmail configuration.

Then I searched for a simple client that would connect over IMAP, filter my mail and then log out. The client I found is imapfilter, which is actually developed by a Greek guy!

It does EXACTLY what I want. You feed it an easy-to-read configuration (Lua):

---------------
-- Options --
---------------

options.timeout = 120
options.subscribe = true


----------------
-- Accounts --
----------------

-- Connects to "imap1.mail.server", as user "user1" with "secret1" as password.
account1 = {
    server = 'imap1.mail.server',
    username = 'user1',
    password = 'secret1',
    ssl = 'ssl3',
}

---------------
-- Filters --
---------------

spam = {
    'new',
    'header "X-DSPAM-Result" "Spam"',
}


----------------
-- Commands --
----------------

-- Get status (messages, recent, unseen) of the mailbox.
-- check(account1, 'INBOX')

-- Move messages between mailboxes at the same account.
results = match(account1, 'INBOX', spam)
move(account1, 'INBOX', account1, 'SPAM', results)

I just set up a cronjob for this and it works perfectly!
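For completeness, the crontab entry can be as simple as the sketch below (the 10-minute interval and the config file path are just my assumptions; adjust them to taste):

# run imapfilter every 10 minutes to keep the INBOX clean
*/10 * * * * /usr/bin/imapfilter -c "$HOME/.imapfilter/config.lua"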

Thursday, June 25, 2009

Let's cut some (gLite) Hydra heads

You may be familiar with the Lernaean Hydra. The complexity of this beast made it the perfect name for a gLite service that is used to encrypt/decrypt data.

This service is based on Shamir's Secret Sharing algorithm, where the encryption/decryption key is split into X parts and any Y of them (where Y <= X) are needed to reconstruct the key.

A requirement for data encryption was raised some years back, so we had deployed 3 gLite Hydra servers (each one holding a part of every user's key, with only 2 of them required for encryption/decryption operations), with clear geographic and administrative separation.

A software update to one of them led to a "funny" situation where no new keys could be registered and no old ones could be unregistered. (These are the only operations that require all the servers to be up and responding.) The tool provided to (re)configure the service had the very interesting habit of dropping every DB table and re-creating them using the predefined schema.

A re-configuration of the updated server left us in an "everything just doesn't work" state, which we had to resolve under pressure from the user community. Note that if the service had simply stayed broken, users could have lost lots of human/CPU hours, since they would only be able to get encrypted output which they couldn't decrypt.

Analysis of the DB on another gLite Hydra instance gave us an idea of how this service stores its data. Luckily, the actual keys were not deleted by the configuration script; only the relation between users and keys was.

A copy of the user database and some reverse engineering of the relation tables on a working Hydra instance were enough to recover the service at (almost?) no cost.

That reminded me of the common Murphy's law where the backup you have is either unreadable at the time you need it or was last updated BEFORE your critical data was stored.

Saturday, June 20, 2009

OpenMP jobs on Grid? (The LCG-CE - PBS approach)

There was a user support requirement for OpenMP jobs on the Grid. OpenMP is a shared-memory implementation, which means that all the parallel threads must run on the same box.

Well, this can easily be achieved on the PBS side by using the directive:
#PBS -l nodes=1:ppn=X

Where "X" is the number of requested processes. But the main issue is HOW can we get this requirement based on what WMS gives to us on submission?

After googling this, it seems the "correct" solution can only be achieved with the CREAM CE, where users can specify a number of requirements that are not only used for the job matching process at the WMS but are also passed on to the CE. You can find more info on this here.

LCG CEs, on the other hand, only get a poor RSL which carries almost none of the user's requirements. So let's get into the LCG CE's internals...

First, a job reaches the globus-gatekeeper. At this phase the user's proxy is mapped to a pool account. The gatekeeper's task is to authenticate the user and the job and pass it to the globus job manager.

The globus job manager uses the GRAM protocol to report the job state and submits the job to the globus-job-manager-marshal, which uses a Perl module to talk to the relevant queuing system.

This Perl module is responsible for creating the job (a shell script) that will be submitted to the PBS server. In this module the CpuNumber requirement is translated by default to:
#PBS -l nodes=X

So this is the part we need to change in order to create OpenMP jobs. The next issue is how we find out whether the user has asked for an OpenMP job. I've noticed that the JDL option "Environment" is passed to the job executable that will be submitted, so a definition like the following:
Environment = {"OPENMP=true"};
can do the trick.
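Putting it together, the user's JDL could look something like the sketch below (the executable name and the CPU count are just examples, not a tested recipe):

Executable  = "run_openmp.sh";
CpuNumber   = 4;
Environment = {"OPENMP=true"};

The CpuNumber value ends up as the "X" above, while the OPENMP variable is what the modified Perl module would look for in order to emit nodes=1:ppn=X instead of nodes=X.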

The whole approach above works, but it certainly needs a lot more polishing; as a proof of concept, though, it is more than OK...
In the (near) future I would like to test the CREAM CE which, as I said before, has a cleaner way to support requirements from JDLs using the CeForwardParameters definition.

Friday, June 19, 2009

Coding on multiple SVN repositories...

As a developer I use repositories (mainly SVN) for code versioning and for interacting with other developers.

Being involved in development with other teams within a project usually requires that the (production) repository is hosted somewhere central. This gives us the advantage of having one code base where all developers work. The main disadvantage of this setup, though, is that developers usually don't commit until they have something really stable and working.

Another disadvantage is that it is not easy for someone outside your head to find out what you are working on (and usually the "manager" guys need to do so).

It was proposed to me that I use a local repository for all the development I do, where it would be easy to make "every change" commits, and commit only stable versions to the central repositories. This would give us both frequent commits (and thus a clear history view) and the ability for others to see what you are working on and perhaps comment on it. At first I was strongly against this... It clearly adds a lot of additional work without giving us many clear advantages.

As this was a "manager's" proposition, I had to try it. The initial thought was to work on our local repository and then, when I have something stable, take a diff since the last sync of the repositories and apply it to the remote (central) repository.

But... thinking about this again, isn't this just the "svn tagging" procedure with trunk and tags living on different servers?

An implementation to test (a rough command-line sketch follows the list):
  1. Create a local test repository with trunk and tags trees
  2. Create a new repository to serve as "the remote central repository"
  3. Create a new tag in the first repository which will have an svn:externals link to "the remote central repository"
  4. Start tagging as normal on the local test repository, but always to the same tag.
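Here is what steps 1-3 could look like with plain svn commands (the repository paths and the "stable" tag name are made up for the example; this is a sketch of the idea, not something I have run in production):

# 1. local test repository with the usual layout
svnadmin create /srv/svn/local-test
svn mkdir -m "initial layout" file:///srv/svn/local-test/trunk \
                              file:///srv/svn/local-test/tags

# 2. a second repository standing in for "the remote central repository"
svnadmin create /srv/svn/central
svn mkdir -m "initial layout" file:///srv/svn/central/trunk

# 3. a tag in the local repository linked to the central repo via svn:externals
svn mkdir -m "stable tag" file:///srv/svn/local-test/tags/stable
svn checkout file:///srv/svn/local-test/tags/stable /tmp/stable-wc
svn propset svn:externals "central file:///srv/svn/central/trunk" /tmp/stable-wc
svn commit -m "link stable tag to the central repository" /tmp/stable-wc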

iPhone 3.0 OS is here...

It took me about 8 hours of clicking "check for updates" and finally, around 8pm on Wednesday, it was available!

Of the 230 MB download, the first 200 MB came down within a few seconds while the last 30 took about half an hour! (Was I one of the first downloaders?)

After about 24 hours of experience with iPhone 3.0, I think it finally has all the missing "phone" functionality.

Thumbs up:
  • I can write Greek!
  • iPod shake! (shake to shuffle)
  • "Search iPhone", i.e. the iPhone Spotlight. Everything is just a text box away from your screen.
  • MMS (I never used it before, but it was a pity that the iPhone couldn't do something that 30-euro mobiles can)
Thumbs down:
  • Some 3rd-party apps report "compatibility errors" (fortunately without any (visible) malfunction).
  • Still no background applications (no Skype in the background)
  • No email filters (should I consider procmail?)