Saturday, October 25, 2008

Systems Administrators' Guide

Chamith Kumarage, systems administrator at WSO2, has started blogging. Chamith is a very active FOSS enthusiast. He will be publishing interesting articles related to systems administration, Unix/Linux & other stuff. If you are a Unix/Linux geek & have an interest in the field of systems administration or FOSS, I'd highly recommend bookmarking this blog.

Tuesday, October 21, 2008

RESTful PHP Web Services Book



Axis2/C & PHP Web services pioneer Samisa Abeysinghe, who is also the Director of Engineering at WSO2, has published a book titled "RESTful PHP Web Services". This is one of the very few, perhaps the only book available on PHP Web services. It also has a dedicated appendix on WSF/PHP, the comprehensive PHP Web services stack from WSO2, which supports all major Web services standards in addition to REST.

What you will learn from this book
  • Basic concepts of REST architecture
  • Consuming public REST-style services from your PHP applications
  • Consuming RESTful web services from leading APIs such as Flickr and Yahoo Web Search
  • Making your own PHP applications accessible to other applications through a RESTful API
  • REST support in the popular Zend framework
  • Debugging RESTful services and clients
  • A case study of designing a RESTful PHP service from the ground up, and designing clients to consume the service
Approach
The book explains the basic concepts associated with the REST architectural style, but the emphasis is on creating PHP code for consuming and creating RESTful services in PHP. There is plenty of example PHP code to illustrate the concepts, with careful explanations of how the code works.
Who this book is written for
This book targets PHP developers who want to build or make use of RESTful web services, or explore the options available to them in PHP. You will need to know the basics of PHP development, but no knowledge of REST is assumed, nor any knowledge of creating web services generally.

Monday, October 20, 2008

Terminator - Terminal Emulator



I recently started using Terminator and am really impressed. It makes my life so much easier with the split pane view, which allows me to run & monitor a multinode cluster.

"apt-get install terminator" worked on Ubuntu 8.04.

Sunday, October 19, 2008

How to extend WSO2 Registry

A great article by Chathura Ekanayake, lead developer of the WSO2 Registry, on extending the WSO2 Registry.

SOA Governance with WSO2 Registry

A good article by Ruwan Janapriya on SOA Governance using WSO2 Registry.

How to create patches for OSGi bundles

The standard practice for providing patches (which contain bug fixes) for WSO2 Java products was to create a JAR file containing only the classes that have changed, and to place this patch file, say foo-patch.jar, in the $PRODUCT_HOME/lib/patches directory. When the classpath is constructed, the $PRODUCT_HOME/lib/patches directory is placed in front of the $PRODUCT_HOME/lib/ directory, hence the patched classes take precedence. However, this mechanism does not work on the newer generation of Java products, which are based on OSGi, so we had to figure out how to patch OSGi bundles. The obvious answer one can think of is, 'Ship a new version of the bundle'. However, this is sometimes not feasible, for several reasons:

1. The patch may be large (it is the entire bundle that is shipped), even if only a single class changed.
2. Some other bundles may depend on a fixed version of the packages exported by the bundle that is being patched. Hence, we will have to ship new versions of the affected bundles as well, which can lead to a chain reaction. In the worst case, all the bundles may have to be shipped due to a change to a single class in a core bundle!
3. If the dependent bundles import packages in a range of versions, such as [1.0,2.0), the importer will be wired to the bundle which exports the package with the highest version. In this case, shipping a new version of the bundle may work, provided that backwards compatibility is maintained. However, this may not always be possible.

We can provide a simple solution to this problem using two OSGi constructs: Require-Bundle & fragments. Figure 1 shows that required bundles take precedence over the bundle's own classpath, hence we can use required bundles to override classes in the bundle.


Figure 1: Class loading flowchart. Source: OSGi R4 spec

However, at the time of shipping a bundle, we cannot know what patch bundles will be required. Hence we cannot put a Require-Bundle manifest header pointing to the patch, in the original bundle. This is where fragments come into play. The entries in the fragments' manifest are merged into that of the fragment host. Hence, we can create a fragment which will require the patch bundle. In effect we have managed to dynamically add a Require-Bundle header to the original host bundle. However, we should ensure that the original bundle is the primary exporter of the patched packages. This can be done by specifying a mandatory attribute. We need to add this to the Export-Package header of the fragment;
e.g. Export-Package:*;partial=true;mandatory:=partial
Once the fragment is attached to the host & the host is refreshed, we will be able to see the functionality in the patch taking effect.
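
To make this concrete, here is a rough sketch of what the fragment's manifest might look like (the bundle symbolic names are hypothetical; only the Fragment-Host, Require-Bundle & Export-Package headers matter for this technique):

Bundle-ManifestVersion: 2
Bundle-SymbolicName: org.example.main.patch.fragment
Bundle-Version: 1.0.0
Fragment-Host: org.example.main
Require-Bundle: org.example.main.patch
Export-Package: *;partial=true;mandatory:=partial

The patch bundle (org.example.main.patch in this sketch) is an ordinary bundle that exports only the packages containing the fixed classes.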

If you ever need to revert the patch, it is just a matter of removing the fragment & patch bundles, and refreshing the main bundle.

I have created a PoC project which demonstrates this concept. You can download the code and try it out. The main bundle is the bundle that needs to be patched. The patch bundle is the required bundle & the fragment bundle attaches to its host, the main bundle. The Knopflerfish desktop tool can be used to test this out.

NOTE: Don't confuse this with bytecode level patching at runtime. That is a different aspect. I've written another post on how that can be done.

How to set up EC2 tools

0. Download EC2 tools

1. Extract ec2-api-tools.zip to some directory. I've installed them at $HOME/.ec2

2. Install a JDK (>= 1.5)

3. Export/set the following environment variables: JAVA_HOME, EC2_HOME, EC2_PRIVATE_KEY, EC2_CERT, PATH.
You could simply add these to your /etc/profile, /etc/bash.bashrc or ~/.bashrc file, e.g.:

JAVA_HOME=/usr/local/java

EC2_HOME=/home/azeez/.ec2
EC2_PRIVATE_KEY=/home/azeez/.ec2/pk-xxx.pem
EC2_CERT=/home/azeez/.ec2/cert-xxx.pem

PATH=$JAVA_HOME/bin:$EC2_HOME/bin:$PATH

export JAVA_HOME EC2_HOME PATH EC2_PRIVATE_KEY EC2_CERT
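
To verify the setup, you can run a simple query, for example:

ec2-describe-images -o amazon

If the tools & keys are configured correctly, this should print a long list of Amazon-owned public AMIs. (This is just one convenient sanity check, not the only one.)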

How to create an EC2 AMI

The easiest way to create an EC2 AMI (Amazon Machine Image) is to start from one of the publicly available AMIs which suits your requirements. Let's say you selected an Ubuntu image, ami-0757b26e. If you don't have an Amazon Web Services (AWS) account, first create one. You will also need to download the EC2 command line tools & set them up. For instructions on how to set up the EC2 tools, read this.

Follow these steps to create your AMI:

0. Generate a keypair if you have not already done so
e.g. ec2-add-keypair key1
The output will be something like the following:

KEYPAIR key1
-----BEGIN RSA PRIVATE KEY-----
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
-----END RSA PRIVATE KEY-----

Copy the string starting from -----BEGIN RSA PRIVATE KEY----- up to -----END RSA PRIVATE KEY----- and save it in your keys directory, say in the /home/azeez/.ec2/keys/id_key1 file. Make sure that only the owner can read & write to that file.
i.e. chmod 600 /home/azeez/.ec2/keys/id_key1

1. Launch an instance of ami-0757b26e, providing a key, say key1, which you generated in step 0
ec2-run-instances ami-0757b26e -k key1
or you could use the ElasticFox GUI to do the same thing
The output will be something similar to
---------------------------------------------------------------------------------------------------
RESERVATION r-d5825cbc 610968236798 default
INSTANCE i-5c7dd335 ami-0757b26e pending key1 0 m1.small 2008-10-20T03:25:27+0000 us-east-1b aki-a71cf9ce ari-a51cf9cc
---------------------------------------------------------------------------------------------------

2. Connect to that instance using SSH,
e.g. ssh -i /home/azeez/.ec2/keys/id_key1 root@ec2-67-202-60-248.compute-1.amazonaws.com

3. Make the necessary changes to that instance. For example, you may install some custom software on that instance.

4. Upload your Amazon Web Services (AWS) private key (PK) & certificate (CERT) files to that instance. You can use scp to do this.
scp -i /home/azeez/.ec2/keys/id_key1 pk-XXX.pem cert-xxx.pem root@ec2-75-101-215-95.compute-1.amazonaws.com:/mnt/

5. On that instance, create an image of the current setup.
ec2-bundle-vol -k /mnt/pk-xxx.pem -c /mnt/cert-xxx.pem -u [user-id] -d /mnt

pk-xxx.pem = the PK file you uploaded in step 4
cert-xxx.pem = the CERT file you uploaded in step 4.
user-id = Your AWS User ID

In this step, you may wish to exclude some directories from the new image. Use the -e option followed by the ABSOLUTE path of the directories to be excluded. By default, some directories, like /mnt/, are excluded during image creation.
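
For example, to additionally exclude /tmp and /root/.ssh (these particular directories are only illustrative; -e takes a comma-separated list of absolute paths):

ec2-bundle-vol -k /mnt/pk-xxx.pem -c /mnt/cert-xxx.pem -u [user-id] -d /mnt -e /tmp,/root/.ssh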

6. Create a bucket in Amazon S3. You can use s3fox to do this using a GUI, or use the command line tooling.

7. Upload the newly created image to your S3 bucket which was created in step 6
ec2-upload-bundle -b [your-s3-bucket] -m /mnt/image.manifest.xml -a [aws-access-key-id] -s [aws-secret-access-key-id]
aws-access-key-id = your AWS access key
aws-secret-access-key-id = your AWS secret access key.

8. Register the image. On your local machine, run
ec2-register [your-s3-bucket]/image.manifest.xml
The AMI ID will be displayed if the image was successfully registered. Say this was ami-af34d0c6

9. Test your image. Launch an instance of your new image. On your local machine run, e.g.
ec2-run-instances ami-af34d0c6 -k key1
or you could use the ElasticFox GUI to do the same thing

10. Connect to your new instance using SSH. This is similar to step 2.

That's it. You have just created your own AMI.

If you would like to make your AMI public, do the following:
1. Add the launch permission:
ec2-modify-image-attribute [ami_id] --launch-permission -a all
2. Check the launch permissions of the AMI:
ec2-describe-image-attribute [ami_id] -l
ami_id = the ID of the AMI, e.g. ami-af34d0c6 from step 8

Saturday, October 18, 2008

How to add a tag cloud to blogger

I followed this article http://phy3blog.googlepages.com/Beta-Blogger-Label-Cloud.html

I just followed everything to the letter, and got the nice-looking tag cloud you see on this page.

Difference between EC2 AMI, AKI & ARI

AKI (Amazon Kernel Image)
Kernel loaded by the Amazon "boot loader"

ARI (Amazon Ramdisk Image)
"Disk" used by boot loader during kernel load

AMI (Amazon Machine Image)
Everything post-boot, including loadable kernel modules. An AKI & ARI can be specified when starting an instance of an AMI.

Thursday, October 16, 2008

How does an EC2 instance find information about itself?

Do an HTTP GET on http://169.254.169.254/1.0/meta-data/

Using curl,
$ curl http://169.254.169.254/1.0/meta-data/
ami-id
ami-launch-index
ami-manifest-path
hostname
instance-id
local-ipv4
public-keys/
reservation-id
security-groups

e.g. an instance can get its instance ID by,
$ curl http://169.254.169.254/1.0/meta-data/instance-id/
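
Any of the other keys listed above can be retrieved the same way, e.g. the instance's hostname:
$ curl http://169.254.169.254/1.0/meta-data/hostname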

Passing parameters to EC2 instances

In my work on making Axis2 autoscale on EC2, I had to figure out how to pass parameters to each instance. It turns out that the user-data parameter, set when launching a new instance, can be used for this. Once the instance starts up, the values passed in the user-data field can be retrieved by sending an HTTP GET request to http://169.254.169.254/1.0/user-data. This was very useful for me, since I can pass the initial parameters as well as the Amazon Web Services keys to the instance. The key files are necessary because, when the system autoscales, the primary EC2 instance needs to start up new EC2 instances.
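
As a rough sketch (the user-data string here is purely hypothetical), the data can be supplied with the -d option of ec2-run-instances when launching the instance:

ec2-run-instances ami-0757b26e -k key1 -d "AXIS2_ROLE=worker"

and then read back from within the running instance with:

curl http://169.254.169.254/1.0/user-data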

For more details, read 'Using Parameterized Launches to Customize Your AMIs'

Tuesday, October 14, 2008

Making Openoffice work with dark themes



I recently installed the SlicknesS dark Gnome theme, which has a polished look and feel. However, Openoffice did not work properly with this theme; due to what appears to be a bug, Openoffice does not render well with dark themes. So the solution was to run Openoffice with a compatible theme while running the rest of the applications with my favorite dark theme. Here is how to do that:

1. Rename /usr/bin/openoffice.org2.4 to /usr/bin/openoffice.org2.4BIN
2. Create a new file called /usr/bin/openoffice.org2.4 and add the following content
#!/bin/sh
env GTK2_RC_FILES=/usr/share/themes/Clearlooks/gtk-2.0/gtkrc openoffice.org2.4BIN "$@"
3. Make /usr/bin/openoffice.org2.4 executable, e.g. chmod +x /usr/bin/openoffice.org2.4
4. That's it!

The screenshot of my desktop shows gcalctool & terminator running with the default SlicknesS theme, while Openoffice writer is running with the Clearlooks theme.

In step 2, we are setting the theme only for the openoffice.org2.4BIN command. You can do the same for other applications too, if you need to run them with a different theme (not the default theme)
e.g. env GTK2_RC_FILES=/usr/share/gdm/themes/Human/gtk-2.0/gtkrc gedit
will launch gedit with the Human theme

Sunday, October 12, 2008

Openoffice reference & bibliography management




While searching for a good reference & bibliography management tool, I came across Zotero. It has excellent Openoffice integration, by far the best I have come across. The bibliography can be managed using a Firefox extension.

Syncing contents between EC2 & S3 - s3sync

In my ongoing work on autoscaling Web services on EC2, when an Axis2 EC2 instance boots up, it needs to load all the configuration files & the service+module repository from Amazon S3. While searching for an appropriate tool to do this, I came across s3sync.



Installation
1. Install ruby
On Ubuntu, it was apt-get install ruby ruby1.9
2. Install openssl
apt-get install openssl
3. Install openssl-ruby
apt-get install libopenssl-ruby
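
You will also need s3sync itself: download the s3sync tarball and extract it somewhere convenient, for example (the destination directory is only an illustration):
tar xvzf s3sync.tar.gz -C /opt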

Before Running
Make sure that the following environment variables are exported.
AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY
In order to obtain the values for those, you need to register with Amazon Web Services (AWS).

Examples from the s3sync README
1. Put the local etc directory itself into S3
s3sync.rb -r /etc mybucket:pre
(This will yield S3 keys named pre/etc/...)

2. Put the contents of the local /etc dir into S3, rename dir:
s3sync.rb -r /etc/ mybucket:pre/etcbackup
(This will yield S3 keys named pre/etcbackup/...)

3. Put contents of S3 "directory" etc into local dir
s3sync.rb -r mybucket:pre/etc/ /root/etcrestore
(This will yield local files at /root/etcrestore/...)

4. Put the contents of S3 "directory" etc into a local dir named etc
s3sync.rb -r mybucket:pre/etc /root
(You need to first create the directory, /root/etc. This will yield local files at /root/etc/...)

5. Put S3 nodes under the key pre/etc/ to the local dir etcrestore
**and create local dirs even if S3 side lacks dir nodes**
s3sync.rb -r --make-dirs mybucket:pre/etc/ /root/etcrestore
(This will yield local files at /root/etcrestore/...)


Only the contents that have changed in the S3 bucket will be loaded from S3. Now that I can sync between my EC2 instance & S3 bucket, I simply need to write an init script that loads the configuration & repository from a specified S3 bucket. This init script needs to be run automatically when the instance boots up. This can be done easily using chkconfig. chkconfig can be downloaded from here. Installation instructions are available here. Create the script, say syncs3, place it in /etc/init.d, and add it as follows:

chkconfig --add syncs3
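
For reference, a bare-bones syncs3 script might look something like this (the bucket name, local repository path & s3sync location are all assumptions that you would need to adjust):

#!/bin/sh
# chkconfig: 345 99 10
# description: Load the Axis2 configuration & repository from S3 at boot
# NOTE: the bucket name & paths below are hypothetical
export AWS_ACCESS_KEY_ID=your-access-key
export AWS_SECRET_ACCESS_KEY=your-secret-key
# Sync the axis2repo prefix in mybucket into the local repository directory
/opt/s3sync/s3sync.rb -r --make-dirs mybucket:axis2repo/ /opt/axis2/repository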

Remember to rebundle your instance into a new image. Now whenever an instance of the new image starts up, it will try to load the configuration from the S3 bucket.




Friday, October 10, 2008

Building business solutions using WSO2 products

A very good case study which shows how Concur used WSO2 ESB & WSO2 Data Services to build, in 3 hours, a working solution that had previously taken them 3 weeks to build using other technologies.

Thursday, October 09, 2008

Is this a turning point in history?

Two Microsoft employees have received Apache committership! This is indeed a significant event in the history of open source software development.

Converting legacy code into OSGi bundles

Anyone who starts to do something serious with OSGi soon hits a wall: issues with legacy code, such as:

  1. Calls to System.exit()
  2. Starting up using the main(String[]) method
  3. Using the Thread Context Classloader (TCCL)
Due to such issues, it seems next to impossible to make use of legacy code without changing & recompiling it. However, changing the code is often impractical since:

  1. The source may not be available
  2. Modification of source code may violate copyright
  3. It may be practically impossible to find & fix all instances of violations
This was one of the major issues we faced while turning the functionality in the WSO2 Java products into OSGi-compatible Carbon components.

The Knopflerfish OSGi framework has come up with a solution to address these concerns: bytecode-level patching, where the violations are located & patched at runtime. There is a very good presentation by OSGi veterans Gunnar Ekolin & Erik Wistrand from Makewave titled 'Everything Can be a Bundle', which is a MUST READ for any OSGi newbie & even for people who have been working with OSGi for some time.