PHP Development with NetBeans 7.0

In this post, I'm going to walk through creating a simple PHP project using the NetBeans 7.0 IDE and deploying it to a development Apache HTTP server running on the same machine. All of the instructions below were tested on Ubuntu 10.04 and assume that you have a LAMP stack installed (although we don't need the MySQL component for now), that you have NetBeans installed (I've used the All bundle, which comes with PHP support), and that you have the userdir module for Apache turned on (see my previous post).

In terms of usability, I find NetBeans to be one of the best open source PHP IDEs available, with excellent support for features like autocompletion and debugging that make development a whole lot easier.

To start off with, open up NetBeans and create a new PHP Project:

I’ve set the project name to “PHPHelloWorld” and I found that NetBeans detected my ~/public_html folder automatically:

NetBeans also detected my local web server and URL settings:

For our simple HelloWorld application, we don’t require any PHP frameworks:

After clicking ‘Finish’, we end up with the default PHP template:

Next, I added some code to the index.php file, including a small function, so that as you type it out you can see some of NetBeans' autocomplete features in action:
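If you want something to type in yourself, here's a minimal sketch of the kind of code I mean (the sayHello() function is just an illustrative name, not part of the NetBeans template):

<?php
// A small helper function: typing out the call below lets you see
// NetBeans' autocompletion and parameter hints in action.
function sayHello($name)
{
    return "Hello, " . $name . "!";
}

echo sayHello("World");
?>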

Finally, click on the ‘Run Project’ button (or hit F6) and watch as your program deploys:

Apache HTTP Server userdir module

As a part of setting up a development environment for creating web applications/sites to be deployed to Apache HTTP Server, one of the things I would highly recommend is making use of the userdir module. This module allows a user to create their own directory (under /home/[user]/public_html) and have it automatically made accessible by Apache at http://localhost/~[username]/, which skips a lot of the headaches caused by permission problems.

To set up this module, first you need to create this directory:

mkdir ~/public_html

The next step is to enable the module:

sudo a2enmod userdir

Note that if you want to change the name of the directory or any other settings for this folder, you can do so by editing the /etc/apache2/mods-available/userdir.conf file.
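For example, the directory name is controlled by the UserDir directive; the stock file looks roughly like this (trimmed for brevity):

<IfModule mod_userdir.c>
        UserDir public_html
        UserDir disabled root
</IfModule>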

Then, finally, we just need to restart Apache for the module to be loaded:

sudo /etc/init.d/apache2 restart

If it all went well, you should now be able to open your browser and browse to http://localhost/~[username]/ and see the contents of your public_html directory.
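If the directory is empty you won't see much, so it can be worth dropping a throwaway page in there first (the file name and contents are arbitrary):

echo "userdir is working" > ~/public_html/index.html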

Another very important thing to mention is that by default, PHP processing is disabled on this directory. If you need to turn on PHP processing, you need to modify the /etc/apache2/mods-available/php5.conf file:

<IfModule mod_php5.c>
    <FilesMatch "\.ph(p3?|tml)$">
        SetHandler application/x-httpd-php
    </FilesMatch>
    <FilesMatch "\.phps$">
        SetHandler application/x-httpd-php-source
    </FilesMatch>
    # To re-enable php in user directories comment the following lines
    # (from <IfModule ...> to </IfModule>.) Do NOT set it to On as it
    # prevents .htaccess files from disabling it.
    <IfModule mod_userdir.c>
        <Directory /home/*/public_html>
            php_admin_value engine Off
        </Directory>
    </IfModule>
</IfModule>

As the comment in the file says, you just need to comment out the mod_userdir.c section (everything from <IfModule mod_userdir.c> to its matching </IfModule>) to enable PHP in the ~/public_html directory.
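With that done, the tail end of php5.conf would look something like this (don't forget to restart Apache again afterwards for the change to take effect):

    # To re-enable php in user directories comment the following lines
    # (from <IfModule ...> to </IfModule>.) Do NOT set it to On as it
    # prevents .htaccess files from disabling it.
    #<IfModule mod_userdir.c>
    #    <Directory /home/*/public_html>
    #        php_admin_value engine Off
    #    </Directory>
    #</IfModule>
</IfModule>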

Apache HTTP Server VirtualHost directive

One of the things which always catches me out is the use of the VirtualHost directive in the configuration files for the Apache HTTP server. When you need to set up virtual hosting (i.e. more than one host served off the same IP address, differentiated by hostname), you need to use this directive. However, don't assume that you can do something like:

<VirtualHost www.example.com>...</VirtualHost>

and that this will result in having a server defined for the 'www.example.com' hostname. The VirtualHost directive is only used to define the IP address that this "virtual server" should listen on; it does not define which hostname it should reply to. While the above configuration is legal, what Apache's HTTP Server actually does is look up the hostname, convert it to an IP address and use that in the directive. Functionally, it is no different from doing:

<VirtualHost [IP address of the www.example.com host]>...</VirtualHost>

Using a hostname in this part of the configuration can therefore lead to unexpected behaviour. In my case, I added the directive with a hostname, expecting the configuration section to apply only to that hostname, when in fact it matched all hostnames served from that IP.

The correct way to define a hostname-based virtual host is to use the ServerName directive (and, where needed, ServerAlias) inside the VirtualHost stanza.
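As a sketch (host names and document roots are placeholders), an Apache 2.2 name-based setup looks something like this:

NameVirtualHost *:80

<VirtualHost *:80>
    ServerName www.example.com
    ServerAlias example.com
    DocumentRoot /var/www/example
</VirtualHost>

<VirtualHost *:80>
    ServerName www.another-example.com
    DocumentRoot /var/www/another-example
</VirtualHost>

Apache then selects the block whose ServerName or ServerAlias matches the Host header of the incoming request, falling back to the first block when nothing matches.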

I/O Benchmarks with Bonnie++

So, after my last article using the Palimpsest application, I decided to give a tool called Bonnie++ a try. From the description on the author's website:

“Bonnie++ is a benchmark suite that is aimed at performing a number of simple tests of hard drive and file system performance. Then you can decide which test is important and decide how to compare different systems after running it. I have no plans to ever have it produce a single number, because I don’t think that a single number can be useful when comparing such things.”

So, running the program with its default settings, as packaged in the Ubuntu repositories, we get the following output:

$ bonnie++
Writing a byte at a time...done
Writing intelligently...done
Rewriting...done
Reading a byte at a time...done
Reading intelligently...done
start 'em...done...done...done...done...done...
Create files in sequential order...done.
Stat files in sequential order...done.
Delete files in sequential order...done.
Create files in random order...done.
Stat files in random order...done.
Delete files in random order...done.
Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
srdan-desktop    8G   505  98 83083  10 36609   4  2514  86 96195   5 155.8   3
Latency             16674us    1003ms    1830ms   41497us     269ms    1014ms
Version  1.96       ------Sequential Create------ --------Random Create--------
srdan-desktop       -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 11634  22 +++++ +++ 17458  25 16959  30 +++++ +++ 17864  23
Latency             55341us     601us     638us     753us      43us     123us
1.96,1.96,srdan-desktop,1,1308644057,8G,,505,98,83083,10,36609,4,2514,86,96195,5,155.8,3,16,,,,,11634,22,+++++,+++,17458,25,16959,30,+++++,+++,17864,23,16674us,1003ms,1830ms,41497us,269ms,1014ms,55341us,601us,638us,753us,43us,123us

So, overall we get a lot more detail here, although not as many pretty graphs as we had with Palimpsest.
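As an aside, that final comma-separated line is meant for machine consumption: the bonnie++ package ships a bon_csv2html utility that will turn it into an HTML table, along the lines of (file names are just placeholders):

bonnie++ | tail -n 1 > results.csv
bon_csv2html < results.csv > results.html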

Looking at the output we can see that the tool tests five main categories:

* Sequential Output (Per Chr, Block, Rewrite)
* Sequential Input (Per Chr, Block)
* Random Seeks
* Sequential Create (Create, Read, Delete)
* Random Create (Create, Read, Delete)

According to the documentation the first three categories simulate the same kind of I/O load that a database would normally put onto a disk, while the last two categories simulate the kind of load that you would expect to see in an NNTP and/or web caching server. From what I could tell, the first three tests all create a single file to test on and the last two create a host of small files.

One thing to note is that this is a test of both the disk and the filesystem (and kernel), unlike the Palimpsest benchmark, which only tests the disk. Another thing to note is that, as well as the I/O throughput, we also get the %CP figure, i.e. how taxing each operation is on the CPU. This might be an important factor when trying to determine what kind of CPU you need for your web caching server.
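Since bonnie++ exercises whichever filesystem holds its working directory, it's worth pointing it explicitly at the disk you care about. A rough example (the mount point, data size and machine label are assumptions to adjust):

bonnie++ -d /mnt/testdisk -s 8192 -n 16 -m srdan-desktop

Here -d sets the directory to test in, -s the amount of data in megabytes (the usual advice is at least twice your RAM, to defeat caching), -n the number of small files (in multiples of 1024) and -m the machine name shown in the report.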

Overall, bonnie++ is a very good tool for looking at the holistic performance of any storage system.

I/O Benchmarks with GNOME Disk Utility

Recently I decided to tidy up my home office. In the process I found a lot of old computer hardware that has built up over the years. One of the more interesting finds was several old (ATA) hard drives, which got me thinking about how I could make use of them. The first thing that came to mind was using them for benchmarking purposes, i.e. to get familiar with the tools used to benchmark I/O. Being an Ubuntu user, I noticed that there's a really nice utility installed by default (go to 'System' -> 'Administration' -> 'Disk Utility'). This is actually a program called Palimpsest (the GNOME Disk Utility); as well as being able to benchmark hard drives, it also lets you use the disks' SMART capabilities to get an idea of the number of bad sectors and other warning signs that a drive may be failing.

So, I shut down my computer, plugged in the drives (no hot-plugging with ATA, unfortunately), booted up and ran the 'Read-Only' benchmark. This gives some basic numbers showing the maximum, minimum and average read rates. When I went to do the 'Read/Write' benchmark, I found that you have to completely format the disk in order to benchmark it. This involved not only deleting all of the partitions, but also wiping the MBR. Once this was done, I was able to run the Read/Write tests on both of the 40GB drives. As well as the max, min and average rates, you also get a pretty graph:

The red line corresponds to writes, with the blue line corresponding to reads. I’m not sure what all of the green points and lines correspond to. So, how did the drives perform? One of them (the one pictured above) had quite a bit of variance as you can see in the graph above. Also, the read/write rates crossed at about 40% of the way through the test, which I don’t quite understand. As a comparison, the other 40GB drive I tested was amazingly stable, with the minimum read rate only 1.2 MB/s below the maximum and the write max/min only differing by 0.1 MB/s. The output of this drive can be seen below:


So, now the only question left is: "How will they perform in a RAID array?" I put the two drives into a RAID-0 array (note that you need to install the mdadm package first).
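As a rough sketch, assuming the two drives showed up as /dev/sdb and /dev/sdc (your device names will almost certainly differ), the array can be created along these lines:

sudo apt-get install mdadm
sudo mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb /dev/sdc

Running the Read/Write benchmark against the resulting /dev/md0 device, we see the following results: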


The results are quite interesting: the average read rate for the array (21.5 MB/s) was actually worse than that of either individual drive (24.6 MB/s and 22.3 MB/s), while the average write rate was noticeably higher at 26 MB/s, compared to the individual results of 22.1 and 22.9 MB/s.

So what can we conclude from this? We probably shouldn't read too much into the results, as both of the hard drives are old and from different manufacturers. Having said that, I think it does show that write-intensive applications might benefit from RAID-0 more than read-intensive ones.