Category Archives: VPS - Page 2

Resize XEN VM disk and Swap Disk

master:~# fdisk /dev/xvda

Command (m for help): p
Command (m for help): d

Partition number (1-4):<enter partition number>

Command (m for help): n
Command action
e   extended
p   primary partition (1-4)
p
Partition number (1-4): <enter partition number>
First cylinder (32-6527, default 32):
Last cylinder or +size or +sizeM or +sizeK (32-6527, default 6527):

Command (m for help): t
Partition number (1-4): <enter partition number>
Hex code (type L to list codes): 83   (83 is a standard Linux partition; use 8e only if this partition is an LVM physical volume)
Command (m for help): p
Command (m for help): w

The partition table has been altered!
master:~# reboot

Once it has rebooted, you just have to resize the filesystem with:

master:~# resize2fs /dev/xvda1
master:~# reboot
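Once the box is back up, confirm that the filesystem now spans the enlarged partition:

master:~# df -h /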

Resize the SWAP Disk like this:

<run through the regular resize for a disk and reboot>

master:~# swapoff -a

master:~# cfdisk /dev/swap-partition

master:~# mkswap -c /dev/swap-partition

master:~# swapon -a
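A quick way to confirm the enlarged swap is active (swapon -a relies on the swap partition being listed in /etc/fstab):

master:~# swapon -s
master:~# free -m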


 

Disk speed VPS, how fast?


dd if=/dev/zero of=test bs=64k count=512 oflag=dsync

The number one bottleneck for a VPS is disk I/O. The easy way to find out the quality of your VPS’s disk I/O is to issue the above command, which writes a 32MB file with synchronised output (oflag=dsync) and reports the time taken and the average transfer speed.
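The output will look something like this (the timing and throughput figures below are purely illustrative); remember to delete the test file afterwards:

512+0 records in
512+0 records out
33554432 bytes (34 MB) copied, 2.05 seconds, 16.4 MB/s

rm test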

PCRE unicode support CENTOS

The stock CentOS PCRE package is version 6.6, and it cannot be removed with yum without also removing hundreds of dependencies…

I needed to install PCRE with Unicode support, which meant compiling my own package.

First you need to install the build tools:


yum install gcc-c++

Then download the latest PCRE from

http://www.pcre.org/

unpack and configure
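For example, assuming you grabbed the 8.10 tarball (match whatever version you actually downloaded, and the --docdir below):

tar xzf pcre-8.10.tar.gz
cd pcre-8.10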


./configure --prefix=/usr
--docdir=/usr/share/doc/pcre-8.10
--enable-utf8
--enable-unicode-properties


make


make check


make install

The libraries will be installed to /usr/lib, and you will need to add a new path for the dynamic (DSO) loader. Create a file called pcre.conf in the directory


/etc/ld.so.conf.d

and insert the path


/usr/lib

Save, close, and update the loader cache by running


ldconfig
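The same two steps can be done in one go, and you can check that the loader now sees the new library:

echo "/usr/lib" > /etc/ld.so.conf.d/pcre.conf
ldconfig
ldconfig -p | grep libpcre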

Restart your web server and check phpinfo() to confirm that the PCRE version has been updated to 8.11 (or whichever release you installed).
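You can also confirm Unicode support straight from the command line; the compile options printed should include UTF-8 and Unicode properties support:

pcretest -C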

Logrotate

Rotating your log files will save you a bunch of space from bloated web logs growing out of control. Logrotate is easily customisable to suit any needs.

To force logrotate to rotate out of schedule use the -f flag, and to make it print what it is doing to the screen use -v for verbose.
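For example, to force an immediate rotation of everything defined in the main configuration while watching what it does:

logrotate -vf /etc/logrotate.conf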

Below is an excerpt for logrotate configured with lighttpd. You will need to reload your specific web server with its own reload command.
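A minimal /etc/logrotate.d/lighttpd stanza might look something like this (the log path, schedule and reload command here are assumptions, so adjust them to your own setup):

/var/log/lighttpd/*.log {
    weekly
    rotate 4
    compress
    missingok
    notifempty
    sharedscripts
    postrotate
        /etc/init.d/lighttpd reload > /dev/null 2>&1 || true
    endscript
}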

Subversion + Lighttpd + Apache

yum install -y subversion mod_dav_svn

groupadd svn
useradd -g svn svn

mkdir -pm700 /var/svn/projects
svnadmin create /var/svn/projects/test
chown -R svn:svn /var/svn/projects

mkdir -p /var/www/projects/httpdocs

edit httpd.conf

Listen 8080

LoadModule dav_module modules/mod_dav.so
LoadModule dav_svn_module modules/mod_dav_svn.so
LoadModule authz_svn_module modules/mod_authz_svn.so

Change the user and Group from apache:apache to svn:svn

User svn
Group svn

Add a virtual host.


<VirtualHost *:8080>
ServerName projects.example.com
DocumentRoot /var/www/projects/httpdocs

<Location /svn/test>
DAV svn
SVNPath /var/svn/projects/test
AuthType Basic
AuthName "Test Subversion repository"
AuthUserFile /var/svn/projects/test/conf/users
Require valid-user
Order allow,deny
Allow from all
</Location>
</VirtualHost>


create a password for the user svnclient

htpasswd -cm /var/svn/projects/test/conf/users svnclient
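To add more users later, drop the -c flag so the existing password file is appended to rather than recreated (anotheruser here is just a placeholder name):

htpasswd -m /var/svn/projects/test/conf/users anotheruser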

edit lighttpd conf file

nano /etc/lighttpd/lighttpd.conf

Make sure mod_proxy is enabled in the server.modules list.
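If it is not already listed, it can be appended like this:

server.modules += ( "mod_proxy" )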

add in a redirect for svn requests

$HTTP["host"] == "projects.example.com" {
server.document-root = "/var/www/projects/httpdocs"
proxy.server = (
"/svn/test" => (("host" => "127.0.0.1", "port" => 8080))
)
}

restart apache and lighttpd

test the svn client

svn import /var/www/projects/httpdocs file:///var/svn/projects/test -m "Initial import"
svn checkout --username svnclient http://projects.example.com/svn/test
cd test
svn mkdir branches tags trunk
svn commit

to add a new repository to the SVN

svnadmin create /var/svn/projects/newdir
chown -R svn:svn /var/svn/projects

Then add a new location in the virtual host file


<Location /svn/newdir>
DAV svn
SVNPath /var/svn/projects/newdir
AuthType Basic
AuthName "Test Subversion repository"
AuthUserFile /var/svn/projects/test/conf/users
Require valid-user
Order allow,deny
Allow from all
</Location>
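Reload Apache so the new location takes effect (and add a matching proxy.server entry in lighttpd if the new repository should be reachable through it):

/etc/init.d/httpd reload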

UnixBench 5.1.2: a study measuring performance across different spec VPSs


The idea was pretty simple. See how my VPS benchmarked using the unixbench script.

Then I took the idea further by upgrading my VPS to different sizes to see how performance tracked against different classes of VPS.

Linode has a simple way to reconfigure VPS instances with more memory and space.

Upgrading services is completely symmetric: bandwidth, memory, disk size and price all scale linearly.

In the symmetric Linode world, which relies on the Xen virtualisation platform, only a certain number of ‘nodes’ can reside on each physical box: say 40 nodes on a machine hosting the 512MB VPS offering, and therefore 20 nodes on one hosting the 1024MB VPS. The larger the VPS, the fewer users, and therefore potentially more CPU time and better disc I/O.

UnixBench should help us understand what the potential benefits are of upgrading our VPS service in terms of disc I/O and CPU. Let’s take a look at the results.

First off we started with a clean 512MB VPS running CentOS 5.5 (32-bit), yum updated and with gcc/make installed so that we could run UnixBench. It is important to note that the Linode CPU was identical across all four machines (Xeon L5630 @ 2.13GHz), with 4 virtual cores enabled.

The latest UnixBench was then downloaded from the Google Code repository and set off to work on our 512MB instance. Here are the results:

512MB Linode

4 CPUs in system; running 1 parallel copy of tests
Dhrystone 2 using register variables 9880922.0 lps (10.0 s, 7 samples)
Double-Precision Whetstone 1932.9 MWIPS (10.3 s, 7 samples)
Execl Throughput 1524.7 lps (29.9 s, 2 samples)
File Copy 1024 bufsize 2000 maxblocks 323047.5 KBps (30.0 s, 2 samples)
File Copy 256 bufsize 500 maxblocks 84232.5 KBps (30.0 s, 2 samples)
File Copy 4096 bufsize 8000 maxblocks 864676.5 KBps (30.0 s, 2 samples)
Pipe Throughput 449392.8 lps (10.0 s, 7 samples)
Pipe-based Context Switching 22118.8 lps (10.0 s, 7 samples)
Process Creation 2462.9 lps (30.0 s, 2 samples)
Shell Scripts (1 concurrent) 3618.6 lpm (60.0 s, 2 samples)
Shell Scripts (8 concurrent) 1145.2 lpm (60.0 s, 2 samples)
System Call Overhead 452957.5 lps (10.0 s, 7 samples)

System Benchmarks Index Values BASELINE RESULT INDEX
Dhrystone 2 using register variables 116700.0 9880922.0 846.7
Double-Precision Whetstone 55.0 1932.9 351.4
Execl Throughput 43.0 1524.7 354.6
File Copy 1024 bufsize 2000 maxblocks 3960.0 323047.5 815.8
File Copy 256 bufsize 500 maxblocks 1655.0 84232.5 509.0
File Copy 4096 bufsize 8000 maxblocks 5800.0 864676.5 1490.8
Pipe Throughput 12440.0 449392.8 361.2
Pipe-based Context Switching 4000.0 22118.8 55.3
Process Creation 126.0 2462.9 195.5
Shell Scripts (1 concurrent) 42.4 3618.6 853.4
Shell Scripts (8 concurrent) 6.0 1145.2 1908.7
System Call Overhead 15000.0 452957.5 302.0
========
System Benchmarks Index Score 473.0

------------------------------------------------------------------------
Benchmark Run: Fri Nov 19 2010 01:31:40 - 02:00:07
4 CPUs in system; running 4 parallel copies of tests

Dhrystone 2 using register variables 39295363.6 lps (10.0 s, 7 samples)
Double-Precision Whetstone 7670.9 MWIPS (10.3 s, 7 samples)
Execl Throughput 5676.3 lps (29.4 s, 2 samples)
File Copy 1024 bufsize 2000 maxblocks 311812.9 KBps (30.0 s, 2 samples)
File Copy 256 bufsize 500 maxblocks 82966.6 KBps (30.0 s, 2 samples)
File Copy 4096 bufsize 8000 maxblocks 1048007.7 KBps (30.0 s, 2 samples)
Pipe Throughput 1795144.2 lps (10.0 s, 7 samples)
Pipe-based Context Switching 212468.4 lps (10.0 s, 7 samples)
Process Creation 8793.6 lps (30.0 s, 2 samples)
Shell Scripts (1 concurrent) 9378.8 lpm (60.0 s, 2 samples)
Shell Scripts (8 concurrent) 1283.3 lpm (60.1 s, 2 samples)
System Call Overhead 1620850.7 lps (10.0 s, 7 samples)

System Benchmarks Index Values BASELINE RESULT INDEX
Dhrystone 2 using register variables 116700.0 39295363.6 3367.2
Double-Precision Whetstone 55.0 7670.9 1394.7
Execl Throughput 43.0 5676.3 1320.1
File Copy 1024 bufsize 2000 maxblocks 3960.0 311812.9 787.4
File Copy 256 bufsize 500 maxblocks 1655.0 82966.6 501.3
File Copy 4096 bufsize 8000 maxblocks 5800.0 1048007.7 1806.9
Pipe Throughput 12440.0 1795144.2 1443.0
Pipe-based Context Switching 4000.0 212468.4 531.2
Process Creation 126.0 8793.6 697.9
Shell Scripts (1 concurrent) 42.4 9378.8 2212.0
Shell Scripts (8 concurrent) 6.0 1283.3 2138.9
System Call Overhead 15000.0 1620850.7 1080.6
========
System Benchmarks Index Score 1230.9

I’ll summarise the important numbers


4 parallel copies of tests
File Copy 1024 bufsize 2000 maxblocks 311812.9 KBps (30.0 s, 2 samples)
File Copy 256 bufsize 500 maxblocks 82966.6 KBps (30.0 s, 2 samples)
File Copy 4096 bufsize 8000 maxblocks 1048007.7 KBps (30.0 s, 2 samples)
Pipe Throughput 1795144.2 lps (10.0 s, 7 samples)

1 parallel copy of test Score 473.0
4 parallel copies of test Score 1230.9

473 is a pretty awesome score. The big point to note here is the wicked disk I/O: 311MB/s for the 1024 buffer size and 1048MB/s for 4096!!!! That is some pretty amazing performance; those numbers would suggest that Linode are packing SSDs to cope with the load generated by 40-odd users.

Let’s have a look at the numbers from the 1GB Linode:


4 CPUs in system; running 1 parallel copy of tests

Dhrystone 2 using register variables 9625775.6 lps (10.0 s, 7 samples)
Double-Precision Whetstone 1912.9 MWIPS (10.2 s, 7 samples)
Execl Throughput 1246.5 lps (30.0 s, 2 samples)
File Copy 1024 bufsize 2000 maxblocks 76893.8 KBps (30.0 s, 2 samples)
File Copy 256 bufsize 500 maxblocks 19415.0 KBps (30.0 s, 2 samples)
File Copy 4096 bufsize 8000 maxblocks 276594.9 KBps (30.0 s, 2 samples)
Pipe Throughput 86488.9 lps (10.0 s, 7 samples)
Pipe-based Context Switching 16362.5 lps (10.0 s, 7 samples)
Process Creation 2301.2 lps (30.0 s, 2 samples)
Shell Scripts (1 concurrent) 3109.2 lpm (60.0 s, 2 samples)
Shell Scripts (8 concurrent) 953.0 lpm (60.0 s, 2 samples)
System Call Overhead 446224.4 lps (10.1 s, 7 samples)

System Benchmarks Index Values BASELINE RESULT INDEX
Dhrystone 2 using register variables 116700.0 9625775.6 824.8
Double-Precision Whetstone 55.0 1912.9 347.8
Execl Throughput 43.0 1246.5 289.9
File Copy 1024 bufsize 2000 maxblocks 3960.0 76893.8 194.2
File Copy 256 bufsize 500 maxblocks 1655.0 19415.0 117.3
File Copy 4096 bufsize 8000 maxblocks 5800.0 276594.9 476.9
Pipe Throughput 12440.0 86488.9 69.5
Pipe-based Context Switching 4000.0 16362.5 40.9
Process Creation 126.0 2301.2 182.6
Shell Scripts (1 concurrent) 42.4 3109.2 733.3
Shell Scripts (8 concurrent) 6.0 953.0 1588.3
System Call Overhead 15000.0 446224.4 297.5
========
System Benchmarks Index Score 271.8

------------------------------------------------------------------------
Benchmark Run: Thu Nov 18 2010 20:41:48 - 21:09:45
4 CPUs in system; running 4 parallel copies of tests

Dhrystone 2 using register variables 38290705.2 lps (10.0 s, 7 samples)
Double-Precision Whetstone 7572.1 MWIPS (10.3 s, 7 samples)
Execl Throughput 4521.7 lps (29.9 s, 2 samples)
File Copy 1024 bufsize 2000 maxblocks 103936.6 KBps (30.0 s, 2 samples)
File Copy 256 bufsize 500 maxblocks 26407.4 KBps (30.0 s, 2 samples)
File Copy 4096 bufsize 8000 maxblocks 395275.9 KBps (30.0 s, 2 samples)
Pipe Throughput 158949.9 lps (10.0 s, 7 samples)
Pipe-based Context Switching 76362.5 lps (10.0 s, 7 samples)
Process Creation 7095.2 lps (30.0 s, 2 samples)
Shell Scripts (1 concurrent) 7789.2 lpm (60.0 s, 2 samples)
Shell Scripts (8 concurrent) 1065.2 lpm (60.1 s, 2 samples)
System Call Overhead 1605531.3 lps (10.0 s, 7 samples)

System Benchmarks Index Values BASELINE RESULT INDEX
Dhrystone 2 using register variables 116700.0 38290705.2 3281.1
Double-Precision Whetstone 55.0 7572.1 1376.7
Execl Throughput 43.0 4521.7 1051.6
File Copy 1024 bufsize 2000 maxblocks 3960.0 103936.6 262.5
File Copy 256 bufsize 500 maxblocks 1655.0 26407.4 159.6
File Copy 4096 bufsize 8000 maxblocks 5800.0 395275.9 681.5
Pipe Throughput 12440.0 158949.9 127.8
Pipe-based Context Switching 4000.0 76362.5 190.9
Process Creation 126.0 7095.2 563.1
Shell Scripts (1 concurrent) 42.4 7789.2 1837.1
Shell Scripts (8 concurrent) 6.0 1065.2 1775.4
System Call Overhead 15000.0 1605531.3 1070.4
========
System Benchmarks Index Score 657.3

Summary

1 parallel copy of test
File Copy 1024 bufsize 2000 maxblocks 76893.8 KBps (30.0 s, 2 samples)
File Copy 256 bufsize 500 maxblocks 19415.0 KBps (30.0 s, 2 samples)
File Copy 4096 bufsize 8000 maxblocks 276594.9 KBps (30.0 s, 2 samples)
Pipe Throughput 86488.9 lps (10.0 s, 7 samples)

1 parallel copy of test Score 271.8
4 parallel copies of test Score 657.3

WOW! Disc speed on the 1GB Linode is not as impressive as on the 512MB system.

The tests from the 2GB and 4GB systems were almost mirror images of the 1GB system; I’ll summarise them:

2GB

File Copy 1024 bufsize 2000 maxblocks 77849.3 KBps (30.0 s, 2 samples)
File Copy 256 bufsize 500 maxblocks 19654.5 KBps (30.0 s, 2 samples)
File Copy 4096 bufsize 8000 maxblocks 279506.5 KBps (30.0 s, 2 samples)
Pipe Throughput 87483.1 lps (10.0 s, 7 samples)

1 parallel copy of test Score 273.0
4 parallel copies of test Score 667.9

4GB

File Copy 1024 bufsize 2000 maxblocks 68375.9 KBps (30.0 s, 2 samples)
File Copy 256 bufsize 500 maxblocks 17805.5 KBps (30.0 s, 2 samples)
File Copy 4096 bufsize 8000 maxblocks 252472.2 KBps (30.0 s, 2 samples)
Pipe Throughput 79473.8 lps (10.0 s, 7 samples)

1 parallel copy of test Score 241.0
4 parallel copies of test Score 569.

Conclusion:
Linode is not slacking off when it comes to providing a top-level service. My current ‘node only benches 273, and I never notice a problem with disc I/O. Those lucky users who get a Linode that is probably RAID-10 SSD based certainly won’t be complaining about other users chewing up all the disc I/O.

It would be interesting to know how many active users were on the box when I was running these tests, and how those numbers would change in a full house. I get the impression that performance would be as good as, if not better than, the other Linode services.

Hats off to Linode!

YUM tricks of the trade

# make yum update all programs every night


chkconfig yum on
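Depending on the CentOS release, the nightly updater may instead be provided by the yum-updatesd daemon; if the service above is not present, the equivalent is (assuming the yum-updatesd package is installed):

chkconfig yum-updatesd on
service yum-updatesd start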

# make yum install program and answer YES to all queries.

yum -y install <package>

How to setup your first VPS – linode – PART 2

So we’ve set up our server, it is up to date, and root login has been disabled. Now to set up our webserver:

yum install httpd

We’ll assume you’ll be hosting more than one site, so we will be using Virtual Hosts; let’s keep all the vhost definitions in one file to make editing websites later on easier.

Let’s edit the vhost file

nano /etc/httpd/conf.d/vhost.conf


NameVirtualHost *:80

<VirtualHost *:80>
ServerAdmin webmaster@domain.com
ServerName domain.com
ServerAlias www.domain.com
DocumentRoot /srv/www/domain.com/public_html/
ErrorLog /srv/www/domain.com/logs/error.log
CustomLog /srv/www/domain.com/logs/access.log combined
</VirtualHost>

Replace domain.com with your domain name, and insert your email address in the ServerAdmin section.

Now add in your domain web directories

mkdir -p /srv/www/domain.com/public_html
mkdir -p /srv/www/domain.com/logs

Now let’s start up the web server!

/etc/init.d/httpd start
/sbin/chkconfig --levels 235 httpd on

If all has gone well you should see the green OK box come up. To check that it’s worked, open up your browser and paste in the IP address (of course make sure there is an HTML file in there!) and your web server should be alive!
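A quick way to drop a test page into the DocumentRoot created above (using the same domain.com placeholder):

echo "It works!" > /srv/www/domain.com/public_html/index.html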

MySQL install: execute the following commands and follow the prompts

yum install mysql-server
/sbin/chkconfig --levels 235 mysqld on
/etc/init.d/mysqld start
mysql_secure_installation
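A quick sanity check that MySQL is running and accepting the root password you just set:

mysql -u root -p -e "SHOW DATABASES;"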

PHP Install

yum install php php-pear
yum install php-mysql

Restart Apache to activate PHP

/etc/init.d/httpd restart

To check that PHP has been installed successfully, create a file in your web root like this

nano /srv/www/domain.com/public_html/phpinfo.php

and insert

<?php phpinfo(); ?>
Open your browser and navigate to http://youripaddress/phpinfo.php and you should see the standard phpinfo() page.


How to setup your first VPS – linode – PART 1


For this example we’ll be setting up a VPS with the awesome folks at Linode, so let’s begin! Once we are past the account setup stuff, this guide will help you set up a VPS with any provider.

Select a plan

Select a plan, fill in the form, hand over your credit card details and submit! Within a few minutes you’ll receive an account activation email. Using your username and password, log into your control panel at www.linode.com.

Click on deploy distro

For this example we’ll be spinning up a 32-bit CentOS distro. No need to make any other changes, just slot in your password. Once the image has been installed your Linode is ready to boot.

Click on the boot button.

Click on the network tab to find your IP address; we’ll need this to SSH into your new VPS to complete the setup process. Your IP is located next to the heading eth0: in our case this is 173.255.216.68.

On your local machine open up a terminal window (or download PuTTY if you are on Windows).


In your terminal type (replace 100.100.100.100 with your ip address)

ssh root@100.100.100.100

You may then be asked to authenticate your host’s RSA fingerprint; type yes and hit enter.

Now you’ll be asked for your password, enter it and hit return.

Well done, you’ve made it into your new VPS! We’ll start with a bit of housework to get your VPS up to scratch. First we’ll see if there are any updates that need to be installed; CentOS has a package manager called yum. To execute the update type this:

yum update

YUM will work out which packages need to be downloaded and present you with a list of the packages to be updated:


Hit y and then return and watch your system get updated.

The next thing to do is to remove root SSH access to your server: this is basic security 101. We will disable the root user’s ability to log into the server via SSH and instead create another user named ‘superdude’ (you can pick whatever name you like, but try to avoid generic names like admin). While we are at it, we’ll also install a program called DenyHosts, which monitors the system for failed login attempts and bans hosts that are trying to break into the machine.


useradd superdude
passwd superdude

Type in your new password. You’ll notice I typed in a common word found in a dictionary, which CentOS rejected; make sure your password is STRONG, meaning an alphanumeric combination greater than 7 characters that also includes at least one special character like !@#$%^&*()><.

Now let’s install DenyHosts; to do this we must first install the RPMforge repo.

wget http://packages.sw.be/rpmforge-release/rpmforge-release-0.5.1-1.el5.rf.i386.rpm

Install the GPG key

rpm --import http://apt.sw.be/RPM-GPG-KEY.dag.txt

Verify the package

rpm -K rpmforge-release-0.5.1-1.el5.rf.*.rpm

Install RPMFORGE

rpm -i rpmforge-release-0.5.1-1.el5.rf.*.rpm

Now run a yum check:

yum check-update

Now let’s install DenyHosts with the following command.

yum -y install denyhosts

The default settings are OK, but if you want to customise them you need to edit this file:

nano /etc/denyhosts/denyhosts.cfg

Now let’s turn on the daemon so that it runs 24/7:

chkconfig denyhosts on
service denyhosts start

Now let’s turn off root login access. Edit:

nano /etc/ssh/sshd_config

Find this section:

# Authentication:
#LoginGraceTime 2m
#PermitRootLogin yes
#StrictModes yes
#MaxAuthTries 6

And modify to look like this:

# Authentication:
LoginGraceTime 2m
PermitRootLogin no
StrictModes yes
MaxAuthTries 6

Now let’s restart sshd:

/etc/init.d/sshd restart

The next time you SSH into your server you will need to log in as superdude; however, you won’t have root access until you switch to the superuser. To do this, execute:

su -

followed by your ROOT PASSWORD

In our next edition we’ll set up the webserver/MySQL/PHP.

ntop

http://www.banym.de/projects/centos-fedora/install-ntop-on-centos