Saturday, June 2, 2012

Server Resource Provisioning

Any IT company may spend a huge amount of money on IT infrastructure, mostly on servers. Obviously, the number of servers [cost] versus keeping everything up and running [availability] is a trade-off: the more you distribute services, the higher the availability, but the lower the ROI. If you host everything on one single server in order to minimize cost, the risk only manifests in a catastrophe, and then all at once. :) In a nutshell, bundling everything into a single host does not give you good ROI in the long term. However, there are some factors to consider in order to achieve maximum ROI when allocating resources:




1) Availability requirement
2) Redundancy requirement
3) Ability to bundle into the same server
4) Service resource consumption
5) Security considerations


In what follows, I have taken a small-to-medium software development company as an example.


Availability Requirement

The importance of a service is a very critical factor to consider. As an example, your corporate website or customer support site needs to be up and running 24x7, and it is essential to achieve five-nines availability (99.999%). To achieve five nines you can only have 5.26 minutes of downtime per year (525,600 minutes in a year x 0.001% ≈ 5.26 minutes). That is roughly the time taken by three server reboots in a whole year.

So these kinds of critical services need to be isolated from all others. You should run them only on dedicated servers, no matter how many resources are still available on those servers; don't mix them with anything else. This protects you from the following:

i) The human tendency to make mistakes

If your critical system is running with no issues, why would you put a non-critical service on the same server? The more frequently you log into a server and the more modifications you make, the higher the probability of interrupting other services. Therefore, unless you actually have to modify the critical service itself, you should not need to log into that server at all.

ii) It gives you the opportunity to allocate maximum resources

As an example, if you have a 4 GB memory machine dedicated to running a single Tomcat instance, there is no reason to prevent you from allocating "-Xmx4096m" to the JVM, increasing the "memory_limit" of a PHP script, or increasing the "MaxClients" of Apache. Simple as that.
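For the Tomcat case, a minimal sketch: Tomcat's optional bin/setenv.sh startup hook is picked up automatically by catalina.sh, so you can set the heap size there (the exact heap values, and giving the JVM nearly all of the 4 GB, are assumptions for illustration on a dedicated host):

# bin/setenv.sh -- sourced by catalina.sh on startup.
# On a dedicated 4 GB machine, let the JVM have almost all of it.
CATALINA_OPTS="-Xms2048m -Xmx4096m"
export CATALINA_OPTS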






Redundancy Requirement

Redundancy is how you achieve high availability, but replication needs extra servers. You also need to make sure the redundant servers are selected from different physical locations / data centers / ISPs.

If the replication is a hot standby, where the replica serves traffic only when the primary is down, then the master node needs to be considered the most important and treated as per the availability-requirement factor above. You may use the slave nodes in conjunction with other services for as long as the master is up and running. However, if the setup is a load-balancing and fail-over scenario, you need to treat both nodes as equally important.


Ability to Bundle into the Same Server

Bundling, or hosting more than one service per machine, is ideal especially for staging/testing setups. Non-critical services can also be bundled onto the same server. However, you need to look further into mitigating potential vulnerabilities. As an example, it is not a good idea to host your SFTP/FTP server, even jailed, together with an Apache/SSL-enabled server: in case of an SFTP vulnerability exploitation, you may lose your certificate files or private keys.

Secondly, depending on usage and resource requirements, you can limit the memory allocation for each service, or limit processor cores through virtualization (a simple memory cap is sketched below). But process-intensive services like build servers and SVN need to be separated, as both need a significant amount of processing power, and at peak time both services might become unavailable.
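One simple way to cap a single bundled service's memory, assuming you start it from a wrapper script (the daemon name and the 512 MB limit are hypothetical):

#!/bin/sh
# Cap the service's address space (ulimit -v takes kilobytes:
# 524288 KB = 512 MB) before handing control to the real daemon.
ulimit -v 524288
exec /usr/sbin/some-daemon "$@"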


Services Resource Consumption

There are some services which require high CPU/memory consumption, like build systems. You have to move these services onto a separate server in order to mitigate interruptions. It is a good idea to use virtualization for service isolation and for controlling memory/processor utilization. However, you need to host as few processor-intensive programs per machine as possible, because any single over-consuming VM may lead to the total system crashing or stalling.


Security Considerations

Security is an important aspect to consider, as I explained under service bundling. Potentially suspicious services should never run alongside a security-essential service. Also, you shouldn't distribute critical / ultra-important content among many servers. As an example, SSL certificates and keys should not be distributed among many servers to enable SSL on different httpd sites. Instead you can use one server with different virtual hosts sharing the same content, as sketched below.
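A minimal sketch of the single-server approach: one certificate kept on one host, shared by several HTTPS virtual hosts (the host names, document roots, and a certificate covering both names are assumptions for illustration):

NameVirtualHost *:443

<VirtualHost *:443>
    ServerName www.example.com
    DocumentRoot /var/www/www
    SSLEngine on
    # One certificate/key pair, kept on this host only.
    SSLCertificateFile /etc/ssl/certs/example.com.crt
    SSLCertificateKeyFile /etc/ssl/private/example.com.key
</VirtualHost>

<VirtualHost *:443>
    ServerName support.example.com
    DocumentRoot /var/www/support
    SSLEngine on
    SSLCertificateFile /etc/ssl/certs/example.com.crt
    SSLCertificateKeyFile /etc/ssl/private/example.com.key
</VirtualHost>

Note that name-based SSL virtual hosts rely on SNI support in clients; a wildcard or multi-domain certificate keeps this down to a single key pair on a single server.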

Wednesday, May 23, 2012

How to give internal users direct HTTP access and outsiders LDAP auth

If you need to allow direct HTTP access from the internal network while outsiders authenticate through LDAP, the following Apache example will do the job.
Assume your internal subnet is 192.168.0.0/16 and LDAP group-based authorization is used.





<Location />
    # Adjust the Location path to the area you are protecting.
    # Internal clients pass by IP address; everyone else must pass LDAP
    # authentication ("Satisfy any" means either condition is enough).
    Order deny,allow
    Deny from all
    Allow from 127.0.0.0/255.0.0.0 ::1/128
    Allow from 192.168.0.0/16
    Satisfy any

    AuthType Basic
    AuthName "Example"
    AuthBasicProvider ldap
    AuthBasicAuthoritative on
    AuthzLDAPAuthoritative on
    AuthLDAPBindDN "uid=userid,ou=dpt,dc=crew,dc=example,dc=com"
    AuthLDAPBindPassword xxxx
    AuthLDAPURL "ldap://ldap.example.com:389/ou=dpt,dc=crew,dc=example,dc=com?uid"
    AuthLDAPGroupAttribute member
    Require valid-user
    Require ldap-group cn=group-users,ou=departmentgroup,dc=crew,dc=example,dc=com
</Location>




Friday, February 10, 2012

How to Optimize SVN Mirror

My previous blog post explains how to set up an SVN mirror using svnsync. Lately I realized how hard it is to keep the replication system running. Perhaps people have moved to proprietary replication systems like WANdisco due to the lack of replication capability in the native Subversion system. It is a really challenging task to keep SVN mirrors up and running 24x7 with no complaints from the committers. Below are some of the problems I found with svnsync [1], due to its centralized architecture:


Master/slave model means a single point of failure - If the master is down no-one can write to the repository.
No performance improvement on writes (these are simply proxied to the master server)
No guarantee of transactional integrity
No topology intelligence
Can’t build against slave/mirror if process requires master
Manual intervention is normally required in the event of network outage and latency
No optimization of SVN traffic
Recovery time and recovery point objectives are greater than zero
Requires additional solution or approach for DR, business continuity, fault tolerance


One of the most common problems is a truncated commit with the error "The specified baseline is not the latest baseline, so it may not be checked out", which occurs when the internet connection between master and slave is slow.

Sometimes svnsync dies while its lock file is still present on the slave; the following commits then return an error saying "Failed to get lock on destination repos, currently held by ...".
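When that happens, the usual remedy is to drop the stale lock, which svnsync keeps as a revision property on revision 0 of the mirror (the URL here is the example sync proxy used further below):

svn propdel svn:sync-lock --revprop -r 0 https://svnmirror.example.com/sync/proxy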

If the slave ever gets a commit inserted directly, you have to rebuild the repository.

The Apache Software Foundation themselves experienced issues with SVN replication using "svnsync"; they are listed at [2].



With Subversion 1.7, some improvements have been made: in-memory caching and data compression come with Subversion 1.7 [3]. You may need to build Subversion 1.7 manually.

How to enable the in-memory cache




# Enable a 1 GB Subversion data cache for both fulltexts and deltas.
SVNInMemoryCacheSize 1048576
SVNCacheTextDeltas On
SVNCacheFullTexts On



How to enable data compression. Level 9 denotes maximum compression; the default is 5.



SVNCompressionLevel 9



Enabling keepalive allows multiple requests to be sent over the same TCP connection. Keep the MaxKeepAliveRequests directive as high as possible.


KeepAlive On
MaxKeepAliveRequests 1000
KeepAliveTimeout 15


We found that most of the issues were caused by SVN on-the-fly replication. When a user commits a file, the master has to be updated and then the replication to the slave has to run before the commit completes. In most cases the commit fails due to replication issues.

Therefore it is good to remove svnsync from the "post-commit" hook and run it from a separate cron job at a regular interval (a sample crontab entry follows after the script).

The following shell script checks the master and slave revision numbers; if the slave is out of sync, svnsync runs. If svnsync fails, it sends a mail. Instead of sending mail, you can use this to alert on a Nagios monitoring system.



#!/bin/bash

# Compare the head revision of the master with that of the mirror.
MAINREPO=$(svn info https://svn.example.com/repo/main | awk '/^Revision:/ {print $2}')
MIRRORREPO=$(svn info https://svnmirror.example.com/repo/main | awk '/^Revision:/ {print $2}')

if [ "$MAINREPO" != "$MIRRORREPO" ]; then
    # The mirror is behind: run svnsync against the sync proxy.
    RESULT=$(/usr/bin/svnsync --sync-username svnsync --sync-password "password" sync https://svnmirror.example.com/sync/proxy)

    # svnsync prints "Committed revision N." for each revision it copies;
    # if that string is missing, the sync failed.
    CHECK=$(echo "$RESULT" | grep 'Committed revision')
    if [ -z "$CHECK" ]; then
        mail -s "SVN Mirror is not Syncing please check" admin@example.com < /dev/null
    fi
fi
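
A sample crontab entry to run the check every five minutes (the interval and script path are only examples):

# /etc/crontab -- adjust the interval and path to taste.
*/5 * * * * root /usr/local/bin/svn-mirror-check.sh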




Make sure no accidental commits get inserted into the slave. The following "pre-revprop-change" hook script on the slave makes sure only the svnsync user can perform the sync.



#!/bin/sh
# pre-revprop-change hook: argument 3 is the name of the user
# attempting the revision property change.
USER="$3"

if [ "$USER" != "svnsync" ]; then
    echo >&2 "Only the svnsync user is allowed to change revprops"
    exit 1
fi

exit 0




The following Apache configuration makes sure the sync comes only from the master and that only the svnsync user can commit to the slave.



<Location /sync/proxy>
    # The Location path matches the mirror URL used by the sync script above.
    DAV svn
    SVNPath /path/to/svn/mirror
    # Only the master's IP address may reach this location at all...
    Order deny,allow
    Deny from all
    # IP address of the master
    Allow from X.X.X.X
    # ...and on top of that it must authenticate as the svnsync user over SSL.
    AuthType Basic
    AuthName "SVN mirror Login"
    AuthUserFile /path/to/svn/password/file
    SSLRequireSSL
    Require user svnsync
</Location>




Further, if you want to do traffic load balancing based on location, you can try a GeoDNS service as described in my previous blog post.

I have committed all the configurations to GitHub. The syntax highlighter does not show the configuration properly; you may find it here: http://bit.ly/yL4jUh



[1] http://www.svnforum.org/entries/8-Is-svnsync-good-enough
[2] http://www.apache.org/dev/version-control.html#svnproblems
[3] http://subversion.apache.org/docs/release-notes/1.7.html

Before Configuring Mailman



DEFAULT_URL_HOST needs to be set properly to your server host name (e.g. lists.example.com) before you create any mailing list. If it is not, welcome mails and mail footers will not have proper links to the mailing interface. Even changing the variable after creating the list will have no effect, as the message templates do not change accordingly.
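In Mailman 2.x this lives in mm_cfg.py; a minimal sketch (the file path varies by distribution, and the host name is an example):

# /etc/mailman/mm_cfg.py (path varies by distribution)
DEFAULT_URL_HOST = 'lists.example.com'
DEFAULT_EMAIL_HOST = 'lists.example.com'
# Register the web/email host pair so list URLs are generated correctly.
add_virtualhost(DEFAULT_URL_HOST, DEFAULT_EMAIL_HOST)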

In case you have already changed the host name, the best thing is to take a backup of the user list, delete the mailing list (without the archives), recreate the mailing list, and re-import the user list. The new host name will then take effect in the welcome messages.


You need to create the MAILMAN_SITE_LIST mailing list (the site list, usually "mailman") before starting Mailman, and make sure to add the sys-admins to it.

You need to create an Apache vhost to host the archives and images.


ScriptAlias /mailman/ /usr/lib/cgi-bin/mailman/
ScriptAlias /cgi-bin/mailman/ /usr/lib/cgi-bin/mailman/

Alias /images/ /usr/share/images/

# Adjust the <Directory> paths below to match your Mailman installation.
<Directory /usr/lib/cgi-bin/mailman/>
   AllowOverride None
   Options ExecCGI
   Order allow,deny
   Allow from all
</Directory>

<Directory /usr/share/images/>
   AllowOverride None
   Options ExecCGI
   Order allow,deny
   Allow from all
</Directory>

Alias /pipermail/ /var/lib/mailman/archives/public/

<Directory /var/lib/mailman/archives/public/>
   Options Indexes MultiViews FollowSymLinks
   AllowOverride None
   Order allow,deny
   Allow from all
</Directory>


Make sure to increase the attachment size to at least 5 MB.
Set "Prefix for subject line of list postings".
Enable moderator approval for the MAILMAN_SITE_LIST list and set the steps required for subscription to "Require approval".
Disable the archive for MAILMAN_SITE_LIST.
If you are enabling Mailman through Postfix, the "postalias" command will help to generate the hash map of the mailing list aliases, as shown below.
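
For the Postfix case, a minimal sketch (the aliases path is the common Debian/Ubuntu default and may differ on your system):

# Rebuild the hash map after Mailman regenerates its aliases file.
postalias /var/lib/mailman/data/aliases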

Tuesday, February 7, 2012

How to Install PGeoDNS [GeoDNS]

There has been a problem since the early days of the internet: how to reach the closest server from any area. Someone might point out "anycast" as a solution. In most IPv4 implementations, anycast is done by advertising the same BGP prefixes from different locations in the world; they are stored in the global BGP table with different metrics. When a packet comes for a particular destination, it is sent to the closest destination according to the BGP metric. But this method is not easy to implement, because you need to have your own public IP range.

Another solution is DNS load balancing, where you configure two "A" records for the same domain name, but it does not guarantee that a client will always get the closest server's IP resolved by DNS. To overcome these issues, geographically aware DNS servers have been introduced. These DNS servers resolve your domain and give the user the IP address of the closest server. Basically, they look at the source IP address of the query and reply by matching it against their internal databases. MaxMind [1] is one of these country-IP database providers. PGeoDNS uses the Geo::IP Perl module for this purpose.

[1] http://www.maxmind.com/

Following is how to install PGeoDNS.

You have to download the following Perl libraries from CPAN. Note: the following versions worked for me.

Geo-IP-1.40
IO-Socket-INET6-2.69
JSON-2.53
JSON-XS-2.32
Net-DNS-0.67
Scalar-List-Utils-1.23
Socket6-0.23


and also PGeoDNS itself:

pgeodns-1.40


Now each Perl module, including PGeoDNS itself, needs to be installed one by one as follows:

perl Makefile.PL # will warn if any dependencies are missing
make
make test # optional
make install


You have to add a new user to run PGeoDNS. Add the user as follows:

adduser pgeodns

The zone configuration needs to be written in JSON notation. Sample config files can be downloaded from the Apache infra site; they will give you an idea of how the configuration should look:

https://svn.apache.org/repos/infra/infrastructure/trunk/dns/zones/pgeodns.conf
https://svn.apache.org/repos/infra/infrastructure/trunk/dns/zones/geo.apache.org.json


You can start the service with the following command as root:

pgeodns --config=pgeodns.conf --interface=192.168.1.2 --user=pgeodns  --verbose

Check the DNS queries as follows:

dig a svn.geo.apache.org @192.168.1.2

Wednesday, December 7, 2011

SPF SOFTFAIL vs FAIL

When you are configuring SPF records in your DNS servers, you have to clearly define the mail server policy to prevent unauthorized servers from sending mail using your domain. Following is the SOFTFAIL vs FAIL comparison for SPF records.

~all ==> defines SOFTFAIL
-all ==> defines FAIL

With SOFTFAIL, receiving servers will usually still accept mail from an unauthorized sender but mark it as spam, while FAIL tells them to reject such mail at the mail server itself.

When you start configuring SPF records, it is good to start with SOFTFAIL; once you have identified and verified all your outgoing mail servers, you can move to SPF FAIL.

As a good practice, it is better to configure both the TXT and SPF record types, as in the sketch below.
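A minimal BIND-style sketch (the domain, the extra IP address, and the chosen mechanisms are examples only; the SPF record type requires DNS server support):

; Authorize the domain's MX hosts plus one extra IP; soft-fail everything else.
example.com.  IN TXT "v=spf1 mx ip4:192.0.2.10 ~all"
example.com.  IN SPF "v=spf1 mx ip4:192.0.2.10 ~all"

Once you are confident the list of senders is complete, switch ~all to -all.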

Monday, November 14, 2011

Routing in Dual Interface Linux Servers

If you have two interfaces, eth0 and eth1, in a Linux server, incoming traffic on eth0 may get its replies sent out through eth1. Sometimes this causes problems; e.g. mail servers may receive mail through one interface and send through another. Therefore it is good practice to keep a separate routing table for each interface in a Linux box, and to send reply data back through the same interface it was received on.


Add two routing tables

Add routing tables on /etc/iproute2/rt_tables file

1 tble_eth0
2 tble_eth1


Send replies out on the interface the data came in on

The interface addresses are 10.100.0.1 (eth0) and 10.100.0.2 (eth1), and the gateway is 10.100.0.254.

ip route add 10.100.0.0/24 dev eth0 src 10.100.0.1 table tble_eth0
ip route add default via 10.100.0.254 dev eth0 src 10.100.0.1 table tble_eth0
ip rule add from 10.100.0.1 table tble_eth0

ip route add 10.100.0.0/24 dev eth1 src 10.100.0.2 table tble_eth1
ip route add default via 10.100.0.254 dev eth1 src 10.100.0.2 table tble_eth1
ip rule add from 10.100.0.2 table tble_eth1

Run the above script when the interfaces come up at boot; one way to hook it in is sketched below.
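
On a Debian-style system, you could attach the eth0 rules to the interface with post-up lines in /etc/network/interfaces (a sketch; repeat the same pattern for eth1 and adapt addresses and table names to the example above):

# /etc/network/interfaces (fragment)
auto eth0
iface eth0 inet static
    address 10.100.0.1
    netmask 255.255.255.0
    gateway 10.100.0.254
    post-up ip route add 10.100.0.0/24 dev eth0 src 10.100.0.1 table tble_eth0
    post-up ip route add default via 10.100.0.254 dev eth0 src 10.100.0.1 table tble_eth0
    post-up ip rule add from 10.100.0.1 table tble_eth0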