The first thing that I have to say, after more than 10 years working with different OSes, is that there is no better operating system than Linux. Any other OS that I've worked with is pure shit, in my humble opinion of course. HP-UX is one of these. It's a closed box with custom patches here and there, not a true, modern OS like Linux or FreeBSD and the like. The compiler is closed source and it's not free.
The best way that I've found to compile Apache with gcc on HP-UX 11.11 (PA-RISC) using free open source software is the following.
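Something along these lines should work (just a sketch: the prefix is up to you, and I build with gcc and the bundled APR and expat so that as few HP-UX libraries as possible get in the way):

$ cd /var/adm/crash/src/httpd-2.2.21
$ CC=gcc ./configure --prefix=/usr/local/apache2 --with-included-apr --with-expat=builtin
$ gmake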
At this point, after a few minutes, you will probably end up with a link error like this one:
/var/adm/crash/src/httpd-2.2.21/srclib/apr/libtool --silent --mode=link gcc -g -O2 -pthread -L/usr/local/lib -o htpasswd htpasswd.lo -lm /var/adm/crash/src/httpd-2.2.21/srclib/pcre/libpcre.la /var/adm/crash/src/httpd-2.2.21/srclib/apr-util/libaprutil-1.la /var/adm/crash/src/httpd-2.2.21/srclib/apr-util/xml/expat/libexpat.la -liconv /var/adm/crash/src/httpd-2.2.21/srclib/apr/libapr-1.la -lrt -lm -lpthread -ldld
libtool: link: warning: this platform does not like uninstalled shared libraries
libtool: link: `htpasswd' will be relinked during installation
/usr/ccs/bin/ld: Unsatisfied symbols:
apr_generate_random_bytes (first referenced in .libs/htpasswd.o) (code)
collect2: ld returned 1 exit status
gmake[2]: *** [htpasswd] Error 1
gmake[2]: Leaving directory `/var/adm/crash/src/httpd-2.2.21/support'
gmake[1]: *** [all-recursive] Error 1
gmake[1]: Leaving directory `/var/adm/crash/src/httpd-2.2.21/support'
gmake: *** [all-recursive] Error 1
This means that the APR library cannot generate random bytes. I still have to investigate why; probably the system is not capable of (or not patched for) generating pseudo-random numbers at the kernel level (/dev/random or /dev/urandom), so the APR library breaks. Not a problem: simply skip building the htpasswd executable. You will probably not need it.
Now get back to compiling:
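One way to skip it (a sketch of mine; the exact Makefile layout may differ between httpd versions) is to let gmake ignore the failing htpasswd target and carry on:

$ gmake -i

or, alternatively, edit support/Makefile and remove htpasswd from the list of programs to build, then run gmake again.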
When it finishes, simply run "gmake install", and you are hopefully done, wondering why you are still using a non-modern OS and soon becoming a happy new Linux user…
😉 I hope this helps some Linux user fighting on HP-UX like me!
Ciao, Dino.
When you have to publish mainly static content, like static sites, the most powerful solution is to configure your Apache HTTP server to use the MPM Worker together with the mod_mem_cache and mod_deflate modules.
Why the MPM Worker
It implements a multi-process / multi-threaded server model. The parent process spawns child processes, and each child process spawns threads; each thread handles one client connection.
This implementation can handle a large number of requests with fewer system resources than the standard prefork multi-process server model.
Please note that you cannot use the MPM Worker in server environments that are not thread safe. For example, PHP, mod_perl and other dynamic page processors do not guarantee that the environment is completely thread safe, so my advice is to NOT USE the MPM Worker with PHP, mod_perl and the like.
The MPM Worker can also consume much less memory, because heap memory is shared among the threads of a process, while that's not true between processes.
For more information you can read the official page: http://httpd.apache.org/docs/2.2/mod/worker.html
Why the mod_mem_cache module
This module can be configured to cache open file descriptors and objects in heap storage (memory).
When an object (HTML, CSS, JS, etc.) is requested by a client for the first time, it gets saved into heap memory; from the second request on, the object is served directly from the memory cache. This can lower both CPU usage and disk I/O.
For more information you can read the official page: http://httpd.apache.org/docs/2.2/mod/mod_mem_cache.html
Why the mod_deflate module
It allows output from your server to be compressed before being sent to the client. The HTTP/1.1 protocol has a header called Accept-Encoding, with which a client can tell the server which response encodings it can read.
Any modern browser today can handle page compression, so why not use it? With it you can save bandwidth.
For more information you can read the official page: http://httpd.apache.org/docs/2.2/mod/mod_deflate.html
OK, let's begin enabling all that stuff.
The first step is to compile Apache from source.
If you want to use the packages released by your Linux distribution instead of compiling Apache yourself, you can do that too.
Always choose the latest stable Apache version available.
To compile Apache 2.2.x with most modules in shared form (*.so) you should run this configure command:
$ ./configure --prefix=<YOUR_APACHE_DIR> --with-mpm=worker --with-included-apr --with-expat=builtin --enable-mods-shared=most --enable-ssl --enable-proxy --enable-proxy-connect --enable-proxy-http --enable-proxy-balancer --enable-cache --enable-disk-cache --enable-mem-cache --enable-nonportable-atomics=yes
Then, as usual, run:
$ make
$ make install
You will hopefully end up with Apache correctly installed, with all the needed modules in place.
Now configure your httpd.conf, adding these lines:
# Compress on the fly HTML pages, TXT and XML files, CSS and JS.
AddOutputFilterByType DEFLATE text/html text/plain text/xml text/css text/x-js application/x-javascript
# Cache open file descriptors
CacheEnable fd /
# Enable memory caching
CacheEnable mem /
# Limit the size of the cache to 24 MBytes (note: MCacheSize is expressed in KBytes)
MCacheSize 24576
# Minimum size of an object that can be cached: 1 Kbyte
MCacheMinObjectSize 1024
# Maximum size of an object that can be cached: 3 Mbyte
MCacheMaxObjectSize 3145728
# Spawn 10 child processes, each one spawning 100 threads.
# So a pool of 1000 threads is kept up and sleeping, ready to serve incoming requests.
# If more requests come in, Apache spawns new child processes, each one spawning 100 threads,
# enlarging the thread pool until the number of spare threads reaches 2000. At that point Apache
# begins to cleanly drop processes, trying to get back to 1000 spare threads.
# New processes, and their threads, keep being spawned during a large spike of requests, until 4000
# parallel client requests are reached; beyond that, Apache will not accept new incoming connections.
# When the load calms down and requests fall back under 4000 parallel connections, Apache resumes
# accepting connections. After 1,000,000 requests served by a child (roughly 10,000 per thread), the
# process is recycled by the parent, making sure that memory leaks cannot build up.
<IfModule mpm_worker_module>
ThreadLimit 100
# ServerLimit only needs to cover MaxClients / ThreadsPerChild processes (4000 / 100 = 40)
ServerLimit 40
StartServers 10
MaxClients 4000
MinSpareThreads 1000
MaxSpareThreads 2000
ThreadsPerChild 100
MaxRequestsPerChild 1000000
</IfModule>
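Before starting, you can double check that the worker MPM is really the one compiled in:

$ <YOUR_APACHE_DIR>/bin/httpd -V | grep -i mpm

It should report the worker MPM.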
Start apache.
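To verify that compression is really negotiated, send a request that advertises gzip support and look for a Content-Encoding: gzip response header (a quick check with curl, assuming the server answers on localhost):

$ curl -s -D - -o /dev/null -H 'Accept-Encoding: gzip' http://localhost/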
Enjoy!!
Today, reading my email, I found a message coming from none other than the Mayor of Rome, Giovanni Alemanno.
It's about the protest that many Italian municipalities are staging after the heavy cuts introduced by the Italian Government's budget bill.
A transcription of the text (translated from Italian) follows:
CUTS TO MUNICIPALITIES ARE CUTS TO YOUR RIGHTS
Dear citizens,
today I informed the Prefect and the Minister of the Interior that Roma Capitale is no longer able to guarantee services to its citizens. As decided by the National Association of Italian Municipalities (ANCI), I will symbolically close the Registry and Civil Status office, as almost all of my fellow mayors are doing.
This is a very strong form of protest, which the Italian municipalities have resorted to because, so far, they have not managed to obtain significant changes to an economic bill that is necessary, but too heavy for local institutions.
We do not want to worsen the quality of your life, but to try to improve services and defend your rights.
In fact, for as long as possible, all services will be guaranteed thanks to the effort of the municipal organization.
Today this is no longer possible, because they prefer to take from the municipalities rather than go and see where resources are really being wasted.
Every year the municipalities bring the State coffers a total of over 3 billion euros. These resources get lost in a thousand streams, while we are forced to raise taxes or shut down services.
I decided to write to you so that each of you may realize that the protest Roma Capitale and ANCI are carrying out is not a political polemic or an institutional claim.
On the contrary, our only goal is to reach a new agreement with the Government to make our cities and our Country ever more solid, competitive and livable.
For further information you can visit the website www.anci.it.
Kind regards.
The Mayor of Rome
Acting as a Government Official
(Giovanni Alemanno)
In this short post I'll show how I went about creating a highly scalable, geo-localized and distributed system using the DNS.
What we are going to create is a simple but powerful DNS system that can answer queries for a domain, returning records based on the user's geographic location.
To accomplish this task we have to choose a good open source DNS server. My choice was PowerDNS (http://wiki.powerdns.com/trac).
PowerDNS is a great piece of software. It's a powerful DNS server daemon that can be configured to fit different DNS environments.
You can store domain zones in different backends (MySQL, Oracle, bind zone files, LDAP, etc.), and you can have primary and secondary DNS servers with automatic zone replication. This is all you need to create a full-featured DNS system.
One of the PowerDNS backends accomplishes the geo lookup task: it's called "geobackend" (http://wiki.powerdns.com/trac/browser/trunk/pdns/modules/geobackend/README).
Our test environment will consist of a primary DNS server (PowerDNS as a master), a secondary DNS server (PowerDNS as a slave) and a geo lookup DNS server (PowerDNS as a master with the geobackend enabled). We will enable automatic zone transfers between the primary and the slave server, so that when you add a new record on the master it is automatically created on the slave too.
So, we need 3 servers with PowerDNS installed. The installation process may differ in each case, but if you are using Debian the task can be as simple as running the following command as root:
apt-get install pdns-server
Now you need the backend where you will save the zone data. You may want to choose MySQL for the master and a bind format zone file for the slave DNS. The geo DNS server will not need a zone backend, because its single task is to take the caller's IP address, fetch its geographic location from a special location zone, look that location up in a map file, and return to the calling user the CNAME record associated with it in the map file.
A quick guideline is given below.
My system (yourdomain.com) is composed like this:
ns1.yourdomain.com (primary DNS server with MySQL backend)
ns2.yourdomain.com (secondary DNS server with automatic zone replication to a bind zone file)
ns1.geo.yourdomain.com (geo lookup DNS server with geobackend)
I executed the steps below:
On ns1.yourdomain.com you have to:
1) install PowerDNS with the gmysql backend
2) install the MySQL server, create a database and grant a user on that DB
3) configure PowerDNS as a master, with the gmysql backend connecting to MySQL (a minimal configuration sketch follows this list)
4) please note that this server is authoritative for the "yourdomain.com" zone
5) delegate the "geo" zone with an NS record to the geo DNS server: "geo IN NS ns1.geo.yourdomain.com"
6) create the glue record for the geo DNS with the record: "ns1.geo IN A ip_geo_dns_server"
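A minimal pdns.conf sketch for the master (database name and credentials are placeholders, adapt them to the DB and user you created):

launch=gmysql
master=yes
gmysql-host=127.0.0.1
gmysql-dbname=pdns
gmysql-user=pdns
gmysql-password=your_password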
On ns2.yourdomain.com you have to:
1) install PowerDNS with the bind backend
2) configure PowerDNS to be a slave with the bind backend and enable ns1.yourdomain.com as a supermaster (see the sketch after this list)
3) please note that this server is authoritative for the "yourdomain.com" zone
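A minimal pdns.conf sketch for the slave (the bind-config path is a placeholder; registering ns1.yourdomain.com as a supermaster is backend specific, so check the PowerDNS documentation for the details):

launch=bind
slave=yes
bind-config=/etc/powerdns/named.conf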
On ns1.geo.yourdomain.com you have to:
1) install PowerDNS with the geo backend
2) configure PowerDNS as a master with the geobackend (see the sketch after this list)
3) please note that this server is authoritative for the "geo.yourdomain.com" zone
4) create a map file to handle the association between the user's country location (e.g. uk) and the CNAME that the server will reply with
5) download the location database zone; for example I use: zz.countries.nerd.dk (http://countries.nerd.dk/)
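A minimal pdns.conf sketch for the geo server, using the parameter names from the geobackend README (the file paths are placeholders of mine):

launch=geo
master=yes
geo-zone=geo.yourdomain.com
geo-maps=/etc/powerdns/geo-maps
geo-ip-map-zonefile=/etc/powerdns/zz.countries.nerd.dk.rbldnsd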
If you want to know how to do all that in detail, please do not hesitate to write me an email. You will find my address on my contact page.
Ciao, Dino.
The enigma2 patch file, generally named patch.e2, is a packed binary file containing a cramfs root filesystem and a Linux kernel in zboot format.
It is commonly used by some Linux-based TV decoder distributions to update the system firmware.
I didn't find a quick way to extract the "/" cramfs filesystem from patch.e2 files on a Linux system, so I decided to write a small utility myself.
It is attached; it's called unpack_e2 and it's very easy to use.
Just compile it with:
gcc -o unpack_e2 -O2 -Wall unpack_e2.c
And call it with the patch.e2 file as the first argument.
The two files produced are cram.img and kernel.img.
dino@dam2k:~/AZBOX_RTi_E2$ ./unpack_e2 /tmp/patch.e2
Team name: RTi Team
Description: Core 1.0
Version: 1.0.0
About: v.1.0
Kernel description: #78_May27
Size of cram image: 47542272 bytes (45.34 Mb)
Size of kernel image: 6584320 bytes (6.28 Mb)
Unpacking cramfs image to cram.img
Unpacking kernel image to kernel.img
Warning: it will only work with the new E2 image format (I think >= Core RC12).
If for some weird reason you need to extract the Linux binary kernel file, you can use the following commands (the dd invocation skips the first 836 bytes, presumably a loader header, and zcat decompresses the rest):
# mount -t auto -o loop kernel.img /mnt
# dd if=/mnt/xrpc_xload_vmlinux_ES4_prod.bin skip=1 bs=836 |zcat >/tmp/vmlinux.bin
P.S. To do the reverse (pack a patch.e2) follow this link: http://sourceforge.net/projects/rticoree2/files/image_tools/
Ciao, Dino.
Today I posted to the OrientDB mailing list about liborient, my very first OrientDB C library implementation.
We are searching for new developers to join. This is what I posted to the list.
Hi all.
I'm making an attempt to write a proof-of-concept, simple, LGPLv3
OrientDB C library for Linux.
The library is written on a best-effort basis, so don't kill me if
you see bad code for now…
As a starting point, there is already a very first implementation of
some simple binary protocol methods.
For those who are interested, this is the API that is just (it
seems…) working with the latest OrientDB SVN version:
http://www.tuxweb.it/temp/apishot/liborient/liborient_8h.html#func-members
You can view development code here:
http://svn.tuxweb.it/cgi-bin/viewvc.cgi/liborient/trunk/main/liborient/src/
INSTALL:
1) Install the latest GNU autoconf, automake and libtool
2) svn co http://svn.tuxweb.it/SVN/projects/liborient/trunk/main/liborient
3) cd liborient
4) ./autogen.sh
5) ./configure --prefix=/tmp/liborient
6) make
7) make check
8) make install
Warning: this is a very first proof-of-concept implementation that I
started in order to study OrientDB. Do not use it in production
environments.
Even if I think "the scalable way", I'm a Linux sysadmin and not a
full-time developer, so the API may not be well designed and the
code may be ugly.
We need people who write code. If you are interested, please join in
and contribute.
This is a sample C program that links against liborient… and works 🙂
http://svn.tuxweb.it/cgi-bin/viewvc.cgi/liborient/trunk/main/liborient/test/single_orient.c?view=markup
<snip>
orientdb *oh;
o_conh *och;
unsigned long cid;
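// note: the full program linked above also declares the variables used
// below (timeout, dbsize, records); they are omitted from this snippet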
// create a new liborient handler
oh = orient_new();
// set the library debug level to "ORIENT_DEBUG"
orient_debug_setlevel(oh, ORIENT_DEBUG);
// setup debug callback
orient_debug_sethook(oh, &your_debug_function);
// preparing to open a new binary connection handler for orientDB
och = orient_prepare_connection(oh, ORIENT_PROTO_BINARY, "localhost", "2424");
// setting admin credentials
orient_set_credentials(oh, och, ORIENT_ADMIN, "root", "pippo");
// setting user credentials
orient_set_credentials(oh, och, ORIENT_USER, "reader", "reader");
// create the real connection with orientdb server
cid = orient_connect(oh, och, timeout);
// open the database “demo”
orient_dbopen(oh, och, cid, "demo", timeout);
// get the DB size
dbsize = orient_db_size(oh, och, cid, timeout);
// get the total number of records
records = orient_db_countrecords(oh, och, cid, timeout);
// close the database
orient_dbclose(oh, och, cid, timeout);
// free library stuff
orient_free(oh);
</snip>
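To build the sample against an installed liborient, something like this should do (the paths follow the /tmp/liborient prefix used in the INSTALL steps above, and the library name is an assumption of mine):

$ gcc -o single_orient single_orient.c -I/tmp/liborient/include -L/tmp/liborient/lib -lorient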
Any thoughts?
Ciao, Dino Ciuffetti.
Today was one of the most productive days of the year for me. The Codemotion event was great, full of great talks, good new ideas and tech stuff demystified.
A great speech on OrientDB and NoSQL by Luca Garulli, and a very good talk by Alessandro Nadalin: "REST in peace", about the correct usage of RESTful + ESI.
I'm very happy that my simple PHP proxy script is now bundled with a great product: OrientDB.
Now, I'm going to grab a couple of beers!! Cheers!!!!
Here I am, finally in Barcelona, with an overall delay of 4 hours and 15 minutes thanks to the Vueling flight.
The hotel is cool and we even have Internet connectivity, quite fast I have to say. Just one thing: the RJ45 port was hidden behind the desk, and the hotel keeper is nuts.
Tomorrow a special day awaits me. Long live graph DBs!!
http://www.dama.upc.edu/research-2/UIM-GDB
Right now I'm at the Fiumicino airport, trying to catch a flight to Barcelona to attend an international conference on GraphDBs.
I'm here at the airport, but the Vueling flight has been postponed until further notice.
The Vueling people moved us onto another flight; the problem is that this one was late too, and the result is that we are still here waiting to leave… with plenty of free and assorted swearing… and in the meantime we keep working on the PC and the phone.
A fine start to this adventure…