If you’re a developer working with Tado’s unofficial REST APIs, you may have recently come across an important update from Tado regarding authentication. In a recent support article, Tado has requested that developers modify their authentication mechanisms to ensure secure and compliant access to their APIs: https://support.tado.com/en/articles/8565472-how-do-i-authenticate-to-access-the-rest-api
This change is critical for maintaining the integrity of Tado’s systems and protecting user data. I’ll break down what this means for developers and how to adapt to the new requirements.
Tado’s REST APIs have been a popular tool for developers looking to integrate smart home functionality into their applications. However, as with any system, security is an ongoing concern. Tado has identified potential vulnerabilities in the way some developers are handling authentication, particularly when using unofficial APIs. To address these concerns, Tado is now enforcing stricter authentication protocols to prevent unauthorized access and ensure that only legitimate requests are processed.
This move is not uncommon in the tech world. As APIs become more widely used, companies often need to tighten security measures to protect their infrastructure and users. For developers, this means staying up-to-date with these changes and adapting their code accordingly.
The primary change revolves around how developers authenticate with Tado’s APIs. Previously, some developers may have relied on less secure methods, such as hardcoding credentials or using outdated authentication flows. Tado is now requiring developers to implement a more robust and secure authentication mechanism.
While the specifics of the new authentication process may vary depending on your implementation, here are some general guidelines to follow:
Use OAuth 2.0: Tado now requires the OAuth 2.0 device code flow for authentication, a widely adopted standard for secure API access. OAuth 2.0 provides a secure way to handle tokens and ensures that credentials are not exposed in requests.
Avoid Hardcoding Credentials: Hardcoding usernames, passwords, or tokens in your code is a significant security risk. Instead, use environment variables or secure credential storage solutions to manage sensitive information (see the short sketch after this list).
Implement Token Refresh: Access tokens typically have a limited lifespan. Make sure your application can handle token expiration by implementing a token refresh mechanism. This ensures uninterrupted access to the API without requiring manual intervention.
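For example, instead of embedding a long-lived refresh token in the source, you can read it from the environment. This is only a minimal sketch of the idea; the variable name is my own choice, not something Tado mandates:

// Minimal sketch: keep the long-lived refresh token out of the code base.
$refreshToken = getenv('TADO_REFRESH_TOKEN');
if ($refreshToken === false || $refreshToken === '') {
    throw new RuntimeException('TADO_REFRESH_TOKEN is not set');
}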
If you’re currently using Tado’s unofficial APIs, it’s time to review your authentication process and make the necessary changes to implement device code flow. This flow is designed for devices that lack a keyboard or easy input method, such as smart thermostats or mobile apps. Here’s how it works:
First, ask Tado for a device code:

$response = $this->client->post('https://login.tado.com/oauth2/device_authorize', [
    'form_params' => [
        'client_id' => '1bb50063-6b0c-4d11-bd99-387f4a91cc46',
        'scope'     => 'offline_access',
    ],
]);
{"device_code":"ftcrinX_KQaXUNI1wkh-5zxFmmYOUug43SAYWORs1AU","expires_in":300,
"interval":5,"user_code":"9HAZP1",
"verification_uri":"https://login.tado.com/oauth2/device",
"verification_uri_complete":"https://login.tado.com/oauth2/device?user_code=9HAZP1"}
After the user has confirmed the login at verification_uri_complete, request the tokens with the device code grant:

$response = $this->client->post('https://login.tado.com/oauth2/token', [
    'form_params' => [
        'client_id'   => '1bb50063-6b0c-4d11-bd99-387f4a91cc46',
        'grant_type'  => 'urn:ietf:params:oauth:grant-type:device_code',
        'device_code' => $device_code,
    ],
]);
The token response looks like this:

[access_token] => eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCIsImd0eSI..............................KcYbYQ
[expires_in] => 599
[refresh_token] => 6Vu1vQadysY-1G6naR8gdp_y-AgFtakb75C7KVK5-uUxgbM3EWHTza2e2D6ZD81W
[refresh_token_id] => 9fa5bb86-8d55-4178-9268-f13bbd1bc1a5
[scope] => offline_access
[token_type] => Bearer
[userId] => 595e1511-078f-8010-332a-0adc13930002
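Keep in mind that until the user has actually confirmed the login at verification_uri_complete, the token endpoint answers with an authorization_pending error, so in practice the token request has to be repeated every "interval" seconds until it succeeds or the device code expires. A minimal polling sketch, assuming Guzzle and that $device_code, $interval and $expires_in come from the device_authorize response above:

// Poll the token endpoint until the user completes the login or the device code expires.
$token = null;
$deadline = time() + $expires_in;
while (time() < $deadline) {
    try {
        $response = $this->client->post('https://login.tado.com/oauth2/token', [
            'form_params' => [
                'client_id'   => '1bb50063-6b0c-4d11-bd99-387f4a91cc46',
                'grant_type'  => 'urn:ietf:params:oauth:grant-type:device_code',
                'device_code' => $device_code,
            ],
        ]);
        $token = json_decode((string) $response->getBody(), true);
        break; // the user approved: we now have access_token and refresh_token
    } catch (\GuzzleHttp\Exception\ClientException $e) {
        // "authorization_pending": the user has not approved yet, wait and retry
        sleep($interval);
    }
}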
Access tokens have a limited lifespan (10 minutes). Make sure your application can handle token expiration by implementing a token refresh mechanism.
$response = $this->client->post('https://login.tado.com/oauth2/token', [
    'form_params' => [
        'client_id'     => '1bb50063-6b0c-4d11-bd99-387f4a91cc46',
        'grant_type'    => 'refresh_token',
        'refresh_token' => $refresh_token,
    ],
]);
You can take a look at my working implementation here: https://github.com/dam2k/tadoapi
Enjoy your smart home!
Ok, it's been a while since I wrote anything here, but that doesn't mean I'm not doing anything technically challenging. I'm a noob with stock markets, but I realized that the solutions provided by my broker are nice but not perfect for me. I would like a deeper insight into stock markets to make better decisions (I've lost more than 98% of my invested capital buying the wrong ticker… you know, that bad hydrogen cell green company?!). So I decided that I want to observe my stocks' candles and indicators in near real time on a self-hosted dashboard. I also want to store the stock data in an on-premise DB so that I can run custom queries without worrying about rate-limited external API requests.
After some scouting of the web and AI chatbots, I decided to implement the dashboard on self-hosted Grafana, with self-hosted PostgreSQL + TimescaleDB as a time-series capable DB, getting financial data in near real time from a nice vendor ($99/month).
A few words about the DB schema: I want to select tickers and markets, and I want great performance when switching tickers, periods and timeframes, so I need a proper DB schema and ER model, optimized queries, indexes and foreign keys. I ended up with three tables.
The first is the stockbars table, responsible for storing candles (open, high, low, close, volume, trades, etc.). Its market_id and ticker_id columns are coupled with the second table, called tickers, which contains each stock ticker and its optional details. Its market_id is in turn coupled with the third table, called markets, whose task is to store markets (NASDAQ, NYSE, etc.). This way I can easily JOIN to get the requested ticker on the requested market, normalizing the data and avoiding duplication and wasted storage space. It also lets the DB engine use indexes and foreign keys, optimizing row fetches from storage and performing really well even on big tables.
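Just to make the idea concrete, a minimal version of that schema could look like the SQL below. Column names and types are my own guesses for the sake of illustration; the real tables carry a few more details:

-- Minimal schema sketch: three normalized tables with foreign keys.
CREATE TABLE markets (
    market_id  SERIAL PRIMARY KEY,
    name       TEXT NOT NULL UNIQUE            -- e.g. 'NASDAQ', 'NYSE'
);

CREATE TABLE tickers (
    ticker_id  SERIAL,
    market_id  INTEGER NOT NULL REFERENCES markets (market_id),
    symbol     TEXT NOT NULL,                  -- e.g. 'TSLA'
    details    JSONB,                          -- optional ticker details
    PRIMARY KEY (market_id, ticker_id)
);

CREATE TABLE stockbars (
    time       TIMESTAMPTZ NOT NULL,
    market_id  INTEGER NOT NULL,
    ticker_id  INTEGER NOT NULL,
    open       NUMERIC, high NUMERIC, low NUMERIC, close NUMERIC,
    volume     BIGINT,  trades INTEGER,
    FOREIGN KEY (market_id, ticker_id) REFERENCES tickers (market_id, ticker_id)
);

-- Index that supports "this ticker on this market, newest bars first" queries.
CREATE INDEX ON stockbars (market_id, ticker_id, time DESC);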
But how can I run queries that aggregate the data into the requested timeframe? Plain Postgres doesn't make this convenient, so I need the TimescaleDB extension. It's a really nice piece of software that turns your RDBMS into a fully featured time-series DB. I just need to make the stockbars table a TimescaleDB hypertable: it automatically partitions the table so that the time-bucketing magic can happen in the background.
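In practice, with the schema sketched above, turning stockbars into a hypertable and then aggregating 5-minute candles over the last 48 hours looks roughly like this (an illustrative query, reusing the assumed column names):

-- Turn stockbars into a hypertable, partitioned on the time column.
SELECT create_hypertable('stockbars', 'time');

-- Aggregate raw bars into 5-minute candles for one ticker on one market.
SELECT time_bucket('5 minutes', time) AS bucket,
       first(open, time) AS open,
       max(high)         AS high,
       min(low)          AS low,
       last(close, time) AS close,
       sum(volume)       AS volume
FROM stockbars
WHERE market_id = 1
  AND ticker_id = 1
  AND time > now() - INTERVAL '48 hours'
GROUP BY bucket
ORDER BY bucket;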
So, thanks to TimescaleDB I'm able to change the timeframe (e.g. 5 minutes), the observed period (e.g. 48 hours), or get aggregated info (NVNTD ticker on the NASDAQ market; NVNTD is a non-existent stock created just to test the system).
Now I just need to create the Grafana dashboard, and since PostgreSQL and TimescaleDB are fully and natively supported by Grafana, it took just a couple of minutes to integrate the data source and show my candles (TSLA ticker on the NYSE market here, with random market data, just for testing purposes).
OK, nice!! So I built a custom Grafana dashboard to show candlesticks for my watched stocks, and some of you may have noticed that Bollinger Bands are present in the chart, even though Postgres, TimescaleDB and Grafana alone are not capable of generating indicators or overlays like Bollinger Bands…
Have you ever thought of a PostgreSQL extension able to generate indicators and overlays in C as Postgres functions? I didn't find anything ready-made, so I wrote the Postgres extension myself, with the help of the Tulip Indicators library as a math buddy. More will come here… if there is some hype.
Fastweb has decided, for the moment, not to provide native IPv6 to its customers, and a few days ago it also disabled the TSP tunnel (tsp-auth.ipv6.fastweb.it), which is no longer reachable.
Since in my case I have a Fastweb Argo 55+ router on a 100 Mbit fiber line, and that router does not support IPv6, when I go to the MyFastPage and try to enable the IPv6 protocol the system tells me I have to replace the router with a newer model. I do NOT want to change the router, because I consider it extremely stable and performant.
So how can I enable IPv6 on my home network without having to change the router?
Fastweb delivers IPv6 to its users via 6rd (https://en.wikipedia.org/wiki/IPv6_rapid_deployment). This means it is probably possible to obtain the tunneled subnet on Linux as well.
I took one of my Raspberry Pis running Raspbian, installed the radvd package (sudo apt-get install radvd), and then put this in my /etc/network/interfaces:
iface eth0 inet6 static
    address 2001:b07:27b:7b7b::1
    netmask 64

auto ipv6fastweb
iface ipv6fastweb inet6 v4tunnel
    netmask 64
    endpoint 81.208.50.214
    up ip -6 route add default dev ipv6fastweb
    down ip -6 route del default dev ipv6fastweb
Instead of using 2001:b07:27b:7b7b::1 as the address, you have to compute your own, starting from the fixed public IP that Fastweb assigned to you. You can find your public IP, for example, here: http://whatismyipaddress.com/
Let's pretend your IP is 2.123.123.123; you have to convert it to hexadecimal, for example like this:

printf "%x%02x:%x%02x::\n" `echo 2.123.123.123 | tr . " "`

What you get, 27b:7b7b:: in this example, has to be appended to the Fastweb prefix (2001:b07:), and then you append the number 1 as a suffix.
In this case the IP therefore becomes:
2001:b07:27b:7b7b::1, which is built as [2001:b07]:[27b:7b7b]::[1]. The first part is fixed, the second depends on your public IP, and finally there is the 1. This address goes into the "address" line of the /etc/network/interfaces file mentioned above, and it also goes into the "prefix" directive of radvd.conf, but in that case without the trailing 1.
Create the file /etc/radvd.conf and put this inside it:
interface eth0
{
    AdvSendAdvert on;
    MinRtrAdvInterval 3;
    MaxRtrAdvInterval 10;
    prefix 2001:b07:27b:7b7b::/64
    {
        AdvOnLink on;
        AdvAutonomous on;
        AdvRouterAddr on;
    };
    RDNSS 2001:4860:4860::8888
    {
        AdvRDNSSLifetime 20;
    };
};
Reboot the Raspberry Pi and, if everything goes well, on the Raspberry Pi you should now see a virtual network interface called ipv6fastweb, with no usable IPs, which is used to create the tunnel with the Fastweb border gateway (81.208.50.214). If it doesn't work, try looking for another border gateway, maybe by asking their toll-free number or searching the internet. For me it works with this one. Put the right one in the "endpoint" directive of the "interfaces" file.
You will then have the public IPv6 address you computed (2001:b07:27b:7b7b::1/64 in the example) on the eth0 interface, and radvd will send IPv6 router advertisements to your network.
Every PC in your network that supports IPv6 will get a public IP in the subnet you computed, and it will be directly reachable from the internet via IPv6.
Nice, isn't it? Of course, if you enable this you must disable IPv6 on the Fastweb router, because your Raspberry Pi will be acting as the IPv6 router.
Comment below, please! Let me know.
It works perfectly for me and I'm very happy. I think Fastweb should publish an official guide on this for the benefit of its users. It took me two hours to get it working; with a guide it would have taken two minutes.
Docker is becoming today's standard for LXC Linux containers.
I think I will skip learning Kubernetes to handle dockerized hosts, and study Docker Engine, Docker Swarm and Docker Machine and their REST APIs instead.
I started from here: https://docs.docker.com/machine/overview/
If you use Mozilla Firefox and you want to watch video streams with HTML5 embedded players, you may need to enable some video functionality in the Firefox configuration.
For some reason that is obscure to me, Mozilla Firefox will not play some HTML5-based video streams, and the embedded player will probably fall back to Adobe Flash based streaming (which is exactly what I don't want, since Flash is closed source and a very insecure application).
For example, YouTube has an HTML5-based player that you can choose to use instead of the Flash one. You can verify and enable the HTML5 streaming capabilities of your browser by navigating to https://www.youtube.com/html5.
In my case (Firefox 64-bit on Debian Linux) the "Media Source Extensions", "MSE & H.264" and "MSE & WebM VP9" entries were disabled by default.
If you want to enable those features, simply type "about:config" into the Firefox address bar.
You will be warned to be careful, because changing things might void your warranty.
Just ignore the warning and proceed.
Now, if you haven't already, switch the following configuration parameters to true (or set them all at once from a user.js file, as sketched after the list):
media.fragmented-mp4.exposed
media.fragmented-mp4.ffmpeg.enabled
media.fragmented-mp4.gmp.enabled
media.mediasource.webm.enabled
media.mediasource.enabled
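Here is the equivalent user.js fragment, to be placed in your Firefox profile directory. This is just a sketch, assuming these preference names exist in your Firefox version:

// user.js - force-enable MSE / H.264 / WebM playback (same preferences as above)
user_pref("media.fragmented-mp4.exposed", true);
user_pref("media.fragmented-mp4.ffmpeg.enabled", true);
user_pref("media.fragmented-mp4.gmp.enabled", true);
user_pref("media.mediasource.webm.enabled", true);
user_pref("media.mediasource.enabled", true);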
If you now refresh the YouTube HTML5 test page, you should hopefully see all the video streaming entries ready and enabled.
If you want, you can now switch the YouTube default player to HTML5.
In my previous blog post I published a TSL2561 light sensor driver in C for the Raspberry Pi. In this article I will publish a user-space C driver for the Adafruit 4-digit 7-segment display.
It is based on the HT16K33 LED driver IC, an I2C-driven RAM-mapping 16*8 LED controller.
The driver I'm posting is valid for the Adafruit circuit only, since it's completely based on the electronic schematic they made.
Don't use the driver with other circuits, since the display may not work properly.
Basically, the Adafruit 7-segment backpack (http://www.adafruit.com/products/879) uses 8 (rows) * 5 (columns) HT16K33 lines to drive its LEDs. Column number 1 is dedicated to the first digit, the second column to the second digit, the third column is attached to the colon sign in the middle of the 4 digits, the fourth column to the third digit, and the fifth column to the fourth display digit.
Each row drives a single LED segment of the given column.
Display columns 0, 1, 3 and 4 can show numbers and some letters (A-F, n, o, i, l, L, etc…) plus a decimal point, while column 2 can only show a colon sign (:).
A number or a letter on each digit is composed of 7 LED segments, so the possibilities are few… but not so few after all (check the 7seg.txt file attachment for more details on letter composition).
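Just to give an idea of how a character is composed out of segments, this is the classic 7-segment bit layout (bit 0 = segment A through bit 6 = segment G, bit 7 = the decimal point). The values below follow the common convention and are only an illustration; the tables actually used by the driver live in 7seg.txt and 7seg_bp_ada.c:

#include <stdint.h>

/* Illustration only: classic 7-segment encodings for the digits 0-9.
   Bits 0..6 light segments A..G, bit 7 lights the decimal point. */
static const uint8_t sevenseg_digit[10] = {
    0x3F, /* 0 */ 0x06, /* 1 */ 0x5B, /* 2 */ 0x4F, /* 3 */ 0x66, /* 4 */
    0x6D, /* 5 */ 0x7D, /* 6 */ 0x07, /* 7 */ 0x7F, /* 8 */ 0x6F  /* 9 */
};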
So, now comes the fun part. How can I access the LED driver memory to light the display digits in C? Adafruit releases proof-of-concept libraries in C and Python, but they don't seem to run on my Raspberry Pi.
Since I am too lazy to port their code with its external dependencies, I decided to write my own library in C.
#include "7seg_bp_ada.h" /* prepare the backpack driver (the first parameter is the raspberry pi i2c master controller attached to the HT16K33, the second is the i2c selection jumper) The i2c selection address can be one of HT16K33_ADDR_01 to HT16K33_ADDR_08 */ HT16K33 led_backpack1 = HT16K33_INIT(1, HT16K33_ADDR_01); /* initialize the backpack */ rc = HT16K33_OPEN(&led_backpack1); /* power on the ht16k33 */ HT16K33_ON(&led_backpack1); /* make it shining bright */ HT16K33_BRIGHTNESS(&led_backpack1, 0x0F); /* make it not blinking */ HT16K33_BLINK(&led_backpack1, HT16K33_BLINK_OFF); /* power on the display */ HT16K33_DISPLAY(&led_backpack1, HT16K33_DISPLAY_ON); /* Say hello */ HT16K33_UPDATE_DIGIT(&led_backpack1, 0, 'H', 0); // first digit HT16K33_UPDATE_DIGIT(&led_backpack1, 1, 'E', 0); // second digit // turn off the colon sign in the middle of the 4 digits HT16K33_UPDATE_DIGIT(&led_backpack1, 2, HT16K33_COLON_OFF, 0); HT16K33_UPDATE_DIGIT(&led_backpack1, 3, '#', 0); // third digit HT16K33_UPDATE_DIGIT(&led_backpack1, 4, 'o', 0); // fourth digit HT16K33_COMMIT(&led_backpack1); // commit to the display memory // call this if you want to shut down the device (power saving mode) // HT16K33_OFF(&led_backpack1); /* close things (the display remains in the conditions left) */ HT16K33_CLOSE(&led_backpack1);
I decided to release the software under the liberal Apache 2 license, so feel free to use this software inside your commercial, non-free software / firmware.
Below you will find the files .c and .h that you can embed into your project.
It’s helpful for me, and I hope it will be helpful for you.
Ciao, Dino.
Note: on Raspberry Pi OS (and Debian) you need libi2c-dev (apt install libi2c-dev) before compiling.
gcc -Wall -O2 -o 7seg_bp_ada.o -c 7seg_bp_ada.c
gcc -Wall -O2 -o 7seg_bp_ada_test.o -c 7seg_bp_ada_test.c
gcc -Wall -O2 -o 7seg_bp_ada_test 7seg_bp_ada.o -li2c 7seg_bp_ada_test.o
After I bought a new TSL2561 digital light sensor from Adafruit, I found that this very cool and small device cannot be accessed directly from Linux (Raspbian doesn't ship its kernel module compiled). Since I didn't want to cross-compile my whole Raspberry Pi kernel just to have the tsl2563.ko driver enabled, and since it seems that Raspbian does not release genuine kernel headers to compile custom kernel modules against, I decided to write a simple user-space library driver in C.
I found out that Adafruit releases proof-of-concept libraries written in C++ and Python to access its hardware devices; the problem is that the C++ version is ready for Arduino but was not directly usable on my Raspberry Pi. It also makes use of an Adafruit unified sensor library and other external stuff. Since I am too lazy, I decided yesterday to write a new simple library in plain C, without external dependencies, ready for my Raspberry Pi.
This is the arduino version that inspired me: https://github.com/adafruit/TSL2561-Arduino-Library
This is another cool blog post that inspired me (it now seems dead!!): http://russelldavis.org/2013/03/23/raspberryhunt-part-2/
This is an example:
/* prepare the sensor
   (the first parameter is the raspberry pi i2c master controller attached to the TSL2561,
   the second is the i2c selection jumper)
   The i2c selection address can be one of: TSL2561_ADDR_LOW, TSL2561_ADDR_FLOAT or TSL2561_ADDR_HIGH */
TSL2561 light1 = TSL2561_INIT(1, TSL2561_ADDR_FLOAT);

/* initialize the sensor */
rc = TSL2561_OPEN(&light1);

/* sense the luminosity from the sensor
   (lux is the luminosity taken in "lux" measure units)
   the last parameter can be 1 to enable library auto gain, or 0 to disable it */
rc = TSL2561_SENSELIGHT(&light1, &broadband, &ir, &lux, 1);

TSL2561_CLOSE(&light1);
Compile:
gcc -Wall -O2 -o TSL2561.o -c TSL2561.c
gcc -Wall -O2 -o TSL2561_test.o -c TSL2561_test.c
gcc -Wall -O2 -o TSL2561_test TSL2561.o TSL2561_test.o
The output is like this:
root@rasponi:~/test/gpio# ./TSL2561_test
Test. RC: 0(Success), broadband: 141, ir: 34, lux: 12
As you can see, it's very easy at this point to get light measurements in C. Just include TSL2561.c and TSL2561.h in your project and use the public APIs to set up and read the IC.
I decided to release the code under the liberal Apache v2 license, so feel free to include it in your commercial projects if you like.
It’s useful for me, and I hope that it can be useful to you too. Obviously it comes with absolutely no warranty.
p.s.1: I left the hardware stuff out of this article (just attach +VCC, GND and the I2C bus to the sensor).
p.s.2: you have to load two kernel modules to get the I2C bus working on your Raspberry Pi:
modprobe i2c_bcm2708
modprobe i2c_dev
Ciao, Dino.
TSL2561.c
TSL2561.h
TSL2561_test.c
This is an example of how to use all 3 sensors on the same I2C bus:
#include <stdio.h>
#include <string.h>
#include "TSL2561.h"

int main() {
    int i;
    int rc;
    uint16_t broadband, ir;
    uint32_t lux=0;
    TSL2561 lights[3]; // we can handle 3 sensors

    // prepare the sensors
    // (the first parameter is the raspberry pi i2c master controller attached to the TSL2561,
    // the second is the i2c selection jumper)
    // The i2c selection address can be one of: TSL2561_ADDR_LOW, TSL2561_ADDR_FLOAT or TSL2561_ADDR_HIGH

    // prepare all sensors
    /* cannot assign that way
    lights[0] = TSL2561_INIT(1, TSL2561_ADDR_LOW);
    lights[1] = TSL2561_INIT(1, TSL2561_ADDR_FLOAT);
    lights[2] = TSL2561_INIT(1, TSL2561_ADDR_HIGH);
    */

    // initialize at runtime instead
    // FIRST SENSOR --> TSL2561_ADDR_LOW
    lights[0].adapter_nr=1;                                    // change this according to your i2c bus
    lights[0].sensor_addr=TSL2561_ADDR_LOW;                    // don't change this
    lights[0].integration_time=TSL2561_INTEGRATIONTIME_402MS;  // don't change this
    lights[0].gain=TSL2561_GAIN_16X;                           // don't change this
    lights[0].adapter_fd=-1;                                   // don't change this
    lights[0].lasterr=0;                                       // don't change this
    bzero(&lights[0].buf, sizeof(lights[0].buf));              // don't change this

    // SECOND SENSOR --> TSL2561_ADDR_FLOAT
    lights[1].adapter_nr=1;                                    // change this according to your i2c bus
    lights[1].sensor_addr=TSL2561_ADDR_FLOAT;                  // don't change this
    lights[1].integration_time=TSL2561_INTEGRATIONTIME_402MS;  // don't change this
    lights[1].gain=TSL2561_GAIN_16X;                           // don't change this
    lights[1].adapter_fd=-1;                                   // don't change this
    lights[1].lasterr=0;                                       // don't change this
    bzero(&lights[1].buf, sizeof(lights[1].buf));              // don't change this

    // THIRD SENSOR --> TSL2561_ADDR_HIGH
    lights[2].adapter_nr=1;                                    // change this according to your i2c bus
    lights[2].sensor_addr=TSL2561_ADDR_HIGH;                   // don't change this
    lights[2].integration_time=TSL2561_INTEGRATIONTIME_402MS;  // don't change this
    lights[2].gain=TSL2561_GAIN_16X;                           // don't change this
    lights[2].adapter_fd=-1;                                   // don't change this
    lights[2].lasterr=0;                                       // don't change this
    bzero(&lights[2].buf, sizeof(lights[2].buf));              // don't change this

    // initialize the sensors
    for(i=0; i<3; i++) {
        rc = TSL2561_OPEN(&lights[i]);
        if(rc != 0) {
            fprintf(stderr, "Error initializing TSL2561 sensor %i (%s). Check your i2c bus (es. i2cdetect)\n", i+1, strerror(lights[i].lasterr));
            return 1;
        }
        // set the gain to 1X (it can be TSL2561_GAIN_1X or TSL2561_GAIN_16X)
        // use 16X gain to get more precision in dark ambients, or enable auto gain below
        rc = TSL2561_SETGAIN(&lights[i], TSL2561_GAIN_1X);
        // set the integration time
        // (TSL2561_INTEGRATIONTIME_402MS or TSL2561_INTEGRATIONTIME_101MS or TSL2561_INTEGRATIONTIME_13MS)
        // TSL2561_INTEGRATIONTIME_402MS is slower but more precise, TSL2561_INTEGRATIONTIME_13MS is very fast but not so precise
        rc = TSL2561_SETINTEGRATIONTIME(&lights[i], TSL2561_INTEGRATIONTIME_101MS);
    }

    // you can now sense each sensor when you like
    for(i=0; i<3; i++) {
        // sense the luminosity from the sensors (lux is the luminosity taken in "lux" measure units)
        // the last parameter can be 1 to enable library auto gain, or 0 to disable it
        rc = TSL2561_SENSELIGHT(&lights[i], &broadband, &ir, &lux, 1);
        printf("Test sensor %i. RC: %i(%s), broadband: %i, ir: %i, lux: %i\n", i+1, rc, strerror(lights[i].lasterr), broadband, ir, lux);
    }

    // when you have finished, you can close things
    for(i=0; i<3; i++) {
        TSL2561_CLOSE(&lights[i]);
    }
    return 0;
}
As you may know, the latest OpenLDAP branch (2.4.x) supports a variety of replication mechanisms useful for building high availability. Several documents can be found on the web describing the various mechanisms and their pros and cons. I'll list a couple of the most representative ones (in English):
http://www.openldap.org/doc/admin24/replication.html
http://www.synetis.com/en/2012/09/03/replication-openldap
Note that no LDAP configuration provides transparent handling of high availability by itself: every configuration needs a load balancer or a cluster manager to steer the traffic towards the active LDAP server; replication only has the purpose of keeping the LDAP servers up to date at all times.
In particular, if you have only two servers available and want high availability, I would recommend the OpenLDAP 2.4.x replication mode called MIRROR MODE, whose pros and cons I quote from the document "http://www.synetis.com/en/2012/09/03/replication-openldap/":
A mirror is composed of only two nodes. Both nodes are configured in both master and slave. In this mode, both nodes are identical at all times. They are writable and it is possible to update either one or the other.
Advantages:
– If a node is down, on his return, it automatically updates;
– If the data files of a node is destroyed, when it restarts, it will synchronize completely from the other node;
– A node is configured as a master. It is possible to connect consumers.
Disadvantages:
– Mass treatment of update of a node are longer in fashion provider / consumers, because the two nodes are updated simultaneously and in full mode.
Although in this mode both LDAP nodes can operate for both reads and writes, the official OpenLDAP document "http://www.openldap.org/doc/admin24/replication.html", in section 18.2.3, specifies that the correct configuration is to use only one node at a time for writes.
Here is the text of the paragraph in question:
MirrorMode is a hybrid configuration that provides all of the consistency guarantees of single-master replication, while also providing the high availability of multi-master. In MirrorMode two providers are set up to replicate from each other (as a multi-master configuration), but an external frontend is employed to direct all writes to only one of the two servers. The second provider will only be used for writes if the first provider crashes, at which point the frontend will switch to directing all writes to the second provider. When a crashed provider is repaired and restarted it will automatically catch up to any changes on the running provider and resync.
The fact that writes must be sent to one master at a time is necessary (as in any multi-master system) to avoid concurrent access to the same resource (record).
This kind of configuration therefore resolves upfront any concurrency conflict at the record level, and at the same time guarantees high availability.
At this point we can imagine a couple of architectural configurations to decide which external frontend will direct the write requests to one of the LDAP nodes:
1) a hardware load balancer at the TCP/IP level, configured not in round-robin mode but in Active/Standby mode with resource checking (TCP port 389);
2) a cluster manager such as Linux-HA (http://www.linux-ha.org/wiki/Main_Page) that switches the service IP of the LDAP service onto one of the two cross-replicated LDAP nodes, moving it to the surviving server if one of the two nodes fails.
If you choose the first option, you could also consider a third possibility, useful to keep read/write high availability and at the same time to load-balance read-only requests across the two nodes. This last option uses two IP addresses: one in HA, used for writes only, configured as described in point 1 above, and the other balancing read traffic across the two LDAP nodes in round-robin mode.
To summarize, considering that the machines run Red Hat Linux 6, if you choose the first option (LDAP reads and writes in HA on one of the two nodes through a hardware load balancer), the shopping list is:
– install OpenLDAP 2.4.x on both nodes. The processes must always be kept running on both nodes at the same time;
– configure an IP address (VIP) for the service, set up on the load balancer;
– configure the two LDAP nodes in MirrorMode.
If you choose the second option (using a cluster manager to get LDAP reads and writes in HA), the list is:
– install OpenLDAP 2.4.x on both nodes. The processes must always be kept running on both nodes at the same time;
– install and configure a cluster manager such as Linux-HA (http://www.linux-ha.org/wiki/Main_Page) on the two nodes;
– configure an IP address (VIP) for the service, set up on the cluster manager;
– configure the two LDAP nodes in MirrorMode.
The third option (two IPs on the hardware load balancer, with writes in HA on one of the two nodes and reads load-balanced) has the following shopping list:
– install OpenLDAP 2.4.x on both nodes. The processes must always be kept running on both nodes at the same time;
– configure two IP addresses (VIPs): one for the read-only service, set up on the load balancer in round-robin mode, the other for the read/write service, set up on the load balancer in active/standby mode with resource checking;
– configure the two LDAP nodes in MirrorMode.
All three options are valid, although in my opinion the best one is the third, because it provides HA plus read load balancing, HA for writes, and above all a logical separation of the write and read flows.
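For reference, the MirrorMode part of the configuration of each node boils down to a few slapd.conf directives like the ones below. This is only a sketch with placeholder hostnames, suffix and credentials; serverID must differ on the two nodes and provider= must point to the other node:

# slapd.conf fragment for node 1 (node 2 uses serverID 2 and provider=ldap://ldap1.example.com)
serverID 1

# inside the database section being replicated
syncrepl rid=001
    provider=ldap://ldap2.example.com
    type=refreshAndPersist
    retry="60 +"
    searchbase="dc=example,dc=com"
    bindmethod=simple
    binddn="cn=replicator,dc=example,dc=com"
    credentials=secret

mirrormode on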
Ciao, Dino Ciuffetti.
If you are using IPv6 (like me) you can see that this blog is reachable via IPv6. Pretty cool!
Tonight at 03:00 GMT the NuvolaBase team publicly released the new NuvolaBase Dashboard.
As you may know, with NuvolaBase you can handle your private database in the cloud.
The new dashboard aims to be simple, stable and powerful. You can log in using your Google, Twitter, Facebook or LinkedIn account.
In the next few days the NuvolaBase guys will release many cool new features, like a powerful REST API to handle your cloud databases from your application.
This is the official article on the NuvolaBase blog: http://nuvolabase.blogspot.it/2012/12/nuvolabase-dashboard-upgrade.html