Dino Ciuffetti's blog (Bernardino, actually)

10 Mar 25 TADO ™ unofficial REST API device code flow

If you’re a developer working with Tado’s unofficial REST APIs, you may have recently come across an important update from Tado regarding authentication. In a recent support article, Tado has requested that developers modify their authentication mechanisms to ensure secure and compliant access to their APIs: https://support.tado.com/en/articles/8565472-how-do-i-authenticate-to-access-the-rest-api

This change is critical for maintaining the integrity of Tado’s systems and protecting user data. I’ll break down what this means for developers and how to adapt to the new requirements.

Why the Change?

Tado’s REST APIs have been a popular tool for developers looking to integrate smart home functionality into their applications. However, as with any system, security is an ongoing concern. Tado has identified potential vulnerabilities in the way some developers are handling authentication, particularly when using unofficial APIs. To address these concerns, Tado is now enforcing stricter authentication protocols to prevent unauthorized access and ensure that only legitimate requests are processed.

This move is not uncommon in the tech world. As APIs become more widely used, companies often need to tighten security measures to protect their infrastructure and users. For developers, this means staying up-to-date with these changes and adapting their code accordingly.

What’s Changing?

The primary change revolves around how developers authenticate with Tado’s APIs. Previously, some developers may have relied on less secure methods, such as hardcoding credentials or using outdated authentication flows. Tado is now requiring developers to implement a more robust and secure authentication mechanism.

While the specifics of the new authentication process may vary depending on your implementation, here are some general guidelines to follow:

Use OAuth 2.0: Tado requires using OAuth 2.0 device code flow for authentication, which is a widely adopted standard for secure API access. OAuth 2.0 provides a secure way to handle tokens and ensures that credentials are not exposed in requests.

Avoid Hardcoding Credentials: Hardcoding usernames, passwords, or tokens in your code is a significant security risk. Instead, use environment variables or secure credential storage solutions to manage sensitive information.

Implement Token Refresh: Access tokens typically have a limited lifespan. Make sure your application can handle token expiration by implementing a token refresh mechanism. This ensures uninterrupted access to the API without requiring manual intervention.

How to Update Your Implementation

If you’re currently using Tado’s unofficial APIs, it’s time to review your authentication process and make the necessary changes to implement device code flow. This flow is designed for devices that lack a keyboard or easy input method, such as smart thermostats or mobile apps. Here’s how it works:

  • The device requests a device code and user verification URL from the authorization server

$response = $this->client->post('https://login.tado.com/oauth2/device_authorize', [
    'form_params' => [
        'client_id' => '1bb50063-6b0c-4d11-bd99-387f4a91cc46',
        'scope' => 'offline_access',
    ],
]);

{"device_code":"ftcrinX_KQaXUNI1wkh-5zxFmmYOUug43SAYWORs1AU","expires_in":300,
"interval":5,"user_code":"9HAZP1",
"verification_uri":"https://login.tado.com/oauth2/device",
"verification_uri_complete":"https://login.tado.com/oauth2/device?user_code=9HAZP1"}

  • The user visits the URL on a secondary device (e.g., a smartphone or computer) and enters the device code, in this case: https://login.tado.com/oauth2/device?user_code=9HAZP1
  • Once the user approves the request with their Tado credentials, the device polls the authorization server for an access token (see the polling sketch after the example response below)
$response = $this->client->post('https://login.tado.com/oauth2/token', [
    'form_params' => [
        'client_id' => '1bb50063-6b0c-4d11-bd99-387f4a91cc46',
        'grant_type' => 'urn:ietf:params:oauth:grant-type:device_code',
        'device_code' => $device_code,
    ],
]);

[access_token] => eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCIsImd0eSI..............................KcYbYQ
[expires_in] => 599
[refresh_token] => 6Vu1vQadysY-1G6naR8gdp_y-AgFtakb75C7KVK5-uUxgbM3EWHTza2e2D6ZD81W
[refresh_token_id] => 9fa5bb86-8d55-4178-9268-f13bbd1bc1a5
[scope] => offline_access
[token_type] => Bearer
[userId] => 595e1511-078f-8010-332a-0adc13930002
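
Until the user completes the login in the browser, the token endpoint keeps answering with an error (the standard OAuth 2.0 device-flow "authorization_pending" response), so the request above has to be repeated every "interval" seconds until it succeeds or the device code expires. A minimal curl sketch of that loop, reusing the endpoint, client_id and fields shown above (error handling kept to the bare minimum):

# DEVICE_CODE and INTERVAL come from the device_authorize response above
while true; do
  RESP=$(curl -s -X POST 'https://login.tado.com/oauth2/token' \
    -d 'client_id=1bb50063-6b0c-4d11-bd99-387f4a91cc46' \
    -d 'grant_type=urn:ietf:params:oauth:grant-type:device_code' \
    -d "device_code=$DEVICE_CODE")
  # keep polling only while the user has not approved yet
  echo "$RESP" | grep -q 'authorization_pending' || break
  sleep "$INTERVAL"
done
echo "$RESP"   # access_token + refresh_token once the user has approved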

Access tokens have a limited lifespan (10 minutes). Make sure your application can handle token expiration by implementing a token refresh mechanism.

$response = $this->client->post('https://login.tado.com/oauth2/token', [
    'form_params' => [
        'client_id' => '1bb50063-6b0c-4d11-bd99-387f4a91cc46',
        'grant_type' => 'refresh_token',
        'refresh_token' => $refresh_token,
    ],
]);

You can take a look at my working implementation here: https://github.com/dam2k/tadoapi

Enjoy your smart home!

28 Feb 25 Postgres extension for stock technical analysis

Ok, it's been a while since I wrote anything here, but that doesn't mean I'm not doing anything technically challenging. I'm a noob with stock markets, but I realized that the solutions provided by my broker are nice but not perfect for me. I would like a deeper view of the stock market to make better decisions (I've lost > 98% of my invested capital buying the wrong ticker… you know, bad hydrogen cell green company?!!??). So I decided that I want to observe my stocks' candles and indicators in near real time on a self-hosted dashboard. I also want to store stock data in an on-premise DB, so that I can run custom queries without worrying about rate-limited external API requests.

After some scouting on the web / AI chatbots, I decided to implement the dashboard on a self-hosted Grafana, with self-hosted PostgreSQL + TimescaleDB as a time-series capable DB, getting financial data in near real time from a nice vendor ($99/month).

A few words about the DB schema: I want to select tickers and markets, and I want great performance when switching tickers, periods and timeframes, so I'll need a proper DB schema, ER model, optimized queries, indexes and foreign keys.

The first table is stockbars and is responsible for storing candles (open, high, low, close, volume, trades, etc). Its market_id and ticker_id columns reference the second table, called tickers, which contains every stock ticker and its optional details. The tickers table's market_id is in turn coupled with the third table, called markets, which stores the markets (NASDAQ, NYSE, etc). This way I can easily JOIN to get the requested ticker from the requested market, normalizing data and avoiding duplication and wasted storage space. It also lets the DB engine use indexes and foreign keys, optimizing row fetches from storage and performing really well even on big tables.

But how can I aggregate data into the requested timeframe when querying? Plain Postgres doesn't offer this out of the box, so I need the TimescaleDB extension. This is a really nice piece of software that can turn your RDBMS into a fully featured time-series DB. I just need to make the stockbars table a TimescaleDB hypertable: it will automatically partition the table into chunks so the time-bucketing magic can happen in the background.
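
Just to give an idea of the shape, here is a minimal sketch of the three tables and of the hypertable conversion. Column names, types and the "stockdb" database name are simplified assumptions based on the description above, not my exact production DDL:

psql stockdb <<'SQL'
CREATE TABLE markets (
  market_id INTEGER PRIMARY KEY,
  name      TEXT NOT NULL            -- NASDAQ, NYSE, ...
);
CREATE TABLE tickers (
  market_id INTEGER NOT NULL REFERENCES markets(market_id),
  ticker_id INTEGER NOT NULL,
  symbol    TEXT NOT NULL,           -- TSLA, NVNTD, ...
  PRIMARY KEY (market_id, ticker_id)
);
CREATE TABLE stockbars (
  time      TIMESTAMPTZ NOT NULL,
  market_id INTEGER NOT NULL,
  ticker_id INTEGER NOT NULL,
  open      NUMERIC, high NUMERIC, low NUMERIC, close NUMERIC,
  volume    BIGINT, trades INTEGER,
  FOREIGN KEY (market_id, ticker_id) REFERENCES tickers (market_id, ticker_id)
);
-- TimescaleDB: turn stockbars into a hypertable partitioned by time
SELECT create_hypertable('stockbars', 'time');
SQL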

So, thanks to TimescaleDB I'm able to change the timeframe (5 min), the observed period (48 hours), or get aggregated info (NVNTD ticker on the NASDAQ market; NVNTD is a non-existent stock created just to test the system).
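
For example, 5-minute candles over the last 48 hours for a single ticker can be rebuilt with time_bucket() plus TimescaleDB's first()/last() aggregates (same hypothetical schema as in the sketch above):

psql stockdb <<'SQL'
SELECT time_bucket('5 minutes', b.time) AS bucket,
       first(b.open, b.time)  AS open,
       max(b.high)            AS high,
       min(b.low)             AS low,
       last(b.close, b.time)  AS close,
       sum(b.volume)          AS volume
FROM stockbars b
JOIN tickers t USING (market_id, ticker_id)
JOIN markets m USING (market_id)
WHERE m.name = 'NASDAQ' AND t.symbol = 'NVNTD'
  AND b.time > now() - interval '48 hours'
GROUP BY bucket
ORDER BY bucket;
SQL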

Now I just needed to create the grafana dashboard, and since PostgreSQL and TimescaleDB are fully and natively supported by grafana, it took just a couple of minutes to integrate the dashboard and show my candles (TSLA ticker on the NYSE market here, with random market data, just for testing reasons).

OK, nice!! So I built a custom Grafana dashboard to show candlesticks of my watched stocks, and some of you may have noticed that Bollinger Bands are present in the chart but postgres, timescaledb and grafana alone are not capable of generating indicators or overlays like Bollinger Bands…

Have you ever thought of a PostgreSQL extension that can generate indicators and overlays in C as postgres functions? I did not find anything ready, so I wrote the postgres extension myself, with the help of the Tulip Indicators library as a math buddy. More will come here… if there's some hype.

05 Aug 23 dam2k/tadoapi and telegraf-dam2ktado

Ready for a new grafana story? Today I have something more: my brand new open source repositories:

  • dam2k/tadoapi (packagist and github): a simple Tado ™ SDK implementation for PHP
  • telegraf-dam2ktado (github): API exporter (telegraf execd input plugin) written in PHP

I built them for myself, after I did not find anything good around.

The exporter (telegraf-dam2ktado) is a plugin written in PHP that connects to the tado network (over the internet), fetches thermostat metrics and devices for your home installation, parses them into a single JSON document and writes that JSON to its stdout. It is driven directly by telegraf, which writes an empty line to the plugin's stdin every time it wants fresh data. This way the plugin knows when telegraf wants to fetch new data, and telegraf can read parsed and cleaned data from tado, collect the metrics and push them to InfluxDB.
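
If you want a feeling for the execd protocol without reading the PHP code, here is a toy stand-in (made-up metric values, not the real plugin): with signal = "STDIN" in the [[inputs.execd]] stanza, telegraf starts the command once, writes an empty line to its stdin at every collection interval, and reads one JSON document back from its stdout.

#!/bin/sh
# toy execd plugin: block on stdin, answer every newline with one JSON document
while read -r _; do
  printf '{"temperature": 21.5, "humidity": 48}\n'
done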

With all the metrics on influxdb one can create a beautiful dashboard on grafana (on cloud or on premise).

The installation instructions are on the respective projects home page.

Now, some cool screenshots of my grafana dashboard (19301)…

31 Jul 23 Dashboarding Fritz!box router with telegraf, influxdb and grafana

So, with this second article about grafana we are going to dashboard a Fritz!box router.

As in the previous article, you need a grafana+influxdb installation somewhere. You'll also need a linux host connected to the fritz router you want to monitor; a Raspberry Pi 4 will be more than OK.

Install telegraf on your rpi4, then follow these instructions:

https://github.com/Ragin-LundF/telegraf_fritzbox_monitor

Once you've installed the required software you'll end up with it installed into /opt/telegraf_fritzbox and a configuration file in /etc/telegraf/telegraf.d/telegraf_fritzbox.conf.

Open and modify this last file in this way:

[[inputs.exec]]
  commands = ["python3 /opt/telegraf_fritzbox/telegraf_fritzbox.py"]
  timeout = '30s'
  data_format = "influx"
  interval = "30s"

Now edit this file: /opt/telegraf_fritzbox/config.yaml and set up your fritz router's username and password. NOTE: it's a good idea to create a dedicated user.

You can test this command to check if the connection with the router is working:

python3 /opt/telegraf_fritzbox/telegraf_fritzbox.py ; chown telegraf:telegraf /opt/telegraf_fritzbox/fritz.db

If everything is ok you should have a list of metrics coming from your router.

Please note that you need to enable the UPnP status option in your router's network configuration or you'll get an error about an unknown service.

Now, restart telegraf with service telegraf restart.

It's now time to import the grafana dashboard. I had big problems with the official json from https://github.com/Ragin-LundF/telegraf_fritzbox_monitor/blob/main/GrafanaFritzBoxDashboard_Influx2.json so I put my modified dashboard here.

Some screenshots here

A really big thank you goes to the software author Ragin-LundF -> https://github.com/Ragin-LundF

30 Jul 23 Dashboarding Linux system metrics with Telegraf, InfluxDB, Grafana

There are tons of documentation and howtos on the web about system monitoring and metrics dashboards, so I won't put all the boring stuff here.

You may want to have a central grafana and influxdb installation, then a telegraf installation on every node to monitor. For example you may have a grafana + influxdb installation somewhere in the cloud, a VPN, and a couple of raspberry pi nodes that gather metrics and send them to the central influxdb+grafana node for storage and visualization.

For this task, I use this beautiful grafana dashboard: https://grafana.com/grafana/dashboards/928-telegraf-system-dashboard/

Just import this dashboard into your local or remote grafana installation.

To make all those panels work, every node you want to monitor must have these telegraf plugins enabled and configured:

[[inputs.cpu]]
  percpu = true
  totalcpu = true
  collect_cpu_time = false
  report_active = false
  core_tags = false

[[inputs.disk]]
  ignore_fs = ["tmpfs", "devtmpfs", "devfs", "iso9660", "overlay", "aufs", "squashfs"]

[[inputs.diskio]]

[[inputs.kernel]]

[[inputs.mem]]

[[inputs.processes]]
  use_sudo = false

[[inputs.swap]]

[[inputs.system]]

[[inputs.conntrack]]
  files = ["ip_conntrack_count", "ip_conntrack_max", "nf_conntrack_count", "nf_conntrack_max"]
  dirs = ["/proc/sys/net/ipv4/netfilter", "/proc/sys/net/netfilter"]
  collect = ["all", "percpu"]

[[inputs.internal]]

[[inputs.interrupts]]
  [inputs.interrupts.tagdrop]
    irq = [ "NET_RX", "TASKLET" ]

[[inputs.linux_sysctl_fs]]

[[inputs.net]]

[[inputs.netstat]]

[[inputs.nstat]]
  proc_net_netstat = "/proc/net/netstat"
  proc_net_snmp = "/proc/net/snmp"
  proc_net_snmp6 = "/proc/net/snmp6"
  dump_zeros = true

Also, remember to configure your telegraf output to send collected metrics to your central influxdb node:

[[outputs.influxdb_v2]]
  urls = ["http://192.168.0.2:8086"]
  token = "A1ycabIZjg3XjulgubSanvPEdoj7UxqmEbsPADXX_h1Ns3-kTspG63s0SP3wuR0MGisd62rx9jLzExrhPvKAUg=="
  organization = "YourOrg"
  bucket = "YourBucket"

Enjoy your system telegraf metrics visualized 🙂

28 Jul 23 Firefox and Chrome screen share (meet) not working on Debian 12 with Gnome 43 and Wayland

I found out that after I upgraded Debian from 11 to 12, screen sharing from firefox and chrome (for example in a google meet meeting) stopped working. I could not share my screen with my colleagues anymore.

In Debian 12 you are probably using PipeWire, Wayland and Gnome 43.

The solution for me was as simple as that:

apt install xdg-desktop-portal-gnome

This way firefox and chrome can reach the gnome portal services inside the sandbox, and screen sharing works again as expected.
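
If sharing still doesn't work right after the install, restarting the portal services for your session (or simply logging out and back in) should be enough; assuming the usual systemd user units:

systemctl --user restart xdg-desktop-portal xdg-desktop-portal-gnome
# check that the gnome backend is actually running
systemctl --user status xdg-desktop-portal-gnome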

11 Feb 23 How to create persistent Queues, Exchanges, and DLXs on RabbitMQ to avoid losing messages

What happens when you publish a message to an exchange in RabbitMQ with the wrong topic or, better, routing key? What happens if you send a message to a queue that has a TTL policy, or with the TTL property set in the message itself, and that TTL expires? What happens when a consumer rejects the message it got from the queue without requeueing it? What if a queue overflows because of a policy?

It’s simple, the broker will simply discard your message forever.

If this makes you as mad as it makes me, this blog article is for you. Here I will show you how to create a simple tree of queues, DLXs and policies to overcome this problem.

I think that starting with commands and examples is better than 10000 written words, and since I don’t have any ads on my blog I don’t have to write a long article to get money from the ads, so here we are.

I'll assume your RabbitMQ installation and your admin account are ready, so let's start with the commands.

# Create your VirtualHost
rabbitmqctl add_vhost vhtest --description "Your VH" --default-queue-type classic

# Give your admin user permissions to do everything on your virtualhost
rabbitmqctl set_permissions --vhost vhtest admin '.*' '.*' '.*'

# Create the user that will publish messages to the exchange
rabbitmqctl add_user testuserpub yourpassword1

# Create the user that will subscribe to your queue to read messages
rabbitmqctl add_user testusersub yourpassword2

Now we have 3 users (admin, testuserpub and testusersub) and a virtualhost (vhtest). We are ready to create 2 DLXs: one to handle overflow, expired TTL and discarded messages, the other to handle messages sent with the wrong routing key. A DLX (Dead Letter Exchange) is a special exchange designed to handle dead-lettered (discarded) messages.

# Create the DLX to handle messages that overflowed, expired or were rejected by consumers
rabbitmqadmin declare exchange --vhost=vhtest name=DLXexQoverfloworttl type=headers internal=true

# Create the DLX to handle messages with wrong routing key
rabbitmqadmin declare exchange --vhost=vhtest name=DLXexQwrongtopic type=fanout internal=true

We'll now declare the queues and bind them to the first DLX using three different bindings, matched on the x-first-death-reason header

rabbitmqadmin declare queue --vhost=vhtest name=DLXquQoverflow
rabbitmqadmin declare queue --vhost=vhtest name=DLXquQttl
rabbitmqadmin declare queue --vhost=vhtest name=DLXquQrejected
rabbitmqadmin declare binding --vhost=vhtest source=DLXexQoverfloworttl destination=DLXquQoverflow arguments='{"x-first-death-reason": "maxlen", "x-match": "all-with-x"}'
rabbitmqadmin declare binding --vhost=vhtest source=DLXexQoverfloworttl destination=DLXquQttl arguments='{"x-first-death-reason": "expired", "x-match": "all-with-x"}'
rabbitmqadmin declare binding --vhost=vhtest source=DLXexQoverfloworttl destination=DLXquQrejected arguments='{"x-first-death-reason": "rejected", "x-match": "all-with-x"}'

And now we'll declare and bind a queue to the second DLX to handle messages with a wrong topic (routing key)

rabbitmqadmin declare queue --vhost=vhtest name=DLXquQwrongtopic
rabbitmqadmin declare binding --vhost=vhtest source=DLXexQwrongtopic destination=DLXquQwrongtopic

Now we have 1 DLX with 3 queues and another DLX with 1 queue bound. The first will route expired, discarded and overflowed messages to the respective queues (DLXquQttl, DLXquQoverflow, DLXquQrejected), the second will route messages with invalid routing key to the respective queue (DLXquQwrongtopic).

Now we are going to create our main queue and the normal Exchange that will send messages to it

rabbitmqadmin declare queue --vhost=vhtest name=quQ
rabbitmqadmin declare exchange --vhost=vhtest name=exQ type=direct

In this example, we want to route all messages with routing key NBE

rabbitmqadmin declare binding --vhost=vhtest source=exQ destination=quQ routing_key=NBE

We now want to create the policy that attaches the wrong-topic DLX to our main exchange as its alternate exchange

rabbitmqctl set_policy --vhost vhtest wrongtopicQ1 "^exQ$" '{"alternate-exchange":"DLXexQwrongtopic"}' --apply-to exchanges

This example policy limits the quQ queue to 100 messages, 1073741824 bytes and a 30-second message TTL, dead-lettering overflowed and expired messages to DLXexQoverfloworttl.

rabbitmqctl set_policy --vhost vhtest shorttimedqunbe '^quQ$' '{"max-length":100,"max-length-bytes":1073741824,"message-ttl":30000,"overflow":"reject-publish-dlx","dead-letter-exchange":"DLXexQoverfloworttl"}' --priority 0 --apply-to queues

Now let's give proper permissions to our publisher and subscriber users: testuserpub can only write to its exchange, while testusersub can only read from its queue. No other permissions here.

rabbitmqctl set_permissions --vhost vhtest testuserpub '' '^exQ$' ''
rabbitmqctl set_permissions --vhost vhtest testusersub '' '' '^quQ$'
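
To verify the whole tree you can publish a couple of test messages with rabbitmqadmin (add -u/-p with your admin credentials if needed): one with the good routing key, one with a wrong one, and check where they end up.

# routed normally by exQ to quQ
rabbitmqadmin --vhost=vhtest publish exchange=exQ routing_key=NBE payload="good message"
# unroutable key: goes through the alternate exchange DLXexQwrongtopic into DLXquQwrongtopic
rabbitmqadmin --vhost=vhtest publish exchange=exQ routing_key=WRONG payload="bad routing key"
rabbitmqadmin --vhost=vhtest get queue=quQ
rabbitmqadmin --vhost=vhtest get queue=DLXquQwrongtopic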

Mission complete. Please try this at home and write to the comments below! Happy RabbitMQ hacking!


23 Oct 18 How to disable Diffie-Hellman ciphers on apache

If you are getting errors like “DH key too small” you can avoid using DH ciphersuites in apache.
You can achieve that by using only ciphers with perfect forward secrecy (ECDHE), or by disabling all DH ciphersuites like this:

SSLCipherSuite ALL:!EXP:!NULL:!DH:!LOW
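
If you prefer the first approach, keeping only ciphers with forward secrecy instead of just cutting DH off, something along these lines works on a reasonably recent Apache/OpenSSL (treat the exact string as a starting point, not gospel):

SSLCipherSuite ECDHE+AESGCM:ECDHE+AES:!aNULL:!MD5
SSLHonorCipherOrder on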

06 Jul 18 How to setup xtables GeoIP for iptables on OpenWRT

I'm not racist, but sometimes you may need to keep some people out of your network based on their geographical region… in reality I'm referring to their public source IP address.

Requests come from the big internet, so you can check whether the source IP of a peer trying to connect to your powerful OpenWRT Linux router belongs to a particular region, and stop those requests accordingly.

If you are using iptables and ip6tables (on linux this is the standard) you can set up GeoIP based packet filtering with xt_geoip: an iptables extension that makes use of the glorious and free MaxMind GeoLite Legacy downloadable database (which is now dying…).

On OpenWRT this is supported out of the box, but your router generally does not have enough disk space to host the entire GeoLite GeoIP database. So you need to prepare only the records for the geographical regions that you want to block on your firewall.

Take a linux workstation or server (your notebook, a raspberry pi you keep always on, one of your clients' servers… better not, IMHO…) and run these commands:

root@firegate2:~# mkdir /tmp/temp
root@firegate2:~# cd /tmp/temp
root@firegate2:/tmp/temp# /usr/share/xt_geoip/xt_geoip_dl
--2018-07-05 23:54:40-- http://geolite.maxmind.com/download/geoip/database/GeoIPv6.csv.gz
Resolving geolite.maxmind.com (geolite.maxmind.com)… 104.16.37.47, 104.16.38.47, 2400:cb00:2048:1::6810:262f, …
Connecting to geolite.maxmind.com (geolite.maxmind.com)|104.16.37.47|:80… connected.
HTTP request sent, awaiting response… 200 OK
Length: 1715246 (1.6M) [application/octet-stream]
Saving to: “GeoIPv6.csv.gz”

GeoIPv6.csv.gz 100%[================================================================>] 1.64M --.-KB/s in 0.1s

2018-07-05 23:54:40 (10.9 MB/s) - “GeoIPv6.csv.gz” saved [1715246/1715246]

--2018-07-05 23:54:40-- http://geolite.maxmind.com/download/geoip/database/GeoIPCountryCSV.zip
Reusing existing connection to geolite.maxmind.com:80.
HTTP request sent, awaiting response… 200 OK
Length: 2540312 (2.4M) [application/zip]
Saving to: “GeoIPCountryCSV.zip”

GeoIPCountryCSV.zip 100%[================================================================>] 2.42M 11.3MB/s in 0.2s

2018-07-05 23:54:40 (11.3 MB/s) - “GeoIPCountryCSV.zip” saved [2540312/2540312]

FINISHED --2018-07-05 23:54:40--
Total wall clock time: 0.4s
Downloaded: 2 files, 4.1M in 0.4s (11.1 MB/s)
Archive: GeoIPCountryCSV.zip
inflating: GeoIPCountryWhois.csv
root@firegate2:/tmp/temp#
root@firegate2:/tmp/temp# /usr/share/xt_geoip/xt_geoip_build -D . *.csv
209370 entries total
0 IPv6 ranges for A1 Anonymous Proxy
32 IPv4 ranges for A1 Anonymous Proxy
0 IPv6 ranges for A2 Satellite Provider
36 IPv4 ranges for A2 Satellite Provider
3 IPv6 ranges for AD Andorra
26 IPv4 ranges for AD Andorra
51 IPv6 ranges for AE United Arab Emirates
302 IPv4 ranges for AE United Arab Emirates
… and continues …
11 IPv6 ranges for ZM Zambia
64 IPv4 ranges for ZM Zambia
14 IPv6 ranges for ZW Zimbabwe
59 IPv4 ranges for ZW Zimbabwe
root@firegate2:/tmp/temp#

The script /usr/share/xt_geoip/xt_geoip_build is part of the xtables-addons package, which you may or may not find packaged for your distribution. Search google for it.

This will create the LE and BE directories. Now extract only the geo regions that you need (e.g. CN,UA,TW,VN,VG,KP,VI,KR), like this:

root@firegate2:/tmp/temp# du -csh BE/CN.iv? BE/UA.iv? BE/TW.iv? BE/VN.iv? BE/VG.iv? BE/KP.iv? BE/VI.iv? BE/KR.iv? LE/CN.iv? LE/UA.iv? LE/TW.iv? LE/VN.iv? LE/VG.iv? LE/KP.iv? LE/VI.iv? LE/KR.iv?
36K BE/CN.iv4
48K BE/CN.iv6
24K BE/UA.iv4
16K BE/UA.iv6
8,0K BE/TW.iv4
4,0K BE/TW.iv6
4,0K BE/VN.iv4
4,0K BE/VN.iv6
4,0K BE/VG.iv4
4,0K BE/VG.iv6
4,0K BE/KP.iv4
0 BE/KP.iv6
4,0K BE/VI.iv4
4,0K BE/VI.iv6
8,0K BE/KR.iv4
4,0K BE/KR.iv6
36K LE/CN.iv4
48K LE/CN.iv6
24K LE/UA.iv4
16K LE/UA.iv6
8,0K LE/TW.iv4
4,0K LE/TW.iv6
4,0K LE/VN.iv4
4,0K LE/VN.iv6
4,0K LE/VG.iv4
4,0K LE/VG.iv6
4,0K LE/KP.iv4
0 LE/KP.iv6
4,0K LE/VI.iv4
4,0K LE/VI.iv6
8,0K LE/KR.iv4
4,0K LE/KR.iv6
352K totale

So, in this case we will need 352 KB of router disk space. First create the /usr/share/xt_geoip/BE/ and /usr/share/xt_geoip/LE/ directories on the router's filesystem:

root@dam2ktplinkrouter:~# mkdir -p /usr/share/xt_geoip/BE /usr/share/xt_geoip/LE

We are going to copy those files on our OpenWRT router:

root@firegate2:/tmp/temp# scp LE/CN.iv? LE/UA.iv? LE/TW.iv? LE/VN.iv? LE/VG.iv? LE/KP.iv? LE/VI.iv? LE/KR.iv? root@192.168.10.1:/usr/share/xt_geoip/LE
root@192.168.10.1’s password:
CN.iv4 100% 35KB 109.8KB/s 00:00
CN.iv6 100% 47KB 76.0KB/s 00:00
UA.iv4 100% 23KB 108.4KB/s 00:00
UA.iv6 100% 15KB 87.9KB/s 00:00
TW.iv4 100% 4704 47.5KB/s 00:00
TW.iv6 100% 3104 77.8KB/s 00:00
VN.iv4 100% 3888 98.9KB/s 00:00
VN.iv6 100% 3584 87.9KB/s 00:00
VG.iv4 100% 536 40.1KB/s 00:00
VG.iv6 100% 224 20.0KB/s 00:00
KP.iv4 100% 40 3.9KB/s 00:00
KP.iv6 100% 0 0.0KB/s 00:00
VI.iv4 100% 392 30.6KB/s 00:00
VI.iv6 100% 160 14.4KB/s 00:00
KR.iv4 100% 8128 105.8KB/s 00:00
KR.iv6 100% 3616 87.0KB/s 00:00
root@firegate2:/tmp/temp# scp BE/CN.iv? BE/UA.iv? BE/TW.iv? BE/VN.iv? BE/VG.iv? BE/KP.iv? BE/VI.iv? BE/KR.iv? root@192.168.10.1:/usr/share/xt_geoip/BE/
root@192.168.10.1’s password:
CN.iv4 100% 35KB 114.4KB/s 00:00
CN.iv6 100% 47KB 66.0KB/s 00:00
UA.iv4 100% 23KB 115.2KB/s 00:00
UA.iv6 100% 15KB 74.9KB/s 00:00
TW.iv4 100% 4704 93.8KB/s 00:00
TW.iv6 100% 3104 75.7KB/s 00:00
VN.iv4 100% 3888 108.8KB/s 00:00
VN.iv6 100% 3584 76.2KB/s 00:00
VG.iv4 100% 536 40.3KB/s 00:00
VG.iv6 100% 224 20.0KB/s 00:00
KP.iv4 100% 40 4.1KB/s 00:00
KP.iv6 100% 0 0.0KB/s 00:00
VI.iv4 100% 392 33.1KB/s 00:00
VI.iv6 100% 160 15.4KB/s 00:00
KR.iv4 100% 8128 111.1KB/s 00:00
KR.iv6 100% 3616 71.8KB/s 00:00
root@firegate2:/tmp/temp#

OK, now we have our pieces of GeoIP information. We now need to install the iptables-mod-geoip package from the LuCI web interface (or by hand if you like).
Now you can create your Firewall Traffic Rules. Go to Network -> Firewall -> Traffic Rules on the router’s LuCI web interface and add a custom traffic rule. In my case I created 2 that I called “CinamerdaMuoriUDPeTCP” and “CinamerdaMuoriICMP”, the first to block TCP and UDP and the second to block ICMP traffic.

After that you can set up your custom rule: choose the protocols and address families, set "source zone" to WAN, "destination zone" to "Any zone (forward)", action to DROP, and fill the "Extra arguments" field like this:

-m geoip --source-country CN,UA,TW,VN,VG,KP,VI,KR

You can check the image below.

OpenWRT Firewall geoip example
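
For reference, outside of LuCI the same match can be tested straight from the router's shell (the WAN interface name here is just an example, adjust it to your setup):

iptables  -I FORWARD -i pppoe-wan -m geoip --source-country CN,UA,TW,VN,VG,KP,VI,KR -j DROP
ip6tables -I FORWARD -i pppoe-wan -m geoip --source-country CN,UA,TW,VN,VG,KP,VI,KR -j DROP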

Now you can say goodbye to most chinaspammers and the like, but don't abuse it: it's not ethical if you set this up on a server that offers some sort of public service.

21 Apr 17 How to create a sparse file from a block device

This is how to create a sparse file dump from a block device:
cp --sparse=always <(dd if=/dev/vg0/vmservice1 bs=8M iflag=direct | pv -pre --size=20G) /opt/backups/vmservice1.dat
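
To check that the result really is sparse, compare the blocks actually allocated on disk with the apparent size (paths as in the command above):

du -h /opt/backups/vmservice1.dat                   # space really allocated
du -h --apparent-size /opt/backups/vmservice1.dat   # logical size (20G here)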