Dino Ciuffetti's blog (actually Bernardino)

28 Feb 25 Postgres extension for stock technical analysis

Ok, it’s been a while since I wrote anything here, but that doesn’t mean I’m not doing anything technically challenging. I’m a noob with stock markets, but I realized that the solutions provided by my broker are nice but not perfect for me. I would like a deeper insight into stock markets to make better decisions (I’ve lost > 98% of my invested capital buying a wrong ticker… you know, bad hydrogen cell green company?!!??). So I decided that I want to observe my stocks’ candles and indicators in near real-time on a self-hosted dashboard. I also want to store stock data in an on-premise DB, so that I can run custom queries without worrying about rate-limited external API requests.

After some scouting on the web and AI chatbots, I decided to implement the dashboard on a self-hosted Grafana, with self-hosted PostgreSQL + TimescaleDB as a time-series capable DB, getting financial data in near real-time from a nice vendor ($99/month).

A few words about the DB schema: I want to select tickers and markets, and I want switching between tickers, periods and timeframes to be fast, so I’ll need a proper DB schema and ER model, optimized queries, indexes and foreign keys. I ended up with three tables.

The first is the stockbars table, responsible for storing candles (open, high, low, close, volume, trades, etc). Its market_id and ticker_id columns are coupled with the second table, called tickers, which contains every stock ticker and its optional details. The tickers table’s market_id is in turn coupled with the third table, called markets, which stores the markets themselves (NASDAQ, NYSE, etc). This way I can easily JOIN to get the requested ticker from the requested market, normalizing data and avoiding duplication and wasted storage space. It also lets the DB engine use indexes and foreign keys, optimizing row fetches from storage and performing really well even on big tables.
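Just to fix the ideas, here’s a minimal sketch of the schema; the table names come from above, while the exact columns and types are my assumption:

CREATE TABLE markets (
    market_id serial PRIMARY KEY,
    name      text NOT NULL UNIQUE      -- e.g. 'NASDAQ', 'NYSE'
);

CREATE TABLE tickers (
    ticker_id serial,
    market_id int  NOT NULL REFERENCES markets (market_id),
    symbol    text NOT NULL,            -- e.g. 'TSLA'
    details   jsonb,                    -- optional ticker details
    PRIMARY KEY (market_id, ticker_id)
);

CREATE TABLE stockbars (
    market_id int NOT NULL,
    ticker_id int NOT NULL,
    ts        timestamptz NOT NULL,     -- candle timestamp
    open numeric, high numeric, low numeric, close numeric,
    volume bigint, trades int,
    FOREIGN KEY (market_id, ticker_id) REFERENCES tickers (market_id, ticker_id)
);

-- composite index so that (market, ticker, time range) lookups stay fast
CREATE INDEX ON stockbars (market_id, ticker_id, ts DESC);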

But how can I write queries that aggregate data in the requested timeframe? Postgres doesn’t have this feature out of the box, so I need the TimescaleDB extension. This is a really nice piece of software that can transform your RDBMS into a fully featured time-series DB. I just need to turn the stockbars table into a TimescaleDB hypertable: it will automatically partition the table into the proper format to make the timeframe magic happen in the background.
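In practice this is a one-liner; here I assume the time column is called ts, as in the sketch above:

CREATE EXTENSION IF NOT EXISTS timescaledb;

-- partition stockbars by time; run this before loading data,
-- or add migrate_data => true for an already populated table
SELECT create_hypertable('stockbars', 'ts');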

So, thanks to TimescaleDB I’m able to change the timeframe (e.g. 5 min), the observed period (e.g. 48 hours), or get aggregated info (here the NVNTD ticker on the NASDAQ market; NVNTD is a non-existent stock created just to test the system).
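A query along these lines (column names from the sketch above, plus TimescaleDB’s time_bucket, first and last functions) returns the 5-minute candles of the last 48 hours for a single ticker:

SELECT time_bucket('5 minutes', sb.ts) AS bucket,
       first(sb.open, sb.ts) AS open,
       max(sb.high)          AS high,
       min(sb.low)           AS low,
       last(sb.close, sb.ts) AS close,
       sum(sb.volume)        AS volume
FROM stockbars sb
JOIN tickers t ON t.market_id = sb.market_id AND t.ticker_id = sb.ticker_id
JOIN markets m ON m.market_id = t.market_id
WHERE m.name = 'NASDAQ'
  AND t.symbol = 'NVNTD'
  AND sb.ts > now() - interval '48 hours'
GROUP BY bucket
ORDER BY bucket;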

Now I just need to create the Grafana dashboard, and since PostgreSQL and TimescaleDB are fully and natively supported by Grafana, it took just a couple of minutes to integrate the dashboard and show my candles (TSLA ticker on the NYSE market here, with random market data, just for testing reasons).

OK, nice!! So I built a custom Grafana dashboard to show candlesticks of my watched stocks. Some of you may have noticed that Bollinger Bands are present in the chart, but Postgres, TimescaleDB and Grafana alone are not capable of generating indicators or overlays like Bollinger Bands…

Have you ever thought of a PostgreSQL extension that can generate indicators and overlays in C as Postgres functions? I did not find anything ready, so I wrote the extension myself, with the help of the Tulip Indicators library as a math buddy. More will come here… if there is some hype.
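Just to give an idea of the intended usage, here’s a hypothetical call; the function name and signature are made up for illustration and are not necessarily the extension’s real API. Tulip’s bbands indicator takes a period and a standard deviation multiplier and produces the lower, middle and upper bands:

-- hypothetical set-returning function wrapping Tulip's bbands:
-- 20-period Bollinger Bands with 2 standard deviations
SELECT ts, bb_lower, bb_middle, bb_upper
FROM ti_bbands('NASDAQ', 'NVNTD', interval '48 hours', 20, 2.0);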

05 Aug 23 dam2k/tadoapi and telegraf-dam2ktado

Ready for a new Grafana story? Today I have some more: my brand new open source repositories:

  • dam2k/tadoapi (Packagist and GitHub): a simple Tado™ SDK implementation for PHP
  • telegraf-dam2ktado (GitHub): tado API exporter (telegraf execd input plugin) written in PHP

I built them for myself, after I couldn’t find anything good around.

The exporter (telegraf-dam2ktado) is a plugin written in PHP that connects to the tado network (over the internet), fetches thermostat metrics and devices for your home installation, parses everything into a single JSON document and writes that JSON to its stdout. It is signaled directly by telegraf, which sends an empty new line that the plugin catches on its stdin. This way the plugin knows when telegraf wants fresh data, and telegraf can read parsed and cleaned tado data, collect the metrics and push them to InfluxDB.
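For reference, the telegraf side of this handshake looks roughly like the following execd input; the command path and data format here are my assumption, check the project’s README for the real values:

[[inputs.execd]]
  ## long-running PHP exporter (example path)
  command = ["php", "/opt/telegraf-dam2ktado/telegraf-dam2ktado.php"]
  ## telegraf writes an empty line to the plugin's stdin at every collection interval
  signal = "STDIN"
  data_format = "json"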

With all the metrics in InfluxDB, one can create a beautiful dashboard in Grafana (cloud or on-premise).

The installation instructions are on the respective projects’ home pages.

Now, some cool screenshots of my Grafana dashboard (dashboard 19301)…

31 Jul 23 Dashboarding Fritz!box router with telegraf, influxdb and grafana

So, in this second article about Grafana we are going to build a dashboard for a Fritz!box router.

As in the previous article, you need a Grafana + InfluxDB installation somewhere. You’ll also need a Linux host connected to the Fritz router you want to monitor; a Raspberry Pi 4 will be more than OK.

Install telegraf on your Raspberry Pi, then follow the instructions at:

https://github.com/Ragin-LundF/telegraf_fritzbox_monitor

Once you’ve installed the required software you’ll end up with it under /opt/telegraf_fritzbox and a configuration file in /etc/telegraf/telegraf.d/telegraf_fritzbox.conf.

Open that last file and modify it like this:

[[inputs.exec]]
  commands = ["python3 /opt/telegraf_fritzbox/telegraf_fritzbox.py"]
  timeout = "30s"
  data_format = "influx"
  interval = "30s"

Now edit /opt/telegraf_fritzbox/config.yaml and set up the username and password used to connect to your Fritz router. NOTE: it’s a good idea to create a dedicated user on the router.

It’s possible to test the connection to the router with this command:

python3 /opt/telegraf_fritzbox/telegraf_fritzbox.py ; chown telegraf:telegraf /opt/telegraf_fritzbox/fritz.db

If everything is OK you should see a list of metrics coming from your router (the chown makes sure the fritz.db file created by the manual run is owned by the telegraf user).

Please note that you need to enable the UPnP status option in your router’s network configuration, or you’ll get an error about an unknown service.

Now, restart telegraf with service telegraf restart.

It’s now time to import the Grafana dashboard. I had big problems with the official JSON from https://github.com/Ragin-LundF/telegraf_fritzbox_monitor/blob/main/GrafanaFritzBoxDashboard_Influx2.json so I put my modified dashboard here.

Some screenshots here

A really big thank-you goes to the software author Ragin-LundF -> https://github.com/Ragin-LundF

30 Jul 23 Dashboarding Linux system metrics with Telegraf, InfluxDB, Grafana

There are tons of documentation and howtos on the web about system monitoring and metrics dashboards, so I won’t repeat all the boring stuff here.

You may want a central Grafana and InfluxDB installation, plus a telegraf installation on every node you want to monitor. For example, you may have a Grafana + InfluxDB installation somewhere in the cloud, a VPN, and a couple of Raspberry Pi nodes that gather metrics and send them to the central InfluxDB + Grafana node for storage and visualization.

For this task, I use this beautiful grafana dashboard: https://grafana.com/grafana/dashboards/928-telegraf-system-dashboard/

Just import this dashboard into your local or remote Grafana installation.

To make all those panels work, every node to be monitored must have these telegraf plugins enabled and configured:

[[inputs.cpu]]
  percpu = true
  totalcpu = true
  collect_cpu_time = false
  report_active = false
  core_tags = false

[[inputs.disk]]
  ignore_fs = ["tmpfs", "devtmpfs", "devfs", "iso9660", "overlay", "aufs", "squashfs"]

[[inputs.diskio]]

[[inputs.kernel]]

[[inputs.mem]]

[[inputs.processes]]
  use_sudo = false

[[inputs.swap]]

[[inputs.system]]

[[inputs.conntrack]]
  files = ["ip_conntrack_count", "ip_conntrack_max", "nf_conntrack_count", "nf_conntrack_max"]
  dirs = ["/proc/sys/net/ipv4/netfilter", "/proc/sys/net/netfilter"]
  collect = ["all", "percpu"]

[[inputs.internal]]

[[inputs.interrupts]]
  [inputs.interrupts.tagdrop]
    irq = [ "NET_RX", "TASKLET" ]

[[inputs.linux_sysctl_fs]]

[[inputs.net]]

[[inputs.netstat]]

[[inputs.nstat]]
  proc_net_netstat = "/proc/net/netstat"
  proc_net_snmp = "/proc/net/snmp"
  proc_net_snmp6 = "/proc/net/snmp6"
  dump_zeros = true

Also, remember to configure your telegraf output to send collected metrics to your central influxdb node:

[[outputs.influxdb_v2]]
  urls = ["http://192.168.0.2:8086"]
  token = "A1ycabIZjg3XjulgubSanvPEdoj7UxqmEbsPADXX_h1Ns3-kTspG63s0SP3wuR0MGisd62rx9jLzExrhPvKAUg=="
  organization = "YourOrg"
  bucket = "YourBucket"

Enjoy your system telegraf metrics visualized 🙂