For my indoor garden I wanted to monitor the temperature inside and outside the tent. It is in the basement, which I don’t think ever gets above 70 or below 50, but to control the plants’ cycles I need to control light and temp. The light came with an app, so I just needed to get temp data into my database. I’d also like to log outside temperature data, but I haven’t figured that out yet; I think there has got to be an API for the NWS where I can put in a zip code and get the temp and humidity (a rough sketch of that idea is at the bottom of this post). Here is the code I’m using at the moment, and some notes.
We need to get some libraries and connect to the sensors. I have two functions: one to get the data from the sensors, and one to insert the data into the database. Then comes the loop where the magic happens, and some cleanup at the end.
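In outline it looks like this. This is a sketch only: I’m assuming DHT22-type sensors read through the legacy Adafruit_DHT library and mysql-connector-python, and the pins, table name, and credentials are all placeholders.

import time
import Adafruit_DHT
import mysql.connector

SENSORS = {"inside": 4, "outside": 17}  # placeholder GPIO pins

def read_sensors():
    """One (temp_c, humidity) reading per sensor."""
    readings = {}
    for name, pin in SENSORS.items():
        humidity, temp_c = Adafruit_DHT.read_retry(Adafruit_DHT.DHT22, pin)
        readings[name] = (temp_c, humidity)
    return readings

def insert_readings(db, readings):
    """Insert one row per sensor into a placeholder readings table."""
    cur = db.cursor()
    for name, (temp_c, humidity) in readings.items():
        cur.execute(
            "INSERT INTO readings (sensor, temp_c, humidity) VALUES (%s, %s, %s)",
            (name, temp_c, humidity),
        )
    db.commit()
    cur.close()

db = mysql.connector.connect(host="db.lan", user="garden",
                             password="*****", database="garden")  # placeholders
try:
    # where the magic happens: read and log once a minute
    while True:
        insert_readings(db, read_sensors())
        time.sleep(60)
finally:
    # and some cleanup
    db.close()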
So I estimated the space needed by putting 10 readings in the db, and if the average row size holds (it should only go down), I will use about 840 MB a year taking a reading every minute; that works out to roughly 1.6 KB per reading across 525,600 readings. I don’t really think I need minute-by-minute data, so I’ll probably lower it to once an hour later.
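As for that NWS idea: a quick look suggests api.weather.gov works off of lat/lon rather than zip codes, so I’d have to look up my coordinates once. Untested, and the coordinates, contact address, and station handling below are placeholder guesses, but the shape would be something like this:

import requests

# NWS asks that you identify yourself in the User-Agent
HEADERS = {"User-Agent": "garden logger (me@example.com)"}  # placeholder contact
LAT, LON = 32.78, -96.80  # placeholder coordinates

def outside_conditions():
    # the points endpoint maps coordinates to nearby observation stations
    points = requests.get(f"https://api.weather.gov/points/{LAT},{LON}",
                          headers=HEADERS).json()
    stations = requests.get(points["properties"]["observationStations"],
                            headers=HEADERS).json()
    station_id = stations["features"][0]["properties"]["stationIdentifier"]
    # latest observation from the nearest station
    obs = requests.get(f"https://api.weather.gov/stations/{station_id}/observations/latest",
                       headers=HEADERS).json()
    props = obs["properties"]
    return props["temperature"]["value"], props["relativeHumidity"]["value"]  # degC, %RH

if __name__ == "__main__":
    temp_c, rh = outside_conditions()
    print(f"{temp_c} C, {rh}% RH")

If that pans out, it’s just one more insert into the readings table.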
The starting point for this project is the Kasa power strip I posted about a month or so ago. I’m trying to code up something to log power levels directly to a MySQL db.
Both the kasa library and the MySQL connector seem to have good documentation on how to use them.
The kasa library requires asyncio, and I haven’t really messed with async programming before, so I get to learn some new concepts. Though what little I know tells me it’s basically a way to keep working while waiting on the other end to respond.
asyncio is a library to write concurrent code using the async/await syntax.
asyncio is used as a foundation for multiple Python asynchronous frameworks that provide high-performance network and web-servers, database connection libraries, distributed task queues, etc.
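A toy example shows what that buys you: two fake one-second “requests” finish in about one second total instead of two, because both waits overlap.

import asyncio
import time

async def fake_request(name):
    await asyncio.sleep(1)  # stand-in for waiting on a device or server
    return name

async def main():
    start = time.perf_counter()
    # both "requests" wait at the same time instead of back to back
    results = await asyncio.gather(fake_request("a"), fake_request("b"))
    print(results, f"in {time.perf_counter() - start:.1f}s")  # ['a', 'b'] in ~1.0s

asyncio.run(main())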
I’ve slapped together this Python to pull the data I want:
import asyncio
from kasa import Discover

async def main():
    dev = await Discover.discover_single("powerstrip.lan")
    await dev.update()
    # possible features:
    # state
    # rssi
    # on_since
    # reboot
    # led
    # cloud_connection
    # current_consumption
    # consumption_today
    # consumption_this_month
    # consumption_total
    # voltage
    # current
    state = dev.features.get("state")
    voltage = dev.features.get("voltage")
    current = dev.features.get("current")
    current_consumption = dev.features.get("current_consumption")
    consumption_this_month = dev.features.get("consumption_this_month")
    consumption_today = dev.features.get("consumption_today")
    # open file and write values (with closes the file for us)
    with open("power.txt", "w") as f:
        f.write(f'Power On: {state.value}<br>{voltage.value} V {current.value} A {current_consumption.value} W<br>Today: {consumption_today.value} kwh<br>This Month: {consumption_this_month.value} kwh')

if __name__ == "__main__":
    asyncio.run(main())
Which gives me this in WordPress:
Not directly of course; I haven’t gotten that far yet. For now, the Python writes a text file with HTML tags for display. I have an hourly cron job that uses scp to copy the file over to /var/www, and then some PHP in my functions.php file loads the file and displays it as a WordPress shortcode.
The stupid WordPress part took the longest, but now I understand the layout of WordPress a little better. It has been around and popular for so long that it is easy to find well-written documentation, but I’ve found that a lot of it is outdated. It is very annoying to read through something, try it out, and then find your error is because you’re trying to use a deprecated feature.
The vision is a cron job that runs a Python script to import data directly into the DB, and then PHP pulls the data from the DB, but the next step is to just get something into the DB.
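The vision, sketched out. This assumes mysql-connector-python and a made-up power_log table, and the host and database names are placeholders, so don’t hold me to the details:

import asyncio
import mysql.connector
from kasa import Discover

async def main():
    dev = await Discover.discover_single("powerstrip.lan")
    await dev.update()
    values = [dev.features.get(name).value
              for name in ("voltage", "current", "current_consumption", "consumption_today")]
    # power_log is a hypothetical table: (ts, voltage, current, watts, kwh_today)
    db = mysql.connector.connect(host="dbdp.lan", user="cwebdev",
                                 password="*****", database="power")  # placeholders
    try:
        cur = db.cursor()
        cur.execute(
            "INSERT INTO power_log (ts, voltage, current, watts, kwh_today)"
            " VALUES (NOW(), %s, %s, %s, %s)",
            values,
        )
        db.commit()
    finally:
        db.close()

if __name__ == "__main__":
    asyncio.run(main())

Run that from cron every few minutes and the PHP side only ever has to SELECT the newest row.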
I finally decided on CentOS and Cockpit for the VM host. Which is quite surprising. I have naturally used Red Hat Enterprise and clones at work for quite some time, but I haven’t given them another look since they first started with Cockpit, and man is it slick now. Some clarification I needed to arrive at this decision is what CentOS actually IS now: CentOS Stream, basically a preview of the next RHEL release, which they give out for free for people to test before it goes to paying customers. I did not know that it is made by RH engineers, and I never really saw myself ever using a RH-derived distro since I abandoned them in the late 90s.
Anyhoo, back to what this post is actually about, the DB migration. I was trying to come up with a clever name that rhymed with DB and DP came to mind, so we get this completely tasteless image and server name DBDP. If you are not familiar with the reference – then good – you’ve lived a good life.
I configured the new VM Guest with 8 cores and 32 GB of RAM. This is probably overkill, but it will allow me to do stupid things and “probably” not take out my website db in the process.
Ubuntu Server 24.04 LTS is the OS, and I’m switching from MySQL to MariaDB. Honestly, I don’t know why I even chose MySQL; I wouldn’t have if I’d remembered that it is now owned by Oracle, part of the Sun acquisition. It is my opinion that Oracle was and continues to be everything that MS was made out to be during the antitrust cases of the 90s. Actually, I just googled it, and that case wasn’t settled until 2001, but it started in 1990.
I just used the regular server netinstall ISO I used for the old db server, only the 24.04 version, and so far I’ve just installed the mariadb-server package. Side note: check /var/log/apt/history.log next time you can’t remember what you’ve installed with apt. I set up a WinSCP connection for root and copied over the keys for password-less login. Added a rule for mysql in the fancy-shmancy pit of cocks.
Fire up DBeaver and connect to MariaDB as root over SSH, so I can create a dev account on the DB.
Which of course did not allow me to connect. MariaDB by default doesn’t even allow local connections over TCP/IP, I find after much confusion. So I add this to /etc/mysql/my.cnf:
[mysqld]
# turn TCP/IP networking back on
skip-networking=0
# don't bind to 127.0.0.1 only; listen on all addresses
skip-bind-address
Same as MySQL, accounts are tied to hosts, and root is tied to localhost, so I still won’t be able to connect as root even over SSH, apparently. So I’ll create a development user that is close to root, and I managed to do it without much googling thanks to an earlier post.
CREATE USER 'cwebdev'@'%' IDENTIFIED BY '*****';
GRANT CREATE, ALTER, DROP, INSERT, UPDATE, DELETE, SELECT, REFERENCES, RELOAD on *.* TO 'cwebdev'@'%' WITH GRANT OPTION;
Yay! DBeaver is connecting.
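And a quick sanity check that the Python scripts will get in the same way later. This assumes mysql-connector-python, and the hostname is a placeholder for whatever DBDP resolves to on my network:

import mysql.connector

# the '%' host on cwebdev is what lets this work from another machine
db = mysql.connector.connect(host="dbdp.lan", user="cwebdev", password="*****")
cur = db.cursor()
cur.execute("SELECT VERSION(), CURRENT_USER()")
print(cur.fetchone())  # shows the server version and which account matched
db.close()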
And now it’s a few minutes until one am, and I’m hungry. Off to WhataBurger and then I’ll dick-around with loading info from that kasa power strip.
The cats weren’t around when I fed them yesterday. I noticed that the back porch light was on, so I glanced out to take a look; this was around two in the morning, I think.
Anyhoo, I bought the VM host hardware and it is set up. Arch was way too much manual work, though it is ideal if I really want to do things MY way… But MY way would be a gruesome sojourn into masochism, for nothing but LFS would really be MY way, and if I don’t have the time or patience for Arch, MY way isn’t feasible.
So far, I’ve built the Arch system, a Debian system with KVM/QEMU/libvirt, Proxmox (a disappointment given the hype), and I just started an Ubuntu Server LTS build. Fucking Broadcom killed another one with their VMware purchase. It would be so much easier to use ESXi.
Testing out minimal distros to run my hypervisor. Debian is fine and light enough, but the server doesn’t come for at least another day, so I’ve got time. I’ve been hearing about Arch forever and I haven’t really looked into it, but it sounds exactly like what I’m looking for.
Arch boots into a live CLI environment, and then you have to manually partition the disk to start.
So, how do I want to do this?
Update: the first partition must be the EFI partition, and it cannot be in LVM, so do that first.
fdisk /dev/sda
# g to create GPT table, n to make new, t to change type, and w to write
g
n
+1G
t
uefi
# make LVM partition
n
w
# the EFI partition must be FAT32 - the standard mandates it
mkfs.fat -F 32 /dev/sda1
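# create the PV, VG, and LVs on the second partition before formatting them
# (sizes here are examples, adjust to taste)
pvcreate /dev/sda2
vgcreate rootVG /dev/sda2
lvcreate -L 1G -n bootLV rootVG
lvcreate -L 8G -n swapLV rootVG
lvcreate -L 32G -n varLV rootVG
lvcreate -l 100%FREE -n rootLV rootVG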
mkfs.fat -F 32 /dev/rootVG/bootLV
# swap
mkswap /dev/rootVG/swapLV
# the rest
mkfs.ext4 /dev/rootVG/rootLV
Mount shit under /mnt. This better get less do-it-yourself real soon or I’m going back to Debian. But if I can slap these in a script, I’ll be fine.
# mount root filesystem
mount /dev/rootVG/rootLV /mnt
# make all those mf mount points you just had to have
mount --mkdir /dev/rootVG/bootLV /mnt/boot
mount --mkdir /dev/rootVG/varLV /mnt/var
and so on...
# enable swap
swapon /dev/rootVG/swapLV
Package list:
base linux linux-firmware vim efibootmgr grub intel-ucode networkmanager dosfstools exfatprogs e2fsprogs ntfs-3g lvm2 openssh sudo
pacstrap -K /mnt base linux linux-firmware
fstab
# Generate an fstab file (use -U or -L for UUID or labels)
genfstab -L /mnt >> /mnt/etc/fstab
chroot to new install
# fancy smancy arch version of chroot
arch-chroot /mnt
set a bunch of shit you normally never have to…
# time zone
ln -sf /usr/share/zoneinfo/America/Chicago /etc/localtime
# hw clock
hwclock --systohc
# Edit /etc/locale.gen and uncomment en_US.UTF-8 UTF-8
# fuck, install vim with 'pacman -S vim' if you forget it
locale-gen
# Create the locale.conf(5) file, and set the LANG variable accordingly
echo LANG=en_US.UTF-8 >> /etc/locale.conf
echo archkvm >> /etc/hostname
# because we are using LVM we need to create a new initramfs. Also needed for encryption and RAID.
# edit /etc/mkinitcpio.conf
# remove udev and replace with systemd
# insert lvm2 between block and filesystems
HOOKS=(base systemd ... block lvm2 filesystems)
# rebuild image
mkinitcpio -P
# install lvm2 and rebuild again because it gave you an error about exactly that
pacman -S lvm2
mkinitcpio -P
root password
passwd
Install bootloader. I’m doing GRUB for now, but I may later put the /boot partition outside of LVM and load the kernel directly from UEFI.
# install grub and efibootmgr (if you haven't already)
pacman -S grub efibootmgr
# mount efi partition
mount --mkdir /dev/sda1 /boot/efi
# install grub
grub-install --target=x86_64-efi --efi-directory=/boot/efi --bootloader-id=GRUB
# make grub config
grub-mkconfig -o /boot/grub/grub.cfg
NOTE: it is here where you realize the efi partition can NOT be on an LVM partition, even though GRUB is fine with /boot being there. Starting over and updating notes. fml
cross fingers and reboot
# exit chroot
exit
umount -R /mnt
reboot
Aaaaannnd voila!!!
The most basic-bitch Linux distro I’ve ever seen. Well, except for LFS, and I guess Gentoo was possibly worse because you had to wait through five hours of compiling to realize you fucked up. But this is what I wanted. A hypervisor should be very minimal.
I finally got a Pi after hearing my cousin talk about it a few times over the past few days. So far I am amazed at the performance-to-price ratio. Below are benchmark results for it and the thecweb.com server (which is quite old, really). For ~$125 it is a steal.
Benchmark                  thecweb.com    pi5
CPU events per second      1062.19        2730.24
Memory MiB/sec             6025.36        3649.76
File IO read MiB/sec       19.10          9.46
File IO write MiB/sec      12.73          6.31
So, the pi5 appears to be much faster on raw CPU than the Intel Core i5-4570T running thecweb.com. But, not surprisingly, the pi5 can’t compete on memory and file IO.
Since it has 8 GB of RAM and CPU to spare, I installed all the recommended software when I copied the OS to the SD card. It comes with some lightweight window manager I don’t recognize and a few useful tools for updating the Pi and whatnot. Debian-based, so nothing new for me there. I moved the webcam over to it from thecweb.com and installed Motion. It seems to work fine.
So far I really haven’t had much fun setting it up. Too easy. But I’m sure I’ll be tearing my hair out once I get to the electrical side of things. It has been over 20 years since my time at DeVry. And I was a real shitty student.
I have a couple of fairly complicated and hopefully long-term projects I’d like to do, and things are much easier to work on if I have a good way to store information about various components and incidents, so I’m going to see how hard it is to roll my own install of GLPI (Gestionnaire Libre de Parc Informatique, or “Free IT Equipment Manager”).
Installation
Downloaded this. Moved the extracted folder to /var/www.
Create directories for configs, data, and logs:
GLPI_CONFIG_DIR: the configuration directory, /etc/glpi. GLPI requires read rights on it to work, and write rights during the installation process. Copy the contents of the config directory to this place.
GLPI_VAR_DIR: the files directory, /var/lib/glpi. GLPI requires read and write rights on this directory. Copy the contents of the files directory to this place.
GLPI_LOG_DIR: the log directory, /var/log/glpi. GLPI requires read and write access on this directory.
Create an inc/downstream.php file in the GLPI directory with the following contents:
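Per the GLPI docs, it just points GLPI at the config directory from above (this is the stock snippet, give or take):

<?php
define('GLPI_CONFIG_DIR', '/etc/glpi/');

if (file_exists(GLPI_CONFIG_DIR . '/local_define.php')) {
    require_once GLPI_CONFIG_DIR . '/local_define.php';
}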
Add an Apache virtual host. I’ll lock it down to my local network, so this won’t be accessible from the internet for now. Added to /etc/apache2/sites-enabled/glpi.conf:
<VirtualHost *:80>
ServerName glpi
ServerAdmin webmaster@localhost
DocumentRoot /var/www/glpi/public
ErrorLog ${APACHE_LOG_DIR}/glpi-error.log
CustomLog ${APACHE_LOG_DIR}/glpi-access.log combined
# If you want to place GLPI in a subfolder of your site (e.g. your virtual host is serving multiple applications),
# you can use an Alias directive. If you do this, the DocumentRoot directive MUST NOT target the GLPI directory itself.
# Alias "/glpi" "/var/www/glpi/public"
<Directory /var/www/glpi/public>
Require all granted
RewriteEngine On
# Ensure authorization headers are passed to PHP.
# Some Apache configurations may filter them and break usage of API, CalDAV, ...
RewriteCond %{HTTP:Authorization} ^(.+)$
RewriteRule .* - [E=HTTP_AUTHORIZATION:%{HTTP:Authorization}]
# Redirect all requests to GLPI router, unless file exists.
RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule ^(.*)$ index.php [QSA,L]
</Directory>
</VirtualHost>
I of course had to add an entry for glpi in the hosts file on my laptop for this to work.
I got bored last night and started researching OpenWRT. There is no particular feature it supports that my current router firmware doesn’t, but I haven’t looked at the project in at least 10 years.
I currently run an ASUS AX-3000, which I bought because I thought my old Netgear X8 R8300 was malfunctioning, but when I had the same issue with the ASUS, I found it was a config problem. Since the Netgear is just sitting in the basement, I thought I’d install OpenWRT on that first and then see if it’s worth installing on the ASUS. The Netgear is a little higher-end as routers go, but it doesn’t have WiFi 6. The ASUS does, but it has one less radio, so I’ll need to see how they perform.
Unfortunately, neither router has a prebuilt image, so I had to build my own. Luckily there was already a profile for the R8500, which hardware-wise is almost identical to the model I have.
The build environment setup and instructions can be found here. It was a simple matter of firing up an Ubuntu VM and following along. I can’t flash it while I’m at work, so that will have to wait.
The most annoying thing about getting this set up is how confusing the OpenWRT documentation is. I can see why they would organize it this way, but it seems that unless you have a router one of the maintainers owns, you are left doing it manually. Even though it’s just Linux, so you really just need the hardware support to get up and running. I would think a broader generic image for testing things would make more sense. Oh well.
cweb@testvmhost:~/openwrt-imagebuilder-bcm53xx-generic.Linux-x86_64$ make image \
PROFILE="netgear_r8500"
Generate local signing keys...
WARNING: can't open config file: /builder/shared-workdir/build/staging_dir/host/etc/ssl/openssl.cnf
WARNING: can't open config file: /builder/shared-workdir/build/staging_dir/host/etc/ssl/openssl.cnf
read EC key
writing EC key
Checking 'true'... ok.
Checking 'false'... ok.
Checking 'working-make'... ok.
Checking 'case-sensitive-fs'... ok.
Well, that was easy. I literally just copied over the jar file and restarted guacd, apache2, and tomcat9. After that I just logged out and back in to enroll in TOTP.
I did find, unfortunately, that the KeePass app I’m using on Android doesn’t seem to sync things both ways. Entries I create on my phone don’t seem to sync to Google Drive, but that only took me a second to work around. It’s not really a big deal, but it meant I had to manually enter the secret key and such. Guac TOTP supports QR codes, and I was able to add it with my phone, but I wasn’t able to get it to sync back to my computer (after five minutes of trying). That may be a project for another day.