Dietrich Schroff


Microk8s: publishing the dashboard (reachable from remote/internet)

Sat, 2021-01-23 15:22

 

If you enable the dashboard on a MicroK8s cluster (or a single node), you can follow this tutorial: https://microk8s.io/docs/addon-dashboard

The problem is that the command

microk8s kubectl port-forward -n kube-system service/kubernetes-dashboard 10443:443

has to be re-executed every time you restart the node you use to access the dashboard.

A better configuration can be done this way: run

kubectl -n kube-system edit service kubernetes-dashboard

and change

type: ClusterIP -->   type: NodePort

# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
kind: Service
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"k8s-app":"kubernetes-dashboard"},"name":"kubernetes-dashboard","namespace":"kube-system"},"spec":{"ports":[{"port":443,"targetPort":8443}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
  creationTimestamp: "2021-01-22T21:19:24Z"
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
  resourceVersion: "3599"
  selfLink: /api/v1/namespaces/kube-system/services/kubernetes-dashboard
  uid: 19496d44-c454-4f55-967c-432504e0401b
spec:
  clusterIP: 10.152.183.81
  clusterIPs:
  - 10.152.183.81
  ports:
  - port: 443
    protocol: TCP
    targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
Then run

root@ubuntu:/home/ubuntu# kubectl -n kube-system get service kubernetes-dashboard
NAME                   TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
kubernetes-dashboard   NodePort   10.152.183.81   <none>        443:30713/TCP   4m14s

After that you can access the dashboard via the port shown after the 443: - in my case https://zigbee:30713
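By the way: instead of editing the service interactively, the same change can be applied non-interactively with a patch - a one-liner sketch for the same service and namespace as above:

microk8s kubectl -n kube-system patch service kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}'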


Microk8s: No such file or directory: '/var/snap/microk8s/1908/var/kubernetes/backend.backup/info.yaml' while joining a cluster

Fri, 2021-01-22 15:12

 Kubernetes cluster with microk8s on raspberry pi

If you want to join a node and you get the following error:

microk8s join 192.168.178.57:25000/6a3ce1d2f0105245209e7e5e412a7e54

Contacting cluster at 192.168.178.57
Traceback (most recent call last):
  File "/snap/microk8s/1908/scripts/cluster/join.py", line 967, in <module>
    join_dqlite(connection_parts)
  File "/snap/microk8s/1908/scripts/cluster/join.py", line 900, in join_dqlite
    update_dqlite(info["cluster_cert"], info["cluster_key"], info["voters"], hostname_override)
  File "/snap/microk8s/1908/scripts/cluster/join.py", line 818, in update_dqlite
    with open("{}/info.yaml".format(cluster_backup_dir)) as f:
FileNotFoundError: [Errno 2] No such file or directory: '/var/snap/microk8s/1908/var/kubernetes/backend.backup/info.yaml'

This error happens if you have not enabled DNS on your nodes.

So just run "microk8s.enable dns" on every machine:

microk8s.enable dns

Enabling DNS
Applying manifest
serviceaccount/coredns created
configmap/coredns created
deployment.apps/coredns created
service/kube-dns created
clusterrole.rbac.authorization.k8s.io/coredns created
clusterrolebinding.rbac.authorization.k8s.io/coredns created
Restarting kubelet
Adding argument --cluster-domain to nodes.
Configuring node 192.168.178.57
Adding argument --cluster-dns to nodes.
Configuring node 192.168.178.57
Restarting nodes.
Configuring node 192.168.178.57
DNS is enabled

And after that the join will work as expected:

root@ubuntu:/home/ubuntu# microk8s join 192.168.178.57:25000/ed3f57a3641581964cad43f0ceb2b526
Contacting cluster at 192.168.178.57
Waiting for this node to finish joining the cluster. ..  
root@ubuntu:/home/ubuntu# kubectl get nodes
NAME     STATUS   ROLES    AGE     VERSION
ubuntu   Ready    <none>   3m35s   v1.20.1-34+97978f80232b01
zigbee   Ready    <none>   37m     v1.20.1-34+97978f80232b01
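By the way, whether the dns addon is already active can be checked upfront on every node - a quick look at the addon list:

microk8s status | grep dns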
 

MicroK8s: Kubernetes on raspberry pi - get nodes = NotReady

Wed, 2021-01-20 15:44

On my little Kubernetes cluster with MicroK8s I got this problem:

kubectl get nodes
NAME     STATUS     ROLES    AGE   VERSION
zigbee   NotReady   <none>   59d   v1.19.5-34+b1af8fc278d3ef
ubuntu   Ready      <none>   59d   v1.19.6-34+e6d0076d2a0033

The first step towards the solution was:

kubectl describe node zigbee

and in the output I found:

Events:
  Type     Reason                   Age                From        Message
  ----     ------                   ----               ----        -------
  Normal   Starting                 18m                kube-proxy  Starting kube-proxy.
  Normal   Starting                 14m                kubelet     Starting kubelet.
  Warning  SystemOOM                14m                kubelet     System OOM encountered, victim process: influx, pid: 3256628
  Warning  InvalidDiskCapacity      14m                kubelet     invalid capacity 0 on image filesystem
  Normal   NodeHasNoDiskPressure    14m (x2 over 14m)  kubelet     Node zigbee status is now: NodeHasNoDiskPressure
  Normal   NodeHasSufficientPID     14m (x2 over 14m)  kubelet     Node zigbee status is now: NodeHasSufficientPID
  Normal   NodeHasSufficientMemory  14m (x2 over 14m)  kubelet     Node zigbee status is now: NodeHasSufficientMemory
Hmmm - so running additional databases and processes outside of Kubernetes is not such a good idea.

But as a fast solution: I ejected the SD card, resized the partition and added swap on my laptop, and put the SD card back into the raspberry pi...
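If you do not want to eject the SD card, adding a swap file directly on the node should do the same job - a minimal sketch (the 1G size is an assumption, adjust it to your SD card):

fallocate -l 1G /swapfile
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile
echo '/swapfile none swap sw 0 0' >> /etc/fstab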

Review: Kafka: The Definitive Guide

Wed, 2021-01-06 14:55

Last week I read the book "Kafka: The Definitive Guide" with the subtitle "Real-Time Data and Stream Processing at Scale", which was provided by confluent.io:


The book contains 11 chapters on 288 pages - let's take a look at the content:

Chapter 1 "meet Kafka" start with a motivation, why moving data is important and why you should not spend your effort not into moving but into your business. In addition an introduction to the messaging concepts like publish/subscribe, queues, messages, batches, schemas, topics, partitions, ... Many technical terms are defined there, but some are specific to Kafka and some are more general definitions. One additional info: Kafka was built by linkedin - the complete story told in the last section of this chapter.

The second chapter is about installing Kafka. Nothing special. OS, Java, Zookeeper (if clustered), Kafka.

Chapter 3 is called "Kafka producers: Writing messages to Kafka". As the title indicates, all configuration details about sending messages are listed and explained.

Chapter 4 does the same as the previous chapter, but for reading messages. Both chapters contain many Java example listings.

Chapters 5 & 6 are about clusters and reliability. Here are the nifty details explained like high water marks, message replication, timeouts, indices, ... If you want to run a high available Kafka system, you should read that and in case of failures you will know what to do.

Chapter 7 introduces Kafka Connect. Here is a citation about when you should use Connect (it is not possible to summarize it better):

You will use Connect to connect Kafka to datastores that you did not write and whose code you cannot or will not modify. Connect will be used to pull data from the external datastore into Kafka or push data from Kafka to an external store. For datastores where a connector already exists, Connect can be used by nondevelopers, who will only need to configure the connectors.

"Cross data cluster mirroring" is the title of chapter 8 - i do not understand why this chapter is not placed before chapter 7...

In chapters 9 and 10 administration and monitoring are explained. The amount of CLI examples is very impressive. If you have a question: here you will find the CLI command which provides the answer.

The last chapter, "stream processing", is one of the longest chapters (>40 pages). Here two APIs are presented to do some processing based on the messages. One example is a stream which processes stock quotes. With stream processing it is possible to calculate the number of trades or the average ask price for every five-second window. Of course this chapter shows much more, but I think this gives the best impression ;-)

All in all an excellent book - even if you are not implementing Kafka ;-)


Samsung A50: boot loop problem after last Samsung OS update

Thu, 2020-12-31 04:36

I used a Samsung A50 for nearly 1.5 years and was very satisfied with the device. 128GB internal storage and dual SIM - I do not need more :)

But last week the monthly "security" update was done by Samsung, and after booting the new OS everything seemed to be fine. Only a few hours later (I did not install any new software - I was just browsing the web on my favourite news page) the smartphone froze, and after that it kept showing this screen for hours:

By pressing "Volume Up" and "Power" I was able to open the recovery mode, but even after a factory reset the boot screen is still shown...

Anyone else with this problem? Please leave a comment!


My son started at blogspot.com

Wed, 2020-12-30 14:54

My son started his own blog

https://holzgeschenkebasteln.blogspot.com/


Of course this blog is in German, but it is nice to see that he managed to get everything running and configured.

I am curious whether he will write some more postings...

MicroK8s: more problems - log flooding

Wed, 2020-12-23 13:05

After getting my Kubernetes nodes running on Ubuntu's MicroK8s, I got thousands of these messages in my syslog:

Dec 22 21:15:00 ubuntu microk8s.daemon-kubelet[10978]: W1122 21:15:00.735176   10978 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {unix:///var/snap/microk8s/common/run/containerd.sock  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial unix:///var/snap/microk8s/common/run/containerd.sock: timeout". Reconnecting...

Dec 22 21:15:00 ubuntu microk8s.daemon-kubelet[10978]: W1122 21:15:00.737524   10978 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {unix:///var/snap/microk8s/common/run/containerd.sock  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial unix:///var/snap/microk8s/common/run/containerd.sock: timeout". Reconnecting...

Really annoying. I found no root cause for this problem, but there is an easy way to correct it:

snap disable microk8s
snap enable microk8s

Run this on both nodes and the problem is gone (I think rebooting would do the same job).
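To verify that the flooding has really stopped, you can count the matching lines in syslog (the path is the Ubuntu default):

grep -c 'containerd.sock' /var/log/syslog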



Review: AIOps for dummies - the newest buzzword in town...

Fri, 2020-12-18 12:03

Today I ran across an article on LinkedIn where this book was announced:


The next big thing after DevOps is AIOps?

Moogsoft says about themselves: "Moogsoft is a pioneer and leading provider of AIOps solutions that help IT teams work faster and smarter. With patented AI analyzing billions of events daily across the world's most complex IT environments, the Moogsoft AIOps platform helps the world's top enterprises avoid outages, automate service assurance, and accelerate digital transformation initiatives...."

So let's take a look inside this book with 43 pages and 7 chapters:

Chapter one starts with a statement of the problem: DevOps & reliability need improvements in incident resolution, meeting SLAs and accelerating digital transformation. The short case study provided there is very nice.

Chapter 2 starts with this sentence: "AI is technology used to create machines that imitate intelligent human behaviour." YES! They are not talking about the almighty AI - this sounds very promising. For Moogsoft, AI means statistics, probabilities, calculations and algebra - as a physicist I strongly agree with that "legacy" approach. Then the book covers the AI learning techniques very briefly.

In chapter 3 the AIOps workflow is presented. Without going into any details here: Moogsoft uses a very nice iconic design which explains their procedure well. At this point I would recommend you take a look at that...

Chapter 4 provides some more use cases for AIOps. Nice - but nothing really new.

Chapter 5 claims that AIOps provides a unified view of monitoring, observability and change data. Sounds good - but I think digging into the details will show the limits of that promise. Page 32, however, shows a list of systems which are already integrated - this is really a very impressive list.

In chapter 6 Moogsoft advertises its small entry-level solution "Moogsoft Express".

The last chapter closes with the typical "ten tips".

 

All in all a nice idea - let's see how this solution performs on the market!

zigbee: moving data from mqtt to influxdb - transforming strings to integers

Wed, 2020-12-16 14:41

After some first steps with zigbee devices and storing the data in an InfluxDB, I noticed that string values are suboptimal for building graphs.

Moving the data from MQTT to InfluxDB was done with Telegraf:

https://www.influxdata.com/time-series-platform/telegraf/

And I was wondering how I could change strings to integers, but this is very easy:

  [[processors.enum]]
    order = 2
    [[processors.enum.mapping]]
      field = "state"
      [processors.enum.mapping.value_mappings]
        "ON" = 1
        "OFF" = 0
    [[processors.enum.mapping]]
      field = "contact"
      [processors.enum.mapping.value_mappings]
        "true" = 2
        "false" = 1
    [[processors.enum.mapping]]
      field = "tamper"
      [processors.enum.mapping.value_mappings]
        "true" = 1
        "false" = 0
    [[processors.enum.mapping]]
      field = "water_leak"
      [processors.enum.mapping.value_mappings]
        "true" = 1
        "false" = 0
Next problem: if the field "water_leak" was already created as a string inside your InfluxDB, you cannot write numbers into it - so you have to drop the measurement and lose your data...

(This is not the full truth: you can export the data via a select to a file and re-insert the data afterwards - with the appropriate numbers...)
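A rough sketch of that export/re-insert idea (the database and measurement names are assumptions - replace them with your own):

influx -database telegraf -execute 'SELECT * FROM mqtt_consumer' -format csv > backup.csv
influx -database telegraf -execute 'DROP MEASUREMENT mqtt_consumer'
# afterwards re-insert the rows via line protocol, with the mapped numbers instead of strings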
 


My start to a local kubernetes cluster: microK8s @ubuntu

Sat, 2020-12-12 12:54

After playing around with zigbee on the raspberry pi, I decided to build up my own Kubernetes cluster at home. I have two raspberry pis running Ubuntu server, so I wanted to go in this direction:


The start is very easy. Just follow the steps shown here:

https://microk8s.io/docs

But after adding the second node I got the following result:

root@zigbee:/home/ubuntu/kubernetes# microk8s kubectl get nodes
NAME     STATUS     ROLES    AGE   VERSION
ubuntu   NotReady   <none>   98s   v1.19.3-34+b9e8e732a07cb6
zigbee   NotReady   <none>   37m   v1.19.3-34+b9e8e732a07cb6
Hmmm.

The best way to debug this problem is

# microk8s inspect
Inspecting Certificates
Inspecting services
  Service snap.microk8s.daemon-cluster-agent is running
  Service snap.microk8s.daemon-containerd is running
  Service snap.microk8s.daemon-apiserver is running
  Service snap.microk8s.daemon-apiserver-kicker is running
  Service snap.microk8s.daemon-control-plane-kicker is running
  Service snap.microk8s.daemon-proxy is running
  Service snap.microk8s.daemon-kubelet is running
  Service snap.microk8s.daemon-scheduler is running
  Service snap.microk8s.daemon-controller-manager is running
  Copy service arguments to the final report tarball
Inspecting AppArmor configuration
Gathering system information
  Copy processes list to the final report tarball
  Copy snap list to the final report tarball
  Copy VM name (or none) to the final report tarball
  Copy disk usage information to the final report tarball
  Copy memory usage information to the final report tarball
  Copy server uptime to the final report tarball
  Copy current linux distribution to the final report tarball
  Copy openSSL information to the final report tarball
  Copy network configuration to the final report tarball
Inspecting kubernetes cluster
  Inspect kubernetes cluster
Inspecting juju
  Inspect Juju
Inspecting kubeflow
  Inspect Kubeflow

# Warning: iptables-legacy tables present, use iptables-legacy to see them
WARNING:  Docker is installed.
File "/etc/docker/daemon.json" does not exist.
You should create it and add the following lines:
{
    "insecure-registries" : ["localhost:32000"]
}
and then restart docker with: sudo systemctl restart docker
WARNING:  The memory cgroup is not enabled.
The cluster may not be functioning properly. Please ensure cgroups are enabled
See for example: https://microk8s.io/docs/install-alternatives#heading--arm
Building the report tarball
  Report tarball is at /var/snap/microk8s/1794/inspection-report-20201212_194335.tar.gz
And as you can see: this contains the solution!
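In my case that meant creating the file exactly as the report suggests (taken from the warning above) and restarting docker:

cat <<'EOF' > /etc/docker/daemon.json
{
    "insecure-registries" : ["localhost:32000"]
}
EOF
systemctl restart docker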

After adding /etc/docker/daemon.json everything went fine:

root@zigbee:~# kubectl get nodes 
NAME     STATUS   ROLES    AGE    VERSION
ubuntu   Ready    <none>   46h    v1.19.3-34+b9e8e732a07cb6
zigbee   Ready    <none>   2d3h   v1.19.3-34+b9e8e732a07cb6

MicroK8s: Dashboard & RBAC

Fri, 2020-12-11 15:01

If you want to access your dashboard and you have enabled RBAC (as shown here), you will get these errors if you follow the default manual (https://microk8s.io/docs/addon-dashboard):

secrets is forbidden: User "system:serviceaccount:default:default" cannot list resource "secrets" in API group "" in the namespace "default"
error
persistentvolumeclaims is forbidden: User "system:serviceaccount:default:default" cannot list resource "persistentvolumeclaims" in API group "" in the namespace "default"
error
configmaps is forbidden: User "system:serviceaccount:default:default" cannot list resource "configmaps" in API group "" in the namespace "default"
error
services is forbidden: User "system:serviceaccount:default:default" cannot list resource "services" in API group "" in the namespace "default"
error
statefulsets.apps is forbidden: User "system:serviceaccount:default:default" cannot list resource "statefulsets" in API group "apps" in the namespace "default"
error
ingresses.extensions is forbidden: User "system:serviceaccount:default:default" cannot list resource "ingresses" in API group "extensions" in the namespace "default"
error
replicationcontrollers is forbidden: User "system:serviceaccount:default:default" cannot list resource "replicationcontrollers" in API group "" in the namespace "default"
error
jobs.batch is forbidden: User "system:serviceaccount:default:default" cannot list resource "jobs" in API group "batch" in the namespace "default"
error
replicasets.apps is forbidden: User "system:serviceaccount:default:default" cannot list resource "replicasets" in API group "apps" in the namespace "default"
error
deployments.apps is forbidden: User "system:serviceaccount:default:default" cannot list resource "deployments" in API group "apps" in the namespace "default"
error
events is forbidden: User "system:serviceaccount:default:default" cannot list resource "events" in API group "" in the namespace "default"
error
pods is forbidden: User "system:serviceaccount:default:default" cannot list resource "pods" in API group "" in the namespace "default"
error
daemonsets.apps is forbidden: User "system:serviceaccount:default:default" cannot list resource "daemonsets" in API group "apps" in the namespace "default"
error
cronjobs.batch is forbidden: User "system:serviceaccount:default:default" cannot list resource "cronjobs" in API group "batch" in the namespace "default"
error
namespaces is forbidden: User "system:serviceaccount:default:default" cannot list resource "namespaces" in API group "" at the cluster scope
To get the right bearer token you have to do this:

export K8S_USER="system:serviceaccount:default:default"
export NAMESPACE="default"
export BINDING="defaultbinding"
export ROLE="defaultrole"
kubectl create clusterrole $ROLE  --verb="*"  --resource="*.*"    
kubectl create rolebinding $BINDING --clusterrole=$ROLE --user=$K8S_USER -n $NAMESPACE
kubectl -n ${NAMESPACE} describe secret $(kubectl -n ${NAMESPACE} get secret | grep default-token | awk '{print $1}') | grep token: | awk '{print $2}'

(create role, add a role binding and then get the token)

But there is still one error:

To fix this, you have to add the cluster-admin role to this account (if you really want cluster-wide permissions):

kubectl create clusterrolebinding root-cluster-admin-binding --clusterrole=cluster-admin --user=$K8S_USER

Securing InfluxDB

Sat, 2020-12-05 01:51

In my monitoring setup I am heavily using InfluxDB. I started with one Linux server with Grafana loading the data from its local InfluxDB; now I wanted to set up a second Linux server.

My options:

  1. new telegraf, new influxdb, new grafana
    but then I have two URLs (because of two Grafanas) and I cannot copy graphs from one dashboard to the other
  2. new telegraf, new influxdb, but grafana from the first server
    grafana has to get the data over the network
  3. new telegraf, influxdb & grafana from the first server
    what happens if telegraf cannot reach influxdb because of a network problem? what if the first server is down?
  4. completely remote monitoring
    what happens if telegraf cannot reach the other server? what if the first server is down?

As you can see, option 2 is the favorite here.

But for that, InfluxDB has to be secured: SSL + user/password.

So let's start with creating some certificates:

openssl req -new -x509 -nodes -out cert.pem -days 3650 -keyout key.pem

So that you get:

zigbee:/etc/influxdb# ls -lrt *pem
-rw-r--r-- 1 influxdb root  1704 Nov  7 09:48 key.pem
-rw-r--r-- 1 influxdb root  1411 Nov  7 09:48 cert.pem

Then add this to /etc/influxdb/influxdb.conf (these settings belong in the [http] section):

 https-enabled = true
 https-certificate = "/etc/influxdb/cert.pem"
 https-private-key = "/etc/influxdb/key.pem"

But a user is still missing, so we have to create one (via bash):

influx -ssl -unsafeSsl

create user admin with password 'XXXXXXX' with all privileges
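One detail which is easy to miss: the user/password check is only enforced once authentication is switched on in the [http] section of /etc/influxdb/influxdb.conf as well (and InfluxDB is restarted):

 auth-enabled = true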

After that you can test this with

root@zigbee:# influx -ssl -unsafeSsl  
Connected to https://localhost:8086 version 1.6.4
InfluxDB shell version: 1.6.4
> show databases
ERR: unable to parse authentication credentials
Warning: It is possible this error is due to not setting a database.
Please set a database with the command "use <database>".
> auth
username: admin
password:
> show databases
name: databases
name
----
_internal


AVM Fritz.Box: how to do an automatic login and get the logged in WLAN devices

Fri, 2020-12-04 13:29

The AVM Fritz.Box is really a great device - but the possibilities to get monitoring data are very limited. (Please read this posting)

Which data do I want?

I want the data which is presented in the networking tab:

If I trace the network traffic with the developer tools, I see the following:

To reproduce this on my command line, I have to enter this into my bash:

curl 'http://fritz.box/data.lua' \
-H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:82.0) Gecko/20100101 Firefox/82.0' \
-H 'Accept: */*' \
-H 'Accept-Language: de,en;q=0.7,en-US;q=0.3' --compressed \
-H 'Content-Type: application/x-www-form-urlencoded' \
-H 'Origin: http://fritz.box' -H 'Connection: keep-alive' \
-H 'Referer: http://fritz.box/' -H 'Pragma: no-cache' \
-H 'Cache-Control: no-cache' \
--data-raw 'xhr=1&sid=cb......SID&lang=de&page=netDev&xhrId=cleanup&useajax=1&no_sidrenew='

(the backslashes allow the line breaks; you have to fill in your own SID in the last line).

Then you will get a JSON object beginning with these lines:

{
  "pid": "netDev",
  "hide": {
    "ssoEmail": true,
    "shareUsb": true,
    "liveTv": true,
    "faxSet": true,
    "dectMoniEx": true,
    "rss": true,
    "mobile": true,
and all the other information.

The problem: How to get this SID?

If you trace the login, you see that it is not as simple as the password just being sent to the Fritz.Box. They use PBKDF2 to derive a response from the password and then send that to the Fritz.Box.

You can find some information about that here:

https://avm.de/fileadmin/user_upload/Global/Service/Schnittstellen/AVM%20Technical%20Note%20-%20Session%20ID_EN%20-%20Nov2020.pdf


Inside this document a PHP program is included which does the login (not really - I think it did the job years ago, but nowadays it falls back to MD5 authentication. I fixed this; just post a comment if you want the PBKDF2-enabled PHP script). I wrote a small JavaScript which I execute with node, and after that I was able to log the data into my InfluxDB and show it inside Grafana:


If you are interested in the configuration, the JS script and the collect commands, just post a comment...
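For the impatient: the legacy MD5 challenge-response (the fallback mentioned above) can even be done in plain bash - a rough sketch following the AVM document, with the password as a placeholder and without any error handling:

CHALLENGE=$(curl -s 'http://fritz.box/login_sid.lua' | grep -o '<Challenge>[^<]*' | cut -d'>' -f2)
HASH=$(echo -n "$CHALLENGE-MYPASSWORD" | iconv -f UTF-8 -t UTF-16LE | md5sum | cut -d' ' -f1)
SID=$(curl -s "http://fritz.box/login_sid.lua?username=&response=$CHALLENGE-$HASH" | grep -o '<SID>[^<]*' | cut -d'>' -f2)
echo $SID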

Kubernetes: Rights & Roles with kubectl and RBAC - How to restrict kubectl for a user to a namespace

Wed, 2020-12-02 13:39

Playing around with my MicroK8s I was thinking about restricting access to the default namespace. Why?

Every command adds something, so your default namespace gets polluted more and more, and cleaning up might be a lot of work.

But:

There is neither a HOWTO nor a quickstart for this. Everything you can find is:

https://kubernetes.io/docs/reference/access-authn-authz/rbac/

After this very detailed article you know a lot of things, but for restricting kubectl you are as smart as before.

One thing I learned from this article:

You do not have to use these YAML files - everything can be done with commands and their options (I do not like YAML, so this was a very important insight for me).

In the end it is very easy:

export K8S_USER="ateamuser"
export NAMESPACE="ateam"
export BINDING="ateambinding"
export ROLE="ateamrole"
kubectl create namespace $NAMESPACE
kubectl label namespaces $NAMESPACE team=a
kubectl create clusterrole $ROLE  --verb="*"  --resource="*.*"
kubectl create rolebinding $BINDING --clusterrole=$ROLE --user=$K8S_USER -n $NAMESPACE
kubectl create serviceaccount $K8S_USER -n $NAMESPACE
kubectl describe sa $K8S_USER -n $NAMESPACE
and just test it with:

root@zigbee:/home/ubuntu/kubernetes# kubectl get pods -n ateam  --as=ateamuser
NAME                  READY   STATUS    RESTARTS   AGE
web-96d5df5c8-cc9jv   1/1     Running   0          14m
root@zigbee:/home/ubuntu/kubernetes# kubectl get pods -n default  --as=ateamuser
Error from server (Forbidden): pods is forbidden: User "ateamuser" cannot list resource "pods" in API group "" in the namespace "default"
So no big script is needed - but figuring out these commands was really a hard job...

If you want to know how to restrict kubectl on a remote computer, please write a comment.

One last remark: In microK8s you enable RBAC with the command

microk8s.enable rbac

Check this with

microk8s.status
microk8s is running
high-availability: no
  datastore master nodes: 192.168.178.57:19001
  datastore standby nodes: none
addons:
  enabled:
    dashboard            # The Kubernetes dashboard
    dns                  # CoreDNS
    ha-cluster           # Configure high availability on the current node
    ingress              # Ingress controller for external access
    metrics-server       # K8s Metrics Server for API access to service metrics
    rbac                 # Role-Based Access Control for authorisation
  disabled:
    helm                 # Helm 2 - the package manager for Kubernetes
    helm3                # Helm 3 - Kubernetes package manager
    host-access          # Allow Pods connecting to Host services smoothly
    linkerd              # Linkerd is a service mesh for Kubernetes and other frameworks
    metallb              # Loadbalancer for your Kubernetes cluster
    registry             # Private image registry exposed on localhost:32000
    storage              # Storage class; allocates storage from host directory



Kubernetes with microK8s: First steps to expose a service to external

Fri, 2020-11-27 15:02

At home I wanted to have my own Kubernetes cluster. I own 2 raspberry pis based on Ubuntu, so I decided to install microK8s:

--> https://ubuntu.com/blog/what-can-you-do-with-microk8s

The installation is very well explained here:

https://ubuntu.com/tutorials/install-a-local-kubernetes-with-microk8s#1-overview

 

BUT: nowhere did I find a tutorial on how to run a container and expose its port in a way that it is reachable from other PCs, just like on localhost.

So here we go:

kubectl create deployment web --image=nginx
kubectl expose deployment web --type=NodePort --port=80

After that just do:

# kubectl get all
NAME                      READY   STATUS    RESTARTS   AGE
pod/web-96d5df5c8-5xvfc   1/1     Running   0          112s

NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
service/kubernetes   ClusterIP   10.152.183.1    <none>        443/TCP        2d5h
service/web          NodePort    10.152.183.66   <none>        80:32665/TCP   105s

NAME                  READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/web   1/1     1            1           112s

NAME                            DESIRED   CURRENT   READY   AGE
replicaset.apps/web-96d5df5c8   1         1         1       112s

On your Kubernetes node you can reach the service at 10.152.183.66:80.

To get the nginx from another PC just use:

<yourkuberneteshost>:32665
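A quick test from any other machine in the network (hostname and port are of course the ones from your own setup):

curl http://<yourkuberneteshost>:32665

This should return the nginx welcome page.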



ZigBee@Linux: Getting Data from ZigBee Devices via MQTT to InfluxDB and Grafana

Fri, 2020-11-20 15:56

After integrating zigbee sensors with my Linux raspberry pi, I did some monitoring tasks on my raspberry pi.

  1. Monitoring my raspberry pi:
    There is a very nice tutorial:
    https://medium.com/@andreea.sonda31/monitor-raspberry-pi-resources-and-parameters-with-grafana-board-part-1-ab0567303e8
    Or even better: Just use this from grafana:
    https://grafana.com/grafana/dashboards/10578
    1. add deb https://packages.grafana.com/oss/deb stable main to a file in /etc/apt/sources.list.d/
    2. apt install grafana telegraf influxdb
    3. configure telegraf for your influxdb
    4. import the json from the grafana.com-link above



  2. Monitoring my Fritz.Box with Grafana:
    https://grafana.com/grafana/dashboards/713 
    and follow the given tutorial https://fetzerch.github.io/2014/08/23/fritzcollectd/
After these steps I have the following infrastructures running:
  1. zigbee2mqtt --> MQTT -->FHEM


  2. Fritz.box --> collectd --> InfluxDB --> Grafana

  3. raspberry --> telegraf --> InfluxDB --> Grafana


For 2 and 3 it is very easy to create graphics, and the presentation looks a little bit prettier than 1 (imho).

AND there is only one frontend to configure. So what about the following chain for my zigbee sensors:

  1. zigbee2mqtt --> MQTT -->telegraf --> InfluxDB --> Grafana 

Looks like some more steps, but the telegraf --> InfluxDB --> Grafana chain is already there for monitoring my raspberry pi.

So I only had to add the following to /etc/telegraf/telegraf.conf:

[[inputs.mqtt_consumer]]
   servers = ["tcp://127.0.0.1:1883"]
   topics = [
     "zigbee2mqtt/0x00158d000542239e",
     "zigbee2mqtt/0x00158d00044a6378",
     "zigbee2mqtt/0x00158d0003f0faad",
     "zigbee2mqtt/0x00158d00044a72a2",
   ]
   data_format = "json"
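Whether the zigbee messages really arrive on the broker can be checked independently of telegraf, for example with mosquitto_sub (assuming the mosquitto-clients package is installed - any MQTT client will do; the wildcard subscribes to all zigbee2mqtt topics):

mosquitto_sub -h 127.0.0.1 -t 'zigbee2mqtt/#' -v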

And after that I was able to use the data in Grafana:


 


ZigBee@Linux: Securing zigbee2mqtt & MQTT@FHEM & FHEM

Sun, 2020-11-15 09:58


Now that my setup is running, just some words about securing the whole thing.

The web gui of FHEM was already set up with SSL/HTTPS, but the MQTT server is listening on all IPs.

The easiest way to secure this is to change the listener to localhost, so that no connections from outside can be made. Just change this in /opt/fhem/fhem.cfg:

define MQTT2_FHEM_Server MQTT2_SERVER 1883 127.0.0.1
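A quick check shows whether the change worked - the listener should now be bound to 127.0.0.1:1883 instead of 0.0.0.0:1883:

netstat -ltnp | grep 1883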

Just a checklist, if we secured everything:
  • FHEM
    • web gui with SSL/HTTPS and basicAuth (see the setup posting)
    • MQTT server listening on 127.0.0.1 only
  • zigbee2mqtt
    • add permit_join: false to configuration.yaml




ZigBee@Linux: Integration of zigbee2mqtt with FHEM (mqtt server) on ubuntu server

Sat, 2020-11-14 16:03

After the setup of FHEM and zigbee2mqtt the integration of both components has to be done.

What has to be done?

After reading the excellent documentation of FHEM it is very easy - FHEM can be configured so that it provides an MQTT server.

First you have to add the following line in /opt/zigbee2mqtt/data/configuration.yaml inside the "mqtt:" section:

  client_id: 'zigbee_pi'

Then go to the command prompt of the FHEM webgui and enter the following:

define MQTT2_FHEM_Server MQTT2_SERVER 1883 global
defmod MQTT2_zigbee_pi MQTT2_DEVICE zigbee_pi
attr MQTT2_zigbee_pi IODev MQTT2_FHEM_Server
attr MQTT2_zigbee_pi bridgeRegexp zigbee2mqtt/([A-Za-z0-9]*)[/]?.*:.* "zigbee_$1"
After that you should see something like this:

(you can change the style of the page via "select style" on the left column)

Then you should save:


To create a graph just click on the file which is created for your zigbee device:


and then there should be something like:

Here you can click on "Create SVG plot" and on:

click on "write .gplot file" and your first graph is there... Repeat this and you can get:



Zigbee@Linux: Infrastructure - Setup

Sat, 2020-11-14 01:36

On my way to home automation with zigbee@linux my decision (as I wrote in this posting) was:

  • Hardware
  • OS
    • Ubuntu server
  • Software
    • FHEM (which is the acronym for Freundliche Hausautomation und Energie-Messung = Friendly home automation and energy metering)
      This includes the server with MQTT infrastructure & webserver & gui based on perl
    • zigbee2mqtt
      The server which does the communication with the usb zigbee stick and talking to the MQTT infrastructure based on nodejs

 



The installation of FHEM was quite easy (see here) and the installation of zigbee2mqtt just worked as described here.

  1. Problem:
    FHEM is installed per default without SSL/HTTPS and without user authentication
  2. Problem:
    The communication between both components has to be set up

Here is the solution for problem 1:

Log in to your raspberry and type the following commands:

cd /opt/fhem
mkdir certs
chown fhem:dialout certs
cd certs/
openssl req -new -x509 -nodes -out server-cert.pem -days 3650 -keyout server-key.pem
chown fhem:dialout *
apt install libio-socket-ssl-perl
After that move to the webgui (something like http://yourraspberry:8083) and submit the following commands at the prompt:

attr WEB sslVersion TLSv12:!SSLv3
attr WEB HTTPS 1

And then open your webfrontend with https://yourraspberry:8083.

To add a user:

@bash

echo -n fhem:MYPASSWD | base64

@Webfrontend:

attr WEB basicAuth BASE64String

The second problem will be solved in a future posting. Just wait...



Home automation with linux: How to use zigbee sensors on an ubuntu raspberry pi...

Sun, 2020-11-08 13:19

Towards the end of the year I wanted to start a new project: home automation...

I decided to use a linux system (of course) on a raspberry pi (see the OS installation here) and the zigbee protocol.

The main problem: What packages are needed 

  • to get a communication with zigbee components?
  • to get a website or app to get the data / visualize the data?
  • to set up a daemon/server which controls the devices?

Let's start with the third point: I will try FHEM.

The installation is described here:

https://debian.fhem.de/

wget -qO - http://debian.fhem.de/archive.key | apt-key add -
echo "deb http://debian.fhem.de/nightly/ /" >> /etc/apt/sources.list
apt update
apt upgrade
apt install fhem

After you have followed these steps you can check if FHEM is running with

root@zigbee:/home/ubuntu# netstat -ltnup | grep 8083
tcp 0 0 0.0.0.0:8083 0.0.0.0:* LISTEN 19446/perl

 or just connect to your raspberry via browser: http://zigbee:8083

 
 
And here is a screenshot of the goal I want to achieve (maybe with some graphs added):


Here is a list of the supported hardware:

https://wiki.fhem.de/wiki/Kategorie:Hardware

and a list of all supported protocols:

https://wiki.fhem.de/wiki/System%C3%BCbersicht#Protokolle
