Using a Samba share to back up an iMac with Time Machine

We have one iMac, which I back up to my Synology with Time Machine. Right now, I’m switching to a mini PC with external hard drives and Nextcloud.

To enable backups of the iMac over Time Machine, I had to set up Samba. These are the steps:

Install Samba

sudo apt install samba

Set up a new user to connect to the share where the backup files will be stored.

sudo useradd -m backupuser
sudo passwd backupuser
sudo smbpasswd -a backupuser

My external drives are mounted to /mnt/data. For this backup case I’ve created the following folder structure: /mnt/data/samba/time-machine/imac

Samba configuration happens in the file /etc/samba/smb.conf.

First, some basic configuration so that Samba handles requests from Apple clients correctly:

[global]
## Configuration for Mac OSX
vfs objects = fruit streams_xattr
fruit:metadata = stream
fruit:model = MacSamba
fruit:veto_appledouble = no
fruit:nfs_aces = no
fruit:wipe_intentionally_left_blank_rfork = yes
fruit:delete_empty_adfiles = yes
fruit:posix_rename = yes

Then I set up a share for Time Machine. Note the fruit: lines, which make this share Time Machine compatible.

[timemachine_imac]
path = /mnt/data/samba/time-machine/imac
read only = no
writeable = yes
valid users = backupuser
fruit:time machine = yes
fruit:time machine max size = 1T

Change the ownership of the folder to the backupuser:

sudo chown -R backupuser:backupuser /mnt/data/samba/time-machine/imac
 
And finally, restart Samba to apply the configuration changes:
 
sudo systemctl restart smbd
 
Try to access the share from your client over the SMB protocol, for example with the URL smb://yourserver/timemachine_imac. Log in as the backupuser.
 
On your Mac, open the connection in Finder and log in. After that, you can add this share as a destination for a Time Machine backup.

Zigbee -> OpenHAB -> ha-bridge -> Alexa | Device not recognized by Alexa

I’m using Zigbee switches to control some lights at home. They are imported into OpenHAB, so that I can control them with the OpenHAB mobile app.

To control them by voice over one of the Echo devices, I use ha-bridge, which emulates a Philips Hue bridge, so that Alexa can discover the switches. Until now, everything worked in this combination. Today I tried to set up a new switch for a new light. Setup in OpenHAB worked out of the box, but Alexa did not recognize it.

After some digging I found this comment. The main fix seems to be adding a "00:" in front of the unique ID of each device, because the unique ID now needs 9 bytes. In case the comment gets deleted, here are the steps:

  • sudo systemctl stop ha-bridge
  • cd into the ha-bridge folder
  • cd data
  • sudo nano device.db
  • use CTRL+W and search "unique" to find the unique IDs
  • add "00:" at the beginning of every unique ID
  • save the file
  • sudo systemctl start ha-bridge
  • press "Discover" within the Alexa app/site
  • while Alexa is discovering, press the Link button under "Bridge Devices" within HA-bridge (make sure "Use Link Button" is checked under Bridge Control -> Update Security Settings)
  • the devices should show up within the Alexa app/site
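The manual edit above can also be scripted. A hedged sketch on a throwaway sample file; the field name "uniqueid" is an assumption, so check your device.db first, back it up, and stop ha-bridge before touching the real file:

```shell
# Demo on a sample file; the sample unique ID below is made up.
printf '{"uniqueid":"17:88:01:10:3e:4d:82"}\n' > /tmp/device-sample.db
# Prepend "00:" to every uniqueid value in place.
sed -i 's/"uniqueid":"/"uniqueid":"00:/g' /tmp/device-sample.db
cat /tmp/device-sample.db
# prints {"uniqueid":"00:17:88:01:10:3e:4d:82"}
```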

Setup GitLab and GitLab-Runner with Docker

For learning GitLab CI/CD pipeline stuff, I had the idea to install GitLab on my notebook. To make things easy to setup, I use Docker and on top of it Portainer CE. With this I can use Docker Compose configurations with a WebUI.

When I was writing this blog post, GitLab 17.2 was the current release; I wanted to use GitLab CE. I only want to use this GitLab installation from my notebook. First thing I learned: using localhost is not really possible. The hostname of the notebook is thinkpad, so the configurations use this DNS name and the GitLab installation will be available at https://thinkpad/.

To set up the GitLab CE installation and a GitLab-Runner, I use the following Docker Compose configuration.
Warning: starting the GitLab instance can take some time!

version: '3.6'
services:
  gitlab:
    image: gitlab/gitlab-ce:17.2.1-ce.0
    container_name: gitlab
    restart: always
    hostname: 'thinkpad'
    networks:
      - gitlab-network
    environment:
      GITLAB_OMNIBUS_CONFIG: |
        # Add any other gitlab.rb configuration here, each on its own line
        external_url 'https://thinkpad'
    ports:
      - '80:80'
      - '443:443'
      - '22:22'
    volumes:
      - gitlab-config:/etc/gitlab
      - gitlab-logs:/var/log/gitlab
      - gitlab-data:/var/opt/gitlab
    shm_size: '256m'

  gitlab-runner:
    image: gitlab/gitlab-runner:latest
    container_name: gitlab-runner
    restart: always
    networks:
      - gitlab-network
    volumes:
      - gitlab-runner-config:/etc/gitlab-runner
      - /var/run/docker.sock:/var/run/docker.sock

volumes:
  gitlab-config:
  gitlab-logs:
  gitlab-data:
  gitlab-runner-config:

networks:
  gitlab-network:

After that, it was not possible to register the GitLab-Runner with the GitLab instance. When I tried to register, I got errors like
x509: certificate signed by unknown authority
or
tls: failed to verify certificate: x509: certificate relies on legacy Common Name field, use SANs instead

I had to create my own X.509 certificates and make them available in the GitLab and GitLab-Runner containers (or more precisely, inside the volumes that contain the configurations).

The paths used in the commands below are based on the Docker Compose configuration above. If you use different volume names, you have to adjust them below. I needed to execute the commands as root, so a

sudo su -

was done first. First, the new certificate is created; the important part is the subjectAltName configuration.

cd /var/lib/docker/volumes/gitlab_gitlab-config/_data/ssl/
openssl genrsa -out thinkpad-ca.key 2048
openssl req -new -x509 -days 365 -key thinkpad-ca.key -subj "/C=CN/ST=GD/L=SZ/O=Acme, Inc./CN=Acme Root CA" -out thinkpad-ca.crt
openssl req -newkey rsa:2048 -nodes -keyout thinkpad.key -subj "/C=CN/ST=GD/L=SZ/O=Acme, Inc./CN=*.thinkpad" -out thinkpad.csr
openssl x509 -req -extfile <(printf "subjectAltName=DNS:thinkpad") -days 365 -in thinkpad.csr -CA thinkpad-ca.crt -CAkey thinkpad-ca.key -CAcreateserial -out thinkpad.crt
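The SAN handling can be dry-run in a scratch directory before touching the real volume. Everything below is throwaway and only mirrors the commands above (with the subjectAltName in an -extfile on disk instead of process substitution):

```shell
set -e
# Scratch directory, nothing here touches the GitLab volume.
mkdir -p /tmp/cert-dry-run && cd /tmp/cert-dry-run
printf "subjectAltName=DNS:thinkpad\n" > san.cnf
openssl genrsa -out thinkpad-ca.key 2048
openssl req -new -x509 -days 365 -key thinkpad-ca.key -subj "/CN=Test Root CA" -out thinkpad-ca.crt
openssl req -newkey rsa:2048 -nodes -keyout thinkpad.key -subj "/CN=*.thinkpad" -out thinkpad.csr
openssl x509 -req -extfile san.cnf -days 365 -in thinkpad.csr -CA thinkpad-ca.crt -CAkey thinkpad-ca.key -CAcreateserial -out thinkpad.crt
# The signed certificate should now contain the DNS entry:
openssl x509 -in thinkpad.crt -noout -text | grep DNS:
```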

Check that the right hostname is configured in the certificate (this check works once GitLab is serving the new certificate):

openssl s_client -connect thinkpad:443 </dev/null 2>/dev/null | openssl x509 -noout -text | grep DNS:

In my case it returned

DNS:thinkpad

 

Now we have to link the certificate files into place in the GitLab configuration.

ln -s /var/lib/docker/volumes/gitlab_gitlab-config/_data/ssl/thinkpad.crt thinkpad.crt
ln -s /var/lib/docker/volumes/gitlab_gitlab-config/_data/ssl/thinkpad.key thinkpad.key

Restart the GitLab and GitLab-Runner instances.

When the GitLab instance is back online, try to register the GitLab-Runner.

docker exec -it gitlab-runner gitlab-runner register

Runtime platform arch=amd64 os=linux pid=52 revision=9882d9c7 version=17.2.1
Running in system mode.

Enter the GitLab instance URL (for example, https://gitlab.com/):
https://thinkpad/
Enter the registration token:
glrt-somevalues
Verifying runner... is valid runner=yaoFUzzEE
Enter a name for the runner. This is stored only in the local config.toml file:
[b018679db44f]: instance
Enter an executor: custom, docker, docker-windows, docker-autoscaler, shell, ssh, parallels, virtualbox, docker+machine, kubernetes, instance:
docker
Enter the default Docker image (for example, ruby:2.7):
alpine:latest
Runner registered successfully. Feel free to start it, but if it's running already the config should be automatically reloaded!

Configuration (with the authentication token) was saved in "/etc/gitlab-runner/config.toml"

After that, the GitLab-Runner is shown in GitLab as online.

We need to give the GitLab-Runner the certificates for the CA and the GitLab instance. Without them, the connection cannot be established:

cd /var/lib/docker/volumes/gitlab_gitlab-runner-config/_data/certs
cp /var/lib/docker/volumes/gitlab_gitlab-config/_data/ssl/thinkpad-ca.crt ca.crt
cp /var/lib/docker/volumes/gitlab_gitlab-config/_data/ssl/thinkpad.crt .
cp /var/lib/docker/volumes/gitlab_gitlab-config/_data/ssl/thinkpad.key .

After that, restart the GitLab-Runner with

docker restart gitlab-runner

 

 

If you log in to your GitLab instance for the first time and wonder where to find the initial root password (and yes, the username is root), it is stored here:

/var/lib/docker/volumes/gitlab_gitlab-config/_data/initial_root_password

Setup Docker in Fedora 31

For setting up Docker on Fedora 31, I used the documentation found here: https://docs.docker.com/install/linux/docker-ce/fedora/ 

The following problems occurred:

1. URL of the Docker repo seems to be wrong

The documentation says to add https://download.docker.com/linux/fedora/docker-ce.repo as the repo. With this, errors occurred and docker-ce could not be installed. I had to use the following URL, with which I could install it: https://download.docker.com/linux/fedora/31/x86_64/stable

sudo dnf config-manager --add-repo https://download.docker.com/linux/fedora/31/x86_64/stable

2. Error while executing docker run

After installation I tried Docker with the default example:

sudo docker run hello-world

Instead of showing the expected output it showed me the following error message:

docker: Error response from daemon: OCI runtime create failed: container_linux.go:346: starting container process caused "process_linux.go:297: applying cgroup configuration for process caused \"open /sys/fs/cgroup/docker/cpuset.cpus.effective: no such file or directory\"": unknown.

After searching around I found one bug mentioning the same error: https://github.com/microsoft/vscode-docker/issues/1402

In there it was pointing to a solution: https://fedoraproject.org/wiki/Common_F31_bugs#Docker_package_no_longer_available_and_will_not_run_by_default_.28due_to_switch_to_cgroups_v2.29

sudo grubby --update-kernel=ALL --args="systemd.unified_cgroup_hierarchy=0"

The problem is that Fedora 31 enabled cgroups v2 by default, while Docker still uses cgroups v1. Note that a reboot is needed after changing the kernel arguments with grubby.
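A quick way to see which cgroup hierarchy is currently active (assuming a systemd-based system with /sys/fs/cgroup mounted):

```shell
# "cgroup2fs" means cgroups v2 is active; "tmpfs" indicates the v1 hierarchy.
stat -fc %T /sys/fs/cgroup/
```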

Connect Nokia 8 with Ubuntu 18.04 for file transfer

Install packages for Media Transfer Protocol

sudo apt-get install mtpfs mtp-tools

Check if your machine has the file /etc/udev/rules.d/69-libmtp.rules.

sudo less /etc/udev/rules.d/69-libmtp.rules

If not, copy it from /lib/udev/rules.d/69-libmtp.rules.

sudo cp /lib/udev/rules.d/69-libmtp.rules /etc/udev/rules.d/69-libmtp.rules

Now open the copied file with root rights and add the following lines:

# Nokia 8
ATTR{idVendor}=="2e04", ATTR{idProduct}=="c025", SYMLINK+="libmtp-%k", MODE="660", GROUP="disk", ENV{ID_MTP_DEVICE}="1", ENV{ID_MEDIA_PLAYER}="1"

At the end, my file contained the following lines:

# Parrot Bebop Drone
ATTR{idVendor}=="19cf", ATTR{idProduct}=="5038", SYMLINK+="libmtp-%k", MODE="660", GROUP="audio", ENV{ID_MTP_DEVICE}="1", ENV{ID_MEDIA_PLAYER}="1"
# Isabella Her Prototype
ATTR{idVendor}=="0b20", ATTR{idProduct}=="ddee", SYMLINK+="libmtp-%k", MODE="660", GROUP="audio", ENV{ID_MTP_DEVICE}="1", ENV{ID_MEDIA_PLAYER}="1"

# Nokia 8
ATTR{idVendor}=="2e04", ATTR{idProduct}=="c025", SYMLINK+="libmtp-%k", MODE="660", GROUP="disk", ENV{ID_MTP_DEVICE}="1", ENV{ID_MEDIA_PLAYER}="1"

# Autoprobe vendor-specific, communication and PTP devices
ENV{ID_MTP_DEVICE}!="1", ENV{MTP_NO_PROBE}!="1", ENV{COLOR_MEASUREMENT_DEVICE}!="1", ENV{libsane_matched}!="yes", ATTR{bDeviceClass}=="00|02|06|ef|ff", PROGRAM="mtp-probe /sys$env{DEVPATH} $attr{busnum} $attr{devnum}", RESULT=="1", SYMLINK+="libmtp-%k", MODE="660", GROUP="audio", ENV{ID_MTP_DEVICE}="1", ENV{ID_MEDIA_PLAYER}="1"

LABEL="libmtp_rules_end"

After changing the rules, reload udev with sudo udevadm control --reload-rules or replug the phone. Information on how to set up udev can be found here: https://wiki.ubuntuusers.de/MTP/

Wildfly-Swarm – Database

Tried this with Wildfly Swarm version 2017.8.1.

Adding the Oracle driver

In the project pom.xml, add the driver as a dependency:

<dependency>
   <groupId>com.oracle</groupId>
   <artifactId>ojdbc7</artifactId>
   <version>12.1.0.2</version>
</dependency>

We have to tell Wildfly-Swarm that there is an Oracle driver to use. Therefore we add a module.xml to /src/main/resources/modules/com/oracle/ojdbc7/main/:

<?xml version="1.0" encoding="UTF-8"?>
<module xmlns="urn:jboss:module:1.3" name="com.oracle.ojdbc7">
   <resources>
      <artifact name="com.oracle:ojdbc7:12.1.0.2"/>
   </resources>
   <dependencies>
      <module name="javax.api"/>
      <module name="javax.transaction.api"/>
      <module name="javax.servlet.api" optional="true"/>
   </dependencies>
</module>

Add the datasource fraction as dependency:

<dependency>
   <groupId>org.wildfly.swarm</groupId>
   <artifactId>datasources</artifactId>
   <version>2017.8.1</version>
</dependency>

Define datasources

The files that define the datasources are placed in /src/main/webapp/WEB-INF/ and follow the naming convention *-ds.xml.

example1-ds.xml:

<?xml version="1.0" encoding="UTF-8"?>
<datasources  
   xmlns="http://www.jboss.org/ironjacamar/schema"    
   xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"    
   xsi:schemaLocation="http://www.jboss.org/ironjacamar/schema http://docs.jboss.org/ironjacamar/schema/datasources_1_0.xsd">
   <!-- The datasource is bound into JNDI at this location. We reference
        this in META-INF/persistence.xml -->
   <datasource
      jndi-name="java:jboss/datasources/example1"
      pool-name="example1-pool"
      enabled="true"
      use-java-context="true">
      <connection-url>jdbc:oracle:thin:@serverUrl1:1521:SID</connection-url>
      <driver>oracle</driver>
      <security>
         <user-name>username1</user-name>
         <password>password1</password>
      </security>
   </datasource>
</datasources>

example2-ds.xml:

<?xml version="1.0" encoding="UTF-8"?>
<datasources
   xmlns="http://www.jboss.org/ironjacamar/schema"
   xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xsi:schemaLocation="http://www.jboss.org/ironjacamar/schema http://docs.jboss.org/ironjacamar/schema/datasources_1_0.xsd">
   <!-- The datasource is bound into JNDI at this location. We reference
        this in META-INF/persistence.xml -->
   <datasource
      jndi-name="java:jboss/datasources/example2"
      pool-name="example2-pool"
      enabled="true"
      use-java-context="true">
      <connection-url>jdbc:oracle:thin:@serverUrl2:1521:SID</connection-url>
      <driver>oracle</driver>
      <security>
         <user-name>username2</user-name>
         <password>password2</password>
      </security>
   </datasource>
</datasources>

Define the persistence unit

Place a persistence.xml into /src/main/resources/META-INF/. Here is an example with two datasources:

<?xml version="1.0" encoding="UTF-8"?>
<persistence
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    version="2.1"
    xmlns="http://xmlns.jcp.org/xml/ns/persistence"
    xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/persistence http://xmlns.jcp.org/xml/ns/persistence/persistence_2_1.xsd">  
   <persistence-unit   
      name="example1PU"   
      transaction-type="JTA">
      <jta-data-source>java:jboss/datasources/example1</jta-data-source>
      <exclude-unlisted-classes>false</exclude-unlisted-classes>
      <properties>
         <property name="javax.persistence.schema-generation.database.action" value=""/>
         <property name="javax.persistence.schema-generation.create-source" value="metadata"/>
         <property name="javax.persistence.schema-generation.drop-source" value="metadata"/>
      </properties>
   </persistence-unit>
   <persistence-unit
      name="example2PU"
      transaction-type="JTA">
      <jta-data-source>java:jboss/datasources/example2</jta-data-source>
      <exclude-unlisted-classes>false</exclude-unlisted-classes>
      <properties>
         <property name="javax.persistence.schema-generation.database.action" value=""/>
         <property name="javax.persistence.schema-generation.create-source" value="metadata"/>
         <property name="javax.persistence.schema-generation.drop-source" value="metadata"/>
      </properties>
   </persistence-unit>
</persistence>

Reference the persistence unit

In your EJBs you can reference the persistence units:

@PersistenceContext(unitName = "example1PU" )
EntityManager em1;

@PersistenceContext(unitName = "example2PU" )
EntityManager em2;

Wildfly-Swarm – Logging

Tried this with Wildfly Swarm version 2017.8.1. Seems like in newer versions a new approach is used.

I added the logging artifact in the maven build file:

<dependency>
   <groupId>org.wildfly.swarm</groupId>
   <artifactId>logging</artifactId>
   <version>2017.8.1</version>
</dependency>

In /src/main/resources/, a file logging.properties with the following content is set up:

logger.level=INFO
logger.handlers=FILE

handler.FILE=org.jboss.logmanager.handlers.FileHandler
handler.FILE.level=INFO
handler.FILE.formatter=PATTERN
handler.FILE.properties=append,fileName,autoFlush
handler.FILE.append=false
handler.FILE.autoFlush=true
handler.FILE.fileName=./myapp.log

formatter.PATTERN=org.jboss.logmanager.formatters.PatternFormatter
formatter.PATTERN.properties=pattern
formatter.PATTERN.pattern=%d{HH:mm:ss,SSS} %-5p [%c] %s%E%n

In your code, you set up and use org.jboss.logging.Logger instances like this:

Logger LOG = Logger.getLogger(MyClass.class);
LOG.info("Example log message");
LOG.error("Something strange happened", new Exception());

Wildfly-Swarm – Basic setup

Tried this with Wildfly Swarm version 2017.8.1.

In the maven pom.xml the following entries are added.

<dependencyManagement>
   <dependencies>
      <dependency>
         <groupId>org.wildfly.swarm</groupId>
         <artifactId>bom-all</artifactId>
         <version>2017.8.1</version>
         <scope>import</scope>
         <type>pom</type>
      </dependency>
   </dependencies>
</dependencyManagement>

 

For building the uberjar or running the application, the Wildfly-Swarm Maven plugin needs to be added.

<plugins>
   <plugin>
      <groupId>org.wildfly.swarm</groupId>
      <artifactId>wildfly-swarm-plugin</artifactId>
      <version>${version.wildfly.swarm}</version>
      <executions>
         <execution>
            <goals>
               <goal>package</goal>
            </goals>
         </execution>
      </executions>
   </plugin>
</plugins>

 

The uberjar is created when you call the package goal:

mvn package

 

The uberjar can be found in the target folder. Execute it with java -jar:

java -jar ./target/myapp-swarm.jar

 

If you want to debug your code while running it from Maven, add the following parameter to your Maven command and start a remote debugging session in your IDE. Beware that the Wildfly-Swarm process waits until a debugger connects. Here we open a debug port on 8888.

mvn wildfly-swarm:run -Ddebug=8888

Sharing mouse and keyboard on Ubuntu with Synergy

For controlling my desktop (keyboard and mouse) from my notebook, I’m using Synergy. This means I’m using the keyboard and mouse of my notebook and can seamlessly control both notebook and desktop with them. The mouse pointer moves from the notebook display to the desktop display as if it were one computer.

On the notebook it will run as a server, on the desktop as a client. The description here is written for Ubuntu 16.04.

Notebook / Server

A simple solution: a shell script that kills old running synergys processes and starts a new one. It is placed in the home folder and named start-synergy.sh.

#!/bin/bash
killall -r synergys
synergys -c ~/synergy.conf -d WARNING --daemon --log /var/log/synergy.log --restart
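The synergy.conf referenced above is not shown in this post. A minimal sketch, assuming the screens are named notebook and desktop (replace with your actual hostnames) and the desktop sits to the right of the notebook:

```
section: screens
	notebook:
	desktop:
end

section: links
	notebook:
		right = desktop
	desktop:
		left = notebook
end
```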

Open the Dash and search for "Startup Applications". Add a new entry, name it "Synergy" and add the shell script like this: /home/yourusername/start-synergy.sh

 

Desktop / Client

On the client side I use a feature of LightDM (the display manager used in Ubuntu 16.04) to execute commands in special cases (called system hooks, see https://wiki.ubuntu.com/LightDM for some infos).

Create a new config file:

sudo joe /etc/lightdm/lightdm.conf.d/50-synergy.conf

and add the following lines:

[SeatDefaults]
display-setup-script=/usr/bin/synergyc --daemon -d WARNING --log /var/log/synergy.log serveraddress

serveraddress can be the IP address or hostname of your synergy server. In my case it is ZenbookLAN.fritz.box:

[SeatDefaults]
display-setup-script=/usr/bin/synergyc --daemon -d WARNING --log /var/log/synergy.log ZenbookLAN.fritz.box

Save the file and log out. If the server is running on your notebook (or whatever machine you use as the server), try to move the mouse pointer to the client. If it is not working, restart the client machine; then it should work.

In my case, this configuration had a problem with screen sharing. My desktop has 4 displays connected, but only the upper 2 could be accessed. After logging in on the desktop (client) I have to restart the synergyc process, then it works. Right now I could not find a different solution than to write a little script that kills the running synergyc process and restarts it. Create a file start-synergy.sh in your home directory on the desktop (client):

#!/bin/bash
sudo killall synergyc
/usr/bin/synergyc --daemon -d WARNING --log /var/log/synergy.log serveraddress

serveraddress can be the IP address or hostname of your synergy server. In my case it is ZenbookLAN.fritz.box.

Now, every time you log in on the desktop (client), start this script manually (you have to enter your password because sudo is needed for killing the process).