/dev/blog/ID10T

Container Linux: Get a TTY without password

• coreos

I’ve been experimenting a lot with CoreOS Container Linux (formerly just CoreOS). One issue I ran into regularly was getting my cloud-config onto the server after an initial install: bare metal servers and ordinary VPSs have no metadata drive or anything similar.

The easy solution: bypass authentication. If you have access to a VNC console or something similar, you can add the coreos.autologin=tty0 kernel option (or just omit the =tty0 to enable autologin on all consoles) to get to a login shell directly. If you don’t know where to put that: it goes on the kernel command line in Grub. Here’s an image of it.

[Image: Container Linux Grub menu]

So: press e in the Grub menu, append the option after the existing options (in my case I put it after $linux_cmdline) and press Ctrl+x or F10 afterwards to boot with it.
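For orientation, the edited kernel line might end up looking roughly like this (the ellipsis stands for whatever options your entry already carries):

  linux ... $linux_cmdline coreos.autologin=tty0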

That’s it. The console will now drop you into a login shell as the core user and you’ll be able to curl your cloud-config.
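From that shell you can fetch and apply the config. A hypothetical example (the URL is made up; coreos-cloudinit ships with Container Linux):

  curl -o cloud-config.yml https://example.com/cloud-config.yml
  sudo coreos-cloudinit -validate -from-file cloud-config.yml
  sudo coreos-cloudinit -from-file cloud-config.yml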

Simplifying cloud-config creation for clusters

• coreos and cloud

I’m still experimenting with container orchestration. Currently I’m building a three-node CoreOS cluster with Kubernetes on top of it, connected over the Internet. One problem I kept struggling with was keeping my cloud-configs in sync. Most of the configuration settings were identical or nearly identical on all three nodes; still, every small change had to be applied to all three files. Forgetting one file or mistyping led to errors and unnecessary debugging sessions.

This weekend I decided I’d had enough of it. I created a small Python script, cloud-config-creator, to simplify working on several nearly identical configuration files. By iterating over a set of node values and one master template, the script creates the cloud-configs for all nodes. It’s little more than a wrapper around the Jinja2 templating engine, but I still find it incredibly useful. That’s why I want to add a bit more explanation around it.
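To illustrate how thin such a wrapper can be, here’s a minimal sketch of the idea (not the actual script; the values.yml layout with a top-level nodes list is an assumption):

  #!/usr/bin/env python3
  # Minimal sketch: render one Jinja2 master template once per node.
  import yaml
  from jinja2 import Environment, FileSystemLoader

  with open("values.yml") as f:
      values = yaml.safe_load(f)

  env = Environment(loader=FileSystemLoader("."))
  template = env.get_template("master.tmpl")

  # write one rendered cloud-config per node into out/
  for node in values["nodes"]:
      with open("out/{}.yml".format(node["name"]), "w") as f:
          f.write(template.render(node=node))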

Prerequisites

You will need Python. I used Python 3 and never tested the script with 2.x. Furthermore, you need to install the pyyaml and jinja2 modules. Before starting to use cloud-config-creator you should have basic knowledge of how to use templates. If you’ve ever worked with a templating engine (e.g. for consul-template or Jekyll), you’ll quickly feel at home; otherwise I recommend the Jinja2 documentation.
You should also know how to write YAML. My script uses PyYAML, which isn’t YAML 1.2 compatible (yet), so you’ll need to stick to YAML 1.1.
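Installing the two modules typically boils down to:

  pip3 install pyyaml jinja2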

Script usage

./cloud-config --templatefile master.tmpl --valuesfile values.yml --outpath out/ --includepath includes/
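For illustration, a trimmed-down values file and master template could look like this (keys and values are made up, not the script’s required schema):

  # values.yml
  nodes:
    - name: node01
      ip: 203.0.113.1
    - name: node02
      ip: 203.0.113.2

  # master.tmpl
  #cloud-config
  hostname: {{ node.name }}
  coreos:
    etcd2:
      advertise-client-urls: http://{{ node.ip }}:2379

Running the script with these files would write out/node01.yml and out/node02.yml, each with its node’s values filled in.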

Running Caddy and Go on ARMv6 Alpine Linux

• docker, go, raspberry pi, and linux

My goal was to compile Caddy for my old Raspberry Pi 1 Model B. Caddy only provides an ARMv7 binary, which isn’t compatible with the original Pi’s ARMv6. My Raspi runs Hypriot, the Docker distribution for the Pi, so I wanted Caddy to run in a container as well. I chose my own Alpine Linux base image as its foundation.

As Caddy is written in Go, compiling it from source should be very easy:

  1. go get github.com/mholt/caddy/caddy
  2. cd into your website’s directory
  3. Run caddy (assuming $GOPATH/bin is in your $PATH)
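In console form (the website directory is just an example):

  go get github.com/mholt/caddy/caddy
  cd ~/www/example.com
  caddy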

While trying to get Go running in my Alpine container I encountered a small problem.
Go officially provides an ARMv6 binary package, which runs natively on the Pi. But when I tried to run it in my Alpine container, a rather nondescript error blocked me:

/bin/sh: go: not found

I admit it took me longer than expected to solve this problem. After stracing, debugging and a lot of Internet research without result, a simple file gave me the decisive hint:

/go # file $(which go)
/usr/local/go/bin/go: ELF 32-bit LSB executable, ARM, EABI5 version 1 (SYSV), dynamically linked, interpreter /lib/ld-linux-armhf.so.3, not stripped

Go looks for /lib/ld-linux-armhf.so.3, which isn’t available on a minimal Alpine Linux installation: Alpine is built on musl libc, while Go’s binary release is linked against glibc. Running apk add libc6-compat finally solved this problem.
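Put together, a Dockerfile for such a container could look roughly like this (base image and Go version are placeholders, not my exact setup):

  FROM armhf/alpine:latest
  # the glibc compatibility layer provides /lib/ld-linux-armhf.so.3
  RUN apk add libc6-compat
  # official ARMv6 Go release tarball, downloaded beforehand (version is a placeholder)
  ADD go1.7.linux-armv6l.tar.gz /usr/local/
  ENV GOPATH=/go PATH=$PATH:/usr/local/go/bin:/go/bin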

SSH: Disable Host Key Checking temporarily

• linux

A couple of days ago I found an easy solution to a problem I had ignored way too long. When working with virtual servers it’s a common occurrence that you test something, it doesn’t go as planned and the server doesn’t boot properly anymore. Most VPS providers offer some kind of Recovery OS or Rescue System for those situations. Just boot the server into this OS, revert your faulty changes, reboot the system and you’re set to nuke your server again.
Sadly, I always had a small problem: as the Recovery OS uses a different SSH host key, you get a warning when connecting to the server:

○ → ssh testserver
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@    WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!     @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that a host key has just been changed.
The fingerprint for the RSA key sent by the remote host is
4b:15:69:f9:0d:8d:e8:2e:f6:1d:d8:5a:c0:a2:9c:31.
Please contact your system administrator.
Add correct host key in /home/m3adow/.ssh/known_hosts to get rid of this message.
Offending RSA key in /home/m3adow/.ssh/known_hosts:27
  remove with: ssh-keygen -f "/home/m3adow/.ssh/known_hosts" -R testserver.adminswerk.de
RSA host key for testserver.adminswerk.de has changed and you have requested strict checking.
Host key verification failed.
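Disabling the check for just such a throwaway session is possible with two client options:

  ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null testserver

StrictHostKeyChecking=no skips the verification, while pointing UserKnownHostsFile at /dev/null keeps the recovery system’s key out of your real known_hosts file, so the next regular connection is still checked against the proper host key.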

Video: Docker's '--userns-remap' feature explained

• docker

I have been up to my ears in work and projects; that’s why I haven’t been posting a lot. I initially wanted to create a series about automatic service discovery and configuration with Consul, Registrator and consul-template, but decided to switch to Rancher in the process as I encountered too much hassle and too many workarounds.

But that’s not the topic of this post. I recently created a short video on asciinema to further explain Docker’s --userns-remap feature, which significantly improves container security.

What is it, you ask? The original description from the docker-daemon manpage:

--userns-remap=default|uid:gid|user:group|user|uid

Enable user namespaces for containers on the daemon. Specifying “default” will cause a new user and group to be created to handle UID and GID range remapping for the user namespace mappings used for contained processes. Specifying a user (or uid) and optionally a group (or gid) will cause the daemon to lookup the user and group’s subordinate ID ranges for use as the user namespace mappings for contained processes.
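As a minimal illustration (the range below is an example value): with default, the daemon sets up a dockremap user whose subordinate ID ranges live in /etc/subuid and /etc/subgid:

  # /etc/subuid and /etc/subgid (example range)
  dockremap:100000:65536

  # start the daemon with remapping enabled (Docker 1.10+)
  docker daemon --userns-remap=default

With that in place, root inside a container maps to the unprivileged UID 100000 on the host.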

Here’s my video explaining it:

[asciinema recording]
