Kibana on ARM & Docker

Dockerfile

FROM armv7/armhf-debian

# set up apt keys and install prerequisites
ENV DEBIAN_FRONTEND noninteractive
RUN apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 9165938D90FDDD2E && \
apt-get -y update && \
apt-get -y upgrade && \
apt-get -y --force-yes install sudo npm wget \
                               gzip unzip git

RUN wget http://node-arm.herokuapp.com/node_latest_armhf.deb && \
dpkg -i node_latest_armhf.deb && \
rm node_latest_armhf.deb

RUN wget --quiet https://github.com/elastic/kibana/archive/v4.4.2.zip && \
unzip v4.4.2.zip && \
rm v4.4.2.zip

WORKDIR /kibana-4.4.2
RUN npm install && \
npm install gridster

ENV PATH /kibana-4.4.2/bin:$PATH
CMD ["/kibana-4.4.2/bin/kibana"]

EXPOSE 5601/tcp
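
For reference, building and running the image would go something like this (the tag name and detached run are my choices, not from the original post); Kibana 4 still needs an Elasticsearch URL configured in config/kibana.yml before it is useful:

$ docker build -t kibana-armhf:4.4.2 .
$ docker run -d --name kibana -p 5601:5601 kibana-armhf:4.4.2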

DaemonSet in version extensions/v1beta1 cannot be handled as a DaemonSet

Trying to use a DaemonSet with K8s 1.1.2, I get the following error:

$ kubectl create -f node-monitor-ds.yaml
Error from server: error when creating "node-monitor-ds.yaml": DaemonSet in version extensions/v1beta1 cannot be handled as a DaemonSet: json: cannot unmarshal object into Go value of type string

Found this discussion with the answer:  https://github.com/kubernetes/kubernetes/issues/18130

"I had to add selector: {} to the spec."

Made this change to add an empty selector:

rcsdiff -c3 node-monitor-ds.yaml
===================================================================
RCS file: RCS/node-monitor-ds.yaml,v
retrieving revision 1.1
diff -c3 -r1.1 node-monitor-ds.yaml
*** node-monitor-ds.yaml 2016/03/13 18:02:34 1.1
--- node-monitor-ds.yaml 2016/03/13 18:03:16
***************
*** 3,8 ****
--- 3,9 ----
  metadata:
    name: "node-monitor"
  spec:
+   selector: {}
    template:
      metadata:
        name: "node-monitor"

Good to go:

$ kubectl create -f node-monitor-ds.yaml
daemonset "node-monitor" created

 

[WRN] OSD near full

Got this from Ceph:

2016-03-02 18:27:50.058991 osd.1 [WRN] OSD near full (90%)

Reweight the cluster based on utilization:

# ceph osd reweight-by-utilization
SUCCESSFUL reweight-by-utilization: average 0.463630, overload 0.556356. reweighted: osd.1 [1.000000 -> 0.543259], osd.2 [1.000000 -> 0.699539], osd.3 [1.000000 -> 0.595016], osd.4 [1.000000 -> 0.714600], osd.6 [1.000000 -> 0.648270],

Cluster begins to rebalance:

pgmap v3451806: 672 pgs, 4 pools, 367 GB data, 286 kobjects
834 GB used, 871 GB / 1797 GB avail
2928/707598 objects degraded (0.414%)
236256/707598 objects misplaced (33.388%)
478 active+clean
111 active+remapped+wait_backfill
22 active+recovering+degraded
22 active+remapped+backfilling
19 active+recovery_wait+degraded
9 remapped+peering
5 peering
4 activating+remapped
2 activating+degraded
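
To keep an eye on the rebalance and confirm that utilization evens out, watching the cluster status works; ceph osd df (available in recent releases) shows per-OSD usage directly:

# watch ceph -s
# ceph osd df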

Problem Solved:

root@node7:~# df -h /var/lib/ceph/osd/ceph-1/
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 124G 74G 44G 63% /ceph0

Ceph Cluster on armv7 – Benchmarking v1

 Benchmark:  2/16/2016

Cluster Configuration:

  • mdsmap: 1/1/1 up {0=node12=up:active}, 1 up:standby (node12 is Orange Pi Plus)
  • monmap: 3 mons … quorum 0,1,2 node14,node15,blotter (node14 & node15 are Orange Pi Plus, blotter is Intel i7)
  • osdmap: 10 osds: 10 up, 10 in (4xUSB Flash, 3×3.5″ SATA, 1×2.5″ SATA, 2xSSD)
  • pgmap: 664 pgs, 3 pools, 328 GB data, 129 kobjects
    749 GB used, 956 GB / 1797 GB avail

In this benchmark, the two SSDs are mixed in as standard OSDs alongside the slower devices:

Results:

Test 1: Use dd for sustained read and write of large files
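
The exact invocations aren't recorded here, but a representative run against the CephFS mount looks something like this (the path and sizes are illustrative):

$ dd if=/dev/zero of=/mnt/cephfs/ddtest bs=1M count=1024 conv=fdatasync   # sustained write
$ dd if=/mnt/cephfs/ddtest of=/dev/null bs=1M                             # sustained read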

[Chart: dd sustained read/write results]

Test 2: sql_bench

Tests are sorted by time, and broken into 2 charts to show detail

[Charts: sql_bench test times]

 

Test 3: iozone

Charting for 64K “record size”.  (I got bored charting so only posting a few)

[Chart: iozone results at 64K record size]
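
Again, the precise options aren't recorded; a run along these lines produces the 64K record-size numbers (file size and target path are illustrative):

$ iozone -a -r 64k -s 512m -f /mnt/cephfs/iozone.tmp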

 

Discussion:

I probably should have added a delay between iozone tests.

Next:  With SSD configured as a cache tier

CephFS volume support for Kubernetes – Part 2

(← part 1 can be found HERE)

Testing a K8s pod mounting a CephFS export subdirectory:

Define the test Pod:

This pod executes the “pause” container so I have time to log in and verify the mount is correct.  In the “cephfs” volume definition you can see the attribute path: “/Media”, which specifies the CephFS subdirectory to mount into the container.

$ cat cephfs.yaml

apiVersion: v1
kind: Pod
metadata:
  name: cephfs2
spec:
  containers:
  - name: cephfs-rw
    image: docker.lsd25.net:443/rpi/pause
    volumeMounts:
    - mountPath: "/mnt/cephfs"
      name: cephfs
  imagePullSecrets:
    - name: "dockerregistrykey"
  volumes:
  - name: cephfs
    cephfs:
      monitors:
        - "192.168.2.224:6789"
        - "192.168.2.45:6789"
      user: admin
      secretRef:
        name: ceph-secret
      readOnly: false
      path: "/Media"

Start the pod:

$ kubectl create -f cephfs.yaml
pod "cephfs2" created

$ kubectl get pods
NAME               READY     STATUS              RESTARTS   AGE
cephfs2            0/1       ContainerCreating   0          45s

 

Verify the mount:

$ kubectl exec -it cephfs2 df /mnt/cephfs
Filesystem                                   1K-blocks      Used  Available Use% Mounted on
192.168.2.224:6789,192.168.2.45:6789:/Media 1884667904 881512448 1003155456  47% /mnt/cephfs
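
Since the volume was defined with readOnly: false, a quick write test confirms it really is read-write (the file name is arbitrary):

$ kubectl exec cephfs2 touch /mnt/cephfs/k8s-write-test
$ kubectl exec cephfs2 ls /mnt/cephfs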

There it is!  CephFS subdirectory support in K8s on armv7.

CephFS Cache-Tiering

I was telling a friend about my RPi Ceph cluster.  We got to the point where I was saying how proud I was that I had many different types of physical media in my cluster.

  • USB Flash Drives
  • microSD
  • SATA 3.5″ 7200RPM (12v + 5v)
  • SATA 2.5″ (5v)
  • SATA 3.5″ SSD

When I got to the SSD drives he said “You are wasting your SSD by mixing them in with all that slow storage”.  In his view they would be better utilized as a caching layer.  So I went in search of a way to do that.

Cache Tiering

Holy sh!t, Ceph is awesome:  It does that.

The Project

So obviously, I need it in my cluster.

I have 2 x 128GB SATA SSDs, and I’ll use both of them for the cache tier.  All the slower USB and spinning disks will be in the storage tier.

Looks like “Writeback Mode” and “Read-forward Mode” are right for me?
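
The rough shape of the commands, assuming a cache pool named "cache" layered over an existing data pool named "data" (the pool names are placeholders, not my actual pools):

# ceph osd tier add data cache
# ceph osd tier cache-mode cache writeback
# ceph osd tier set-overlay data cache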

Also, it’s always nice to have numbers to show success, so I’ll run a few benchmarks before and after.

CephFS volume support for Kubernetes

I have been using CephFS with my K8s cluster since the beginning, but until now the shared filesystem has been mounted in a “standard location” on each node as part of the cluster startup procedure, and the “hostPath” volume type was then used to map a subdirectory into the container.
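
For comparison, the old approach looked roughly like this in a pod spec, relying on CephFS already being kernel-mounted at a fixed path on every node (the paths below are illustrative, not my actual layout):

  volumes:
  - name: media
    hostPath:
      path: "/mnt/cephfs/Media"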

Today I found source code suggesting that K8s added “cephfs” as a supported volume type.  See: k8s cephfs volume code and example.

When I tried it I had only partial success.  I was able to mount the entire CephFS filesystem into a container (which is awesome), but found that the K8s version I was using (GitVersion:”v1.1.2-dirty”) does not support mounting a subdirectory of the larger CephFS export:

...
...
root@node0 in ~/master
$ kubectl create -f cephfs.yaml
error validating "cephfs.yaml": error validating data: found invalid field path for v1.CephFSVolumeSource; if you choose to ignore these errors, turn validation off with --validate=false

Although the example code (even in master) does not show “path” being used, I think it is supported because I can see it in this code commit.  In the newBuilderInternal() method there is code for adding a path name to the mount.

Conclusion:

I need to update my Kubernetes version to get better CephFS support.  The current upstream version is v1.2.0-alpha.

Building Kubernetes V1.2.0-alpha for armv7

2/16/2016

Building on a Raspberry Pi 2.

Get Go:

$ go version
go version go1.4.2 linux/arm

Check out the latest K8s source:

git clone https://github.com/kubernetes/kubernetes.git
cd kubernetes
git checkout master

Needed to fake a cross-compile configuration:

ln -s /usr/bin/gcc /usr/bin/arm-linux-gnueabi-gcc

Needed to increase swap space to 4GB:

dd if=/dev/zero of=/swap.file bs=4M count=1024
mkswap /swap.file
chmod 0600 /swap.file
swapon /swap.file

Run the build:

$ make
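
Building everything takes a very long time on the Pi; if only a couple of binaries are needed, the build can be narrowed (assuming this tree supports the usual WHAT variable):

$ make WHAT=cmd/kubelet
$ make WHAT=cmd/kubectl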

After 16 hours of building natively on an RPi2, I have some armv7 binaries for Kubernetes v1.2.0-alpha:

...
...
root@node0 in ~/kubernetes/_output/local/go/bin on master
$ ./kubelet --version
Kubernetes v1.2.0-alpha.7.874+a7818ac3a1d64a
root@node0 in ~/kubernetes/_output/local/go/bin on master
$ file ./kubelet
./kubelet: ELF 32-bit LSB executable, ARM, EABI5 version 1 (SYSV), dynamically linked, interpreter /lib/ld-linux-armhf.so.3, for GNU/Linux 2.6.32, BuildID[sha1]=0c0e10ad106863254c98f26b692484d548973e56, not stripped

After a little more than 24 hours the build completed:

...
...
root@node0 in ~/kubernetes on master
$ make
hack/build-go.sh
+++ [0215 17:27:27] Building go targets for linux/arm:
cmd/kube-proxy
cmd/kube-apiserver
cmd/kube-controller-manager
cmd/kubelet
cmd/kubemark
cmd/hyperkube
cmd/linkcheck
plugin/cmd/kube-scheduler
cmd/kubectl
cmd/integration
cmd/gendocs
cmd/genkubedocs
cmd/genman
cmd/mungedocs
cmd/genbashcomp
cmd/genconversion
cmd/gendeepcopy
cmd/genswaggertypedocs
examples/k8petstore/web-server/src
github.com/onsi/ginkgo/ginkgo
test/e2e/e2e.test
+++ [0216 17:53:23] Placing binaries
$ ls _output/local/bin/linux/arm/
e2e.test gendocs ginkgo kube-controller-manager kube-proxy src
genbashcomp genkubedocs hyperkube kubectl kube-scheduler
genconversion genman integration kubelet linkcheck
gendeepcopy genswaggertypedocs kube-apiserver kubemark mungedocs

Now I have the latest K8s running in my cluster:

root@node0 in ~/master
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"2+", GitVersion:"v1.2.0-alpha.7.874+a7818ac3a1d64a", GitCommit:"a7818ac3a1d64a9a67e0a44e0c07677587bed27f", GitTreeState:"clean"}
Server Version: version.Info{Major:"1", Minor:"2+", GitVersion:"v1.2.0-alpha.7.874+a7818ac3a1d64a", GitCommit:"a7818ac3a1d64a9a67e0a44e0c07677587bed27f", GitTreeState:"clean"}

Raspberry PiLO



 [This document is under construction]

The Raspberry PiLO is a Raspberry Pi “Integrated Lights Out” solution providing out-of-band server management for $35.

Features to be supported include:

  1. Reset the server
  2. Power down the server
  3. Power-up the server
  4. Remote console for server
  5. Full CLI support through the server's RS-232 port, including the ability to interact with the GRUB boot process and to enter a password for a LUKS-encrypted root partition.

 

Hardware


I recently upgraded my SlowBOT to a Raspberry Pi 2. That freed up the old B+ that I will use for this project.  A few accessories include:

  • USB Wifi: Bus 001 Device 004: ID 148f:5370 Ralink Technology, Corp. RT5370 Wireless Adapter
  • Micro SD Storage: Disk /dev/mmcblk0: 8026 MB, 8026849280 bytes
  • RS232 Serial Cable
  • PCI Express RS232 Serial Adapter Card with 16950 UART

A few miscellaneous items will also be used:

??

Software


OS: Raspbian, based on Debian Wheezy

Language: PiLO utilities will be written in Python (and a little C)

Other Services: SSH, screen

 

Architecture


The Raspberry PiLO will be connected to the server's LAN via Ethernet, and will also be physically connected to the server in several ways:

[Image: Raspberry Pi B+ GPIO pinout]

GPIO pins on the Pi will connect to power and reset pins on the motherboard and power supply

GPIO pins on the Pi will connect to pins of the server's RS232 serial port

The Pi will be powered by a 5V source from the server's power supply

It will accept network login via SSH, and allow the remote user to manipulate the server through a command-line interface.

 

Implementation


The project can be divided into several sub-projects:

  1. Prepare the Pi: install the OS, configure a user, configure the SSH server, install Python, install GPIO module(s)
  2. Connect to serial port and configure console access
  3. Write the PiLO utility code
  4. Connect to the reset and power controls (see the sketch after this list)
  5. Get power from the server's power supply
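
For item 4, the reset and power "buttons" end up being GPIO pins pulsed through a transistor or relay onto the motherboard's front-panel header. A minimal sketch using the sysfs GPIO interface (pin 17, the active level, and the pulse length are assumptions, not the final wiring):

# export the pin and drive it as an output
echo 17 > /sys/class/gpio/export
echo out > /sys/class/gpio/gpio17/direction

# pulse the "reset" line for half a second
echo 1 > /sys/class/gpio/gpio17/value
sleep 0.5
echo 0 > /sys/class/gpio/gpio17/value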

 

 

One:  Prepare the Pi


 

OS:  Raspbian with 3.18 kernel

Python:  2.7

Python GPIO Modules: RPIO 0.10.0

 

Two:  Connect to serial port and configure console


  • Physical Connection:

The USB serial cable provides the correct 3.3V logic levels.

On the Pi:  5V, GND, GPIO pins: 14-TXD, 15-RXD

On the Server:  USB

 

  • Software Configuration:

It's a bit tedious to get the serial console configured.

First thing to do is identify the tty on each side.

root@pilo:/# dmesg | grep tty | grep enabled
[ 0.001587] console [tty1] enabled
[ 1.136080] console [ttyAMA0] enabled

so /dev/ttyAMA0 is the serial device on the Pi side.

[root@server:/]# dmesg | grep -i PL2303
usbcore: registered new interface driver pl2303
usbserial: USB Serial support registered for pl2303
pl2303 3-3:1.0: pl2303 converter detected
usb 3-3: pl2303 converter now attached to ttyUSB0

and

[root@server:/]# lsusb | grep -i serial
Bus 003 Device 002: ID 067b:2303 Prolific Technology, Inc. PL2303 Serial Port

so /dev/ttyUSB0 is the serial device on the Server side.

 

Next, make sure getty is spawning on the server side and *not* on the Pi side.

On the Pi, edit /etc/inittab and comment out the entry for ttyAMA0.

root@pilo:/# vi /etc/inittab
71c71
< T0:23:respawn:/sbin/getty -L ttyAMA0 115200 vt100
---
> # T0:23:respawn:/sbin/getty -L ttyAMA0 115200 vt100

Tell init we changed something:
root@pilo:/# kill -HUP 1

Verify: Make sure getty is not running on ttyAMA0
root@pilo:/# ps -ef | grep ttyAMA0
On the Server, edit /etc/sysconfig/init and add the new tty to the list of active consoles:

[bdoyle@server:/]# vi /etc/sysconfig/init
< ACTIVE_CONSOLES=/dev/tty[1-6]
---
> ACTIVE_CONSOLES="/dev/tty[1-6] /dev/ttyUSB0"

Tell init we changed something:
[root@server:/]# kill -HUP 1
Verify: Make sure getty is running on ttyUSB0
[root@server:/]# ps -ef | grep ttyUSB0
root 24603 1 0 13:33 ttyUSB0 00:00:00 /sbin/mingetty /dev/ttyUSB0

 

Test the connection by running GNU screen on the Pi to attach to the console on the server. If all goes well, you should be able to log in and get a shell:

On the Pi:

$ sudo apt-get install screen

root@pilo:/# screen /dev/ttyAMA0 9600

CentOS release 6.6 (Final)
Kernel 3.10.73 on an x86_64

server login: xxxxx
Password:
Last login: Wed Jun 17 13:41:03 on ttyUSB0
[xxxxx@server ~]$

———————————-

If the serial connection is not working you may need to adjust the tty settings.

xxx
xxx
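
The usual knobs here are baud rate, 8N1 framing, and flow control; something along these lines, with the device and speed matched on both ends (exact values to be confirmed):

root@pilo:/# stty -F /dev/ttyAMA0 115200 cs8 -cstopb -parenb -crtscts
root@pilo:/# stty -F /dev/ttyAMA0 -a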

———

OK, now that's working, but we also want to be able to access the console while the system is booting.

On the Server: kernel config

Serial console support:

CONFIG_VT=y
CONFIG_VT_CONSOLE=y
CONFIG_SERIAL=y
CONFIG_SERIAL_CONSOLE=y

USB serial console support:

CONFIG_USB_SERIAL=m
CONFIG_USB_SERIAL_CONSOLE=m
CONFIG_USB_SERIAL_GENERIC=m
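
Kernel support alone isn't enough to get boot messages on the serial line; the kernel also needs console= parameters on its command line (and for a boot console the USB-serial driver typically has to be built in rather than a module). On this CentOS 6 server that means appending something like the following to the kernel line in /boot/grub/grub.conf (device and speed assumed from the setup above):

console=tty0 console=ttyUSB0,115200n8
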
  • Testing:

    On the Pi:

$ sudo screen /dev/tty??? 115200

 

 

 

References:

Serial port and console access: elinux.org, tldp.org