Tuesday, December 8, 2015

[Raspberry Pi Zero] Green LED flashing 8 times when powered on | no boot && no display

My Raspberry Pi Zero was delivered tonight, but refused to boot ... stupefaction ...

I ordered this Pi Zero + the NOOBS 8GB SD card from the Swag Store, but when powered on, the green LED flashed 8 times and nothing was displayed on the screen

After some googling, it turns out the "8 flashes" mean "SDRAM not recognised"

Apparently, my brand new NOOBS SD Card was not so "brand new", and needed some firmware updates

A few copy/pastes of files from the official 1.5 NOOBS image later (including bootcode.bin, recovery.img, recovery7.img, recovery.rfs, recovery.elf), I still couldn't get it to boot past a kernel panic

I ended up formatting and recreating the SD card from a fresh "Offline and network install" NOOBS image (the NOOBS Lite "Network install only" image does not detect my WiFi dongle and needs a "wired connection" that the Pi Zero does not provide)

Everything now works like a charm ... happiness !!!

Download : https://www.raspberrypi.org/downloads/noobs/
Tutorial : https://www.raspberrypi.org/help/noobs-setup/


If you find a way to upgrade the needed files on the original NOOBS 8GB SD card from the Swag Store without formatting, feel free to leave a comment

Happy Zeroing !!


Wednesday, April 1, 2015

[couchbase] [nodejs 0.12] Build failure


When trying to npm install couchbase after migrating to node.js v0.12, the build process fails silently (nothing on stdout or stderr), and you get a "Failed to locate couchnode native binding" error when starting your application

A quick look at "node_modules/couchbase/builderror.log" indicates undeclared identifiers :
.node-gyp/0.12.0/deps/uv/include/uv.h:75:6: error: 'EAI_BADHINTS' undeclared (first use in this function)
   XX(EAI_BADHINTS, "invalid value for hints")                                 \
/home/zup/.node-gyp/0.12.0/deps/uv/include/uv.h:80:6: error: 'EAI_NODATA' undeclared (first use in this function)
   XX(EAI_NODATA, "no address")                                                \
(...)
/home/zup/.node-gyp/0.12.0/deps/uv/include/uv.h:83:6: error: 'EAI_PROTOCOL' undeclared (first use in this function)
   XX(EAI_PROTOCOL, "resolved protocol is unknown")                            \
(...)
gyp ERR! not ok
According to this post, the problem should have been fixed with libcouchbase 2.4.8, but when trying to rebuild couchnode using libcouchbase 2.4.8, you now get the following errors :
In file included from ../deps/lcb/include/libcouchbase/plugins/io/libuv/plugin-internal.h:31,
                 from ../deps/lcb/include/libcouchbase/plugins/io/libuv/plugin-libuv.c:18,
                 from ../src/uv-plugin-all.c:17:
../deps/lcb/include/libcouchbase/plugins/io/libuv/libuv_compat.h: In function 'uv_uv2syserr':
../deps/lcb/include/libcouchbase/plugins/io/libuv/libuv_compat.h:168: error: 'EAI_BADHINTS' undeclared (first use in this function)
../deps/lcb/include/libcouchbase/plugins/io/libuv/libuv_compat.h:168: error: (Each undeclared identifier is reported only once
../deps/lcb/include/libcouchbase/plugins/io/libuv/libuv_compat.h:168: error: for each function it appears in.)
../deps/lcb/include/libcouchbase/plugins/io/libuv/libuv_compat.h:168: error: 'EAI_NODATA' undeclared (first use in this function)
../deps/lcb/include/libcouchbase/plugins/io/libuv/libuv_compat.h:168: error: 'EAI_PROTOCOL' undeclared (first use in this function)
make: *** [Release/obj.target/couchbase_impl/src/uv-plugin-all.o] Error 1
gyp ERR! build error
After almost a day fighting with this issue, and thanks to Corbin Uselton for pointing me to the solution, here is how you can build the npm couchbase module when running node.js 0.12+ :

# Update libcouchbase to 2.4.8

For non-CentOS distributions, adapt to your environment using the official doc

## Add the couchbase yum repo  :

vi /etc/yum.repos.d/couchbase.repo
[couchbase]
name = Couchbase package repository
gpgkey = http://packages.couchbase.com/rpm/couchbase-rpm.key
enabled = 1
baseurl = http://packages.couchbase.com/rpm/6.2/x86_64
gpgcheck = 1
## Install latest libcouchbase (2.4.8 as of 20150401)
yum install -y \
  libcouchbase-devel.x86_64 \
  libcouchbase2-bin.x86_64 \
  libcouchbase2-core.x86_64 \
  libcouchbase2-libev.x86_64 \
  libcouchbase2-libevent.x86_64
# Add the missing definitions to the newly installed libcouchbase headers (starting at line 87)
vi /usr/include/libcouchbase/plugins/io/libuv/libuv_compat.h
(...)
#ifndef EAI_NODATA
#define EAI_NODATA EAI_FAIL
#endif
#ifndef EAI_PROTOCOL
#define EAI_PROTOCOL EAI_FAIL
#endif
(...)
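If you want to sanity-check the fallback pattern on your toolchain before rebuilding, a throwaway compile is enough. This is just a sketch: the file name and the plain cc invocation are illustrative, and the C file only mimics the patched header.

```shell
# Minimal C file shaped like the patched header: include netdb.h, then
# define the missing EAI_* symbols only if the platform lacks them
cat > /tmp/eai_check.c <<'EOF'
#include <netdb.h>
#ifndef EAI_NODATA
#define EAI_NODATA EAI_FAIL
#endif
#ifndef EAI_PROTOCOL
#define EAI_PROTOCOL EAI_FAIL
#endif
int main(void) { return (EAI_NODATA != 0 && EAI_PROTOCOL != 0) ? 0 : 1; }
EOF
# If this prints "fallbacks OK", the symbols resolve on this toolchain
cc /tmp/eai_check.c -o /tmp/eai_check && /tmp/eai_check && echo "fallbacks OK"
```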
# Rebuild node module couchbase using the newly installed libcouchbase 
npm install couchbase --couchbase-root=/usr

You can now start your app and confirm that you no longer get the "Failed to locate couchnode native binding" error message


Happy node 0.12 !!

Wednesday, February 18, 2015

[jq] Transforming a JSON Object into an Associative Bash Array using jq

JSON is everywhere, and playing with API outputs from the command line has gotten much easier since we have jq, the "lightweight and flexible command-line JSON processor"

Here is a small snippet I use to transform some simple JSON into an associative bash array

Let's pretend that we have an API at http://example.com/myapi returning the following output :

[
  {
    "service": "one",
    "endpoint": "example.com/service1",
    "timeout": 1,
    "status": "up"
  },
  {
    "service": "two",
    "endpoint": "example.com/service2",
    "timeout": 2,  
    "status": "down"
  },
  {
    "service": "three",
    "endpoint": "example.com/service3",
    "timeout": 3,
    "status": "up"
  }
]
Imagine now that we need to expose this information to some program; we could do something like this :

URL="http://example.com/myapi"

declare -A assoc_array="($(
  curl -sS "${URL}" \
  | jq '.[]  | "[" + .service + "]=\"" +.endpoint + "\""' -r
))"
We now have a populated ${assoc_array} bash array that we can walk like this

for i in ${!assoc_array[@]}; do echo $i ${assoc_array[$i]}; done
Using jq, we could even filter out the services that are "down" :

declare -A assoc_array="($(
  curl -sS "${URL}" \
  | jq '.[] | select(.status == "up") | "[" + .service + "]=\"" +.endpoint + "\""' -r
))"
Imagine now that we need to export one environment variable per service present in the API output; we could do :

URL="http://example.com/myapi"

declare -A assoc_array="($(
  curl -sS "${URL}" \
  | jq '.[]  | "[" + .service + "]=\"" +.endpoint + "\""' -r
))"

for key in "${!assoc_array[@]}"; do
  sanitizedKey=${key//-/_}   # replace every dash, not just the first one
  export "ZBO_${sanitizedKey^^}=${assoc_array[$key]}"
done
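One caveat with the declare -A trick above: an endpoint containing spaces or glob characters would break the generated array literal. Letting jq's @sh filter do the shell quoting makes it more robust. A sketch (inline JSON stands in for the curl output so the snippet is self-contained; requires jq 1.4+ for @sh):

```shell
# Inline JSON standing in for the API output above
json='[{"service":"one","endpoint":"example.com/service1","status":"up"},
       {"service":"two","endpoint":"example.com/service2","status":"down"}]'

# @sh single-quotes each endpoint, so spaces/globs cannot break the array
declare -A assoc_array="($(
  jq -r '.[] | select(.status == "up") | "[" + .service + "]=" + (.endpoint | @sh)' <<< "$json"
))"

echo "${assoc_array[one]}"   # -> example.com/service1
```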

Happy JSONing !!

[OpenVZ] Create an Alpine Linux container from scratch

Containers are pretty popular these days, especially due to Docker, but I have to say that when it comes to production environments, I stick with OpenVZ, which has proven to be rock solid and stable for ages

With OpenVZ, a lot of pre-created OS templates are available out of the box (CentOS, Ubuntu, etc...), and they come in different flavors (minimal, devel, ...), but even the smallest one is around 100MB in its tar.gz format, and uses ~500MB once created and started as a vanilla live container (CT)

This makes things a little bit inefficient, as it takes time to download (vztmpl-dl), create (vzctl create), migrate (vzmigrate) and upload CT backups to S3, and it can waste a lot of disk space on the hardware node (HN), especially when you have a lot of containers that each expose a few services in a microservice way

You usually end up using most of the HN disk space for the bare OS (no deduplication between mostly identical CTs on the HNs), and only a couple of MB for what the CT is really meant to do/specialized for

Moreover, if you decide to keep the last n versions of each service container in the form of a customized OS template/tarball (so that you can instantly recreate your full IT stack at any given version using the underrated but beautiful ${TMPL_REPO_PREFIX} exposed through /etc/vz/download.conf), you just waste space or have to limit yourself to a few versions
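For reference, the ${TMPL_REPO_PREFIX} override is a one-liner (sketch; the repo host name is made up for the example):

```shell
# /etc/vz/download.conf -- vztmpl-dl will fetch templates under this prefix
TMPL_REPO_PREFIX="http://templates.example.com/openvz"
```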

This is where Alpine Linux can help you !!

A barebones Alpine Linux is just 6MB in size and has access to a surprisingly rich package repository

Even if "dists/scripts support for Alpine Linux" has been added to OpenVZ since vzctl 3.3, no pre-created template seems to be available, and if you google "Alpine Linux" + "OpenVZ" + "os template", the result is pretty disappointing and leads to incomplete information or old and only partially working Alpine templates

If you are interested in creating your own Alpine Linux OS template for OpenVZ, here are the quick steps you could follow :

  - Step 1 : From your HN command line, create a minimal CentOS container from which you will install Alpine Linux in a chroot

vzctl create ${CTID} --ipadd ${IP} --hostname ${HOSTNAME} --ostemplate centos-6-x86-minimal
  - Step 2 : From this newly created build container, download the latest apk static package :

export mirror="http://nl.alpinelinux.org/alpine/"
wget ${mirror}/v3.1/main/x86_64/apk-tools-static-2.5.0_rc1-r0.apk
tar -xzf apk-tools-static-2.5.0_rc1-r0.apk
  - Step 3 : Install the Alpine base system into the chroot :

export chroot_dir="/tmp/alp"
mkdir -p ${chroot_dir}
./sbin/apk.static -X ${mirror}/v3.1/main -U --allow-untrusted --root ${chroot_dir} --initdb add alpine-base
  - Step 4 : Create the necessary devices in the chroot :

mknod -m 666 ${chroot_dir}/dev/full c 1 7
mknod -m 666 ${chroot_dir}/dev/ptmx c 5 2
mknod -m 644 ${chroot_dir}/dev/random c 1 8
mknod -m 644 ${chroot_dir}/dev/urandom c 1 9
mknod -m 666 ${chroot_dir}/dev/zero c 1 5
mknod -m 666 ${chroot_dir}/dev/tty c 5 0
rm -rf  ${chroot_dir}/dev/null
mknod -m 666 ${chroot_dir}/dev/null c 1 3
  - Step 5 : Edit fstab and inittab so that it can work in an OpenVZ environment

vi ${chroot_dir}/etc/fstab
# START -------------------------------------
none /dev/pts devpts rw,gid=5,mode=620 0 0
none /dev/shm tmpfs defaults 0 0
# END ---------------------------------------
vi ${chroot_dir}/etc/inittab
# START -------------------------------------
(...)
# Set up a couple of getty's
#tty1::respawn:/sbin/getty 38400 tty1
#tty2::respawn:/sbin/getty 38400 tty2
#tty3::respawn:/sbin/getty 38400 tty3
#tty4::respawn:/sbin/getty 38400 tty4
#tty5::respawn:/sbin/getty 38400 tty5
#tty6::respawn:/sbin/getty 38400 tty6
(...)
#1:2345:respawn:/sbin/getty 38400 console
#2:2345:respawn:/sbin/getty 38400 tty2
# END ---------------------------------------   
  - Step 6 : [optional] Set up the latest APK mirror :

echo "${mirror}/v3.1/main/" > ${chroot_dir}/etc/apk/repositories
At this point you could skip to step 9, but sometimes it is interesting to add a few services at startup or even add sshd into the container

  - Step 7 :  [optional] Install sshd

./sbin/apk.static -X ${mirror}/v3.1/main -U --allow-untrusted --root ${chroot_dir} add openssh # the sshd binary ships in the openssh package
  - Step 8 :  [optional]  Enter the chroot and customize a few things as you see fit :

mount -t proc none ${chroot_dir}/proc
mount -o bind /sys ${chroot_dir}/sys
chroot ${chroot_dir} /bin/sh -l 
Ex : Setup init services

rc-update add hostname default
rc-update add localmount default
rc-update add klogd default
rc-update add networking default
rc-update add syslog default
rc-update add dmesg default
rc-update add sshd default      
  - Step 9 :  Create the OpenVZ template

exit #exit from chroot 
umount ${chroot_dir}/proc
umount ${chroot_dir}/sys    
tar zcf ./alpine-3.1.2-x86_64.tar.gz -C ${chroot_dir} . 
  - Step 10 : Copy the created template to the HN template dir :

exit # exit from centos ct and get back to the HN command line
scp ${IP}:/root/alpine-3.1.2-x86_64.tar.gz /vz/template/cache/
  - Step 11 : Start a brand new CT using your Alpine Linux template

vzctl create ${CTID2} --ipadd ${IP2} --hostname ${HOSTNAME2} --ostemplate alpine-3.1.2-x86_64
vzctl start ${CTID2}
  - Step 12 : Enter your CT

vzctl enter ${CTID2}
or ssh into the CT if you have installed sshd into your template (optional step 7) : 

vzctl set ${CTID2} --userpasswd ${LOGIN}:${PASSWORD}
ssh ${LOGIN}@${IP2}

Happy hacking !!


[TODO] : Add fancy formatting to this terrible blog post that hurts my eyes

Sunday, December 1, 2013

[protobuf] Sending protobuf serialized data using curl

Protocol Buffers may be really performant, but when it comes to debugging some APIs, it can be pretty frustrating not to be able to use your good old usual tools.

One tool I have loved and used since my very first steps with HTTP is curl, and here is how you can use it when dealing with protobuf payloads :

Assuming that

  - you already have protoc installed (Ex: yum install protobuf.x86_64)
  - the data you want to send is stored in clear text in file.msg
  - your proto file is file.proto
  - you want to encode it using the message type myPost

you can use the following set of commands to POST your data protobuf encoded :

cat file.msg | protoc --encode=myPost ./file.proto | curl -sS -X POST --data-binary @- http://hostname/api-route

---

Reading protobuffed output works exactly the same way.

Imagine that the preceding call to http://hostname/api-route responds with some protobuffed output using the message type myResponse; you can use the following to print it in clear text :

cat file.msg | protoc --encode=myPost ./file.proto | curl -sS -X POST --data-binary @- http://hostname/api-route | protoc --decode=myResponse ./file.proto


Enjoy

[HTTP Benchmark] Apache ab and HTTP 1.0 KeepAlive against node.js express

Apache ab allows you to specify '-k' to use the HTTP KeepAlive feature

We had to simulate some activity on our servers that use this keep-alive extensively (thousands of requests through the same connection), but couldn't make it happen using ab against our node.js/express application.

Using 'ab -v 2', we could confirm that every HTTP connection was closed after its request, forcing the next one to open a new connection :

---
POST /testroute HTTP/1.0
Connection: Keep-Alive
Content-length: 700
Content-type: text/plain
Host: app1
User-Agent: ApacheBench/2.3
Accept: */*

---
LOG: header received:
HTTP/1.1 200 OK
X-Powered-By: Express
Date: Sun, 01 Dec 2013 18:36:36 GMT
Connection: close

This is where we had the intuition that express was not implementing the 'unofficial' HTTP 1.0 keep-alive.

To confirm this, we tried calling express with curl using HTTP 1.1 and HTTP 1.0 :

- Using HTTP 1.1 :

curl -v http://app1/testroute

> POST /testroute HTTP/1.1
> User-Agent: curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.14.0.0 zlib/1.2.3 libidn/1.18 libssh2/1.4.2
> Host: app1
> Accept: */*
> Connection: Keep-Alive
> Content-Length: 674
> Content-Type: application/x-www-form-urlencoded
>
< HTTP/1.1 200 OK
< X-Powered-By: Express
< Date: Sun, 01 Dec 2013 18:44:35 GMT
< Connection: keep-alive
< Transfer-Encoding: chunked

- Using HTTP 1.0 and the keepAlive header : 

curl -v --http1.0 -H 'Connection: Keep-Alive' http://app1/testroute

> POST /testroute HTTP/1.0
> User-Agent: curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.14.0.0 zlib/1.2.3 libidn/1.18 libssh2/1.4.2
> Host: app1
> Accept: */*
> Connection: Keep-Alive
> Content-Length: 674
> Content-Type: application/x-www-form-urlencoded
< HTTP/1.1 200 OK
< X-Powered-By: Express
< Date: Sun, 01 Dec 2013 18:44:20 GMT
< Connection: close

Tada !! Our problem was confirmed:

  - apache ab is using HTTP 1.0
  - express doesn't care about the keep-alive header and closes each connection made with HTTP 1.0

---

After some googling we found some sources for ab and a quick look into them confirmed a quick hack was possible.

Here is what we did :

# Install dependencies :
yum install apr-devel.x86_64 apr-util-devel.x86_64 -y

# Get the sources :
wget https://apachebench-standalone.googlecode.com/files/ab-standalone-0.1.tar.bz2

# Extract :
bzip2 -d ab-standalone-0.1.tar.bz2
tar xvf ab-standalone-0.1.tar

# Modify sources to force HTTP 1.1 :
cd ab-standalone
vi ab.c
# START ---------------
(...)
            "%s %s HTTP/1.1\r\n"
(...)
            "POST %s HTTP/1.1\r\n"
(...)
# END   ---------------
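The same edit can be scripted with sed instead of hand-editing. A sketch run against a stand-in file so it is safe to execute anywhere; on a real build you would point it at ab.c in the ab-standalone tree (GNU sed assumed for -i):

```shell
# Two stand-in lines shaped like the request templates in ab.c
printf '%s\n' '"%s %s HTTP/1.0\r\n"' '"POST %s HTTP/1.0\r\n"' > /tmp/ab-snippet.c

# Switch every request template from HTTP/1.0 to HTTP/1.1, in place
sed -i 's|HTTP/1\.0|HTTP/1.1|g' /tmp/ab-snippet.c

grep -c 'HTTP/1.1' /tmp/ab-snippet.c   # -> 2
```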

# Compile :
make apr-skeleton
make ab
chmod +x ./ab

# Alias ab to new newly compiled binary :
alias ab=/zbo/operations/ab-standalone/ab

# Confirm that keepAlive now works :
ab -v 2 -n 1 -c 1 -k http://app1/testroute

---
POST /testroute HTTP/1.1
Connection: Keep-Alive
Content-length: 700
Content-type: text/plain
Host: 192.168.0.161:8080
User-Agent: ApacheBench/2.3
Accept: */*

---
LOG: header received:
HTTP/1.1 200 OK
X-Powered-By: Express
Date: Sun, 01 Dec 2013 19:23:25 GMT
Connection: keep-alive
Transfer-Encoding: chunked


Use at your own risk :)


Wednesday, August 28, 2013

[Couchbase] How to retrieve a key expiration date

It is pretty easy to retrieve a key's expiration date from Couchbase, but if, just like me, you have been googling around without any success and didn't find anything relevant in the Couchbase doc, here is what I came up with after inspecting my browser's network pane while on the admin console :

curl -v "http://couchbaseServer:couchbasePort/couchBase/bucketName/itemID"
(...)
< HTTP/1.1 200 OK
< Date: Tue, 27 Aug 2013 22:20:02 GMT
< Server: MochiWeb/1.0 (Any of you quaids got a smint?)
< X-Couchbase-Meta: {"id":"itemID","rev":"1-0007992d08e3d5750000000000000000","expiration":0,"flags":0,"type":"json"}
< Content-Type: application/json
< Content-Length: 365
< Cache-Control: must-revalidate
< Connection: close
* Closing connection #0
{"someObject":["foo", "bar"]}

To make a long story short, you can retrieve all the item metadata, including the expiration, by reading the X-Couchbase-Meta HTTP response header after an HTTP API call to the corresponding item
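From the shell, the interesting field can be pulled straight out of that header. A sketch using jq on a captured header line (the meta value is the one from the trace above; jq assumed installed):

```shell
# A captured X-Couchbase-Meta header line
meta='X-Couchbase-Meta: {"id":"itemID","rev":"1-0007992d08e3d5750000000000000000","expiration":0,"flags":0,"type":"json"}'

# Strip the header name, keep the JSON value, extract the expiration
echo "${meta#*: }" | jq .expiration   # -> 0
```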

You can get a detailed explanation on the expiration value and how to set and read it at : http://www.couchbase.com/docs/couchbase-devguide-2.0/about-ttl-values.html

Short version :

  - If Expiry is less than 30*24*60*60 (30 days) : The value is interpreted as the number of seconds from the point of storage or update.
  - If Expiry is greater than 30*24*60*60 : The value is interpreted as the number of seconds from the epoch (January 1st, 1970).
  - If Expiry is 0 : This disables expiry for the item.
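The three cases above can be turned into a small helper (sketch; the function name is mine, and GNU date is assumed for the absolute case):

```shell
# Print a human-readable interpretation of a Couchbase expiration value
explain_expiry() {
  local e=$1
  if [ "$e" -eq 0 ]; then
    echo "never expires"
  elif [ "$e" -lt $((30*24*60*60)) ]; then
    echo "expires $e seconds after storage/update"   # relative TTL
  else
    echo "expires at $(date -u -d "@$e")"            # seconds since the epoch
  fi
}

explain_expiry 0           # -> never expires
explain_expiry 3600        # -> expires 3600 seconds after storage/update
explain_expiry 1377645602  # an absolute timestamp (late August 2013, UTC)
```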