<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[Technologee]]></title><description><![CDATA[More of an aide-mémoire than thoughts, stories and ideas.]]></description><link>https://blog.technologee.co.uk/</link><image><url>https://blog.technologee.co.uk/favicon.png</url><title>Technologee</title><link>https://blog.technologee.co.uk/</link></image><generator>Ghost 3.5</generator><lastBuildDate>Sat, 04 Apr 2026 18:41:41 GMT</lastBuildDate><atom:link href="https://blog.technologee.co.uk/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[Docker Multi-Arch Images, manifests & mquery]]></title><description><![CDATA[Looking at Docker multi-arch images, creating them using the upcoming manifest commands in the Docker client, reading them using Phil Estes' mquery tool, and its conversion to an OpenFaaS function.]]></description><link>https://blog.technologee.co.uk/docker-multi-arch-images-manifests-mquery/</link><guid isPermaLink="false">5e3b286c1c4adf03c4d82d06</guid><category><![CDATA[Docker]]></category><category><![CDATA[OpenFaaS]]></category><category><![CDATA[pi]]></category><category><![CDATA[Multi-arch]]></category><category><![CDATA[raspberry]]></category><dc:creator><![CDATA[Richard Gee]]></dc:creator><pubDate>Mon, 12 Feb 2018 00:00:00 GMT</pubDate><media:content url="https://blog.technologee.co.uk/content/images/2020/02/anthony-delanoix-152871.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://blog.technologee.co.uk/content/images/2020/02/anthony-delanoix-152871.jpg" alt="Docker Multi-Arch Images, manifests & mquery"><p>Putting aside the oft-quoted figures about growth and the number of images downloaded, 2017 was quite a year for <a>Docker</a>.  
It culminated in the DockerCon Europe announcement in Copenhagen that the platform would natively support <a href="https://kubernetes.io/">Kubernetes</a>, and we are now starting to see the fruits of this shift in the <a href="https://beta.docker.com/">Docker support for Kubernetes Beta</a>.  Prior to that, however, a couple of 'multi-centric' features were released that started to lower the already negligible barrier to entry &amp; transform how people use <a>Docker</a>.</p><h2 id="multi-stage-builds">Multi-Stage Builds</h2><p>This was the earlier of the two features of interest. It started to emerge during March ahead of general availability during June.  In brief, this change replaced the earlier 'Builder Pattern', which existed to facilitate the production of smaller images for particular types of applications.  For a first-class discussion of how multi-stage builds superseded the builder pattern, head over to <a href="https://blog.alexellis.io/mutli-stage-docker-builds/">Alex Ellis' blog</a>.</p><h2 id="multi-architecture-images">Multi-Architecture Images</h2><p>During September <a>Docker</a> announced multi-platform support for their official images, and as someone who runs <a>Docker</a> on ARM32v6, ARM32v7 and x86_64 this news was of particular interest.</p><h3 id="why">Why?</h3><p>Quite simply, it removes complexity.  In the pre-multi-arch world it would typically be the case that multiple <a href="https://hub.docker.com/">Docker hub</a> repos would need to be visited, namespaces determined, or tags forensically interrogated in order to create bespoke Dockerfiles for each platform being targeted by the application.  For example, consider a <a href="https://golang.org/">golang</a> application being developed to run on each of the hardware platforms described earlier; three separate Dockerfiles could be required:</p><ul><li>Dockerfile</li></ul><pre><code>FROM golang:1.8.5
....
</code></pre><ul><li>Dockerfile.arm32v6</li></ul><pre><code>FROM arm32v6/golang:1.8.5
...
</code></pre><ul><li>Dockerfile.arm32v7</li></ul><pre><code>FROM arm32v7/golang:1.8.5
...
</code></pre><p>The remainder of the Dockerfiles' content would likely remain the same, and users would need to invoke the build command against the Dockerfile relevant to their environment.</p><ul><li>Dockerfile: <code>docker build .</code></li><li>Dockerfile.arm32v6: <code>docker build . -f Dockerfile.arm32v6</code></li><li>Dockerfile.arm32v7: <code>docker build . -f Dockerfile.arm32v7</code></li></ul><p>Note the <code>-f</code> option, which needs to be supplied if a non-standard Dockerfile name is used.</p><p>With the multi-arch approach the responsibility for finding the appropriate base image has been delegated to the Docker client, which greatly improves the UX.  Revisiting the earlier example, the three Dockerfiles can be condensed into one:</p><ul><li>Dockerfile</li></ul><pre><code>FROM golang:1.8.5
....
</code></pre><p>Consequently the build command is the same regardless of the user's environment, simply: <code>docker build .</code></p><h3 id="how">How?</h3><p>Something called <em>manifest lists</em> makes this possible.  In (much) earlier versions of Docker the client/server conversation resulting from an invocation of <code>docker pull</code> / <code>docker run</code> would be very brief.  Latterly, an intermediate step was introduced to the conversation, during which the client can, where available, receive a list of supported platforms for the requested image.  The client subsequently requests an image appropriate for the hardware platform upon which it's running.</p><p>In order to get a taste of how these <em>manifest lists</em> look, try exploring the work that <a href="https://www.docker.com/captains/phil-estes">Docker Captain</a> <a href="https://twitter.com/estesp">Phil Estes</a> has done in this area and in particular his <a href="https://github.com/estesp/mquery">mquery tool</a>.  As you might expect, this has also been made available as a Docker image:</p><pre><code>$ docker run mplatform/mquery &lt;imageName&gt;
  Image: &lt;imageName&gt;
   * Manifest List: Yes
   * Supported platforms:
     - linux/amd64
     - linux/arm/v7
     - linux/386
     - linux/ppc64le
     - linux/s390x
     - windows/amd64:10.0.14393.2007
     - windows/amd64:10.0.16299.192
</code></pre><h3 id="practical-implications">Practical Implications</h3><p>Looking beyond the build phase: prior to multi-arch, with applications where stacks might be in use, standard practice was to assemble separate compose files for each supported hardware platform.  For example, a <code>docker-compose.yml</code> for use on x86 with the relevant x86 images specified, and a <code>docker-compose.armhf.yml</code> which would target 32-bit ARM hardware, and so would detail the respective ARM images.  With multi-arch images, the same stack can be deployed on each supported hardware platform from a single compose file; the client will invisibly select the relevant image during deployment.</p><h3 id="create-your-own-manifest-list-multi-arch">Create your own Manifest List / Multi-Arch</h3><p>At the time of writing, the CLI feature to create manifest lists is still experimental, so the edge channel version of <a>Docker</a> is required. With 18.02 installed, ensure that the following is added to the <code>config.json</code> file:</p><pre><code>{
"experimental":"enabled"
}
</code></pre><p>That will enable access to the command in the CLI:</p><pre><code>$ docker manifest

Usage:  docker manifest COMMAND

Manage Docker image manifests and manifest lists

Options:

Commands:
  annotate    Add additional information to a local image manifest
  create      Create a local manifest list for annotating and pushing to a registry
  inspect     Display an image manifest, or manifest list
  push        Push a manifest list to a repository

Run 'docker manifest COMMAND --help' for more information on a command.
</code></pre><p><a href="https://docs.docker.com/edge/engine/reference/commandline/manifest_create/">Create</a> would seem like the obvious place to start.  The first argument specifies the name of the manifest - effectively the image name that will be used to access the hardware-specific images - and the subsequent arguments specify those images.  Here a manifest containing two images is created: an x86 Linux image and an ARM32v7 equivalent.</p><pre><code>$ docker manifest create rgee0/of-mquery:1.0 \
    rgee0/mquery-x86:1.0 \
      rgee0/mquery-arm32v7:1.0

Created manifest list docker.io/rgee0/of-mquery:1.0
</code></pre><p>Once created, the manifest can be <a href="https://docs.docker.com/edge/engine/reference/commandline/manifest_annotate/">annotated</a> to add additional information.  It's this information that is extracted by the <code>docker run mplatform/mquery</code> command demonstrated above.</p><pre><code>$ docker manifest annotate rgee0/of-mquery:1.0 \
    rgee0/mquery-x86:1.0 --os linux --arch amd64

$ docker manifest annotate rgee0/of-mquery:1.0 \
    rgee0/mquery-arm32v7:1.0 --os linux --arch arm --variant v7
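</code></pre><p>These <code>--os</code> / <code>--arch</code> values are the same pairs that Go itself exposes at runtime.  As a quick sanity check, a few lines of Go will print the pair for the machine they run on (an illustrative snippet, not part of the manifest tooling):</p><pre><code class="language-golang">package main

import (
	"fmt"
	"runtime"
)

// Print the os/arch pair for the machine this runs on - the same pair
// that the client matches against a manifest list's entries.
func main() {
	fmt.Printf("%s/%s\n", runtime.GOOS, runtime.GOARCH)
}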
</code></pre><p>The OS and arch combinations should align with those available from the <a href="https://golang.org/doc/install/source#environment">golang environment options</a>. Notice the <code>--variant</code> option supplied to the ARM annotation; as an example, this is useful in situations where distinct images are available for each of ARM32v6 &amp; ARM32v7.</p><p>The final step, which should be familiar from working with images, is to <a href="https://docs.docker.com/edge/engine/reference/commandline/manifest_push/">push</a> the manifest:</p><pre><code>$ docker manifest push rgee0/of-mquery:1.0
</code></pre><p>To review the changes, the CLI makes an <a href="https://docs.docker.com/edge/engine/reference/commandline/manifest_inspect/">inspect</a> option available, and of course the earlier <code>mquery</code> command could be provided with the manifest name chosen during create:</p><pre><code>$ docker run mplatform/mquery rgee0/of-mquery:1.0
  Image: rgee0/of-mquery:1.0
   * Manifest List: Yes
   * Supported platforms:
     - linux/arm/v7
     - linux/amd64
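</code></pre><p>Under the hood a manifest list is just a JSON document, so the platform entries shown above can be pulled out with a few lines of Go.  The sketch below decodes an inlined sample shaped like the <code>manifests[].platform</code> section of a manifest list; the sample document is illustrative rather than fetched from a registry:</p><pre><code class="language-golang">package main

import (
	"encoding/json"
	"fmt"
)

// manifestList is a trimmed-down view of a manifest list document,
// keeping only the fields needed to list supported platforms.
type manifestList struct {
	Manifests []struct {
		Platform struct {
			Architecture string `json:"architecture"`
			OS           string `json:"os"`
			Variant      string `json:"variant"`
		} `json:"platform"`
	} `json:"manifests"`
}

func main() {
	// Sample document shaped like a manifest list's "manifests" section.
	doc := []byte(`{"manifests":[
		{"platform":{"architecture":"amd64","os":"linux"}},
		{"platform":{"architecture":"arm","os":"linux","variant":"v7"}}]}`)

	ml := new(manifestList)
	if err := json.Unmarshal(doc, ml); err != nil {
		panic(err)
	}
	for _, m := range ml.Manifests {
		platform := m.Platform.OS + "/" + m.Platform.Architecture
		if m.Platform.Variant != "" {
			platform += "/" + m.Platform.Variant
		}
		fmt.Println(platform)
	}
}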
</code></pre><p>Now, running <code>docker pull rgee0/of-mquery:1.0</code> on a Mac or a Raspberry Pi 3 will see the relevant hardware-specific image pulled from <a href="https://hub.docker.com/">Docker hub</a>.</p><h3 id="mquery-as-a-function">mquery as a function</h3><p>Why not try mquery as a function?  By adapting <a href="https://github.com/estesp/mquery">Phil's code</a> very slightly it's possible to make it compatible with the <a href="https://www.openfaas.com/">OpenFaaS</a> framework and present mquery as a serverless function.  <a href="https://www.openfaas.com/">OpenFaaS</a> functions take input via <code>stdin</code>, so a small change to swap out <code>os.Args</code> and the addition of a helper function to tidy up <code>stdin</code> prepares the program for use with <a href="https://www.openfaas.com/">OpenFaaS</a>.  Finally, the addition of multi-stage Dockerfiles produces a small function image ready for deployment.  Find the <a href="https://github.com/rgee0/mquery/tree/openfaaschanges">OpenFaaS-ready code over on GitHub</a>.</p><p>Since the images were built and pushed during the manifest creation section, those with access to an <a href="https://www.openfaas.com/">OpenFaaS</a> instance can dive straight into deploying the function.  Those who haven't should head over to <a href="https://github.com/openfaas/faas/tree/master/guide">the guides</a> and work through the deployment guide for their preferred orchestrator.</p><h3 id="deployment">Deployment</h3><p>One of the core tenets pervading <a href="https://www.openfaas.com/">OpenFaaS</a> is that of choice, and deployment activity is no different. Here are two of the deployment options.</p><h4 id="cli">CLI</h4><p><a href="https://blog.alexellis.io/quickstart-openfaas-cli/">Grab the CLI</a> and use <code>deploy</code> against a <code>yaml</code> file:</p><pre><code>provider:
  name: faas
  gateway: http://127.0.0.1:8080

functions:
  mquery:
    lang: Dockerfile
    handler: ./mquery
    image: rgee0/of-mquery:1.0
</code></pre><p>Then run the <code>deploy</code> command:</p><pre><code>$ faas-cli deploy -f ./mquery.yml
</code></pre><p>or, more succinctly, name the yaml file <code>stack.yml</code> and use the CLI alias:</p><pre><code>$ faas deploy
</code></pre><p>The function can then be invoked via the CLI:</p><pre><code>$ echo 'rgee0/of-mquery:1.0' | faas invoke mquery

  Manifest List: Yes 
  Supported platforms:
   - linux/arm/v7
   - linux/amd64
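</code></pre><p>The <code>stdin</code> change mentioned earlier amounts to reading the whole request body and trimming it.  A minimal sketch of that idea follows - the function name here is illustrative rather than the actual helper in the mquery port:</p><pre><code class="language-golang">package main

import (
	"fmt"
	"io"
	"strings"
)

// parseInput reads the function's whole input and trims the surrounding
// whitespace/newline that arrives with the request body.
func parseInput(r io.Reader) string {
	b, _ := io.ReadAll(r)
	return strings.TrimSpace(string(b))
}

func main() {
	// In the deployed function the reader would be os.Stdin.
	image := parseInput(strings.NewReader("rgee0/of-mquery:1.0\n"))
	fmt.Println(image)
}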
</code></pre><h4 id="function-store">Function Store</h4><p>By far the easiest way to deploy to <a href="https://www.openfaas.com/">OpenFaaS</a> is via the <a href="https://github.com/openfaas/store">OpenFaaS Function Store</a>. The Function Store is a community curated index of <a href="https://www.openfaas.com/">OpenFaaS</a> functions and deployment is as simple as clicking through the desired function.<br>The function created through this article has been accepted into the <a href="https://github.com/openfaas/store">OpenFaaS Function Store</a> which means it can be used in just 2 clicks:</p><figure class="kg-card kg-image-card"><img src="https://blog.technologee.co.uk/content/images/2018/02/deploy_store.gif" class="kg-image" alt="Docker Multi-Arch Images, manifests & mquery"></figure><p>Once initialised the function can be invoked through the UI by adding an image name and tag (<code>rgee0/of-mquery:1.0</code>) into the request body and selecting <code>invoke</code>.</p><h5 id="credits">Credits</h5><p><a href="https://unsplash.com/photos/FYNh6W7_GII">Title Photo</a> by <a href="https://unsplash.com/@anthonydelanoix">Anthony DELANOIX</a> on <a href="https://unsplash.com">Unsplash</a></p>]]></content:encoded></item><item><title><![CDATA[Controlling Blinkt! using Golang]]></title><description><![CDATA[Article and code snippets explaining how to approach controlling the Pimoroni Blinkt! 
with Golang using the Blinkt_go library maintained by Alex Ellis]]></description><link>https://blog.technologee.co.uk/blinkt-golang-port/</link><guid isPermaLink="false">5e3b259c1c4adf03c4d82cf6</guid><category><![CDATA[Blinkt]]></category><category><![CDATA[Docker]]></category><category><![CDATA[raspberry]]></category><category><![CDATA[golang]]></category><dc:creator><![CDATA[Richard Gee]]></dc:creator><pubDate>Thu, 27 Jul 2017 00:00:00 GMT</pubDate><media:content url="https://blog.technologee.co.uk/content/images/2020/02/Untitled.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://blog.technologee.co.uk/content/images/2020/02/Untitled.jpg" alt="Controlling Blinkt! using Golang"><p>In the <a href="http://blog.technologee.co.uk/in-a-blinkt-golang-on-a-raspberry-pi/">previous article</a> we looked at how I had become involved in an open source community project which centred around making available a Golang equivalent of <a href="https://github.com/pimoroni/blinkt">Pimoroni's Python library</a>.  In this post I wanted to demonstrate use of the <a href="https://github.com/alexellis/blinkt_go">Golang library</a> by porting one of the <a href="https://github.com/pimoroni/blinkt/tree/master/examples">Pimoroni examples</a>.</p><h3 id="solid-colours">Solid Colours</h3><p>One of the simplest of the Python examples is <code>solid_colours.py</code>.  This example cycles the Blinkt! through red, green and blue at predetermined intervals. Let's take a look.</p><h3 id="python">Python</h3><p>The Python code to do this is really quite straightforward, which makes it a really good example to start with if you're new to Golang. It allows you to focus on how to make Golang do the same without getting bogged down in the complexity of the implementation.</p><pre><code class="language-python">while True:
    if step == 0:
        set_all(128,0,0)
    if step == 1:
        set_all(0,128,0)
    if step == 2:
        set_all(0,0,128)

    step+=1
    step%=3
    show()
    time.sleep(0.5)
</code></pre><p>A simple loop increments a counter and uses a modulo operation to choose which colour to display.  A half-second delay is placed within the loop to give our eyes long enough to perceive the change of colour.</p><h3 id="golang">Golang</h3><p>So, we've seen our starting point, but the reason we're here is that we want to build this out in Golang.  Let's start with the scaffolding - the bits we'll need regardless of the example:</p><pre><code class="language-golang">package main

import . "github.com/alexellis/blinkt_go"

</code></pre><p>Nice and simple, we declare this package as main and import the Raspberry Pi <a href="https://github.com/alexellis/blinkt_go">Golang library</a> that we looked at in the <a href="http://blog.technologee.co.uk/in-a-blinkt-golang-on-a-raspberry-pi/">previous article</a>.  You'll notice that we are using dot notation in the import - this is not ideal, and is only really advised for testing code.</p><p>As this example is of limited complexity there are no functions required outside of main, so let's head straight into the body of the program.  First we need to construct a new Blinkt object.  Every program will need one of these as it's the means by which you access the functions to manipulate the hardware.</p><pre><code class="language-golang">blinkt := NewBlinkt()
</code></pre><p>In the current version of the library initialising Blinkt! with a brightness value is optional. If we wanted to do this we would pass a value between 0.0 and 1.0 into the <code>NewBlinkt()</code> method:</p><pre><code class="language-golang">initialBrightness := 0.5
blinkt := NewBlinkt(initialBrightness)
</code></pre><p>Next we need to think about the type of program we are creating: is it one that we wish to run and exit leaving the Blinkt! set to a desired state, or do we want the Blinkt! to reset on exit?  Well, this particular example runs an infinite loop, which we'll have to break using <code>CTRL+C</code>, and at the point at which it breaks we want the Blinkt! to go dark.  To do this we need to set <code>SetClearOnExit()</code> to true.</p><pre><code class="language-golang">blinkt.SetClearOnExit(true)
blinkt.Setup()
Delay(100)
</code></pre><p>Once we've set the <code>Blinkt</code> initialisation options we can call <code>Setup()</code> to initialise GPIO via the lower level WiringPi library.  There's a little delay in there to allow time for <code>Setup()</code> to complete.</p><h4 id="lights-on">Lights On</h4><p>Right, so we've successfully initialised a <code>Blinkt</code> object in Golang, but nothing will happen until we tell the pixels what we want them to do.  This is where the program starts to look similar to the earlier Python version.</p><pre><code class="language-golang">step := 0

for {

	step = step % 3
	switch step {
	case 0:
		blinkt.SetAll(128, 0, 0)
	case 1:
		blinkt.SetAll(0, 128, 0)
	case 2:
		blinkt.SetAll(0, 0, 128)
	}

	step++
	blinkt.Show()
	Delay(500)

}
</code></pre><p>Here we can see the <code>SetAll()</code> function that I talked about in the <a href="http://blog.technologee.co.uk/in-a-blinkt-golang-on-a-raspberry-pi/">previous article</a> being put to use.  We have created three states, one for Red, one for Green and one for Blue, and we use an incremental counter with modulo arithmetic to carry us from state to state with a half-second delay between states.  It's important that <code>Show()</code> is called to apply the values set in <code>SetAll()</code> to the hardware.</p><h4 id="bringing-it-together">Bringing it Together</h4><p>We've now got a viable Golang program through which we can manipulate the Blinkt! hardware via GPIO.</p><pre><code class="language-golang">package main

import . "github.com/alexellis/blinkt_go"

func main() {

	brightness := 0.5
	blinkt := NewBlinkt(brightness)

	blinkt.SetClearOnExit(true)
	blinkt.Setup()
	Delay(100)

	step := 0

	for {

		step = step % 3
		switch step {
		case 0:
			blinkt.SetAll(128, 0, 0)
		case 1:
			blinkt.SetAll(0, 128, 0)
		case 2:
			blinkt.SetAll(0, 0, 128)
		}

		step++
		blinkt.Show()
		Delay(500)

	}

}
</code></pre><h3 id="other-methods">Other methods</h3><p>To help you achieve different and more impressive effects, the library makes a number of other methods available:</p><p><code>blinkt.Clear()</code> - Pretty self-explanatory, this one.  It will set all the pixels to 0.</p><p><code>blinkt.SetAll(r int, g int, b int)</code> - As we've already seen, this will set all the pixels to the RGB values passed in to the function.</p><p><code>blinkt.SetBrightness(brightness float64)</code> - One of the newer functions, this sets the brightness of all the pixels, enabling dynamic manipulation at runtime.</p><p><code>blinkt.SetPixel(p int, r int, g int, b int)</code> - Set the colour value of a single pixel. The pixels are zero indexed.</p><p><code>blinkt.SetPixelBrightness(p int, brightness float64)</code> - Set the brightness of a single pixel. The pixels are zero indexed.</p><p>The nice feature here is that the final four each return a <code>Blinkt</code> object, which means we can chain the methods to set the colours and brightness in one hit.  We could even add <code>Show</code> to the end:</p><pre><code class="language-golang">switch step {
case 0:
	blinkt.SetAll(128, 0, 0).SetBrightness(0.5)
case 1:
	blinkt.SetBrightness(0.1).SetAll(0, 128, 0)
case 2:
	blinkt.SetAll(0, 0, 128).SetBrightness(0.9).Show()
}
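</code></pre><p>The chaining above works because each setter returns its receiver.  A stripped-down sketch of the pattern follows, using a hypothetical type rather than the library's actual code:</p><pre><code class="language-golang">package main

import "fmt"

// pixelStrip is a hypothetical stand-in for the library's Blinkt type,
// shown only to illustrate the chaining pattern: each setter returns
// its receiver so calls can be strung together.
type pixelStrip struct {
	r, g, b    int
	brightness float64
}

func (p *pixelStrip) SetAll(r, g, b int) *pixelStrip {
	p.r, p.g, p.b = r, g, b
	return p
}

func (p *pixelStrip) SetBrightness(brightness float64) *pixelStrip {
	p.brightness = brightness
	return p
}

func main() {
	strip := new(pixelStrip)
	strip.SetAll(128, 0, 0).SetBrightness(0.5)
	fmt.Println(strip.r, strip.brightness)
}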
</code></pre><p>Through this more granular control we are able to create all manner of intricate effects.  Here's an example of <code>blinkt.SetAll(128, 0, 0).SetBrightness()</code> looping through different brightness values:</p><p>It's alive! Needs cleaning up but well on my way to fixing brightness <a href="https://twitter.com/alexellisuk">@alexellisuk</a> <a href="https://twitter.com/hashtag/blinkt?src=hash">#blinkt</a> <a href="https://twitter.com/hashtag/raspberrypi?src=hash">#raspberrypi</a> <a href="https://twitter.com/hashtag/golang?src=hash">#golang</a> <a href="https://t.co/3MyF63Bz3e">pic.twitter.com/3MyF63Bz3e</a>— Richard Gee (@rgee0) <a href="https://twitter.com/rgee0/status/838468781550747649">March 5, 2017</a></p><h3 id="docker">Docker</h3><p>Now we've written our program to drive our Blinkt!, why not make future invocations easy for ourselves by building a <a href="https://www.docker.com">Docker</a> image which we can then run as a container?  We can then push the image up to the <a href="https://hub.docker.com/u/rgee0/">Docker hub</a>, making it available to pull down and run on any device where <a href="https://www.docker.com">Docker</a> is available to us.  The really nice part here is that <a href="https://www.docker.com">Docker</a> now supports multistage builds, so with a little thought we can create smaller images containing only what we need at run time - the build stage components can be discarded post-build.  For more on multistage builds see Docker Captain <a href="https://blog.alexellis.io/mutli-stage-docker-builds/">Alex Ellis' excellent article</a> - so informative was this article that Docker use it within their <a href="https://docs.docker.com/engine/userguide/eng-image/multistage-build/">documentation</a>.</p><pre><code class="language-docker">FROM resin/rpi-raspbian AS buildpart

RUN apt-get update &amp;&amp; \
    apt-get install -qy build-essential wiringpi git curl ca-certificates &amp;&amp; \
    curl -sSLO https://storage.googleapis.com/golang/go1.7.5.linux-armv6l.tar.gz &amp;&amp; \
    mkdir -p /usr/local/go &amp;&amp; \
    tar -xvf go1.7.5.linux-armv6l.tar.gz -C /usr/local/go/ --strip-components=1

ENV PATH=$PATH:/usr/local/go/bin/
ENV GOPATH=/go/

RUN mkdir -p /go/src/github.com/rgee0/blinkt_go_examples/app
WORKDIR /go/src/github.com/rgee0/blinkt_go_examples/app

COPY app.go	.
RUN go get -d -v
RUN go build -o ./app

########End of build part########

FROM resin/rpi-raspbian 
RUN apt-get update &amp;&amp;  apt-get install -qy wiringpi --no-install-recommends &amp;&amp; rm -rf /var/lib/apt/lists
COPY --from=buildpart /go/src/github.com/rgee0/blinkt_go_examples/app/app /
CMD ["./app"] 
</code></pre><p>We can then build this using:</p><pre><code>docker build --tag rgee0/rpi_blinkt_go:solid_colours . 
</code></pre><p>The result will be a Docker image of 129MB, which is over 400MB smaller than the 550MB image we would typically have seen prior to multistage builds being supported.  Much smaller then for when we want to push the image up to the <a href="https://hub.docker.com/r/rgee0/rpi_blinkt_go/tags/">Docker hub</a>:</p><pre><code>docker push rgee0/rpi_blinkt_go:solid_colours
</code></pre><p>All that remains is to try running it.  As we've already seen <code>solid_colours</code>, let's run <code>random_blink</code> instead:</p><pre><code>docker run -ti --privileged rgee0/rpi_blinkt_go:random_blink
</code></pre>]]></content:encoded></item><item><title><![CDATA[In a Blinkt!, Golang on a Raspberry Pi]]></title><description><![CDATA[Describing the Blinkt_go library for the Raspberry Pi.  The library is open sourced by Alex Ellis and it drives an 8 LED GPIO device called Blinkt! by Pimoroni ]]></description><link>https://blog.technologee.co.uk/in-a-blinkt-golang-on-a-raspberry-pi/</link><guid isPermaLink="false">5e3b24e61c4adf03c4d82ce0</guid><category><![CDATA[pi]]></category><category><![CDATA[golang]]></category><category><![CDATA[open source]]></category><category><![CDATA[Blinkt]]></category><category><![CDATA[raspberry]]></category><dc:creator><![CDATA[Richard Gee]]></dc:creator><pubDate>Thu, 20 Jul 2017 00:00:00 GMT</pubDate><media:content url="https://blog.technologee.co.uk/content/images/2020/02/Gophercolor.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://blog.technologee.co.uk/content/images/2020/02/Gophercolor.jpg" alt="In a Blinkt!, Golang on a Raspberry Pi"><p>The <a href="https://shop.pimoroni.com/products/blinkt">Blinkt! by Pimoroni</a> is a relatively inexpensive 8 LED display which gently introduces GPIO programming concepts.  Pimoroni provide their <a href="https://github.com/pimoroni/blinkt/tree/master/examples">own library and examples</a> based on Python.  There are various library ports available on <a href="https://github.com/search?utf8=%E2%9C%93&amp;q=blinkt&amp;type=">github</a> and this article will focus on the <a href="https://github.com/alexellis/blinkt_go">Golang library</a> created by <a href="https://www.alexellis.io/">Alex Ellis</a>.</p><p>As Alex's <a href="https://github.com/alexellis/blinkt_go_examples">Blinkt! go examples repo</a> welcomes pull requests, I thought it an ideal opportunity to get hands on with Golang and port some of the Pimoroni Python examples. 
Ultimately this led to the <a href="https://github.com/alexellis/blinkt_go">base library</a> also being enhanced, bringing it close to the API offered by the <a href="https://github.com/pimoroni/blinkt/tree/master/examples">Pimoroni Python Library</a>.</p><h2 id="why">Why?</h2><p>The simple answer is that I wanted to get hands on with Golang and from my perspective porting existing code from one language to another offers a means by which to focus more on the target language syntax and semantics than the program's algorithm; this can often remain similar to the source program.</p><p>It has to be said that Alex was also extremely helpful in reviewing code and providing guidance.<br>He even <a href="https://blog.alexellis.io/rebase-and-squash-pr/">blogged</a> around one of the pull requests to demonstrate squashing and merging.</p><h2 id="how">How?</h2><p>Initially my approach was to use the library as was and start porting the simpler examples.  This to me was logical because using the library to build the examples meant my understanding of the library naturally increased.  By the time the features offered by the library were no longer sufficient to port the next example, my understanding of Golang and the library had increased enough for me to start making changes to the library. <em>I took a bit of a leap here because the <a href="https://github.com/alexellis/blinkt_go">library repo</a> wasn't explicit on whether PRs were welcome or not.</em></p><h2 id="what-changed">What changed?</h2><h4 id="new-function">New function</h4><p>The very first change was to introduce a function to set all the pixels to the same values:</p><pre><code>func (bl Blinkt) SetAll(r int, g int, b int) {
 	for i, _ := range bl.pixels {
 		bl.SetPixel(i, r, g, b)
 	}
 }
</code></pre><p>This had a couple of effects: not only would the public function be directly accessible from the examples, but existing library functions could be refactored to call this rather than employing their own loops to do the same.</p><h4 id="qualifiers">Qualifiers</h4><p>The next change was one that Alex suggested.  There was repetition in the examples around clearing the pixels when the program exited &amp; introducing time delays during execution.  Given the repetition, the obvious next step was to move these into the library and make them public.  I learnt a lot here about aliasing imports, as the package was using dot notation on the single package it imported and we had reused the name of a function (<code>Delay</code>) from a lower-level package, leading to some odd behaviour.  For example:</p><pre><code>import . "github.com/alexellis/rpi"
</code></pre><p>This meant that calls to exposed functions could be made without fully qualifying the package:</p><pre><code>func pulse(pulses int) {
 	DigitalWrite(GpioToPin(DAT), 0)
...
```
However, removing the dot meant that references to functions provided by that package had to be qualified:
``` 
import "github.com/alexellis/rpi"

func pulse(pulses int) {
 	rpi.DigitalWrite(rpi.GpioToPin(DAT), 0)
...
```

The final merge was by far the largest change. The handling of brightness in library had been static after being set during instantiation. This was by far the biggest limitation to completing some of the elaborate examples. 

####Variadics
There had always been an ambition to align more closely with the mode of operation of the Pimoroni library, and that meant changing the supplied brightness value from an `int` to a `float` between 0.0 and 1.0.  It also meant that we needed to mirror the availability of a default brightness value, which presented a challenge as Golang doesn't accommodate optional parameters.  The closest mechanism to achieve a similar effect is through the use of Variadic parameters and testing the argument for length:

```
func NewBlinkt(brightness ...float64) Blinkt {

	brightnessInt := defaultBrightnessInt

	if len(brightness) &gt; 0 {
		brightnessInt = convertBrightnessToInt(brightness[0])
	}
...
```
The brightness parameter here is a slice of floats, so we interrogate the slice to determine a non-zero length and take the first value we find.  It's worth pointing out at this stage that the supplied value is tested for validity within the `convertBrightnessToInt` function.

####Chaining

This again extended my learning as my initial approach to implementation was to refactor the existing functions to allow for the additional brightness value to be passed.  However, discussions in the open with Alex led me towards method chaining.  This was a great idea as it would limit the amount of refactoring existing users would have to undertake whilst still making dynamic brightness available to them.  To achieve this the existing methods needed to be adapted to return a Blinkt object which would act as an input into the subsequently chained method.

The existing methods such as `SetPixel` were changed from:
```
func (bl *Blinkt) SetPixel(p int, r int, g int, b int) {
```
to
```
func (bl *Blinkt) SetPixel(p int, r int, g int, b int) *Blinkt {
```
This means that at the time of setting a pixel colour we can also set the brightness, thus:
```
blinkt.SetAll(128, 0, 0).SetBrightness(0.5)
```
Furthermore, SetBrightness also returns a Blinkt object, so the two methods are interchangable:
```
blinkt.SetBrightness(0.5).SetAll(128, 0, 0).SetBrightness(0.5)
```
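Stripped back to its essentials, the chaining pattern works because each method returns its own receiver.  The following is a minimal, self-contained sketch; the struct field and method bodies are illustrative stand-ins, as the real library drives the GPIO pins rather than printing:

```go
package main

import "fmt"

// Blinkt here is a pared-down stand-in for the real type;
// the brightness field is purely illustrative.
type Blinkt struct {
	brightness float64
}

// SetBrightness returns its receiver so further calls can be chained.
func (bl *Blinkt) SetBrightness(brightness float64) *Blinkt {
	bl.brightness = brightness
	return bl
}

// SetAll would normally update every pixel; here it just shows that
// it, too, returns the receiver to keep the chain going.
func (bl *Blinkt) SetAll(r, g, b int) *Blinkt {
	fmt.Printf("all pixels set to %d,%d,%d at brightness %.1f\n", r, g, b, bl.brightness)
	return bl
}

func main() {
	bl := new(Blinkt)
	// Either ordering works because every method returns *Blinkt.
	bl.SetBrightness(0.5).SetAll(128, 0, 0)
}
```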
Here I am celebrating initial success: 

&lt;blockquote class="twitter-tweet" data-lang="en"&gt;&lt;p lang="en" dir="ltr"&gt;It&amp;#39;s alive! Needs cleaning up but well on my way to fixing brightness &lt;a href="https://twitter.com/alexellisuk"&gt;@alexellisuk&lt;/a&gt; &lt;a href="https://twitter.com/hashtag/blinkt?src=hash"&gt;#blinkt&lt;/a&gt; &lt;a href="https://twitter.com/hashtag/raspberrypi?src=hash"&gt;#raspberrypi&lt;/a&gt; &lt;a href="https://twitter.com/hashtag/golang?src=hash"&gt;#golang&lt;/a&gt; &lt;a href="https://t.co/3MyF63Bz3e"&gt;pic.twitter.com/3MyF63Bz3e&lt;/a&gt;&lt;/p&gt;&amp;mdash; Richard Gee (@rgee0) &lt;a href="https://twitter.com/rgee0/status/838468781550747649"&gt;March 5, 2017&lt;/a&gt;&lt;/blockquote&gt;
&lt;script async src="//platform.twitter.com/widgets.js" charset="utf-8"&gt;&lt;/script&gt;

#### Constants
At this point I also decided to introduce some constants to remove magic numbers and aid the readability of the code:

```
for i, _ := range pixels {
	pixels[i][0] = 0
	pixels[i][1] = 0
	pixels[i][2] = 0
	pixels[i][3] = brightness
}
```
With the addition of constants this became:
```
for p, _ := range pixels {
	pixels[p][redIndex] = 0
	pixels[p][greenIndex] = 0
	pixels[p][blueIndex] = 0
	pixels[p][brightnessIndex] = brightness
}
```
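The constants themselves might be declared along these lines; the names follow the snippet above, and the values assume the per-pixel slice is laid out as red, green, blue, brightness:

```go
package main

import "fmt"

// Index constants for the per-pixel slice.  The values assume a
// [red, green, blue, brightness] layout, matching the snippet above.
const (
	redIndex        = 0
	greenIndex      = 1
	blueIndex       = 2
	brightnessIndex = 3
)

func main() {
	pixel := []int{128, 0, 0, 7}
	fmt.Println(pixel[redIndex], pixel[brightnessIndex]) // 128 7
}
```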

## Closing thoughts
This was my first real venture into the open source community, and I like to consider it a genuine success.  Alex was a really big help with ideas, and thankfully was both understanding &amp; patient with the git/repo aspects.

With the changes described it will be worth examining  a worked example, so look out for the [next part](http://blog.technologee.co.uk/blinkt-golang-port/) where I'll build a Blinkt! program to demonstrate the use of the library.  I'll also employ [Docker](http://www.docker.com) to build and run the program in a container, thus removing the need for a local Golang build environment.</code></pre>]]></content:encoded></item><item><title><![CDATA[Exploring NATS Message Queues with Docker & Go]]></title><description><![CDATA[Exploring NATS using docker & golang. NATS Server is a simple, high performance open source messaging system for cloud native applications, IoT messaging, and microservices architectures. ]]></description><link>https://blog.technologee.co.uk/testing-nats-withdocker/</link><guid isPermaLink="false">5e3b24371c4adf03c4d82cc9</guid><category><![CDATA[Docker]]></category><category><![CDATA[NATS]]></category><category><![CDATA[Messaging]]></category><category><![CDATA[Queue]]></category><dc:creator><![CDATA[Richard Gee]]></dc:creator><pubDate>Wed, 05 Jul 2017 00:00:00 GMT</pubDate><media:content url="https://blog.technologee.co.uk/content/images/2020/02/11186858_616430551827551_1922640560_n-1.jpg" medium="image"/><content:encoded><![CDATA[<h3 id="what-is-nats">What is NATS?</h3><img src="https://blog.technologee.co.uk/content/images/2020/02/11186858_616430551827551_1922640560_n-1.jpg" alt="Exploring NATS Message Queues with Docker & Go"><p>The <a href="http://nats.io/">NATS website</a> takes care of providing a simple description:</p><p>NATS Server is a simple, high performance open source messaging system for cloud native applications, IoT messaging, and microservices architectures.</p><p>The core of the product is written in golang with official and community clients available across a gamut of development languages.  As you might expect, the official site provides a number of resources to help introduce users new to the concept and product.  
They also helpfully provide an <a href="https://store.docker.com/images/nats?tab=description">official Docker image</a> which expedites the setup, allowing us more time in the evaluation process.  Not only does it negate many of the prerequisites but it also means we can have a NATS server up and running through a single command:</p><p><code>$ docker run -d -p 4222:4222 -p 6222:6222 -p 8222:8222 --name nats-main nats</code></p><p>This will pull the image from the hub if you don't already have it locally and start a single server with the exposed ports mapped to the equivalent host ports.</p><h3 id="build-a-client">Build a client</h3><p>There are a number of client examples in the <a href="https://github.com/nats-io">NATS GitHub repositories</a>.  I chose Go since I have a ready-made development environment, which means we can get going with a single command (Thanks to <a>Alex Ellis</a> for the steer).</p><p><code>go get github.com/nats-io/go-nats/</code></p><p>This will pull the go-nats repo, along with any dependencies, into your go src location; in my case:</p><p><code>$GOPATH/src/github.com/nats-io/go-nats</code></p><p>Now with everything in place head over to the examples directory:</p><p><code>cd $GOPATH/src/github.com/nats-io/go-nats/examples</code></p><p>Subscribing is easy:</p><p><code>go run nats-sub.go &lt;subject&gt;</code></p><p>And to publish:</p><p><code>go run nats-pub.go &lt;subject&gt; &lt;message&gt;</code></p><p>In this, the simplest mode, the order of events is important, as we will see shortly.</p><h3 id="nats-publish-subscribe">NATS Publish Subscribe</h3><p>This is probably the simplest mode, described as a fire and forget mechanism.  It's analogous to an aural message in that subscribers who aren't listening when the message is broadcast won't receive the message.  
However, everyone who is listening will get the same message - good for notifications, not so good for despatching distinct units of work.</p><figure class="kg-card kg-image-card"><img src="https://blog.technologee.co.uk/content/images/2017/07/subscribers-2.gif" class="kg-image" alt="Exploring NATS Message Queues with Docker & Go"></figure><p>Here we can see the publisher in the red window publishing messages on the subject of 'rgee0' and subsequently the messages being picked up by the three subscribers in the three light windows.</p><h3 id="nats-queuing">NATS Queuing</h3><p>Sometimes it may be desirable to publish a message and have only one subscriber receive the message.  This is where queue groups can be used.  In queue groups a message is still published with a particular <em>subject</em> but the message is removed once one subscriber belonging to a <em>queue group</em> has received the message.  The previous principle of subscribers needing to be listening is still extant, however.<br></p><figure class="kg-card kg-image-card"><img src="https://blog.technologee.co.uk/content/images/2017/07/queue-1.gif" class="kg-image" alt="Exploring NATS Message Queues with Docker & Go"></figure><p><br>Here we can see that the format of the published message is the same as the previous example but the subscribers have passed a second argument, a queue identifier, into <code>nats-qsub.go</code>.  Now when a message is published only one of the three queue subscribers receives that message.</p><h3 id="further-concepts">Further concepts</h3><p>We have only begun to scratch the surface of NATS here.  Clearly, where units of work are being buffered, delivery guarantees and a degree of persistence are required; this could be achieved through NATS Streaming. Similarly, a degree of server-side fault tolerance and resilience might be desirable, and clustering is available in this regard (also possible via Docker).  
These are more detailed concepts which may be worthy of dedicated future articles.</p>]]></content:encoded></item><item><title><![CDATA[Remote Editing using VSCode]]></title><description><![CDATA[Blog post explaining how Mac users can configure their VS Code editor and Raspberry Pis so that the editor can directly edit remote files.]]></description><link>https://blog.technologee.co.uk/remote-editing-using-vs-code/</link><guid isPermaLink="false">5e3b232f1c4adf03c4d82cb2</guid><category><![CDATA[pi]]></category><category><![CDATA[raspberry]]></category><category><![CDATA[VSCode]]></category><dc:creator><![CDATA[Richard Gee]]></dc:creator><pubDate>Sun, 25 Jun 2017 00:00:00 GMT</pubDate><media:content url="https://blog.technologee.co.uk/content/images/2020/02/11186858_616430551827551_1922640560_n.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://blog.technologee.co.uk/content/images/2020/02/11186858_616430551827551_1922640560_n.jpg" alt="Remote Editing using VSCode"><p>Working, as I frequently do, on small projects on headless Raspberry Pis it can sometimes be a challenge to establish an efficient flow while developing. On one hand you could work locally in your favoured editor and benefit from the increased efficiency this provides, but this is soon offset with the transferral of code every time you wish to test it.  Alternatively you could develop remotely through a terminal connection using vi, vim, nano, etc and gradually come to terms with their idiosyncrasies.</p><p>Until recently I was managing to do both; I was working on Windows and would use <a href="https://winscp.net">WinSCP</a> to connect to a Pi and then use the built in menu option within <a href="https://winscp.net">WinSCP</a> to 'Edit with' to open the remote file locally in <a href="https://code.visualstudio.com/">VS Code</a>.  
I could then make my changes and save; <a href="https://winscp.net">WinSCP</a> would write the changes to the remote file, and I could then build and run the resulting code via an ssh connection.</p><p>I'm now working on a Mac and my initial approach was to run Samba on my Pis and mount the Pis as local drives.  This works but I've found it to be less reliable than my previous method, and so I started looking into other means.</p><h2 id="vs-code-has-a-remote-extension">VS Code has a remote extension</h2><p>One of the strengths of <a href="https://code.visualstudio.com/">VS Code</a> is its extensibility and the community support thereof, so it's perhaps of little surprise that the described problem has already been solved.  There is an extension in the marketplace called <a href="https://marketplace.visualstudio.com/items?itemName=rafaelmaiolla.remote-vscode">Remote VSCode</a>.</p><p>Installation is simple: either search for it through the extensions panel or open Go -&gt; Go to File from the menu (<strong>⌘P</strong>) and type:</p><p><code>ext install remote-vscode</code></p><p>Once installed, the server will need starting - <strong>⇧⌘P</strong> and then look for:</p><p><code>Remote: Start Server</code></p><h2 id="configure-the-pi">Configure the Pi</h2><p>OK, so we have the extension enabled locally, but how does the Pi know about it?  Well, we need to do some configuration to enable this. When connecting to the Pi we can also take the opportunity to set up an ssh tunnel for the Pi to redirect the remote commands back into the server running within <a href="https://code.visualstudio.com/">VS Code</a>:</p><p><code>ssh -R 52698:127.0.0.1:52698 pi@&lt;pi-ip-address&gt;</code></p><p>Once connected we need to install one of the rmate options from the extension's homepage.  
I chose the python version after the bash version failed to install:</p><p><code>sudo pip install rmate</code></p><p>Then with the server running locally within <a href="https://code.visualstudio.com/">VS Code</a> we can call <code>rmate &lt;file&gt;</code> on the Pi and the file will open within VS Code.</p><figure class="kg-card kg-image-card"><img src="https://blog.technologee.co.uk/content/images/2017/06/blog_gif-1.gif" class="kg-image" alt="Remote Editing using VSCode"></figure><h2 id="further-refinement">Further Refinement</h2><p>Adding the ssh tunnel element to the command when initiating the connection may be cumbersome and become difficult to remember. Fortunately, through the use of a <code>~/.ssh/config</code> file we can ensure that the tunnel is configured every time we create a connection to our Pi.</p><pre><code>Host pi3
    HostName &lt;pi-ip-address&gt;
    User pi
    ForwardAgent yes
    RemoteForward 52698 127.0.0.1:52698
</code></pre><p>Now, once we have started the server within <a href="https://code.visualstudio.com/">VS Code</a>, all we need to run is <code>ssh pi3</code> and the resulting connection will have our ssh tunnel automatically configured.</p><h3 id="limitations">Limitations</h3><p>Currently rmate supports only files, it doesn't yet support directory structures; so this is an approach only really for smaller projects.  If you're hacking about with a <a href="https://shop.pimoroni.com/products/blinkt">Blinkt!</a>, or <a href="https://shop.pimoroni.com/products/mote">Mote</a> then why not give it a go?  You can even run <code>rmate</code> with <code>sudo</code> allowing you to work on restricted files like <code>wpa_supplicant.conf</code>.</p>]]></content:encoded></item><item><title><![CDATA[Raspberry Pi - Updating WiFi settings]]></title><description><![CDATA[A quick memory jogger of a one liner used to get the pizero stack back on the network after an SSID change.]]></description><link>https://blog.technologee.co.uk/raspberry-pi-updating-wifi-settings/</link><guid isPermaLink="false">5e3b21781c4adf03c4d82c93</guid><category><![CDATA[pi]]></category><category><![CDATA[wifi]]></category><category><![CDATA[one-liners]]></category><dc:creator><![CDATA[Richard Gee]]></dc:creator><pubDate>Sun, 23 Apr 2017 00:00:00 GMT</pubDate><content:encoded><![CDATA[<p>Having recently had my broadband router updated I found myself in the position where all the Pis which need to access WiFi required updating to enable them to access the new network.</p><figure class="kg-card kg-image-card"><img src="https://blog.technologee.co.uk/content/images/2017/04/image1.JPG" class="kg-image"></figure><p>Although it lives close by, on my desk, the PiZero stack runs headless and to update the settings I wanted something quicker than booting each one to a screen and editing the wpa_supplicant.conf locally.</p><figure class="kg-card kg-image-card"><img 
src="https://blog.technologee.co.uk/content/images/2017/04/image1--1--1.JPG" class="kg-image"></figure><p>Fortunately, I'm running Raspbian on all my nodes and my <code>wpa_supplicant.conf</code> files are simple, single-network configs. The WiFi settings on a nearby Pi 3 had been updated earlier in the week, this meant that the Pi 3 could be used to update each of the PiZero SD cards with a simple one-liner via a card reader connected to the Pi 3:</p><pre><code class="language-bash">mkdir -p /tmp/usb &amp;&amp; sudo mount /dev/sda2 /tmp/usb &amp;&amp; sudo cp /tmp/usb/etc/wpa_supplicant/wpa_supplicant.conf /tmp/usb/etc/wpa_supplicant/wpa_supplicant.conf.$(date +%Y%m%d%H%M%S) &amp;&amp; sudo cp /etc/wpa_supplicant/wpa_supplicant.conf /tmp/usb/etc/wpa_supplicant/wpa_supplicant.conf &amp;&amp; sudo umount /tmp/usb
</code></pre><p>This copies the Pi 3 <code>wpa_supplicant.conf</code> onto the SD card extracted from the PiZero, replacing the previous <code>wpa_supplicant.conf</code>.  This may not be desirable in situations where either the receiving machine or the donor machine has a variety of machine-specific networks configured.</p><p>Should we need to extract some of the earlier settings, or even revert to the previous version, a date-stamped backup of the previous <code>wpa_supplicant.conf</code> is created prior to copying the newer file over.</p>]]></content:encoded></item></channel></rss>