

Going Go Programming

William Kennedy

Why Go Programming

For the past 20 years I have been writing server-based and application software on the Microsoft stack: first in C/C++ leveraging the Win32 API, and then in C# when .NET was first released. Over the past few months I have realized that trying to build scalable software on the Microsoft stack is becoming impossible. Why? Technology and cost!

Let's start with the licensing. Luckily I was accepted into the BizSpark program. If I didn't have that, I would be looking at thousands of dollars just to get access to the tooling I need. Real-world applications require a database. If you want to use SQL Server Standard, the licensing model is not only convoluted but outrageously expensive once you figure it all out. If I am building something for a client and they need more than SQL Express, I am doing them an injustice. Even using SQL Server in the cloud is no picnic on pricing. Then you have CPU and memory requirements. I am sorry, but trying to run SQL Server and IIS on anything less than 4 GB of memory is impossible. Lastly, take this cost for one machine and try to build a scalable architecture around it, and your costs are through the roof.

I don't think it is an accident that Facebook, Twitter and the like are running on Linux-based technologies. The Linux-based technologies are cheaper, faster and require much less metal. You can build larger systems and architectures for a fraction of the cost. These are proven technologies that are driving the biggest websites today.

I have already moved away from SQL Server to MongoDB. MongoDB has been a real win in terms of development, devops and cost. It has been around since 2009, has a huge community of people supporting it, and there are drivers for just about every programming language. In fact, I have been able to do more with MongoDB than I could with SQL Server. Oh yeah, no cost!

For web development I am now using Ruby on Rails. This was an easy transition from .NET MVC, and I use RubyMine as my IDE. RubyMine is a great tool and it only costs $100. If you are building just web services, then Padrino is a great way to go.

Now the rub: building application servers on the Linux stack. I didn't want to use C/C++; I was done with that. Someone suggested Ruby, but I have never felt it was a great language for server development. I looked at Erlang, but that wasn't right. So I continued to use my C#-based Windows service I call the TaskServer.

My TaskServer is really cool. It is a Windows service framework that can load and unload DLL-based packages at runtime. Each plugin is loaded into its own application domain. I added facilities to abstract threading, internal and external plugin communication, and trace logging, plus anything else I could do to make plugin development fast and easy. The TaskServer has served me well, but it runs on the Microsoft stack. For my Ruby web apps to talk with it I use RabbitMQ. Eventually I was going to build a gem that implemented the socket protocol so I could talk to the server directly.


Then someone told me to look at Go. After reading and watching a few videos I was blown away. This is what I have been searching for: a programming language tailored to server development that would run on the Linux stack. I have been able to take all of my C/C++/C# experience and quickly learn the language. Obviously the goroutine/channel construct is quite different, but once you get your head wrapped around it you realize how much easier it makes concurrent programming.

As with any new language, my first step is to port my utility classes, and I have already begun that work. I have finished my ThreadPool package and now I am working on my TraceLog package. Once that is done, I am moving to MongoDB, and then I will begin to build a server that will manage large amounts of data imports for a project I am working on. Details soon to come.

This is the beginning of my journey in Go programming, and I am never looking back at the Microsoft stack again. Sorry VMware, but I hope I never need to upgrade you again.

Thread Pooling in Go Programming

In my world of server development, thread pooling has been the key to building robust code on the Microsoft stack. Microsoft has failed in .NET by giving each process a single thread pool with thousands of threads and thinking it could manage the concurrency at runtime. Early on I realized this was never going to work. At least not for the servers I was developing.

When I was building servers in C/C++ using the Win32 API, I created a class that abstracted IOCP to give me thread pools I could post work into. This has always worked very well because I could define the number of threads in the pool and the concurrency level (the number of threads allowed to be active at any given time).  I ported this code for all of my C# development. If you want to learn more about this, I wrote an article years ago (http://www.theukwebdesigncompany.com/articles/iocp-thread-pooling.php). Using IOCP gave me the performance and flexibility I needed. BTW, the .NET thread pool uses IOCP underneath.

The idea of the thread pool is fairly simple. Work comes into the server and needs to get processed. Most of this work is asynchronous in nature, but it doesn't have to be. Many times the work is coming off a socket or from an internal routine. The thread pool queues up the work, and then a thread from the pool is assigned to perform the work. The work is processed in the order it was received. The pool provides a great pattern for performing work efficiently. Spawning a new thread every time work needs to be processed can put heavy loads on the operating system and cause major performance problems.

So how is the thread pool performance tuned? You need to identify the number of threads each pool should contain to get the work done the quickest. When all the routines are busy processing work, new work stays queued. You want this because at some point having more routines processing work slows things down. This can be for a myriad of reasons, from the number of cores you have in your machine to the ability of your database to handle requests. During testing you can find that happy number.

I always start by looking at how many cores I have and the type of work being processed. Does this work get blocked, and for how long on average? On the Microsoft stack I found that three active threads per core seemed to yield the best performance for most tasks. I have no idea yet what the numbers will be in Go.

You can also create different thread pools for the different types of work the server will need to process. Because each thread pool can be configured, you can spend time performance tuning the server for maximum throughput. Having this type of command and control to maximize performance is crucial.

In Go we don't create threads but goroutines. Goroutines function like multithreaded functions, but Go manages the actual use of OS-level threads underneath. To learn more about concurrency in Go, check out this document: http://golang.org/doc/effective_go.html#concurrency.

The packages I have created are called workpool and jobpool. These use the channel and goroutine constructs to implement pooling.

Workpool

This package creates a pool of goroutines that are dedicated to processing work posted into the pool. A single goroutine is used to queue the work. The queue routine provides safe queuing of work, keeps track of the amount of work in the queue, and reports an error if the queue is full.

Posting work into the queue is a blocking call. This is so the caller can verify that the work is queued. Counts for the number of active worker routines are maintained.

Here is some sample code on how to use the workpool:

package main

import (
    "bufio"
    "fmt"
    "os"
    "runtime"
    "strconv"
    "time"

    "github.com/goinggo/workpool"
)

type MyWork struct {
    Name      string "The Name of a person"
    BirthYear int    "The Year the person was born"
    TP        *workpool.WorkPool
}

func (myWork MyWork) DoWork() {
    fmt.Printf("%s : %d\n", myWork.Name, myWork.BirthYear)
    fmt.Printf("Q:%d R:%d\n", myWork.TP.QueuedWork(), myWork.TP.ActiveRoutines())

    // Simulate some delay
    time.Sleep(100 * time.Millisecond)
}

func main() {
    runtime.GOMAXPROCS(runtime.NumCPU())

    workPool := workpool.New(runtime.NumCPU(), 800)

    shutdown := false // Just for testing, I know

    go func() {
        for i := 0; i < 1000; i++ {
            work := new(MyWork)
            work.Name = "A" + strconv.Itoa(i)
            work.BirthYear = i
            work.TP = workPool

            err := workPool.PostWork(work)
            if err != nil {
                fmt.Printf("ERROR: %s\n", err)
                time.Sleep(100 * time.Millisecond)
            }

            if shutdown == true {
                return
            }
        }
    }()

    fmt.Println("Hit any key to exit")
    reader := bufio.NewReader(os.Stdin)
    reader.ReadString('\n')

    shutdown = true

    fmt.Println("Shutting Down")
    workPool.Shutdown()
}

If we look at main, we create a thread pool where the number of routines to use is based on the number of cores we have on the machine. This means we have a routine for each core. You can't do any more work if each core is busy. Again, performance testing will determine what this number should be. The second parameter is the size of the queue. In this case I have made the queue large enough to handle all the requests coming in.

The MyWork type defines the state I need to perform the work. The member function DoWork is required because it implements an interface required by the PostWork call. To pass any work into the thread pool, this method must be implemented by the type.

The DoWork method is doing two things. First, it is displaying the state of the object. Second, it is reporting the number of items in the queue and the number of active goroutines. These numbers can be used to determine the health of the thread pool and for performance testing.

Finally, I have a goroutine posting work into the work pool inside of a loop. At the same time, the work pool is executing DoWork for each object queued. Eventually the goroutine is done, and the work pool keeps on doing its job. If we hit enter at any time, the program shuts down gracefully.

The PostWork method can return an error in this sample program because PostWork guarantees work is placed in the queue or it fails. The only reason for it to fail is if the queue is full. Setting the queue length is an important consideration.

Jobpool

The jobpool package is similar to the workpool package except for one implementation detail. This package maintains two queues: one for normal processing and one for priority processing. Pending jobs in the priority queue always get processed before pending jobs in the normal queue.

The use of two queues makes jobpool a bit more complex than workpool.  If you don't need priority processing, then using a workpool is going to be faster and more efficient.

Here is some sample code on how to use the jobpool:

package main

import (
    "fmt"
    "time"

    "github.com/goinggo/jobpool"
)

type WorkProvider1 struct {
    Name string
}

func (this *WorkProvider1) RunJob() {
    fmt.Printf("Perform Job : Provider 1 : Started: %s\n", this.Name)
    time.Sleep(2 * time.Second)
    fmt.Printf("Perform Job : Provider 1 : DONE: %s\n", this.Name)
}

type WorkProvider2 struct {
    Name string
}

func (this *WorkProvider2) RunJob() {
    fmt.Printf("Perform Job : Provider 2 : Started: %s\n", this.Name)
    time.Sleep(5 * time.Second)
    fmt.Printf("Perform Job : Provider 2 : DONE: %s\n", this.Name)
}

func main() {
    jobPool := jobpool.New(2, 1000)

    jobPool.QueueJob(&WorkProvider1{"Normal Priority : 1"}, false)

    fmt.Printf("*******> QW: %d AR: %d\n",
        jobPool.QueuedJobs(),
        jobPool.ActiveRoutines())

    time.Sleep(1 * time.Second)

    jobPool.QueueJob(&WorkProvider1{"Normal Priority : 2"}, false)
    jobPool.QueueJob(&WorkProvider1{"Normal Priority : 3"}, false)

    jobPool.QueueJob(&WorkProvider2{"High Priority : 4"}, true)
    fmt.Printf("*******> QW: %d AR: %d\n",
        jobPool.QueuedJobs(),
        jobPool.ActiveRoutines())

    time.Sleep(15 * time.Second)

    jobPool.Shutdown()
}

In this sample code we create two worker type structs. It's best to think of each worker as some independent job in the system.

In main we create a job pool with 2 job routines and support for 1000 pending jobs. First we create 3 different WorkProvider1 objects and post them into the queue, setting the priority flag to false. Next we create a WorkProvider2 object and post that into the queue, setting the priority flag to true.

The first two jobs that are queued will be processed first since the job pool has 2 routines. As soon as one of those jobs is completed, the next job is retrieved from the queue. The WorkProvider2 job will be processed next because it was placed in the priority queue.

To get a copy of the workpool and jobpool packages, go to github.com/goinggo


As always I hope this code can help you in some small way.

Installing Go, Gocode, GDB and LiteIDE

I have been working in Windows for 20 years and know the internals of that operating system very well. I am very new to using my Mac, and it is always a challenge for me when I need to install software or make configuration changes. I am getting better. It took me about 6 hours over two days to get my Go environment working on my Mac the first time. Here are the steps that will help with installing Go on the Mac.

Step 1: Download Go

Open your favorite browser and go to the following website:

https://code.google.com/p/go/downloads/list

This will show you all the latest builds for the different operating systems. Darwin is the name for the Mac OS. Download the package for your OS version and architecture.

Once downloaded go to your Downloads folder in Finder and double click on the pkg file to start the installation. The installation will put Go in /usr/local/go. Once the installation is complete you will want to check the installation.

Note: Moving forward we will be using both Terminal and Finder. It helps to be able to see all the files in Finder.  Finder by default will not show you everything that is on your hard drive.

To show all files in Finder:

Open a Terminal session. If you don't know where the Terminal program is go to your Applications folder in Finder and then Utilities. Click on Terminal and a bash shell command window will open.

Execute the following commands in Terminal.

defaults write com.apple.finder AppleShowAllFiles TRUE
killall Finder

The killall Finder command will reload the Finder program, and now you can see all the files.

Step 2:  Check Installation


Open a Terminal session and type the following command

go version

You should see the following if everything installed correctly

go version go1.2.1 darwin/amd64

Now type the which command to verify the installation is in /usr/local/go

which go

You should see that Go can be found in /usr/local/go/bin

/usr/local/go/bin/go

Now Go is installed, but we are not ready to start programming just yet. In order to get intellisense when we are using LiteIDE, we need Gocode.

Step 3:  Set Your GOPATH

You need a single place to store and work on your Go projects. Create a folder called Projects from inside your home folder:

cd $HOME

mkdir Projects

Now set this as your GOPATH. Open the .bash_profile file from the $HOME folder and add the following items to the end.

nano .bash_profile

export GOPATH="$HOME/Projects"
export PATH=$PATH:$GOPATH/bin

Then exit the Terminal App and open a new Terminal App. Check that the Go environment now has your new GOPATH.

go env

You should see all the Go related environment variables including GOPATH. Here are some of them:

GOARCH="amd64"
GOCHAR="6"
GOHOSTARCH="amd64"
GOHOSTOS="darwin"
GOOS="darwin"
GOPATH="/Users/you/Projects"
GOROOT="/usr/local/go"
GOTOOLDIR="/usr/local/go/pkg/tool/darwin_amd64"

Step 4:  Get, Build and Install Gocode

Gocode is a program that provides intellisense for the language. Many editors support using Gocode including LiteIDE. This is going to help with development immeasurably.

We need to download Gocode, build the binary and then install the binary in a place where it can be easily found by LiteIDE and other editors.

Open a Terminal session or use the current one you have open and type the following command. This assumes you setup your GOPATH as instructed in Step 3.

go get github.com/nsf/gocode

Once this is done, the source code for Gocode can be found in the folder $HOME/Projects/src/github.com/nsf/gocode, and the gocode program will be built and placed in $HOME/Projects/bin.

To check that everything is working and your Go environment is properly set up, run the which command:

which gocode

You should see:

/Users/you/Projects/bin/gocode

Step 5:  Install GDB

Installing the new version of GDB is going to be a unique experience for most. The other problem is that you most likely have a version already installed on your machine.

Run this command in Terminal

gdb --version

If you are like me the following information will be provided

GNU gdb 6.3.50-20050815 (Apple version gdb-1705)

Now run this command in Terminal

which gdb

This version of gdb is installed under

/usr/bin/gdb


So why is this important? When we build and install the new version of GDB, it will be installed under /usr/local/bin. Remember, Go was installed under /usr/local/go. I don't understand why some programs are installed under /usr/bin or /usr/local/bin, or even why you can find different versions of the same binary under both. When we are done, this is exactly what we will have.

Open your browser again and go to the following url:

http://www.gnu.org/software/gdb/download/

On that page, click the link http://ftp.gnu.org/gnu/gdb and you will see a list of files you can download.

Download gdb-7.7.tar.gz from your browser and then find the file in Finder inside the Downloads folder. Double click on the file in Finder and Finder will unzip the file to a new folder called gdb-7.7.

Go back to your Terminal session and navigate to the gdb-7.7 folder in Downloads

cd ~/downloads/gdb-7.7

You need to run two commands from Terminal that will build the source code.

./configure
make

If the make command does not work, install Xcode. I had Xcode installed on my machine prior to running these commands. Xcode installs the compilers that are used to build code.

Also, I found with version 7.7 the make failed because of some errors. If this is happening, remove the unzipped folder, unzip the tar.gz file again, and use this parameter with the configure call.

./configure --disable-werror

Once the make command is done we need to install this version of GDB to /usr/local/bin.

In Terminal run the following command

sudo make install

Once this command is finished, the new version of GDB will almost be ready for use. Remember, we now have two versions of GDB installed on the machine. From inside of LiteIDE this will not be a problem, but from Terminal it is.

Run the following command in Terminal again

gdb --version

You still get the old version

GNU gdb 6.3.50-20050815 (Apple version gdb-1705)

Now run this command in Terminal

which gdb

The old version of gdb is still being used

/usr/bin/gdb

So why is this happening? Because your PATH has /usr/bin before /usr/local/bin. We can fix this for our current Terminal session by running the following Terminal command.

export PATH=/usr/local/bin:$PATH

This command will update the PATH and put /usr/local/bin at the front.  Now run the GDB version command again.

gdb --version

Now you get the new version

GNU gdb (GDB) 7.7

Now run this command in Terminal

which gdb

This version of gdb is being used

/usr/local/bin/gdb


Unfortunately this is not permanent.  If you open a new Terminal window the PATH variable will go back to the original setting. So how do we make /usr/local/bin always come before /usr/bin in the PATH every time we open a Terminal session?

You will need to modify the paths file under the /etc folder.

cd /etc
sudo nano paths

My original version of the paths file had the following entries:

/usr/bin
/bin
/usr/sbin
/sbin
/usr/local/bin

Just move /usr/local/bin to the top and save the file:

/usr/local/bin
/usr/bin
/bin
/usr/sbin
/sbin

To see if the change worked, open a new Terminal session and echo the PATH:

echo $PATH

To get more information about how to use GDB, check out this website. You can run these GDB commands from inside of LiteIDE, so this web page is helpful.

http://golang.org/doc/gdb

Now that GDB is installed we need to do one more thing, codesign the binary so it can be used to debug the programs we write.

NOTE: Please review the Possible GDB Errors section at the end of the article.

Step 6: Codesign GDB

If we don't codesign the GDB executable, LiteIDE will start in debug mode but debugging won't work. The steps I have provided come from the following websites.

http://sourceware.org/gdb/wiki/BuildingOnDarwin

http://iosdevelopertips.com/mac-osx/code-signing-error-object-file-format-unrecognized-invalid-or-unsuitable.html


NOTE: Make sure you have the latest version of Xcode installed before you continue

6a. Creating a Certificate

Start the Keychain Access application: /Applications/Utilities/Keychain Access.app

Open the menu option Keychain Access/Certificate Assistant/Create a Certificate...

In the Create a Certificate dialog box use the following settings

Name: gdb-cert
Identity Type: Self Signed Root
Certificate Type: Code Signing
Let Me Override Default: Checked

Click Continue several times until you get to the Specify a Location For The Certificate screen

Keychain: System

If you can't store the certificate in the System keychain, create it in the login keychain and then export it. You can then import it into the System keychain.

Then find your new certificate in the list, right click on it, and select Get Info. Expand the Trust item and find the Code Signing drop down. Change the setting for Code Signing to Always Trust. You must quit the Keychain Access application in order to use the certificate, so close the program.

6b. Codesigning GDB with the new certificate

Before you run the codesign command in Terminal, you need to add an export to your Terminal session or you will get the following error:

object file format unrecognized, invalid, or unsuitable

Open or reuse your existing Terminal session and run the following export command

export CODESIGN_ALLOCATE="/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/codesign_allocate"

Now change the directory in your Terminal session to where the GDB binary is located and codesign the binary.

cd /usr/local/bin
codesign -s gdb-cert gdb

There should be no errors, and a new Terminal command prompt should just appear. Now everything is ready to go. We have Go, Gocode and GDB installed. Next we need to install LiteIDE.

Step 7: Install LiteIDE

LiteIDE is an excellent IDE and I love working in it. Installing and getting it going requires just a few steps.

Open your browser and navigate to http://sourceforge.net/projects/liteide/files/X21.1

Choose the Mac version (liteidex21.1.macosx.zip). Download the file, then open Finder and navigate to the Downloads folder.

Unzip the file and copy LiteIDE.app into the Applications folder.

Open your Applications folder and double click on LiteIDE to start it.

Step 8: Test LiteIDE

Let's create a new program and test our installation

Switch to LiteIDE and find the View menu option.  Select the Manage GOPATH option at the bottom of the menu. The Manage GOPATH dialog box will appear.


You need to add the Projects folder you created in step 3 as a custom directory for your GOPATH. This is where Go will look for custom packages and the code you are writing. The Go libraries will always be available. To learn more about the GOPATH go to this web page: http://golang.org/doc/code.html

Choose File/New and perform the following tasks.

1. Double click on your custom GOPATH. It will be cut off in the dialog box but it should show above the system folder /usr/local/go. This should change the Location field to the GOPATH.

2. Select Go1 Command Project.

3. Name your new program test_program.

4. Click OK and agree to open main.go.

Before you build and test the program, make sure you are using the correct build environment (darwin64-local).


Test if you can build and run the program. Find the blue "BR" button and select it. If LiteIDE is configured correctly, the program should build and run.

You should see the Hello World! print in the Build Output window.

Next test if the debugger is working. Select line 9 of the program and click on the orange button to add a breakpoint.

Next you need to change the Build configuration a bit. Add these build arguments; they help support GDB. The debugging is not perfect, but this makes it better. The debugger will work without them, but my experience is that the debugger can pick up more information.

-gcflags "-N -l"

(Capital N and Lowercase L)

To learn more about debugging in Go click on this link:

http://golang.org/doc/gdb


Now from the Debug menu select Start Debugging.

If the debugger is working, a green arrow will stop at the line with the breakpoint. You may be asked to enter your password before debugging begins.

If this is happening, you are good to go.

If things stop working, always double check that you have the right environment selected (darwin64-local). I can't tell you how much time I have lost because the environment changed without me realizing it.

Possible Build Errors

"can't open output file for writing: a.out, errno=13 for architecture x86_64"

Solution By: Alejandro Gaviria
Version: Xcode 4.2, Mac OS X (10.7.1), Mac OS X (10.7.2)

This is due to the gcc compiler that comes with XCode ~4.1.

Solution comes from this Apple discussion thread:
https://discussions.apple.com/thread/3406578

Possible GDB Errors

"Unable to find Mach task port for process-id 12383: (os/kern) protection failure (0x2).\n (please check gdb is codesigned - see taskgated(8)"


Solution By: Karl Tuhkanen
Version: Mac OS X (10.8)

1. sudo chgrp procmod /usr/local/bin/gdb
2. sudo chmod g+s /usr/local/bin/gdb
3. Add the legacy switch -p to taskgated by modifying the file /System/Library/LaunchDaemons/com.apple.taskgated.plist
4. Force kill the taskgated process (it will restart) or reboot if necessary

Solution comes from this Stack Overflow thread:
http://stackoverflow.com/questions/12050257/gdb-fails-on-mountain-lion

ALERT! Be sure to use standard .plist markup when making the file modification. Otherwise OS X won't start the next time you reboot. This happened to me. The solution is to reboot with the recovery option (Cmd-R) and modify the file with vi to match the standard.

Version 7.6.2 still has a bug that prevents it from loading the Go runtime integration!

Solution By: Chris McGee
Version: Mac OS X

You can see the problem when you fire up GDB and it does not print the special "Loading Go Runtime Support" message. There is a patch for this:

http://sourceware-org.1504.n7.nabble.com/Path-Add-support-for-mach-o-reader-to-be-aware-of-debug-gdb-scripts-td238372.html

GDB Freezes And Consumes CPU When Using "Info Locals"

Solution By: Chris McGee
Version: Mac OS X

GDB freezes, and only Ctrl-C seems to bring it back. When using an application that uses gdb/MI for a GUI interface, gdb is totally unresponsive.

https://code.google.com/p/go/issues/detail?id=6598

Conclusion

I hope these instructions get you up and running with Go quickly on your Mac. Now that you can write code using LiteIDE I suggest building some test programs and learning how all the different windows in the IDE work. There are a lot of great goodies in there.

Check out this web page: http://go-lang.cat-v.org/books

It is the Go language resources page and contains a lot of great links to information. The book Programming in Go by Mark Summerfield is outstanding, and I highly recommend buying it.

Here are other links to web pages that will be very helpful:


http://golang.org/
https://plus.google.com/+golang/posts
http://blog.golang.org/
http://golang.org/doc/gdb
http://www.youtube.com/user/gocoding

You must watch these videos on Go Concurrency Patterns

http://www.youtube.com/watch?v=QDDwwePbDtw
http://www.youtube.com/watch?v=f6kdp27TYZs

Documenting Go Code With Godoc

As you know if you read my blog, I have been building a set of new utility packages so I can start developing an application server I need for a new project. I am brand new to Go and the Mac OS. Needless to say it has been one hell of an education over the past month. But I don't miss Windows or C# at all.

I made some progress in my coding and wanted to build documentation for the code. I have been using the documentation viewer in LiteIDE, and I was hoping to integrate my documentation in there as well. I was really surprised to see that LiteIDE already had my packages listed inside of its integrated Godoc viewer. So it begged the question: how is that working?

After some digging around I found this local HTML file. If you have LiteIDE installed you can copy the following url into your browser.

file:///Applications/LiteIDE.app/Contents/Resources/golangdoc/about.html

This is what it shows:

Overview

The integrated Godoc viewer in LiteIDE provides an easy way to browse documentation generated by the godoc tool without leaving the editor. Documentation can be viewed for both the official Go language as well as custom packages. The remainder of this page describes ways to invoke the Godoc viewer.

Supported URL Schemes

It is possible to view documentation by directly entering a URL into the Godoc viewer's address bar. When doing this, you can specify what type of documentation you are looking for by prefixing the address with one of the following URL schemes:

find

Searches for packages with a specified string in their name. For example:

find:zip
find:godoc


list

Lists all packages in a given directory. The main choices are "pkg" and "cmd", which can be found as links in the header of the page. For example:

list:pkg - displays the Golang packages
list:cmd - displays the Golang commands

pdoc

Views documentation for a specified package or command. For example:

pdoc:fmt
pdoc:archive/zip
pdoc:gofmt
pdoc:f:/hg/zmq/gozmq

file

Views a specified HTML, Markdown, or plain-text file. For example: file:c:/go/doc/docs.html

Automatic Schemes

For the "file" and "pdoc" schemes, you do not need to type the scheme as part of the URL. For example:

/doc/code.html
/src/pkg
/src/cmd
/pkg/fmt
/cmd/cgo
archive/zip
go

File Browser

You can open the Godoc viewer directly from the file browser by right clicking on a file or directory and selecting "View Godoc Here". The Godoc viewer will automatically open the package documentation for the chosen item.

When I clicked on my package ArdanStudios/threadpool from within the LiteIDE Godoc search tool it used the pdoc URL scheme, pdoc:ArdanStudios/threadpool.

I quickly reasoned that LiteIDE was using the GOROOT and GOPATH variables to find the documentation. There is only one problem: I haven't created any documentation yet.

So I looked around in both /usr/local/go and my own space to find the documentation files and there was nothing. So how the heck was this documentation being generated and published on the screen?

Then I found this document from the Go team:

http://golang.org/cmd/godoc/


The very first line states, "Godoc extracts and generates documentation for Go programs." OK, so this program is being used by LiteIDE, but how? Where are the files that Godoc is generating for all this documentation?

LOL, boy it is difficult coming from a Windows environment for the past 20 years.

After reading the documentation a bunch of times I opened up a Terminal session and ran the following command.

godoc /Users/bill/Spaces/GoPackages/src/ArdanStudios/threadpool

Suddenly the documentation appeared on my screen in text format. But I am seeing HTML inside of LiteIDE? I found the -html option.

godoc -html /Users/bill/Spaces/GoPackages/src/ArdanStudios/threadpool

Now I produced the same documentation I am seeing inside of LiteIDE. There are no extra files on my machine, LiteIDE is streaming the output of Godoc directly into the screen. Very smart way of doing things!!

So if I can see documentation for the standard Go packages, then the source code for those packages must be on my machine. After a bit of looking I found them in:

/usr/local/go/src/pkg

It seems they are located inside a folder called pkg under src. This is because the Go team likes to put source code for reusable libraries within a project under pkg. Not all developers follow that same convention and you have the freedom to choose. I personally don't follow that convention. Apparently the Godoc tool has no problems finding the source code files.

The Godoc tool always reads the source code files to produce the latest version of the documentation. So in LiteIDE, when you update your documentation and save the code file, the Godoc tool will show the changes immediately.

Now the next problem I have, my documentation looks really bad. The documentation that I see from the standard library files looks much better. So how do I format my documentation properly inside the Go code files?

I found this document from the Go team:

http://golang.org/doc/articles/godoc_documenting_go_code.html

The introduction reads:

Godoc: documenting Go code

The Go project takes documentation seriously. Documentation is a huge part of making software accessible and maintainable. Of course it must be well-written and accurate, but it also must be easy to write and to maintain. Ideally, it should be coupled to the code itself so the documentation evolves along with the code. The easier it is for programmers to produce good documentation, the better for everyone.

To that end, we have developed the godoc documentation tool. This article describes godoc's approach to documentation, and explains how you can use our conventions and tools to write good documentation for your own projects.

Godoc parses Go source code - including comments - and produces documentation as HTML or plain text. The end result is documentation tightly coupled with the code it documents. For example, through godoc's web interface you can navigate from a function's documentation to its implementation with one click.

Coming from the C# world, using XML tags like <summary> for the past 10 years and having to remember to check the "produce XML documentation file" option, this was a dream. Oh yea, no extra documentation file.

However the rest of the page was lacking. I liked the way the documentation for fmt.Printf looked, so I quickly found the Go source files and studied what the programmer did. After a bit of playing I finally figured out the 3 basic rules you need to help the Godoc tool format the documentation cleanly.

Here is a sample of the documentation I have for my tracelog package:

There are 3 elements in play when writing your documentation. You have header sections, standard text and highlighted text.

At the very top of your code file add the following using the // comment operator or something similar. Obviously you want to give yourself credit for your work, LOL.

// Copyright 2013 Ardan Studios. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

Then add a block comment operator and we can start. Make sure the package code statement is immediately after the closing comment operator. There can be no blank lines between the two.

Tabbing is very important. We are using two layers of tabbing. Keep these two layers of tabbing consistent.

/*
->TAB Package TraceLog implements a file based logging.
->TAB The programmer should feel free to trace log as much of the code.
->CRLF
->TAB New Parameters
->CRLF
->TAB The following is a list of parameters for creating a TraceLog:
      ->TAB baseFilePath: The base location to store all log directories.
      ->TAB machineName: The name of the machine or system. Information is used.
      ->TAB writeToStdout: Set to True if you want the system to also write.
->CRLF
->TAB TraceLog File Management
->CRLF
->TAB Every 2 minutes TraceLog will check each open file for two conditions:
->CRLF
      ->TAB 1. Has a message been written to the file within the last 2 minutes.
      ->TAB 2. Is the size of the file greater than 10 Meg.
*/
package tracelog

The first section of comments will show at the top of our documentation just below the Overview Section. Also the first sentence will appear in Godoc's package list.

Then we have a blank line and a string that will become a header as long as the next line is double spaced and has the same indentation.

The final component is the second tabbing indentation. This will cause that text to be highlighted with a grey background.

You may need to remove all of your existing documentation from your Go code file and throw it into a text editor. Then put it back in to make sure all the tabs and carriage returns are clean.

Using GoDoc.org

If you are building a public package you can use the GoDoc website to publish your documentation. Check out the GoDoc website:

http://godoc.org/

This website has been setup to read your code files and display all of your great documentation. Enter this url (github.com/goinggo/utilities/v1/workpool) into the search box and see the documentation that GoDoc produces for my workpool package:

You can see the same documentation that is being given to you locally is now published on the GoDoc website with your own reusable URL:

http://godoc.org/github.com/goinggo/utilities/v1/workpool

So how can you best use this URL to provide people with your documentation? When you create a repository for your package, add a README.md file. This is a special "Markdown" file that supports standard text, HTML and a few special operators of its own. GitHub has its own extensions and you can find documentation about Markdown here:

https://help.github.com/articles/github-flavored-markdown

If you happened to come across my public workpool package on GitHub, you would see the following:


There is my code file, license file and my readme Markdown file.

Here is a typical README Markdown file that I use:

# Workpool - Version 1.0.0

Copyright 2013 Ardan Studios. All rights reserved.<br />Use of this source code is governed by a BSD-style license that can be found in the LICENSE file.

Ardan Studios<br />12973 SW 112 ST, Suite 153<br />Miami, FL 33186<br />[email protected]<br />

[Click To View Documentation](http://godoc.org/github.com/goinggo/utilities/v1/workpool)

View The Raw Version Here:
https://raw.github.com/goinggo/utilities/master/v1/workpool/README.md

Look at the Markdown link at the bottom of the file. This syntax creates a link to the documentation. The text in the square brackets [] provides the anchor text for the link.

Since GitHub always displays the README Markdown file to the user if one exists, this is what people see when they come to that GitHub page:


Now people have access to the documentation I write on the web as well. I don't need to copy and paste the documentation into the Readme Markdown file, just provide a link. All the documentation is in one place and formatted cleanly and consistently.

As always, I hope this helps you in some small way and your documentation draws people to your work.

Understanding Defer, Panic and Recover

I am building my TraceLog package and it is really important that the package logs any internal exceptions and prevents panics from shutting down the application. The TraceLog package must never be responsible for shutting down an application. I also have internal goroutines that must never terminate until the application is shut down gracefully.

Understanding how to use Defer and Recover in your application can be a bit tricky at first, especially if you are used to using try/catch blocks. There is a pattern you can implement to provide that same type of try/catch protection in Go. Before I can show you this you need to learn how Defer, Panic and Recover work.

First you need to understand the intricacies of the keyword defer. Start with this piece of code:

package main

import (
    "errors"
    "fmt"
)

func main() {
    Test()
}

func MimicError(key string) error {
    return errors.New(fmt.Sprintf("Mimic Error : %s", key))
}

func Test() {
    fmt.Printf("Start Test\n")

    err := MimicError("1")

    defer func() {
        fmt.Printf("Start Defer\n")

        if err != nil {
            fmt.Printf("Defer Error : %v\n", err)
        }
    }()

    fmt.Printf("End Test\n")
}

The MimicError function is a test function that will be used to simulate an error. It is following the Go convention of using the error type to return an indication if something went wrong.

In Go the error type is defined as an interface:

type error interface {
    Error() string
}

If you don't understand what a Go interface is at the moment, this might help for now: any type that implements the Error() function implements this interface and can be used as a variable of this type. The MimicError function is using errors.New(string) to create an error type variable. The errors.New function can be found in the errors package.

The Test function produces the following output:

Start Test
End Test
Start Defer
Defer Error : Mimic Error : 1

When you study the output you see that the Test function started and ended. Then right before the Test function terminated for good, the inline defer function was called. Two interesting things are happening here. First, the defer keyword is deferring the execution of the inline function until the Test function ends. Second, because Go supports closure, the err variable is accessible to the inline function and its message "Mimic Error : 1" is written to stdout.

You can define a defer function at any time inside your function. If that defer function requires state, as in this case with the err variable, then it must exist before the defer function is defined.


Now change the Test function a bit:

func Test() {
    fmt.Printf("Start Test\n")

    err := MimicError("1")

    defer func() {
        fmt.Printf("Start Defer\n")

        if err != nil {
            fmt.Printf("Defer Error : %v\n", err)
        }
    }()

    err = MimicError("2")

    fmt.Printf("End Test\n")
}

This time the code is calling the MimicError function a second time after creating the inline defer function. Here is the output:

Start Test
End Test
Start Defer
Defer Error : Mimic Error : 2

The output is identical to the first test except for one change. This time the inline defer function wrote "Mimic Error : 2". It appears that the inline defer function has a reference to the err variable. So if the state of the err variable changes at any time before the inline defer function is called, you will see that value. To verify that the inline defer function is getting a reference to the err variable, change the code to write the address of the err variable in both the Test function and the inline defer function.

func Test() {
    fmt.Printf("Start Test\n")

    err := MimicError("1")

    fmt.Printf("Err Addr: %v\n", &err)

    defer func() {
        fmt.Printf("Start Defer\n")

        if err != nil {
            fmt.Printf("Err Addr Defer: %v\n", &err)
            fmt.Printf("Defer Error : %v\n", err)
        }
    }()

    err = MimicError("2")

    fmt.Printf("End Test\n")
}

As you can see from the output below, the inline defer function has the same reference to the err variable. The address is the same inside of the Test function and inside of the inline defer function.

Start Test
Err Addr: 0x2101b3200
End Test
Start Defer
Err Addr Defer: 0x2101b3200
Defer Error : Mimic Error : 2

As long as the defer function is stated before the Test function terminates, the defer function will be executed. This is great, but what I want is the ability to always place the defer statement at the beginning of any function. This way the defer function is guaranteed to be called every time the function is executed and I don't have to overthink where to place the defer statement. Occam's Razor applies here: "When you have two competing theories that make exactly the same predictions, the simpler one is the better". What I want is an easy pattern that can be duplicated without requiring any thought.

The only problem is that the err variable needs to be defined before the defer statement can be implemented. Fortunately Go allows return variables to have names.

Now change the entire program as follows:

package main

import (
    "errors"
    "fmt"
)

func main() {
    var err error

    err = Test()

    if err != nil {
        fmt.Printf("Main Error: %v\n", err)
    }
}

func MimicError(key string) error {
    return errors.New(fmt.Sprintf("Mimic Error : %s", key))
}

func Test() (err error) {
    defer func() {
        fmt.Printf("Start Defer\n")

        if err != nil {
            fmt.Printf("Defer Error : %v\n", err)
        }
    }()

    fmt.Printf("Start Test\n")

    err = MimicError("1")

    fmt.Printf("End Test\n")

    return err
}

The Test function now defines a return variable called err of type error. This is great because the err variable exists immediately and you can put the defer statement at the very beginning of the function. Also, the Test function now follows the Go convention and returns an error type back to the calling routine.

When you run the program you get the following output:

Start Test
End Test
Start Defer
Defer Error : Mimic Error : 1
Main Error: Mimic Error : 1

Now it is time to talk about the built-in function panic. When any Go function calls panic the normal flow of the application stops. The function that calls panic ends immediately and causes a chain reaction of panics up the call stack. All the functions in the same call stack will end, one after the next, like dominos falling down. Eventually the panic reaches the top of the call stack and the application crashes. One good thing is that any existing defer functions will be executed during this panic sequence and they have the ability to stop the crash.

Look at this new Test function that calls the built-in panic function and recovers from the call:

func Test() (err error) {
    defer func() {
        fmt.Printf("Start Panic Defer\n")

        if r := recover(); r != nil {
            fmt.Printf("Defer Panic : %v\n", r)
        }
    }()

    fmt.Printf("Start Test\n")

    panic("Mimic Panic")

    fmt.Printf("End Test\n")

    return err
}

Look closely at the new inline defer function:

defer func() {
    fmt.Printf("Start Panic Defer\n")

    if r := recover(); r != nil {
        fmt.Printf("Defer Panic : %v\n", r)
    }
}()

The inline defer function is now calling another built-in function recover. The recover function stops the chain reaction from going any farther up the call stack. It is like swiping a domino away so no more can fall down. The recover function can only be used inside of a defer function. This is because during the panic chain reaction only defer functions will be executed.

If the recover function is called and there is no panic occurring, the recover function will return nil. If there is a panic occurring, then the panic is stopped and the value given to the panic call will be returned.

This time the code is not calling the MimicError function but the built-in panic function to simulate a panic. Look at the output from running the code:

Start Test
Start Panic Defer
Defer Panic : Mimic Panic

The inline defer function captures the panic, prints it to the screen and stops it dead in its tracks. Also notice that "End Test" is never displayed. The function terminated as soon as panic was called.

This is great but if there is an error I still want to display that as well. Something cool about Go and the defer keyword is that you can have more than one defer function stated at a time.

Change the Test function as follows:

func Test() (err error) {
    defer func() {
        fmt.Printf("Start Panic Defer\n")

        if r := recover(); r != nil {
            fmt.Printf("Defer Panic : %v\n", r)
        }
    }()

    defer func() {
        fmt.Printf("Start Error Defer\n")

        if err != nil {
            fmt.Printf("Defer Error : %v\n", err)
        }
    }()

    fmt.Printf("Start Test\n")

    err = MimicError("1")

    panic("Mimic Panic")

    fmt.Printf("End Test\n")

    return err
}

Now both inline defer functions have been incorporated into the beginning of the Test function. First comes the inline defer function that recovers from panics, and then the inline defer function that prints errors. One thing to note is that Go will execute these inline defer functions in the opposite order they are defined (first in, last out).

Run the code and look at the output:

Start Test
Start Error Defer
Defer Error : Mimic Error : 1
Start Panic Defer
Defer Panic : Mimic Panic
Main Error: Mimic Error : 1

The Test function starts as expected and the call to panic halts the execution of the Test function. This causes the inline defer function that prints errors to get called first. Since the Test function called the MimicError function before the panic, the error is printed. Then the inline defer function that recovers from panics is called and the panic is recovered.

There is one problem with this code. The main function had no idea that a panic was averted. All the main function knows is that an error occurred thanks to the MimicError function call. This is not good. I want the main function to know about the error that caused the panic. That is really the error that must be reported.

In the inline defer function that handles the panic we need to assign the error that caused the panic to the err variable.

func Test() (err error) {
    defer func() {
        fmt.Printf("Start Panic Defer\n")

        if r := recover(); r != nil {
            fmt.Printf("Defer Panic : %v\n", r)

            err = errors.New(fmt.Sprintf("%v", r))
        }
    }()

    defer func() {
        fmt.Printf("Start Error Defer\n")

        if err != nil {
            fmt.Printf("Defer Error : %v\n", err)
        }
    }()

    fmt.Printf("Start Test\n")

    err = MimicError("1")

    panic("Mimic Panic")

    fmt.Printf("End Test\n")

    return err
}

Now when you run the code you get the following output:

Start Test
Start Error Defer
Defer Error : Mimic Error : 1
Start Panic Defer
Defer Panic : Mimic Panic
Main Error: Mimic Panic

This time the main function reports the error that caused the panic.

Everything looks good but this code is not really scalable. Having two inline defer functions is cool but not practical. What I need is a single function that can handle both errors and panics.

Here is a revised version of the full program with a new function called _CatchPanic:

package main

import (
    "errors"
    "fmt"
)

func main() {
    var err error

    err = Test()

    if err != nil {
        fmt.Printf("Main Error: %v\n", err)
    }
}

func _CatchPanic(err error, functionName string) {
    if r := recover(); r != nil {
        fmt.Printf("%s : PANIC Defered : %v\n", functionName, r)

        if err != nil {
            err = errors.New(fmt.Sprintf("%v", r))
        }
    } else if err != nil {
        fmt.Printf("%s : ERROR : %v\n", functionName, err)
    }
}

func MimicError(key string) error {
    return errors.New(fmt.Sprintf("Mimic Error : %s", key))
}

func Test() (err error) {
    defer _CatchPanic(err, "Test")

    fmt.Printf("Start Test\n")

    err = MimicError("1")

    fmt.Printf("End Test\n")

    return err
}

The new function _CatchPanic incorporates both the panic recover and error handling. This time instead of defining an inline defer function the code is using an external function for the defer statement.

In this first test with the new _CatchPanic defer function, we need to make sure we didn't break our error handling.

Run the code and look at the output:

Start Test
End Test
Main Error: Mimic Error : 1


Everything looks good. Now we need to test a panic.

func Test() (err error) {
    defer _CatchPanic(err, "Test")

    fmt.Printf("Start Test\n")

    err = MimicError("1")

    panic("Mimic Panic")

    fmt.Printf("End Test\n")

    return err
}

Run the code and look at the output:

Start Test
Test : PANIC Defered : Mimic Panic
Main Error: Mimic Error : 1

Houston, we have a problem. Main was provided the error from the MimicError function call and not from the panic. What went wrong?

func _CatchPanic(err error, functionName string) {
    if r := recover(); r != nil {
        fmt.Printf("%s : PANIC Defered : %v\n", functionName, r)

        if err != nil {
            err = errors.New(fmt.Sprintf("%v", r))
        }
    } else if err != nil {
        fmt.Printf("%s : ERROR : %v\n", functionName, err)
    }
}

Because defer is now calling an external function, the code lost all the goodness that came with inline functions and closures.

Change the code to print the address of the err variable from inside the Test function and the _CatchPanic defer function.

func _CatchPanic(err error, functionName string) {
    if r := recover(); r != nil {
        fmt.Printf("%s : PANIC Defered : %v\n", functionName, r)

        fmt.Printf("Err Addr Defer: %v\n", &err)

        if err != nil {
            err = errors.New(fmt.Sprintf("%v", r))
        }
    } else if err != nil {
        fmt.Printf("%s : ERROR : %v\n", functionName, err)
    }
}

func Test() (err error) {
    fmt.Printf("Err Addr: %v\n", &err)

    defer _CatchPanic(err, "Test7")

    fmt.Printf("Start Test\n")

    err = MimicError("1")

    panic("Mimic Panic")

    fmt.Printf("End Test\n")

    return err
}

When you run the code you can see why main did not get the error from the panic.

Err Addr: 0x2101b31f0
Start Test
Test7 : PANIC Defered : Mimic Panic
Err Addr Defer: 0x2101b3270
Main Error: Mimic Error : 1

When the Test function passes the err variable to the _CatchPanic defer function it is passing the variable by value. In Go all arguments are passed by value. So the _CatchPanic defer function has its own copy of the err variable. Any changes to _CatchPanic's copy remains with _CatchPanic.

To fix the pass by value problem the code needs to pass the err variable by reference.

package main

import (
    "errors"
    "fmt"
)

func main() {
    var err error

    err = TestFinal()

    if err != nil {
        fmt.Printf("Main Error: %v\n", err)
    }
}

func _CatchPanic(err *error, functionName string) {
    if r := recover(); r != nil {
        fmt.Printf("%s : PANIC Defered : %v\n", functionName, r)

        if err != nil {
            *err = errors.New(fmt.Sprintf("%v", r))
        }
    } else if err != nil && *err != nil {
        fmt.Printf("%s : ERROR : %v\n", functionName, *err)
    }
}

func MimicError(key string) error {
    return errors.New(fmt.Sprintf("Mimic Error : %s", key))
}

func TestFinal() (err error) {
    defer _CatchPanic(&err, "TestFinal")

    fmt.Printf("Start Test\n")

    err = MimicError("1")

    panic("Mimic Panic")

    fmt.Printf("End Test\n")

    return err
}

Now run the code and look at the output:

Start Test
TestFinal : PANIC Defered : Mimic Panic
Main Error: Mimic Panic

The main function now reports the error that occurred because of the panic.

If you want to capture a stack trace as well just make this change to _CatchPanic. Remember to import "runtime".

func _CatchPanic(err *error, functionName string) {
    if r := recover(); r != nil {
        fmt.Printf("%s : PANIC Defered : %v\n", functionName, r)

        // Capture the stack trace
        buf := make([]byte, 10000)
        runtime.Stack(buf, false)

        fmt.Printf("%s : Stack Trace : %s", functionName, string(buf))

        if err != nil {
            *err = errors.New(fmt.Sprintf("%v", r))
        }
    } else if err != nil && *err != nil {
        fmt.Printf("%s : ERROR : %v\n", functionName, *err)
    }
}

With this pattern you can implement goroutines that handle errors and trap panic situations. In many cases these conditions just need to be logged or reported up the call stack to be handled gracefully. Having a single place to implement this type of code and a simple way to integrate it into each function will reduce errors and keep your code clean.

However, I have learned it is best to use this pattern only to catch panics. Leave the logging of errors to the application logic; otherwise you may log the errors twice.

func _CatchPanic(err *error, functionName string) {
    if r := recover(); r != nil {
        fmt.Printf("%s : PANIC Defered : %v\n", functionName, r)

        // Capture the stack trace
        buf := make([]byte, 10000)
        runtime.Stack(buf, false)

        fmt.Printf("%s : Stack Trace : %s", functionName, string(buf))

        if err != nil {
            *err = errors.New(fmt.Sprintf("%v", r))
        }
    }
}

As always I hope this can help you with your Go programming.

Go's time.Duration Type Unravelled

I have been struggling with using the Time package that comes in the Go standard library. My struggles have come from two pieces of functionality. First, trying to capture the number of milliseconds between two different time periods. Second, comparing that duration in milliseconds against a pre-defined time span. It sounds like a no-brainer but, like I said, I have been struggling.

In the Time package there is a custom type called Duration and a set of helper constants:


type Duration int64

const (
    Nanosecond Duration = 1
    Microsecond = 1000 * Nanosecond
    Millisecond = 1000 * Microsecond
    Second = 1000 * Millisecond
    Minute = 60 * Second
    Hour = 60 * Minute
)

I looked at this maybe 1000 times but it didn't mean anything to me. I just want to compare two time periods, get the duration back, compare the duration and do something if the right amount of time has elapsed. I could not for the life of me understand how this structure was going to help me unravel this mystery.

I wrote this Test function but it didn't work.

func Test() {
    var waitFiveHundredMilliseconds int64 = 500

    startingTime := time.Now().UTC()
    time.Sleep(10 * time.Millisecond)
    endingTime := time.Now().UTC()

    var duration time.Duration = endingTime.Sub(startingTime)
    var durationAsInt64 = int64(duration)

    if durationAsInt64 >= waitFiveHundredMilliseconds {
        fmt.Printf("Time Elapsed : Wait[%d] Duration[%d]\n",
            waitFiveHundredMilliseconds, durationAsInt64)
    } else {
        fmt.Printf("Time DID NOT Elapse : Wait[%d] Duration[%d]\n",
            waitFiveHundredMilliseconds, durationAsInt64)
    }
}

When I ran the Test function and looked at the output, the code thought that 500 milliseconds had elapsed.

Time Elapsed : Wait[500] Duration[10724798]

So what went wrong? I looked at the Duration type and constants again.

type Duration int64

const (
    Nanosecond Duration = 1
    Microsecond = 1000 * Nanosecond
    Millisecond = 1000 * Microsecond
    Second = 1000 * Millisecond
    Minute = 60 * Second
    Hour = 60 * Minute
)

The basic unit of time for the Duration type is a Nanosecond. Ah, that is why when casting a Duration type that contains 10 milliseconds to an int64 I get 10,000,000.

So direct casting is not going to work. I need a different strategy and a better understanding of how to use and convert the Duration type.

I know it would be best to use the Duration type natively. This will minimize problems when using the type. Based on the constant values, I can create a Duration type variable in the following ways:

func Test() {
    var duration_Milliseconds time.Duration = 500 * time.Millisecond
    var duration_Seconds time.Duration = (1250 * 10) * time.Millisecond
    var duration_Minute time.Duration = 2 * time.Minute

    fmt.Printf("Milli [%v]\nSeconds [%v]\nMinute [%v]\n",
        duration_Milliseconds,
        duration_Seconds,
        duration_Minute)
}

I created 3 variables of type Duration. Then by using the time constants, I am able to create the correct duration time span values. When I use the standard library Printf function and the %v verb I get the following output:

Milli [500ms]
Seconds [12.5s]
Minute [2m0s]

This is very cool. The Printf function knows how to natively display a Duration type. Based on the value in each Duration variable, the Printf function prints the value in the proper time period. I am also getting the values I expected.

The Duration type does have member functions that convert the value of a Duration variable to a native Go type, either int64 or float64:

func Test() {
    var duration_Seconds time.Duration = (1250 * 10) * time.Millisecond
    var duration_Minute time.Duration = 2 * time.Minute

    var float64_Seconds float64 = duration_Seconds.Seconds()
    var float64_Minutes float64 = duration_Minute.Minutes()

    fmt.Printf("Seconds [%.3f]\nMinutes [%.2f]\n",
        float64_Seconds,
        float64_Minutes)
}

I noticed pretty quickly that there is no Milliseconds function. There is a function for every other time unit but Milliseconds. When I display the Seconds and Minutes natively I get the following output as expected:

Seconds [12.500]
Minutes [2.00]

However, I need the milliseconds. So why is the Milliseconds function missing?

The designers of Go didn't want to lock me into a single native type for the Milliseconds. They wanted me to have options.

The following code converts the value of the Duration variable to Milliseconds as both an int64 and float64:

func Test() {
    var duration_Milliseconds time.Duration = 500 * time.Millisecond

    var castToInt64 int64 = duration_Milliseconds.Nanoseconds() / 1e6
    var castToFloat64 float64 = duration_Milliseconds.Seconds() * 1e3

    fmt.Printf("Duration [%v]\ncastToInt64 [%d]\ncastToFloat64 [%.0f]\n",
        duration_Milliseconds,
        castToInt64,
        castToFloat64)
}

If I divide the Nanoseconds by 1e6 I get the Milliseconds as an int64. If I multiply the Seconds by 1e3 I get the Milliseconds as a float64.

Here is the output:

Duration [500ms]
castToInt64 [500]
castToFloat64 [500]

If you are wondering what 1e6 or 1e3 represents you are not alone:

1e3 = 10^3 = One Thousand

1e6 = 10^6 = One Million


Now that I understand what a Duration type is and how best to use and manipulate it, I have my final test example using Milliseconds:

func Test() {
    var waitFiveHundredMilliseconds time.Duration = 500 * time.Millisecond

    startingTime := time.Now().UTC()
    time.Sleep(600 * time.Millisecond)
    endingTime := time.Now().UTC()

    var duration time.Duration = endingTime.Sub(startingTime)

    if duration >= waitFiveHundredMilliseconds {
        fmt.Printf("Wait %v\nNative [%v]\nMilliseconds [%d]\nSeconds [%.3f]\n",
            waitFiveHundredMilliseconds,
            duration,
            duration.Nanoseconds()/1e6,
            duration.Seconds())
    }
}

I get the following output:

Wait 500ms
Native [601.091066ms]
Milliseconds [601]
Seconds [0.601]

Instead of comparing native types to determine if the time has elapsed I am comparing two Duration types. This is much cleaner. When displaying the values I am using the Duration type custom formatting and converting the value of the Duration variable to Milliseconds as both an int64 and float64.

It took a while but eventually using the Duration type started to make sense. As always I hope this helps someone else navigate using the Duration type in their Go applications.

Send an email in Go with smtp.SendMail

I wanted to send an email from my TraceLog package when a critical exception occurred. Fortunately Go's standard library has a package called smtp, which can be found inside the net package (net/smtp). When you look at the documentation you are left wanting.

I spent 20 minutes researching how to use this package. After fighting through the parameters and bugs, I came up with this sample code:

package main

import (
    "bytes"
    "errors"
    "fmt"
    "net/smtp"
    "runtime"
    "strings"
    "text/template"
)

func main() {
    SendEmail(
        "smtp.1and1.com",
        587,
        "[email protected]",
        "password",
        []string{"[email protected]"},
        "testing subject",
        "<html><body>Exception 1</body></html>Exception 1")
}

func _CatchPanic(err *error, functionName string) {
    if r := recover(); r != nil {
        fmt.Printf("%s : PANIC Deferred : %v\n", functionName, r)

        // Capture the stack trace
        buf := make([]byte, 10000)
        runtime.Stack(buf, false)

        fmt.Printf("%s : Stack Trace : %s", functionName, string(buf))

        if err != nil {
            *err = errors.New(fmt.Sprintf("%v", r))
        }
    } else if err != nil && *err != nil {
        fmt.Printf("%s : ERROR : %v\n", functionName, *err)

        // Capture the stack trace
        buf := make([]byte, 10000)
        runtime.Stack(buf, false)

        fmt.Printf("%s : Stack Trace : %s", functionName, string(buf))
    }
}

func SendEmail(host string, port int, userName string, password string,
    to []string, subject string, message string) (err error) {
    defer _CatchPanic(&err, "SendEmail")

    parameters := &struct {
        From    string
        To      string
        Subject string
        Message string
    }{
        userName,
        strings.Join(to, ","),
        subject,
        message,
    }

    buffer := new(bytes.Buffer)

    emailTemplate := template.Must(template.New("emailTemplate").Parse(_EmailScript()))
    emailTemplate.Execute(buffer, parameters)

    auth := smtp.PlainAuth("", userName, password, host)

    err = smtp.SendMail(
        fmt.Sprintf("%s:%d", host, port),
        auth,
        userName,
        to,
        buffer.Bytes())

    return err
}

// _EmailScript returns a template for the email message to be sent
func _EmailScript() (script string) {
    return `From: {{.From}}
To: {{.To}}
Subject: {{.Subject}}
MIME-version: 1.0
Content-Type: text/html; charset="UTF-8"

{{.Message}}`
}

The auth variable does not have to be recreated on every call. That can be created once and reused. I added my _CatchPanic function so you can see any exceptions that are returned while testing the code.
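As a sketch of that idea, the auth value can be built once at startup and shared by every send. The host and credentials below are placeholders, not real accounts:

```go
package main

import (
	"fmt"
	"net/smtp"
)

// auth is created one time and reused by every SendMail call.
// The host and credential values here are placeholders.
var auth = smtp.PlainAuth("", "user@example.com", "password", "smtp.example.com")

func main() {
	// Any number of smtp.SendMail calls can now share the same auth value.
	fmt.Println(auth != nil) // true
}
```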

If you look at the raw source of the email you will see how the message parameter works:


Return-Path: 
Delivery-Date: Wed, 12 Jun 2013 20:34:59 -0400
Received: from mout.perfora.net (mout.perfora.net [X.X.X.X])
    by mx.perfora.net (node=mxus0) with ESMTP (Nemesis)
    id 0MZTCn-1V3EP13 for [email protected]; Wed, 12 Jun 2013 20:34:58 -0400
Received: from localhost (c-50-143-31-151.hsd1.fl.comcast.net [X.X.X.X])
    by mrelay.perfora.net (node=mrus4) with ESMTP (Nemesis)
    id 0Mhi4R-1V0RuN48ot-00MTc6; Wed, 12 Jun 2013 20:34:58 -0400
From: [email protected]
To: [email protected]
Subject: testing subject
MIME-version: 1.0
Content-Type: text/html; charset="UTF-8"
Message-Id: <0mhi4r- data-blogger-escaped-mrelay.perfora.net="">
Date: Wed, 12 Jun 2013 20:34:56 -0400
Envelope-To: [email protected]

<html><body>Exception 1</body></html>

As always I hope this sample program saves you time and aggravation.

Reading XML Documents in Go

I was really surprised how easy it was to read an XML document using the encoding/xml package that comes with the standard library. The package works by defining structs that map the XML document. If you need more flexibility then use Gustavo Niemeyer's xmlpath package.

Here is the XML document we are going to read and de-serialize:

<straps>
    <strap key="CompanyName" value="NEWCO" />
    <strap key="UseEmail" value="true" />
</straps>

The first thing we need to do is define the structs we will use to map the document:

type XMLStrap struct {
    XMLName xml.Name `xml:"strap"`
    Key     string   `xml:"key,attr"`
    Value   string   `xml:"value,attr"`
}

type XMLStraps struct {
    XMLName xml.Name    `xml:"straps"`
    Straps  []*XMLStrap `xml:"strap"`
}

There are two structs, one for the entire document (<straps>) and one for each individual child node (<strap>). If you look closely at the structs you may see something new. Each field has a tag associated with it. These tags are bound to each individual field. Go's reflect package allows you to access these tags.

These tag formats are specific to the decoding support inside the encoding/xml package. The tags map the nodes and attributes of the XML document to the struct.

The following code decodes the XML document and returns the array of strap nodes:

func ReadStraps(reader io.Reader) ([]*XMLStrap, error) {
    xmlStraps := &XMLStraps{}
    decoder := xml.NewDecoder(reader)

    if err := decoder.Decode(xmlStraps); err != nil {
        return nil, err
    }

    return xmlStraps.Straps, nil
}

The function takes an io.Reader. We will be passing an os.File variable into this function. The function returns an array of pointers for each strap we read from the file.

First we create an XMLStraps variable and get its address. Next we create a decoder using the xml.NewDecoder function, passing the io.Reader object. Then we call Decode which reads the file and de-serializes it into the XMLStraps variable. Then we just return the array of strap values.

The following completes the sample code:

/*
straps.xml should be located in the default working directory

<straps>
    <strap key="CompanyName" value="NEWCO" />
    <strap key="UseEmail" value="true" />
</straps>
*/
package main

import (
    "encoding/xml"
    "fmt"
    "io"
    "os"
    "path/filepath"
)

type XMLStrap struct {
    XMLName xml.Name `xml:"strap"`
    Key     string   `xml:"key,attr"`
    Value   string   `xml:"value,attr"`
}

type XMLStraps struct {
    XMLName xml.Name    `xml:"straps"`
    Straps  []*XMLStrap `xml:"strap"`
}

func ReadStraps(reader io.Reader) ([]*XMLStrap, error) {
    xmlStraps := &XMLStraps{}
    decoder := xml.NewDecoder(reader)

    if err := decoder.Decode(xmlStraps); err != nil {
        return nil, err
    }

    return xmlStraps.Straps, nil
}

func main() {
    var xmlStraps []*XMLStrap
    var file *os.File

    defer func() {
        if file != nil {
            file.Close()
        }
    }()

    // Build the location of the straps.xml file
    // filepath.Abs appends the file name to the default working directory
    strapsFilePath, err := filepath.Abs("straps.xml")

    if err != nil {
        panic(err.Error())
    }

    // Open the straps.xml file
    file, err = os.Open(strapsFilePath)

    if err != nil {
        panic(err.Error())
    }

    // Read the straps file
    xmlStraps, err = ReadStraps(file)

    if err != nil {
        panic(err.Error())
    }

    // Display the first strap
    fmt.Printf("Key: %s  Value: %s", xmlStraps[0].Key, xmlStraps[0].Value)
}

I hope this sample gets you started with reading XML documents for your Go applications.

Running Go Programs as a Background Process

I have been writing Windows services in C/C++ and then in C# since 1999. Now that I am writing server based software in Go for the Linux OS I am completely lost. What is even more frustrating is that for the first time the OS I am developing on (Mac OSX) is not the operating system I will be deploying my code on. That will be for another blog post.

I want to run my code as a background process (daemon) on my Mac. My only problem is, I have no idea how that works on the Mac OS.

I was lucky to find an open source project called service on Bitbucket by Daniel Theophanes. This code taught me how to create, install, start and stop daemons on the Mac OS. The code also supports daemons for the Linux OS and Windows.

Background Processes on the Mac OS

The Mac OS has two types of background processes, Daemons and Agents. Here is a definition for each:

A daemon is a program that runs in the background as part of the overall system (that is, it is not tied to a particular user). A daemon cannot display any GUI; more specifically, it is not allowed to connect to the window server. A web server is the perfect example of a daemon.

An agent is a process that runs in the background on behalf of a particular user. Agents are useful because they can do things that daemons can't, like reliably access the user's home directory or connect to the window server.

For more information: http://developer.apple.com/library/mac/#documentation/MacOSX/Conceptual/BPSystemStartup/Chapters/Introduction.html

Let's start with how to configure a daemon in the Mac OS.


If you open up finder you will see the following folders. The LaunchDaemons folder under Library is where we need to add a launchd .plist file. There is also a Library/LaunchDaemons folder under /System for the OS daemons.

The launchd program is the service management framework for starting, stopping and managing daemons, applications, processes, and scripts in the Mac OS. Once the kernel starts launchd, the program scans several directories including /etc for scripts and the LaunchAgents and LaunchDaemons folders in both /Library and /System/Library. Programs found in the LaunchDaemons directories are run as the root user.

Here is the version of the launchd .plist file with all the basic configuration we need:

<?xml version='1.0' encoding='UTF-8'?>
<!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version='1.0'>
<dict>
    <key>Label</key><string>My Service</string>
    <key>ProgramArguments</key>
    <array>
        <string>/Users/bill/MyService/MyService</string>
    </array>
    <key>WorkingDirectory</key><string>/Users/bill/MyService</string>
    <key>StandardOutPath</key><string>/Users/bill/MyService/My.log</string>
    <key>KeepAlive</key><true/>
    <key>Disabled</key><false/>
</dict>
</plist>

You can find all the different options for the .plist file here:https://developer.apple.com/library/mac/documentation/Darwin/Reference/ManPages/man5/launchd.plist.5.html

The ProgramArguments key is an important tag:

<key>ProgramArguments</key>
<array>
    <string>/Users/bill/MyService/MyService</string>
</array>

Here you specify the name of the program to run and any other arguments to be passed into main.

These other two tags, WorkingDirectory and StandardOutPath, are really helpful too:

<key>WorkingDirectory</key><string>/Users/bill/MyService</string>
<key>StandardOutPath</key><string>/Users/bill/MyService/My.log</string>

Once we have a launchd .plist file we can use a special program called launchctl to start our program as a background process (daemon).


launchctl load /Library/LaunchDaemons/MyService.plist

The launchctl program provides service control and reporting. The load command is used to start a daemon based on the launchd .plist file. To verify that a program is running use the list command:

launchctl list

PID  Status  Label
948  -       0x7ff4a9503410.anonymous.launchctl
946  -       My Service
910  -       0x7ff4a942ce00.anonymous.bash

PID 946 was assigned to the running program, My Service. Now to stop the program from running issue an unload command:

launchctl unload /Library/LaunchDaemons/MyService.plist
launchctl list

PID  Status  Label
948  -       0x7ff4a9503410.anonymous.launchctl
910  -       0x7ff4a942ce00.anonymous.bash

Now the program has been terminated. There is some code we need to implement to handle the start and stop requests from the OS when our program is started and terminated.

OS Specific Go Coding Files

You can create Go source code files that are only compiled for the target platform you're building.

In my LiteIDE project for Going Go you will see five Go source code files. Three of these files have the name of an environment we can build the code for, darwin (Mac), linux and windows.

Since I am building against the Mac OS, the service_linux.go and service_windows.go files are ignored by the compiler.


The compiler recognizes this naming convention by default.

This is very cool because each environment needs to do a few things differently and use different packages. As in the case of service_windows.go, the following imports are required:

"code.google.com/p/winsvc/eventlog"
"code.google.com/p/winsvc/mgr"
"code.google.com/p/winsvc/svc"

I don't have these packages installed right now because I don't plan to run the code on Windows. It doesn't affect building the code because service_windows.go is ignored.

There is another really cool side effect from this: I can reuse types and function names within these files since only one of these files is ever compiled with the program. This means that any code that uses this package does not have to be modified when changing environments. Really Cool !!

Service Interfaces

Each service must implement three interfaces that provide command and control for the service.

type Service interface {
    Installer
    Controller
    Runner
}

type Installer interface {
    Install(config *Config) error
    Remove() error
}

type Controller interface {
    Start() error
    Stop() error
}

type Runner interface {
    Run(config *Config) error
}

The Installer interface provides the logic for installing and uninstalling the program as a background process on the specific OS. The Controller interface provides logic to start and stop the service from the command line. The final interface Runner is used to perform all application logic and run the program as a service when requested.

Darwin Implementation

Since this post is specific to the Mac OS I will concentrate on the implementation of the service_darwin.go code file.

The Installer interface requires the implementation of two functions, Install and Remove. As described above we need to create a launchd .plist file for the service. The best way to accomplish this is to use the text/template package.

The _InstallScript function uses a multi-line string to create the template for the launchd .plist file.

func _InstallScript() (script string) {
    return `<?xml version='1.0' encoding='UTF-8'?>
<!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version='1.0'>
<dict>
    <key>Label</key><string>{{.DisplayName}}</string>
    <key>ProgramArguments</key>
    <array>
        <string>{{.WorkingDirectory}}/{{.ExecutableName}}</string>
    </array>
    <key>WorkingDirectory</key><string>{{.WorkingDirectory}}</string>
    <key>StandardOutPath</key><string>{{.LogLocation}}/{{.Name}}.log</string>
    <key>KeepAlive</key><true/>
    <key>Disabled</key><false/>
</dict>
</plist>`
}

What is cool about multi-line strings is that the carriage return, line feeds and spaces are respected.  Since this is a template, we need to have variables that will be substituted with data.  The {{.variable_name}} convention is used to define those variables.

Here is the implementation of the Install function:

func (service *_DarwinLaunchdService) Install(config *Config) error {
    confPath := service._GetServiceFilePath()

    _, err := os.Stat(confPath)
    if err == nil {
        return fmt.Errorf("Init already exists: %s", confPath)
    }

    file, err := os.Create(confPath)
    if err != nil {
        return err
    }
    defer file.Close()

    parameters := &struct {
        ExecutableName   string
        WorkingDirectory string
        Name             string
        DisplayName      string
        LongDescription  string
        LogLocation      string
    }{
        service._Config.ExecutableName,
        service._Config.WorkingDirectory,
        service._Config.Name,
        service._Config.DisplayName,
        service._Config.LongDescription,
        service._Config.LogLocation,
    }

    installTemplate := template.Must(template.New("launchdConfig").Parse(_InstallScript()))
    return installTemplate.Execute(file, parameters)
}

The _GetServiceFilePath() abstracts the location of the configuration file for each environment implementation. For Darwin the function looks like this:

func (service *_DarwinLaunchdService) _GetServiceFilePath() string {
    return fmt.Sprintf("/Library/LaunchDaemons/%s.plist", service._Config.Name)
}

Now the code checks if the file already exists and if it doesn't, creates an empty file. Next we build a struct on the fly and populate it with all the parameters that we need for the template Execute function call. Notice the names of the fields match the {{.variable_name}} variables in the template.

The Execute function will process the template and then write the finished product to disk using the file handle.

The Controller interface requires two functions, Start and Stop. In the Darwin source code file the implementation is simple:

func (service *_DarwinLaunchdService) Start() error {
    confPath := service._GetServiceFilePath()

    cmd := exec.Command("launchctl", "load", confPath)
    return cmd.Run()
}

func (service *_DarwinLaunchdService) Stop() error {
    confPath := service._GetServiceFilePath()

    cmd := exec.Command("launchctl", "unload", confPath)
    return cmd.Run()
}

Each function executes the launchctl program the same way as we did above. This provides a convenient way to start and stop the daemon.

The final interface that needs to be implemented is Runner with one function called Run.

func (service *_DarwinLaunchdService) Run(config *Config) error {
    defer func() {
        if r := recover(); r != nil {
            fmt.Printf("******> SERVICE PANIC: %s\n", r)
        }
    }()

    var err error

    fmt.Print("******> Initing Service\n")

    if config.Init != nil {
        err = config.Init()

        if err != nil {
            return err
        }
    }

    fmt.Print("******> Starting Service\n")

    if config.Start != nil {
        err = config.Start()

        if err != nil {
            return err
        }
    }

    fmt.Print("******> Service Started\n")

    // Create a channel to talk with the OS
    var sigChan = make(chan os.Signal, 1)
    signal.Notify(sigChan, os.Interrupt)

    // Wait for an event
    <-sigChan

    fmt.Print("******> Service Shutting Down\n")

    if config.Stop != nil {
        err = config.Stop()

        if err != nil {
            return err
        }
    }

    fmt.Print("******> Service Down\n")
    return err
}

Run is called when the program is going to run as a daemon. It first makes a call to the user's Init and Start functions. The user is expected to perform any initialization, start their routines and then return control back.

Next the code creates a channel that will be used to communicate with the operating system. The call to signal.Notify binds the channel to receive operating system events. The code then blocks on the channel, waiting until the operating system notifies the program with an event telling it to shut down. Once a shutdown event is received, the user's Stop function is called and the Run function returns control back to shut down the program.

Service Manager

Service Manager provides all the boilerplate code so Service can be easily implemented by any program. It implements the Config member function called Run.

func (config *Config) Run() {
    var service, err = _NewService(config)
    config._Service = service

    if err != nil {
        fmt.Printf("%s unable to start: %s", config.DisplayName, err)
        return
    }

    // Perform a command and then return
    if len(os.Args) > 1 {
        var err error
        verb := os.Args[1]

        switch verb {
        case "install":
            err = service.Install(config)

            if err != nil {
                fmt.Printf("Failed to install: %s\n", err)
                return
            }

            fmt.Printf("Service \"%s\" installed.\n", config.DisplayName)
            return

        case "remove":
            err = service.Remove()

            if err != nil {
                fmt.Printf("Failed to remove: %s\n", err)
                return
            }

            fmt.Printf("Service \"%s\" removed.\n", config.DisplayName)
            return

        case "debug":
            config.Start(config)

            fmt.Println("Starting Up In Debug Mode")

            reader := bufio.NewReader(os.Stdin)
            reader.ReadString('\n')

            fmt.Println("Shutting Down")

            config.Stop(config)
            return

        case "start":
            err = service.Start()

            if err != nil {
                fmt.Printf("Failed to start: %s\n", err)
                return
            }

            fmt.Printf("Service \"%s\" started.\n", config.DisplayName)
            return

        case "stop":
            err = service.Stop()

            if err != nil {
                fmt.Printf("Failed to stop: %s\n", err)
                return
            }

            fmt.Printf("Service \"%s\" stopped.\n", config.DisplayName)
            return

        default:
            fmt.Printf("Options for \"%s\": (install | remove | debug | start | stop)\n", os.Args[0])
            return
        }
    }

    // Run the service
    err = service.Run(config)
}

The Run function starts by creating the service object based on the configuration that is provided. Then it looks at the command line arguments. If there is a command, it is processed and the program terminates. If the command is debug, the program is started as if it were running as a service, except it does not hook into the operating system. Hitting <enter> will shut down the program.

If no command line arguments are provided, the code attempts to start as a daemon by calling service.Run.

Implementing The Service

The following code shows an example of using the service:

package main

import (
    "fmt"
    "github.com/goinggo/service/v1"
    "path/filepath"
)

func main() {
    // Capture the working directory
    workingDirectory, _ := filepath.Abs("")

    // Create a config object to start the service
    config := &service.Config{
        ExecutableName:   "MyService",
        WorkingDirectory: workingDirectory,
        Name:             "MyService",
        DisplayName:      "My Service",
        LongDescription:  "My Service provides support for...",
        LogLocation:      _Straps.Strap("baseFilePath"),

        Init:  InitService,
        Start: StartService,
        Stop:  StopService,
    }

    // Run any command line options or start the service
    config.Run()
}

func InitService() {
    fmt.Printf("Service Inited\n")
}

func StartService() {
    fmt.Printf("Service Started\n")
}

func StopService() {
    fmt.Printf("Service Stopped\n")
}

The Init, Start and Stop functions must return control back to the config.Run function.

The code I have has been tested with the Mac OS. The code for Linux is identical except for the script that needs to be created and installed. Also the implementation for Start and Stop uses different programs. In the near future I will test the Linux portion of the code. The Windows portion requires some refactoring and will not build. If you plan to use Windows start with Daniel's code.

Once you build the code, open a Terminal session where the binary has been created and run the different commands.

./MyService debug

./MyService install

./MyService start

./MyService stop

As always I hope the code helps you create and run your own services.

How Packages Work in Go

Since I started writing code in Go it has been a mystery to me how best to organize my code and use the package keyword. The package keyword is similar to using a namespace in C#, however the convention is to tie the package name to the directory structure.

Go has this web page that attempts to explain how to write Go Code: http://golang.org/doc/code.html

When I started programming in Go this was one of the first documents I read. This went way over my head, mainly because I have been working in Visual Studio and code is packaged for you in Solution and Project files. Working out of a directory on the file system was a crazy thought.  Now I love the simplicity of it but it has taken quite a while for it all to make sense.

"How to Write Go Code" starts out with the concept of a Workspace. Think of this as the root directory for your project. If you were working in Visual Studio this is where the solution or project file would be located. Then from inside your Workspace you need to create a single sub-directory called src. This is mandatory if you want the Go tools to work properly. From within the src directory you have the freedom to organize your code the way you want. However you need to understand the conventions set forth by the Go team for packages and source code or you could be refactoring your code down the line.

On my machine I created a Workspace called Test and the required sub-directory called src. This is the first step in creating your project.

Then in LiteIDE open the Test directory (the Workspace), and create the following sub-directories and empty Go source code files.

First we create a sub-directory for the application we are creating. The name of the directory where the main function is located will be the name of the executable. In our case main.go contains the main function and is under the myprogram directory. This means our executable file will be called myprogram.

The other sub-directories inside of src will contain packages for our project. By convention the name of the directory should be the name of the package for those source code files that are located in that directory. In our case the new packages are called samplepkg and subpkg. The name of the source code files can be anything you like.

Create the same package folders and empty Go source code files to follow along.

If we don't add the Workspace folder to the GOPATH we will have problems.

It took me a bit to realize that the Custom Directories window is a Text Box. So you can edit those folders directly. The System GOPATH is read only.

The Go designers have done several things when naming their packages and source code files. All the file names and directories are lowercase and they did not use underscores to break words apart in the package directory names. Also, the package names match the directory names. Code files within a directory belong to a package named after the directory.

Take a look at the Go source code directory for a few of the standard library packages:

The package directories for bufio and builtin are great examples of the directory naming convention. They could have been called buf_io and built_in.

Look again at the Go source code directory and review the names of the source code files.


Notice the use of the underscore in some of the file names. When the file contains test code or is specific to a particular platform, an underscore is used.

The usual convention is to name one of the source code files the same as the package name. In bufio this convention is followed. However, this is a loosely followed convention.

In the fmt package you will notice there is no source code file named fmt.go. I personally like naming my packages and source code files differently.

Last, open the doc.go, format.go, print.go and scan.go files. They are all declared to be in the fmt package.

Let's take a look at the code for sample.go:

package samplepkg

import (
    "fmt"
)

type Sample struct {
    Name string
}

func New(name string) (sample *Sample) {
    sample = &Sample{
        Name: name,
    }

    return sample
}

func (sample *Sample) Print() {
    fmt.Printf("Sample Name : %s\n", sample.Name)
}

The code is useless but it will let us focus on the two important conventions. First, notice the name of the package is the same as the name of the sub-directory. Second, there is a function called New.

The function New is a Go convention for packages that create a core type or different types for use by the application developer. Look at how New is defined and implemented in log.go, bufio.go and crypto.go:

// log.go

// New creates a new Logger. The out variable sets the
// destination to which log data will be written.
// The prefix appears at the beginning of each generated log line.
// The flag argument defines the logging properties.
func New(out io.Writer, prefix string, flag int) *Logger {
    return &Logger{out: out, prefix: prefix, flag: flag}
}

// bufio.go

// NewReader returns a new Reader whose buffer has the default size.
func NewReader(rd io.Reader) *Reader {
    return NewReaderSize(rd, defaultBufSize)
}

// crypto.go

// New returns a new hash.Hash calculating the given hash function. New panics
// if the hash function is not linked into the binary.
func (h Hash) New() hash.Hash {
    if h > 0 && h < maxHash {
        f := hashes[h]
        if f != nil {
            return f()
        }
    }
    panic("crypto: requested hash function is unavailable")
}

Since each package acts as a namespace, every package can have its own version of New. In bufio multiple types can be created, so there is no standalone New function. Instead you will find functions like NewReader and NewWriter.

Look back at sample.go. In our code the core type is Sample, so our New function returns a reference to a Sample type. Then we added a member function to display the name we provided in New.


Now let's look at the code for sub.go:

package subpkg

import (
    "fmt"
)

type Sub struct {
    Name string
}

func New(name string) (sub *Sub) {
    sub = &Sub{
        Name: name,
    }

    return sub
}

func (sub *Sub) Print() {
    fmt.Printf("Sub Name : %s\n", sub.Name)
}

The code is identical except we named our core type Sub. The package name matches the sub-directory name and New returns a reference to a Sub type.

Now that our packages are properly defined and coded we can use them.

Look at the code for main.go:

package main

import (
    "samplepkg"
    "samplepkg/subpkg"
)

func main() {
    sample := samplepkg.New("Test Sample Package")
    sample.Print()

    sub := subpkg.New("Test Sub Package")
    sub.Print()
}

Since our GOPATH points to the Workspace directory, in my case /Users/bill/Spaces/Test, our import references start from that point. Here we are referencing both packages based on the directory structure.


Next we call the New functions for each respective package and create variables of those core types.

Now build and run the program. You should see that an executable program was created called myprogram.

Once your program is ready for distribution you can run the install command.

The install command will create the bin and pkg folders in your Workspace. Notice the final executable was placed under the bin directory.

The compiled packages were placed under the pkg directory. Within that directory a sub-directory is created that describes the target architecture and mirrors the source directories.

These compiled packages exist so the go tool can avoid recompiling the source code unnecessarily.

The problem with that last statement, found in the "How to Write Go Code" post, is that these .a files are ignored by the go tool when performing future builds of your code. Without the source code files you can't build your program. I have not found any documentation that really explains how these .a files can be used directly to build Go programs. If anyone can shed some light on this topic it would be greatly appreciated.

At the end of the day it is best to follow the conventions handed down from the Go designers. Looking at the source code they have written provides the best documentation for how to do things. Many of us are writing code for the community. If we all follow the same conventions we can be assured of compatibility and readability. When in doubt, open Finder to /usr/local/go/src/pkg and start digging.


As always I hope this helps you understand the Go programming language a little better.

Singleton Design Pattern in Go

Multi-threaded applications are very complicated, especially when your code is not organized and consistent with how resources are accessed, managed and maintained. If you want to minimize bugs you need philosophies and rules to live by. Here are some of mine:

1. Resource allocation and de-allocation should be abstracted and managed within the same type.

2. Resource thread safeness should be abstracted and managed within the same type.

3. A public interface should be the only means to access shared resources.

4. Any thread that allocates a resource should de-allocate the same resource.

In Go we don't have threads but goroutines. The Go runtime abstracts the threading and task swapping of these routines. Regardless, the same philosophies and rules apply.

One of my favorite design patterns is the Singleton. It provides a great implementation when you only need one instance of a type and that type manages shared resources. A Singleton is a design pattern where the type creates an instance of itself and keeps that reference private. Access to the shared resources managed by that reference is abstracted through a static public interface. These static methods also provide thread safeness. The application using the Singleton is responsible for initializing and de-initializing the Singleton but never has direct access to the internals.

It escaped me for some time how to implement a Singleton in Go because Go is not a traditional object oriented programming language and there are no static methods.

I consider Go to be a light object oriented programming language. Yes it does have encapsulation and type member functions but it lacks inheritance and therefore traditional polymorphism. In all of the OOP languages I have ever used, I never used inheritance unless I wanted to implement polymorphism. With the way interfaces are implemented in Go there is no need for inheritance. Go took the best parts of OOP, left out the rest and gave us a better way to write polymorphic code.

In Go we can implement a Singleton by leveraging the scoping and encapsulation rules of packages and types. For this post we will explore my straps package since it will give us a real world example.

The straps package provides a mechanism to store configuration options (straps) in an XML document and read them into memory for use by the application. The name strap comes from the early days of configuring networking equipment. Those settings were called straps and that name has always stuck with me. In the MacOS we have .plist files, in .Net we have app.config files and in Go I have straps.xml files.

Here is a sample straps file for one of my applications:

<straps>
    <!-- Log Settings -->
    <strap key="baseFilePath" value="/Users/bill/Logs/OC-DataServer">
    <strap key="machineName" value="my-machine">
    <strap key="daysToKeep" value="1">

    <!-- ServerManager Settings -->
    <strap key="cpuMultiplier" value="100">
</straps>

The straps package knows how to read this xml file and provide access to the values via a Singleton based public interface. Since these values only need to be read into memory once a Singleton is a great option for this package.

Here is the package and type information for straps:

package straps

import (
    "encoding/xml"
    "io"
    "os"
    "path/filepath"
    "strconv"
)

.

. Types Removed

.

type straps struct {
    StrapMap map[string]string // The map of strap key value pairs
}

var _This *straps // A reference to the singleton

I am not going to talk about the aspects of reading the XML document. If you are interested please read this blog post http://www.goinggo.net/2013/06/reading-xml-documents-in-go.html.

In the code snippet above you will see the package name (straps), the definition of the private type straps and the private package variable _This. The _This variable will contain the reference for the Singleton.

The scoping rules for Go state that types and functions that start with a capital letter are public and accessible outside of the package. Types and functions that start with a lowercase letter are private and not accessible outside of the package.

I name my variables that are defined within the scope of a function with lowercase letters. Variable names defined outside the scope of a function, such as type members and package variables, start with a capital letter. This allows me to look at the code and know instantly where memory for any given variable is being referenced. Luckily for me, Go allows the use of an underscore for variable names and makes them private.

Both the straps type and the _This variable are private and only accessible from within the package.

Look at the Load function which initializes the Singleton for use:

func Load() {
    var xmlStraps []*_XMLStrap
    var file *os.File

    defer func() {
        if file != nil {
            file.Close()
        }
    }()

    // Find the location of the straps.xml file
    strapsFilePath, err := filepath.Abs("straps.xml")

    // Open the straps.xml file
    file, err = os.Open(strapsFilePath)

    // We need this file so panic
    if err != nil {
        panic(err.Error())
    }

    // Read the straps file
    xmlStraps, err = _ReadStraps(file)

    if err != nil {
        panic(err.Error())
    }

    // Create a straps object
    _This = &straps{
        StrapMap: make(map[string]string),
    }

    // Store the key/value pairs for each strap
    for _, strap := range xmlStraps {
        _This.StrapMap[strap.Key] = strap.Value
    }
}

The Load function is a public function of the package. Applications can access this function through the package name. You can see how I use names that start with a lowercase letter for local variables. At the bottom of the Load function a straps object is created and the reference is set to the _This variable. At this point the Singleton exists and straps is ready to use.


Accessing the straps is done with the public function Strap:

func Strap(key string) string {
    strap, found := _This.StrapMap[key]

    if found == false {
        panic("Unable To Locate Key")
    }

    return strap
}

The public function Strap uses the Singleton reference to access the shared resource, in this case the map of straps. If the map could change during the lifetime of the application, then a mutex or some other synchronization object would need to be used to protect the map. Luckily the straps never change once they are loaded.
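As a sketch of that alternative: if the map could be reloaded at runtime, a sync.RWMutex would let many readers proceed concurrently while writers get exclusive access. The rwMutex, strapMap and Reload names here are hypothetical additions for illustration, not part of the original straps package:

```go
package main

import (
	"fmt"
	"sync"
)

// Hypothetical package state: a mutable map guarded by a RWMutex.
var (
	rwMutex  sync.RWMutex
	strapMap = map[string]string{"cpuMultiplier": "100"}
)

// Strap takes a read lock so many goroutines can read concurrently.
func Strap(key string) string {
	rwMutex.RLock()
	defer rwMutex.RUnlock()

	strap, found := strapMap[key]
	if !found {
		panic("Unable To Locate Key")
	}

	return strap
}

// Reload takes the write lock to safely replace the map contents.
func Reload(newStraps map[string]string) {
	rwMutex.Lock()
	defer rwMutex.Unlock()

	strapMap = newStraps
}

func main() {
	fmt.Println(Strap("cpuMultiplier")) // 100

	Reload(map[string]string{"cpuMultiplier": "200"})
	fmt.Println(Strap("cpuMultiplier")) // 200
}
```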

Since the resource being managed by straps is just memory there is no need for an Unload or Close method. If we needed a function to close any resources another public function would have to be created.

If private methods are required in the Singleton package to help organize code, I like to use member functions. One reason is I don't need to use the underscore when naming these methods. Since the type is private, I can make the member functions public because they still won't be accessible outside the package. I also think the member functions help make the code more readable. By looking to see whether a function is a member function or not, I know if it is private or part of the public interface.

Here is an example of using a member function:

func SomePublicFunction() {
    .
    _This.SomePrivateMemberFunction("key")
    .
}

func (straps *straps) SomePrivateMemberFunction(key string) {
    strap, found := straps.StrapMap[key]
    .
}

Since the function is a member function we need to use the _This variable to make the function call. From within the member function I use the local variable (straps) and not the _This variable. The member function is public but the reference is private so only the package can reference the member function. This is just a convention I established for myself.


Here is a sample program that uses the straps package:

package main

import (
    "ArdanStudios/straps"
)

func main() {
    straps.Load()

    cpu := straps.Strap("cpuMultiplier")
    _ = cpu // reference the value so the compiler does not flag an unused variable
}

In main we don't need to allocate any memory or maintain references. Through the package name we call Load to initialize the Singleton. Then through the package name again we access the public interface, in this case the Strap function.

If you have the same need to manage shared resources through a public interface, try using a Singleton.
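For completeness, the standard library also offers sync.Once, which guarantees one-time initialization even when many goroutines race to it. This is an alternative sketch, not the implementation from this post; the config and instance names are hypothetical:

```go
package main

import (
	"fmt"
	"sync"
)

// config plays the role of the straps type in this sketch.
type config struct {
	settings map[string]string
}

var (
	once      sync.Once
	singleton *config
)

// instance lazily creates the one config value. sync.Once guarantees
// the function passed to Do runs exactly once, across all goroutines.
func instance() *config {
	once.Do(func() {
		singleton = &config{
			settings: map[string]string{"cpuMultiplier": "100"},
		}
	})

	return singleton
}

func main() {
	fmt.Println(instance().settings["cpuMultiplier"]) // 100
}
```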

As always I hope this helps you write better and less buggy code.

Object Oriented Programming in Go

Someone asked a question on the forum today on how to gain the benefits of inheritance without embedding. It is really important for everyone to think in terms of Go and not the languages they are leaving behind. I can't tell you how much code I removed from my early Go implementations because it wasn't necessary. The language designers have years of experience and knowledge. Hindsight is helping to create a language that is fast, lean and really fun to code in.

I consider Go to be a light object oriented programming language. Yes it does have encapsulation and type member functions but it lacks inheritance and therefore traditional polymorphism. For me, inheritance is useless unless you want to implement polymorphism. With the way interfaces are implemented in Go, there is no need for inheritance. Go took the best parts of OOP, left out the rest and gave us a better way to write polymorphic code.

Here is a quick view of OOP in Go. Start with these three structs:

type Animal struct {
    Name string
    mean bool
}

type Cat struct {
    Basics       Animal
    MeowStrength int
}


type Dog struct {
    Animal
    BarkStrength int
}

Here are three structs you would probably see in any OOP example. We have a base struct and two other structs that are specific to the base. The Animal structure contains attributes that all animals share and the other two structs are specific to cats and dogs.

All of the member properties are public except for mean. The mean property in the Animal struct starts with a lowercase letter. In Go, the case of the first letter for variables, structs, properties, functions, etc. determines the access specification. Use a capital letter and it's public; use a lowercase letter and it's private. I like using an underscore (_) to make things private. In my program I would have written mean as _Mean.

Since there is no inheritance in Go, composition is your only choice. The Cat struct has a property called Basics which is of type Animal. The Dog struct embeds Animal as an un-named property. It's up to you to decide which is better for you, and I will show you both implementations.

I want to thank John McLaughlin for his comment about un-named structs!!

To create a member function for both Cat and Dog, the syntax is as follows:

func (dog *Dog) MakeNoise() {
    barkStrength := dog.BarkStrength

    if dog.mean == true {
        barkStrength = barkStrength * 5
    }

    for bark := 0; bark < barkStrength; bark++ {
        fmt.Printf("BARK ")
    }

    fmt.Printf("\n")
}

func (cat *Cat) MakeNoise() {
    meowStrength := cat.MeowStrength

    if cat.Basics.mean == true {
        meowStrength = meowStrength * 5
    }

    for meow := 0; meow < meowStrength; meow++ {
        fmt.Printf("MEOW ")
    }

    fmt.Printf("\n")
}


Before the name of the function we specify a receiver, a pointer to the struct type. Now both Cat and Dog have member functions called MakeNoise.

Both these member functions do the same thing. Each animal speaks in their native tongue based on their bark or meow strength and if they are mean. Silly, but it shows you how to access the referenced object.

When using the Dog reference we access the Animal properties directly. With the Cat reference we use the named property called Basics.

One thing that is missing is the famous "this" pointer. If you are really missing "this" you could change the local variable pointer as follows:

func (this *Dog) MakeNoise() {
    barkStrength := this.BarkStrength

    if this.mean == true {
        barkStrength = barkStrength * 5
    }

    for bark := 0; bark < barkStrength; bark++ {
        fmt.Printf("BARK ")
    }

    fmt.Printf("\n")
}

So far we have covered encapsulation, composition, access specifications and member functions. All that is left is how to create polymorphic behavior.

We use interfaces to create polymorphic behavior:

type AnimalSounder interface {
    MakeNoise()
}

func MakeSomeNoise(animalSounder AnimalSounder) {
    animalSounder.MakeNoise()
}

Here we add an interface and a public function that takes a value of this interface type. Actually, the function will take a reference to an object that implements this interface. An interface is not a type that can be instantiated; an interface is a declaration of behavior.

There is a Go convention of naming interfaces with the "er" suffix when the interface only contains one method.
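As a small illustration of that naming convention, here is a sketch of a one-method Stringer interface. The standard library declares an equivalent interface as fmt.Stringer; the Animal type here is just for demonstration:

```go
package main

import "fmt"

// Stringer follows the "er" suffix convention for a one-method
// interface. The standard library's fmt.Stringer looks the same.
type Stringer interface {
	String() string
}

type Animal struct {
	Name string
}

// Animal implements Stringer, so an Animal value is a Stringer.
func (animal Animal) String() string {
	return "Animal: " + animal.Name
}

func main() {
	var s Stringer = Animal{Name: "Rover"}
	fmt.Println(s.String()) // Animal: Rover
}
```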

In Go, any struct type that implements this interface, via a member function, satisfies the interface. In our case both Cat and Dog have implemented the AnimalSounder interface and therefore are considered to be of type AnimalSounder.


This means that objects of both Cat and Dog can be passed as parameters to the MakeSomeNoise function. The MakeSomeNoise function implements polymorphic behavior through the AnimalSounder interface.

If you wanted to reduce the duplication of code in the MakeNoise member functions of Cat and Dog, you could create a member function in Animal to handle it:

func (animal *Animal) PerformNoise(strength int, sound string) {
    if animal.mean == true {
        strength = strength * 5
    }

    for voice := 0; voice < strength; voice++ {
        fmt.Printf("%s ", sound)
    }

    fmt.Printf("\n")
}

func (dog *Dog) MakeNoise() {
    dog.PerformNoise(dog.BarkStrength, "BARK")
}

func (cat *Cat) MakeNoise() {
    cat.Basics.PerformNoise(cat.MeowStrength, "MEOW")
}

Now the Animal type has a member function with the business logic for making noise. The business logic stays within the scope of the objects it belongs to. The other cool benefit is we don't need to pass the mean value in as a parameter because it already belongs to the Animal type.

Here is the complete working sample program:

package main

import (
    "fmt"
)

type Animal struct {
    Name string
    mean bool
}

type AnimalSounder interface {
    MakeNoise()
}


type Dog struct {
    Animal
    BarkStrength int
}

type Cat struct {
    Basics       Animal
    MeowStrength int
}

func main() {
    myDog := &Dog{
        Animal{
            "Rover", // Name
            false,   // mean
        },
        2, // BarkStrength
    }

    myCat := &Cat{
        Basics: Animal{
            Name: "Julius",
            mean: true,
        },
        MeowStrength: 3,
    }

    MakeSomeNoise(myDog)
    MakeSomeNoise(myCat)
}

func (animal *Animal) PerformNoise(strength int, sound string) {
    if animal.mean == true {
        strength = strength * 5
    }

    for voice := 0; voice < strength; voice++ {
        fmt.Printf("%s ", sound)
    }

    fmt.Printf("\n")
}

func (dog *Dog) MakeNoise() {
    dog.PerformNoise(dog.BarkStrength, "BARK")
}

func (cat *Cat) MakeNoise() {
    cat.Basics.PerformNoise(cat.MeowStrength, "MEOW")
}


func MakeSomeNoise(animalSounder AnimalSounder) {
    animalSounder.MakeNoise()
}

Here is the output:

BARK BARK
MEOW MEOW MEOW MEOW MEOW MEOW MEOW MEOW MEOW MEOW MEOW MEOW MEOW MEOW MEOW

Someone posted an example on the board about using interface values inside of a struct. Here is an example:

package main

import (
    "fmt"
)

type HornSounder interface {
    SoundHorn()
}

type Vehicle struct {
    List [2]HornSounder
}

type Car struct {
    Sound string
}

type Bike struct {
    Sound string
}

func main() {
    vehicle := new(Vehicle)
    vehicle.List[0] = &Car{"BEEP"}
    vehicle.List[1] = &Bike{"RING"}

    for _, hornSounder := range vehicle.List {
        hornSounder.SoundHorn()
    }
}

func (car *Car) SoundHorn() {
    fmt.Println(car.Sound)
}

func (bike *Bike) SoundHorn() {
    fmt.Println(bike.Sound)
}

func PressHorn(hornSounder HornSounder) {
    hornSounder.SoundHorn()
}

In this example the Vehicle struct maintains a list of objects that implement the HornSounder interface. In main we create a new vehicle and assign a Car and Bike object to the list. This assignment is possible because Car and Bike both implement the interface. Then using a simple loop, we use the interface to sound the horn.

Everything you need to implement OOP in your application is there in Go. As I said before, Go took the best parts of OOP, left out the rest and gave us a better way to write polymorphic code.

To learn more on related topics check out these posts:

http://www.goinggo.net/2013/07/how-packages-work-in-go-language.html
http://www.goinggo.net/2013/07/singleton-design-pattern-in-go.html

I hope this small example helps you in your future Go programming.

Understanding Type in Go

When I was coding in C/C++ it was imperative to understand type. If you didn't, you would get into a lot of trouble with both the compiler and running your code. Regardless of the language, type touches every aspect of programming syntax. A good understanding of types and pointers is critical to good programming. This post will focus on type.

Take these bytes of memory for starters:

FFE4     FFE3     FFE2     FFE1
00000000 11001011 01100101 00001010

What is the value of the byte at address FFE1? If you try to answer the question you will be wrong. Why, because I have not told you what that byte represents. I have not given you the type information.

What if I say that same byte represents a number? Your answer would probably be 10 and again you would be wrong. Why, because you are assuming that when I said it was a number I meant a base 10 number.

Number Bases:

All numbering systems have a base that they function within. Since you were a baby you were taught to count in base 10. This may be due to the fact that most of us have 10 fingers and 10 toes. Also, it seems natural to perform math in base 10.

Base defines the number of symbols a numbering system contains. In base 10 there are 10 distinct symbols we use to represent the infinite number of things we can count. In base 10 the symbols are 0, 1, 2, 3, 4, 5, 6, 7, 8, 9. Once we reach the symbol 9 we need to grow the length of the number. As an example, 10, 100 and 1000.

There are two other bases we use all the time in computing. Base 2 or binary numbers, such as the bits represented in the diagram above. Base 16 or hexadecimal numbers, such as the addresses represented in the diagram above.

In a binary numbering system (base 2), there are only 2 symbols and those symbols are 0 and 1.

In a hexadecimal numbering system (base 16), there are 16 symbols and those symbols are 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, C, D, E, F.

If there were apples sitting on a table, those apples could be represented in any numbering system. We could say there are:

In Base 2:  10010001 apples
In Base 10: 145 apples
In Base 16: 91 apples

All of those answers are correct when given the correct base.

Notice the number of symbols required in each numbering system to represent those apples. The larger the base, the more efficient the numbering system.

Using base 16 for computer addresses, IP addresses and color codes makes a lot of sense now.

Look at the number for the HTML color white in all three bases:

In Base 2:  1111 1111 1111 1111 1111 1111 (24 characters)
In Base 10: 16777215 (8 characters)
In Base 16: FFFFFF (6 characters)

Which numbering system would you have chosen to represent colors?

Now if I tell you the byte at address FFE1 represents a base 10 number, your answer of 10 is correct.

Type provides two pieces of information that both the compiler and you need to perform the same exercise we just went through.

1. The amount of memory, in bytes, to look at
2. The representation of those bytes

The Go language provides these basic numeric types:

Unsigned Integers
uint8, uint16, uint32, uint64

Signed Integers
int8, int16, int32, int64

Real Numbers
float32, float64

Predeclared Integers
uint, int, uintptr

The names for these keywords provide both pieces of the type information.

The uint8 contains a base 10 number using one byte of memory. The value can be between 0 and 255.

The int32 contains a base 10 number using 4 bytes of memory. The value can be between -2147483648 and 2147483647.

The predeclared integers get mapped based on the architecture you are building the code against. On a 64 bit OS, int will map to int64 and on a 32 bit OS, it will be mapped to int32.

Everything that is stored in memory comes back to one of these numeric types. Strings in Go are just a series of uint8 types, with rules around stringing those bytes together and identifying end of string positions.
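A quick sketch showing those uint8 values directly; indexing a string yields its individual bytes:

```go
package main

import "fmt"

func main() {
	word := "Go!"

	// Indexing a string yields its bytes (uint8 values):
	// 'G' is 71, 'o' is 111, '!' is 33.
	for index := 0; index < len(word); index++ {
		fmt.Printf("%d ", word[index])
	}
	fmt.Printf("\n")
}
```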

A pointer in Go is of type uintptr. Again, based on the OS architecture this will be a uint32 or uint64. It is good that Go created a special type for pointers. In the old days many C programmers would write code that assumed a pointer value would always fit inside an unsigned int. With upgrades to the language and architectures over time, that eventually was no longer true. A lot of code broke because addresses became larger than the predeclared unsigned int.


Struct types are just combinations of known types that also eventually resolve to a numeric type.

type Example struct {
    BoolValue  bool
    IntValue   int16
    FloatValue float32
}

This structure represents a complex type. It represents 7 bytes with three different numeric representations. The bool is one byte, the int16 is 2 bytes and the float32 adds 4 more bytes. However, 8 bytes are actually allocated in memory for this struct.

All memory is allocated on an alignment boundary to minimize memory fragmentation. To determine the alignment boundary Go is using for your architecture, you can run the unsafe.Alignof function. The alignment boundary in Go for the 64bit Darwin platform is 8 bytes. So when Go determines the memory allocation for our structs, it will pad bytes to make sure the final memory footprint is a multiple of 8. The compiler will determine where to add the padding.

If you want to learn more about structure member alignment and padding check out this link:

http://www.geeksforgeeks.org/structure-member-alignment-padding-and-data-packing/

This program shows the padding that Go inserted into the memory footprint for the Example type struct:

package main

import (
    "fmt"
    "unsafe"
)

type Example struct {
    BoolValue  bool
    IntValue   int16
    FloatValue float32
}

func main() {
    example := &Example{
        BoolValue:  true,
        IntValue:   10,
        FloatValue: 3.141592,
    }

    exampleNext := &Example{
        BoolValue:  true,
        IntValue:   10,
        FloatValue: 3.141592,
    }

    alignmentBoundary := unsafe.Alignof(example)

    sizeBool := unsafe.Sizeof(example.BoolValue)
    offsetBool := unsafe.Offsetof(example.BoolValue)

    sizeInt := unsafe.Sizeof(example.IntValue)
    offsetInt := unsafe.Offsetof(example.IntValue)

    sizeFloat := unsafe.Sizeof(example.FloatValue)
    offsetFloat := unsafe.Offsetof(example.FloatValue)

    sizeBoolNext := unsafe.Sizeof(exampleNext.BoolValue)
    offsetBoolNext := unsafe.Offsetof(exampleNext.BoolValue)

    fmt.Printf("Alignment Boundary: %d\n", alignmentBoundary)

    fmt.Printf("BoolValue = Size: %d Offset: %d Addr: %v\n",
        sizeBool, offsetBool, &example.BoolValue)

    fmt.Printf("IntValue = Size: %d Offset: %d Addr: %v\n",
        sizeInt, offsetInt, &example.IntValue)

    fmt.Printf("FloatValue = Size: %d Offset: %d Addr: %v\n",
        sizeFloat, offsetFloat, &example.FloatValue)

    fmt.Printf("Next = Size: %d Offset: %d Addr: %v\n",
        sizeBoolNext, offsetBoolNext, &exampleNext.BoolValue)
}

Here is the output:

Alignment Boundary: 8
BoolValue  = Size: 1  Offset: 0  Addr: 0x21015b018
IntValue   = Size: 2  Offset: 2  Addr: 0x21015b01a
FloatValue = Size: 4  Offset: 4  Addr: 0x21015b01c
Next       = Size: 1  Offset: 0  Addr: 0x21015b020

The alignment boundary for the type struct is 8 bytes as expected.

The size value shows how much memory, for that field, will be read and written to. As expected, the size is inline with the type information.

The offset value shows how many bytes into the memory footprint we will find the start of that field.

The address is where the start of each field, within the memory footprint, can be found.

We can see that Go is padding 1 byte between the BoolValue and IntValue fields. The offset value and the difference between the two addresses is 2 bytes. You can also see that the next allocation of memory is starting 4 bytes away from the last field in the struct.

Let's prove the 8 byte alignment rule by only keeping the 1 byte bool field in the struct:

package main

import (
    "fmt"
    "unsafe"
)

type Example struct {
    BoolValue bool
}

func main() {
    example := &Example{
        BoolValue: true,
    }

    exampleNext := &Example{
        BoolValue: true,
    }

    alignmentBoundary := unsafe.Alignof(example)

    sizeBool := unsafe.Sizeof(example.BoolValue)
    offsetBool := unsafe.Offsetof(example.BoolValue)

    sizeBoolNext := unsafe.Sizeof(exampleNext.BoolValue)
    offsetBoolNext := unsafe.Offsetof(exampleNext.BoolValue)

    fmt.Printf("Alignment Boundary: %d\n", alignmentBoundary)

    fmt.Printf("BoolValue = Size: %d Offset: %d Addr: %v\n",
        sizeBool, offsetBool, &example.BoolValue)

    fmt.Printf("Next = Size: %d Offset: %d Addr: %v\n",
        sizeBoolNext, offsetBoolNext, &exampleNext.BoolValue)
}

And the output:

Alignment Boundary: 8
BoolValue = Size: 1 Offset: 0 Addr: 0x21015b018
Next      = Size: 1 Offset: 0 Addr: 0x21015b020

Subtract the two addresses and you will see there is an 8 byte difference between the two type struct allocations. Also, the next allocation of memory is starting at the same address from the first example. Go is padding 7 bytes to the struct to maintain the alignment boundary.


Regardless of the padding, the value of size truly represents the amount of memory we can read and write to for each field.

We can only manipulate memory when we are working with a numeric type, and the assignment operator (=) is how we do it. To make life easier for us, Go has created some complex types that support the assignment operator directly. Some of these types are strings, arrays and slices. To see a complete list of these types check out this document: http://golang.org/ref/spec#Types.

These complex types abstract the manipulation of the underlying numeric types that can be found in each implementation. In doing so, these complex types can be used like numeric types that directly read and write to memory.

Go is a type safe language. This means that the compiler will always enforce like types on each side of an assignment operator. This is really important because it prevents us from reading or writing to memory incorrectly.

Imagine if we could do the following. If you try to compile this code you will get an error.

type Example struct {
    BoolValue  bool
    IntValue   int16
    FloatValue float32
}

example := &Example{
    BoolValue:  true,
    IntValue:   10,
    FloatValue: 3.141592,
}

var pointer *int32
pointer = *int32(&example.IntValue)
*pointer = 20

What I am trying to do is get the memory address of the 2 byte IntValue field and store it in a pointer of type int32. Then I am trying to use the pointer to write a 4 byte integer into that memory address. If I was able to use that pointer, I would be violating the type rules for the IntValue field and corrupting memory along the way.

FFE8 FFE7 FFE6 FFE5 FFE4 FFE3 FFE2 FFE1
0    0    0    3.14 0    10   0    true

pointer
FFE3

FFE8 FFE7 FFE6 FFE5 FFE4 FFE3 FFE2 FFE1
0    0    0    0    0    20   0    true


Based on the memory footprint above, the pointer would be writing the value of 20 across the 4 bytes between FFE3 and FFE6. The value of IntValue would be 20 as expected but the value of FloatValue would now be 0. Imagine if writing those bytes went outside the memory allocation for this struct and started to corrupt memory in other areas of the application. The bugs that would follow would appear random and unpredictable.

The Go compiler will always make sure assigning memory and casting is safe.

In this casting example the compiler is going to complain:

package main

import (
    "fmt"
)

// Create a new type
type int32Ext int32

func main() {
    // Cast the number 10 to a value of type int32Ext
    var jill int32Ext = 10

    // Assign the value of jill to jack
    // ** cannot use jill (type int32Ext) as type int32 in assignment **
    var jack int32 = jill

    // Assign the value of jill to jack by casting
    // ** the compiler is happy **
    var jack int32 = int32(jill)

    fmt.Printf("%d\n", jack)
}

First we create a new type in the system called int32Ext and tell the compiler this new type represents a single int32. Next we create a new variable called jill and assign the value of 10. The compiler allows the value to be assigned because the numeric type is on the right side of the assignment operator. The compiler knows the assignment is safe.

Now we try to create a second variable called jack of type int32 and assign the jill variable. Here the compiler throws an error:

"cannot use jill (type int32Ext) as type int32 in assignment"

The compiler respects that jill is of type int32Ext and makes no assumptions about the safety of the assignment.

Now we use casting and the compiler allows the assignment and the value prints as expected. When we perform the cast the compiler checks the safety of the assignment. In our case it identifies the values are of the same underlying type and allows the assignment.

This may seem like basic stuff to some of you but it is the foundation for working in any programming language. Even when type is abstracted away you are still manipulating memory and should understand what you are doing.

With this foundation we can talk about pointers in Go and passing parameters to functions next.

As always, I hope this post helps shed some light on things you may not have known.

Understanding Pointers and Memory Allocation

In the documentation provided by the Go language team you will find great information on pointers and memory allocation. Here is a link to that documentation:

http://golang.org/doc/faq#Pointers

We need to start with the understanding that all variables contain a value. The type of the variable determines how we can use it to manipulate the memory it contains. Read this post to learn more: Understanding Type In Go

In Go we can create variables that contain the "value of" the object itself or an address to the object. When the "value of" the variable is an address, the variable is considered a pointer.

In the diagram below we have a variable called myVariable. The "value of" myVariable is the address to an object that was allocated of the same type. myVariable is considered a pointer variable.

In the next diagram the "value of" myVariable is the object itself, not a reference to the object.
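The two cases can be sketched in code. This is my own illustration (the type and variable names are hypothetical, not from the post):

```go
package main

import "fmt"

type Object struct {
	Value int
}

func main() {
	// The "value of" pointerVariable is an address to an object.
	pointerVariable := &Object{Value: 10}

	// The "value of" valueVariable is the object itself.
	valueVariable := Object{Value: 10}

	fmt.Printf("pointerVariable holds an address: %p\n", pointerVariable)
	fmt.Printf("valueVariable holds the object:   %v\n", valueVariable)
}
```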


To access properties of an object we use a selector operator. The selector operator allows us to access a specific field in the object. The syntax is always Object.FieldName, where the period (.) is the selector operator.

In the C programming language we need to use different selector operators depending on the type of variable we are using. If the "value of" the variable is the object, we use a period (.). If the "value of" the variable is an address, we use an arrow (->).

One really nice thing about Go is that you don't need to worry about what type of selector operator to use. In Go we only use the period (.) regardless if the variable is the object or a pointer. The compiler takes care of the underlying details to access the object.
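A quick sketch of that point (the User type here is my own example):

```go
package main

import "fmt"

type User struct {
	Name string
}

func main() {
	// The "value of" user is the object itself.
	user := User{Name: "Bill"}

	// The "value of" pointer is the address of user.
	pointer := &user

	// In C we would need user.Name and pointer->Name. In Go the
	// period works for both; the compiler dereferences for us.
	fmt.Println(user.Name)
	fmt.Println(pointer.Name)
}
```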

So why is all of this important? It becomes important when we start using functions to abstract and break up logic. Eventually you need to pass variables to these functions and you need to know what you are passing.

In Go, variables are passed to a function by value. That means the "value of" each variable that is specified is copied onto the stack for access by that function. In this example we call a function that is supposed to change the value of an object that is allocated in main.

package main

import (
    "fmt"
    "unsafe"
)

type MyType struct {
    Value1 int
    Value2 string
}

func main() {
    // Allocate an object of type MyType
    myObject := MyType{10, "Bill"}

    // Create a pointer to the memory for myObject
    //  For Display Purposes
    pointer := unsafe.Pointer(&myObject)

    // Display the address and values
    fmt.Printf("Addr: %v Value1 : %d Value2: %s\n",
        pointer,
        myObject.Value1,
        myObject.Value2)

    // Change the values of myObject
    ChangeMyObject(myObject)

    // Display the address and values
    fmt.Printf("Addr: %v Value1 : %d Value2: %s\n",
        pointer,
        myObject.Value1,
        myObject.Value2)
}

func ChangeMyObject(myObject MyType) {
    // Change the values of myObject
    myObject.Value1 = 20
    myObject.Value2 = "Jill"

    // Create a pointer to the memory for myObject
    pointer := unsafe.Pointer(&myObject)

    // Display the address and values
    fmt.Printf("Addr: %v Value1 : %d Value2: %s\n",
        pointer,
        myObject.Value1,
        myObject.Value2)
}

Here is the output of the program:

Addr: 0x2101bc000 Value1 : 10 Value2: Bill
Addr: 0x2101bc040 Value1 : 20 Value2: Jill
Addr: 0x2101bc000 Value1 : 10 Value2: Bill

So what went wrong? The changes the function made to main's myObject did not stay changed after the function call. The "value of" the myObject variable in main does not contain a reference to the object, it is not a pointer. The "value of" the myObject variable in main is the object. When we pass the "value of" the myObject variable in main to the function, a copy of the object is placed on the stack. The function is altering its own version of the object. Once the function terminates, the stack is popped, and the copy is technically gone. The "value of" the myObject variable in main is never touched.

To fix this we can allocate the memory in a way to get a reference back. Then the "value of" the myObject variable in main will be the address to the new object, a pointer variable. Then we can change the function to accept the "value of" an address to the object.

package main

import (
    "fmt"
    "unsafe"
)

type MyType struct {
    Value1 int
    Value2 string
}

func main() {
    // Allocate an object of type MyType
    myObject := &MyType{10, "Bill"}

    // Create a pointer to the memory for myObject
    //  For Display Purposes
    pointer := unsafe.Pointer(myObject)

    // Display the address and values
    fmt.Printf("Addr: %v Value1 : %d Value2: %s\n",
        pointer,
        myObject.Value1,
        myObject.Value2)

    // Change the values of myObject
    ChangeMyObject(myObject)

    // Display the address and values
    fmt.Printf("Addr: %v Value1 : %d Value2: %s\n",
        pointer,
        myObject.Value1,
        myObject.Value2)
}

func ChangeMyObject(myObject *MyType) {
    // Change the values of myObject
    myObject.Value1 = 20
    myObject.Value2 = "Jill"

    // Create a pointer to the memory for myObject
    pointer := unsafe.Pointer(myObject)

    // Display the address and values
    fmt.Printf("Addr: %v Value1 : %d Value2: %s\n",
        pointer,
        myObject.Value1,
        myObject.Value2)
}

When we use the ampersand (&) operator to allocate the object, a reference is returned. That means the "value of" the myObject variable in main is now a pointer variable, whose value is the address of the newly allocated object. When we pass the "value of" the myObject variable in main to the function, the function's myObject variable now contains the address to the object, not a copy. We now have two pointers pointing to the same object: the myObject variable in main and the myObject variable in the function.

If we run the program again, the function is now working the way we wanted it to. It changes the value of the object allocated in main.

Addr: 0x2101bc000 Value1 : 10 Value2: Bill
Addr: 0x2101bc000 Value1 : 20 Value2: Jill
Addr: 0x2101bc000 Value1 : 20 Value2: Jill


During the function call the object is no longer being copied on the stack, the address to the object is being copied. The function is now referencing the same object, via the local pointer variable, and changing the values.

The Go document titled "Effective Go" has a great section about memory allocations, which includes how arrays, slices and maps work:

http://golang.org/doc/effective_go.html#allocation_new

Let's talk about the keywords new and make.

The new keyword is used to allocate objects of a specified type in memory. The memory allocation is zeroed out. The memory can't be further initialized on the call to new. In other words, you can't specify specific values for properties of the specified type when using new.
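A minimal sketch of new, using the MyType struct from the examples above:

```go
package main

import "fmt"

type MyType struct {
	Value1 int
	Value2 string
}

func main() {
	// new returns a pointer to zeroed memory for MyType.
	myObject := new(MyType)

	// Both fields start at their zero values: 0 and "".
	fmt.Printf("%+v\n", *myObject)

	// Any initialization must happen after the call to new.
	myObject.Value1 = 10
	myObject.Value2 = "Bill"
	fmt.Printf("%+v\n", *myObject)
}
```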

If you want to specify values at the time you allocate the object, use a composite literal. They come in two flavors, with or without specifying the field names.

    // Allocate an object of type MyType
    // Values must be in the correct order
    myObject := &MyType{10, "Bill"}

    // Allocate an object of type MyType
    // Use labeling to specify the values
    myObject := &MyType{
        Value1: 10,
        Value2: "Bill",
    }

The make keyword is used to allocate and initialize slices, maps and channels only. Make does not return a reference, it returns the "value of" a data structure that is created and initialized to manipulate the new slice, map or channel. This data structure contains references to other data structures that are used to manipulate the slice, map or channel.
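Here is a short sketch of the three uses of make (the values are my own illustration):

```go
package main

import "fmt"

func main() {
	// make initializes the internal data structures for a
	// slice, map or channel and returns a ready-to-use value.
	slice := make([]int, 0, 5)
	lookup := make(map[string]string)
	channel := make(chan int, 1)

	lookup["Bill"] = "Jill"
	channel <- 10

	fmt.Println(len(slice), cap(slice))
	fmt.Println(lookup["Bill"])
	fmt.Println(<-channel)
}
```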

How does passing a map by value to a function work? Look at this sample code:

package main

import (
    "fmt"
    "unsafe"
)

type MyType struct {
    Value1 int
    Value2 string
}

func main() {
    myMap := make(map[string]string)
    myMap["Bill"] = "Jill"

    pointer := unsafe.Pointer(&myMap)
    fmt.Printf("Addr: %v Value : %s\n", pointer, myMap["Bill"])

    ChangeMyMap(myMap)
    fmt.Printf("Addr: %v Value : %s\n", pointer, myMap["Bill"])

    ChangeMyMapAddr(&myMap)
    fmt.Printf("Addr: %v Value : %s\n", pointer, myMap["Bill"])
}

func ChangeMyMap(myMap map[string]string) {
    myMap["Bill"] = "Joan"

    pointer := unsafe.Pointer(&myMap)

    fmt.Printf("Addr: %v Value : %s\n", pointer, myMap["Bill"])
}

// Don't Do This, Just For Use In This Article
func ChangeMyMapAddr(myMapPointer *map[string]string) {
    (*myMapPointer)["Bill"] = "Jenny"

    pointer := unsafe.Pointer(myMapPointer)

    fmt.Printf("Addr: %v Value : %s\n", pointer, (*myMapPointer)["Bill"])
}

Here is the output for the program:

Addr: 0x21015b018 Value : Jill
Addr: 0x21015b028 Value : Joan
Addr: 0x21015b018 Value : Joan
Addr: 0x21015b018 Value : Jenny
Addr: 0x21015b018 Value : Jenny

We make a map and add a single key called "Bill" assigning the value of "Jill". Then we pass the value of the map variable to the ChangeMyMap function. Remember the myMap variable is not a pointer so the "value of" myMap, which is a data structure, is copied onto the stack during the function call. Because the "value of" myMap is a data structure that contains references to the internals of the map, the function can use its copy of the data structure to make changes to the map that will be seen by main after the function call.

If you look at the output you can see that when we pass the map by value, the function has its own copy of the map data structure. You can see changes made to the map are reflected after the function call. In main we display the value of the map key "Bill" after the function call and it has changed.

It is unnecessary but the ChangeMyMapAddr function shows how you could pass and use a reference to the myMap variable in main. Again the Go team has made sure passing the "value of" a map variable can be performed without problems. Notice how we need to dereference the myMapPointer variable when we want to access the map. This is because the Go compiler will not allow us to access the map through a pointer variable directly. Dereferencing a pointer variable is equivalent to having a variable whose value is the object.

I have taken the time to write this post because sometimes it can be confusing as to what the "value of" your variable contains. If the "value of" your variable is a large object, and you pass the "value of" that variable to a function, you will be making a large copy of that variable on the stack. You want to make sure you're passing addresses to your functions unless you have a very special use case.

Maps, slices and channels are different. You can pass these variables by value without any concern. When we pass a map variable to a function, we are copying a data structure not the entire map.
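The same sharing happens with slices: the slice header is copied, but it still references the same backing array. A small sketch (the function name is mine, for illustration):

```go
package main

import "fmt"

// ChangeFirst receives a copy of the slice header, but that
// header still references the same backing array as main's slice.
func ChangeFirst(numbers []int) {
	numbers[0] = 99
}

func main() {
	numbers := []int{1, 2, 3}
	ChangeFirst(numbers)

	// The change is visible in main even though the slice
	// was passed by value.
	fmt.Println(numbers)
}
```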

I recommend that you read the Effective Go documentation. I have read that document several times since I started programming in Go. As I gain more experience I always go back and read this document again. I always pick up on something new that didn't make sense to me before.

Gustavo's IEEE-754 Brain Teaser

Back in June, Gustavo Niemeyer posted the following question on his Labix.org blog:

Assume uf is an unsigned integer with 64 bits that holds the IEEE-754 representation for a binary floating point number of that size.

How can you tell if uf represents an integer number?

I can't speak for you, but I write business applications. I just don't have the background to quickly knock out an answer for a question like this. Fortunately Gustavo posted the logic for the answer. I thought it would be fun to better understand the question and break down his answer. I am going to work with a 32 bit number just to make everything easier.

How does the IEEE-754 standard store a floating point number in a binary format?  To start I found these two pages:

http://steve.hollasch.net/cgindex/coding/ieeefloat.html
http://class.ece.iastate.edu/arun/CprE281_F05/ieee754/ie5.html

The IEEE-754 specification represents a floating point number in base 2 scientific notation using a special binary format. If you don't know what I mean by base 2 then look at my post on Understanding Type In Go (http://www.goinggo.net/2013/07/understanding-type-in-go.html).

89

Page 90: Going Go Programming

Scientific notation is an efficient way of writing very large or small numbers. It works by using a decimal format with a multiplier. Here are some examples:

Base 10 Number   Scientific Notation   Calculation      Coefficient   Base   Exponent   Mantissa
700              7e+2                  7 * 10^2         7             10     2          0
4,900,000,000    4.9e+9                4.9 * 10^9       4.9           10     9          .9
5362.63          5.36263e+3            5.36263 * 10^3   5.36263       10     3          .36263
-0.00345         -3.45e-3              -3.45 * 10^-3    -3.45         10     -3         .45
0.085            1.36e-4               1.36 * 2^-4      1.36          2      -4         .36

In normal scientific notation form there is always just one digit on the left side of the decimal point. For base 10 numbers that digit must be between 1 and 9 and for base 2 numbers that digit can only be 1.

The Coefficient is the complete decimal number in the notation (such as 5.36263) and the Mantissa is all the digits to the right of the decimal point (.36263). These terms are important so take the time to study and understand the chart above.

How we move the decimal point to that first position determines the value of the Exponent. If we have to move the decimal point to the left, the Exponent is a positive number, to the right, it is a negative number. Look at the chart above and see the Exponent value for each example.

The Base and the Exponent work together in the notation. The exponent determines the "Power Of" calculation we need to perform on the base. In the first example the number 7 is multiplied by 10 (The Base) to the power of 2 (The Exponent) to get back to the original base 10 number 700. We moved the decimal point to the left two places to convert 700 to 7.00, which made the Exponent +2 and created the notation of 7e+2.

The IEEE-754 standard does not store base 10 scientific notation numbers but base 2 scientific notation numbers. The last example in the chart above represents the base 10 number 0.085 in base 2 scientific notation. Let's learn how that notation is calculated.

Base 10 Number   Scientific Notation   Calculation   Coefficient   Base   Exponent   Mantissa
0.085            1.36e-4               1.36 * 2^-4   1.36          2      -4         .36

We need to divide the base 10 number (0.085) by some power of two so we get a 1 + Fraction value. What do I mean by a 1 + Fraction value? We need a number that looks like the Coefficient in the example, 1 + .36. The IEEE-754 standard requires that we have a "1." in the Coefficient. This allows us to only have to store the Mantissa and give us an extra bit of precision.

If we use brute force you will see when we finally get the 1 + Fraction value for 0.085:


0.085 / 2^-1 = 0.17
0.085 / 2^-2 = 0.34
0.085 / 2^-3 = 0.68
0.085 / 2^-4 = 1.36   ** We found the 1 + Fraction

An exponent of -4 gives us the 1 + Fraction value we need. Now we have everything we need to store the base 10 number 0.085 in IEEE-754 format.

Let's look at how the bits are laid out in the IEEE-754 format.

Precision          Sign     Exponent     Fraction Bits   Bias
Single (32 Bits)   1 [31]   8 [30-23]    23 [22-00]      127
Double (64 Bits)   1 [63]   11 [62-52]   52 [51-00]      1023

The bits are broken into three parts. There is a bit reserved for the sign, bits for the exponent and bits that are called fraction bits. The fraction bits are where we store the mantissa as a binary fraction.

If we store the value of 0.085 using Single Precision (a 32 bit number), the bit pattern in IEEE-754 would look like this:

Sign   Exponent (123)   Fraction Bits (.36)
0      0111 1011        010 1110 0001 0100 0111 1011

The Sign bit, the leftmost bit, determines if the number is positive or negative. If the bit is set to 1 then the number is negative else it is positive.

The next 8 bits represent the Exponent. In our example, the base 10 number 0.085 is converted to 1.36 * 2^-4 using base 2 scientific notation. Therefore the value of the exponent is -4. In order to be able to represent negative numbers, there is a Bias value. The Bias value for our 32 bit representation is 127. To represent the number -4, we need to find the number that, when the Bias is subtracted from it, gives us -4. In our case that number is 123. If you look at the bit pattern for the Exponent you will see it represents the number 123 in binary.

The remaining 23 bits are the Fraction bits. To calculate the bit pattern for the fraction bits, we need to calculate and sum binary fractions until we get the value of the Mantissa, or a value that is as close as possible. Remember, we only need to store the Mantissa because we always assume that the "1." value exists.

To understand how binary fractions are calculated, look at the following chart. Each bit position from left to right represents a fractional value:

Binary   Fraction   Decimal   Power
0.1      1/2        0.5       2^-1
0.01     1/4        0.25      2^-2
0.001    1/8        0.125     2^-3

We need to set the correct fraction bits so that their sum equals, or gets as close as possible to, the mantissa. This is why we can sometimes lose precision.
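That precision loss is easy to see for 0.085 itself (a quick sketch of my own):

```go
package main

import "fmt"

func main() {
	// 0.085 has no exact binary representation, so the stored
	// float32 value is only an approximation of it.
	var number float32 = 0.085

	// Printing extra digits exposes the lost precision.
	fmt.Printf("%.12f\n", number)
}
```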

010 1110 0001 0100 0111 1011 = (0.36)

Bit   Value     Fraction    Decimal            Total
2     4         1/4         0.25               0.25
4     16        1/16        0.0625             0.3125
5     32        1/32        0.03125            0.34375
6     64        1/64        0.015625           0.359375
11    2048      1/2048      0.00048828125      0.35986328125
13    8192      1/8192      0.0001220703125    0.3599853515625
17    131072    1/131072    0.00000762939453   0.35999298095703
18    262144    1/262144    0.00000381469727   0.3599967956543
19    524288    1/524288    0.00000190734863   0.35999870300293
20    1048576   1/1048576   0.00000095367432   0.35999965667725
22    4194304   1/4194304   0.00000023841858   0.35999989509583
23    8388608   1/8388608   0.00000011920929   0.36000001430512

You can see that setting these 12 bits gets us to the value of 0.36 plus some extra fractions.

Let's sum up what we now know about the IEEE-754 format:

1. Any base 10 number to be stored is converted to base 2 scientific notation.
2. The base 2 scientific notation value we use must follow the 1 + Fraction format.
3. There are three distinct sections in the format.
4. The Sign bit determines if the number is positive or negative.
5. The Exponent bits represent a number that needs to be subtracted against the Bias.
6. The Fraction bits represent the Mantissa using binary fraction summation.

Let's prove that our analysis is correct about the IEEE-754 format. We should be able to store the number 0.085 and display bit patterns and values for each section that match everything we have seen.

The following code stores the number 0.085 and displays the IEEE-754 binary representation:


package main

import (
    "fmt"
    "math"
)

func main() {
    var number float32 = 0.085

    fmt.Printf("Starting Number: %f\n\n", number)

    // Float32bits returns the IEEE 754 binary representation
    bits := math.Float32bits(number)

    binary := fmt.Sprintf("%.32b", bits)

    fmt.Printf("Bit Pattern: %s | %s %s | %s %s %s %s %s %s\n\n",
        binary[0:1],
        binary[1:5], binary[5:9],
        binary[9:12], binary[12:16], binary[16:20],
        binary[20:24], binary[24:28], binary[28:32])

    bias := 127
    sign := bits & (1 << 31)
    exponentRaw := int(bits >> 23)
    exponent := exponentRaw - bias

    var mantissa float64
    for index, bit := range binary[9:32] {
        if bit == 49 {
            position := index + 1
            bitValue := math.Pow(2, float64(position))
            fractional := 1 / bitValue

            mantissa = mantissa + fractional
        }
    }

    value := (1 + mantissa) * math.Pow(2, float64(exponent))

    fmt.Printf("Sign: %d Exponent: %d (%d) Mantissa: %f Value: %f\n\n",
        sign,
        exponentRaw,
        exponent,
        mantissa,
        value)
}


When we run the program we get the following output:

Starting Number: 0.085000

Bit Pattern: 0 | 0111 1011 | 010 1110 0001 0100 0111 1011

Sign: 0 Exponent: 123 (-4) Mantissa: 0.360000 Value: 0.085000

If you compare the displayed bit pattern with our example above, you will see that it matches. Everything we learned about IEEE-754 is true.

Now we should be able to answer Gustavo's question. How can we tell if the value being stored is an integer? Here is a function, thanks to Gustavo's code, that tests if the IEEE-754 stored value is an integer:

func IsInt(bits uint32, bias int) {
    exponent := int(bits>>23) - bias - 23
    coefficient := (bits & ((1 << 23) - 1)) | (1 << 23)
    intTest := coefficient & ((1 << uint32(-exponent)) - 1)

    fmt.Printf("\nExponent: %d Coefficient: %d IntTest: %d\n",
        exponent,
        coefficient,
        intTest)

    if exponent < -23 {
        fmt.Printf("NOT INTEGER\n")
        return
    }

    if exponent < 0 && intTest != 0 {
        fmt.Printf("NOT INTEGER\n")
        return
    }

    fmt.Printf("INTEGER\n")
}

So how does this function work?

Let's start with the first condition, which tests if the Exponent is less than -23. If we use the number 1 as our test number, the stored exponent will be 127, which is the same as the Bias. This means when we subtract the Bias from the stored exponent we will get zero.

Starting Number: 1.000000

Bit Pattern: 0 | 0111 1111 | 000 0000 0000 0000 0000 0000

Sign: 0 Exponent: 127 (0) Mantissa: 0.000000 Value: 1.000000


Exponent: -23 Coefficient: 8388608 IntTest: 0
INTEGER

The test function adds an extra subtraction of 23, which represents the starting bit position for the Exponent in the IEEE-754 format. That is why you see -23 for the Exponent value coming from the test function.

Precision          Sign     Exponent    Fraction Bits   Bias
Single (32 Bits)   1 [31]   8 [30-23]   23 [22-00]      127

This subtraction is required to help with the second test. Any exponent value less than -23 represents a number that must be less than one (1) and therefore not an integer.

To understand how the second test works, let's use an integer value. This time we will set the number to 234523 in the code and run the program again.

Starting Number: 234523.000000

Bit Pattern: 0 | 1001 0000 | 110 0101 0000 0110 1100 0000

Sign: 0 Exponent: 144 (17) Mantissa: 0.789268 Value: 234523.000000

Exponent: -6 Coefficient: 15009472 IntTest: 0
INTEGER

The second test looks for two conditions to identify if the number is not an integer. This requires the use of bitwise mathematics. Let's look at the math we are performing in the function:

    exponent := int(bits >> 23) - bias - 23
    coefficient := (bits & ((1 << 23) - 1)) | (1 << 23)
    intTest := coefficient & ((1 << uint32(-exponent)) - 1)

The coefficient calculation is adding the 1 + to the Mantissa so we have the base 2 Coefficient value.

When we look at the first part of the coefficient calculation we see the following bit patterns:

coefficient := (bits & ((1 << 23) - 1)) | (1 << 23)

Bits:                   01001000011001010000011011000000
(1 << 23) - 1:          00000000011111111111111111111111
bits & ((1 << 23) - 1): 00000000011001010000011011000000

The first part of the coefficient calculation removes the bits for the Sign and Exponent from the entire IEEE-754 bit pattern.


The second part of the coefficient calculation adds the "1 +" into the binary bit pattern:

coefficient := (bits & ((1 << 23) - 1)) | (1 << 23)

bits & ((1 << 23) - 1): 00000000011001010000011011000000
(1 << 23):              00000000100000000000000000000000
coefficient:            00000000111001010000011011000000

Now that the coefficient bit pattern is set, we can calculate the intTest value:

exponent := int(bits >> 23) - bias - 23
intTest := coefficient & ((1 << uint32(-exponent)) - 1)

exponent:                     (144 - 127 - 23) = -6
1 << uint32(-exponent):       1000000
(1 << uint32(-exponent)) - 1:  111111

coefficient:                  00000000111001010000011011000000
(1 << uint32(-exponent)) - 1: 00000000000000000000000000111111
intTest:                      00000000000000000000000000000000

The value of the exponent we calculate in the test function is used to determine the number of bits we will compare against the Coefficient. In this case the exponent value is -6. That is calculated by subtracting the Bias (127) and the starting bit position of the Exponent (23) from the stored Exponent value (144). This gives us a bit pattern of 6 ones (1's). The final operation takes those 6 bits and AND's them against the rightmost 6 bits of the Coefficient to get the intTest value.

The second test is looking for an exponent value that is less than zero (0) and an intTest value that is NOT zero (0). This would indicate the number being stored is not an integer. In our example with 234523, the Exponent is less than zero (0), but the value of intTest is zero (0). We have an integer.

I have included the sample code in the Go playground so you can play with it.

http://play.golang.org/p/3xraud43pi

If it wasn't for Gustavo's code I could never have identified the solution. Here is a link to his solution:

http://bazaar.launchpad.net/~niemeyer/strepr/trunk/view/6/strepr.go#L160

Here is a copy of the code that you can copy and paste:

package main

import (
    "fmt"
    "math"
)

func main() {
    var number float32 = 234523

    fmt.Printf("Starting Number: %f\n\n", number)

    // Float32bits returns the IEEE 754 binary representation
    bits := math.Float32bits(number)

    binary := fmt.Sprintf("%.32b", bits)

    fmt.Printf("Bit Pattern: %s | %s %s | %s %s %s %s %s %s\n\n",
        binary[0:1],
        binary[1:5], binary[5:9],
        binary[9:12], binary[12:16], binary[16:20],
        binary[20:24], binary[24:28], binary[28:32])

    bias := 127
    sign := bits & (1 << 31)
    exponentRaw := int(bits >> 23)
    exponent := exponentRaw - bias

    var mantissa float64
    for index, bit := range binary[9:32] {
        if bit == 49 {
            position := index + 1
            bitValue := math.Pow(2, float64(position))
            fractional := 1 / bitValue

            mantissa = mantissa + fractional
        }
    }

    value := (1 + mantissa) * math.Pow(2, float64(exponent))

    fmt.Printf("Sign: %d Exponent: %d (%d) Mantissa: %f Value: %f\n\n",
        sign,
        exponentRaw,
        exponent,
        mantissa,
        value)

    IsInt(bits, bias)
}

func IsInt(bits uint32, bias int) {
    exponent := int(bits>>23) - bias - 23
    coefficient := (bits & ((1 << 23) - 1)) | (1 << 23)
    intTest := coefficient & ((1 << uint32(-exponent)) - 1)

    ShowBits(bits, bias, exponent)

    fmt.Printf("\nExp: %d Frac: %d IntTest: %d\n",
        exponent,
        coefficient,
        intTest)

    if exponent < -23 {
        fmt.Printf("NOT INTEGER\n")
        return
    }

    if exponent < 0 && intTest != 0 {
        fmt.Printf("NOT INTEGER\n")
        return
    }

    fmt.Printf("INTEGER\n")
}

func ShowBits(bits uint32, bias int, exponent int) {
    value := (1 << 23) - 1
    value2 := (bits & ((1 << 23) - 1))
    value3 := (1 << 23)
    coefficient := (bits & ((1 << 23) - 1)) | (1 << 23)

    fmt.Printf("Bits:\t\t\t%.32b\n", bits)
    fmt.Printf("(1 << 23) - 1:\t\t%.32b\n", value)
    fmt.Printf("bits & ((1 << 23) - 1):\t\t%.32b\n\n", value2)

    fmt.Printf("bits & ((1 << 23) - 1):\t\t%.32b\n", value2)
    fmt.Printf("(1 << 23):\t\t\t%.32b\n", value3)
    fmt.Printf("coefficient:\t\t\t%.32b\n\n", coefficient)

    value5 := 1 << uint32(-exponent)
    value6 := (1 << uint32(-exponent)) - 1
    intTest := (coefficient & ((1 << uint32(-exponent)) - 1))

    fmt.Printf("1 << uint32(-exponent):\t\t%.32b\n", value5)
    fmt.Printf("(1 << uint32(-exponent)) - 1:\t%.32b\n\n", value6)

    fmt.Printf("coefficient:\t\t\t%.32b\n", coefficient)
    fmt.Printf("(1 << uint32(-exponent)) - 1:\t%.32b\n", value6)
    fmt.Printf("intTest:\t\t\t%.32b\n", intTest)
}

I want to thank Gustavo for posting the question and giving me something to really fight through to understand.


Using Time, Timezones and Location in Go

I ran into a problem today. I was building code to consume NOAA's tide station XML document and quickly realized I was in trouble. Here is a small piece of that XML document:

<timezone>LST/LDT</timezone>
<item>
<date>2013/01/01</date>
<day>Tue</day>
<time>02:06 AM</time>
<predictions_in_ft>19.7</predictions_in_ft>
<predictions_in_cm>600</predictions_in_cm>
<highlow>H</highlow>
</item>

If you notice the timezone tag, it states the time is in Local Standard Time / Local Daylight Time. This is a real problem because I need to store this data in UTC. Without a proper timezone I am lost. After scratching my head for a bit my business partner showed me two API's that take a latitude and longitude position and return timezone information. Luckily for me I have a latitude and longitude position for each tide station.

If you open this web page you can read the documentation for Google's Timezone API:

https://developers.google.com/maps/documentation/timezone/

The API is fairly simple. It requires a location, a timestamp and a flag to identify if the requesting application is using a sensor, like a GPS device, to determine the location.

Here is a sample call to the Google API and response:

https://maps.googleapis.com/maps/api/timezone/json?location=38.85682,-92.991714&sensor=false&timestamp=1331766000

{
    "dstOffset" : 3600.0,
    "rawOffset" : -21600.0,
    "status" : "OK",
    "timeZoneId" : "America/Chicago",
    "timeZoneName" : "Central Daylight Time"
}

There is a limit of 2,500 calls a day. For my initial load of the tide stations, I knew I was going to hit that limit and I didn't want to wait several days to load all the data. So my business partner found the timezone API from GeoNames.

If you open this web page you can read the documentation for GeoNames's Timezone API:

http://www.geonames.org/export/web-services.html#timezone

The API requires a free account which is quick to set up. Once you activate your account you need to find the account page and activate your username for use with the API.

Here is a sample call to the GeoNames API and response:

http://api.geonames.org/timezoneJSON?lat=47.01&lng=10.2&username=demo

{
    "time":"2013-08-09 00:54",
    "countryName":"Austria",
    "sunset":"2013-08-09 20:40",
    "rawOffset":1,
    "dstOffset":2,
    "countryCode":"AT",
    "gmtOffset":1,
    "lng":10.2,
    "sunrise":"2013-08-09 06:07",
    "timezoneId":"Europe/Vienna",
    "lat":47.01
}

This API returns a bit more information. There is no limit to the number of calls you can make but the response times are not guaranteed. I used it for several thousand calls and had no problems.

So now we have two different web calls we can use to get the timezone information. Let's look at how we can use Go to make the Google web call and get an object back that we can use in our program.

First, we need to define a new type that can contain the information we will get back from the API.

package main

import (    "encoding/json"    "fmt"    "io/ioutil"    "net/http"    "time")

const (
    _GOOGLE_URI string = "https://maps.googleapis.com/maps/api/timezone/json?location=%f,%f&timestamp=%d&sensor=false"
)

type GoogleTimezone struct {
    DstOffset    float64 `bson:"dstOffset"`
    RawOffset    float64 `bson:"rawOffset"`
    Status       string  `bson:"status"`
    TimezoneId   string  `bson:"timeZoneId"`
    TimezoneName string  `bson:"timeZoneName"`
}

Go has awesome support for JSON and XML. If you look at the GoogleTimezone struct you will see that each field contains a "tag". A tag is extra data attached to each field that can later be retrieved by our program using reflection. To learn more about tags read this document:

http://golang.org/pkg/reflect/#StructTag

The encoding/json package has defined a set of tags it looks for to help with marshaling and unmarshaling JSON data. To learn more about the JSON support in Go read these documents:

http://golang.org/doc/articles/json_and_go.html
http://golang.org/pkg/encoding/json/

If you make the field names in your struct the same as the field names in the JSON document, you don't need to use tags at all. In fact, the tags shown here are bson tags for storing the document in MongoDB; since there are no json tags, the Unmarshal function falls back to a case-insensitive match between the struct field names and the JSON keys, which is how the mapping works in this program.

Let's look at a function that can make the API call to Google and unmarshal the JSON document to our new type:

func RetrieveGoogleTimezone(latitude float64, longitude float64) (googleTimezone *GoogleTimezone, err error) {
    defer func() {
        if r := recover(); r != nil {
            err = fmt.Errorf("%v", r)
        }
    }()

    uri := fmt.Sprintf(_GOOGLE_URI, latitude, longitude, time.Now().UTC().Unix())

    resp, err := http.Get(uri)

    defer func() {
        if resp != nil {
            resp.Body.Close()
        }
    }()

    if err != nil {
        return nil, err
    }

    // Convert the response to a byte array
    var rawDocument []byte
    rawDocument, err = ioutil.ReadAll(resp.Body)

    if err != nil {
        return nil, err
    }

    // Unmarshal the response to a GoogleTimezone object
    err = json.Unmarshal(rawDocument, &googleTimezone)

    if err != nil {
        return nil, err
    }

    if googleTimezone.Status != "OK" {
        return nil, fmt.Errorf("Error : Google Status : %s", googleTimezone.Status)
    }

    if len(googleTimezone.TimezoneId) == 0 {
        return nil, fmt.Errorf("Error : No Timezone Id Provided")
    }

    return googleTimezone, err
}

The web call and error handling are fairly boilerplate so let's just talk briefly about the Unmarshal call.

    var rawDocument []byte
    rawDocument, err = ioutil.ReadAll(resp.Body)

    err = json.Unmarshal(rawDocument, &googleTimezone)

When the web call returns, we take the response and store it in a byte array. Then we call the json Unmarshal function, passing the byte array and a reference to our return type pointer variable. The Unmarshal call creates an object of type GoogleTimezone, extracts and copies the data from the returned JSON document and sets the value of our pointer variable. It's really brilliant. If any fields can't be mapped they are simply ignored. The Unmarshal call will return an error if there are casting issues.

So this is great, we can get the timezone data and unmarshal it to an object with three lines of code. Now the only problem is, how the heck do we use the timezoneid to set our location?

Here is the problem again. We have to take the local time from the feed document, apply the timezone information and then convert everything to UTC.

Let's look at the feed document again:

<timezone>LST/LDT</timezone>
<item>
<date>2013/01/01</date>
<day>Tue</day>
<time>02:06 AM</time>
<predictions_in_ft>19.7</predictions_in_ft>
<predictions_in_cm>600</predictions_in_cm>
<highlow>H</highlow>
</item>

Assuming we have extracted the data from this document, how can we use the timezoneid to get us out of this jam? Look at the code I wrote in the main function. It uses the time.LoadLocation function and the timezone id we get from the API call to solve the problem:

func main() {
    // Call to get the timezone for this lat and lng position
    googleTimezone, err := RetrieveGoogleTimezone(38.85682, -92.991714)

    if err != nil {
        fmt.Printf("ERROR : %s", err)
        return
    }

    // Pretend this is the date and time we extracted
    year := 2013
    month := 1
    day := 1
    hour := 2
    minute := 6

    // Capture the location based on the timezone id from Google
    location, err := time.LoadLocation(googleTimezone.TimezoneId)

    if err != nil {
        fmt.Printf("ERROR : %s", err)
        return
    }

    // Capture the local and UTC time based on timezone
    localTime := time.Date(year, time.Month(month), day, hour, minute, 0, 0, location)
    utcTime := localTime.UTC()

    // Display the results
    fmt.Printf("Timezone:\t%s\n", googleTimezone.TimezoneId)
    fmt.Printf("Local Time: %v\n", localTime)
    fmt.Printf("UTC Time: %v\n", utcTime)
}

Here is the output:

Timezone:   America/Chicago
Local Time: 2013-01-01 02:06:00 -0600 CST
UTC Time:   2013-01-01 08:06:00 +0000 UTC


Everything worked like a champ. Our localTime variable is set to CST, Central Standard Time, the timezone Chicago sits in. The Google API provided the correct timezone for the latitude and longitude because that location falls within Missouri.

https://maps.google.com/maps?q=39.232253,-92.991714&z=6

The last question we have to ask is how did the LoadLocation function take that timezone id string and make this work. The timezone id contains both a country and city (America/Chicago). There must be thousands of these timezone ids.

If we take a look at the time package documentation for LoadLocation, we will find the answer:

http://golang.org/pkg/time/#LoadLocation

Here is the documentation for LoadLocation:

LoadLocation returns the Location with the given name.

If the name is "" or "UTC", LoadLocation returns UTC. If the name is "Local", LoadLocation returns Local.

Otherwise, the name is taken to be a location name corresponding to a file in the IANA Time Zone database, such as "America/New_York".

The time zone database needed by LoadLocation may not be present on all systems, especially non-Unix systems. LoadLocation looks in the directory or uncompressed zip file named by the ZONEINFO environment variable, if any, then looks in known installation locations on Unix systems, and finally looks in $GOROOT/lib/time/zoneinfo.zip.

If you read the last paragraph you will see that the LoadLocation function is reading a database file to get the information. I didn't download any database, nor did I set an environment variable called ZONEINFO. The only answer is that this zoneinfo.zip file exists in GOROOT. Let's take a look:

Sure enough there is a zoneinfo.zip file located in the lib/time directory where Go was installed. Very Cool !!

There you have it. Now you know how to use the time.LoadLocation function to help make sure your time values are always in the correct timezone. If you have a latitude and longitude, you can use either API to get that timezone id.


I have added a new package called timezone to the GoingGo repository on GitHub if you want a reusable copy of the code with both API calls. Here is the entire working sample program:

package main

import (    "encoding/json"    "fmt"    "io/ioutil"    "net/http"    "time")

const (    _GOOGLE_URI string = "https://maps.googleapis.com/maps/api/timezone/json?location=%f,%f&timestamp=%d&sensor=false")

type GoogleTimezone struct {    DstOffset    float64 `bson:"dstOffset"`    RawOffset    float64 `bson:"rawOffset"`    Status       string  `bson:"status"`    TimezoneId   string  `bson:"timeZoneId"`    TimezoneName string  `bson:"timeZoneName"`}

func main() {    // Call to get the timezone for this lat and lng position    googleTimezone, err := RetrieveGoogleTimezone(38.85682, -92.991714)

    // Pretend this is the date and time we extracted    year := 2013    month := 1    day := 1    hour := 2    minute := 6

    // Capture the location based on the timezone id from Google    location, err := time.LoadLocation(googleTimezone.TimezoneId)

    if err != nil {        fmt.Printf("ERROR : %s", err)        return    }

    // Capture the local and UTC time based on timezone    localTime := time.Date(year, time.Month(month), day, hour,

105

Page 106: Going Go Programming

minute, 0, 0, location)    utcTime := localTime.UTC()

    // Display the results    fmt.Printf("Timezone:\t%s\n", googleTimezone.TimezoneId)    fmt.Printf("Local Time: %v\n", localTime)    fmt.Printf("UTC Time: %v\n", utcTime)}

func RetrieveGoogleTimezone(latitude float64, longitude float64) (googleTimezone *GoogleTimezone, err error) {

    defer func() {        if r := recover(); r != nil {            err = fmt.Errorf("%v", r)        }    }()

    uri := fmt.Sprintf(_GOOGLE_URI, latitude, longitude, time.Now().UTC().Unix())

    resp, err := http.Get(uri)

    defer func() {        if resp != nil {

            resp.Body.Close()        }    }()

    if err != nil {        return nil, err    }

    // Convert the response to a byte array    var rawDocument []byte    rawDocument, err = ioutil.ReadAll(resp.Body)

    if err != nil {        return nil, err    }

    // Unmarshal the response to a GoogleTimezone object    err = json.Unmarshal(rawDocument, &googleTimezone)

    if err != nil {        return nil, err    }

    if googleTimezone.Status != "OK" {        return nil, fmt.Errorf("Error : Google Status : %s",

106

Page 107: Going Go Programming

googleTimezone.Status)    }

    if len(googleTimezone.TimezoneId) == 0 {        return nil, fmt.Errorf("Error : No Timezone Id Provided")    }

    return googleTimezone, err}

Understanding Slices in Go Programming

Since I started programming in Go the concept and use of slices have been confusing. This is something completely new to me. They look like an array, and feel like an array, but they are much more than an array. I am constantly reading how slices are used quite a bit by Go programmers and I think it is finally time for me to understand what slices are all about.

There is a great blog post written by Andrew Gerrand about slices:

http://blog.golang.org/go-slices-usage-and-internals

There is no reason to repeat everything Andrew has written so please read his post before continuing. Let's just cover the internals of a slice.

The picture above represents the internal structure of a slice. When you allocate a slice this data structure along with an underlying array is created. The value of your slice variable will be this data structure. When you pass a slice to a function, a copy of this data structure is created on the stack.
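A small sketch of my own makes that copy behavior concrete: assigning to an element through the copied header is visible to the caller, because both headers point at the same underlying array, but growing the copy with append only changes the local header.

```go
package main

import "fmt"

// share receives a copy of the slice header (pointer, length and
// capacity), but the copy still points at the caller's array.
func share(s []string) {
    s[0] = "CHANGED"      // visible to the caller: same backing array
    s = append(s, "Plum") // changes only the local header copy
}

func main() {
    fruit := make([]string, 1, 4)
    fruit[0] = "Apple"

    share(fruit)

    // The element changed, but the caller's length did not.
    fmt.Println(fruit[0], len(fruit))
}
```

This is why a function can modify a slice's elements without taking a pointer to the slice, yet cannot grow it for the caller.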

We can create a slice in two ways:

Here we use the keyword make, passing the type of data we are storing, the initial length of the slice and the capacity of the underlying array.

mySlice := make([]string, 5, 8)
mySlice[0] = "Apple"
mySlice[1] = "Orange"
mySlice[2] = "Banana"
mySlice[3] = "Grape"
mySlice[4] = "Plum"

// You don't need to include the capacity. Length and Capacity will be the same
mySlice := make([]string, 5)

You can also use a slice literal. In this case the length and capacity will be the same. Notice no value is provided inside the hard brackets []. If you add a value you will have an array. If you don't add a value you will have a slice.

mySlice := []string{"Apple", "Orange", "Banana", "Grape", "Plum"}

You can't extend the capacity of a slice once it is created. The only way to change the capacity is to create a new slice and perform a copy. Andrew provides a great sample function that shows an efficient way to check the remaining capacity and, only when necessary, perform a copy.
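Here is a sketch of that idea, adapted from the AppendByte example in Andrew's post to work with strings; the name and the growth policy here are mine:

```go
package main

import "fmt"

// AppendString copies only when the remaining capacity can't
// hold the new data, growing the backing array as it does.
func AppendString(slice []string, data ...string) []string {
    m := len(slice)
    n := m + len(data)

    if n > cap(slice) {
        // Not enough room; allocate a larger array and copy.
        newSlice := make([]string, (n+1)*2)
        copy(newSlice, slice)
        slice = newSlice
    }

    slice = slice[0:n]
    copy(slice[m:n], data)
    return slice
}

func main() {
    s := make([]string, 0, 2)
    s = AppendString(s, "Apple", "Orange", "Banana")

    fmt.Println(len(s), cap(s), s)
}
```

The built-in append does essentially this for you, which is why it returns a new slice value that you must assign back.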

The length of a slice identifies the number of elements of the underlying array we are using from the starting index position. The capacity identifies the number of elements we have available for use.

We can create a new slice from the original slice:

newSlice := mySlice[2:4]

The value of the new slice's pointer variable is associated with index positions 2 and 3 of the initial underlying array. As far as this new slice is concerned, we now have an underlying array of 3 elements and we are only using 2 of those 3 elements. This new slice has no knowledge of the first two elements from the initial underlying array and never will.

When performing a slice operation the first parameter specifies the starting index relative to the slice's pointer variable position. In our case we said index 2, which is the third element of the initial underlying array we are taking the slice from. The second parameter is the last index position plus one (+1). In our case we said index 4, which will include all indexes between index 2 (the starting position) and index 3 (the final position).

We don't always need to include a starting or ending index position when performing a slice operation:


newSlice2 := newSlice[:cap(newSlice)]

In this example we use the new slice we created before to create a third slice. We don't provide a starting index position but we do specify the last index position. Our latest slice has the same starting position and capacity but the length has changed. By specifying the last index position as the capacity, the length of this slice now uses all remaining elements from the underlying array.
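The length and capacity arithmetic from the last few paragraphs can be verified directly with the built-in len and cap functions:

```go
package main

import "fmt"

func main() {
    // A slice literal: length and capacity are both 5.
    mySlice := []string{"Apple", "Orange", "Banana", "Grape", "Plum"}

    // Starts at index 2: length 2, capacity 3 (elements 2, 3 and 4).
    newSlice := mySlice[2:4]

    // Same starting position; the length grows to match the capacity.
    newSlice2 := newSlice[:cap(newSlice)]

    fmt.Println(len(mySlice), cap(mySlice))
    fmt.Println(len(newSlice), cap(newSlice))
    fmt.Println(len(newSlice2), cap(newSlice2))
}
```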

Now let's run some code to prove this data structure actually exists and slices work as advertised.

I have created a function that will inspect the memory associated with any slice:

func InspectSlice(slice []string) {
    // Capture the address to the slice structure
    address := unsafe.Pointer(&slice)

    // Capture the address where the length and cap size is stored
    lenAddr := uintptr(address) + uintptr(8)
    capAddr := uintptr(address) + uintptr(16)

    // Create pointers to the length and cap size
    lenPtr := (*int)(unsafe.Pointer(lenAddr))
    capPtr := (*int)(unsafe.Pointer(capAddr))

    // Create a pointer to the underlying array
    addPtr := (*[8]string)(unsafe.Pointer(*(*uintptr)(address)))

    fmt.Printf("Slice Addr[%p] Len Addr[0x%x] Cap Addr[0x%x]\n",
        address,
        lenAddr,
        capAddr)

    fmt.Printf("Slice Length[%d] Cap[%d]\n",
        *lenPtr,
        *capPtr)

    for index := 0; index < *lenPtr; index++ {
        fmt.Printf("[%d] %p %s\n",
            index,
            &(*addPtr)[index],
            (*addPtr)[index])
    }

    fmt.Printf("\n\n")
}

This function is performing a bunch of pointer manipulations so we can inspect the memory and values of a slice's data structure and underlying array.

We will break it down, but first let's create a slice and run it through the inspect function:

package main

import (    "fmt"    "unsafe")

func main() {    orgSlice := make([]string, 5, 8)    orgSlice[0] = "Apple"    orgSlice[1] = "Orange"    orgSlice[2] = "Banana"    orgSlice[3] = "Grape"    orgSlice[4] = "Plum"

    InspectSlice(orgSlice)}

Here is the output of the program:

Slice Addr[0x2101be000] Len Addr[0x2101be008] Cap Addr[0x2101be010]
Slice Length[5] Cap[8]
[0] 0x2101bd000 Apple
[1] 0x2101bd010 Orange
[2] 0x2101bd020 Banana
[3] 0x2101bd030 Grape
[4] 0x2101bd040 Plum

It appears the slice's data structure really does exist as described by Andrew.

The InspectSlice function first displays the address of the slice's data structure and the address positions where the length and capacity values should be. Then by creating int pointers using those addresses, we display the values for length and capacity. Last we create a pointer to the underlying array. Using the pointer, we iterate through the underlying array displaying the index position, the starting address of the element and the value.


Let's break down the InspectSlice function to understand how it works:

// Capture the address to the slice structure
address := unsafe.Pointer(&slice)

// Capture the address where the length and cap size is stored
lenAddr := uintptr(address) + uintptr(8)
capAddr := uintptr(address) + uintptr(16)

unsafe.Pointer is a special pointer type that can be converted to and from a uintptr. Because we need to perform pointer arithmetic, we need to work with generic pointers. The first line of code casts the address of the slice's data structure to an unsafe.Pointer. Then we create two generic pointers that point 8 and 16 bytes into the slice's data structure respectively (these offsets assume a 64-bit build, where a pointer and an int are each 8 bytes).

The following diagram shows each pointer variable, the value of the variable and the value that the pointer points to:

Variable:    address        lenAddr        capAddr
Value:       0x2101be000    0x2101be008    0x2101be010
Points To:   0x2101bd000    5              8

With our pointers in hand, we can now create properly typed pointers so we can display the values. Here we create two integer pointers that can be used to display the length and capacity values from the slice's data structure.

// Create pointers to the length and cap size
lenPtr := (*int)(unsafe.Pointer(lenAddr))
capPtr := (*int)(unsafe.Pointer(capAddr))

We now need a pointer of type [8]string, which is the type of the underlying array.

// Create a pointer to the underlying array
addPtr := (*[8]string)(unsafe.Pointer(*(*uintptr)(address)))

There is a lot going on in this one statement so let's break it down:

(*uintptr)(address) : 0x2101be000
This code takes the starting address of the slice's data structure and casts it as a generic pointer.

*(*uintptr)(address) : 0x2101bd000
Then we get the value that the pointer is pointing to, which is the starting address of the underlying array.

unsafe.Pointer(*(*uintptr)(address))
Then we cast the starting address of the underlying array to an unsafe.Pointer type. We need a pointer of this type to perform the final cast.

(*[8]string)(unsafe.Pointer(*(*uintptr)(address)))


Finally we cast the unsafe.Pointer to a pointer of the proper type.

The remaining code uses the proper pointers to display the output:

fmt.Printf("Slice Addr[%p] Len Addr[0x%x] Cap Addr[0x%x]\n",    address,    lenAddr,    capAddr)

fmt.Printf("Slice Length[%d] Cap[%d]\n",    *lenPtr,    *capPtr)

for index := 0; index < *lenPtr; index++ {    fmt.Printf("[%d] %p %s\n",        index,        &(*addPtr)[index],        (*addPtr)[index])}

Now let's put the entire program together and create some slices. We will inspect each slice and make sure everything we know about slices is true:

package main

import (    "fmt"    "unsafe")

func main() {
    orgSlice := make([]string, 5, 8)
    orgSlice[0] = "Apple"
    orgSlice[1] = "Orange"
    orgSlice[2] = "Banana"
    orgSlice[3] = "Grape"
    orgSlice[4] = "Plum"

    InspectSlice(orgSlice)

    slice2 := orgSlice[2:4]
    InspectSlice(slice2)

    slice3 := slice2[1:cap(slice2)]
    InspectSlice(slice3)

    slice3[0] = "CHANGED"
    InspectSlice(slice3)
    InspectSlice(slice2)
}


func InspectSlice(slice []string) {
    // Capture the address to the slice structure
    address := unsafe.Pointer(&slice)

    // Capture the address where the length and cap size is stored
    lenAddr := uintptr(address) + uintptr(8)
    capAddr := uintptr(address) + uintptr(16)

    // Create pointers to the length and cap size
    lenPtr := (*int)(unsafe.Pointer(lenAddr))
    capPtr := (*int)(unsafe.Pointer(capAddr))

    // Create a pointer to the underlying array
    addPtr := (*[8]string)(unsafe.Pointer(*(*uintptr)(address)))

    fmt.Printf("Slice Addr[%p] Len Addr[0x%x] Cap Addr[0x%x]\n",
        address,
        lenAddr,
        capAddr)

    fmt.Printf("Slice Length[%d] Cap[%d]\n",
        *lenPtr,
        *capPtr)

    for index := 0; index < *lenPtr; index++ {
        fmt.Printf("[%d] %p %s\n",
            index,
            &(*addPtr)[index],
            (*addPtr)[index])
    }

    fmt.Printf("\n\n")
}

Here is the code and output for each slice:

Here we create the initial slice with a length of 5 elements and a capacity of 8 elements.

Code:

orgSlice := make([]string, 5, 8)
orgSlice[0] = "Apple"
orgSlice[1] = "Orange"
orgSlice[2] = "Banana"
orgSlice[3] = "Grape"
orgSlice[4] = "Plum"

Output:

Slice Addr[0x2101be000] Len Addr[0x2101be008] Cap Addr[0x2101be010]
Slice Length[5] Cap[8]
[0] 0x2101bd000 Apple
[1] 0x2101bd010 Orange
[2] 0x2101bd020 Banana
[3] 0x2101bd030 Grape
[4] 0x2101bd040 Plum

The output is as expected. A length of 5, capacity of 8 and the underlying array contains our values.

Next we take a slice from the original slice. We ask for 2 elements between indexes 2 and 3.

Code:

slice2 := orgSlice[2:4]
InspectSlice(slice2)

Output:

Slice Addr[0x2101be060] Len Addr[0x2101be068] Cap Addr[0x2101be070]
Slice Length[2] Cap[6]
[0] 0x2101bd020 Banana
[1] 0x2101bd030 Grape

In the output you can see we have a slice with a length of 2 and a capacity of 6. Because this new slice starts at index 2 of the original underlying array, which has a capacity of 8, there are 6 elements still reachable, so the capacity is 6. The capacity includes all possible elements that can be accessed by the new slice. Index 0 of the new slice maps to index 2 of the original slice. They both have the same address of 0x2101bd020.

This time we ask for a slice starting from index position 1 up to the last element of slice2.

Code:

slice3 := slice2[1:cap(slice2)]
InspectSlice(slice3)

Output:

Slice Addr[0x2101be0a0] Len Addr[0x2101be0a8] Cap Addr[0x2101be0b0]
Slice Length[5] Cap[5]
[0] 0x2101bd030 Grape
[1] 0x2101bd040 Plum
[2] 0x2101bd050
[3] 0x2101bd060
[4] 0x2101bd070

As expected the length and the capacity are both 5. When we display all the values of the slice, you can see the last three elements don't have a value. The elements were initialized to their zero value, the empty string, when the underlying array was created. Also, index 0 of this slice maps to index 1 of slice2 and index 3 of the original slice. They all have the same address of 0x2101bd030.


The final code changes the value of the first element, index 0 in slice3 to the value CHANGED. Then we display the values for slice3 and slice2.

slice3[0] = "CHANGED"InspectSlice(slice3)InspectSlice(slice2)

Slice Addr[0x2101be0e0] Len Addr[0x2101be0e8] Cap Addr[0x2101be0f0]
Slice Length[5] Cap[5]
[0] 0x2101bd030 CHANGED
[1] 0x2101bd040 Plum
[2] 0x2101bd050
[3] 0x2101bd060
[4] 0x2101bd070

Slice Addr[0x2101be120] Len Addr[0x2101be128] Cap Addr[0x2101be130]
Slice Length[2] Cap[6]
[0] 0x2101bd020 Banana
[1] 0x2101bd030 CHANGED

Notice that both slices show the changed value in their respective indexes. This proves both slices are sharing the same underlying array.

The InspectSlice function proves that each slice contains its own data structure with a pointer to an underlying array, a length for the slice and a capacity. Take some time to create more slices and use the InspectSlice function to validate your assumptions.

Using C Dynamic Libraries In Go Programs

My son and I were having fun last weekend building a console based game in Go. I was recreating a game from my youth, back when I was programming on a Kaypro II.

I loved this computer. I would write games in BASIC on it all day and night. Did I mention it was portable? The keyboard would strap in and you could carry it around. LOL.

But I digress, back to my Go program. I figured out a way to use the VT100 escape character codes to draw out a simple screen and started programming some of the logic.

Then something horrible happened and I had a major flashback. I could not get input from stdin without hitting the enter key. Ahhhhh  I spent all weekend reading up on how to make this happen. I even found two Go libraries that had support for this but they didn't work. I realized that if I was going to make this happen I needed to build the functionality in C and link that to my Go program.

After a 4 hour coding session at the local Irish pub, I figured it out. I would like to thank Guinness for the inspiration and encouragement I needed. Understand that for the past 10 years I have been writing Windows services in C#. For 10 years before that I was writing C/C++, but on the Microsoft stack. Everything I was reading: gcc, cgo, static and shared libraries on the Mac and Linux, etc., was foreign to me. I had a lot to learn and still do.

After all my research it became clear I needed to use the ncurses dynamic library. I decided to write a simple program in C using the library. If I could make it work in a compiled C program, I was sure I could get it to work in Go.

The ncurses library on the Mac is located in /usr/lib. Here is a link to the documentation:

https://developer.apple.com/library/mac/documentation/Darwin/Reference/ManPages/man3/ncurses.3x.html

Here is the C header file code for the test program:

test.h

int GetCharacter();
void InitKeyboard();
void CloseKeyboard();

And now the code for the C source file:

test.c

#include <curses.h>
#include <stdio.h>
#include "test.h"

int main() {
    InitKeyboard();

    printf("\nEnter: ");
    refresh();

    for (;;) {
        int r = GetCharacter();
        printf("%c", r);
        refresh();

        if (r == 'q') {
            break;
        }
    }

    CloseKeyboard();

    return 0;
}

void InitKeyboard() {
    initscr();
    noecho();
    cbreak();
    keypad(stdscr, TRUE);
    refresh();
}

int GetCharacter() {
    return getch();
}

void CloseKeyboard() {
    endwin();
}

Now the hard part. How do I build this program using the gcc compiler? I want to make sure I am using the same compiler that Go is using. I also want to make sure I am using the bare minimum parameters and flags.

After about an hour of researching, I came up with this makefile. I told you, I have never done this before.

makefile

build:
	rm -f test
	gcc -c test.c
	gcc -lncurses -r/usr/lib -o test test.o
	rm -f *.o

When you run the make command it will look for a file called makefile in the local directory and execute it. Be aware that each command is one (1) TAB to the right. If you use spaces you will have problems. Obviously you can run these commands manually as well. I built this makefile for convenience.

Let's break down the gcc compiler calls from the makefile:

This call to gcc is creating an object file called test.o from the source file test.c. The -c parameter tells gcc to just compile the source file and create an object file called test.o

gcc -c test.c

This second call to gcc links the test.o object file together with the shared dynamic library libncurses.dylib to create the test executable file. The -l (minus lowercase L) parameter is telling gcc to link the libncurses.dylib file and the -r (minus lowercase R) parameter is telling gcc where it can find the library. The -o (minus lowercase O) parameter tells gcc to create an executable output file called test and finally we tell gcc to include test.o in the link operation.

gcc -lncurses -r/usr/lib -o test test.o

Once these two gcc commands run we have a working version of the program called test. You must run this program from a terminal window for it to work. To execute the program type ./test from the terminal window:

Here I started typing letters which are being displayed by the call to printf inside the for loop.

As soon as I hit the letter 'q', the program terminates.

I now have a working version of a program that uses the ncurses dynamic library I want to use in my Go program. Now I need to find a way to wrap these calls into a dynamic library that I can link to from Go.

I was very fortunate to find these web pages which brilliantly show everything we need to know to create shared and dynamic libraries:

http://www.adp-gmbh.ch/cpp/gcc/create_lib.html

http://stackoverflow.com/questions/3532589/how-to-build-a-dylib-from-several-o-in-mac-os-x-using-gcc

Let's work together on making all this work in Go. Start with setting up a Workspace for our new project:


I have created a folder called Keyboard and two sub-folders called DyLib and TestApp.

Inside the DyLib folder we have our C based dynamic library code with a makefile. Inside the TestApp folder we have a single go source code file to test our Go integration with the new dynamic library. Here is the C header file for the dynamic library. It is identical to the C header file I used in the test application.

keyboard.h

int GetCharacter();
void InitKeyboard();
void CloseKeyboard();

Here is the C source file that implements those functions. Again it is identical to the C source file from the test application without the main function. We are building a library so we don't want main.

keyboard.c

#include <curses.h>
#include "keyboard.h"

void InitKeyboard() {
    initscr();
    noecho();
    cbreak();
    keypad(stdscr, TRUE);
    refresh();
}

int GetCharacter() {
    return getch();
}

void CloseKeyboard() {
    endwin();
}

Here is the makefile for creating the dynamic library:


makefile

dynamic:
	rm -f libkeyboard.dylib
	rm -f ../TestApp/libkeyboard.dylib
	gcc -c -fPIC keyboard.c
	gcc -dynamiclib -lncurses -r/usr/lib -o libkeyboard.dylib keyboard.o
	rm -f keyboard.o
	cp libkeyboard.dylib ../TestApp/libkeyboard.dylib

shared:
	rm -f libkeyboard.so
	rm -f ../TestApp/libkeyboard.so
	gcc -c -fPIC keyboard.c
	gcc -shared -Wl,-soname,libkeyboard.so -lncurses -r/usr/lib -o libkeyboard.so keyboard.o
	rm -f keyboard.o
	cp libkeyboard.so ../TestApp/libkeyboard.so

With this makefile you can build either a dynamic library or a shared library. If you just run the make command without any parameters, it will execute the dynamic set of commands. To create the shared library, run make passing 'shared' (without quotes) as a parameter.

The important flag to notice is -fPIC. This flag tells gcc to create position independent code which is necessary for shared libraries. We did not include this flag when we built the executable program.

We are going to use the dynamic library moving forward. Mainly because on the Mac this is the most common format. Also, if we clean our Go project later in LiteIDE, it won't remove the file along with the binary. LiteIDE will remove shared libraries on the call to clean.

Let's create the dynamic library by running the make file:

We call the make command and it runs the dynamic section of the makefile successfully. Once this is done we now have our new dynamic library.


Now we have a new file in both the DyLib and TestApp folders called libkeyboard.dylib.

One thing I forgot to mention is that our dynamic and shared libraries must start with the letters lib. This is mandatory for things to work correctly later. Also, the library will need to be in the working folder for the program to load it when we run the program.

Let's look at the Go source code file for our test application:

package main

/*
#cgo CFLAGS: -I../DyLib
#cgo LDFLAGS: -L. -lkeyboard
#include <keyboard.h>
*/
import "C"

import (
    "fmt"
)

func main() {
    C.InitKeyboard()

    fmt.Printf("\nEnter: ")

    for {
        r := C.GetCharacter()

        fmt.Printf("%c", r)

        if r == 'q' {
            break
        }
    }

    C.CloseKeyboard()
}

The Go team has put together these two documents that explain how Go can incorporate C code directly or use libraries like we are doing. It is really important to read these documents to better understand this Go code:


http://golang.org/cmd/cgo/
http://golang.org/doc/articles/c_go_cgo.html

If you are interested in binding to C++ libraries, then SWIG (Simplified Wrapper and Interface Generator) is something you need to look at:

http://www.swig.org/
http://www.swig.org/Doc2.0/Go.html

We will leave SWIG for another day. For now let's break down the Go source code.

package main

/*
#cgo CFLAGS: -I../DyLib
#cgo LDFLAGS: -L. -lkeyboard
#include <keyboard.h>
*/
import "C"

In order to provide the compiler and linker the parameters it needs, we use these special cgo commands. They are always provided inside a set of comments and must be on top of the import "C" statement. If there is a gap between the closing comment and the import command you will get compiler errors.

Here we are providing the Go build process flags for the compiling and linking of our program. CFLAGS provides parameters to the compiler. We are telling the compiler it can find our header files in the DyLib folder. LDFLAGS provides parameters to the linker. We are providing the linker two parameters, -L (minus capital L) which tells the linker where it can find our dynamic library and -l (minus lowercase L) which tells the linker the name of our library.

Notice when we specify the name of our library it does not include the lib prefix or the extension. It is expected that the library name starts with lib and ends in either .dylib or the .so extensions.

Last we tell Go to import the pseudo-package "C". This pseudo-package provides all the Go level support we need to access our library. None of this is possible without it.

Look at how we call into each of our functions from the library:

C.InitKeyboard()
r := C.GetCharacter()
C.CloseKeyboard()

Thanks to the pseudo-package "C" we have function wrappers for each function from the header file. These wrappers handle the marshaling of data in and out of our functions. Notice how we can use a native Go type and syntax to get the character that is entered into the keyboard.


Now we can build the test application and run it from a terminal session:

Awesome. Working like a champ.

Now my son and I can continue building our game and get the keyboard action we need to make the game really fun. It has taken me quite a few hours to get a handle on all of this. There is still a lot to learn and support for this will only get better. At some point I will look at SWIG to incorporate C++ object oriented libraries. For now, being able to bring in and leverage C libraries is awesome.

If you want to see and access the code, I have put it up in the GoingGo github repository under Keyboard. Have Fun !!

Read Part II:  Using CGO with Pkg-Config And Custom Dynamic Library Locations

Using CGO with Pkg-Config And Custom Dynamic Library Locations

Earlier in the month I wrote a post about using C Dynamic Libraries in Go Programs. The article built a dynamic library in C and created a Go program that used it. The program worked but only if the dynamic library was in the same folder as the program.

This constraint does not allow for the use of the go get command to download, build and install a working version of the program. I did not want to have any requirements to pre-install dependencies or run extra scripts or commands after the call to go get. The Go tool was not going to copy the dynamic library into the bin folder and therefore I would not be able to run the program once the go get command was complete. This was simply unacceptable and there had to be a way to make this work.

The solution to this problem was twofold. First, I needed to use a package configuration file to specify the compiler and linker options to CGO. Second, I needed to set an environment variable so the operating system could find the dynamic library without needing to copy it to the bin folder.

If you look, you will see that some of the standard libraries provide a package configuration (.pc) file. A special program called pkg-config is used by the build tools, such as gcc, to retrieve information from these files.


If you look in the standard locations for libraries, /usr/lib or /usr/local/lib, you will find a folder called pkgconfig. Package configuration files that exist in these locations can be found by the pkg-config program by default.

Look at the libcrypto.pc file and you can see the format and how it provides compiler and linker information.

This particular file is nice to look at because it includes the bare minimum format and parameters that are required.

To learn more about these files read this web page: www.freedesktop.org/wiki/Software/pkg-config

The prefix variable at the top of the file is very important. This variable specifies the base folder location where the library and include files are installed.

Something very important to note is that you can't use an environment variable to help specify a path location. If you do, you will have problems with the build tools locating the files they need. The environment variables end up being provided to the build tools as literal strings. Remember this for later because it is important.

Run the pkg-config program from a Terminal session using these parameters:

pkg-config --cflags --libs libcrypto

These parameters ask the pkg-config program to show the compiler and linker settings specified in the .pc file called libcrypto.

This is what should be returned:

-lcrypto -lz


Let's look at one of the package configuration files from ImageMagick that I downloaded and installed under /usr/local for a project I am working on:

This file is a bit more complex. You will notice it specifies that the MagickCore library is also required and specifies more flags such as environment variables.

When I run the pkg-config program on this file I get the following information back:

pkg-config --cflags --libs MagickWand

-fopenmp -DMAGICKCORE_HDRI_ENABLE=0 -DMAGICKCORE_QUANTUM_DEPTH=16
-I/usr/local/include/ImageMagick-6
-L/usr/local/lib -lMagickWand-6.Q16 -lMagickCore-6.Q16

You can see that the path locations for the header and library files are fully qualified paths. All of the other flags defined in the package configuration file are also provided.

Now that we know a bit about package configuration files and how to use the pkg-config tool, let's take a look at the changes I made to the project for the C Dynamic Libraries in Go Programs post. This project is now using a package configuration file and new cgo parameters.

Before we begin I must apologize. The dynamic library that I built for this project will only build on the Mac. Read the post I just mentioned to understand why. A pre-built version of the dynamic library already exists in version control. If you are not working on a Mac, the project will not build properly, however all the ideas, settings and constructs still apply.

Open a Terminal window and run the following commands:

cd $HOME
export GOPATH=$HOME/keyboard
export PKG_CONFIG_PATH=$GOPATH/src/github.com/goinggo/keyboard/pkgconfig
export DYLD_LIBRARY_PATH=$GOPATH/src/github.com/goinggo/keyboard/DyLib
go get github.com/goinggo/keyboard

After you run these commands, you will have all the code from the GoingGo keyboard repository downloaded under a subfolder called keyboard inside your home directory.

You will notice the Go tool was able to download, build and install the keyboard program, even though the header file and dynamic library were not located in the default /usr or /usr/local folders.

In the bin folder we have the single executable program without the dynamic library. The dynamic library is only located in the DyLib folder.

There is a new folder in the project now called pkgconfig. This folder contains the package configuration file that makes this all possible.

The main.go source code file has been changed to take advantage of the new package configuration file.

If we immediately switch to the bin folder and run the program, we will see that it works.

cd $GOPATH/bin
./keyboard


When you start the program, it immediately asks you to enter some keys. Type a few letters and then hit the letter q to quit the program.

This is only possible if the OS can find all the dynamic libraries this program is dependent on.

Let's take a look at the code changes that make this possible. Look at the main.go source code file to see how we reference the new package configuration file.

This is the original code from the first post. In this version I specified the compiler and linker flags directly. The location of the header and dynamic library are referenced with a relative path.

package main

/*
#cgo CFLAGS: -I../DyLib
#cgo LDFLAGS: -L. -lkeyboard
#include <keyboard.h>
*/
import "C"

This is the new code. Here I tell CGO to use the pkg-config program to find the compiler and linker flags. The name of the package configuration file is specified at the end.

package main

/*
#cgo pkg-config: --define-variable=prefix=. GoingGoKeyboard
#include <keyboard.h>
*/
import "C"

Notice the use of the pkg-config program option --define-variable. This option is the trick to making everything work. Let's get back to that in a moment.

Run the pkg-config program against our new package configuration file:

pkg-config --cflags --libs GoingGoKeyboard

-I$GOPATH/src/github.com/goinggo/keyboard/DyLib
-L$GOPATH/src/github.com/goinggo/keyboard/DyLib -lkeyboard


If you look closely at the output from the call, you will see something that I told you was wrong. The $GOPATH environment variable is being provided.

Open the package config file which is located in the pkgconfig folder and you will see the pkg-config program doesn't lie. Right there at the top I am setting the prefix variable to a path using $GOPATH. So why is everything working?

Now run the command again using the same --define-variable option we are using in main.go:

pkg-config --cflags --libs GoingGoKeyboard --define-variable=prefix=.

-I./DyLib
-L./DyLib -lkeyboard

Do you see the difference? In the first call to the pkg-config program we get back paths that have the literal $GOPATH string because that is how the prefix variable is set. In the second call we override the value of the prefix variable to the current directory. What we get back is exactly what we need.

Remember this environment variable that we set prior to using the Go tool?

PKG_CONFIG_PATH=$GOPATH/src/github.com/goinggo/keyboard/pkgconfig

The PKG_CONFIG_PATH environment variable tells the pkg-config program where it can find package configuration files that are not located in any of the default locations. This is how the pkg-config program is able to find our GoingGoKeyboard.pc file.

The last mystery to explain is how the OS can find the dynamic library when we run the program. Remember this environment variable that we set prior to using the Go tool?

export DYLD_LIBRARY_PATH=$GOPATH/src/github.com/goinggo/keyboard/DyLib

The DYLD_LIBRARY_PATH environment variable tells the OS where else it can look for dynamic libraries.


Installing your dynamic libraries in the /usr/local folder keeps things simple. All of the build tools are configured to look in this folder by default. However, using the default locations for your custom or third party libraries requires extra installation steps prior to running the Go tools. By using a package configuration file and passing the pkg-config program the options it needs, Go with CGO can deploy builds that will install and be ready to run instantly.

Something else I didn't mention is that you can use this technique to install 3rd party libraries that you may be trying out in a temp location. This makes it real easy to remove the library if you decide you don't want to use it.

If you want to play with the code or concepts on a Windows or Ubuntu machine, read C Dynamic Libraries in Go Programs to learn how to build your own dynamic libraries that you can experiment with.

Collections Of Unknown Length in Go

If you are coming to Go after using a programming language like C# or Java, the first thing you will discover is that there are no traditional collection types like List and Dictionary. That really threw me off for months. I found a package called container/list and gravitated to using it for almost everything.

Something in the back of my head kept nagging me. It didn't make any sense that the language designers would not directly support managing a collection of unknown length. Everyone talks about how slices are widely used in the language and here I am only using them when I have a well defined capacity or they are returned to me by some function. Something is wrong!!

So I wrote an article earlier in the month that took the covers off of slices in a hope that I would find some magic that I was missing. I now know how slices work but at the end of the day, I still had an array that would have to grow. I was taught in school that linked lists were more efficient and gave you a better way to store large collections of data. Especially when the number of items you need to collect is unknown. It made sense to me.

When I thought about using an empty slice, I had this very WRONG picture in my head:


I kept thinking how Go would be creating a lot of new slice objects and lots of other memory allocations for the underlying array with values constantly being copied. Then the garbage collector would be overworked because of all these little objects being created and discarded.

I could not imagine having to do this potentially thousands of times. There had to be a better way or efficiencies that I was not aware of.

After researching and asking a lot of questions, I came to the conclusion that in most practical cases using a slice is better than using a linked list. This is why the language designers have spent the time making slices work as efficiently as possible and didn't introduce collection types into the language.

We can talk about edge cases and performance numbers all day long but Go wants you to use slices and therefore it should be our first choice until the code tells us otherwise. Understand that slices are like the game of chess, easy to learn but takes a lifetime to master. There are gotchas and things you need to be aware of, because the underlying array can be shared.

Now is a good time to read my post, Understanding Slices in Go Programming, before continuing.

The rest of this post will explain how to use a slice when dealing with an unknown capacity and what is happening underneath.

Here is an example of how to use an empty slice to manage a collection of unknown length:

package main

import (
    "fmt"
    "math/rand"
    "time"
)

type Record struct {
    Id    int
    Name  string
    Color string
}

func main() {
    // Let's keep things unknown
    random := rand.New(rand.NewSource(time.Now().Unix()))

    // Create a large slice pretending we retrieved data
    // from a database
    data := make([]Record, 1000)

    // Create the data set
    for record := 0; record < 1000; record++ {
        pick := random.Intn(10)
        color := "Red"

        if pick == 2 {
            color = "Blue"
        }

        data[record] = Record{
            Id:    record,
            Name:  fmt.Sprintf("Rec: %d", record),
            Color: color,
        }
    }

    // Split the records by color. Index into the slice so each
    // pointer references a distinct element and not the loop variable.
    red := []*Record{}
    blue := []*Record{}

    for i := range data {
        if data[i].Color == "Red" {
            red = append(red, &data[i])
        } else {
            blue = append(blue, &data[i])
        }
    }

    // Display the counts
    fmt.Printf("Red[%d] Blue[%d]\n", len(red), len(blue))
}

When we run this program we will get different counts for the red and blue slices thanks to the randomizer. We don't know what capacity we need for the red or blue slices ahead of time. This is a typical situation for me.


Let's break down the more important pieces of the code:

These two lines of code create an empty slice.

red := []*Record{}
blue := []*Record{}

This syntax will also create an empty slice.

red := make([]*Record, 0)
blue := make([]*Record, 0)

In both cases we have a slice whose length and capacity is 0. To add items to the slice we use the built in function called append:

red = append(red, &data[i])
blue = append(blue, &data[i])

The append function is really cool and does a bunch of stuff for us.

Kevin Gillette wrote this in the group discussion I started:(https://groups.google.com/forum/#!topic/golang-nuts/nXYuMX55b6c)

In terms of Go specifics, append doubles capacity each reallocation for the first few thousand elements, and then progresses at a rate of ~1.25 capacity growth after that.

I am not an academic but I see the use of tilde (~) quite a bit. For those of you who don't know what that means, it means approximately. So the append function will increase the capacity of the underlying array to make room for future growth. Eventually append will grow capacity approximately by a factor of 1.25 or 25%.

Let's prove that append is growing capacity to make things more efficient:

package main

import (
    "fmt"
    "reflect"
    "unsafe"
)

func main() {
    data := []string{}

    for record := 0; record < 1050; record++ {
        data = append(data, fmt.Sprintf("Rec: %d", record))

        if record < 10 || record == 256 || record == 512 || record == 1024 {
            sliceHeader := (*reflect.SliceHeader)(unsafe.Pointer(&data))

            fmt.Printf("Index[%d] Len[%d] Cap[%d]\n",
                record,
                sliceHeader.Len,
                sliceHeader.Cap)
        }
    }
}

Here is the output:

Index[0] Len[1]  Cap[1]
Index[1] Len[2]  Cap[2]
Index[2] Len[3]  Cap[4]          - Ran Out Of Room, Double Capacity
Index[3] Len[4]  Cap[4]
Index[4] Len[5]  Cap[8]          - Ran Out Of Room, Double Capacity
Index[5] Len[6]  Cap[8]
Index[6] Len[7]  Cap[8]
Index[7] Len[8]  Cap[8]
Index[8] Len[9]  Cap[16]         - Ran Out Of Room, Double Capacity
Index[9] Len[10] Cap[16]
Index[256] Len[257] Cap[512]     - Ran Out Of Room, Double Capacity
Index[512] Len[513] Cap[1024]    - Ran Out Of Room, Double Capacity
Index[1024] Len[1025] Cap[1280]  - Ran Out Of Room, Grow by a factor of 1.25

If we look at the capacity values we can see that Kevin is absolutely correct. The capacity is growing exactly as he stated. Within the first 1k of elements, capacity is doubled. Then capacity is being grown by a factor of 1.25 or 25%. This means that using a slice in this way will yield the performance we need for most situations and memory won't be an issue.

Originally I thought that a new slice object would be created for each call to append, but this isn't the case. The slice header for red is copied by value when we make the call to append. When append returns, the returned header is assigned back to red, reusing the existing backing array whenever there is room.

red = append(red, &data[i])

In this case the garbage collector is not involved so we have no issues with performance or memory at all. My C# and reference type mentality strikes me down again.

Hold on to your seat because there are changes coming to slices in the next release.

Dominik Honnef has started a blog that explains, in plain English (Thank You), what is being worked on in Go tip. These are things coming in the next release. Here is a link to his awesome blog and the section on slices. Read it, it is really awesome.


http://dominik.honnef.co/go-tip/
http://dominik.honnef.co/go-tip/2013-08-23/#slicing

There is so much more you can do with slices. So much so that you could write an entire book on the subject. Like I said earlier, slices are like chess, easy to learn but takes a lifetime to master. If you are coming from other languages like C# and Java then embrace the slice and use it. It is the Go way.

Organizing Code to Support Go Get

For those of you who are like me, trying to learn the Mac and Linux operating systems, Golang programming and deployment constructs all at the same time, I feel your pain. I have been building a Go application for a couple of months on my Mac and it was time to deploy the code on a local Ubuntu server. I was having a really tough time and it was turning into a disaster. Like always, I kept telling myself, I must be doing something wrong.

Well, I was, big-time. Let me put it this way. After spending 2 days reorganizing repositories and code, I finally figured out how the Go tool works. Now I can update my development environment, deploy, build, install and update my Go applications on any machine with one simple call to go get.

I am sure there are several ways you can organize code that will work with the Go tool. What I am going to present is working for me and I wish I knew all of this when I started. There are a thousand ways to skin a cat and we all have our favorite way; this one is now mine.

Everything we do in Go should be based on what the Go tool can do. Going off the reservation will get you in trouble; I am living proof. One special command we are going to focus on is get. The get command will download code, build and install packages and produce an executable binary if you are building a program. It is very smart and can read your code to find dependencies that it will download, build and install as well. That is, if everything is structured and set correctly.

This document explains all the things the Go tool can do:

http://golang.org/cmd/go/

But I want to concentrate on this section:

http://golang.org/cmd/go/#hdr-Remote_import_path_syntax

Even as I read this document now I have a hard time understanding what it is trying to tell me.

I use Github but you don't have to. The Go tool supports Git (Github), Mercurial, Subversion and Bazaar out of the box. If you are not using any of these systems for version control, there are ways to give the Go tool the information it needs to support your version control system.

When I say the Go tool supports these version control systems it's a bit vague right now, so let's jump into this. Everything starts with a repository, so let's look at the ones I have for my


project. I have two accounts in Github. One is called goinggo which contains all my shared and reusable repositories. The other is ArdanStudios which contains my private repositories.

Here is the goinggo repository:

Choosing a name for your repository is really important. The name of the repository is going to be used later to reference the code. Having a good name makes things easier for everyone. You can see I have 2 of my repositories listed. Both of these repositories contain reusable code and are referenced in my project.

When I first put my repository together for the GoingGo.net website, I pulled all these projects into a single repository. This ended up being a very bad idea. It is best to organize your code in different repositories that may or may not be needed depending on the project. As an example, I will never need the newsearch code for any project I am building. That was an application I built for an article I wrote. When I had that code in the same repository as the utilities code, and I referenced just one code file from utilities, the Go tool still pulled down everything in that repository. Not Good.

Let's look at the utilities repository:


You will notice the repository has a folder with a major version number. There is a good amount of debate and research going on in the Go community about package management and versioning. This is a complicated problem with many twists and turns. There is a document that is being worked on by several people who are doing the research and leading the discussion. Here is that document:

https://docs.google.com/document/d/1_IJTRD6dDQvyCfyim4KJexq8ZrKUPvUd3GMSA8cw8A4/edit#heading=h.w4quaql3iduf

Since we can't wait for the final outcome to this debate, I have chosen to structure my repository with a major version number and with the idea that future versions of the Go tool will provide better version control system support. I also imagine that we will eventually have a standard package manager that will provide all the support we need to make this manageable and flexible.

There is another reason for using a major version number. I don't want users to worry about making changes to the references of this package every time I have a new minor or patch release. Since minor versions and patches are not allowed to break existing interfaces, this reference to just the major version is enough. Obviously I must comply with this rule and give a personal guarantee.

If I want to work on version 2, which will not be compatible with version 1, I can create a new v2 folder and not affect those who rely on the code. This also gives others confidence that they can use and rely on my package, which is very important.

Many feel that trusting others blindly to not break compatibility is too risky. I can understand that position. One option to mitigate that risk is to take a copy of the packages you want to use and repo it yourself. Being able to do this depends on the license for the code so check that first.


You will also notice I have two branches on this code. A develop branch and a master branch. Thanks to the Git Flow tool this is really easy to setup.

This allows me to work in the develop branch, making changes to the code without affecting those who are using the master branch. People can use the develop branch if they wish and start testing new features and bug fixes. I can also get feedback before any final release. Once I feel the new version is stable, I can merge and push the final changes into master and tell the world the release is ready. It is good practice to tag or label your branches, especially in master after a release of code. This will allow access to previous releases if it is necessary.

In the case of Github, the Go tool is going to download and use master. If you are building a package for the masses, branching and versioning is something you want to work out in the beginning.

Let's look at the project code that is sitting in the Github repository under my private account:

You can see this repository is under my ArdanStudios account and I have a single source code file called main.go. You can also see some of the internal package folders that make up the project.

Let's look inside the mongo folder and view the mongo.go code file that is in there.


This single code file in the mongo folder provides connection support for accessing my MongoDB database.

What is important to see is the import references. Notice that the references to the goinggo utilities, the mongo driver and even the internal helper package are all done with a full url qualifier.

This is going to allow the Go tool to download these packages for us when we are ready to build and install our program.

For all this to work, in both our development and production environments, the code must reside in physical folders that match these url paths.

The Projects and PublicPackages folders in my development environment are located under a parent folder called Go. Under each of these folders I have a special Go folder called src.

In order for the Go tool to find the source code for the packages we import, the code must reside underneath a folder called src. In my development environment, the GOPATH contains these two folders:

$HOME/Spaces/Go/Projects
$HOME/Spaces/Go/PublicPackages

Notice the directories we add to the GOPATH variable do not point at the src folders themselves. The Go tool assumes there is a src folder immediately underneath each folder listed in the GOPATH.

If you look at the folders underneath src you will see the entire url is laid out as sub-folders and then the code folders and files follow:

In my development environment I like having folders called Projects and PublicPackages. Code in the PublicPackages folder is just that: public code that I don't own and am using in my projects. I could keep a single GOPATH and put the public code and my project code under a single folder. There is nothing wrong with that. I just like separating the code that I own from the code I don't.

To build out the PublicPackages folder you must manually bring down each repository yourself using the Go tool. Let's say you wanted to use the GoingGo utilities code in your dev environment.

Here is what you do. Open a terminal session and run these commands:

cd $HOME
export GOPATH=$HOME/example
go env
go get github.com/goinggo/utilities


I always run go env after I set the GOPATH to double check that it is properly set. In this example it should say GOPATH="/Users/bill/example".

When you run the go get command on the goinggo utilities repository, you will get the following message:

package github.com/goinggo/utilities
    imports github.com/goinggo/utilities
    imports github.com/goinggo/utilities: no Go source files in
    /Users/bill/example/src/github.com/goinggo/utilities

This is because there is nothing for the Go tool to build. It's ok because this package just provides utility code that will be built by each individual project. If you remember, I pointed out how there was a main.go file in the root of my project code. This is why. I want to make sure the Go tool finds the main source code file to build the application.

You can specify additional paths in the call to go get if the code you want to build is not in the root of the project. Something like this:

go get github.com/goinggo/utilities/workpool

When this runs you will not get any warnings and a static library file for workpool will exist in the package folder. All of the code for the specified repository will still be downloaded. Adding the extra folders to the end of the repository url only tells the Go tool where to start building the code.

When we open the example folder we see the Go tool created the entire tree structure:

What is even better is we have a cloned Github repository. The image on the right shows the hidden files for the .git folder. Why is this important? Anytime we want to update the code we can run the Go tool again:


cd $HOME
export GOPATH=$HOME/example
go env
go get -u github.com/goinggo/utilities

Using the -u option will perform an update to the local repository. Setting up the PublicPackages folder for your development environment will keep one version of all the packages you use in a single place under a single GOPATH folder. Minimizing the number of GOPATH folders in your development environment is always a good thing. You can also update any of the public code if you need to very quickly.

Next let's simulate a production build and installation using the Mongo Rules program. This is going to show you the real power of the Go tool when we structure our repositories and code correctly.

Before we can try this, we need to install the Bazaar program. Mongo Rules references the labix.org mgo driver. The mgo driver is being held in a Bazaar version control system and the Go tool can not download the code without it. This is a great example of a project that is getting code from multiple types of repositories and version control systems.

If you are running on a Mac or Windows use these links and follow the instructions:

http://wiki.bazaar.canonical.com/MacOSXDownloads
http://wiki.bazaar.canonical.com/WindowsDownloads

If you are running on Linux just run apt-get:

sudo apt-get install bzr

With Bazaar installed we can use the Go tool to download and install the Mongo Rules program in our simulated production environment.

** WAIT **   It is important that the GOBIN environment variable is not set. If this variable is set then the Go tool will attempt to install the Mongo Rules program in the location specified by the variable. This is not what we want. To clear the variable if it is set, issue this call:

export GOBIN=

Now run these commands:

cd $HOME
export GOPATH=$HOME/example
go env
go get github.com/goinggo/mongorules

After the Go tool is done we have a built and installed program that is ready to go. Look at the directory structure after the call:


How cool is this !!

With a single call to go get, all the code, including the packages the code depends on, is downloaded, built and then installed.

If you look at the bin folder you will see the executable binary for the Mongo Rules program that was built.

Inside the pkg folders we have the static library files that were produced when go get performed the build.

In the src folder you can see all the code, including the code from the labix.org website that was downloaded. The Go tool looked at all the public references in the code and downloaded and built everything it needed to create a final executable.


What is also really nice is that everything works within a single directory from our GOPATH.

GOPATH=$HOME/example

If you want to learn more about this program check out this article I wrote for Safari Books Online. It talks about how you can use MongoDB and Go to analyze data.

http://www.goinggo.net/2013/07/analyze-data-with-mongodb-and-go.html

All of this knowledge came to light when I needed to build, deploy and install my program on a different OS and machine. I really struggled to create a build package that would install all these dependencies and put everything in the right place. Little did I know a few days ago that the Go tool does all this for you. You just need to know how it works. Hopefully now, you do as well.

Timer Routines And Graceful Shutdowns In Go

In my Outcast data server I have several data retrieval jobs that run using different go routines. Each routine wakes up on a set interval. The most complex job is the downloading of radar images. What makes this complex is that there are 155 radar stations throughout the United States that take a new picture every 120 seconds. All these radar images can be put together to create a mosaic. When the go routine wakes up to pull down the new images, it must do this as quickly as possible for all 155 stations. If it doesn't, the mosaics will be out of sync and any overlays across station boundaries will look off.


The radar image on the left is for Tampa Bay at 4:51 PM EST. You can see the coverage of that radar station crosses over a large area of the state of Florida. This radar image actually cuts into several other radar stations including Miami.

The radar image on the right is for Miami at 4:53 PM EST. There is a two minute difference, or what I call glare, between these two radar images. When we overlay these two images on a map you would not notice any difference. However, if the glare between these images gets any greater than a couple of minutes, it can become obvious to the naked eye.


The blue colors are radar noise that gets filtered out, so we are left with greens, reds and yellows that represent real weather. These images were downloaded and cleaned at 4:46 PM EST. You can see they are pretty close and would overlay well.

The first implementation of the code used a single go routine on a 10 minute interval. When the go routine woke up it would take 3 to 4 minutes to download, process, store and write a record to mongo for all 155 stations. Even though I would process each region as close together as possible, the glare between the images was too great. The radar stations already contain a glare of one to two minutes so adding another one to two minutes more presented a problem.

I always try to use a single routine if I can for any work that needs to be performed, just to keep things simple. In this case one routine didn't work. I needed to process multiple stations at the same time and reduce the amount of glare between the images. After adding a work pool to process multiple stations at once, I was able to process all 155 stations in under a minute. So far I have received no complaints from the client team.

In this post we are going to concentrate on the timer routine and shutdown code. In the next post I will show you how to add a work pool to the solution.

I have attempted to provide a complete working code sample. It should work as a good template for your own implementations. To download and run the code, open a terminal session and issue the following commands:

cd $HOME
export GOPATH=$HOME/example
go get github.com/goinggo/timerdesignpattern
cd example/bin
./timerdesignpattern

The Outcast data server is a single application that is started and hopefully runs for a long period of time. Occasionally these types of applications do have to be shut down. It is important that you can always shut down your application gracefully on demand. When I am developing these types of applications, I always make sure, right from the beginning, that I can signal the application to terminate and it does so without hanging. There is nothing worse than an application that you need to kill by force.

The sample program creates a single go routine and tells the routine to wake up every 15 seconds. When the routine wakes up, it performs 10 seconds of work. When the work is over, it calculates the amount of time it needs to sleep so it can wake up on that 15 second cycle again.

Let's run the application and shut it down while it is running. Then we can learn how it all works. We can shut down the program by hitting the enter key at any time.

Here is the program running and being shut down 7 seconds later:

2013-09-04T18:58:45.505 : main : main : Starting Program
2013-09-04T18:58:45.505 : main : workmanager.Startup : Started
2013-09-04T18:58:45.505 : main : workmanager.Startup : Completed
2013-09-04T18:58:45.505 : WorkTimer : _WorkManager.GoRoutine_WorkTimer : Started
2013-09-04T18:58:45.505 : WorkTimer : _WorkManager.GoRoutine_WorkTimer : Info : Wait To Run : Seconds[15]

2013-09-04T18:58:52.666 : main : workmanager.Shutdown : Started
2013-09-04T18:58:52.666 : main : workmanager.Shutdown : Info : Shutting Down
2013-09-04T18:58:52.666 : main : workmanager.Shutdown : Info : Shutting Down Work Timer
2013-09-04T18:58:52.666 : WorkTimer : _WorkManager.GoRoutine_WorkTimer : Shutting Down
2013-09-04T18:58:52.666 : main : workmanager.Shutdown : Completed
2013-09-04T18:58:52.666 : main : main : Program Complete


This is a great first test. As soon as we tell the program to shut down, it does so gracefully. Next let's have the program start its work and try to shut it down:

2013-09-04T19:14:21.312 : main : main : Starting Program
2013-09-04T19:14:21.312 : main : workmanager.Startup : Started
2013-09-04T19:14:21.312 : main : workmanager.Startup : Completed
2013-09-04T19:14:21.312 : WorkTimer : _WorkManager.GoRoutine_WorkTimer : Started
2013-09-04T19:14:21.313 : WorkTimer : _WorkManager.GoRoutine_WorkTimer : Info : Wait To Run : Seconds[15]
2013-09-04T19:14:36.313 : WorkTimer : _WorkManager.GoRoutine_WorkTimer : Woke Up
2013-09-04T19:14:36.313 : WorkTimer : _WorkManager.GoRoutine_WorkTimer : Started
2013-09-04T19:14:36.313 : WorkTimer : _WorkManager.GoRoutine_WorkTimer : Processing Images For Station : 0
2013-09-04T19:14:36.564 : WorkTimer : _WorkManager.GoRoutine_WorkTimer : Processing Images For Station : 1
2013-09-04T19:14:36.815 : WorkTimer : _WorkManager.GoRoutine_WorkTimer : Processing Images For Station : 2
2013-09-04T19:14:37.065 : WorkTimer : _WorkManager.GoRoutine_WorkTimer : Processing Images For Station : 3

2013-09-04T19:14:37.129 : main : workmanager.Shutdown : Started
2013-09-04T19:14:37.129 : main : workmanager.Shutdown : Info : Shutting Down
2013-09-04T19:14:37.129 : main : workmanager.Shutdown : Info : Shutting Down Work Timer
2013-09-04T19:14:37.315 : WorkTimer : _WorkManager.GoRoutine_WorkTimer : Info : Request To Shutdown
2013-09-04T19:14:37.315 : WorkTimer : _WorkManager.GoRoutine_WorkTimer : Info : Wait To Run : Seconds[14]
2013-09-04T19:14:37.315 : WorkTimer : _WorkManager.GoRoutine_WorkTimer : Shutting Down
2013-09-04T19:14:37.316 : main : workmanager.Shutdown : Completed
2013-09-04T19:14:37.316 : main : main : Program Complete

This time I waited the 15 seconds and let the work begin. After it finished processing the fourth image, I told the program to shut down. It did so immediately and gracefully.


Let's look at the core piece of the code that implements the timer routine and the graceful shutdown:

func (this *_WorkManager) GoRoutine_WorkTimer() {
    wait := TIMER_PERIOD

    for {
        select {
        case <-this.ShutdownChannel:
            this.ShutdownChannel <- "Down"
            return

        case <-time.After(wait):
            break
        }

        startTime := time.Now()
        this.PerformTheWork()
        endTime := time.Now()

        duration := endTime.Sub(startTime)
        wait = TIMER_PERIOD - duration
    }
}

I have removed all the comments and logging to make it easier to read. This is classic channels at work, and the solution is really elegant, especially compared to how something like this needs to be implemented in C#.

The GoRoutine_WorkTimer function runs as a Go Routine and is started with the keyword go:

func Startup() (err error) {
    _This = &_WorkManager{
        Shutdown:        false,
        ShutdownChannel: make(chan string),
    }

    go _This.GoRoutine_WorkTimer()

    return err
}

The WorkManager is created as a singleton and then the timer routine is started. There is a single channel for shutting down the timer routine and a flag to denote when the system is shutting down.

The timer routine runs inside an endless for loop so it does not terminate until we ask it to. Let's look at the channel related code inside the for loop:


select {
case <-this.ShutdownChannel:
    this.ShutdownChannel <- "Down"
    return

case <-time.After(wait):
    break
}

this.PerformTheWork()

We are using a special keyword called select. Here is the Go documentation on the select statement:

http://golang.org/ref/spec#Select_statements

We are using the select statement to keep the timer routine asleep until it is time to perform work or time to shut down. The select puts the timer routine to sleep until one of the channels is signaled. Only one case will execute at a time, making the code synchronous. This really helps keep things simple and allows us to run atomic, "routine safe", operations across the multiple channels cased inside the select.

The select in the timer routine contains two channels, one for shutting down the routine and one for performing the work. Shutting down the routine is performed by the following code:

func Shutdown() (err error) {
    _This.Shutdown = true

    _This.ShutdownChannel <- "Down"
    <-_This.ShutdownChannel

    close(_This.ShutdownChannel)

    return err
}

When it is time to shut down, we set the ShutDown flag to true and then signal the ShutDownChannel by passing the string "Down" through the channel. Then we wait for a response back from the timer routine. This communication of data synchronizes the entire shutdown process between the main routine and the timer routine. Really nice, simple and elegant.

To wake up on an interval using the select statement, I use a special function called time.After. This function waits for the specified duration to elapse and then returns the current time on a signaled channel. This wakes up the select allowing the PerformTheWork function to be executed. Once the PerformTheWork function returns, the timer routine is put back to sleep by the select statement again, unless another channel is in the signaled state.

Let's look at the PerformTheWork function:


func (this *_WorkManager) PerformTheWork() {
    for count := 0; count < 40; count++ {
        if this.Shutdown == true {
            return
        }

        fmt.Printf("Processing Images For Station : %d\n", count)
        time.Sleep(time.Millisecond * 250)
    }
}

The function is printing a message to the console window 40 times every 250 milliseconds. This will take 10 seconds to complete. Within the loop the code is checking the Shutdown flag. It is really important for this function to terminate quickly if the program is shutting down. We don't want the admin who is shutting the program down to think the program has hung.

Once the function terminates, the timer routine can execute the select statement again. If the program is shutting down, the select will immediately wake up again to process the signaled Shutdown channel. From there the timer routine signals back to the main routine that it is shutting down and the program terminates gracefully.

This is my timer and graceful shutdown code pattern that you can also use in your applications. If you download the full example from the GoingGo repository, you can see the code in action and a few more goodies.

Read this post to learn how to implement a work pool to process work across multiple go routines, like the radar image processing I described above:

http://www.goinggo.net/2013/09/pool-go-routines-to-process-task.html

Running Go Programs In IronWorker

Introduction

Iron.io has a product called IronWorker which provides a task oriented Linux container that you can run your programs inside. If you are not sure what I mean, think of this as having a temporary Linux virtual machine instantly available for your personal but short term use. IronWorker allows you to load your binaries, code files, support files, shell scripts and just about anything else you may need to run your program in the container. You specify a single task to execute, such as running a shell script or a binary, and IronWorker will perform that task when requested. Once the task is complete, IronWorker will tear down the container as if it never existed.

If you are developing in Windows or on the Mac and plan to load and run pre-built binaries in IronWorker, they must be built for Linux. If that could be a problem don't despair, you have options.

You can create an Ubuntu VM so you can build the Linux binaries you need. This is what I do. I don't develop in my Ubuntu VM, I use it as a testing and staging area. Your second option is to build your program inside of the IronWorker container and then execute it. You have a lot of flexibility to do what you need with IronWorker.

To Install or Not to Install Ubuntu

I highly recommend that you buy VMWare Fusion or Parallels and create an Ubuntu VM. Most of the cloud based environments you may choose to use for your testing and production environments will be running Linux. It helps to stage and test things before uploading code to these environments. I use VMWare Fusion and I love the product. VMWare Fusion will cost around $60 USD and Ubuntu is FREE. I run Ubuntu 12.04 LTS but version 13.04 is now available.

http://www.vmware.com/products/fusion/
http://www.ubuntu.com/download/desktop

In this post I will be using my Mac for everything. So if you are running Windows you should be able to follow along. Once you see how to build your applications inside the IronWorker container, you will know how to load and run Linux binaries. For those who are interested in setting up or using Ubuntu, read the next section. If you want to continue in your Windows or Mac environment, go to the Installing Ruby section.

Setting Up Your Terminal In Ubuntu

If you decided to go ahead and install an Ubuntu VM or you already have one, awesome. Here are a few things you will want to do so you can use IronWorker.

The first Icon in the Launcher bar is the Dash Home Icon. Select this and perform a search for the Terminal application.

You will see several terminal applications show up. Select the Terminal program, which will launch a Terminal Session. Now back in Launcher you will see an icon for the Terminal session you started. Right Click on the Terminal Icon and select Lock To Launcher.

One last step, we need to change a configuration option for Terminal:


Make sure the Terminal session is the active program and move your mouse to the top of the screen. The menu for Terminal will appear. Under Edit choose Profile Preferences. Then select the Title and Command tab and check the Run command as a login shell. We need this for our Ruby programs to work properly in our Terminal sessions.

Installing Ruby

Before we can start using IronWorker we need the latest version of Ruby installed. IronWorker provides a Ruby based command line tool that allows us to upload IronWorker tasks for the different projects we create.

Window Users

In Windows use this website to install the latest version of Ruby:

http://rubyinstaller.org/

Once you are done skip to the next section called Installing IronWorker.

Mac and Linux Users

On the Mac and Linux I have found the best way to install and manage Ruby is with the Ruby Version Manager (RVM):

https://rvm.io/rvm/install


Perform these steps on both your Mac and Linux operating systems. Open a Terminal session and run this command:

\curl -L https://get.rvm.io | bash -s stable --ruby --autolibs=enable --auto-dotfiles

Use the backslash when running the command. This prevents curl from misbehaving if you have aliased it with configuration in your ~/.curlrc file.

That command should download the latest version of Ruby and make it the default version. Before moving on, close your Terminal session and start a new one. Check that everything is good by running this command:

rvm list

This should be the output:

rvm rubies

=* ruby-2.0.0-p247 [ x86_64 ]

# =>  - current
# =*  - current && default
#  *  - default

There may be a newer version of Ruby by the time you are reading this. But as long as the version is greater than or equal to 2.0.0 and has the =* operator in front, you will be all set.

If you are not sure if you have the correct default version set and want to make absolutely sure, run this command:

rvm use ruby-2.0.0-p247 --default

Just make sure you specify the version correctly.

Just for your information, the rvm tool and the different versions of Ruby you install get placed in a folder called .rvm inside your $HOME folder. It will be a hidden folder. To see that it does exist, run the following command in your Terminal session:

cd $HOME
ls -d .rvm

This should display the name of the directory back in the Terminal session.

Installing IronWorker

With Ruby properly installed we can install the IronWorker tool. This tool is a Ruby program that allows us to create tasks by uploading our programs and support files into IronWorker.

To install the tool we must install the IronWorker Ruby gem. Gems are Ruby packages that get installed under the default version of Ruby. This is why I stressed having the right default version set.


Run the following command in the Terminal session:

gem install iron_worker_ng

If everything installs properly, the output should end like this:

7 gems installed

As a final test to make sure everything is setup correctly, run the IronWorker program:

iron_worker

If everything is working properly you should get the following output:

usage: iron_worker COMMAND [OPTIONS]

    COMMAND: upload, patch, queue, retry, schedule, log, run, install, webhook, info
    run iron_worker COMMAND --help to get more information about each command

Now we have our environment setup to talk with the IronWorker platform. Make sure you do this in all of your environments.

The Test Program

I have built a test application that we are going to run in IronWorker. To download the code and the IronWorker support files, run the following commands:

cd $HOME
export GOPATH=$HOME/example
go get github.com/goinggo/ironworker

This will copy, build and install the code into the example folder under $HOME. The program has been written to test a few things about the IronWorker Linux container environment. Let's review the code for the program first and test it locally.

For IronWorker to be really effective you want to build programs that perform a specific task. The program should be designed to be started on demand or on a schedule, run and then be terminated. You don't want to build programs that run for long periods of time. The test program runs for 60 seconds before it terminates.

There were two things I wanted to know about IronWorker that the program tests. First, I wanted to know how logging worked. Second, I wanted to know if the program would receive OS signals when I requested the running program to be killed.

With all that said let's review the code we are going to run.

Here is the logging code which can be found in the helper package:


package helper

import (
    "fmt"
    "time"
)

func WriteStdout(goRoutine string, functionName string, message string) {
    fmt.Printf("%s : %s : %s : %s\n",
        time.Now().Format("2006-01-02T15:04:05.000"),
        goRoutine,
        functionName,
        message)
}

func WriteStdoutf(goRoutine string, functionName string, format string, a ...interface{}) {
    WriteStdout(goRoutine, functionName, fmt.Sprintf(format, a...))
}

There is nothing fancy here, just an abstraction layer so I can change out the logging mechanism if this doesn't work.

Here is the controller code that manages the starting and termination of the program:

package controller

import (
    "github.com/goinggo/ironworker/helper"
    "github.com/goinggo/ironworker/program"
    "os"
    "os/signal"
)

const (
    NAMESPACE  = "controller"
    GO_ROUTINE = "main"
)

func Run() {
    helper.WriteStdout("Main", "controller.Run", "Started")

    // Create a channel to talk with the OS.
    sigChan := make(chan os.Signal, 1)

    // Create a channel to let the program tell us it is done.
    waitChan := make(chan bool)


    // Create a channel to shut down the program early.
    shutChan := make(chan bool)

    // Launch the work routine.
    go program.DoWork(shutChan, waitChan, "Test")

    // Ask the OS to notify us about interrupt events.
    signal.Notify(sigChan, os.Interrupt)

    for {
        select {
        case <-sigChan:
            helper.WriteStdout("Main", "controller.Run", "******> Program Being Killed")

            // Signal the program to shutdown and wait for confirmation.
            shutChan <- true
            <-shutChan

            helper.WriteStdout("Main", "controller.Run", "******> Shutting Down")
            return

        case <-waitChan:
            helper.WriteStdout("Main", "controller.Run", "******> Shutting Down")
            return
        }
    }
}

I like to shut down my applications gracefully, so if I could receive an OS signal on a kill request, that would be fantastic. I am not a channel guru and I am sure there are better ways to accomplish this. I welcome any suggestions. For now, this is what we have.

The function creates three channels. The first channel is used to receive signals from the OS. The second channel is used by the Go routine that is performing the program logic to signal when it is done. The third channel is used by the controller to signal the Go routine to terminate early if necessary.

The program.DoWork function is started as a Go routine and then the controller waits for either the OS to signal or the running Go routine to signal it is done. If the OS signals to terminate, then the controller uses the shutdown channel and waits for the Go routine to respond. Then everything shuts down gracefully.

Here is the code for the Go routine that is simulating the work:

package program


import (
    "github.com/goinggo/ironworker/helper"
    "time"
)

func DoWork(shutChan chan bool, waitChan chan bool, logKey string) {
    helper.WriteStdout("Program", "program.DoWork", "Program Started")

    defer func() {
        waitChan <- true
    }()

    // Perform work for 60 seconds.
    for count := 0; count < 240; count++ {
        select {
        case <-shutChan:
            helper.WriteStdout("Program", "program.DoWork", "Info : Completed : KILL REQUESTED")
            shutChan <- true
            return

        default:
            helper.WriteStdoutf("Program", "program.DoWork", "Info : Performing Work : %d", count)
            time.Sleep(time.Millisecond * 250)
        }
    }

    helper.WriteStdout("Program", "program.DoWork", "Completed")
}

The DoWork function prints a message to the log every 250 milliseconds 240 times. This gives us a minute of work that must be performed. After each write log call, the function checks if the shutdown channel has been signaled. If it has, the function terminates immediately.

Just to have a complete code sample in the post, here is the main function:

package main

import (
    "github.com/goinggo/ironworker/controller"
    "github.com/goinggo/ironworker/helper"
)

func main() {
    helper.WriteStdout("Main", "main", "Started")


    controller.Run()

    helper.WriteStdout("Main", "main", "Completed")
}

In your native environment, mine being the Mac, download the code and let the Go tool build and install the application.

cd $HOME
export GOPATH=$HOME/example
go get github.com/goinggo/ironworker
cd $HOME/example/bin
./ironworker

When you run the program, let it run to completion. You should see the following output:

2013-09-07T11:42:48.701 : Main : main : Started
2013-09-07T11:42:48.701 : Main : controller.Run : Started
2013-09-07T11:42:48.701 : Program : program.DoWork : Program Started
2013-09-07T11:42:48.701 : Program : program.DoWork : Info : Performing Work : 0
2013-09-07T11:42:48.951 : Program : program.DoWork : Info : Performing Work : 1
2013-09-07T11:42:49.203 : Program : program.DoWork : Info : Performing Work : 2
2013-09-07T11:42:49.453 : Program : program.DoWork : Info : Performing Work : 3
2013-09-07T11:42:49.704 : Program : program.DoWork : Info : Performing Work : 4
2013-09-07T11:42:49.955 : Program : program.DoWork : Info : Performing Work : 5
...
2013-09-07T11:43:48.161 : Program : program.DoWork : Info : Performing Work : 237
2013-09-07T11:43:48.412 : Program : program.DoWork : Info : Performing Work : 238
2013-09-07T11:43:48.662 : Program : program.DoWork : Info : Performing Work : 239
2013-09-07T11:43:48.913 : Program : program.DoWork : Completed
2013-09-07T11:43:48.913 : Main : controller.Run : ******> Shutting Down
2013-09-07T11:43:48.913 : Main : main : Completed

The program started and terminated successfully. So the Controller logic is working. This time let's kill the program early by hitting <Ctrl> C after it starts:

2013-09-07T11:46:31.854 : Main : main : Started
2013-09-07T11:46:31.854 : Main : controller.Run : Started
2013-09-07T11:46:31.854 : Program : program.DoWork : Program Started


2013-09-07T11:46:31.854 : Program : program.DoWork : Info : Performing Work : 0
2013-09-07T11:46:32.105 : Program : program.DoWork : Info : Performing Work : 1
2013-09-07T11:46:32.356 : Program : program.DoWork : Info : Performing Work : 2
2013-09-07T11:46:32.607 : Program : program.DoWork : Info : Performing Work : 3
^C
2013-09-07T11:46:32.706 : Main : controller.Run : ******> OS Notification: interrupt : 0x2
2013-09-07T11:46:32.706 : Main : controller.Run : ******> Program Being Killed
2013-09-07T11:46:32.857 : Program : program.DoWork : Info : Completed : KILL REQUESTED
2013-09-07T11:46:32.857 : Main : controller.Run : ******> Shutting Down
2013-09-07T11:46:32.857 : Main : main : Completed

As soon as I hit <Ctrl> C the OS signaled the program with the syscall.SIGINT message. That caused the Controller to signal the running program to shutdown and the program terminated gracefully.

Configuring IronWorker

This is the documentation for using IronWorker:

http://dev.iron.io/worker/

I am going to walk you through the process for the test application we have. Login to your Iron.io account and select Projects from the top menu:

Enter Test into the text box and hit the Create New Project button. That should send you to the following screen:


Select the Key icon which is where you will find the Credentials for this project. We need these credentials to create a special file called iron.json. This file will be required by the IronWorker Ruby program to load our tasks into this project.

In our Terminal session let's move to the folder where the IronWorker script files are located. I want to work with the buildandrun scripts:

cd $HOME/example/src/github.com/goinggo/ironworker/scripts/buildandrun
ls -l

You should see the following files:

-rw-r--r--  1 bill  staff  995 Sep  8 20:12 buildandrun.sh
-rw-r--r--  1 bill  staff  106 Sep  8 20:12 buildandrun.worker
-rw-r--r--  1 bill  staff   81 Sep  8 20:12 iron.json

You will find an iron.json file in the folder. Edit the file and put in your credentials:

{
    "project_id" : "XXXXXXXXXXXXXXXXXXXXXX",
    "token" : "XXXXXXXXXXXXXXXXXXXXXX"
}

The next file we want to look at is the .worker file. Here is the documentation for .worker files:

http://dev.iron.io/worker/reference/dotworker/


Here is what the buildandrun.worker file looks like:

# define the runtime language
runtime 'binary'

# exec is the file that will be executed:
exec 'buildandrun.sh'

The buildandrun.worker file is telling the IronWorker Ruby program to upload and execute the buildandrun.sh file. This is the only file that will be placed into the IronWorker container. Here is what the buildandrun.sh file looks like:

export HOME_FOLDER="$HOME/Container"
export CODE_FOLDER="$HOME_FOLDER/code"
export PROGRAM_FOLDER="$CODE_FOLDER/src/github.com/goingo/ironworker"

if [ ! -e $CODE_FOLDER/bin/ironworker ]
then
  mkdir $HOME_FOLDER
  cd $HOME_FOLDER
  curl https://go.googlecode.com/files/go1.1.2.linux-amd64.tar.gz -o p.tar.bz2 && tar xf p.tar.bz2 && rm p.tar.bz2
  export GOARCH="amd64"
  export GOBIN=""
  export GOCHAR="6"
  export GOEXE=""
  export GOHOSTARCH="amd64"
  export GOHOSTOS="linux"
  export GOOS="linux"
  export GOPATH="$CODE_FOLDER"
  export GORACE=""
  export GOROOT="$HOME_FOLDER/go"
  export GOTOOLDIR="$HOME_FOLDER/go/pkg/tool/linux_amd64"
  export CC="gcc"
  export GOGCCFLAGS="-g -O2 -fPIC -m64 -pthread"
  export CGO_ENABLED="1"
  export PATH=$GOROOT/bin:$PATH
  go get github.com/goinggo/ironworker

  #git clone https://username:[email protected]/goinggo/ironworker $PROGRAM_FOLDER
  #cd $PROGRAM_FOLDER
  #go clean -i
  #go build
  #go install
fi


cd $CODE_FOLDER/bin
./ironworker

This script tests to see if the ironworker test application already exists. If it doesn't, it then proceeds to download the latest Linux binary package for Go and builds the program using the Go tool. Once the build and install is complete, the script executes the program.

You will notice these lines have been commented out in the shell script:

  #git clone https://username:[email protected]/goinggo/ironworker $PROGRAM_FOLDER
  #cd $PROGRAM_FOLDER
  #go clean -i
  #go build
  #go install

If you have a repository that requires authentication you can use this technique. This calls git clone the same way the Go tool does, so everything is copied to the right directory structure. If your code references other libraries, you will need to clone those manually. If you run the go get command with the -x option, you can see all the calls the Go tool issues. Just copy what you need and add it to your shell script.

IronWorker does have a copy of the Linux binary package for Go version 1.0.2 already pre-configured in every IronWorker container. The script installs the latest version to show you how that can be accomplished if the version you need is not the one installed. The technique can also be used to install other packages that might be required.

If you want to build the code every time the task runs, you could be smarter and run go version first. If the right version is not already available then you can download the version of Go you need:

export HOME_FOLDER="$HOME/Container"
export CODE_FOLDER="$HOME_FOLDER/code"
export PROGRAM_FOLDER="$CODE_FOLDER/src/github.com/goinggo/ironworker"

go version > ver.txt
goversion=$(<ver.txt)
head ver.txt

# IronWorker will report go version go1.0.2

if [ "$goversion" != "go version go1.1.2 linux/amd64" ]
then
  mkdir $HOME_FOLDER
  cd $HOME_FOLDER
  curl https://go.googlecode.com/files/go1.1.2.linux-amd64.tar.gz -o p.tar.bz2 && tar xf p.tar.bz2 && rm p.tar.bz2
  export GOARCH="amd64"
  export GOBIN=""
  export GOCHAR="6"
  export GOEXE=""
  export GOHOSTARCH="amd64"
  export GOHOSTOS="linux"
  export GOOS="linux"
  export GOPATH="$CODE_FOLDER"
  export GORACE=""
  export GOROOT="$HOME_FOLDER/go"
  export GOTOOLDIR="$HOME_FOLDER/go/pkg/tool/linux_amd64"
  export CC="gcc"
  export GOGCCFLAGS="-g -O2 -fPIC -m64 -pthread"
  export CGO_ENABLED="1"
  export PATH=$GOROOT/bin:$PATH
fi

go get -x github.com/goinggo/ironworker
cd $CODE_FOLDER/bin
./ironworker

Loading IronWorker With A Task

We have everything we need to load and run our first task for the Test project. In your Terminal session run the following commands:

cd $HOME/example/src/github.com/goinggo/ironworker/scripts/buildandrun
iron_worker upload buildandrun

If everything is successful you should see the following output:

------> Creating client
        Project 'Test' with id='522b4c518a0c960009000007'
------> Creating code package
        Found workerfile with path='buildandrun.worker'
        Detected exec with path='buildandrun.sh' and args='{}'
        Code package name is 'buildandrun'
------> Uploading code package 'buildandrun'
        Code package uploaded with id='522d147b91c530531f6f4e92' and revision='1'
        Check 'https://hud.iron.io/tq/projects/522b4c518a0c960009000007/code/522d147b91c530531f6f4e92' for more info

Go back to the Iron.io website and let's see if our new task is there. Select Projects again from the main menu and select the Worker button to the right of your Test project.

Select the Tasks tab and you should see the buildandrun task we just uploaded. Select the task and you should see the following screen:

There is a big grey button that says Queue a Task. Let's hit that button to run our task.

A dialog box will pop up. Use all the defaults and hit the Queue Task button.

This will queue the task and then it should start running. Once you hit the queue button the screen should change:

The task will start in the queued state, then running and finally complete. Once the task is done, click on the Log link.

You will see the shell script did everything perfectly. It downloaded Go and successfully built the program. Then it started executing it.

Let's run the program again and check two things. First, let's see if the container is saved and the download of Go does not have to occur again. Second, let's kill the program early and see if it receives any OS signals.

Hit the Queue a Task button again and after several seconds let's kill it:

You can see that I killed the task 21 seconds into the run. When I look at the logs I am a bit disappointed. First, the task downloaded Go again. This means I get a new and clean IronWorker container every time the task runs. Second, the program did not receive any OS signals when I issued the kill. It appears the IronWorker container is forced to die and the program gets no OS notifications.

It is not the end of the world, just something that is good to know. Based on this new information it seems we want to load binaries into the IronWorker container when we can. This way we don't need to spend time downloading things that can be pre-compiled. However, I was able to use IronWorker from my Mac environment which is a real plus.

On the other hand, having Go build and install your program every time could be huge. If you upload code changes to your repository you don't need to upload a new revision of the task. The Go tool will pull down the latest code, build it and run the program. Then again, that could cause problems.

At the end of the day you need to decide what is best for your different scenarios. What's awesome is that IronWorker gives you all the flexibility you need to make it work.

Working With Builder Tasks

Builder Tasks are a hidden gem within IronWorker. A builder task allows you to build your code and generate a task that you will use for your application inside of IronWorker. This is really the best of both worlds because you don't need to download Go and build the code every time the task runs. You can do your build once. Your application task runs immediately because it is always ready to go with the binaries and other support files it needs.

Go back into the scripts folder and find the sub-folder called buildertask. Let's quickly run through the files and see how this works.

The task-builder.sh file is the script that knows how to build our code.

export HOME_FOLDER="$HOME/Container"
export CODE_FOLDER="$HOME_FOLDER/code"
export PROGRAM_FOLDER="$CODE_FOLDER/src/github.com/goinggo/ironworker"

mkdir $HOME_FOLDER
cd $HOME_FOLDER
curl https://go.googlecode.com/files/go1.1.2.linux-amd64.tar.gz -o p.tar.bz2 && tar xf p.tar.bz2 && rm p.tar.bz2
export GOARCH="amd64"
export GOBIN=""
export GOCHAR="6"
export GOEXE=""
export GOHOSTARCH="amd64"
export GOHOSTOS="linux"
export GOOS="linux"
export GOPATH="$CODE_FOLDER"
export GORACE=""
export GOROOT="$HOME_FOLDER/go"
export GOTOOLDIR="$HOME_FOLDER/go/pkg/tool/linux_amd64"
export CC="gcc"
export GOGCCFLAGS="-g -O2 -fPIC -m64 -pthread"
export CGO_ENABLED="1"
export PATH=$GOROOT/bin:$PATH

go get github.com/goinggo/ironworker

cd $CODE_FOLDER/bin
cp ironworker $HOME/__build__/ironworker

The code downloads Go and then uses the Go tool to build the program. At the end of the script we copy the binary that the Go tool built to the IronWorker staging area. Anything you copy to this folder will be placed into our new application task.

The task.sh file is the script that is executed by the application task.

./ironworker

In our case we only need to run the binary. Remember the binary is being created by the task-builder.sh script file.

The task.worker file performs all the magic:

runtime 'binary'
exec 'task.sh'
build 'sh ./task-builder.sh'
file 'task-builder.sh'

The worker file tells IronWorker that our application task is a binary and to load and run the task.sh script. Next we have the build command. This tells IronWorker to perform a remote build by executing the task-builder.sh script. The file command pulls the task-builder.sh file into the builder task so it can be executed remotely.

Let's navigate to the buildertask folder and try all this out:

cd $HOME/example/src/github.com/goinggo/ironworker/scripts/buildertask

We need to edit the iron.json file with the credentials again. Once you do that run the following command:

iron_worker upload task

This time the upload will take a bit longer to run. IronWorker will be performing a remote build and we must wait until it is complete. Once everything is done you should see the following:

------> Creating client
        Project 'Test' with id='522b4c518a0c960009000007'
------> Creating code package
        Found workerfile with path='task.worker'
        Detected exec with path='task.sh' and args='{}'
        Merging file with path='task-builder.sh' and dest=''
        Code package name is 'task'
------> Uploading and building code package 'task'
        Remote building worker
        Code package uploaded with id='522d0cad3cb46653c5e15cbe' and revision='1'
        Check 'https://hud.iron.io/tq/projects/522b4c518a0c960009000007/code/522d0cad3cb46653c5e15cbe' for more info

Now let's switch to the Iron.io website and see what we have. Go back to the Tasks tab and you should see two new tasks:

You will notice the task::builder task has already been executed. This is the remote build that was being performed. Let's look at the logs. You will see that the Go binary package for Linux was downloaded and the project was also downloaded and built. Look at the last two lines in the log:

This is where we copied the binary to the staging folder __build__. We didn't have any errors or problems copying the final binary.

Now we can try running the application task. Select task from the Task list and queue the job. It should start right up and run for a minute. Once it is done let's look at the log:

When you look at the log you can see the program starts running. There is no downloading of Go or any other build work.

Using a builder task is a great way to have IronWorker stage and build the code for us. If you change the code and need to perform another build you must run the iron_worker upload command again. This will create new revisions of both the builder and application tasks. You can't rerun the builder task manually.

IronWorker and PaperTrail

Looking at the logs in IronWorker is a real convenience, but you will want to use a third party system to manage your logging. I love PaperTrail and the IronWorker integration is seamless.

Go to PaperTrail and create a free account:

https://papertrailapp.com/

After you get your account and login, go to the Dashboard.

Find the Create Group button and create a new group:

Click the Save button at the bottom. Now on the Dashboard you should have your new group:

Go to the Account options.

Create a Log Destination for your IronWorker Test project. Click on the Create log destination button:

Click on the Edit Settings button:

Make sure you accept logs from unrecognized systems and select the group you just created. Then hit the Update button.

Now copy the destination url and go back to your IronWorker Test project. Select the settings icon:

Take the PaperTrail url and enter it into the Logger URL text box using udp as the protocol. The udp must be lowercase or your task will fail. Click the Update button, then click on the Worker button again and find the buildandrun task.

Queue the task one last time and let it run to completion. As the task is running, go back to the PaperTrail website. Go to the Dashboard and hit the refresh button on the browser:

You should start seeing events coming into your group.  Click on the All events drop down on the right and you will see the logs in PaperTrail.

Conclusion

I have just scratched the surface of how you can use IronWorker to run your applications. It is a really flexible environment with practically no restrictions. You can download and install any packages you need, you have access to the local disk, and there is integration with PaperTrail and a few other logging systems.

Though you don't need a Linux VM to use IronWorker, you may want to consider having one so you can stage and load your binary programs directly. Again, you have the flexibility to use IronWorker as you see fit.

I hope you try out the service. Use my application or build your own. Post a comment about your experience and anything new you learn. I plan on using IronWorker for two projects I am working on and expect only great things.

Slices of Slices of Slices in Go

I am working on building code to load polygons for the different Marine Forecast areas in the United States. These polygons need to be stored in MongoDB, and in a specific format. That would not have been a big deal except for one fact: there isn't just one polygon for each area. There is an exterior polygon and then zero to many interior polygons that need to be stored in relationship to it.

After staring at the problem for a bit I realized that I needed to create a slice of Marine Forecast areas, each of which contained a slice of polygons. To store each polygon ring I needed a slice of geographic coordinates. Finally, each coordinate needed to be stored in a two element array of float64 values.

A picture is worth a thousand words:

When the data is stored in MongoDB it needs to follow this data pattern:

My head is spinning just looking at the diagram and the picture. The diagram depicts how all the slices and objects need to be organized.

The picture shows how the polygons need to be stored in MongoDB. There will be multiple elements under coordinates, each with its own set of points.

I decided to build a test application to figure out how to structure and store the data.

The more I use slices the more I love them. I love how I can pass them in and out of functions and not concern myself with handling references or how memory is being managed. A slice is a lightweight data structure that can safely be copied in and out of functions.

I catch myself thinking all the time that I need to pass a reference to the slice so a copy of the data structure is not made on the stack. Then I remember that the data structure is only 24 bytes; I am not copying all the data that is abstracted underneath it.

Read these two articles to learn more about slices:

http://www.goinggo.net/2013/08/understanding-slices-in-go-programming.html
http://www.goinggo.net/2013/08/collections-of-unknown-length-in-go.html

Let's look at the data structure that will hold and store the data for MongoDB:

// Polygon defines a set of points that complete a ring
// around a geographic area
type Polygon [][2]float64

// PolygonRings defines a MongoDB Structure for storing multiple polygon rings
type PolygonRings struct {
    Type        string    `bson:"type"`
    Coordinates []Polygon `bson:"coordinates"`
}

// Represents a marine station and its polygons
type MarineStation struct {
    StationId string       `bson:"station_id"`
    Polygons  PolygonRings `bson:"polygons"`
}

The Polygon type represents a slice of two element arrays of float64. Each array represents a point that makes up the polygon.

The PolygonRings structure takes on the MongoDB format required for storing polygons. If you want to use MongoDB to perform geospatial searches against the polygons this is required.

The MarineStation structure simulates an individual station and the set of polygons associated with the station.

The test code is going to create one station with two polygons. Then it will display everything.  Let's look at how to create the slice of marine stations and create a single marine station for testing:

// Create an empty slice to store the polygon rings
// for the different marine stations
marineStations := []MarineStation{}

// Create a marine station for AMZ123
marineStation := MarineStation{
    StationId: "AMZ123",
    Polygons: PolygonRings{
        Type:        "Polygon",
        Coordinates: []Polygon{},
    },
}

The first line of code creates an empty slice that can hold MarineStation objects. Then we create a MarineStation object using a composite literal. Within the composite literal we have another composite literal to create an object of type PolygonRings for the Polygons property. Then within the creation of the PolygonRings object we create an empty slice that can hold Polygon objects for the Coordinates property.

To learn more about composite literals check out this document:

http://golang.org/ref/spec#Composite_literals

Now it is time to add a couple of polygons to the station:

// Create the points for the first polygon ring
point1 := [2]float64{-79.7291190729999, 26.9729398600001}
point2 := [2]float64{-80.0799532019999, 26.9692689500001}
point3 := [2]float64{-80.0803627959999, 26.970533371}
point4 := [2]float64{-80.0810508729999, 26.975004196}
point5 := [2]float64{-79.7291190729999, 26.9729398600001}

// Create a polygon for this ring
polygon := Polygon{point1, point2, point3, point4, point5}

// Add the polygon to the slice of polygon coordinates
marineStation.Polygons.Coordinates = append(marineStation.Polygons.Coordinates, polygon)

First we create five points. Notice the first and last point are identical. This completes the ring. Then we store all the points into a Polygon object, using a composite literal. Last, we append the Polygon object to the slice of Polygons for the marine station.
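For reference, once this first ring is appended, the document that would end up in MongoDB for the station is shaped roughly like this. This is a sketch built from the bson tags on the structures above, matching MongoDB's GeoJSON polygon layout:

```json
{
  "station_id": "AMZ123",
  "polygons": {
    "type": "Polygon",
    "coordinates": [
      [
        [-79.7291190729999, 26.9729398600001],
        [-80.0799532019999, 26.9692689500001],
        [-80.0803627959999, 26.970533371],
        [-80.0810508729999, 26.975004196],
        [-79.7291190729999, 26.9729398600001]
      ]
    ]
  }
}
```

Each additional ring appended to Coordinates becomes another array of points under "coordinates".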

Then we do it all over again so we have two polygons associated with this marine station:

// Create the points for the second polygon ring
point1 = [2]float64{-80.4370117189999, 27.7877197270001}
point2 = [2]float64{-80.4376220699999, 27.7885131840001}
point3 = [2]float64{-80.4384155269999, 27.7885131840001}
point4 = [2]float64{-80.4370117189999, 27.7877197270001}

// Create a polygon for this ring
polygon = Polygon{point1, point2, point3, point4}

// Add the polygon to the slice of polygon coordinates
marineStation.Polygons.Coordinates = append(marineStation.Polygons.Coordinates, polygon)

This second polygon has four points instead of five. The last thing left to do is add the MarineStation object to the slice of stations and display everything:

// Add the marine station
marineStations = append(marineStations, marineStation)

Display(marineStations)

The Display function uses the keyword range to iterate over all the slices:

func Display(marineStations []MarineStation) {
    for _, marineStation := range marineStations {
        fmt.Printf("\nStation: %s\n", marineStation.StationId)

        for index, rings := range marineStation.Polygons.Coordinates {
            fmt.Printf("Ring: %d\n", index)

            for _, coordinate := range rings {
                fmt.Printf("Point: %f,%f\n", coordinate[0], coordinate[1])
            }
        }
    }
}

The function takes a slice of MarineStation objects. Remember only the slice structure is being copied on the stack, not all the objects the slice represents.

When we iterate through the slice of MarineStation objects and all the internal slices that make up the object, we get the following result:

Station: AMZ123
Ring: 0
Point: -79.729119,26.972940
Point: -80.079953,26.969269
Point: -80.080363,26.970533
Point: -80.081051,26.975004
Point: -79.729119,26.972940
Ring: 1
Point: -80.437012,27.787720
Point: -80.437622,27.788513
Point: -80.438416,27.788513
Point: -80.437012,27.787720

Using slices to solve this problem is fast, easy and effective. I have placed a working copy of the test code in the Go Playground:

http://play.golang.org/p/UYO2HIKggy

Building this quick test application has shown me again that using slices has very real advantages. They will make you more productive and help your code perform well. Not having to worry about memory management and handling references to pass data in and out of functions is huge. Take the time to learn how to use slices in your code; you will thank yourself later.

Pool Go Routines To Process Task Oriented Work

On more than one occasion I have been asked why I use the Work Pool pattern. Why not just start as many Go routines as needed at any given time to get the work done? My answer is always the same. Depending on the type of work, the computing resources you have available and the constraints that exist within the platform, blindly throwing Go routines to perform work could make things slower and hurt overall system performance and responsiveness.

Every application, system and platform has a breaking point. Resources are not unlimited, whether that is memory, CPU, storage, bandwidth, etc. The ability for our applications to reduce and reuse resources is important. Work pools provide a pattern that can help applications manage resources and provide performance tuning options.

Here is the pattern behind the work pool:

In the diagram above, the Main Routine posts 100 tasks into the Work Pool. The Work Pool queues each individual task and once a Go routine is available, the task is dequeued, assigned and performed. When the task is finished, the Go routine becomes available again to process more tasks. The number of Go routines and the capacity of the queue can be configured, which allows for performance tuning the application.

With Go don't think in terms of threads but in Go routines. The Go runtime manages an internal thread pool and schedules the Go routines to run within that pool. Thread pooling is key to minimizing load on the Go runtime and maximizing performance. When we spawn a Go routine, the Go runtime will manage and schedule that Go routine to run on its internal thread pool. No different than the operating system scheduling a thread to run on an available CPU. We can gain the same benefits out of a Go routine pool as we can with a thread pool. Possibly even more.

I have a simple philosophy when it comes to task oriented work: Less is More. I always want to know the least number of Go routines, for a particular task, that yields the best result. The best result must take into account not only how fast all the tasks are getting done, but also the total impact processing those tasks has on the application, system and platform. You also have to look at the impact both short term and long term.

We might be able to yield very fast processing times in the beginning, when the overall load on the application or system is light. Then one day the load changes slightly and the configuration doesn't work anymore. We may not realize that we are crippling a system we are interacting with. We could be pushing a database or a web server too hard and eventually, always at the wrong time, the system shuts down. A burst run of 100 tasks might work great, but sustained over an hour might be deadly.

A Work Pool is not some magic pixie dust that will solve the world's computing problems. It is a tool you can use for the task oriented work inside your applications. It provides options and some control over how your application performs. As things change, you have the flexibility to change with it.

Let's prove the simple case that a Work Pool will process our task oriented work faster than just blindly spawning Go routines. The test application I built runs a task that grabs a MongoDB connection, performs a Find on that MongoDB and retrieves the data. This is something the average business application would do. The application will post this task 100 times into a Work Pool and do this 5 times to get an average runtime.

To download the code, open a Terminal session and run the following commands:

export GOPATH=$HOME/example
go get github.com/goinggo/workpooltest
cd $HOME/example/bin

Let's start with a work pool of 100 Go routines. This simulates the model of spawning as many routines as we have tasks.

./workpooltest 100 off

The first argument tells the program to use 100 Go routines in the pool and the second parameter turns off the detailed logging.

Here is the result of using 100 Go routines to process 100 tasks on my Macbook:

CPU[8] Routines[100] AmountOfWork[100] Duration[4.599752] MaxRoutines[100] MaxQueued[3]
CPU[8] Routines[100] AmountOfWork[100] Duration[5.799874] MaxRoutines[100] MaxQueued[3]
CPU[8] Routines[100] AmountOfWork[100] Duration[5.325222] MaxRoutines[100] MaxQueued[3]
CPU[8] Routines[100] AmountOfWork[100] Duration[4.652793] MaxRoutines[100] MaxQueued[3]
CPU[8] Routines[100] AmountOfWork[100] Duration[4.552223] MaxRoutines[100] MaxQueued[3]
Average[4.985973]

The output tells us a few things about the run:

CPU[8]             : The number of cores on my machine
Routines[100]      : The number of routines in the work pool
AmountOfWork[100]  : The number of tasks to run
Duration[4.599752] : The amount of time in seconds the run took
MaxRoutines[100]   : The max number of routines that were active during the run
MaxQueued[3]       : The max number of tasks waiting in queue during the run

Next let's run the program using 64 Go routines:

CPU[8] Routines[64] AmountOfWork[100] Duration[4.574367] MaxRoutines[64] MaxQueued[35]
CPU[8] Routines[64] AmountOfWork[100] Duration[4.549339] MaxRoutines[64] MaxQueued[35]
CPU[8] Routines[64] AmountOfWork[100] Duration[4.483110] MaxRoutines[64] MaxQueued[35]
CPU[8] Routines[64] AmountOfWork[100] Duration[4.595183] MaxRoutines[64] MaxQueued[35]
CPU[8] Routines[64] AmountOfWork[100] Duration[4.579676] MaxRoutines[64] MaxQueued[35]
Average[4.556335]

Now using 24 Go routines:

CPU[8] Routines[24] AmountOfWork[100] Duration[4.595832] MaxRoutines[24] MaxQueued[75]
CPU[8] Routines[24] AmountOfWork[100] Duration[4.430000] MaxRoutines[24] MaxQueued[75]
CPU[8] Routines[24] AmountOfWork[100] Duration[4.477544] MaxRoutines[24] MaxQueued[75]
CPU[8] Routines[24] AmountOfWork[100] Duration[4.550768] MaxRoutines[24] MaxQueued[75]
CPU[8] Routines[24] AmountOfWork[100] Duration[4.629989] MaxRoutines[24] MaxQueued[75]
Average[4.536827]

Now using 8 Go routines:

CPU[8] Routines[8] AmountOfWork[100] Duration[4.616843] MaxRoutines[8] MaxQueued[91]
CPU[8] Routines[8] AmountOfWork[100] Duration[4.477796] MaxRoutines[8] MaxQueued[91]
CPU[8] Routines[8] AmountOfWork[100] Duration[4.841476] MaxRoutines[8] MaxQueued[91]
CPU[8] Routines[8] AmountOfWork[100] Duration[4.906065] MaxRoutines[8] MaxQueued[91]
CPU[8] Routines[8] AmountOfWork[100] Duration[5.035139] MaxRoutines[8] MaxQueued[91]
Average[4.775464]

Let's collect the results of the different runs:

100 Go Routines : 4.985973 :
64  Go Routines : 4.556335 : ~430 Milliseconds Faster
24  Go Routines : 4.536827 : ~450 Milliseconds Faster
8   Go Routines : 4.775464 : ~210 Milliseconds Faster

This program seems to run best when we use 3 Go routines per core. This seems to be a magic number because it always yields pretty good results for the programs I write. If we run the program on a machine with more cores, we can increase the Go routine number and take advantage of the extra resources and processing power. That is, assuming MongoDB can handle the extra load for this particular task. Either way, we can always adjust the size and capacity of the Work Pool.

We have proved that for this particular task, spawning a Go routine for each task is not the best performing solution. Let's look at the code for the Work Pool and see how it works:

The Work Pool can be found under the following folder if you downloaded the code:

cd $HOME/example/src/github.com/goinggo/workpool

All the code can be found in a single Go source code file called workpool.go. I have removed all the comments and some lines of code to let us focus on the important pieces. Not all of the functions are listed in this post either.

Let's start with the types that make up the Work Pool:

type WorkPool struct {
    _ShutdownQueueChannel chan string
    _ShutdownWorkChannel  chan struct{}
    _ShutdownWaitGroup    sync.WaitGroup
    _QueueChannel         chan _PoolWork
    _WorkChannel          chan PoolWorker
    _QueuedWork           int32
    _ActiveRoutines       int32
    _QueueCapacity        int32
}

type _PoolWork struct {
    Work          PoolWorker
    ResultChannel chan error
}

type PoolWorker interface {
    DoWork(workRoutine int)
}

The WorkPool structure is a public type that represents the Work Pool. The implementation uses two channels to run the pool.

The WorkChannel is at the heart of the Work Pool. It manages the queue of work that needs to be processed. All of the Go routines that will be performing the work will wait for a signal on this channel.

The QueueChannel is used to manage the posting of work into the WorkChannel queue. The QueueChannel provides acknowledgments to the calling routine that the work has or has not been queued. It also helps to maintain the QueuedWork and QueueCapacity counters.

The _PoolWork structure defines the data that is sent into the QueueChannel to process enqueuing requests. It contains an interface reference to the user's PoolWorker object and a channel to receive a confirmation that the task has been enqueued.

The PoolWorker interface defines a single function called DoWork that has a parameter that represents an internal id for the Go routine that is running the task. This is very helpful for logging and other things that you may want to implement at a per Go Routine level.

The PoolWorker interface is the key for accepting and running tasks in the Work Pool. Look at this sample client implementation:

type MyTask struct {
    Name string
    WP   *workpool.WorkPool
}

func (this *MyTask) DoWork(workRoutine int) {
    fmt.Printf("%s\n", this.Name)

    fmt.Printf("*******> WR: %d QW: %d AR: %d\n",
        workRoutine,
        this.WP.QueuedWork(),
        this.WP.ActiveRoutines())

    time.Sleep(100 * time.Millisecond)
}

func main() {
    runtime.GOMAXPROCS(runtime.NumCPU())

    workPool := workpool.New(runtime.NumCPU()*3, 100)

    task := &MyTask{
        Name: "A" + strconv.Itoa(i),
        WP:   workPool,
    }

    err := workPool.PostWork("main", task)

    ...
}

I create a type called MyTask that defines the state I need for the work to be performed. Then I implement a member function for MyTask called DoWork, which matches the signature of the PoolWorker interface. Since MyTask implements the PoolWorker interface, objects of type MyTask are now considered objects of type PoolWorker. Now we can pass an object of type MyTask into the PostWork call.

To learn more about interfaces and object oriented programming in Go read this blog post:

http://www.goinggo.net/2013/07/object-oriented-programming-in-go.html

In main I tell the Go runtime to use all of the available CPUs and cores on my machine. Then I create a Work Pool with 24 Go routines. On my current machine I have 8 cores and as we learned above, three Go routines per core is a good starting place. The last parameter tells the Work Pool to create a queue capacity for 100 tasks.

Then I create a MyTask object and post it into the queue. For logging purposes, the first parameter of the PostWork function is a name you can give to the routine making the call. If the err variable is nil after the call, the task has been posted. If not, then most likely you have reached queue capacity and the task could not be posted.

Let's look at the internals of how a WorkPool object is created and started:

func New(numberOfRoutines int, queueCapacity int32) (workPool *WorkPool) {

    workPool = &WorkPool{
        _ShutdownQueueChannel: make(chan string),
        _ShutdownWorkChannel:  make(chan struct{}),
        _QueueChannel:         make(chan _PoolWork),
        _WorkChannel:          make(chan PoolWorker, queueCapacity),
        _QueuedWork:           0,
        _ActiveRoutines:       0,
        _QueueCapacity:        queueCapacity,
    }

    for workRoutine := 0; workRoutine < numberOfRoutines; workRoutine++ {
        workPool._ShutdownWaitGroup.Add(1)
        go workPool._WorkRoutine(workRoutine)
    }

    go workPool._QueueRoutine()

    return workPool
}

The New function accepts the number of routines and the queue capacity as we saw in the above sample client code. The WorkChannel is a buffered channel which is used as the queue for storing the work we need to process. The QueueChannel is an unbuffered channel used to synchronize access to the WorkChannel buffer, guarantee queuing and to maintain the counters.

To learn more about buffered and unbuffered channels read this web page:

http://golang.org/doc/effective_go.html#channels

Once the channels are initialized we are able to spawn the Go routines that will perform the work. First we add 1 to the wait group for each Go routine so we can shut the pool down cleanly when it is time. Then we spawn the Go routines so we can process work. The last thing we do is start up the QueueRoutine so we can begin to accept work.

To learn how the shutdown code and WaitGroup works read this web page:

http://dave.cheney.net/2013/04/30/curious-channels

Shutting down the Work Pool is done like this:

func (this *WorkPool) Shutdown(goRoutine string) (err error) {
    this._ShutdownQueueChannel <- "Down"
    <-this._ShutdownQueueChannel

    close(this._QueueChannel)
    close(this._ShutdownQueueChannel)

    close(this._ShutdownWorkChannel)
    this._ShutdownWaitGroup.Wait()

    close(this._WorkChannel)

    return err
}

The Shutdown function brings down the QueueRoutine first so no more requests can be accepted. Then the ShutdownWorkChannel is closed and the code waits for each Go routine to decrement the WaitGroup counter. Once the last Go routine calls Done on the WaitGroup, the call to Wait will return and the Work Pool is shut down.

Now let's look at the PostWork and QueueRoutine functions:

func (this *WorkPool) PostWork(goRoutine string, work PoolWorker) (err error) {
    poolWork := _PoolWork{work, make(chan error)}

    defer close(poolWork.ResultChannel)

    this._QueueChannel <- poolWork
    err = <-poolWork.ResultChannel

    return err
}

func (this *WorkPool) _QueueRoutine() {
    for {
        select {
        case <-this._ShutdownQueueChannel:
            this._ShutdownQueueChannel <- "Down"
            return

        case queueItem := <-this._QueueChannel:
            if atomic.AddInt32(&this._QueuedWork, 0) == this._QueueCapacity {
                queueItem.ResultChannel <- fmt.Errorf("Thread Pool At Capacity")
                continue
            }

            atomic.AddInt32(&this._QueuedWork, 1)

            this._WorkChannel <- queueItem.Work

            queueItem.ResultChannel <- nil
            break
        }
    }
}

The idea behind the PostWork and QueueRoutine functions is to serialize access to the WorkChannel buffer, guarantee queuing and maintain the counters. Work is always placed at the end of the WorkChannel buffer by the Go runtime when it is sent into the channel.

The channel send and receive operations are the communication points. When the QueueChannel is signaled, the QueueRoutine receives the work. Queue capacity is checked and if there is room, the user PoolWorker object is queued into the WorkChannel buffer. Finally the calling routine is signaled back that everything is queued.


Last let's look at the WorkRoutine functions:

func (this *WorkPool) _WorkRoutine(workRoutine int) {
    for {
        select {
        case <-this._ShutdownWorkChannel:
            this._ShutdownWaitGroup.Done()
            return

        case poolWorker := <-this._WorkChannel:
            this._SafelyDoWork(workRoutine, poolWorker)
            break
        }
    }
}

func (this *WorkPool) _SafelyDoWork(workRoutine int, poolWorker PoolWorker) {
    defer _CatchPanic(nil, "_WorkRoutine", "workpool.WorkPool", "SafelyDoWork")

    defer func() {
        atomic.AddInt32(&this._ActiveRoutines, -1)
    }()

    atomic.AddInt32(&this._QueuedWork, -1)
    atomic.AddInt32(&this._ActiveRoutines, 1)

    poolWorker.DoWork(workRoutine)
}

The Go runtime takes care of assigning work to a Go routine in the pool by signaling the WorkChannel for a particular Go routine that is waiting. When the channel is signaled, the Go runtime passes the work that is at the head of the channel buffer. The channel buffer acts as a queue, FIFO.

If all the Go routines are busy, then none will be waiting on the WorkChannel, so all remaining work has to wait. As soon as a routine completes its work, it returns to wait again on the WorkChannel. If there is work in the channel buffer, the Go runtime will signal the Go routine to wake up again.

The code uses the SafelyDo pattern for processing work. At this point the code is calling into user code and could panic. You don't want anything to cause the Go routine to terminate. Notice the use of the first defer statement. It catches any panics and stops them in their tracks.

The rest of the code safely increments and decrements the counters and calls into the user routine via the Interface.

To learn more about catching panics read this blog post:

http://www.goinggo.net/2013/06/understanding-defer-panic-and-recover.html

That's the heart of the code and how it implements the pattern. The WorkPool really shows the elegance and grace of channels. With very little code I was able to implement a pool of Go routines to process work. Adding guaranteed queuing and maintaining the counters was a breeze.

Download the code from the GoingGo repository on Github and try it for yourself.

Iterating Over Slices In Go

Slices are used everywhere in my code. If I am working with data from MongoDB, it is stored in a slice. If I need to keep track of a collection of problems after running an operation, it is stored in a slice. If you don't understand how slices work yet or have been avoiding them like I did when I started, read these two posts to learn more.

http://www.goinggo.net/2013/08/understanding-slices-in-go-programming.html http://www.goinggo.net/2013/08/collections-of-unknown-length-in-go.html

A question that I am constantly asking myself when coding is, “Do I want to use a pointer to this object or do I want to make a copy?” Though Go can be used as a functional programming language, it is an imperative programming language at heart. What's the difference?

A functional programming language does not allow you to change the state of a variable or an object once it has been created and initialized. This means variables and objects are immutable, they can't be changed. If you want to change the state of a variable or an object, you must make a copy and initialize the copy with the changes. Functions are always passed copies and return values are always copies too.

In an imperative programming language we can create variables and objects that are mutable, or can be changed. We can pass a pointer for any variable or object to a function, which in turn can change the state as necessary. A functional programming language wants you to think in terms of mathematical functions that take input and produce a result. In an imperative programming language we can build similar functions, but we can also build functions that perform operations on state that can exist anywhere in memory.

Being able to use a pointer has advantages but can also get you in trouble. Using pointers can alleviate memory constraints and possibly improve performance. It can also create synchronization issues such as shared access to objects and resources. Find the solution that works best for each individual use case. For your Go programs I recommend using pointers when it is safe and practical. Go is an imperative programming language so take advantage of that.

In Go everything is pass by value and it is really important to remember that. We can pass by value the address of an object or pass by value a copy of an object. When we use a pointer in Go it can sometimes be confusing because Go handles all the dereferencing for us. Don't get me wrong, it's great that Go does this, but sometimes you can forget what the value of your variable actually is.


At some point in every program, I have the need to iterate over a slice to perform some work. In Go we use the keyword range within a for loop construct to iterate over a slice. In the beginning I made some very bad mistakes iterating over slices because I misunderstood how the range keyword worked. I will show you a nasty bug I created iterating over a slice that puzzled me for a bit. Now it is obvious to me why the code misbehaved, but at the time I was shaking my head.

Let's create some simple objects and place them inside of a slice. Then we will iterate over the slice and see what happens.

package main

import (
    "fmt"
)

type Dog struct {
    Name string
    Age  int
}

func main() {
    jackie := Dog{
        Name: "Jackie",
        Age:  19,
    }

    fmt.Printf("Jackie Addr: %p\n", &jackie)

    sammy := Dog{
        Name: "Sammy",
        Age:  10,
    }

    fmt.Printf("Sammy Addr: %p\n", &sammy)

    dogs := []Dog{jackie, sammy}

    fmt.Println("")

    for _, dog := range dogs {
        fmt.Printf("Name: %s Age: %d\n", dog.Name, dog.Age)
        fmt.Printf("Addr: %p\n", &dog)

        fmt.Println("")
    }
}

The program creates two dog objects and puts them into a slice of dogs. We display the address of each dog object. Then we iterate over the slice displaying the name, age and address of each Dog. Here is the output for the program:


Jackie Addr: 0x2101bc000
Sammy Addr: 0x2101bc040

Name: Jackie Age: 19
Addr: 0x2101bc060

Name: Sammy Age: 10
Addr: 0x2101bc060

So why is the address of the dog object different inside the range loop and why does the same address appear twice? This all has to do with the fact that everything is pass by value in Go. In this code example we actually create 2 extra copies of each Dog object in memory.

The initial existence of each Dog object is created with a composite literal:

jackie := Dog{
    Name: "Jackie",
    Age:  19,
}

The first copies of the objects are created when the objects are placed into the slice: 

dogs := []Dog{jackie, sammy}

The second copies of the objects are created when we iterate over the slice:

dog := range dogs

Now we can see why the address of the dog variable inside the range loop is always the same. We are displaying the address of the dog variable, which happens to be a local variable of type Dog that contains a copy of the Dog object for each index of the slice. With each iteration of the slice, the location of the dog variable is the same. The value of the dog variable is changing.

That nasty bug I was talking about earlier had to do with me thinking the address of the dog variable could be used as a pointer to each individual Dog object inside the slice. Something like this:


allDogs := []*Dog{}

for _, dog := range dogs {
    allDogs = append(allDogs, &dog)
}

for _, dog := range allDogs {
    fmt.Printf("Name: %s Age: %d\n", dog.Name, dog.Age)
}

I create a new slice that can hold pointers to Dog objects. Then I range over the slice of dogs storing the address of each Dog object into the new slice. Or at least I think I am storing the address of each Dog object.

If I add this code to the program and run it, this is the output:

Name: Sammy Age: 10
Name: Sammy Age: 10

I end up with a slice where every element has the same address. This address is pointing to a copy of the last object that we iterated over. Yikes!!

If making all these copies is not what you want, you could use pointers. Here is the example program using pointers:

package main

import (
    "fmt"
)

type Dog struct {
    Name string
    Age  int
}

func main() {
    jackie := &Dog{
        Name: "Jackie",
        Age:  19,
    }

    fmt.Printf("Jackie Addr: %p\n", jackie)

    sammy := &Dog{
        Name: "Sammy",
        Age:  10,
    }

    fmt.Printf("Sammy Addr: %p\n\n", sammy)

    dogs := []*Dog{jackie, sammy}

    for _, dog := range dogs {
        fmt.Printf("Name: %s Age: %d\n", dog.Name, dog.Age)
        fmt.Printf("Addr: %p\n\n", dog)
    }
}

Here is the output:

Jackie Addr: 0x2101bb000
Sammy Addr: 0x2101bb040

Name: Jackie Age: 19
Addr: 0x2101bb000

Name: Sammy Age: 10
Addr: 0x2101bb040

This time we create a slice of pointers to Dog objects. When we iterate over this slice, the value of the dog variable is the address of each Dog object we stored in the slice. Instead of creating two extra copies of each Dog object, we are using the same initial Dog object we created with the composite literal.

Whether the slice is a collection of Dog objects or a collection of pointers to Dog objects, the range loop is the same.

for _, dog := range dogs {
    fmt.Printf("Name: %s Age: %d\n", dog.Name, dog.Age)
}

Go handles access to the Dog object regardless of whether we are using a pointer or not. This is awesome but can sometimes lead to a bit of confusion. At least it was for me in the beginning.

I can't tell you when you should use a pointer or when you should use a copy. Just remember that Go is going to pass everything by value. That includes function parameters, return values and when iterating over a slice, map or channel.

Yes, you can also range over a channel. Take a look at this sample code I altered from a blog post written by Ewen Cheslack-Postava:

http://ewencp.org/blog/golang-iterators/

package main

import (
    "fmt"
)

type Dog struct {
    Name string
    Age  int
}

type DogCollection struct {
    Data []*Dog
}

func (this *DogCollection) Init() {
    cloey := &Dog{"Cloey", 1}
    ralph := &Dog{"Ralph", 5}
    jackie := &Dog{"Jackie", 10}
    bella := &Dog{"Bella", 2}
    jamie := &Dog{"Jamie", 6}

    this.Data = []*Dog{cloey, ralph, jackie, bella, jamie}
}

func (this *DogCollection) CollectionChannel() chan *Dog {
    dataChannel := make(chan *Dog, len(this.Data))

    for _, dog := range this.Data {
        dataChannel <- dog
    }

    close(dataChannel)

    return dataChannel
}

func main() {
    dc := DogCollection{}
    dc.Init()

    for dog := range dc.CollectionChannel() {
        fmt.Printf("Channel Name: %s\n", dog.Name)
    }
}

If you run the program you will get the following output:

Channel Name: Cloey
Channel Name: Ralph
Channel Name: Jackie
Channel Name: Bella
Channel Name: Jamie

I really love this sample code because it shows the beauty of a closed channel. The key to making this program work is the fact that a closed channel is always in the signaled state. That means any read on the channel will return immediately. If the channel is empty, the zero value for the channel's type is returned. This is what allows the range to iterate over all the data that was passed into the channel. Once the channel is closed and empty, the range loop detects this and terminates.

Slices are great, lightweight and powerful. You should be using them and gaining the benefits they provide. Just remember that when you are iterating over a slice, you are getting a copy of each element of the slice. If that happens to be an object, you are getting a copy of that object. Don’t ever use the address of the local variable in the range loop. That is a local variable that contains a copy of the slice element and only has local context. Don’t make the same mistake I made.

Recursion And Tail Calls In Go

This article was written for and published by Gopher Academy

I was looking at a code sample that showed a recursive function in Go and the writer was very quick to state how Go does not optimize for recursion, even if tail calls are explicit. I had no idea what a tail call was and I really wanted to understand what he meant by Go was not optimized for recursion. I didn't know recursion could be optimized.

For those who don't know what recursion is, put simply, it is when a function calls itself. Why would we ever write a function that would call itself? Recursion is great for algorithms that perform operations on data that can benefit from using a stack, FILO (First In Last Out). It can be faster than using loops and can make your code much simpler.

Performing math operations where the result of a calculation is used in the next calculation is a classic example where recursion shines. As with all recursion, you must have an anchor that eventually causes the function to stop calling itself and return. If not, you have an endless loop that eventually will cause a panic because you will run out of memory.

Why would you run out of memory? In a traditional C program, stack memory is used to handle all the coming and going of function calls. The stack is pre-allocated memory and very fast to use. Look at the following diagram:


This diagram depicts an example of a typical program stack and what it may look like for any program we write. As you can see, the stack is growing with each function call we make. Every time we call a function from another function, variables, registers and data are pushed onto the stack and it grows.

In a C program each thread is allocated with its own fixed amount of stack space. The default stack size can range from 1 Meg to 8 Meg depending on the architecture. You have the ability to change the default as well. If you are writing a program that spawns a very large number of threads, you can very quickly start eating up a ton of memory that you probably will never use.

In a Go program each Go routine is allocated its own stack space. However, Go is smarter about allocating space for each routine's stack. The stack for a Go routine starts out at 4k and grows as needed. The ability of Go to grow the stack dynamically comes from the concept of split stacks. To learn more about split stacks and how they work with the gcc compiler read this:

http://gcc.gnu.org/wiki/SplitStacks


You can always look at the code implemented for the Go runtime as well:

http://golang.org/src/pkg/runtime/stack.h
http://golang.org/src/pkg/runtime/stack.c

When we use recursion we need to be aware that the stack is going to grow until we finally hit our anchor and begin to shrink the stack back down. When we say that Go does not optimize for recursion, we are talking about the fact that Go does not attempt to look at our recursive functions and find ways to minimize stack growth. This is where tail calls come in.

Before we talk more about tail calls and how they can help optimize recursive functions, let's begin with a simple recursive function:

func Recursive(number int) int {
    if number == 1 {
        return number
    }

    return number + Recursive(number-1)
}

func main() {
    answer := Recursive(4)
    fmt.Printf("Recursive: %d\n", answer)
}

This Recursive function takes an integer as a parameter and returns an integer. If the value of the number variable is one, then the function returns the value out. This if statement contains the anchor and starts the process of unwinding the stack to complete the work.


When the value of the number variable is not the number one, a recursive call is made. The function decrements the number variable by one and uses that value as the parameter for the next function call. With each function call the stack grows. Once the anchor is hit, each recursive call begins to return until we get back to main.

Let's look at a view of all the function calls and return values for the program:

Starting from the left side and from bottom to top we can see the call chain for the program.

Main calls Recursive with a value of 4. Then Recursive calls itself with a value of 3. This continues to happen until the value of 1 is passed into the Recursive function call.

The function calls itself 3 times before it reaches the anchor. By the time the anchor is reached, there are 3 extended stack frames, one for each call.

Then the recursion begins to unwind and the real work begins. On the right side and from top to bottom we can see the unwind operations.

Each return operation is now executed by taking the parameter and adding it to the return value from the function call.

Eventually the last return is executed and we have the final answer, which is 10. The function performs this operation very quickly and that is one of the benefits of recursion. We don't need any iterators or index counters for looping. The stack stores the result of each operation and returns it to the previous call. Again, the only drawback is that we need to be careful of how much memory we are consuming.

What is a tail call and how can it help optimize recursive functions? Constructing a recursive function with a tail call tries to gain the benefits of recursion without the drawbacks of consuming large amounts of stack memory.

Here is the same recursive function implemented with a tail call:

func TailRecursive(number int, product int) int {
    product = product + number

    if number == 1 {
        return product
    }

    return TailRecursive(number-1, product)
}

func main() {
    answer := TailRecursive(4, 0)
    fmt.Printf("Recursive: %d\n", answer)
}

Can you see the difference in the implementation? It has to do with how we are using the stack and calculating the result. In this implementation the anchor produces the final result. We don't require any return values from the stack except the final return value by the anchor which contains the answer.

Some compilers are able to see this nuance and change the underlying assembly that is produced to use one stack frame for all the recursive calls. The Go compiler is not able to detect this nuance yet. To prove that let's look at the assembly code that is produced by the Go compiler for both these functions.

To produce a file with the assembly code, run this command from a Terminal session:

go tool 6g -S ./main.go > assembly.asm


There are three compilers depending on your machine architecture.

6g: AMD64 Architecture: This is for modern 64 bit processors regardless of whether the processor is built by Intel or AMD. AMD developed the 64 bit extension to the x86 architecture.

8g: x86 Architecture: This is for 32 bit processors based on the 8086 architecture.

5g: ARM Architecture: This is for ARM processors, which use a RISC (Reduced Instruction Set Computing) design.

To learn more about this and other go tool commands look at this page:

http://golang.org/cmd/gc/

I listed the Go code and the assembly code together. Just one item of note to help you.

In order for the processor to be able to perform an operation on data, such as adding or comparing two numbers, the data must exist in one of the processor registers. Think of registers as processor variables.

When you look at the assembly below it helps to know that AX and BX are general purpose registers and used all the time. The SP register is the stack pointer and the FP register is the frame pointer, which also has to do with the stack.

Now let's look at the code:

07 func Recursive(number int) int {
08
09     if number == 1 {
10
11         return number
12     }
13
14     return number + Recursive(number-1)
15 }

--- prog list "Recursive" ---
0000 (./main.go:7) TEXT Recursive+0(SB),$16-16
0001 (./main.go:7) MOVQ number+0(FP),AX
0002 (./main.go:7) LOCALS ,$0
0003 (./main.go:7) TYPE number+0(FP){int},$8
0004 (./main.go:7) TYPE ~anon1+8(FP){int},$8
0005 (./main.go:9) CMPQ AX,$1
0006 (./main.go:9) JNE ,9
0007 (./main.go:11) MOVQ AX,~anon1+8(FP)
0008 (./main.go:11) RET ,
0009 (./main.go:14) MOVQ AX,BX
0010 (./main.go:14) DECQ ,BX
0011 (./main.go:14) MOVQ BX,(SP)
0012 (./main.go:14) CALL ,Recursive+0(SB)
0013 (./main.go:14) MOVQ 8(SP),AX
0014 (./main.go:14) MOVQ number+0(FP),BX
0015 (./main.go:14) ADDQ AX,BX
0016 (./main.go:14) MOVQ BX,~anon1+8(FP)
0017 (./main.go:14) RET ,

If we follow along with the assembly code you can see all the places the stack is touched:

0001: The AX register is given the value from the stack that was passed in for the number variable.

0005-0006: The value of the number variable is compared with the number 1. If they are not equal, then the code jumps to line 14 in the Go code.

0007-0008: The anchor is hit and the value of the number variable is copied onto the stack and the function returns.

0009-0010: The number variable is subtracted by one.

0011-0012: The value of the number variable is pushed onto the stack and the recursive function call is performed.

0013-0015: The function returns. The return value is popped from the stack and placed in the AX register. Then the value for the number variable is copied from the stack frame and placed in the BX register. Finally they are added together.

0016-0017: The result of the add is copied onto the stack and the function returns.

What the assembly code shows is that we have the recursive call being made and that values are being pushed and popped from the stack as expected. The stack is growing and then being unwound.

Now let's generate the assembly code for the recursive function that contains the tail call and see if the Go compiler optimizes anything.

17 func TailRecursive(number int, product int) int {
18
19     product = product + number
20
21     if number == 1 {
22
23         return product
24     }
25
26     return TailRecursive(number-1, product)
27 }

--- prog list "TailRecursive" ---
0018 (./main.go:17) TEXT TailRecursive+0(SB),$24-24
0019 (./main.go:17) MOVQ number+0(FP),CX
0020 (./main.go:17) LOCALS ,$0
0021 (./main.go:17) TYPE number+0(FP){int},$8
0022 (./main.go:17) TYPE product+8(FP){int},$8
0023 (./main.go:17) TYPE ~anon2+16(FP){int},$8
0024 (./main.go:19) MOVQ product+8(FP),AX
0025 (./main.go:19) ADDQ CX,AX
0026 (./main.go:21) CMPQ CX,$1
0027 (./main.go:21) JNE ,30
0028 (./main.go:23) MOVQ AX,~anon2+16(FP)
0029 (./main.go:23) RET ,
0030 (./main.go:26) MOVQ CX,BX
0031 (./main.go:26) DECQ ,BX
0032 (./main.go:26) MOVQ BX,(SP)
0033 (./main.go:26) MOVQ AX,8(SP)
0034 (./main.go:26) CALL ,TailRecursive+0(SB)
0035 (./main.go:26) MOVQ 16(SP),BX
0036 (./main.go:26) MOVQ BX,~anon2+16(FP)
0037 (./main.go:26) RET ,

There is a bit more assembly code with the TailRecursive function. However the result is very much the same. In fact, from a performance perspective we have made things a bit worse.

Nothing has been optimized for the tail call we implemented. We still have all the same stack manipulation and recursive calls being made. So I guess it is true that Go currently does not optimize for recursion. This does not mean we shouldn't use recursion, just be aware of all the things we learned.

If you have a problem that could best be solved by recursion but are afraid of blowing out memory, you can always use a channel. Mind you this will be significantly slower but it will work.

Here is how you could implement the Recursive function using channels:


func RecursiveChannel(number int, product int, result chan int) {
    product = product + number

    if number == 1 {
        result <- product
        return
    }

    go RecursiveChannel(number-1, product, result)
}

func main() {
    result := make(chan int)

    RecursiveChannel(4, 0, result)
    answer := <-result

    fmt.Printf("Recursive: %d\n", answer)
}

It follows along with the tail call implementation. Once the anchor is hit it contains the final answer and the answer is placed into the channel. Instead of making a recursive call, we spawn a Go routine providing the same state we were pushing onto the stack in the tail call example.

The one difference is we pass an unbuffered channel to the Go routine. Only the anchor writes data to the channel and returns without spawning another Go routine.

In main an unbuffered channel is created and the RecursiveChannel function is called with the initial parameters and the channel. The function returns immediately but main does not terminate. This is because it waits for data to be written to the channel. Once the anchor is hit and writes the answer to the channel, main wakes up with the result and it is printed to the screen. In most cases main will wake before the Go routine terminates.

Recursion is another tool you can use when writing your Go programs. For now the Go compiler will not optimize the code for tail calls, but there is nothing stopping future versions of Go from doing so. If memory could be a problem, you can always use a channel to mimic recursion.

Detecting Race Conditions With Go

I always find it interesting when I realize that something I have been practicing or dealing with for a long time has a name. This time it happens to be race conditions. This is something you can't avoid thinking about as soon as you have more than one routine sharing any kind of resource. If you're not thinking about race conditions in your code, now is the time.


A race condition is when two or more routines have access to the same resource, such as a variable or data structure, and attempt to read and write to that resource without any regard to the other routines. This type of code can create the craziest and most random bugs you have ever seen. It usually takes a tremendous amount of logging and luck to find these types of bugs. Over the years I have really perfected my logging skills to help identify these problems when they occur.

Back in June with Go version 1.1, the Go tooling introduced a race detector. The race detector is code that is built into your program during the build process. Then once your program is running, it is able to detect and report any race conditions it finds. It is seriously cool and does an incredible job in identifying the code that is the culprit.

Let's take a very simple program that contains a race condition and build the code with the race detector.

package main

import (
    "fmt"
    "sync"
)

var Wait sync.WaitGroup
var Counter int = 0

func main() {
    for routine := 1; routine <= 2; routine++ {
        Wait.Add(1)
        go Routine(routine)
    }

    Wait.Wait()
    fmt.Printf("Final Counter: %d\n", Counter)
}

func Routine(id int) {
    for count := 0; count < 2; count++ {
        value := Counter
        value++
        Counter = value
    }

    Wait.Done()
}

The program looks innocent enough. It spawns two routines that each increment the global Counter variable twice. When both routines are done running, the program displays the value of the global Counter variable. When I run the program it displays the number 4, which is the correct answer. So everything must be working correctly, right?

Let's run the code through the Go race detector and see what it finds. Open a Terminal session where the source code is located and build the code using the -race option.

go build -race

Then run the program:

==================
WARNING: DATA RACE
Read by goroutine 5:
  main.Routine()
      /Users/bill/Spaces/Test/src/test/main.go:29 +0x44
  gosched0()
      /usr/local/go/src/pkg/runtime/proc.c:1218 +0x9f

Previous write by goroutine 4:
  main.Routine()
      /Users/bill/Spaces/Test/src/test/main.go:33 +0x65
  gosched0()
      /usr/local/go/src/pkg/runtime/proc.c:1218 +0x9f

Goroutine 5 (running) created at:
  main.main()
      /Users/bill/Spaces/Test/src/test/main.go:17 +0x66
  runtime.main()
      /usr/local/go/src/pkg/runtime/proc.c:182 +0x91

Goroutine 4 (finished) created at:
  main.main()
      /Users/bill/Spaces/Test/src/test/main.go:17 +0x66
  runtime.main()
      /usr/local/go/src/pkg/runtime/proc.c:182 +0x91

==================
Final Counter: 4
Found 1 data race(s)

Looks like the tool detected a race condition with the code. If you look below the race condition report, you can see the output for the program. The value of the global Counter variable is 4. This is the problem with these types of bugs, the code could work most of the time and then randomly something bad happens. The race detector is telling us something bad is lurking in the trees.

The result of the warning tells us exactly where the problem is:

Read by goroutine 5:
  main.Routine()
      /Users/bill/Spaces/Test/src/test/main.go:29 +0x44
  gosched0()
      /usr/local/go/src/pkg/runtime/proc.c:1218 +0x9f

        value := Counter

Previous write by goroutine 4:
  main.Routine()
      /Users/bill/Spaces/Test/src/test/main.go:33 +0x65
  gosched0()
      /usr/local/go/src/pkg/runtime/proc.c:1218 +0x9f

        Counter = value

Goroutine 5 (running) created at:
  main.main()
      /Users/bill/Spaces/Test/src/test/main.go:17 +0x66
  runtime.main()
      /usr/local/go/src/pkg/runtime/proc.c:182 +0x91

        go Routine(routine)

You can see that the race detector has pulled out the two lines of code that are reading and writing to the global Counter variable. It also identified the point in the code where the routine was spawned.

Let's make a quick change to the program to cause the race condition to rear its ugly head:

package main

import (
    "fmt"
    "sync"
    "time"
)

var Wait sync.WaitGroup
var Counter int = 0

func main() {
    for routine := 1; routine <= 2; routine++ {
        Wait.Add(1)
        go Routine(routine)
    }

    Wait.Wait()
    fmt.Printf("Final Counter: %d\n", Counter)
}

func Routine(id int) {
    for count := 0; count < 2; count++ {
        value := Counter
        time.Sleep(1 * time.Nanosecond)
        value++
        Counter = value
    }

    Wait.Done()
}

I have added a billionth of a second pause into the loop. I put the pause right after the routine reads the global Counter variable and stores a local copy. Let's run the program and see what the value of the global Counter variable is with this simple change:

Final Counter: 2

This pause in the loop has caused the program to fail. The value of the Counter variable is now 2 and no longer 4. So what happened? Let's break down the code and understand why the billionth of a second pause revealed the bug.

Without the pause the program runs as follows:

[Diagram: without the pause, each routine runs its loop to completion before the other starts]

Without the pause the first routine that is spawned runs to completion and then the second routine begins to run. This is why the program appears to be running properly. The code is serializing itself because of how fast it is able to run on my machine.

Let's look at how the program runs with the pause:

[Diagram: the pause causes a context switch after each routine reads the Counter variable]

I didn't complete the diagram to save space, but it shows enough. The pause is causing a context switch between the two routines that are running. This time we have a much different story. Let's look at the code that is being run in the diagram:

value := Counter
time.Sleep(1 * time.Nanosecond)
value++
Counter = value

With each iteration of the loop, the value of the global Counter variable is captured locally, then the local copy is incremented and finally written back to the global Counter variable. If these three lines of code do not run to completion, without interruption, we begin to have problems. The diagram shows how the read of the global Counter variable followed by the context switch is causing all of the initial problems.

In the diagram, before the incremented value by Routine 1 is written back to the global Counter variable, Routine 2 wakes up and reads the global Counter variable. Essentially both routines perform the same exact reads and writes to the global Counter variable so we end up with a final value of 2.

To fix this problem you might think we just need to reduce the incrementing of the global Counter variable from three lines of code to one line of code:

package main

import (
    "fmt"
    "sync"
    "time"
)

var Wait sync.WaitGroup
var Counter int = 0

func main() {
    for routine := 1; routine <= 2; routine++ {
        Wait.Add(1)
        go Routine(routine)
    }

    Wait.Wait()
    fmt.Printf("Final Counter: %d\n", Counter)
}

func Routine(id int) {
    for count := 0; count < 2; count++ {
        Counter = Counter + 1
        time.Sleep(1 * time.Nanosecond)
    }

    Wait.Done()
}

When we run this version of the program we get the right answer again:

Final Counter: 4

If we run this code through the race detector our problems should go away:


go build -race

And the output:

==================
WARNING: DATA RACE
Write by goroutine 5:
  main.Routine()
      /Users/bill/Spaces/Test/src/test/main.go:30 +0x44
  gosched0()
      /usr/local/go/src/pkg/runtime/proc.c:1218 +0x9f

Previous write by goroutine 4:
  main.Routine()
      /Users/bill/Spaces/Test/src/test/main.go:30 +0x44
  gosched0()
      /usr/local/go/src/pkg/runtime/proc.c:1218 +0x9f

Goroutine 5 (running) created at:
  main.main()
      /Users/bill/Spaces/Test/src/test/main.go:18 +0x66
  runtime.main()
      /usr/local/go/src/pkg/runtime/proc.c:182 +0x91

Goroutine 4 (running) created at:
  main.main()
      /Users/bill/Spaces/Test/src/test/main.go:18 +0x66
  runtime.main()
      /usr/local/go/src/pkg/runtime/proc.c:182 +0x91
==================
Final Counter: 4
Found 1 data race(s)

We still have a race condition with line 30 of the program:

Write by goroutine 5:
  main.Routine()
      /Users/bill/Spaces/Test/src/test/main.go:30 +0x44
  gosched0()
      /usr/local/go/src/pkg/runtime/proc.c:1218 +0x9f

        Counter = Counter + 1

Previous write by goroutine 4:
  main.Routine()
      /Users/bill/Spaces/Test/src/test/main.go:30 +0x44
  gosched0()
      /usr/local/go/src/pkg/runtime/proc.c:1218 +0x9f

        Counter = Counter + 1

Goroutine 5 (running) created at:
  main.main()
      /Users/bill/Spaces/Test/src/test/main.go:18 +0x66
  runtime.main()
      /usr/local/go/src/pkg/runtime/proc.c:182 +0x91

        go Routine(routine)

The program runs correctly using one line of code to perform the increment. So why do we still have a race condition? Don't be deceived by the one line of Go code we have for incrementing the counter. Let's look at the assembly code generated for that one line of code:

0064 (./main.go:30) MOVQ Counter+0(SB),BX ; Copy the value of Counter to BX
0065 (./main.go:30) INCQ ,BX              ; Increment the value of BX
0066 (./main.go:30) MOVQ BX,Counter+0(SB) ; Move the new value to Counter

There are actually three lines of assembly code being executed to increment the counter. These three lines of assembly code eerily look like the original Go code. There could be a context switch after any of these three lines of assembly code. Even though the program is working now, technically the bug still exists.

Even though the example I am using is simple, it shows you how complex finding these bugs can be. Any line of assembly code produced by the Go compiler can be paused for a context switch. Our Go code may look like it is safely accessing resources when actually the underlying assembly code is not safe at all.

To fix this program we need to guarantee that reading and writing to the global Counter variable always happens to completion before any other routine can access the variable. Channels are a great way to serialize access to resources. In this case I will use a Mutex (Mutual Exclusion Lock).

package main

import (
    "fmt"
    "sync"
    "time"
)

var Wait sync.WaitGroup
var Counter int = 0
var Lock sync.Mutex

func main() {
    for routine := 1; routine <= 2; routine++ {
        Wait.Add(1)
        go Routine(routine)
    }

    Wait.Wait()
    fmt.Printf("Final Counter: %d\n", Counter)
}

func Routine(id int) {
    for count := 0; count < 2; count++ {
        Lock.Lock()

        value := Counter
        time.Sleep(1 * time.Nanosecond)
        value++
        Counter = value

        Lock.Unlock()
    }

    Wait.Done()
}

Let's build the program with the race detector and see the result:

go build -race
./test

Final Counter: 4

This time we get the right answer and no race condition is identified. The program is clean. The Mutex protects all the code between the Lock and Unlock, making sure only one routine can execute that code at a time.

To learn more about the Go race detector and to see more examples read this post:

http://blog.golang.org/race-detector

It's not a bad idea to test your programs with the race detector on if you are using multiple routines. It will save you a lot of time and headaches early on in your unit and quality assurance testing. We are lucky as Go developers to have such a tool so check it out.

Cross Compile Your Go Programs

Introduction

In my post about building and running programs in Iron.io, I needed to switch over to my Ubuntu VM to build linux versions of my test programs locally. I love having Ubuntu available to me for building and testing my code. However, if I can stay on the Mac side it is better.

I have wanted to learn how to cross compile my Go programs for the two platforms I use, darwin/amd64 and linux/amd64. This way I could create final builds of my programs and publish everything from my Mac OS. After a couple of hours I am finally able to do this.

If you don't have the need to cross compile your code then I recommend you stick with the traditional distribution packages and installs for Go. If this is something you need, then it all starts with downloading the current release of the Go source code.

Installing Mercurial

The Go source code is stored in Mercurial, a DVCS, and is located on code.google.com. The first thing you need to do is install Mercurial if you don't already have it.

Go to the download page on the Mercurial website:  http://mercurial.selenic.com/downloads

Since I am running on a Mac with OSX 10.8, I downloaded that version. The installer is straightforward, and when it is done you will have the Mercurial tool hg installed and ready to go.

Cloning Go Source Code

http://golang.org/doc/install/source

Open up a Terminal session and go to your $HOME directory and clone the current release of the Go source code:

cd $HOME
hg clone -u release https://code.google.com/p/go

If everything works correctly you should see the following output or something similar:

warning: code.google.com certificate with fingerprint 54:a7:34:39:1b:2a:ec:b8:92:68:dc:3a:3e:fe:2b:d3:91:ed:23:1f not verified (check hostfingerprints or web.cacerts config setting)
destination directory: go
requesting all changes
adding changesets
adding manifests
adding file changes
added 18252 changesets with 63493 changes to 8325 files (+6 heads)
updating to branch release-branch.go1.1
3755 files updated, 0 files merged, 0 files removed, 0 files unresolved

Once that is done you now have a folder called go inside of the $HOME directory with the source code under src. Later on when you need to update this repository, you can go to the go folder and run the following command:

hg update

Before you build the Go code, be aware that if your current user is not the owner of all the directories and files of the Go code, the build will fail with permission issues. If this happens you can use the chown program to fix the ownership.

Command:
sudo chown -R user_name:group_name <file/folder to change>

Example:
sudo chown -R bill:staff bufio.go

Building Go For Each Target

Now you need to build the Go code for the targets you need. For now just build darwin/amd64 and linux/amd64.

First build the Go code for the darwin/amd64 target which is for the host machine:

cd go/src
GOOS=darwin GOARCH=amd64 CGO_ENABLED=1 ./make.bash --no-clean

If everything builds correctly you should see the build end like this:

---
Installed Go for darwin/amd64 in /Users/bill/go
Installed commands in /Users/bill/go/bin

If you get the following error on your Mac there is a fix:

# crypto/x509
root_darwin.go:9:43: error: CoreFoundation/CoreFoundation.h: No such file or directory
root_darwin.go:10:31: error: Security/Security.h: No such file or directory
mime/multipart
net/mail

This means you don't have the Command Line Tools for Xcode installed on your machine. Open Xcode and go to Preferences -> Downloads:


Click the Install button for the Command Line Tools. Once that is done try building the Go code again.

Once the build is successful, open the Go folder. You should see everything you need for building Go programs on the Mac 64 bit environment:

You have the go, godoc and gofmt tools and all the package related libraries and tools. Next you need to fix your PATH to point to the bin folder so you can start using the tools.

export PATH=$HOME/go/bin:$PATH

You may want to set that in your .bashrc or .bash_profile file inside the $HOME directory.

With the path set, check that Go is working. Check the version and the environment:

go version
go version go1.2.1 darwin/amd64

go env
GOARCH="amd64"
GOBIN=""
GOCHAR="6"
GOEXE=""
GOHOSTARCH="amd64"
GOHOSTOS="darwin"
GOOS="linux"
GOPATH=""
GORACE=""
GOROOT="/Users/bill/go"
GOTOOLDIR="/Users/bill/go/pkg/tool/darwin_amd64"
CC="gcc"
GOGCCFLAGS="-g -O2 -fPIC -m64"
CGO_ENABLED="1"

Everything looks good. Now build a version of Go that will let you build linux/amd64 binaries:

GOOS=linux GOARCH=amd64 CGO_ENABLED=0 ./make.bash --no-clean

You can't use cgo when cross compiling, so make sure CGO_ENABLED is set to 0.

If everything builds correctly you should see the build end like this:

---
Installed Go for linux/amd64 in /Users/bill/go
Installed commands in /Users/bill/go/bin

If you look at the Go folder again you should see some new folders for linux/amd64:

Now it is time to test if you can build Go programs for both the Mac and Linux operating systems. Set up a quick GOPATH and Go program. In Terminal run the following commands:

cd $HOME
mkdir example
mkdir example/src
mkdir example/src/simple
export GOPATH=$HOME/example
cd example/src/simple

Create a file called main.go inside of the simple folder with this code:

package main

import (
    "fmt"
)

func main() {
    fmt.Printf("Hello Gophers\n")
}

First build the Mac version and make sure it is a Mac executable using the file command:

go build
file simple

simple: Mach-O 64-bit executable x86_64

The file command tells us what type of file our program is. It is certainly a Mac executable file.

Now build the code for linux/amd64:

export GOARCH="amd64"
export GOOS="linux"

go build
file simple

simple: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), statically linked, not stripped

You need to change either one or both of the OS/ARCH environment variables to point to the target platform and architecture. Then you can build the code. After the build you can see the file command is reporting the program is a linux executable.

As a reference, here are the different platforms and architectures you can build for cross compilation:

$GOOS     $GOARCH
darwin    386      -- 32 bit MacOSX
darwin    amd64    -- 64 bit MacOSX
freebsd   386
freebsd   amd64
linux     386      -- 32 bit Linux
linux     amd64    -- 64 bit Linux
linux     arm      -- RISC Linux
netbsd    386
netbsd    amd64
openbsd   386
openbsd   amd64
plan9     386
windows   386      -- 32 bit Windows
windows   amd64    -- 64 bit Windows


Installing Godoc and Vet

Once you finish building Go for the targets you want, you will want to install godoc and vet for the default target. These tools will get built and installed in your GOROOT.

go get code.google.com/p/go.tools/cmd/godoc
go get code.google.com/p/go.tools/cmd/vet

Conclusion

This is the documentation for building Go from source:

http://golang.org/doc/install/source

This is a document written by Dave Cheney about cross compilation. He has built a script that you can download, and it makes all of this really simple to perform:

http://dave.cheney.net/2013/07/09/an-introduction-to-cross-compilation-with-go-1-1

Mitchell Hashimoto built a great tool called gox. This tool makes it really easy to build your program for all the different targets without needing to manually change the GOARCH and GOOS environment variables.

Functions and Naked Returns In Go

In Go, values returned from functions are passed by value. Go gives you some nice flexibility when it comes to returning values from a function.

Here is a simple example of returning two values from a function:

package main

import (
   "fmt"
)

func main() {
   id, err := ReturnId()

   if err != nil {
      fmt.Printf("ERROR: %s", err)
      return
   }

   fmt.Printf("Id: %d\n", id)
}

func ReturnId() (int, error) {
   id := 10
   return id, nil
}


The function ReturnId returns a value of type integer and a value of type error. This is something very common in Go. Error handling is performed by returning a value of type error from your functions and having the calling function evaluate that value before continuing.

If you don't care about the error for some reason after a function call returns, you can do something like this:

   id, _ := ReturnId()

This time I used an underscore to represent the second return argument, which was the error. This is really nice because I don't need to declare a variable to hold the value being returned; I can simply ignore it.

You also have the option to name your return arguments:

func ReturnId() (id int, err error) {
   id = 10
   return id, err
}

If you name your return arguments you are creating local variables just like with your function parameters. This time when I set the id variable, I remove the colon (:) from the short variable declaration and convert it to an assignment operation. Then in the return I specify the return variables as normal.

Naming your return arguments is a nice way to document what you are returning. There is also something else that you can do with your named arguments, or not do:

func ReturnId() (id int, err error) {
   id = 10
   return
}

This is what is called a naked return. I have removed the arguments from the return statement. The Go compiler automatically returns the current values in the return arguments local variables. Though this is really cool you need to watch for shadowing:

func ReturnId() (id int, err error) {
   id = 10

   if id == 10 {
      err := fmt.Errorf("Invalid Id\n")
      return
   }

   return
}

If you try to compile this you will get the following compiler error:


err is shadowed during return

To understand why this error exists, you need to understand what curly brackets do inside of a function. Each set of curly brackets defines a new level of scope. Take this code for example:

func main() {
   id := 10
   id := 20

   fmt.Printf("Id: %d\n", id)
}

If you try to compile this code you get the following error:

no new variables on left side of :=

This makes sense because you are trying to declare the same variable name twice. The error goes away if we change the code to look like this:

func main() {
   id := 10

   {
      id := 20
      fmt.Printf("Id: %d\n", id)
   }

   fmt.Printf("Id: %d\n", id)
}

The curly brackets define a new level of scope. The variable name can be reused inside the new set of curly brackets. When the code reaches the closing curly bracket, the inner variable goes out of scope.

Look again at the code that caused the shadowing error:

func ReturnId() (id int, err error) {
   id = 10

   if id == 10 {
      err := fmt.Errorf("Invalid Id\n")
      return
   }

   return
}

Inside the if statement we are creating a new variable called err. We are not using the err variable declared as the function return argument. The compiler recognizes this and produces the error. If the compiler did not report this error, you would never see the error that occurred inside the if statement: the err return variable, which is still nil, is what would be passed back by default.


Naming your return arguments comes in really handy when using a defer statement:

func ReturnId() (id int, err error) {
   defer func() {
      if id == 10 {
         err = fmt.Errorf("Invalid Id\n")
      }
   }()

   id = 10

   return
}

Because the return arguments are named, you can reference them in the defer function. You can even change the value of the return arguments inside the defer call and the calling function will see the new values. This version will display the error message.

You need to be aware that the parameters of a deferred function are evaluated inline, at the point the defer statement is executed:

func ReturnId() (id int, err error) {
   defer func(id int) {
      if id == 10 {
         err = fmt.Errorf("Invalid Id\n")
      }
   }(id)

   id = 10

   return
}

This version does not display the error message. The value of id is not 10 until after the defer statement is evaluated.

Sometimes it makes sense to use named return arguments, such as when using a defer statement at the top of your function. If you are passing raw values out of your function, then something like this does not make sense:

package main

import (
   "fmt"
)

func main() {
   ans := AddNumbers(10, 12)
   fmt.Printf("Answer: %d\n", ans)
}


func AddNumbers(a int, b int) (result int) {
   return a + b
}

The return argument is named for the AddNumbers function but never used. Instead, we return the result of the operation directly in the return statement. This shows that you can still return any value you want, even if you name the return arguments.

I asked the Go community for their opinions about using named arguments and naked returns:

https://plus.google.com/107537752159279043170/posts/8hMjHhmyNk2

I got a very good mix of opinions that I think everyone should read. Go gives you a lot of flexibility and choice when building your functions. Don't look for a single common practice for everything. Take each function individually and implement the best construct for that use case.

My Channel Select Bug

I was testing new functionality on a program that is already running in production when suddenly the code behaved very badly. What I saw shocked me and then it became obvious why it happened. I also have a race condition just waiting to be a problem.

I have tried to provide a simplified version of the code and the two bugs.

package main

import (
    "fmt"
    "os"
    "os/signal"
    "time"
)

var Shutdown bool = false

func main() {
    sigChan := make(chan os.Signal, 1)
    signal.Notify(sigChan, os.Interrupt)

    for {
        select {
        case <-sigChan:
            Shutdown = true
            continue

        case <-func() chan struct{} {
            complete := make(chan struct{})
            go LaunchProcessor(complete)
            return complete
        }():
            return
        }
    }
}

func LaunchProcessor(complete chan struct{}) {
    defer func() {
        close(complete)
    }()

    fmt.Printf("Start Work\n")

    for count := 0; count < 5; count++ {
        fmt.Printf("Doing Work\n")
        time.Sleep(1 * time.Second)

        if Shutdown == true {
            fmt.Printf("Kill Early\n")
            return
        }
    }

    fmt.Printf("End Work\n")
}

The idea behind this code is to run a task and terminate. It allows the operating system to request the program to terminate early. I always like shutting down the program cleanly when possible.

The sample code creates a channel that is bound to an operating system signal and looks for <ctrl> C from the terminal window. If <ctrl> C is issued, the Shutdown flag is set to true and the program continues back into the select statement. The code also spawns a Go routine that performs the work. That routine checks the Shutdown flag to determine if the program needs to terminate early.

Bug Number 1

Take a look at this part of the code:

case <-func() chan struct{} {
    complete := make(chan struct{})
    go LaunchProcessor(complete)
    return complete
}():

I thought I was being so clever when I wrote this code. I thought it would be cool to execute a function on the fly to spawn the Go routine. It returns a channel that the select waits on to be told the work is complete. When the Go routine is done it closes the channel and the program terminates.

Let's run the program:

Start Work
Doing Work
Doing Work
Doing Work
Doing Work
Doing Work
End Work

As expected the program starts and spawns the Go routine. Once the Go routine is complete the program terminates.

This time I will hit <ctrl> C while the program is running:

Start Work
Doing Work
^C
Start Work
Doing Work
Kill Early
Kill Early

When I hit <ctrl> C the program launched the Go routine again!!

I thought that the function associated with the case would only be executed once. Then the select would just wait on the channel moving forward. I had no idea that the function would be executed every time the loop iterated back to the select statement.

To fix the code I needed to move the function out of the select statement and spawn the Go routine before entering the loop:

func main() {
    sigChan := make(chan os.Signal, 1)
    signal.Notify(sigChan, os.Interrupt)

    complete := make(chan struct{})
    go LaunchProcessor(complete)

    for {
        select {
        case <-sigChan:
            Shutdown = true
            continue

        case <-complete:
            return
        }
    }
}

Now when we run the program we get a better result:

Start Work
Doing Work
Doing Work
^C
Kill Early

This time when I hit <ctrl> C, the program terminates early and doesn't spawn another Go routine.

Bug Number 2

There is a second less obvious bug lurking in the code as well. Take a look at these pieces of code:

var Shutdown bool = false

if whatSig == syscall.SIGINT {
    Shutdown = true
}

if Shutdown == true {
    fmt.Printf("Kill Early\n")
    return
}

The code is using a package level variable to signal the running Go routine to shut down when <ctrl> C is hit. The code is working every time I hit <ctrl> C so why is there a bug?

First let's run the race detector against the code:

go build -race
./test

While it is running I hit <ctrl> C again:

Start Work
Doing Work
^C
==================
WARNING: DATA RACE
Read by goroutine 5:
    main.LaunchProcessor()
        /Users/bill/Spaces/Test/src/test/main.go:46 +0x10b
    gosched0()
        /Users/bill/go/src/pkg/runtime/proc.c:1218 +0x9f

Previous write by goroutine 1:
    main.main()
        /Users/bill/Spaces/Test/src/test/main.go:25 +0x136
    runtime.main()
        /Users/bill/go/src/pkg/runtime/proc.c:182 +0x91

Goroutine 5 (running) created at:
    main.main()
        /Users/bill/Spaces/Test/src/test/main.go:18 +0x8f
    runtime.main()
        /Users/bill/go/src/pkg/runtime/proc.c:182 +0x91

Goroutine 1 (running) created at:
    _rt0_amd64()
        /Users/bill/go/src/pkg/runtime/asm_amd64.s:87 +0x106
==================
Kill Early
Found 1 data race(s)

My use of the Shutdown flag comes up on the race detector. This is because I have two Go routines trying to access the variable in an unsafe way.

My initial reason for not securing access to the variable was practical but wrong. I figured that since the variable is only used to shut down the program when it becomes necessary, I didn't care about a dirty read. If a dirty read occurred in the brief window between the write to the variable and the read, I would catch it again on the next loop iteration. No harm done, right? Why add complicated channel or locking code for something like this?

Well, there is a little thing called the Go Memory Model.

http://golang.org/ref/mem

The Go Memory Model does not guarantee that the Go routine reading the Shutdown variable will ever see the write by the main routine. It is valid for the write to the Shutdown variable by the main routine to never be written back to main memory. This is because the main routine never reads the Shutdown variable.

This is not happening today, but as the Go compiler becomes more sophisticated it could decide to eliminate the write to the Shutdown variable altogether. This behavior is allowed by the Go Memory Model. Also, we don't want code that can't pass the race detector; that is just bad practice, even for practical reasons.

Here is a final version of the code with all bugs fixed:

package main

import (
    "fmt"
    "os"
    "os/signal"
    "sync/atomic"
    "time"
)

var Shutdown int32 = 0

func main() {
    sigChan := make(chan os.Signal, 1)
    signal.Notify(sigChan, os.Interrupt)

    complete := make(chan struct{})
    go LaunchProcessor(complete)

    for {
        select {
        case <-sigChan:
            atomic.StoreInt32(&Shutdown, 1)
            continue

        case <-complete:
            return
        }
    }
}

func LaunchProcessor(complete chan struct{}) {
    defer func() {
        close(complete)
    }()

    fmt.Printf("Start Work\n")

    for count := 0; count < 5; count++ {
        fmt.Printf("Doing Work\n")
        time.Sleep(1 * time.Second)

        if atomic.LoadInt32(&Shutdown) == 1 {
            fmt.Printf("Kill Early\n")
            return
        }
    }

    fmt.Printf("End Work\n")
}

I prefer to use an if statement to check if the Shutdown flag is set so I can sprinkle that code as needed. This solution changes the Shutdown flag from a boolean value to an int32 and uses the atomic functions Store and Load.

In the main routine, if a <ctrl> C is detected, the Shutdown flag is safely changed from 0 to 1. In the LaunchProcessor Go routine, the value of the Shutdown flag is compared to 1. If that condition is true, the Go routine returns.

It's amazing sometimes how a simple program like this can contain a few gotchas. Things you may have never thought about or realized when you started. Especially when the code always seems to work.

Manage Dependencies With GODEP

Introduction

If you are using 3rd party packages (packages that you don't own or control), you will want a way to create a reproducible build every time you build your projects. If you use 3rd party packages directly and the package authors change things, your projects could break. Even if things don't break, code changes could create inconsistent behavior and bugs.

Keith Rarick's tool godep is a great step in the right direction for managing 3rd party dependencies and creating reproducible builds. The godep tool gives you two options for managing dependencies. The first option creates a dependency file with version control information, and then with some godep magic the code is built against those versions. The second option is to vendor your 3rd party packages inside your projects. Either way, you never need to change a single source code file, and everything is accomplished in conjunction with the go tooling.

Downloading Godep

Download godep using go get and make sure your $GOPATH/bin directory is in your PATH.

go get github.com/kr/godep
export PATH=$PATH:$GOPATH/bin

Create A Project

Build your project using the 3rd party packages as you normally would. Since godep does not require you to change any import paths in the code, 'go get' the code you need and import those packages directly. To keep the post simple, I am going to use an existing program called News Search that uses one 3rd party dependency.

export GOPATH=$HOME/example
go get github.com/goinggo/newssearch


After 'go get' completes, I have the following code on disk inside the GOPATH.

The News Search program is using code from a different Going Go repository, which for this project is a 3rd party package. Since 'go get' was successful, the code built and is installed.

Dependency Management

Once you have a project that can build and install properly, you can use godep to create the Godeps dependency file. Change to the root location for the project and run the godep save command with the -copy=false option:

cd $GOPATH/src/github.com/goinggo/newssearch
godep save -copy=false

Once the save is complete, godep creates a file called Godeps. You will save this file with your project:

{
    "ImportPath": "github.com/goinggo/newssearch",
    "GoVersion": "go1.1.2",
    "Deps": [
        {
            "ImportPath": "github.com/goinggo/utilities/workpool",
            "Rev": "7e6141d61b2a16ae83988907308f8e09f703a0d0"
        }
    ]
}

The Godeps file contains everything godep needs to create a reproducible build. It lists each 3rd party package and the git commit for the version of code to use.


Now remove the 3rd party package from its original location and perform a build using the godep tool:

godep go build

If you remove the original 3rd party code, you can't use 'go build' directly anymore because the imports don't exist. To build the project, use 'go build' through the godep tool.

You can also use 'go install' and 'go test' as long as you run those commands through godep.

godep go build
godep go install
godep go test

To make this work, godep performs a bit of magic. It uses a working directory and manipulates the GOPATH underneath.

Run the godep path command from inside the project folder:

cd $GOPATH/src/github.com/goinggo/newssearch
godep path


You should see the following output:

/var/folders/8q/d2pfdk_x4qd4__l6gypvzsw40000gn/T/godep/rev/7e/6141d61b2a16ae83988907308f8e09f703a0d0

If you open that folder you will see the code for that version. This code is being used to build the project:

The godep tool will continue to use the code from this location to perform future builds if it exists. Calling godep go build will download the version of code specified in the Godeps file if it doesn't already exist.

If you open any of your source code files you will see the imports have not changed. The way godep works, it doesn't have to change a thing. This is one of the really awesome things about godep.

Updating Dependencies

When it is time to update one of your 3rd party libraries, just 'go get' it. Then run godep save once again to update the Godeps file. Because the import paths in the source code files are not changed, godep will find and update the dependencies.

I have changed the 3rd party package and pushed it up to GitHub:

Now I 'go get' the code changes and update the Godeps file:

go get github.com/goinggo/utilities
cd $GOPATH/src/github.com/goinggo/newssearch
godep save

If I open the Godeps file, I can see the dependencies have changed:


{
    "ImportPath": "github.com/goinggo/newssearch",
    "GoVersion": "go1.1.2",
    "Deps": [
        {
            "ImportPath": "github.com/goinggo/utilities/workpool",
            "Rev": "8ecd01ec035e29915aa6897a3385ee4f8d80cc05"
        }
    ]
}

Now I use godep to build the code:

godep go build

The godep tool downloaded the new version and built the code successfully.

Vendoring

Vendoring is the act of making your own copy of the 3rd party packages your project is using. Those copies are traditionally placed inside each project and then saved in the project repository. The godep tool supports vendoring and will place the copies inside the project that is using them.

To Vendor code with godep, don't use any options with the save command. First clean the workspace and download a new version of the News Search program:

export GOPATH=$HOME/example
go get github.com/goinggo/newssearch
cd $GOPATH/src/github.com/goinggo/newssearch

Now issue the godep save command again but this time without the copy option:

godep save


This time you will have a Godeps folder with a special workspace subfolder and the Godeps file. All the 3rd party packages are copied into the workspace folder under src. This is set up to work with the GOPATH.

Version control files are removed and no import paths are changed in any of the source code files. Next, remove the original code for the 3rd party package and perform the build:

godep go build

The build is successful and everything is ready to be pushed back into the repository.

Performing an update is as simple as downloading the new version of the 3rd party package and running godep save again.

Conclusion

The godep tool solves many of the problems that exist with creating reproducible builds. It is incredibly easy to use and sits on top of the go tooling. It doesn't change anything about go, how you write go programs or how you import 3rd party packages. The only drawback is that godep does not support Bazaar using the non-vendored option.

For the public packages you are publishing, you can include a Godeps file to provide your "stable build" configuration. Package users can choose to use it or not, building the code with go directly or through godep. That is really cool.

In the end, godep is a tool that:

1. Supports a Vendor and Non-Vendor solution that provides a reproducible build
2. Maintains backwards compatibility with all existing Go packages
3. Provides a way to publish and access the "stable build" configuration of a product
4. Makes it easy to update package dependencies when new package versions are available

Write Your Go Programs Using GEdit


This is a guest post from Tad Vizbaras from Etasoft in South Florida.

There are a number of editors and IDEs for Go development: LiteIDE, Vim, Emacs and GEdit just to name a few. Each developer has their own favorite editor for each language they work with. Some like full featured IDE environments while others prefer speed over features. My personal favorite editors for Go development at the moment are Vim and GEdit.

GEdit comes as part of many Linux distros. If you use Ubuntu, it is part of the operating system. GEdit has some decent features like:

* Syntax Highlighting
* Split Windows
* Word Wrapping

Advanced features are left to be handled by external plug-ins.

I prefer project-less development. That means there are no formal project files and projects are preserved via a workspace bound to a directory structure. Go has excellent support for project-less development. When building and installing your projects, the Go tooling, in conjunction with the way Go packages code, can minimize the need for external scripts and makefiles.

GEdit is a decent editor but I could not find any good Plug-ins that would allow me to perform a Go build right from the editor. The "External Tools" Plug-in has worked for me: I was able to set up shortcuts and get "go build" to execute. When you click on errors displayed in the bottom pane of GEdit, the cursor jumps to the exact error location.

When I started programming in Go, the "External Tools" Plug-in worked for me for quite some time. But after a while, I started to wish that "go build" would run similar to how Linters run. With Linters, you can run a command after the file is saved. Since Go usually takes only a few seconds to build, the Plug-in could execute a "go build" on save and then jump to the error location if there were any errors.

I wrote a GEdit Plug-in that is developed in Python. Depending on the version of Python you have installed, you may require some small adjustments. This is covered in the Known Issues section below.

Oops, I forgot to mention... Go is Awesome. But you probably already know that.

Meet GoBuild for GEdit 3.x

GoBuild - GEdit 3 Plug-in for Go (golang) development.

This is version 1.0 of the GoBuild Plug-in for GEdit. The Plug-in attaches to the on_save event in GEdit for Go source code files only. It does nothing for any other file type.

It will run "go build" after the file is saved. If the current filename has "_test.go" in the name, then the Plug-in will run "go test" against the current file's directory. The Plug-in will wait a number of seconds for the build or test to complete. It will timeout and quit the build or test so GEdit will not freeze.


Images

The Plug-in captures "go build" errors and shows them in the GEdit status bar. It also jumps to the first error and highlights the error line if the error is in the current file.

Plug-in shows the last successful build.


Download

I have posted the plugin on GitHub. Please send any feedback you may have. https://github.com/tadvi/gedit-gobuild

Known Issues

The Plug-in has been tested on Ubuntu 13.04 and 13.10.

Ubuntu 13.10 requires a small change in the gobuild.plugin file. The line with python should be changed to python3 like below:

Loader=python3

This is because GEdit seems to default to using Python 3 instead of Python 2.7 on newer versions of Linux.

Usage

* Simply drop the files into ~/.local/share/gedit/plugins.
* If this directory does not exist, create it.
* Start GEdit.
* Open Edit-Preferences, then Plug-ins, and check the "GoBuild after save" plug-in.

Notes

* Current build directory is determined based on the active open source file.
* Plug-in is designed for fast development on small to mid size projects.
* Source code file is built with every save.
* Tight iteration of the save-edit-save-edit cycle.
* Not designed for large Go projects because compilation will timeout if it takes too long.
* Not designed for Go unit tests that take a long time to run.
* If you work on Go projects with build times over 5 seconds, this plug-in should be modified to use a keyboard shortcut (such as 'F5') instead of the on_save action.

Using XSLT With Go

I am working on a project that requires pulling and processing different XML feeds from the web and storing the data in MongoDB as JSON. Since new feeds come up every day, changing the Go program to process and publish each new feed is out of the question. A second constraint is that the processing has to work in Iron.io or any other linux cloud based environment.

What I needed was a Go program that could take an XML document and XSLT stylesheet at runtime, transform the XML into JSON and then store the JSON in MongoDB. I have some specific field names and other requirements for the JSON document that I need to make sure exist. XSLT makes this really easy to support.

At first I looked at the different C libraries that exist. I figured I could integrate a library using CGO, but after a few hours I realized this was not going to work. The libraries I found were huge and complex. Then by chance I found a reference to a program called xsltproc. The program exists for both the Mac and Linux operating systems. In fact, it comes pre-installed on the Mac, and an apt-get will get you a copy of the program on your linux operating system.

I have built a sample program that shows how to use xsltproc in your Go programs. Before we download the sample code we need to make sure you have xsltproc installed.

If you are running on a Mac, xsltproc should already exist under /usr/bin

which xsltproc

/usr/bin/xsltproc

On your linux operating system just run apt-get if you don't already have xsltproc installed

sudo apt-get install xsltproc

The xsltproc program will be installed in the same place under /usr/bin. To make sure everything is good, run the xsltproc program requesting the version:

xsltproc --version

xsltproc was compiled against libxml 20708, libxslt 10126 and libexslt 815
libxslt 10126 was compiled against libxml 20708
libexslt 815 was compiled against libxml 20708

To download and try the sample program, open a terminal session and run the following commands:

export GOPATH=$HOME/example

go get github.com/goinggo/xslt
cd $GOPATH/src/github.com/goinggo/xslt
go build

If you want to install the code under your normal GOPATH, start with the 'go get' line. Here are the files that should exist after the build:

main.go            -- Source code for test program
deals.xml          -- Sample XML document from Yipit
stylesheet.xslt    -- Stylesheet to transform the Yipit XML feed to JSON
xslt               -- Test program

Let's look at a portion of the XML document the sample program will transform:

<deals>
  <list-item>
    <yipit_url>http://yipit.com/business/rondeaus-kickboxing/</yipit_url>
    <end_date>2014-01-27 16:00:03</end_date>
    <title>Let a Former Pro Teach You a Few Kicks of the Trade Month...</title>
    <tags>
        <list-item>
            <url />
            <name>Fitness Classes</name>
            <slug>fitness-classes</slug>
        </list-item>
    </tags>
    ...
  </list-item>
</deals>

The XML can be found in the deals.xml file. It is an extensive XML document and too large to show in its entirety.

Let's look at a portion of the XSLT stylesheet:

<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:str="http://exslt.org/strings"
    version="1.0"
    extension-element-prefixes="str">

    <xsl:output method="text" />

    <xsl:template name="cleanText">
        <xsl:param name="pText" />
        <xsl:variable name="cleaned1" select="str:replace($pText, '&quot;', '')" />
        <xsl:variable name="cleaned2" select="str:replace($cleaned1, '\', '')" />
        <xsl:variable name="cleaned3" select="str:replace($cleaned2, '&#xA;', '')" />
        <xsl:value-of select="$cleaned3" />
    </xsl:template>

    ...

    <xsl:template match="/">{"deals": [
    <xsl:for-each select="root/response/deals/list-item">{
        "dealid": <xsl:value-of select="id" />,
        "feed": "Yipit",
        "date_added": "<xsl:value-of select="date_added" />",
        "end_date": "<xsl:value-of select="end_date" />",
        ...
        "categories": [<xsl:for-each select="tags/list-item">"<xsl:value-of select="slug"/>"<xsl:choose><xsl:when test="position() != last()">,</xsl:when></xsl:choose></xsl:for-each>],
        ...
    }<xsl:choose><xsl:when test="position() != last()">,
    </xsl:when></xsl:choose></xsl:for-each>]}
    </xsl:template>
</xsl:stylesheet>

This XSLT can be found in the stylesheet.xslt file. It is an extensive XSLT stylesheet with templates to help clean up the XML data. Something really great about xsltproc is that it already contains a bunch of great extensions:

./xsltproc_darwin -dumpextensions

Registered XSLT Extensions
--------------------------
Registered Extension Functions:
{http://exslt.org/math}lowest
{http://exslt.org/math}power
{http://exslt.org/strings}concat
{http://exslt.org/dates-and-times}date
{http://exslt.org/dates-and-times}day-name
{http://exslt.org/common}object-type
{http://exslt.org/math}atan
{http://exslt.org/strings}encode-uri
{http://exslt.org/strings}decode-uri
{http://exslt.org/dates-and-times}add-duration
{http://exslt.org/dates-and-times}difference
{http://exslt.org/dates-and-times}leap-year
{http://exslt.org/dates-and-times}month-abbreviation
{http://exslt.org/dynamic}map
{http://exslt.org/math}tan
{http://exslt.org/math}exp
{http://exslt.org/dates-and-times}date-time
{http://exslt.org/dates-and-times}day-in-week
{http://exslt.org/dates-and-times}second-in-minute
{http://exslt.org/dates-and-times}year
{http://icl.com/saxon}evaluate
{http://exslt.org/math}log
{http://exslt.org/dates-and-times}add
{http://exslt.org/dates-and-times}day-abbreviation
{http://icl.com/saxon}line-number
{http://exslt.org/math}constant
{http://exslt.org/sets}difference
{http://exslt.org/dates-and-times}duration
{http://exslt.org/dates-and-times}minute-in-hour
{http://icl.com/saxon}eval
{http://exslt.org/math}min
{http://exslt.org/math}max
{http://exslt.org/math}highest
{http://exslt.org/math}random
{http://exslt.org/math}sqrt
{http://exslt.org/math}cos
{http://exslt.org/sets}has-same-node
{http://exslt.org/strings}tokenize
{http://exslt.org/dates-and-times}seconds
{http://exslt.org/dates-and-times}time
{http://exslt.org/dynamic}evaluate
{http://exslt.org/common}node-set
{http://exslt.org/dates-and-times}month-name
{http://exslt.org/dates-and-times}week-in-year
{http://exslt.org/math}acos
{http://exslt.org/sets}intersection
{http://exslt.org/sets}leading
{http://exslt.org/sets}trailing
{http://exslt.org/strings}replace
{http://exslt.org/dates-and-times}day-in-year
{http://icl.com/saxon}expression
{http://exslt.org/math}abs
{http://exslt.org/math}sin
{http://exslt.org/math}asin
{http://exslt.org/math}atan2
{http://exslt.org/sets}distinct
{http://exslt.org/dates-and-times}hour-in-day
{http://exslt.org/dates-and-times}sum
{http://exslt.org/dates-and-times}week-in-month
{http://exslt.org/strings}split
{http://exslt.org/strings}padding
{http://exslt.org/strings}align
{http://exslt.org/dates-and-times}day-in-month
{http://exslt.org/dates-and-times}day-of-week-in-month
{http://exslt.org/dates-and-times}month-in-year
{http://xmlsoft.org/XSLT/}test

Registered Extension Elements:
{http://exslt.org/common}document
{http://exslt.org/functions}result
{http://xmlsoft.org/XSLT/}test

Registered Extension Modules:
http://exslt.org/functions
http://icl.com/saxon
http://xmlsoft.org/XSLT/

Look at the stylesheet to see how to access these extensions. I am using the strings extension to help replace characters that are not JSON compliant.

Now let's look at the sample code that uses xsltproc to process the XML against the XSLT stylesheet:

package main

import (
    "encoding/json"
    "fmt"
    "os"
    "os/exec"
)

type document map[string]interface{}

func main() {
    jsonData, err := processXslt("stylesheet.xslt", "deals.xml")
    if err != nil {
        fmt.Printf("ProcessXslt: %s\n", err)
        os.Exit(1)
    }

    documents := struct {
        Deals []document `json:"deals"`
    }{}

    err = json.Unmarshal(jsonData, &documents)
    if err != nil {
        fmt.Printf("Unmarshal: %s\n", err)
        os.Exit(1)
    }

    fmt.Printf("Deals: %d\n\n", len(documents.Deals))

    for _, deal := range documents.Deals {
        fmt.Printf("DealId: %d\n", int(deal["dealid"].(float64)))
        fmt.Printf("Title: %s\n\n", deal["title"].(string))
    }
}

func processXslt(xslFile string, xmlFile string) (jsonData []byte, err error) {
    cmd := exec.Cmd{
        Args: []string{"xsltproc", xslFile, xmlFile},
        Env:  os.Environ(),
        Path: "xsltproc",
    }

    jsonString, err := cmd.Output()
    if err != nil {
        return jsonData, err
    }

    fmt.Printf("%s\n", jsonString)

    jsonData = []byte(jsonString)

    return jsonData, err
}

The processXslt function uses an exec.Cmd value to shell out and run the xsltproc program. The key to making this work is the cmd.Output method. The xsltproc program writes the result of the transformation to stdout, which means we only need to write the xml and xslt files to disk before running xsltproc. We receive the result from xsltproc as a slice of bytes from the cmd.Output call.

Once the processXslt function has the resulting JSON transformation from xsltproc, the JSON is displayed on the screen and then converted to a slice of bytes for further processing.

In main after the call to the processXslt function, the slice of bytes containing the JSON transformation is unmarshalled into a map so it can be consumed by our Go program and displayed on the screen. In the future that map can be stored in MongoDB via the mgo MongoDB driver.

The xsltproc program can be uploaded to any cloud environment that will allow you to write the XML and XSLT to disk. I have been successful in using xsltproc inside an Iron.io IronWorker container.

If you have the need to process XSLT in your Go programs, give this a try.

Using The Log Package In Go

Linux differs from Windows in many ways, and writing programs in Linux is no exception. The use of standard out, standard err and the null device is not only a good idea, it's the law. If your programs are going to log information, it is best to follow these destination conventions. That way your programs will work with all of the Mac/Linux tooling and hosted environments.

Go has a package in the standard library called log and a type called Logger. Using the log package will give you everything you need to be a good citizen: you will be able to write to all the standard devices, to custom files or to any destination that supports the io.Writer interface.

I have provided a really simple sample that will get you started with using logger:

package main

import (
    "io"
    "io/ioutil"
    "log"
    "os"
)

var (
    TRACE   *log.Logger
    INFO    *log.Logger
    WARNING *log.Logger
    ERROR   *log.Logger
)

func Init(
    traceHandle io.Writer,
    infoHandle io.Writer,
    warningHandle io.Writer,
    errorHandle io.Writer) {

    TRACE = log.New(traceHandle,
        "TRACE: ",
        log.Ldate|log.Ltime|log.Lshortfile)

    INFO = log.New(infoHandle,
        "INFO: ",
        log.Ldate|log.Ltime|log.Lshortfile)

    WARNING = log.New(warningHandle,
        "WARNING: ",
        log.Ldate|log.Ltime|log.Lshortfile)

    ERROR = log.New(errorHandle,
        "ERROR: ",
        log.Ldate|log.Ltime|log.Lshortfile)
}

func main() {
    Init(ioutil.Discard, os.Stdout, os.Stdout, os.Stderr)

    TRACE.Println("I have something standard to say")
    INFO.Println("Special Information")
    WARNING.Println("There is something you need to know about")
    ERROR.Println("Something has failed")
}

When you run this program you will get the following output:

INFO: 2013/11/05 18:11:01 main.go:44: Special Information
WARNING: 2013/11/05 18:11:01 main.go:45: There is something you need to know about
ERROR: 2013/11/05 18:11:01 main.go:46: Something has failed

You will notice that TRACE logging is not being displayed. Let's look at the code to find out why.

Look at the TRACE logger pieces:


TRACE *log.Logger

TRACE = log.New(traceHandle,
    "TRACE: ",
    log.Ldate|log.Ltime|log.Lshortfile)

Init(ioutil.Discard, os.Stdout, os.Stdout, os.Stderr)

TRACE.Println("I have something standard to say")

The code creates a package level variable called TRACE which is a pointer to a log.Logger object. Then inside the Init function, a new log.Logger object is created. The parameters to the log.New function are as follows:

func New(out io.Writer, prefix string, flag int) *Logger

out:    The out variable sets the destination to which log data will be written.
prefix: The prefix appears at the beginning of each generated log line.
flag:   The flag argument defines the logging properties.

Flags:

const (
    // Bits or'ed together to control what's printed. There is no control over the
    // order they appear (the order listed here) or the format they present (as
    // described in the comments). A colon appears after these items:
    // 2009/01/23 01:23:23.123123 /a/b/c/d.go:23: message
    Ldate = 1 << iota // the date: 2009/01/23
    Ltime             // the time: 01:23:23
    Lmicroseconds     // microsecond resolution: 01:23:23.123123. assumes Ltime.
    Llongfile         // full file name and line number: /a/b/c/d.go:23
    Lshortfile        // final file name element and line number: d.go:23. overrides Llongfile
    LstdFlags = Ldate | Ltime // initial values for the standard logger
)

In this sample program the destination for TRACE is ioutil.Discard. This is a null device where all write calls succeed without doing anything. Therefore when you write using TRACE, nothing appears in the terminal window.

Look at INFO:

INFO *log.Logger


INFO = log.New(infoHandle,
    "INFO: ",
    log.Ldate|log.Ltime|log.Lshortfile)

Init(ioutil.Discard, os.Stdout, os.Stdout, os.Stderr)

INFO.Println("Special Information")

For INFO os.Stdout is passed into Init for the infoHandle. This means when you write using INFO, the message will appear on the terminal window, via standard out.

Last, look at ERROR:

ERROR *log.Logger

ERROR = log.New(errorHandle,
    "ERROR: ",
    log.Ldate|log.Ltime|log.Lshortfile)

Init(ioutil.Discard, os.Stdout, os.Stdout, os.Stderr)

ERROR.Println("Something has failed")

This time os.Stderr is passed into Init for the errorHandle. This means when you write using ERROR, the message will appear on the terminal window, via standard error. However, passing these messages to os.Stderr allows other applications running your program to know an error has occurred.

Since any destination that supports the io.Writer interface is accepted, you can create and use files:

file, err := os.OpenFile("file.txt", os.O_CREATE|os.O_WRONLY|os.O_APPEND, 0666)
if err != nil {
    log.Fatalln("Failed to open log file:", err)
}

MYFILE = log.New(file,
    "PREFIX: ",
    log.Ldate|log.Ltime|log.Lshortfile)

In the sample code, a file is opened and then passed into the log.New call. Now when you use MYFILE to write, the writes go to file.txt.

You can also have the logger write to multiple destinations at the same time.

file, err := os.OpenFile("file.txt", os.O_CREATE|os.O_WRONLY|os.O_APPEND, 0666)
if err != nil {
    log.Fatalln("Failed to open log file:", err)
}


multi := io.MultiWriter(file, os.Stdout)

MYFILE = log.New(multi,
    "PREFIX: ",
    log.Ldate|log.Ltime|log.Lshortfile)

Here writes are going to the file and to standard out.

Notice the use of log.Fatalln in the handling of any error with OpenFile. The log package provides an initial logger that can be configured as well. Here is a sample program using log with the standard configuration:

package main

import (
    "log"
)

func main() {
    log.Println("Hello World")
}

Here is the output:

2013/11/05 18:42:26 Hello World

If you want to remove the formatting or change it, you can use the log.SetFlags function:

package main

import (
    "log"
)

func main() {
    log.SetFlags(0)
    log.Println("Hello World")
}

Here is the output:

Hello World

Now all the formatting has been removed. If you want to send the output to a different destination, use the log.SetOutput function:

package main

import (
    "io/ioutil"
    "log"
)

func main() {
    log.SetOutput(ioutil.Discard)
    log.Println("Hello World")
}

Now nothing will display on the terminal window. You can use any destination that supports the io.Writer interface.

Based on this example I wrote a new logging package for all my programs:

go get github.com/goinggo/tracelog

I wish I knew about log and loggers when I started writing Go programs. Expect to see a lot more of the log package from me in the future.

Label Breaks In Go

Have you ever found yourself in this situation? You have a case statement inside of a for loop and you would like to break out of both the case and the for statement in a single call.

var err error
timeout := time.After(30 * time.Second)

sigChan := make(chan os.Signal, 1)
signal.Notify(sigChan, os.Interrupt)

complete := make(chan error)
go launchProcessor(complete)

for {
    select {
    case <-sigChan:
        atomic.StoreInt32(&shutdownFlag, 1)
        continue

    case <-timeout:
        os.Exit(1)

    case err = <-complete:
        break
    }

    // Break the loop
    break
}

return err


Here I have an endless for loop waiting on three channels using a select statement.

The first case is listening for an operating system Interrupt event. If the operating system requests the program to shutdown, this case will set a package level variable and continue back into the loop.

The second case is listening for a timeout event. If the program runs for 30 seconds, the timeout event will fire and the program will immediately terminate.

The third case is listening for a complete event. If the Goroutine that is launched prior to entering the loop completes its work, it will notify the code on this channel. In this case we need to break out of both the case and the for loop.

Fortunately there isn't any more logic to process outside of the select statement, so the second break statement works. If there were other cases that broke out of the select statement and did not require the loop to terminate, I would be in trouble. The code would require more logic and flags to determine when to break out of the loop and when to continue iterating.

Go has an answer to this coding dilemma. You can define a label and break to that label.

var err error
timeout := time.After(30 * time.Second)

sigChan := make(chan os.Signal, 1)
signal.Notify(sigChan, os.Interrupt)

complete := make(chan error)
go launchProcessor(complete)

Loop:
    for {
        select {
        case <-sigChan:
            atomic.StoreInt32(&shutdownFlag, 1)
            continue

        case <-timeout:
            os.Exit(1)

        case err = <-complete:
            break Loop
        }
    }

return err

I have changed the code a bit by declaring a label called Loop just above the for statement. Then in the last case, the break statement is provided the name of that label. This single call to break will jump the execution of the program outside of the for loop and to the next line of code. In this case, the next line of code is the call to return err.


You can also use a label with a continue statement. This is a silly example but it shows you the mechanism:

guestList := []string{"bill", "jill", "joan"}
arrived := []string{"sally", "jill", "joan"}

CheckList:
    for _, guest := range guestList {
        for _, person := range arrived {
            fmt.Printf("Guest[%s] Person[%s]\n", guest, person)

            if person == guest {
                fmt.Printf("Let %s In\n", person)
                continue CheckList
            }
        }
    }

Here is the output:

Guest[bill] Person[sally]
Guest[bill] Person[jill]
Guest[bill] Person[joan]
Guest[jill] Person[sally]
Guest[jill] Person[jill]
Let jill In
Guest[joan] Person[sally]
Guest[joan] Person[jill]
Guest[joan] Person[joan]
Let joan In

In this example there are two for loops, one nested inside the other. From the nested for loop, the continue statement uses a label to jump back to the outer for loop. From the output, you can see that the outer for loop starts its next iteration. Once the outer for loop is complete, the execution of the program continues on.

If you think this is just a fancy goto statement, it really isn't. The label being referenced must be attached to the enclosing for, switch or select statement. As you saw, the continue still begins the next iteration of the for loop.

Using label breaks and continues in these scenarios keeps the code clean and precise.

Building A Weather App Using Go


At Ardan Studios we have spent the last 6 months, in our spare time and on weekends, building a consumer based mobile application called OutCast. The mobile application is tailored towards those who like spending time outdoors, whether that be fishing, hunting or any other type of activity.

This first release of OutCast shows the conditions for the buoy stations and marine forecasts areas within the United States. All this information is updated every 10 minutes and there are map views with traditional grids and search.

The backend processing for buoy and marine data is built using Go and running at Iron.IO as a scheduled worker task. The buoy processing downloads a text file from the NOAA website and rips through it, updating MongoDB with any changes. The marine processing is a bit more complicated. This requires pulling down multiple web pages from the NOAA website and parsing out all the text. Go made building and running these tasks a breeze.

Another important aspect of OutCast is real time weather radar for the last 50 minutes. This has been very challenging on multiple levels, mainly because we needed a really good image library that would run on Linux and could be integrated with Go. We were fortunate to find ImageMagick's MagickWand C API and their Go package that provides the CGO bindings (https://github.com/gographics/imagick).

Processing images is an intense piece of work. Sometimes it takes 3 seconds to clean a single image. With 155 radar stations that need to be processed every 5 minutes, it took us several refactors to get things working well. The MagickWand library can only handle processing one image at a time. This restriction places a lot of stress on getting things done accurately within an acceptable amount of time.

Here is a sample of a radar image before and after processing:

There is another interesting constraint. NOAA updates the images every 120 seconds on different time boundaries. If the program can't download all the images very quickly, the application could have images out of sync when they are animated across the map. Many radar images cross each other like in Orlando, FL. On this area of the map we have 3 radar images overlapping each other.

Sometimes the images are not available. The program goes out to get the image and it doesn't exist. This creates problems with gaps in the timeline. When this happens, there is an alternate image location the program attempts to use. If the image is still not available, then the image from the previous run is used. Unless you are looking for it, you usually can't tell.

Then you have the issue of updating S3 storage and MongoDB for each image. This again needs to happen quickly to prevent image overlays from being out of sync. At the end of the day, you want to do your best to make sure that the images for all the radar stations are in sync. This will provide the best user experience.

Radar image processing happens in three stages and runs on a VM at Digital Ocean with 2 GB of memory and 2 cores.

So how does Go help make this all happen every 5 minutes all day and all night without fail?

Stage 1: Download Images

There are 155 images that have to be downloaded. The images range from 1k to 20k in size, depending on the weather activity at that moment. In this stage, the program spawns a goroutine for each image that needs to be downloaded. The Go program consistently completes all 155 downloads in less than one second:


15:15:02 radar.go:425: main : downloadImages : Started

-- Spawn Go Routines
15:15:02 radar.go:431: main : downloadImages : Info : Image [1] of [155]
15:15:02 radar.go:431: main : downloadImages : Info : Image [155] of [155]

-- Sample Download Per Image
Worker-JAX : downloadImage : Info : HEADER : Url , N0R/JAX_N0R_0.gif
Worker-JAX : downloadImage : Info : HEADER : Last-Modified , [Fri, 06 Dec 2013 20:12]
Worker-JAX : downloadImage : Info : HEADER : Content-Type , [image/gif]
Worker-JAX : downloadImage : Info : HEADER : Vary , [Accept-Encoding]
Worker-JAX : downloadImage : Info : HEADER : Cache-Control , [max-age=180]
Worker-JAX : downloadImage : Info : HEADER : Expires , [Fri, 06 Dec 2013 20:18:02 GMT]
Worker-JAX : downloadImage : Info : HEADER : Date , [Fri, 06 Dec 2013 20:15:02 GMT]
Worker-JAX : downloadImage : Info : HEADER : Connection , [keep-alive]
Worker-JAX : downloadImage : Info : HEADER : Server , [Apache/2.2.15 (Red Hat)]
Worker-JAX : downloadImage : Info : HEADER : Accept-Ranges , [bytes]
Worker-JAX : downloadImage : Info : HEADER : Content-Length , -1
Worker-JAX : downloadImage : Info : HEADER : Image-Length , 6873

-- All Images Complete
15:15:02 radar.go:445: main : downloadImages : Completed

Stage 2: Image Cleanup

Now that all 155 images have been downloaded, they need to be cleaned using the ImageMagick API. Unfortunately, this can only be done with a single goroutine. Trying to clean more than one image at a time causes the program to consume a lot of memory. It really slows things down and can cause the program to be terminated by the OS. I have also seen other very odd behavior. The program consistently completes this work in 90 seconds or less:

15:15:02 radar.go:453: main : cleanImages : Started
15:15:02 radar.go:457: main : cleanImages : Info : Image [1] of [155]

-- Sample Processing Per Image
Worker-RIW : cleanImage : Started
Worker-RIW : cleanImage : Info : ReadImageBlob
Worker-RIW : cleanImage : Info : TransparentPaintImage
Worker-RIW : cleanImage : Info : WaveImage
Worker-RIW : cleanImage : Info : Crop
Worker-RIW : cleanImage : Info : Resize
Worker-RIW : cleanImage : Info : EqualizeImage
Worker-RIW : cleanImage : Info : GaussianBlurImage
Worker-RIW : cleanImage : Info : BrightnessContrastImage
Worker-RIW : cleanImage : Info : ResetIterator
Worker-RIW : cleanImage : Info : GetImageBlob
Worker-RIW : cleanImage : Completed
Worker-RIW : cleanImage : Info : Defer : PixelWand Destroy
Worker-RIW : cleanImage : Info : Defer : MagicWand Destroy

-- All Images Complete
15:16:20 radar.go:477: main : cleanImages : Completed

Stage 3: Upload To S3 and Update MongoDB

The last stage requires uploading all the cleaned images to S3 storage, removing the old images from S3 and then updating MongoDB with the new list of available image files. I am using the goamz package from Ubuntu for accessing S3 and the mgo package from Gustavo Niemeyer to access MongoDB.

Just like when we download the images, this stage spawns a goroutine for each image that needs to be uploaded and recorded. The Go program consistently performs this work in one second:

15:16:20 radar.go:485: main : updateImages : Started
15:16:20 radar.go:491: main : updateImages : Info : Image [1] of [155]
15:16:20 radar.go:491: main : updateImages : Info : Image [2] of [155]
15:16:20 radar.go:491: main : updateImages : Info : Image [3] of [155]

-- Sample Processing Per Image
collateImage : Started : StationId[RIW] FileName[US/WY/RIW/20131206-2015.gif]
collateImage : Info : Remove : Minutes[50.01] FileName[US/WY/RIW/20131206-1935]
collateImage : Info : Keep : Minutes[45.00] FileName[US/WY/RIW/20131206-1940.gif]
collateImage : Info : Keep : Minutes[40.00] FileName[US/WY/RIW/20131206-1945.gif]
collateImage : Info : Keep : Minutes[35.00] FileName[US/WY/RIW/20131206-1940.gif]
collateImage : Info : Keep : Minutes[30.01] FileName[US/WY/RIW/20131206-1945.gif]
collateImage : Info : Keep : Minutes[25.02] FileName[US/WY/RIW/20131206-1950.gif]
collateImage : Info : Keep : Minutes[20.01] FileName[US/WY/RIW/20131206-1955.gif]
collateImage : Info : Keep : Minutes[15.01] FileName[US/WY/RIW/20131206-2000.gif]
collateImage : Info : Keep : Minutes[10.00] FileName[US/WY/RIW/20131206-2005.gif]
collateImage : Info : Keep : Minutes[5.01] FileName[US/WY/RIW/20131206-2010.gif]
collateImage : Info : Keep - New : FileName[US/WY/RIW/20131206-2015.gif]
collateImage : Completed
storeImageMongoDB : Info : Updating Mongo
storeImageMongoDB : Completed
storeImageS3 : Started : Bucket[SRR-Dev] FileName[US/AK/APD/20131206-2015.gif]
storeImageS3 : Info : Putting File Into S3 : FileName[US/AK/APD/20131206-2015.gif]
storeImageMongoDB : Completed

-- All Images Complete
15:16:21 radar.go:505: main : updateImages : Completed

Conclusion

This project has taught me a lot about Go. It is exciting to see how fast the goroutines can download the images and perform all the S3 and MongoDB work. Thanks to CGO, I was able to leverage a powerful image processing library and make calls directly from my Go code.

We are currently porting the web service that powers the mobile application, which is written in Ruby, to Go. We are using the beego package for our web framework, the goconvey package for our tests and the envconfig package to handle our configuration needs.

Our goal for OutCast is to provide people the ability to know in advance that the weekend is going to be great. We plan on using Go and MongoDB to analyze outdoor condition data with user preferences and experiences to deliver relevant information and forecasting. In the future, users will interact with OutCast by providing an experience review after their outdoor activities have ended.

Currently OutCast is only available in the Apple App Store. We will have our Android version complete in January 2014.

Sample Web Application Using Beego and Mgo

Introduction

I am very excited about the Beego web framework. I wanted to share with you how I use the framework to build real-world websites and web services. Here is a picture of the sample website this post is going to showcase:


The sample web application:

1. Implements a traditional grid view of data calling into MongoDB
2. Provides a modal dialog box to view details, using a partial view to generate the HTML
3. Implements a web service that returns a JSON document
4. Takes configuration parameters from the environment using envconfig
5. Implements tests via goconvey
6. Leverages my logging package

The code for the sample can be found in the GoingGo repository on GitHub:

https://github.com/goinggo/beego-mgo

You can bring the code down and run it. It uses a public MongoDB database I created at MongoLab. You will need git and bazaar installed on your system before running go get.

go get github.com/goinggo/beego-mgo

To quickly run or test the web application, use the scripts located in the zscripts folder.

Web Application Code Structure

Let's take a look at the project structure and the different folders that exist:

controllers   Entry point for each web call. Controllers process the requests.
localize      Provides localization support for different languages and cultures.
models        Data structures used by the business and service layers.
routes        Mappings between URLs and the controller code that handles those calls.
services      Primitive functions for the different services that exist. These could be database or web calls that perform a specific function.
static        Resource files such as scripts, stylesheets and images.
test          Tests that can be run through the go test tool.
utilities     Code that supports the web application. Boilerplate and abstraction layers for accessing the database and handling panics.
views         Code related to rendering views.
zscripts      Support scripts to help make it easier to build, run and test the web application.


Controllers, Models and Services

These layers make up the bulk of the code that implements the web application. The idea behind the framework is to hide and abstract as much boilerplate code as possible. This is accomplished by implementing a base controller package and a base services package.

Base Controller Package

The base controller package uses composition to abstract default controller behavior required by all controllers:

type (
    BaseController struct {
        beego.Controller
        services.Service
    }
)

func (this *BaseController) Prepare() {
    this.UserId = this.GetString("userId")
    if this.UserId == "" {
        this.UserId = this.GetString(":userId")
    }

    err := this.Service.Prepare()
    if err != nil {
        this.ServeError(err)
        return
    }
}

func (this *BaseController) Finish() {
    defer func() {
        if this.MongoSession != nil {
            mongo.CloseSession(this.UserId, this.MongoSession)
            this.MongoSession = nil
        }
    }()
}

A new type called BaseController is declared with the Beego Controller type and the base Service type embedded directly. This composes the fields and methods of these types directly into the BaseController type and makes them directly accessible through an object of the BaseController type.

The Beego framework will execute the Prepare and Finish functions on any controller object that implements these interfaces. The Prepare function is executed prior to the controller function being called. These functions belong to every controller type by default, allowing this boilerplate code to be implemented once.

Services Package


The Service package maintains state and implements boilerplate code required by all services:

type (
    // Services contains common properties
    Service struct {
        MongoSession *mgo.Session
        UserId       string
    }
)

func (this *Service) Prepare() (err error) {
    this.MongoSession, err = mongo.CopyMonotonicSession(this.UserId)
    if err != nil {
        return err
    }

    return err
}

func (this *Service) Finish() (err error) {
    defer helper.CatchPanic(&err, this.UserId, "Service.Finish")

    if this.MongoSession != nil {
        mongo.CloseSession(this.UserId, this.MongoSession)
        this.MongoSession = nil
    }

    return err
}

func (this *Service) DBAction(databaseName string, collectionName string, mongoCall mongo.MongoCall) (err error) {
    return mongo.Execute(this.UserId, this.MongoSession, databaseName, collectionName, mongoCall)
}

The Service type maintains the Mongo session and the id of the user. This version of Prepare handles creating a MongoDB session for use. Finish closes the session, which releases the underlying connection back into the pool. The DBAction function provides an abstraction layer for running MongoDB commands and queries.

Buoy Service

The buoy service package implements the calls to MongoDB. Let's look at the FindStation function that is called by the controller methods:

func FindStation(service *services.Service, stationId string) (buoyStation *buoyModels.BuoyStation, err error) {
    defer helper.CatchPanic(&err, service.UserId, "FindStation")

    queryMap := bson.M{"station_id": stationId}

    buoyStation = &buoyModels.BuoyStation{}
    err = service.DBAction(Config.Database, "buoy_stations",
        func(collection *mgo.Collection) error {
            return collection.Find(queryMap).One(buoyStation)
        })

    if err != nil {
        if strings.Contains(err.Error(), "not found") == false {
            return buoyStation, err
        }

        err = nil
    }

    return buoyStation, err
}

The FindStation function prepares the query and then uses the DBAction function to execute the query against MongoDB.

Implementing Web Calls

With the base types, boilerplate code and service functionality in place, we can now implement the web calls.

Buoy Controller

The BuoyController type is composed solely from the BaseController. By composing the BuoyController in this way, it immediately satisfies the Prepare and Finish interfaces and contains all the fields of a Beego Controller.

The controller functions are bound to routes. The routes specify the URLs for the different web calls that the application supports. In our sample application we have three routes:

beego.Router("/", &controllers.BuoyController{}, "get:Index")
beego.Router("/buoy/retrievestation", &controllers.BuoyController{}, "post:RetrieveStation")
beego.Router("/buoy/station/:stationId", &controllers.BuoyController{}, "get,post:RetrieveStationJson")

Each route specifies a URL path, an instance of the controller used to handle the call and the name of the controller method to use. A prefix specifying which HTTP verbs are accepted can be provided as well.

The Index controller method is used to deliver the initial HTML to the browser. This includes the JavaScript, style sheets and anything else needed to get the web application going:


func (this *BuoyController) Index() {
    region := "Gulf Of Mexico"

    buoyStations, err := buoyService.FindRegion(&this.Service, region)
    if err != nil {
        this.ServeError(err)
        return
    }

    this.Data["Stations"] = buoyStations
    this.Layout = "shared/basic-layout.html"
    this.TplNames = "buoy/content.html"
    this.LayoutSections = map[string]string{}
    this.LayoutSections["PageHead"] = "buoy/page-head.html"
    this.LayoutSections["Header"] = "shared/header.html"
    this.LayoutSections["Modal"] = "shared/modal.html"
}

A call is made into the service layer to retrieve the list of stations for the region. Then the slice of stations is passed into the view system. Since this is setting up the initial view of the application, layouts and the template are specified. When the controller method returns, the Beego framework will generate the HTML for the response and deliver it to the browser.

To generate that grid of stations, we need to be able to iterate over the slice of stations. Go templates support iterating over a slice. Here we use the .Stations variable which was passed into the view system:

{{range $index, $val := .Stations}}
<tr>
  <td><a class="detail" data="{{$val.StationId}}" href="#">{{$val.StationId}}</a></td>
  <td>{{$val.Name}}</td>
  <td>{{$val.LocDesc}}</td>
  <td>{{$val.Condition.DisplayWindSpeed}}</td>
  <td>{{$val.Condition.WindDirection}}</td>
  <td>{{$val.Condition.DisplayWindGust}}</td>
</tr>
{{end}}

Each station id is a link that brings up a modal dialog box with the details for that station. The RetrieveStation controller method generates the HTML for the modal dialog:


func (this *BuoyController) RetrieveStation() {
    params := struct {
        StationId string `form:"stationId" valid:"Required; MinSize(4)" error:"invalid_station_id"`
    }{}

    if this.ParseAndValidate(&params) == false {
        return
    }

    buoyStation, err := buoyService.FindStation(&this.Service, params.StationId)
    if err != nil {
        this.ServeError(err)
        return
    }

    this.Data["Station"] = buoyStation
    this.Layout = ""
    this.TplNames = "buoy/pv_station.html"
    view, _ := this.RenderString()

    this.AjaxResponse(0, "SUCCESS", view)
}

RetrieveStation gets the details for the specified station and then uses the view system to generate the HTML for the dialog box. The partial view is passed back to the requesting Ajax call and placed into the browser document:

function ShowDetail(result) {
    try {
        var postData = {};
        postData["stationId"] = $(result).attr('data');

        var service = new ServiceResult();
        service.getJSONData("/buoy/retrievestation",
            postData,
            ShowDetail_Callback,
            Standard_ValidationCallback,
            Standard_ErrorCallback
        );
    }
    catch (e) {
        alert(e);
    }
}

function ShowDetail_Callback() {
    try {
        $('#system-modal-title').html("Buoy Details");
        $('#system-modal-content').html(this.ResultObject);
        $("#systemModal").modal('show');
    }
    catch (e) {
        alert(e);
    }
}

Once the call to modal('show') is performed, the following modal dialog appears:

The RetrieveStationJson function implements a web service call that returns a JSON document:

func (this *BuoyController) RetrieveStationJson() {
    params := struct {
        StationId string `form:":stationId" valid:"Required; MinSize(4)" error:"invalid_station_id"`
    }{}

    if this.ParseAndValidate(&params) == false {
        return
    }

    buoyStation, err := buoyService.FindStation(&this.Service, params.StationId)
    if err != nil {
        this.ServeError(err)
        return
    }

    this.Data["json"] = &buoyStation
    this.ServeJson()
}

You can see how it calls into the service layer and uses the JSON support to return the response.

Testing The Endpoint

In order to make sure the application is always working, it needs to have tests:

func TestStation(t *testing.T) {
    r, _ := http.NewRequest("GET", "/station/42002", nil)
    w := httptest.NewRecorder()
    beego.BeeApp.Handlers.ServeHTTP(w, r)

    response := struct {
        StationId string `json:"station_id"`
        Name      string `json:"name"`
        LocDesc   string `json:"location_desc"`
        Condition struct {
            Type        string    `json:"type"`
            Coordinates []float64 `json:"coordinates"`
        } `json:"condition"`
        Location struct {
            WindSpeed     float64 `json:"wind_speed_milehour"`
            WindDirection int     `json:"wind_direction_degnorth"`
            WindGust      float64 `json:"gust_wind_speed_milehour"`
        } `json:"location"`
    }{}
    json.Unmarshal(w.Body.Bytes(), &response)

    Convey("Subject: Test Station Endpoint\n", t, func() {
        Convey("Status Code Should Be 200", func() {
            So(w.Code, ShouldEqual, 200)
        })
        Convey("The Result Should Not Be Empty", func() {
            So(w.Body.Len(), ShouldBeGreaterThan, 0)
        })
        Convey("There Should Be A Result For Station 42002", func() {
            So(response.StationId, ShouldEqual, "42002")
        })
    })
}

This test creates a fake call through the Beego handler for the specified route. This is awesome because we don't need to run the web application to test. By using goconvey we can create tests that produce nice output that is logical and easy to read.


Here is a sample when the test fails:

Subject: Test Station Endpoint

  Status Code Should Be 200 ✘
  The Result Should Not Be Empty ✔
  There Should Be A Result For Station 42002 ✘

Failures:

* /Users/bill/Spaces/Go/Projects/src/github.com/goinggo/beego-mgo/test/endpoints/buoyEndpoints_test.go
  Line 35:
  Expected: '200'
  Actual:   '400'
  (Should be equal)

* /Users/bill/Spaces/Go/Projects/src/github.com/goinggo/beego-mgo/test/endpoints/buoyEndpoints_test.go
  Line 37:
  Expected: '0'
  Actual:   '9'
  (Should be equal)

3 assertions thus far

--- FAIL: TestStation-8 (0.03 seconds)

Here is a sample when it is successful:

Subject: Test Station Endpoint

  Status Code Should Be 200 ✔
  The Result Should Not Be Empty ✔
  There Should Be A Result For Station 42002 ✔

3 assertions thus far

--- PASS: TestStation-8 (0.05 seconds)

Conclusion

Take the time to download the project and look around. I have attempted to show you the major points of the sample and how things are put together. The Beego framework makes it easy to abstract and implement your own boilerplate code, leverage the Go testing harness and run and deploy the code using Go's standard mechanisms.

Three-Index Slices in Go 1.2


With the release of Go 1.2, slices gained the ability to specify the capacity when performing a slicing operation. This doesn't mean we can use this index to extend the capacity of the underlying array. It means we can create a new slice whose capacity is restricted. Restricting the capacity provides a level of protection to the underlying array and gives us more control over append operations.

Here are the release notes and design document for the feature request:

http://tip.golang.org/doc/go1.2#three_index
https://docs.google.com/document/d/1GKKdiGYAghXRxC2BFrSEbHBZZgAGKQ-yXK-hRKBo0Kk/pub

Let's write some code to explore using the new capacity index. As with all my slice posts, I am going to use this InspectSlice function:

func InspectSlice(slice []string) {
    // Capture the address to the slice structure
    address := unsafe.Pointer(&slice)

    // Capture the address where the length and cap size is stored
    lenAddr := uintptr(address) + uintptr(8)
    capAddr := uintptr(address) + uintptr(16)

    // Create pointers to the length and cap size
    lenPtr := (*int)(unsafe.Pointer(lenAddr))
    capPtr := (*int)(unsafe.Pointer(capAddr))

    // Create a pointer to the underlying array
    addPtr := (*[8]string)(unsafe.Pointer(*(*uintptr)(address)))

    fmt.Printf("Slice Addr[%p] Len Addr[0x%x] Cap Addr[0x%x]\n",
        address,
        lenAddr,
        capAddr)

    fmt.Printf("Slice Length[%d] Cap[%d]\n",
        *lenPtr,
        *capPtr)

    for index := 0; index < *lenPtr; index++ {
        fmt.Printf("[%d] %p %s\n",
            index,
            &(*addPtr)[index],
            (*addPtr)[index])
    }

    fmt.Printf("\n\n")
}


To start, let's create a slice we will use as our source:

source := []string{"Apple", "Orange", "Plum", "Banana", "Grape"}
InspectSlice(source)

Output:

Slice Addr[0x210231000] Len Addr[0x210231008] Cap Addr[0x210231010]
Slice Length[5] Cap[5]
[0] 0x21020e140 Apple
[1] 0x21020e150 Orange
[2] 0x21020e160 Plum
[3] 0x21020e170 Banana
[4] 0x21020e180 Grape

We start with a slice of strings with a length and capacity of 5. This means the underlying array has 5 elements and we have access to the entire array.

Next, let's take a traditional slice of the source and inspect the contents:

takeOne := source[2:3]
InspectSlice(takeOne)

Output:

Slice Addr[0x210231040] Len Addr[0x210231048] Cap Addr[0x210231050]
Slice Length[1] Cap[3]
[0] 0x21020e160 Plum

With this slice operation we only take the third element from the source. You can see the first element of the takeOne slice has the same address as the third element of the source slice. The takeOne slice has a length of one and a capacity of three. This is because there are three elements left in the underlying array that are available for use.

What if we didn't want the new slice to have access to the remaining capacity? Prior to version 1.2, this was not possible. Let's take the slice again, but this time restrict the capacity to one:

takeOneCapOne := source[2:3:3]  // Use the third index position to set the capacity
InspectSlice(takeOneCapOne)

Output:

Slice Addr[0x210231060] Len Addr[0x210231068] Cap Addr[0x210231070]
Slice Length[1] Cap[1]
[0] 0x21020e160 Plum

After creating the takeOneCapOne slice, the length and capacity are now one. The takeOneCapOne slice no longer has access to the remaining capacity in the underlying array.

Length and capacity are calculated using this formula:

For slice[i : j : k]:

Length:   j - i
Capacity: k - i

If we attempt to set the capacity greater than the capacity of the underlying array, the code will panic:

takeOneCapFour := source[2:3:6]  // (6 - 2) attempts to set the capacity
                                 // to 4. This is greater than what is
                                 // available.

Runtime Error:

panic: runtime error: slice bounds out of range

goroutine 1 [running]:
runtime.panic(0x9ad20, 0x1649ea)
    /Users/bill/go/src/pkg/runtime/panic.c:266 +0xb6
main.main()
    /Users/bill/Spaces/Test/src/test/main.go:15 +0x24f

So what happens if we append an element to the takeOneCapOne slice?

source := []string{"Apple", "Orange", "Plum", "Banana", "Grape"}
InspectSlice(source)

takeOneCapOne := source[2:3:3]
InspectSlice(takeOneCapOne)

takeOneCapOne = append(takeOneCapOne, "Kiwi")
InspectSlice(takeOneCapOne)

Here is the output:

Slice Addr[0x210231000] Len Addr[0x210231008] Cap Addr[0x210231010]
Slice Length[5] Cap[5]
[0] 0x21020e140 Apple
[1] 0x21020e150 Orange
[2] 0x21020e160 Plum
[3] 0x21020e170 Banana
[4] 0x21020e180 Grape

-- Before Append --

Slice Addr[0x210231040] Len Addr[0x210231048] Cap Addr[0x210231050]
Slice Length[1] Cap[1]
[0] 0x21020e160 Plum

-- After Append --

Slice Addr[0x210231080] Len Addr[0x210231088] Cap Addr[0x210231090]
Slice Length[2] Cap[2]
[0] 0x210231060 Plum
[1] 0x210231070 Kiwi

When we append an element to the takeOneCapOne slice, a new underlying array is created for the slice. This new underlying array contains a copy of the elements being referenced from the source and then is extended to add the new element. This is because the capacity of the takeOneCapOne slice was reached and append needed to grow the capacity. Notice how the address changes in the takeOneCapOne slice after the append.

How is this different from not setting the capacity?

source := []string{"Apple", "Orange", "Plum", "Banana", "Grape"}
InspectSlice(source)

takeOne := source[2:3]  // Don't specify capacity
InspectSlice(takeOne)

takeOne = append(takeOne, "Kiwi")
InspectSlice(takeOne)

InspectSlice(source)

Here is the output:

Slice Addr[0x210231000] Len Addr[0x210231008] Cap Addr[0x210231010]
Slice Length[5] Cap[5]
[0] 0x21020e140 Apple
[1] 0x21020e150 Orange
[2] 0x21020e160 Plum
[3] 0x21020e170 Banana
[4] 0x21020e180 Grape

-- Before Append --

Slice Addr[0x210231040] Len Addr[0x210231048] Cap Addr[0x210231050]
Slice Length[1] Cap[3]
[0] 0x21020e160 Plum

-- After Append --

Slice Addr[0x210231060] Len Addr[0x210231068] Cap Addr[0x210231070]
Slice Length[2] Cap[3]
[0] 0x21020e160 Plum
[1] 0x21020e170 Kiwi

Slice Addr[0x210231080] Len Addr[0x210231088] Cap Addr[0x210231090]
Slice Length[5] Cap[5]
[0] 0x21020e140 Apple
[1] 0x21020e150 Orange
[2] 0x21020e160 Plum
[3] 0x21020e170 Kiwi
[4] 0x21020e180 Grape

This time the append uses the existing capacity and overwrites the value at index 3 (Banana) in the underlying array. This could be a disaster if this was not our intent.

This new feature of setting the capacity can really help protect us and our data from unwanted overwrites. The more we can leverage the built-in functions and runtime to handle these types of operations the better. These types of bugs are very difficult to find so this is going to help immeasurably.

Here are other posts about slices:

Understanding Slices In Go Programming
Collections Of Unknown Length In Go
Slices Of Slices Of Slices In Go
Iterating Over Slices In Go

Queue Your Way To Scalability

Introduction

The first thing I did when I started programming in Go was begin porting my Windows utility classes and service frameworks over to Linux. This is what I did when I moved from C++ to C#. Thank goodness, I soon learned about Iron.IO and the services they offer. Then it hit me: if I wanted true scalability, I needed to start building worker tasks that could be queued to run anywhere at any time. It was not about how many machines I needed, it was about how much compute time I needed.


Outcast Marine Forecast

The freedom that comes with architecting a solution around web services and worker tasks is refreshing. If I need 1,000 instances of a task to run, I can just queue it up. I don't need to worry about capacity, resources, or any other IT related issues. If my service becomes an instant hit overnight, the architecture is ready, the capacity is available.

My mobile weather application Outcast is a prime example. I currently have a single scheduled task that runs in Iron.IO every 10 minutes. This task updates the marine forecast areas for the United States, downloading and parsing 472 web pages from the NOAA website. We are about to add Canada, and eventually we want to move into Europe and Australia. At that point a single scheduled task is not a scalable or redundant architecture for this process.

Thanks to the Go client from Iron.IO, I can build a task that wakes up on a schedule and queues up as many marine forecast area worker tasks as needed. I can use this architecture to process each marine forecast area independently, each in its own worker task, providing incredible scalability and redundancy. The best part: I don't have to think about hardware or IT related capacity issues.

Create a Worker Task

Back in September I wrote a post about building and uploading an Iron.IO worker task using Go:

http://www.goinggo.net/2013/09/running-go-programs-in-ironworker.html

This task simulated 60 seconds of work and ran experiments to understand some of the capabilities of the worker task container. We are going to use this worker task to demonstrate how to use the Go Client to queue a task. If you want to follow along, go ahead and walk through the post and create the worker task.

I am going to assume you walked through the post and created the worker called "task" as depicted in the image below:


Download The Go Client

Download the Go client from Iron.IO:

go get github.com/iron-io/iron_go

Now navigate to the examples folder.

The examples leverage the API that can be found here:

http://dev.iron.io/worker/reference/api/

Not all the API calls are represented in these examples, but from these examples the rest of the API can be easily implemented.

In this post we are going to focus on the task API calls. These are the APIs that you will most likely be able to leverage in your own programs and architectures.

Queue a Task

Open up the queue example from the examples/tasks folder. We will walk through the more important aspects of the code.

In order to queue a task with the Go client, we need to create this document which will be posted with the request:

{
    "tasks": [
        {
            "code_name": "MyWorker",
            "timeout": 60,
            "payload": "{\"x\": \"abc\", \"y\": \"def\"}"
        }
    ]
}

In the case of our worker task, the payload document in Go should look like this:

var payload = `{"tasks":[{
    "code_name" : "task",
    "timeout" : 120,
    "payload" : ""
}]}`

Now let's look at the code that will request our task to be queued. The first thing we need to do is set our project id and token.

config := config.Config("iron_worker")
config.ProjectId = "your_project_id"
config.Token = "your_token"

As described in the post from September, this information can be found inside our project configuration:

Now we can use the Go Client to build the url and prepare the payload for the request:

url := api.ActionEndpoint(config, "tasks")
postData := bytes.NewBufferString(payload)


Using the url object, we can send the request to Iron.IO and capture the response:

resp, err := url.Request("POST", postData)
if err != nil {
    log.Println(err)
    return
}
defer resp.Body.Close()

body, err := ioutil.ReadAll(resp.Body)
if err != nil {
    log.Println(err)
    return
}

We want to check the response to make sure everything was successful. This is the response we will get back:

{
    "msg": "Queued up",
    "tasks": [
        {
            "id": "4eb1b471cddb136065000010"
        }
    ]
}

To unmarshal the result, we need these data structures:

type (
    TaskResponse struct {
        Message string `json:"msg"`
        Tasks   []Task `json:"tasks"`
    }

    Task struct {
        Id string `json:"id"`
    }
)

Now let's unmarshal the results:

taskResponse := &TaskResponse{}
err = json.Unmarshal(body, taskResponse)
if err != nil {
    log.Printf("%v\n", err)
    return
}

If we want to use a map instead to reduce the amount of code, we can do this:


results := map[string]interface{}{}
err = json.Unmarshal(body, &results)
if err != nil {
    log.Printf("%v\n", err)
    return
}

When we run the example code and everything works, we should see the following output:

Url: https://worker-aws-us-east-1.iron.io:443/2/projects/522b4c518a0c960009000007/tasks

"msg": Queued up
{
    "id": "52b4721726d9410296012cc8",
},

If we navigate to the Iron.IO HUD, we should see the task was queued and completed successfully:

Conclusion

The Go client is doing a lot of the boilerplate work for us behind the scenes. We just need to make sure we have all the configuration parameters that are required. Queuing a task is one of the more complicated API calls. Look at the other examples to see how to get information for the tasks we queue and even get the logs.

Queuing a task like this gives you the flexibility to schedule work on specific intervals or based on events. There are a lot of use cases where different types of web requests could leverage queuing a task. Leveraging this type of architecture provides a nice separation of concerns with scalability and redundancy built in. It keeps our web applications focused and optimized for handling user requests and pushes the asynchronous and background tasks to a cloud environment designed and architected to handle things at scale.

As Outcast grows we will continue to leverage all the services that Iron.IO and the cloud have to offer. There is a lot of data that needs to be downloaded, processed and then delivered to users through the mobile application. By building a scalable architecture today, we can handle what happens tomorrow.

Macro View of Map Internals In Go

Introduction

There are lots of posts that talk about the internals of slices, but when it comes to maps, we are left in the dark. I was wondering why and then I found the code for maps and it all made sense.


http://golang.org/src/pkg/runtime/hashmap.c

At least for me, this code is complicated. That being said, I think we can create a macro view of how maps are structured and grow. This should explain why they are unordered, efficient and fast.

Creating and Using Maps

Let's look at how we can use a map literal to create a map and store a few values:

// Create an empty map with a key and value of type string
colors := map[string]string{}

// Add a few key/value pairs to the map
colors["AliceBlue"] = "#F0F8FF"
colors["Coral"]     = "#FF7F50"
colors["DarkGray"]  = "#A9A9A9"

When we add values to a map, we always specify a key that is associated with the value. This key is used to find this value again without the need to iterate through the entire collection:

fmt.Printf("Value: %s", colors["Coral"])

If we do iterate through the map, we will not necessarily get the keys back in the same order. In fact, every time you run the code, the order could change:

colors := map[string]string{}
colors["AliceBlue"]   = "#F0F8FF"
colors["Coral"]       = "#FF7F50"
colors["DarkGray"]    = "#A9A9A9"
colors["ForestGreen"] = "#228B22"
colors["Indigo"]      = "#4B0082"
colors["Lime"]        = "#00FF00"
colors["Navy"]        = "#000080"
colors["Orchid"]      = "#DA70D6"
colors["Salmon"]      = "#FA8072"

for key, value := range colors {
    fmt.Printf("%s:%s, ", key, value)
}

Output:
AliceBlue:#F0F8FF, DarkGray:#A9A9A9, Indigo:#4B0082, Coral:#FF7F50,
ForestGreen:#228B22, Lime:#00FF00, Navy:#000080, Orchid:#DA70D6,
Salmon:#FA8072

Now that we know how to create, set key/value pairs and iterate over a map, we can peek under the hood.


How Maps Are Structured

Maps in Go are implemented as a hash table. If you need to learn what a hash table is, there are lots of articles and posts about the subject. This is the Wikipedia page to get you started:

http://en.wikipedia.org/wiki/Hash_table

The hash table for a Go map is structured as an array of buckets. The number of buckets is always equal to a power of 2. When a map operation is performed, such as (colors["Black"] = "#000000"), a hash key is generated against the key that is specified. In this case the string "Black" is used to generate the hash key. The low order bits (LOB) of the generated hash key are used to select a bucket.

Once a bucket is selected, the key/value pair needs to be stored, removed or looked up, depending on the type of operation. If we look inside any bucket, we will find two data structures. First, there is an array with the top 8 high order bits (HOB) from the same hash key that was used to select the bucket. This array distinguishes each individual key/value pair stored in the respective bucket. Second, there is an array of bytes that store the key/value pairs. The byte array packs all the keys and then all the values together for the respective bucket.


When we are iterating through a map, the iterator walks through the array of buckets and then returns the key/value pairs in the order they are laid out in the byte array. This is why maps are unsorted collections. The hash keys determine the walk order of the map because they determine which buckets each key/value pair will end up in.

Memory and Bucket Overflow

There is a reason the key/value pairs are packed like this in a single byte array. If the keys and values were stored like key/value/key/value, padding allocations between each key/value pair would be needed to maintain proper alignment boundaries. An example where this would apply is with a map that looks like this:

map[int64]int8

The 1 byte value in this map would result in 7 extra bytes of padding per key/value pair. By packing the key/value pairs as key/key/value/value, the padding only has to be appended to the end of the byte array and not in between. Eliminating the padding bytes saves the bucket and the map a good amount of memory. To learn more about alignment boundaries, read this post:

http://www.goinggo.net/2013/07/understanding-type-in-go.html

A bucket is configured to store only 8 key/value pairs. If a ninth key needs to be added to a bucket that is full, an overflow bucket is created and referenced from inside the respective bucket.


How Maps Grow

As we continue to add or remove key/value pairs from the map, the efficiency of the map lookups begins to deteriorate. The load threshold values that determine when to grow the hash table are based on these four factors:

% overflow  : Percentage of buckets which have an overflow bucket
bytes/entry : Number of overhead bytes used per key/value pair
hitprobe    : Number of entries that need to be checked when looking up a key
missprobe   : Number of entries that need to be checked when looking up an absent key

Currently, the code uses the following load threshold values:

LOAD   %overflow   bytes/entry   hitprobe   missprobe
6.50   20.90       10.79         4.25       6.50

Growing the hash table starts with assigning a pointer called the "old bucket" pointer to the current bucket array. Then a new bucket array is allocated to hold twice the number of existing buckets. This could result in large allocations, but the memory is not initialized so the allocation is fast.

Once the memory for the new bucket array is available, the key/value pairs from the old bucket array can be moved or "evacuated" to the new bucket array. Evacuations happen as key/value pairs are added or removed from the map. The key/value pairs that are together in an old bucket could be moved to different buckets inside the new bucket array. The evacuation algorithm attempts to distribute the key/value pairs evenly across the new bucket array.


This is a very delicate dance because iterators still need to run through the old buckets until every old bucket has been evacuated. This also affects how key/value pairs are returned during iteration operations. A lot of care has been taken to make sure iterators work as the map grows and expands.

Conclusion

As I stated in the beginning, this is just a macro view of how maps are structured and grow. The code is written in C and performs a lot of memory and pointer manipulation to keep things fast, efficient and safe.

Obviously, this implementation can be changed at any time and having this understanding doesn't affect our ability, one way or the other, to use maps. It does show that if you know how many keys you need ahead of time, it is best to allocate that space during initialization. It also explains why maps are unsorted collections and why iterators seem random when walking through maps.

Special Thanks

I would like to thank Stephen McQuay and Keith Randall for their review, input and corrections for the post.

Go Package Management For 2014

Introduction

In October 2013 I sent out a call to action to the Go community. I wanted to form a group of Gophers that would come together and help write a specification and build a working implementation of a package management tool. We are not there yet, but the group did accomplish a few things:

We started a mailing list called Go package management [go-pm] where people could discuss ideas and get feedback on existing and new tools.

We wrote a goals document that outlined what we wanted to accomplish along with guidelines and help for package management tool authors.

We identified and categorized the existing set of tools that are available for use.

We developed a set of user stories that describe the different scenarios people were facing.


We started a list of packages that tool developers can use to test their tools against.

We helped some of the tool developers find and fix bugs.

Overall I think the last 3 months have been productive and I am pleased with the results.

Building Tools

I have come to appreciate that there isn't going to be a silver bullet or a one tool fits all solution. I believe a majority of the use-cases that have been defined in the goals document can be solved and building tools to handle these use-cases is worth the time and effort. If you're thinking about building a tool, please consider these guidelines which are outlined in the goals document:

Work with the Go language as defined in the Go 1 spec of March 28, 2012.
Don’t implement solutions that require feature changes or build tools that change the way Go works.

Provide backwards compatibility with go get and the convention of using VCS repositories in the import paths.
The existing set of programs, build processes and workflows can’t break. You must respect the existing environments and allow them to continue to function.

Prevent version skewing.
Don’t build into the solution the potential for version skewing to occur, such as requiring semantic versioning in the import path urls. Imports should not need to be changed to access the latest or different versions of a package.

Work on all operating systems and architectures that Go currently supports.
One of the great things about Go is that programs can be built on all these different operating systems and architectures. Your tool should not exclude a platform or make use of a specific operating system construct like symlinks.

Interoperate in such a way that ‘go get’ continues to work for package authors who do not wish to participate, or for existing Go source code that cannot be changed. Also, do not force package authors to choose between making their code go getable or using the proposed solution.
No one should be required to use a tool in order to share or import a repository. The tooling must continue to work for everyone.

All these guidelines are important because they will allow others to try and use your tools without the need to refactor any existing code. They also guarantee that existing projects continue to build and install with the Go tooling on all platforms.

Choices

I would love to see the community rally around a few tools and help improve them. There are two classes of tools that I think work well with Go, Vendoring and Revision Locking.

Vendoring takes the 3rd party source code that is referenced in your project and makes a copy of that code inside a new folder within the project. All the code your project needs is inside the one project repository. Vendoring also provides a performance enhancement on getting the code because only one url call is required.

Revision Locking creates a dependency file that references specific commits in the different version control systems the code is located in. Just like vendoring, the revision locking tool is used to get, build and install your project. One advantage is that your project repository continues to only contain the specific project code.

Choosing how to handle package management is a matter of personal preference. The Go team recommends vendoring, which can be found in the Go FAQ. They mention Keith Rarick's tool goven as an option. Keith has abandoned** goven for his other tool godep, which provides both vendoring and revision locking.

** After talking with Keith he has stated that he has not totally abandoned goven, it is just "finished". He continues to maintain the package and merge bug fixes when necessary.

New Call To Action

For 2014 I would like to see the Go community play a greater role in helping the package management tool authors. There are several ways I think this can be done:

Participate in the go-pm group. Give advice and help to those who have questions.
Submit packages that the tool authors can use to help test their tools.
Report bugs and feature requests.
Continue to add user stories that are missing in the goals document.
Submit your new tools and ideas to the go-pm group.
Work with the existing tool authors to improve the tools that we have today.

I hope to see all of you participating in the go-pm mailing list this year. I love Go and only want to see it improve for everyone.

Decode JSON Documents In Go

Introduction

We are working on a project where we have to make calls into a web service. Many of the web calls return very large documents that contain many sub-documents. The worst part is, we usually only need a handful of the fields for any given document and those fields tend to be scattered all over the place.

Here is a sample of a smaller document:

var document string = `{
"userContext": {
    "conversationCredentials": {
        "sessionToken": "06142010_1:75bf6a413327dd71ebe8f3f30c5a4210a9b11e93c028d6e11abfca7ff"
    },
    "valid": true,
    "isPasswordExpired": false,
    "cobrandId": 10000004,
    "channelId": -1,
    "locale": "en_US",
    "tncVersion": 2,
    "applicationId": "17CBE222A42161A3FF450E47CF4C1A00",
    "cobrandConversationCredentials": {
        "sessionToken": "06142010_1:b8d011fefbab8bf1753391b074ffedf9578612d676ed2b7f073b5785b"
    },
    "preferenceInfo": {
        "currencyCode": "USD",
        "timeZone": "PST",
        "dateFormat": "MM/dd/yyyy",
        "currencyNotationType": {
            "currencyNotationType": "SYMBOL"
        },
        "numberFormat": {
            "decimalSeparator": ".",
            "groupingSeparator": ",",
            "groupPattern": "###,##0.##"
        }
    }
},
"lastLoginTime": 1375686841,
"loginCount": 299,
"passwordRecovered": false,
"emailAddress": "[email protected]",
"loginName": "sptest1",
"userId": 10483860,
"userType": {
    "userTypeId": 1,
    "userTypeName": "normal_user"
}
}`

It is not scalable for us to create all the structs and embedded structs to unmarshal the different JSON documents using json.Unmarshal, and working directly with a map was out of the question. What we needed was a way to decode these JSON documents into structs that just contained the fields we needed, regardless of where those fields lived in the JSON document.

Luckily we came across a package by Mitchell Hashimoto called mapstructure and we forked it. This package is able to take a JSON document that is already unmarshaled into a map and decode that into a struct. Unfortunately, you still needed to create all the embedded structs if you wanted the data at the different levels. So I studied the code and built some functionality on top that allowed us to do what we needed.

DecodePath

The first function we added is called DecodePath. This allows us to specify the fields and sub-documents we want from the JSON document and store them into the structs we need. Let's start with a small example using the JSON document above:


type UserType struct {
    UserTypeId   int
    UserTypeName string
}

type User struct {
    Session   string   `jpath:"userContext.cobrandConversationCredentials.sessionToken"`
    CobrandId int      `jpath:"userContext.cobrandId"`
    UserType  UserType `jpath:"userType"`
    LoginName string   `jpath:"loginName"`
}

docScript := []byte(document)
docMap := map[string]interface{}{}
json.Unmarshal(docScript, &docMap)

user := User{}
DecodePath(docMap, &user)

fmt.Printf("%#v", user)

If we run this program we get the following output:

mapstructure.User{
    Session:"06142010_1:b8d011fefbab8bf1753391b074ffedf9578612d676ed2b7f073b5785b",
    CobrandId:10000004,
    UserType:mapstructure.UserType{
        UserTypeId:1,
        UserTypeName:"normal_user"
    },
    LoginName:"sptest1"
}

The "jpath" tag is used to find the map keys and set the values into the struct. The User struct contains fields from three different layers of the JSON document. We only needed to define two structs to pull the data out of the map we needed.

We can also map entire structs the same way a normal unmarshal would work. Just name the fields in the struct to match the field names in the JSON document. The names of the fields in the struct don't need to be in the same case as the fields in the JSON document.

Here is a more complicated example using an anonymous field in our struct:

type NumberFormat struct {
    DecimalSeparator  string `jpath:"userContext.preferenceInfo.numberFormat.decimalSeparator"`
    GroupingSeparator string `jpath:"userContext.preferenceInfo.numberFormat.groupingSeparator"`
    GroupPattern      string `jpath:"userContext.preferenceInfo.numberFormat.groupPattern"`
}

type User struct {
    LoginName string `jpath:"loginName"`
    NumberFormat
}

docScript := []byte(document)
docMap := map[string]interface{}{}
json.Unmarshal(docScript, &docMap)

user := User{}
DecodePath(docMap, &user)

fmt.Printf("%#v", user)

If we run this program we get the following output:

mapstructure.User{
    LoginName:"sptest1",
    NumberFormat:mapstructure.NumberFormat{
        DecimalSeparator:".",
        GroupingSeparator:",",
        GroupPattern:"###,##0.##"
    }
}

We can also use an anonymous field pointer:

type User struct {
    LoginName string `jpath:"loginName"`
    *NumberFormat
}

In this case DecodePath will instantiate an object of that type and perform the decode, but only if a mapping can be found.

We now have great control over decoding JSON documents into structs. What happens when the JSON you get back is an array of documents?

DecodeSlicePath

There are times when the web api returns an array of JSON documents:

var document = `[{"name":"bill"},{"name":"lisa"}]`


In this case we need to decode the slice of maps into a slice of objects. We added another function called DecodeSlicePath that does just that:

type NameDoc struct {
    Name string `jpath:"name"`
}

sliceScript := []byte(document)
sliceMap := []map[string]interface{}{}
json.Unmarshal(sliceScript, &sliceMap)

var myslice []NameDoc
DecodeSlicePath(sliceMap, &myslice)

fmt.Printf("%#v", myslice)

Here is the output:

[]mapstructure.NameDoc{
    mapstructure.NameDoc{Name:"bill"},
    mapstructure.NameDoc{Name:"lisa"}
}

The function DecodeSlicePath creates the slice based on the number of maps in the slice and then decodes each JSON document, one at a time.

Conclusion

If it were not for Mitchell I would not have been able to get this to work. His package is brilliant and handles all the real technical issues around decoding maps into structs. The two functions I have built on top of mapstructure provide a nice convenience factor we needed for our project. If you're dealing with some of the same issues, please try out the package.

Concurrency, Goroutines and GOMAXPROCS

Introduction

When new people join the Go-Miami group they always write that they want to learn more about Go's concurrency model. Concurrency seems to be the big buzz word around the language. It was for me when I first started hearing about Go. It was Rob Pike's Go Concurrency Patterns video that finally convinced me I needed to learn this language.

To understand how Go makes writing concurrent programs easier and less prone to errors, we first need to understand what a concurrent program is and the problems that result from such programs. I will not be talking about CSP (Communicating Sequential Processes) in this post, which is the basis for Go's implementation of channels. This post will focus on what a concurrent program is, the role that goroutines play and how the GOMAXPROCS environment variable and runtime function affects the behavior of the Go runtime and the programs we write.


Processes and Threads

When we run an application, like the browser I am using to write this post, a process is created by the operating system for the application. The job of the process is to act like a container for all the resources the application uses and maintains as it runs. These resources include things like a memory address space, handles to files, devices and threads.

A thread is a path of execution that is scheduled by the operating system to execute the application's code against a processor or core. A process starts out with one thread, the main thread, and when that thread terminates the process terminates. This is because the main thread is the origin for the application. The main thread can then in turn launch more threads and those threads can launch even more threads. Once we have more than one thread running in our program, we have a concurrent program.

The operating system schedules a thread to run on an available processor or core regardless of which process the thread belongs to. Each operating system has its own algorithms that make these decisions and it is best for us to write concurrent programs that are not specific to one algorithm or the other. Plus these algorithms change with every new release of an operating system, so it is a dangerous game to play.

Goroutines and Parallelism

Goroutines are functions that we request the Go runtime goroutine scheduler to execute concurrently. We can consider that the main function is executing on a goroutine, however the Go runtime does not start that goroutine. Goroutines are considered to be lightweight because they use little memory and resources, plus their initial stack size is small. Prior to version 1.2 the stack size started at 4K and now it starts at 8K. The stack has the ability to grow and shrink as needed.

The operating system schedules threads to run against available processors and the Go runtime schedules goroutines to run against available threads from the scheduler's thread pool. By default the scheduler's thread pool is allocated with only one thread. Even with one thread, hundreds of thousands of goroutines can be scheduled to run concurrently. It is not recommended to change the size of the scheduler's thread pool, but if you want to run goroutines in parallel, Go provides the ability to change the size of the scheduler's thread pool via the GOMAXPROCS environment variable or runtime function.

Parallelism is when two or more threads are executing code simultaneously against different processors or cores. We can achieve running goroutines in parallel as long as we are running on a machine with multiple processors or cores and we add more than one thread to the scheduler's thread pool. If we add more threads to the scheduler's thread pool but run our program on a single CPU machine, our goroutines will run against multiple threads but will be running concurrently against the single CPU, not in parallel.

Concurrency Example

Let's build a small program that shows Go running goroutines concurrently. In this example we are using the default setting for the scheduler's thread pool, which is one thread:

package main

import (
    "fmt"
    "time"
)

func main() {
    fmt.Println("Starting Go Routines")
    go func() {
        for char := 'a'; char < 'a'+26; char++ {
            fmt.Printf("%c ", char)
        }
    }()

    go func() {
        for number := 1; number < 27; number++ {
            fmt.Printf("%d ", number)
        }
    }()

    fmt.Println("Waiting To Finish")
    time.Sleep(1 * time.Second)
    fmt.Println("\nTerminating Program")
}

This program launches two goroutines by using the keyword go and declaring two anonymous functions. The first goroutine displays the English alphabet using lowercase letters and the second goroutine displays numbers 1 through 26. When we run this program we get the following output:

Starting Go Routines
Waiting To Finish
a b c d e f g h i j k l m n o p q r s t u v w x y z 1 2 3 4 5 6 7 8 9 10 11
12 13 14 15 16 17 18 19 20 21 22 23 24 25 26
Terminating Program

When we look at the output we can see that the code was run concurrently. Once the two goroutines are launched, the main goroutine is put to sleep for 1 second. We need to do this because once the main goroutine terminates, the program terminates. We want to give enough time for the two goroutines to complete their work.

We can see that the first goroutine completes displaying all 26 letters and then the second goroutine gets a turn to display all 26 numbers. Because it takes less than a microsecond for the first goroutine to complete its work, we don't see the scheduler interrupt the first goroutine before it finishes its work. We can give a reason to the scheduler to swap the goroutines by putting a sleep into the first goroutine:

package main

import (
    "fmt"
    "time"
)

func main() {
    fmt.Println("Starting Go Routines")
    go func() {
        time.Sleep(1 * time.Microsecond)
        for char := 'a'; char < 'a'+26; char++ {
            fmt.Printf("%c ", char)
        }
    }()

    go func() {
        for number := 1; number < 27; number++ {
            fmt.Printf("%d ", number)
        }
    }()

    fmt.Println("Waiting To Finish")
    time.Sleep(1 * time.Second)
    fmt.Println("\nTerminating Program")
}

This time we add a microsecond of sleep in the first goroutine as soon as it starts. This is enough to cause the scheduler to swap the two goroutines:

Starting Go Routines
Waiting To Finish
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 a
b c d e f g h i j k l m n o p q r s t u v w x y z
Terminating Program

This time the numbers display first and then the letters. A microsecond of sleep is enough to cause the scheduler to stop running the first goroutine and let the second goroutine do its thing.

Parallel Example

In our past two examples the goroutines were running concurrently, but not in parallel. Let's make a change to the code to allow the goroutines to run in parallel. All we need to do is change the default size of the scheduler's thread pool to use two threads:

package main

import (
    "fmt"
    "runtime"
    "time"
)

func main() {
    runtime.GOMAXPROCS(2)

    fmt.Println("Starting Go Routines")
    go func() {
        for char := 'a'; char < 'a'+26; char++ {
            fmt.Printf("%c ", char)
        }
    }()

    go func() {
        for number := 1; number < 27; number++ {
            fmt.Printf("%d ", number)
        }
    }()

    fmt.Println("Waiting To Finish")
    time.Sleep(1 * time.Second)
    fmt.Println("\nTerminating Program")
}

Here is the output for the program:

Starting Go Routines
Waiting To Finish
a b 1 2 3 4 c d e f 5 g h 6 i 7 j 8 k 9 10 11 12 l m n o p q 13 r s 14
t 15 u v 16 w 17 x y 18 z 19 20 21 22 23 24 25 26
Terminating Program

Every time we run the program we are going to get different results. The scheduler does not behave exactly the same for each and every run. We can see that the goroutines are truly running in parallel. Both goroutines start running immediately and you can see them both competing for standard out to display their results.

Conclusion

Just because we can change the size of the scheduler's thread pool, doesn't mean we should. There is a reason the Go team has set the defaults to the runtime the way they did, especially the default for the scheduler's thread pool. Just know that arbitrarily adding threads to the scheduler's thread pool and running goroutines in parallel will not necessarily provide better performance for your programs. Always profile and benchmark your programs and make sure the Go runtime configuration is only changed if absolutely required.

The problem with building concurrency into our applications is eventually our goroutines are going to attempt to access the same resources, possibly at the same time. Read and write operations against a shared resource must always be atomic. In other words, reads and writes must happen by one goroutine at a time or else we create race conditions in our programs. To learn more about race conditions read my post.

Channels are the way in Go we write safe and elegant concurrent programs that eliminate race conditions and make writing concurrent programs fun again. Now that we know how goroutines work, are scheduled and can be made to run in parallel, channels are the next thing we need to learn.

The Nature Of Channels In Go

Introduction

In my last post called Concurrency, Goroutines and GOMAXPROCS, I set the stage for talking about channels. We discussed what concurrency was and how goroutines played a role. With that foundation in hand, we can now understand the nature of channels and how they can be used to synchronize goroutines to share resources in a safe, less error prone and fun way.

What Are Channels

Channels are type safe message queues that have the intelligence to control the behavior of any goroutine attempting to read or write to them. A channel acts as a conduit between two goroutines and will synchronize the exchange of any resource that is passed through it. It is the channel's ability to control the goroutines' interaction that creates the synchronization mechanism. When a channel is created with no capacity for its queue, it is called an unbuffered channel. In turn, a channel created with capacity for its queue is called a buffered channel.

To understand what the synchronization behavior will be for any goroutine interacting with a channel, we need to know the type and state of the channel. The scenarios are a bit different depending on whether we are using an unbuffered or buffered channel, so let's talk about each one independently.

Unbuffered Channels

Unbuffered channels have no capacity and therefore require both goroutines to be ready to make any exchange. When a goroutine attempts to write a resource to an unbuffered channel and there is no goroutine waiting to receive the resource, the channel will lock the goroutine and make it wait. When a goroutine attempts to read from an unbuffered channel, and there is no goroutine waiting to send a resource, the channel will lock the goroutine and make it wait.


In the diagram above, we see an example of two goroutines making an exchange using an unbuffered channel. In step 1, the two goroutines approach the channel and then in step 2, the goroutine on the left sticks his hand into the channel or performs a write. At this point, that goroutine is locked in the channel until the exchange is complete. Then in step 3, the goroutine on the right places his hand into the channel or performs a read. That goroutine is also locked in the channel until the exchange is complete. In steps 4 and 5 the exchange is made and finally in step 6, both goroutines are free to remove their hands and go on their way.

Synchronization is inherent in the interaction between the write and the read. One can not happen without the other. The nature of an unbuffered channel is synchronization.

Buffered Channels

Buffered channels have capacity and therefore can behave a bit differently. When a goroutine attempts to write a resource to a buffered channel and the channel's queue is full, the channel will lock the goroutine and make it wait until a buffer becomes available. If there is room in the queue, the write can take place immediately and the goroutine can move on. When a goroutine attempts to read from a buffered channel and the buffered channel's queue is empty, the channel will lock the goroutine and make it wait until a resource has been queued.


In the diagram above, we see an example of two goroutines adding and removing items from a buffered channel independently. In step 1, the goroutine on the right is removing a resource from the channel or performing a read. In step 2, the goroutine on the right can remove the resource independent of the goroutine on the left adding a new resource to the channel. In step 3, both goroutines are adding and removing a resource from the channel at the same time and in step 4 both goroutines are done.

Synchronization still occurs within the interactions of reads and writes, however when the queue has buffer availability, the writes will not lock. Reads will not lock when there is something to read from the queue. Consequently, if the buffer is full or if there is nothing to retrieve, a buffered channel will behave very much like an unbuffered channel.

Relay RaceIf you have ever watched a track meet you may have seen a relay race. In a relay race there are four athletes who run around the track as fast as they can as a team. The key to the race is that only one runner per team can be running at a time. The runner with the baton is the only one allowed to run, and the exchange of the baton from runner to runner is critical to winning the race.

Let's build a sample program that uses four goroutines and a channel to simulate a relay race. The goroutines will be the runners in the race and the channel will be used to exchange the baton between each runner. This is a classic example of how resources can be passed between goroutines and how a channel controls the behavior of the goroutines that interact with it.

package main

import (
    "fmt"
    "time"
)

func main() {
    // Create an unbuffered channel
    baton := make(chan int)

    // First runner to his mark
    go Runner(baton)

    // Start the race
    baton <- 1

    // Give the runners time to race
    time.Sleep(500 * time.Millisecond)
}

func Runner(baton chan int) {
    var newRunner int

    // Wait to receive the baton
    runner := <-baton

    // Start running around the track
    fmt.Printf("Runner %d Running With Baton\n", runner)

    // New runner to the line
    if runner != 4 {
        newRunner = runner + 1
        fmt.Printf("Runner %d To The Line\n", newRunner)
        go Runner(baton)
    }

    // Running around the track
    time.Sleep(100 * time.Millisecond)

    // Is the race over
    if runner == 4 {
        fmt.Printf("Runner %d Finished, Race Over\n", runner)
        return
    }

    // Exchange the baton for the next runner
    fmt.Printf("Runner %d Exchange With Runner %d\n", runner, newRunner)
    baton <- newRunner
}

When we run the sample program we get the following output:

Runner 1 Running With Baton
Runner 2 To The Line
Runner 1 Exchange With Runner 2
Runner 2 Running With Baton
Runner 3 To The Line
Runner 2 Exchange With Runner 3
Runner 3 Running With Baton
Runner 4 To The Line
Runner 3 Exchange With Runner 4
Runner 4 Running With Baton
Runner 4 Finished, Race Over

The program starts out creating an unbuffered channel:

// Create an unbuffered channel
baton := make(chan int)

Using an unbuffered channel forces the goroutines to be ready at the same time to make the exchange of the baton. This need for both goroutines to be ready creates the synchronization.

If we look at the rest of the main function, we see a goroutine created for the first runner in the race and then the baton is handed off to that runner. The baton in this example is an integer value that is being passed between each runner. The sample is using a sleep to let the race complete before main terminates and ends the program:

// Create an unbuffered channel
baton := make(chan int)

// First runner to his mark
go Runner(baton)

// Start the race
baton <- 1

// Give the runners time to race
time.Sleep(500 * time.Millisecond)

If we just focus on the core parts of the Runner function, we can see how the baton exchange takes place until the race is over. The Runner function is launched as a goroutine for each runner in the race. Every time a new goroutine is launched, the channel is passed into the goroutine. The channel is the conduit for the exchange, so the current runner and the one waiting to go next need to reference the channel:

func Runner(baton chan int)

The first thing each runner does is wait for the baton exchange. That is simulated with the read on the channel. The read immediately locks the goroutine until the baton is written to the channel. Once the baton is written to the channel, the read will release and the goroutine will simulate the next runner sprinting down the track. If the fourth runner is running, no new runner will enter the race. If we are still in the middle of the race, a new goroutine for the next runner is launched.

// Wait to receive the baton
runner := <-baton

// New runner to the line
if runner != 4 {
    newRunner = runner + 1
    go Runner(baton)
}

Then we sleep to simulate some time it takes for the runner to run around the track. If this is the fourth runner, the goroutine terminates after the sleep and the race is complete. If not, the baton exchange takes place with the write to the channel. There is a goroutine already locked and waiting for this exchange. As soon as the baton is written to the channel, the exchange is made and the race continues:

// Running around the track
time.Sleep(100 * time.Millisecond)

// Is the race over
if runner == 4 {
    return
}

// Exchange the baton for the next runner
baton <- newRunner

Conclusion

The example showcases a real world event, a relay race between runners, being implemented in a way that mimics the actual events. This is one of the beautiful things about channels. The code flows in a way that simulates how these types of exchanges can happen in the real world.

Now that we have an understanding of the nature of unbuffered and buffered channels, we can look at different concurrency patterns we can implement using channels. Concurrency patterns allow us to implement more complex exchanges between goroutines that simulate real world computing problems like semaphores, generators and multiplexers.

Running MongoDB Queries Concurrently With Go

If you are attending GopherCon 2014 or plan to watch the videos once they are released, this article will prepare you for the talk by Gustavo Niemeyer and Steve Francia. It provides a beginner's view of using the Go mgo driver against a MongoDB database.

Introduction

MongoDB supports many different programming languages thanks to a great set of drivers. One such driver is the MongoDB Go driver which is called mgo. This driver has been externally developed by Gustavo Niemeyer from Canonical, and eventually Steve Francia, the head of the drivers team at MongoDB Inc, took notice and offered support. Both Gustavo and Steve will be talking at GopherCon 2014 in April about "Painless Data Storage With MongoDB and Go". The talk centers around the mgo driver and how MongoDB and Go really work well together to build highly scalable and concurrent software.

MongoDB and Go let us build scalable software on many different operating systems and architectures, without the need to install any frameworks or runtime environments. Go programs are native binaries and the Go tooling is constantly improving to create binaries that run as fast as equivalent C programs. That wouldn't mean anything if writing code in Go were as complicated and tedious as writing programs in C. This is where Go really shines because once you get up to speed, writing programs in Go is fast and fun.

In this post I am going to show you how to write a Go program using the mgo driver to connect and run queries concurrently against a MongoDB database. I will break down the sample code and explain a few things that always seem to be a bit confusing to those new to MongoDB and Go.

Sample Program

The sample program connects to a public MongoDB database I have hosted with MongoLab. If you have Go and Bazaar installed on your machine, you can run the program. The program launches ten goroutines that individually query all the records from the buoy_stations collection inside the goinggo database. The records are unmarshaled into native Go types and each goroutine logs the number of documents returned:

// This program provides a sample application for using MongoDB with
// the mgo driver.
package main

import (
    "labix.org/v2/mgo"
    "labix.org/v2/mgo/bson"
    "log"
    "sync"
    "time"
)

const (
    MongoDBHosts = "ds035428.mongolab.com:35428"
    AuthDatabase = "goinggo"
    AuthUserName = "guest"
    AuthPassword = "welcome"
    TestDatabase = "goinggo"
)

type (
    // BuoyCondition contains information for an individual station.
    BuoyCondition struct {
        WindSpeed     float64 `bson:"wind_speed_milehour"`
        WindDirection int     `bson:"wind_direction_degnorth"`
        WindGust      float64 `bson:"gust_wind_speed_milehour"`
    }

    // BuoyLocation contains the buoy's location.
    BuoyLocation struct {
        Type        string    `bson:"type"`
        Coordinates []float64 `bson:"coordinates"`
    }

    // BuoyStation contains information for an individual station.
    BuoyStation struct {
        ID        bson.ObjectId `bson:"_id,omitempty"`
        StationId string        `bson:"station_id"`
        Name      string        `bson:"name"`
        LocDesc   string        `bson:"location_desc"`
        Condition BuoyCondition `bson:"condition"`
        Location  BuoyLocation  `bson:"location"`
    }
)

// main is the entry point for the application.
func main() {
    // We need this object to establish a session to our MongoDB.
    mongoDBDialInfo := &mgo.DialInfo{
        Addrs:    []string{MongoDBHosts},
        Timeout:  60 * time.Second,
        Database: AuthDatabase,
        Username: AuthUserName,
        Password: AuthPassword,
    }

    // Create a session which maintains a pool of socket connections
    // to our MongoDB.
    mongoSession, err := mgo.DialWithInfo(mongoDBDialInfo)
    if err != nil {
        log.Fatalf("CreateSession: %s\n", err)
    }

    // Reads may not be entirely up-to-date, but they will always see the
    // history of changes moving forward, the data read will be consistent
    // across sequential queries in the same session, and modifications made
    // within the session will be observed in following queries (read-your-writes).
    // http://godoc.org/labix.org/v2/mgo#Session.SetMode
    mongoSession.SetMode(mgo.Monotonic, true)

    // Create a wait group to manage the goroutines.
    var waitGroup sync.WaitGroup

    // Perform 10 concurrent queries against the database.
    waitGroup.Add(10)
    for query := 0; query < 10; query++ {
        go RunQuery(query, &waitGroup, mongoSession)
    }

    // Wait for all the queries to complete.
    waitGroup.Wait()
    log.Println("All Queries Completed")
}

// RunQuery is a function that is launched as a goroutine to perform
// the MongoDB work.
func RunQuery(query int, waitGroup *sync.WaitGroup, mongoSession *mgo.Session) {
    // Decrement the wait group count so the program knows this
    // has been completed once the goroutine exits.
    defer waitGroup.Done()

    // Request a socket connection from the session to process our query.
    // Close the session when the goroutine exits and put the connection back
    // into the pool.
    sessionCopy := mongoSession.Copy()
    defer sessionCopy.Close()

    // Get a collection to execute the query against.
    collection := sessionCopy.DB(TestDatabase).C("buoy_stations")

    log.Printf("RunQuery : %d : Executing\n", query)

    // Retrieve the list of stations.
    var buoyStations []BuoyStation
    err := collection.Find(nil).All(&buoyStations)
    if err != nil {
        log.Printf("RunQuery : ERROR : %s\n", err)
        return
    }

    log.Printf("RunQuery : %d : Count[%d]\n", query, len(buoyStations))
}

Now that you have seen the entire program, we can break it down. Let's start with the type structures that are defined in the beginning:

type (
    // BuoyCondition contains information for an individual station.
    BuoyCondition struct {
        WindSpeed     float64 `bson:"wind_speed_milehour"`
        WindDirection int     `bson:"wind_direction_degnorth"`
        WindGust      float64 `bson:"gust_wind_speed_milehour"`
    }

    // BuoyLocation contains the buoy's location.
    BuoyLocation struct {
        Type        string    `bson:"type"`
        Coordinates []float64 `bson:"coordinates"`
    }

    // BuoyStation contains information for an individual station.
    BuoyStation struct {
        ID        bson.ObjectId `bson:"_id,omitempty"`
        StationId string        `bson:"station_id"`
        Name      string        `bson:"name"`
        LocDesc   string        `bson:"location_desc"`
        Condition BuoyCondition `bson:"condition"`
        Location  BuoyLocation  `bson:"location"`
    }
)

The structures represent the data that we are going to retrieve and unmarshal from our query. BuoyStation represents the main document and BuoyCondition and BuoyLocation are embedded documents. The mgo driver makes it easy to use native types that represent the documents stored in our collections by using tags. With the tags, we can control how the mgo driver unmarshals the returned documents into our native Go structures.

Now let's look at how we connect to a MongoDB database using mgo:

// We need this object to establish a session to our MongoDB.
mongoDBDialInfo := &mgo.DialInfo{
    Addrs:    []string{MongoDBHosts},
    Timeout:  60 * time.Second,
    Database: AuthDatabase,
    Username: AuthUserName,
    Password: AuthPassword,
}

// Create a session which maintains a pool of socket connections
// to our MongoDB.
mongoSession, err := mgo.DialWithInfo(mongoDBDialInfo)
if err != nil {
    log.Fatalf("CreateSession: %s\n", err)
}

We start by creating a mgo.DialInfo object. We can connect to a single MongoDB database or a replica set. Connecting to a replica set can be accomplished by providing multiple addresses in the Addrs field or with a single address. If we are using a single host address to connect to a replica set, the mgo driver will learn about any remaining hosts from the replica set member we connect to. In our case we are connecting to a single host.

After providing the host, we specify the database, username and password we need for authentication. One thing to note is that the database we authenticate against may not necessarily be the database our application needs to access. Some applications authenticate against the admin database and then use other databases depending on their configuration. The mgo driver supports these types of configurations very well.

Next we use the mgo.DialWithInfo method to create a mgo.Session object. The mgo.Session object maintains a pool of connections to the MongoDB host. We can create multiple sessions with different modes and settings to support different aspects of our applications. We can specify if the session is to use a Strong or Monotonic mode, and we can set the safe level as well as other settings.

The next line of code sets the mode for the session. There are three modes that can be set, Strong, Monotonic and Eventual. Each mode sets a specific consistency for how reads and writes are performed. For more information on the differences between each mode, check out the documentation for the mgo driver.

We are using Monotonic mode which provides reads that may not entirely be up to date, but the reads will always see the history of changes moving forward. In this mode reads occur against secondary members of our replica sets until a write happens. Once a write happens, the primary member is used. The benefit is some distribution of the reading load can take place against the secondaries when possible.

With the session set and ready to go, next we execute multiple queries concurrently:

// Create a wait group to manage the goroutines.
var waitGroup sync.WaitGroup

// Perform 10 concurrent queries against the database.
waitGroup.Add(10)
for query := 0; query < 10; query++ {
    go RunQuery(query, &waitGroup, mongoSession)
}

// Wait for all the queries to complete.
waitGroup.Wait()
log.Println("All Queries Completed")

This code is classic Go concurrency in action. First we create a sync.WaitGroup object so we can keep track of all the goroutines we are going to launch as they complete their work. Then we immediately set the count of the sync.WaitGroup object to ten and use a for loop to launch ten goroutines using the RunQuery function. The keyword go is used to launch a function or method to run concurrently. The final line of code calls the Wait method on the sync.WaitGroup object which locks the main goroutine until everything is done processing.

To learn more about Go concurrency and better understand how this particular piece of code works, check out these posts on concurrency and channels.

Now let's look at the RunQuery function and see how to properly use the mgo.Session object to acquire a connection and execute a query:

// Decrement the wait group count so the program knows this
// has been completed once the goroutine exits.
defer waitGroup.Done()

// Request a socket connection from the session to process our query.
// Close the session when the goroutine exits and put the connection back
// into the pool.
sessionCopy := mongoSession.Copy()
defer sessionCopy.Close()

The very first thing we do inside of the RunQuery function is to defer the execution of the Done method on the sync.WaitGroup object. The defer keyword will postpone the execution of the Done method to take place once the RunQuery function returns. This will guarantee that the sync.WaitGroup object's count will decrement even if the goroutine terminates early because of a panic.

Next we make a copy of the session we created in the main goroutine. Each goroutine needs to create a copy of the session so they each obtain their own socket without serializing their calls with the other goroutines. Again, we use the defer keyword to postpone and guarantee the execution of the Close method on the session once the RunQuery function returns. Closing the session returns the socket back to the main pool, so this is very important.

// Get a collection to execute the query against.
collection := sessionCopy.DB(TestDatabase).C("buoy_stations")

log.Printf("RunQuery : %d : Executing\n", query)

// Retrieve the list of stations.
var buoyStations []BuoyStation
err := collection.Find(nil).All(&buoyStations)
if err != nil {
    log.Printf("RunQuery : ERROR : %s\n", err)
    return
}

log.Printf("RunQuery : %d : Count[%d]\n", query, len(buoyStations))

To execute a query we need a mgo.Collection object. We can get a mgo.Collection object through the mgo.Session object by specifying the name of the database and then the collection. Using the mgo.Collection object, we can perform a Find and retrieve all the documents from the collection. The All function will unmarshal the response into our slice of BuoyStation objects. A slice is a dynamic array in Go. Be aware that the All method will load all the data in memory at once. For large collections it is better to use the Iter method instead. Finally, we just log the number of BuoyStation objects that are returned.

Conclusion

The example shows how to use Go concurrency to launch multiple goroutines that can execute queries against a MongoDB database independently. Once a session is established, the mgo driver exposes all of the MongoDB functionality and handles the unmarshaling of BSON documents into Go native types.

MongoDB can handle a large number of concurrent requests when you architect your MongoDB databases and collections with concurrency in mind. Go and the mgo driver are perfectly aligned to push MongoDB to its limits and build software that can take advantage of all the computing power that is available.

The mgo driver can help you distribute your queries across a MongoDB replica set. The mgo driver gives you the ability to create and configure your sessions and take advantage of MongoDB's mode and configuration options. The mode you use for your session, how and where the cluster and load balancer are set up, and the type of work being processed by MongoDB at the time of those queries, all play an important role in the actual distribution.

The mgo driver provides a safe way to leverage Go's concurrency support and you have the flexibility to execute queries concurrently and in parallel. It is best to take the time to learn a bit about MongoDB replica sets and load balancer configuration. Then make sure the load balancer is behaving as expected under the different types of load your application can produce.

Now is a great time to see what MongoDB and Go can do for your software applications, web services and service platforms. Both technologies are being battle tested everyday by all types of companies, solving all types of business and computing problems.

Web Form Validation And Localization In Go

Introduction

As I improve my knowledge and framework for a Go based web service I am building, I continue to go back and enhance my Beego Sample App. Something I just added recently was providing localized messages for validation errors. I was fortunate to find Nick Snyder's go-i18n package. Nick's package made it easy to support multiple languages for the Go web service I am writing.

Abstracting go-i18n

The go-i18n package is simple to use and you can use it to read files or strings that contain all the messages you want to localize. It has some nice features including variable substitution and support for handling plurals for each individual locale. Nick has documentation for his package, so I am going to show you how I abstracted and integrated go-i18n into the Beego sample app.

I decided I didn't want to use files to store the messages, but to create raw string literal variables instead. The less I had to worry about managing external resources the better. With that being said, I built a simple package that abstracted the support I needed. Luckily go-i18n supports passing in a string that contains the JSON document with the message data:

// en-US.go provides the localized messages for English in the United States.
package localize

var En_US = `[
    {
        "id": "invalid_credentials",
        "translation": "Invalid Credentials were supplied."
    },
    {
        "id": "application_error",
        "translation": "An Application Error has occured."
    },
    {
        "id": "invalid_station_id",
        "translation": "Invalid Station Id Or Missing"
    }
]`

I am just using simple messages right now, but as you can see, the variable En_US is defined and assigned a JSON document with the messages I need localized. The go-i18n package also lets you define messages like this:

[
    {
        "id": "d_days",
        "translation": {
            "one": "{{.Count}} day",
            "other": "{{.Count}} days"
        }
    }
]

In this sample, the translation has one message for the singular case and one for the plural case. There is also support for using variable substitution thanks to template support.

Here is the localize package that provides support for the web service:

// The localize package provides support for handling different languages
// and cultures.
package localize

import (
    "encoding/json"
    "fmt"

    "github.com/nicksnyder/go-i18n/i18n"
    "github.com/nicksnyder/go-i18n/i18n/locale"
    "github.com/nicksnyder/go-i18n/i18n/translation"
)

var (
    // T is the translate function for the specified user
    // locale and default locale specified during the load.
    T i18n.TranslateFunc
)

// Init initializes the local environment.
func Init(defaultLocale string) error {
    switch defaultLocale {
    case "en-US":
        LoadJSON(defaultLocale, En_US)
    default:
        return fmt.Errorf("Unsupported Locale: %s", defaultLocale)
    }

    // Obtain the default translation function for use.
    var err error
    T, err = NewTranslation(defaultLocale)
    if err != nil {
        return err
    }

    return nil
}

// NewTranslation obtains a translation function object for the
// specified locales.
func NewTranslation(userLocale string) (t i18n.TranslateFunc, err error) {
    t, err = i18n.Tfunc(userLocale)
    if err != nil {
        return t, err
    }

    return t, err
}

// LoadJSON takes a json document of translations and manually
// loads them into the system.
func LoadJSON(userLocale string, translationDocument string) error {
    tranDocuments := []map[string]interface{}{}
    err := json.Unmarshal([]byte(translationDocument), &tranDocuments)
    if err != nil {
        return err
    }

    for _, tranDocument := range tranDocuments {
        tran, err := translation.NewTranslation(tranDocument)
        if err != nil {
            return err
        }

        i18n.AddTranslation(locale.MustNew(userLocale), tran)
    }

    return nil
}

The Init function creates the default locale for the application. Currently the Beego Sample App only supports English for the United States. Eventually, we can add cases for the other locales. Obviously this can all be done through configuration in the future.

The Init function uses the LoadJSON function to load the go-i18n datastore with the internal messages for the default locale. Later on we can use the LoadJSON function again to load more JSON documents for the same or different locales.

The Init function also uses the NewTranslation function to obtain a new i18n.TranslateFunc object for the default locale. This object is used to retrieve messages from the go-i18n datastore. If we have a scenario where the default locale is not valid, we can use the NewTranslation function at any time to obtain an object for the locale we need.

Beego Integration

To see how I integrated the go-i18n package into the sample app, we need to look at the controller:

// RetrieveStation handles the example 2 tab.
func (buoyController *BuoyController) RetrieveStation() {
    params := struct {
        StationId string `form:"stationId" error:"invalid_station_id" valid:"Required"`
    }{}

    if buoyController.ParseAndValidate(&params) == false {
        return
    }

    ...
}

As discussed in my previous post about the Beego Sample App, we define a struct with tags that are used by the Beego validation module. I added support for defining the error to be returned when validation fails by providing a new tag called error. In this case the error tag contains the id of the localized message we want to return. The ParseAndValidate function will handle the rest:

// ParseAndValidate will run the params through the validation framework and then
// respond with the specified localized or provided message.
func (baseController *BaseController) ParseAndValidate(params interface{}) bool {
    err := baseController.ParseForm(params)
    if err != nil {
        baseController.ServeError(err)
        return false
    }

    valid := validation.Validation{}
    ok, err := valid.Valid(params)
    if err != nil {
        baseController.ServeError(err)
        return false
    }

    if ok == false {
        // Build a map of the error messages for each field
        messages2 := map[string]string{}
        val := reflect.ValueOf(params).Elem()
        for i := 0; i < val.NumField(); i++ {
            // Look for an error tag in the field
            typeField := val.Type().Field(i)
            tag := typeField.Tag
            tagValue := tag.Get("error")

            // Was there an error tag
            if tagValue != "" {
                messages2[typeField.Name] = tagValue
            }
        }

        // Build the error response
        errors := []string{}
        for _, err := range valid.Errors {
            // Match an error from the validation framework errors
            // to a field name we have a mapping for
            message, ok := messages2[err.Field]
            if ok == true {
                // Use a localized message if one exists
                errors = append(errors, localize.T(message))
                continue
            }

            // No match, so use the message as is
            errors = append(errors, err.Message)
        }

        baseController.ServeValidationErrors(errors)
        return false
    }

    return true
}

When the Beego validation module finds a problem, the real work begins. The function uses reflection to find the error tag on any of the fields in the params struct. If an error tag is found, the id of the localized message is stored along with the field name. Then the function ranges over all the errors that the Beego validation module found and, if an error tag existed for that field, uses the id to retrieve the localized message.

Testing

If we run the run_endpoint_test.sh shell script, which can be found in the zscripts folder, we can see the localized message returned in the last test:

=== RUN TestMissingStation-8
TRACE: 2014/03/07 17:13:50 mongo.go:186: Unknown : CopySession : Started : UseSession[monotonic]
TRACE: 2014/03/07 17:13:50 mongo.go:200: Unknown : CopySession : Completed
TRACE: 2014/03/07 17:13:50 baseController.go:52: Unknown : BaseController.Prepare : Info : UserId[Unknown] Path[/buoy/station/420]
TRACE: 2014/03/07 17:13:50 baseController.go:64: Unknown : Finish : Completed : /buoy/station/420
TRACE: 2014/03/07 17:13:50 mongo.go:240: Unknown : CloseSession : Started
TRACE: 2014/03/07 17:13:50 mongo.go:244: Unknown : CloseSession : Completed
TRACE: 2014/03/07 17:13:50 buoyEndpoints_test.go:103: testing : TestStation : Info : Code[409]
{
  "errors": [
    "Invalid Station Id Or Missing"
  ]
}

Subject: Test Station Endpoint
  Status Code Should Be 409 ✔
  The Result Should Not Be Empty ✔
  There Should Be An Error In The Result ✔

9 assertions thus far

--- PASS: TestMissingStation-8 (0.00 seconds)

The last test is designed to validate that the localized message is returned.

Conclusion

The Beego framework has been great for developing my Go web service. It has the right amount of framework and modules, like the validation module, when you need it. The ability to bring in a package like go-i18n and integrate it so easily is another big win for Beego.

If you need to localize your Go application, take a look at go-i18n and see if it can work for you.


Exported/Unexported Identifiers In Go

Introduction

One of the first things I learned about in Go was using an uppercase or lowercase letter as the first letter when naming a type, variable or function. It was explained that when the first letter was capitalized, the identifier was public to any piece of code that wanted to use it. When the first letter was lowercase, the identifier was private and could only be accessed within the package it was defined in.

I have come to realize that the use of the language public and private is really not accurate. It is more accurate to say an identifier is exported or unexported from a package. When an identifier is exported from a package, it means the identifier can be directly accessed from any other package in the code base. When an identifier is unexported from a package, it can't be directly accessed from any other package. What we will soon learn is that just because an identifier is unexported, it doesn't mean it can't be accessed outside of its package, it just means it can't be accessed directly.

Direct Identifier AccessLet's start with a simple example of an exported type:


package counters

// AlertCounter is an exported type that
// contains an integer counter for alerts.
type AlertCounter int

Here we define a user-defined type called AlertCounter inside the package counters. The type is declared using the built-in type int, but AlertCounter is considered a unique and distinct type in Go. We are using the capital letter 'A' as the first letter for the name of the type, which means this type is exported and accessible by other packages.

Now let's access our AlertCounter type in the main program:


package main

import (
    "fmt"
    "test/counters"
)

func main() {
    // Create a variable of the exported type and
    // initialize the value to 10.
    counter := counters.AlertCounter(10)

    fmt.Printf("Counter: %d\n", counter)
}


Since the AlertCounter type has been exported, this code builds fine. When we run the program we get the value of 10.

Now let's change the exported AlertCounter type to be an unexported type by changing the name to alertCounter and see what happens:


package counters

// alertCounter is an unexported type that
// contains an integer counter for alerts.
type alertCounter int


package main

import (
    "fmt"
    "test/counters"
)

func main() {
    // Attempt to create a variable of the unexported type
    // and initialize the value to 10. This will NOT compile.
    counter := counters.alertCounter(10)

    fmt.Printf("Counter: %d\n", counter)
}

After making the changes to the counters and main packages, we attempt to build the code again and get the following compiler error:

./main.go:11: cannot refer to unexported name counters.alertCounter
./main.go:11: undefined: counters.alertCounter

As expected we can't directly access the alertCounter type because it is unexported. Even though we can't access the alertCounter type directly anymore, there is a way for us to create and use variables of this unexported type in the main package:


package counters

// alertCounter is an unexported type that
// contains an integer counter for alerts.
type alertCounter int

// NewAlertCounter creates and returns objects of
// the unexported type alertCounter.
func NewAlertCounter(value int) alertCounter {
    return alertCounter(value)
}



package main

import (
    "fmt"
    "test/counters"
)

func main() {
    // Create a variable of the unexported type using the
    // exported NewAlertCounter function from the package counters.
    counter := counters.NewAlertCounter(10)

    fmt.Printf("Counter: %d\n", counter)
}

In the counters package we add an exported function called NewAlertCounter. This function creates and returns objects of the alertCounter type. In the main program we use this function and the programming logic stays the same.

What this example shows is that an identifier that is defined as unexported can still be accessed and used by other packages. It just can't be accessed directly.

Using Structs

Defining exported and unexported members for our structs works in the exact same way. If a field or method name starts with a capital letter, the member is exported and is accessible outside of the package. If a field or method starts with a lowercase letter, the member is unexported and does not have accessibility outside of the package.

Here is an example of a struct with both exported and unexported fields. The main program has a compiler error because it attempts to access the unexported field directly:


package animals

// Dog represents information about dogs.
type Dog struct {
    Name         string
    BarkStrength int
    age          int
}


package main

import (
    "fmt"
    "test/animals"
)

func main() {
    // Create an object of type Dog from the animals package.
    // This will NOT compile.
    dog := animals.Dog{
        Name:         "Chole",
        BarkStrength: 10,
        age:          5,
    }

    fmt.Printf("Counter: %#v\n", dog)
}

Here is the error from the compiler:

./main.go:14: unknown animals.Dog field 'age' in struct literal

As expected the compiler does not let the main program access the age field directly.

Let's look at an interesting example of embedding. We start with two user-defined types where one type embeds the other:


package animals

// Animal represents information about all animals.
type Animal struct {
    Name string
    Age  int
}

// Dog represents information about dogs.
type Dog struct {
    Animal
    BarkStrength int
}

We added a new exported type called Animal with two exported fields called Name and Age. Then we embed the Animal type into the exported Dog type. This means that the Dog type now has three exported fields, Name, Age and BarkStrength.

Let's look at the implementation of the main program:


package main

import (
    "fmt"
    "test/animals"
)

func main() {
    // Create an object of type Dog from the animals package.
    dog := animals.Dog{
        Animal: animals.Animal{
            Name: "Chole",
            Age:  1,
        },
        BarkStrength: 10,
    }

    fmt.Printf("Counter: %#v\n", dog)
}

In main we use a composite literal to create and initialize an object of the exported Dog type. Then we display the structure and values of the dog object.

To make things more interesting, let's change the Animal type from exported to unexported by changing the first letter of the type's name to a lowercase letter 'a':


package animals

// animal represents information about all animals.
type animal struct {
    Name string
    Age  int
}

// Dog represents information about dogs.
type Dog struct {
    animal
    BarkStrength int
}

The animal type remains embedded in the exported Dog type, but now as an unexported type. We keep the Name and Age fields within the animal type as exported fields.

In the main program we just change the name of the type from Animal to animal:


package main

import (
    "fmt"
    "test/animals"
)

func main() {
    // Create an object of type Dog from the animals package.
    // This will NOT compile.
    dog := animals.Dog{
        animal: animals.animal{
            Name: "Chole",
            Age:  1,
        },
        BarkStrength: 10,
    }

    fmt.Printf("Counter: %#v\n", dog)
}

Once again we have a main program that can't compile because we are trying to access the unexported type animal from inside the composite literal:

./main.go:11: cannot refer to unexported name animals.animal

./main.go:14: unknown animals.Dog field 'animal' in struct literal

We can fix the compiler error by initializing the exported fields from the unexported embedded type outside of the composite literal:


package main

import (
    "fmt"
    "test/animals"
)

func main() {
    // Create an object of type Dog from the animals package.
    dog := animals.Dog{
        BarkStrength: 10,
    }
    dog.Name = "Chole"
    dog.Age = 1

    fmt.Printf("Counter: %#v\n", dog)
}

Now the main program builds again. The exported fields that were embedded into the Dog type from the animal type are accessible, even though they came from an unexported type. The exported fields keep their exported status when the type is embedded.

Standard Library

The exported Time type from the time package is a good example of a type from the standard library that provides no access to its internals:


type Time struct {
    // sec gives the number of seconds elapsed since
    // January 1, year 1 00:00:00 UTC.
    sec int64

    // nsec specifies a non-negative nanosecond
    // offset within the second named by Seconds.
    // It must be in the range [0, 999999999].
    //
    // It is declared as uintptr instead of int32 or uint32
    // to avoid garbage collector aliasing in the case where
    // on a 64-bit system the int32 or uint32 field is written
    // over the low half of a pointer, creating another pointer.
    // TODO(rsc): When the garbage collector is completely
    // precise, change back to int32.
    nsec uintptr

    // loc specifies the Location that should be used to
    // determine the minute, hour, month, day, and year
    // that correspond to this Time.
    // Only the zero Time has a nil Location.
    // In that case it is interpreted to mean UTC.
    loc *Location
}

The language designers are using the unexported fields to keep the internals of the Time type private. They are "hiding" the information so we can't do anything contrary to how the time data works. With that being said, we can still use the unexported fields through the methods and functions the package provides. Without the ability to use and access unexported fields indirectly, we would not be able to copy objects of this type or embed this type into our own user-defined types.

Conclusion

A solid understanding of how to hide and provide access to data from our packages is important. There is a lot more to exporting and unexporting identifiers than meets the eye. In setting out to write this post, I thought a couple of examples would do the trick. Then I realized how involved the topic can get once we start looking at embedding unexported types into our own types.

The choice between exported and unexported identifiers is an implementation detail, one that Go gives us the flexibility to control in our programs. The standard library has great examples of using unexported identifiers to hide and protect data. We looked at one example with the time.Time type. Take the time to look at the standard library to learn more.

Introduction To Numeric Constants In Go

Introduction

One of the more unique features of Go is how the language implements constants. The rules for constants in the language specification are unique to Go. They provide the flexibility Go needs at the compiler level to make the code we write readable and intuitive while still maintaining a type safe language.

This post will attempt to build a foundation for what numeric constants are, how they behave in their simplest form and how best to talk about them. There are a lot of little nuances, terms and concepts that can trip us up. Because of this, the post is going to take things slowly.

So if you are ready to peek under the covers just a bit, roll up your sleeves and let's get started.

Untyped and Typed Numeric Constants

Constants can be declared with or without a type in Go. When we declare literal values in our code, we are actually declaring constants that are both untyped and unnamed.

The following examples show typed and untyped numeric constants that are both named and unnamed:

const untypedInteger       = 12345
const untypedFloatingPoint = 3.141592

const typedInteger int           = 12345
const typedFloatingPoint float64 = 3.141592

The constants on the left hand side of the declaration are named constants and the literal values on the right hand side are unnamed constants.

Kinds of Numeric Constants

Your first instinct may be to think that typed constants use the same type system as variables, but they don’t. Constants have their own implementation for representing the values that we associate with them. Every Go compiler has the flexibility to implement constants as they wish, within a set of mandatory requirements.

When declaring a typed constant, the declared type is used to associate the type’s precision limitations. It does not change how the value is being internally represented. Because the internal representation of constants can be different between the different compilers, it is best to think of constants as having a kind, not a type.

Numeric constants can be one of four kinds: integer, floating-point, complex and rune:

12345    // kind: integer
3.141592 // kind: floating-point
1E6      // kind: floating-point

In the example above, we have declared three numeric constants, one of kind integer and two of kind floating-point. The form of the literal value will determine what kind the constant takes. When the form of the literal value contains a decimal or exponent, the constant is of kind floating-point. When the form does not contain a decimal or exponent, the constant is of kind integer.

Constants Are Mathematically Exact

Regardless of the implementation, constants are always considered to be mathematically exact. This is something that makes constants in Go unique. This is not the case in other languages like C and C++.

Integers can always be represented precisely when there is enough memory to store their entire value. Since the specification requires integer constants to have at least 256 bits of precision, we are safe in saying integer constants are mathematically exact.

To have mathematically exact floating-point numbers, there are different strategies and options that the compiler can employ. The specification does not state how the compiler must do this, it just specifies a set of mandatory requirements that need to be met.

Here are two strategies that the different Go compilers use today to implement mathematically exact floating-point numbers:

One strategy is to represent all floating-point numbers as fractions, and use rational arithmetic on those fractions. This is what go/types does today, and these floating-point numbers never lose precision.

Another strategy is to use floating-point numbers with so much precision that they appear to be exact for all practical purposes. When we use floating-point numbers with several hundred bits, the difference between exact and approximate becomes virtually non-existent. This is what the gc/gccgo compilers do today.

As developers however, it is best not to consider what internal representation is being used by the compiler; it is irrelevant. Just remember that all constants, whether they are declared with or without a type, use the same representation to store their values, which is not the same as variables, and that representation is mathematically exact.

Mathematically Exact Example

Since constants only exist during compilation, it is hard to provide an example that shows constants are mathematically exact. One way is to show how the compiler will let us declare constants of kind integer with values that are much larger than the largest integer types can support.

Here is a program that can be compiled because constants of kind integer are mathematically exact:

package main

import "fmt"

// Much larger value than int64.
const myConst = 9223372036854775808543522345

func main() {
    fmt.Println("Will Compile")
}

If we change the constant to be of type int64, which means the constant is now bound to the precision limitations of a 64 bit integer, the program will no longer compile:

package main

import "fmt"

// Much larger value than int64.
const myConst int64 = 9223372036854775808543522345

func main() {
    fmt.Println("Will NOT Compile")
}

Compiler Error:

./ideal.go:6: constant 9223372036854775808543522345 overflows int64

Here we can see that constants of kind integer can represent very large numbers and why we say they are mathematically exact.

Numeric Constant Declarations

When we declare an untyped numeric constant, there are no type constraints that must be met by the constant value:

const untypedInteger       = 12345    // kind: integer
const untypedFloatingPoint = 3.141592 // kind: floating-point

In each case, the untyped constant on the left hand side of the declaration is given the same kind and value as the constant on the right.

When we declare a typed constant, the constant on the right hand side of the declaration must use a form that is compatible with the declared type on the left:

const typedInteger int           = 12345    // kind: integer
const typedFloatingPoint float64 = 3.141592 // kind: floating-point

The value on the right hand side of the declaration must also fit into the range for the declared type. For instance, this numeric constant declaration is invalid:

const myUint8 uint8 = 1000

uint8 can only represent numbers from 0 to 255. This is what I meant earlier when I said the declared type is used to associate the type’s precision limitations.

Implicit Integer Type Conversions

In Go there are no implicit type conversions between variables. However, the compiler regularly performs implicit conversions between constants and variables.

Let’s start with an implicit integer conversion:

var myInt int = 123

In this example we have constant 123 of kind integer being implicitly converted to a value of type int. Since the form of the constant is not using a decimal point or exponent, the constant takes the kind integer. Constants of kind integer can be implicitly converted into signed and unsigned integer variables of any length as long as no truncation needs to take place.

Constants of kind floating-point can also be implicitly converted into integer variables if the constant uses a form that is compatible with the integer type:

var myInt int = 123.0

We can also perform implicit type conversion assignments without declaring an explicit type for the variable:

var myInt = 123

In this case, the default type of int is used to initialize the variable being assigned with constant 123 of kind integer.

Implicit Floating-Point Type Conversions

Next let’s look at an implicit floating-point conversion:

var myFloat float64 = 0.333

This time the compiler is performing an implicit conversion between constant 0.333 of kind floating-point to a variable of type float64. Since the form of the constant is using a decimal point, the constant takes the kind floating-point. The default type for a variable initialized with a constant of kind floating-point is float64.

The compiler can also perform implicit conversions between constants of kind integer to variables of type float64:

var myFloat float64 = 1

In this example, constant 1 of kind integer is implicitly converted to a variable of type float64.

Kind Promotion

Performing constant arithmetic between other constants and variables is something we do quite often in our programs. It follows the rules for binary operators in the specification. The rule states that operand types must be identical unless the operation involves shifts or untyped constants.

Let’s look at an example of two constants that are multiplied together:

var answer = 3 * 0.333

In this example we perform multiplication between constant 3 of kind integer and constant 0.333 of kind floating-point.

There is a rule in the specification about constant expressions that is specific to this operation:

"Except for shift operations, if the operands of a binary operation are different kinds of untyped constants, ..., the result use the kind that appears later in this list: integer, rune, floating-point, complex."

Based on this rule, the result of the multiplication between these two constants will be a constant of kind floating-point. Kind floating-point is being promoted ahead of kind integer based on the rule.

Numeric Constant Arithmetic

Let’s continue with our multiplication example:

var answer = 3 * 0.333

The result of the multiplication will be a new constant of kind floating-point. That constant is then assigned to the variable answer through an implicit type conversion from kind floating-point to float64.

When we divide numeric constants, the kind of the constants determine how the division is performed.

const third = 1 / 3.0

When one of the two constants is of kind floating-point, the result of the division will also be a constant of kind floating-point. In our example we used a decimal point to represent the constant in the denominator. This follows the rules for kind promotion that we talked about before.

Let’s take the same example but use kind integer in the denominator:

const zero = 1 / 3

This time we are performing division between two constants of kind integer. The result of the division will be a new constant of kind integer. Since dividing 1 by 3 represents a number that is less than 1, the result of this division is constant 0 of kind integer.

Let’s create a typed constant using numeric constant arithmetic:

type Numbers int8

const One Numbers = 1
const Two         = 2 * One

Here we declare a new type called Numbers with a base type of int8. Then we declare constant One with type Numbers and assign constant 1 of kind integer. Next we declare constant Two which is promoted to type Numbers through the multiplication of constant 2 of kind integer and constant One of type Numbers.

The declaration of constant Two shows an example of a constant getting promoted not just to a user-defined type, but a user-defined type associated with a base type.

One Practical Example

Let’s look at one practical example right from the standard library. The time package declares this type and set of constants:

type Duration int64

const (
    Nanosecond Duration = 1
    Microsecond         = 1000 * Nanosecond
    Millisecond         = 1000 * Microsecond
    Second              = 1000 * Millisecond
)

All of the constants declared above are constants of type Duration which have a base type of int64. Here we are declaring typed constants using constant arithmetic between constants that are typed and untyped.

Since the compiler will perform implicit conversions for constants, we can write code in Go like this:

package main

import (
    "fmt"
    "time"
)

const fiveSeconds = 5 * time.Second

func main() {
    now := time.Now()
    lessFiveNanoseconds := now.Add(-5)
    lessFiveSeconds := now.Add(-fiveSeconds)

    fmt.Printf("Now     : %v\n", now)
    fmt.Printf("Nano    : %v\n", lessFiveNanoseconds)
    fmt.Printf("Seconds : %v\n", lessFiveSeconds)
}

Output:

Now     : 2014-03-27 13:30:49.111038384 -0400 EDT
Nano    : 2014-03-27 13:30:49.111038379 -0400 EDT
Seconds : 2014-03-27 13:30:44.111038384 -0400 EDT

The power of constants is exhibited with the method calls to Add. Let’s look at the definition of the Add method for the receiver type Time:

func (t Time) Add(d Duration) Time

The Add method accepts a single parameter of type Duration. Let’s look closer at the method calls to Add from our program:

var lessFiveNanoseconds = now.Add(-5)
var lessFiveSeconds = now.Add(-fiveSeconds)

The compiler is implicitly converting constant -5 into a variable of type Duration to allow the method call to happen. Constant fiveSeconds is already of type Duration thanks to the rules for constant arithmetic:

const fiveSeconds = 5 * time.Second

The multiplication between constant 5 and time.Second results in constant fiveSeconds becoming a constant of type Duration. This is because constant time.Second is of type Duration and this type is promoted when determining the type of the result. To support the function call, the constant still needs to be implicitly converted from a constant of type Duration to a variable of type Duration.

If constants didn't behave the way they do, these kinds of assignments and function calls would always require explicit conversions. Look at what happens when we try to use a value of type int to make the same method call:

var difference int = -5
var lessFiveNano = now.Add(difference)

Compiler Error:

./const.go:16: cannot use difference (type int) as type time.Duration in function argument

Once we use a typed integer value as the parameter for the Add method call, we receive a compiler error. The compiler will not allow implicit type conversions between typed variables. For that code to compile, we would need to perform an explicit type conversion:

now.Add(time.Duration(difference))

Constants are the only mechanism we have to write code without the need to use explicit type conversions.

Conclusion

We take the behavior of constants for granted, which is a testament to the language designers and those who have worked hard on this feature. A lot of work and care has gone into making constants work this way and the benefits are hopefully clear.

So the next time you are working with a constant, remember you are working with something that is unique. A hidden gem buried in the compiler that doesn’t get enough credit or recognition as a unique feature of Go. Constants help make coding in Go fun, and the code we write readable and intuitive, while at the same time keeping it type safe.

Thanks

Thanks to Nate Finch and Kim Shrier who have provided several reviews of the post that have helped to make sure the content and examples were accurate, flowed well and would be interesting to Go developers. I was ready to give up a few times and Nate’s encouragement kept me going.
