Go is an amazing language for many reasons. Two things trip up most people coming to the language: goroutines and channels. Even many of the tutorials I've come across use abstract examples that just leave me more confused than before!

The best way to explain both of these concepts is with an actual, simple server. Our server works like this:

  1. A client can connect and send a name, like Bob.
  2. The server will send back a greeting, like Hello, Bob.
  3. Steps 1 and 2 repeat until the client disconnects.



Most important here is that the server can handle multiple simultaneous connections. This is a real, practical example that demonstrates why the architecture of Go is, for lack of a better term, revolutionary.

Before We Begin

Obviously you will need Go installed.

Since we are creating a simple text-based server we do not need to create a special client for it. We can use telnet as the client. In the examples I will be using port 8080, but you can use a different port if that suits you better. Start the server by running:

go run server.go

In a separate terminal window connect to it with telnet:

telnet localhost 8080

Go, Go, Goroutine!

You can view the full code example as a Gist. There are two files in the Gist; we only care about server.go for now.



On a side note, Go handles errors via a second return value. It is up to you to check the returned error and handle it appropriately. This is a helper function that we will use throughout to panic on an error, or print a success message and continue.

func check(err error, message string) {
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s\n", message)
}

main is the entry function for Go. We start the server by binding it to port 8080.

func main() {
	ln, err := net.Listen("tcp", ":8080")
	check(err, "Server is ready.")

In order to accept multiple simultaneous connections we need to iterate the Accept as each new client connection comes in.

	for {
		conn, err := ln.Accept()
		check(err, "Accepted connection.")

Here's the magic: launching a goroutine. Traditionally a server would launch a new thread or process for each connection so that the server could act independently on each one. Those of you who have worked with non-blocking servers know of a better solution, at the expense of more complex code to coordinate.

A goroutine is launched by calling an existing or anonymous function (closure) with the go keyword. Goroutines are extremely cheap (each starts with only a few kilobytes of stack), so there is no practical limit to the number that can be running at once.

		go func() {
			buf := bufio.NewReader(conn)

Inside each connection we want to iterate the communication, starting by reading the name sent to the server:

			for {
				name, err := buf.ReadString('\n')

err will be non-nil if the client disconnects or disappears. We can recognise this easily and handle it gracefully by breaking out of the loop.

				if err != nil {
					fmt.Printf("Client disconnected.\n")
					break
				}

Finally, return the new message to the client.

				conn.Write([]byte("Hello, " + name))

Each connection has its own goroutine. Whenever a goroutine is blocked (it might be waiting for data, waiting for the client to receive it, sleeping, etc.) the Go scheduler will switch to a different goroutine that isn't blocked. Your program as a whole will only be blocked if all goroutines are in a blocked state.

This allows you to concentrate on the logic of your algorithm and not be bothered about wasting CPU or time waiting between different concurrent operations. It also means your program will be extremely efficient by default.

Channel Your Inner Engineer

A buffered channel works much like a fixed-size queue. Sending an item to the channel adds it to the next vacant space; if you try to send to a channel that is full, the send will block. Receiving an item from the channel takes the oldest item. If there is nothing available on the channel, the receive will wait until there is, just like we saw with accepting connections. (An unbuffered channel, like the one we create below, has no spare space at all: every send blocks until another goroutine is ready to receive.)



One other (hopefully not too common) edge case is a deadlock. The Go runtime detects this automatically when every goroutine is blocked waiting on another.

The above example works to describe how the server handles concurrency, but it does everything in the main function, which is not a good separation of responsibilities. Using the same Gist, this time we will go through server2.go.

For each name that a client sends, we want to add it to a channel to be processed separately. To do this we need a ClientJob that contains the original message sent and who sent it, so that we can send the response back to the correct client.

type ClientJob struct {
	name string
	conn net.Conn
}

We will create a channel of ClientJobs and launch the goroutine that processes them straight away, even before the server starts accepting connections:

func main() {
	clientJobs := make(chan ClientJob)
	go generateResponses(clientJobs)

This function will listen to the channel indefinitely, create the greeting message, and send it back to the original connection. I've added a loop to keep the CPU busy so that we can simulate the server doing something very expensive. You will be able to see this lag when interacting with the server.

func generateResponses(clientJobs chan ClientJob) {
	for {
		// Wait for the next job to come off the queue.
		clientJob := <-clientJobs

		// Do something that keeps the CPU busy for a whole second.
		for start := time.Now(); time.Since(start) < time.Second; {
		}

		// Send back the response.
		clientJob.conn.Write([]byte("Hello, " + clientJob.name))
	}
}

The final change is to replace the line that sent the response back. Instead, we create a ClientJob and add it to the channel.

				// conn.Write([]byte("Hello, " + name))
				clientJobs <- ClientJob{name, conn}

Go uses the same operator, <-, both to send to a channel and to receive the next item from it, depending on how it is used. <- used as a binary operator, with a channel on the left-hand side, sends the item to the channel:

clientJobs <- ClientJob{name, conn}

However, when used as a unary operator it will pull an item off the channel:

clientJob := <-clientJobs

Since the changes are out of order you might want to look at the complete Gist.

Wrapping Up

Go has all the benefits of a non-blocking architecture without any of the complexity of coordinating it yourself (such as callback hell). In a much larger and more complex application this becomes increasingly valuable.

There is a wealth of features in Go, each deserving its own blog post. Even goroutines and channels have only been touched on here.