Friday, April 13, 2018

Golang Web Server Auth

An example of authentication and authorization in a simple web server written in Go.



As described in my previous blog post, I recently rewrote my image viewer desktop app as a web app, for which I wrote the web server in Go.

Since I was adding a new potential attack vector, I wanted to add security; but since this server is only available on my internal network, and it's not critically valuable data, I did not need enterprise-grade security. In this post I describe how I implemented a relatively simple authentication and authorization mechanism, highlighting in particular the features of Go that made it easy to do. For a simple app such as this one, the third of the three As of security, auditing, can be done with simple logging if desired.

The code I present here is taken from the github repo for my mimsrv project, with links to specific commits and versions of various files. You can visit that project if you'd like to see more of the code than I present in this post.

Before Auth

Go has good support for writing simple web servers. The net/http package allows setting up a web server that routes requests based on path to specific handler functions. In the first commit for mimsrv, before there was any code for authentication or authorization, the http processing code looked like this:

In mimsrv.go:
```go
func main() {
  ...
  mux := http.NewServeMux()
  ...
  mux.Handle("/api/", api.NewHandler(...))
  ...
  log.Fatal(http.ListenAndServe(":8080", mux))
}
```
In api/api.go:
```go
func NewHandler(c *Config) http.Handler {
  h := handler{config: c}
  mux := http.NewServeMux()
  mux.HandleFunc(h.apiPrefix("list"), h.list)
  mux.HandleFunc(h.apiPrefix("image"), h.image)
  mux.HandleFunc(h.apiPrefix("text"), h.text)
  return mux
}

func (h *handler) list(w http.ResponseWriter, r *http.Request) {
  ...
}
```
The above two functions set up the routing and start the web server. The code in mimsrv.go creates a top-level router (mux) that routes any request with a path starting with "/api/" to the api handler that is created by the NewHandler function in api.go. The top-level router also defines routes for other top-level paths, such as "/ui/" for delivering the UI files.

The api code in turn sets up the second-level routing for all of the paths within /api (the h.apiPrefix function adds "/api/" to its argument). So when I make a request with the path /api/list, the main mux passes the request to the api mux, which then calls the h.list function.

Adding Authentication

To implement authentication in mimsrv, I added a new "auth" package with three files, and modified mimsrv.go to use that new auth package. The most interesting part of this change is that it implements the enforcement of the constraint that all requests to any path starting with "/api/" must be authenticated, yet I did not have to make any changes to any of the api code that services those requests.

When I originally wrote my request routing code, it could have been simpler if I had defined everything in one mux. I didn't do that because I think the approach I took provides better modularity, but in addition, that structure made it easy for me to require authentication for all of the api calls.

The authentication code itself is not trivial, but wiring that code into the request routing to enforce authentication for whole chunks of the request path space was. I wrote a wrapper function and inserted it in the middle of the request-handling flow for requests where I wanted to require authentication.

To wire in the authentication requirement for all requests starting with "/api/", I changed mimsrv.go to replace this line:

```go
mux.Handle("/api/", api.NewHandler(...))
```

with these lines:

```go
apiHandler := api.NewHandler(...)
mux.Handle("/api/", authHandler.RequireAuth(apiHandler))
```
Here is the RequireAuth method from the newly added auth.go:
```go
func (h *Handler) RequireAuth(httpHandler http.Handler) http.Handler {
  return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
    token := cookieValue(r, tokenCookieName)
    idstr := clientIdString(r)
    if isValidToken(token, idstr) {
      httpHandler.ServeHTTP(w, r)
    } else {
      // No token, or token is not valid
      http.Error(w, "Invalid token", http.StatusUnauthorized)
    }
  })
}
```
The RequireAuth function looks at a cookie to see if the user is currently logged in (which means the user has been authenticated). If so, RequireAuth calls the handler it was passed, which in this case is the one created by api.NewHandler. If not, RequireAuth calls http.Error, which prevents the request from being fulfilled and instead returns a 401 Unauthorized error to the web caller. When the mimsrv client gets this error it displays a login dialog.

The other code I added handles things like login, logout, and cookie renewal and expiration, but all of that code other than RequireAuth is specific to my implementation of authentication. You could instead, for example, use OAuth to authenticate, in which case you would have a completely different mechanism for authenticating a user, but you could still use a function similar to RequireAuth and wire it in the same way.

Adding Authorization

Wrapping selected request paths as described above means that, for those wrapped requests, authentication itself provides authorization. This coarse-grained authorization is a good start, but for mimsrv I wanted fine-grained authorization as well. As this is a simple program with a very small number of users, I don't need anything sophisticated such as role-based authorization. Instead, I chose a model in which I define permissions only for global actions, then assign those permissions directly to users.

For this simple permissions model, I needed to be able to define permissions, assign them to users, and check them at run-time before performing an action that requires authorization. My permissions are simple strings, stored in a column in the CSV file that defines my users. To give a permission to a user, I manually edit that CSV file, and to check for authorization before taking an action, the code looks for that permission string in the set of permissions for the current user.
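As a sketch of that model, a permission check over a comma-separated permissions string (as it might come out of a CSV column) could look like the following; the User type and its field names are illustrative, not the actual mimsrv definitions.

```go
package main

import (
	"fmt"
	"strings"
)

// User mirrors the idea of mimsrv's users.User; the fields here are
// assumptions for this example.
type User struct {
	ID          string
	Permissions string // comma-separated permission strings from the CSV column
}

// HasPermission reports whether the user's permission list
// contains the named permission.
func (u *User) HasPermission(perm string) bool {
	for _, p := range strings.Split(u.Permissions, ",") {
		if strings.TrimSpace(p) == perm {
			return true
		}
	}
	return false
}

func main() {
	u := &User{ID: "jim", Permissions: "edit,delete"}
	fmt.Println(u.HasPermission("edit"))   // true
	fmt.Println(u.HasPermission("create")) // false
}
```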

The one piece that is not obvious is how to pass the user's permissions to the code that needs to check them. The reason it is not obvious is that the http routing package defines the function signature for the functions that process an http request, and that signature includes only the request and a writer for the response. You can't simply add another argument in which to pass your user information, so you have to dig a little deeper to find a way to pass that information along.

The solution relies on the fact that there is a Context attached to the Request that is passed to the handler function. By adding the user info to the Context, you can then extract that information further along in the processing when you need to check the permission.

The RequireAuth function validates that the user making the request is authenticated, so it already has information about who the user is, and this is the point at which we want to add the user info to the Context. We do this in our RequireAuth function by replacing this line:

```go
httpHandler.ServeHTTP(w, r)
```

with these lines:

```go
user := userFromToken(token)
mimRequest := requestWithContextUser(r, user)
httpHandler.ServeHTTP(w, mimRequest)
```

where requestWithContextUser is a new helper function:

```go
func requestWithContextUser(r *http.Request, user *users.User) *http.Request {
  mimContext := context.WithValue(r.Context(), ctxUserKey, user)
  return r.WithContext(mimContext)
}
```
When the code needs to know whether the current user is authorized for an action, it can call the new CurrentUser function, which retrieves the user info from the Context attached to the Request, from which the code can query the user's permissions:
```go
func CurrentUser(r *http.Request) *users.User {
  v := r.Context().Value(ctxUserKey)
  if v == nil {
    return nil
  }
  return v.(*users.User)
}
```


While implementing authentication and authorization in a web server takes more than just a few lines of code, the part that ties it into Go's http processing is only a few lines. Even so, it took me a while to dig around and find exactly how to do that. I hope this article saves other people a bit of time when doing their own research on how to add auth to a Go web server.

Tuesday, March 13, 2018

Golang server, Polymer Typescript client

Finally, a web development environment I enjoy using.



I have found Go to be a nice tool for developing a small web server, and Polymer + Typescript to be a nice combination for developing a web UI. The Go server acts as both the API server and the static content server delivering the UI pages. If you think you might want to try this approach, you can look at my mimsrv program on github as an example. If it looks too complicated, browse in the git history back to some of the earliest commits, such as the first ui commit and the first api commit, to see how things looked at a simpler time.


I have been developing web pages and apps for a long time, since the earliest days of HTML when there were no tools more sophisticated than a text editor, and server-side scripts were the only form of executable web code. In 1994 I wrote htimp, an experiment in how to attach a web browser to an interactive program with a lifetime longer than a single message.

Over the years I tried many technologies, including JavaServer Pages, JavaServer Faces, PHP, jQuery, and others I have forgotten. Some were better than others (more accurately, some were bad and some were excruciating), but I never felt any of them provided a reasonable mental model for how to put together an application.

I was away from the web UI scene for a while, and when I got back to doing some web development a couple of years ago, things seemed to have improved quite a bit. In the last year, I have been introduced to a few technologies that, in combination, provide me with a development environment with a working mental model of how to put together a program, and a set of tools that makes it easy to do that at a good clip.

The three technologies that together have brought pleasure back to my web programming are:
  1. The Go language and development environment
  2. The Typescript language
  3. Polymer-2 (and Web Components) with decorators
Below I describe the project on which I tried out these technologies, followed by a discussion of what I liked about them.


Mimsrv is a web server and UI to view a collection of photos. It is a replacement for mimprint, which is a desktop app that I originally wrote starting in 2001 in Java, and converted to Scala starting in 2008.

A couple of years ago I started looking into rewriting mimprint once again, this time as a web app. As a web app, I would no longer have to worry about distributing a desktop application to the various machines I have on which I wanted to view my photos. I also thought I should be able to leverage the web browser's media capabilities so that I would not have to develop or support that whole chunk of code.

The tools I tried were never nice enough to pull me in and get me going on that replacement, and I had moved my rewrite-mimprint project way down on my TODO-list.

At Google last year I worked on the open-source Datalab project. When I started on it, we were using jQuery and Javascript. I liked it when we converted to Polymer-2 and Typescript, and I liked it more when we switched to using Polymer decorators.

I started learning Go in order to review code from my teammates. It took a little getting used to, but the more I learned, the more it made sense to me. I felt it was much easier to understand the existing Go codebase than similar codebases I had looked at in other languages. It grew on me, and after I started adding my own Go code to the project, I was surprised at how much I liked using it, and I felt that I was making pretty good coding progress.

I thought the combination of Go for the server, and Polymer and Typescript with decorators for the client, worked quite well, and I decided to try it for my personal project. So far that combination has worked well for me, and I have been quite happy with it.

What I Like

Offline Development

One of my requirements is that I be able to develop when I am offline. I insist on this because one of the situations in which I have the most time available for programming on my personal projects is when I am traveling and often don't have network access.

In a previous attempt at putting together a collection of technologies for developing web apps, some of the pieces used maven, and I was unable to figure out how to convince it not to go out looking for new versions of the snapshots I needed every time it compiled.

After using Go on a project at work and being pleasantly surprised at how much I enjoyed using it, I decided to see if it would work for my personal projects. When I downloaded and installed it, I was delighted to discover that not only did the installation provide everything I needed to compile and run my programs, but it also included all of the documentation and the Go Tour, so those would all be available to me offline!

Similarly, the Typescript and Polymer tools allow just building the code, without attempting to do any dependency resolution, so they can easily be used offline.

Simple Mental Model

There are a couple of changes to the web app landscape that have made for a much simpler mental model than in the old days. The main one is the Single Page Application (SPA). With the old approach of having to move to a new page every time the user took an action, saving state across those page changes required mental and technical gyrations. With a SPA, you make AJAX calls to the server using XMLHttpRequest, and just keep your state in variables as in any other program.

The SPA model also allows for a clean separation of responsibility between the server and the client. With Polymer, all of the UI manipulation is handled in the client, so the server doesn't need to deal with any kind of templating of client-side functionality. This means the server can focus on the API and on just delivering the UI code to the client, and the client can focus on managing the UI and making API calls.

The other big change on the client side is the progress that has been made on the asynchronous programming model. At first we had to pass around success and failure callbacks, which required splitting code up in unwieldy ways around every asynchronous call. The introduction of Promises provided a nice way to avoid the "callback hell" of deeply nested callbacks, but still required chopping your code up around every asynchronous call. Finally, the introduction of the async and await keywords made asynchronous programming almost as straightforward as synchronous programming. I'm particularly impressed that you can do things like have an if-statement with synchronous code on one side and asynchronous code on the other, or a loop with an asynchronous call in it. This is so much simpler to reason about than figuring out how to do the same thing with callbacks or even Promises.

Simple Dependency Management

The few times I had to deal with Maven were unpleasant. I found it hard to control, hard to configure, and hard to understand what it was doing. Perhaps it's just that, with the march of time, people have figured out how to make dependency management better, but I found the dependency management in both Go and Polymer to be pleasant to use.

In Go, when you need a package, you just say go get package, and it downloads that package and all its dependencies. Assuming you follow the Go conventions when naming and locating your package, when someone then wants to download your package, they do the same thing, and Go will also download all of your dependencies to their system.

Polymer-2 uses bower for its package management, and it is almost as easy to use. The bower.json file lists the packages needed, and running bower install installs those packages and their dependencies. When you add a new dependency to one of your Polymer components, you just run bower install --save new-package to download that new package, and you're done. Not quite as effortless as go, but much better than my experience with maven.

Neither Go nor bower attempts to download anything except when you explicitly tell it to with go get or bower install, which is good for offline development.

Simple Compilation

For pretty much my whole programming career, I have been accustomed to using some kind of build tool that requires a configuration file: make, ant, maven, sbt, grunt, bazel, gradle, and others.

Go is different: it is so opinionated about where you have to put your packages and how you have to name things that it can gather all the dependency information it needs by looking at the source files. You just tell it to build your program with the command go build, and it does it. No build config file required.

The Typescript compiler and Polymer build commands do require config files, but they were pretty simple to set up and understand, and seldom need to be modified. Running tsc compiles all the Typescript files to Javascript, and running polymer build packages all the Polymer Javascript and HTML files into a directory where they are served by the Go server.

Type Safety

I like the compiler to catch as many errors in my code as possible. Using compile-time types allows the compiler to spot more errors. This is why I greatly prefer Typescript over Javascript.

Go is also a compiled and typed language, so it catches a lot of problems before execution time.

Separation of Concerns

While I don't think having to use multiple languages is a benefit, the ability to select the best tools for different parts of the problem is. Go works very well as a web server for API calls and static content. Most people using Polymer embed Javascript code in their HTML file, but I prefer using Typescript and am happy putting that in a separate file from the HTML, where my editor understands it better.

Go http support

Go has a nice http package that makes it easy to define web routing and implement handler functions.

Because Go supports functions as first-class values, it's easy to define a function that can take a function as an argument and return another function. In my case, I used that approach to create a function that I could use to specify that certain parts of my API required authentication.

I wrote my http handlers to do only the marshaling and unmarshaling of data and then call the underlying routine that implements the requested functionality. This made it easy to write unit tests of the underlying function. But Go also provides a nice testing package for http handlers that makes it relatively easy to test the http handler as well.

Room for Improvement

I'm pretty happy with this collection of technologies, but there are a couple of things I would like to see improved.

Polymer/Typescript type mismatch

Polymer decorators are a nice improvement over the previous approach, as there is now much less boilerplate and repeated code. But I still have to specify a type in each line, and that type is not quite the same as the Typescript type (for example, string vs String, any vs Object).

I suppose this is not that surprising, given that Typescript is not officially supported by Polymer. Official Typescript support is really what I would like to see happen.

Debugging Typescript

Writing Typescript rather than Javascript is nice, but when it gets loaded into the browser it's Javascript, so debugging in the browser uses the transpiled Javascript. The Javascript is usually close enough to the source Typescript that it's manageable, but it would be nice to be able to debug with the Typescript source code.

Maybe this situation will get better when WebAssembly gets implemented.