Thursday, November 28, 2019

Go Composition vs Inheritance

Go does not support inheritance, but sometimes using embedded structs can look a little like inheritance. I explore that feature to see how it differs.

Introduction

In lieu of inheritance, the Go language encourages composition by allowing one struct to be embedded in another struct in a way that allows calling methods defined on the embedded struct as if they are defined on the containing struct.

Note: In this post I occasionally use object-oriented terminology such as base class, subclass, and override. Please remember that Go does not support these concepts; I am using those terms here to show how thinking that way with Go can lead to problems.

For the examples that follow, I assume we are building a graphical editor that allows manipulating visual objects on the screen. We want to be able to draw those objects, and we want to be able to transform them with operations such as rotate, so we define an interface with those methods:

Note: For convenience, the final collected code used in this post is available on play.golang.org.
type shape interface {
    draw()
    rotate(radians float64) // translate and scale omitted for simplicity
}
We write a function that will draw all our shapes:
func drawShapes(shapes []shape) {
    for _, s := range shapes {
        s.draw()
    }
}

Base class

We define our "base class", called polygon, where we implement a draw method that we can invoke from our "subclasses":
type polygon struct {
    sides int
    angle float64
}

func (p *polygon) draw() {
    fmt.Printf("draw polygon with sides=%d\n", p.sides)
    vertexDelta := 2 * math.Pi / float64(p.sides)
    vertexAngle := p.angle
    x0 := math.Cos(vertexAngle)
    y0 := math.Sin(vertexAngle)
    for i := 0; i < p.sides; i++ {
        // Draw one side within unit circle, offset by p.angle.
        vertexAngle += vertexDelta
        x1 := math.Cos(vertexAngle)
        y1 := math.Sin(vertexAngle)
        fmt.Printf("draw from (%.3f, %.3f) to (%.3f, %.3f)\n", x0, y0, x1, y1)
        x0 = x1
        y0 = y1
    }
}

func (p *polygon) rotate(radians float64) {
    p.angle += radians
}

Subclass

We define a couple of "subclasses", triangle and square, that "extend" our "base class", along with functions to create instances of those types:
type triangle struct {
    polygon
}

type square struct {
    polygon
}

func createTriangle() *triangle {
    return &triangle{
        polygon{
            sides: 3,
        },
    }
}

func createSquare() *square {
    return &square{
        polygon{
            sides: 4,
        },
    }
}

Main and test

Finally, we write a couple of test functions to create a list of shapes and draw them, and a one-line main function that calls our test function.
package main

import (
    "fmt"
    "math"
)

func createTestShapes() []shape {
    shapes := make([]shape, 0)
    shapes = append(shapes, createTriangle())
    shapes = append(shapes, createSquare())
    return shapes
}

func testDrawShapes() {
    drawShapes(createTestShapes())
}

func main() {
    testDrawShapes()
}
When we run this program, it produces the expected output:
draw polygon with sides=3
draw from (1.000, 0.000) to (-0.500, 0.866)
draw from (-0.500, 0.866) to (-0.500, -0.866)
draw from (-0.500, -0.866) to (1.000, -0.000)
draw polygon with sides=4
draw from (1.000, 0.000) to (0.000, 1.000)
draw from (0.000, 1.000) to (-1.000, 0.000)
draw from (-1.000, 0.000) to (-0.000, -1.000)
draw from (-0.000, -1.000) to (1.000, -0.000)
Note that we have not defined any methods on the triangle and square types, yet the compiler accepts them as implementing shape, as seen by the fact that we can store them in a slice of shape and we can invoke draw on them. Because we embedded polygon in triangle and square, without giving them field names, Go has promoted all of the methods in polygon into the namespaces of triangle and square, allowing draw to be called directly on an instance of type triangle or square.
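As a quick illustration (a hypothetical testPromotion helper, not part of the program above), the promoted methods can be called directly on a triangle, and a *triangle value can be assigned to a shape variable:

func testPromotion() {
    t := createTriangle()
    t.draw()              // calls the promoted method, i.e. t.polygon.draw()
    t.rotate(math.Pi / 4) // rotate is also promoted from polygon
    var s shape = t       // *triangle satisfies shape via the promoted methods
    s.draw()
}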

So far, relying on an object-oriented mental model has not caused us problems. Let's keep going and see when it does.

Overriding

We add a typeName method to our shape interface and our "base class", polygon, and we "override" that method in our "subclasses", triangle and square:
type shape interface {
    draw()
    rotate(radians float64) // translate and scale omitted for simplicity
    typeName() string
}

func (p *polygon) typeName() string {
    return "polygon"
}

func (p *triangle) typeName() string {
    return "triangle"
}

func (p *square) typeName() string {
    return "square"
}
We can test our typeName methods by pointing our main to a different test function:
func printShapeNames(shapes []shape) {
    for _, s := range shapes {
        fmt.Println(s.typeName())
    }
}

func testShapeNames() {
    printShapeNames(createTestShapes())
}

func main() {
    testShapeNames()
}
This outputs:
triangle
square
No problems yet.

Downcall

Let's add a method to our interface and "base class" that invokes the method that we are overriding, and a new test function to call it. This is sometimes referred to as a downcall, in that a superclass calls into the overriding method of a subclass that is below it in the class hierarchy.
type shape interface {
    draw()
    rotate(radians float64) // translate and scale omitted for simplicity
    typeName() string
    nameAndSides() string
}

func (p *polygon) nameAndSides() string {
    return fmt.Sprintf("%s (%d)", p.typeName(), p.sides)
}

func printShapeNamesAndSides(shapes []shape) {
    for _, s := range shapes {
        fmt.Println(s.nameAndSides())
    }
}

func testShapeNamesAndSides() {
    printShapeNamesAndSides(createTestShapes())
}

func main() {
    testShapeNamesAndSides()
}
This outputs:
polygon (3)
polygon (4)
Well, that doesn't look right. We wanted it to print triangle and square instead of polygon both times. Thinking of this as inheritance has led us astray.

Method promotion

So, what happened here? Why did printShapeNames work, but printShapeNamesAndSides did not? Let's dig into that.

The return value of createTestShapes is []shape, which is a slice of objects that implement the shape interface. Since the triangle and square types implement that interface, we can store instances of those types in that slice. But how is it that those types implement that interface when we didn't write those methods for those types? The answer is method promotion.

When we embed one type inside another without giving the internal type a field name, Go automatically promotes all unambiguous names from the embedded type to the containing type. Effectively, for each method in the embedded type whose name does not conflict with a method in the containing type or in any other embedded type within that container, Go creates a method on the containing type that turns around and calls that method on the embedded type. For example, when we embed polygon in triangle the compiler effectively creates this code:
func (t *triangle) typeName() string {
    return t.polygon.typeName()
}
If the embedded type satisfies an interface, and there are no ambiguous method names, this promotion of all the methods of the embedded type makes the containing type also satisfy that interface.

Let's explore this method promotion behavior. We create another struct type called thing that has a typeName method, embed it, along with our previously defined polygon (which also has a typeName method), in a new type polygonThing, and then try to assign an instance of that type to a variable of type shape.
type thing struct{}

func (t *thing) typeName() string {
    return "thing"
}

type polygonThing struct {
    polygon
    thing
}

func testPolygonThing() {
    p := &polygonThing{}
    p.draw()
    fmt.Println(p.typeName())
    var s shape = p
    fmt.Println(s.typeName())
}

func main() {
    testPolygonThing()
}
When we compile this, we get these errors:
./comp.go:130:16: ambiguous selector p.typeName
./comp.go:131:7: polygonThing.typeName is ambiguous
./comp.go:131:7: cannot use p (type *polygonThing) as type shape in assignment:
    *polygonThing does not implement shape (missing typeName method)
where line 131 is the line where we are assigning to s.

From these errors we can see that Go did not promote the typeName method from either of the embedded structs into polygonThing. But there was no error message about the call to draw, so it did promote that method from polygon, since it is not ambiguous.

If we comment out the embedded thing line from the definition of polygonThing, the code compiles. If, instead, we comment out the embedded polygon line, we get different errors:
./comp.go:129:4: p.draw undefined (type *polygonThing has no field or method draw)
./comp.go:131:7: cannot use p (type *polygonThing) as type shape in assignment:
    *polygonThing does not implement shape (missing draw method)
If we want to keep both embedded structs in our composite struct, there are a couple of ways we can resolve the ambiguity of typeName appearing in both embedded structs. The simplest is to assign a name to one of the embedded structs, converting it to a regular field. Instead of writing thing in the definition of polygonThing, we can write t thing. Go then does not attempt to promote the methods from thing into polygonThing, and the promotion of typeName from polygon into polygonThing is no longer ambiguous, so it succeeds.
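As a sketch, the struct definition with a named field would look like this (the field name t is arbitrary):

type polygonThing struct {
    polygon
    t thing // named field: thing's methods are no longer promoted
}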

Another possibility is to resolve the ambiguity by defining a typeName method directly on polygonThing. In this case, Go does not attempt to promote typeName from either of the embedded structs. We can call a method in an embedded struct by referring to that embedded struct as if it were a named field.
func (t *polygonThing) typeName() string {
    return t.polygon.typeName() + "Thing"
}
With this definition, the program compiles and runs, outputting
draw polygon with sides=0
polygonThing
polygonThing

Solution

Now that we understand how embedded structs work in Go, let's go back and reconsider what happened with our printShapeNamesAndSides function.

Assume one of the elements in our slice of shape is an instance of triangle. We call nameAndSides with that triangle as the receiver. Since we did not define nameAndSides on triangle, that calls the promoted version of that method. That promoted method turns around and calls nameAndSides on the embedded polygon, passing the embedded polygon as the receiver. In polygon.nameAndSides, it calls p.typeName, but p here is the receiver of the nameAndSides method, which is the polygon, not the triangle. So the call from nameAndSides to typeName calls the typeName method on polygon rather than on triangle.
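In other words, just as with typeName earlier, the compiler effectively generates something like this for the promoted method:

func (t *triangle) nameAndSides() string {
    // Inside polygon.nameAndSides the receiver is t.polygon (a *polygon),
    // so its call to p.typeName() resolves to polygon's typeName method.
    return t.polygon.nameAndSides()
}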

With this understanding, let's update our code to make "overriding" work. The difference between the behavior we are seeing and what we would expect from a system with inheritance and overriding is that here our "base class" does not, by default, make calls to methods of the "subclass". It can't, because the method in the "base class" has no reference to the type of the containing object. In order to call a method on an instance of a "subclass" such as triangle from polygon.nameAndSides, we need a reference to that instance. We will get one by explicitly passing our shape as an argument, then calling the typeName method on that shape rather than on the receiver. By calling a method on a passed-in argument rather than on the receiver, it is clear, when looking at that method in the "base class", that the call may be going to a different type of object than polygon.
type shape interface {
    ...
    nameAndSides(s shape) string
}

func (p *polygon) nameAndSides(s shape) string {
    return fmt.Sprintf("%s (%d)", s.typeName(), p.sides)
}

func printShapeNamesAndSides(shapes []shape) {
    for _, s := range shapes {
        fmt.Println(s.nameAndSides(s))
    }
}
With these changes, we get the expected output:
triangle (3)
square (4)

Conclusion

The way Go promotes methods of embedded structs gives the language some of the characteristics of inheritance as defined in object-oriented programming. In particular, methods of an embedded struct are automatically promoted to the containing struct, and thus interfaces satisfied by the embedded struct are automatically satisfied by the containing struct. One key difference is that, when you "override" one of those promoted methods in the containing struct, the code in the embedded struct does not automatically call the overriding method in the containing struct, as happens in some object-oriented languages such as Java.

You may have heard of the fragile base class problem. A related issue, which can arise when there are downcalls from a superclass to an overridden method in a subclass, similar to the example here where I "overrode" the typeName method, might be termed the fragile subclass problem. If you are interested in digging into that, you can read Safely Creating Correct Subclasses without Seeing Superclass Code, a paper from OOPSLA 2000 that examines that issue; see section 4 in particular. The designers of Go chose not to implement inheritance, but instead to favor composition. Although some Go constructs can look a little like inheritance, it is better to design in Go using composition rather than trying to bend Go into doing something like inheritance.

Tuesday, June 11, 2019

A Future Telescope

This post describes an idea for a telescope that can see where heavenly objects will be in the future. This may sound crazy, like something out of a science-fiction story, but I believe it is based on solid theory. Unless, of course, I have misinterpreted something. Read on if you enjoy considering surprising extrapolations of theory.

Collective Electrodynamics

Carver Mead's book Collective Electrodynamics, first published in 2002, puts forth a theory of electrodynamics based on four-vectors. As with many other low-level aspects of physics, this theory is time-symmetric, making no claims about how to distinguish between the past and the future.

I found Carver's theory and his exposition of it to be elegant and convincing. Even if you don't agree with my interpretation and conclusions in this post, I recommend you read this book if you are generally interested in physics.

Carver's description of the process of photon emission and absorption includes a few comments noting that a photon will not be emitted without a destination that will absorb the photon at some point in the future, because the emitter and absorber are a coupled pair forming a single resonator.
  • In section 4.8: "Any energy leaving one resonator is transferred to some other resonator, somewhere in the universe."
  • In section 4.12: "The spectral density of distant resonators acting as absorbers is, of necessity, identical to that of the resonators producing the local random field, because they are the same resonators."
  • In the Epilogue: "It is by now a common experimental fact that an atom, if sufficiently isolated from the rest of the universe, can stay in an excited state for an arbitrarily long period. ... The mechanism for initiating an atomic transition is not present in the isolated atom; it is the direct result of coupling with the rest of the universe."
Part 5 describes how two atoms couple electromagnetically as resonators.

Interpreting the Theory

As a thought experiment, if we were out in space in some part of the universe in which there were no matter in one direction, we would not be able to shine a flashlight in that direction because there would be nothing to absorb the photons, therefore they would not be emitted. If we were able to measure all of the other energy going into or out of the flashlight, we would be able to notice that energy leaves the flashlight when we point it towards other things, but not when we point it towards truly empty space.

Coming back to our current location in the universe, there is a finite amount of matter between us and the Hubble sphere. Consider a line segment from our location to a point on the Hubble sphere. If there are no atoms on the intersection of that line segment and our future light cone, then it should not be possible to emit a photon in that direction. More restrictively, if there are no atoms in that intersection that are capable of absorbing a photon of the frequency our source atom is attempting to emit, then we will not be able to emit that photon in that direction.

The Big Idea

Assume, then, that we have a highly directional monochromatic light source that we can point accurately, and that we can accurately know how much light we are emitting based on energy input measurements. What would happen if we were to provide that light source with a suitable input power signal, then scan the sky? If there are any differences in the density of atoms in different directions that are capable of absorbing photons of the frequency we are sending, would we be able to produce a map of the sky showing those differences? Would there be any anisotropy, as there is for the background radiation?

Given how much matter there is in the universe, I suspect it would be hard to find one of those line segments out to the Hubble sphere without a single atom capable of absorbing one of our photons, but perhaps if we are trying to send out a great many photons, there will be enough of a statistical variation to measure.

The thing that I find fascinating about this is that, if it did in fact work, we would be "seeing the future", because whatever map we produced would be a function of where the absorbing atoms are going to be when the light we emit reaches them. For planets in our solar system that would be minutes or hours in the future, but for distant nebulae that could be millions or billions of years from now.

The Details

The devil is in the details. Even if, in principle, the theory supports this conclusion, would it be possible to build such a device?

In addition to the statements of theory, I make two assumptions above:
  1. We can accurately point our light source, such that we can perform a raster scan on a portion of the sky.
  2. We can determine how much light energy is leaving our light source by measuring the input energy to that source.
The first assumption seems straightforward: the optics involved in sending out a beam of light to a small portion of the sky should be the same as receiving light from a small portion of sky, which we do on a regular basis to form images of space. But I am not an astronomer, so I may be missing something. For example, I know that some modern telescopes use a guide laser shining up through the atmosphere to allow for dynamic adjustments to the mirrors to compensate for atmospheric distortion. Would this also work when sending out a signal beam alongside the reference beam? I don't know why not, but, as mentioned, this is not my area of expertise.

I think the second assumption may require more effort to solve. The typical advice for powering a laser is to use a current source in order to get a stable output. For my experiment, however, I specifically don't want a stable source. Instead, I want a source that can output more or less light based on how much the space into which it is shining can accept.

Since I can't directly measure the light output, I also need a light source where I can accurately judge how much light is being output by measuring the input power. This means I need to know the power transfer characteristics of the light source. How much of the input power is transformed into light, and how much into heat or other forms of energy? Is that relationship constant over time, or might it vary such that at one point in time I get x% of the input turning into heat, and moments later I get 2x% turning into heat? Alas, I am not a solid-state physicist (assuming my light source is a solid-state laser), so I don't know the answers to these questions.

An Invitation

So, what do you think? Is there a fatal flaw to my understanding of the theory? A fundamental reason why it would not be possible to build such a "future telescope"? A technical limitation making it not currently possible?

I have talked to a few people about this idea, and the ones who I know have a good understanding of Carver's theory have said that, in principle, they don't see anything wrong with my reasoning.

As I mentioned above, I'm not an astronomer or solid-state physicist, so I don't have the background to take this concept to the practical stage. But perhaps someone else does.

This seems like it would be a very exciting thing if it worked, but I think it would require a significant investment of time and access to some expensive equipment to take the next step. Would anyone like to give it a try? If you do, I'd love to hear about it.