Factory Reset

Leaving something behind is never easy when you have put your whole heart into it. Five years of Audiotool have shaped me, taught me and left their scars. As of November 2012 I am no longer involved in the project and wish the team all the best and success in pursuing the democratization of music production.

Of course, every end is a new beginning. For me this means that I am going to start a whole new adventure. Luckily I am not alone in this, because starting from scratch is never easy. My colleague Tim Richter and I are going to pursue our idea of a multi-platform development kit.

2013 will be a roller-coaster. I am all in.

Continuous Deployment

Much has already been said about Continuous Deployment. Etsy famously championed it, and we integrated our own solution about two weeks ago.

The reason I integrated it into our system was quite simple: I was the only person who could perform a deployment, which of course held back the rest of the team.

Our architecture already allowed “hot” deployments: users who were running the Audiotool application did not notice at all that a deployment had happened. This is of course only possible as long as there are no new dependencies on the API.

The Audiotool startup process is a very important ingredient. The boot sequence loads a configuration file upfront which contains a version number. That version number is used to load all dependencies the application needs to start. We have put a repository server in place which serves the SWF files based on this version, and they get cached until the end of time.

If a user started Audiotool while version 1.1 was online and we updated to 1.2 in the meantime, that user would still load all audio plug-ins for version 1.1. There are some cases when a hot deployment is not possible (yet), and we call this a scheduled update.
A scheduled update pushes a message into Audiotool notifying users that they should save their song and restart the application. But that is not the topic of this post, and scheduled updates are very rare: we did two last year and one this year.

When we did a deploy it was basically done like this:

  1. Make sure all changes are checked in.
  2. Update a default version key in some source files in case the boot sequence cannot load the configuration for whatever reason.
  3. Update the Nginx configuration so that some version-less files like our embed player are routed correctly.
  4. Create a tag for the repository.
  5. Create a clone of the tag.
  6. Execute mvn -Pdeploy-to-live deploy in the clone. This command already updates all plug-ins on audiotool.com and puts new metadata in place.
  7. Copy all SWF files from a local directory to S3.
  8. SSH into a server and update the configuration file Audiotool parses at startup.
  9. Reload the Nginx configuration.

A lot of steps, which means a lot can go wrong of course. And even though we had all those steps written down in an internal wiki, it is still hard to do all this. You need the appropriate SSH keys to log into some servers, uploading files to S3 requires a tool which also has to be configured, and not everyone is comfortable editing an Nginx configuration file on an Amazon server through the terminal.

Only one deployment ever happened without me. I was at FITC Amsterdam in 2010 at the time, and it took a long phone call. Obviously something had to change.

The first step was to get rid of all the manual configuration hassle. The configuration file that had to be changed via SSH was replaced by a file generated dynamically by the web server, with the actual version pulled from the database. That alone made my life much easier. But it was still not what I wanted, because nothing had really changed: you still had to perform the S3 upload, SSH into a server and so on.

But since the configuration was already served by the web server, I could start automating more things. Even better: with our last major update we dropped the version number from Audiotool. Users would stop expecting big changes and an ever-increasing version number. Instead we focus on being much more active and pushing changes online as quickly as possible.

Since I had already written some shell scripts for myself to deploy various server applications with a single command, I started doing the same for the whole Audiotool application.

The first step was to get rid of the default version in some of the ActionScript source files. I now simply [Embed] a text file which contains the version information. A single text file is much easier to change.

Then I added an API call which allows me to change the version information online. That way a new version can be released easily without any manual intervention. With the API call in place and a text file containing all important configuration parameters, the last remaining issue was the Nginx configuration; a small script on the server should do that job.
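
Just to make the version mechanism concrete, here is a rough sketch of what such a setup could look like. This is not our actual server code: the endpoint paths, the configuration format and the in-memory version store below are all invented for illustration (in reality the version lives in the database), and it uses the JDK's built-in HttpServer.

import com.sun.net.httpserver.HttpExchange;
import com.sun.net.httpserver.HttpHandler;
import com.sun.net.httpserver.HttpServer;

import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.util.concurrent.atomic.AtomicReference;

// Hypothetical sketch, not Audiotool's actual server: one endpoint serves the
// boot configuration with the current version, another lets the deployment
// script switch the version. No authentication or validation here.
public class VersionConfig {

  // Stand-in for the database lookup mentioned in the post.
  private static final AtomicReference<String> version =
      new AtomicReference<String>("2012-11-20_0000000000");

  public static void main(String[] args) throws IOException {
    HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);

    // The client boot sequence loads this file upfront and uses the version
    // to request all SWF dependencies from the repository server.
    server.createContext("/boot/config", new HttpHandler() {
      public void handle(HttpExchange exchange) throws IOException {
        respond(exchange, "version=" + version.get() + "\n");
      }
    });

    // The deployment script curls this endpoint with the new revision,
    // e.g. /deploy/version?set=2012-11-21_1234567890
    server.createContext("/deploy/version", new HttpHandler() {
      public void handle(HttpExchange exchange) throws IOException {
        String query = exchange.getRequestURI().getQuery(); // "set=<revision>"
        version.set(query.substring("set=".length()));
        respond(exchange, "ok\n");
      }
    });

    server.start();
  }

  private static void respond(HttpExchange exchange, String body) throws IOException {
    byte[] bytes = body.getBytes("UTF-8");
    exchange.sendResponseHeaders(200, bytes.length);
    OutputStream out = exchange.getResponseBody();
    out.write(bytes);
    out.close();
  }
}

With something like this in place, the deployment script only needs to curl the second endpoint with the new revision, and every client that boots afterwards picks up the new version.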

So I started writing a little script which performs all the necessary tasks:

  1. Create a clone of the repository.
  2. Get the id of the current revision.
  3. Create a tag for the revision like YYYY-MM-DD_REVISION.
  4. Replace the default version with the revision.
  5. Execute the Maven command to deploy some metadata and build all SWF files.
  6. Upload all SWF files to S3.
  7. curl our API with the revision information.
  8. SSH into a server at Amazon to update Nginx.
  9. Cleanup.

That’s all there was to it. Since I do not have that much experience with shell scripting, the most annoying part was figuring out a way to replace text in a file. After looking through all the examples of possible and impossible sed and awk options I got lucky with sed -i "/@build.version@/ s//${HG_VERSION}/g" ${HG_CLONE}/default.version.txt. This basically replaces @build.version@ with the content of ${HG_VERSION}. I know, magic. I just want to write it down here so Future-Joa can Google this in five years and get a quick answer.

With the shell script at our disposal we simply had to hook it into TeamCity (a fantastic continuous integration solution, by the way). After configuring some command-line tools on the CI server we were ready to go.

The results speak for themselves. We have already made twenty deployments during the last two weeks, which is about as many as we did in all the time since we started Audiotool. Because deployments have become less scary and everyone can trigger them, we are able to iterate quicker and get rid of lots of bugs. In fact it also makes it much easier to reason about your program. If you push hundreds of new features and get a null pointer exception: good luck. However, if you changed just one little thing and get loads of bug reports, it is quite easy to identify the possible culprit. Of course this is mostly a Flash problem since there are no stack traces in the release player, but I guess a JavaScript application would have similar problems.

I can only recommend doing the same, especially for your own sanity. 100% automated deployments are really cool and stress-free! They are also much easier to set up than you might expect.

Collateral Damage

I am no longer committed to supporting any Flash related open-source projects.

Here is why. When I started using the Flash Player it was quite easy to reach its limits. However you were able to get around those limitations with clever hacks and debatable optimization techniques. I was always keen to share my knowledge with the community and to explore all possible options to achieve best performance.

The Flash Player has been hibernating for half a decade now. The only glimpse of performance improvement was a set of specialized op-codes which allow you to modify an array of bytes. In layman’s terms this means it was finally possible to do a[b] = c with acceptable performance. So I wrote a tool which allows you to do just that, and many other things. I have spent a good part of my free time trying to improve the performance of the Flash Player and contributing all my code to the community.

As a reminder: I showed some drastic performance improvements at Flash on the Beach in 2009. That was three years ago. It was not necessary to modify the Flash Player and it was not necessary to modify the ActionScript language.

The Adobe roadmap for the Flash runtimes states that Flash Player “Dolores”

  • will support ActionScript Workers
  • comes with improved performance for Apple iOS
  • and ActionScript 3 APIs to access the fast-memory op-codes

This player should be released in the second half of 2012. The “Next” Flash Player will finally include

  • modernizing the core of the Flash runtime
  • work on the VM
  • updates to the ActionScript language

This is apparently planned for 2013. And what can we expect? Type inference, static typing as a default, and hardware-oriented numeric types. Hooray, so it will finally be possible in 2013 to write a[b] = c without having to use some weird fast-memory op-codes. Looking back to 2009, this makes me really sad.

With the introduction of the speed tax you will now have to license your application, whether you make money from it or not. Now, I think that 9% is a decent number and I can understand Adobe’s position on this. In fact it is much friendlier than the 30% that Google or Apple take. However, the AppStore was an invention. What is the invention here? Squeezing money out of an already existing feature, and suddenly making it unavailable after people have relied on it for years to push the boundaries of the platform and actually innovate?

But for the hell of it, a[b] = c is not a premium feature. Nor are hardware accelerated graphics. That is what I would expect from any decent runtime.

Limiting the capabilities of a runtime, for instance by defaulting back to software rendering, will make it less attractive to use in the first place. You are probably not interested in going through a signing process for a small demo. So your performance might be crap, people will complain about the Flash Player taking 100% CPU because it is using software rendering (YEY! 2013!), laptop fans will start to dance, and you will look like a bad developer because that other guy got the same thing running with hardware acceleration. Or you could use a different technology.

Why is this bad? Because apparently this signing, with its $50k threshold, targets the enterprise, and small developers seem to be acceptable collateral damage. But thinking about the next five to ten years: who is going to write ActionScript code if it is no longer attractive to play around with it in the first place?

We still rely on the Flash Player at audiotool.com. I am still developing for it and we will probably have to use it as long as there is no alternative. Me no longer supporting open-source tools just means I am no longer spending my personal time on a platform that I would not use for private projects. Work is of course not always about fun. But fortunately I am able to spend 90% of my time writing Scala code.

I will finish this blog post with some bad karma:

It’s also worth noting that the new Adobe license will prohibit scenarios where you’d have the first level of a game in the Flash Player, and the full experience inside the Unity Web Player. Alas, this is something you’ll need to be aware of if you were considering such a route.

You will not only pay for the features. You are also welcome to cede some of your rights.

Project Hiddenwood

This year’s FOTB was special. At the end of my session I showed a sneak preview of project Hiddenwood. I demonstrated complete playback of Audiotool tracks on stage, in a browser. Now that does not sound too special…

But then again, the playback was done using JavaScript only and calculated in realtime.

Audiotool is a complex piece of software, so you might ask why anyone would torture themselves by implementing it in JavaScript. We didn’t. Instead, a couple of months ago we started building our own vision of a cross-platform application framework.

Introducing project Hiddenwood.

Hiddenwood is a collection of libraries and tools specifically designed to support different devices and platforms. The core libraries form the “driver layer”: they are always platform-specific but expose a platform-independent interface.
On top of that we provide a basic layer of libraries, like our UI system, animation framework and managed collections, which guarantee 0% garbage-collection activity and have been battle-tested in Audiotool.

The framework is all about speed and consistency. The rendering pipeline is optimized for OpenGL, and although we offer something similar to Flash’s display list, a lot of features are not available because they would compromise speed.

Speaking of speed: we are always interested in staying as native as possible on our target platform. So for the browser we emit JavaScript, on Android you get full DalvikVM performance and on the desktop you get JVM performance. This approach has another very important benefit: if you want to go platform-specific for certain features, you can.
For instance, rendering Audiotool songs on the server using a fork-join pool for our audio calculation is possible there, even though it might not make sense on an Android device.
You write Java code, and the supported platforms are native desktop applications, Android (minimum requirements are Gingerbread and OpenGL ES 2.0) and modern browsers. For browsers we even go one step further and support multiple options: if WebGL is not available we simply fall back to a normal canvas-based render engine. The same applies to some of the Android drivers.
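
To give you an idea of what the driver layer means in practice, here is a rough sketch: a platform-independent interface, two platform-specific implementations and a capability check that falls back from WebGL to a plain canvas. The names (GraphicsDriver, WebGlDriver, CanvasDriver, Platform) are made up for this post and are not Hiddenwood’s actual API.

// Hypothetical sketch of the "driver layer" idea; not Hiddenwood's real API.
// Application code only ever sees the platform-independent interface.
interface GraphicsDriver {
  void clear(float r, float g, float b);
  void drawQuad(float x, float y, float width, float height);
}

// Would be backed by WebGL calls once the Java code is compiled to JavaScript.
class WebGlDriver implements GraphicsDriver {
  public void clear(float r, float g, float b) { /* gl.clearColor(...) + gl.clear(...) */ }
  public void drawQuad(float x, float y, float width, float height) { /* buffers, shader, drawArrays */ }
}

// Fallback that would be backed by the 2D canvas context.
class CanvasDriver implements GraphicsDriver {
  public void clear(float r, float g, float b) { /* context.fillRect over the viewport */ }
  public void drawQuad(float x, float y, float width, float height) { /* context.fillRect(x, y, width, height) */ }
}

final class Platform {
  // In the browser this would probe for a "webgl" context; on Android it
  // might probe for OpenGL ES 2.0 support instead.
  static boolean supportsWebGl() { return false; }

  static GraphicsDriver createGraphicsDriver() {
    return supportsWebGl() ? new WebGlDriver() : new CanvasDriver();
  }
}

class Demo {
  public static void main(String[] args) {
    GraphicsDriver graphics = Platform.createGraphicsDriver();
    graphics.clear(0f, 0f, 0f);
    graphics.drawQuad(10f, 10f, 100f, 50f);
  }
}

The application code stays the same on every platform; only the driver behind the interface changes.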

iOS is of course important as well and we are actively researching the best option that will give us the most flexibility and performance.

We are currently working on two real applications built with Hiddenwood. So far it is a real pleasure to enjoy quick build times and simply test what you want on the desktop with great debugging capabilities. When you are ready you can try the same app on Android or in the browser — which might take a little bit longer to compile.

Because we see Hiddenwood as an application framework there are a lot of goodies built in, like a sprite-sheet-based class generator. Think Image mixerBackground = Textures.mixer.background(); where mixer is the folder name and background is the name of the file.
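
As a sketch of what such a generator might emit for that example: only the Textures.mixer.background() shape comes from the line above; the Image type, the sheet name and the coordinates are invented here for illustration.

// Hypothetical generator output, not Hiddenwood's actual code.
class Image {
  final String sheet;
  final int x, y, width, height;

  Image(String sheet, int x, int y, int width, int height) {
    this.sheet = sheet;
    this.x = x;
    this.y = y;
    this.width = width;
    this.height = height;
  }
}

final class Textures {

  static final Mixer mixer = new Mixer();

  static final class Mixer {
    // "mixer" was the folder, "background" the file packed into the sprite sheet.
    Image background() {
      return new Image("sprites.png", 0, 0, 512, 256);
    }
  }

  private Textures() {}
}

class Usage {
  public static void main(String[] args) {
    Image mixerBackground = Textures.mixer.background();
    System.out.println(mixerBackground.width + "x" + mixerBackground.height);
  }
}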

We believe that as a developer you really do not care about what kind of technology you are using and just want a great result. We also think that you should be able to reuse platform-independent code across multiple projects. However we do not want to take power away from the developer because if you know what you are doing: go for it.

Of course we are not the only ones with this idea. Nicolas Cannasse saw the signs years ago and invented haXe, which gives you a comparable experience, and Google released playN a couple of weeks ago, which takes a similar approach (and requires another 25 committers :P).

But when we started Hiddenwood we wanted the Java tooling experience, and playN was not public at that time. We also think that a game engine is not what you want to use for all kinds of applications. So we like giving people the freedom to build their own game engine on top of Hiddenwood, and perhaps calculate physics in a different thread.
Speaking of threading: the only solution that works across all platforms is a shared-nothing architecture, which is what we put in place. However, if you write platform-specific code you can of course use everything the platform offers, and a lot of the Hiddenwood core libraries, like the network or cache layer, make use of multiple threads.

In the end, what makes Hiddenwood special in my opinion is that we do not believe in write once, run anywhere, because that just does not make sense. The essence and philosophy behind Hiddenwood is to write platform-agnostic code using kickass libraries and being able to reuse it. Audiotool on a tablet would look completely different from Audiotool running in a browser. And Audiotool on iOS would probably also be a little bit different from Audiotool on an Android device, because there are simply different paradigms you should respect.

I hope that we can share more information with you soon. With the news of the mobile Flash Player being deprecated and the ongoing demand for cross-platform development, we have exciting times ahead of us. I am also super excited about the (beautiful <3) applications which we are going to release in the not-so-distant future.

First look at Dart

In this post I want to give you a short introduction to Google’s new language called Dart.

Personally I am quite disappointed that Dart looks a lot like Java 8 with some tweaks here and there. Although the language is in its early stages, I wonder why pattern matching was not a top priority for the language team, since Dart relies heavily on message passing. More on that in a second.

Since Dart should scale from small scripts to full applications, it is nice to see support for generics, optional typing and first-class functions from the start.
However, that makes Dart look more like a better Java than like JavaScript.

What makes Dart more of a web language is how you perform concurrent computations: via shared-nothing message passing, the same way you do it with workers in JavaScript, actors in Erlang or Scala, and hopefully quite soon in the Flash Player. We should not forget Go in this equation, since it also explores concurrency via channels that can send and receive messages. If we look a little more closely at Dart we can see some of the same ideas.

A worker in Dart is called an Isolate [1]. This makes sense, since it runs completely isolated from the rest of your program. What is nice about the Dart approach is that you no longer have to deal with files when using an Isolate. JavaScript requires you to have some special file lying around somewhere that adheres to the Worker protocol. When I write my application I do not want to think about files that contain some special top-level logic. Especially, I do not want to generate a custom file at runtime using a blob builder.

Dart embeds this from the start, just like Go does. You will have a lot of Isolate objects communicating via a SendPort [2] and a ReceivePort [3]. A Promise<T> [4] is, just like Java’s Future<T>, a holder for a value that will be computed later. I just wonder why they did not implement read-only and write-only views of it. It would also be nice if Promise<T> were extended with methods like map and flatMap, because your code would be less spaghetti.
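
To illustrate what I mean, here is a tiny sketch of a promise with map and flatMap, written in Java. This is not Dart’s Promise API, just a toy class (single-threaded, no error handling) invented to show how such combinators remove the nesting we will run into below.

import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;
import java.util.function.Function;

// Invented for illustration only; not Dart's Promise.
class SimplePromise<T> {
  private final List<Consumer<T>> callbacks = new ArrayList<>();
  private T value;
  private boolean completed;

  void complete(T v) {
    value = v;
    completed = true;
    for (Consumer<T> callback : callbacks) callback.accept(v);
  }

  void then(Consumer<T> callback) {
    if (completed) callback.accept(value); else callbacks.add(callback);
  }

  <R> SimplePromise<R> map(Function<T, R> f) {
    SimplePromise<R> result = new SimplePromise<>();
    then(v -> result.complete(f.apply(v)));
    return result;
  }

  <R> SimplePromise<R> flatMap(Function<T, SimplePromise<R>> f) {
    SimplePromise<R> result = new SimplePromise<>();
    then(v -> f.apply(v).then(result::complete));
    return result;
  }

  public static void main(String[] args) {
    SimplePromise<Integer> n1 = new SimplePromise<>();
    SimplePromise<Integer> n2 = new SimplePromise<>();

    // With flatMap/map the combination of two results collapses into one chain,
    // with no Promise<Promise<int>> and no flatten() call:
    n1.flatMap(x -> n2.map(y -> x + y))
      .then(sum -> System.out.println("fib = " + sum));

    n1.complete(8); // fib(6)
    n2.complete(5); // fib(5)  -> prints "fib = 13"
  }
}

Keep this in mind for the flatten() discussion further down.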

Without further ado I would like to explain some code, now that you know how Dart works. Basically we calculate the n-th Fibonacci number with a very expensive approach; you would never do this in reality. Let us define fib(x) = fib(x - 1) + fib(x - 2) with fib(1) = 1 and fib(2) = 1. So fib(3) is fib(2) + fib(1), which is 1 + 1. Sorry to bore you to death.

If we implement this in pseudo-code we get something like this:

int fib(int x) {
  return x < 3 ? 1 : fib(x - 1) + fib(x - 2);
}

If you would like to make this multi-threaded with Scala you could wrap the calls to fib into a Future[Int] like this:

import scala.actors.Futures.future // blocking futures; n1() waits for the result

def fib(x: Int): Int =
  x match {
    case 1 | 2 => 1
    case _ =>
      val n1 = future { fib(x - 1) }
      val n2 = future { fib(x - 2) }
      n1() + n2()
  }

We want to do the same with Dart. Since it is a language targeted at the web, we get no access to blocking calls. The Scala code blocks when n1() or n2() is called; in layman's terms, we wait until the value has been computed.

Since there is no support for continuation-passing style like C#'s async, we have to write a lot of callbacks now.

This is the entry-point for the Dart version of this:

main() {
  int n = 7;

  print("Computing fib($n) ... ");

  new FibIsolate().spawn().then(
    (port) =>
      port.call(n).receive(
        (value, port) => print("fib($n) = $value")
      )
  );
}

First of all we see string interpolation, which is nice. Then we create a FibIsolate, which I will show you in a second. When you create an Isolate it does not do anything, so we have to call spawn() to perform the actual computation. However, spawn returns only a Promise<SendPort>, which we can use only once it is available. This is done with the then method, which takes a function as an argument.
There are multiple ways we could write this: then((x) => ...) is the same as then((x) { ... }). You are only allowed to use the first form for a single expression, but in return you do not need to write all that boilerplate. In fact you are even allowed to omit the semicolon. Hell yeah!

So when we get a SendPort, which is the case when our function is invoked, we can send some value via the call method. Why use call and not send? Because call returns a ReceivePort which allows us to wait for the result. We do this by calling receive on the ReceivePort with a closure that prints the value we get back.

Now let's have a look at the implementation of FibIsolate.

class FibIsolate extends Isolate {
  main() {
    port.receive((n, replyTo) {
      switch(n) {
        case 0:
          replyTo.send(0); break;

        case 1:
        case 2:
          replyTo.send(1); break;

        default:
          Promise<int> n1 = new Promise<int>();
          Promise<int> n2 = new Promise<int>();

          new FibIsolate().spawn().then(
            (port) =>
              port.call(n - 1).receive(
                (n, port) => n1.complete(n)
              )
          );

          new FibIsolate().spawn().then(
            (port) =>
              port.call(n - 2).receive(
                (n, port) => n2.complete(n)
              )
          );

          n1.then(
            (x) => n2.then(
              (y) => x + y
            )
          ).flatten().then(
            (x) => replyTo.send(x)
          );
      }
    });
  }
}

First of all we have to extend Isolate. The main() function of an Isolate is its entry point. Now, something that really bugs me is the amount of indentation. Nearly every second Isolate you are going to write has this form, and your actual logic starts at the fifth indentation level.

In the main method we immediately start listening for messages via the Isolate's ReceivePort. Then we need to react to the message via a switch statement. Although it might look simple in this case, it is very sad that Dart does not come with proper pattern matching, since you will need to write a lot of switch statements in Dart code. If you have ever seen pattern matching in action in Scala, you never want to go back to a dumb switch statement.

In the default case it gets a little bit more interesting. We need to spawn two new instances of FibIsolate since we want to do this recursively. We need to receive their results, and when we have both we want to perform the addition and send our value back to the caller.

Since there are no blocking calls we create a Promise<int> for each FibIsolate. We can complete the Promise<int> once we receive a value from an isolate. This allows us to combine both Promise<int> objects and wait on their results. Why do we not nest the FibIsolates, you might ask? Because we want to spawn both at the same time so the computations can happen in parallel.

The actual nesting happens in the n1.then((x) => n2.then((y) => x + y)) construct. However, the return type of that expression is no longer Promise<int> but Promise<Promise<int>>. Fortunately someone thought about this case and there is a flatten() method which turns the Promise<Promise<int>> into a Promise<int>, which we just have to await before finally replying with the computed value.

However, remember the definition of fib(x)? The cases for 0, 1 and 2 are quite simple because we can reply with the value immediately, and we do this by calling send on the SendPort which I named replyTo.

You can take a look at the full example running in the browser here.

What I like about Dart is that someone thought about workers and made them first-class citizens. Pattern matching is on the roadmap, and they will hopefully copy Scala. There are, however, a lot of things that I dislike:

  1. Semicolons are required but optional for the shorthand (x) => x
  2. The return keyword is required but optional when using (x) => x
  3. No control over concurrency. Are my isolates CPU- or I/O-bound?!
  4. No syntactic sugar for writing an Isolate
  5. Not DSL friendly like Scala or maybe even Kotlin
  6. int disguises itself as a primitive but is an object instead
  7. Missed the opportunity to embed continuation passing style into the language from the start
  8. Function is the only function-type

The type system is debatable as well; I have not found anything about type erasure yet. Personally, Dart feels much more like Java and Go than like JavaScript. But in the end I would be quite happy if Dart could replace JavaScript, since from my personal point of view it is a step in the right direction.

[1] http://www.dartlang.org/docs/api/Isolate.html#Isolate::Isolate
[2] http://www.dartlang.org/docs/api/SendPort.html#SendPort::SendPort
[3] http://www.dartlang.org/docs/api/ReceivePort.html#ReceivePort::ReceivePort
[4] http://www.dartlang.org/docs/api/Promise.html#Promise::Promise