Decrease the JavaScript Pain… with TypeScript?

Rethinking Things

My recent post on the many available game development platforms was prompted by a pressing existential dilemma. My favourite platform for making games (JavaScript / the “Open Web Runtime”) was giving me development pain that I wasn’t willing to tolerate. I had hit that wall with JavaScript where the scale of my project made things unmanageable, and I was wasting far too much time debugging annoying little mistakes. I was over it, and was beginning to think that if I was going to experience this much discomfort I might as well develop in C++.

I started delving into the most recent version of Cocos2d-x, and it was good. Building for desktop and the iOS simulator worked well. Cocos2d-x is a great engine, and I’d happily use it for a serious project, but after a few unfruitful hours lost trying to get it to build to my Android device I was remembering the true pain of working with C++. This got me thinking again… my goal is to MAKE GAMES, to enjoy making them, and to get them out in the world for people to play. Right now I’m more interested in iterating on my ideas than making a large-scale game. Surely there was some kind of middle ground? Flash or OpenFL sit around that middle zone, but for reasons stated in my last post, they don’t work for me.

Then I started to reconsider my stance on the TypeScript language. I have a gut-level reaction against Microsoft technology, but TypeScript is open source, and outputs JavaScript that is close enough to hand-written that you can read it and understand how it fits in with your hand-written code. It doesn’t feel like too severe a lock-in, especially since the code you write will eventually be roughly compatible with ECMAScript 6 when it finally arrives.
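As a tiny, hypothetical illustration (this class is invented here, not taken from my project), typed code like the following compiles down to a plain prototype-based JavaScript function expression that reads much like what you would write by hand:

```typescript
// A small typed class — the TypeScript compiler turns this into a
// readable, prototype-based JavaScript function you can debug directly.
class Vec2 {
    constructor(public x: number, public y: number) {}

    add(other: Vec2): Vec2 {
        return new Vec2(this.x + other.x, this.y + other.y);
    }
}

var v = new Vec2(1, 2).add(new Vec2(3, 4));
console.log(v.x, v.y); // 4 6
```

Meanwhile the compiler will flag a misspelled call like `v.addd(...)` at compile time, which is exactly the kind of mistake that used to cost me debugging time in plain JavaScript.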

I figured, if I was prepared to walk away from JavaScript, maybe TypeScript could allow me to keep all the breezy ease of development and creative expression, while giving me the features I craved, such as auto-completion, jump-to-definition, etc.

The announcement that TypeScript has reached version 1.0 was made less than two weeks ago, so this was the perfect time to take a real look at what it had to offer.

Finding an IDE

My first obstacle was that I develop on OSX, and at this stage TypeScript is only really supported in Visual Studio. On any OS, my favourite code editor is Sublime Text. It has some support for TypeScript, but it is incomplete: you’ll get partial auto-completion and error highlighting, but no in-code messages telling you what errors you have made, as you would get in Visual Studio.

Another cross-platform editor for TypeScript is CATS. The project is promising, and autocomplete and error warnings are functional, but the editor itself, at least on OSX, has some problems. It is still alpha software, so it’s not really ready for serious use. As a side note, it is built on Node-Webkit, so +1 for that. ;)

In the end the best setup I was able to find for now was using an Eclipse plugin from Palantir. I’m not a fan of Eclipse, but I’m willing to use it until something better comes along.

It seems very likely that more and more IDEs will support TypeScript, as it has been made relatively easy using TypeScript Tools. From the GitHub page:

TypeScript-tools (v0.2) provides access to the TypeScript Language Services (v0.9) via a simple commandline server (tss). This makes it easy to build editor plugins supporting TypeScript.

This approach is a good move and I’m sure it will help the language to flourish.

Let’s Do It

Once I had my environment set up I found that it was easy to set up a hybrid TypeScript and JavaScript project. For me the most important proof of concept was to be able to work with the pixi.js rendering library. In order to work with JavaScript libraries you need type definition files, which are like interfaces that describe classes and functions so TypeScript can do its thing.

A valuable resource for TypeScript developers is DefinitelyTyped, a repository of TypeScript definition files for a large number of popular JavaScript libraries. Pixi.js was in there, which made me happy. Type definition files are nothing special, and you can easily write them yourself if you want to work with your own JavaScript code.
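As a sketch of what writing one yourself looks like (the library and its members below are invented for illustration):

```typescript
// mylib.d.ts — a hypothetical declaration file describing an existing
// hand-written JavaScript library, so TypeScript can type-check calls into it.
declare module MyLib {
    export class Sprite {
        x: number;
        y: number;
        constructor(imageUrl: string);
        move(dx: number, dy: number): void;
    }
    export function version(): string;
}
```

Reference the file from your TypeScript code (e.g. `/// <reference path="mylib.d.ts" />`) and the compiler will offer completion for `MyLib.Sprite` and flag any misspelled members, without the JavaScript implementation ever being touched.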

Using the DefinitelyTyped PIXI type definition file, I quickly got some bunnies spinning on the screen, and immediately felt the benefits of autocompletion and jump-to-definition, the two IDE features no programmer should really ever have to do without. (At least if they want to stay sane.)

I can really see this setup working for me. It will allow me to go a lot further with this runtime than I would be willing to go otherwise.

The Great Game-Dev Platform Showdown

I came to the games industry from a web development background, originally as an Actionscript 3.0 programmer. Over the last couple of years the casual games industry I got started in has become less and less web-focused. Mobile is where the market is now. Working at a game studio that has traditionally made Flash games for the web, I find myself participating in a huge amount of discussion about different game development platforms, and which ones are the best / most suitable / most productive etc. Is it Unity? Is it C++? HTML5? Should we write custom code or use a pre-existing engine or framework?

For my personal game development, there are several factors that define my decisions:

- Strong preference for open source technologies.

- Extreme lack of time to waste re-inventing the wheel, or doing anything other than making a game.

- Need to deploy to multiple platforms, both desktop and mobile.

- The goal of building a long-term body of game code that I can re-use and iterate for future work.

This article is a comparison between the various game development platforms that I’ve considered over the last few years.

Note: Since I’m primarily interested in making 2D games, I’m not mentioning any 3D engines at all.

JavaScript / “Open Web Runtime”

As Flash fell from favour on the web, I transitioned from ActionScript to JavaScript, and that’s where I’ll start this technology showdown. I think of JavaScript as much more than a web technology. The “Open Web” is a runtime capable of deploying to pretty much any platform, and is in many ways the most portable runtime of all.

In recent years I’ve been really obsessed with JavaScript. After Flash went into decline I fell in love with the language, and have followed the growth and development of the “Web Runtime” very closely. (I avoid the term “HTML5” because it is too limited and excludes other important technologies such as WebGL.)

Projects like Node-Webkit, Crosswalk, CocoonJS, Ejecta, and XDK make it possible and practical to deploy applications to every major platform as “native” apps. Certainly for desktop applications the runtime is sufficient to build many or most of the indie games that I love best. Using WebGL frees the CPU to do important game logic, and V8/Chromium-based wrappers have very good performance everywhere but on iOS, where JIT is disabled.

Up until recently I was really close to feeling like I was willing to go all-in with this technology for my personal work. I could accept the performance limitations in exchange for the benefits, especially ease of deployment. Then I had a sudden change of heart. After coming back to a quite large code base after a break of a few months, I kept finding myself asking, “what was the name of that function?”, “what was that variable called?”, “why is this object prototype not inheriting properly from this parent class?” and so on. I realised I really missed auto-completion and code-intel. Now that the project had reached a certain size, debugging was also feeling very drawn out and tricky without compile-time error messages and warnings to show me the way to problems before they occurred. I’ve been so in love with JavaScript for so long that this experience actually represented quite an existential shift for me, and was responsible for this reassessment of the available alternatives.


Pros:

- Effortless deployment to many platforms.
- JavaScript is fun, expressive, and quick.
- Great libraries like PIXI.js help you get stuff done.


Cons:

- Performance is an issue, especially on iOS.
- After a certain point, large JavaScript projects become hard to manage and development isn’t so much fun.


Unity

A lot of people really seem to love Unity. I have not really used it, but from the stories I hear I can see why it is an excellent choice for a lot of teams. The artists can get involved straight away, and the integration between editor and code IDE has a lot of benefits. However, I have always had a strong resistance to using it. It just isn’t compatible with my obsession with open technologies. I don’t like the idea of being locked in to proprietary technology, or of developing in C#. I want to iterate my code base over the course of my lifetime, and C# simply isn’t the language I want my code to be written in. For an individual or company who really wants to get the job done and ship a product for multiple platforms, Unity is probably the best choice, and worth the price tag. For my personal projects I’m just not interested in it.


Pros:

- Effortless deployment to many platforms.
- Well-integrated editor / IDE.
- Unity Asset Store for pre-built components. (Avoid re-inventing the wheel.)


Cons:

- Proprietary technology (lock-in).
- Expensive.
- Support for 2D games is very new.


HAXE / OpenFL

HAXE/NME, now rebranded as the OpenFL platform, is another option for developers who want to deploy to multiple platforms. HAXE is a nice programming language, especially if you are coming from an ActionScript and Flash background. When I evaluated OpenFL I found it easy to set up and build the test projects for the various targets, including directly to an Android device. I’ve heard that things can get fiddly at times, and you have to be aware of which APIs will perform well on your target platforms, since each target handles them differently.

I’ve always thought HAXE was really cool, but when I weigh up using any language or technology over the long term, I’m not willing to spend my time on it unless it is widely used and backed by at least one large company with a big investment into its success. For the right project it could be a great choice, especially if you know ActionScript well.


Pros:

- Easy to set up.
- Cool language with a good balance between power and ease of use.
- Deploy to many targets.
- Use familiar APIs if you have an ActionScript background.
- Enthusiastic scene.


Cons:

- Not widely used enough (at least for my liking).


SFML

I recently evaluated SFML, and initially really liked it. It has a very clean and simple API that reminded me a little of PIXI.js. It supports gamepads out of the box, as well as audio, networking, and of course graphics rendering.

SFML is nicely broken down into several modules for doing separate things. This seems like a great design choice.

One of the deal-breakers for me was the lack of batch rendering support. Implicit batch rendering is supposedly planned behind the scenes, but not allowing explicit batching felt like a missing feature. It would be possible to set up your own batching if you liked the framework enough to invest the time. There was also no built-in support for sprite atlases or animations, so you would have to write those yourself. Another feature that would require implementation is a scene graph, if you are used to having one. (As Flash developers often are.)

SFML does not currently support mobile, but this is planned for the near future (version 2.2).


Pros:

- Clean, simple API.
- Very well documented.
- Modular design.
- Does most of what you want without dictating too much how you should do things.
- Great starting place for building your own engine.


Cons:

- Missing a few key features you need if you want to actually make a game.
- Mobile support not quite there.
- Small dev team – who knows when feature X will come out?


SDL

I haven’t personally used SDL much, but it is often compared to SFML in terms of the features it offers. It is a good place to start if you are interested in writing a game engine, but probably not the best choice if you really want to make a game. I believe it is a bit more widely used than SFML, so you might have more luck finding open source classes that can be used with it.


Pros:

- A good starting place.


Cons:

- Only a starting place.


Cocos2d-x

For those willing to work in C++, the big player in open source / cross-platform game development is Cocos2d-x. Like Unity, it does in some ways force you to do things the “Cocos2d way”, but in terms of the features it supports it is not really lacking. It is widely used, and shares many APIs with its Objective-C cousin, Cocos2d, which has been used for hundreds of commercial games.

Because the engine is focused on mobile it does lack a few features on desktop, most notably support for Gamepads and Keyboard input. Luckily it turned out to be easy to integrate SFML into the desktop build to get these features.

The original developer of Cocos2d in Objective-C, Ricardo Quesada, appears to have left Zynga, where he was hired to work on the Objective-C version. He has moved to Chukong Technologies, the company responsible for making Cocos2d-x in C++. I think this is a great sign for the engine, and indicates that the C++ version is likely to overtake the Objective-C version as the leading open source engine for mobile games. After all, why would you develop for iOS only, when you can get all of those extra platforms for only a little more effort?

Chukong Technologies is a very successful company, at one point earning 6 million a month on their game Fishing Joy, made with Cocos2d-x. It’s good to know that the engine is backed by a successful company.


Pros:

- Very complete feature set.
- Good support for multiple platforms.
- Backed by a successful company.
- Widely used, with a growing user base.
- A code base written in C++ may have the most ongoing value.
- Emscripten deployment to the web is mostly functional.


Cons:

- C++ is a more challenging and less productive language to develop in.
- Much more work to maintain a multi-target build.


Conclusion

I find myself swinging back and forth between the two extremes of JavaScript and C++. I keep coming back to JavaScript for its ease of deployment and high-level programming fun. On the other hand, C++ gives you the best possible performance, but at the cost of extra work when it comes time to port and deploy to your target platforms.

My feeling is that programming always involves a bit of pain. You just have to decide which kind of pain you find the most tolerable. Development pain, deployment pain, porting pain, debugging pain, every technology has weaknesses you’ll have to work with. You have to decide what your objectives are, both in the short and the long term.

How to Set Up Gamepad Support in Cocos2d-x with SFML

Although Cocos2d-x compiles easily on desktop, one of its few limitations right now is lack of support for Gamepad and Keyboard input. I recently evaluated SFML, and although I liked it a lot I still see Cocos2d-x as being a better choice if you just want to get to the important business of making a game, rather than spending a lot of time building a game engine.

However, there was still this problem of no Gamepad. SFML supports gamepads out of the box, so I decided to see how easy it would be to include SFML into my Cocos2d-x project. Happily, it turned out to only take a few minutes to get working.

NOTE: I was using Cocos2d-x v3.0rc1 and was building for desktop on OSX. I had already installed SFML on my system; don’t forget to get that set up before trying to integrate it with Cocos2d-x. SFML comes with easy-to-follow instructions for installing the pre-built binaries.

Because I had been using the SFML templates for Xcode, I hadn’t had to set it up from scratch, so I started by reading the instructions on how to set up SFML on each of the three desktop platforms. Because I’m on OSX, I followed the Xcode instructions.

Basically, on OSX, you need to add the SFML frameworks and add a framework search path.

SFML is nicely broken into separate modules, and it turns out that you only need one of them to support gamepads – the “sfml-window” framework. To add it, open the project settings panel, and under “Other Linker Flags” add the line:

-framework sfml-window

Then, under “Framework Search Paths” add:


After that, my test app compiled, but the gamepad was not detected. I read the documentation on the SFML page, and found this line:

“if you have no window, or if you want to check joysticks state before creating one, you must call sf::Joystick::update explicitly.”

Since I was not opening an SFML window, I needed to call the sf::Joystick::update() function each frame to be able to read the joystick.

After that my PS3 controller was detected. :) The relevant test code looked like this:

// headers needed by this snippet:
#include <cmath>                    // for fabs
#include <iostream>                 // for cout
#include <SFML/Window/Joystick.hpp> // the SFML joystick header

using std::cout;
using std::endl;

// convenience function for snapping small values into the dead zone:
float snapToZero( float value, float threshold ) {
    if ( fabs( value ) < threshold ) {
        return 0;
    }
    return value;
}

// ... inside the update() function:

// manually update the joystick state every frame
sf::Joystick::update();

// let's see if the joystick is connected:
cout << sf::Joystick::isConnected( 0 ) << endl;

if ( sf::Joystick::isConnected( 0 ) ) {
    // joystick number 0 is connected!
    float deadzone = 5;
    float x  = snapToZero( sf::Joystick::getAxisPosition( 0, sf::Joystick::X ), deadzone );
    float y  = snapToZero( sf::Joystick::getAxisPosition( 0, sf::Joystick::Y ), deadzone );
    float x2 = snapToZero( sf::Joystick::getAxisPosition( 0, sf::Joystick::Z ), deadzone );
    float y2 = snapToZero( sf::Joystick::getAxisPosition( 0, sf::Joystick::R ), deadzone );
    // output the stick coordinates
    cout << x << " " << y << " " << x2 << " " << y2 << endl;
}

Although these instructions are for OSX, I'm assuming that SFML will work just as well with Cocos2d-x on Windows or Linux. If you have any experience with this setup on those platforms, please feel free to comment.

Pixi.js – First Impressions


A few days ago a coworker sent me a link to a very new HTML5 2D graphics library called Pixi.js, telling me to “Check this out.” I did check it out and was immediately very pleased with what I saw.

Pixi.js arrived at the perfect time for me. I had been planning to start working again on the HTML5 rewrite of an unfinished Flash game. I had already made my own Canvas2D engine, but was considering switching to EaselJS, to conform with my general philosophy that it is better to get on with making a game than to build an engine. When I saw Pixi.js I instantly knew that I wanted to use it for my project.

The thing that makes Pixi.js so appealing to me is that it is primarily a WebGL renderer, so it prioritises the optimal performance environment in the browser, but it has a fallback to the standard 2D canvas context, so it will work in all modern browsers. The great thing about this is that the most common case where it will fall back to the 2D context, at least on desktop, is Internet Explorer, which has a decent hardware-accelerated 2D canvas element.

Pixi.js has only just been released to the public, but it has hit the scene in very tidy shape. Good Boy Digital has obviously planned this initial release well. They have ensured that the documentation is well presented, and have built an attractive and impressive demo game that shows off what the engine can do.

The demo game is instant proof that Pixi.js can offer the power needed to make a great game in the browser using WebGL. There are also some benchmarks that get more and more exciting the more bunnies or pirates you add to the scene.


The documentation gives a very concise overview of how simple and usable Pixi.js is. Unlike projects like EaselJS, it offers only the features you really need. You can see at a glance what it does. It doesn’t try to be the new Flash, it just gives you what you need, in an API that will be familiar to any Flash developer. It has the standard objects: Stage, DisplayObject, Sprite, MovieClip, Texture etc. It has a full hierarchical display list, and supports JSON sprite atlas loading for animations. In other words, it has exactly what you want, and nothing more. No doubt it will in time gain many more features, both from Good Boy Digital and from the community, but for now it seems good to go for a serious project.

I’ve really been enjoying using Pixi.js, and getting on with making my game. Thanks, Good Boy Digital! :)


ThreeJS & Blender – Exporting Skeletal Animations

It took me a bit of experimentation and online research to get skeletal animation to work with the ThreeJS Blender exporter. I have compiled some tips that may help others who are in the same situation. Note that at the time of this writing, support for skeletal animation is still considered experimental, and from what I can tell from the GitHub wiki it has only become relatively stable in the last few months.

These are some pitfalls / things to check if you are struggling to get things working. Note that they mostly don’t apply to exporting Morph Target animations.

Ensure the scale is appropriate.

This is more of a general tip for using the exporter. With default settings your model may be much too small for the scene. The ThreeJS examples use a scale that is much greater than that of the Blender default scene. I set the export scale to 50 for the model to look right in my ThreeJS scene.

Delete the Armature Modifier before exporting.

It appears that you have to delete the Armature Modifier from your mesh object before using the exporter, or the animations come out distorted. The bones will still be included in the export data, but if the armature modifier is on, the animation will be broken. This does not seem to be widely documented, and I only came across it in a discussion on the GitHub wiki.

Check your Vertex Groups.

When using skinning, ThreeJS will not render parts of the mesh that have not been assigned to any bones. This can really trip you up when you are getting started. The fact that you have to delete the armature modifier may make you think you don’t have to assign it in the first place, but this is incorrect and will potentially cause the model to appear invisible or incomplete. You must have correctly assigned vertex groups on your mesh. The easiest way to achieve this is to use automatic weight generation when you assign the armature modifier. Once assigned, you can delete it again immediately and the vertex groups will remain. Examine the vertex groups in the Object Data panel, or go into weight paint mode and make sure the mesh has been assigned to the bones. Also be careful to delete all the vertex groups that you don’t need or you may get errors from ThreeJS. ( Basically you want meaningful data only in your export, and nothing else. )

Key all bones in the first and last frames of your animation.

I found that I had to insert a keyframe for every bone of the armature in the first and last frames of the animation, to describe the initial pose and the final pose. Even if the animation looked fine in Blender, without these frames the mesh would rotate or twist strangely in ThreeJS, or would be missing the last part of the animation. Create the keyframe in Pose Mode by pressing ‘a’ to select all the bones, then press ‘i’ to insert a keyframe for ‘LocRot’ ( ie location and rotation. )

[edit] I don’t yet understand why, but it seems that you have to key the last frame of the animation as well, or animations will be broken in the middle.

Have something to add to the list? Have I made any mistakes? Is anything out of date? Please let me know in the comments…


ThreeJS – First Impressions

I’ve recently been taking a look at WebGL libraries, and I feel that Three.js is the most promising library available at this time. I also have huge respect for Mr Doob as a developer-of-cool-things, and would like to support and get involved with this project. It appears to be evolving rapidly, and many of the more cutting edge features have only been added within the last few months.

The API is very simple to understand and use, and it only takes a few lines to set up a 3D scene. There is the usual assortment of primitives, and several importers for different file formats. Of most interest to me is the Blender exporter tool, which provides a simple pipeline for getting models into a Three.js scene. Support for animation is one area that is still in development, and considered “experimental”. Morph targets worked first time for me, but it took a bit of finessing in Blender to get the skeletal animations to work correctly. I expect that in time this area will become very stable as it is used by more people.

The library also has a variety of post-processing effects that can be used on a scene with relative ease, and it supports custom shaders, so there is a lot of flexibility there.

This is an important time for developers who work with browser-based technology. WebGL and WebAudio are the most exciting to me personally, as a game developer. Three.js is a great starting point for experimenting with WebGL, especially if you are more interested in really getting something done than just “checking out the tech.” One of the reasons I admire Mr Doob’s work is that he has always used the technology to create wonderful and innovative experiments and experiences. Now he is helping to enable others to do the same… Thanks Mr Doob, you are awesome!

Seeded Perlin Noise in Javascript

This is a translation of the original code by Ken Perlin into JavaScript. The seeding is dependent on the seeded random function included below, which is a quick translation of the seeded random by Michael Baczynski.

…and the seeded random:
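As a sketch of the kind of generator involved (to my knowledge Baczynski’s seeded random is based on the Park–Miller “minimal standard” generator; treat the exact constants and class shape here as an assumption, written in TypeScript for readability):

```typescript
// Sketch of a Park–Miller style seeded random — the same seed always
// reproduces the same sequence, which is what makes the noise seedable.
class SeededRandom {
    private seed: number;

    constructor(seed: number) {
        // the working seed must end up in [1, 2147483646]
        this.seed = seed % 2147483647;
        if (this.seed <= 0) this.seed += 2147483646;
    }

    // returns a float in [0, 1)
    next(): number {
        this.seed = (this.seed * 16807) % 2147483647;
        return (this.seed - 1) / 2147483646;
    }
}
```

Seeding the Perlin permutation table then just means shuffling it with this generator instead of Math.random(), so the same seed always yields the same noise field.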

Battle Panic

Last month, just before the Easter holiday, Ninja Kiwi released its latest game: Battle Panic.

I was coding this project for about five months, since the previous October. I had the excellent experience of being provided with quite a lot of completed art and character animation before I even started, by the amazingly talented / jealousy-inducing artist and animator Warwick Urquhart, so it was looking good right from the beginning. No programmer art in sight. It was awesome working with Warwick on this one. THANKS WORIC!!!

I got to do some of my favourite kinds of development: making little autonomous guys run around ‘interacting’ ( fighting ) with each other.


2D Ray Casting on a Grid in AS3

A long-postponed re-write of my ray casting code:

The method for traversing the tile grid is based on an algorithm described in this paper: A Fast Voxel Traversal Algorithm for Ray Tracing, by John Amanatides & Andrew Woo. I came across it while reading this good article on collisions for tile engines by Metanet Software.

In the next version the castRay() function will return an object containing the edge normal and ID of the hit tile. Later, I’ll add handling for fine collisions with tiles having “interesting” contours.

While in this example I’m using the ray to find a collision with a stationary obstruction ( a solid tile ), the same algorithm can be used as a broad phase for things like projectile collisions on moving targets. It would work nicely as an extension for Grant Skinner’s grid-based ProximityManager class.
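For anyone curious what the traversal core looks like, here is a hedged sketch of the Amanatides & Woo stepping loop (written in TypeScript rather than AS3, with invented names, and returning only the hit tile’s coordinates rather than the normal-and-ID object the AS3 version will return):

```typescript
// Sketch of the Amanatides & Woo grid traversal, assuming a simple
// boolean grid where `true` marks a solid tile (grid[row][column]).
type Grid = boolean[][];

function castRay(grid: Grid, ox: number, oy: number,
                 dx: number, dy: number,
                 maxDist: number): { x: number; y: number } | null {
    // current tile coordinates
    var tx = Math.floor(ox);
    var ty = Math.floor(oy);
    // direction to step in x and y
    var stepX = dx > 0 ? 1 : -1;
    var stepY = dy > 0 ? 1 : -1;
    // distance along the ray needed to cross one whole tile in x / y
    var tDeltaX = dx !== 0 ? Math.abs(1 / dx) : Infinity;
    var tDeltaY = dy !== 0 ? Math.abs(1 / dy) : Infinity;
    // distance along the ray to the first x / y tile boundary
    var tMaxX = dx !== 0 ? (dx > 0 ? (tx + 1 - ox) : (ox - tx)) * tDeltaX : Infinity;
    var tMaxY = dy !== 0 ? (dy > 0 ? (ty + 1 - oy) : (oy - ty)) * tDeltaY : Infinity;

    var t = 0;
    while (t <= maxDist) {
        if (grid[ty] && grid[ty][tx]) {
            return { x: tx, y: ty }; // hit a solid tile
        }
        // step into whichever neighbouring tile the ray enters first
        if (tMaxX < tMaxY) {
            t = tMaxX;
            tMaxX += tDeltaX;
            tx += stepX;
        } else {
            t = tMaxY;
            tMaxY += tDeltaY;
            ty += stepY;
        }
    }
    return null; // no hit within maxDist
}
```

Because each iteration advances exactly one tile, the loop visits every tile the ray passes through, which is what makes it reliable as a broad phase.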

I’m certain there are optimisations I could make at this point. All feedback is welcome…


How to Install FlashDevelop on OSX with Bridge

As a long-time OSX user, I managed to miss out on the joy of FlashDevelop until earlier this year when I started at my new job. When I went back to Mac, I really missed all of those convenient features. As much as I love the universal power of TextMate as an editor, I was completely converted to the pleasures of auto-complete, auto-import, etc.

I struggled for a long time to find a decent alternative for OSX but there really was nothing I liked. Eclipse had some badly supported plugins, and the alternatives were FDT and the expensive Flash Builder from Adobe. ( None of which were at all close to the clean usability of FlashDevelop, let alone the zen-like TextMate. )

Then, I happened to read about the FlashDevelop Bridge project. This has totally solved my AS3 development needs on OSX, with near perfect integration into the operating system.

The concept behind the Bridge is that you virtualise Windows and run FlashDevelop there. The Bridge runs as a server on the host ( OSX or Linux ) and talks to FlashDevelop over the divide. This communication allows FlashDevelop to signal the host OS to build using either the Flash IDE or Flex, ( ie, on the host operating system). This means the virtualised Windows doesn’t have to do any of the heavy lifting, and therefore won’t slow you down.

This is what you’ll need to get set up:

The latest FlashDevelop 4 developer build, and FlashDevelopBridge, both available here.

Virtual Box, a free download from Oracle.

Windows ( I used an old copy of XP, assuming it would be the lightest to virtualise. )

KeyTweak for Windows, or similar, to remap some keys so you don’t break your keyboard-shortcut muscle memory.


1 – Install Virtual Box.

2 – Install Windows on Virtual Box.

This is the only slightly involved part of the installation process. Just go with the defaults for the virtual machine. Here is a more detailed description of setting up XP in Virtual Box on OSX.

3 – Install FlashDevelop on Windows.

4 – In OSX, install FlashDevelop Bridge.


These are the setup steps for Bridge:

1 – VirtualBox: Virtual Machine Settings > Shared Folders, add ‘Dev’ as /Users/yourname/Dev

2 – MS Explorer: Tools > Map Network Drive, map Z: to \\VBOXSRV\Dev

3 – Mac Bridge: configure Z: as local /Users/yourname/Dev

4 – FlashDevelop: Program Settings > BridgeSettings, verify drive and set ‘Active’

5 – restart FlashDevelop

Note that you can call the shared folder whatever you like. Don’t forget step 2. This is easy to neglect because after you share the folder through VirtualBox, it will show up in Explorer, but you still have to map the network drive for Bridge to work.

As an optional but highly recommended final step, install KeyTweak on Windows, and remap the left command key ( Windows will see it as the left Windows key ) to be a control key. If you don’t do this it will drive you insane switching from ctrl-c to cmd-c keyboard shortcuts all the time. If you choose to do this step you’ll also have to open the Virtual Box preferences and on the “Input” tab change the ‘Host Key’ to be the right command key, rather than the left, since we are now using that for control. This is the key you press to release keyboard focus, and is basically the only taint on otherwise perfect OS integration. Before you can cmd-tab to another application you have to either press the host key to release focus, or click out of Virtual Box. It is a small price to pay for FlashDevelop on OSX, and you get used to it after a short time.

You might also like to turn on ClearType for better font rendering.

After you get things set up the way you want them, it’s a good idea to back up the virtual machine you created in Virtual Box. Then you can re-install, or install on multiple machines, in only minutes.

That’s it. The whole process was hassle-free for me. I hope you have the same experience.
