Technology Blog

An introduction to TypeScript and ES Modules

09/17/2020 17:20:00

JavaScript is everywhere, and TypeScript is JavaScript with some cool extra features.

You've probably heard of it - it's exceptionally popular, with lots of really mainstream JavaScript libraries and frameworks being built in TypeScript.

We're going to go through what a type is, why they're useful, and how you can use them without getting lost in configuration and tools.

First, let's understand what TypeScript is -

TypeScript extends JavaScript by adding types.

By understanding JavaScript, TypeScript saves you time catching errors and providing fixes before you run code.

Any browser, any OS, anywhere JavaScript runs. Entirely Open Source.

TypeScript is a programming language that is a superset of JavaScript - any valid JavaScript, is valid TypeScript - and it adds additional language features that get compiled down to vanilla JavaScript before it runs in your web browser. The most notable thing it adds to the language are types.

What are types?

The TypeScript pitch is pretty simple - "JavaScript with Types, to help prevent you from making mistakes in your code" - but when you start to Google around for what Types are, you end up with things like the Wikipedia page on computational type theory.

But we can translate this into simpler English - a Type lets you tell the computer that you expect data in a specific "shape", so that it can warn you if you try to use data that isn't in the correct format.

For example, this is an interface:

interface Animal {
    numberOfLegs: number,
    numberOfEyes: number
}

This interface is a Type definition - that says:

  • Animals have two properties:
  • numberOfLegs, which is a number
  • numberOfEyes, which is a number

In TypeScript you can just put an interface like that in your .ts files.

A .ts file? Well, that's just like a regular JavaScript .js file, except it can also contain TypeScript code.

When we create a JavaScript object that contains the properties or functions that we've declared in our interface, we can say that our object implements that interface. Sometimes you'll see people say the "object conforms to that interface".

In practice, this means that if you create an object, for it to be an Animal and be used in your code in places that require an animal, it must at least have those two properties.

// Just some object

const notAnAnimal = {
    blah: "not an animal"
};

// Cats are animals

const cat = {
    numberOfLegs: 4,
    numberOfEyes: 2
};

// You can even tell TypeScript that your variable
// is meant to be an animal with a Type Annotation.

const cat2: Animal = {
    numberOfLegs: 4,
    numberOfEyes: 2
};

We'll work through more examples later on, but first I'd rather look at what TypeScript can do for you.

Let's start by working out how we're going to run our TypeScript code in our browser.

Running TypeScript in our browser with snowpack

Snowpack is a frontend development server - it does similar things to Create React App, if you're familiar with React development. It gives you a webserver that reloads when you change your files.

It's built to help you write your webapps using ES Modules - that's where you can use import statements in your frontend code, and the browser does the work of loading JavaScript files from your server and making sure that requests don't get duplicated.

It also natively, and transparently, supports TypeScript - this means you can add TypeScript files (with the extension .ts) and load them as if they're just plain old JavaScript. This means if you have all your code in a file called index.ts, you can reference it from an HTML file as index.js and it'll just work without you doing anything at all.

Setting up snowpack

snowpack is available on NPM, so the quickest way we can create a project that uses snowpack is to npm init in a new directory.

First, open your terminal and type

npm init

Just hit enter a few times to create the default new node project. Once you have a package.json, we're going to install our dependencies

npm install snowpack typescript --save-dev

That's it!

Snowpack just works out of the current directory if you've not configured anything.

We can just go ahead and create HTML, JavaScript or TypeScript files in this directory and it'll "just work". You can run snowpack now by just typing

npx snowpack dev

ES Modules, the simplest example

Let's take a look at the simplest possible example of a web app that uses ES Modules

If we were to have a file called index.html with the following contents

<!DOCTYPE html>
<html lang="en">

<head>
    <title>Introduction to TypeScript</title>
    <script src="/index.js" type="module"></script>
</head>

<body>
    Hello world.
</body>

</html>

You'll notice that where we're importing our script, we're also using the attribute type="module" - telling our browser that this file contains an ES Module.

Then an index.js file that looks like this

console.log("Oh hai! My JavaScript file has loaded in the browser!");

You would see the console output from the index.js file when the page loaded.

Oh hai! My JavaScript file has loaded in the browser!

You could build on this by adding another file other.js

console.log("The other file!");

and replace our index.js with

import "./other";

console.log("Oh hai! My JavaScript file has loaded in the browser!");

Our output will now read:

The other file!
Oh hai! My JavaScript file has loaded in the browser!

This is because the import statement was interpreted by the browser, which went and downloaded ./other.js and executed it before the code in index.js.

You can use import statements to import named exports from other files, or, like in this example, just entire other script files. Your browser makes sure to only download the imports once, even if you import the same thing in multiple places.
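As a hypothetical example - given a file called maths.js that exports a named function:

export function add(one, two) {
    return one + two;
}

you could import just that function into index.js:

import { add } from "./maths.js";

console.log(add(1, 1)); // Logs 2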

ES Modules are really simple, and perform a lot of the jobs that people were traditionally forced to use bundlers like webpack to achieve. They're deferred by default, and perform really well.

Using TypeScript with snowpack

If you've used TypeScript before, you might have had to use the compiler tsc or webpack to compile and bundle your application.

You need to do this because, for your browser to run TypeScript code, it has to first be compiled to JavaScript - this means the compiler, which is called tsc, will convert each of your .ts files into a .js file.

Snowpack takes care of this compilation for you, transparently. This means that if we rename our index.js file to index.ts (changing nothing in our HTML), everything still just works.

This is excellent, because we can now use TypeScript code in our webapp, without really having to think about any tedious setup instructions.

What can TypeScript do for you right now?

TypeScript adds a lot of features to JavaScript, but let's take a look at a few of the things you'll probably end up using the most, and the soonest. The things that are immediately useful for you without having to learn all of the additions to the language.

TypeScript can:

  • Stop you calling functions with the wrong variables
  • Make sure the shape of JavaScript objects are correct
  • Restrict what you can call a function with as an argument
  • Tell you what types your functions return, to help you change your code more easily.

Let's go through some examples of each of those.

Use Type Annotations to never call a function with the wrong variable again

Look at this addition function:

function addTwoNumbers(one, two) {
    const result = one + two;
    console.log("Result is", result);
}

addTwoNumbers(1, 1);

If you put that code in your index.ts file, it'll print the number 2 into your console.

We can give it the wrong type of data, and have some weird stuff happen - what happens if we pass a string and a number?

addTwoNumbers("1", 1);

The output will now read 11, which isn't really what anyone was trying to do with this code.

Using TypeScript Type Annotations we can stop this from happening:

function addTwoNumbers(one: number, two: number) {
    const result = one + two;
    console.log("Result is", result);
}

If you pay close attention to the function parameters, we've added : number after each of our parameters. This tells TypeScript that this function is intended to only be called with numbers.

If you try and call the function with the wrong type of parameter - a string rather than a number:

addTwoNumbers("1", 1); // Editor will show an error here.

Your Visual Studio Code editor will underline the "1" argument, letting you know that you've called the function with the wrong type of value - you gave it a string not a number.
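The exact wording varies between TypeScript versions, but the error will look something like this:

Argument of type 'string' is not assignable to parameter of type 'number'.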

This is probably the first thing you'll be able to helpfully use in TypeScript that'll stop you making mistakes.

Using Type Annotations with more complicated objects

We can use Type annotations with more complicated types too!

Take a look at this function that combines two coordinates (just an object with an x and a y property).

function combineCoordinates(first, second) {
    return {
        x: first.x + second.x,
        y: first.y + second.y
    }
}

const c1 = { x: 1, y: 1 };
const c2 = { x: 1, y: 1 };

const result = combineCoordinates(c1, c2);

Simple enough - we're just adding the x and y properties of two objects together. Without Type annotations we could pass objects that are completely the wrong shape and crash our program.

combineCoordinates("blah", "blah2"); // Would crash during execution

JavaScript is weakly typed (you can put any type of data into any variable), so it would run this code just fine - until it crashes trying to access the properties x and y of our two strings.

We can fix this in TypeScript by using an interface. We can declare an interface in our code like this:

interface Coordinate {
    x: number,
    y: number
}

We're just saying "anything that is a coordinate has an x, which is a number, and a y, which is also a number" with this interface definition. Interfaces can be described as type definitions, and TypeScript has a little bit of magic where it can infer if any object fits the shape of an interface.

This means that if we change our combineCoordinates function to add some Type annotations we can do this:

interface Coordinate {
    x: number,
    y: number
}

function combineCoordinates(first: Coordinate, second: Coordinate) {
    return {
        x: first.x + second.x,
        y: first.y + second.y
    }
}

And your editor and the TypeScript compiler will throw an error if we attempt to call that function with an object that doesn't fit the shape of the interface Coordinate.

The cool thing about this type inference is that you don't have to tell the compiler that your objects are the right shape, if they are, it'll just work it out. So this is perfectly valid:

const c1 = { x: 1, y: 1 };
const c2 = { x: 1, y: 1 };

combineCoordinates(c1, c2);

But this

const c1 = { x: 1, y: 1 };
const c2 = { x: 1, bar: 1 };

combineCoordinates(c1, c2); // Squiggly line under c2

will get a squiggly underline in your editor, because the property y is missing in our variable c2 - we replaced it with bar.

This is awesome, because it stops a huge number of mistakes while you're programming and makes sure that the right kind of objects get passed between your functions.

Using Union Types to restrict what you can call a function with

Another of the really simple things you can do in TypeScript is define union types - this lets you say "I only want to be called with one of these things".

Take a look at this:

type CompassDirections = "NORTH" | "SOUTH" | "EAST" | "WEST";

function printCompassDirection(direction: CompassDirections) {
    console.log(direction);
}

printCompassDirection("NORTH");

By defining a union type using the type keyword, we're saying that a CompassDirections value can only be one of "NORTH", "SOUTH", "EAST", or "WEST". This means if you try to call that function with any other string, it'll error in your editor and the compiler.
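For example (the "UP" value here is just an invented string that sits outside the union):

printCompassDirection("WEST"); // Fine
printCompassDirection("UP");   // Error - "UP" is not assignable to CompassDirections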

Adding return types to your functions to help with autocomplete and intellisense

IntelliSense and Autocomplete are probably the best thing ever for programmer productivity - often replacing the need to go look at the docs. Both VSCode and WebStorm/IntelliJ will use the type definitions in your code to tell you what parameters you need to pass to things, right in your editor when you're typing.

You can help the editors out by making sure you add return types to your functions.

This is super easy - let's add one to our combineCoordinates function from earlier.

function combineCoordinates(first: Coordinate, second: Coordinate) : Coordinate {
    return {
        x: first.x + second.x,
        y: first.y + second.y
    }
}

Notice that at the end of the function definition we've added : Coordinate - this tells your tooling that the function returns a Coordinate, so that if at some point in the future you try to assign the return value of this function to a variable of the wrong type, you'll get an error.
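As a quick hypothetical sketch of what that protects against:

const merged: Coordinate = combineCoordinates(c1, c2); // Fine

// Error - Type 'Coordinate' is not assignable to type 'number'
const wrong: number = combineCoordinates(c1, c2);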

Your editors will use these type annotations to provide more accurate hints and refactoring support.

Why would I do any of this? It seems like extra work?

It is extra work! That's the funny thing.

TypeScript is more verbose than JavaScript, and you have to type extra code to add Types to your codebase. As your code grows past a couple of hundred lines though, you'll find that errors caused by passing the wrong kind of data to your functions, or by API calls returning data in an unexpected shape, dramatically reduce.

Changing code becomes easier, as you don't need to remember every place you use a certain shape of object - your editor will do that work for you - and you'll find bugs sooner, with your editor telling you that you're using the wrong type of data before your application crashes in the browser.

Why is everyone so excited about types?

People get so excited, and sometimes a little bit militant about types, because they're a great tool for removing entire categories of errors from your software. JavaScript has always had types, but it's a weakly typed language.

This means I can create a variable as a string

let variable = "blah";

and later overwrite that value with a number

variable = 123;

and it's a perfectly valid operation, because the types are all evaluated while the program is running - so as long as the data in a variable is the "correct shape" when your program comes to use it, it's fine.

Sadly, this flexibility frequently causes errors, where mistakes are made during coding that become increasingly hard to debug as your software grows.

Adding additional type information to your programs reduces the likelihood of errors you don't understand cropping up at runtime, and the sooner you catch an error, the better.

Just the beginning

This is just the tip of the iceberg, but hopefully a little less intimidating than trying to read all the docs if you've never used TypeScript before, without any scary setup or configuration.

The happy path – using Azure Static Web Apps and Snowpack for TypeScript in your front end

09/01/2020 18:00:00

In 2020, I’m finding myself writing TypeScript as much as I’m using C# for my day-to-day dev. I’ve found myself experimenting, building multiplayer browser-based games, small self-contained PWAs and other “mostly browser-based things” over the last year or so.

One of the most frustrating things you have to just sort of accept when you’re in the browser, or running in node, is the frequently incoherent and flaky world of node and JavaScript toolchains.

Without wanting to labour the point too much, many of the tools in the JavaScript ecosystem just don’t work very well, are poorly maintained, or poorly documented – and even some of the most popular tools, like webpack and Babel, that sit underneath almost everything, rely on mystery meat configuration and fairly opaque error messages.

There is a reason that time and time again I run into frontend teams that hardly know how their software is built. I’ve spent the last year working on continual iterations of what “productive” really looks like in a TypeScript-first development environment, fighting the healthy tension between tools that offer plenty of control but die by the hands of their own configuration, and tools that want to be your entire development stack (Create React App, and friends).

What do I want from a frontend development stack?

In all software design, I love tools that are correct by default and ideally require zero configuration.

I expect hot-reload – the fast feedback cycle is the great benefit of the web, and accepting the inconsistencies of browser-based development without that benefit would be foolish.

I want native TypeScript compilation that I don’t have to think about. I don’t want to configure it, I want it to just work for v.current of the evergreen browsers.

I want source maps and debugger support by default.

I want the tool to be able to handle native ES Modules, and to consume dependencies from npm.

Because I’ve been putting a lot of time into hosting websites as Azure Static Web Apps, I also want whatever tool I use to play nicely in that environment, and be trivially deployable from a GitHub Action to Azure Static Web Apps.

Enter Snowpack

Snowpack is a modern, lightweight toolchain for faster web development. Traditional JavaScript build tools like webpack and Parcel need to rebuild & rebundle entire chunks of your application every time you save a single file. This rebundling step introduces lag between hitting save on your changes and seeing them reflected in the browser.

I was introduced to snowpack by one of its contributors, an old friend, while complaining about the state of “tools that don’t just work” in the JavaScript ecosystem. It was pitched as a tool that does pretty much all the things I was looking for, so I decided to use it for a couple of projects to see if it fits the kind of work I’ve been doing.

And honestly, it pretty much just works perfectly.

Setting up snowpack to work with Azure Static Web Apps

Last month I wrote about how Azure Static Web Apps are Awesome with a walkthrough of setting up a static web app for any old HTML site, and I want to build on that today to show you how to configure a new project with snowpack that deploys cleanly, and uses TypeScript.

Create a package.json

First, like in all JavaScript projects, we’re going to start by creating a package.json file.

You can do this on the command line by typing

npm init

We’re then going to add a handful of dependencies:

npm install npm-run-all snowpack typescript --save-dev

Which should leave us with a package.json that looks a little bit like this

{
    "name": "static-app",
    "version": "",
    "description": "",
    "repository": "http://tempuri.org",
    "license": "http://tempuri.org",
    "author": "",
    "dependencies": {},
    "devDependencies": {
        "npm-run-all": "^4.1.5",
        "snowpack": "^2.9.0",
        "typescript": "^4.0.2"
    }
}

Add some build tasks

Now, we’ll open up our package.json file and add a couple of tasks to it:

{
    ...
    "scripts": {
        "start": "run-p dev:api dev:server",
        "dev:api": "npm run start --prefix api",
        "dev:server": "npx snowpack dev",
        "build:azure": "npx snowpack build"
    },
    ...
}

What we’re doing here is filling in the default node start task, using a module called npm-run-all that allows us to execute two tasks at once. We’re also defining a task to run an Azure Functions API, and the snowpack dev server.

Create our web application

Next, we’re going to create a directory called app and add an app/index.html file to it.

<html>
<head>
    <title>Hello Snowpack TypeScript</title>
    <script src="/index.js" type="module"></script>
</head>

<body>
    Hello world.
</body>
</html>

And we’ll create a TypeScript file called app/index.ts

class Greeter {
    private _hasGreeted: boolean;

    constructor() {
        this._hasGreeted = false;
    }

    public sayHello(): void {
        console.log("Hello World");
        this._hasGreeted = true;
    }
}

const greeter = new Greeter();
greeter.sayHello();

You’ll notice we’re using TypeScript type annotations (: boolean and : void in this code), along with public and private access modifiers.

Configuring snowpack to look in our app directory

Next, we’re going to add a snowpack configuration file to the root of our repository. We’re adding this because by default, snowpack works from the root of your repository, and we’re putting our app in /app to help Azure Static Web Apps correctly host our app later.

Create a file called snowpack.config.json that looks like this:

{
    "mount": {
        "app": "/"
    },
    "proxy": {
        "/api": "http://127.0.0.1:7071/api"
    }
}

Here we’re telling snowpack to mount our content from “app” to “/”, and to reverse proxy “/api” to a running Azure Functions API. We’ll come back to that, but first, let’s test what we have.

npm run dev:server

will open a browser, and both in the console and on the screen, you should see “Hello World”.

Snowpack has silently transpiled your TypeScript code into a JavaScript file with the same filename, which your webapp references using ES Module syntax.

The cool thing here is that everything you would expect to work in your frontend now does. You can use TypeScript, you can reference npm modules in your frontend code, and all this happens with next to no startup time.

You can extend this process using various snowpack plugins, and it probably supports the JavaScript tooling you’re already using natively – read more at snowpack.dev

Create our Azure Functions API

Because Azure Static Web Apps understands Azure Functions, you can add some serverless APIs into a subdirectory called api in your repository, and Azure Oryx will detect, auto-host, and scale them for you as part of its automated deployment.

Make sure you have the Azure Functions Core Tools installed by running

npm install -g azure-functions-core-tools@3

Now we’re going to run a few commands to create an Azure functions app.

mkdir api  
cd api  
func init --worker-runtime=node --language=javascript

This generates a default JavaScript and node functions app in our api directory. We just need to create a function for our web app to call – back in the command line, we’ll type (still in our /api directory)

func new --template "Http Trigger" --name HelloWorld

This will add a new function called HelloWorld into your API directory.

In the file api/package.json make sure the following two tasks are present:

  "scripts": {
    "prestart": "func extensions install",
    "start": "func start"
  },

If we now return to the root of our repository and type

npm run start

A whole lot of text will scroll past your console, and snowpack’s live dev server will start up, along with the Azure Functions app with our new “HelloWorld” function in it.

Let’s add a little bit of code to our app/index.ts to call this.

The cool thing, is we can just do this with the app running, and both the functions runtime, and the snowpack server, will watch for and hot-reload changes we make.

Calling our API

We’re just going to add some code to app/index.ts to call our function, borrowed from the previous blog post. Underneath our greeter code, we’re going to add a fetch call

…
const greeter = new Greeter();
greeter.sayHello();

fetch("/api/HelloWorld")
    .then(response => response.text())
    .then(data => console.log(data));

Now if you look in your browser console, you’ll notice that the line of text

“This HTTP triggered function executed successfully. Pass a name in the query string or in the request body for a personalized response.”

is printed to the window. That’s the text returned from our “HelloWorld” API.

And that’s kind of it!

Really, that is it – you’ve now got a TypeScript compatible, hot-reloading dev server, with a serverless API, that is buttery smooth to develop on top of. But for our final trick, we’re going to configure Azure Static Web Apps to host our application.

Configuring Static Web Apps

First, go skim down the guide to setting up Azure Static Web Apps I put together here - https://www.davidwhitney.co.uk/Blog/2020/07/29/azure_static_web_apps_are_awesome

You’re going to need to push your repository to GitHub, go and sign up or log in to the Azure Portal, and navigate to Azure Static Web Apps and click Create.

Once you’re in the creation process, you’ll need to authenticate with GitHub again and select your new repository from the drop-downs provided.

You’ll be prompted to select the kind of Static Web App you’re deploying, and you should select Custom. You’ll then be faced with the Build Details settings, where you need to make sure you fill in the following:

App Location: /
API location: api
App artifact location: build

Remember at the very start when we configured some npm tasks in our root? Well the Oryx build service is going to be looking for the task build:azure in your scripts configuration.

We populated that build task with “npx snowpack build” – a built-in snowpack task that will compile and produce a build folder with your application in it, ready to be hosted.

This configuration lets Azure know that our final files will be available in the generated build directory, so it knows what to host.

When you complete this creation flow, Azure will commit a GitHub action to your repository, and trigger a build to deploy your website. It takes around 2 minutes the first time you set this up.

That’s it.

I’ve been using snowpack for a couple of weeks now, and I’ve had a wonderful time with it – it lets me build rich frontends with TypeScript, using NPM packages, without really worrying about building, bundling, or deploying.

These are the kind of tools that we should spend time investing in, that remove the nuance of low-level control, and replace it with pure productivity.

Give Azure Static Sites with Snowpack a go for your next project.

Does remote work make software and teamwork better?

08/10/2020 11:00:00

I woke today with an interesting question in my inbox, about the effectiveness of remote work and communication, especially during the global pandemic:

Diego Alto: Here's the statement

"Working remotely has an added benefit (not limited to software) of forcing documentation (not only software!) which helps transfer of knowledge within organizations. (I am not just talking about software).

Example: a decision made over a chat in the kitchen making coffee is inaccessible to everyone else who wasn't in that chat. But a short message in slack, literally a thumbs up in a channel everyone sees, makes it so.

Discuss"

Let's think about what communication really means in the context of remote work, especially while we're all forced to live through this as our day-to-day reality.

Tools are not discipline

High quality remote work requires clear, concise, and timely communication, but I'm not sure that it causes or facilitates it.

I think there's a problem here that comes from people learning to use tools rather than the disciplines involved in effective communication - which was entirely what the original agile movement was about.

DHH talks about this a lot - about how they hire writers at Basecamp, their distributed model works for them because they have a slant towards hiring people that use written communication well.

I find this fascinating, but I also have a slightly sad realisation that people are unbelievably bad at written communication, and time is rarely invested in making them better at it.

This means that most written communication in businesses is wasteful and ineffective, and may as well not have happened. Most of the audit trails it creates either get lost or are weaponised - so it becomes a case of "your mileage may vary".

Distinct types of written communication are differently effective though, and this is something we should consider.

Different forms of communication for different things

Slack is an "ok form of communication" but it's harmed by being temporal - information drifts over time or is repeated.

But it's better than email! I hear you cry.

Absolutely, but it's better because it's open and searchable. Email is temporal AND closed. Slack is an improvement because it's searchable (to a point) and most importantly a broadcast medium of communication.

It's no surprise that Slack and live chat have seen such a rise in popularity - they're just the 2010s version of the reply-all email that drove all of business through the late 90s and early 2000s.

Slack is basically just the millennial version of reply-all email chains.

Both forms of communication are worse than structured and minimal documentation in a known location though.

"Just enough" documentation - a record of decisions, impacts, and the why, is far more effective that sifting through any long-form communication to extract details you might just miss.

Co-location of information

I'm seeing a rise in Architecture Decision Records (ADRs) and tooling inside code repositories to support and maintain them for keeping track of decisions.

An architectural decision record (ADR) is a document that captures an important architectural decision made along with its context and consequences.

There's a tension between the corporate wiki, which is just rotten and useless, and "just read the code bro".

"Just read the code" is interesting, as it's a myth I've perpetuated in the past. It's an absolute fact that the code is the only true and honest source of understanding code, but that doesn't mean "don't give me a little bit of narrative here to help me".

Just enough

I don't want you to comment the code, I want you to tell me its story. I want you to tell me just enough.

I've done a bunch of stuff with clients about what "just enough" documentation looks like, and almost every time, "just enough" is always "co-located with the thing that it describes".

Just enough means, please tell me -

  • What this thing is
  • Why it exists
  • What it does
  • How to build this thing
  • How to run this thing
  • Exactly the thing I'll get wrong when I first try and use it

Any software or libraries that don't ship with this information, rot away, don't get adopted, and don't get used.

I'm glad the trend has taken hold. ADRs really fit into this "just enough" minimal pattern, with the added benefit of growing with the software.

The nice thing about ADRs is that they are more of a running log than a spec - specs change, they're wrong, they get adapted during work. ADRs are meant to be the answer to "why is this thing the way it is?".

Think of them as the natural successor to the narrative code comment. The spiritual sequel to the "gods, you gotta know about this" comment.

Nothing beats a well-formed readme, with its intent, and a log of key decisions, and we know what good looks like.

Has remote work, and a global pandemic, helped this at all?

On reflection, I'm not sure "better communication" is a distributed work benefit - but it's certainly a distributed work requirement.

I have a sneaking suspicion that all the global pandemic really proved was that authoritarian companies refusing to support home working were indulging in nothing more than control freakery by middle management.

There's nothing in the very bizarre place the world finds itself that will implicitly improve communication, but we're now forced into a situation where everyone needs the quality of communication that was previously only required by remote workers.

Who needs to know how things work anyway?

People that can't read code have no need to know how it works, but every need to understand why it works and what it does.

Why. What. Not how.

Understanding the how takes a lot of context.

There's a suitable abstraction of information that must be present when explaining how software works and it's the job of the development teams to make this clear.

We have to be very aware of the "just enough information to be harmful" problem - and the reason ADRs work well, in my opinion, is they live in the repository, with the code, side by side with the commits.

This provides a minimum bar to entry to read and interpret them - and it sets the context for the reader that understanding the what is a technical act.

This subtle barrier to entry is a kind of abstraction, and hopefully one that prevents misinterpretation of the information.

Most communication is time-sensitive

There's a truth here, and a reason - the reason Slack and conversations are so successful at transmitting information: most information is temporal in nature.

Often only the effects of communication need to last, and at that point, real-time chat being limited by time isn't a huge problem.

In fact, over-communication presents a navigation and maintenance burden - with readers often left wondering what the most correct version of information is, or where the most current information resides, while multiple copies of it naturally atrophy over time.

We've all seen that rancid useless corporate wiki, but remember it was born from good intentions of communication before it fell into disrepair.

All code is writing

So, remember that all code is writing.

It's closer to literature than anything. And we don't work on writing enough.

Writing, in various styles, with abstractions that should provide a benefit.

And that extends to the narrative, where it lives, and how it's updated.

But I believe it's always been this way, and "remote/remote first", rather than improving this process (though it can be a catalyst to do so), pries open the cracks when it's sub-par.

This is the difficulty of remote. Of distributed team management.

It's, ironically, what the agile manifesto's focus on co-location was designed to resolve.

Conflict is easier to manage in person than in writing.

You're cutting out entire portions of human interaction - body language, quickly addressing nuance or disagreement - by focusing on purely written communication. It's easier to defuse and subtly negotiate in person.

The inverse is also true: thoughts are often more fully formed in prose. This can be for better or worse, with people often more reticent to change course once they perceive they have put significant effort into writing a thing down.

There is no one way

Everything in context, there is no one way.

The biggest challenge in software, or teamwork in general, is replacing your mental model with the coherent working pattern of a team.

It's about realising it's not about you.

It's easy in our current remote-existence to think that "more communication" is the better default, and while that might be the correct place to start, it's important that the quality of the communication is the thing you focus on improving in your organisations.

Make sure your communication is more timely, more contextual, more succinct, and closer in proximity to the things it describes.

Azure Static Web Apps Are Awesome!

07/29/2020 13:01:00

Over the last 3 months or so, I’ve been building a lot of experimental software on the web. Silly things, fun things. And throughout, I’ve wrangled with different ways to host modern web content.

I’ve been through the ringer of hosting things on Glitch for its interactivity, Heroku to get a Node backend, even Azure App Services to run my node processes.

But each time it felt like effort, and cost, to put a small thing on the internet.

Everything was somehow a compromise in either effort, complexity, or functionality.

So when Microsoft put out the beta of static web apps a couple months ago, I was pretty keen to try them out.

They’re still in beta, the docs are a little light, the paint is dripping wet, but they’re a really great way to build web applications in 2020, and cost next to nothing to run (actually, they're free during this beta).

I want to talk you through why they’re awesome, how to set them up, and how to customise them for different programming languages, along with touching on how to deal with local development and debugging.

We need to talk about serverless

It is an oft-repeated joke – that the cloud is just other people’s computers, and serverless, to extend the analogy, is just someone else’s application server.

While there is some truth to this – underneath the cloud vendors, somewhere, is a “computer” – it certainly doesn’t look even remotely like you think it does.

When did you last see anyone dunk a desktop computer under the sea?

While the cloud is “someone else’s computer”, and serverless is “someone else’s server” – it’s also someone else’s hardware abstraction, and management team, and SLA to satisfy, operated by someone else’s specialist – and both the cloud, and serverless, make your life a lot easier by making computers, and servers, somebody else’s problem.

In 2020, with platforms like Netlify and Vercel taking the PaaS abstraction and iterating products on top of it, it’s great to see Microsoft, who for years have had a great PaaS offering in Azure, start to aim their sights at an easy to use offering for “the average web dev”.

Once you get past the stupid-sounding JAMSTACK acronym – shipping HTML and JavaScript web apps that rely on APIs for interactivity – it’s a really common scenario, and the more people building low-friction tools in this space, the better.

Let’s start by looking at how Azure Static Web Apps work in a regular “jamstack-ey” way, and then we’ll see how they’re a little bit more magic.

What exactly are Azure Static Web Apps?

Azure Static Web Apps are a currently-in-beta hosting option in the Azure WebApps family of products.

They’re an easy way to quickly host some static files – HTML and JavaScript – on a URL and have all the scaling and content distribution taken care of for you.

They work by connecting a repository in GitHub to the Azure portal’s “Static Web Apps” product, and the portal will configure your repository for continuous delivery. It’s a good end-to-end experience, so let’s walk through what that looks like.

Creating your first Static Web App

We’re going to start off by creating a new repository on GitHub -

And add an index.html file to it…

Great, your first static site – isn’t it grand? That HTML file in the root is our entire user experience.

Perfect. I love it.

We now need to hop across to the Azure portal and add our new repository as a static site.

The cool thing about this process, is that the Azure portal will configure GitHub actions in our repository, and add security keys, to configure our deployment for us.

We’re just giving the new site a resource group (or creating one if you haven’t used Azure before - a resource group is just a label for a bunch of stuff in Azure) and selecting our GitHub repository.

Once we hit Review + Create, we’ll see our final configuration.

And we can go ahead and create our app.

Once the creation process has completed (confusingly messaged as “The deployment is complete”) – you can click the “Go to resource” button to see your new static web app.

And you’re online!

I legitimately think this is probably the easiest way to get any HTML onto the internet today.

Presuming you manage to defeat the Microsoft Active Directory Boss Monster to log in to Azure in the first place ;)

What did that do?

If we refresh our GitHub page now, you’ll see that the Azure create process used the repository permissions you granted it during setup.

When you created your static web app in the Azure portal, it did two things:

  1. Created a build script that it committed to your repository
  2. Added a deployment secret to your repository settings

The build script that gets generated is relatively lengthy, but you’re not going to have to touch it yourself.

It configures GitHub actions to build and push your code every time you commit to your master branch, and to create special preview environments when you open pull requests.

This build script is modified each time to reference the deployment secret that is generated by the Azure portal.

You will notice that the secret name referenced in the build script lines up with the secret added to your repository.

Is this just web hosting? What makes this so special?

So far, this is simple, but also entirely unexciting – what makes Azure Static Web Apps so special though, is their seamless integration with Azure Functions.

Traditionally, if you wanted to add some interactivity to a static web application, you’d have to stand up an API somewhere – Static Web Apps pulls these two things together, and allows you to define both an Azure Static Web App, and some Azure Functions that it’ll call, in the same repository.

This is really cool, because, you still don’t have a server! But you can run server-side code!

It is especially excellent because this server-side code that your application depends on, is versioned and deployed with the code that depends on it.

Let’s add an API to our static app!

Adding an API

By default, the configuration that was generated for your application expects to find an Azure Functions app in the /api directory, so we’re going to use npm and the Azure functions SDK to create one.

At the time of writing, the Functions runtime only supports up to Node 12 (the current LTS version of node), and is updated to track that version.

You’re going to need node installed, and in your path, for the next part of this tutorial to work.

First, let’s check out our repository

Make sure you have the Azure Functions Core Tools installed by running

npm install -g azure-functions-core-tools@3

Now we’re going to run a few commands to create an Azure functions app.

mkdir api
cd api
func init --worker-runtime=node --language=javascript

This generates a default javascript+node functions app in our API directory, we just need to create a function for our web app to call. Back in the command line, we’ll type (still in our /api directory)

func new --template "Http Trigger" --name HelloWorld

This will add a new function called HelloWorld into your API directory

These are the bindings that tell the Azure functions runtime what to do with your code. The SDK will generate some code that actually runs…

Let’s edit our HTML to call this function.

We’re using the browser’s Fetch API to call “/api/HelloWorld” – Azure Static Web Apps will make our functions available following that pattern.
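The call itself is only a few lines of JavaScript – something like this, inside a script tag with type="module":

fetch("/api/HelloWorld")
    .then(response => response.text())
    .then(data => console.log(data));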

Let’s push these changes to git, and wait a minute or two for our deployment to run.

If we now load our webpage, we’ll see this:

How awesome is that – a server-side API, without a server, from a few static files in a directory.

If you open up the Azure portal again, and select Functions, you’ll see your HelloWorld function now shows up:

I love it, but can I run it locally?

But of course!

Microsoft recommends using the npm package live-server to run the static portion of your app for development, which you can do just by typing

npx live-server

From the root of your repository. Let’s give that a go now

Oh no! What’s going on here.

Well, live-server is treating the /api directory as if it were content, and serving an index page locally, which isn’t what we want. To make this run like we would on production, we’re also going to need to run the azure functions runtime, and tell live-server to proxy any calls to /api across to that running instance.

Sounds like a mouthful, but let’s give that a go.

cd api
npm i
func start

This will run the Azure functions runtime locally. You will see something like this

Now, in another console tab, let’s start up live-server again, this time telling it to proxy calls to /api

npx live-server --proxy=/api:http://127.0.0.1:7071/api

If we visit our localhost on 8080 now, you can see we have exactly the same behaviour as we do in Azure.

Great, but this all seems a little bit… fiddly… for local development.

If you open your root directory in Visual Studio Code, it will hint that it has browser extension support for debugging and development, but I like to capture this stuff inside my repository, so anyone can run these sites from the command line trivially.

Adding some useful scripts

I don’t know about you, but I’m constantly forgetting things, so let’s capture some of this stuff in some npm scripts so I don’t have to remember them again.

In our /api/package.json we’re going to add two useful npm tasks
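They’re just a prestart hook to install any function extensions, and a start task that runs the local runtime:

  "scripts": {
    "prestart": "func extensions install",
    "start": "func start"
  },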

This just means we can call npm run start in that directory to have our functions runtime start up.

Next we’re going to add a package.json to the root of our repository, so we can capture all our live server related commands in one place.

From a command prompt type:

npm init

and hit enter a few times past the default options – you’ll end up with something looking like this

And finally, add the npm-run-all package

npm install npm-run-all --save-dev

We’re going to chuck a few more scripts in this default package.json
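They look something like this – reconstructing the exact commands we’ve been typing by hand:

  "scripts": {
    "start": "run-p dev:api dev:server",
    "dev:api": "npm run start --prefix api",
    "dev:server": "npx live-server --proxy=/api:http://127.0.0.1:7071/api"
  },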

Here we’re setting up a dev:api, dev:server and a start task to automate the command line work we had to incant above.

So now, for local development we can just type

npm run start

And our environment works exactly how it would on Azure, without us having to remember all that stuff, and we can see our changes hot-reloaded while we work.

Let’s commit it and make sure it all still works on Azure!

Oh No! Build Failure!

Ok, so I guess here is where our paint is dripping a little bit wet.

Adding that root package.json to make our life easier, actually broke something in our GitHub Actions deployment pipeline.

If we dig around in the logs, we’ll see that something called “Oryx” can’t find a build script, and doesn’t know what to do with itself

As it turns out, the cleverness that’s baked into Azure static web apps, is a tool called Oryx, and it’s expecting frameworks it understands, and is running some language detection.

What’s happened is that it found our package.json, presumed we’re specifying our own build jobs and are no longer just a static site, and then, when we didn’t provide a build task, gave up because it didn’t know what to do.

The easiest way I’ve found to be able to use node tooling, and still play nicely with Azure’s automated deployment engine is to do two things:

  1. Move our static assets into an “app” directory
  2. Update our deployment scripts to reflect this.

First, let’s create an app directory, and move our index.html file into it.

Now we need to edit the YAML file that Azure generated in .github/workflows

This might sound scary, but we’re only really changing one thing – in the jobs section, on line ~30 of the currently generated sample there are three configuration settings –

We just need to update app_location to be “app”.
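For reference, once updated, those three settings look roughly like this – only app_location changes, and the other two are left as generated:

    app_location: "app"
    api_location: "api"
    app_artifact_location: ""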

Finally, we need to update the npm scripts we added to make sure live-server serves our app from the right location.

In our root package.json, we need to add “app” to our dev:server task

We’re also going to add a task called build:azure – and leave it empty.
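After both changes, the scripts section of our root package.json looks something like this:

  "scripts": {
    "start": "run-p dev:api dev:server",
    "dev:api": "npm run start --prefix api",
    "dev:server": "npx live-server app --proxy=/api:http://127.0.0.1:7071/api",
    "build:azure": ""
  },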

In total, we’ve only changed a few files subtly.

You might want to run your npm run start task again now to make sure everything still works (it should!) and commit your code and push it to GitHub.

Wonderful.

Everything is working again.

“But David! You’re the TDD guy right? How do you test this!”

Here’s the really cool bit I suppose – now we’ve configured a build task, and know where we can configure an app_artifact_location – we can pretty much do anything we want.

  • Want to use jest? Absolutely works!
  • Want to use something awesome like Wallaby? That too!

Why not both at once!

You just need to npm install the thing you want, and you can absolutely test the JavaScript in both your static site and your API.

You can install webpack and produce different bundled output, use svelte, anything, and Microsoft’s tooling will make sure to host and scale both your API and your web app.

My standard “dev” load-out for working with static web sites is

  1. Add a few dev dependencies

  2. Add this default babel.config.js file to the root of my repository (sketched below)
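The config itself is tiny – a sketch of the usual jest-friendly setup, assuming @babel/preset-env is one of those dev dependencies:

module.exports = {
    // Compile for the node version running the tests
    presets: [["@babel/preset-env", { targets: { node: "current" } }]]
};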

This allows jest to use any language features that my current version of node supports, and plays nicely with all my Visual Studio Code plugins.

I’ll also use this default Wallaby.conf.js configuration for the continuous test runner Wallaby.js – which is similar to NCrunch, but for JavaScript and TypeScript codebases.

You mentioned TypeScript?

Ah yes, well, the Azure Functions runtime totally supports TypeScript.

When you create your API, you just need to

func init --worker-runtime=node --language=typescript

And the API that is generated will be TypeScript – it’s really that simple.

Equally, you can configure TypeScript for your regular static web app, you’ll probably want to configure WebPack to do the compiling and bundling into the assets folder, but it works absolutely fine.

When your functions are created using TypeScript, some extra .json metadata is created alongside each function that points to a compiled “dist” directory, that is built when the Azure functions runtime deploys your code, complete with source-maps, out of the box.

But let’s go wild, how about C# !

You can totally use C# and .NET Core too!

If you func init using the dotnet worker runtime, the SDK will generate C# function code that works in exactly the same way as its JavaScript and TypeScript equivalents.

You can literally run a static web app, with an auto-scaled C# .NET Core API backing it.

Anything that the Azure Functions runtime supports is valid here (so python too).

I Think This is Really Awesome

I hope that by splitting this out into tiny steps, and explaining how the GitHub Actions build interacts with both the Functions runtime and the Oryx deployment engine that drives Azure Static Web Apps, I’ve given you some inspiration for the kinds of trivially scalable web applications you can build today, for practically free.

If you’re a C# shop, a little out of your comfort zone away from ASP.NET MVC, why not use Statiq.Web as part of the build process to generate static WebApps, that use C#, and are driven by a C# and .NET Core API?

Only familiar with Python? You can use Pelican or Lektor to do the same thing.

The Oryx build process that sits behind this is flexible, and provides plenty of hooks to customise the build behaviour between repository pulling, and your site getting served and scaled.

These powerful serverless abstractions let us do a lot more with a lot less, without the stress of worrying about outages, downtime, or scaling.

You can really get from zero to working in Azure static sites in five or ten minutes, and I legitimately think this is one of the best ways to host content on the internet today.

.NET Templates for C# Libraries, GitHub Actions and NuGet Publishing

05/04/2020 17:20:00

Whenever I'm looking to put out a new library I find myself configuring everything in repetitively simple ways. The default File -> New Project templates that Visual Studio ships with never quite get it right for my default library topology.

Almost every single thing I build looks like this:

Mode                 LastWriteTime         Length Name
----                 -------------         ------ ----
d-----        04/05/2020   1:16 PM                .github               # GitHub actions build scripts
d-----        04/05/2020   1:10 PM                adr                   # Architecture decision register
d-----        04/05/2020   1:05 PM                artifacts             # Build outputs
d-----        04/05/2020   1:05 PM                build                 # Build scripts
d-----        04/05/2020   1:05 PM                docs                  # Documentation markdowns
d-----        04/05/2020   1:05 PM                lib                   # Any non-package-managed libs
d-----        04/05/2020   1:05 PM                samples               # Examples and samples
d-----        04/05/2020   4:23 PM                src                   # Source code
d-----        04/05/2020   4:21 PM                test                  # Tests
-a----        03/09/2019   8:59 PM           5582 .gitignore
-a----        04/05/2020   4:22 PM           1833 LibraryTemplate.sln
-a----        04/05/2020   1:02 PM           1091 LICENSE
-a----        04/05/2020   3:16 PM            546 README.md
-a----        04/05/2020   1:08 PM              0 ROADMAP.md

So I spent the afternoon deep-diving into dotnet templating, and created and published a NuGet package that extends dotnet new with my default open source project directory layout.

Installation and usage

You can install this template from the command line:

dotnet new -i ElectricHead.CSharpLib

and then can create a new project by calling

dotnet new csghlib --name ElectricHead.MySampleLibrary

Topology and conventions

This layout is designed for trunk-based development against a branch called dev. Merging to master triggers publishing.

  • Commit work to a branch called dev.
  • Any commits will build and be tested in release mode by GitHub Actions.
  • Merge to master will trigger Build, Test and Publish to NuGet
  • You need to setup your NuGet API key as a GitHub Secret called NuGetApiKey

You need to use the csproj to update your SemVer version numbers, but GitHub's auto-incrementing build numbers will be appended as the build parameter in your version number, so discrete builds will always create unique packages.
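In practice that just means keeping a Version property up to date in your .csproj - the 1.2.3 here is a hypothetical version:

<PropertyGroup>
    <Version>1.2.3</Version>
</PropertyGroup>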

The assembly description will be set to the SHA of the commit that triggered the package.

GitHub Actions

Pushing the resulting repository to GitHub will create builds and packages for you as if by magic. Just be sure to add your NuGet API key to your repository's Secrets from the Settings tab, to support publishing to NuGet.org.

Deploying Azure WebApps in 2020 with GitHub Actions

05/03/2020 12:45:00

For the longest time, I've relied on, and recommended, Azure's Kudu deployment engine as the simplest and most effective way to deploy web apps into Azure App Services on Windows. Sadly, over the last couple of years, and after its original author changed roles, Kudu has lagged behind .NET Core SDK versions - meaning that if you want to run the latest versions of .NET Core for your webapp, Kudu can't build and deploy them.

We don't want to be trapped on prior versions of the framework, and luckily GitHub actions can successfully fill the gap left behind by Kudu without much additional effort.

Let's walk through creating and deploying an ASP.NET Core 3.1 web app, without containerisation, to Azure App Services with GitHub Actions in 2020.

This entire process will take less than 5-10 minutes to complete the first time, and once you understand the mechanisms, should be trivial for any future projects.

Create Your WebApp

  • Create a new repository on GitHub.
  • Clone it to your local machine.
  • Create a new Visual Studio Solution
  • Create a new ASP.NET Core 3.1 MVC WebApplication

Create a new Azure App Service Web App

  • Visit the Azure Portal
  • Create a new web application
  • Select your subscription and resource group

Instance details

Because we're talking about replacing Kudu - the Windows-native deployment engine in AppServices - for building our software, we're going to deploy just our code to Windows. It's worth noting that GitHub Actions can also be used for Linux deployments and containerised applications, but this walkthrough is intended as a like-for-like example for Windows-on-Azure users.

  • Give it a name
  • Select code from the publish options
  • Select .NET Core 3.1 (LTS) as the runtime stack
  • Select Windows as the operating system
  • Select your Region
  • Select your App service plan

Configure its deployment

Now we're going to link our GitHub repository, to our AppServices Web app.

  • Visit the Deployment Center for your newly created application in the Azure Portal.
  • Select GitHub actions (preview)
  • Authorise your account
  • Select your repository from the list

A preview of a GitHub action will be generated and displayed on the screen, click continue to have it automatically added to your repository.

When you confirm this step, a commit will be added to your repository on GitHub with this template stored in .github/workflows as a .yml file.

Correct any required file paths

Depending on where you created your code, you might notice that your GitHub action fails by default. This is because the default template just calls dotnet build and dotnet publish, and if your projects are in some (probably sane) location like /src, the command won't be able to find your web app by default.

Let's correct this now:

  • Git pull your repository
  • Customise the generated .yml file in .github/workflows to make sure its paths are correct.

In the sample I created for this walkthrough, the build and publish steps are changed to the following:

- name: Build with dotnet
  run: dotnet build ANCWebsite.sln --configuration Release

- name: dotnet publish
  run: dotnet publish src/ANCWebsite/ANCWebsite.csproj -c Release -o ${{env.DOTNET_ROOT}}/myapp
  

Note the explicit solution and csproj paths. Commit your changes and git push.

Browse to your site!

You can now browse to your site and it works! Your deployments will show up both in the Azure Deployment Center and the GitHub Actions list.

This whole process takes 2-3 minutes to set up, and it's reliable and recommended - a worthy replacement for the Kudu "git pull deploy" flow that worked for years.

How it works under the hood

This is all powered by three things:

  • An Azure deployment profile
  • A .yml file added to .github/workflows
  • A GitHub secret

As you click through the Azure Deployment Center setup process, it does the following:

  • Adds a copy of the templated GitHub Actions .NET Core .yml deployment file from https://github.com/Azure/webapps-deploy to your repository
  • Downloads the Azure publishing profile for your newly created website and adds it as a GitHub secret in your repository's "Secrets" settings.
  • Makes sure the name of the secret referenced in .github/workflows/projectname_branchname.yml matches the name of the secret it added to the repository (sketched below).
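
For reference, the deploy step in that generated .yml file looks roughly like the following - the app name and secret name are placeholders here, because Azure generates the real secret name for you:

# A sketch of the generated deploy step, not the verbatim template.
- name: Deploy to Azure Web App
  uses: azure/webapps-deploy@v2
  with:
    app-name: 'your-app-name'
    publish-profile: ${{ secrets.YOUR_PUBLISH_PROFILE_SECRET }}
    package: ${{env.DOTNET_ROOT}}/myapp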

The rest is taken care of by post-commit hooks and GitHub actions automatically.

You can set this entire build pipeline up yourself by creating the .yml file and adding your secret by hand. You can download the content of your publish profile by visiting

Deployment Center -> Deployment Credentials -> Get Publish Profile

in the Azure Portal.

It's an XML blob that you can paste into the GitHub UI, but honestly, you may as well let Azure do the setup for you.

Next Steps

You've just built a simple CI pipeline that deploys to Azure WebApps without any of the overhead of k8s, Docker, or third-party build systems.

Some things you can now do:

  • Consider running your tests as part of this build pipeline by adding a step to call dotnet test (sketched below)
  • Add an additional branch specification to deploy test builds to different deployment slots
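
A minimal sketch of that test step, assuming your tests are part of the same solution (--no-build reuses the output of the earlier build step):

- name: Test with dotnet
  run: dotnet test ANCWebsite.sln --configuration Release --no-build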

The nice thing about this approach is that your build runs in a container on GitHub Actions, so you can always make sure you're using the versions of tools and SDKs that you desire.

You can find all the code used in this walkthrough here.

The Quarantine Project

05/03/2020 10:00:00

While we're all enduring these HashTagUnprecedentedTimes, I'm going to keep a living list here of my own personal "quarantine project".

Earlier in the year I took some time out, which was originally intended to be dedicated to travel, conferences and writing book 3. Obviously the first two things in that list are somewhat less plausible in a global pandemic, so I've been up to a bunch of other stuff.

Here is a living list, and I'll update it as more things arrive, until such a time as I write about them all distinctly.

Events

Remote Code DOJO

I've been running weekly pair programming dojos that you can sign up to and attend here

Projects

.NET

JavaScript / TypeScript

Hardware

Work In Progress

Books

Book 3 is still coming 🖤

Running .NET Core apps on Glitch!

03/26/2020 16:00:00

Over the last couple of months I've been writing a lot of code for fun in Glitch.

Glitch is a collaborative web platform that aims to make web programming accessible and fun - complete with real-time editing and hot-reloading built in. It's a great platform for sketching out web apps, working with friends, or adapting samples other people share ("remixing"). It's a great product, and I love the ethos behind it - and like a lot of things on the web in 2020, it's commonly used for writing HTML and JavaScript, with default templates also available for Node + Express.js apps.

...but why not .NET Core?

I was in the middle of configuring some webpack jobs when I idly tweeted that it'd be great if the netcore team could support this as a deployment target. The Glitch team shot across a few docs asking what an MVP would look like for netcore on Glitch, and I idly, and mostly out of curiosity, typed dotnet into the Glitch command line prompt to see if the dotnet CLI just happened to be installed. And it was.

Armed with the wonderfully named glitchnomicon and the dotnet CLI, I created a fresh ANC (ASP.NET Core) MVC starter project, and migrated the files one by one into a Glitch project.

With a little tweaking I've got the dotnet new project template running in Glitch, without any changes to the C# code at all.

Subtle changes:

  • No Bootstrap
  • Stripped out boilerplate layout
  • No jQuery
  • Removed "development" mode and "production" CSS filters from the views
  • Glitch executes the web app in development mode by default so you see detailed errors

I've published some projects on Glitch for you to remix and use as you like.

ASP.NET MVC Starter Project

ASP.NET Starter Project (just app.run)

I'll be adding more templates to the .NET Core Templates collection over time.

Glitch is awesome, and you should check it out.

Small thoughts on literate programming

02/28/2020 16:00:00

One of the most profound changes in my approach to software was understanding it to be literature. Functional literature, but literature regardless. Intent, characterisation, description, text, subtext, flow, rhythm, style - all affect software like they do prose.

It's a constrained form of communication, with grammar, and that's why we work in "programming languages". They are languages. With rules, idioms and quirks. These aren't analogies; this is what software is. It's storytelling. Constrained creative writing with purpose.

Basically, Donald Knuth was right, and called it a bajillion years ago - with the idea of literate programming. Today's languages are that thing. You will never be a great programmer unless you become an excellent communicator, and an excellent writer. The skillset is the same.

Critical thinking, expression of concept, reducing repetition, form for impact, signposting, intent and subtext. If you want to understand great software, understand great literature.

Communication skills are not optional

If you want to teach a junior programmer to be a better programmer, teach them to write. Language is our tool for organising our thoughts. It's powerful. It has meaning. It has power.

It's a gift, it's for everyone. 🖤

(I should plug that I have a talk on this - get in touch)

Hardware Hacking with C# and JavaScript

01/29/2020 09:20:00

Over the last couple of months I’ve had a little exposure to hardware hacking and wearables after lending some of my exceptionally rusty 1990s-era C to Jo’s excellent Christmas Jumper project.

The project was compiled for the AdaFruit Feather Huzzah - a low-powered, Arduino-compatible, gumstick-sized development board built around the ESP8266 SoC. Other than a handful of Raspberry Pis here and there, I’d never done any hardware hacking before, so it had that wonderful sheen of new and exciting we all crave.

“It’s just C!” I figured, opening the Arduino IDE for the first time.

And it was, just C. And I’d forgotten how much of a pain “just C” is – especially with a threadbare IDE.

I like syntax checking. I like refactoring. I like nice things.

The Arduino IDE, while functional, was not a nice thing.

It only really has two buttons – compile, and “push”. And it’s S L O W.

Arduino IDE

That’s it.

I Need Better Tools for This

I went on a small mission to get better C tools that also worked with the Arduino, as my workflow had devolved into moving back and forth between Visual Studio for syntax highlighting, and Arduino IDE for verification.

I stumbled across Visual Micro’s “Arduino IDE for Visual Studio”, which was mostly good, if occasionally flaky, and had a slightly awkward and broken debugging experience. Still – light-years ahead of what I was using. There’s also an open source and free VSCode extension which captures much of the same functionality (though sadly I was missing my ReSharper C++ refactorings).

We stumbled forwards with my loose and ageing grasp of C and Google, and got the thing done.

But there had to be a better way. Something less archaic and painful.

Can I Run C#?

How about we just don’t do C?

I know, obvious.

I took to Google to work out what the current state of the art was for C# and IoT, remembering a bit of a fuss made a few years ago during .NET Core’s initial prototypes of IoT compatibility.

Windows 10 IoT Core seems steeped in controversy and potential abandonment in the face of .NET Core’s cross-platform sensibilities, so I moved swiftly onwards.

Thinking back a decade, I remembered a fuss made about the .NET Micro Framework based on Windows CE, and that drove me to a comparatively new project, .NET nanoFramework – a reimplementation and cut-down version of the CLR designed for IoT devices.

I read the docs and went to flash the nanoFramework runtime onto my AdaFruit Feather Huzzah. I’d flashed this thing hundreds of times by now.

And every time, it failed to connect.

One Last Huzzah

As it transpired, the AdaFruit Feather Huzzah that was listed as supported (£19.82, Amazon) wasn’t the device I needed; I instead needed the beefier AdaFruit Feather Huzzah32 (£21.52, Amazon). Of course.

Turns out the Huzzah had a bigger sibling with more memory and more CPU based on the ESP32 chip. And that’s what nanoFramework targeted.

No problem, low cost, ordered a couple.

Flashing a Huzzah32 to Run C#

The documentation is a little bit dense and took longer than I’d like to fumble through, so I’ll try condensing it here. Prerequisite: Visual Studio 2019+, any SKU, including the free Community edition.

  1. Add a NuGet package source to Visual Studio

        https://pkgs.dev.azure.com/nanoframework/feed/_packaging/sandbox/nuget/v3/index.json
    
  2. Add a Visual Studio Extensions feed

        http://vsixgallery.com/feed/author/nanoframework/
    
  3. Go to Tools -> Extensions and install the “nanoFramework” extension.

  4. Install the USB driver for the Huzzah, the SiLabs CP2104 Driver

  5. Plug in your Huzzah

  6. Check the virtual COM port it has been assigned in Windows Device Manager under the “Ports (COM & LPT)” category.

  7. Run the following commands from the Package Management Console

        dotnet tool install -g nanoFirmwareFlasher
        nanoff --target ESP32_WROOM_32 --serialport YOURCOMPORTHERE --update
    

That’s it, reset the board, you’re ready to go.

The entire process took less than 5 minutes, and if you’ve already used an ESP32, or the previous Huzzah, you’ll already have the USB drivers installed.

Hello World

You can now create a new nanoFramework project from your File -> New Project menu.

I used this program, though obviously you can write everything in your static void main if that’s your jam.

using System;
using System.Threading;

public class Program
{
    public const int DefaultDelay = 1000;

    public static void Main() => new Program().Run();

    public Program()
    {
        Console.WriteLine("Constructors are awesome.");
    }

    public void Run()
    {
        // Loop forever, printing once per DefaultDelay milliseconds.
        while (true)
        {
            Console.WriteLine("ArduinYo!");
            Thread.Sleep(DefaultDelay);
        }
    }
}

The brilliant, life-affirming, beautiful thing about this is that you can just press F5, and within a second your code will be running on the hardware. No long compile and upload times. Everything just works like any other program, and it’s a revelation.

nanoFramework has C# bindings for much of the hardware you need to use, and extensive documentation for anything you need to write yourself using GPIO (General Purpose IO – the hardware connections on the board you can solder other components to, or attach Arduino shields to).
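
As a taste of what that looks like, here’s a minimal LED blink sketch - assuming nanoFramework’s System.Device.Gpio binding from NuGet, and a hypothetical LED wired to pin 13 (check your board’s pinout):

using System.Device.Gpio;   // nanoFramework's GPIO binding, via NuGet
using System.Threading;

public class Blink
{
    public static void Main()
    {
        var gpio = new GpioController();

        // Pin 13 is a placeholder - use whichever GPIO your LED is on.
        var led = gpio.OpenPin(13, PinMode.Output);

        while (true)
        {
            led.Toggle();       // flip the LED state
            Thread.Sleep(500);  // half a second between blinks
        }
    }
}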

But It’s 2020 Isn’t Everything JavaScript Now?

Alright, fine. If you want a slightly worse debugging experience, but a slightly wider array of supported hardware, there’s another project: Espruino.

Somehow, while their API documentation is excellent, it’s a little obtuse to find information about running Espruino on the ESP32 – but the combination works and is community supported.

The process of flashing is slightly more involved than in .NET land, but let’s skip to the good bit:

  1. Make sure you have a version of Python installed and on your PATH

  2. At a command prompt or terminal, using pip (which is installed with python), run the command

        pip install esptool
    
  3. Download the latest binaries for the board from here, or, for subsequent versions, via the download page

    You need all three files for your first flash.

  4. From a command prompt, in the same directory as your downloaded files

        esptool.py --chip esp32 --port /dev/ttyUSB0 --baud 921600 --after hard_reset write_flash -z --flash_mode dio --flash_freq 40m --flash_size detect 0x1000 bootloader.bin 0x8000 partitions_espruino.bin 0x10000 espruino_esp32.bin
    
  5. Install the “Espruino IDE” from the Chrome Web Store

  6. Under Settings -> Communication set the Baud Rate to 115200

That’s it, you’re ready to go in JavaScript – just click connect and run the provided sample.

Espruino IDE

What’s next?

Well, nanoFramework is nice, but I really wish I could just use .NET Standard 2.0. Luckily for me, the Meadow project from Wilderness Labs is just that - an implementation of .NET Standard for their own ESP32-derived board. I’ve ordered a couple of them to see how the experience stacks up. Their boards look identical to the Huzzah32s, with some extra memory and CPU horsepower, presumably to accommodate the weight of the framework.

They’re currently twice the cost, at £50 vs the £20 for the Huzzah32, but if they deliver on the flexibility of full .NET, I’d imagine it’ll be the best possible environment for this kind of development if you’re willing to use or learn C#.

In JavaScript land? Well, JavaScript is nice, but TypeScript is better! Espruino doesn’t directly support a lot of ES6+ features, or TypeScript, but with a little bit of magic - their command line tooling and Babel - we can use modern JavaScript on those devices now (I’ll leave this to a subsequent post).

C is wonderful, but “even” hobbyist programmers should have good toolchains for their work <3
