Technology Blog

The happy path – using Azure Static Web Apps and Snowpack for TypeScript in your front end

09/01/2020 18:00:00

In 2020, I’m finding myself writing TypeScript as much as I’m using C# for my day-to-day dev. I’ve found myself experimenting, building multiplayer browser-based games, small self-contained PWAs and other “mostly browser-based things” over the last year or so.

One of the most frustrating things you just have to accept when you’re working in the browser, or running in node, is the frequently incoherent and flaky world of node and JavaScript toolchains.

Without wanting to labour the point too much, many of the tools in the JavaScript ecosystem just don’t work very well, are poorly maintained, or are poorly documented. Even some of the most popular tools, like webpack and Babel, which sit underneath almost everything, rely on mystery-meat configuration and fairly opaque error messages.

There is a reason that time and time again I run into frontend teams that hardly know how their software is built. I’ve spent the last year working on continual iterations of “what productive really looks like” in a TypeScript-first development environment, and fighting that healthy tension between tools that offer plenty of control but die by the hands of their own configuration, and tools that want to be your entire development stack (Create React App, and friends).

What do I want from a frontend development stack?

In all software design, I love tools that are correct by default and ideally require zero configuration.

I expect hot-reload – it’s the fast feedback cycle of the web, and accepting the inconsistencies of browser-based development without that benefit is a foolish thing.

I want native TypeScript compilation that I don’t have to think about. I don’t want to configure it, I want it to just work for v.current of the evergreen browsers.

I want source maps and debugger support by default.

I want the tool to be able to handle native ES Modules, and be able to consume dependencies from npm.

Because I’ve been putting a lot of time into hosting websites as Azure Static Web Apps, I also want whatever tool I use to play nicely in that environment, and be trivially deployable from a GitHub Action to Azure Static Web Apps.

Enter Snowpack

Snowpack is a modern, lightweight toolchain for faster web development. Traditional JavaScript build tools like webpack and Parcel need to rebuild & rebundle entire chunks of your application every time you save a single file. This rebundling step introduces lag between hitting save on your changes and seeing them reflected in the browser.

I was introduced to snowpack by one of its contributors, an old friend, while I was complaining about the state of “tools that don’t just work” in the JavaScript ecosystem. It was pitched as a tool trying to do pretty much all the things I was looking for, so I’ve decided to use it for a couple of things to see if it fits the kind of projects I’ve been working on.

And honestly, it pretty much just works perfectly.

Setting up snowpack to work with Azure Static Web Apps

Last month I wrote about how Azure Static Web Apps are Awesome with a walkthrough of setting up a static web app for any old HTML site, and I want to build on that today to show you how to configure a new project with snowpack that deploys cleanly, and uses TypeScript.

Create a package.json

First, like in all JavaScript projects, we’re going to start by creating a package.json file.

You can do this on the command line by typing

npm init

We’re then going to add a handful of dependencies:

npm install npm-run-all snowpack typescript --save-dev

Which should leave us with a package.json that looks a little bit like this

{
    "name": "static-app",
    "version": "",
    "description": "",
    "repository": "http://tempuri.org",
    "license": "http://tempuri.org",
    "author": "",
    "dependencies": {},
    "devDependencies": {
        "npm-run-all": "^4.1.5",
        "snowpack": "^2.9.0",
        "typescript": "^4.0.2"
    }
}

Add some build tasks

Now, we’ll open up our package.json file and add a couple of tasks to it:

{
    ...
    "scripts": {
        "start": "run-p dev:api dev:server",
        "dev:api": "npm run start --prefix api",
        "dev:server": "npx snowpack dev",
        "build:azure": "npx snowpack build"
    },
    ...
}

What we’re doing here is filling in the default node start task, using a module called npm-run-all that allows us to execute two tasks at once. We’re also defining tasks to run an Azure Functions API and the snowpack dev server.

Create our web application

Next, we’re going to create a directory called app and add an app/index.html file to it.

<html>
<head>
    <title>Hello Snowpack TypeScript</title>
    <script src="/index.js" type="module"></script>
</head>

<body>
    Hello world.
</body>
</html>

And we’ll create a TypeScript file called app/index.ts

class Greeter {
    private _hasGreeted: boolean;

    constructor() {
        this._hasGreeted = false;
    }

    public sayHello(): void {
        console.log("Hello World");
        this._hasGreeted = true;
    }
}

const greeter = new Greeter();
greeter.sayHello();

You’ll notice we’re using TypeScript type annotations (boolean and void in this code), along with private and public access modifiers.

Configuring Snowpack to look in our app directory

Next, we’re going to add a snowpack configuration file to the root of our repository. We’re adding this because by default, snowpack works from the root of your repository, and we’re putting our app in /app to help Azure Static Web Apps correctly host our app later.

Create a file called snowpack.config.json that looks like this:

{
    "mount": {
        "app": "/"
    },
    "proxy": {
        "/api": "http://127.0.0.1:7071/api"
    }
}

Here we’re telling snowpack to mount our content from “app” to “/”, and to reverse proxy “/api” to a running Azure Functions API. We’ll come back to that, but first, let’s test what we have.

npm run dev:server

will open a browser, and both in the console and on the screen, you should see “Hello World”.

Snowpack has silently transpiled your TypeScript code into a JavaScript file with the same filename, which your web app references using ES Module syntax.

The cool thing here is that everything you would expect to work in your frontend now does. You can use TypeScript, you can reference npm modules in your frontend code, and all of this happens with next to no startup time.
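For example, if you npm install a package (nanoid is used here purely as an illustration, and isn’t part of this walkthrough), you can import it directly in app/index.ts and snowpack takes care of serving it to the browser:

import { nanoid } from "nanoid";

// snowpack rewrites this bare module import into a path the browser can load
console.log(`Generated an id in the browser: ${nanoid()}`);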

You can extend this process using various snowpack plugins, and it probably supports the JavaScript tooling you’re already using natively – read more at snowpack.dev
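Plugins just get listed in snowpack.config.json. As a sketch of the shape of that configuration (the plugin named here is only an example – check snowpack.dev for what’s actually available and current):

{
    "mount": {
        "app": "/"
    },
    "plugins": ["@snowpack/plugin-dotenv"]
}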

Create our Azure Functions API

Because Azure Static Web Apps understand Azure Functions, you can add some serverless APIs into a subdirectory called api in your repository, and Azure’s Oryx build service will detect, host, and scale them for you as part of its automated deployment.

Make sure you have the Azure Functions Core Tools installed by running

npm install -g azure-functions-core-tools@3

Now we’re going to run a few commands to create an Azure functions app.

mkdir api  
cd api  
func init --worker-runtime=node --language=javascript

This generates a default JavaScript + node functions app in our api directory. We just need to create a function for our web app to call. Back in the command line, we’ll type (still in our /api directory)

func new --template "Http Trigger" --name HelloWorld

This will add a new function called HelloWorld into your API directory.

In the file api/package.json make sure the following two tasks are present:

  "scripts": {
    "prestart": "func extensions install",
    "start": "func start"
  },

If we now return to the root of our repository and type

npm run start

A whole lot of text will scroll past your console, and snowpack’s live dev server will start up, along with the Azure Functions app with our new “HelloWorld” function in it.

Let’s add a little bit of code to our app/index.ts to call this

The cool thing is that we can just do this with the app running, and both the functions runtime and the snowpack dev server will watch for and hot-reload the changes we make.

Calling our API

We’re just going to add some code to app/index.ts to call our function, borrowed from the previous blog post. Underneath our greeter code, we’re going to add a fetch call

…
const greeter = new Greeter();
greeter.sayHello();

fetch("/api/HelloWorld")
    .then(response => response.text())
    .then(data => console.log(data));

Now if you look in your browser console, you’ll notice that the line of text

“This HTTP triggered function executed successfully. Pass a name in the query string or in the request body for a personalized response.”

is printed to the console. That’s the text returned from our “HelloWorld” API.

And that’s kind of it!

Really, that is it – you’ve now got a TypeScript compatible, hot-reloading dev server, with a serverless API, that is buttery smooth to develop on top of. But for our final trick, we’re going to configure Azure Static Web Apps to host our application.

Configuring Static Web Apps

First, go skim down the guide to setting up Azure Static Web Apps I put together here - https://www.davidwhitney.co.uk/Blog/2020/07/29/azure_static_web_apps_are_awesome

You’re going to need to push your repository to GitHub, go and signup / login to the Azure Portal, and navigate to Azure Static Web Apps and click Create.

Once you’re in the creation process, you’ll need to authenticate with GitHub again and select your new repository from the drop-downs provided.

You’ll be prompted to select the kind of Static Web App you’re deploying, and you should select Custom. You’ll then be faced with the Build Details settings, where you need to make sure you fill in the following:

App Location: /
API location: api
App artifact location: build

Remember at the very start when we configured some npm tasks in our root? Well the Oryx build service is going to be looking for the task build:azure in your scripts configuration.

We populated that build task with “npx snowpack build” – a built-in snowpack command that compiles your application and produces a build folder ready to be hosted.

This configuration lets Azure know that our final files will be available in the generated build directory, so it knows what to host.

When you complete this creation flow, Azure will commit a GitHub action to your repository, and trigger a build to deploy your website. It takes around 2 minutes the first time you set this up.

That’s it.

I’ve been using snowpack for a couple of weeks now, and I’ve had a wonderful time with it – it lets me build rich frontends with TypeScript, using npm packages, without really worrying about building, bundling, or deploying.

These are the kind of tools that we should spend time investing in, that remove the nuance of low-level control, and replace it with pure productivity.

Give Azure Static Web Apps with Snowpack a go for your next project.

Does remote work make software and teamwork better?

08/10/2020 11:00:00

I woke today with an interesting question in my inbox, about the effectiveness of remote work and communication, especially during the global pandemic:

Diego Alto: Here's the statement

"Working remotely has an added benefit (not limited to software) of forcing documentation (not only software!) which helps transfer of knowledge within organizations. (I am not just talking about software).

Example: a decision made over a chat in the kitchen making coffee is inaccessible to everyone else who wasn't in that chat. But a short message in slack, literally a thumbs up in a channel everyone sees, makes it so.

Discuss"

Let's think about what communication really means in the context of remote work, especially while we're all forced to live through this as our day-to-day reality.

Tools are not discipline

High quality remote work requires clear, concise, and timely communication, but I'm not sure that it causes or facilitates it.

I think there's a problem here that comes from people learning to use tools rather than the disciplines involved in effective communication - which was entirely what the original agile movement was about.

DHH talks about this a lot - about how they hire writers at Basecamp; their distributed model works for them because they have a slant towards hiring people who use written communication well.

I find this fascinating, but I also have a slightly sad realisation that people are unbelievably bad at written communication, and time is rarely invested in making them better at it.

This means that most written communication in businesses is wasteful and ineffective and may as well not have happened. Most of the audit trails it creates either get lost or are weaponised - so it becomes a case of "your mileage may vary".

Distinct types of written communication are differently effective though, and this is something we should consider.

Different forms of communication for different things

Slack is an "ok form of communication" but it's harmed by being temporal - information drifts over time or is repeated.

But it's better than email! I hear you cry.

Absolutely, but it's better because it's open and searchable. Email is temporal AND closed. Slack is an improvement because it's searchable (to a point) and most importantly a broadcast medium of communication.

It's no surprise that Slack and live chat have seen such a rise in popularity - they're just the 2010s version of the reply-all email that drove all of business through the late 90s and early 2000s.

Slack is basically just the millennial version of reply-all email chains.

Both forms of communication are worse than structured and minimal documentation in a known location though.

"Just enough" documentation - a record of decisions, impacts, and the why - is far more effective than sifting through any long-form communication to extract details you might just miss.

Co-location of information

I'm seeing a rise in Architecture Decision Records (ADRs) and tooling inside code repositories to support and maintain them for keeping track of decisions.

An architectural decision record (ADR) is a document that captures an important architectural decision made along with its context and consequences.
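As a purely illustrative sketch, an ADR is usually nothing more than a short markdown file kept in the repository, something like:

# 4. Host the frontend as an Azure Static Web App

## Status
Accepted

## Context
We need to host a small TypeScript frontend and a serverless API with minimal operational overhead.

## Decision
Use Azure Static Web Apps, with the API implemented as Azure Functions living in the /api directory of the same repository.

## Consequences
Deployment is coupled to GitHub Actions, and local development requires the Azure Functions Core Tools.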

There's a tension between the corporate wiki, which is just rotten and useless, and "just read the code bro".

"Just read the code" is interesting, as it's a myth I've perpetuated in the past. It's an absolute fact that the code is the only true and honest source of understanding code, but that doesn't mean "don't give me a little bit of narrative here to help me".

Just enough

I don't want you to comment the code, I want you to tell me its story. I want you to tell me just enough.

I've done a bunch of stuff with clients about what "just enough" documentation looks like, and almost every time, "just enough" is always "co-located with the thing that it describes".

Just enough means, please tell me -

  • What this thing is
  • Why it exists
  • What it does
  • How to build this thing
  • How to run this thing
  • Exactly the thing I'll get wrong when I first try and use it

Any software or libraries that don't ship with this information, rot away, don't get adopted, and don't get used.

I'm glad the trend has taken hold. ADRs really fit into this "just enough" minimal pattern, with the added benefit of growing with the software.

The nice thing about ADRs is that they are more of a running log than a spec - specs change, they're wrong, they get adapted during work. ADRs are meant to be the answer to "why is this thing the way it is".

Think of them as the natural successor to the narrative code comment. The spiritual sequel to the "gods, you gotta know about this" comment.

Nothing beats a well-formed readme, with its intent, and a log of key decisions, and we know what good looks like.

Has remote work, and a global pandemic, helped this at all?

On reflection, I'm not sure "better communication" is a distributed work benefit - but it's certainly a distributed work requirement.

I have a sneaking suspicion that all the global pandemic really proved was that authoritarian companies refusing to support home working was nothing more than control freakery by middle management.

There's nothing in the very bizarre place the world finds itself that will implicitly improve communication, but we're now forced into a situation where everyone needs the quality of communication that was previously only required of remote workers.

Who needs to know how things work anyway?

People who can't read code have no concern with knowing how it works, but every concern with understanding why it works and what it does.

Why. What. Not how.

Understanding the how takes a lot of context.

There's a suitable abstraction of information that must be present when explaining how software works and it's the job of the development teams to make this clear.

We have to be very aware of the "just enough information to be harmful" problem - and the reason ADRs work well, in my opinion, is they live in the repository, with the code, side by side with the commits.

This provides a minimum bar to entry to read and interpret them - and it sets the context for the reader that understanding the what is a technical act.

This subtle barrier to entry is a kind of abstraction, and hopefully one that prevents misinterpretation of the information.

Most communication is time-sensitive

There's a truth here, and a reason - the reason Slack and conversations are so successful at transmitting information: most information is temporal in nature.

Often only the effects of communication need to last, and at that point, real-time chat being limited by time isn't a huge problem.

In fact, over-communication presents a navigation and maintenance burden - readers are often left wondering which version of the information is correct, or where the most current information resides, while multiple copies of it naturally atrophy over time.

We've all seen that rancid useless corporate wiki, but remember it was born from good intentions of communication before it fell into disrepair.

All code is writing

So, remember that all code is writing.

It's closer to literature than anything. And we don't work on writing enough.

Writing, in various styles, with abstractions that should provide a benefit.

And that extends to the narrative, where it lives, and how it's updated.

But I believe it's always been this way, and "remote/remote-first", rather than improving this process (though it can be a catalyst to do so), pries open the cracks when it's sub-par.

This is the difficulty of remote. Of distributed team management.

It's, ironically, what the agile manifesto's focus on co-location was designed to resolve.

Conflict is easier to manage in person than in writing.

By focusing on purely written communication, you're cutting out entire portions of human interaction - body language, quickly addressing nuance or disagreement. It's easier to defuse and subtly negotiate in person.

The inverse is also true: thoughts are often more fully formed in prose. This can be for better or worse, with people often more reluctant to change course once they perceive they have put significant effort into writing a thing down.

There is no one way

Everything in context, there is no one way.

The biggest challenge in software, or teamwork in general, is replacing your mental model with the coherent working pattern of a team.

It's about realising it's not about you.

It's easy in our current remote-existence to think that "more communication" is the better default, and while that might be the correct place to start, it's important that the quality of the communication is the thing you focus on improving in your organisations.

Make sure your communication is more timely, more contextual, more succinct, and closer in proximity to the things it describes.

Azure Static Web Apps Are Awesome!

07/29/2020 13:01:00

Over the last 3 months or so, I’ve been building a lot of experimental software on the web. Silly things, fun things. And throughout, I’ve wrangled with different ways to host modern web content.

I’ve been through the wringer of hosting things on Glitch for its interactivity, Heroku to get a Node backend, even Azure App Services to run my node processes.

But each time it felt like effort, and cost, to put a small thing on the internet.

Everything was somehow a compromise in either effort, complexity, or functionality.

So when Microsoft put out the beta of static web apps a couple months ago, I was pretty keen to try them out.

They’re still in beta, the docs are a little light, the paint is dripping wet, but they’re a really great way to build web applications in 2020, and cost next to nothing to run (actually, they're free during this beta).

I want to talk you through why they’re awesome, how to set them up, and how to customise them for different programming languages, along with touching on how to deal with local development and debugging.

We need to talk about serverless

It is an oft-repeated joke – that the cloud is just other people’s computers, and serverless, to extend the analogy, is just someone else’s application server.

While there is some truth to this – underneath the cloud vendors, somewhere, is a “computer” – it certainly doesn’t look even remotely like you think it does.

When did you last dunk a desktop computer looking like this under the sea?

While the cloud is “someone else’s computer”, and serverless is “someone else’s server” – it’s also someone else’s hardware abstraction, and management team, and SLA to satisfy, operated by someone else’s specialist – and both the cloud, and serverless, make your life a lot easier by making computers, and servers, somebody else’s problem.

In 2020, with platforms like Netlify and Vercel taking the PaaS abstraction and iterating products on top of it, it’s great to see Microsoft, who for years have had a great PaaS offering in Azure, start to aim their sights at an easy to use offering for “the average web dev”.

Once you get past the stupid-sounding JAMSTACK acronym, shipping HTML and JavaScript web apps that rely on APIs for interactivity is a really common scenario, and the more people building low-friction tools in this space, the better.

Let’s start by looking at how Azure Static Web Apps work in a regular “jamstack-ey” way, and then we’ll see how they’re a little bit more magic.

What exactly are Azure Static Web Apps?

Azure Static Web Apps is a new, currently-in-beta hosting option in the Azure Web Apps family of products.

They’re an easy way to quickly host some static files – HTML and JavaScript – on a URL and have all the scaling and content distribution taken care of for you.

They work by connecting a repository in GitHub to the Azure portal’s “Static Web Apps” product, and the portal will configure your repository for continuous delivery. It’s a good end-to-end experience, so let’s walk through what that looks like.

Creating your first Static Web App

We’re going to start off by creating a new repository on GitHub -

And add an index.html file to it…
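Anything will do here – a minimal placeholder along these lines is plenty (the title text is just an example):

<html>
<head>
    <title>Hello Static Web Apps</title>
</head>
<body>
    Hello world.
</body>
</html>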

Great, your first static site, isn’t it grand. That HTML file in the root is our entire user experience.

Perfect. I love it.

We now need to hop across to the Azure portal and add our new repository as a static site.

The cool thing about this process, is that the Azure portal will configure GitHub actions in our repository, and add security keys, to configure our deployment for us.

We’re just giving the new site a resource group (or creating one if you haven’t used Azure before - a resource group is just a label for a bunch of stuff in Azure) and selecting our GitHub repository.

Once we hit Review + Create, we’ll see our final configuration.

And we can go ahead and create our app.

Once the creation process has completed (confusingly messaged as “The deployment is complete”) – you can click the “Go to resource” button to see your new static web app.

And you’re online!

I legitimately think this is probably the easiest way to get any HTML onto the internet today.

Presuming you manage to defeat the Microsoft Active Directory Boss Monster to login to Azure in the first place ;)

What did that do?

If we refresh our GitHub page now, you’ll see that the Azure Create process has used the permission you gave it to commit to your repository.

When you created your static web app in the Azure portal, it did two things:

  1. Created a build script that it committed to your repository
  2. Added a deployment secret to your repository settings

The build script that gets generated is relatively lengthy, but you’re not going to have to touch it yourself.

It configures GitHub actions to build and push your code every time you commit to your master branch, and to create special preview environments when you open pull requests.

This build script is modified each time to reference the deployment secret that is generated by the Azure portal.

You will notice that the secret name referenced in that script lines up with the secret added to your repository settings.

Is this just web hosting? What makes this so special?

So far, this is simple, but also entirely unexciting – what makes Azure Static Web Apps so special though, is their seamless integration with Azure Functions.

Traditionally, if you wanted to add some interactivity to a static web application, you’d have to stand up an API somewhere – Static Web Apps pulls these two things together, and allows you to define both an Azure Static Web App, and some Azure functions that it’ll call, in the same repository.

This is really cool, because, you still don’t have a server! But you can run server-side code!

It is especially excellent because this server-side code that your application depends on, is versioned and deployed with the code that depends on it.

Let’s add an API to our static app!

Adding an API

By default, the configuration that was generated for your application expects to find an Azure Functions app in the /api directory, so we’re going to use npm and the Azure functions SDK to create one.

At the time of writing, the Functions runtime only supports up to Node 12 (the latest LTS version of node) and is updated tracking that version.

You’re going to need node installed, and in your path, for the next part of this tutorial to work.

First, let’s check out our repository

Make sure you have the Azure Functions Core Tools installed by running

npm install -g azure-functions-core-tools@3

Now we’re going to run a few commands to create an Azure functions app.

mkdir api
cd api
func init --worker-runtime=node --language=javascript

This generates a default JavaScript + node functions app in our api directory. We just need to create a function for our web app to call. Back in the command line, we’ll type (still in our /api directory)

func new --template "Http Trigger" --name HelloWorld

This will add a new function called HelloWorld into your api directory.

It contains a function.json file – the bindings that tell the Azure Functions runtime what to do with your code – and an index.js file with the default code that actually runs.
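The generated handler in index.js looks roughly like this (trimmed very slightly here):

module.exports = async function (context, req) {
    context.log('JavaScript HTTP trigger function processed a request.');

    const name = (req.query.name || (req.body && req.body.name));
    const responseMessage = name
        ? "Hello, " + name + ". This HTTP triggered function executed successfully."
        : "This HTTP triggered function executed successfully. Pass a name in the query string or in the request body for a personalized response.";

    // Whatever ends up on context.res is returned to the caller as the HTTP response
    context.res = {
        body: responseMessage
    };
}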

Let’s edit our HTML to call this function.

We’re using the browser’s Fetch API to call “/api/HelloWorld” – Azure Static Web Apps will make our functions available following that pattern.
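A small script block in index.html is enough – writing the response into the page body is just one way to show it, logging to the console works just as well:

<script>
    fetch("/api/HelloWorld")
        .then(response => response.text())
        .then(data => document.body.innerText = data);
</script>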

Let’s push these changes to git, and wait a minute or two for our deployment to run.

If we now load our webpage, we’ll see the text returned from our HelloWorld function.

How awesome is that – a server-side API, without a server, from a few static files in a directory.

If you open up the Azure portal again, and select Functions, you’ll see your HelloWorld function now shows up:

I love it, but can I run it locally?

But of course!

Microsoft recommends using the npm package live-server to run the static portion of your app for development, which you can do just by typing

npx live-server

From the root of your repository. Let’s give that a go now

Oh no! What’s going on here.

Well, live-server is treating the /api directory as if it were content, and serving an index page locally, which isn’t what we want. To make this run like it would on production, we’re also going to need to run the Azure Functions runtime, and tell live-server to proxy any calls to /api across to that running instance.

Sounds like a mouthful, but let’s give that a go.

cd api
npm i
func start

This will run the Azure Functions runtime locally, and you’ll see the functions host start up and list the URL of your HelloWorld function.

Now, in another console tab, let’s start up live-server again, this time telling it to proxy calls to /api

npx live-server --proxy=/api:http://127.0.0.1:7071/api

If we visit our localhost on 8080 now, you can see we have exactly the same behaviour as we do in Azure.

Great, but this all seems a little bit… fiddly… for local development.

If you open your root directory in Visual Studio Code, it will hint that it has browser extension support for debugging and development, but I like to capture this stuff inside my repository so that anyone can run these sites trivially from the command line.

Adding some useful scripts

I don’t know about you, but I’m constantly forgetting things, so let’s capture some of this stuff in some npm scripts so I don’t have to remember them again.

In our /api/package.json we’re going to add two useful npm tasks
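They’re a prestart hook to install the function extensions, and a start task to run the functions host:

  "scripts": {
    "prestart": "func extensions install",
    "start": "func start"
  },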

This just means we can call npm run start on that directory to have our functions runtime start up.

Next we’re going to add a package.json to the root of our repository, so we can capture all our live server related commands in one place.

From a command prompt type:

npm init

and hit enter a few times to accept the default options – you’ll end up with a near-empty default package.json in the root of the repository.

And finally, add the npm-run-all package

npm install npm-run-all --save-dev

We’re going to chuck a few more scripts in this default package.json
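Something along these lines – run-p comes from npm-run-all, and the proxy address matches the functions host we started a moment ago:

{
    ...
    "scripts": {
        "dev:api": "npm run start --prefix api",
        "dev:server": "npx live-server --proxy=/api:http://127.0.0.1:7071/api",
        "start": "run-p dev:api dev:server"
    },
    ...
}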

Here we’re setting up a dev:api, dev:server and a start task to automate the command line work we had to incant above.

So now, for local development we can just type

npm run start

And our environment works exactly how it would on Azure, without us having to remember all that stuff, and we can see our changes hot-reloaded while we work.

Let’s commit it and make sure it all still works on Azure!

Oh No! Build Failure!

Ok, so I guess here is where our paint is dripping a little bit wet.

Adding that root package.json to make our life easier actually broke something in our GitHub Actions deployment pipeline.

If we dig around in the logs, we’ll see that something called “Oryx” can’t find a build script, and doesn’t know what to do with itself

As it turns out, the cleverness baked into Azure Static Web Apps is a tool called Oryx, which runs language detection and expects frameworks it understands.

What’s happened is that it’s found our package.json and presumed we’re going to be specifying our own build jobs and aren’t just a static site anymore – but because we didn’t provide a build task, it’s given up, not knowing what to do.

The easiest way I’ve found to be able to use node tooling, and still play nicely with Azure’s automated deployment engine is to do two things:

  1. Move our static assets into an “app” directory
  2. Update our deployment scripts to reflect this.

First, let’s create an app directory, and move our index.html file into it.

Now we need to edit the YAML file that Azure generated in .github/workflows

This might sound scary, but we’re only really changing one thing – in the jobs section, on line ~30 of the currently generated sample there are three configuration settings –
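In the generated workflow they look something like this (the exact default values may differ slightly between versions of the template):

    app_location: "/"
    api_location: "api"
    app_artifact_location: ""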

We just need to update app_location to be “app”.

Finally, we need to update the npm scripts we added to make sure live-server serves our app from the right location.

In our root package.json, we need to add “app” to our dev:server build task

We’re also going to add a task called build:azure – and leave it empty.
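After both changes, the scripts section of the root package.json ends up looking roughly like this:

    "scripts": {
        "dev:api": "npm run start --prefix api",
        "dev:server": "npx live-server app --proxy=/api:http://127.0.0.1:7071/api",
        "start": "run-p dev:api dev:server",
        "build:azure": ""
    },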

In total, we’ve only changed a few files subtly.

You might want to run your npm run start task again now to make sure everything still works (it should!) and commit your code and push it to GitHub.

Wonderful.

Everything is working again.

“But David! You’re the TDD guy right? How do you test this!”

Here’s the really cool bit I suppose – now we’ve configured a build task, and know where we can configure an app_artifact_location – we can pretty much do anything we want.

  • Want to use jest? Absolutely works!
  • Want to use something awesome like Wallaby? That too!

Why not both at once!

You just need to npm install the thing you want, and you can absolutely test the JavaScript in both your static site and your API.

You can install webpack and produce different bundled output, use svelte, anything, and Microsoft’s tooling will make sure to host and scale both your API and your web app.

My standard “dev” load-out for working with static web sites is

  1. Add a few dev dependencies

  2. Add this default babel.config.js file to the root of my repository
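The dev dependencies are typically jest, babel-jest and @babel/preset-env (adjust for whatever you’re actually testing with), and the babel.config.js is the standard one from the Jest documentation:

module.exports = {
    // Let Babel target whatever version of node is currently running the tests
    presets: [["@babel/preset-env", { targets: { node: "current" } }]]
};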

This allows jest to use any language features that my current version of node supports, and plays nicely with all my Visual Studio Code plugins.

I’ll also use this default Wallaby.conf.js configuration for the continuous test runner Wallaby.js – which is similar to NCrunch but for JavaScript and TypeScript codebases.

You mentioned TypeScript?

Ah yes, well, the Azure Functions runtime totally supports TypeScript.

When you create your API, you just need to

func init --worker-runtime=node --language=typescript

And the API that is generated will be TypeScript – it’s really that simple.

Equally, you can configure TypeScript for your regular static web app – you’ll probably want to configure webpack to do the compiling and bundling into the assets folder, but it works absolutely fine.

When your functions are created using TypeScript, some extra .json metadata is created alongside each function that points to a compiled “dist” directory, that is built when the Azure functions runtime deploys your code, complete with source-maps, out of the box.

But let’s go wild, how about C# !

You can totally use C# and .NET Core too!

If you func init using the dotnet worker runtime, the SDK will generate C# function code that works in exactly the same way as its JavaScript and TypeScript equivalents.
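For the record, that’s just the dotnet flavour of the same command we used earlier:

func init --worker-runtime=dotnet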

You can literally run a static web app, with an auto-scaled C# .NET Core API backing it.

Anything that the Azure Functions runtime supports is valid here (so python too).

I Think This is Really Awesome

I hope that splitting this out into tiny steps, and explaining how the GitHub Actions build interacts with both the Functions runtime and the Oryx deployment engine that drives Azure Static Web Apps, has given you some inspiration for the kinds of trivially scalable web applications you can build today, for practically free.

If you’re a C# shop, a little out of your comfort zone away from ASP.NET MVC, why not use Statiq.Web as part of the build process to generate static WebApps, that use C#, and are driven by a C# and .NET Core API?

Only familiar with Python? You can use Pelican or Lektor to do the same thing.

The Oryx build process that sits behind this is flexible, and provides plenty of hooks to customise the build behaviour between repository pulling, and your site getting served and scaled.

These powerful serverless abstractions let us do a lot more with a lot less, without the stress of worrying about outages, downtime, or scaling.

You can really get from zero to working in Azure static sites in five or ten minutes, and I legitimately think this is one of the best ways to host content on the internet today.

.NET Templates for C# Libraries, GitHub Actions and NuGet Publishing

05/04/2020 17:20:00

Whenever I'm looking to put out a new library I find myself configuring everything in repetitively simple ways. The default File -> New Project templates that Visual Studio ships with never quite get it right for my default library topology.

Almost every single thing I build looks like this:

Mode                 LastWriteTime         Length Name
----                 -------------         ------ ----
d-----        04/05/2020   1:16 PM                .github               # GitHub actions build scripts
d-----        04/05/2020   1:10 PM                adr                   # Architecture decision register
d-----        04/05/2020   1:05 PM                artifacts             # Build outputs
d-----        04/05/2020   1:05 PM                build                 # Build scripts
d-----        04/05/2020   1:05 PM                docs                  # Documentation markdowns
d-----        04/05/2020   1:05 PM                lib                   # Any non-package-managed libs
d-----        04/05/2020   1:05 PM                samples               # Examples and samples
d-----        04/05/2020   4:23 PM                src                   # Source code
d-----        04/05/2020   4:21 PM                test                  # Tests
-a----        03/09/2019   8:59 PM           5582 .gitignore
-a----        04/05/2020   4:22 PM           1833 LibraryTemplate.sln
-a----        04/05/2020   1:02 PM           1091 LICENSE
-a----        04/05/2020   3:16 PM            546 README.md
-a----        04/05/2020   1:08 PM              0 ROADMAP.md

So I spent the afternoon deep diving into dotnet templating and creating and publishing a nuget package to extend .NET new with my default open source project directory layout.

Installation and usage

You can install this template from the command line:

dotnet new -i ElectricHead.CSharpLib

and then can create a new project by calling

dotnet new csghlib --name ElectricHead.MySampleLibrary

Topology and conventions

This layout is designed for trunk-based development against a branch called dev. Merging to master triggers publishing.

  • Commit work to a branch called dev.
  • Any commits will build and be tested in release mode by GitHub Actions.
  • Merge to master will trigger Build, Test and Publish to NuGet
  • You need to setup your NuGet API key as a GitHub Secret called NuGetApiKey

You need to use the csproj to update your SemVer version numbers, but GitHub's auto-incrementing build numbers will be appended to the build parameter of your version number, so discrete builds will always create unique packages.
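In practice that just means bumping a version property in the generated csproj when you cut a release - something along these lines (the property name and number here are illustrative; check the generated project file for what the template actually uses):

  <PropertyGroup>
    <Version>1.2.3</Version>
  </PropertyGroup>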

The assembly description will be set to the SHA of the commit that triggered the package.

GitHub Actions

Pushing the resulting repository to GitHub will create builds and packages for you as if by magic. Just be sure to add your NuGet API key to your repository's Secrets from the Settings tab, to support publishing to NuGet.org

Deploying Azure WebApps in 2020 with GitHub Actions

05/03/2020 12:45:00

For the longest time, I've relied on, and recommended, Azure's Kudu deployment engine as the simplest and most effective way to deploy web apps into Azure App Services on Windows. Sadly, over the last couple of years, and after its original author changed roles, Kudu has lagged behind .NET Core SDK versions, meaning that if you want to run the latest versions of .NET Core for your webapp, Kudu can't build and deploy them.

We don't want to be trapped on prior versions of the framework, and luckily GitHub actions can successfully fill the gap left behind by Kudu without much additional effort.

Let's walk through creating and deploying an ASP Net Core 3.1 web app, without containerisation, to Azure App services and GitHub in 2020.

This entire process will take less than 5-10 minutes to complete the first time, and once you understand the mechanisms, should be trivial for any future projects.

Create Your WebApp

  • Create a new repository on GitHub.
  • Clone it to your local machine.
  • Create a new Visual Studio Solution
  • Create a new ASP.NET Core 3.1 MVC WebApplication

Create a new Azure App Service Web App

  • Visit the Azure Portal
  • Create a new web application
  • Select your subscription and resource group

Instance details

Because we're talking about replacing Kudu - the Windows-native deployment engine in App Services - for building our software, we're going to deploy just our code, to Windows. It's worth noting that GitHub Actions can also be used for Linux deployments and containerised applications, but this walkthrough is intended as a like-for-like example for Windows-on-Azure users.

  • Give it a name
  • Select code from the publish options
  • Select .NET Core 3.1 (LTS) as the runtime stack
  • Select Windows as the operating system
  • Select your Region
  • Select your App service plan

Configure its deployment

Now we're going to link our GitHub repository, to our AppServices Web app.

  • Visit the Deployment Centre for your newly created application in the Azure Portal.
  • Select GitHub actions (preview)
  • Authorise your account
  • Select your repository from the list

A preview of a GitHub action will be generated and displayed on the screen, click continue to have it automatically added to your repository.

When you confirm this step, a commit will be added to your repository on GitHub with this template stored in .github/workflows as a .yml file.

Correct any required file paths

Depending on where you created your code, you might notice that your GitHub action fails by default. This is because the default template just calls dotnet build and dotnet publish, and if your projects are in some (probably sane) location like /src the command won't be able to find your web app by default.

Let's correct this now:

  • Git pull your repository
  • Customise the generated .yml file in .github/workflows to make sure its paths are correct.

In the sample I created for this walkthrough, the build and publish steps are changed to the following:

- name: Build with dotnet
  run: dotnet build ANCWebsite.sln --configuration Release

- name: dotnet publish
  run: dotnet publish src/ANCWebsite/ANCWebsite.csproj -c Release -o ${{env.DOTNET_ROOT}}/myapp
  

Note the explicit solution and CSProj paths. Commit your changes and Git push.

Browse to your site!

You can now browse to your site and it works! Your deployments will show up both in the Azure Deployment Center and the GitHub Actions list.

This whole process takes less than ~2-3 minutes to setup, and is reliable and recommended. A reasonable replacement for the similar Kudu "git pull deploy" build that worked for years.

How it works under the hood

This is all powered by three things:

  • An Azure deployment profile
  • A .yml file added to .github/workflows
  • A GitHub secret

As you click through the Azure Deployment center setup process, it does the following:

  • Adds a copy of the templated GitHub Actions Dot Net Core .yml deployment file from https://github.com/Azure/webapps-deploy to your repository
  • Downloads the Azure publishing profile for your newly created Website and adds it as a GitHub secret to your GitHub repositories "Secrets" setting.
  • Makes sure the name of the secret referenced in .github/workflows/projectname_branchname.yml matches the name of the secret it added to the repository.

The rest is taken care of by post-commit hooks and GitHub actions automatically.

You can set this entire build pipeline up yourself by creating the .yml file by hand, and adding your secret by hand. You can download the content for your Publish profile by visiting

Deployment Center -> Deployment Credentials -> Get Publish Profile

In the Azure portal.

It's an XML blob that you can paste into the GitHub UI, but honestly, you may as well let Azure do the setup for you.

Next Steps

You've just built a simple CI pipeline that deploys to Azure WebApps without any of the overhead of k8s, docker, or third party build systems.

Some things you can now do:

  • Consider running your tests as part of this build pipeline by adding a step to call dotnet test (there's a sketch of this below)
  • Add an additional branch specification to deploy test builds to different deployment slots
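For the first of those, a test step slotted into the generated workflow might look something like this, assuming you've added a test project to the solution (the solution name is reused from the sample above):

- name: Test with dotnet
  run: dotnet test ANCWebsite.sln --configuration Release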

The nice thing about this approach, is that your build runs in a container on GitHub actions, so you can always make sure you're using the versions of tools and SDKs that you desire.

You can find all the code used in this walkthrough here.

The Quarantine Project

05/03/2020 10:00:00

While we're all enduring these HashTagUnprecedentedTimes, I'm going to keep a living list here of my own personal "quarantine project".

Earlier in the year I took some time out, which was originally intended to be dedicated to travel, conferences and writing book 3. Obviously the first two things in that list are somewhat less plausible in a global pandemic, so I've been up to a bunch of other stuff.

Here is a living list, and I'll update it as more things arrive, until such a time as I write about them all distinctly.

Events

Remote Code DOJO

I've been running weekly pair programming dojos that you can sign up to and attend here

Projects

.NET

JavaScript / TypeScript

Hardware

Work In Progress

Books

Book 3 is still coming 🖤

Running .NET Core apps on Glitch!

03/26/2020 16:00:00

Over the last couple of months I've been doing a lot of code for fun in Glitch.

Glitch is a collaborative web platform that aims to make web programming accessible and fun - complete with real-time editing and hot-reloading built in. It's a great platform for sketching out web apps, working with friends, or adapting samples other people share ("remixing"). It's a great product, and I love the ethos behind it - and like a lot of things on the web in 2020, it's commonly used for writing HTML and JavaScript, with default templates also available for Node + Express.js apps.

...but why not .NET Core?

I was in the middle of configuring some webpack jobs when I idly tweeted that it'd be great if the .NET Core team could support this as a deployment target. The Glitch team shot across a few docs asking what an MVP would look like for .NET Core on Glitch, and I idly, and mostly out of curiosity, typed dotnet into the Glitch command line prompt to see if the dotnet CLI just happened to be installed. And it was.

Armed with the wonderfully named glitchnomicon and the dotnet CLI, I created a fresh ANC (ASP.NET Core) MVC starter project and migrated the files one by one into a Glitch project.

With a little tweaking I've got the dotnet new project template running in Glitch, without any changes to the C# code at all.

Subtle changes:

  • No Bootstrap
  • Stripped out boilerplate layout
  • No jQuery
  • Removed "development" mode and "production" CSS filters from the views
  • Glitch executes the web app in development mode by default so you see detailed errors

I've published some projects on Glitch for you to remix and use as you like.

ASP.NET MVC Starter Project

ASP.NET Starter Project (just app.run)

I'll be adding more templates to the collection .NET Core Templates over time.

Glitch is awesome, and you should check it out.

Small thoughts on literate programming

02/28/2020 16:00:00

One of the most profound changes in my approach to software was understanding it to be literature. Functional literature, but literature regardless. Intent, characterisation, description, text, subtext, flow, rhythm, style, all affect software like they do prose.

It's a constrained form of communication, with grammar, and that's why we work in "programming languages". They are languages. With rules, idioms and quirks. These aren't analogies, it's what software is. It's storytelling. Constrained creative writing with purpose.

Basically, Donald Knuth was right, and called it a bajillion years ago - with the idea of literate programming. Today's languages are that thing. You will never be a great programmer unless you become an excellent communicator, and an excellent writer. The skillset is the same.

Critical thinking, expression of concept, reducing repetition, form for impact, signposting, intent and subtext. If you want to understand great software, understand great literature.

Communication skills are not optional

If you want to teach a junior programmer to be a better programmer, teach them to write. Language is our tool for organising our thoughts. It's powerful. It has meaning. It has power.

It's a gift, it's for everyone. 🖤

(I should plug that I have a talk on this - get in touch)

Hardware Hacking with C# and JavaScript

01/29/2020 09:20:00

Over the last couple of months I’ve had a little exposure to hardware hacking and wearables after lending some of my exceptionally rusty 1990s-era C to Jo’s excellent Christmas Jumper project.

The project was compiled for the AdaFruit Feather Huzzah – a low-powered, ESP8266 SoC, Arduino-compatible, gumstick-sized development board. Other than a handful of Raspberry Pis here and there, I’d never done any hardware hacking before, so it had that wonderful sheen of new and exciting we all crave.

“It’s just C!” I figured, opening the Arduino IDE for the first time.

And it was, just C. And I forgot how much “just C” is a pain – especially with a threadbare IDE.

I like syntax checking. I like refactoring. I like nice things.

The Arduino IDE, while functional, was not a nice thing.

It only really has two buttons – compile, and “push”. And it’s S L O W.

Arduino IDE

That’s it.

I Need Better Tools for This

I went on a small mission to get better C tools that also worked with the Arduino, as my workflow had devolved into moving back and forth between Visual Studio for syntax highlighting, and Arduino IDE for verification.

I stumbled across Visual Micro’s “Arduino IDE for Visual Studio”, which was mostly good, if occasionally flaky, and had a slightly awkward and broken debugging experience. Still – light-years ahead of what I was using. There’s also an open source and free VSCode extension which captures much of the same functionality (though sadly I was missing my ReSharper C++ refactorings).

We stumbled forwards with my loose and ageing grasp of C, and Google, and got the thing done.

But there had to be a better way. Something less archaic and painful.

Can I Run C#?

How about we just don’t do C?

I know, obvious.

I took to Google to work out what the current state of the art was for C# and IoT, remembering a bit of a fuss made a few years ago about the early .NET Core prototypes of IoT compatibility.

Windows 10 IoT Core seems steeped in controversy and potential abandonment in the face of .NET Core’s cross-platform sensibilities, so I moved swiftly onwards.

Thinking back a decade, I remembered a fuss made about the .NET Micro Framework based on Windows CE, and that drove me to a comparatively new project, .NET NanoFramework – a reimplementation and cut-down version of the CLR designed for IoT devices.

I read the docs and went to flash the NanoFramework runtime onto my AdaFruit Feather Huzzah. I’d flashed this thing hundreds of times by now.

And every time, it failed to connect.

One Last Huzzah

As it transpired, the AdaFruit Feather Huzzah that was listed as supported (£19.82, Amazon) wasn’t the device I needed – I instead needed the beefier AdaFruit Feather Huzzah32 (£21.52, Amazon). Of course.

Turns out the Huzzah had a bigger sibling with more memory and more CPU based on the ESP32 chip. And that’s what nanoFramework targeted.

No problem, low cost, ordered a couple.

Flashing a Huzzah32 to Run C#

The documentation is a little bit dense and took longer than I’d like to fumble through, so I’ll try condensing it here. Prerequisite: Visual Studio 2019+, any SKU, including the free Community edition.

  1. Add a NuGet package source to Visual Studio

        https://pkgs.dev.azure.com/nanoframework/feed/_packaging/sandbox/nuget/v3/index.json
    
  2. Add a Visual Studio Extensions feed

        http://vsixgallery.com/feed/author/nanoframework/
    
  3. Go to Tools -> Extensions and install the “nanoFramework” extension.

  4. Install the USB driver for the Huzzah, the SiLabs CP2104 Driver

  5. Plug in your Huzzah

  6. Check the virtual COM port it has been assigned in Windows Device Manager under the “Ports (COM & LPT)” category.

  7. Run the following commands from the Package Management Console

        dotnet tool install -g nanoFirmwareFlasher
        nanoff --target ESP32_WROOM_32 --serialport YOURCOMPORTHERE --update
    

That’s it, reset the board, you’re ready to go.

The entire process took less than 5 minutes, and if you’ve already used an ESP32, or the previous Huzzah, you’ll already have the USB drivers installed.

Hello World

You can now create a new nanoFramework project from your File -> New Project menu.

I used this program, though obviously you can write everything in your static void main if that’s your jam.

using System;
using System.Threading;

public class Program
{
    public const int DefaultDelay = 1000;

    public static void Main() => new Program().Run();

    public Program()
    {
        Console.WriteLine("Constructors are awesome.");
    }

    public void Run()
    {
        // Loop forever, printing to the output once a second
        while (true)
        {
            Console.WriteLine("ArduinYo!");
            Thread.Sleep(DefaultDelay);
        }
    }
}

The brilliant, life-affirming, beautiful thing about this is that you can just press F5, and within a second your code will be running on the hardware. No long compile and upload times. Everything just works like any other program, and it’s a revelation.

NanoFramework has C# bindings for much of the hardware you need to use, and extensive documentation for anything you need to write yourself using GPIO (General Purpose IO – the hardware connections on the board you can solder other components to, or add Arduino shields to).
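As a rough sketch of what that looks like, blinking an LED is along these lines – though the GPIO namespaces have moved around between nanoFramework releases and the pin number is just an assumption, so treat this as a sketch and check the current docs:

using System.Threading;
using Windows.Devices.Gpio;

public class Blinky
{
    public static void Main()
    {
        // Pin 13 is an assumption - use whichever GPIO pin your LED is actually wired to
        GpioPin led = GpioController.GetDefault().OpenPin(13);
        led.SetDriveMode(GpioPinDriveMode.Output);

        while (true)
        {
            led.Write(GpioPinValue.High);
            Thread.Sleep(500);
            led.Write(GpioPinValue.Low);
            Thread.Sleep(500);
        }
    }
}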

But It’s 2020 Isn’t Everything JavaScript Now?

Alright fine. If you want a slightly worse debugging experience, but slightly wider array of supported hardware, there’s another project, Espruino.

Somehow, while their API documentation is excellent, it’s a little obtuse to find information about running Espruino on the ESP32 – but they both work and are community supported.

The process of flashing is slightly more obtuse than in .NET land, but let’s skip to the good bit

  1. Make sure you have a version of python installed and in your path

  2. At a command prompt or terminal, using pip (which is installed with python), run the command

        pip install esptool
    
  3. Download the latest binaries for the board from here or for subsequent versions, via the download page

    You need all three files for your first flash.

  4. From a command prompt, in the same directory as your downloaded files

        esptool.py --chip esp32 --port /dev/ttyUSB0 --baud 921600 --after hard_reset write_flash -z --flash_mode dio --flash_freq 40m --flash_size detect 0x1000 bootloader.bin 0x8000 partitions_espruino.bin 0x10000 espruino_esp32.bin
    
  5. Install the “Espruino IDE” from the Google Chrome app store

  6. Under Settings -> Communication set the Baud Rate to 115200

That’s it, you’re ready to go in JavaScript – just click connect and run the provided sample

Espruino IDE
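If you don’t have a sample to hand, a typical Espruino blink loop looks something like this (the pin name D2 is an assumption – use whichever pin your board’s LED is wired to):

var on = false;
setInterval(function () {
  // Toggle the LED once a second
  on = !on;
  digitalWrite(D2, on);
}, 1000);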

What’s next?

Well, nanoFramework is nice, but I really wish I could just use .NET Standard 2.0. Luckily for me, the Meadow project from Wilderness Labs is just that - an implementation of .NET Standard for their own ESP32-derived board. I’ve ordered a couple of them to see how the experience stacks up. Their ESP32 boards look identical to the Huzzah32s, with some extra memory and CPU horsepower, presumably to accommodate the weight of the framework.

They are twice the cost currently at £50 vs the £20 for the Huzzah32, but if they deliver on the flexibility of full .NET, I’d imagine it’ll be the best possible environment for this kind of development if you’re willing to use or learn C#.

In JavaScript land? Well, JavaScript is nice, but TypeScript is better! Espruino doesn’t directly support a lot of ES6+ features, or TypeScript, but with a little bit of magic, its command-line tooling, and Babel, we can use modern JavaScript on those devices now (I’ll leave this to a subsequent post).

C is wonderful, but “even” hobbyist programmers should have good toolchains for their work <3

Music Streaming and the Disappearing Records

12/15/2019 21:20:00

I'm part of the Napster generation.

That means I pretty much stole all the music I grew up listening to. Unrepentantly, unremorsefully, downloaded everything.

Napster might seem like a dim and distant memory in internet time, but it's easy to forget quite how game changing Napster was for both the music scene, and the industry.

For anyone who might be too young to remember, or perhaps not technically savvy enough at the time, Napster, while not the first way you could "borrow" music from the internet, was the first popular Peer-to-Peer file sharing network.

It was ridiculously easy - you installed the software, told it where your collection of MP3s (songs stored as files on your computer) were and everyone else did the same. You were faced with an empty search box and your imagination and it just worked.

You could search for anything you wanted, however esoteric, and for the most part, Napster delivered.

It's hard to explain how much of a revelation this was in 1999 - a year after Google was founded, and a handful of years before search on the internet really worked well at all.

I was 15 in 1999, teaching myself C++ and Perl, and this seemed like magic. Everyone ripped the few CDs they could afford to buy themselves to their computers, and in return we got everything.

When you're 15, the complex relationship between rights owners, writers, musicians, ethics, points-based royalty systems and the music business is just about the furthest thing from your mind. You're hungry for culture, and music, for as long as it's been recorded, has always tracked generations and cultural shifts.

People forget just how hard it was to discover new music, especially non-mainstream music in the 90s, and earlier.

When home cassette recording was becoming popular in the early 80s, the record companies were scared. Home Taping Was Killing Music, they proclaimed - yet the taping and trading scene, populated by kids eager to hear new, increasingly niche and underground music, rebelled. That same taping scene amplified the burgeoning New Wave of British Heavy Metal, internationalising it and making the music accessible to everyone. It's no coincidence that bands like Metallica rose to prominence first through their own participation in the demo-trading scene of the early 80s, and later by weaponising that grass-roots movement into international success.

But the 90s were different. The taping scene had been relegated to live bootlegs, and the onward march of technology proved cassettes to be unreliable. Home taping didn't kill the industry - it actually gave the industry room to innovate on quality and win easily.

And oh boy did the industry win. The 90s were probably the final hurrah for rockstars and massively pushed music outside of mainstream pop. They didn't realise it then, of course, but somebody did. Someone who made his name in the taping scene.

Lars was right.

Which is an awkward admission for a self-admitted former music pirate.

In 2000, Metallica, et al. v. Napster, Inc. became the first court case ever brought against a peer-to-peer file-sharing network. Lars Ulrich, Metallica's drummer, was exceptionally vocal in the media about the widespread damage he saw file sharing causing to the music industry.

"Napster hijacked our music without asking. They never sought our permission. Our catalogue of music simply became available as free downloads on the Napster system."

Lars was right. Technology has this wonderful and world-changing ability to ask, "can we do this thing?", but it rarely stops to see if it should. Metallica won.

Collectively, we, the youth, demonised them. Greedy rich millionaires pulling up the ladder behind them. Metallica inadvertently fuelled the boom in DRM and various rights management technologies of the late 90s and early 2000s, but the effects of the Napster lawsuit are still felt today.

While they thought they were fighting for their creative agency, they really were fighting for control. What the Metallica suit did was push file sharing underground into a series of different sharing platforms, which were more difficult to regulate, harder to track, and more resilient. They ironically made file sharing more sophisticated.

Lars today understands that the fans – the youth of the day – thought Metallica were fighting them, rather than the file-sharing organisations. All his fears did come to fruition, though.

It's a sobering admission to be on the wrong side of the argument twenty years later.

The Long Tail of File Sharing

But what if file sharing was used for good?

The file-sharing epidemic of Napster's launch wasn't the start of file-sharing, but rather the end destination of an entirely different scene, completely distinct from the tape trading of the 80s.

With its origin in 80s hacker culture, and continued survival on pre-World Wide Web protocols like usenet (a distributed message board system that predates web pages) and IRC (a decentralised chat protocol that was extended to support file transfers) - the digital music trading scene of the late 90s was part of the Warez scene - often just called "the scene" to people involved.

The scene is a closed community of ripping groups specialising in ripping (converting media to digital copies) and distributing copyrighted material, complete with its own rules and regulations about getting access to material - often before release. The scene doesn't really care too much about the material it distributes, though access to pre-release games, movies and music is absolutely a motivating factor. In many cases, scene release groups formed around specific types of content, cracking games, acquiring pre-release music, and distributing them all through private channels and FTP servers. The rise of Peer-to-Peer technology saw many previously difficult to obtain scene releases leaked out to the general public drawing the attention and ire of the recording industry.

This was exactly the kind of technologically advanced, weaponised piracy that the industry had feared at the rise of cassette-tape duplication - but this time it was viral, hard to stop, and, terrifyingly, more advanced than any of the technology the recording industry was using at the time.

You can't put this kind of genie back in the bottle.

For the better part of a decade, the record industry fought a war of attrition with scene releases, the rise of Napster alternatives like AudioGalaxy, KaZaA, LimeWire and eDonkey (never say we’re bad at naming things in technology again…), and the dedication of an entire generation who believed they were in the moral right, fighting evil megacorporations trying to enforce archaic copyright law.

And the industry fought and fought.

In a ferociously blind moment, the music industry never critically assessed the value proposition of its products, and they never innovated on the formats. CD prices, especially in the late 90s and early 2000s were at a record high, and as the war against scene rippers and street-date breaking leaks intensified, the products that were being sold were subject to increasingly dubious and in some cases dangerous DRM (digital rights management) approaches in a futile attempt to prevent piracy.

The music industry really didn’t stand a chance – the file-sharing scene entrenched, worked on its technology, and was brutally effective. BitTorrent became the tool of choice, and “mass market piracy” calmed back down to smaller communities of enthusiasts around specific genres or niches.

Across the same time window, CD-Rs and home CD burning reached mass-market acceptance and affordability. But for labels? The prices had never really come down. They were used to making a lot of money on CD sales and giving big advances to artists, and as they saw their profits shrink, they struggled. The largest cost in the majority of businesses is always staff - and the scene didn't have to compete with that.

In the UK, high street retail chains like HMV, Our Price, Music Zone and FOPP went into administration, were bought, and entered administration again – relying on cut price DVD sales to keep the doors open (a format that was still slightly impractical for the average user to pirate at the time).

But something more interesting was happening. People were using illegal music sources not just to steal music they knew they wanted, but to discover things they’d never heard of. There were reports that consumers of illegal downloads were actually… spending more money on music?!

While everyone was so caught up on the idea that people were just out to get things for free (which was certainly more the case with other contemporary piracy, like that of games), music – with its particular place in the cultural ecosystem of live performances, merchandise and youth identity – actually saw some uplift, where bands that would never have gotten the attention of a label were suddenly independent darlings.

While the majors were losing, and the millions-of-units pop albums of the time were performing poorly, the underground was thriving, much like that early tape-trading scene. This phenomenon dovetailed with the rise of then-nascent social media platforms like LiveJournal and, later, MySpace, and the idea of the “MySpace Bands” – but what these bands really represented was the grass-roots marketing of local niche scenes to bigger audiences, powered by the levelling effect of technology and the groundwork done, ironically, by software like Napster.

Did Napster accidentally “save music”?

A whole generation of people grew up stealing music and being totally OK with exploring music they would never otherwise have listened to, precisely because it didn’t cost anything. Sadly, you can’t feed your kids on the willingness of people to explore music free of cost.

There were breakout bands from the “MySpace scene” in the underground – the Arctic Monkeys, Bring Me The Horizon, Asking Alexandria – they made money. People noticed.

Pay What You Want

In October 2007 Radiohead released their seventh album, “In Rainbows”, online, as DRM-free MP3s for any price you cared to pay. They found themselves at a novel point in their career, free from the encumbrance of a traditional record deal and buoyed by the high profile that a previously successful career as a major-label recording act afforded them.

In December of the same year, they released a series of expanded physical formats and the download was removed.

Reaction was mixed. While Radiohead didn’t invent the “pay what you want” business model, they were the largest artist (by several orders of magnitude) to adopt it and bring it into the mainstream. Trent Reznor of Nine Inch Nails was critical of it not going far enough (arguing the low-quality digital release was a promotional tool for more expensive deluxe editions) while scores of artists criticised the move as an exceptionally wealthy band devaluing the worth of music.

Trent Reznor would go on to produce Saul Williams’ third album, “The Inevitable Rise and Liberation of NiggyTardust!”, which Williams and Reznor released as high-quality audio files for “Pay What You Want or $5”. In the two months from its release, Tardust! shifted around 30k paying copies out of ~150k downloads; this compared favourably to Williams’ debut album, which had shifted 30k copies in the previous 3 years.

Reznor would later go on to release two of his own albums, Ghosts I-IV and The Slip, under similar schemes, and licensed under the Creative Commons license, complete with deluxe physical releases.

While it’s clear that the Pay What You Want model worked well for these particular artists, much of the criticism centred on the model being entirely untenable for artists without the prior success of Radiohead or Reznor – the benefit of the privilege of success under a previous regime.

The record industry didn’t react either in kind, or kindly. Prices remained at an all-time high. In this same time window, a somewhat blunt Reznor addressed crowds in Australia during a show to express his dissatisfaction with the value placed on music.

  “I woke up this morning and I forgot where I was for a minute.

  I remembered the last time I was here; I was doing a lot of complaining at the prices 
  of CDs down here. That story got picked up, and got carried all around the world, and
  now my record label all around the world hates me because I yelled at them and called
  them out for being greedy fucking assholes.

  I didn’t get a chance to check, has the price come down at all?

  You know what that means? Steal it. Steal away. Steal and steal and steal some more
  and give it to all your friends. 

  Because one way or another these motherfuckers are going to realise, they’re
  ripping people off and that’s not right.” 

Curt, but the tide was certainly shifting against the high price of physical media at the end of the 00s. Reznor re-started his own label around this time to release his own work.

A Model for The Rest of Us

The 2000s were not kind to peer-to-peer file-sharing services. Apple and Amazon both had DRM-powered music storefronts, along with the also-rans, and the launch of the iPod in 2001 monopolised paid-for digital downloads, normalised DRM to consumers of music, and saved Apple as a company.

These more closed ecosystems gave the record industry exactly what it was looking for: the ability to charge the same amount while enjoying the comparatively low cost of digital distribution. Peer-to-peer had been pushed underground by litigation, returning to the warez-scene subcultures from where it came, thanks to lobby groups and the rise of film piracy pushing for crackdowns on file-sharing, and especially on popular mainstream BitTorrent sites like The Pirate Bay. Several high-profile lawsuits and prison sentences did a good job of scaring people away from “downloading” pirated music. The industry didn’t recover, but it did see hope.

Towards the end of the 2000s, streaming audio services and web-radio started their rise, along with the founding of companies like Spotify that offered a different model for music consumption. Not only was it a model that worked for the record companies because nobody ever really owned any of the music they were streaming, but it worked for people by passing the tolerance test of “seemingly more convenient than the thing it replaced”.

Tired of loading new songs onto your iPod? Spotify!

Don’t even have an iPod or MP3 player anymore because 4G and smartphones were now ubiquitous? Spotify!

Spotify was so convenient, and so useful, it steamrolled across everything that came before it. Its free mode was ad supported, and sure, the labels weren’t making as much money as they were making before, but it sure beat having some kids upload the albums you published to YouTube and benefiting from the ad revenue.

In Spotify, the labels found the same thing the videogame industry found in Valve’s Steam platform – a form of DRM that regular consumers didn’t feel threatened by. That didn’t seem like it infringed on anything. That didn’t feel like a threat, or a punishment. A far cry from the MPAA and the BPI pressuring ISPs to hand over information about their customers so they could litigate against them.

If anything, Spotify is too good. It has competitors in 2019, but none of them are especially credible. Apple Music (which early Pay What You Want proponent Trent Reznor ended up working on for a time), Amazon, and briefly Microsoft all offered competitors – but Spotify, with its reach, discovery algorithms and passive social features, out-stepped the competition. It has a vice-like grip on the streaming industry, much like Apple’s iTunes did on DRM’d digital sales previously.

The nature of music has also shifted significantly in the two decades of the mainstream internet. Lars was right.

We normalised the fact that music wasn’t worth anything, and the cracks are now showing around the whole ecosystem that supports music. Bands don’t break big anymore, music is diverse, interesting, challenging, infinitely broad, and infinitely shallow.

You like Mexican Hip Hop mixed with Deathcore? We got that. How about Indian Rap Metal? Christopher Lee singing power metal? Yes, that exists too.

Low-cost, high-quality recording equipment has made the production of music accessible to an entire generation, at the same time as the global economic downturn saw the closure of music venues across the UK. Never has something so creatively healthy felt so continuously threatened by extinction.

Spotify exacerbates this problem with a steady stream of controversies regarding the allegedly low remuneration of artists streaming on its platform. You can’t really find any solid numbers on what Spotify pays artists, other than the consensus that “it’s not enough”. Songwriters doubly so – the co-writer of the Bon Jovi song Livin’ on a Prayer received $6,000 in royalties from Spotify in 2018, for half a billion streams. Several high-profile artists have pulled catalogues from Spotify, only to later re-emerge (presumably because you cannot fight technological change, but also, because money).

It doesn’t take much to do the back of envelope maths with numbers like that, and they don’t look good. I don’t work in the music business, but I know a lot of people that do, and the stories are all consistent. Living as a touring musician in 2019 is a harder life than it’s ever been before.

No art without patronage.

When you’re young, you just want to have people hear your music, to play shows, to be a rockstar. None of those things pays a mortgage.

Begging to play?

So what have artists and bands done to cope with this existence?

We’ve seen crowdfunding experiments – some successful, some not. Meet and greets, on-tour-lessons, signing sessions, expanded editions, hang-outs, VIP experiences, the works. There’s plenty of good in all those things, but it’s impossible to not identify all of these things for what they are – making artists work extra to somehow justify the value of their art.

Art has value. Value is not cost. These two things should not be accidentally conflated. We started off with “the work” as having value, which was slowly shifted to the live performance. The live performances value was slowly shifted to the merchandise, begetting the slow productisation of art. When we push art into the purely commercial it can’t help but be compromised.

The financial models behind streaming music are compromised. Technology has ascended to occupy the place the record labels once held, with Spotify and other infrastructure companies being the bodies that profit the most.

I run a software company for a living, I speak for, and advocate for technology because I care deeply about it, but there’s certainly something tragically wrong here, even if it’s the simple answer that Spotify subscriptions are too cheap.

What about the Underground?

I’ve only really hinted at my own personal tastes throughout this piece – music is subjective. But the underground, and the small labels, I fear deeply for in this climate.

I fear for the small labels and for the discovery they enable – in niche genres, labels are important, and grass-roots shows are important.

I grew up discovering music from soundtracks, and from the second-hand CD shops in Manchester where you could buy music journalists’ discarded promos for £3-5 a CD. I’d go down on Saturday mornings with friends and we’d buy 4-5 albums a week. We’d take chances. We’d buy things based on the descriptions, the sound-alikes and the artwork.

It was culture, and culture shifts. The internet has been nothing but incredible for music discovery and access, and over the last decade it has replicated and bettered the experiences I had digesting weird metal albums on Saturday afternoons – but in doing so, it’s also completely conflated the concept of ownership with that of access.

It’s no shock to anyone that you don’t own the music you listen to on streaming services, but the more non-mainstream your tastes, the greater the risk you run of losing access to that music entirely.

We’ve seen how easy it is for records to disappear from Spotify at the behest of their owners, but what happens when the owners go bankrupt? When the labels go out of business?

What happens when nobody ever owned the thing they’ve been enjoying, and it vanishes?

The games industry has long been contending with a similar category of problem with the way it treats abandonware (games out of copyright where the authors and publishers no longer exist) and video game emulation.

Games stuck in licensing hell have routinely vanished or become unplayable. We shouldn’t let our own culture erode and disappear.

We’ve slowly killed ownership, we’re slowly killing our DIY scenes by closing live venues, and we’re exposing the music created in our scenes and undergrounds – across every genre that isn’t mainstream – by outsourcing ownership to privately owned organisations that hardly pay the creators of the art we covet. Culture and art should not be kept behind the gates of collectors and inflated prices.

The music business is lucky, but not without its tragedies – the UMG media archive famously burnt down, losing original recordings and masters of some of the most important albums in history.

The British Library, thankfully, cares about this.

The “Sound and Moving Image” archive is vast and more varied than you might imagine – their stated aim is to collect a copy of every commercial release in the UK, of any and all genres. There’s no budget, and they rely on donations from labels and artists, along with private collections. The more esoteric “sounds” are digitised, but for most of the popular recordings, you’ll have to go in person to listen for free.

I fundamentally believe in the value of digital distribution and streaming platforms. They’ve opened the walls of music up, but as a community we need to be better at protecting our culture and music – because private organisations owe us nothing and are not always good actors.

Metallica were right about Napster. Let’s protect music, especially outside of the mainstream.

And go to a show!
