Wednesday, 10 September 2014

Creating RF64 and BWF WAV files with NAudio

There are two extensions to the standard WAV file format which you may sometimes want to make use of. The first is the RF64 extension, which overcomes the inherent limitation that WAV files cannot be larger than 4GB. The second is the Broadcast Wave Format (BWF) which builds on the existing WAV format and specifies various extra chunks containing metadata.

In this post, I’ll explain how you can make a class to create Broadcast Wave Files using NAudio that supports large file sizes using the RF64 extension, and includes the “bext” chunk from the BWF specification.

File Header

First of all, a WAV file starts with the byte sequence ‘RIFF’ and then has a four byte size value, which is the number of bytes following in the entire file. However, for large files, instead of ‘RIFF’, ‘RF64’ is used, and the following four byte integer for the RIFF size is then ignored (it should be set to -1).

Then we have the ‘WAVE’ identifier (another 4 bytes), and following that in a normal WAV file we would usually expect the format chunk (with the ‘fmt ‘ 4 byte identifier). But to support RF64, we add a “JUNK” chunk. This is of size 28 bytes, and initially is all set to zeroes. If the overall size of the entire WAV file grows to over 4GB, then we will turn this “JUNK” chunk into a ‘ds64’ chunk. If the overall file size does not grow beyond 4GB, then the junk chunk is simply left in place, and media players will just ignore it.

The ds64 Chunk

A ds64 chunk consists of three 8-byte integers and a 4-byte integer. These are the RIFF size, which is the size of the entire file minus 8 bytes; the data size, which is the number of bytes of sample data in the ‘data’ chunk; and the sample count, which is the number of samples. The sample count is really optional, as it corresponds to the sample count found in the ‘fact’ chunk of a standard WAV file. That chunk is usually only present for non-PCM audio formats, since it is trivial to calculate the sample count for PCM from the byte count. Finally, a ds64 chunk can have a table containing the sizes of any other huge chunks, but usually this is unused, since it is likely only the ‘data’ chunk that will grow larger than 4GB. So the final four bytes of a typical ds64 chunk are zeroes, indicating no table entries.
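In code, reserving the JUNK chunk up front and later rewriting it in place as a ds64 chunk might look something like this (a sketch using BinaryWriter; the helper method names are illustrative, not part of NAudio):

```csharp
using System.IO;
using System.Text;

// Reserve a 28 byte JUNK chunk immediately after the 'WAVE' identifier.
static void WriteJunkChunk(BinaryWriter writer)
{
    writer.Write(Encoding.ASCII.GetBytes("JUNK"));
    writer.Write(28);            // chunk size
    writer.Write(new byte[28]);  // all zeroes for now
}

// If the file grows beyond 4GB, seek back here and overwrite JUNK with ds64.
static void WriteDs64Chunk(BinaryWriter writer, long riffSize, long dataSize, long sampleCount)
{
    writer.Write(Encoding.ASCII.GetBytes("ds64"));
    writer.Write(28);          // chunk size: 3 x 8 byte integers + 4 byte table count
    writer.Write(riffSize);    // 8 bytes: entire file size minus 8
    writer.Write(dataSize);    // 8 bytes: bytes of sample data in the 'data' chunk
    writer.Write(sampleCount); // 8 bytes: sample count (as found in a 'fact' chunk)
    writer.Write(0);           // 4 bytes: no table entries
}
```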

The bext chunk

Following the ds64 chunk, we have the bext chunk from the BWF specification. This has space for a textual description of the file as well as timestamps, and newer versions of the bext chunk also allow you to specify various bits of loudness information. The algorithms for calculating this are hard to track down, so I tend to use version 1 of bext and ignore them.

The fmt chunk

Then we have the standard ‘fmt ‘ chunk, which works just the same way it does in a standard WAV file, containing a WAVEFORMATEX structure with information about sample rate, bit depth, encoding and number of channels. RF64 files are almost always either PCM or IEEE floating point samples, since it is only with uncompressed audio that you typically end up creating files larger than 4GB.

The data chunk

Finally, we have the ‘data’ chunk, containing the actual audio data. Again this is used in exactly the same way as it is in a regular WAV file, except that the chunk data length only needs to be filled in if the file is less than 4GB. If it is an RF64 file, the four byte length for the data chunk is ignored (set it to –1), and the size from the ds64 chunk is used instead.

The code

Here’s a simple implementation of a BWF writer class that creates BWF files with a simple bext chunk and turns them into RF64 files if necessary. I plan to clean this code up a little and import it into NAudio in the future (either as its own class, or by upgrading WaveFileWriter to support RF64 and BWF – let me know your preference in the comments).
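A condensed sketch of the approach, based on the chunk layout described above (the class and member names are illustrative, and the bext and fmt chunk writing is elided for brevity):

```csharp
using System;
using System.IO;
using System.Text;
using NAudio.Wave;

public class BwfWriter : IDisposable
{
    private readonly BinaryWriter writer;
    private readonly WaveFormat format;
    private readonly long dataChunkSizePosition;
    private long dataLength;

    public BwfWriter(string fileName, WaveFormat format)
    {
        this.format = format;
        writer = new BinaryWriter(File.Create(fileName));
        writer.Write(Encoding.ASCII.GetBytes("RIFF")); // may become RF64 on dispose
        writer.Write(0);                               // placeholder RIFF size
        writer.Write(Encoding.ASCII.GetBytes("WAVE"));
        writer.Write(Encoding.ASCII.GetBytes("JUNK")); // may become ds64 on dispose
        writer.Write(28);
        writer.Write(new byte[28]);
        // ... write bext chunk and 'fmt ' chunk here ...
        writer.Write(Encoding.ASCII.GetBytes("data"));
        dataChunkSizePosition = writer.BaseStream.Position;
        writer.Write(-1);                              // placeholder data chunk size
    }

    public void WriteData(byte[] buffer, int offset, int count)
    {
        writer.Write(buffer, offset, count);
        dataLength += count;
    }

    public void Dispose()
    {
        long riffSize = writer.BaseStream.Length - 8;
        if (riffSize <= uint.MaxValue)
        {
            // small enough for a standard WAV: fill in the real sizes
            writer.BaseStream.Position = 4;
            writer.Write((uint)riffSize);
            writer.BaseStream.Position = dataChunkSizePosition;
            writer.Write((uint)dataLength);
        }
        else
        {
            // too big: become an RF64 file and turn JUNK into ds64
            writer.BaseStream.Position = 0;
            writer.Write(Encoding.ASCII.GetBytes("RF64"));
            writer.Write(-1); // RIFF size is ignored in RF64
            writer.BaseStream.Position = 12; // JUNK chunk starts right after 'WAVE'
            writer.Write(Encoding.ASCII.GetBytes("ds64"));
            writer.Write(28);
            writer.Write(riffSize);
            writer.Write(dataLength);
            writer.Write(dataLength / format.BlockAlign); // sample count
            writer.Write(0); // no table entries
        }
        writer.Dispose();
    }
}
```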

Friday, 8 August 2014

Brush up on your Languages with Pluralsight

Over the last year, not only have I created a number of courses for Pluralsight, I’ve watched a lot too. Most of the time I’m not watching to learn a brand new technology, but as a refresher for something I’ve already used a bit. Often, in just an hour or two (on 1.3x playback), you can watch a whole course and pick up loads of great tips.

I’ve found it a particularly effective way to brush up on my skills in a few programming languages that I’m semi-proficient in, but not completely “fluent” in. So here’s a few programming language related courses that I can recommend:

First, last year I was glad I watched Structuring JavaScript by Dan Wahlin, as I had been hearing lots of people talking about the “revealing module pattern” and “revealing prototype pattern” but hadn’t yet properly learned what those patterns were. He explains them simply and clearly.

Another great course is Python Fundamentals by Austin Bingham and Robert Smallshire. It’s been a number of years since I did any serious Python development, so my skills had grown a bit rusty. This superbly presented course is a brilliant introduction to Python, and filled in a couple of gaps in my knowledge. They’ve got a follow-up course out as well which is undoubtedly also worth watching.

Third, several times over the years I’ve tried and failed to get to grips with PowerShell. The Everyday PowerShell for Developers course by Jim Christopher was exactly what I needed as it shows how to do the sorts of things developers will be interested in doing with PowerShell.

And finally, the F# section of the Pluralsight library is still small, but growing fast, and one fascinating course was Mark Seemann’s Functional Architecture with F#. It’s fast-moving but gives fascinating insights into how you could architect a typical line of business application in a more functional way.

Anyway, that’s enough recommendations for now. I have several other courses I want to highlight, so maybe this will become a regular blog feature. Let me know in the comments if there are any must-see courses you’ve come across.

Wednesday, 30 July 2014

Going Beyond the Limits of Windows Forms

One of the things I explore in my Windows Forms Best Practices course on Pluralsight (soon to be released) is how you can go beyond the limitations of the Windows Forms platform, and integrate newer technologies into your legacy applications. Here are three ways you can extend the capabilities of your Windows Forms applications.

1. Use Platform Invoke to harness the full power of the Windows Operating System

Because Windows Forms has not been significantly updated for several years now, many of the newer capabilities of Windows are not directly supported. For example, Windows Forms controls do not provide you with touch events representing your gestures on a touch screen.

However, this does not mean you are limited to using only what the System.Windows.Forms namespace has to offer. For example, with a bit of Platform Invoke, you can register to receive WM_TOUCH messages in your application with a call into the RegisterTouchWindow API in your Form load event. Then you can override the WndProc method on your form to detect the WM_TOUCH messages, and use another Windows API, GetTouchInputInfo, to interpret the message parameters.

This may sound a little complicated to you, but there is a great demo application that is part of the Windows 7 SDK which you can read about here. Of course it would be much nicer if Windows Forms had built-in touch screen support, but don’t let the fact that it doesn’t stop you from taking advantage of operating system features like this. Anything the Windows API can do, your Windows Forms application can do, thanks to P/Invoke.
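To give a flavour of what this involves, here is a sketch of the touch registration described above (the P/Invoke signature is simplified, and full decoding of the touch data via GetTouchInputInfo is omitted):

```csharp
using System;
using System.Runtime.InteropServices;
using System.Windows.Forms;

public class TouchForm : Form
{
    private const int WM_TOUCH = 0x0240;

    [DllImport("user32")]
    private static extern bool RegisterTouchWindow(IntPtr hWnd, uint flags);

    protected override void OnLoad(EventArgs e)
    {
        base.OnLoad(e);
        // ask Windows to send this form WM_TOUCH messages
        RegisterTouchWindow(this.Handle, 0);
    }

    protected override void WndProc(ref Message m)
    {
        if (m.Msg == WM_TOUCH)
        {
            // call GetTouchInputInfo here to decode m.WParam / m.LParam
        }
        base.WndProc(ref m);
    }
}
```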

2. Use the WebBrowser control to host Web Content

Many existing Windows Forms line of business applications are being gradually replaced over time with newer HTML 5 applications, allowing them to be accessed from a much broader set of devices. But the transition can be a slow process, and sometimes the time required to rewrite the entire application is prohibitive. However, with the WebBrowser Windows Forms control, you can host any web content you like, meaning that you could incrementally migrate certain parts of your application to web pages.

The WebBrowser control is essentially Internet Explorer hosted inside a Windows Forms control, and will use whatever version of IE you have installed. Frustratingly, it defaults to IE 7 compatibility mode, and there isn’t an easy programmatic way to change that. However, if you are in control of the HTML that is rendered, or are able to write a registry key, then you can use one of the two techniques described here to fix it.
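If writing a registry key is an option, the fix uses the FEATURE_BROWSER_EMULATION feature control key. A sketch of how that could be done at startup (per-user, so no admin rights needed; the value 9000 requests IE9 standards mode, assuming IE9 or later is installed):

```csharp
using System.IO;
using System.Windows.Forms;
using Microsoft.Win32;

// opt this executable out of IE7 compatibility mode
using (var key = Registry.CurrentUser.CreateSubKey(
    @"Software\Microsoft\Internet Explorer\Main\FeatureControl\FEATURE_BROWSER_EMULATION"))
{
    var exeName = Path.GetFileName(Application.ExecutablePath);
    key.SetValue(exeName, 9000, RegistryValueKind.DWord);
}
```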

The WebBrowser control actually has a few cool properties, such as the ability to let you explore and manipulate the DOM as an HtmlDocument using the WebBrowser’s Document property. You can even provide a .NET object that can be manipulated using JavaScript with the ObjectForScripting property. So two-way communication, from your .NET code to the webpage and vice versa, is possible.
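A brief sketch of both directions (ScriptBridge and showMessage are illustrative names for this example):

```csharp
using System.Runtime.InteropServices;

// JavaScript in the hosted page can call back into .NET via window.external
[ComVisible(true)]
public class ScriptBridge
{
    public void SaveClicked(string data)
    {
        // handle the call from JavaScript
    }
}

// hooking it up:
// webBrowser1.ObjectForScripting = new ScriptBridge();
// in the page's JavaScript:  window.external.SaveClicked('hello');

// and calling from .NET into the page:
// webBrowser1.Document.InvokeScript("showMessage", new object[] { "hello" });
```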

Obviously composing your application partly out of Windows Forms controls and partly out of hosted web pages won’t be a completely seamless user experience, but it may provide a way for you to incrementally retire various parts of a large legacy Windows Forms application and replace them with HTML 5.

3. Use the ElementHost control to host WPF content

Alternatively, you may wish to move away from Windows Forms towards WPF and XAML. Not all applications are suited to being web applications, and the WPF platform offers many advantages in terms of rendering capabilities that Windows Forms developers may wish to take advantage of. It also provides a possible route towards supporting other XAML based platforms such as Windows Store apps or Windows Phone apps.

The ElementHost control allows you to host any WPF control inside a Windows Forms application. It’s extremely simple to use, and with the exception of a few quirks here and there, works very well. If you made use of a Model View Presenter pattern, where your Views are entirely passive, then migrating your application to WPF from Windows Forms is not actually as big a task as you might imagine. You simply need to re-implement your view interfaces with WPF components instead of Windows Forms. In my Pluralsight course I show how easy this is, simply swapping out part of the interface for the demo application with a WPF replacement.
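Using ElementHost really is as simple as it sounds. A minimal sketch (MyWpfView is an illustrative name for any WPF control, and myPanel is an existing Windows Forms container):

```csharp
using System.Windows.Forms;
using System.Windows.Forms.Integration;

// host a WPF control inside a Windows Forms container
var host = new ElementHost { Dock = DockStyle.Fill };
host.Child = new MyWpfView(); // any WPF UIElement
myPanel.Controls.Add(host);
```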

So if you are working on a Windows Forms application and find yourself frustrated by the limitations of the platform, remember that you aren’t limited to what Windows Forms itself has to offer. Consider one of these three techniques to push the boundaries of what is possible, and start to migrate towards more modern UI development technologies at the same time.

Friday, 18 July 2014

10 Ways to Create Maintainable and Testable Windows Forms Applications

Most Windows Forms applications I come across have non-existent or extremely low unit test coverage. And they are also typically very hard to maintain, with hundreds if not thousands of lines of code in the code behind for the various Form classes in the project. But it doesn’t have to be this way. Just because Windows Forms is a “legacy” technology doesn’t mean you are doomed to create an unmaintainable mess. Here’s ten tips for creating maintainable and testable Windows Forms applications. I’ll be covering many of these in my forthcoming Pluralsight course on Windows Forms best practices.

1. Segregate your user interface with User Controls

First, avoid putting too many controls onto a single form. Usually the main form of your application can be broken up into logical areas (which we could call “views”). You will make life much easier for yourself if you put the controls for each of these areas into their own container, and in Windows Forms, the easiest way to do this is with a User Control. So if you have an explorer style application, with a tree view on the left and details view on the right, then put the TreeView into its own UserControl, and create a UserControl for each of the possible right-hand views. Likewise if you have a tab control, create a separate UserControl for each page in the tab control.

Doing this not only keeps your classes from getting unmanageably large, but it also makes tasks like setting up resizing and tab order much more straightforward. It also allows you to easily disable whole sections of your user interface in one go where necessary. You’ll also find when you break up your user interface into smaller UserControls containing logically grouped controls, that it becomes much easier to redesign the UI layout of your application.

2. Keep non UI code out of code behind

Invariably in Windows Forms applications you’ll find code accessing the network, database or file system in the code behind of a form. This is a serious violation of the “Single Responsibility Principle”. The focus of your Form or UserControl class should simply be the user interface. So when you detect non UI related code exists in your code behind, refactor it out into a class with a single responsibility. So you might create a PreferencesManager class for example, or a class that is responsible for calls to a particular web service. These classes can then be injected as dependencies into your UI components (although this is just the first step – we can take this idea further as we’ll see shortly). 

3. Create passive views with interfaces

One particularly helpful technique is to make each of the forms and user controls you create implement a view interface. This interface should contain properties that allow the state and content of the controls in the view to be set and retrieved. It may also include events to report back user interactions such as clicking on a button or moving a slider. The goal is that the implementation of these view interfaces is completely passive. Ideally there should be no conditional logic whatsoever in the code behind of your Forms and UserControls.

Here’s an example of a view interface for a new user entry view. The implementation of this view should be trivial. Any business logic doesn’t belong in the code behind (and we’ll discuss where it does belong next).

interface INewUserView
{
    string FirstName { get; set; }
    string LastName { get; set; }
    event EventHandler SaveClicked;
}

By ensuring that your view implementations are as simple as possible, you will maximise your chances of being able to migrate to an alternative UI framework such as WPF, since the only thing you will need to do is recreate the views in the new technology. All your other code can be reused.

4. Use presenters to control the views

So if you have made all your views passive and implement interfaces, you need something that will implement the business logic of your application and control the views. And we can call these “presenter” classes. This is the pattern known as “Model View Presenter” or MVP.

In Model View Presenter your views are completely passive, and the presenter instructs the view what data to display. The view is also allowed to communicate back to the presenter. In my example above it does so by raising an event, but often with this pattern your view is allowed to make direct calls into the presenter.

What is absolutely not allowed is for the view to start directly manipulating the model (which includes your business entities, database layer etc). If you follow the MVP pattern, all the business logic in your application can be easily tested because it resides inside presenter or other non-UI classes.
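A sketch of a presenter for the INewUserView shown earlier (IUserRepository and User are illustrative model types, not part of any particular framework):

```csharp
using System;

public class NewUserPresenter
{
    private readonly INewUserView view;
    private readonly IUserRepository repository;

    public NewUserPresenter(INewUserView view, IUserRepository repository)
    {
        this.view = view;
        this.repository = repository;
        view.SaveClicked += OnSaveClicked;
    }

    private void OnSaveClicked(object sender, EventArgs e)
    {
        // business logic lives here, not in the code behind,
        // so it can be unit tested against a fake view
        repository.Save(new User(view.FirstName, view.LastName));
    }
}
```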

5. Create services for error reporting

Often your presenter classes will need to display error messages. But don’t just put a MessageBox.Show into a non-UI class. You’ll make that method impossible to unit test. Instead create a service (say IErrorDisplayService) that your presenter can call into whenever it needs to report a problem. This keeps your presenters unit testable, and also provides flexibility to change the way errors are presented to the user in the future.
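Such a service could be as simple as this sketch (the names are illustrative); the real implementation wraps MessageBox, while unit tests substitute a fake:

```csharp
using System.Windows.Forms;

public interface IErrorDisplayService
{
    void ReportError(string message);
}

public class MessageBoxErrorDisplayService : IErrorDisplayService
{
    public void ReportError(string message)
    {
        MessageBox.Show(message, "Error", MessageBoxButtons.OK, MessageBoxIcon.Error);
    }
}
```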

6. Use the Command pattern

If your application contains a toolbar with a large number of buttons for the user to click, the Command pattern may be a very good fit. The command pattern dictates that you create a class for each command. This has the great benefit of segregating your code into small classes each with a single responsibility. It also allows you to centralise everything to do with a particular command. Should the command be enabled? Should it be visible? What is its tooltip and shortcut key? Does it require a specific privilege or license to execute? How should we handle exceptions thrown when the command runs?

The command pattern allows you to standardise how you deal with each of these concerns that are common to all the commands in your application. Your command object will have an Execute method that actually contains the code to perform the required behaviour for that command. In many cases this will involve calling into other objects and business services, so you will need to inject those as dependencies into your command object. It should also be possible (and straightforward) to unit test your command objects themselves.
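A sketch of what such a command might look like (the interface and SaveUserCommand are illustrative; a real application would add whichever of the concerns above it needs):

```csharp
public interface ICommand
{
    string ToolTip { get; }
    bool IsEnabled { get; }
    void Execute();
}

public class SaveUserCommand : ICommand
{
    private readonly IUserRepository repository; // injected dependency

    public SaveUserCommand(IUserRepository repository)
    {
        this.repository = repository;
    }

    public string ToolTip { get { return "Save the current user"; } }

    // centralised place to check privileges, licensing etc.
    public bool IsEnabled { get { return true; } }

    public void Execute()
    {
        // the actual behaviour, calling into business services
    }
}
```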

7. Use an IoC container to manage dependencies

If you are using Presenter classes and Command classes, then you will probably find that the number of classes they depend on grows over time. This is where an Inversion of Control container such as Unity or StructureMap can really help you out. They allow you to easily construct your views and presenters no matter how many levels of dependencies they have.

8. Use the Event Aggregator pattern

Another design pattern that can be really useful in a Windows Forms application is the event aggregator pattern (sometimes also called a “Messenger” or an “event bus”). This is a pattern where the raiser of an event and the handler of an event do not need to be coupled to each other at all. When an “event” happens in your code that needs to be handled elsewhere, simply post a message to the event aggregator. Then the code that needs to respond to that message can subscribe and handle it, without needing to worry about who is raising it.

An example might be that you send a “help requested” message with details of where the user currently is in the UI. Another service then handles that message and ensures the correct page in the help documentation is launched in a web browser. Another example is navigation. If your application has many screens, a “navigate” message can be posted to the event aggregator, and then a subscriber can respond to that by ensuring that the new screen is displayed in the user interface.

As well as radically decoupling publishers and subscribers of events, the event aggregator also has the great benefit of creating code that is extremely easy to unit test.
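For illustration, here is about the smallest event aggregator that demonstrates the idea (real libraries add thread safety, weak references and more):

```csharp
using System;
using System.Collections.Generic;

public class EventAggregator
{
    private readonly Dictionary<Type, List<Delegate>> subscribers
        = new Dictionary<Type, List<Delegate>>();

    public void Subscribe<TMessage>(Action<TMessage> handler)
    {
        List<Delegate> handlers;
        if (!subscribers.TryGetValue(typeof(TMessage), out handlers))
        {
            handlers = new List<Delegate>();
            subscribers[typeof(TMessage)] = handlers;
        }
        handlers.Add(handler);
    }

    public void Publish<TMessage>(TMessage message)
    {
        List<Delegate> handlers;
        if (subscribers.TryGetValue(typeof(TMessage), out handlers))
        {
            foreach (Action<TMessage> handler in handlers)
            {
                handler(message);
            }
        }
    }
}

// usage: publisher and subscriber never reference each other
// events.Subscribe<HelpRequested>(m => LaunchHelpPage(m.Topic));
// events.Publish(new HelpRequested("Preferences"));
```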

9. Use Async and Await for threading

If you are targeting .NET 4 and above and using Visual Studio 2012 or newer, don’t forget that you can make use of the new async and await keywords, which will greatly simplify any threading code in your application, and automatically handle the marshalling back onto the UI thread when the background tasks have completed. They also hugely simplify handling exceptions across multiple chained background tasks. They are a great fit for Windows Forms applications and well worth checking out if you haven’t already.

10. Don’t leave it too late

It is possible to retrofit all the patterns and techniques I described above to an existing Windows Forms application, but I can tell you from bitter experience that it can be a lot of work, especially once the code behind for forms gets into the thousands of lines of code. If you start off building your applications with patterns like MVP, event aggregators and the command pattern, you’ll find them a lot less painful to maintain as they grow bigger. And you’ll also be able to unit test all your business logic, which is critical for ongoing maintainability.

Thursday, 10 July 2014

Six ways to initiate tasks on another thread in .NET

Over the years, Microsoft have provided us with multiple different ways to kick off tasks on a background thread in .NET. It can be rather bewildering to decide which one you ought to use. So here’s my very quick guide to the choices available. We’ll start with the ones that have been around the longest, and move on to the newer options.

Asynchronous Delegates

Since the beginning of .NET you have been able to take any delegate and call BeginInvoke on it to run that method asynchronously. If you don’t care about when it ends, it’s actually quite simple to do. Here we’ll call a method that takes a string:

Action<string> d = BackgroundTask;
d.BeginInvoke("BeginInvoke", null, null);

The BeginInvoke method also takes a callback parameter and some optional state. This allows us to get notification when the background task has completed. We call EndInvoke to get any return value and also to catch any exception thrown in our function. BeginInvoke also returns an IAsyncResult allowing us to check or wait for completion.
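Here is a sketch of using the callback with EndInvoke to get completion notification and pick up any exception (BackgroundTask is the same illustrative method as above):

```csharp
using System;

Action<string> d = BackgroundTask;
// pass the delegate itself as the state so the callback can call EndInvoke
d.BeginInvoke("BeginInvoke", ar =>
{
    var del = (Action<string>)ar.AsyncState;
    try
    {
        del.EndInvoke(ar); // rethrows any exception from BackgroundTask
    }
    catch (Exception ex)
    {
        Console.WriteLine("Task failed: " + ex.Message);
    }
}, d);
```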

This model is called the Asynchronous Programming Model (APM). It is quite powerful, but is a fairly cumbersome programming model, and not particularly great if you want to chain asynchronous methods together as you end up with convoluted flow control over lots of callbacks, and find yourself needing to pass state around in awkward ways.

Thread Class

Another option that has been in .NET since the beginning is the Thread class. You can create a new Thread object, set up various properties such as the method to execute, thread name, and priority, and then start the thread.

var t = new Thread(BackgroundTask);
t.Name = "My Thread";
t.Priority = ThreadPriority.AboveNormal;
t.Start("Thread");

This may seem like the obvious choice if you need to run something on another thread, but it is actually overkill for most scenarios.

It’s better to let .NET manage the creation of threads rather than spinning them up yourself. I only tend to use this approach if I need a dedicated thread for a single task that is running for the lifetime of my application.

ThreadPool

The ThreadPool was introduced fairly early on in .NET (v1.1 I think) and provided an extremely simple way to request that your method be run on a thread from the thread pool. You just call QueueUserWorkItem and pass in your method and state. If there are free threads it will start immediately; otherwise it will be queued up. The disadvantage of this approach compared to the previous two is that it provides no mechanism for notification of when your task has finished. It’s up to you to report completion and catch exceptions.

ThreadPool.QueueUserWorkItem(BackgroundTask, "ThreadPool");

This technique has now really been superseded by the Task Parallel Library (see below), which does everything the ThreadPool class can do and much more. If you’re still using it, it’s time to learn TPL.

BackgroundWorker Component

The BackgroundWorker component was introduced in .NET 2 and is designed to make it really easy for developers to run a task on a background thread. It covers all the basics of reporting progress, cancellation, catching exceptions, and getting you back onto the UI thread so you can update the user interface.

You put the code for your background thread into the DoWork event handler of the background worker:

backgroundWorker1.DoWork += BackgroundWorker1OnDoWork;

and within that function you are able to report progress:

backgroundWorker1.ReportProgress(30, progressMessage);

You can also check if the user has requested cancellation with the BackgroundWorker.CancellationPending flag.

The BackgroundWorker provides a RunWorkerCompleted event that you can subscribe to and get hold of the results of the background task. It makes it easy to determine whether you finished successfully, were cancelled, or whether an exception was thrown. The RunWorkerCompleted and ProgressChanged events will both fire on the UI thread, eliminating any need for you to get back onto that thread yourself.
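A sketch of a completion handler covering the three outcomes (this runs on the UI thread, so it can update controls directly):

```csharp
backgroundWorker1.RunWorkerCompleted += (sender, args) =>
{
    if (args.Cancelled)
        label1.Text = "Cancelled";
    else if (args.Error != null)
        label1.Text = args.Error.Message; // the exception thrown in DoWork
    else
        label1.Text = "Done: " + args.Result;
};
```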

BackgroundWorker is a great choice if you are using WinForms or WPF. My one criticism of it is that it tends to encourage people to put business logic into the code behind of their UI. But you don't have to use BackgroundWorker like that. You can create one in your ViewModel if you are doing MVVM, or you could make your DoWork event handler simply call directly into a business object.

Task Parallel Library (TPL)

The Task Parallel Library was introduced in .NET 4 as the new preferred way to initiate background tasks. It is a powerful model, supporting chaining tasks together, executing them in parallel, waiting on one or many tasks to complete, passing cancellation tokens around, and even controlling what thread they will run on.

In its simplest form, you can kick off a background task with TPL in much the same way that you kicked off a thread with the ThreadPool.

Task.Run(() => BackgroundTask("TPL"));

Unlike the ThreadPool though, we get back a Task object, allowing you to wait for completion, or specify another task to be run when this one completes. The TPL is extremely powerful, but there is a lot to learn, so make sure you check out the resources below for learning more.
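For example, chaining a continuation onto a task is a one-liner (BackgroundTask is the same illustrative method used throughout this post):

```csharp
using System;
using System.Threading.Tasks;

var task = Task.Run(() => BackgroundTask("TPL"))
    .ContinueWith(t => Console.WriteLine("first task finished"));
task.Wait(); // block until both have completed (avoid doing this on a UI thread)
```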

C# 5 async await

The async and await keywords were introduced with C# 5 and .NET 4.5 and allow you to write synchronous looking code that actually runs asynchronously. It’s not actually an alternative to the TPL; it augments it and provides an easier programming model. You can call await on any method that returns a task, or if you need to call an existing synchronous method you can do that by using the TPL to turn it into a task:

await Task.Run(() => xdoc.Load(""));

The advantages are that this produces much easier to read code, and, as another really nice touch, when you await a method from the UI thread, you will be back on the UI thread again when control resumes:

await Task.Run(() => xdoc.Load(""));
label1.Text = "Done"; // we’re back on the UI thread!

This model also allows you to put one try…catch block around code that is running on multiple threads, which is not possible with the other models discussed. It is also now possible to use async and await with .NET 4 using the BCL Async Nuget Package.

Here’s a slightly longer example showing a button click event handler in a Windows Forms application that calls a couple of awaitable tasks, catches exceptions whatever thread it’s on, and updates the GUI along the way:

private async void OnButtonAsyncAwaitClick(object sender, EventArgs e)
{
    const string state = "Async Await";
    this.Cursor = Cursors.WaitCursor;
    try
    {
        label1.Text = String.Format("{0} Started", state);
        await AwaitableBackgroundTask(state);
        label1.Text = "About to load XML";
        var xdoc = new XmlDocument();
        await Task.Run(() => xdoc.Load(""));
        label1.Text = String.Format("{0} Done {1}", state, xdoc.FirstChild.Name);
    }
    catch (Exception ex)
    {
        label1.Text = ex.Message;
    }
    finally
    {
        this.Cursor = Cursors.Default;
    }
}

Learn More

Obviously I can’t fully explain how to use each of these approaches in a short blog post like this, but many of these programming models are covered in depth by some of my fellow Pluralsight Authors. I’d strongly recommend picking at least one of these technologies to master (TPL with async await would be my suggestion).

Thread and ThreadPool:

Asynchronous Delegates:


Task Parallel Library:

Async and Await

Wednesday, 2 July 2014

Input Driven Resampling with NAudio using ACM

NAudio provides a number of different mechanisms for resampling audio, but the resampler classes all assume you are doing “output driven” resampling. This is where you know how much resampled audio you want, and you “pull” the necessary amount of input audio through. This is fine for playing back audio, or for resampling existing audio files, but there are cases when you want to do “input driven” resampling. For example, if you are receiving blocks of audio from the network or soundcard, and want to resample just that block of audio before sending it on somewhere else, then input driven is what you want.

All of NAudio’s resamplers (ACM, Media Foundation and WDL) can be used in input driven mode, but in this post I’ll focus on using AcmStream, since it is the most widely available, and doesn’t require floating point samples.

Step 1 – Open an ACM Resampler Stream

We start by opening the ACM resampler, specifying the desired input and output formats. Here, we open an ACM stream that can convert mono 16 bit 44.1kHz to mono 16 bit 8kHz. (Note that the ACM resampler won’t change bit depths or channel counts at the same time – so the WaveFormats should differ by sample rate only).

var resampleStream = new AcmStream(new WaveFormat(44100, 16, 1), new WaveFormat(8000, 16, 1));

Note that you shouldn’t recreate the AcmStream for each buffer you receive. Codecs like to maintain internal state, so you’ll get the best results if you open it once, and then keep passing audio through it.

Step 2 – Copy Audio into the Source Buffer

The ACM stream already has a source and destination buffer allocated that it will use for all conversions. So we need to copy audio into the source buffer, convert it, then read it out of the destination buffer. The source buffer is a limited size, so if you have more input data than can fit into the source buffer, you’ll need to perform this conversion in multiple stages. Here we’ll get some input data as a byte array (e.g. received from the network or a soundcard), and copy it into the source buffer:

byte[] source = GetInputData();
Buffer.BlockCopy(source, 0, resampleStream.SourceBuffer, 0, source.Length);

Step 3 – Resample the Source Buffer with the Convert function

To convert the audio in the source buffer, call the Convert function. This will return the number of bytes available in the destination buffer for you to read out. It will also tell you how many source bytes were used. This should be all of them. If they haven’t been used, it probably means you are trying to convert too many at once. Pass the audio through in smaller block sizes, or copy the “leftovers” out and put them through again next time (this is how NAudio’s WaveFormatConversionStream works internally). Here’s the code to convert what you’ve copied into the source buffer:

int sourceBytesConverted;
var convertedBytes = resampleStream.Convert(source.Length, out sourceBytesConverted);
if (sourceBytesConverted != source.Length)
{
    Console.WriteLine("We didn't convert everything: {0} bytes in, {1} bytes converted",
        source.Length, sourceBytesConverted);
}

Step 4 – Read the Resampled Audio out of the Destination Buffer

Now we can copy our converted audio out of the ACM stream’s destination buffer:

var converted = new byte[convertedBytes];
Buffer.BlockCopy(resampleStream.DestBuffer, 0, converted, 0, convertedBytes);

And that’s all you have to do (apart from remembering to Dispose your AcmStream when you’re done with it). It is a little more convoluted than output driven resampling, but not too difficult once you know what you’re doing.
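Putting the four steps together, a sketch of a helper that resamples one incoming block, chunking it to respect the source buffer’s size limit (the method name is illustrative):

```csharp
using System;
using System.IO;
using NAudio.Wave.Compression;

static byte[] Resample(AcmStream resampleStream, byte[] source, int bytesRead)
{
    var output = new MemoryStream();
    int pos = 0;
    while (pos < bytesRead)
    {
        // never copy more than fits in the ACM source buffer
        int block = Math.Min(resampleStream.SourceBuffer.Length, bytesRead - pos);
        Buffer.BlockCopy(source, pos, resampleStream.SourceBuffer, 0, block);
        int sourceBytesConverted;
        int convertedBytes = resampleStream.Convert(block, out sourceBytesConverted);
        output.Write(resampleStream.DestBuffer, 0, convertedBytes);
        pos += sourceBytesConverted;
        if (sourceBytesConverted == 0) break; // avoid an infinite loop
    }
    return output.ToArray();
}
```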

I’ll try to do some future posts showing how to do input driven resampling with the Media Foundation resampler, and the WDL resampler.

Sunday, 29 June 2014

How to create zip backups of previous git commits

I’m working on a new Pluralsight course at the moment, and one of the things I need to do is provide “before” and “after” code samples of all the demos I create. In theory this should be easy, since I’m using git for source control, so I can go back to any previous point in time using git checkout and then zip up my working folder. But the trouble with that approach is that my zip would also contain a whole load of temporary files and build artefacts that I don’t want. What I needed was to be able to quickly create a zip file containing only the files under source control for a specific revision.

I thought at first I’d need to write my own tool to do this, but I discovered that the built-in git archive command does exactly what I want. To create a zip file for a specific commit you just use the following command:

git archive --format zip --output export.zip 47636c1

Or if like me you are using tags to identify the commits you want to export, you can use the tag name instead of the hash:

git archive --format zip --output begin-demo.zip begin-demo

Or if you just want to export the latest files under source control, you can use a branch name instead:

git archive --format zip --output master.zip master

In fact, if you have Atlassian’s excellent free SourceTree utility installed, it’s even easier, since you can just right-click any commit and select “archive”. Anyway, this feature saved me a lot of time, so I hope it proves useful to someone else as well.