Thursday, 23 October 2014

Thoughts on the demise of CodePlex and Mercurial

I've been an enthusiastic user of CodePlex ever since it first launched in 2006. 14 of my open source projects are hosted there, including my "main" contribution to .NET open source, NAudio, and my most downloaded project of all time, Skype Voice Changer.

CodePlex was for me a huge improvement over SourceForge, where I had initially attempted to host NAudio back in 2003, but never actually succeeded in figuring out how to use CVS on Windows. Thanks to the TFS-to-SVN bridge, CodePlex source control was easy to work with using TortoiseSVN, and it offered discussion forums, bug tracking, release hosting, and even ClickOnce hosting, which I make use of for a number of projects.

I was particularly excited in 2010 when CodePlex started to support Mercurial for source control. I was just awakening to the amazing power and flexibility of DVCS, and I quickly settled on Mercurial as my preferred option over Git - it seemed to play a lot better with Windows, had a simpler command line syntax, and wasn't blocked by my work firewall. So I made the switch to Mercurial for NAudio in 2011.

Git vs Mercurial

It became obvious soon after making my decision that Git was winning the popularity contest in the DVCS space. Behind the sometimes arcane command line syntax lay an incredibly powerful feature set, and slowly but surely, thanks to tools like SourceTree and GitHub for Windows, the developer experience on Windows improved and overtook Mercurial's.

A case in point would be Microsoft’s decision to support Git natively in Visual Studio. Thanks to the built-in integration, it is trivially easy to enable Git source control for every project you create. For a long time I hoped that Mercurial support would follow, especially since Microsoft had appeared to back it in the past through CodePlex, but they made it clear that Mercurial support was never coming.

Likewise with Windows Azure: when Microsoft added the ability to deploy using DVCS, it was Git that was supported, and Mercurial users were left out in the cold again. I believe that has now been rectified, but for me at least the damage had been done. I've used Git for almost all my new projects for over a year now, and I only really use Mercurial for my legacy projects. It's obvious that if I want to integrate with the latest tooling, I need to be using Git, not Mercurial.

GitHub vs CodePlex

Although CodePlex added Mercurial hosting back in 2010, it was obvious that they were rapidly losing users to GitHub, and in 2012 they finally added Git support. In theory this should have revived CodePlex as the premier hosting site for .NET open source, but it became apparent that they were falling behind in other areas too. GitHub's forking and pull request system is very slick, and GitHub Pages is a much nicer option than the rather clunky wiki system that CodePlex uses for documentation.

For a while it looked like CodePlex was fighting back, with regular updates of new features, but the CodePlex blog has had no news to announce for over a year now. Perhaps more tellingly, GitHub has become the hosting provider of choice for the new and exciting open source projects coming out of Microsoft, such as the new ASP.NET vNext project. There are some notable exceptions, such as Roslyn (which is considering a move to GitHub) and the Visual F# tools (which have a top-voted feature request to move to GitHub).

Does CodePlex offer any benefits over GitHub? Well, there are a few. I like having a Discussion forum separate from my Issues list. The ClickOnce hosting is useful for several of my projects. And I can't complain about the modest income stream that the DeveloperMedia ad integration lets you tap into if you have a popular project. But GitHub is a clear winner in most other respects.

Standardisation vs Competition

Now, it could be considered a good thing that Git has won the DVCS war and GitHub has won the open source hosting war. It allows us all to embrace them and standardise on one way of working, saving the time spent learning multiple tools and workflows. But part of me is reluctant to walk away from Mercurial and CodePlex, as a lack of competition in these spaces will ultimately leave us poorer. If there is no viable competition, what will drive Git and GitHub to keep innovating and meeting the needs of all their users?

For example, GitHub at one point unexpectedly decided to ditch their uploads feature. This immediately made it unsuitable for hosting many of the sorts of projects I work on, which are released as a set of binaries. They have remedied the situation now with a releases feature, but for me it highlighted the danger that they were already in such a position of strength that they could afford to make a decision hugely unpopular with many of their users.

I'm also uneasy about the way GitHub has become established in the minds of many developers as the only place that counts when evaluating someone's contribution to open source. My CoderWall page simply ignores all my CodePlex work and focuses entirely on a few peripheral projects I have hosted on GitHub and BitBucket. My OpenHub (formerly Ohloh) page does at least attempt to track my CodePlex work, but somehow only picks up a very limited subset of my actual commit history (apparently I did almost nothing in the last two years). I'd rather they showed nothing about my commit history than a misrepresentation. I've also read numerous blogs proclaiming that you should only hire a developer after checking their GitHub contributions. So it is concerning that all the work I have done on CodePlex counts for nothing in the minds of some, simply because I did it on the wrong hosting site with the wrong DVCS tool. Hopefully the new Microsoft MVP criteria won't take the same blinkered approach.

Time to Transition?

So I find myself at the end of 2014 wondering whether the time has come to migrate NAudio to GitHub. I was initially against the idea, but it would certainly make it easier to accept contributions (very few people are willing to learn Mercurial), and GitHub Pages would be a great platform on which to build improved documentation. And all of a sudden, those tools that attempt to "rank" you as a contributor to open source would finally recognise me as having done something!

But part of me hopes that the likes of CodePlex and Mercurial will have a renaissance, and that new DVCSs (Veracity?) and alternative open source hosting sites like the excellent BitBucket will continue to grow and flourish, providing real competition to Git and GitHub and spurring them on to more innovation.

I’d love to know your thoughts on this in the comments. Have you transitioned your open source development to Git and GitHub, and why / why not?

TL;DR: Git is awesome and GitHub is awesome, but the software development community is poorer for the lack of viable competition.

Thursday, 16 October 2014

How to Create Circular Backgrounds for your Font Awesome Icons

I was recently using Font Awesome icons on a webpage I was creating. They provide a really nice scalable set of general purpose icons, and I wanted them to appear on a circular background, something like this:

[image: icons on circular backgrounds]

But not being an expert in CSS, I didn’t know how to set this up. After a bit of searching and experimentation I found two different ways to create this effect.

Method 1: CSS Circles

The first is to create the circle using the border-radius CSS property and some padding to create space around your icon. You can set the radius to 50% (or to half the width and height) to create a circle. The only catch is that the container needs to be square, otherwise you'll end up with an ellipse. For example, if I use this style:

.circle-icon {
    background: #ffc0c0;
    padding: 30px;
    border-radius: 50%;
}

And then try to use it with a Font Awesome icon like this:

<i class="fa fa-bicycle fa-5x circle-icon"></i>

I get an ellipse:

[image: bicycle icon on an elliptical background]

We therefore need to specify the height and width explicitly, which leads us to also need some rules to get our icon centred horizontally (using text-align) and vertically (using line-height and vertical-align). So if we update our CSS style like this:

.circle-icon {
    background: #ffc0c0;
    width: 100px;
    height: 100px;
    border-radius: 50%;
    text-align: center;
    line-height: 100px;
    vertical-align: middle;
    padding: 30px;
}

Now we get the circle as desired:

[image: bicycle icon on a circular background]

So mission accomplished, sort of, although it feels a shame to have to specify exact sizing. Maybe any CSS experts reading this can tell me a better way of accomplishing this effect. The good news is that Font Awesome itself has a concept of "stacked" icons, which offers another way to achieve the same effect.

Method 2: Stacked Icons

Stacked icons basically involve drawing two Font Awesome icons on top of each other. Font Awesome comes with the fa-circle icon, which is a solid circle, so we can use that for the background. We need to style it to set the background colour correctly, and we use fa-stack-2x on this icon to indicate that it must be drawn at twice the size of the icon that will appear inside the circle. Then we put both icons inside a span with the fa-stack class, and we can still use the regular Font Awesome styles to choose the overall icon size, such as fa-4x. So here, for example, I have:

<span class="fa-stack fa-4x">
  <i class="fa fa-circle fa-stack-2x icon-background"></i>
  <i class="fa fa-lock fa-stack-1x"></i>
</span>

where icon-background simply specifies a background colour:

.icon-background {
    color: #c0ffc0;
}

and it looks like this:

[image: lock icon on a circular background]

As you can see, this is a nice simple technique, and I prefer it to the CSS approach. Font Awesome also includes circles with borders, so you can create something like this (I used three stacked icons):

[image: icon on a circular background with a border]

To experiment with this yourself, try this jsfiddle.

Monday, 6 October 2014

Auto-Registration with StructureMap

When I want an IoC container I usually either use Unity or create my own really simple one. But I've been meaning to explore some of the alternatives, so I decided to give StructureMap a try for my latest project.

I had two tasks to accomplish. The first was to tell it to use a particular concrete class (ConsoleLogger) as the implementer of my ILog interface. The second was to get it to scan the assembly and find all the implementers of my ICommand interface, without having to register each one explicitly.

As you'd expect, the first task is very simple to accomplish. You create a new Container, and then map the interface to the concrete implementer using a fluent API. Here I've said that my logger will be a singleton:

var container = new Container(x => 
    x.ForSingletonOf<ILog>().Use<ConsoleLogger>());
var logger = container.GetInstance<ILog>();

The second task is also straightforward to achieve with StructureMap. We ask StructureMap to scan the calling assembly and add all types of ICommand. The one gotcha is that I had to make the ICommand interface and its implementing classes public for StructureMap to detect them. Here's the registration code:

var container = new Container(x =>
    {
        x.ForSingletonOf<ILog>().Use<ConsoleLogger>();
        x.Scan(a =>
            {
                a.TheCallingAssembly();
                a.AddAllTypesOf<ICommand>();
            });
    });

Now we can easily get hold of all the implementations of the ICommand interface like so:

var commands = container.GetAllInstances<ICommand>();

As you can see, it's very straightforward. In fact, StructureMap can simplify things even further by making use of conventions, as shown below.
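For instance, here is a minimal sketch of convention-based registration (assuming StructureMap's default convention, which pairs each interface with a same-named concrete class, e.g. ICommand with Command - so it wouldn't pick up my ConsoleLogger without an explicit registration):

var container = new Container(x =>
    x.Scan(a =>
        {
            a.TheCallingAssembly();
            // pair each interface with the concrete type of the same name
            // minus the leading "I", e.g. ILog -> Log
            a.WithDefaultConventions();
        }));

So if you're looking for an IoC container that can do auto-registration, why not give StructureMap a try?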

Thursday, 2 October 2014

September Pluralsight Course Recommendations

I blogged in August about some of my Pluralsight course recommendations, and so I thought I’d give another brief roundup of some of the best ones I’ve watched in the last month or two.

Hack Your API First (Troy Hunt). With the "Shellshock" and "Heartbleed" vulnerabilities making national headlines, everyone wants to be sure that their website or network-connected application is secure. But do you have the confidence as a developer that you know how to ensure your programs are safe from attack? Troy Hunt has created several security-related Pluralsight courses, and they are all excellent as well as being a lot of fun to watch. This latest one gives lots of practical guidance on how to keep any APIs your application exposes or consumes secure.

Executable Specifications (Elton Stoneman). Lots of developers (myself included) have embraced the practice of writing “unit tests”, but often these tests are very low level and only cover small components in isolation. What if we could write specifications in such a way that acceptance testing of the whole stack could be automated? Don’t believe it’s possible? Well you might change your mind after watching this course. Elton Stoneman does a superb job of showing the power of the SpecFlow framework. I’ll definitely be watching more of his courses in the future, and looking out for a chance to try SpecFlow on one of my own projects.

F# Functional Data Structures (Kit Eason). F# is a language I find very exciting, and I am slowly getting to grips with its syntax. The real challenge, though, is to go beyond simply writing C# code in F# syntax, and to start taking advantage of the power of functional programming. In this course, Kit Eason guides you through the various data structures offered by F#, showing you how, when and why to use them. You should definitely check it out if you are learning F#.

.NET Interoperability Fundamentals (Pavel Yosifovich). Probably the biggest headache for me when I started creating NAudio was having to learn how to interoperate with unmanaged code effectively. I've done a huge amount of P/Invoke to Windows APIs as well as COM interop for the more modern Windows APIs, and it has been a painful and error-prone process. This is the course I wish I could have watched 10 years ago. Pavel knows his stuff, and I'd describe this course as an expert chatting about everything you need to know to work effectively with unmanaged code. Well worth watching if you need to do any kind of interop.

So those are my recommendations for this month. I know there are probably plenty of other good ones I missed, so let me know in the comments what you've been watching. I know my blogging output has dropped off over the summer; I'm hoping to get back up to speed in the near future, and I'll have news to share about my next Pluralsight course soon.

Saturday, 27 September 2014

Announcing Windows Forms Best Practices

Those of you following my blog will know that I’ve been working on a course entitled Windows Forms Best Practices for Pluralsight. I’m pleased to announce that it is now live, and you can check out some of my ideas for how to write better Windows Forms applications (as well as how to incrementally migrate away to a newer technology).

The approach I took was to make a small Windows Forms demo application in the style often seen in typical line-of-business applications - all the code sits inside the code-behind of a monolithic main form. Through the course I improve this application in various ways, from both a usability and a maintainability perspective.

Along the way I refactor it towards a Model View Presenter pattern, and improve it in various other ways, such as better threading and exception handling, and making it resizable and localizable.

One regret I do have is that there simply wasn’t time to show how the code could be refactored completely to MVP – I just did the first step. I have however since continued the refactoring and have a version of the demo application that uses MVP more extensively. I’ll try to make this available for viewers of my course, so do get in touch (@mark_heath on Twitter) if you’d like to get hold of this.

The course is available for viewing here. Hope you find it helpful.

Wednesday, 10 September 2014

Creating RF64 and BWF WAV files with NAudio

There are two extensions to the standard WAV file format which you may sometimes want to make use of. The first is the RF64 extension, which overcomes the inherent limitation that WAV files cannot be larger than 4GB. The second is the Broadcast Wave Format (BWF) which builds on the existing WAV format and specifies various extra chunks containing metadata.

In this post, I'll explain how to make a class that uses NAudio to create Broadcast Wave Files, supporting large file sizes with the RF64 extension and including the "bext" chunk from the BWF specification.

File Header

First of all, a WAV file starts with the byte sequence 'RIFF', followed by a four-byte size value, which is the number of bytes following in the entire file. For large files, however, 'RF64' is used instead of 'RIFF', and the following four-byte RIFF size is then ignored (it should be set to -1).

Then we have the 'WAVE' identifier (another four bytes), and following that in a normal WAV file we would usually expect the format chunk (with the 'fmt ' four-byte identifier). But to support RF64, we instead add a 'JUNK' chunk. This is 28 bytes in size, initially all zeroes. If the overall size of the WAV file grows beyond 4GB, we will turn this 'JUNK' chunk into a 'ds64' chunk. If the file stays under 4GB, the junk chunk is simply left in place, and media players will just ignore it.
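As a rough sketch (assuming a BinaryWriter named writer over the output stream - the finished class appears at the end of this post), writing this header looks something like:

writer.Write(Encoding.ASCII.GetBytes("RIFF")); // patched to "RF64" later if needed
writer.Write(-1);                              // RIFF size; patched on close for files under 4GB
writer.Write(Encoding.ASCII.GetBytes("WAVE"));
writer.Write(Encoding.ASCII.GetBytes("JUNK")); // placeholder that may become 'ds64'
writer.Write(28);                              // JUNK chunk length
writer.Write(new byte[28]);                    // all zeroes for now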

The ds64 chunk

A ds64 chunk consists of three eight-byte integers and a four-byte integer. These are the RIFF size (the size of the entire file minus 8 bytes), the data size (the number of bytes of sample data in the 'data' chunk), and the sample count (the number of samples). The sample count is really optional, as it corresponds to the sample count found in the 'fact' chunk of a standard WAV file; that chunk is usually only present for non-PCM audio formats, since for PCM it is trivial to calculate the sample count from the byte count. Finally, a ds64 chunk can have a table containing the sizes of any other huge chunks, but usually this is unused, since it is likely that only the 'data' chunk will grow larger than 4GB. So the final four bytes of a typical ds64 chunk are zeroes, indicating no table entries.
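Here's a sketch of that finalisation step, using variable names of my own (totalFileLength, dataChunkLength and sampleCount are longs assumed to have been tracked while writing):

if (totalFileLength > uint.MaxValue)
{
    writer.Seek(0, SeekOrigin.Begin);
    writer.Write(Encoding.ASCII.GetBytes("RF64")); // was "RIFF"
    writer.Seek(12, SeekOrigin.Begin);             // offset of the JUNK chunk
    writer.Write(Encoding.ASCII.GetBytes("ds64")); // was "JUNK"
    writer.Write(28);                              // chunk size is unchanged
    writer.Write(totalFileLength - 8);             // RIFF size (eight bytes)
    writer.Write(dataChunkLength);                 // data size (eight bytes)
    writer.Write(sampleCount);                     // sample count (eight bytes)
    writer.Write(0);                               // no table entries (four bytes)
}

For files that stay under 4GB, we would instead patch the normal 32-bit RIFF and data chunk sizes and leave the JUNK chunk alone.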

The bext chunk

Following the ds64 (or JUNK) chunk, we have the bext chunk from the BWF specification. This has space for a textual description of the file as well as timestamps, and newer versions of the bext chunk also allow you to specify various bits of loudness information. The algorithms for calculating those are hard to track down, so I tend to use version 1 of bext and leave them out.
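As a sketch, writing a version 1 bext chunk might look like the following (field sizes are from the BWF specification; WriteFixedString is a hypothetical helper that writes ASCII text zero-padded to a fixed byte count):

writer.Write(Encoding.ASCII.GetBytes("bext"));
writer.Write(602 + codingHistory.Length);      // the fixed part of a v1 bext chunk is 602 bytes
WriteFixedString(writer, description, 256);
WriteFixedString(writer, originator, 32);
WriteFixedString(writer, originatorReference, 32);
WriteFixedString(writer, DateTime.Now.ToString("yyyy-MM-dd"), 10); // origination date
WriteFixedString(writer, DateTime.Now.ToString("HH:mm:ss"), 8);    // origination time
writer.Write(0L);            // time reference: sample count since midnight
writer.Write((short)1);      // bext version 1
writer.Write(new byte[64]);  // UMID (left blank here)
writer.Write(new byte[190]); // reserved
WriteFixedString(writer, codingHistory, codingHistory.Length);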

The fmt chunk

Then we have the standard ‘fmt ‘ chunk, which works just the same way it does in a standard WAV file, containing a WAVEFORMATEX structure with information about sample rate, bit depth, encoding and number of channels. RF64 files are almost always either PCM or IEEE floating point samples, since it is only with uncompressed audio that you typically end up creating files larger than 4GB.

The data chunk

Finally, we have the 'data' chunk, containing the actual audio data. Again this is used in exactly the same way as in a regular WAV file, except that the chunk data length only needs to be filled in if the file is less than 4GB. If it is an RF64 file, the four-byte length for the data chunk is ignored (set it to -1), and the size from the ds64 chunk is used instead.

The code

So let's put all of this together into a simple BWF writer class that creates BWF files with a simple bext chunk and turns them into RF64 files if necessary. I plan to clean this code up a little and import it into NAudio in the future (either as its own class, or by upgrading WaveFileWriter to support RF64 and BWF - let me know your preference in the comments).
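Here's a minimal sketch of such a class (the name BwfWriter and the overall shape are my own; the bext writing is elided as shown earlier, and error handling is omitted):

using System;
using System.IO;
using System.Text;
using NAudio.Wave;

public class BwfWriter : IDisposable
{
    private readonly WaveFormat format;
    private readonly BinaryWriter writer;
    private readonly long dataChunkSizePosition;
    private long dataLength;

    public BwfWriter(string fileName, WaveFormat format)
    {
        this.format = format;
        writer = new BinaryWriter(File.Create(fileName));
        writer.Write(Encoding.ASCII.GetBytes("RIFF")); // patched to RF64 on dispose if needed
        writer.Write(-1);                              // RIFF size, patched on dispose
        writer.Write(Encoding.ASCII.GetBytes("WAVE"));
        writer.Write(Encoding.ASCII.GetBytes("JUNK")); // becomes ds64 if the file grows over 4GB
        writer.Write(28);
        writer.Write(new byte[28]);
        // ... write the bext chunk here, as sketched above ...
        writer.Write(Encoding.ASCII.GetBytes("fmt "));
        writer.Write(18 + format.ExtraSize);
        format.Serialize(writer);
        writer.Write(Encoding.ASCII.GetBytes("data"));
        dataChunkSizePosition = writer.BaseStream.Position;
        writer.Write(-1); // data size, patched on dispose for files under 4GB
    }

    public void Write(byte[] buffer, int offset, int count)
    {
        writer.Write(buffer, offset, count);
        dataLength += count;
    }

    public void Dispose()
    {
        var fileLength = writer.BaseStream.Length;
        if (fileLength > uint.MaxValue)
        {
            // RF64: rewrite the RIFF id and turn the JUNK chunk into ds64;
            // the 32-bit RIFF and data sizes stay at -1 and are ignored
            writer.Seek(0, SeekOrigin.Begin);
            writer.Write(Encoding.ASCII.GetBytes("RF64"));
            writer.Seek(12, SeekOrigin.Begin);
            writer.Write(Encoding.ASCII.GetBytes("ds64"));
            writer.Write(28);
            writer.Write(fileLength - 8);                 // RIFF size
            writer.Write(dataLength);                     // data size
            writer.Write(dataLength / format.BlockAlign); // sample count
            writer.Write(0);                              // no table entries
        }
        else
        {
            // standard WAV: patch the 32-bit sizes
            writer.Seek(4, SeekOrigin.Begin);
            writer.Write((uint)(fileLength - 8));
            writer.Seek((int)dataChunkSizePosition, SeekOrigin.Begin);
            writer.Write((uint)dataLength);
        }
        writer.Dispose();
    }
}

Usage is then just a matter of constructing it with a WaveFormat, calling Write with your audio data, and disposing it when you're done.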

Friday, 8 August 2014

Brush up on your Languages with Pluralsight

Over the last year not only have I created a number of courses for Pluralsight, I’ve also watched a lot too. Most of the time, I’m not watching to learn a brand new technology, but as a refresher for something I’ve already used a bit. Often in just an hour or two (on 1.3x playback) you can watch a whole course and pick up loads of great tips.

I’ve found it a particularly effective way to brush up on my skills in a few programming languages that I’m semi-proficient in, but not completely “fluent” in. So here’s a few programming language related courses that I can recommend:

First, last year I was glad I watched Structuring JavaScript by Dan Whalin, as I had been hearing lots of people talking about the “revealing module pattern” and “revealing prototype pattern” but hadn’t yet properly learned what those patterns were. He explains them simply and clearly.

Another great course is Python Fundamentals by Austin Bingham and Robert Smallshire. It’s been a number of years since I did any serious Python development, so my skills had grown a bit rusty. This superbly presented course is a brilliant introduction to Python, and filled in a couple of gaps in my knowledge. They’ve got a follow-up course out as well which is undoubtedly also worth watching.

Third, several times over the years I’ve tried and failed to get to grips with PowerShell. The Everyday PowerShell for Developers course by Jim Christopher was exactly what I needed as it shows how to do the sorts of things developers will be interested in doing with PowerShell.

And finally, the F# section of the Pluralsight library is still small, but growing fast, and one fascinating course was Mark Seemann’s Functional Architecture with F#. It’s fast-moving but gives fascinating insights into how you could architect a typical line of business application in a more functional way.

Anyway, that’s enough recommendations for now. I have several other courses I want to highlight, so maybe this will become a regular blog feature. Let me know in the comments if there are any must-see courses you’ve come across.