Saturday, 31 January 2015

Buying a Code Signing Certificate

Towards the end of last year, I decided to buy a code signing certificate. Why would I want to do that? Well, I've been working on publishing a new Skype call recording utility, and if you leave your application unsigned, Windows SmartScreen can block users from installing and running it. There is a way to ignore the warning messages, but many users won't know how to do this, and I wanted to remove as many barriers to installation as possible.


Having a signed application doesn't automatically make these warnings go away. After all, what's to stop a malicious hacker from signing their own code? But once Windows decides that it trusts my application, the theory is that any updates or new applications signed with the same certificate will also be trusted.

Step 1 was to find a code signing certificate that wasn't horrendously expensive. Code signing certificates are a lot more expensive than SSL certificates (I recently picked up an SSL certificate for $25 for five years), and can cost several hundred dollars a year. This is of course no big deal if you are Microsoft or Adobe, but for an independent developer it is a significant investment, particularly if you don't have a high volume of sales, or are producing freeware.

I eventually settled on using K Software, whose website seemed to contain relatively up-to-date information about code signing certificates. Their cheapest Authenticode certificates were about $80 per year, and they promised a fast turnaround time: apparently certificates could be issued in as little as 15 minutes, or 1-2 days if identity verification was needed.

So I placed the order, which was passed on to Comodo, the certificate authority that would actually issue my code signing certificate.

After a few days of silence I chased them up to ask what was going on, and got a reply telling me I needed "face to face verification". In other words, I needed to prove I was who I said I was. Fair enough; I was expecting to send them some proof of identity, but I hadn't anticipated they would require me to visit a Notary Public.

They also told me they wanted my details listed on 192.com and scoot.co.uk. This was something I really didn't want to do, since these directories open you up to nuisance marketing phone calls. But I had no option if I wanted the certificate, so I registered my business on Scoot.

Visiting a notary was a bit of a hassle, as it required me to take a half-day off work. It cost me £40, and he took copies of my passport, bank statements and various other forms of identification, and faxed them through to Comodo. I was also required to "overnight" the documents to Comodo, but that isn't an option the Royal Mail offer to America, so I went for their best service, tracked and signed, which supposedly delivers in five working days.

That proved to be a mistake, as my documents took 16 days to arrive. This was extremely frustrating, as Scoot were constantly pestering me with phone calls trying to upsell me to their paid offering. They kept explaining that their free option doesn't show your company website URL to visitors, and wondering why I didn't seem to care about this. I didn't want them to boot me off their listings, so I had to stall them for as long as possible while I waited for the interminably slow overseas postal service to deliver my documents.

Rather worryingly, after I got confirmation of delivery from Royal Mail, Comodo claimed not to have received my documents at all. But after several emails they eventually decided they had received them. Now they needed to contact my Notary Public and get him to verify that he really had sent the documents. This took another few days, and finally, well over a month after making the order, I got my signing certificate.

To actually download the certificate, I needed to use the same computer and browser I had used to make the original order. This was a bit of a problem at first because, a whole month having elapsed, I had forgotten which one that was. But eventually I downloaded my certificate. It landed in some mysterious location in Chrome, but fortunately Chrome allowed me to export it as a .pfx file, which is what I needed for signing ClickOnce applications.

So I did finally get my code signing certificate, though it took nothing like the advertised 1-2 days, and it meant I had to delay the launch of my product by a month. The good news is that, as far as I can tell, signing my code has had the desired effect - the installs I've tried haven't been blocked by Windows SmartScreen.

So if you decide you want a code signing certificate, do give yourself plenty of time to get it sorted out and don’t leave it to the last minute. You may also want to check out this very thorough article from Eric Law, in which he explains how he went about getting his code signing certificate and setting up a hardware security token.

Thursday, 29 January 2015

Using Azure Application Diagnostics with ASP.NET Web Pages

Although I've built a couple of sites with ASP.NET MVC, I like the simplicity of ASP.NET Web Pages. It allows me to start with a completely empty project and only add code I've written and understand (interestingly, this seems to be the philosophy behind the vNext version of ASP.NET). The downside is that every now and then I run into things that would be easy to do in ASP.NET MVC but are tricky to accomplish with Web Pages.

Azure Diagnostics

Windows Azure WebSites offers a nice feature to turn on "application diagnostics". This allows you to write your own custom error logs using System.Diagnostics.Trace. It's super easy to configure - simply go to the control panel for your website in the Azure portal and choose where you want these logs stored. You can choose any combination of file system, table storage and blob storage, and you can set different logging levels for each one.


Once you've done this, in theory you should just be able to write messages using the Trace class and see them appear in the places you’ve configured. If you choose blob storage, for example, you get a CSV file with the trace messages for a particular day.

@{
    System.Diagnostics.Trace.TraceInformation("INFORMATION");
    System.Diagnostics.Trace.TraceWarning("WARNING");
    System.Diagnostics.Trace.TraceError("ERROR");
}


Using it with ASP.NET Web Pages

However, if you try this with ASP.NET Web Pages, you simply won’t see anything at all in your logs. Thanks to StackOverflow, I found out how to enable application tracing: the TRACE flag needs to be defined for the compiler, which can be done in the web.config file. For a .NET 4.5 application, you need the following:

<system.codedom>
    <compilers>
        <compiler language="c#;cs;csharp" 
                  extension=".cs" 
                  type="Microsoft.CSharp.CSharpCodeProvider, System, Version=2.0.3600.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" 
                  compilerOptions="/define:TRACE" 
                  warningLevel="1" /> 
    </compilers> 
</system.codedom>

Once you’ve done this, re-deploy your web.config and, as if by magic, all your trace statements will make it through to the configured destinations. This lets you easily log from within Razor views as well as from any C# files you have in your App_Code folder.
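For example, a simple helper class in App_Code can write trace messages in the usual way (the class and method names here are hypothetical, just to illustrate):

using System.Diagnostics;

// Hypothetical helper in App_Code - these messages end up in whichever
// destinations (file system, table storage, blob storage) you configured
public static class OrderLogger
{
    public static void OrderReceived(string orderId)
    {
        Trace.TraceInformation("Order received: {0}", orderId);
    }

    public static void PaymentFailed(string orderId, string reason)
    {
        Trace.TraceError("Payment failed for order {0}: {1}", orderId, reason);
    }
}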

Anyway, hope this post is useful to someone as it had me tearing my hair out for an evening trying to figure out why it wasn’t working.

Wednesday, 28 January 2015

Minimum Deliverable Product

Many years ago, I came up with an idea for a simple software application that I could sell online. Basically, an open source app I created for modifying your voice in Skype became incredibly popular and had millions of downloads, and it struck me that I could probably sell a "Pro" version with some additional features.

Of course, what actually happened was that the code for the "Pro" version stagnated for years: my enthusiasm for the project came in fits and starts, I couldn't make up my mind exactly how I wanted to build the thing, and I wasn't even sure anyone would buy it. So the idea was going nowhere.

Getting Motivated

But about a year ago I started listening to the entreprogrammers podcast, and it inspired me to resurrect this idea. The only trouble was, would I be able to find the time to actually complete the thing, given that I was already spending a lot of my free time making courses for Pluralsight?

I soon realised that if I were to ever get something out the door I would need to pick a realistic and achievable feature set for the first version, and focus exclusively on completing that. In other words I needed to stop trying to make the perfect app, and create what's known as a "Minimum Viable Product".

Minimum Feature Set

For my application there were three main features I wanted to deliver:

  1. You can change your voice in a Skype conversation
  2. You can play back pre-recorded sounds for the other person to hear
  3. You can record your Skype conversations

For each of these features, I decided on what the bare minimum capabilities were that I would be happy launching with. For example, I wanted several cool voice effects, but I didn't really need the effects parameters to be adjustable. I wanted the ability to replay sounds, but I didn't really need looping and repositioning within those sounds for version 1. And I wanted to offer recording to MP3, but for version 1, recording to WAV would probably suffice.

By deciding on a minimum feature set for my application, I finally had an achievable goal and was able to focus on getting the job done.

Minimum Lovable Product

I've heard some criticism of the "minimum viable product" concept from people saying that what you really need to create is a "minimum lovable product". In other words, if your application is too stripped down and basic, you may find that no one is willing to buy it, and your potential customers go off in search of alternative solutions. So while I did go for a fairly basic feature set, I did attempt to ensure that I had just enough in each of my three key features to offer customers something genuinely fun and useful.

Minimum Deliverable Product

Unfortunately though, completing the minimum viable feature set of my application was the easy part. There is a whole host of additional stuff that needs to be done just to sell a single piece of software. I needed an installer, a logo, a website, a domain name, some help documentation and tutorials, an email address to handle sales and support enquiries, a way of selling the software that complies with the new and arcane EU VAT rules, a way of refunding people, a way of generating software licenses, and a way to handle errors gracefully within the application.

It turned out that it was these tasks rather than the core feature set that were preventing me from completing the project. So I decided I needed to take the "minimum viable product" approach with each of them too.

What's the "minimum viable website"? I created something very simple (and ugly) with Bootstrap and ASP.NET Web Pages. A few basic help pages, and the ability to install and buy, were all that was really needed. It doesn't look pretty, but I can improve it gradually or even hire a designer at a later date.

What's the "minimum viable installer"? I opted for ClickOnce, since it's quick to make an installer and easy to keep customers updated with new features. Even that turned into a bigger task than I wanted as the hassle I went through to purchase a code signing certificate was much greater than I anticipated.

What's the "minimum viable licensing mechanism"? Here I may have made a mistake. I opted for Portable.Licensing, which itself is great, but it meant I had to create an automated license creation and emailing system. Maybe I should have just decided not to worry about piracy and saved myself the time by not using licenses at all.
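For what it's worth, validating a license with Portable.Licensing looks roughly like this (a minimal sketch based on the library's documented API; the file name and public key are placeholders):

using System.IO;
using System.Linq;
using Portable.Licensing;
using Portable.Licensing.Validation;

// Placeholder: the public key matching the private key the licenses were signed with
string publicKey = "<your public key>";

// Load the license file that was emailed to the customer
License license;
using (var stream = File.OpenRead("License.lic"))
{
    license = License.Load(stream);
}

// Check the expiration date and verify the signature against our public key
var failures = license.Validate()
                      .ExpirationDate()
                      .And()
                      .Signature(publicKey)
                      .AssertValidLicense()
                      .ToList();

bool isValid = !failures.Any();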

What's the "minimum viable checkout"? Well, a "buy now" PayPal button was my plan, but thanks to the new EU laws on VAT collection I needed a complicated system for verifying the locations of my customers and charging them the correct VAT. So I decided to find a company who would collect and report VAT for me. After a false start with Digital River's MyCommerce, who disabled my account and refused to respond to any emails, I found paddle.com, whose customer support was much better and who offered straightforward integration with my own license generation code (it turns out I could have used their built-in license generation capabilities, but I was too far down the road with my own by that point).

Keeping Motivated

So actually going from a working prototype to a deliverable product is quite a journey. I realised that if I was ever going to complete the task, I'd need a way to motivate myself. So what I did was put an advert in my open source software for the "Pro" version. It would simply tell users that if they wanted to record their calls they could upgrade. The advert took them to a "coming soon" page hosted on my blog, where they could register their interest to be notified when the pro version was released, and they could purchase a pre-order license for a discounted price.

My email list grew quite rapidly - I collected 1,400 email addresses in just over a year. And I had six people pre-order the product for $30 (actually seven pre-orders, but one wanted a refund when he realised it wasn't released yet). The very fact that I had some customers who had paid for the product helped motivate me to actually complete the task.

Going Live

So finally, years later than I should have done it, I went live with my site a couple of days ago. I've still got to email all the people who registered pre-order interest, and offer them a discount. I'm not sure what my "conversion rate" will be, but with 1,400 emails I'm hoping at least a few people will buy a license.

Obviously going live is only the beginning. There are lots of tweaks and enhancements to my app and website that I deliberately held off doing, just so I could get my first version out of the door. But that's the point of the minimum viable product. Just get something released, and then you can gauge how much further time and effort to invest, depending on whether there is actually any interest.

At the time of writing I'm still waiting for my first customer from the new site (it's early days yet, but I suspect my marketing and sales ineptitude isn't helping). But if you have a use for a Skype Voice Changer or a Skype call recorder, then why not try it out? And if you'd like to be my first customer, then I'm offering readers of my blog a 25% discount. Just use the SOUNDCODE coupon code when you purchase.

Tuesday, 6 January 2015

Porting WCF Service Contracts to F#

One of my goals this year is to get better at F#, by using it more, so I decided to port a simple WCF service over to F#. In this post, I will demonstrate how to port data and operation contracts over from C# to F#. Let’s start by looking at the service contract we will be porting (simplified for the purposes of this post):

[ServiceContract]
public interface IRetrieval
{
    [OperationContract]
    [FaultContract(typeof(RetrievalServiceFault))]
    ChunkResponse GetChunk(ChunkRequest request);

    [OperationContract]
    [FaultContract(typeof(RetrievalServiceFault))]
    VersionInfoResponse GetVersionInfo(VersionInfoRequest request);
}


[DataContract]
public class VersionInfoResponse
{
    [DataMember]
    public string Version { get; set; }
}

[DataContract]
public class VersionInfoRequest
{
}

[DataContract]
public class ChunkResponse
{
    [DataMember]
    public byte[] Data { get; set; }

    [DataMember]
    public bool IsEndOfFile { get; set; }
}

[DataContract]
public class ChunkRequest
{
    [DataMember]
    public string FileName { get; set; }

    [DataMember]
    public long Offset { get; set; }

    [DataMember]
    public int BytesRequested { get; set; }
}

[DataContract]
public class RetrievalServiceFault
{
    public RetrievalServiceFault(string message)
    {
        this.Message = message;
    }

    [DataMember]
    public string Message { get; private set; }
}

Attributes

First, we need to know how to put attributes on things in F#. The syntax is similar to C#, just with an extra set of angle brackets (one of the few cases where C# is more compact than F#):

[<OperationContract>]

That’s easy, but what about the FaultContract attribute we have on each operation? That takes a type as a parameter. Well, F# also has a typeof function, and you put the type name in angle brackets, like so:

[<FaultContract(typeof<RetrievalServiceFault>)>]

Interfaces

Now we need to know how to declare an interface in F#. There is no special syntax: you simply declare a type containing only abstract members. For each method, we need to provide the name, the annotated parameter list, and the return type. The syntax looks a little odd to C# developers at first, as we are used to the return type coming at the beginning rather than the end of the method signature. It takes the form abstract MethodName : parameterName : ParameterType -> ReturnType. Here’s our example:

[<ServiceContract()>]
type IRetrieval =
    [<OperationContract>]
    [<FaultContract(typeof<RetrievalServiceFault>)>]
    abstract GetChunk : request : ChunkRequest -> ChunkResponse

    [<OperationContract>]
    [<FaultContract(typeof<RetrievalServiceFault>)>]
    abstract GetVersionInfo : request : VersionInfoRequest -> VersionInfoResponse


Data Contracts

The final piece of the puzzle is to implement the four objects that are passed as the input and output to the methods on our interface. One of the challenges is that the properties need public getters and setters, and F# likes to make properties immutable. There are a few ways of achieving this in F#, but the simplest one for this purpose seems to be to use an F# record with the mutable keyword like so:

[<DataContract>]
type VersionInfoResponse =
    { [<DataMember>] mutable Version : string }

Empty Classes

The VersionInfoRequest class had me stumped for a while, as it contains no members at all (probably a bad design choice in C#), but I eventually stumbled on a way to implement this in F#:

[<DataContract>]
type VersionInfoRequest() = 
    do()

Longs and Byte Arrays

The final challenge for me was learning how to declare the C# long and byte[] types. In F# these become int64 and array<byte> respectively. Other types, such as bool, int and string, are unchanged from C#:

[<DataContract>]
type ChunkResponse =
    {   [<DataMember>] mutable Data : array<byte>;
        [<DataMember>] mutable IsEndOfFile : bool;
     }

[<DataContract>]
type ChunkRequest =
    {   [<DataMember>] mutable FileName : string;
        [<DataMember>] mutable Offset : int64;
        [<DataMember>] mutable BytesRequested : int;
    }


I’ll hopefully find some time soon to do a follow-up post showing how to configure the WCF client and server in F#.

Tuesday, 30 December 2014

Mixing and Looping with NAudio

On a recent episode of .NET Rocks, Carl Franklin mentioned that he had used NAudio to create an application that mixes together audio loops, as part of his “Music to Code By” Kickstarter. He had four loops, for drums, bass, and guitar, and the application allows the volume of each to be adjusted individually. He made a code sample of his application available for download here.


This is quite simple to set up with NAudio. To perform the looping part, Carl made use of a LoopStream (using a technique I describe here). The key to looping is simply in the Read method: read from your source, and if you reach the end (your source returns 0, or fewer bytes than requested), reposition to the start and keep reading. This gives you a WaveStream that will never end.

Here's the code for a LoopStream that a WaveFileReader can be passed into:

/// <summary>
/// Stream for looping playback
/// </summary>
public class LoopStream : WaveStream
{
    WaveStream sourceStream;

    /// <summary>
    /// Creates a new Loop stream
    /// </summary>
    /// <param name="sourceStream">The stream to read from. Note: the Read method of this stream should return 0 when it reaches the end
    /// or else we will not loop to the start again.</param>
    public LoopStream(WaveStream sourceStream)
    {
        this.sourceStream = sourceStream;
        this.EnableLooping = true;
    }

    /// <summary>
    /// Use this to turn looping on or off
    /// </summary>
    public bool EnableLooping { get; set; }

    /// <summary>
    /// Return source stream's wave format
    /// </summary>
    public override WaveFormat WaveFormat
    {
        get { return sourceStream.WaveFormat; }
    }

    /// <summary>
    /// LoopStream simply returns the length of the source stream
    /// </summary>
    public override long Length
    {
        get { return sourceStream.Length; }
    }

    /// <summary>
    /// LoopStream simply passes on positioning to source stream
    /// </summary>
    public override long Position
    {
        get { return sourceStream.Position; }
        set { sourceStream.Position = value; }
    }

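    /// <summary>
    /// Reads from the source stream, repositioning to the start when the
    /// end is reached, so that playback loops while EnableLooping is true
    /// </summary>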
    public override int Read(byte[] buffer, int offset, int count)
    {
        int totalBytesRead = 0;

        while (totalBytesRead < count)
        {
            int bytesRead = sourceStream.Read(buffer, offset + totalBytesRead, count - totalBytesRead);
            if (bytesRead == 0)
            {
                if (sourceStream.Position == 0 || !EnableLooping)
                {
                    // something wrong with the source stream
                    break;
                }
                // loop
                sourceStream.Position = 0;
            }
            totalBytesRead += bytesRead;
        }
        return totalBytesRead;
    }
}

For mixing, the approach Carl took was simply to create four instances of DirectSoundOut and start them playing together. To allow adjusting the volumes of each channel he passed each LoopStream into a WaveChannel32, which converts to 32 bit floating point, and has a Volume property (1.0 is full volume). To ensure that the four parts remained in sync, when you deselect a part, it doesn't actually stop it playing - instead it sets its volume to 0.

This approach to synchronization works surprisingly well, but it is not actually guaranteed to keep the four parts synchronized. Over time, they could drift. So a better approach is to use a single output device, and feed each of the four WaveChannels into a mixer. Here’s an example block diagram showing a modified signal chain with two inputs feeding into a single mixer:


----------   ----------   -----------
| Wave   |   | Loop   |   | Wave    |
| File   |-->| Stream |-->| Channel |---
| Reader |   |        |   | 32      |  |   ------------
----------   ----------   -----------  --->| Mixing   |
                                           | Wave     |
----------   ----------   -----------  --->| Provider |
| Wave   |   | Loop   |   | Wave    |  |   | 32       |
| File   |-->| Stream |-->| Channel |---   ------------
| Reader |   |        |   | 32      |
----------   ----------   -----------



NAudio has a number of options available for mixing. The best is probably MixingSampleProvider, but for Carl's project, it was easier to use MixingWaveProvider32, since he's not making use of the ISampleProvider interface. This allows you to mix together any WaveProviders that are already in 32 bit floating point format.

MixingWaveProvider32 requires that you specify its inputs up front. So here, we could connect each of our inputs, and then start playing. With this simple change, Carl's mixing application is now guaranteed to not go out of sync. This is the recommended way to mix multiple sounds with NAudio.

Here's the code that sets up the mixer (Carl has a class called WavePlayer encapsulating the WaveFileReader, LoopStream and WaveChannel32, allowing you to access the WaveChannel32 with the Channel property):

foreach (string file in files)
{
    Clips.Add(new WavePlayer(file));                    
}
var mixer = new MixingWaveProvider32(Clips.Select(c => c.Channel));
audioOutput.Init(mixer);

You can download my modified version of Carl's application here.

The only caveat is that mixers require all their inputs to be in the same format. For this application, this isn’t a problem, but if you want to mix together sounds of arbitrary formats, you'd need to convert them all to a common format. This is something I cover in my NAudio Pluralsight course if you’re interested in finding out more about how to do this.
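For reference, here's a rough sketch of what the MixingSampleProvider approach mentioned above might look like, reusing the LoopStream shown earlier and the files and audioOutput variables from Carl's snippet (this assumes all the WAV files share the same sample rate and channel count):

using System.Linq;
using NAudio.Wave;
using NAudio.Wave.SampleProviders;

// Each input: WaveFileReader -> LoopStream -> WaveChannel32 (32-bit float, with a Volume property)
var channels = files
    .Select(f => new WaveChannel32(new LoopStream(new WaveFileReader(f))))
    .ToList();

// Feed all the inputs into a single mixer and play from one output device
var mixer = new MixingSampleProvider(channels.Select(c => c.ToSampleProvider()));
audioOutput.Init(new SampleToWaveProvider(mixer));
audioOutput.Play();

Each part's volume can still be adjusted through its WaveChannel32 Volume property, with 0.0 muting a deselected part just as before.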

Tuesday, 16 December 2014

ClickOnce Deployment Fundamentals

I'm delighted to announce that my sixth Pluralsight course, ClickOnce Deployment Fundamentals, is now live. In it I go through all the options available for customising your ClickOnce deployment, as well as how to handle updates, the capabilities of the deployment API, and what gets stored where on disk. I also have modules covering some of the more advanced parts of ClickOnce, such as handling prerequisites with the bootstrapper, signing your deployment, and using the MAGE tool.

Why ClickOnce?

You may be surprised that I'm doing a course on ClickOnce, since it is now a fairly old and oft-maligned technology. As I explain in the course, it's not the right choice for all installers, but for simple .NET applications, it may actually prove to be the simplest solution for keeping your application automatically up to date. I go through some of the pros and cons in the course, as well as pointing out a few alternatives you might want to consider.
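As a taster, here's a minimal sketch of the sort of update check the deployment API makes possible (using System.Deployment.Application; this only works when the application was actually installed via ClickOnce):

using System.Deployment.Application;
using System.Windows.Forms;

// The deployment API is only available to ClickOnce-deployed applications
if (ApplicationDeployment.IsNetworkDeployed)
{
    var deployment = ApplicationDeployment.CurrentDeployment;
    if (deployment.CheckForUpdate())
    {
        deployment.Update();   // download the new version
        Application.Restart(); // relaunch to pick it up
    }
}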

Some ClickOnce Resources

I've tried to give a fairly comprehensive coverage of ClickOnce capabilities in the course, but you can't cover everything, so here's some of what I consider to be the most helpful resources if you are planning to use it yourself.

  • RobinDotNet - Robin is one of the few genuine ClickOnce experts out there on the web, and she has provided several really helpful articles, including things like how you can host your ClickOnce deployments in Azure blob storage.
  • MSDN - it may not be the most thrilling documentation to read, but don't overlook it when it comes to ClickOnce, as it is really the only comprehensive source of information you’ll find. Have a look here and here for some useful material.
  • Smart Client Deployment, the book by Brian Noyes - this really is the best book out there on ClickOnce. Don’t be put off by the fact that it is fairly old now; ClickOnce hasn’t changed an awful lot, so pretty much everything in the book is still relevant.
  • Finally, here’s a video that discusses re-signing with MAGE, which shows how to work around a nasty gotcha when re-signing if you are using .deploy file extensions (which you probably are if deploying via the web).

More to Come on Signing…

I’m also hoping to follow this up with another post about the process of signing your ClickOnce applications. I actually attempted to buy my own code signing certificate to use in the demos for this course, but it has proved surprisingly difficult to complete the purchase (certainly a story for a future blog post), so for the course I just used a self-generated certificate. As soon as I finally get the real deal, I’ll post again showing what difference a certificate from a trusted Certificate Authority makes to the warnings you receive during installation.

Saturday, 29 November 2014

Effective Debugging with Divide and Conquer

I frequently get support requests for NAudio from people who complain that their audio doesn’t sound right. Can I have a look at their code and see what they are doing wrong?

Frequently the code they post contains multiple steps. Audio is recorded, processed, encoded with a codec, sent over the network, received over the network, decoded with a codec, processed some more, and then played.

Now if the audio doesn’t sound right, there’s clearly a problem somewhere, but how can we pinpoint where exactly? It is possible that a code review might reveal the problem, but often you’ll actually need to debug to get to the bottom of a problem like this.

The Number One Debugging Principle

Perhaps the most basic and foundational skill you need if you are ever to debug effectively is to “divide and conquer”. If the bug might be hiding anywhere in thousands of lines of code, going through them line by line would take far too long.

What you need to do is divide your code in half. Is the bug in the first half or the second? Once you’ve identified which half contains the problem, do the same again, and before long you’ll have narrowed down exactly where the problem lies. It’s a very simple and effective technique, and yet all too often overlooked.

Divide and Conquer Debugging in Practice

To illustrate this technique, let’s take the audio example I mentioned earlier. Where’s the problem? Well, let’s start by eliminating the network code. Instead of sending audio out over the network, write it to a WAV file. Then play that WAV file in Windows Media Player. If it sounds fine, then the problem isn’t in the first half of our system. With one quick test, we’ve narrowed the problem down to the decode and playback side of things.
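As a concrete example, here's a minimal sketch of such a tap point using NAudio's WaveFileWriter (the capture method, send method and wave format are hypothetical stand-ins for whatever your pipeline uses):

using NAudio.Wave;

// Hypothetical tap: dump the audio to a WAV file just before it would be
// encoded and sent over the network, then listen to the file afterwards
var waveFormat = new WaveFormat(8000, 16, 1); // match your pipeline's format
using (var tap = new WaveFileWriter("pre-network.wav", waveFormat))
{
    byte[] buffer;
    while ((buffer = GetNextCapturedBuffer()) != null) // hypothetical capture call
    {
        tap.Write(buffer, 0, buffer.Length);
        // SendOverNetwork(buffer); // temporarily bypassed for this test
    }
}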

Now we could test half of the remainder of the code by playing back from a pre-recorded file instead of from the network. If that sounds OK, then it’s something in the code that receives the audio over the network and decodes it. So we can very quickly zero in on the problem area.

The point is simple: you don’t need to restrict yourself to looking at the output of the entire system to troubleshoot a problem. Look at the state at intermediate points to find out where things are going wrong. And often you don’t need to run through all the code - can you pass data through just a small part of your logic, to see if the problem resides there?

Learn it and Use it

If you learn the art of divide and conquer, you’ll not only be great at debugging, but it will improve the way you write your code in the first place. Because as I’ve argued before on this blog, divide and conquer is perhaps the most essential skill for a programmer to have.