Monday, August 22, 2011

Use this data, not that

The Salt Lake Tribune has recently been running a series of articles dealing with privacy and security concerns raised by a fraud probe into a Utah prenatal health care program. The articles are primarily immigration-themed, but they are also eye-opening from a software design perspective. One of the articles focuses on the fact that the software Utah uses requires a Social Security Number (SSN) to be entered for patient identification. Some of the women coming into the clinics were unable or unwilling to provide this information, so the clinic staff would issue them 'dummy' SSNs to get them into the system. This eventually caused a problem when one of the dummy numbers turned out to match the real SSN of a man in Maine. The end result was a case of accidental identity theft.

Anyone who's ever developed software will be unsurprised by the details the article gives about how the data ended up so muddled. The system required a nine-digit ID, so the staff used SSNs. When an SSN was unavailable, they'd make one up. For years they'd prepend a "V" or something similar to try to distinguish the real numbers from the fakes, but then an upgrade forced the values to become numeric only. Under both schemes ID duplication was occurring, a fact the staff was well aware of. Changing the ID field's parameters was too expensive, so they just lived with the mess. Investigations by the U.S. Social Security Administration (SSA) only prompted the helpful advice to use a different numerical prefix - one the SSA doesn't use in real SSNs. The state's processes have been modified to continue doing exactly what they've been doing all along, except now they have to keep a separate (most likely paper) log to sort out any difficulties.

There are two very important lessons about software development to be learned here: first, that using government-issued numbers as IDs is a very bad practice, and second, that good software design cannot ignore the human element.

SSNs are not used as IDs in software as much anymore, but I think some designers and developers don't really understand why. We may say "People don't want to give us that information" or "We don't want to be responsible for keeping that data private." While it's good to recognize the inherent privacy concerns, these reasons miss the point; besides, most organizations that would use SSNs in the first place do have valid reasons to collect them. The real reason SSNs make poor IDs is that they cannot be changed, and that they are intended to be a private key.

An example to illustrate: I worked for an automotive shop where the mechanics would track the vehicle work via a touch-screen terminal. The mechanics would log in to said terminals using their SSN. The software running on the terminal communicated only with the server in the back room, and the shop had valid reasons to know the SSNs of the people their customers were entrusting their vehicular safety to. It all seemed like a reasonable setup. But then Employee B found out Employee A's SSN, and began to enter work under Employee A's login. I don't remember why firing Employee B was not an option, but it wasn't. We couldn't change Employee A's SSN without screwing up the payroll system, and we didn't have the resources to redo the terminal software (it was really, really bad code). This left Employee A entirely without recourse.

Using an SSN as a private key to eliminate duplication or provide positive identification for legal purposes is a perfectly valid thing to do. But to use SSN as a username or a public ID number is just wrong. It boxes you into logistical and ethical corners that can be very expensive to get out of.

A common complaint is that we don't want our users to be responsible for yet another number or ID they have to remember. This is a valid concern, and it has a simple solution: don't make them remember it. Look the patient up by name when they come into the clinic. Issue them a card with the ID number printed on it (and include a barcode or magnetic stripe so it can just be scanned). Issue them an ID badge with an RFID chip. Let them choose a username - usernames are intended to be public, so people can reuse them ad infinitum. Require the SSN as an element of account creation if you must, but store it privately (and securely) and map to it by the public ID of your/their choosing, as in the sketch below.
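To make that last suggestion concrete, here's a minimal sketch (with hypothetical names) of what such a mapping might look like. The Protect method is only a placeholder; real code would encrypt or salt-and-hash the value:

using System;
using System.Linq;
using System.Text;

// Hypothetical sketch: the generated public ID is what gets printed on a
// card or used as a login; the SSN is captured once, stored privately,
// and consulted only for duplicate checks or legal identification.
public class PatientAccount
{
    public Guid PublicId { get; private set; }   // generated, reissuable, safe to expose
    public string Name { get; set; }
    private readonly byte[] _protectedSsn;       // never used as a lookup key

    public PatientAccount(string name, string ssn)
    {
        PublicId = Guid.NewGuid();               // not a government-issued number
        Name = name;
        _protectedSsn = Protect(ssn);
    }

    // Private-key use: answer "is this the same person?" without
    // exposing or depending on the stored value.
    public bool MatchesSsn(string ssn)
    {
        return Protect(ssn).SequenceEqual(_protectedSsn);
    }

    private static byte[] Protect(string value)
    {
        // Placeholder only; a real implementation would encrypt or hash
        // here (e.g. with System.Security.Cryptography).
        return Encoding.UTF8.GetBytes(value);
    }
}

The point is that every other system - terminals, logins, lookups - deals only in PublicId, which can be reissued at will if it's ever compromised.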

David Platt once said, "Your user is not you," and I don't think truer words have ever been spoken. Developers tend to have a certain mental block about this; they assume that because the field says "SSN" or "Email" or "Date of Birth," that's what the user will enter. But we forget that to a user, a field is not a discrete, re-usable piece of information - it is a post-it note where they can write stuff till they need it again. Users will put information wherever they can fit it, regardless of categorization. A balance has to be struck between making forms too daunting and making them too permissive.

Validation goes a long way toward helping with this. I work for a company that receives real-time (multiple per second) data feeds from the largest retail chain on the planet. One element in these feeds is email address. We get the data just as the store associate enters it, and since the software on their side does not perform any validation at all - not even a check to be sure it includes an "@"! - the email addresses are not viable. A trivial regex would allow this information, which is invaluable for our marketing purposes, to be usable instead of dross.

Validation is no magic bullet, though - the most rigid validation in the world won't alert you to the fact that the patient's birth date is not 1/1/1970. Unless for some exceptional reason you can verify the person's birth certificate, there's pretty much no way to independently verify that kind of data, and it would not be worth the effort to try. So in that instance the software should simply be aware that this value is not ironclad and may need to be treated with kid gloves.
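Coming back to the email feeds for a moment: the "trivial regex" I have in mind is nothing fancy. Here's a sketch of the idea (assumed for illustration, not the actual check we run):

using System.Text.RegularExpressions;

// A deliberately permissive sanity check: one "@", at least one "."
// in the domain, no whitespace. It won't catch every bad address,
// but it rejects the obvious garbage a store associate might key
// into the wrong field.
public static class EmailSanity
{
    private static readonly Regex LooksLikeEmail =
        new Regex(@"^[^@\s]+@[^@\s]+\.[^@\s]+$", RegexOptions.Compiled);

    public static bool IsPlausible(string candidate)
    {
        return !string.IsNullOrEmpty(candidate)
            && LooksLikeEmail.IsMatch(candidate);
    }
}

// EmailSanity.IsPlausible("jane.doe@example.com") -> true
// EmailSanity.IsPlausible("no email given")       -> false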

The bottom line is that as computers and software become more and more ubiquitous, we have to avoid creating any further pitfalls like this.

Sunday, March 20, 2011

An evaluation of Silverlight and XAML

Up till now, I've only used this blog to post about technical issues and development patterns, without really editorializing. This entry, however, is going to be a rather subjective evaluation of technology stacks.

One of the bigger paradigm shifts in .NET development that has occurred in the last few years is the introduction of Silverlight. I'm not going to go into the reasons that Silverlight was introduced or why it has seen a significant rate of adoption (partly because the reasons for the latter are incredibly varied depending on company and application). Hand in hand with Silverlight has come XAML. The two are not one and the same: Silverlight is an application framework that's very tightly coupled to Internet solutions, while XAML is a markup language that can be used in many different types of .NET projects. This post will discuss the pros and cons of both.

Silverlight is nice in that it further simplifies web application programming. Some websites are actually not suited to a model where the user can control page flow, and some user interfaces can only be accomplished in an HTML+JavaScript environment after much trickery and tweaking of third-party controls (e.g. jQuery UI). Silverlight allows you to deliver an application in a server-client and/or web-based manner without having to dance around the stateless, static-content-oriented HTTP protocol.

Silverlight's architecture has some good ideas behind it. Applets have been a source of (real and perceived) security concerns for a long time, so the Silverlight designers decided that Silverlight would be a subset, instead of a super-set, of the .NET framework. In other words, Silverlight uses a smaller selection of .NET's functionality. This deliberate scope restriction takes away the ability of Silverlight applets to do some dangerous things. The code to access local file systems, databases, and other critical resources is not even there. It enforces a safer application ecosystem by design instead of by potentially breakable (and in the end arbitrary) access switches. It also solves (or at least mitigates) another concern of applets. Applets have to have their own execution sandbox that the client has to download, in addition to downloading the applet itself. A smaller functionality set compiles to smaller binaries, which results in smaller download and install footprints. Even with today's fast connections, dual-core processors and hundred-gig hard drives, all resources are still finite, and so creating a paradigm that rewards lighter-weight deliverables is a very smart idea.

These benefits do not come without some significant hassles, though.

One of the biggest problems I've had with Silverlight is that the various versions are not compatible. You can install and run .NET 1.1, 2.0, and 4.0 all on the same machine without any problem - you can have solutions with projects in all different versions; you can have multiple IIS app pools on different framework versions all running simultaneously. Not only is this possible, but it's extremely easy - the setup and execution is all seamless. .NET is certainly not the only application framework or product that can support this kind of parallelism, but the point is that it does it well. Silverlight does not. I know it is possible to get Silverlight 3 and 4 running on the same machine - I've seen co-workers get it done - but it's a very difficult process, and even those co-workers throw up their hands in surrender when I ask them to help me re-create it. "I just un-installed and re-installed things in an apparently arbitrary order until it started working" was the answer I got from more than one of them.

On the surface this sounds like a nit-picky concern - "just use the same version of Silverlight for everything", right? But let's be realistic, it's never that simple. Various applications are developed under different constraints and requirements, and sometimes using only one version is simply not a realistic option. Some clients and environments require an older framework, and you can't change that. Plus, even if you do have the option to upgrade, development hours are limited and business users/clients aren't always willing to assume the upgrade risk. This is true for anything, not just Silverlight - there are still many .NET 2.0 applications and DLLs running in production environments that won't be upgraded for years to come for these very same reasons. Effective multi-version support is a feature I don't think enterprise software development tools can skimp on, and I feel that Silverlight not only skimped, but completely dropped the ball.

This versioning/parallelism flaw is major, but there are also some important minor annoyances. The subset mentality behind Silverlight's design is a good idea that was clumsily executed. Instead of being a 'true' subset of .NET, Silverlight is actually a parallel, minimized fork of .NET. It looks like .NET, it smells like .NET, but it doesn't taste like .NET. Visual Studio is always cranky when you try to add a reference to a Silverlight project from a non-Silverlight project - it'll do it, and the solution will compile, but VS will always mark it as a broken reference in Solution Explorer. Tools like ReSharper will even give you pre-compile errors in non-Silverlight code that references Silverlight code (as well as in the solution-wide analysis, which is much harder to ignore).

The path of work-arounds this particular flaw sent me down was a real comedy of errors. "Hmm, VS2010 + Resharper doesn't play nice with MSTest projects referencing Silverlight projects. Okay, let's create a Silverlight test project. Hmm, no such project type. Okay, we'll just create a Silverlight class library - the whole 'test project' definition is a somewhat arbitrary distinction anyway. Argh, okay, where can I download the MSTest for Silverlight framework? Geez that's hard to find. Okay got it! Compiles, woo-hoo! What? The MS Test runner can't run the MSTest for Silverlight tests (doesn't recognize the attributes)? Crap. Well, maybe there's a port of the test runner for Silverlight. Uh ... well there's a crappy browser-based version that's difficult to use and hard to see test failures in... Okay, screw it, let's just go with NUnit, I've never really liked MSTest anyway. NUnit port for Silverlight? Unofficially done, but existent and stable! Score one for open-source! Create Silverlight class library, add tests, reference NUnit for Silverlight DLLs ... compiles! Woo-hoo! RUNS! Woo-hoo!"

I share that bit partly to inject a little levity, partly to show that NUnit plays more nicely with Silverlight than MSTest does, and partly to reinforce my argument that Silverlight is a second-class citizen even in the Microsoft world. MSTest doesn't like it, Visual Studio doesn't like it, it doesn't even like itself. There are just so many little 'gotchas' in trying to use Silverlight - functionality that has to be re-created, or specialized ports of existing tools that you have to employ. The whole paradigm just seems to work against code re-use, which is something that makes me rather cranky.

XAML is the markup language that Silverlight uses to create its user-interface components. As mentioned previously, though, it is not tied to Silverlight. The Windows Presentation Foundation (WPF), which is intended for desktop applications, also uses XAML. In fact, Visual Studio 2010 itself is written in WPF, and therefore XAML. XAML is a big leap forward in terms of simplicity and portability of UI design - it takes everything that was great about HTML, CSS, and Web Forms, and combines them all into something even better. It further closes the gap between Windows Forms and Web Forms - these two technologies used extremely similar but inherently separate structures, but now everything is united under one roof. You can design for the desktop or the web (as long as that web is Silverlight) using one approach. XAML makes formatting pages/screens much, much more intuitive than setting up CSS stylesheets or creating application themes, and it makes the flexibility of HTML layouts available to desktop apps. Making a desktop application look pretty is no small feat regardless of technology, and WPF gives you a shorter path.

Unfortunately, XAML also takes everything that was bad about ASP.NET Data Grids and makes it the standard. The Model-View-ViewModel (MVVM) pattern that XAML is intended to support encourages injecting property, method, and even class names directly into the XAML markup - in other words, into uncompiled text. I shudder every time I see this kind of thing being done, whether it's in .config documents, vanilla XML, or in 'magic' strings inside C#/VB code. It works against refactoring. As far as I'm aware, no tool exists that will extend object refactors into the XAML. Given the fluid nature of the XAML data-binding model, that's a difficult task to hope to accomplish, especially since the source object doesn't have to be bound until run-time. Again, this may seem nit-picky, but I argue that it is not. Code is always changing, and needs to be flexible enough to accommodate rapid change; that need becomes more pressing each year. Members in XAML {Binding} or {StaticResource} statements are disconnected from the code in a way that discourages and complicates changes. What concerns me even more is how easily incomplete refactors go unnoticed: you can change something, the {Binding} member no longer matches, and the element simply stops showing up on the screen. No one would notice, even with the greatest QA department in the world, because no error is thrown when the binding fails. This kind of thing has bitten us more than a few times even with the stricter binding mechanism of ASP.NET Data Grids, sometimes in production code. I expect such occurrences will only increase in a world that relies more heavily on XAML-based implementations.

Now, the good news is that there are ways to get around this flaw in XAML. The traditional, explicit data-binding model of giving controls names and wiring them up in the code-behind can be employed. There are also code-only ways to create {Binding}s (no XAML at all) - they're not as pretty, but they work, and because they can eschew magic-string-based reflection, they are refactor-friendly. I hope to post some examples of my own here before too long.
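Until then, here's a rough sketch of the technique I mean. The BindTo extension method is hypothetical, but it shows the core idea: pull the property name out of an expression tree instead of hardcoding a string, so rename refactors carry through to the binding:

using System;
using System.Linq.Expressions;
using System.Windows;
using System.Windows.Data;

// Hypothetical helper: the property name comes from an expression tree,
// so a rename refactor updates the call site along with the view model.
public static class BindingHelper
{
    public static void BindTo<TSource, TProp>(
        this FrameworkElement element,
        DependencyProperty targetProperty,
        TSource source,
        Expression<Func<TSource, TProp>> propertySelector)
    {
        // Extract the property name from the expression instead of
        // writing it as a magic string.
        var member = (MemberExpression)propertySelector.Body;
        var binding = new Binding(member.Member.Name) { Source = source };
        element.SetBinding(targetProperty, binding);
    }
}

// Usage (assuming a TextBlock named titleText and a view model with a
// CustomerName property):
//   titleText.BindTo(TextBlock.TextProperty, viewModel, vm => vm.CustomerName);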

My current evaluation of Silverlight is that it has too many flaws to justify the somewhat dubious benefits it brings. In the end, traditional ASP.NET websites with a liberal amount of jQuery can provide all the same functionality without any of Silverlight's limitations or contrivances. And if the HTML 5 standard can ever see widespread adoption, Silverlight becomes even less attractive. I would urge .NET developers to discourage the use of Silverlight in order to shorten the time till its end-of-life date.

My current evaluation of XAML (and I reserve the right to modify this in the future) is that it is better than both Windows Forms and Web Forms. While it has some non-trivial pitfalls, they are worth the risk for the benefits gained. I would urge the use of WPF for desktop development, and if it ever becomes available for non-Silverlight ASP.NET use, then it is preferable to 'pure' HTML+CSS.

Comments, questions, and corrections are more than welcome!

Wednesday, September 15, 2010

C# Compiler Bug, or just something obscure and frustrating?

Earlier today, I started getting the strangest error message when trying to run my tests.  The message varied a little depending on what test runner was being used, but the gist of the error message was:

"Could not load file or assembly or one of its dependencies.  Signature missing argument. (Exception from HRESULT: 0x801312E3)"

The Visual Studio solution this started occurring in is pretty simple.  It contains four C# 4.0 class libraries and two .NET 4.0 test projects.  Both test projects use NUnit 2.5.7 and Rhino Mocks 3.6.  One test project was working just fine, but when I tried to run the second test project's tests, I would get this error.  It occurred whether I ran them through ReSharper or through the NUnit GUI.

After three hours of trial and error, I finally found that the error appeared to be with a reference to the project containing the domain objects.  To better illustrate, here is the project hierarchy:

Test Project
    Class Library A
        Domain Project
        Class Library B
            Domain Project
        Class Library C
    Class Library B
        Domain Project
    Class Library C

It's not the most straightforward tree, but not complex by any means, and compiles without error or warning.  Yet for some reason the test runners were very upset that Class Library B was referencing the Domain Project.  It appeared to be at least in some way related to Rhino Mocks - when I removed the lines of code in the Test Project that included calls to the Expect() method, but left the actual project hierarchy the same, the error went away.  (The tests of course were then useless, so this wasn't a viable alternative, but it got me closer to finding the problem.)

The tests themselves set up expectations on a method from Class Library B that has a class from Domain Project as its return type.  You may notice, however, that Test Project does not reference Domain Project.  Because this solution is relatively new, the tests so far just verify that the method is called with the right inputs; I haven't yet written tests to verify output.  In other words, I don't yet have to deal directly with the class from Domain Project, so I haven't referenced that project.

It eventually turned out that this was in fact the problem.  When I added a reference to Domain Project to Test Project, the errors went away and I was able to run my tests again.  It seems that Rhino Mocks requires direct references to all the types employed by a method signature, even if the C# compiler doesn't need them all to build the DLL.  That puts the error in a bizarre no-man's land - it's not a compile-time problem, but it manifests when the DLL is being loaded, which is before what we typically think of as run-time.

I'm not one who understands compiler design and implementation very well, so it's hard for me to say what the compiler is doing here.  From comparing disassemblies of the DLL compiled with and without the Domain Project reference, though, it's pretty clear that without the project reference, the compiler doesn't know the return type of the delegate passed into Expect(), and can't build the method signatures correctly.  (See the footnotes for more detail.)

Honestly, this feels like something the compiler should at least raise a warning about, or perhaps even fail to build on.  It results in a compiled DLL that can't be used; it feels like it allows us to create invalid binaries.  Maybe detecting this kind of problem would be so enormously complex that it's better to put the burden on the developer, but you'd think they'd have documented it better in that case.

So the final take-away is: when using generics and/or delegation, make sure all types implicitly referenced by your code are explicitly referenced in the project References.

This is a very remote and unusual case, but I could find absolutely nothing on Google or in any Microsoft documentation that gave any hints, and the error message itself was basically useless.  So I am putting this recap out on the Internet in the hopes that if anyone else ever runs into this, they'll have a little more insight than I did.

Footnotes

This is the C# code written:

[Test]
public void MyTest()
{
  _classFromProjectB
    .Expect(x => x.GetBatch(Arg<int>.Is.Anything, Arg<DateTime>.Is.Anything));

  // invoke the method being tested
}

Without the Domain Project reference, this is what the compiler produces:

[CompilerGenerated]
private static byte CS$<>9__CachedAnonymousMethodDelegate1;

[CompilerGenerated]
private static IClassFromProjectB <MyTest>b__0(void x)
{
  byte CS$1$0000 =
    (byte)x.GetBatch(Arg<int>.Is.Anything, Arg<DateTime>.Is.Anything);
  return (IClassFromProjectB) CS$1$0000;
}

[Test]
public void MyTest()
{
  if (CS$<>9__CachedAnonymousMethodDelegate1 == 0)
  {
    CS$<>9__CachedAnonymousMethodDelegate1 =
      (byte) new int(null, (IntPtr) <MyTest>b__0);
  }
  this._classFromProjectB.Expect<IClassFromProjectB, byte>(
     (Function<IClassFromProjectB, byte>) CS$<>9__CachedAnonymousMethodDelegate1);

  // invoke the method being tested
}

With the project reference, it produces:

[Test]
public void MyTest()
{
  this._classFromProjectB
    .Expect<IClassFromProjectB, List<DomainObject>>(
      delegate (IClassFromProjectB x) 
      {
        return x.GetBatch(Arg<int>.Is.Anything, Arg<DateTime>.Is.Anything);
      });

  // invoke the method being tested
}

Friday, April 23, 2010

My Anti-Spam: Protect Email Addresses From Spammers Without Inconveniencing Anybody

It goes without saying that spam is a big problem that affects a lot of people.  While spam blockers are increasingly effective - almost nothing gets past Gmail's - they aren't perfect, and really, they attack the symptom rather than the problem.  The best way to avoid spam is to keep email addresses out of the hands of spammers in the first place.  Web developers have a responsibility to make sure the websites they construct protect their customers and clients by taking steps to thwart spam-bot harvesting attempts.

From a technical standpoint, the most effective approach would be to never give the website user access to an email address at all.  Submissions to company email boxes would go through contact forms built by the web team, and all email communication would be handled server-side.  There are two drawbacks to this approach, however.  The first is that for small companies/websites, developing the code and infrastructure to handle this might be beyond their budget or skills.  Secondly, I'd be willing to bet a large number of people are like me, in that filling out those contact forms is usually an obnoxious and overly cumbersome process that we have no confidence even works in the first place.

For a small website it's tempting to just put the email addresses in "mailto:" links with the "@" and "." written out, with the expectation that the user will fix those "escapes" in their mail client.  However, "me (at) zero (dot) net" can be parsed by a spam-bot just as easily as by the human brain (more easily, in fact), so this approach offers zero protection.  Plus, it just confuses some people, even with the handy "hey, replace these with the right things" message that a lot of people put next to the link.  It violates the #1 rule of user interface design: "Don't Make Me Think."

A number of forum posts and advice pages I've come across recommend an approach whereby clicking an "email me" link calls a JavaScript function that redirects the browser to the appropriate "mailto:" URL.  The reasoning is that the email is protected because it is not directly exposed through the href attribute of the link.  If the JavaScript is placed in a separate *.js file, this is not a bad approach.  It removes the email from the HTML document itself, making it less likely that a spam-bot will find and parse it.

If, however, we assume that the spam-bot is sophisticated enough to download the *.js files included in the page, then this approach offers us no added protection, because once the bot has the *.js file, it will be able to parse the email out of it just as easily as it would from an HTML document.  I honestly don't know how vigilant your average spam-bot is in this regard, but even if they're not, this smells of "security by obscurity" - "the bot is less likely to go there, so we're secure."  "Hard to find" does not mean protected, so this approach raises a red flag to me.

In grappling with this issue, I eventually arrived at an approach that might be considered a hybrid of the 'do it server-side' and 'do it in JavaScript' approaches.  In this solution, the "email me" link in the HTML page has no real href (or rather an href="#"), but has an onclick event that calls a JavaScript function.  This function creates an XMLHttpRequest that calls out to a handler or service that returns the email address, which is then prepended with the "mailto:" protocol and navigated to by the browser.

So in many ways this is the "use JavaScript" approach with a twist, that twist being the server-side component that retrieves the actual email address.  This component could be an extremely simple web service, a script that writes out an XML or a simple text response, or anything really.  The exact implementation would depend on your server-side language of choice of course; in ASP.NET, I implemented this as a managed handler.

The advantage of this approach is that the email address is never written to a document - HTML, JavaScript, or otherwise - that can be found and parsed by a bot.  The email address is pulled across the wire in the JavaScript call, and thereby resides only in the browser's memory.  And because the component that delivers the email address is a server-side piece of code, you're able to add functionality to it that blocks requests from people/apps you don't like.  In my particular implementation, I check a white list of acceptable referrers, and do not return anything to a client that doesn't match.
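For the curious, a bare-bones sketch of such a handler is below.  The lookup table and referrer white list here are stand-ins (my real implementation isn't shown in this post), but the shape is the same:

using System.Collections.Generic;
using System.Web;

// Hypothetical ASP.NET managed handler for the "getEmail" endpoint.
// The key-to-address map and referrer white list are placeholders for
// whatever storage a real implementation would use.
public class GetEmailHandler : IHttpHandler
{
    private static readonly Dictionary<string, string> Addresses =
        new Dictionary<string, string> { { "help", "help@example.com" } };

    private static readonly HashSet<string> AllowedReferrerHosts =
        new HashSet<string> { "www.example.com" };

    public bool IsReusable { get { return true; } }

    public void ProcessRequest(HttpContext context)
    {
        // Referrer check: crude and spoofable, but it filters lazy bots.
        var referrer = context.Request.UrlReferrer;
        if (referrer == null || !AllowedReferrerHosts.Contains(referrer.Host))
        {
            context.Response.StatusCode = 403;
            return;
        }

        string key = context.Request.QueryString["key"];
        string address;
        if (key != null && Addresses.TryGetValue(key, out address))
        {
            context.Response.ContentType = "text/plain";
            context.Response.Write(address);
        }
        else
        {
            context.Response.StatusCode = 404;
        }
    }
}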

The web page code ends up looking something like this:

<html>
<head>
  <script type="text/javascript">
    function openEmailClient(addressKey) {
      // Synchronous request for simplicity; an async callback would be
      // kinder to the browser.
      var request = new XMLHttpRequest();
      request.open('POST', 'getEmail?key=' + addressKey, false);
      request.send('');
      window.location.href = 'mailto:' + request.responseText;
    }
  </script>
</head>
<body>
  <p>
    <a href="#" onclick="openEmailClient('help');">Click here to email someone</a>
  </p>
</body>
</html>

There are a million different ways the "getEmail" component might be implemented - there might be a database involved, you could get the email from a config file, or it might just be hardcoded.  Whatever the implementation, at its core it needs to parse the incoming request (a query string in this case), take the given key, look up the email address for that key wherever it might be stored, and write that out as the response.

I like this solution because it lets the user have control over the email sending process - they're able to actually send an email, no hand-waving involved - yet offers a measure of security without being particularly difficult to implement.  I had this whole thing done in only an hour, and that's including writing some tests around my "getEmail" component and doing manual testing of the actual page.  It's also very scalable - "getEmail" can be as simple or complex as you need.

This is not, of course, an infallible solution.  An email address can still be harvested if a spam-bot is set up to perform an actual browser click and then process the "mailto:" protocol itself.  However, that process is slow, and spammers need volume, so it's pretty unlikely that anyone out there is doing it.  And of course, this approach does nothing to stop a human being from clicking the link and taking your email - in fact, that's the point.  This approach protects against spammers, not stalkers.  If there are privacy issues with a specific person or group of people, then you would need to explore other avenues.  (Also, filtering by referrer is not the greatest option; bots can spoof referring agents when making HTTP requests.  IP filtering would be a much better option; I just didn't have access to it in the project I was working on.)

Comments, questions, and concerns are always welcome!

Monday, September 14, 2009

MS Test, Private Accessors, and Team Build

At my work, we use TeamBuild to run a continuous integration build on a couple of different solutions.  Recently, we had an issue where one test project (there are several in any given solution) would run its tests just fine in Visual Studio on the developers' machines, but when run under the command-line MSTest runner for TeamBuild, they would fail.  These tests used a private accessor, and the failures reported that the MSTest run-time could not properly load the class the private accessor was trying to wrap.  (Private accessors use reflection to wrap the classes they give you access to, and the reflection was producing a "Could not load file or assembly" error.)

I spent the better part of two days researching this issue.  I could never reproduce it on my workstation, and the particularly bizarre thing was that the tests would run fine through the Visual Studio instance installed on the TeamBuild machine.  Something was off with the way MSTest was doing things, it seemed.  I never found anything on the Internet that really described this issue, though I did find some similar issues relating to TeamBuild and reflection.  I tried a couple of their solutions, as well as tweaking config and rewriting the tests a bit.  Nothing worked.

Eventually I just threw in the towel and wrote my own private accessor class, which finally resolved the issue.  Here's what I did:

The class we were trying to test was called PageTitleModule, and it had a private method that needed to be tested, one which depended on the state of a private variable.  Inside the test class, there was a place where we built the PageTitleModule_Accessor and set up the private variable:

private static PageTitleModule_Accessor GetPageTitleModule(PageTitleInfo cmsContent)
{
   PageTitleModule_Accessor module = new PageTitleModule_Accessor();
   module._cmsContent = cmsContent;
   return module;
}

Then, in the actual tests, we would use the accessor class to invoke the private method:

PageTitleModule_Accessor module = GetPageTitleModule(cmsContent);
string reWrittenResponse = module.RewriteResponse(originalResponse);

Just for reference, we use this same pattern, including the built-in private accessors, in many places in our test code, and none of them has ever given us any grief.  So I'm sure there is some setting that was wrong or some compilation option that needed to be tweaked.  But after two days, the hack-y workaround I'm about to show you just made more sense; we needed to get our tests running again so we could get back to actual development.

Before, when we had the problem, we were using the private accessor functionality built into Visual Studio.  (In other words, we went to PageTitleModule, right-clicked on the class definition, told it to "Create Private Accessor," and selected the appropriate test project, thus generating the PageTitleModule_Accessor class.)  When I finally gave up on that working, I wrote a PageTitleModule_Accessor of my own that exposed the private member and private method:

internal class PageTitleModule_Accessor
{
  // PrivateObject (from the MSTest framework assembly) does the actual
  // reflection against a real PageTitleModule instance.
  private readonly PrivateObject _moduleImpl;

  public PageTitleModule_Accessor()
  {
    _moduleImpl = new PrivateObject(typeof(PageTitleModule), null);
  }

  public PageTitleInfo _cmsContent
  {
    get
    {
      return _moduleImpl.GetFieldOrProperty("_cmsContent") as PageTitleInfo;
    }
    set
    {
      _moduleImpl.SetFieldOrProperty("_cmsContent", value);
    }
  }

  public string RewriteResponse(string response)
  {
    return _moduleImpl.Invoke("RewriteResponse", new object[] {response}) as string;
  }
}

The PrivateObject type that I'm using is part of Microsoft's unit testing framework (Microsoft.VisualStudio.QualityTools.UnitTestFramework.dll).  While researching why the default accessor generation was not working, I disassembled the generated test assembly (using Red Gate's .NET Reflector) to take a look at what was going on and where the error was coming from.  I didn't find what I was looking for, but I saw that the auto-generated private accessors employ this PrivateObject type to manage the reflection of the type being tested, so I elected to re-use it.

(The "generated test assembly" mentioned above refers to the fact that when you use the built-in private accessor functionality, when you build, the C# compiler creates a DLL named [AssemblyUnderTest]_Accessor.dll, and it is in this binary that the [ClassUnderTest]_Accessor type lives.)

So, for those of you who know a lot about .NET private accessors, I'd love to have you weigh in on this.  And for those of you who happen to run into this problem, hopefully this gives you enough information to work through it.