A TileListView for Universal Windows

I just released a small library for arranging visual items in a tiled fashion, much like the familiar “start screen” of Windows 8/10, but with a fixed-position configuration.

The source code is on GitHub.

The problem with ordinary tiled-layout views is that the tiles may be rearranged, so a user might see a different result depending on the actual viewport size. I needed something “fixed” instead: the user defines the layout by editing the grid via drag-and-drop, and that layout then stays the same on every screen. However, the editor should allow more than a single arrangement, for those displays that can’t fit the desired layout.

The demo follows the MVVM pattern, and allows you to define several kinds of blocks: fixed, full-sizable, shrinkable, or expandable, in one or both directions.

The library works, but is still a beta (several minor problems and refinements remain). It’s a reasonable prototype to get your hands on, though.

As mentioned, the project targets the Universal Windows Platform, so only Windows 10 is supported (phones and IoT as well). You must also use Visual Studio 2015 on Windows 10 for development.

2016-03-05 (1)
Editing mode (drag-and-drop to add/move/remove tiles)
2016-03-05 (2)
Block sizes can be modified at any time
2016-03-05 (3)
The view mode: like any normal hub page.

Have fun!

 

HD44780 LCD Module driver for Windows 10 IoT

This is my first post about Windows 10 IoT and small (embedded) computers, after some past experience with the .Net Micro Framework (Netduino, essentially).

I must say that this Windows 10 is a small masterpiece at last, or “everything you ever asked for and nobody ever delivered”. Programming it is easy yet very flexible, although many bricks are still missing (under development).

The source code is on GitHub.

Here is a very simple, somewhat vintage experiment: driving a common alphanumeric LCD module (HD44780-based) with a Raspberry Pi 2 and Windows 10 IoT.

WP_001158

The project isn’t anything new, but rather a kind of “refresh” of an older project of mine based on the Netduino board (see below). Some of you may wonder why I’m still using such an old LCD module when several graphic displays are available on the market. Well, my first answer is: “why not?”. As said, this project has no specific purpose (although many of you may dig into your “miracles box” and pull out an “almost useless” LCD module). The aim is to test how hard it is to drive something well known.

 

Some credits…

I can’t avoid mentioning Laurent Ellerbach, who gave me some inspiration (and motivation) to pursue these fun hacking activities.

 

The hardware.

All you need is very easy to find:

  • Raspberry Pi 2 (with Windows 10 IoT installed)
  • any suitable HD44780 LCD display module (mine is a 4×20)
  • 74HC595 shift register
  • 220 Ohm resistor (only if you need the backlight)
  • 10 kOhm trimpot (22k or 47k work just as well)

For the sake of simplicity, I won’t detail how to set up the Raspberry Pi: there are many articles that describe it very well. I followed the Microsoft site and everything went fine, except for the suggested minimum SD card size: I found that 8 GB doesn’t work. Just go with 16 GB.

RPI2-HD44780_schem

RPI2-HD44780_bb

 

The software.

I wanted to publish the project keeping the sources as simple as possible. A similar application wouldn’t make sense on complex hardware (full-featured TFT displays and HDMI monitors perform way better than this module). The general guideline is: if you find it convenient to connect an LCD module to a Raspberry Pi, this should get it working in minutes.

Since the LCD module’s capabilities are very limited, I embraced the idea of exposing the API as if it were a kind of “Console”: just a “write” and little more, with a background task managing the physical transfer by itself.

The project contains two different demos:

  1. a basic one, where the content of some strings is reflected on the display;
  2. a slightly more complex demo, which fetches a bunch of RSS news items from the BBC UK channel and rotates the titles on the screen.

 

Basic demo.

    class BasicDemo
    {

        public async Task RunAsync()
        {
            //write a static string
            DriverHD44780.Instance.DrawString(
                "This is a basic demo",
                new Point(0, 0)
                );

            int n = 0;
            while (true)
            {
                //display a simple counter
                DriverHD44780.Instance.DrawString(
                    $"Counting...{n}",
                    new Point(0, 1)
                    );

                //display current time and date
                var now = DateTime.Now;
                DriverHD44780.Instance.DrawString(
                    now.ToString("T") + "   ",
                    new Point(0, 2)
                    );

                DriverHD44780.Instance.DrawString(
                    now.ToString("M") + "   ",
                    new Point(0, 3)
                    );

                n++;
                await Task.Delay(1000);
            }
        }

    }

WP_001159

 

RSS demo.

    class RssDemo
    {

        public async Task RunAsync()
        {
            //write a static string
            DriverHD44780.Instance.DrawString(
                "Getting RSS...",
                new Point(0, 0)
                );

            //get the latest news using a normal HTTP GET request
            var http = new HttpClient();
            var endpoint = new Uri("http://feeds.bbci.co.uk/news/rss.xml");

            var srss = await http.GetStringAsync(endpoint);
            var xrss = XDocument.Parse(srss);

            //extract the news items, and sort them by date-time descending
            var xnews = xrss.Root
                .Element("channel")
                .Elements("item")
                .OrderByDescending(_ => (DateTime)_.Element("pubDate"))
                .ToList();

            int n = 0;
            while (true)
            {
                /**
                * Loop the news as one per page
                **/

                //the first row is for the publication date-time
                var dt = (DateTime)xnews[n].Element("pubDate");
                DriverHD44780.Instance.DrawString(
                    dt.ToString("g"),
                    new Point(0, 0)
                    );

                //the three other rows are for the title
                var title = (string)xnews[n].Element("title");
                title = title + new string(' ', 60);

                for (int row = 0; row < 3; row++)
                {
                    DriverHD44780.Instance.DrawString(
                        title.Substring(row * 20, 20),
                        new Point(0, row + 1)
                        );
                }

                //wait some seconds before flipping page
                n = (n + 1) % xnews.Count;
                await Task.Delay(3000);
            }
        }

    }

WP_001165

 

Performance.

You may wonder how well the driver performs. Well, there are two stages involved in the displaying process:

  1. the calls from the main application to the driver;
  2. the physical data transfer.

Any invocation by the main app involves only the cache: no matter how many calls are made, everything happens in memory. For this reason, any manipulation has a negligible performance impact. However, at a fixed rate (typically 200 ms) the cache is dumped to the LCD screen: that is the physical data transfer through the SPI.
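The mechanism can be sketched in a few lines (plain JavaScript here for brevity — the real driver is C#, and the class and method names below are invented for illustration):

```javascript
//illustrative sketch only: the actual driver is C# and its names differ
class LcdCache {
    constructor(width, height) {
        this.width = width;
        //the whole screen lives in memory as an array of text rows
        this.rows = Array.from({ length: height }, () => " ".repeat(width));
    }

    //any number of calls just touch the in-memory buffer: virtually free
    drawString(text, x, y) {
        var row = this.rows[y];
        this.rows[y] = (row.slice(0, x) + text + row.slice(x + text.length))
            .slice(0, this.width);
    }

    //invoked at a fixed rate (e.g. every 200 ms): the only physical transfer
    flush(transferRow) {
        this.rows.forEach(function (row, y) { transferRow(row, y); });
    }
}
```

A periodic timer calls `flush`, passing the function that actually clocks the row bytes out through the SPI; everything else never touches the hardware.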

How long does the entire screen dump via SPI take?

The circuit is very simple, so there’s no way to make the transfer faster than the machine’s execution speed. Even adjusting the SPI clock rate, the resulting duration doesn’t change notably. Bear in mind that too high an SPI clock rate could cause signal degradation due to the wire length. I used a safe 1 MHz clock, and as the screenshot below shows, the total transfer takes less than 30 ms.

UNIT0000
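As a rough cross-check of that figure, here is a quick estimate; the per-character byte cost is an assumption about how the enable line gets pulsed through the 74HC595, not a value taken from the driver:

```javascript
//back-of-the-envelope check of the dump duration; the per-nibble byte
//count below is an assumption about the 74HC595 wiring, not a measurement
var chars = 4 * 20;                 //full screen of a 4x20 module
var bytesPerChar = 2 * 3;           //two 4-bit nibbles, ~3 SPI bytes each
var clockHz = 1000000;              //the 1 MHz SPI clock used here
var rawMs = chars * bytesPerChar * 8 / clockHz * 1000;
//rawMs comes out to roughly 4 ms: pure clocking is a small fraction of
//the ~30 ms measured, so per-transaction overhead dominates, and raising
//the SPI clock cannot shrink the dump time much
```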

If you are interested in a faster way to dump the data via SPI, I suggest reading the following section, which requires a decent knowledge of electronics.

 

The old “LCD Boost” library.

The original project was tailored for the .Net Micro Framework (Netduino), and many things were optimized for speed. Moreover, the NetMF had some performance problems, mostly due to its super-squeezed CLR, so many tasks had to be solved as if targeting a low-level device.

Here are some links to my original articles:

Very fast SPI-to-parallel interface for Netduino

LcdBoost library for Netduino

The GDI library for Netduino targets a LCD module.

A playable Invaders-like game with a Netduino Plus 2.

 

No, I don’t like JavaScript. Definitely.

Here is just another of the misleading faces of JavaScript (or, better, ECMAScript).
Ten years ago or so I probably had a better opinion of this language, but I really cannot avoid comparing it with C#. The strong yet clear rules of C# rarely lead to side effects like this one.

Function: gimme your reference!

In C#, a function can be referred to through a delegate, and its “pointer” is uniquely identified in the program. The same holds in C/C++: you can refer to a certain function by a simple pointer.
In JavaScript it isn’t so simple, because functions are instantiated on every scope creation. This leads to problems when you want to subscribe and unsubscribe callbacks, events, and the like.
Honestly, I was unaware of this problem, and I bumped into it using the Socket.IO library. I simply wanted to unsubscribe the event callbacks from a socket, but the handler functions were nested inside another function. The result is plain simple: the unsubscription failed.

A neat example…

Consider this minimal pattern for subscribing/unsubscribing a single callback:

var E = (function () {
    var a = [];

    return {
        add: function (fn) {
            a.push(fn);
        },
        remove: function (fn) {
            for (var i = 0; i < a.length; i++) {
                if (a[i] === fn) {
                    a.splice(i, 1);
                    break;
                }
            }
        },
        len: function () {
            return a.length;
        }
    }
})();

Now let’s consider this trivial snippet, implementing a basic test for a callback:

function xyz() {

    function f() {
        //
    }

    return {
        add: function () {
            E.add(f);
        },
        remove: function () {
            E.remove(f);
        }
    }
}

var b = new xyz();
b.add();
b.remove();
alert(E.len());     //yields 0...Correct!

The result is zero, as expected.

To understand where the problem is, just add a nested function that leverages the same un/sub pattern:

function xyz() {

    function f(enable) {
        function handler() {
            //
        }

        if (enable) {
            E.add(handler);
        } else {
            E.remove(handler);
        }
    }

    return {
        f: f,
        add: function () {
            E.add(f);
        },
        remove: function () {
            E.remove(f);
        }
    }
}

Once the proper test runs, the result is NOT the expected one:

var b = new xyz();
b.f(true);
b.f(false);
alert(E.len());    //yields 1...WRONG!

As said, the reason is that the “handler” function is instantiated (as a new object) every time the containing function “f” is called. Thus, the “handler” reference of the first call won’t be the same as that of the second call.
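Once the cause is clear, the fix is straightforward: create the handler once per scope that owns the subscription, so that both the add and the remove see the same reference. Here is a rework of the snippet (self-contained, with a compact version of the “E” module):

```javascript
//compact version of the subscribe/unsubscribe module
var E = (function () {
    var a = [];
    return {
        add: function (fn) { a.push(fn); },
        remove: function (fn) {
            var i = a.indexOf(fn);
            if (i >= 0) a.splice(i, 1);
        },
        len: function () { return a.length; }
    };
})();

function xyz() {
    //created once per xyz() call: add and remove always see the same reference
    function handler() {
        //
    }

    return {
        f: function (enable) {
            if (enable) {
                E.add(handler);
            } else {
                E.remove(handler);
            }
        }
    };
}

var b = new xyz();
b.f(true);
b.f(false);
//E.len() is now 0: the subscription and the unsubscription matched
```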

Conclusions.

You will say: “that’s fine: it’s all about the language!”. Then I’ll answer: “Okay: may I say that I don’t like it?”.

AngularJS experimental page routing/templating

This is my very first post about “pure web” tech, and it’s also very short. I began to deal with these things some months ago, but I feel there’s a long road to walk.
Here is an attempt to rethink the Single-Page Application (a.k.a. SPA) using AngularJS, toward a more abstract way of templating. The reasons behind such a solution are pretty hard to grasp from this post alone, but shortly I’ll publish a much larger, concrete framework for telemetry applications.
As a hint, think of the ability to compose a page from a series of components, and to store/retrieve the layout on any persistent medium (e.g. file, database, etc.).

In my view, AngularJS is among the web frameworks closest to desktop WPF, which is (at least in my mind) the best framework for LOB apps.
However, I noticed that the ability to reuse components, abstract views away, and so on, is still not standardized, nor much used. That’s why I threw myself into this challenge, and the result isn’t bad at all (for a web-dev noob like me).
A short video should explain the result far better than a thousand words:

Follow the project on the Github repository:
https://github.com/highfield/ng-route1
Stay tuned for hotter articles in the near future!

Azure Veneziano – Part 2

This is the second part of my Internet-of-Things telemetry project based on Azure.

The source of the project is hosted in the azure-veneziano GitHub repository.

Here are the other parts:

In this article I’ll show you how to set up some components of Windows Azure, in order to make the system work.

I won’t cover details such as “how to subscribe to the Azure platform” or similar. Please consider the several posts around the web that describe very well how to take the first steps, as well as the benefits coming with the subscription.
A good place to start is here.

The system structure more in depth.

In the previous article there is almost no description of the system structure, mainly because that post focuses on the device. However, since here the key role is played by Azure, it’s better to dig a bit deeper into what the target is.

structure

On the left there are a couple of Netduinos, as a symbol of a generic small device which interfaces with sensors, collects some data, then sends it to the Azure platform. This section is covered in the first part of the series.
The JSON-over-HTTP data sent by any device is managed by a “custom API” script within Azure’s “Mobile Services” section: basically a Node.js JavaScript function which is called on every device HTTP request.
This script has two major tasks:

  1. parse the incoming JSON data, then store it into a SQL database;
  2. “wake up” the webjob, because new data should be processed.

The database is a normal Azure SQL instance, where just a few simple tables are necessary for this project. One table holds the current variables’ state, that is, every single datum instance coming from any device. Another depicts the “history” of the incoming data, that is, the evolution of the state over time: very useful for analysis.

Finally, there is the “webjob”.
A webjob can be seen as a service or, more likely, as a console application. You can put (almost) anything into this .Net app, and it can be started anytime. What I need is an endlessly running app but, in a “free” context, such a service is shut down after 20-30 minutes. That’s why I used a trick to “wake it up” with a kind of trigger from the script: whenever new data comes in the app is started, while it can stay stopped whenever nothing happens.
The webjob’s task is just sending a mail when a certain condition is met. In this article I won’t show anything more sophisticated than a very short plain-text mail. The primary goal here is setting up the Azure platform and testing the infrastructure: in the next articles we’ll add several pieces to make this project really nice.

Looks nice, but…how much does all that cost?

Just two words about the cost of the Azure platform.
Entering the Azure portal is much like walking through Venezia: full of intriguing corners, each one different from the others, and always full of surprises. The platform is really huge, but surprisingly simple to use.

billing

I say I was surprised, because you’ll also be surprised to realize that many things come for FREE. Unless you want to scale up (and get more professional), your bill for this project will stick to ZERO.

Setup the mobile service.

Mobile Services are the most important components for interfacing any mobile device. The “mobile” term is rather oriented to devices like phones or small boards, but the services can be accessed even from a normal PC.
The first thing to do is create your own mobile service: this task couldn’t be easier…

azure-mobile-create-service-1

Type in your favorite service name, which has to be a unique identifier worldwide (as far as I know).
About the database, make sure to pick “Create a free 20 MB SQL database” (if you don’t have one yet), and the wizard will create it automatically for you.
Two more parameters: select the region closest to you to host the service, then choose “JavaScript” as the backend language for the management.

azure-mobile-create-service-2

If you are creating a new database, you’ll face a second page in the wizard: you simply have to specify the credentials to use to gain access to the database.

azure-mobile-create-service-3

That’s all: within a few minutes your brand new mobile service should be ready. The sample view below gives an overview of the service.

Please notice the links where you can download sample apps/templates already configured with your own parameters!…Dumb-proof!

azure-mobile-overview

Also have a look at the bottom toolbar, where a “manage keys” button pops up some strange strings. Those are the strings you should specify in the Netduino (and any other device) in order to gain access to the Azure Mobile Service.

        public static void Main()
        {
            //instantiate a new Azure Mobile Service client
            var ms = new MobileServiceClient(
                "(your service name)",
                applicationId: "(your application-id)",
                masterKey: "(your master key)"
                );

The next task is creating the database tables.
We need just three tables, and (even more surprising) we don’t need to specify any column schema: it will be created automatically from the JSON structure defined in the Netduino device software. This feature is on by default, but you can disable it in the “configure” section with the “dynamic schema” switch.

The tables and their purpose:

  • tdevices — each record is paired to a remote device, and holds its identification and status data.
  • tsensors — each record is paired to a “variable” defined by a certain device somewhere, and holds its identification and status data.
  • thistory — each record stores the value of a certain variable at the time it arrives on the server, or marks an event that occurred. Think of the table as a queue, where you can query the records to depict the evolution of a certain variable’s value over time.

azure-mobile-tables

Press “create” and enter “tsensors”, then make sure “enable soft delete” is checked and confirm. Repeat the same for the “tdevices” and “thistory” tables, and your task is over.
The “soft delete” feature marks a record as “deleted” and keeps it, instead of removing it from the table. You should enable this feature when dealing with concurrency; I personally find it useful even for simple debugging. The catch is that cleaning up the obsolete records is up to you.

azure-mobile-create-table

The last section to set up within the Mobile Service context is the “Custom API“, that is, the code to run upon any incoming data request.
Simply select the “API” section, then press “create”.

azure-mobile-api-overview

The wizard will ask you the name of the new API, as well as the permissions granted to access it.
Back in the Netduino code, the API’s name must be specified on every request.

                    //execute the query against the server
                    ms.ApiOperation(
                        "myapi",
                        MobileServiceClient.Create,
                        jobj
                        );

Technically speaking, the name is the very last segment of the URI path which maps the request against Azure.

http://{your-service-name}.azure-mobile.net/api/{your-api-name}

azure-mobile-api-create

At this point you can begin to type the script in.

The device-side entry-point for the data.

The handler for the incoming requests is just a JavaScript function — better, one function per HTTP method. However, since the primary goal is pushing data from a device to the server, the method used is always POST (CREATE, in REST terminology).
The JavaScript environment is Node.js, which is very easy and compact to use. I’m NOT a JavaScript addict but, honestly, coding what I wanted didn’t take much effort.
The “script” section of the API lets you edit your script as if you were in Visual Studio. The only missing piece is IntelliSense, but for JavaScript I don’t actually need it.

azure-mobile-api-script

The script we need is structured as follows:

exports.post = function(request, response) {

    // section: wake-up the webjob
        
    // section: update/insert the device's info into the "tdevices" table

    // section: update/insert the device's data into the "tsensors" table

    // section: append the device's data to the "thistory" table

};
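The script reads fields such as `incomingData.devId` and `incomingSensorArray[i].name`; piecing those together, the incoming message presumably looks something like this (a reconstruction for illustration only — the actual payload is defined in the first part of the series and may differ):

```javascript
//reconstructed sample payload: field names taken from the script, values invented
var incomingData = {
    devId: "netduino-01",       //device identifier, the key for "tdevices"
    ver: 3,                     //configuration version, compared on every update
    sensors: [                  //one entry per variable exposed by the device
        { name: "temperature", value: 23.5 },
        { name: "humidity", value: 48.0 }
    ]
};

//in the custom API the object would come from the POST body (request.body),
//and the sensor array is aliased for convenience
var incomingSensorArray = incomingData.sensors;
```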

Let’s face the database update first.
For the “tdevices” table, the script is as follows:

    var devicesTable = request.service.tables.getTable("tdevices");
    var sensorsTable = request.service.tables.getTable("tsensors");
    var historyTable = request.service.tables.getTable("thistory");
        
    //update/insert the device's info record
    devicesTable
    .where({
        devId: incomingData.devId
    }).read({
        success: function(results) {
            var deviceData = {
                devId: incomingData.devId,
                version: incomingData.ver
            };
            
            var flush = false;            
            if (results.length > 0) {
                //We found a record, update some values in it
                flush = (results[0].version != deviceData.version);
                results[0].devId = deviceData.devId;
                results[0].version = deviceData.version;
                devicesTable.update(results[0]);
                
                //Respond to the client
                console.log("Updated device", deviceData);
                request.respond(200, deviceData);
            } else {
                //Perform the insert in the DB
                devicesTable.insert(deviceData);

                //Reply with 201 (created) and the updated item
                console.log("Added new device", deviceData);
                request.respond(201, deviceData);
            }
            
            manageSensorTable(flush);
        }
    });    

As the data comes in, the first thing is to look for the corresponding existing entry in the “tdevices” table, using the device’s identification as key. If the record exists it is updated, otherwise a new entry is added.
Upon an update, the logic compares the incoming “configuration” version with the value stored in the table. If they don’t match, the “flush” flag is set, which tells the next step to remove all the obsolete “sensor” entries.

When the operation on the “tdevices” table is over, the one on the “tsensors” and “thistory” tables begins.
As in the previous snippet, first there is a selection of the “tsensors” records owned by the current device identifier. Then, if the “flush” flag is set, all those records are (marked as) deleted.
Finally, the data contained in the incoming message is scanned one item at a time. For each variable, the script looks for the corresponding entry in the recordset, then either updates it or, if it wasn’t found, adds a new record.
Every item present in the message is also appended “as-is” to the “thistory” table.

    //update/insert the device's data record
    function manageSensorTable(flush) {
        sensorsTable
        .where({
            devId: incomingData.devId
        }).read({
            success: function(results) {
                if (flush) {
                    //flush any existent sensor record related to the involved device
                    console.log("Flush sensors data");
                    for (var i = 0; i < results.length; i++) {
                        sensorsTable.del(results[i].id);
                    }
                }
                
                for (var i = 0; i < incomingSensorArray.length; i++) {
                    var sensorData = {
                        devId: incomingData.devId,
                        name: incomingSensorArray[i].name, 
                        value: incomingSensorArray[i].value
                    };
                    
                    //find the index of the related sensor
                    var index = flush ? 0 : results.length;
                    while (--index >= 0) {
                        if (results[index].name == sensorData.name)
                            break;
                    }
                    
                    if (index >= 0) {
                        //record found, so update some values in it
                        results[index].devId = sensorData.devId;
                        results[index].name = sensorData.name;
                        results[index].value = sensorData.value;
                        sensorsTable.update(results[index]);
                    } else {
                        //Perform the insert in the DB
                        sensorsTable.insert(sensorData);
                    }
                    
                    //insert the record in the historian table
                    historyTable.insert(sensorData, {
                        success: function() {
                            //do nothing
                        }
                    });
                    
                }
            }
        });
    }

The last but not least piece of script wakes up the webjob.
Please note that my usage of the webjob is rather uncommon, but I think it’s the best compromise between the Azure “free” context limitations and the desired service availability. The result is a webjob configured as “running continuously”, which is nevertheless shut down by the platform when there’s no external “stimulation”. The trick is to wake the webjob up only when necessary, by issuing a fake call to its site.
Have a look at my question on StackOverflow on how to solve the problem.

    {
        //access the webjob's API so that it'll wake up
        var wakeup_request = require('request');
        var username = "azureveneziano\$azureveneziano";
        var password = "(web-site-password)";
    
        var uri = 
            "http://" + 
            username + ":" + password + "@" +
            "azureveneziano.scm.azurewebsites.net/api/jobs/";
            
        wakeup_request(uri, function(error, response, body) {
            if (error) {
                console.error("scm failed:", error);
            }
        });
    }

In the end, it’s a trivial dummy read of the webjob deployment site: this read wakes up the webjob, or keeps it awake.

Please notice that all the “console” calls are useful only during the debugging stage: you should remove them once the system is stable enough.

If everything goes well, the Netduino should send some data to the Azure API, and the database should start filling.
Here is an example of what the “tsensors” table may contain:

azure-mobile-table-data

Creating and deploying the webjob.

To understand what a “webjob” is, I suggest reading Scott Hanselman’s article.
Since a webjob is part of a web site, you must create one first. Azure offers up to 10 web sites for free, so that isn’t a problem. At the moment I don’t use any “real” web site (meaning pages), but I need the registration.
Registration, deployment, and related tasks can be easily managed from within Visual Studio.

When I started the project I used Visual Studio Express 2013 for Web, and the Update 4 CTP allowed such management. Since a few days ago, there’s another great alternative: Visual Studio 2013 Community, which ships with Update 4 and also offers a lot of useful features.
The following snapshots were taken on the Express release, but things should look similar in other editions.

Start Visual Studio, create a “Microsoft Azure Webjob” project, and give it a proper name.

webjob-wizard

As you may notice, the solution composition looks almost the same as a normal Console application.
In order to add the proper references, just choose “Manage NuGet packages” from the project’s context menu.

webjob-nuget-menu

First install the base “Microsoft.Azure.Webjobs” package as follows:

webjob-nuget-webjobs

Then install the “Microsoft Webjobs Publish” package:

webjob-nuget-publish

Finally install the “Windows Azure Storage” package:

webjob-nuget-storage

Since this webjob is meant to “run continuously” but will actually be shut down often, the very first thing to add to the code is a procedure for detecting the shutdown request, so that the application can exit gracefully.
This piece of code isn’t mine, so I invite you to read the original article by Amit Apple about the trick.

            #region Graceful-shutdown watcher

            /**
             * Implement the code for a graceful shutdown
             * http://blog.amitapple.com/post/2014/05/webjobs-graceful-shutdown/
             **/

            //get the shutdown file path from the environment
            string shutdownFile = Environment.GetEnvironmentVariable("WEBJOBS_SHUTDOWN_FILE");

            //set the flag to alert the incoming shutdown
            bool isRunning = true;

            // Setup a file system watcher on that file's directory to know when the file is created
            var fileSystemWatcher = new FileSystemWatcher(
                Path.GetDirectoryName(shutdownFile)
                );

            //define the FileSystemWatcher callback
            FileSystemEventHandler fswHandler = (_s, _e) =>
            {
                if (_e.FullPath.IndexOf(Path.GetFileName(shutdownFile), StringComparison.OrdinalIgnoreCase) >= 0)
                {
                    // Found the file mark this WebJob as finished
                    isRunning = false;
                }
            };

            fileSystemWatcher.Created += fswHandler;
            fileSystemWatcher.Changed += fswHandler;
            fileSystemWatcher.NotifyFilter = NotifyFilters.CreationTime | NotifyFilters.FileName | NotifyFilters.LastWrite;
            fileSystemWatcher.IncludeSubdirectories = false;
            fileSystemWatcher.EnableRaisingEvents = true;

            Console.WriteLine("Running and waiting " + DateTime.UtcNow);

            #endregion

At this point you might add some blocking code and test what happens. As in Amit’s article:

       // Run as long as we didn't get a shutdown notification
        while (isRunning)
        {
            // Here is my actual work
            Console.WriteLine("Running and waiting " + DateTime.UtcNow);
            Thread.Sleep(1000);
        }

        Console.WriteLine("Stopped " + DateTime.UtcNow);

Before deploying the webjob to Azure, we should check the “webjob-publish-settings” file, which is part of the project. Basically, we should adjust the file to instruct the server to run the webjob continuously. Here is an example:

{
  "$schema": "http://schemastore.org/schemas/json/webjob-publish-settings.json",
  "webJobName": "AzureVenezianoWebJob",
  "startTime": null,
  "endTime": null,
  "jobRecurrenceFrequency": null,
  "interval": null,
  "runMode": "Continuous"
}

Open the project’s context menu and choose the “Publish as Azure Webjob” item. A wizard like this one will open:

webjob-publish-0

We should specify the target web-site from this dialog:

website-select

If the web site doesn’t exist yet, we can create a new one:

website-create

When everything has been collected for the deployment, we can validate the connection, then proceed with the publication.

webjob-publish-2

Once the webjob has been published, it should be scheduled to run immediately. To test whether the shutdown happens gracefully, simply leave the system as is, and go grab a cup of coffee. After 20-30 minutes, you can check what really happened in the webjob’s log.

Please note that it’s important to navigate away from any webjob status page of the Azure portal during the test: keeping such a page open holds the service alive, preventing the real shutdown.

Enter the “websites” category, then the “Webjobs” section:

webjob-status

At this point you should see the status as “running”, or about to change to it. Click the link below the “LOGS” column, and a special page will open:

webjob-log

This mini-portal is a really nice diagnostic tool for the webjobs. You should be able to trace both explicit “Console” logs and exceptions. To verify the proper flow of the webjob, check the timestamps, as well as messages such as:

[11/03/2014 07:03:53 > bb4862: INFO] Stopped 11/3/2014 7:03:53 AM

The mail alert application.

Most of the material relevant to this article has been shown. However, I would just like to close this part with a “concrete” sign of what the project should do. In the next article I’ll focus almost entirely on the webjob code, at which point the system can be considered finished (many refinements will follow, though).

As described above, as soon as a message from any device hits the API, the webjob is woken up (in case it is stopped), and the data are pushed into the database.
The webjob task should pick that data up and detect what has changed. However, the API and the webjob run almost asynchronously with respect to each other, so it’s better to leave the webjob running and polling for further “news”. On the other hand, when something is changed by a remote endpoint, it’s likely that something else will change shortly after. This is another reason for leaving the webjob running until the platform shuts it down.

I don’t want to dig into details here: that will be the subject of the next article. The only important thing is how the data are read periodically (about every 10 seconds here) from the server. The data read are copied into a local in-memory model, for ease of interaction with the language.
At the end of each poll, the variables which have changed since the previous poll are marked with the corresponding flag. Immediately after, the program flow yields to the execution of a custom logic, that is, whatever the system should do upon a certain status.

        private const string connectionString =
            "Server=tcp:(your-sqlserver-name).database.windows.net,1433;" +
            "Database=highfieldtales;" +
            "User ID=(your-sqlserver-username);" +
            "Password=(your-sqlserver-password);" +
            "Trusted_Connection=False;" +
            "Encrypt=True;" +
            "Connection Timeout=30;";

        static void Main()
        {
            // ...

            //create and open the connection in a using block. This 
            //ensures that all resources will be closed and disposed 
            //when the code exits. 
            using (var connection = new SqlConnection(connectionString))
            {
                //create the Command object
                var command = new SqlCommand(
                    "SELECT * FROM highfieldtales.tsensors WHERE __deleted = 0",
                    connection
                    );

                //open the connection in a try/catch block.  
                //create and execute the DataReader, writing the result 
                //set to the console window. 
                try
                {
                    connection.Open();

                    //run as long as we didn't get a shutdown notification
                    int jobTimer = 0;
                    while (isRunning)
                    {
                        if (++jobTimer > 10)
                        {
                            jobTimer = 0;

                            //extract all the variables from the DB table
                            using (SqlDataReader reader = command.ExecuteReader())
                            {
                                while (reader.Read())
                                {
                                    /**
                                     * update the local in-memory model with the
                                     * data read from the SQL database
                                     */

                                }
                            }

                            //detect the most recent update timestamp as the new reference
                            foreach (LogicVar lvar in MachineStatus.Instance.Variables.Values)
                            {
                                if (lvar.LastUpdate > machine.LastUpdate)
                                {
                                    machine.LastUpdate = lvar.LastUpdate;
                                }
                            }

                            //invoke the custom logic
                            logic.Run();
                        }

                        Thread.Sleep(1000);
                    }
                }
                catch (Exception ex)
                {
                    Console.WriteLine(ex.Message);
                }
            }

            // ...
        }

Let’s say that this piece of code is “fixed”. Regardless of how the system should react to the status, this section will always be the same. For this reason there’s a special, well-defined area where we can write our own business logic.
Here is a very simple example:

    class CustomLogic
        : ICustomLogic
    {

        public void Run()
        {
            LogicVar analog0 = MachineStatus.Instance.Variables["Analog0"];
            LogicVar analog1 = MachineStatus.Instance.Variables["Analog1"];

            if ((analog0.IsChanged || analog1.IsChanged) &&
                (double)analog0.Value > (double)analog1.Value
                )
            {
                var mail = new MailMessage();
                mail.To.Add("vernarim@outlook.com");
                mail.Body = "The value of Analog0 is greater than Analog1.";
                MachineStatus.Instance.SendMail(mail);
            }
        }

    }

If you remember, “Analog0” and “Analog1” are two variables sent by the Netduino. When I turn the trimpots so that:

  • any of the two variables is detected as changed, and…
  • the “Analog0” value becomes greater than the “Analog1” value…

…then an e-mail message is created and sent to me…(!)

Here is what I see on my mailbox:

mail-message

Conclusions.

This article looks long, but it isn’t actually: there are a lot of pictures because of the Azure setup walkthrough.
Azure experts may say that a more straightforward solution would be to use a Message Hub instead of this tricky way of triggering a webjob. Well, yes and no. I didn’t find a way to “peek” at what’s inside a queue without removing its content, along with other problems still to solve.
This is much more an experimental project built on the Azure “sandbox” than a definitive, optimal way to structure a telemetry system. However, I believe it’s a very good point to start from, to practice with, and then refine into your own project.

In the next article, I’ll show how to create a better (and genuinely useful) mail-alerting component.

Azure Veneziano – Part 1

Microsoft Azure logo

This is the first part of a series where I’ll present a telemetry project as a classic “Internet of Things” (IoT) showcase. The project starts out very basic, but it’ll grow in the next parts as several useful components are added.
The central role is played by Microsoft Azure, but other sections will span several technologies.

The source of the project is hosted in the azure-veneziano GitHub repository.

Inspiration.

This project was born as a sandbox for digging into cloud technologies, which may apply to our control systems. I wanted to walk through almost every corner of a real control system (kind of a SCADA, if you like), to understand the benefits and limitations of a fully-centralized solution.
By the way, I was also inspired by my friend Laurent Ellerbach, who published a very well-written article on how to create your own garden sprinkler system. Overall, I loved the mixture of different components which can be “glued” (a.k.a. interconnected) together: it seems we’re facing a milestone, where the flexibility offered by these technologies exceeds our fantasy.
At the time of writing, Laurent is translating his article from French to English, so I’m waiting for the new link. In the meantime, here’s an equivalent presentation he held in Kiev, Ukraine, not long ago.

UPDATE: Laurent’s article is now available here.

Why the name “Azure Veneziano”?

If any of you have had the chance to visit my city, you’ve probably also seen some of the famous glass-makers of Murano in action. “Blu Veneziano” is a particular tone of blue, often used for the glass.
I just wanted to honor Venezia, but also mention the “color” of the framework used, hence the name!

The system structure.

The system is structured as a producer-consumer, where:

  • the data producer is one (or more) “mobile devices”, which sample and sometimes collect data from sensors;
  • the data broker, storage and business layer are deployed on Azure, where the main logic works;
  • the data consumers are both the logic and the final user (myself in this case), who monitor the system.

In this introductory article I’ll focus on the first section, using a single Netduino Plus 2 board as the data producer.

Netduino as the data producer.

In the IoT perspective, the Netduino plays the “mobile device” role. Basically, it acts as a thin hardware-software interface, so that the converted data can be sent to a server (Azure, in this case). Just think of a temperature sensor wired to an ADC, with some logic that reads the numeric value and sends it to Azure. However, here I won’t detail a “real-sensor” system, rather a small simulation anyone can build in minutes.
Moreover, since I introduced the project as “telemetry”, the data flow is only “outgoing” from the Netduino. This means that there’s (still) no support for sending “commands” to the board. Let’s stick to the simplest possible implementation.

The hardware.

The circuit is very easy.

netduino_bb

Two trimpots: each one provides a voltage swinging from 0.0 to 3.3 V to the respective analog input. The Netduino’s internal ADC will convert the voltage to a floating-point (Double) value ranging from 0.0 to 100.0 (for the sake of readability, think of it as a percentage).
There are also two toggle switches. Each one is connected to a discrete (Boolean) input, which should be configured with an internal pull-up. When the switch is open, the pull-up resistor pulls the input to the “high” level (true). When the switch is closed to ground, it pulls the input to the “low” level, its resistance being much lower than the pull-up’s.
If you notice, there’s a low-value resistor in series with each switch: I used a 270 Ohm one, but the value isn’t critical at all. The purpose is just to protect the Netduino input from mistakes. Just imagine the pin being wrongly configured as an output: what if that output drove a high level while the switch was closed to ground? The output probably wouldn’t fry, but the stress on that port isn’t a good thing.
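To put a number on that protection, consider the worst case just described: the pin mistakenly driving the 3.3 V rail high while the switch shorts it to ground. Ohm’s law gives the fault current limited by the series resistor. A quick check, sketched in Python (the ~20 mA per-pin figure is a typical guideline I’m assuming, not a datasheet value):

```python
# Worst-case fault: pin driven high (3.3 V) while the switch shorts it to ground.
# The series resistor is then the only thing limiting the current through the pin.
V_RAIL = 3.3      # volts, the Netduino I/O rail
R_SERIES = 270.0  # ohms, the series resistor from the circuit above

fault_current_ma = V_RAIL / R_SERIES * 1000.0
print(round(fault_current_ma, 1))  # about 12.2 mA

# Without the resistor, the only limit would be the pin's own output driver;
# this modest resistor keeps the stress within a typical safe range.
assert fault_current_ma < 20.0  # typical per-pin guideline (my assumption)
```

That’s why the exact value isn’t critical: anything in the same order of magnitude keeps the fault current at a few milliamps.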

From a programmer’s perspective, all these “virtual” sensors can be seen as two Double and two Boolean values. The funny thing is that I can modify their values with my fingers!


Again, it doesn’t matter here what the real sensor could be. I’d like to keep the hardware section light for those who don’t like (or understand) much about electronics. There are many ready-to-use modules/shields to connect, which avoid (or minimize) the need to deal with the hardware.

Some virtual ports and my…laziness.

Believe me, I’m lazy.
Although I’m having a lot of fun playing with these hardware/software things, I really don’t like spinning the trimpots or sliding the switches all the time; yet I need some data changing over time. So, I created a kind of (software) virtual port.
This port is detailed below; its task is to mimic a “real” hardware port. From the data-production perspective it’s no different from the real ports, but it’s way easier to manage, especially in a testing/demo session.
This concept of a “virtual port” is very common even in high-end systems. Just think of the diagnostic section of a device, which collects data from non-physical sources (e.g. memory usage, CPU usage, etc.).

The software.

Since the goal is posting the data read by the Netduino to a server, we should carefully choose the best way to do it.
The simplest way to connect a Netduino Plus 2 to the rest of the world is the Ethernet cable. That’s fine, at least for the prototype, because the goal is reaching the Internet.
As for the protocol, among the several available to exchange data with Azure, I think the simplest yet best-known approach is HTTP. Also bear in mind that there’s no “special” protocol in the current Netduino/.Net Micro Framework implementation.
The software running on the board is very simple. It can be structured as follows:

  • the main application, as the primary logic of the device;
  • some hardware port wrappers as data-capturing helpers;
  • an HTTP client optimized for Azure Mobile data exchange;
  • a JSON DOM with serialization/deserialization capabilities;

The data transfer is normal HTTP. At the time of writing, the .Net Micro Framework still did not offer any HTTPS support, so the data flows unsecured.

The first part of the main application is about the ports definition. It’s not particularly different from the classic declaration, but the ports are “wrapped” with a custom piece of code.

        /**
         * Hardware input ports definition
         **/

        private static InputPortWrapper _switch0 = new InputPortWrapper(
            "Switch0",
            Pins.GPIO_PIN_D0
            );

        private static InputPortWrapper _switch1 = new InputPortWrapper(
            "Switch1",
            Pins.GPIO_PIN_D1
            );

        private static AnalogInputWrapper _analog0 = new AnalogInputWrapper(
            "Analog0",
            AnalogChannels.ANALOG_PIN_A0,
            100.0,
            0.0
            );

        private static AnalogInputWrapper _analog1 = new AnalogInputWrapper(
            "Analog1",
            AnalogChannels.ANALOG_PIN_A1,
            100.0,
            0.0
            );

The port wrappers.

The aims of the port wrappers are twofold:

  • yield a better abstraction over a generic input port;
  • manage the “has-changed” flag, especially for non-discrete values such as the analogs.

Let’s have a peek at the AnalogInputWrapper class, for instance:

    /// <summary>
    /// Wrapper around the standard <see cref="Microsoft.SPOT.Hardware.AnalogInput"/>
    /// </summary>
    public class AnalogInputWrapper
        : AnalogInput, IInputDouble
    {
        public AnalogInputWrapper(
            string name,
            Cpu.AnalogChannel channel,
            double scale,
            double offset,
            double normalizedTolerance = 0.05
            )
            : base(channel, scale, offset, 12)
        {
            this.Name = name;

            //precalculate the absolute variation window 
            //around the reference (old) sampled value
            this._absoluteToleranceDelta = scale * normalizedTolerance;
        }

        private double _oldValue = double.NegativeInfinity; //undefined
        private double _absoluteToleranceDelta;

        public string Name { get; private set; }
        public double Value { get; private set; }
        public bool HasChanged { get; private set; }

        public bool Sample()
        {
            this.Value = this.Read();

            //detect the variation
            bool hasChanged =
                this.Value < (this._oldValue - this._absoluteToleranceDelta) ||
                this.Value > (this._oldValue + this._absoluteToleranceDelta);

            if (hasChanged)
            {
                //update the reference (old) value
                this._oldValue = this.Value;
            }

            return (this.HasChanged = hasChanged);
        }

        // ...

    }

The class derives from the original AnalogInput port, but exposes a “Sample” method to capture the ADC value (via the Read method). The purpose is similar to a classic sample-and-hold structure, but there is a comparison algorithm which detects the new value’s variation.
Basically, a (normalized) “tolerance” parameter has to be defined for the port (the default is 5%). When a new sample is taken, its value is compared against the tolerance window centered on the “old value”. When the new value falls outside the window, the official port value is marked as “changed”, and the old value is replaced with the new one.
This trick is very useful because it avoids useless (and false) changes of the value. Even a little noise on the power rail can produce a small instability in the nominal ADC sample. However, we only want “concrete” variations.
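The dead-band comparison can be isolated into a few lines. Here is a minimal sketch in Python of the same logic (the class name and the sample values are illustrative, not taken from the library):

```python
class DeadBand:
    """Mimics the wrapper's change detection: a new sample counts as a
    'concrete' variation only when it leaves the tolerance window
    centered on the last accepted value."""

    def __init__(self, scale, normalized_tolerance=0.05):
        # precalculate the absolute window, as the C# constructor does
        self._delta = scale * normalized_tolerance
        self._old = float("-inf")  # undefined: the first sample always "changes"

    def sample(self, value):
        changed = (value < self._old - self._delta or
                   value > self._old + self._delta)
        if changed:
            self._old = value  # move the window onto the new reference
        return changed

db = DeadBand(scale=100.0)        # 5% of full scale => a +/-5.0 window
assert db.sample(50.0) is True    # first sample: always a change
assert db.sample(52.0) is False   # inside the +/-5.0 window: just noise
assert db.sample(56.0) is True    # outside the window: concrete variation
```

Note that the window follows the last *accepted* value, not the last raw sample, so a slow drift still gets reported once it accumulates past the tolerance.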

The above class also implements the IInputDouble interface. This interface, in turn, derives from another, more abstract one.

    /// <summary>
    /// Double-valued input port specialization
    /// </summary>
    public interface IInputDouble
        : IInput
    {
        /// <summary>
        /// The sampled input port value
        /// </summary>
        double Value { get; }
    }


    /// <summary>
    /// Generic input port abstraction
    /// </summary>
    public interface IInput
    {
        /// <summary>
        /// Friendly name of the port
        /// </summary>
        string Name { get; }

        /// <summary>
        /// Indicate whether the port value has changed
        /// </summary>
        bool HasChanged { get; }

        /// <summary>
        /// Perform the port sampling
        /// </summary>
        /// <returns></returns>
        bool Sample();

        /// <summary>
        /// Append to the container an object made up
        /// with the input port status
        /// </summary>
        /// <param name="container"></param>
        void Serialize(JArray container);
    }

Those interfaces yield a better abstraction over the different kinds of ports: AnalogInput, InputPort, and RampGenerator.

The RampGenerator as virtual port.

As mentioned earlier, this is a “fake wrapper”, because it does NOT wrap any port, yet it WORKS as if it were a standard port. The benefit comes from the interface abstraction.
In order to PRODUCE data over time for the demo, I wanted something automatic but also “well-known”. I might have used a random-number generator, but… how do you detect an error or a wrong sequence in a random stream of numbers? Better to rely on a perfectly-shaped periodic wave, so that I can easily check the correct order of the samples on the server, as well as any missing/duplicated datum.
As the periodic signal you can choose whatever you want. A sine is maybe the most famous periodic wave, but the goal is testing the system, not having something nice to look at. A simple “triangle wave” generator is just a linear ramp rising then falling, indefinitely.

    /// <summary>
    /// Virtual input port simulating a triangle waveform
    /// </summary>
    public class RampGenerator
        : IInputInt32
    {
        public RampGenerator(
            string name,
            int period,
            int scale,
            int offset
            )
        {
            this.Name = name;
            this.Period = period;
            this.Scale = scale;
            this.Offset = offset;

            //the wave being subdivided in 40 slices
            this._stepPeriod = this.Period / 40;

            //vertical direction: 1=rise; -1=fall
            this._rawDirection = 1;
        }

        // ...

        public bool Sample()
        {
            bool hasChanged = false;

            if (++this._stepTimer <= 0)
            {
                //very first sampling
                this.Value = this.Offset;
                hasChanged = true;
            }
            else if (this._stepTimer >= this._stepPeriod)
            {
                if (this._rawValue >= 10)
                {
                    //hit the upper edge, then begin to fall
                    this._rawValue = 10;
                    this._rawDirection = -1;
                }
                else if (this._rawValue <= -10)
                {
                    //hit the lower edge, then begin to rise
                    this._rawValue = -10;
                    this._rawDirection = 1;
                }

                this._rawValue += this._rawDirection;
                this.Value = this.Offset + (int)(this.Scale * (this._rawValue / 10.0));
                hasChanged = true;
                this._stepTimer = 0;
            }
            
            return (this.HasChanged = hasChanged);
        }

        // ...

    }

Here is how a triangle wave looks on a scope (it’s at 100 Hz, just to give an idea).

UNIT0000

Of course, I could have used a normal bench wave-generator as a physical signal source, as in the snapshot above. That would have been more realistic, but the resulting wave period would have been too short (i.e. too fast), and the “changes”, with their consequent message uploads, too frequent. A software-based signal generator is well suited for very long periods, like many minutes.
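To give an idea of the timing: the generator divides the wave into 40 slices (the _stepPeriod = this.Period / 40 line above), and Sample() is called about once per second by the main loop. A quick sketch of the resulting step cadence for the two generator periods used in the demo:

```python
SLICES = 40          # the wave is subdivided into 40 slices (rise + fall)
SAMPLE_RATE_S = 1    # the main loop samples about once per second

for period_s in (1200, 1800):            # the two demo generators
    step_period = period_s // SLICES     # samples between value steps
    print(period_s, "s period ->", step_period * SAMPLE_RATE_S, "s per step")

# 1200 s period -> 30 s per step
# 1800 s period -> 45 s per step
```

So the 20-minute generator produces a “concrete” change (and thus a message) at most every 30 seconds: slow enough to keep the upload traffic reasonable, fast enough to verify the sample ordering on the server.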

The HTTP client.

As described above, the data are sent to the server via normal (unsecured) HTTP. The Netduino Plus 2 does not offer any HTTP client, but it does provide some primitives which help to create your own.
Without digging in much, the client is rather simple. If you know how a basic HTTP transaction works, you’ll have no difficulty understanding what the code does.

    /// <summary>
    /// HTTP Azure-mobile service client 
    /// </summary>
    public class MobileServiceClient
    {
        public const string Read = "GET";
        public const string Create = "POST";
        public const string Update = "PATCH";

        // ...

        /// <summary>
        /// Create a new client for HTTP Azure-mobile servicing
        /// </summary>
        /// <param name="serviceName">The name of the target service</param>
        /// <param name="applicationId">The application ID</param>
        /// <param name="masterKey">The access secret-key</param>
        public MobileServiceClient(
            string serviceName,
            string applicationId,
            string masterKey
            )
        {
            this.ServiceName = serviceName;
            this.ApplicationId = applicationId;
            this.MasterKey = masterKey;

            this._baseUri = "http://" + this.ServiceName + ".azure-mobile.net/";
        }

        // ..

        private JToken OperateCore(
            Uri uri,
            string method,
            JToken data
            )
        {
            //create a HTTP request
            using (var request = (HttpWebRequest)WebRequest.Create(uri))
            {
                //set-up headers
                var headers = new WebHeaderCollection();
                headers.Add("X-ZUMO-APPLICATION", this.ApplicationId);
                headers.Add("X-ZUMO-MASTER", this.MasterKey);

                request.Method = method;
                request.Headers = headers;
                request.Accept = JsonMimeType;

                if (data != null)
                {
                    //serialize the data to upload
                    string serialization = JsonHelpers.Serialize(data);
                    byte[] byteData = Encoding.UTF8.GetBytes(serialization);
                    request.ContentLength = byteData.Length;
                    request.ContentType = JsonMimeType;
                    request.UserAgent = "Micro Framework";
                    //Debug.Print(serialization);

                    using (Stream postStream = request.GetRequestStream())
                    {
                        postStream.Write(
                            byteData,
                            0,
                            byteData.Length
                            );
                    }
                }

                //wait for the response
                using (var response = (HttpWebResponse)request.GetResponse())
                using (var stream = response.GetResponseStream())
                using (var reader = new StreamReader(stream))
                {
                    //deserialize the received data
                    return JsonHelpers.Parse(
                        reader.ReadToEnd()
                        );
                };
            }
        }

    }

The above code derives from an old project, but only a few lines of code from that release actually survive here. However, I want to mention the source for whoever is interested.

As per the Azure Mobile Services offering, there are two kinds of APIs which can be called: table (database) operations and custom API operations. Again, I’ll detail those features in the next article.
The key role is played by the OperateCore method, a private entry point for both the table and the custom API requests. All Azure needs is some special HTTP headers, which carry the identification keys for gaining access to the platform.
The request content is just a JSON document, i.e. simple plain text.

The main application.

When the program starts, it first creates an instance of the Azure Mobile HTTP client (Zumo), then wraps all the port references in an array, for ease of management.
Notice that there are also two “special” ports of type RampGenerator. In this demo there are two wave generators with periods of 1200 and 1800 seconds, respectively. Their ranges are also slightly different, just to reduce confusion during data verification.
The ability to fit all the ports in a single array, then treat them as a single entity, is the benefit offered by the interface abstraction.

        public static void Main()
        {
            //instantiate a new Azure-mobile service client
            var ms = new MobileServiceClient(
                "(your service name)",
                applicationId: "(your application-id)",
                masterKey: "(your master key)"
                );

            //collect all the input ports as an array
            var inputPorts = new IInput[]
            {
                _switch0,
                _switch1,
                new RampGenerator("Ramp20min", 1200, 100, 0),
                new RampGenerator("Ramp30min", 1800, 150, 50),
                _analog0,
                _analog1,
            };

After the initialization, the program loops forever; about every second all the ports are sampled. Upon any “concrete” variation, a JSON message is wrapped up with the new values, then sent to the server.

            //loops forever
            while (true)
            {
                bool hasChanged = false;

                //perform the logic sampling for every port of the array
                for (int i = 0; i < inputPorts.Length; i++)
                {
                    if (inputPorts[i].Sample())
                    {
                        hasChanged = true;
                    }
                }

                if (hasChanged)
                {
                    //something has changed, so wrap up the data transaction
                    var jobj = new JObject();
                    jobj["devId"] = "01234567";
                    jobj["ver"] = 987654321;

                    var jdata = new JArray();
                    jobj["data"] = jdata;

                    //append only the port data which have been changed
                    for (int i = 0; i < inputPorts.Length; i++)
                    {
                        IInput port;
                        if ((port = inputPorts[i]).HasChanged)
                        {
                            port.Serialize(jdata);
                        }
                    }

                    //execute the query against the server
                    ms.ApiOperation(
                        "myapi",
                        MobileServiceClient.Create,
                        jobj
                        );
                }

                //invert the led status
                _led.Write(
                    _led.Read() == false
                    );

                //take a rest...
                Thread.Sleep(1000);
            }

The composition of the JSON message is maybe the simplest part, thanks to the Linq-style API of my Micro-JSON library.
The LED toggling is just a visual heartbeat monitor.

The message schema.

In my mind, there should be more than just a single board. Better: a more realistic system would connect several devices, even ones different from each other. Each device would provide its own data, and all the data incoming to the server would compose a big bunch of “variables”.
For this reason, it’s important to distinguish the originating source of the data, so a kind of “device identification”, unique in the system, is included in every message.
Moreover, I think the set of variables exposed by a device could change at any time. For example, I may add some new sensors, re-arrange the input ports, or even adjust a data type. All that means the “configuration has changed”, and the server should be informed about it. That’s why there’s a “version identification” as well.

Then comes the real sensor data: just an array of JavaScript objects, each one providing the port (sensor) name and its value.
However, the array includes only the ports marked as “changed”. This trick yields at least two advantages:

  • the message carries only the useful data;
  • the approach is rather loosely-coupled: the server synchronizes automatically.
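To see why the server can “synchronize automatically”, note that each partial message only needs to be folded over the last known state. A minimal consumer-side sketch in Python (the message shape matches the JSON examples in this article; the merge logic itself is my illustration, not the actual server code):

```python
def merge(state, message):
    """Fold a (possibly partial) device message into the last known state.
    Messages carry only the changed ports, so replaying them in order
    rebuilds the full picture on the server side."""
    for item in message["data"]:
        state[item["name"]] = item["value"]
    return state

state = {}
# initial message: carries all the values (only two shown here)
merge(state, {"data": [{"name": "Switch0", "value": True},
                       {"name": "Analog0", "value": 0.07}]})
# later message: only Analog0 changed
merge(state, {"data": [{"name": "Analog0", "value": 52.5}]})

assert state == {"Switch0": True, "Analog0": 52.5}
```

A port never mentioned in a message is simply assumed unchanged, which is exactly the contract the “changed-only” serialization establishes.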

Each variable’s serialization is accomplished by the corresponding method declared in the IInput interface. Here is an example for the analog port:

        public void Serialize(JArray container)
        {
            var jsens = new JObject();
            jsens["name"] = this.Name;
            jsens["value"] = this.Value;
            container.Add(jsens);
        }

Here is the initial message, which always carries all the values:

{
  "devId": "01234567",
  "ver": 987654321,
  "data": [
    {
      "name": "Switch0",
      "value": true
    },
    {
      "name": "Switch1",
      "value": true
    },
    {
      "name": "Ramp20min",
      "value": 0
    },
    {
      "name": "Ramp30min",
      "value": 50
    },
    {
      "name": "Analog0",
      "value": 0.073260073260073
    },
    {
      "name": "Analog1",
      "value": 45.079365079365
    }
  ]
}

After that, we can adjust the trimpots and the switches to produce a “change”. Upon any detected change, a message is composed and issued:

Single change:

{
  "devId": "01234567",
  "ver": 987654321,
  "data": [
    {
      "name": "Analog1",
      "value": 52.503052503053
    }
  ]
}

Multiple changes:

{
  "devId": "01234567",
  "ver": 987654321,
  "data": [
    {
      "name": "Switch1",
      "value": false
    },
    {
      "name": "Analog1",
      "value": 75.946275946276
    }
  ]
}

Conclusions.

It’s easy to realize that this project is very basic, and there are many sections that could be improved. For example, there’s no recovery of the program when an exception is thrown. However, I wanted to keep the application at a very introductory level.
It’s time to wire up your own prototype, because in the next article we’ll see how to set up the Azure platform for the data elaboration.

Memorabilia 2 – Apple ][

It was 20 years ago.

On July 3, 1994, I entered a small contest organized by RAI Radio Televisione Italiana. Anyone could submit his own software creation, and the prize was a TeleText module for the PC.

I sent them my “Apple ][ emulator for PC” and I won.

At that time the Internet was still mostly unknown, and most TVs embedded a TeleText module capable of receiving data over the air. Software broadcasting seemed an unbelievable thing… then, within a few years, many of us were opening a web browser to surf the Internet.

 

My “real” Apple ][.

A step back to 1979.

My very first PC was a Commodore PET 2001: an unbelievable machine with a strange matrix keyboard, a cassette-tape deck (storage) and a plain green monitor on top. Its engine was a 6502 CPU running at 1 MHz, with 8 kB of RAM.

Yes, roughly an Arduino with a user interface, but with the following exceptions:

  • an Arduino runs way faster;
  • the PET 2001 was particularly useful on cold winter days, due to its considerable power consumption…

However, this PC lasted just a few months, then it became too limited even for small games.

So, my “nominal” first PC was an Apple ][. In Europe it was marketed as the “Europlus” (someone would add “proudly built in Ireland”).

It came with the “usual” 6502 (actually an awesome CPU), still at 1 MHz, and 16 kB of RAM (immediately upgraded to 48). The cassette tape was replaced by a 5¼″ floppy drive: each disk held 140 kB, which is probably less than this post. At a cost of 20,000 Lire each (see below for a comparison), a Dysan floppy disk was the “best” on the market… at least for us humans.

I learned a lot on my Apple ][, both on software and hardware.

With the release of the Apple ][, the Cupertino guys provided full-featured manuals, detailed hardware schematics, as well as the ROM “BIOS” assembly dump. There was no part of the machine that wasn’t well documented: hacking it was a real pleasure!

And I did it!…so many times!

Please notice, in the last picture, the assembler listing signed by my idol, Steve Wozniak!

I designed several I/O hardware modules, where the most difficult part was reproducing the male connection header: a PCB was the only way.

Alongside the huge worldwide success of the Apple ][, they released the Apple //e, which started to become more and more closed. That was the decay of the Apple company, and the rise of the IBM PC, which moved in the same way as its predecessor: giving away schematics and BIOS listings!
I still own my original Apple ][.

 

My “fake” Apple ][ (the emulator).

The advent of the PC-XT changed almost everything, above all the general diffusion of PCs.

Whereas in the early ’80s there were maybe a dozen students with a PC at home (out of about 1500 at the technical high school I attended), within as little as ten years almost everyone owned a PC at home: mostly an IBM PC compatible.

However, the Apple ][ was still in my heart!

 

Due to university guidelines, I started learning Pascal and Fortran. Fortran was awful, but (Turbo) Pascal was awesome. I loved it so much that I was literally able to create anything. Wherever standard Pascal couldn’t reach, you could just open an “asm” island and mix high-level code with assembler.

No complex “includes”, “.h” files or whatever, which I always hated and *still* hate. What are they for? I could understand thirty years ago, when PC resources were very limited, but… today?

I mean, it’s no wonder at all that behind the success of C# there’s the creator of Turbo Pascal: Anders Hejlsberg.

 

So what?

Since my desk wasn’t big enough to host both the old Apple ][ and the PC-AT, the most “reasonable” decision was: “just create an Apple ][ emulator running on the PC-AT!”

It was the early ’90s, and I owned a 386 machine (I don’t remember the actual CPU speed). I loved coding this mixture of “Pasc-asm” so much that the result is still one of my best creations ever.

Below is a piece of the assembler, handling LDA (absolute) and LDA (indirect,X):

@LDASS: mov     bx, es:[si+1]           { fetch the 16-bit absolute address }
        mov     di, bx
        and     di, $FC00               { derive the memory-page index of the address... }
        shr     di, 9
        call    word ptr @LOCRD [di]    { ...and dispatch to that page's read handler }
        sahf                            { restore the emulated 6502 flags }
        mov     cl, es:[bx]             { load the byte into the emulated accumulator }
        inc     cl
        dec     cl                      { inc/dec trick: refresh the Z and N flags }
        lahf                            { save the flags back }
        add     si, 3                   { advance the emulated PC (3-byte instruction) }
        jmp     @RET
@LDAIX: mov     bx, es:[si+1]           { fetch the zero-page operand }
        add     bl, dl                  { add the emulated X index register }
        xor     bh, bh                  { wrap within the zero page }
        mov     bx, es:[bx]             { read the effective address from the zero page }
        mov     di, bx
        and     di, $FC00
        shr     di, 9
        call    word ptr @LOCRD [di]    { same page-handler dispatch as above }
        sahf
        mov     cl, es:[bx]
        inc     cl
        dec     cl
        lahf
        add     si, 2                   { advance the emulated PC (2-byte instruction) }
        jmp     @RET

 

 

Keep going on in the box!

I was able to rescue the old emulator application and run it even on my current Windows 8 64-bit machine. Old DOS programs don’t work in a 64-bit environment, but there’s a solution: DosBox.

DosBox, as always, is the result of a crew of heroes, who thankfully remember that there are still people asking for dinosaur-era stuff… dinosaurs, maybe, but still valuable!

I actually had no problems installing and running my application: I was a bit worried because of the total lack of abstraction in writing data to the video memory, but… it worked well!

Enjoy this piece of history!

 

 

The price of an Apple ][ computer.

Here are the complete price details of the Apple ][ products, taken from a magazine of November 1980.

(Scan of the price list.)

NOTE: “IVA” is VAT, which was 15% in 1980.

Now, according to this reference, the salary of a generic factory worker was roughly 400,000 Lire. It means that an Apple ][ cost the equivalent of five times a worker’s salary!

Two little “endians”…and then there were none (of big)

If you don’t have problems, it means that you are doing nothing new. In my job I face problems almost every day, and that makes me happy!
If you deal with low-level data transfer, then you have probably faced the different “endianness” of processors. Traditionally, companies like Intel embraced the Little-endian choice, whereas Motorola (now Freescale) took the Big-endian way.
Now, it seems that the .Net Framework only cares about the Little-endian view of the world. Here is an excerpt of the BitConverter class, as seen with any decent disassembler:

	public static class BitConverter
	{
		/// <summary>Indicates the byte order ("endianness") in which data is stored in this computer architecture.</summary>
		/// <filterpriority>1</filterpriority>
		[__DynamicallyInvokable]
		public static readonly bool IsLittleEndian = true;

                // ...

Anyway, when you have to write a C#/.Net program that exchanges data with a Big-endian device, you’re in trouble because of the very poor support for this format. So, I created a pretty decent reader/writer pair that should come in useful for many of you.
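To see the problem concretely, here is a minimal illustration of my own (not from the library) of what happens when Big-endian data meets BitConverter on a typical Little-endian host:

```csharp
using System;
using System.Diagnostics;

class EndianDemo
{
    static void Main()
    {
        // 0x12345678 as it arrives from a Big-endian device
        // (most significant byte first)
        byte[] raw = { 0x12, 0x34, 0x56, 0x78 };

        // BitConverter follows the host layout, i.e. Little-endian
        // on x86: the bytes are interpreted backwards
        int wrong = BitConverter.ToInt32(raw, 0);
        Debug.Assert(wrong == 0x78563412);

        // the classic workaround: reverse the bytes before converting
        Array.Reverse(raw);
        int right = BitConverter.ToInt32(raw, 0);
        Debug.Assert(right == 0x12345678);
    }
}
```

The Array.Reverse workaround works, but it is clumsy and allocation-unfriendly once you have to do it for every field of a structured stream, which is exactly why a dedicated reader/writer pair is nicer.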

The “BIDI”-way.

Many modern processors support both “endiannesses”, so… why shouldn’t a BinaryReader/BinaryWriter do the same? Furthermore, it’s not unusual to see mixed-format data in the same stream. One of the latest occasions was the FT800 chip, which requires Big-endian for the addressing, despite the specs stating that the chip is Little-endian based.
So, I definitely wanted a Reader/Writer capable of both. However, the interface is meant as an “explicit” reference to both the type and the format. There’s no “default” format, although such a feature might be interesting as well. The classes are based on the original Microsoft BinaryReader and BinaryWriter, where I then modified the data access. The pair of new classes are named BidiBinaryReader and BidiBinaryWriter.
If any of you have browsed the sources of my Modbus library, then the problem isn’t new at all. This time I turned the source to “wrap” a generic Stream object, instead of a faster but less abstract byte array. Here is an example from the BidiBinaryReader:


        /// <summary>Reads a 4-byte signed integer (Little-endian) from the current stream and advances the current position of the stream by four bytes.</summary>
        /// <returns>A 4-byte signed integer read from the current stream.</returns>
        /// <exception cref="T:System.IO.EndOfStreamException">The end of the stream is reached. </exception>
        /// <exception cref="T:System.ObjectDisposedException">The stream is closed. </exception>
        /// <exception cref="T:System.IO.IOException">An I/O error occurs. </exception>
        /// <filterpriority>2</filterpriority>
        public virtual int ReadInt32LE()
        {
            this.FillBuffer(4);
            return
                (int)this.m_buffer[0] |
                (int)this.m_buffer[1] << 8 |
                (int)this.m_buffer[2] << 16 |
                (int)this.m_buffer[3] << 24;
        }


        /// <summary>Reads a 4-byte signed integer (Big-endian) from the current stream and advances the current position of the stream by four bytes.</summary>
        /// <returns>A 4-byte signed integer read from the current stream.</returns>
        /// <exception cref="T:System.IO.EndOfStreamException">The end of the stream is reached. </exception>
        /// <exception cref="T:System.ObjectDisposedException">The stream is closed. </exception>
        /// <exception cref="T:System.IO.IOException">An I/O error occurs. </exception>
        /// <filterpriority>2</filterpriority>
        public virtual int ReadInt32BE()
        {
            this.FillBuffer(4);
            return
                (int)this.m_buffer[3] |
                (int)this.m_buffer[2] << 8 |
                (int)this.m_buffer[1] << 16 |
                (int)this.m_buffer[0] << 24;
        }
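To show the intent in a runnable form, here is a compact, self-contained sketch of the same idea (the `MiniBidiReader` name is mine, for illustration only; the real BidiBinaryReader is richer): every read call states its endianness explicitly, so a mixed-format stream can be parsed without any mode switching.

```csharp
using System.IO;

// Minimal sketch of the explicit-endianness reader idea,
// wrapping a generic Stream like the real class does.
class MiniBidiReader
{
    private readonly Stream _stream;
    private readonly byte[] _buffer = new byte[4];

    public MiniBidiReader(Stream stream)
    {
        _stream = stream;
    }

    // fill the scratch buffer with exactly 'count' bytes
    private void FillBuffer(int count)
    {
        int offset = 0;
        while (offset < count)
        {
            int read = _stream.Read(_buffer, offset, count - offset);
            if (read == 0) throw new EndOfStreamException();
            offset += read;
        }
    }

    // same bit twiddling as the BidiBinaryReader excerpt above
    public int ReadInt32LE()
    {
        FillBuffer(4);
        return _buffer[0] | _buffer[1] << 8 | _buffer[2] << 16 | _buffer[3] << 24;
    }

    public int ReadInt32BE()
    {
        FillBuffer(4);
        return _buffer[3] | _buffer[2] << 8 | _buffer[1] << 16 | _buffer[0] << 24;
    }
}
```

For example, a stream holding 0x12345678 first in Little-endian order (78 56 34 12) and then in Big-endian order (12 34 56 78) reads back the same value from both calls, each one naming its own format.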

What the classes don’t expose.

The original Microsoft sources also offer support for reading and writing chars, that is, the codec way to manipulate bytes. The problem is that those sources access several internal members, so the only way to leverage them is via reflection. I usually work with plain byte arrays, and the text conversion is done by a specific codec (e.g. UTF8Encoding).
As stated, there’s no support for setting a “default” endianness on the Reader/Writer, i.e. keeping the same original interface but allowing the user to choose whether to adopt the Big- instead of the Little-endian format.

So far, so good. As usual, here are the sources.

Hacking the WPF GridView – Adding the animation

UPDATE: this article still deserves a mention, but it has been superseded by a revision of the code due to some issues. Please have a look at this article instead.

In the first part of the WPF GridView hacking, I showed how to hide one or more columns based on a certain boolean source.
Here I’ll show you a slightly more complex hack that achieves the same functionality, but adds an animation to make the collapsing/expanding fancier.
Just to let you understand the goal better, have a look at this short video:

How to approach the problem?

Animation is a well-established built-in feature of WPF. I first considered leveraging it by using a StoryBoard, or a simpler BeginAnimation over the GridViewColumn’s Width property, but…
The simplest way to animate a property is using BeginAnimation, without any StoryBoard. However, this works only in code: you must use a StoryBoard when in XAML.
The following example is taken from the MSDN library documentation:

// Animate the button's width.
DoubleAnimation widthAnimation = new DoubleAnimation(120, 300, TimeSpan.FromSeconds(5));
widthAnimation.RepeatBehavior = RepeatBehavior.Forever;
widthAnimation.AutoReverse = true;
animatedButton.BeginAnimation(Button.WidthProperty, widthAnimation);

The above snippet indicates that a simple converter won’t help us with the animation: we actually need a more powerful tool.

Derive from a Behavior generic class.

It’s been a while since WPF added the powerful capability of attaching one or more Behaviors to any DependencyObject-derived object. A Behavior is an elegant way to extend the functionality of an object when it is not possible to modify it directly. I’d also add that even when you do have the ability to modify it, a behavioral pattern yields a lot of abstraction, thus much greater component reuse.
At this point, the behavior should have to be attached to the GridViewColumn instance, and should also expose at least two properties:

  • IsVisible, of type Boolean, which controls the related column’s visibility, and
  • NominalLength, of type Double, which specifies the expanded-state width of the same column.

Once attached, the behavior should control the Width property of the owning column. Something like this:

    public class WidthAnimationBehavior
        : Behavior<GridViewColumn>
    {

        #region DP NominalLength

        public static readonly DependencyProperty NominalLengthProperty = DependencyProperty.Register(
            "NominalLength",
            typeof(double),
            typeof(WidthAnimationBehavior),
            new PropertyMetadata(
                double.NaN,
                (obj, args) =>
                {
                    var ctl = (WidthAnimationBehavior)obj;
                    ctl.NominalLengthChanged(args);
                }));


        /// <summary>
        /// Represent the nominal length value to be considered
        /// when the element is visible
        /// </summary>
        public double NominalLength
        {
            get { return (double)GetValue(NominalLengthProperty); }
            set { SetValue(NominalLengthProperty, value); }
        }


        private void NominalLengthChanged(DependencyPropertyChangedEventArgs args)
        {
            this.TriggerAnimation();
        }

        #endregion


        #region DP IsVisible

        public static readonly DependencyProperty IsVisibleProperty = DependencyProperty.Register(
            "IsVisible",
            typeof(bool),
            typeof(WidthAnimationBehavior),
            new PropertyMetadata(
                false,
                (obj, args) =>
                {
                    var ctl = (WidthAnimationBehavior)obj;
                    ctl.IsVisibleChanged(args);
                }));


        /// <summary>
        /// Get and set whether the element has to be considered visible.
        /// In this context, the "visibility" is meant as the element's
        /// length expanded (nominal length) or collapsed (zero).
        /// </summary>
        public bool IsVisible
        {
            get { return (bool)GetValue(IsVisibleProperty); }
            set { SetValue(IsVisibleProperty, value); }
        }


        private void IsVisibleChanged(DependencyPropertyChangedEventArgs args)
        {
            this.TriggerAnimation();
        }

        #endregion


        private void TriggerAnimation()
        {
            var targetWidth = this.IsVisible
                ? this.NominalLength
                : 0.0;

            if (targetWidth > 0.0 &&
                this.AssociatedObject.Width == 0.0)
            {
                //begin open

            }
            else if (targetWidth == 0.0 &&
                this.AssociatedObject.Width > 0.0)
            {
                //begin close
                
            }
        }
    }

The actual problem is that the BeginAnimation method is declared in the Animatable class, but the GridViewColumn class does not derive from it.
Let’s continue digging…

Use a timer instead…

Of course, there are many ways to manage an animation: I believe the most straightforward is using a normal timer as a clock. Even better, a DispatcherTimer, since the goal is dealing heavily with the UI thread, and this specific timer will surely yield a better result.
The above behavior class gets a bit more complex, but still offers decent functionality without messing up the code too much.
Here is the revised class:

    public class WidthAnimationBehavior
        : Behavior<GridViewColumn>
    {
        /// <summary>
        /// Define how long the animation takes
        /// </summary>
        /// <remarks>
        /// The value is expressed as clock interval units
        /// </remarks>
        private const int StepCount = 10;


        public WidthAnimationBehavior()
        {
            //create the clock used for the animation
            this._clock = new DispatcherTimer(DispatcherPriority.Render);
            this._clock.Interval = TimeSpan.FromMilliseconds(20);
            this._clock.Tick += _clock_Tick;
        }


        private DispatcherTimer _clock;
        private int _animationStep;
        private double _fromLength;
        private double _toLength;


        #region DP NominalLength

        public static readonly DependencyProperty NominalLengthProperty = DependencyProperty.Register(
            "NominalLength",
            typeof(double),
            typeof(WidthAnimationBehavior),
            new PropertyMetadata(
                double.NaN,
                (obj, args) =>
                {
                    var ctl = (WidthAnimationBehavior)obj;
                    ctl.NominalLengthChanged(args);
                }));


        /// <summary>
        /// Represent the nominal length value to be considered
        /// when the element is visible
        /// </summary>
        public double NominalLength
        {
            get { return (double)GetValue(NominalLengthProperty); }
            set { SetValue(NominalLengthProperty, value); }
        }


        private void NominalLengthChanged(DependencyPropertyChangedEventArgs args)
        {
            this.TriggerAnimation();
        }

        #endregion


        #region DP IsVisible

        public static readonly DependencyProperty IsVisibleProperty = DependencyProperty.Register(
            "IsVisible",
            typeof(bool),
            typeof(WidthAnimationBehavior),
            new PropertyMetadata(
                false,
                (obj, args) =>
                {
                    var ctl = (WidthAnimationBehavior)obj;
                    ctl.IsVisibleChanged(args);
                }));


        /// <summary>
        /// Get and set whether the element has to be considered visible.
        /// In this context, the "visibility" is meant as the element's
        /// length expanded (nominal length) or collapsed (zero).
        /// </summary>
        public bool IsVisible
        {
            get { return (bool)GetValue(IsVisibleProperty); }
            set { SetValue(IsVisibleProperty, value); }
        }


        private void IsVisibleChanged(DependencyPropertyChangedEventArgs args)
        {
            this.TriggerAnimation();
        }

        #endregion


        private void TriggerAnimation()
        {
            this._animationStep = StepCount;
            this._clock.IsEnabled = true;
        }


        /// <summary>
        /// Clock ticker, mainly used for the animation
        /// </summary>
        /// <param name="sender"></param>
        /// <param name="e"></param>
        void _clock_Tick(object sender, EventArgs e)
        {
            if (this.AssociatedObject != null)
            {
                if (this._animationStep-- == StepCount)
                {
                    //calculates the from/to values to be used for the animation
                    this._fromLength = double.IsNaN(this.AssociatedObject.Width) ? 0.0 : this.AssociatedObject.Width;
                    this._toLength = this.NominalLength * (this.IsVisible ? 1.0 : 0.0);

                    if (Math.Abs(this._toLength - this._fromLength) < 0.1)
                    {
                        //the points match, thus there's no need to animate
                        this._animationStep = 0;
                        this._clock.Stop();
                    }
                }

                if (this._clock.IsEnabled)
                {
                    //linear interpolation from the starting to the target length
                    double relative = (StepCount - this._animationStep) / (double)StepCount;
                    double value = this._fromLength + relative * (this._toLength - this._fromLength);

                    this.AssociatedObject.Width = value;
                }

                if (this._animationStep <= 0)
                {
                    //the animation is over: stop the clock
                    this._animationStep = 0;
                    this._clock.Stop();
                }
            }
            else
            {
                //no animation or no target: stop the clock immediately
                this._animationStep = 0;
                this._clock.Stop();
            }
        }

    }

On the XAML side, the document will show as follows:

<ListView
    ItemsSource="{Binding Path=People, Source={x:Static local:App.Current}}"
    Grid.Row="1"
    x:Name="lvw1"
    >
    <ListView.View>
        <GridView
            AllowsColumnReorder="False"
            >
            <GridViewColumn Header="FirstName" Width="100" DisplayMemberBinding="{Binding Path=FirstName}" />
            <GridViewColumn Header="LastName" Width="100" DisplayMemberBinding="{Binding Path=LastName}" />

            <GridViewColumn Header="Address" DisplayMemberBinding="{Binding Path=Address}">
                <i:Interaction.Behaviors>
                    <local:WidthAnimationBehavior NominalLength="200" IsVisible="{Binding Path=IsChecked, ElementName=ChkLoc}" />
                </i:Interaction.Behaviors>
            </GridViewColumn>

            <GridViewColumn Header="City" Width="120" DisplayMemberBinding="{Binding Path=City}" />
            <GridViewColumn Header="State" Width="50" DisplayMemberBinding="{Binding Path=State}" />
            <GridViewColumn Header="ZIP" Width="60" DisplayMemberBinding="{Binding Path=ZIP}" />
                    
            <GridViewColumn Header="Phone" Width="150" DisplayMemberBinding="{Binding Path=Phone}" />
            <GridViewColumn Header="Email" Width="150" DisplayMemberBinding="{Binding Path=Email}" />
        </GridView>
    </ListView.View>
</ListView>

NOTE: for readability reasons, the XAML snippet shows just the ListView. Also, the behavior has been applied only to the “Address” column, but it could have been applied to any other column.

Again, there’s no code-behind, and that’s good news. The usage in the XAML context is not much more complex than using a normal converter. Most of the uncommon tag structure is due to the attached collection, which holds the real behavior instance.
How is the result now? This video says more than a thousand words!

So, everything seems fine!… Well, not quite yet.
When the column is expanded (i.e. visible) I should be able to resize it, since this is a desired feature. Actually, I can, but the new width is not stored anywhere, so the next collapse/expansion will lose the desired setting.

A dramatic new approach.

Okay, I also need another feature: the ability to add/remove columns at runtime. That’s because our LOB app for timber-drying regulators (Cet Electronics) can’t rely on a fixed set of columns: the actual set depends on the regulator model/state.
Moreover, the animation behavior runs fine, but… why not rethink that class so it can be reused for many more double-value animations?
That is to say, the above trick is valuable for many applications, yet not flexible enough for use in a professional context.
So, I approached the GridViewColumns management via a proxy, where each column is mirrored by a view-model instance. I don’t know whether the term “view-model” is appropriate in this case, because the GridViewColumn is actually a kind of view-model itself for the real elements hosted in the visual tree.
Anyway, the deal is hosting this “proxy” somewhere the business layer can adjust the virtual columns as it wants. At that point the view (i.e. a ListView+GridView) may bind safely to this proxy, so the visual result should match the expectations.

As for “safely”, I mean without any kind of memory leak.

For the final solution the XAML is amazingly clean, which is also obvious because most of the work is done in the code-behind.

<Window 
    x:Class="ListViewHacking.Window1"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    xmlns:local="clr-namespace:ListViewHacking"
    Title="ListView hacking demo" 
    Height="480" Width="900"
    WindowStartupLocation="CenterOwner"
    FontSize="14"
    Background="{StaticResource BG}"
    >
    
        
    <Grid
        Margin="50,40"
        >
        <Grid.RowDefinitions>
            <RowDefinition Height="Auto" />
            <RowDefinition Height="*" />
        </Grid.RowDefinitions>
        
        <StackPanel
            Orientation="Horizontal"
            Grid.Row="0"
            Margin="20,8"
            >
            <CheckBox Content="Show location columns" x:Name="ChkLoc" Click="ChkLoc_Click" Margin="20,0" />
            <CheckBox Content="Show contact columns" x:Name="ChkCont" Click="ChkCont_Click" Margin="20,0" />
        </StackPanel>
        
        <ListView
            ItemsSource="{Binding Path=People, Source={x:Static local:App.Current}}"
            Grid.Row="1"
            x:Name="lvw1"
            >
            <ListView.View>
                <local:GridViewEx
                    AllowsColumnReorder="False"
                    ColumnsSource="{Binding Path=TargetCollection}"
                    >
                </local:GridViewEx>
            </ListView.View>
        </ListView>
        
    </Grid>
</Window>

However, there’s some minimal handling for the checkboxes’ events:

    public partial class Window1 : Window
    {
        public Window1()
        {
            InitializeComponent();
        }


        private void ChkLoc_Click(object sender, RoutedEventArgs e)
        {
            var mirror = (GridViewColumnManager)this.DataContext;
            var isVisible = this.ChkLoc.IsChecked == true;

            //manage the visibility for the specified columns
            for (int i = 2; i <= 5; i++)
            {
                mirror.SourceItems[i].IsVisible = isVisible;
            }
        }


        private void ChkCont_Click(object sender, RoutedEventArgs e)
        {
            var mirror = (GridViewColumnManager)this.DataContext;
            var isVisible = this.ChkCont.IsChecked == true;

            //manage the visibility for the specified columns
            for (int i = 6; i <= 7; i++)
            {
                mirror.SourceItems[i].IsVisible = isVisible;
            }
        }

    }

Now, the columns’ configuration is fully done in the code-behind, specifically in the main window:

    /// <summary>
    /// Interaction logic for MainWindow.xaml
    /// </summary>
    public partial class MainWindow : Window
    {
        public MainWindow()
        {
            InitializeComponent();
        }


        private GridViewColumnManager _manager = new GridViewColumnManager();


        private void Button_Click(object sender, RoutedEventArgs e)
        {
            if (this._manager.SourceItems.Count == 0)
            {
                //the very first time, the manager should be
                //filled up with the desired columns
                this.AddItem("FirstName", 100, true);
                this.AddItem("LastName", 100, true);

                this.AddItem("Address", 200, false);
                this.AddItem("City", 120, false);
                this.AddItem("State", 50, false);
                this.AddItem("ZIP", 60, false);

                this.AddItem("Phone", 150, false);
                this.AddItem("Email", 150, false);
            }

            //create then show the secondary window,
            //containing the grid
            var win = new Window1();
            win.Owner = this;
            win.DataContext = this._manager;
            win.ShowDialog();
        }


        //just a helper for creating a column wrapper
        private void AddItem(string caption, double width, bool isVisible)
        {
            var mi = new GridViewColumnWrapper();
            mi.Header = caption;
            mi.Width = width;
            mi.IsVisible = isVisible;

            //here is the opportunity to set the cell content:
            //either a direct data or even a more useful data-binding
            mi.Initializer = (sender, gvc) => gvc.DisplayMemberBinding = new Binding(caption);
            
            this._manager.SourceItems.Add(mi);
        }

    }

Here is the final version of the behavior:

    /// <summary>
    /// Perform a length animation over time, without using data binding.
    /// It is a normal behavior that can be attached to any <see cref="System.Windows.DependencyObject"/>
    /// </summary>
    /// <remarks>
    /// Currently only the <see cref="System.Double"/> type is supported
    /// </remarks>
    public class LengthAnimationBehavior
        : Behavior<DependencyObject>
    {
        /// <summary>
        /// Define the delay before actually starting the animation
        /// </summary>
        /// <remarks>
        /// The value is expressed as clock interval units
        /// </remarks>
        private const int DelayCount = 10;

        /// <summary>
        /// Define how long the animation takes
        /// </summary>
        /// <remarks>
        /// The value is expressed as clock interval units
        /// </remarks>
        private const int StepCount = 10;


        /// <summary>
        /// Create the instance and specify which
        /// target property to animate
        /// </summary>
        /// <param name="dp"></param>
        public LengthAnimationBehavior(DependencyProperty dp)
        {
            this._dp = dp;

            //create the clock used for the animation
            this._clock = new DispatcherTimer(DispatcherPriority.Render);
            this._clock.Interval = TimeSpan.FromMilliseconds(20);
            this._clock.Tick += _clock_Tick;

            //see: http://wpf-animation.googlecode.com/svn/trunk/src/WPF/Animation/PennerDoubleAnimation.cs
            this.EasingFunction = (t, b, c, d) =>
            {
                //a quintic easing function
                if ((t /= d / 2) < 1)
                    return c / 2 * t * t * t * t * t + b;
                else
                    return c / 2 * ((t -= 2) * t * t * t * t + 2) + b;
            };
        }


        private DependencyProperty _dp;

        private DispatcherTimer _clock;
        private int _animationStep;
        private double _fromLength;
        private double _toLength;

        /// <summary>
        /// Get and set the easing function to be used for the animation
        /// </summary>
        public Func<double, double, double, double, double> EasingFunction { get; set; }


        #region DP NominalLength

        public static readonly DependencyProperty NominalLengthProperty = DependencyProperty.Register(
            "NominalLength",
            typeof(double),
            typeof(LengthAnimationBehavior),
            new PropertyMetadata(
                double.NaN,
                (obj, args) =>
                {
                    var ctl = (LengthAnimationBehavior)obj;
                    ctl.NominalLengthChanged(args);
                }));


        /// <summary>
        /// Represent the nominal length value to be considered
        /// when the element is visible
        /// </summary>
        public double NominalLength
        {
            get { return (double)GetValue(NominalLengthProperty); }
            set { SetValue(NominalLengthProperty, value); }
        }


        private void NominalLengthChanged(DependencyPropertyChangedEventArgs args)
        {
            if (this.IsAnimationEnabled)
            {
                this._animationStep = DelayCount + StepCount;
                this._clock.IsEnabled = true;
            }
            else
            {
                this.SetImmediately();
            }
        }

        #endregion


        #region DP TargetValue

        private static readonly DependencyProperty TargetValueProperty = DependencyProperty.Register(
            "TargetValue",
            typeof(object),
            typeof(LengthAnimationBehavior),
            new PropertyMetadata(
                null,
                (obj, args) =>
                {
                    var ctl = (LengthAnimationBehavior)obj;
                    ctl.TargetValueChanged(args);
                }));


        /// <summary>
        /// Used as a mirror of the target property value.
        /// It's a simple way to be notified of any value change.
        /// </summary>
        /// <remarks>
        /// Note that everything here is intentionally private
        /// </remarks>
        private object TargetValue
        {
            get { return (object)GetValue(TargetValueProperty); }
            set { SetValue(TargetValueProperty, value); }
        }


        private void TargetValueChanged(DependencyPropertyChangedEventArgs args)
        {
            if (this.IsVisible &&
                (this._animationStep <= 0 || this._animationStep > StepCount))
            {
                //fire the related event
                this.OnControlledValueChanged(this.AssociatedObject);
            }
        }

        #endregion


        #region DP IsVisible

        public static readonly DependencyProperty IsVisibleProperty = DependencyProperty.Register(
            "IsVisible",
            typeof(bool),
            typeof(LengthAnimationBehavior),
            new PropertyMetadata(
                false,
                (obj, args) =>
                {
                    var ctl = (LengthAnimationBehavior)obj;
                    ctl.IsVisibleChanged(args);
                }));


        /// <summary>
        /// Get and set whether the element has to be considered visible.
        /// In this context, the "visibility" is meant as the element's
        /// length expanded (nominal length) or collapsed (zero).
        /// </summary>
        public bool IsVisible
        {
            get { return (bool)GetValue(IsVisibleProperty); }
            set { SetValue(IsVisibleProperty, value); }
        }


        private void IsVisibleChanged(DependencyPropertyChangedEventArgs args)
        {
            if (this.IsAnimationEnabled)
            {
                this._animationStep = DelayCount + StepCount;
                this._clock.IsEnabled = true;
            }
            else
            {
                this.SetImmediately();
            }
        }

        #endregion


        #region DP IsAnimationEnabled

        public static readonly DependencyProperty IsAnimationEnabledProperty = DependencyProperty.Register(
            "IsAnimationEnabled",
            typeof(bool),
            typeof(LengthAnimationBehavior),
            new PropertyMetadata(
                false,
                (obj, args) =>
                {
                    var ctl = (LengthAnimationBehavior)obj;
                    ctl.IsAnimationEnabledChanged(args);
                }));


        /// <summary>
        /// Get or set whether the animation should run or not.
        /// When disabled, any setting will take place immediately
        /// </summary>
        public bool IsAnimationEnabled
        {
            get { return (bool)GetValue(IsAnimationEnabledProperty); }
            set { SetValue(IsAnimationEnabledProperty, value); }
        }


        private void IsAnimationEnabledChanged(DependencyPropertyChangedEventArgs args)
        {
            if ((bool)args.NewValue == false)
            {
                this._animationStep = 0;
                this._clock.Stop();
            }
        }

        #endregion


        /// <summary>
        /// Set the new target length immediately,
        /// without any animation or delay
        /// </summary>
        private void SetImmediately()
        {
            if (this.AssociatedObject != null)
            {
                this.AssociatedObject.SetValue(
                    this._dp,
                    this.NominalLength * (this.IsVisible ? 1.0 : 0.0)
                    );
            }
        }


        /// <summary>
        /// Clock ticker, mainly used for the animation
        /// </summary>
        /// <param name="sender"></param>
        /// <param name="e"></param>
        void _clock_Tick(object sender, EventArgs e)
        {
            if (this.IsAnimationEnabled &&
                this.AssociatedObject != null)
            {
                //check the initial delay
                if (--this._animationStep > StepCount)
                    return;

                //when the delay expires...
                if (this._animationStep == StepCount)
                {
                    //...calculates the from/to values to be used for the animation
                    this._fromLength = (double)this.TargetValue;
                    this._toLength = this.NominalLength * (this.IsVisible ? 1.0 : 0.0);

                    if (Math.Abs(this._toLength - this._fromLength) < 0.1)
                    {
                        //the points match, thus there's no need to animate
                        this._animationStep = 0;
                        this._clock.Stop();
                    }
                }

                if (this._clock.IsEnabled)
                {
                    //apply the easing function to compute the current value
                    double value = this.EasingFunction(
                        StepCount - this._animationStep,
                        this._fromLength,
                        this._toLength - this._fromLength,
                        StepCount
                        );

                    this.AssociatedObject.SetValue(
                        this._dp,
                        value
                        );
                }

                if (this._animationStep <= 0)
                {
                    //the animation is over: stop the clock
                    this._animationStep = 0;
                    this._clock.Stop();
                }
            }
            else
            {
                //no animation or no target: stop the clock immediately
                this._animationStep = 0;
                this._clock.Stop();
            }
        }


        /// <summary>
        /// The behavior has just been attached to the object
        /// </summary>
        protected override void OnAttached()
        {
            base.OnAttached();

            BindingOperations.SetBinding(
                this,
                LengthAnimationBehavior.TargetValueProperty,
                new Binding()
                {
                    Path = new PropertyPath(this._dp),
                    Source = this.AssociatedObject,
                    Mode = BindingMode.OneWay,
                });
        }


        /// <summary>
        /// The behavior has just been detached from the object
        /// </summary>
        protected override void OnDetaching()
        {
            BindingOperations.ClearBinding(
                this,
                LengthAnimationBehavior.TargetValueProperty
                );

            base.OnDetaching();
        }


        #region EVT ControlledValueChanged

        /// <summary>
        /// Provide the notification of any change
        /// of the target property value, when the animation
        /// is not active
        /// </summary>
        public event EventHandler<ControlledValueChangedEventArgs> ControlledValueChanged;


        private void OnControlledValueChanged(DependencyObject associated)
        {
            var handler = this.ControlledValueChanged;

            if (handler != null)
            {
                handler(
                    this,
                    new ControlledValueChangedEventArgs(associated)
                    );
            }
        }

        #endregion

    }


    /// <summary>
    /// Event arguments for the notification of any change
    /// of the target property value, when the animation
    /// is not active
    /// </summary>
    public class ControlledValueChangedEventArgs
        : EventArgs
    {
        public ControlledValueChangedEventArgs(DependencyObject associated)
        {
            this.AssociatedObject = associated;
        }

        public DependencyObject AssociatedObject { get; private set; }
    }

Here are the proxy manager and a minimal implementation of the column mirror model:

    /// <summary>
    /// Proxy for the columns collection used in a grid-view
    /// </summary>
    public class GridViewColumnManager
    {
        public GridViewColumnManager()
        {
            //create the source items collection instance
            this._sourceItems = new ObservableCollection<GridViewColumnWrapper>();
            this._sourceItems.CollectionChanged += SourceItemsCollectionChanged;

            //create the target columns collection instance
            this._targetCollection = new ObservableCollection<GridViewColumn>();
        }


        #region PROP SourceItems

        private readonly ObservableCollection<GridViewColumnWrapper> _sourceItems;

        /// <summary>
        /// Collection reference for the column wrapper items
        /// </summary>
        public ObservableCollection<GridViewColumnWrapper> SourceItems 
        {
            get { return this._sourceItems; }
        }

        #endregion


        #region PROP TargetCollection

        private readonly ObservableCollection<GridViewColumn> _targetCollection;

        /// <summary>
        /// Columns collection reference for the grid-view
        /// </summary>
        public IEnumerable<GridViewColumn> TargetCollection
        {
            get { return this._targetCollection; }
        }

        #endregion


        void SourceItemsCollectionChanged(
            object sender,
            System.Collections.Specialized.NotifyCollectionChangedEventArgs e)
        {
            this.Align();
        }


        /// <summary>
        /// Aligns the target collection with the source collection.
        /// The default implementation is a simple positional one-to-one mirroring.
        /// </summary>
        /// <remarks>
        /// The wrapper and the actual column instances are compared by leveraging
        /// the column's hash code, which is stored privately inside the wrapper
        /// </remarks>
        protected virtual void Align()
        {
            int ixt = 0;
            for (int ixs = 0; ixs < this._sourceItems.Count; ixs++)
            {
                GridViewColumnWrapper wrapper = this._sourceItems[ixs];
                int pos = -1;

                if (this._targetCollection.Count > ixt)
                {
                    //search for the column equivalent to the current wrapper
                    pos = this._targetCollection.Count;
                    while (--pos >= 0 && this._targetCollection[pos].GetHashCode() != wrapper.ColumnHash) ;
                }

                if (pos >= 0)
                {
                    //the column was found: adjust its position only
                    //when it is not already correct
                    if (pos != ixt)
                        this._targetCollection.Move(pos, ixt);
                }
                else
                {
                    //the column was not found, so create a new one
                    var col = new GridViewColumn();
                    wrapper.ColumnHash = col.GetHashCode();

                    //simple copy of the header, so a further binding is also possible
                    col.Header = wrapper.Header;

                    //sets the initial (nominal) width of the column
                    col.Width = wrapper.Width;

                    //invoke the custom column initializer, when available
                    if (wrapper.Initializer != null)
                    {
                        wrapper.Initializer(wrapper, col);
                    }

                    this._targetCollection.Insert(ixt, col);

                    //creates the behavior for the length animation
                    var bvr = new LengthAnimationBehavior(GridViewColumn.WidthProperty);
                    Interaction.GetBehaviors(col).Add(bvr);
                    bvr.ControlledValueChanged += bvr_ControlledValueChanged;

                    //binds the nominal width of the column to the behavior
                    BindingOperations.SetBinding(
                        bvr,
                        LengthAnimationBehavior.NominalLengthProperty,
                        new Binding("Width")
                        {
                            Source = wrapper,
                        });

                    //also binds the visibility to the behavior
                    BindingOperations.SetBinding(
                        bvr,
                        LengthAnimationBehavior.IsVisibleProperty,
                        new Binding("IsVisible")
                        {
                            Source = wrapper,
                        });

                    //now finally enables the animation
                    bvr.IsAnimationEnabled = true;
                }

                ixt++;
            }

            //remove any columns that are no longer needed
            while (this._targetCollection.Count > ixt)
                this._targetCollection.RemoveAt(ixt);
        }


        /// <summary>
        /// Event handler for the actual column's width changing
        /// </summary>
        /// <param name="sender"></param>
        /// <param name="e"></param>
        /// <remarks>
        /// This is very useful for keeping track of the manual resizing
        /// of any grid-view column: every width change occurring outside
        /// the animation is notified here.
        /// </remarks>
        void bvr_ControlledValueChanged(object sender, ControlledValueChangedEventArgs e)
        {
            var col = (GridViewColumn)e.AssociatedObject;
            var hash = col.GetHashCode();
            var item = this._sourceItems.FirstOrDefault(_ => _.ColumnHash == hash);
            if (item != null)
            {
                //update the nominal width in the wrapper with
                //the desired one
                item.Width = col.Width;
            }
        }

    }


    public class GridViewColumnWrapper
        : INotifyPropertyChanged
    {

        internal int ColumnHash;

        public string Name { get; set; }
        public Action<GridViewColumnWrapper, GridViewColumn> Initializer { get; set; }


        #region PROP Header

        private object _header;

        public object Header
        {
            get { return this._header; }
            set
            {
                if (this._header != value)
                {
                    this._header = value;
                    this.OnPropertyChanged("Header");
                }
            }
        }

        #endregion


        #region PROP Width

        private double _width;

        public double Width
        {
            get { return this._width; }
            set
            {
                if (this._width != value)
                {
                    this._width = value;
                    this.OnPropertyChanged("Width");
                }
            }
        }

        #endregion


        #region PROP IsVisible

        private bool _isVisible;

        public bool IsVisible
        {
            get { return this._isVisible; }
            set
            {
                if (this._isVisible != value)
                {
                    this._isVisible = value;
                    this.OnPropertyChanged("IsVisible");
                }
            }
        }

        #endregion


        #region EVT PropertyChanged

        public event PropertyChangedEventHandler PropertyChanged;


        protected virtual void OnPropertyChanged(string propertyName)
        {
            var handler = this.PropertyChanged;

            if (handler != null)
            {
                handler(
                    this,
                    new PropertyChangedEventArgs(propertyName));
            }
        }

        #endregion

    }

Finally, a subclass of the native GridView: we need the ability to bind its Columns collection, but the stock control does not expose it.

    public class GridViewEx
        : GridView
    {

        #region DP ColumnsSource

        public static readonly DependencyProperty ColumnsSourceProperty = DependencyProperty.Register(
            "ColumnsSource",
            typeof(ObservableCollection<GridViewColumn>),
            typeof(GridViewEx),
            new PropertyMetadata(
                null,
                (obj, args) =>
                {
                    var ctl = (GridViewEx)obj;
                    ctl.ColumnsSourceChanged(args);
                }));


        public ObservableCollection<GridViewColumn> ColumnsSource
        {
            get { return (ObservableCollection<GridViewColumn>)GetValue(ColumnsSourceProperty); }
            set { SetValue(ColumnsSourceProperty, value); }
        }


        private void ColumnsSourceChanged(DependencyPropertyChangedEventArgs args)
        {
            ObservableCollection<GridViewColumn> source;

            source = args.OldValue as ObservableCollection<GridViewColumn>;
            if (source != null)
            {
                source.CollectionChanged -= source_CollectionChanged;
            }

            this.Columns.Clear();

            source = args.NewValue as ObservableCollection<GridViewColumn>;
            if (source != null)
            {
                foreach (var col in source)
                {
                    this.Columns.Add(col);
                }

                source.CollectionChanged += source_CollectionChanged;
            }
        }

        #endregion


        void source_CollectionChanged(object sender, NotifyCollectionChangedEventArgs e)
        {
            switch (e.Action)
            {
                case NotifyCollectionChangedAction.Add:
                    this.Columns.Add((GridViewColumn)e.NewItems[0]);
                    break;

                case NotifyCollectionChangedAction.Remove:
                    this.Columns.Remove((GridViewColumn)e.OldItems[0]);
                    break;

                case NotifyCollectionChangedAction.Move:
                    this.Columns.Move(e.OldStartingIndex, e.NewStartingIndex);
                    break;

                case NotifyCollectionChangedAction.Replace:
                    this.Columns[e.NewStartingIndex] = (GridViewColumn)e.NewItems[0];
                    break;

                case NotifyCollectionChangedAction.Reset:
                    this.Columns.Clear();
                    break;
            }
        }

    }
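
As a minimal XAML sketch (the local namespace prefix and the ColumnManager view-model property are hypothetical, not part of the library), the extended grid-view could be wired like this, binding ColumnsSource to the TargetCollection exposed by the manager:

    <ListView ItemsSource="{Binding Rows}">
        <ListView.View>
            <!-- the extended grid-view binds its columns
                 to the manager's target collection -->
            <local:GridViewEx
                ColumnsSource="{Binding ColumnManager.TargetCollection}"
                />
        </ListView.View>
    </ListView>

Note that TargetCollection actually returns the underlying ObservableCollection instance, so the binding satisfies the dependency property's declared type at runtime.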

As you may have noticed, in the demo video the grid is hosted in a window separate from the main one. That's for two reasons:

  • verify that closing the child window does not lead to any memory leak, and
  • verify that any manual width change on the columns is preserved even after the window is destroyed.

Conclusion.

I know, the final version is pretty complex compared to the solutions seen so far. However, the benefits are noticeable; briefly summarized:

  • Of course, the primary goal is fully achieved: each column can be hidden or shown via a simple bool setting;
  • the columns’ configuration is totally controlled by the backing view-model;
  • the user’s size settings are easy to save and load (persist);
  • the animation behavior is now a more generic “animation of a length” (of type Double);
  • an effort was made to avoid modifying the ListView’s style, so the functionality should not interfere with the user’s own styles.
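
To make the first point concrete, here is a minimal sketch (hypothetical view-model code, names invented for illustration; the WPF runtime is required) of declaring a column through the manager and then hiding it with a simple bool flip:

    //create the manager and declare one column through its wrapper
    var manager = new GridViewColumnManager();

    manager.SourceItems.Add(
        new GridViewColumnWrapper
        {
            Name = "price",
            Header = "Price",
            Width = 80.0,
            IsVisible = true,
        });

    //hiding the column is just a bool flip: the attached behavior
    //animates the actual column width down to zero
    manager.SourceItems
        .First(w => w.Name == "price")
        .IsVisible = false;

Note that the `First(...)` call assumes a `using System.Linq;` directive.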

Next time, we’ll see that even this final release is not perfect, and it has a subtle issue. We’ll learn how to fix it by leveraging the right tool!

Click here to download the basic (simplified) demo application, or click here to download the final release.