In a series of articles I will describe how Azure can be used to power a back end for a front end application.
My front end application is a simple RSS aggregator. I follow a number of blogs and currently use a Windows Forms application I wrote to monitor for new content: the application periodically downloads RSS feeds and processes them on the client side so that I can read them offline (the feed content is stored in a SQLite database). I would like a better solution that runs on my Android phone and syncs feeds even when the client application isn’t running; the back end will instead be responsible for downloading the latest feed content and aggregating the data.
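To give a flavour of the kind of local cache the existing client keeps, here is a minimal sketch in Python using SQLite. The table and column names are hypothetical – they illustrate the idea of caching feed items locally, not the actual schema of my application.

```python
import sqlite3

def create_feed_cache(path=":memory:"):
    # Hypothetical schema: one row per downloaded feed item.
    conn = sqlite3.connect(path)
    conn.execute("""
        CREATE TABLE IF NOT EXISTS feed_item (
            id        INTEGER PRIMARY KEY,
            feed_url  TEXT NOT NULL,
            title     TEXT NOT NULL,
            link      TEXT NOT NULL UNIQUE,
            published TEXT,
            content   TEXT
        )""")
    return conn

def cache_item(conn, feed_url, title, link, published=None, content=None):
    # INSERT OR IGNORE keeps re-downloads idempotent: an item that is
    # already cached (same link) is simply skipped.
    conn.execute(
        "INSERT OR IGNORE INTO feed_item "
        "(feed_url, title, link, published, content) VALUES (?, ?, ?, ?, ?)",
        (feed_url, title, link, published, content))
    conn.commit()
```

The `UNIQUE` constraint on the item link is what makes repeated feed downloads safe to replay against the cache.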
A key design feature is that the new Android client will be lightweight: it will carry out minimal processing of the data and won’t be responsible for downloading feeds from the various blogs. Such a setup was passable for my high-spec laptop but won’t do for my much lower-spec phone, for these reasons:
- Downloading feeds from the various blogs will blow out my data plan.
- Heavy processing on the client side will consume precious CPU cycles and battery power, making my phone slow/unresponsive with a constant need to find the next power outlet to charge it.
So with these limitations in mind, the back end will instead do the “heavy lifting” of downloading and processing feeds and ensure that sync data is optimized to the needs of the client, thus minimizing bandwidth consumption.
I should also mention that while I was thinking about how Azure could be used to power a back end service, a two-part article was published in MSDN Magazine that is pretty much along the lines I had in mind for my own web service (please see the “References” section below for links to these two articles). The MSDN articles describe a service that intelligently aggregates Twitter and StackOverflow data, while my proof of concept aggregates RSS feed data from blogs. I draw on these two articles heavily in this series.
Another major advantage (mentioned in the MSDN article series) of a cloud back end is better scalability: instead of each client downloading and processing the same feeds individually, the back end application can do this in a single operation, getting around any throttling limitations that may be imposed by some web services. So as the popularity of an app increases, there is no corresponding decrease in performance (due to throttling) that would damage the reputation of the app.
The diagram below shows a high level overview of the solution:
Some of the key features of the architecture are as follows (walking through the diagram from left to right):
- Azure SQL Database is used to store feed data in a relational database, and the data is accessed using Entity Framework (EF) via an internal data provider API. It is envisaged that as further data sources come on board (beyond just RSS feeds), each data source (e.g. Twitter) will have its own provider API, implemented to the requirements of the particular data source being onboarded.
- Azure WebJobs represent the worker processes – they run as scheduled background tasks, downloading and processing RSS feeds and writing the results to the database.
- A REST API, implemented using ASP.NET Web API, provides an interface for clients to retrieve data.
- A simple client app (mobile and web) will use the REST API, once authenticated and authorised, to download data and maintain a client-side cache of it, according to the preferences specified by the user.
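To make the WebJob’s role more concrete, here is a rough sketch (in Python, purely for illustration) of the “heavy lifting” step: download an RSS document once, server side, and reduce it to the small set of fields the client actually needs. In the real solution this logic would sit behind the data provider API and write to Azure SQL Database via EF; the function names here are my own invention.

```python
import xml.etree.ElementTree as ET
from urllib.request import urlopen

def parse_rss(xml_text):
    # Reduce an RSS 2.0 document to the minimal fields a lightweight
    # client needs: title, link and publication date per item.
    channel = ET.fromstring(xml_text).find("channel")
    items = []
    for item in channel.findall("item"):
        items.append({
            "title": item.findtext("title", default=""),
            "link": item.findtext("link", default=""),
            "published": item.findtext("pubDate", default=""),
        })
    return items

def sync_feed(feed_url):
    # The download happens once, in the back end -- not on every phone.
    with urlopen(feed_url) as response:
        return parse_rss(response.read())
```

The key point is that the expensive operations (network download, XML parsing) happen once per feed in the back end, and clients only ever see the trimmed-down result.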
That’s it for now – stay tuned for part 2! In the next post, I will discuss the design and development of the Azure SQL Database and the Azure WebJob that form the “backbone” of the solution.
As always, any comments or tips most welcome.
References
 MSDN Magazine Aug 2015 Issue, Create a Web Service with Azure Web Apps and WebJobs, Microsoft. Available from: https://msdn.microsoft.com/en-us/magazine/mt185572.aspx
 MSDN Magazine Sep 2015 Issue, Build a Xamarin App with Authentication and Offline Support, Microsoft. Available from: https://msdn.microsoft.com/en-us/magazine/mt422581.aspx
I will outline here some of my thoughts on Microsoft’s new venture in the EAI space, announced at the Integrate 2014 conference: the BizTalk Microservices Platform. I guess this could be called “BizTalk Services 2.0”.
When I first heard the news, I was surprised. I didn’t attend the conference but had one eye on the #integrate2014 hashtag and as the talks progressed, I got progressively more excited as I saw more of the platform. My immediate reaction was: “this is Docker!”.
My intention in this post is to put forward what the new platform might look like and what it means for integration devs like me. I should mention that these thoughts are the result of reading various blogs after the event plus my own personal experience, so there may be inaccuracies.
Before putting forward my ideas though, I would like to discuss Microsoft’s current cloud EAI offering – aka “BizTalk Services 1.0”.
Background – BizTalk Services 1.0
Like a lot of BizTalk devs, I suspect, I have spent quite a bit of time playing with Microsoft Azure BizTalk Services (MABS) in my spare time whilst working with BizTalk Server during the day.
It’s easy to get up and going: install the BizTalk Services SDK and you are pretty much there. This is a comprehensive guide that I found useful.
I could pick up the concepts from my BizTalk Server (and ESB Toolkit) experience quite quickly.
To be honest it left me quite disappointed, since I was expecting some radical new thinking and tooling: it felt like an attempt to replicate BizTalk Server in the cloud. As the year wore on, it seemed little attention was paid to MABS by the product team; for example, one still has to use Visual Studio 2012 to create a solution (no update to allow Visual Studio 2013). This made me think that the team must be “heads down” working hard on version 2.0 and too busy to release minor versions.
Another major issue is around unit and integration testing. It’s only possible to test a solution by deploying it to Azure (directly from Visual Studio). The tooling to exercise your solution is also limited: perhaps the highlight is a VS plugin that is very much an alpha version, with little TLC given to it since its release.
Another burning question for me was: how is BizTalk Services actually realised in Azure? Originally I remember reading that MABS would be a multi-tenant affair. But I know that I needed to create my own tracking database… Surely this would be shared storage, I thought? However I’ll quote from Saravana’s latest blog post here:
Under the hood, completely isolated physical infrastructure (Virtual machines, storage, databases etc) are created for every new BizTalk Service provisioning. 
So this explains why it wouldn’t be possible to deploy and test a MABS solution locally, on my developer workstation. It wouldn’t be possible to replicate this runtime environment locally (without a lot of work anyway).
Also it’s obvious that a MABS solution is completely tied to Azure and cannot exist outside of it.
So my final thought on the first version of MABS was: a bit disappointing, nothing really radical here, good for lightweight integration scenarios (at best), hope 2.0 is (heaps) better and given much more TLC than 1.0…
The Inbetween Months – MABS 1.0 => MABS 2.0
So my day job got busy and I put MABS aside for a bit.
I have a FreeNAS box at home so I have spent some time this year setting it up, playing around with it and using it as a central place to store my photos, files and Git repos for my personal projects. It was then that I uncovered the concept of “jails”, which is basically a way of sandboxing. Each jail is bound to a separate IP address. Further info can be found here.
Learning about the jail idea led me to read about an interesting open source project called “Docker”. I’m certainly no expert on this subject: I think of it in a similar way to the jail concept, but instead of the term “jail”, the term “container” applies. Docker has an engine that enables containers to communicate with one another.
What I really like about Docker is that it is a “grassroots”, developer-led project. After a period of introspection (and no doubt feedback from customers and market forces), this seems to be something Microsoft is keen to promote: that is, a focus on developers and collaboration. .NET going cross platform and open source supports this.
Announcement of the BizTalk Microservices Platform (MABS 2.0?)
What I was alluding to in the previous section is that (with that wonderful thing called hindsight) I shouldn’t have been so surprised to hear that the next version of MABS would run on a “microservices platform”. In fact, this platform will be a key infrastructure component/concept of Azure: “BizTalk Microservices” will just be a subset of a new microservices platform. I suspect Microsoft will build its own container management technology for Azure and that Azure containers will be compatible with Docker. A Docker engine for Windows Server is already on the cards. I have heard mention of the term “Azure App Platform”, and I think this will be a container management platform (or at least a component of a bigger platform).
From the diagram below, app containers are a foundational, basic building block of the platform.
I think the idea is that each container will have a single very specific function (i.e. be “granular”). Containers will be linked together (communicating over HTTP?) and as a whole be able to do something useful. So a container could be a transform container, a data access container, a business rules container, a service bus connectivity container and so on (I wonder if there will be a bridge container, or is this a monolith? ;-)). It would be possible for each container to be implemented using different technologies: the common factor would occur at the network transport level.
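As a toy illustration of this “one container, one granular function” idea, here is a sketch (in Python, chosen arbitrarily – the whole point is that the language wouldn’t matter) of a transform container that accepts a JSON document over HTTP and returns the transformed result. The transformation itself (upper-casing field names) is deliberately trivial and entirely hypothetical; the shape is what matters: a small service doing one job, composable with others over plain HTTP.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def transform(document):
    # A stand-in for whatever mapping a real transform container would do.
    return {key.upper(): value for key, value in document.items()}

class TransformHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the incoming JSON document, transform it, and reply.
        length = int(self.headers["Content-Length"])
        document = json.loads(self.rfile.read(length))
        body = json.dumps(transform(document)).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

# To run this as a standalone service, each "container" would listen on
# its own address/port, e.g.:
#     HTTPServer(("", 8080), TransformHandler).serve_forever()
```

Because the only contract is HTTP plus a JSON payload, a data access container or rules container alongside it could be written in a completely different technology, which is exactly the appeal of the model.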
I reckon the (specific) term “BizTalk Microservices” will simply be a way of grouping integration specific containers together in the “toolbox”. BizTalk as a “product” (in the cloud) will be reduced to just a name and a way of locating containers specific to integration, nothing more. Over time, many other containers with a very specific function will come onboard: templates (“accelerators”) will exist which are basically a grouping of containers, designed to tackle a specific end-to-end problem.
Another deal clincher with the platform is that it would be possible to host, run and test solutions locally on my dev workstation. Hopefully the Azure SDK will enable a local runtime environment for running containers (or maybe the Windows Server Docker engine will enable running Azure containers on-premises).
I’m really excited with the announcements at the conference. It’s very early days, but the microservices concept seems to be a good model: a way of decomposing services into their component (micro) parts that can be finely scaled and tuned (hopefully addressing some of the failings of “old school” SOA, particularly around scalability).
In a number of sources I have read the comment that it’s a really exciting time to be a developer, and I agree. It’s exciting because cloud computing requires some new thinking and tools, which need to be built. There is also a “cross-pollination” of ideas between different platforms.
I can’t wait for the preview of the microservices platform, which looks to be available early next year.
 Microsoft making big bets on Microservices – Saravana Kumar, Dec 4th 2014
I had created a new database project in Visual Studio 2013 and every attempt to publish the project to my Azure hosted database failed with this error:
Creating publish preview… Failed to import target model [database_name]. Detailed message Unable to reconnect to database: Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.
This was particularly annoying since I could successfully test the connection from VS:
So why was I getting the timeout?
A quick Google found this post on stackoverflow.
As mentioned in the post, I logged into the Azure Management Portal and switched my database to the Standard tier, from the Basic tier:
I tried again to publish and hey presto, it finally worked!
I then switched my database back to the Basic tier.
Maybe this is a bug, or perhaps the Basic tier doesn’t afford sufficient DTUs (Database Throughput Units) to complete the publish within a reasonable time.