Thoughts on Integrating Systems & IoT


Reflections #1

We recently completed Phase 1 of a "greenfield" BizTalk 2013 R2 implementation for a client, and I wanted to jot down here some of my thoughts and technical lessons.

My role on the project was Technical Lead: I gathered requirements, generated the technical specifications, set up the BizTalk solutions, kicked off the initial development and provided supervision and guidance to the team.

Before starting on the project about 15 months ago, I had spent quite a bit of time working with a large BizTalk 2009 installation, so I knew that on my next assignment I would be playing with some new technologies in Azure and also using the new(ish) REST adapter. Looking back now, SOAP + WSDL + XML seems like something from a different age!

Here is a list of some key features of this hybrid integration platform:

  • BizTalk sits on-premises and exposes RESTful APIs using the WCF-WebHttp adapter. These APIs provide a standard interface into previously siloed on-premises systems, a big one being the company-wide ERP.
  • Azure Service Bus relays using SAS authentication provide a means of exposing data held on-premises to applications in the cloud. This proved very effective, but a downside is having to ping the endpoints to keep the relay connection established from BizTalk alive; if it shuts down, the relay appears unavailable.
  • A Service Bus queue allows data to be synced from CRM in the cloud to the ERP on-premises; we used the SB-Messaging adapter. Don't forget to check for messages in the dead-letter queue and have a process for managing them (see the sketch after this list).
  • A mixture of asynchronous and synchronous processing. For asynchronous processing we used orchestrations, but synchronous processing (where a person or system is waiting for a response) was messaging-only, with context-based routing.
  • We used the BRE pipeline component to provide the features of a “mini” ESB, almost as a replacement for the ESB Toolkit.
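
As a concrete illustration of the dead-letter point above, here is a minimal console sketch of draining and inspecting the dead-letter sub-queue. It assumes the WindowsAzure.ServiceBus NuGet package (the SDK current at the time of writing); the connection string and queue name are placeholders, not values from our solution.

```csharp
using System;
using Microsoft.ServiceBus.Messaging; // from the WindowsAzure.ServiceBus NuGet package

class DeadLetterMonitor
{
    static void Main()
    {
        // Placeholder values - substitute your own namespace and queue.
        const string connectionString = "Endpoint=sb://yournamespace.servicebus.windows.net/;SharedAccessKeyName=...;SharedAccessKey=...";
        const string queuePath = "crm-to-erp";

        // The dead-letter queue is addressed as a sub-queue of the main queue.
        var dlqPath = QueueClient.FormatDeadLetterPath(queuePath);
        var client = QueueClient.CreateFromConnectionString(connectionString, dlqPath);

        BrokeredMessage message;
        while ((message = client.Receive(TimeSpan.FromSeconds(5))) != null)
        {
            // Service Bus records why the message was dead-lettered in its properties.
            object reason;
            message.Properties.TryGetValue("DeadLetterReason", out reason);
            Console.WriteLine("Dead-lettered message {0}: {1}", message.MessageId, reason);

            // Decide here whether to resubmit, archive or alert, then remove it.
            message.Complete();
        }
    }
}
```

Whatever you do with the messages (resubmit, archive, alert), the important thing is that someone owns the process; dead-lettered messages that nobody looks at are silent data loss.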

In a nutshell, BizTalk is the bridge between on-premises systems and the cloud, while also providing services for the more traditional systems on the ground. I believe this will be a common story for most established companies for many years, but eventually many companies will run all their systems in the cloud.

I expected a few challenges with the WCF-WebHttp adapter, after stories from colleagues who had used it before me. My main concern was handling non-200 HTTP responses, which caused the adapter to throw an exception that was impossible to handle in BizTalk (I understand the adapter now returns 500 errors to the application, thanks to a fix in CU5). So from the start of the project, I requested that the APIs exposed from the ERP system always return HTTP 200 but include a JSON error payload that we could interrogate.
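
To make that pattern concrete, here is a minimal sketch of interrogating such a payload with Json.NET. The envelope's field names (Success, ErrorCode, ErrorMessage) are illustrative assumptions, not the actual contract we agreed with the ERP team.

```csharp
using System;
using Newtonsoft.Json;

// Hypothetical error envelope returned alongside HTTP 200.
public class ErpResponse
{
    public bool Success { get; set; }
    public string ErrorCode { get; set; }
    public string ErrorMessage { get; set; }
}

public static class ErpResponseParser
{
    // e.g. jsonBody = "{ \"Success\": false, \"ErrorCode\": \"ERP-042\", \"ErrorMessage\": \"Item not found\" }"
    public static void ThrowIfError(string jsonBody)
    {
        var response = JsonConvert.DeserializeObject<ErpResponse>(jsonBody);
        if (!response.Success)
        {
            // Surface the ERP error through a controlled exception/error flow,
            // rather than the adapter-level failure we couldn't handle.
            throw new InvalidOperationException(
                string.Format("ERP error {0}: {1}", response.ErrorCode, response.ErrorMessage));
        }
    }
}
```

The design trade-off is that "transport success" and "business success" are decoupled: the adapter always sees a 200, and error handling becomes an explicit, testable step in the solution.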

My colleague Colin Dijkgraaf and Mark Brimble (sadly now an ex-colleague) presented an Integration Monday session on the WCF-WebHttp adapter: "What's right & wrong with WCF-WebHTTP Adapter?".

We also had to treat the JSON encoder and decoder pipeline components with kid gloves: we hit issues serializing messages whenever our XSD schemas contained referenced data types (e.g. from a common schema), and we had to greatly simplify our schemas to work with these components.

The lack of Swagger support (both consuming definitions via a wizard and exposing Swagger endpoints) is also a glaring omission. I manually created Swagger definition files using Restlet Studio, a great tool for documenting APIs that was recommended to me by Mark Brimble.


Creating a Web Service in Azure, Part 1 - Introduction and Architecture

Introduction

In this series of articles I will describe how Azure can be used to power a back end for a front-end application.

My front-end application is a simple RSS aggregator. I follow a number of blogs, and I currently use a Windows Forms application that I wrote to monitor them for new content: the application periodically downloads RSS feeds and processes them on the client side so I can read them offline (the feed content is stored in a SQLite database). I would like a better solution that runs on my Android phone, where feeds can be synced even when the client application isn't running; the back end will instead be responsible for downloading the latest feed content and aggregating the data.

A key design feature is that the new Android client will be lightweight: it will carry out minimal processing of the data and won't be responsible for downloading feeds from the various blogs. Such a setup was passable on my high-spec laptop but won't do for my much lower-spec phone, for these reasons:

  • Downloading feeds from the various blogs will blow out my data plan.
  • Heavy processing on the client side will consume precious CPU cycles and battery power, making my phone slow/unresponsive with a constant need to find the next power outlet to charge it.

So with these limitations in mind, the back end will instead do the "heavy lifting" of downloading and processing feeds, and will ensure that sync data is optimized to the needs of the client, thereby minimizing bandwidth consumption.
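
As a sketch of what "optimized sync data" might look like at the API level: the client passes the timestamp of its last sync and receives only newer items, so the phone never re-downloads data it already has. The route, DTO and in-memory store below are my illustrative assumptions, not the final design (the REST API itself is introduced in the architecture section below).

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web.Http;

// Hypothetical DTO - the real data model is covered in part 2.
public class FeedItemDto
{
    public string Title { get; set; }
    public string Url { get; set; }
    public DateTime PublishedUtc { get; set; }
}

public class FeedItemsController : ApiController
{
    // In-memory stand-in for the EF-backed data provider.
    private static readonly List<FeedItemDto> Store = new List<FeedItemDto>();

    // GET api/feeditems?since=2015-08-01T00:00:00Z
    // Returns only items published after the client's last sync.
    public IHttpActionResult Get(DateTime since)
    {
        var newItems = Store.Where(i => i.PublishedUtc > since).ToList();
        return Ok(newItems);
    }
}
```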

I must also mention that while I was thinking about how Azure could be used to power a back end service, a two-part article was published in MSDN Magazine that is pretty much along the lines I had in mind for my own web service (please see the "References" section below for links). The MSDN articles describe a service that intelligently aggregates Twitter and StackOverflow data, while my proof of concept aggregates RSS feed data from blogs; I draw heavily on these two articles throughout the series.

Another major advantage of a cloud back end (mentioned in the MSDN article series) is better scalability: instead of each client downloading and processing the same feeds individually, the back end application can do this in a single operation, getting around any throttling limitations that may be imposed by some web services. So as the popularity of the app increases, performance doesn't suffer a related decrease (due to throttling) that would damage the app's reputation.

Architecture

The diagram below shows a high level overview of the solution:

Figure 1  Datamate Architecture (Based on Figure 2 in Reference Article [1])

Some of the key features of the architecture are as follows (walking through the diagram from left to right):

  • Azure SQL Database stores the feed data in a relational database, and the data is accessed using Entity Framework (EF) via an internal data provider API. The idea is that as further data sources come on board (beyond RSS feeds), each source (e.g. Twitter) will have its own provider API, implemented to the requirements of that particular source.
  • Azure WebJobs represent the worker processes: they run as scheduled background tasks, downloading and processing RSS feeds and writing the results to the database (see the sketch after this list).
  • A REST API, implemented using ASP.NET Web API, provides an interface for clients to retrieve data.
  • A simple client app (mobile and web) will use the REST API to download data, once authenticated and authorised by the API, and maintain a client-side cache of the data according to the preferences specified by the user.
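
To give a feel for the WebJob piece: a scheduled WebJob is essentially just a console app deployed to the Web App, and System.ServiceModel.Syndication can handle the RSS parsing. A minimal sketch, in which the feed URL is a placeholder and the database write is stubbed out (part 2 covers the real implementation):

```csharp
using System;
using System.ServiceModel.Syndication; // reference the System.ServiceModel assembly
using System.Xml;

class FeedDownloader
{
    static void Main()
    {
        // Placeholder feed list - in the real solution this comes from Azure SQL via EF.
        var feedUrls = new[] { "https://blogs.example.com/feed" };

        foreach (var url in feedUrls)
        {
            using (var reader = XmlReader.Create(url))
            {
                // Parse the RSS/Atom document into a strongly typed feed.
                var feed = SyndicationFeed.Load(reader);
                foreach (var item in feed.Items)
                {
                    // In the real solution, upsert each item into the database here.
                    Console.WriteLine("{0} ({1:u})", item.Title.Text, item.PublishDate);
                }
            }
        }
    }
}
```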

That’s it for now – stay tuned for part 2!!  In the next post, I will discuss the design and development of the Azure SQL Database and Azure WebJob that represent the “backbone” of the solution.

As always, any comments or tips most welcome.

References

[1] MSDN Magazine Aug 2015 Issue, Create a Web Service with Azure Web Apps and WebJobs, Microsoft.  Available from: https://msdn.microsoft.com/en-us/magazine/mt185572.aspx

[2] MSDN Magazine Sep 2015 Issue, Build a Xamarin App with Authentication and Offline Support, Microsoft.  Available from: https://msdn.microsoft.com/en-us/magazine/mt422581.aspx


Timeout Publishing Database Project to Azure

I had created a new database project in Visual Studio 2013 and every attempt to publish the project to my Azure hosted database failed with this error:

Creating publish preview… Failed to import target model [database_name]. Detailed message Unable to reconnect to database: Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.

This was particularly annoying since I could successfully test the connection from VS:

[Screenshot: Visual Studio Test Connection OK]

So why was I getting the timeout?

A quick Google search found a post on Stack Overflow describing the same issue.

As mentioned in the post, I logged into the Azure Management Portal and switched my database from the Basic tier to the Standard tier:

[Screenshot: Switch DB to Standard Tier]

I tried again to publish and hey presto, it finally worked!!

[Screenshot: DB Publish Success]

I then switched my database back to the Basic tier.

Maybe this is a bug, or perhaps the Basic tier doesn't afford sufficient DTUs (Database Transaction Units) to complete the publish within a reasonable time.
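
If you find yourself doing this switch often, it can be scripted rather than clicked through in the portal: Azure SQL Database supports changing the service objective via ALTER DATABASE. A minimal sketch, assuming you connect to the logical server's master database and that S0 is an acceptable Standard objective (the server, database name and credentials below are placeholders); note the tier change is applied asynchronously, so allow it a moment before publishing.

```csharp
using System.Data.SqlClient;

class TierSwitcher
{
    static void Main()
    {
        // Placeholder connection string - connect to the logical server's master database.
        var connectionString =
            "Server=tcp:myserver.database.windows.net;Database=master;User ID=me;Password=...;Encrypt=True;";

        using (var connection = new SqlConnection(connectionString))
        {
            connection.Open();

            // Move the database to the Standard tier (S0); run the same statement
            // with EDITION = 'Basic' to switch it back after publishing.
            var sql = "ALTER DATABASE [MyDatabase] MODIFY (EDITION = 'Standard', SERVICE_OBJECTIVE = 'S0');";
            using (var command = new SqlCommand(sql, connection))
            {
                command.ExecuteNonQuery();
            }
        }
    }
}
```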