Thoughts on Integrating Systems & IoT


Azure Logic Apps CI/CD – Part 1 – Introduction

In a series of posts I will demonstrate how to set up a continuous integration and continuous deployment (CI/CD) pipeline for Azure Logic Apps.  I also intend to touch on general Logic Apps development topics along the way, as well as the broader subject of DevOps practices.

This first blog post introduces the scenario and the Logic App that will eventually be a well-oiled component of a slick automated delivery pipeline using Visual Studio Team Services (VSTS).

Introduction to the Scenario – Low or High Priority Purchase Order Processing

In order to demonstrate CI/CD with Logic Apps we will use the following simple Logic App:

b2b.processpurchaseorder.servicebus Logic App

The Logic App does the following:

  1. Receives a batch of Purchase Orders (PO) via an HTTP request.
  2. The variable “CBRFilter” is initialized; it will hold the calculated priority of the PO.
    1. This variable will be assigned further on, for the purposes of Content Based Routing (CBR) on Azure Service Bus.
  3. The Condition action will evaluate the total PO amount and assign low or high priority accordingly.
    1. If the PO amount is < $200K, then the CBR filter variable will be assigned the value “PriorityLow”.
    2. If the PO amount is >= $200K, then the CBR filter variable will be assigned the value “PriorityHigh”.
  4. Finally, we publish the PO to a Service Bus topic with the low/high priority filter assigned such that the message will appear on the low/high priority topic subscription (a rough code equivalent of this routing follows below).
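
For those who prefer to see the routing logic as code, here is a rough C# equivalent, assuming the Azure.Messaging.ServiceBus SDK, a topic named “purchaseorders” and subscription SQL filters on a “CBRFilter” property (these names are illustrative; the Logic App itself uses the Service Bus connector and a Set variable action):

// Illustration only: a rough code equivalent of the Logic App's condition and publish steps.
using System.Threading.Tasks;
using Azure.Messaging.ServiceBus;

public static class PurchaseOrderPublisher
{
    public static async Task PublishAsync(string connectionString, string poJson, decimal totalAmount)
    {
        // Equivalent of the Condition action: assign the CBR filter based on the PO total.
        string cbrFilter = totalAmount < 200000m ? "PriorityLow" : "PriorityHigh";

        await using var client = new ServiceBusClient(connectionString);
        ServiceBusSender sender = client.CreateSender("purchaseorders");

        var message = new ServiceBusMessage(poJson);

        // The topic subscriptions use SQL filters such as "CBRFilter = 'PriorityHigh'",
        // so the message only appears on the matching low/high priority subscription.
        message.ApplicationProperties["CBRFilter"] = cbrFilter;

        await sender.SendMessageAsync(message);
    }
}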

Testing the Logic App

Ok, let’s take the Logic App for a spin and try it out.

Using Postman, I call the Logic App endpoint, passing in a PO batch as follows:

Calling Logic App via Postman

Logging in to the Azure Portal, I can check that the Logic App has worked as expected and has assigned a “high priority” CBR filter, since the total PO batch amount is indeed > $200K:

Logic App Run with High Priority Assigned

And using Service Bus Explorer, I can see that the PO has been picked up by the high priority topic subscription:

Service Bus Explorer

Conclusion

So this concludes Part 1 of this series, in which I will demonstrate setting up a CI/CD pipeline for the Logic App introduced in this post.

In the next post I will take you through downloading and parameterising, in Visual Studio, the ARM template that will be used for deploying the Logic App to the various Azure resource groups.

Stay tuned!

 

IoT Home Automation Solution

This is my idea for an IoT home automation solution, based on various discussions with my colleagues and the interns who have worked with us.  My aim is to make this an Open Source project for myself and others to learn more about Azure, IoT, C#, Docker and electronics.  Hopefully colleagues from Datacom will join in, and anyone from elsewhere is most welcome to pitch in!

Here is a high level diagram:

Home Automation Solution Diagram (v0.1)

 

Further detail about each numbered component follows:

  1. A Raspberry Pi running the latest version of Raspbian Lite, acting as a protocol gateway and edge device.  This would allow connectivity to various devices and sensors installed in the home using a range of different protocols.  It would run IoT Edge and Docker.  Two Docker containers would run on it: one hosting an ASP.NET Core MVC app for managing devices via a browser-based portal, and one hosting an ASP.NET Core API app allowing control via a Xamarin app running on an Android phone or a smart watch.
  2. Devices and other sensors in the home that would be controlled via commands issued from the RPi, using each device’s native protocol.
  3. The smartphone or watch can issue commands to devices in the home, view sensor data and allow device configuration, via the API.  Note: when the user is at home, it would be possible to control and configure devices locally (no cloud integration).  Any device config changes will need to be synced to the cloud.
  4. Azure IoT Hub would provide cloud based device management and integration with devices outside of the home.  For example, it would be possible to switch on a heat pump remotely.  IoT Hub would communicate with the RPi over a protocol such as MQTT.
  5. Azure Event Grid and Webhooks provide an event notification service, pushing events to the smartphone/watch.  Events could be warnings (such as an intruder entering the property, or a sensor detecting drug manufacturing in a rental) or notification of an actioned command, such as completion of making a cup of coffee.
  6. Smartphone or watch can issue commands remotely, using the cloud to connect to the home.
  7. This API app provides an interface for issuing device commands and retrieving device data whilst away from home.  It could run as a container in App Service or in Azure Container Service (a minimal sketch of such an API follows this list).
  8. A CI/CD pipeline would be implemented using a Git repo hosted in GitHub with Docker integration.  Docker images will provide a standard, self-contained mechanism for deploying changes to the ASP.NET Core MVC and API apps installed on the RPi.
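
To make points 3 and 7 a little more concrete, here is a minimal, hypothetical sketch of the device command API as an ASP.NET Core controller.  The route and the DeviceCommand type are illustrative only and not part of any existing project:

// Hypothetical sketch only: an ASP.NET Core controller in the API app that accepts device commands.
using Microsoft.AspNetCore.Mvc;

[ApiController]
[Route("api/devices")]
public class DeviceCommandController : ControllerBase
{
    // e.g. POST api/devices/heatpump/commands with body { "command": "on" }
    [HttpPost("{deviceId}/commands")]
    public IActionResult IssueCommand(string deviceId, [FromBody] DeviceCommand command)
    {
        // A real implementation would translate the command into the device's native
        // protocol and forward it via the protocol gateway running on the RPi.
        return Accepted();
    }
}

// Simple DTO representing a command issued from the Xamarin app or from the cloud.
public class DeviceCommand
{
    public string Command { get; set; }
}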

If you have any comments or suggestions I would be really interested to hear them, and I intend to blog on this much more.

Reflections #1

We have recently completed Phase 1 of a “greenfields” BizTalk 2013 R2 implementation for a client and I wanted to jot down here some of my thoughts and technical learnings.

My role on the project was Technical Lead: I gathered requirements, generated the technical specifications, set up the BizTalk solutions, kicked off the initial development and provided supervision and guidance to the team.

Before starting on the project about 15 months ago, I had previously spent quite a bit of time working with a large BizTalk 2009 installation, so I knew that on my next assignment I would be playing with some new technologies in Azure and also using the new(ish) REST adapter. Looking back now, SOAP+WSDL+XML seems like something from a different age!

Here is a list of some key features of this hybrid integration platform:

  • BizTalk sits on-premises and exposes RESTful APIs using the WCF-WebHttp adapter. These APIs provide a standard interface into previously siloed systems on-premises, a big one being the company wide ERP.
  • Azure Service Bus relays using SAS authentication provide a means of exposing data held on-premises to applications in the cloud.  This proved very effective, but a downside is having to ping the endpoints to ensure that the relay connection established from BizTalk doesn’t shut down, which would result in the relay appearing unavailable.
  • A Service Bus queue allows the syncing of data from CRM in the cloud to the ERP on-premises.  We used the SB-Messaging adapter.  Don’t forget to check for messages in the dead letter queue and have a process around managing this (a sketch of reading the dead letter queue follows this list).
  • A mixture of asynchronous and synchronous processing.  For asynchronous processing we used orchestrations, but synchronous processing (where a person or system is waiting for a response) was messaging only (context based routing only).
  • We used the BRE pipeline component to provide the features of a “mini” ESB, almost as a replacement for the ESB Toolkit.
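
As an aside, here is a hypothetical sketch of the sort of dead letter housekeeping process mentioned above.  It uses the newer Azure.Messaging.ServiceBus SDK rather than the SB-Messaging adapter, and all names are illustrative:

// Hypothetical sketch: drain a subscription's dead letter queue and log the reason for each message.
using System;
using System.Threading.Tasks;
using Azure.Messaging.ServiceBus;

class DeadLetterMonitor
{
    public static async Task DrainAsync(string connectionString, string topic, string subscription)
    {
        await using var client = new ServiceBusClient(connectionString);

        // SubQueue.DeadLetter points the receiver at the subscription's dead letter queue.
        ServiceBusReceiver receiver = client.CreateReceiver(topic, subscription,
            new ServiceBusReceiverOptions { SubQueue = SubQueue.DeadLetter });

        ServiceBusReceivedMessage message;
        while ((message = await receiver.ReceiveMessageAsync(TimeSpan.FromSeconds(5))) != null)
        {
            // Log the reason and decide whether to resubmit, archive or discard the message.
            Console.WriteLine($"Dead lettered: {message.MessageId}, reason: {message.DeadLetterReason}");
            await receiver.CompleteMessageAsync(message);
        }
    }
}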

In a nutshell, BizTalk is the bridge between on-premises systems and the cloud, while also providing services for the more traditional systems on the ground.  I believe this will be a common story for most established companies for many years but eventually, many companies will run all their systems in the cloud.

I expected a few challenges with the WCF-WebHttp adapter, after stories from colleagues who had used the adapter before me.  My main concern was handling non-HTTP 200 responses, which triggered the adapter to throw an exception that was impossible to handle in BizTalk (I understand the adapter now returns 500 errors to BizTalk, following a fix in CU5).  So from the start of the project, I requested that the APIs exposed from the ERP system always return HTTP 200 but include a JSON error message that we could interrogate.

My colleague Colin Dijkgraaf and Mark Brimble (sadly now an ex-colleague) presented an Integration Monday session on the WCF-WebHttp adapter – What’s right & wrong with WCF-WebHTTP Adapter?.

We also had to treat the JSON encoder and decoder pipeline components with kid gloves, with issues serializing messages if our XSD schemas contained referenced data types (e.g. from a common schema).  We had to greatly simplify our schemas to work with these components.

Also, the lack of support for Swagger (consumption via a wizard and exposing Swagger endpoints) is a glaring omission.  I manually created Swagger definition files using Restlet Studio, a great tool for documenting APIs that was recommended to me by Mark Brimble.

New Book Released – “Robust Cloud Integration with Azure”

Just a shout out that our new book, published by Packt, has recently been released!


Many congratulations to my fellow authors: Mahindra Morar, Abhishek Kumar, Martin Abbott, Gyanendra Kumar Gautam and Ashish Bhambhani.

We are very fortunate to have a cast of expert reviewers.  My sincere thanks for taking the time to review our chapters and provide feedback: Bill Chesnut, Glenn Colpaert, Howard S. Edidin and Riaan Gouws.

The book represents about a year of hard work.

It was great to work with my fellow authors.  We are based in Australia, New Zealand and the United States and worked collaboratively.

The book is about designing and building “robust” (i.e. well designed) integration solutions using Azure (Microsoft’s cloud platform).  It starts off by setting the scene around what cloud integration is and why companies are interested (or should be interested!).  The chapters that follow then provide practical examples around:

  • Building an API in the cloud and hosting in App Service
  • Utilising API Management
  • Creating workflows in the cloud using Logic Apps
  • Hooking up and extending the functionality of SaaS applications using Logic Apps
  • Using Functions to run arbitrary code
  • Building loosely coupled and scalable solutions using Service Bus
  • An introduction to Event Hubs and IoT Hubs
  • Using the Enterprise Integration Pack to handle EAI/B2B type scenarios
  • Hybrid integration – pairing BizTalk Server 2016 with Logic Apps
  • Keeping within the theme of “robustness”, a rundown of the tooling and monitoring available for Logic Apps
  • Crystal ball gazing – what’s next for Microsoft integration?  Which involves a look at Flow, the lightweight version of Logic Apps.

As you can see, it touches on all the tools available in Azure to build integration solutions, hooking up different solutions and devices (on and off premises).

Personally, it has been a very busy last 12 months, with the blessing of another son born (son #2), working on the book and also engaged on a very demanding project at work.  I learnt lots from writing my sections and also reading chapters written by others.  It was great to make friends with new people from different parts of the world too.

I hope you find the book a useful addition to your library :-).

Creating a Web Service in Azure, Part 1- Introduction and Architecture

Introduction

In a series of articles I will describe how Azure can be used to power a back end for a front end application.

My front end application is a simple Rss aggregator.  I follow a number of blogs and I currently use a Windows forms application that I wrote to monitor for new content: the application periodically downloads Rss feeds and processes them on the client side so I can then read them offline (the feed content is stored in a SQLite database).  I would like a better solution that can run on my Android phone and also where feeds can be synced even when the client application isn’t running; the back end will instead be responsible for downloading latest feed content and aggregating the data.

A key design feature is that the new Android client will be lightweight: it will carry out minimal processing of the data and won’t be responsible for downloading feeds from the various blogs.  Such a setup was passable for my high spec laptop but won’t do for my much lower spec phone for these reasons:

  • Downloading feeds from the various blogs will blow out my data plan.
  • Heavy processing on the client side will consume precious CPU cycles and battery power, making my phone slow/unresponsive with a constant need to find the next power outlet to charge it.

So with these limitations in mind, the back end will instead do the “heavy lifting” of downloading and processing feeds and will ensure that sync data is optimized to the needs of the client, minimizing bandwidth consumption.

I must also mention that while I was thinking about how Azure could be used to power a back end service, a two-part series of articles was published in MSDN Magazine that is pretty much along the lines of what I was planning for my own web service (please see the “References” section below for links to these two articles).  The MSDN articles describe a service that intelligently aggregates Twitter and StackOverflow data, while my proof of concept aggregates Rss feed data from blogs, for example.  I draw on these two articles heavily in the series.

Another major advantage (mentioned in the MSDN article series) of a cloud back end is better scalability: instead of each client downloading and processing the same feeds individually, the back end application can do this in a single operation, getting around any throttling limitations that may be imposed on some web services.  So as the popularity of an app increases, this doesn’t result in a related decrease in performance (due to throttling) which would damage the reputation of the app.

Architecture

The diagram below shows a high level overview of the solution:

Figure 1  Datamate Architecture (Based on Figure 2 in Reference Article [1])

Some of the key features of the architecture are as follows (walking through the diagram from left to right):

  • Azure SQL Database is used to store feed data in a relational database, and the data is accessed using Entity Framework (EF) via an internal data provider API.  It is envisaged that as further data sources come on board (other than just Rss feeds), each data source (e.g. Twitter) will have its own provider API implemented to the requirements of that particular data source.
  • Azure WebJobs represent the worker processes – they run as a scheduled background task, downloading and processing Rss feeds and writing the results to the database (a rough sketch of this worker follows the list).
  • A REST API, implemented using ASP.NET Web API, provides an interface for clients to retrieve data.
  • A simple client app (mobile and web) will use the REST API to download data and maintain a client side cache of the data, to the preferences specified by the user, once authenticated and authorised by the REST API.
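
To give a flavour of the WebJob worker, here is a rough, hypothetical sketch.  SyndicationFeed comes from System.ServiceModel.Syndication; the IFeedDataProvider interface is illustrative only and stands in for the internal data provider API mentioned above:

// Hypothetical sketch of the worker: download an Rss feed and hand the items to a data provider.
using System;
using System.ServiceModel.Syndication;
using System.Xml;

public class FeedWorker
{
    private readonly IFeedDataProvider _provider;

    public FeedWorker(IFeedDataProvider provider)
    {
        _provider = provider;
    }

    // Intended to be called by the WebJob on a schedule.
    public void ProcessFeed(string feedUrl)
    {
        using (XmlReader reader = XmlReader.Create(feedUrl))
        {
            SyndicationFeed feed = SyndicationFeed.Load(reader);

            foreach (SyndicationItem item in feed.Items)
            {
                // The provider persists the item via Entity Framework into Azure SQL Database.
                _provider.SaveItem(feedUrl, item.Title.Text, item.Summary?.Text,
                    item.PublishDate.UtcDateTime);
            }
        }
    }
}

// Illustrative data provider abstraction (one per data source, as described above).
public interface IFeedDataProvider
{
    void SaveItem(string feedUrl, string title, string summary, DateTime publishedUtc);
}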

That’s it for now – stay tuned for part 2!!  In the next post, I will discuss the design and development of the Azure SQL Database and Azure WebJob that represent the “backbone” of the solution.

As always, any comments or tips most welcome.

References

[1] MSDN Magazine Aug 2015 Issue, Create a Web Service with Azure Web Apps and WebJobs, Microsoft.  Available from: https://msdn.microsoft.com/en-us/magazine/mt185572.aspx

[2] MSDN Magazine Sep 2015 Issue, Build a Xamarin App with Authentication and Offline Support, Microsoft.  Available from: https://msdn.microsoft.com/en-us/magazine/mt422581.aspx

ACSUG Event – Integration Saturday 2015

I had a very enjoyable day at the Integration Saturday event, organised by the Auckland Connected Systems User Group (ACSUG), of which I’m a member.  Many thanks to the organizers and sponsors (Datacom, Mexia, Adaptiv, Theta and Microsoft).

I hope that we do this again next year (Integration Saturday 2016!!) and also that this kicks off regular catch ups.

The Meetup site is here.

I think it’s incredible, given our size, how many talented integration specialists we have here in New Zealand (and also Australia).  And that so many people showed up, given the lousy weather on the day (wet and windy)!

Personally, it was great to meet new people and catch up with existing acquaintances and friends during the breaks and at lunch.  Man that beer at the end was the best eh?!!

Here’s a list of the sessions:

What’s New on Integration – Bill Chesnut, Mexia
Azure App Services – Connecting the Dots of Web, Mobile and Integration – Wagner Silveira, Theta
API Apps, Logic Apps, and Azure API Management Deep Dive – Johann Cooper, Datacom
Real Life SOA, Sentinet and the ESB Toolkit – James Corbould, Datacom
REST and Azure Service Bus – Mahindra Morar, Datacom
What Integration Technology Should I Use? – Mark Brimble, Datacom
Top Ten Integration Productivity Tools and Frameworks  – Nikolai Blackie, Adaptiv
An Example of Continuous Integration with BizTalk – Bill Chesnut, Mexia

I was very fortunate to be able to present a session (Real Life SOA, Sentinet and the ESB Toolkit).  A PDF of my slides and notes is available here and source code for my demos can be found here on GitHub (clone: https://github.com/jamescorbould/Ajax.ESB.Licensing.git).

As usual, please don’t hesitate to contact me if you would like to discuss any points raised during the talk…  In particular, there was quite a buzz of excitement after my demo of Sentinet (an SOA and API management tool).  I think this platform was new to most people and it has a lot to offer – as mentioned, stay tuned for some blog posts on this tool :-).

So thanks again to Craig and Mark for organizing the event and to Bill for flying over from Oz.

Integration Saturday July 18, 2015 – Presentations

Just wanted to shout out about this exciting integration event coming to Auckland NZ on July 18. Many thanks to the organizers. This will be held at Datacom, 210 Federal St, Auckland CBD.

Connected Pawns

The Auckland Connected Systems User Group (ACSUG) will be holding a one day mini-symposium on Saturday July 18th in Datacom’s cafe. Please see the ACSUG site for further details and how to register. This is a free event and will be restricted to the first 70 people who register.

This will be a jam-packed day and is aimed at integration developers.

The keynote speaker is Bill Chesnut from Mexia, who will start the meeting with a presentation on the latest trends in integration. He will be followed by local integration experts who will present on new integration patterns in the cloud, integration with REST, SOA, ESB, CI with BizTalk and integration tips. For a full list of speakers see this link. A list of the talks can be found below:

1. What’s new on integration – Bill Chesnut  9.00- 9.45

2. Azure App Services – connecting the…


Notes on Creating a Streaming Pipeline Component Based on the BizTalk VirtualStream Class

In this post I’m going to discuss and demonstrate how to create a streaming pipeline component.  I’ll show some of the benefits and also highlight the challenges I encountered using the BizTalk VirtualStream class.

I’m using BizTalk 2013 R2 for the purposes of this demonstration (update: or I was until my Azure dev VM died due to an evaluation edition of SQL Server expiring – I switched to just BizTalk 2013).

If you would like the short version of this post, the code can be viewed here and the git clone URL is: https://github.com/jamescorbould/Ajax.BizTalk.DocMan.git (hope you read on though ;-)).

The Place of the Pipeline Component

As we all know, a pipeline contains one or many pipeline components.  Pipeline components can be custom written and also BizTalk includes various “out of the box” components for common tasks. In most cases, a pipeline is configured on a port to execute on receiving a message and on sending a message.

On the receive, the flow is: adapter -> pipeline -> map -> messagebox.

On the send, the flow is: messagebox -> map -> pipeline -> adapter.

What is a “Non-Streaming” Pipeline Component?

Typically, such a pipeline component implements one or more of the following practices:

  1. Whole messages are loaded into memory in one go instead of one small chunk at a time (to ensure a consistent memory footprint, irrespective of message size). If the message is large, this practice can consume a lot of memory, leading to a poorly performing pipeline and also potentially triggering a throttling condition for the host instance.
  2. Messages are loaded into an XML DOM (Document Object Model) in order to be manipulated, for example using XMLDocument or XDocument. This causes the message to be loaded entirely into memory into a DOM, creating a much larger memory footprint than the actual message size (some sources indicate 10 times larger than the file size). Similarly (to a lesser extent than loading into a DOM), loading a message into a string datatype will result in the entire message being written into memory.
  3. Messages are not returned to the pipeline shortly after being received; instead, messages are processed and then returned to the pipeline. So further pipeline processing is blocked until the pipeline component has completed processing.

Example 1: Non-Streaming Pipeline Component

Here is an example of a non-streaming pipeline component (the entire solution can be downloaded as indicated in the intro to this post: this code is located in project Ajax.BizTalk.DocMan.PipelineComponent.Base64Encode.NotStreamingBad).

Fig 1.  Non-Streaming Pipeline Component Example

As shown, the example encodes a stream into neutral base64 format for sending on the wire and inserts it into another message…
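
Here is a simplified sketch of the same anti-pattern (not the exact code from the repo, which is in the linked solution).  Only the Execute method is shown; the rest of the pipeline component plumbing (IBaseComponent, IPersistPropertyBag, IComponentUI) is omitted:

using System;
using System.IO;
using System.Text;
using System.Xml.Linq;
using Microsoft.BizTalk.Component.Interop;
using Microsoft.BizTalk.Message.Interop;

public IBaseMessage Execute(IPipelineContext pContext, IBaseMessage pInMsg)
{
    // 1. The entire message stream is read into memory in one go.
    byte[] allBytes;
    using (var buffer = new MemoryStream())
    {
        pInMsg.BodyPart.GetOriginalDataStream().CopyTo(buffer);
        allBytes = buffer.ToArray();
    }

    // 2. The whole payload is base64 encoded and loaded into a DOM (XDocument),
    //    multiplying the in-memory footprint for large messages.
    string base64 = Convert.ToBase64String(allBytes);
    var doc = new XDocument(new XElement("Document", new XElement("Content", base64)));

    // 3. Only now is control returned to the pipeline, with a fully materialised message.
    pInMsg.BodyPart.Data = new MemoryStream(Encoding.UTF8.GetBytes(doc.ToString()));
    return pInMsg;
}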

In summary, this example is less than optimal:

  1. It uses XDocument which loads the message entirely into memory in one go, into a DOM, which is memory intensive for large messages.
  2. Control is not returned to the pipeline straight away; the pipeline component does some processing and then returns control to the pipeline, which means pipeline processing is blocked until this pipeline component has finished processing the message. This potentially slows pipeline processing down.

Example 2: Streaming Pipeline Component

BizTalk ships with some custom streaming classes in the Microsoft.BizTalk.Streaming namespace and there are a number of sources out there that detail how to use them (please see the “Further Resources” section at the end of this post for a list of some that I have found).

As mentioned in the title of this blog post, the one I have used in this example is the VirtualStream class. It’s “virtual” since it uses disk (a temporary file) as a backing store (instead of memory) for storing bytes exceeding the configurable threshold size: this reduces and ensures a consistent memory footprint. A couple of potential disadvantages that come to mind are the possible extra latency of disk IO (for large messages) and the possible security risk of writing sensitive (unencrypted) messages to disk.

I also observed that the temporary files created by the VirtualStream class (written to the Temp folder in the host instance AppData directory e.g C:\Users\{HostInstanceSvcAccount}\AppData\Local\Temp) are not deleted until a host instance restart. This is also something to consider when using the class to ensure that sufficient disk space exists for the temporary files and also that a strategy exists to purge the files.

In this implementation, I have written a custom stream class (Base64EncoderStream) that wraps a VirtualStream (i.e. an implementation of the “decorator” pattern). I noticed that the BizTalk EPM (Endpoint Processing Manager) only calls the Read method on streams… So the logic to base64 encode the bytes was inserted into the (overridden) Read method.

The Read method is called by the EPM repeatedly until all bytes have been read from the backing store (which in the case of this implementation, could be memory or a file on disk). The EPM provides a pointer to a byte array and it’s the job of the Read method to populate the byte array, obviously ensuring not to exceed the size of the buffer. In this way, the stream is read one small chunk (4096 bytes) at a time, orchestrated by the EPM, thereby reducing the processing memory footprint.

Here’s the code for the Read method:

Fig 2.  Read Method in Base64EncoderStream Class
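
Since the screenshot doesn’t reproduce well here, below is a simplified sketch of the idea (not the exact code from the repo).  The fields and Read override live inside the Base64EncoderStream class; other Stream members are omitted (requires using System, System.IO and System.Text):

private readonly Stream _dataStream;    // the underlying (VirtualStream backed) data
private byte[] _pending = new byte[0];  // encoded bytes not yet handed to the caller
private int _pendingPos;

public override int Read(byte[] buffer, int offset, int count)
{
    // Top up the pending encoded bytes once they have all been handed out.
    if (_pendingPos >= _pending.Length)
    {
        // Read chunks in multiples of 3 bytes so that base64 padding ('=') can
        // only ever appear in the very last chunk of the stream.
        byte[] raw = new byte[3072];
        int rawRead = 0, n;
        while (rawRead < raw.Length && (n = _dataStream.Read(raw, rawRead, raw.Length - rawRead)) > 0)
        {
            rawRead += n;
        }

        if (rawRead == 0)
        {
            return 0; // end of the underlying stream
        }

        _pending = Encoding.ASCII.GetBytes(Convert.ToBase64String(raw, 0, rawRead));
        _pendingPos = 0;
    }

    // Hand back as much as the caller's buffer allows (the EPM asks for 4096 bytes at a time).
    int bytesToCopy = Math.Min(count, _pending.Length - _pendingPos);
    Buffer.BlockCopy(_pending, _pendingPos, buffer, offset, bytesToCopy);
    _pendingPos += bytesToCopy;
    return bytesToCopy;
}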

Inside the Execute method of the pipeline component, the constructor of the custom Base64EncoderStream (which is based on VirtualStream) is called, passing in the original underlying data stream as follows:

Fig. 3  Execute Method in StreamingGood Pipeline Component

So, as shown in the screenshot above, by returning the stream back to the pipeline as quickly as possible, we ensure that the next pipeline component can be initialized and potentially start working on the message and so on with the next components in the chain (i.e. it is now a true “streaming” pipeline component).
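
Again as a simplified sketch (not the exact code from the repo), the Execute method boils down to something like this:

public IBaseMessage Execute(IPipelineContext pContext, IBaseMessage pInMsg)
{
    // Wrap the original stream; no bytes are read here. Encoding happens lazily,
    // chunk by chunk, when the EPM later calls Read on the returned stream.
    var encodingStream = new Base64EncoderStream(pInMsg.BodyPart.GetOriginalDataStream());
    pInMsg.BodyPart.Data = encodingStream;

    // Let BizTalk dispose of the stream once the message has been processed.
    pContext.ResourceTracker.AddResource(encodingStream);

    return pInMsg;
}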

Performance Metrics

I decided to compare the performance of both implementations, comparing the following metrics:

  1. Memory consumption
  2. Processing latency

I would say that 1 (memory consumption) is of greater importance to get right than 2 (latency). That is, the impact of not getting 1 correct is greater than not getting 2 correct. Both are important considerations though.

For the purposes of the tests, I created two separate host instances and configured two Send Ports, each with a pipeline containing the streaming and non-streaming pipeline component respectively, and each Send Port running within its own host instance.

I then ran up perfmon to compare memory consumption, capturing the private bytes counter for each host instance. Not surprisingly, the pipeline containing the streaming pipeline component had a much smaller and more consistent memory footprint, as can be observed in the screenshots below, using a file approximately 44MB in size:

Fig. 4  Comparison of Memory Consumption #1

Fig. 5  Comparison of Memory Consumption #2

I believe each spike of memory consumption associated with the streaming component is due to loading each chunk of bytes into a string (as can be observed in Fig. 5).

One thing I noticed for both pipeline components is that after processing had finished, memory consumption did not return to the pre-processing level until the host instances were restarted, even though I had added object pointers to the pipeline component resource tracker (to ensure objects are disposed). However, memory consumption for the streaming pipeline remained at a consistent level no matter how many files were submitted for processing, while the non-streaming pipeline consumed more memory. This behaviour is mentioned in Yossi Dahan’s white paper in Appendix C [REF-1].

I decided to measure processing latency by noting the time that pipeline component execution commenced (using the CAT team’s tracing framework) and the final LastWriteTime property on the outputted file (using PowerShell).  This final time indicates when the file adapter has completed writing the file and BizTalk has completed processing.

Here are some approximate processing times using 2 sample files:

File Size (MB)    Streaming (mm:ss)    Non-Streaming (mm:ss)
5.5               1:01                 0:44
44                27:02                5:29

I had a hunch that the higher processing latency of the streaming component was primarily due to the disk IO associated with using the VirtualStream class.  Under the hood, when configured to use a file for storing bytes exceeding the byte count threshold, the VirtualStream class switches to wrap an instance of FileStream for writing the overflow bytes.  I figured that if I increased the size of the buffer used by the FileStream instance, this would mean fewer disk reads and writes (at the expense of greater memory usage).

(As a side note, the default buffer and threshold sizes specified in the VirtualStream class are both 10240 bytes. Also, the maximum buffer and threshold size is 10485760 bytes (c. 10.5MB)).

Unfortunately and surprisingly, increasing the size of the buffer made no difference – processing latency was still the same.  Maybe I will investigate this further in another blog post since I have already written a book in this post!!

Conclusion

I have demonstrated here that implementing a pipeline component in a streaming fashion has major benefits over a non-streaming approach. A streaming component consumes less memory and ensures a consistent memory footprint, regardless of message size.

Another finding is that the VirtualStream class adds significant processing latency (at least in this particular implementation), which means that unless latency is of no concern, this class is only really suitable when working with small files.

Some Further Resources

Yossi Dahan, Developing a Streaming Pipeline Component for BizTalk Server, Published Feb 2010  [REF-1]
Available from: http://www.microsoft.com/en-us/download/details.aspx?id=20375
(I found this an extremely useful white paper (Word document format) which I must have reread many times now!).

Mark Brimble, Connected Pawns Blog
Optimising Pipeline Performance: Do not read the message into a string
Optimising Pipeline Performance: XMLWriter vs. XDocument
My colleague Mark wrote a series of blog posts comparing the performance of non-streaming pipeline components vs. streaming pipeline components.

Guidelines on MSDN for Optimizing Pipeline Performance

Developing Streaming Pipeline Components Series

Simplify Streaming Pipeline Components in BizTalk

BizTalk WCF Receive Location Configuration Error: The SSL settings for the service ‘None’ does not match those of the IIS ‘Ssl, SslRequireCert, Ssl128’

Error Scenario

A BizTalk WCF endpoint is exposed with security enabled: SSL with a client certificate is required (so mutual, 2-way client and server authentication is configured).

BizTalk (2009) receive location is configured as follows:

WSHttp Binding Transport Security Configured

WSHttp Binding Client Transport Security Configured

IIS configuration:

IIS 2-Way SSL Authentication Configured

(Incidentally, the following command can be run in a Windows batch file to configure SSL for an IIS virtual directory:

%windir%\system32\inetsrv\appcmd.exe set config "Default Web Site/ServiceName" -commitPath:APPHOST -section:access -sslFlags:Ssl,Ssl128,SslRequireCert )

Error Message and Analysis

Clients were unable to connect to the service and the following exception message was written to the Application event log on the hosting BizTalk server:

Exception: System.ServiceModel.ServiceActivationException: The service ‘ServiceName.svc’ cannot be activated due to an exception during compilation. The exception message is: The SSL settings for the service ‘None’ does not match those of the IIS ‘Ssl, SslRequireCert, Ssl128’.. ---> System.NotSupportedException: The SSL settings for the service ‘None’ does not match those of the IIS ‘Ssl, SslRequireCert, Ssl128’.

So this is a configuration mismatch between the service and IIS.  The service is exposing an endpoint that is unsecured (the SSL setting for this endpoint is ‘None’, as mentioned in the error message), which doesn’t match the actual SSL settings configured in IIS: ‘Ssl, SslRequireCert, Ssl128’ (i.e. SSL with minimum 128-bit keys and a client certificate required).

In this case, the endpoint not matching the SSL settings is the mex endpoint (i.e. the service WSDL).

Ensure that ALL mex endpoints are disabled, by commenting out the mex endpoint configuration in the service Web.config file, as follows:

  <!--
    The <system.serviceModel> section specifies Windows Communication Foundation (WCF) configuration.
  -->
  <system.serviceModel>
    <behaviors>
      <serviceBehaviors>
        <behavior name="ServiceBehaviorConfiguration">
          <serviceDebug httpHelpPageEnabled="false" httpsHelpPageEnabled="false" includeExceptionDetailInFaults="false" />
          <serviceMetadata httpGetEnabled="false" httpsGetEnabled="true" />
        </behavior>
      </serviceBehaviors>
    </behaviors>
    <services>
      <!-- Note: the service name must match the configuration name for the service implementation. -->
      <!-- Comment out mex endpoints if client auth enabled using certificates -->
      <service name="Microsoft.BizTalk.Adapter.Wcf.Runtime.BizTalkServiceInstance" behaviorConfiguration="ServiceBehaviorConfiguration">
        <!--<endpoint name="HttpMexEndpoint" address="mex" binding="mexHttpBinding" bindingConfiguration="" contract="IMetadataExchange" />-->
        <!--<endpoint name="HttpsMexEndpoint" address="mex" binding="mexHttpsBinding" bindingConfiguration="" contract="IMetadataExchange" />-->
      </service>
    </services>
  </system.serviceModel>

I restarted IIS and the service could then be compiled and worked as expected.

Integrate 2014: Thoughts on the Announcement of the BizTalk Microservices Platform


I will outline here some of my thoughts on Microsoft’s new venture in the EAI space, announced at the Integrate 2014 conference: the BizTalk Microservices Platform.  I guess this could be called “BizTalk Services 2.0”.

When I first heard the news, I was surprised.  I didn’t attend the conference but had one eye on the #integrate2014 hashtag and as the talks progressed, I got progressively more excited as I saw more of the platform.  My immediate reaction was: “this is Docker!”.

My intention in this post is to put forward what the new platform might be like and what this means for integration devs like me.  I should mention that these thoughts are the result of reading various blogs after the event and my own personal experience, so there may be many inaccuracies.

Before putting forward my ideas though, I would like to discuss Microsoft’s current cloud EAI offering – aka “BizTalk Services 1.0”.

Background – BizTalk Services 1.0

Like a lot of BizTalk devs I believe, I have spent quite a bit of time playing with Microsoft Azure BizTalk Services (MABS) in my spare time whilst working with BizTalk Server during the day.

It’s easy to get up and going: install the BizTalk Services SDK and you are pretty much there.  This is a comprehensive guide that I found useful.

I could pick up the concepts from my BizTalk Server (and ESB Toolkit) experience quite quickly.

To be honest it left me quite disappointed since I was expecting some radical new thinking and tooling: it felt like an attempt to replicate BizTalk Server in the cloud.  As the year wore on, it seemed little attention was paid to MABS by the product team; for example, one still has to use Visual Studio 2012 to create a solution (no update so we can use Visual Studio 2013).  This made me think that the team must be “heads down” working hard on version 2.0 and too busy to release minor versions.

Another major issue is around unit and integration testing.  It’s only possible to test a solution by deploying to Azure (directly from Visual Studio).  Also, the tooling to exercise your solution is limited: perhaps the highlight is a VS plugin that is very much an alpha version, with little TLC given to it since its release.

Another burning question for me was: how is BizTalk Services actually realised in Azure?  Originally I remember reading that MABS would be a multi-tenant affair.  But I know that I needed to create my own tracking database…  Surely this would be shared storage, I thought?  However I’ll quote from Saravana’s latest blog post here:

Under the hood, completely isolated physical infrastructure (Virtual machines, storage, databases etc) are created for every new BizTalk Service provisioning. [1]

So this explains why it wouldn’t be possible to deploy and test a MABS solution locally, on my developer workstation.  It wouldn’t be possible to replicate this runtime environment locally (without a lot of work anyway).

Also it’s obvious that a MABS solution is completely tied to Azure and cannot exist outside of it.

So my final thought on the first version of MABS was: a bit disappointing, nothing really radical here, good for lightweight integration scenarios (at best), hope 2.0 is (heaps) better and given much more TLC than 1.0…

The Inbetween Months – MABS 1.0 => MABS 2.0

So my day job got busy and I put MABS aside for a bit.

I have a FreeNAS box at home so I have spent some time this year setting it up, playing around with it and using it as a central place to store my photos, files and Git repos for my personal projects.  It was then that I uncovered the concept of “jails”, which is basically a way of sandboxing.  Each jail is bound to a separate IP address.  Further info can be found here.

Learning about the jail idea led to reading about an interesting open source project called “Docker“.  I’m certainly no expert on this subject: I think of it in a similar way to the jail concept but instead of the term “jail”, the term “container” applies.  Docker has an engine that enables containers to communicate with one another.

What I really like about Docker is that it is a “grassroots” developer led project.  After a period of introspection (and no doubt feedback from customers and market forces), this seems to be something Microsoft is keen to promote: that is, a focus on developers and collaboration.  .NET going cross platform and open source supports this.

Announcement of the BizTalk Microservices Platform (MABS 2.0?)

What I was alluding to in the previous section is that (with that wonderful thing called hindsight) I shouldn’t have been so surprised to hear that the next version of MABS would run on a “microservices platform”.  In fact, this platform will be a key infrastructure component/concept of Azure: “BizTalk Microservices” will just be a subset of a new microservices platform. I suspect Microsoft will build its own container management technology for Azure and that Azure containers will be compatible with Docker.  Already a Docker engine for Windows Server is on the cards.  I have heard mention of the term “Azure App Platform” and I think this will be a container management platform (or at least a component of a bigger platform).

From the diagram below, app containers are a foundational, basic building block of the platform.

Microservices Stack Slide (Photo Source: Kent Weare, @wearsy)

I think the idea is that each container will have a single very specific function (i.e. be “granular”).  Containers will be linked together (communicating over HTTP?) and as a whole be able to do something useful.  So a container could be a transform container, a data access container, a business rules container, a service bus connectivity container and so on (I wonder if there will be a bridge container, or is this a monolith? ;-)).  It would be possible for each container to be implemented using different technologies: the common factor would occur at the network transport level.

I reckon the (specific) term “BizTalk Microservices” will simply be a way of grouping integration specific containers together in the “toolbox”.  BizTalk as a “product” (in the cloud) will be reduced to just a name and a way of locating containers specific to integration, nothing more.  Over time, many other containers with a very specific function will come onboard: templates (“accelerators”) will exist which are basically a grouping of containers, designed to tackle a specific end-to-end problem.

Another deal clincher with the platform is that it would be possible to host, run and test locally on my dev workstation.  Hopefully the Azure SDK will enable a local runtime environment for running containers (or maybe the Windows Server Docker engine would enable running Azure containers on prem).

Conclusion

I’m really excited with the announcements at the conference.  It’s very early days, but the microservices concept seems to be a good model:  a way of decomposing services into their component (micro) parts that can be finely scaled and tuned (hopefully addressing some of the failings of “old school” SOA, particularly around scalability).

In a number of sources I have read the comment that it’s a really exciting time to be a developer, and I agree with that.  It’s exciting since cloud computing requires some new thinking and tools, which need to be built.  There is also a “cross-pollination” of ideas between different platforms.

I can’t wait for the preview of the microservices platform, which looks to be available early next year.

 

[1] Microsoft making big bets on Microservices – Saravana Kumar, Dec 4th 2014