Thoughts on Systems Integration using Microsoft Technologies


Creating a Web Service in Azure, Part 1 – Introduction and Architecture


In a series of articles I will describe how Azure can be used to power a back end for a front end application.

My front-end application is a simple RSS aggregator.  I follow a number of blogs and I currently use a Windows Forms application that I wrote to monitor them for new content: the application periodically downloads RSS feeds and processes them on the client side so I can then read them offline (the feed content is stored in a SQLite database).  I would like a better solution that can run on my Android phone and where feeds can be synced even when the client application isn't running; the back end will instead be responsible for downloading the latest feed content and aggregating the data.

A key design feature is that the new Android client will be lightweight: it will carry out minimal processing of the data and won't be responsible for downloading feeds from the various blogs.  Such a setup was passable for my high-spec laptop but won't do for my much lower-spec phone, for these reasons:

  • Downloading feeds from the various blogs will blow out my data plan.
  • Heavy processing on the client side will consume precious CPU cycles and battery power, making my phone slow/unresponsive with a constant need to find the next power outlet to charge it.

So with these limitations in mind, the back end will instead do the "heavy lifting" of downloading and processing feeds and ensure that sync data is optimized to the needs of the client, thereby minimizing bandwidth consumption.

I should also mention that while I was thinking about how Azure could be used to power a back-end service, a two-part article series was published in MSDN Magazine that is pretty much along the lines I was thinking for my own web service (please see the "References" section below for links to these two articles).  The MSDN articles describe a service that intelligently aggregates Twitter and StackOverflow data, while my proof of concept aggregates RSS feed data from blogs.  I draw on these two articles heavily in this series.

Another major advantage (mentioned in the MSDN article series) of a cloud back end is better scalability: instead of each client downloading and processing the same feeds individually, the back-end application can do this in a single operation, getting around any throttling limitations that may be imposed by some web services.  So as the popularity of an app increases, performance doesn't degrade due to throttling, which would damage the reputation of the app.


The diagram below shows a high level overview of the solution:

Figure 1  Datamate Architecture (Based on Figure 2 in Reference Article [1])

Some of the key features of the architecture are as follows (walking through the diagram from left to right):

  • Azure SQL Database is used to store feed data in a relational database, and the data is accessed using Entity Framework (EF) via an internal data provider API.  It is envisaged that as further data sources come on board (other than just RSS feeds), each data source (e.g. Twitter) will have its own provider API, implemented to the requirements of the particular data source being onboarded.
  • Azure WebJobs represent the worker processes – they run as a scheduled background task, downloading and processing Rss feeds and writing the results to the database.
  • A REST API, implemented using ASP.NET Web API, provides an interface for clients to retrieve data (a minimal sketch follows this list).
  • A simple client app (mobile and web) will use the REST API to download data and maintain a client-side cache of the data, according to the preferences specified by the user, once authenticated and authorised by the REST API.
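
To make the REST API a little more concrete, below is a minimal sketch of what a Web API controller for feed data might look like.  The names used (FeedItemDto, IFeedProvider, the since parameter) are illustrative only – the actual API design will be covered in later parts of the series.

using System;
using System.Collections.Generic;
using System.Web.Http;

// Illustrative data transfer object returned to clients.
public class FeedItemDto
{
    public string FeedTitle { get; set; }
    public string ItemTitle { get; set; }
    public string Link { get; set; }
    public DateTime Published { get; set; }
}

// Illustrative internal data provider API (EF-backed for the RSS source).
public interface IFeedProvider
{
    IEnumerable<FeedItemDto> GetItemsPublishedAfter(DateTime since);
}

public class FeedItemsController : ApiController
{
    private readonly IFeedProvider _provider;

    public FeedItemsController(IFeedProvider provider)
    {
        _provider = provider;
    }

    // GET api/feeditems?since=2015-08-01T00:00:00Z
    // Returning only items the client hasn't seen keeps mobile bandwidth usage down.
    public IEnumerable<FeedItemDto> Get(DateTime since)
    {
        return _provider.GetItemsPublishedAfter(since);
    }
}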

That’s it for now – stay tuned for part 2!!  In the next post, I will discuss the design and development of the Azure SQL Database and Azure WebJob that represent the “backbone” of the solution.

As always, any comments or tips most welcome.


References

[1] MSDN Magazine, August 2015 issue: Create a Web Service with Azure Web Apps and WebJobs, Microsoft.

[2] MSDN Magazine, September 2015 issue: Build a Xamarin App with Authentication and Offline Support, Microsoft.

ACSUG Event – Integration Saturday 2015

I had a very enjoyable day at the Integration Saturday event, organised by the Auckland Connected Systems User Group (ACSUG), of which I’m a member.  Many thanks to the organizers and sponsors (Datacom, Mexia, Adaptiv, Theta and Microsoft).

I hope that we do this again next year (Integration Saturday 2016!!) and also that this kicks off regular catch ups.

The Meetup site is here.

I think it’s incredible, given our size, how many talented integration specialists we have here in New Zealand (and also Australia).  And that so many people showed up, given the lousy weather on the day (wet and windy)!

Personally, it was great to meet new people and catch up with existing acquaintances and friends during the breaks and at lunch.  Man that beer at the end was the best eh?!!

Here’s a list of the sessions:

What’s New on Integration – Bill Chesnut, Mexia
Azure App Services – Connecting the Dots of Web, Mobile and Integration – Wagner Silveira, Theta
API Apps, Logic Apps, and Azure API Management Deep Dive – Johann Cooper, Datacom
Real Life SOA, Sentinet and the ESB Toolkit – James Corbould, Datacom
REST and Azure Service Bus – Mahindra Morar, Datacom
What Integration Technology Should I Use? – Mark Brimble, Datacom
Top Ten Integration Productivity Tools and Frameworks  – Nikolai Blackie, Adaptiv
An Example of Continuous Integration with BizTalk – Bill Chesnut, Mexia

I was very fortunate to be able to present a session (Real Life SOA, Sentinet and the ESB Toolkit).  A PDF of my slides and notes is available here and source code for my demos can be found here on GitHub.

As usual, please don't hesitate to contact me if you would like to discuss any points raised during the talk…  In particular, there was quite a buzz of excitement after my demo of Sentinet (an SOA and API management tool).  I think this platform was new to most people and it has a lot to offer – as mentioned, stay tuned for some blog posts on this tool 🙂.

So thanks again to Craig and Mark for organizing the event and to Bill for flying over from Oz.

Integration Saturday July 18, 2015 – Presentations

Just wanted to shout out about this exciting integration event coming to Auckland NZ on July 18. Many thanks to the organizers. This will be held at Datacom, 210 Federal St, Auckland CBD.

Connected Pawns

The Auckland Connected Systems User Group (ACSUG) will be holding a one-day mini-symposium on Saturday July 18th in Datacom's cafe. Please see the ACSUG site for further details and how to register. This is a free event and will be restricted to the first 70 people who register.

This will be a jam-packed day aimed at integration developers.

The keynote speaker is Bill Chesnut from Mexia, who will start the meeting with a presentation on the latest trends in integration. He will be followed by local integration experts who will present on new integration patterns in the cloud, integration with REST, SOA, ESB, CI with BizTalk and integration tips. For a full list of speakers see this link. A list of the talks can be found below:

1. What’s new on integration – Bill Chesnut  9.00- 9.45

2. Azure App Services – connecting the…


Notes on Creating a Streaming Pipeline Component Based on the BizTalk VirtualStream Class

In this post I’m going to discuss and demonstrate how to create a streaming pipeline component.  I’ll show some of the benefits and also highlight the challenges I encountered using the BizTalk VirtualStream class.

I’m using BizTalk 2013 R2 for the purposes of this demonstration (update: or I was until my Azure dev VM died due to an evaluation edition of SQL Server expiring – I switched to just BizTalk 2013).

If you would like the short version of this post, the code can be viewed and cloned from here (hope you read on though ;-)).

The Place of the Pipeline Component

As we all know, a pipeline contains one or more pipeline components.  Pipeline components can be custom written, and BizTalk also includes various "out of the box" components for common tasks.  In most cases, a pipeline is configured on a port to execute on receiving a message and on sending a message.

On the receive, the flow is: adapter –> pipeline –> map –> messagebox.

On the send, the flow is: messagebox –> map –> pipeline –> adapter.

What is a “Non-Streaming” Pipeline Component?

Typically, such a pipeline component implements one or more of the following practices:

  1. Whole messages are loaded into memory in one go instead of one small chunk at a time (chunking ensures a consistent memory footprint, irrespective of message size). If the message is large, this practice can consume a lot of memory, leading to a poorly performing pipeline and also potentially triggering a throttling condition for the host instance.
  2. Messages are loaded into an XML DOM (Document Object Model) in order to be manipulated, for example using XmlDocument or XDocument. This causes the entire message to be loaded into memory as a DOM, creating a much larger memory footprint than the actual message size (some sources indicate up to 10 times the file size). Similarly (though to a lesser extent than loading into a DOM), loading a message into a string will result in the entire message being held in memory.
  3. Messages are not returned to the pipeline shortly after being received; instead, the component completes its processing before returning the message. Further pipeline processing is therefore blocked until the pipeline component has finished.

Example 1: Non-Streaming Pipeline Component

Here is an example of a non-streaming pipeline component (the entire solution can be downloaded as indicated in the intro to this post: this code is located in project Ajax.BizTalk.DocMan.PipelineComponent.Base64Encode.NotStreamingBad).

Fig 1.  Non-Streaming Pipeline Component Example

As shown, the example encodes a stream into neutral base64 format for sending on the wire and inserts it into another message.
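
The screenshot may be hard to read, so here is a condensed sketch of roughly what the Execute method does.  The wrapper element name is illustrative; the full component is in the NotStreamingBad project mentioned above.

using System;
using System.IO;
using System.Text;
using System.Xml.Linq;
using Microsoft.BizTalk.Component.Interop;
using Microsoft.BizTalk.Message.Interop;

public IBaseMessage Execute(IPipelineContext pContext, IBaseMessage pInMsg)
{
    // The whole message is pulled into memory in one go...
    byte[] allBytes;
    using (var buffer = new MemoryStream())
    {
        pInMsg.BodyPart.GetOriginalDataStream().CopyTo(buffer);
        allBytes = buffer.ToArray();
    }

    // ...and the encoded payload is wrapped in a new document using an in-memory DOM.
    var wrapper = new XDocument(
        new XElement("EncodedDocument", Convert.ToBase64String(allBytes)));

    // All of the work happens here, before control is returned to the pipeline.
    var outStream = new MemoryStream(Encoding.UTF8.GetBytes(wrapper.ToString()));
    pInMsg.BodyPart.Data = outStream;
    pContext.ResourceTracker.AddResource(outStream);

    return pInMsg;
}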

In summary, this example is less than optimal:

  1. It uses XDocument, which loads the entire message into memory in one go as a DOM; this is memory intensive for large messages.
  2. Control is not returned to the pipeline straightaway; the pipeline component does some processing and only then returns control to the pipeline, which means pipeline processing is blocked until this component has finished with the message. This potentially slows pipeline processing down.

Example 2: Streaming Pipeline Component

BizTalk ships with some custom streaming classes in the Microsoft.BizTalk.Streaming namespace and there are a number of sources out there that detail how to use them (please see the “Further Resources” section at the end of this post for a list of some that I have found).

As mentioned in the title of this blog post, the one I have used in this example is the VirtualStream class. It's "virtual" since it uses disk (a temporary file) as a backing store (instead of memory) for storing bytes exceeding a configurable threshold size: this reduces, and keeps consistent, the memory footprint. A couple of potential disadvantages that come to mind are the possible extra latency of disk IO (for large messages) and also the possible security risk of writing sensitive (unencrypted) messages to disk.

I also observed that the temporary files created by the VirtualStream class (written to the Temp folder in the host instance service account's AppData directory, e.g. C:\Users\{HostInstanceSvcAccount}\AppData\Local\Temp) are not deleted until a host instance restart.  This is something to consider when using the class: ensure that sufficient disk space exists for the temporary files and that a strategy exists to purge them.

In this implementation, I have written a custom stream class (Base64EncoderStream) that wraps a VirtualStream (i.e. an implementation of the "decorator" pattern). I noticed that the BizTalk EPM (Endpoint Processing Manager) only calls the Read method on streams, so the logic to base64 encode the bytes was inserted into the (overridden) Read method.

The Read method is called by the EPM repeatedly until all bytes have been read from the backing store (which in the case of this implementation, could be memory or a file on disk). The EPM provides a pointer to a byte array and it’s the job of the Read method to populate the byte array, obviously ensuring not to exceed the size of the buffer. In this way, the stream is read one small chunk (4096 bytes) at a time, orchestrated by the EPM, thereby reducing the processing memory footprint.

Here’s the code for the Read method:

Fig 2.  Read Method in Base64EncoderStream Class (note: may need to zoom in to view!)
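
In case the screenshot is hard to read, here is a simplified sketch of what such a class might look like.  The field names and the 3,072-byte chunk size are illustrative (3,072 raw bytes encode to exactly 4,096 base64 characters, matching the chunk size mentioned above), and the VirtualStream backing store is not shown – the actual implementation is in the linked repository.

using System;
using System.IO;
using System.Text;

public class Base64EncoderStream : Stream
{
    private readonly Stream _inner;          // the wrapped stream (a VirtualStream in the real solution)
    private byte[] _encoded = new byte[0];   // encoded bytes awaiting delivery to the caller
    private int _encodedPos;

    public Base64EncoderStream(Stream inner)
    {
        _inner = inner;
    }

    public override int Read(byte[] buffer, int offset, int count)
    {
        int written = 0;
        while (written < count)
        {
            if (_encodedPos < _encoded.Length)
            {
                // Serve encoded bytes left over from the previous call.
                int n = Math.Min(count - written, _encoded.Length - _encodedPos);
                Buffer.BlockCopy(_encoded, _encodedPos, buffer, offset + written, n);
                _encodedPos += n;
                written += n;
                continue;
            }

            // Pull the next chunk from the wrapped stream. A multiple of 3 raw bytes
            // encodes to base64 with no intermediate '=' padding (3,072 bytes -> 4,096 chars).
            var raw = new byte[3072];
            int rawRead = 0, r;
            while (rawRead < raw.Length && (r = _inner.Read(raw, rawRead, raw.Length - rawRead)) > 0)
            {
                rawRead += r;
            }
            if (rawRead == 0) break; // end of the wrapped stream

            _encoded = Encoding.ASCII.GetBytes(Convert.ToBase64String(raw, 0, rawRead));
            _encodedPos = 0;
        }
        return written;
    }

    public override bool CanRead { get { return true; } }
    public override bool CanSeek { get { return false; } }
    public override bool CanWrite { get { return false; } }
    public override long Length { get { throw new NotSupportedException(); } }
    public override long Position
    {
        get { throw new NotSupportedException(); }
        set { throw new NotSupportedException(); }
    }
    public override void Flush() { }
    public override long Seek(long offset, SeekOrigin origin) { throw new NotSupportedException(); }
    public override void SetLength(long value) { throw new NotSupportedException(); }
    public override void Write(byte[] buffer, int offset, int count) { throw new NotSupportedException(); }
}

A consumer of the stream (conceptually, what the EPM does) then drains it one small buffer at a time; for example (file paths are illustrative):

var buffer = new byte[4096];
using (var source = File.OpenRead(@"C:\Temp\LargeMessage.xml"))
using (var output = File.Create(@"C:\Temp\LargeMessage.b64"))
{
    var encoded = new Base64EncoderStream(source);
    int bytesRead;
    while ((bytesRead = encoded.Read(buffer, 0, buffer.Length)) > 0)
    {
        output.Write(buffer, 0, bytesRead);
    }
}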

Inside the Execute method of the pipeline component, the constructor on the VirtualStream subclass is called, passing in the original underlying data stream as follows:

Fig. 3  Execute Method in StreamingGood Pipeline Component

So, as shown in the screenshot above, by returning the stream to the pipeline as quickly as possible, we ensure that the next pipeline component can be initialized and potentially start working on the message, and so on down the chain of components (i.e. it is now a true "streaming" pipeline component).
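
For reference, here is roughly what that Execute method boils down to – a sketch only, with the VirtualStream plumbing inside Base64EncoderStream and the component's other interface members omitted:

using Microsoft.BizTalk.Component.Interop;
using Microsoft.BizTalk.Message.Interop;

public IBaseMessage Execute(IPipelineContext pContext, IBaseMessage pInMsg)
{
    // Wrap the original message stream rather than reading from it here:
    // no bytes are consumed until the EPM starts calling Read().
    var encodingStream = new Base64EncoderStream(pInMsg.BodyPart.GetOriginalDataStream());

    pInMsg.BodyPart.Data = encodingStream;

    // Let BizTalk dispose of the stream when message processing completes.
    pContext.ResourceTracker.AddResource(encodingStream);

    // Return immediately so downstream pipeline components can start work.
    return pInMsg;
}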

Performance Metrics

I decided to compare the performance of both implementations, comparing the following metrics:

  1. Memory consumption
  2. Processing latency

I would say that 1 (memory consumption) is of greater importance to get right than 2 (latency); that is, the impact of not getting 1 right is greater than not getting 2 right.  Both are important considerations though.

For the purposes of the tests, I created two separate host instances and configured two Send Ports, with pipelines containing the streaming and non-streaming pipeline component respectively, and each Send Port running within its own host instance.

I then ran up perfmon to compare memory consumption, capturing the Private Bytes counter for each host instance. Not surprisingly, the pipeline containing the streaming pipeline component had a much smaller and more consistent memory footprint, as can be observed in the screenshots below, using a file approximately 44MB in size:

Fig. 4  Comparison of Memory Consumption #1

Fig. 5  Comparison of Memory Consumption #2

Each spike in memory consumption associated with the streaming component is, I believe, due to loading each chunk of bytes into a string (as can be observed in Fig. 5).

One thing I noticed for both pipeline components is that after processing had finished, memory consumption did not return to the pre-processing level until the host instances were restarted, even though I had added object pointers to the pipeline context's resource tracker (to ensure objects are disposed). However, memory consumption for the streaming pipeline remained at a consistent level no matter how many files were submitted for processing, while the non-streaming pipeline consumed more and more memory. This behaviour is mentioned in Yossi Dahan's white paper in Appendix C [REF-1].

I decided to measure processing latency by noting the time that pipeline component execution commenced (using the BizTalk CAT team's tracing framework) and the final LastWriteTime property on the outputted file (using PowerShell).  This final time indicates when the file adapter has finished writing the file and BizTalk has completed processing.

Here are some approximate processing times using 2 sample files:

File Size (MB) | Streaming (mm:ss) | Non-Streaming (mm:ss)
5.5            | 1:01              | 0:44
44             | 27:02             | 5:29

I had a hunch that the higher processing latency of the streaming component was primarily due to the disk IO associated with using the VirtualStream class.  Under the hood, when configured to use a file for storing bytes exceeding the byte count threshold, the VirtualStream class switches to wrapping an instance of FileStream for writing the overflow bytes.  I figured that if I increased the size of the buffer used by the FileStream instance, there would be fewer disk reads and writes (at the expense of greater memory usage).

(As a side note, the default buffer and threshold sizes specified in the VirtualStream class are both 10240 bytes. Also, the maximum buffer and threshold size is 10485760 bytes (c. 10.5MB)).

Unfortunately and surprisingly, increasing the size of the buffer made no difference – processing latency was still the same.  Maybe I will investigate this further in another blog post since I have already written a book in this post!!


I have demonstrated here that implementing a pipeline component in a streaming fashion has major benefits over a non-streaming approach. A streaming component consumes less memory and ensures a consistent memory footprint, regardless of message size.

Another finding is that the VirtualStream class adds significant processing latency (at least in this particular implementation), which means that unless latency is of no concern, the class is really only suitable when working with small files.

Some Further Resources

Yossi Dahan, Developing a Streaming Pipeline Component for BizTalk Server, published February 2010  [REF-1]
(I found this an extremely useful white paper (Word document format) which I must have reread many times now!).

Mark Brimble, Connected Pawns Blog
Optimising Pipeline Performance: Do not read the message into a string
Optimising Pipeline Performance: XMLWriter vs. XDocument
My colleague Mark wrote a series of blog posts comparing the performance of non-streaming pipeline components vs. streaming pipeline components.

Guidelines on MSDN for Optimizing Pipeline Performance

Developing Streaming Pipeline Components Series

Simplify Streaming Pipeline Components in BizTalk

BizTalk WCF Receive Location Configuration Error: The SSL settings for the service ‘None’ does not match those of the IIS ‘Ssl, SslRequireCert, Ssl128’

Error Scenario

A BizTalk WCF endpoint is exposed with security enabled: SSL with a client certificate is required (so mutual, 2-way client and server authentication is configured).

The BizTalk (2009) receive location is configured as follows:

WSHttp Binding Transport Security Configured

WSHttp Binding Client Transport Security Configured
IIS configuration:

IIS 2-Way SSL Authentication Configured

(Incidentally, the following command can be run in a Windows batch file to configure SSL for an IIS virtual directory:

%windir%\system32\inetsrv\appcmd.exe set config "Default Web Site/ServiceName" -commitPath:APPHOST -section:access -sslFlags:Ssl,Ssl128,SslRequireCert )

Error Message and Analysis

Clients were unable to connect to the service and the following exception message was written to the Application event log on the hosting BizTalk server:

Exception: System.ServiceModel.ServiceActivationException: The service ‘ServiceName.svc’ cannot be activated due to an exception during compilation. The exception message is: The SSL settings for the service ‘None’ does not match those of the IIS ‘Ssl, SslRequireCert, Ssl128’.. —> System.NotSupportedException: The SSL settings for the service ‘None’ does not match those of the IIS ‘Ssl, SslRequireCert, Ssl128’.

So this is a configuration mismatch between the service and IIS.  The service is exposing an endpoint that is unsecured (the SSL setting for this endpoint is 'None', as mentioned in the error message), which doesn't match the SSL settings actually configured in IIS: 'Ssl, SslRequireCert, Ssl128' (i.e. SSL with a minimum of 128-bit keys and a client certificate required).

In this case, the endpoint not matching the SSL settings is the mex endpoint (i.e. the service WSDL).

Ensure that ALL mex endpoints are disabled by commenting out the mex endpoint configuration in the service's Web.config file, as in the following fragment:

<system.serviceModel>
  <behaviors>
    <serviceBehaviors>
      <behavior name="ServiceBehaviorConfiguration">
        <serviceDebug httpHelpPageEnabled="false" httpsHelpPageEnabled="false" includeExceptionDetailInFaults="false" />
        <serviceMetadata httpGetEnabled="false" httpsGetEnabled="true" />
      </behavior>
    </serviceBehaviors>
  </behaviors>
  <services>
    <!-- Note: the service name must match the configuration name for the service implementation. -->
    <service name="Microsoft.BizTalk.Adapter.Wcf.Runtime.BizTalkServiceInstance" behaviorConfiguration="ServiceBehaviorConfiguration">
      <!-- Comment out the mex endpoints if client authentication using certificates is enabled -->
      <!--<endpoint name="HttpMexEndpoint" address="mex" binding="mexHttpBinding" bindingConfiguration="" contract="IMetadataExchange" />-->
      <!--<endpoint name="HttpsMexEndpoint" address="mex" binding="mexHttpsBinding" bindingConfiguration="" contract="IMetadataExchange" />-->
    </service>
  </services>
</system.serviceModel>

I restarted IIS and the service could then be compiled and worked as expected.

Integrate 2014: Thoughts on the Announcement of the BizTalk Microservices Platform

Integrate 2014 Logo

I will outline here some of my thoughts on Microsoft’s new venture in the EAI space, announced at the Integrate 2014 conference: the BizTalk Microservices Platform.  I guess this could be called “BizTalk Services 2.0”.

When I first heard the news, I was surprised.  I didn’t attend the conference but had one eye on the #integrate2014 hashtag and as the talks progressed, I got progressively more excited as I saw more of the platform.  My immediate reaction was: “this is Docker!”.

My intention in this post is to put forward what the new platform might be like and what this means for integration devs like me.  I should mention that these thoughts are the result of reading various blogs after the event plus my own personal experience, so there may be many inaccuracies.

Before putting forward my ideas though, I would like to discuss Microsoft’s current cloud EAI offering – aka “BizTalk Services 1.0”.

Background – BizTalk Services 1.0

Like a lot of BizTalk devs I believe, I have spent quite a bit of time playing with Microsoft Azure BizTalk Services (MABS) in my spare time whilst working with BizTalk Server during the day.

It’s easy to get up and going: install the BizTalk Services SDK and you are pretty much there.  This is a comprehensive guide that I found useful.

I could pick up the concepts from my BizTalk Server (and ESB Toolkit) experience quite quickly.

To be honest it left me quite disappointed since I was expecting some radical new thinking and tooling: it felt like an attempt to replicate BizTalk Server in the cloud.  As the year wore on, it seemed little attention was paid to MABS by the product team; for example, one still has to use Visual Studio 2012 to create a solution (no update so we can use Visual Studio 2013).  This made me think that the team must be “heads down” working hard on version 2.0 and too busy to release minor versions.

Another major issue is around unit and integration testing.  It's only possible to test a solution by deploying it to Azure (directly from Visual Studio).  Also, the tooling to exercise your solution is limited: perhaps the highlight is a VS plugin that is very much an alpha version, with little TLC given to it since its release.

Another burning question for me was: how is BizTalk Services actually realised in Azure?  Originally I remember reading that MABS would be a multi-tenant affair.  But I knew that I needed to create my own tracking database…  Surely this would be shared storage, I thought?  However, I'll quote from Saravana's latest blog post here:

Under the hood, completely isolated physical infrastructure (Virtual machines, storage, databases etc) are created for every new BizTalk Service provisioning. [1]

So this explains why it wouldn’t be possible to deploy and test a MABS solution locally, on my developer workstation.  It wouldn’t be possible to replicate this runtime environment locally (without a lot of work anyway).

Also it’s obvious that a MABS solution is completely tied to Azure and cannot exist outside of it.

So my final thought on the first version of MABS was: a bit disappointing, nothing really radical here, good for lightweight integration scenarios (at best), hope 2.0 is (heaps) better and given much more TLC than 1.0…

The Inbetween Months – MABS 1.0 => MABS 2.0

So my day job got busy and I put MABS aside for a bit.

I have a FreeNAS box at home so I have spent some time this year setting it up, playing around with it and using it as a central place to store my photos, files and Git repos for my personal projects.  It was then that I uncovered the concept of “jails”, which is basically a way of sandboxing.  Each jail is bound to a separate IP address.  Further info can be found here.

Learning about the jail idea led to reading about an interesting open source project called "Docker".  I'm certainly no expert on this subject: I think of it in a similar way to the jail concept, but instead of the term "jail", the term "container" applies.  Docker has an engine that enables containers to communicate with one another.

What I really like about Docker is that it is a “grassroots” developer led project.  After a period of introspection (and no doubt feedback from customers and market forces), this seems to be something Microsoft is keen to promote: that is, a focus on developers and collaboration.  .NET going cross platform and open source supports this.

Announcement of the BizTalk Microservices Platform (MABS 2.0?)

What I was alluding to in the previous section is that (with that wonderful thing called hindsight) I shouldn't have been so surprised to hear that the next version of MABS would run on a "microservices platform".  In fact, this platform will be a key infrastructure component/concept of Azure: "BizTalk Microservices" will just be a subset of a new microservices platform.  I suspect Microsoft will build its own container management technology for Azure and that Azure containers will be compatible with Docker.  Already a Docker engine for Windows Server is on the cards.  I have heard mention of the term "Azure App Platform" and I think this will be a container management platform (or at least a component of a bigger platform).

From the diagram below, app containers are a foundational, basic building block of the platform.

Microservices Stack - Slide

Photo Source: Kent Weare (@wearsy)

I think the idea is that each container will have a single very specific function (i.e. be “granular”).  Containers will be linked together (communicating over HTTP?) and as a whole be able to do something useful.  So a container could be a transform container, a data access container, a business rules container, a service bus connectivity container and so on (I wonder if there will be a bridge container, or is this a monolith? ;-)).  It would be possible for each container to be implemented using different technologies: the common factor would occur at the network transport level.

I reckon the (specific) term “BizTalk Microservices” will simply be a way of grouping integration specific containers together in the “toolbox”.  BizTalk as a “product” (in the cloud) will be reduced to just a name and a way of locating containers specific to integration, nothing more.  Over time, many other containers with a very specific function will come onboard: templates (“accelerators”) will exist which are basically a grouping of containers, designed to tackle a specific end-to-end problem.

Another deal clincher with the platform is that it would be possible to host, run and test locally on my dev workstation.  Hopefully the Azure SDK will enable a local runtime environment for running containers (or maybe the Windows server Docker engine would enable running Azure containers on prem).


I’m really excited with the announcements at the conference.  It’s very early days, but the microservices concept seems to be a good model:  a way of decomposing services into their component (micro) parts that can be finely scaled and tuned (hopefully addressing some of the failings of “old school” SOA, particularly around scalability).

In a number of sources I have read the comment that it's a really exciting time to be a developer and I agree with that.  It's exciting since cloud computing requires some new thinking and tools, which need to be built.  There is also a "cross-pollination" of ideas between different platforms.

I can’t wait for the preview of the microservices platform, which looks to be available early next year.


[1] Microsoft making big bets on Microservices – Saravana Kumar, Dec 4th 2014

Timeout Publishing Database Project to Azure

I had created a new database project in Visual Studio 2013 and every attempt to publish the project to my Azure hosted database failed with this error:

Creating publish preview… Failed to import target model [database_name]. Detailed message Unable to reconnect to database: Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.

This was particularly annoying since I could successfully test the connection from VS:

Visual Studio Test Connection OK

So why was I getting the timeout?

A quick Google found this post on stackoverflow.

As mentioned in the post, I logged into the Azure Management Portal and switched my database to the Standard tier, from the Basic tier:

Switch DB to Standard Tier

I tried again to publish and hey presto, it finally worked!!

DB Publish Success

I then switched my database back to the Basic tier.

Maybe this is a bug, or perhaps the Basic tier doesn't afford sufficient DTUs (Database Throughput Units) to complete the publish within a reasonable time.

Introducing jarwin – "Just Another (Rss) Reader for Windows", Beta

This is a post to introduce a personal project I have been working on for the last few months and to announce the availability of an early beta version, which I hope people will download and provide feedback on.

I have learnt an enormous amount over the last few months working on it and had lots of fun along the way (and some frustrations too!).

If you would like the short version of this post, the app is available for download here.

Also check out the wiki here.

Unfortunately, a known issue/limitation is that there is currently no ability to specify proxy server credentials, so you will probably be out of luck trying to sync on a corporate network.  This will be a new feature in the next release…

Also issues/bugs can be logged here.  I’m a very busy man these days so I will try and reply as promptly as I can.

What is this app?  Well, it's not much really – it's a (very) simple RSS reader implemented as a Windows Forms application.  I use it to follow blogs that I'm interested in.

As I mention in the app wiki, it represents only a small percentage of the vision I have in mind, which is an app that I can use to download content from various disparate data sources (email, blogs, Twitter, Facebook, LinkedIn etc.) and present all this information in an intelligent and meaningful way, such that I can view it at a glance and get answers to questions such as: what are the key trends in the systems integration space right now?  What information is meaningful to me?  What is the impact of blog post x?

It would also be possible to “swap” datasets in and out of the analytics (quickly and efficiently).

These ideas are nothing new, I know…  There are tools out there already too.  But I'm fed up using other people's software outside of my work domain/practice – I would much rather write and use my own.

The seed for these ideas comes from a common problem that I believe is pervasive in all industries: massive amounts of data, constantly evolving and changing.  How is one supposed to digest and make sense of it all?  I follow a number of blogs and Twitter feeds, for example, but I feel overwhelmed with the amount of "noise" out there; recently I have stopped following my Twitter feed since, although there is a lot of interesting and useful information in it, there is also a lot of stuff I'm not interested in – life's too short to trawl through that, let alone try to link it with other data sources.  (I remember reading somewhere that Donald Knuth decided to no longer bother with email, for this reason).

So one day, over an early morning coffee, I hope to be able to view quickly and intelligently, a consolidated view of all my data feeds.  To have a view also, that is not just a mass of text but a complete model that (most importantly) makes sense to me.  Anyway, there is quite a way to go to reach this goal!!

Using a Static Send Port like a Dynamic Send Port

Dynamic send ports allow adapter properties to be set at runtime (and also to select the adapter to be used).  In my particular BizTalk 2009 scenario, I was creating a WCF dynamic send port to call a service endpoint URI only known at runtime, specified by the client (my orchestration is designed to be a generic message broker).

My first dislike was that the WCF configuration had to be defined programmatically in my orchestration.  Sure, I was storing the properties in a custom SSO application so they weren't hardcoded, but the BizTalk admin console provides a standard mechanism to configure WCF properties and it made sense to use it.  Thinking of the BizTalk admins, I didn't like the idea of hiding configuration away, and in a non-standard way: it makes troubleshooting more difficult.

Secondly: performance.  A few of my colleagues and sources on the web advised of poor performance using dynamic send ports for these reasons:

1.  A dynamic send port is created each time it is used and in the case of WCF, for instance, the channel stack is created each time.  This can have a significant performance hit.  Further information about this is available here.

2.  Only the default handler for each transport can be used, which is a potential performance bottleneck if the host instance used by the default handler hasn't been optimized for send operations.  This limitation is also a recipe for inconsistent configuration (for example, if a design decision has been made to use a particular host for particular functions, this will not be enforceable), and it also isn't obvious to the BizTalk admins which handler is used for a particular port.  (Note that this limitation has been removed in BizTalk 2013, where it is now possible to choose the send handler for a dynamic send port, rather than being stuck with just the default handler).

So I decided to use a static send port and override the “dummy” URI in my send port with the actual URI provided by the client…  I did this as follows:

1.  In my orchestration, in a Construct Message shape, I assigned values to my own custom context properties that would later be used to populate the BTS.OutboundTransportLocation and WCF.Action properties (these specify the endpoint URI and SOAP action used by the WCF adapter, respectively).  I did this instead of assigning directly to the "out of the box" properties since both were later overwritten on receipt of the message by the send port.
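
As a rough sketch, the Message Assignment expressions looked something like this (the property schema and property names are illustrative only, not the actual names in my solution):

// Message Assignment shape (orchestration expression) - illustrative names only.
msgRequest(MyCompany.Broker.PropertySchema.DestinationUri) = requestedEndpointUri;
msgRequest(MyCompany.Broker.PropertySchema.SoapAction) = requestedSoapAction;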

2.  Using a custom pipeline component in a send pipeline assigned to the send port, I then promoted the BTS.IsDynamicSend, BTS.OutboundTransportLocation and WCF.Action properties, populating BTS.OutboundTransportLocation and WCF.Action with the values assigned to my custom context properties, like this:

inmsg.Context.Promote("IsDynamicSend", "http://schemas.microsoft.com/BizTalk/2003/system-properties", true); // Set this to prevent URL caching.

inmsg.Context.Promote("OutboundTransportLocation", "http://schemas.microsoft.com/BizTalk/2003/system-properties", endpointURL);

inmsg.Context.Promote("Action", "http://schemas.microsoft.com/BizTalk/2006/01/Adapters/WCF-properties", operationName);

Note that BTS.IsDynamicSend has been set to "true".  As mentioned on MSDN here, this causes the send adapter to not use cached configuration, but to read configuration from the message context each time the send port is used.  If BTS.IsDynamicSend was not set, for example, then the cached endpoint URI would be used instead of the endpoint URI actually stamped on the message, which was not what I wanted, since it's possible that the endpoint may change between calls.

Performance: running my SoapUI load tests, the response times of my web service were the same after changing to a static port from a dynamic port.  I'm using the WCF-Custom adapter with wsHttpBinding in my static send port.  The response time of my web service was already acceptable to my client, and my main motivation for using a static send port was the better configuration options in the BizTalk admin console (rather than storing configuration in a custom SSO application).  However, better performance would have been nice!  If I have time, I may investigate this further across the different adapters.

WCF Error from Visual Studio Debugger: “An exception of type ‘System.ServiceModel.FaultException`1’ occurred but was not handled in user code”

This error got me stumped for a bit…  The solution was quite simple but caused a few further grey hairs so I will share here how I resolved this annoyance!

I was hosting a WCF solution in Visual Studio (for testing and debugging purposes) and calling it from a Windows Forms application.  Things were going well until I introduced typed SOAP faults: I wanted to throw a typed fault from my WCF solution and have my forms application catch and handle the error.

When I threw the error from my WCF solution, the Visual Studio debugger complained, saying that the error was unhandled:

Fig. 1  Visual Studio 2013 SOAP Fault Unhandled Error

However, my forms application had been built to handle the exception as follows:

Fig. 2  But SOAP Fault Handled in my Forms App
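
The handler in the screenshot is along these lines – a sketch only, with the service client, operation and fault contract names made up for illustration:

using System.ServiceModel;
using System.Windows.Forms;

public void CallService(OrderServiceClient client, OrderRequest request)
{
    try
    {
        client.SubmitOrder(request);
    }
    catch (FaultException<OrderProcessingFault> fault)
    {
        // The typed SOAP fault thrown by the service is handled here, even though
        // the debugger reports it as "not handled in user code" when it is thrown.
        MessageBox.Show("The service returned a fault: " + fault.Message);
    }
}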

I couldn't work out for some time how to suppress the error, but then noticed a check box in the VS exception message box: "Break when this exception type is user-unhandled" (highlighted in Fig. 1 above).

The wording is a bit of a misnomer since I am actually handling the error in my forms app, but I guess the VS debugger can't work this out.  I unchecked the box and VS behaved as I wanted it to: my WCF solution was able to throw the exception for handling by my forms app.

I later learnt that it is also possible to configure how the VS debugger behaves with regard to exceptions by selecting "DEBUG –> Exceptions…".  This opens an Exceptions window where it is possible to configure whether the debugger should break for certain exception types, e.g.:

Fig. 3  Specifying that the Debugger should not Break for Exceptions of Type 'System.ServiceModel.FaultException`1'