Thoughts on Integrating Systems & IoT


Introducing jarwin – “Just Another (Rss) Reader for Windows”, Beta

This is a post to introduce a personal project I have been working on for the last few months and announce the availability of an early beta version, which I hope people will download and give me feedback on.

I have learnt an enormous amount over the last few months working on it and had lots of fun along the way (and some frustrations too!).

If you would like the short version of this post, the app is available for download here.

Also check out the wiki here.

Unfortunately, a known issue/limitation is that there is currently no ability to specify proxy server credentials, so you will probably be out of luck trying to sync on a corporate network.  This will be a new feature in the next release…

Also, issues/bugs can be logged here.  I’m a very busy man these days, so I will try to reply as promptly as I can.

What is this app?  Well, it’s not much really – it’s a (very) simple RSS reader, built as a Windows Forms application.  I use it to follow blogs that I’m interested in.

As I mention in the app wiki, it represents only a small percentage of the vision I have in mind, which is an app that I can use to download content from various disparate data sources (email, blogs, Twitter, Facebook, LinkedIn etc.) and present all this information in an intelligent and meaningful way, so that at a glance I can get answers to questions such as: what are the key trends in the systems integration space right now?  What information is meaningful to me?  What is the impact of blog post x?

It would also be possible to “swap” datasets in and out of the analytics (quickly and efficiently).

These ideas are nothing new, I know…  There are tools out there already too.  But I’m fed up using other people’s software outside of my work domain/practice – I would much rather write and use my own.

The seed for these ideas comes from a problem that I believe is pervasive in all industries: masses of data, constantly evolving and changing.  How is one supposed to digest and make sense of it all?  I follow a number of blogs and Twitter feeds, for example, but I feel overwhelmed by the amount of “noise” out there.  Recently I stopped following my Twitter feed: although there is a lot of interesting and useful information in it, there is also a lot of material I’m not interested in – life’s too short to trawl through it, let alone try to link it with other data sources.  (I remember reading somewhere that Donald Knuth decided to no longer bother with email for this reason.)

So one day, over an early morning coffee, I hope to be able to view, quickly and intelligently, a consolidated view of all my data feeds – a view that is not just a mass of text but a complete model that (most importantly) makes sense to me.  Anyway, there is quite a way to go to reach this goal!


Using a Static Send Port like a Dynamic Send Port

Dynamic send ports allow adapter properties to be set at runtime (and also the adapter itself to be selected).  In my particular BizTalk 2009 scenario, I was creating a WCF dynamic send port to call a service endpoint URI known only at runtime, specified by the client (my orchestration is designed to be a generic message broker).

My first dislike was that WCF configuration had to be defined programmatically in my orchestration.  Sure, I was storing the properties in a custom SSO application so they weren’t hardcoded, but the BizTalk admin console provides a standard mechanism to configure WCF properties and it made sense to use it.  Thinking of the BizTalk admins, I didn’t like the idea of hiding configuration away, and in a non-standard way: it makes troubleshooting more difficult.
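For context, “defining WCF configuration programmatically” for a dynamic send port typically means an Expression shape along these lines (a sketch only – the variable names and property values here are hypothetical, and the exact WCF properties needed depend on your adapter and binding):

```csharp
// Hypothetical Expression shape for a WCF dynamic send port: the endpoint
// address and WCF configuration must be stamped on the port/message at runtime.
SendPort_Dynamic(Microsoft.XLANGs.BaseTypes.Address) = strEndpointUrl;  // e.g. read from custom SSO
msgRequest(WCF.Action) = strOperationName;
msgRequest(WCF.BindingType) = "wsHttpBinding";
msgRequest(WCF.SecurityMode) = "None";
```

None of this is visible in the admin console, which is exactly the troubleshooting concern described above.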

Secondly: performance.  A few of my colleagues and sources on the web advised of poor performance using dynamic send ports for these reasons:

1.  A dynamic send port is created each time it is used and, in the case of WCF for instance, the channel stack is created each time.  This can incur a significant performance hit.  Further information about this is available here.

2.  Only the default handler for each transport can be used, which is a potential performance bottleneck if the host instance used by the default handler hasn’t been optimized for send operations.  This limitation is also a recipe for inconsistent configuration (for example, if a design decision has been made to use a particular host for particular functions, this will not be enforceable), and it isn’t obvious to the BizTalk admins which handler is used for a particular port.  (Note that this limitation has been removed in BizTalk 2013, where it is now possible to choose a host for a dynamic send port rather than being stuck with the default handler.)

So I decided to use a static send port and override the “dummy” URI in my send port with the actual URI provided by the client…  I did this as follows:

1.  In my orchestration, in a Construct Message shape, I assigned to my own custom context properties the values that would later be used to populate the BTS.OutboundTransportLocation and WCF.Action properties (these specify the endpoint URI and SOAP operation used by the WCF adapter, respectively).  I did this instead of assigning directly to the “out of the box” properties since both were later overwritten on receipt of the message by the send port.

2.  Using a custom pipeline component in a send pipeline assigned to the send port, I then promoted the properties BTS.IsDynamicSend, BTS.OutboundTransportLocation and WCF.Action, populating BTS.OutboundTransportLocation and WCF.Action with the values assigned to my custom context properties, like this:

inmsg.Context.Promote("IsDynamicSend", "http://schemas.microsoft.com/BizTalk/2003/system-properties", true); // Set this to prevent URL caching.

inmsg.Context.Promote("OutboundTransportLocation", "http://schemas.microsoft.com/BizTalk/2003/system-properties", endpointURL);

inmsg.Context.Promote("Action", "http://schemas.microsoft.com/BizTalk/2006/01/Adapters/WCF-properties", operationName);

Note that BTS.IsDynamicSend has been set to “true”.  As mentioned on MSDN here, this causes the send adapter not to use cached configuration but to read configuration from the message context each time the send port is used.  If BTS.IsDynamicSend were not set, the cached endpoint URI would be used instead of the endpoint URI actually stamped on the message, which was not what I wanted, since it’s possible that the endpoint may change between calls.
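For reference, the promotion code above lives in a pipeline component whose Execute method might look something like this.  This is a sketch, not production code: the custom property names and their namespace are hypothetical, while the BTS and WCF namespaces are the standard BizTalk property-schema namespaces.

```csharp
// Sketch of the custom pipeline component's Execute method.
public IBaseMessage Execute(IPipelineContext pContext, IBaseMessage inmsg)
{
    const string btsNs = "http://schemas.microsoft.com/BizTalk/2003/system-properties";
    const string wcfNs = "http://schemas.microsoft.com/BizTalk/2006/01/Adapters/WCF-properties";
    const string customNs = "https://mycompany.com/schemas/contextproperties"; // hypothetical

    // Read the values stamped on the message by the orchestration (property names hypothetical).
    string endpointURL = (string)inmsg.Context.Read("EndpointUrl", customNs);
    string operationName = (string)inmsg.Context.Read("OperationName", customNs);

    inmsg.Context.Promote("IsDynamicSend", btsNs, true);                    // Prevent URL caching.
    inmsg.Context.Promote("OutboundTransportLocation", btsNs, endpointURL); // Actual endpoint URI.
    inmsg.Context.Promote("Action", wcfNs, operationName);                  // SOAP operation.

    return inmsg;
}
```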

Performance: running my SoapUI load tests, the response times of my web service were the same after changing from a dynamic port to a static port.  I’m using the WCF-Custom adapter with wsHttpBinding in my static send port.  The response time of my web service was already acceptable to my client, and my main motivation in using a static send port was the better configuration options in the BizTalk admin console (rather than storing configuration in a custom SSO application).  However, better performance would have been nice!  If I have time, I may investigate this further across the different adapters.

WCF Error from Visual Studio Debugger: “An exception of type ‘System.ServiceModel.FaultException`1’ occurred but was not handled in user code”

This error had me stumped for a bit…  The solution was quite simple but caused a few further grey hairs, so I will share here how I resolved this annoyance!

I was hosting a WCF solution in Visual Studio (for testing and debugging purposes) and calling it from a forms application.  Things were going well until I introduced typed SOAP faults: I wanted to throw a typed fault from my WCF solution and for my forms application to catch and handle the error.

When I threw the error from my WCF solution, the Visual Studio debugger complained, saying that the error was unhandled:

Fig. 1  Visual Studio 2013 SOAP Fault Unhandled Error

However, my forms application had been built to handle the exception as follows:

Fig. 2  But SOAP Fault Handled in my Forms App

I couldn’t work out for some time how to suppress the error, but then noticed a check box in the VS exception message box: “Break when this exception type is user-unhandled” (highlighted in Fig. 1 above).

The wording is a bit of a misnomer since I am actually handling the error in my forms app, but I guess the VS debugger can’t work this out.  I unchecked the box and VS behaved as I wanted it to: my WCF solution was able to throw the exception for handling by my forms app.

I later learnt that it is also possible to configure how the VS debugger behaves with regard to exceptions by selecting “DEBUG –> Exceptions…”.  This opens an exceptions window where it is possible to configure whether the debugger should break for certain exception types, e.g.:

Fig. 3  Specified that the Debugger should not Break for Exceptions of Type ‘System.ServiceModel.FaultException`1’


Few Thoughts on the ESB Toolkit and an Error – “The ‘ServiceType’ property value should not be empty or null”

I have recently been experimenting with the ESB Toolkit (version 2.2 that ships with BizTalk 2013) and I think it is a good way to expedite loosely coupled BizTalk solutions, dynamically configurable at runtime using the Business Rules Engine (BRE).

At a high level, the ESB Toolkit itinerary model is an implementation of the routing slip pattern.

My immediate impression is that development using the Itinerary Designer is tightly coupled to the runtime environment, more so than “standard” BizTalk development.  By “runtime environment”, I mean artefacts/configuration viewable via the BizTalk admin console (e.g. applications, send port filters etc.) and also policies created via the BRE Composer.  Basically, the target application needs to be set up before starting work on the solution in Visual Studio.  Any changes to the solution setup (changing a send port name, for example) would likely require firing up Visual Studio, propagating the changes to the itinerary and then importing it into the itinerary database.

It has also occurred to me that the itinerary pattern is, in my mind, an easier way to implement a message-type-agnostic solution than using the standard BizTalk toolset.  I have recently been wrestling with a series of orchestrations processing messages in an untyped fashion, routing to/from the MessageBox purely using context properties: this is a powerful enabler of a “service first” approach (instead of a “message first” approach), permitting heavy reuse of processing logic without caring about the underlying message type.  Yes, I’m thinking about SOA principles here.  However, it’s been quite a mission to implement this routing using untyped messages in orchestration.

To illustrate this tight coupling of the development and runtime environments mentioned previously (and to demonstrate my noob status regarding the ESB Toolkit :-)), whilst trying to export a model via Visual Studio, I was stumped by these errors:

Itinerary Designer Property Value Errors

Clicking on my off-ramp, it was obvious that these three properties were not configured, but they looked to be read only – so how could I add values?

Missing Properties in the Itinerary Designer – Not Possible to Configure Here

After a bit of head scratching and a web search, it soon became clear that these properties referred to filter properties on my send port – it would have been useful if the property names made this obvious.

So in my BizTalk application, I created the following filters on my dynamic send port:

ESB Toolkit Send Port Filters Required

I then re-selected the send port in the off ramp and the required properties were then populated from the BizTalk databases:

Properties Visible in Itinerary Designer After Adding Filters to Send Port

I hope this post helps out other ESB Toolkit “greenhorns”.

Using Inline XSLT to Populate an <Any> Element

I have been working recently with schemas containing <Any> elements.  Luckily I don’t need to manipulate the <Any> contents much: to do so, the standard BizTalk functoids available in the mapper can’t be used, and given the range of XML that can be expected in the <Any>, some extensive custom XSLT would be required instead.

I have two scenarios:

  1. The contents of an <Any> element need to be copied to a known schema “as is”.
  2. The contents of a known schema need to be mapped “as is” to an <Any>.

I struggled with the mass copy functoid for both scenarios – it didn’t behave quite how I expected/wanted.

After a bit of head scratching and a read of this blog post, I realised that the mass copy functoid doesn’t copy across root elements.  I suppose the assumption is that both source and target schemas will contain a common element that can be used as the “target” of the mass copy functoid.  This wouldn’t work in my case, since the schemas being mapped are quite different and the <Any> would contain a range of different XML, so I couldn’t specify an appropriate containing “root” element.
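To illustrate with a hypothetical example (the element names here are invented), suppose the <Any> under an <Entities> element contains an <Order> document:

```xml
<!-- Source: the <Any> element contains a complete <Order> document -->
<Entities>
  <Order>
    <Id>123</Id>
  </Order>
</Entities>

<!-- Mass copy functoid output: the <Order> root element is dropped -->
<Entities>
  <Id>123</Id>
</Entities>

<!-- Inline XSLT (xsl:copy-of select=".") output: the root element is preserved -->
<Entities>
  <Order>
    <Id>123</Id>
  </Order>
</Entities>
```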

To get around this behaviour of the mass copy functoid, in my map I created a scripting functoid and added some inline XSLT like this:

<xsl:copy-of select="." />

My map then looked like this:

Map With “Mass Copy” in Inline XSLT

Here is an example instance of the source message:

Source Message

And this is the result after passing the message above through the map:

Output Message

Also here is the output message if the mass copy functoid is used (note that the root <Order> element is missing, under <Entities>):

Output Message Using Mass Copy Functoid

It seems strange and limiting that the mass copy functoid doesn’t copy the root element.

BizTalk 2009 – Error Enlisting Send Port – “Exception from HRESULT: 0xC00CE557”

This is the second time that I have encountered this obscure error in BizTalk 2009, when attempting to enlist a send port after importing bindings originally exported via the admin console:

BizTalk 2009 – Error Enlisting Send Port

Both times (for different customers) the issue was that an extra carriage return and line feed had been inserted before and after the <Filter> element content in the offending bindings file being imported, like this:

Bindings File – Element Contains Carriage Return and Line Feed

This had a twofold effect: filter expressions were missing from the send port, and the error above occurred when enlisting it.

I carefully checked the bindings file and discovered other <Filter> elements with extra carriage returns and line feeds: I removed them, like so:

Bindings File – Extra Carriage Return and Line Feed Removed From Element

On import of the bindings file again, I could then see my filter expressions and also could enlist the send port…

I have only ever come across this error in BizTalk 2009.

Also, although I am doing so for the send port in the bindings file snippet, I don’t filter on receive port names in Production-ready solutions – only sometimes during initial development!  It is better to filter on context properties such as BTS.MessageType, WCF.Action and BTS.Operation.
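For example, a send port filter based on context properties might look like this (the message type and action values are hypothetical):

```text
BTS.MessageType == http://mycompany.com/schemas/order#Order
Or WCF.Action == SubmitOrder
```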

BizTalk WCF Service Consuming Wizard – HTTP 407 Proxy Authentication Required

I got this error recently running the BizTalk WCF Service Consuming Wizard whilst working at a customer’s office, where I hadn’t encountered it previously (sensitive parts of the error message have been removed):

BizTalk WCF Service Consuming Wizard HTTP 407 Error

I think it was due to the installation of internet security software on my development VM by the company’s infrastructure team, and/or a new set of group policies being applied to the domain to which my VM is connected…

Anyhow, it was obvious that all traffic from my VM was now going through a web proxy for the purposes of filtering traffic and I needed to install proxy client software.  This would entail some internal processes being enacted.

Being impatient to progress (and seeing this as an opportunity to try something I had read about a few weeks previously), I decided to see if I could use SvcUtil.exe, providing my proxy credentials to enable it to authenticate against the web proxy for me.

It would be great if the BizTalk WCF Service Consuming Wizard had proxy authentication support!  Maybe I’m missing a trick here??

Based on this article here on Stack Overflow, I created a small proxy class, created a strong name key file for it and then installed it in the GAC on my dev VM:

Proxy.cs snippet
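I can’t reproduce the screenshot here, but the proxy class follows the pattern from the Stack Overflow article: a System.Net.IWebProxy implementation that returns the proxy address and credentials.  All names, URLs and credentials below are placeholders for the real values.

```csharp
using System;
using System.Net;

namespace MyCompany.ProxyAuth // hypothetical namespace
{
    // Supplies the corporate proxy address and credentials to any caller
    // (SvcUtil.exe in this case) that asks the default proxy module.
    public class AuthenticatedProxy : IWebProxy
    {
        public ICredentials Credentials
        {
            get { return new NetworkCredential("username", "password", "DOMAIN"); }
            set { }
        }

        public Uri GetProxy(Uri destination)
        {
            return new Uri("http://webproxy.mycompany.com:8080"); // placeholder proxy address
        }

        public bool IsBypassed(Uri host)
        {
            return false; // route everything via the proxy
        }
    }
}
```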

I then edited my SvcUtil config file as follows, adding a reference to the proxy class assembly:

SvcUtil.exe.config file
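The SvcUtil.exe.config change registers the GAC’d proxy class as the default proxy module via the standard .NET <defaultProxy> element.  The assembly name and public key token below are placeholders for the real values.

```xml
<configuration>
  <system.net>
    <defaultProxy enabled="true" useDefaultCredentials="false">
      <!-- Fully qualified type name of the IWebProxy implementation installed in the GAC -->
      <module type="MyCompany.ProxyAuth.AuthenticatedProxy, MyCompany.ProxyAuth,
                    Version=1.0.0.0, Culture=neutral, PublicKeyToken=0123456789abcdef" />
    </defaultProxy>
  </system.net>
</configuration>
```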

Then using the simple batch script below, I could run SvcUtil with proxy authentication included and hey presto, I could download the WSDL and associated XSD import files!

GetServiceDefinition.bat file
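And the batch script is essentially a one-liner along these lines – the SvcUtil path will vary with your SDK version, and the service URL and output directory here are placeholders:

```bat
REM GetServiceDefinition.bat - download the WSDL and all imported XSDs in one go
"%ProgramFiles(x86)%\Microsoft SDKs\Windows\v8.0A\bin\NETFX 4.0 Tools\svcutil.exe" ^
    /t:metadata /directory:C:\Temp\ServiceDefinition ^
    http://services.example.com/SomeService.svc?wsdl
```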

This saved me the annoyance of having to download each imported XSD file individually and then modifying the path in the parent WSDL.

I have a GitHub repo here containing the source code for the solution (as usual, provided under the terms of the MIT licence).

BizTalk Instrumentation – An Initial Foray into the CAT Framework

I have spent some time recently looking into the Windows Server AppFabric Customer Advisory Team’s (herein “CAT”) BizTalk instrumentation framework, and I will report here on some of my findings…  In short, I have become quite excited about the potential it offers, not only to instrument BizTalk solutions (in development and production environments) but also as the basis of a unit testing framework.  In this brief initial look into the framework, I will focus on tracing in orchestrations, but it’s possible to introduce tracing into all BizTalk artefacts (and also non-BizTalk/custom components).

The framework itself has been around for some time now (since May 2010?) – this blog entry from the CAT is a well-written article introducing the tracing facilities it provides (at the end of the article is a link to the framework source code).  This white paper looks to have been heavily based on the blog article (or vice versa) and is similarly well written and easy to follow.

I have to confess that in the past, like a lot of developers, I have used the System.Diagnostics.Trace .NET class in conjunction with DebugView to trace execution of orchestrations during development.  One thing I have observed is that CPU utilization increases dramatically whilst DebugView runs: it’s not a tool that should be run outside of a development environment.  The CAT observed this too and came up with a framework leveraging the Event Tracing for Windows (ETW) facility.  The framework is a wrapper around the TraceProvider class from the Microsoft.BizTalk.Diagnostics namespace in Microsoft.BizTalk.Tracing.dll.  As the CAT mention, this is a “hidden gem” and is utilised in all major components of the BizTalk runtime (you can observe this by running DebugView while the BizTalk engine is executing – heaps of trace events are output).

Now, the framework doesn’t mean you should run DebugView outside of a development environment: instead, it allows rapid and efficient tracing to a log file.  This means that it won’t cripple your development VM, and it also permits tracing in Production and Production-like environments.

I will briefly describe here how I set up the framework to trace orchestration execution…

After downloading the framework from the CAT blog, I opened the solution using Visual Studio 2012 and successfully ran the Conversion Wizard.

There are 3 projects:

  1. Microsoft.BizTalk.CAT.BestPractices.Framework
  2. Microsoft.BizTalk.CAT.BestPractices.Samples.TracingBenchmark
  3. Microsoft.BizTalk.CAT.BestPractices.Tests

The Microsoft.BizTalk.CAT.BestPractices.Framework project is the one I will need to reference in my solution.

A few immediate/gut observations about the Microsoft.BizTalk.CAT.BestPractices.Framework project that piqued my interest:

  1. Preprocessor directives have been specified that output trace statements using the System.Diagnostics.Trace .NET class for a debug build only.  One could infer from this that it is designed for development, where a debug build of the framework could be used for outputting to DebugView.  A release build will not include the calls to System.Diagnostics.Trace, precluding the use of DebugView in Production, for example.
  2. Lots of passing of object arrays – no generic collections evident: possibly a boxing/unboxing overhead here.

In order to get the Microsoft.BizTalk.CAT.BestPractices.Framework project to build (under .NET 4.5), I had to specify the full namespace (Microsoft.BizTalk.CAT.BestPractices.Framework) for the Func<string> delegate in the ComponentTraceProvider class.

Next I selected the option to sign the project assembly and created a strong name key then installed the assembly in the GAC.

It’s then a matter of adding a reference to Microsoft.BizTalk.CAT.BestPractices.Framework.dll in my orchestrations project in my BizTalk solution.

The white paper mentioned in the introduction to this post presents usage of the framework well, but I will demonstrate here how I implemented tracing in my orchestration…

Expression shapes containing tracing statements should be strategically added to the orchestration.

  • Tracing should kick off with a call to the TraceIn method immediately after orchestration activation.  The method returns a GUID that can be used to trace specific instances (the GUID can be prefixed to the trace output for example – thanks Johann for this tip!).


  • Message body content can be output using the TraceInfo method.  Note that the variable debugMessage is of type System.Xml.XmlDocument.


  • It’s possible to trace scope execution using the TraceStartScope and TraceEndScope methods.  The framework will output scope execution time.




  • Tracing should be added into exception blocks like so, using the TraceError method (passing in the exception object):


  • Finally the orchestration should end with a call to TraceOut, passing in the GUID (stored in the callToken variable below) returned from calling the TraceIn method.
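Since the screenshots don’t reproduce here, the Expression shape contents look roughly as follows.  The method names come from the CAT article but the exact signatures are paraphrased – check the framework source for the precise overloads, and note that the message and scope names are hypothetical.

```csharp
// TraceIn on activation - returns a GUID used to correlate this instance's events.
callToken = TraceManager.WorkflowComponent.TraceIn("RandomPresentProcess started");

// Trace message body content (debugMessage is a System.Xml.XmlDocument).
TraceManager.WorkflowComponent.TraceInfo("{0}: received message: {1}", callToken, debugMessage.OuterXml);

// Time a scope: TraceStartScope returns a start timestamp that is later
// passed to TraceEndScope so the framework can output the elapsed time.
scopeStart = TraceManager.WorkflowComponent.TraceStartScope("CallRandomPresentService", callToken);
// ... scope work happens here ...
TraceManager.WorkflowComponent.TraceEndScope("CallRandomPresentService", scopeStart, callToken);

// In an exception block, trace the caught exception object.
TraceManager.WorkflowComponent.TraceError(ex);

// TraceOut at the end of the orchestration, passing the correlation GUID back in.
TraceManager.WorkflowComponent.TraceOut(callToken, "RandomPresentProcess completed");
```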


The BizTalk CAT Instrumentation Framework Controller, available on CodePlex, is an excellent GUI-based tool for starting and stopping traces.  It has options to write tracing to DebugView and/or a log file (DebugView shouldn’t be used in Production though, just development environments).  One limitation is that the log file is not human readable until a trace is stopped.

Here’s an example of a trace written to DebugView:



I have an example BizTalk 2013 solution here that is available for download under the terms of the MIT licence where tracing has been implemented.  The example screenshots above are taken from this solution.

Now, two things have got me really excited about the BizTalk CAT instrumentation framework:

  1. The opportunity to log real time tracing events from Production environments in an efficient way – this has implications for monitoring solutions in Production, parsing tracing events to trigger other events e.g. error alerting, reporting.  Tracing could be written to Azure providing a central repository of data that could be analysed and reported on.
  2. The facility to track flow through an orchestration would be the excellent basis for a unit testing framework.  Tests could be run by writing trace events to a queue (Windows service bus?) to preserve the order in which events have been triggered.  The BizUnit testing framework could then be extended to monitor the events on the queue and compare against expected execution flow configured in the business rules engine (BRE).  Events associated with each test run could also be persisted to a database and provide the basis for analysis of test coverage (i.e. have all paths of execution been tested in a given orchestration?).

In regards to point 1, this facility looks similar to recent functionality added to the BizTalk360 monitoring solution.

The Testing inside BizTalk by using ETW Tracing project on CodePlex looks like an almost perfect fit for point 2 and something I intend to explore soon!

In summary, I hope this post provides a good introduction to the CAT tracing framework and how it could be used during development (as a unit testing tool) and subsequently in Production (real time tracing and monitoring).

BizTalk and WCF, Consuming a WCF Service, Part 4 – Exception Handling

This is the last post in a four part series looking at BizTalk and WCF.

Here are the previous three posts:

Part 1 – A Look at the Service to be Consumed

Part 2 – The BizTalk WCF Service Consuming Wizard and a Look at the Artefacts Created

Part 3 – Building the BizTalk Solution

In this final post, I will show how exception handling has been added to the RandomPresentService orchestration controlling our business process.

Some BizTalk Exception Considerations: 101 Course

Firstly, here is a quick brain dump of items that should be considered when thinking about handling exceptions in an orchestration:

  1. What are the customer’s requirements regarding messages that fail inside BizTalk?  Should alerting and/or logging be implemented so remedial action can be carried out, and if so, who should be the recipients of any such alerting?  What steps should be carried out to recover from the error condition (what is the process)?  Often the customer has only a vague idea here and some teasing out of requirements may be needed.
  2. What is the impact if a particular process/action fails in your orchestration?  Does this invalidate previously successful actions?  Should previously committed actions therefore be rolled back in a compensation block?
  3. What about business process type errors?  Can these be handled without the need to jump to an exception condition?  Preferably, business type errors that happen routinely (and are therefore not exceptional) should not be handled by throwing an exception but handled in the “body” of the orchestration.
  4. Do you need to handle any custom exceptions?  Hopefully this is well documented in any service that needs to be consumed, for example.

Often error handling is forgotten until the very end of the development phase, instead of being considered at the start of a project – this is a mistake!  Alternatively, the customer’s requirements are not taken into consideration when building exception handling.

By thinking about exception handling at the start of a project, it can be “built in” to the solution.  Also it allows time for discussions with the customer around how exceptions should be handled (in the analysis and design phase).  It’s costly to implement exception handling logic retrospectively.

One of the big selling points of BizTalk, and another tenet of the platform, is that it is “robust” and ensures “guaranteed message delivery”.  To live up to these expectations, it’s important that error conditions are handled in such a way that resubmission of failed messages is secured, with possibly multiple options for error recovery.  These processes also need to be tested and well documented.

What Exceptions Need to Be Handled?

A question that can be hard to answer is: what exceptions should my solution handle?  By their very nature, exceptions are exceptional/rare, so they are a challenge to define and pin down.

It is a decision guided by some of the following factors:

  1. Experience – harnessed to make “exceptional” errors easier to troubleshoot when the solution is running in Production.
  2. The needs of the customer.
  3. The type of exceptions that any called services could return.

This analysis will generate a list of exceptions that need to be handled.

As with any solution, the most specific exceptions should be handled first with the most general exception caught last.

And no doubt, not all exceptions will (or should) be catered for.  They will generate a nasty runtime error that will require investigation using a tool like the Orchestration Debugger.  This is where a good support team have a chance to shine :-).

Exception 1: PersistenceException

This is the first exception that I’m going to explicitly handle.

My friend and colleague Johann Cooper has written an excellent blog article about this exception.

Specifying direct binding on send ports in your orchestration is a good idea – this decouples physical send ports from the logical send ports specified in your orchestration.  But this pattern can fail if the physical ports are ever unenlisted.

If physical ports are unenlisted, this removes any associated subscription and an exception of type Microsoft.XLANGs.BaseTypes.PersistenceException will manifest in the orchestration.

(You will also get a nonresumable “Routing Failure Report”, to help with troubleshooting the routing error).

So first up, I have modified the RandomPresentService orchestration like so:

  1. Added a non transactional scope shape around the send and receive shapes for the RandomPresentService WCF service.
  2. Added an exception block to the scope to handle the PersistenceException.  This includes the creation of an ESB fault message (so the error will surface in the ESB exception management portal) and a suspend orchestration shape.
  3. Note the addition of looping: this ensures that the orchestration can be resumed in a valid state, since the RandomPresentService will be called again.

Handling PersistenceException in Orchestration

Exception 2: Handling SOAP Faults Returned from the RandomPresentService WCF Service

To handle any SOAP faults returned from the RandomPresentService service, I added a new fault type to the operation on the logical port in the orchestration designer (right click on the operation and select “New Fault Message”) and selected a message type of BTS.soap_envelope_1__2.Fault: this denotes that the fault should be wrapped in a SOAP version 1.2 fault:

Adding a SOAP Fault Handler on the Logical Port for the WCF Service

The fault doesn’t need to be connected to a receive shape or anything like that – it can be left as is, and instead the fault can be handled by defining another exception block and selecting the SOAP fault as the “Exception Object Type”.  When the fault is defined, it will subsequently be viewable in the “Exception Object Type” drop down, like so:

Specifying the SOAP Fault Type in the Exception Block

It’s important to handle SOAP faults: if not, this will cause an unhandled exception in your orchestration (due to an unexpected message type being returned) and it will not be possible to recover from this error by resuming the orchestration.

Also note that in order to extract the SOAP fault message, the orchestration XPath function can be used like this:

strFaultMsg = xpath(exSOAPFault, "string(/*[local-name()='Fault' and namespace-uri()='']/*[local-name()='faultstring' and namespace-uri()=''])");

(Where strFaultMsg is of type System.String and exSOAPFault is of type Microsoft.Practices.ESB.ExceptionHandling.Schemas.Faults.FaultMessage).

Exception 3: Handling System.Exception – “Catch-all” Exception Blocks

The final exception block (that will be evaluated last by the orchestration engine) is a “catch all” block since it will handle exceptions of type System.Exception.  Since all exceptions in .NET inherit from System.Exception, any errors, not previously handled explicitly by previous exception blocks, will be caught by this handler.

I have also wrapped the entire orchestration in a global scope with a single exception block for System.Exception, to ensure that my default exception handling mechanism will always be exercised.
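The ordering matters here: exception blocks are evaluated top to bottom, so the most specific types must come first and System.Exception last, exactly as with typed catch handlers in plain .NET. A quick analogy in Python (the exception types are invented stand-ins for the orchestration’s blocks):

```python
class DeliveryFailureException(Exception):
    """Stands in for a specific, expected failure (e.g. a transport error)."""

class SoapFaultException(Exception):
    """Stands in for a typed SOAP fault returned by the service."""

def handle(exc):
    try:
        raise exc
    except DeliveryFailureException:
        return "retry delivery"        # most specific handler first
    except SoapFaultException:
        return "log fault detail"      # typed fault handler next
    except Exception:
        return "suspend for operator"  # catch-all evaluated last

print(handle(SoapFaultException()))  # log fault detail
print(handle(ValueError("boom")))    # suspend for operator
```

If the catch-all came first, the more specific handlers would never run; the same holds for the ordering of exception blocks in the orchestration’s scope shape.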

Series End

So this concludes this four-part series looking at BizTalk and WCF.

As is to be expected, WCF is tightly integrated into the BizTalk framework and can be leveraged heavily in your BizTalk solutions.

It’s been a journey looking at the creation of a WCF service; using the “out-of-the-box” BizTalk tooling to consume the service; building upon the artefacts created by the WCF Service Consuming Wizard; finally ending with this brief (!) look at implementing exception handling in the orchestration controlling the business process.

I hope it provides a good introduction to the subject!

The RandomPresentService WCF service is available here.

The full BizTalk 2013 solution is available here.

(Both solutions are available under the terms of the MIT licence, a copy of which is included with each).

BizTalk and WCF, Consuming a WCF Service, Part 3 – Building the BizTalk Solution

This post builds on part 1 and part 2 of this series looking at BizTalk and WCF.

The first post introduced the RandomPresent WCF service and covered design considerations, hosting and testing.

The second post touched on how to consume the WCF service from BizTalk, using the BizTalk WCF Service Consuming Wizard.

Process Flow

The business wants to send each customer a random Christmas present with their order.  To achieve this, BizTalk will call the RandomPresent WCF service, which returns a random present as XML, wrapped in a SOAP envelope.  The present response then needs to be appended to the order, so the guys in the warehouse can add the present to the shipment.

It’s a matter of combining the original order and the response from the web service.


I’ll start with an overview of the single orchestration driving this process, since all other artefacts hang off this.

In the end, I created a new orchestration from scratch and didn’t use the one generated for me by the BizTalk WCF Service Consuming Wizard:

RandomPresentService Orchestration


I will summarise the flow here and call out some interesting bits!:

  1. A map on my receive port converts the external format PO to the canonical version (check out my blog post here on the canonical messaging pattern).  The canonical message has an element called “Status” that is assigned the value “NEW” in the map.  This element is mapped to a property called “POStatus” of type MessageContextPropertyBase.   So, as you can guess, I had to create a property schema which I added to my internal schemas project (it contains just this one property).  The type MessageContextPropertyBase identifies “POStatus” as a context property that can be written to the context of a message and, as will be discussed, this is what I want to achieve for the purposes of routing and to prevent my orchestration from being caught in an endless activation loop.
  2. The canonical PO is written to the MessageBox and an instance of the orchestration fires up; the orchestration subscribes to messages of type canonical PO and also on the “POStatus” promoted property: it is activated only when the status is “NEW”.  The reason for this additional subscription on “POStatus” is that, on completion, the orchestration will write the modified canonical PO back to the MessageBox, and an infinite activation loop would occur if orchestrations were activated solely on publication of a canonical PO message type (not good!).  Of course, there are other ways of working around this: I think this is why some solutions I have seen have more than one canonical schema, with different namespaces to differentiate them – this seems a mad way of getting around the “activation infinite loop” issue though, since it introduces the overhead of maintaining two canonical schemas, and a design principle of the canonical messaging pattern is that there is only one “source of truth”.
  3. Note that I have designed my orchestration to work with internal solution message types only (not the original, external, versions received).  This protects the orchestration from changes to the format of the external PO, which might otherwise require changes to, and redeployment of, the orchestration.
  4. Next I create a request message for the RandomPresent WCF service…  I have created a C# classes project containing a class representation of the request schema, generated by pointing Xsd.exe at my schema.  I instantiate an instance of the request message in a Construct Message shape using the New keyword.  (Another way of creating a message in an orchestration is to create a map for the purpose.)  There is a serious caveat to instantiating a message using the New keyword, however: it is vital that changes to the schema are reflected in the class representation (maybe this could be achieved in a Visual Studio “post build event” on the schemas project).  As has happened to me in the past, if you assign a message created from your schema (via a map) to an (outdated) message created from a class (e.g. in a Construct Message shape), any missing elements will be silently lost in the output message…  For some reason, creating a new message in an orchestration is a torturous process, it seems to me!
  5. A multipart message map combines the canonical and RandomPresent XML into a new instance of a canonical PO.  This map will be discussed further in the “Maps” section below.
  6. Finally I set the “POStatus” context property to “PROCESSED” in a Message Assignment shape and write the new canonical message to the MessageBox.  The final Send Shape initializes a correlation set but not for the purposes of supporting an asynchronous process, but for routing purposes: the correlation set contains just the “POStatus” context property and initializing this in the Send Shape causes the property to become promoted, enabling it to be used for routing.  Setting this context property to different statuses on completion of processing is my way of preventing an endless activation loop, as discussed in point 1.
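The loop-guard described in steps 1, 2 and 6 boils down to a subscription predicate over message type plus a status property. A toy sketch of the idea (names invented, and the real MessageBox subscription engine is obviously far more sophisticated):

```python
def should_activate(message_type, po_status):
    """Toy version of the orchestration's activation subscription.

    Message type alone is not enough: the orchestration republishes a
    canonical PO on completion, so it would re-activate on its own
    output and loop forever.  Adding the status predicate breaks the
    cycle, because the republished PO carries status "PROCESSED".
    """
    return message_type == "CanonicalPO" and po_status == "NEW"

# Incoming order from the receive port: activates the orchestration.
print(should_activate("CanonicalPO", "NEW"))        # True
# The orchestration's own output, republished with status PROCESSED:
print(should_activate("CanonicalPO", "PROCESSED"))  # False
```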

(Note the lack of error handling – this will be the subject of my next post).


The BizTalk WCF Service Consuming Wizard created the necessary schemas for me to communicate with the RandomPresent service; however, I deleted the RandomPresentService_schemas_microsoft_com_2003_10_Serialization.xsd schema (as discussed in part 2, I haven’t found a use for this schema yet).  I also typically rename the service schema filename to something shorter and more descriptive.

If my solution modifies messages and/or involves an orchestration, I create 2 schema projects: an internal schemas project and an external schemas project, as is the case here.


Finally, a quick look at the map used to combine the canonical PO and the RandomPresent XML into a new instance of a canonical PO:

Canonical PO 1 and RandomPresent Response to Canonical PO 2 Map


Typically, I use an inline XSLT template in my map for tasks such as these – I find using functoids too frustrating and time-consuming.

As mentioned in this previous blog post, I write my XSLT in a separate file and import its contents into my map.

This is the XSLT used:

Inline XSLT Template in Map

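In essence, the multipart map copies the canonical PO through and appends the present element from the service response. As a rough, BizTalk-free illustration of that merge (the sample XML and element names below are invented, not the solution’s actual schemas):

```python
import xml.etree.ElementTree as ET

# Invented stand-ins for the canonical PO and the service response.
order_xml = "<PurchaseOrder><Id>42</Id><Status>NEW</Status></PurchaseOrder>"
present_xml = "<Present>Train set</Present>"

order = ET.fromstring(order_xml)
# Append the present element to the order, as the multipart map does.
order.append(ET.fromstring(present_xml))

merged = ET.tostring(order, encoding="unicode")
print(merged)
# <PurchaseOrder><Id>42</Id><Status>NEW</Status><Present>Train set</Present></PurchaseOrder>
```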


So I hope you enjoyed this tour.  In part 4 of this series, I will show how error handling has been implemented in the solution.

When the solution has been completed, I will make it available for download.