We have recently completed Phase 1 of a “greenfields” BizTalk 2013 R2 implementation for a client and I wanted to jot down here some of my thoughts and technical learnings.
My role on the project was Technical Lead: I gathered requirements, generated the technical specifications, set up the BizTalk solutions, kicked off the initial development and provided supervision and guidance to the team.
Before starting on the project about 15 months ago, I had spent quite a bit of time working with a large BizTalk 2009 installation, so I knew that for my next assignment I would be playing with some new technologies in Azure and also using the new(ish) REST adapter. Looking back now, SOAP+WSDL+XML seems like something from a different age!
Here is a list of some key features of this hybrid integration platform:
- BizTalk sits on-premises and exposes RESTful APIs using the WCF-WebHttp adapter. These APIs provide a standard interface into previously siloed systems on-premises, a big one being the company wide ERP.
- Azure Service Bus relays using SAS authentication provide a means of exposing data held on-premises to applications in the cloud. This proved very effective, but a downside is having to ping the endpoints to ensure that the relay connection established from BizTalk doesn't shut down, which would leave the relay appearing unavailable.
- A Service Bus queue allows the syncing of data from CRM in the cloud to ERP on-premises. We used the SB-Messaging adapter. Don't forget to check for messages in the dead letter queue and have a process around managing this.
- A mixture of asynchronous and synchronous processing. We used orchestrations for asynchronous processing, but synchronous processing (where a person or system is waiting for a response) was messaging-only, with context-based routing.
- We used the BRE pipeline component to provide the features of a “mini” ESB, almost as a replacement for the ESB Toolkit.
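The dead-letter check mentioned above can be scripted against the queue's dead-letter sub-queue. Here is a minimal sketch using the Microsoft.ServiceBus.Messaging API (from the WindowsAzure.ServiceBus NuGet package, contemporary with BizTalk 2013 R2); the connection string and queue name "orders" are placeholders:

```csharp
using System;
using Microsoft.ServiceBus.Messaging;

class DeadLetterCheck
{
    static void Main()
    {
        var connectionString = "Endpoint=sb://...;SharedAccessKeyName=...;SharedAccessKey=...";

        // The dead-letter sub-queue lives at "<queue>/$DeadLetterQueue";
        // FormatDeadLetterPath builds that path for us.
        var deadLetterPath = QueueClient.FormatDeadLetterPath("orders");
        var client = QueueClient.CreateFromConnectionString(connectionString, deadLetterPath);

        BrokeredMessage message;
        while ((message = client.Receive(TimeSpan.FromSeconds(5))) != null)
        {
            // Service Bus stamps the failure reason on dead-lettered messages;
            // log it, then decide whether to resubmit or archive the message.
            object reason;
            message.Properties.TryGetValue("DeadLetterReason", out reason);
            Console.WriteLine("{0}: {1}", message.MessageId, reason);
            message.Complete();
        }
    }
}
```

A script like this could run on a schedule as a first step towards the management process mentioned above.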
In a nutshell, BizTalk is the bridge between on-premises systems and the cloud, while also providing services for the more traditional systems on the ground. I believe this will be a common story for most established companies for many years but eventually, many companies will run all their systems in the cloud.
I expected a few challenges with the WCF-WebHttp adapter, after stories from my colleagues who had used this adapter before me. My main concern was handling non-HTTP-200 responses, which triggered the adapter to throw an exception that was impossible to handle in BizTalk (I know 500 errors are now returned by the adapter, with a fix in CU5). So from the start of the project, I requested that the APIs exposed from the ERP system always return HTTP 200 but include a JSON error message that we could interrogate.
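The exact contract isn't shown here, but the agreed error envelope looked something like this (the field names are illustrative, not the actual contract we used):

```json
{
  "status": "error",
  "errorCode": "ERP-1042",
  "errorMessage": "Customer account not found"
}
```

On the BizTalk side, the response could then be routed on the status/error fields after the JSON decoder had converted it to XML, rather than relying on the HTTP status code.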
We also had to treat the JSON encoder and decoder pipeline components with kid gloves, running into issues serializing messages if our XSD schemas contained referenced data types (e.g. from a common schema). We had to greatly simplify our schemas to work with these components.
Also, the lack of support for Swagger (consumption via a wizard and exposing Swagger endpoints) is a glaring omission. I manually created Swagger definition files using Restlet Studio, which was recommended to me by Mark Brimble and which I found to be a great tool for documenting APIs.
BizTalk WCF Receive Location Configuration Error: The SSL settings for the service ‘None’ does not match those of the IIS ‘Ssl, SslRequireCert, Ssl128’
A BizTalk WCF endpoint is exposed with security enabled: SSL with a client certificate is required (so mutual, 2-way client and server authentication is configured).
BizTalk (2009) receive location is configured as follows:
(Incidentally, the following command can be run in a Windows batch file to configure SSL for an IIS virtual directory:
%windir%\system32\inetsrv\appcmd.exe set config "Default Web Site/ServiceName" -commitPath:APPHOST -section:access -sslFlags:Ssl,Ssl128,SslRequireCert )
Error Message and Analysis
Clients were unable to connect to the service and the following exception message was written to the Application event log on the hosting BizTalk server:
Exception: System.ServiceModel.ServiceActivationException: The service 'ServiceName.svc' cannot be activated due to an exception during compilation. The exception message is: The SSL settings for the service 'None' does not match those of the IIS 'Ssl, SslRequireCert, Ssl128'. ---> System.NotSupportedException: The SSL settings for the service 'None' does not match those of the IIS 'Ssl, SslRequireCert, Ssl128'.
So this is an IIS configuration issue. The service is exposing some endpoint that is unsecured (the SSL setting for this endpoint is ‘None’, as mentioned in the error message), which doesn’t match the actual SSL settings configured: ‘Ssl, SslRequireCert, Ssl128’ (i.e. SSL with minimum 128-bit keys and client certificate required).
In this case, the endpoint not matching the SSL settings is the mex endpoint (i.e. the service WSDL).
Ensure that ALL mex endpoints are disabled, by commenting out the following mex binding configuration in the service Web.config file:
<!-- The <system.serviceModel> section specifies Windows Communication Foundation (WCF) configuration. -->
<serviceDebug httpHelpPageEnabled="false" httpsHelpPageEnabled="false" includeExceptionDetailInFaults="false" />
<serviceMetadata httpGetEnabled="false" httpsGetEnabled="true" />
<!-- Note: the service name must match the configuration name for the service implementation. -->
<!-- Comment out mex endpoints if client auth enabled using certificates -->
<service name="Microsoft.BizTalk.Adapter.Wcf.Runtime.BizTalkServiceInstance" behaviorConfiguration="ServiceBehaviorConfiguration">
<!--<endpoint name="HttpMexEndpoint" address="mex" binding="mexHttpBinding" bindingConfiguration="" contract="IMetadataExchange" />-->
<!--<endpoint name="HttpsMexEndpoint" address="mex" binding="mexHttpsBinding" bindingConfiguration="" contract="IMetadataExchange" />-->
I restarted IIS and the service could then be compiled and worked as expected.
Dynamic send ports allow adapter properties to be set at runtime (and also to select the adapter to be used). In my particular BizTalk 2009 scenario, I was creating a WCF dynamic send port to call a service endpoint URI only known at runtime, specified by the client (my orchestration is designed to be a generic message broker).
My first dislike was that WCF configuration had to be defined programmatically in my orchestration. Sure, I was storing the properties in a custom SSO application so they weren't hardcoded, but the BizTalk admin console provides a standard mechanism to configure WCF properties and it made sense to use it. Thinking of the BizTalk admins, I didn't like the idea of hiding configuration away, and in a non-standard way at that: it makes troubleshooting more difficult.
Secondly: performance. A few of my colleagues and sources on the web advised of poor performance using dynamic send ports for these reasons:
1. A dynamic send port is created each time it is used and in the case of WCF, for instance, the channel stack is created each time. This can have a significant performance hit. Further information about this is available here.
2. Only the default handler for each transport can be used, which is a potential performance bottleneck if the host instance used by the default handler hasn't been optimized for send operations. This limitation is also a recipe for inconsistent configuration (for example, if a design decision has been made to use a particular host for particular functions, this will not be enforceable), and it isn't obvious to the BizTalk admins what handler is used for a particular port. (Note that this limitation has been removed in BizTalk 2013, where it is now possible to choose a host instance for a dynamic send port rather than being stuck with the default handler.)
So I decided to use a static send port and override the “dummy” URI in my send port with the actual URI provided by the client… I did this as follows:
1. In my orchestration, in a Construct Message shape, I assigned to my own custom context properties the values that would later be used to populate the BTS.OutboundTransportLocation and WCF.Action properties (these specify the endpoint URI and SOAP operation used by the WCF adapter, respectively). I did this instead of assigning directly to the "out of the box" properties, since both were later overwritten on receipt of the message by the send port.
2. Using a custom pipeline component, I then promoted properties BTS.IsDynamicSend, BTS.OutboundTransportLocation and WCF.Action in a send pipeline assigned to the send port, populating BTS.OutboundTransportLocation and WCF.Action using the values assigned to my custom context properties like this:
inmsg.Context.Promote("IsDynamicSend", "http://schemas.microsoft.com/BizTalk/2003/system-properties", true); // Set this to prevent URL caching.
inmsg.Context.Promote("OutboundTransportLocation", "http://schemas.microsoft.com/BizTalk/2003/system-properties", endpointURL);
inmsg.Context.Promote("Action", "http://schemas.microsoft.com/BizTalk/2006/01/Adapters/WCF-properties", operationName);
Note that BTS.IsDynamicSend has been set to "true". As mentioned on MSDN here, this causes the send adapter not to use cached configuration, but to read configuration from the message context each time the send port is used. If BTS.IsDynamicSend were not set, the cached endpoint URI would be used instead of the endpoint URI actually stamped on the message, which was not what I wanted, since the endpoint may change between calls.
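Pulled together, the pipeline component's Execute method looks something like the sketch below (error handling and the IBaseComponent plumbing are omitted). "EndpointUrl" and "SoapAction" in the "https://example.org/schemas/properties" namespace stand in for my custom context properties; those names and the namespace are illustrative only:

```csharp
public Microsoft.BizTalk.Message.Interop.IBaseMessage Execute(
    Microsoft.BizTalk.Component.Interop.IPipelineContext context,
    Microsoft.BizTalk.Message.Interop.IBaseMessage inmsg)
{
    const string btsNs = "http://schemas.microsoft.com/BizTalk/2003/system-properties";
    const string wcfNs = "http://schemas.microsoft.com/BizTalk/2006/01/Adapters/WCF-properties";
    const string customNs = "https://example.org/schemas/properties"; // placeholder namespace

    // Read the values the orchestration stamped on the message context.
    var endpointURL = (string)inmsg.Context.Read("EndpointUrl", customNs);
    var operationName = (string)inmsg.Context.Read("SoapAction", customNs);

    // Promote the real routing properties; IsDynamicSend prevents URL caching.
    inmsg.Context.Promote("IsDynamicSend", btsNs, true);
    inmsg.Context.Promote("OutboundTransportLocation", btsNs, endpointURL);
    inmsg.Context.Promote("Action", wcfNs, operationName);

    return inmsg;
}
```

The component sits in a send pipeline assigned to the static send port, so the promotion happens after the orchestration has published the message to the MessageBox.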
Performance: running my SoapUI load tests, the response times of my web service were the same after changing from a dynamic port to a static port. I'm using the WCF-Custom adapter with wsHttpBinding in my static send port. The response time of my web service was already acceptable to my client, and my main motivation in using a static send port was better configuration options via the BizTalk admin console (rather than storing configuration in a custom SSO application). However, better performance would have been nice! If I have time, I may investigate this further across the different adapters.
Few Thoughts on the ESB Toolkit and an Error – “The ‘ServiceType’ property value should not be empty or null”
I have recently been experimenting with the ESB Toolkit (version 2.2 that ships with BizTalk 2013) and I think it is a good way to expedite loosely coupled BizTalk solutions, dynamically configurable at runtime using the Business Rules Engine (BRE).
At a high level, the ESB Toolkit itinerary model is an implementation of the routing slip pattern.
My immediate impression is that development using the Itinerary Designer is tightly coupled to the runtime environment, more so than "standard" BizTalk development. By "runtime environment", I mean artefacts/configuration viewable via the BizTalk admin console (e.g. applications, send port filters etc.) and also policies created via the BRE Composer. Basically, the target application needs to be set up before starting work on building the solution in Visual Studio. Any changes to the solution setup (changing a send port name, for example) would likely require firing up Visual Studio, propagating the changes to the itinerary, then importing it into the itinerary database.
It's also occurred to me that the itinerary pattern is an easier way to implement a message-type-agnostic solution, compared to using the standard BizTalk toolset. I have recently been wrestling with a series of orchestrations processing messages in an untyped fashion, routing to/from the MessageBox purely using context properties: this is a powerful enabler of a "service first" approach (instead of a "message first" approach), permitting heavy reuse of processing logic without caring about the underlying message type. Yes, I'm thinking about SOA principles here. However, it's been quite a mission to implement this routing using untyped messages in orchestrations.
To illustrate this tight coupling of the development and runtime environments mentioned previously (and to demonstrate my noob status regarding the ESB Toolkit :-)), whilst trying to export a model via Visual Studio, I was stumped with these errors:
It was obvious, clicking on my off-ramp, that these three properties were not configured, but they looked to be read-only – so how could I add values?!:
After a bit of head scratching and a web search, it soon became clear that these properties referred to filter properties on my send port – it would have been useful if the property names made it obvious what they referred to.
So in my BizTalk application, I created the following filters on my dynamic send port:
I then re-selected the send port in the off ramp and the required properties were then populated from the BizTalk databases:
I hope this post helps out other ESB Toolkit “greenhorns”.
I have been working recently with schemas containing <Any> elements. Luckily I don’t need to manipulate the <Any> contents much: to do so, the standard BizTalk functoids available in the mapper can’t be used and instead, given the range of the XML that can be expected in the <Any>, some extensive custom XSLT would be required.
I have two scenarios:
- The contents of an <Any> element need to be copied to a known schema “as is”.
- The contents of a known schema need to mapped “as is” to an <Any>.
I struggled with the mass copy functoid for both scenarios – it didn’t behave quite how I expected/wanted.
After a bit of head scratching and a read of this blog post, I realised that the mass copy doesn’t copy across root elements. I suppose, for example, the assumption is that both source and target schemas will contain a common element that can be used as the “target” of the mass copy functoid. This wouldn’t work in my case, since the schemas being mapped are quite different and also the <Any> would contain a range of different XML, so I couldn’t specify an appropriate containing “root” element.
To get around this behaviour of the mass copy functoid, in my map I created a scripting functoid and added some inline XSLT like this:
<xsl:copy-of select="." />
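For reference, the same idea with the target wrapper element written out explicitly (using the <Entities> element from my schema – substitute your own) would be inline XSLT along these lines:

```xml
<Entities>
  <!-- copy-of copies the current source node itself, so the root <Order>
       element from the source survives into the output, unlike the mass
       copy functoid which copies only the node's children. -->
  <xsl:copy-of select="." />
</Entities>
```

This is a sketch of the shape of the output rather than the exact functoid configuration I used.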
My map then looked like this:
Here is an example instance of the source message:
And this is the result after passing the message above through the map:
Also here is the output message if the mass copy functoid is used (note that the root <Order> element is missing, under <Entities>):
It seems strange and limiting that the mass copy functoid doesn’t copy the root element.
It’s the second time that I have encountered this obscure error in BizTalk 2009, when attempting to enlist a send port after importing bindings originally exported via the admin console:
Both times (for different customers) the issue was that an extra carriage return and line feed had been inserted before and after the <Filter> element in the offending bindings file being imported, like this:
This had a twofold effect: filter expressions were missing from the send port, and the error above occurred when enlisting the send port.
I carefully checked the bindings file and discovered other <Filter> elements with extra carriage returns and line feeds: I removed them, like so:
On import of the bindings file again, I could then see my filter expressions and also could enlist the send port…
I have only ever come across this error in BizTalk 2009.
Also, although I am doing so for the send port in the bindings file snippet, I don't filter on receive port names in production-ready solutions – only sometimes during initial development! It is better to filter on context properties such as BTS.MessageType, WCF.Action and BTS.Operation.
I got this error recently running the BizTalk WCF Service Consuming Wizard whilst working at a customer's office, where previously I hadn't (sensitive parts of the error message have been removed):
I think it was due to the installation of internet security software on my development VM by the company's infrastructure team and/or a new set of group policies being applied to the domain to which my VM is connected…
Anyhow, it was obvious that all traffic from my VM was now going through a web proxy for the purposes of filtering traffic and I needed to install proxy client software. This would entail some internal processes being enacted.
Being impatient to progress (and seeing this as an opportunity to try out something I had read about a few weeks previously), I decided to see if I could use SvcUtil.exe, providing my proxy credentials so that SvcUtil.exe could authenticate against the web proxy for me.
It would be great if the BizTalk WCF Service Consuming Wizard had proxy authentication support! Maybe I’m missing a trick here??
Based on this article here on Stack Overflow, I created a small proxy class, created a strong name key file for it and then installed it in the GAC on my dev VM:
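The class boils down to an IWebProxy implementation carrying the proxy address and credentials. A sketch based on the Stack Overflow approach referenced above – the namespace/class name, proxy address and credentials are all placeholders:

```csharp
using System;
using System.Net;

namespace AuthProxy
{
    // Registered in svcutil.exe.config as the default proxy module, so
    // SvcUtil's metadata downloads go through the authenticated proxy.
    public class AuthenticatedProxy : IWebProxy
    {
        public ICredentials Credentials
        {
            // Placeholder credentials for the web proxy.
            get { return new NetworkCredential("username", "password", "DOMAIN"); }
            set { }
        }

        public Uri GetProxy(Uri destination)
        {
            // Placeholder proxy address.
            return new Uri("http://proxy.example.local:8080");
        }

        public bool IsBypassed(Uri host)
        {
            return false;
        }
    }
}
```

The assembly is then strong-named and installed in the GAC so the .NET configuration system can load it.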
I then edited my SvcUtil config file as follows, adding a reference to the proxy class assembly:
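The change registers the GACed class as the default proxy module in svcutil.exe.config – a sketch, with "AuthProxy.AuthenticatedProxy" standing in for whatever you named your IWebProxy implementation, and the PublicKeyToken left as a placeholder:

```xml
<configuration>
  <system.net>
    <defaultProxy enabled="true" useDefaultCredentials="false">
      <!-- Fully qualified name of the GACed IWebProxy implementation -->
      <module type="AuthProxy.AuthenticatedProxy, AuthProxy, Version=1.0.0.0, Culture=neutral, PublicKeyToken=xxxxxxxxxxxxxxxx" />
    </defaultProxy>
  </system.net>
</configuration>
```

With this in place, all outbound HTTP requests made by SvcUtil pick up the authenticated proxy automatically.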
Then using the simple batch script below, I could run SvcUtil with proxy authentication included and hey presto, I could download the WSDL and associated XSD import files!
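The batch script itself boils down to a single SvcUtil call in metadata-download mode; the SDK path and service URL below are placeholders:

```
rem Download the WSDL and all imported XSD files in one go
"C:\Program Files (x86)\Microsoft SDKs\Windows\v7.0A\Bin\SvcUtil.exe" /t:metadata http://erp.example.com/Service.svc?wsdl
```

The /t:metadata switch saves the metadata documents to disk rather than generating client code, which is exactly what's needed before running the BizTalk wizard against the local files.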
This saved me the annoyance of having to download each imported XSD file individually and then modifying the path in the parent WSDL.
I have a GitHub repo here containing the source code for the solution (as usual, provided under the terms of the MIT licence).
I have spent some time recently looking into the Windows Server AppFabric Customer Advisory Team's (herein "CAT") BizTalk instrumentation framework and I will report here on some of my findings… In short, I have become quite excited about the potential it offers, not only to instrument BizTalk solutions (in development and production environments) but also as the basis of a unit testing framework. In this brief initial look into the framework, I will focus on tracing in orchestrations, but it's possible to introduce tracing into all BizTalk artefacts (and also non-BizTalk/custom components).
The framework itself has been around for some time now (since May 2010?) – this blog entry from the CAT is a well-written article introducing the tracing facilities it provides (at the end of the article is a link to the framework source code). This white paper looks to have been heavily based on the blog article (or vice versa) and is similarly well written and easy to follow.
I have to confess that in the past, like a lot of developers, I have used the System.Diagnostics.Trace .NET class in conjunction with DebugView to trace execution of orchestrations during development. One thing I have observed is that CPU utilization increases dramatically whilst DebugView runs: it's not a tool that should be run outside of a development environment. The CAT observed this too and came up with a framework leveraging the Event Tracing for Windows (ETW) facility. The framework is a wrapper around the TraceProvider class from the Microsoft.BizTalk.Diagnostics namespace in Microsoft.BizTalk.Tracing.dll. As the CAT mention, this is a "hidden gem" utilised by all major components in the BizTalk runtime (you can observe this by running DebugView while the BizTalk engine is executing – heaps of trace events are output).
Now the framework doesn’t mean you should run DebugView outside of a development environment: instead, it allows rapid and efficient tracing to a log file. This means that it won’t crash your development VM and also it permits tracing in Production and Production like environments.
I will briefly describe here how I setup the framework to trace orchestration execution…
After downloading the framework from the CAT blog, I opened the solution using Visual Studio 2012 and successfully ran the Conversion Wizard.
There are 3 projects:
The Microsoft.BizTalk.CAT.BestPractices.Framework project is the one I will need to reference in my solution.
A few immediate/gut observations about the Microsoft.BizTalk.CAT.BestPractices.Framework project that piqued my interest:
- Preprocessor directives have been specified that output trace statements using the System.Diagnostics.Trace .NET class for a debug build only. One could infer that this is designed for development, where a debug build of the framework could be used for outputting to DebugView. A release build will not include the calls to System.Diagnostics.Trace, precluding the use of DebugView in Production, for example.
- Lots of passing of object arrays – no generic collections evident: possibly a boxing/unboxing overhead here.
In order to get the Microsoft.BizTalk.CAT.BestPractices.Framework project to build (under .NET 4.5), I had to specify the full namespace (Microsoft.BizTalk.CAT.BestPractices.Framework) for the Func<string> delegate in the ComponentTraceProvider class.
Next I selected the option to sign the project assembly and created a strong name key then installed the assembly in the GAC.
It’s then a matter of adding a reference to Microsoft.BizTalk.CAT.BestPractices.Framework.dll in my orchestrations project in my BizTalk solution.
The white paper indicated in the introduction to this post presents usage of the framework well but I will demonstrate here how I implemented tracing in my orchestration…
Expression shapes containing tracing statements should be strategically added to the orchestration.
- Tracing should kick off with a call to the TraceIn method immediately after orchestration activation. The method returns a GUID that can be used to trace specific instances (the GUID can be prefixed to the trace output for example – thanks Johann for this tip!).
- Message body content can be output using the TraceInfo method. Note that the variable debugMessage is of type System.Xml.XmlDocument.
- It’s possible to trace scope execution using the TraceStartScope and TraceEndScope methods. The framework will output scope execution time.
- Tracing should be added into exception blocks like so, using the TraceError method (passing in the exception object):
- Finally the orchestration should end with a call to TraceOut, passing in the GUID (stored in the callToken variable below) returned from calling the TraceIn method.
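Pulled together, the Expression shape snippets look something like the sketch below. The method names come from the framework's TraceManager class as described in the CAT material, but the exact signatures are paraphrased here, so check them against the framework source before copying:

```csharp
// Immediately after the activating receive: start tracing and capture the
// call token so instance-specific output can be correlated.
callToken = TraceManager.WorkflowComponent.TraceIn("RandomPresentService started");

// Trace message content (debugMessage is a System.Xml.XmlDocument variable
// assigned from the received message in a preceding Expression shape).
TraceManager.WorkflowComponent.TraceInfo(
    "{0}: Received message: {1}", callToken, debugMessage.OuterXml);

// Time a scope: the framework outputs the elapsed time when the scope ends.
scopeToken = TraceManager.WorkflowComponent.TraceStartScope("CallErpScope");
// ... scope body ...
TraceManager.WorkflowComponent.TraceEndScope("CallErpScope", scopeToken);

// In an exception block, pass the caught exception object through.
TraceManager.WorkflowComponent.TraceError(ex);

// Just before the orchestration completes, close out the trace.
TraceManager.WorkflowComponent.TraceOut(callToken, "RandomPresentService completed");
```

Each snippet lives in its own Expression shape at the corresponding point in the orchestration, as described in the bullets above.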
The BizTalk CAT Instrumentation Framework Controller available on CodePlex is an excellent GUI based tool for starting and stopping traces. It has options to write tracing to DebugView and/or a log file (DebugView shouldn’t be used in Production though, just development environments). It has a bit of a limitation in that the log file is not human readable until a trace is stopped.
Here’s an example of a trace written to DebugView:
I have an example BizTalk 2013 solution here that is available for download under the terms of the MIT licence where tracing has been implemented. The example screenshots above are taken from this solution.
Now 2 things have got me really excited about the BizTalk CAT instrumentation framework:
- The opportunity to log real time tracing events from Production environments in an efficient way – this has implications for monitoring solutions in Production, parsing tracing events to trigger other events e.g. error alerting, reporting. Tracing could be written to Azure providing a central repository of data that could be analysed and reported on.
- The facility to track flow through an orchestration would be the excellent basis for a unit testing framework. Tests could be run by writing trace events to a queue (Windows service bus?) to preserve the order in which events have been triggered. The BizUnit testing framework could then be extended to monitor the events on the queue and compare against expected execution flow configured in the business rules engine (BRE). Events associated with each test run could also be persisted to a database and provide the basis for analysis of test coverage (i.e. have all paths of execution been tested in a given orchestration?).
In regards to point 1, this facility looks similar to recent functionality added to the BizTalk360 monitoring solution.
The Testing inside BizTalk by using ETW Tracing project on CodePlex looks almost a perfect fit for point 2 and something I intend to explore soon!
In summary, I hope this post provides a good introduction to the CAT tracing framework and how it could be used during development (as a unit testing tool) and subsequently in Production (real time tracing and monitoring).
This is the last post in a four part series looking at BizTalk and WCF.
Here are the previous three posts:
In this final post, I will show how exception handling has been added to the RandomPresentService orchestration controlling our business process.
Some BizTalk Exception Considerations: 101 Course
Firstly, here is a quick brain dump of items that should be considered when thinking about handling exceptions in an orchestration:
- What are the customer's requirements in regards to messages that fail inside BizTalk? Should alerting and/or logging be implemented so remedial action can be carried out, and if so, who should be the recipients of any such alerting? What steps should be carried out to recover from the error condition (what is the process)? Often the customer has only a vague idea here, and some teasing out of requirements may be needed.
- What is the impact if a particular process/action fails in your orchestration? Does this invalidate previously successful actions? Should previously committed actions therefore be rolled back in a compensation block?
- What about business process type errors? Can these be handled without the need to jump to an exception condition? Preferably, business type errors that happen routinely (and are therefore not exceptional) should not be handled by throwing an exception but handled in the “body” of the orchestration.
- Do you need to handle any custom exceptions? Hopefully this is well documented in any service that needs to be consumed, for example.
Often error handling is forgotten until the very end of the development phase instead of being considered at the start of a project – this is a mistake! Or alternatively, the customer's requirements are not taken into consideration when building exception handling.
By thinking about exception handling at the start of a project, it can be “built in” to the solution. Also it allows time for discussions with the customer around how exceptions should be handled (in the analysis and design phase). It’s costly to implement exception handling logic retrospectively.
One of the big selling points of BizTalk, and another tenet of the platform, is that it is "robust" and ensures "guaranteed message delivery". To live up to these expectations, it's important that error conditions are handled in such a way that resubmission of failed messages is possible, ideally with multiple options for error recovery. These processes also need to be tested and well documented.
What Exceptions Need to Be Handled?
A question that can be hard to answer is: what exceptions should my solution handle? By their very nature, exceptions are exceptional/rare, so they are a challenge to define and pin down.
It is a decision guided by some of the following factors:
- Experience – harnessed to try and make troubleshooting of “exceptional” type errors easier to investigate when the solution is running in Production.
- The needs of the customer.
- The type of exceptions that any called services could return.
This analysis will generate a list of exceptions that need to be handled.
As with any solution, the most specific exceptions should be handled first with the most general exception caught last.
And no doubt, not all exceptions will (or should) be catered for. They will generate a nasty runtime error that will require investigation using a tool like the Orchestration Debugger. This is where a good support team have a chance to shine :-).
Exception 1: PersistenceException
This is the first exception that I’m going to explicitly handle.
My friend and colleague Johann Cooper has written an excellent blog article about this exception.
Specifying direct binding on send ports in your orchestration is a good idea – this decouples physical send ports from the logical send ports specified in your orchestration. But this pattern can fail if the physical ports are ever unenlisted.
If physical ports are unenlisted, this removes any associated subscription and an exception of type Microsoft.XLANGs.BaseTypes.PersistenceException will manifest in the orchestration.
(You will also get a nonresumable “Routing Failure Report”, to help with troubleshooting the routing error).
So first up, I have modified the RandomPresentService orchestration like so:
- Added a non transactional scope shape around the send and receive shapes for the RandomPresentService WCF service.
- Added an exception block to the scope to handle the PersistenceException. This includes the creation of an ESB fault message (so the error will surface in the ESB exception management portal) and a suspend orchestration shape.
- Note the addition of looping: this ensures that the orchestration can be resumed in a valid state, since the RandomPresentService will be called again.
Exception 2: Handling SOAP Faults Returned from the RandomPresentService WCF Service
To handle any SOAP faults returned from the RandomPresentService service, I added a new fault type to the operation on the logical port in the orchestration designer (right click on the operation and select “New Fault Message”) and selected a message type of BTS.soap_envelope_1__2.Fault: this denotes that the fault should be wrapped in a SOAP version 1.2 fault:
The fault doesn’t need to be connected to a receive shape or anything like that – it can be left as is, and instead the fault can be handled by defining another exception block and selecting the SOAP fault as the “Exception Object Type”. When the fault is defined, it will subsequently be viewable in the “Exception Object Type” drop down, like so:
It’s important to handle SOAP faults: if not, this will cause an unhandled exception in your orchestration (due to an unexpected message type being returned) and it will not be possible to recover from this error by resuming the orchestration.
Also note that in order to extract the SOAP fault message, the orchestration XPath function can be used like this:
strFaultMsg = xpath(exSOAPFault, "string(/*[local-name()='Fault' and namespace-uri()='http://schemas.xmlsoap.org/soap/envelope/']/*[local-name()='faultstring' and namespace-uri()=''])");
(Where strFaultMsg is of type System.String and exSOAPFault is of type Microsoft.Practices.ESB.ExceptionHandling.Schemas.Faults.FaultMessage).
Exception 3: Handling System.Exception – “Catch-all” Exception Blocks
The final exception block (that will be evaluated last by the orchestration engine) is a “catch all” block since it will handle exceptions of type System.Exception. Since all exceptions in .NET inherit from System.Exception, any errors, not previously handled explicitly by previous exception blocks, will be caught by this handler.
Also, I have wrapped the entire orchestration in a global scope and specified one exception block for System.Exception to ensure that my default exception handling mechanism will be exercised.
So this concludes this four part series looking at BizTalk and WCF.
As is to be expected, WCF is interleaved tightly into the BizTalk framework and it is possible to leverage it heavily in your BizTalk solutions.
It’s been a journey looking at the creation of a WCF service; using the “out-of-the-box” BizTalk tooling to consume the service; building upon the artefacts created by the WCF Service Consuming Wizard; finally ending with this brief (!) look at implementing exception handling in the orchestration controlling the business process.
I hope it provides a good introduction to the subject!
(Both solutions are available under the terms of the MIT licence, a copy of which is included with each).
The first post introduced the RandomPresent WCF service and covered design considerations, hosting and testing.
The second post touched on how to consume the WCF service from BizTalk, using the BizTalk WCF Service Consuming Wizard.
The business wants to send each customer a random Christmas present with their order. To achieve this, BizTalk will call the RandomPresent WCF service, which returns a random present as XML, wrapped in a SOAP envelope. The present response needs to be appended to the order, so the guys in the warehouse can add the present to the shipment.
It’s a matter of combining the original order and the response from the web service.
I’ll start with an overview of the single orchestration driving this process, since all other artefacts hang off this.
In the end, I created a new orchestration from scratch and didn’t use the one generated for me by the BizTalk WCF Service Consuming Wizard:
I will summarise the flow here and call out some interesting bits:
- A map on my receive port converts the external format PO to the canonical version (check out my blog post here on the canonical messaging pattern). The canonical message has an element called “Status” that is assigned the value “NEW” in the map. This element is mapped to a property called “POStatus” of type MessageContextPropertyBase. So, as you can guess, I had to create a property schema, which I added to my internal schemas project (it contains just this one property). The type MessageContextPropertyBase identifies “POStatus” as a property that can be written to the context of a message and, as will be discussed, this is what I want for the purposes of routing and to prevent my orchestration from being caught in an endless activation loop.
- The canonical PO is written to the MessageBox and an instance of the orchestration fires up; the orchestration subscribes to messages of type canonical PO and also on the “POStatus” promoted property: it is activated only when the status is “NEW”. The reason for this additional subscription on “POStatus” is that, on completion, the orchestration writes the modified canonical PO back to the MessageBox, and an infinite activation loop would occur if orchestrations were activated purely on publication of the canonical PO message type (not good!). Of course, there are other ways of working around this: I think this is why some solutions I have seen have more than one canonical schema, with different namespaces to differentiate them – this seems a mad way of avoiding the “activation infinite loop” issue though, since it introduces the overhead of maintaining two canonical schemas, and the design principle of the canonical messaging pattern is that there is only one “source of truth”.
- Note that I have designed my orchestration so that it works only with internal solution message types (not the original, external, versions received). This protects the orchestration from changes to the format of the external PO, which might otherwise require changes to, and redeployment of, the orchestration.
- Next I create a request message for the RandomPresent WCF service… I have created a C# classes project that contains a class representation of the request schema: I generated the class by pointing Xsd.exe at my schema. I instantiate the request message in a Construct Message shape using the new keyword. Another way of creating a message in an orchestration is to create a map for the purpose. There is a serious caveat to instantiating a message using the new keyword, however: it is vital that changes to the schema are reflected in the class representation (maybe this could be automated in a Visual Studio “post build event” on the schemas project). As has happened to me in the past, if you assign a message created from your schema (via a map) to an (outdated) message created from a class (e.g. in a Construct Message shape), any missing elements will be silently lost in the output message… For some reason, creating a new message in an orchestration is a torturous process, it seems to me!
- A multipart message map combines the canonical and RandomPresent XML into a new instance of a canonical PO. This map will be discussed further in the “Maps” section below.
- Finally I set the “POStatus” context property to “PROCESSED” in a Message Assignment shape and write the new canonical message to the MessageBox. The final Send shape initializes a correlation set – not to support an asynchronous process, but for routing purposes: the correlation set contains just the “POStatus” context property, and initializing it in the Send shape causes the property to become promoted, enabling it to be used for routing. Setting this context property to a different status on completion of processing is my way of preventing an endless activation loop, as discussed in point 1.
(Note the lack of error handling – this will be the subject of my next post).
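Pulling the points above together, the two message-construction steps might look something like this inside the orchestration shapes (the class, message and property schema names here are mine, for illustration – they are not the actual solution’s):

```
// Construct Message shape: create the service request from the
// Xsd.exe-generated class representation of the request schema
msgPresentRequest = new MySolution.Messages.PresentRequest();

// Message Assignment shape (end of flow): flag the new canonical PO
// as processed, so the orchestration's activating subscription
// (POStatus == "NEW") does not fire again for this message
msgCanonicalPOOut(MySolution.PropertySchema.POStatus) = "PROCESSED";
```

With “POStatus” promoted via the correlation set on the final Send shape, a send port can then subscribe with a filter such as MySolution.PropertySchema.POStatus == "PROCESSED", while the orchestration is activated only on "NEW".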
The BizTalk WCF Service Consuming Wizard created the necessary schemas for me to communicate with the RandomPresent service; however, I deleted the RandomPresentService_schemas_microsoft_com_2003_10_Serialization.xsd schema (as discussed in part 2, I haven’t found a use for this schema yet). I also typically rename the service schema filename to something shorter and more descriptive.
If my solution modifies messages and/or involves an orchestration, I create 2 schema projects: an internal schemas project and an external schemas project, as is the case here.
Finally, a quick look at the map used to combine the canonical PO and the RandomPresent XML into a new instance of a canonical PO:
Typically, I use an inline XSLT template in my map for tasks such as these – I find using functoids too frustrating and time-consuming.
As mentioned in this previous blog post, I write my XSLT in a separate file and import its contents into my map.
This is the XSLT used:
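As a rough sketch of the approach (not the actual stylesheet – the element names and part indexes here are illustrative only), an inline XSLT call template that appends the present to the order could look like this. In a multipart message map, each input message is wrapped in an InputMessagePart_n element under an aggregated Root, which is why the paths below drill through that wrapper:

```
<!-- Scripting functoid, "Inline XSLT Call Template": copies the present
     name from the second input of the multipart map (the RandomPresent
     service response) onto the output canonical PO -->
<xsl:template name="AppendPresent">
  <Present>
    <xsl:value-of select="/*[local-name()='Root']
                           /*[local-name()='InputMessagePart_1']
                           /*[local-name()='PresentResponse']
                           /*[local-name()='PresentName']"/>
  </Present>
</xsl:template>
```

Keeping templates like this in a separate .xslt file, as described above, means they can be edited and syntax-checked outside the map designer before being imported.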
So I hope you enjoyed this tour. In part 4 of this series, I will show how error handling has been implemented in the solution.
When the solution has been completed, I will make it available for download.