We have recently completed Phase 1 of a “greenfields” BizTalk 2013 R2 implementation for a client and I wanted to jot down here some of my thoughts and technical learnings.
My role on the project was Technical Lead: I gathered requirements, generated the technical specifications, set up the BizTalk solutions, kicked off the initial development and provided supervision and guidance to the team.
Before starting on the project about 15 months ago, I had spent quite a bit of time working with a large BizTalk 2009 installation, so I knew that on my next assignment I would be playing with some new technologies in Azure and also using the new(ish) REST adapter. Looking back now, SOAP+WSDL+XML seems like something from a different age!
Here is a list of some key features of this hybrid integration platform:
- BizTalk sits on-premises and exposes RESTful APIs using the WCF-WebHttp adapter. These APIs provide a standard interface into previously siloed on-premises systems, a big one being the company-wide ERP.
- Azure Service Bus relays using SAS authentication provide a means of exposing data held on-premises to applications in the cloud. This proved very effective, but a downside is having to ping the endpoints to ensure that the relay connection established from BizTalk doesn't shut down, which would result in the relay appearing unavailable.
- A Service Bus queue allows the syncing of data from CRM in the cloud to ERP on-premises. We used the SB-Messaging adapter. Don't forget to check for messages in the dead-letter queue and have a process around managing this.
- A mixture of asynchronous and synchronous processing. For asynchronous processing we used orchestrations, but synchronous processing (where a person or system is waiting for a response) was messaging-only, using context-based routing.
- We used the BRE pipeline component to provide the features of a “mini” ESB, almost as a replacement for the ESB Toolkit.
In a nutshell, BizTalk is the bridge between on-premises systems and the cloud, while also providing services for the more traditional systems on the ground. I believe this will be a common story for most established companies for many years, but eventually many companies will run all their systems in the cloud.
I expected a few challenges with the WCF-WebHttp adapter, after stories from colleagues who had used this adapter before me. My main concern was handling non-HTTP-200 responses, which triggered the adapter to throw an exception that was impossible to handle in BizTalk (I understand 500 errors are now returned by the adapter, with a fix in CU5). So from the start of the project, I requested that the APIs exposed from the ERP system always return HTTP 200 but include a JSON error message that we could interrogate.
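We settled on a simple error envelope returned with HTTP 200. The field names below are illustrative rather than the actual contract we used:

```json
{
  "success": false,
  "error": {
    "code": "ERP-1042",
    "message": "Customer account not found"
  }
}
```

Once the JSON decoder has converted the payload to XML, the error node can be interrogated with ordinary routing or orchestration logic, rather than relying on the adapter's exception behaviour.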
We also had to treat the JSON encoder and decoder pipeline components with kid gloves, with issues serializing messages if our XSD schemas contained referenced data types (e.g. from a common schema). We had to greatly simplify our schemas to work with these components.
Also, the lack of support for Swagger (consumption via a wizard and exposing Swagger endpoints) is a glaring omission. I manually created Swagger definition files using Restlet Studio (recommended to me by Mark Brimble), which I found to be a great tool for documenting APIs.
BizTalk WCF Receive Location Configuration Error: The SSL settings for the service 'None' does not match those of the IIS 'Ssl, SslRequireCert, Ssl128'
A BizTalk WCF endpoint is exposed with security enabled: SSL with a client certificate is required (so mutual, 2-way client and server authentication is configured).
BizTalk (2009) receive location is configured as follows:
(Incidentally, the following command can be run in a Windows batch file to configure SSL for an IIS virtual directory:
%windir%\system32\inetsrv\appcmd.exe set config "Default Web Site/ServiceName" -commitPath:APPHOST -section:access -sslFlags:Ssl,Ssl128,SslRequireCert )
Error Message and Analysis
Clients were unable to connect to the service and the following exception message was written to the Application event log on the hosting BizTalk server:
Exception: System.ServiceModel.ServiceActivationException: The service 'ServiceName.svc' cannot be activated due to an exception during compilation. The exception message is: The SSL settings for the service 'None' does not match those of the IIS 'Ssl, SslRequireCert, Ssl128'.. ---> System.NotSupportedException: The SSL settings for the service 'None' does not match those of the IIS 'Ssl, SslRequireCert, Ssl128'.
So this is an IIS configuration issue. The service is exposing some endpoint that is unsecured (the SSL setting for this endpoint is ‘None’, as mentioned in the error message), which doesn’t match the actual SSL settings configured: ‘Ssl, SslRequireCert, Ssl128’ (i.e. SSL with minimum 128-bit keys and client certificate required).
In this case, the endpoint not matching the SSL settings is the mex endpoint (i.e. the service WSDL).
Ensure that ALL mex endpoints are disabled, by commenting out the following mex binding configuration in the service Web.config file:
The <system.serviceModel> section specifies Windows Communication Foundation (WCF) configuration.
<serviceDebug httpHelpPageEnabled="false" httpsHelpPageEnabled="false" includeExceptionDetailInFaults="false" />
<serviceMetadata httpGetEnabled="false" httpsGetEnabled="true" />
<!-- Note: the service name must match the configuration name for the service implementation. -->
<!-- Comment out mex endpoints if client auth enabled using certificates -->
<service name="Microsoft.BizTalk.Adapter.Wcf.Runtime.BizTalkServiceInstance" behaviorConfiguration="ServiceBehaviorConfiguration">
<!--<endpoint name="HttpMexEndpoint" address="mex" binding="mexHttpBinding" bindingConfiguration="" contract="IMetadataExchange" />-->
<!--<endpoint name="HttpsMexEndpoint" address="mex" binding="mexHttpsBinding" bindingConfiguration="" contract="IMetadataExchange" />-->
I restarted IIS and the service could then be compiled and worked as expected.
This is the last post in a four part series looking at BizTalk and WCF.
Here are the previous three posts:
In this final post, I will show how exception handling has been added to the RandomPresentService orchestration controlling our business process.
Some BizTalk Exception Considerations: 101 Course
Firstly, here is a quick brain dump of items that should be considered when thinking about handling exceptions in an orchestration:
- What are the customer's requirements regarding messages that fail inside BizTalk? Should alerting and/or logging be implemented so that remedial action can be carried out, and if so, who should be the recipients of any such alerting? What steps should be carried out to recover from the error condition (what is the process)? Often the customer has only a vague idea here and some teasing out of requirements may be needed.
- What is the impact if a particular process/action fails in your orchestration? Does this invalidate subsequent previously successful actions? Should previously committed actions therefore be rolled back in a compensation block?
- What about business process type errors? Can these be handled without the need to jump to an exception condition? Preferably, business type errors that happen routinely (and are therefore not exceptional) should not be handled by throwing an exception but handled in the “body” of the orchestration.
- Do you need to handle any custom exceptions? Hopefully this is well documented in any service that needs to be consumed, for example.
Often error handling is forgotten until the very end of the development phase rather than being considered at the start of a project – this is a mistake! Alternatively, the customer's requirements are not taken into consideration when building exception handling.
By thinking about exception handling at the start of a project, it can be “built in” to the solution. Also it allows time for discussions with the customer around how exceptions should be handled (in the analysis and design phase). It’s costly to implement exception handling logic retrospectively.
One of the big selling points of BizTalk, and another tenet of the platform, is that it is "robust" and ensures "guaranteed message delivery". In order to live up to these expectations, it's important that error conditions are handled in such a way that resubmission of failed messages is ensured, with possibly multiple options for error recovery. These processes also need to be tested and well documented.
What Exceptions Need to Be Handled?
A concept that can be hard to determine is: what exceptions should my solution handle? By their very nature, exceptions are exceptional/rare so are a challenge to define and pin down.
It is a decision guided by some of the following factors:
- Experience – harnessed to try and make troubleshooting of “exceptional” type errors easier to investigate when the solution is running in Production.
- The needs of the customer.
- The type of exceptions that any called services could return.
This analysis will generate a list of exceptions that need to be handled.
As with any solution, the most specific exceptions should be handled first with the most general exception caught last.
And no doubt, not all exceptions will (or should) be catered for. They will generate a nasty runtime error that will require investigation using a tool like the Orchestration Debugger. This is where a good support team have a chance to shine :-).
Exception 1: PersistenceException
This is the first exception that I’m going to explicitly handle.
My friend and colleague Johann Cooper has written an excellent blog article about this exception.
Specifying direct binding on send ports in your orchestration is a good idea – this decouples physical send ports from the logical send ports specified in your orchestration. But this pattern can fail if the physical ports are ever unenlisted.
If physical ports are unenlisted, this removes any associated subscription and an exception of type Microsoft.XLANGs.BaseTypes.PersistenceException will manifest in the orchestration.
(You will also get a nonresumable “Routing Failure Report”, to help with troubleshooting the routing error).
So first up, I have modified the RandomPresentService orchestration like so:
- Added a non-transactional scope shape around the send and receive shapes for the RandomPresentService WCF service.
- Added an exception block to the scope to handle the PersistenceException. This includes the creation of an ESB fault message (so the error will surface in the ESB exception management portal) and a suspend orchestration shape.
- Note the addition of looping: this ensures that the orchestration can be resumed in a valid state, since the RandomPresentService will be called again.
Exception 2: Handling SOAP Faults Returned from the RandomPresentService WCF Service
To handle any SOAP faults returned from the RandomPresentService service, I added a new fault type to the operation on the logical port in the orchestration designer (right click on the operation and select “New Fault Message”) and selected a message type of BTS.soap_envelope_1__2.Fault: this denotes that the fault should be wrapped in a SOAP version 1.2 fault:
The fault doesn’t need to be connected to a receive shape or anything like that – it can be left as is, and instead the fault can be handled by defining another exception block and selecting the SOAP fault as the “Exception Object Type”. When the fault is defined, it will subsequently be viewable in the “Exception Object Type” drop down, like so:
It’s important to handle SOAP faults: if not, this will cause an unhandled exception in your orchestration (due to an unexpected message type being returned) and it will not be possible to recover from this error by resuming the orchestration.
Also note that in order to extract the SOAP fault message, the orchestration XPath function can be used like this:
strFaultMsg = xpath(exSOAPFault, "string(/*[local-name()='Fault' and namespace-uri()='http://schemas.xmlsoap.org/soap/envelope/']/*[local-name()='faultstring' and namespace-uri()=''])");
(Where strFaultMsg is of type System.String and exSOAPFault is of type Microsoft.Practices.ESB.ExceptionHandling.Schemas.Faults.FaultMessage).
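For reference, the XPath above expects a fault envelope of roughly this shape (shown with the SOAP 1.1 envelope namespace used in the expression; the fault content itself is invented for illustration):

```xml
<s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/">
  <s:Body>
    <s:Fault>
      <faultcode>s:Server</faultcode>
      <!-- faultstring (in no namespace) is the element the XPath extracts -->
      <faultstring>No presents left in the sack!</faultstring>
    </s:Fault>
  </s:Body>
</s:Envelope>
```

If the service returns SOAP 1.2 faults, the envelope namespace in the expression would need to be http://www.w3.org/2003/05/soap-envelope instead.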
Exception 3: Handling System.Exception – “Catch-all” Exception Blocks
The final exception block (that will be evaluated last by the orchestration engine) is a “catch all” block since it will handle exceptions of type System.Exception. Since all exceptions in .NET inherit from System.Exception, any errors, not previously handled explicitly by previous exception blocks, will be caught by this handler.
Also, I have wrapped the entire orchestration in a global scope and specified one exception block for System.Exception to ensure that my default exception handling mechanism will be exercised.
So this concludes this four part series looking at BizTalk and WCF.
As is to be expected, WCF is interleaved tightly into the BizTalk framework and it is possible to leverage it heavily in your BizTalk solutions.
It’s been a journey looking at the creation of a WCF service; using the “out-of-the-box” BizTalk tooling to consume the service; building upon the artefacts created by the WCF Service Consuming Wizard; finally ending with this brief (!) look at implementing exception handling in the orchestration controlling the business process.
I hope it provides a good introduction to the subject!
(Both solutions are available under the terms of the MIT licence, a copy of which is included with each).
The first post looked at the RandomPresent WCF service and looked at design considerations, hosting and testing.
The second post touched on how to consume the WCF service from BizTalk, using the BizTalk WCF Service Consuming Wizard.
The business wants to send each customer a random Christmas present with their order. To achieve this, the RandomPresent WCF service will be called by BizTalk and it will return a random present as XML, wrapped in a SOAP envelope. The present response needs to be appended to the order, so the guys in the warehouse can add the present to the shipment.
It’s a matter of combining the original order and the response from the web service.
I’ll start with an overview of the single orchestration driving this process, since all other artefacts hang off this.
In the end, I created a new orchestration from scratch and didn’t use the one generated for me by the BizTalk WCF Service Consuming Wizard:
I will summarise the flow here and call out some interesting bits:
- A map on my receive port converts the external format PO to the canonical version (check out my blog post here on the canonical messaging pattern). The canonical message has an element called “Status” that is assigned the value “NEW” in the map. This element is mapped to a property called “POStatus” of type MessageContextPropertyBase. So, as you can guess, I had to create a property schema which I added to my internal schemas project (it contains just this one property). The type MessageContextPropertyBase identifies “POStatus” as a context property that can be written to the context of a message and, as will be discussed, this is what I want to achieve for the purposes of routing and to prevent my orchestration from being caught in an endless activation loop.
- The canonical PO is written to the MessageBox and an instance of the orchestration fires up; the orchestration subscribes to messages of type canonical PO and also on the "POStatus" promoted property: it is activated only when the status is "NEW". The reason for this additional subscription on "POStatus" is that, on completion, the orchestration will write the modified canonical PO to the MessageBox, and an infinite activation loop would occur if orchestrations were activated on publishing of the canonical PO message type alone (not good!). Of course, there are other ways of working around this: I think this is why some solutions I have seen have more than one canonical schema, with a different namespace to differentiate them – this seems a mad way of getting around the "activation infinite loop" issue though, since it introduces the overhead of maintaining two canonical schemas, and the design principle of a canonical messaging pattern is that there is only one "source of truth".
- Note that I have designed my orchestration such that it works with internal solution message types only (not the original, external, versions received). This protects the orchestration from changes to the format of the external PO, which might otherwise require changes and redeployment of the orchestration.
- Next I create a request message for the RandomPresent WCF service… I have created a C# classes project that contains a C# class representation of the request schema: I created the class using Xsd.exe, by pointing Xsd.exe at my schema. I instantiate an instance of the request message in a Construct Message shape using the New keyword. Another way of creating a message in an orchestration is to create a map for this purpose. There is a serious caveat to instantiating a message using the New keyword however: it is vital that changes to the schema are reflected in the class representation (maybe this could be achieved in a Visual Studio “post build event” on the schemas project). As has happened to me in the past, if you assign a message created from your schema (via a map) to an (outdated) message created from a class (e.g. in a Construct Message shape), any missing elements will be silently lost in the output message… For some reason, creating a new message in an orchestration is a torturous process, it seems to me!
- A multipart message map combines the canonical and RandomPresent XML into a new instance of a canonical PO. This map will be discussed further in the “Maps” section below.
- Finally I set the “POStatus” context property to “PROCESSED” in a Message Assignment shape and write the new canonical message to the MessageBox. The final Send Shape initializes a correlation set but not for the purposes of supporting an asynchronous process, but for routing purposes: the correlation set contains just the “POStatus” context property and initializing this in the Send Shape causes the property to become promoted, enabling it to be used for routing. Setting this context property to different statuses on completion of processing is my way of preventing an endless activation loop, as discussed in point 1.
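A BizTalk property schema for "POStatus" as described above might look roughly like this (the target namespace and propertyGuid are placeholders, not the values from the actual solution):

```xml
<?xml version="1.0" encoding="utf-16"?>
<!-- Sketch of a property schema; namespace and propertyGuid are placeholders -->
<xs:schema xmlns:b="http://schemas.microsoft.com/BizTalk/2003"
           xmlns:xs="http://www.w3.org/2001/XMLSchema"
           targetNamespace="https://Internal.Schemas.PropertySchema"
           xmlns="https://Internal.Schemas.PropertySchema">
  <xs:annotation>
    <xs:appinfo>
      <b:schemaInfo schema_type="property" />
    </xs:appinfo>
  </xs:annotation>
  <xs:element name="POStatus" type="xs:string">
    <xs:annotation>
      <xs:appinfo>
        <!-- MessageContextPropertyBase: the value is written to the message
             context rather than demoted into the message body -->
        <b:fieldInfo propertyGuid="00000000-0000-0000-0000-000000000000"
                     propSchFieldBase="MessageContextPropertyBase" />
      </xs:appinfo>
    </xs:annotation>
  </xs:element>
</xs:schema>
```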
(Note the lack of error handling – this will be the subject of my next post).
The BizTalk WCF Service Consuming Wizard created the necessary schemas for me to communicate with the RandomPresent service; however, I deleted the RandomPresentService_schemas_microsoft_com_2003_10_Serialization.xsd schema (as discussed in part 2, I haven't found a use for this schema yet). I also typically rename the service schema filename to something shorter and more descriptive.
If my solution modifies messages and/or involves an orchestration, I create 2 schema projects: an internal schemas project and an external schemas project, as is the case here.
Finally, a quick look at the map used to combine the canonical PO and the RandomPresent XML into a new instance of a canonical PO:
Typically, I use an inline XSLT template in my map for tasks such as these – I find it too frustrating and time consuming using functoids.
As mentioned in this previous blog post, I write my XSLT in a separate file and import its contents into my map.
This is the XSLT used:
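The original listing isn't reproduced here, so the following is a sketch of the kind of inline XSLT template involved: it copies the existing order line items and appends a new line item for the present (element names such as Items, Item and Description are illustrative, not the actual canonical schema):

```xml
<!-- Sketch only: element names are invented for illustration -->
<xsl:template name="AppendPresentTemplate">
  <xsl:param name="present" />
  <Items>
    <!-- Copy the existing order line items unchanged -->
    <xsl:copy-of select="Item" />
    <!-- Append an extra line item for the random present -->
    <Item>
      <Description><xsl:value-of select="$present" /></Description>
      <Quantity>1</Quantity>
    </Item>
  </Items>
</xsl:template>
```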
So I hope you enjoyed this tour. In part 4 of this series, I will show how error handling has been implemented in the solution.
When the solution has been completed, I will make it available for download.
BizTalk and WCF, Consuming a WCF Service, Part 2 – The BizTalk WCF Service Consuming Wizard and a Look at the Artefacts Created
This post builds on part 1 of this series looking at BizTalk and WCF.
The previous post gave an overview of the RandomPresent WCF service, discussing design considerations, hosting and testing using SOAPUI.
In this post, we will examine how to consume the web service using the BizTalk WCF Service Consuming Wizard. The next post will delve into the detail of actually building our BizTalk solution, using (some) of the artefacts generated by the wizard.
Our mythical company is feeling generous and management have decided that each customer will receive a gift with their order. The gift will be assigned by calling our newly created web service and then adding an extra order line item for the present.
The BizTalk WCF Service Consuming Wizard
To consume the RandomPresent web service, the BizTalk WCF Service Consuming Wizard can be used. This will generate all the artefacts we need to invoke the service, into our BizTalk solution. However, we will discard some of the generated artefacts in favour of our own (better) implementation (to be discussed in part 3).
Here are the steps below:
- Right click on your project (typically an external schemas project) and select Add –> Add Generated Items…
- Select Consume WCF Service in the Add Generated Items dialogue box. This will fire up the BizTalk WCF Service Consuming Wizard welcome page – click on the Next button
- On the next screen of the wizard we have two choices:
- Metadata Exchange (MEX) endpoint – this option enables a service description (WSDL file) to be downloaded by pointing the wizard directly at the running RandomPresent web service. Note from part 1 of this series that we exposed a MEX endpoint by configuring the service's web.config file to allow metadata to be downloaded via an HTTP GET
- Metadata Files (WSDL and XSD) – I always think of this as a second-best option: instead of downloading service data directly from the source, this option allows the artefacts needed to consume the service to be created from a WSDL or XSD on the file system. There is a risk that the WSDL and/or XSD that has been obtained is out of date, so I try to avoid using this function if possible (it's a last resort :-))
- Ensure that the Metadata Exchange (MEX) endpoint option is selected and click Next – this will take you to a screen where the service WSDL can be obtained and loaded into the wizard:
- (Note that it is common practice for a service endpoint to expose its service definition using the convention servicename?wsdl)
- Click Next and then Import. The wizard will then process the WSDL and generate artefacts into our BizTalk solution. Click Finish.
The Ajax.BT.Fulfilment BizTalk Solution
So the wizard has consumed our WSDL and generated various artefacts into our BizTalk purchase order fulfilment solution. Let's take a quick look at what has been created:
- 2 x bindings files for importing into our BizTalk application: one of these files can be imported into our eventual BizTalk application to create a send port to communicate with the RandomPresent service:
- RandomPresentService.BindingInfo.xml – from the WSDL, the wizard has detected that the WCF service implements wsHttpBinding and therefore this file will create a send port using the WCF-WSHttp adapter
- RandomPresentService_Custom.BindingInfo.xml – this is another option for creating a send port in our BizTalk application. This will create a send port which will use the WCF-Custom adapter. Utilising this adapter offers greater WCF extensibility compared to the WCF-WSHttp adapter
- An interesting conundrum is: which binding should I use in my solution, WCF-WSHttp or WCF-Custom? I would contend that WCF-Custom is the better option, to support future/evolving requirements
- RandomPresentService.odx – this is a “starter” orchestration to call the service and contains just the types required. I always move this orchestration into my specific orchestrations project (or just create an orchestration from scratch)
- RandomPresentService_schemas_ajax_randompresentservice.xsd – this schema defines the WCF message types required to construct a request to the service and what we can expect the response to look like
- RandomPresentService_schemas_microsoft_com_2003_10_Serialization.xsd – this contains type details. I suppose if you wish to serialize to a class representation, this XSD would be useful but otherwise I haven’t found a use for this
That’s it for now… In part 3 of this post I will walk you through my BizTalk solution utilising the service. This will build on artefacts created by the BizTalk WCF Service Consuming Wizard, which was the main topic of this post.
I thought I would do a series of posts on BizTalk and Windows Communication Foundation (WCF).
The main aim of this post is to cement my experiences so far exposing and consuming WCF services using BizTalk.
First up (and the focus of this post) is a look at a (very simple) WCF service that will be hosted in IIS. Further posts will detail how the service can be consumed using BizTalk and the tools available, error handling and recovery and monitoring.
What’s WCF you may ask? Further information/background is available here. In a nutshell, it is Microsoft’s implementation of remote communication specifications maintained by groups such as the World Wide Web Consortium (W3C) and the Web Services Interoperability Organization (WS-I).
The RandomPresent WCF Service
Yes, you read correctly…! The purpose of my WCF service is to return a random present (gift) to the client application.
As per Microsoft recommended best practice, the service contract is exposed to the outside world via an interface:
It’s possible to expose a class by applying attributes directly to the class (the attributes define the service’s runtime behaviour). However by exposing the interface to the outside world (i.e. the WSDL will map to the interface) it is possible to change how the service is implemented without necessitating modifications to the WSDL. Changing the WSDL (a file that describes the functionality of a web service) would necessitate clients downloading the WSDL and updating their proxy code accordingly. So using an interface provides a level of abstraction between the specification of the service and how the service is actually implemented, thus protecting clients from any changes to the implementation of the service.
This is the class implementation:
Here are a few notes/pointers on the most interesting aspects of this interface and its implementation:
- Note the inclusion of the System.ServiceModel namespace – this contains all the WCF types
- Note that I have specified a namespace for some attributes – this is crucial for a professional looking WCF service! If a namespace is not explicitly specified like this, the WSDL exposed by WCF to the outside world will contain the default namespace http://tempuri.org
WCF services can be hosted in numerous ways – for the purposes of this demo, I will host my service in IIS 8. I did this by creating a new application via IIS manager and pointing it directly to my .cs files – this is OK for a development environment but in a Production environment, you would point to an assembly.
In the root of my IIS application I created a Service.svc file, which represents my WCF endpoint and a Web.config file.
My Service.svc file looks like this:
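The original screenshot isn't reproduced here, but the file would look something like the following (the namespace and class name Ajax.RandomPresentService.RandomPresentService are assumed for illustration, as is the App_Code path):

```xml
<%@ ServiceHost Language="C#" Debug="false"
    Service="Ajax.RandomPresentService.RandomPresentService"
    CodeBehind="~/App_Code/RandomPresentService.cs" %>
```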
Note that the service attribute points to the implementation of the service using the full namespace and class name. If you attempt to specify the interface in your .svc file, you will receive the following error when you attempt to view your service: ServiceHost only supports class service types.
Here is the contents of my web.config file:
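The screenshot of the file isn't reproduced here, so this is a sketch of the relevant <system.serviceModel> configuration; the service/contract names, behaviour names and the Windows client credential type are assumptions for illustration, not the exact values used:

```xml
<configuration>
  <system.serviceModel>
    <services>
      <service name="Ajax.RandomPresentService.RandomPresentService"
               behaviorConfiguration="MetadataBehavior">
        <!-- Application endpoint using wsHttpBinding (WS-* support) -->
        <endpoint address="" binding="wsHttpBinding"
                  bindingConfiguration="SecureBinding"
                  contract="Ajax.RandomPresentService.IRandomPresentService" />
        <!-- MEX endpoint so clients can download the service metadata -->
        <endpoint address="mex" binding="mexHttpBinding"
                  contract="IMetadataExchange" />
      </service>
    </services>
    <behaviors>
      <serviceBehaviors>
        <behavior name="MetadataBehavior">
          <!-- Allow the WSDL to be downloaded via an HTTP GET -->
          <serviceMetadata httpGetEnabled="true" />
        </behavior>
      </serviceBehaviors>
    </behaviors>
    <bindings>
      <wsHttpBinding>
        <binding name="SecureBinding">
          <!-- WS-Security configuration for the endpoint -->
          <security mode="Message">
            <message clientCredentialType="Windows" />
          </security>
        </binding>
      </wsHttpBinding>
    </bindings>
  </system.serviceModel>
</configuration>
```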
A few items of interest here that I will summarise:
- Two endpoints have been specified for my service: a wsHttpBinding endpoint and a MEX endpoint. The wsHttpBinding endpoint supports the WS-* specifications, which define a common standard for remote client communication. MEX stands for "metadata exchange": it describes how a client can communicate with the service, via a WSDL file exposed to the outside world by the service (meta in this context means "about", so metadata means "data about data").
- Note that a behaviour has been specified in the <behaviors> section that enables service metadata to be downloaded via an HTTP GET. This enables a WSDL file to be downloaded for my service (using a web browser, for example)
- And lastly, to enable me to call my service using SOAPUI, I had to add specific details about security for my endpoint, in a <wsHttpBinding> section. So this is WS-Security configuration for my service.
Now if I browse to my service using a web browser, the following web page is displayed with details about the service:
This page nicely describes how to create a proxy class for this service with help from the svcutil.exe tool, a link to a WSDL that describes the functionality of the service and then example code showing how to invoke the proxy class. However, we will be calling the service from BizTalk so won’t be requiring this information (except the handy link to the WSDL).
Finally, SOAPUI is a great tool (although sometimes a bit of a fiddle to work with WCF) for testing a service.
Invoke your web service by pointing SOAPUI to the service WSDL:
- Right click on the “Projects” node and select “New soapUI project”
- In the “New soapUI Project” dialogue box, add the location of the service WSDL and ensure that the “Create Requests” check box is ticked (this will create a sample request that can be used to invoke and test the service):
- Invoke the service by double clicking on the request and clicking on the green arrow to submit the request to the service. On the "WS-Addressing" tab you will need to ensure that "WS-Addressing" is enabled and that "Add default wsa:To" is ticked. This is because the service implements wsHttpBinding, as described previously, which implements WS-Addressing (part of the WS-* specifications):
So this introduces a test WCF service that needs to be consumed by BizTalk, which will be the topic of my next post…
Many BizTalk solutions that I have implemented and worked on have followed the canonical messaging pattern. It's certainly one of the first things I consider when building a new solution and a concept that I come across often. I would consider it "best practice" to implement such a pattern, given its benefits (which I will outline in this post).
As they say, “a picture paints a thousand words” so here is a graphical view of this pattern compared against a solution not implementing this pattern (i.e. a peer to peer (P2P) solution):
As you can see in regards to the canonical pattern (in green/with a tick), documents that are logically equivalent map to a standard application specific format (the canonical format). Lets unpack this statement a little.
The term logically equivalent is specific to our application; for example, external purchase orders in the formats indicated in the diagram above are equivalent in the context of the solution and so map to a standard format internally. This means that in the context of the application, these external purchase order formats are the same and will be processed in the same way. However to stores and suppliers, these different purchase order formats are quite distinct.
Canonical format describes how documents will be represented internally in our solution. In BizTalk, this has to be in XML (since BizTalk uses XML internally to represent messages).
The next question is how do we build our canonical document such that it can represent documents that are logically equivalent but may actually be formatted quite differently? Actually this statement is not quite correct: the canonical document should be created first independently of any external representations (e.g. to represent the essence of what a purchase order is) and then it should be a case of deciding how external representations map to the canonical representation. In the case of BizTalk, this will typically involve writing some XSLT that converts various formats from or to the canonical format.
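For example, a map from one external purchase order format onto the canonical format might be sketched like this (the external and canonical element names here are invented for illustration):

```xml
<!-- Illustrative only: ExternalPO/CanonicalPO element names are invented -->
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/ExternalPO">
    <CanonicalPO>
      <!-- Normalise differently named source fields onto the canonical shape -->
      <OrderNumber><xsl:value-of select="PONum" /></OrderNumber>
      <Customer><xsl:value-of select="CustName" /></Customer>
    </CanonicalPO>
  </xsl:template>
</xsl:stylesheet>
```

A second map would then perform the reverse transformation for each outbound format, keeping all internal processing against the canonical representation only.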
I have to admit that when I first started out building BizTalk solutions I didn’t immediately grasp the benefits of having canonical representations of messages in my solution. However, this quickly changed. Obviously there is a performance hit, since every message will be transformed twice, but I think this overhead is well justified given the benefits it provides (I have tried to list these in order of importance):
- Impact of schema change is minimised – since all messages map to or from the canonical document, if (following our example) a store or supplier decides to change their schema, it will only be necessary to change one map. Compare this to the P2P solution: 4 maps would need to be changed if a store changed their schema, and not only that, each supplier would need to be contacted and regression testing would need to be arranged with each. By using a canonical document type, we insulate the other parties from the impact of schema changes.
- Minimising impact of change (2) – since orchestrations, for example, will work on the canonical schema, any changes to external schemas will not require orchestration changes and redeployment.
- Additional document formats can be added with relative ease – only one additional map is required, to or from the canonical format. It is also only necessary to deal with one integration partner, and specific knowledge of all downstream message formats is not required – only detailed knowledge of the new message format and the canonical format is needed.
- Reduction in solution complexity – with the canonical solution, 7 maps need to be maintained; 12 maps need to be maintained with the P2P solution. In general, with n source formats and m destination formats, the canonical approach needs n + m maps whereas P2P needs n × m, so the saving grows quickly as formats are added.
Here are a couple of caveats that I have come across with respect to this pattern:
- There can be only one canonical representation of your logical message type! I recently worked on a solution where Xsd.exe had been used to create classes for the canonical schemas and these classes were then used in the solution orchestrations… As the canonical schemas changed, the classes were not regenerated. This can introduce subtle bugs; for example, if you assign canonical message 1 (typed against the schema) to canonical message 2 (typed against the stale class) in your orchestration, any data not defined in message 2 will be silently lost… So it is definitely best practice to ensure that only one canonical representation is available in your solution.
- It is harder to implement this pattern retrospectively, after the solution is in Production. So even if your solution is simple, do yourself a favour and future-proof it by baking in a canonical schema.
I hope this post demonstrates the benefits of the canonical messaging pattern and why solutions should implement it.
In this post I will describe using XSLT templates in maps, followed by a general discussion on the BizTalk mapper per se (since the release of Windows Azure BizTalk Services, I guess I should qualify which mapper I am referring to: this is the BizTalk Server 2013 mapper :-)).
For further information on XSLT, please refer to this link.
An XSLT template defines a set of transformation rules that can be applied to a specific set of nodes. It’s possible to define and execute a template in the mapper itself using the scripting functoid.
In the example that follows, the BizTalk solution processes expense claims from employees. However, due to a company “healthy lifestyle” programme, any purchase of Moro bars needs to be referred to the employee’s manager, so they can be given a stern talking-to in an attempt to discourage their consumption! (For non-Kiwis, a Moro bar is much like a Mars bar but more delicious :-)).
So if a transaction has a <Description> equal to “Moro bar” in the source schema, the <ReferYN> element in the destination schema should be set to “Y” (to indicate “Yes – this employee needs to be referred to his manager”) otherwise it should be set to “N” (to indicate “No – this employee doesn’t need to be referred to his manager”).
1. Drag a scripting functoid onto the map surface. I make it a practice to set the input to the functoid to the root element of the source schema, although this isn’t required to execute the template. This prevents an annoying warning message when Visual Studio 2012 compiles the map indicating that the scripting functoid has no input parameters.
2. Double click on the functoid and click on the “Script Functoid Configuration” tab. Select script type “Inline XSLT Call Template”:
3. It’s possible to write your XSLT directly in the “Inline script” field and I find that this is ok for simple scripts but I prefer to write my XSLT in a file and import using the “Import from file …” button:
In a nutshell, this script outputs a <ReferYN> element based on the number of transactions with the description “Moro bar”. If there is at least one such transaction, the value “Y” is output; otherwise the value “N” is output.
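Since the original script appears only as a screenshot, here is a hedged reconstruction of what such a named template might look like; the element names Transaction and Description are assumptions based on the scenario:

```xml
<xsl:template name="SetReferYN">
  <!-- Emit <ReferYN> depending on whether any "Moro bar"
       transactions exist anywhere in the source document.
       Absolute paths are used so the template works regardless
       of the context node it is called from. -->
  <ReferYN>
    <xsl:choose>
      <xsl:when test="count(//Transaction[Description = 'Moro bar']) &gt; 0">Y</xsl:when>
      <xsl:otherwise>N</xsl:otherwise>
    </xsl:choose>
  </ReferYN>
</xsl:template>
```

With script type “Inline XSLT Call Template”, the mapper calls the template by its name and the template writes the <ReferYN> element directly into the output document.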
Note that a “match” attribute can be added to the <template> element to explicitly specify what nodes the template will be applied against (further information on this is available here).
4. Don’t forget to add a short, descriptive “Label” describing the function of the functoid and also a brief set of comments for the next developer:
Here is a screenshot of the original ExpenseClaim message:
And this is what it looks like after passing through the map:
I hope this explanation sufficiently gets across how useful XSLT templates are in maps. Often using a template reduces the number of functoids needed to generate the desired result in the destination XML. Less code is a characteristic of functional programming (compared to imperative programming) and XSLT fits this definition in some respects. As maps become more complex and contain greater amounts of logic, using templates is a good option for making a map more readable and less cluttered.
In the example above, for instance, without the template a looping functoid would have been required, along with some if-then-else construct. Rather than doing this, I believe defining a single declarative template is the better option.
Having said all this, if a map starts to get very complicated, I usually dump the mapper altogether and dive straight into the XSLT and reference this in the BizTalk map via the “Custom XSLT Path” property (further information on this is available here on MSDN).
General Thoughts on the BizTalk Mapper
My beef with the mapper is that it doesn’t encourage a functional approach to writing XSLT (which is XSLT’s power and what I like about it). Various sources indicate that the mapper generates inefficient and poorly written XSLT; I believe the design of the mapper encourages this, and that the generated XSLT is partly a result of the imperative mindset that the mapper promotes in the developer. This is particularly apparent in the overuse of loops in maps (again, the developer is thinking in the imperative style) when a straight function would do (like the XSLT template example above).
So I don’t think the mapper is a good tool for writing XSLT. However, it is extremely popular and, I know, used by lots of developers, so it will always be part of the BizTalk toolset.
I would be really interested in the opinions of other developers on the (mis)use of the mapper :-).
A quick search of the web suggests that this is a common error and a source of confusion and frustration!
You try to build your orchestration and the build fails with the highlighted error below, for example:
In my case, I’m trying to assign a value to a message but the compiler won’t let me: instead, a copy of the original message needs to be made and then the value can be assigned to the copy.
This is a core feature of BizTalk and demonstrates a tenet of the framework: received messages are immutable (they can’t be changed). This means a full message audit trail is maintained, which is critical when your application is “in the field” and a BizTalk admin needs to trace message processing, perhaps using the BizTalk Admin Console.
So, for example, in the case of the orchestration below, I have an assign message shape (indicated with a red square) where I am incorrectly attempting to assign a value to the original message rather than a copy of it:
In order to fix this, I need to make a couple of changes to the assign message shape:
- Modify the “Messages Constructed” property from the original message to a different message of the same type
- Change the message assignment: instead of assigning a value directly to the original message, I instead “clone” the original message and assign to this copy of the original message
Finally, I change the last Send shape to ensure that the message copy is sent rather than the original message.
(Incidentally, a big clue that a new message needs to be constructed as part of the assignment is that the Message Assignment shape always sits inside a Construct Message shape.)
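Expressed as code, the expression inside the Message Assignment shape ends up along these lines. This is a sketch only: the message names and the distinguished field are hypothetical, and the field must be declared as a distinguished field on the schema:

```
// XLANG expression inside the Construct Message shape,
// which lists msgClaimCopy under "Messages Constructed":
msgClaimCopy = msgClaim;            // clone the immutable received message
msgClaimCopy.Status = "Processed";  // now mutate the copy (hypothetical distinguished field)
```

The downstream Send shape then references msgClaimCopy rather than msgClaim, which is exactly the final change described above.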
In conclusion, this post demonstrates a core feature of BizTalk: message immutability. This is a foundational principle of BizTalk as a framework and as demonstrated, is enforced by the compiler.