Monday, May 30, 2016

CANONICAL DATA MODEL

Often, people from various business units use different terms or abbreviations for the same concept, which can lead to misinterpretation.
For example, the purchase order number can be denoted in several ways depending on the department in the organization: PO No, PO ID, PO Code, and so on.
This leads to multiple custom versions of “enterprise-wide” data models such as Product, Customer and Supplier. All of these models carry redundant custom versions of “enterprise-wide” services and business vocabulary, which in turn leads to point-to-point connections; with n systems, the number of possible connections is n * (n-1).
[Figure: point-to-point connections between systems]
Sometimes these service contracts express similar capabilities in different ways, leading to inconsistency and possible misinterpretation.
An ideal solution to this problem is to standardize service contracts with naming conventions, applied as part of formal analysis and design processes. Global naming conventions introduce enterprise-wide standards that must be used and enforced consistently.
The Canonical Expression pattern, realized through a Canonical Data Model (CDM), addresses these problems.
The word “canon” comes from Greek and Latin terms meaning ‘a rule’ or ‘a standard’.
A Canonical Data Model defines a common structure for the messages exchanged between applications or components: the business entities, attributes, associations and semantics relevant to a specific domain.
A Canonical Data Model is application independent.
Examples of CDMs include OAGIS, ACORD, HL7 and HR-XML.
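As a minimal illustration (the element names and namespace are hypothetical, not taken from any of the standards above), a canonical purchase order message that every department's PO No / PO ID / PO Code maps onto might look like:

<!-- Hypothetical canonical PurchaseOrder message: each department's
     local purchase order field maps to the single PurchaseOrderNumber -->
<PurchaseOrder xmlns="http://example.com/cdm/v1">
    <PurchaseOrderNumber>PO-2016-000123</PurchaseOrderNumber>
    <OrderDate>2016-05-30</OrderDate>
    <Supplier>
        <SupplierId>SUP-001</SupplierId>
        <Name>Acme Corp</Name>
    </Supplier>
    <TotalAmount currency="USD">1500.00</TotalAmount>
</PurchaseOrder>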
The CDM shift simplifies the design as shown in the diagram below.
[Figure: integration before and after the CDM shift]
Benefits of the CDM shift are:
  • Improved business communication through standardization
  • Increased reuse of software components
  • The number of possible connections drops to 2 * n from n * (n-1); with 10 systems, for example, that is 20 instead of 90
  • Fewer transformations
  • Reduced integration time and cost
A few downsides of using a CDM are:
  • CDMs tend to be very generic and large (lightweight versions are sometimes released to address this)
  • CDM usage can impact run-time performance
  • In general, CDMs do not contain business validations
Following a CDM allows us to design and implement reliable messaging patterns and to keep the modules related to the source system decoupled from those of the target system. This decoupling enables pluggable modules for various source or target systems that can be switched easily whenever required.
MuleSoft ESB, as a decoupling middleware platform, helps us leverage reliable messaging to make otherwise fatal, transient errors on a non-reliable transport recoverable. Mule is agnostic to the message payload and to the architecture of integration applications, which makes it easy to implement patterns like the Canonical Data Model and decoupling middleware.
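As a sketch of how this looks in a Mule 3 flow (the queue names and transformer classes are illustrative assumptions, not from any product), each side translates into and out of the CDM, so neither system needs to know the other's native format:

<!-- Illustrative sketch: connector config omitted; transformer classes are hypothetical -->
<flow name="order-integration-flow">
    <!-- receive the source system's proprietary order format -->
    <jms:inbound-endpoint queue="orders.in" doc:name="Source JMS"/>
    <!-- translate the source format into the Canonical Data Model -->
    <custom-transformer class="com.example.SourceOrderToCdm" doc:name="To CDM"/>
    <!-- routing, enrichment and validation all operate on the CDM here -->
    <!-- translate the CDM into the target system's format on the way out -->
    <custom-transformer class="com.example.CdmToTargetOrder" doc:name="From CDM"/>
    <jms:outbound-endpoint queue="orders.out" doc:name="Target JMS"/>
</flow>

With this shape, adding a new source or target system requires only one new pair of transformers instead of a mapping to every other system.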

GENERATING TECHNICAL DOCUMENTATION FOR MULE ESB APPS

Good technical documentation is a key deliverable for any application. A lot of time is usually spent writing it, and it is often necessary to draw several diagrams and describe the components used in the application. Mule ESB simplifies this for Mule applications: Anypoint Studio (also known as Mule Studio) can generate HTML-based documentation for an application at the click of a button. When exporting the documentation, it creates an HTML page for every Mule configuration file in the application, and each page contains the message flow diagram and the XML configuration of every flow in that file.

Steps to export studio documentation

  • Choose any flow within the application and click the “Export Studio Documentation” option, as shown in the image
[Screenshot: Export Studio Documentation option]
  • Browse to or specify the folder where the documentation should be stored and click the “Generate Studio Documentation” button. The documentation for the entire application is generated in that folder.
[Screenshot: Generate Studio Documentation dialog]
  • Open the index.html page created in the folder specified in the previous step and browse through the documentation. It lets you browse every flow and shows both the graphical flow design and the XML configuration of each individual flow in the application. In the following screen, tabs can be seen for all the flow files in the application; selecting a flow name displays the individual flow and its XML configuration.
[Screenshot: generated HTML documentation]

  • The documentation can be hosted on any web server. Tomcat is commonly used to host the Mule Management Console for monitoring Mule servers and applications, and these static HTML pages can be hosted in the same Tomcat instance for easy browsing and as a reference for individual applications and flows; one way to do this is sketched below.
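As a hedged example (the context file name and docBase path are illustrative, not from the original post), a small Tomcat context descriptor dropped into conf/Catalina/localhost/ serves the generated folder as static content at /mule-docs without copying it into webapps:

<!-- conf/Catalina/localhost/mule-docs.xml: serves the generated docs
     at http://host:8080/mule-docs/ ; adjust docBase to your folder -->
<Context docBase="/opt/docs/my-mule-app" />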

HOT DEPLOYMENT OF MULE LICENSES

Introduction
Before moving forward with the instructions, it is important to understand that as long as a Mule instance is running, the currently installed license is the one in use, including its information such as expiration date, entitlements, etc. The procedure below installs a license that will be picked up on the next restart of the Mule instance, so it is meant to be performed in advance.
If these instructions are followed, it is not necessary to use the following commands on Linux/Windows/Solaris/Mac:
  • to install a license: mule -installLicense ~/license.lic
  • to verify a license: mule -verifyLicense
  • to uninstall a license: mule -unInstallLicense
Instructions
1. Go to the MuleSoft License Verifier application: http://mulelicenseverifier.cloudhub.io
[Screenshot: Mule License Verifier]
2. Select the license file and click Verify
[Screenshot: verifying the license]
3. If the license is valid, its information is displayed. Verify that the information is correct
[Screenshot: Mule License Digest]
4. Once the information is verified, the digested license can be downloaded via the ‘Download digested license’ link
5. Copy the downloaded digested license to {MULE_HOME}/conf/ of the Mule instance where the license needs to be replaced.

The new license is now installed and will be picked up automatically the next time the Mule instance is restarted.
Note: it is recommended to try these steps on a development or test instance to become familiar with the procedure before installing in production.

LOAD BALANCING WITH APACHE WEB SERVER (PART 3)

Create and run another web service on Server 2 (in this case, on 10.0.1.86)

1. Repeat the same steps as those done on Server 1
2. Finally, the exposed web service should have the URI http://10.0.1.86:8091/hello?wsdl
Now that we have two services running on two different servers, the LB can be configured for them.

Install and configure HTTPD Server as LB instance

1. Download and install the Apache httpd server (if it is already installed, skip to the next step). It can be downloaded from http://httpd.apache.org/download.cgi
2. Configure httpd-proxy-balancer.conf
  a. This file must be kept under the ‘conf/extra/’ folder
  b. httpd-proxy-balancer.conf should look like:
<IfModule mod_proxy_balancer.c>
    ServerName www.mycompany.com
    ProxyRequests Off

    <Location /balancer-manager>
        SetHandler balancer-manager
        Order deny,allow
        Allow from all
    </Location>

    ProxyPass /balancer-manager !
    ProxyPass / balancer://mycluster/ stickysession=SESSION_ID

    <Proxy balancer://mycluster>
        BalancerMember http://10.0.1.86:8091 loadfactor=4 route=node1
        BalancerMember http://10.0.1.43:8091 loadfactor=6 route=node2
        # Load balancer settings: requests are distributed round-robin
        # by request count, weighted by each member's loadfactor
        # (here 4:6 between the two nodes).
        ProxySet lbmethod=byrequests
    </Proxy>
</IfModule>
3. Configure httpd.conf
a. Make sure the following modules are uncommented (mod_proxy must be loaded before the other proxy modules)
LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_http_module modules/mod_proxy_http.so
LoadModule proxy_balancer_module modules/mod_proxy_balancer.so
LoadModule lbmethod_byrequests_module modules/mod_lbmethod_byrequests.so
LoadModule slotmem_shm_module modules/mod_slotmem_shm.so
LoadModule log_config_module modules/mod_log_config.so
b. Add this line
Include conf/extra/httpd-proxy-balancer.conf
c. Save the file and restart httpd

ASSERT LB ACTIVITY

1. Point the browser at http://<>:8090/. In this case it is http://10.0.1.86:8090/hello?wsdl
2. This takes us to one of the exposed web services on a round-robin basis, distributing the load between 10.0.1.43 and 10.0.1.86 according to the configured load factors

LOAD BALANCING WITH APACHE WEB SERVER (PART 2)

Detailed Steps to setup LB

Create and run a web service on Server 1 (in this case, on 10.0.1.43)

1. Create a SOAP-based Mule web service as shown in the “Message Flow” diagram given below
[Figure: SOAP web service message flow]
2. Following is the XML configuration file (note that the service is exposed on 10.0.1.43):


    <flow name="soap-web-serviceFlow1" doc:name="soap-web-serviceFlow1">
        <http:inbound-endpoint address="http://localhost:8091/hello"
            exchange-pattern="request-response" doc:name="HTTP">
            <cxf:jaxws-service serviceClass="org.example.HelloWorld" />
        </http:inbound-endpoint>
        <component class="org.example.HelloWorldImpl" doc:name="Java" />
    </flow>


3. Run the service with the following configuration
[Screenshot: run configuration]
4. Add the following runtime parameter to the VM arguments: -Dmule.tcp.bindlocalhosttoalllocalinterfaces=true. It should look like
[Screenshot: VM arguments]

5. With this, the service will be exposed at http://10.0.1.43:8091/hello?wsdl

LOAD BALANCING WITH APACHE WEB SERVER (PART 1)



Overview

This article provides quick steps to configure a load balancer while setting up a clustered environment in a distributed network.
However, this should not be considered a full and final setup for a production-grade deployment; making a load-balancing server production stable requires considerably more configuration.
This is simply an illustration of how the basic configuration can be carried out with limited resources.

Assumptions

1. Server 1: exposes a web service open to web requests.
2. Server 2: exposes a web service open to web requests and also hosts the Apache load balancer.
3. Server 1 and Server 2 run on separate IPs.
4. HTTP port on Server 1: 8091
5. HTTP port on Server 2: 8091
6. Apache HTTPD server port: 8090, set up on Server 2

Prerequisites

1. Server 1 set up to host a SOAP service exposed on a Mule server with the following
URI: http://<>:8091/hello?wsdl
2. Server 2 set up to host a SOAP service exposed on a Mule server with the following
URI: http://<>:8091/hello?wsdl
3. Apache httpd server configured on Server 2

Sequence of operations

1. Create and run web service on Server 1
2. Create and run web service on Server 2
3. Install and configure HTTPD Server as LB instance
  a. Configure httpd-proxy-balancer.conf
  b. Configure httpd.conf
4. Assert LB activity

Friday, May 27, 2016

Mule Batch Job (Part 3)



In these two steps we have illustrated how to process records and handle failures in a batch job. Another special case worth discussing is when no database connection can be established during the input phase, for instance because of a wrong database URL; the following exception is then caught by the default exception strategy, as depicted below:
INFO 2014-12-12 11:04:24,212[[batch-job-demo].start-batch-job.stage1.02]
     com.mulesoft.module.batch.engine.DefaultBatchEngine: Starting input phase
INFO 2014-12-12 11:04:24,222[[batch-job-demo].start-batch-job.stage1.02]
     org.mule.api.processor.LoggerMessageProcessor:
Start getting users records - connecting to database using URL:
ERROR 2014-12-12 11:04:24,263 [[batch-job-demo].start-batch-job.stage1.02]
     org.mule.exception.DefaultMessagingExceptionStrategy:
********************************************************************************
Message   : null (java.lang.NullPointerException).
Message payload is of type: String
Code      : MULE_ERROR--2
--------------------------------------------------------------------------------
Exception stack is:
1. null (java.lang.NullPointerException)
org.mule.module.db.internal.domain.connection.DefaultDbConnection:99 (null)
-------------------------------------------------------------------------------- 
In this case, the batch process still continues to the end, that is, to the on-complete phase. This is very important if we need to generate a report at the end of the batch process, even with 0 records processed, together with the exception that occurred. The following is the output produced within the on-complete phase, taken from the log:
INFO  2014-12-12 11:04:24,287 [[batch-job-demo].start-batch-job.stage1.02]
           com.mulesoft.module.batch.engine.DefaultBatchEngine:
Starting execution of onComplete phase for instance 09b38430-8474-11e4-9c5c-0a0027000000
           of job users-accounts-batch-job
INFO  2014-12-12 11:04:24,371 [[batch-job-demo].start-batch-job.stage1.02]
           org.mule.api.processor.LoggerMessageProcessor:
on-complete payload: BatchJobInstanceId:09b38430-8474-11e4-9c5c-0a0027000000
          Number of TotalRecords: 0
          ProcessedRecords: 0
          Number of sucessfull Records: 0
          Number of failed Records: 0
          ElapsedTime in milliseconds: 0
          InpuPhaseException com.mulesoft.module.batch.exception.BatchException:
                 null (java.lang.NullPointerException). Message payload is of type:
                 String (org.mule.api.MessagingException)
          LoadingPhaseException: null
          CompletePhaseException: null
In this phase it appears clearly that 0 records were processed, because of the database connection exception that occurred during the input phase, as shown by the InputPhaseException. This kind of exception handling is useful when the requirements call for a report at the end of the batch job indicating the number of records processed, along with the failed and successful ones.
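For reference, an on-complete phase that prints a report like the one above might look like the following sketch. The payload in this phase is a BatchJobResult object; the property names used below are assumptions based on the Mule 3 batch API, not taken from the original post.

<batch:on-complete>
    <!-- payload is a BatchJobResult; the property names below are assumed -->
    <logger level="INFO" doc:name="Logger"
            message="on-complete payload: BatchJobInstanceId:#[payload.batchJobInstanceId]
          Number of TotalRecords: #[payload.totalRecords]
          ProcessedRecords: #[payload.processedRecords]
          Number of successful Records: #[payload.successfulRecords]
          Number of failed Records: #[payload.failedRecords]
          ElapsedTime in milliseconds: #[payload.elapsedTimeInMillis]
          InputPhaseException: #[payload.inputPhaseException]
          LoadingPhaseException: #[payload.loadingPhaseException]
          CompletePhaseException: #[payload.onCompletePhaseException]"/>
</batch:on-complete>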

Mule Batch Job (Part 2)



The process records stage:
In this phase we have two steps. The first, get-user-account-step, gets a user account record; the second, step-failures, processes failed records. For example, if the current user does not have an account, a NoUserAccountExistException is thrown; this exception is handled by sending the current user, along with the exception message, to a JMS queue for later inspection.
If the first step returns existing account information, the failures step is skipped, since it captures only failures via the configuration accept-policy="ONLY_FAILURES". The returned account info may be used to generate a CSV file required by business operations.
The following depicts the batch step get-user-account-step; here we hold the current user id in a record variable using the expression #[recordVars['currentUser']] in the enricher:
<batch:step name="get-user-account-step">
   <logger message="Start processing step: get-user-account-step"
           level="INFO"/>
   <enricher source="#[payload['id']]" target="#[recordVars['currentUser']]">
      <set-payload value="#[payload]" doc:name="Set Payload"/>
   </enricher>
   <flow-ref name="get-account-record" doc:name="Flow Reference"/>
   <logger message="Account record payload: #[payload]" level="INFO" doc:name="Logger"/>
   <!-- We may transform the record payload here and push it into a CSV file -->
</batch:step>
The flow that gets the current user account, get-account-record, is depicted below; it uses the record variable currentUser to fetch the corresponding account.
This flow also uses a component right after the query that returns the account info; if the returned account info is empty, the component throws NoUserAccountExistException.
<flow name="get-account-record" doc:name="get-account-record" processingStrategy="synchronous">
 <logger message="Start getting account record for user:
               #[recordVars['currentUser']]" level="INFO"/>
 <db:select config-ref="MySqlDatabase" doc:name="get user account account">
    <db:parameterized-query>
       <![CDATA[SELECT * FROM usermodel.Accounts WHERE user_id=#[recordVars['currentUser']];]]>
    </db:parameterized-query>
 </db:select>
 <component class="com.appnov.batch.AccountVerifier" doc:name="Java"/>
 <logger message="End getting account user: #[recordVars['currentUser']]" level="INFO"/>
</flow> 
The following depicts the batch step step-failures, which accepts only failed records:
<batch:step name="step-failures" accept-policy="ONLY_FAILURES">
   <logger message="Failed record with user id:  #[recordVars['currentUser']]" level="INFO"/>
   <set-payload value="#[getStepExceptions()]" doc:name="Set Payload"/>
   <foreach collection="#[payload.values()]" doc:name="For Each">
    <logger message="Current user: #[recordVars['currentUser']] record has been failed,Exception:
                      #[payload]" level="INFO"/>
     <!-- We may send the Payload here to a JMS queue or use it to create a report file-->
   </foreach>
</batch:step>

Mule Batch Job (Part 1)



Introduction
In this post I will show a sample demo application that illustrates how a Mule batch job works, along with techniques for handling exceptions and record failures. The demo application code can be cloned here: Ref[1].
Use case
The application illustrates a batch job that queries a database for all users with status "Approved" and, if any exist, gets each user's account record and generates a CSV file from the account attributes. The resulting CSV file is saved for later use; it may also be sent via email or to an FTP server.
Based on this use case, the demo application also shows how to handle some special cases that throw exceptions and lead to record failures: for example, how to handle an approved user with no existing account, how to keep track of these failed records, and how to generate a report of approved users with no created account or send them with the exceptions to a JMS queue.
The post found here, Ref[2], has a very good explanation of exception and error handling within Mule batch jobs.
Description
The demo application uses the following configuration, which means the batch job will keep running through all the currently loaded records no matter how many of them have failed:
<batch:job name="users-accounts-batch-job" max-failed-records="-1">
In other scenarios, we may need to stop the running batch job once 20 records have failed, by setting the attribute max-failed-records="20".
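For orientation, the overall shape of the job, assembled from the pieces shown across the three parts of this series, is:

<batch:job name="users-accounts-batch-job" max-failed-records="-1">
    <batch:input>
        <!-- loads all approved users (shown below) -->
        <flow-ref name="get-users-records" doc:name="Flow Reference"/>
    </batch:input>
    <batch:process-records>
        <!-- per-record steps, detailed in Part 2 -->
        <batch:step name="get-user-account-step">...</batch:step>
        <batch:step name="step-failures" accept-policy="ONLY_FAILURES">...</batch:step>
    </batch:process-records>
    <batch:on-complete>
        <!-- final report, shown in Part 3 -->
    </batch:on-complete>
</batch:job>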
Input phase:
In this phase we call a flow that loads all approved users:
<batch:input>
    <flow-ref name="get-users-records" doc:name="Flow Reference"/>
</batch:input>
The reference to the flow get-users-records:
<flow name="get-users-records" doc:name="get-users-records" processingStrategy="synchronous">
 <logger message="Start getting users records -connecting to database using URL:${database.url}" 
                  level="INFO" doc:name="Logger"/>
 <db:select config-ref="MySqlDatabase" doc:name="get approved users"/>
      <db:parameterized-query><![CDATA[SELECT * FROM usermodel.Users WHERE status=10;]]>
      </db:parameterized-query>
 <
/db:select></flow>
 <logger message="End getting users records #[payload]" level="INFO" doc:name="Logger"/>
</flow>

Mule ESB Licence Cost

Mule ESB licence costs are based on an annual subscription model and are per core. The cost depends on the support tier (Silver, Gold or Platinum). The core product licence cost covers the full stack: ESB server, deployment tools, monitoring tools and development tools. Most connectors are free to use, but enterprise connectors such as SAP, HL7 and Siebel are charged separately.

The key inputs that determine how many cores need to be purchased are:

– Use cases or communication patterns
– Payload size
– Transactions per second (TPS)
– SLAs
– Throughput and response times

Mule ESB Community Vs Enterprise Edition


High Availability and Performance
Feature | Community Edition | Enterprise Edition (G)/(S) | Enterprise Edition (P) | Impact
High Availability | No Support | No Support | Supported | Message loss and transaction failure
Resilience | No Support | No Support | Supported | Impact on effort to take care of stateful and failure scenarios
Caching | No Support | Supported | Supported | Performance impact

Development
Feature | Community Edition | Enterprise Edition (G)/(S) | Enterprise Edition (P) | Impact
Anypoint Templates | No Support | No Support | Supported | Saves development and design effort; estimated 40 to 60% time saving depending on how closely the use case matches the template
Transaction Management | No Support | Supported | Supported | Data loss and impact on development effort
Batch Manager | No Support | Supported | Supported | Impact on development & support effort
Batch Process component | No Support | Supported | Supported | Impact on development & support effort
JDBC Enterprise Connector | No Support | Supported | Supported | For handling batch statements, used in data integration projects; performance hit
Anypoint DataMapper | No Support | Supported | Supported | Impact on development

Operational Support
Feature | Community Edition | Enterprise Edition (G)/(S) | Enterprise Edition (P) | Impact
Mule Management Console | No Support | Supported | Supported | Impact on support
SLA and email Alerts | No Support | Supported | Supported | Impact on support and availability
SNMP Monitoring | No Support | Supported | Supported | Impact on support and availability
HTTP Polling | No Support | Supported | Supported | Impact on support and availability; Mule provides HTTP polling of a service for availability
Deployment Management | No Support | Supported | Supported | Impact on support

Security
Feature | Community Edition | Enterprise Edition (G)/(S) | Enterprise Edition (P) | Impact
Role based security | Not Supported | Supported | Supported | Major effort to custom develop
OAuth 2.0 - Secure Token Provider | Not Supported | Supported | Supported | Major effort to custom develop
Message Encryption | Not Supported | Supported | Supported | Major effort to custom develop
SAML 2.0 Module | Not Supported | Supported | Supported | Major effort to custom develop
Secure Property Holder | Not Supported | Supported | Supported | Keeps passwords and other confidential text in encrypted form; cannot be custom built as it links directly to the endpoint
IP Based Filtering | Not Supported | Supported | Supported | IP-based endpoint filtering on inbound IP is available in the EE version; requests can also be filtered using LDAP

Support
Feature | Community Edition | Enterprise Edition (G)/(S) | Enterprise Edition (P) | Impact
License | Free | Purchase minimum 2 cores | Purchase minimum 4 (2+2) for HA | Licence cost
Hardened Code | No Support | Yes | Yes | Impact on stability and performance
SLA | Forums | 8/5, 24-hour response time | 24/7, 2-hour response time | Impact on support
Hot patches & Service packs | No Support | Supported | Supported | Impact on support and availability

RAML: THE RESTFUL API MODELING LANGUAGE (PART 11)

API Console:


You can generate more sophisticated documentation from a RAML file using the API Console web component. The resulting HTML is similar in style to raml2html, but it has a killer feature: a console that allows people to interact with your API from within the documentation. It builds forms for any available parameters, validates them according to your definitions, and works out how to perform authentication.
To see it in action, there are online demos for Twitter and GitHub. If you have been using the Anypoint Platform you will have seen it already; it is what generates the interactive documentation in the right-hand column.
API Console is implemented using Angular, but you don't need any Angular experience in order to use it.
To get up-and-running quickly, simply follow these steps:
  1. Clone the repository: git clone git@github.com:mulesoft/api-console.git
  2. Copy the dist folder into an empty directory to hold your documentation
  3. Copy your .raml file somewhere in that folder
  4. Create an HTML file as follows:
<!DOCTYPE html>
<html>
  <head>
    <link rel="stylesheet" href="dist/styles/app.css" type="text/css" />
  </head>
  <body ng-app="ramlConsoleApp" ng-cloak id="raml-console-unembedded">
    <raml-console src="api.raml"></raml-console>
    <script src="dist/scripts/vendor.js"></script>
    <script src="dist/scripts/app.js"></script>
  </body>
</html>
That’s all there is to it. You can also embed the API Console in an iframe; refer to the project’s README for details.
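The simplest embedded variant (the file name and dimensions here are assumptions) is an iframe pointing at the page created above:

<iframe src="index.html" width="100%" height="800"></iframe>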