Monday, May 30, 2016

CANONICAL DATA MODEL

Often, people from different business units use different terms or abbreviations for the same concept, which can lead to errors of interpretation.
For example, a purchase order number can be denoted in several ways, with different parameters, depending on the department in the organization: one team might use PO No, another PO ID or PO Code, and so on.
This leads to multiple custom versions of “enterprise-wide” data models such as Product, Customer and Supplier. Each model carries redundant custom versions of “enterprise-wide” services and business vocabulary, which in turn leads to point-to-point connections that grow as n * (n-1).
[Image: point-to-point connections]
Sometimes these service contracts express similar capabilities in different ways, leading to inconsistency and possible misinterpretation.
An ideal solution to this problem is to standardize service contracts with naming conventions, applied as part of formal analysis and design processes. Global naming conventions introduce enterprise-wide standards that must be used and enforced consistently.
The Canonical Expression pattern, realized through a Canonical Data Model (CDM), solves these related problems.
The name comes from “canon”, a Greek and Latin word meaning ‘a rule’ or ‘standard’.
A Canonical Data Model defines a common structure for the messages exchanged between applications or components. The CDM defines the business entities, attributes, associations and semantics relevant to a specific domain.
A Canonical Data Model is application independent.
Examples of industry CDMs are OAGIS, ACORD, HL7 and HR-XML.
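As a hedged illustration (not drawn from any of the standards above; every element and attribute name here is hypothetical), a canonical purchase order message for one domain might look like this:

<!-- Hypothetical canonical PurchaseOrder entity; names are illustrative only -->
<PurchaseOrder xmlns="http://example.org/cdm/v1">
    <!-- one agreed name replaces PO No / PO ID / PO Code -->
    <PurchaseOrderNumber>PO-2016-000123</PurchaseOrderNumber>
    <OrderDate>2016-05-30</OrderDate>
    <Supplier>
        <SupplierId>SUP-001</SupplierId>
        <Name>Acme Components Ltd.</Name>
    </Supplier>
    <Lines>
        <Line>
            <ProductCode>PRD-42</ProductCode>
            <Quantity unitCode="EA">10</Quantity>
            <UnitPrice currency="USD">12.50</UnitPrice>
        </Line>
    </Lines>
</PurchaseOrder>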
The CDM shift simplifies the design as shown in the diagram below.
[Image: CDM shift]
Benefits of the CDM shift are:
  • Improve Business Communication through standardization
  • Increase re-use of Software Components
  • The number of required connections drops from n * (n-1) to 2 * n (for example, 10 applications need 20 instead of 90).
  • Reduce transformations
  • Reduce Integration Time and Cost
A few downsides of using a CDM are:
  • CDMs are too generic and large in size (lighter versions may be released to address this)
  • CDM usage might impact run-time performance
  • In general, CDMs do not contain business validations
Following a CDM allows us to design and implement reliable messaging patterns and to keep the modules related to the source system decoupled from the target system. Decoupling the modules enables us to create pluggable modules applicable to various source or target systems, which can be switched easily whenever required.
MuleSoft ESB, as a decoupling middleware platform, helps us leverage reliable messaging to make transient errors, which would be fatal over a non-reliable transport, recoverable. Mule is agnostic to the message payload and to the architecture of the integration applications, which makes it easy to implement patterns such as the Canonical Data Model and decoupling middleware, as sketched below.
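As a minimal sketch (Mule 3 XML; the transformer classes, endpoint address and queue name are hypothetical, not a definitive implementation), a source system’s messages can be converted to the canonical model and handed over a VM queue to a separate flow that maps them to the target system:

    <!-- Hypothetical flows: transformer classes and queue name are illustrative only -->
    <flow name="source-to-canonical" doc:name="source-to-canonical">
        <http:inbound-endpoint address="http://localhost:8081/orders"
            exchange-pattern="request-response" doc:name="HTTP" />
        <custom-transformer class="org.example.SourceOrderToCanonicalTransformer" doc:name="To CDM" />
        <vm:outbound-endpoint path="canonical.orders" exchange-pattern="one-way" doc:name="VM" />
    </flow>

    <flow name="canonical-to-target" doc:name="canonical-to-target">
        <vm:inbound-endpoint path="canonical.orders" exchange-pattern="one-way" doc:name="VM" />
        <custom-transformer class="org.example.CanonicalToTargetOrderTransformer" doc:name="From CDM" />
        <!-- the outbound endpoint for the target system goes here -->
    </flow>

Because each system only maps to and from the canonical model, a source or target flow can be swapped without touching the other side.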

GENERATING TECHNICAL DOCUMENTATION FOR MULE ESB APPS

Good technical documentation is a key deliverable for any application. Usually a lot of time is spent writing it, and it is often necessary to draw several diagrams and write several lines of description about the components used in the application. Mule ESB simplifies this for Mule applications: Anypoint Studio (also known as Mule Studio) can generate HTML-based documentation for an application at the click of a button. When exporting the documentation, Studio creates an HTML page for every Mule configuration file within the application, and each page contains the message flow diagram and the configuration XML of every flow in that file.

Steps to export studio documentation

  • Choose any flow within the application and click the “Export Studio Documentation” option, as shown in the image below
[Screenshot: Export Studio Documentation option]
  • Browse to or specify a folder where the documentation should be stored and click the “Generate Studio Documentation” button. The documentation for the entire application will be generated in that folder.
[Screenshot: Generate Studio Documentation dialog]
  • Open the index.html page created in the folder specified in the previous step and browse through the documentation. It lets you browse every single flow and shows both the graphical flow design and the XML configuration of each individual flow in the application. In the following screen, tabs can be seen for all the flow files in the application; selecting a flow name displays the individual flow and its XML configuration.
[Screenshot: generated documentation index page]

  • The documentation can be hosted on any web server. Tomcat is commonly used to host the Mule Management Console for monitoring Mule servers and applications, and the generated pages can be hosted as static HTML in the same Tomcat instance for easy browsing and as a reference for individual applications and flows, as sketched below.
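For example (a sketch only; the context name and docBase path are hypothetical), the generated folder can be published by dropping a small context file such as conf/Catalina/localhost/mule-docs.xml into the Tomcat installation:

<!-- Hypothetical Tomcat context file: maps /mule-docs to the exported documentation folder -->
<Context docBase="/opt/mule-docs/my-mule-app" />

After restarting Tomcat, the documentation should then be reachable at http://<tomcat-host>:<port>/mule-docs/index.html.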

HOT DEPLOYMENT OF MULE LICENSES

Introduction
Before moving forward with the instructions, it is important to understand that as long as a Mule instance is running, the currently installed license is the one in use; the instance relies on that license's information, such as expiration date, entitlements, etc. The procedure below installs a license that will be picked up on the next restart of the Mule instance, and it is meant to be done ahead of that restart.
If these instructions are followed, it is not necessary to use the following commands under Linux/Windows/Solaris/Mac:
  • to install a license: mule -installLicense ~/license.lic
  • to verify a license: mule -verifyLicense
  • to un-install a license: mule -unInstallLicense
Instructions
1. Go to the MuleSoft License Verifier application: http://mulelicenseverifier.cloudhub.io
[Screenshot: Mule License Verifier]
2. Select the license and click on Verify
[Screenshot: Mule License Verifier]
3. If the license is valid, its information will be displayed. Verify that the information is correct
[Screenshot: Mule License Digest]
4. Once the information is verified, the digested license can be downloaded via the ‘Download digested license’ link
5. Copy the downloaded digested license to {MULE_HOME}/conf/ of the Mule instance where the license needs to be replaced.

The new license is now in place and will be picked up automatically the next time the Mule instance is restarted.
Note: it is recommended to try these steps on a development or test instance to become familiar with the procedure before installing in production.

LOAD BALANCING WITH APACHE WEB SERVER (PART 3)

Create and run another web service on Server 2 (in this case it is on 10.0.1.86)

1. Repeat the same steps as done on Server 1
2. Finally, the exposed web service should have the URI http://10.0.1.86:8091/hello?wsdl
Now that we have two services running on two different servers, the LB can be configured for them.

Install and configure HTTPD Server as LB instance

1. Download and install the Apache httpd server (if it is already installed, skip to the next step). It can be downloaded from http://httpd.apache.org/download.cgi
2. Configure httpd-proxy-balance.conf
  a. This file must be kept under the ‘conf/extra/’ folder
  b. httpd-proxy-balance.conf should look like the following:
<IfModule mod_proxy_balancer.c>
    ServerName www.mycompany.com
    ProxyRequests off

    <Location /balancer-manager>
        SetHandler balancer-manager
        Order deny,allow
        Allow from all
    </Location>

    ProxyPass /balancer-manager !
    ProxyPass / balancer://mycluster/ stickysession=SESSION_ID

    <Proxy balancer://mycluster>
        BalancerMember http://10.0.1.86:8091 loadfactor=4 route=node1
        BalancerMember http://10.0.1.43:8091 loadfactor=6 route=node2

        # Load balancer settings:
        # lbmethod=byrequests distributes requests in a round-robin
        # fashion, weighted by each member's loadfactor.
        ProxySet lbmethod=byrequests
    </Proxy>
</IfModule>
3. Configure httpd.conf
a. Make sure the following modules are uncommented
LoadModule lbmethod_byrequests_module modules/mod_lbmethod_byrequests.so
LoadModule log_config_module modules/mod_log_config.so
LoadModule proxy_balancer_module modules/mod_proxy_balancer.so
LoadModule proxy_http_module modules/mod_proxy_http.so
LoadModule slotmem_shm_module modules/mod_slotmem_shm.so
LoadModule proxy_module modules/mod_proxy.so
b. Add this line (it must match the file name created in step 2)
Include conf/extra/httpd-proxy-balance.conf
c. Save and restart httpd

ASSERT LB ACTIVITY

1. Point the browser at http://<>:8090/hello?wsdl. In this case it is http://10.0.1.86:8090/hello?wsdl
2. This routes requests to the exposed web services on a round-robin basis, sharing the load between 10.0.1.43 and 10.0.1.86 according to the configured load factors

LOAD BALANCING WITH APACHE WEB SERVER (PART 2)

Detailed Steps to setup LB

Create and run web service on Server 1(in this case it is on 10.0.1.43)

1. Create a SOAP-based Mule web service as shown in the “Message Flow” diagram given below
[Image: SOAP web service message flow]
2. The following is the XML configuration file (note that although the endpoint is declared on localhost, the VM argument in step 4 makes the service reachable over 10.0.1.43)


    <flow name="soap-web-serviceFlow1" doc:name="soap-web-serviceFlow1">
        <http:inbound-endpoint address="http://localhost:8091/hello"
            exchange-pattern="request-response" doc:name="HTTP">
            <cxf:jaxws-service serviceClass="org.example.HelloWorld" />
        </http:inbound-endpoint>
        <component class="org.example.HelloWorldImpl" doc:name="Java" />
    </flow>
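For completeness, here is a sketch of how the excerpt above sits inside a full configuration file (the namespace and schemaLocation declarations are shown roughly as Anypoint Studio generates them for Mule 3; verify them against your own project):

<?xml version="1.0" encoding="UTF-8"?>
<mule xmlns="http://www.mulesoft.org/schema/mule/core"
      xmlns:http="http://www.mulesoft.org/schema/mule/http"
      xmlns:cxf="http://www.mulesoft.org/schema/mule/cxf"
      xmlns:doc="http://www.mulesoft.org/schema/mule/documentation"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      xsi:schemaLocation="
        http://www.mulesoft.org/schema/mule/core http://www.mulesoft.org/schema/mule/core/current/mule.xsd
        http://www.mulesoft.org/schema/mule/http http://www.mulesoft.org/schema/mule/http/current/mule-http.xsd
        http://www.mulesoft.org/schema/mule/cxf http://www.mulesoft.org/schema/mule/cxf/current/mule-cxf.xsd">

    <!-- the soap-web-serviceFlow1 flow from the excerpt above goes here -->

</mule>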


3. Run the service with the following configuration
[Screenshot: run configuration]
4. Add the following runtime parameter to the VM arguments: -Dmule.tcp.bindlocalhosttoalllocalinterfaces=true. It should look like this:
[Screenshot: VM arguments]

5. With this, the service will be exposed at http://10.0.1.43:8091/hello?wsdl

LOAD BALANCING WITH APACHE WEB SERVER (PART 1)



Overview

This article provides quick steps to configure a load balancer while setting up a clustered environment in a distributed network.
However, this should not be considered a full-fledged, production-stable configuration; several additional settings are required to make a load-balancing server production stable.
It is just an illustration of how the basic configuration can be carried out with limited resources.

Assumption

1. Server 1: exposes a web service open to web requests.
2. Server 2: exposes a web service open to web requests and also hosts the Apache load balancer.
3. Server 1 and Server 2 run on separate IPs.
4. HTTP port on Server 1: 8091
5. HTTP port on Server 2: 8091
6. Apache HTTPD server port: 8090, set up on Server 2

Prerequisites

1. Server 1 set up to host a SOAP service exposed on a Mule server with the following
URI: http://<>:8091/hello?wsdl
2. Server 2 set up to host a SOAP service exposed on a Mule server with the following
URI: http://<>:8091/hello?wsdl
3. Apache httpd server configured on Server 2

Sequence of operation

1. Create and run web service on Server 1
2. Create and run web service on Server 2
3. Install and configure HTTPD Server as LB instance
  a. Configure httpd-proxy-balance.conf
  b. Configure httpd.conf
4. Assert LB activity

Friday, May 27, 2016

MULE BATCH JOB (PART 3)


The previous two steps illustrated how to process records and handle failures in a batch job. Another special case worth mentioning is when, during the input phase, no database connection can be established, for instance because of a wrong database URL; in that case the following exception is caught by the default exception strategy, as shown below:
INFO 2014-12-12 11:04:24,212[[batch-job-demo].start-batch-job.stage1.02]
     com.mulesoft.module.batch.engine.DefaultBatchEngine: Starting input phase
INFO 2014-12-12 11:04:24,222[[batch-job-demo].start-batch-job.stage1.02]
     org.mule.api.processor.LoggerMessageProcessor:
Start getting users records - connecting to database using URL:
ERROR 2014-12-12 11:04:24,263 [[batch-job-demo].start-batch-job.stage1.02]
     org.mule.exception.DefaultMessagingExceptionStrategy:
********************************************************************************
Message   : null (java.lang.NullPointerException).
Message payload is of type: String
Code      : MULE_ERROR--2
--------------------------------------------------------------------------------
Exception stack is:
1. null (java.lang.NullPointerException)
org.mule.module.db.internal.domain.connection.DefaultDbConnection:99 (null)
-------------------------------------------------------------------------------- 
In this case, the batch process still continues to the end, that is, to the on-complete phase. This is very important if we need to generate a report at the end of the batch process, even with 0 records processed, together with the exception that occurred. The following is the output produced within the on-complete phase, taken from the log:
INFO  2014-12-12 11:04:24,287 [[batch-job-demo].start-batch-job.stage1.02]
           com.mulesoft.module.batch.engine.DefaultBatchEngine:
Starting execution of onComplete phase for instance 09b38430-8474-11e4-9c5c-0a0027000000
           of job users-accounts-batch-job
INFO  2014-12-12 11:04:24,371 [[batch-job-demo].start-batch-job.stage1.02]
           org.mule.api.processor.LoggerMessageProcessor:
on-complete payload: BatchJobInstanceId:09b38430-8474-11e4-9c5c-0a0027000000
          Number of TotalRecords: 0
          ProcessedRecords: 0
          Number of sucessfull Records: 0
          Number of failed Records: 0
          ElapsedTime in milliseconds: 0
          InpuPhaseException com.mulesoft.module.batch.exception.BatchException:
                 null (java.lang.NullPointerException). Message payload is of type:
                 String (org.mule.api.MessagingException)
          LoadingPhaseException: null
          CompletePhaseException: null
In this phase it is clear that 0 records were processed, which happened because of the database connection exception that occurred during the input phase, as shown by the InputPhaseException. This kind of exception handling is useful if the requirements call for a report at the end of the batch job indicating the number of records processed, along with the failed and successful ones.
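As a hedged sketch (Mule 3 batch XML; the job contents are placeholders and the MEL property names are assumptions matching the report fields in the log above, so verify them against your Mule version), such a report can be produced by a logger in the on-complete phase:

<batch:job name="users-accounts-batch-job">
    <batch:input>
        <!-- e.g. a database query that loads the user records to be processed -->
    </batch:input>
    <batch:process-records>
        <batch:step name="process-user-step">
            <!-- per-record processing goes here -->
        </batch:step>
    </batch:process-records>
    <batch:on-complete>
        <!-- in on-complete the payload is a BatchJobResult; the properties below are assumed from the log output -->
        <logger level="INFO" doc:name="Logger"
            message="on-complete payload: TotalRecords: #[payload.totalRecords]
                     ProcessedRecords: #[payload.processedRecords]
                     SuccessfulRecords: #[payload.successfulRecords]
                     FailedRecords: #[payload.failedRecords]
                     ElapsedTime in milliseconds: #[payload.elapsedTimeInMillis]
                     InputPhaseException: #[payload.inputPhaseException]
                     LoadingPhaseException: #[payload.loadingPhaseException]
                     CompletePhaseException: #[payload.onCompletePhaseException]" />
    </batch:on-complete>
</batch:job>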