(WID/WPS) Message Logger Primitive

This post gives a brief overview of the Message Logger primitive, along with a few implementation details.
  • The purpose of a Message Logger primitive (MLP) is to log selected content of the service message object, or SMO.
  • The message is written in XML format. We configure the primitive with an XPath expression so that all or part of the SMO is written. The default is to log the message payload, as identified by the XPath expression /body.
  • The Message Logger primitive provides us with a choice between two different implementations.
  • One implementation writes log records to a relational database.
  • The other implementation option, introduced in version 6.2, is custom logging, which uses the Java logging APIs. With this option, we can use a default implementation provided by the product that writes log records to a file, or we can provide our own Java logging implementation.
  • The SMO is not updated by the Message Logger.
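The custom-logging option builds on the standard java.util.logging framework. As a rough, hypothetical sketch (the class and logger names below are our own illustration, not the product's), a custom Java logging implementation is essentially a Handler registered on a logger:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.logging.Handler;
import java.util.logging.Logger;
import java.util.logging.LogRecord;

// Hypothetical custom handler: collects formatted log records in memory.
// A real implementation might write them to a file or a remote store.
public class MessageLogHandler extends Handler {
    private final List<String> records = new ArrayList<>();

    @Override
    public void publish(LogRecord record) {
        if (isLoggable(record)) {
            records.add(record.getLevel() + ": " + record.getMessage());
        }
    }

    @Override public void flush() { }
    @Override public void close() { }

    public List<String> getRecords() { return records; }

    public static void main(String[] args) {
        Logger logger = Logger.getLogger("mediation.messagelog");
        logger.setUseParentHandlers(false); // keep records out of the console
        MessageLogHandler handler = new MessageLogHandler();
        logger.addHandler(handler);
        logger.info("<body>...payload...</body>");
        System.out.println(handler.getRecords().get(0));
    }
}
```

The product wires the primitive to a named logger; plugging in a custom handler like the one above is what "provide our own Java logging implementation" amounts to.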


First Implementation (DB)


Here a user-defined relational database is used.
Note : 
  • Data source name : jdbc/mediation/mymessageLog. By default this value is jdbc/mediation/messageLog, which is the JNDI name of the default WPS or WESB database.
  • The ESBLOG.MSGLOG table was created in the user-defined database (the default table where this data is stored):
CREATE SCHEMA ESBLOG;


CREATE TABLE ESBLOG.MSGLOG (TIMESTAMP TIMESTAMP NOT NULL, MESSAGEID VARCHAR(36) NOT NULL, MEDIATIONNAME VARCHAR(256) NOT NULL, MODULENAME VARCHAR(256), MESSAGE CLOB(100000K), VERSION VARCHAR(10));

ALTER TABLE ESBLOG.MSGLOG ADD CONSTRAINT PK_MSGLOG PRIMARY KEY (MESSAGEID, TIMESTAMP, MEDIATIONNAME);

The column names describe what each column stores.


  • Server admin console configuration is also required for this to work.
Second Implementation (Custom)



Promotable Properties : Configure these if we want control over certain properties at runtime.



Administering these properties :




For PI : https://docs.google.com/open?id=0ByotHxAO08TDb2VuaHEycFhRWUd5dGlaOFQ2ZVVBUQ

(WMB) Timestamp Conversion

We often need to convert a timestamp from one format to another.
The ESQL code below shows how to convert incoming data (a timestamp held in a string) into a TIMESTAMP, or into a string of the desired format.


DECLARE srcFormat CHARACTER; -- Source (incoming timestamp format)
DECLARE targetFormat CHARACTER; -- Target (expected timestamp format)
DECLARE InTime TIMESTAMP; 
SET srcFormat = 'yyyy-MM-dd HH:mm:ss.SSS';
SET targetFormat = 'yyyy-MM-dd''T''HH:mm:ss.SSS';


-- Converting the string into a timestamp of the same format.
SET InTime = CAST(InputBody.time AS TIMESTAMP FORMAT srcFormat);


-- Converting the timestamp into a string of the target format.
SET OutputRoot.XMLNSC.out.time = CAST(InTime AS CHARACTER FORMAT targetFormat);
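For comparison, the same conversion can be sketched in plain Java with java.time (this is illustrative only, not WMB code; the patterns mirror the srcFormat and targetFormat values above):

```java
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;

public class TimestampConvert {
    // Source (incoming timestamp format)
    static final DateTimeFormatter SRC =
            DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss.SSS");
    // Target (expected timestamp format)
    static final DateTimeFormatter TARGET =
            DateTimeFormatter.ofPattern("yyyy-MM-dd'T'HH:mm:ss.SSS");

    // Parse the incoming string with the source pattern,
    // then re-format it with the target pattern.
    static String convert(String in) {
        return LocalDateTime.parse(in, SRC).format(TARGET);
    }

    public static void main(String[] args) {
        System.out.println(convert("2012-05-01 13:45:30.123"));
        // prints 2012-05-01T13:45:30.123
    }
}
```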

(WID/WPS) Diverging/Converging Gateway types

GeneralizedFlow

A diverging gateway is made available for an activity when two or more links (excluding fault links) start at this activity. We can specify one of three types of diverging gateway :

  • Split, in which only the first link (going left to right) with a transition condition of true is navigated. 


  • Fork, in which all links are navigated in parallel.  




  • Inclusive OR, in which all links with transition conditions of true are navigated. 


A converging gateway is made available for an activity when two or more links (including fault links) end at this activity. We can specify one of three types of converging gateway :

  • Merge - exclusive pathway. When the first link is followed with a "true" flag, the gateway is evaluated and the activity commences. No synchronization happens with the remaining incoming links.
  • Join - parallel pathways. All incoming links are synchronized, the gateway is evaluated when all links are followed. When all links are followed with a "true" flag the activity commences. When all links are followed with a "false" flag the activity is skipped. When some links are followed with a "true" flag but some with a "false" flag the run time throws an exception.

Note: In cases where this exception can happen, a warning is displayed while modeling the process or when deploying the process.

  • Inclusive OR - exclusive and/or parallel pathways. This gateway can be used to merge exclusive paths, or to join parallel paths, or both. When merging exclusive paths, the gateway is evaluated and the activity commences when the first link is followed with a "true" flag. When joining parallel paths, all incoming links are synchronized, the gateway is evaluated when all links are followed. When one or more links are followed with a "true" flag the activity commences. When all links are followed with a "false" flag the activity is skipped. When exclusive and parallel paths converge at this gateway, it is evaluated in the reverse order of the opposite converging gateways. See troubleshooting for more information.
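The Join rules above amount to a small truth table over the incoming link flags, which can be sketched in Java (our own illustration, not product code):

```java
public class JoinGateway {
    enum Outcome { RUN, SKIP, FAULT }

    // Join semantics: all links true -> run the activity,
    // all links false -> skip it, mixed flags -> runtime exception.
    static Outcome evaluate(boolean[] linkFlags) {
        boolean anyTrue = false, anyFalse = false;
        for (boolean flag : linkFlags) {
            if (flag) anyTrue = true; else anyFalse = true;
        }
        if (anyTrue && anyFalse) return Outcome.FAULT; // mixed flags
        return anyTrue ? Outcome.RUN : Outcome.SKIP;
    }

    public static void main(String[] args) {
        System.out.println(evaluate(new boolean[] {true, true}));   // RUN
        System.out.println(evaluate(new boolean[] {false, false})); // SKIP
        System.out.println(evaluate(new boolean[] {true, false}));  // FAULT
    }
}
```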





(WID/WPS) What's new in WID 6.2

  • The Solution view is a new view that shows how our modules, mediation modules, and libraries relate to each other (integration solutions).
  • Included Generalized Flows (formerly called Cyclic Flows), Repeat Until loops, and the Collaboration Scope.
  • In previous releases, business object maps were only used in the assembly diagram. Now we can use a business object map as a step of our business process.
  • Support for the Services Gateway pattern.


Mediation Related Changes :
  • The Mediation Flow editor has improved support for protocol-specific headers, including CICS and IMS messages.
  • Mapping has been improved to support very large business objects.
  • Mediation flows and XSLT transformations can now be included in a business module rather than needing their own module; eliminating the hop between modules improves performance. Multiple mediation components can now be placed into a single module. Since a mediation flow is now treated like any other component, there is no longer a need for a specialized mediation module: a mediation can be part of any module.
  • A new construct called a mediation subflow can be used to encapsulate reusable mediation logic.


For more info :
http://www.ibm.com/developerworks/websphere/bpmjournal/0812_fasbinder6/0812_fasbinder6.html

(WID/WPS) Collaboration Scope

  • It is used to create enhanced dynamic workflows.
  • It is the preferred tool for the case paradigm.
  • A "case" is the product of a workflow, or a part of a workflow, used to handle knowledge-intensive business flows.
  • A case can be any number of things, including : evaluating a job application, ruling on an insurance claim.
  • Any business process can be handled as a case, but it is ideally suited to situations where the task owner uses knowledge and experience to :
    • Trigger a sub-process : a doctor orders an additional blood test.
    • Repeat a number of activities : a job applicant's second interview.
  • Case handling support is provided by combining dynamic features with the collaboration scope activity.
  • Business Process Choreographer Explorer and Business Space support running the process dynamically at runtime.
  • We can create business logic within our collaboration scope by adding basic activities.
  • We can't include structured activities in a collaboration scope.
  • Exit conditions allow us to automatically skip and repeat steps.
  • We set the administrators for the collaboration scope to specify the people allowed to perform manual skips and jumps at runtime (the default setting is everybody).
For more information :

(WID/WPS) Fault Links

When faults occur in our business process, fault handlers are typically engaged to deal with the fault. The generalized flow activity offers a simplified fault handling procedure. 
From any scope or basic activity (excluding the Throw and Rethrow activities), we can add one or more "fault links". 


A fault link in an activity is followed if the specified fault occurs while the activity is running. We can define "catch" fault links for various conditions, or we can create a "catch all" fault link, which will be followed should any fault occur that is not covered by a catch fault link. If multiple fault links are modeled for the same activity, best match decides which fault link will be followed. The fault catching rules are the same as for fault handlers.
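The best-match selection can be pictured as a lookup that prefers a fault-specific catch link and falls back to the catch-all (a hypothetical sketch; the class and method names are ours, not the product's):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class FaultLinkRouter {
    // fault name -> target activity of the matching catch fault link
    private final Map<String, String> catchLinks = new LinkedHashMap<>();
    private String catchAllTarget;

    void addCatch(String faultName, String target) { catchLinks.put(faultName, target); }
    void setCatchAll(String target) { catchAllTarget = target; }

    // Returns the target activity of the link to follow:
    // a fault-specific catch link wins; otherwise the catch-all
    // (null if the fault is unhandled at this level).
    String route(String faultName) {
        String target = catchLinks.get(faultName);
        return target != null ? target : catchAllTarget;
    }

    public static void main(String[] args) {
        FaultLinkRouter router = new FaultLinkRouter();
        router.addCatch("fault1", "HandleFault1");
        router.setCatchAll("HandleAnyFault");
        System.out.println(router.route("fault1")); // HandleFault1
        System.out.println(router.route("fault2")); // HandleAnyFault
    }
}
```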


Fault link considerations:


When we use fault links in our business process, consider the following: 
  • A fault link is activated for faults that occur within the source activity only. The evaluation of conditions of normal links is not part of the execution of the activity. 
  • If the source activity of the fault link is a scope activity, the fault handler of the scope activity is evaluated first when a fault occurs inside the scope. However, the fault handler can rethrow the fault. In this case, a fault link of the scope can catch the fault and can be navigated. 
  • If an activity is the source of multiple fault links, only one of the fault links can be navigated when a fault occurs. 
  • The target activity of the fault link will be executed normally. Compensate and rethrow activities in fault handlers cannot be the target of a fault link. 
  • When a fault contains fault data, a variable of the fault data type needs to be declared on the surrounding scope. The fault link must reference this variable so that the target activity of the fault link has access to the fault data. 
The following business process shows fault link usage along with a fault handler.


FaultBO used for the fault :


Interface used for Invoke activity :


When the invoke activity throws a fault, say fault1, the fault handler enclosing the activity will try to handle the fault first. When the fault is rethrown from the fault handler, the corresponding fault link path defined at the scope level is followed.


Code snippet for throwing a fault {Invoke activity implementation}:
private DataObject createFault(String severity, String messageID, String message, String shortText)
{
    BOFactory bof = (BOFactory) ServiceManager.INSTANCE.locateService("com/ibm/websphere/bo/BOFactory");
    DataObject faultBo = bof.create("http://FaultLinkModule", "FaultBO");
    faultBo.setString("severity", severity);
    faultBo.setString("messageID", messageID);
    faultBo.setString("message", message);
    faultBo.setString("shortText", shortText);
    return faultBo;
}


public Boolean doValidation(String input) {
    boolean value = true;
    if (<<Condition1>>)
    {
        value = false;
        throw new MyServiceBusinessException(createFault("1", "111", "This is Fault 1", "FAULT1"), "fault1");
        //throw new MyServiceBusinessException(new String("Fault 1 Thrown"), "fault1"); // Fault Name : fault1
    }
    else if (<<Condition2>>)
    {
        value = false;
        throw new MyServiceBusinessException(createFault("2", "222", "This is Fault 2", "FAULT2"), "fault2"); // Fault Name : fault2
        //throw new MyServiceBusinessException(new String("Fault 2 Thrown"), "fault2");
    }
    return value;
}

// Where MyServiceBusinessException is a user-defined class extending ServiceBusinessException.


Download the PI: https://docs.google.com/open?id=0ByotHxAO08TDQWtJbjBDaTNSdTJYaEZEV3ZaVFA1UQ

Note :


(WID/WPS) Querying Business Process & Human Task

LocalBusinessFlowManagerHome bfmHome = null;
InitialContext context = new InitialContext();
bfmHome = (LocalBusinessFlowManagerHome) context.lookup("local:ejb/com/ibm/bpe/api/BusinessFlowManagerHome");
LocalBusinessFlowManager flowManager = bfmHome.create();


String processTemplate = "MyProcess";
String selectClause = "DISTINCT ACTIVITY.AIID";
String whereClause = "PROCESS_TEMPLATE.NAME = '" + processTemplate+ "'";


QueryResultSet result = flowManager.query(selectClause, whereClause, (String) null,(Integer) null, (TimeZone) null);
System.out.println("\n > query(), result size: " + result.size());


if (result.size() == 0) 
{
System.exit(0);
}


while (result.next()) 
{
    AIID aiid = (AIID) result.getOID(1);
    System.out.println("AIID : " + aiid.toString());
    ActivityInstanceData aid = flowManager.getActivityInstance(aiid);
    System.out.println("App Name " + aid.getApplicationName());
    System.out.println("Display Name " + aid.getDisplayName());
    System.out.println("ProcessTemplate Name " + aid.getProcessTemplateName());


    // The code below is for human tasks only:

    flowManager.claim( aiid ); // Claim the human task
    flowManager.createWorkItem( aiid, WorkItemData.REASON_READER, "admin");
    flowManager.createWorkItem( aiid, WorkItemData.REASON_EDITOR, "admin");
    System.out.println( "Created Work Item ");


    // Transfer this EDITOR work item from admin to harish
    flowManager.transferWorkItem( aiid, WorkItemData.REASON_EDITOR, "admin", "harish");
    System.out.println( "Transferred Work Item");


    flowManager.deleteWorkItem( aiid, WorkItemData.REASON_READER, "admin");
    flowManager.deleteWorkItem( aiid, WorkItemData.REASON_EDITOR, "harish");
    System.out.println( "Deleted Work Item");
}


// To work with the input & output of a human task

ClientObjectWrapper input = flowManager.claim( aiid );
DataObject activityInput = null;
if ( input.getObject() != null && input.getObject() instanceof DataObject )
{
    activityInput = (DataObject) input.getObject();
    System.out.println("activity Input:in : " + activityInput.getString("in"));
}


// To complete the task
ActivityInstanceData activity = flowManager.getActivityInstance(aiid);
ClientObjectWrapper output = flowManager.createMessage(aiid, activity.getOutputMessageTypeName());
DataObject myMessage = null;
if ( output.getObject() != null && output.getObject() instanceof DataObject )
{
    myMessage = (DataObject) output.getObject();
    myMessage.setString("out", "output is this");
}
flowManager.complete(aiid, output);

(WID/WPS) Calling a Business Process using BPE API

To call a Microflow : 
The preferred interaction style can be Sync, Async, or Any.

Join Transaction should be 'True'.


Use the call method :
ClientObjectWrapper cowProcessOut = flowManager.call("MyProcess", cow);


For complete java code :
https://docs.google.com/file/d/0ByotHxAO08TDNnZKX3hfMm5TamlFTW53dk9TQlpMQQ/edit


To call a Long Running Process :
The preferred interaction style can only be 'Async'.
Join Transaction should be 'False'.
Use the initiate method : PIID piid = flowManager.initiate("MyProcess", cow);


//MyProcess is the process name.

(WID/WPS) Query Tables

Query tables support task and process list queries on data that is contained in the BPC database schema. This includes human task data and business process data that is managed by BPC, as well as external business data.


Predefined query tables
Predefined query tables provide access to the data in the BPC database. 
They are the query table representation of the corresponding predefined Business Process Choreographer database views, such as the TASK view or the PROCESS_INSTANCE view.


Query tables have the following properties : Name, Attributes, Authorization.


Predefined query tables containing instance data :
TASK 
PROCESS_INSTANCE 
ACTIVITY 
ACTIVITY_ATTRIBUTE 
ACTIVITY_SERVICE 
ESCALATION 
ESCALATION_CPROP 
ESCALATION_DESC 
PROCESS_ATTRIBUTE 
QUERY_PROPERTY 
TASK_CPROP 
TASK_DESC 


Predefined query tables containing template data :
APPLICATION_COMP
ESC_TEMPL
ESC_TEMPL_CPROP
ESC_TEMPL_DESC
PROCESS_TEMPLATE
PROCESS_TEMPL_ATTR
TASK_TEMPL
TASK_TEMPL_CPROP
TASK_TEMPL_DESC


Supplemental query tables
Supplemental query tables in BPC expose business data that is not managed by BPC to the query table API. With supplemental query tables, this external data can be used together with data from the predefined query tables when retrieving business process instance information or human task information.


Composite query tables
Composite query tables in BPC comprise predefined query tables and supplemental query tables. They combine data from existing tables or views. Use a composite query table to retrieve the information for a process instance list or task list, such as My To Dos.


Sample code to access all the predefined query tables with their columns and values :


String predefinedTables[] = {

"TASK",
"PROCESS_INSTANCE",
"ACTIVITY",
"ACTIVITY_ATTRIBUTE",
"ACTIVITY_SERVICE",
"ESCALATION",
"ESCALATION_CPROP",
"ESCALATION_DESC",
"PROCESS_ATTRIBUTE",
"QUERY_PROPERTY",
"TASK_CPROP",
"TASK_DESC",
"APPLICATION_COMP",
"ESC_TEMPL",
"ESC_TEMPL_CPROP",
"ESC_TEMPL_DESC",
"PROCESS_TEMPLATE",
"PROCESS_TEMPL_ATTR",
"TASK_TEMPL",
"TASK_TEMPL_CPROP",
"TASK_TEMPL_DESC"
};

Context ctx = new InitialContext();
com.ibm.bpe.api.LocalBusinessFlowManagerHome bfmHome = (LocalBusinessFlowManagerHome) ctx.lookup("local:ejb/com/ibm/bpe/api/BusinessFlowManagerHome");
BusinessFlowManagerService bfmService = bfmHome.create();
for (int counter = 0; counter < predefinedTables.length; counter++) {
    EntityResultSet ers = bfmService.queryEntities(predefinedTables[counter], null, null, null);
    if (ers != null) {
        com.ibm.bpe.api.EntityInfo ei = ers.getEntityInfo();
        java.util.List aiList = ei.getAttributeInfo(); // Gives the list of columns
        System.out.println("\n\n");
        System.out.println("Query Table Name : " + predefinedTables[counter]);
        for (int i = 0; i < aiList.size(); i++) {
            com.ibm.bpe.api.AttributeInfo ai = (com.ibm.bpe.api.AttributeInfo) aiList.get(i);
            System.out.print(ai.getName() + "\t\t\t\t");
        }
        System.out.println();
        for (int i = 0; i < ers.getEntities().size(); i++) { // Loop through all the rows.
            com.ibm.bpe.api.Entity entity = (com.ibm.bpe.api.Entity) ers.getEntities().get(i);
            System.out.println();
            for (int o = 0; o < aiList.size(); o++) {
                com.ibm.bpe.api.AttributeInfo ai = (com.ibm.bpe.api.AttributeInfo) aiList.get(o);
                System.out.print("\t\t\t\t");
                if (entity.getAttributeValue(ai.getName()) != null) {
                    if (!ai.isArray()) {
                        if (ai.getType() == com.ibm.bpe.api.AttributeType.TIMESTAMP) {
                            System.out.print(((java.util.Calendar) entity.getAttributeValue(ai.getName())).getTime());
                        } else {
                            System.out.print(entity.getAttributeValue(ai.getName()));
                        }
                    } else {
                        System.out.print("arrays not supported");
                    }
                } else {
                    System.out.print("n/a");
                }
                System.out.print("\t\t\t\t");
            }
            System.out.println("");
        }
    }
}


For more info : http://publib.boulder.ibm.com/infocenter/dmndhelp/v6r2mx/index.jsp?topic=/com.ibm.websphere.bpc.620.doc/doc/bpc/c6bpel_querytables.html