
Friday, 14 November 2014

It's about time the Web became more event-driven. We have had AJAX for many years enabling events between server and browser, but on the backend we are still polling data. With the explosion of public APIs from SaaS, social media and infrastructure apps, more and more applications are written by composing web APIs. Developers often need to call an API to get data updates, only to find that nothing has changed. Streaming APIs provide a more elegant solution than polling, allowing developers to subscribe to the changes they are interested in.

Salesforce recently announced their new Streaming API, which, given a SOQL query, associates it with a topic that applications can subscribe to. This means any application subscribed to that topic will receive updates as they happen, in real time.

Streaming, Schmeaming. So What?

Realtime data over the web sounds good, but how much of an improvement is it over polling? Well, for starters, polling isn't very elegant: repeatedly asking an application whether anything has changed is hugely inefficient. Secondly, polling means you run into rate limit restrictions. These restrictions exist to reduce load on the infrastructure, and rate limiting on SaaS applications is often expressed as a number of calls per day, which means you usually cannot get close to real time without paying a lot more. Finally, the REST paradigm (the one used by 90% of public APIs) is not well suited to realtime; streaming APIs work around these limitations to allow data to be pushed to the client in the same way AJAX does, i.e. via CometD and WebSockets.
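To make the contrast concrete, here is a minimal Java sketch of the two styles. The PollingClient, StreamingClient and ChangeListener interfaces are hypothetical stand-ins rather than any real Salesforce or Mule API; the point is simply that the polling loop spends requests (and rate limit quota) even when nothing has changed, while the streaming subscriber only does work when an event is pushed to it:

import java.util.List;
import java.util.concurrent.TimeUnit;

// Hypothetical interfaces, used only to illustrate the two interaction styles.
interface PollingClient {
    List<String> fetchChangesSince(long timestamp); // often returns an empty list
}

interface ChangeListener {
    void onChange(String event);
}

interface StreamingClient {
    void subscribe(String topic, ChangeListener listener); // server pushes events
}

public class PollingVsStreaming {

    // Polling: every iteration costs a request and counts against rate limits,
    // even when fetchChangesSince() comes back empty.
    static void pollForever(PollingClient client) throws InterruptedException {
        long lastSeen = System.currentTimeMillis();
        while (true) {
            for (String change : client.fetchChangesSince(lastSeen)) {
                System.out.println("change: " + change);
            }
            lastSeen = System.currentTimeMillis();
            TimeUnit.SECONDS.sleep(60); // the poll interval caps how close to real time you can get
        }
    }

    // Streaming: subscribe once, and the callback fires only when data actually changes.
    static void subscribeOnce(StreamingClient client) {
        client.subscribe("/SMSNotificationStream", event -> System.out.println("change: " + event));
    }
}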
If you think back to the days when the term AJAX was coined, many people didn't see the true value of what the technology offered until the first Web 2.0 web sites started to get attention. To the less informed, AJAX seemed to just remove the need for the user to refresh the page in their browser. I believe streaming APIs are a major step towards an event-driven Web.

The SalesForce Streaming API

Salesforce is not the first company to announce a streaming API; Twitter, Facebook and others started to emerge with streaming APIs almost a year ago. What's different about Salesforce is that they are the shining light for most other SaaS companies (after all, they pretty much defined the category), and now that they've started the ball rolling, others will follow suit. Getting our application data in real time isn't something that we've had before; it will change the dynamics of the applications we build.
Of course, there is a big win for Salesforce with their streaming API: with over 50% of all traffic going through their APIs, that's a lot of applications nagging their infrastructure asking for updated information. The number of requests that return no data could be as high as 60%. That's a lot of inefficient processing. The Salesforce platform can be greatly optimised if people start using the streaming APIs, since a single query result set could be filtered to serve hundreds or thousands of customers.

The Streaming API and Mule

Mule is the first integration platform to support the SalesForce streaming API, and to test it out we created a simple demo application with Salesforce Chatter and Twilio running on our cloud platform, Mule iON.

The demo is pretty simple: when you post to someone's Chatter wall, hashtags can be used to perform actions. In this case, if you post to the wall and include the #now hashtag, the recipient will be notified of your message through SMS.
To make this possible we created a Salesforce custom object (SMSNotification) that includes the user's mobile number to send SMS messages to. We created a trigger to update this custom object with FeedItems posted on the user's Chatter wall. We then created a topic called '/SMSNotificationStream' that selects everything in the SMSNotification object. So when a new post is made on a wall, the SMSNotification object gets updated with the item, and the mobile number associated with it is used to send the SMS.
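For context, here is a rough Java sketch of what that wiring boils down to. The object and field names (SMSNotification__c, To__c, Text__c) follow the ones referenced in the Mule config below, but the query string and the handler are illustrative assumptions, not the actual demo code:

import java.util.Map;

public class SmsNotificationStreamSketch {

    // The topic is essentially a named SOQL query; something along these lines
    // would select each SMSNotification record as it is created or updated.
    static final String TOPIC_NAME = "/SMSNotificationStream";
    static final String TOPIC_QUERY =
            "SELECT Id, To__c, Text__c FROM SMSNotification__c"; // assumed API names

    // Each streamed event arrives as a map of field names to values; the Mule flow
    // below reads the same two keys to address and populate the SMS.
    static void handleStreamEvent(Map<String, Object> record) {
        String phoneNumber = (String) record.get("To__c");
        String messageBody = (String) record.get("Text__c");
        System.out.printf("would send SMS to %s: %s%n", phoneNumber, messageBody);
    }
}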
One of the nice things about this demo is that it is super easy; only a few lines of Mule XML are needed to make it work:
<!-- configure the cloud connectors -->
<sfdc:config name="mySF" username="${sf.user}" password="${sf.password}" securityToken="${sf.securityToken}"/>
 
<twilio:config name="myTwilio" accountSid="${twilio.sid}" authToken="${twilio.authToken}"/>
 
<flow name="sendSmsNotifications">
<sfdc:subscribe-topic config-ref="mySF" topic="/SMSNotificationStream"/>
<regex-filter pattern="\#now"/>
 
<twilio:send-sms-message config-ref="myTwilio"
from="+14155555555"
to="#[map-payload:To__c]"
body="#[map-payload:Text__c]"/>
</flow>
  • You'll see that the flow starts with 'sfdc:subscribe-topic'. This initiates a streaming connection to Salesforce and will trigger the flow when new data arrives.
  • The filter is used to detect hashtags. Here we only look for #now, but we could look for different ones and react to each separately.
  • The Twilio 'send-sms-message' operation extracts the destination number and the message text from the payload received from Salesforce.
  • This configuration can be dropped into Mule iON, where it will happily sit listening and reacting to Chatter, like a message switchboard in the cloud.
  • This all happens in realtime!
The screenshot below shows what a wall message looks like in Chatter, and the SMS message being received via Google Voice.

Let me at it!

Right now the Salesforce Streaming API is in beta and the streaming connector for Mule will not be officially released until Mule 3.2 in a few weeks.  If you want to try it out before that please let me know (tweet or email ross at mulesoft).
Follow: @rossmason, @mulejockey

Source: Mule Blog
With the rapidly increasing adoption of SaaS, integration Platform as a Service (iPaaS) has become the preferred way to connect SaaS applications. With the explosion of Open APIs on the Web, connecting APIs together is becoming the norm for application development. However, typical application containers and even application PaaS offerings don't help in this new era, where applications compose APIs together from many different sources.
In this world, composing APIs together is the modus operandi. An application built on an iPaaS is focused on connecting two or more systems together via APIs in order to synchronize data between them. However, we've taken this new breed of applications much further with the concept of Integration Apps.
Integration Apps exist in a world where everything needs to connect. Open APIs define thousands of new endpoints for exchanging data and leveraging new functionality, and Integration Apps are optimized for working with many different data sources, focusing on composition rather than just coding.
To explain Integration Apps, let's take a look at the anatomy of a traditional Web application.
This will be familiar to any developer; it's the traditional 3-tier application model that defines how most applications to date have been built. There are a few problems with this architecture in today's API-centric world:
  • The database has traditionally been the source of truth, but now applications work with many data sources. Increasingly, apps need to read from and write to APIs from different 3rd-party providers.
  • The App Server is just an HTTP container.  It doesn’t provide much in the way of capabilities other than hosting code and mapping HTTP requests to an application.
  • Custom logic is a big bucket where the application logic resides. This is where data access and application code is hosted.  Developers often use open source frameworks such as Ruby on Rails and the Spring Framework to make creating applications easier. But these frameworks don’t provide much for dealing with connecting to lots of data sources or working with different data formats.
  • Traditional web apps are user-focused; however, new applications need to cater for machines too. Increasingly, people think about building applications API-first, that is, creating an API for the application that can be consumed by JavaScript, native mobile applications and other applications.
When people move their traditional applications to the cloud, hosting them on Platform as a Service offerings such as Heroku, CloudBees or OpenStack, the picture looks very similar.



Even when running on PaaS, applications don’t change. This is deliberate since PaaS has focused on getting existing applications into the cloud. However, this does nothing for dealing with the explosion of Open APIs.

Introducing Integration Apps

In contrast, Integration Apps embrace the need to connect to APIs, work with multiple data sources and data formats, and mediate between different applications. The iPaaS platform provides completely new types of services for dealing with interactions with other remote systems, including monitoring, tracking, governance and mediation. This is needed because Integration Apps take a message-driven approach to connecting APIs. This means that rather than making only synchronous calls in code, interactions are defined through messages being passed between components. This will be familiar to developers who know newer languages and platforms such as Node.js or Scala, where messages are passed to listeners or between Actors.
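As a generic illustration of that message-driven style (a plain Java sketch, not Mule's actual internals): instead of one component calling another synchronously, a producer publishes messages onto a channel and a listener reacts whenever one arrives:

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class MessageDrivenSketch {

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> channel = new LinkedBlockingQueue<>();

        // Listener component: reacts to messages as they arrive rather than being
        // invoked directly by the producer.
        Thread listener = new Thread(() -> {
            try {
                while (true) {
                    String message = channel.take(); // blocks until a message is available
                    System.out.println("processing: " + message);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        listener.setDaemon(true);
        listener.start();

        // Producer component: just publishes messages; it neither knows nor cares
        // who consumes them, or when.
        channel.offer("new order received");
        channel.offer("contact updated in CRM");

        Thread.sleep(500); // give the daemon listener a moment to drain the queue
    }
}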

With Integration Apps there are more capabilities built in, so the developer doesn't have to do the heavy lifting.
Connectivity is focused on working with lots of different APIs; this layer manages the security, session management and monitoring of connections.
The 'Custom Logic' is joined by new capabilities for composing different APIs together using Orchestration. Data Mapping is needed since the data exchanged via APIs comes in different formats, so being able to work with XML, JSON, RSS, ATOM, CSV and legacy formats is really important. There is also more focus on Error handling, since interactions between different systems need to be clear and visible.
iPaaS offers the same services as PaaS, such as database and storage. But because these applications are message-driven, there is a whole new set of platform services that help you track information between systems, set up alerts and handle errors with message replay.
Integration Apps don't only connect with Open APIs on the Web; often connecting with on-premise applications and data sources is needed too, so the notion of a data gateway is important.
One element missing from the above picture is the User Interface. This is because, increasingly, applications are being built to serve machines rather than people, making a UI optional. Synchronizing data between a few applications doesn't need human interaction; nor do many automated business processes. However, Integration Apps natively support publishing ReST APIs so that other applications and mobile devices can interact with an Integration App.

Integration Apps in the wild

MuleSoft already offers public Integration Apps that synchronise data between SaaS applications or between SaaS and on-premise applications like SAP. The most well known Integration App out in the wild is dataloader.io. This is an Integration App that allows users to upload data into Salesforce, a very common task. The dataloader.io iApp is the only cloud-based data loading solution for Salesforce and has become very popular, taking the number 1 spot on the Salesforce AppExchange. This Integration App looks very much like the diagram above: it has a JavaScript UI that talks to the app via its ReST API, and the Integration App uses a mix of Orchestration, Data Mapping and custom logic to allow users to create jobs for loading data into Salesforce.
Without iPaaS and Integration App capabilities, this application would have taken months to build from the ground up with all the monitoring, management, error handling and connectivity; instead we built it in 4 weeks. And now it serves thousands of Salesforce users per day.

Faster, more Productive

The combination of iPaaS and Integration Apps is very powerful, and enables a new type of application that responds to changes in real time. Pushing more services and functionality into the platform drives a configuration-over-coding approach, which means developers can focus on composing their application rather than coding everything from the ground up. If necessary, the developer can insert custom logic into their app, but for most scenarios it isn't necessary, with orchestration and the data mapper providing the tools to work with APIs and different data formats.
Developers who take advantage of these new capabilities will create richer, more engaging applications. This new breed of applications will further fuel the Open API explosion, adding new APIs that can be consumed by people and machines. Open APIs power a world where everything needs to connect; it's time for a new platform that embraces this.
Follow: @CloudHub, @MuleSoft, @rossmason

Source: Mulesoft Blog

Sunday, 24 August 2014

This tutorial is the first in a series of blog posts that explain how to integrate Mule and Social Media.
Today's post will focus on connecting to Twitter and sending a tweet (if you don't know what that is, read this). Subsequent tutorials will cover:

Mule Server and Studio versions
For this integration, I am using the latest version of Mule ESB Community Edition with Mule Studio (1.0.0). This sample can also be run in standalone Mule ESB Community Edition and Mule ESB Enterprise Editions.
Mule Studio comes with a built-in Twitter connector that we can use straight away. Let's build a new Twitter flow that looks like the one below. We will create an HTTP inbound endpoint that forwards the request to the Twitter connector. Finally, the Twitter connector returns a twitter4j.StatusJSONImpl object, which will be transformed using an expression-transformer to display the response object's string representation.

Let’s build the sample now.
  • Create a new Mule flow and name it “twitter”.
  • Drag and drop a new HTTP inbound endpoint onto "twitterFlow1". Double-click the HTTP icon to bring up the properties dialog. Specify "addtweet" for the Path field.
  • Click on the "Global Elements" tab and click Create to bring up the Global Type dialog box. Select "Twitter" from the "Cloud Connectors" section. Leave the default values and click OK. We need to configure a Twitter account to generate the necessary security tokens; I will explain this process in the next section.
  • Drag and drop a Twitter connector next to the HTTP inbound endpoint. Double-click the Twitter icon to bring up the properties dialog. Select the Twitter connector we created in the previous step for the "Config Reference" field. Select "Update status" for the Operation field. Finally, specify "#[header:INBOUND:mymessage]" as the Status. This expression extracts the "mymessage" parameter value from the HTTP request.
  • Finally, drag and drop an "expression transformer" next to the "Twitter" connector. Double-click the Expression icon to bring up the properties dialog. Specify the evaluator as "groovy" and the expression as "payload.toString()". More on expression transformers can be found in the Mule 3 documentation.
Here is the completed flow. I have erased my generated keys.
<?xml version="1.0" encoding="UTF-8"?>
 
<mule xmlns="http://www.mulesoft.org/schema/mule/core" xmlns:http="http://www.mulesoft.org/schema/mule/http" xmlns:twitter="http://www.mulesoft.org/schema/mule/twitter" xmlns:doc="http://www.mulesoft.org/schema/mule/documentation" xmlns:spring="http://www.springframework.org/schema/beans" xmlns:core="http://www.mulesoft.org/schema/mule/core" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" version="CE-3.2.1" xsi:schemaLocation="
http://www.mulesoft.org/schema/mule/http http://www.mulesoft.org/schema/mule/http/current/mule-http.xsd
http://www.mulesoft.org/schema/mule/twitter http://www.mulesoft.org/schema/mule/twitter/2.3/mule-twitter.xsd
http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
http://www.mulesoft.org/schema/mule/core http://www.mulesoft.org/schema/mule/core/current/mule.xsd ">
<twitter:config name="Amjad" accessKey="aaaa" accessSecret="bbbb" consumerKey="cccc" consumerSecret="dddd" useSSL="false" doc:name="Twitter"/>
<flow name="twitterFlow1" doc:name="twitterFlow1">
<http:inbound-endpoint exchange-pattern="request-response" host="localhost" port="8081" path="addtweet" doc:name="HTTP"/>
<twitter:update-status config-ref="Amjad" status="#[header:INBOUND:mymessage]" doc:name="Twitter"/>
<expression-transformer evaluator="groovy" expression="payload.toString()" doc:name="Expression"/>
</flow>
</mule>
Pretty simple, right? To explain in more detail:
<twitter:config name="Amjad" accessKey="aaaa" accessSecret="bbbb" consumerKey="cccc" consumerSecret="dddd" useSSL="false" doc:name="Twitter"/>
In MuleStudio this syntax will be indicated as an error because MuleStudio is still trying to use the older version of the XSD.
Anyway, what do all these attributes mean?
The "consumerKey" and "consumerSecret", along with the "accessKey" and "accessSecret" (the OAuth token and token secret), are in fact keys that are generated by the Twitter application. (More on that in a minute.)
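To see where those values end up, here is roughly the equivalent in plain twitter4j (the library behind the connector, as the StatusJSONImpl return type above suggests). The placeholder values mirror the config snippet; treat this as an illustrative sketch rather than the connector's actual source:

import twitter4j.Status;
import twitter4j.Twitter;
import twitter4j.TwitterException;
import twitter4j.TwitterFactory;
import twitter4j.conf.ConfigurationBuilder;

public class TweetDirectly {
    public static void main(String[] args) throws TwitterException {
        // The same four credentials as in the <twitter:config> element above.
        ConfigurationBuilder cb = new ConfigurationBuilder()
                .setOAuthConsumerKey("cccc")
                .setOAuthConsumerSecret("dddd")
                .setOAuthAccessToken("aaaa")        // accessKey in the Mule config
                .setOAuthAccessTokenSecret("bbbb"); // accessSecret in the Mule config

        Twitter twitter = new TwitterFactory(cb.build()).getInstance();

        // The equivalent of the connector's "Update status" operation.
        Status status = twitter.updateStatus("hello");
        System.out.println("Posted: " + status.getText());
    }
}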
Configure Twitter
Before you are able to start using the Twitter integration, you will have to do some configuration in your Twitter account.
Go to the following URL: https://dev.twitter.com and sign in with your Twitter username and password.
First, you should add an application:

Fill in all the fields in the screen and agree to the “Terms and Conditions.”






Once your application has been generated, you can choose from a tab bar to configure your application in more detail:

You will see that the consumer key and consumer secret are already generated but the access level is Read-only. If you want to read more about the Twitter permission model you can click on the link.

To authenticate your application with your Twitter account you will have to generate authentication keys. This is not done by default:

Click the button to create your access tokens and the following screen will appear:

By default the access level is Read-only. If you need access to direct messages you will have to update your Access level. This can be done in the following ways:
Consumer keys:
Go to the Settings tab and adjust the access level:

OAuth keys:
You should recreate your access token (after changing the access level in the Settings tab) if you also want to update the access level of your OAuth keys.

Running the application
Right-click on twitter.mflow and select Run As Mule Application. Once the application has started successfully, you can test it with the following URL: http://localhost:8081/addtweet?mymessage=hello. Check your Twitter account to see a new tweet with the message "hello".
A successful Twitter update will result in the following response:
StatusJSONImpl{createdAt=Sun Apr 08 16:40:08 BST 2012, id=189014773306888192, text='hello',
source='<a href="http://www.mulesoft.com" rel="nofollow">Amjad Mulesoft</a>', isTruncated=false,
inReplyToStatusId=-1, inReplyToUserId=-1, isFavorited=false, inReplyToScreenName='null',
geoLocation=null, place=null, retweetCount=0, wasRetweetedByMe=false, contributors=null,
annotations=null, retweetedStatus=null, userMentionEntities=null, urlEntities=null,
hashtagEntities=null...}}
If you ever try to tweet the same message twice you will get the following response from Twitter:
StatusJSONImpl{createdAt=null, id=-1, text='null', source='null', isTruncated=false,
inReplyToStatusId=-1, inReplyToUserId=-1, isFavorited=false, inReplyToScreenName='null',
geoLocation=null, place=null, retweetCount=-1, wasRetweetedByMe=false, contributors=null,
annotations=null, retweetedStatus=null, userMentionEntities=null, urlEntities=null,
hashtagEntities=null, user=null}

And that’s it! Have fun!
This is a guest post from Mule community member Tom Stroobants. Thank you Tom! (we’ll be sending you a cool T-shirt).  If anyone else in the Mule community would like to write a guest post, please email us.
This Wednesday, April 25th, we are excited to join the folks at THINKstrategies for The Cloud Analytics Summit. This is shaping up to be a great event, jam-packed with best practice sessions and opportunities for discussion.
One of the reasons why we are partnering with THINKstrategies is to help companies see how an integration-platform-as-a-service (iPaaS) can accelerate their Big Data and cloud analytics projects.
The integration challenges around Big Data and cloud analytics tend to be twofold. First, it’s important to have your data in a central place, and second, it’s extremely important to collect and analyze that data in real time.

Ask yourself, how helpful would it be to have analytics from only 50% of your data sources? Or how about 1-2 month old analytics about your business? By the time you collected information from all the data sources and crunched the numbers, your market opportunity may have passed you by. Today’s data sources are more distributed, and as more companies look to SaaS offerings like Workday, Box, and Salesforce.com for their core business applications, their big data and integration challenges are only going to get bigger.
We are participating in a panel discussion at the conference to explore this topic and more! We hope that you’ll join us for the discussion and stop by to see us at the expo hall. Here are the details:

About the Conference:

April 25, 2012 | Mountain View, CA
Computer History Museum
Website: http://cloudanalyticssummit.com/

Working with Databases (JDBC) in Mule Studio

In this blog post, I'll give you some background information about JDBC, explain what Mule ESB and Studio do with JDBC, and demonstrate how you can use it in a simple example.

A little reference for JDBC:

JDBC, which stands for Java Database Connectivity, is basically an API that enables users to execute operations over a Data Source using the Java programming language. This API allows you to connect to almost any Data Source system, from relational databases to spreadsheets and flat files, and, using the proper SQL syntax, you can perform queries, updates and deletes, or even execute stored procedures.
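To ground that description, here is the kind of plain-JDBC code that Mule will later hide from you. It assumes the MySQL driver jar is on the classpath, and it reuses the URL, credentials and query from the example flow at the end of this post; treat the table and column access as placeholders:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class PlainJdbcExample {
    public static void main(String[] args) throws SQLException {
        // Same URL and credentials as the MySQL datasource configured later in this post.
        String url = "jdbc:mysql://localhost:3306/StudioQA";
        try (Connection conn = DriverManager.getConnection(url, "root", "");
             PreparedStatement stmt = conn.prepareStatement("SELECT * FROM Users");
             ResultSet rs = stmt.executeQuery()) {
            while (rs.next()) {
                System.out.println(rs.getString(1)); // print the first column of each row
            }
        }
    }
}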

What Mule ESB and Studio do with JDBC

Now let's see how this is architected in Mule ESB. What Mule ESB does is make this Java code layer transparent to you. Simply importing a jar file with the driver for a specific data source (MySQL, Oracle, etc.) and writing some easy XML code will enable you to connect to a Data Source and manipulate the data in it. Studio comes with a friendly User Interface, which makes Mule XML code very easy to create and edit. The image below gives you a better idea of how all this works:
At the very end of the line is your data source, which can be fed by any other application. Next you have the JDBC driver. As we mentioned earlier, this is the Java API implementation provided by the Data Source vendor that allows Mule to connect to the Data Source and manipulate the data in it. Next comes our Mule ESB instance, which is the service that executes the Mule XML code. And finally we have Mule Studio and you.
Studio gives you the framework to easily create the XML code you need and will allow you to test it by executing the code in an embedded Mule ESB instance. So by using Studio, the other layers will be transparent to you.

My kingdom for a Driver!

Before configuring a JDBC connection, the first thing we need is the Driver. If you want to keep your kingdom, you should first go to the vendor's website and look for a JDBC driver file, which should be in jar format. Keep in mind that some vendors, like Oracle, may require a license to use the driver. NOTE: On www.jarvana.com you can look for the Driver class you need and download the jar file from there. In the example explained below we are going to work with a MySQL database. You can download the Driver file from here (registration required) or look for the connector class in jarvana.

Putting hands to work

Open a new Mule Project in Studio, and then follow these steps to get your flow working: a. Import the driver, b. Create a Datasource, c. Create a Connector that uses our Datasource, and finally d. Create a simple flow that uses our connector.

a. Import the Driver

Once you have the jar file, the next steps are very simple:
  1. In the Package Explorer, right-click the Project folder (in this case "jdbcprj").
  2. Look in the menu for Build Path > Add External Archives…
  3. Look for the jar file in your hard drive and click Open.
Now you should see in the Package Explorer that the jar file is present under "Referenced Libraries." This will allow you to create an instance of the driver class you will need.

b. Creating a Datasource

Mule and Studio come with some predefined configuration elements for the most common datasources: Derby, MySQL, Oracle and PostgreSQL. If you want to use another datasource, you can do so by creating a bean object with the configuration and using that bean as the Datasource. Now let's create a MySQL datasource for our connector:
  1. Go to the Global Elements tab and click on the Create button, which will display a new window.
  2. Look for Data Sources > MySQL Data Source and click the OK button.
  3. In the Data Source configuration window, only 3 things are needed to make this work: the database name in the URL, the User and the Password. Enter those attributes according to your database configuration and click OK.

c. Create the Connector

Now that we have the datasource with its driver we need a Connector.
  1. From the Global Elements tab, click on Create and look for  Connector > Database (JDBC). Then click OK.
  2. The only thing that we need to do here is tell the connector which datasource to use. To do this click on the ‘Database Specific’ drop-down list and look for our datasource created in the previous step. Then click OK.
Optionally, you can go to the Queries tab now and create the queries or SQL statements that you want. If you don’t do this now you will have to do it when configuring an endpoint.

d. Creating a flow

Now we have half of the work done. To use our Datasource in a flow, we need an inbound or an outbound endpoint, depending on what we want to do: use a JDBC inbound endpoint if you want to use information from a database to feed your flow and do some processing, or an outbound endpoint if you want to write the information processed in your flow to a database. In either case you need to do this:
  1. In the Studio Message Flow view, add a JDBC endpoint (either inbound or outbound) to the flow, and open the configuration window by double-clicking on the endpoint. Note: to add the endpoint you just need to look for it in the palette and drag and drop it onto the canvas. If you drop it on the canvas outside of any flow, a flow scope will be created and your endpoint will be an inbound endpoint; if you drop it into a flow after another element, you will have an outbound endpoint. Studio performs these conversions automatically, as flows should always start with inbound endpoints:
  2. Go to the Reference tab and, in the connector drop-down list, look for the JDBC connector created in step c. We are telling the endpoint how to connect to the data source by specifying a reference to a connector. The connector configuration is global, so it can be reused in as many endpoints as you want.
  3. Go to the General tab and select the Query Key you want to use in this endpoint. The JDBC endpoint can execute one SQL statement. If you have not created the query in the connector, you can do it now by going to the Queries tab (screenshots: the Queries tab with a new query, and the query selected in the Query Key drop-down list).
Following these steps, you are ready to feed your flow by running queries against your database, create new database records with the information processed in your flow, or execute any statement you need over your data source. Here is an example flow. To use it, just copy the configuration, paste it into the XML Configuration tab and save the project. You should see a flow like this in the Message Flow view:

<?xml version="1.0" encoding="UTF-8"?>
 
<mule xmlns="http://www.mulesoft.org/schema/mule/core" xmlns:mulexml="http://www.mulesoft.org/schema/mule/xml" xmlns:file="http://www.mulesoft.org/schema/mule/file" xmlns:jdbc="http://www.mulesoft.org/schema/mule/jdbc" xmlns:doc="http://www.mulesoft.org/schema/mule/documentation" xmlns:spring="http://www.springframework.org/schema/beans" xmlns:core="http://www.mulesoft.org/schema/mule/core" xmlns:http="http://www.mulesoft.org/schema/mule/http" xmlns:scripting="http://www.mulesoft.org/schema/mule/scripting" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" version="CE-3.2.1" xsi:schemaLocation="
http://www.mulesoft.org/schema/mule/xml http://www.mulesoft.org/schema/mule/xml/current/mule-xml.xsd
http://www.mulesoft.org/schema/mule/file http://www.mulesoft.org/schema/mule/file/current/mule-file.xsd
http://www.mulesoft.org/schema/mule/jdbc http://www.mulesoft.org/schema/mule/jdbc/current/mule-jdbc.xsd
http://www.mulesoft.org/schema/mule/core http://www.mulesoft.org/schema/mule/core/current/mule.xsd
http://www.mulesoft.org/schema/mule/http http://www.mulesoft.org/schema/mule/http/3.1/mule-http.xsd
http://www.mulesoft.org/schema/mule/scripting http://www.mulesoft.org/schema/mule/scripting/3.1/mule-scripting.xsd ">
<jdbc:connector name="jdbcConnector" dataSource-ref="MySQL_Data_Source" validateConnections="false" transactionPerMessage="true" queryTimeout="10" pollingFrequency="10000" doc:name="JDBC">
<jdbc:query key="Users" value="SELECT * FROM Users"/>
</jdbc:connector>
<jdbc:mysql-data-source name="MySQL_Data_Source" user="root" password="" url="jdbc:mysql://localhost:3306/StudioQA" transactionIsolation="UNSPECIFIED" doc:name="MySQL Data Source"/>
<flow name="flows1Flow1" doc:name="flows1Flow1">
<jdbc:inbound-endpoint queryKey="Users" connector-ref="jdbcConnector" doc:name="JDBC"/>
<mulexml:object-to-xml-transformer doc:name="Object-to-Xml"/>
<file:outbound-endpoint path="/Users/myUser/myFolder" doc:name="File"/>
</flow>
</mule>

Saturday, 23 August 2014


Salesforce Bulk API Integration using Mule ESB

May 9, 2014 by Harika Guniganti
Filed under: ESB, SOA 
Salesforce CRM has been widely used in organizations as part of managing their customer interactions. However, with the cloud delivery model it has become difficult and expensive for organizations to custom-code the integration of Salesforce with their existing on-premise systems.
Many organizations need this integration done in a cost- and time-effective way to automate their business processes. As a solution to this problem, WHISHWORKS has a way to integrate a company's existing systems with Salesforce using the modern, lightweight and low-cost Mule Enterprise Service Bus.

WHISHWORKS and Salesforce Bulk API

WHISHWORKS has extensive experience using the Mule ESB Anypoint Salesforce Connector to connect directly with Salesforce APIs. This connector gives users access to full Salesforce functionality with seamless Salesforce integration.
In a business scenario where there were huge volumes of data to be migrated to Salesforce from a company's multiple existing systems, WHISHWORKS implemented an effective way of integrating with the Salesforce Bulk API.
Architecture Diagram
The Mule ESB flows have been designed so that they can be reused for both initial and operational loads. Data transformation is also provided during the import process to give a standardized, consolidated form of the data in Salesforce.

How did we tune Salesforce Bulk API

Bearing in mind the various constraints the Salesforce Bulk API has, WHISHWORKS tuned the batch uploads to Salesforce in a way that enables seamless business automation between Salesforce and the existing database systems. Here is how:
  • Threading Profile Settings: Salesforce allows a maximum of 25 concurrent threads at a time. To restrict the concurrent calls to not more than 25, threading profiles have been created at the flow and the VM endpoint level in which the Salesforce calls reside.

  • Salesforce Batch Tuning: Each Salesforce batch being created is tuned so that the data size of the batch does not exceed 10MB. Tuning parameters have been configured to change the number of records each batch holds depending on the size of the entity (a rough sizing sketch follows after this list).

  • Time Delay between each Salesforce Call: Loading huge volumes of data to Salesforce in concurrently running batches can cause Salesforce to take longer to process them. To avoid overloading Salesforce, a time delay has been introduced between concurrent calls.

  • Parallel Garbage Collection: To utilise the JVM memory efficiently while importing the data, parallel garbage collection has been used to clean up Java objects that are no longer strongly referenced.
All this was done on Mule ESB!
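As a rough illustration of the batch-sizing arithmetic (not WHISHWORKS' actual implementation), the records-per-batch figure can be derived from an estimated serialized record size so that each batch stays under the 10MB ceiling; the 2KB average record size below is just an assumed example:

public class BulkBatchSizing {

    static final long MAX_BATCH_BYTES = 10L * 1024 * 1024; // 10MB Bulk API batch ceiling
    static final int MAX_BATCH_RECORDS = 10_000;           // Bulk API also caps records per batch
    static final int MAX_CONCURRENT_CALLS = 25;            // concurrency limit noted above

    // Records per batch for a given average serialized record size, in bytes.
    static int recordsPerBatch(long avgRecordSizeBytes) {
        long bySize = MAX_BATCH_BYTES / Math.max(1, avgRecordSizeBytes);
        return (int) Math.min(bySize, MAX_BATCH_RECORDS);
    }

    public static void main(String[] args) {
        long avgRecordSize = 2 * 1024; // e.g. an entity whose rows serialize to roughly 2KB
        int perBatch = recordsPerBatch(avgRecordSize);
        int batchesForMillionRows = (int) Math.ceil(1_000_000.0 / perBatch);
        int waves = (int) Math.ceil((double) batchesForMillionRows / MAX_CONCURRENT_CALLS);
        System.out.printf("records per batch: %d, batches for 1M rows: %d, waves of %d concurrent calls: %d%n",
                perBatch, batchesForMillionRows, MAX_CONCURRENT_CALLS, waves);
    }
}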

Benefits to the Customer

The Salesforce Integration with the organization’s multiple systems has provided the following benefits to the customer:
  1. It has enabled the customer to streamline and fully automate their business processes.
  2. Scalability: Integration through Mule ESB has enabled adaptation to any new SOA infrastructure that needs to be defined as part of the company’s changing infrastructure.
  3. Speed of Integration: With an underlying platform that contains a single development environment and a reliable multi-tenant architecture, integration to Salesforce has been quickly and efficiently built.
  4. This has provided them with an ability to integrate more systems and to aggregate data for a consistent and accurate overview of business.
  5. Significant cost savings by using low cost Mule ESB Enterprise.

Author: Harika Guniganti is a Master of Computer Science graduate with 4 years of experience as a Technical Specialist at WHISHWORKS. An experienced hand at integration and Mule ESB, Harika loves cooking, crafts and travelling.

API Analytics

Viewing API Analytics

Access the Analytics dashboard for your Anypoint Platform for APIs organization to get insight into how your APIs are being used and how they are performing.

Assumptions

In order to access the Analytics Dashboards, you must be a member of the Organization Administrator role for your organization. Users who are API Creators or API Version Owners can access the Top 5 APIs chart on the API administration page, but cannot access the Analytics Dashboards.

Accessing the Top 5 APIs Chart

Organization Administrators, API Creators, and API Version Owners can view a snapshot of the latest activity for your organization's five most-called APIs at the top of the API Administration page. Note that if you don't have any APIs yet in your organization, this chart will not appear.

Once you have data to display, the Anypoint Platform rolls together all the API calls made to all versions of an API and combines them into a single line of data. Each of your top five APIs is represented by a different color.
On this chart, you can:
  • Hover over the items in the legend to highlight a single API.
  • Hover over a peak to view a tooltip with details about the total number of API requests received for that collection time.
  • Change the zoom by clicking the links underneath the chart title. You can view data for the last hour, three hours, day, week, or month.
To toggle this chart on or off, press Command + Shift + A.

Accessing the Analytics Dashboards

As an Organization Administrator, you have access to the Analytics Dashboards for your organization. Go to anypoint.mulesoft.com/analytics to access your Analytics Dashboards. You can also navigate to the overview dashboard by clicking the Analytics> link above the Top 5 APIs chart on your API Administration page.

If you don't see the Analytics> link, you are not a member of the Organization Administrator role.

Navigating the Overview Dashboard

When you access the Analytics for your organization, by default you land on your Overview Dashboard. This dashboard displays four standard charts:
  • Requests by Date: Line chart that shows the number of requests for all APIs in your organization.
  • Requests by Location: Map chart that shows the number of requests for each country of origin.
  • Requests by Application: Bar chart that shows the number of requests from each of the top five registered applications. 
  • Requests by Platform: Ring chart that shows the number of requests broken down by platform.

All four of these charts display, by default, the data for all APIs in your organization for the past day. However, you can use the filters at the top left of the page to change the date range or filter to particular APIs. Note that the time ranges displayed automatically reflect your local time zone.

All of the charts on the Overview Dashboard are cross-filtered. This means that if you filter the data on any one of these charts, the same filter is automatically applied to the other charts on the page. Clicking on an individual application in the bar chart, for example, displays all of the requests from that application and the locations for those requests on the map. Here's how to filter data on individual charts.
  • Requests by Date: Click and drag an area of the chart to filter to just that time period. Once the slider filter is applied, you can drag its ends to the left or right to adjust them as needed. To clear the filter, click outside the filtered portion of the chart.
  • Requests by Location: Click one or more countries to filter to just those results. Hover over a country for a tooltip displaying the name of the country and the total number of API requests received from that country for the selected time period. To clear the filter, click the country or countries again to reset the map.
  • Requests by Application: Click one or more application bars to filter to just those results. Hover over an application's data bar for a tooltip displaying the name of the application and the total number of requests from that application for that time period. To clear the filter, click the application(s) again to reset the chart.
  • Requests by Platform: Click one or more segments to filter to just those results. Hover over a segment for a tooltip displaying its name and the total number of requests for that time period. To clear the filter, click the segment(s) again to reset the chart.
To export the data for any of these charts, click the export icon in the chart's upper right corner.

Note that even if you have filtered data on one of the charts to show only selected data, the export icon triggers an export of a .csv file of the full data for that chart, filtered by whatever date range and API selection you have made using the filters in the upper left of the page.

Creating Custom Charts

The Anypoint Platform for APIs allows you to create a wide variety of custom charts to display exactly the data that you wish to track for your APIs. You can display these charts on your Custom Dashboard.
For example, you can create custom charts that show:
  • Hourly transactions per second between first day of the month and today, filtered by client id, API version, or SLA tier.
  • Per minute latency average in the last 24 hours, filtered by API or grouped by client geolocation.
To create a custom chart, click the menu icon in the upper right of the page and select Charts.

  1. On the Charts page, click New to create a new custom chart. You are directed to the Create Chart screen.


  2. Give your chart a Title, and, optionally, a Description.
  3. Click one of the four thumbnails on the left of your preview to select the chart type.
    Available chart types:
    • Line chart
    • Bar chart
    • Ring chart
    • Map chart
  4. Use the drop down options to select a data source, a metric, an aggregation (if relevant), and a data interval (for line charts) or grouping dimension (for other chart types). 
    Available data sources:
    • All APIs in your organization or a single API version
    Available metrics:
    • Requests
    • Response size
    • Request size
    • Response time
    Available data intervals:
    • Minutes
    • Hours
    • Days
    Available grouping dimensions:
    • API Name
    • SLA Tier
    • API Version
    • Hardware Platform
    • OS Family
    • OS Major Version
    • OS Minor Version
    • OS Version
    • Browser
    • User Agent Version
    • Application
    • Client IP
    • City
    • Continent
    • Country
    • Postal Code
    • Timezone
    • Resource Path
    • Request Timestamp
    • Response Timestamp
    • Status Code
    • User Agent Type
    • Verb
  5. Click Save Chart when finished.
You are redirected back to your Charts list, where you should now see the custom chart that you have created listed. Note that only you can see the custom charts that you create – these are not shared with other members of the Organization Administrator role.
See the next section for information about how to add charts to your Custom Dashboard.

Creating a Custom Dashboard

Once you have created some custom charts, you can display them side by side on a custom dashboard that is unique to you. Any other members of the Organization Administrator role do not share your custom charts or custom dashboard – these views are unique to each user.
To access your custom dashboard, click the menu icon in the upper right of the page and select Custom Dashboard.

  1. The first time you open your custom dashboard, it will be blank. Click Edit Dashboard in the upper right.
  2. Drag and drop charts from the drawer on the left of the screen onto your dashboard, rearranging them as needed into the order that you want.
  3. If you don't have any charts yet, click Create Chart to create a custom chart.
  4. After you add a chart to your dashboard, you have the option to open it for editing or click the X to remove it from your dashboard.
  5. Once you are satisfied with your custom dashboard, click Save at the top next to the name. You are redirected to a view of your saved custom dashboard.

When you view your custom dashboard, note that you have a date range picker in the upper left corner that allows you to adjust the time period for all the charts on your dashboard.

Exporting Analytics Data

You can export your analytics data from the charts displayed on your Overview Dashboard or your Custom Dashboard. On either dashboard, click the export icon to download a .csv file with the data for that chart.

Note that the data that you download reflects the selection of the filtering options offered in the upper left corner of your dashboard. However, if you are exporting chart data from the Overview Dashboard and you have selected one or more subsections of a chart, the export files do not reflect that selection – instead any export always contains the full data for that chart without considering the chart-level filters that you may have applied.