Sunday, 24 August 2014

This tutorial is the first in a series of blog posts that explain how to integrate Mule and Social Media.
Today’s post will focus on connecting to Twitter and sending a tweet. Subsequent tutorials will cover the other integrations in the series.

Mule Server and Studio versions
For this integration, I am using the latest version of Mule ESB Community Edition with Mule Studio (1.0.0). This sample can also be run in standalone Mule ESB Community Edition and Mule ESB Enterprise Edition.
Mule Studio comes with a built-in Twitter connector that we can use straight away. Let's build a new Twitter flow that looks like the one below. We will create an HTTP inbound endpoint that forwards the request to the Twitter connector. Finally, the Twitter connector returns a twitter4j.StatusJSONImpl object that will be transformed using an expression transformer to display the response object's string representation.

Let’s build the sample now.
  • Create a new Mule flow and name it “twitter”.
  • Drag and drop a new HTTP inbound endpoint onto "twitterFlow1". Double click on the HTTP icon to bring up the properties dialog. Specify "addtweet" for the Path field.
  • Click on the "Global Elements" tab and click Create to bring up the Global Type dialog box. Select "Twitter" from the "Cloud Connectors" section. Leave the default values and click OK. We need to configure a Twitter account to generate the necessary security tokens; I will explain this process in the next section.
  • Drag and drop the Twitter connector next to the HTTP inbound endpoint. Double click on the Twitter icon to bring up the properties dialog. Select the Twitter connector we created in the previous step for the "Config Reference" field. Select "Update status" for the Operation field. Finally, specify "#[header:INBOUND:mymessage]" as the Status. This expression extracts the "mymessage" parameter value from the HTTP request.
  • Finally, drag and drop an "expression transformer" next to the "Twitter" connector. Double click on the Expression icon to bring up the properties dialog. Specify the evaluator as "groovy" and the expression as "payload.toString()". More on expression transformers can be found in the Mule 3 documentation.
Here is the completed flow. I have erased my generated keys.
<?xml version="1.0" encoding="UTF-8"?>
 
<mule xmlns="http://www.mulesoft.org/schema/mule/core" xmlns:http="http://www.mulesoft.org/schema/mule/http" xmlns:twitter="http://www.mulesoft.org/schema/mule/twitter" xmlns:doc="http://www.mulesoft.org/schema/mule/documentation" xmlns:spring="http://www.springframework.org/schema/beans" xmlns:core="http://www.mulesoft.org/schema/mule/core" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" version="CE-3.2.1" xsi:schemaLocation="
http://www.mulesoft.org/schema/mule/http http://www.mulesoft.org/schema/mule/http/current/mule-http.xsd
http://www.mulesoft.org/schema/mule/twitter http://www.mulesoft.org/schema/mule/twitter/2.3/mule-twitter.xsd
http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
http://www.mulesoft.org/schema/mule/core http://www.mulesoft.org/schema/mule/core/current/mule.xsd ">
    <twitter:config name="Amjad" accessKey="aaaa" accessSecret="bbbb" consumerKey="cccc" consumerSecret="dddd" useSSL="false" doc:name="Twitter"/>
    <flow name="twitterFlow1" doc:name="twitterFlow1">
        <http:inbound-endpoint exchange-pattern="request-response" host="localhost" port="8081" path="addtweet" doc:name="HTTP"/>
        <twitter:update-status config-ref="Amjad" status="#[header:INBOUND:mymessage]" doc:name="Twitter"/>
        <expression-transformer evaluator="groovy" expression="payload.toString()" doc:name="Expression"/>
    </flow>
</mule>
Pretty simple, right? To explain in more detail:
<twitter:config name="Amjad" accessKey="aaaa" accessSecret="bbbb" consumerKey="cccc" consumerSecret="dddd" useSSL="false" doc:name="Twitter"/>
In Mule Studio this syntax may be flagged as an error because Studio is still trying to use the older version of the XSD.
Anyway, what do all these attributes mean?
The "consumerKey", "consumerSecret", "accessKey" and "accessSecret" are in fact keys that are generated by the Twitter application. (More on that in a minute.)
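The connector returns twitter4j objects, so a rough equivalent of the "Update status" operation in plain twitter4j looks like the sketch below. The key values are the same placeholders as above, and you never need to write this code yourself when using the connector; it is only here to show where each key ends up:

import twitter4j.Status;
import twitter4j.Twitter;
import twitter4j.TwitterFactory;
import twitter4j.conf.ConfigurationBuilder;

public class TweetSketch {
    public static void main(String[] args) throws Exception {
        // These four values map to the consumerKey, consumerSecret, accessKey and
        // accessSecret attributes of <twitter:config>; "aaaa".."dddd" are placeholders.
        ConfigurationBuilder cb = new ConfigurationBuilder()
                .setOAuthConsumerKey("cccc")
                .setOAuthConsumerSecret("dddd")
                .setOAuthAccessToken("aaaa")
                .setOAuthAccessTokenSecret("bbbb");

        Twitter twitter = new TwitterFactory(cb.build()).getInstance();

        // Rough equivalent of the connector's "Update status" operation.
        Status status = twitter.updateStatus("hello");
        System.out.println(status); // similar to the StatusJSONImpl output shown later
    }
}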
Configure Twitter
Before you are able to start using the Twitter integration, you will have to do some configuration in your Twitter account.
Go to the following URL: https://dev.twitter.com and sign in with your Twitter username and password.
First, you should add an application:

Fill in all the fields in the screen and agree to the “Terms and Conditions.”






Once your application has been generated, you can choose from a tab bar to configure your application in more detail:

You will see that the consumer key and consumer secret are already generated, but the access level is Read-only. If you want to read more about the Twitter permission model, you can click on the link.

To authenticate your application with your Twitter account you will have to generate authentication keys. This is not done by default:

Click the button to create your access tokens and the following screen will appear:

By default the access level is Read-only. If you need access to direct messages you will have to update your Access level. This can be done in the following ways:
Consumer keys:
Go to the Settings tab and adjust the access level:

OAuth keys:
You should recreate your access token (after changing the access level in the Settings tab) if you also want to update the access level of your OAuth keys.

Running the application
Right click on twitter.mflow and select Run As > Mule Application. Once the application has started successfully, you can test it with the following URL: http://localhost:8081/addtweet?mymessage=hello. Check your Twitter account to see a new tweet with the message "hello".
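If you prefer to exercise the endpoint from code rather than a browser, a throwaway Java client like the sketch below also works; it simply assumes the flow is running locally on port 8081 as configured above:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;

public class TweetFlowTest {
    public static void main(String[] args) throws Exception {
        // Same request as http://localhost:8081/addtweet?mymessage=hello
        String message = URLEncoder.encode("hello", "UTF-8");
        URL url = new URL("http://localhost:8081/addtweet?mymessage=" + message);
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        try (BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line); // prints the flow's string response
            }
        }
    }
}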
A successful Twitter update will result in a response similar to the following:
StatusJSONImpl{createdAt=Sun Apr 08 16:40:08 BST 2012, id=189014773306888192, text='hello',
source='<a href="http://www.mulesoft.com" rel="nofollow">Amjad Mulesoft</a>', isTruncated=false,
inReplyToStatusId=-1, inReplyToUserId=-1, isFavorited=false, inReplyToScreenName='null',
geoLocation=null, place=null, retweetCount=0, wasRetweetedByMe=false, contributors=null,
annotations=null, retweetedStatus=null, userMentionEntities=null, urlEntities=null,
hashtagEntities=null...}}
If you ever try to tweet the same message twice you will get the following response from Twitter:
StatusJSONImpl{createdAt=null, id=-1, text='null', source='null', isTruncated=false,
inReplyToStatusId=-1, inReplyToUserId=-1, isFavorited=false, inReplyToScreenName='null',
geoLocation=null, place=null, retweetCount=-1, wasRetweetedByMe=false, contributors=null,
annotations=null, retweetedStatus=null, userMentionEntities=null, urlEntities=null,
hashtagEntities=null, user=null}

And that’s it! Have fun!
This is a guest post from Mule community member Tom Stroobants. Thank you Tom! (we’ll be sending you a cool T-shirt).  If anyone else in the Mule community would like to write a guest post, please email us.
This Wednesday, April 25th, we are excited to join the folks at THINKstrategies for The Cloud Analytics Summit. This is shaping up to be a great event, jam-packed with best practice sessions and opportunities for discussion.
One of the reasons why we are partnering with THINKstrategies is to help companies see how an integration-platform-as-a-service (iPaaS) can accelerate their Big Data and cloud analytics projects.
The integration challenges around Big Data and cloud analytics tend to be twofold. First, it’s important to have your data in a central place, and second, it’s extremely important to collect and analyze that data in real time.

Ask yourself, how helpful would it be to have analytics from only 50% of your data sources? Or how about 1-2 month old analytics about your business? By the time you collected information from all the data sources and crunched the numbers, your market opportunity may have passed you by. Today’s data sources are more distributed, and as more companies look to SaaS offerings like Workday, Box, and Salesforce.com for their core business applications, their big data and integration challenges are only going to get bigger.
We are participating in a panel discussion at the conference to explore this topic and more! We hope that you’ll join us for the discussion and stop by to see us at the expo hall. Here are the details:

About the Conference:

April 25, 2012 | Mountain View, CA
Computer History Museum
Website: http://cloudanalyticssummit.com/

Working with Databases (JDBC) in Mule Studio

In this blog post, I’ll give you some background information about JDBC, explain what Mule ESB and Studio do with JDBC, and demonstrate how you can use it in a simple example.

A little reference for JDBC:

JDBC, which stands for Java Database Connectivity, is basically an API that lets you execute operations over a data source using the Java programming language. This API allows you to connect to almost any data source, from relational databases to spreadsheets and flat files, and, using the proper SQL syntax, you can perform queries, updates, deletes, or even execute stored procedures.
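To make that concrete, here is a minimal sketch of the raw JDBC API at work, using the same hypothetical MySQL database, credentials and Users table that appear in the Mule example further down; it simply opens a connection, runs a query and walks the result set. Mule generates and manages the equivalent of this plumbing for you, so all you write is XML configuration:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class JdbcSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details; the MySQL driver jar must be on the classpath.
        String url = "jdbc:mysql://localhost:3306/StudioQA";
        try (Connection conn = DriverManager.getConnection(url, "root", "");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT * FROM Users")) {
            while (rs.next()) {
                System.out.println(rs.getString(1)); // print the first column of each row
            }
        }
    }
}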

What Mule ESB and Studio do with JDBC

Now let’s see how this is architected in Mule ESB. What Mule ESB does is make this Java code layer transparent to you. Simply importing a jar file with the driver for a specific data source (MySQL, Oracle, etc.) and writing some simple XML code will enable you to connect to a data source and manipulate the data in it. Studio comes with a friendly user interface, which makes Mule XML code very easy to create and edit. The image below gives you a better idea of how all this works:
At the very end of the line is your data source, which can be fed by any other application. Next you have the JDBC driver. As we mentioned earlier, this is the Java API implementation provided by the data source vendor that allows Mule to connect to the data source and manipulate the data in it. Next comes our Mule ESB instance, which is the service that executes the Mule XML code. And finally we have Mule Studio and you.
Studio gives you the framework to easily create the XML code you need and allows you to test it by executing the code in an embedded Mule ESB instance. So by using Studio, the other layers are transparent to you.

My kingdom for a Driver!

Before configuring a JDBC connection, the first thing we need is the driver. If you want to keep your kingdom, you should first go to the vendor website and look for a JDBC driver file, which should be in jar format. Keep in mind that some vendors, like Oracle, may require a license to use the driver. NOTE: On www.jarvana.com you can look for the driver class you need and download the jar file from there. In the example explained below we are going to work with a MySQL database. You can download the driver file from the MySQL website (registration required) or look for the connector class in jarvana.

Putting hands to work

Open a new Mule project in Studio, and then follow these steps to get your flow working: a. import the driver, b. create a Datasource, c. create a Connector that uses our Datasource, and finally d. create a simple flow that uses our connector.

a. Import the Driver

Once you have the jar file, the next steps are very simple:
  1. In the Package Explorer, right-click on the project folder (in this case "jdbcprj").
  2. Look in the menu for Build Path > Add External Archives…
  3. Look for the jar file in your hard drive and click Open.
Now you should see in the Package Explorer that the jar file is listed under "Referenced Libraries." This will allow you to create an instance of the driver object you will need.
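A quick way to confirm the jar was picked up is to load the driver class from a scratch Java class in the same project; the class name below is the one used by MySQL Connector/J 5.x and is only an example:

public class DriverCheck {
    public static void main(String[] args) throws ClassNotFoundException {
        // Throws ClassNotFoundException if the driver jar is not on the build path.
        Class.forName("com.mysql.jdbc.Driver");
        System.out.println("Driver class found on the classpath");
    }
}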

b. Creating a Datasource

Mule and Studio come with predefined configuration elements for the most common datasources: Derby, MySQL, Oracle and PostgreSQL. If you want to use another datasource, you can do so by creating a bean object with the configuration and using that bean as the datasource. Now let's create a MySQL datasource for our connector:
  1. Go to the Global Elements tab and click on the Create button, which will display a new window.
  2. Look for Data Sources > MySQL Data Source and click the OK button.
  3. In the Data Source configuration window only three things are needed to make this work: the database name in the URL, the User and the Password. Enter those attributes according to your database configuration and click OK.

c. Create the Connector

Now that we have the datasource with its driver we need a Connector.
  1. From the Global Elements tab, click on Create and look for  Connector > Database (JDBC). Then click OK.
  2. The only thing that we need to do here is tell the connector which datasource to use. To do this click on the ‘Database Specific’ drop-down list and look for our datasource created in the previous step. Then click OK.
Optionally, you can go to the Queries tab now and create the queries or SQL statements that you want. If you don’t do this now you will have to do it when configuring an endpoint.

d. Creating a flow

Now we have half of the work done. To use our datasource in a flow, we need an inbound or an outbound endpoint, depending on what we want to do: use a JDBC inbound endpoint if you want to use information from a database to feed your flow and do some processing, or use an outbound endpoint if you want to write the information processed in your flow to a database. In either case you need to do this:
  1. In the Studio Message Flow view, add a JDBC endpoint (either inbound or outbound) to the flow, and open the configuration window by double-clicking on the endpoint. Note: to add the endpoint, just look for it in the palette and drag and drop it onto the canvas. If you drop it on the canvas outside any flow, a flow scope will be created and your endpoint will be an inbound endpoint; if you drop it in a flow after any other element, you will have an outbound endpoint. Studio performs this conversion automatically, as flows should always start with inbound endpoints.
  2. Go to the Reference tab and, in the connector drop-down list, look for the JDBC connector created in step c. We are telling the endpoint how to connect to the data source by specifying a reference to a connector. The connector configuration is global, so it can be reused in as many endpoints as you want.
  3. Go to the General tab and select the Query Key you want to use in this endpoint. The JDBC endpoint can execute one SQL statement. If you have not created the query in the connector, you can do it now by going to the Queries tab, creating a new query and then selecting it in the Query Key drop-down list.
Following these steps, you are ready to feed your flow with queries against your database, create new database records with the information processed in your flow, or execute any statement you need against your data source. Here is an example flow. To use it, just copy the configuration, paste it into the XML Configuration tab and save the project. You should see a flow like this in the Message Flow view:

<?xml version="1.0" encoding="UTF-8"?>
 
<mule xmlns="http://www.mulesoft.org/schema/mule/core" xmlns:mulexml="http://www.mulesoft.org/schema/mule/xml" xmlns:file="http://www.mulesoft.org/schema/mule/file" xmlns:jdbc="http://www.mulesoft.org/schema/mule/jdbc" xmlns:doc="http://www.mulesoft.org/schema/mule/documentation" xmlns:spring="http://www.springframework.org/schema/beans" xmlns:core="http://www.mulesoft.org/schema/mule/core" xmlns:http="http://www.mulesoft.org/schema/mule/http" xmlns:scripting="http://www.mulesoft.org/schema/mule/scripting" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" version="CE-3.2.1" xsi:schemaLocation="
http://www.mulesoft.org/schema/mule/xml http://www.mulesoft.org/schema/mule/xml/current/mule-xml.xsd
http://www.mulesoft.org/schema/mule/file http://www.mulesoft.org/schema/mule/file/current/mule-file.xsd
http://www.mulesoft.org/schema/mule/jdbc http://www.mulesoft.org/schema/mule/jdbc/current/mule-jdbc.xsd
http://www.mulesoft.org/schema/mule/core http://www.mulesoft.org/schema/mule/core/current/mule.xsd
http://www.mulesoft.org/schema/mule/http http://www.mulesoft.org/schema/mule/http/3.1/mule-http.xsd
http://www.mulesoft.org/schema/mule/scripting http://www.mulesoft.org/schema/mule/scripting/3.1/mule-scripting.xsd ">
    <jdbc:connector name="jdbcConnector" dataSource-ref="MySQL_Data_Source" validateConnections="false" transactionPerMessage="true" queryTimeout="10" pollingFrequency="10000" doc:name="JDBC">
        <jdbc:query key="Users" value="SELECT * FROM Users"/>
    </jdbc:connector>
    <jdbc:mysql-data-source name="MySQL_Data_Source" user="root" password="" url="jdbc:mysql://localhost:3306/StudioQA" transactionIsolation="UNSPECIFIED" doc:name="MySQL Data Source"/>
    <flow name="flows1Flow1" doc:name="flows1Flow1">
        <jdbc:inbound-endpoint queryKey="Users" connector-ref="jdbcConnector" doc:name="JDBC"/>
        <mulexml:object-to-xml-transformer doc:name="Object-to-Xml"/>
        <file:outbound-endpoint path="/Users/myUser/myFolder" doc:name="File"/>
    </flow>
</mule>

Saturday, 23 August 2014


Salesforce Bulk API Integration using Mule ESB

May 9, 2014, by Harika Guniganti
Filed under: ESB, SOA 
Salesforce CRM is widely used in organizations to manage their customer interactions. However, with the cloud delivery model it has become difficult and expensive for organizations to custom-code the integration of Salesforce with their existing on-premise systems.
Many organizations need this integration in a cost- and time-effective way to automate their business processes. As a solution to this problem, WHISHWORKS has a way to integrate a company's existing systems with Salesforce using the modern, lightweight and low-cost Mule Enterprise Service Bus.

WHISHWORKS and Salesforce Bulk API

WHISHWORKS has extensive experience using the Mule ESB Anypoint Salesforce Connector to connect directly with the Salesforce APIs. This connector gives users access to full Salesforce functionality with seamless Salesforce integration.
In a business scenario where there were huge volumes of data to be migrated to Salesforce from a company's multiple existing systems, WHISHWORKS implemented an effective way of integrating with the Salesforce Bulk API.
Architecture Diagram
Mule ESB flows have been designed so that they can be reused for both initial and operational loads. Data transformation has also been built into the import process to give a standardized and consolidated form of data in Salesforce.

How we tuned the Salesforce Bulk API

Bearing in mind the various constraints the Salesforce Bulk API has, WHISHWORKS tuned the batches uploaded to Salesforce to enable seamless business automation between Salesforce and the existing database systems. Here is how:
  • Threading Profile Settings: Salesforce allows a maximum of 25 concurrent threads at a time. To restrict the concurrent calls to not more than 25, threading profiles have been created at the flow and the VM endpoint level in which the Salesforce calls reside.

  • Salesforce Batch Tuning: Each Salesforce batch being created is tuned so that the data size of the batch does not exceed 10MB. Tuning parameters have been configured to change the number of records each batch holds depending on the size of the entity (see the sketch after this list).

  • Time Delay between each Salesforce Call: Loading huge volumes of data to Salesforce in concurrently running batches can cause Salesforce to take longer to process the batches. To avoid this, a time delay has been added between concurrent calls so as not to overload Salesforce.

  • Parallel Garbage Collection: To use JVM memory efficiently while importing the data, parallel garbage collection has been used to clean up Java objects that are no longer strongly referenced.
All this was done on Mule ESB!
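The following is not WHISHWORKS' actual implementation, just a minimal Java sketch of the batching and pacing logic described in the list above: records are grouped so a batch stays under an assumed 10 MB limit, and a configurable delay is inserted between batch submissions. The record sizes, delay value and submitBatch call are hypothetical placeholders; in the real solution the 25-thread cap is enforced by Mule threading profiles rather than application code:

import java.util.ArrayList;
import java.util.List;

public class BulkLoadSketch {
    static final long MAX_BATCH_BYTES = 10L * 1024 * 1024; // Salesforce Bulk API batch size limit
    static final long DELAY_BETWEEN_BATCHES_MS = 2000;     // assumed pacing between calls

    public static void loadRecords(List<String> serializedRecords) throws InterruptedException {
        List<String> batch = new ArrayList<>();
        long batchBytes = 0;
        for (String record : serializedRecords) {
            long recordBytes = record.getBytes().length;
            // Close the current batch before the 10 MB limit would be exceeded.
            if (!batch.isEmpty() && batchBytes + recordBytes > MAX_BATCH_BYTES) {
                submitBatch(batch);
                Thread.sleep(DELAY_BETWEEN_BATCHES_MS); // avoid overloading Salesforce
                batch = new ArrayList<>();
                batchBytes = 0;
            }
            batch.add(record);
            batchBytes += recordBytes;
        }
        if (!batch.isEmpty()) {
            submitBatch(batch); // flush the final, partially filled batch
        }
    }

    // Placeholder for the actual Bulk API call made through the Salesforce connector.
    static void submitBatch(List<String> batch) {
        System.out.println("Submitting batch with " + batch.size() + " records");
    }

    public static void main(String[] args) throws InterruptedException {
        loadRecords(List.of("recordA", "recordB", "recordC")); // toy input
    }
}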

Benefits to the Customer

The Salesforce Integration with the organization’s multiple systems has provided the following benefits to the customer:
  1. This has enabled the customer to streamline and fully automate their business processes.
  2. Scalability: Integration through Mule ESB has enabled adaptation to any new SOA infrastructure that needs to be defined as part of the company’s changing infrastructure.
  3. Speed of Integration: With an underlying platform that contains a single development environment and a reliable multi-tenant architecture, integration to Salesforce has been quickly and efficiently built.
  4. This has provided them with an ability to integrate more systems and to aggregate data for a consistent and accurate overview of business.
  5. Significant cost savings by using low cost Mule ESB Enterprise.

Author: Harika Guniganti is a Master of Computer Science graduate with 4 years of experience as a Technical Specialist with WHISHWORKS. An experienced hand at integration and Mule ESB, Harika loves cooking, crafts and travelling.

API Analytics

Viewing API Analytics

Access the Analytics dashboard for your Anypoint Platform for APIs organization to get insight into how your APIs are being used and how they are performing.

Assumptions

In order to access the Analytics Dashboards, you must be a member of the Organization Administrator role for your organization. Users who are API Creators or API Version Owners can access the Top 5 APIs chart on the API administration page, but cannot access the Analytics Dashboards.

Accessing the Top 5 APIs Chart

Organization Administrators, API Creators, and API Version Owners can view a snapshot of the latest activity for your organization's five most-called APIs at the top of the API Administration page. Note that if you don't have any APIs yet in your organization, this chart will not appear.

Once you have data to display, the Anypoint Platform rolls together all the API calls made to all versions of an API and combines them into a single line of data. Each of your top five APIs is represented by a different color.
On this chart, you can:
  • Hover over the items in the legend to highlight a single API.
  • Hover over a peak to view a tooltip with details about the total number of API requests received for that collection time.
  • Change the zoom by clicking the links underneath the chart title. You can view data for the last hour, three hours, day, week, or month.
To toggle this chart on or off, press Command + Shift + A.

Accessing the Analytics Dashboards

As an Organization Administrator, you have access to the Analytics Dashboards for your organization. Go to anypoint.mulesoft.com/analytics to access your Analytics Dashboards. You can also navigate to the overview dashboard by clicking the Analytics> link above the Top 5 APIs chart on your API Administration page.

If you don't see the Analytics> link, you are not a member of the Organization Administrator role.

Navigating the Overview Dashboard

When you access the Analytics for your organization, by default you land on your Overview Dashboard. This dashboard displays four standard charts:
  • Requests by Date: Line chart that shows the number of requests for all APIs in your organization.
  • Requests by Location: Map chart that shows the number of requests for each country of origin.
  • Requests by Application: Bar chart that shows the number of requests from each of the top five registered applications. 
  • Requests by Platform: Ring chart that shows the number of requests broken down by platform.

All four of these charts display, by default, the data for all APIs in your organization for the past day. However, you can use the filters at the top left of the page to change the date range or filter to particular APIs. Note that the time ranges displayed automatically reflect your local time zone.

All of the charts on the Overview Dashboard are cross-filtered. This means that if you filter the data on any one of these charts, the same filter is automatically applied to the other charts on the page. Clicking on an individual application in the bar chart, for example, displays all of the requests from that application and the locations for those requests on the map. Here's how to filter data on individual charts.
  • Requests by Date: To filter, click and drag an area of the chart to filter to just that time period; once you have the slider filter applied, you can drag the ends to the left or right to adjust them as needed. To clear, click outside the area of the filtered portion.
  • Requests by Location: To filter, click one or more countries to filter to just those results; hover over a country for a tooltip displaying the name of the country and the total number of API requests received from that country for the selected time period. To clear, click the country or countries again to reset the map.
  • Requests by Application: To filter, click one or more application bars to filter to just those results; hover over an application's data bar for a tooltip displaying the name of the application and the total number of requests from that application for that time period. To clear, click the application(s) again to reset the chart.
  • Requests by Platform: To filter, click one or more segments to filter to just those results; hover over a segment for a tooltip displaying the name of the application and the total number of requests from that application for that time period. To clear, click the segment(s) again to reset the chart.
To export the data for any of these charts, click the export icon in the chart's upper right corner.

Note that even if you have filtered data on one of the charts to show only selected data, the export icon triggers an export of a .csv file of the full data for that chart, filtered by whatever date range and API selection you have made using the filters in the upper left of the page.

Creating Custom Charts

The Anypoint Platform for APIs allows you to create a wide variety of custom charts to display exactly the data that you wish to track for your APIs. You can display these charts on your Custom Dashboard.
For example, you can create custom charts that show:
  • Hourly transactions per second between first day of the month and today, filtered by client id, API version, or SLA tier.
  • Per minute latency average in the last 24 hours, filtered by API or grouped by client geolocation.
To create a custom chart, click the menu icon in the upper right of the page and select Charts.

  1. On the Charts page, click New to create a new custom chart. You are directed to the Create Chart screen.


  2. Give your chart a Title, and, optionally, a Description.
  3. Click one of the four thumbnails on the left of your preview to select the chart type.
    Available chart types:
    • Line chart
    • Bar chart
    • Ring chart
    • Map chart
  4. Use the drop down options to select a data source, a metric, an aggregation (if relevant), and a data interval (for line charts) or grouping dimension (for other chart types). 
    Available data sources:
    • All APIs in your organization or a single API version
    Available metrics:
    • Requests
    • Response size
    • Request size
    • Response time
    Available data intervals:
    • Minutes
    • Hours
    • Days
    Available grouping dimensions:
    • API Name
    • SLA Tier
    • API Version
    • Hardware Platform
    • OS Family
    • OS Major Version
    • OS Minor Version
    • OS Version
    • Browser
    • User Agent Version
    • Application
    • Client IP
    • City
    • Continent
    • Country
    • Postal Code
    • Timezone
    • Resource Path
    • Request Timestamp
    • Response Timestamp
    • Status Code
    • User Agent Type
    • Verb
  5. Click Save Chart when finished.
You are redirected back to your Charts list, where you should now see the custom chart that you have created listed. Note that only you can see the custom charts that you create – these are not shared with other members of the Organization Administrator role.
See the next section for information about how to add charts to your Custom Dashboard.

Creating a Custom Dashboard

Once you have created some custom charts, you can display them side by side on a custom dashboard that is unique to you. Any other members of the Organization Administrator role do not share your custom charts or custom dashboard – these views are unique to each user.
To access your custom dashboard, click the menu icon in the upper right of the page and select Custom Dashboard.

  1. The first time you open your custom dashboard, it will be blank. Click Edit Dashboard in the upper right.
  2. Drag and drop charts from the drawer on the left of the screen onto your dashboard, rearranging them as needed into the order that you want.
  3. If you don't have any charts yet, click Create Chart to create a custom chart.
  4. After you add a chart to your dashboard, you have the option to open it for editing or click the X to remove it from your dashboard.
  5. Once you are satisfied with your custom dashboard, click Save at the top next to the name. You are redirected to a view of your saved custom dashboard.

When you view your custom dashboard, note that you have a date range picker in the upper left corner that allows you to adjust the time period for all the charts on your dashboard.

Exporting Analytics Data

You can export your analytics data from the charts displayed on your Overview Dashboard or your Custom Dashboard. On either dashboard, click the export icon to download a .csv file with the data for that chart.

Note that the data that you download reflects the selection of the filtering options offered in the upper left corner of your dashboard. However, if you are exporting chart data from the Overview Dashboard and you have selected one or more subsections of a chart, the export files do not reflect that selection – instead any export always contains the full data for that chart without considering the chart-level filters that you may have applied.
I am excited to announce release 39 of CloudHub! This release is based on a lot of user feedback, and contains a preview of our redesigned user interface as well as one of our most requested features – CPU & memory monitoring.

Redesigned Experience

We’ve been hard at work the last few months building a revamped user interface which helps you be more productive and integrates seamlessly with the Anypoint Platform for APIs. We’re excited to preview some of that work today. You’ll notice a clean, modern interface that makes it easier to get things done. For example, the home page now provides easy access to your applications, settings, and logs at a glance. It now also has a handy summary of resource utilization and the number of recent transactions processed.


We have also improved the logs page. While viewing live logs, you can now pause and clear log data, making it easier to debug. You can also switch between archive and live modes with a single toggle switch.

We’ve also improved deployment and settings in CloudHub. All functionality can now be accessed from the settings screen, making it easier to get things done without switching screens. You can also now enter application properties by copying and pasting, making configuration easier.

To access the new UI and features, just click “Try out a beta version of our new look” in the upper right of your CloudHub console. Or you can simply go to http://cloudhub.io/ui-beta/.

Resource Monitoring

Along with the new UI, we’ve added one of our most requested features: CPU & memory monitoring. On the home page, CloudHub now shows usage of workers at a glance, and a more detailed view is visible on the application dashboard. This allows you to understand better when you are close to hitting your capacity limits and when to upgrade workers.



Please note that if you have existing workers, you will have to restart them to gain these new monitoring capabilities.

What’s Next

We will be launching our new UI as GA in September. Along with this, we will be launching new single sign-on capabilities with the API Platform, allowing you to manage all your users in one place. This release will also include several other widely requested features, including the ability to define custom roles, support for multiple production environments and the ability to do single sign-on through PingFederate. For more information about this next release, please see our FAQ.
We’re excited to have you try out these new capabilities. So have a look, and we look forward to your feedback.
Related posts:
  1. CloudHub Release 34: Improving your daily experience
  2. Iterating on our release strategy: Mule ESB, Mule Studio, CloudHub
  3. CloudHub is now the world’s first Global iPaaS
  4. Announcing CloudHub availability in Europe

I am very excited to announce the general availability of the Anypoint Platform for APIs July 2014 release. This release places a broad set of rich API tooling at developers’ fingertips, allowing them to address the entirety of the API lifecycle in a more efficient and effective manner. The release also enables API program owners to deliver a robust API platform that can easily integrate with existing enterprise assets. The significant set of enhancements introduced in this release revolves around three core themes.

360 degree API life-cycle

This new release unifies the user experience across the API lifecycle to streamline the creation, management and consumption of APIs. Now users of the Anypoint Platform for APIs can access all functionality of the platform through a single pane of glass for enhanced usability and visibility across their API programs. This means that API owners can design APIs, publish portals, manage and monitor their APIs and enable API consumers (i.e. application developers) to explore the published portals and register their applications – all in a single product, accessed through a single interface.

Advanced identity management

This release introduces two powerful identity management capabilities that will make medium to large enterprises more effective in their introduction of API programs. The first is an out-of-the-box, easy-to-enable option to leverage an existing PingFederate installation as a federated identity and OAuth provider. Federated identity allows you to reuse your user base’s existing credentials (UID and password) to log in to the Anypoint Platform for APIs and establish single sign-on with other internal and external systems that are also federated. OAuth is becoming the de facto standard way of managing secure delegated access to APIs, and PingFederate’s OAuth provider is a leading solution for enabling OAuth. In addition to identity federation, this release’s built-in integration with PingFederate allows customers to leverage PingFederate for the OAuth protection of APIs through a policy that can be applied in a single click.
The second important identity management feature introduced in this release is the support for custom roles and fine-grained permissions. These features allow administrators to control at a very granular level which users have access to which pieces of functionality. As an example, one could configure two roles – acme_external_admin and acme_external_consumer – that map to the APIs of the “acme” department. The administrator could restrict these roles to have access to consume only the APIs of the acme department.
Henceforth, all users belonging to these roles will have only the restricted set of privileges that are associated with them.

Powerful analytics

The Anypoint Platform for APIs now includes a powerful analytics system that provides API program owners with deep insight into the usage and performance of their APIs. The analytics system provides a built-in overview dashboard and the ability to create custom charts and dashboards. API owners can report on API traffic, application usage, response times, geographic usage as well as many other metrics. All data from these dashboards and charts can be exported for use in external tools.

Get started today

You can start using the new version today by signing up at anypoint.mulesoft.com. Look out for more information on this new version in the coming weeks, including a demo of the new platform delivered in a webinar.
Follow us on Twitter @MuleSoft and LinkedIn for all the latest updates!
Source - MuleSoft

Salesforce Org to Org - Account Migration

Salesforce Org to Org - Contact Migration

Salesforce Org to Org - Custom Object Migration

Salesforce Org to Org - Opportunity Migration

Thursday, 21 August 2014

SOA Governance Best Practices

How To Use BI Publisher Web Services To Return Report Data From Fusion A...

obiee 11g training video part 7

How to create Interactive Report with BI Publisher 11G

obiee 11g training video part 3

obiee 11g training video part 2

OBIEE 11g Training video part1

OBIEE technical architecture

Master Detail Report with Oracle BI Publisher

Create 1st Report with BI Publisher

Master Data Management in the Cloud

Mining IT Big Data: Using Analytics to Improve your Cloud/Datacenter Operations

Using Hadoop to Turn Network Data into Business Intelligence

Entry Points – How to Get the Ball Rolling with Big Data Analytics

The Nature of Analytics – Dances with Rhinos

DevOps and the Cloud: Achieving Faster Application Delivery

The Next Generation of Big Data

Don’t let your DataMapper streaming be out of control

Wednesday, 20 August 2014

SampleApp V309 What's New Overview

EPM 11.1.2.2 default deployment on SampleAppv305

How To : Build an Endeca Application on OBIEE Model (SampleAppv305)

Oracle EID 3.0 Integration into OBIEE (SampleApp V305)

Using Performance Tiles

SmartView Interacting with OBIEE 11.1.1.7 (SampleApp V305)

OBIEE 11.1.1.7 and Oracle DB Advanced Analytics (Sample App V303)

What's New in SampleApp V305

How To: Get point-in-time data in OBIEE using Db Time Temporal (V309)

How To : Build Persistent Aggregates in OBIEE (SampleAppV305)

Using Performance Tiles

How To : Build Map Views (SampleApp V305)

How To : Build Persistent Aggregates in OBIEE (SampleAppV305)

How To : Build an Endeca Application on OBIEE Model (SampleAppv305)