Quick overview of the Talend Open Studio GUI

Talend Open Studio for Data Integration Version: 6.3.1
Java Compiler: 1.7
OS: Windows 8

In this post I will try to explain the different panels and their uses in Talend Open Studio, keeping it short and simple. If you have not installed TOS for Data Integration yet, read my last post here. In the GUI you can see:

Menu bar
Repository tree view
Design workspace
Design workspace properties
Palette
Outline view and Code Viewer

Let's take a quick look at each section.
On the left side you can see the Repository tab, where we will be creating jobs. Jobs are roughly equivalent to packages in ODI.


Business Models: A Business Model is a non-technical view of a business need in data flow management.
The Business Modeler is at the core of the top-down approach: it allows any of the key players to take part in the project design.
Business Models offer a macroscopic view of the project. This is how it looks.


Job Designs: Here we will be creating jobs with different components for upstream and downstream processing, including transformations.

Contexts: Here we will be creating dynamic variables. Here is what the Talend documentation says:

A context is characterized by parameters. These parameters are mostly context-sensitive variables which will be added to the list of variables for reuse in the component-specific properties on the Component view through the Ctrl+Space keystrokes.
Talend Studio offers you the possibility to create multiple context data sets. Furthermore you can either create context data sets on a one-shot basis from the context tab of a Job, or you can centralize the context data sets in the Contexts node of the Repository tree view in order to reuse them in different Jobs.
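To make the idea concrete, a context data set behaves like a set of key/value parameters loaded per environment. This is only a minimal sketch in plain Java, not Talend's generated code; the variable names and the key=value text format here are hypothetical:

```java
import java.io.StringReader;
import java.util.Properties;

public class ContextDemo {
    // Loads a hypothetical context data set (e.g. a "Default" vs a "Prod"
    // set) from key=value text, similar to an exported context file.
    public static Properties load(String text) throws Exception {
        Properties context = new Properties();
        context.load(new StringReader(text));
        return context;
    }
}
```

In a real Job, a component property would then reference such a parameter (for example the database host) instead of a hard-coded value, so the same Job can run against different environments by switching the context data set.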

Routines: Routines are fairly complex Java functions, generally used to factorize code. They therefore optimize data processing and improve Job capacities.
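In practice, a routine is a public class with static methods that any Job can call. A minimal sketch follows; the class and method names are made up for illustration:

```java
// A minimal sketch of a custom routine: a public class with static
// helper methods that Jobs can call from any component.
// The class and method names here are hypothetical.
public class MyStringRoutines {
    // Left-pads a code with zeros to a fixed width: a typical
    // factorized helper reused across several Jobs.
    public static String padCode(String code, int width) {
        StringBuilder sb = new StringBuilder(code == null ? "" : code);
        while (sb.length() < width) {
            sb.insert(0, '0');
        }
        return sb.toString();
    }
}
```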

SQL Templates: These are dynamic queries that use substitution methods, quite similar to Knowledge Modules (KMs) in ODI. Their scope includes data query and update, schema creation and modification, and data access control.
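The substitution idea behind such templates can be sketched in plain Java. Note this uses a simplified ${name} placeholder syntax for illustration, not Talend's actual template tags:

```java
import java.util.Map;

public class SqlTemplateDemo {
    // Replaces ${name} placeholders in a query template with values,
    // roughly how a parameterized template expands before execution.
    public static String render(String template, Map<String, String> params) {
        String out = template;
        for (Map.Entry<String, String> e : params.entrySet()) {
            out = out.replace("${" + e.getKey() + "}", e.getValue());
        }
        return out;
    }
}
```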

Metadata: This is similar to creating models and data stores in ODI. Here we will be creating connections to different databases, files, and web services. Wizards will guide you through completing each connection. Since connections are reusable, you do not need to create them again for every job.

Design workspace: Here we lay out jobs with multiple components from the Palette. After creating a specific ETL job, you can see the generated Java code by clicking the Code tab.

Design workspace properties: Here you can see the properties for your job, so you will have a properties window at both the job level and the component level. Dynamic variables can be created and used in your job via the Context tab. Additionally, you will have another tab to execute your job, as highlighted below.


Palette: Here we have all the technical components that can be dragged to the design workspace to build a complete ETL process.

Outline view and Code Viewer: This panel is located below the Repository tree view.
It is composed of two tabs, Outline and Code Viewer, which provide information about the displayed diagram (either a Job or a Business Model) and the generated code.


Well, now we are familiar with the basic panels required to create a job. In the next post we will create a simple table-to-table mapping and compare it with ODI. That's it for today.

Thank you!!!

About Bhabani
Bhabani has 12-plus years of experience in data warehousing and analytics projects spanning multiple domains, including travel, banking and financial services, and betting and gaming. He focuses on designing data warehouses and integrating them with cloud platforms like AWS and GCP. He has also been an Elite-level contributor at the OTN forum for more than 9 years. He loves to experiment and build POCs with different integration tools and services. Some of his favorite skills are Redshift, BigQuery, Python, Apache Airflow, Kafka, HDFS, MapReduce, Hive, HBase, Sqoop, Drill, and Impala.