Sunday, September 3, 2017

ODI Interview questions

What is Oracle Data Integrator (ODI)?
Oracle acquired Sunopsis along with its ETL tool, "Sunopsis Data Integrator", and renamed it Oracle Data Integrator (ODI). ODI is an E-LT (Extract, Load and Transform) tool used for high-speed data movement between disparate systems.


What are Knowledge Modules?
Knowledge Modules are the 'plug-ins' that allow ODI to generate the relevant execution code, across technologies, to perform tasks in one of six areas. The six types of Knowledge Module are:
a) Reverse-engineering knowledge modules, used for reading table and other object metadata from source databases.
b) Journalizing knowledge modules, which record the new and changed data within either a single table or view or a consistent set of tables or views.
c) Loading knowledge modules, used for efficient extraction of data from source databases for loading into a staging area (database-specific bulk unload utilities can be used where available).
d) Check knowledge modules, used for detecting errors in source data.
e) Integration knowledge modules, used for efficiently transforming data from the staging area into the target tables, generating optimized native SQL for the given database.
f) Service knowledge modules, which provide the ability to expose data as web services.

Does ODI support web services?
Yes, ODI is 'SOA' enabled and its web services can be used in 3 ways:
a)The Oracle Data Integrator Public Web Service, that lets you execute a scenario (a published package) from a web service call.
b)Data Services, which provide a web service over an ODI data store (i.e. a table, view or other data source registered in ODI).
c)The ODIInvokeWebService tool that you can add to a package to request a response from a web service.

What is the ODI Console?
The ODI Console is a web-based navigator used to access the Designer, Operator and Topology components through a browser.

Suppose I have 6 interfaces running and the 3rd one fails; how do I run the remaining interfaces?
If you are running a sequential load, the failure stops the remaining interfaces; go to the Operator, right-click the failed interface and click Restart. If you are running all the interfaces in parallel, only the failed interface stops and the other interfaces finish.

What are load plans and what are the types of load plans?
A load plan is a process to run or execute multiple scenarios sequentially, in parallel, or based on a condition. Accordingly, there are three types of load plans: sequential, parallel and condition-based.

What is a profile in ODI?
A profile is a set of object-wise privileges. We can assign profiles to users, and the users inherit their privileges from the profile.


How do you write sub-queries in ODI?
Use a yellow interface with the sub-select option, use a database VIEW, or use an ODI procedure to call the database query directly.

How do you remove duplicates in ODI?
Enable the DISTINCT option at the IKM level; it removes duplicate rows while loading into the target.

Suppose the source contains both unique and duplicate records, and I want to load the unique records into one table and the duplicates into another?
Create two interfaces, or one procedure with two queries: one for the unique values and one for the duplicate values.
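As a rough sketch of the two queries such a procedure could run (SRC_EMP, TGT_UNIQUE, TGT_DUPLICATES and the key column EMP_ID are hypothetical names):

-- rows whose key occurs exactly once in the source
INSERT INTO tgt_unique (emp_id, emp_name)
SELECT emp_id, emp_name
FROM src_emp
WHERE emp_id IN (SELECT emp_id FROM src_emp GROUP BY emp_id HAVING COUNT(*) = 1);

-- rows whose key occurs more than once
INSERT INTO tgt_duplicates (emp_id, emp_name)
SELECT emp_id, emp_name
FROM src_emp
WHERE emp_id IN (SELECT emp_id FROM src_emp GROUP BY emp_id HAVING COUNT(*) > 1);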

How do you implement data validations?
Use filters and mapping expressions in the mapping area; for constraint-based data quality checks, use a CKM with Flow Control enabled.

How do you handle exceptions?
Exceptions can be handled in the Advanced tab of a package step and in the Exception tab of a load plan.

If one interface in a package fails, how can we know which interface failed when we have no access to the Operator?
Configure a mail alert, or check the SNP_SESS_LOG tables in the work repository for the session log details.

How do you implement logic in a procedure so that data deleted on the source side is also deleted from the target table?
Use a query like the following in the Command on Target:

DELETE FROM target_table
WHERE NOT EXISTS (SELECT 'X' FROM source_table
                  WHERE source_table.id = target_table.id);

If the source has 15 records in total, of which 2 are updated and 3 are newly inserted, and we have to load only the changed and newly inserted records into the target?
Use the IKM Incremental Update knowledge module; it handles both insert and update operations.
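Conceptually, the flow generated by such an IKM behaves like a MERGE; a minimal sketch, assuming hypothetical TARGET_EMP and SOURCE_EMP tables keyed on EMP_ID:

MERGE INTO target_emp t
USING source_emp s
ON (t.emp_id = s.emp_id)
WHEN MATCHED THEN
  UPDATE SET t.emp_name = s.emp_name, t.salary = s.salary   -- existing rows are updated
WHEN NOT MATCHED THEN
  INSERT (emp_id, emp_name, salary)                          -- new rows are inserted
  VALUES (s.emp_id, s.emp_name, s.salary);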

How do you load data by joining one flat file and one RDBMS table?
Drag and drop both the file and the table into the source area of the interface and define the join; it is executed in the staging area.

If the source and target are both Oracle, describe the process (interfaces, KMs, models) to achieve this requirement.
Use LKM SQL to SQL or LKM SQL to Oracle for loading, and IKM Oracle Incremental Update or IKM Oracle Control Append for integration.

What do we specify in the XML data server, and which parameters are used to connect to an XML file?
Two parameters: f (the file name with its location) and s (the schema name).
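For example, the JDBC URL of an XML data server typically looks like the following (the file path and schema name here are just placeholders):

jdbc:snps:xml?f=C:/data/customers.xml&s=CUSTOMERS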

How do you reverse-engineer views (i.e., load data from views)?
In the model, go to the Reverse Engineer tab and select VIEW as the type of object to reverse-engineer.


Data Modeling

Conceptual Data Modeling

4. Data Modeling 



4.1 Conceptual Data Modeling
      4.1.1 Introduction to Conceptual Data Model
      4.1.2 Stages in Conceptual Modeling
      4.1.3 Components of a Conceptual Data Model
      4.1.4 ER Modeling Basic Concepts
      4.1.5 Enhanced ER Modeling
      4.1.6 Guidelines for ER Modeling
4.1.1 Introduction to Conceptual Data Model

A conceptual schema or conceptual data model is a map of concepts and their relationships. It describes the semantics of an organization and represents a series of assertions about its nature. Specifically, it describes the things of significance to an organization (entity classes) about which it is inclined to collect information, the characteristics of those things (attributes), and the associations between pairs of those things of significance (relationships).

A conceptual data model identifies the highest-level relationships between the different entities. 
Features of a conceptual data model include:
  • It includes the important entities and the relationships among them.
  • No attributes are specified.
  • No primary keys are specified.
4.1.2 Stages in Conceptual Modeling

Main stages in conceptual modeling are as follows:

--  Identification of requirements (done in previous lesson)
--  Designing of solutions
--  Evaluation of solutions
4.1.3 Components of a Conceptual Data Model
  
4.1.4 ER Modeling Basic Concepts
ER Model

The ER model is a conceptual model. It describes data as entities, relationships and attributes. There is no standard notation for displaying ER diagrams; we will choose one among several alternatives for this presentation.

Entity
  • An entity is a "thing" in the real world with an independent existence.
  • An entity may be an object with a physical existence, for example a person, a car or a house.
  • An entity may be an object with a conceptual existence, for example a company, a job or a university course.


4.1.5 Enhanced ER Modeling
  
The Enhanced ER model includes all the modeling concepts of the ER model. In addition, it includes the following concepts:
  • Subclass & super-class
  • Specialization & generalization
  • Category
  • Attribute & relationship inheritance
Subclass & Super-class


  • In many cases an entity type has numerous sub-groupings of its entities that are meaningful and need to be represented explicitly because of their significance to the database application. Each of these sub-groupings is called a subclass.
  • The entity type on which these subclasses are formed is called the super-class.
  • The relationship between a super-class and any one of its subclasses is called a super-class/subclass, a class/subclass or an IS-A (or IS-AN) relationship, e.g., a SECRETARY IS-AN EMPLOYEE
  • A member entity of a subclass represents the same real-world entity as some member of the super-class, but in a distinct specific role
Specialization (Top to Bottom Approach)

Specialization is the process of defining a set of subclasses of an entity type

In addition, it allows us to do the following:
  •  Establish additional specific attributes for each subclass
  •  Establish additional specific relationship types between each subclass and other entity types or other subclasses

Logical Data Modeling

Introduction to Logical Model


A logical data model represents the detailed, normalized design of the common data model that is required to support the design of an information system, independent of any particular database implementation. At the very core of the logical data model is the definition of the three types of data objects which are the building blocks of the data model: entities, attributes, and relationships. Entities refer to persons, places, events or things which are of particular interest to the company.

Some examples of entities are Employees, States, Orders, and Time Sheets. Attributes refer to the properties of the entities. Examples of attributes for the Employee entity are first name, birthday, gender, address and age, among many others. Lastly, relationships refer to the way in which the entities relate to each other. An example relationship would be "customers purchase products" or "students enroll in classes".

If the database is ported to another DBMS supporting a similar structure, the logical data model can still be used as a baseline for the new physical data model.


Characteristics of a Logical Model

  • Logical model works in an iterative manner.
  • Its design is independent of database.
  • It includes all entities and relationships among them.
  • All attributes for each entity are specified.
  • The primary key for each entity is specified.
  • Foreign keys (keys identifying the relationship between different entities) are specified.



Physical Data Modeling

Physical Database Design

    Physical database design is the process of developing a set of required data structures on a selected database. It involves the following tasks: 
    • Converting entities into tables
    • Converting relationships into foreign keys
    • Converting attributes into columns
    • Defining constraints
     The purpose of physical design is to optimize performance as much as possible; a minimal DDL sketch of these conversions is shown below.
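As a rough illustration of these tasks, assuming a hypothetical CUSTOMER/ORDER model on Oracle:

-- entity converted into a table; attributes converted into columns
CREATE TABLE customers (
  customer_id   NUMBER(10)     NOT NULL,
  customer_name VARCHAR2(100)  NOT NULL,
  CONSTRAINT pk_customers PRIMARY KEY (customer_id)          -- constraint defined
);

CREATE TABLE orders (
  order_id    NUMBER(10) NOT NULL,
  customer_id NUMBER(10) NOT NULL,
  order_date  DATE       NOT NULL,
  CONSTRAINT pk_orders PRIMARY KEY (order_id),
  CONSTRAINT fk_orders_customers                              -- relationship converted into a foreign key
    FOREIGN KEY (customer_id) REFERENCES customers (customer_id)
);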





     Along with the logical data model, the database designer requires the following to make sound design decisions:

    • The Process Model, detailing input processes (creation and updating of rows in tables) and output requirements (retrieval of data from the database)
    • The mapping that shows the processes that access each entity class and how (create, update, retrieve)
    • Nonstructural data requirements
    • Performance requirements
    • The target DBMS
    • Disk space requirement
    • Availability of skilled programming resources






INDIRECT LOADING FOR FLAT FILES

Suppose you have 10 flat files of the same structure. All the flat files have the same number of columns and data types. Now we need to transfer all 10 files to the same target. 
Names of files are say EMP1, EMP2 and so on. 
 
Solution: 
1. Import one flat file definition and make the mapping as per need. 
2. Now make a notepad file that contains the location and name of each of the 10 flat files. 
Sample: 
D:\EMP1.txt 
E:\EMP2.txt 
E:\FILES\DWH\EMP3.txt and so on 
3. Now make a session and, in the Source File Name and Source File Directory fields, give the name and location of the file created above. 
4. In Source file type field, select Indirect. 
5. Click Apply
6. Validate Session
7. Make Workflow. Save it to repository and run.

WORKING WITH TASKS IN INFORMATICA

The Workflow Manager contains many types of tasks to help you build workflows and worklets. We can create reusable tasks in the Task Developer. 
Types of tasks:
  • Session: created in the Task Developer; reusable.
  • Email: created in the Workflow Designer; reusable.
  • Command: created in the Worklet Designer; reusable.
  • Event-Raise: created in the Workflow Designer; not reusable.
  • Event-Wait: created in the Worklet Designer; not reusable.
  • Timer: created in the Workflow or Worklet Designer; not reusable.
  • Decision: created in the Workflow or Worklet Designer; not reusable.
  • Assignment: created in the Workflow or Worklet Designer; not reusable.
  • Control: created in the Workflow or Worklet Designer; not reusable.

SESSION TASK
  • A session is a set of instructions that tells the Power Center Server how and when to move data from sources to targets.
  • To run a session, we must first create a workflow to contain the Session task.
  • We can run as many sessions in a workflow as we need. We can run the Session tasks sequentially or concurrently, depending on our needs.
  • The Power Center Server creates several files and in-memory caches depending on the transformations and options used in the session.
EMAIL TASK
  • The Workflow Manager provides an Email task that allows us to send email during a workflow.
  • It is usually created by the administrator, and we just drag and use it in our workflow.
Steps: 
  1. In the Task Developer or Workflow Designer, choose Tasks-Create.
  2. Select an Email task and enter a name for the task. Click Create.
  3. Click Done.
  4. Double-click the Email task in the workspace. The Edit Tasks dialog box appears.
  5. Click the Properties tab.
  6. Enter the fully qualified email address of the mail recipient in the Email User Name field.
  7. Enter the subject of the email in the Email Subject field. Or, you can leave this field blank.
  8. Click the Open button in the Email Text field to open the Email Editor.
  9. Click OK twice to save your changes.
Example: To send an email when a session completes: 
Steps: 
  1. Create a workflow wf_sample_email
  2. Drag any session task to workspace.
  3. Edit Session task and go to Components tab.
  4. See On Success Email Option there and configure it.
  5. In Type select reusable or Non-reusable.
  6. In Value, select the email task to be used.
  7. Click Apply -> Ok.
  8. Validate workflow and Repository -> Save
  • We can also drag the email task and use as per need.
  • We can set the option to send email on success or failure in components tab of a session task.
COMMAND TASK
The Command task allows us to specify one or more shell commands in UNIX or DOS commands in Windows to run during the workflow.
For example, we can specify shell commands in the Command task to delete reject files, copy a file, or archive target files. 
Ways of using command task: 
1. Standalone Command task: We can use a Command task anywhere in the workflow or worklet to run shell commands.
2. Pre- and post-session shell command: We can call a Command task as the pre- or post-session shell command for a Session task. This is done in COMPONENTS TAB of a session. We can run it in Pre-Session Command or Post Session Success Command or Post Session Failure Command. Select the Value and Type option as we did in Email task. 
Example: to copy a file sample.txt from D drive to E. 
Command: COPY D:\sample.txt E:\ in windows 
Steps for creating command task: 
  1. In the Task Developer or Workflow Designer, choose Tasks-Create.
  2. Select Command Task for the task type.
  3. Enter a name for the Command task. Click Create. Then click done.
  4. Double-click the Command task. Go to commands tab.
  5. In the Commands tab, click the Add button to add a command.
  6. In the Name field, enter a name for the new command.
  7. In the Command field, click the Edit button to open the Command Editor.
  8. Enter only one command in the Command Editor.
  9. Click OK to close the Command Editor.
  10. Repeat steps 5-9 to add more commands in the task.
  11. Click OK.
Steps to create the workflow using command task: 
  1. Create a task using the above steps to copy a file in Task Developer.
  2. Open Workflow Designer. Workflow -> Create -> Give name and click ok.
  3. Start is displayed. Drag session say s_m_Filter_example and command task.
  4. Link Start to Session task and Session to Command Task.
  5. Double click link between Session and Command and give condition in editor as
  6. $S_M_FILTER_EXAMPLE.Status=SUCCEEDED
  7. Workflow-> Validate
  8. Repository –> Save
WORKING WITH EVENT TASKS
We can define events in the workflow to specify the sequence of task execution.
Types of Events: 
  • Pre-defined event: A pre-defined event is a file-watch event. This event waits for a specified file to arrive at a given location.
  • User-defined event: A user-defined event is a sequence of tasks in the Workflow. We create events and then raise them as per need.
Steps for creating User Defined Event: 
  1. Open any workflow where we want to create an event.
  2. Click Workflow-> Edit -> Events tab.
  3. Click the Add button to add events and give the names as per need.
  4. Click Apply -> Ok. Validate the workflow and Save it.
Types of Events Tasks: 
  • EVENT RAISE: Event-Raise task represents a user-defined event. We use this task to raise a user defined event.
  • EVENT WAIT: Event-Wait task waits for a file watcher event or user defined event to occur before executing the next session in the workflow.
Example1: Use an event wait task and make sure that session s_filter_example runs when abc.txt file is present in D:\FILES folder.
Steps for creating workflow: 
  1. Workflow -> Create -> Give name wf_event_wait_file_watch -> Click ok.
  2. Task -> Create -> Select Event Wait. Give name. Click create and done.
  3. Link Start to Event Wait task.
  4. Drag s_filter_example to workspace and link it to event wait task.
  5. Right click on event wait task and click EDIT -> EVENTS tab.
  6. Select the Pre Defined option there. In the blank space, give the directory and file name to watch. Example: D:\FILES\abc.txt
  7. Workflow validate and Repository Save.
Example 2: Raise a user defined event when session s_m_filter_example succeeds. Capture this event in event wait task and run session S_M_TOTAL_SAL_EXAMPLE 
Steps for creating workflow: 
  1. Workflow -> Create -> Give name wf_event_wait_event_raise -> Click ok.
  2. Workflow -> Edit -> Events Tab and add events EVENT1 there.
  3. Drag s_m_filter_example and link it to START task.
  4. Click Tasks -> Create -> Select EVENT RAISE from list. Give name
  5. ER_Example. Click Create and then Done. Link ER_Example to s_m_filter_example.
  6. Right click ER_Example -> EDIT -> Properties Tab -> Open Value for User Defined Event and Select EVENT1 from the list displayed. Apply -> OK.
  7. Click link between ER_Example and s_m_filter_example and give the condition $S_M_FILTER_EXAMPLE.Status=SUCCEEDED
  8. Click Tasks -> Create -> Select EVENT WAIT from list. Give name EW_WAIT. Click Create and then done.
  9. Link EW_WAIT to START task.
  10. Right click EW_WAIT -> EDIT-> EVENTS tab.
  11. Select User Defined there. Select the Event1 by clicking Browse Events button.
  12. Apply -> OK.
  13. Drag S_M_TOTAL_SAL_EXAMPLE and link it to EW_WAIT.
  14. Mapping -> Validate
  15. Repository -> Save.
  16. Run workflow and see.

TIMER TASK
The Timer task allows us to specify the period of time to wait before the Power Center Server runs the next task in the workflow. The Timer task has two types of settings:
  • Absolute time: We specify the exact date and time or we can choose a user-defined workflow variable to specify the exact time. The next task in workflow will run as per the date and time specified.
  • Relative time: We instruct the Power Center Server to wait for a specified period of time after the Timer task, the parent workflow, or the top-level workflow starts.
Example: Run session s_m_filter_example 1 minute after the Timer task starts (relative time). 
Steps for creating workflow: 
  1. Workflow -> Create -> Give name wf_timer_task_example -> Click ok.
  2. Click Tasks -> Create -> Select TIMER from list. Give name TIMER_Example. Click Create and then done.
  3. Link TIMER_Example to START task.
  4. Right click TIMER_Example-> EDIT -> TIMER tab.
  5. Select Relative Time Option and Give 1 min and Select ‘From start time of this task’ Option.
  6. Apply -> OK.
  7. Drag s_m_filter_example and link it to TIMER_Example.
  8. Workflow-> Validate and Repository -> Save.
DECISION TASK
  • The Decision task allows us to enter a condition that determines the execution of the workflow, similar to a link condition.
  • The Decision task has a pre-defined variable called $Decision_task_name.condition that represents the result of the decision condition.
  • The Power Center Server evaluates the condition in the Decision task and sets the pre-defined condition variable to True (1) or False (0).
  • We can specify one decision condition per Decision task.
Example: Command Task should run only if either s_m_filter_example or 
S_M_TOTAL_SAL_EXAMPLE succeeds. If any of s_m_filter_example or 
S_M_TOTAL_SAL_EXAMPLE fails then S_m_sample_mapping_EMP should run. 
Steps for creating workflow: 
  1. Workflow -> Create -> Give name wf_decision_task_example -> Click ok.
  2. Drag s_m_filter_example and S_M_TOTAL_SAL_EXAMPLE to workspace and link both of them to START task.
  3. Click Tasks -> Create -> Select DECISION from list. Give name DECISION_Example. Click Create and then done. Link DECISION_Example to both s_m_filter_example and S_M_TOTAL_SAL_EXAMPLE.
  4. Right click DECISION_Example-> EDIT -> GENERAL tab.
  5. Set ‘Treat Input Links As’ to OR. Default is AND. Apply and click OK.
  6. Now edit decision task again and go to PROPERTIES Tab. Open the Expression editor by clicking the VALUE section of Decision Name attribute and enter the following condition: $S_M_FILTER_EXAMPLE.Status = SUCCEEDED OR $S_M_TOTAL_SAL_EXAMPLE.Status = SUCCEEDED
  7. Validate the condition -> Click Apply -> OK.
  8. Drag command task and S_m_sample_mapping_EMP task to workspace and link them to DECISION_Example task.
  9. Double click link between S_m_sample_mapping_EMP & DECISION_Example & give the condition: $DECISION_Example.Condition = 0. Validate & click OK.
  10. Double click link between Command task and DECISION_Example and give the condition: $DECISION_Example.Condition = 1. Validate and click OK.
  11. Workflow Validate and repository Save.
  12. Run workflow and see the result.
CONTROL TASK
  • We can use the Control task to stop, abort, or fail the top-level workflow or the parent workflow based on an input link condition.
  • A parent workflow or worklet is the workflow or worklet that contains the Control task.
  • We give the condition to the link connected to Control Task.
Control Options:
  • Fail Me: fails the Control task.
  • Fail Parent: marks the status of the WF or worklet that contains the Control task as failed.
  • Stop Parent: stops the WF or worklet that contains the Control task.
  • Abort Parent: aborts the WF or worklet that contains the Control task.
  • Fail Top-Level WF: fails the workflow that is running.
  • Stop Top-Level WF: stops the workflow that is running.
  • Abort Top-Level WF: aborts the workflow that is running.

Example: Drag any 3 sessions and if anyone fails, then Abort the top level workflow. 
Steps for creating workflow: 
  1. Workflow -> Create -> Give name wf_control_task_example -> Click ok.
  2. Drag any 3 sessions to workspace and link all of them to START task.
  3. Click Tasks -> Create -> Select CONTROL from list. Give name cntr_task.
  4. Click Create and then done.
  5. Link all sessions to the control task cntr_task.
  6. Double click the link between cntr_task and any session, say s_m_filter_example, and give the condition: $S_M_FILTER_EXAMPLE.Status = FAILED.
  7. Repeat above step for remaining 2 sessions also.
  8. Right click cntr_task-> EDIT -> GENERAL tab. Set ‘Treat Input Links As’ to OR. Default is AND.
  9. Go to the PROPERTIES tab of cntr_task and select the value ‘Abort Top-Level Workflow’ for Control Option. Click Apply and OK.
  10. Workflow Validate and repository Save.
  11. Run workflow and see the result.
ASSIGNMENT TASK
  • The Assignment task allows us to assign a value to a user-defined workflow variable.
  • See Workflow variable topic to add user defined variables.
  • To use an Assignment task in the workflow, first create and add the Assignment task to the workflow, then configure it to assign values or expressions to user-defined variables.
  • We cannot assign values to pre-defined workflow variables.
Steps to create Assignment Task: 
  1. Open any workflow where we want to use Assignment task.
  2. Edit Workflow and add user defined variables.
  3. Choose Tasks-Create. Select Assignment Task for the task type.
  4. Enter a name for the Assignment task. Click Create. Then click Done.
  5. Double-click the Assignment task to open the Edit Task dialog box.
  6. On the Expressions tab, click Add to add an assignment.
  7. Click the Open button in the User Defined Variables field.
  8. Select the variable for which you want to assign a value. Click OK.
  9. Click the Edit button in the Expression field to open the Expression Editor.
  10. Enter the value or expression you want to assign.
  11. Repeat steps 7-10 to add more variable assignments as necessary.
  12. Click OK.
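For instance, a user-defined workflow variable named $$RUN_COUNT (a hypothetical name declared on the workflow's Variables tab) could be incremented on each run by assigning it the expression:

$$RUN_COUNT + 1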

Matching Data Between Two Sources Using Part of a String

The illustration below explains different scenarios to match data between two sources using part of a string.

Let us understand the requirement with an example.

The MASTER_LIST source contains a list of strings that need to be searched against the BANK_LIST source. The two sources are shown below.


For instance, if the search string is 'INDIA' in the MASTER_LIST source, the matching rows in the BANK_LIST source will be 'Indian Bank' and 'STATE BANK OF INDIA', since both bank names contain the string 'INDIA'. The resultant rows will be loaded into the BANK_MASTER table as shown below.


The matching between the two sources should not be case-sensitive.


Scenario 1: When both the sources are relational tables and reside in the same database. The database for this example is Oracle.

The INSTR function returns the location of a substring in a string. If the INSTR function returns '0', it implies, the substring is not present in the original string. A SQL query that matches the data between the two sources is given below.

SELECT
mstr.master_id,
bank.bank_id,
bank.bank_name
FROM
master_list mstr, bank_list bank
WHERE
INSTR(bank.bank_name, mstr.search_string) > 0

The above query returns only one row as shown below.


The query should not be case-sensitive, so it can be re-written as below by converting both the string (bank.bank_name) and the substring (mstr.search_string) to lower case. This query should be added to the SQL Query attribute in the Source Qualifier properties.

SELECT
mstr.master_id,
bank.bank_id,
bank.bank_name
FROM
master_list mstr, bank_list bank
WHERE
INSTR(LOWER(bank.bank_name), LOWER(mstr.search_string)) > 0

The mapping is a simple pass-through mapping as shown below.


The final output will be the same as shown in the BANK_MASTER table below with the desired four resultant rows.




Scenario 2: When both the sources are flat files or one source is a flat file and the other is a relational table.

In this example, both the sources are flat files. The mapping implementation is shown below.



After the Source Qualifier transformations, create two Expression transformations as shown above. Create two output ports, MASTER_KEY and BANK_KEY, in the Expression transformations EXP_Master_List and EXP_Bank_List respectively, and in the expression editor pass the integer value '1'. These values will serve as a dummy join key to merge the rows from both the flat files in the Joiner transformation.


In the joiner transformation, designate the MASTER_LIST source as the "Master" source since it has fewer rows as compared to the BANK_LIST source. The Joiner condition is shown below. The Join type is "Normal Join".
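Given the dummy output ports created in the Expression transformations, the join condition would simply equate them, along the lines of:

MASTER_KEY = BANK_KEY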





The Joiner transformation with these dummy keys effectively produces a Cartesian product, i.e. every row from one source is paired with every row from the other. We need to select only those rows that meet the defined criteria. This is achieved by using a Filter transformation with the filter condition shown below.
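Presumably the filter applies the same substring check as in Scenario 1, written with Informatica's INSTR and LOWER functions, for example:

INSTR(LOWER(BANK_NAME), LOWER(SEARCH_STRING)) > 0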





Scenario 3: When both the sources are relational tables but reside in different databases.



A similar approach to Scenario 2 can be used in this case too, but it would mean joining all the rows between the two source tables in the Joiner transformation. Instead, we can issue a query to the BANK_LIST table similar to the query in Scenario 1, except that the query won't reference the MASTER_LIST table since it is in a different database.


The SQL transformation can be used to process queries midstream and to get the matching rows from the BANK_LIST table.



The source definitions for the MASTER_LIST and BANK_LIST tables are shown below. The two tables are in separate databases, and there are no DB links between the two databases.




The BANK_MASTER target definition is shown below.




Create a new mapping. Drag the MASTER_LIST source definition and BANK_MASTER target definition into the Mapping Designer workspace as shown below.




Create a SQL transformation SQL_Get_Bank_Details as shown below. Click Create to proceed.




Proceed with the default settings as shown below. Click OK and Done to continue.





The SQL transformation needs to be run in the Query Mode since the SQL queries issued to the BANK_LIST table will be dynamic i.e. since the MASTER_LIST table contains two search strings 'INDIA' and 'AMERICA', two queries will be issued as given below.



SELECT BANK_ID, BANK_NAME
FROM
BANK_LIST
WHERE
INSTR(LOWER(BANK_NAME), LOWER('INDIA')) > 0;


SELECT BANK_ID, BANK_NAME
FROM
BANK_LIST
WHERE
INSTR(LOWER(BANK_NAME), LOWER('AMERICA')) > 0;


Drag the MASTER_ID and SEARCH_STRING ports from the Source Qualifier to the SQL transformation as shown below.





Double click on the SQL transformation to edit it. Go to the SQL Ports tab. Uncheck the SEARCH_STRING as an output port since it is not required in the target as shown below. Only the MASTER_ID needs to be passed to the target, so it remains as an Input/Output port.




Add two SQL output ports BANK_ID and BANK_NAME as shown below ensuring that the correct Native Type and Precision are selected for each.





Next click on the section highlighted in red above to open the SQL Editor that will contain the SQL query that gets issued midstream. Type the query as shown below. Ensure that the order of the fields in the SELECT clause match the order of the SQL output ports.




Now, since the 'SEARCH_STRING' needs to change dynamically as shown in the two queries above, we need to use String Substitution. Click on the SEARCH_STRING port below String Substitution to add it to the query as shown below.




Modify the query as shown below, so that it matches the above two queries that need to be issued to the BANK_LIST table.
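Assuming the SEARCH_STRING port is referenced with tilde delimiters (string substitution syntax), the modified query would be along the lines of:

SELECT BANK_ID, BANK_NAME
FROM BANK_LIST
WHERE
INSTR(LOWER(BANK_NAME), LOWER('~SEARCH_STRING~')) > 0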





Click OK to continue. Link the MASTER_ID output, BANK_ID and BANK_NAME ports from the SQL transformation to the target definition. The complete mapping is shown below.





In the session task, mention the correct relational connections. A relational connection (Database_B) needs to be specified for the SQL transformation too. As shown below, the MASTER_LIST table is in Database_A, the BANK_LIST table is in Database_B and the BANK_MASTER target table is in Database_C.