Informatica Scenario-Based Interview Questions & Answers


In our previous blog, we presented a list of frequently asked Informatica interview questions that you must prepare for the Informatica interview round. Now, here we are, one step ahead, with a list of top Informatica scenario-based interview questions that recruiters often ask during Informatica interview sessions.


Now that you are just one step away from landing your dream job, you must prepare well for all the probable interview questions, as every interview round is different, especially when scenario-based questions are asked.


Top 10 Scenario-Based Informatica Interview Questions


Q1. How to load the last N rows from a flat-file into a target table in Informatica?


Ans. Consider that the source has the following data:


  • Col
  • ABC
  • DEF
  • GHI
  • JKL
  • MNO


Now follow the below steps to load the last 3 rows into a target table:


Step 1:


  • Assign a row number to each record using an expression transformation. Name the output port holding the row number N_calculate.
  • In the same expression transformation, create a dummy output port and assign 1 to it.
  • This will return 1 for each row.


Ports in Expression Transformation
V_calculate=V_calculate+1
N_calculate=V_calculate
N_dummy=1

Outputs in Expression Transformation
col, N_calculate, N_dummy
ABC, 1, 1
DEF, 2, 1
GHI, 3, 1
JKL, 4, 1
MNO, 5, 1

Step 2:


  • Pass the expression transformation output to an aggregator transformation
  • Do not specify any group by condition
  • Create an N_total_records output port in the aggregator
  • Assign the N_calculate port to it
  • By default, the aggregator will return the last row
  • Its output will contain the dummy port (holding the value 1) and the N_total_records port (holding the total number of records in the source)


Ports in Aggregator Transformation
N_dummy
N_calculate
N_total_records=N_calculate

Outputs in Aggregator Transformation
N_total_records, N_dummy
5, 1

Step 3


  • Now pass the expression and aggregator transformation outputs to a joiner transformation
  • Join on the dummy port
  • Check the sorted input property in the joiner transformation to connect both the expression and aggregator transformations
  • The join condition will be N_dummy (port from the aggregator transformation) = N_dummy (port from the expression transformation)


Outputs in Joiner Transformation
col, N_calculate, N_total_records
ABC, 1, 5
DEF, 2, 5
GHI, 3, 5
JKL, 4, 5
MNO, 5, 5

Step 4


  • Pass the joiner transformation output to a filter transformation
  • Mention the filter condition as N_total_records (port from the aggregator) - N_calculate (port from the expression) <= 2
  • Thus, the filter condition in the filter transformation will be N_total_records - N_calculate <= 2




Outputs in Filter Transformation
col, N_calculate, N_total_records
GHI, 3, 5
JKL, 4, 5
MNO, 5, 5
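The pipeline above can be sanity-checked outside Informatica. Below is a minimal Python sketch (not Informatica itself) of the same logic: number each row, derive the total count, join it back on the dummy port, and filter where the total minus the row number is at most 2.

```python
rows = ["ABC", "DEF", "GHI", "JKL", "MNO"]

# Expression transformation: running row number plus a dummy constant of 1
numbered = [(col, i + 1, 1) for i, col in enumerate(rows)]

# Aggregator with no group-by: returns the last row, so its N_calculate
# value is the total number of source records
n_total_records = numbered[-1][1]

# Joiner on the dummy port, then filter: keep rows within 2 of the total
last_three = [(col, n, n_total_records) for col, n, _ in numbered
              if n_total_records - n <= 2]
print(last_three)  # [('GHI', 3, 5), ('JKL', 4, 5), ('MNO', 5, 5)]
```

Changing the `<= 2` threshold to `<= N-1` generalizes this to the last N rows.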



Q2. Solve the below situation if the data has duplicate rows.


Data


  • Amazon
  • Walmart
  • Snapdeal
  • Snapdeal
  • Walmart
  • Flipkart
  • Walmart


Situation – Give the steps to load all unique names into one table and duplicate names into another table.


Solution 1 – We want the output tables as:


Amazon and Flipkart in one table


And


Walmart, Walmart, Walmart, Snapdeal, and Snapdeal in another table


Follow the below steps


  • Sort the name data by using a sorter transformation
  • Pass the sorted output to an expression transformation
  • Form a dummy port N_dummy and assign 1 to the port
  • Now for each row, the Dummy output port will return 1


Expression Transformation Output
Name, N_dummy
Amazon, 1
Walmart, 1
Walmart, 1
Walmart, 1
Snapdeal, 1
Snapdeal, 1
Flipkart, 1

  • Pass the acquired expression transformation output to an aggregator transformation
  • Check 'group by' on the name port
  • Create an output port N_calculate_of_each_name in the aggregator and write the expression COUNT(name)


Aggregator Transformation Output
name, N_calculate_of_each_name
Amazon, 1
Walmart, 3
Snapdeal, 2
Flipkart, 1

  • Pass the expression and aggregator transformation outputs to a joiner transformation
  • Join on the name ports
  • Check the sorted input property to connect both transformations to the joiner transformation


Joiner Transformation Output
name, N_dummy, N_calculate_of_each_name
Amazon, 1, 1
Walmart, 1, 3
Walmart, 1, 3
Walmart, 1, 3
Snapdeal, 1, 2
Snapdeal, 1, 2
Flipkart, 1, 1

  • Move the joiner output to a router transformation
  • Create one group
  • Specify its condition as N_dummy = N_calculate_of_each_name
  • Connect this group to one table
  • Connect the default output group to another table
  • You will get separate tables for both
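As a quick sanity check, the same routing logic can be sketched in Python; the `Counter` below stands in for the aggregator's group-by count, and the two list comprehensions play the role of the router's groups.

```python
from collections import Counter

names = ["Amazon", "Walmart", "Snapdeal", "Snapdeal",
         "Walmart", "Flipkart", "Walmart"]

# Aggregator: count of each name
counts = Counter(names)

# Router: rows whose name count equals 1 go to the unique table,
# everything else (the default group) goes to the duplicates table
unique_table = [n for n in names if counts[n] == 1]
duplicate_table = [n for n in names if counts[n] > 1]
print(unique_table)     # ['Amazon', 'Flipkart']
print(duplicate_table)  # ['Walmart', 'Snapdeal', 'Snapdeal', 'Walmart', 'Walmart']
```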


Q3. Situation 2 – Solve the below situation if the data has duplicate rows.


Data


  • Amazon
  • Walmart
  • Snapdeal
  • Snapdeal
  • Walmart
  • Flipkart
  • Walmart


Situation – Load each name only once into one table and the duplicate occurrences into another table.


Ans.


Solution 2 – We want the output as:


Table 1 


Amazon
Walmart
Snapdeal
Flipkart

Table 2


Walmart
Walmart
Snapdeal

The below steps will give the desired solution:


  • Sort the name data using a sorter transformation
  • Pass the sorted output to an expression transformation
  • Create a variable port Z_curr_name
  • Assign the name port to this variable port
  • Create a variable port Z_calculate
  • In the expression editor, write IIF(Z_curr_name=Z_prev_name, Z_calculate+1, 1)
  • Form another variable port Z_prev_name
  • Assign the name port to this variable
  • Form the output port N_calculate
  • Assign Z_calculate to this output port


Expression Transformation Name port
Z_curr_name=name
Z_calculate=IIF(Z_curr_name=Z_prev_name, Z_calculate+1, 1)
Z_prev_name=name
N_calculate=Z_calculate

 


Expression Transformation Output
Amazon, 1
Walmart, 1
Walmart, 2
Walmart, 3
Snapdeal, 1
Snapdeal, 2
Flipkart, 1

  • Route the expression transformation output to a router transformation
  • Form a group
  • Specify the condition as N_calculate=1
  • Connect this group to one table
  • Connect the default group output to another table
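The running-count trick above can be sketched in Python (a sketch only, not Informatica): over sorted input, a counter resets to 1 whenever the name changes, so the first occurrence of each name routes to table 1 and every repeat routes to table 2.

```python
names = sorted(["Amazon", "Walmart", "Snapdeal", "Snapdeal",
                "Walmart", "Flipkart", "Walmart"])

# Expression transformation: running count per name over sorted input
table_1, table_2, prev = [], [], None
calculate = 0
for name in names:
    calculate = calculate + 1 if name == prev else 1
    # Router: N_calculate = 1 goes to table 1, the default group to table 2
    (table_1 if calculate == 1 else table_2).append(name)
    prev = name
print(table_1)  # ['Amazon', 'Flipkart', 'Snapdeal', 'Walmart']
print(table_2)  # ['Snapdeal', 'Walmart', 'Walmart']
```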


Q4. In Informatica, how do you use Normalizer Transformation for the below-mentioned condition?



State, Quarter 1 Purchase, Quarter 2 Purchase, Quarter 3 Purchase, Quarter 4 Purchase
ABC, 80, 85, 90, 95
DEF, 60, 65, 70, 75

Ans. Normalizer Transformation helps when you want to transform a single row into multiple rows. It can also convert multiple rows into a single row to make the data look organized. As per the above scenario-based Informatica interview question, we want the solution to look like:


State Name, Quarter, Purchase
ABC, 1, 80
ABC, 2, 85
ABC, 3, 90
ABC, 4, 95
DEF, 1, 60
DEF, 2, 65
DEF, 3, 70
DEF, 4, 75

Follow the steps to achieve the desired solution by using normalizer transformation:


Step 1 –


  • Create a table "purchase_source" and assign a target table as "purchase_target"
  • Import the tables into Informatica
  • Create a mapping for both tables, with "purchase_source" as the source and "purchase_target" as the target
  • Create a new transformation from the transformation menu
  • Enter the name “xyz_purchase”
  • Select create option
  • Select done (now the transformation is created)


Step 2 –


  • Double-click on the normalizer transformation
  • Go to the normalizer tab and select it
  • From the tab, click on the icon; this will create two columns
  • Enter the names of the columns
  • Set the number of occurrences to 4 for purchase and 0 for the state name
  • Select OK
  • 4 columns will be generated and appear in the transformation


Step 3 –


  • In the mapping, link all four Quarter columns in the source qualifier to the normalizer
  • Link the state name column to the normalizer's state name column
  • Link the state_name and purchase columns to the target table
  • Link the GCID (generated column ID) column to the target table's quarter column
  • Create the session and workflow
  • Save the mapping and execute it
  • You will get the desired rearranged output


State Name, Quarter, Purchase
ABC, 1, 80
ABC, 2, 85
ABC, 3, 90
ABC, 4, 95
DEF, 1, 60
DEF, 2, 65
DEF, 3, 70
DEF, 4, 75
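What the normalizer does here can be sketched in plain Python: pivot one row holding four quarter columns into four (state, quarter, purchase) rows. This is only an illustration of the transformation's effect, not Informatica code.

```python
# Source: one row per state with a list of four quarterly purchase values
source = [
    ("ABC", [80, 85, 90, 95]),
    ("DEF", [60, 65, 70, 75]),
]

# Normalizer effect: each quarter column becomes its own output row
target = [(state, quarter, purchase)
          for state, purchases in source
          for quarter, purchase in enumerate(purchases, start=1)]
print(target[:4])  # [('ABC', 1, 80), ('ABC', 2, 85), ('ABC', 3, 90), ('ABC', 4, 95)]
```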

Q5. What to do when you get the below error?


AA_10000 Normalizer Transformation: Initialization Error: [Cannot match AASid with BBTid.]


Ans. Follow the below process –


  • Remove all the unconnected input ports to the normalizer transformation
  • If OCCURS is present, check that the number of input ports is equal to the number of OCCURS


Q6. What are the steps to create, design, and implement SCD Type 1 mapping in Informatica using the ETL tool?


Ans. The SCD Type 1 mapping helps in the situation when you don’t want to store historical data in the Dimension table as this method overwrites the previous data with the latest data.


The process to be followed:


  • Identify new records
  • Insert it into the dimension table
  • Identify the changed record
  • Update it in the dimension table 


For example:


If the source table looks like:


CREATE TABLE Students (
  Student_Id   Number,
  Student_Name Varchar2(60),
  Place        Varchar2(60)
);


Now we need to use the SCD Type 1 method to load the data from the source table into the student dimension table.


CREATE TABLE Students_Dim (
  Stud_Key     Number,
  Student_Id   Number,
  Student_Name Varchar2(60),
  Location     Varchar2(60)
);


Follow the below steps to create the SCD Type 1 mapping in Informatica:


  • In the database, create source and dimension tables
  • Create or import source definition in the mapping designer tool’s source analyzer
  • Import the Target Definition from Warehouse designer or Target designer
  • Create a new mapping from the mapping designer tab
  • Drag and drop the source
  • Select Create option from toolbar’s Transformation section
  • Select Lookup Transformation
  • Enter the name and click on create
  • From the window, select Student dimension table and click OK
  • Edit the lkp transformation
  • Add a new port In_Student_Id from the properties tab
  • Connect this port to the source qualifier transformation's Student_Id port
  • In the lkp transformation's condition tab, enter the lookup condition Student_Id = IN_Student_Id
  • Click OK
  • Now, connect the source qualifier transformation's Student_Id port to the lkp transformation's In_Student_Id port
  • Create an expression transformation with the input ports Stud_Key, Name, Location, Src_Name and Src_Location
  • Create the output ports New_Flag and Changed_Flag
  • In the expression transformation's output ports, enter the below-mentioned expressions
    • New_Flag = IIF(ISNULL(Stud_Key),1,0)



    • Changed_Flag = IIF(NOT ISNULL(Stud_Key) AND (Name != Src_Name OR Location != Src_Location), 1, 0)


  • Connect lkp transformation port to expression transformation port
  • Also, connect source qualifier transformation port to expression transformation port
  • Form a filter transformation and move the source qualifier transformation's ports into it
  • Edit the filter transformation and set the filter condition as New_Flag=1
  • Press OK
  • Create an update strategy transformation
  • Connect all the filter transformation ports except the New_Flag port
  • From the properties tab of update strategy, enter DD_INSERT as the strategy expression
  • Drag target definition to mapping
  • Connect relevant ports to target definition from update strategy
  • Create sequence generator transformation
  • Connect NEXTVAL port to target surrogate key port (stud_key)
  • Create a different filter transformation
  • In the filter transformation, drag the lkp transformation's port (Stud_Key), the source qualifier transformation's ports (Name, Location), and the expression transformation's port (Changed_Flag)
  • Go to properties tab to edit the filter transformation
  • Mention the filter condition as Changed_Flag=1
  • Click OK
  • Create another update strategy transformation
  • Connect the filter transformation ports to the update strategy
  • From the update strategy properties tab, enter the expression DD_UPDATE
  • In this mapping, drag target definition
  • From the update strategy, connect all the appropriate ports to target definition
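The insert/update decision the mapping makes can be sketched in a few lines of Python. This is a hypothetical illustration of the SCD Type 1 idea (the student names and cities below are made-up sample data): new keys are inserted, changed rows overwrite the dimension row, and no history is kept.

```python
# Dimension table: Student_Id -> (Name, Location); pre-existing sample row
students_dim = {1: ("Alice", "Delhi")}

source = [(1, "Alice", "Mumbai"),  # existing id, changed location -> update
          (2, "Bob", "Pune")]      # new id -> insert

for student_id, name, location in source:
    lookup = students_dim.get(student_id)          # lookup transformation
    new_flag = 1 if lookup is None else 0          # New_Flag expression
    changed_flag = 1 if lookup and lookup != (name, location) else 0
    if new_flag or changed_flag:                   # DD_INSERT / DD_UPDATE
        students_dim[student_id] = (name, location)

print(students_dim)  # {1: ('Alice', 'Mumbai'), 2: ('Bob', 'Pune')}
```

Note how the previous location "Delhi" is gone after the run; that overwrite is exactly what distinguishes Type 1 from history-keeping SCD types.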


Q7. Give steps to use PMCMD Utility Command.


Ans. Informatica provides four built-in command-line programs:


  • infacmd
  • infasetup
  • pmcmd
  • pmrep


The pmcmd command helps with the following functions:


  • Start workflows
  • Schedule workflows
  • Start a workflow from a specific task
  • Stop and abort workflows and sessions


Below are the steps to use the pmcmd command:


  1. Start workflow

pmcmd startworkflow -service informatica-integration-Service -d domain-name -u user-name -p password -f folder-name -w workflow-name


  2. Scheduling the workflow

pmcmd scheduleworkflow -service informatica-integration-Service -d domain-name -u user-name -p password -f folder-name -w workflow-name


  3. Start a workflow from a specific task

pmcmd startworkflow -service informatica-integration-Service -d domain-name -u user-name -p password -f folder-name -w workflow-name -startfrom task-name


  4. Abort workflow

pmcmd abortworkflow -service informatica-integration-Service -d domain-name -u user-name -p password -f folder-name -w workflow-name


pmcmd aborttask -service informatica-integration-Service -d domain-name -u user-name -p password -f folder-name -w workflow-name task-name
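In practice these commands are often launched from scripts. Below is a hypothetical Python wrapper that assembles a pmcmd startworkflow call; the service, domain, and credential values passed in are placeholders, not real settings.

```python
import subprocess  # needed when actually executing the command

def start_workflow(service, domain, user, password, folder, workflow):
    """Build the pmcmd startworkflow argument list."""
    return ["pmcmd", "startworkflow",
            "-service", service, "-d", domain,
            "-u", user, "-p", password,
            "-f", folder, "-w", workflow]

cmd = start_workflow("informatica-integration-Service", "domain-name",
                     "user-name", "password", "folder-name", "workflow-name")
print(" ".join(cmd))
# On a node where pmcmd is installed: subprocess.run(cmd, check=True)
```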


Q8. How to configure the target load order in Informatica?


Ans. Follow the below steps:


  • Create a mapping containing multiple target load order groups in the PowerCenter designer
  • From the toolbar, click on Mappings and then on Target Load Plan
  • A pop-up will show a list of source qualifier transformations in the mapping, along with the targets that receive data from each source qualifier
  • From the list, pick a source qualifier
  • Using the Up and Down buttons, move the source qualifier within the load order
  • Click OK
  • You will get the desired output


Q9. Using incremental aggregation on the below table, what will be the output in the target table?


Product ID, Bill Number, Cost, Date
101, 1, 100, 01/01/2020
201, 2, 150, 01/01/2020
301, 3, 200, 01/01/2020
101, 4, 300, 05/01/2020
101, 5, 400, 05/01/2020
201, 6, 500, 05/01/2020
555, 7, 550, 05/01/2020
151, 8, 600, 05/01/2020

Ans. When the first load is finished, the table will become:


Product ID, Bill Number, Load_Key, Cost
101, 1, 20011, 100
201, 2, 20011, 150
301, 3, 20011, 200
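The idea behind incremental aggregation can be sketched in Python: the first session run aggregates the 01/01/2020 rows, and a later run folds only the new rows into the stored totals instead of re-reading everything. This is an illustrative sketch, not how PowerCenter stores its cache.

```python
# history plays the role of the persisted aggregate cache:
# Product ID -> running cost total, kept between session runs
history = {}

def incremental_load(rows):
    for product_id, cost in rows:
        history[product_id] = history.get(product_id, 0) + cost
    return dict(history)

first_load = [(101, 100), (201, 150), (301, 200)]
run1 = incremental_load(first_load)
print(run1)  # {101: 100, 201: 150, 301: 200}

second_load = [(101, 300), (101, 400), (201, 500), (555, 550), (151, 600)]
run2 = incremental_load(second_load)
print(run2)  # {101: 800, 201: 650, 301: 200, 555: 550, 151: 600}
```

Note that the second run only touched the new rows; product 301's total survived unchanged from the cached first run.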

Q10. What is the syntax of the INITCAP function?


Ans. This function capitalizes the first character of each word in the string and converts all other characters to lowercase.


Below is the Syntax:


INITCAP(string_name)
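For simple strings, Python's built-in `str.title()` behaves much like INITCAP: it upper-cases the first letter of each word and lower-cases the rest.

```python
def initcap(string_name):
    # Upper-case the first letter of each word, lower-case everything else
    return string_name.title()

print(initcap("informatica POWERCENTER tool"))  # Informatica Powercenter Tool
```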


These were some of the most popular scenario-based Informatica interview questions.


About the Author

Pratibha Roy


A content specialist by profession, Pratibha has four+ years of experience in writing engaging articles and blogs. She strongly believes in the power of words and equips new ideas with the motive of imparting comprehensive knowledge to the readers. When away from work, she loves reading books with a cup of hot coffee in hand.