
Posts

Showing posts from November, 2011

How to pass parameters to Procedures and Functions in PL/SQL?

Parameters in Procedures and Functions: In PL/SQL, we can pass parameters to procedures and functions in three ways.

1) IN parameter: used to send values to stored procedures. This is similar to passing parameters in other programming languages. It is a read-only parameter: we can assign the value of an IN parameter to a variable or use it in a query, but we cannot change its value inside the procedure.

2) OUT parameter: used to get values back from stored procedures. This is similar to a return value in functions.

3) IN OUT parameter: used both to send values to and get values back from stored procedures.

NOTE: If a parameter's mode is not explicitly specified, it is an IN parameter by default.
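The three modes can be sketched in a single procedure. This is a minimal illustration, not from the original post: the procedure name, the emp_tbl table, and its columns are hypothetical.

```sql
-- Hypothetical procedure showing all three parameter modes
CREATE OR REPLACE PROCEDURE raise_salary (
   p_emp_id  IN     NUMBER,   -- IN: read-only input, cannot be reassigned
   p_amount  IN OUT NUMBER,   -- IN OUT: raise amount in, new salary out
   p_old_sal OUT    NUMBER    -- OUT: previous salary returned to the caller
) IS
BEGIN
   SELECT salary INTO p_old_sal FROM emp_tbl WHERE emp_id = p_emp_id;
   p_amount := p_old_sal + p_amount;   -- IN OUT parameters may be reassigned
   UPDATE emp_tbl SET salary = p_amount WHERE emp_id = p_emp_id;
END raise_salary;
/
```

After the call, the caller's variable bound to p_amount holds the new salary and the one bound to p_old_sal holds the old one; attempting to assign to p_emp_id inside the body would raise a compile-time error.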

Explicit Cursors

An explicit cursor is defined in the declaration section of the PL/SQL block. It is created on a SELECT statement which returns more than one row, and we can give the cursor a suitable name. The general syntax for creating a cursor is:

CURSOR cursor_name IS select_statement;

cursor_name – a suitable name for the cursor.
select_statement – a SELECT query which returns multiple rows.

How to use an explicit cursor? There are four steps:

1) DECLARE the cursor in the declaration section.
2) OPEN the cursor in the execution section.
3) FETCH the data from the cursor into PL/SQL variables or records in the execution section.
4) CLOSE the cursor in the execution section before you end the PL/SQL block.

Declaring a cursor in the declaration section:

DECLARE
  CURSOR emp_cur IS
    SELECT * FROM emp_tbl WHERE salary > 5000;

In the above example we are creating a cursor 'emp_cur' on a query which returns the employees with a salary greater than 5000.
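The four steps can be put together in one block. This is a sketch using the same hypothetical emp_tbl table as above; the first_name column is assumed for illustration.

```sql
DECLARE
   -- Step 1: declare the cursor on a multi-row SELECT
   CURSOR emp_cur IS
      SELECT * FROM emp_tbl WHERE salary > 5000;
   emp_rec emp_tbl%ROWTYPE;           -- record matching one row of emp_tbl
BEGIN
   OPEN emp_cur;                      -- Step 2: open the cursor
   LOOP
      FETCH emp_cur INTO emp_rec;     -- Step 3: fetch one row at a time
      EXIT WHEN emp_cur%NOTFOUND;     -- stop when no more rows
      DBMS_OUTPUT.PUT_LINE(emp_rec.first_name || ': ' || emp_rec.salary);
   END LOOP;
   CLOSE emp_cur;                     -- Step 4: close before the block ends
END;
/
```

The %NOTFOUND cursor attribute becomes TRUE once a FETCH returns no row, which is what terminates the loop.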

How to check table size in Oracle 9i?

Scenario: how to check table size in Oracle 9i?

Solution:

SELECT segment_name            AS table_name,
       SUM(bytes)/(1024*1024)  AS table_size_meg
FROM   user_extents
WHERE  segment_type = 'TABLE'
AND    segment_name = 'TABLE_NAME'
GROUP BY segment_name;
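As a usage sketch, with a hypothetical table name EMP substituted for 'TABLE_NAME' (the segment name must be uppercase unless the table was created with a quoted identifier); user_segments gives the same total without needing a GROUP BY:

```sql
-- Size of the hypothetical table EMP, run as the owning schema user
SELECT segment_name            AS table_name,
       SUM(bytes)/(1024*1024)  AS table_size_meg
FROM   user_extents
WHERE  segment_type = 'TABLE'
AND    segment_name = 'EMP'
GROUP BY segment_name;

-- Equivalent using user_segments (one row per segment, already summed)
SELECT bytes/(1024*1024) AS table_size_meg
FROM   user_segments
WHERE  segment_type = 'TABLE'
AND    segment_name = 'EMP';
```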

How to do incremental loading using a mapping variable

Scenario: how to do incremental loading using a mapping variable

Solution:

Step 1: Create a mapping variable, $$MappingDateVariable. In the Source Qualifier, create a filter to read only rows whose transaction date equals $$MappingDateVariable, such as: transaction_date = $$MappingDateVariable (this assumes your source table has a transaction date column, or any other date column, to drive the incremental load).

Step 2: Set the initial value of $$MappingDateVariable to the date from which you need to start extracting.

Step 3: In the mapping, use a variable function to increment the variable's value by one day each time the session runs. Let's say you set the initial value of $$MappingDateVariable to 11/16/2010. The first time the Integration Service runs the session, it reads only rows dated 11/16/2010, sets $$MappingDateVariable to 11/17/2010, and saves 11/17/2010 to the repository at the end of the session. The next run then reads only rows dated 11/17/2010, and so on.
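A minimal sketch of Step 3. SETVARIABLE and ADD_TO_DATE are standard PowerCenter transformation-language functions, but the Expression transformation and its variable port v_set_date are assumptions for illustration:

```
-- In an Expression transformation, on a date/time variable port v_set_date:
SETVARIABLE($$MappingDateVariable, ADD_TO_DATE($$MappingDateVariable, 'DD', 1))
```

SETVARIABLE only updates the repository value when the session completes successfully, which is what makes the incremental window move forward one run at a time.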

Data type conversion issue

Scenario: We are creating hundreds of pass-through mappings that need to store numeric source data columns as VARCHAR2 string columns in the target Oracle staging tables. (This is to bypass a current IDQ bug where large numbers are presented in profiling results in scientific notation – KB 117753.) Unfortunately, PowerCenter pads all available significant places with zeros. E.g. a source column (15,2) passing the value 12.3 into a target column of VARCHAR2(20) will populate it with "12.3000000000000". Can this be avoided without adding to each mapping an extra Expression transformation with a string function and a new output port for each column?

Solution: Enabling high precision keeps the source data type intact (both scale and precision). If you want to avoid trailing 0's, you can use the TRIM function in the SQ override query.
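A sketch of the SQ override idea: converting the number to a string inside the database, before PowerCenter sees it, sidesteps the decimal-to-string padding. The table STG_SRC and its columns are hypothetical; Oracle's default TO_CHAR of a NUMBER does not pad trailing zeros.

```sql
-- Hypothetical Source Qualifier override query
SELECT id,
       TRIM(TO_CHAR(amount)) AS amount  -- string conversion done in Oracle
FROM   stg_src;
```

The corresponding Source Qualifier port for amount then needs to be a string type so the value passes through unchanged to the VARCHAR2 target column.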

I get the output file's first field like #id,e_ID,pt_Status, but I don't want the #

Scenario: My source is a .csv file, staged in Oracle, with a .csv target. In the output file I get a # prepended to the first field of the column-name row. I also want to delete dummy files. My server is Windows.

Solution: It is not a problem. You need to provide the target file path and name: in the input filename and output filename you can give the file location and the name you want the target (final) file to have. Ex: Oracle_emp (source) --> SQ --> Logic --> TGT (emp.txt) (flat file). In the post-session success command:

sed 's/^#//g' d:\Informaticaroot\TGTfiles\emp.txt > d:\Informaticaroot\TGTfiles\
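A minimal local sketch of the sed step, using Unix-style paths and a hypothetical emp.txt so it can be tried outside the Informatica server:

```shell
# Create a sample target file whose header field starts with '#'
printf '#id,e_ID,pt_Status\n1,10,OK\n' > emp.txt

# Strip a leading '#' from each line and write the cleaned file
sed 's/^#//' emp.txt > emp_clean.txt

head -1 emp_clean.txt   # header now starts with 'id', not '#id'
```

On Windows this requires a sed port (e.g. from GnuWin32 or Cygwin) on the PATH, and note that sed cannot safely redirect output onto its own input file, hence the separate cleaned file.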