SAP BODS Interview Questions
Q1. What is the use of BusinessObjects Data Services?
Ans: BusinessObjects Data Services provides a graphical interface that allows you to easily create jobs that extract data from heterogeneous sources, transform that data to meet the business requirements of your organization, and load the data into a single location.
Q2. Define the Data Services components.
Ans: Data Services includes the following standard components:
Designer
Repository
Job Server
Engines
Access Server
Adapters
Real-time Services
Cleansing Packages, Dictionaries, and Directories
Management Console
Q3. What are the steps included in the data integration process?
Stage data in an operational datastore, data warehouse, or data mart.
Update staged data in batch or real-time modes.
Create a single environment for developing, testing, and deploying the entire data integration platform.
Manage a single metadata repository to capture the relationships between different extraction and access methods and provide integrated lineage and impact analysis.
Q4. Define the terms Job, Workflow, and Dataflow.
A job is the smallest unit of work that you can schedule independently for execution.
A workflow defines the decision-making process for executing data flows.
Data flows extract, transform, and load data. Everything having to do with data, including reading sources, transforming data, and loading targets, occurs inside a data flow.
Q5. Arrange these objects in order by their hierarchy: Dataflow, Job, Project, and Workflow.
Ans: Project, Job, Workflow, Dataflow.
Q6. What are reusable objects in Data Services?
Ans: Job, Workflow, Dataflow.
Q7. What is a transform?
Ans: A transform enables you to control how datasets change in a dataflow.
Q8. What is a Script?
Ans: A script is a single-use object that is used to call functions and assign values in a workflow.
Q9. What is a real-time Job?
Ans: Real-time jobs "extract" data from the body of the real-time message received and from any secondary sources used in the job.
Q10. What is an Embedded Dataflow?
Ans: An embedded dataflow is a dataflow that is called from inside another dataflow.
Q11. What is the difference between a datastore and a database?
Ans: A database stores the actual data, while a datastore is a connection to a database: it holds the connection details and imported metadata that Data Services uses to read from and write to that database.
Q12. How many types of datastores are present in Data Services?
Database datastores: provide a simple way to import metadata directly from an RDBMS.
Application datastores: let users easily import metadata from most Enterprise Resource Planning (ERP) systems.
Adapter datastores: can provide access to an application's data and metadata, or to metadata only.
Q13. What is the use of Compact Repository?
Ans: It removes redundant and obsolete objects from the repository tables.
Q14. What are Memory Datastores?
Ans: Data Services also allows you to create a database datastore using Memory as the database type. Memory datastores are designed to enhance the processing performance of dataflows executed in real-time jobs.
Q15. What are file formats?
Ans: A file format is a set of properties describing the structure of a flat (ASCII) file. File formats describe the metadata structure. File format objects can describe files in:
Delimited format — characters such as commas or tabs separate each field.
Fixed-width format — the column width is specified by the user.
SAP ERP and R/3 format.
Q16. Which is not a datastore type?
Ans: File Format.
Q17. What is a repository? List the types of repositories.
Ans: The Data Services repository is a set of tables that holds user-created and predefined system objects, source and target metadata, and transformation rules. There are three types of repositories:
A local repository
A central repository
A profiler repository
Q18. What is the difference between a Repository and a Datastore?
Ans: A repository is a set of tables that holds system objects, source and target metadata, and transformation rules. A datastore is an actual connection to a database that holds the data.
Q19. What is the difference between a Parameter and a Variable?
Ans: A parameter is an expression that passes a piece of information to a workflow, dataflow, or custom function when it is called in a job. A variable is a symbolic placeholder for values.
Q20. When would you use a global variable instead of a local variable?
When the variable will be used multiple times within a job.
When you want to reduce the development time required for passing values between job components.
When you need to create a dependency between the job-level global variable name and job components.
Q21. What is a Substitution Parameter?
Ans: A value that is constant in one environment, but may change when a job is moved to another environment.
Q22. List some reasons why a job might fail to execute.
Ans: Incorrect syntax, the Job Server not running, or the port numbers for the Designer and Job Server not matching.
Q23. List the factors you consider when determining whether to run workflows or dataflows serially or in parallel.
Ans: Consider the following:
Whether the flows are independent of one another
Whether the server can handle the processing requirements of flows running at the same time (in parallel)
Q24. What does a lookup function do? How do the different variations of the lookup function differ?
Ans: All lookup functions return one row for each row in the source. They differ in how they choose which of several matching rows to return.
Q25. List the three types of input formats accepted by the Address Cleanse transform.
Ans: Discrete, multiline, and hybrid.
Q26. Name the transform that you would use to combine incoming datasets to produce a single output dataset with the same schema as the input datasets.
Ans: The Merge transform.
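As a rough illustration (plain Python, not BODS code), the Merge transform behaves like a UNION ALL of same-schema inputs:

```python
# Illustrative sketch only: Merge concatenates rows from inputs that share
# the same schema, with no de-duplication (unlike a Query with DISTINCT).
def merge(*datasets):
    merged = []
    for rows in datasets:
        merged.extend(rows)
    return merged

us_customers = [{"id": 1, "name": "Ann"}]
eu_customers = [{"id": 2, "name": "Bram"}]
print(merge(us_customers, eu_customers))
# → [{'id': 1, 'name': 'Ann'}, {'id': 2, 'name': 'Bram'}]
```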
Q27. What are Adapters?
Ans: Adapters are additional Java-based programs that can be installed on the Job Server to provide connectivity to other systems such as Salesforce.com or the Java Messaging Queue. There is also a Software Development Kit (SDK) that allows customers to create adapters for custom applications.
Q28. List the Data Integrator transforms.
Data_Transfer, Date_Generation, Effective_Date, Hierarchy_Flattening, History_Preserving, Key_Generation, Map_CDC_Operation, Pivot, Reverse Pivot, Table_Comparison, XML_Pipeline.
Q29. List the Data Quality transforms.
Associate, Country ID, Data Cleanse, Geocoder, Global Address Cleanse, Global Suggestion Lists, Match, USA Regulatory Address Cleanse.
Q30. What are Cleansing Packages?
Ans: These are packages that enhance the ability of Data Cleanse to accurately process various forms of global data by including language-specific reference data and parsing rules.
Q31. What is Data Cleanse?
Ans: The Data Cleanse transform identifies and isolates specific parts of mixed data, and standardizes your data based on information stored in the parsing dictionary, business rules defined in the rule file, and expressions defined in the pattern file.
Q32. What is the difference between a Dictionary and a Directory?
Ans: Directories provide information on addresses from postal authorities. Dictionary files are used to identify, parse, and standardize data such as names, titles, and firm data.
Q33. Give some examples of how data can be enhanced through the Data Cleanse transform, and describe the benefit of those enhancements.
Gender codes — determine gender distributions and target marketing campaigns.
Match standards — provide fields for improving matching results.
Q34. A project requires parsing names into given and family names, validating address information, and finding duplicates across several systems. Name the transforms needed and the task each will perform.
Data Cleanse: parses names into given and family names.
Address Cleanse: validates address information.
Match: finds duplicates.
Q35. Describe when to use the USA Regulatory and Global Address Cleanse transforms.
Ans: Use the USA Regulatory Address Cleanse transform if USPS certification and/or additional options such as DPV and GeoCode are required. Global Address Cleanse should be used when processing multi-country data.
Q36. Give two examples of how the Data Cleanse transform can enhance (append) data.
Ans: The Data Cleanse transform can generate name match standards and greetings. It can also assign gender codes and prenames such as Mr. and Mrs.
Q37. What are name match standards and how are they used?
Ans: Name match standards illustrate the multiple ways a name can be represented. They are used in the match process to greatly increase match results.
Q38. What are the different strategies you can use to avoid loading duplicate rows of data when re-running a job?
Using the auto correct load option on the target table.
Including the Table Comparison transform in the dataflow.
Designing the dataflow to completely replace the target table during each execution.
Including a preload SQL statement that executes before the table loads.
Q39. What is the use of Auto Correct Load?
Ans: It prevents duplicate data from entering the target table. It works like a Type 1 slowly changing dimension: it inserts rows for non-matching data and updates rows for matching data.
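The insert-or-update behavior of an auto correct load can be sketched in plain Python (illustrative only; the real option works at the database level):

```python
# Sketch of a Type 1 "upsert": match on the primary key and overwrite the
# existing row, otherwise insert a new one. No history is kept.
def auto_correct_load(target, incoming, key):
    by_key = {row[key]: row for row in target}
    for row in incoming:
        by_key[row[key]] = row  # update if the key exists, else insert
    return list(by_key.values())

target = [{"id": 1, "city": "Paris"}]
incoming = [{"id": 1, "city": "Lyon"}, {"id": 2, "city": "Nice"}]
print(auto_correct_load(target, incoming, "id"))
# → [{'id': 1, 'city': 'Lyon'}, {'id': 2, 'city': 'Nice'}]
```

Re-running the same load produces the same target table, which is why this option avoids duplicates on job re-execution.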
Q40. What is the use of Array fetch size?
Ans: Array fetch size indicates the number of rows retrieved in a single request to a source database. The default value is 1000. Higher values reduce the number of requests, lowering network traffic and possibly improving performance. The maximum value is 5000.
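The idea is the same as batched fetching in a database client. A small sketch using Python's DB-API `fetchmany` (not BODS itself) shows how the fetch size controls the number of round-trips:

```python
import sqlite3

# Conceptual illustration: fetching rows in batches, as Array fetch size
# does, trades fewer round-trips against more memory used per request.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE src (id INTEGER)")
conn.executemany("INSERT INTO src VALUES (?)", [(i,) for i in range(2500)])

def fetch_in_batches(cursor, batch_size):
    cursor.execute("SELECT id FROM src")
    batches = 0
    while True:
        rows = cursor.fetchmany(batch_size)  # one "request" per batch
        if not rows:
            break
        batches += 1
    return batches

# 2500 rows at a fetch size of 1000 -> 3 requests instead of 2500
print(fetch_in_batches(conn.cursor(), 1000))
# → 3
```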
Q41. What are the differences between Row-by-row select, Cached comparison table, and Sorted input in the Table Comparison transform?
Row-by-row select — looks up the target table using SQL each time it receives an input row. This option is best if the target table is large.
Cached comparison table — loads the comparison table into memory. This option is best when the table fits into memory and you are comparing the entire target table.
Sorted input — reads the comparison table in the order of the primary key column(s) using a sequential read. This option improves performance because Data Integrator reads the comparison table only once. Add a query between the source and the Table_Comparison transform; then, from the query's input schema, drag the primary key columns into the Order By box of the query.
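The core comparison logic is the same in all three modes; a hypothetical Python sketch of the cached mode (names and structure are illustrative, not BODS internals) clarifies what the transform outputs:

```python
# Sketch of Table_Comparison in cached-comparison mode: the target table is
# held in memory keyed by primary key, and each input row is flagged INSERT
# (new key) or UPDATE (existing key with changed compare columns).
def table_comparison(input_rows, target_rows, key, compare_cols):
    cache = {row[key]: row for row in target_rows}  # comparison table in memory
    out = []
    for row in input_rows:
        existing = cache.get(row[key])
        if existing is None:
            out.append(("INSERT", row))
        elif any(existing[c] != row[c] for c in compare_cols):
            out.append(("UPDATE", row))
        # rows identical to the target produce no output
    return out

target = [{"id": 1, "city": "Paris"}]
incoming = [{"id": 1, "city": "Lyon"}, {"id": 2, "city": "Nice"}]
print(table_comparison(incoming, target, "id", ["city"]))
# → [('UPDATE', {'id': 1, 'city': 'Lyon'}), ('INSERT', {'id': 2, 'city': 'Nice'})]
```

Row-by-row select would replace the in-memory `cache` lookup with one SQL query per input row; sorted input would walk both inputs in primary-key order in a single pass.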
Q42. What is the use of Number of loaders in the target table?
Ans: Loading with one loader is known as single-loader loading. Loading when the number of loaders is greater than one is known as parallel loading. The default number of loaders is 1. The maximum number of loaders is 5.
Q43. What is the use of Rows per commit?
Ans: It specifies the transaction size in number of rows. If set to 1000, Data Integrator sends a commit to the underlying database every 1000 rows.
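A minimal sketch of the commit-batching arithmetic (plain Python, not BODS):

```python
# Committing every N rows bounds transaction size while avoiding the
# overhead of one commit per row.
def count_commits(row_count, rows_per_commit):
    commits = 0
    pending = 0
    for _ in range(row_count):
        pending += 1
        if pending == rows_per_commit:
            commits += 1  # a COMMIT is sent to the database here
            pending = 0
    if pending:
        commits += 1      # final commit for the last partial batch
    return commits

# 2500 rows with Rows per commit = 1000 -> 3 commits
print(count_commits(2500, 1000))
# → 3
```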
Q44. What is the difference between lookup(), lookup_ext(), and lookup_seq()?
lookup(): briefly, it returns a single value based on a single condition.
lookup_ext(): returns multiple values based on one or more conditions.
lookup_seq(): returns multiple values based on a sequence number.
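The distinction between the first two can be sketched in plain Python (the function shapes below are illustrative; they are not the real BODS signatures):

```python
# Hypothetical sketch of lookup-style semantics against a lookup table.
customers = [
    {"cust_id": 1, "name": "Ann", "region": "EU"},
    {"cust_id": 2, "name": "Bram", "region": "US"},
]

def lookup(table, result_col, cond_col, cond_val):
    # lookup(): one result value from a single equality condition
    for row in table:
        if row[cond_col] == cond_val:
            return row[result_col]
    return None

def lookup_ext(table, result_cols, conditions):
    # lookup_ext(): several result columns from one or more conditions
    for row in table:
        if all(row[c] == v for c, v in conditions.items()):
            return [row[c] for c in result_cols]
    return [None] * len(result_cols)

print(lookup(customers, "name", "cust_id", 2))            # → Bram
print(lookup_ext(customers, ["name", "region"], {"cust_id": 1}))  # → ['Ann', 'EU']
```

lookup_seq() would additionally pick among multiple matching rows using a sequence column.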
Q45. What is the use of the History Preserving transform?
Ans: The History_Preserving transform allows you to produce a new row in your target rather than updating an existing row. You can indicate the columns in which the transform detects changes to be preserved. If the value of those columns changes, the transform creates a new row for each input row flagged as UPDATE.
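A simplified Python sketch (illustrative; real history preservation also manages effective dates and current-row flags) shows the append-instead-of-update behavior:

```python
# Sketch: when a compare column changes, append a new version of the row
# and keep the old one, instead of overwriting it in place.
def preserve_history(target, update_row, key, compare_cols):
    preserved = list(target)
    for row in target:
        if row[key] == update_row[key]:
            if any(row[c] != update_row[c] for c in compare_cols):
                preserved.append(dict(update_row))  # new row; old row kept
            break
    return preserved

target = [{"id": 1, "city": "Paris"}]
result = preserve_history(target, {"id": 1, "city": "Lyon"}, "id", ["city"])
print(result)
# → [{'id': 1, 'city': 'Paris'}, {'id': 1, 'city': 'Lyon'}]
```

Contrast this with auto correct load, which would simply overwrite "Paris" with "Lyon".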
Q46. What is the use of the Map_Operation transform?
Ans: The Map_Operation transform allows you to change operation codes on datasets to produce the desired output. The operation codes are: INSERT, UPDATE, DELETE, NORMAL, and DISCARD.
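A hypothetical sketch of this remapping (plain Python; the tuple-of-opcode-and-row representation is an assumption for illustration):

```python
# Sketch: Map_Operation rewrites each row's operation code according to a
# user-defined mapping; rows mapped to DISCARD are dropped from the output.
def map_operation(rows_with_opcodes, mapping):
    out = []
    for opcode, row in rows_with_opcodes:
        new_op = mapping.get(opcode, opcode)  # unmapped codes pass through
        if new_op != "DISCARD":
            out.append((new_op, row))
    return out

stream = [("UPDATE", {"id": 1}), ("DELETE", {"id": 2})]
print(map_operation(stream, {"UPDATE": "INSERT", "DELETE": "DISCARD"}))
# → [('INSERT', {'id': 1})]
```

Mapping UPDATE to INSERT like this is a common way to preserve history in a target, since updates arrive as new rows.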
Q47. What is Hierarchy Flattening?
Ans: It constructs a complete hierarchy from parent/child relationships, and then produces a description of the hierarchy in a vertically or horizontally flattened format. Its inputs are:
Parent column, child column
Parent attributes, child attributes
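A small Python sketch (illustrative only) of horizontal flattening: each node's chain of ancestors in a parent/child table becomes one output row from root to node:

```python
# Sketch: turn (child, parent) pairs into flattened root-to-node paths.
def flatten_hierarchy(child_parent_pairs):
    parent_of = dict(child_parent_pairs)  # child -> parent
    rows = []
    for child in parent_of:
        path = [child]
        while path[-1] in parent_of:      # walk up to the root
            path.append(parent_of[path[-1]])
        rows.append(list(reversed(path))) # root ... node
    return rows

edges = [("EMEA", "World"), ("France", "EMEA")]
print(sorted(flatten_hierarchy(edges)))
# → [['World', 'EMEA'], ['World', 'EMEA', 'France']]
```

Vertical flattening would instead emit one (ancestor, descendant, depth) row per pair.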
Q48. What is the use of the Case transform?
Ans: Use the Case transform to simplify branch logic in dataflows by consolidating case or decision-making logic into one transform. The transform allows you to split a dataset into smaller sets based on logical branches.
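A hypothetical sketch of the routing behavior (plain Python; branch labels and the default-branch handling are illustrative):

```python
# Sketch: each row is routed to the first branch whose condition it
# satisfies; rows matching no branch go to the default output, if any.
def case_transform(rows, branches, default_label=None):
    routed = {label: [] for label, _ in branches}
    if default_label is not None:
        routed[default_label] = []
    for row in rows:
        for label, condition in branches:
            if condition(row):
                routed[label].append(row)
                break
        else:
            if default_label is not None:
                routed[default_label].append(row)
    return routed

orders = [{"amount": 50}, {"amount": 500}]
result = case_transform(orders, [("large", lambda r: r["amount"] >= 100)], "small")
print(result)
# → {'large': [{'amount': 500}], 'small': [{'amount': 50}]}
```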
Q49. What must you define in order to audit a dataflow?
Ans: You must define audit points and audit rules when you want to audit a dataflow.
Q50. List some areas for PERFORMANCE TUNING in Data Services.
Ans: The following areas describe ways you can tune Data Integrator performance:
Source-based performance options
Using array fetch size
Minimizing extracted data
Target-based performance options
Loading method and rows per commit
Staging tables to speed up auto correct loads
Job design performance options
Improving throughput
Maximizing the number of pushed-down operations
Minimizing data type conversion
Minimizing locale conversion
Improving Informix repository performance