Top 100+ SAP MaxDB Interview Questions And Answers
Question 1. Is There Any Specific Size Limit For An SAP MaxDB Database?
Answer :
A maximum of 255 data volumes can be configured in the SAP MaxDB standard format (parameter: VOLUMENO_BIT_COUNT or ConverterVolumeIDLayout = 8). The maximum size of a single data volume is 128 GB, and the maximum total size of all data volumes is 32 TB. The log area can also use a maximum of 32 TB.
Question 2. Is There Any Limit On The Number Of Simultaneous Sessions Of An SAP MaxDB Database?
Answer :
No, there is no fixed limit on the number of simultaneous sessions of an SAP MaxDB database. The database parameter MaxUserTasks is used to configure the number of database sessions that can be logged on to the database simultaneously.
OLTP:
The number of database users in OLTP systems should be configured to at least 2 x <number_SAP processes> + 4.
BW:
The number of database users in BW systems should be configured to at least 3 x <number_SAP processes> + 4.
Java applications:
The maximum number of connections to the database in the connection pool is determined for each J2EE instance (in NW 7.1 the default is 70). The sum of the connections (connection pools) of all J2EE instances + 4 is used to calculate the number of parallel user sessions (MaxDB parameter: MaxUserTasks).
LiveCache:
The formula used to calculate the value of the database parameter MaxUserTasks for liveCaches in SCM system 4.1 and lower is:
2 x <number_SAP processes> + 4
The formula that applies to liveCaches in SCM system 5.0 and above is:
3 x <number_SAP processes> + 4
For further details, see SAP Note 757914.
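The sizing rules above can be sketched as a small helper; the function name and the system-type labels are illustrative, not part of any MaxDB API:

```python
def max_user_tasks(sap_processes, system="OLTP", pool_connections=0):
    """Rule-of-thumb MaxUserTasks values from the formulas above.

    For "Java", `pool_connections` is the sum of the connection pools of
    all J2EE instances; `sap_processes` is ignored in that case.
    """
    if system in ("OLTP", "liveCache <= SCM 4.1"):
        return 2 * sap_processes + 4
    if system in ("BW", "liveCache >= SCM 5.0"):
        return 3 * sap_processes + 4
    if system == "Java":
        return pool_connections + 4
    raise ValueError(f"unknown system type: {system}")

print(max_user_tasks(50, "OLTP"))                      # 104
print(max_user_tasks(50, "BW"))                        # 154
print(max_user_tasks(0, "Java", pool_connections=70))  # 74
```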
Question 3. What Are The Tasks Of A MaxDB Database Administrator?
Answer :
The MaxDB administrator is responsible for monitoring the active database and also for the following activities:
Backing up the data area and the log area
Executing consistency checks on the data to eliminate inconsistencies in the database that may be caused by hardware defects.
Updating the optimizer statistics (Update Statistics).
Executing performance analyses.
Monitoring the free memory.
Question 4. How Should The Data Volumes Of An SAP MaxDB Database Be Configured?
Answer :
Optimal use of the I/O system is critical for I/O performance. It is therefore useful to distribute the volumes evenly across the available I/O channels.
The number of data volumes affects the parallelism of the I/O.
Windows:
The asynchronous I/O of the operating system is used on Windows.
UNIX:
On UNIX, the number of configured I/O threads determines the parallelism with which the SAP MaxDB/liveCache database transfers the I/O requests to the operating system.
o SAP MaxDB versions lower than 7.7:
The number of I/O threads is given by: number of volumes * number of I/O threads per volume (_IOPROCS_PER_DEV).
o SAP MaxDB Version 7.7 or higher:
The number of I/O threads is given by: number of volumes * (total of low/med/high queues per volume); however, it can be limited by the database parameter MaxIOPoolWorkers.
If too many volumes or threads are configured, the number of threads increases accordingly, and the limits of the operating system resources may be reached.
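The two version-dependent rules above reduce to the same arithmetic, with an optional cap for 7.7 and higher. A minimal sketch (function name is illustrative):

```python
def io_threads(volumes, per_volume, max_io_pool_workers=None):
    """Number of I/O threads on UNIX, per the rules above.

    For versions below 7.7, `per_volume` is _IOPROCS_PER_DEV; for 7.7 and
    higher, it is the total of the low/med/high queues per volume, and the
    result is capped by MaxIOPoolWorkers.
    """
    threads = volumes * per_volume
    if max_io_pool_workers is not None:
        threads = min(threads, max_io_pool_workers)
    return threads

# Pre-7.7 style: 10 volumes, 2 I/O threads per volume:
print(io_threads(10, 2))        # 20
# 7.7+ style: 32 volumes, 3 queues per volume, capped at 64 pool workers:
print(io_threads(32, 3, 64))    # 64
```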
It is recommended to use the following formula to determine the number of SAP MaxDB data volumes: 'square root of the database size in GB, rounded up'.
Examples:
10 GB: 4 data volumes
50 GB: 8 data volumes
100 GB: 10 data volumes
200 GB: 15 data volumes
500 GB: 23 data volumes
1 TB: 32 data volumes
It is preferable for all data volumes to have the same size.
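The table of examples above follows directly from the square-root rule:

```python
import math

def recommended_volume_count(db_size_gb):
    """Rule of thumb above: square root of the database size in GB, rounded up."""
    return math.ceil(math.sqrt(db_size_gb))

for size_gb in (10, 50, 100, 200, 500, 1000):
    print(f"{size_gb} GB: {recommended_volume_count(size_gb)} data volumes")
```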
Question 5. Do I Need To Change A Data Volume Configuration Which Does Not Correspond To The Recommendations Of This Note?
Answer :
No, it is not necessary to change existing configurations. If serious I/O performance problems occur, you should analyze them in detail to determine their actual cause.
Question 6. Is There Any Specific Limit For The Number Of Data Volumes?
Answer :
A maximum of 255 data volumes can be configured in the SAP MaxDB standard format.
Question 7. Is It Recommended To Create The SAP MaxDB Volumes On Raw Devices Or Files?
Answer :
Volumes of the type "File" and of the type "Raw" can be defined on UNIX.
A raw device is a hard disk, or any part of one, that is not managed by the operating system. Data volumes of the type "raw device" can be configured for databases on UNIX.
The administrative effort required for file systems does not apply, so access to raw devices is generally faster.
Because the operating system does not have to check the consistency of the file system on raw devices, the system can generally boot faster.
Because of these advantages, it is recommended to use raw devices on UNIX systems. However, volumes of the type "File" on UNIX are also supported.
Volumes of the type "File" are the recommended standard on Linux.
Question 8. Where Can You Find Information About The Configuration Of SAP MaxDB Volumes Of The Type "File"?
Answer :
The performance of the database is significantly influenced by the speed with which the database system can read data from the data volumes and write data to them. To ensure good performance when operating the database later, see SAP Note 993848 (Direct I/O mount options for liveCache/MaxDB) for information about creating and configuring volumes of the type "File".
Question 9. Is It Recommended To Configure All Data Volumes In The Same LUN?
Answer :
It is advisable to distribute the data volumes across several LUNs. Experience suggests that about five data volumes can be configured per LUN.
Question 10. Does The Data Get Distributed Evenly On All Volumes, If A New Data Volume Is Added?
Answer :
This mechanism can be activated using the parameter EnableDataVolumeBalancing, as of MaxDB Version 7.7.06.09.
When the parameter EnableDataVolumeBalancing is set to the value YES (deviating from the default), all data is implicitly distributed evenly across all data volumes whenever a new data volume is added or an existing one is deleted.
An even distribution of the data is also triggered during the restart.
Question 11. How Should The Database Parameter MAXCPU Be Set For Dual Core CPUs?
Answer :
It is recommended to use the number of cores as the basis for calculating MAXCPU (as of Version 7.7, this is MaxCPUs), because dual core CPUs have two cores with separate execution units (a separate L1 cache and sometimes even a separate L2 cache).
See also FAQ Note 936058: MaxDB Runtime Environment, for information about setting the database parameter MAXCPU (MaxCPUs).
Question 12. Can Additional CPUs Be Assigned To The Database In Live Operation?
Answer :
MaxDB Version 7.8 offers the option of using the parameter UseableCPUs to dynamically add additional CPUs to the database or to reduce the number of CPUs in use. The parameter MaxCPUs continues to control the maximum number of CPUs to be used.
Question 13. How Large Should The IO Buffer Cache Be Configured?
Answer :
The database parameter CACHE_SIZE or (as of Version 7.7.03) the database parameter CacheMemorySize should be used to configure the IO buffer cache.
The converter cache and the data cache of an SAP MaxDB database are contained in the IO buffer cache.
The database performance is substantially influenced by the size of the IO buffer cache: the larger the IO buffer cache, the fewer time-consuming hard disk accesses have to be performed.
The volume of data to be processed in daily business, and the application, determine the size of the IO buffer cache to be set.
It is generally desirable to process all data in a running system from the data cache, without accessing the hard disk. However, this is often not possible in the BI and OLTP environments.
When using the SAP liveCache technology, it should be possible to keep all data to be processed in the IO buffer cache. Generally, the results of the Quick Sizer should be used to configure the IO buffer cache for the SAP liveCache technology.
The following applies to the IO buffer cache: the bigger, the better (provided that enough physical memory is available).
Note that heap memory is also allocated by the database in addition to the IO buffer cache. The overall memory consumption of an SAP MaxDB database can be determined using the information from the system table MEMORYALLOKATORSTATISTICS.
The ratio of successful to failed accesses to the data cache determines the data cache hit ratio, which indicates whether the size of the data cache is configured correctly. However, if atypical applications are running at the time of the analysis, the data cache hit ratio does not provide sufficient information. For example, during year-end closing the hit ratio may deteriorate, because this data does not need to be held in the cache permanently. Immediately after restarting the database, the data cache hit ratio also does not indicate whether the system is configured correctly, because all data must first be loaded into the cache.
The settings for the size of the IO buffer cache that have been tried and tested by many OLTP and BW customers for SAP systems are as follows:
OLTP NON-UNICODE: 1% of the data volume
OLTP UNICODE: 2% of the data volume
BW NON-UNICODE: 2% of the data volume
BW UNICODE: 4% of the data volume
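The rules of thumb in the table above can be expressed as a small lookup; the table keys and the function name are illustrative:

```python
# Fractions taken from the tried-and-tested settings listed above.
CACHE_FRACTION = {
    ("OLTP", False): 0.01,  # OLTP NON-UNICODE: 1% of the data volume
    ("OLTP", True):  0.02,  # OLTP UNICODE: 2%
    ("BW",   False): 0.02,  # BW NON-UNICODE: 2%
    ("BW",   True):  0.04,  # BW UNICODE: 4%
}

def io_buffer_cache_gb(data_volume_gb, system, unicode_system):
    """Rule-of-thumb IO buffer cache size in GB for the given system type."""
    return data_volume_gb * CACHE_FRACTION[(system, unicode_system)]

# A 500 GB Unicode BW system:
print(f"{io_buffer_cache_gb(500, 'BW', True):.1f} GB")
```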
Question 14. What Is An Index?
Answer :
You can create an index (also known as a secondary key) to speed up the search for database records in a table. An index is a database object that can be defined for a single column or a sequence of columns of a database table.
In technical terms, indexes are data structures (consisting of one or more inverting lists) that store parts of the data of a table in a separate B* tree structure. This storage sorts the data according to the inverting key fields that were used. Due to this type of storage, the table data can be accessed faster via the indexed columns than without the relevant index.
Indexes, unlike tables, do not contain any independent business data and can therefore always be recreated from the table. This is relevant, for example, if corruption occurs on the index because of hardware problems.
Question 15. What Are Indexes Used For?
Answer :
Indexes permit faster access to the rows of a table.
You can build indexes for a single column or for a sequence of columns.
The definition of an index determines whether the column values of different rows in the indexed columns must be unique or not (UNIQUE or NON-UNIQUE index).
The combination of assigned index name and table name must be unique. Therefore, there may be several indexes with the same name for each database user or schema, but not for the same table.
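The UNIQUE/NON-UNIQUE distinction can be demonstrated with standard SQL. The sketch below uses SQLite (via Python's sqlite3 module) as a stand-in, not SAP MaxDB itself; the table and index names are made up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, email TEXT, city TEXT)")

# UNIQUE index: duplicate values in the indexed column are rejected.
conn.execute("CREATE UNIQUE INDEX idx_email ON customers (email)")
# NON-UNIQUE index: duplicates are allowed; it only speeds up lookups.
conn.execute("CREATE INDEX idx_city ON customers (city)")

conn.execute("INSERT INTO customers VALUES (1, 'a@example.com', 'Berlin')")
conn.execute("INSERT INTO customers VALUES (2, 'b@example.com', 'Berlin')")  # duplicate city: allowed
try:
    conn.execute("INSERT INTO customers VALUES (3, 'a@example.com', 'Hamburg')")  # duplicate email
except sqlite3.IntegrityError as exc:
    print("rejected by UNIQUE index:", exc)
```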
Question 16. What Is The Technical Structure Of Indexes?
Answer :
Indexes, like tables, are implemented as B* trees in SAP MaxDB. These consist of a root node that forms a tree with the subsequent nodes.
The complete index key and the references to the table records are in the lowest level of the index tree, also called the leaf level. The database system does not use physical addresses for these references; instead, the primary keys of the data records are stored. The data record is identified by the primary key (the physical location of the data record is determined via the converter).
Since access to the data does not follow the sequence Primary key -> Data, but rather Data -> Primary key, this is also referred to as an inversion.
The idea behind this is that the relational table design provides for all data being dependent on a unique primary key.
While the access Primary key -> Data always returns one row or no rows, the access Data -> Primary key returns no rows, one row, or several rows.
Question 17. Is The Primary Key For SAP MaxDB Also Stored In A Separate Index B* Tree?
Answer :
Each database table has a primary key (primary index). The primary key is either defined by the user or generated by the system. A user-defined primary key can consist of one or more columns. The primary key must have a unique value for each row of the table.
The primary key is implemented directly on the data tree, which means there is no separate primary key tree. There is no ROWID or anything similar. The unique identifier of a record is the primary key (for composite keys, this is the combination of fields that are defined as the primary key).
Question 18. How Is Data Accessed Using An Index?
Answer :
If the SQL optimizer evaluates access via the index as the best solution, the primary keys of the table rows that match the index key fields used are determined in the index tree.
The corresponding rows are then read from the table using this list of primary keys.
Question 19. What Do I Need To Consider When I Create New Indexes?
Answer :
Indexes are additional data structures that must be maintained with every change to the table data. Therefore, the effort involved in a data change (INSERT, UPDATE, DELETE) in the database increases with the number of indexes on a table. You should therefore ensure that the indexes you create in the customer namespace are actually used by the application.
Question 20. What Search Strategies Are There With Indexes?
Answer :
You will find a description of the strategies used by the SQL optimizer under 'Strategy' in the glossary of the SAP MaxDB documentation.
Question 21. Can Indexes Fragment In SAP MaxDB?
Answer :
No. SAP MaxDB does not have an index fragmentation problem like that of Oracle (Note 771929). Indexes are stored in optimally compact form, and storage space that is freed is immediately assigned to the free space again.
Question 22. Can I Check Individual Indexes For Consistency?
Answer :
As of Version 7.8, individual indexes can be checked for consistency. For more information, refer to the documentation of the statement CHECK INDEX.
Question 23. Can An Index Become Larger Than The Table To Which It Belongs?
Answer :
Yes. This is possible, for example, if the index is created on a sparsely used column (that is, a column containing little data), but the primary key requires a large amount of memory.
The key information is stored in compressed form at index level in the B* tree of the table, whereas the secondary index must store each complete primary key.
Question 24. Can I Create An Index On A View?
Answer :
No. View tables are views on tables. The tables that are involved in view tables are known as base tables.
These views on base tables are implemented as SELECT statements on the base tables. Technically, view tables can be compared to stored SELECT statements.
Therefore, no indexes can be created on view tables, but they can be created on the base tables of view tables.
Question 25. What Happens If A Unique Index Is Set To Bad?
Answer :
If a UNIQUE index is set to BAD, the corresponding table is set to READ ONLY. This lock is necessary because the UNIQUE property of the index can no longer be guaranteed for any further write operation that concerns this index on the table. At the latest when the database is next restarted, UNIQUE indexes that are set to BAD are automatically recreated.
As of Version 7.8, the DBM command auto_recreate_bad_index is available. You can use this command to activate the automatic recreation of indexes (including UNIQUE indexes).
Question 26. What Is A Parallel Index Build?
Answer :
The data for the index is read in parallel by several server tasks to carry out the index build as quickly as possible. Only one parallel index build can be executed at a time; if several CREATE INDEX statements are executed at the same time, the other indexes are each processed by only one server task. This is considerably slower than a parallel index build. Therefore, you should always ensure that indexes are created successively.
Question 27. Why May It Take A Long Time Until A Task Is Available Again After You Cancel The Creation Of An Index?
Answer :
If you cancel a CREATE INDEX statement, the cancel indicator is set for the user task. However, before the task can be used again, it must clear all index lists that were previously created by the server tasks for the index build.
This ensures that all structures that are no longer required are deleted from the system after you cancel a CREATE INDEX statement. This may take some time, depending on the number and size of the index lists.
Question 28. Can I Create Several Indexes Simultaneously?
Answer :
You can create several indexes simultaneously. However, since only one index build can be executed in parallel (by several server tasks), we recommend (to speed up index creation) that, when creating several indexes, you start an index on a large table only when no other CREATE INDEX is active. You can create indexes on small tables even if a CREATE INDEX statement is already active.
Question 29. Are Locks Set During Create Index?
Answer :
Yes. Up to and including SAP MaxDB Version 7.6, a lock is set on the relevant tables during the index creation.
As of SAP MaxDB Version 7.7, the system sets a lock for the complete duration of the index creation only if one of the following conditions applies:
it is a UNIQUE index
the transaction that executes the CREATE INDEX statement has already set other locks.
Question 30. How Can I Speed Up The Index Build?
Answer :
You can accelerate the index build on large tables by ensuring that only one CREATE INDEX statement is active, so that several server tasks carry out the index build.
The data cache should be configured sufficiently large so that, ideally, all data required for the index build can be loaded into the cache.
Question 31. Must I Carry Out An Update Statistics For Indexes?
Answer :
No. You are not required to explicitly create the statistics for the indexes by executing an UPDATE STATISTICS.
Question 32. What Do I Have To Bear In Mind With Regard To Views When Using Hints?
Answer :
Hints can be applied to single-table views. However, apart from a few exceptions (such as ORDERED), you cannot apply hints to join views.
Question 33. Why Must Sql Locks Be Set On Database Objects?
Answer :
If several transactions want to access the same objects in parallel, these accesses must be synchronized by the SQL lock management.
Since the database system allows concurrent transactions on the same database objects, locks are required to isolate individual transactions.
To lock an object means to protect this object against certain types of use by other transactions.
Question 34. Which Lock Objects Exist In SAP MaxDB?
Answer :
Three types of locks exist in SAP MaxDB:
Record locks (ROW):
Individual rows of a table are locked.
Table locks (TAB):
The complete table is locked.
Catalog locks (SYS):
Database catalog entries are locked.
Question 35. Must Locks Be Explicitly Requested Or Is Implicit Locking Possible?
Answer :
Locks can be requested implicitly by the database system or explicitly by you (using the relevant SQL statements).
A) Requesting locks implicitly:
All modifying SQL statements (for example, INSERT, UPDATE, DELETE) always request an exclusive lock.
You can choose the lock operation mode by specifying an isolation level when you open the database session.
Depending on the specified isolation level, locks are then implicitly requested by the database system when the SQL statements are processed.
B) Requesting locks explicitly:
You can assign locks explicitly to a transaction using the LOCK statement, and you can lock individual table rows by specifying a LOCK option in an SQL statement. This procedure is possible at every isolation level. In addition, you can use the LOCK option to temporarily change the isolation level of an SQL statement.
Question 36. What Is A Shared Lock?
Answer :
Shared locks allow other transactions to perform read accesses but not write accesses to the locked object. Other transactions can set further shared locks on this object, but no exclusive locks.
Shared locks can be set on individual table rows or on the whole table.
Question 37. What Is An Exclusive Lock?
Answer :
If an exclusive lock on a database object is assigned to a transaction, other transactions cannot access this object.
Transactions that want to check whether exclusive locks exist, or to set exclusive or shared locks, collide with the existing exclusive lock of the other transaction. They cannot access the locked object.
Exclusive locks can be set on individual table rows or on the whole table.
Question 38. What Is The Isolation Level?
Answer :
The isolation level defines whether and in which way locks are implicitly requested or released.
The selected isolation level affects the degree of parallelism of concurrent transactions and the data consistency:
The lower the value of the isolation level, the higher the degree of parallelism and the lower the degree of guaranteed consistency.
If there are concurrent accesses to the same dataset, various inconsistencies (such as dirty read, non-repeatable read, and phantom) may occur at different isolation levels.
For SAP applications (ABAP stack), the isolation level is defined in the XUSER file. The ABAP applications run at isolation level 0.
In NetWeaver (Java stack), data sources are used for the application logon. These data sources can be configured in NetWeaver. You can freely choose the isolation level. If the data source does not specify an explicit isolation level, the default isolation level 1 (read committed) is used.
Question 39. Which Isolation Levels Are Available?
Answer :
Isolation level 0:
In isolation level 0 (read uncommitted), rows are read without an implicit request of shared locks (dirty read). This does not ensure that, when a row is read again within a transaction, it has the same state as when it was read the first time, because it may have been changed by a concurrent transaction in the meantime. Nor does it ensure that the read data has actually been committed in the database.
When you insert, change, or delete rows, the data records are exclusively locked. The locks are retained until the transaction ends to ensure that no concurrent changes can be made.
The SAP system is operated at isolation level 0 because a separate lock mechanism, at a higher level than the lock mechanism of the database, is implemented for the SAP application.
Isolation level 1:
If an SQL statement is used for data retrieval, it is ensured for each row read that (at the time the row is read) no other transaction holds an exclusive lock on this row. The shared lock is released after the record has been read. If lock collisions occur, a lock (REQ ROW SHARE) is requested and the transaction must wait until this lock is assigned to it before it can access the record.
Isolation level 15:
Isolation level 15 ensures that the result set is not changed while it is being processed. Reading backwards and positioning within the result set produce unique results. When you insert, change, or delete rows, exclusive locks for the relevant rows are implicitly assigned to the transaction; these exclusive locks are released only when the transaction ends.
Isolation level 2:
Isolation level 2 protects against the non-repeatable read: a record that is read several times within a transaction always returns the same values.
If isolation level 2 or 20 (repeatable) is specified, shared locks are implicitly requested, before processing starts, for all the tables that are addressed by an SQL statement for data retrieval. When you insert, change, or delete rows, exclusive locks for the relevant rows are implicitly assigned to the transaction; these exclusive locks are released only when the transaction ends.
Isolation level 3:
If isolation level 3 or 30 (serializable) is specified, a table shared lock is implicitly assigned to the transaction for every table that is addressed by an SQL statement. These shared locks can be released only when the transaction ends. Isolation level 3 protects against all three anomalous access types (dirty read, non-repeatable read, and phantom), but is not suitable for operation with a high degree of parallelism. When you insert, change, or delete rows, exclusive locks for the relevant rows are implicitly assigned to the transaction; these exclusive locks are released only when the transaction ends.
New isolation levels with the use of MVCC:
Once you start using these new isolation levels, the isolation levels mentioned above are no longer available.
Isolation level 50:
Isolation level 50 corresponds to COMMITTED READ.
Isolation level 60:
Isolation level 60 corresponds to "Serializable" as known from Oracle. This is a weaker requirement than ISO/ANSI "Serializable".
The old isolation levels lower than 3 are mapped to the new isolation level 50. The old isolation level 3 is mapped to the new isolation level 60.
Question 40. Does SAP MaxDB Use Consistent Reading?
Answer :
Consistent reading is not yet supported in OLTP and OLAP operation. The lock management of SAP MaxDB will be changed in more recent versions due to the MVCC implementation. Currently, we cannot specify the SAP MaxDB version in which this function can be used productively.
Question 41. Which Database Parameters Do You Use To Configure The Lock Management?
Answer :
The following database parameters of SAP MaxDB affect the SAP MaxDB SQL lock management.
A) MaxUserTasks (MAXUSERTASKS):
The database parameter MaxUserTasks defines the maximum number of parallel user sessions.
B) MaxSQLLocks (MAXLOCKS):
The database parameter MaxSQLLocks defines the maximum number of row locks and table locks that can be held or requested at the same time. If the MaxSQLLocks value is reached, statements that request locks are rejected (-1000: Too many lock requests).
C) RequestTimeout (REQUEST_TIMEOUT):
The database parameter RequestTimeout defines how long a transaction waits for the assignment of a lock. If this wait time expires before the lock is assigned, the transaction terminates with an error.
D) DeadlockDetectionLevel (DEADLOCK_DETECTION):
The database parameter DeadlockDetectionLevel defines the maximum depth of search for the deadlock detection with SQL locks.
If this database parameter is set to the value 0, the deadlock detection is deactivated (in other words, deadlocks are resolved only via RequestTimeout). A value higher than 0 defines the depth of search. Up to the specified depth of search, deadlocks are detected and immediately resolved. The initial value for the deadlock detection is 4. A higher value results in greater costs. Consider using a higher value only if an application triggers serious deadlock problems that cannot be solved within the application.
Question 42. What Does The Following SQL Error -1000 Mean: Too Many Lock Requests?
Answer :
A transaction terminates because too many SQL locks were requested. The configured maximum value (MaxSQLLocks) has been reached, and the system can assign no further SQL locks.
If these problems occur frequently in the system, check the value of the database parameter MaxSQLLocks.
Question 43. When Should You Increase The Database Parameter MaxSQLLocks (MAXLOCKS)?
Answer :
MaxSQLLocks values that are too high result in long search processes in the lock lists. Therefore, if possible, reduce the locks held by write transactions by using frequent commits.
You can increase MaxSQLLocks if one of the following situations occurs frequently:
There is a high number of LOCK LIST ESCALATIONS.
The number of LOCK LIST MAX USED ENTRIES equals the value of MaxSQLLocks.
The number of LOCK LIST MAX USED ENTRIES is close to the value of MaxSQLLocks and the write transactions cannot be reduced.
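The checklist above can be sketched as a decision helper. The function name is illustrative, and the `near_factor` threshold for "close to the value of MaxSQLLocks" is an assumption, not an official figure:

```python
def should_increase_maxsqllocks(escalations_high, max_used_entries, maxsqllocks,
                                writes_reducible, near_factor=0.9):
    """Apply the three conditions listed above for raising MaxSQLLocks."""
    if escalations_high:                       # many LOCK LIST ESCALATIONS
        return True
    if max_used_entries >= maxsqllocks:        # lock list fully used
        return True
    # lock list nearly full and write transactions cannot be reduced
    if max_used_entries >= near_factor * maxsqllocks and not writes_reducible:
        return True
    return False

print(should_increase_maxsqllocks(False, 98000, 100000, writes_reducible=False))  # True
print(should_increase_maxsqllocks(False, 50000, 100000, writes_reducible=True))   # False
```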
Question 44. Is A Shared Lock Assigned To Each Data Record Of The Table During A Full Table Scan As Long As The Query Is Running?
Answer :
In isolation level 1, a lock is briefly used for each record and is released before the next record is read.
In isolation level 3, a table shared lock is used.
Question 45. How Are Locks On Joins Handled?
Answer :
The SQL locks on joins are handled in the same way as locks on tables. There is no difference.
Question 46. What Administration Tools Are Available?
Answer :
SAP recommends that you use the MaxDB tools DBMGUI/Database Studio and DBMCLI, as well as the SAP CCMS, for the administration tasks.
Question 47. Can Administration Tasks Be Automated?
Answer :
You can use the CCMS scheduling calendar or the MaxDB tools to automate both administration tasks that need to run regularly (such as automatic log backups or Update Statistics) and tasks that only need to be run when required (such as extending the data area).
Question 48. Is There A Central Transaction In The SAP System For All Administrative Database Activities?
Answer :
In the MaxDB environment, you can use transaction DB50: Database Assistant to monitor the database.
You can use transaction LC10: liveCache Assistant to monitor a liveCache. In SAP releases as of Release 7.0, you access these transactions using transaction DB59.
As of SAP Release 7.0, the DBA Cockpit (transaction DBACOCKPIT) is available as a central access point for database or liveCache administration.
Question 49. What Is The Central Scheduling Calendar?
Answer :
The central CCMS scheduling calendar comprises transactions DB13 and DB13C.
You can use these transactions to schedule the following activities:
Back up the data area and the log area
Check backups for completeness
Update the optimizer statistics
Database structure checks
Question 50. How Do I Perform A Database Backup?
Answer :
To perform a database backup, use the MaxDB backup tools DBMGUI or DBMCLI. You can use the scheduling calendar (DB13 or DB13C) to schedule a backup for a particular point in time and execute it implicitly.
File system backups of MaxDB volumes can only be executed while the database is offline, and we recommend them only in exceptional situations (for example, if you use snapshots or split mirrors).
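As a rough illustration, the snippet below lists the DBM command sequence for one full online data backup via DBMCLI. The medium name `BackData` is a placeholder and must already exist in the instance's backup media list; check the exact `backup_start` syntax against your MaxDB version.

```python
# Sketch of the dbmcli command sequence for a full online data backup.
# The medium name is an assumed placeholder, not a predefined default.

def data_backup_commands(medium="BackData"):
    """Return the DBM commands that run one complete data backup."""
    return [
        "util_connect",                # open the utility session
        f"backup_start {medium} DATA", # full backup of the data area
    ]

for command in data_backup_commands():
    print(command)
```

Each returned string would be passed to dbmcli (interactively or via `-c`) against the target instance.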
Question fifty one. Are Logs Also Saved During A Database Backup?
Answer :
No. A database backup using the MaxDB tools only backs up the data that is stored in the data volumes. This includes the actual user data and the "before images", so that a consistent database can be restored using only the database backup.
The data that is stored in the log volumes is not backed up during a data backup.
Question 52. What Advantage Does A Backup Using The Maxdb Backup Tool Offer Compared To A File System Backup?
Answer :
A file system backup can only be executed while the database is offline. The period of time for which the database is unavailable for operation depends on the size of the dataset.
In contrast, you can back up the database while it is online using the MaxDB backup tools, in parallel with production operation. You can also use a backup generated online with the database tools for a system copy.
A file system backup of the volumes (data and log) backs up all data that is stored in the volumes.
In contrast, a backup using the MaxDB tools only backs up the data of a converter version that may be required for a recovery.
When you create a file system backup, the system does not perform a checksum check on the blocks.
While a backup is being created using the MaxDB tools, the system executes consistency checks on the data.
Even if you back up the log volumes using a file system backup, you must also back up the log area using the database tools, to ensure that the log area can be released again for overwriting.
A file system backup is not included in the MaxDB backup history.
Backups performed using the MaxDB tools are integrated into the backup history, which makes restoring the database in the event of a recovery a simple process.
Question 53. Does Maxdb Support The Use Of External Backup Tools?
Answer :
Yes. MaxDB supports a number of third-party backup tools, such as NetWorker, NetBackup from Veritas, and so on.
Which specific tools are supported, and how they are integrated in a MaxDB environment, is described in the MaxDB documentation (Note 767598); search the glossary for the keyword "external backup tool".
Scripts in the Database Manager also enable you to use any other backup tools that are able to process backups from pipes.
Question 54. What Do I Do If The Data Area Is Full?
Answer :
When there is no more space available in the database, the database hangs. You must make free space available in the form of a new data volume. You can add new volumes while the database is online.
More information about this topic is available in the MaxDB documentation (Note 767598) in the glossary, under the keywords "data volume" and "db_addvolume".
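A small sketch of how such a `db_addvolume` command could be composed follows. MaxDB data volumes are sized in 8 KB pages; the volume path is a placeholder, and the command syntax should be verified against the documentation keyword mentioned above for your release.

```python
# Sketch: composing a db_addvolume DBM command to extend the data area.
# MaxDB sizes data volumes in 8 KB pages; the path is a placeholder.

PAGE_SIZE_KB = 8

def add_volume_command(path, size_gb):
    """Build the DBM command that adds one new data volume of size_gb."""
    pages = size_gb * 1024 * 1024 // PAGE_SIZE_KB  # GB -> KB -> pages
    return f"db_addvolume DATA {path} F {pages}"

print(add_volume_command("/sapdb/MAXDB1/sapdata/DISKD0004", 10))
# → db_addvolume DATA /sapdb/MAXDB1/sapdata/DISKD0004 F 1310720
```

The command would then be executed through dbmcli against the running instance.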
Question 55. What Do I Do If There Are No Free Database Sessions Left, And No New Database Application Can Log On To The Database?
Answer :
The general database parameter MAXUSERTASKS defines the maximum number of user tasks that can be active at the same time, and therefore determines the maximum number of database sessions.
When this number is reached, no further users can log on to this database instance. You then need to increase the parameter. If you assign a very high value to MAXUSERTASKS, the database system requires a lot of address space, especially in the case of local communication using shared memory.
The change only becomes active when you restart the database.
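The sizing rules quoted earlier in this document (OLTP: at least 2 x the number of SAP work processes + 4; BW: at least 3 x + 4) can be sketched as a small helper:

```python
# Sketch of the MAXUSERTASKS sizing rules quoted above:
# OLTP needs at least 2 x <SAP work processes> + 4 sessions,
# BW needs at least 3 x <SAP work processes> + 4.

def recommended_max_user_tasks(sap_processes, system_type="OLTP"):
    """Return the minimum recommended MAXUSERTASKS value."""
    factor = {"OLTP": 2, "BW": 3}[system_type]
    return factor * sap_processes + 4

print(recommended_max_user_tasks(20))        # OLTP: 44
print(recommended_max_user_tasks(20, "BW"))  # BW: 64
```

For Java or liveCache systems, substitute the corresponding formulas from Question 2 instead.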
