Tuesday, May 13, 2014

CTS Interview Questions on 10 May 2014


1. Explain about system databases?

Database              Database file               Log file
master                Master.mdf                  Mastlog.ldf
model                 Model.mdf                   Modellog.ldf
msdb                  Msdbdata.mdf                Msdblog.ldf
tempdb                Tempdb.mdf                  Templog.ldf
mssqlsystemresource   Mssqlsystemresource.mdf     Mssqlsystemresource.ldf

 

Master Database—The master database is an important system database in SQL Server 2008. It houses all system-level data, including system configuration settings, login information, disk space, stored procedures, linked servers, and the existence of other databases, along with other crucial information.

Model Database—The model database serves as a template for creating new databases in SQL Server 2008. The data residing in the model database is commonly applied to a new database with the Create Database command. In addition, the tempdb database is re-created with the help of the model database every time SQL Server 2008 is started.

Msdb Database—Used mostly by the SQL Server Agent, the msdb database stores alerts, scheduled jobs, and operators. In addition, it stores historical information on backups and restores, SQL Mail, and Service Broker.

Tempdb—The tempdb database holds temporary information, including tables, stored procedures, objects, and intermediate result sets. Each time SQL Server is started, the tempdb database starts with a clean copy.

 

TIP

It is a best practice to conduct regular backups of the system databases. In addition, if you want to increase performance and response times, it is recommended to place the tempdb data and transaction log files on different volumes from the operating system drive. Finally, if you don't need to restore the system databases to a point of failure, you can set all recovery models for the system databases to Simple.
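To see where these system database files live on a given instance, a quick catalog query works (a sketch; sys.master_files is available from SQL Server 2005 onward):

```sql
-- List the physical files behind the system databases.
-- database_id 1-4 = master, tempdb, model, msdb; the Resource database is hidden.
SELECT d.name       AS database_name,
       mf.name      AS logical_name,
       mf.physical_name,
       mf.type_desc -- ROWS or LOG
FROM sys.master_files AS mf
JOIN sys.databases    AS d ON d.database_id = mf.database_id
WHERE d.database_id <= 4;
```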

 

 

2. Can we see Resource Database in GUI?

A big reason for the locking away of the system objects is that they all have a common source now called the Resource database. The Resource database is the physical repository for all system objects and is inaccessible during normal operations of SQL Server. Although the system objects are physically stored in the Resource database, they are logically presented as the SYS schema in each database. Microsoft strongly recommends that the Resource database be left alone, but it can be accessed if SQL Server is started in single-user mode. Even this access, however, is read-only, as is access to any objects in the SYS schema. Any attempt to modify a system object will result in an error, even if ad hoc updates to the system catalog are enabled.

Persisting all the system objects in the Resource database allows for rapid deployment of service packs and upgrades to SQL Server 2005. When installing a service pack, the process is simply one of replacing the Resource database with a new version and executing whatever modifications are required to the operating system objects. This dramatically reduces the amount of time it takes to update SQL Server.

Even though the Resource database isn't accessible during normal SQL Server operations, information about the database can be retrieved using system functions and global variables. The following code returns the build number of the Resource database:

SELECT SERVERPROPERTY('ResourceVersion')

To return the date and time the Resource database was last updated, the following code can be executed:

SELECT SERVERPROPERTY('ResourceLastUpdateDateTime')

 

The SYS Schema

As previously mentioned, the system objects stored in the Resource database logically appear in the SYS schema of each database. The SYS schema contains views that can be utilized by the DBA to retrieve information about the objects in a database. Most (but not all) of the information the DBA typically needs access to is available through the use of system functions and stored procedures that return metadata from the system objects. Sometimes, however, it is beneficial to retrieve the metadata directly from the system objects. The views in the SYS schema are provided for this reason.

If you have ever used SQL Server 2000 system tables, you will find that almost all of the old system table names have been preserved, but now are persisted as views. However, these views are only provided for backward compatibility. They do not expose any SQL Server 2005-specific metadata. Any future operations should be based on the new SQL Server 2005 system views.
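For example, the same list of user tables can be retrieved from the old-style compatibility view and from the newer catalog view (a sketch; run in any database):

```sql
-- Backward-compatibility view (mirrors the SQL Server 2000 system table).
SELECT name, id FROM sysobjects WHERE xtype = 'U';

-- Preferred SQL Server 2005 catalog view; exposes newer metadata such as create_date.
SELECT name, object_id, create_date FROM sys.objects WHERE type = 'U';
```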

 

A word of caution is needed here. As a general rule, any scripts or applications created to consume system metadata directly from system objects should be built with the knowledge that they may not work in future releases of SQL Server. There is nothing really new about this. Microsoft has cautioned against formalizing processes that directly access system objects for years and has warned that the system objects could be altered by future upgrades and service packs.

 

4. What do you do if TempDB is full?

            Sim writes "I would like to know if tempdb has become full and what to do if you cannot truncate the log? I want to understand the process why you cannot truncate it."

Let's start with a brief description of tempdb and its default settings. The tempdb database is a system database that contains temporary tables created by the server and by the user. The database is recreated each time SQL Server is restarted. If you are running SQL 7 or later, the default settings for tempdb are unlimited file size and autogrow by 10%. SQL Server locks tempdb while it is expanding, so correct sizing of your tempdb is important. If you run out of space on SQL 7 or later and you have not changed these settings, then you need to look at hard drive space.

Truncating the log just removes inactive entries from the log. TempDB is set to remove the inactive entries automatically (on SQL 7 and earlier the 'trunc. log on chkpt' option is set, and the SQL 2000 recovery mode is set to Simple). These settings force inactive log entries to be removed from the log during a checkpoint operation. Books Online has a good article explaining truncating the transaction log.

On versions prior to 7.0, or if you set a maximum size for tempdb, you will get an error 1105 (filegroup full) or 9002 (log file full) when you fill up tempdb. Use Performance Monitor to watch the amount of tempdb space in use. Set an alert to notify you when the usage crosses a threshold for a period of time.

Redesign queries to work on smaller sets of data at a time. Break one large transaction into several smaller transactions if possible. In SQL 2000, try table variables instead of temporary tables. These variables are handled in memory, not in tempdb. Expand tempdb by adding files or by moving it to another hard drive.
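The monitoring and expansion steps above can be sketched in T-SQL (the file path and size values are placeholders to adapt to your server):

```sql
-- How much free space is left in tempdb right now (counts are in 8KB pages)?
SELECT SUM(unallocated_extent_page_count) * 8 / 1024 AS free_space_mb
FROM tempdb.sys.dm_db_file_space_usage;

-- Expand tempdb by adding a data file, ideally on a separate drive.
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev2,
          FILENAME = 'E:\SQLData\tempdev2.ndf',  -- placeholder path
          SIZE = 512MB,
          FILEGROWTH = 10%);
```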

The procedure for moving tempDB depends upon the version of SQL you are running. The following links discuss how to move the tempdb:

How to Move Tempdb to a Different Device (SQL Server 6.5)

How to Performance Tune the Microsoft SQL Server tempdb Database (SQL Server 7/2000)

I hope this answers your question about TempDB.

Lance


 

5. Explain about Page Architecture?  What are the page Types?

Pages

Every data file begins with a mixed extent containing pages that are used to track information about the data file. The first page on the first extent of a data file contains a File Header page. This page contains descriptive information about the data file, such as its logical name, physical name, max size, growth increment, and filegroup membership.

All data and metadata in a SQL Server 2005 database are stored in pages. Unlike extents, pages always store data from the same object. This includes rows from tables, rows from indexes, and Large Object data. Pages are 8KB in size and are organized on 64KB extents, which are made up of eight contiguous 8KB pages. Every page has a 96-byte header that contains information about the page, such as the page number, the type of data stored on the page, the amount of free space available on the page, and what object owns the page. SQL Server contains several different types of pages that are used to both store data and to manage data.

Data Pages

Data pages contain data rows from tables. These rows cannot span pages. Because of the page header and row offset information, the maximum row size is limited to 8,060 bytes. Row sizes are determined by the number of columns in the row and the data type defined on each column. To maximize performance, table and index rows should be kept as narrow as possible. For example, if a single table row was 4,100 bytes in width, only one row could be stored on each data page, leaving almost 4,000 bytes of unusable space. Resulting reads from a table with this structure would require 8KB of data retrieval for only 4,100 bytes of data. This is obviously very inefficient. Physical data page structure is illustrated in Figure 4-2.
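To see a page header for yourself, the undocumented DBCC PAGE command can dump any page. It is unsupported by Microsoft, so treat this strictly as a learning aid; the database name and page number below are arbitrary examples:

```sql
DBCC TRACEON (3604);  -- send DBCC output to the client instead of the error log
-- Arguments: database, file id, page number, print option (1 = header plus row details).
DBCC PAGE ('AdventureWorks2008', 1, 100, 1);
```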


Each row-offset block consumes 2 bytes of space for every row stored on a page. Rows from tables are physically arranged differently than their logical definition in order to optimize storage space. When a row is stored on a data page, the row is identified with a 4-byte header, which uniquely identifies the row on the page, followed by the fixed-length data columns, a Null block, a variable block, and then all the variable data columns at the end of the physical row, as shown in Figure 4-3.

The Null block contains a 2-byte block that indicates how many columns in the row can contain nulls, followed by a bitmap that indicates whether each nullable column is null. The size of the null bitmap is equal to 1 bit per column, rounded up to the nearest byte. One to eight nullable columns require a 1-byte bitmap. Nine to 16 columns require a 2-byte bitmap, and so on.

The variable block, like the Null block, contains 2 bytes that indicate how many variable-length columns are present, followed by a bitmap that indicates what the maximum length of each variable column is. Unlike the Null block, the variable column bitmap contains 2 bytes per column that point to the end of each variable-length column, so that all the variable data can be stored contiguously at the end of the row. If no columns are defined as variable length, the variable block is omitted.

Index Pages

Index pages contain rows from indexes. They have the same structure and limitations as data pages.

Text/Image Pages

When a column is defined with a Large Object data type, SQL Server places a 16-byte pointer in the actual data row and places the Large Object data on separate data pages. This data includes those defined as text, image, varchar(MAX), nvarchar(MAX), varbinary(MAX), and XML.

Global Allocation Map (GAM) and Secondary Global Allocation Map (SGAM) Pages

The GAM and SGAM pages are allocation pages that manage extents on a file-by-file basis. The second page of every data file is a GAM page, and the third page of every data file is a SGAM page. SQL Server will add additional GAM and SGAM pages as necessary, because each GAM and SGAM page can track only 63,904 extents. The GAM and SGAM pages form a bitmap that indicates whether an extent is a uniform or mixed extent. The GAM and SGAM bitmap also indicates whether the extent is full, empty, or has free data pages.

Page Free Space (PFS) Pages

PFS pages record the status of each page, whether or not a page has been allocated, and the amount of free space on each page.

Index Allocation Map (IAM) Pages

The IAM page contains information about the extents that a table or index uses. The IAM page contains the location of the eight initial pages of an object, and a bitmap representing the extents that are in use for that object. Every IAM page can track up to 512,000 data pages. SQL Server utilizes the IAM and PFS pages to find and allocate new pages for data.

Bulk Changed Map (BCM) Pages

The Bulk Changed Map pages contain the location of extents that were modified by bulk operations since the last transaction log backup. Bulk operations include UPDATETEXT, WRITETEXT, SELECT INTO, BULK INSERT, and image operations. BCM pages are used primarily for transaction log backup operations when the database is in BULK-LOGGED recovery mode (see Chapter 9 for a full explanation of the BULK-LOGGED recovery mode).

Differential Changed Map (DCM) Pages

The Differential Changed Map pages contain the identifier of any extent that has been modified since the last database backup. The DCM pages are used when performing differential backups.

 

 

7. Did you hear about 'row offset'? What does it do?

 

 

8. What is an index, and what are its types and purposes?

Generally SQL Server examines (table scans) every row in the table to satisfy the query results. "An index is a physical structure containing pointers to the data, which is used for retrieving data more quickly and improves query performance."

Clustered Index:- Only one clustered index can be created on a table. When we create a clustered index on a table, all the rows in the table are stored in the order of the clustered index key.

Non-clustered Index:- Up to 249 nonclustered indexes can be created on a table (999 from SQL Server 2008 onward). Nonclustered indexes are stored as B-Tree structures with the leaf-level nodes holding the index key and its row locator.

Indexes can be rebuilt by using DBCC DBREINDEX.
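A minimal sketch of both index types on a hypothetical table, plus two ways to rebuild them (DBCC DBREINDEX is the older syntax; ALTER INDEX ... REBUILD is preferred from SQL Server 2005 on):

```sql
CREATE TABLE dbo.Orders (
    OrderID    int      NOT NULL,
    CustomerID int      NOT NULL,
    OrderDate  datetime NOT NULL
);

-- The clustered index defines the physical order of the rows; only one per table.
CREATE CLUSTERED INDEX IX_Orders_OrderID ON dbo.Orders (OrderID);

-- Nonclustered indexes are separate B-Trees whose leaf level carries a row locator.
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID ON dbo.Orders (CustomerID);

DBCC DBREINDEX ('dbo.Orders');          -- older rebuild syntax
ALTER INDEX ALL ON dbo.Orders REBUILD;  -- preferred from SQL Server 2005 onward
```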

 

9. What is shrinking?

The Shrink Database task reduces the physical database and log files to a specific size. This operation removes excess space in the database based on a percentage value. In addition, you can enter thresholds in megabytes, indicating the amount of shrinkage that needs to take place when the database reaches a certain size and the amount of free space that must remain after the excess space is removed. Free space can be retained in the database or released back to the operating system.

The following TSQL syntax shrinks the AdventureWorks2008 database, returns freed space to the operating system, and allows for 15% of free space to remain after the shrink:


USE [AdventureWorks2008]

GO

DBCC SHRINKDATABASE(N'AdventureWorks2008', 15, TRUNCATEONLY)

GO


Tip

It is best practice not to select the option to shrink the database. First, when shrinking the database, SQL Server moves pages toward the beginning of the file, allowing the end of the files to be shrunk. This process can increase the transaction log size because all moves are logged. Second, if the database is heavily used and there are many inserts, the database files will have to grow again. SQL 2005 and above addresses slow autogrowth with instant file initialization; therefore, the growth process is not as slow as it was in the past. However, sometimes autogrow does not catch up with the space requirements, causing performance degradation. Finally, constant shrinking and growing of the database leads to excessive fragmentation. If you need to shrink the database size, you should do it manually when the server is not being heavily utilized.

Alternatively, you can shrink a database by right-clicking a database and selecting Tasks, Shrink, and Database or File.
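Shrinking can also be targeted at a single file with DBCC SHRINKFILE, which is what you would use for just the transaction log (the logical file name below is an assumption; check yours first with sys.database_files):

```sql
USE [AdventureWorks2008];
GO
-- Show the logical file names and sizes for this database.
SELECT name, type_desc, size * 8 / 1024 AS size_mb FROM sys.database_files;
GO
-- Shrink only the log file to a target of roughly 100 MB.
DBCC SHRINKFILE (N'AdventureWorks2008_Log', 100);
GO
```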

Data Compression in SQL Server 2008

With data explosion occurring in the industry and enterprise, more and more data is being stored within SQL Server. Databases are no longer averaging 100GB in size as they were 10 years ago; now they are becoming larger than ever, ranging in size from 10 to 50 TB on average. In addition, many times it is only a select few tables within the database that are growing to these record-breaking sizes. To combat this, even though storage is getting cheaper, Microsoft has introduced data compression, along with a Data Compression Wizard and new Transact-SQL statements, so that a DBA can compress tables and indexes to conserve storage space. One of the main design goals of compression with SQL Server 2008 was the potential to shrink data warehouse fact tables. Fact tables are typically the largest tables within a data warehouse because they contain the majority of the data associated with a database.

Note

Compression is available only in the Enterprise Edition of SQL Server 2008. Moreover, the sp_estimate_data_compression_savings stored procedure can be used to calculate the storage savings associated with a compressed object.


Using the Data Compression Wizard to Compress Space

The Data Compression Wizard can be used to analyze and compress space associated with a whole table that is stored in a heap or clustered index structure. In addition, it supports compressing tables that use nonclustered indexes and indexed views. Finally, if the table is partitioned, a DBA can compress portions of the table, and the various partitions do not have to maintain the same compression settings.

The Data Compression Wizard can be invoked by right-clicking any table or index and selecting Storage and then Manage Compression. The first step is to select the compression type based on a partition number, or you can choose to use the same compression for all partitions. The compression options include row compression, page compression, or none, indicating no compression. The next step is to click the Calculate button to review the disk cost savings. The cost savings will be displayed in the Requested Compressed Space column based on a partition number. This is displayed in Figure 2.16. The final step includes selecting an output option. It is possible to create a script, run immediately, or schedule the task to commence at a later time.


FIGURE 2.16 Specifying the Compression Type with the Data Compression Wizard.

Data Compression with Transact SQL

For those of you who do not like using a wizard interface, the following examples outline the default syntax for compressing data with Transact-SQL.


Creating a Table with Row Compression Enabled

CREATE TABLE <Table Name>
(<Column 1> int, <Column 2> nvarchar(50))
WITH (DATA_COMPRESSION = ROW);
GO

Creating a Table with Page Compression Enabled

CREATE TABLE <Table Name>
(<Column 1> int, <Column 2> nvarchar(50))
WITH (DATA_COMPRESSION = PAGE);
GO

The following Transact-SQL syntax illustrates compressing the Sales.SalesOrderDetail table in the AdventureWorks2008 database with the page compression setting:

USE [AdventureWorks2008]
ALTER TABLE [Sales].[SalesOrderDetail]
REBUILD PARTITION = ALL
WITH (DATA_COMPRESSION = PAGE)

As mentioned earlier, compression is only included in the Enterprise Edition of SQL Server 2008. Compression technologies dramatically cut I/O requests and storage; however, the storage gained typically comes at the sacrifice of processor performance. Therefore, test scenarios in a prototype lab to ensure that performance degradation of the system does not occur when using this new feature. Finally, use the new performance counters Page compression attempts/sec and Pages compressed/sec, found in the SQL Server Access Methods object, to monitor compression.
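Before compressing anything, the stored procedure mentioned above can estimate the savings; a sketch against the AdventureWorks2008 table used earlier:

```sql
-- Estimate how much space PAGE compression would save (SQL Server 2008 Enterprise).
EXEC sp_estimate_data_compression_savings
     @schema_name      = 'Sales',
     @object_name      = 'SalesOrderDetail',
     @index_id         = NULL,   -- NULL = all indexes
     @partition_number = NULL,   -- NULL = all partitions
     @data_compression = 'PAGE';
```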

 

Tempdb contains user objects (temporary tables, table variables) and internal objects (hash tables, bitmaps and so on).
If your tempdb fills because you create lots of temp tables, just drop them.
If it fills because of internal objects, it could be due to a massive SORT / HASH used in a query plan. In this case, I advise using this script to identify the offending query and fix its plan by adding indexes or other tuning measures.
If you want to shrink tempdb without cycling the instance, you can use DBCC FREESYSTEMCACHE('ALL') and then issue the shrink command.

;WITH task_space_usage AS (
    -- SUM alloc/dealloc pages
    SELECT session_id,
           request_id,
           SUM(internal_objects_alloc_page_count) AS alloc_pages,
           SUM(internal_objects_dealloc_page_count) AS dealloc_pages
    FROM sys.dm_db_task_space_usage WITH (NOLOCK)
    WHERE session_id <> @@SPID
    GROUP BY session_id, request_id
)
SELECT TSU.session_id,
       TSU.alloc_pages * 1.0 / 128 AS [internal object MB space],
       TSU.dealloc_pages * 1.0 / 128 AS [internal object dealloc MB space],
       EST.text,
       -- Extract statement from sql text
       ISNULL(
           NULLIF(
               SUBSTRING(
                   EST.text,
                   ERQ.statement_start_offset / 2,
                   CASE WHEN ERQ.statement_end_offset < ERQ.statement_start_offset THEN 0 ELSE( ERQ.statement_end_offset - ERQ.statement_start_offset ) / 2 END
               ), ''
           ), EST.text
       ) AS [statement text],
       EQP.query_plan
FROM task_space_usage AS TSU
INNER JOIN sys.dm_exec_requests ERQ WITH (NOLOCK)
    ON  TSU.session_id = ERQ.session_id
    AND TSU.request_id = ERQ.request_id
OUTER APPLY sys.dm_exec_sql_text(ERQ.sql_handle) AS EST
OUTER APPLY sys.dm_exec_query_plan(ERQ.plan_handle) AS EQP
WHERE EST.text IS NOT NULL OR EQP.query_plan IS NOT NULL
ORDER BY 3 DESC, 5 DESC

 

10. When do you do shrinking?

 

11. Can Log file be shrink while it has active transactions?




12. Can a data file be shrunk?

 

13. If the log file occupies 95% of its size, can it be shrunk at this time?

 

14. What is fragmentation and why does it happen?

 

15. How do you check the fragmentation values?

DBCC SHOWCONTIG
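From SQL Server 2005 onward, the sys.dm_db_index_physical_stats DMV is the preferred replacement for DBCC SHOWCONTIG. A sketch; the 10% and 30% thresholds below are common rules of thumb, not hard limits:

```sql
-- Fragmentation per index in the current database (LIMITED mode is the cheapest scan).
SELECT OBJECT_NAME(ips.object_id)         AS table_name,
       ips.index_id,
       ips.avg_fragmentation_in_percent,
       ips.page_count,
       CASE WHEN ips.avg_fragmentation_in_percent > 30 THEN 'REBUILD'
            WHEN ips.avg_fragmentation_in_percent > 10 THEN 'REORGANIZE'
            ELSE 'OK' END                 AS suggested_action
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
WHERE ips.page_count > 100   -- ignore tiny indexes
ORDER BY ips.avg_fragmentation_in_percent DESC;
```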

16. When do you shrink the files, daily, weekly or monthly?

 

17. How do you schedule shrink jobs? How does it check if it (job) has to perform reorg or rebuild?

 

18. Do you take backup of TempDB, if not why?

The TempDB database is used by SQL Server to store data—yes, you guessed it, temporarily. The TempDB database is used extensively during SQL Server operations, so careful planning and evaluation of its size and placement are critical to ensure efficient SQL Server database operations. Backing up tempdb is neither possible nor needed: it is re-created from the model database every time SQL Server starts, so there is nothing persistent to restore.

 

 

19. What is the purpose of TempDB MDF & LDF? How does it differ from User database MDF & LDF?

 

20. What are the steps involved in Migration activity?

 

21. How do you move Login & User Information while migrating a database?

When a database is moved (attach & detach, or restore to a different instance) or the corresponding login is dropped, the users of the database become orphaned users: they can't access the database because the database users are no longer mapped to a SID (Security Identifier) in the new instance.

To detect orphaned users:- sp_change_users_login 'Report'

To resolve orphaned users:- sp_change_users_login 'Update_One', 'db_user', 'login_name'

 

 

22. Explain Log File Architecture?

 

23. Default Port# of SQL?

 

24. What are Isolation Levels available in SQL? What is the default ISOLATION LEVEL??

An Isolation level determines the degree of isolation of data between concurrent transactions. The default isolation level is Read committed.

a)      Read uncommitted

b)      Read committed

c)       Repeatable read

d)      Serializable
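A quick sketch of setting and checking the level for a session (sys.dm_exec_sessions reports 2 for the Read Committed default; note that SQL Server 2005 also adds a Snapshot level beyond the four classic ones):

```sql
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;

-- 1 = Read Uncommitted, 2 = Read Committed (default), 3 = Repeatable Read,
-- 4 = Serializable, 5 = Snapshot
SELECT transaction_isolation_level
FROM sys.dm_exec_sessions
WHERE session_id = @@SPID;
```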

 

 

 

26. What are the Compatibility Levels and its purpose?

 

27. What compatibility levels are supported by SQL Server 2000, 2005, 2008, 2012 & 2014?

 

28. What are the advance features of 2008, 2012 & 2014?

 

29. Which tool you use to check if the hardware configuration is sufficient for SQL Server installation?

SQL Server Setup includes a planning tool known as the System Configuration Checker, which checks for conditions that could impede a SQL Server installation prior to the actual installation.

 

30. What are the daily activities you perform?

 

Daily Routine Maintenance Tasks

Maintenance tasks requiring close and regular attention are commonly

checked each day. DBAs who take on these tasks daily ensure system reliability,

availability, performance, and security. Some of the daily routine maintenance

tasks include the following:

■ Check that all required SQL Server services are running.
■ Check Daily Backup logs for success, warnings, or failures.
■ Check the Windows Event logs for errors.
■ Check the SQL Server logs for security concerns such as invalid logins.
■ Conduct full or differential backups.
■ Conduct Transaction Log backups on databases configured with the Full or Bulk-Logged recovery model.
■ Verify that SQL Server jobs did not fail.
■ Check that adequate disk space exists for all database files and transaction logs.
■ At least monitor processor, memory, or disk counters for bottlenecks.

Weekly Routine Maintenance Tasks

Maintenance procedures that require slightly less attention than daily checking are categorized in a weekly routine. The following list details these weekly tasks:

■ Conduct full or differential backups.
■ Review Maintenance Plan reports.
■ Check database integrity.
■ Shrink the database if needed.
■ Compact clustered and nonclustered tables and views by reorganizing indexes.
■ Reorganize data on the data and index pages by rebuilding indexes.
■ Update statistics on all user and system tables.
■ Delete historical data created by backups, restores, SQL Server Agent, and maintenance plan operations.
■ Manually grow database or transaction log files if needed. Adjust automatic growth values if needed.
■ Remove files left over from executing maintenance plans.



Monthly or Quarterly Maintenance Tasks

Some maintenance tasks are managed more infrequently, such as on a monthly or quarterly basis. Do not interpret these tasks as unimportant because they don't require daily maintenance. These tasks also require maintenance to ensure the health of the environment, but on a less regular basis because they are more self-sufficient and self-sustaining. Although the following tasks may appear mundane or simple, they should not be overlooked during maintenance.

■ Conduct a restore of the backups in a test environment.
■ Archive historical data if needed.
■ Analyze collected performance statistics and compare them to baselines.
■ Review and update maintenance documentation.
■ Review and install SQL Server patches and service packs (if available).
■ Test failover if running a cluster, database mirroring, or log shipping.
■ Validate that the backup and restore process adheres to the Service Level Agreement defined.
■ Update SQL Server build guides.
■ Update SQL Server disaster recovery documentation.
■ Update maintenance plan checklists.
■ Change Administrator passwords.
■ Change SQL Server service account passwords.

Summary

The maintenance plan feature alone should be one of the key selling points for SQL Server 2008. The ability to use an uncomplicated wizard to automate administrative tasks that SQL Server will perform against a single database or multiple databases has decreased the amount of manual work DBAs must do and ensures that tasks do not get overlooked. To take advantage of running tasks concurrently, or using precedence constraints to run tasks sequentially, you should create plans manually. This is the best way to develop maintenance plans for those looking for a lot of flexibility on advanced workflow.

SQL Server 2008 continues to allow organizations to extend their use of maintenance plans. The following are just some of the features SQL Server 2008 has brought to the table: support for multiserver maintenance plans, no requirement for SSIS to be installed, and the potential for remote logging.



In the end, the most important thing to take away from this chapter is the importance of having a maintenance plan in place early and ensuring that maintenance is scheduled accordingly to preserve the health of each database.

Best Practices

Some important best practices from the chapter include the following:

■ DBAs should fully understand all maintenance activities required and implemented within the SQL Server environment.
■ Use the Maintenance Plan Wizard to automate and schedule routine maintenance operations.
■ When creating maintenance plans with the wizard, leverage the features included in SQL Server 2008 and create independent schedules for subtasks.
■ Maintenance tasks should be scripted, automated, and fully documented.
■ Maintenance tasks should be conducted during nonpeak times or after hours, such as on weekends and after midnight.
■ When you configure the order of the maintenance tasks, backups should be executed first, and then other tasks that change the database.
■ Do not include the Shrink Task when creating Maintenance Plans. Manually shrink the database if needed during nonpeak hours.
■ Maintenance tasks should be grouped into daily, weekly, and monthly schedules.
■ Schedule and conduct routine maintenance tasks on a daily, weekly, and monthly basis.
■ For a large enterprise environment running many SQL Servers, take advantage of subplans and the multiserver maintenance plan.

 

31. What are the alerts you get wrt to Log Shipping, Replication and Mirroring?

 

A deadlock occurs when two processes each hold a lock on a resource that the other process needs—for example, Process 1 holds a lock on Table2 and requests a lock on Table1, while Process 2 holds a lock on Table1 and requests a lock on Table2. This condition causes an endless loop of waiting for the locks to be released.

The Database Engine picks one of the sessions as the victim, kills its query, and sends an error message to the user: "You are the victim of a deadlock, try again later."

Deadlock Information Tools

1)      Trace Flags:- DBCC TRACEON (1204) & DBCC TRACEON (1222). When these trace flags are enabled, the deadlock information is captured in the SQL Server error log.

2)      Deadlock graph event in SQL Profiler:- SQL Server Profiler graphically represents the tasks and resources involved in a deadlock (the Lock:Deadlock and Lock:Deadlock Chain events in the Locks event category).

3)      System View:- We can find the blocking sessions by writing the following query:

Select session_id, status, blocking_session_id from sys.dm_exec_requests
where blocking_session_id > 0

Resolving Deadlock:- After finding the session causing the problem, we can use the KILL command:

                > KILL process_id
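Enabling the deadlock trace flags globally (the -1 argument applies them to all sessions) can be sketched as:

```sql
-- Write deadlock details to the SQL Server error log for all sessions.
DBCC TRACEON (1204, -1);
DBCC TRACEON (1222, -1);

-- Confirm which trace flags are currently active.
DBCC TRACESTATUS (-1);
```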

 

1)      What are the Types of Locks? Explain each?

There are seven lock types available in SQL Server 2005.

a)      Shared Lock:- Shared (S) locks allow concurrent transactions to read (SELECT) a resource under pessimistic concurrency control.

b)      Update Lock:- Used on resources that can be updated. Prevents a common form of deadlock that occurs when multiple sessions are reading, locking, and potentially updating resources later.

c)       Exclusive Lock:- Exclusive (X) locks prevent access to a resource by concurrent transactions

d)      Intent Lock:- Used to establish a lock hierarchy. The types of intent locks are: intent shared (IS), intent exclusive (IX), and shared with intent exclusive (SIX).

e)      Schema:- Used when an operation dependent on the schema of a table is executing.

f)       Bulk Update:- Used when bulk copying data into a table and the TABLOCK hint is specified

g)      Key-Range:- Protects the range of rows read by a query when using the serializable transaction isolation level

 

What is Check point?

A checkpoint is an operation that synchronizes the physical data files with the current state of the buffer cache by writing all modified data pages in the buffer cache to disk.
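A checkpoint can also be issued manually; the optional argument requests a target duration in seconds (the database name is just an example):

```sql
USE [AdventureWorks2008];  -- example database
CHECKPOINT;       -- flush dirty pages of the current database to disk
-- CHECKPOINT 10; -- ask SQL Server to finish the checkpoint within about 10 seconds
```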

What is DB refresh?

This topic describes how to build or refresh a reporting database that is intended for use as a scalable shared database. A reporting database might be a copy of a production database, as in this figure, or the reporting database might contain reporting data imported from one or more production databases. The process of building or refreshing a reporting database is known as the build phase or refresh phase, depending on whether a new reporting database is being built or a stale reporting database is being refreshed.

The life of a scalable shared database begins by building a new reporting database on a set of reporting volumes. Eventually, the reporting data becomes too outdated to be useful, and the reporting database is said to be stale. A stale reporting database is refreshed as part of each update cycle. Refreshing a stale reporting database involves either updating its data or building a completely new, fresh version of the database.

To build or refresh a reporting database, you can use any of the data-copy methods provided by SQL Server for copying data or databases. For more information, see "Methods for Building or Refreshing a Reporting Database," later in this topic.


Note
Before a reporting database is built or refreshed, its set of reporting volumes must be mounted on the production server and each of the volumes must be marked as read-write. For a description of all of the steps in this process, see "Steps for Building or Refreshing the Database," later in this topic.

SQL Server 2005 and later versions support the following methods for building or refreshing a reporting database:

  • SQL Server Integration Services (SSIS)

You can create or copy a database by running SSIS packages and using the Execute SQL task or the Transfer Database task:

    • The Execute SQL task runs SQL statements or stored procedures from a package. Using the Execute SQL task you can create a database by running a CREATE DATABASE statement and then populating the database by copying in one or more tables or views. For more information, see SSIS Execute SQL Task.
    • The Transfer Database task can copy a database within the same server instance or between instances. For more information, see Transfer Database Task.

Note
You can also create a database using the SQL Server Import and Export Wizard, but you must copy at least one table or view. For more information, see Using the SQL Server Import and Export Wizard to Move Data.

  • Backup and restore

You can restore a backup of a production database onto the reporting volume. This involves restoring and recovering a full database backup onto the reporting volume.

    • If you are using the same drive letter, mount the reporting volume on to a different host and connect to a server instance there to restore the database.
    • If the reporting volume uses a different drive letter from the production volume, you must add a WITH MOVE clause to your RESTORE DATABASE statement that specifies the drive letter of the reporting volume in the pathname of the restored database.
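As a hedged sketch of the backup-and-restore method (all database, logical file, and path names are placeholders), the MOVE clauses relocate the data and log files onto the reporting volume:

```sql
RESTORE DATABASE SalesReporting
FROM DISK = N'D:\Backups\SalesProd.bak'
WITH
    MOVE N'SalesProd_Data' TO N'R:\Data\SalesReporting.mdf',
    MOVE N'SalesProd_Log'  TO N'R:\Data\SalesReporting.ldf',
    RECOVERY;   -- bring the database online after the restore
```

The logical file names for the MOVE clauses can be found beforehand with RESTORE FILELISTONLY against the same backup file.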

For information about using backup and restore to copy a database, see Copying Databases with Backup and Restore.

  • Copying the production database

Before you can copy a database manually or use the Detach and Attach Method of the Copy Database Wizard, you must take the database offline, and after copying it, bring it back online. The Copy Database Wizard, however, offers an alternative method, the SMO Transfer method, that copies the database while it remains online. Though slower than the Detach and Attach Method, the SMO Transfer method has the advantage of preserving active connections to the database.

For more information, see Using the Copy Database Wizard.

Note
We recommend that you always use the same pathname for the production and reporting databases when building a reporting database, and, if possible, the same drive letter for the production and reporting volume (when mounted on the reporting servers).

It might be beneficial to optimize the reporting database for the anticipated query workload. Such optimizations could include:

  • Updating or recreating statistics on affected tables and indexes as needed.
  • Rebuilding indexes with a fill factor of 100.
  • Putting the reporting database into simple recovery mode and performing a checkpoint to truncate the transaction log and save log space.
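The three optimizations above might look like this in T-SQL (the table and database names are placeholders):

```sql
-- 1. Refresh statistics on a heavily queried table.
UPDATE STATISTICS dbo.FactSales WITH FULLSCAN;

-- 2. Rebuild its indexes completely full, since the data will be read-only.
ALTER INDEX ALL ON dbo.FactSales REBUILD WITH (FILLFACTOR = 100);

-- 3. Switch to simple recovery and checkpoint to truncate the log.
ALTER DATABASE SalesReporting SET RECOVERY SIMPLE;
CHECKPOINT;
```

A fill factor of 100 makes sense here precisely because the shared database will never be written to, so page splits are not a concern.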

Note
For guidelines about how to use a reporting database as scalable shared database, see Ensuring a Correct Environment for a Scalable Shared Database.

When the database is ready, you must mark the reporting volumes as read-only and dismount them from the production server. For more information, see "Steps for Building or Refreshing the Database," later in this topic.


Note
Before you can refresh a reporting database, its set of reporting volumes must have been dismounted from all of the reporting servers. For more information, see Detaching a Scalable Shared Database.

To build or refresh a reporting database, do the following on the production server:

  1. Using your hardware vendor's utilities, unmask the logical unit number (LUN) corresponding to each reporting volume to make the volume accessible to the production server. If the database uses multiple reporting volumes, perform this step for each of the volumes.
  2. Mount each reporting volume and mark it as read-write. The DiskPart list volume command, which displays all volumes on a computer, is useful for obtaining volume information. To use the DiskPart utility to mount the volume, at the command prompt, enter the following commands:

DiskPart

DISKPART> select volume=<drive-number>

DISKPART> assign letter=<drive-letter>

DISKPART> attributes volume clear readonly

DISKPART> exit

Where <drive-number> is the volume number assigned by the Windows operating system and <drive-letter> is the letter assigned to the reporting volume. If the database uses multiple reporting volumes, perform this step for each of the volumes.

  3. If you are refreshing an existing reporting database:
    • Attach the database to a server instance (typically, this would be the production server instance).


    • Set the database to read-write access.

You can accomplish this by using the following Transact-SQL statement:

ALTER DATABASE <database_name> SET READ_WRITE

  4. Build or refresh the database.

The administrator builds or refreshes the database using any of the methods for building or refreshing a reporting database described earlier. The reporting database can use any valid database name. For more information, see "Methods for Building or Refreshing a Reporting Database," earlier in this topic.

Note
In reporting databases, we recommend that PAGE_VERIFY be set to CHECKSUM, the default. For more information, see ALTER DATABASE (Transact-SQL).
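Setting the page-verify option explicitly might look like this (the database name is a placeholder):

```sql
ALTER DATABASE SalesReporting SET PAGE_VERIFY CHECKSUM;

-- Confirm the setting:
SELECT name, page_verify_option_desc
FROM sys.databases
WHERE name = N'SalesReporting';
```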

  5. Detach the database from the production server instance.

You can accomplish this by using the following Transact-SQL statement:

sp_detach_db @dbname='<database_name>'

Where <database_name> is the name of the database.

Note
By default, sp_detach_db runs UPDATE STATISTICS to resample the statistics. However, some implementations might require you to run UPDATE STATISTICS separately with the FULL SCAN option.
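The two variants can be sketched as follows (the database name is a placeholder):

```sql
-- Detach with the default behavior (statistics are resampled on detach):
EXEC sp_detach_db @dbname = N'SalesReporting';

-- Or update statistics with a full scan first, then skip the
-- statistics update during detach:
USE SalesReporting;
EXEC sp_updatestats;             -- or UPDATE STATISTICS per table WITH FULLSCAN
USE master;
EXEC sp_detach_db @dbname = N'SalesReporting', @skipchecks = 'true';
```

sp_detach_db fails if any other connection is still using the database, so set it to SINGLE_USER first if necessary.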

  6. Mark each reporting volume as read-only and dismount it from the production server.

To use the DiskPart utility to dismount the volume, at the command prompt, enter the following commands:

DiskPart

DISKPART> select volume=<drive-number>

DISKPART> attributes volume set readonly

DISKPART> remove

Where <drive-number> is the volume number assigned by the Windows operating system. If the database uses multiple reporting volumes, perform this step for each of the volumes.

Important
Reporting volumes must be read-only before being dismounted from the production server.

The reporting database can now be made available as a scalable shared database. For more information, see Attaching a Reporting Database as a Scalable Shared Database.

 Row OFFSET

Data rows are put on the page serially, starting immediately after the header. A row offset table starts at the end of the page, and the row offset table contains one entry for each row on the page. Each entry records how far the first byte of the row is from the start of the page. The entries in the row offset table are in reverse sequence from the sequence of the rows on the page.
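The row offset table of a specific page can be inspected with the DBCC PAGE command (undocumented but widely used); trace flag 3604 routes the output to the client. The database name and page number below are placeholders:

```sql
DBCC TRACEON (3604);        -- send DBCC output to the current session

-- DBCC PAGE (database, file_id, page_id, print_option)
-- Print option 2 dumps the raw page, including the row offset
-- table (the "OFFSET TABLE" section) at the end of the output.
DBCC PAGE (N'MyDatabase', 1, 168, 2);

DBCC TRACEOFF (3604);
```

In the output you can see the offsets listed in reverse row order, matching the description above.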
