Interview with Accenture on 19 May 2014
1. What are the recovery models? Explain each.
2. What are the backup types? (A minimal T-SQL sketch follows the list.)
Full Backup
Differential Backup
Transaction Log Backup
Copy-only Backup
Partial Backup
Differential Partial Backup
Filegroup Backup
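A minimal sketch of these backup types in T-SQL, assuming a hypothetical database named SalesDB and placeholder backup paths (neither appears in the original post):
-- Full backup: complete copy of the database
BACKUP DATABASE SalesDB
TO DISK = N'D:\Backups\SalesDB_Full.bak';

-- Differential backup: only the extents changed since the last full backup
BACKUP DATABASE SalesDB
TO DISK = N'D:\Backups\SalesDB_Diff.bak'
WITH DIFFERENTIAL;

-- Transaction log backup: requires the FULL or BULK_LOGGED recovery model
BACKUP LOG SalesDB
TO DISK = N'D:\Backups\SalesDB_Log.trn';

-- Copy-only backup: does not disturb the existing backup chain
BACKUP DATABASE SalesDB
TO DISK = N'D:\Backups\SalesDB_CopyOnly.bak'
WITH COPY_ONLY;

-- Filegroup backup: back up only the PRIMARY filegroup
BACKUP DATABASE SalesDB
FILEGROUP = 'PRIMARY'
TO DISK = N'D:\Backups\SalesDB_Primary.bak';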
3. What is a copy-only backup?
4. Tell me about yourself.
5. Have you worked in a mirroring environment?
6. Why can't we configure mirroring within the same instance?
Because we cannot create another database with the same name on the same instance.
7. What are the prerequisites for database mirroring?
8. What are the differences between mirroring and clustering?
9. Load balancing vs. Active-Active setup?
10. Can we access the mirror server's database for reporting? If so, how?
11. Explain the mirroring modes.
12. Explain AlwaysOn.
13. What are the differences between AlwaysOn and clustering?
14. What are the types of replication? Explain each.
15. What are the reasons for replication to be dropped?
Service packs, hotfixes, and security patch levels must be consistent on all Domain Controllers (DCs). Inconsistent update levels across DCs can lead to DC-to-DC synchronisation and replication-related problems. It is extremely difficult to trap errors caused by DCs being out of sync, so it is critical that consistency is maintained.
16. Peer-to-peer replication?
17. You have a transactional replication setup and you want to add ten articles to the publisher. After adding the articles, what do you do to get the subscribers updated with those articles? How do you verify that the articles were applied on the subscriber side?
18.
19. What are the prerequisites for replication?
20. What agents are required for replication?
21. Merge replication vs. transactional replication with updatable subscriptions?
22. What performance issues have you faced and how did you fix them?
23. What is a server-side trace and how do you use it?
24. Do you run SQL Server Profiler against a production server?
25. Name some DBCC commands.
26. Name some DMVs and DMFs.
sys.dm_tran_locks
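The original answer lists only sys.dm_tran_locks; as an illustrative sketch (my addition), here it is alongside two other commonly queried DMVs:
-- Current lock requests held or waited on across the instance
SELECT resource_type, resource_database_id, request_mode, request_status, request_session_id
FROM sys.dm_tran_locks;

-- Currently executing requests, including any blocking session
SELECT session_id, status, command, wait_type, blocking_session_id
FROM sys.dm_exec_requests
WHERE session_id > 50;

-- Cumulative wait statistics since the instance last started
SELECT TOP (10) wait_type, waiting_tasks_count, wait_time_ms
FROM sys.dm_os_wait_stats
ORDER BY wait_time_ms DESC;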
27. What is normalization and what are its types?
28. What are the types of authentication?
29. Name the database and server roles.
30. What is the purpose of the SQL Server Browser service?
31. What services are added to 'services.msc' when SQL Server is installed?
SQL Active Directory Helper Service: enables integration with Active Directory.
SQL Full-text Filter Daemon Launcher: launches the full-text filter daemon process, which performs document filtering and word breaking for SQL Server full-text search. Disabling this service makes the full-text search features of SQL Server unavailable.
SQL Server: provides storage, processing, and controlled access of data, and rapid transaction processing.
SQL Server Agent: executes jobs, monitors SQL Server, fires alerts, and allows automation of some administrative tasks.
SSAS (SQL Server Analysis Services): supplies online analytical processing (OLAP) and data mining functionality for business intelligence.
SQL Server Browser: provides SQL Server connection information to client computers.
SSIS (SQL Server Integration Services): provides management support for SSIS package storage and execution.
SSRS (SQL Server Reporting Services): manages, executes, renders, schedules, and delivers reports.
SQL Server VSS Writer: provides the interface to back up and restore SQL Server through the Windows VSS infrastructure.
32. Can we create an index on a table without a primary key constraint? Why is a table limited to one clustered index?
33. Explain a unique non-clustered index.
34. What are statistics and how do they help a SQL DBA?
35. Explain the system databases.
System Databases
The system databases are comprised of Master, Model, MSDB, TempDB, and the hidden Resource database. If the server is configured to be a replication distributor, there will also be at least one system distribution database that is named during the replication configuration process.
The Master Database
The Master database is used to record all server-level objects in SQL Server 2005. This includes server logon accounts, linked server definitions, and endpoints. The Master database also records information about all the other databases on the server (such as their file locations and names). Unlike its predecessors, SQL Server 2005 does not store system information in the Master database, but rather in the Resource database. However, system information is logically presented as the SYS schema in the Master database.
The Model Database
The Model database is a template database. Whenever a new database is created (including the system database TempDB), a copy of the Model database is created and renamed with the name of the database being created. The advantage of this behavior is that objects can be placed in the Model database prior to the creation of any new database and, when the database is created, the objects will appear in the new database. For example, it has always bugged me that Transact-SQL does not contain a Trim function to truncate both leading and trailing spaces from a string of characters. Transact-SQL offers an RTRIM function that truncates trailing spaces and an LTRIM function that removes leading spaces. The code to successfully implement a traditional trim operation thus becomes the following:
LTRIM(RTRIM('character string'))
To reduce my irritation level and the number of characters I needed to type to successfully trim a character string, I created my own TRIM function in the Model database with the following code:
USE Model
GO
CREATE FUNCTION dbo.Trim (@String varchar(MAX))
RETURNS varchar(MAX)
AS
BEGIN
    -- Strip both leading and trailing spaces in a single call
    SELECT @String = LTRIM(RTRIM(@String))
    RETURN @String
END
After creating this function in the Model database, it will be propagated to all databases created after it was added to the Model database, and it can be utilized with the following simplified code:
dbo.Trim('character string')
I know it's only a saving of two characters, but those two characters are open and close parenthesis characters, which are often the source of annoying syntax errors. By reducing the nested functions, the overall complexity of the function call is also reduced.
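As a quick usage sketch (the database name below is a placeholder of mine, not from the original text), any database created after the function has been added to Model inherits it automatically:
-- NewSalesDB is a hypothetical name used only for this illustration
CREATE DATABASE NewSalesDB;
GO
USE NewSalesDB;
GO
-- dbo.Trim was copied in from Model when NewSalesDB was created
SELECT dbo.Trim('   character string   ') AS Trimmed;   -- returns 'character string'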
Almost any database object can be added to the Model database so that it is available in subsequently created databases. This includes database users, roles, tables, stored procedures, functions, and assemblies.
The MSDB Database
I mostly think of the MSDB database as the SQL Server Agent's database. That's because the SQL Server Agent uses the MSDB database extensively for the storage of automated job definitions, job schedules, operator definitions, and alert definitions. The SQL Server Agent is described in greater detail in Chapter 8, but for now, just know that the Agent is responsible for almost all automated and scheduled operations.
The SQL Server Agent is not the only service that makes extensive use of the MSDB database. Service Broker, Database Mail, and Reporting Services also use the MSDB database for the storage of scheduling information. In addition to automation and scheduling information, SQL Server Integration Services (SSIS) can also utilize the MSDB database for the storage of SSIS packages.
The TempDB Database
The TempDB database is used by SQL Server to store data temporarily (yes, you guessed it). The TempDB database is used extensively during SQL Server operations, so careful planning and evaluation of its size and placement are critical to ensure efficient SQL Server database operations.
The TempDB database is used by the Database Engine to store temporary objects (such as temporary tables, views, cursors, and table-valued variables) that are explicitly created by database programmers. In addition, the TempDB database is used by the SQL Server database engine to store work tables containing intermediate results of a query prior to a sort operation or other data manipulation. For example, if you wrote a query that returned 100,000 rows and you wanted the results sorted by a date value in the results, SQL Server could send the unsorted results to a temporary work table where it would perform the sorting operation and then return the sorted results to you. The TempDB database is also used extensively to support new connection options such as SNAPSHOT ISOLATION or Multiple Active Result Sets (MARS). If online index operations are performed, the TempDB database will hold the index during the build or rebuild process.
Another important aspect to keep in mind about the TempDB database is that all database users have access to it and have the ability to create and populate temporary objects. This access can potentially create locking and size limitation issues on SQL Server, so it is important to monitor the TempDB database just like any other database on SQL Server.
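A small monitoring sketch along these lines (my addition, not from the original text); sys.dm_db_file_space_usage reports page counts for the database it is run in, so run it in TempDB:
USE tempdb;
GO
-- How TempDB space is currently allocated (pages are 8 KB each)
SELECT SUM(user_object_reserved_page_count)     * 8 AS user_objects_kb,
       SUM(internal_object_reserved_page_count) * 8 AS internal_objects_kb,
       SUM(version_store_reserved_page_count)   * 8 AS version_store_kb,
       SUM(unallocated_extent_page_count)       * 8 AS free_space_kb
FROM sys.dm_db_file_space_usage;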
The Resource Database
The last system database is the Resource database. The Resource database is a read-only database that contains all the system objects used by an instance of SQL Server. The Resource database is not accessible during normal database operations. It is logically presented as the SYS schema in every database. It contains no user data or metadata. Instead, it contains the structure and description of all system objects. This design enables the fast application of service packs by just replacing the existing Resource database with a new one. As an added bonus, to roll back a service pack installation, all you have to do is replace the new Resource database with the old one. This very elegant design replaces the older method of running many scripts that progressively dropped and added new system objects.
36. In what order are the databases started after SQL Server starts?
Master
Mssqlsystemresource (the Resource database)
Model
TempDB
MSDB
37. What is the purpose of the Model database?
38. If the Model database is corrupted, what happens?
39. Can we access the Resource database?
40. If I delete the Resource database, will SQL Server still work fine?
No, you cannot start the SQL Server service at all.
41. What is the difference between stored procedures and functions?
42. What is the purpose of a checkpoint?
A checkpoint creates a known good point from which the SQL Server Database Engine can start applying changes contained in the log during recovery after an unexpected shutdown or crash.
For performance reasons, the Database Engine performs modifications to database pages in memory (in the buffer cache) and does not write these pages to disk after every change. Rather, the Database Engine periodically issues a checkpoint on each database. A checkpoint writes the current in-memory modified pages (known as dirty pages) and transaction log information from memory to disk and also records information about the transaction log.
Name: Automatic
Transact-SQL Interface: EXEC sp_configure 'recovery interval', 'seconds'
Description: Issued automatically in the background to meet the upper time limit suggested by the recovery interval server configuration option. Automatic checkpoints run to completion. Automatic checkpoints are throttled based on the number of outstanding writes and whether the Database Engine detects an increase in write latency above 20 milliseconds.

Name: Indirect
Transact-SQL Interface: ALTER DATABASE ... SET TARGET_RECOVERY_TIME = target_recovery_time { SECONDS | MINUTES }
Description: Issued in the background to meet a user-specified target recovery time for a given database. The default target recovery time is 0, which causes automatic checkpoint heuristics to be used on the database. If you have used ALTER DATABASE to set TARGET_RECOVERY_TIME to >0, this value is used rather than the recovery interval specified for the server instance.

Name: Manual
Transact-SQL Interface: CHECKPOINT [ checkpoint_duration ]
Description: Issued when you execute a Transact-SQL CHECKPOINT command. The manual checkpoint occurs in the current database for your connection. By default, manual checkpoints run to completion. Throttling works the same way as for automatic checkpoints. Optionally, the checkpoint_duration parameter specifies a requested amount of time, in seconds, for the checkpoint to complete.

Name: Internal
Transact-SQL Interface: None.
Description: Issued by various server operations such as backup and database-snapshot creation to guarantee that disk images match the current state of the log.
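A short sketch of issuing a manual checkpoint (the database name SalesDB is a placeholder of mine):
USE SalesDB;
GO
-- Flush all dirty pages of the current database to disk immediately
CHECKPOINT;
GO
-- Optionally request that the checkpoint complete within roughly 10 seconds
CHECKPOINT 10;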
43. Checkpoint vs. Lazy Writer?
It's important to understand how and when SQL Server makes changes in memory and how and when those changes are written to the data file. It has a big impact on memory use and I/O patterns.
Data modification queries (insert, update, delete) always make changes to data pages in memory. In fact, queries in general only operate on in-memory data pages. When a query makes a change to an in-memory page, that page is marked as 'dirty', indicating that there are changes on that page that have to be written to disk before the page can be discarded from memory. That writing is the job of two system processes: the lazy writer and the checkpoint.
Lazy Writer
The job of the lazy writer is to find dirty pages in the buffer pool, write them out to disk, and drop those pages from cache. It does this to keep a certain number of free pages available within the buffer pool for data that may be requested by other queries. The pages that it writes out are 'older' pages, ones that haven't been used for a while.
If there are plenty of available buffers, the lazy writer will be doing relatively little work and the number of pages written to disk will be quite low. If the lazy writer is consistently writing lots of data, it may indicate that there is a memory bottleneck.
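As an illustrative sketch (my addition), the dirty pages currently sitting in the buffer pool can be counted per database with sys.dm_os_buffer_descriptors:
-- Dirty (modified-in-memory) pages per database in the buffer pool
SELECT DB_NAME(database_id)  AS database_name,
       COUNT(*)              AS dirty_page_count,
       COUNT(*) * 8 / 1024   AS dirty_mb
FROM sys.dm_os_buffer_descriptors
WHERE is_modified = 1
GROUP BY database_id
ORDER BY dirty_page_count DESC;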
Checkpoint
The checkpoint process also writes dirty pages to disk, but it has no interest in keeping buffers available or relieving memory pressure. The job of the checkpoint is to keep the potential time needed to recover the database to a small value.
Recovery is the process that the database must go through when it starts up to ensure transactional consistency. As part of the recovery process, SQL Server uses the transaction log to work out which transactions had committed but may not have had their changes written to disk at the time of the shutdown, and it replays those transactions. That replay (roll forward) can be extensive on larger, busy databases. To reduce the amount of work required of this roll forward, the checkpoint process runs at regular intervals, writes all dirty data pages to disk, and writes an entry into the log indicating that all changes before that point are now on disk.
One important thing to note here is that all dirty data pages are written to disk by the checkpoint, not just pages corresponding to committed transactions. Of course, this means that changes may be written to disk and later rolled back, either explicitly or because the transaction did not complete. This is not a concern. When the rollback happens, the data pages will again be modified in memory and later written to disk. If the SQL Server service restarts after the pages were written to disk but before the transaction commits, another part of the recovery process will undo those data changes.
The frequency at which the checkpoint process runs is controlled by a server setting: "recovery interval (min)". This setting specifies how long a recovery should take. The higher it is set, the less often checkpoint runs. The flip side is that the higher the recovery interval is, the more work checkpoint has to do each time it runs; hence, if checkpoint is causing I/O contention, it may be better to reduce the recovery interval rather than increase it.
Impact of Recovery Interval on Recovery Performance
For an online transaction processing (OLTP) system using short transactions, recovery interval is the primary factor determining recovery time. However, the recovery interval option does not affect the time required to undo a long-running transaction. Recovery of a database with a long-running transaction can take much longer than the time specified in the recovery interval option. For example, if a long-running transaction took two hours to perform updates before the server instance became disabled, the actual recovery takes considerably longer than the recovery interval value to recover the long transaction. For more information about the impact of a long-running transaction on recovery time, see The Transaction Log (SQL Server).
Typically, the default values provide optimal recovery performance. However, changing the recovery interval might improve performance in the following circumstances:
· If recovery routinely takes significantly longer than 1 minute when long-running transactions are not being rolled back.
· If you notice that frequent checkpoints are impairing performance on a database.
If you decide to increase the recovery interval setting, we recommend increasing it gradually by small increments and evaluating the effect of each incremental increase on recovery performance. This approach is important because as the recovery interval setting increases, database recovery takes that many times longer to complete. For example, if you change recovery interval to 10, recovery takes approximately 10 times longer to complete than when recovery interval is set to zero.
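A minimal sketch of adjusting this server-wide setting (the value 2 is illustrative only):
-- 'recovery interval (min)' is an advanced option, so expose advanced options first
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Raise the target recovery interval from the default of 0 to 2 minutes
EXEC sp_configure 'recovery interval (min)', 2;
RECONFIGURE;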
Indirect Checkpoints
Indirect checkpoints, new in SQL Server 2012, provide a configurable database-level alternative to automatic checkpoints. In the event of a system crash, indirect checkpoints provide potentially faster, more predictable recovery time than automatic checkpoints. Indirect checkpoints offer the following advantages:
· Indirect checkpoints can reduce overall database recovery time.
· Indirect checkpoints enable you to reliably control database recovery time by factoring in the cost of random I/O during REDO. This enables a server instance to stay within an upper bound on recovery times for a given database (except when a long-running transaction causes excessive UNDO times).
· Indirect checkpoints reduce checkpoint-related I/O spiking by continually writing dirty pages to disk in the background.
However, an online transactional workload on a database that is configured for indirect checkpoints could experience performance degradation. This is because the background writer used by indirect checkpoints sometimes increases the total write load for a server instance.
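A brief sketch of enabling an indirect checkpoint for a single database (SalesDB is a placeholder name):
-- Target roughly 30 seconds of crash-recovery time for this database
ALTER DATABASE SalesDB SET TARGET_RECOVERY_TIME = 30 SECONDS;
-- Setting TARGET_RECOVERY_TIME back to 0 reverts the database to automatic checkpoints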
Internal Checkpoints
Internal checkpoints are generated by various server components to guarantee that disk images match the current state of the log. Internal checkpoints are generated in response to the following events:
· Database files have been added or removed by using ALTER DATABASE.
· A database backup is taken.
· A database snapshot is created, whether explicitly or internally for DBCC CHECKDB.
· An activity requiring a database shutdown is performed. For example, AUTO_CLOSE is ON and the last user connection to the database is closed, or a database option change is made that requires a restart of the database.
· An instance of SQL Server is stopped by a SHUTDOWN statement or by stopping the SQL Server (MSSQLSERVER) service. Either action causes a checkpoint in each database in the instance of SQL Server.
· A SQL Server failover cluster instance (FCI) is taken offline.
44. Rebuild vs. Reorganize?
45. What is the effect on performance while rebuilding or reorganizing an index?
46. Table vs. View?
47. What is a full-text index?
48. How do you install a hotfix in a cluster environment?
1. Run the setup file (setup.bat).
2. On the Welcome to the Microsoft SQL Server 2000 Service Pack 4 Installation Wizard page, click Next.
3. On the Computer Name page, select Virtual Server, type the virtual server name (for example, SQLCLUSTER), and then click Next.
4. On the Software License Agreement page, click Yes.
5. On the Instance Name page, click Next.
6. On the Remote Information page, type Admin for the cluster, and then click Next.
7. On the Connect to Server page, select Windows authentication, and then click Next.
8. In the SA Password Warning dialog box, select Ignore the security threat warning, leave the password blank, and then click Next.
9. In the SQL Server 2000 Service Pack 4 Setup dialog box, select Upgrade Microsoft Search and apply SQL Server 2000 SP4, and then click Continue.
10. In the SQL Server 2000 Service Pack 4 Setup dialog box, make sure that the check box for Enable cross-database ownership chaining for all databases is clear, and then click Continue.
11. In the Error Reporting dialog box, click OK.
12. On the Remote Information page, do the following:
Username: type CLUSvc.
Password: type the corresponding password.
Domain: type the domain name.
13. Click Next.
14. In the dialog box warning you to back up your databases, click OK.
15. On the Start Copying Files page, click Next.
16. On the Shutting Down Tasks page, click Next. The installation process starts.
17. In the Setup dialog box, click OK.
18. On the Setup Complete page, select Yes, I want to restart my computer now, and then click Finish.
19. For clustered deployments, restart the second computer that is running SQL Server only after the first computer that is running SQL Server has fully restarted. You might experience database corruption if you do not wait until the first computer has fully restarted before restarting the second.
Important: Long-running uncommitted transactions increase recovery time for all types of checkpoints.
49.