Sunday, 15 May 2016

Interview

How do you optimise an ASP.NET MVC web site?


You can use the YSlow tool to analyse a page's performance.
Some of the primary techniques are:

1. Remove unused view engines:
E.g. suppose the controller action Home/AAA cannot find its view; MVC reports:
The view 'AAA' or its master was not found or no view engine supports the searched locations. The following locations were searched:
~/Views/Home/AAA.aspx
~/Views/Home/AAA.ascx
~/Views/Shared/AAA.aspx
~/Views/Shared/AAA.ascx
~/Views/Home/AAA.cshtml
~/Views/Home/AAA.vbhtml
~/Views/Shared/AAA.cshtml
~/Views/Shared/AAA.vbhtml
By default MVC probes for both Web Forms (.aspx/.ascx) and Razor (.cshtml/.vbhtml) views. We can remove the unwanted view engines with the following lines in global.asax:

// global.asax
protected void Application_Start()
{
    ViewEngines.Engines.Clear();
    ViewEngines.Engines.Add(new RazorViewEngine());
}

Now the same error lists only the Razor locations:
The view 'AAA' or its master was not found or no view engine supports the searched locations. The following locations were searched:
~/Views/Home/AAA.cshtml
~/Views/Home/AAA.vbhtml
~/Views/Shared/AAA.cshtml
~/Views/Shared/AAA.vbhtml
Note: This reduces the number of locations MVC has to probe when resolving each view.

2. Bundling and minification of JavaScript and CSS:
This reduces the number of HTTP requests to the server and also reduces the downloaded file size.
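A minimal sketch of the ASP.NET bundling API (System.Web.Optimization); the bundle names and file paths here are assumptions:

// App_Start/BundleConfig.cs
using System.Web.Optimization;

public class BundleConfig
{
    public static void RegisterBundles(BundleCollection bundles)
    {
        // All of these scripts are served as one minified HTTP request
        bundles.Add(new ScriptBundle("~/bundles/common").Include(
            "~/Scripts/jquery-{version}.js",
            "~/Scripts/site.js"));

        // Likewise for the stylesheets
        bundles.Add(new StyleBundle("~/Content/css").Include(
            "~/Content/site.css"));

        // Bundle and minify even when <compilation debug="true">
        BundleTable.EnableOptimizations = true;
    }
}

Call BundleConfig.RegisterBundles(BundleTable.Bundles) from Application_Start, then emit the bundles in the layout with @Scripts.Render("~/bundles/common") and @Styles.Render("~/Content/css").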

3. Remove Duplicate Scripts

4. Use HTTP Compression
You can achieve this by adding a setting in the web.config file:
<system.webServer>
  <urlCompression doStaticCompression="true" doDynamicCompression="true" dynamicCompressionBeforeCache="false" />
</system.webServer>
You can typically reduce an HTML document to less than half of its original size. This, in turn, halves the amount of time the client needs to download the page as well as the amount of bandwidth required. All of this is achieved without actually changing the way the site works, its page layout, or the content. The only thing that changes is the way the information is transferred.
Not all files are suitable for compression. Files that are already compressed, such as JPEGs, GIFs, PNGs, movies, and archives (e.g., Zip, Gzip, and bzip2 files), gain little from it. However, sites that have a lot of plain-text content, including the main HTML files, XML, CSS, and RSS, may benefit from compression.

5. Add expiry headers:
so that the browser does not request those files again until the expiry time has passed:
<system.webServer>
  <staticContent>
    <clientCache cacheControlMode="UseMaxAge" cacheControlMaxAge="365.00:00:00" />
  </staticContent>
</system.webServer>

6. Optimize Images

7. Use OutputCacheAttribute when appropriate (for frequently requested, rarely changing data)
[OutputCache(VaryByParam = "none", Duration = 3600)]
public ActionResult Categories()
{
    return View(new Categories());
}

8. Put Style sheets at the Top

9. Put Scripts at the Bottom

10. Make JavaScript and CSS External
Good example:
<link rel="stylesheet" type="text/css" media="screen" href="/css/common.css" />
<script type="text/javascript" src="/js/common.js"></script>
Bad example:
<style type="text/css">
...
</style>
<script type="text/javascript">
...
</script>
If you include the code and styles inline, they are downloaded with every page request, whereas external files can be cached by the browser.


What is an application pool in IIS?


Application pools allow you to isolate your applications from one another, even when they run on the same server. If an error takes down one app, the others keep running. Application pools also let you separate apps that require different levels of security.
In IIS terms, an application pool is a set of worker processes; it isolates web applications for security, reliability, availability and performance, so that applications keep running without impacting each other.

One application pool can also host multiple worker processes (a so-called "web garden").


How to separate positive and negative numbers from a column?
We have a table (call it nos) with an ID column containing a mix of positive and negative values, for example:

ID: 1, -2, 3, -4, 5, -6

And the output should show the positive values in one column and the negative values in another, paired row by row:

POSITIVE_VALUE   NEGATIVE_VALUE
1                -6
3                -4
5                -2
ANS:
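First, a minimal setup with assumed sample values:

CREATE TABLE nos (id INT);
INSERT INTO nos (id) VALUES (1), (-2), (3), (-4), (5), (-6);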

SELECT T1.positive_value, T2.negative_value
FROM (SELECT id AS positive_value, ROW_NUMBER() OVER (ORDER BY id) AS C1
      FROM nos WHERE id >= 0) T1
FULL OUTER JOIN
     (SELECT id AS negative_value, ROW_NUMBER() OVER (ORDER BY id) AS C2
      FROM nos WHERE id < 0) T2
  ON T1.C1 = T2.C2

Write a SQL script to delete/truncate data from all the tables of a specific database

Recently, I came across this question where the candidate had to write a query to empty all the tables in a specific database. Prima facie it feels like a cakewalk, but believe me it's not. Why not? Because one must consider the relationships between the tables while writing the query. There could be a multilevel hierarchy to identify before even thinking about deleting the data from the child tables.

So the question gets divided into 2 parts

Identify the relationship hierarchy
Start deleting the data from the bottom, i.e. from child tables up to the parent tables
Now the real question: is it really necessary to identify the hierarchy between tables? Isn't there any other way to perform this?

Well, there might be many, but the one that appealed to me is the following:

Disable all the constraints for all the tables
Delete data from all the tables
Enable all the constraints that were disabled in the first step
The following statement can be used to disable all the constraints on a particular table:

ALTER TABLE [table name]
NOCHECK CONSTRAINT ALL

And to enable all the constraints:

ALTER TABLE [Table Name]
CHECK CONSTRAINT ALL
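The two statements above work per table; the whole three-step sequence can be applied to every table at once with the undocumented sp_MSforeachtable procedure (also used later in this post), where "?" stands for each table name. A sketch:

-- 1. Disable all constraints on all tables
EXEC sp_MSforeachtable 'ALTER TABLE ? NOCHECK CONSTRAINT ALL'

-- 2. Delete the data (TRUNCATE is disallowed on tables referenced by a
--    foreign key, even a disabled one, hence DELETE)
EXEC sp_MSforeachtable 'DELETE FROM ?'

-- 3. Re-enable the constraints; WITH CHECK re-validates the existing data
--    so the constraints remain trusted
EXEC sp_MSforeachtable 'ALTER TABLE ? WITH CHECK CHECK CONSTRAINT ALL'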

How to get the count of rows for each table of a particular database?

This is a simple yet important question which may feature when someone is interviewing for a SQL developer position.

The question goes like this
"How to get the count of rows for each table of the particular database ? How many ways you can think of to fetch the details ?"

The second part of the question makes it interesting, because now the interviewer wants to gauge your knowledge of the different ways of looping in SQL Server.

One obvious answer would be a CURSOR, but I'll leave that to you to write; instead I'll use the built-in looping mechanisms that come in handy in this situation.


Using sp_MSforeachtable:

sp_MSforeachtable is an undocumented built-in procedure which provides cursor-like functionality: the SQL code enclosed within it is executed once for each table in the database.

Suppose in the TestCount database I have 3 tables, namely T1 (968 rows), T2 (123 rows) and T3 (123 rows).



Now, I'll try to run the following query

EXEC TestCount..sp_MSforeachtable ' SELECT COUNT(1) As RecCount FROM ? '

As you will have noticed, I used "?" (a question mark) instead of a table name in the SELECT statement. Internally, sp_MSforeachtable replaces this "?" with each table name in turn and executes the SELECT statement.

Each row of the result represents the count of rows from one table, but we can't tell which count belongs to which table.



Hence we must tweak the query to display the table name as well. As mentioned before, "?" represents the table name here, so it's just a matter of using it appropriately:

EXEC TestCount..sp_MSforeachtable ' SELECT ''?'' as TableName,COUNT(1) As RecCount FROM ? '
And this gives the result with the table name alongside each count.



There is also a procedure, sp_spaceused, which gives complete storage-related information about an object.
Let us write one more query with sp_MSforeachtable to get that detailed information:

EXEC sp_MSforeachtable ' exec sp_spaceused ''?'' '

and it produces, for each table, sp_spaceused's output: name, rows, reserved, data, index_size and unused.




Using a While Loop:

DECLARE @TAB TABLE
(
    ID INT IDENTITY(1,1),
    TABNAME VARCHAR(1000)
)

-- Collect every table name (schema-qualified and quoted, so non-dbo or
-- oddly named tables work too)
INSERT INTO @TAB (TABNAME)
SELECT QUOTENAME(SCHEMA_NAME(schema_id)) + '.' + QUOTENAME(name)
FROM sys.tables

DECLARE @LOOP   INT = 1,
        @MAXCNT INT,
        @NAME   VARCHAR(1000),
        @QUERY  VARCHAR(4000)

SELECT @MAXCNT = MAX(ID) FROM @TAB

-- Build and execute one COUNT query per table
WHILE @LOOP <= @MAXCNT
BEGIN
    SELECT @NAME = TABNAME FROM @TAB WHERE ID = @LOOP
    SET @QUERY = 'SELECT ''' + @NAME + ''' AS TABNAME, COUNT(1) AS CNT FROM ' + @NAME
    --PRINT @QUERY
    EXEC (@QUERY)
    SET @LOOP = @LOOP + 1
END

To summarize, at this moment I can think of 3 ways to get the required information:
1. sp_MSforeachtable
2. While loop
3. Cursor



How to generate values from 1 to 1000 without using a WHILE loop or a cursor, using a single SELECT or block of code?

;WITH CTE AS
(
    SELECT 1 AS NUM
    UNION ALL
    SELECT NUM + 1 FROM CTE
    WHERE NUM < 1000 -- stop once 1000 has been produced
)
SELECT * FROM CTE
OPTION (MAXRECURSION 1000)


What is the difference between Table Variable and Temp Table ?
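In short: both are created in tempdb, but a temp table (#T) is scoped to the session, supports explicit DDL (indexes added later, ALTER TABLE) and column statistics, while a table variable (@T) is scoped to the batch, has no statistics (so the optimizer assumes very few rows) and, notably, its data is not affected by a transaction rollback. A small sketch of that last difference (run in any test database):

CREATE TABLE #T (ID INT);
DECLARE @T TABLE (ID INT);

BEGIN TRAN
INSERT INTO #T VALUES (1);
INSERT INTO @T VALUES (1);
ROLLBACK

SELECT COUNT(*) AS TempTableRows     FROM #T; -- 0: the insert was rolled back
SELECT COUNT(*) AS TableVariableRows FROM @T; -- 1: table variables ignore the rollback

DROP TABLE #T;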



Can we have 2 tables with same name in a SQL database ?

Yes, two tables with the same name can exist in the same database, provided the database has a case-sensitive collation set on it (they can also coexist under different schemas, e.g. dbo.Test and hr.Test).

ALTER DATABASE TestCollation
COLLATE SQL_Latin1_General_CP1_CS_AS ;

As you might have noticed, I changed CI to CS in the collation name, which signifies Case Sensitive.

Now let us try to create tables once again

CREATE TABLE Test(ID TINYINT)
CREATE TABLE TEST(ID TINYINT)

How to find distinct values without using DISTINCT?


DECLARE @TAB TABLE
(
 ID TINYINT

)

INSERT INTO @TAB
SELECT 1 AS ID UNION ALL
SELECT 3 UNION ALL
SELECT 45 UNION ALL
SELECT 76 UNION ALL
SELECT 28 UNION ALL
SELECT 45 UNION ALL
SELECT NULL UNION ALL
SELECT NULL UNION ALL
SELECT 1 UNION ALL
SELECT 3

SELECT * FROM @TAB


-- 1. ROW_NUMBER: keep only the first row for each ID
;WITH CTE AS
(
 SELECT ID, ROW_NUMBER() OVER (PARTITION BY ID ORDER BY ID) ROWNUM
 FROM @TAB
)
SELECT ID FROM CTE WHERE ROWNUM = 1

-- 2. GROUP BY collapses the duplicates
SELECT ID FROM @TAB
GROUP BY ID

-- 3. INTERSECT returns distinct rows common to both sets
SELECT ID FROM @TAB
INTERSECT
SELECT ID FROM @TAB

-- 4. EXCEPT returns distinct rows from the first set that are not in the second
SELECT ID FROM @TAB
EXCEPT
SELECT 100 -- some value that isn't present in the ID column

-- 5. UNION removes duplicates when combining the two sets
SELECT ID FROM @TAB
UNION
SELECT ID FROM @TAB


Garbage Collector


Garbage Collector is a feature provided by the CLR which helps us clean up unused managed objects.
By cleaning those unused managed objects it reclaims the memory.
It cleans only unused managed objects; it does not clean unmanaged objects.

This means that anything outside the CLR boundary (file handles, database connections, COM objects and so on) the garbage collector does not know how to clean.

Note: when a collection is triggered, the garbage collector runs on a background thread, checks for managed objects that are no longer referenced, cleans those objects and reclaims the memory.
Concept of generations 0, 1, 2 in GC

Step 1: The application starts and, say, 5 objects are created; these 5 objects go into the generation 0 bucket.

Step 2: The GC checks the generation 0 bucket for unused objects. Suppose 2 objects are unused: they are freed and their memory reclaimed, and the remaining 3 objects are promoted to the generation 1 bucket.

Step 3: The application keeps running and creates 2 more fresh objects in generation 0. Suppose these 2 objects fall out of use: the GC frees them and reclaims the memory.

Step 4: The GC then checks the generation 1 bucket (where the 3 objects were), frees 1 unused object, and promotes the remaining 2 objects to the generation 2 bucket.

IMP: generations simply describe how long an object has been in memory:

generation 2 -- objects that have been in memory the longest
generation 1 -- objects that have been in memory less time than generation 2
generation 0 -- fresh objects

The GC collects generation 0 far more often than the higher generations, which improves performance because most objects die young and are cheap to reclaim.
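A small console sketch illustrating promotion between generations, using GC.GetGeneration (forcing collections with GC.Collect is for demonstration only):

using System;

class GenerationDemo
{
    static void Main()
    {
        var obj = new object();
        Console.WriteLine(GC.GetGeneration(obj)); // 0: a fresh object

        GC.Collect();                             // survive one collection -> promoted
        Console.WriteLine(GC.GetGeneration(obj)); // 1

        GC.Collect();                             // survive another -> promoted again
        Console.WriteLine(GC.GetGeneration(obj)); // 2: treated as long-lived from now on

        GC.KeepAlive(obj);                        // keep obj reachable until this point
    }
}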

**************************

What is optimised code in C#
Multiple document.ready handlers
Can we call a function declared inside one document.ready from another -- NO (it is local to that handler)
Types of delegates
JavaScript prototype
MVC extension (advantages of MVC)
Authenticate a Web API while calling it from jQuery
How do you set a default list using LINQ if the list is empty (see the LINQ sketch below)
Employee with department - find department-wise max salary (see the LINQ sketch below)
ASP.NET MVC: how to extend a controller
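For the two LINQ items above, a minimal sketch (the class and data are assumed for illustration):

using System;
using System.Collections.Generic;
using System.Linq;

class Emp { public string Name; public string Dept; public decimal Salary; }

class LinqNotes
{
    static void Main()
    {
        // Default list using LINQ if the list is empty: DefaultIfEmpty
        var empty = new List<int>();
        var withDefault = empty.DefaultIfEmpty(-1).ToList(); // yields [-1]
        Console.WriteLine(withDefault[0]);

        // Department-wise max salary: group by department, take Max per group
        var emps = new List<Emp>
        {
            new Emp { Name = "A", Dept = "IT", Salary = 100 },
            new Emp { Name = "B", Dept = "IT", Salary = 200 },
            new Emp { Name = "C", Dept = "HR", Salary = 150 },
        };

        var maxByDept = emps
            .GroupBy(e => e.Dept)
            .Select(g => new { Dept = g.Key, MaxSalary = g.Max(e => e.Salary) });

        foreach (var d in maxByDept)
            Console.WriteLine("{0}: {1}", d.Dept, d.MaxSalary);
    }
}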


**********************

Difference between LINQ to SQL and Entity Framework

Databases: LINQ to SQL works only with SQL Server; Entity Framework works with various databases (Oracle, DB2, MySQL, SQL Server, etc.).
Mapping files: LINQ to SQL generates a .dbml file to maintain the mapping; Entity Framework initially generates an .edmx file, with the mapping maintained across three parts (.csdl, .msl and .ssdl).
Complex types: LINQ to SQL has no support for them; Entity Framework supports them.
Database generation: LINQ to SQL cannot generate a database from the model; Entity Framework can.
Mappings: LINQ to SQL allows only one-to-one mapping between entity classes and relational tables/views; Entity Framework allows one-to-one, one-to-many and many-to-many mappings.
Querying: LINQ to SQL queries data using DataContext; Entity Framework uses Entity SQL, ObjectContext or DbContext.
Coupling: LINQ to SQL provides a tightly coupled approach; Entity Framework is more loosely coupled, since its code-first approach allows the Dependency Injection pattern.
Rapid application development: LINQ to SQL suits it only with SQL Server; Entity Framework suits it with RDBMSs like SQL Server, Oracle, DB2, MySQL, etc.

Friday, 6 May 2016

SQL ISOLATION

Problem
You are looking at ways of ensuring atomicity and consistency in your database by ensuring transactional integrity. You may want to avoid locks, or ensure problems such as lost updates or dirty reads cannot occur, or otherwise ensure the quality of your data. Transactional isolation is a narrow but somewhat complex subject that might be ideal for your needs.
Solution
This article will cover the transaction isolation settings - READ UNCOMMITTED, READ COMMITTED (locking), READ COMMITTED (snapshot), SNAPSHOT, REPEATABLE READ and SERIALIZABLE. You'll find short explanations of the differences between them, with emphasis on practical code examples that demonstrate the effects of the different isolation levels, and a note about performance at the bottom. Please execute the code in 'Setting up the Test Environment' below in your development database context to get started, or read the 'ACID: Review' section below for a bit of background on the four principles of an RDBMS system.

ACID: Review

Before I dive into transaction isolation, it's worth taking a brief look at the four fundamental principles underlying relational database management systems, referred to as 'ACID': Atomicity (a transaction completes entirely or not at all), Consistency (a transaction takes the database from one valid state to another), Durability (once committed, changes survive failures) and Isolation (concurrent transactions do not interfere with each other). Isolation is the principle this article explores; SQL Server implements it through the following isolation levels:
  • READ UNCOMMITTED: A query in the current transaction can read data modified within another transaction but not yet committed. The database engine does not issue shared locks when Read Uncommitted is specified, making this the least restrictive of the isolation levels. As a result, it’s possible that a statement will read rows that have been inserted, updated or deleted, but never committed to the database, a condition known as dirty reads. It’s also possible for data to be modified by another transaction between issuing statements within the current transaction, resulting in nonrepeatable reads or phantom reads.

  • READ COMMITTED: A query in the current transaction cannot read data modified by another transaction that has not yet committed, thus preventing dirty reads. However, data can still be modified by other transactions between issuing statements within the current transaction, so nonrepeatable reads and phantom reads are still possible. The isolation level uses shared locking or row versioning to prevent dirty reads, depending on whether the READ_COMMITTED_SNAPSHOT database option is enabled. Read Committed is the default isolation level for all SQL Server databases.

  • REPEATABLE READ: A query in the current transaction cannot read data modified by another transaction that has not yet committed, thus preventing dirty reads. In addition, no other transactions can modify data being read by the current transaction until it completes, eliminating nonrepeatable reads. However, if another transaction inserts new rows that match the search condition in the current transaction, in between the current transaction accessing the same data twice, phantom rows can appear in the second read.

  • SERIALIZABLE: A query in the current transaction cannot read data modified by another transaction that has not yet committed. No other transaction can modify data being read by the current transaction until it completes, and no other transaction can insert new rows that would match the search condition in the current transaction until it completes. As a result, the Serializable isolation level prevents dirty reads, nonrepeatable reads, and phantom reads. However, it can have the biggest impact on performance, compared to the other isolation levels.

  • SNAPSHOT: A statement can use data only if it will be in a consistent state throughout the transaction. If another transaction modifies data after the start of the current transaction, the data is not visible to the current transaction. The current transaction works with a snapshot of the data as it existed at the beginning of that transaction. Snapshot transactions do not request locks when reading data, nor do they block other transactions from writing data. In addition, other transactions writing data do not block the current transaction for reading data. As with the Serializable isolation level, the Snapshot level prevents dirty reads, nonrepeatable reads and phantom reads. However, it is susceptible to concurrent update errors. (SNAPSHOT is not part of the ANSI/ISO SQL standard.)

Setting up the Test Environment

-- init
SET NOCOUNT OFF
ALTER DATABASE [YourDB] SET READ_COMMITTED_SNAPSHOT ON
GO
CREATE TABLE dbo.TestIsolationLevels (
EmpID INT NOT NULL,
EmpName VARCHAR(100),
EmpSalary MONEY,
CONSTRAINT pk_EmpID PRIMARY KEY(EmpID) )
GO
INSERT INTO dbo.TestIsolationLevels 
VALUES 
(2322, 'Dave Smith', 35000),
(2900, 'John West', 22000),
(2219, 'Melinda Carlisle', 40000),
(2950, 'Adam Johns', 18000) 
GO

Experiment 1: Read using READ UNCOMMITTED

READ UNCOMMITTED is the most optimistic concurrency isolation option available in SQL Server. It allows a transaction to read the value in a row even when locks are present on the row/object or the change hasn't yet been committed. Reads like this are known as 'dirty reads', since they return modified data that is not yet committed - if the writing transaction rolls back, the value that was read never logically existed. (Note that if no concurrent transactions are occurring, the read behaves normally.) To show the effects of READ UNCOMMITTED, we can open a transaction as follows:
BEGIN TRAN
UPDATE  dbo.TestIsolationLevels 
SET     EmpSalary = 25000
WHERE   EmpID = 2900
Now select the value that's being updated using the following (in a separate query window):
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
SET NOCOUNT ON
GO
SELECT EmpID, EmpName, EmpSalary
FROM dbo.TestIsolationLevels
WHERE EmpID = 2900

Note the value for empSalary reflects the current *uncommitted* value. You can view the intent lock on the key (empID) and the intent exclusive locks on the object containers (the page on which the row is located and the object) imposed by the UPDATE statement using the following:
SELECT      es.login_name, tl.resource_type, 
            tl.resource_associated_entity_id,
            tl.request_mode, 
            tl.request_status
FROM        sys.dm_tran_locks tl
INNER JOIN  sys.dm_exec_sessions es ON tl.request_session_id = es.session_id 
WHERE       es.login_name = SUSER_SNAME() AND tl.resource_associated_entity_id <> 0
Now rollback the transaction to reset the EmpSalary for this employee to 22000.00:
ROLLBACK;

Experiment 2: Read using READ COMMITTED (snapshot)

There are two levels of READ COMMITTED isolation, which are locking and snapshot. Locking is the most straightforward, and simply means that once an UPDATE transaction is open, exclusive and intent-exclusive locks are taken out on the page, key range (if appropriate) and object. When reading the row using READ COMMITTED while using locking, the SELECT query used will hang until the value of LOCK_TIMEOUT (session-level parameter, if set) has expired, at which point an error will be returned.
If the value of the database-level option READ_COMMITTED_SNAPSHOT is False, locking mode for READ COMMITTED transactions is the default option. If it is True, then snapshot is the default option unless overridden by the READCOMMITTEDLOCK table hint.
Here's a demonstration of READ COMMITTED isolation with locking, by using the table hint in a database with READ_COMMITTED_SNAPSHOT ON. You don't need the table hint if this value is OFF:
ALTER DATABASE [YourDB] SET READ_COMMITTED_SNAPSHOT ON
GO
BEGIN TRAN
UPDATE  dbo.TestIsolationLevels 
SET     EmpSalary = 25000
WHERE   EmpID = 2900
Now in a separate query window:
SELECT  EmpID, EmpName, EmpSalary
FROM    dbo.TestIsolationLevels WITH (READCOMMITTEDLOCK)
WHERE   EmpID = 2900
The query will hang as it is waiting for the key lock on EmpID to be released. Allow the query to execute by issuing in your first window:
ROLLBACK;
READ COMMITTED with snapshot is different from locking. The UPDATE statement above will take the same locks on the objects, but with the SELECT transaction session specifying snapshot isolation, the row returned will be the last row version before the UPDATE statement began. This row version is stored internally and represents the last consistent state of the row. Logically it follows that if you are using row versioning, this capability must be DB-wide, since otherwise the transaction with the UPDATE statement would not know to maintain a version of the row before issuing the UPDATE. Therefore, to use snapshot isolation the option must be set using the ALTER DATABASE statement (note that all database user connections will be killed when doing this).
Note About Row Versioning
Row versioning is an internal feature used by SQL Server to maintain recent copies of rows that have been changed, for the purposes of maintaining table consistency and ensuring better isolation from reads or writes of transactions that concurrently access the same rows. Row versioning, also called Row-Level Versioning (RLV), was first introduced in SQL Server 2005. Historical rows are kept in the 'version store' inside TEMPDB, and each row that has been versioned has a row pointer added to it which allows the query engine to locate the versioned row. Interestingly, the 'inserted' and 'deleted' tables used with triggers and the OUTPUT clause (to name two uses) use a similar method of versioning. There are performance sacrifices made when using this level of transaction isolation - please see the 'Note About Performance' at the end of this article.
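While a versioning transaction is open, you can peek at tempdb's version store through a DMV; a quick sketch (the row count will vary with your workload, and VIEW SERVER STATE permission is required):

-- Rows currently held in tempdb's version store
SELECT COUNT(*) AS VersionRows
FROM sys.dm_tran_version_store;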
Below is an example of using the READ COMMITTED with snapshot isolation:
ALTER DATABASE [YourDB] SET READ_COMMITTED_SNAPSHOT ON
GO
SET TRANSACTION ISOLATION LEVEL READ COMMITTED
SET NOCOUNT ON
GO
BEGIN TRAN
UPDATE  dbo.TestIsolationLevels 
SET     EmpSalary = 25000
WHERE   EmpID = 2900
Now in a separate query window:
SELECT  EmpID, EmpName, EmpSalary
FROM    dbo.TestIsolationLevels 
WHERE   EmpID = 2900
The query will return the last consistent row of data. Note the empSalary column is 22000.00 despite the transaction being open and the update written (but uncommitted). This is correct, and the SELECT is reading from the previous row version, not the present state of the row. This is compliant with the C in ACID - consistency.

Experiment 3: Read using SNAPSHOT isolation

For all intents and purposes, reads using READ COMMITTED (snapshot) and SNAPSHOT are almost, but not quite, identical. READ COMMITTED (snapshot) reads the most recent consistent row version as of the start of the current *statement*, whereas SNAPSHOT isolation reads the most recent consistent row version as of the start of the *transaction*. This can cause surprises with concurrent transactions, since SELECTs inside the snapshot transaction that occur after the COMMIT of a concurrent UPDATE will still return the older value. Likewise, update conflicts can occur for the same reason when concurrent updates are attempted.
To use snapshot isolation, you must first enable the feature as follows:
ALTER DATABASE [YourDB] SET ALLOW_SNAPSHOT_ISOLATION ON
Now start the UPDATE again, and issue the SELECT in a separate query window like so:
SET TRANSACTION ISOLATION LEVEL SNAPSHOT
SET NOCOUNT ON
GO
BEGIN TRAN
UPDATE  dbo.TestIsolationLevels 
SET     EmpSalary = 25000
WHERE   EmpID = 2900
Now in a separate query window:
SELECT  EmpID, EmpName, EmpSalary
FROM    dbo.TestIsolationLevels 
WHERE   EmpID = 2900
You will note that, like READ COMMITTED, the correct snapshot of the data row is returned, yielding empSalary = 22000, which is consistent and correct. Rollback the transaction.
You will note that, like READ COMMITTED, the correct snapshot of the data row is returned, yielding empSalary = 22000

Experiment 4: Read using REPEATABLE READ

The REPEATABLE READ isolation level is similar to READ COMMITTED in that it guarantees that output from uncommitted transactions won't be read by concurrent transactions. Under READ COMMITTED, however, if a separate concurrent transaction commits between two reads, it is possible to read the same row twice within one transaction and obtain different values; likewise, additional 'phantom' rows may appear, depending on the behavior of the concurrent transaction.
Execute the following:
SET TRANSACTION ISOLATION LEVEL READ COMMITTED 
SET NOCOUNT ON
GO
BEGIN TRAN
SELECT  EmpID, EmpName, EmpSalary
FROM    dbo.TestIsolationLevels 
WHERE   EmpID = 2900
WAITFOR DELAY '00:00:10'
SELECT  EmpID, EmpName, EmpSalary
FROM    dbo.TestIsolationLevels 
WHERE   EmpID = 2900
COMMIT
Now while this is executing, execute the following in a separate query window:
BEGIN TRAN
UPDATE  dbo.TestIsolationLevels 
SET     EmpSalary = 25000
WHERE   EmpID = 2900
COMMIT
Note the results below. Despite the two SELECTs being in one explicit transaction, the empSalary value differs between the individual statements in that transaction. The next isolation level helps to solve this problem.
REPEATABLE READ is the isolation level to use if read requests (note: not updates) are returning inconsistent data *within one transaction*, and consists of a superset of the READ COMMITTED isolation type features (i.e. it encapsulates READ COMMITTED characteristics). Here is an example of using REPEATABLE READ when a concurrent UPDATE is occurring:
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ
SET NOCOUNT ON
GO
BEGIN TRAN
SELECT  EmpID, EmpName, EmpSalary
FROM    dbo.TestIsolationLevels 
WHERE   EmpID = 2900
WAITFOR DELAY '00:00:10'
SELECT  EmpID, EmpName, EmpSalary
FROM    dbo.TestIsolationLevels 
WHERE   EmpID = 2900
COMMIT
Run the below while the above is executing:
BEGIN TRAN
UPDATE  dbo.TestIsolationLevels
SET     EmpSalary = 25000
WHERE   EmpID = 2900
COMMIT

You'll notice that the UPDATE transaction is waiting on the SELECT transaction, and that the SELECT transaction yields the correct data if the transaction consistency as a whole is considered. Interestingly though, this still doesn't hold true for phantom rows - it's possible to insert rows into a table and have the rows returned by a calling SELECT transaction even under the REPEATABLE READ isolation level.
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ
SET NOCOUNT ON
GO
BEGIN TRAN
SELECT  EmpName
FROM    dbo.TestIsolationLevels 
WAITFOR DELAY '00:00:10'
SELECT  EmpName
FROM    dbo.TestIsolationLevels 
COMMIT
Run the below while the above is executing:
BEGIN TRAN
INSERT INTO dbo.TestIsolationLevels VALUES (3427, 'Phantom Employee 1', 30000)
COMMIT

To counter this problem, we need to use the SERIALIZABLE isolation level - the toughest of the bunch.

Experiment 5: Serializable Isolation

SERIALIZABLE has all the features of READ COMMITTED and REPEATABLE READ, but also ensures that concurrent transactions are treated as if they had been run in serial. This means guaranteed repeatable reads and no phantom rows. Be warned, however, that this (and, to some extent, the previous two isolation levels) can cause large performance losses, as concurrent transactions are effectively queued. Here's the phantom-rows example from the previous section again, but this time using the SERIALIZABLE isolation level:
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
SET NOCOUNT ON
GO
BEGIN TRAN
SELECT  EmpName
FROM    dbo.TestIsolationLevels 
WAITFOR DELAY '00:00:10'
SELECT  EmpName
FROM    dbo.TestIsolationLevels 
COMMIT
Run the below while the above is executing:
BEGIN TRAN
INSERT INTO dbo.TestIsolationLevels VALUES (3427, 'Phantom Employee 1', 30000)
COMMIT

This time the INSERT is blocked until the SELECT transaction commits, so both SELECTs return the same set of rows and no phantom row appears.

Note About Performance
I hope this article has been relevant and interesting, but I cannot finish without writing about performance. The reason there are several selectable levels of transaction isolation in SQL Server is so the DBA or developer can tailor the isolation level to the type (and frequency) of query being performed. Generally, the more pessimistic the isolation level (SERIALIZABLE, READ COMMITTED with locking), the worse the performance of queries operating under it. This is plain to see when you consider one example, READ COMMITTED with locking, which forces other queries to wait for the resources held by the first query. This can cause significant delays along the application stack, potentially leading to timeouts or other errors.

However, reducing transaction isolation levels to the most optimistic (READ UNCOMMITTED) is not necessarily a good idea under all circumstances either, for the reasons demonstrated in this article. Some systems, such as finance / banking systems, require absolute data integrity and for this to be maintained, the isolation principle is paramount. You should choose carefully the level of transaction isolation depending on what is required from the query or queries you are writing. Your queries, and associated isolation levels, should always be tested in a suitable test/development environment before deployment to production.
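Finally, a quick way to confirm which isolation level a session is actually running under (0 = unspecified, 1 = READ UNCOMMITTED, 2 = READ COMMITTED, 3 = REPEATABLE READ, 4 = SERIALIZABLE, 5 = SNAPSHOT):

SELECT transaction_isolation_level
FROM sys.dm_exec_sessions
WHERE session_id = @@SPID;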