Wednesday, September 9, 2009

ERP


Functional Modules of ERP Software


ERP software is made up of many software modules. Each ERP software module mimics a major functional area of an organization. Common ERP modules include modules for product planning, parts and material purchasing, inventory control, product distribution, order tracking, finance, accounting, marketing, and HR. Organizations often selectively implement the ERP modules that are both economically and technically feasible.
ERP Production Planning Module
During the evolution of manufacturing requirements planning (MRP II) into ERP, vendors developed more robust software for production planning, and consulting firms accumulated vast knowledge of implementing production planning modules. Production planning optimizes the utilization of manufacturing capacity, parts, components and material resources using historical production data and sales forecasting.
ERP Purchasing Module
The purchasing module streamlines procurement of required raw materials. It automates the processes of identifying potential suppliers, negotiating prices, awarding purchase orders to suppliers, and billing. The purchasing module is tightly integrated with the inventory control and production planning modules, and is often integrated with supply chain management software.
ERP Inventory Control Module
The inventory module facilitates the processes of maintaining the appropriate level of stock in a warehouse. The activities of inventory control involve identifying inventory requirements, setting targets, providing replenishment techniques and options, monitoring item usage, reconciling inventory balances, and reporting inventory status. Integration of the inventory control module with the sales, purchasing and finance modules allows ERP systems to generate timely executive-level reports.
ERP Sales Module
Revenues from sales are the lifeblood of commercial organizations. The sales module implements the functions of order placement, order scheduling, shipping and invoicing. The sales module is closely integrated with organizations' ecommerce websites, and many ERP vendors offer an online storefront as part of the sales module.
ERP Marketing Module
The ERP marketing module supports lead generation, direct mailing campaigns and more.
ERP Financial Module
Both for-profit organizations and non-profit organizations benefit from the implementation of an ERP financial module. The financial module is the core of many ERP software systems. It gathers financial data from various functional departments and generates valuable financial reports such as the balance sheet, general ledger, trial balance, and quarterly financial statements.
ERP HR Module
HR (Human Resources) is another widely implemented ERP module. The HR module streamlines the management of human resources and human capital. HR modules routinely maintain a complete employee database, including contact information, salary details, attendance, performance evaluations and promotions of all employees. Advanced HR modules are integrated with knowledge management systems to optimally utilize the expertise of all employees.

ERP Definition - A Systems Perspective


What is ERP

ERP is the acronym for Enterprise Resource Planning. ERP software applications are used to improve the performance of organizations' resource planning, management control and operational control. ERP software is multi-module application software that integrates activities across functional departments, from product planning, parts purchasing, inventory control and product distribution to order tracking. ERP software may include application modules for the finance, accounting and human resources aspects of a business.
ERP vs. CRM and SCM
CRM (Customer Relationship Management) and SCM (Supply Chain Management) are two other categories of enterprise software that are widely implemented in corporations and non-profit organizations. While the primary goal of ERP is to improve and streamline internal business processes, CRM attempts to enhance the relationship with customers and SCM aims to facilitate the collaboration between the organization, its suppliers, the manufacturers, the distributors and the partners.
ERP Definition - A Systems Perspective
ERP, like other IT and business concepts, is defined in many different ways. A sound definition should serve several purposes:
1. It answers the question of "what is ... ?".
2. It provides a base for defining more detailed concepts in the field - ERP software, ERP systems, ERP implementation etc.
3. It provides a common ground for comparison with related concepts - CRM, SCM etc.
4. It helps answer the basic questions in the field - benefits of ERP, the causes of ERP failure etc.
A definition of ERP based on Systems Theory can serve those purposes.
ERP is a system which has its goal, components, and boundary.
The Goal of an ERP System - The goal of ERP is to improve and streamline internal business processes, which typically requires reengineering of current business processes.
The Components of an ERP System - The components of an ERP system are the common components of a Management Information System (MIS).
· ERP Software - Module based ERP software is the core of an ERP system. Each software module automates business activities of a functional area within an organization. Common ERP software modules include product planning, parts purchasing, inventory control, product distribution, order tracking, finance, accounting and human resources aspects of an organization.
· Business Processes - Business processes within an organization fall into three levels - strategic planning, management control and operational control. ERP has been promoted as a solution for supporting or streamlining business processes at all levels. Much of ERP's success, however, has been limited to the integration of various functional departments.
· ERP Users - The users of ERP systems are employees of the organization at all levels, from workers, supervisors, mid-level managers to executives.
· Hardware and Operating Systems - Many large ERP systems are UNIX based. Windows NT and Linux are other popular operating systems to run ERP software. Legacy ERP systems may use other operating systems.
The Boundary of an ERP System - The boundary of an ERP system is usually smaller than the boundary of the organization that implements it. In contrast, the boundary of supply chain systems and ecommerce systems extends to the organization's suppliers, distributors, partners and customers. In practice, however, many ERP implementations involve the integration of ERP with external information systems.

History and Evolution of ERP
ERP (Enterprise Resource Planning) is the evolution of Manufacturing Requirements Planning (MRP) II. From a business perspective, ERP has expanded from the coordination of manufacturing processes to the integration of enterprise-wide backend processes. From a technological perspective, ERP has evolved from legacy implementations to a more flexible tiered client-server architecture.
The following timeline summarizes the evolution of ERP from the 1960s to the 1990s.

1960s - Inventory Management & Control: Inventory management and control is the combination of information technology and business processes for maintaining the appropriate level of stock in a warehouse. The activities of inventory management include identifying inventory requirements, setting targets, providing replenishment techniques and options, monitoring item usage, reconciling inventory balances, and reporting inventory status.
1970s - Material Requirements Planning (MRP): Materials Requirements Planning (MRP) utilizes software applications for scheduling production processes. MRP generates schedules for operations and raw material purchases based on the production requirements of finished goods, the structure of the production system, current inventory levels and the lot sizing procedure for each operation.
1980s - Manufacturing Requirements Planning (MRP II): Manufacturing Requirements Planning (MRP II) utilizes software applications for coordinating manufacturing processes, from product planning, parts purchasing and inventory control to product distribution.
1990s - Enterprise Resource Planning (ERP): Enterprise Resource Planning (ERP) uses multi-module application software to improve the performance of internal business processes. ERP systems often integrate business activities across functional departments, from product planning, parts purchasing, inventory control, product distribution and fulfillment to order tracking. ERP software systems may include application modules for supporting marketing, finance, accounting and human resources.


ERP Implementation Methodologies

Different companies may install the same ERP software using totally different processes, and the same company may implement different ERP software using the same approach. There are three commonly used methodologies for implementing ERP systems.
The Big Bang
Companies lay out a grand plan for their ERP implementation: all modules of the ERP system are installed across the entire organization at once. The big bang approach promised to reduce integration cost on the condition of thorough and careful execution. This method dominated early ERP implementations, and it partially contributed to the higher rate of failure in ERP implementation. Today, not many companies dare to attempt it anymore. The premise of this implementation method is treating ERP implementation as the implementation of a large-scale information system, which typically follows the SDLC (Systems Development Life Cycle). But an ERP system is much more than a traditional information system, in that the implementation of ERP continuously calls for the realignment of business processes. Many parties involved in ERP software systems are not IT professionals. ERP does more than automate existing business processes; ERP transforms the business processes.
Modular Implementation
The method of modular implementation goes after one ERP module at a time. This limits the scope of implementation, usually to one functional department. This approach suits companies that do not share many common processes across departments or business units. Independent modules of ERP systems are installed in each unit, while integration of ERP modules takes place at a later stage of the project. This has been the most commonly used methodology of ERP implementation. Each business unit may have its own "instances" of ERP and databases. Modular implementation reduces the risk of installation, customization and operation of ERP systems by reducing the scope of the implementation. The successful implementation of one module can benefit the overall success of an ERP project.
Process-Oriented Implementation
The process-oriented implementation focuses on the support of one or a few critical business processes that involve a few business units. The initial customization of the ERP system is limited to functionality closely related to the intended business processes. The process-oriented implementation may eventually grow into a full-blown implementation of the ERP system. This approach is utilized by many small to mid-sized companies, which tend to have less complex internal business processes.

ERP Benefits - Operational Control, Management Control and Strategic Planning
According to R. A. Anthony, organizational processes fall into three levels - strategic planning, management control and operational control. Even though much of ERP's success has been in facilitating operational coordination across functional departments, successful implementation of ERP systems benefits strategic planning and management control one way or another.
Help reduce operating costs
ERP software attempts to integrate business processes across departments onto a single enterprise-wide information system. The major benefits of ERP are improved coordination across functional departments and increased efficiency of doing business. The immediate benefit we can expect from implementing ERP systems is reduced operating costs, such as lower inventory control costs, lower production costs, lower marketing costs and lower help desk support costs.
Facilitate Day-to-Day Management
Another benefit of implementing ERP systems is the facilitation of day-to-day management. The implementation of ERP systems nurtures the establishment of backbone data warehouses. ERP systems offer better accessibility to data, so that management can have up-to-the-minute access to information for decision making and managerial control. ERP software also helps track actual costs of activities and perform activity-based costing.
Support Strategic Planning
Strategic planning is "a deliberate set of steps that assess needs and resources; define a target audience and a set of goals and objectives; plan and design coordinated strategies with evidence of success; logically connect these strategies to needs, assets, and desired outcomes; and measure and evaluate the process and outcomes." (source) Part of ERP software systems is designed to support the resource planning portion of strategic planning. In reality, resource planning has been the weakest link in ERP practice, due to the complexity of strategic planning and the lack of adequate integration with Decision Support Systems (DSS).
Critical Successful Factors of ERP Implementation

Implementation of an ERP system is a major investment and commitment for any organization. The size and complexity of ERP projects are the major factors that impact the cost of ERP implementations. Different companies may implement the same ERP software with totally different approaches, and the same company may integrate different ERP software applications by following the same procedures. However, there are factors common to the success of ERP implementation regardless of the ERP system implemented and the methodology used.
Project Planning
ERP implementation starts with project planning - setting project goals, identifying high-level business requirements, establishing project teams and estimating the project costs. Project planning offers the opportunity to re-evaluate the project in great detail. If the ERP project is not justified at the planning phase, organizations shouldn't hesitate to cancel it. For every successful ERP project, there are projects that are canceled before implementation.
Architectural Design
While high-level architectural decisions are made in the process of ERP vendor selection, architecture remains a critical success factor in integrating ERP with other e-business applications, ecommerce applications or legacy systems. The choice of middleware, interface software or programming languages drastically impacts the implementation cost and release date.
Data Requirements
Unlike in-house e-business applications, much of packaged ERP implementation involves the integration of ERP systems with existing e-business software (CRM, SCM and SFA) and legacy information systems. An appropriate level of data requirements is critical for an ERP system to interact with other applications. Data requirements usually reflect details of business requirements. It costs ten times as much to correct a mistake at a later phase of ERP implementation as it does to correctly define requirements during the analysis and design phase.
Phased Approach
It is important to break an ERP project down into manageable pieces by setting up pilot programs and short-term milestones. Depending on their IT experience, some organizations choose the easiest piece as the pilot project, while others may implement a mission-critical application first. The pilot project can both demonstrate the benefits of ERP and help gain hands-on ERP implementation experience.
Data Conversion
Second-generation ERP systems use relational database management systems (RDBMS) to store enterprise data. If a large amount of data is stored in other database systems or in different data formats, data conversion is a daunting task which is often underestimated in ERP implementations. A two-hour data conversion task can turn into a two-month effort as a result of the DBA group's lack of technical experience and management's incompetence or ignorance.
Organization Commitments
ERP implementation involves far more than the IT department; it reaches many other functional departments. The commitment and smooth coordination of all parties is the key to the success of an ERP project. The commitments come from an understanding of how ERP can benefit each functional department. For example, if the warehouse staff isn't completely sold on the inventory control module's benefits, they may not input the kind of usage data that is essential to the project's success.

Monday, September 7, 2009

SQL SERVER

(B) What is normalization? What are the different types of normalization?

Note :- A regular .NET programmer working on projects often stumbles on this question, which is understandable. The bad part is that sometimes the interviewer can take this as a very basic question to be answered, and it can be a turning point for the interview. So let's cram it.

It is a set of rules that has been established to aid in the design of tables that are meant to be connected through relationships. This set of rules is known as Normalization.

Benefits of Normalizing your database include:

· Avoiding repetitive entries

· Reducing required storage space

· Preventing the need to restructure existing tables to accommodate new data

· Increased speed and flexibility of queries, sorts, and summaries

Note :- During interviews people expect you to answer a maximum of three normal forms, and that's what is expected practically. Actually you can normalize a database to the fifth normal form, but believe this book, answering three normal forms will put you in decent shape during an interview.

Following are the three normal forms :-


First Normal Form
For a table to be in first normal form, data must be broken up into the smallest units possible. In addition to breaking data up into the smallest meaningful values, tables in first normal form should not contain repeating groups of fields.










Figure :- 10.1 Repeating groups example

For example, in the above table city1 and city2 are repeating. In order for this table to be in first normal form, you have to modify the table structure as follows. Also note that the Customer Name is now broken down into first name and last name (in first normal form, data should be broken down to the smallest unit).




Figure :- 10.2 Customer table normalized to first normal form


Second Normal form
The second normal form states that each field in a table with a multiple-field primary key must be directly related to the entire primary key. Or, in other words, each non-key field should be a fact about all the fields in the primary key.

In the above customer table, city is not directly related to the primary key.




Figure :- 10.3 Normalized customer table.


Figure :- 10.4 City is now shifted to a different master table.

That takes our database to second normal form.


Third normal form
A non-key field should not depend on another non-key field. The field "Total" is dependent on "Unit Price" and "Qty".




Figure :- 10.5 Full third normal form

So now the "Total" field is removed; it is computed instead as the multiplication of Unit Price * Qty.
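Since the original figures are not reproduced here, the following is a rough SQL sketch of what the fully normalized tables might look like (all table and column names are illustrative assumptions, not the book's):

-- 1NF: atomic columns (name split into first/last), no repeating city1/city2 groups.
-- 2NF: every non-key column depends on the whole primary key; city moves to a master table.
-- 3NF: no derived Total column; it is computed from UnitPrice * Qty when queried.
CREATE TABLE City (
    CityID   INT PRIMARY KEY,
    CityName VARCHAR(50)
)

CREATE TABLE Customer (
    CustomerID INT PRIMARY KEY,
    FirstName  VARCHAR(50),
    LastName   VARCHAR(50),
    CityID     INT REFERENCES City (CityID)
)

CREATE TABLE OrderDetail (
    OrderID   INT,
    ProductID INT,
    UnitPrice MONEY,
    Qty       INT,
    PRIMARY KEY (OrderID, ProductID)
)

-- The total is derived at query time instead of being stored:
SELECT OrderID, UnitPrice * Qty AS Total FROM OrderDetail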

Note :- The fourth and fifth normal forms are left as homework for readers.

(B)What is denormalization ?

Denormalization is the process of putting one fact in numerous places (it is the reverse of normalization). Only one valid reason exists for denormalizing a relational design - to enhance performance. The price of that performance is increased redundancy in the database.

(B)What is a candidate key ?

A table may have more than one combination of columns that could uniquely identify the rows in the table; each such combination is a candidate key. During database design you can pick one of the candidate keys to be the primary key. For example, in the supplier table both supplierid and suppliername can be candidate keys, but you will pick only supplierid as the primary key.

(B) What are the different types of joins and what's the difference between them ?


INNER JOIN
An inner join shows matches only when they exist in both tables. For example, in the SQL below there are two tables, Customers and Orders, and the inner join is made on Customers.CustomerID and Orders.CustomerID. So this SQL will only give you customers who have orders; if a customer does not have an order, that record will not be displayed.

SELECT Customers.*, Orders.* FROM Customers INNER JOIN Orders ON Customers.CustomerID = Orders.CustomerID


LEFT OUTER JOIN
A left join will display all records in the left table of the SQL statement. In the SQL below, customers with or without orders will be displayed. Order data for customers without orders appears as NULL values. For example, you want to determine the amount ordered by each customer, and you need to see who has not ordered anything as well. You can also see the LEFT OUTER JOIN as a mirror image of the RIGHT OUTER JOIN (covered in the next section) if you switch the side of each table.

SELECT Customers.*, Orders.* FROM Customers LEFT OUTER JOIN Orders ON Customers.CustomerID = Orders.CustomerID


RIGHT OUTER JOIN
A right join will display all records in the right table of the SQL statement. In the SQL below, all orders with or without matching customer records will be displayed. Customer data for orders without customers appears as NULL values. For example, you want to determine whether there are any orders in the data with undefined CustomerID values (say, after a conversion or something like it). You can also see the RIGHT OUTER JOIN as a mirror image of the LEFT OUTER JOIN if you switch the side of each table.

SELECT Customers.*, Orders.* FROM Customers RIGHT OUTER JOIN Orders
ON Customers.CustomerID = Orders.CustomerID


(I)What are indexes, and what is the difference between clustered and nonclustered indexes?

Indexes in SQL Server are similar to the indexes in books. They help SQL Server retrieve data more quickly.

There are clustered and nonclustered indexes. A clustered index is a special type of index that reorders the way records in the table are physically stored; therefore a table can have only one clustered index. The leaf nodes of a clustered index contain the data pages.

A nonclustered index is a special type of index in which the logical order of the index does not match the physical stored order of the rows on disk. The leaf node of a nonclustered index does not consist of the data pages. Instead, the leaf nodes contain index rows.
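As a small illustration (the table and index names below are hypothetical), the two kinds of index are created like this:

-- Only one clustered index is allowed, since it dictates the physical row order.
CREATE CLUSTERED INDEX IX_Orders_OrderID ON Orders (OrderID)

-- Many nonclustered indexes are allowed; their leaf nodes hold index rows
-- pointing at the data rather than the data pages themselves.
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID ON Orders (CustomerID)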

(A)How can you increase SQL performance ?

Following are tips which will increase your SQL performance :-

· Every index increases the time it takes to perform INSERTs, UPDATEs and DELETEs, so the number of indexes should not be too large. Try to use a maximum of 4-5 indexes on one table, not more. If you have a read-only table, then the number of indexes may be increased.

· Keep your indexes as narrow as possible. This reduces the size of the index and reduces the number of reads required to read the index.

· Try to create indexes on columns that have integer values rather than character values.

· If you create a composite (multi-column) index, the order of the columns in the key is very important. Try to order the columns in the key so as to enhance selectivity, with the most selective columns leftmost in the key.

· If you want to join several tables, try to create surrogate integer keys for this purpose and create indexes on their columns.








· Create a surrogate integer primary key (an identity column, for example) if your table will not have many insert operations.

· Clustered indexes are preferable to nonclustered indexes if you need to select by a range of values or you need to sort the result set with GROUP BY or ORDER BY.

· If your application will be performing the same query over and over on the same table, consider creating a covering index on the table.

· You can use the SQL Server Profiler Create Trace Wizard with the "Identify Scans of Large Tables" trace to determine which tables in your database may need indexes. This trace will show which tables are being scanned by queries instead of using an index.
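For instance, a covering index for a query that runs over and over might look like this (the table, column and index names are assumptions for illustration):

-- Suppose this query runs constantly:
--   SELECT CustomerID, OrderDate FROM Orders WHERE CustomerID = 42
-- An index carrying both columns lets SQL Server answer the query
-- from the index alone, without touching the table's data pages.
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID_OrderDate
    ON Orders (CustomerID, OrderDate)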

(A)What is the use of OLAP ?

OLAP is useful because it provides fast and interactive access to aggregated data and the ability to drill down to detail.

(A)What's a measure in OLAP ?

Measures are the key performance indicators that you want to evaluate. To determine which of the numbers in the data might be measures, a rule of thumb is: If a number makes sense when it is aggregated, then it is a measure.

(A)What are dimensions in OLAP ?

Dimensions are the categories of data analysis. For example, in a revenue report by month by sales region, the two dimensions needed are time and sales region. Typical dimensions include product, time, and region.

(A)What are levels in dimensions ?

Dimensions are arranged in hierarchical levels, with unique positions within each level. For example, a time dimension may have four levels, such as Year, Quarter, Month, and Day. Or the dimension might have only three levels, for example, Year, Week, and Day. The values within the levels are called members. For example, the years 2002 and 2003 are members of the level Year in the Time dimension.



(A)What are fact tables and dimension tables in OLAP ?

Twist :- can you explain the star schema for OLAP ?

The dimensions and measures are physically represented by a star schema. Dimension tables revolve around the fact table. A fact table contains a column for each measure as well as a column for each dimension. Each dimension column has a foreign-key relationship to the related dimension table, and the dimension columns taken together are the key to the fact table.
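A minimal sketch of such a star schema in SQL (the table and column names are hypothetical):

-- Dimension tables revolve around the fact table.
CREATE TABLE DimTime (
    TimeID   INT PRIMARY KEY,
    TheYear  INT,
    TheMonth INT
)

CREATE TABLE DimRegion (
    RegionID   INT PRIMARY KEY,
    RegionName VARCHAR(50)
)

-- One column per measure, one foreign-key column per dimension;
-- the dimension columns together form the key of the fact table.
CREATE TABLE FactSales (
    TimeID   INT REFERENCES DimTime (TimeID),
    RegionID INT REFERENCES DimRegion (RegionID),
    Revenue  MONEY,
    Units    INT,
    PRIMARY KEY (TimeID, RegionID)
)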

(A)What is DTS?

DTS (Data Transformation Services) is used to pull data from various sources into the star schema.

(A)What is fillfactor ?

Twist :- When does a page split occur ?

The 'fill factor' option specifies how full SQL Server will make each index page. When there is no free space to insert a new row on an index page, SQL Server creates a new index page and transfers some rows from the previous page to the new one. This operation is called a page split. You can reduce the number of page splits by setting an appropriate fill factor to reserve free space on each index page. The fill factor is a value from 1 through 100 that specifies the percentage of the index page to fill with data. The default value for fill factor is 0; it is treated similarly to a fill factor value of 100, the difference being that SQL Server leaves some space within the upper level of the index tree when FILLFACTOR = 0. The fill factor percentage is used only at the time the index is created. If the table contains read-only data (or data that changes very rarely), you can set the 'fill factor' option to 100. When the table's data is modified very often, you can decrease the fill factor to 70% or whatever you think is best.
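For example, to fill each index page only 70% full at creation time, leaving free space to absorb inserts before a page split becomes necessary (table and index names are hypothetical):

-- Leave roughly 30% free space on each leaf-level index page.
CREATE INDEX IX_Customer_LastName
    ON Customer (LastName)
    WITH FILLFACTOR = 70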

(A)What is RAID and how does it work ?

Redundant Array of Independent Disks (RAID) is a term used to describe the technique of improving data availability through the use of arrays of disks and various data-striping methodologies. Disk arrays are groups of disk drives that work together to achieve higher data-transfer and I/O rates than those provided by single large drives. An array is a set of multiple disk drives plus a specialized controller (an array controller) that keeps track of how data is distributed across the drives. Data for a particular file is written in segments to the different drives in the array rather than being written to a single drive.

For speed and reliability, it's better to have more disks. When these disks are arranged in certain patterns and use a specific controller, they are called a Redundant Array of Inexpensive Disks (RAID) set. There are several numbers associated with RAID, but the most common are 1, 5 and 10.

RAID 1 works by duplicating the same writes on two hard drives. Let's assume you have two 20-gigabyte drives. In RAID 1, data is written at the same time to both drives. RAID 1 is optimized for fast writes.

RAID 5 works by writing parts of data across all drives in the set (it requires at least three drives). Without protection, if a drive failed, the entire set would be worthless. To combat this problem, one of the drives stores a "parity" bit. Think of a math problem such as 3 + 7 = 10. You can think of the drives as each storing one of the numbers, and the 10 as the parity part. By removing any one of the numbers, you can get it back by referring to the other two, like this: 3 + X = 10. Of course, losing more than one could be evil. RAID 5 is optimized for reads.

RAID 10 is a bit of a combination of both types. It doesn't store a parity bit, so it's fast, but it duplicates the data on two drives to be safe. You need at least four drives for RAID 10. This type of RAID is probably the best compromise for a database server.

Note :- It's difficult to cover every aspect of RAID in this book. It's better to pick up a decent SQL SERVER book for in-depth knowledge, but yes, from an interview perspective you can probably escape with this answer.



(B)What's the difference between DELETE TABLE and TRUNCATE TABLE commands?

Following are the differences between them :-

· The DELETE syntax logs each row it deletes, thus making the delete operation slow. TRUNCATE TABLE does not log individual row deletions; it logs only the deallocation of the table's data pages. So TRUNCATE TABLE is faster compared to DELETE.

· A DELETE can be rolled back, while a TRUNCATE cannot be.

· DELETE can have criteria (a WHERE clause), while TRUNCATE cannot.

· TRUNCATE TABLE does not fire triggers.
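A quick side-by-side (the table name is hypothetical):

-- Logged row by row, honors the WHERE clause, fires DELETE triggers.
DELETE FROM Orders WHERE OrderDate < '2000-01-01'

-- Deallocates all data pages at once: fast, but no WHERE clause
-- and no triggers fire.
TRUNCATE TABLE Orders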


(B)What are the problems that can occur if you do not implement locking properly in SQL SERVER ?

Following are the problems that occur if you do not implement locking properly in SQL SERVER.


Lost Updates
Lost updates occur if you let two transactions modify the same data at the same time, and the transaction that completes first is lost. You need to watch out for lost updates with the READ UNCOMMITTED isolation level. This isolation level disregards any type of locks, so two simultaneous data modifications are not aware of each other. Suppose a customer has dues of $2000 to be paid. He pays $1000 and then buys a product for $500. Let's say these two transactions are entered from two different counters of the company, and both counter users start making their entries at the same time, 10:00 AM. At 10:01 AM the customer should have $2000 - $1000 + $500 = $1500 pending. But, as described for lost updates, the first transaction is not considered and the second transaction overrides it, so the final pending amount is $2000 + $500 = $2500. I hope the company does not lose the customer.


Non-Repeatable Read
Non-repeatable reads occur if a transaction is able to read the same row multiple times and gets a different value each time. Again, this problem is most likely to occur with the READ UNCOMMITTED isolation level. Because you let two transactions modify data at the same time, you can get some unexpected results. For instance, a customer wants to book a flight, so the travel agent checks for the flight's availability. The travel agent finds a seat and goes ahead to book it. While this travel agent is booking the seat, some other travel agent books it. When the first travel agent goes to update the record, he gets an error saying "Seat is already booked". In short, the travel agent gets a different status at different times for the same seat.


Dirty Reads
Dirty reads are a special case of non-repeatable read. This happens if you run a report while transactions are modifying the data that you're reporting on. For example, there is a customer invoice report which runs at 1:00 PM in the afternoon, after which all invoices are sent to the respective customers for payment. Let's say one of the customers has $1000 to be paid. The customer pays the $1000 at 1:00 PM, at the same time the report is run. The customer actually has no money pending, but is still issued an invoice.


Phantom Reads
Phantom reads occur when a transaction is able to read a row on the first read, but is not able to modify the same row because another transaction has deleted rows from the same table. Let's say you edit a record; in the meantime somebody comes and deletes the record, and you then go to update the record, which no longer exists. Panic.

Interestingly, phantom reads can occur even with the default isolation level supported by SQL Server: READ COMMITTED. The only isolation level that doesn't allow phantoms is SERIALIZABLE, which ensures that each transaction is completely isolated from others. In other words, no one can acquire any type of lock on the affected rows while they are being modified.

(B)What are the different transaction isolation levels in SQL SERVER ?

Twist :- what are different types of locks in SQL SERVER ?

The transaction isolation level decides how one process is isolated from another. Using transaction isolation levels you can implement locking in SQL SERVER.

There are four transaction isolation levels in SQL SERVER :-


READ COMMITTED
A shared lock is held while the data is being read, meaning that the transaction never reads uncommitted changes; the lock is released once the read completes. Other transactions can insert and modify data in the same table, however, as long as it is not locked by the first transaction.


READ UNCOMMITTED
No shared locks and no exclusive locks are honored. This is the least restrictive isolation level resulting in the best concurrency but the least data integrity.







REPEATABLE READ
This setting disallows dirty and non-repeatable reads. However, even though the locks are held on read data, new rows can still be inserted in the table, and will subsequently be read by the transaction.


SERIALIZABLE
This is the most restrictive setting holding shared locks on the range of data. This setting does not allow the insertion of new rows in the range that is locked; therefore, no phantoms are allowed.

Following is the syntax for setting transaction level in SQL SERVER.

SET TRANSACTION ISOLATION LEVEL SERIALIZABLE

(I)What are the different locks in SQL SERVER ?

Depending on the transaction isolation level, six types of lock can be acquired on data :-

Intent
The intent lock shows the future intention of SQL Server's lock manager to acquire locks on a specific unit of data for a particular transaction. SQL Server uses intent locks to queue exclusive locks, thereby ensuring that these locks will be placed on the data elements in the order the transactions were initiated. Intent locks come in three flavors: intent shared (IS), intent exclusive (IX), and shared with intent exclusive (SIX).

IS locks indicate that the transaction will read some (but not all) the resources in the table or page by placing shared locks.

IX locks indicate that the transaction will modify some (but not all) resources in the table or page by placing exclusive locks.

SIX locks indicate that the transaction will read all resources, and modify some (but not all) of them. This will be accomplished by placing shared locks on the resources read and exclusive locks on the rows modified. Only one SIX lock is allowed per resource at one time; therefore, SIX locks prevent other connections from modifying any data in the resource (page or table), although they do allow reading the data in the same resource.






Shared
Shared locks (S) allow transactions to read data with SELECT statements. Other connections are allowed to read the data at the same time; however, no transactions are allowed to modify data until the shared locks are released.


Update
Update locks (U) are acquired just prior to modifying data. If a transaction modifies a row, the update lock is escalated to an exclusive lock; otherwise, it is converted to a shared lock. Only one transaction can acquire an update lock on a resource at a time. Update locks prevent multiple connections that each hold a shared lock from trying to escalate to an exclusive lock on the same resource at once. Shared locks are compatible with other shared locks, but are not compatible with update locks.


Exclusive
Exclusive locks (X) completely lock the resource from any type of access including reads. They are issued when data is being modified through INSERT, UPDATE and DELETE statements.


Schema
Schema modification locks (Sch-M) are acquired when data definition language statements, such as CREATE TABLE, CREATE INDEX, ALTER TABLE, and so on, are being executed. Schema stability locks (Sch-S) are acquired when stored procedures are being compiled.


Bulk Update
Bulk update locks (BU) are used when performing a bulk copy of data into a table with the TABLOCK hint. These locks improve performance while bulk copying data into a table; however, they reduce concurrency by effectively preventing any other connections from reading or modifying data in the table.

(I)Can we suggest locking hints to SQL SERVER ?

We can give locking hints that help you override default decisions made by SQL Server. For instance, you can specify the ROWLOCK hint with your UPDATE statement to convince SQL Server to lock each row affected by that data modification. Whether it's prudent to do so is another story; what will happen if your UPDATE affects 95% of the rows in the affected table? If the table contains 1000 rows, then SQL Server will have to acquire 950 individual locks, which is likely to cost a lot more in terms of memory than acquiring a single table lock. So think twice before you bombard your code with ROWLOCKs.
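A hedged sketch of such a hint (the table and column names are hypothetical):

-- Ask SQL Server to take row-level locks for this modification
-- instead of letting it pick the lock granularity itself.
UPDATE Orders WITH (ROWLOCK)
SET Status = 'Shipped'
WHERE OrderID = 42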

(I)What is LOCK escalation?

Lock escalation is the process of converting a lot of low-level locks (like row locks and page locks) into higher-level locks (like table locks). Every lock is a memory structure, so too many locks would mean more memory being occupied by locks. To prevent this from happening, SQL Server escalates the many fine-grain locks to fewer coarse-grain locks. The lock escalation threshold was definable in SQL Server 6.5, but from SQL Server 7.0 onwards it's dynamically managed by SQL Server.

(B)What are the different ways of moving data/databases between servers and databases in SQL Server?

There are lots of options available; you have to choose your option depending upon your requirements. Some of the options you have are: BACKUP/RESTORE, detaching and attaching databases, replication, DTS, BCP, log shipping, INSERT...SELECT, SELECT...INTO, and creating INSERT scripts to generate data.

(I)What are the advantages of SQL 2000 over SQL 7.0 ?

· User-Defined Functions: With User-Defined Functions (UDFs), one or more Transact-SQL statements can be used to encapsulate code for reuse. User-defined functions cannot make permanent changes to the data or modify database tables; a UDF can change only local objects, such as local cursors or variables.

· Distributed Partitioned Views: Distributed partitioned views allow you to partition tables horizontally across multiple servers. So you can scale out one database server to a group of database servers that cooperate to provide the same performance levels as a cluster of database servers. Due to distributed partitioned views, SQL Server 2000 is now in first place in the TPC-C tests.



· New Data Types: These include: bigint, an 8-byte integer type; sql_variant, a data type that allows the storage of data of different data types; and the table data type, which allows applications to store results temporarily for later use.
· INSTEAD OF and AFTER Triggers: There are INSTEAD OF and AFTER triggers in SQL Server 2000. INSTEAD OF triggers are executed instead of the INSERT, UPDATE or DELETE triggering action. AFTER triggers are executed after the triggering action.

· Cascading Referential Integrity Constraints: There are new ON DELETE and ON UPDATE clauses in the REFERENCES clause of the CREATE TABLE and ALTER TABLE statements (see the sketch after this list). The ON DELETE clause controls what actions are taken if you attempt to delete a row to which existing foreign keys point. The ON UPDATE clause defines the actions that are taken if you attempt to update a candidate key value to which existing foreign keys point.

· The ON DELETE and ON UPDATE clauses have two options:

NO ACTION :- NO ACTION specifies that the deletion/update fails with an error.

CASCADE :- CASCADE specifies that all the rows with foreign keys pointing to the deleted/updated row are also deleted/updated.

· 32 CPU and 64GB Memory Support: SQL Server 2000 Enterprise Edition running under Windows 2000 DataCenter can support up to 32 CPUs and up to 64GB of physical memory (RAM) on a computer.

· XML Support: SQL Server 2000 can use XML to insert, update, and delete values in the database, and the database engine can return data as Extensible Markup Language (XML) documents.
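Here is a small sketch of the cascading clauses mentioned above (the table and column names are hypothetical):

CREATE TABLE Customers (
    CustomerID INT PRIMARY KEY
)

CREATE TABLE Orders (
    OrderID    INT PRIMARY KEY,
    CustomerID INT REFERENCES Customers (CustomerID)
        ON DELETE CASCADE    -- deleting a customer also deletes its orders
        ON UPDATE NO ACTION  -- updating a referenced key fails with an error
)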

(B)What is the difference between a HAVING CLAUSE and a WHERE CLAUSE?

You can use the HAVING clause with the GROUP BY function in a query, while the WHERE clause is applied to each row before it becomes part of the GROUP BY function in a query.
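A short illustration (the table and column names are hypothetical):

-- WHERE filters individual rows before they are grouped;
-- HAVING filters the groups after aggregation.
SELECT CustomerID, COUNT(*) AS OrderCount
FROM Orders
WHERE OrderDate >= '2009-01-01'
GROUP BY CustomerID
HAVING COUNT(*) > 5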
(B) What is the difference between UNION and UNION ALL SQL syntax ?


The UNION SQL syntax is used to select information from two tables, but it selects only distinct records from both tables, while UNION ALL selects all records from both tables.

Note :- The selected records should have the same data types, or else the syntax will not work.
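For example (the table names are hypothetical):

-- UNION removes duplicate rows from the combined result.
SELECT City FROM Customers
UNION
SELECT City FROM Suppliers

-- UNION ALL keeps every row, duplicates included, and is faster
-- because no duplicate-elimination step is needed.
SELECT City FROM Customers
UNION ALL
SELECT City FROM Suppliers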

(I)How can you raise custom errors from a stored procedure ?

The RAISERROR statement is used to produce an ad hoc error message or to retrieve a custom message that is stored in the sysmessages table. You can use this statement with the error handling code presented in the previous section to implement custom error messages in your applications. The syntax of the statement is shown here.

RAISERROR ( { msg_id | msg_str } , severity , state [ , argument [ ,...n ] ] )
[ WITH option [ ,...n ] ]

A description of the components of the statement follows.

msg_id :- The ID for an error message, which is stored in the error column in sysmessages.

msg_str :- A custom message that is not contained in sysmessages.
severity :- The severity level associated with the error. The valid values are 0–25. Severity levels 0–18 can be used by any user, but 19–25 are only available to members of the fixed server role sysadmin. When levels 19–25 are used, the WITH LOG option is required.

state :- A value that indicates the invocation state of the error. The valid values are 0–127. This value is not used by SQL Server.

argument [ ,...n ] :- One or more variables that are used to customize the message. For example, you could pass the current process ID (@@SPID) so it could be displayed in the message.

WITH option [ ,...n ] :- The three values that can be used with this optional argument are described here.

LOG :- Forces the error to be logged in the SQL Server error log and the NT application log.

NOWAIT :- Sends the message immediately to the client.

SETERROR :- Sets @@ERROR to the unique ID for the message, or 50,000.

The number of options available for the statement makes it seem complicated, but it is actually easy to use. The following shows how to create an ad hoc message with a severity of 10 and a state of 1.

RAISERROR ('An error occurred updating the NonFatal table',10,1)

--Results--

An error occurred updating the NonFatal table

The statement does not have to be used in conjunction with any other code, but for our purposes it will be used with the error handling code presented earlier. The following alters the ps_NonFatal_INSERT procedure to use RAISERROR.

USE tempdb
GO
ALTER PROCEDURE ps_NonFatal_INSERT
    @Column2 int = NULL
AS
DECLARE @ErrorMsgID int

INSERT NonFatal VALUES (@Column2)
SET @ErrorMsgID = @@ERROR
IF @ErrorMsgID <> 0
BEGIN
    RAISERROR ('An error occurred updating the NonFatal table', 10, 1)
END
When an error-producing call is made to the procedure, the custom message is passed to the client. The following shows the output generated by Query Analyzer.





(I) What is the ACID fundamental, and what are transactions in SQL SERVER ?

A transaction is a sequence of operations performed as a single logical unit of work. A logical unit of work must exhibit four properties, called the ACID (Atomicity, Consistency, Isolation, and Durability) properties, to qualify as a transaction:


Atomicity
· A transaction must be an atomic unit of work; either all of its data modifications are performed or none of them is performed.


Consistency
· When completed, a transaction must leave all data in a consistent state. In a relational database, all rules must be applied to the transaction's modifications to maintain all data integrity. All internal data structures, such as B-tree indexes or doubly-linked lists, must be correct at the end of the transaction.


Isolation
· Modifications made by concurrent transactions must be isolated from the modifications made by any other concurrent transactions. A transaction either sees data in the state it was in before another concurrent transaction modified it, or it sees the data after the second transaction has completed, but it does not see an intermediate state. This is referred to as serializability, because it results in the ability to reload the starting data and replay a series of transactions to end up with the data in the same state it was in after the original transactions were performed.


Durability
· After a transaction has completed, its effects are permanently in place in the system. The modifications persist even in the event of a system failure.

(A) What is DBCC?


DBCC (Database Consistency Checker) commands are used to check the logical and physical consistency of database structures. DBCC statements can detect and fix problems. They are grouped into four categories :-

· Maintenance commands like DBCC DBREINDEX, DBCC DBREPAIR etc.; they are mainly used for maintenance tasks in SQL SERVER.

· Miscellaneous commands like DBCC ROWLOCK, DBCC TRACEOFF etc.; they are mainly used for tasks such as enabling row-level locking or removing a DLL from memory.

· Status commands like DBCC OPENTRAN, DBCC SHOWCONTIG etc.; they are mainly used for checking the status of the database.

· Validation commands like DBCC CHECKALLOC, DBCC CHECKCATALOG etc.; they perform validation operations on the database.

Note :- Check MSDN for the list of all DBCC commands; it's very much possible, especially during DBA interviews, that they will ask about individual commands in depth.

Below is a sample screen in which the DBCC SHOWCONTIG command is run. DBCC SHOWCONTIG is used to display fragmentation information for the data and indexes of the specified table. In the sample screen the "Customer" table is checked for fragmentation information. If you look at the figure, "Scan Density" is 100 if everything is contiguous; in this image it is 95.36%, which is a decent percentage. This kind of useful information can be collected with DBCC commands, and database performance and maintenance can be improved accordingly.
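If you want to try it yourself, the command for the table named in the sample is simply:

-- Display fragmentation information for the Customer table;
-- look at "Scan Density" in the output.
DBCC SHOWCONTIG ('Customer')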




















Figure :- 10.6 DBCC SHOWCONTIG command in action.












(A) What is purpose of Replication ?

Replication is a way of keeping data synchronized in multiple databases. SQL SERVER replication has two important aspects: publisher and subscriber.


Publisher
The database server that makes data available for replication is called the Publisher.


Subscriber
Database servers that get data from the publisher are called Subscribers.

(A) What are the different types of replication supported by SQL SERVER ?

There are three types of replication supported by SQL SERVER:-


Snapshot Replication
Snapshot replication takes a snapshot of one database and moves it to the other database. After the initial load, data can be refreshed periodically. The only disadvantage of this type of replication is that all data has to be copied each time the table is refreshed.


Transactional Replication
In transactional replication, data is copied the first time as in snapshot replication, but afterwards only the transactions are synchronized rather than the whole database being replicated. You can specify that synchronization runs continuously or on a periodic basis.


Merge Replication
Merge replication combines data from multiple sources into a single central database. Again, the initial load is like a snapshot, but afterwards data is allowed to change both on the subscriber and on the publisher; when they come online, the changes are detected, combined and applied accordingly.

(I) What is the BCP utility in SQL SERVER ?



BCP (Bulk Copy Program) is a command-line utility with which you can import and export large amounts of data into and out of a SQL SERVER database.

Below is a sample which shows BCP in action.
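As a hedged sketch of the command line (the database, table, file and server names are hypothetical), an export and the matching import look like this:

bcp Northwind.dbo.Customers out customers.dat -c -T -S MYSERVER

bcp Northwind.dbo.Customers in customers.dat -c -T -S MYSERVER

Here -c uses character format, -T uses a trusted (Windows) connection, and -S names the server.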








Figure :- 10.7 Export BCP utility in action













Figure :- 10.8 Import BCP utility in action

(I)What are the different types of triggers in SQL SERVER 2000 ?

There are two types of triggers :-

· INSTEAD OF triggers

INSTEAD OF triggers fire in place of the triggering action. For example, if an INSTEAD OF UPDATE trigger exists on the Sales table and an UPDATE statement is executed against the Sales table, the UPDATE statement will not change a row in the Sales table. Instead, the UPDATE statement causes the INSTEAD OF UPDATE trigger to be executed, which may or may not modify data in the Sales table.

· AFTER triggers

AFTER triggers execute following the SQL action, such as an insert, update, or delete. This is the traditional trigger which existed in SQL SERVER.

INSTEAD OF triggers get executed automatically before the primary key and foreign key constraints are checked, whereas traditional AFTER triggers get executed after these constraints are checked.

Unlike AFTER triggers, INSTEAD OF triggers can be created on views.
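A minimal sketch of both declarations (the trigger names and bodies are hypothetical):

-- Fires in place of the UPDATE; the Sales table is only changed
-- if the trigger body chooses to change it.
CREATE TRIGGER trgSales_InsteadOfUpdate ON Sales
INSTEAD OF UPDATE
AS
    PRINT 'Intercepted the update before constraint checks'
GO

-- Fires after the UPDATE has been applied and constraints checked.
CREATE TRIGGER trgSales_AfterUpdate ON Sales
AFTER UPDATE
AS
    PRINT 'Update completed'
GO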

(A)If we have multiple AFTER triggers on a table, how can we define the sequence of the triggers ?

If a table has multiple AFTER triggers, then you can specify which trigger should be executed first and which trigger should be executed last using the stored procedure sp_settriggerorder. All the other triggers are in an undefined order which you cannot control.
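For example (the trigger name is hypothetical):

-- Make this trigger fire first among the AFTER UPDATE triggers on its table.
EXEC sp_settriggerorder
    @triggername = 'trgSales_AfterUpdate',
    @order = 'First',
    @stmttype = 'UPDATE'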

(A)What is SQL injection ?

It is a form of attack on a database-driven Web site in which the attacker executes unauthorized SQL commands by taking advantage of insecure code on a system connected to the Internet, bypassing the firewall. SQL injection attacks are used to steal information from a database that would normally not be available, and/or to gain access to an organization's host computers through the computer that is hosting the database.

SQL injection attacks typically are easy to avoid by ensuring that a system has strong input validation.

As the name suggests, we inject SQL, which can be relatively dangerous for the database. For example, this is a simple SQL statement :-

SELECT email, passwd, login_id, full_name
FROM members
WHERE email = 'x'

Now somebody does not put “x” as the input but puts “x ; DROP TABLE members;”. So the actual SQL which will execute is :-


SELECT email, passwd, login_id, full_name
FROM members
WHERE email = 'x' ; DROP TABLE members;

Think what will happen to your database.
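One standard way to get that input validation in T-SQL is to pass user input as a parameter instead of concatenating it into the statement; a sketch (the parameter handling shown is illustrative):

-- The input is treated strictly as data; the injected text can never
-- become executable SQL.
EXEC sp_executesql
    N'SELECT email, passwd, login_id, full_name FROM members WHERE email = @email',
    N'@email VARCHAR(100)',
    @email = 'x'' ; DROP TABLE members; --'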




ADO.NET

(B)What is the namespace in which .NET has the data functionality classes ?

Following are the namespaces provided by .NET for data management :-


System.Data
This contains the basic objects used for accessing and storing relational data, such as DataSet, DataTable, and DataRelation. Each of these is independent of the type of data source and the way we connect to it.


System.Data.OleDb
Contains the objects that we use to connect to a data source via an OLE-DB provider, such as OleDbConnection, OleDbCommand, etc. These objects inherit from the common base classes, and so have the same properties, methods, and events as the SqlClient equivalents.


System.Data.SqlClient
This contains the objects that we use to connect to a data source via the Tabular Data Stream (TDS) interface of Microsoft SQL Server (only). This can generally provide better performance, as it removes some of the intermediate layers required by an OLE-DB connection.


System.Xml
This contains the basic objects required to create, read, store, write, and manipulate XML documents according to W3C recommendations.

(B) Can you give an overview of the ADO.NET architecture ?

The most important section in the ADO.NET architecture is the "Data Provider". The Data Provider provides access to a data source (SQL SERVER, ACCESS, ORACLE). In short, it provides objects to achieve functionality like opening and closing connections, retrieving data and updating data. In the below figure you can see the four main sections of a data provider :-

· Connection








· Command object (this is the object responsible for using stored procedures)

· Data Adapter (this object acts as a bridge between the data store and the dataset)
· DataReader (this object reads data from the data store in forward-only mode)

The Dataset object represents disconnected and cached data. If you see the diagram, it is not in direct connection with the data store (SQL SERVER, ORACLE etc.); rather it talks to the Data Adapter, which is responsible for filling the dataset. A Dataset can have one or more DataTables and relations.















Figure :- 9.1 ADO.NET Architecture


The "DataView" object is used to sort and filter data in a DataTable.

Note :- This is one of the favorite questions in .NET. Just paste the picture in your mind, and during the interview try to recall that image.

(B)What are the two fundamental objects in ADO.NET ?

Datareader and Dataset are the two fundamental objects in ADO.NET.

(B)What is difference between dataset and datareader ?

Following are some major differences between dataset and datareader :-

Ö DataReader provides forward-only and read-only access to data, while the DataSet object can hold more than one table (in other words more than one rowset) from the same data source as well as the relationships between them.

Ö Dataset is a disconnected architecture, while datareader is a connected architecture.

Ö Dataset can persist contents, while datareader can not persist contents as it is forward-only.

(I)What are the major differences between classic ADO and ADO.NET ?

Following are some major differences between both :-

Ö In classic ADO we had client and server side cursors; they are no longer present in ADO.NET. Note it's a disconnected model, so they are no longer applicable.

Ö Locking is not supported due to the disconnected model.

Ö All data is persisted in XML, as compared to classic ADO where data was also persisted in binary format.

(B)What is the use of connection object ?

They are used to establish a connection to the data source; a Command object then uses this connection to execute against the data source.

Ö An OleDbConnection object is used with an OLE-DB provider









Ö A SqlConnection object uses Tabular Data Services (TDS) with MS SQL Server

(B)What is the use of command objects and what are the methods provided by the command object ?

They are used to connect the connection object to a Datareader or dataset. Following are the methods provided by the command object :-

Ö ExecuteNonQuery :- Executes the command defined in the CommandText property against the connection defined in the Connection property, for a query that does not return any rows (an UPDATE, DELETE or INSERT). Returns an Integer indicating the number of rows affected by the query.

Ö ExecuteReader :- Executes the command defined in the CommandText property against the connection defined in the Connection property. Returns a "reader" object that is connected to the resulting rowset within the database, allowing the rows to be retrieved.

Ö ExecuteScalar :- Executes the command defined in the CommandText property against the connection defined in the Connection property. Returns only a single value (effectively the first column of the first row of the resulting rowset). Any other returned columns and rows are discarded. It is fast and efficient when only a "singleton" value is required.
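A minimal sketch of all three methods (assuming an open SqlConnection named objConnection and the Employees table used elsewhere in this chapter):

Dim objCommand As New SqlCommand("Update Employees set FirstName = 'Shiv' where FirstName = 'x'", objConnection)
Dim intRowsAffected As Integer = objCommand.ExecuteNonQuery() ' number of rows affected

objCommand.CommandText = "Select FirstName from Employees"
Dim objReader As SqlDataReader = objCommand.ExecuteReader()   ' connected, forward-only reader
objReader.Close()

objCommand.CommandText = "Select Count(*) from Employees"
Dim intCount As Integer = CInt(objCommand.ExecuteScalar())    ' first column of first row only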

(B)What is the use of dataadapter ?

These are objects that connect one or more Command objects to a Dataset object. They provide the logic that gets the data from the data store and populates the tables in the DataSet, or pushes the changes in the DataSet back into the data store.

Ö An OleDbDataAdapter object is used with an OLE-DB provider

Ö A SqlDataAdapter object uses Tabular Data Services with MS SQL Server.

(B)What are basic methods of Dataadapter ?

There are three most commonly used methods of Dataadapter :-

Fill :- Executes the SelectCommand to fill the DataSet object with data from the data source. It can also be used to update (refresh) an existing table in a DataSet with changes made to the data in the original data source, if there is a primary key in the table in the DataSet.

FillSchema :- Uses the SelectCommand to extract just the schema for a table from the data source, and creates an empty table in the DataSet object with all the corresponding constraints.

Update :- Calls the respective InsertCommand, UpdateCommand, or DeleteCommand for each inserted, updated, or deleted row in the DataSet so as to update the original data source with the changes made to the content of the DataSet. This is a little like the UpdateBatch method provided by the ADO Recordset object, but in the DataSet it can be used to update more than one table.
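A minimal sketch of the three methods working together (assuming an open SqlConnection named objConnection; the SqlCommandBuilder, covered later in this chapter, generates the insert/update/delete commands):

Dim objDataAdapter As New SqlDataAdapter("Select * from Employees", objConnection)
Dim objDataSet As New DataSet
objDataAdapter.FillSchema(objDataSet, SchemaType.Source, "Employees") ' empty table plus constraints
objDataAdapter.Fill(objDataSet, "Employees")                          ' load the rows
' ... modify rows in objDataSet.Tables("Employees") here ...
Dim objBuilder As New SqlCommandBuilder(objDataAdapter)
objDataAdapter.Update(objDataSet, "Employees")                        ' push the changes back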

(B)What is Dataset object?

The DataSet provides the basis for disconnected storage and manipulation of relational data. We fill it from a data store, work with it while disconnected from that data store, then reconnect and flush changes back to the data store if required.

(B)What are the various objects in Dataset ?

A Dataset has a collection of DataTable objects within the Tables collection. Each DataTable object contains a collection of DataRow objects and a collection of DataColumn objects. There are also collections for the primary keys, constraints, and default values used in the table (called the constraint collection), and for the parent and child relationships between the tables. Finally, there is a DefaultView object for each table. This is used to create a DataView object based on the table, so that the data can be searched, filtered or otherwise manipulated while displaying the data.

Note :- Look back again to the main diagram for ADO.NET architecture for visualizing this answer in pictorial form

(B) How can we connect to Microsoft Access , Foxpro , Oracle etc ?

Microsoft provides the System.Data.OleDb namespace to communicate with databases like Access, Oracle etc. In short, any OLE DB-compliant database can be connected using the System.Data.OleDb namespace.











Note :- A small sample of OLEDB is provided in “WindowsAppOleDb”, which uses “Nwind.mdb” in the bin directory to display data in a Listbox.

Private Sub LoadData()
    Dim strPath As String
    strPath = AppDomain.CurrentDomain.BaseDirectory
    Dim objOLEDBCon As New OleDbConnection("Provider=Microsoft.Jet.OLEDB.4.0;Data Source=" & strPath & "Nwind.mdb")
    Dim objOLEDBCommand As OleDbCommand
    Dim objOLEDBReader As OleDbDataReader
    Try
        objOLEDBCommand = New OleDbCommand("Select FirstName from Employees")
        objOLEDBCon.Open()
        objOLEDBCommand.Connection = objOLEDBCon
        objOLEDBReader = objOLEDBCommand.ExecuteReader()
        Do While objOLEDBReader.Read()
            lstNorthwinds.Items.Add(objOLEDBReader.GetString(0))
        Loop
    Catch ex As Exception
        Throw ex
    Finally
        objOLEDBCon.Close()
    End Try
End Sub

The main heart is the “LoadData()” method, which actually loads the data into the listbox.

Note:- This source code has the connection string hard coded in the program itself, which is not a good programming practice. For windows applications the best place to store the connection string is “App.config”. Also note the “AppDomain.CurrentDomain.BaseDirectory” function; this gives the current path of the running exe, which is “BIN”, and the MDB file is in that directory. Also note the Finally block, which executes irrespective of whether there is an error or not, thus ensuring that all connections to the datastore are freed. It's best practice to put all clean-up statements in the Finally block, ensuring that resources are deallocated properly.
(B) How do we connect to SQL SERVER , which namespace do we use ?










First, below is the code; after the code I have given the explanation for it. For this sample we will also need a SQL table set up, which I have imported using the DTS wizard.

Private Sub LoadData()
    ' note :- With and End With makes your code more readable
    Dim strConnectionString As String
    Dim objConnection As New SqlConnection
    Dim objCommand As New SqlCommand
    Dim objReader As SqlDataReader
    Try
        ' this gets the connectionstring from the app.config file.
        ' note: if this gives an error, see where the database is on your pc and point to it
        strConnectionString = AppSettings.Item("ConnectionString")
        ' take the connectionstring and initialize the connection object
        With objConnection
            .ConnectionString = strConnectionString
            .Open()
        End With
        objCommand = New SqlCommand("Select FirstName from Employees")
        With objCommand
            .Connection = objConnection
            objReader = .ExecuteReader()
        End With
        ' looping through the reader to fill the list box
        Do While objReader.Read()
            lstData.Items.Add(objReader.Item("FirstName"))
        Loop
    Catch ex As Exception
        Throw ex
    Finally
        objConnection.Close()
    End Try
End Sub

The connection string itself comes from the config file, from an entry along the lines of the one below (the server value is a placeholder; change it to your machine name):

<add key="ConnectionString" value="Server=yourservername;User ID=sa;Database=Employees"/>











Note:- The above code is provided on the CD in the folder “WindowsAppSqlClient”. Comments in the code explain a lot, but we will again iterate through the whole code later. “LoadData” is the main method which loads the data from SQL SERVER. Before running this code you have to install SQL SERVER on your machine. As we are dealing with SQLCLIENT we need to set up a database in SQL SERVER. For this sample I have imported the Access file “Nwind.mdb” from the “SampleAccessDatabase” folder on the CD into SQL SERVER. Depending on your computer you will also have to change the connection string in the App.config file.

For setting up the sample SQL table we can use the DTS import wizard. See the below figure, which is using Microsoft Access as the data source. While importing the database the author gave the database name as “Employees”.








Figure:- 9.2 Loading “Nwind.mdb” in SQL SERVER for the sample















Figure :- 9.3 Load only the Employee table.

For simplicity's sake we will only import the employee table, as that's the only thing needed in our sample code.
























Figure :- 9.4 View of loaded Employee table

Now, from the interview point of view, you are definitely not going to recite the whole source code given in the book. The interviewer expects only a broader answer of what steps are needed to connect to SQL SERVER. For fundamentals' sake the author has explained the whole source code. In short, you have to explain the “LoadData” method in a broader way. Following are the steps to connect to SQL SERVER :-

Ö First, import the namespace “System.Data.SqlClient”.

Ö Create a connection object as shown in “LoadData” method.

With objConnection
    .ConnectionString = strConnectionString
    .Open()
End With


Ö Create the command object with the SQL. Also assign the created connection object to the command object, and execute the reader.













objCommand = New SqlCommand("Select FirstName from Employees")
With objCommand
    .Connection = objConnection
    objReader = .ExecuteReader()
End With
Ö Finally, loop through the reader and fill the list box. If old VB programmers are expecting the MoveNext command, it's replaced by Read(), which returns true if there is any data to be read. If Read() returns false, that means it's the end of the datareader and there is no more data to be read.

Do While objReader.Read()
    lstData.Items.Add(objReader.Item("FirstName"))
Loop
Ö Finally do not forget to close the connection object.

Note:- In “LoadData” you will see that the connection string is stored in the App.config file and is loaded using AppSettings.Item(“ConnectionString”). While running this sample live on your database, do not forget to change this connection string according to your machine name and SQL SERVER instance, or else the source code will not run.

(B) How do we use stored procedure in ADO.NET and how do we provide parameters to the stored procedures?

ADO.NET provides the SqlCommand object which provides the functionality of executing stored procedures.

Note :- Sample code is provided in the folder “WindowsSqlClientCommand”. There are two stored procedures created in the same “Employees” database which was created for the previous question.

CREATE PROCEDURE SelectByEmployee @FirstName nvarchar(200) AS
Select FirstName from Employees where FirstName like @FirstName + '%'

CREATE PROCEDURE SelectEmployee AS
Select FirstName from Employees

If txtEmployeeName.Text.Length = 0 Then
    objCommand = New SqlCommand("SelectEmployee")
Else
    objCommand = New SqlCommand("SelectByEmployee")
    objCommand.Parameters.Add("@FirstName", Data.SqlDbType.NVarChar, 200)
    objCommand.Parameters.Item("@FirstName").Value = txtEmployeeName.Text.Trim()
End If
' this line is implied by the sample but was not in the printed listing: tell
' ADO.NET that the CommandText is a stored procedure name, not a SQL statement
objCommand.CommandType = CommandType.StoredProcedure
In the above sample not a lot has changed, only that the SQL has moved into stored procedures. There are two stored procedures: one is “SelectEmployee”, which selects all the employees, and the other is “SelectByEmployee”, which returns employee names starting with a specific character. As you can see, to provide parameters to the stored procedures we use the Parameter object of the command object. In such questions the interviewer expects two simple answers: one, that we use the command object to execute stored procedures, and two, that we use the parameter object to provide parameters to the stored procedure. The above sample is provided only for getting the actual feel of it. Be short, be nice and get a job.

(B) How can we force the connection object to close after my datareader is closed ?

The command method ExecuteReader takes a parameter called CommandBehavior, where we can specify that the connection should be closed automatically after the Datareader is closed.

pobjDataReader = pobjCommand.ExecuteReader(CommandBehavior.CloseConnection)

(B) I want to force the datareader to return only schema of the datastore rather than data ?

pobjDataReader = pobjCommand.ExecuteReader(CommandBehavior.SchemaOnly)

(B) How can we fine tune the command object when we are expecting a single row or a single value ?

Again, the CommandBehavior enumeration provides two values, SingleResult and SingleRow. If you are expecting a single value then pass CommandBehavior.SingleResult and the query is optimized accordingly; if you are expecting a single row then pass CommandBehavior.SingleRow and the query is optimized for a single row.

(B) Which is the best place to store connectionstring in .NET projects ?










Config files are the best place to store connection strings. If it's a web-based application the “Web.config” file will be used, and if it's a windows application the “App.config” file will be used.

(B) What are steps involved to fill a dataset ?

Twist :- How can we use dataadapter to fill a dataset ?

Sample code is provided in the “WindowsDataSetSample” folder on the CD. “LoadData” has all the implementation of connecting and loading the dataset. This dataset is finally bound to a ListBox. Below is the sample code.

Private Sub LoadData()
    Dim strConnectionString As String
    strConnectionString = AppSettings.Item("ConnectionString")
    Dim objConn As New SqlConnection(strConnectionString)
    objConn.Open()
    Dim objCommand As New SqlCommand("Select FirstName from Employees")
    objCommand.Connection = objConn
    Dim objDataAdapter As New SqlDataAdapter()
    objDataAdapter.SelectCommand = objCommand
    Dim objDataSet As New DataSet
    objDataAdapter.Fill(objDataSet)
    lstData.DataSource = objDataSet.Tables(0).DefaultView
    lstData.DisplayMember = "FirstName"
    lstData.ValueMember = "FirstName"
End Sub
In such types of questions the interviewer is looking from a practical angle: have you worked with datasets and dataadapters? Let me try to explain the above code first, and then we will move on to what steps to mention during the interview.

Dim objConn As New SqlConnection(strConnectionString)

objConn.Open()

The first step is to open the connection. Again, note the connection string is loaded from the config file.

Dim objCommand As New SqlCommand(“Select FirstName from Employees”)

objCommand.Connection = objConn

Second step is to create a command object with appropriate SQL and set the connection object to this command.

Dim objDataAdapter As New SqlDataAdapter()









objDataAdapter.SelectCommand = objCommand

The third step is to create the adapter object and pass the command object to the adapter object.

objDataAdapter.Fill(objDataSet)

The fourth step is to load the dataset using the “Fill” method of the dataadapter.

lstData.DataSource = objDataSet.Tables(0).DefaultView
lstData.DisplayMember = "FirstName"
lstData.ValueMember = "FirstName"

The fifth step is to bind the loaded dataset to the GUI. At this moment the sample has a listbox as the UI. Binding of the UI is done by using the DefaultView of the dataset. Just to revise: every dataset has tables and every table has views. In this sample we have only loaded one table, i.e. the Employees table, so we refer to it with an index of zero.

Just say all five steps during the interview and you will see the smile on the interviewer's face..... Hmm, and an appointment letter in your hand.

(B)What are the various methods provided by the dataset object to generate XML?

Note:- XML is one of the most important leaps between classic ADO and ADO.NET. So this question is normally asked more generally: how can we convert any data to XML format? The best answer is to convert it into a dataset and use the below methods.

Ö ReadXml

Reads an XML document into the Dataset.

Ö GetXml

This is a function which returns a string containing the XML document.

Ö WriteXml

This writes XML data to disk.
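A minimal sketch round-tripping a DataSet through XML (the file name is illustrative):

objDataSet.WriteXml("Employees.xml")        ' write the data to disk as XML
Dim strXml As String = objDataSet.GetXml()  ' get the XML document as a string
Dim objNewDataSet As New DataSet
objNewDataSet.ReadXml("Employees.xml")      ' load the XML back into a Dataset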

(B) How can we save all data from dataset ?









The Dataset has an “AcceptChanges” method which commits all the changes made since the last time “AcceptChanges” was executed. Note that this commits the changes within the dataset itself; to push the changes back to the database you call the dataadapter's “Update” method.

Note :- This book does not have any sample of AcceptChanges. I leave that to readers as a homework sample. But yes, from the interview aspect the above will be enough.

(B) How can we check that some changes have been made to a dataset since it was loaded ?

Twist :- How can we cancel all changes done in a dataset ? How do we get the values which have changed in a dataset ?

For tracking down changes the Dataset has two methods which come to the rescue: “GetChanges” and “HasChanges”.


GetChanges

Returns a dataset containing only the rows which have changed since the dataset was loaded or since AcceptChanges was executed.


HasChanges

This method indicates whether any changes have been made since the dataset was loaded or the AcceptChanges method was executed.

If we want to revert or abandon all changes since the dataset was loaded, use “RejectChanges”.

Note:- One of the most misunderstood things about these methods is that people think they track the changes in the actual database. That's a fundamental mistake; the changes relate only to changes within the dataset and have nothing to do with changes happening in the actual database. As datasets are disconnected, they do not know anything about the changes happening in the actual database.
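A minimal sketch of the three members working together:

If objDataSet.HasChanges() Then
    Dim objChanges As DataSet = objDataSet.GetChanges() ' a dataset holding only the modified rows
    ' ... inspect objChanges or pass it to another tier ...
    objDataSet.RejectChanges() ' abandon everything done since load / last AcceptChanges
End If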

(B) How can we add/remove rows in the “DataTable” object of a “DataSet” ?

“Datatable” provides the “NewRow” method to add a new row to the “DataTable”. “DataTable” has a “DataRowCollection” object which holds all the rows in a “DataTable” object. Following are the methods provided by the “DataRowCollection” object :-








Add

Adds a new row to the DataTable.


Remove

Removes a “DataRow” object from the “DataTable”.


RemoveAt

Removes a “DataRow” object from the “DataTable” depending on its index position in the “DataTable”.

(B) What’s basic use of “DataView” ?

“DataView” represents a complete table or a small section of rows depending on some criteria. It's best used for sorting and finding data within a “Datatable”.

Dataview has the following methods :-


Find

Takes an array of values and returns the index of the matching row.


FindRows

This also takes an array of values but returns a collection of “DataRowView” objects.

If we want to manipulate the data of a “DataTable” object, create a “DataView” (using the “DefaultView” property we can get a “DataView” object) of the “DataTable” object and use the following functionalities :-


AddNew

Adds a new row to the “DataView” object.

















Delete

Deletes the specified row from the “DataView” object.
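A minimal sketch of these members (note that Find requires the view to be sorted first):

Dim objView As DataView = objDataSet.Tables("Employees").DefaultView
objView.Sort = "FirstName"                      ' Find works against the sort column
Dim intIndex As Integer = objView.Find("Shiv")  ' index of the matching row, -1 if none
Dim objNewRow As DataRowView = objView.AddNew() ' add a new row through the view
objNewRow("FirstName") = "New Employee"
objNewRow.EndEdit()
objView.Delete(0)                               ' delete the row at index 0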

(B) What's difference between “DataSet” and “DataReader” ?

Twist :- Why is DataSet slower than DataReader ? The fourth point is the answer to the twist.

Note:- This is my best question and I expect everyone to answer it. It's asked in almost 99% of companies.... Basic, very basic, cram it.

Following are the major difference between “DataSet” and “DataReader” :-

Ö “DataSet” is a disconnected architecture, while “DataReader” has a live connection while reading data. So if we want to cache data and pass it to a different tier, “DataSet” forms the best choice, and it has decent XML support.

Ö When an application needs to access data from more than one table, “DataSet” forms the best choice.

Ö If we need to move back while reading records, “datareader” does not support this functionality.

Ö But one of the biggest drawbacks of DataSet is speed. As “DataSet” carries considerable overhead because of relations, multiple tables etc., its speed is slower than “DataReader”. Always try to use “DataReader” wherever possible, as it's meant specially for speed performance.

(B) How can we load multiple tables in a DataSet ?

objCommand.CommandText = "Select * from Table1"
objDataAdapter.Fill(objDataSet, "Table1")
objCommand.CommandText = "Select * from Table2"
objDataAdapter.Fill(objDataSet, "Table2")

Above is sample code which shows how to load multiple “DataTable” objects in one “DataSet” object. The sample code loads two tables, “Table1” and “Table2”, into the object objDataSet.







lstdata.DataSource = objDataSet.Tables("Table1").DefaultView

In order to refer to the “Table1” DataTable, use the Tables collection of the DataSet, and the DefaultView object will give you the necessary output.

(B) How can we add relations between tables in a DataSet ?

Dim objRelation As DataRelation
objRelation = New DataRelation("CustomerAddresses", _
    objDataSet.Tables("Customer").Columns("Custid"), _
    objDataSet.Tables("Addresses").Columns("Custid_fk"))
objDataSet.Relations.Add(objRelation)

Relations can be added between “DataTable” objects using the “DataRelation” object. The above sample code builds a relationship between the “Customer” and “Addresses” “DataTable” objects using the “CustomerAddresses” “DataRelation” object.

(B) What’s the use of CommandBuilder ?

CommandBuilder builds “Parameter” objects automatically. Below is simple code which uses the CommandBuilder to load its parameter objects.

Dim pobjCommandBuilder As New OleDbCommandBuilder(pobjDataAdapter)

pobjCommandBuilder.DeriveParameters(pobjCommand)

Be careful while using the “DeriveParameters” method, as it needs an extra trip to the data store, which can be very inefficient.

(B) What's difference between “Optimistic” and “Pessimistic” locking ?

In pessimistic locking, when a user wants to update data the record is locked, and until then no one else can update the data. Other users can only view the data when there is pessimistic locking.

In optimistic locking multiple users can open the same record for updating, thus increasing maximum concurrency. The record is only locked while it is actually being updated. This is practically the most preferred way of locking. Nowadays browser-based applications are very common, so pessimistic locking is not a practical solution.

(A) How many ways are there to implement locking in ADO.NET ?

Following are the ways to implement locking using ADO.NET :-

Ö When we call the “Update” method of the DataAdapter it handles locking internally. If the DataSet values do not match the current data in the database it raises a concurrency exception error. We can easily trap this error using a Try..Catch block and raise an appropriate error message to the user.

Ö Define a Datetime stamp field in the table. When you are actually firing the UPDATE SQL statement, compare the current timestamp with the one existing in the database. Below is a sample SQL which checks for the timestamp before updating; on any mismatch in timestamp it will not update the record. This is the best practice used by industry for locking.

Update table1 set field1=@test where LastTimeStamp=@CurrentTimeStamp

Ö Check the original values stored in SQL SERVER against the actual changed values. In the stored procedure, check before updating that the old data is the same as the current data. For example, in the below SQL, before updating field1 we check whether the old field1 value is the same. If not, then someone else has updated it and the necessary action has to be taken.

Update table1 set field1=@test where field1 = @oldfield1value

Locking can be handled on the ADO.NET side or on the SQL SERVER side, i.e. in stored procedures. For more details of how to implement locking in SQL SERVER, read “What are different locks in SQL SERVER ?” in the SQL SERVER chapter.

Note:- This is one of the favorite questions of interviewers, so cram it.... When I say cram it I do not mean it.... I mean understand it. This book has tried to cover ADO.NET as much as possible, but the indeterministic nature of ADO.NET interview questions makes it difficult to do full justice. But hopefully the above questions will make you quite confident during interviews.

(A)How can we perform transactions in .NET?









The most common sequence of steps that would be performed while developing a transactional application is as follows:

Ö Open a database connection using the Open method of the connection object.

Ö Begin a transaction using the BeginTransaction method of the connection object. This method provides us with a transaction object that we will use later to commit or roll back the transaction. Note that changes caused by any queries executed before calling the BeginTransaction method will be committed to the database immediately after they execute. Set the Transaction property of the command object to the above-mentioned transaction object.

Ö Execute the SQL commands using the command object. We may use one or more command objects for this purpose, as long as the Transaction property of all the objects is set to a valid transaction object.

Ö Commit or roll back the transaction using the Commit or Rollback method of the transaction object.

Ö Close the database connection.
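A minimal sketch of these steps using SqlClient (the SQL statement and connection string are illustrative):

Dim objConnection As New SqlConnection(strConnectionString)
objConnection.Open()
Dim objTransaction As SqlTransaction = objConnection.BeginTransaction()
Dim objCommand As New SqlCommand("Update Employees set FirstName = 'Shiv'", objConnection)
objCommand.Transaction = objTransaction ' every command in the transaction needs this
Try
    objCommand.ExecuteNonQuery()
    objTransaction.Commit()   ' all statements succeed together
Catch ex As Exception
    objTransaction.Rollback() ' or fail together
Finally
    objConnection.Close()
End Try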

(I)What's difference between Dataset.Clone and Dataset.Copy ?

Clone :- It only copies the structure; it does not copy the data.

Copy :- Copies both structure and data.
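A minimal sketch:

Dim objStructureOnly As DataSet = objDataSet.Clone() ' same schema, zero rows
Dim objFullCopy As DataSet = objDataSet.Copy()       ' same schema plus all the rows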
(A)Can you explain the difference between an ADO.NET Dataset and an ADO Recordset?

Following are the major differences between a recordset and a dataset :-

Ö With a dataset you can retrieve data from two databases, like Oracle and SQL Server, and merge them in one dataset; with a recordset this is not possible.

Ö All representation of a Dataset is using XML, while a recordset uses COM.

Ö A Recordset can not be transmitted over HTTP, while a Dataset can be.






.NET Architecture

(B) What are design patterns ?

Design patterns are recurring solutions to recurring problems in software architecture.

(A) Can you list down all patterns and their classification ?

Note :- This is an advanced question, because anyone who asks you to list down all patterns can only be crazy for what he is asking. But it's always a win-win situation for the interviewer.

There are three basic classifications of patterns: Creational, Structural and Behavioral patterns.


Creational Patterns
Ö Abstract Factory:- Creates an instance of several families of classes

Ö Builder :- Separates object construction from its representation

Ö Factory Method:- Creates an instance of several derived classes

Ö Prototype:- A fully initialized instance to be copied or cloned

Ö Singleton:- A class of which only a single instance can exist

Note :- The best way to remember the Creational patterns is by ABFPS (Abraham Became First President of States).


Structural Patterns
Ö Adapter:-Match interfaces of different classes.

Ö Bridge:-Separates an object’s interface from its implementation.

Ö Composite:-A tree structure of simple and composite objects.

Ö Decorator :-Add responsibilities to objects dynamically.

Ö Façade:-A single class that represents an entire subsystem.

Ö Flyweight:-A fine-grained instance used for efficient sharing.

Ö Proxy:-An object representing another object.








Note :- The best way to remember the Structural patterns is ABCDFFP.


Behavioral Patterns
Ö Mediator:-Defines simplified communication between classes.

Ö Memento:-Capture and restore an object's internal state.

Ö Interpreter:-A way to include language elements in a program.

Ö Iterator:-Sequentially access the elements of a collection.

Ö Chain of Resp:-A way of passing a request between a chain of objects.

Ö Command:-Encapsulate a command request as an object.

Ö State:-Alter an object's behavior when its state changes.

Ö Strategy:-Encapsulates an algorithm inside a class.

Ö Observer:-A way of notifying change to a number of classes.

Ö Template Method:-Defer the exact steps of an algorithm to a subclass.

Ö Visitor:-Defines a new operation to a class without change.

Note :- Just remember Music....... 2 MICS On TV (MMIICCSSOTV).

Note:- Little source code is provided for the architecture section, as most of the things can be made clear from good UML diagrams.

(A)What's the difference between the Factory and Abstract Factory patterns?

Note: - This is quite a confusing architect question, especially in the design patterns section. The interviewer can take you for a nice ride…. So get the differences into your heart.

First read the definition provided in the first question about both these patterns. The common thing they have is that they belong to creational patterns. In short they hide the complexity of creating objects.

The main difference between Factory and Abstract Factory is that the factory method uses inheritance to decide which object has to be instantiated, while abstract factory uses delegation to decide the instantiation of the object. We can say Abstract Factory uses the factory method to complete the architecture. Abstract Factory is one level higher in abstraction over Factory.







The below two class diagrams will provide an overview of what the actual difference is. The first figure shows a sample implementation of the Factory pattern. In this figure there are two basic sections:-

Ö The actual product section, i.e. class “Product”; it inherits from an abstract class “AbstractProduct”.

Ö The creational aspect section: that's the “ConcreteCreator” class, which inherits from class “Creator”.

Ö Now there are some rules the client who needs the “Product” object will have to follow. He will never refer directly to the actual “Product” object; he will refer to the “Product” object using “AbstractProduct”.

Ö Second, the client will never use the “New” keyword to create the “Product” object, but will use the “Creator” class, which in turn will use the “ConcreteCreator” class to create the actual “Product” object.





Figure: - 8.1 Class diagram of a Factory pattern

So what are the benefits of this architecture? All creational and initializing aspects are now detached from the actual client. As your creational aspect is now handled in “ConcreteCreator” and the client has a reference only to “Creator”, any implementation change in “CreateProduct” will not affect the client code. In short, the creational aspect of the object is now completely encapsulated from the client's logic.

Now let's look at the second class diagram, which provides an overview of what the “Abstract Factory” pattern actually is. It creates objects for families of classes; in short, it describes a collection of factory methods from various different families and groups related factory methods. As an example, in this diagram the class “Creator” is implemented using the “Abstract Factory” pattern. It now creates objects from multiple families rather than one product.

Note :- Just stick to this definition: Abstract Factory classifies factory methods or groups logically related factory methods together.









Figure:- 8.2 Class Diagram of Abstract Factory







(I)What’s MVC pattern?

Twist: - How can you implement MVC pattern in ASP.NET?

The main purpose of using the MVC pattern is to decouple the GUI from the data. It also gives the ability to provide multiple views for the same data. The MVC pattern separates objects into three important sections:-

Ö Model: - This section is specifically for maintaining data. It is where your business logic, querying of the database, database connections etc. are actually implemented.

Ö Views: - Displaying all or some portion of the data, or probably different views of the data. The view is responsible for look and feel, sorting, formatting etc.

Ö Controller: - This is the event handling section, which affects either the model or the view. The controller responds to mouse or keyboard input to command the model and view to change. Controllers are associated with views. User interaction triggers events which call methods on the model to update its state; the model in turn notifies the registered views to refresh their display.

Ok, now this was all in theory. Let's look at how we can actually implement the MVC pattern in ASP.NET. During the interview, along with the theory, they will be checking whether you have really implemented MVC or it's just talk. Following are the various sections of ASP.NET which map to MVC sections:-

Ö Model: - This section is represented by the DataView, Dataset, typed Dataset, business components, business entity models etc. This section can then be tied up to either a windows application or a web UI.

Ö View: - ASPX, ASCX, or a windows application UI like a data grid etc. form the view part.

Ö Controller: - In ASP.NET the code-behind is the controller, as the events are handled by that part. The controller communicates both with the model as well as the view.

I hope I was able to map your imagination of ASP.NET to the diagram given below.




















Figure: - 8.3 Data flow in the MVC architecture.


(A)How can we implement singleton pattern in .NET?

The singleton pattern mainly focuses on having one and only one instance of an object running. For example, a Windows directory service has multiple entries, but you can only have a single instance of it throughout the network.

Note:- Many developers would jump to the conclusion that using the “STATIC” keyword we can have a single instance of an object. But that's not the real case; there is something more that has to be done. Note that we can not define a class as STATIC, so that alone will not serve our purpose of implementing the singleton pattern.

Following are the three steps needed to implement singleton pattern in .NET:-









Ö First create your class with static members.

Public Class ClsStaticClass
    Private Shared objCustomer As ClsCustomer
End Class

This ensures that there is actually only one Customer object throughout the project.

Ö Second, define a private constructor for your class.

Note: - Defining a private constructor for a class does not allow clients to create objects directly.

Ö Finally provide a static method to get access to your singleton object.
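Putting the three steps together, a minimal sketch (the class name is illustrative):

Public Class ClsSingleton
    ' step 1: a Shared (static) member holds the one and only instance
    Private Shared objInstance As ClsSingleton
    ' step 2: a private constructor stops clients from using New directly
    Private Sub New()
    End Sub
    ' step 3: a Shared method hands out the single instance
    Public Shared Function GetInstance() As ClsSingleton
        If objInstance Is Nothing Then
            objInstance = New ClsSingleton
        End If
        Return objInstance
    End Function
End Class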

(A)How do you implement prototype pattern in .NET?

Twist: - How do we implement cloning in .NET ? What is shallow copy and deep copy ?

Cloning is achieved by using ICloneable of the System namespace. It has a “Clone” method which returns a copy of the object. Clone gives a shallow copy, not a deep copy: the reference-type members of the copy still point to the same underlying objects, so changes made through the cloned object's references show up in the main object as well. So how is a deep copy achieved? By using serialization: you first serialize the object, then deserialize it back into a complete new copy. Any changes to this new copy do not reflect on the original copy of the object; this is called a deep copy.
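A minimal sketch of a deep copy through serialization (ClsCustomer here stands for any class marked <Serializable()>; the helper name is illustrative):

Imports System.IO
Imports System.Runtime.Serialization.Formatters.Binary

Public Function DeepCopy(ByVal objOriginal As ClsCustomer) As ClsCustomer
    Dim objFormatter As New BinaryFormatter
    Dim objStream As New MemoryStream
    objFormatter.Serialize(objStream, objOriginal) ' write the whole object graph
    objStream.Position = 0
    Dim objCopy As ClsCustomer = CType(objFormatter.Deserialize(objStream), ClsCustomer)
    objStream.Close()
    Return objCopy ' a brand new copy, independent of the original
End Function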

(I)What are the situations you will use a Web Service and Remoting in projects?
Well, “Web services” use “remoting” concepts internally. But the major difference between a “web service” and “remoting” is that a “web service” can be consumed by clients who are not on the .NET platform, while remoting needs the client to be .NET compliant. Regarding the speed issue, “Remoting” is faster than “Web Services”. So when choosing between “Web services” and “Remoting” on the architecture side, keep the cross-platform issue and the speed issue in mind.

(A)Can you give a practical implementation of FAÇADE patterns?












The Façade pattern sits on top of a lot of subsystems and makes access to the interfaces of these subsystems easy. The basic purpose of a Façade is to make the interfacing between many modules and classes manageable.









Figure: - 8.4 Façade in action

Above is a simple live application of a Façade class. In this we have four subsystems :-

Ö Customer

Ö Product







Ö Payment

Ö Invoicing

All four modules were built completely independently at the initial stage. The main interaction between all these subsystems is a customer placing an order. This functionality can be attained by using all these subsystems, which involves complex interaction between them.

That's where FAÇADE comes into action. We have built a FAÇADE called “FACADEORDER” which sits on top of all these subsystems and fulfills our functionality.

(I) How can we implement observer pattern in .NET?

The observer pattern can be implemented using “Delegates” and “Events”. I leave it to the readers to implement a full sample for the observer pattern; a small sketch is given below.
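A minimal sketch using an event and a handler (the class names are illustrative):

Public Class ClsSubject
    Public Event Changed(ByVal strNewValue As String) ' observers register for this event
    Public Sub ChangeState(ByVal strValue As String)
        RaiseEvent Changed(strValue) ' notify every registered observer
    End Sub
End Class

Public Class ClsObserver
    Public Sub Subscribe(ByVal objSubject As ClsSubject)
        AddHandler objSubject.Changed, AddressOf OnChanged
    End Sub
    Private Sub OnChanged(ByVal strNewValue As String)
        Console.WriteLine("Subject changed to " & strNewValue)
    End Sub
End Class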

(B)What is three tier architecture?

The three tier software architecture emerged in the 1990s to overcome the limitations of the two tier architecture.

There are three layers when we talk about three tier architecture:-

User Interface (Client) :- This is mostly the windows user interface or the Web interface. But this has only the UI part.

Mid layer: - Middle tier provides process management where business logic and rules are executed and can accommodate hundreds of users (as compared to only 100 users with the two tier architecture) by providing functions such as queuing, application execution, and database staging.

Data Access Layer: - This is also called by the famous acronym "DAL". It mainly has the SQL statements which do the database operation part of the job.

The three tier architecture is used when an effective distributed client/server design is needed that provides (when compared to the two tier) increased performance, flexibility, maintainability, reusability, and scalability, while hiding the complexity of distributed processing from the user.











(I)Have you ever worked with Microsoft Application Blocks, if yes then which?
Application Blocks are C# and VB.NET classes distributed as Visual Studio projects that can be downloaded from Microsoft's Web site and used in any .NET application, including ASP.NET Web applications. They are useful and powerful tools that can make applications more maintainable, scalable and efficient.

The second part of the question, which of the application blocks you have used, depends on what you have really implemented. But there are two famous application blocks which are making a buzz around the industry:-

Ö data access block

The Data Access Block provides static methods located in the SqlHelper class that encapsulates the most common data access tasks performed with Microsoft SQL server. If the term "static method" is new to you, it means that the class methods can be called without instantiating an instance of the class. For example, the method ExecuteReader () within the SqlHelper class can be called by simply using the statement SqlHelper.ExecuteReader () -- no object instantiation of the SqlHelper class is required.

Ö Exception management block.

The Exception Management Application Block provides a simple yet extensible framework for handling exceptions. With a single line of application code you can easily log exception information to the Event Log, or extend it by creating your own components that log exception details to other data sources or notify operators, without affecting your application code. The Exception Management Application Block can easily be used as a building block in your own .NET application.

Note: - It's beyond the scope of the book to look into the details of application blocks. Best is to go to www.microsoft.com and search for these application blocks. Try to compile one or two programs using their given classes and documentation.

(A)What is Service Oriented architecture?

“Services” are components which expose well defined interfaces, and these interfaces communicate through XML messages. Using SOA you can build a workflow which uses the interfaces of these components. SOA is typically useful when you are crossing heterogeneous technical boundaries, organizations, domains etc.

In .NET, SOA technically uses Web services to communicate with each service which is crossing boundaries. You can look at SOA as sitting on top of web services and providing a workflow.

SOA uses service components which operate in their own domain boundary. Let's note some points about a service :-

Ö They are independent components and operate in their own boundary and own technology.

Ö They have well defined interfaces which use XML and WSDL to describe themselves.

Ö Services have a URL where anyone can find them, and clients can bind to this URL to avail of the service.

Ö Services have a very loosely coupled architecture. In order to communicate with a service you only have to know its WSDL. Your client can then generate a proxy from the WSDL of the service.

























Figure: - 8.5 SOA basic architecture



The above figure describes a broader picture of what a service oriented architecture will look like. The basic fundamental of SOA is a web service. In the above diagram you can see there are two services available: one is the “Credit Card” service and the other is the “Address Check” web service. Both these services are provided by different companies. Now we want to build functionality which needs to validate a credit card and also check that the addresses are proper. In short, we will need the functionalities of both the “CreditCard” and “AddressCheck” services. Also note the “CreditCard” service has its own business layer and DAL components, which can be in a proprietary language. It's very much possible that the whole credit card service is made in .NET and the address check is a SAP or JAVA implementation. But because both systems provide their functionality using Web services, which is basically nothing but XML message communication, we have made a new service which sits like a FAÇADE on top of both web services and performs both functionalities in one common service. You will see I have made a third service which sits on top of both web services and consumes them. Also note that the UI parts of the systems have access to the business layer and web service of their own system, but the service which does both these checks has access only to the web services.

Note:- It's beyond the scope of this book to discuss SOA in detail. But just to keep you safe during the interview this book has tried to clear up some basics of SOA.

(I)What are different ways you can pass data between tiers?

There are many ways you can pass data between tiers :-

Ö Dataset, the most preferred one, as it maintains data in XML format.

Ö Datareader

Ö Custom classes.

Ö XML

(A)What is Windows DNA architecture?

Note :- If you have worked with classic ASP this question can come to you.

The Windows Distributed interNet Applications Architecture (DNA) is a Microsoft blueprint for robust, scalable, distributed business software. Windows DNA has evolved over time and was not preplanned. It gives the combined advantages of centralized mainframes, application servers, internet technologies and personal computers. Windows DNA is an evolution which started from mainframes (where all logic was centralized), through the FoxPro ages (where we talked in terms of two tier systems), VB6 / SQL SERVER (three tier, where we talked in terms of having one more tier, which was mainly COM, where the business logic resided), COM+ (looking in terms of transactions and fulfilling ACID rules) and finally the DNA.









Figure :- 8.6 Windows DNA sections

Above shown is the Windows DNA model, which is a blueprint Microsoft has proposed. So if the interviewer asks whether you have worked with Windows DNA, the answer is yes: you will see that you always use these sections in projects. Do not get confused by the terminology DNA.













(A)What is aspect oriented programming?

Note :- This is something which is catching on in the market, so the interviewer can ask it to see how much you are in touch with the market. This explanation can be quite long, but bear with me; it is worth it.

I will try to be as short as possible, as this book is not a reference book. Just to save you from the interviewer I will give a short description of aspect oriented programming in .NET.

First, let's try to define it, which can probably save you during the interview:

Aspect-oriented software development is a new technology for separation of concerns (SOC) in software development. The techniques of AOSD make it possible to modularize crosscutting aspects of a system.

Ok, that statement can save you in the first stage; now let's get down to what it actually is. Let's revisit how the software development cycle evolved.

Look back at the times of Cobol, when we used to break the modules into small functionalities and use reusability to its maximum.

Then came the time when we talked in terms of objects, where things were clearer as software was modeled in terms of real life examples. It worked fine, and till today it is the most accepted way of implementing and organizing a project. So why AOP ?

Aspect oriented programming does not oppose OOP but rather supports it and makes it more maintainable. So remove the notion from your head that AOP is a replacement for OOP. No, it's a brother of OOP, helping it to be better.

When we talk in terms of objects, an object is an entity which maps to a real world domain. An object has attributes which represent its state and also define its behavior. By the rules of object oriented programming, an object should be stand-alone and communicate with other objects using messages or defined interfaces.

One object should not communicate with another object directly, but rather through defined interfaces. Every object satisfies some “Concern” in relation to the system.

Twist: - What is Concern in AOP?

“A concern is a particular goal, concept, or area of interest”

There are mainly two types of concern from an object perspective:-

Ö Core / main concerns, which the object should satisfy and which are its own work.


I hope in my second edition I will come out with a separate chapter on AOP.







Ö System concerns, which are not related to business functionality but are software related concerns, for example audit trail, error handling, security etc.

Ok, let's try to understand this principle with an actual example.










Figure :- 8.7 Customer and Audit trail relationships

Above is a class diagram which shows the relationship between two classes, “ClsCustomer” and “ClsAuditTrail”. The “ClsCustomer” class does the inserting of new customers into the database and “ClsAuditTrail” does the auditing of what has changed in the customer class.

Now there are two concerns in this project :-

Ö Customer code should not exceed 10 characters in length (business level concern).
Ö All customer data which is updated should be audited (system level concern).

Here goes the class code. If you see the ClsCustomer implementation, in the Update method I have called the audit trail implementation. If you really look at it from an object oriented point of view, we are doing something in the customer class which is not supposed to be its implementation: audit trail logging. Thus we have also broken the rule of encapsulation. In short, the class not only handles its own work but also other work which is not its concern.

Ok, now let's define crosscutting, which is one of the important aspects of AOP.

Twist :- What is cross cutting in AOP ?

When one or many concerns span across modules it is called cross cutting. For example, in our audit trail example we will probably need an audit trail for the customer as well as the supplier. So the audit trail can span across other objects also; that is termed cross cutting.

Below are both the classes actually implemented as per class diagram 8.7. If you see the “Update” method of the customer class, it handles both of the concerns, that is, checking the customer code length and also maintaining the audit trail using the audit trail class.

Public Class ClsCustomer

    Private pstrCustcode As String
    Private pstrCustName As String

    Public Property Code() As String
        Get
            Return pstrCustcode
        End Get
        Set(ByVal Value As String)
            pstrCustcode = Value
        End Set
    End Property

    Public Property CustomerName() As String
        Get
            Return pstrCustName
        End Get
        Set(ByVal Value As String)
            pstrCustName = Value
        End Set
    End Property

    Public Function Update() As Boolean
        ' first / core concern
        If pstrCustcode.Length() > 10 Then
            Throw New Exception("Value can not be greater than 10")
        End If
        ' using the customer audit trail to do auditing
        ' second concern / system concern
        Dim pobjClsAuditTrail As New ClsAuditTrail
        With pobjClsAuditTrail
            .NewValue = "1001"
            .OldValue = "1003"
            .UserName = "shiv"
            .Update()
        End With
        ' then inserting the customer in database
    End Function

End Class

Public Class ClsAuditTrail

    Private pstrUserName As String
    Private pstrOldValue As String
    Private pstrNewValue As String
    Private pdblLogTime As Double

    Public Property UserName() As String
        Get
            Return pstrUserName
        End Get
        Set(ByVal Value As String)
            pstrUserName = Value
        End Set
    End Property

    Public Property OldValue() As String
        Get
            Return pstrOldValue
        End Get
        Set(ByVal Value As String)
            pstrOldValue = Value
        End Set
    End Property

    Public Property NewValue() As String
        Get
            Return pstrNewValue
        End Get
        Set(ByVal Value As String)
            pstrNewValue = Value
        End Set
    End Property

    Public Property LogTime() As Double
        Get
            Return pdblLogTime
        End Get
        Set(ByVal Value As Double)
            pdblLogTime = Value
        End Set
    End Property

    Public Sub Update()
        ' do the logging activity here
    End Sub

End Class

In short, the customer class is doing a lot of activity; there is a lot of tangling of code. So how do we overcome this problem? Simple: separate the system level concern (audit trail) from the core level concern (customer code check). At this moment in .NET this is achieved using attribute programming.

Here is the change to the customer class

Imports System.Reflection

Public Class ClsCustomer

    Private pstrCustcode As String
    Private pstrCustName As String

    Public Property Code() As String
        Get
            Return pstrCustcode
        End Get
        Set(ByVal Value As String)
            pstrCustcode = Value
        End Set
    End Property

    Public Property CustomerName() As String
        Get
            Return pstrCustName
        End Get
        Set(ByVal Value As String)
            pstrCustName = Value
        End Set
    End Property

    ' the attribute arguments below are illustrative; the original markup did not survive in print
    <ClsAuditTrail("shiv", "1003", "1001", 0)> _
    Public Function Update() As Boolean
        If pstrCustcode.Length() > 10 Then
            Throw New Exception("Value can not be greater than 10")
        End If
        ' the audit trail is now done by the attribute above
        ' then inserting the customer in database
    End Function

End Class








And here is the change to the audit trail class



Imports System.Reflection

' the AttributeUsage markup below is an assumption; the original markup did not survive in print
<AttributeUsage(AttributeTargets.Method)> _
Public Class ClsAuditTrail
    Inherits Attribute

    Private pstrUserName As String
    Private pstrOldValue As String
    Private pstrNewValue As String
    Private pdblLogTime As Double

    Public Property UserName() As String
        Get
            Return pstrUserName
        End Get
        Set(ByVal Value As String)
            pstrUserName = Value
        End Set
    End Property

    Public Property OldValue() As String
        Get
            Return pstrOldValue
        End Get
        Set(ByVal Value As String)
            pstrOldValue = Value
        End Set
    End Property

    Public Property NewValue() As String
        Get
            Return pstrNewValue
        End Get
        Set(ByVal Value As String)
            pstrNewValue = Value
        End Set
    End Property

    Public Property LogTime() As Double
        Get
            Return pdblLogTime
        End Get
        Set(ByVal Value As Double)
            pdblLogTime = Value
        End Set
    End Property

    Public Sub New(ByVal pstrUserName As String, _
                   ByVal pstrOldValue As String, _
                   ByVal pstrNewValue As String, _
                   ByVal plng As Long)
        ' store the values supplied through the attribute, then log them
        Me.pstrUserName = pstrUserName
        Me.pstrOldValue = pstrOldValue
        Me.pstrNewValue = pstrNewValue
        Me.pdblLogTime = plng
        Update()
    End Sub

    Public Sub Update()
        ' do the logging activity here
    End Sub

End Class



The fun part of the code is below. You can see that the audit trail is now separate and only the code check belongs to the customer class.

<ClsAuditTrail("shiv", "1003", "1001", 0)> _
Public Function Update() As Boolean
    If pstrCustcode.Length() > 10 Then
        Throw New Exception("Value can not be greater than 10")
    End If
    ' the audit trail is now handled by the attribute
    ' then inserting the customer in database
End Function

Note:- Architecture is everybody's favorite, and the wide combination of topics is difficult to cover in this book. Many of these things only come with live experience.