Simplifying SQL Server Management

An Internet.com Storage eBook

Contents

This content was adapted from Internet.com's Database Journal, DevX, and InternetNews.com Web sites. Contributors: Gregory A. Larsen, Susan Sales Harkins, and Richard Adhikari.

Connection Strategy for Multiple Database Environments (Gregory A. Larsen)

Setting Up a Linked Server for a Remote SQL Server Instance (Gregory A. Larsen)

Maintain Healthy Transaction Logs for Easy Data Recovery (Susan Sales Harkins)

Simplifying SQL Server Management, an Internet.com Storage eBook. © 2008, Jupitermedia Corp.


Connection Strategy for Multiple Database Environments By Gregory A. Larsen

As machines get more powerful and less expensive, people are more likely to use a single machine to host more than a single SQL Server database. Over time, a SQL Server machine might support more and more databases. But eventually you will need to replace your hardware, your database server will fail due to a hardware problem, or your multi-database machine might become saturated with activity from multiple applications, eventually causing performance of all applications to suffer.

What are you to do when one of these situations occurs? How can you minimize the work required to re-point your applications to a new database machine, or split your environment into multiple database machines for performance reasons? Let's look at one way to design your database connection strategy to simplify changing application connections so you can plug-and-play databases with less administrative overhead when the need arises.

How Applications Connect
Each application needs to identify the database server it will be connecting with to retrieve data. Applications do this by using a connection string. A typical connection string might look something like:

Server=SSEDB01; Initial Catalog=AdventureWorks; Integrated Security=SSPI;

In this example, the database server is identified with a machine name, in this case SSEDB01. A connection string doesn't have to use a machine name, though. It could be an IP address, an ODBC DSN name, a DNS alias name, etc. The name just needs to be something that can be resolved to an IP address, and name resolution can be done a number of different ways.

If your SQL Server machine is located within a domain, it can be registered with the domain, creating a DNS name. When a machine is registered with DNS, a client or application can connect to it using the machine's registered name, which is how the connection string above works. Even better, with DNS you can create a DNS alias, which is a logical name to represent your SQL Server machine. By using a DNS alias name in your connection string, DNS translates the name to an IP address behind the scenes


when a connection to the database server is made. This means you only need to remember a meaningful name for where to connect, instead of a cryptic numeric IP address or machine name. When you use a DNS alias name in the connection string, you can create a connection strategy that insulates applications from the physical location or machine name of the database server.

Using DNS to Identify the Location of the Application Database
When using DNS to identify the location of the application database, you can use the name of the domain machine in the connection string, but this method is somewhat inflexible. What happens when you want to change the name of the physical SQL Server machine? If you use the machine name, then you need to modify the application connection strings to reference the new machine name each time it changes. This might not be so bad when you only have a single application connecting to a database server. But if you have a lot of applications and databases on a single machine, then a lot of connection strings will need to be changed anytime you rename your server. Therefore, using the machine name in your connection string is not flexible when there are environment changes over time.

A better approach is to creatively use DNS alias names to resolve where an application database lives. So instead of using the machine name to identify the location of the database machine for all applications, you should consider creating a meaningful, unique DNS alias name that resolves to the IP address of your database server. In my example above, instead of coding SSEDB01, which is a machine name, I would be better off using a DNS name like SQL2005PROD. In this case, the name SQL2005PROD would be defined in DNS to have the same IP address as the physical machine SSEDB01. Using a DNS alias puts some meaning behind the name: by using SQL2005PROD you can tell this name is associated with the production SQL Server 2005 machine. So by defining this DNS alias for my production SQL Server 2005 machine, my connection string above would now look like this:

Server=SQL2005PROD; Initial Catalog=AdventureWorks; Integrated Security=SSPI;


This connection string and the one above will resolve to the same IP address. So why would using a DNS name in the connection string be a good idea? One reason would be to have a descriptive name, but that's not the only reason.

Say your database server contains many different databases and supports 50 different applications. Now say SQL Server machine SSEDB01 has a hardware error of some kind. Moreover, you have a backup machine, SSEDB02, on which you can quickly restore all the databases from SSEDB01 to support those 50 different applications, because you have been shipping your SSEDB01 backups to this machine for safekeeping. Plus, you know you can restore all of the SSEDB01 databases on SSEDB02 quicker than it would take to resolve the hardware problem with SSEDB01. If you coded all of the connection strings for those 50 applications to the machine name SSEDB01, then you would have to modify all of the connection strings to use SSEDB02 in order to have them point to your new fallback server for your recovery to be complete. Modifying 50+ connection strings might take a fair amount of time and be error prone.

If instead you used a logical name like SQL2005PROD as the connection name in each of those 50+ connection strings, then you would only need to make one change to re-point all of your applications to the new fallback server, SSEDB02. That one change would be in DNS: change SQL2005PROD to point to the IP address of SSEDB02 instead of SSEDB01. Once you make this change, each application would automatically connect to SSEDB02 instead of SSEDB01, without changing any of those 50+ connection strings. By making this small change to the connection design, using a logical name for a SQL Server machine instead of a physical server name or IP address, you greatly reduce both the work required to re-point all applications at a new SQL Server box and the potential for problems while doing so.
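In a failover like this, it can be worth confirming which physical machine the alias actually reached after the DNS change. A small sketch using the built-in SERVERPROPERTY function (SQL2005PROD, SSEDB01, and SSEDB02 are the example names from the text):

```sql
-- Connect with Server=SQL2005PROD, then check which physical machine is
-- behind the alias; after re-pointing DNS this should report SSEDB02
-- rather than SSEDB01.
SELECT SERVERPROPERTY('MachineName') AS PhysicalMachine,
       SERVERPROPERTY('ServerName')  AS InstanceName;
```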
How to Use DNS to Help with Capacity Management
So how can using a DNS alias name in your connection string help with capacity management? Say your environment has a number of different production SQL Server machines. Each machine supports many applications. Let's also assume that database growth for some applications is fairly linear, but a fair number of


application databases don't have a predictable growth rate. Those databases grow at an unpredictable rate; sometimes they don't grow at all, and other times they increase or decrease in size exponentially. Because of this volatile growth rate for some databases, some servers have very little space, other servers run out of space frequently, while still others have too much free space. So how can DNS help out with managing these kinds of disk space capacity issues?

It is not always easy to add more disk space to a database server when the databases have ballooned up to the capacity of the disk drives of a server. It might take months to acquire the additional hardware and schedule a timeframe to extend the disk space capacity of a server. Therefore, if you have an environment with disk space capacity issues, you need a way to plug and play databases to manage this kind of capacity problem. By "plug and play" I mean you need a method where you can copy databases from one server to another quickly and change the address where applications get their data with minimal effort. By using DNS, you can re-point your applications to the new location for databases quickly. Of course, you need to design your application connection strategy to handle this kind of database movement scenario. In the prior example of how to use DNS, I talked about having a single DNS name that is logically associated with the physical SQL Server machine's IP address. This strategy doesn't work if you only want to move a single database from one server to another because of a disk space capacity issue. So instead of having a single DNS name for all databases on a server, you need to develop a logical DNS naming strategy that has a unique name for each application.

Say you have a database server that contains the databases for the Order, Accounting, Personnel, and Billing systems. For these four different applications, there are four different databases: Order, REV, HR, and Billing. In this situation you would define four different DNS entries, one for each of your production applications, where the DNS names would be something like ORDER, REV, HR, and BILLING. All of the connection strings for each application would then use the appropriate DNS entry from the above list to make sure the application was pointing to the current physical machine where the database lived.

When you need to move one of the databases off to another server because of capacity issues, all you would need to do is move the database to the new server and then change the DNS entry to point to the new database server. One change to DNS, and all the connections for that application are re-pointed to the new location.

You can use the logical DNS naming methodology to handle other situations as well. Say you have a development, quality assurance, and production environment for each application. In this case, you can append an environment designation to your DNS names. So, for your BILLING application you might have BILLINGDV for development, BILLINGQA for quality assurance, and BILLINGPR for production. Or, say you have high CPU usage on one of your database servers; by using DNS you could quickly move one or more databases off the heavily hit server to underutilized servers and then re-point the DNS entries for the moved databases to the new servers. Doing this provides a low-tech solution to load balance CPU usage across your database servers.

Management by Design
Each environment has its own unique requirements. If your environment has multiple applications with many databases on a single server, then designing your connection strategy to minimize issues that might come up over time makes sense. Next time you are bringing up a new machine and migrating your applications over to it, consider whether or not using logical DNS names might solve some issues associated with managing your environment.


Setting Up a Linked Server for a Remote SQL Server Instance By Gregory A. Larsen

Sometimes an application may need data stored in another database on a different instance of SQL Server. That different instance might be on the same physical machine or might be on another machine altogether. So what do you do in this situation? Your options depend on your data requirements, like how up to date the data needs to be. Also, the network/computer topology might be a factor in what you can and cannot do. Discussing all the possible options would make this article quite lengthy, so let me narrow the scope a little. For the purpose of this article, I will discuss how a linked server can be used to seamlessly provide an application access to data on a different instance of SQL Server. Note that linked servers can also be used to access non-SQL Server data sources, but that is outside the scope of this article. I will only be discussing options and situations related to using linked servers to access information stored in a SQL Server database.

Basic Linked Server Architecture
Before I get into how to set up a linked server, let me discuss the basic architecture of a linked server. A linked server is a mechanism that allows a query to be submitted on one server and then have all or part of the query redirected and processed on another SQL Server instance, and eventually have the result set sent back to the original server to be returned to the client. To better show how this works, look at the following diagram (Figure 1):

Figure 1

In this diagram, a "Client" can connect to either an "Application Server" or directly to SQL Server to submit a query. If the "Client" or an application running on the "Application Server" submits a query to "SERVER1" that needs to retrieve data from a database housed on "SERVER2", then this kind of query is known as a distributed query. Defining a linked server definition for "SERVER2" on "SERVER1" allows a client or an application to submit these kinds of distributed queries. A distributed query that runs against "SERVER2" from a linked server defined on "SERVER1" would look something like this:

SELECT name "DBs on SERVER2" FROM SERVER2.master.sys.databases

Here I identify the object I want to reference on my linked server by using a four-part naming convention. In my example, I wanted to return the names of all the


databases on "SERVER2." Therefore, I used a four-part name consisting of <server>.<database>.<schema>.<object>, or in my case "SERVER2.master.sys.databases." "SERVER2" is the name of the linked server, which is defined on "SERVER1."

How to Define a Linked Server
To create a linked server, or to list the linked servers already defined, look under the "Server Objects" folder within SQL Server Management Studio (SSMS). You can also use the "sp_addlinkedserver" system stored procedure to add a linked server, or "sp_helpserver" to list linked servers.

To create linked server "SERVER2" from my example above in SSMS, I would first expand the "Server Objects" folder and then right-click on the "Linked Servers" item. This would display the following window (Figure 2):

Figure 2

On this window, you name your new linked server and identify the type of data source your linked server will be. Remember, linked servers can be defined for a number of different kinds of data sources. For the purpose of this article, I will be defining "SERVER2," which is a SQL Server data source. In order to do that I will need to identify the name of the linked server and then use the "Security" and "Server Options" pages to define how I would like to authenticate to my linked server and what options will be associated with it. To begin defining my linked server, I enter "SERVER2" in the "Name" field and then click on the "SQL Server" radio button to identify that my new linked server is a SQL Server data source. When I do that, my window looks like this (Figure 3):

Figure 3

To define how clients will authenticate to "SERVER2", I would click on the "Security" item in the upper left-hand corner of this page, under the "Select a page" section. When I click on the "Security" item, the following page is displayed (Figure 4):

Figure 4


Here you have a number of different ways to identify how your clients will be authenticated to the linked server. Let me go through each one of these options.

At the top of this screen, in the right-hand pane, you can define login mappings. Login mapping is a way to associate a login on the local server with a login on the remote server. There are two different ways a local login can be mapped to a remote login. The first method is to impersonate, and the second is to associate the local login with a remote login and password. The impersonate option takes the local Windows login and uses it to connect to the linked server; it does this by impersonating the local login. In order for the local server to impersonate the login, delegation must be set up between the local server and the linked server. A discussion of delegation is outside the scope of this article. To map a local login, you associate it with a remote login and password. The remote login needs to be a SQL Server authenticated user on the remote server. The following screenshot shows how I have mapped some local logins to remote logins on SERVER2 (Figure 5):

Figure 5

Here I have mapped three different local logins to two different remote logins. The first login mapping is for "DJ\GREG", which is a Windows domain authenticated user that is defined on the local server. I've identified the mapping so "DJ\GREG" is to be impersonated when connecting to "SERVER2." This means anytime


"DJ\GREG" is logged onto SERVER1 and issues a linked server query to "SERVER2" those request will connect and run the query on "SERVER2" in the security context of "DJ\GREG." The second mapping is for "WEB_USER" which is a SQL Server authenticated user. I've mapped "WEB_USER" to the same remote login. In doing so, I had to provide the password for login "WEB_USER." This password must be the password for the "WEB_USER" on linked server, in my case that would be "SERVER2." The third login mapping demonstrates how you can map multiple local logins to a single remote login. In my example I mapped the Windows domain authenticated login "DJ\LINDA" to the remote login "WEB_USER." Using mapped logins is a way to identify only those users from the local machine that can connect to the linked server. In addition to mapping logins, you can also identify how logins that are not defined in the mappings would connect to the linked server. There are four different options that can be used. These four options are the different radio buttons in Figure 5. The first option "Not be made" is fairly obvious. When you select this option, any users not identified in the login mappings will not be able to connect to the linked server. The second method "Be made without using a security context" is to be used for connecting to data sources that do not require any authentication, like a text file. If you select this option to connect to a linked server then this has the same effect as selecting the "Not be made" option. The third option "Be made using Login's current security context" means you want the linked server request to use the Windows account of the login to connect to the linked server. In order for this option to work, your SQL Server machine will need to be able to impersonate a local account. This option is a simple way to identify that all Windows accounts can use a linked server, without mapping each login. However, remember this requires delegation to be set up. 
The last option, "Be made with this security context," is a way to say that everyone using this linked server will connect with a single remote login and password. The remote login needs to be a SQL Server authenticated login.

When setting up a linked server, the last thing to consider is defining the "Server Options." This can be done by clicking on "Server Options" under the "Select a page" menu. When I do that, the following


screen will be displayed (Figure 6):

Figure 6

On this screen, there are a number of different options. The first option, "Collation Compatible," is used to identify whether or not the linked server has the same collation as the local server. You should only set this to "True" if you know the local collation is the same as the linked server's. The next option, "Data Access," is used to control whether you want to allow data to be accessed on the linked server. When this option is set to "True", the linked server can be used to access data on the remote SQL Server instance. When this option is set to "False", access to the remote server will be denied. This option is a useful way of disabling a linked server temporarily. The next option, "Rpc," is used to allow remote procedure calls "from" the linked server, whereas the option after that, "Rpc Out," is used to allow remote procedure calls "to" the linked server. The "Use Remote Collation" option, when set to "True," means that the collation setting of remote columns will be used; when this option is set to "False", the collation settings of the local server will be used. The "Collation Name" option specifies the collation setting of the linked server; when specifying a collation name, it must be a collation that SQL Server supports. The "Connection Timeout" option is used to specify the maximum length of time the local server should wait to obtain a connection to the linked server SQL Server instance. If "0" (zero) is specified for this option, then the server option "remote login timeout" is used, which defaults to 20 seconds. The "Query Timeout" option is used to specify the length of time a linked server process will be allowed to run before it times out. When this option is set to "0" (zero), the server option "remote query timeout" is used, which defaults to 600 seconds (10 minutes).

On my SERVER2 linked server, the only option I need to change is "Rpc Out." I need to change this so I can run stored procedures that reside on SERVER2. Therefore, I would change the "Rpc Out" option to true, like so (Figure 7):

Figure 7

Once you have specified a linked server, the security associated with the new linked server, and the server options, you are ready to save your new linked server definition. This is done by clicking on the "OK" button at the bottom of the "New Linked Server" window.
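The GUI steps above can also be scripted. Here is a minimal sketch using the documented system stored procedures, with the article's example server name and logins; the password is obviously a placeholder:

```sql
-- 1. Create the linked server. With @srvproduct = N'SQL Server', the
--    @server value must be the actual network name of the remote instance.
EXEC sp_addlinkedserver @server = N'SERVER2', @srvproduct = N'SQL Server';

-- 2. Map local logins to remote logins (the Figure 5 equivalents).
EXEC sp_addlinkedsrvlogin @rmtsrvname = N'SERVER2',
     @useself = N'TRUE',  @locallogin = N'DJ\GREG';       -- impersonate
EXEC sp_addlinkedsrvlogin @rmtsrvname = N'SERVER2',
     @useself = N'FALSE', @locallogin = N'WEB_USER',
     @rmtuser = N'WEB_USER', @rmtpassword = N'********';  -- mapped login

-- 3. Allow remote procedure calls to SERVER2 ("Rpc Out" = True).
EXEC sp_serveroption N'SERVER2', 'rpc out', 'true';
```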

TSQL Examples for Using Linked Servers
Above I defined a linked server named "SERVER2." As stated earlier, in order to reference objects on "SERVER2" I need to use a four-part naming convention. Below are some examples of how to reference objects on SERVER2. Here is how I would retrieve information from the "Product" table in the "AdventureWorks" database stored on my linked server:

SELECT * FROM SERVER2.AdventureWorks.Production.Product

All you have to do here is put the linked server name, followed by a period, before the fully qualified table name. If you wanted to execute a stored procedure on a linked server, you would do something like the following:

EXECUTE SERVER2.AdventureWorks.dbo.uspGetBillofMaterials 718, '2000-06-26'

Here I have executed the uspGetBillofMaterials stored procedure on SERVER2.

Conclusion
Linked servers allow you to submit a TSQL statement on one SQL Server instance that retrieves data from a different SQL Server instance. In fact, linked servers can be used to join data from multiple SQL Server instances in a single TSQL statement. When you have databases on multiple SQL Server instances, you might find it useful to use linked servers in your application to retrieve data from more than one instance. By using a linked server, your application only needs to connect to one SQL Server instance to retrieve data from multiple SQL Server instances. On that single SQL Server instance, you would define linked servers so your application could retrieve data from the databases that reside on a different SQL Server instance. Next time you are considering how to retrieve data from multiple instances of SQL Server with a single connection or a single TSQL statement, consider looking into using a linked server.
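Two more usage sketches, assuming the same SERVER2 linked server and the AdventureWorks example objects used above (the join scenario is the one the conclusion mentions):

```sql
-- Join a local table to the same table on the linked server in one statement:
SELECT l.ProductID, r.Name
FROM AdventureWorks.Production.Product AS l
JOIN SERVER2.AdventureWorks.Production.Product AS r
    ON r.ProductID = l.ProductID;

-- Pass-through alternative: OPENQUERY runs the quoted query entirely on
-- SERVER2, so any filtering happens on the remote instance:
SELECT * FROM OPENQUERY(SERVER2,
    'SELECT Name, ProductNumber FROM AdventureWorks.Production.Product');
```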


Maintain Healthy Transaction Logs for Easy Data Recovery By Susan Sales Harkins

Data is your business, and maintaining a healthy backup and recovery plan is vital to protecting your data. SQL Server's transaction logs, which bridge the most recent changes to the last backup, should be a part of that backup and recovery plan. Without a healthy, well-maintained transaction log, you can recover old data, but your users will still have to reenter all of their changes since the last backup. Fail to maintain transaction logs and you may fail to keep your job. Fortunately, SQL Server's transaction logs are easy to maintain.

How SQL Server Transaction Logs Work
Most client/server databases offer a transaction log, which is simply a separate file where the server tracks operations in the following sequence:

1. The log notes that the server expects a change.
2. The log notes that the server made a change.
3. The log notes that the server committed the change to the data file.

When users change data, SQL Server doesn't write that change directly to the data. Rather, SQL Server locates the appropriate data and then loads it into a special area of RAM called the data cache. Changes are made in RAM. Then, SQL Server copies the changes waiting in RAM to the transaction log. Only then does SQL Server write changes to the actual data file. This is called a write-ahead log because SQL Server writes changes to the log before it writes changes to the actual data file. This approach is quite a bit faster than writing directly to the data file.

Perhaps more important than performance is the transaction log's role in data recovery. Thanks to the transaction log, you can recover changes right up to the error from which you're recovering. During the recovery process, SQL Server scans the log for changes that weren't committed. That way, the database can finish what it started. The log stores changes in three parts:

• Backed-up: This section contains changes that were committed the last time you backed up the database.

• Inactive: This section contains committed changes



that haven't been backed-up yet.

• Active: This section contains committed and uncommitted changes (depending on their sequence and relation to other changes).

SQL Server identifies each event with a log sequence number (LSN), as follows:

101  Begin transaction 1
102  Update transaction 1
103  Begin transaction 2
104  Update transaction 2
105  Commit transaction 1
106  Checkpoint
107  Commit transaction 2

]

Microsoft Bundling for BI

M

icrosoft announced new Business Intelligence technologies in October designed to help knowledge workers use business intelligence software with more smarts and analytical tools. It comes in a bundle that works with mainstay products such as SQL Server, Office applications Excel and Word, and PerformancePoint Server 2007.

When SQL Server begins a backup, it records the current LSN. For instance, from the checkpoint at LSN 106, SQL Server searches for the oldest open transaction. In this case, that's 103 because that transaction is uncommitted at the checkpoint. Therefore, transactions 103 and higher represent transactions that occurred during the actual backup process. When the backup is complete, SQL Server backs up the transactions from 103 to the most current transaction.

The BI tools are described in projects that include "Kilimanjaro," a combination of two projects to add business intelligence capabilities to Microsoft SQL Server 2008. Self-service analysis capabilities are also in "Project Gemini," which adds managed self-service analysis and reporting capabilities, and a project code-named "Madison" will add advanced data warehousing functionality, Microsoft said.

Avoid a Full Transaction Log at All Costs

"It's all about how to get more users empowered as they try to make relevant and faster business decisions day to day," Andy Kamlet, director of marketing for Microsoft Business Intelligence product group, told InternetNews.com. "We want to enable individual, team and corporate business intelligence, all aligned and with accountability and manageability."

As great as full transaction logs are, they'll work for you only if you maintain them properly. A full transaction log file grinds production to a halt. SQL Server will simply refuse to write changes, leaving your users unable to work. You'll probably suffer this error only once, because once is enough.

"Kilimanjaro will let users develop reports using Word and Excel as the front end to Report Builder, which we first released in SQL Server 2005," Kamet said. Kiliminjaro's BI capabilities are a continuation of the capabilities announced last year for Katmai, which hit the market as SQL Server 2008.

When it does happen, you must truncate the log by hand using BACKUP LOG as follows: BACKUP LOG databasename WITH TRUNCATE_ONLY Executing this statement forces SQL Server to dump the inactive area of the log, which gets your database running and lets your users get back to work. However, truncating the log has a downside (you just knew there would be): you lose the history necessary to restore the database if it crashes. To protect the data, run a BACKUP DATABASE statement on the 11

Microsoft is claiming that customers' total cost of ownership of SQL Server-based BI solutions is lower than the competition, of which there is plenty. After all, the Business Intelligence software sector has been a hot space for a few years now.

Microsoft's biggest rivals are Oracle, which spent $3.3 billion on BI leader Hyperion Solutions in 2007. It also faces SAP, which paid $6.7 billion to acquire Business Objects last year, which was followed by IBM's purchase of BI provider Cognos for $5 billion.

Mark Smith, CEO and executive vice president of analyst firm Ventana Research, said Microsoft is still lagging in the BI sector. In many ways, the company

continued

Simplifying SQL Server Management, an Internet.com Storage eBook. © 2008, Jupitermedia Corp.

[

Simplifying SQL Server Management

entire database immediately following the BACKUP LOG statement. Your users can continue to work, and although the unexpected backup might be a bit of a pain, it's necessary. The key is to avoid a quickly filling transaction log altogether. Fortunately, you can easily do that by heeding the following administrative guidelines: • Avoid large transactions. A large transaction can be adding or editing several million records with one INSERT or UPDATE. • Avoid long-running BEGIN TRANSACTION blocks. Everything between BEGIN TRANSACTION and COMMIT remains active. • Avoid using the KILL statement or canceling a transaction from Management Studio. Either action will render a statement active--forever. (An occasional KILL is sometimes necessary and harmless; just don't make a habit of it.)

Shrink the Log

Truncating the log frees up space inside the log for new transactions, but it doesn't reclaim disk space. As a matter of habit, check the log's size. If it's truly large, shrink it after truncating it as follows:

DBCC SHRINKFILE (databasename_log, targetsize)

Targetsize represents, as an integer, the size that you want the file to be in megabytes. If you omit this value, SQL Server reduces it to the default file size. In addition, if the log is already larger than targetsize, SQL Server shrinks the file to the size needed to store the current records. Although this command will free up some space, it'll also play havoc with file fragmentation at the disk level, so use it infrequently. Perform this action manually only when necessary. Or, if you're specific about conditions, you can execute this statement via an alert script.
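As a sketch, the check-then-shrink routine might look like this; the logical file name and the 100 MB target are placeholder assumptions.

```sql
-- Report how full each database's transaction log is.
DBCC SQLPERF (LOGSPACE);

-- Confirm the log's logical file name before shrinking.
USE AdventureWorks;
SELECT name FROM sys.database_files WHERE type_desc = 'LOG';

-- Shrink the log file to roughly 100 MB (target size is in megabytes).
DBCC SHRINKFILE (AdventureWorks_Log, 100);
```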

Back It Up

SQL Server offers simple, full, and bulk-logged recovery models. For the most part, you should choose full, which allows you to back up both the database and


is playing catch-up to many of the technology offerings already out there, he said. Cognos and Hyperion are leveraging Web 2.0 technologies, including mashups. Meanwhile, SAP has been integrating business intelligence into all its offerings.

Microsoft also has to grapple with the sales cycle, Smith said. "The secret to success in software sales to very large organizations is direct relationships," he explained, adding that Microsoft does most of its business through its channels.

Also, Microsoft's BI capabilities will not be fully available in the market for some time yet, and that will further hinder its sales, Smith pointed out. The advanced data warehousing functionality of Madison is slated for release in the first half of 2010, but customers will get early access to it through community technology previews, which let customers download and play with the software before it's in production, Microsoft's Kamet said.

In the meantime, Microsoft has lined up a slew of alliances to work with on BI implementations, including appliance vendors such as Dell, EMC, Hewlett-Packard, Unisys, and Bull in Europe. "Customers will get preconfigured deployments of SQL Server out of the box so they can develop solutions at once," Microsoft's Kamet said. "Customers want to move along the lines of using appliances" as part of their business intelligence strategy.

the transaction log. You can back up a transaction log quickly and frequently; every few minutes isn't too often if data is critical. If the worst happens, back up the current transaction log first. Then, restore the last full database backup, and all subsequent transaction log backups. For instance, suppose you adhere to the following backup schedule and a failure occurs at 9:00 PM:

8:00 AM: Back up database
10:00 AM: Back up transaction log
12:00 PM: Back up database
2:00 PM: Back up transaction log
4:00 PM: Back up transaction log
6:00 PM: Back up database
8:00 PM: Back up transaction log
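A schedule like the one above boils down to two statements, typically run from scheduled jobs; the database name and file paths here are placeholder assumptions.

```sql
-- Full database backup (the 8:00 AM, 12:00 PM, and 6:00 PM jobs).
BACKUP DATABASE AdventureWorks
TO DISK = 'D:\Backups\AdventureWorks_full.bak'
WITH INIT;

-- Transaction log backup (the 10:00 AM, 2:00 PM, 4:00 PM, and 8:00 PM jobs).
-- NOINIT appends, keeping every log backup in the chain on hand.
BACKUP LOG AdventureWorks
TO DISK = 'D:\Backups\AdventureWorks_log.trn'
WITH NOINIT;
```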



First, you'd back up the active transaction log (the tail of the log, covering everything since the 8:00 PM backup). Then, you'd restore the database using the last database backup from 6:00 PM. Finally, you'd apply the 8:00 PM transaction log backup and the tail-log backup. (Differential backups are a bit more complex.)

After backing up a transaction log, SQL Server truncates the log's inactive section and reuses it to store new transactions. That way, the log doesn't grow uncontrollably large. Remember, SQL Server doesn't use the inactive items during recovery because those transactions are already complete.

If possible, don't store a database and its backup and transaction logs on the same server. Store these files on different physical disks, ideally located in different buildings.
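That recovery sequence might be sketched as follows, again with placeholder names and paths:

```sql
-- 1. Back up the tail of the log before touching the damaged database.
BACKUP LOG AdventureWorks
TO DISK = 'D:\Backups\AdventureWorks_tail.trn'
WITH NO_TRUNCATE;

-- 2. Restore the 6:00 PM full backup, leaving the database unrecovered.
RESTORE DATABASE AdventureWorks
FROM DISK = 'D:\Backups\AdventureWorks_full.bak'
WITH NORECOVERY;

-- 3. Apply the 8:00 PM log backup, still without recovering.
RESTORE LOG AdventureWorks
FROM DISK = 'D:\Backups\AdventureWorks_log.trn'
WITH NORECOVERY;

-- 4. Apply the tail-log backup and bring the database online.
RESTORE LOG AdventureWorks
FROM DISK = 'D:\Backups\AdventureWorks_tail.trn'
WITH RECOVERY;
```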

Warning: Simple Recovery May Not Be Enough


Some experts suggest using the simple recovery model because SQL Server truncates the transaction log at every checkpoint, which keeps the transaction log at a manageable size. If you follow this advice, however, you'll be living on the wild side. In a crash, you'll lose everything up to the last backup because the simple recovery model offers no transaction log with which to restore from the last backup to the crash. So be sure to back up the database frequently if you opt for simple recovery. On the other hand, if you seldom change data, or change only a few items frequently, simple can be more efficient, but that's really the only good reason to consider it. Otherwise, the full model is the way to go.

This content was adapted from Internet.com's Database Journal, DevX, and InternetNews.com Web sites. Contributors: Gregory A. Larsen, Susan Sales Harkins, and Richard Adhikari.

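Switching between the recovery models discussed above is a one-line change; AdventureWorks is a placeholder name.

```sql
-- Full recovery: the log supports restores up to the point of failure.
ALTER DATABASE AdventureWorks SET RECOVERY FULL;

-- Simple recovery: the log self-truncates at checkpoints, but a crash
-- loses everything back to the last backup.
-- ALTER DATABASE AdventureWorks SET RECOVERY SIMPLE;

-- Check which model each database currently uses.
SELECT name, recovery_model_desc FROM sys.databases;
```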

