Rethinking the Data Center

An Internet.com Networking eBook

This content was adapted from Internet.com's InternetNews, ServerWatch, and bITa Planet Web sites. Contributors: Paul Rubens, Drew Robb, Judy Mottl, and Jennifer Zaino.

Contents

  • Enterprises Face Data Growth Explosion, by Judy Mottl
  • What's the State of Your Data Center?, by Jennifer Zaino
  • Create a Recession-Proof Data Center, by Paul Rubens
  • Greening Your Data Center — You May Have No Choice, by Paul Rubens
  • Hardware for Virtualization: Do's and Don'ts, by Drew Robb
  • Why Tape Libraries Still Matter, by Drew Robb
  • Facilities Management Crosses Chasm to the Data Center, by Paul Rubens


Enterprises Face Data Growth Explosion By Judy Mottl

If you think storing your enterprise data is a tough challenge now, it's nothing compared to what it might be in just a few years.

According to a study from research firm IDC and storage vendor EMC, data requirements are growing at an annual rate of 60 percent. Today, that figure tops 45 gigabytes for every person, or 281 exabytes total (equivalent to 281 billion GB). What should concern IT managers is that the report predicts the total amount of digital information -- the "digital universe," as the study's authors call it -- will balloon to 1,800 exabytes by 2011.
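As a rough sanity check, the growth rate and the 2011 forecast are consistent with simple compounding. The sketch below assumes the 281 exabyte total applies to 2007; that base year is an assumption for illustration, not something the study states here.

    # Back-of-the-envelope projection of the "digital universe" figures above.
    # Assumes the 281 EB total is the 2007 base and compounds at 60% per year.
    BASE_EB = 281        # exabytes in the base year (from the IDC/EMC study)
    GROWTH = 0.60        # annual growth rate cited in the study
    BASE_YEAR = 2007     # assumed base year

    for year in range(BASE_YEAR, 2012):
        size_eb = BASE_EB * (1 + GROWTH) ** (year - BASE_YEAR)
        print(f"{year}: ~{size_eb:,.0f} EB")

    # 2011 works out to about 281 * 1.6**4, or roughly 1,840 EB, in line with
    # the ~1,800 exabytes the report forecasts.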

Less than 5 percent of the digital universe is from data center servers, and only 35 percent is drawn from the enterprise overall, according to IDC. Nevertheless, the IT impact will be extensive, ranging from the need to boost information governance to improving data security. Individuals create about 70 percent of the digital universe, although companies are responsible for the security, privacy, reliability, and compliance of 85 percent of that data, the study said.

The findings should serve as a wake-up call to enterprises, said Charles King, principal analyst at Pund-IT. "Creation of information is accelerating at a torrid pace, and if organizations want the benefits of information they'll need effective management tools," King wrote in a response to the IDC/EMC report. Chief factors responsible for the growth in data include burgeoning Internet access in emerging countries, increasing numbers of data centers supporting cloud computing, and the rise of social networks, the study found.

King said IT will need to cope by assessing relationships with business units that classify data.




Additionally, enterprises will have to set and enforce policies for data security, access, and retention, and adopt tools for contending with issues like unstructured data search, database analytics, and resource pooling, he said.

The EMC/IDC study also found that while not all information created and transmitted is stored, enterprises will be storing only about half of it, on average, through 2011. The report comes as a follow-up to an earlier, similarly aggressive IDC forecast about data growth.

Certain business segments may be more affected than others, since they churn out more data. The financial industry, for example, accounts for just 6 percent of the digital universe. Media and communications firms, meanwhile, collectively generate 10 times that amount, according to the study.

The research firm said it based its conclusions on estimates of how much data is captured or created annually from roughly 30 classes of devices or applications. It then converted the data to megabytes using assumptions about usage and compression.


What’s the State of Your Data Center? By Jennifer Zaino

If you're like the data center managers surveyed by Symantec in the fall of 2007, maybe things aren't as good as you'd like them to be.

Symantec issued the results of its inaugural "State of the Data Center" report, based on a survey of more than 800 data center managers in Global 2000 and other large companies worldwide, with average annual IT budgets of $75 million in the U.S. and $54 million outside the States. "We found data center managers looking at a number of challenges and techniques to combat them," says Sean Derrington, Symantec's director of storage management. "Fixed costs are continuing to increase. Sixty-nine percent say expenditures are growing 5 percent a year, so a larger and larger piece of that IT budget goes to fixed costs. That doesn't leave much incremental dollars for IT managers to play around with."

Trying to get out of that vicious cycle, and free up dollars for more strategic uses, data center managers are attempting to contain costs, deploying new technologies such as server virtualization, but finding that the money they save on hardware sometimes gets swallowed up by the increasing management complexity of those environments. That's keeping an emerging technology such as virtualization from making the leap from the test and development environment to production systems, Derrington notes.

"So the recommendation is to figure out how to create a software infrastructure that runs across the entire data center and works across physical and virtual systems, so regardless of the technology you selected to constrain costs, you won't have to sacrifice skills training for IT staff. They perform the same task the same way," Derrington says. Symantec, of course, makes products designed to meet this need, such as Veritas NetBackup data protection and the Veritas Cluster Server disaster recovery and high availability solution.

One interesting finding of the survey was that increasing or significantly increasing demands by the business, in combination with overall data center growth, are compounding the problems of data center complexity. The respondents noted that service-level expectations have increased 85 percent over the past two years — and 51 percent admit to not having met service-level agreements in the same time period.

"As they're looking at negotiating service levels, they have to figure out how to deliver those services," says Derrington, and they are hitting a wall because of inadequate staff. Fifty-seven percent say staff skills do not meet current needs, and 60 percent say skill sets are too narrow. For example, they don't just want a Tivoli storage administrator, but someone who can work across backup and data protection infrastructures. Sixty-six percent also say there are too many applications to manage. Consistency in operations supported by a standardized software infrastructure can make all the difference here as well, Symantec believes.


"They want to automate the same things that are potentially repetitive," says Derrington. "Take, for example, storage provisioning. That task includes storage administrators, server administrators, SAN architects, maybe the business, maybe procurement and finance. How can a company actually define the workflow and process so everyone knows what needs to be done, so the person on the job for a day provisioning storage does the same thing as someone who has been on the job for 10 years?"

That's not to say data center managers are looking for automatons. In fact, part of the reason data center managers are having staffing troubles is that they want employees not only to solve technology problems, but also to understand the implications of technology for the business. "The reason being that an individual could write the best code or solve the best way to write a script to integrate Technology A with Technology B," Derrington says, "but if they don't understand how that impacts the business or how it delivers a business benefit, that person is not going to be as valuable."
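One way to act on Derrington's point is to write the provisioning workflow down as data that tools and new hires alike can follow. The sketch below is purely illustrative; the step and role names and the run_workflow helper are hypothetical, not drawn from any Symantec product.

    # Illustrative only: a storage provisioning workflow expressed as data, so a
    # first-day administrator follows the same steps as a ten-year veteran.
    PROVISION_STORAGE = [
        {"step": "capture request",       "owner": "storage admin"},
        {"step": "confirm business need", "owner": "business unit"},
        {"step": "check SAN capacity",    "owner": "SAN architect"},
        {"step": "approve spend",         "owner": "procurement/finance"},
        {"step": "zone and mask LUNs",    "owner": "SAN architect"},
        {"step": "mount and hand over",   "owner": "server admin"},
    ]

    def run_workflow(workflow):
        """Walk the workflow in order, printing each step and its owner."""
        for i, task in enumerate(workflow, start=1):
            print(f"{i}. {task['step']} (owner: {task['owner']})")

    run_workflow(PROVISION_STORAGE)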



Create a Recession-Proof Data Center By Paul Rubens

You don't need a Nobel Prize in economics to realize that the world's economies are facing a slowdown or recession head on. And it doesn't take a genius, or a large leap of logic, to work out that your data center's budget is likely to face a cut.

Whether you already have an inkling a cut is coming or you haven't yet been warned of one, establishing a course of action to cut costs now would be a wise move, according to Ken McGee, a vice president and Fellow at Gartner. As far back as 2007, Gartner was warning about the need to prepare for a recession. Since then, things have obviously changed for the worse. "Since that time, the factors we based the research on — such as GDP growth projections and expert predictions for the likelihood of a recession — have worsened to a degree that convinces us it is now time for clients to prepare for cutting IT costs," McGee said in January.

McGee recommends dedicating top staff exclusively to investigating IT cost-cutting measures, and appointing a senior auditor or accountant to the team to provide an official record of the team's performance. He also recommends reporting progress to senior managers on a weekly basis and identifying a liaison with a legal representative to make it easier to work through legal issues that may crop up in connection with maintenance and other contracts or penalty clauses. This is to ensure cost-cutting measures don't result in increased legal liabilities for your company.

So, having established that now is the time to take measures to help the data center weather a recession, the question is where should you look to cut costs?

Cost-Cutting Sweet Spots

One of the most significant data center costs is electricity — for powering both the computing equipment and the systems used to provide cooling. Virtualization can play a key role in reducing overall electricity consumption, as it reduces the number of physical boxes that need to be powered and cooled.

A single physical server hosting a number of virtual machines can replace two, three, or sometimes many more underutilized physical servers.




Although a physical server working at 80 percent utilization uses more electricity than one working at 20 percent, it is still far more energy-efficient than running four servers at 20 percent along with the accompanying four disk drives, four inefficient power supplies, and so on.

Virtualization also shrinks costs by reducing the amount of hardware that must be replaced. If you operate fewer servers, you then have fewer to replace when they reach the end of their lives. Thanks to advanced virtual machine management software from the likes of Microsoft and VMware, the time spent setting up and configuring virtual machines (and thus the associated cost) can be much less than that spent managing comparable physical servers.
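To see why consolidation pays even though the host's utilization and power draw go up, it helps to put rough numbers on it. Everything in the sketch below is an assumption for illustration: the wattages, the electricity tariff, and the rule of thumb that cooling costs about as much again as the IT load.

    # Rough model of the four-into-one consolidation described above. All inputs
    # are assumptions: ~220 W for a lightly loaded server, ~350 W for a busy
    # host, $0.10/kWh, and cooling costing roughly as much again as the servers.
    HOURS_PER_YEAR = 24 * 365
    PRICE_PER_KWH = 0.10
    COOLING_FACTOR = 2.0   # server power plus about the same again for cooling

    def annual_cost(watts):
        return watts / 1000 * HOURS_PER_YEAR * PRICE_PER_KWH * COOLING_FACTOR

    four_underused = 4 * annual_cost(220)   # four boxes at ~20% utilization
    one_consolidated = annual_cost(350)     # one host at ~80% utilization

    print(f"Four underutilized servers: ~${four_underused:,.0f} per year")
    print(f"One consolidated host:      ~${one_consolidated:,.0f} per year")
    print(f"Estimated saving:           ~${four_underused - one_consolidated:,.0f} per year")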

Sidebar: Virtualization's Success Hampers Server Sales
By Andy Patrizio

IDC revised its forecast in terms of both server dollar and unit sales in the coming years. It attributes the downshift to the increasing popularity of virtualization and more powerful servers. In both cases, one server can accomplish what previously took several. IDC reported unit sales slid in 2006, while dollar sales grew, an indication that fewer but more powerful machines are being sold. Instead of a 61 percent increase in server shipments by 2010, IDC now expects server sales will grow by 39 percent.

In projecting this trend out a few years, the research firm had to revise its server sales projections downward. Between now and 2010, IDC sees the x86-based server market dollars shrinking by 9 percent, from $36 billion to $33 billion, and actual unit sales declining 18 percent, from 10.5 million servers to 8.7 million servers.

This is due to what Michelle Bailey, research vice president for IDC's Enterprise Platforms and Datacenter Trends division, called a "perfect storm" of virtualization and multi-core processors.

"On its own, multi-core wouldn't have been that interesting," she told internetnews.com. "It probably would have been just another speed bump. It's the addition of virtualization that lets you take advantage of multi-core much more quickly."

Virtualization lets you run multiple single-threaded apps and get the benefits of multi-core technology without having to rewrite applications to be multithreaded. So a single machine with a dozen or more virtual environments can run the applications in a way a single-core system cannot. "It allows you to fully exploit an unutilized processor. Virtualization is what we think of as the killer app for multi-core. It lets customers take advantage of multi-core early without having to re-architect for it," said Bailey.

IDC estimates that the number of virtual servers will rise to more than 1.7 million physical servers by 2010, resulting in 7.9 million logical servers. Virtualized servers will represent 14.6 percent of all physical servers in 2010, compared to just 4.5 percent in 2005.

This means customers are growing more confident in the uptime reliability of x86-based hardware. While they haven't approached mainframes in reliability, x86 systems are a lot better than in previous years, and come with better configuration and management tools. A virtualized server going down could have far greater impact than a single application server going down, but Bailey said IT is not as concerned about that. "I would say customer perception around putting too many eggs in one basket has changed. A virtual environment is no less available than a single environment," she said.

However, there won't be a great spillover benefit when it comes to power and cooling issues, a growing headache for IT. While Bailey sees the potential for server consolidation, she expects that virtualization will more likely extend the lifespan of a server, thus keeping more machines deployed, so there won't be a thinning of the herd. Worldwide, power and cooling cost IT organizations $30 billion in 2006, and that will hit $45 billion by 2010.

And virtualization need not be restricted to servers. What's true of servers is true of storage systems, too: Storage virtualization can cut costs by reducing overprovisioning and reducing the number of disks and other storage media that must be powered (and cooled), bought, and replaced.

This leads to the concept of automation. Data center automation can take a vast amount of investment, but it also promises significant cost savings. In a time of recession it's prudent to look at initiatives that carry a modest price point and offer a relatively fast payback period. These may include patch management and security alerting (which in turn may enable lower-cost remote working practices), and labor-intensive tasks such as password resets. Voice authentication systems, for example, can dramatically reduce password reset costs in organizations that have large numbers of employees calling the IT help desk with password problems. Such systems automatically authenticate the user and reset relevant passwords.

Any automation software worth its salt also has the added benefit that when it reduces the number of man-hours spent dealing with a task, managers have the flexibility to choose between reducing data center human resource costs and reassigning employees to other tasks, including implementing further cost-cutting systems — thereby creating a virtuous circle.

A more straightforward, but contentious, strategy is application consolidation. Clearly the more applications your data center runs, the more complex and expensive it will be to manage them. Thus, consolidating on as few applications as possible makes good financial sense, assuming, of course, the apps are up to the required task. If these are open source applications, which in practice probably means Linux-based ones, then there's a potential for significant savings in terms of operating system and application license fees, and CALs. Bear in mind that significant support costs will remain, and Microsoft and other large vendors make the case that the total cost of ownership of open source software is no lower than closed source, but at the very least, you may be able to use open source alternatives as bargaining chips to get a better deal from your existing closed-source vendors.

As well as looking at changes that can be made at the micro level, it's also useful to look at the macro level at the way your whole data center operations are structured. For example, you may have set yourself a target of "the five nines" for system availability, but it's worth evaluating whether this is really necessary. How much would it reduce your costs to ease this target to 99.9 percent? And what impact would it have on the profitability of the business as a whole? If you can identify only a few applications that require 99.999 percent uptime, it's important to consider whether your data center is the best place from which to provide them. A specialized application service provider may be able to provide this sort of reliability at a lower cost for a fixed, per-user fee, with compensation if it falls below this service level. It certainly doesn't make sense to provide more redundancy than you need: That's simply pouring money down the drain.

Also consider whether your data center is operating longer hours than necessary. Thanks to the power of remote management tools, you may find it makes more sense financially to leave it unmanned at certain times, while having a number of staff "on call" to sort out problems remotely, should the need arise.

Finally, it's worth mentioning best-practice IT management frameworks like the IT Infrastructure Library (ITIL) and Microsoft Operations Framework (MOF). Aligning operations to these frameworks is a medium- to long-term project, but they are intended to ensure that all IT services, including those associated with the data center, are delivered as efficiently as possible. If you can achieve that, you are a long way down the path to ensuring your data center can endure any slowdown the economy can throw at it — not just this time, but the next time, and the time after that.
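The "five nines" question raised above is ultimately simple arithmetic: each extra nine buys a little less downtime at a much higher cost. A quick way to put numbers on the availability targets mentioned in the article (the calculation itself is generic, not tied to any particular vendor's SLA):

    # Allowed downtime per year for the availability targets discussed above.
    MINUTES_PER_YEAR = 365.25 * 24 * 60

    for availability in (0.999, 0.9999, 0.99999):
        downtime_min = (1 - availability) * MINUTES_PER_YEAR
        if downtime_min >= 60:
            print(f"{availability:.3%} uptime -> ~{downtime_min / 60:.1f} hours of downtime per year")
        else:
            print(f"{availability:.3%} uptime -> ~{downtime_min:.1f} minutes of downtime per year")

Three nines allows nearly nine hours of downtime a year, while five nines allows barely five minutes; the question for each application is whether that difference is worth what it costs to deliver.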



Greening Your Data Center — You May Have No Choice By Paul Rubens

The writing has been on the wall for some time. Electricity use in data centers is skyrocketing, sending corporate energy bills through the roof, creating environmental concerns, and generating negative publicity for large corporations.

Because IT budgets are limited and because governments in Europe and the United States may soon impose carbon taxes on wasteful data centers, something's got to give. Data centers are going to have to "go green."

It's not as if no one saw this coming. The aggregate electricity use for servers actually doubled between 2000 and 2005, both in the U.S. and around the world as a whole, according to research conducted by Jonathan Koomey, a consulting professor at Stanford University. In the U.S. alone, servers in data centers accounted for 0.6 percent of total electricity usage in 2005. But that's only half the story. When you include the energy needed to get rid of the heat generated by these servers, that figure doubles, so these data centers are responsible for about 1.2 percent of total U.S. electricity consumption, equivalent to the output of about five 1,000MW power stations, and costing $2.7 billion — about the gross national product of an entire country like Zambia or Nepal.

Unless data centers go green, energy costs could soon spiral out of control, according to Rakesh Kumar, a vice president at Gartner. In a report titled "Why 'Going Green' Will Become Essential for Data Centers," he says that because space is limited, many organizations are deploying high-density systems that require considerably more power and cooling than last-generation hardware.

Add to that rising global energy prices, and the proportion of IT budgets spent on energy could easily rise from 5 percent to 15 percent in five years. The mooted introduction of carbon taxes would make this proportion even higher. "When people look at the amount of energy being consumed and model energy prices, and think about risk management and energy supply, they should begin to get worried," Kumar said.





It's Not Easy Being Green

Since most data centers historically have not been designed with the environment in mind, Kumar says more than 60 percent of the energy used for cooling purposes is actually wasted. This is bad for the environment and reflects poorly on the organizations concerned — especially if, as is increasingly the case, they have corporate social responsibility commitments. And as a growing number of companies adopt a "carbon neutral" policy (either out of genuine concern for the environment or for the positive PR this can produce), pressure from head office to reduce the carbon footprint of the data center, to help reduce overall carbon emissions, will become more intense. "There's no doubt that in the short term this problem is a financial one, but behind that there is the need of organizations to be seen to be green," he said.

So what can be done to "green" the data center? "There is no one solution that will solve the problem — this is a collective issue and it will require a raft of solutions," Kumar said. "You need to start by getting some metrics to understand the problem, because it's not going to go away."

The ideal solution is to start from the ground up by designing and building a new data center with energy efficiency in mind. This includes looking at the thermal properties of the building being constructed, the layout of the building for maximum cooling efficiency, and even the site of the building: Locating a new data center far from urban areas means that it might be more feasible to incorporate renewable energy sources such as wind turbines or solar panels into the design, for example. For more specific guidance, organizations can turn to standards such as the U.S. Green Building Council's Leadership in Energy and Environmental Design certification. Vendor programs such as the Green Grid, an information network sponsored by AMD, IBM, HP, and Sun, may also be a useful source of information.

Assuming you're not quite ready to tear down your buildings and start again, there's still plenty you can do to reduce your electricity bill and your carbon footprint. Perhaps the most effective action you can take is to reduce the number of servers in use at any one time. Each server you switch off can reduce your electricity bill by up to $500 per year (and cut the amount of carbon dioxide released into the air annually by perhaps 2,000 pounds) directly, with about the same savings again realizable from reduced cooling requirements.

It may be that you have servers that don't need to be on at all hours of the day and night, but it's more likely that you can reduce the number of servers you need through virtualization. If you run corporate applications on separate servers, many may be only 10 to 20 percent utilized. Virtualization can dramatically cut the number of physical servers you need, while technology from companies such as VMware can ensure that your virtual machines can be switched to higher-capacity physical machines during peak times.

If you do retire some servers, it obviously makes sense to get rid of the older ones. This has the added benefit of increasing your overall server energy efficiency, because newer multi-core chips can offer significant performance gains over older ones while using almost 50 percent less power. Power management technologies such as Intel's Demand Based Switching can further reduce individual processor electricity consumption by up to 30 percent.

Another area where you can make significant power savings is the server power supplies themselves. That's because they can vary enormously in efficiency, especially under certain loads. Bad power supplies waste about half of the energy they consume (and thus the same again is used by cooling systems to dissipate the heat generated by this wasted energy). To compound this, power supplies running at a small fraction of their rated capacity are often even more inefficient. Look for power supplies with the 80 Plus certification — this means the power supply will run at 80 percent efficiency or better even when running at just 20 percent of its full capacity.
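A rough model of the kind of per-server electricity numbers quoted above, and of what a more efficient power supply buys you. The wattage, tariff, and efficiency figures below are illustrative assumptions, not measurements from the article.

    # Illustrative annual electricity cost of one server, including a roughly
    # equal amount again for cooling, and the effect of power supply efficiency.
    # Wattage, tariff, and efficiencies are assumed values for the example.
    HOURS = 24 * 365
    TARIFF = 0.10                  # assumed price per kWh, in dollars

    def yearly_cost(load_watts, psu_efficiency, cooling_overhead=1.0):
        wall_watts = load_watts / psu_efficiency           # what the PSU draws
        total_watts = wall_watts * (1 + cooling_overhead)  # plus cooling
        return total_watts / 1000 * HOURS * TARIFF

    poor_psu = yearly_cost(300, psu_efficiency=0.50)   # wastes about half
    good_psu = yearly_cost(300, psu_efficiency=0.80)   # 80 Plus-class supply

    print(f"Server with ~50% efficient PSU: ~${poor_psu:,.0f} per year")
    print(f"Server with ~80% efficient PSU: ~${good_psu:,.0f} per year")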

It's a Long Way to Tipperary

The answer to the question "How do you make your data center greener?" is similar to the answer to the traditional question from the Emerald Isle, "How do you get to Tipperary?" In both cases it is: "If I were you, I wouldn't start from here." In other words, while you can certainly make savings by switching power supplies and switching off unused machines, the real solution requires a total rethinking of the data center. This ranges from the design of the buildings and the cooling systems they contain, to the extensive use of virtualization to increase server utilization, all the way down to the use of energy-efficient equipment, from power supplies to smart, power-managed processors. It's not a cheap undertaking, but it is one that may prove vital for the survival of the data center, the corporation, and perhaps — in a small way — even the planet.



Hardware for Virtualization: Do’s and Don’ts By Drew Robb

Virtualization is catching on like never before. Just about every server vendor is advocating it heavily, and IT departments worldwide are buying into the technology in ever-increasing numbers. "The use of virtualization in the mainstream is now relatively commonplace, rather than just in development and test," said Clive Longbottom, an analyst at U.K.-based Quocirca. "In addition, business continuity based on long-distance virtualization is being seen more often."

As a result, the time has come to more closely align hardware purchasing with virtualization deployment. So what are some of the important do's and don'ts of buying servers and other hardware for a virtual data center infrastructure? What questions should IT managers ask before they make selection decisions on servers? And how should storage virtualization gear be integrated into the data center?

Do's and Don'ts

There are, of course, plenty of ways to virtualize, depending on the applications being addressed. This article will focus on a typical case where infrastructure and business logic applications are the main targets. With that in mind, one obvious target is memory. It is a smart policy to buy larger servers that hold more memory to get the best return on investment. While single- and dual-processor systems can host multiple applications under normal circumstances, problems arise when two or more hit peak usage periods.

"Our field experience has shown that you can host more VMs [virtual machines] per processor and drive higher overall utilization on the server if there are more resources within the physical system," said Jay Bretzmann, worldwide marketing manager, System x, at IBM. "VMware's code permits dynamic load balancing across the unused processor resources allocated to separate virtual machines."

He advised buying servers with more reliability features, especially those that predict pending failures and send alerts to move the workloads before the system experiences a hard failure. Despite the added cost, organizations should bear in mind that such servers are the cornerstone of any virtualization solution. Therefore, they deserve the lion's share of investment. "Businesses will lose significant productivity if the consolidation server fails," said Bretzmann. "A hard crash can lead to hours of downtime depending upon what failed."

Longbottom, however, made the point that an organization need not spend an arm and a leg for virtualization hardware — as long as it doesn't go too low end. "Cost of items should be low — these items may need swapping in and out as time goes on," said Longbottom. "But don't just go for cheapest kit around — make sure that you get what is needed." This is best achieved by looking for highly dense systems. Think either stackable within a 19-inch rack or usable as a blade chassis system. By focusing on such systems, overall cooling and power budgets can be better contained. Remember, too, not every server is capable of being managed in a virtual environment. Therefore, all assets should be recognizable by standard systems management tools.

Just as there are things you must do, several key don'ts should be observed as well. One that is often violated is that servers should not be configured with lots of internal storage. "Servers that load VMs from local storage don't have the ability to use technologies like VMotion to move workloads from one server to another," cautioned Bretzmann.

What about virtualizing everything? That's a no-no, too. Although many applications benefit from this technology, in some cases it actually makes things worse. For example, database servers should not be virtualized for performance reasons.

Support is another important issue to consider. "Find out if the adoption of virtualization will cause any application support problems," said Bretzmann. "Not all ISVs have tested their applications with VMware."


Storage Virtualization

Most of the provisos covered above also apply to purchasing gear for storage virtualization. "Most of the same rules for classic physical environments still apply to virtual environments — it's really a question of providing a robust environment for the application and its data," said John Lallier, vice president of technology at FalconStor Software.

While virtual environments can shield users from hardware-specific dependencies, they can also introduce other issues. One concern when consolidating applications on a single virtualization server, for example, is that you may be over-consolidating to the detriment of performance and re-introducing a single point of failure. When one physical server fails, multiple virtual application servers are affected. "Customers should look for systems that can provide the same level of data protection that they already enjoy in their physical environments," said Lallier. He believes, therefore, that storage purchasers should opt for resilient and highly available gear that will keep vital services active no matter what hardware problems arise.

In addition, Lallier suggests investing in several layers of protection for large distributed applications that may span multiple application servers. This should include disaster recovery (DR) technology so operations can quickly resume at remote sites. To keep costs down, he said users should select DR solutions that do not require an enormous investment in bandwidth.

As a cost-cutting measure, Lallier advocates doubling up virtual environments. If the user is deploying a virtual environment to better manage application servers, for example, why not use the same virtualization environment to better manage the data protection servers? As an example, FalconStor has created virtual appliances for VMware Virtual Infrastructure that enable users to make use of its continuous data protection (CDP) or virtual tape library (VTL) systems, which can be installed and managed as easily as application servers in this environment.

Of course, every vendor has a different take. NetApp provides an alternative to FalconStor using the snapshot technology available in its StoreVault S500. This storage array handles instant backups and restores without disrupting the established IT environment. "Useful products are able to host VMs over multiple protocols, and the StoreVault can do it via NFS, iSCSI or FCP — whatever your environment needs," said Andrew Meyer, StoreVault product marketing manager at NetApp.

"Don't get trapped into buying numerous products for each individual solution. One product that is flexible with multiple options (can handle VMs, create a SAN, handle NAS needs, provide snapshots and replication) may be a smarter investment as a piece of infrastructure."



Why Tape Libraries Still Matter By Drew Robb

Tape libraries aren't exactly a booming business or front-page news these days, but at the same time, they're not faring all that badly in the face of the disk-based backup onslaught. According to Freeman Reports, total revenue from all tape libraries declined 15.6 percent in 2006 compared to 2005, while unit shipments declined 4.5 percent.

Despite those statistics, tape users purchased more than 50 percent more capacity as they migrated to higher-capacity and higher-performance tape drives and cartridges. Thus, what looks like a fading industry on the surface is very much alive and kicking. Revenue still amounted to a healthy $1.81 billion in 2006 and was expected to be $1.77 billion in 2007. According to Freeman Reports, it will rise to $2.15 billion by 2012. Within those numbers, older formats like 8-millimeter and DLT library sales continue to falter, offset by increased sales of LTO and half-inch cartridge libraries. LTO has evolved into the dominant player, accounting for 88 percent of library unit shipments and 58 percent of library revenue. LTO capacity and throughput grew by leaps and bounds during the past few years: LTO-2 offered 200 GB native and 30 to 35 MB/s, whereas LTO-3 provides 400 GB and 80 MB/s, and the new LTO-4 delivers 800 GB and 120 MB/s. LTO-4 is also the first open systems tape drive technology to incorporate native encryption.

With the growing popularity of disk-based backup and recovery solutions and the continued consolidation of tape library resources, however, tape is increasingly taking on a more specialized role in data protection. In many cases, tape is being used for disaster recovery and centralized backup.

"Corporations must retain data for long periods of time and ensure compliance with internal service-level agreements and government regulations," said Mark Eastman, product line director, Tape Automation Systems, for Quantum. "As a result, customers are demanding higher security, capacity, performance and reliability across their tape investments. Automation platforms incorporating the latest-generation LTO-4 technology deliver on these important features."




Quantum

On the vendor side, the top players are Sun Microsystems, IBM, and Quantum. Quantum gained serious ground in the enterprise tape library market with its acquisition of ADIC several years back. At the high end of the scale, the Quantum Scalar i2000 has a starting price of $65,000. According to Eastman, the i2000 is designed to meet the rigors of high-duty-cycle data center operations and integration with disk-based backup solutions. It uses a standard 19-inch rack form and can hold 746 cartridges per square meter, as well as up to 192 LTO bulk loading slots in one library. In the midrange, the Scalar i500 is priced beginning at $25,000. The entry-level Scalar 50 has a starting price of $8,000. One box contains 38 slots, and its Quantum StorageCare Vision data management and reporting tools enable users to monitor multiple tape libraries and disk systems from one screen. "Backup and restore capabilities are just as critical in busy workgroups and remote environments as they are anywhere else," said Eastman. "The Scalar 50 tape library provides them with an easy-to-use, reliable and scalable solution that simplifies the backup process."

IBM Tape

According to IDC, IBM offers the leading enterprise tape drive in the TS1120. This tape drive comes with Encryption Key Manager for the Java platform (EKM) to encrypt information being written to tape. "EKM technology is used in high-end enterprise accounts by Fortune 100 companies in a variety of industries including banking, finance and securities," said Bruce Master, senior program manager, Worldwide Tape Storage Systems Marketing, at IBM. "IBM's LTO tape offerings have achieved nearly 900,000 drive shipments and over 10 million cartridge shipments."

The company's highest-end tape library is the TS3500, which scales up to 6,800 slots and up to 192 LTO tape drives. Lower down the ladder comes the TS3310, which can deal with up to 398 slots and 18 LTO drives. The company offers various lower-end models such as the TS3100 with 24 slots.

IBM also offers tape virtualization products, such as the TS7520 and TS7700. The TS7700 Tape Virtualization Engine is for mainframes and can be configured to participate in a grid environment. "Two or three TS7700s can communicate and replicate with each other over an IP network," said Master. "This arrangement helps reduce or eliminate bottlenecks in the tape environment, supports the re-reference of volumes without the physical delays typical to tape I/O, helps increase performance of tape processes, and helps to protect data and address business continuity objectives."

Sun StorageTek

Like the other big vendors, Sun provides encryption for tape systems. The StorageTek T10000 tape drive, for example, includes built-in encryption software and has a starting price of $37,000. At the high end on the tape library side is the StorageTek SL8500, with a starting price of $195,830. It can house up to 56 petabytes (70,000 slots) and can be shared among mainframe, Solaris, AS/400, Windows, Linux, and Unix systems. Lower down the line is the StorageTek SL500 (starting at $16,400), an 8U rackmount tape automation model that scales from 30 to 575 LTO slots and can deal with multiple cartridge types, such as LTO and SDLT/DLT-S4. Its maximum capacity is around 460 terabytes (uncompressed). "We are seeing strong adoption of the scalable libraries in the distributed and small business space, as evidenced by continued growth of the SL500," said Alex North, group manager for tape at Sun Microsystems. "The SL500 is particularly good for such applications as e-mail servers, database applications and file servers."

Green Tape

As for the future of tape, these vendors are committed to it and believe it will continue to play an important role. In fact, as green data center trends strengthen, tape usage will accelerate. "Tape storage TCO is as much as an order of magnitude less expensive than disk storage," said IBM's Master. "Its consumption of energy for power and cooling is anywhere from 20 to 100 times less expensive than disk storage."



Facilities Management Crosses Chasm to the Data Center By Paul Rubens

It wasn't so long ago that the facilities management (FM) team stalked the corridors of office buildings with greasy blue coats and large bunches of keys. That image is now as out of date as carbon paper and typing pools: Today's facilities manager is more likely to be found in a white short-sleeved shirt behind a 21-inch flat-screen monitor, looking at CAD drawings and updating an asset database in a high-tech basement lair.

The role of the FM department has changed, too. If you are involved in planning and running a modern data center, it's a good idea to get facilities management involved. Today's FM departments have much to offer data centers and the administrators that manage them. Working with them helps facilitate a flexible data center that is green and energy-efficient. Together, they enable the data center to supply the desired IT services to the people who need them, at close to optimal cost.

First, let's clear up some basics: The facilities management department does not dictate what technology is used in the data center. That's an IT decision, and nothing will change that. "Essentially, facility management is about power, cooling and fire protection, and also, where data centers are concerned, physical access controls," said Kevin Janus, vice president of the International Facility Management Association (IFMA) IT Council. "It is not involved in what servers you run, but it is concerned with the environment in which they will live."

A facility manager can help with a number of environmental factors, purely because he has a complete overview of a building and its current and planned future uses — something IT staff probably lack. "Obviously you don't want the IT department creating a data center when there are kitchens on the floor above because of the danger of leaks," Janus points out.

But the real issues are power and air conditioning. Air conditioning is the number one consumer of power. Servers, as anyone who has worked in a data center can testify, generate a great deal of heat. The high-density racks that are becoming increasingly common in today's data centers consume vast amounts of power, and a similar amount of power is needed to dissipate this heat. That makes the planning and layout of the data center, and the provision of power and air conditioning equipment, crucial. This falls clearly under the FM purview.

How can FM help? In an organization of any size, it's likely that the facility managers will have a computer-aided facility management (CAFM) package at their disposal. Among other things, a CAFM will usually store CAD floor plans of the building and a database of assets. For the data center, this will likely include plans showing the layouts of racks. In many cases, the database will hold the location of each server, the applications running on these servers, and information about the departments that "own" each application, where relevant. Software tools can also carry out calculations to work out the amount of power that must be supplied in a given area of the data center, and the corresponding cooling capacity needed to remove the resulting heat (see the sketch below).

Information like this is clearly invaluable for the IT department, because no matter what IT strategy is in place, the available power and cooling capacity presents constraints. The only way the IT department can be free to install and run the hardware it wants is if FM has already put in place the power and cooling it requires. And the only way for FM to know the IT requirements is for the two departments to communicate regularly.

"The IT strategy may call for increased use of virtualization two or three years down the line, but they won't necessarily know what implication that has for the facility, especially in terms of A/C," said Chris Keller, a past president of the IFMA's IT Council. "But it's also important to look at how the strategy will impact on people and the office layout elsewhere in the building. If the IT department wants to replace printer stations with inexpensive printers on every desk, then more power and A/C is going to be needed throughout the building or it won't be possible."
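The power-and-cooling calculations Janus and Keller describe are, at their simplest, a heat-load sum: every watt delivered to a rack comes back out as heat that the air conditioning must remove. A minimal sketch, with made-up rack counts and wattages:

    # Minimal heat-load estimate for one area of a data center: IT power in
    # equals heat out, expressed in kW and in BTU/hr for sizing cooling.
    # The rack counts and per-rack wattages are example inputs only.
    WATTS_TO_BTU_PER_HR = 3.412

    racks = [
        {"name": "row A (1U servers)", "count": 6, "watts_per_rack": 5_000},
        {"name": "row B (blades)",     "count": 4, "watts_per_rack": 12_000},
    ]

    total_watts = sum(r["count"] * r["watts_per_rack"] for r in racks)
    print(f"IT load: {total_watts / 1000:.1f} kW")
    print(f"Cooling required: ~{total_watts * WATTS_TO_BTU_PER_HR:,.0f} BTU/hr "
          f"(~{total_watts * WATTS_TO_BTU_PER_HR / 12_000:.1f} tons)")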

When a data center space is initially populated, the FM department can help design the layout of the racks to maximize the efficiency of the cooling systems. Detailing current thinking on hot and cool aisles and other energy-efficient data center layout techniques is beyond the scope of this article. However, bear in mind that input from the FM department, and the software tools at its disposal, makes it possible to design a data center layout that will use significantly less energy, and cost less to keep at an acceptable operating temperature, than a badly laid out one.

What about making changes to existing data centers? "The contents of racks have to be managed, and if the A/C can't handle it then racks or individual servers have to be moved," said Keller. "Then the question is how do you know which servers you are moving and how do you keep track of where they are going? The FM department has, in a CAFM database, the place to store that information, and can offer it to the IT department. There's no point in the IT department doing it all again when the information already exists. From the CEO's point of view, redundancy is not the way to go," he said.

The message from the basement, then, is very clear. By involving the FM team in the planning and layout of your data center, you gain access to the tools and resources needed to ensure the data center will be as practical to run, and as green and energy-efficient, as possible. And by keeping the lines of communication open between the two organizations, the data center will be flexible enough to accommodate the changes you have planned, so you can deliver the services you want in the way you want, without worrying about where you are going to put the boxes, whether you are going to run out of power, or whether the servers might melt when a new system is deployed.


Rethinking the Datacenter, An Internet.com Networking eBook. © 2008, Jupitermedia Corp.
