EXECUTIVE GUIDE
Next-generation data centers:
Opportunities abound
IT transformation begins at the data center, as enterprises embrace technologies such as virtualization and cloud computing to build adaptable, flexible and highly efficient infrastructures capable of handling today’s business demands.
Sponsored by APC www.apc.com
Table of Contents

Profile: APC by Schneider Electric ... 3

Introduction
Next-generation data centers: The new realities ... 4

Trends
Elastic IT resources transform data centers ... 5
What IT pros like best about next-generation technology ... 7
Thinking outside the storage box ... 9

Emerging technologies
One in 10 companies deploying internal clouds ... 12
Google, Microsoft spark interest in modular data centers, but benefits may be exaggerated ... 14
The big data-center synch ... 16

Best practices
Seven tips for succeeding with virtualization ... 18
Don’t let the thin-provisioning gotchas getcha ... 21
The challenge of managing mixed virtualized Linux, Windows networks ... 23

Inside the Data Center
The Google-ization of Bechtel ... 26
Why San Diego city workers expect apps up and running in 30 minutes or less ... 30
Profile: APC by Schneider Electric
APC by Schneider Electric, a global leader in critical power and cooling services, provides industry-leading products, software and systems for home, office, data center and factory floor applications. Backed by the strength, experience, and wide network of Schneider Electric’s Critical Power & Cooling Services, APC delivers well planned, flawlessly installed and maintained solutions throughout their lifecycle. Through its unparalleled commitment to innovation, APC delivers pioneering, energy efficient solutions for critical technology and industrial applications. In 2007, Schneider Electric acquired APC and combined it with MGE UPS Systems to form Schneider Electric’s Critical Power & Cooling Services Business Unit, which recorded 2007 revenue of $3.5 billion (€2.4 billion) and employed 12,000 people worldwide. APC solutions include uninterruptible power supplies (UPS), precision cooling units, racks, physical security and design and management software, including APC’s InfraStruXure® architecture, the industry’s most comprehensive integrated power, cooling, and management solution. Schneider Electric, with 120,000 employees and operations in 102 countries, had 2007 annual sales of $25 billion (€17.3 billion). For more information on APC, please visit www.apc.com. All trademarks are the property of their owners.
Introduction
Next-generation data centers: The new realities
By Beth Schultz
Several years ago, Geir Ramleth, CIO of construction giant Bechtel, asked himself this question: If you could build your IT systems and operation from scratch today, would you recreate what you have? Ramleth certainly isn’t the only one who has contemplated such a question. And the answer always seems to be “no.” Yesterday’s technology can’t handle the agility required by today’s businesses. The IT transformations often begin at the data center. At Bechtel, for example, Ramleth began the company’s migration to an Internet model of computing by scrapping its existing data centers in favor of three new facilities (see “The Google-ization of Bechtel,” page 26). The next-generation data centers are all about adaptability, flexibility and responsiveness, and virtualization is a building-block technology. The numbers show how tactical virtualization has become. As discussed in “Seven tips for succeeding with virtualization,” more than 4 million virtual machines will be installed on x86 servers this year and the number of virtualized desktops could grow from less than 5 million in 2007 to 660 million by 2011, according to Gartner. More telling, however, is the shift from a tactical to strategic mindset. The popularity of virtualizing x86 server and desktop resources has many enterprise IT managers reassessing ways to update already virtualized network and storage resources, too. “Enterprise IT managers are going to have to start thinking virtual first and learn how to make the case for virtualization across IT disciplines,” advises James Staten, principal analyst at Forrester Research. In the article “What IT pros like best about
next-generation technology,” Michael Geis, director of IS operations for Lifestyle Family Fitness, says he has discovered how strategic storage virtualization can be. The fitness chain set out to resolve a performance bottleneck originating in its storage network, and wound up upgrading its data center infrastructure in a way that not only took care of the problem but also drastically changed how it made storage decisions. Heterogeneous storage virtualization provides much-needed flexibility in the data center. “Five years ago, the day you made the decision to go with EMC or IBM or HP or anyone, you might get a great discount on the first purchase, but you were locked into that platform for three to five years,” Geis says. But now, every time the chain adds storage, its current vendor, IBM, has to work in a competitive situation, he says. If virtualization is the starting point, an Internet model of computing, à la Bechtel’s vision, increasingly is the end goal. Many experts believe that corporate data centers ultimately will be operated as private clouds, much like the flexible computing networks operated by Internet giants Amazon and Google, but specifically built and managed internally for an enterprise’s own users. Besides Bechtel, corporations already building their own private clouds include such notable names as Deutsche Bank, Merrill Lynch and Morgan Stanley, according to The 451 Group. The research firm found in
a survey of 1,300 corporate software buyers that about 11% of companies are deploying internal clouds or planning to do so. As discussed in “One in 10 companies deploying internal clouds,” that may not seem like a huge proportion, but it’s a sign that private clouds are real. As Vivek Kundra, CTO for the District of Columbia government, says, private clouds are “definitely not hype.” Kundra, for one, says he plans to blend IT services provided from the district’s own data center with external cloud platforms such as Google Apps. And Gartner predicts that by 2012 private clouds will account for at least 14% of the infrastructure at Fortune 1000 companies, which will benefit from service-oriented, scalable and elastic IT resources. Of course, on their way to the cloud-based data center of the future, enterprises will discover, deploy – and possibly discard – myriad other technologies. For example, some might embrace the concept of modularization, the idea of packaging servers and storage in a container with its own cooling system. Microsoft and Google have piqued interest in this approach with their containerized data center schemes. Others might look to integration, focusing on an emerging technology such as Fibre Channel over Ethernet to bring together data and storage on one data center fabric. Still others might turn the notion of the traditional storage-area network on its head, using Web services that link applications directly to the storage they need. In the new data center, possibilities abound. Schultz, formerly Network World’s New Data Center editor, is now a freelance writer. She can be reached at
[email protected].
Section 1: Trends
Elastic IT resources transform data centers
Several IT trends converge as data centers evolve to become more adaptable, Gartner says
By Jon Brodkin, Network World, 12/04/2008
The enterprise data center of the future will be a highly flexible and adaptable organism, responding quickly to changing needs because of technologies like virtualization, a modular building approach, and an operating system that treats distributed resources as a single computing pool. The move toward flexibility in all data center processes comes after years of building monolithic data centers that react poorly to change. “For years we spent a lot of money building out these data centers, and the second something changed it was: ‘How are we going to be able to do that?’” says Brad Blake, director of IT at Boston Medical Center.“What we’ve built up is so specifically built for a particular function, if something changes we have no flexibility.” Rapidly changing business needs and new technologies that require extensive power and cooling are necessitating a makeover of data centers, which represent a significant chunk of an organization’s
capital costs, Blake notes. For example,“when blade servers came out that completely screwed up all of our matrices as far as the power we needed per square foot, and the cooling we needed because these things sucked up so much energy, used so much heat,” he says. Virtualization of servers, storage, desktops and the network is the key to flexibility in Blake’s mind, because hardware has long been tied too rigidly to specific applications and systems. But the growing use of virtualization is far from the only trend making data centers more flexible. Gartner Group expects to see today’s blade servers replaced in the next few years with a more flexible type of server that treats memory, processors and I/O cards as shared resources that can be arranged and rearranged as often as necessary. Instead of relying on vendors to decide what proportion of memory, processing and I/O connections are on each blade, enterprises will be able to buy whatever resources they need in any amount, a far more efficient approach. For example, an IT shop could combine 32 processors and any number of memory modules to create one large
server that appears to an operating system as a single, fixed computing unit. This approach also will increase utilization rates by reducing the resources wasted because blade servers aren’t configured optimally for the applications they serve. Data centers also will become more flexible by building in a modular approach that separates data centers into self-contained pods or zones which each have their own power feeds and cooling. The concept is similar to shipping container-based data centers, but data center zones don’t have to be enclosed. By not treating a data center as a homogenous whole, it is easier to separate equipment into high, medium and low heat densities, and devote expensive cooling only to the areas that really need it. Additionally, this separation allows zones to be upgraded or repaired without causing other systems to go offline. “Modularization is a good thing. It gives you the ability to refresh continuously and have higher uptime,” Gartner analyst Carl Claunch says. This approach can involve incremental build-outs, building a few zones and leaving room for more when needed. But you’re not wasting power because each zone has its own power feed and cooling
supply, and empty space is just that. This is in contrast to long-used design principles, in which power is supplied to every square foot of a data center even if it’s not yet needed. “Historical design principles for data centers were simple – figure out what you have now, estimate growth for 15 to 20 years, then build to suit,” Gartner states. “Newly built data centers often opened with huge areas of pristine white floor space, fully powered and backed up by a UPS, water and air cooled, and mostly empty. With the cost of mechanical and electrical equipment, as well as the price of power, this model no longer works.”
While the zone approach assumes that each section is self-contained, that doesn’t mean the data center of the future will be fragmented. Gartner predicts that corporate data centers will be operated as private “clouds,” flexible computing networks which are modeled after public providers such as Google and Amazon yet are built and managed internally for an enterprise’s own users. By 2012, Gartner predicts that private clouds will account for at least 14% of the infrastructure at Fortune 1000 companies, which will benefit from service-oriented, scalable and elastic IT resources.
Private clouds will need a meta operating system to manage all of an enterprise’s distributed resources as a single computing pool, Gartner analyst Thomas Bittman says, arguing that the server operating system relied upon so heavily today is undergoing a transition. Virtualization became popular because of the failures of x86 server operating systems, which essentially limit each server to one application and waste tons of horsepower, he says. Now spinning up new virtual machines is easy, and they proliferate quickly. “The concept of the operating system used to be about managing a box,” Bittman says. “Do I really need a million copies of a general purpose operating system?” IT needs server operating systems with smaller footprints, customized to specific types of applications, Bittman argues. With some functionality stripped out of the general purpose operating system, a meta operating system to manage the whole data center will be necessary. The meta operating system is still evolving but is similar to VMware’s Virtual Datacenter Operating System. Gartner describes the concept as “a virtualization layer between applications and distributed computing resources … that utilizes distributed computing resources to perform scheduling, loading, initiating, supervising applications and error handling.”
All these new concepts and technologies – cloud computing, virtualization, the meta operating system, building in zones and pods, and more customizable server architectures – are helping build toward a future when IT can quickly provide the right level of services to users based on individual needs, and not worry about running out of space or power. The goal, Blake says, is to create data center resources that can be easily manipulated and are ready for growth. “It’s all geared toward providing that flexibility because stuff changes,” he says. “This is IT. Every 12 to 16 months there’s something new out there and we have to react.”
What IT pros like best about next-generation technology
Flexibility, cost savings and eco-friendly operations make the list
By Ann Bednarz, Network World, 10/20/2008
Do hundreds of gallons of used vegetable oil belong anywhere near a data center, let alone inside? Phil Nail thinks so. Nail is CTO of AISO.Net, whose Romoland, Calif., data center gets 100% of its electricity from solar energy. Now he’s considering waste vegetable oil as an alternative to using diesel fuel in the Web hosting company’s setup for storing solar-generated power. “We’re never opposed to trying something new,” says Nail, who has eliminated nearly 100 underutilized stand-alone servers in favor of four IBM System x3650 servers, partitioned into dozens of virtual machines using VMware software. Server virtualization fits right into AISO.Net’s environmentally friendly credo. The company increased its average server-utilization level by 50% while achieving a 60% reduction in power and cooling costs through its consolidation project. For Nail, virtualization technology lives up to the hype. Nevertheless, it isn’t always easy for IT executives to find the right technology to help a business stay nimble, cut costs or streamline operations. Read on to learn how Nail and three other IT professionals made big changes in their data centers, and what they like best about their new deployments.
No more vendor lock-in
Vendor lock-in is a common plight for IT storage buyers. Lifestyle Family Fitness, however, found a way out of the shackles. The 56-club fitness chain in St. Petersburg, Fla., set out to resolve a performance bottleneck it traced to its storage network, and wound up upgrading its data center infrastructure in a way that not only took care of the problem but also extended the life of its older storage arrays. Lifestyle’s users were starting to notice that certain core applications, such as membership and employee records, were sluggish. IT staff confirmed the problem by looking at such metrics as the average disk-queue length, which counts how many I/O operations are waiting for the hard disk to become available (a rough way to compute this metric is sketched at the end of this subsection), recalls Michael Geis, director of IS operations for the chain. “Anything over two is considered a bottleneck, meaning your disks are too slow. We were seeing them into the 60, 80 and 100 range during peak times,” he says. After rolling out IBM’s SAN Volume Controller (SVC), queue lengths settled back below two, Geis says. The two-node clustered SVC software fronts EMC Clariion CX300 and IBM System Storage DS4700 disk arrays, along with two IBM Brocade SAN switches. SVC lets Lifestyle combine storage capacity from both vendors’ disk systems into a single pool that is manageable as a whole for greater utilization. An onboard cache helps speed I/O performance; SVC acknowledges transactions once they’ve been committed to its cache but before they’re sent to the underlying storage controllers. A key benefit of virtualizing storage is the ability to retain older gear, rather than doing the forklift replacements typically required of storage upgrades, Geis says. “We didn’t have to throw away our old legacy equipment. Even though we’d had it for a few years, it still had a lot of performance value to us,” he says. Lifestyle uses the new IBM storage for its most performance-sensitive applications, such as its core databases and mail server, and uses the EMC gear for second- and third-tier storage. Using storage gear from more than one vendor adds management overhead, however. “You have to have a relationship with two manufacturers and have two maintenance contracts. There’s also expertise to think about. Our storage team internally has to become masters of multiple platforms,” Geis says. The payoff is worth it, however: “Now that I’ve got storage virtualization in place with the SVC, my next storage purchase doesn’t have to be IBM. I could go back and buy EMC again, if I wanted to, because I have this device in-between,” Geis says. Heterogeneous storage virtualization gives buyers a lot more purchasing flexibility – and negotiating power. “Every time we add new storage, IBM has to work in a competitive
situation,” Geis says. “Five years ago, the day you made the decision to go with EMC or IBM or HP or anyone, you might get a great discount on the first purchase, but you were locked into that platform for three to five years,” he says. What Geis likes best is the flexibility the system affords and “knowing that I don’t have to follow in the footsteps of everybody else’s storage practices, that I can pick and choose the path that we feel is best for our organization.”
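The queue-length rule of thumb Geis cites is easy to check for yourself. Below is a minimal sketch, assuming a Linux host and a device named sda (both assumptions; Lifestyle’s team read the equivalent Windows PerfMon counter against its SAN). It uses the same arithmetic iostat uses for its avgqu-sz column: the change in weighted I/O time from /proc/diskstats divided by the sampling interval.

#!/usr/bin/env python3
# Rough approximation of the "average disk queue length" metric described
# above, computed from /proc/diskstats on Linux. Illustration only.
import time

def weighted_io_ms(device: str) -> int:
    # Field 11 of /proc/diskstats: weighted milliseconds spent doing I/O.
    with open("/proc/diskstats") as f:
        for line in f:
            parts = line.split()
            if parts[2] == device:
                return int(parts[13])
    raise ValueError(f"device {device!r} not found")

def avg_queue_length(device: str, interval_s: float = 5.0) -> float:
    # Same math iostat uses for avgqu-sz: delta(weighted I/O time) / elapsed time.
    start = weighted_io_ms(device)
    time.sleep(interval_s)
    end = weighted_io_ms(device)
    return (end - start) / (interval_s * 1000.0)

if __name__ == "__main__":
    q = avg_queue_length("sda")   # device name is an assumption
    # The rule of thumb quoted above: sustained values over ~2 suggest the
    # disks (or the array behind them) cannot keep up with demand.
    print(f"average queue length: {q:.2f}", "-> possible bottleneck" if q > 2 else "-> OK")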
Getting more out of resources
A key addition to Pronto.com’s Web infrastructure made all the difference in its e-commerce operations. Fifteen million people tap the Pronto.com comparison-shopping site each month to find the best deals on 70 million products for sale on the Web. If the site isn’t performing, revenue suffers. Pronto.com wanted to upgrade its load balancers in conjunction with a move from a hosting provider’s facility in Colorado to a Virginia data center operated by parent company IAC (which also owns Ask.com, Match.com and Evite, among other Internet businesses). The New York start-up invested in more than load balancing, however, when a cold call from Crescendo Networks led to a trial of the vendor’s application-delivery controllers. Load balancing is just one aspect of today’s application-delivery controllers, which combine such capabilities as TCP connection management, SSL termination, data compression, caching and network address translation. The devices manage server requests and offload process-intensive tasks from content servers to optimize Web application performance. “Our team knew load balancing really well, but we didn’t know optimization. And we didn’t know that optimization would be something that we’d really want,” recalls Tony Casson, director of operations at Pronto.com. When Casson and his team tried out Crescendo’s AppBeat DC appliances, however, they were convinced. In particular, the devices’ ability to offload TCP/IP and SSL transactions from Web servers won them over. A major benefit is that Pronto.com can delay new Web-server purchases even as its business grows. “It really has extended the life of our server platform,” Casson says. “The need for us to purchase new front-line equipment has been cut in half. Each Web server can handle approximately 1.5 times the volume it could before.”
Small investment, big payoff
IT projects don’t have to be grand to be game-changing. A low-priced desktop add-on is yielding huge dividends for the Miami-Dade County Public Schools. The school district installed power-management software on 104,000 PCs at 370 locations. By automatically shutting down desktops that aren’t in use, the software has reduced the district’s average PC-on time from 20.75 hours per day to 10.3 hours. In turn, it’s shaved more than $2 million from the district’s $80 million annual power bill. Best of all, because Miami-Dade already was using software from IT management vendor BigFix for asset and patch management, adding the vendor’s power-management component cost the district just $2 more per desktop. “There was very little effort to implement the program once we defined the operating parameters,” says Tom Sims, Miami-Dade’s director of network services.
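A back-of-the-envelope check shows why the payback was so quick. Only the PC count, the on-time reduction and the $2-per-desktop fee come from the article; the average desktop power draw and the electricity rate below are assumptions added for illustration.

# Rough sanity check of the Miami-Dade figures quoted above (illustrative only).
pcs          = 104_000
hours_saved  = 20.75 - 10.3        # per PC, per day (from the article)
avg_draw_kw  = 0.08                # assumption: ~80 W average desktop draw
rate_per_kwh = 0.10                # assumption: $0.10 per kWh

kwh_per_year   = pcs * hours_saved * 365 * avg_draw_kw
annual_savings = kwh_per_year * rate_per_kwh
license_cost   = pcs * 2           # $2-per-desktop power-management add-on

print(f"estimated energy saved : {kwh_per_year / 1e6:.1f} million kWh per year")
print(f"estimated cost savings : ${annual_savings / 1e6:.2f} million per year")
print(f"one-time license cost  : ${license_cost / 1e3:.0f}K "
      f"(pays back in roughly {license_cost / annual_savings * 365:.0f} days)")

Under those assumptions the estimate lands in the same range as the district’s reported savings of more than $2 million a year, with the software paying for itself in a matter of weeks.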
Automated shutdown
Besides saving money, the power-management project has kick-started a new wave of green IT efforts – something that’s as important to the school district’s managers as it is to users. “We all want to save energy and keep the environment clean and functional for our kids, more so because we are a public school system,” Sims says. Miami-Dade is working on the program’s second phase, the goal of which is to let IT administrators customize shut-down times on a school-by-school basis. That will result in more power saved and more reductions in carbon emissions. The district also is eyeing the chance for even bigger savings by turning off air conditioning units in areas where desktop computers are powered off. IP-controlled thermostats will enable Miami-Dade to coordinate PC and air conditioning downtime. “The potential cost savings is even bigger than the desktop-power cost savings,” Sims says. For that to happen, the IT group has been working more closely with facilities management teams — a collaboration Sims expects to grow. “There are IP-driven devices that will interface with all kinds of facilities equipment. These devices allow remote management and control by a central office via the organization’s data network. So, the possibilities seem limitless.”

Staying green
Going green is more than a buzzword for AISO.Net, which from its inception in 1997 has espoused eco-friendly operations. Other companies are buying carbon offsets to ease their environmental conscience, but energy credits aren’t part of the equation for AISO.Net. The company gets its data center electricity from 120 roof-mounted solar panels. Solar tubes bring in natural sunlight, eliminating the need for conventional lighting during the day; and air conditioning systems are water-cooled to conserve energy. As it did with its building infrastructure, AISO.Net overhauled its IT infrastructure with green savings in mind. By consolidating dozens of commodity servers to four IBM servers running VMware’s Infrastructure 3 software, it upped utilization while lowering electricity and cooling loads. CTO Nail sold the leftover servers on eBay. “Let somebody else have the power,” he says. For Nail, green business is good business, and that’s what he likes best about it. “Maybe it costs a little bit more, but it definitely pays for itself, and it’s doing the right thing for the environment,” he says. Customers like environmentally friendly technology too: “More and more companies are looking at their vendors to see what kind of environmental policies they have,” he says. Today AISO.Net is designing a rooftop garden for its data center; it estimates the green roof could reduce cooling and heating requirements by more than 50%. It’s also looking for an alternative way to store solar-generated power. That’s where the waste vegetable oil comes in. Nail wants to replace the company’s battery bank, which stores power from the solar panels, with a more environmentally friendly alternative. The idea is to retrofit a small generator to run not on diesel fuel but on recycled vegetable oil acquired from local restaurants and heated in 55-gallon drums. The generator in turn would feed power to air conditioning units, Nail says. The idea came from seeing others use waste vegetable oil to run cars, Nail says. “We figured, why can’t we take that technology and put it into something that would run our air conditioning?” he notes. “We’re kicking that around, trying to design it and figure out how we’re going to implement it.”
Thinking outside the storage box
Unbridled growth in data storage and the rise in Web 2.0 applications are forcing a storage rethink. Is this the end of the storage-area network as we know it?
By Joanne Cummings, Network World, 01/26/2009
With storage growth tracking at 60% annually, according to IDC, enterprises face a dire situation. Throwing disks at the problem simply doesn’t cut it anymore. Andrew Madejczyk, vice president of global technology operations at pre-employment screening company Sterling Infosystems, in New York, likens the situation to an episode of “House,” the popular medical drama. “On ‘House,’ there are two ways to approach a problem. You treat the symptoms, or you find out what the root cause is and actually end the problem,” Madejczyk says. “With storage, up until now, the path of least resistance was to treat the symptoms and buy more disks” – a method that surely would ignite the ire of the show’s caustic but brilliant Dr. Gregory House. Were the doctor prone to giving praise, he’d give a call out to enterprise IT managers who are rethinking this traditional approach to storage. He’d love that technologists are willing to go outside their comfort zones to find a solution, and he’d thrive on the experimentation and contentiousness that surround the diagnosis. House probably would find an ally in Casey Powell, CEO at storage vendor Xiotech. “Everybody acknowledges the problem and understands it, but nobody’s solving it. As technologists, we have to step back, look at the problem and design a different way,” Powell says.
Optimizing the SAN
Today most organizations store some combination of structured, database-type and unstructured file-based data. In most cases, they rely on storage-area network (SAN) technologies to ensure efficiencies and overall storage utilization, keeping costs down as storage needs increase. In and of themselves, SANs aren’t enough, however. Enterprises increasingly are turning to technologies that promise to provide an even bigger bang for the buck, including these:
• Data deduplication, which helps reduce redundant copies of data so firms can shrink not only storage requirements but also backup times.
• Thin provisioning, which increases storage utilization by making storage space that has been overprovisioned for one application available to others on an as-needed basis (a file-level sketch of the idea appears at the end of this subsection).
• Storage tiering, which uses data policies and rules to move noncritical data to slower, less expensive storage media and leave expensive Tier 1 storage free to handle only the most mission-critical applications.
• Storage resource management software, which helps users track and manage storage usage and capacity trends.
“In the classic SAN environment, these tools don’t just provide a partial solution. They allow you to make fundamental improvements,” says Rob Soderbery, senior vice president of Symantec’s Storage and Availability Management Group. Clients that have pursued these strategies even have been able to freeze storage spending for a year at a time, he says. “And when they get back on the storage spending cycle, they get back on at about half the spending rate they were at before,” he adds. Although few IT executives report such dramatic reductions in storage spending, many are pursuing such strategies. At Sterling, for example, moving from tape- to disk-based backups via Sepaton’s S2100-ES2 virtual tape library reduced the time it takes for nightly backups from 12 to just a few hours, Madejczyk says. Sepaton’s deduplication technology provides an added measure. In addition, he has virtualized more than 90% of his server environment, “reducing our footprint immensely,” and implemented EMC thin provisioning and storage virtualization. Still, his company’s storage needs keep growing, Madejczyk says. “In this economy, Sterling is being very responsible and careful about what we spend on,” he says. “We’re concentrating on the data-management part of the problem, and we’re seeing results. But it’s a difficult problem to solve.” Tom Amrhein, CIO at Forrester Construction in Rockville, Md., has seen similar growth. The company keeps all data associated with its construction projects in a project management database, so the vast majority of that stored data is structured in nature. Regulatory and compliance issues have led to increased storage needs nonetheless. “Most companies need to keep their tax records for seven years, and that’s as long as they need to keep anything,” Amrhein says. “But tax records are our shorter-cycle data. Depending on the jurisdiction, the time we could be at fault for any construction defect is up to 10 years – and we’re required to have the
same level of discovery response for a project completed nine years ago as we would for a project that’s been closed out two weeks.” Forrester Construction has cut down a bit on storage needs by keeping the most data-intensive project pieces – building drawings, for example – on paper. “Because the scanning rate is so high and paper storage costs so low, retaining those as physical paper is more cost-effective,” Amrhein says. The real key to keeping costs in check, however, is storage-as-a-service, Amrhein says. IT outsourcer Connectria hosts the company’s main application servers, including Microsoft Exchange, SQL Server and SharePoint; project management, finance, CRM and Citrix Systems. It handles all that storage, leaving Forrester Construction with a predictable, flat monthly fee. “I pay for a set amount of gigabytes of storage as part of the SLA [service-level agreement], and then I pay $1 per gig monthly for any excess,” Amrhein explains. “That includes the backup, restore and all the services around those. I’m paying $25K a month to Connectria, plus paying for about 10GB over my SLA volume. That overage is a wash.” For the firm’s unstructured data, Forrester Construction uses Iron Mountain’s Connected Backup for PCs service, which automatically backs up all PCs nightly via the Internet. If a PC is not connected to the Internet at night, the user receives a backup prompt on the next connection. “With 60% of the people out of the office, this is perfect for us,” Amrhein says. “Plus, Iron Mountain helps us reduce the data volume by using deduplication,” he says. “Even for people on a job site with a wireless card or low-speed connection, it’s just a five- or 10-minute thing.” Still, the unstructured side is where the construction company sees its biggest storage growth. E-mail and saved documents are the biggest problem areas.
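The thin-provisioning entry in the list earlier in this section can be illustrated at file level with a sparse file: capacity is promised up front, but physical blocks are consumed only as data is written. This is a local-filesystem analogy on a POSIX system (behavior depends on the filesystem), not a feature of any particular array.

# File-level analogy for thin provisioning: present a large logical size,
# allocate real space only on write. Requires sparse-file support (ext4, XFS
# and similar filesystems).
import os

path = "thin_volume.img"
logical_size = 10 * 1024**3            # "present" 10 GB to the consumer

with open(path, "wb") as f:
    f.truncate(logical_size)           # sparse file: no blocks allocated yet

st = os.stat(path)
print(f"logical size : {st.st_size / 1024**3:.1f} GB")
print(f"actually used: {st.st_blocks * 512 / 1024**2:.1f} MB")   # ~0 until written

with open(path, "r+b") as f:
    f.write(b"x" * (64 * 1024**2))     # touching ~64 MB consumes real space

st = os.stat(path)
print(f"after write  : {st.st_blocks * 512 / 1024**2:.1f} MB in use")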
The rise in Web 2.0 data
Forrester Construction is not alone there. In the enterprise, IDC reports, structured, transactional data will grow at a 27.3% compounded annual rate over the next three to five years. The rise in unstructured, file-based data will dwarf that growth rate, however. IDC expects the amount of storage required for unstructured, file-based data to increase at an unprecedented 69.4% clip. By 2010, enterprises for the first time will find unstructured storage needs outstripping traditional, structured storage demands. The rub here is that although SANs are extremely efficient at handling structured, transactional data, they are not well optimized for unstructured data. “SANs are particularly ill-suited to Web 2.0, scale-out, consumer-oriented-type applications,” Symantec’s Soderbery says. “No. 1, the applications’ architecture is scale-out, so you have hundreds or thousands of systems working on the same problem instead of one big system, like you would have with a database. And SANs aren’t designed that way. And No. 2, these new applications – like storing photos on Facebook or video or display ads or consumer backup data – are tremendously data intensive.” Symantec hit the wall with this type of data in supporting its backup-as-a-service offering, which manages 26 petabytes of data, Soderbery says. “That probably puts us in the top 10 or 20 storage consumers in the world. We could never afford to implement a classic Tier 1 SAN architecture,” he says. Instead, Symantec went the commodity path, using its own Veritas Storage Foundation Scalable File Server software to tie it all together. “The Scalable File Server allows you to add file server after file server, and you get a single namespace out of that cluster of file servers. This in turn allows you to scale up your application and the amount of data arbitrarily. And the software runs on pure commodity infrastructure,” Soderbery explains. Plus, the storage communicates over a typical IP network vs. a more expensive Fibre Channel infrastructure. Symantec’s approach is similar to that of the big cloud players, such as Google and Amazon.com. “We happen to build packaged software to enable this, whereas some of the early adopters built their own software and systems. But it all works the same way,” Soderbery says. The prudent approach to storage as it continues to grow, Soderbery says, is to optimize and use SANs only for those applications that merit them – such as high-transaction, mission-critical ERP applications. Look to emerging commodity-storage approaches for more
scale-out applications, such as Web 2.0, e-mail and interactive call-center programs. Does that mean enterprises need to support SANs and new cloud-like scale-out architectures to make sure they’re managing storage as efficiently as possible? Perhaps. Eventually, however, the need to support unstructured, scale-out data will trump the need to support structured, SAN-oriented data, IDC research shows. With that in mind, smart organizations gradually will migrate most applications off SANs and onto new, less expensive, commodity storage setups.
A new enterprise approach
One interesting strategy could provide an evolutionary steppingstone in the interim: using Web services. Championed primarily by Xiotech, the idea is to use the Web-services APIs and standards available from such organizations as the World Wide Web Consortium (W3C) as the communications link between applications and storage. “The W3C has a nifty, simple model for how you talk between applications and devices. It includes whole sets of standards that relate to how you provision resources in your infrastructure, back to the application,” says Jon Toigo, CEO of analyst firm Toigo Partners International. “All the major application providers are Web-services enabled in that they ask the infrastructure for services. But nobody on the storage hardware side is talking back to them.” Nobody, that is, except Xiotech. Xiotech’s new Intelligent Storage Element (ISE) is the first storage ware to talk back, although other vendors quickly are readying similar technology, Toigo says. ISE, based on technology Xiotech acquired from Seagate Technology, is a commodity building-block of storage, supporting as many as 40 disk drives plus processing power and cache that can be added to any storage infrastructure as needed. Xiotech claims ISE can support anything from high-performance transactional processing needs to scale-out Web 2.0 applications. All storage vendors should work to Web-services-enable their hardware and software so they can communicate directly with applications, Xiotech’s Powell says. This would preclude vendor lock-in and let enterprises build storage environments using best-in-breed tools instead of sticking with the all-in-one-array approach. “They’d be able to add more storage, services or management, without having to add everything monolithically to a SAN,” Powell says.
Eventually Web services support will have virtualized storage environments realizing even greater efficiencies, to the point where applications themselves provision and de-provision storage. “Today, when we provision storage, we have to guess, and typically, we either over- or underprovision,” Powell says. “And then, when the user is no longer using it, we seldom go back and reclaim the storage. But the application knows exactly what it needs, and when it needs it. Via Web services, it can request what it needs on the fly, and as long as its request is within the parameters and policies we set up initially, it gets it.” Web services already have proved an efficient storage tack at ISE user Raytown Quality Schools in Missouri, says Justin Watermann, technology coordinator for the school system. The system went with Xiotech shortly after it moved to a new data center and created an all-virtual server infrastructure. A big plus has been Xiotech’s Virtual View software, which uses Web services to communicate with VMware’s VirtualCenter management console for its ESX servers, Watermann says. He can manage his virtualized server and storage infrastructure from a single console. “When you create a new data store, Virtual View shows you what port and LUN [logical unit number] is available to all of your ESX hosts in that cluster,” Watermann says. “And when you provision it, it uses Web services to communicate with VirtualCenter, and says, ‘OK, this is the data store for these ESX hosts.’ And you automatically have the data store there and available to use. You don’t even have to refresh or restore.” That bit of automation saves on administration, but enabling the application to do the provisioning and de-provisioning would be an even greater boon, Watermann says. “It’s really hard to get more staff, and you only have so many hours in the day. If you don’t have to tie your staff up with the repetitive tasks of carving up space and assigning it, so much the better.”
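A minimal sketch of the application-driven provisioning model Powell and Watermann describe might look like the following. The endpoint, request fields and policy limits are invented for illustration; no actual Xiotech, Virtual View or W3C interface is assumed.

# Hypothetical sketch: an application asks the storage layer for capacity and
# the request is honored only within preset policy, as described above.
import json
import urllib.request

POLICY = {"max_gb_per_request": 500, "allowed_tiers": {"tier2", "tier3"}}

def request_storage(endpoint: str, app: str, size_gb: int, tier: str) -> dict:
    if size_gb > POLICY["max_gb_per_request"] or tier not in POLICY["allowed_tiers"]:
        raise ValueError("request falls outside the provisioning policy")
    body = json.dumps({"application": app, "size_gb": size_gb, "tier": tier}).encode()
    req = urllib.request.Request(endpoint, data=body,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:     # would return, e.g., a LUN handle
        return json.load(resp)

# Example call (assumes a provisioning listener exists at this invented URL):
# volume = request_storage("https://storage.example.local/provision",
#                          app="erp-reporting", size_gb=200, tier="tier2")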
If they build it . . .
Now Watermann is using a Web service from Xiotech partner Eagle Software to improve the school system’s backup times. Eagle, a storage reseller, provides Raytown with backup software from CommVault Systems. “The Web-services tool lets us mirror our data, pause that mirror, attach that to the backup server, back it up, then disconnect it from the backup server, unpause the mirror, and re-sync that data so we don’t have to push it across our Ethernet network,” Watermann says. Like Eagle, companies are starting to develop pieces to put Web-services-enabled storage all together, Watermann adds. Such small, point approaches are the norm today, but experts say that in five or 10 years, every application and every device will use some kind of software glue, such as Web services, to provide a range of storage and IT services in an automated, efficient manner. “It would have to be levels of software that create services that include computing resources, network and storage infrastructure, and the retention and reliability metrics associated with all of those components,” Symantec’s Soderbery says. “It will be a combination of the point solutions we see now, like VMware and SAN virtualization via Cisco and Brocade, plus thin provisioning, replication and deduplication. We’re going to require all of those things to work in concert with a level of software that ties them together cohesively to provide those appropriate levels of service to the application.” Others, including Ken Steinhardt, CTO for customer operations at EMC, are less optimistic. “If someone could write something magical that does things that we’d love to have, wouldn’t that be great?” he asks rhetorically. That would take a miracle, Steinhardt says. “The tools to write software are out there, but the catch is it’s just not that simple. You need to be able to have a solution that works broadly across a range of applications as well, and typical, highly consolidated environments run a mix of broad, diverse apps, not just a single application. I don’t see it happening,” he says. The Web services model is too much of a stretch to Steinhardt: “From a storage perspective, Web services are a completely separate issue. We’re talking about storing zeros and ones on a storage device. That’s pretty agnostic to the individual application and always has been,” he says. Not so, analyst Toigo asserts. Web services provide a common framework, so by default, they can support every application. “People need to tell their vendors, ‘Look, I’m not buying your junk anymore if I can’t manage it using this common [Web services] metaphor,’” he says. “That puts you in the driver’s seat.” Cummings is a freelance writer in North Andover, Mass. She can be reached at [email protected].
Section 2: Emerging technologies
One in 10 companies deploying internal clouds
By Jon Brodkin, Network World, 12/15/2008
Enterprise IT shops are starting to embrace the notion of building private clouds, modeling their infrastructure after public service providers such as Amazon and Google. But while virtualization and other technologies exist to create computing pools that can allocate processing power, storage and applications on demand, the technology to manage those distributed resources as a whole is still in the early stages.
The corporations building their own private clouds include such notable names as Bechtel, Deutsche Bank, Morgan Stanley, Merrill Lynch and BT, according to The 451 Group. The research firm found in a survey of 1,300 corporate software buyers that about 11% of companies are deploying internal clouds or planning to do so. That may not seem like a huge proportion, but it’s a sign that private clouds are moving beyond the hype cycle and into reality. “It’s definitely not hype,” says Vivek Kundra, CTO for the District of Columbia government, which plans to blend IT services provided from its own data center with external cloud platforms like Google Apps. “Any technology leader who thinks it’s hype is coming at it from the same place where technology leaders said the Internet is hype.”
At the center of cloud computing is a services-oriented interface between a provider and user, enabled by virtualization, says Gartner analyst Thomas Bittman. “When I move away from physical to virtual machines for every requirement, I’m drawing a layer of abstraction,” Bittman says. “What virtualization is doing is you [the customers] don’t tell us what server to get, you just tell us what service you need.” While virtualization technologies for servers, desktops and storage are readily available, Gartner says to get all the benefits of cloud computing, enterprises will need a new meta operating system that controls and allocates all of an enterprise’s distributed computing resources. It’s not clear exactly how fast this technology will advance. VMware plans to release what might be considered a meta operating system with its forthcoming Virtual Datacenter Operating System. But cloud computing is less a new technology than it is a way of using technology to achieve economies of scale and offer self-service resources that are available on demand, The 451 Group says. Numerous enterprises are taking on this challenge of building more flexible, service-oriented networks using existing products and methodologies. Thin clients and virtualization are the key for Lenny Goodman, director of the
desktop management group at Baptist Memorial Health Care in Memphis, Tenn. Baptist uses 1,200 Wyse Technology thin clients, largely at patients’ bedsides, and delivers applications to them using Citrix XenApp application virtualization tools. Baptist also is rolling out virtual, customizable desktops to those thin clients using Citrix XenDesktop. Just as Internet users can access Amazon, Google, Barnes & Noble or any Web site they wish to use from anywhere, Goodman wants hospital workers to be able to move among different devices and have the same experience. “You get the advantage of taking that entire experience and making it roam without the nurse having to carry or push anything,” he says. “They can move from device to device.” Goodman also says a cloud-based model where applications and desktops are delivered from a central data center will make data more secure, because it’s not being stored on individual client devices. “If we relocate that data to the data center by virtualizing the desktop, we can back it up, we can secure it, and we can provide that data to the user wherever they are,” he says. In the Washington, D.C., government, Kundra came on board in March 2007 with the goal of establishing a DC.gov cloud that would blend services provided from his own data center with external cloud platforms like Google Apps. Washington moved aggressively toward server virtualization with VMware, and made sure it had enough network bandwidth to support applications hosted on DC.gov. The move toward acting as an internal hosting provider as well as accessing applications outside the firewall required an increased focus on security and user credentials, Kundra says. But that was a necessary part of giving users the same kind of anytime, anywhere access to data and applications they enjoy as consumers of services in their personal lives. “The line is blurred,” he says. “It used to be you would come to work and only work. The blurring started with mobile technologies, BlackBerries, people doing work anytime, anywhere.”
While Kundra and Goodman have begun thinking of themselves as internal cloud providers, many other IT shops view cloud computing solely as it relates to acquiring software-as-a-service and on-demand computing resources from external providers such as Salesforce. “Cloud computing is definitely the hot buzzword,” says Thomas Catalini, a member of the Society for Information Management and vice president of technology at insurance brokerage William Gallagher Associates in Boston.“To me it means outsourcing to a hosted provider. I would not think of it in terms of cloud computing to my own company. [Outsourcing] relieves me of having to buy hardware, software and staff to support a particular solution.” Analyst Theresa Lanowitz of Voke, a strong proponent of using external clouds to reduce management costs, says building internal clouds is too difficult for most IT shops. “That is a cumbersome task,” she says. “One of the big benefits of cloud computing is the fact that you have companies out there who can offer up things in a
cloud. To build it on your own is quite an ambitious project. Where I see more enterprises going is down the path of renting clouds that have already been built out by some service provider.” There is room for both internal and external cloud computing within the same enterprise, though. In Gartner’s view, corporations that build their own private clouds will also access extra capacity from public providers when needed. During times of increased demand, the meta operating system as described by Gartner will automatically procure additional capacity from outside sources, and users won’t necessarily know whether they are using computing capacity from inside or outside the firewall. While “cloud” might strike some as an overused buzzword, Kundra views cloud computing as a necessary transition toward more flexible and adaptable computing architectures. “I believe it’s the future,” Kundra says. “It’s moving technology leaders away from just owning assets, deploying assets and maintaining assets to fundamentally changing the way services are delivered.”
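The “procure additional capacity from outside sources” behavior Gartner describes boils down to a small placement rule. The sketch below is a toy illustration; the pool sizes, capacity unit and placement logic are assumptions, not any vendor’s implementation.

# Toy "cloud burst" scheduler: prefer the private pool, spill to an external
# provider only when the internal pool cannot absorb the request.
from dataclasses import dataclass

@dataclass
class Pool:
    name: str
    capacity_vms: int
    used_vms: int = 0

    def has_room(self, n: int) -> bool:
        return self.used_vms + n <= self.capacity_vms

def place_workload(n_vms: int, internal: Pool, external: Pool) -> str:
    target = internal if internal.has_room(n_vms) else external
    target.used_vms += n_vms
    return target.name

private = Pool("private-cloud", capacity_vms=200, used_vms=190)
public  = Pool("public-provider", capacity_vms=10_000)

print(place_workload(5, private, public))    # fits internally -> private-cloud
print(place_workload(20, private, public))   # bursts out      -> public-provider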
Google, Microsoft spark interest in modular data centers, but benefits may be exaggerated
Experts question energy efficiency claims
By Jon Brodkin, Network World, 10/13/2008
Interest in modular data centers is growing, fueled by high-profile endorsements from Microsoft and Google. But the model raises new management concerns, and efficiency claims may be exaggerated. Modular, containerized data centers being sold by vendors such as IBM, Sun and Rackable Systems fit storage and hundreds, sometimes thousands, of servers into one large shipping container with its own cooling system. Microsoft, using Rackable containers, is building a data center outside Chicago with more than 150 containerized data centers, each holding 1,000 to 2,000 servers. Google, not to be outdone, secured a patent last year for a modular data center that includes “an intermodal shipping container and computing systems mounted within the container.” To hear some people tell it, a containerized data center is far easier to set up than a traditional data center, easy to manage and more power-efficient. It also should be easier to secure permits, depending on local building regulations. Who wouldn’t want one? If a business has a choice between buying a shipping container full of servers, and building a data center from the ground up, it’s a no-brainer, says Geoffrey Noer, a vice president at Rackable, which sells the ICE Cube Modular Data Center. “We don’t believe there’s a good reason to go the traditional route the vast majority of the time,” he says. But that is not the consensus view by any stretch of the imagination. Claims about efficiency are over-rated, according to some observers. Even IBM, which offers a Portable Modular Data Center and calls the
container part of its green strategy, says the same efficiency can be achieved within the four walls of a normal building. IBM touts a “modular” approach to data center construction, taking advantage of standardized designs and predefined components, but that doesn’t have to be in a container. “We’re a huge supporter of modular. We’re a limited supporter of container-based data centers,” says Steve Sams, vice president of IBM Global Technology Services. Containers are efficient because they pack lots of servers into a small space, and use standardized designs with modular components, he says. But you can deploy storage and servers with the same level of density inside a building, he notes. Container vendors often tout 40% to 80% savings on cooling costs. But
according to Sams,“in almost all cases they’re comparing a highly dense [container] to a low-density [traditional data center].” Containers also eliminate one scalability advantage related to cooling found in traditional data centers, according to Sams. Just as it’s more efficient to cool an apartment complex with 100 living units than it is to cool 100 separate houses, it’s more cost-effective to cool a huge data center than many small ones, he says. Air conditioning systems for containerized data centers are locked inside, just like the servers and storage, making true scalability impossible to achieve, he notes. Gartner analyst Rakesh Kumar says it will take a bit of creative marketing for vendors to convince customers that containers are inherently more efficient
than regular data centers. Gartner is still analyzing the data, but as of now Kumar says, “I don’t think energy consumption will necessarily be an advantage.”
Finding buyers That doesn’t mean there aren’t any advantages, however. A container can be up and running within two or three months, eliminating lengthy building and permitting times. But if you need an instant boost in capacity, why not just go to a hosting provider, Kumar asks. “We don’t think it’s going to become a mainstream solution,” he says.“We’re struggling to find real benefits.” Kumar sees the containers being more suited to Internet-based,“hyper-scale” companies such as Google, Amazon and Microsoft. Containerized data centers offer scalability in big chunks, if you’re willing to buy more containers. But they don’t offer scalability inside each container, once it has been filled, he says. Container vendors tout various benefits, of course. Each container is almost fully self-contained, Rackable’s Noer says. Chilled water, power and networking are the only things from the outside world that must be connected to each one, he says. Rackable containers, which can be fitted with as many as 22,400 processing cores in 2,800 servers, are water-tight, fitted with locks, alarms and LoJack-like tracking units. Sun’s Modular Data Center can survive an earthquake — the company made sure of that by testing it on one of the world’s largest shake tables at the University of California in San Diego. A fully-equipped Rackable ICE Cube costs several million dollars, mostly for the servers themselves, Noer says. The container pays for itself with lower electricity costs due to an innovative Rackable design that maximizes server density, Noer says. But it’s still too early to tell whether containerized data centers are the way of the future.“We’re just at the cusp of broad adoption,” Noer says. Potential use cases for containers include disaster recovery, remote locations such as military bases, or big IT hosting companies that would prefer not to build brick-and-mortar data centers, Kumar says.
A TV crew that follows sporting events may want a mobile data center, says Robert Bunger, director of business development for American Power Conversion. APC doesn’t sell its portable data center, but in 2004 it built one into a tractor-trailer as a proof-of-concept. It was resilient.“We pulled that trailer all over the country” for demos, Bunger notes. But APC isn’t seeing much demand, except in limited cases. For example, a business that needs an immediate capacity upgrade but is also planning to move its data center in a year might want a container because it would be easier to move than individual servers and storage boxes.
UC-San Diego bought two of Sun's Modular Data Centers. One goal is to contain the cost of storing and processing rapidly increasing amounts of data, says Tom DeFanti, principal investigator of the school's GreenLight energy efficiency research project. But it will take time to see whether the container approach is more efficient. "The whole idea is to create an experiment to see if we can get more work per watts," DeFanti says. The Modular Data Center is not as convenient to maintain as a regular computer room, because there is so little space to maneuver inside, he says. "But it seems to me to be an extremely well-designed and thought-out system," DeFanti says. "It gives us a way of dealing with the exploding amount of scientific computing that we need to do."
Beware vendor lock-in
Before purchasing a containerized data center, enterprises should consider several issues related to their manageability and usefulness. Vendors often want you to fill the containers with only their servers, Kumar notes. Besides limiting flexibility at the time of purchase, this raises the question of what happens when those servers reach end-of-life. Will you need the vendor to rip out the servers and put new ones in, once again limiting your choice of technology? "At the moment, most vendors will fill their containers only with their servers," Kumar says. IBM, however, says it uses industry-standard racks in its portable data center, allowing customers to buy whatever technology they like. DeFanti said Sun's Modular Data Center allows him the flexibility to buy a heterogeneous mix of servers and storage. Rackable, though, steers customers toward either its own servers or IBM BladeCenter machines through a partnership with IBM. "I think vendors are learning that people want more flexibility," DeFanti says. Another consideration is failover capabilities, says Lee Kirby, who provides site assessments, data center designs and other services as the general manager of Lee Technologies. If one container goes down, its work must be transferred to another. Server virtualization will help provide this failover capability, and also make it easier to manage distributed containerized data centers — an important consideration for customers who want to distribute computing power and have it reside as close to users as possible, Kirby says. "I think it is key that the combination of virtualization and distributed infrastructure produce a container that can be out of service without impacting the application as a whole," Kirby says.
The big data-center synch With Fibre Channel over Ethernet now a reality, enterprises prep for data and storage convergence
By Dawn Bushaus, Network World, 01/26/2009
For the County of Los Angeles – with more than 10 million residents, the nation's largest county – the timing couldn't be better for the arrival of Fibre Channel over Ethernet on the data-center scene. L.A. County has undertaken a server virtualization initiative, a data-center redesign and construction of a new data center scheduled for completion in late 2011. "When we put those three things together, we knew we needed a new design that could accommodate them all," says Jac Fagundo, the county's enterprise solutions architect. Because it can unite data and storage networks on a 10 Gigabit Ethernet fabric, FCoE enabled Fagundo to turn that design vision into a reality. The plan for the new data center calls for extending virtualization beyond hosting servers to data and storage networks, and includes an overhaul of the county's stovepipe approach to storage and server deployments. "We realized we were going to have to integrate the storage," Fagundo says, "so we decided, why not go a step further and integrate the storage and IP networks in the data center?" When the county started looking at FCoE as an option, it turned to Cisco, its Ethernet switch vendor. It completed a proof-of-concept trial using the Nexus 7000 and 5000 switches – switches that represent Cisco's biggest push yet into the enterprise data center. The 5000 supports a prestandard version of FCoE, while the 7000 will be upgraded with FCoE features once the American National Standards Institute finalizes the FCoE standards, probably in late 2009. Cisco currently is the only vendor offering FCoE support. Its chief Fibre
Channel competitor, Brocade Communications, is beta-testing an FCoE switch. Juniper Networks also is considering adding FCoE capability to its data-center switches – if customer demand warrants it, the company says. L.A. County is replacing eight Cisco Catalyst 6500 Ethernet switches with two Nexus 7000 switches in its data-center core, and will add a Nexus 5000 switch to the data-center distribution network. The county expects to have live traffic running through the switches by spring. "This is a multiyear project where the platform features will evolve as we migrate into it," Fagundo says. "Our next-generation data center, of which the Nexus switches and FCoE are a part, is a design that we plan to implement over the next five years."
A technology for tough times
For large enterprises operating distinct Fibre Channel storage-area networks (SAN) and Ethernet data networks, FCoE's potential benefits and cost savings are promising, even though the technology requires new hardware. "The rule of thumb is that during tight economic times, incumbent technologies do well and it's not wise to introduce a new technology – unless your proposition is one of economic favor," says Skip Jones, president of the Fibre Channel Industry Association (FCIA). "And that's exactly what FCoE is." Jones has an obvious bias toward FCoE, but Bob Laliberte, analyst at Enterprise Strategy Group, agrees. "The economy will certainly get more companies to look at FCoE," he says. "If vendors can show how FCoE can reduce the cost of powering,
cooling and cabling, that will be a compelling reason for companies to look at it." At a minimum, adopting FCoE lets a company consolidate server hardware and cabling at the rack level. That's the first step in moving to FCoE. Then more consolidation can take place in the core switching fabric, and that reduces costs even further. Ultimately, storage arrays will support FCoE natively (NetApp already has announced such a product), making end-to-end FCoE possible. L.A. County's Fagundo expects substantial cost savings from using FCoE, although he declined to share detailed estimates. He did say, however, that power consumption alone will be cut in half when the eight Cisco Catalyst switches are consolidated into two Nexus switches. In addition, FCoE will let the county reduce server hardware and cabling by using a converged network adapter (CNA) to replace Ethernet network interface cards and Fibre Channel host bus adapters. The cost savings in infrastructure, components, edge devices and cabling are compelling reasons to consider FCoE, agrees Ian Rousom, corporate data-center network architect at defense contractor Lockheed Martin in Bethesda, Md. "Cabling is often overlooked, but you can potentially eliminate one of two cable plants using FCoE. If you consolidate racks, you eliminate the biggest source of cable sprawl in the data center." Cisco estimates that enterprises can expect a 2% to 3% savings in power consumption during FCoE's rack-implementation phase,
and a total of about 7% savings once FCoE is pushed into the data-center core.
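Taken together, the consolidation math is easy to sketch. The short Python example below applies the rough savings percentages quoted above and the NIC-plus-HBA-to-CNA substitution; the per-server adapter counts, rack size and baseline power draw are assumptions for illustration, not figures from Cisco or L.A. County.

# Illustrative only: a rough model of rack-level FCoE consolidation savings.
# The 2-3% (rack phase) and ~7% (core phase) power-savings figures come from
# the Cisco estimates quoted in the article; per-server adapter counts and
# the baseline power draw are hypothetical assumptions.

def adapters_saved(servers_per_rack: int, nics_per_server: int = 2,
                   hbas_per_server: int = 2, cnas_per_server: int = 2) -> int:
    """Adapters (and their cables) eliminated when NICs + HBAs collapse into CNAs."""
    before = servers_per_rack * (nics_per_server + hbas_per_server)
    after = servers_per_rack * cnas_per_server
    return before - after

def power_saved_kw(baseline_kw: float, phase: str) -> float:
    """Apply the article's rough savings percentages to a baseline power draw."""
    pct = {"rack": 0.025, "core": 0.07}[phase]   # midpoint of 2-3%, and ~7%
    return baseline_kw * pct

if __name__ == "__main__":
    print("Adapters/cables removed per 40-server rack:", adapters_saved(40))
    print("Est. kW saved at rack phase on a 500 kW load:",
          round(power_saved_kw(500, "rack"), 1))
    print("Est. kW saved once FCoE reaches the core:",
          round(power_saved_kw(500, "core"), 1))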
Tall orders for FCoE
The promise of FCoE lies in a merged data and storage network fabric, but making the integration happen isn't easy and shouldn't be done without careful planning. Typically in an enterprise, network and storage management teams don't work together. That has to change with FCoE. "Certainly at a minimum, collaboration between Ethernet and Fibre Channel departments will have to increase, and merging the two would not be a bad idea," says Rousom, who is considering FCoE. "The people who manage the infrastructure have to agree on policy, change processes, troubleshooting policies and
the configuration management process." That could be a tall order in many enterprises, FCIA's Jones says. "Network and storage administrators drink different kinds of tea; they're different breeds. There will be some turf wars." L.A. County has dealt with those differences by combining its data-center storage and network groups under a single management umbrella. Initially, the groups' engineering staffs were concerned about merging, Fagundo says. "Then they started contributing to the design and proof of concept, and now they're happy about it." Beyond cultural issues, enterprises must communicate with their server vendors to find out whether their platforms support CNAs as opposed to separate Ethernet and Fibre Channel adapters, Rousom says. IT managers also must consider security
ramifications. "If the Fibre Channel network has a firewall or special security features, the FCoE products may not offer those as part of their feature sets," he explains. For now, FCoE is gaining traction with such early adopters as L.A. County, where unique circumstances have made the time right to consider a big change. Many more IT executives will take a more cautious approach, however, as Rousom is doing. "The fact that FCoE is still in the standards process tells me that there is still potential for change at the hardware level before this is all complete," Rousom says. "That, more than anything, is preventing me from buying it." Bushaus is a freelance technology writer in the Chicago area. She can be reached at
[email protected].
Next-generation data centers: opportunities abound Section 3
Best practices
Seven tips for succeeding with virtualization Experts share best practices for optimizing strategic virtualization initiatives
By Denise Dubie, Network World, 10/20/2008
As server virtualization projects gain scale and strategic value, enterprise IT managers must move quickly beyond tactical approaches to achieve best results. Consider these Gartner forecasts: More than 4 million virtual machines will be installed on x86 servers this year, and the number of virtualized desktops could grow from less than 5 million in 2007 to 660 million by 2011. The popularity of virtualizing x86 server and desktop resources has many enterprise IT managers reassessing ways to update already virtualized network and storage resources, too. Virtualization's impact will spread beyond technology changes to operational upheaval. Not only must enterprise IT executives move from a tactical to a strategic mindset but they also must shift their thinking and adjust their processes from purely physical to virtual. "Enterprise IT managers are going to have to start thinking virtual first and learn how to make the case for virtualization across IT disciplines," says James Staten, principal analyst at Forrester Research. "This will demand they change processes. Technologies can help, but if managers don't update their best practices to handle virtual environments, nothing will get easier."
Here enterprise IT managers and industry watchers share best practices they say will help companies seamlessly grow from 30 to 3,000 virtual machines without worry.
1. Approach virtualization holistically
Companies considering standardizing best practices for x86-based server virtualization should think about how they plan to incorporate desktop, application, storage and network virtualization in the future. IT has long suffered from a silo mentality, with technology expertise living in closed clusters. The rapid adoption of virtualization could exacerbate already strained communications among such IT domains as server, network, storage, security and applications. "This wave of virtualization has started with one-off gains, but to approach the technology strategically, IT managers need to look to the technology as achieving more than one goal across more than one IT group," says Andi Mann, research director at Enterprise Management Associates. To do that, an organization's virtualization advocates should champion the technology by initiating discussions among various IT groups and approaching vendors with a broad set of requirements that address short- and long-term goals. Vendors
with technologies in multiple areas, such as servers and desktops, or with partnerships across IT domains could help IT managers better design their virtualization-adoption road maps. More important, however, is preventing virtualization implementations from creating more problems via poor communications or antiquated organizational charts, industry watchers say. "With ITIL and other best-practice frameworks, IT has become better at reaching out to other groups, but the speed at which things change in a virtual environment could hinder that progress," says Jasmine Noel, a principal analyst at Ptak, Noel and Associates. "IT's job is to evolve with the technology and adjust its best practices, such as change management, to new technologies like virtualization."
2. Identify and inventory virtual resources
Understanding the resources available at any given time in a virtual environment requires enterprise IT managers to enforce strict processes from a virtual machine's birth through death. Companies need a way to identify virtual machines and other resources throughout their life cycles, says Pete Lindstrom, research director at Spire Security. The type of virtual-machine tagging he suggests would let IT managers "persistently
identify virtual-machine instances over an extended period of time," and help to maintain an up-to-date record of the changes and patches made to the original instance. The process would provide performance and security benefits because IT managers could weed out problematic virtual machines and keep an accurate inventory of approved instances. "The ability to track virtual machines throughout their life cycles depends on a more persistent identity scheme than is needed in the physical world. IT needs to know which virtual resources it created and which ones seemed to appear over time," Lindstrom explains. "The virtual world is so much more dynamic that IT will need granular identities for virtual machines and [network-access control] policies that trigger when an unknown virtual machine is in the environment. Rogue virtual machines can happen on the client or the hypervisor." For instance, using such tools as BMC Software's Topology Discovery, EMC's Application Discovery Manager or mValent's Integrity, an IT manager could perform an ongoing discovery of the environment and track how virtual machines have changed. Manual efforts couldn't keep pace with the configuration changes that would occur because of, say, VMware VMotion or Microsoft Live Migration technologies. "IT has to stay on top of a lot more data in a much more dynamic environment," O'Donnell says.
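Lindstrom's tagging-and-discovery argument reduces to reconciling what discovery finds against what IT approved. Here is a minimal, hypothetical Python sketch of that reconciliation, assuming each virtual machine carries a persistent identifier; the inventory contents and field names are invented.

# Minimal sketch: compare a discovery scan against an approved VM inventory.
# The persistent IDs below stand in for whatever identity scheme (UUIDs, tags)
# your tooling provides; data sources and names are hypothetical.

approved = {
    "4203af1c": {"name": "erp-db-01", "owner": "finance", "patch_level": "2008-09"},
    "4203bb72": {"name": "web-07",    "owner": "ecommerce", "patch_level": "2008-10"},
}

discovered = {
    "4203af1c": {"host": "esx03", "name": "erp-db-01"},
    "4203bb72": {"host": "esx01", "name": "web-07"},
    "4203ff90": {"host": "esx02", "name": "test-clone"},   # nobody approved this one
}

rogue = set(discovered) - set(approved)          # running but never approved
missing = set(approved) - set(discovered)        # approved but no longer seen

for vm_id in sorted(rogue):
    print(f"ALERT: unknown VM {vm_id} ({discovered[vm_id]['name']}) "
          f"on {discovered[vm_id]['host']} - apply NAC quarantine policy")
for vm_id in sorted(missing):
    print(f"NOTE: approved VM {vm_id} ({approved[vm_id]['name']}) not found in scan")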
3. Plan for capacity
Just because virtual machines are faster to deploy than physical ones doesn't mean the task should be taken lightly. "If you are not careful, you can have a lot of virtual machines that aren't being used," says Ed Ward, senior technical analyst at Hasbro in Pawtucket, R.I. He speaks from the experience of supporting 22 VMware ESX host servers, 330 virtual machines, 100 workstations and 250 physical machines. To prevent virtual-machine sprawl and to curb spending for licenses and power for unused machines, Ward says he uses VKernel's Capacity Analyzer virtual appliance. It alerts him to all the virtual machines in his environment, even those
he thought he had removed. "There are cases in which you build a virtual machine for test and then for some reason it is not removed but rather it's still out there consuming resources, even though it is serving no purpose," Ward says. "Knowing what we already have and planning our investments based on that helps. We can reassign assets that have outlived their initial purpose." When they create virtual machines, IT managers also must plan for their deletion. "Assign expiration dates to virtual machines when they are allocated to a business unit or for use with a specific application; and when that date comes, validate the need is no longer there and expire the resource," Forrester's Staten says. "Park a virtual machine for three months and if it is no longer needed, archive and delete. Archiving keeps options open without draining storage resources or having the virtual machine sitting out there consuming compute resources."
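Staten's expire-and-archive policy is straightforward to encode. The sketch below is illustrative Python, not any vendor's tool; the registry contents and the 90-day parking window are assumptions.

# Sketch of an expiration policy for virtual machines, in the spirit of
# "assign an expiration date at allocation, then validate or retire."
# The in-memory registry and 90-day parking window are assumptions.

from datetime import date, timedelta

vms = [
    {"name": "qa-sql-02",  "owner": "qa", "expires": date(2009, 1, 15)},
    {"name": "hr-portal",  "owner": "hr", "expires": date(2009, 6, 30)},
    {"name": "perf-test3", "owner": "qa", "expires": date(2008, 11, 1)},
]

PARK_PERIOD = timedelta(days=90)   # keep an archived image this long before deleting

def review(today: date):
    for vm in vms:
        if vm["expires"] > today:
            continue                                   # still within its lease
        parked_until = vm["expires"] + PARK_PERIOD
        if today <= parked_until:
            print(f"{vm['name']}: expired - confirm with {vm['owner']}, "
                  f"archive image, keep until {parked_until}")
        else:
            print(f"{vm['name']}: parking window over - delete archived image")

review(date(2009, 2, 1))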
4. Marry the physical and virtual
IT managers must choose the applications supported by virtual environments wisely, say experts, who warn that few if any IT services will rely only on the virtual infrastructure. "While some environments could support virtual-only clusters for testing, the more common scenario would have, for instance, two virtual elements and one physical one supporting a single IT service," says Cameron Haight, a Gartner research vice president. "IT still needs to correlate performance metrics and understand the profile of the service that spans the virtual and physical infrastructures. Sometimes people are lulled into a false sense of security thinking the tools will tell them what they need to know or just do [the correlation] for them." IT managers should push their vendors for reporting tools that not only show what's happening in the virtual realm but also display the physical implications -- and potentially the cause -- of an event. Detailed views of both environments must be married to correlate why events take place in both realms. For instance, if utilization on a host
server drops from 20% to 10%, it would be helpful to know the change came about because VMware Distributed Resource Scheduler (DRS) moved virtual machines to a new physical server, Haight says. In addition, knowing when and where virtual machines migrate can help prevent a condition dubbed “VMotion sickness” from cropping up in virtual environments. This occurs when virtual machines move repeatedly across servers -- and bring problems they might have from one server to the next, Haight says. Proper reporting tools, for example, could help an administrator understand that a performance problem is traveling with a virtual machine unbeknown to DRS.
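One way to confirm the "traveling problem" Haight describes is to join a virtual machine's migration history against the alert history of the hosts it visited: if alerts keep firing wherever the VM lands, the VM rather than the host is the likely culprit. A hedged Python sketch with invented event data:

# Sketch: correlate alerts with VM migration events to see whether a problem
# follows a VM from host to host. Event records and thresholds are invented
# for illustration; real data would come from your monitoring and DRS logs.

from collections import Counter

migrations = [            # (timestamp, vm, destination host)
    (100, "app-vm-9", "esx01"),
    (220, "app-vm-9", "esx04"),
    (340, "app-vm-9", "esx02"),
]

alerts = [                # (timestamp, host, metric)
    (150, "esx01", "cpu_ready_high"),
    (260, "esx04", "cpu_ready_high"),
    (380, "esx02", "cpu_ready_high"),
]

def host_of(vm, ts):
    """Which host was the VM on at time ts, according to the migration log?"""
    current = None
    for when, name, dest in migrations:
        if name == vm and when <= ts:
            current = dest
    return current

hits = Counter()
for ts, host, metric in alerts:
    if host_of("app-vm-9", ts) == host:
        hits[metric] += 1

for metric, count in hits.items():
    if count >= 3:
        print(f"app-vm-9: '{metric}' fired {count} times, each time on the host "
              "the VM was occupying - the problem appears to travel with the VM")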
5. Eliminate virtual blind spots
The fluid environment created by virtualization often includes blind spots. "We monitor all physical traffic, and there is no reason why we wouldn't want to do the same for the virtual traffic. It's a huge risk not knowing what is going on, especially when the number of virtual servers is double what you have for physical boxes," says Nick Portolese, senior manager of data center operations at Nielsen Mobile in San Francisco. Portolese supports an environment with about 30 VMware ESX servers and 500 to 550 virtual machines. Early on, he realized he wasn't comfortable with the amount of network traffic he could monitor in his virtual environment. Monitoring physical network traffic is a must, but he found the visibility into traffic within the virtual environment was non-existent. Start-up Altor Networks provided Portolese with what he considered necessary tools to track traffic in the entire environment. Altor's Virtual Network Security Analyzer (VNSA) views traffic at the virtual -- not just the network -- switch layer. That means inter-virtual-machine communications or even virtual desktop chatter won't be lost in transmission, the company says. VNSA provides a comprehensive look at the virtual network and analyzes traffic to give network security managers a picture of the top application talkers, most-used protocols and aspects of virtualization relevant to security. It's a must-have for any
virtual environment, Portolese says. "We didn't have anything to monitor the virtual switch layer, and for me to try to monitor at the virtual port was very difficult. It was impossible to tell which virtual machine is coming from where," Portolese explains. "You will get caught with major egg on your face if you are silly enough to think you don't have to monitor all traffic on the network."
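The visibility Portolese describes ultimately comes down to aggregating flow records captured at the virtual switch layer. The following generic Python sketch (not Altor's product; the flow records are fabricated) shows the kind of top-talker and protocol summary involved:

# Generic illustration of virtual-network traffic summarization: given flow
# records tapped at the virtual switch, report top talkers and top protocols.
# The records below are fabricated sample data.

from collections import defaultdict

flows = [  # (source VM, destination, protocol, bytes)
    ("web-01", "db-01", "tcp/1433", 8_400_000),
    ("web-02", "db-01", "tcp/1433", 6_100_000),
    ("vdi-17", "fileshare", "tcp/445", 2_250_000),
    ("web-01", "web-02", "tcp/80", 900_000),
]

by_vm, by_proto = defaultdict(int), defaultdict(int)
for src, dst, proto, nbytes in flows:
    by_vm[src] += nbytes
    by_proto[proto] += nbytes

print("Top talkers:")
for vm, nbytes in sorted(by_vm.items(), key=lambda kv: kv[1], reverse=True)[:3]:
    print(f"  {vm}: {nbytes / 1e6:.1f} MB")

print("Most-used protocols:")
for proto, nbytes in sorted(by_proto.items(), key=lambda kv: kv[1], reverse=True)[:3]:
    print(f"  {proto}: {nbytes / 1e6:.1f} MB")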
6. Charge back for virtual resources
Companies with chargeback policies should apply the practice to the virtual realm, and those without a set process should institute one before virtualization takes off. Converting physical resources to virtual ones might seem like a no-brainer to IT folks, who can appreciate the cost savings and administration changes, but business units often worry that having their application on a virtual server might affect performance negatively. Even if a company's structure doesn't support the IT chargeback model, business units might be more willing to get on board with virtualization if they are aware of the related cost savings, Forrester's Staten says. "IT can provide some transparency to the other departments by showing them what they can gain by accepting a virtual server. This includes lower costs, faster delivery against [service-level agreements], better availability, more-secure disaster recovery and the most important one -- [shorter time to delivery]. It will take six weeks to get the physical server, but a virtual server will be over in more like six hours," Staten says. In addition, chargeback policies would be an asset to IT groups looking to regain some of their investment in virtualization. At Hasbro, IT absorbs the cost of the technology while the rest of the company takes advantage of its benefits, Ward says. "The cost of physical machines comes out of the business department's budget, but the cost of virtual machines comes out of the IT budget," he says.
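A chargeback, or lighter-weight showback, report of the kind Staten and Ward discuss can start as a simple rate card applied to each virtual machine's allocation. A minimal sketch; the rates, units and allocations are placeholders rather than anyone's actual pricing:

# Minimal showback sketch: multiply each VM's allocation by a rate card and
# roll the monthly total up to the owning business unit. All figures invented.

RATES = {"vcpu": 12.00, "ram_gb": 4.00, "disk_gb": 0.40}   # $ per unit per month

vms = [
    {"name": "erp-app-01", "unit": "finance",   "vcpu": 4, "ram_gb": 16, "disk_gb": 200},
    {"name": "web-03",     "unit": "ecommerce", "vcpu": 2, "ram_gb": 8,  "disk_gb": 80},
    {"name": "bi-report",  "unit": "finance",   "vcpu": 8, "ram_gb": 32, "disk_gb": 500},
]

totals = {}
for vm in vms:
    cost = sum(vm[resource] * rate for resource, rate in RATES.items())
    totals[vm["unit"]] = totals.get(vm["unit"], 0.0) + cost
    print(f"{vm['name']:<12} {vm['unit']:<10} ${cost:8.2f}/month")

for unit, total in totals.items():
    print(f"TOTAL {unit:<10} ${total:8.2f}/month")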
7. Capitalize on in-house talent
IT organizations also must update staff to take on virtualization. Certification programs, such as the VMware Certified Professional (VCP) and Microsoft's Windows Server Virtualization, are available, but in-house IT staff must weigh which skills they need and how to train in them. "Certifications are rare, though I do have two VCPs on my staff. Most IT professionals who are able to take the exam and get certified would probably work in consulting," says Robert Jackson, director of infrastructure at Reliance Limited Partnership in Toronto. With training costing as much as $5,000 per course, IT workers might not get budget approval. Gartner's Haight recommends assembling a group of individuals from the entire IT organization into a center of excellence of sorts. That would enable the sharing of knowledge about virtualization throughout the organization. "We surveyed IT managers about virtualization skills, and about one-quarter of respondents had a negative perspective about being able to retain those skills in-house," Haight says. "Disseminating
the knowledge across a team would make an organization more secure and improve the virtualization implementation overall with fewer duplicated efforts and more streamlined approaches." In the absence of virtualization expertise, Linux proficiency can help, Hasbro's Ward says. VMware support staff seems to operate most comfortably with that open source operating system, he says. In general, moving from pilot to production means increasing the staff for the daily care and feeding of a virtual environment, Ward says. "Tools can help, but they can't replace people."
Don’t let the thin-provisioning gotchas getcha A how-to on getting the most out of this advanced storage technology
By Sandra Gittlen, Network World, 01/26/2009
Next-generation storage is all about dynamic allocation of resources, and thin provisioning can get that job done quickly and easily – but it isn't carefree. As thin provisioning – also called dynamic provisioning or flex volumes – becomes a standard feature in virtual storage arrays, IT executives and other experts warn that dynamic resource allocation is not a one-size-fits-all proposition. Applying the technology in the wrong way could create a major disaster, they caution. "Vendors have made it so that IT teams think, 'Thin provisioning is so easy, why wouldn't we use it?' But some major thinking has to go into how your storage network is actually architected and deployed to benefit from the technology," says Noemi Greyzdorf, research manager at IDC. In traditional storage networks, IT has to project the amount of storage a particular application will need over time, then cordon off that disk space. This means buying more hardware than is needed immediately, as well as keeping poorly utilized disks spinning ceaselessly – a waste of money and energy resources. With thin provisioning, IT can keep storage growth in check because it need not commit physical disk space for the projected amount of storage an application requires. Instead, IT relies on a pool of disk space that it draws from as the application needs more storage. Having fewer idling disks means better capacity management, increased utilization, and lower power and cooling consumption.
Hold on a sec . . .
IT executives shouldn't let these benefits, attractive as they are, blind them to the technology's requirements, early adopters say. You can't just flip the switch on your storage pool and walk away, says Matthew Yotko, senior
IT director at New York media conglomerate IAC, speaking about one of the biggest misconceptions surrounding thin provisioning. You've got to take the critical step of setting threshold alerts within your thin-provisioning tools because you're allowing applications to share resources, Yotko says. Otherwise, you can max out your storage space, and that can lead to application shutdowns and lost productivity because users can't access their data. "You can get pretty close to your boundary, fast, and that can lead to panicked calls asking your vendor to rush you a bunch of disks. Alerting is an important aspect that we originally missed," Yotko says. Yotko has since integrated threshold alerts for IAC's 3Par array into a central network management system, he says. Doing that lets him keep tabs on how myriad file, e-mail, domain-controller and Web-application servers for 15 business units are handling the shared resource pool. That pool supports more than 25TB of data, he adds. Yotko also calculates the time between
an application reaching its threshold and storage being drained, and adjusts his alerts accordingly. If the window is too close, he pads it. The goal is to allow sufficient time for adding capacity – and avoiding a disaster, he says. By setting and fine-tuning his alerts, Yotko reports being able to realize utilization rates of more than 80% across his array, a flip from the less than 20% he had realized before thin provisioning.
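Yotko's padding calculation boils down to asking how many days of growth remain in the pool and whether that is longer than the lead time for adding disk. A small Python sketch of that check, using illustrative numbers rather than IAC's actual figures:

# Sketch of a thin-provisioning headroom check: estimate days until the shared
# pool fills at the current growth rate, and alert early enough to add capacity.
# Pool size, usage, growth rate and procurement lead time are assumptions.

POOL_TB = 25.0           # physical capacity behind the thin pool
LEAD_TIME_DAYS = 21      # how long it takes to order and install more disk
SAFETY_PAD_DAYS = 7      # extra margin on top of the lead time

def days_until_full(used_tb: float, growth_tb_per_day: float) -> float:
    if growth_tb_per_day <= 0:
        return float("inf")
    return (POOL_TB - used_tb) / growth_tb_per_day

def check(used_tb: float, growth_tb_per_day: float) -> str:
    remaining = days_until_full(used_tb, growth_tb_per_day)
    if remaining <= LEAD_TIME_DAYS:
        return f"CRITICAL: pool full in ~{remaining:.0f} days - order capacity now"
    if remaining <= LEAD_TIME_DAYS + SAFETY_PAD_DAYS:
        return f"WARNING: ~{remaining:.0f} days of headroom - start procurement"
    return f"OK: ~{remaining:.0f} days of headroom"

print(check(used_tb=21.5, growth_tb_per_day=0.15))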
No dumping here
Another common mistake IT teams make is thinking that every application is a candidate for thin provisioning, says Scott McCullough, manager of technical operations at manufacturer Mine Safety Appliances in Pittsburgh. "The only applications that can take advantage of thin provisioning are those for which you can predict storage growth," McCullough says. For that reason, he uses NetApp's Provisioning Manager to oversee resource pools for his Web servers, domain controllers and Oracle database, but not the company's high-volume
SQL Server. That server would quickly drain any projected resource allotment, he says. Before he thin-provisions an application, McCullough studies its performance to make sure it won't endanger the pool. "It doesn't make sense to take up all your resources and potentially starve other applications," he says. "You definitely need to be able to forecast or trend the application's trajectory," IDC's Greyzdorf says. Biomedical and other science programs can be particularly tricky for thin provisioning, she notes, because they can start off needing 200GB of storage and quickly skyrocket to 3TB with the addition of a single project. Choosing the wrong applications to thin-provision not only endangers your entire storage pool, but also negates any management and budget relief you might gain otherwise. Done correctly, thin provisioning should reduce the overall time spent configuring and deploying storage arrays. If applications continuously hit their thresholds, however, and you're forced to add capacity on the fly, that benefit is quickly negated, costing you in terms of personnel and budget. This concern has ResCare rolling out thin provisioning piecemeal, says Daryl Walls, manager of system administration at the Louisville, Ky., healthcare support and services provider. "We are cautious about our deployments. We evaluate each implementation to see whether it makes sense from an application, server and storage point of view," he says. Once his applications have been thin-provisioned, Walls closely monitors them to make sure that usage patterns don't change dramatically. In the worst case, that would require them to be removed from the pool. "A few times we've underestimated, usage has crept up on us, and we've received alerts saying, 'You're at 70% to 80% utilization,'" he says. In those instances, IT teams must decide whether to expand the application's allotment, procure more resources or move the application off the system.
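McCullough's candidate test, whether you can forecast an application's growth, can be approximated by fitting a straight-line trend to recent usage and checking how far the samples stray from it. A rough Python sketch over invented monthly usage figures; the 15% tolerance is an arbitrary assumption:

# Rough sketch: decide whether an application's storage growth is predictable
# enough for thin provisioning by fitting a line to its recent usage and
# measuring how far the samples deviate from the trend. Data and the 15%
# tolerance are invented for illustration.

def trend_fit(samples):
    """Least-squares slope/intercept for y over x = 0..n-1 (no numpy needed)."""
    n = len(samples)
    xs = range(n)
    mean_x, mean_y = (n - 1) / 2, sum(samples) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, samples))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope, mean_y - slope * mean_x

def predictable(samples, tolerance=0.15):
    slope, intercept = trend_fit(samples)
    worst = max(abs(y - (slope * x + intercept)) / y for x, y in enumerate(samples))
    return worst <= tolerance, slope

web_gb = [120, 128, 135, 144, 151, 160]        # steady growth
lab_gb = [200, 210, 215, 2600, 2700, 3100]     # a single project tripled it

for name, history in (("web servers", web_gb), ("research share", lab_gb)):
    ok, slope = predictable(history)
    verdict = "good thin-provisioning candidate" if ok else "keep on thick volumes"
    print(f"{name}: ~{slope:.0f} GB/month trend, {verdict}")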
What goes where
Thin provisioning can wreak havoc on your network if you don't have proper allotment policies in place, says Matt Vance, CIO at Nutraceutical, a health supplements company in Park City, Utah. "IT has always managed and controlled space utilization, but with thin provisioning you can get a false sense of security. We've found that even with a resource pool, you still need to
take responsibility in managing the way people receive and use storage. Otherwise you wind up wasting space, and that's hard to clean up after the fact," Vance says. For instance, being lax about monitoring the amount of space users and applications are absorbing can lead to overspending on hardware and software, and necessitate an increase in system management. This is particularly concerning in Vance's environment, where the principal driver for moving to virtualization and thin provisioning was the need to bring high-performance database applications online quickly without breaking the bank on storage requirements. Reporting tools have become essential at
Nutraceutical. Each time an application nears its threshold, Vance turns to Compellent Technologies' Storage Center software tools to analyze how the server used its storage space. "We then decide whether it was used appropriately or if we need to tweak our policies," he says. Vance says he is a solid proponent of thin provisioning, but he cautions his peers to stave off the complacency that automation can bring on: "We can't let the pendulum swing so far toward automation that we forget to identify where IT still has to be managing its resources." Gittlen is a freelance technology editor in the greater Boston area. She can be reached at
[email protected].
The challenge of managing mixed virtualized Linux, Windows networks Windows and virtualization are driving need for new management standards, tools
By John Fontana, Network World, 10/27/2008
The sprawl of management consoles, the proliferation of data they provide and the rising use of virtualization are adding challenges to corporations looking to more effectively manage mixed Linux, Windows and cloud environments. Traditional standards are being tapped in order to bridge the platform divide, and new ones are being created to handle technologies such as virtualization that create physical platforms running one technology but hosting virtual machines running something completely different. The goal is better visibility into what is going right or wrong – and why – as complexity rises on the computing landscape. Some help is on the way. The Distributed Management Task Force (DMTF) has begun hammering out virtualization management standards it hopes will show up in products soon. Those standards will address interoperability, portability and virtual machine life-cycle management, as well as incorporate time-honored management standards such as the Common Information Model (CIM). Vendors such as Microsoft, VMware and Citrix are on board with the DMTF and are creating and marketing their own cross-platform virtualization management tools for x86 machines. Linux vendors, including Novell and Red Hat, and traditional management vendors such as HP also are joining in. To underscore the importance of heterogeneous management, Microsoft
is supporting Linux within its virtualization management tools slated to ship by year-end rather than relying on third-party partners. And the vendor has said it will integrate the OpenPegasus Project, an open source implementation of the DMTF’s CIM and Web-based Enterprise Management (WBEM) standards, so it can extend its monitoring tools to other platforms. The trend toward services is forcing IT to think about management across systems that may have little in common, including the same LAN. Services are increasingly
made up of numerous application components that can be running both internally and externally, complicating efforts to oversee all the piece parts, their platforms and their dependencies. The big four management vendors, BMC, CA, HP and IBM, are handling the mixed-environment evolution by upgrading their monolithic platforms to better manage Linux as its use grows. And a crop of next-tier vendors, start-ups and open source players are angling for a piece of the pie by providing tools that work alone, as well as plug into the dominant management frameworks. "We are starting to see IT put more mission-critical applications on Linux and from there you only start to see the stronger growth [of Linux]," says Ute Albert, marketing manager of HP's Insight management platform. HP is boosting its Linux support with features HP already supports for Windows platforms, such as capacity planning. Analyst firm Enterprise Management Associates reports that use of Linux on mainframes has grown 72% in the past two years while x86 Linux growth hit 57%. In the trenches, users are moving to suck the complexity out of their environments and make sense not only of individual network and systems components but of composite services and how to aggregate data from multiple systems and feed results back to administrators and notification systems.
Console reduction
At Johns Hopkins University, managers are trying to reduce "console sprawl" in a management environment that stretches across 200 projects – many with their own IT support – in some nine research and teaching divisions, as well as healthcare centers, institutes and affiliated entities. Project leaders pick their own applications and platforms, with about 90% to 95% running Windows and 5% to 10% on
Linux. There are also storage-area networks, network devices, Oracle software, Red Hat, VMware, EMC, IronPort e-mail relays, and hardware from Dell, HP and IBM. John Taylor, manager of the management and monitoring team, and Jamie Bakert, systems architect in the management and monitoring group, are responsible for 15,000 desktops and 1,500 servers, nearly 50% of the university's total environment. "Our challenge is we do not want to create another support structure," says Taylor, who has standardized on Microsoft's System Center management tools anchored by Operations Manager 2007 and Configuration Manager 2007. Because Taylor doesn't control what systems get rolled out, he is using Quest Software's Management Xtensions for System Center to support non-Windows infrastructure. "Quest allows us to bring in anything with a heartbeat," Bakert says.
And that allows for managing distributed applications, which incorporate multiple components on multiple platforms. "Microsoft has a limited scope of what they are bringing into System Center at this point," he says. For instance, Bakert uses Quest Xtensions to monitor IronPort relays that work with Microsoft Exchange to ensure everything in the e-mail service is monitored in one tool. The Quest tools also let Bakert store security events on non-Windows machines so he can report on both Windows and non-Windows platforms, which helps with collecting compliance data. Taylor and Bakert also are beta testing Microsoft's System Center Service Manager, slated to ship in early 2010, with hopes they can reduce System Center consoles from five to one. Eventually, Service Manager's configuration management database will host data from Configuration Manager and Operations Manager, as well as incorporate
ITIL, a set of best practices for IT services management, and the Microsoft Operations Framework. Taylor and Bakert also are testing System Center’s Virtual Machine Manager, which will manage Windows, the VMware hypervisor and Suse Linux guest environments.
Virtualization getting
Microsoft, ironically, had the title of first to support mixed hypervisor environments because it was last to release a hypervisor – Hyper-V. Without the benefit of the in-development Microsoft code, VMware, Novell, Red Hat, HP and others are momentarily playing catch-up on cross-platform management support. Novell is using its February 2008 acquisition of PlateSpin to support management across both physical and virtual environments. The company's existing partnership and interoperability agreement with Microsoft has yielded virtualization bundles and
the company's acquisition of Managed Objects will give IT admins and business managers a unified view of how business services work across both physical and virtual environments. "In the data center we see that people are not saying consolidate [on a platform], they are saying give me a universal remote," says Richard Whitehead, director of product marketing for data center solutions. Red Hat also is developing its portfolio. Its February 2008 launch of the open source oVirt Project has a stated goal of producing management products for mixed environments. "The oVirt framework will be used to control guests in a cloud environment, create pools of resources, create images, deploy images, provision images and manage the life cycle of those," says Mike Ferris, director of product strategy for the management business unit at Red Hat. HP has aligned its HP Insight Dynamics – Virtual Server Environment (VSE) with VMware and plans to add support for Microsoft's Hyper-V in the next release, according to HP's Albert. In addition, HP is increasing the feature set of its Linux management and monitoring support. And while the vendors work on their tools, the DMTF is working on standards it hopes will be as common as existing DMTF standards CIM and WBEM. The Virtualization Management Initiative (VMAN) released by the DMTF in September 2008 is designed to provide interoperability and portability standards for virtual computing. The initiative includes the Open Virtualization Format (OVF) for packaging up and deploying one or more virtual machines to either Linux or Windows platforms. Tools that are based on VMAN will provide consistent deployment, management and monitoring regardless of the hypervisor deployed. "The truth is we have been working on this whole platform independence since 1998," says Winston Bumpus, president of the DMTF, in regard to the organization's goals. Virtualization is only one of the DMTF's initiatives. The group has started its interoperability certification program around its SMASH and DASH initiatives. The Systems
Management Architecture for Server Hardware (SMASH), used to unify data center management, includes the SMASH Server Management (SM) Command Line Protocol (CLP) specification, which simplifies management of heterogeneous servers in the data center. The Desktop and Mobile Architecture for System Hardware (DASH) provides standards-based Web services management for desktop and mobile client systems.
Open source
Standards efforts are being complemented by open source vendors who are aligning their source-code flexibility with the interoperability trend. Upstarts such as GroundWork, Likewise, Hyperic, OpenQRM, Zenoss and Quest's Big Brother platform are working the open source route to build a united management front. "We picked [tools] most people pick when they use open source, and we packaged them together," says Dave Lilly, CEO of GroundWork. The company's package includes 100 top open source projects, including Nagios, Apache and NMap. GroundWork also includes a plug-in it
wrote to integrate Windows systems using Microsoft's native Windows Management Instrumentation. "We don't provide the entire tool set you may want, but we at least take the time and energy out of providing the monitoring infrastructure," Lilly says. Via standards, GroundWork can plug into other management tools such as service desk applications. Other open source management resources include Open WS-Man, an XML SOAP-based specification for management using Web services standards. The project, which focuses on management of Linux and Unix systems, is an open source implementation of WS-Management, an industry standard protocol managed by the DMTF. There are other WS-Man variations such as the Java implementation called Wiseman. "Interoperability is the end game," DMTF's Bumpus says. "You can have all the specs, but if you don't have interoperability, who cares?" In today's evolving data centers and services revolution, it turns out a lot of IT managers are beginning to care very much.
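For readers unfamiliar with the plumbing GroundWork packages, the Nagios-style plug-in contract it builds on is simple: a check prints one status line and reports its state through its exit code (0 OK, 1 warning, 2 critical). A minimal, hedged Python example; the monitored path and thresholds are arbitrary:

#!/usr/bin/env python
# Minimal Nagios-style check plug-in, the kind of building block GroundWork
# packages. Contract: print one status line, exit 0=OK, 1=WARNING, 2=CRITICAL.
# The monitored path and thresholds below are arbitrary examples.

import shutil
import sys

WARN_PCT, CRIT_PCT = 80, 90
PATH = "/"

def main() -> int:
    usage = shutil.disk_usage(PATH)
    used_pct = 100.0 * usage.used / usage.total
    perfdata = f"|used={used_pct:.1f}%;{WARN_PCT};{CRIT_PCT}"
    if used_pct >= CRIT_PCT:
        print(f"DISK CRITICAL - {used_pct:.1f}% used on {PATH} {perfdata}")
        return 2
    if used_pct >= WARN_PCT:
        print(f"DISK WARNING - {used_pct:.1f}% used on {PATH} {perfdata}")
        return 1
    print(f"DISK OK - {used_pct:.1f}% used on {PATH} {perfdata}")
    return 0

if __name__ == "__main__":
    sys.exit(main())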
Next-generation data centers: opportunities abound Section 4
Inside the Data Center
The Google-ization of Bechtel
How the construction giant is transforming its IT operations to emulate Internet leaders and embrace SaaS
By Carolyn Duffy Marsan, Network World, 10/29/2008
If you could build your IT systems and operation from scratch today, would you recreate what you have? That’s the question Geir Ramleth, CIO of construction giant Bechtel, asked himself several years ago. The question – and the industry benchmarking exercise that followed – prompted Bechtel to transform its IT department and model it after Internet front-runners YouTube, Google, Amazon.com and Salesforce.com. After all, these companies have exploited the latest in network design, server and storage virtualization to reach new levels of efficiency in their IT operations. Ramleth wanted to mimic these approaches as Bechtel turned itself into a software-as-a-service (SaaS) provider for internal users, subcontractors and business partners. After researching the Internet’s strongest brands, Bechtel scrapped all of its existing data centers and built three new facilities that feature the latest in server and storage virtualization. Bechtel also designed a new Gigabit Ethernet network with hubs at Internet exchange points that it is managing itself instead of using carriers. Now, Bechtel is
slashing its portfolio of software applications to simplify operations as well as the end user experience. Dubbed the Project Services Network, Bechtel's new strategy applies the SaaS computing model internally to provide IT services to 30,000 users, including 20,000 employees and eventually 10,000 subcontractors and other business partners. We operate "as a service provider to a set of customers that are our own [construction] projects," Ramleth said. "Until we can find business applications and SaaS models for our industry, we will have to do it ourselves, but we would like to operate with the same thinking and operating models as [SaaS providers] do." Nicholas Carr, author of several books including "The Big Switch: Rewiring the World from Edison to Google," which chronicles a shift to the SaaS model, called Bechtel's strategy a smart move. "For the largest enterprises, the very first step into the Internet cloud may well be exactly what Bechtel is doing: building their own private cloud to try to get the cost savings and flexibility of this new model," Carr says. "Large companies have such enormous scale in their own IT operations that the outside providers, the true utility providers, just aren't big enough yet…to make them a better option."
Carr predicts, however, that Bechtel's do-it-yourself SaaS strategy will be an interim step until the company is able to fully outsource its IT infrastructure. That may take as long as 10 years, he adds. "My guess is that over time -- and maybe it will start with the HR system -- Bechtel will look outside and start running some aspects of its IT operations off of [SaaS] sites," Carr says. "Then its cloud will start to blur with the greater Internet cloud." Bryan Doerr, CTO of utility computing provider Savvis, says many enterprises like Bechtel are interested in the SaaS model for applications that don't differentiate them from their competition. "The move to SaaS is simply a different delivery model for an application that has little to do with intellectual property or innovation," Doerr says. "Once you make the decision to outsource to somebody else's software, the decision to host it yourself or use a SaaS provider is about economics…. For many enterprises, licensing by end user seat is much more efficient than licensing as a bulk package, buying servers and storage and data center space, and then training people to self-host the application." Several business challenges are driving Bechtel's SaaS strategy. Bechtel is a leading construction,
engineering and project management firm with 42,500 employees and 2007 revenue of $27 billion. The privately held company is working in more far-flung locations, and it has difficulty finding and retaining talented employees to work on its projects. Bechtel's employees are demanding business software that is as intuitive as popular Web sites. The company doesn't have time to train end users in software applications, nor can it afford to maintain hundreds of applications. "We needed a different way of doing applications and supporting them," Ramleth says. "We have more employees coming into the organization, and we need to get them up to speed fast." Another key business challenge is
protecting Bechtel’s intellectual property when so many subcontractors and business partners have access to its network and data. “A third of the people on our network are non-Bechtel employees. That exposure forms a security risk,” Ramleth says. Bechtel started its transformation by trying to figure out how to revamp its software applications to operate more like leading Web sites. But what Bechtel discovered is that it had to fix the underlying IT infrastructure -- including data centers and networks -- before it could change its applications. “Not only do you have to solve the IT architecture and the way you operate it, but you have to make sure that IT is
accommodating Web applications that can operate more in an Internet mode than in an intranet mode,” Ramleth explains. Perhaps most impressive is that Bechtel is transforming its IT operations without additional funding. Bechtel would not release its annual budget for its Information Systems & Technology group, but the company said it has 1,150 full-time employees in its IS&T group and 75 to 100 contractors. “We’ve mostly paid for this by re-allocation of the budgets that we otherwise would have used for refresh and maintenance,” Ramleth says. We’re “doing a total change of the traditional way of doing things, and we have done it with very little, if any, incremental funding.”
Leader benchmarks
The transformation began with Bechtel's IS&T group spending a year trying to figure out how to drive faster adoption of consumer technology such as Google and Amazon.com across the company. "We asked ourselves: If we started Bechtel today, would we do IT the same way we are doing it today? The answer was no. If we had a clean slate, we wouldn't do it the way we were doing it," Ramleth says. Ramleth decided to benchmark Bechtel's IT operation against leading Internet companies launched in recent years. He zeroed in on YouTube, Google, Amazon.com and Salesforce.com for comparison. Bechtel's IS&T staff studied the available information about how these Internet leaders run their IT operations, and they interviewed venture capitalists with experience investing in consumer applications. Bechtel came up with estimates for how much money YouTube spends on networking, Google on systems administration, Amazon.com on storage, and Salesforce.com on software maintenance. What Bechtel discovered is that its own IS&T group was lagging industry leaders. "What we found were tremendous discrepancies between our metrics and what these guys were dealing with," Ramleth says. "You can learn a tremendous amount from [companies] that have the privilege of starting recently." When Bechtel researched YouTube, it came to the conclusion that YouTube must be getting much less expensive network rates because otherwise it wouldn't be able to send 100 million video
streams a day for free. Bechtel estimated that YouTube spent $10 to $15 per megabit for bandwidth, while Bechtel is spending $500 per megabit for its Internet-based VPN. YouTube was "paying a fraction of what we were paying," Ramleth says. "We learned you have to be closer to the high-bandwidth areas and not haul the data away. We decided we better bring the data to the network, rather than bring the network to the data." Next, Bechtel studied how Google operated its servers. Bechtel estimated that Google
used 12 system administrators for every 200,000 servers, or roughly 17,000 servers per system administrator. Bechtel, on the other hand, was operating with 1,000 servers per system administrator. "What we learned is that you have to standardize like crazy and simplify the environment," Ramleth says. "Google basically builds their own servers by the thousands or gets them built in a similar fashion, and they run the same software on it. So we had to get more simplified and standardized." Bechtel studied Amazon.com and determined that Amazon.com must have a better storage strategy if it is offering disk space for a fraction of Bechtel's internal costs. While Amazon.com was offering storage for 10 cents per gigabyte per month, Bechtel's internal rate in the United States was $3.75 per gigabyte. Ramleth says the key to reducing storage costs was not only to simplify and virtualize the storage environment, but also to drive up utilization. "Our average utilization was 2.3%," Ramleth says. With virtualization, "we now expect to have utilization in the 70% to 75% range." However, he added that the new virtualized storage environment is "more complex to operate." Bechtel turned to Salesforce.com for its expertise in running a single application with millions of users. In contrast, Bechtel operates 230 applications, and it runs 3.5 versions per application, which means it maintains approximately 800 applications at any given time. "When you look at Salesforce.com,
not only are they running one application, but they are running one version and they are only running it in one location," Ramleth says. "They upgrade that application four times per year, and they don't disrupt the users by having to retrain them. Every time we have a new version, we have to retrain our users." With its benchmarking data in hand, Bechtel decided to revamp its IS&T operations to model itself as closely as possible after the SaaS model pioneered by these four Internet leaders. "If you take the ideal world, everything is done as a service: computing, storage, software and operations," Ramleth says. "It's maybe the ultimate goal…but if you start where all the enterprises are today, that's a very long road to go."
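The benchmark gaps Ramleth cites are worth working through explicitly. The short Python sketch below reproduces the ratios implied by the figures quoted above; the inputs come from the article and the rest is arithmetic:

# Working through the benchmark figures quoted in the article. The inputs are
# the numbers Ramleth cites; the ratios are simple arithmetic, shown here only
# to make the gaps explicit.

bechtel_bw_cost, youtube_bw_cost = 500, 12.5        # $/megabit (12.5 = midpoint of $10-$15)
bechtel_srv_per_admin, google_srv_per_admin = 1_000, 200_000 / 12

bechtel_storage, amazon_storage = 3.75, 0.10        # $/GB/month
bechtel_util_before, bechtel_util_after = 0.023, 0.725   # 2.3% vs 70-75% midpoint

apps, versions_per_app = 230, 3.5

print(f"Bandwidth: Bechtel pays ~{bechtel_bw_cost / youtube_bw_cost:.0f}x YouTube's rate")
print(f"Admins: Google runs ~{google_srv_per_admin / bechtel_srv_per_admin:.0f}x "
      f"more servers per administrator")
print(f"Storage: internal rate is ~{bechtel_storage / amazon_storage:.0f}x Amazon's")
print(f"Utilization target is ~{bechtel_util_after / bechtel_util_before:.0f}x today's")
print(f"Versions maintained: ~{apps * versions_per_app:.0f} application instances")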
A trinity of data centers
Once Bechtel committed to the SaaS model, the firm realized it needed to revamp its data centers to copy Google's standardized, virtualized model. Bechtel was operating seven data centers worldwide, but in 2007 replaced those with three new data centers, one each in the United States, Europe and Asia. The three data centers have identical hardware from HP, Cisco, Juniper and Riverbed. On the software side, Bechtel is using Microsoft and Citrix packages. "The hardware is the same, the software is the same, and the same organization [is] managing all of them," Ramleth says. "It's like one data center, but it operates in three different locations." The new data centers have virtualized servers with utilization
of around 70%, Ramleth says. These centers boast the most energy-efficient design possible, so they are a fraction of the size of the older data centers and use significantly less electricity. "In square footage, we're down by a factor of more than 10," Ramleth says. "Two-thirds of the power in a data center is chilling, and if I don't have to chill that extra space…I get a dramatic reduction in power needs." The three new data centers are operational, and Bechtel expects to close all the older data centers by the end of 2009. Ramleth says one of the hardest aspects of the IS&T transition was closing data centers upgraded as recently as 2005. "Six of our data centers were relatively modern. That was a tough thing. We finished a [data center] consolidation in 2005, and already in 2006 we started talking about doing a re-do of our data centers again. [Our IS&T staff] felt like they hadn't really gotten dry from the last shower before they started getting wet again," Ramleth says.
Do-it-yourself networking
At the outset of this research project, Bechtel was operating an IP VPN that it had installed in 2003. To drive down the cost of its network operations, Bechtel has redesigned that network using YouTube's do-it-yourself model. Bechtel has a Gigabit Ethernet ring connecting the three new data centers, with dual paths for failover. Bechtel is buying raw bandwidth from a variety of providers -- Cox, AboveNet, Qwest, Level 3 and Sprint -- but it is managing the network itself. "We buy very little provisioned networking," Ramleth says. "We do it ourselves because…it's less costly than buying from others….We go to the Internet exchange points, to the carrier hotels, where the traffic terminates." So far, Bechtel has migrated the three new data centers to the new network, along with nine offices: San Francisco; Glendale, Ariz.; Houston; Oak Ridge, Tenn.; Frederick, Md.; London; New Delhi and Brisbane. The new network "is about the same [cost] as what we paid before, but it offers
a heck of a lot more capacity," Ramleth says, adding that Bechtel is getting around 10 times more capacity for the same amount of money. Ramleth says the biggest cost-saving of the new network design came from aggregating network traffic at Internet exchange points, which is what leading e-commerce vendors do. "We found that for the amount of traffic and the amount of capacity that we put it in, we could do it cheaper ourselves," Ramleth says. "This is not something for a small or medium-sized enterprise, but we found that we were big enough to be able to do it ourselves."
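Expressed as a unit cost, "same spend, roughly 10 times the capacity" is about a 90% drop in effective cost per megabit. A quick illustrative calculation; the $500-per-megabit figure is the one cited earlier, while absolute capacities are left out because only the ratio is reported:

# Illustrative unit-cost comparison for Bechtel's do-it-yourself network.
# The $500/megabit VPN rate is cited in the article; only the ~10x capacity
# ratio is reported, so the calculation works purely in relative terms.

old_cost_per_mbit = 500.0            # $/megabit on the managed IP VPN
capacity_multiplier = 10             # "around 10 times more capacity for the same money"

new_cost_per_mbit = old_cost_per_mbit / capacity_multiplier
savings_pct = 100 * (1 - new_cost_per_mbit / old_cost_per_mbit)

print(f"Effective cost per megabit: ${old_cost_per_mbit:.0f} -> ${new_cost_per_mbit:.0f}")
print(f"That is a {savings_pct:.0f}% reduction per unit of capacity at flat spend")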
Simple secure software
The next aspect of transforming Bechtel's IS&T operations is migrating its applications to the new SaaS model. Bechtel runs 230 software applications, 60% of which were developed internally. Ramleth says he hopes to have 20 to 30 of the firm's key applications converted to the new SaaS model within a year. So far, Bechtel has a dozen applications running in its new data centers, including e-mail, a Web portal, a time record system, and a workflow/document management application. Bechtel's ERP applications, including Oracle Financials for accounting and SAP for human resources and payroll, will be migrated to the new infrastructure by the end of 2009, Ramleth says. "We will be getting applications in and getting them certified in this new environment," Ramleth says. "The ones that we can't totally certify, a few we will let die….For the low usage, infrequently
used applications that are not as critical, we are using Citrix solutions to solve some of them so we don’t have to do too much with them. We’re doing that as a bridge to get from our past to our new infrastructure.” Bechtel’s new portal is key to its SaaS strategy and its goal of having software applications that are as simple to use as Google. In its research, Bechtel found that 80% of its end users want to get information out of an application but don’t need to understand how it works. With the new Project Services Network Portal, Bechtel’s end users can interact with applications without training. Bechtel built the portal using Microsoft’s SharePoint software. Ramleth says the portal “gives us consumerization and also gives us the new security model.” Ramleth likens Bechtel’s security strategy to Amazon.com’s approach. With Amazon.com, users can browse freely and security is applied when a purchase is made. Similarly, Bechtel is trying to create Web applications that apply security only when needed. “We will apply different forms of security based on what’s going on,” Ramleth says. “By using the portal, we can limit what you have access to by policy, by role, in a way where people won’t necessarily go in and find [our intellectual property] without having the right to access it.” Ramleth says the new portal and the policy-based security model it provides are among the top benefits that Bechtel is gaining from the IS&T transformation effort. He says this benefit will be fully realized when Bechtel’s business partners are migrated to the portal, too. The SaaS model “allows us to bring in a different cost model that can afford us to have a high-capacity, global, collaborative work environment,” Ramleth says. The risk for enterprises that don’t start a SaaS migration strategy soon is that their IT organizational structures will be at a competitive disadvantage, Ramleth warns. “If they don’t start thinking about this soon, the changeover in the future will be really hard,” Ramleth says.
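Bechtel hasn’t published the internals of this portal, so the following is only a hypothetical sketch of the pattern Ramleth describes: open browsing, with a role- and policy-based check applied only at the moment someone tries to open a protected item. The names and policy table are invented for illustration; this is not Bechtel’s or SharePoint’s code.

    # Hypothetical sketch of "apply security only when needed" -- not Bechtel's actual code.
    # Browsing the catalog is open; a role-based policy is consulted only on access.

    ACCESS_POLICY = {                      # resource class -> roles allowed to open it
        "public": {"employee", "partner", "guest"},
        "project-docs": {"employee", "partner"},
        "intellectual-property": {"employee"},
    }

    def browse_catalog(documents: list[dict]) -> list[str]:
        """Anyone can see that a document exists; no check is applied here."""
        return [doc["title"] for doc in documents]

    def open_document(doc: dict, user_role: str) -> str:
        """Security is enforced only at the point of access, based on role and policy."""
        if user_role in ACCESS_POLICY.get(doc["class"], set()):
            return doc["content"]
        raise PermissionError(f"role '{user_role}' may not open '{doc['title']}'")

    docs = [{"title": "Bid template", "class": "intellectual-property", "content": "..."}]
    print(browse_catalog(docs))                 # visible to everyone
    print(open_document(docs[0], "employee"))   # allowed by policy
    # open_document(docs[0], "guest")           # would raise PermissionError

The point of the pattern is that the cost of enforcement is paid only when something sensitive is actually touched, which is what lets browsing stay as frictionless as a consumer site.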
Why San Diego city workers expect apps up and running in 30 minutes or less
Major VMware project makes application deployment a snap while easing the IT budget
By Jon Brodkin, Network World, 10/20/2008
Deploying a new application used to be a month-long headache and a budget-drainer at the San Diego Data Processing Corp. Now the process takes as little as 30 minutes and costs next to nothing, thanks to the extensive use of server virtualization. The SDDPC, a private nonprofit that handles IT for the San Diego municipal government and its more than 10,000 city workers, embraced virtualization with a vengeance several years ago -- and has been reaping substantial rewards ever since. The first goal was server consolidation, says Rick Scherer, a Unix systems administrator who spearheaded the SDDPC virtualization project. “We were just like everyone else: We had tons of x86 machines that were only being 10% utilized,” he says. “I presented to our directors that we could virtualize 20 machines into one box, get a better ROI and save tons of money plus data center space.” Easier application deployment followed, which meant huge gains for business users, he adds. Before using VMware’s ESX Server, every application upgrade or new deployment meant the SDDPC had to buy -- then install -- a new server. Such is the nature of the Windows operating system, Scherer says. “Anytime there was a new application, we couldn’t put it on a box with another application, because either those applications wouldn’t work properly or the [application] vendor didn’t support that.”
For example, when the city needed to upgrade a purchase-order system for outside contractors, SDDPC had to push the project out three to four weeks to get the infrastructure ready, Scherer says. The same three- to four-week wait was in store when the organization needed to boost processing power for a Citrix Systems Presentation Server deployment. Besides the annoyance, each new HP ProLiant server cost somewhere around $10,000, he says. When SDDPC started with server virtualization, users were surprised at how speedily IT could turn up applications, Scherer says. “But what’s funny is now that we’ve been doing it so long, they expect it. It has put a damper on management,” he says. Users are disappointed now “if they put a request in for a server and it’s not up in a half-hour,” he adds. One of the only things preventing further virtualization right now is the time Scherer and his colleagues must devote to day-to-day tasks and other projects.
VMware loyalty
Before deploying server virtualization, the SDDPC had about 500 physical x86 machines, largely from HP. With VMware, the organization can consolidate as many as 35 virtual machines onto one physical server. Such density has allowed the organization to power off 150 physical servers; it now runs 292 virtual machines on 22 physical x86 servers -- leaving plenty of room for expansion. “A lot of those hosts aren’t even being used; they’re just for future growth,” Scherer says. SDDPC also uses Sun’s virtualization technology on Sun Sparc servers, and now runs 120 logical servers on 90 boxes. The goal is to virtualize as much as possible: “We’ve set an initiative: For any new application or service that needs to be deployed in our data center, we’re going to do everything we can to virtualize first. If there’s no way to virtualize it, we’ll look at physical hardware,” Scherer says, noting that the organization also is aggressively moving the city’s existing applications, as appropriate, to the virtual infrastructure. VMware remains the server-virtualization tool of choice, even though products from rivals Citrix and Microsoft are now available and cost quite a bit less. Using VMware, SDDPC pays $10,000 to virtualize a four-socket machine, but that one physical host can support well over 20 virtual machines, Scherer says. If you buy 20 physical servers at $10,000 a pop, you’re
shelling out $200,000, he says. Numerous management advantages, disaster-recovery capabilities and security also make the investment well worth it, Scherer says. The SDDPC operates two data centers and contracts with a colocation vendor in Chicago for disaster recovery. Initially, the organization included only mission-critical applications in its disaster-recovery plan, but it’s beginning to account for many other applications -- those it considers crucial though not necessarily mission-critical -- because virtualization makes it feasible to do so from a cost perspective, he says. In addition, SDDPC makes extensive use of VMware’s VMotion, a live-migration function that moves virtual machines from one physical server to another without any downtime. VMotion comes in handy for balancing loads among servers, as well as for routine maintenance, Scherer says. “We can migrate virtual machines on a specific host to others within the cluster; this allows us to perform hardware and software upgrades with zero downtime and no impact to the customer,” he says. As for the security benefits, if a virtual server becomes compromised, it’s easy to shut it down or isolate it into a separate network group. “If we [want to] have a Web zone, an application zone and a database zone, we can accomplish that with a lot less hardware. We’re virtualizing our network as well. Instead of having separate physical switches we have to pay for, maintain and manage, all of it can be done within the [VMware] ESX host just by creating separate virtual switches,” Scherer says. SDDPC now is looking forward to its next big project, Scherer says -- virtualizing
desktops. It’s testing several thin-client devices to replace many of the city’s 8,500 desktops, and plans to use VMware’s Virtual Desktop Manager to provision and manage clients. This software “includes the ability to create a work-from-home scenario. A user can go home, not necessarily have a thin client at home, but through a Web site connect to his desktop,” he says. “That could potentially eliminate the need for Citrix, which is a significant licensing cost to the city every year.” Nevertheless, desktop virtualization will not happen immediately. The city refreshed most of its desktops in the past year, so the desktop virtualization project won’t kick off for another two or three years, Scherer says.
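To make Scherer’s earlier cost comparison concrete, the short sketch below restates it as arithmetic. The $10,000 figures, the 20-to-1 consolidation ratio and the 292-VM count come from the article; the helper names, the decision to count host hardware on the virtualized side, and the packing assumption are illustrative, not SDDPC’s actual budgeting.

    # Illustrative restatement of the cost comparison in the article; the dollar
    # figures come from the story, the comparison logic is just arithmetic.

    PHYSICAL_SERVER_COST = 10_000          # rough HP ProLiant price cited earlier
    VIRTUALIZATION_COST_PER_HOST = 10_000  # what SDDPC pays to virtualize a four-socket host
    VMS_PER_HOST = 20                      # conservative; SDDPC sees up to 35 per host

    def cost_physical(workloads: int) -> int:
        """One physical server per workload."""
        return workloads * PHYSICAL_SERVER_COST

    def cost_virtual(workloads: int) -> int:
        """Enough virtualized hosts (hardware + virtualization) to carry the workloads."""
        hosts = -(-workloads // VMS_PER_HOST)   # ceiling division
        return hosts * (PHYSICAL_SERVER_COST + VIRTUALIZATION_COST_PER_HOST)

    # 292 = SDDPC's current VM count; in practice it spreads them over 22 hosts for headroom.
    for n in (20, 292):
        print(n, "workloads:", cost_physical(n), "physical vs", cost_virtual(n), "virtualized")
    # 20 workloads: 200000 physical vs 20000 virtualized
    # 292 workloads: 2920000 physical vs 300000 virtualized

Even charging the virtualized path for its host hardware, the gap is roughly an order of magnitude, which is why the licensing premium over cheaper hypervisors matters less to SDDPC than the consolidation ratio.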
Sticky points
Despite the benefits he has realized, Scherer has run into some roadblocks and challenges related to virtualization. His goal is to virtualize nearly everything, but vendor licensing and support policies are holding him back. The SDDPC has virtualized Citrix’s application-delivery software, SQL servers and all its Web services on VMware boxes. Vendor support issues, however, meant it could run SAP software only on virtual servers in its development environment, and couldn’t run Exchange on a virtual infrastructure at all. SAP and Microsoft now support VMware, so Scherer will revisit those earlier decisions, he says. “Until virtualization is adopted by every single software company, you’re going to run into those issues,” he adds. The potential for server overuse is another issue. Because it’s so easy to deploy new virtual machines, an overzealous IT pro runs the risk of
overcommitting resources. It may be tempting to allocate nearly 100% of a server’s resources, but it’s better to leave room for future growth. In addition, virtualization introduces a potential single point of failure, because dozens of applications and virtual machines may reside on one physical server. “Your biggest risk is having a hardware failure and then no place for those virtual machines to run,” Scherer says. That’s why it’s important to use such disaster-recovery features as live migration, and build out a strong network and storage system to support virtual servers, he says. The SDDPC relies on about 250 terabytes of NetApp storage, and uses such features as data deduplication and thin-provisioning to maximize storage space and make sure each application has enough storage dedicated to it. Devoting too many physical and virtual machines to one storage array can be problematic. “That recently happened: We had another application that wasn’t virtualized and was on the same array as our virtual-machine environment,” Scherer says. “Resources spiked up and caused contention on our virtual-machine farm. Luckily, we caught it in the early stage and moved the data off, but it was definitely a lesson learned,” he adds. Beyond storage, IT pros who embrace virtualization need to design full multipath networks, Scherer says. “You want to make sure that if a link fails, you have full redundancy,” he says. “If all your virtual machines are running on a data store that has only one path and the link dies, that’s the same thing as unplugging all your ESX hosts, and you’re completely dead.”
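As a rough illustration of the overcommitment and single-point-of-failure concerns, the sketch below flags hosts whose allocated virtual CPUs exceed a headroom threshold and asks whether the rest of a cluster could absorb any one host’s virtual machines. The host names, core counts and threshold are all hypothetical; this is not SDDPC tooling or a VMware API, just the kind of sanity check Scherer is describing.

    # Hypothetical capacity check -- not SDDPC's tooling or a VMware API.
    # Flags hosts where allocated vCPUs exceed a headroom threshold, and checks
    # whether a failed host's VMs could be absorbed elsewhere in the cluster.

    MAX_VCPU_PER_PCPU = 3.0   # arbitrary overcommit ceiling chosen for this example

    hosts = {                 # host -> (physical cores, vCPUs allocated to its VMs)
        "esx01": (32, 60),
        "esx02": (16, 56),    # overcommitted under the threshold above
        "esx03": (16, 20),
    }

    for name, (pcpus, vcpus) in hosts.items():
        ratio = vcpus / pcpus
        status = "OK" if ratio <= MAX_VCPU_PER_PCPU else "OVERCOMMITTED"
        print(f"{name}: {ratio:.1f} vCPU per core -- {status}")

    # Single point of failure: can the surviving hosts carry the whole cluster's load?
    total_pcpus = sum(p for p, _ in hosts.values())
    total_vcpus = sum(v for _, v in hosts.values())
    for name, (pcpus, vcpus) in hosts.items():
        surviving_capacity = (total_pcpus - pcpus) * MAX_VCPU_PER_PCPU
        verdict = "can absorb" if total_vcpus <= surviving_capacity else "CANNOT absorb"
        print(f"If {name} fails, the remaining hosts {verdict} its virtual machines")

Run against these made-up numbers, the check shows one host over the ceiling and shows that losing the largest host would leave nowhere for its virtual machines to restart -- exactly the headroom and failover planning Scherer argues for.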