Dr. Dobb's Digest: The Art and Business of Software Development

June 2009

Editor's Note
by Jonathan Erickson

Techno-News
Game Theory, Machine Learning, and Better Bidding Strategies
A better way to find the best bidding strategy in simulated auctions?

Features

A Model That's Right For The Times
by Eric J. Bruno
Companies face pressure to find new ways to extend distributed computing.

Software Engineering ≠ Computer Science
by Chuck Connell
Why can't software engineering have more rigorous results, like the other parts of computer science?

Minimize Code by Using jQuery and Data Templates
by Dan Wahlin
ASP.NET 4.0 validates the usefulness of client-side templates.

The Android 1.5 Developer Experience
by Mike Riley
Android 1.5 corrects shortcomings and provides exciting new enhancements.

The System of a Dump
by Glen Matthews
DebugDiag is a simple-to-use debugging tool that provides you with a trove of information.

Columns

Of Interest

Conversations
by Jonathan Erickson
Dr. Dobb's talks with Erik Troan, CTO at rPath, about distributed computing and modular deployments.

Book Review
by Mike Riley
Examining Gray Hat Python: Python Programming for Hackers and Reverse Engineers.

Effective Concurrency
by Herb Sutter
Break up and interleave work to keep threads responsive.

Entire contents Copyright © 2009, Techweb/United Business Media LLC, except where otherwise noted. No portion of this publication may be reproduced, stored, or transmitted in any form, including computer retrieval, without written permission from the publisher. All Rights Reserved. Articles express the opinion of the author and are not necessarily the opinion of the publisher. Published by Techweb, United Business Media Limited, 600 Harrison St., San Francisco, CA 94107 USA 415-947-6000.


Editor’s Note

Distributed Computing

By Jonathan Erickson, Editor in Chief

Innovation makes things easier. That's the plan anyway. But oftentimes, solutions to one problem give rise to unanticipated problems elsewhere. Take distributed computing technologies like cloud computing, for instance. In general, the problems cloud computing is intended to solve involve reducing time, cost, and complexity; and by all accounts, it appears to be working out that way. That is, until you add software licensing into the mix.

Until now, the dominant licensing approaches have been the processor, site, and seat models. For the most part, these work. But what if your data center has multiple servers, each running multiple virtual operating environments? And what if the number of OS instances changes from day to day, or even hour to hour? Or what if you need to move the virtual environment from one server to another? How much more will this cost? It's safe to say that the existing models of software licensing just don't work for today's distributed computing world.

Proposals for alternative licensing models are in the works, with attention being paid to transaction approaches that range from monthly (or even hourly) subscriptions to utility-based "pay for what you use" models. Volume-based licensing is an attractive alternative in cases where hundreds of employees may need access to a particular software package, but for only a few hours per month.

You'd think that major software vendors would have a handle on these problems, but that's not so. The different versions of Microsoft Windows Server 2008 are a case in point: standard, enterprise, and data center each has its own set of licensing rules. Likewise, pricing for Oracle software with "Standard Edition" in the product name is based on the size of the Amazon EC2 instances; its Database Standard Edition can only be licensed on Amazon's EC2 instances up to 16 virtual cores, and Standard Edition One can only be licensed on EC2 instances up to 8 virtual cores. I'll leave it up to you to decide if that's easier to understand than "one-processor/one-license."




Techno-News


Game Theory, Machine Learning, and Better Bidding Strategies

A better way to find the best bidding strategy in simulated auctions?

Scientists at the University of Michigan have developed a better way to find the best bidding strategy in a simulated auction modeled after commodity and financial securities markets. In their paper "Stronger CDA Strategies Through Empirical Game-Theoretic Analysis and Reinforcement Learning," Michael Wellman, a professor in the Division of Computer Science and Engineering, and doctoral candidate L. Julian Schvartzman describe a continuous double auction — an ever-changing market in which bidders exchange offers to both buy and sell, and transactions occur as soon as participants agree on a price. This dynamic behavior is characteristic of the stock market, for example. And it makes such markets difficult for researchers to study and solve.

Analysts trying to "solve" such problems are seeking an equilibrium for the market. An equilibrium is a configuration of bidding strategies under which each participant uses the best strategy he or she can, taking into consideration the other participants' strategies. Schvartzman and Wellman evaluated and tested all prior proposals for the best strategies, which include waiting until the last minute to bid, randomly bidding, and taking into account the history of the bids of all participants. They say they've conducted the most comprehensive continuous double auction strategy study ever published.

To this evaluation they added a layer of artificial intelligence, or machine learning. The “reinforcement learning” technique they used enables a computer to, in essence, learn from experimenting with actions in a variety of situations to determine what overall strategy would work best. “Nobody has put these techniques together before,” Schvartzman said. “One could take these techniques and apply them to real markets, not to predict specific price movements, but to determine the best bidding strategy, given your objectives,” Wellman said. This new combined method generated a more stable equilibrium candidate comprising stronger bidding strategies than any previously identified, the researchers say. The method would produce different strategies in different situations. “My goal is to make a contribution to the automation of markets,” Schvartzman said, “not just financial markets, but in other scenarios, such as web advertising or even nurses bidding for their shifts in hospitals. Eventually, any resource allocation problem in which there is uncertainty about what something is worth could use a dynamic market instead of a fixed price.”



A Model That's Right For The Times

Companies face pressure to find new ways to extend distributed computing

by Eric J. Bruno

In considering the history of distributed computing, CIOs should focus on how companies knew that it was the right time to make a big architectural bet on the next wave of technology. This past experience could help CIOs who are being asked to consider a similar move today.

In the 1970s, for instance, Reuters introduced Monitor, through which journalists entered information via dumb terminals and a mainframe computer sent it out to readers. In the early 1990s, about the time I joined Reuters as a developer, it was taking that distributed computing concept further by building a pioneering electronic trading system, Globex, for the Chicago Mercantile Exchange, drawing on a mainframe and Windows-based PCs. As Globex's costs grew with its popularity, CME moved the Globex architecture to an even more distributed model: a pair of mainframe-class computers coupled with about 1,500 workstation-class servers running Linux and Solaris. Now a new question is arising for CME: What role will cloud computing play in Globex and other platforms?

IT leaders face real pressures in today's economy to push the limits of technologies such as virtualization and cloud computing that extend distributed computing models. The decisions CME and Reuters faced in choosing which distributed model to adopt, and when, are much the same as the ones companies face today. One difference, however, is that economic pressures are forcing companies to consider emerging, often immature, distributed computing technologies.

What's New

Distributed computing refers broadly to applications that adhere to the client-server model, a cluster, an N-tier architecture, or some combination of them. While there are variations on these base models, what they have in common is that they divide computing across multiple computers to achieve greater application scale and availability. Large websites like eBay use a combination of these models, with database and app servers clustered within each tier of the design.

Examples of distributed computing architectures.


With the increasing use of Ajax at the browser level, many sites have added a client-server element to the mix. As a result, large-scale distributed applications such as Google and Yahoo leverage all three computing models.

Web services take the distributed model a step further, sharing the data processing load. Since Web services are based on HTTP, it's straightforward to deploy a single service to multiple servers to share the load. This design lets developers distribute even single application components, resulting in greater scalability, more code reuse, and reduced costs. Thanks to open standards and Web services, message formats and protocols can be defined in XML, C++, or Java, then easily implemented on other platforms, reducing costs further.

But the biggest recent change to the distributed computing model is virtualization, letting IT divide a physical server into multiple virtual servers. Beyond the oft-cited hardware, energy, and space savings, using virtualization to divide a physical server into virtual ones helps solve the problem of getting the most value from multicore computers. For instance, even if individual software components aren't yet written to take advantage of multicore architectures, developers should still consider using virtualization to run multiple software components on one physical computer as though they were each running on separate computers. This maintains the security and robustness of the software, while squeezing the most value from multithreaded-capable multicore computers.

Distributed computing virtualized.


As a result, individual application components can execute on multiple virtual servers, all running on a single physical server. One approach is to pair applications that communicate as much as possible, thereby eliminating network-induced latency that might be present if those apps are separated. Combining virtualization with the other models of distributed computing can result in a cost-effective, scalable architecture.

But effectively monitoring virtualization requires administrator oversight, a time-consuming process that companies should look to minimize using configuration management and other lifecycle tools. Instead of manually building configurations, for instance, administrators can link multiple templates containing the operating system, build scripts, and applications to a new virtual machine volume. Companies that need to speed up the process also should provide developers with self-service facilities for submitting and acting on virtual resource requests, instead of formal submission processes and the delays they entail.

While virtualization is the here and now of distributed computing, cloud computing is its future. Although its full impact has yet to be realized, it's becoming clear that cloud computing will be the service-delivery option of choice for distributed applications, thanks in part to today's CPU, memory, and bandwidth capabilities. Once vendors can combine a high level of developer control, a reasonable and flexible cost model, and compelling services, cloud computing will change the face of distributed computing.

Extreme distribution: Combining clustering, virtualization, N-tier architectures, Web services, and Ajax.


When will that happen for enterprise systems? Some of the most important factors to watch are design decisions for security, data protection and recovery guidelines, and application architectures. For instance, developers working with cloud computing need to contend not only with familiar dedicated security problems such as firewalls and intrusion detection, but with security in a shared cloud environment as well. Heavy lifting in these areas is being done by groups such as the Cloud Security Alliance, which has released its “Security Guidance for Critical Areas of Focus in Cloud Computing,” a best-practices document that provides security guidelines for enterprise cloud computing. Perhaps most importantly, as with other distributed computing models, cloud computing isn’t an all-or-nothing proposition. Hybrid models of dedicated/cloud resource implementations may be just the thing for applications with bursty usage patterns.

Distributed Computing Case Studies

With Globex, when CME distributed the architecture from a central mainframe to a pair of mainframe-class computers coupled with workstation-class servers running Linux and Solaris, it was able to handle more than 2 million transaction requests daily, with response times reduced to 150 milliseconds or less. To maintain an advantage in terms of cost, customer response time, and overall reliability, CME relies heavily on open source software and low-cost, increasingly powerful x86-based servers. CME also uses Novell's ZENworks configuration management software for distributed-application management and as a step toward introducing virtualization into the architecture. What's next? Like a lot of companies, CME has its eye on cloud computing, but it doesn't have a detailed plan it's ready to share since, like a lot of companies, it's taking small steps while waiting for cloud standards to mature.

Another distributed computing project worth examining is eBay. Although users sell millions of dollars of goods through the system daily, eBay's product is a website that it must maintain with the most efficient software architecture possible. That architecture has grown to become a three-tier design with business logic, presentation code, and database code separated into different physical tiers. Over time, servers have been continuously added to each tier, including the database, which has been partitioned and distributed among many servers across multiple geographic locations. The application tier has been rewritten in Java to enable even more computing distribution. Entire feature sets — search, indexing, transaction management, billing, and fraud detection — have been moved out of the core application and into separate pools of servers. The architecture has moved to this level of distribution so that eBay can keep up with growth in the number of users and active auctions. Like CME, eBay hasn't said how it will use cloud resources, but it's a founding member of the Cloud Security Alliance.

As companies consider moving distributed computing to cloud resources, Forrester Research's Ted Schadler advises that they launch pilot projects that have milestones for measuring use, renegotiating pricing, and increasing employee training.

Design and Build

Distributed application design based on standards, open source.


The cloud is tomorrow's promise, but the biggest current gains come from increasing levels of speed and parallelization of low-cost servers that let companies reduce costs and meet increasing demands. There's no magic in this; higher demands have been placed on networking and communication technologies. These demands are being met with software advances in the form of enterprise service buses, Web services, and virtualization.

Tools for building highly scalable applications based on these distributed computing design methodologies are readily available, many as open source. But open source isn't a panacea when it comes to building distributed systems. While freely available open source software can cut licensing costs, it also can drive up staff costs because of the special skills and knowledge required to work in an environment where formal technical support may not be available. An example of how developers can use open source to extend distributed computing is detailed below.


Open source application servers such as Sun Microsystems' GlassFish and the Sun Cloud Computer Service let developers write and integrate application components in Java, Python, and Ruby. Sun — at least before its planned acquisition by Oracle — said its Sun Cloud would be a public cloud for developers. To further attract developers, Sun is also providing a set of REST-based open APIs published under the Creative Commons license, letting anyone use them at no cost. Developers can immediately deploy applications to the Sun Cloud by leveraging prepackaged virtual machine images of Sun's open source software.

Say your developers want to build an application using a set of Web services written in Ruby, application logic written in Java and PHP, presentation logic in JavaServer Pages, and running on OpenSolaris (or Linux). To add both desktop and mobile device support, they write a JavaFX application as an alternate presentation layer. With a combination of XML and an enterprise service bus, they can reliably tie all these components into one scalable application that runs on a farm of cheap x86 servers.

To illustrate, the following sample application — a company's research portal for investment bankers — combines HTML, an XML Web service, Ajax/JavaScript, Java, and JavaFX. (Source code that implements this application is available at informationweek.com/1231/ddj/code.htm.) The application consists of three components:

• Stock quotes, a REST-based service that returns quote data in XML.
• Data Requestor, a component that uses a message broker or ESB to reliably request and distribute critical financial data.
• Stock Viewer, a JavaFX app that uses the other components to display the latest data for a given stock symbol.

The key to this architecture is that not only is it distributed by design, but it's also based on both open standards and open source. Thanks in part to XML and HTTP, the result is platform and language independence for each component at every tier. Some of the components, such as the Quotes and News services, use a messaging protocol (JMS) to distribute and scale their internal components. For instance, the Quotes service is distributed across multiple nodes for redundancy, and uses a reliable message queue to coordinate and distribute requests amongst them. Multiple Java Servlets are deployed to serve requests for quote data over HTTP. This data is retrieved from an internal cache, if available and up to date. Otherwise, a request to a third-party provider is enqueued on a reliable JMS queue. Separately, one of the redundant Quote Requestor components will dequeue the request, retrieve the data from the provider, and then return it to the client in the form of XML. This distributed architecture provides scalability to the Quotes service (multiple Quote Servlets share the load) and reliability (the queue has guaranteed delivery, and multiple queue readers ensure uptime). A minimal sketch of this servlet-and-queue flow appears below.

Whether companies are implementing mission-critical service-based applications that span virtual servers around the globe or prototyping a simple application based on Web services and widgets like this, the underlying technology is fundamentally the same. Of course, the scale and complexity are different. But with advances in CPU, memory, and bandwidth capabilities, the promise of distributed computing is on the way to becoming fully realized.

—Eric Bruno is a technology consultant specializing in software architecture, design, and development. Write to us at [email protected].

Distributed Quotes service design.
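To make the servlet-and-queue flow described above concrete, here is a minimal sketch in Java. It is not the article's actual sample code (that is available at the URL given earlier); the class, the in-memory cache, and the JNDI names (jms/ConnectionFactory, jms/QuoteRequests) are invented for illustration, and error handling is kept to a bare minimum.

// Illustrative sketch only; not the article's downloadable sample code.
// A Quote Servlet answers HTTP requests from an in-memory cache; when the
// cached quote is missing or stale, it enqueues a refresh request on a JMS
// queue that a separate Quote Requestor process services.
import java.io.IOException;
import java.util.concurrent.ConcurrentHashMap;

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Queue;
import javax.jms.Session;
import javax.naming.InitialContext;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class QuoteServlet extends HttpServlet {

    private static final long MAX_AGE_MS = 15000;   // how long a cached quote stays "fresh"

    // Minimal in-memory cache: symbol -> quote XML plus the time it was stored.
    private static final class CachedQuote {
        final String xml;
        final long storedAt = System.currentTimeMillis();
        CachedQuote(String xml) { this.xml = xml; }
    }

    private final ConcurrentHashMap<String, CachedQuote> cache =
            new ConcurrentHashMap<String, CachedQuote>();

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        String symbol = req.getParameter("symbol");
        if (symbol == null) {
            resp.sendError(HttpServletResponse.SC_BAD_REQUEST, "symbol required");
            return;
        }

        CachedQuote cached = cache.get(symbol);
        String xml;
        if (cached != null && System.currentTimeMillis() - cached.storedAt < MAX_AGE_MS) {
            xml = cached.xml;                        // serve straight from the cache
        } else {
            enqueueRefreshRequest(symbol);           // a Quote Requestor will call the provider
            xml = "<quote symbol=\"" + symbol + "\" status=\"pending\"/>";
        }
        resp.setContentType("text/xml");
        resp.getWriter().write(xml);
    }

    // Put the symbol on a reliable JMS queue; one of the redundant Quote
    // Requestor components (not shown) dequeues it and fetches fresh data.
    private void enqueueRefreshRequest(String symbol) throws ServletException {
        try {
            InitialContext jndi = new InitialContext();
            ConnectionFactory factory = (ConnectionFactory) jndi.lookup("jms/ConnectionFactory");
            Queue requests = (Queue) jndi.lookup("jms/QuoteRequests");
            Connection connection = factory.createConnection();
            try {
                Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                session.createProducer(requests).send(session.createTextMessage(symbol));
            } finally {
                connection.close();
            }
        } catch (Exception e) {
            throw new ServletException("Could not enqueue quote refresh for " + symbol, e);
        }
    }
}

In the article's design the Quote Requestor returns the refreshed XML to the caller; the "pending" placeholder here is a simplification to keep the sketch self-contained.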


Software Engineering ≠ Computer Science

Software engineering seems different, in a frustrating way, from other disciplines of computer science

by Chuck Connell

A few years ago, I studied algorithms and complexity. The field is wonderfully clean, with each concept clearly defined, and each result building on earlier proofs. When you learn a fact in this area, you can take it to the bank, since mathematics would have to be inconsistent to overturn what you just learned. Even the imperfect results, such as approximation and probabilistic algorithms, have rigorous analyses about their imperfections. Other disciplines of computer science, such as network topology and cryptography, also enjoy a similarly satisfying status.

Now I work on software engineering, and this area is maddeningly slippery. No concept is precisely defined. Results are qualified with "usually" or "in general". Today's research may, or may not, help tomorrow's work. New approaches often overturn earlier methods, with the new approaches burning brightly for a while and then falling out of fashion as their limitations emerge. We believed that structured programming was the answer. Then we put faith in fourth-generation languages, then object-oriented methods, then extreme programming, and now maybe open source.

But software engineering is where the rubber meets the road. Few people care whether P equals NP just for the beauty of the question. The computer field is about doing things with computers. This means writing software to solve human problems, and running that software on real machines. By the Church-Turing Thesis, all computer hardware is essentially equivalent. So while new machine architectures are cool, the real limiting challenge in computer science is the problem of creating software. We need software that can be put together in a reasonable amount of time, for a reasonable cost, that works something like its designers hoped for, and runs with few errors.

With this goal in mind, something has always bothered me (and many other researchers): Why can't software engineering have more rigorous results, like the other parts of computer science? To state the question another way, "How much of software design and construction can be made formal and provable?" The answer to that question lies in Figure 1.

Figure 1: The bright line in computer science.


The topics above the line constitute software engineering. The areas of study below the line are the core subjects of computer science. These latter topics have clear, formal results. For open questions in these fields, we expect that new results will also be formally stated. These topics build on each other — cryptography on complexity, and compilers on algorithms, for example. Moreover, we believe that proven results in these fields will still be true 100 years from now.

So what is that bright line, and why are none of the software engineering topics below it? The line is the property "directly involves human activity". Software engineering has this property, while traditional computer science does not. The results from disciplines below the line might be used by people, but their results are not directly affected by people.

Software engineering has an essential human component. Software maintainability, for example, is the ability of people to understand, find, and repair defects in a software system. The maintainability of software may be influenced by some formal notions of computer science — perhaps the cyclomatic complexity of the software's control graph. But maintainability crucially involves humans, and their ability to grasp the meaning and intention of source code. The question of whether a particular software system is highly maintainable cannot be answered just by mechanically examining the software.

The same is true for safety. Researchers have used some formal methods to learn about a software system's impact on people's health and property. But no discussion of software safety is complete without appeal to the human component of the system under examination.

Likewise for requirements engineering. We can devise all sorts of interview techniques to elicit accurate requirements from software stakeholders, and we can create various systems of notation to write down what we learn. But no amount of research in this area will change the fact that requirement gathering often involves talking to or observing people. Sometimes these people tell us the right information, and sometimes they don't. Sometimes people lie, perhaps for good reasons. Sometimes people are honestly trying to convey correct information but are unable to do so.

This observation leads to Connell’s Thesis: Software engineering will never be a rigorous discipline with proven results, because it involves human activity.

This is an extra-mathematical statement, about the limits of formal systems. I offer no proof for the statement, and no proof that there is no proof. But the fact remains that the central questions of software engineering are human concerns:

• What should this software do? (requirements, usability, safety)
• What should the software look like inside, so it is easy to fix and modify? (architecture, design, scalability, portability, extensibility)
• How long will it take to create? (estimation)
• How should we build it? (coding, testing, measurement, configuration)
• How should we organize the team to work efficiently? (management, process, documentation)

All of these problems revolve around people. My thesis explains why software engineering is so hard and so slippery. Tried-and-true methods that work for one team of programmers do not work for other teams. Exhaustive analysis of past programming projects may not produce a good estimation for the next. Revolutionary software development tools each help incrementally and then fail to live up to their grand promise. The reason is that humans are squishy and frustrating and unpredictable.

Before turning to the implications of my assertion, I address three likely objections:

The thesis is self-fulfilling. If some area of software engineering is solved rigorously, you can just redefine software engineering not to include that problem.

This objection is somewhat true, but of limited scope. I am asserting that the range of disciplines commonly referred to as software engineering will substantially continue to defy rigorous solution. Narrow aspects of some of the problems might succumb to a formal approach, but I claim this success will be just at the fringes of the central software engineering issues.

Statistical results in software engineering already disprove the thesis.

These methods generally address the estimation problem and include Function Point Counting, COCOMO II, PROBE, and others. Despite their mathematical appearance, these methods are not proofs or formal results. The statistics are an attempt to quantify subjective human experience on past software projects, and then extrapolate from that data to future projects. This works sometimes. But the seemingly rigorous formulas in these schemes are, in effect, putting lipstick on a pig, to use a contemporary idiom. For example, one of the formulas in COCOMO II is PersonMonths = 2.94 × Size^B, where B = 0.91 + 0.01 × ∑SF_i, and SF is a set of five subjective scale factors such as "development flexibility" and "team cohesion". The formula looks rigorous, but is dominated by an exponent made up of human factors.
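To see how much leverage those subjective inputs have, it helps to run the formula with purely illustrative numbers (invented for this example, and ignoring COCOMO II's effort multipliers). Take a project of 100 KSLOC:

If the five scale factors sum to 10: B = 0.91 + 0.01 × 10 = 1.01, so PersonMonths ≈ 2.94 × 100^1.01 ≈ 308.
If the five scale factors sum to 20: B = 0.91 + 0.01 × 20 = 1.11, so PersonMonths ≈ 2.94 × 100^1.11 ≈ 488.

A different set of judgment calls about factors such as "team cohesion" moves the estimate by roughly 60 percent, even though nothing about the project itself changed; the arithmetic is sound, but the inputs are human.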

Formal software engineering processes, such as cleanroom engineering, are gradually finding rigorous, provable methods for software development. They are raising the bright line to subsume previously squishy software engineering topics.

It is true that researchers of formal processes are making headway on various problems. But they are guilty of the converse of the first objection: They define software development in such a narrow way that it becomes amenable to rigorous solutions. Formal methods simply gloss over any problem centered on human beings. For example, a key to formal software development methods is the creation of a rigorous, unambiguous software specification. The specification is then used to drive (and prove) the later phases of the development process. A formal method may indeed contain an unambiguous semantic notation scheme. But no formal method contains an exact recipe for getting people to unambiguously state their vague notions of what software ought to do.

To the contrary of these objections, it is my claim that software engineering is essentially different from traditional, formal computer science. The former depends on people and the latter does not. This leads to Connell's Corollary:


We should stop trying to prove fundamental results in software engineering and accept that the significant advances in this domain will be general guidelines.

As an example, David Parnas wrote a wonderful paper in 1972, "On The Criteria To Be Used in Decomposing Systems into Modules." The paper describes a simple experiment Parnas performed about alternative software design strategies, one utilizing information hiding, and the other with global data visibility. He then drew some conclusions and made recommendations based on this small experiment. Nothing in the paper is proven, and Parnas does not claim that anyone following his recommendations is guaranteed to get similar results. But the paper contains wise counsel and has been highly influential in the popularity of object-oriented language design.

Another example is the vast body of work known as CMMI from the Software Engineering Institute at Carnegie Mellon. CMMI began as a software process model and has now grown to encompass other kinds of projects as well. CMMI is about 1000 pages long — not counting primers, interpretations, and training materials — and represents more than 1000 person-years of work. It is used by many large organizations and has been credited with significant improvement in their software process and products. But CMMI contains not a single iron-clad proven result. It is really just a set of (highly developed) suggestions for how to organize a software project, based on methods that have worked for other organizations on past projects. In fact, the SEI states that CMMI is not even a process, but rather a meta-process, with details to be filled in by each organization.

Other areas of research in this spirit include design patterns, architectural styles, refactoring based on bad smells, agile development, and data visualization. In these disciplines, parts of the work may include proven results, but the overall aims are systems that foundationally include a human component.

To be clear: Core computer science topics (below the bright line) are vital tools to any software engineer.

A background in algorithms is important when designing high-performance application software. Queuing theory helps with the design of operating system kernels. Cleanroom engineering contains some methods useful in some situations. Statistical history can be helpful when planning similar projects with a similar team of people. But formalism is just a necessary, not sufficient, condition for good software engineering.

To illustrate this point, consider the fields of structural engineering and physical architecture (houses and buildings). Imagine a brilliant structural engineer who is the world's expert on building materials, stress and strain, load distributions, wind shear, earthquake forces, etc. Architects in every country keep this person on their speed-dial for every design and construction project. Would this mythical structural engineer necessarily be good at designing the buildings he or she is analyzing? Not at all. Our structural engineer might be lousy at talking to clients, unable to design spaces that people like to inhabit, dull at imagining solutions to new problems, and boring aesthetically. Structural engineering is useful to physical architects, but is not enough for good design. Successful architecture includes creativity, vision, multi-disciplinary thinking, and humanity.

In the same way, classical computer science is helpful to software engineering, but will never be the whole story. Good software engineering also includes creativity, vision, multi-disciplinary thinking, and humanity. This observation frees software engineering researchers to spend time on what does succeed — building up a body of collected wisdom for future practitioners. We should not try to make software engineering into an extension of mathematically-based computer science. It won't work, and can distract us from useful advances waiting to be discovered.

Acknowledgments

My thanks to Steve Homer for a discussion that sparked my interest in this question.

—Chuck Connell is a software consultant. He can be reached at www.chc-3.com.


Minimize Code by Using jQuery and Data Templates

ASP.NET 4.0 validates the usefulness of client-side templates

by Dan Wahlin


I'm currently working on a heavily AJAX-oriented ASP.NET MVC web application for a business client and using jQuery to call controller actions, retrieve JSON data, and then manipulate the DOM to display the data. Several of the pages have quite a bit of dynamic HTML that has to be generated once a JSON object is returned from an MVC controller action, which generally leads to a lot of custom JavaScript. After working through my first page on the project, I realized that I was creating a maintenance nightmare due to the amount of JavaScript being written and decided to look into other options.

The first thing I looked for was some type of JavaScript template that would work much like GridView templates in ASP.NET. I wanted to be able to define a template in HTML and then bind a JSON object against it. That way I could easily tweak the template without having to actually touch my JavaScript code much. I found several potential template solutions (and Microsoft will be releasing a nice option with ASP.NET 4.0 as well) that were nice, but many were so CSS class centric that they ended up being a turn off since I felt like I had to learn yet another coding style just to use them.

I eventually came across one by John Resig (creator of jQuery and overall JavaScript genius) that was so small that I wasn't sure it would even be viable. I mean we're talking tiny as far as code goes — so tiny that I figured it wouldn't work well for what I needed. After doing more searching and research, I came across a post by Rick Strahl that mentioned John's micro template technique and had a few tweaks in it. I tried it and was instantly hooked because it gave me the power to use templates yet still embed JavaScript to perform basic presentation logic (loops, conditionals, etc.) as needed.

The Template Engine

The first thing I did was add the template function as a jQuery extension so that I could get to it using familiar jQuery syntax. This isn't required, but makes the template convenient to use. I ended up going with Rick's slightly tweaked version and I only changed how the error was reported. I'm not going to go into how to extend jQuery in this article, but here's what the parseTemplate extension function looks like:

$.fn.parseTemplate = function(data) {
    var str = (this).html();
    var _tmplCache = {};
    var err = "";
    try {
        var func = _tmplCache[str];
        if (!func) {
            var strFunc =
                "var p=[],print=function(){p.push.apply(p,arguments);};" +
                "with(obj){p.push('" +
                str.replace(/[\r\t\n]/g, " ")
                   .replace(/'(?=[^#]*#>)/g, "\t")
                   .split("'").join("\\'")
                   .split("\t").join("'")
                   .replace(/<#=(.+?)#>/g, "',$1,'")
                   .split("<#").join("');")
                   .split("#>").join("p.push('") +
                "');}return p.join('');";

            //alert(strFunc);
            func = new Function("obj", strFunc);
            _tmplCache[str] = func;
        }
        return func(data);
    } catch (e) { err = e.message; }
    return "< # ERROR: " + err.toString() + " # >";
}

That’s all the code for the template engine. Unbelievable really. Runs on chipmunk power.

Creating a Template

Once the extension function was ready, I had to create a template in my MVC view (note that this works fine in any web application, not just ASP.NET MVC) that described how the JSON data should be presented. Templates are placed inside of a script tag as shown next (I chopped out most of the template to keep it more concise):

<script id="MenuSummaryTemplate" type="text/html">
Totals:
<# if (DeliveryFee > 0) { #> <# } #>
Sub Total: $<span id="FinalSubTotal"> <#= FinalSubTotal #>
Sales Tax: $<span id="FinalSalesTax"> <#= FinalSalesTax #>
Delivery Fee: $<span id="DeliveryFee"> <#= DeliveryFee #>
Admin Fee: $<span id="AdminFee"> <#= AdminFee #>
Total: $<span id="FinalTotal"> <#= FinalTotal #>
Will be charged to your credit card ending with <#= CreditCard #>


You can see that the script block template container has a type of text/html and that the template uses <#= #> blocks to define placeholders for JSON properties that are bound to the template. The text/html type is a trick to hide the template from the browser and I suspect some may not like that — your call though. I'm just showing one option.

The template supports embedding JavaScript logic into it, which is one of my favorite features. After a little thought, you may wonder why I didn't simply update the spans and divs using simple JavaScript and avoid the template completely. By using a template, my coding is cut down to two lines of JavaScript code once the JSON object is created (which you'll see in a moment), and this is only part of the template. Here's another section of it that handles looping through menu items and creating rows:

<# if (MainItems == null || MainItems.length == 0) { #>
    No items selected
<# } else {
       for (var i = 0; i < MainItems.length; i++) {
           var mmi = MainItems[i]; #>
           <#= mmi.Name #>: <#= mmi.NumberOfPeople #> ordered at $<#= mmi.PricePerPerson #> per person
<#     }
   } #>

Binding Data to a Template

To bind JSON data to the template, I can call my jQuery extension named parseTemplate(), get back the final HTML as a string, and then add that into the DOM. Here's an example of binding to the template shown above. I went ahead and left the JSON data that's being bound in so that you could see it, but jump to the bottom of LoadApprovalDiv() to see where I bind the JSON object to the template. It's only two lines of code.

function LoadApprovalDiv() {
    var subTotal = parseFloat($('#SubTotal').text());
    var salesTaxRate = parseFloat($('#SalesTaxRate').val()) / 100;
    var salesTaxAmount = subTotal * salesTaxRate;
    var deliveryFee = parseFloat($('#DeliveryFee').val());
    var adminFee = (subTotal + salesTaxAmount + deliveryFee) * .05;
    var total = subTotal + salesTaxAmount + deliveryFee + adminFee;
    var deliveryAddress = $('#Delivery_Street').val() + ' ' +
                          $('#Delivery_City').val() + ' ' +
                          $('#Delivery_StateID option:selected').text() + ' ' +
                          $('#Delivery_Zip').val();
    var creditCard = $('#Payment_CreditCardNumber').val();
    var abbrCreditCard = '*' + creditCard.substring(creditCard.length - 5);

    var json = {
        'FinalSubTotal'   : subTotal.toFixed(2),
        'FinalSalesTax'   : salesTaxAmount.toFixed(2),
        'FinalTotal'      : total.toFixed(2),
        'DeliveryFee'     : deliveryFee.toFixed(2),
        'AdminFee'        : adminFee.toFixed(2),
        'DeliveryName'    : $('#Delivery_Name').val(),
        'DeliveryAddress' : deliveryAddress,
        'CreditCard'      : abbrCreditCard,
        'DeliveryDate'    : $('#Delivery_DeliveryDate').val(),
        'DeliveryTime'    : $('#Delivery_DeliveryTime option:selected').text(),
        'MainItems'       : GenerateJson('Main'),
        'SideItems'       : GenerateJson('Side'),
        'DesertItems'     : GenerateJson('Desert'),
        'DrinkItems'      : GenerateJson('Drink')
    };

    var s = $('#MenuSummaryTemplate').parseTemplate(json);
    $('#MenuSummaryOutput').html(s);
}

You can see that I call parseTemplate(), pass in the template to use and the JSON object, and then get back a string. I then add the string into a div with an ID of MenuSummaryOutput using jQuery. Here's a sample of what the template generates:

[Sample rendered output not reproduced.]

Going this route cut down my JavaScript code by at least 75% over what I had originally and makes it really easy to maintain. If I need to add a new CSS style or modify how things are presented, I can simply change the template and avoid writing custom JavaScript code. By using the template and AJAX calls, I've been able to significantly minimize the amount of server code being written and meet the client's requirement of having an extremely fast and snappy end user experience.

If you're writing a lot of custom JavaScript currently to generate DOM objects, I'd highly recommend looking into this template or some of the other template solutions out there. I can't say I've tested performance, but I can say that I'm working with some fairly large templates that are loading in less than 1 second. I personally feel it's the way to go, especially if you want to minimize code and simplify maintenance. I think Microsoft's entry into this area with ASP.NET 4.0 further validates the usefulness of client-side templates.

— Dan Wahlin (Microsoft Most Valuable Professional for ASP.NET and XML Web Services) is the founder of The Wahlin Group (www.TheWahlinGroup.com), which provides .NET, SharePoint, and Silverlight consulting and training services. Dan blogs at http://weblogs.asp.net/dwahlin.


The Android 1.5 Developer Experience

Version 1.5 corrects shortcomings and provides exciting new enhancements

by Mike Riley

On the heels of my Android Developer Experience article (see http://www.ddj.com/mobile/214600303), Google released a major update to its Android mobile operating system in late April 2009. Version 1.5 corrected a number of shortcomings with the initial launch of the OS, along with providing some exciting new enhancements. A number of additions have been included in the 1.5 release to not only bring it to parity with its closest technical competitor, the Apple iPhone, but also to exceed the iPhone's capabilities in key hardware feature sets. Some of the most notable improvements include:

• Onscreen soft keyboard that rotates in both portrait and landscape mode, just like the iPhone's touch screen keyboard.
• A2DP support for high-quality stereo Bluetooth audio reproduction.
• Camcorder application and API for capturing 15 fps videos with the ability to save, e-mail, and directly upload to YouTube.
• AppWidgets and an associated API designed to expose and extend dynamic application functionality to the Android desktop.
• Live Folders, which provide dynamic shortcuts to application data like contact lists, video file inventory, e-mails, documents, and more.

Additionally, there are a number of other improvements peppered throughout the existing APIs as well as the SDK and Eclipse plug-ins.


For example, the new SDK allows the easy creation of profiled Android devices running a particular version of the OS (1.0, 1.1, or 1.5), and allows for the automatic creation of a virtual SD card up to 64 MB in size. Such an operation in earlier SDKs was an error-prone chore, but the 1.5 SDK release, via the version 0.9.1.v200905011822-1621 Android Development Tools and Editors add-ons, transforms this into a simple, painless dialog box.

Design-time form rendering is also now available. While one still cannot 'paint' a form by dragging and dropping controls onto a design surface, the updated Eclipse Android editor plug-ins accurately prerender the layout XML code. This considerably speeds up development time, though until a robust design-time layout editor is fully baked into the Eclipse plug-ins, an external designer like the free DroidDraw is still quite helpful and sometimes even necessary.

Figure 1: Setting up an Android emulator profile.



No More Root for You

Android Developer Phone users will have an unfortunate surprise waiting for them after they download the 1.5 radio and system images from HTC. For developers and their unlocked developer phones, Google also chose to bow to the pressures of the mobile telecoms by removing local root access. As such, the section of my original article covering the ability to su to root-level file access no longer applies when running 1.5. The only legitimate way to gain root-level access to the filesystem in the 1.5 release is to tether the developer phone to an Apple OS X, Linux, or Windows desktop running the Android SDK, execute the adb shell command from a terminal window, and then su from that connection.

While Google claims it removed root access on the phone for security reasons, it seems obvious to me that it did so in response to hacks that were appearing in the Android Marketplace (like the WiFi Tether for Root Users application) that effectively transformed the phone into a WiFi access point. While I can understand the business reasons for Google appeasing the telecoms, this does prevent legitimate uses of root access (i.e., killing hidden and/or unauthorized background application processes). It also backpedals on Google's marketing of Android as a truly open, developer-friendly platform. If Google wanted to advise secure practices while appeasing the developer community, it could have instead simply added an adb switch (for example, tether the phone to an SDK-enabled PC, open a terminal window, and execute something like an adb unlock command) to open the phone's root access back up if the developer required it. Alas, at least for the 1.5 release, the only other way to obtain root access on the phone is to load a recompiled, unsigned, and unsupported kernel, such as the 'netfilter-enabled' kernel the developers of WiFi Tether for Root Users compiled and made available.

Figure 2: Rendering an Android form in the Layout View.


Writing an AppWidget

The AppWidget API is perhaps the most exciting new feature for 1.5 developers due to the way it can be used to expose larger applications in a very small viewport on the Android desktop. Similar to Microsoft Windows desktop widgets, Android widgets live on the desktop and can be mounted and relocated on any of the three Android desktop screens. The OS comes stocked with Analog Clock, Calendar, Music, Picture Frame, and Search widgets, and the Android Marketplace has rapidly expanded with many others, including widgets for monitoring battery, internal and SD storage capacity, messaging, and many more. Technically, AppWidgets can be embedded in other applications besides the Android home/desktop, reminding me of the days when Microsoft introduced the concept of OLE (Object Linking and Embedding) to the PC desktop.

Writing an AppWidget is quite easy, especially for seasoned Android developers. The main coding concepts to remember are the XML AppWidget layout metadata and its conduit of communication via the BroadcastReceiver intent. Each of these is defined and stored, like other application metadata such as name, icon, version, and permissions, in the Android project's AndroidManifest.xml file. Once these are defined, extending a custom class from the AppWidgetProvider helper class to manage the various broadcast events, updates, and error handling makes widget construction almost too easy. (A minimal sketch of this pattern appears at the end of this section.) An excellent, easy-to-follow AppWidget tutorial, accompanied by full source code, was written by Android Developer Challenge winner and relatively new Google employee Jeff Sharkey. Additional AppWidget details can be found in the Android Dev Guide (http://developer.android.com/guide/topics/appwidgets/index.html).

Using this simple framework, I wrote a rudimentary widget to post an XMPP message to my Identi.ca account that then replicates the post to my Twitter account. Although several Twitter clients are currently available in the Android Marketplace (the best still being Twidroid), I prefer quick and lightweight.


While most widgets offer a window into a much larger backend application, others like mine perform a simple service. The advantageous design aspect behind widgets is the fact that they can support this wide spectrum of usage scenarios.
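As a concrete illustration of the AppWidgetProvider pattern described above, here is a minimal, hypothetical widget. It is not the author's XMPP widget and not Sharkey's tutorial code; the package name and resource ids are invented, and it assumes the usual supporting pieces are in place (a res/layout/widget.xml layout containing a TextView with the id widget_text, an appwidget-provider metadata XML file, and a <receiver> entry for the class in AndroidManifest.xml).

package com.example.hellowidget;   // hypothetical package

import android.appwidget.AppWidgetManager;
import android.appwidget.AppWidgetProvider;
import android.content.Context;
import android.widget.RemoteViews;

// Minimal AppWidgetProvider: the home screen broadcasts update events to this
// receiver, and the widget pushes new content back through a RemoteViews object
// (widgets render in the home screen's process, so direct view access isn't possible).
public class HelloWidgetProvider extends AppWidgetProvider {

    @Override
    public void onUpdate(Context context, AppWidgetManager manager, int[] appWidgetIds) {
        // Called on the schedule declared in the widget's metadata XML
        // (updatePeriodMillis) and whenever a new instance is added to the desktop.
        for (int widgetId : appWidgetIds) {
            RemoteViews views = new RemoteViews(context.getPackageName(), R.layout.widget);
            views.setTextViewText(R.id.widget_text, "Last update: " + System.currentTimeMillis());
            manager.updateAppWidget(widgetId, views);
        }
    }
}

Everything beyond this, such as handling onDeleted, launching an Activity from a click via a PendingIntent, or posting to a network service the way the author's widget does, hangs off the same broadcast-driven skeleton.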

Gaining Momentum

While still quite a ways behind the iPhone in terms of elegance and ease of use, the Android platform is rapidly gaining ground and, in certain technical categories, surpassing what the iPhone 3G running the 2.x version OS can do. When I demo'd the new Android 1.5 features to a coworker who is an ardent iPhone fan, he was pleased to see the innovations the Android team had injected into their OS. Why was he pleased? Because these improvements will drive Apple to further innovation on the platform, prompting a mobile arms war between the two platforms that will ultimately benefit consumers with amazing capabilities in the palm of their hand.

Even so, the new 1.5 additions supply developers with powerful, exciting possibilities. Looking over some of my favorite Android 1.0 applications, I foresee programs like the TTS Translator program (http://www.imt11.com/android/tts/info_android.html) leveraging the android.speech RecognizerIntent to allow near real-time spoken word language translation. My favorite Twitter (and Identi.ca/Laconi.ca) client on Android, Twidroid, will hopefully leverage the Widget API in a future release to expose portions of its program to the desktop. In the meantime, I have created my own simple XMPP widget for immediate access: one-click quick posts to my Identi.ca account that then propagate my Tweet to my Twitter account. Considering that this hack took me less than an hour to design, code, package, and install on my Android Dev Phone, the power and potential of the Android platform is really coming into focus. I can't wait to see what's in store for the next major release of the Android OS.

—Mike Riley is a Dr. Dobb's Contributing Editor.


The System of a Dump

DebugDiag is a simple-to-use debugging tool that provides you with a trove of information

by Glen Matthews

Memory dumps have always been a tried and true method of debugging errant programs. The image of a programmer toiling over a stack of paper to track down that elusive bug is very real to me — I remember doing it. I also remember entering the wired age when a systems programmer from Ireland e-mailed his dump to us for our review — and consequently bogged down Bitnet (or NetNorth in Canada, a store-and-forward e-mail network dating circa 1981). Still, it was faster than sending a tape!

Nowadays, memory dumps can be taken in a flash, and can easily be transmitted across the Internet. They are a useful tool in a variety of settings, particularly if we regard the debugging process as extending past RTM to the support lifecycle. If we can add memory dumps to the set of tools available for debugging — such as the debugger, simple output of information through instrumentation, and observing the program behavior — then we become more effective and make life easier for ourselves.

Microsoft's Debug Diagnostics 1.1 ("DebugDiag" for short) is designed to facilitate the generation and analysis of memory dumps. There are three components to DebugDiag:

• DebugSvc.exe, a service that can attach invasively to a process.
• DebHost.exe, which contains the debugging engine and report generation components.
• DebugDiag.exe, the UI, which manages dump rules and the analysis of memory dumps.

DebugDiag's UI is designed to assist in handling dumps: creation of rules defining when and how to take a dump, and analysis of the resulting crash dumps. It is also extensible via scripting. In fact, the built-in analytics are scripts. They provide an easy way of analyzing dumps — either "crash/hang" dumps or "memory pressure" snapshots of a process. Moreover, DebugDiag is tailored for use with IIS, and designed to be able to take multiple dumps to reflect changing conditions (memory, for instance) or failing threads in IIS. Finally, in the case of IIS, DebugDiag has the ability to bundle together a number of data sources for debugging (such as the event logs, the IIS Metabase, the W3SVC logs, and the HTTP error logs) into a cab file.

Figure 1 is DebugDiag's interface. The first tab lets you create a crash rule that determines when and how a crash dump is taken. In the second tab, you can analyze a dump. There is also a default wizard that appears at startup to guide you through the dump rule creation process. The third tab displays the currently running processes, and lets you take a manual dump of a particular process.

Crash/Hang Dump

DebugDiag can simply be pointed to a crash dump and told to analyze it. Alternatively, you can prepare for a crash by creating "rules" that trigger a dump by using the wizard to walk you through the rule creation process (either a crash/hang rule, or a memory and handle leak rule). In Figure 2, you see the four panels of the wizard when setting up a crash rule. Here I'm setting up a rule for demodump, a demonstration program that exercises some of DebugDiag's features.

Figure 1: DebugDiag User Interface.


Figure 2: Crash Rule Wizard Panels.


The wizard lets you choose the type of process (for instance, all IIS/COM+ processes, or a specific generic process, MTS/COM+ process, IIS web application pool, or NT service). This choice leads to a panel displaying either executing processes or applications of the type you selected. If you select the MTS/COM+ application radio button, you are shown a list of packages. If you choose the NT service radio button, a list of services is displayed. Finally, if you select a specific IIS web application pool, you are shown a list of pools from which to select. The fourth panel in Figure 2 is the Advanced Configuration panel, where you specify the type of dump action that you want (a log trace, or minidump, or full userdump, or even custom action). Limits and further dump configuration can be done here. Also, PageHeap can be configured for the process using a simple dialog (rather than using the Microsoft gflags tool). PageHeap is a tool that can be used to diagnose heap corruption problems, which can be really nasty! For more information, see How to use Pageheap.exe in Windows XP, Windows 2000, and Windows Server 2003.

Memory Pressure Analysis
The configuration behind a memory or handle leak rule is less obvious. Tools such as Process Explorer from Windows Sysinternals show the changes of state in a process. However, to completely analyze memory utilization you need a dump. Moreover, when tracking down a problem, multiple dumps may be necessary. Figure 3 illustrates the configuration of a leak rule. DebugDiag injects LeakTrack.dll into the process under study. (You can also inject this DLL manually into a process.) The leak rule lets you specify points at which a dump will be taken, as well as a final dump.

demodump: Illustrating DebugDiag
To illustrate some of these scenarios, I've put together a simple program called demodump that attempts to replicate certain crash situations. This application takes one of four parameters: stack, address, heap, and hang.

• address. Two threads are started. In each thread, one of two routines (Recur1() or Recur2()) is called, based on a pseudorandom number that is generated for each iteration. Depending on a threshold constant, a function that generates an intentional addressing exception is randomly called.
• stack. The same two threads are started, but no intentional addressing exception is generated. The calls continue until we get a stack overflow, thus generating an exception.
• heap. As above, two threads are started. Instead of generating an exception, the application makes memory allocations and then pauses. The intention here is to simulate a leak, but not necessarily to cause a crash.
• hang. This scenario generates a deadlock using three functions, three critical sections, and three threads.

Symbols and a Symbol Store
To take full advantage of the dump analysis in DebugDiag, you need to make program symbols available. This can be either the PDB file associated with the binary in question, or a symbol store derived from it. When many versions of a binary have been released to customers, the latter is much more convenient. Microsoft maintains a public symbol server which is available over the Internet. The path is:

srv*c:\symcache*http://msdl.microsoft.com/download/symbols

It's fairly easy to create your own symbol store using the symstore utility (available in the Debugging Tools for Windows download from Microsoft). The following command line creates a symbol store for our example application:

symstore add /f debug /s store /t demodump

In this example, add indicates that files are to be added to this symbol store. The files to be added are specified by the /f switch; here they are the files in the debug output directory for demodump. /s points to the symbol store itself. Once the symbol store has been built, either of the two following symbol paths can be specified to DebugDiag, and it will resolve the symbols for demodump:

srv*C:\Users\glen\Devlocal\demodump\store;srv*c:\symcache*http://msdl.microsoft.com/download/symbols

or

srv*c:\symcache*http://msdl.microsoft.com/download/symbols;C:\Users\glen\Devlocal\demodump\debug\demodump.pdb
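The demodump source is not reproduced here, so purely as an illustration of the heap scenario described earlier: the Recur1/Recur2 names come from the dump reports discussed below, while the allocation sizes, counts, and thread setup are my assumptions, not the author's actual code. A minimal sketch might look like this:

// Hypothetical sketch of a demodump-style "heap" scenario: allocate, never free,
// then pause so LeakTrack can observe the outstanding allocations.
#include <windows.h>
#include <cstdlib>

void Recur1() { void* p = malloc( 1024 ); (void)p; }   // intentionally leaked
void Recur2() { void* p = malloc( 4096 ); (void)p; }   // intentionally leaked

DWORD WINAPI LeakThread( LPVOID ) {
   for( int i = 0; i < 1000; ++i ) {
      if( rand() % 2 ) Recur1(); else Recur2();        // pick an allocator at random
   }
   Sleep( INFINITE );                                  // pause so a dump can be taken
   return 0;
}

int main() {
   HANDLE threads[2];
   for( int i = 0; i < 2; ++i )
      threads[i] = CreateThread( NULL, 0, LeakThread, NULL, 0, NULL );
   WaitForMultipleObjects( 2, threads, TRUE, INFINITE );
   return 0;
}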

Example of a Crash
Figure 4 shows the demodump application in the process of being dumped. At the point when the screenshot was taken, Windows was dumping the process to disk. To analyze this dump, select the crash dump file by browsing for it. Since this is a crash, choose the Crash/Hang Analyzers and click on the Start Analysis button. Once the analysis is complete, DebugDiag presents the analysis in a browser as an MHTML (MIME HTML) page, which can be saved as an MHT file and viewed later. Figure 5 shows the resulting crash dump analysis.

Addressing Exception
At the top of the dump report (Figure 5), there is a summary of the dump analysis. In a real-world issue, this can point to the crash problem and potentially (if it is in a third-party module) the vendor. In the next section of the dump, there is a table of contents, in case multiple dumps are analyzed simultaneously. This can be very useful for comparing related dumps. Following this, at the beginning of the dump, we see some general information about the process that crashed.

Figure 3: Panel from Leak Rule Configuration.






Figure 4: Example of an exception generated using the “address” parameter.

After this, we find the meat of the crash: the call stack. Each line shows the calling function and the offset where the call is being made, as well as the source file and line number. Thus we can often pin down the exception to the actual statement that failed. demodump can also produce a stack overflow dump, but this is essentially the same as the addressing exception shown above, so I won't describe it here. However, a sample dump report containing four output MHTML pages, each showing the output of DebugDiag for a particular exception condition, is included with the source code for a demonstration Visual Studio project that generates exception conditions (bad pointer, too much memory used, and so on). The resulting dumps can be analyzed with the Microsoft tool.

Figure 5: Crash dump analysis

Figure 6: Demodump deadlocked, taking a manual userdump.


Deadlock or Hang
It's normal for programs to be waiting. A process contains one or more threads that are executing, and these threads may need to synchronize with something else, perhaps each other. If synchronization is incorrect, threads may block waiting for each other, and thus deadlock. Other threads may continue running, but the blocked threads consume resources and are a problem. If no thread can run within a process, the process is said to be hung. In this case, Windows won't automatically dump your process. After all, if your process is simply waiting for something, it would be rather rude to terminate it without asking for your permission. In DebugDiag, you can manually choose which type of dump, if any, to take (Figure 6). The demodump application is not really deadlocked forever: it times out after waiting for a bit, so that the application will terminate normally. The analysis carried out by the hang analysis script not only discovers that there is a deadlock in the application, but also pinpoints the threads involved in the hang: Thread 2 is deadlocked with Thread 1 (Figure 7). Other advice is given in the Recommendation cell, in case this does not enable you to resolve the problem. Other elements of the hang report show the Top 5 Threads (by CPU time) and the call stacks of the deadlocked threads (see Figure 8). There is also a report of the critical sections involved. Because of the symbols available, the analyzer is able to identify g_cs1 and g_cs2 as the two critical sections involved. We can see that Thread 2 (thread id 5780) owns g_cs2 while Thread 1 (thread id 800) owns g_cs1.
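Again, the actual demodump source isn't shown in the article, but here is a rough sketch of the kind of circular wait its hang scenario sets up (three critical sections, three threads). The g_cs names echo the symbols in the report; everything else is assumed.

// Hypothetical sketch of a three-way lock-ordering deadlock.
#include <windows.h>

CRITICAL_SECTION g_cs1, g_cs2, g_cs3;

DWORD WINAPI Worker( LPVOID param ) {
   // Each thread takes "its" critical section, then tries to take the next one,
   // forming a cycle: 1 -> 2 -> 3 -> 1.
   CRITICAL_SECTION* locks[3] = { &g_cs1, &g_cs2, &g_cs3 };
   int i = (int)(INT_PTR)param;
   EnterCriticalSection( locks[i] );
   Sleep( 100 );                                // make the deadlock window easy to hit
   EnterCriticalSection( locks[(i + 1) % 3] );  // blocks forever once all three threads start
   LeaveCriticalSection( locks[(i + 1) % 3] );
   LeaveCriticalSection( locks[i] );
   return 0;
}

int main() {
   InitializeCriticalSection( &g_cs1 );
   InitializeCriticalSection( &g_cs2 );
   InitializeCriticalSection( &g_cs3 );
   HANDLE threads[3];
   for( int i = 0; i < 3; ++i )
      threads[i] = CreateThread( NULL, 0, Worker, (LPVOID)(INT_PTR)i, 0, NULL );
   // A demodump-style program would wait with a finite timeout; this wait never
   // completes because the threads are deadlocked.
   WaitForMultipleObjects( 3, threads, TRUE, INFINITE );
   return 0;
}

A dump taken while a program like this is blocked is exactly the situation in which the hang analysis script's ownership report (which thread owns which critical section, and what it is waiting on) earns its keep.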

Figure 7: Diagnosis of a Program Deadlock.

Example of Heap Analysis (Memory Leaks)
A memory or handle leak can be a difficult thing to track down. DebugDiag can make this simpler. In tracking a leak, DebugDiag injects the LeakTrack.dll into the monitored process. Tools such as Process Explorer can show this (or you can see it in the dump report). The LeakTrack DLL saves things such as sample call stacks that are later shown in the dump report. In your memory rule, you can specify that dumps should be taken periodically, with a final dump at the end of the monitoring period. DebugDiag forces you to monitor the process under study for at least 15 minutes before taking the final dump. However, you can force DebugDiag to start recording sample call stacks immediately, and then take a manual dump.

In the analysis summary (Figure 9), you can see that the analyzer has identified the two functions that are doing the memory allocations, Recur1() and Recur2(), and has reported the allocations attributed to each. Next, the memory manager for these allocations is reported (ntdll.dll) and you are directed to the call stack samples to see where the allocations occur. Since this program ran for a very short time, the report of high memory fragmentation should be taken with a grain of salt. Generally, there is a settling period for programs, as reflected in the recommendation that you monitor a process for at least 15 minutes to get reliable information. Figure 10 shows the virtual memory summary and heap analysis. In this particular example, it is not too meaningful, but you can see the analytical results available. The Leak Analysis (Figure 11) and the call stacks (Figure 12) are more interesting. demodump has 26 outstanding allocations, and we can see sample call stacks for these allocations. In this dump, there are 10 call stacks (only three are shown). This can be invaluable in tracking down leaks, because it shows the program logic in terms of the functions called when potential leaks occur. Note as well, in the details for Recur2(), that a statistic called the Leak Probability is emitted; in this case, it was 78%. Memory allocations can fall into one of three categories:

• Cached memory that will be held for a very long time.
• Short-term allocations that are freed quickly.
• Leaks.

Clearly, you need to distinguish between the first and the last. The only reliable way to determine if a leak exists is to watch the memory behavior of the program over a long period of time. The leak probability is calculated according to a formula based on the allocation patterns observed in your program over a period of time; it's only a suggestion. Often, knowing that there is a leak and seeing the sample call stacks is enough to track it down and fix it.

Figure 8: Detail of the Hang Analysis Dump.

Figure 11: Leak Analysis.

Figure 12: Sample call stacks.


Conclusion
DebugDiag is simple to use, and provides a wealth of information. I don't think it's a silver bullet for every issue (it's perhaps not as robust as you'd want it to be), but if you aren't aware of its capabilities, then you are missing out on a very useful tool. The best way to learn about it is to try it, and with the simple demonstration application demodump you can investigate how this tool works.

Figure 9: Analysis Summary for Memory Analysis.

—Glen Matthews is manager of R&D at Hummingbird Connectivity, a division of Open Text Corp. He can be contacted at [email protected]. Return to Table of Contents

Figure 10: Virtual Memory and Heap Analysis.






[ Of Interest ]

Nokia has announced the release of the Ovi Maps Player API, a tool for embedding Ovi Maps into any compatible JavaScript-enabled website. This opens the Ovi Maps service and technology to third parties wishing to add contextual innovation to their websites. The latest version of the Ovi Maps application on a user's device is enhanced through additional features and services, providing high-resolution satellite and terrain maps (both in 2D and 3D views), 3D landmarks for over 200 cities, rotation, tilting, night view, and fly-overs and fly-throughs. Ovi Maps also offers enriched POI information, as well as a weather service that provides 24-hour and 5-day forecasts. With Ovi Maps on the Web, users can also search for addresses and POIs, find places and save them into Favorites, and organize them into Collections. Users can check the weather and preplan routes for walking or driving, and then save them. Places, collections, and routes can also be synchronized over the air between Ovi Maps on a compatible mobile device and Ovi Maps on the Web. www.ovi.com.

Perforce Software has announced the release of P4Scout, an application for the iPhone that gives administrators of the Perforce Software Configuration Management (SCM) system real-time access to server status information. P4Scout, available for free at the iTunes App Store, lets administrators quickly monitor the status of their Perforce servers from anywhere. Features of the P4Scout iPhone application include:

• Server status: The status of each server identified is indicated by an icon next to the server name.
• Simple navigation: Save discovered servers, and define and add new ones quickly.
• Levels of information: By clicking on the server name, more detail is available, including host and port, version, uptime, time remaining until licenses expire, and number of users. A list of long-running SCM operations is also available, and e-mailing the user who launched an operation is just a tap away.

"Many of our customers have huge Perforce installations supporting thousands of users who access their servers around the clock," said Perforce CTO Christopher Seiwald. "System administration is a big responsibility and we hope P4Scout will contribute to their peace of mind. This new app offers a quick way to check on a server or multiple servers and confirm everything is running smoothly." Compatible with any iPhone or iPod touch with version 2.0 software, P4Scout was developed with the new Perforce Derived API for Objective-C, which will be available soon from Perforce. P4Scout is available now for free at the iTunes App Store, and it can be downloaded and evaluated free from the Perforce website. This offer includes free technical support during evaluation. www.perforce.com.

Soft Service, a developer of wireless communications software, has released a new version of the Wireless Communication Library (also known as Bluetooth Framework). Wireless Communication Library 6.5.8 is a VCL, MFC, and .NET component library that enables software developers to add Bluetooth, IrDA, and Serial support to applications. WCL supports most Bluetooth drivers (BlueSoleil, Microsoft, Toshiba, WidComm/BroadComm), IrDA, and Serial Ports. Along with a complete set of components for use with Borland Delphi, Borland Developer Studio, Microsoft Visual Studio .NET, Microsoft Visual C++, and Microsoft Visual Basic 6, Wireless Communication Library also includes a special component for development of Bluetooth proximity marketing software. This component features Bluetooth device discovery, simultaneous data sending to multiple Bluetooth devices, and listing of the files to be sent. Moreover, it allows managing the discovered devices, so that the application can selectively ignore certain devices and send advertisements to others. A number of parameters provide an opportunity to make the sending process fast and steady. WCL is compatible with Windows XP (and higher) and features cross-platform support. There are three types of licenses: Lite, Personal, and Developer. All licenses can be used for developing commercial applications and are completely royalty free. A demo version is available at www.btframework.com/download.htm.

3M Touch Systems has announced a multitouch LCD developer kit that tracks up to 10 individual fingers and is compatible with Windows 7. The 3M Multi-touch Developer Kit is a 19-inch wide-aspect-ratio (16:10, 1440×900 resolution) LCD display with built-in touch technology. Supporting standard flick and scale gestures, as well as accurately tracking 10 individual fingers from single or multiple users, the display has a glass surface and a touch response of less than 15 milliseconds. http://solutions.3m.com/wps/portal/3M/en_US/3MTouchSystems/TS/Technologies/Multitouch/?WT.mc_id=www.3M.com/multitouch

Return to Table of Contents





[ Conversations ]

Q&A: Clouds of Distribution

Erik Troan is CTO at rPath, a company that delivers enterprise applications across cloud-based environments. He recently spoke with Dr. Dobb's editor in chief Jonathan Erickson about the rapidly changing world of distributed computing.

Dr. Dobb's: How has the advent of cloud computing changed distributed computing?
Troan: Distributed computing is all about modular deployments, where each type of node can be scaled independently of the others. Deploying that into environments that allow on-demand scaling of each node allows properly designed distributed architectures to flexibly scale themselves and adjust to the changing demands of their users. On-demand environments like clouds are poised to become the default deployment environment for distributed and scale-out compute infrastructures.

Dr. Dobb's: How far can developers go with lightweight "mashup" style tools when it comes to distributed computing?
Troan: Mashup tools are highly relevant today. Human beings do a good job of breaking tasks into components that feed each other; we do a poor job of dividing up tasks into pieces that need to share a single resource. Mashups of components using distributed environments allow scalable applications to be built quickly in a way most people are comfortable architecting.


Dr. Dobb's: Are distributed computing environments relevant?
Troan: I don't think DCEs are even remotely relevant any longer. Web technologies, including XML, SOAP, and REST, have completely displaced those efforts in mainstream architecture. Simple, standard interfaces allow applications to be built much more quickly across organizational boundaries than anything as complex as DCE or CORBA.

Dr. Dobb's: Have large-scale distributed computing projects such as seti@home, folding@home, and the like helped us solve any day-to-day distributed computing problems, or are they simply academic exercises?
Troan: Projects like seti@home have encouraged IT organizations to look at the spare computing capacity they have spread out across thousands of desktop machines as a resource they could tap to solve compute problems. I know organizations are exploring virtualization as a low-impact way of harnessing those machines.

Return to Table of Contents




[ Book Review ]

Examining Gray Hat Python: Python Programming for Hackers and Reverse Engineers
Reviewed by Mike Riley

Gray Hat Python: Python Programming for Hackers and Reverse Engineers
Justin Seitz
No Starch Press, $39.95
ISBN: 978-1-59327-192-3

Being a fan of the Python programming language, I immediately gravitate toward any new book with Python in the title. Gray Hat Python explores the relative ease of security penetration testing, and in particular Windows-centric hacking, using Python and several free security testing libraries. Does it deliver the goods? Read on to find out.

Author Justin Seitz, a senior security researcher for Immunity, Inc., clearly enjoys his job and the freedom to use the Python language to achieve his company's security testing objectives. The techniques described in his book are real-world exploits covering a wide array of Windows OS-centric hacks. The book starts off with setting up the necessary test-bench tools, including various debugging tools (such as the popular PyDbg and the author's contributed Immunity debugger), and learning how to leverage the ctypes library to call upon DLLs and manipulate stacks, breakpoints, and event handlers. The third chapter delivers a walkthrough construction of a home-made Python-based debugger that helps readers understand how more sophisticated debuggers work. The next chapter focuses on using the PyDbg tool, with examples of extending breakpoints, handling access violations, generating buffer overflows, and obtaining process snapshots. The free Immunity Debugger (available at http://debugger.immunityinc.com) is introduced in Chapter 5 as a smarter replacement for PyDbg, and the author demonstrates how much easier it is to use by comparison. The chapter opens with a tour of setting up and using the tool, activating its functions with the PyCommands library, and the 13 debugging hooks, which include BpHook/LogBpHook, AllExceptHook, PostAnalysisHook, AccessViolationHook, LoadDLLHook/UnloadDLLHook, CreateThreadHook/ExitThreadHook, CreateProcessHook/ExitProcessHook, and FastLogHook/STDCALLFastLogHook.

Once the Immunity Debugger is configured, the hack attacks begin. Bad character filtering, Data Execution Prevention (DEP) bypass, malware anti-debug routines, and a chapter devoted to soft hooking (using PyDbg) and hard hooking (via the Immunity Debugger) are discussed (incidentally, hooking is the term used for attaching to a target process and intercepting its flow of execution). Chapter 7 covers remote thread creation, DLL injection, file hiding, and backdoor coding. Chapter 8 is all about fuzzing (creating and sending malformed data to an application, making that application fail), and Chapter 9 discloses the Sulley Python-based fuzzing framework (named after the Monsters, Inc. character). Chapter 10 is about fuzzing Windows drivers via the Immunity Debugger with the help of DriverLib, a Python-based driver static analysis tool. Chapter 11 covers scripting IDA Pro (a professional-grade disassembly tool) via the IDAPython library. The book concludes with a chapter on PyEmu, a scriptable, pure Python IA32 emulator that gives Python developers the ability to emulate CPU tasks.

Overall, Gray Hat Python achieves its objective, albeit primarily for Windows security researchers versed in the Python language. While it certainly makes sense for the author to focus the discussion around the dominant Windows platform, I had hoped he would share a bit more knowledge about hacking (or penetration testing) the Linux and Mac OS X platforms, or at the very least show how these non-Windows platforms can be used for forensics, diagnostics, and hacking of the Windows platform. Even without such demonstrations, however, each chapter is packed with Python code, clear dissections, and a serious education in taking control of what was previously considered untouchable OS territory. The book succeeded in showing me how, with relative ease, a trained security researcher or determined hacker could use relatively straightforward Python scripts to infiltrate the most prevalent consumer operating system today.

Return to Table of Contents





[ Effective Concurrency ]

Break Up and Interleave Work to Keep Threads Responsive
Breaking up is hard to do, but interleaving can be even subtler
by Herb Sutter

In a recent article, we covered reasons why threads should strive to make their data private and communicate using asynchronous messages. [1] Each thread that needs to receive input should have an inbound message queue and organize its work around an event-driven message pump mainline that services the queue requests, idealized as follows:

// An idealized thread mainline
//
do {
   message = queue.pop();   // get the message (might wait)
   message->run();          // and execute it
} while( !done );           // check for exit
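The examples that follow take the Message type and the thread's queue for granted. Purely as a reading aid (not part of the original column), here is one minimal sketch of that scaffolding, written with today's standard library; the class and member names simply mirror the snippets, and the locking details are my assumptions:

// A minimal sketch of the Message/queue scaffolding the examples assume.
#include <condition_variable>
#include <cstddef>
#include <memory>
#include <mutex>
#include <queue>

class Message {
public:
   virtual ~Message() {}
   virtual void run() = 0;              // each message knows how to execute itself
};

class MessageQueue {
   std::queue< std::shared_ptr<Message> > q;
   std::mutex m;
   std::condition_variable cv;
public:
   void push( std::shared_ptr<Message> msg ) {
      { std::lock_guard<std::mutex> lock(m); q.push(msg); }
      cv.notify_one();
   }
   std::shared_ptr<Message> pop() {     // blocks until a message is available
      std::unique_lock<std::mutex> lock(m);
      cv.wait( lock, [this]{ return !q.empty(); } );
      std::shared_ptr<Message> msg = q.front();
      q.pop();
      return msg;
   }
   std::size_t size() {
      std::lock_guard<std::mutex> lock(m);
      return q.size();
   }
};

With something like this in place, the mainline above is just a loop of queue.pop() and message->run() on the thread that owns the queue.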

But what happens when this thread must remain responsive to new incoming messages that have to be handled quickly, even when we’re in the middle of servicing an earlier lower-priority message that may take a long time to process? If all the messages must be handled on this same thread, then we have a problem. Fortunately, we also have two good solutions, both of which follow the same basic strategy: somehow break apart the large piece of work to allow the thread to perform other work in between, interleaved between the chunks of the large item. Let’s consider the two major ways to implement that interleaving, and their respective tradeoffs in the areas of fairness and performance.

Example: Breaking Up a Potentially Long-Running Operation
Consider this potentially expensive message we might be asked to execute:

// A simplified message type to accomplish some
// long operation
//
class LongOperation : public Message {
public:
   void run() {
      LongHelper helper = GetHelper();
      // issue: what if this loop could take a long time?
      for( int i = 0; i < items.size(); ++i ) {
         helper->render( items[i] );
      }
      helper->print();
   }
};



This thread may be a background worker that runs all the work we want to get off the GUI thread (see [1]). Alternatively, in cases where it is impossible to obey the good hygiene of getting all significant work off the GUI thread (for example, because for some reason the work may need to happen on the GUI thread itself for legacy or OS-specific reasons), this thread may be the GUI itself. Whatever the case, what matters is that to remain responsive to other messages we need to break up LongOperation.run into smaller pieces and interleave them with the processing of other messages.

Option 1: Use Continuations
The first way to tackle the problem is to use continuations. A continuation is just a way to talk about "the rest of the work that's left to do." It includes capturing the state of any local or intermediate variables that we're in the middle of using, so that when we resume the computation we can correctly pick up again where we left off. Example 1(a) shows one way to implement a continuation style:

// Example 1(a): Using a continuation style
//
class LongOperation : public Message {
   int start;
   LongHelper helper;
public:
   LongOperation( int start_ = 0, LongHelper helper_ = nullptr )
      : start(start_), helper(helper_) { }
   void run() {
      if( helper == nullptr )          // if first time through, get helper
         helper = GetHelper();
      int i = 0;                       // do just another chunk's worth
      for( ; i < ChunkSize && start+i < items.size(); ++i ) {
         helper->render( items[ start+i ] );
      }
      if( start+i < items.size() )     // if not done, launch a continuation
         queue.push( LongOperation( start+i, helper ) );
      else                             // if last time through, finish up
         helper->print();
   }
};

The first LongOperation object executes only a suitably-sized chunk of its loop. To ensure that the June 2009

www.ddj.com

D r. D o b b ’s D i g e s t remainder of the work gets done, it creates a new LongOperation object (the continuation) that stores the current intermediate state, including the helper local variable and the loop index, and pushes the continuation on the message queue. (For simplicity this code assumes LongOperation has direct access to queue; in practice you would provide an indirection such as a callback to decouple the message type from a specific queue.)

A good way to think about this is that we’re “saving our stack frame” off in a separate object on the heap A good way to think about this is that we’re “saving our stack frame” off in a separate object on the heap. The overhead we’re incurring for each continuation is the cost of copying the local variables, one allocation (and subsequent destruction) for the continuation message, and one extra pair of enqueue/dequeue operations. This approach has the major advantage of fairness. It’s fair to the waiting messages, because each continuation gets pushed onto the end of the queue and so all messages currently waiting will be processed before we do another chunk of the longer work. More importantly, it’s fair to LongOperation itself, because any other new messages that arrive after the continuation is enqueued, during the time while we’re draining the queue, will stay in line behind the continuation. This fairness can also be a disadvantage, however, if we’d actually like to execute most of the messages in order no matter how long they take, and only enable interleaved “early” execution for certain high-priority messages. Some of that can be accomplished using a priority queue as the message queue, but in general this kind of flexibility will be easier to accomplish under Option 2, where we opt for a different set of tradeoffs.

Beware State Changes When Interleaving Note that there is a subtle but important semantic issue that we have to be aware of that wasn’t possible in the noninterleaved version of the code. The issue is that whenever the code cedes control to allow other messages to be executed, it must be aware that the thread’s state can be changed by that other code that executed on this thread in the meantime. When the code resumes, it cannot in general assume that any data that is private to the thread, including thread-local variables, has remained unchanged across the interruption. In Example 1(a), consider: What happens if another message changes the size or contents of the items collection? If our operation is trying to process a consistent snapshot of the collection’s state, we may need to save even more off to the side by taking a snapshot of the collection and doing all of our work against the snapshot, so that we maintain the semantics of doing our operation against the collection in the state it had when we started: DR. DOBB’S DIGEST

// Example 1(b): Using a continuation style, // and adding resilience to collection changes // class LongOperation : public Message { Collection myItems; int start; LongHelper helper; public: LongOperation( int start_ = 0, LongHelper helper_ = nullptr, Collection myItems = nullptr ) : start(start_), helper(helper_), myItems(myItems_) { } void run() { if( helper == nullptr ) {// if first time through helper = GetHelper(); // get helper myItems = items.copy(); // and take a snapshot of items } int i = 0; // do just another chunk’s worth for( ; i < ChunkSize && start+i < myItems.size(); ++i ) { helper->render( myItems[ start+i ] ); } if( start+i < myItems.size() ) // if not done, // launch a continuation queue.push( LongOperation( start+i, helper, myItems ) ); else { // if last time through, finish up helper->print(); Free( myItems ); // and clean up myItems } } }

Now the continuation is resilient to the case where other messages may change items, by doing all of its processing using the state the collection had when our operation began. Note that we have still introduced one other semantic change, in that we’re deliberately allowing later messages to run against newer state before this earlier operation on the older state is complete. That’s often just fine and dandy, but it’s important to be aware that we’re buying into those semantics. All of these considerations apply even more to Option 2. Let’s now turn to our second strategy for interleaving work, and see how it offers an alternative set of tradeoffs.

Option 2: Use Reentrant Message Pumping Option 2 could be subtitled: “Cooperative multitasking to the rescue!” It will be familiar to anyone who’s worked on systems based on cooperative multitasking, such as early versions of MacOS and Windows. Example 2 illustrates the simplest implementation of Option 2, where instead of “saving stack frames on the heap” in the form of continuations, we instead just keep our in-progress state right on the stack where it already is and use nesting (aka reentrancy) to pump additional messages. // Example 2(a): Explicit yield-and-reenter (possibly flawed) // class LongOperation : public Message { public: void run() { LongHelper helper = GetHelper(); for( int i = 0; i < items.size(); ++i ) { if( i % ChunkSize == 0 ) // from time to time Yield(); // explicitly yield control helper->render( items[i] ); } helper->print(); } }

In a moment we’ll consider several options for implementing Yield. For now, the key is just that the Yield call contains a message 26

June 2009

www.ddj.com

pump loop that will process waiting messages, which is what gives them the chance to get unblocked and interleave with this loop. Option 2 avoids the performance and memory overhead of separate allocation and queueing we saw in Option 1, but it leads to bigger stacks. Stack overflow shouldn't be a problem, however, unless stack frames are large and nesting is pathologically frequent (in which case, there are bigger problems; see next paragraph). Probably the biggest advantage of this approach is that the code is simpler. We just have to call Yield every so often to allow other messages to be processed, and we're golden... or so we think, because unfortunately it's not actually quite that easy. The code is simpler to write, but requires a little more care to write correctly. Why?

Remember: Beware State Changes When Interleaving, Really
Just as we saw with continuations, so with any other interleaving including Yield: Whenever the code Yields it must be aware that the thread's state can be changed by the code that executed on this thread during the Yield operation. It cannot in general assume that any data that is private to the thread, including thread-local variables, remains unchanged across the Yield call. With continuations, the issue was a bit easier to remember because that style already requires the programmer to explicitly save the local state off to the side and then return, so that by the time we get to the code where the continuation resumes it's easy to remember that time has passed and the world might have changed. When using Yield-reentrancy instead of continuations, it's easier to forget about this effect, in part because it really is (too) easy to just throw a Yield in anywhere. To see how this can cause problems, assume for a moment that semantically we don't care if nested messages change the contents of items (which was the case in the discussion around Example 1(b)). That is, assume we don't necessarily need to process a snapshot of the state, but only get through items until we're done. Even with those relaxed requirements, do you notice a subtle bug in Example 2? Consider: What happens if a nested message changes the size of the items collection? If that's possible, then the collection contains at least i objects before the Yield and the expression items[i] is valid, but after the Yield the collection may no longer contain i objects and so items[i] could fail. In this simple example, we can apply a simple fix. The issue is that we have a window between observing that i < items.size() and using items[i], so the simplest fix is to eliminate the problematic window by not yielding in between those points:

// Example 2(b): Explicit yield-and-reenter (immediate problem fixed)
//
class LongOperation : public Message {
public:
   void run() {
      LongHelper helper = GetHelper();
      for( int i = 0; i < items.size(); ++i ) {
         helper->render( items[i] );
         if( i % ChunkSize == 0 )   // from time to time
            Yield();                // explicitly yield control
      }
      helper->print();
   }
};


Now the code is resilient to having the items collection change during the call. Of course, as in the discussion around Example 1(b), it's possible that we may not want the collection to change at all, for example to ensure we're processing a consistent snapshot of the collection's state. If so, we may need to save even more off to the side just like we did in Example 1(b), except here it's easier to do because we don't have to mess with a continuation object:

// Example 2(c): As resilient as Example 1(b)
//
class LongOperation : public Message {
public:
   void run() {
      Collection myItems = items.copy();
      LongHelper helper = GetHelper();
      for( int i = 0; i < myItems.size(); ++i ) {
         if( i % ChunkSize == 0 )      // from time to time
            Yield();                   // explicitly yield control
         helper->render( myItems[i] ); // ok to do this here
      }
      helper->print();
      Free( myItems );
   }
};

Notice that we can now correctly use myItems[i] both before and after Yield because we’ve insulated ourselves against the problematic state change.

What About Yield?
The Yield call contains a message pump loop that will process some or all waiting messages. Here's the most straightforward way to implement it:

// Example 3(a): A simple Yield that pumps all waiting messages
// (note: contains a subtle issue)
//
void Yield() {
   while( queue.size() > 0 ) {   // message pump: drain the queue
      message = queue.pop();
      message->run();
   }
}

Quick: Is there anything that’s potentially problematic about this implementation? Hint: Consider the context of how it’s used in Example 2(c) and the order in which messages will be handled. The potential issue is this: With this Yield, Option 2 has a subtle but significant semantic difference from Option 1. In Option 1, the continuation was pushed on the current end of the queue and so enjoyed the fairness guarantee of being executed right after whatever waiting items were in the queue at the time it relinquished control. If more messages arrive while the continuation waits in line, they will get enqueued behind the continuation and be serviced after it in a pure first-in/first-out (FIFO) queue order—modulo priority order if the queue is a priority queue. Using the Yield in Example 3(a), however, we’ve traded away these pure FIFO queue semantics because we will also execute any new messages that arrive after we call Yield but before we completely drain the queue. This might seem innocuous at first; after all, it’s usually just “as if” we had called Yield slightly later. But notice what happens in the worst case: If messages arrive quickly enough so that the queue never gets completely empty, 27




the operation that yielded might never get control again; it could starve. Starvation is usually unlikely, because normally we don't give a thread more work than it can ever catch up with. But it can arise in practice in systems designed to always keep just enough work ready to keep a thread busy and avoid making it have to wait, and so by design the thread always has a little backlog of work ready to do and the queue stays in a small-but-not-empty steady state. In that kind of situation, beware the possibility of starvation. The simplest way to prevent this potential problem is to remember how many messages are in the queue and then pump only that many messages:

// Example 3(b): A Yield that pumps only the messages
// that were already in the queue at the time it began
//
void Yield() {
   int n = queue.size();      // snapshot the current queue length
   while( n-- > 0 ) {         // pump only that many messages
      message = queue.pop();
      message->run();
   }
}

This avoids pumping newer messages that arrive during the Yield processing and usually guarantees progress (non-starvation) for the function that called Yield, as long as we avoid pathological cases where the nested messages Yield again. (Exercise for the reader: Instrument Yield to detect starvation due to nesting.) Incidentally, note that once again we’re applying the technique of taking a snapshot of the state of the system as it was when we began the operation, just like we did in Examples 1(b) and 2(c) where we took a copy of the items collection. In this case, thanks to the FIFO nature of the queue, we don’t need to physically copy the queue; it’s sufficient to remember just the number of items to pump.
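As a nudge toward that reader exercise, and in the same sketch style as the column's own snippets (queue and message are assumed to exist; ReportPossibleStarvation is a hypothetical placeholder for whatever logging you prefer), one simple instrumentation idea is to track the Yield nesting depth:

// One possible (hypothetical) instrumentation of Yield to flag starvation-prone nesting.
int yieldDepth = 0;                    // how many Yields are active on this thread

void Yield() {
   ++yieldDepth;
   if( yieldDepth > 10 )               // arbitrary threshold: deep nesting is a warning sign
      ReportPossibleStarvation( yieldDepth );   // placeholder for your own logging
   int n = queue.size();               // pump only the messages already waiting
   while( n-- > 0 ) {
      message = queue.pop();
      message->run();                  // may call Yield again, increasing the depth
   }
   --yieldDepth;
}

A steadily growing depth, or one that rarely returns to zero, is a strong hint that nested messages are re-entering Yield and starving the outer operation.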

Summary
Sometimes a thread has to interleave its work in order to stay responsive, and avoid blocking new time-sensitive requests that may arrive while it's already

processing a longer but lower-priority operation. There are two main ways to interleave: Option 1 is to use continuations, which means saving the operation’s intermediate local state in an object on the heap and creating a new message containing “the rest of the work left to do,” which gets inserted into the queue so that we can handle other messages and then pick up and resume the original work where we left off. Option 2 is to use cooperative multitasking and reentrancy by yielding to a function that will pump waiting messages; this yields simpler code and avoids some allocation and queueing overhead because the locals can stay on the stack where they already are, but it also allows deeper stack growth. In both cases, remember: The issues of interleaving other work are really nasty and it’s all too easy to get things wrong, especially when Yield calls are sprinkled around and make the program very hard to reason about locally. Always be careful to remember that the interleaved work could have side effects, and make sure the longer work is resilient to changes to the data it cares about. If we don’t do that correctly and consistently, we’ll expose ourselves to a class of subtle and timing-dependent surprises. Next month, we’ll consider a way to avoid this class of problems by making sure the state of the system is valid at all times, even with partially done work outstanding. Stay tuned.

Notes
[1] H. Sutter. "Use Threads Correctly = Isolation + Asynchronous Messaging" (Dr. Dobb's Digest, March 2009). Available online at http://www.ddj.com/go-parallel/article/showArticle.jhtml?articleID=215900465.

Acknowledgments
Thanks to Terry Crowley for his insightful comments on drafts of this article.

Return to Table of Contents



