C++ Network Programming Mastering Complexity with ACE & Patterns
Dr. Douglas C. Schmidt
[email protected] www.cs.wustl.edu/~schmidt/tutorials-ace.html Professor of EECS Vanderbilt University Nashville, Tennessee
Motivation: Challenges of Networked Applications

Observation
• Building robust, efficient, & extensible concurrent & networked applications is hard
• e.g., we must address many complex topics that are less problematic for non-concurrent, stand-alone applications

Complexities in networked applications:
Accidental Complexities
• Low-level APIs
• Poor debugging tools
• Algorithmic decomposition
• Continuous re-invention/discovery of core concepts & components

Inherent Complexities
• Latency
• Reliability
• Load balancing
• Causal ordering
• Scheduling & synchronization
• Deadlock
Presentation Outline

Cover OO techniques & language features that enhance software quality:
• Patterns, which embody reusable software architectures & designs
• ACE wrapper facades, which encapsulate OS concurrency & network programming APIs
• OO language features, e.g., classes, dynamic binding & inheritance, parameterized types

Presentation Organization
1. Background
2. Concurrent & network challenges & solution approaches
3. Patterns & wrapper facades in ACE + applications
The Evolution of Information Technologies

• CPUs: 10 Megahertz to 1 Gigahertz
• Networks: 2,400 bits/sec to 1 Gigabit/sec

CPUs and networks have increased by 3-7 orders of magnitude in the past decade. Extrapolating this trend to 2010 yields:
• ~100 Gigahertz desktops
• ~100 Gigabits/sec LANs
• ~100 Megabits/sec wireless
• ~10 Terabits/sec Internet backbone

These advances stem largely from standardizing hardware & software APIs and protocols, e.g.:
• Intel x86 & PowerPC chipsets
• TCP/IP, ATM
• POSIX & JVMs
• Middleware & components
• Quality of service aspects

In general, software has not improved as rapidly or as effectively as hardware. Increasing software productivity and QoS depends heavily on COTS.
Component Middleware Layers

Historically, mission-critical apps were built directly atop hardware & OS:
• Tedious, error-prone, & costly over lifecycles

There are layers of middleware, just like there are layers of networking protocols. There are multiple COTS layers & research/business opportunities.

Standards-based COTS middleware helps:
• Control end-to-end resources & QoS
• Leverage hardware & software technology advances
• Evolve to new environments & requirements
• Provide a wide array of reusable, off-the-shelf developer-oriented services
Operating System & Protocols

• Operating systems & protocols provide mechanisms to manage endsystem resources, e.g.:
  • CPU scheduling & dispatching
  • Virtual memory management
  • Secondary storage, persistence, & file systems
  • Local & remote interprocess communication (IPC)
• OS examples: UNIX/Linux, Windows, VxWorks, QNX, etc.
• Protocol examples: TCP, UDP, IP, SCTP, RTP, etc.

[Figure: the 20th-century INTERNETWORKING ARCH (applications such as RTP, TFTP, FTP, HTTP, TELNET, & DNS atop TCP & UDP, atop IP, atop Ethernet, ATM, FDDI, & Fibre Channel) contrasted with the 21st-century MIDDLEWARE ARCH (Middleware Applications atop Middleware Services, atop Middleware, atop operating systems such as Solaris, Win2K, VxWorks, Linux, & LynxOS)]
Host Infrastructure Middleware

• Host infrastructure middleware encapsulates & enhances native OS mechanisms to create reusable network programming components
• These components abstract away many tedious & error-prone aspects of low-level OS APIs
• Examples:
  • Java Virtual Machine (JVM)
    • www.rtj.org
  • Common Language Runtime (CLR)
  • ADAPTIVE Communication Environment (ACE)
    • www.cs.wustl.edu/~schmidt/ACE.html

[Figure: middleware layer stack (Domain-Specific Services, Common Middleware Services, Distribution Middleware, Host Infrastructure Middleware) alongside capabilities such as asynchronous event handling, asynchronous transfer of control, physical memory access, memory management, synchronization, & scheduling]
Distribution Middleware

• Distribution middleware defines higher-level distributed programming models whose reusable APIs & components automate & extend native OS capabilities
• Examples: OMG CORBA, Sun's Remote Method Invocation (RMI), Microsoft's Distributed Component Object Model (DCOM)
• Distribution middleware avoids hard-coding client & server application dependencies on object location, language, OS, protocols, & hardware

[Figure: CORBA architecture: a Client holding an OBJ REF invokes operation() (in args, out args + return) on an Object (Servant); the request passes through the IDL STUBS, DII, & ORB INTERFACE, across the ORB CORE via GIOP/IIOP/ESIOPs, and up through the Object Adapter, IDL SKEL, & DSI; the IDL Compiler, Interface Repository, & Implementation Repository support this infrastructure]
Common Middleware Services

• Common middleware services augment distribution middleware by defining higher-level domain-independent services that focus on programming "business logic"
• Examples: CORBA Component Model & Object Services, Sun's J2EE, Microsoft's .NET
• Common middleware services support many recurring distributed system capabilities, e.g.:
  • Transactional behavior
  • Authentication & authorization
  • Database connection pooling & concurrency control
  • Active replication
  • Dynamic resource management
Domain-Specific Middleware

• Domain-specific middleware services are tailored to the requirements of particular domains, such as telecom, e-commerce, health care, process automation, or aerospace
• Examples:
  • Siemens MED Syngo
    • Common software platform for distributed electronic medical systems, spanning modalities e.g., MRI, CT, CR, Ultrasound, etc.
    • Used by all ~13 Siemens MED business units worldwide
  • Boeing Bold Stroke
    • Common software platform for Boeing avionics mission computing systems
Overview of Patterns

Patterns:
• Present solutions to common software problems arising within a certain context
• Help resolve key software design forces: flexibility, extensibility, dependability, predictability, scalability, & efficiency
• Capture recurring structures & dynamics among software participants to facilitate reuse of successful designs
• Generally codify expert knowledge of design strategies, constraints & "best practices"

[Figure: the Proxy pattern: a Client uses an AbstractService interface declaring service(); both the Proxy & the real Service implement service(), and the Proxy holds a 1:1 association to the Service it stands in for]
Overview of Pattern Languages

Motivation
• Individual patterns & pattern catalogs are insufficient
• Software modeling methods & tools just illustrate how, not why, systems are designed

Benefits of Pattern Languages
• Define a vocabulary for talking about software development problems
• Provide a process for the orderly resolution of these problems
• Help to generate & reuse software architectures
Taxonomy of Patterns & Idioms

• Idioms: restricted to a particular language, system, or tool. Example: Scoped Locking
• Design patterns: capture the static & dynamic roles & relationships in solutions that occur repeatedly. Examples: Active Object, Bridge, Proxy, Wrapper Façade, & Visitor
• Architectural patterns: express a fundamental structural organization for software systems that provides a set of predefined subsystems, specifies their relationships, & includes the rules and guidelines for organizing the relationships between them. Examples: Half-Sync/Half-Async, Layers, Proactor, Publisher-Subscriber, & Reactor
• Optimization principle patterns: document rules for avoiding common design & implementation mistakes that degrade performance. Examples: optimize for the common case, pass information between layers
The Layered Architecture of ACE
www.cs.wustl.edu/~schmidt/ACE.html

Features
• Open-source
• 200,000+ lines of C++
• 40+ person-years of effort
• Ported to many OS platforms

• Large open-source user community
  • www.cs.wustl.edu/~schmidt/ACE-users.html
• Commercial support by Riverace
  • www.riverace.com/
Sidebar: Platforms Supported by ACE

• ACE runs on a wide range of operating systems, including:
  • PCs, e.g., Windows (all 32/64-bit versions), WinCE; Red Hat, Debian, & SuSE Linux; & Macintosh OS X
  • Most versions of UNIX, e.g., SunOS 4.x & Solaris, SGI IRIX, HP-UX, Digital UNIX (Compaq Tru64), AIX, DG/UX, SCO OpenServer, UnixWare, NetBSD, & FreeBSD
  • Real-time operating systems, e.g., VxWorks, OS/9, Chorus, LynxOS, Pharlap TNT, QNX Neutrino and RTP, RTEMS, & pSoS
  • Large enterprise systems, e.g., OpenVMS, MVS OpenEdition, Tandem NonStop-UX, & Cray UNICOS
• ACE can be used with all of the major C++ compilers on these platforms
• The ACE Web site at http://www.cs.wustl.edu/~schmidt/ACE.html contains a complete, up-to-date list of platforms, along with instructions for downloading & building ACE
Key Capabilities Provided by ACE
• Service access & control
• Event handling & IPC
• Concurrency
• Synchronization
The Pattern Language for ACE

Pattern Benefits
• Preserve crucial design information used by applications & middleware frameworks & components
• Facilitate reuse of proven software designs & architectures
• Guide design choices for application developers
POSA2 Pattern Abstracts

Service Access & Configuration Patterns
• The Wrapper Facade design pattern encapsulates the functions and data provided by existing non-object-oriented APIs within more concise, robust, portable, maintainable, and cohesive object-oriented class interfaces.
• The Component Configurator design pattern allows an application to link and unlink its component implementations at run-time without having to modify, recompile, or statically relink the application. Component Configurator further supports the reconfiguration of components into different application processes without having to shut down and re-start running processes.
• The Interceptor architectural pattern allows services to be added transparently to a framework and triggered automatically when certain events occur.
• The Extension Interface design pattern allows multiple interfaces to be exported by a component, to prevent bloating of interfaces and breaking of client code when developers extend or modify the functionality of the component.

Event Handling Patterns
• The Reactor architectural pattern allows event-driven applications to demultiplex and dispatch service requests that are delivered to an application from one or more clients.
• The Proactor architectural pattern allows event-driven applications to efficiently demultiplex and dispatch service requests triggered by the completion of asynchronous operations, to achieve the performance benefits of concurrency without incurring certain of its liabilities.
• The Asynchronous Completion Token design pattern allows an application to demultiplex and process efficiently the responses of asynchronous operations it invokes on services.
• The Acceptor-Connector design pattern decouples the connection and initialization of cooperating peer services in a networked system from the processing performed by the peer services after they are connected and initialized.
POSA2 Pattern Abstracts (cont'd)

Synchronization Patterns
• The Scoped Locking C++ idiom ensures that a lock is acquired when control enters a scope and released automatically when control leaves the scope, regardless of the return path from the scope.
• The Strategized Locking design pattern parameterizes synchronization mechanisms that protect a component's critical sections from concurrent access.
• The Thread-Safe Interface design pattern minimizes locking overhead and ensures that intra-component method calls do not incur 'self-deadlock' by trying to reacquire a lock that is held by the component already.
• The Double-Checked Locking Optimization design pattern reduces contention and synchronization overhead whenever critical sections of code must acquire locks in a thread-safe manner just once during program execution.

Concurrency Patterns
• The Active Object design pattern decouples method execution from method invocation to enhance concurrency and simplify synchronized access to objects that reside in their own threads of control.
• The Monitor Object design pattern synchronizes concurrent method execution to ensure that only one method at a time runs within an object. It also allows an object's methods to cooperatively schedule their execution sequences.
• The Half-Sync/Half-Async architectural pattern decouples asynchronous and synchronous service processing in concurrent systems, to simplify programming without unduly reducing performance. The pattern introduces two intercommunicating layers, one for asynchronous and one for synchronous service processing.
• The Leader/Followers architectural pattern provides an efficient concurrency model where multiple threads take turns sharing a set of event sources in order to detect, demultiplex, dispatch, and process service requests that occur on the event sources.
• The Thread-Specific Storage design pattern allows multiple threads to use one 'logically global' access point to retrieve an object that is local to a thread, without incurring locking overhead on each object access.
The Frameworks in ACE

Each ACE framework applies inversion of control:
• Reactor & Proactor: calls back to application-supplied event handlers to perform processing when events occur synchronously & asynchronously
• Service Configurator: calls back to application-supplied service objects to initialize, suspend, resume, & finalize them
• Task: calls back to an application-supplied hook method to perform processing in one or more threads of control
• Acceptor-Connector: calls back to service handlers to initialize them after they are connected
• Streams: calls back to initialize & finalize tasks when they are pushed & popped from a stream
Example: Applying ACE in Real-time Avionics

Goals
• Apply COTS & open systems to mission-critical real-time avionics

Key System Characteristics
• Deterministic & statistical deadlines (~20 Hz)
• Low latency & jitter (~250 usecs)
• Periodic & aperiodic processing
• Complex dependencies
• Continuous platform upgrades

Key Results
• Test flown at China Lake NAWS by Boeing OSAT II '98, funded by OS-JTF
  • www.cs.wustl.edu/~schmidt/TAO-boeing.html
• Also used on SOFIA project by Raytheon
  • sofia.arc.nasa.gov
• First use of RT CORBA in mission computing
• Drove Real-time CORBA standardization
Example: Applying ACE to Time-Critical Targets

Goals
• Detect, identify, track, & destroy time-critical targets
• The challenge is to make this possible via the Joint Forces Global Info Grid; the challenges are also relevant to TBMD & NMD

Time-critical targets require immediate response because:
• They pose a clear and present danger to friendly forces
• They are highly lucrative, fleeting targets of opportunity

Key System Characteristics
• Real-time mission-critical sensor-to-shooter needs
• Highly dynamic QoS requirements & environmental conditions
• Multi-service & asset coordination

Key Solution Characteristics
• Adaptive & reflective
• Efficient & scalable
• Affordable & flexible
• High confidence
• COTS-based
• Safety critical

(Adapted from "The Future of AWACS", by LtCol Joe Chapa)
Example: Applying ACE to Large-scale Routers
www.arl.wustl.edu

Goal
• Switch ATM cells + IP packets at terabit rates

Key System Characteristics
• Very high-speed WDM links
• 10^2/10^3 line cards
• Stringent requirements for availability
• Multi-layer load balancing, e.g.:
  • Layer 3+4
  • Layer 5

Key Software Solution Characteristics
• High confidence & scalable computing architecture
• Networked embedded processors
• Distribution middleware
• FT & load sharing
• Distributed & layered resource management
• Affordable, flexible, & COTS

[Figure: router architecture with I/O modules (IOM) interconnected through banks of switch elements (BSE)]
Example: Applying ACE to Hot Rolling Mills
www.siroll.de

Goals
• Control the processing of molten steel moving through a hot rolling mill in real-time

System Characteristics
• Hard real-time process automation requirements, i.e., 250 ms real-time cycles
• System acquires values representing the plant's current state, tracks material flow, calculates new settings for the rolls & devices, & submits new settings back to the plant

Key Software Solution Characteristics
• Affordable, flexible, & COTS
• Product-line architecture
• Design guided by patterns & frameworks
• Windows NT/2000
• Real-time CORBA (ACE+TAO)
Example: Applying ACE to Real-time Image Processing
www.krones.com

Goals
• Examine glass bottles for defects in real-time

System Characteristics
• Process 20 bottles per sec, i.e., ~50 msec per bottle
• Networked configuration with ~10 cameras

Key Software Solution Characteristics
• Affordable, flexible, & COTS
• Embedded Linux (Lem)
• Compact PCI bus + Celeron processors
• Remote booted by DHCP/TFTP
• Real-time CORBA (ACE+TAO)
Networked Logging Service Example

Key Participants
• Client application processes: generate log records
• Server logging daemon: receives, processes, & stores log records

C++ code for all logging service examples is in:
• ACE_ROOT/examples/C++NPv1/
• ACE_ROOT/examples/C++NPv2/

• The logging server example in C++NPv2 is more sophisticated than the one in C++NPv1; there's an extra daemon involved
Patterns in the Networked Logging Service
• Acceptor-Connector
• Active Object
• Component Configurator
• Half-Sync/Half-Async
• Leader/Followers
• Monitor Object
• Pipes & Filters
• Proactor
• Reactor
• Scoped Locking
• Strategized Locking
• Thread-Safe Interface
• Wrapper Facade
ACE Basics: Logging

• ACE's logging facility is usually best for diagnostics:
  • Can customize logging sinks
  • Filterable logging severities
  • Portable printf()-like format directives (thread/process ID, date/time, types)
  • Serializes output across multiple threads
  • ACE propagates settings to threads created via ACE
  • Can log across a network
• ACE_Log_Msg class; use the thread-specific singleton most of the time, via the ACE_LOG_MSG macro
• Macros encapsulate most usage. Most common:
  • ACE_DEBUG ((severity, format [, args...]));
  • ACE_ERROR[_RETURN] ((severity, format [, args...])[, return-value]);
• See ACE Programmer's Guide (APG) tables 3.1 (severities), 3.2 (directives), 3.3 (macros)
ACE Logging Usage

• The ACE logging API is similar to printf(), e.g.:
  ACE_ERROR ((LM_ERROR, "(%t) fork failed"));
  generates:
  Oct 31 14:50:13 [email protected]@2766@LM_ERROR@client::(4) fork failed
  and:
  ACE_DEBUG ((LM_DEBUG, "(%t) sending to server %s", host));
  generates:
  Oct 31 14:50:28 [email protected]@1832@LM_DEBUG@drwho::(6) sending to server tango

Format directives:
• %l: displays the line number where the error occurred
• %N: displays the file name where the error occurred
• %n: displays the name of the program
• %P: displays the current process ID
• %p: takes a const char * argument and displays it and the error string corresponding to errno (similar to perror())
• %T: displays the current time
• %t: displays the calling thread's ID
Logging Severities

• You can control which severities are seen at run time
• Two masks determine whether a message is displayed:
  • Process-wide mask (defaults to all severities enabled)
  • Per-thread mask (defaults to all severities disabled)
• If the logged severity is enabled in either mask, the message is displayed
• Set the process/instance mask with:
  ACE_Log_Msg::priority_mask (u_long mask, MASK_TYPE which);
  where MASK_TYPE is ACE_Log_Msg::PROCESS or ACE_Log_Msg::THREAD
• Since the default is to enable all severities process-wide, all severities are logged in all threads unless you change it
• The per-thread mask initializer can be adjusted (default is all severities disabled):
  • ACE_Log_Msg::disable_debug_messages ();
  • ACE_Log_Msg::enable_debug_messages ();
• Any set of severities can be specified (OR'd together)
• Note that these methods set and clear a set of bits instead of replacing the mask, as priority_mask() does
Logging Severities Example

• To allow threads to decide their own logging, the desired severities must be disabled at the process level & enabled in the thread(s) that display them, e.g.:

  ACE_LOG_MSG->priority_mask (0, ACE_Log_Msg::PROCESS);
  ACE_Log_Msg::enable_debug_messages ();
  ACE_Thread_Manager::instance ()->spawn (service);
  ACE_Log_Msg::disable_debug_messages ();
  ACE_Thread_Manager::instance ()->spawn_n (3, worker);

• LM_DEBUG severity (only) is logged in the service thread
• LM_DEBUG severity (and all others) is not logged in the worker threads
• Note that enable_debug_messages() & disable_debug_messages() are static methods
Redirect Logging to a File

• The default logging sink is stderr. Redirect to a file by setting the OSTREAM flag and assigning a stream.
• Can set the flag in two ways:
  • ACE_Log_Msg::open (const ACE_TCHAR *prog_name, u_long options_flags = ACE_Log_Msg::STDERR, const ACE_TCHAR *logger_key = 0);
  • ACE_Log_Msg::set_flags (u_long flags);
• Assign a stream:
  • ACE_Log_Msg::msg_ostream (ACE_OSTREAM_TYPE *);
    (optional 2nd arg tells ACE_Log_Msg to delete the ostream)
  • ACE_OSTREAM_TYPE is ofstream where supported, else FILE*
• To also stop output to stderr, use open() without the STDERR flag, or call ACE_Log_Msg::clr_flags (STDERR)
Redirect Logging to Syslog

• Log output redirected to ACE_Log_Msg::SYSLOG goes to:
  • Windows NT4 and up: the system's Event Log
  • UNIX/Linux: the syslog facility (uses the LOG_USER syslog facility)
• Can't set this with set_flags()/clr_flags(); must use open(). For example:
  ACE_LOG_MSG->open (argv[0], ACE_Log_Msg::SYSLOG, ACE_TEXT ("syslogTest"));
• Windows: the 3rd argument, if supplied, replaces the 1st as the program name in the event log
• To turn it off, call open() again with different flag(s). This seems odd, but you're effectively resetting the logging; think of it as reopen().
Logging Callbacks

• Logging callbacks are useful for adding special processing or filtering to log output
• Derive a class from ACE_Log_Msg_Callback & reimplement:
  virtual void log (ACE_Log_Record &log_record);
• Use ACE_Log_Msg::msg_callback() to register the callback
• Also call ACE_Log_Msg::set_flags() to add the ACE_Log_Msg::MSG_CALLBACK flag
• Beware:
  • Callback registration is specific to each ACE_Log_Msg instance
  • Callbacks are not inherited when new threads are created
Useful Logging Flags

• There are some other ACE_Log_Msg flags that add useful functionality to ACE's logging:
  • VERBOSE: prepends program name, timestamp, host name, process ID, and message priority to each message
  • VERBOSE_LITE: prepends timestamp and message priority to each message (this is what the ACE test suite uses)
  • SILENT: don't display any messages of any severity
  • LOGGER: write messages to the local client logger daemon
Tracing

• ACE's tracing facility logs function/method entry & exit
• Uses logging with severity LM_TRACE, so output can be selectively disabled
• Just put the ACE_TRACE macro in the function:

  #include "ace/Log_Msg.h"
  void foo (void)
  {
    ACE_TRACE ("foo");
    // ... do stuff
  }

  Says:
  (1024) Calling foo in file `test.cpp' on line 8
  (1024) Leaving foo

• Clever indenting by call depth makes output easier to read
• Tracing produces a huge amount of output, so it is no-op'd out by default; rebuild with config.h having:
  #define ACE_NTRACE 0
Networked Logging Service Example

Key Participants
• Client application processes: generate log records
• Server logging daemon: receives, processes, & stores log records

C++ code for all logging service examples is in:
• ACE_ROOT/examples/C++NPv1/
• ACE_ROOT/examples/C++NPv2/

• We'll develop an architecture similar to ACE's, but not the same implementation.