Introduction of Unified Modeling Language (UML)

Unified Modeling Language (UML) is a standardized, general-purpose modeling language in the field of software engineering. UML includes a set of graphical notation techniques to create abstract models of specific systems.

Overview

The Unified Modeling Language (UML) is a graphical language for visualizing, specifying, and constructing the artifacts of a software-intensive system. The Unified Modeling Language offers a standard way to write a system's blueprints, including conceptual things such as business processes and system functions as well as concrete things such as programming language statements, database schemas, and reusable software components.[1]

UML combines best practices from data modeling concepts such as entity-relationship diagrams, business modeling (work flow), object modeling, and component modeling. It can be used with all processes, throughout the software development life cycle, and across different implementation technologies.[2]

Standardization

UML is officially defined by the Object Management Group (OMG) as the UML metamodel, a Meta-Object Facility (MOF) metamodel. Like other MOF-based specifications, UML has allowed software developers to concentrate more on design and architecture.[1] UML models may be automatically transformed to other representations (e.g. Java) by means of QVT-like transformation languages, supported by the OMG.

Extensible Mechanisms

UML is extensible, offering the following mechanisms for customization: profiles and stereotypes. The semantics of extension by profiles have been improved with the UML 2.0 major revision.

History

History of object-oriented methods and notation

Before UML 1.x

After Rational Software Corporation hired James Rumbaugh from General Electric in 1994, the company became the source for the two most popular object-oriented modeling approaches of the day: Rumbaugh's OMT, which was better for object-oriented analysis (OOA), and Grady Booch's Booch method, which was better for object-oriented design (OOD). Together Rumbaugh and Booch attempted to reconcile their two approaches and started work on a Unified Method. They were soon assisted in their efforts by Ivar Jacobson, the creator of the object-oriented software engineering (OOSE) method. Jacobson joined Rational in 1995, after his company, Objectory, was acquired by Rational. The three methodologists were collectively referred to as the Three Amigos, since they were well known to argue frequently with each other regarding methodological preferences.

In 1996 Rational concluded that the abundance of modeling languages was slowing the adoption of object technology, so, repositioning the work on a Unified Method, they tasked the Three Amigos with the development of a non-proprietary Unified Modeling Language. Representatives of competing object technology companies were consulted during OOPSLA '96, and chose boxes for representing classes over the Booch method's notation, which used cloud symbols.

Under the technical leadership of the Three Amigos, an international consortium called the UML Partners was organized in 1996 to complete the Unified Modeling Language (UML) specification and propose it as a response to the OMG RFP. The UML Partners' UML 1.0 specification draft was proposed to the OMG in January 1997. During the same month the UML Partners formed a Semantics Task Force, chaired by Cris Kobryn and administered by Ed Eykholt, to finalize the semantics of the specification and integrate it with other standardization efforts. The result of this work, UML 1.1, was submitted to the OMG in August 1997 and adopted by the OMG in November 1997.[3]

UML 1.x

As a modeling notation, the influence of the OMT notation dominates (e.g., using rectangles for classes and objects). Though the Booch "cloud" notation was dropped, the Booch capability to specify lower-level design detail was embraced. The use case notation from Objectory and the component notation from Booch were integrated with the rest of the notation, but the semantic integration was relatively weak in UML 1.1 and was not really fixed until the UML 2.0 major revision.

Concepts from many other OO methods were also loosely integrated with UML, with the intent that UML would support all OO methods. For example, CRC cards (circa 1989, from Kent Beck and Ward Cunningham) and OOram were retained. Many others contributed too, with their approaches flavoring the many models of the day, including: Tony Wasserman and Peter Pircher with the "Object-Oriented Structured Design (OOSD)" notation (not a method), Ray Buhr's "Systems Design with Ada", Archie Bowen's use case and timing analysis, Paul Ward's data analysis, and David Harel's "Statecharts", as the group tried to ensure broad coverage in the real-time systems domain. As a result, UML is useful in a variety of engineering problems, from single-process, single-user applications to concurrent, distributed systems, making UML rich but large.

The Unified Modeling Language is an international standard: ISO/IEC 19501:2005 Information technology — Open Distributed Processing — Unified Modeling Language (UML) Version 1.4.2.

Development toward UML 2.0

UML has matured significantly since UML 1.1. Several minor revisions (UML 1.3, 1.4, and 1.5) fixed shortcomings and bugs in the first version of UML, followed by the UML 2.0 major revision that was adopted by the OMG in 2005.[4]

There are four parts to the UML 2.x specification: the Superstructure, which defines the notation and semantics for diagrams and their model elements; the Infrastructure, which defines the core metamodel on which the Superstructure is based; the Object Constraint Language (OCL) for defining rules for model elements; and the UML Diagram Interchange, which defines how UML 2 diagram layouts are exchanged. The current versions of these standards are: UML Superstructure version 2.1.2, UML Infrastructure version 2.1.2, OCL version 2.0, and UML Diagram Interchange version 1.0.[5] Although many UML tools support some of the new features of UML 2.x, the OMG provides no test suite to objectively test compliance with its specifications.

Unified Modeling Language topics

Methods

UML is not a method by itself; however, it was designed to be compatible with the leading object-oriented software development methods of its time (for example OMT, the Booch method, and Objectory). Since UML has evolved, some of these methods have been recast to take advantage of the new notations (for example OMT), and new methods have been created based on UML. The best known is the IBM Rational Unified Process (RUP). There are many other UML-based methods, like the Abstraction Method and the Dynamic Systems Development Method, designed to provide more specific solutions or achieve different objectives.

Modeling

It is very important to distinguish between the UML model and the set of diagrams of a system. A diagram is a partial graphical representation of a system's model. The model also contains a "semantic backplane": documentation such as written use cases that drive the model elements and diagrams.

UML diagrams represent three different views of a system model:

• Functional requirements view: emphasizes the functional requirements of the system from the user's point of view. It includes use case diagrams.
• Static structural view: emphasizes the static structure of the system using objects, attributes, operations, and relationships. It includes class diagrams and composite structure diagrams.
• Dynamic behavior view: emphasizes the dynamic behavior of the system by showing collaborations among objects and changes to the internal states of objects. It includes sequence diagrams, activity diagrams, and state machine diagrams.

UML models can be exchanged among UML tools by using the XMI interchange format.

Diagrams overview

UML 2.0 has 13 types of diagrams divided into three categories: six diagram types represent application structure, three represent general types of behavior, and four represent different aspects of interactions. These diagrams can be categorized hierarchically as shown in the following class diagram:

UML does not restrict UML element types to a certain diagram type. In general, every UML element may appear on almost all types of diagrams. This flexibility has been partially restricted in UML 2.0. In keeping with the tradition of engineering drawings, a comment or note explaining usage, constraint, or intent is always allowed in a UML diagram.

Structure diagrams

Structure diagrams emphasize what things must be in the system being modeled:

• Class diagram: describes the structure of a system by showing the system's classes, their attributes, and the relationships among the classes (see the code sketch after this list).
• Component diagram: depicts how a software system is split up into components and shows the dependencies among these components.
• Composite structure diagram: describes the internal structure of a class and the collaborations that this structure makes possible.
• Deployment diagram: serves to model the hardware used in system implementations, the components deployed on the hardware, and the associations among those components.
• Object diagram: shows a complete or partial view of the structure of a modeled system at a specific time.
• Package diagram: depicts how a system is split up into logical groupings by showing the dependencies among these groupings.
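As a rough illustration of the kind of structure a class diagram captures (classes, attributes, operations, and an association), here is a minimal Java sketch; the Customer and Order classes and their members are invented for this example and are not part of UML itself.

// Hypothetical classes illustrating what a class diagram models:
// two classes with attributes, operations, and a one-to-many association.
import java.util.ArrayList;
import java.util.List;

class Customer {
    private String name;                                   // attribute
    private final List<Order> orders = new ArrayList<>();  // association: Customer 1 --- * Order

    Customer(String name) { this.name = name; }

    void addOrder(Order order) { orders.add(order); }      // operation
    String getName() { return name; }
    List<Order> getOrders() { return orders; }
}

class Order {
    private double total;                                  // attribute

    Order(double total) { this.total = total; }
    double getTotal() { return total; }                    // operation
}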

Class diagram

Component diagram

Object diagram

Composite structure diagrams

Deployment diagram

Package diagram

Behavior diagrams

Behavior diagrams emphasize what must happen in the system being modeled:

• Activity diagram: represents the business and operational step-by-step workflows of components in a system. An activity diagram shows the overall flow of control.
• State diagram: a standardized notation to describe many systems, from computer programs to business processes (a small code sketch of a state machine follows this list).
• Use case diagram: shows the functionality provided by a system in terms of actors, their goals represented as use cases, and any dependencies among those use cases.
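To make the idea of states, events, and transitions concrete, here is a minimal sketch of the kind of lifecycle a state machine diagram describes, written in Java; the Door example and its states are invented for illustration.

// Hypothetical example of the states and transitions a state machine diagram models.
enum DoorState { CLOSED, OPEN, LOCKED }

class Door {
    private DoorState state = DoorState.CLOSED;

    // Each method is an event; the guards encode the allowed transitions.
    void open()   { if (state == DoorState.CLOSED) state = DoorState.OPEN; }
    void close()  { if (state == DoorState.OPEN)   state = DoorState.CLOSED; }
    void lock()   { if (state == DoorState.CLOSED) state = DoorState.LOCKED; }
    void unlock() { if (state == DoorState.LOCKED) state = DoorState.CLOSED; }

    DoorState current() { return state; }
}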

UML Activity Diagram

Use case diagram

State machine diagram

Interaction diagrams

Interaction diagrams, a subset of behavior diagrams, emphasize the flow of control and data among the things in the system being modeled:

• Communication diagram: shows the interactions between objects or parts in terms of sequenced messages. Communication diagrams represent a combination of information taken from class, sequence, and use case diagrams, describing both the static structure and dynamic behavior of a system.
• Interaction overview diagram: a type of activity diagram in which the nodes represent interaction diagrams.
• Sequence diagram: shows how objects communicate with each other in terms of a sequence of messages. It also indicates the lifespans of objects relative to those messages (a short code sketch follows this list).
• Timing diagram: a specific type of interaction diagram in which the focus is on timing constraints.
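The message sequence that a sequence diagram draws corresponds roughly to the order of calls between objects. The following Java sketch, with invented Checkout, Inventory, and PaymentGateway classes, shows the kind of interaction such a diagram would depict.

// Hypothetical collaboration: the order of the calls below is what a
// sequence diagram would show as messages between lifelines.
class Inventory {
    boolean reserve(String sku) { return true; }          // message 1: reserve the item
}

class PaymentGateway {
    boolean charge(double amount) { return true; }        // message 2: charge the customer
}

class Checkout {
    private final Inventory inventory = new Inventory();
    private final PaymentGateway gateway = new PaymentGateway();

    boolean placeOrder(String sku, double amount) {
        if (!inventory.reserve(sku)) return false;        // Checkout -> Inventory
        return gateway.charge(amount);                    // Checkout -> PaymentGateway
    }
}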

Communication diagram

Interaction overview diagram

Sequence diagram

The Protocol State Machine is a sub-variant of the State Machine. It may be used to model network communication protocols.

The Rational Unified Process

The Rational Unified Process is based on the integrated work of three primary methodologists: Ivar Jacobson, Grady Booch, and James Rumbaugh.

These methodologists, aided by a large and extended methodologist community, were assembled by Rational Corporation to form a unified, cohesive, and comprehensive methodology framework for the development of software systems. Their work, occurring over several years and based on existing, previously tested methodologies, has led to significant standards in the development community, including the general acceptance of use cases and the Unified Modeling Language™ (UML).

The Unified Process has three distinguishing characteristics:

• Use-Case Driven - The process employs use cases to drive the development process from inception to deployment.
• Architecture-Centric - The process seeks to understand the most significant static and dynamic aspects in terms of software architecture.
• Iterative and Incremental - The process recognizes that it is practical to divide large projects into smaller projects or miniprojects. Each miniproject comprises an iteration that results in an increment. An iteration may encompass all of the workflows in the process. The iterations are planned using use cases.

Four Process Phases

The Unified Process consists of cycles that may repeat over the long-term life of a system. A cycle consists of four phases: Inception, Elaboration, Construction, and Transition. Each cycle is concluded with a release; there are also releases within a cycle. Let's briefly review the four phases in a cycle:

• Inception Phase - During the inception phase the core idea is developed into a product vision. In this phase, we review and confirm our understanding of the core business drivers. We want to understand the business case for why the project should be attempted. The inception phase establishes the product feasibility and delimits the project scope.
• Elaboration Phase - During the elaboration phase the majority of the use cases are specified in detail and the system architecture is designed. This phase focuses on the "do-ability" of the project. We identify significant risks and prepare a schedule, staff, and cost profile for the entire project.
• Construction Phase - During the construction phase the product is moved from the architectural baseline to a system complete enough to transition to the user community. The architectural baseline grows to become the completed system as the design is refined into code.
• Transition Phase - In the transition phase the goal is to ensure that the requirements have been met to the satisfaction of the stakeholders. This phase is often initiated with a beta release of the application. Other activities include site preparation, manual completion, and defect identification and correction. The transition phase ends with a postmortem devoted to learning and recording lessons for future cycles.

Core Workflows

The Unified Process identifies core workflows that occur during the software development process. These workflows include Business Modeling, Requirements, Analysis, Design, Implementation, and Test. The workflows are not sequential and likely will be worked on during all four phases. The workflows are described separately in the process for clarity, but they do in fact run concurrently, interacting and using each other's artifacts.

The Unified Process book and on-line documentation provide extensive information for implementing the process. They capture activities and artifacts for each workflow, complete with examples, and provide complete descriptions of workers and their roles, activities, and artifacts during each of the phases. An excellent and easy-to-follow introduction to the process is Philippe Kruchten's book "The Rational Unified Process®: An Introduction."

Unified Modeling Language

The Unified Modeling Language™ (UML) was developed in conjunction with the Unified Process. Throughout the entire Unified Process lies the idea of creating models of the system being constructed. Models represent abstract views of the system from a particular point of view. These models are captured and communicated using UML. UML is a powerful tool, and multiple books have been published on it, including two by the process authors Booch, Rumbaugh, and Jacobson:

• The Unified Modeling Language™ User Guide
• The Unified Modeling Language™ Reference Manual

These books may be used as the definitive references on UML. It is also recommended that you acquire the easier-to-read Martin Fowler book "UML Distilled."

A Large Process

The Unified Process and its accompanying text require significant study. They are, in many ways, an academic study of the topic. The texts, though complete, are very intimidating to most people. The best way to start is with the on-line documentation, along with formal training in the process. Find a mentor who can work directly with your team to introduce the workflows and activities into your organization. It is important to recognize that the process is meant to be a living thing. It must be adjusted to your work environment and work habits.

The trick is knowing when to adjust the process and when to adjust your habits. The Unified Process provides a powerful framework for application development. It identifies necessary activities and helps you lay out a formal plan for the software development process.

Standard Process Qualifications

All of the requirements necessary for a complete development process are fully captured in the Unified Process workflows.

• Open and public - The Unified Process is openly published, distributed, and supported. The Unified Process is documented, coherent, and complete. In fact, the process is modeled and documented using the process itself. As a result, thousands of software developers have already been trained in the Unified Process. Even more software developers have already been trained in key supporting technology such as UML.
• Complimentary documentation - A complete description of the Unified Process with sample deliverables is available on-line. Four texts by the primary creators of the Unified Process also exist:
  o The Rational Unified Process® - Philippe Kruchten
  o Unified Software Development Process - Ivar Jacobson, et al.
  o The Unified Modeling Language™ Reference Manual - James Rumbaugh
  o The Unified Modeling Language™ User Guide - Grady Booch, et al.
  There are an additional 66 books readily available by a variety of authors on applying and using the Unified Process and UML. Also, there are hundreds of on-line white papers, articles, and case studies.
• Training readily available - The on-line version of the Unified Process walks users through the process in a step-by-step tutorial manner.

Rational Corporation offers training on the Unified Process, tools, and UML; the Menlo Institute and multiple other vendors offer similar training.

• Experienced process developers
  o Grady Booch
  o James Rumbaugh
  o Ivar Jacobson
  o Philippe Kruchten
  o Extended cross-industry contributing team

• Supporting tools
  o Rational Rose™ for Business Modeling, Analysis and Design
  o Rational RequisitePro™ for Requirements Tracking
  o Rational Unified Process® for Process Training and Templates
  o Rational ClearQuest™ for Bug Tracking and Change Requests
  o Rational ClearCase™ for Configuration Management

Because the Unified Process has been widely and publicly disseminated, there are multiple tool choices from other vendors, all designed to work specifically with the Unified Process.

Unified Process Conclusion

The Menlo Institute considers the Rational Unified Process® to be a well-documented and complete methodology. We use it as an interesting source of ideas and tools and offer extensive training on its techniques and practices. If you decide to use the Unified Process, we can confidently support your implementation initiatives.

However, unless you have a real expert on staff, it is likely that you will not significantly increase your likelihood of success trying to adopt this process. The process is too complex, too difficult to learn, and too difficult to apply correctly. If you don't have an expert, an expert who has actually delivered similar projects using this process, then either hire or rent one and plan to engage the expert for at least one year. The Unified Process does not capture the sociological aspects of software development or the details of how to truly develop incrementally. To complement your Unified Process initiative, consider studying the core development practices of Extreme Programming (XP).

Feature Driven Development (FDD)

Feature Driven Development (FDD) is an iterative and incremental software development process. It is one of a number of Agile methods for developing software and forms part of the Agile Alliance. FDD blends a number of industry-recognized best practices into a cohesive whole. These practices are all driven from a client-valued functionality (feature) perspective. Its main purpose is to deliver tangible, working software repeatedly in a timely manner.

History

FDD was initially devised by Jeff De Luca to meet the specific needs of a 15-month, 50-person software development project at a large Singapore bank in 1997. Jeff De Luca delivered a set of five processes that covered the development of an overall model and the listing, planning, design, and building of features. The first process is heavily influenced by Peter Coad's approach to object modeling. The second process incorporates Peter Coad's ideas of using a feature list to manage functional requirements and development tasks. The other processes and the blending of the processes into a cohesive whole are a result of Jeff De Luca's experience. Since its successful use on the Singapore project there have been several implementations of FDD.

The description of FDD was first introduced to the world in Chapter 6 of the book Java Modeling in Color with UML[1] by Peter Coad, Eric Lefebvre, and Jeff De Luca in 1999. In Stephen Palmer and Mac Felsing's book A Practical Guide to Feature-Driven Development[2] (published in 2002), a more general description of FDD, decoupled from Java modeling in color, is given. The original and latest FDD processes can be found on Jeff De Luca's website under the 'Article' area. There is also a community website available at which people can learn more about FDD, questions can be asked, and experiences and the processes themselves are discussed.

Overview

FDD is a model-driven, short-iteration process that consists of five basic activities. For accurate state reporting and keeping track of the software development project, milestones that mark the progress made on each feature are defined. This section gives a high-level overview of the activities.

Meta-process model for FDD

Activities

FDD describes five basic activities that are within the software development process. In the figure on the right the meta-process model for these activities is displayed.

During the first three sequential activities an overall model shape is established. The final two activities are iterated for each feature. For more detailed information about the individual sub-activities, have a look at Table 2 (derived from the process description in the 'Article' section of Jeff De Luca's website). The concepts involved in these activities are explained in Table 3.

Develop Overall Model

The project starts with a high-level walkthrough of the scope of the system and its context. Next, detailed domain walkthroughs are held for each modeling area. In support of each domain, walkthrough models are then composed by small groups and presented for peer review and discussion. One of the proposed models, or a merge of them, is selected and becomes the model for that particular domain area. Domain area models are merged into an overall model, the overall model shape being adjusted along the way.

Build Feature List

The knowledge that is gathered during the initial modeling is used to identify a list of features. This is done by functionally decomposing the domain into subject areas. Subject areas each contain business activities, and the steps within each business activity form the categorized feature list. Features in this respect are small pieces of client-valued functions expressed in the form <action> <result> <object>, for example: 'Calculate the total of a sale' or 'Validate the password of a user'. Features should not take more than two weeks to complete, or else they should be broken down into smaller pieces.

Plan By Feature

Now that the feature list is complete, the next step is to produce the development plan. Class ownership is done by ordering and assigning features (or feature sets) as classes to chief programmers.

Design By Feature

A design package is produced for each feature. A chief programmer selects a small group of features that are to be developed within two weeks. Together with the corresponding class owners, the chief programmer works out detailed sequence diagrams for each feature and refines the overall model. Next, the class and method prologues are written, and finally a design inspection is held.

Build By Feature

After a successful design inspection, a per-feature activity is carried out to produce a completed client-valued function (feature). The class owners develop the actual code for their classes. After a unit test and a successful code inspection, the completed feature is promoted to the main build.

Milestones

Since features are small, completing a feature is a relatively small task. For accurate state reporting and keeping track of the software development project, it is however important to mark the progress made on each feature. FDD therefore defines six milestones per feature that are to be completed sequentially. The first three milestones are completed during the Design By Feature activity, the last three during the Build By Feature activity. To help with tracking progress, a percentage complete is assigned to each milestone. The table below shows the milestones and their completion percentages. A feature that is still being coded is 44% complete (Domain Walkthrough 1% + Design 40% + Design Inspection 3% = 44%).

Table 1: Milestones

Domain Walkthrough   Design   Design Inspection   Code   Code Inspection   Promote To Build
1%                   40%      3%                  45%    10%               1%
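A small sketch of the arithmetic behind the milestone percentages, using the weights from Table 1; the class and method names are invented for illustration.

// Hypothetical helper that sums FDD milestone weights (from Table 1)
// to report how complete a feature is.
import java.util.LinkedHashMap;
import java.util.Map;

class FeatureProgress {
    // Milestone weights in the order they are completed.
    private static final Map<String, Integer> WEIGHTS = new LinkedHashMap<>();
    static {
        WEIGHTS.put("Domain Walkthrough", 1);
        WEIGHTS.put("Design", 40);
        WEIGHTS.put("Design Inspection", 3);
        WEIGHTS.put("Code", 45);
        WEIGHTS.put("Code Inspection", 10);
        WEIGHTS.put("Promote To Build", 1);
    }

    // Percent complete after the first `completedMilestones` milestones.
    static int percentComplete(int completedMilestones) {
        return WEIGHTS.values().stream()
                .limit(completedMilestones)
                .mapToInt(Integer::intValue)
                .sum();
    }

    public static void main(String[] args) {
        // A feature still being coded has finished the first three
        // milestones: 1% + 40% + 3% = 44%.
        System.out.println(percentComplete(3)); // prints 44
    }
}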

Best practices

Feature-Driven Development is built around a core set of industry-recognized best practices, derived from software engineering. These practices are all driven from a client-valued feature perspective. It is the combination of these practices and techniques that makes FDD so compelling. The best practices that make up FDD are briefly described below.

• Domain Object Modeling. Domain object modeling consists of exploring and explaining the domain of the problem to be solved. The resulting domain object model provides an overall framework in which to add features.
• Developing by Feature. Any function that is too complex to be implemented within two weeks is further decomposed into smaller functions until each sub-problem is small enough to be called a feature. This makes it easier to deliver correct functions and to extend or modify the system.
• Individual Class (Code) Ownership. Individual class ownership means that distinct pieces or groupings of code are assigned to a single owner. The owner is responsible for the consistency, performance, and conceptual integrity of the class.
• Feature Teams. A feature team is a small, dynamically formed team that develops a small activity. By doing so, multiple minds are always applied to each design decision, and multiple design options are always evaluated before one is chosen.
• Inspections. Inspections are carried out to ensure good quality design and code, primarily by detection of defects.
• Configuration Management. Configuration management helps with identifying the source code for all features that have been completed to date and with maintaining a history of changes to classes as feature teams enhance them.

• Regular Builds. Regular builds ensure there is always an up-to-date system that can be demonstrated to the client and help to highlight integration errors in the source code for the features early.
• Visibility of progress and results. Frequent, appropriate, and accurate progress reporting at all levels inside and outside the project, based on completed work, helps managers steer a project correctly.

Metamodel (Metamodeling)

Process-Data Model for FDD

Metamodeling helps to visualize both the processes and the data of a method, so that methods can be compared and method fragments in the method engineering process can easily be reused. The advantage of the technique is that it is clear, compact, and consistent with UML standards. The left side of the metadata model, depicted on the right, shows the five basic activities involved in a software development project using FDD. The activities all contain sub-activities that correspond to the sub-activities in the FDD process description on Jeff De Luca's website. The right side of the model shows the concepts involved. These concepts originate from the activities depicted in the left side of the diagram. A definition of the concepts is given in Table 3.

Executable and Translatable UML (XTUML)

Executable and Translatable UML (XTUML) is a subset of the Unified Modeling Language endowed with rules for execution. With an executable model, you can formally test the model before making decisions about implementation technologies, and with a translatable model, you can retarget your model to new implementation technologies. This article describes the fundamental ideas behind XTUML and how they work in practice.

Figure 1: Separation of application models and software architecture

Separate but equal

XTUML separates application models from software architecture design, weaving them together through a translator at deployment time, as shown in Figure 1. There are three components of an XTUML design:

• Application models capture what the application does. The models are executable, which enables you to validate that your application meets requirements early on. Application models are independent of design and implementation technologies.
• Software architecture designs, defined as design patterns, design rules, and implementation technologies, are incorporated by a translator that generates code for the target system. The software architecture designs are independent of the applications they support.
• The translator applies the design patterns to the application models according to the appropriate design rules.

Figure 2: Concurrent design and modeling

The separation of the software architecture design from the application models supports concurrent design and application analysis modeling, as illustrated in Figure 2. Using XTUML, you can iteratively and incrementally construct both the application and the software architecture design.

UML in execution

The UML specification defines the "abstract syntax" of UML diagrams but provides few rules on how the various elements interact dynamically. XTUML incorporates well-defined execution semantics. Objects execute concurrently, and every object is in exactly one state at a time. An object synchronizes its behavior with another object by sending a signal interpreted by the receiver's state machine as an event. When the event causes a transition in the receiver, the procedure in the destination state of the receiver executes after the action that sent the signal, thus capturing the desired "cause and effect" in the system's behavior. Each procedure consists of a set of actions, such as a functional computation, a signal send, or a data access. These action semantics are a fundamental part of UML. The actions are defined so that data structures can be changed at translation time without affecting the definition of functional computation, a critical requirement for translatability.

The application model therefore contains the decisions necessary to support execution, verification, and validation, independent of design and implementation. No design decisions need be made, nor code developed or added, for model execution, so formal test cases can be executed against the model to verify that application requirements have been properly addressed. At system construction time, the conceptual objects are mapped to threads and processors. The translator's job is to maintain the desired sequencing (cause and effect) specified in the application models, but it may choose to distribute objects, sequentialize them, duplicate them, or even split them apart, as long as application behavior is preserved.

UML notation

UML provides a notation set in which the same concept can be represented in a variety of ways. Before a new development team can be effective, it must first discuss, select, and agree on an acceptable UML subset, which can then be mapped to the team's method and modeling processes. As the project proceeds and new members join the team, constant vigilance is necessary to ensure that modeling conventions are maintained. XTUML, however, is designed to be enforced not by convention, but by execution: either a model compiles, or it doesn't. Policing the notation, along with the time it consumes, is no longer an issue. The notational subset is based not on diagrams, but on the underlying execution model. All diagrams (that is, class diagrams, state diagrams, procedure specifications, collaboration diagrams, and sequence diagrams) are projections of this underlying model. Other UML models that do not support execution, such as use-case diagrams, can be used to help build the XTUML models.

Translation

Translators generate code from models automatically. The translator (Figure 1 again) is made up of three elements:

• A set of design patterns ("archetypes") to be applied in code generation, together with rules for when a given archetype or model component will be used to build code.
• A translation engine that extracts application model information and applies the archetypes and rules to generate complete code.
• A run-time library comprising pre-compiled routines that support the generated code modules.

The partitioning of translators into three pieces means that changes and additions can be made to the archetypes or run-time library without having to contend with the details of the translation engine itself. When generating code, the translator extracts information from the application model, then selects the appropriate archetype for the to-be-translated model element. The information extracted from this model is then used to "fill in the blanks" of the selected archetype. The result is a coded implementation component. This approach is excellent for real-life applications. Application of an archetype commonly requires invocation of other archetypes. These newly invoked archetypes, in turn, often invoke other archetypes. The creation of code, for what appears to be one model element, can ultimately involve several nested layers of archetypes for multiple model elements. This intricate process is automated by the translator.
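To make the "fill in the blanks" idea concrete, here is a deliberately simplified sketch, not the archetype language of any actual tool: the archetype is just a string template, and the translator substitutes information extracted from a model element into it. All names here are invented.

// Hypothetical illustration of archetype-based code generation:
// an "archetype" is a template with placeholders, and the translator
// fills the placeholders from information extracted from the model.
class ClassArchetype {
    // A trivial archetype for generating a class skeleton.
    private static final String TEMPLATE =
            "public class %s {\n" +
            "    private %s %s;\n" +
            "}\n";

    // "Model element" information extracted by the translation engine.
    static String generate(String className, String attrType, String attrName) {
        return String.format(TEMPLATE, className, attrType, attrName);
    }

    public static void main(String[] args) {
        // Generate code for a hypothetical model element.
        System.out.print(generate("Account", "double", "balance"));
    }
}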

Integration, maintenance

A realized software architecture design is a model compiler. It takes an executable UML model and translates it, according to the patterns embedded within the model compiler, into a running system. As with a programming language compiler, developers simply compile, link, load, and go. Off-the-shelf model compilers automatically translate models to source code. Legacy, COTS, handwritten, and externally generated code can be integrated with translator-generated code by adding wrappers.

Integration is a matter of application model compilation using the selected model compiler. The application model has already been tested by execution, and the model compiler has been similarly tested by iterative performance verification throughout development.[1] All that remains is the final system test. Failure of any part of system test requires predeployment maintenance, in which case changing or growing system requirements and system functionality only requires modification of the application model; such changes to the application model are automatically translated to new functionality in the target code without any modification to the design or manual coding. Redeployment onto prospective software platforms works the same way. Because applications are independent of target platform characteristics, implementation technologies, and language, they can be redeployed across multiple products and product families.

Executable and translatable

In practice, application modeling proceeds iteratively. As each increment, along with its test cases, is developed, it is executed and tested. This process implies an unmistakably clear exit gate: a completed application model must execute, and execute correctly. Then it's on to compilation. Model compilers may require additional information that is in neither the application model nor the software architecture design. For example, a model compiler may provide for persistent attributes, but the application model captures only required data. The application engineer must therefore annotate the application model with those attributes that are intended to be persistent. Similarly, your implementation environment may provide for one thread per processor, though each object in XTUML conceptually executes concurrently. The application engineer must therefore annotate the model to distinguish the task in which each object will execute. These annotations constitute design rules for when to apply a pattern.
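The annotations described above are marks attached to model elements rather than changes to the model's logic. As a loose analogy only (XTUML tools have their own marking mechanisms; the annotation names below are invented), the idea resembles Java annotations that a generator could read:

// Hypothetical marks, analogous to the annotations an application engineer
// attaches to an XTUML model; a translator would read them to decide which
// pattern (persistence, task allocation) to apply.
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

@Retention(RetentionPolicy.RUNTIME)
@interface Persistent {}                  // mark: this attribute must be stored durably

@Retention(RetentionPolicy.RUNTIME)
@interface Task { int value(); }          // mark: which task/thread the object runs in

@Task(2)                                  // this conceptual object executes in task 2
class SensorReading {
    @Persistent
    double calibratedValue;               // persisted by the generated code

    double scratchValue;                  // unmarked: kept in memory only
}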

The allocation implied by the annotation will affect performance, dramatically in some cases. Several allocations may be attempted until performance is adequate. Off-the-shelf model compilers can meet performance needs more often than is generally recognized, but when all else fails more radical performance optimization is required.

Performance optimization

Model compilers are completely open. The archetypes and run-time library routines can be modified or replaced at will. In addition, some systems may require multiple archetype implementations for a given model element type.[2] In such cases, an archetype can be defined for each desired implementation option. The default archetype used for a specific model element can be overridden by the preferred archetype using annotations. Model annotation provides a design engineer with a fine, second level of control over the implementation characteristics and performance of a particular system. Model compiler development should proceed concurrently with application model development. As a risk-reduction strategy, application model increments likely to have performance problems should be modeled first. The design development team then generates code for each increment and profiles memory usage and performance to identify any "hot spots." Incremental changes are then made to the base model compiler until performance is adequate.

Defect eradication

Executable models catch defects through model execution. If an error slips through the early part of the lifecycle, the direct mapping between application model elements and the generated code leads to the specific application model element causing the problem. Fix the defective application model element, and the error goes away.

Independent software architecture design testing provides another defect-reduction checkpoint prior to initial integration and test, and it reduces the accidental introduction of new defects into the system. If a design or implementation error is caused by a faulty archetype, it will occur frequently and be all too visible. The error can then be traced back to the specific archetype that generated the defect. Repairing the faulty archetype propagates the correction throughout all of the system's generated code. This approach simplifies repair of defects. Because a fix is only made at the source of the defect, fixing one problem is unlikely to introduce new problems. The introduction of side effects due to hand-coded modifications is eliminated. Regeneration prompted by long-term support, continued defect fixes, addition of functionality, or performance tuning should produce clean, well-documented, and structured code.

Reuse and target migration

Separation of design and application makes application models reusable because they are independent of target characteristics, implementation technologies, and language. Application models can be retargeted by selecting a different model compiler, so product families can grow and change in response to the underlying technology. Software architecture designs, captured as model compilers, are reusable across multiple systems, independently of the specifics of the applications. Reuse of the same design in multiple applications also guarantees that applications built in different locations, with different developers, will nonetheless integrate with each other.

Automation

XTUML cries out for automated support for execution and translation. However, many of the ideas I've outlined so far can be put in place even without automation, for the benefits they bring immediately and as a forerunner to a more automated future.

Figure 3: The XTUML development process

The key concept, separation of application and software architecture design, can be expressed by thinking of design as a set of patterns and rules, rather than the crafting of individual implementation elements. (See Figure 3.) Each pattern is a particular approach to a design problem, paired with a rule. For example, if the design problem is to search a class extent, the rule might be to use a linear search except when a linear search is "too slow." The designer should provide an alternative (a hash table, for example) and define exactly when to use it instead of the linear search. ("Too slow" is too vague.) Design documents are not elaborations on the analysis, but a set of formal mappings, in natural language if necessary.

The laws of an XTUML design can be applied without resort to a machine. We have all, at some time or another, "executed" a program by hand, either before executing it in a real machine, or perhaps to ascertain why the program did not execute as expected. Because an executable UML model has a defined interpretation, it is possible to establish exactly what the model does before coding. Each object is in one state at a time, and the order of signals to sender-and-receiver pairs of objects is determinate. When multiple signals are outstanding to different pairs, each possible ordering can be checked for the correct outcome, providing endless hours of amusement for those with a limited social life.

XTUML defines a tractable notation.
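As an illustration of the class-extent search example above, the sketch below (class and field names invented) shows the two implementations such a pattern-plus-rule pairing would choose between: a linear scan of the extent, and a keyed lookup for the cases the rule identifies as too slow.

// Hypothetical class extent with two search strategies; a design rule would
// state when the hash-based lookup replaces the default linear search.
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

class CustomerExtent {
    private final List<String[]> extent = new ArrayList<>();     // all instances: {id, name}
    private final Map<String, String[]> byId = new HashMap<>();  // index for the keyed lookup

    void add(String id, String name) {
        String[] instance = {id, name};
        extent.add(instance);
        byId.put(id, instance);
    }

    // Default pattern: linear search of the class extent.
    String[] findLinear(String id) {
        for (String[] instance : extent) {
            if (instance[0].equals(id)) return instance;
        }
        return null;
    }

    // Alternative pattern, applied when the rule says linear search is too slow.
    String[] findByIndex(String id) {
        return byId.get(id);
    }
}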

This notation is known to work; following its rules tends to require system modelers to think through exactly what behavior they require, without hiding behind abstruse notational complexities.

Translator structure and translator operation both describe an automated approach. The partitioning into patterns, rules, and implementation mechanisms, however, provides guidance for hand construction of the design. Integration is eased, even in the absence of automation, because the design was expressed as rules. To the extent the system was constructed by applying the patterns and rules uniformly, the system will integrate quickly and easily because the constructed components have been built to fit "by design." Maintenance, however, is another matter, because the Law of System Entropy will surely set in, and despite our best efforts, the system eventually degrades.

Concurrent, iterative development can be applied to application model construction and to the construction of the unautomated model "compiler" (that is, the natural-language set of patterns to be applied) in the same manner as for an automated solution. Performance optimization means changing the patterns, rules, and implementation technologies to run faster and smaller. Modifying the implementation technologies is the same in both automated and unautomated cases, while the patterns and rules are simply expressed less formally than in an automated solution. However, using these new rules and patterns means recoding the application, or at least those portions affected by the new patterns and rules. Clever use of scripts and text recognition and generation programs can help here, but eventually entropy will have its way.

Defect identification and eradication in the application is similar in both cases, though hand simulation is more prone to error than an automated solution. Finding defects in the "model compiler" has two parts: working out whether the pattern or rule was incorrect and, separately, establishing whether the patterns and rules were followed correctly.

Astonishingly, reuse and target migration using XTUML can yield significant benefits even without automation. The reason is that both the application models and the design, expressed as implementation technologies, patterns, and rules, can each be reused and retargeted. While it would be rather more efficient to automate the weaving of the two parts, mapping the application according to the patterns and rules from an existing application model is still much easier than hacking it out from scratch. Similarly, an existing set of design patterns and rules can be applied to a new application model more easily than if you have to work them out from scratch.

The benefits realized

XTUML has been used on over 1,400 real-time and technical projects, including life-critical implanted medical devices, Department of Defense flight-critical systems, performance-critical, fault-tolerant telecom systems, highly resource-constrained consumer electronics, and large-scale discrete-event simulation systems. Capabilities provided by XTUML automation include:

• rapid project ramp-up resulting from a streamlined UML subset and a well-defined process
• concurrent application analysis modeling and design to compress project schedules
• reduced defect rates from early execution of target-independent application models and test of application-independent designs
• customizable translation generating complete, target-optimized code
• performance tuning and resource optimization
• effective, practical reuse of target-independent application models and application-independent designs
• accelerated development of products with multiple releases, growing or changing requirements, and families of products
• reduced maintenance costs and extended product lifetimes

In summary, the approach accelerates development and improves the quality, performance, and resource use of real-time embedded systems.

Executable UML

Executable UML, often abbreviated to xtUML [1] or xUML [2], is the evolution of the Shlaer-Mellor method[3] to UML. Executable UML graphically specifies a system using a profile of the UML. The models are testable, and can be compiled into a less abstract programming language to target a specific implementation.[3][4] Executable UML supports MDA through specification of platform-independent models, and the compilation of the platform-independent models into platform-specific models.

Usage of Executable UML

Executable UML is used to model the domains in a system. Each domain is defined at the level of abstraction of its subject matter, independent of implementation concerns. The resulting system view is composed of a set of models represented by at least the following:

• The domain chart provides a view of the domains in the system, and the dependencies between the domains.
• The class diagram defines the classes and class associations for a domain.
• The statechart diagram defines the states, events, and state transitions for a class or class instance.
• The action language defines the actions or operations that perform processing on model elements.

Domain Chart

See also: Aspect (computer science), Concern (computer science)

Executable UML requires identification of the domains (also known as aspects [6] or concerns) of the system.

"Each domain is an autonomous world inhabited by conceptual entities." [7] Each domain can be modelled independent of the other domains in the system, enabling a separation of concerns. As an example, domains for an automated teller system may include the following:

• The application domain model of the automated teller's business logic.
• The security domain model of various issues regarding system security (such as authentication and encryption).
• The data access domain model of methods for external data usage.
• The logging domain model of the various methods through which the system can or must log information.
• The user interface domain model of the user interactions with the system.
• The architecture domain model of the implementation of the Executable UML model on the system's hardware and software platforms.

The separation of concerns enables each domain to be developed and verified independently of the other domains in the system by the respective domain experts. The connections between domains are called bridges. "A bridge is a layering dependency between domains." [8] This means that the domains can place requirements upon other domains. It is recommended that bridges are agreed upon by the different domain experts. A domain can be marked as realized to indicate that the domain exists and does not need to be modeled. For example, a data access domain that uses a MySQL database would be marked as realized.

Class diagram

See also: Class diagram

Conceptual entities, such as tangible things, roles, incidents, interactions, and specifications, specific to the domain being modeled are abstracted into classes. Classes can have attributes and operations.

The relationships between these classes will be indicated with associations and generalizations. Some associations may require further abstraction as an Association Class. Constraints on the class diagram can be written in both action language and Object Constraint Language (OCL). The Executable UML profile limits which UML elements can be used in an Executable UML class diagram. An Executable UML class diagram is meant to expose information about the domain. Too much complexity in the statechart diagrams is a good indicator that the class diagram should be reworked.

Statechart Diagram

See also: Finite State Machine, State diagram

Classes have lifecycles, which are modeled in Executable UML with a statechart diagram. The statechart diagram defines the states, transitions, events, and procedures that define a class's behaviour. Each state has only one procedure that is executed upon entry into that state. A procedure is composed of actions, which are specified in an action language.

Action Language

The class and state models by themselves can only provide a static view of the domain. In order to have an executable model, there must be a way to create class instances, establish associations, perform operations on attributes, call state events, etc. In Executable UML, this is done using an action language that conforms to the UML Action Semantics. Action Semantics was added to the UML specification in 2001. Action languages have been around much longer than that, in support of the Shlaer-Mellor method.

Some existing action languages are Object Action Language (OAL), Shlaer-Mellor Action Language (SMALL), Action Specification Language (ASL), That Action Language (TALL), Starr's Concise Relational Action Language (SCRALL), Platform-independent Action Language (PAL), and PathMATE Action Language (PAL). SCRALL is the only one that is a graphical action language.

Model testing and execution

See also: Software verification, Debugging

Once a domain is modelled, it can be tested independent of the target implementation by executing the model. Each domain can be verified and validated independent of any other domain. This allows errors detected to be associated with the domain, independent of other system concerns, and lowers the cost of verification and validation. Verification will involve such things as human review of the models, performed by experts in the relevant domain, and automated checking of the Executable UML semantics, e.g. whether or not the Executable UML is compilable, whether or not each class attribute has a type, etc. Validation will typically involve use of an Executable UML tool to execute the model. The execution can occur either before or after model compilation.

Model compilation

In order to support execution on the target implementation, the domain model must be translated into a less abstract form. This translation process is called model compilation. Most model compilers target a known programming language, because this allows reuse of existing compiler technologies. Optimizing the domain models for target implementation reasons will reduce the level of abstraction, adversely affect domain independence, and increase the cost of reuse. In Executable UML, optimizations are done by the model compiler, either automatically or through marking. Marking allows specific model elements to be targeted for specific lower-level implementations, and allows for broader architectural decisions, such as specifying that collections of objects should be implemented as a doubly-linked list.
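A rough sketch, in Java terms, of what such a mark decides (the @CollectionImpl annotation and class names are invented; real model compilers use their own marking mechanisms rather than source annotations): the model states only that an Order holds many LineItems, and the mark selects the lower-level container.

// Hypothetical mark choosing the container used for a to-many association.
// The application model only states the association; the model compiler reads
// the mark and generates either an array-backed list or a doubly-linked list.
import java.util.LinkedList;
import java.util.List;

@interface CollectionImpl { String value(); }   // invented marking annotation

class LineItem { }

@CollectionImpl("doubly-linked-list")           // architectural decision, not model logic
class Order {
    // What a model compiler honoring the mark above might generate:
    private final List<LineItem> lineItems = new LinkedList<>();

    void add(LineItem item) { lineItems.add(item); }
    int size() { return lineItems.size(); }
}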

In MDA terms, the model compiler creates the PSM. The separation between the PIM and PSM in Executable UML disables the ability to round-trip engineer the model, and deters modifications to the PSM.[9]

Executable UML profile

Executable UML is a profile of the UML, defining execution semantics for a subset of the UML. Some notable aspects of the Executable UML profile include the following:

• No support for implementation-specific constructs, like aggregation and composition.[10]
• Generalizations are always notated as {complete, disjoint}.
• Associations between classes are always named, have verb phrases on both ends specifying the roles, and have multiplicity specified on both ends.
• Multiplicities on association ends are restricted to 0..1 (zero to one), * (zero to many), 1 (exactly one), or 1..* (one to many).
• Data types are restricted to the following core data types: boolean, string, integer, real, date, timestamp, and arbitrary_id, or one of the following domain-specific data types: numeric, string, enumerated, and composite. Domain-specific numeric and string data types can represent subsets of the core data types. The domain-specific composite data type is always to be treated as a single unit; e.g., a MailingAddress data type could be declared, but city information couldn't be extracted from it.
• Constraints on the Executable UML models can be represented as either Object Constraint Language (OCL) or action language.
• Domains are represented as a Package, and bridges are represented as a Dependency.

Advantages of Executable UML

The intended advantages of Executable UML are as follows:

• Executable UML is a higher level of abstraction than 3GLs. This allows developers to develop at the level of abstraction of the application.[11]
• Executable UML allows for true separation of concerns. This significantly increases ease of reuse and lowers the cost of software development. This also enables Executable UML domains to be cross-platform and not tied to any specific programming language, platform, or technology.
• Executable UML allows for translation of platform-independent models into platform-specific models. This eliminates the labor-intensive task of elaborating the PIM into a PSM.
• Executable UML enables easy exchange and reuse of knowledge by using the industry-standard UML.
• Since the final model is a fully executable solution for the problem space, it can be valued as intellectual property.
• Executable UML closes the disconnect between documentation and programming language, as the models are a graphical, executable specification of the problem space that is compiled into a target implementation.
• Since actions are specified in action language, the automatic generation of implementation code from Executable UML models can leverage this complete, semantic-level knowledge of behavior to perform self-optimization, producing implementations that are far more efficient than other forms of code generation.

UML eXchange Format

In computing, UXF is an acronym for UML eXchange Format. UXF is an XML-based model interchange format for UML, which is a standard software modeling language. UXF is a simple and well-structured format to encode, publish, access, and exchange UML models, and allows UML to be highly interoperable.

UXF sequence diagram
