Borland strategy and products. Overcoming Heterogeneity: The Last Frontier of ALM. Life cycle trace

Many users (and, to be honest, some IT specialists), when talking about software development, mean first of all the creation and debugging of application code. There was a time when such ideas were close enough to the truth, but modern application development consists not only, and not so much, of writing code as of other processes that both precede programming itself and follow it. These processes are discussed below.

Application development life cycle: dreams and reality

It is no secret that many commercially successful products, both in Russia and abroad, were implemented using application development tools alone, and the data was often designed on paper. It would not be an exaggeration to say that of all the possible tools for creating software, in Russia (and in many European countries) mainly application development tools and, to a lesser extent, data design tools are now popular, primarily for projects with small budgets and tight deadlines. All project documentation, from the technical specification to the user manual, is created in text editors, and the fact that some of it is source information for the programmer only means that he simply reads it. And this despite the fact that, on the one hand, requirements management tools, business process modeling tools, application testing tools, project management tools, and even project documentation generators have existed for a long time, and on the other hand, any project manager naturally wants to make life easier both for himself and for the other performers.

Why do many project managers distrust tools that could automate many stages of the work of the teams they lead? In my opinion, there are several reasons. The first is that the tools used by a company very often do not integrate well with each other. Consider a typical example: Rational Rose is used for modeling, Delphi Professional for writing code, and CA AllFusion Modeling Suite for data design; the integration tools for these products either do not exist at all for this combination of versions, or do not work correctly with the Russian language, or simply have not been purchased. As a result, the use case diagrams and other models created with Rose become nothing more than pictures in the design documentation, and the data model mainly serves as a source for answering questions like "Why is this field in that table needed at all?". Even such simple parts of the application as the Russian equivalents of database field names are written by project participants at least three times: first when documenting the data model or application, a second time when writing the user interface code, and a third time when creating the help file and user manuals.

The second, no less serious reason for distrust of software life cycle support tools is that, again due to missing or poorly functional integration between such products, in many cases it is impossible to keep all parts of the project constantly synchronized with one another: process models, data models, application code, and database structure. It is clear that a project following the classic waterfall scheme (Fig. 1), in which requirements are first formulated, then modeling and design are carried out, then development, and finally deployment (this scheme and other project methodologies are covered in a series of reviews by Lilia Hough published in our magazine), is more of a dream than a reality: while the code is being written, the customer will have time to change some of the processes or ask for additional functionality. As a result, a project often yields an application that is very far from what was described in the terms of reference, and a database that has little in common with the original model, and synchronizing all of this for documentation and handover to the customer becomes a rather laborious task.

A third reason why software life cycle support tools are not used everywhere they could be useful is the limited choice available. On the Russian market, two product lines are actively promoted: IBM/Rational tools and Computer Associates tools (mainly the AllFusion Modeling Suite line), which at present are largely focused on certain types of modeling rather than on a continuous process of synchronizing code, database, and models.

There is one more reason, which can be classified as a psychological factor: there are developers who do not at all strive for complete formalization and documentation of their applications — after all, that way they become irreplaceable and valuable employees, and a person forced to dig into such a developer's code after his dismissal, or simply to maintain his product, will feel like a complete idiot for a very long time. Such developers are by no means in the majority; nevertheless, I know at least five company executives to whom such ex-employees caused a great deal of grief.

Of course, many managers, especially of projects with small budgets and limited time, would like to have a tool with which they could formulate the requirements for the software product being developed... and as a result obtain a ready-made distribution of a working application. This, of course, is only an ideal that for now one can only dream of. But if we come down from heaven to earth, we can formulate more specific wishes, namely:

1. Requirements management tools should simplify the creation of the application model and data model.

2. A significant part of the code should be generated from these models (preferably not only client-side but also server-side code).

3. A significant part of the documentation should be generated automatically, and in the language of the country for which this application is intended.

4. When creating the application code, automatic changes should occur in the models, and when the model changes, automatic code generation should occur.

5. Code written by hand should not disappear when changes are made to the model.

6. The appearance of a new customer requirement should not cause serious problems associated with changes in models, code, database and documentation; all changes must be made synchronously.

7. Version control tools for all of the above should be convenient in terms of finding and tracking changes.

8. And finally, all this data (requirements, code, models, documentation) should be available to the project participants exactly to the extent that they need them to perform their duties - no more and no less.

In other words, the application development cycle should enable iterative collaborative development without additional costs arising from changes in customer requirements or in the way they are implemented.

I will not claim that all these wishes are absolutely impossible to satisfy with IBM/Rational or CA tools — technologies develop, new products appear, and what is impossible today will become available tomorrow. But, as practice shows, the integration of these tools with the most popular development tools is, unfortunately, far from being as ideal as it might seem at first glance.

Borland products from a project manager's perspective

Borland is one of the most popular vendors of development tools: for twenty years its products have enjoyed the well-deserved love of developers. Until recently, the company mainly offered a wide range of tools intended directly for application coders — Delphi, JBuilder, C++Builder, Kylix (we have written about all these products in our magazine more than once). However, a company's success in the market is largely determined by how well it follows market trends and how well it understands the needs of those who consume its products (in this case, companies and departments specializing in application development).

That is why Borland's current development strategy is to support the full application life cycle (Application Lifecycle Management, ALM), including requirements definition, design, development, testing, deployment, and maintenance of applications. This is evidenced by Borland's acquisition of a number of companies last year: BoldSoft MDE Aktiebolag (a leading provider of the latest Model Driven Architecture application development technology), Starbase (a provider of configuration management tools for software projects), and TogetherSoft Corporation (a provider of software engineering solutions). In the time since these acquisitions, a great deal of work has been done to integrate these products with each other. As a result, the products already meet project managers' needs for iterative collaborative development. Below we discuss what exactly Borland offers to managers and other participants in software development projects (many of the products and integration technologies described below were presented by the company at its developer conferences held in San Jose, Amsterdam, and Moscow in November).

Requirements Management

Requirements management is one of the most important parts of the development process. Without formulated requirements, it is usually almost impossible to organize work on a project properly, or to understand whether the customer really wanted to get exactly what was implemented.

According to analysts, at least 30% of project budgets are spent on what is called application rework (and I personally think this figure is greatly underestimated). Moreover, more than 80% of this rework is associated with incorrectly or inaccurately formulated requirements, and correcting such defects is usually quite expensive. And how much customers like to change requirements when the application is almost ready is probably known to all project managers... It is for this reason that requirements management deserves the closest attention.

For requirements management, Borland offers Borland CaliberRM, which is essentially a platform for automating the requirements management process that provides change tracking tools (Fig. 2).

CaliberRM integrates with many development tools from Borland and other manufacturers (for example, Microsoft), up to embedding the requirements list in the development environment and generating code stubs by dragging a requirement's icon into the code editor with the mouse. In addition, you can build your own solutions on top of it — for this there is a special toolkit, the CaliberRM SDK.
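
To make the idea of requirement-to-code traceability concrete, here is a minimal sketch in Java of the kind of stub such drag-and-drop generation might produce. The requirement ID, comment format, class, and method are all invented for illustration and do not reproduce CaliberRM's actual output.

    import java.util.logging.Logger;

    public class AccountPolicy {

        private static final Logger LOG = Logger.getLogger(AccountPolicy.class.getName());

        /**
         * Requirement REQ-042 (hypothetical, dragged from the requirements tree):
         * "The system shall lock an account after three consecutive failed logins."
         * Only the traceable stub is generated; the body is left to the developer.
         */
        public void lockAccountAfterFailedLogins(String accountId) {
            LOG.warning("REQ-042 not implemented yet for account " + accountId);
            throw new UnsupportedOperationException("REQ-042: not yet implemented");
        }
    }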

Note that this product is used to manage requirements not only for software but for other products as well. For example, it has been successfully applied in the automotive industry to manage requirements for various vehicle components (including those of Jaguar cars). In addition, according to Jon Harrison, the manager in charge of the JBuilder product line, using CaliberRM in the creation of Borland JBuilderX greatly simplified the development of that product.

And finally, the availability of a convenient requirements management tool greatly simplifies the creation of project documentation, not only at the early stages of a project but at all subsequent ones as well.

Application and data design

Design is an equally important part of creating an application and should be based on the formulated requirements. The design results are models used by programmers at the stage of code creation.

For designing applications and data, Borland offers Borland Together (Fig. 3), a platform for application analysis and design that integrates with various development tools from Borland and other manufacturers (in particular, Microsoft). The product supports modeling and design of applications and data; its integration with development tools is currently such that changes in the data model automatically change the application code, and changes in the code likewise change the models (this technology for integrating modeling tools with development tools is called LiveSource).
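
A hedged sketch of what such round-trip synchronization keeps aligned: the Java class below is assumed to exist simultaneously as a class-diagram element and as source code, so renaming a field in either view renames it in the other. The classes and names are invented for the example.

    public class Order {
        private Customer customer;  // association Order -> Customer in the class diagram
        private double total;       // attribute "total" in the class diagram

        public Customer getCustomer() { return customer; }
        public void setCustomer(Customer customer) { this.customer = customer; }
        public double getTotal() { return total; }
        public void setTotal(double total) { this.total = total; }
    }

    class Customer {
        private String name;        // shown only so the example is self-contained
        public String getName() { return name; }
        public void setName(String name) { this.name = name; }
    }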

Borland Together can be used as a tool that connects requirements management and modeling tasks with development and testing tasks and helps you understand what the implementation of the product requirements should look like.

Creating the application code

Application code creation is an area in which Borland has specialized throughout its twenty years of existence. Today Borland produces development tools for the Windows, Linux, Solaris, and Microsoft .NET platforms, as well as for a number of mobile platforms. We have written about this company's development tools many times and will not repeat ourselves in this article. We only note that the latest versions of its development tools (Borland C#Builder, Borland C++BuilderX, Borland JBuilderX), as well as the soon-expected new version of one of the most popular development tools in our country, Borland Delphi 8 for the Microsoft .NET Framework, allow close integration of the Together modeling tools and the CaliberRM requirements management tools with their development environments. We will definitely cover Delphi 8 in a separate article in the next issue of our magazine.

Testing and optimization

Testing is an absolutely essential part of creating quality software. It is at this stage that the application is checked against the formulated requirements, and the appropriate changes are made to the application code (and often to the models and databases as well). The testing stage usually requires tools for analyzing and optimizing application performance, and Borland Optimizeit Profiler serves this purpose. Today this product is integrated into the development environments of the latest versions of Borland's development tools, as well as into Microsoft Visual Studio .NET (Fig. 4).

Implementation

Software deployment is one of the most important components of a project's success. It should be carried out in such a way that during trial operation the product can be changed without serious costs and losses, and the number of users can easily be increased without reducing reliability. Since applications today are deployed in a context where companies use different technologies and platforms and operate a number of existing applications, the ability to integrate a new application with legacy systems can be important. For this purpose, Borland offers a number of cross-platform integration technologies, such as Borland Janeva, which allows .NET applications to be integrated with applications based on CORBA and J2EE technologies.

Change management

Change management is performed at all stages of application creation. From Borland's point of view, it is the most important component of a project: after all, changes can occur in requirements, in code, and in models. It is difficult to manage a project without tracking changes — the project manager must know what exactly is happening at the current stage and what has already been implemented, otherwise he risks not completing the project on time.

To solve this problem, you can use Borland StarTeam (Fig. 5), a scalable software configuration management tool that stores all the necessary data in a centralized repository and streamlines the interaction of the employees responsible for different tasks. The product provides the project team with a variety of tools for publishing requirements, managing tasks, planning work, discussing changes, version control, and document management.

Distinctive features of this product include tight integration with other Borland products; support for distributed development teams interacting over the Internet; several types of client interfaces (including a Web interface and a Windows interface); support for many platforms and operating systems; the StarTeam Software Development Kit (SDK), an application programming interface for building StarTeam-based solutions; client- and server-side data protection; tools for accessing Merant PVCS Version Manager and Microsoft Visual SourceSafe repositories; integration with Microsoft Project; and tools for data visualization, reporting, and decision support.

Instead of a conclusion

What does the appearance on the Russian market of this set of products, from a well-known manufacturer whose development tools are used in a wide variety of projects, mean for us? At a minimum, that today we can get not just a set of tools for various project participants, but an integrated platform for implementing the entire development life cycle — from requirements definition to deployment and maintenance (Fig. 6). Unlike competing product sets, this platform guarantees support for all the most popular development tools and allows its components to be embedded in them at the level of full synchronization of code with models, requirements, and changes. And let us hope that project managers will breathe a sigh of relief, saving themselves and their employees from many tedious and routine tasks...

Software development is a rather complex undertaking. Creating a software product with well-defined characteristics, of acceptable quality, within the allotted budget and on time, requires constant coordination of a large number of activities among numerous specialists. Over the past fifteen years, software development has become a full-fledged industry in which there is no place for an undocumented, purely individual approach, so it is natural that the emergence of application life cycle management methodologies has become a noticeable trend.

Of course, in software development there is still a place for the art of talented programmers and the professional excellence of the other participants in the creation of a software product, but the key realization today is that this activity has no room for incoherence, lack of documentation, and the dictate of the individual. One of the most notable trends of the first decade of this century in the software industry was the emergence of ALM (Application Lifecycle Management) — application life cycle management.

Such an approach is meant to bring management discipline into development, treating the creation of a software product as a business process and taking into account its cyclical nature. In accordance with the idea of ALM, work on any software solution does not end at the commissioning stage: the system is modernized and improved, new versions are released, and each release initiates the next round of the application life cycle.

Forrester Research analysts compare ALM to ERP for the software industry. True, the history of ALM is much shorter and cannot yet boast a comparable list of successful implementations. Analysts acknowledge that despite the objective need for such solutions, ALM tools are still of limited use and their market is still fragmented. Market watchers believe that none of the current ALM offerings realizes the full potential benefits and capabilities of application life cycle management automation. However, the evolution of development toward controlled, predictable, efficient processes for creating reliable, high-quality software cannot but be accompanied by the emergence of appropriate platforms for automating these processes.

ALM vendors provide a variety of tools and technologies to support the software development process. These tools go far beyond the traditional productivity tools of the individual developer; they aim to provide methodologies and tools oriented toward collective work on software development. To create a viable ALM solution, vendors must consider the needs of the "extended" software development team and support in their products the roles that participate in this larger process.

IT expert D. Chappell warns against a simplistic view of ALM, which is often identified with the software development life cycle (SDLC) alone: initiation, an iterative development cycle, product release, and deployment. The discipline of ALM covers a wider range of tasks, considering all aspects of the existence of applications as an enterprise resource. By Chappell's definition, the life cycle of an application includes all the stages at which an organization invests in this resource in one way or another — from the initial idea of a software solution to the disposal of end-of-life software.

HP elaborates this definition in detail: according to the company, the development cycle is only one stage of a full-fledged ALM model — the application delivery stage (Fig. 3.14) — alongside planning, operation, and decommissioning. The cycle is closed: until the organization reaches a final conclusion that the application is useless, the application continues to be improved. Competent implementation of ALM is aimed, among other things, at extending the effective working life of a software solution and, as a result, reducing the cost of purchasing or creating fundamentally new software products.

Fig. 3.14. The ALM model: a closed cycle of planning (business needs analysis, prioritization and investment, program monitoring, guiding decisions), delivery (initiation, development iterations, release, deployment), operation (monitoring, error correction, compliance with requirements), and removal from service.

D. Chappell unfolds this life cycle picture into a linear one, highlighting three main areas of ALM: governance, development, and operations. The processes corresponding to these areas flow, overlapping, from the inception of the idea of a new application (or the modernization of an existing one) through its deployment and up to the complete end of its operation.

Governance in ALM spans the entire application life cycle and includes all the processes and procedures related to decision making and project management. The main task here is to ensure that the application meets some business goal, which is what determines the significance of this component of ALM. Among the governance processes, Chappell includes the development of a detailed investment proposal (a business case containing an analysis of the costs, benefits, and risks associated with the future application), which precedes the development stage; development management using project and portfolio management (Project Portfolio Management, PPM) methods and tools; and management of a running application as part of enterprise application portfolio management (Application Portfolio Management, APM).

Application development occurs between the moment the idea is born and the deployment of the finished solution. Development processes are also implemented post-deployment when there is a need to upgrade the application or release new versions. Development includes requirements definition, design, coding, and testing, all of which are typically completed in multiple iterations.

Operations refers to the processes of monitoring and managing a running application; these are planned and started shortly before development is completed and continue until the application is retired. The inclusion of operational processes in the software life cycle is a key point: the disconnect between development teams and operations staff is considered one of the most acute problems of enterprise applications, and their integration by means of ALM promises a serious increase in the efficiency of business software. The only trouble is that in ALM environments such integration is still a good intention rather than a real implementation.

The described general picture of ALM in practice is transformed into the need to plan and automate many stages of the software life cycle. The ideal ALM environment integrates all participants in the application life cycle, provides them with consistent access to the appropriate resources and tasks, and at the same time understands the context of each individual role, providing its performers with the right tools.

An expanded list of the roles of ALM process participants, together with the tasks that the corresponding toolkit must support, includes:

  • top managers — manage the project portfolio and use dashboards to monitor key software life cycle metrics, including risks and product quality;
  • project managers — plan and control project execution, analyze possible risks, and are responsible for allocating resources;
  • analysts — interact with business users, define the requirements for the software product, and manage requirements and their changes throughout the project;
  • architects — model the architecture of the software system, including its functional components, data, and processes;
  • developers — write code using integrated development environments and various software quality assurance tools at the coding stage;
  • quality engineers — create and manage tests and perform functional, regression, and performance testing, including with automated testing tools;
  • operations staff — monitor and manage the running application and provide feedback to the development team about emerging issues;
  • business users — with the help of specialized tools, formulate requirements, report application defects, and track the status of changes.

However, the "traditional" ALM process is not able to achieve its full potential in generating profit for the organization. The point is that many vendors are aggressively pushing limited end-to-end ALM solutions to the market that aim to tie customers to closed technology platforms. Customers soon discover that these solutions do not integrate with their existing development processes, tools, and platforms. Unfortunately, this leaves development teams alone with the siled processes and data hodgepodge of ALM, which in turn prevents them from realizing the full potential of ALM.

A unified ALM environment is designed to provide tools for work and process management based on configuration management, change management, and software version control. In general, implementing ALM approaches and tools allows you to create standard processes for building and operating applications; control compliance with them across all projects; implement a strict change management process and predict the impact of changes on the IT environment and the business as a whole; build a system of metrics for quality, productivity, and development risks; track and analyze these metrics throughout the life cycle; and, ultimately, ensure that the applications you build are truly aligned with your business goals.

Initially, Borland and IBM Rational were among the few innovators who understood the importance of ALM and changed their product release strategies to support it explicitly. Reacting to the obvious opportunities, other companies joined the winning ALM concept: Microsoft, Telelogic, Mercury, Serena, Compuware, and CollabNet. Today ALM is an established trend and a growing industry recognized by analysts.

A common drawback of the first ALM systems was the weak integration of the modules for different life cycle stages, both within one manufacturer's platform and across solutions from different vendors. Unable to use a comprehensive ALM platform, customers assembled one from disparate parts, which forced them to implement end-to-end life cycle process management manually, thereby negating the main potential benefit of ALM automation. Therefore, four years ago Forrester analysts predicted the emergence of integrated ALM 2.0 platforms as the main direction for improving ALM environments — platforms that would provide common services supporting the different roles in the life cycle, use a single physical or virtual repository of development artifacts, manage life cycle processes at the micro and macro levels, integrate tools for different roles into a single environment, and support end-to-end reporting across the stages of the life cycle.

Today there are new requirements for ALM, and the widespread use of agile development methods plays a decisive role in this. A few years ago Sutherland, the creator of Scrum, one of the most famous agile methods, announced the coming wholesale adoption of agile development ideas. It seemed an exaggeration, but the prediction turned out to be correct. According to a joint study by Capgemini Group and HP Software & Solutions analysts, in 2010 over 60% of companies were already using or planning to use agile development, and among Forrester survey participants only 6% admitted that they were still merely looking into agile methods; all the rest use them to one degree or another, with 39% considering their implementations quite mature.

Developers use agile methods, while the product is put into production in a way that does not take into account the realities of agile development — and this creates serious obstacles to how quickly running applications respond to changes in business requirements and, as a result, to the agility of the business itself. The inability or unwillingness of operations staff to respond to changes made by developers in the application environment is often due to shortcomings in the documentation, which is passed from stage to stage without reflecting the key dependencies among the components of the released software, and, more globally, to the absence of a reliable, automated communication channel between developers and operations staff. This problem is only getting worse with the spread of modern data center management automation tools and new approaches to implementing IT infrastructures, including clouds. Highly automated and designed to deploy applications as quickly as possible, such environments will not be able to respond to changes in the absence of an automated communication channel and without end-to-end processes between the development and operation stages.

Awareness of the severity of the problem and the search for solutions even gave rise to the new term DevOps, used to refer to concepts and technologies for improving the interaction between development and operations. Analysts place their main hopes for implementing these ideas on a new generation of ALM environments that will ensure, in practice rather than in theory, the integration of the key stages of the application life cycle. The applications created today are in many cases composite, integrating on service principles components implemented in different programming languages for different platforms, as well as code from external systems and legacy solutions. To manage their life cycle, an ALM environment must support multiple development environments and runtime platforms (such as .NET and J2EE) and provide control over the source code, licensing, and development status of external application components.

Among the signs of widespread adoption of agile processes, analysts point to organizations moving away from orthodoxy regarding these methods. Developers are not afraid to combine different processes if doing so lets them optimize work on new systems, so an ALM 2.0 environment must support different processes and methodologies in development, portfolio management, and product quality assurance. The latter is especially important: automation of end-to-end quality management processes — from requirements definition to testing and operation — can become one of the strongest features of an integrated ALM platform.

The Rational product line for supporting the various stages of the software life cycle has always been distinguished by its breadth and by the focus on integrating its modules with one another. Butler Group analysts rated IBM Rational Software and Systems Delivery as the most complete solution on the market in terms of the range of ALM components implemented. This suite includes products for project portfolio management, model-based design and development, requirements management, configuration and change management, quality management, build and release management, orchestration of software life cycle processes, and reporting and documentation on these processes. The word "Systems" in the name appeared after the acquisition of Telelogic, whose solutions are focused on supporting systems engineering processes and are now integrated into the Rational portfolio. Their inclusion in the IBM ALM environment reflects the trend toward convergence of software and systems development processes and the formation of a single life cycle management environment for both.

But IBM's most significant contribution to the development of ALM technologies is Jazz, a long-term project to create the infrastructure for an integrated enterprise application life cycle management platform. By now a whole line of Rational products has been integrated with the Jazz platform, several new solutions built on Jazz from the ground up have been released, and in the future support for the Jazz infrastructure will be provided across all components of the Rational product line.

The core of Jazz is the Jazz Foundation platform, which combines the Jazz Team Server and a number of additional integration modules. Jazz Team Server demonstrates a new approach to integrating ALM components for different stages of the life cycle (Fig. 3.15). Where such integration was traditionally based on point-to-point connections between individual products, Jazz implements an open, distributed service architecture based on the REST style, which provides simple interaction of tool components with one another (a kind of ALM Web). The RESTful interface allows the data and functionality of various modules to be represented as services. The use of a Web-standards-based approach makes Jazz highly scalable, so the platform can support ALM tasks in small teams and large development organizations alike.
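
As a rough illustration of this resource-oriented style, the sketch below (Java 11+, java.net.http) fetches a work item as a machine-readable representation over plain HTTP. The host name and resource path are invented, and authentication is omitted for brevity; a real Jazz server requires authentication and exposes its own resource URLs.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class JazzRestSketch {
        public static void main(String[] args) throws Exception {
            HttpClient client = HttpClient.newHttpClient();

            // A lifecycle artifact (here: a work item) addressed as an HTTP resource.
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("https://jazz.example.com/ccm/workitems/42"))
                    .header("Accept", "application/rdf+xml") // a data representation, not an HTML page
                    .GET()
                    .build();

            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(response.statusCode());
            System.out.println(response.body());
        }
    }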

Fig. 3.15. An integrated enterprise platform for application life cycle management: the Jazz Team Server links client platforms (Eclipse, Visual Studio, the Web) and manages project and team structure, requirements, work items with their relationships and history, builds, source code, test cases and test results, along with event notification and security and access services.

The Jazz Foundation provides services common to all ALM components that enable the key capabilities of a modern application life cycle management environment. These include, for example, collaboration services that support the interaction of team members in solving common problems, maintain the relationships between different stages of the life cycle, and take into account the context of each specific ALM role. Jazz-based collaboration tools use instant messaging, threaded discussions, wikis, and other popular Web 2.0 features. All interactions between team members are treated as project resources and are stored in relation to the artifacts that gave rise to them (for example, defects or test cases).

The Jazz Foundation services also make it possible to define and execute processes according to a variety of methodologies, including the Rational Unified Process and various agile approaches. To this end, it provides event notification tools, support for communication between team members within particular workflows, definition and checking of rules, automation of routine tasks, and organization of workflows using tools for different life cycle stages. Much attention is paid to the transparency of life cycle processes and to process management: precise process metrics on project status, problems, and risks are maintained, and dashboards are provided to track them, including in real time, at various levels — from individual process participants to the team and portfolio management level. Other Jazz Foundation services include search tools, security and role-based access, and a distributed repository for all development resources.

The Jazz platform integrates with the Eclipse development environment by providing a number of views and perspectives. The Jazz framework contributes two significant views to Eclipse: Team Central and Team Artifacts. Both views are used to collect information and can be extended by Jazz platform components. In addition to the Eclipse-based clients, some components of the Jazz platform allow users to access the Jazz server directly from a web browser.

The Jazz web user interface provides this capability. This interface is better suited to occasional users than an IDE, because it does not require any special software to be installed on the client computer; all you need is a web browser. Each Jazz server has a main web page where the user can select a project area and log in. Once logged in, the user can interact with the Jazz server and explore the information in the Jazz repository, including checking the latest events, entering and updating work items, and downloading builds.

Among the most exciting new additions to the Rational family built specifically to run on Jazz is Rational Team Concert (RTC), a suite of collaboration and software life cycle process automation products built entirely on the Jazz architecture. IBM Rational Team Concert is a complete environment for organizing the development of information systems in a multi-project setting in which many developers work. The tool combines the efforts of development specialists, organizes their effective interaction, and maintains a high level of control over all project activities throughout the project.

The RTC system implements software configuration management, task and build management, as well as iteration planning and project reporting; it supports the definition of various types of development processes and integrates with other Rational products to support the full software life cycle. In 2009 IBM also released Rational Quality Manager, a Jazz-based test management portal, and Rational Insight, a performance management tool built on the Jazz platform that uses Cognos analytics for high-level management of development project portfolios.

The extensive integration capabilities of IBM Rational Team Concert make this tool absolutely unique. Among the existing integrations, the following should be noted.

  • 1. Integration with IBM Rational Requirements Composer as part of collaborative application lifecycle management (CALM), which allows work items to be linked to the requirements created or modified on the basis of those items and, conversely, requirements to be linked to the tasks created to plan the work of implementing them.
  • 2. Integration with IBM Rational Quality Manager, also as part of collaborative application lifecycle management, which makes it possible to organize defect tracking based on the results of tests performed on released software products.
  • 3. Integration with IBM Rational ClearQuest, to synchronize work items with change requests defined in this classic IBM Rational development management tool.
  • 4. Integration with IBM Rational ClearCase, to synchronize versioning and configuration management artifacts between the two tools.

The open Jazz Integration Architecture underlying IBM Rational Team Concert allows additional integration mechanisms to be developed for other systems deployed and actively used in an organization. One such integration option is the RTC Email Reader product from Fineco Soft, which synchronizes IBM Rational Team Concert work items with email messages of a predefined format. Reverse synchronization is also possible thanks to the built-in IBM Rational Team Concert notification subsystem.

It should also be noted that versioning and configuration management based on IBM Rational Team Concert can be organized in almost any project, even if the development environment (IDE) has no direct integration with the tool. This is made possible by using the IBM Rational Team Concert "thick client" alongside the non-integrated IDE. So, while such integrations exist for the Eclipse IDE, IBM Rational Software Architect, IBM Rational Application Developer, and Microsoft Visual Studio, with Delphi, for example, you will additionally have to use the IBM Rational Team Concert "thick client", which is not very difficult.

Etc..

"Life cycle management" comes down to the need to master the practices familiar to system engineering:

  • information management ("the necessary information should be available to the right stakeholders in a timely manner and in a usable form");
  • configuration management ("design information must comply with the requirements; the as-built information must comply with the design, including the design justifications; the physical system must be consistent with the as-built information; and the different parts of the design must be consistent with each other" — part of this practice is sometimes called "change management").

LCMS vs PLM

The newly formulated LCMS does not treat PLM as the mandatory class of software tools around which such a system is built. In large engineering projects, several (most often significantly "underdeveloped") PLMs from different vendors are used at once, and creating an LCMS is usually a matter of their inter-organizational integration. At the same time, of course, one must also decide how to integrate into the LCMS the information of those systems that are not yet connected to any of the PLM systems of the extended enterprise. The term "extended enterprise" usually refers to an organization created through a system of contracts from the resources (people, tools, materials) of the various legal entities involved in a specific engineering project. In extended enterprises, the answer to the question of which PLM should absorb the data of which particular CAD/CAM/ERP/EAM/CRM/etc. systems becomes non-trivial: you cannot prescribe that the owners of different enterprises use software from the same supplier.

And since a PLM system is still a set of software tools, while the "management system" in LCMS is clearly understood to include organizational management as well, the term LCMS clearly implies an organizational aspect, not just an information technology one. Thus, the phrase "using PLM to support a life cycle management system" is quite meaningful, although it can be confusing when "PLM" in it is literally translated into Russian.

However, when IT people take up a "life cycle management system", their understanding of it immediately shrinks back to "software only" that looks suspiciously like PLM software. After this oversimplification, difficulties begin: a "boxed" PLM system from some design automation software supplier is usually presented constructively, as a set of software modules from that supplier's catalog, without regard to the engineering and management functions supported, and is treated as a trio of the following components:

  • data-centric life cycle data repository,
  • "workflow engine" to support "management",
  • "portal" to view the contents of the repository and the status of the workflow.

Purpose of the LCMS

The main purpose of an LCMS is to detect and prevent the collisions that are inevitable in collaborative development. All other LCMS functions are derivatives that support this main one.

The main idea of any modern LCMS is to use an accurate and consistent representation of the system and the world around it in the inevitably heterogeneous and initially incompatible computer systems of an extended organization. The use of virtual mock-ups, information models, and data-centric repositories of design information ensures that collisions are detected during "construction in the computer", in "virtual assembly", and not when drawings and other design models are being turned into material reality during actual construction "in metal and concrete" and commissioning.

The idea of an LCMS is thus not about "design automation" of various kinds, above all not about generative design and generative manufacturing. An LCMS deals not with synthesis but with analysis: it detects and/or prevents collisions in the design results of individual subsystems when they are assembled together, using a variety of technologies (a sketch of the first of these follows the list):

  • merging project data together into one repository,
  • running the integrity check algorithm for engineering data distributed in several repositories,
  • conducting an actual "virtual assembly" and simulation on a specially selected subset of the design data.
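
A toy sketch of the first technology, under invented assumptions: data from two subsystems is merged into one place, and an integrity rule then flags nozzles claimed by more than one pipe. The data model, names, and rule are all illustrative, not taken from any real system.

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    public class CollisionCheckSketch {

        // Invented data item: one subsystem's claim that pipe pipeId ends at nozzle nozzleId.
        static final class Connection {
            final String pipeId;
            final String nozzleId;
            Connection(String pipeId, String nozzleId) {
                this.pipeId = pipeId;
                this.nozzleId = nozzleId;
            }
        }

        // Integrity rule: a nozzle may be claimed by at most one pipe.
        static List<String> findCollisions(List<Connection> merged) {
            Map<String, List<String>> pipesByNozzle = new HashMap<>();
            for (Connection c : merged) {
                pipesByNozzle.computeIfAbsent(c.nozzleId, k -> new ArrayList<>()).add(c.pipeId);
            }
            List<String> collisions = new ArrayList<>();
            for (Map.Entry<String, List<String>> e : pipesByNozzle.entrySet()) {
                if (e.getValue().size() > 1) {
                    collisions.add("Nozzle " + e.getKey() + " is claimed by pipes " + e.getValue());
                }
            }
            return collisions;
        }

        public static void main(String[] args) {
            // Data merged from two subsystem repositories into one repository.
            List<Connection> merged = List.of(
                    new Connection("P-100", "N-1"),
                    new Connection("P-200", "N-1"),  // second claim on N-1: a collision
                    new Connection("P-300", "N-2"));
            findCollisions(merged).forEach(System.out::println);
        }
    }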

Model-Based Approach

The use of an LCMS implies rejecting not only paper in design work, but also "electronic paper" (.tiff and other raster formats) in favor of a data-centric representation of information. Comparing two models that exist on paper in some notations and finding inconsistencies between them takes far more time and effort than preventing collisions in structured electronic documents based not on raster graphics but on engineering data models.

The data model can be designed in accordance with some language, for example:

  • as a method description (in terms of the ISO 24744 development method description standard),
  • as a metamodel (in terms of the OMG standardization consortium),
  • as a data model/reference data (in terms of the ISO 15926 life cycle data integration standard).

It is this transition to structured models, which exist already at the early stages of design, that is called model-based systems engineering (MBSE). It makes it possible to remove collisions by means of computer data processing at the earliest stages of the life cycle, even before full-fledged 3D models of the structure appear.

The LCMS should:

  • provide a mechanism for transferring data from one CAD/CAM/ERP/PM/EAM/etc. application to another — in electronic structured form, not as a "pack of electronic paper". Transferring data from one engineering information system to another (with a clear understanding of from where, to where, when, what, why, and how) is part of the functionality provided by the LCMS. Thus, the LCMS must support workflow — a flow of work performed partly by people and partly by computer systems.
  • support version control, that is, provide a configuration management function for both the models and the physical parts of the system. The LCMS maintains a taxonomy of multi-level requirements and provides the means to check multi-level design decisions and their justifications for conflicts with those requirements. In the course of engineering development, every description of the system and every model is changed and supplemented many times, and therefore exists in many alternative versions of varying degrees of correctness that match each other to varying degrees. The LCMS must ensure that only a correct combination of these versions is used in the current work (a sketch of such a baseline check follows the list).
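
A minimal sketch of such a configuration check, under invented assumptions: a baseline pins one version per artifact, and a working set that deviates from the pinned versions is flagged. Artifact names and version numbers are illustrative.

    import java.util.Map;

    public class BaselineCheckSketch {
        public static void main(String[] args) {
            // Baseline B7: artifact versions declared mutually consistent (invented names).
            Map<String, Integer> baseline = Map.of(
                    "requirements-model", 12,
                    "piping-3d-model", 31,
                    "electrical-model", 8);

            // Versions an engineer is about to use together.
            Map<String, Integer> workingSet = Map.of(
                    "requirements-model", 12,
                    "piping-3d-model", 33,  // newer than the baseline: flagged
                    "electrical-model", 8);

            baseline.forEach((artifact, pinned) -> {
                int used = workingSet.getOrDefault(artifact, -1);
                if (used != pinned) {
                    System.out.println("Configuration conflict: " + artifact
                            + " v" + used + " in use, baseline pins v" + pinned);
                }
            });
        }
    }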

LCMS Architecture

There can be many architectural solutions for an LCMS, and the same function can be supported by different structures and mechanisms of operation. Three types of architecture can be distinguished:

  1. Traditional attempts to create an LCMS consist of providing data transfers on a point-to-point basis between individual applications. In this case, a specialized workflow support system (a BPM engine, "business process management engine") or an event processing system (a complex event processing engine) may be used. Unfortunately, the amount of work required to support point-to-point exchanges turns out to be simply enormous: each one calls for specialists who understand both of the systems being linked as well as the method of information transfer.
  2. Using the ISO 15926 life cycle data integration standard, following the "ISO 15926 outside" method, in which an adapter to a neutral, standard-compliant representation is developed for each engineering application. All data thereby gets the chance to meet in some application, where collisions between them can be detected — but each application needs only one data transfer adapter rather than several (one per other application with which it must communicate). A sketch of this adapter approach follows the list.
  3. PLM (Teamcenter, ENOVIA, SPF, NET Platform, etc.) — the same standardized architecture is used, with the caveat that the data model of each of these PLMs is less universal in its ability to reflect arbitrary engineering subject areas, and is also not a neutral format available to everyone. The use of ISO 15926 as the baseline for data transfer in an LCMS can thus be considered a further development of the ideas already implemented in modern PLMs.
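
Schematically, the "one adapter per application" idea of option 2 might look as follows; the types, keys, and values are invented, and a real adapter would map to the ISO 15926 reference data library rather than to these ad hoc keys.

    import java.util.Map;

    // Invented native type of some CAD application.
    final class CadPipe {
        final String id;
        final double diameterMm;
        CadPipe(String id, double diameterMm) {
            this.id = id;
            this.diameterMm = diameterMm;
        }
    }

    // Each application implements one adapter to and from the neutral
    // representation, instead of one converter per peer application.
    interface NeutralAdapter<T> {
        Map<String, Object> toNeutral(T nativeObject);
        T fromNeutral(Map<String, Object> neutral);
    }

    public class NeutralAdapterSketch implements NeutralAdapter<CadPipe> {

        @Override
        public Map<String, Object> toNeutral(CadPipe p) {
            // Keys mimic reference-data classification; they are illustrative only.
            return Map.of("classOf", "Pipe", "id", p.id, "diameterMm", p.diameterMm);
        }

        @Override
        public CadPipe fromNeutral(Map<String, Object> neutral) {
            return new CadPipe((String) neutral.get("id"), (Double) neutral.get("diameterMm"));
        }

        public static void main(String[] args) {
            NeutralAdapterSketch adapter = new NeutralAdapterSketch();
            Map<String, Object> neutral = adapter.toNeutral(new CadPipe("P-100", 219.1));
            System.out.println(neutral);                        // neutral form, readable by any peer
            System.out.println(adapter.fromNeutral(neutral).id); // round-trip back to the native type
        }
    }

With this design, N applications need N adapters instead of the N*(N-1) point-to-point converters of option 1.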

By their configuration management architecture, LCMSs can be divided into three types:

  • "repository"(up-to-date storage of all project data in one LCMS repository, where data is copied from where it was developed),
  • "register"(LCMS maintains a list of lifecycle data addresses in numerous repositories of other CAD systems, engineering simulation systems, PLM, ERP, etc.),
  • "hybrid architecture"-- when part of the data is copied to the LCMS central repository and part of the data is available from other places via links.

The LCMS architect should also describe:

  • "portal"(including "web portal"), its functions and method of implementation. The very presence of the portal allows you to reassure top managers by demonstrating the absence of conflicts. Specific requirements are imposed on architectural solutions for the LCMS portal.
  • data integrity/consistency check algorithms life cycle, as well as a description of the operation of these algorithms:
    • a standard module in a separate application that works on data in the repository of this application - be it CAD or PLM;
    • Collision checking software specially developed for LCMS, which has access to data from different applications located in the LCMS central repository;
    • a specially developed software tool that accesses via the Internet via a secure channel to different data repositorieslocated in different organizations;
    • specially programmed checks with collision control when loading different engineering data sets into the LCMS central repository;
    • a combination of all the listed methods - different for different types of collisions; etc.
  • the way LCMS users interact(design engineers, purchasers, installers, facility project managers, etc.), and how exactly software The LCMS supports this interaction in a manner that avoids collisions. Systems engineering standards (particularly the ISO 15288 system engineering practice standard) require a choice of life cycle type for complex object engineering and an indication of which systems engineering practice options will be used. The life cycle model is one of the main artifacts that serve as organizational arrangements for coordinating the work of the extended engineering project organization. Coordinated work in the course of collaborative engineering is the key to a small number of design collisions. How exactly will the LCMS lifecycle model support it? So, PLM systems usually do not find a place for life cycle models, and even more so for organizational models. Therefore, for LCMS, it is necessary to look for other solutions for software support of these models.
  • Organizational aspect of the transition to the use of LCMS. The transition to the use of LCMS can cause a significant change in the structure and even the personnel of an engineering company: not all diggers are taken as excavators, not all cabbies are taken as taxi drivers.

The main thing for an LCMS is how the proposed solution contributes to the early detection and, better yet, prevention of collisions. If something else is at stake (a meaningful choice of life cycle type according to the project's risk profile, aging management, cost management and budget reform, mastering axiomatic design, construction with just-in-time deliveries, generative design and construction, and much, much more that is also extremely useful, modern, and interesting), then that is the business of other systems, other projects, other methods, other approaches. The LCMS should do its own job well, rather than poorly solve a huge set of arbitrarily chosen foreign tasks.

The LCMS architect thus has two main tasks:

  • generate a number of candidate architectures and their hybrids;
  • make a multi-criteria choice among these architectures, which requires:
    • meaningful consideration (meaningful selection criteria),
    • presentation of the result (justification).

Criteria for choosing an architectural solution for the LCMS

  1. The quality with which the LCMS fulfills its main purpose: detecting and preventing collisions. The main criterion is: by how much can engineering work be accelerated by speeding up the detection or prevention of collisions with the proposed LCMS architecture? And if the duration of the work cannot be reduced, by how much can the volume of work be increased in the same time with the same resources? The following methodologies are recommended:
    • Goldratt's Theory of Constraints (TOC) — the architecture should indicate which systemic constraints it removes on the critical resource path of an engineering project (not to be confused with the critical path).
    • ROI (return on investment) for the investment in the LCMS, at the stage of formalizing the result of a substantive review of the candidate architectures.
    It is important to choose the boundaries of consideration: the overall speed of an engineering project can be measured only at the boundary of the organizational system under consideration. The boundaries of a single legal entity may not coincide with the boundaries of the extended enterprise carrying out a large-scale engineering project, and an enterprise participating in only one stage of the life cycle may incorrectly assess its usefulness and criticality for the full life cycle of the system and choose the wrong way to integrate itself into the LCMS. It may then turn out that creating the LCMS does not affect the overall timing and budget of the project at all, because the most unpleasant collisions remain beyond the reach of the new LCMS.
  2. The ability to adopt an incremental LCMS development life cycle. Incremental, in ISO 15288 terms, is a life cycle in which functionality is delivered to the user not all at once but in stages — and investment in development likewise occurs in stages. Of course, the law of diminishing returns must be taken into account here: each increment of the LCMS (each new type of collision detected in advance) costs more and brings less benefit, until the LCMS development effort, after running for many years, dies out by itself. If it turns out that some proposed architecture requires investing a great deal of money in the LCMS up front, with the benefits arriving only as 100% of a turnkey system five years later, that is a bad architecture. If it turns out that a compact LCMS core can be developed and put into operation, followed by many modules of a uniform type for different kinds of collisions with a clear mechanism for developing them (for example, based on the use of ISO 15926), that is very good. The point is not so much to apply agile methodologies as to envisage a modular LCMS architecture and propose an implementation plan for a prioritized list of modules — first the most pressing, then the less pressing, and so on. This should not be confused with ICM (the incremental commitment model), although the meaning is similar: the better architecture is the one that lets you pay for the system in installments and obtain the necessary functionality as early as possible — receiving the benefit (even a small one) early and paying for the later benefit later.
  3. Fundamental financial and intellectual ability to master and maintain the technology. If we calculate the costs not only of the LCMS itself but also of all the personnel and other infrastructure required for the project, we need to understand how much of this investment in training, computers, and organizational effort will remain with the payer and owner of the LCMS, and how much will settle outside - with the numerous contractors who, of course, will be grateful first for the "scholarship" to master the new technology, and then for supporting the system they have created. What is new is usually extremely expensive - not because it is expensive in itself, but because of the avalanche of changes it triggers. It is under this criterion that the total cost of ownership of the LCMS is taken into account, and the same criterion covers consideration of the full life cycle - no longer that of the engineering system with its avoidable collisions, but that of the LCMS itself.
  4. Scalability of the LCMS architecture. This criterion is relevant for large engineering projects. Since the system is meant to be used by all the thousands of people in the extended organization, it will need to grow rapidly to that scale. Can a "pilot" or a "testbed" LCMS grow quickly without fundamental architectural changes? Most likely it cannot. Architecturally, therefore, we need not a "pilot" or a "testbed" but, from the outset, a "first stage". The scalability criterion closely intersects with the incrementality criterion, but addresses a slightly different aspect: not so much stretching the creation of the LCMS over time as extending the volume it covers. Experience shows that all systems cope with test volumes of design data, but not with industrial ones. How non-linearly will the cost of hardware and software grow with data volume and speed? How long will the workflow regulations survive once it turns out that more data passes through some workstation than one person can meaningfully review? Poor scalability can lurk not only in the technical architecture of the software and hardware solution but also in its financial architecture. Thus, even a modest license price per LCMS seat, or per new connection to the repository server, can turn a solution that is more or less attractive for ten seats into one that is financially untenable for the target thousand seats (the sketch after this list illustrates exactly this effect).
  5. Ability to address the inevitable organizational challenges, including attitudes towards beloved legacy systems in the extended organization. How much would the proposed centralized or distributed architecture require departments to "give away functions", "give away our data", and generally "give away" something compared to the current situation without an LCMS? Mainframes massively lost the competition to minicomputers, and those in turn to personal computers. There is almost no way back to centralized systems (and an LCMS inevitably looks like one), because all the data now lives in separate applications, and pulling that data into new systems is a very difficult organizational task. How is the LCMS architecture structured: does it replace the current legacy engineering applications, does it build on top of the current IT infrastructure, or is it slipped in "for free" alongside various services? How much organizational, managerial, and consulting effort will it take to push the new technology through? How many people will have to be let go, and how many new specialists found and hired? This criterion of organizational acceptability is closely related not only to centralization/decentralization but also to the motivation system in the extended enterprise; that is, evaluating an LCMS architecture against this criterion goes far beyond the LCMS alone and requires a thorough analysis of the principles on which the extended organization is built, up to and including revisiting the principles underlying the contracts under which it was created. But this is the essence of the systems approach: any target system (in this case, the LCMS) is considered first of all not "inward, in terms of its parts" but "outward, as part of a larger whole" - what matters first is not its design and mechanism of operation, but the function it supports in the external supersystem (collision avoidance) and the price the external supersystem is willing to pay for this new function. Therefore, candidate LCMS architectures are judged primarily not in terms of "the decent technologies used, for example from software vendor XYZ" (that is taken for granted: all proposed architectures are usually technologically decent, otherwise they would not be options!), but in terms of the five criteria above.
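To make criteria 1 and 4 a little more tangible, here is a minimal sketch (in Python, with purely illustrative numbers) of how one might weigh a candidate architecture's ROI from earlier collision detection against how its licensing cost scales with seat count. All figures and names, such as rework_cost_per_collision, are hypothetical assumptions, not data from any real project.

```python
# Toy comparison of a candidate LCMS architecture: ROI of earlier
# collision detection (criterion 1) versus license-cost growth with
# seat count (criterion 4). All numbers are purely illustrative.

def roi(benefit: float, cost: float) -> float:
    """Classic ROI: (benefit - cost) / cost."""
    return (benefit - cost) / cost

def license_cost(seats: int, price_per_seat: float) -> float:
    """Licensing alone; a real estimate adds hardware, training, contractors."""
    return seats * price_per_seat

# Assumption: each collision caught at design time avoids rework that
# would otherwise surface at construction time.
collisions_avoided_per_year = 400
rework_cost_per_collision = 12_000   # currency units, assumed
annual_benefit = collisions_avoided_per_year * rework_cost_per_collision

for seats in (10, 1000):
    cost = license_cost(seats, price_per_seat=3_000)
    print(f"{seats:>5} seats: cost={cost:>10,.0f}, ROI={roi(annual_benefit, cost):6.2f}")

#    10 seats: cost=    30,000, ROI=159.00  -> looks great on a pilot
#  1000 seats: cost= 3,000,000, ROI=  0.60  -> the same per-seat price
#  may no longer clear the hurdle rate at the target scale
```

The point of the toy numbers is the shape of the result, not the values: a per-seat price that looks negligible in a ten-seat pilot dominates the economics at the target thousand seats, which is exactly the trap the scalability criterion warns about.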

LCMS functions

  1. Collision avoidance
    1. Configuration management
      1. Identification (classifications, encodings)
      2. Configuration accounting (all possible baselines: ConOps, architecture, design, as-built), including data transfer to the LCMS repository, support for change workflows, and support for concurrent engineering (working with incomplete baselines)
      3. Versioning (including forks)
    2. Elimination of manual data transfer (transfer of input and output data between existing islands of automation, including transfer of data from the islands where legacy design work has been "raised to digital")
    3. Configuration of master data (NSI: normative reference information)
    4. Collaborative engineering support system (videoconferences, remote design sessions, etc. - possibly not the same one used to build the LCMS itself)
  2. Collision detection
    1. Support for a register of checked collision types and the checking technologies corresponding to that register (see the sketch after this list)
    2. Data transfer for checking collisions between automation islands (without assembly in the LCMS repository, but by means of the LCMS integration technology)
    3. Running validation workflows for different types of collisions
      1. in the LCMS repository
      2. outside the repository, by means of the LCMS integration technology
    4. Initiating the workflow for resolving a detected collision (sending collision notifications; running the resolution workflow itself is not the concern of the LCMS)
    5. Maintaining an up-to-date list of unresolved collisions
  3. Development (here the LCMS is considered an autopoietic system: "incremental implementation" is one of the most important properties of the LCMS itself, so this is a function of the LCMS proper, and not a function of the system supporting the LCMS)
    1. Ensuring communication about the development of the LCMS
      1. Work planning for the development of the LCMS (roadmap, development of an action plan)
      2. Functioning of the LCMS project office
      3. Maintaining the register of collision check types (the "wish list" register itself and the roadmap for implementing the checks)
      4. Organizational and technical modeling (Enterprise Architecture) for LCMS
      5. Communication infrastructure for LCMS developers (internet conferencing, videoconferencing, knowledge management, etc. - possibly not the same one used for collaborative engineering by means of the LCMS)
    2. Uniformity of data integration technology (for example, ISO 15926 technology)
      1. Using a Neutral Data Model
        1. Reference data library support
        2. Development of reference data
      2. Technology for supporting adapters to the neutral data model
    3. Uniform workflow/BPM integration technology (enterprise-wide)
  4. Data security (across the information systems operating within the LCMS)
    1. Single sign-on (one login and password for all information systems participating in the workflow)
    2. Managing access rights to data elements
    3. Backup
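As an illustration of collision-detection functions 2.1-2.5 above, here is a minimal sketch, under assumed data structures, of a register of collision types with pluggable checks, a validation run, notification on detection, and an up-to-date list of unresolved collisions. Names such as CollisionRegister and pipe_vs_beam are hypothetical and do not correspond to any real LCMS product's API.

```python
# Sketch of LCMS collision-detection functions 2.1-2.5: a register of
# collision types with pluggable checks, a validation run, notification
# on detection, and a list of unresolved collisions. Names are hypothetical.
from dataclasses import dataclass, field
from typing import Callable, Iterable

@dataclass
class Collision:
    kind: str            # entry in the register of collision types
    items: tuple         # identifiers of the clashing model elements
    resolved: bool = False

@dataclass
class CollisionRegister:
    # 2.1: register of checked collision types and their check technologies
    checks: dict[str, Callable[[dict], Iterable[tuple]]] = field(default_factory=dict)
    unresolved: list[Collision] = field(default_factory=list)

    def register_check(self, kind: str, check: Callable) -> None:
        self.checks[kind] = check

    def run_checks(self, model: dict) -> None:
        # 2.3: run a validation workflow for every registered collision type
        for kind, check in self.checks.items():
            for items in check(model):
                collision = Collision(kind, items)
                self.unresolved.append(collision)  # 2.5: up-to-date list
                self.notify(collision)             # 2.4: hand off resolution

    def notify(self, collision: Collision) -> None:
        # Resolution itself is a separate workflow, outside the LCMS.
        print(f"collision {collision.kind}: {collision.items}")

# Usage: a toy clash check over a flat model dictionary - two elements
# "collide" if they occupy the same spatial cell.
def pipe_vs_beam(model: dict) -> Iterable[tuple]:
    return [(p, b) for p in model["pipes"] for b in model["beams"]
            if model["space"].get(p) == model["space"].get(b)]

reg = CollisionRegister()
reg.register_check("pipe-vs-beam", pipe_vs_beam)
reg.run_checks({"pipes": ["P1"], "beams": ["B7"],
                "space": {"P1": (0, 1), "B7": (0, 1)}})
```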

Analyzing the development of the development-tools market over the past 10-15 years, one can note a general shift of emphasis from the technologies of actually writing programs (marked, since the early 1990s, by the emergence of RAD tools for "rapid application development") to the need for integrated management of the entire application life cycle - ALM (Application Lifecycle Management).

As the complexity of software projects grows, the requirements for the efficiency of their implementation rise sharply. This is all the more important today, when software developers are involved in almost every aspect of an enterprise's work and the number of such specialists keeps growing. At the same time, research in this area suggests that at least half of all "in-house" software development projects fail to live up to the hopes placed on them. Under these conditions, the task of optimizing the entire software-creation process, covering all of its participants - designers, developers, testers, support services, and managers - becomes especially urgent. Application Lifecycle Management (ALM) views the software release process as a constantly repeating cycle of interrelated stages (sketched in code after the list below):

definition of requirements (Requirements);

design and analysis (Design & Analysis);

development (Development);

testing (Testing);

deployment and maintenance (Deployment & Operations).
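The cyclical nature of this scheme can be expressed in a few lines; a minimal sketch follows, in which the stage names mirror the list above and the code itself is purely illustrative.

```python
# The ALM stages as a repeating cycle: after deployment and operations,
# feedback produces new requirements and the loop starts again.
from enum import Enum

class AlmStage(Enum):
    REQUIREMENTS = "Requirements"
    DESIGN_AND_ANALYSIS = "Design & Analysis"
    DEVELOPMENT = "Development"
    TESTING = "Testing"
    DEPLOYMENT_AND_OPERATIONS = "Deployment & Operations"

def next_stage(stage: AlmStage) -> AlmStage:
    stages = list(AlmStage)
    return stages[(stages.index(stage) + 1) % len(stages)]  # wraps around

assert next_stage(AlmStage.DEPLOYMENT_AND_OPERATIONS) is AlmStage.REQUIREMENTS
```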

Each of these steps must be carefully monitored and controlled. A properly organized ALM system allows you to:

reduce the time to market (developers only need to ensure that their programs meet the formulated requirements);

improve quality by ensuring that the application meets users' needs and expectations;

increase productivity (developers get the opportunity to share best practices in development and implementation);

accelerate development through tool integration;

reduce maintenance costs by constantly keeping the application consistent with its project documentation;

get the most out of investments in skills, processes, and technology.

Strictly speaking, the concept of ALM is, of course, not fundamentally new - this understanding of the problems of software development arose about forty years ago, at the dawn of industrial development methods. Until relatively recently, however, the main efforts in automating software development were aimed at creating tools for programming proper, as the most labor-intensive stage. Only in the 1980s, as software projects grew more complex, did the situation begin to change significantly, sharply increasing the relevance of extending the functionality of development tools (in the broad sense of the term) in two main directions: 1) automating all the other stages of the software life cycle and 2) integrating the tools with one another.

Many companies have addressed these tasks, but the undisputed leader here was Rational, which, for more than twenty years since its founding, specialized in automating software development processes. It was Rational that pioneered the widespread use of visual methods of program design (and was practically the author of UML, accepted as the de facto standard in this area), created a common ALM methodology, and built a corresponding set of tools. It can be said that by the beginning of this century Rational was the only company with a full range of ALM-supporting products in its arsenal (from business design to maintenance), with the exception, however, of one class of tools: ordinary coding tools. In February 2003, however, it ceased to exist as an independent organization and became a division of IBM Corporation under the name IBM Rational.

Until quite recently, Rational was practically the only maker of integrated ALM-class development tools, although competing tools from other vendors existed, and still exist, for individual stages of software development. A couple of years ago, however, it was joined by Borland Corporation, which has always held a strong position in traditional application development tools (Delphi, JBuilder, etc.); those tools actually form the basis of the corporation's ALM suite, which has been extended through the acquisition of other companies making complementary products. This is the fundamental difference between the two companies' business models, and it opens up the potential for real competition. Since Rational became part of IBM, Borland has positioned itself as today's only independent supplier of a comprehensive ALM platform (that is, one that does not promote its own operating systems, languages, and so on). Competitors, in turn, note that Borland has not yet formulated a clear ALM methodology to provide a basis for combining the tools it has acquired.

Another major player in the development-tools field is Microsoft Corporation. So far it is not attempting to create an ALM platform of its own; it is moving in this direction only within the framework of cooperation with other suppliers, the same Rational and Borland (both became the first participants in the Visual Studio Industry Partner program). At the same time, Microsoft's flagship development tool, Visual Studio .NET, is steadily expanding its functionality with higher-level modeling and project-management tools, in particular through integration with Microsoft Visio and Microsoft Project.

It should be noted that today almost all the leading developers of technologies and software products (besides those listed above, one could name Oracle, Computer Associates, and others) have their own software development technologies, created both in-house and through the acquisition of products and technologies from small specialized companies. And although, like Microsoft, they do not yet plan to create their own ALM platforms, the CASE tools these companies release are widely used at particular stages of the software life cycle.