THE G-MARC METHODOLOGY AND SUPPORTING TOOL-SET

 

Pedigree

 

The G-MARC methodology has a substantial pedigree.  Commonly accepted concepts such as:

- Atomic requirements,

- Objective requirements,

- Separating User requirements from System requirements,

- Separating goals from constraints,

- Requirements Modelling,

and many more, all originated with G-MARC[1].

In total, more than 50 elapsed years of in-depth experience (involving several hundred man-years of effort) have been invested in the development of G-MARC.  A viable methodology cannot be developed on the back of a single project, especially when its effectiveness must be verified before it is used in anger.  To date, on every project on which G-MARC has been used, the customer has expressed considerable satisfaction with the results achieved.

G-MARC is unique, both in the wide range of capabilities it brings to bear when engineering quality into a requirement specification and in its effectiveness when introducing structure into that specification.

No other known methodology or tool provides its users with the ability to objectively measure the quality of a specification as it progressively evolves from an unstructured, subjective and nearly always untestable collection of narrative statements, to a fully structured, precisely defined and objectively testable set of atomic requirements.

 

Process

 

The G-MARC methodology continues to be developed.  It is supported by a clearly defined, mature and robust suite of application software that is, in turn, supported by a comprehensive suite of Microsoft foundation software.

Having captured an initial body of ad hoc (unstructured) information, whether through prompted interviews, workshops or existing documents, the first sequence of G-MARC activities, referred to as 'Reduction', leads to the development of a (tentative) set of objectively testable, atomic requirements.  It is important that each atomic requirement be objectively testable: if it is not possible to identify an acceptance test for a requirement, then the item of information concerned cannot really be regarded as a requirement.  This is a fundamental principle to which the International Organization for Standardization [ISO] has subscribed for many years.
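
The principle of objective testability can be pictured with a minimal sketch.  The short Python fragment below represents each atomic requirement together with its acceptance test and rejects any statement for which no acceptance test has been identified; the class name, field names and identifiers are assumptions made for illustration and are not taken from the G-MARC tool-set.

from dataclasses import dataclass
from typing import Optional

@dataclass
class AtomicRequirement:
    ident: str                      # unique identifier, e.g. "UR-001" (illustrative)
    statement: str                  # a single, indivisible requirement statement
    acceptance_test: Optional[str]  # how the requirement will be objectively verified

    def is_testable(self) -> bool:
        # A statement with no identifiable acceptance test cannot really be
        # regarded as a requirement (the principle noted above).
        return bool(self.acceptance_test and self.acceptance_test.strip())

candidates = [
    AtomicRequirement("UR-001", "The operator shall be able to export a report.",
                      "Export a report and confirm it opens in the agreed format."),
    AtomicRequirement("UR-002", "The system shall be user friendly.", None),
]
requirements = [r for r in candidates if r.is_testable()]   # UR-002 is rejected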

The body of atomic requirements produced by the G-MARC 'Reduction' process is allocated to a multi-dimensional classification matrix.  This matrix allows anyone to view the coverage of the subject matter in terms of the number and type of atomic requirements.
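
By way of illustration only, such a classification matrix can be pictured as a mapping from a tuple of classification dimensions to the requirements allocated to that cell.  The dimension names and identifiers below are assumptions, not G-MARC's own classification scheme.

from collections import defaultdict

# (stakeholder area, subsystem, requirement type) -> allocated requirement ids
matrix = defaultdict(list)

allocations = [
    ("Operator",   "Reporting", "Functional",  "UR-001"),
    ("Operator",   "Reporting", "Performance", "UR-007"),
    ("Maintainer", "Logging",   "Functional",  "UR-015"),
]
for stakeholder, subsystem, req_type, req_id in allocations:
    matrix[(stakeholder, subsystem, req_type)].append(req_id)

# Coverage view: the number of requirements held in each cell of the matrix.
for cell, ids in sorted(matrix.items()):
    print(cell, len(ids))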

Each cell in the matrix may contain one or more groups of requirements.  These groups are assigned names according to their content and, having been thus identified, these groups of objective, atomic requirements are then analysed to develop appropriate hierarchical structures.  Ideally, this Application structuring should always be provided with the best possible input, as this reduces the risk of having to repeat activities through successive waves of revision.

The G-MARC Requirements Engineering methodology uses the hierarchical structures and framework to identify and rectify the following (a sketch of some of these checks is given after the list):

- bias in the type and number of requirements against technical (or stakeholder) area;

- unjustified (orphan) requirements;

- incompleteness (such as missing requirements indicated by gaps in the framework);

- inconsistency (contradiction);

- redundancy (duplicate requirements);

- incorrectness (functions which do not operate together as expected);

- incoherence (requirements expressed at differing levels of detail within a single group).
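
A minimal sketch of two of these checks (detection of orphan requirements and of duplicates) is given below.  The identifiers, field names and hierarchy shape are assumptions made purely for illustration and are not the G-MARC data model.

# Each requirement may name a parent requirement that justifies it.
requirements = {
    "SR-010": {"parent": "UR-001", "text": "Reports shall be exportable as PDF."},
    "SR-011": {"parent": "UR-001", "text": "Reports shall be exportable as PDF."},
    "SR-012": {"parent": None,     "text": "The database shall be replicated."},
}

# Unjustified (orphan) requirements: nothing above them to trace back to.
orphans = [rid for rid, req in requirements.items() if req["parent"] is None]

# Redundancy: requirements whose text duplicates that of another requirement.
seen, duplicates = {}, []
for rid, req in requirements.items():
    key = req["text"].strip().lower()
    if key in seen:
        duplicates.append((seen[key], rid))
    seen.setdefault(key, rid)

print("Orphans:", orphans)          # ['SR-012']
print("Duplicates:", duplicates)    # [('SR-010', 'SR-011')]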

 

 

Benefits

 

A major benefit of using the G-MARC methodology is access to a variety of metrics for measuring specification quality during 'Reduction'.  This is vital for knowing whether the quality is improving (or not) as development proceeds.
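
By way of illustration only, one such metric might be the proportion of captured statements that have an identifiable acceptance test, tracked across successive passes of 'Reduction'.  The metric and the figures below are assumptions chosen for illustration, not published G-MARC metrics.

def testability_ratio(statements):
    # Fraction of statements for which an acceptance test has been identified.
    testable = sum(1 for s in statements if s.get("acceptance_test"))
    return testable / len(statements) if statements else 0.0

pass_1 = [{"acceptance_test": None}] * 7 + [{"acceptance_test": "T1"}] * 3
pass_2 = [{"acceptance_test": None}] * 2 + [{"acceptance_test": "T1"}] * 8

print(f"Pass 1: {testability_ratio(pass_1):.0%}")   # 30%
print(f"Pass 2: {testability_ratio(pass_2):.0%}")   # 80% - quality improving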

A further major G-MARC benefit is that G-MARC derived structural information can be used generically to assist the development of other specifications.  The classification matrix can rapidly be evolved into a structural standard for all specifications in a given area, thus drawing attention to, and capitalizing on, commonality across systems.

The structural framework can be used by the G-MARC tool-set for behaviour modelling, testing and the development of operational scenarios, in those situations where it is considered desirable to do so.

The structural framework can also be used to accumulate such things as typical cost and time for each requirement and/or group of requirements.  Using this type of capability, the G-MARC User is able to produce rapid estimates of likely implementation costs and timescales for an entire proposed system, or for any combination of subassemblies within such a system.  These capabilities apply equally well to the development of Business requirements as they do to System requirements.
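
A minimal sketch of this kind of roll-up is shown below.  The group names, figures and the simple summation of time (which assumes the groups are implemented sequentially) are all assumptions made for illustration.

# Typical cost (in thousands) and elapsed time held against requirement groups,
# each of which names the group it belongs to.
groups = {
    "Reporting": {"cost_k": 120, "weeks": 10, "parent": "System"},
    "Logging":   {"cost_k": 45,  "weeks": 4,  "parent": "System"},
    "Archiving": {"cost_k": 60,  "weeks": 6,  "parent": "Reporting"},
}

def estimate(root):
    # Total cost and time for a group plus everything grouped beneath it.
    cost = groups.get(root, {}).get("cost_k", 0)
    weeks = groups.get(root, {}).get("weeks", 0)
    for name, group in groups.items():
        if group["parent"] == root:
            sub_cost, sub_weeks = estimate(name)
            cost, weeks = cost + sub_cost, weeks + sub_weeks
    return cost, weeks

print(estimate("System"))     # (225, 20) - the entire proposed system
print(estimate("Reporting"))  # (180, 16) - a single subassembly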

Application of all of the above capabilities progressively improves the overall quality of, confidence in, and value of the specification.  This, in turn, leads to a clearer record of the impact of the various families of goals and constraints on any resulting system, which in turn makes it possible to modify the system easily, without risk and in full knowledge of the implications.

Finally, basing an Invitation to Tender (ITT) on a specification produced using G-MARC provides an opportunity to evaluate competing proposed solutions objectively and on a level footing, rather than relying on the typical, highly subjective evaluation process based upon the opinions of a review board.  This is because the structure and full modelling described above are fundamental to the quantitative assessment of any complex system description, be it in the form of a SOR or a design.  Review board opinions, even those of the best experts, inevitably evolve during the course of evaluating a series of tenders, resulting in a subjective evaluation process that also varies over time in an unknown manner.
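
Purely as an illustration of the kind of quantitative comparison that such a specification makes possible, each tender can be scored by how fully it addresses the weighted set of atomic requirements.  The weighting scheme and identifiers below are assumptions, not part of G-MARC.

# Relative importance of each atomic requirement (illustrative weights).
weights = {"UR-001": 3, "UR-002": 1, "UR-003": 2}

# The requirements that each competing tender claims to satisfy.
tenders = {
    "Supplier A": {"UR-001", "UR-003"},
    "Supplier B": {"UR-001", "UR-002", "UR-003"},
}

total = sum(weights.values())
for name, covered in tenders.items():
    score = sum(w for rid, w in weights.items() if rid in covered)
    print(f"{name}: {score}/{total}")   # Supplier A: 5/6, Supplier B: 6/6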


 



 [1] These assertions can be readily confirmed by reference to the final report of the UK Department of Trade & Industry [DTI] Information Engineering Directorate project, ref. no. IED4/1/1151.  This project was originally submitted by CSA to the DTI for consideration in 1988.  The project was subsequently awarded to CSA and was completed in 1992 in collaboration with the Civil Aviation Authority [CAA], the National Engineering Research Council [NERC], King's College London, the City University and the Ministry of Defence.  The work was based on a code of practice that ISA had been developing in-house since 1980.