Software design at the module level. Development of a software module. Structured programming. Professional activity

The transition from informal to formal is essentially informal.



Lecture 8. Development of a software module

The procedure for developing a software module. Structured programming and stepwise refinement. The concept of pseudocode. Checking the program module.

8.1. The procedure for developing a software module.

When developing a software module, it is advisable to adhere to the following order:

    studying and checking the module specification, and choosing the programming language;

    choosing the algorithm and the data structures;

    programming (coding) the module;

    polishing the module text;

    checking the module;

    compiling the module.

The first step of module development is largely an adjacent, bottom-up check of the program structure: by studying the module specification, the developer must make sure that the specification is clear to him and is sufficient for developing the module. At the end of this step the programming language is chosen: although the language may already be fixed for the whole software system, in some cases (if the programming system allows it) a different language may be chosen that is better suited to the implementation of this particular module (for example, assembly language).

At the second step it is necessary to find out whether algorithms are already known for solving the given problem or problems close to it; if a suitable algorithm is found, it is advisable to use it. The choice of the data structures that the module will use while performing its functions largely predetermines the logic and the quality indicators of the module being developed, so it should be regarded as a highly important decision.

At the third step the text of the module is constructed in the chosen programming language. The abundance of details that must be taken into account when implementing the functions fixed in the module specification can easily lead to a very confusing text containing many errors and inaccuracies. Finding errors in such a module and making the required changes can be a very laborious task, so it is important to use a technologically sound and practically proven programming discipline for constructing the module text. Dijkstra was the first to draw attention to this, formulating and substantiating the basic principles of structured programming; many programming disciplines widely used in practice are based on these principles. The most common of them is stepwise refinement, which is discussed in detail in Sections 8.2 and 8.3.

The next step of module development brings the text of the module into final form in accordance with the quality specification of the software system. When programming the module, the developer concentrates on the correct implementation of its functions, leaving comments incomplete and allowing some violations of the requirements on program style. When polishing the module text, he must edit the comments already present in the text and, possibly, add further comments in order to ensure the required quality primitives. For the same purpose the text of the program is edited to meet the stylistic requirements.

The module checking step is a manual check of the internal logic of the module before debugging it (i.e., before executing it on a computer); it implements the general principle, formulated for the programming technology under discussion, that the decisions made at every stage of software development must be checked (see Lecture 3). Module checking methods are discussed in Section 8.4.

Finally, the last step in the development of a module completes the checking of the module (now with the help of the compiler) and passes on to the process of debugging the module.

8.2. Structured programming.

When programming a module, it should be kept in mind that the program must be understandable not only to the computer but also to a human: the developer of the module, the people who check the module, the testers who prepare tests for debugging it, and the maintainers of the software system who make the required changes to it will all have to analyze the logic of the module repeatedly. Modern programming languages contain enough means to confuse this logic as much as one likes, thereby making the module hard for a human to understand and, as a consequence, unreliable or difficult to maintain. Therefore it is necessary to choose the language means carefully and to follow a definite programming discipline. Dijkstra was the first to draw attention to this; he proposed building a program as a composition of a few types of control constructs, which can make the logic of the program much easier to follow. Programming that uses only such constructs came to be called structured programming.

Fig. 8.1. The basic control constructs of structured programming.

The basic constructs of structured programming are sequence, branching, and repetition (see Fig. 8.1). The components of these constructs are generalized operators (processing nodes) S, S1, S2 and a condition (predicate) P. A generalized operator can be either a simple operator of the programming language used (an assignment, input, output, or procedure call operator) or a program fragment that is itself a composition of the basic control constructs of structured programming. It is essential that each of these constructs has exactly one entry and one exit for control; consequently, a generalized operator also has exactly one entry and one exit.
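
To make the idea concrete, here is a minimal, self-contained sketch in Python (not part of the lecture's pseudocode; the task and all names are invented for illustration): each function below is a generalized operator with exactly one entry and one exit, built only from sequence, branching, and repetition.

    def classify(x):                     # branching: IF P THEN S1 ELSE S2 END IF
        if x >= 0:
            label = "non-negative"
        else:
            label = "negative"
        return label                     # single exit

    def classify_all(values):            # repetition: WHILE P DO S END WHILE
        labels = []
        i = 0
        while i < len(values):
            labels.append(classify(values[i]))   # sequence: S1; S2
            i += 1
        return labels

    print(classify_all([3, -1, 0]))      # ['non-negative', 'negative', 'non-negative']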

    It is also very important that these constructs are already mathematical objects (which, in essence, explains the reason for the success of structured programming). It is proved that for each unstructured program it is possible to build a functionally equivalent (i.e., solving the same problem) structured program. For structured programs, you can prove some properties mathematically, which allows you to detect some errors in the program. A separate lecture will be devoted to this issue.

Structured programming is sometimes called "programming without GO TO". However, the point is not the GO TO operator itself but its undisciplined use. Quite often, when structured programming is implemented in certain programming languages (for example, FORTRAN), the jump operator (GO TO) is used to express the structured constructs, and this does not compromise the main advantages of structured programming. It is the "non-structural" jump operators that confuse a program, especially jumps to an operator located above (earlier) in the module text than the jump operator being executed. Nevertheless, trying to avoid the jump operator in some simple cases can lead to overly cumbersome structured programs, which does not improve their clarity and carries the danger of introducing additional errors into the module text. Therefore we may recommend avoiding the jump operator wherever possible, but not at the expense of the clarity of the program.

Useful cases of using the jump operator include exiting a loop or a procedure on a special condition that terminates the work of the given loop or procedure "ahead of schedule", i.e., terminates the work of some structural unit (generalized operator) and thereby violates the structuredness of the program only locally. Greater difficulties (and complication of the structure) arise with a structured implementation of the reaction to exceptional (often erroneous) situations, since this requires not only an early exit from the structural unit but also the necessary handling of the exceptional situation (for example, issuing suitable diagnostic information). The exception handler may be at any level of the program structure, while it may need to be reached from different lower levels. The following "non-structural" implementation of the reaction to exceptional situations is quite acceptable from the technological point of view: exception handlers are placed at the end of one or another structural unit, and each such handler is programmed so that, after finishing its work, it leaves the structural unit at whose end it is placed. Such a handler is invoked by a jump operator from within the given structural unit (including from any structural unit nested in it).
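
A small Python sketch of the acceptable "ahead-of-schedule" exit described above (illustration only, all names invented): the return terminates just the enclosing structural unit, the loop together with the procedure, and does not disturb the structure of the rest of the program.

    def first_negative(values):
        # early ("ahead-of-schedule") exit by a special condition:
        # it terminates only this structural unit
        for v in values:
            if v < 0:
                return v
        return None

    print(first_negative([4, 7, -2, 5]))   # -2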

8.3. Stepwise refinement and the concept of pseudocode.

Structured programming gives recommendations about what the text of a module should look like; the question arises of how the programmer should proceed in order to construct such a text. Programming of a module often begins with building a flowchart that outlines the logic of its operation, but modern programming technology does not recommend doing this without suitable computer support. Although flowcharts allow the logic of the module to be represented very clearly, when they are manually coded into a programming language a very specific source of errors appears: mapping an essentially two-dimensional structure, such as a flowchart, onto linear text representing the module carries the danger of distorting the module's logic, all the more so because it is psychologically quite difficult to maintain a high level of attention when re-examining it. An exception is the case when a graphical editor is used to build the flowcharts and they are formalized to the point where the text in a programming language is generated from them automatically (as is done, for example, in R-technology).

As the main method of constructing the text of a module, modern programming technology recommends stepwise refinement (step-by-step detailing). The essence of this method is that the development of the module text is broken into a number of steps. At the first step the general scheme of the module's operation is described in a surveyable linear textual form (i.e., using very large concepts); this description is not completely formalized and is oriented toward human perception. At each following step, one of the concepts (we will call it the concept being refined) used, as a rule informally, in some description produced at one of the previous steps is refined and detailed. As a result of such a step, a description of the chosen refined concept is created, either in terms of the base programming language (i.e., the language chosen for the representation of the module) or in the same form as at the first step, using new refined concepts. The process ends when all refined concepts are ultimately expressed in the base programming language. The final step is to obtain the module text in the base programming language by replacing all occurrences of the refined concepts with the descriptions given for them and expressing all occurrences of the structured programming constructs by means of that language.
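
A hedged sketch of how these steps might look if the base language is Python rather than pseudocode (the task and all names are invented for illustration): at the first step the module is written in terms of large, not-yet-refined concepts, represented here simply as calls to functions that do not exist yet; each subsequent step supplies a definition for one of them.

    # Step 1: the general scheme of the module in terms of large, unrefined concepts.
    def summarize(measurements):
        cleaned = remove_invalid(measurements)   # refined concept 1
        return mean_value(cleaned)               # refined concept 2

    # Step 2: the concept "remove_invalid" is detailed in the base language.
    def remove_invalid(measurements):
        return [m for m in measurements if m is not None]

    # Step 3: the concept "mean_value" is detailed in the base language.
    def mean_value(values):
        return sum(values) / len(values) if values else 0.0

    print(summarize([1.0, None, 3.0]))   # 2.0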

Stepwise refinement involves the use of a partially formalized language for representing these descriptions, called pseudocode. This language allows all the structured programming constructs, which are formalized, to be used together with informal natural-language fragments representing generalized operators and conditions. Generalized operators and conditions may also be given by corresponding fragments in the base programming language.

The head description in pseudocode may be taken to be the external design of the module in the base programming language, which contains:

    the beginning of the module in the base language, i.e., the first sentence or the heading (specification) of this module;

    the section (set) of declarations in the base language, where instead of the bodies of procedures and functions only their external design is given;

    an informal designation of the sequence of operators of the module body as a single generalized operator (see below), as well as an informal designation of the body of each procedure or function declaration as a single generalized operator;

    the last sentence (end) of the module in the base language.

The head description of a procedure or function looks similar. However, following Dijkstra, it is better to represent the declarations section here by an informal designation as well, detailing it in a separate description.

An informal designation of a generalized operator in pseudocode is an arbitrary natural-language sentence that outlines its content. The only formal requirement on such a designation is that the sentence occupy one or more whole graphic (printed) lines and end with a period (or another symbol specially set aside for this purpose).

For each informal generalized operator a separate description must be created that expresses the logic of its operation (details its content) by means of a composition of the basic constructs of structured programming and other generalized operators. The heading of such a description is the informal designation of the generalized operator being detailed. The basic constructs of structured programming can be represented in pseudocode in the following form (see Fig. 8.2). Here the condition may either be given explicitly in the base programming language as a Boolean expression, or be represented informally by a natural-language fragment outlining the meaning of the condition. In the latter case a separate description detailing this condition must be created, with the designation of the condition (the natural-language fragment) as its heading.

Fig. 8.2. The basic constructs of structured programming in pseudocode.

Fig. 8.3. Particular cases of the jump operator as a generalized operator.

The particular cases of the jump operator discussed above, exit from a repetition (loop) and exit from a procedure (function), can also be used as generalized operators in pseudocode (see Fig. 8.3). A sequence of exception handlers may be given at the end of a module or of a procedure (function) description. Each such handler has the form:

    EXCEPTION exception_name
        generalized_operator
    END EXCEPTION

The difference between an exception handler and a parameterless procedure is the following: after a procedure is executed, control returns to the operator following its call, whereas after an exception handler is executed, control returns to the operator following the call of the module or procedure (function) at whose end this handler is placed.
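
In a language with built-in exceptions this discipline corresponds to wrapping the body of the module (procedure) in a handler placed at its end: after the handler finishes, control leaves the unit and returns to the point following its call. A hedged Python sketch (all names are invented for illustration):

    def process_file(records):
        try:
            for record in records:
                if record is None:
                    # analogous to invoking the handler of EXCEPTION damaged_record by a jump
                    raise ValueError("damaged record")
                print("processing", record)
        except ValueError as error:
            # the handler placed at the end of the structural unit
            print("diagnostic:", error)
        # after the handler finishes, control leaves process_file and returns
        # to the operator following its call

    process_file(["r1", None, "r2"])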

It is recommended that at each step of detailing a sufficiently substantial but easily surveyable description be created, so that it fits on one page of text. As a rule, this means that such a description should be a composition of five or six constructs of structured programming. It is also recommended that nested constructs be shifted several positions to the right (see Fig. 8.4). The result is a description of the logic of operation whose clarity competes quite well with flowcharts, while retaining an essential advantage: the description remains linear.

    DELETE RECORDS IN THE FILE UP TO THE FIRST ONE SATISFYING THE FILTER:
        SET TO THE BEGINNING OF THE FILE.
        WHILE THE CURRENT RECORD DOES NOT SATISFY THE FILTER DO
            DELETE THE CURRENT RECORD FROM THE FILE.
        END WHILE
        IF NO RECORDS WERE DELETED THEN
            PRINT "NO RECORDS DELETED".
        ELSE
            PRINT "DELETED n RECORDS".
        END IF

Fig. 8.4. An example of one step of detailing in pseudocode.
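
For illustration only, the final step of detailing for this fragment might yield roughly the following Python text, under the assumption that the file is modeled as a list of records and the filter as a predicate (none of these names come from the lecture):

    def delete_records_up_to_first_match(records, satisfies_filter):
        deleted = 0
        # WHILE the current record does not satisfy the filter DO ...
        while records and not satisfies_filter(records[0]):
            del records[0]                       # delete the current record from the file
            deleted += 1
        if deleted == 0:                         # IF no records were deleted THEN ...
            print("no records deleted")
        else:
            print("deleted", deleted, "records")

    file_records = ["aa", "ab", "ba", "bb"]
    delete_records_up_to_first_match(file_records, lambda r: r.startswith("b"))
    print(file_records)                          # ['ba', 'bb']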

Dijkstra is sometimes credited with the idea of stepwise refinement. In fact, however, Dijkstra proposed an essentially different method of constructing the module text, which seems to us deeper and more promising. First, along with refining the operators, he proposed gradually (step by step) refining (detailing) the data structures used. Second, at each step he proposed creating a virtual machine for the detailing and, in its terms, detailing all the refined concepts for which this machine makes it possible. Thus Dijkstra proposed, in essence, detailing by horizontal layers, which transfers his idea of layered systems (see Lecture 6) to the level of module development. This method of module development is now supported by Ada packages and by the means of object-oriented programming.
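
A hedged sketch of that idea in today's terms: each layer provides a small "virtual machine", and the layer above is detailed only in terms of its operations, so the data structure can be chosen (and changed) at a later refinement step. In Python this might be expressed with a class (all names are invented for illustration):

    class RecordFile:
        """A tiny 'virtual machine' for the layer above: only these operations are
        visible; the representation of the file is an implementation detail."""

        def __init__(self, records):
            self._records = list(records)     # representation chosen at a later step

        def is_empty(self):
            return not self._records

        def current(self):
            return self._records[0] if self._records else None

        def delete_current(self):
            if self._records:
                del self._records[0]

    f = RecordFile(["r1", "r2"])
    f.delete_current()
    print(f.current())   # r2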

8.4. Checking the program module.

The following methods are used for checking a software module:

    static checking of the module text;

    end-to-end tracing (walkthrough);

    proof of properties of the program module.

    When statically checking the text of a module, this text is read from beginning to end in order to find errors in the module. Usually, for such a check, in addition to the module developer, one or even several programmers are involved. It is recommended that errors detected during such a check should not be corrected immediately, but upon completion of reading the module text.

End-to-end tracing is one of the kinds of dynamic checking of a module. It also involves several programmers, who manually trace the execution of the module, operator by operator in the order implied by the logic of its operation, on a certain set of tests.

    The next lecture is devoted to proving the properties of programs. It should only be noted here that this method is still used very rarely.

Literature for Lecture 8.

E. Dijkstra. Notes on Structured Programming // O.-J. Dahl, E. Dijkstra, C.A.R. Hoare. Structured Programming. Moscow: Mir, 1975, pp. 24-97.

N. Wirth. Systematic Programming. Moscow: Mir, 1977, pp. 94-164.

J. Hughes, J. Michtom. A Structured Approach to Programming. Moscow: Mir, 1980, pp. 29-71.

W. Turski. Programming Methodology. Moscow: Mir, 1981, pp. 90-164.

E.A. Zhogolev. Technological foundations of modular programming // Programming, 1980, pp. 44-49.

R.C. Holt. Structure of Computer Programs: A Survey // Proceedings of the IEEE, 1975, 63(6), pp. 879-893.

G. Myers. Software Reliability. Moscow: Mir, 1980, pp. 92-113.

I. Pyle. Ada: The Language of Embedded Systems. Moscow: Finance and Statistics, 1984, pp. 67-75.

M. Zelkowitz, A. Shaw, J. Gannon. Principles of Software Development. Moscow: Mir, 1982, pp. 65-71.

A.L. Fuksman. Technological Aspects of Creating Software Systems. Moscow: Statistika, 1979, pp. 79-94.

Lecture 9. Proof of program properties

The concept of program justification. Formalization of program properties, the Hoare triple. Rules for establishing the properties of the assignment operator and of the conditional and compound operators. A rule for establishing the properties of the loop operator; the concept of a loop invariant. Termination of program execution.

9.1. Justification of programs. Formalization of program properties.

To improve the reliability of software it is very useful to supply programs with additional information, the use of which can significantly raise the level of checking of the software system. Such information can be given in the form of informal or formalized statements attached to various fragments of the programs; we will call such statements justifications of the program. Informal justifications may, for example, explain the motives for particular decisions, which can greatly facilitate the search for and correction of errors, as well as the study of the programs during maintenance. Formalized justifications make it possible to prove certain properties of programs, either manually or with automatic checking (establishment) of such properties.

One of the concepts of formal program justification in use today is based on so-called Hoare triples. Let S be some generalized operator over the information environment IS, and let P and Q be predicates (assertions) over this environment. Then the notation {P} S {Q} is called a Hoare triple; the predicate P is called the precondition and the predicate Q the postcondition of the operator S. The operator (in particular, the program) S is said to possess the property {P} S {Q} if, whenever the predicate P is true before the execution of S, the predicate Q is true after its execution.

Simple examples of program properties:

    (9.1) {n = 0} n := n + 1 {n = 1},

    (9.2) {n < ...},

    (9.3) {n < ...},

    (9.4) {n > 0} p := 1; m := 1;
          WHILE m /= n DO
              m := m + 1; p := p * m
          END WHILE
          {p = n!}.
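
A Hoare triple is an assertion about program states rather than an executable construct, but its content can be illustrated by instrumenting a program with runtime checks. A minimal Python sketch for property (9.4), assuming n is a positive integer (illustration only):

    import math

    def factorial_program(n):
        assert n > 0                      # precondition {n > 0}
        p = 1
        m = 1
        while m != n:                     # WHILE m /= n DO
            m = m + 1
            p = p * m
        assert p == math.factorial(n)     # postcondition {p = n!}
        return p

    print(factorial_program(5))           # 120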

To prove a property of a program S, one uses the properties of the simple operators of the programming language (here we restrict ourselves to the empty operator and the assignment operator) and the properties of the control constructs (compositions) by means of which the program is built from simple operators (here we restrict ourselves to the three basic compositions of structured programming, see Lecture 8). These properties are usually called the rules for the verification of programs.

9.2. Properties of simple operators.

For the empty operator the following theorem holds.

Theorem 9.1. Let P be a predicate over the information environment. Then the property {P} {P} holds (with the empty operator between the precondition and the postcondition).

    The proof of this theorem is obvious: an empty operator does not change the state of the information environment (in accordance with its semantics), therefore its precondition remains true even after its execution.

For the assignment operator the following theorem holds.

Theorem 9.2. Let the information environment IS consist of the variable X and the remaining part of the information environment RIS, i.e., IS = (X, RIS). Then the property

{Q(F(X, RIS), RIS)} X := F(X, RIS) {Q(X, RIS)}

holds, where F(X, RIS) is some single-valued function and Q is a predicate.

Proof. Suppose that the predicate Q(F(X0, RIS0), RIS0) is true before the execution of the assignment operator, where (X0, RIS0) is an arbitrary state of the information environment IS. Then after the execution of the assignment operator the predicate Q(X, RIS) will be true, since X receives the value F(X0, RIS0) while the RIS part of the state is not changed by this assignment operator, and therefore after the execution of the assignment operator in this case

Q(X, RIS) = Q(F(X0, RIS0), RIS0).

Since the state of the information environment was chosen arbitrarily, the theorem is proved.

Property (9.1) above is an example of a property of the assignment operator.

9.3. Properties of the basic constructs of structured programming.

Let us now consider the properties of the basic constructs of structured programming: sequence, branching, and repetition.

The property of the sequence construct is expressed by the following theorem.

Theorem 9.3. Let P, Q, and R be predicates over the information environment, and let S1 and S2 be generalized operators with the properties

{P} S1 {Q} and {Q} S2 {R}.

Then the compound operator

S1; S2

has the property

{P} S1; S2 {R}.

Proof. Let the predicate P be true for some state of the information environment before the execution of the operator S1. Then, by the property of S1, the predicate Q will be true after its execution. According to the semantics of the compound operator, the execution of S1 is followed by the execution of S2, so Q is true before the execution of S2. Consequently, after the execution of S2, by its property, the predicate R will be true; and since S2 completes the execution of the compound operator (in accordance with its semantics), R will also be true after the execution of the compound operator, which was to be proved.

For example, if properties (9.2) and (9.3) hold, then by this theorem the corresponding property {n < ...} of the compound operator built from their operators also holds.

The property of the branching construct is expressed by the following theorem.

Theorem 9.4. Let P, Q, and R be predicates over the information environment, and let S1 and S2 be generalized operators with the properties

{P, Q} S1 {R} and {¬P, Q} S2 {R}.

Then the conditional operator

IF P THEN S1 ELSE S2 END IF

has the property

{Q} IF P THEN S1 ELSE S2 END IF {R}.

    Proof. Let the predicate Q be true for a certain state of the information environment before the execution of the conditional operator. If the predicate P is also true, then the execution of the conditional operator in accordance with its semantics is reduced to the execution of the operator S1. By virtue of the property of the operator S1, after its execution (and in this case, after the execution of the conditional operator), the predicate R will be true. If, before the execution of the conditional operator, the predicate P is false (and Q is still true), then the execution of the conditional operator in accordance with its semantics is reduced to the execution of operator S2. By virtue of the property of the operator S2, after its execution (and in this case - and after the execution of the conditional operator), the predicate R will be true. Thus, the theorem is completely proved.

Before moving on to the property of the repetition construct, let us note a theorem that will be useful in what follows.

Theorem 9.5. Let P, Q, P1, and Q1 be predicates over the information environment for which the implications

P1 => P and Q => Q1

hold, and let the property {P} S {Q} hold for an operator S. Then the property {P1} S {Q1} holds.

    This theorem is also called the property weakening theorem.

    Proof. Let the predicate P1 be true for some state of the information environment before the execution of the operator S. Then the predicate P will also be true (by virtue of the implication P1 => P). Therefore, by virtue of the property of the operator S, after its execution, the predicate Q will be true, and hence the predicate Q1 (by virtue of the implication Q => Q1). This proves the theorem.

The property of the repetition construct is expressed by the following theorem.

Theorem 9.6. Let I, P, Q, and R be predicates over the information environment for which the implications

P => I and (I, ¬Q) => R

hold, and let S be a generalized operator with the property {I} S {I}.

Then the loop operator

WHILE Q DO S END WHILE

has the property

{P} WHILE Q DO S END WHILE {R}.

The predicate I is called an invariant of the loop operator.

Proof. To prove the theorem it suffices to prove the property

{I} WHILE Q DO S END WHILE {I, ¬Q}

(it implies the required property by Theorem 9.5, on the strength of the implications in the hypotheses of the present theorem). Let the predicate I be true for some state of the information environment before the execution of the loop operator. If the predicate Q is false for this state, then the loop operator is equivalent to the empty operator (in accordance with its semantics) and, by Theorem 9.1, the assertion (I, ¬Q) will be true after the execution of the loop operator. If the predicate Q is true before the execution of the loop operator, then the loop operator, in accordance with its semantics, can be represented as the compound operator

S; WHILE Q DO S END WHILE.

By the property of the operator S, the predicate I will be true after its execution, and we again arrive at the initial situation of the proof: the predicate I is true before the execution of the loop operator, but now for a different (changed) state of the information environment (for which the predicate Q may be either true or false). If the execution of the loop operator terminates, then, applying mathematical induction, in a finite number of steps we reach a situation in which the assertion (I, ¬Q) is true before its execution; and in this case, as was shown above, this assertion will also be true after the execution of the loop operator. The theorem is proved.

For example, the loop operator from example (9.4) has the property

{n > 0, p = 1, m = 1}
WHILE m /= n DO
    m := m + 1; p := p * m
END WHILE
{p = n!}.

This follows from Theorem 9.6, since the invariant of this loop operator is the predicate p = m!, and the implications (n > 0, p = 1, m = 1) => p = m! and (p = m!, m = n) => p = n! are true.
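
A minimal Python sketch of what the invariant asserts, checking p = m! before the loop and after every execution of the body (illustration only):

    import math

    def factorial_with_invariant(n):
        assert n > 0                          # P: n > 0
        p, m = 1, 1
        assert p == math.factorial(m)         # the invariant I holds before the loop (P => I)
        while m != n:                         # Q: m /= n
            m = m + 1
            p = p * m
            assert p == math.factorial(m)     # {I} S {I}: the body preserves I
        assert p == math.factorial(n)         # (I, not Q) => R: p = n!
        return p

    print(factorial_with_invariant(6))        # 720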

9.4. Termination of program execution.

One of the properties of a program in which we may be interested, in order to avoid possible errors in the software system, is its termination, i.e., the absence of infinite looping for the initial data in question. In the structured programs considered here, only the repetition construct can be a source of looping; therefore, to prove the termination of a program it suffices to be able to prove the termination of each loop operator. The following theorem is useful for this.

Theorem 9.7. Let F be an integer-valued function that depends on the state of the information environment and satisfies the following conditions:

(1) if the predicate Q is true for some state of the information environment, then the value of F for this state is positive;

(2) the value of F decreases whenever the state of the information environment is changed by the execution of the operator S.

Then the execution of the loop operator

WHILE Q DO S END WHILE

terminates.

Proof. Let is be the state of the information environment before the execution of the loop operator, and let F(is) = k. If the predicate Q(is) is false, the execution of the loop operator terminates immediately. If Q(is) is true, then by the hypotheses of the theorem k > 0, and the operator S will be executed one or more times. After each execution of S the value of the function F decreases (by hypothesis), and since the predicate Q must be true before each execution of S (by the semantics of the loop operator), the value of F at that moment must be positive (by the conditions of the theorem). Therefore, since F is integer-valued, the operator S can be executed in this loop at most k times. The theorem is proved.

For example, for the loop operator considered above the conditions of Theorem 9.7 are satisfied by the function F(n, m) = n - m. Since m = 1 before the execution of the loop operator, the body of this loop will be executed (n - 1) times, i.e., this loop operator terminates.
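
The same termination argument can be sketched as a runtime check of the variant function n - m: it must be positive whenever the loop condition holds and must strictly decrease at every iteration (illustration only; all names invented):

    def factorial_with_variant(n):
        assert n > 0
        p, m = 1, 1
        while m != n:
            variant_before = n - m
            assert variant_before > 0          # condition (1): F is positive while Q is true
            m = m + 1
            p = p * m
            assert n - m < variant_before      # condition (2): F strictly decreases
        return p

    print(factorial_with_variant(4))           # 24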

9.5. An example of proving a property of a program.

On the basis of the verification rules proved above, one can prove properties of programs that consist of assignment operators and empty operators and use the three basic compositions of structured programming. To do this, analyzing the structure of the program and using the pre- and postconditions given for it, a suitable verification rule is applied at each step of the analysis. When the repetition composition is used, a suitable loop invariant must also be chosen.

    As an example, let us prove property (9.4). This proof will consist of the following steps.

(Step 1). n > 0 => (n > 0, p arbitrary, m arbitrary).

(Step 2). The property

{n > 0, p arbitrary, m arbitrary} p := 1 {n > 0, p = 1, m arbitrary}

holds, by Theorem 9.2.

(Step 3). The property

{n > 0, p = 1, m arbitrary} m := 1 {n > 0, p = 1, m = 1}

holds, by Theorem 9.2.

(Step 4). The property

{n > 0, p arbitrary, m arbitrary} p := 1; m := 1 {n > 0, p = 1, m = 1}

holds, by Theorem 9.3, in view of the results of Steps 2 and 3.

Let us now prove that the predicate p = m! is an invariant of the loop, i.e., that {p = m!} m := m + 1; p := p * m {p = m!} holds.

(Step 5). The property

{p = m!} m := m + 1 {p = (m - 1)!}

holds, by Theorem 9.2, if the precondition is written in the form p = ((m + 1) - 1)!.

(Step 6). The property

{p = (m - 1)!} p := p * m {p = m!}

holds, by Theorem 9.2, if the precondition is written in the form p * m = m!.

(Step 7). The invariant property of the loop

{p = m!} m := m + 1; p := p * m {p = m!}

holds, by Theorem 9.3, in view of the results of Steps 5 and 6.

(Step 8). The property

{n > 0, p = 1, m = 1}
WHILE m /= n DO
    m := m + 1; p := p * m
END WHILE
{p = n!}

holds, by Theorem 9.6, in view of the result of Step 7 and bearing in mind that (n > 0, p = 1, m = 1) => p = m! and (p = m!, m = n) => p = n!.

(Step 9). The property

{n > 0, p arbitrary, m arbitrary}
p := 1; m := 1;
WHILE m /= n DO
    m := m + 1; p := p * m
END WHILE
{p = n!}

holds, by Theorem 9.3, in view of the results of Steps 4 and 8.

(Step 10). Property (9.4) holds by Theorem 9.5, in view of the results of Steps 1 and 9.

Literature for Lecture 9.

S.A. Abramov. Elements of Programming. Moscow: Nauka, 1982, pp. 85-94.

M. Zelkowitz, A. Shaw, J. Gannon. Principles of Software Development. Moscow: Mir, 1982, pp. 98-105.

Lecture 10. Testing and debugging of software

Basic concepts. Test design strategy. Debugging commandments. Autonomous debugging and testing of a software module. Complex debugging and testing of a software system.

10.1. Basic concepts.

Debugging of a software system is the activity aimed at detecting and correcting errors in the software system by means of executing its programs. Testing of a software system is the process of executing its programs on a certain set of data for which the result of application is known in advance or the rules of behavior of the programs are known; this data set is called a test data set, or simply a test. Thus debugging can be represented as the repeated alternation of three processes: testing, as a result of which the presence of an error in the software system may be established; searching for the place of the error in the programs and documentation of the software system; and editing the programs and the documentation in order to eliminate the detected error. In other words:

    Debugging = Testing + Finding Errors + Editing.

In the foreign literature, debugging is often understood only as the process of finding and correcting errors whose presence has been established by testing (without the testing itself); sometimes testing and debugging are treated as synonyms. In our literature testing is usually included in the notion of debugging, and we will follow this established tradition; in any case, the joint consideration of these processes in this lecture makes the discrepancy insignificant. It should be noted, however, that testing is also used as part of the process of certifying a software system (see Lecture 14).

10.2. Debugging principles and types.

The success of debugging is largely determined by a rational organization of testing. During debugging, mainly those errors are found and eliminated whose presence in the software system has been established by testing. As already noted, testing cannot prove the correctness of a software system; at best it can demonstrate the presence of an error in it. In other words, there is no guarantee that testing a software system with a practically feasible set of tests can establish the presence of every error it contains. Two tasks therefore arise. The first is to prepare a set of tests and apply the software system to them so as to find as many errors as possible. However, the longer the testing process (and debugging as a whole) lasts, the higher the cost of the software system becomes; hence the second task: to determine the moment when debugging of the software system (or of its component) can be finished. A sign that debugging may be finished is the completeness with which the tests already passed through the software system (i.e., the tests to which it has been applied) cover the variety of situations arising during the execution of its programs, together with the relatively rare manifestation of errors during the last segment of the testing process; the latter is judged against the degree of reliability required by the quality specification of the software system.

To optimize the set of tests, i.e., to prepare a set that allows more errors to be detected for a given number of tests (or for a given time interval allotted to testing), it is necessary, first, to plan this set in advance and, second, to use a rational strategy of test planning (design). Test design can begin as soon as the stage of the external description of the software system is finished. Different approaches to the test design strategy can be arranged, conditionally, between the following two extremes (see Fig. 10.1). The left extreme approach is to design tests only on the basis of studying the specifications of the software system (the external description, the description of the architecture, and the module specifications); the structure of the modules is not taken into account at all, i.e., they are treated as black boxes. In fact this approach requires a complete enumeration of all sets of input data, because if only part of these sets is used as tests, some parts of the programs may never be executed on any test and hence the errors contained in them will not show up; but testing a software system with a complete set of input data sets is practically infeasible. The right extreme approach is to design tests on the basis of studying the program texts so as to exercise every path of execution of every program of the software system. Since programs may contain loops with a variable number of repetitions, the number of different execution paths may be extremely large, so that exhausting them by testing is also practically infeasible.

The optimal test design strategy lies within the interval between these extreme approaches, but closer to the left edge. It includes designing a significant part of the tests from the specifications, based on the following principles: for each function or capability used, at least one test; for each domain and each boundary of variation of any input variable, at least one test; for each special case or each exceptional situation indicated in the specifications, at least one test. But it also requires designing some tests from the program texts, based on the (minimal) principle that every command of every program of the software system must be executed on at least one test.
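
As a small illustration of these principles (the routine and its specification are invented, not taken from the lecture), a specification-based test set for a function clamp(x, low, high) that limits x to the range [low, high] might look like this in Python:

    def clamp(x, low, high):
        if x < low:
            return low
        if x > high:
            return high
        return x

    # one test for the ordinary capability: a value inside the range is unchanged
    assert clamp(5, 0, 10) == 5
    # one test for each boundary of the input domain
    assert clamp(0, 0, 10) == 0
    assert clamp(10, 0, 10) == 10
    # tests for the special cases just outside the boundaries
    assert clamp(-1, 0, 10) == 0
    assert clamp(11, 0, 10) == 10
    print("all specification-based tests passed")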

The optimal test design strategy can be made concrete on the basis of the following principle: for every program document (including the program texts) that is part of the software system, its own tests should be designed with the aim of revealing errors in it. In any case this principle must be observed, in accordance with the definition of a software system and with the content of the notion of programming technology as the technology of developing reliable software systems (see Lecture 1). In this connection Myers even defines different kinds of testing depending on the kind of program document on whose basis the tests are built. In our literature two main kinds of debugging (including testing) are distinguished: autonomous debugging and complex debugging of the software system. Autonomous debugging means successive testing of separate parts of the programs belonging to the software system, together with the search for and correction of the errors recorded during this testing; it actually includes the debugging of each module and the debugging of the coupling (interfaces) of the modules. Complex debugging means testing the software system as a whole, with the search for and correction of the errors recorded during this testing in all the documents (including the program texts) pertaining to the software system as a whole; such documents include the definition of the requirements for the software system, the quality specification, the functional specification, the description of the architecture, and the program texts of the software system.

10.3. Debugging commandments.

This section gives general recommendations on the organization of debugging. But first let us note a phenomenon that confirms the importance of preventing errors at the earlier stages of development: as the number of errors detected and corrected in a software system grows, so does the relative probability that undetected errors remain in it. This is explained by the fact that, as the number of detected errors grows, our estimate of the total number of errors made in the system, and hence, to some extent, of the number of errors not yet detected, is also refined. This phenomenon confirms the importance of early error detection and the need for careful checking of the decisions made at every stage of software development.

    Commandment 1. Consider testing a key task in software development, entrust it to the most qualified and gifted programmers; it is not advisable to test your own program.

    Commandment 2. A good test is one for which there is a high probability of detecting an error, and not one that demonstrates the correct operation of the program.

    Commandment 3. Prepare tests for both correct and incorrect data.

Commandment 4. Avoid tests that are not reproducible; document the runs of tests on the computer; study the results of each test in detail.

    Commandment 5. Connect each module to the program only once; never modify the program to make it easier to test.

Commandment 6. Run again all tests that check the operation of a program of the software system, or its interaction with other programs, whenever changes have been made to it (for example, as a result of correcting an error).

10.4. Autonomous debugging of a module.

In autonomous debugging, each module is in fact tested in some program environment, except when the program being debugged consists of a single module. This environment consists of other modules, some of which are already-debugged modules of the program being debugged, while others are modules that control the debugging (debug modules, see below). Thus, during autonomous debugging what is tested is always some program built specially for testing the module being debugged; it coincides only partially with the program being debugged, except when the last module of that program is being debugged. As debugging proceeds, an ever larger part of the environment of the next module to be debugged consists of already-debugged modules of this program, and when the last module is being debugged, its environment consists entirely of all the other (already debugged) modules of the program being debugged, without any debug modules, i.e., in this case the program being debugged itself is tested. This process of building up the program being debugged with debugged modules is called the integration of the program.

The debug modules included in the environment of the module being debugged depend on the order in which the modules of the program are debugged, on which module is being debugged and, possibly, on which test is to be run.

In bottom-up testing (see Lecture 7), this environment always contains only one debug module (except when the last module of the program being debugged is being debugged), namely the one that plays the role of the head module of the program under test; it is called the leading debug module, or driver. The driver prepares the information environment for testing the module being debugged (i.e., creates the state required for the test; in particular, it may input some test data), calls the module being debugged and, after it finishes, issues the necessary messages. When one module is being debugged, different drivers may be written for different tests.
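
A hedged Python sketch of a leading debug module (driver) for bottom-up testing; the module under debugging and all names are invented for illustration:

    # the module being debugged (assume it has already been written)
    def mean(values):
        return sum(values) / len(values)

    # leading debug module (driver): prepares the information environment,
    # calls the module being debugged, and reports the result
    def driver():
        test_data = [2.0, 4.0, 6.0]        # prepared test state
        expected = 4.0
        actual = mean(test_data)
        if abs(actual - expected) < 1e-9:
            print("test passed")
        else:
            print("test FAILED:", actual)

    driver()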

In top-down testing (see Lecture 7), the environment of the module being debugged contains, as debug modules, simulators of all the modules that the module being debugged may call, as well as simulators of those modules that the already-debugged modules of the program (included in this environment) may call but that have not themselves been debugged yet. Some of these simulators may change from test to test while a single module is being debugged.
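
And a sketch of a simulator (stub) for top-down testing: the head module is debugged against a stub that replaces a lower-level module which is not yet written or not yet debugged (all names are invented for illustration):

    # simulator (stub) of a not-yet-debugged lower-level module: instead of reading
    # a real file it simply returns fixed test records
    def read_records_stub(path):
        return ["r1", "r2", "r3"]

    # the head module being debugged; in the finished program it would call the
    # real record-reading module
    def count_records(path, read_records=read_records_stub):
        records = read_records(path)
        return len(records)

    print(count_records("records.dat"))   # 3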

    In fact, the environment of the module being debugged in many cases can contain both types of debug modules for the following reasons. Both bottom-up and top-down testing have their advantages and disadvantages.

    The merits of bottom-up testing include:

    the ease of preparing tests, and

    the ability to implement the test plan of a module in full.

    This is due to the fact that the test state of the information environment is prepared immediately before the call to the module being debugged (by the leading debug module). The disadvantages of bottom-up testing are the following:

    test data is prepared, as a rule, not in the form intended for the user (except when the last, head, module of the program being debugged is being debugged);

    a large amount of debug programming (when debugging a single module, one often has to compose many leading debug modules for different tests);

    the need for special testing of the module couplings (interfaces).

    The advantages of top-down testing include the following:

    most tests are prepared in a form intended for the user;

    in many cases a relatively small amount of debug programming (the module simulators are usually quite simple and each suits a large number of tests, often all of them);

    there is no need for special testing of the module couplings.

    The disadvantage of top-down testing is that the test state of the information environment before accessing the debugged module is prepared indirectly - it is the result of applying already debugged modules to test data or data issued by simulators. This, firstly, makes it difficult to prepare tests, requires high qualifications of the test engineer, and secondly, it makes it difficult or even impossible to implement the full test plan of the module being debugged. This disadvantage sometimes forces developers to apply bottom-up testing even in the case of top-down development. However, some modifications of top-down testing, or some combination of top-down and bottom-up testing are used more often.

    Proceeding from the fact that top-down testing is, in principle, preferable, let us dwell on techniques that allow these difficulties to be overcome to some extent. First of all, debugging of the program should be organized so that the modules performing data input are debugged as early as possible: then test data can be prepared in a form intended for the user, which greatly simplifies the preparation of subsequent tests. This input is by no means always performed in the head module, so it is necessary first of all to debug the chains of modules leading to the modules that perform this input (compare with the method of purposeful constructive implementation in Lecture 7). Until the input modules have been debugged, test data is supplied by simulators: it is either built into a simulator as its part or entered by the simulator.

    In top-down testing, some states of the information environment in which the module being debugged needs to be tested may never arise during execution of the program being debugged for any input data. In these cases one could avoid testing the module at all, since errors detected for such states would never show up during execution of the program for any input data. However, this is not recommended: when the program being debugged changes (for example, during maintenance of the software system), the states of the information environment not used for testing this module may become reachable, which would require additional testing of the module (whereas with a rational organization of debugging this could have been avoided if the module itself had not changed). To test the module being debugged in these situations, suitable simulators are sometimes used to create the required state of the information environment. More often, a modified version of top-down testing is used in which modules are first tested separately before being integrated (in this case a leading debug module appears in the environment of the module being debugged, along with simulators of the modules it can call). Another modification of top-down testing, however, appears more expedient: after top-down testing of the module for the reachable test states of the information environment has been completed, the module should be tested separately for the remaining required states of the information environment.

    A combination of bottom-up and top-down testing, known as the sandwich method, is also often used. The essence of this method is to carry out bottom-up and top-down testing simultaneously until the two testing processes meet on some module somewhere in the middle of the structure of the program being debugged. With a reasonable approach, this method makes it possible to combine the advantages of bottom-up and top-down testing and to largely neutralize their disadvantages. This effect is a manifestation of a more general principle: the greatest technological effect is achieved by combining top-down and bottom-up methods of developing PS programs. It is precisely to support this method that the architectural approach to software development is intended (see Lecture 7): a layer of skillfully developed and thoroughly tested modules greatly facilitates the implementation of a family of programs in the corresponding subject area and their subsequent modernization.

    It is very important in autonomous debugging to test the coupling of modules. The point is that the specification of every program module except the head one is used in the program in two situations: first, when the text (sometimes one says: the body) of this module is developed and, second, when calls to this module are written in other modules of the program. In either case, as a result of an error, the required conformance to the given module specification may be violated. Such errors must be detected and eliminated; this is the purpose of testing the interfaces between modules. In top-down testing, interface testing is performed in passing with every test run, which is considered one of the strongest advantages of top-down testing. In bottom-up testing, the call to the module being debugged is made not from modules of the program being debugged but from the leading debug module. In this regard, there is a danger that the latter may adapt to some "misconceptions" of the module being debugged. Therefore, when starting (in the course of integration of the program) to debug a new module, one has to test every call to a previously debugged module in order to detect any inconsistency of this call with the body of the corresponding module (and it may well be the previously debugged module that is to blame). Thus it becomes necessary to partially repeat, under new conditions, the testing of a previously debugged module, and the same difficulties arise as in top-down testing.

    It is advisable to carry out autonomous testing of a module in four sequential steps; a small illustration follows the steps.

    Step 1. On the basis of the specification of the module being debugged, prepare a test for each capability and each situation, for each boundary of the range of valid values of every input, for each range of data change, for each range of invalid values of every input and for each invalid condition.

    Step 2. Check the text of the module to make sure that each direction of every branch will be traversed by at least one test. Add the missing tests.

    Step 3. Make sure according to the module text that for each loop there is a test for which the loop body is not executed, a test for which the loop body is executed once, and a test for which the loop body is executed the maximum number of times. Add missing tests.

    Step 4. Check from the text of the module its sensitivity to particular special values of the input data; all such values should be included in the tests. Add the missing tests.
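
    A minimal sketch of such a test set (Python's standard unittest; the module under test, clamp_all(), is a hypothetical example): the tests cover an invalid condition, both directions of the branch, zero/one/many executions of the loop body and the boundaries of the valid range.

        import unittest

        def clamp_all(values, low, high):          # hypothetical module under test
            if low > high:
                raise ValueError("low must not exceed high")
            return [min(max(v, low), high) for v in values]

        class ClampAllTests(unittest.TestCase):
            def test_invalid_range(self):          # step 1: an invalid condition
                with self.assertRaises(ValueError):
                    clamp_all([1], 5, 0)

            def test_branch_directions(self):      # step 2: below, inside and above the range
                self.assertEqual(clamp_all([-1, 3, 9], 0, 5), [0, 3, 5])

            def test_loop_counts(self):            # step 3: loop body executed 0, 1 and many times
                self.assertEqual(clamp_all([], 0, 5), [])
                self.assertEqual(clamp_all([2], 0, 5), [2])
                self.assertEqual(clamp_all(list(range(10)), 0, 5),
                                 [0, 1, 2, 3, 4, 5, 5, 5, 5, 5])

            def test_boundary_values(self):        # step 4: special values on the boundaries
                self.assertEqual(clamp_all([0, 5], 0, 5), [0, 5])

        if __name__ == "__main__":
            unittest.main()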

  10.5. Complex debugging of the software system.

  As mentioned above, in complex debugging the software system is tested as a whole, and tests are prepared for each of the software documents. Testing of these documents is performed, as a rule, in the reverse order of their development (the only exception is testing of the application documentation, which is developed on the basis of the external description in parallel with the development of the program texts; this testing is best done after the testing of the external description). In complex debugging, testing is the application of the PS to concrete data that may in principle arise from the user (in particular, all tests are prepared in a form intended for the user), but possibly in a simulated rather than real environment. For example, some input and output devices unavailable during complex debugging may be replaced by their software simulators.

    Testing the PS architecture. The purpose of this testing is to find discrepancies between the description of the architecture and the set of PS programs. By the time testing of the PS architecture begins, autonomous debugging of every subsystem must already be complete. Errors in the implementation of the architecture are associated primarily with the interaction of these subsystems, in particular with the implementation of the architectural functions (if any). It would therefore be desirable to check all paths of interaction between the subsystems of the PS. Since there may be too many of them, it is desirable to test at least all chains of execution of the subsystems that contain no repeated entries into the same subsystem. If the architecture represents the PS as a small system of distinguished subsystems, the number of such chains is quite manageable.

    Testing the external functions. The purpose of this testing is to find discrepancies between the functional specification and the set of PS programs. Even though all these programs have already been debugged autonomously, such discrepancies may remain, for example, because of inconsistencies between the internal specifications of the programs and their modules (on the basis of which autonomous testing was carried out) and the external functional specification of the PS. As a rule, testing of the external functions is performed in the same way as testing of modules at the first step, i.e. as a black box.

    Testing the quality of the PS. The purpose of this testing is to find violations of the quality requirements formulated in the quality specification of the PS. This is the most difficult and least studied kind of testing. It is only clear that not every quality primitive of the PS can be assessed by testing (on assessing the quality of the PS, see the next lecture). The completeness of the PS is already checked when testing the external functions; at this stage, testing of this quality primitive can be continued if some probabilistic estimate of the degree of reliability of the PS is required, although the methodology of such testing still needs to be developed. Accuracy, stability, security, time efficiency, to some extent memory efficiency and device efficiency, extensibility and, in part, device independence can be checked by testing. Each of these kinds of testing has its own specifics and deserves separate consideration; we limit ourselves here to merely listing them. The ease of application of the PS (a quality criterion that includes several quality primitives, see Lecture 4) is assessed when testing the documentation for the application of the PS.

    Testing the documentation for the application of the PS. The purpose of this testing is to find inconsistencies between the application documentation and the set of PS programs, as well as inconveniences in applying the PS. This stage immediately precedes connecting the user to the completion of the development of the PS (testing the definition of requirements for the PS and attestation of the PS), so it is very important for the developers first to use the PS exactly as the user will. All tests at this stage are prepared solely on the basis of the documentation for the application of the PS. First of all, the capabilities of the PS should be tested as was done when testing the external functions, but only on the basis of the application documentation. All unclear places in the documentation should be tested, as well as all examples used in it. Further, the most difficult cases of applying the PS are tested in order to detect violations of the requirements concerning the ease of application of the PS.

    Testing the definition of requirements for the PS. The purpose of testing is to find out to what extent the PS does not meet the stated definition of requirements for it. The peculiarity of this type of testing is that it is carried out by the purchasing organization or the user organization of the software system as one of the ways to overcome the barrier between the developer and the user (see Lecture 3). Usually this testing is performed using control tasks - typical tasks for which the result of the solution is known. In cases where the developed software system should replace another version of the software system, which solves at least part of the tasks of the developed software system, testing is performed by solving common problems using both the old and new software systems with subsequent comparison of the results obtained. Sometimes, as a form of such testing, they use experimental operation of the PS - a limited application of a new PS with an analysis of the use of the results in practical activities. In essence, this type of testing has much in common with testing a PS during its attestation (see Lecture 14), but it is performed before attestation, and sometimes instead of attestation.

  Literature for Lecture 10.

  10.1. G. Myers. Reliability of Software. - M.: Mir, 1980. - pp. 171-262.

    10.2. D. Van Tassel. Style, Development, Efficiency, Debugging and Testing of Programs. - M.: Mir, 1985. - pp. 179-295.

    10.3. J. Hughes, J. Michtom. A Structural Approach to Programming. - M.: Mir, 1980. - pp. 254-268.

    10.4. J. Fox. Software and Its Development. - M.: Mir, 1985. - pp. 227-241.

    10.5. M. Zelkowitz, A. Shaw, J. Gannon. Principles of Software Development. - M.: Mir, 1982. - pp. 105-116.

    10.6. Yu.M. Bezborodov. Individual Debugging of Programs. - M.: Nauka, 1982. - pp. 9-79.

    10.7. V.V. Lipaev. Testing of Programs. - M.: Radio and Communication, 1986. - pp. 15-47.

    10.8. E.A. Zhogolev. Introduction to Programming Technology (lecture notes). - M.: DIALOG-MGU, 1994.

    10.9. E. Dijkstra. Notes on structured programming // O. Dahl, E. Dijkstra, C. Hoare. Structured Programming. - M.: Mir, 1975. - pp. 7-13.

  Lecture 11. Ensuring the functionality and reliability of the software

  11.1. Functionality and reliability as mandatory criteria of software quality.

  In the previous lectures we examined all stages of software development except its attestation. At the same time, we did not touch on the issues of ensuring the quality of the PS in accordance with its quality specification (see Lecture 4). True, by implementing the functional specification of the PS we thereby dealt with the main issues of ensuring the functionality criterion. And having declared the reliability of the PS its chief attribute (see Lecture 1), we chose error prevention as the main approach to ensuring this reliability (see Lecture 3) and discussed its implementation at the different stages of software development. Thus the thesis that functionality and reliability are mandatory quality criteria of the PS has already made itself felt.

    Nevertheless, the quality specification of the software system may contain additional characteristics of these criteria, the provision of which requires special discussion. This lecture is devoted to these questions. Assurance of other quality criteria will be discussed in the next lecture.

    Below, we discuss the provision of software quality primitives that express criteria for the functionality and reliability of the software.

  11.2. Ensuring the completeness of the software system.

  Completeness of the PS is a quality primitive common to the expression of both the functionality and the reliability of the PS, and for functionality it is the only primitive (see Lecture 4).

    The functionality of a software system is determined by its functional specification. The completeness of the PS as a primitive of its quality is a measure of how this specification is implemented in the given PS. Providing this primitive in its entirety means implementing each of the functions defined in the functional specification, with all the details and features specified there. All of the previously discussed technological processes show how this can be done.

    However, the quality specification of the PS may define several levels of implementation of the PS functionality: a certain simplified (initial) version that must be implemented first may be defined, along with several intermediate versions. In this case an additional technological task arises: organizing the build-up of the PS functionality. It is important to note here that developing a simplified version of the PS is not the same as developing a prototype of it. A prototype is developed in order to better understand the conditions of application of the future PS and to refine its external description; it is intended for selected users and may therefore differ greatly from the required PS not only in the functions performed but also in the features of the user interface. A simplified version of the required PS, by contrast, must be intended for practical use by any users for whom it is designed. Therefore the main principle of ensuring the functionality of such a PS is to develop it from the very beginning as if the PS were required in full, until the developers reach those parts or details of the PS whose implementation may be postponed according to its quality specification. Thus, both the external description and the description of the PS architecture should be developed in full. One may postpone only the implementation of those software subsystems, defined in the architecture of the PS being developed, whose functioning is not required in its initial version. The implementation of the software subsystems themselves is best carried out by the method of purposeful constructive implementation, leaving in the initial version of the PS suitable simulators of those program modules whose functioning is not required in this version. A simplified implementation of some program modules, omitting the implementation of certain details of the corresponding functions, is also acceptable; from a technological point of view, however, such modules are best regarded as a kind of simulators of these modules (albeit rather advanced ones).

    Due to the errors in the developed software system, the completeness achieved while ensuring its functionality (in accordance with the specification of its quality) may actually not be as expected. We can only say that this completeness has been achieved with a certain probability, determined by the volume and quality of the testing performed. In order to increase this probability, it is necessary to continue testing and debugging the software. However, the estimation of such a probability is a very specific task (taking into account the fact that the manifestation of the error in the PS is a function of the initial data), which is still awaiting corresponding theoretical studies.

  11.3. Ensuring the accuracy of the software system.

  The provision of this primitive is associated with operations on values of real types (more precisely, on values that are represented with some error). To provide the required accuracy when computing the value of some function means to obtain this value with an error that does not exceed the specified limits. The kinds of errors, the methods of estimating them and the methods of achieving the required accuracy (so-called approximate computations) are studied by computational mathematics. Here we only draw attention to the structure of the error (a small numerical illustration follows the list): the error of a computed value (the total error) depends

    on the error of the computation method used (in which we include the inaccuracy of the model used),

    on the error in the representation of the data used (the so-called inherent, irremovable error),

    on the rounding error (the inaccuracy of performing the operations used in the method).
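
    A small numerical illustration of this decomposition (Python; the forward-difference approximation of a derivative is chosen only as an example): as the step h decreases, the method (truncation) error falls, but the rounding error of floating-point arithmetic grows, so the total error first decreases and then increases again.

        import math

        def forward_diff(f, x, h):
            # approximate derivative: method error ~ h, rounding error ~ eps / h
            return (f(x + h) - f(x)) / h

        exact = math.cos(1.0)                      # exact derivative of sin at x = 1
        for h in (1e-1, 1e-3, 1e-5, 1e-8, 1e-12):
            total_error = abs(forward_diff(math.sin, 1.0, h) - exact)
            print(f"h = {h:.0e}   total error = {total_error:.3e}")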

  11.4. Ensuring the autonomy of the software system.

  This quality primitive is provided for at the stage of developing the quality specification by deciding whether to use some suitable basic software in the PS being developed or not to use any basic software at all. Both the reliability of the basic software and the resources required for its use must be taken into account. With increased requirements for the reliability of the PS being developed, the reliability of the basic software available to the developers may prove unsatisfactory; it then has to be abandoned, and the implementation of its functions, to the extent required, has to be included in the PS. Similar decisions have to be made under strict restrictions on the resources used (according to the efficiency criterion of the PS).

  11.5. Ensuring the stability of the software system.

  This quality primitive is provided by what is known as defensive programming. Generally speaking, defensive programming is used, in a broader sense, to increase the reliability of the PS when programming a module. As Myers puts it, "defensive programming is based on an important premise: the worst a module can do is accept incorrect input and then return an incorrect but plausible result." To avoid this, checks of the input and output data for their correctness in accordance with the specification of the module are included in its text; in particular, the satisfaction of the restrictions on the input and output data and of the relationships between them stated in the module specification must be checked. If a check fails, a corresponding exception is raised. For this purpose, fragments of a special kind - handlers of the corresponding exceptional situations - are included at the end of the module. Besides issuing the necessary diagnostic information, such a handler can take measures either to eliminate the error in the data (for example, to request their re-entry) or to soften the effect of the error (for example, a gentle stop of the devices controlled by the PS, so as to avoid damaging them in the event of an emergency termination of the program).
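
    A minimal sketch of defensive programming of a module (Python; the module monthly_payment() and its restrictions are hypothetical): the input and output data are checked against the module specification, and a violation raises an exception that is handled where the module is used.

        class SpecificationViolation(Exception):
            """Raised when data violate the restrictions of the module specification."""

        def monthly_payment(principal, annual_rate, months):
            # check the restrictions on the input data stated in the specification
            if principal <= 0 or months <= 0 or not (0.0 <= annual_rate < 1.0):
                raise SpecificationViolation("input data violate the module specification")
            r = annual_rate / 12
            payment = principal / months if r == 0 else principal * r / (1 - (1 + r) ** -months)
            # check the restriction on the output data
            if payment <= 0:
                raise SpecificationViolation("computed payment violates the specification")
            return payment

        try:
            monthly_payment(-1000, 0.10, 12)
        except SpecificationViolation as err:
            # the handler issues diagnostics and could, for example, request re-entry of the data
            print("diagnostic:", err)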

    The use of defensive programming of modules leads to a decrease in the efficiency of the PS both in time and in memory. Therefore, it is necessary to reasonably regulate the degree of application of defensive programming, depending on the requirements for the reliability and efficiency of the software system, formulated in the quality specification of the software system being developed. The input data of the module being developed can come either directly from the user or from other modules. The most common use case for defensive programming is its application for the first group of data, which means the implementation of the stability of the software. This should be done whenever there is a requirement to ensure the stability of the software in the PS quality specification. Using defensive programming for the second group of inputs means trying to detect an error in other modules during the execution of a module under development, and for the output of a developed module, an attempt to detect an error in this module itself during its execution. In essence, this means a partial implementation of the error self-detection approach to ensure the reliability of the software, which was discussed in lecture 3. This case of defensive programming is used extremely rarely - only when the requirements for the reliability of the software are extremely high.

  11.6. Ensuring the security of the software system.

  The following kinds of protection of the PS against distortion of information can be distinguished:

    protection against hardware failures;

    protection from the influence of a "foreign" program;

    protection against failures of "own" program;

    protection against operator (user) errors;

    protection against unauthorized access;

    protection from protection.

    Protection against hardware failures is not a particularly pressing problem today (given the achieved level of computer reliability), but it is still useful to know how it is solved. It is ensured by organizing so-called "double-triple computations". To this end, the whole data-processing process defined by the PS is divided in time into intervals by so-called checkpoints ("control points"); the length of an interval should not exceed half of the computer's mean time between failures. At each checkpoint, a copy of the state of the memory changed in this process is written to secondary storage together with a certain checksum (a number computed as a function of this state), provided the processing of data from the previous checkpoint to this one (one "computation" of the interval) is considered to have been performed correctly (without a hardware failure). To find this out, two such computations are made. After the first computation the checksum is calculated and stored, then the memory state of the previous checkpoint is restored and the second computation is made. After the second computation the checksum is calculated again and compared with the checksum of the first computation. If the two checksums coincide, the second computation is considered correct; otherwise the checksum of the second computation is also stored and a third computation is made (with a preliminary restoration of the memory state of the previous checkpoint). If the checksum of the third computation coincides with the checksum of one of the first two, the third computation is considered correct; otherwise an engineering check of the computer is required.
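
    A sketch of the "double-triple computation" between two checkpoints (Python; process_step() stands for the data processing of one interval, and the saved state is assumed to be picklable): the interval is computed twice from the restored checkpoint state; if the checksums disagree, a third run decides; if all three disagree, an engineering check is needed.

        import copy
        import hashlib
        import pickle

        def checksum(state):
            # a checksum of the memory state, computed as a function of that state
            return hashlib.sha256(pickle.dumps(state)).hexdigest()

        def run_interval(process_step, checkpoint_state):
            seen = []
            for attempt in range(3):
                state = copy.deepcopy(checkpoint_state)   # restore the checkpoint state
                process_step(state)                       # one "computation" of the interval
                s = checksum(state)
                if s in seen:
                    # two runs agree: this state would be written out as the next checkpoint
                    return state
                seen.append(s)
            raise RuntimeError("three runs disagree: an engineering check of the computer is needed")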

    Protection against the influence of a "foreign" program refers primarily to operating systems or to programs that partially perform their functions. There are two types of this protection:

    protection against failures of a "foreign" program,

    protection from the malicious influence of a "foreign" program.

    In the multiprogramming mode of computer operation, several programs may simultaneously reside in memory in the stage of execution, alternately receiving control as a result of arising interrupts (so-called quasi-parallel execution of programs). One of these programs (usually the operating system) handles the interrupts and controls the multiprogramming. In any of these programs failures (errors) may occur that can affect how other programs perform their functions. Therefore the controlling program (the operating system) must protect itself and the other programs from such influence. To this end, the computer hardware must implement the following capabilities:

    memory protection,

    two modes of computer functioning: privileged and working (user),

    two types of operations: privileged and ordinary,

    correct implementation of interrupts and initial start-up of the computer,

    a timer interrupt.

    Memory protection means the ability to set programmatically, for each program, the areas of memory inaccessible to it. In the privileged mode any operations (both ordinary and privileged) can be executed, while in the user (working) mode only ordinary ones can. An attempt to execute a privileged operation, or to access protected memory, in the user mode causes a corresponding interrupt. The privileged operations include, in particular, the operations for changing the memory protection and the mode of operation, as well as access to the external information environment. The initial start-up of the computer and any interrupt automatically switch on the privileged mode and switch off the memory protection. The controlling program (operating system) can then fully protect itself from the influence of other programs if all points to which control is transferred at initial start-up and at interrupts belong to it, if it does not allow any other program to run in the privileged mode (when transferring control to any other program it switches on only the user mode), and if it fully protects its own memory (containing, in particular, all of its control information, including the so-called interrupt vectors) from other programs. Then nothing will prevent it from performing whatever functions of protecting the other programs it implements (including access to the external information environment). To make this task easier, part of such a program is placed in read-only memory, i.e. made inseparable from the computer itself. The presence of a timer interrupt allows the controlling program to protect itself against looping in other programs (without such an interrupt it could simply lose the ability to control).

    Protection against failures of "own" program is ensured by the reliability of this program, which is the focus of all programming technology discussed in this course of lectures.

    Protection against user errors (in addition to input data errors, see ensuring the stability of the software system) is provided by issuing warning messages about attempts to change the state of the external information environment with the requirement to confirm these actions, as well as the ability to restore the state of individual components of the external information environment. The latter is based on archiving changes in the state of the external information environment.

    Protection against unauthorized access is ensured by the use of secret words (passwords). In this case, each user is allotted certain information and procedural resources (services), and using them requires presenting to the PS a password that the user has previously registered in the PS. The user thereby, as it were, "hangs a lock" on the resources allotted to him, the "key" to which only he possesses. In some cases, however, persistent attempts may be made to break such protection if the protected resources are of exceptional value to someone. For such cases additional measures have to be taken to protect against the breaking of this protection.

    Protection against the breaking of the protection involves the use of special programming techniques that make it harder to overcome the protection against unauthorized access. The use of ordinary passwords is insufficient where there is an extremely persistent desire (for example, of a criminal nature) to gain access to valuable information. First, the password information used by the PS for protection against unauthorized access can be obtained relatively easily by the "cracker" of the protection if he gains access to the PS itself. Second, with a computer one can run through a sufficiently large number of candidate passwords in order to find one that gives access to the information of interest. One can protect against such breaking as follows. The secret word (password), or simply a secret integer X, is known only to the owner of the protected information, and for checking access rights another number Y = F(X) is stored in the computer; Y is computed anew by the PS on every attempt to access this information upon presentation of the secret word. The function F may be well known to all users of the PS, but it has the property that recovering the word X from Y is practically impossible: with a sufficiently large length of the word X (for example, several hundred characters) this requires an astronomical amount of time. We shall call such a number Y the electronic (computer) signature of the owner of the secret word X (and hence of the protected information).
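
    A minimal sketch of such a check with a one-way function F (Python standard library; PBKDF2 is used here only as one possible example of F): only Y = F(X) is stored, and a presented word is accepted when F applied to it yields the stored Y.

        import hashlib
        import hmac
        import os

        SALT = os.urandom(16)

        def F(secret_word: str) -> bytes:
            # a one-way function: recovering the word from the result is practically impossible
            return hashlib.pbkdf2_hmac("sha256", secret_word.encode(), SALT, 200_000)

        stored_Y = F("owner's secret word X")              # registered in the PS

        def access_allowed(presented_word: str) -> bool:
            return hmac.compare_digest(F(presented_word), stored_Y)

        print(access_allowed("owner's secret word X"))     # True
        print(access_allowed("guessed password"))          # False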

    Another kind of such protection concerns protecting messages transmitted over computer networks against deliberate (malicious) distortion. Such a message can be intercepted at a relay point of the computer network and replaced by a different message purporting to come from the author of the intercepted one. This situation arises above all when banking operations are carried out over a computer network: by substituting such a message, which is an order of the owner of a bank account to perform some banking operation, money from that account can be transferred to the account of the "cracker" of the protection (a kind of computer bank robbery). Protection against such a breach can be implemented as follows. Along with the function F that defines the computer signature of the owner of the secret word X, which the addressee of the protected message knows (since its owner is a client of this addressee), the PS defines another function, Stamp, by which the sender of a message must compute the number S = Stamp(X, R) from the secret word X and the text of the transmitted message R. The function Stamp is likewise considered well known to all users of the PS, and it has the property that from S it is practically impossible either to recover the number X or to pick another message R with the same seal. The transmitted message (together with its protection) then has the form

    R, Y, S,

    where Y (the computer signature) allows the addressee to establish the authenticity of the client, and S, as it were, bonds the protected message R to the computer signature Y. In this connection we shall call the number S the electronic (computer) seal. The PS defines one more function, Notary, by which the recipient of the protected message checks the authenticity of the transmitted message:

    Notary(Y, R, S).

    This makes it possible to establish unambiguously that the message R belongs to the owner of the secret word X.
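
    An illustrative modern analogue of this scheme (it is an assumption of this sketch, not part of the lecture, that the third-party Python package cryptography is available): the secret X corresponds to a private key, the computer signature Y to the public key derived from it, the seal S to a digital signature of the message R, and Notary to signature verification.

        from cryptography.exceptions import InvalidSignature
        from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

        X = Ed25519PrivateKey.generate()      # the secret, known only to its owner
        Y = X.public_key()                    # the "computer signature", safe to disclose

        R = b"transfer 100 units from account A to account B"
        S = X.sign(R)                         # the "computer seal" bonding R to X

        def notary(Y, R, S):
            try:
                Y.verify(S, R)                # forging S without X is practically impossible
                return True
            except InvalidSignature:
                return False

        print(notary(Y, R, S))                                    # True: the message is authentic
        print(notary(Y, b"transfer 100 units to account C", S))   # False: the message was substituted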

    Protection from the protection itself is needed when a user has forgotten (or lost) his password. For such a case it should be possible for a special user responsible for the functioning of the protection system (the PS administrator) to temporarily remove the protection against unauthorized access for the owner of the forgotten password, so that he can set a new password.

  Literature for Lecture 11.

  11.1. I.S. Berezin, N.P. Zhidkov. Computation Methods, vols. 1 and 2. - M.: Fizmatgiz, 1959.

    11.2. N.S. Bakhvalov, N.P. Zhidkov, G.M. Kobelkov. Numerical Methods. - M.: Nauka, 1987.

    11.3. G. Myers. Reliability of Software. - M.: Mir, 1980. - pp. 127-154.

    11.4. A.N. Lebedev. Banking information security and modern cryptography // Information Security Issues, 2 (29), 1995.

  Lecture 12. Software quality assurance

  12.1. General characteristics of the process of ensuring the quality of the software system.

  As already noted in Lecture 4, the quality specification defines the main guidelines (goals) which, at all stages of software development, in one way or another influence the choice of a suitable option when various decisions are made. However, each quality primitive exerts such influence in its own way, so ensuring its presence in the PS may require its own approaches and methods of developing the PS or some of its parts. In addition, the contradictory nature of the quality criteria of the PS and of the quality primitives expressing them has also been noted: a good provision of one quality primitive of the PS can significantly complicate, or even make impossible, the provision of some other of these primitives. Therefore an essential part of the process of ensuring the quality of the PS consists of finding acceptable trade-offs. These trade-offs should partly be fixed already in the PS quality specification: the PS quality model should specify the required degree of presence of each quality primitive in the PS and determine the priorities for achieving these degrees.

    Quality assurance is carried out in every technological process: the decisions made in it affect, to one degree or another, the quality of the PS as a whole, in particular because a significant portion of the quality primitives is associated not so much with the properties of the programs included in the PS as with the properties of its documentation. Because of the noted contradictions between the quality primitives, it is very important to adhere to the chosen priorities in providing them. In any case it is useful to adhere to two general principles:

    first, it is necessary to ensure the required functionality and reliability of the PS, and then to bring the remaining quality criteria to an acceptable level of presence in the PS;

    second, there is no need, and it may even be harmful, to achieve a higher level of presence of any quality primitive in the PS than is defined in the PS quality specification.

    Ensuring the functionality and reliability of the PS was discussed in the previous lecture. Below, the provision of the other quality criteria of the PS is discussed.

    12.2. Ensuring ease of application of the software system.


    In the previous lecture, we have already discussed the provision of two of the five quality primitives (stability and security), which determine the ease of application of the PS.

    P-documentation and informativeness determine the composition and quality of user documentation (see the next lecture).

    Communicability is ensured by creating a suitable user interface and an appropriate implementation of exceptional situations. What is the problem here?

  12.3. Ensuring the efficiency of the software system.

  The efficiency of the PS is ensured by making appropriate decisions at various stages of its development, beginning with the development of its architecture. The choice of the structure and representation of the data has a particularly strong effect on the efficiency of the PS (especially with respect to memory). The choice of the algorithms used in particular program modules, as well as the specifics of their implementation (including the choice of programming language), can also significantly affect the efficiency of the PS. At the same time, one constantly has to resolve the contradiction between time efficiency and memory efficiency. It is therefore very important that the quality specification explicitly states the quantitative relationship between the indicators of these two quality primitives, or at least sets quantitative bounds for one of them. Moreover, different program modules affect the efficiency of the PS as a whole to different degrees: both in their contribution to the total expenditure of time and memory by the PS and in their influence on different quality primitives (some modules may strongly influence time efficiency while having practically no effect on memory efficiency, and others may significantly affect total memory consumption without noticeably affecting the running time of the PS). Furthermore, this influence (above all with respect to time efficiency) can far from always be assessed correctly in advance, before the implementation of the PS is completed. In this regard, the following recommendations are usually followed (a small sketch of the measurement mentioned in the third recommendation follows the list):

    first, develop a reliable PS, and only then achieve the required efficiency in accordance with the quality specification of this PS;

    to increase the efficiency of the PS, use first of all an optimizing compiler - this may already provide the required efficiency;

    if the achieved efficiency of the PS does not satisfy its quality specification, find the modules most critical for the required efficiency of the PS (in the case of time efficiency this requires obtaining the distribution of the running time of the PS over its modules by means of appropriate measurements during execution of the PS) and try to optimize these modules first, by reworking them manually;

    do not optimize a module if this is not required for achieving the required efficiency of the PS.
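
    A sketch of such a measurement with Python's standard profiler (the function typical_run() stands in for a typical execution of the PS): it shows how the running time is distributed over the functions, so the most critical modules can be found and reworked first.

        import cProfile
        import pstats

        def typical_run():
            # stands in for a typical execution of the program being tuned
            return sum(i * i for i in range(1_000_000))

        profiler = cProfile.Profile()
        profiler.enable()
        typical_run()
        profiler.disable()

        pstats.Stats(profiler).sort_stats("cumulative").print_stats(10)   # the ten most expensive calls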

    12.4. Ensuring maintainability.

    C-documentation, informativeness and comprehensibility determine the composition and quality of the maintenance documentation (see the next lecture). In addition, the following recommendations concerning the texts of programs (modules) can be made; a small sketch after the list illustrates some of them.

    use comments in the text of the module that clarify and explain the peculiarities of the decisions made; if possible, include comments (at least in a brief form) at the earliest stages of developing the text of the module;

    use meaningful (mnemonic), easily distinguishable names (the optimal name length is 4-12 letters, with digits at the end); do not use similar names or names coinciding with keywords;

    be careful with the use of constants (a unique constant should have only one occurrence in the text of a module: in its declaration or, at worst, in the initialization of a variable acting as that constant);

    do not be afraid to use optional parentheses (parentheses are cheaper than errors);

    place no more than one operator per line; to clarify the structure of the module, use additional spaces (indentation) at the beginning of each line;

    avoid tricks, i.e. programming techniques that create module fragments whose main effect is non-obvious or hidden (veiled), for example, side effects of functions.
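
    A small sketch illustrating several of these recommendations (Python; the sending channel is a hypothetical object): a single named constant with exactly one occurrence, mnemonic names, one statement per line, indentation that shows the structure, and no hidden side effects.

        MAX_RETRY_COUNT = 3          # the unique constant has exactly one occurrence

        def send_with_retries(message, channel):
            # try to send the message until the channel accepts it or the retry limit is reached
            for _attempt in range(MAX_RETRY_COUNT):
                if channel.send(message):
                    return True
            return False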

    Extensibility is provided by creating a suitable installer.

    Structuredness and modularity make it easier both to understand the program texts and to modify them.

    12.5. Ensuring mobility.

  Literature for Lecture 12.

  12.1. Ian Sommerville. Software Engineering. - Addison-Wesley Publishing Company, 1992.

    12.3. D. Van Tassel. Style, Development, Efficiency, Debugging and Testing of Programs. - M.: Mir, 1985. - pp. 8-44, 117-178.

    12.4. Software User Documentation // ANSI/IEEE Standard 1063-1987.

  Lecture 13. Documenting software

  13.1. Documentation created in the process of software development.

  When a software system is developed, a large amount of varied documentation is created. It is needed as a means of transferring information between the software developers, as a means of managing the development of the PS, and as a means of conveying to users the information necessary for applying and maintaining the PS. The creation of this documentation accounts for a large share of the cost of the PS.

    This documentation can be broken down into two groups:

    Software development management documents.

    Documents that are part of the PS.

    Software development management documents (process documentation) record the processes of software development and maintenance, providing connections within the development team and between the development team and managers - persons managing the development. These documents can be of the following types:

    Plans, estimates, schedules. These documents are created by managers to predict and manage development and maintenance processes.

    Resource usage reports during development. Created by managers.

    Standards. These documents prescribe to the developers which principles, rules and agreements they must follow in the process of developing the software. The standards may be international or national, or created specially for the organization developing the software.

    Working documents. These are the main technical documents providing communication between the developers. They record the ideas and problems arising in the course of development, describe the strategies and approaches used, and contain working (interim) versions of documents that are to become part of the PS.

    Notes and correspondence. These documents document various details of interaction between managers and developers.

    The documents included in the PS (product documentation) describe PS programs both from the point of view of their use by users and from the point of view of their developers and maintainers (in accordance with the purpose of the PS). It should be noted here that these documents will be used not only at the stage of operation of the PS (in its phases of application and maintenance), but also at the development stage to manage the development process (together with working documents) - in any case, they should be checked (tested) for compliance with the PS programs. These documents form two sets with different purposes:

    PS user documentation (P-documentation).

    PS maintenance documentation (C-documentation).

  13.2. User documentation of the software system.

  The user documentation of the PS explains to users how they should act in order to apply this PS. It is necessary if the PS involves any interaction with users. Such documentation includes the documents that guide the user when installing the PS (with the appropriate adjustment to the environment of its application), when applying the PS to solve his problems, and when managing the PS (for example, when this PS interacts with other systems). These documents partially touch on questions of PS maintenance, but do not touch on questions connected with modifying the programs.

    In this connection, two categories of PS users should be distinguished: ordinary users and PS administrators. An ordinary user (end user) applies the PS to solve his own problems (in his subject area); this may be an engineer designing a technical device or a cashier selling railway tickets with the help of the PS. He may not know many details of how the computer works or the principles of programming. The PS administrator (system administrator) manages the use of the PS by ordinary users and carries out the maintenance of the PS that is not connected with modification of the programs. For example, he may regulate the access rights of ordinary users to the PS, keep in touch with the suppliers of the PS, or perform certain actions to keep the PS operational if it is included as part of another system.

    The composition of the user documentation depends on the audiences of users at which the PS is aimed and on the mode of use of the documents. An audience here means a contingent of PS users who have a need for certain user documentation of the PS. The success of a user document essentially depends on a precise definition of the audience for which it is intended; the user documentation must contain the information needed by each audience. The mode of use of a document means the way in which the document is used. Usually, a user of a sufficiently large PS needs either documents for studying the PS (use as a manual) or documents for looking up particular information (use as a reference book).

    In accordance with the literature, the following composition of user documentation can be considered typical for sufficiently large PS:

    General functional description of the PS. Gives a brief description of the functionality of the PS. It is intended for users who need to decide how much they need this software.

    Installation guide for the PS. Intended for system administrators. It should prescribe in detail how to install the PS in a specific environment; it should describe the machine-readable medium on which the PS is supplied, the files representing the PS, and the requirements for the minimum hardware configuration.

    Instructions for the use of PS. Designed for ordinary users. Contains the necessary information on the application of the PS, organized in a form convenient for its study.

    Handbook on the use of PS. Designed for ordinary users. Contains the necessary information on the application of the PS, organized in a form convenient for the selective search for individual parts.

    Guide to managing the PS. Intended for system administrators. It should describe the messages generated when the PS interacts with other systems and how to react to these messages. In addition, if the PS uses system hardware, this document may explain how to maintain that hardware.

    As discussed earlier (see Lecture 4), the development of the user documentation begins as soon as the external description is created. The quality of this documentation can determine the success of the PS to a significant degree. It must be quite simple and convenient for the user (otherwise the PS, generally speaking, was not worth creating). Therefore, while drafts (sketches) of the user documents are created by the main developers of the PS, professional technical writers are often involved in creating their final versions. In addition, to ensure the quality of the user documentation, a number of standards have been developed (see, for example, those given in the literature for this lecture) which prescribe the procedure for developing this documentation, formulate requirements for each kind of user document and define its structure and content.

    13.3. Software maintenance documentation.

    The maintenance (system) documentation describes the PS from the point of view of its development. It is needed if the PS presupposes study of how it is arranged (designed) and modernization of its programs. As noted, maintenance is continued development. Therefore, if the PS needs to be modernized, a special team of maintenance developers is brought in for this work. This team will have to deal with the same documentation that governed the activities of the original (main) team of PS developers, with the only difference that, as a rule, this documentation is foreign to the maintenance team (it was created by another team). The maintenance team will have to study this documentation in order to understand the structure of the PS and the process of its development, and to make the necessary changes to the documentation, repeating in large measure the technological processes by which the original PS was created.

    The documentation for the support of the PS can be divided into two groups:

    (1) documentation that defines the structure of programs and data structures of software systems and the technology for their development;

    (2) documentation that helps changes to be made to the PS.

    The documentation of the first group contains the final documents of each technological stage of software development. It includes the following documents:

    External description of the PS (Requirements document).

    Description of the system architecture, including the external specification of each of its programs.

    For each PS program - a description of its modular structure, including the external specification of each module included in it.

    For each module - its specification and description of its structure (design description).

    Module texts in the selected programming language (program source code listings).

    Documents establishing validity, which describe how the validity of each PS program was established and how the validity information was linked to the requirements for the PS.

    The documents establishing the validity of the PS include, first of all, the testing documentation (the testing scheme and the description of the set of tests), but they may also include the results of other kinds of validation of the PS, for example, proofs of properties of the programs.

    The documentation of the second group contains:

    A guide to the maintenance of the PS, which describes the known problems with the PS, states which parts of the system depend on the hardware and software environment, and explains how the expected development of the PS has been taken into account in its structure (design).

    A common problem of PS maintenance is ensuring that all of its representations keep in step (remain consistent) when the PS changes. To help with this, the links and dependencies between the documents and their parts should be recorded in a configuration management database.

  Literature for Lecture 13.

  13.1. Ian Sommerville. Software Engineering. - Addison-Wesley Publishing Company, 1992.

    13.2. ANSI/IEEE Std 1063-1988. IEEE Standard for Software User Documentation.

    13.3. ANSI/IEEE Std 830-1984. IEEE Guide for Software Requirements Specification.

    13.4. ANSI/IEEE Std 1016-1987. IEEE Recommended Practice for Software Design Description.

    13.5. ANSI/IEEE Std 1008-1987. IEEE Standard for Software Unit Testing.

    13.6. ANSI/IEEE Std 1012-1986. IEEE Standard for Software Verification and Validation Plans.

    13.7. ANSI/IEEE Std 983-1986. IEEE Guide for Software Quality Assurance Planning.

    13.8. ANSI/IEEE Std 829-1983. IEEE Standard for Software Test Documentation.

  Lecture 14. Software certification

  Purpose of software certification. Testing and assessing the quality of the software. Types of tests and methods for assessing the quality of the software.

  14.1. Purpose of software certification.

  Certification of the PS is an authoritative confirmation of its quality. As a rule, a representative (certification) commission of experts, customer representatives and developer representatives is created for the certification of a PS. This commission conducts tests of the PS in order to obtain the information necessary for assessing its quality. By tests of the PS we mean the process of carrying out a set of measures that investigate the suitability of the PS for its successful operation (application and maintenance) in accordance with the customer's requirements. This set of measures includes checking the completeness and accuracy of the software documentation, studying and discussing its other properties, as well as the necessary testing of the programs included in the PS and, in particular, of the conformance of these programs to the available documentation.

    On the basis of the information obtained during the tests of the PS, it should first of all be established that the PS performs the declared functions, and it should also be established to what degree the PS possesses the declared quality primitives and criteria. Thus, assessing the quality of the PS is the main content of the certification process. The resulting assessment of the quality of the PS is recorded in the corresponding decision of the attestation commission.

  14.2. Types of software tests.

  The following types of PS tests are used for the purpose of PS certification:

    tests of PS components;

    system tests;

    acceptance tests;

    field trials;

    industrial tests.

    Tests of PS components are a check (testing) of the operability of individual subsystems of the PS. They are carried out only in exceptional cases, by a special decision of the attestation commission.

    System tests of the PS are a check (testing) of the PS as a whole. They may include the same kinds of testing as in complex debugging of the PS (see Lecture 10). They are carried out by decision of the attestation commission if there are doubts about the quality of the debugging performed by the developers of the PS.

    Acceptance tests are the main type of tests in PS certification; it is with these tests that the certification commission begins its work. These tests begin with a study of the documentation presented, including the documentation on the testing and debugging of the PS. If the documentation does not contain sufficiently complete results of PS testing, the certification commission may decide to conduct system tests of the PS or to terminate the certification process with a recommendation to the developer to carry out additional (more complete) testing of the PS. In addition, during these tests, tests of the developers may be selectively re-run, and the control tasks of the user (see Lecture 10) as well as additional tests prepared by the commission for assessing the quality of the certified PS may be run.

    Field tests of the PS are a demonstration of the PS, together with the technical system it controls, to a narrow circle of customers under real conditions, with careful observation of the behaviour of the PS. The customers should be given the opportunity to set their own test examples, in particular ones that drive the technical system into critical modes of operation or trigger emergency situations in it. These are additional tests, carried out by decision of the certification commission only for certain PS that control particular technical systems.

    Industrial testing of a PS is the process of transferring the PS to users for permanent operation. It is a period of pilot operation of the PS (see Lecture 10) by users, with the collection of information about the features of the PS's behavior and its operational characteristics. These are the final tests of the PS, carried out by decision of the certification commission if insufficiently complete or reliable information for assessing the quality of the certified PS was obtained during the previous tests.

  14.3. Methods for assessing the quality of software.

  Assessment of the quality of the PS for each criterion is reduced to an assessment of each of the quality primitives associated with that criterion, in accordance with their specification in the quality specification of the PS. Methods for assessing the quality primitives of software systems can be divided into four groups:

    direct measurement of indicators of a quality primitive;

    processing of programs and documentation of PS with special software tools (processors);

    testing of PS programs;

    expert assessment based on the study of programs and documentation of the PS.

    Direct measurement of the indicators of a quality primitive is made by counting the number of occurrences of characteristic units, objects, structures, etc., in a particular program document, as well as by measuring the operating time of various devices and the amount of computer memory occupied when executing test cases. For example, an indicator of memory efficiency might be the number of lines of the program in a programming language, and an indicator of time efficiency might be the response time to a request. Which indicators are used for which quality primitives can be defined in the quality specification of the PS. The method of direct measurement of the indicators of a quality primitive can be combined with the use of program testing.
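    As a purely illustrative sketch, a time-efficiency indicator could be measured as follows; the processRequest function and the workload it performs are assumptions made up for the example:

#include <chrono>
#include <iostream>

// Hypothetical module function whose response time is the indicator being measured.
long long processRequest(long long n) {
    long long sum = 0;
    for (long long i = 0; i < n; ++i) sum += i;   // stand-in workload
    return sum;
}

int main() {
    using namespace std::chrono;
    auto start = steady_clock::now();
    processRequest(10000000);                      // execute one test case
    auto elapsed = duration_cast<milliseconds>(steady_clock::now() - start);
    // The measured value is recorded as the indicator of the time-efficiency primitive.
    std::cout << "Response time: " << elapsed.count() << " ms" << std::endl;
    return 0;
}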

    To establish the presence of certain quality primitives in the PS, certain software tools can be used. Such tools process the texts of programs or of software documentation in order to control particular quality primitives or to obtain indicators of these quality primitives. For example, to assess the structuredness of PS programs, if they were programmed in a suitable structured dialect of the base programming language, it would be enough to pass them through a converter of structured programs, which carries out syntactic and some semantic control of this dialect and translates the texts of these programs into the input language of the base translator. However, in this way only a small number of quality primitives can currently be controlled, and even then only in rare cases. In some cases, instead of software tools that control the quality of the PS, it is more useful to use tools that transform the representation of programs or software documentation. Such, for example, is a program formatter that brings program texts to a readable form: processing the PS program texts with such a tool can automatically ensure the presence of the corresponding quality primitive in the PS.
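    A minimal sketch of such a processing tool, under the deliberately simplified assumption that structuredness is indicated merely by the absence of goto statements in the program text; the file name is taken from the command line:

#include <fstream>
#include <iostream>
#include <string>

// Scans a program text and counts goto statements as a crude structuredness indicator.
int main(int argc, char* argv[]) {
    if (argc < 2) { std::cerr << "usage: checker <source-file>" << std::endl; return 1; }
    std::ifstream in(argv[1]);
    std::string line;
    int gotoCount = 0, lineNo = 0;
    while (std::getline(in, line)) {
        ++lineNo;
        if (line.find("goto") != std::string::npos) {
            ++gotoCount;
            std::cout << "line " << lineNo << ": goto found" << std::endl;
        }
    }
    std::cout << "Total goto statements: " << gotoCount << std::endl;
    return gotoCount == 0 ? 0 : 2;   // a non-zero exit code signals that the primitive is violated
}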

    Testing is used to assess some primitives of the quality of the PS. These primitives include, first of all, the completeness of the PS, as well as its accuracy, stability, security and other quality primitives. In a number of cases, testing is used in combination with other methods to assess individual quality primitives. Thus, to assess the quality of the documentation for the application of the PS (P-documentation), testing is used in combination with an expert assessment of this documentation. If sufficiently complete testing was carried out during the complex debugging of the PS, the same tests can be used for the certification of the PS; in this case the certification commission can use the protocols of the tests carried out during complex debugging. However, even in this case it is necessary to perform some new tests or at least to re-run some of the old ones. If the testing performed during complex debugging is deemed not complete enough, more complete testing should be carried out. In this case a decision may be made to conduct component tests or system tests of the PS, or to return the PS to the developers for revision. It is very important that, in order to evaluate the PS by the criterion of ease of use, sufficiently complete testing (during debugging and certification of the PS) be carried out using tests prepared on the basis of the application documentation, and, by the criterion of maintainability, using tests prepared for each of the documents offered for maintaining the PS.

    At present, only the method of expert assessment can be used to evaluate most quality primitives of software systems. This method consists in the following: a group of experts is appointed; each of these experts, as a result of studying the submitted documentation, forms his own opinion on whether the PS possesses the required quality primitive, and the assessment of the required quality primitive of the PS is then established by a vote of the members of this group. This assessment can be made according to a two-point system ("possesses" - "does not possess"), or it can take into account the degree to which the PS possesses this quality primitive (for example, it can be made according to a five-point system). In doing so, the group of experts should be guided by the specification of this primitive and by the indication of the method of its assessment formulated in the quality specification of the certified PS.
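    A sketch of how such a group assessment might be aggregated, assuming a five-point scale and the simplest rules (an average for the five-point view, a majority vote for the two-point view); the actual rules would be fixed in the quality specification:

#include <iostream>
#include <numeric>
#include <vector>

int main() {
    // Hypothetical scores (1..5) given by each expert for one quality primitive.
    std::vector<int> scores = {4, 5, 3, 4, 4};

    double average = std::accumulate(scores.begin(), scores.end(), 0.0) / scores.size();
    int votesFor = 0;
    for (int s : scores)
        if (s >= 3) ++votesFor;   // two-point view: the expert votes "possesses" if the score is at least 3

    std::cout << "Five-point assessment (average): " << average << std::endl;
    std::cout << "Two-point assessment: "
              << (2 * votesFor > static_cast<int>(scores.size()) ? "possesses" : "does not possess")
              << std::endl;
    return 0;
}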

    Literature for lecture 14.

    14.2. V.V. Lipaev. Testing programs. - M.: Radio and Communication, 1986. - pp. 231-245.

    14.3. D. Van Tassel. Style, development, efficiency, debugging and testing of programs. - M.: Mir, 1985. - pp. 281-283.

    14.4. B. Shneiderman. Psychology of programming. - M.: Radio and Communication, 1984. - pp. 99-127.

  Lecture 15. Object approach to software development

  15.1. Objects and relations in programming. The essence of the object approach to software development.

  The world around us consists of objects and the relations between them. An object embodies a certain entity and has a certain state that can change over time as a result of the influence of other objects that are in some relation with it. An object can have an internal structure: it can consist of other objects that are, in turn, in certain relations with one another. On this basis one can build a hierarchical structure of the world from objects. However, in each specific consideration of the surrounding world some objects are treated as indivisible ("point-like"), and, depending on the goals of the consideration, such (indivisible) objects can be taken at different levels of the hierarchy. A relation connects several objects: we can assume that the union of these objects has some property. If a relation connects n objects, it is called n-ary. In each place of a union of objects connected by a specific relation there may be different objects, but only quite definite ones (in this case one speaks of objects of a certain class). A unary relation is called a property of an object (of the corresponding class). The state of an object can be judged by the values of the properties of this object, or implicitly by the values of the properties of unions of objects connected with it by one relation or another.

    In the process of cognizing or changing the world around us, we always take into consideration one or another simplified model of the world (the model world), in which we include some of the objects and some of the relationships of the world around us and, as a rule, one level of the hierarchy. Each object with an internal structure can represent its own model world, which includes objects of this structure and the relations that connect them. Thus, the world around us can be considered (in some approximation) as a hierarchical structure of model worlds.

    Currently, in the process of cognizing or changing the world around us, computer technology is widely used to process various kinds of information. In this connection a computer (informational) representation of objects and relations is used. Each object can be represented informationally by some data structure that reflects its state. The properties of this object can be specified directly as separate components of this structure, or by special functions over this data structure. N-ary relations for N > 1 can be represented either in an active form or in a passive form. In the active form, an N-ary relation is represented by some program fragment that implements either an N-ary function (determining the value of a property of the corresponding union of objects) or a procedure that changes the states of some of the connected objects according to the states of the representations of the objects connected by the represented relation. In the passive form, such a relation can be represented by some data structure (which may also include the representations of the objects connected by this relation), interpreted on the basis of accepted conventions by general procedures that do not depend on specific relations (for example, relational databases). In any case, the representation of relations defines a certain part of the data-processing activity.
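    A small sketch of these two forms of representation, assuming a model world with Person objects and a binary "supervises" relation; all names and the rule used are illustrative only:

#include <iostream>
#include <string>
#include <utility>
#include <vector>

// Informational representation of an object: a data structure reflecting its state.
struct Person {
    std::string name;   // a property set directly as a component of the structure
    int age;
};

// Active form of a binary relation: a function over the representations of the objects.
bool supervises(const Person& boss, const Person& worker) {
    return boss.age > worker.age;   // illustrative rule only
}

int main() {
    Person anna{"Anna", 52};
    Person boris{"Boris", 30};

    // Passive form of the same relation: a data structure interpreted by general procedures
    // (one table of a relational database, reduced here to a vector of name pairs).
    std::vector<std::pair<std::string, std::string>> supervisesTable = {{"Anna", "Boris"}};

    std::cout << std::boolalpha << supervises(anna, boris) << std::endl;
    std::cout << "rows in the passive representation: " << supervisesTable.size() << std::endl;
    return 0;
}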

    When exploring the model world, the user can receive (or want to receive) information from the computer in different ways. With one approach, he may be interested in obtaining information about individual properties of the objects of interest to him or about the results of some interaction between certain objects. To do this, he orders the development of a software system that performs the functions of interest to him, or of an information system capable of producing information about the relations of interest to him using an appropriate database. In the initial period of the development of computer technology (when the power of computers was still insufficient), such an approach to the use of computers was quite natural. It was this approach that gave rise to the functional (relational) approach to the development of software systems, discussed in detail in the previous lectures. The essence of this approach is the systematic use of the decomposition of functions (relations) to construct the structure of the PS and the texts of the programs included in it. In doing so, the objects themselves, to which the ordered and implemented functions were applied, were represented fragmentarily (only to the extent necessary to perform these functions) and in a form convenient for implementing these functions. Thus, an integral and adequate computer representation of the model world of interest to the user was not provided: mapping it onto the PS being used could be a rather laborious task for the user, and attempts to even slightly expand the volume and nature of the information about the model world obtained from such PSs could lead to their serious rework. This approach to software development is supported by most of the programming languages in use, ranging from assembly languages and procedural languages (FORTRAN, Pascal) to functional languages (LISP) and logic programming languages (Prolog).

    In another approach to studying the model world with the help of a computer, the user may be interested in observing how the states of objects change as a result of their interactions. This requires a fairly integral representation in the computer of the object of interest to the user, and the software components that implement the relations in which this object participates must be explicitly associated with it. To implement this approach, it was necessary to build software that simulates the processes of interaction of objects (of the model world). With traditional development tools this proved to be a rather laborious task. True, programming languages have appeared that are specifically oriented towards such modeling, but they only partially simplified the development of the required software systems. The object approach to the development of software systems meets this problem most fully. Its essence lies in the systematic use of the decomposition of objects in constructing the structure of the PS and the texts of the programs included in it. In this case, the functions (relations) performed by such a PS are expressed through the relations of objects of different levels, i.e., their decomposition depends significantly on the decomposition of the objects.

    Speaking about the object approach, one should also clearly understand what kind of objects are being discussed: the objects of the user's model world, their informational representations, or the program objects with the help of which the software system is built. In addition, one should distinguish between objects proper ("passive" objects) and subjects ("active" objects).
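    As a sketch of this last distinction only: a passive object changes its state solely when other code acts on it, while an active subject owns its own thread of control; both classes below are invented for the example:

#include <atomic>
#include <chrono>
#include <iostream>
#include <thread>

// Passive object: its state changes only when its operations are called from outside.
class Counter {
public:
    void increment() { ++value_; }
    int value() const { return value_; }
private:
    int value_ = 0;
};

// Active subject: owns a thread of control and acts on the passive object by itself.
class Ticker {
public:
    explicit Ticker(Counter& c) : counter_(c), worker_([this] { run(); }) {}
    ~Ticker() { stop_ = true; worker_.join(); }
private:
    void run() {
        while (!stop_) {
            counter_.increment();
            std::this_thread::sleep_for(std::chrono::milliseconds(10));
        }
    }
    Counter& counter_;
    std::atomic<bool> stop_{false};
    std::thread worker_;
};

int main() {
    Counter c;
    {
        Ticker t(c);   // the subject is active while it exists
        std::this_thread::sleep_for(std::chrono::milliseconds(100));
    }
    std::cout << "state of the passive object: " << c.value() << std::endl;
    return 0;
}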

  15.2. Objects and subjects in programming.

  15.3. Object and subject approaches to software development.

  Descartes noted that people usually have an object-oriented view of the world.

    It is believed that object-oriented design is based on the following principles:

    highlighting abstractions,

    Access limitation,

    modularity,

    hierarchy,

    typing,

    parallelism,

    stability.

    But all this can be applied with a functional approach.

    It is necessary to distinguish between the advantages and disadvantages of the general object approach and its particular case - the subject-oriented approach.

    Advantages of the general object approach:

    Natural mapping of the real world to the structure of the PS (natural human perception of the capabilities of the PS, there is no need to "invent" the PS structure, but to use natural analogies).

    The use of sufficiently meaningful structural units of the PS (an object as the integrity of non-redundant associations, information-strong modules).

    Reducing the complexity of developing software systems by using a new level of abstraction (the use of a hierarchy of "non-program" abstractions in the development of the PS: the classification of objects of the real world, the method of analogies in nature) and inheritance as a new mechanism.

  15.4. The object approach to the development of the external description and architecture of a software tool.

  Object-oriented design is a design method based on object decomposition; the object-oriented approach has its own system of notation and offers a rich set of logical and physical models for designing highly complex systems. ...

    Object-oriented analysis (OOA) embodies the object approach at the analysis stage. OOA is aimed at creating models that are closer to reality using the object-oriented approach; it is a methodology in which requirements are formed on the basis of the concepts of classes and objects that make up the vocabulary of the problem domain. ...

    Features of object-oriented programming.

    Objects, classes, object behavior, properties, events.

  Literature for lecture 15.

  15.1. K. Fuchi, N. Suzuki. Programming languages and VLSI circuitry. - M.: Mir, 1988. - pp. 85-98.

    15.2. Ian Sommerville. Software Engineering. - Addison-Wesley Publishing Company, 1992. - P. ?-?

    15.3. G. Booch. Object-oriented design with applications: trans. from English. - M.: Concord, 1992.

    15.4. V.Sh. Kaufman. Programming languages. Concepts and principles. - M.: Radio and Communication, 1993.

REPORT

For educational practice

PM.01. Development of software modules for computer systems

PM.04. Performance of work by profession 16199 "Operator of electronic computing and computing machines"

UGS: 09.00.00 Informatics and Computing

Specialty: 09.02.03 Programming in computer systems

Graduate qualification: software technician

Completed by student gr. PKS-16 KT TI

Afanasyev Vasily

Practice leaders:

L. N. Alekseeva

T.Ts.Kirillina

Grade _____________

Completion date__________

Yakutsk - 2017

Introduction

1. Brief description of the practice bases

1.1. Brief description of the material and technical equipment of the laboratories

1.2. Brief description of the laboratory software

1.3. Safety precautions

2. Description of the technologies of the work performed

2.1. PM.01. Theoretical foundations for the performance of work

2.1.1. PM.01. Development of software modules for computer systems

2.1.2. PM.04. Performance of work by profession 16199 "Operator of electronic computing and computing machines"

2.2. Description of the technologies of the work performed

2.2.1. PM.04.01. Performance of work by profession 16199 "Operator of electronic computing and computing machines"

Conclusion

List of used literature


INTRODUCTION

Educational practice is a type of practice intended for obtaining primary professional skills and for familiarization with production.

The objectives of the training practice for UGS 09.00.00 Informatics and Computer Engineering in this specialty are: mastering by students of all types of professional activity in the specialty, the formation of general and professional competencies, and the acquisition of the necessary skills and experience of practical work in the specialty.

The educational practice in the specialty is aimed at the formation of students' skills, the acquisition of initial practical experience and is implemented within the framework of the professional modules of the PPSSP SVE on the main types of professional activity for their subsequent mastering of general and professional competencies in the specialty 09.02.03 Programming in computer systems.



Objectives of educational practice

In the course of mastering the program of educational practice, the student must:

have practical experience:

· Development of an algorithm for the task and its implementation by means of computer-aided design;

· Development of software product code based on a ready-made specification at the module level;

· Use of tools at the stage of software product debugging;

· Testing the software module according to a specific scenario;

· Input of digital and analog information into a personal computer from various media, peripheral and multimedia equipment;

· Converting media files into various formats, exporting and importing files into various editor programs;



· Processing of audio, visual and multimedia content using specialized editor programs;

· Creating and playing video clips, presentations, slide shows, media files and other final products from the original audio, visual and multimedia components;

be able to:

according to PM.01. DEVELOPMENT OF SOFTWARE MODULES FOR COMPUTER SYSTEMS:

· To develop the code of the software module in modern programming languages;

· Create a program according to the developed algorithm as a separate module;

· Debug and test the program at the module level;

· Draw up documentation for software;

· Use tools to automate the execution of documentation;

by PM.04. PERFORMANCE OF WORK BY PROFESSION 16199 "OPERATOR OF ELECTRONIC COMPUTING AND COMPUTING MACHINES":

· Connect and configure the parameters of the functioning of a personal computer, peripheral and multimedia equipment;

· Configure the main components of the graphical interface of the operating system and specialized editor programs;

· Manage data files on local, removable storage devices, as well as on disks of a local computer network and on the Internet;

· Enter digital and analog information into a personal computer from various media, peripheral and multimedia equipment;

· Create and edit graphic objects using programs for processing raster and vector graphics;

· Convert files with digital information into various formats;

· Scan transparent and opaque originals;

· To shoot and transfer digital images from a photo and video camera to a personal computer;

· Process audio, visual content and media files using sound, graphics and video editors;

· Create videos, presentations, slide shows, media files and other final products from the original audio, visual and multimedia components;

· Reproduce audio, visual content and media files by means of a personal computer and multimedia equipment;

· Print, copy and duplicate documents on a printer and other peripheral output devices;

· Use a multimedia projector to demonstrate the contents of screen forms from a personal computer;

· Keep reporting and technical documentation;

know:

according to PM.01. DEVELOPMENT OF SOFTWARE MODULES FOR COMPUTER SYSTEMS:

· The main stages of software development;

· Basic principles of structured and object-oriented programming technology;

· Basic principles of debugging and testing software products;

· Methods and tools for the development of technical documentation.

by PM.04. PERFORMANCE OF WORK BY PROFESSION 16199 "OPERATOR OF ELECTRONIC COMPUTING AND COMPUTING MACHINES":

· Principles of digital representation of sound, graphic, video and multimedia information in a personal computer;

· Types and parameters of formats of audio, graphic and video and multimedia files and methods of converting them;

· Purpose, possibilities, rules of operation of multimedia equipment;

· Basic types of interfaces for connecting multimedia equipment;

· Basic techniques of digital information processing;

· Purpose, types and functionality of sound processing programs;

· Purpose, types and functionality of graphic image processing programs;

· Purpose, types and functionality of video and multimedia content processing programs.

And the following professional competencies:

according to PM.01. DEVELOPMENT OF SOFTWARE MODULES FOR COMPUTER SYSTEMS:

PC 1.1. Develop specifications for individual components.

PC 1.2. Develop software product code based on ready-made specifications at the module level.

PC 1.3. Debug software modules using specialized software.

PC 1.4. Perform testing of software modules.

PC 1.5. Optimize the program code of the module.

PC 1.6. Develop components of design and technical documentation using graphical specification languages.

by PM.04. PERFORMANCE OF WORK BY PROFESSION 16199 "OPERATOR OF ELECTRONIC COMPUTING AND COMPUTING MACHINES":

PC 6.1. Enter digital and analog information into a personal computer from various media.

PC 6.2. Convert digital files to various formats.

PC 6.3. Process audio and visual content using sound, graphics and video editors.

PC 6.4. Create videos, presentations, slideshows, media files and other final products from original audio, visual and multimedia components.

PC 6.5. Play audio, visual content and media files by means of a personal computer and multimedia equipment.

PC 7.1. Form media libraries for structured storage and cataloguing of digital information.

PC 7.2. To manage the placement of digital information on the disks of a personal computer, as well as disk storages of the local and global computer network.

PC 7.3. Replicate multimedia content on various removable media.

PC 7.4. Publish multimedia content on the Internet.

The structure and complexity of educational practice

No. / Sections (stages) of practice / Weeks / Total labor intensity (credits, hours) / Monitoring forms

1. Preparatory phase, including an orientation conference (safety briefing). Monitoring forms: participation in the conference; checking the practice diary.

2. PM.01. Development of software modules for computer systems. MDK.01.01 System programming; MDK.01.02 Applied programming. Types of work: development of software module code in C++; creation of an information base by means of 1C:Enterprise; construction of the simplest shapes and editing of objects using CAD.

3. PM.04. Performance of work by the working profession 16199 "Operator of electronic computing and computing machines". MDK.04.01 Technology for creating and processing multimedia information; MDK.04.02 Digital multimedia publishing technology. Types of work: methods of photo processing; technologies for creating and processing multimedia information; video creation. Monitoring forms: verification and analysis of reporting materials.

Educational, research and production technologies used in educational practice

Computer technology, collaboration technology, game technology, modular technology, research technology and other technologies are used.

SAFETY

Safety requirements before starting work

1) It is forbidden to enter the office in outerwear, hats, with bulky items and food

5) Before the start of classes, all personal mobile devices of students (phone, player, etc.) must be turned off

6) It is allowed to work only on the computer that is allocated for the lesson

7) Before starting work, the student must inspect the workplace and the computer for visible hardware damage

Safety requirements during work

1) Handle the equipment with care: do not knock on monitors, do not bump the mouse on the table, do not knock on the keyboard keys

2) In case of malfunctions: changes in the functioning of the equipment, its spontaneous shutdown, it is necessary to immediately stop work and inform the teacher about it

3) Don't try to fix hardware problems yourself

4) Perform at the computer only those actions indicated by the teacher

5) Maintain the correct distance from the screen and correct posture

6) Avoid operating at maximum brightness of the display screen

7) In case of emergency situations, keep calm and strictly follow the instructions of the teacher.

It is prohibited to:

Operate faulty equipment

Disconnect or connect cables connecting various computer devices while the mains voltage is switched on

Work with open covers of computer devices

Touching the display screen, the back of the display, connectors, connecting cables, live parts of the equipment

Touching circuit breakers, starters, alarms

During operation, touch pipes, batteries

Eliminate keyboard malfunction on your own

Press the keys with force or allow sharp blows

Use any object when pressing the keys

Move system unit, display or table on which they stand

Clutter up the aisles in the office with bags, briefcases, chairs

Bring bags and briefcases to the workplace at the computer

Take outerwear with you to class and clutter up the office with it

Move quickly around the office

· Putting any objects on the system unit, display, keyboard.

Work with dirty, damp hands, wet clothes

Work in low light

· Work at the display for longer than the prescribed time.


Design

The design of the system is carried out on the basis of the previous stage. This design methodology combines object decomposition, techniques for representing physical, logical, as well as dynamic and static models of the system.

During the design process, design decisions are made on the choice of the platform on which the system will function and of the implementation language or languages; requirements for the user interface are established, and the most suitable DBMS is determined. A functional software specification is developed: the architecture of the system is selected, the requirements for the hardware are stipulated, the set of organizational activities necessary for the implementation of the software is determined, as well as the list of documents regulating its use.

Implementation

This stage of software development is organized in accordance with the models of the evolutionary type of software life cycle. During development, experimentation and analysis are used, prototypes are built, both of the whole system and its parts. Prototypes provide an opportunity to delve deeper into the problem and make all the necessary design decisions at an early stage of design. Such decisions can affect different parts of the system: internal organization, user interface, access control, etc. As a result of the implementation phase, a working version of the product appears.

Product testing

Testing is closely related to the design and implementation phases of software development. Special mechanisms are built into the system, which make it possible to test the system for compliance with the requirements for it, check the design and the availability of the necessary documentation package.

The result of testing is the elimination of all the shortcomings of the system and a conclusion about its quality.

Implementation and support

System implementation usually involves the following steps:

· system installation,

· user training,

· operation.

2) Basic principles of structured and object-oriented programming technology;

Structured programming is a set of recommended technological techniques covering all stages of software development. Its main principles were formulated as follows:

Structured programming proper, which recommends certain structures of algorithms and a certain programming style (the clearer the text of the program, the lower the probability of an error) - the basic control structures are illustrated in the sketch after this list;

The principle of end-to-end structural control, which involves carrying out meaningful control of all stages of development (the earlier an error is discovered, the easier it is to fix it).
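As a hedged illustration of the first principle: the recommended structures reduce to sequence, selection and iteration, and the fragment below sums the positive elements of an array using only these three structures (the input data are arbitrary):

#include <iostream>
#include <vector>

int main() {
    std::vector<int> data = {3, -1, 4, -1, 5};        // sample input chosen for the example

    int sum = 0;
    for (std::size_t i = 0; i < data.size(); ++i) {   // iteration
        if (data[i] > 0) {                            // selection
            sum += data[i];                           // sequence of actions
        }
    }
    std::cout << "sum of positive elements: " << sum << std::endl;   // prints 12
    return 0;
}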

3) Basic principles of software debugging and testing.

Debugging principles

Principles of error localization: most errors can be detected without running the program at all, simply by careful examination of the text. If debugging reaches a dead end and the error cannot be found, it is best to set the program aside for a while: when the eye is "blurred", work efficiency stubbornly tends to zero. An extremely convenient auxiliary means are the debugging mechanisms of the development environment: tracing and intermediate control of values. A memory dump can even be used, but such drastic measures are rarely needed. Experiments in the style of "what will happen if we change the plus to a minus" should be avoided by all means: usually they give no result, only confuse the debugging process further, and may even introduce new errors.
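A minimal sketch of "intermediate control of values" using assertions, under the assumption that the average of a non-empty array is being computed; such checks localize an error close to the place where it arises:

#include <algorithm>
#include <cassert>
#include <iostream>
#include <vector>

// Average of a non-empty vector with intermediate value checks.
double average(const std::vector<double>& xs) {
    assert(!xs.empty() && "precondition: non-empty input");
    double sum = 0.0;
    for (double x : xs) sum += x;
    double result = sum / xs.size();
    // Intermediate control: the average must lie between the minimum and the maximum.
    assert(result >= *std::min_element(xs.begin(), xs.end()));
    assert(result <= *std::max_element(xs.begin(), xs.end()));
    return result;
}

int main() {
    std::cout << average({1.0, 2.0, 3.0}) << std::endl;   // prints 2
    return 0;
}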

The principles of error correction are even more like Murphy's laws: where one error has been found, there may be others. The probability that the error has been identified correctly is never one hundred percent. Our task is to find the error itself, not its symptom. To clarify this statement: if the program stubbornly gives a result of 0.1 instead of the reference zero, simple rounding will not solve the issue; if the result is negative instead of the reference positive, taking its absolute value is useless - instead of solving the problem we would get nonsense obtained by fitting.
By fixing one error, it is very easy to add a couple more to the program; such errors are the real scourge of debugging. Correcting errors often forces us to return to the programming stage. This is unpleasant, but sometimes inevitable.

Vector image is a collection of graphic primitives. Each primitive consists of elementary curve segments, the parameters of which (coordinates of nodal points, radius of curvature, etc.) are described by mathematical formulas.
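As an illustrative sketch only, one such primitive is a quadratic Bezier segment, whose points are given by the formula B(t) = (1-t)^2*P0 + 2(1-t)t*P1 + t^2*P2 for the node points P0, P1, P2:

#include <iostream>

struct Point { double x, y; };

// An elementary curve segment of a vector primitive, described by its node points.
struct QuadraticBezier {
    Point p0, p1, p2;
    Point at(double t) const {
        double u = 1.0 - t;
        return { u * u * p0.x + 2 * u * t * p1.x + t * t * p2.x,
                 u * u * p0.y + 2 * u * t * p1.y + t * t * p2.y };
    }
};

int main() {
    QuadraticBezier seg{{0, 0}, {1, 2}, {2, 0}};
    Point mid = seg.at(0.5);                            // point at the middle of the segment
    std::cout << mid.x << ", " << mid.y << std::endl;   // prints 1, 1
    return 0;
}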

II-1) Types and parameters of video formats and methods of converting them.

First of all, let us decide on video standards; they must be taken into account when creating a video film or video clip.

PAL is a video standard for analog color television used in Europe and Russia: frame size 720x576, 25 fps (25 frames per second). NTSC is a standard for analog color television developed in the USA, with a resolution of 720x480 and 29.97 fps.

MPEG is one of the main compression standards. The abbreviation MPEG (Moving Pictures Expert Group) is the name of the international committee that develops this compression standard. Its varieties are used in DVD, HDD and flash cameras. MPEG-3 is currently not used; do not confuse it with MP3 (MPEG Audio Layer 3), an audio compression technology! MPEG-4 is a format obtained using the well-known codecs DivX, XviD, H.264, etc.; it is often referred to simply as MP4. It reduces the video stream even more than MPEG-2, but the picture remains of decent quality, so this format is supported by most modern DVD players. Of particular note is the high quality of video compressed with the latest-generation codec H.264. 3gp is video for third-generation mobile phones: small size and low quality.

A graphics file format is a method of presenting and arranging graphic data on an external medium.

Vector formats

Vector format files contain descriptions of drawings in the form of a set of commands for building the simplest graphic objects (lines, circles, rectangles, arcs, etc.).

Raster formats

Raster files store (see the sketch after this list):

Image size - the number of pixels in the picture horizontally and vertically

Bit depth - the number of bits used to store the color of one pixel

Data describing the drawing (the color of each pixel in the drawing), as well as some additional information.
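A sketch of this layout, assuming an uncompressed raster with a whole number of bytes per pixel; the size of the pixel data then follows directly from width x height x (bit depth / 8):

#include <cstdint>
#include <iostream>
#include <vector>

// Minimal in-memory model of an uncompressed raster image.
struct RasterImage {
    std::uint32_t width;              // pixels horizontally
    std::uint32_t height;             // pixels vertically
    std::uint32_t bitDepth;           // bits used to store the color of one pixel (e.g. 24 for RGB)
    std::vector<std::uint8_t> pixels; // color of each pixel, stored row by row
};

int main() {
    RasterImage img{800, 600, 24, {}};
    std::size_t bytes = static_cast<std::size_t>(img.width) * img.height * (img.bitDepth / 8);
    img.pixels.resize(bytes);
    std::cout << "pixel data size: " << bytes << " bytes" << std::endl;   // 1440000 bytes
    return 0;
}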

TIFF (Tagged Image File Format) is a standard format in printing and publishing systems. Files in TIFF format provide the best print quality. Because of their large size, TIFF files are not used when creating websites or publishing on the Internet.

III-1) Purpose, possibilities, rules of operation of multimedia equipment.

A projector is an optical instrument designed to create a real image of a small flat object on a large screen. The advent of projection devices led to the emergence of cinematography, which is connected with the art of projection.

Purpose.

The principle of operation of a light modulation projector is that a stream of light falls sequentially on two light-absorbing rasters, between which there is an oil film on a mirror surface. If the oil film is not disturbed, the light is blocked by both rasters and the screen is completely black. The oil film is placed inside the cathode-ray tube, which forms a charge distribution on it in accordance with the incoming video signal. The charge distribution, in combination with the potential applied to the mirror, generates a perturbation of the film surface. Passing through this section of the film, the light flux passes by the second raster and hits the screen at the appropriate point.

Possibilities.

Modern multimedia projectors usually have a standard set of functions, among which are:

· The presence of an OSD menu and an IR remote control (sometimes such a remote can also be turned into a wired one),

Inversion of the image horizontally and vertically, which allows the use of projection screens and ceiling mount of the projector,

The ability to adjust the brightness, contrast, clarity of the image,

The ability to customize the color gamut,

The ability to work with 3D content,

The ability to work in interactive mode (interactive projector),

The ability to adjust to the parameters of input computer and video signals,

· The ability to remotely control the computer cursor (the so-called infrared screen mouse),

The ability to correct keystone distortion of the image,

The ability to choose the language of the menu,

· The presence of an economical mode of operation (a decrease in the luminous flux by 15-20%, providing an increase in the lamp life by 1.5-2 times).

Rules.

1. Do not open the casing of the device. Other than the projection lamp, this product contains no user-serviceable parts. For maintenance, refer to qualified specialists.

2. Heed all warnings and cautions in this manual and marked on the product.

3. The projection lamp is extremely bright. To avoid eye damage, do not look into the lens when the lamp is on.

4. Do not place the projector on an unstable surface, cart, or stand.

5. Avoid using the projector near water, in direct sunlight, or near heating appliances.

6. Do not place heavy objects such as books or bags on the projector.

HDMI

HDMI is a digital multimedia interface for uncompressed HDTV signals up to 1920x1080 (or 1080i), with built-in Digital Rights Management (DRM) copyright protection. Current technology uses 19-pin type A plugs.

Serial-ATA

SATA is a serial interface for connecting storage devices (today these are mainly hard drives) and is intended to replace the older parallel ATA interface. The first-generation Serial ATA standard is widely used today and offers a maximum data transfer rate of 150 MB/s. The maximum cable length is 1 meter. SATA uses a point-to-point connection, where one end of the SATA cable is connected to the PC's motherboard and the other to the hard drive. No additional devices are connected to this cable, unlike parallel ATA, where two drives can be "hung" on each cable. So "master" and "slave" drives are a thing of the past.

Purpose.

Audio editors are used for recording musical compositions, preparing phonograms for radio, television and Internet broadcasting, dubbing films and computer games, restoration of old phonograms (pre-digitized), acoustic analysis of speech. Audio editors are professionally used by sound engineers.

Possibilities.

The functions of audio editors may differ depending on their purpose. The simplest of them, often freely distributed, have limited audio editing capabilities and a minimal number of supported audio formats. Professional packages can include multitrack recording, support for professional sound cards, synchronization with video, an expanded set of codecs, a huge number of effects, both internal and plug-ins.

Varieties.

FL Studio (previously Fruity Loops) is a digital audio workstation (DAW) and sequencer for writing music. Music is created by recording and mixing audio or MIDI material. The finished composition can be saved in WAV, MP3 or OGG format.

MAGIX Music Maker is a program for creating and recording music at home, developed by the German company Magix Software. Part of the Music Maker interface is borrowed from Samplitude, which is a professional audio workstation, while Music Maker is mainly aimed at aspiring musicians. Since the release of the first version of Music Maker in 1994, more than one million licenses have been sold, making it one of the most successful music production software in Europe. License price from $ 60.

Application.

Use of graphic editors to generate patterns for embroidery, beads, braiding and knitting. With the help of graphic editors (for example, EmbroBox or BeadsWicker), you can create schemes for performing work in different techniques (for example, "weaving", "mosaic" or "brick stitch"), as well as openwork schemes. As a basis, you can take any photo or scanned image (in BMP, JPEG, GIF). The finished scheme can be printed, including using color conventions, or you can save it as a digital drawing in the format most convenient for the artist (see above). A particularly useful feature in such editors is the setting of the number of colors used in embroidery / weaving. The artist can make both a full-color scheme and a sketch for a work done in a limited color palette (for example, a sketch for cross-stitching a towel, done with black and red threads on a white field).

Possibilities.

The graphic editor allows you to quickly and efficiently edit a photo, create a montage, and even draw a picture "from a blank slate". As a tool for an artist it may seem less convenient than specially designed graphic editors, but this is only at first glance. The program has all the necessary drawing tools, ranging from a simple pen, with a changeable and easily customizable "brush", to a variety of color palettes that allow you to "mix" colors in any proportion. There are also vector graphics tools that can often make the work much faster and easier. And if you draw at a professional level, the program makes it easy to connect a graphics tablet and fully realize your ideas.

Varieties.

1) The Paint graphic editor is a simple single-window graphic editor that allows you to create and edit fairly complex drawings; the Paint editor has a standard window layout. 2) Adobe Photoshop is a multi-window graphics editor that allows you to create and edit complex drawings, as well as process graphic images (photos); it contains many filters for photo processing (changing brightness, contrast, etc.). 3) Microsoft Draw is included with MS Office; this program is used to create various drawings and diagrams and is usually called from MS Word. 4) Adobe Illustrator and CorelDRAW are used in publishing and allow you to create complex vector images.

Purpose.

Video production can be a very valuable skill in the practice of an informatics teacher. An author's video film is a means of purposeful visual teaching that increases schoolchildren's interest in the subject. On the other hand, learning how to create a video with various special effects can also be good for your own mental well-being.

Possibilities.

Capture

In addition to the ability to import finished video files, many editors allow you to capture video, that is, save a video stream to a file. As a rule, the soundtrack is recorded simultaneously with the video, but it can also be recorded later, during editing, in the form of audio comments or an additional soundtrack.

In order to save disk space, the video stream is compressed during capture, that is, it is encoded using compression algorithms. The choice of encoding parameters depends on the capabilities of the computer or editing station, on a reasonable ratio of file size to video quality, and on the further intended use of the file.
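A rough, purely illustrative estimate of this trade-off: the size of one minute of uncompressed PAL-like video versus the same minute compressed to an assumed target bitrate of 4 Mbit/s (the numbers are assumptions, not recommendations):

#include <iostream>

int main() {
    // Assumed capture parameters: 720x576 frame, 24 bits per pixel, 25 fps, 60 seconds.
    const double width = 720, height = 576, bitsPerPixel = 24, fps = 25, seconds = 60;

    double rawBits = width * height * bitsPerPixel * fps * seconds;   // uncompressed stream
    double targetBitrate = 4e6;                                       // 4 Mbit/s after compression
    double compressedBits = targetBitrate * seconds;

    std::cout << "uncompressed: " << rawBits / 8 / (1024 * 1024) << " MiB" << std::endl;        // about 1780 MiB
    std::cout << "compressed:   " << compressedBits / 8 / (1024 * 1024) << " MiB" << std::endl; // about 28.6 MiB
    return 0;
}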

Mounting

All video editors have the simplest editing capabilities, such as the ability to cut or glue together fragments of video and sound. But more advanced applications have many more capabilities, allowing you to change the characteristics of the video, create various transitions between clips, change the scale and format of the video, add and remove noise, perform color correction, add titles and graphics, manage the sound track and, finally, create stereoscopic (3D) video.

Varieties.

Sony Vegas is a professional software for multitrack recording, editing and editing of video and audio streams.

Vegas offers an unlimited number of video and audio tracks, advanced audio processing tools, support for multi-channel I/O in full duplex mode (26 physical outputs can be used for signal output, with an independent mixing bus on each), real-time resampling, automatic crossfade creation, MIDI Time Code and MIDI Clock, subgroup outputs, dithering (with noise shaping) and 192 kHz 24/32-bit audio. For real-time audio processing you can insert a four-band parametric equalizer and a compressor on each track, as well as use 32 sends to DirectX-format plug-ins.

Adobe Premiere Pro is professional non-linear video editing software from Adobe Systems. It is the successor to Adobe Premiere (which was last released as version 6.5). The first version of the program (also known as Adobe Premiere 7) was released on August 21, 2003 for Windows operating systems. Starting with the third version, the program became available for OS X. The first two versions were released as separate products; the third version was released as part of Adobe Creative Suite 3.


CONCLUSION

As a result of the implementation of the educational practice program, the following tasks were accomplished:

Consolidation, deepening and expansion of theoretical knowledge, abilities and skills acquired by students in the process of theoretical training;

Mastering professional and practical skills, production skills and advanced labor methods;

Mastering the norms of the profession in the motivational sphere: awareness of motives and spiritual values ​​in the chosen profession;

Mastering the basics of the profession in the operational sphere: familiarization and assimilation of the methodology for solving professional problems (problems);

Study of different aspects of professional activity: social, legal, hygienic, psychological, psychophysical, technical, technological, economic.

During the training practice, practical work was performed to form general competencies, including the ability to:

OK1. To understand the essence and social significance of your future profession, to show a steady interest in it.

OK 2. Organize your own activities, choose standard methods and ways of performing professional tasks, evaluate their effectiveness and quality.

OK 3. Make decisions in standard and non-standard situations and be responsible for them.

OK 4. Search and use the information necessary for the effective performance of professional tasks, professional and personal development.

OK 5. Use information and communication technologies in professional activities.

OK 6. Work in a team and team, communicate effectively with colleagues, management, consumers.

OK 7. Take responsibility for the work of team members (subordinates), the result of assignments.

OK 8. To independently determine the tasks of professional and personal development, engage in self-education, consciously plan professional development.

OK 9. To navigate in the conditions of frequent changes in technologies in professional activities.

and professional competencies corresponding to the main types of professional activity:

PC 1.1. Carry out the development of specifications for individual components.

MINISTRY OF EDUCATION AND SCIENCE

DONETSK PEOPLE'S REPUBLIC

STATE PROFESSIONAL

EDUCATIONAL INSTITUTION

"DONETSK INDUSTRIAL AND ECONOMIC COLLEGE"

WORKING PROGRAM

Educational practice UP.01

professional module PM.01 Development of software modules for computer systems

in the specialty 09.02.03 "Programming in computer systems"

Compiled by:

Volkov Volodymyr Aleksandrovich, teacher of computer disciplines of the qualification category "specialist of the highest category", State Public Educational Institution "Donetsk Industrial and Economic College"

The program was approved by: Vovk Pavel Andreevich, director of "Smart IT Service"

1. PASSPORT OF THE PRACTICE PROGRAM

2. RESULTS OF PRACTICE

3. STRUCTURE AND CONTENT OF THE PRACTICE

4. CONDITIONS FOR ORGANIZATION AND PRACTICE

5. CONTROL AND EVALUATION OF PRACTICE RESULTS

1 PASSPORT OF THE EDUCATIONAL PRACTICE PROGRAM UP. 01

1.1 Place of training practice UP.01

The practice program UP.01 of the professional module PM.01 "Development of software modules for computer systems" for the specialty 09.02.03 "Programming in computer systems" (enlarged group 09.00.00 "Informatics and computer technology") is aimed at mastering the main type of professional activity (VPA):

Development of software modules for computer systems and related professional competencies (PC):

Develop specifications for individual components.

Develop software product code based on ready-made specifications at the module level.

Debug software modules using specialized software.

Perform testing of software modules.

Optimize the program code of the module.

Develop components of design and technical documentation using graphical specification languages.

The program of educational practice UP.01 of the professional module PM.01 "Development of software modules for computer systems" can be used in additional vocational education and professional training of workers for specialties 09.02.03 Programming in computer systems in the presence of secondary (complete) general education. No work experience required.

1.2 Goals and objectives of the training practice UP.01

In order to master the specified type of professional activity and the corresponding professional competencies, the student in the course of educational practice UP.01 must:

have practical experience:

    development of an algorithm for the task and its implementation by means of computer-aided design;

    development of software product code based on a ready-made specification at the module level;

    use of tools at the stage of software product debugging;

    testing the software module according to a specific scenario;

be able to:

    to develop the code of the software module in modern programming languages;

    create a program according to the developed algorithm as a separate module;

    debug and test the program at the module level;

    draw up documentation for software;

    use tools to automate the execution of documentation;

know:

    the main stages of software development;

    basic principles of structured and object-oriented programming technology;

    basic principles of debugging and testing software products;

methods and tools for the development of technical documentation.

1.3 Number of weeks (hours) to master the training practice program UP.01

In total: 1.5 weeks, 54 hours.

2 RESULTS OF PRACTICE

The result of the training practice UP.01 of the professional module PM.01 "Development of software modules for computer systems" is the development of general competencies (GC):

Name of the result of practice

OK 1. Understand the essence and social significance of your future profession, show a steady interest in it.

OK 2. Organize your own activities, choose standard methods and ways of performing professional tasks, evaluate their effectiveness and quality.

OK 3. Make decisions in standard and non-standard situations and be responsible for them.

OK 4. Search and use the information necessary for the effective performance of professional tasks, professional and personal development.

OK 5. Use information and communication technologies in professional activities.

OK 6. Work in a team and in a team, communicate effectively with colleagues, management, consumers.

OK 7. Take responsibility for the work of team members (subordinates), for the result of tasks.

OK 8. Independently determine the tasks of professional and personal development, engage in self-education, consciously plan improvement of qualifications.

OK 9. To navigate in the conditions of frequent changes in technologies in professional activities.

professional competencies (PC):

Professional activity

Name of the results of practice

Mastering the main type of professional activity

    use of resources of local and global computer networks;

    management of data files on local, removable storage devices, as well as on disks of a local computer network and on the Internet;

    printing, duplication and copying of documents on a printer and other office equipment.

    monitoring in the form of a report for each practical work.

    qualification exam for the module.

    literacy and accuracy of work in application programs: text and graphic editors, databases, presentation editor;

    the speed of searching for information in the content of databases.

    accuracy and literacy of e-mail, server and client software settings;

    the speed of information retrieval using Internet technologies and services;

    accuracy and literacy of entering and transmitting information using Internet technologies and services.

    literacy in the use of methods and means of protecting information from unauthorized access;

    correctness and accuracy of backup and data recovery;

    literacy and accuracy of working with file systems, various file formats, file management programs;

    maintaining accounting and technical documentation.

3 STRUCTURE AND CONTENT OF THE PROGRAMEDUCATIONAL PRACTICE UP.01

3.1 Thematic plan

Competency codes

Professional module name

Amount of time, set aside for practice

(in weeks, hours)

Dates of the

PC 1.1 - PC 1.6

PM.01 "Development of software modules for computer systems"

1.5 weeks,

54 hours

3.2 Practice content

Activities

Types of jobs

Name of academic disciplines, interdisciplinary courses indicating topics, ensuring the performance of types of work

Number of hours (weeks)

"Mastering the main type of professional activity »

Topic 1. Introduction. Algorithms for solving problems. The structure of the linear algorithm. The structure of the cyclic algorithm. Algorithm of a subroutine (function).

Formed knowledge on the basics of creating special objects

Topic 2. The Scratch environment.

Knowledge formed on the basics of process automation tools; on applying animation effects to objects; the use of hyperlinks and buttons; setting up a demonstration; saving a presentation in different formats.

MDK.01.01 "System programming"

Topic 3. Creation of a training program (a lesson on a school subject).

Formed knowledge on the basics of data analysis using processor functions

MDK.01.02 "Applied programming"

Topic 4. Development of a game program.

Formed knowledge on the basics of calculating the final characteristics

MDK.01.01 "System programming"

Topic 5. LabVIEW graphical programming language.

Formed knowledge of the basics of creating a processor test.

MDK.01.02 "Applied programming"

Theme 6. Creating an application using LabVIEW.

Formed knowledge of the basics of user dialogue with the system

MDK.01.02 "Applied programming"

Theme 7 Reusing a program fragment.

Knowledge of the operators and functions of the system has been formed.

MDK.01.02 "Applied programming"

Theme 8 LabVIEW Workshop. Labor protection when working with a computer at the user's workplace.

Formed knowledge on the computation of elementary functions. Formed knowledge on labor protection.

MDK.01.02 "Applied Programming".

OP 18 "Labor Protection"

Topic 9. Conclusions. Drawing up a practice report.

Skills formed in the analysis of computer technology and in problem solving.

MDK.01.01 "System programming"

MDK.01.02 "Applied programming"

MDK.04.01 "Office software"

4 CONDITIONS OF ORGANIZATION AND CONDUCT

EDUCATIONAL PRACTICE UP. 01

4.1 Documentation requirements, necessary for practice:

The work program of the training practice UP.01 of the professional module PM.01 "Development of software modules for computer systems" is part of the training program for mid-level specialists of the State Professional Educational Institution "Donetsk Industrial and Economic College", developed in accordance with the state educational standard of secondary vocational education in the specialty 09.02.03 "Programming in Computer Systems" and based on the curriculum for the specialty, the work programs of the disciplines MDK.01.01 "System Programming" and MDK.01.02 "Applied Programming", and the methodological recommendations for the educational and methodological support of the practice of students mastering educational programs of secondary vocational education.

4.2 Requirements for educational and methodological support of practice:

a list of approved assignments by type of work, guidelines for students on the performance of work, recommendations for the implementation of reports on practice.

4.3 Requirements for material and technical support:

the organization of industrial practice requires the presence of classrooms and a laboratory.

Office equipment and workplaces:

    seats according to the number of students (table, computer, chair);

    teacher's workplace (table, computer, chair);

    cabinet for storing teaching aids and information carriers;

    tasks for an individual approach to training, the organization of independent work and exercises by students at the computer;

    reference and methodological literature;

    a set of system, application and training programs for PCs on optical and electronic media;

    journal of instructing students on labor protection;

    a set of teaching aids.

Technical training aids:

    classroom board;

    personal computer with licensed software;

    laser printer;

  • educational PCs;

    set of interactive equipment (projector, screen, speakers);

    fire extinguishing means (fire extinguisher).

Equipment of the office and of the development-tool workstations: personal computers (monitor, system unit, keyboard, mouse), a set of educational and methodological documentation, software in accordance with the content of the discipline (programming language environments).

All computers in the classroom are combined into a local area network, have access to network information storage and have access to the Internet.

Communication equipment:

    network adapters;

    network cables;

    WiFi wireless equipment.

Components for installation of networks, equipment for installation.

4.4 List of educational publications, Internet resources, additional literature

Main sources:

    Olifer V.G., Olifer N.A. Network operating systems: a textbook for universities. - 2nd ed. - St. Petersburg: Piter, 2009, 2008. - 668 p.

    E. Tanenbaum. Operating systems. Design and implementation. - SPb: Piter, 2006. - 568 p.

    Pupkov K.A., Chernikov A.S., Yakusheva N.M. Mastering the Unix operating system. - Moscow: Radio and Communication, 1994. - 112 p.

    L. Beck. Introduction to system programming. - M.: Mir, 1988.

    Grekul V.I., Denischenko G.N., Korovkina N.L. Design of information systems. - Moscow: Binom, 2008. - 304 p.

    Lipaev V.V. Software engineering. Methodological foundations: a textbook. State University - Higher School of Economics. - M.: TEIS, 2006. - 608 p.

    Lavrischeva E.M., Petrukhin V.A. Methods and means of software engineering: a textbook.

    Ian Sommerville. Software Engineering, 6th edition: trans. from English. - M.: Williams, 2002. - 624 p.

    Excel 2010: professional programming in VBA: trans. from English. - M.: I.D. Williams, 2012. - 944 p.

    Fowler M. Refactoring: improving the existing code: trans. from English. - SPb: Symbol-Plus, 2003. - 432 p.

Additional sources:

    Volkov V.A. METHODOLOGICAL INSTRUCTIONS for the implementation of practical work in the discipline "System programming", Donetsk: DONPEK, 2015.

    Volkov V.A. Methodical instructions for the implementation of the course project, Donetsk: DONPEK, 2015.

Internet resources:

    System programming [electronic resource]. Access mode: http://www.umk3.utmn.ru

    Software and Internet resources: http://www.intuit.ru

    Discipline literature: http://www.internet-technologies.ru/books/

    Electronic textbook "Introduction to Software Engineering": http://www.intuit.ru/studies/professional_skill_improvements/1419/info

    Electronic textbook "Programming Technology": http://bourabai.kz/alg/pro.htm

4.5 Requirements for practice leaders from an educational institution and organization

Requirements for practice leaders from an educational institution:

Engineering and teaching staff: degree-qualified teachers of interdisciplinary courses and general professional disciplines. Work experience in organizations of the relevant professional field is required.

Master of industrial training: must hold the 5th or 6th qualification category, with a compulsory internship in specialized organizations at least once every 3 years. Work experience in organizations of the relevant professional field is required.

5 CONTROL AND EVALUATION OF THE RESULTS OF EDUCATIONAL PRACTICE UP.01

Form of reporting on educational practice UP.01 - a report on practice, drawn up in accordance with the requirements of methodological recommendations.

Results (mastered professional competencies), main indicators of the result of preparation, and forms and methods of control:

PC 1.1. Carry out the development of specifications for individual components. Indicator: development of an algorithm for the task set and its implementation by means of computer-aided design.

PC 1.2. Develop software product code based on ready-made specifications at the module level. Indicators: knowledge of the basic principles of structured and object-oriented programming technology; development of the software module code in modern programming languages.

PC 1.3. Debug software modules using specialized software tools. Indicator: debugging and testing of the program at the module level.

PC 1.4. Perform testing of software modules. Indicator: creation of a program according to the developed algorithm as a separate module.

PC 1.5. Optimize the program code of the module. Indicator: development of software product code based on a ready-made specification at the module level.

PC 1.6. Develop components of design and technical documentation using graphical specification languages. Indicators: knowledge of the methods and tools for developing technical documentation; drawing up of documentation for software tools; use of tools to automate paperwork.

Form and method of control for all of the above competencies: expert observation and assessment of the student's activity in the process of mastering the educational program in practical classes and while performing work during educational and industrial practice.

The forms and methods of monitoring and assessing learning outcomes should make it possible to check not only the formation of professional competencies in students, but also the development of general competencies and the skills that support them.

Results (mastered general competences), main indicators for assessing the result, and forms and methods of control and evaluation:

OK 1. Understand the essence and social significance of your future profession, show a steady interest in it.
Indicators: demonstration of constant interest in the future profession; validity of the application of the mastered professional competencies.
Control: expert observation and assessment during practical training and while performing work during industrial practice.

OK 2. Organize your own activities, determine the methods and ways of performing professional tasks, evaluate their effectiveness and quality.
Indicators: substantiation of goal setting and of the selection and application of methods and ways of solving professional tasks; introspection and correction of the results of one's own work.
Control: assessment during practical training while performing the work; observation during the practice; introspection.

OK 3. Solve problems, assess risks and make decisions in non-standard situations.
Indicators: effectiveness of decision-making on standard and non-standard professional tasks within a set time; effectiveness of the plan for optimizing the quality of the work performed.
Control: interpretation of the results of observation of the student's activities while completing tasks.

OK 4. Search for, analyze and evaluate the information necessary for setting and solving professional problems and for professional and personal development.
Indicator: selection and analysis of the information necessary for the precise and rapid performance of professional tasks and for professional and personal development.
Control: expert assessment in the course of the work; self-control in the course of setting and solving problems.

OK 5. Use information and communication technologies to improve professional activity.
Indicator: ability to use information and communication technologies to solve professional problems.
Control: assessment of assignments.

OK 6. Work in a collective and a team, ensure its cohesion, communicate effectively with colleagues, management and consumers.
Indicator: ability to interact with the group, the teachers and the master of industrial training.

OK 7. Set goals, motivate the activities of subordinates, organize and control their work, taking responsibility for the result of the assignments.
Indicator: introspection and correction of the results of one's own work and of the work of the team.
Control: observation of the progress of work in the group during practical training.

OK 8. Independently determine the tasks of professional and personal development, engage in self-education, consciously plan professional development.
Indicators: organization of independent work on forming a creative and professional image; organization of work on self-education and on improving one's qualifications.
Control: observation and evaluation during industrial practice; reflexive analysis (algorithm of the student's actions); practice diary; analysis of the student's portfolio.

OK 9. Be ready to change technologies in professional activity.
Indicator: analysis of innovations in the technological processes of the relevant professional field.
Control: assessment of solutions to situational tasks; business and organizational learning games; observation and evaluation during practical training and industrial practice.

ESSAY

Test design work for PM.01 "Development of software modules for computer systems". State Budgetary Professional Educational Institution of the Republic of Crimea "Feodosia Polytechnic College", 2015. 20 pages, 7 illustrations, 1 appendix, 3 bibliographic sources.

The software tool "Actions on Matrices" has been designed and implemented, and a graphical interface has been developed for it in the Microsoft Visual Studio Ultimate 2013 environment in C#. The software product makes it possible to study the structure and syntax of new programming languages.

SOFTWARE TOOL, TERMS OF REFERENCE, FUNCTIONAL TESTING, EVALUATION TESTING, STRUCTURAL TESTING, DEVELOPMENT ENVIRONMENT, DEBUGGING, ALGORITHM, INTERFACE

INTRODUCTION

1 DEVELOPMENT OF AN ALGORITHM FOR THE STATED PROBLEM AND IMPLEMENTATION OF ITS MEANS OF AUTOMATED DESIGN

1.1 Analysis of the task

1.2 Choice of methods and development of basic algorithms for solving

2 DEVELOPMENT OF THE SOFTWARE PRODUCT CODE BASED ON A READY SPECIFICATION AT THE MODULE LEVEL

3. USE OF TOOLS AT THE STAGE OF DEBUGGING THE SOFTWARE MODULE

4 TESTING THE SOFTWARE MODULE FOR A SPECIFIC SCENARIO

5 REGISTRATION OF DOCUMENTATION ON THE SOFTWARE

LIST OF REFERENCES

APPENDIX A


INTRODUCTION

Each software product consists of modules. A module can be developed separately, which makes it possible to upgrade the software and improve its functionality.

The purpose of the work is:

  • Consolidation of the theoretical knowledge obtained in the disciplines "Applied Programming", "System Programming", "Theory of Algorithms", and "Fundamentals of Programming and Algorithmic Languages";
  • Collection, analysis and synthesis of materials for the preparation of a report on practice.

The tasks of the work are determined by an individual task:

  • analysis of the task at hand;
  • selection of methods and development of basic solution algorithms;
  • choice of technology and programming environment;
  • building an application framework and designing a user interface;
  • development of software product code based on a ready-made specification;
  • choosing a testing strategy and test development;
  • using the debugging tools provided by the user interface;
  • testing a software module according to a specific scenario;
  • preparation of documentation for a software tool.

The work is divided into five sections.

The first section describes the development of an algorithm for the task and its implementation by means of computer-aided design.

In the second section, the choice of technology and programming environment is justified, the designed user interface is described, and the development of the software product code is presented.

The third section describes how to use the tools during the debugging phase of a program module.

The fourth section describes the testing of the software module, covering functional, structural, and evaluation testing.

The fifth section is devoted to the design of the documentation for the software tool.

1 DEVELOPMENT OF AN ALGORITHM FOR THE STATED PROBLEM AND IMPLEMENTATION OF ITS MEANS OF AUTOMATED DESIGN

1.1 Analysis of the task

It is necessary to write a program that performs operations on matrices: multiplication, addition, subtraction, and transposition. The program must process matrices entered manually into the form. For the convenience of the user, the program should have an intuitive interface.

1.2 Choice of methods and development of basic algorithms for solving

The program works according to the following algorithm: the form contains fields into which the matrix elements are entered, and the elements are converted from the String type to Integer. The user then presses the button for the required operation, the corresponding matrix algorithm is executed, and the result is displayed in a DataGridView element.
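As a rough illustration of this String-to-Integer conversion step, a minimal sketch is given below; the helper class MatrixInput and the method ReadMatrix are illustrative assumptions and are not part of the delivered product.

using System;
using System.Windows.Forms;

static class MatrixInput
{
    // Hypothetical helper: reads an n x n DataGridView filled in by the user and
    // converts the text of each cell from String to Integer.
    public static int[,] ReadMatrix(DataGridView grid, int n)
    {
        int[,] values = new int[n, n];
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
            {
                string cell = Convert.ToString(grid.Rows[j].Cells[i].Value);
                int parsed;
                // TryParse falls back to 0 for empty or non-numeric cells instead of throwing
                values[i, j] = int.TryParse(cell, out parsed) ? parsed : 0;
            }
        return values;
    }
}

Using int.TryParse rather than Convert.ToInt32 also keeps such a program from crashing when a cell is left empty.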

To build the block diagrams, the Microsoft Office Visio 2013 program was used. With its help you can draw up various charts and diagrams, including block diagrams.

Figure 1.1 - Block diagram of reading data and writing it to an array

Figure 1.2 - Check for accessibility for input

Figure 1.3 - Block diagram of entering data into the TextBox and comparing it with the existing array

Figure 1.4 - Call of a method with parameters

2 DEVELOPMENT OF THE SOFTWARE PRODUCT CODE BASED ON A READY SPECIFICATION AT THE MODULE LEVEL

The matrix calculator is implemented in the C# programming language in the Microsoft Visual Studio Ultimate 2013 environment. C# was chosen because it is a modern and popular object-oriented programming language, and Microsoft Visual Studio Ultimate 2013 is a powerful environment that makes it possible to quickly create a program with a graphical windowed interface.

The window layout is shown in Figure 2.1.

Figure 2.1 - Window interface of the future application

The form contains three DataGridView elements, in which the matrices are placed, and four Button controls for performing the operations on the matrices.

3. USE OF TOOLS AT THE STAGE OF DEBUGGING THE SOFTWARE MODULE

When debugging the software product, the Debug menu is used (Fig. 3.1). The menu contains a number of commands, whose purpose is described below.

Figure 3.1 - Debug menu window

Windows - opens the Breakpoints window in the IDE, which gives access to all breakpoints of the current solution, and displays the Output window in the development environment.

The Output window is a running log of messages issued by the environment, the compiler, and the debugger, so this information is not limited to the debugging session; the Immediate (Interpret) window can also be opened here to execute commands interactively. The main commands of the Debug menu are the following (a short sketch of producing diagnostic output is given after this list):

  • Start Debugging - starts the application in debug mode;
  • Attach to Process - allows you to attach the debugger to an already running process (executable); for example, if the application was started without debugging, you can attach to the running process and begin debugging it;
  • Exceptions - opens the Exceptions dialog box, which lets you choose how the debugger stops for each kind of exception;
  • Step Into - starts the application in debug mode; for most projects this means the debugger stops on the first executable line of the application, so you can step through it from the very beginning;
  • Step Over - when you are not in a debugging session, the Step Over command simply starts the application, just as the Run button would;
  • Toggle Breakpoint - enables or disables a breakpoint on the current (active) line of the code editor; this option is grayed out if there is no active code window;
  • New Breakpoint - opens the New Breakpoint dialog box, in which you can specify the name of the function on which to create a breakpoint;
  • Delete All Breakpoints - removes all breakpoints from the current solution;
  • Disable All Breakpoints - deactivates (without deleting) all breakpoints of the current solution;
  • Options and Settings - among other things, controls whether execution stops when exceptions cross an application domain boundary or the boundary between managed and native code.
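As a minimal sketch (not taken from the project) of how these facilities show up at run time, the fragment below writes diagnostic messages that appear in the Output window when the application is started with Start Debugging, and stops execution programmatically as if a breakpoint had been set; the class and method names are assumptions.

using System.Diagnostics;

static class DebugDemo
{
    // Illustrative sketch: Debug.WriteLine messages go to the Output window
    // during a session started with Debug > Start Debugging.
    public static int SumDiagonal(int[,] m)
    {
        int sum = 0;
        for (int i = 0; i < m.GetLength(0); i++)
        {
            sum += m[i, i];
            Debug.WriteLine("after row {0}: sum = {1}", i, sum);
        }
        // Debugger.Break() behaves like a breakpoint when a debugger is attached.
        if (Debugger.IsAttached)
            Debugger.Break();
        return sum;
    }
}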

4 TESTING THE SOFTWARE MODULE FOR A SPECIFIC SCENARIO

Evaluation testing, also called overall system testing, has the purpose of checking the program for compliance with the basic requirements. This stage of testing is especially important for software products. It includes the following types:

  • facility testing - verification of the compliance of the software product and its documentation with the main provisions of the terms of reference;
  • testing at maximum volumes - checking the operation of the program on the largest possible amounts of data, for example, large texts, tables, or a large number of files;
  • testing at maximum loads - checking that the program can process a large amount of data arriving within a short time;
  • usability testing - analysis of the psychological factors that arise when working with the software; this testing determines whether the interface is user-friendly, whether the color or sound accompaniment is annoying, and so on;
  • security testing - verification of protection, for example, against unauthorized access to information;
  • performance testing - determining the throughput for a given configuration and load;
  • memory requirements testing - determining the real requirements for RAM and external memory;
  • hardware configuration testing - checking that the software works on different hardware configurations;
  • compatibility testing - checking version continuity: when the next version of the system changes data formats, it must provide special converters that make it possible to work with files created by the previous version of the system;
  • ease of installation testing - checking the ease of installation;
  • reliability testing - assessing reliability using mathematical models;
  • recovery testing - checking the recovery of the software, for example of a system that includes a database, after hardware or software failures;
  • serviceability testing - checking the service facilities included with the software;
  • documentation testing - a thorough check of the documentation; for example, if the documentation contains examples, all of them must be tried;
  • procedure testing - checking the manual procedures assumed in the system.

Naturally, the purpose of all these checks is to find discrepancies with the terms of reference. It is believed that only after all types of testing have been completed can a software product be presented to the user or put into operation. In practice, however, not all types of evaluation testing are usually performed, because doing so is very expensive and time-consuming. As a rule, for each kind of software only the types of testing that are most important for it are carried out: databases, for example, are always tested at maximum volumes, and real-time systems at maximum loads.
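As a minimal illustration of checking the module against a specific scenario, the sketch below, an assumption rather than part of the delivered product, fills two matrices, adds them with the MyMatrix class from Appendix A, and verifies one element of the expected result; it assumes the test class is placed in the same Matrix project and is invoked from the entry point during development.

using System;

namespace Matrix
{
    static class AdditionScenarioTest
    {
        // Fills two 3 x 3 matrices, adds them, and checks element (2, 2).
        public static void Run()
        {
            var m1 = new MyMatrix();
            var m2 = new MyMatrix();
            for (int i = 0; i < 3; i++)
                for (int j = 0; j < 3; j++)
                {
                    m1.Set(i, j, i + j);
                    m2.Set(i, j, 1);
                }
            MyMatrix sum = m1 + m2;
            // expected value: (2 + 2) + 1 = 5
            Console.WriteLine(sum.Visual(2, 2) == "5" ? "scenario passed" : "scenario FAILED");
        }
    }
}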

5 REGISTRATION OF DOCUMENTATION ON THE SOFTWARE

The created software product is intended for performing arithmetic operations on matrices.

To start working with the program, run the application.

In order to create matrices, you need to enter the dimensions of the matrix and click the "Build" button. Then enter the data into the matrix and select the desired action.
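One possible sketch of what the "Build" button might do is shown below; the helper name GridBuilder and the dimension parameter are assumptions made only to illustrate the step.

using System.Windows.Forms;

static class GridBuilder
{
    // Hypothetical sketch: prepares an empty n x n DataGridView for manual input.
    public static void Build(DataGridView grid, int n)
    {
        grid.ColumnCount = n;   // creates n unbound text columns
        grid.Rows.Clear();      // removes any previously entered rows
        grid.Rows.Add(n);       // adds n empty rows
    }
}

Calling GridBuilder.Build(dataGridView1, n) from the button's Click handler, where n is the dimension entered by the user, would leave an empty n x n grid ready for input.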

Figure 5.1 - Running application

The program has a user-friendly interface and makes it easy to perform operations on matrices of arbitrary dimensions.

CONCLUSIONS

In the course of the work, an individual task was completed:

  • the analysis of the subject area has been performed;
  • the solution algorithm was selected, developed, and justified;
  • the technology and the programming environment were selected;
  • the framework of the application was built and the user interface was designed;
  • the code of the software module has been developed;
  • the debugging tools used during testing are described;
  • the software module was tested according to a specific scenario;
  • added a menu item with a brief description of how to work with the program.

The set goals have been achieved.

LIST OF REFERENCES

1 CyberForum [electronic resource]: http://www.cyberforum.ru

2 Microsoft Developer Network [official Microsoft documentation for C#]: https://msdn.microsoft.com

3 C# beginner help blog: http://programming-edu.ru/

APPENDIX A

Program code

MyMatrix.cs

using System;
using System.Linq;
using System.Text;
using System.Windows.Forms;

namespace Matrix
{
    class MyMatrix
    {
        // 3 x 3 matrix storage
        int[,] a = new int[3, 3];

        // passing a value into the matrix
        public void Set(int i, int j, int znach)
        {
            a[i, j] = znach;
        }

        // addition
        public static MyMatrix operator +(MyMatrix matrix1, MyMatrix matrix2)
        {
            MyMatrix NewMatrix = new MyMatrix();
            for (int i = 0; i < 3; i++)
                for (int j = 0; j < 3; j++)
                    NewMatrix.a[i, j] = matrix1.a[i, j] + matrix2.a[i, j];
            return NewMatrix;
        }

        // output of a single element
        public string Visual(int i, int j)
        {
            return a[i, j].ToString();
        }

        // output of the whole matrix into a DataGridView
        public DataGridView FullVisual(DataGridView dt)
        {
            for (int i = 0; i < 3; i++)
                for (int j = 0; j < 3; j++)
                    dt.Rows[j].Cells[i].Value = a[i, j];
            return dt;
        }

        // subtraction
        public static MyMatrix operator -(MyMatrix matrix1, MyMatrix matrix2)
        {
            MyMatrix NewMatrix = new MyMatrix();
            for (int i = 0; i < 3; i++)
                for (int j = 0; j < 3; j++)
                    NewMatrix.a[i, j] = matrix1.a[i, j] - matrix2.a[i, j];
            return NewMatrix;
        }

        // transposition
        public MyMatrix Trans()
        {
            MyMatrix NewMatrix = new MyMatrix();
            for (int i = 0; i < 3; i++)
                for (int j = 0; j < 3; j++)
                    NewMatrix.a[i, j] = a[j, i];
            return NewMatrix;
        }

        // multiplication
        public static MyMatrix operator *(MyMatrix matrix1, MyMatrix matrix2)
        {
            MyMatrix NewMatrix = new MyMatrix();
            for (int i = 0; i < 3; i++)
                for (int k = 0; k < 3; k++)
                    for (int j = 0; j < 3; j++)
                        NewMatrix.a[i, k] += matrix1.a[i, j] * matrix2.a[j, k];
            return NewMatrix;
        }

        // filling the matrix from a DataGridView
        public void Zapoln(DataGridView grid)
        {
            for (int i = 0; i < 3; i++)
                for (int j = 0; j < 3; j++)
                    a[i, j] = Convert.ToInt32(grid.Rows[j].Cells[i].Value);
        }
    }
}

Form1.cs

using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Linq;
using System.Text;
using System.Windows.Forms;

namespace Matrix
{
    public partial class Form1 : Form
    {
        public Form1()
        {
            InitializeComponent();
        }

        private void Form1_Load(object sender, EventArgs e)
        {
            // three empty rows in each of the three grids
            for (int i = 0; i < 3; i++)
            {
                dataGridView1.Rows.Add();
                dataGridView2.Rows.Add();
                dataGridView3.Rows.Add();
            }
        }

        // addition
        private void button1_Click(object sender, EventArgs e)
        {
            MyMatrix matrix1 = new MyMatrix();
            MyMatrix matrix2 = new MyMatrix();
            MyMatrix matrix3;
            matrix1.Zapoln(dataGridView1);
            matrix2.Zapoln(dataGridView2);
            matrix3 = (matrix1 + matrix2);
            matrix3.FullVisual(dataGridView3);
        }

        // subtraction
        private void button2_Click(object sender, EventArgs e)
        {
            MyMatrix matrix1 = new MyMatrix();
            MyMatrix matrix2 = new MyMatrix();
            MyMatrix matrix3;
            matrix1.Zapoln(dataGridView1);
            matrix2.Zapoln(dataGridView2);
            matrix3 = (matrix1 - matrix2);
            matrix3.FullVisual(dataGridView3);
        }

        // transposition
        private void button3_Click(object sender, EventArgs e)
        {
            MyMatrix matrix1 = new MyMatrix();
            MyMatrix matrix3;
            matrix1.Zapoln(dataGridView1);
            matrix3 = matrix1.Trans();
            matrix3.FullVisual(dataGridView3);
        }

        // multiplication
        private void button4_Click(object sender, EventArgs e)
        {
            MyMatrix matrix1 = new MyMatrix();
            MyMatrix matrix2 = new MyMatrix();
            MyMatrix matrix3;
            matrix1.Zapoln(dataGridView1);
            matrix2.Zapoln(dataGridView2);
            matrix3 = (matrix1 * matrix2);
            matrix3.FullVisual(dataGridView3);
        }
    }
}
