1.0 Software Design, Coding & Testing
1.1 What is a good Software Design?
1.2 Cohesion and Coupling - Classification of Cohesiveness - Classification of Coupling.
1.3 Software Design Approaches - Function Oriented Design - Object Oriented Design - Function Oriented vs Object Oriented Design.
1.4 User Interface Design - Characteristics of a good User Interface - Basic Concepts - User Guidance and Online Help - Mode-Based vs Modeless Interface - Graphical User Interface (GUI) vs Text-Based User Interface - Types of User Interfaces - Command Language Based Interface - Menu-Based Interface - Direct Manipulation Interfaces - Component-Based GUI Development - Window System - Types of Widgets
1.5 Software Coding and Testing - Coding Standards and Guidelines - Code Review - Code Walk-Throughs - Code Inspection - Clean Room Testing - Software Documentation - Software Testing - What is Testing? - Verification vs Validation - Design of Test Cases - Testing in the Large vs Testing in the Small - Unit Testing - Driver and Stub Modules - Black-Box Testing - White-Box Testing
1.6 Debugging - Debugging Approaches - Debugging Guidelines - Program Analysis Tools - Static Analysis Tools - Dynamic Analysis Tools - Integration Testing - Phased vs Incremental Integration Testing - System Testing - Performance Testing - Error Seeding
The main concepts covered are:
1. INTRODUCTION TO SOFTWARE DESIGN
2. COHESION AND COUPLING
3. SOFTWARE DESIGN APPROACHES
What are Software Design Principles?
The characteristics of a good software design are listed below:
- Consistent naming: It should use consistent and meaningful names for the various design components.
- Correctness: A good design should correctly implement all the functionalities of the system.
- Understandability: A good design should be easily understandable.
- Efficiency: It should be efficient.
- Maintainability: It should be easily amenable to change.
- Modularity: The design should be modular. The term modularity means that it should use a cleanly decomposed set of modules.
- Clean decomposition: The modules in a software design should display high cohesion and low coupling.
- Neat arrangement: It should neatly arrange the modules in a hierarchy, e.g. a tree-like diagram: (i) layered solution, (ii) low fan-out, (iii) abstraction.
Some more explanation about software design principles:
Software design principles represent a set of guidelines that help us avoid a bad design. The design principles are associated with Robert Martin, who gathered them in "Agile Software Development: Principles, Patterns, and Practices". According to Robert Martin, there are three important characteristics of a bad design that should be avoided:
Rigidity (inflexibility) - It is hard to change because every change affects too many other parts of the system.
Fragility (weakness) - When you make a change, unexpected parts of the system break.
Immobility (stuck in place) - It is hard to reuse in another application because it cannot be disentangled from the current application.
Open Close Principle
- Software entities like classes, modules and functions should be open for extension but closed for modifications.
OCP is a generic principle. You can consider it when writing your classes to make sure that when you need to extend their behaviour you don't have to change the class but to extend it. The same principle can be applied to modules, packages and libraries. If you have a library containing a set of classes, there are many reasons for which you'll prefer to extend it without changing the code that was already written (backward compatibility, regression testing, and so on). This is why we have to make sure our modules follow the Open Close Principle. When referring to classes, the Open Close Principle can be ensured by the use of abstract classes, with concrete classes extending them to implement new behaviour instead of changing them. Particular cases of this are the Template pattern and the Strategy pattern.
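A minimal sketch of the idea (the Shape and AreaCalculator classes are hypothetical, invented purely for illustration): new behaviour is added by writing a new subclass, while the existing calculation code stays untouched.

```java
// Open Close Principle sketch: AreaCalculator is closed for modification;
// the hierarchy is open for extension - adding a new Shape needs no edits here.
abstract class Shape {
    abstract double area();
}

class Circle extends Shape {
    private final double radius;
    Circle(double radius) { this.radius = radius; }
    double area() { return Math.PI * radius * radius; }
}

class Rectangle extends Shape {
    private final double width, height;
    Rectangle(double width, double height) { this.width = width; this.height = height; }
    double area() { return width * height; }
}

class AreaCalculator {
    double totalArea(Shape[] shapes) {
        double total = 0;
        for (Shape s : shapes) total += s.area();   // works for any future Shape
        return total;
    }
}

public class OcpDemo {
    public static void main(String[] args) {
        Shape[] shapes = { new Circle(1.0), new Rectangle(2.0, 3.0) };
        System.out.println(new AreaCalculator().totalArea(shapes));   // about 9.14
    }
}
```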
Dependency Inversion Principle
- High-level modules should not depend on low-level modules. Both should depend on abstractions.
- Abstractions should not depend on details. Details should depend on abstractions.
The Dependency Inversion Principle states that we should decouple high-level modules from low-level modules by introducing an abstraction layer between the high-level classes and the low-level classes. Furthermore, it inverts the dependency: instead of writing our abstractions based on details, we should write the details based on abstractions. Dependency Inversion and Inversion of Control are the better known terms referring to the way in which the dependencies are realized. In the classical way, when a software module (class, framework, and so on) needs some other module, it initializes and holds a direct reference to it. This makes the two modules tightly coupled. In order to decouple them, the first module provides a hook (a property, parameter, and so on) and an external module controlling the dependencies injects the reference to the second one. By applying Dependency Inversion, a module can easily be replaced by another just by changing the dependency that is injected. Factories and Abstract Factories can be used as dependency frameworks, but there are also specialized frameworks for this, known as Inversion of Control containers.
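As a hedged sketch of this inversion (MessageSender, EmailSender and NotificationService are made-up names, not from any particular framework), the high-level class receives its dependency through the constructor instead of creating it itself:

```java
// The high-level module depends only on the MessageSender abstraction.
interface MessageSender {
    void send(String message);
}

// A low-level detail; any other implementation could be injected instead.
class EmailSender implements MessageSender {
    public void send(String message) {
        System.out.println("Sending e-mail: " + message);
    }
}

// The constructor is the "hook" mentioned above: an external module controls
// which concrete sender is injected, so the two modules stay loosely coupled.
class NotificationService {
    private final MessageSender sender;
    NotificationService(MessageSender sender) { this.sender = sender; }
    void notifyUser(String message) { sender.send(message); }
}

public class DipDemo {
    public static void main(String[] args) {
        new NotificationService(new EmailSender()).notifyUser("build finished");
    }
}
```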
Interface Segregation Principle
- Clients should not be forced to depend upon interfaces that they don't use.
This principle teaches us to take care how we write our interfaces. When we write our interfaces we should take care to add only the methods that should be there. If we add methods that should not be there, the classes implementing the interface will have to implement those methods as well. For example, if we create an interface called Worker and add a method lunchBreak, all the workers will have to implement it. What if the worker is a robot? In conclusion, interfaces containing methods that are not specific to them are called polluted or fat interfaces. We should avoid them.
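Continuing the Worker example in a hedged way (the interface and class names below are illustrative), segregating the fat interface means the robot never has to implement a lunch break:

```java
// Segregated interfaces: each client implements only what it actually needs.
interface Workable {
    void work();
}

interface Feedable {
    void lunchBreak();
}

class HumanWorker implements Workable, Feedable {
    public void work() { System.out.println("human working"); }
    public void lunchBreak() { System.out.println("human eating"); }
}

class RobotWorker implements Workable {
    public void work() { System.out.println("robot working"); }
    // No lunchBreak() here: the robot depends only on the Workable interface.
}

public class IspDemo {
    public static void main(String[] args) {
        Workable[] staff = { new HumanWorker(), new RobotWorker() };
        for (Workable w : staff) w.work();
    }
}
```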
Single Responsibility Principle
- A class should have only one reason to change.
In this context a responsibility is considered to be one reason to change. This principle states that if we have two reasons to change a class, we have to split the functionality into two classes. Each class will handle only one responsibility, and if in the future we need to make a change, we will make it in the class which handles it. When we need to make a change in a class having more responsibilities, the change might affect the other functionality of that class. The Single Responsibility Principle was introduced by Tom DeMarco in his book Structured Analysis and Systems Specification, 1979. Robert Martin reinterpreted the concept and defined the responsibility as a reason to change.
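A minimal sketch (the report classes are hypothetical): splitting two reasons to change - the calculation rule and the output format - into two classes, each with a single responsibility:

```java
// Reason to change #1: the business rule for computing the total.
class ReportCalculator {
    double total(double[] amounts) {
        double sum = 0;
        for (double a : amounts) sum += a;
        return sum;
    }
}

// Reason to change #2: how the result is presented.
class ReportPrinter {
    void print(double total) {
        System.out.println("Report total: " + total);
    }
}

public class SrpDemo {
    public static void main(String[] args) {
        double total = new ReportCalculator().total(new double[] { 10.0, 20.5 });
        new ReportPrinter().print(total);   // prints "Report total: 30.5"
    }
}
```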
Liskov's Substitution Principle
- Derived types must be completely substitutable for their base types.
This principle is just an extension of the Open Close Principle in terms of behaviour, meaning that we must make sure that new derived classes extend the base classes without changing their behaviour. The new derived classes should be able to replace the base classes without any change in the code. Liskov's Substitution Principle was introduced by Barbara Liskov at the 1987 Conference on Object-Oriented Programming Systems, Languages and Applications, in "Data abstraction and hierarchy".
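As a small hedged sketch (Account, SavingsAccount and LspDemo are invented for illustration), a derived class that preserves the base class's behaviour can be passed to code written against the base type without breaking it:

```java
// Code written against Account must keep working when given any subclass.
class Account {
    protected double balance;
    void deposit(double amount) { balance += amount; }
    double getBalance() { return balance; }
}

// Extends behaviour (interest) without weakening the base contract:
// deposit still increases the balance exactly as callers of Account expect.
class SavingsAccount extends Account {
    void addInterest(double rate) { balance += balance * rate; }
}

public class LspDemo {
    // Written against the base type; no change is needed for derived types.
    static boolean depositIncreasesBalance(Account account) {
        double before = account.getBalance();
        account.deposit(100.0);
        return account.getBalance() > before;
    }

    public static void main(String[] args) {
        System.out.println(depositIncreasesBalance(new Account()));         // true
        System.out.println(depositIncreasesBalance(new SavingsAccount()));  // still true
    }
}
```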
4. User Interface Design
(i) Rules of user Interface design
(ii) Interface design models
2. What are the Interface Design Models?
Ans:
Interface Design Models:
(i) The Designer's System Model
(ii) The User's Mental Model:
A mental model is an explanation of someone's thought process about how something works in the real world. It is a representation of the surrounding world, the relationships between its various parts, and a person's intuitive perception of their own acts and their consequences. Our mental models help shape our behaviour and define our approach to solving problems (akin to a personal algorithm) and carrying out tasks.
(iii) User interface design process
(iv) User interface design process requirements/activities
(v) Various types of interface
(vi) Graphical user interface
(vii) Text based user interface
1. Explain about User interface design?
Ans:
User interface design or user interface engineering is the design of computers, appliances, software applications, and websites with a focus on the user's experience and interaction. The main goal of user interface design is to make the user's interaction as simple and efficient as possible, in terms of accomplishing user goals. Good user interface design lets the user finish the task at hand without drawing unnecessary attention to itself. Graphic design may be used to support its usability. The design process must balance technical functionality and visual elements (e.g., the user's mental model) to create a system that is not only operational but also usable and adaptable to changing user needs.
Interface design is involved in a wide range of projects, from computer systems to cars to commercial planes; all of these projects involve many of the same basic human interactions, yet also require some unique skills and knowledge. As a result, designers tend to specialize in certain types of projects and have skills centered around their expertise, whether that be software design, user research, or web design.
5. Software Coding and Testing
4.5.1 Coding standards
4.5.2 Coding guidelines
4.5.3 Code walk-through
4.5.4 Code inspection process
CODE REVIEW:
3. Explain about the Code Review?
ans:
Review means “Look at again; examine again”.
Code review for a module is performed after the module has been successfully compiled and the syntax errors have been eliminated.
Code reviews are a cost-effective way of reducing coding errors; they are performed in order to produce high-quality code.
Two types of code reviews are performed on the code of a module:
- Code Walk Through
- Code Inspection
4. What is Walk through?
Ans:
Major objective: FIND ERRORS. Uncover errors in function, logic, or implementation for any representation of the software.
Types of Walk Throughs: (i) Specification Walkthroughs
(ii) Design Walkthroughs
(iii) Code Walkthroughs
(iv) Test Walkthroughs
Code Walk Through:
5. Explain about Code Walk through?
ans:
- It is an informal code analysis technique.
- Before the walk-through meeting, the members of the development team are given the code a few days in advance to read and understand it.
- Each member then selects some test cases and simulates execution of the code by hand (tracing execution through each statement and function call). This is done after the module has been coded, successfully compiled, and the syntax errors eliminated.
- The main objective of the walk-through is to detect the algorithmic and logical errors in the code.
- The errors that are found are noted and discussed in the walk-through meeting in the presence of the coder of the module.
- Several guidelines have evolved for making this technique more effective and useful.
Guidelines:
1. The team performing the walk-through should be neither too big nor too small; it should consist of 3 to 7 members.
2. Discussion should focus on the detection of errors (not on how to fix the detected errors).
3. In order to foster cooperation, and to avoid the feeling among engineers that they are being evaluated in the code walk-through meeting, managers should not attend walk-through meetings.
Steps for Code Inspection:
1. Team Makeup
Code inspection teams consist of 2-5 individuals. The author of the code to be inspected is not part of the initial inspection team.
2. Preparing for Inspection
To get ready for the inspection, print separate hardcopies of the source code for each inspector. A single code inspection should not cover more than 250 source code lines, including comments but not including white space. The hardcopy should contain a count of the source code lines shown.
3. Inspection overview
4. Individual Inspections
5. Meeting
6. Rework
7. Follow up
8. Record keeping
- Review issues list
- Identify problem areas within the product
- Action Item checklist for corrections to be made
- Review summary report
- What was reviewed?
- Who reviewed it?
- What were the findings and conclusions?
Code Inspection:
The aim of code inspection is to detect common types of errors caused by oversight and improper programming.
In addition to detecting commonly made errors, adherence to coding standards is also checked during code inspection.
Some classical programming errors which can be checked during code inspection are:
· Use of uninitialized variables
· Jumps into loops
· Non-terminating loops
· Mismatches between actual and formal parameters in procedure calls
· Use of incorrect logical operators or incorrect precedence among operators
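For illustration only, here is a short hypothetical fragment containing two of the classical errors listed above, of the kind a code inspection checklist is meant to catch:

```java
// Hypothetical fragment with classical errors an inspection should flag.
class InspectionExamples {

    // Non-terminating loop: the loop variable i is never incremented.
    static int sumToTen() {
        int total = 0;
        int i = 0;
        while (i < 10) {
            total += i;
            // i++ is missing here, so this loop never terminates
        }
        return total;
    }

    // Incorrect logical operation: && was intended, so that the array access
    // is only attempted when the index is known to be in range.
    static boolean firstIsPositive(int[] values) {
        return values.length > 0 & values[0] > 0;   // & evaluates both sides and
                                                    // fails on an empty array; && is safe
    }
}
```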
Exception handling
An exception is an error or an unexpected event. When an exception has not been anticipated, control is transferred to the system exception handling mechanism.
Many programming languages do not have facilities to detect and handle exceptions.
Languages which support exception handling: Ada, C++, Java.
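A minimal Java sketch of the mechanism described above (the parseAge method is made up): when the anticipated condition occurs, control transfers to the matching handler instead of to the system's default mechanism.

```java
public class ExceptionDemo {
    // Anticipates bad input instead of letting control pass to the system's
    // default exception handling, which would abort the program.
    static int parseAge(String text) {
        try {
            return Integer.parseInt(text);
        } catch (NumberFormatException e) {
            System.err.println("Not a valid age: " + text);
            return -1;   // recover with a sentinel value
        }
    }

    public static void main(String[] args) {
        System.out.println(parseAge("42"));      // prints 42
        System.out.println(parseAge("forty"));   // handled: prints -1
    }
}
```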
TESTING
4.5.5 Unit Testing
4.5.6 Integration Testing
4.5.7 System Testing
4.5.8 Black Box Testing
4.5.9 White Box Testing
4.5.10 Code Coverage Metrics
1. Why do we do Testing?
ans:
- To discover what errors are present in the software
- To demonstrate that errors are not present
- To show that intended functions are present
- To gain confidence in the software's ability to do what it is required to do
Black-Box Testing
Two alternate and complementary approaches to testing are called black-box and white-box testing. Black-box testing is also called data-driven (or input/output-driven) testing. In using this approach, the tester views the program as a black box and is not concerned about the internal behavior and structure of the program. The tester is only interested in finding circumstances in which the program does not behave according to its specifications. Test data are derived solely from the specifications (i.e., without taking advantage of knowledge of the internal structure of the program).
If one wishes to find all errors in the program using this approach, the criterion is exhaustive input testing. Exhaustive input testing is the use of every possible input condition as a test case. Since this is usually impossible or impractical from an economic viewpoint, exhaustive input testing is rarely used. In order to maximize the yield on the testing investment (i.e., maximize the number of errors found by a finite number of test cases), the white-box approach is also used.
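As an illustration (the Grader class and its specification are invented for this sketch), black-box test cases are chosen from the specification alone - typically one case per equivalence class plus its boundaries - without looking at the implementation:

```java
// Hypothetical specification: grade(marks) returns "fail" for 0-39,
// "pass" for 40-100, and throws IllegalArgumentException otherwise.
// The test cases below are derived only from that specification
// (equivalence classes and their boundaries), not from the code.
public class GradeBlackBoxTest {
    public static void main(String[] args) {
        check(Grader.grade(0).equals("fail"),   "lower boundary of fail class");
        check(Grader.grade(39).equals("fail"),  "upper boundary of fail class");
        check(Grader.grade(40).equals("pass"),  "lower boundary of pass class");
        check(Grader.grade(100).equals("pass"), "upper boundary of pass class");
        boolean thrown = false;
        try { Grader.grade(101); } catch (IllegalArgumentException e) { thrown = true; }
        check(thrown, "invalid input class rejected");
    }

    static void check(boolean ok, String caseName) {
        System.out.println((ok ? "PASS: " : "FAIL: ") + caseName);
    }
}

// A sample implementation so the sketch runs; the tests above do not depend on it.
class Grader {
    static String grade(int marks) {
        if (marks < 0 || marks > 100) throw new IllegalArgumentException("marks out of range");
        return marks < 40 ? "fail" : "pass";
    }
}
```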
3. What is White Box testing?
Another testing approach, white-box or logic-driven structural testing, permits one to examine the internal structure of the program. In using this strategy, the tester derives test data from an examination of the program's logic and structure.
The analog to exhaustive input testing of the black-box approach is usually considered to be exhaustive path testing. That is, if one executes (via test cases) all possible paths of control flow through the program, then possibly the program can be said to be completely tested.
There are two flaws in this statement, however. One is that the number of unique logic paths through a program is astronomically large. The second flaw in the statement that exhaustive path testing means a complete test is that every path in a program could be tested, yet the program might still be loaded with errors. There are three explanations for this. The first is that an exhaustive path test in no way guarantees that a program matches its specification. Second, a program may be incorrect because of missing paths. Exhaustive path testing, of course, would not detect the absence of necessary paths. Third, an exhaustive path test might not uncover data-sensitivity errors.
Although exhaustive input testing is superior to exhaustive path testing, neither proves to be a useful strategy because both are infeasible. Some way of combining elements of both black-box and white-box testing, to derive a reasonable but not airtight testing strategy, is desirable.
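For contrast, a small hedged sketch of white-box thinking (the Discount class is hypothetical): test data are chosen by inspecting the code so that every branch is exercised in both directions, a practical stand-in for exhaustive path testing.

```java
// Hypothetical function with two decisions, giving four paths in total.
class Discount {
    static double price(double amount, boolean member) {
        double total = amount;
        if (member) total *= 0.9;          // decision 1
        if (total > 100.0) total -= 5.0;   // decision 2
        return total;
    }
}

public class DiscountWhiteBoxTest {
    public static void main(String[] args) {
        // Inputs chosen from the code's structure so each branch is taken both ways.
        System.out.println(Discount.price(200.0, true));    // decision 1 true,  decision 2 true
        System.out.println(Discount.price(50.0, true));     // decision 1 true,  decision 2 false
        System.out.println(Discount.price(200.0, false));   // decision 1 false, decision 2 true
        System.out.println(Discount.price(50.0, false));    // decision 1 false, decision 2 false
    }
}
```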
4. Discuss briefly on Black Box and White Box Testing?
While dealing with software testing, it has to be noted that testing is performed on a given piece of software with the intention of finding errors, and hence of making the software error free.
Basically there are many forms of testing, but two types of testing are of prime focus.
They are:
--Black box Testing and
--White box testing
Black Box Testing: (integration + Validation)
This type of testing is conducted so as to ensure that the software satisfies its purpose of development. Hence, in this case, the software is exercised in all its functional aspects and is closely analyzed to conclude that its modules together function as expected. Also, while exercising the software, the bugs which interfere with the normal functioning of the software are uncovered (so that they can be rectified in the future).
White box Testing: (unit + integration)
This type of testing lays stress on testing the internal framework of the software, i.e., here, each individual unit of the software is tested along with the way each unit collaborates with others to bring up the required functionality. Hence, the bugs associated with the internal structure are uncovered during this testing.
5. STRESS TESTING:
Ans:
STRESS TESTING
Stress testing is also known as endurance testing. Stress testing evaluates system performance when it is stressed for short periods of time. Stress tests are black-box tests which are designed to impose a range of abnormal and even illegal input conditions so as to stress the capabilities of the software.
Input data volume, input data rate, processing time, utilization of memory, etc., are tested beyond the designed capacity.
(just go through to understand; no need to write the examples below in the exam)
For example, suppose an operating system is supposed to support 15 multiprogrammed jobs; the system is stressed by attempting to run more than 15 jobs simultaneously. A real-time system might be tested to determine the effect of simultaneous arrival of several high-priority interrupts.
Stress testing is especially important for systems that usually operate below the maximum capacity but are severely stressed at some peak demand hours.
Abnormal demands are made upon the software by increasing the rate at which it is asked to accept data, or the rate at which it is asked to produce information. More complex tests may attempt to create very large data sets or cause the software to make excessive demands on the operating system.
When conducting a stress test, an adverse environment is deliberately created and maintained. Actions involved may include:
- Running several resource-intensive applications in a single computer at the same time
- Attempting to hack into a computer and use it as a zombie to spread spam
- Flooding a server with useless e-mail messages
- Making numerous, concurrent attempts to access a single Web site
- Attempting to infect a system with viruses, Trojans, spyware or other malware.
Stress testing can be time-consuming and tedious. Nevertheless, some test personnel enjoy watching a system break down under increasingly intense attacks or stress factors. Stress testing can provide a means to measure graceful degradation, the ability of a system to maintain limited functionality even when a large part of it has been compromised.
Once the testing process has caused a failure, the final component of stress testing is determining how well or how fast a system can recover after an adverse event.
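A rough sketch of what such a test might look like in code (the thread count and the processJob method are arbitrary placeholders): many concurrent requests are fired at a component designed for a much smaller load, and the failures are counted.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class StressTestSketch {
    public static void main(String[] args) throws InterruptedException {
        // Suppose the component is designed for roughly 15 concurrent jobs;
        // deliberately submit far more to observe degradation and recovery.
        int concurrentJobs = 200;
        AtomicInteger failures = new AtomicInteger();
        ExecutorService pool = Executors.newFixedThreadPool(concurrentJobs);
        for (int i = 0; i < concurrentJobs; i++) {
            pool.submit(() -> {
                try {
                    processJob();   // placeholder for the system under test
                } catch (Exception e) {
                    failures.incrementAndGet();
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
        System.out.println("Failed requests under stress: " + failures.get());
    }

    static void processJob() {
        // In a real stress test this would call the server, parser, database, etc.
    }
}
```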
4.6 Debugging Approaches
Ans:
Debugging approaches
In general, three categories for debugging approaches may be proposed.
• Brute force
• Back tracking
• Cause elimination
The brute force category of debugging is probably the most common and least efficient method for isolating the cause of a software error. Brute force debugging methods are applied when all other methods of debugging fail. Using a "let the computer find the error" philosophy, memory dumps are taken, run-time traces are invoked and the program is loaded with WRITE (print) statements. In the mass of information produced, one hopes to find a clue that leads to the cause of the error.
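For example, a hedged sketch of the brute-force style (the average method is invented): temporary WRITE/print statements are scattered around the suspect code to trace intermediate values until the faulty statement is located.

```java
// Brute-force debugging: temporary trace output inserted around the suspect code.
class AverageDebug {
    static double average(int[] values) {
        int sum = 0;
        for (int i = 0; i < values.length; i++) {
            sum += values[i];
            // Temporary trace (WRITE) statement added while hunting the bug:
            System.err.println("TRACE i=" + i + " values[i]=" + values[i] + " sum=" + sum);
        }
        double result = (double) sum / values.length;
        System.err.println("TRACE result=" + result);   // removed once the cause is found
        return result;
    }

    public static void main(String[] args) {
        System.out.println(average(new int[] { 2, 4, 9 }));   // traces, then prints 5.0
    }
}
```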
Backtracking is a common debugging approach that can be used successfully in small programs. Beginning at the site where a symptom has been uncovered, the source code is traced backward (manually) until the site of the cause is found. This process becomes unmanageable as the number of source lines grows.
Cause Elimination is manifested by induction or deduction and introduces the concept of binary partitioning. Data related to the error occurrence are organized to isolate potential causes.
Alternatively, a list of all possible causes is developed and tests are conducted to eliminate each.
If initial tests indicate that a particular cause hypothesis shows promise the data are refined in an attempt to isolate the bug.
**************
4.6.1. Debugging guidelines
4.6.2. Program Analysis Tools-Static and Dynamic
4.6.3. Integration Testing
4.6.4. Phased and incremental integration testing
4.6.5. Alpha, beta and stress testing
Ans:
Alpha testing is simulated or actual operational testing by potential users/customers or an independent test team at the developers' site. Alpha testing is often employed for off-the-shelf software as a form of internal acceptance testing, before the software goes to beta testing.
4.6.6. Acceptance Tests
***********************
4.6.7. Error seeding
Ans:
Error seeding:
When you are testing an application/function/module, if you find any bugs/errors you have to log the bug and send it to the developer or team leader (it depends on the company procedure) to make changes to the code to fix the bug/error.
(or)
Error seeding is a process in which the programmer (developer) intentionally introduces errors into the application (code or program) to see the rate of error discovery. This method serves multiple purposes. For example, it gauges the tester's skill at finding errors, the application's ability to survive the errors, and how easily these errors can be fixed without blocking the usage of the application.
Error seeding is the process of deliberately adding known faults to a program in order to monitor the rate of detection and removal, and also to estimate the total number of faults remaining in the program.
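As a worked sketch of the usual estimate (the notation and numbers are mine, not from the notes above): if S seeded faults were planted, s of them were detected, and n genuine faults were detected by the same testing, then the total number of genuine faults is estimated as N ≈ n × S / s, so roughly N − n remain.

```java
public class ErrorSeedingEstimate {
    public static void main(String[] args) {
        int seeded = 20;        // S: known faults deliberately planted
        int seededFound = 15;   // s: planted faults detected by testing
        int realFound = 30;     // n: genuine faults detected by the same testing

        // Assuming seeded and genuine faults are equally likely to be found,
        // the detection rate s/S also applies to the genuine faults.
        double estimatedTotalReal = (double) realFound * seeded / seededFound;   // N = n*S/s
        double estimatedRemaining = estimatedTotalReal - realFound;              // N - n

        System.out.println("Estimated total genuine faults: " + estimatedTotalReal);    // 40.0
        System.out.println("Estimated faults still remaining: " + estimatedRemaining);  // 10.0
    }
}
```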
4.6.8. Benefits of testing
Benefits of testing