Software has become the key element in the evolution of
computer based systems and products.
Over the years, software has evolved from a problem-solving and
information-analysis tool into an industry in itself.
The intent of software engineering is to provide a framework for
building higher quality software.
Software Characteristics:
1. Software is developed or engineered; it is not manufactured in the classical sense.
2. Software doesn't wear out, but it does deteriorate.
Legacy software systems were developed decades ago and have been
continually modified to meet changing requirements; they are becoming costly
to maintain and to evolve.
Ø The software must be enhanced to implement new business
requirements.
Ø The software must be adapted to the needs of new
technology.
Ø The software must be re-architected to make it viable
within a network.
SOFTWARE ENGINEERING PROCESS PARADIGMS:
A paradigm is a model of a process. It defines the
flow of activities that occur as the process progresses from start to end. In
software engineering, a paradigm provides a framework that identifies the major
activities (called phases), the detailed work tasks, and the software delivery.
A software process
is a framework for the tasks that are required to build high‐quality
software. Software Engineering is the
application of a systematic, disciplined, quantifiable approach to the
development, operation, and maintenance of software.
1. The bedrock that
supports software engineering is a quality focus.
2. The foundation for
software engineering is the process layer. A software process is a
framework for the tasks that are required to build high‐quality
software.
3. Software
engineering methods provide the technical how-to's
for building software and include requirements analysis, design, program
construction, testing, and support.
4. Software
engineering tools provide automated or semi‐automated
support for the process
and the methods.
THE WATERFALL MODEL:
1. The waterfall model
is the oldest paradigm for software engineering.
2. This model is
sometimes called the classic life cycle.
3. It suggests a
systematic sequential approach to software development that begins with
requirements and progresses through planning, modeling, construction and
deployment.
4. The main benefit of
this model is its simple, systematic and orderly approach.
5. This model adopts
top-down approach.
(Figure: the waterfall model)
Waterfall model is used when the
requirements are well known, clear and fixed; when the technology is
understood; when the project is short.
INCREMENTAL
PROCESS MODELS:
INCREMENTAL
MODEL:
1. Incremental model
in software engineering combines the elements of waterfall model in an
iterative manner.
2. It delivers a
series of releases called increments; the waterfall model is applied in each
increment, and more functionality is provided to the client as each increment is
delivered.
3. This process
continues, with increments being delivered until the complete product is
delivered.
4. This model can be
used when the requirements of the complete system are clearly defined and
understood.
Advantages:
Ø Core product is developed first, i.e. the main functionality is added in the first increment.
Ø Initial product delivery is faster and costs less.
Ø It is easier to test and debug than other methods of software development.
Ø With each release a new feature is added to the product.
Ø Workload is less.
Disadvantages:
Ø Requires good analysis.
Ø Resulting cost may exceed the organization's initial estimate.
Ø Each phase of an iteration is rigid and does not overlap the others.
Ø As additional functionality is added to the product, problems may arise related to the system architecture that were not evident in earlier increments.
Tasks
in Incremental Model:
Communication: Helps to understand the objective.
Planning: Required, as many people (software teams) work on the same project but on different functions at the same time.
Modeling: Involves business modeling, data modeling, and process modeling.
Construction: Involves the reuse of software components and automatic code generation.
Deployment: Integration of all the increments.
RAD (RAPID
APPLICATION DEVELOPMENT) MODEL:
1. Rapid Application
Development (RAD) is an incremental software process model.
2. In the RAD
model the components or functions are developed in parallel as if they were
mini projects.
3. RAD should be used
when there is a need to create a system that can be modularized in 2-3 months
of time.
4. It should be used
if there’s high availability of designers for modeling and the budget is high
enough to afford their cost along with the cost of automated code generating
tools.
5. RAD SDLC model should be
chosen only if resources with high business knowledge are available and there
is a need to produce the system in a short span of time (2-3 months).

The phases in the rapid application
development (RAD) model are:
Business modeling: The information
flow is identified between various business functions.
Data modeling: Information gathered from business modeling is used to define data objects that are needed for the business.
Process modeling: Data objects defined during data modeling are transformed to implement the business information flow needed to achieve a specific business objective. Processing descriptions are created for adding, modifying, deleting, or retrieving (CRUD operations on) data objects.
Application generation: Automated tools are used to convert process models into code and the actual system.
Testing and turnover: Test new components and all the interfaces.
The RAD model reduces development time, increases the reusability of
components, and encourages customer feedback. On the other hand, it depends
on strong team and individual performances for identifying business
requirements, requires highly skilled developers/designers, is highly dependent
on modeling skills, and is inapplicable to cheaper projects, as the cost of
modeling and automated code generation is very high.
EVOLUTIONARY
PROCESS MODELS:
Evolutionary
models are iterative. They are
characterized in a manner that enables the software engineers to develop
increasingly more complete versions of the software.
PROTOTYPE
MODEL:
1. The basic idea in Prototype
model is that instead of freezing the requirements before a design or
coding can proceed, a prototype is built to understand the requirements.
2. This prototype is
developed based on the currently known requirements.
3. The client can get
an “actual feel” of the system, since the interactions with prototype can
enable the client to better understand the requirements of the desired
system.
4. Prototype model
should be used when the desired system needs to have a lot of interaction with
the end users.
5. Prototyping ensures
that the end users constantly work with the system and provide feedback, which
is incorporated into the prototype, resulting in a usable system.
6. Prototypes are excellent
for designing good human-computer interface systems.

Advantages:
Ø Users are actively involved in the development.
Ø The users get a better understanding of the system being developed.
Ø Errors can be detected much earlier.
Ø Quicker user feedback is available, leading to better solutions.
Ø Missing functionality can be identified easily.
Ø Confusing or difficult functions can be identified.
Disadvantages:
Ø Leads to an "implement and then repair" way of building systems.
Ø Scope of the system may expand beyond original plans.
Ø An incomplete application may cause the application not to be used, as the full system was designed on the basis of incomplete or inadequate problem analysis.
SPIRAL
MODEL:
1. The spiral model is
similar to the incremental
model, with more emphasis placed on risk analysis.
2. The spiral model
has four phases: Planning, Risk Analysis, Engineering and Evaluation.
Planning: To define objectives, resources,
responsibilities, milestones and schedules.
Risk Analysis: To assess both technical and
management risks.
Engineering: To design and implement one or more
prototypes or samples of the application.
Evaluation: The customer evaluates the output of the
engineering phase and provides feedback before the next traversal begins.

The process begins at the center position.
From there it moves clockwise in traversals. Each traversal of the spiral
usually results in a deliverable. It is not clearly defined what this
deliverable is. This changes from traversal to traversal.
Spiral model is used when cost and risk
evaluation is important, requirements are complex, the users are unsure of
their needs. It can be a costly model to
use. It works well for large projects.
PROJECT
METRICS:
Metrics is used to improve product quality,
develop team productivity and is concerned with productivity and quality
measures.
1. Project metrics are tactical in nature.
2. Project metrics are used by the project manager to adapt the project work flow and technical activities.
3. Project metrics are used to guide adjustments to the work schedule to avoid delays and to assess product quality on an ongoing basis.
4. Project metrics
enable a software project manager to:
Ø Assess the status of an ongoing project
Ø Track potential risks
Ø Uncover problem areas before their status becomes critical
Ø Adjust work flow or tasks
Ø Evaluate the project team's ability to control the quality of software work products
5. As quality improves, defects are minimized; as defects go down, the amount
of rework required during the project is reduced; as rework goes down, the
overall project cost is reduced.
PROCESS
METRICS:
a. Process metrics
enable an organization to take strategic view by providing insight into the
effectiveness of the software process.
b. They enable the project
manager to adapt project work flow and technical activities.
c. Size-oriented
metrics and function-oriented metrics are used throughout the industry.
d. Size-oriented
metrics is derived by normalizing quality and/or productivity measures by
considering the size of the software produced.
e. A line of code
(LOC) is used as a normalizing value in size-oriented metrics.
f. Size-oriented
metrics are not universally accepted as the best way to measure the software
process
g. Function-oriented
metrics use a measure of the functionality delivered by the application as a
normalization value. Function Point (FP)
is most widely used metric.
h. LOC and FP can be
used to estimate object-oriented software projects.
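The size- and function-oriented measures above can be sketched in a few lines. The weights and counts below are invented for illustration; the 0.65 + 0.01·ΣFi scaling is the standard function-point value adjustment.

```python
# Size-oriented metrics: normalize quality/productivity by KLOC.
def defects_per_kloc(defects, loc):
    return defects / (loc / 1000)

# Function-oriented metrics: the unadjusted function-point count is
# scaled by the value adjustment factor 0.65 + 0.01 * sum(Fi), where
# each Fi rates one of 14 complexity adjustment factors from 0 to 5.
def function_points(unadjusted_count, complexity_factors):
    return unadjusted_count * (0.65 + 0.01 * sum(complexity_factors))

# Invented numbers for illustration:
print(defects_per_kloc(24, 12_000))       # 2.0 defects per KLOC
print(function_points(320, [3] * 14))     # 320 * (0.65 + 0.42)
</imports>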
SOFTWARE
ESTIMATION:
The
software project planner must estimate the following before the project begins:
1. How long it will
take to complete the project (Time)
2. How much effort
will be required to complete the project (Effort)
3. Number of personnel
involved to complete the project (People)
4. Resources involved
viz. hardware and software
5. Risk involved.
Also, other costs such as environmental cost, specialized tools and the
political environment can affect the ultimate cost of the software and the
effort applied to develop it.
DECOMPOSITION techniques take a divide-and-conquer approach
to software project estimation. By
decomposing a project into major functions and related software engineering
activities, cost and effort estimation can be performed step by step.
EMPIRICAL ESTIMATION MODELS can be used to
complement decomposition techniques and offer a potentially valuable estimation
approach in their own right.
COCOMO
MODEL (Empirical
Estimation Model):
COCOMO → Constructive Cost Model
1. COCOMO Model allows
estimating the cost, effort, and scheduling when planning a new software
development activity.
2. It consists of
three sub-models, called the Application Composition, Early Design, and
Post-Architecture models.
COCOMO Model is used for:
i. Making investment or other financial decisions involving a software development effort
ii. Setting project budgets and schedules as a basis for planning and control
iii. Deciding on or negotiating tradeoffs among software cost, schedule, functionality, performance or quality factors
iv. Making software cost and schedule risk management decisions
v. Deciding which parts of a software system to develop, reuse, lease, or purchase
vi. Making legacy software inventory decisions: what parts to modify, phase out, outsource, etc.
vii. Setting mixed investment strategies to improve the organization's software capability, via reuse, tools, process maturity, outsourcing, etc.
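As a concrete sketch of empirical estimation, the Basic COCOMO equations can be coded directly. Note this is the simplest form of the original model, not the COCOMO II sub-models named above; the coefficients are Boehm's published constants for the three project modes.

```python
# Basic COCOMO: effort E = a * KLOC**b person-months,
# development time D = c * E**d months (Boehm's original constants).
COEFFICIENTS = {
    "organic":       (2.4, 1.05, 2.5, 0.38),
    "semi-detached": (3.0, 1.12, 2.5, 0.35),
    "embedded":      (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc, mode="organic"):
    a, b, c, d = COEFFICIENTS[mode]
    effort = a * kloc ** b          # person-months
    duration = c * effort ** d      # months
    people = effort / duration      # average staffing level
    return effort, duration, people

effort, duration, people = basic_cocomo(32, "organic")
print(f"{effort:.1f} PM, {duration:.1f} months, {people:.1f} people")
```

For a fixed size, the embedded mode always estimates more effort than the organic mode, reflecting the tighter constraints it assumes.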
PLANNING:
1. The software project management process begins with project planning.
2. The objective of software project planning is to provide a framework that
enables the manager to make reasonable estimates of resources, cost and
schedule.
3. The activities
associated with planning include:
Ø Software scope
Ø Resources
Ø Project estimation
Ø Decomposition
4. Project
planning is often used to organize different areas of a project, including project
plans, work-loads
and the management of teams and individuals.
5. Project
planning is inherently uncertain as it must be done before the project is
actually started.
SOFTWARE
RISK:
1. Risk is an expectation of loss, a problem that may or may not occur in the
future.
2. It is
generally caused due to lack of information, control or time.
3. A possibility
of suffering from loss in software development process is called a software
risk.
4. Loss can be anything: an increase in production cost, development of
poor-quality software, or not being able to complete the project on time.
5. Software risk
exists because the future is uncertain and there are many known and unknown
things that cannot be incorporated in the project plan.
6. Software risk can be of two types: (a) internal risks, which are within the
control of the project manager, and (b) external risks, which are beyond the
control of the project manager.
RISK
ANALYSIS:
1. Risk analysis
is used to identify the high risk elements of a project in software
engineering.
2. Risk analysis
has also been found to be most important in the software design phase to
evaluate criticality of the system, where risks are analyzed and necessary
counter measures are introduced.
3. The main purpose of risk analysis is to understand the risks better and to
verify and correct the project attributes accordingly.
4. A successful
risk analysis includes important elements like problem definition, problem
formulation, data collection.
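One common way to quantify the "high risk elements" mentioned above is risk exposure, RE = probability × cost if the risk occurs; sorting by exposure surfaces the riskiest items first. The risks and figures below are invented for illustration.

```python
# Risk exposure: probability the risk becomes real, times the cost
# incurred if it does. Ranking by exposure prioritizes mitigation.
risks = [  # (description, probability, cost-if-it-occurs)
    ("key developer leaves",       0.30, 40_000),
    ("requirements change late",   0.60, 25_000),
    ("third-party API deprecated", 0.10, 60_000),
]

def risk_exposure(prob, cost):
    return prob * cost

ranked = sorted(risks, key=lambda r: risk_exposure(r[1], r[2]),
                reverse=True)
for name, p, c in ranked:
    print(f"{name}: exposure = {risk_exposure(p, c):,.0f}")
```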
SOFTWARE
PROJECT SCHEDULING:
1. Project
scheduling is a mechanism to communicate what tasks need to get done and
which organizational resources will be allocated to complete those tasks in
what timeframe.
2. A project
schedule is a document collecting all the work needed to deliver the project on
time.
3. Project
scheduling occurs during the planning phase of the project.
i. What needs to be done?
ii. When will it be done?
iii. Who will do it?
4. The project
schedule should reflect all of the work associated with delivering the project
on time.
5. Without a full
and complete project schedule, the project manager will be unable to
communicate the complete effort, in terms of cost and resources, necessary to
deliver the project.
REQUIREMENT
ANALYSIS
Requirement
Engineering Process:
Requirements engineering occurs during
the customer communication and modeling activities that are defined for the
generic software process. The functions
viz. inception, elicitation, elaboration, negotiation, specification, validation
and management are conducted by the software team. As requirements are identified and the
analysis model is created, the software team and stakeholders negotiate
priority, availability and relative cost.
ANALYSIS MODEL:
1. Requirement
Analysis results in the specification of the software’s operational
characteristics.
2. It indicates the software's interface with other system elements and
establishes the constraints that the software must meet.
3. It allows the software engineer/analyst to elaborate on the basic
requirements established during requirement engineering and to build models
that depict user scenarios, functional activities, problem classes and their
relationships, data flow, and so on.
4. It provides the software engineer with the information and functions that
can be translated into architectural, interface and component-level design.
5. The analysis
model (shown below) and requirements specification provide a means for
assessing quality once the software is built.
6. The analysis
model is composed of four modeling elements viz. scenario-based models,
flow-models, class-based models (all depict static behavior), and behavioral
models (depict dynamic behavior).
(Figure: elements of the analysis model)
7. Scenario-based
models depict software requirements from user’s point of view. Flow-models focus on the flow of data objects
as they are transformed by the processing functions. Class-based modeling uses the information
derived from scenario-based and flow-oriented modeling element to identify
analysis classes. Behavioral modeling
depicts dynamic behavior and uses the input from other models to represent the
state of analysis classes and the system as a whole.
FEASIBILITY
STUDY:
1. Feasibility is defined as
the practical extent to which a project can be performed successfully.
2. To evaluate
feasibility, a feasibility study is performed, which determines whether the
solution considered to accomplish the requirements is practical and workable in
the software.
3. Information
such as resource availability, cost estimation for software development,
benefits of the software to the organization after it is developed and cost to
be incurred on its maintenance are considered during the feasibility study.
4. The objective
of the feasibility study is to establish the reasons for developing the
software that is acceptable to users, adaptable to change and conformable to
established standards.
The feasibility study concentrates on
the following areas:
1. Operational
Feasibility
2. Technical
Feasibility
3. Economic
Feasibility
Operational Feasibility: The operational
feasibility study tests the operational scope of the software to be developed.
The proposed software must have high operational feasibility, i.e. its
usability must be high.
Technical Feasibility: The technical
feasibility study compares the level of technology available in the software
development firm and the level of technology required for the development of
the product. Here the level of technology consists of the programming language,
the hardware resources, other software tools etc.
Economic Feasibility: The economic
feasibility study evaluates the cost of software development against the
ultimate income or benefits obtained from the developed system. There must be
scope for profit after the successful completion of the project.
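A minimal sketch of the economic-feasibility arithmetic, using invented figures: when the yearly benefit exceeds the yearly maintenance cost, the development cost is recovered after dev_cost / (benefit − maintenance) years.

```python
# Toy economic-feasibility check (illustrative numbers only):
# compare development + maintenance cost against expected benefit.
def payback_period(dev_cost, annual_benefit, annual_maintenance):
    net_per_year = annual_benefit - annual_maintenance
    if net_per_year <= 0:
        return float("inf")  # the project never pays back
    return dev_cost / net_per_year

years = payback_period(dev_cost=120_000,
                       annual_benefit=60_000,
                       annual_maintenance=20_000)
print(f"Pays back in {years:.1f} years")  # 120000 / 40000 = 3.0
```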
PROBLEM
OF REQUIREMENTS:
1. The
requirements may address too little or too much information.
2. The boundary
of the system is ill-defined
3. Unnecessary
design information may be given
4. Problems of
understanding between users and developers
5. Users have
incomplete understanding of their needs
6. Analysts have
poor knowledge of problem domain
7. User and
analyst speak different languages
8. Ease of
omitting “obvious” information
9. Conflicting
views of different users
10. Requirements
are often vague and un-testable.
REQUIREMENT
ANALYSIS:
Requirements
analysis is the first stage in the systems engineering process and the
software development process.
Requirements
analysis,
in systems engineering and software engineering, determines the needs or
conditions to be met by a new or altered product, taking account of the
possibly conflicting requirements of the various stakeholders, such as
beneficiaries or users.
Requirements
analysis is critical to the success of a development project. Requirements must
be documented, actionable, measurable, testable, related to identified business
needs or opportunities, and defined to a level of detail sufficient for system
design.
Requirements
can be architectural, structural, behavioral, functional, and non-functional.
SOFTWARE
ANALYSIS CONCEPTS AND PRINCIPLES:
The overall
role of software in large system is identified during system engineering. It is
necessary to look at software’s role to understand the specific requirements
that must be achieved to build high-quality software. To do so, one should
follow a set of underlying concepts and principles.
Requirement
analysis is a software engineering task that bridges the gap between system
level requirements engineering and software design. Requirements engineering
activities result in the specification of software’s operational
characteristics, indicate software’s interface with other system elements, and
establish the constraints that the software must meet. Requirement analysis
allows the software engineer to refine the software allocation and to build
models of the domains that will be treated by the software.
Before
requirements can be analyzed, modeled, or specified they must be gathered
through an elicitation process.
Analysis Principles:
1. The
information domain of a problem must be represented and understood.
2. The functions
that the software is to perform must be defined.
3. The behavior
of the software must be represented.
4. The models that depict information, function and behavior must be partitioned in a manner that uncovers detail in a layered fashion.
5. The analysis process should move from essential information toward implementation detail.
6. Understand the problem before you begin to create the analysis model.
7. Develop prototypes that enable a user to understand how human/machine interaction will occur.
8. Record the origin of and the reason for every requirement.
9. Use multiple views of requirements.
10. Rank requirements.
11. Work to eliminate ambiguity.
SOFTWARE
DESIGN:
1. Software design is a process of
problem-solving and planning for a software solution.
2. After the purpose and specifications of software are determined,
software developers design or employ designers to develop a plan for a
solution.
3. The goal of design is to create a model of software that will
implement all the customer requirements correctly.
4. The design should provide the complete picture of the software
addressing the data, functional, and behavioral domains.
5. A software design may be platform-independent or platform-specific,
depending on the availability of the technology called for by the design.
DESIGN
CONCEPTS:
The
design concepts provide the software designer with a foundation from which more
sophisticated methods can be applied. They are:
Abstraction: Abstraction is the process or result of
generalization by reducing the information content of a concept in order to
retain only information which is relevant for a particular purpose.
Architecture: Architecture is the structure of program components and the manner in
which these components interact, and the structure of the data that are used by
the components.
Patterns:
A
pattern provides a description of the solution to a recurring design problem of
some specific domain in such a way that the solution can be used again and
again. The objective of each pattern is to determine whether the pattern can
be reused; whether it is applicable to the current project; and whether it can
be used to develop a similar but functionally or structurally different design
pattern.
Modularity: Modularity is achieved by dividing the
software into uniquely named and addressable components, which are also
known as modules.
A complex system is partitioned into a set of discrete modules
in such a way that each module can be developed independent of other modules.
After developing the modules, they are integrated together to meet the software
requirements. Modularizing a design helps to plan the development in a more
effective manner, accommodate changes easily, conduct testing and debugging
effectively and efficiently, and conduct maintenance work without adversely
affecting the functioning of the software.
Information Hiding: Modules should be specified and designed so that information contained
within a module is inaccessible to other modules that have no need for such
information.
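Information hiding can be sketched with a hypothetical Stack module: clients see only push and pop, while the internal list representation stays inaccessible by convention and can be replaced without touching any caller.

```python
# Information hiding: the module exposes a small interface and keeps
# its representation internal. Python's leading underscore marks
# internals by convention (hypothetical Stack module for illustration).
class Stack:
    def __init__(self):
        self._items = []          # internal detail: list-backed storage

    def push(self, item):
        self._items.append(item)

    def pop(self):
        return self._items.pop()

    def __len__(self):
        return len(self._items)

# Clients use only push/pop/len; the list could be swapped for a
# linked structure without changing any calling module.
s = Stack()
s.push(1); s.push(2)
print(s.pop())  # 2
```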
Functional Independence: It is a direct outgrowth of modularity and the concepts of abstraction
and Information Hiding. Independence is
assessed using Cohesion and Coupling.
Cohesion is an indication of the relative functional strength of a
module. Coupling is an indication of the
relative interdependence among modules.
Refinement: It
is the process of elaboration. It causes the designer to elaborate on the
original statement, providing more and more details. Abstraction and Refinement
are complementary concepts. Abstraction enables the designer to specify
procedure and data. Refinement helps the
designer to reveal low-level details as design progresses.
Refactoring: It is a technique that simplifies the design or code of a component
without changing its function or behavior.
It leads to easy integration, testing and maintenance of software.
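A small illustration of refactoring, using a hypothetical grading function: the nested conditionals are flattened into guard clauses, simplifying the structure while the behavior stays identical for every input.

```python
# Refactoring: same behavior, simpler structure. The "before"
# version nests conditionals; the "after" version flattens them.
def grade_before(score):
    if score >= 50:
        if score >= 80:
            return "A"
        else:
            return "B"
    else:
        return "F"

def grade_after(score):  # refactored: guard clauses, no nesting
    if score >= 80:
        return "A"
    if score >= 50:
        return "B"
    return "F"

# Behavior is unchanged for every input in range:
assert all(grade_before(s) == grade_after(s) for s in range(101))
```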
Translating
the Analysis Model into the Design Model:
Each of the elements of the analysis
model provides information that is necessary to create the four design models
required for a complete specification of design. The flow of information during the software
design is illustrated in the diagram drawn below:

The data
design / class design transforms analysis-class models into design class
realizations and the requisite data structures required to implement the
software.
The
architectural design defines the relationship between major structural elements
of the software, the architectural styles and design patterns that can be used
to achieve the requirements defined for the system.
The
interface design describes how the software communicates with systems that
interoperate with it, and with humans who use it.
The
component level design transforms structural elements of the software
architecture into procedural description of software components.
EFFECTIVE
MODULAR DESIGN – (FUNCTIONAL INDEPENDENCE):
Concept of Functional Independence is
a direct outgrowth of Modularity and Information hiding. Functional Independence is measured by
Cohesion and Coupling.
Cohesion is an indication of the relative functional strength of a
module. Coupling is an indication of the
relative interdependence among modules.
Types of Cohesion: The different
types of cohesion in the software engineering are as follows:
1. Functional Cohesion: The best type of cohesion, in which the parts of a module are grouped because they all contribute to the module's single well-defined task.
2. Sequential Cohesion: When the parts of a module are grouped such that the output from one part is the input to another, it is known as sequential cohesion.
3. Communicational Cohesion: Parts of the module are grouped because they operate on the same data, e.g. a module operating on the same set of information records.
4. Procedural Cohesion: The parts of the module are grouped because they follow a certain sequence of execution.
5. Logical Cohesion: When the module's parts are grouped because they are logically categorized to do the same kind of work even though they are different in nature, it is known as logical cohesion. It is one of the worst types of cohesion in software engineering.
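The difference between the best and one of the worst kinds can be sketched with two hypothetical functions: compute_tax has functional cohesion (one well-defined task), while do_stuff lumps unrelated jobs behind a selector flag (logical cohesion).

```python
# Functional vs. logical cohesion, sketched with invented functions.
def compute_tax(income, rate):       # functional cohesion: one task
    return income * rate

def do_stuff(kind, payload):         # logical cohesion: avoid this
    if kind == "tax":
        return payload["income"] * payload["rate"]
    elif kind == "format":
        return str(payload).upper()
    elif kind == "log":
        print(payload)

print(compute_tax(1000, 0.2))  # 200.0
```

Splitting do_stuff into three single-purpose functions would restore functional cohesion.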
Types of Coupling: The different types of coupling in software engineering are as follows:
1. Content Coupling: The tightest form of coupling; it occurs when one module relies on the internal workings of another. A change in the second module will force changes in the dependent module.
2. Common Coupling: The second-tightest form, also known as global coupling. It occurs when the same global data are shared by two modules; the modules must change whenever the shared resource changes.
3. External Coupling: Occurs when an externally imposed data format or communication protocol is shared by two modules. External coupling is generally related to communication with external devices.
4. Control Coupling: One module controls the flow of another and passes information from one to the other.
5. Stamp Coupling: A data structure is used to transfer information from one component to another.
6. Data Coupling: The modules are connected by data coupling if only data are passed between them.
7. Message Coupling: The loosest form of coupling, achieved by state decentralization; component communication is performed through message passing.
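Data coupling versus common coupling can be sketched with hypothetical functions: area receives only the plain values it needs, while set_units and describe both touch a shared global, so a change to that structure ripples into every module that uses it.

```python
# Data coupling vs. common coupling, sketched (invented functions).
def area(width, height):            # data coupling: plain values only
    return width * height

_config = {"units": "cm"}           # shared global state

def set_units(units):               # common coupling: writes global
    _config["units"] = units

def describe(width, height):        # common coupling: reads global
    return f"{area(width, height)} {_config['units']}^2"

set_units("m")
print(describe(3, 4))  # "12 m^2"
```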
ARCHITECTURAL
DESIGN:
Requirements of the software should be
transformed into an architecture that describes the software's top-level
structure and identifies its components. This is accomplished through
architectural design
Functions:
1. It defines an
abstraction level at which the designers can specify the functional and
performance behaviour of the system.
2. It acts as a
guideline for enhancing the system (whenever required) by describing those
features of the system that can be modified easily without affecting system
integrity.
3. It evaluates
all top-level designs.
4. It develops
and documents top-level design for the external and internal interfaces.
5. It develops
preliminary versions of user documentation.
6. It defines and
documents preliminary test requirements and the schedule for software
integration.
7. The sources of architectural design are listed below:
Ø Information regarding the application domain for the software to be developed
Ø Data-flow diagrams
Ø Availability of architectural patterns and architectural styles
Architectural design can be
represented using the following models.
i. Structural model: Illustrates architecture as an ordered collection of program components.
ii. Dynamic model: Specifies the behavioral aspects of the software architecture and indicates how the structure or system configuration changes as the function changes due to changes in the external environment.
iii. Process model: Focuses on the design of the business or technical process which the system must implement.
iv. Functional model: Represents the functional hierarchy of a system.
v. Framework model: Attempts to identify repeatable architectural design patterns encountered in similar types of applications, leading to an increase in the level of abstraction.
PROCEDURAL
DESIGN:
1. Procedural design is the stage in which the programmer specifies what must
be done and in what sequence. It is based on the concepts of modularity and
the scope of program code.
2. Transforms
structural elements of the program architecture into a procedural description
of software components.
3. Information
obtained from the Process and Control Specifications and the State Transition
Diagrams serve as a basis for procedural design.
4. The two major
diagramming tools used in procedural design are data flow diagrams and
structure charts.
5. A data flow diagram (or DFD) is a tool to help you discover and document
the program's major processes. The DFD is a conceptual model: it doesn't
represent the computer program; it represents what the program must
accomplish.
6. A structure
chart is a tool to help you derive and document the program’s architecture. It
is similar to an organization chart. A structure chart can be used to show the
relationship between conceptual tasks.
DATA
FLOW ORIENTED DESIGN:
The transition from information flow
(such as DFD) to structure is typically accomplished as a five step process:
1. The type of
information flow (either transform or transaction flow) is established
2. Flow
boundaries are indicated
3. The DFD is
mapped into program structure
4. Control
hierarchy is defined by Factoring
5. The resultant
structure is defined using design measures and heuristics
The data flow-oriented design
technique identifies the different processing stations (functions) in a system
and the data items that flow between them.
USER INTERFACE DESIGN:
User
interface design (UI) or user interface engineering is the design of user
interfaces for machines and software, such as computers, home appliances,
mobile devices, and other electronic devices, with the focus on maximizing
usability and the user experience.
HUMAN-COMPUTER INTERACTION:
HCI
(human-computer interaction) is the study of how people interact with computers
and to what extent computers are or are not developed for successful
interaction with human beings.
HCI is a very broad discipline that encompasses different specialties with different concerns regarding computer development:
· computer science is concerned with the application design and engineering of human interfaces;
· sociology and anthropology are concerned with the interactions between technology, work, and organization, and the way that human systems and technical systems mutually adapt to each other;
· ergonomics is concerned with the safety of computer systems and the safe limits of human cognition and sensation;
· psychology is concerned with the cognitive processes of humans and the behavior of users;
· linguistics is concerned with the development of human and machine languages and the relationship between the two.
The goals of HCI are to produce usable and safe systems, as well as functional systems. In order to produce computer systems with good usability, developers must attempt to:
1. understand the
factors that determine how people use technology
2. develop tools
and techniques to enable building suitable systems
3. achieve
efficient, effective, and safe interaction
4. put people
first
HUMAN-COMPUTER INTERFACE DESIGN:
Human-Computer Interaction (HCI) was previously known as man-machine studies or man-machine interaction. It deals with the design, implementation, and assessment of computer systems and related phenomena intended for human use.
Human-Computer Interface Design seeks to discover the most efficient way
to design understandable electronic messages.
To assess the interaction between humans and computers, seven principles can be applied to transform difficult tasks into simple ones:
1. Use both
knowledge in world & knowledge in the head.
2. Simplify task
structures.
3. Make things
visible.
4. Get the
mapping right
(User mental
model = Conceptual model = Designed model).
5. Convert constraints into advantages (physical constraints, cultural constraints, technological constraints).
6. Design for
Error.
7. When all else
fails − Standardize.
USER-INTERFACE DESIGN:
Interface design focuses on three areas: (i) the design of interfaces between software components; (ii) the design of interfaces between the software and other external entities; (iii) the design of the interface between the user and the computer (user-interface design).
The user interface is the most important element of a computer-based system or product. If the interface is poorly designed, the product fails. The principles guiding the design of an effective user interface are: (i) place the user in control; (ii) reduce the user’s memory load; (iii) make the interface consistent.
Software becomes more popular if its user interface is: (i) simple to use; (ii) attractive; (iii) responsive in a short time; (iv) clear to understand; (v) consistent across all interface screens.
The
development of a user-interface begins with a series of analysis tasks. Once the tasks have been identified,
user-scenarios are created and analyzed to define a set of interface objects
and actions. Design issues such as
response time, command and action structure, error handling and help facilities
are considered as the design model is refined.
Many implementation tools are used to build a prototype for evaluation
by the user.
INTERFACE STANDARDS:
1. User interface
standards can be hard to use for developers.
2. Designers rely
heavily on the examples in the standard and their experience with other user
interfaces.
3. User interface
standards have become the object of increasingly intense activities in recent
years
4. Given the
potential future importance of usability standards, it seems reasonable to
study the usability of the standards themselves to assess whether developers
can actually apply the content of the documents.
5. The ability of
designers to use and understand a standard can have more impact on interface
quality than the rules specified in the standard.
6. As with all system design, if the intended users cannot use the system, or have trouble doing so, the proper response is to redesign the system to make it more usable.
For a user
interface standard to increase usability in the resulting products, two conditions
have to be met viz. (i) The standard must specify a usable interface; (ii) the
standard must be usable by developers so that they actually build the interface
according to the specifications.
To increase the usability of user interface standards, it is recommended to provide development tools or Web templates that support implementing interfaces that follow the standard.
SOFTWARE QUALITY ASSURANCE:
1. Software
quality assurance (SQA) is a process that ensures that developed software meets
and complies with defined quality specifications.
2. SQA is an ongoing process within the software development life cycle (SDLC) that checks the developed software to ensure it meets desired quality measures.
3. It provides a means of monitoring the software engineering processes and methods used to ensure quality.
4. SQA
encompasses the entire software development process, which includes
processes such as requirements definition, software
design, coding, source
code control, code reviews, software configuration management,
testing, release management, and product integration.
5. SQA is organized into goals, commitments, abilities, activities, measurements, and verifications.
QUALITY METRICS:
1. Quality metrics are a key component of an effective quality management plan.
2. They are the measurements used to ensure that customers receive acceptable products or deliverables.
3. Quality metrics are used to directly translate customer needs into acceptable performance measures in both products and processes.
4. Quality metrics help to translate the clients' needs into measurable goals.
5. Quality metrics focus on effectiveness, measuring that the right things are being done correctly.
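One widely used quality metric is defect density, the number of defects per thousand lines of code (KLOC); a minimal sketch, with made-up figures for illustration:

```python
# A minimal sketch of one common quality metric, defect density
# (defects per KLOC). The figures below are invented for illustration.

def defect_density(defects_found, lines_of_code):
    """Defects per thousand lines of code (KLOC)."""
    return defects_found / (lines_of_code / 1000)

# Example: 30 defects found in a 12,000-line component.
print(defect_density(30, 12_000))  # 2.5 defects per KLOC
```

Tracking such a metric across releases turns the vague goal "improve quality" into a measurable target.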
SOFTWARE RELIABILITY:
1. Software
Reliability is the probability of failure-free software operation for a
specified period of time in a specified environment.
2. It is also an
important factor affecting system reliability.
3. Software
reliability is a dynamic process.
4. It differs
from hardware reliability in that it reflects the design perfection, rather
than manufacturing perfection.
5. Various
approaches can be used to improve the reliability of software, however, it is
hard to balance development time and budget with software reliability.
6. Metrics to measure software reliability do exist and can be used starting in the requirements phase.
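The definition in point 1 can be quantified with the basic exponential reliability model, R(t) = e^(-λt), which assumes a constant failure rate λ; a minimal sketch:

```python
import math

# A minimal sketch of the exponential software reliability model.
# Assumption (not from the notes above): the failure rate is constant.

def reliability(failure_rate, t):
    """Probability of failure-free operation for time t,
    given a constant failure rate (exponential model)."""
    return math.exp(-failure_rate * t)

# Example: a failure rate of 0.001 failures/hour over 100 hours.
print(round(reliability(0.001, 100), 4))  # 0.9048
```

More elaborate reliability-growth models exist, but they all express the same idea: reliability is a probability over a specified time in a specified environment.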
SOFTWARE TESTING:
1. Software testing is the evaluation of software against requirements gathered from users and system specifications.
2. Testing is conducted at the phase level in the software development life cycle or at the module level in program code.
3. Software testing comprises validation and verification.
SYSTEM TESTING:
1. System testing is a level of software testing where the complete, integrated software is tested.
2. The purpose of
this test is to evaluate the system’s compliance with the specified
requirements.
3. It includes
both functional and Non-Functional testing.
4. System testing is actually a series of different tests whose purpose is to exercise the full computer-based system.
5. System Testing
is performed after Integration
Testing and before Acceptance
Testing.

INTEGRATION TESTING:
1. Integration testing is a level of software testing where individual units are combined and tested as a group.
2. It occurs after unit testing and before validation testing.
3. This testing is done to expose faults in the interaction between integrated units.
4. Upon completion of unit testing, the units or modules are integrated, which gives rise to integration testing.
5. The purpose of integration testing is to verify the functional, performance, and reliability requirements between the integrated modules.
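The idea can be sketched with two hypothetical units that pass their unit tests individually and are then tested across their interface:

```python
# A minimal sketch of an integration test. Both units are invented
# for illustration; the point is that the test exercises the
# interface between them, not each unit in isolation.

def parse_amount(text):
    # Unit A: parse a currency string like "$12.50" into cents.
    return int(round(float(text.lstrip("$")) * 100))

def apply_discount(cents, percent):
    # Unit B: apply a percentage discount to an amount in cents.
    return cents - cents * percent // 100

def test_parse_then_discount():
    # Integration: the output of unit A feeds unit B.
    assert apply_discount(parse_amount("$20.00"), 25) == 1500

test_parse_then_discount()
print("integration test passed")
```

A fault in either unit's assumptions about the other (e.g. dollars vs. cents) would surface here even though each unit passes its own tests.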

UNIT TESTING:
1. Unit Testing is a level of
software testing where individual units/ components of software are
tested.
2. The purpose is
to validate that each unit of the software performs as designed.
3. Unit Testing
is performed by using the White Box
Testing method.
4. Unit Testing
is the first level of testing and is performed prior to Integration
Testing.
5. Unit testing
is normally performed by software developers.
6. Unit testing
increases confidence in changing/ maintaining code.
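A minimal unit test using Python's standard unittest framework, applied to a hypothetical function:

```python
import unittest

# Unit under test: a single, isolated function (invented for illustration).
def is_leap_year(year):
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

class LeapYearTest(unittest.TestCase):
    # Each test validates that the unit performs as designed.
    def test_divisible_by_four(self):
        self.assertTrue(is_leap_year(2024))

    def test_century_not_leap(self):
        self.assertFalse(is_leap_year(1900))

    def test_four_hundred_is_leap(self):
        self.assertTrue(is_leap_year(2000))

if __name__ == "__main__":
    unittest.main(exit=False)
```

Because the unit is tested in isolation, a failure points directly at this function rather than at some interaction elsewhere in the system.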

ACCEPTANCE TESTING:
1. Acceptance testing is a level of software testing where a system is tested for acceptability.
2. The purpose of
this test is to evaluate the system’s compliance with the business requirements
and assess whether it is acceptable for delivery.
3. Usually, Black Box
Testing method is used in Acceptance Testing.
4. Acceptance
Testing is performed after System
Testing and before making the system available for actual use.

VALIDATION:
1. Validation is the process of examining whether or not the software satisfies the user requirements.
2. It is carried
out at the end of the SDLC. If the software matches requirements for which it
was made, it is validated.
3. Validation
ensures the product under development is as per the user requirements.
4. Validation emphasizes user requirements.
VERIFICATION:
1. Verification
is the process of confirming if the software is meeting the business
requirements, and is developed according to the proper specifications and
methodologies.
2. Verification
ensures the product being developed is according to design specifications.
3. Verification
concentrates on the design and system specifications.
TEST
CHARACTERISTICS:
1. A good test
has a high probability of finding an error.
2. A good test is
not redundant.
3. A good test
should not be too simple.
4. A good test
should not be too complex.
BLACK-BOX
TESTING:
1. It is carried
out to test functionality of the program.
2. It is also
called ‘Behavioral’ testing.
3. It is often
used for validation.
4. It is based entirely on the software requirements and specifications.
5. It facilitates testing communication amongst modules.
6. It can be functional or non-functional, though usually functional.
Equivalence class - The input is divided into similar classes. If one element of a class passes the test, it is assumed that the whole class passes.
Boundary values - The input
is divided into higher and lower end values. If these values pass the test, it
is assumed that all values in between may pass too.
Cause-effect graphing - In both
previous methods, only one input value at a time is tested. Cause (input) –
Effect (output) is a testing technique where combinations of input values are
tested in a systematic way.
Pair-wise Testing - The
behavior of software depends on multiple parameters. In pair-wise testing, the
multiple parameters are tested pair-wise for their different values.
State-based testing - The system
changes state on provision of input. These systems are tested based on their
states and input.
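The first two techniques above can be sketched for a hypothetical score validator (valid range 0..100); the function and values are invented for illustration:

```python
# A sketch of equivalence-class and boundary-value test selection for a
# hypothetical function that validates an exam score in the range 0..100.

def is_valid_score(score):
    return 0 <= score <= 100

# Equivalence classes: one representative value from each class.
assert is_valid_score(50) is True      # valid class
assert is_valid_score(-20) is False    # invalid class: below range
assert is_valid_score(250) is False    # invalid class: above range

# Boundary values: the edges of each class, where defects cluster.
for score, expected in [(-1, False), (0, True), (100, True), (101, False)]:
    assert is_valid_score(score) is expected

print("all black-box tests passed")
```

Note that the tests use only the specification (the 0..100 range), never the internal structure of the code, which is what makes them black-box.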
WHITE-BOX TESTING:
1. It is
conducted in order to improve code efficiency or structure.
2. It is also
called structural testing and glass box testing.
3. It is often
used for verification.
4. The design and
structure of the code are known to the tester.
5. Programmers of
the code conduct this test on the code.
6. It does not
facilitate testing communication amongst modules.
Basis-path testing is a white-box testing technique that makes use of program graphs to derive a set of linearly independent tests that will ensure coverage. This method can be applied to a procedural design or to source code.
Condition testing exercises the logical conditions contained in a program module. This method focuses on testing each condition in the program to ensure that it does not contain errors.
Data-flow testing selects test paths of a program according to the definitions and uses of variables in the program.
Loop testing provides a procedure for exercising loops of varying degrees of complexity.
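Basis-path testing can be sketched on a small hypothetical function: with two decisions, its cyclomatic complexity is 3, so three linearly independent paths cover every branch:

```python
# A sketch of basis-path testing. grade() is invented for illustration;
# it has cyclomatic complexity 3 (two decisions + 1), so three linearly
# independent paths through its program graph suffice for coverage.

def grade(score):
    if score >= 90:        # decision 1
        return "A"
    if score >= 60:        # decision 2
        return "B"
    return "F"

# One test case per independent path through the program graph.
assert grade(95) == "A"   # path: decision 1 true
assert grade(75) == "B"   # path: decision 1 false, decision 2 true
assert grade(40) == "F"   # path: both decisions false

print("all basis paths exercised")
```

Unlike the black-box tests above, these cases are chosen by inspecting the code's structure, which is what makes the technique white-box.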
REVERSE ENGINEERING:
1. Reverse
engineering is taking apart an object to see how it works in order to duplicate
or enhance the object.
2. Reverse engineering is used for many purposes:
· as a learning tool;
· as a way to make new, compatible products;
· to make software interoperate more effectively;
· to bridge data between different operating systems or databases;
· to uncover the undocumented features of commercial products.
3. Reverse engineering consists of the following steps:
· Observe and assess the mechanisms that make the device work.
· Study the inner workings of a mechanical device.
· Compare the actual device to the observations made and suggest improvements.
4. There are three important issues in reverse engineering:
· Abstraction level:
Ø This level helps in obtaining the design information from the source code.
Ø The abstraction level should be as high as possible.
· Completeness level:
Ø The completeness of the reverse engineering process refers to the level of detail that is provided at an abstraction level.
Ø Completeness decreases as the abstraction level increases.
· Directionality:
Ø Directionality means extracting information from the source code and presenting it to the software engineer.
Ø Directionality can be one-way or two-way.
PROCESS OF REVERSE ENGINEERING:
1. Initially, the dirty (unstructured) source code is taken, processed, and restructured.
2. After the restructuring process, the source code becomes clean source code.
3. The core of reverse engineering is an activity called extract abstractions.

4. In the abstraction activity, the engineer must evaluate the older program and extract information about its procedures, interfaces, data structures, or databases used.
5. The output of the reverse engineering process is a clear, unambiguous final specification obtained from the unstructured source code.
6. The final
specification helps in easy understanding of source code.
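The extract-abstractions activity can be sketched with Python's standard `ast` module, which statically recovers procedure and class names from source code without running the program (the sample source is invented):

```python
import ast

# A minimal sketch of "extract abstractions": statically scan source
# code and recover a high-level inventory of its procedures and classes.
# The SOURCE string is a made-up legacy fragment for illustration.

SOURCE = """
def load_record(key):
    pass

class Invoice:
    def total(self):
        pass
"""

def extract_abstractions(source):
    tree = ast.parse(source)
    names = []
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.ClassDef)):
            names.append(node.name)
    return sorted(names)

print(extract_abstractions(SOURCE))  # ['Invoice', 'load_record', 'total']
```

Real reverse engineering tools go much further (call graphs, data structures, database schemas), but they follow the same pattern: derive higher-level design information from the code itself.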
RE-ENGINEERING:
Re-engineering is the adjustment,
alteration, or partial replacement of a product in order to change its
function, adapting it to meet a new need.

Re-Engineering Process:
1. Decide what to re-engineer. Is it the whole software or a part of it?
2. Perform reverse engineering, in order to obtain specifications of the existing software.
3. Restructure the program if required, for example, changing function-oriented programs into object-oriented programs.
4. Restructure data as required.
5. Apply forward engineering concepts in order to get the re-engineered software.
Forward Engineering:
Forward engineering is the process of obtaining the desired software from the specifications in hand that were brought down by means of reverse engineering. It assumes that some software engineering was already done in the past. Forward engineering is the same as the software engineering process, with only one difference: it is always carried out after reverse engineering.

CASE TOOLS:
A CASE (Computer Aided Software Engineering)
tool is a generic term used to denote any form of automated support for
software engineering.
CASE tools are used:
1. To increase
productivity
2. To help
produce better quality software at lower cost
Project Management Tools:
These tools are used for project planning, cost and effort estimation, project scheduling, and resource planning. Managers have to ensure that project execution strictly complies with every step of software project management. Project management tools help in storing and sharing project information in real time throughout the organization.
Analysis and Design Tools:
Analysis tools help to gather requirements and automatically check for any inconsistency or inaccuracy in the diagrams, data redundancies, or errors and omissions. For example, the Accept 360 tool is used for requirements analysis and Visible Analyst for total analysis.
Design tools help software designers to design the block structure of the software, which may further be broken down into smaller modules using refinement techniques. These tools provide detailing of each module and the interconnections among modules. For example, Animated Software Design.
Programming Tools:
These tools
consist of programming environments like IDE (Integrated Development
Environment), in-built modules library and simulation tools. These tools
provide comprehensive aid in building software product and include features for
simulation and testing.
Integration and Testing Tools:
These tools are used for:
Ø data
acquisition (get data for testing)
Ø static
measurement (analyze source code without using test cases)
Ø dynamic
measurement (analyze source code during execution)
Ø simulation
(simulate function of hardware and other externals)
Ø test
management (assist in test planning, development, and control)
Ø cross-functional
(tools that cross test tool category boundaries)

