INDUSTRIAL MANPOWER RESOURCE ORGANIZER
DESCRIPTION
This project can be used as a reference for B.Tech and MCA projects.
1. PROBLEM DEFINITION
The objective behind developing IMPRO (Industrial Manpower Resource Organizer) is to maintain the hierarchy of the employees within an organization. It provides the manager and the administrative department an overall hierarchical view of the complete enterprise and helps them in managing employees.
Description:
Every organization has many managers, who are responsible for all the activities in the organization. These managers manage different aspects of organizational management, such as manufacturing, production, and marketing; one such essential management issue is manpower resource management, which IMPRO addresses.
As years progressed, the approach of management towards human capital changed. A hierarchical organization is now part of every enterprise, with its own identity and importance. In this scenario, bigger organizations need to put a lot of effort into the management of human resources, as they are the underlying capital asset of the organization. In doing so, along with the times, organizational information has changed from basic operations to a more strategic approach.
Features include:
• Maintenance of profile details of the employees, with retrieval as and when required.
• Overall and detailed views of the organization hierarchy, which are essential for making effective decisions.
• Maintenance of the data when the organization has many branches spread over a wide geographical area.
• Accessing one branch's information from another branch.
• Vacancy situations and their priority/effect on the organization's performance.
• Job rotation.
2. COMPANY PROFILE
2.1 PROBLEM DOMAIN
2.1.1 EXISTING SYSTEM
• The existing system is a manual system, in which users keep information in Excel sheets or on disk drives.
• There is no possibility of sharing when the data is on paper or disk drives.
• There is no rich user interface.
• There is very little security for the saved data; some data may be lost due to mismanagement.
• The system has no report generation.
• It is a limited and less user-friendly system.
• Users cannot restrict access to their information.
• There is no vacancy information facility for employees within the firm.
2.1.2 PROPOSED SYSTEM
IMPRO BENEFITS:
The project is identified by the merits of the system offered to the user. The merits of this project are as follows:
• It is a web-enabled project.
• This project lets the user enter data through simple and interactive forms. This makes it very easy for the client to enter the desired information.
• The user is mainly concerned about the validity of whatever data he is entering. There are checks at every stage of any new creation, data entry, or updating, so that the user cannot enter invalid data that could create problems at a later date.
• Sometimes the user finds at a later stage that he needs to update some of the information he entered earlier. There are options by which he can update the records. Moreover, there is a restriction that he cannot change the primary data field. This preserves the validity of the data to a great extent.
• The user is provided with the option of monitoring the records he entered earlier. He can see the desired records with the variety of options provided.
• From every part of the project the user is provided with links, through framing, so that he can go from one option of the project to another as required. This is bound to be simple and friendly as far as the user is concerned; that is, we can say that the project is user friendly, which is one of the primary concerns of any good project.
• Data storage and retrieval will become faster and easier to maintain, because data is stored in a systematic manner and in a single database.
• Decision making will be greatly enhanced by faster processing of information, since collecting data from information available on the computer takes much less time than in a manual system.
• Allocating sample results becomes much faster, because the user can see the records of previous years at any time.
• Data transfer is easier and faster through the latest technology associated with computers and communication.
• Through these features the system increases efficiency, accuracy, and transparency.
2.2 ADVANTAGES FOR EMPLOYEES
For employees, the advantages of IMPRO primarily concern access, time, and cost factors compared to those incurred in the manual approach. Employees can register online, view and update their profiles, view the vacancies list, and upload their resumes.
2.3 STUDY OF THE SYSTEM
For flexibility of use, the interface has been developed with graphics concepts in mind, accessed through a browser interface. The GUIs at the top level have been categorized as follows:
1. Administrative User Interface Design
2. The Operational and Generic User Interface Design
The administrative user interface concentrates on the consistent information that is practically part of the organizational activities and which needs proper authentication for data collection. The interface helps the administration with all the transactional states, like data insertion, data deletion, and data updating, along with executive data search capabilities.
The operational and generic user interface helps the users of the system in transactions on the existing data and required services. It also helps ordinary users manage their own information in a customized manner, as per the assisted flexibilities.
2.4 NUMBER OF MODULES
After careful analysis, the system has been identified as consisting of the following modules:
1 Administrator
2 HR Manager
3 Employee
4 Web Registration
5 Reports
6 Authentication
2.4.1 Administrator
The Administrator is treated as a super user in this system. He has all the privileges to do anything in the system. He is the person who adds the profile of an HR Manager.
1. He is the person who manages employees, branches, departments, and designations.
2. He takes care of maintaining the monthly database backup.
3. He can generate reports of branches and employees.
Other tasks done by the administrator: he can generate log files and perform backup and recovery of data at any time.
2.4.2 HR Manager
1. He has to log in with secured credentials.
2. He can recruit employees.
3. He can find department-wise vacancies and assign employees.
4. He can move employees from one department to another.
5. He can generate reports of employees and vacancies.
2.4.3 Employee
1. He has to log in with a username and password.
2. He can view his profile.
3. He can update his profile and change his password.
4. He can view the vacancies list and upload his resume.
2.4.4 Web Registration
The system has a registration process. Every user needs to submit his complete details in the registration form. Whenever a user's registration is completed, he/she automatically gets a user id and password. Using that user id and password, he/she can log into the system.
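The registration flow can be illustrated with a small data-access sketch. This is a minimal illustration, not the project's actual code: the connection string is a placeholder, and the insert is abbreviated to a few of the Registration columns described in the design chapter (the remaining NOT NULL columns would be supplied the same way).

using System.Data.SqlClient;

public class RegistrationDao
{
    // Placeholder connection string; the project's real configuration is not shown in this document.
    private const string ConnStr = "Data Source=.;Initial Catalog=IMPRO;Integrated Security=True";

    // Inserts a new user and returns the auto-generated user id,
    // which is then shown to the user together with the chosen password.
    public long Register(string userName, string password, string firstName)
    {
        using (SqlConnection con = new SqlConnection(ConnStr))
        using (SqlCommand cmd = new SqlCommand(
            "INSERT INTO Registration (Username, Password, Firstname, Dor, status) " +
            "VALUES (@user, @pass, @first, GETDATE(), 'New'); " +
            "SELECT CAST(SCOPE_IDENTITY() AS bigint);", con))
        {
            cmd.Parameters.AddWithValue("@user", userName);
            cmd.Parameters.AddWithValue("@pass", password);
            cmd.Parameters.AddWithValue("@first", firstName);
            con.Open();
            return (long)cmd.ExecuteScalar();
        }
    }
}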
2.4.5 Reports
Different kinds of reports are generated by the system:
1. Branches Report
2. Employees Report
3. Vacancies Report
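As a sketch of how such a report could be produced, the following assumes ADO.NET and the Branches table described in the design chapter; the class name and connection string are illustrative only:

using System.Data;
using System.Data.SqlClient;

public class ReportDao
{
    // Placeholder connection string.
    private const string ConnStr = "Data Source=.;Initial Catalog=IMPRO;Integrated Security=True";

    // Returns all branches as a DataTable, ready for binding to a grid or report control.
    public DataTable BranchesReport()
    {
        using (SqlDataAdapter da = new SqlDataAdapter(
            "SELECT Branchid, Branchname, phoneno, address, status FROM Branches", ConnStr))
        {
            DataTable table = new DataTable("Branches");
            da.Fill(table);
            return table;
        }
    }
}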
2.4.6 Authentication
Authentication is nothing but providing security to the system. Every user must enter the system through the login page. The login page restricts unauthorized users. A user must provide his credentials, like user id and password, to log into the system. For that, the system maintains data for all users. Whenever a user enters his user id and password, the system checks the database for the user's existence. If the user exists, he is treated as a valid user; otherwise the request is rejected.
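A minimal sketch of this credential check, assuming the user accounts live in the Registration table described later; the parameterized query keeps the lookup safe, and the connection string is a placeholder:

using System.Data.SqlClient;

public class LoginDao
{
    // Placeholder connection string.
    private const string ConnStr = "Data Source=.;Initial Catalog=IMPRO;Integrated Security=True";

    // Returns true when a matching user id and password exist in the database.
    public bool IsValidUser(string userName, string password)
    {
        using (SqlConnection con = new SqlConnection(ConnStr))
        using (SqlCommand cmd = new SqlCommand(
            "SELECT COUNT(*) FROM Registration WHERE Username = @user AND Password = @pass", con))
        {
            cmd.Parameters.AddWithValue("@user", userName);
            cmd.Parameters.AddWithValue("@pass", password);
            con.Open();
            return (int)cmd.ExecuteScalar() > 0;
        }
    }
}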
2.5 INPUTS AND OUTPUTS
The major inputs, outputs, and functions of the system are as follows:
Inputs:
1. Admin enters his user id and password to log in.
2. Admin adds employee details to the system.
3. Admin adds branches, departments, and designations, and assigns HR Managers.
4. Employee enters his user id and password to log in.
5. A new user gives his complete personal, address, and phone details for registration.
6. HR Manager gives information to generate various kinds of reports.
7. HR adds vacancies and assigns employees to designations.
Outputs:
1. Admin has his own home page.
2. HR Manager can get the entire vacancies list.
3. Employees have their own home pages.
4. Admin can get all the employee details.
5. Different kinds of reports are generated.
2.6 SDLC METHODOLOGY
This document plays a vital role in the software development life cycle (SDLC), as it describes the complete requirements of the system. It is meant for use by the developers and will be the basis during the testing phase. Any changes made to the requirements in the future will have to go through a formal change approval process.
The SPIRAL MODEL was defined by Barry Boehm in his 1988 article, "A Spiral Model of Software Development and Enhancement". This model was not the first to discuss iterative development, but it was the first to explain why the iteration matters.
As originally envisioned, the iterations were typically 6 months to 2 years long. Each phase starts with a design goal and ends with the client reviewing the progress thus far. Analysis and engineering efforts are applied at each phase of the project, with an eye toward the end goal of the project.
The steps of the Spiral Model can be generalized as follows:
1. The new system requirements are defined in as much detail as possible. This usually involves interviewing a number of users representing all the external and internal users and other aspects of the existing system.
2. A preliminary design is created for the new system.
3. A first prototype of the new system is constructed from the preliminary design. This is usually a scaled-down system, and represents an approximation of the characteristics of the final product.
4. A second prototype is evolved by a fourfold procedure:
1. Evaluating the first prototype in terms of its strengths, weaknesses, and risks.
2. Defining the requirements of the second prototype.
3. Planning and designing the second prototype.
4. Constructing and testing the second prototype.
5. At the customer's option, the entire project can be aborted if the risk is deemed too great. Risk factors might involve development cost overruns, operating-cost miscalculation, or any other factor that could, in the customer's judgment, result in a less-than-satisfactory final product.
6. The existing prototype is evaluated in the same manner as was the previous prototype, and if necessary, another prototype is developed from it according to the fourfold procedure outlined above.
7. The preceding steps are iterated until the customer is satisfied that the refined prototype represents the final product desired.
8. The final system is constructed, based on the refined prototype.
9. The final system is thoroughly evaluated and tested. Routine maintenance is carried out on a continuing basis to prevent large-scale failures and to minimize downtime.
Fig: Spiral Model
2.7 ADVANTAGES
1. Estimates (i.e., budget, schedule, etc.) become more realistic as work progresses, because important issues are discovered earlier.
2. It is better able to cope with the changes that software development generally entails.
3. Software engineers can get their hands in and start working on the core of a project earlier.
2.8 INPUT DESIGN
Input design is a part of overall system design. The main objectives during input design are as given below:
1. To produce a cost-effective method of input.
2. To achieve the highest possible level of accuracy.
3. To ensure that the input is acceptable to and understood by the user.
Input Stages: The main input stages can be listed as below:
1. Data recording
2. Data transcription
3. Data conversion
4. Data verification
5. Data control
6. Data transmission
7. Data validation
8. Data correction
Input Types:
It is necessary to determine the various types of input. Inputs can be categorized as follows:
1. External inputs, which are prime inputs for the system.
2. Internal inputs, which are user communications with the system.
3. Operational inputs, which are the computer department's communications to the system.
4. Interactive inputs, which are entered during a dialogue.
Input Media:
At this stage a choice has to be made about the input media. To decide on the input media, consideration has to be given to:
1. Type of input
2. Flexibility of format
3. Speed
4. Accuracy
5. Verification methods
6. Rejection rates
7. Ease of correction
8. Storage and handling requirements
9. Security
10. Ease of use
11. Portability
Keeping in view the above description of the input types and input media, it can be said that most of the inputs are of the internal and interactive form. As input data is to be directly keyed in by the user, the keyboard can be considered the most suitable input device.
2.9 OUTPUT DESIGN
Outputs from computer systems are required primarily to communicate the results of processing to users. They are also used to provide a permanent copy of the results for later consultation. The various types of outputs in general are:
1. External outputs, whose destination is outside the organization.
2. Internal outputs, whose destination is within the organization and which are the user's main interface with the computer.
3. Operational outputs, whose use is purely within the computer department.
4. Interface outputs, which involve the user communicating directly with the user interface.
Output Definition: The outputs should be defined in terms of the following points:
• Type of the output
• Content of the output
• Format of the output
• Location of the output
• Frequency of the output
• Volume of the output
• Sequence of the output
It is not always desirable to print or display data exactly as it is held on the computer. It should be decided which form of the output is the most suitable. For example:
1. Will decimal points need to be inserted?
2. Should leading zeros be suppressed?
Output Media:
In the next stage it is to be decided which medium is the most appropriate for the output. The main considerations when deciding about the output media are:
1. The suitability of the device to the particular application.
2. The need for a hard copy.
3. The response time required.
4. The location of the users.
5. The software and hardware available.
Keeping in view the above description, the project's outputs mainly come under the category of internal outputs. The main outputs desired according to the requirement specification are to be generated as hard copies as well as queries to be viewed on the screen. Keeping in view these outputs, the format for the output is taken from the outputs currently being obtained after manual processing. The standard printer is to be used as the output medium for hard copies.
3. REQUIREMENT ANALYSIS
3.1 FEASIBILITY STUDY
Preliminary investigation examines project feasibility, the likelihood that the system will be useful to the organization. The main objective of the feasibility study is to test the technical, operational, and economic feasibility of adding new modules and debugging the old running system. Any system is feasible given unlimited resources and infinite time. There are three aspects in the feasibility study portion of the preliminary investigation:
• Technical Feasibility
• Economic Feasibility
• Operational Feasibility
3.1.1 Technical Feasibility
The technical issues usually raised during the feasibility stage of the investigation include the following:
• Does the proposed equipment have the technical capacity to hold the data required to use the new system?
Yes, the proposed equipment has the technical capacity to hold the data required to use the new system, and the requirements are:
• Personal computer
• Microsoft Windows operating system
• Microsoft Visual Studio 2008
• SQL Server 2008
• Can the proposed system be upgraded in the future, if required?
If necessary, the system can be upgraded.
• Are there technical guarantees of accuracy, reliability, ease of access, and data security?
Yes. Permissions are granted to users based on the roles specified. Therefore, the system provides technical guarantees of accuracy, reliability, and security.
So the system developed is technically feasible, and it provides easy access to the users.
3.1.2 Economic Feasibility
The proposed system can be developed technically, and since the organization is already equipped with all the proposed technologies, it remains a good investment for the organization: there is no need to go for new technologies, so implementing the proposed system is economically feasible. It does not require any additional hardware or software.
3.1.3 Operational Feasibility
Operational feasibility aspects of the project are to be taken as an important part of the project implementation. Some of the important issues raised to test the operational feasibility of a project include the following:
• Is there sufficient support for the management from the users?
Yes; with the users entering data through the provided fields, the system is easy for the management to handle.
• Will the system be used and work properly once it is developed and implemented?
The system is user friendly and works properly once it is developed and implemented.
Finally, since the management issues and user requirements have been taken into consideration, this system is operationally feasible.
Conclusion
This system is targeted to be in accordance with the above-mentioned issues. Beforehand, the management issues and user requirements have been taken into consideration.
3.2 DATA MODELING
3.2.1 ER-DIAGRAM
In software engineering, an entity-relationship model is an abstract and conceptual representation of data. Entity-relationship modeling is a database modeling method, used to produce a type of conceptual schema or semantic data model of a system, often a relational database, and its requirements in a top-down fashion. Diagrams created by this process are called entity-relationship diagrams, ER diagrams, or ERDs.
Definition: An entity-relationship (ER) diagram is a specialized graphic that illustrates the interrelationships between entities in a database. ER diagrams often use symbols to represent three different types of information. Boxes are commonly used to represent entities, diamonds are normally used to represent relationships, and ovals are used to represent attributes.
3.3 DATAFLOW DIAGRAMS
A dataflow diagram (DFD) is a significant modelling technique for analyzing and constructing information processes. DFD literally means an illustration that explains the course or movement of information in a process. A DFD illustrates this flow of information in a process based on the inputs and outputs. A DFD can be referred to as a Process Model.
Additionally, a DFD can be utilized to visualize data processing or a structured design. A DFD illustrates technical or business processes with the help of the external data stores, the data flowing from one process to another, and the results.
A designer usually draws a context-level DFD first, showing the relationship between the entities inside and outside of the system as one single step. This basic DFD can be decomposed into a lower-level diagram demonstrating similar steps and exhibiting details of the system being modelled. Numerous levels may be required to explain a complicated system.
Therefore, the principle for creating a DFD is that one system may be decomposed into subsystems, which in turn can be decomposed into subsystems at a much lower level, and so on and so forth. Every subsystem in a DFD represents a process, and in that process activity the input data is processed. Processes cannot be decomposed after reaching a certain lower level. Each process in a DFD characterizes an entire system. In a DFD, data is introduced into the system from the external environment; once entered, the data flows between processes, and the processed data is produced as an output or a result.
3.3.1 DFD SYMBOLS
In the DFD, there are four symbols:
1. A square defines a source (originator) or destination of system data.
2. An arrow identifies data flow; it is the pipeline through which information flows.
3. A circle or a bubble represents a process that transforms incoming data flows into outgoing data flows.
4. An open rectangle is a data store: data at rest, or a temporary repository of data.
[Figure: DFD symbol notation, showing a process that transforms data flow, a source or destination of data, a data flow, and a data store]
3.3.1.1 SALIENT FEATURES OF DFDs
The DFD shows the flow of data, not of control; loops and decision-making considerations do not appear on a DFD.
1. The DFD does not indicate the time factor involved in any process, i.e., whether the data flow takes place daily, weekly, monthly, or yearly.
2. The sequence of events is not brought out on the DFD.
3.3.1.2 RULES GOVERNING THE DFDs
Process
1. No process can have only outputs.
2. No process can have only inputs. If an object has only inputs, then it must be a sink.
3. A process has a verb phrase label.
Data Store
1. Data cannot move directly from one data store to another data store; a process must move the data.
2. Data cannot move directly from an outside source to a data store; a process, which receives data from the source, must place the data into the data store.
3. A data store has a noun phrase label.
Source or Sink
The origin and/or destination of data.
1. Data cannot move directly from a source to a sink; it must be moved by a process.
2. A source and/or sink has a noun phrase label.
Data Flow
1. A data flow has only one direction of flow between symbols. It may flow in both directions between a process and a data store to show a read before an update; the latter is usually indicated, however, by two separate arrows, since these happen at different times.
2. A join in a DFD means that exactly the same data comes from any of two or more different processes, data stores, or sinks to a common location.
3. A data flow cannot go directly back to the same process it leaves. There must be at least one other process that handles the data flow, produces some other data flow, and returns the original data into the beginning process.
4. A data flow to a data store means update (delete or change).
5. A data flow from a data store means retrieve or use.
A data flow has a noun phrase label; more than one data flow noun phrase can appear on a single arrow as long as all of the flows on the same arrow move together as one package.
3.3.2 ALL LEVELS OF DFDs
Context Level DFD
[Diagram: ADMINISTRATOR, HR, and EMPLOYEE entities interacting with the IMPRO system]
Login DFD
[Diagram: open login form → enter user name and password → checking → user home page if valid; new users proceed through registration]
ADMIN First-Level DFD
[Diagram: admin opens the admin home page and can add/modify branches, employees/HR, and departments, backed by the registration, branches, and departments data stores, with reports as output]
ADMIN Second-Level DFD
[Diagram: admin opens the branches form to add branches, update branches, and assign HR, backed by the branches data store, with reports to admin and HR]
ADMIN Level-3 DFD
[Diagram: admin opens the home page and can add/update branches, assign/change HR, add/update departments, post HR vacancies, and view branches and departments info, backed by the registration, branches, and departments data stores, with reports to admin and HR/employee]
HR First-Level DFD
[Diagram: HR opens the HR home page to add employees and add/update vacancies, backed by the registration and vacancies data stores, with reports to HR and employee]
HR Second-Level DFD
[Diagram: HR opens the HR home page to add employees, add vacancies, and assign departments, backed by the registration, vacancies, and departments data stores, with reports to HR and employee]
HR Level-3 DFD
[Diagram: HR opens the HR home page to add employees, add/update vacancies, assign departments, and assign designations, backed by the vacancies, departments, and designations data stores, with reports to HR and employee]
EMPLOYEE First-Level DFD
[Diagram: employee opens the employee home page to manage/update his profile and apply for vacancies, backed by the registration and vacancies data stores, with reports as output]
EMPLOYEE Second-Level DFD
[Diagram: employee opens the employee home page to manage his profile, update his profile, and apply for vacancies, backed by the registration and applications data stores, with reports as output]
3.3.3 ACTIVITY DIAGRAMS
Registration Activity Diagram:
[Diagram: enter registration details → submit → get details → validate data → accepted → enter user name and password → submit → get the details → validate details → successfully registered]
Login Activity Diagram:
[Diagram: enter user name and password → submit → get details → validate data → accepted (yes) or rejected (no)]
Admin Activity Diagram:
[Diagram: enter login details → get the data → validate data → on success, add branches or add employees → submit → validate details; invalid data loops back]
HR Manager Activity Diagram:
[Diagram: enter login details → get the data → validate data → on success, view vacancies list or recruit employees → submit → validate details; invalid data loops back]
Employee Activity Diagram:
[Diagram: enter login details → get the data → validate data → on success, view vacancies or send resume → submit → validate details; invalid data loops back]
3.4 DATA DICTIONARY
S.No | Name | Data Type | Size | Constraint | Description | Location
1 | Abbreviation | Varchar | 50 | Not null | Abbreviation of the designation | Designations
2 | Abbreviation | Varchar | 50 | Not null | Description of the department | Department
3 | Address | Varchar | 250 | Not null | Address of employee | Registration
4 | Address | Varchar | 250 | Not null | Address of branch | Branches
5 | Answer | Varchar | 50 | Not null | Answer | Registration
6 | App_id | Big int | | Primary key | Indicates application id | Applications
7 | Appdate | Datetime | | Not null | Date of application posted | Applications
8 | Attfile | Varchar | max | Not null | Attached file | Applications
9 | Branchid | Int | | Foreign key | References branch id | Vacancies
10 | Branchid | Int | | Foreign key | References branch id | Registration
11 | Branchid | Int | | Foreign key | References branch id | Department
12 | Branchid | Int | | Primary key | Indicates branch id | Branches
13 | Branchname | Varchar | 50 | Not null | Branch name | Branches
14 | Contactname | Varchar | 45 | Not null | Vacancy creator name | Vacancies
15 | Contactno | Varchar | 15 | | Another phone number | Registration
16 | Departmentid | Int | | Foreign key | References department id in section | Section
17 | departmentid | Int | | Primary key | Indicates department id | Department
18 | Deptcode | Varchar | 25 | Not null | Code of the department | Department
19 | deptid | Int | | Foreign key | References department id in vacancies | Vacancies
20 | deptid | Int | | Foreign key | References department id in registration | Registration
21 | Desgcode | Varchar | 25 | Not null | Code of the designation | Designations
22 | Desgid | Int | | Foreign key | References designation id in vacancies | Vacancies
23 | desgid | Int | | Foreign key | References designation id in registration | Registration
24 | Designationid | Int | | Primary key | Indicates designation id | Designations
25 | Dob | Datetime | | Not null | Date of birth | Registration
26 | Dor | Datetime | | Not null | Date of registration | Registration
27 | education | Varchar | 45 | Not null | Educational qualifications | Vacancies
28 | Email | Varchar | 50 | Not null | Email | Registration
29 | experience | Varchar | 45 | Not null | Experience | Vacancies
30 | Fathername | Varchar | 45 | Not null | Father or guardian name | Registration
31 | Firstname | Varchar | 25 | Not null | First name | Registration
32 | gender | Varchar | 5 | Not null | Male or female | Registration
33 | Hintquestion | Varchar | 50 | Not null | Hint question | Registration
34 | hrid | Long int | | Foreign key | References HR manager id | Branches
35 | Image | Varbinary | max | Not null | User image | Registration
36 | Job posted | Datetime | | Not null | Date the job was posted | Vacancies
37 | Jobdescription | Varchar | 250 | Not null | Description of the job | Vacancies
38 | Last date | Datetime | | Not null | Last date of the posted job | Vacancies
39 | Lastname | Varchar | 45 | | Last name | Registration
40 | Middlename | Varchar | 45 | | Middle name | Registration
41 | Noofvacancies | Int | | Not null | Indicates number of vacancies | Vacancies
42 | Password | Varchar | 10 | Not null | Password | Registration
43 | Phoneno | Varchar | 15 | Not null | Phone number | Registration
44 | phoneno | Varchar | 15 | Not null | Phone number | Branches
45 | Priority | Varchar | 15 | Not null | Indicates priority | Vacancies
46 | qualification | Varchar | 70 | Not null | Qualification of the employee | Registration
47 | Receiverid | Long int | | Foreign key | Indicates receiver id | Applications
48 | Recrutedby | Long int | | Not null | Indicates the recruiting HR id | Registration
49 | Secname | Varchar | 50 | Not null | Section name | Section
50 | sectionid | Int | | Foreign key | References section id in registration | Registration
51 | Sectionid | Int | | Primary key | Indicates the section id | Section
52 | Senid | Big int | | Foreign key | Indicates the sender id | Applications
53 | Status | Varchar | 5 | Not null | Indicates the status | Vacancies
54 | status | Varchar | 5 | Not null | Status of employee | Registration
55 | status | Varchar | 5 | Not null | Status of the branch | Branches
56 | Status | Varchar | 5 | Not null | Status of the section | Section
57 | status | Varchar | 5 | Not null | Status of the designation | Designations
58 | status | Varchar | 5 | Not null | Status of the department | Department
59 | status | Varchar | 5 | Not null | Status of the application | Applications
60 | teleponeno | Varchar | 15 | Not null | Call center number | Vacancies
61 | Userid | Big int | | Primary key | User registration id | Registration
62 | Username | Varchar | 45 | Unique | User name | Registration
63 | vacancyid | Int | | Primary key | Indicates vacancy id | Vacancies
64 | Vacancyid | Int | | Foreign key | References vacancy id in applications | Applications
65 | Workexp | Varchar | 50 | Not null | Previous work experience or fresher | Registration
4. SYSTEM DESIGN
Design is the first step in the development phase of any engineering product or system. It may be defined as "the process of applying various techniques and principles for the purpose of defining a device, a process, or a system in sufficient detail to permit its physical realization."
Software design is an iterative process through which requirements are translated into a 'blueprint' for the construction of software. The design is represented at a high level of abstraction, a level that can be directly traced to specific data, functional, and behavioural requirements.
Design Principles
Basic design principles that enable the software engineer to navigate the design process are:
• The design process should not suffer from "tunnel vision".
• The design should be traceable to the analysis model.
• The design should not reinvent the wheel.
• The design should exhibit uniformity and integration.
4.1 DATABASE DESIGN
4.1.1 NORMALIZATION
Normalization is the process of converting a relation to a standard form. The process is used to handle the problems that can arise due to data redundancy, i.e., repetition of data in the database, to maintain data integrity, and to handle the problems that can arise due to insertion, updating, and deletion anomalies.
Decomposing is the process of splitting a relation into multiple relations to eliminate anomalies and maintain data integrity. To do this we use normal forms for structuring the relations.
Insertion anomaly: inability to add data to the database due to the absence of other data.
Deletion anomaly: unintended loss of data due to deletion of other data.
Update anomaly: data inconsistency resulting from data redundancy and partial update.
Normal Forms: these are rules for structuring relations that eliminate anomalies.
FIRST NORMAL FORM (1NF)
A relation is said to be in first normal form if the values in the relation are atomic for every attribute in the relation. By this we mean simply that no attribute value can be a set of values or, as it is sometimes expressed, a repeating group.
SECOND NORMAL FORM (2NF)
A relation is said to be in second normal form if it is in first normal form and it satisfies any one of the following rules:
1) The primary key is not a composite primary key.
2) No non-key attributes are present.
3) Every non-key attribute is fully functionally dependent on the full set of the primary key.
THIRD NORMAL FORM (3NF)
A relation is said to be in third normal form if there exist no transitive dependencies.
Transitive Dependency: if two non-key attributes depend on each other as well as on the primary key, they are said to be transitively dependent. For example, if the branch name were stored in the Registration table, userid → branchid → branchname would be a transitive dependency; instead, branch details are kept in the separate Branches table.
The above normalization principles were applied to decompose the data into multiple tables, thereby making the data maintainable in a consistent state.
4.1.2 TABLE STRUCTURE
Name: Vacancies
Description: stores the details of vacancies
Name | Data Type | Size | Constraint | Description
vacancyid | Int | | Primary key | Indicates vacancy id
branchid | Int | | Foreign key | Indicates branch id
deptid | Int | | Foreign key | Department id
Desgid | Int | | Foreign key | Indicates designation id
education | Varchar | 45 | Not null | Educational qualifications
experience | Varchar | 45 | Not null | Experience
Jobdescription | Varchar | 250 | Not null | Description of the job
Job posted | Datetime | | Not null | Date the job was posted
Noofvacancies | Int | | Not null | Indicates number of vacancies
contactname | Varchar | 45 | Not null | Vacancy creator name
teleponeno | Varchar | 15 | Not null | Call center number
Priority | Varchar | 15 | Not null | Indicates priority
Last date | Datetime | | Not null | Last date of the posted job
Status | Varchar | 15 | Not null | Indicates the status
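A sketch of how a row could be posted into this table from C#, abbreviated to a few of the columns above (the remaining NOT NULL columns would be supplied the same way); the connection string is a placeholder:

using System.Data.SqlClient;

public class VacancyDao
{
    // Placeholder connection string.
    private const string ConnStr = "Data Source=.;Initial Catalog=IMPRO;Integrated Security=True";

    // Posts a new vacancy; column names follow the table structure above.
    public void AddVacancy(int branchId, int deptId, int desgId, string education, int count)
    {
        using (SqlConnection con = new SqlConnection(ConnStr))
        using (SqlCommand cmd = new SqlCommand(
            "INSERT INTO Vacancies (branchid, deptid, Desgid, education, Noofvacancies, " +
            "[Job posted], Status) VALUES (@branch, @dept, @desg, @edu, @count, GETDATE(), 'Open')", con))
        {
            cmd.Parameters.AddWithValue("@branch", branchId);
            cmd.Parameters.AddWithValue("@dept", deptId);
            cmd.Parameters.AddWithValue("@desg", desgId);
            cmd.Parameters.AddWithValue("@edu", education);
            cmd.Parameters.AddWithValue("@count", count);
            con.Open();
            cmd.ExecuteNonQuery();
        }
    }
}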
Name: Registration
Description: stores the details of registration
Name | Data Type | Size | Constraint | Description
Userid | Long int | | Primary key | User registration id
Username | Varchar | 45 | Unique | User name
Password | Varchar | 10 | Not null | Password
Firstname | Varchar | 25 | Not null | First name
Middlename | Varchar | 45 | | Middle name
Lastname | Varchar | 45 | | Last name
Fathername | Varchar | 45 | Not null | Father or guardian name
gender | Varchar | 5 | Not null | Male or female
qualification | Varchar | 70 | Not null | Qualification of the employee
Workexp | Varchar | 50 | Not null | Previous work experience or fresher
Dob | Datetime | | Not null | Date of birth
Dor | Datetime | | Not null | Date of registration
Address | Varchar | 250 | Not null | Address
Email | Varchar | 50 | Not null | Email
Phoneno | Varchar | 15 | Not null | Phone number
Contactno | Varchar | 15 | | Another phone number
Image | Varbinary | max | Not null | User image
Hintquestion | Varchar | 50 | Not null | Hint question
Answer | Varchar | 50 | Not null | Answer
Recrutedby | Long int | | Not null | Recruiting HR id
branchid | Int | | Foreign key | Branch id
deptid | Int | | Foreign key | Department id
desgid | Int | | Foreign key | Designation id
sectionid | Int | | Foreign key | Section id
status | Varchar | 5 | Not null | Status of employee
Name: Branches
Description: stores the details of branches
Name | Data Type | Size | Constraint | Description
Branchid | Int | | Primary key | Branch id
Branchname | Varchar | 50 | Not null | Branch name
phoneno | Varchar | 15 | Not null | Phone number
address | Varchar | 250 | Not null | Address
hrid | Long int | | Foreign key | HR manager id
status | Varchar | 5 | Not null | Status of the branch
Name: Section
Description: stores the details of sections
Name | Data Type | Size | Constraint | Description
Sectionid | Integer | | Primary key | Indicates the section id
Secname | Varchar | 50 | Not null | Section name
Departmentid | Integer | | Foreign key | Department id
Status | Varchar | 5 | Not null | Status of the section
Name: Departments
Description: stores the details of departments
Name | Data Type | Size | Constraint | Description
departmentid | Int | | Primary key | Department id
Deptcode | Varchar | 25 | Not null | Department name
abbreviation | Varchar | 50 | Not null | Description of the department
branchid | Int | | Foreign key | Branch id
status | Varchar | 5 | Not null | Status of the department
Name: Designation
Description: stores the details of designations
Name | Data Type | Size | Constraint | Description
Designationid | Int | | Primary key | Designation id
Desgcode | Varchar | 25 | Not null | Designation name
Abbreviation | Varchar | 50 | Not null | Abbreviation of the designation
status | Varchar | 5 | Not null | Status of the designation
Name: Applications
Description: stores the details of applications for vacancies
Name | Data Type | Size | Constraint | Description
App_id | Long int | | Primary key | Indicates application id
Vacancyid | Int | | Foreign key | Indicates vacancy id
appdate | Datetime | | Not null | Date of application posted
Senid | Long int | | Foreign key | Indicates the sender id
Receiverid | Long int | | Foreign key | Indicates receiver id
Attfile | Varchar | max | Not null | Attached file
Status | Varchar | 5 | Not null | Status of the application
4.2 ARCHITECTURE DESIGN
To implement a web application, a client-server architecture is required. The most popular client-server architectures are the two-tier and the three-tier architecture. The choice of architecture affects the development time and the future flexibility and maintenance of the application. While selecting the architecture most suitable for an application, many factors are considered, including the complexity of the application and the number of users and their geographical dispersion. This system is designed based on the traditional three-tier architecture used by many web applications. Three-tier architecture includes a presentation layer, a business rules/logic layer, and a data layer, as shown in the figure below.
Figure: Three-tier architecture
[Diagram: Tier 1, the Presentation/Client Layer: user interaction with the system is entirely through this layer. Tier 2, the Business Rules/Logic Layer: consists of compiled business objects and components. Tier 3, the Data Layer: SQL Server, Oracle, or any other database engine required to support the web application, backed by the database.]
The three-tier architecture is generally used when an effective distributed client/server design is needed that provides
• increased performance,
• flexibility,
• maintainability,
• reusability, and
• scalability.
This model hides the complexity of distributed processing from the user. These features have made the three-tier architecture a popular choice over the two-tier architecture for Internet applications. The three layers are discussed below.
The Data layer
The Data layer is responsible for data storage. Primarily this tier (layer) consists of one or
more relational databases and/or file systems.
The Business Rules/Logic layer
The Business Rules/Logic layer is the middleman between the presentation layer and the data layer. This middle tier was introduced to overcome the deployment limitation of the two-tier architecture (whenever the application logic changed, the application had to be redistributed to each and every client). The middle tier provides process management where business logic and rules are executed, and it can accommodate hundreds of users.
The Presentation Layer
The Presentation Layer, also called the Client tier, is responsible for the presentation of
data, receiving user events, and controlling the user interface. The user interaction with the
system is entirely through this layer.
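As a sketch of how these three tiers separate in code (class names are illustrative, not taken from the project), the presentation layer calls only the logic layer, which enforces a business rule before delegating to the data layer:

// Data layer: responsible only for storage and retrieval.
public class EmployeeData
{
    public void Save(string name)
    {
        // An ADO.NET insert into the Registration table would go here.
    }
}

// Business rules/logic layer: the middleman that enforces the rules.
public class EmployeeLogic
{
    private readonly EmployeeData data = new EmployeeData();

    public void Register(string name)
    {
        if (string.IsNullOrEmpty(name))
            throw new System.ArgumentException("Name is required."); // business rule
        data.Save(name);
    }
}

// Presentation layer (e.g., an ASP.NET code-behind) talks only to the logic layer:
//   new EmployeeLogic().Register(txtName.Text);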
4.3 INTERFACE DESIGN
4.3.1 SCREENS
[Application screenshots omitted]
4.3.2 REPORTS
[Report screenshots omitted]
5. SYSTEM TESTING
Testing is the phase where the errors remaining from all the previous phases must be detected. Hence, testing plays a very critical role in quality assurance and in ensuring the reliability of software.
Testing of designed software consists of providing the software with a set of test inputs and observing whether the software behaves as expected. If the software fails to behave as expected, then the conditions under which the failure occurs are noted for debugging and correction.
The observation of a failure implies that a fault is present, but the presence of a fault does not imply that a failure must occur.
We have tested our project in many ways, e.g., by storing information about employees, branches, and their departments in the database and checking the information by retrieving it from the database.
Testing Objectives
A good test case is one that has a high probability of finding an undiscovered error. Testing is a process of executing a program with the intent of finding an error. A successful test is one that uncovers an as yet undiscovered error. Testing cannot show the absence of defects; it can only show that software defects are present.
Testing Principles: Before applying methods to design effective test cases, a software engineer must understand the basic principles that guide software testing.
All tests should be traceable to customer requirements.
Tests should be planned long before testing begins.
Testing should begin "in the small" and progress toward testing "in the large".
Exhaustive testing is not possible.
To be most effective, an independent third party should conduct testing.
5.1 TEST CASE DESIGN
The primary objective of test case design is to derive a set of tests that has the highest likelihood of uncovering defects in the software. Testing is the process of executing a program with the intent of finding an as yet undiscovered error. To accomplish this objective, two different categories of test case techniques are used.
5.1.1 BLACK BOX TESTING
Black box testing involves tests that are conducted at the software interface. They are used to demonstrate that software functions are operational, that input is properly accepted and that output is correctly produced, while at the same time searching for errors.
Case 1:
In the registration form, when the user clicks on the update link, the update form opens; the user then enters the details, changes the password, etc., and the database should be updated accordingly.
Case 2:
In the update profile, when the user clicks on change password, the change password form opens, showing the current password of the employee; the user enters the new password, and the password should be changed.
5.1.2 WHITE BOX TESTING
Knowing the internal workings of the system, tests can be conducted to ensure that "all gears mesh", that is, that the internal operation performs according to specifications and that all internal components have been adequately exercised.
It is predicated on close examination of procedural detail and logical detail, providing test cases that exercise specific sets of conditions and/or loops and test paths through the software. The basis path method enables the test case designer to derive a logical complexity measure of a procedural design and to use this measure as a guide for defining a basis set of execution paths.
Using white box testing methods, the software engineer can derive test cases that:
1. Guarantee that all independent paths within a module have been exercised at least once (Basis Path Testing).
2. Exercise all logical decisions on their true and false sides (Condition Testing).
3. Exercise internal data structures to assure their validity (Data Flow Testing).
4. Exercise all loops at their boundaries and within their operational bounds (Loop Testing).
Data Flow Testing
The data flow testing method selects test paths of a program according to the locations of definitions and uses of variables in the program.
Test Case Explanation
Case 1:
When the user logs on to the application, they should enter their user name and password in the login form for application security. The user must give their username and password without spelling mistakes and in the proper format.
Case 2:
When the user logs on to the application, they should enter their user name and password in the login form for application security. If the user makes a spelling mistake, the error message "Invalid Username/Password" is displayed.
TEST CASES
Test Case 1 – Login
Test 1:
• Incorrect input: an empty required field (user name and password).
• Pass criteria: an appropriate error message should be displayed and the user shouldn't be allowed to log in.
• Correct input: right user name and password.
• Pass criteria: the user should be directed to the secure web page that the user requested.
Test 2:
• Incorrect input: wrong user name and/or wrong password.
• Pass criteria: the user shouldn't be allowed to log into the system and an appropriate error message should be displayed.
• Correct input: right user name and password.
• Pass criteria: the user should be logged into the system and directed to the requested secure web page.
Test Case 2 – New User Registration
• Incorrect input: an empty required field (first name, middle name, last name, photo, address, date of birth, phone number, user name, password, e-mail address).
• Pass criteria: an appropriate error message should be displayed and the user shouldn't be allowed to create an account.
• Correct input: fill in all required fields in the correct format.
• Pass criteria: the user information should be added into the database.
Test Case 3 – Generate Report
• Incorrect input: an empty required field (select date).
• Pass criteria: an appropriate error message should be displayed and the user should not be able to generate a report.
• Correct input: enter (select) a correct date.
• Pass criteria: the user (admin) should be allowed to generate the report.
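Test Case 1 could be automated along the following lines. This is a sketch only: LoginValidator is a hypothetical helper standing in for the project's actual login logic, and NUnit is assumed as the test framework.

using NUnit.Framework;

// Hypothetical validation helper; the project's real login logic lives in the ASP.NET code-behind.
public static class LoginValidator
{
    // Returns an error message for invalid input, or null when the credentials may be checked further.
    public static string Validate(string userName, string password)
    {
        if (string.IsNullOrEmpty(userName) || string.IsNullOrEmpty(password))
            return "User name and password are required.";
        return null;
    }
}

[TestFixture]
public class LoginTests
{
    [Test]
    public void EmptyFields_ReturnErrorMessage()
    {
        // Test 1, incorrect input: empty user name and password.
        Assert.IsNotNull(LoginValidator.Validate("", ""));
    }

    [Test]
    public void FilledFields_PassValidation()
    {
        // Test 1, correct input: both fields supplied.
        Assert.IsNull(LoginValidator.Validate("admin", "secret"));
    }
}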
5.1.3 Basis Path Testing
The basis path method enables the test case designer to derive a logical complexity measure of a procedural design and use this measure as a guide for defining a basis set of execution paths. Test cases derived to exercise the basis set are guaranteed to execute every statement in the program at least once during testing. It consists of flow graph notation, independent program paths, deriving test cases, and graph matrices.
5.1.4 Control Structure Testing
Although basis path testing is simple and effective, it is not sufficient in itself. In this section, variations on control structure testing are discussed briefly.
Condition testing is a test case design method that exercises the logical conditions contained in a program module. The data flow testing method selects test paths of a program according to the locations of definitions and uses of variables in the program. Loop testing focuses exclusively on the validity of loop constructs.
5.2 TESTING STRATEGIES
A strategy for software testing must accommodate low-level tests that are necessary to verify that a small source code segment has been correctly implemented, as well as high-level tests that validate major system functions against customer requirements. A strategy must provide guidance for the practitioner.
5.2.1 UNIT TESTING
Unit testing focuses verification effort on the smallest unit of software design. This is white box testing oriented; in the "INDUSTRIAL MANPOWER RESOURCE ORGANIZER" project each and every module is tested in the following ways.
The module interfaces are tested to ensure that information properly flows into and out of the program unit under test. The local data structure is examined to ensure that data stored temporarily maintains its integrity during all steps in an algorithm's execution.
Boundary conditions are tested to ensure that the module operates properly at boundaries established to limit or restrict processing. All independent paths through the control structure are exercised to ensure that all statements in a module have been executed at least once. Error handling paths are tested.
Case 1:
Input: The employee must input all the values in the registration form except the employee id number.
Process: The system checks all the constraints and necessary validations and finally stores the account creation details of the particular employee.
Output: The details of a particular employee will be displayed in the corresponding fields whenever we select the particular employee id.
Case 2:
Input: The employee must input all the values while updating the registration, except the employee number.
Process: The system checks all the constraints and necessary validations and finally stores the details of the specific account in the corresponding database.
Output: The registration details of a particular employee or branch will be displayed whenever we select the particular employee.
5.2.2 INTEGRATION TESTING
Integration testing is a systematic technique for constructing the program structure while at the same time conducting tests to uncover errors associated with interfacing. The objective is to take unit-tested modules and build the program structure that has been dictated by the design. All the modules are combined and the entire program is tested as a whole.
The developed software was tested using bottom-up integration, which begins construction and testing with atomic modules. Low-level modules were combined into clusters, and a driver was written to coordinate test case input and output. Each cluster was tested; the drivers were then removed and the clusters combined, moving upward in the program structure.
5.2.2.1 Top-Down Integration
Top-down integration testing is an incremental approach to construction of the software architecture. Modules are integrated by moving downward through the control hierarchy, beginning with the main control module. Modules subordinate to the main control module are incorporated into the structure in either a depth-first or breadth-first manner.
5.2.2.2 Bottom-Up Integration
Bottom-up integration testing, as its name implies, begins construction and testing with atomic modules. Because components are integrated from the bottom up, processing required for components subordinate to a given level is always available, and the need for stubs is eliminated.
5.2.2.3 Regression Testing
Each time a new module is added as part of integration testing, the software changes. New data flow paths are established, new I/O may occur, and new control logic is invoked. These changes may cause problems with functions that previously worked flawlessly. In the context of an integration test strategy, regression testing is the re-execution of some subset of tests that have already been conducted, to ensure that changes have not propagated unintended side effects.
5.2.3 VALIDATION TESTING
Software validation is achieved through a series of black-box tests that demonstrate conformity with requirements. A test plan outlines the classes of tests to be conducted, and a test procedure defines specific test cases that will be used in an attempt to uncover errors in conformity with requirements.
In validation testing we check each object for its validity, i.e., whether it is valid with the value entered or not.
Test Case Explanation
Case 1:
All the validations are checked when the user presses the create button; for example, names should be entered in alphabetic characters and all codes should be entered in numeric characters.
Case 2:
If we enter already existing data into the database, the system generates the warning message "The record information already exists".
Case 3:
In registration, whenever details are updated by the employee, the system immediately sets the status of the employee to profile updated.
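A minimal sketch of the alphabetic/numeric checks described in Case 1, using regular expressions; the class and method names are illustrative, not taken from the project:

using System.Text.RegularExpressions;

public static class FieldValidator
{
    // Names must contain only letters and spaces.
    public static bool IsAlphabeticName(string value)
    {
        return Regex.IsMatch(value, "^[A-Za-z ]+$");
    }

    // Codes must contain only digits.
    public static bool IsNumericCode(string value)
    {
        return Regex.IsMatch(value, "^[0-9]+$");
    }
}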
5.2.4 SYSTEM TESTING
System testing is actually a series of different tests whose primary purpose is to fully exercise the computer-based system. Steps taken during software design and testing can greatly improve the probability of successful software integration in the larger system.
5.2.4.1 Security Testing
Security testing verifies that protection mechanisms built into a system will, in fact, protect it from improper penetration. To quote Beizer: "The system's security must, of course, be tested for invulnerability from frontal attack, but must also be tested for invulnerability from flank or rear attack."
5.2.4.2 Stress Testing
Stress testing executes a system in a manner that demands resources in abnormal quantity, frequency, or volume.
For example:
(1) Special tests may be designed that generate ten interrupts per second, when one or two is the average rate.
(2) Input data rates may be increased by an order of magnitude to determine how input functions will respond.
5.2.4.3 Performance Testing
Performance tests are often coupled with stress testing and usually require both hardware and software instrumentation. That is, it is often necessary to measure resource utilization in an exacting fashion. External instrumentation can monitor execution intervals, log events as they occur, and sample machine states on a regular basis.
5.2.4.4 Debugging
Debugging occurs as a consequence of successful testing. That is, when a test case uncovers an error, debugging is the action that results in the removal of the error. Although debugging can and should be an orderly process, it is still very much an art. A software engineer, evaluating the results of a test, is often confronted with a "symptomatic" indication of a software problem.
5.3 SYSTEM SECURITY
The protection of computer-based resources (hardware, software, data, procedures, and people) against unauthorized use or natural disaster is known as system security. System security can be divided into four related issues:
• Security
• Integrity
• Privacy
• Confidentiality
5.3.1 SYSTEM SECURITY
System security refers to the technical innovations and procedures applied to the hardware and operating systems to protect against deliberate or accidental damage from a defined threat.
5.3.2 DATA SECURITY
Data security is the protection of data from loss, disclosure, modification, and destruction.
5.3.3 SYSTEM INTEGRITY
System integrity refers to the proper functioning of hardware and programs, appropriate physical security, and safety against external threats such as eavesdropping and wiretapping.
5.3.4 PRIVACY
Privacy defines the rights of users or organizations to determine what information they are willing to share with or accept from others, and how the organization can be protected against unwelcome, unfair, or excessive dissemination of information about it.
5.3.5 CONFIDENTIALITY
Confidentiality is a special status given to sensitive information in a database to minimize the possible invasion of privacy. It is an attribute of information that characterizes its need for protection.
5.4 SECURITY SOFTWARE
System security refers to various validations on data in the form of checks and controls, to prevent the system from failing. It is always important to ensure that only valid data is entered and only valid operations are performed on the system. The system employs two types of checks and controls:
5.5 CLIENT SIDE VALIDATION
Various client side validations are used to ensure on the client side that only valid data is entered. Client side validation saves server time and the load of handling invalid data. Some checks imposed are:
• Client-side script is used to ensure that required fields are filled with suitable data only. Maximum lengths of the fields of the forms are appropriately defined.
• Forms cannot be submitted without filling in the mandatory data, so that manual mistakes of submitting empty mandatory fields can be sorted out at the client side, saving server time and load.
• Tab indexes are set according to need, taking into account the ease of the user while working with the system.
5.6 SERVER SIDE VALIDATION
Some checks cannot be applied on the client side. Server side checks are necessary to save the system from failing and to intimate the user that some invalid operation has been performed or that the performed operation is restricted. Some of the server side checks imposed are:
• Server side constraints have been imposed to check the validity of primary keys and foreign keys. A primary key value cannot be duplicated; any attempt to duplicate a primary key value results in a message intimating the user about it, and through the forms, fields using a foreign key can be updated only with existing foreign key values.
• The user is intimated through appropriate messages about successful operations or exceptions occurring on the server side.
• Various access control mechanisms have been built so that one user may not interfere with another. Access permissions to various types of users are controlled according to the organizational structure. Only permitted users can log on to the system and can have access according to their category. User names, passwords, and permissions are controlled on the server side.
• Using server side validation, constraints on several restricted operations are imposed.
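A sketch of the primary key check described above, assuming the Branches table from the design chapter and a placeholder connection string; error number 2627 is SQL Server's code for a primary key violation:

using System.Data.SqlClient;

public class BranchDao
{
    // Placeholder connection string.
    private const string ConnStr = "Data Source=.;Initial Catalog=IMPRO;Integrated Security=True";

    // Inserts a branch; a duplicate Branchid is reported back to the user instead of failing.
    public string AddBranch(int branchId, string name, string phone, string address)
    {
        using (SqlConnection con = new SqlConnection(ConnStr))
        using (SqlCommand cmd = new SqlCommand(
            "INSERT INTO Branches (Branchid, Branchname, phoneno, address, status) " +
            "VALUES (@id, @name, @phone, @addr, 'Open')", con))
        {
            cmd.Parameters.AddWithValue("@id", branchId);
            cmd.Parameters.AddWithValue("@name", name);
            cmd.Parameters.AddWithValue("@phone", phone);
            cmd.Parameters.AddWithValue("@addr", address);
            try
            {
                con.Open();
                cmd.ExecuteNonQuery();
                return "Branch added successfully.";
            }
            catch (SqlException ex)
            {
                if (ex.Number == 2627) // primary key violation
                    return "The record information already exists.";
                throw;
            }
        }
    }
}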
6. SYSTEM IMPLEMENTATION
Implementation is the transformation of the design into a database system that operates on a particular machine.
An application is complete only on its successful installation. The successful installation of the application requires the hardware and software specified in the requirements analysis phase.
After implementing some traditional methods, problems in them are checked for, and the same is implemented using modern methods. The proposed system is not implemented in one stretch.
During the implementation stage, the system is physically created. The necessary programs are coded, debugged, and documented, and the test plan is implemented. This includes the following activities:
• Obtaining and installing the system hardware.
• Installing the system and making it run on its intended hardware.
• Providing user access to the system.
• Training the users on the new system.
• Documenting the system for its users and for those who will be responsible for maintaining it in the future.
• Transferring ongoing responsibility for the system from its developers to the operations or maintenance staff.
• Evaluating the operation and use of the system.
SOFTWARE AND HARDWARE REQUIREMENTS
HARDWARE REQUIREMENTS
Processor : PC with a Pentium IV Processor, 1 GHz
RAM : 512 MB
HDD : 40 GB on System Drive
Recommended 80 GB on System Drive
Video : 1024 x 768, 32-Bit Color Mode
SOFTWARE REQUIREMENTS
Operating System : Microsoft Windows Vista
Database Server : Microsoft SQL Server 2008
Clients : Microsoft Internet Explorer 6.0 or later
Tools : Microsoft Visual Studio 2008
Services : ASP.NET XML Web Services
7. SYSTEM MAINTENANCE
7.1 ASP.NET
Server Application Development
Server-side applications in the managed world are implemented through runtime hosts.
Unmanaged applications host the common language runtime, which allows your custom
managed code to control the behavior of the server. This model provides you with all the features
of the common language runtime and class library while gaining the performance and scalability
of the host server.
The following illustration shows a basic network schema with managed code running in
different server environments. Servers such as IIS and SQL Server can perform standard
operations while your application logic executes through the managed code.
Server-side managed code
ASP.NET is the hosting environment that enables developers to use the .NET Framework
to target Web-based applications. However, ASP.NET is more than just a runtime host; it is a
complete architecture for developing Web sites and Internet-distributed objects using managed
code. Both Web Forms and XML Web services use IIS and ASP.NET as the publishing
mechanism for applications, and both have a collection of supporting classes in the .NET
Framework.
XML Web services, an important evolution in Web-based technology, are distributed,
server-side application components similar to common Web sites. However, unlike Web-based
applications, XML Web services components have no UI and are not targeted for browsers such
as Internet Explorer and Netscape Navigator. Instead, XML Web services consist of reusable
software components designed to be consumed by other applications, such as traditional client
applications, Web-based applications, or even other XML Web services. As a result, XML Web
services technology is rapidly moving application development and deployment into the highly
distributed environment of the Internet.
If you have used earlier versions of ASP technology, you will immediately notice the
improvements that ASP.NET and Web Forms offer. For example, you can develop Web Forms
pages in any language that supports the .NET Framework. In addition, your code no longer needs
to share the same file with your HTTP text (although it can continue to do so if you prefer). Web
Forms pages execute in native machine language because, like any other managed application,
they take full advantage of the runtime. In contrast, unmanaged ASP pages are always scripted
and interpreted. ASP.NET pages are faster, more functional, and easier to develop than
unmanaged ASP pages because they interact with the runtime like any managed application.
The .NET Framework also provides a collection of classes and tools to aid in
development and consumption of XML Web services applications. XML Web services are built
on standards such as SOAP (a remote procedure-call protocol), XML (an extensible data format),
and WSDL (the Web Services Description Language). The .NET Framework is built on these
standards to promote interoperability with non-Microsoft solutions.
For example, the Web Services Description Language tool included with the .NET
Framework SDK can query an XML Web service published on the Web, parse its WSDL
description, and produce C# or Visual Basic source code that your application can use to become
a client of the XML Web service. The source code can create classes derived from classes in the
class library that handle all the underlying communication using SOAP and XML parsing.
Although you can use the class library to consume XML Web services directly, the Web
Services Description Language tool and the other tools contained in the SDK facilitate your
development efforts with the .NET Framework.
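For example, an invocation of the Web Services Description Language tool (wsdl.exe) might
look like the following; the service URL and the output file name here are placeholders only:

    wsdl /language:CS /out:ServiceProxy.cs http://example.com/Service.asmx?WSDL

This generates a C# proxy class that an application can compile in and call as an ordinary
object, with the SOAP plumbing handled by the generated code.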
If you develop and publish your own XML Web service, the .NET Framework provides a
set of classes that conform to all the underlying communication standards, such as SOAP,
WSDL, and XML. Using those classes enables you to focus on the logic of your service, without
concerning yourself with the communications infrastructure required by distributed software
development.
Finally, like Web Forms pages in the managed environment, your XML Web service will run
with the speed of native machine language using the scalable communication of IIS.
Active Server Pages.NET
ASP.NET is a programming framework built on the common language runtime that can
be used on a server to build powerful Web applications. ASP.NET offers several important
advantages over previous Web development models:
• Enhanced Performance. ASP.NET is compiled common language runtime code
running on the server. Unlike its interpreted predecessors, ASP.NET can take advantage of early
binding, just-in-time compilation, native optimization, and caching services right out of the box.
This amounts to dramatically better performance before you ever write a line of code.
• World-Class Tool Support. The ASP.NET framework is complemented by a rich
toolbox and designer in the Visual Studio integrated development environment. WYSIWYG
editing, drag-and-drop server controls, and automatic deployment are just a few of the features
this powerful tool provides.
• Power and Flexibility. Because ASP.NET is based on the common language
runtime, the power and flexibility of that entire platform is available to Web application
developers. The .NET Framework class library, Messaging, and Data Access solutions are all
seamlessly accessible from the Web. ASP.NET is also language-independent, so you can choose
the language that best applies to your application or partition your application across many
languages. Further, common language runtime interoperability guarantees that your existing
investment in COM-based development is preserved when migrating to ASP.NET.
• Simplicity. ASP.NET makes it easy to perform common tasks, from simple form
submission and client authentication to deployment and site configuration. For example, the
ASP.NET page framework allows you to build user interfaces that cleanly separate application
logic from presentation code and to handle events in a simple, Visual Basic-like forms
processing model. Additionally, the common language runtime simplifies development, with
managed code services such as automatic reference counting and garbage collection.
• Manageability. ASP.NET employs a text-based, hierarchical configuration
system, which simplifies applying settings to your server environment and Web applications.
Because configuration information is stored as plain text, new settings may be applied without
the aid of local administration tools. This "zero local administration" philosophy extends to
deploying ASP.NET Framework applications as well. An ASP.NET Framework application is
deployed to a server simply by copying the necessary files to the server. No server restart is
required, even to deploy or replace running compiled code.
• Scalability and Availability. ASP.NET has been designed with scalability in
mind, with features specifically tailored to improve performance in clustered and multiprocessor
environments. Further, processes are closely monitored and managed by the ASP.NET runtime,
so that if one misbehaves (leaks, deadlocks), a new process can be created in its place, which
helps keep your application constantly available to handle requests.
• Customizability and Extensibility. ASP.NET delivers a well-factored architecture
that allows developers to "plug in" their code at the appropriate level. In fact, it is possible to
extend or replace any subcomponent of the ASP.NET runtime with your own custom-written
component. Implementing custom authentication or state services has never been easier.
• Security. With built-in Windows authentication and per-application configuration,
you can be assured that your applications are secure.
Language Support: The Microsoft .NET Platform currently offers builtin support for three
languages: C#, Visual Basic, and JScript.
What is ASP.NET Web Forms?
The ASP.NET Web Forms page framework is a scalable common language runtime
programming model that can be used on the server to dynamically generate Web pages.
Intended as a logical evolution of ASP (ASP.NET provides syntax compatibility with
existing pages), the ASP.NET Web Forms framework has been specifically designed to address a
number of key deficiencies in the previous model. In particular, it provides:
• The ability to create and use reusable UI controls that can encapsulate common
functionality and thus reduce the amount of code that a page developer has to write.
• The ability for developers to cleanly structure their page logic in an orderly
fashion (not "spaghetti code").
• The ability for development tools to provide strong WYSIWYG design support
for pages (existing ASP code is opaque to tools).
ASP.NET Web Forms pages are text files with an .aspx file name extension. They can be
deployed throughout an IIS virtual root directory tree. When a browser client requests .aspx
resources, the ASP.NET runtime parses and compiles the target file into a .NET Framework
class. This class can then be used to dynamically process incoming requests. (Note that the .aspx
file is compiled only the first time it is accessed; the compiled type instance is then reused across
multiple requests).
An ASP.NET page can be created simply by taking an existing HTML file and changing
its file name extension to .aspx (no modification of code is required). For example, the following
sample demonstrates a simple HTML page that collects a user's name and category preference
and then performs a form post back to the originating page when a button is clicked:
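The sample itself is not reproduced in this copy of the document; a minimal page along the
lines described, with an illustrative file name and category values, would be:

    <html>
    <body>
        <form action="intro.aspx" method="post">
            Name: <input name="Name" type="text" />
            Category: <select name="Category" size="1">
                          <option>IT</option>
                          <option>HR</option>
                          <option>Sales</option>
                      </select>
            <input type="submit" value="Lookup" />
        </form>
    </body>
    </html>

Renaming such a file with an .aspx extension is enough for the ASP.NET runtime to serve it,
as the preceding paragraph notes.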
ASP.NET provides syntax compatibility with existing ASP pages. This includes support
for <% %> code render blocks that can be intermixed with HTML content within an .aspx file.
These code blocks execute in a top-down manner at page render time.
Code-Behind Web Forms
ASP.NET supports two methods of authoring dynamic pages. The first is the method
shown in the preceding sample, where the page code is physically declared within the
originating .aspx file. An alternative approach, known as the code-behind method, enables the
page code to be more cleanly separated from the HTML content into an entirely separate file.
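A minimal sketch of the code-behind approach follows; the file and class names are
illustrative only. The .aspx file holds just the markup:

    <%@ Page Language="C#" CodeFile="Greeting.aspx.cs" Inherits="Greeting" %>
    <html>
    <body>
        <form runat="server">
            <asp:Label id="lblGreeting" runat="server" />
        </form>
    </body>
    </html>

while the page logic lives in the separate Greeting.aspx.cs file:

    using System;

    public partial class Greeting : System.Web.UI.Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            // All page logic is kept here, cleanly separated from the HTML.
            lblGreeting.Text = "Hello from the code-behind file.";
        }
    }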
Introduction to ASP.NET Server Controls
In addition to (or instead of) using <% %> code blocks to program dynamic content,
ASP.NET page developers can use ASP.NET server controls to program Web pages. Server
controls are declared within an .aspx file using custom tags or intrinsic HTML tags that contain a
runat="server" attribute value. Intrinsic HTML tags are handled by one of the controls in the
System.Web.UI.HtmlControls namespace. Any tag that doesn't explicitly map to one of the
controls is assigned the type System.Web.UI.HtmlControls.HtmlGenericControl.
Server controls automatically maintain any client-entered values between round trips to
the server. This control state is not stored on the server (it is instead stored within an <input
type="hidden"> form field that is round-tripped between requests). Note also that no client-side
script is required.
In addition to supporting standard HTML input controls, ASP.NET enables developers to
utilize richer custom controls on their pages. For example, the following sample demonstrates
how the <asp:adrotator> control can be used to dynamically display rotating ads on a page.
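That sample is likewise not reproduced here; a minimal usage of the control, with a placeholder
advertisement file name, would be:

    <form runat="server">
        <asp:AdRotator id="adBanner" runat="server" AdvertisementFile="Ads.xml" />
    </form>

where Ads.xml lists the advertisements as <Ad> elements containing entries such as ImageUrl
and NavigateUrl, and the control picks one to render on each request.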
• ASP.NET Web Forms provide an easy and powerful way to build dynamic Web UI.
• ASP.NET Web Forms pages can target any browser client (there are no script library or
cookie requirements).
• ASP.NET Web Forms pages provide syntax compatibility with existing ASP pages.
• ASP.NET server controls provide an easy way to encapsulate common functionality.
• ASP.NET ships with 45 built-in server controls. Developers can also use controls built by
third parties.
• ASP.NET server controls can automatically project both up-level and down-level HTML.
• ASP.NET templates provide an easy way to customize the look and feel of list server
controls.
7.2 C#.NET
ADO.NET Overview
ADO.NET is an evolution of the ADO data access model that directly addresses user
requirements for developing scalable applications. It was designed specifically for the web with
scalability, statelessness, and XML in mind.
ADO.NET uses some ADO objects, such as the Connection and Command objects, and
also introduces new objects. Key new ADO.NET objects include the DataSet, DataReader, and
DataAdapter.
The important distinction between this evolved stage of ADO.NET and previous data
architectures is that there exists an object, the DataSet, that is separate and distinct from any
data stores. Because of that, the DataSet functions as a standalone entity. You can think of the
DataSet as an always disconnected recordset that knows nothing about the source or destination
of the data it contains. Inside a DataSet, much like in a database, there are tables, columns,
relationships, constraints, views, and so forth.
A DataAdapter is the object that connects to the database to fill the DataSet. Then, it
connects back to the database to update the data there, based on operations performed while the
DataSet held the data. In the past, data processing has been primarily connection-based. Now, in
an effort to make multi-tiered apps more efficient, data processing is turning to a message-based
approach that revolves around chunks of information. At the center of this approach is the
DataAdapter, which provides a bridge to retrieve and save data between a DataSet and its source
data store. It accomplishes this by means of requests to the appropriate SQL commands made
against the data store.
The XML-based DataSet object provides a consistent programming model that works
with all models of data storage: flat, relational, and hierarchical. It does this by having no
'knowledge' of the source of its data, and by representing the data that it holds as collections and
data types. No matter what the source of the data within the DataSet is, it is manipulated through
the same set of standard APIs exposed through the DataSet and its subordinate objects.
While the DataSet has no knowledge of the source of its data, the managed provider has
detailed and specific information. The role of the managed provider is to connect, fill, and persist
the DataSet to and from data stores. The OLE DB and SQL Server .NET Data Providers
(System.Data.OleDb and System.Data.SqlClient) that are part of the .NET Framework provide
four basic objects: the Command, Connection, DataReader and DataAdapter. In the remaining
sections of this document, we'll walk through each part of the DataSet and the OLE DB/SQL
Server .NET Data Providers explaining what they are, and how to program against them.
The following sections will introduce you to some objects that have evolved, and some
that are new. These objects are:
• Connections. For connection to and managing transactions against a database.
• Commands. For issuing SQL commands against a database.
• DataReaders. For reading a forward-only stream of data records from a SQL Server
data source.
• DataSets. For storing, remoting and programming against flat data, XML data and
relational data.
• DataAdapters. For pushing data into a DataSet, and reconciling data against a
database.
When dealing with connections to a database, there are two different options: SQL Server
.NET Data Provider (System.Data.SqlClient) and OLE DB .NET Data Provider
(System.Data.OleDb). In these samples we will use the SQL Server .NET Data Provider. These
are written to talk directly to Microsoft SQL Server. The OLE DB .NET Data Provider is used to
talk to any OLE DB provider (as it uses OLE DB underneath).
Connections: Connections are used to 'talk to' databases, and are represented by provider-
specific classes such as SqlConnection. Commands travel over connections and result sets are
returned in the form of streams which can be read by a DataReader object, or pushed into a
DataSet object.
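A minimal sketch of opening a connection with the SQL Server .NET Data Provider follows;
the connection string and database name are placeholders:

    // requires: using System.Data.SqlClient;
    string connStr = "Server=(local);Database=Impro;Integrated Security=SSPI;";
    using (SqlConnection con = new SqlConnection(connStr))
    {
        con.Open();
        // commands travel over this open connection
    }   // disposing the connection closes it automatically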
Commands: Commands contain the information that is submitted to a database, and are
represented by provider-specific classes such as SqlCommand. A command can be a stored
procedure call, an UPDATE statement, or a statement that returns results. You can also use input
and output parameters, and return values as part of your command syntax. The example below
shows how to issue an INSERT statement against the Northwind database.
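That example does not appear in this copy; a hedged reconstruction against the standard
Northwind Customers table would be:

    // assumes: using System.Data.SqlClient; and an open SqlConnection named con
    SqlCommand cmd = new SqlCommand(
        "INSERT INTO Customers (CustomerID, CompanyName) VALUES (@id, @name)", con);
    cmd.Parameters.AddWithValue("@id", "NEWCO");
    cmd.Parameters.AddWithValue("@name", "New Company Ltd.");
    int rowsAffected = cmd.ExecuteNonQuery();   // returns the number of rows inserted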
DataReader: The DataReader object is somewhat synonymous with a read-only/forward-only
cursor over data. The DataReader API supports flat as well as hierarchical data. A DataReader
object is returned after executing a command against a database. The format of the returned
DataReader object is different from a recordset. For example, you might use the DataReader to
show the results of a search list in a web page.
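For illustration, a typical forward-only read loop over the result of a SELECT command looks
like this (the column name follows the Northwind example above):

    using (SqlDataReader reader = cmd.ExecuteReader())
    {
        while (reader.Read())                   // advances one record at a time
        {
            Console.WriteLine(reader["CompanyName"]);
        }
    }                                           // closing the reader frees the connection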
DataSet: The DataSet object is similar to the ADO Recordset object, but more powerful, and
with one other important distinction: the DataSet is always disconnected. The DataSet object
represents a cache of data, with databaselike structures such as tables, columns, relationships,
and constraints. However, though a DataSet can and does behave much like a database, it is
important to remember that DataSet objects do not interact directly with databases, or other
source data. This allows the developer to work with a programming model that is always
consistent, regardless of where the source data resides. Data coming from a database, an XML
file, from code, or user input can all be placed into DataSet objects. Then, as changes are made to
the DataSet they can be tracked and verified before updating the source data. The GetChanges
method of the DataSet object actually creates a second DataSet that contains only the changes to
the data. This DataSet is then used by a DataAdapter (or other objects) to update the original data
source.
The DataSet has many XML characteristics, including the ability to produce and
consume XML data and XML schemas. XML schemas can be used to describe schemas
interchanged via Web services. In fact, a DataSet with a schema can actually be compiled for
type safety and statement completion.
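For example, the XML round trip can be sketched in a few lines (the file names are
illustrative only):

    ds.WriteXml("employees.xml");           // persist the data as XML
    ds.WriteXmlSchema("employees.xsd");     // persist the structure as an XML schema
    DataSet copy = new DataSet();
    copy.ReadXml("employees.xml");          // reload the same data elsewhere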
Data Adapter (OLEDB/SQL)
The DataAdapter object works as a bridge between the DataSet and the source data.
Using the provider-specific SqlDataAdapter (along with its associated SqlCommand and
SqlConnection) can increase overall performance when working with a Microsoft SQL Server
database. For other OLE DB-supported databases, you would use the OleDbDataAdapter object
and its associated OleDbCommand and OleDbConnection objects.
The DataAdapter object uses commands to update the data source after changes have been
made to the DataSet. Using the Fill method of the DataAdapter calls the SELECT command;
using the Update method calls the INSERT, UPDATE or DELETE command for each changed
row. You can explicitly set these commands in order to control the statements used at runtime to
resolve changes, including the use of stored procedures. For ad hoc scenarios, a
CommandBuilder object can generate these at runtime based upon a select statement. However,
this runtime generation requires an extra round trip to the server in order to gather required
metadata, so explicitly providing the INSERT, UPDATE, and DELETE commands at design
time will result in better runtime performance.
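The round trip described above can be sketched as follows; the table and column names are
placeholders, not taken from the project source:

    // assumes: using System.Data; using System.Data.SqlClient; an open SqlConnection con
    SqlDataAdapter adapter = new SqlDataAdapter(
        "SELECT EmpId, EmpName FROM Employee", con);
    SqlCommandBuilder builder = new SqlCommandBuilder(adapter); // ad hoc INSERT/UPDATE/DELETE
    DataSet ds = new DataSet();
    adapter.Fill(ds, "Employee");                 // runs the SELECT command
    ds.Tables["Employee"].Rows[0]["EmpName"] = "Revised Name";
    adapter.Update(ds, "Employee");               // writes only the changed rows back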
1. ADO.NET is the next evolution of ADO for the .NET Framework.
2. ADO.NET was created with n-tier, statelessness and XML in the forefront. Two new
objects, the DataSet and DataAdapter, are provided for these scenarios.
3. ADO.NET can be used to get data from a stream, or to store data in a cache for
updates.
4. There is a lot more information about ADO.NET in the documentation.
5. Remember, you can execute a command directly against the database in order to do
inserts, updates, and deletes. You don't need to first put data into a DataSet in order to
insert, update, or delete it.
6. Also, you can use a DataSet to bind to the data, move through the data, and navigate
data relationships.
7.3 SQL SERVER 2008
DATABASE
A database management system, or DBMS, gives the user access to their data and helps them
transform the data into information. Such database management systems include dBase, Paradox,
IMS, and SQL Server. These systems allow users to create, update and extract information from
their databases.
A database is a structured collection of data. Data refers to the characteristics of people,
things and events. SQL Server stores each data item in its own fields. In SQL Server, the fields
relating to a particular person, thing or event are bundled together to form a single complete unit
of data, called a record (it can also be referred to as a row or an occurrence). Each record is made
up of a number of fields. No two fields in a record can have the same field name.
During an SQL Server Database design project, the analysis of your business needs
identifies all the fields or attributes of interest. If your business needs change over time, you
define any additional fields or change the definition of existing fields.
SQL Server Tables: SQL Server stores records relating to each other in a table. Different
tables are created for the various groups of information. Related tables are grouped together to
form a database.
Primary Key: Every table in SQL Server has a field or a combination of fields that uniquely
identifies each record in the table. This unique identifier is called the Primary Key, or simply the
Key. The primary key provides the means to distinguish one record from all others in a table. It
allows the user and the database system to identify, locate and refer to one particular record in
the database.
Relational Database: Sometimes all the information of interest to a business operation can be
stored in one table. SQL Server makes it very easy to link the data in multiple tables. Matching
an employee to the department in which they work is one example. This is what makes SQL
Server a relational database management system, or RDBMS. It stores data in two or more
tables and enables you to define relationships between the tables.
Foreign Key: When a field in one table matches the primary key of another table, it is referred
to as a foreign key. A foreign key is a field or a group of fields in one table whose values match
those of the primary key of another table.
Referential Integrity: Not only does SQL Server allow you to link multiple tables, it also
maintains consistency between them. Ensuring that the data among related tables is correctly
matched is referred to as maintaining referential integrity.
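In SQL Server these rules are declared with PRIMARY KEY and FOREIGN KEY constraints;
a small sketch follows, with table and column names that are illustrative only:

    CREATE TABLE Department (
        DeptId   INT         NOT NULL PRIMARY KEY,
        DeptName VARCHAR(50) NOT NULL
    );

    CREATE TABLE Employee (
        EmpId   INT         NOT NULL PRIMARY KEY,
        EmpName VARCHAR(50) NOT NULL,
        DeptId  INT         NOT NULL REFERENCES Department(DeptId)
            -- every employee row must match an existing department row
    );

With the constraint in place, SQL Server rejects any Employee row whose DeptId has no
matching Department row, which is exactly the referential integrity described above.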
Data Abstraction: A major purpose of a database system is to provide users with an abstract
view of the data. This system hides certain details of how the data is stored and maintained. Data
abstraction is divided into three levels.
Physical level: This is the lowest level of abstraction at which one describes how the data are
actually stored.
Conceptual Level: At this level of database abstraction, what data are actually stored, all their
attributes, and the entities and relationships among them are described.
View level: This is the highest level of abstraction at which one describes only part of the
database.
Advantages of RDBMS
• Redundancy can be avoided
• Inconsistency can be eliminated
• Data can be Shared
• Standards can be enforced
• Security restrictions can be applied
• Integrity can be maintained
• Conflicting requirements can be balanced
• Data independence can be achieved.
Disadvantages of DBMS: A significant disadvantage of the DBMS system is cost. In
addition to the cost of purchasing or developing the software, the hardware has to be upgraded to
allow for the extensive programs and the workspace required for their execution and storage.
While centralization reduces duplication, the lack of duplication requires that the database be
adequately backed up so that in case of failure the data can be recovered.
FEATURES OF SQL SERVER (RDBMS)
SQL SERVER is one of the leading database management systems (DBMS) because it is
the only Database that meets the uncompromising requirements of today’s most demanding
information systems. From complex decision support systems (DSS) to the most rigorous online
transaction processing (OLTP) applications, even applications that require simultaneous DSS and
OLTP access to the same critical data, SQL Server leads the industry in both performance and
capability.
• SQL SERVER is a truly portable, distributed, and open DBMS that delivers unmatched
performance, continuous operation and support for every database.
• SQL SERVER RDBMS is a high-performance, fault-tolerant DBMS which is specially
designed for online transaction processing and for handling large database applications.
• SQL SERVER with the transaction processing option offers features which contribute
to a very high level of transaction processing throughput, such as the row-level lock
manager.
Enterprise-wide Data Sharing: The unrivaled portability and connectivity of the SQL SERVER
DBMS enables all the systems in the organization to be linked into a singular, integrated
computing resource.
Portability: SQL SERVER is fully portable to more than 80 distinct hardware and operating
system platforms, including UNIX, MS-DOS, OS/2, Macintosh and dozens of proprietary
platforms. This portability gives complete freedom to choose the database server platform that
meets the system requirements.
Open Systems: SQL SERVER offers a leading implementation of industry-standard SQL.
SQL Server's open architecture integrates SQL SERVER and non-SQL SERVER DBMSs with
the industry's most comprehensive collection of tools, applications, and third-party software
products. SQL Server's open architecture provides transparent access to data from other
relational databases and even non-relational databases.
Distributed Data Sharing: SQL Server's networking and distributed database capabilities
enable you to access data stored on a remote server with the same ease as if the information
were stored on a single local computer. A single SQL statement can access data at multiple
sites. You can store data where system requirements such as performance, security or
availability dictate.
Unmatched Performance: The most advanced architecture in the industry allows the SQL
SERVER DBMS to deliver unmatched performance.
7.4 IIS (Internet Information Services)
IIS (Internet Information Services) is a group of Internet servers including a Web or
Hypertext Transfer Protocol server and a File Transfer Protocol server. IIS is Microsoft's entry to
compete in the Internet server market that is also addressed by Apache, Sun Microsystems (Sun
Java System Web Server), O'Reilly and others. The current version of IIS is 7.0 for Windows
Vista, 6.0 for Windows Server 2003 and 5.1 for Windows XP Professional. IIS 5.1 for
Windows XP is a restricted version of IIS that supports only 10 simultaneous connections and a
single web site.
The web server itself cannot directly perform server-side processing but can delegate the
task to ISAPI (Internet Server Application Programming Interface) applications on the server.
Microsoft provides a number of these ISAPI applications, including one for Active Server Pages
and one for ASP.NET.
A typical company that buys IIS can create pages for Web sites. There are two types of
web pages: static and dynamic. Static web pages are discussed in detail in Section 7.4.1 and
dynamic web pages in Section 7.4.2.
7.4.1 Static Web pages
A Static web page consists of some HTML code typed directly into a text editor and
saved as a .htm or .html file. The content and appearance of these web pages is always the same,
regardless of who visits the page, or when they visit, or how they arrive at the page. The
following five steps are involved in building a static web page:
1. An author writes an HTML page, and saves it within an .htm or .html file on the server
2. Sometime later, a client (user) requests a page by typing a URL into their browser, and
the request is passed from the browser to the web server
3. The web server locates the .htm or .html page and converts it to an HTML stream
4. The web server sends the HTML stream back across the network to the browser
5. The browser processes the HTML and displays the page
There are several limitations to static web pages. HTML offers no features for
personalizing the web pages. Each web page that is served is the same for every user who
requests the page. The other limitation is that there is also no security with HTML, as the code
can be viewed by everybody. Though static pages are very fast to download, as quickly as
copying a small file over a network, they are quite limited without any dynamic features.
7.4.2. Dynamic Web Pages
In a dynamic web page, the content (text, images, fields, etc.) can change in response to
different contexts or conditions. There are two ways to create this kind of web page:
1. Using client-side scripting to change interface behaviours within a specific web page
2. Using server-side scripting to change the sequence of the web pages or the web content
supplied to the browser.
These are determined by conditions such as data in a posted HTML form, parameters in the
URL, the type of browser being used and so on.
7.4.2.1. Client-Side Dynamic Web Page
In the client-side model, modules (or plug-ins) attached to the browser do all the work of
creating dynamic pages. The HTML code is sent to the browser along with a separate file
containing a set of instructions. These instructions are referenced from within the HTML page. It
is also quite common to find these instructions intermingled with the HTML code. The modules
within the browser then use the instructions to generate pure HTML for the page, generating the
page dynamically on request. This model hence involves six steps:
1. An author writes a set of instructions for creating HTML, and saves it within an .htm
file. The instructions might be contained within the .htm file, or within a separate file.
2. Sometime later, a client (user) requests a page by typing its address into their browser,
and the request is passed from the browser to the web server.
3. The web server locates the .htm page, and any other file that contains the instructions.
4. The web server sends both the HTML and the instructions back across the network to
the browser.
5. A module within the browser processes the instructions and renders them as HTML.
6. The HTML is then processed by the browser, which displays the page.
Client-side technologies have fallen out of favor in recent times.
The main reason is that pages take a long time to download, especially when there is a
separate second file of instructions. Another drawback is that each browser interprets these
instructions in a different way, and there is no guarantee that all browsers will understand
them. It is also difficult to write client-side code that uses server-side resources such as
databases, as it is interpreted at the client side. There are client-side technologies such as Ajax
(shorthand for Asynchronous JavaScript and XML) for providing dynamic content. The client-
side technologies are a mixture of scripting languages, controls and fully fledged programming
languages, including JavaScript, VBScript and Flash.
7.4.2.2. Server-Side Dynamic Web Page
In the server-side model, when a user requests a page such as an ASP, PHP or
ASP.NET page, the web server locates the page and invokes the appropriate servicing program.
The servicing program is not part of the web server; it is an independent executable program
running on the web server. The servicing program processes any user input, determines the
action that must be taken, interacts with any external sources and finally produces an HTML
document and terminates. The web server then sends the HTML document back to the user's
browser, where it is displayed. The page is thus generated dynamically upon request. The six
steps involved in developing a server-side dynamic web page are:
1. A web author writes a set of instructions for creating HTML, and saves these
instructions within a file such as a .php, .asp or .aspx file.
2. Sometime later, a user types a page request into their browser, and the request is
passed from the browser to the web server.
3. The web server locates the file of instructions and invokes the appropriate servicing
program.
4. The servicing program follows the instructions in order to create a stream of HTML.
5. The web server sends the newly created HTML stream back across the network to the
browser.
6. The browser processes the HTML and displays the page.