bug tracking system

Upload: shailkgarg

Post on 06-Mar-2016


  • BUG TRACKING SYSTEM

    A PROJECT REPORT

    Submitted by

    NIHARIKA ASTHANA

    ID: 1308002118

    In partial fulfillment of the requirements for the award of the degree of

    M.B.A.

    in

    INFORMATION TECHNOLOGY

  • DECLARATION BY THE CANDIDATE

    I hereby declare that the project report entitled "Bug Tracking System", submitted by me to Sikkim Manipal University in partial fulfillment of the requirement for the award of the degree of MBA in Information Technology, is a record of bona fide project work carried out by me. I further declare that the work reported in this project has not been submitted, and will not be submitted, either in part or in full, for the award of any other degree or diploma in this institute or any other institute or university.

    Place: Almora

    Date: 10/11/2015

    (Niharika Asthana)

  • ABSTRACT

    Bugs and other malicious programs are an ever-increasing threat to current computer systems. They can cause damage and consume countless hours of system administration time to combat. The Bug Tracking System scans the system files, detects bugs, and alerts the user whenever necessary.

    The basic ideas, concepts, components and approaches involved in developing an anti-bug program from a developer's/software engineer's point of view are discussed here. The discussion focuses on the main elements of an anti-bug engine and excludes aspects like graphical user interfaces, real-time monitors, file system drivers and plug-ins for certain application software such as Microsoft Exchange or Microsoft Office.

    The main parts of an anti-bug engine are typically compiled from the same source code for various platforms, which may differ in byte order, CPU and general requirements on aligned code. All of these considerations must be kept in mind when developing the concept of an anti-bug engine, as the platform on which the engine is designed to run will be a central design consideration. In addition, when developing a new anti-bug engine from the ground up, the following requirements must be considered:

    Targeted platforms

    Programming language

    File access

    Required modularity

    Pragmatic Functions:

    Now that some of the conceptual aspects of the anti-bug engine design have been

    discussed, it would be helpful to consider some of the pragmatic functions that must

    be incorporated into the design of an anti-bug engine. The following components or

    functions must all be taken into account in the development of a modern anti-bug

    engine:

    Engine core

    File system layer

    File type scanners (rtf, ppt, mz, pe, etc.)

    Memory scanners

    File Decompression (e.g. ZIP archives, UPX compressed executables)

    Code emulators (e.g. Win32)

    Heuristic engines

    Update mechanisms.
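The interaction between these components can be sketched in code. The following Python sketch is illustrative only; the class and method names (EngineCore, FileTypeScanner, and so on) are invented and do not come from the report:

```python
# Minimal, hypothetical sketch of an engine core dispatching to per-file-type
# scanners; all names and the toy PE check are invented for illustration.
class FileTypeScanner:
    """Base class for per-file-type scanners (rtf, ppt, mz, pe, ...)."""
    def can_handle(self, filename: str) -> bool:
        raise NotImplementedError
    def scan(self, data: bytes) -> bool:
        """Return True if the data looks suspicious."""
        raise NotImplementedError

class PeScanner(FileTypeScanner):
    """Toy PE scanner: flags .exe files that lack the 'MZ' header."""
    def can_handle(self, filename):
        return filename.lower().endswith(".exe")
    def scan(self, data):
        return not data.startswith(b"MZ")

class EngineCore:
    """Engine core: routes each file to the first scanner that handles it."""
    def __init__(self, scanners):
        self.scanners = scanners
    def scan_file(self, filename, data):
        for scanner in self.scanners:
            if scanner.can_handle(filename):
                return scanner.scan(data)
        return False  # no scanner registered for this file type

engine = EngineCore([PeScanner()])
```

A real engine would add the decompression, emulation, and update components listed above behind the same kind of interface.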

  • 1. INTRODUCTION

    Computer bug:

    A computer bug is a computer program that can copy itself and infect a computer

    without permission or knowledge of the user. The term "bug" is also commonly used,

    albeit erroneously, to refer to many different types of malware and adware programs.

    The original bug may modify the copies, or the copies may modify themselves, as

    occurs in a metamorphic bug. A bug can only spread from one computer to another

    when its host is taken to the uninfected computer, for instance by a user sending it

    over a network or the Internet, or by carrying it on a removable medium such as a

    floppy disk, CD, or USB drive.

    Most personal computers are now connected to the Internet and to local area

    networks, facilitating the spread of malicious code. Today's buges may also take

    advantage of network services such as the World Wide Web, e-mail, Instant

    Messaging and file sharing systems to spread, blurring the line between buges and

    worms. Furthermore, some sources use an alternative terminology in which a bug is

    any form of self-replicating malware.

    Types of bugs:

    Infection strategies:

    In order to replicate itself, a bug must be permitted to execute code and write to memory. For this reason, many bugs attach themselves to executable files that may be part of legitimate programs. If a user tries to start an infected program, the bug's code may be executed first. Bugs can be divided into two types on the basis of their behavior when they are executed. Nonresident bugs immediately search for other hosts that can be infected, infect these targets, and finally transfer control to the application program they infected. Resident bugs do not search for hosts when they are started. Instead, a resident bug loads itself into memory on execution and transfers control to the host program. The bug stays active in the background and infects new hosts when those files are accessed by other programs or the operating system itself.

    Nonresident bugs:

    Nonresident bugs can be thought of as consisting of a finder module and a replication module. The finder module is responsible for finding new files to infect. For each new executable file the finder module encounters, it calls the replication module to infect that file.

    Resident bugs:

    Resident bugs contain a replication module similar to the one employed by nonresident bugs. However, this module is not called by a finder module. Instead, the bug loads the replication module into memory when it is executed and ensures that this module is executed each time the operating system is called to perform a certain operation. For example, the replication module can be called each time the operating system executes a file. In this case, the bug infects every suitable program that is executed on the computer.

    Resident bugs are sometimes subdivided into a category of fast infectors and a category of slow infectors. Fast infectors are designed to infect as many files as possible. For instance, a fast infector can infect every potential host file that is accessed. This poses a special problem to anti-bug software, since a bug scanner will access every potential host file on a computer when it performs a system-wide scan. If the bug scanner fails to notice that such a bug is present in memory, the bug can "piggy-back" on the bug scanner and in this way infect all files that are scanned. Fast infectors rely on their fast infection rate to spread. The disadvantage of this method is that infecting many files may make detection more likely, because the bug may slow down a computer or perform many suspicious actions that can be noticed by anti-bug software. Slow infectors, on the other hand, are designed to infect hosts infrequently. For instance, some slow infectors only infect files when they are copied. Slow infectors are designed to avoid detection by limiting their actions: they are less likely to slow down a computer noticeably, and will at most infrequently trigger anti-bug software that detects suspicious behavior by programs. The slow infector approach does not seem very successful, however.

    Vectors and hosts:

    Bugs have targeted various types of transmission media or hosts. This list is not

    exhaustive:

    Binary executable files (such as COM files and EXE files in MS-DOS,

    Portable Executable files in Microsoft Windows, and ELF files in Linux)

    Volume Boot Records of floppy disks and hard disk partitions

    The master boot record (MBR) of a hard disk

    General-purpose script files (such as batch files in MS-DOS and Microsoft

    Windows, VBScript files, and shell script files on Unix-like platforms).

    Application-specific script files (such as Telix-scripts)

    Documents that can contain macros (such as Microsoft Word documents,

    Microsoft Excel spreadsheets, AmiPro documents, and Microsoft Access

    database files)

    Cross-site scripting vulnerabilities in web applications

    Arbitrary computer files. An exploitable buffer overflow, format string, race

    condition or other exploitable bug in a program which reads the file could be

    used to trigger the execution of code hidden within it. Most bugs of this type

    can be made more difficult to exploit in computer architectures with

    protection features such as an execute disable bit and/or address space layout

    randomization.

    PDFs, like HTML, may link to malicious code.

    It is worth noting that some bug authors have appended an .EXE extension to the end of a .PNG name (for example), hoping that users would stop at the trusted file type without noticing that the computer would act on the final extension. (Many operating systems hide the extensions of known file types by default, so for example a filename ending in ".png.exe" would be shown ending in ".png".)
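The double-extension trick described above is easy to check for programmatically. The following Python sketch (function names and extension sets are invented for illustration, not a real product's rules) flags such names:

```python
import os

def extensions(filename: str):
    """Return (apparent, real) extensions, e.g. 'photo.png.exe' -> ('.png', '.exe').
    The 'real' extension is the one the operating system will act on."""
    root, real = os.path.splitext(filename)
    _, apparent = os.path.splitext(root)
    return apparent.lower(), real.lower()

def looks_disguised(filename: str) -> bool:
    """Flag names whose visible type is a 'safe' format but whose final,
    executable extension would actually be run. The sets are illustrative."""
    apparent, real = extensions(filename)
    return real in {".exe", ".com", ".scr"} and apparent in {".png", ".jpg", ".txt", ".pdf"}
```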

  • Anti-bug software and other preventive measures:

    Many users install anti-bug software that can detect and eliminate known bugs after the computer downloads or runs an executable. There are two common methods that an anti-bug software application uses to detect bugs. The first, and by far the most common, method of bug detection uses a list of bug signature definitions. It works by examining the content of the computer's memory (its RAM and boot sectors) and the files stored on fixed or removable drives (hard drives, floppy drives), and comparing those files against a database of known bug "signatures". The disadvantage of this detection method is that users are only protected from bugs that pre-date their last bug definition update. The second method is to use a heuristic algorithm to find bugs based on common behaviors. This method has the ability to detect bugs for which anti-bug security firms have yet to create a signature.

    Some anti-bug programs are able to scan opened files in addition to sent and received

    e-mails 'on the fly' in a similar manner. This practice is known as "on-access

    scanning." Anti-bug software does not change the underlying capability of host

    software to transmit bugs. Users must update their software regularly to patch

    security holes. Anti-bug software also needs to be regularly updated in order to

    prevent the latest threats.

    One may also minimize the damage done by bugs by making regular backups of data (and of the operating system) on different media that are kept either unconnected to the system (most of the time), read-only, or otherwise inaccessible, such as on different file systems. This way, if data is lost to a bug, one can start again using the backup (which should preferably be recent). If a backup session on optical media like CD and DVD is closed, it becomes read-only and can no longer be affected by a bug. Likewise, an operating system on a bootable medium can be used to start the computer if the installed operating system becomes unusable. Another method is to use different operating systems on different file systems; a bug is not likely to affect both. Data backups can also be put on different file systems. For example, Linux requires specific software to write to NTFS partitions, so if one does not install such software and uses a separate installation of MS Windows to make the backups on an NTFS partition, the backup should remain safe from any Linux bugs. Likewise, MS Windows cannot read file systems like ext3, so if one normally uses MS Windows, the backups can be made on an ext3 partition using a Linux installation.

    Recovery methods:

    Once a computer has been compromised by a bug, it is usually unsafe to continue

    using the same computer without completely reinstalling the operating system.

    However, a number of recovery options exist after a computer has a bug. These options depend on the severity and type of the bug.

    Bug removal:

    One possibility on Windows Me, Windows XP and Windows Vista is a tool known

    as System Restore, which restores the registry and critical system files to a previous

    checkpoint. Often a bug will cause a system to hang, and a subsequent hard reboot

    will render a system restore point from the same day corrupt. Restore points from

    previous days should work provided the bug is not designed to corrupt the restore

    files. Some bugs, however, disable System Restore and other important tools such as

    Task Manager and Command Prompt. An example of a bug that does this is

    CiaDoor.

    Administrators have the option to disable such tools for limited users for various reasons. The bug modifies the registry to do the same, except that, even when the Administrator is controlling the computer, it blocks all users from accessing the tools. When an infected tool is activated it gives the message "Task Manager has been disabled by your administrator.", even if the user trying to open the program is the administrator.

    Users running a Microsoft operating system can go to Microsoft's website to run a

    free scan, if they have their 20-digit registration number.

  • Operating system reinstallation:

    Reinstalling the operating system is another approach to bug removal. It involves

    simply reformatting the OS partition and installing the OS from its original media, or

    imaging the partition with a clean backup image (taken with Ghost or Acronis for

    example).

    This method is simple to carry out, can be faster than running multiple anti-bug scans, and is guaranteed to remove any malware. Downsides

    include having to reinstall all other software as well as the operating system. User

    data can be backed up by booting off of a Live CD or putting the hard drive into

    another computer and booting from the other computer's operating system.

    Antibug software:

    Antibug software consists of computer programs that attempt to identify, neutralize or eliminate malicious software. The term "antibug" is used because the earliest examples were designed exclusively to combat computer bugs; however, most modern antibug software is now designed to combat a wide range of threats, including worms, phishing attacks, rootkits, trojan horses and other malware.

    Antibug software typically uses two different approaches to accomplish this:

    examining (scanning) files to look for known bugs matching definitions in a bug dictionary, and

    identifying suspicious behavior from any computer program which might indicate infection.

    The second approach is called heuristic analysis. Such analysis may include data

    captures, port monitoring and other methods.

    Most commercial antibug software uses both of these approaches, with an emphasis on the bug dictionary approach. Some people consider network firewalls to be a type of antibug software; however, this is not correct.

  • Approaches:

    Dictionary:

    In the bug dictionary approach, when the antibug software looks at a file, it refers to a dictionary of known bugs that the authors of the antibug software have identified. If a piece of code in the file matches any bug identified in the dictionary, then the antibug software can take one of the following actions:

    1. attempt to repair the file by removing the bug itself from the file,

    2. quarantine the file (such that the file remains inaccessible to other programs

    and its bug can no longer spread), or

    3. delete the infected file.

    To achieve consistent success in the medium and long term, the bug dictionary

    approach requires periodic (generally online) downloads of updated bug dictionary

    entries. As civic-minded and technically inclined users identify new bugs "in the wild", they can send their infected files to the authors of antibug software, who then include information about the new bugs in their dictionaries.

    Dictionary-based antibug software typically examines files when the computer's

    operating system creates, opens, closes, or e-mails them. In this way it can detect a

    known bug immediately upon receipt. Note too that a System Administrator can

    typically schedule the antibug software to examine (scan) all files on the computer's

    hard disk on a regular basis.

    Although the dictionary approach can effectively contain bug outbreaks in the right circumstances, bug authors have tried to stay a step ahead of such software by writing "oligomorphic", "polymorphic" and, more recently, "metamorphic" bugs, which encrypt parts of themselves or otherwise modify themselves as a method of disguise, so as not to match the bug's signature in the dictionary.
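The dictionary approach described above can be sketched in a few lines of Python. The signature names and byte patterns below are toy examples (the second entry is invented; the first uses a fragment of the well-known EICAR anti-virus test string), not a real dictionary:

```python
def dictionary_scan(data: bytes, dictionary: dict) -> list:
    """Return the names of every signature whose byte pattern occurs in data."""
    return [name for name, pattern in dictionary.items() if pattern in data]

# Toy dictionary; a real one holds many thousands of entries and is
# refreshed by the periodic updates described above.
SIGNATURES = {
    "Eicar-Test-String": b"EICAR-STANDARD-ANTIVIRUS-TEST-FILE",
    "Toy-Bug-A": b"\xde\xad\xbe\xef",
}
```

A polymorphic bug defeats exactly this kind of scan by ensuring no fixed byte pattern survives between generations.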

    An emerging technique to deal with malware in general is whitelisting. Rather than

    looking for only known bad software, this technique prevents execution of all

    computer code except that which has been previously identified as trustworthy by the

    system administrator. By following this default deny approach, the limitations

    inherent in keeping bug signatures up to date are avoided. Additionally, computer

    applications that are unwanted by the system administrator are prevented from

    executing since they are not on the whitelist. Since modern enterprise organizations

    have large quantities of trusted applications, the limitations of adopting this

    technique rest with the system administrators' ability to properly inventory and

    maintain the whitelist of trusted applications. As such, viable implementations of this

    technique include tools for automating the inventory and whitelist maintenance

    processes.
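A minimal sketch of the default-deny whitelisting idea, assuming the administrator approves binaries by their SHA-256 hash (an assumption for illustration; real products differ in how they identify trusted applications):

```python
import hashlib

class Whitelist:
    """Default-deny policy: only binaries whose SHA-256 hash the
    administrator has approved are allowed to execute."""
    def __init__(self):
        self._approved = set()

    def approve(self, binary: bytes):
        """Administrator action: record the hash of a trusted binary."""
        self._approved.add(hashlib.sha256(binary).hexdigest())

    def may_execute(self, binary: bytes) -> bool:
        """Anything not previously approved is denied by default."""
        return hashlib.sha256(binary).hexdigest() in self._approved
```

Note how the inventory problem described above shows up directly: every legitimate application image must be hashed and approved before it can run.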

    Suspicious behavior:

    The suspicious behavior approach, by contrast, doesn't attempt to identify known bugs, but instead monitors the behavior of all programs. If one program tries to

    write data to an executable program, for example, the antibug software can flag this

    suspicious behavior, alert a user, and ask what to do.
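A toy version of such a behavior rule might look like this; the process names, extension list, and "trusted" exemption are invented policy details, not a real product's logic:

```python
def is_suspicious_write(process: str, target: str, trusted=("os_updater",)) -> bool:
    """Flag any attempt by a non-trusted process to write to an executable
    file -- the classic behavior rule described above. All policy details
    here (extensions, trusted list) are illustrative assumptions."""
    executable = target.lower().endswith((".exe", ".dll", ".com"))
    return executable and process not in trusted
```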

    Unlike the dictionary approach, the suspicious behavior approach therefore provides

    protection against brand-new bugs that do not yet exist in any bug dictionaries. However, it can also raise a large number of false positives, and users may become desensitized to the warnings. If the user clicks "Accept" on every such warning, then the antibug software obviously gives no benefit to that user. This problem has worsened since 1997, as many more non-malicious program designs came to modify other .exe files without regard to this false positive issue. Therefore, most modern antibug software uses this technique less and less.

    Other approaches:

    Some antibug software uses other types of heuristic analysis. For example, it could

    try to emulate the beginning of the code of each new executable that the system

    invokes before transferring control to that executable. If the program seems to use self-modifying code or otherwise behaves like a bug (if it immediately tries to find other executables, for example), one could assume that a bug has infected the executable. However, this method could result in a lot of false positives.

  • Yet another detection method involves using a sandbox. A sandbox emulates the

    operating system and runs the executable in this simulation. After the program has

    terminated, software analyzes the sandbox for any changes which might indicate a

    bug. Because of performance issues, this type of detection normally only takes place

    during on-demand scans. This method may also fail, as bugs can be nondeterministic and perform different actions, or no actions at all, on different runs, so it may be impossible to detect a bug from a single run.
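One mitigation for this nondeterminism is to aggregate several sandboxed runs. The sketch below only simulates the idea in Python; the "sandbox" here is just a callable returning which files were modified, not a real OS emulator:

```python
import random

def observed_changes(program, runs=5, seed=0):
    """Run a (simulated) sandboxed program several times and union the file
    changes seen, since a nondeterministic bug may act on some runs only.
    'program' is a callable taking a random source and returning the set of
    paths it modified in the emulated environment."""
    rng = random.Random(seed)  # seeded so the simulation is repeatable
    changes = set()
    for _ in range(runs):
        changes |= program(rng)
    return changes
```

With a bug that only acts half the time, a single run can easily observe nothing, while the union over several runs reveals the modification.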

    Some bug scanners can also warn a user if a file is likely to contain a bug based on

    the file type.

    What Does Heuristic Really Mean?

    Heuristic refers to the act or process of finding or discovering. The Oxford English Dictionary defines heuristic as "enabling a person to discover or learn something for themselves" or (in the computing context) "proceeding to a solution by trial and error or by rules that are only loosely defined". The Merriam-Webster Dictionary defines it as "an aid to learning, discovery, or problem-solving by experimental and especially trial-and-error methods" or (again, in the context of computing) "relating to exploratory problem-solving techniques that utilize self-educating techniques (as the evaluation of feedback) to improve performance".

    Heuristic programming is usually regarded as an application of artificial intelligence,

    and as a tool for problem solving. Heuristic programming, as used in expert systems,

    builds on rules drawn from experience, and the answers generated by such a system

    get better as the system learns by further experience, and augments its knowledge

    base.

    As it is used in the management of malware (and indeed spam and related nuisances),

    heuristic analysis, though closely related to these elements of trial-and-error and

    learning by experience, also has a more restricted meaning. Heuristic analysis uses a

    rule-based approach to diagnosing a potentially-offending file (or message, in the

    case of spam analysis). As the analyzer engine works through its rule-base, checking

    the message against criteria that indicate possible malware, it assigns score points

    when it locates a match. If the score meets or exceeds a threshold score, the file is

    flagged as suspicious (or potentially malicious or spammy) and processed

    accordingly.
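The scoring scheme described above can be sketched as follows. The rules, point values, and threshold are all invented for illustration; real heuristic engines use far richer rule bases:

```python
# Toy rule base: each rule pairs a predicate over the file's bytes with a
# point score. Rules, scores and the threshold are illustrative assumptions.
RULES = [
    (lambda data: b"CreateRemoteThread" in data, 40),  # code-injection API name
    (lambda data: data.count(b"\x90") > 50, 30),       # long run of NOP bytes
    (lambda data: b"DeleteFile" in data, 20),
]
THRESHOLD = 50

def heuristic_score(data: bytes) -> int:
    """Sum the points of every rule the data matches."""
    return sum(points for rule, points in RULES if rule(data))

def is_flagged(data: bytes) -> bool:
    """Flag the object once the score meets or exceeds the threshold."""
    return heuristic_score(data) >= THRESHOLD
```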

    In a sense, heuristic anti-malware attempts to apply the processes of human analysis

    to an object. In the same way that a human malware analyst would try to determine

    the process of a given program and its actions, heuristic analysis performs the same

    intelligent decision-making process, effectively acting as a virtual malware

    researcher. As the human malware analyst learns more from and about emerging

    threats, he or she can apply that knowledge to the heuristic analyzer through

    programming, and improve future detection rates.

    Heuristic programming has a dual role in AV performance: speed and detection. In

    fact, the term heuristic is applied in other areas of science in a very similar sense;

    aiming to improve performance (especially speed of throughput) through a good

    enough result rather than the most exact result. Otherwise the increased time needed

    to scan for an ever-increasing number of malicious programs would make the system

    effectively unusable.

    Despite the much-improved performance of some contemporary heuristic engines,

    there is a danger that the impact of heuristic (and even non-heuristic) scanning may

    be seen as outweighing the advantages of improved detection. There is a common

    belief that heuristic scanners are generally slower than static scanners, but at a certain

    point of sophistication this ceases to be true.

    Even early heuristic scanners using simple pattern detection benefited from

    optimization techniques that searched only the parts of an object where a given bug

    could be expected to be found. (A simple example: there's no point in scanning an entire file for a bug signature if that bug always stores its core code at the beginning

    or end of an infected file.) This reduces scanning overhead and lessens the risk of a

    false positive.
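That optimization can be sketched directly: scan only a window at the start and end of the file. The window size below is an arbitrary illustrative choice:

```python
def scan_head_and_tail(data: bytes, signature: bytes, window: int = 4096) -> bool:
    """Search only the first and last 'window' bytes. This mirrors the
    optimization above: if a bug always stores its core code at the start
    or end of an infected file, the middle need not be scanned."""
    head = data[:window]
    tail = data[-window:]
    return signature in head or signature in tail
```

Besides saving time, restricting the search region also reduces the chance of a match at a position where the bug could never actually reside, which is exactly the false-positive risk discussed above.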

    The inappropriate detection of a viral signature in a place where the bug would never

    be found in normal circumstances is not only a side effect of poor detection

    methodology, but a symptom of poorly designed detection testing. For instance,

    some testers have attempted to test the capabilities of an AV program by inserting

    bug code randomly into a file or other infectible object. Similarly, a particular kind

    of object such as a file or boot sector can be selectively scanned for only those types

    of malware that can realistically be expected to be found in that object, a process

    sometimes described as filtering. After all, there's no reason to look for macro bug

    code in a boot sector.

    However, correct identification of a file type is not concrete proof of an

    uncontaminated file. For example, Microsoft Word document files containing

    embedded malicious executables have long been a major attack vector for

    information theft and industrial espionage. Similarly, malware authors are constantly

    in search of attacks where an object not normally capable of executing code can be

    made to do so, for example by modifying the runtime environment. W32/Perrun, for

    example, appended itself to .JPG and .TXT files, but could not actually run unless

    specific changes were made in the operating environment to allow the Perrun code to

    be extracted and run.

    De-compiler:

    The term "decompiler" is most commonly applied to a program which translates

    executable programs (the output from a compiler) into source code in a (relatively)

    high level language which, when compiled, will produce an executable whose

    behavior is the same as the original executable program. By comparison, a

    disassembler translates an executable program into assembly language (and an

    assembler could be used to assemble it back into an executable program).

    Decompilation is the act of using a decompiler, although the term, when used as a

    noun, can also refer to the output of a decompiler. It can be used for the recovery of

    lost source code, and is also useful in some cases for computer security,

    interoperability and error correction. The success of decompilation depends on the

    amount of information present in the code being decompiled and the sophistication

    of the analysis performed on it. The bytecode formats used by many virtual machines

    (such as the Java Virtual Machine or the .NET Framework Common Language

    Runtime) often include extensive metadata and high-level features that make

    decompilation quite feasible. Machine language typically has much less metadata, and is therefore much harder to decompile.

  • Some compilers and post-compilation tools produce obfuscated code (that is, they

    attempt to produce output that is very difficult to decompile). This is done to make it

    more difficult to reverse engineer the executable.

    Design:

    Decompilers can be thought of as composed of a series of phases each of which

    contributes specific aspects of the overall decompilation process.

    Loader:

    The first decompilation phase is the loader, which parses the input machine code or

    intermediate language program's binary file format. The loader should be able to

    discover basic facts about the input program, such as the architecture (Pentium,

    PowerPC, etc), and the entry point. In many cases, it should be able to find the

    equivalent of the main function of a C program, which is the start of the user-written code. This excludes the runtime initialization code, which should not be decompiled

    if possible.

    Disassembly:

    The next logical phase is the disassembly of machine code instructions into a

    machine independent intermediate representation (IR). For example, the Pentium

    machine instruction

    mov eax, [ebx+0x04]

    might be translated to the IR

    eax := m[ebx+4];
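A one-pattern version of this translation step might look like this in Python; a real disassembler covers a full instruction set and operand forms, so this is only a sketch:

```python
import re

def mov_to_ir(instruction: str) -> str:
    """Translate the single pattern 'mov REG, [REG+0xNN]' into the IR form
    shown above ('REG := m[REG+N];'). Everything outside this one pattern
    is rejected; the function is illustrative, not a real disassembler."""
    m = re.fullmatch(r"mov (\w+), \[(\w+)\+0x([0-9a-fA-F]+)\]", instruction.strip())
    if m is None:
        raise ValueError("unsupported instruction: " + instruction)
    dst, base, offset = m.groups()
    return f"{dst} := m[{base}+{int(offset, 16)}];"
```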

    Idioms:

    Idiomatic machine code sequences are sequences of code whose combined semantics

    is not immediately apparent from the instructions' individual semantics. Either as part

    of the disassembly phase, or as part of later analyses, these idiomatic sequences need

    to be translated into known equivalent IR. For example, the x86 assembly code:

    cdq             ; edx is set to the sign-extension of eax
    xor eax, edx
    sub eax, edx

    could be translated to

    eax := abs(eax);

    Some idiomatic sequences are machine independent; some involve only one

    instruction. For example, xor eax, eax clears the eax register (sets it to zero). This can be implemented with a machine-independent simplification rule such as a xor a = 0.
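Such a simplification rule can be sketched as a small rewrite function; the textual instruction format is a simplification for illustration:

```python
def simplify_idiom(instruction: str) -> str:
    """Apply the machine-independent rule 'a xor a = 0': an instruction that
    XORs a register with itself is rewritten as the IR assignment 'reg := 0;'.
    Any other instruction is returned unchanged."""
    parts = instruction.replace(",", " ").split()
    if len(parts) == 3 and parts[0] == "xor" and parts[1] == parts[2]:
        return f"{parts[1]} := 0;"
    return instruction
```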

    In general, it is best to delay detection of idiomatic sequences if possible, to later

    stages that are less affected by instruction ordering. For example, the instruction

    scheduling phase of a compiler may insert other instructions into an idiomatic

    sequence, or change the ordering of instructions in the sequence. A pattern matching

    process in the disassembly phase would probably not recognize the altered pattern.

    Later phases group instruction expressions into more complex expressions, and

    modify them into a canonical (standardized) form, making it more likely that even

    the altered idiom will match a higher level pattern later in the decompilation.

    Program analysis:

    Various program analyses can be applied to the IR. In particular, expression

    propagation combines the semantics of several instructions into more complex

    expressions. For example,

    mov eax, [ebx+0x04]
    add eax, [ebx+0x08]
    sub [ebx+0x0C], eax

    could result in the following IR after expression propagation:

    m[ebx+12] := m[ebx+12] - (m[ebx+4] + m[ebx+8]);

    The resulting expression is more like high level language, and has also eliminated the

    use of the machine register eax . Later analyses may eliminate the ebx register.

  • Type analysis:

    A good machine code de-compiler will perform type analysis. Here, the way

    registers or memory locations are used result in constraints on the possible type of

    the location. For example, an and instruction implies that the operand is an integer; programs do not use such an operation on floating point values (except in special

    library code) or on pointers. An add instruction results in three constraints, since the operands may be both integer, or one integer and one pointer (with integer and

    pointer results respectively; the third constraint comes from the ordering of the two

    operands when the types are different).

    Various high level expressions can be recognized which trigger recognition of

    structures or arrays. However, it is difficult to distinguish many of the possibilities,

    because of the freedom that machine code or even some high level languages such as

    C allow with casts and pointer arithmetic.

    The example from the previous section could result in the following high level code:

    struct T1 { int v0004; int v0008; int v000C; };
    struct T1 *ebx;
    ebx->v000C -= ebx->v0004 + ebx->v0008;

    Structuring:

    The penultimate de-compilation phase involves structuring of the IR into higher level

    constructs such as while loops and if/then/else conditional statements. For example, the machine code

        xor eax, eax
    l0002:
        or ebx, ebx
        jge l0003
        add eax, [ebx]
        mov ebx, [ebx+0x4]
        jmp l0002
    l0003:
        mov [0x10040000], eax

    could be translated into:

        eax = 0;
        while (ebx < 0) {
            eax += ebx->v0000;
            ebx = ebx->v0004;
        }
        v10040000 = eax;

    Unstructured code is more difficult to translate into structured code than already

    structured code. Solutions include replicating some code, or adding boolean

    variables.

    Code generation:

    The final phase is the generation of the high level code in the back end of the

    decompiler. Just as a compiler may have several back ends for generating machine

    code for different architectures, a decompiler may have several back ends for

    generating high level code in different high level languages.

    Just before code generation, it may be desirable to allow an interactive editing of the

    IR, perhaps using some form of graphical user interface. This would allow the user to

    enter comments, and non-generic variable and function names. However, these are

    almost as easily entered in a post de-compilation edit. The user may want to change

    structural aspects, such as converting a while loop to for loop. These are less readily

    modified with a simple text editor, although source code refactoring tools may assist

    with this process. The user may need to enter information that failed to be identified

    during the type analysis phase, e.g. modifying a memory expression to an array or

    structure expression. Finally, incorrect IR may need to be corrected, or changes made

    to cause the output code to be more readable.

  • OVERVIEW OF SYSTEM

  • 2. OVERVIEW OF SYSTEM

    2.1 SYSTEM REQUIREMENTS:

    HARDWARE REQUIREMENTS:

    The various hardware details required for the project are,

    PROCESSOR : Intel Pentium II or above

    PROCESSOR SPEED : 1.76 GHz or above

    RAM : 32 MB or above

    HDD : 40 MB

    SOFTWARE REQUIREMENTS:

    The various software requirements of this project are,

    PLATFORM : WINDOWS XP

    FRONT END : C, C# .NET

    BACK END : MS ACCESS

  • 2.2 SYSTEM ANALYSIS:

    In this phase we make the analysis of the system: we study the problem definition, examine alternate system solutions, and make recommendations about the resources required to design the system.

    Apart from the problem definition, in this phase we determine the system performance requirements, identify and evaluate the potential system solutions, and analyze the alternative solutions. The purpose of these activities is to pick the most cost-effective system that meets the desired performance requirements at the lowest cost. A study phase report is prepared and presented to the user or users of the system, recommending the most feasible solution to the problem. The greater the participation of the users in the study phase, the more likely the success of the subsequent phases.

    PRELIMINARY INVESTIGATION:

    It involves understanding and clarifying the given problem, evaluating the merits of the project request, and determining the feasibility of the project. For this project, the data required for the preliminary investigation was gathered by reviewing documents. An unstructured interview technique was used during the initial stages to gain an understanding of the system, and the documents and records maintained by the organization were referred to.

    FEASIBILITY STUDY:

    An important outcome of the preliminary investigation is the determination that the requested system is feasible; that is, the preliminary investigation examines project feasibility, the likelihood of the system being useful to the organization. The tests of feasibility are:

    OPERATIONAL FEASIBILITY

    TECHNICAL FEASIBILITY

    ECONOMIC FEASIBILITY

  • OPERATIONAL FEASIBILITY:

    A system is said to be operationally feasible only if it can be turned into an information system that meets the organization's operating requirements. The Bug Tracking System has no barrier in operation and implementation. Further, it reduces manual effort and increases performance when compared to conventional methods, and it increases efficiency since files are evaluated automatically. Our system was thus found to be operationally feasible.

    TECHNICAL FEASIBILITY:

    A system is said to be technically feasible only if it can be developed using the existing technology. Our system satisfies technical feasibility, owing to the existing technology's reliability, ease of access, and security.

    ECONOMIC FEASIBILITY:

    This test is carried out to weigh the costs of conducting a full system investigation and of the required hardware and software against the benefits in the form of reduced costs. The costs of conducting the preliminary investigation and of the hardware and software were not considerable, due to the availability of all requirements at the college. The benefits of developing the system are substantial.

    DETERMINATION OF THE SYSTEM:

    After approval was given to develop the required system, the requirements needed to develop the system in the required manner were found out by asking various persons in the company. Data was collected from them, and flow diagrams were developed to indicate the flow of the information. After the required information was collected, it was sent on for further processing.

    DESIGN SYSTEM:

    The system design starts by converting the logical model of the system into a physical model. The physical model represents the transactions that take place in the system and the physical components that are involved; the documents for the physical model are, namely, the flowcharts for the program.

  • DEVELOPMENT OF SYSTEM:

    After the design of the system is complete, the software required to develop the system is selected such that it is easy to develop with, and proper documentation is done so that the user can benefit from the developed system to the maximum extent.

    TESTING:

    The developed system is then checked for accuracy by using simple test data, to see whether it gives the correct output. The system is also checked to find whether it was developed according to the requirements of the company. If changes are to be made, they are made in the current process so that a reliable system can be developed.

    IMPLEMENTATION:

    The developed system is shown to the company to find out whether it was developed as required. The system is then evaluated by giving it live data, to find the strengths and weaknesses of the system. If the company wants any changes to be made, they are carried out. Once the system is delivered to the company, it is implemented to serve their requirements.

  • 2.3 SYSTEM PLANNING:

    As the project is developed in C# DOTNET, owing to the complexity of the programming the project takes 8 months to complete.

    The following steps describe the work that was carried out in the particular months.

    MONTH 1: The problem has been identified and is discussed with in the group and

    the problem definition is developed.

    MONTH 2: The different types of software, hardware requirements and user

    requirements have been identified and analyzed. The SOFTWARE

    REQUIREMENTS DOCUMENT has been developed.

    MONTH 3: The interfaces have been designed manually, and different modules are

    identified.

    MONTH 4: The user interface forms have been designed using C# DOTNET.

    MONTH 5: The data flow diagrams for the interfaces designed were prepared.

    MONTH 6: Coding was begun.

    MONTH 7: Coding was completed.

    MONTH 8: Different levels of testing were performed.

  • 2.4 USER REQUIREMENTS:

    The system should be user friendly and should be designed in a way that

    the system can be easily understandable by an inexperienced user also.

    The user should be able to scan a single file, a specific folder and the

    entire system.

    The system should scan files of any extension.

    The system should generate a report of the scan performed.

    The report should provide options to delete or quarantine the affected files.

    The system should allow the quarantined files to be moved to the bug

    vault.

    The user should get the help he needs when operating the system, so help should be provided where necessary.

    The system should be designed in a way that it can be operated on any machine, even one with a low configuration, so that an ordinary user can access it.

    The system should be reliable.

    There should be restrictions on certain kinds of commands in accessing the system, so that it can provide security.

  • 2.5 SCOPE OF THE PROJECT

    Bug detection has two main methods. The first is signature scanning and the other is heuristic analysis. The signature scanning method is specific to a single signature of a particular bug, but it is the most popular method because of its simplicity. Heuristic analysis, unlike signature scanning, is a generic anti-bug approach, and it is the one we selected to improve. Our system is developed using the heuristic analysis approach, which uses decompilation as its main activity. We have written one basic disassembler in the C language and use it as our disassembling tool, so our project can cover the bug files that are built from C, C++, and some other languages. The system scope does not extend to files implemented in other computer languages; it can be increased further by adding more disassembler tools that can disassemble executables produced from other languages as well.

  • 2.6 METHODOLOGY

    Our project is developed by the following main methodologies,

    C# DOTNET (front end)

    C (middle end)

    MS ACCESS (back end)

    C# DOTNET: C# DOTNET is used as the front end, as it is a recent and flexible technology that combines features of C and Visual C++. It has a wide range of features which are very useful.

    DOTNET FEATURES:

    DOTNET makes it easy for your database administrator to set up a

    centralized unit database on your unit's FTP site so that multiple leaders can

    access the SAME set of data files. This means you no longer have to worry

    about providing database backups to numerous leaders and then trying to

    coordinate database updates without someone getting left out of the loop.

    Data security! The web database is fully encrypted using a data encryption

    password that you define. No one without that password can view your data,

    even if someone hacks into your FTP site or intercepts the database upload.

    Through the use of Data Access Passwords, the database administrator can

    control who can update the data and which functional area(s) they're allowed

    to update. You can assign the same functional area to more than one user. Of

    course, the database administrator retains update authority over the entire

    database.

    For each Data Access Password, simple checkbox options allow you to block

    users with that password from even being able to see sensitive data items,

    such as social security numbers and driver's licenses. There's a separate

    checkbox for each sensitive data item, so you have full control.

  • DOTNET automatically handles the FTP site interface for you. When you log

    on, DOTNET connects to your FTP site, downloads your encrypted database,

    and decrypts it. TroopMaster/PackMaster then decompresses the database and

    loads the files into your TroopMaster/PackMaster data folder. At that point,

    you can even disconnect from the Internet. When you exit

    TroopMaster/PackMaster, DOTNET compresses and encrypts your updated

    database and uploads the encrypted files back to your FTP site.

    DOTNET guarantees the safe execution of code, including code created by

    unknown or semi-trusted third parties. This is where the term managed code

    comes from, because the applications have to meet security standards and are

    managed just for that very purpose.

    DOTNET enables developers to work in a consistent programming

    environment whether creating applications for desktops or the Internet. This

    ensures that although there are techniques that vary between Web and

    desktop applications, you can use the same languages, such as C#.

    DOTNET builds all communication on industry standards to ensure that code

    based on the .NET Framework can integrate with any other code. .NET uses

    XML extensively, as well as other communication protocols such as SOAP (Simple Object Access Protocol), both of which are industry standards.

    DOTNET minimizes software deployment and versioning conflicts. Also

    called DLL hell, these conflicts occurred frequently when you were

    developing in prior platforms such as Visual Basic and using ActiveX

    controls. A lot of times when you installed new versions of your applications,

    controls would conflict and not work.

  • DOTNET eliminates performance problems of scripted or interpreted

    environments. Everything is compiled into a common language that the

    various parts of the platform are designed to work with.

    CONCEPTS USED:

    FORMS

    OLEDB PROVIDER

    FORMS:

    The objects from the standard classes are called graphical user interface (GUI)

    objects, and are used to handle the user interface aspect of programs. The style of

    programming we use with these GUI objects is called event-driven programming. An

    event occurs when the user interacts with a GUI object. For example, when we move

    the cursor, click on a button, or select a menu choice, an event occurs. In event-

    driven programs, we program objects to respond to these events by defining event-

    handling methods.

    A form is a general-purpose window through which the user interacts with the application. A C# GUI application program must have at least one form that serves as the program's main window. The form supports the most rudimentary functionality found in any frame window, such as minimizing the window, moving the window, resizing the window, and so forth.

    OLEDB:

    The OLE DB Data Provider is for use with databases that support OLE DB

    interfaces. This data provider uses native OLE DB through COM interoperability to

    access the database and execute commands. To use the OLEDB Data Provider we

    must also have a compatible OLE DB provider. The following OLE DB providers

    are, at the time of writing, compatible with ADO.NET:

    SQLOLEDB: Microsoft OLE DB Provider for SQL Server

    MSDAORA: Microsoft OLE DB Provider for Oracle

    Microsoft.Jet.OLEDB.4.0: OLE DB Provider for Microsoft Jet

    The OLE DB Data Provider does not support OLE DB 2.5 interfaces, such as those

    required for Microsoft OLE DB Provider for Exchange and Microsoft OLE DB

    Provider for Internet Publishing. The OLE DB Data Provider also does not support

    the MSDASQL Provider (Microsoft OLE DB Provider for ODBC). The OLEDB

    Data Provider is the recommended data provider for applications that use SQL

    Server 6.5 or earlier, Oracle, or Microsoft Access.

    The classes for the OLE DB Data Provider are found in the System.Data.OleDb namespace.

    In OLE DB Data Provider there are four key classes that are derived from the

    following ADO.NET interfaces, found in the System.Data namespace:

    IDbConnection: SqlConnection and OleDbConnection

    IDbCommand: SqlCommand and OleDbCommand

    IDataReader: SqlDataReader and OleDbDataReader

    IDbDataAdapter: SqlDataAdapter and OleDbDataAdapter

    Connection:

    The connection classes inherit, as we just saw, from the IDbConnection interface.

    They are manifested in each data provider as either the SqlConnection (for the SQL

    Server Data Provider) or the OleDbConnection (for the OLE DB Data Provider). The

    connection class is used to open a connection to the database on which commands

    will be executed.

    Command:

    The command classes inherit from the IDbCommand interface. As with the

    connection class, the command classes are manifested as either the SqlCommand or

    the OleDbCommand. The command class is used to execute T-SQL commands or

    stored procedures against a database. Commands require an instance of a connection

    object in order to connect to the database and execute a command. In turn, the

    command class exposes several execute methods, depending on what expectations

    you have.

  • DataReader:

    The datareader classes inherit from the IDataReader interface. Continuing the trend,

    the data reader is manifested as either a SqlDataReader or an OleDbDataReader. The

    datareader is a forward-only, read-only stream of data from the database. This makes

    the datareader a very efficient means for retrieving data, as only one record is

    brought into memory at a time.

    DataAdapter:

    The DataAdapter classes inherit from the IDbDataAdapter interface and are

    manifested as SqlDataAdapter and OleDbDataAdapter. The DataAdapter is intended

    for use with a DataSet and can retrieve data from the data source, populate

    DataTables and constraints, and maintain the DataTable relationships. The DataSet

    can contain multiple DataTables, disconnected from the database. The data in the

    DataSet can be manipulated (changed, deleted, or added to) without an active

    connection to the database.

    C: The disassembling part of the system requires a language that can be written at both a high level and a low level, and the immediate option is the C language. We used the C language to create the disassembler, created the executable file of the disassembly program, and use it as the disassembler tool in our project.

    MS ACCESS:

    Microsoft Access has changed the image of desktop databases from specialist

    applications used by dedicated professionals to standard business productivity

    applications used by a wide range of users. More and more developers are building

    easy-to-use business solutions on, or have integrated them with, desktop applications

    on users' desktops.

    Microsoft Access has built a tradition of innovation by making historically difficult

    database technology accessible to general business users. Whether users are

    connected by a LAN, the Internet, or not at all, Microsoft Access ensures that the

    benefits of using a database can be quickly realized. With its integrated technologies,

  • Microsoft Access is designed to make it easy for all users to find answers, share

    timely information, and build faster solutions.

    At the same time, Microsoft Access has a powerful database engine and a robust

    programming language, making it suitable for many types of complex database

    applications.

    Data engine: Microsoft Access ships with the Microsoft Jet database engine. (For

    additional information on the Jet database engine, please refer to Microsoft Jet

    Database Engine Programmer's Guide, published by Microsoft Press). This is the

    same engine that ships with Visual Basic and with Microsoft Office. Microsoft Jet is

    a 32-bit, multithreaded database engine that is optimized for decision-support

    applications and is an excellent workgroup engine.

    Microsoft Jet has advanced capabilities that have typically been unavailable on

    desktop databases. These include:

    Access to heterogeneous data sources: Microsoft Jet provides transparent access,

    via industry-standard Open Database Connectivity (ODBC) drivers, to over 170

    different data formats, including Borland International dBASE and Paradox,

    ORACLE from Oracle Corporation, Microsoft SQL Server, and IBM DB2.

    Developers can build applications in which users read and update data

    simultaneously in virtually any data format.

    Engine-level referential integrity and data validation: Microsoft Jet has built-in

    support for primary and foreign keys, database-specific rules, and cascading updates

    and deletes. This means that a developer is freed from having to create rules using

    procedural code to implement data integrity. Also, the engine itself consistently

    enforces these rules, so they are available to all application programs.

    Advanced workgroup security features: Microsoft Jet stores User and Group

    accounts in a separate database, typically located on the network. Object permissions

    for database objects (such as tables and queries) are stored in each database. By

    separating account information from permission information, Microsoft Jet makes it

  • much easier for system administrators to manage one set of accounts for all databases

    on a network.

    Updateable dynasets: As opposed to many database engines that return query

    results in temporary views or snapshots, Microsoft Jet returns a dynaset that

    automatically propagates any changes users make back to the original tables. This

    means that the results of a query, even those based on multiple tables, can be treated

    as tables themselves. Queries can even be based on other queries.

    Binding objects and data is easy with Microsoft Access. Complex data-management

    forms can be created easily by dragging and dropping fields and controls onto the

    form design surface. If a form is bound to a parent table, dragging a child table onto

    the form creates a sub form, which will automatically display all child records for the

    parent.

    Microsoft Access has a variety of wizards to ease application development for both

    users and developers. These include:

    The Database Wizard, which includes more than 20 customizable templates to

    create full-featured applications with a few mouse clicks.

    The Table Analyzer Wizard, which can decipher flat-file data intelligently from a

    wide variety of data formats and create a relational database.

    Several form and report wizards, which allow users great flexibility in creating

    the exact view of data required, regardless of underlying tables or queries.

    The Application Splitter Wizard, which separates a Microsoft Access application

    from its tables and creates a shared database containing the tables for a multi-user

    application.

    The PivotTable Wizard, which walks users through the creation of Microsoft

    Excel PivotTables based on a Microsoft Access table or query.

    The Performance Analyzer Wizard, which examines existing databases and

    recommends changes to improve application performance.

  • In addition to the wizards just listed, Microsoft Access provides a number of ease-of-

    use features in keeping with its goal of providing easy access to data for users. These

    include:

    Filter by Form, which allows users to type the information they seek and have

    Microsoft Access build the underlying query to deliver only that data, in a form

    view.

    Filter by Input, which allows users simply to right-click on any field, in any view,

    and then type the criteria they are looking for into an input box on a pop-up

    menu. Upon pressing ENTER, the filter is applied and the user then sees only the

    information they are looking for.

    Filter by Selection, which allows users to locate information quickly on forms or

    datasheets by highlighting a selection and filtering the underlying data based on

    that selection.

  • SYSTEM DESIGN

  • 3. SYSTEM DESIGN

    3.1 DETAILS OF THE DEVELOPMENT:

    The following are the different modules in to which we have divided our complete

    project.

    HOME PAGE:

    The home page of the Bug Tracking System contains the options through which the user can navigate to the other modules of the system. This is the beginning module of the project.

    DATABASE UPDATION:

    This module is the part of the project where the user can update the anti-bug database so as to increase the anti-bug's efficiency and accuracy.

    This module subdivides into two categories: one for adding new code, and the other for deleting existing code from the bug database.

    In the add module the user is given the option to enter any bug source code at the assembly level, so that he will be alerted whenever a file made from the same source code is scanned.

    In the deletion module the system displays the list of the codes that are in the database, and the user can select and delete any code that he does not want detected as a bug.

    SCANNING MODULE:

    This module corresponds to the scanning part of the anti-bug. Here the user gets the options for scanning the file he wants to scan, and the selected file or folder is passed to the disassembler module.

    This module again consists of three different sub-modules, namely scanning of a given single file, scanning of a specific folder, and scanning of the entire system.

  • DISASSEMBLER MODULE:

    The disassembler is the core part of our system. Here the file or set of files that are

    selected in the scanning module will be taken and they are translated to the assembly

    level code. The assembly level code that was generated for each file scanned will be

    passed to the next module.

    PARSING THE DISASSEMBLED CODE:

    The translated assembly level code will be compared instruction by instruction with

    the source code that was stored in the bug database. The sequence of instructions that

    in the database will be checked against those of the translated code. Generally the bug code will be inserted at the beginning or at the end of the target file, and hence the beginning and ending parts of the translated code will be compared with

    that of the database code. If there is a match with the sequence of the instructions that

    are in the database with that of the scanned file then the file will be marked as

    affected or else unaffected and it will be passed to the report module. This module

    will be called incrementally from the above module until all the files that have been

    selected are completed.

    REPORT AND REPAIR:

    The report and repair phase of the project is the logical end of the antibug process. In

    this module the files that have been scanned will be generated as a report with their

    status as affected or not and the user will get the options such as deleting the affected

    files. The affected files that the user doesn't want to delete can be moved to the vault

    where the location of the file will be preserved for future so that the user can delete

    later if he wants.

    BUG VAULT:

    The files that the user doesn't want to delete will be moved to the vault, where the user will get the option of deleting them later. The bug vault seems the same as the above module, but the difference is that it stores the affected file locations for a long time.

  • 3.2 DATAFLOW DIAGRAMS:

    A graphical tool is used to describe and analyze the movement of data through a system, manual or automated, including the processes, data storage, and delays in the system. DFDs are the central tools and the basis for the development of other components. The transformation of data from one process to another is independent of the physical components. DFDs of this type are called LOGICAL DATA FLOW DIAGRAMS. In contrast, physical data flow diagrams show the actual implementation and movements of the data through people, departments and workstations.

    ADVANTAGES OF DFDs:

    Users, the persons who are part of the process being studied, understand the notation readily, so analysts can work with the users and involve them in the study of the data flow diagrams. Users can make suggestions for modification of the business activity, and they can examine the charts and spot problems quickly. If errors are not found during the development process, they will be very difficult to correct later, and the system may fail.

    Data flow analysis permits the analyst to isolate areas of interest in the organization and

    study them by examining the data that enters the process and see how it is changed

    when it leaves the process.

    DFD Symbols:

    Square:

    It defines a source (originator) or destination of system data.

    Arrow:

  • It indicates data flow-data in motion. It is a pipeline through which

    information flows.

    Circle or Bubble:

    It represents a process that transforms incoming data flow(s) into outgoing data

    flow(s).

    Open Rectangle:

    It is a data store: data at rest, or a temporary repository of data. Here only the Data Flow Diagrams are given.

    They are explained by:

    GANE and SARSON method

    DEMARCO and YOURDON method

    GANE AND SARSON NOTATION:

    [Figure: Gane and Sarson symbols for DATA FLOW, DATA STRUCTURE, EXTERNAL ENTITY OR DATA LINK, PROCESS, and DATA BASE.]

    DEMARCO AND YOURDON NOTATION:

    [Figure: DeMarco and Yourdon symbols for DATA FLOW, DATA STRUCTURE, EXTERNAL ENTITY OR DATA LINK, PROCESS, and DATA BASE.]

  • Context level diagram (Zero level diagram)

    [DFD: the User sends a request to process 0.0, the Bug Tracking System, which returns the service to the User.]

  • Level 1 Diagram

    [DFD: the User interacts with process 1.0 Database Updating Module, process 2.0 Scanning Module, and process 3.0 Bug Vault.]

  • Level 2 Diagram

    [DFD: the user invokes process 1.1 Add Code, which feeds process 1.2 Database Insertion.]

    Level 2 Diagram

    [DFD: the user invokes process 1.3 Delete Code, which feeds process 1.4 Database Deletion.]

  • Level 2 Diagram

    [DFD: the user invokes process 2.1 Disassembler Module, which feeds process 2.2 Parsing Module.]

    Level 2 Diagram

    [DFD: the user interacts with process 3.0 Bug Vault.]

  • 3.3 DATABASE DESIGN:

    TABLE1:

    NAME: SCODE

    PURPOSE:

    This table is used to store the bug codes that will be used to compare with the

    translated file codes.

    FIELD NAME   CONSTRAINT   DATATYPE   SIZE      DESCRIPTION

    name         not null     varchar2   20        stores the name of the bug code

    inst         not null     varchar2   30        stores the individual instruction

    eno          not null     number     integer   used for grouping the instructions

  • TABLE2:

    NAME: TEMP

    PURPOSE:

    This table is used to store the file locations and their status that have been scanned

    temporarily to pass them to the next module after completing all the selected files.

    FIELD NAME   CONSTRAINT   DATATYPE   SIZE   DESCRIPTION

    fname        not null     varchar2   100    stores the location of the file

    status       not null     varchar2   10     stores the status of the file

  • TABLE3:

    NAME: VAULT

    PURPOSE:

    This table is used to store the locations of the files that are affected and have been

    moved to the vault for deleting them in the future.

    FIELD NAME   CONSTRAINT   DATATYPE   SIZE   DESCRIPTION

    fname        not null     varchar2   100    stores the location of the file

    status       not null     varchar2   10     stores the status of the file

  • IMPLEMENTATION

  • 4. IMPLEMENTATION

    4.1 SOURCE CODE:

    using System;

    using System.Collections.Generic;

    using System.ComponentModel;

    using System.Data;

    using System.Drawing;

    using System.Text;

    using System.IO;

    using System.Windows.Forms;

    namespace antibug

    {

    public partial class start : Form

    {

    public start()

    {

    InitializeComponent();

    }

    string[] drives;

    string drive;

    private void start_Load(object sender, EventArgs e)

    {

    drives = Environment.GetLogicalDrives();

    drive = drives[1];   // use the second logical drive (typically D:\)

    Directory.CreateDirectory(drive + @"\TrackingSystem");

    // Copy the application's resource files into the TrackingSystem
    // folder, skipping any file that is already there. (The original
    // listing repeated this try/catch per file, with stray "\\" inside
    // verbatim strings and a "trail.exe"/"trial.exe" mismatch; both are
    // fixed here.)
    string[] resources = { "start.JPG", "update.JPG", "addcode.JPG",
                           "delcode.JPG", "scanner.JPG", "trial.exe", "bug.mdb" };

    foreach (string res in resources)
    {
        try
        {
            File.Copy(res, drive + @"\TrackingSystem\" + res);
        }
        catch (IOException)
        {
            // already copied on a previous run
        }
    }

    pictureBox1.ImageLocation = drive + @"\TrackingSystem\start.JPG";

    }

    private void button6_Click(object sender, EventArgs e)

    {

    scanner f = new scanner();

    f.Show();

    }

    private void button7_Click(object sender, EventArgs e)

    {

    update u = new update();

    u.Show();

    }

    private void button5_Click(object sender, EventArgs e)

    {

    bugvault vv = new bugvault();

    vv.Show();

    }

    private void button8_Click(object sender, EventArgs e)

    {

    this.Close();

    }

    private void pictureBox1_Click(object sender, EventArgs e)

    {

    }

    }

    }

    using System;

    using System.Collections.Generic;

    using System.ComponentModel;

    using System.Data;

    using System.Drawing;

    using System.Text;

    using System.Windows.Forms;

    namespace antibug

    {

    public partial class update : Form

    {

    public update()

    {

    InitializeComponent();

    }

    String[] drives;

    string drive;

    private void update_Load(object sender, EventArgs e)

    {

    drives = Environment.GetLogicalDrives();

    drive = drives[1];

    pictureBox1.ImageLocation = drive + @"\TrackingSystem\update.JPG";

    }

    private void button3_Click_1(object sender, EventArgs e)

    {

    this.Close();

    }

    private void button6_Click(object sender, EventArgs e)

    {

    addcode a = new addcode();

    a.Show();

    }

    private void button5_Click(object sender, EventArgs e)

    {

    delcode d = new delcode();

    d.Show();

    }

    private void pictureBox1_Click(object sender, EventArgs e)

    {

    }

    }

    }

  • 4.2 OUTPUT SCREENS:

    HOME PAGE

  • SCANNING MODULE

  • DATABASE UPDATION

  • FORM TO ADD THE NEW CODE

  • FORM TO DELETE THE CODE FROM DATABASE

  • BUG REPORT

  • BUG VAULT

  • 5. RESULTS AND DISCUSSION

    TESTING:

    Software testing is a crucial element of Software Quality Assurance and represents

    the ultimate review of specification, design and coding. Errors tend to creep into our

    work when we design and implement functions, conditions or controls that are out of

    the mainstream. The logical flow of a program is sometimes counter-intuitive, meaning

    that our unconscious assumptions about flow of control and data may lead us to make

    design errors that are uncovered only once path testing commences.

    UNIT TESTING:

    Unit testing is performed on all the individual modules in the system. Each module

    is examined at the unit level to check whether it works correctly on its own.

    Our system consists of three main modules that are to be tested:

    1. Update module

    2. Scanning module

    3. Bug Vault

    Update module:

    The update module consists of two sub-modules: one for adding a code to the database

    and one for deleting a code from it. The add code module has two fields that the user

    must enter in order to store a code in the database.

    A sample test case for the module looks like this:

    Code name to be stored: replication

    Instruction: MOV R, L

    When this input is given to the add code module, the database is examined to see

    whether the data has actually been inserted. If the data is not present in the

    database, the add code module must be debugged and rebuilt.

  • The delete code module is tested in the same way, but here the only parameter passed

    to the module is the code name that was inserted earlier in the database. A sample

    test case looks like this:

    Code name: replication

    The code name parameter is passed to the module and the delete operation is

    performed. The database is then examined to see whether the code has been deleted.

    If it has not, the module must be debugged again.
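The add/delete round trip described above can be automated. In the sketch below, `CodeStore` is a hypothetical in-memory stand-in for the SCODE table; the real modules write to the database in bug.mdb, and the test then inspects the table directly.

```csharp
using System;
using System.Collections.Generic;

// In-memory stand-in for the SCODE table, grouping instructions by
// code name, used only to illustrate the unit-test procedure above.
class CodeStore
{
    private readonly Dictionary<string, List<string>> rows =
        new Dictionary<string, List<string>>();

    public void AddCode(string name, string instruction)
    {
        if (!rows.ContainsKey(name)) rows[name] = new List<string>();
        rows[name].Add(instruction);
    }

    public void DeleteCode(string name) { rows.Remove(name); }

    public bool Contains(string name) { return rows.ContainsKey(name); }
}

class UnitTestSketch
{
    static void Main()
    {
        var store = new CodeStore();

        // Test case from the text: add the "replication" code,
        // then check the store actually contains it.
        store.AddCode("replication", "MOV R, L");
        if (!store.Contains("replication")) throw new Exception("add code failed");

        // Delete it again and confirm it is gone.
        store.DeleteCode("replication");
        if (store.Contains("replication")) throw new Exception("delete code failed");

        Console.WriteLine("unit test sketch passed");
    }
}
```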

    Scanning Module:

    The scanning module consists of three sub-modules: scanning a specific file,

    scanning a specific folder, and scanning the whole computer. Scanning a specific

    file is tested by passing the name of the file to be scanned as a parameter to the

    module. The file is also disassembled manually with a suitable tool and checked by

    hand to see whether it matches any code stored in the database. If the code matches,

    the file must be flagged as affected, and as not affected otherwise. If a

    contradiction occurs, the module must be checked again.

    Scanning a specific folder is tested in the same way, except that a folder rather

    than a single file is passed as the parameter.

    The scan-whole-computer module does not take any parameter, and its test may be

    skipped because it reuses the procedures of the two modules above. If testing it is

    nevertheless required, it is best executed on a machine with very few files on the

    hard disk.
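The pass/fail criterion described above — compare the disassembled file against the stored instructions and flag the file as affected on a match — can be sketched as a simple substring check. The names `ScanSketch` and `IsAffected` are hypothetical; the real system reads the instruction groups from the SCODE table.

```csharp
using System;
using System.Collections.Generic;

class ScanSketch
{
    // A file counts as affected when every instruction of some
    // signature group appears in its disassembled text.
    public static bool IsAffected(string disassembled,
                                  IEnumerable<List<string>> signatureGroups)
    {
        foreach (var group in signatureGroups)
        {
            bool allFound = true;
            foreach (var inst in group)
            {
                if (!disassembled.Contains(inst)) { allFound = false; break; }
            }
            if (allFound) return true; // one full signature matched
        }
        return false;
    }

    static void Main()
    {
        // The sample "replication" signature from the unit-test section.
        var sigs = new List<List<string>> {
            new List<string> { "MOV R, L" }
        };
        Console.WriteLine(IsAffected("MOV R, L\nADD A, B", sigs)); // True
        Console.WriteLine(IsAffected("NOP", sigs));                // False
    }
}
```

A real scanner would match at instruction granularity rather than raw substrings, but the flag-affected/flag-clean decision tested above is the same.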

    Bug Vault:

    The bug vault module is tested in a simple way: we open the vault report and compare

    the listed files with those present in the vault database. If they match, the module

    is working correctly.

  • INTEGRATION TESTING:

    Bottom-up testing is adopted here: all the modules at the lowest level are tested

    first and then combined for integration testing. Each time a module is added, the

    class paths are changed and the control flow is tested; if any error is encountered,

    the integrated modules are re-executed.

    The first module in our system is the update module, and the next in order is the

    scanning module, so these two are integrated first to check whether they work

    together to give the correct output. After integration, the update module is given

    the parameters to add a new suspicious code into the database. We then take a file

    that contains that suspicious code and pass it as a parameter to the scanning module.

    If the scanning module flags the file as affected, the two modules are working

    together correctly. The check is then reversed: another code is added to the

    database, and a new file that does not contain the inserted code is passed to the

    scanning module. The system should now respond by flagging the file as not affected.

    If the system fails either of these two test cases, the modules must be debugged

    together again.

    Once the update and scanning modules are confirmed to work well together, the third

    module to be integrated is the bug vault. A specific file or folder is passed to the

    scanning module, and from the bug report all the affected files are moved to the bug

    vault. The bug vault is then opened and checked to see whether all the files appear

    there. If the files appear correctly, in the order they were moved, the system works

    correctly.

    Integration testing is complete once all of the above modules have been integrated.

  • SYSTEM TESTING:

    System testing is actually a series of different tests whose primary purpose is to

    fully exercise the computer-based system. Although each test has a different purpose,

    the main goal of system testing is to verify that all system elements have been

    properly integrated and are performing their allocated functions. The different types

    of system testing are recovery testing, security testing, stress testing and

    performance testing.

    Recovery testing is a system test that forces the software to fail in a variety of

    ways and verifies that recovery is properly performed. Recovery testing can be done

    on our system by giving it a set of files that will overload it and then verifying

    whether the system recovers properly.

    Security testing attempts to verify that protection mechanisms built into a system

    will in fact protect it from improper penetration. The main purpose of this test is

    to fully exercise the total software system: when all modules are put together, there

    are more chances for errors as the software is exercised as a computer-based system.

    Security testing for our system is a critical task. The system receives files to

    scan, but it must provide security by not changing or modifying the files that are

    scanned. The system must not cause files to be deleted unnecessarily unless they seem

    suspicious, and the computer's performance should not be degraded by running the

    system. Security testing is done by passing several files to the system and then

    examining the machine to see whether performance degrades.

    Stress testing, or sensitivity testing, is performed as part of this final test. It

    is done to find the maximum load the system can bear before it fails. Stress testing

    for our system is performed by passing a large number of files to be scanned and

    examining to what extent the system can cope.

  • 6. CONCLUSION AND FUTURE ENHANCEMENT

    This project has dropped a small stone in the water by designing an application that

    provides a generic anti-bug approach for scanning files efficiently. The Bug Tracking

    System, developed within the constraints of the technology presently available in our

    college, completely meets the desired requirements.

    Our system can be extended further to provide more facilities and flexibility than it

    offers at present. Currently, disassembly of the file to be scanned is limited to

    .exe files written in C and C++ only. The disassembler provided in this system may

    not work properly when scanning files written in other high-level languages, so the

    more decompiling tools we add, the wider the variety of files we can scan.

    At present, only files that have been scanned and reported as affected can be deleted

    or moved to the vault for future deletion, so the only option offered to the user is

    to delete the affected file. However, an affected file could instead be repaired by

    deleting the matched bug code from the disassembled code and restoring a new file

    from the repaired code.

    The enhancements discussed above are left for future work on the project.

  • REFERENCES

    TEXT BOOKS:

    1. F. Scott Barker, Visual C# 2005 Express Edition Starter Kit, Wrox Publications, Wiley Publishing, Inc.

    2. Eric Butow & Tomy Ryan, C#: Your Visual Blueprint for Building .NET Applications, Hungry Minds Publishing, Inc.

    3. K. Joseph Wesley & R. Rajesh Jeba Anbiah, A to Z of C, First Edition

    4. David Harley & Andrew Lee, Heuristic Analysis: Detecting Unknown Bugs

    URLs:

    [1] http://www.this.net/~frank/pstill.html

    [2] http://en.wikipedia.org/wiki/Disassembler

    [3] http://en.wikipedia.org/wiki/Antibug

    [4] http://en.wikipedia.org/wiki/bug

    [5] http://www.eset.com