Monday, 13 August 2012

Industrial Engineering Research Paper

                                                                                                   
HELPING ENGINEERS TO ANALYSE AND INFLUENCE THE HUMAN FACTORS IN ACCIDENTS AT WORK


© 2006 Institution of Chemical Engineers
Trans IChemE, Part B, May 2006


R. LARDNER and R. SCAIFE
The Keil Centre Ltd, Edinburgh, UK

In the UK process industries, there are strong societal, industry and regulatory expectations that every effort will be made to ensure the safety of process plant, minimize injury, and protect the environment. As part of their efforts to meet such expectations and minimize commercial loss, many companies in the process industries have implemented an incident analysis process, which includes some form of root cause analysis to determine the immediate and system causes of accidents, incidents and near-misses. The existing process involved structured evidence gathering, interviewing by trained staff, development of an incident time-line, identification of critical factors, and the application of a root cause analysis model to guide recommendations.

The following principles were applied to the design of
the analysis toolkit:

. tools to be based on sound analytical methods, supported by existing research;
. methods designed to help the investigator reach their conclusions on the basis of evidence gathered;
. methods to be suitable for use by trained investigators, who are not human factors specialists;
. toolkit capable of being imparted via a 2-day training course, delivered by internal company personnel;
. toolkit to permit analysis of intentional and unintentional unsafe behaviour, and identification of trends suggestive of a problem with certain aspects of safety culture;
. provide written support, guidance and examples for investigators.

A four-step process was developed, supported by structured worksheets, which allowed investigators to
(1) Accurately define and describe the behaviour(s) they wished to analyse.
(2) Determine, on the basis of the evidence available, whether it appeared the behaviour(s) were intentional or unintentional.
(3) For intentional behaviour, apply ABC analysis.
(4) For unintentional behaviour, apply human error analysis.

The ABC model assumes the following three propositions are true:
. Behaviour is largely a function of its consequences.
. People do what they do because of what happens to them when they do it.
. What people do (or do not do) during the working day is what is being reinforced.

The results of the analysis can then be turned into practical recommendations to reduce unsafe behaviours and introduce new, safe alternatives to replace them.

Human Error Analysis

Human errors can be classified according to the four stages of information processing at which they occur, as the following process industry examples illustrate:
. Perception error—misperceive a reading on a display
. Memory error—forget to implement a step in a procedure
. Decision error—fail to integrate various pieces of data and information, resulting in misdiagnosis of a process upset
. Action error—inadvertently operate the wrong device (e.g., a valve).

Trialling of Methods

Peer Review

Safety Culture Analysis

· Visible management commitment
· Safety communication
· Productivity versus safety
· Learning organization
· Health and safety resources
· Participation in safety
· Risk-taking behaviour
· Trust between management and front-line staff
· Contractor management
· Competence.

Conclusion


This paper describes a series of projects in four organizations, each of which wished to deepen their understanding of the human factors that influence accidents and incidents at work. Current analysis of human behaviour in incident investigation is often relatively superficial, thus missing opportunities to improve human performance and prevent incidents recurring. A specific weakness is the understanding of human error, which is much better understood and managed in other domains, for example aviation. A set of human factors analysis tools was developed (ABC analysis, human error analysis and safety culture analysis), encompassing violations, errors and aspects of safety culture. Following a trial period and a peer review, the methods have been implemented and used by investigators who were typically from an engineering background and did not possess human factors expertise. Whilst the results of this series of projects have been largely positive, two challenges remain. The first is to streamline the methods so they can be more readily used by busy incident investigators operating under considerable time pressure. In doing so, a balance has to be struck between simplicity and ease of use on the one hand, and maintaining sufficient rigour on the other. The second challenge is to be more selective in the choice of delegates for this type of training. The process and outcomes of these projects are described, with examples of how a human factors approach can add value to existing analytical methods. Some of the difficulties encountered are described, together with areas for future development.



Sunday, 12 August 2012

Computer Science Research Paper


               International Journal of Computer Applications (0975 – 8887)
                                                      Volume 15– No.7, February 2011


          A Review on Data mining from Past to the Future



Venkatadri.M                                  
Research Scholar,
Dept. of Computer Science,
Dravidian University, India.
                              
Dr. Lokanatha C. Reddy
Professor,
Dept. of Computer Science,
Dravidian University, India.




ABSTRACT

Data, information and knowledge play a significant role in human activities. Data mining is the knowledge discovery process of analyzing large volumes of data from various perspectives and summarizing them into useful information. Owing to the importance of extracting knowledge/information from large data repositories, data mining has become an essential component in various fields of human life. Advancements in Statistics, Machine Learning, Artificial Intelligence, Pattern Recognition and computation capabilities have shaped present-day data mining applications, and these applications have enriched many fields of human life, including business, education, medicine and science. Hence, this paper discusses the various improvements in the field of data mining from the past to the present and explores future trends.

1. INTRODUCTION

The advent of information technology in various fields of human life has led to the storage of large volumes of data in various formats, such as records, documents, images, sound recordings, videos, scientific data, and many new data formats. The data collected from different applications require proper mechanisms for extracting knowledge/information from large repositories for better decision making. Knowledge discovery in databases (KDD), often called data mining, aims at the discovery of useful information from large collections of data. The core functionality of data mining is the application of various methods and algorithms in order to discover and extract patterns from stored data. Over the last two decades, data mining and knowledge discovery applications have received a strong focus owing to their significance in decision making, and they have become an essential component in various organizations.

2. HISTORICAL TRENDS OF DATA MINING

Data mining has evolved as a field from the confluence of various disciplines, including database management systems (DBMS), Statistics, Artificial Intelligence (AI), and Machine Learning (ML). The era of data mining applications was conceived around the year 1980, primarily with research-driven tools focused on single tasks [3]. The early trends in data mining are outlined below.

2.1 Data Trends

In the early days, data mining algorithms worked best for numerical data collected from a single database, and various data mining techniques evolved for flat files and traditional relational databases, where the data is stored in a tabular representation. Later on, with the confluence of Statistics and Machine Learning techniques, various algorithms evolved to mine non-numerical data and relational databases.

2.2 Computing Trends

The field of data mining has been greatly influenced by the development of fourth-generation programming languages and various related computing techniques. In the early days of data mining, most algorithms employed only statistical techniques. Later on, they evolved with various computing techniques such as AI, ML and Pattern Recognition. Various data mining techniques (induction, compression and approximation) and algorithms were developed to mine the large volumes of heterogeneous data stored in data warehouses.

3. CURRENT TRENDS

The field of data mining has been growing due to its enormous success in terms of broad-ranging application achievements and scientific progress and understanding. Various data mining applications have been successfully implemented in domains such as health care, finance, retail, telecommunication, fraud detection and risk analysis.

3.1 Mining the Heterogeneous data

The following table depicts the data mining techniques and algorithms currently employed to mine the various data formats in different application areas. The various data mining areas are explained after Table 1.



3.2 Utilizing the Computing and Networking Resources

Data mining has prospered by utilizing advanced computing and networking resources such as Parallel, Distributed and Grid technologies. Parallel data mining applications have evolved using parallel computing; typical parallel data mining applications employ the Apriori algorithm. Parallel computing and distributed data mining are both integrated in Grid technologies, and a Grid-based Support Vector Machine method is used in distributed data mining. Recently, various soft computing methodologies have been applied in data mining, such as fuzzy logic, rough sets, neural networks, evolutionary computing (Genetic Algorithms and Genetic Programming), and support vector machines, to analyze various formats of data stored in distributed databases. Compared with traditional techniques [15], this results in a more intelligent and robust system providing a human-interpretable, low-cost, approximate solution, together with systematic analysis, robust preprocessing, flexible information processing, data analysis and decision making.
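The Apriori algorithm named above is the classic frequent-itemset miner. As an illustration only, here is a minimal single-machine Python sketch; the basket data and the min_support threshold are made up for the example, and the parallel and Grid variants discussed in this section distribute the candidate-counting step across nodes rather than scanning the data on one machine.

    from itertools import combinations

    def apriori(transactions, min_support):
        """Minimal single-machine Apriori: frequent itemsets and their support counts."""
        n = len(transactions)
        transactions = [frozenset(t) for t in transactions]

        # Level 1: every distinct item is a candidate 1-itemset.
        items = {item for t in transactions for item in t}
        candidates = [frozenset([i]) for i in items]
        frequent = {}

        k = 1
        while candidates:
            # Count the support of each candidate by scanning the transactions.
            counts = {c: sum(1 for t in transactions if c <= t) for c in candidates}
            level = {c: cnt for c, cnt in counts.items() if cnt / n >= min_support}
            frequent.update(level)

            # Join frequent k-itemsets into (k+1)-itemset candidates; the Apriori
            # property prunes any candidate with an infrequent subset.
            prev = list(level)
            k += 1
            new_candidates = set()
            for a, b in combinations(prev, 2):
                union = a | b
                if len(union) == k and all(frozenset(s) in level
                                           for s in combinations(union, k - 1)):
                    new_candidates.add(union)
            candidates = list(new_candidates)
        return frequent

    # Illustrative usage with made-up market-basket data.
    baskets = [{"milk", "bread"}, {"milk", "bread", "eggs"},
               {"bread", "eggs"}, {"milk", "eggs"}]
    print(apriori(baskets, min_support=0.5))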


3.3 Research and Scientific Computing Trends
The explosion in the amount of data from many scientific disciplines, such as astronomy, remote sensing, bioinformatics, combinatorial chemistry, medical imaging, and experimental physics, means these fields are turning to various data mining techniques to find useful information.

3.4 Business Trends
Today's businesses must be more profitable, react quicker and offer higher quality services than ever before. With these types of expectations and constraints, data mining becomes a fundamental technology in handling customers' transactions more accurately.

4. FUTURE TRENDS
Due to the enormous success of its various application areas, the field of data mining has been establishing itself as a major discipline of computer science and has shown great potential for future developments. Ever-increasing technology and new application areas always pose new challenges and opportunities for data mining; the typical future trends of data mining include:
· Standardization of data mining languages
· Data preprocessing
· Complex objects of data
· Computing resources
· Web mining
· Scientific Computing
· Business data

5. COMPARATIVE STATEMENT

The following table presents a comparative statement of the various data mining trends, from the past to the future.



6. CONCLUSION

In this paper, the various data mining trends are reviewed from the field's inception to the future. This review should help researchers to focus on the various issues of data mining.

Saturday, 11 August 2012

Industrial Engineering Group Assignment



NATIONAL INSTITUTE OF INDUSTRIAL ENGINEERING

PGDIE-42


By: -
RATIKA KAPOOR
Roll NO. 75
K.RATNAKAR REDDY
Roll No. 112
YOGESH BHONDEKAR
Roll No. 105


Test-driven development (TDD) of software systems


Introduction

Test-driven development (TDD) is a software engineering technique that promotes fast feedback, task-oriented development, improved quality assurance and more comprehensible low-level software design. Benefits have been shown for non-reusable software development in terms of improved quality (e.g., lower defect density).
Test-Driven Development (TDD) is a novel approach to software engineering that consists of short development iterations in which the test case(s) covering a new piece of functionality are written first. The implementation code necessary to pass the tests is written afterwards and then run against the test cases. Defects are fixed and components are refactored to accommodate changes. In this approach, code is written greedily, i.e., writing just enough to make the tests pass, and the coding per TDD cycle is usually only one, or at most a few, short methods, functions or objects called by the new test, i.e., small increments of code.
Test-Driven Development can be implemented at several levels:
• Basic cycle, which covers the generation of unit test cases, implementation, build and refactoring.
• Comprehensive development process, which extends the methodology to overall black-box testing scripts, including performance, scalability and security testing.
• Agile processes such as SCRUM.

Benefits and Limitations

There are several benefits to having the test cases written before the production code:
• The engineer(s) get almost immediate feedback on the components they develop, which are tested against the test cases. The turnaround time for the resolution of defects is significantly shorter than in the traditional waterfall methodology, where the code is tested days or weeks after implementation and the developer has moved on to other modules.
• The implementation is more likely to match the original requirements as defined by product management. The test cases are easily generated from the use cases or user scenarios and reflect the functional specifications accurately, without interference from the constraints and limitations of the architecture design or programming constructs. The approach guarantees to some degree that the final version fulfils customers' or product marketing's requests.
• The methodology prevents unwarranted design or components from creeping into the product. The test cases or unit test drivers define the exact set of required features, making it quite easy to identify redundant code and to detect and terminate unnecessary engineering tasks.
• The pre-existence of test cases allows the developer to provide a first draft of an implementation for the purpose of prototyping, evaluation or an alpha release, and to postpone a more formal implementation through refactoring.
• TDD fits nicely into customer-centric agile processes such as SCRUM. For instance, the sprint phase can be defined as the implementation of functionality to execute against a pre-defined set of test cases, instead of the more traditional set of specifications or problem statement.
• TDD can lead to more modularized, flexible, and extensible code. The methodology requires that the developer think of the software in terms of small units that can be written and tested independently and integrated together later.

Basic TDD Cycle

Unit Test Drivers

The most primitive cycle for TDD involves the automation or semi-automation of the generation and execution of unit test drivers. Programming languages such as Java, C++ and Perl, as well as IDEs such as Eclipse or Visual Studio, provide support for integrating unit test frameworks.

Fig. 1 Representation of Basic TDD Cycle

The key tasks in the TDD methodology are (a minimal code sketch follows the list):
1. Product management defines the new features or improvements to implement
2. The architect or a senior developer defines an interface (declarative) that fulfils the use case
3. Engineers create a unit test driver to test the feature
4. Engineers implement a first version, which is tested against the test driver
5. The implementation is subsequently refined then re-tested incrementally
6. At each increment, engineer(s) may upgrade the original design through refactoring
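As an illustration of steps 3 to 5, the following sketch uses Python's built-in unittest framework (Python is chosen purely for brevity; the same pattern applies to JUnit or any of the frameworks mentioned above). The discount function and its requirements are hypothetical, not taken from the assignment.

    import unittest

    # Step 3: the unit test driver is written first, against the agreed interface.
    class DiscountTest(unittest.TestCase):
        def test_ten_percent_discount_above_threshold(self):
            self.assertAlmostEqual(discount(order_total=200.0), 180.0)

        def test_no_discount_below_threshold(self):
            self.assertAlmostEqual(discount(order_total=50.0), 50.0)

    # Step 4: a first, minimal implementation written just to make the tests pass.
    def discount(order_total):
        if order_total >= 100.0:
            return order_total * 0.9
        return order_total

    # Steps 5 and 6: the implementation is refined and refactored in later
    # increments, re-running the same tests after each change.
    if __name__ == "__main__":
        unittest.main()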

Automation

One well documented concern regarding this methodology is the effort and cost required to create detailed unit test code. Consequently, a commitment to automate part of the unit testing cycle is essential to the overall adoption of TDD.

The two key elements of an automated TDD strategy are:

• Automated test generation: The team should investigate and adopt a framework or IDE that supports the generation of unit test code from a declarative interface
• Automated test execution: The test code should be automatically compiled and executed as part of the build and regression test process (a minimal runner sketch follows below).
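As a sketch of the second point, assuming the unit test drivers live under a tests/ directory, a build step could discover and run the whole suite and fail the build on any regression:

    import sys
    import unittest

    # Discover every test module under tests/ and run the whole suite;
    # a non-zero exit code lets the build or CI job fail on regressions.
    suite = unittest.defaultTestLoader.discover("tests")
    result = unittest.TextTestRunner(verbosity=2).run(suite)
    sys.exit(0 if result.wasSuccessful() else 1)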

Extending the TDD Process

Product or System Testing
Most TDD practitioners focus on unit test drivers, with or without automation. However, the concept can be extended to product testing as long as interfaces are clearly defined and a test scripting engine is available. The engine should be able to generate sequences of actions or traffic which simulate the user's interaction with the overall product or any of its components, such as the GUI, database or web servers.

The typical client test engine generates (a minimal HTTP example follows the list):
o Graphical actions and events through a record-and-replay mechanism
o Command-line requests
o SQL, HTTP or TCP traffic or packets
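For the HTTP case, a system-level test script might look like the sketch below; the base URL and the /health endpoint are hypothetical stand-ins for whatever interface the product actually exposes.

    import unittest
    import urllib.request

    BASE_URL = "http://localhost:8080"  # hypothetical test deployment of the product

    class HealthEndpointTest(unittest.TestCase):
        """System-level test: drive the running product over HTTP, not its internals."""

        def test_health_endpoint_responds_ok(self):
            with urllib.request.urlopen(BASE_URL + "/health") as response:
                self.assertEqual(response.status, 200)
                body = response.read().decode("utf-8")
                self.assertIn("ok", body.lower())

    if __name__ == "__main__":
        unittest.main()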

Fig. 2 Representation of Extended TDD Process

Here are the steps:

1. Product management defines the new set of features or improvements to implement
2. Use cases are created to describe the basic sequence
3. The architect specifies the set of programming or graphical interface functions
4. QA engineers rely on the use cases to create test cases and test scripts
5. Unit test drivers are created from the use cases and the programming interface
6. Engineers implement a first version or draft, which is validated (failing in the first couple of attempts) against the test scripts and unit test drivers
7. The implementation is incrementally refined and refactored as necessary
8. The product is then tested against the test scripts following the integration phase.

Refactoring

Refactoring is a key element of any agile process, including the TDD methodology. A trend analysis of the evolution of the design and implementation through successive refactoring iterations may give a hint about the probability of reaching a stable architecture or component. It is quite conceivable to add a design review of the component during the development cycle to evaluate the stability of the implementation.

Integration with SCRUM

Test-Driven Development is one of the greatest assets to come out of the agile movement. Significant quality and productivity gains are made by using TDD, and hence it is spreading very rapidly through development circles. However, engineering teams who are already practicing an agile process such as SCRUM need to reconcile their existing practice with TDD.
By providing a framework to automatically and incrementally validate production code against predefined unit test drivers, TDD allows the purpose of a sprint to be extended to converging the implementation towards passing the test harness, and adds critical quality metrics (the ratio of passed tests and code coverage) to the monitoring dashboard.
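As a sketch of how those two dashboard metrics could be collected in a build step (assuming the unit test drivers live under tests/ and the third-party coverage package is installed), something like the following would work:

    import unittest
    import coverage  # third-party package; assumed to be installed

    # Measure coverage of the production code while the unit test drivers run.
    cov = coverage.Coverage()
    cov.start()

    suite = unittest.defaultTestLoader.discover("tests")  # assumed test location
    result = unittest.TextTestRunner().run(suite)

    cov.stop()
    cov.save()

    # Dashboard metric 1: ratio of passed tests.
    failed = len(result.failures) + len(result.errors)
    pass_ratio = (result.testsRun - failed) / result.testsRun if result.testsRun else 0.0
    print(f"Passed tests: {pass_ratio:.0%}")

    # Dashboard metric 2: code coverage report (per-file percentages and total).
    cov.report()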


Fig.3 Overall SCRUM agile process with TDD integration points

Conclusion

"Based on the findings of the existing studies, it can be concluded that TDD seems to improve software quality, especially when employed in an industrial context. The findings were not so obvious in the semi industrial or academic context, but none of those studies reported on decreased quality either. The productivity effects of TDD were not very obvious, and the results vary regardless of the context of the study. However, there were indications that TDD does not necessarily decrease the developer productivity or extend the project lead times: In some cases, significant productivity improvements were achieved with TDD while only two out of thirteen studies reported on decreased productivity. However, in both of those studies the quality was improved."

References:
  1. en.wikipedia.org/wiki/Test-driven_development
  2. searchsoftwarequality.techtarget.com/.../test-driven-development
  3. citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1
  4. www.acm.org/src/subpages/gf.../DavidJanzen_src_gf06.pdf
  5. https://www.ibm.com/developerworks/mydeveloperworks
  6. www.methodsandtools.com/archive/archive.php?id=20
  7. www.webopedia.com/TERM/T/test_driven_development.html