954 results for software engineering: metrics


Relevance: 80.00%

Publisher:

Abstract:

Benchmarking was used to compare the Australian Safety Institute of Australia's (SIA) OHS Body of Knowledge (BoK) with three other approaches to systematising the knowledge that universities should teach. The Australian Health and Safety Professionals Alliance (HaSPA) Core Body of Knowledge for Generalist OHS Professionals was benchmarked against three international bodies of knowledge: the German Ergonomic Society's Body of Knowledge Ergonomics – Core Definition, Object Catalogue and Research Domains; the IEEE Computer Society Software Engineering Body of Knowledge; and the American Association of Schools of Public Health Master's Degree in Public Health Core Competency Model. The quality, structure and content of the HaSPA OHS BoK ranked lowest when compared with all three benchmarked documents. Analysis and discussion of the HaSPA BoK is important given its use as an audit tool for tertiary education in Australia. Furthermore, the International Network of Safety & Health Practitioner Organisations (INSHPO) is apparently promoting the Australian SIA's OHS BoK as the basis of an international standard.

Relevance: 80.00%

Publisher:

Abstract:

If the land sector is to make significant contributions to mitigating anthropogenic greenhouse gas (GHG) emissions in coming decades, it must do so while concurrently expanding production of food and fiber. In our view, mathematical modeling will be required to provide scientific guidance to meet this challenge. In order to be useful in GHG mitigation policy measures, models must simultaneously meet scientific, software engineering, and human capacity requirements. They can be used to understand GHG fluxes, to evaluate proposed GHG mitigation actions, and to predict and monitor the effects of specific actions; the latter applications require a change in mindset that has parallels with the shift from research modeling to decision support. We compare and contrast six agro-ecosystem models (FullCAM, DayCent, DNDC, APSIM, WNMM, and AgMod), chosen because they are used in Australian agriculture and forestry. Underlying structural similarities in the representations of carbon flows through plants and soils in these models are complemented by a diverse range of emphases and approaches to the subprocesses within the agro-ecosystem. None of these agro-ecosystem models handles all land sector GHG fluxes, and considerable model-based uncertainty exists for soil C fluxes and enteric methane emissions. The models also show diverse approaches to the initialisation of model simulations, software implementation, distribution, licensing, and software quality assurance; each of these will differentially affect their usefulness for policy-driven GHG mitigation prediction and monitoring. Specific requirements imposed on the use of models by Australian mitigation policy settings are discussed, and areas for further scientific development of agro-ecosystem models for use in GHG mitigation policy are proposed.

Relevance: 80.00%

Publisher:

Abstract:

Over the last few decades, there has been significant land cover (LC) change across the globe due to the increasing demands of a burgeoning population and urban sprawl. In order to account for this change, there is a need for accurate and up-to-date LC maps. Mapping and monitoring of LC in India is carried out at the national level using multi-temporal IRS AWiFS data. Multispectral data such as IKONOS, Landsat-TM/ETM+, IRS-1C/1D LISS-III/IV, AWiFS and SPOT-5 have adequate spatial resolution (~1 m to 56 m) for LC mapping to generate 1:50,000 maps. However, for developing countries and those with large geographical extent, seasonal LC mapping is prohibitive with data from commercial sensors of limited spatial coverage. Superspectral data from the MODIS sensor are freely available and have better temporal (8-day composites) and spectral information. MODIS pixels typically contain a mixture of various LC types (due to the coarse spatial resolution of 250, 500 and 1000 m), especially in more fragmented landscapes. In this context, linear spectral unmixing is useful for mapping patchy land covers, such as those that characterise much of the Indian subcontinent. This work evaluates the existing unmixing technique for LC mapping using MODIS data, with end-members extracted through the Pixel Purity Index (PPI), scatter plots and N-dimensional visualisation. Abundance maps were generated for agriculture, built-up land, forest, plantations, waste land/others and water bodies. Assessment of the results using ground truth and a LISS-III classified map shows 86% overall accuracy, suggesting the potential for broad-scale applicability of the technique with superspectral data for natural resource planning and inventory applications. Index Terms: remote sensing, digital
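The core of the technique described above is the linear mixing model: each coarse MODIS pixel is treated as a linear combination of endmember spectra, and per-pixel abundance fractions are recovered from it. The sketch below is a minimal, generic illustration of that model using constrained least squares on invented toy spectra; it is not the paper's PPI/scatter-plot endmember extraction workflow, and all band values and names are assumptions for the example.

```python
import numpy as np

def unmix_pixel(pixel, endmembers, weight=1e3):
    """Illustrative fully-constrained linear unmixing of one pixel.

    pixel:       (n_bands,) observed reflectance vector
    endmembers:  (n_bands, n_classes) spectra of pure land-cover classes
    Returns abundance fractions that are non-negative and sum to one.
    Sum-to-one is enforced by appending a heavily weighted row of ones;
    non-negativity by clipping and renormalising (a common approximation).
    """
    # Augment the linear system E a ~= p with the constraint 1.a = 1
    E = np.vstack([endmembers, weight * np.ones(endmembers.shape[1])])
    p = np.append(pixel, weight)
    a, *_ = np.linalg.lstsq(E, p, rcond=None)
    a = np.clip(a, 0.0, None)
    return a / a.sum()

# Toy example: 3 synthetic MODIS-like bands, 2 endmembers (vegetation, water)
endmembers = np.array([[0.05, 0.02],   # red
                       [0.45, 0.01],   # NIR
                       [0.20, 0.01]])  # SWIR
mixed_pixel = 0.7 * endmembers[:, 0] + 0.3 * endmembers[:, 1]
print(unmix_pixel(mixed_pixel, endmembers))   # approximately [0.7, 0.3]
```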

Relevance: 80.00%

Publisher:

Abstract:

To detect errors in decision tables one needs to decide whether a given set of constraints is feasible or not. This paper describes an algorithm to do so when the constraints are linear in variables that take only integer values. Decision tables with such constraints occur frequently in business data processing and in non-numeric applications. The aim of the algorithm is to exploit the abundance of very simple constraints that occur in typical decision table contexts. Essentially, the algorithm is a backtrack procedure in which the solution space is pruned using the set of simple constraints. After some simplifications, the simple constraints are captured in an acyclic directed graph with weighted edges. Further, only those partial vectors are considered for extension which can be extended to assignments that at least satisfy the simple constraints; this is how pruning of the solution space is achieved. For every partial assignment considered, the graph representation of the simple constraints provides a lower bound for each variable that is not yet assigned a value. These lower bounds play a vital role in the algorithm and are obtained efficiently by updating older lower bounds. The present algorithm also incorporates a check of whether an (m - 2)-ary vector can be extended to a solution vector of m components, thereby reducing backtracking by one component.
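As a rough illustration of the kind of pruned backtracking the abstract describes, the sketch below checks feasibility over bounded integer variables by extending partial assignments only while the "simple" (difference) constraints still hold. It is a generic stand-in, not the paper's algorithm: the weighted-graph lower-bound machinery and the (m - 2)-ary extension check are omitted, and the constraint encoding and example are invented.

```python
def feasible(n_vars, domains, simple, general):
    """Illustrative backtracking feasibility check for integer constraints.

    domains : list of (lo, hi) integer bounds, one per variable
    simple  : list of (i, j, w) meaning x[i] >= x[j] + w (difference constraints,
              used to prune partial assignments cheaply)
    general : list of callables taking a full assignment and returning bool
    Returns a satisfying assignment (list of ints) or None.
    """
    def consistent(partial):
        # Prune: every simple constraint whose variables are both assigned must hold.
        for i, j, w in simple:
            if i < len(partial) and j < len(partial) and partial[i] < partial[j] + w:
                return False
        return True

    def extend(partial):
        if len(partial) == n_vars:
            return partial if all(g(partial) for g in general) else None
        lo, hi = domains[len(partial)]
        for v in range(lo, hi + 1):
            cand = partial + [v]
            if consistent(cand):
                sol = extend(cand)
                if sol is not None:
                    return sol
        return None

    return extend([])

# x0 in [0,3], x1 in [0,3]; simple: x1 >= x0 + 1; general: x0 + x1 == 3
print(feasible(2, [(0, 3), (0, 3)],
               simple=[(1, 0, 1)],
               general=[lambda x: x[0] + x[1] == 3]))   # e.g. [0, 3]
```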

Relevance: 80.00%

Publisher:

Abstract:

Although the incidence matrix representation has been used to analyze Petri net based models of systems, it has the limitation that it does not preserve the reflexive properties (i.e., the presence of self-loops) of Petri nets. In many practical applications, however, self-loops play very important roles. This paper proposes a new representation scheme for general Petri nets. The scheme defines a matrix called the "reflexive incidence matrix" (RIM) C, which is a combination of two matrices, a "base matrix" Cb and a "power matrix" Cp. This scheme preserves the reflexive and other properties of the Petri net. A detailed analysis shows that the proposed scheme requires less memory space and less processing time for answering commonly encountered net queries than other schemes. Algorithms to generate the RIM from a given net description and to decompose the RIM into input and output function matrices are also given. The proposed representation scheme is useful for modelling and analyzing systems with shared resources, chemical processes, network protocols, etc., and for evaluating the performance of asynchronous concurrent systems.
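The motivation for the RIM can be seen in a few lines: the ordinary incidence matrix C = Post - Pre cancels any self-loop, whereas keeping the input and output information (which, per the abstract, the RIM's base and power matrices encode compactly) retains it. The snippet below only illustrates that loss and a naive way to detect self-loops; it does not implement the RIM itself, whose construction is not detailed in the abstract.

```python
# Illustrative sketch only; not the paper's RIM construction.
import numpy as np

# Net: place p0, transition t0 with a self-loop (t0 both consumes from and
# produces into p0), plus place p1 that t0 produces into.
pre  = np.array([[1],    # p0 -> t0   (input arcs)
                 [0]])   # p1 -> t0
post = np.array([[1],    # t0 -> p0   (output arcs)
                 [1]])   # t0 -> p1

incidence = post - pre
print(incidence.ravel())   # [0 1]  -- the p0 self-loop has vanished

# Keeping both matrices (or any equivalent combined representation, as the
# RIM does) retains the reflexive information:
self_loops = np.logical_and(pre > 0, post > 0)
print(self_loops.ravel())  # [ True False]
```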

Relevance: 80.00%

Publisher:

Abstract:

This article presents a method for checking the conformance between an event log capturing the actual execution of a business process, and a model capturing its expected or normative execution. Given a business process model and an event log, the method returns a set of statements in natural language describing the behavior allowed by the process model but not observed in the log and vice versa. The method relies on a unified representation of process models and event logs based on a well-known model of concurrency, namely event structures. Specifically, the problem of conformance checking is approached by folding the input event log into an event structure, unfolding the process model into another event structure, and comparing the two event structures via an error-correcting synchronized product. Each behavioral difference detected in the synchronized product is then verbalized as a natural language statement. An empirical evaluation shows that the proposed method scales up to real-life datasets while producing more concise and higher-level difference descriptions than state-of-the-art conformance checking methods.
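A toy illustration of the idea of verbalizing behavioral differences is sketched below using simple directly-follows relations. This is deliberately much weaker than the paper's event-structure unfolding and error-correcting synchronized product; the activity names and traces are invented, and the sketch serves only to show the kind of natural language statements the method produces.

```python
# Illustrative sketch only; not the event-structure method of the paper.
def directly_follows(traces):
    """Set of (a, b) pairs where activity b directly follows a in some trace."""
    return {(t[i], t[i + 1]) for t in traces for i in range(len(t) - 1)}

def verbalize_differences(log_traces, model_traces):
    """Report behaviour seen in one artefact but not the other, in plain English."""
    log_df, model_df = directly_follows(log_traces), directly_follows(model_traces)
    stmts = []
    for a, b in sorted(model_df - log_df):
        stmts.append(f"In the model, '{b}' can directly follow '{a}', "
                     f"but this was never observed in the log.")
    for a, b in sorted(log_df - model_df):
        stmts.append(f"In the log, '{b}' directly follows '{a}', "
                     f"but the model does not allow it.")
    return stmts

log = [["register", "check", "ship"], ["register", "ship"]]
model = [["register", "check", "ship"]]
print("\n".join(verbalize_differences(log, model)))
```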

Relevance: 80.00%

Publisher:

Abstract:

Usability testing is a productive and reliable method for evaluating the usability of software. Planning and implementing a test and analyzing its results is typically considered time-consuming, and applying usability methods in general is considered difficult. Because of this, usability testing is often given lower priority than more concrete issues in software engineering projects. The intranet Alma is a web service whose users are the students and staff of the University of Helsinki. Alma was published in 2004 at the opening ceremony of the university; it has 45,000 users and replaces several former university network services. In this thesis, the usability of the Alma intranet is evaluated with usability testing. The testing method applied was lightened to make it as easy as possible to adopt. In the test, six students each tried to solve nine test tasks with Alma. As a result, concrete usability problems were described in the final test report. Goal-orientation was given less importance in the applied usability testing, and the system was tested only with users from the largest user group. The usability test found general usability problems that occurred regardless of task or user. However, further evaluation is needed: in addition to the general usability problems, there are task-dependent problems whose solution requires a thorough gathering of user goals. The basic structure and central functionality of Alma, for example its navigation, contain serious and frequently recurring usability problems. It would be worthwhile to verify the designed user interface solutions to these problems before taking them into use. In the long run, the goals of the users that the software is intended to support are worth gathering, and software development should be based on these goals.

Relevance: 80.00%

Publisher:

Abstract:

The study examines various uses of computer technology in the acquisition of information by visually impaired people. For this study, 29 visually impaired persons took part in a survey about their experiences with information acquisition and the use of computers, especially with a screen magnification program, a speech synthesizer and a braille display. According to the responses, the evolution of computer technology offers visually impaired people an important means of coping with everyday activities and interacting with the environment. Nevertheless, the functionality of assistive technology needs further development to become more usable and versatile. Since the challenges of independent observation of the environment were emphasized in the survey, the study led to the development of a portable text vision system called Tekstinäkö. In contrast to typical stand-alone applications, the Tekstinäkö system was constructed by combining devices and programs that are readily available on the consumer market. As the system operates, pictures are taken by a digital camera and instantly transmitted to a text recognition program on a laptop computer, which reads the recognized text aloud using a speech synthesizer. Visually impaired test users described that even uncertain interpretations of the texts in the environment given by the Tekstinäkö system are at least a welcome addition to their perception of the environment. It became clear that even with modest development work it is possible to bring new, useful and valuable methods into the everyday life of disabled people. The unconventional production process of the system also proved efficient. The achieved results and the proposed working model offer one suggestion for giving enough attention to the easily overlooked needs of people with special abilities. ACM Computing Classification System (1998): K.4.2 Social Issues: Assistive technologies for persons with disabilities; I.4.9 Image processing and computer vision: Applications. Keywords: visually impaired, computer-assisted, information, acquisition, assistive technology, computer, screen magnification program, speech synthesizer, braille display, survey, testing, text recognition, camera, text, perception, picture, environment, transportation, guidance, independence, vision, disabled, blind, speech, synthesizer, braille, software engineering, programming, program, system, freeware, shareware, open source, Tekstinäkö, text vision, TopOCR, AutoHotkey, computer engineering, computer science
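A pipeline in the spirit of Tekstinäkö (photograph, then text recognition, then speech) can be sketched in a few lines. The version below uses pytesseract and pyttsx3 purely as illustrative stand-ins for the OCR and speech stages; the thesis itself glued together a digital camera, TopOCR, AutoHotkey and a speech synthesizer, not these libraries, and the file name is invented.

```python
# A minimal sketch of a camera-to-speech pipeline in the spirit of Tekstinäkö.
# pytesseract and pyttsx3 are stand-ins for the OCR and speech components,
# not the author's actual stack (TopOCR + AutoHotkey + a speech synthesizer).
from PIL import Image
import pytesseract   # OCR wrapper (assumes the Tesseract engine is installed)
import pyttsx3       # offline text-to-speech

def speak_text_in_image(image_path: str) -> str:
    """Recognise text in a photo and read it aloud; returns the recognised text."""
    text = pytesseract.image_to_string(Image.open(image_path))
    if text.strip():
        engine = pyttsx3.init()
        engine.say(text)
        engine.runAndWait()
    return text

if __name__ == "__main__":
    print(speak_text_in_image("snapshot_from_camera.jpg"))  # hypothetical file
```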

Relevance: 80.00%

Publisher:

Abstract:

Multi-agent systems (MAS) advocate an agent-based approach to software engineering based on decomposing problems in terms of decentralized, autonomous agents that can engage in flexible, high-level interactions. This chapter introduces the Scalable fault tolerant Agent Grooming Environment (SAGE), a second-generation Foundation for Intelligent Physical Agents (FIPA)-compliant multi-agent system developed at NIIT-Comtec, which provides an environment for creating distributed, intelligent, and autonomous entities that are encapsulated as agents. The chapter focuses on the highlight of SAGE, its decentralized fault-tolerant architecture, which can be used to develop applications in a number of areas such as e-health, e-government, and e-science. In addition, the SAGE architecture provides tools for runtime agent management, directory facilitation, monitoring, and editing of message exchanges between agents. SAGE also provides a built-in mechanism for programming agent behavior and capabilities with the help of its autonomous agent architecture, which is the other major highlight of this chapter. The authors believe that the market for agent-based applications is growing rapidly and that SAGE can play a crucial role in future intelligent application development. © 2007, IGI Global.

Relevance: 80.00%

Publisher:

Abstract:

This work belongs to the field of computational high-energy physics (HEP). The key methods used in this thesis to meet the challenges raised by the Large Hadron Collider (LHC) era experiments are object-orientation with software engineering, Monte Carlo simulation, cluster computing, and artificial neural networks. The first aspect discussed is the development of hadronic cascade models, used for the accurate simulation of medium-energy hadron-nucleus reactions up to 10 GeV. These models are typically needed in hadronic calorimeter studies and in the estimation of radiation backgrounds. Various applications outside HEP include the medical field (such as hadron treatment simulations), space science (satellite shielding), and nuclear physics (spallation studies). Validation results are presented for several significant improvements released in the Geant4 simulation toolkit, and the significance of the new models for computing in the Large Hadron Collider era is estimated. In particular, we estimate the ability of the Bertini cascade to simulate the Compact Muon Solenoid (CMS) hadron calorimeter (HCAL). LHC test beam activity has a tightly coupled cycle of simulation-to-data analysis; typically, a Geant4 computer experiment is used to understand test beam measurements. Thus, another aspect of this thesis is a description of studies related to developing new CMS H2 test beam data analysis tools and performing data analysis on the basis of CMS Monte Carlo events. These events have been simulated in detail using Geant4 physics models, a full CMS detector description, and event reconstruction. Using the ROOT data analysis framework, we have developed an offline ANN-based approach to tag b-jets associated with heavy neutral Higgs particles, and we show that this kind of NN methodology can be successfully used to separate the Higgs signal from the background in the CMS experiment.
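As a purely illustrative counterpart to the ANN-based signal/background separation mentioned above, the sketch below trains a small neural network on two synthetic "jet-like" features. It is not the thesis' ROOT-based analysis; the features, distributions and numbers are all invented for the example.

```python
# Illustrative only: a tiny neural-network signal/background separation on
# synthetic features, standing in for the ROOT/ANN b-tagging workflow above.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
# Invented features: e.g. a vertex-significance-like and a multiplicity-like variable
signal     = rng.normal(loc=[3.0, 8.0], scale=[1.0, 2.0], size=(n, 2))
background = rng.normal(loc=[1.0, 5.0], scale=[1.0, 2.0], size=(n, 2))
X = np.vstack([signal, background])
y = np.array([1] * n + [0] * n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
clf.fit(X_tr, y_tr)
print("test accuracy:", round(clf.score(X_te, y_te), 3))
```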

Relevance: 80.00%

Publisher:

Abstract:

Conformance testing focuses on checking whether an implementation under test (IUT) behaves according to its specification. Typically, testers are interested in performing targeted tests that exercise certain features of the IUT. This intention is formalized as a test purpose. The tester needs a "strategy" to reach the goal specified by the test purpose. Also, for a particular test case, the strategy should tell the tester whether the IUT has passed, failed, or deviated from the test purpose. In [8], Jeron and Morel show how to compute, for a given finite state machine specification and a test purpose automaton, a complete test graph (CTG) which represents all test strategies. In this paper, we consider the case when the specification is a hierarchical state machine and show how to compute a hierarchical CTG which preserves the hierarchical structure of the specification. We also propose an algorithm for an online test oracle which avoids the space overhead associated with the CTG.
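A drastically reduced version of the underlying idea, computing a test strategy as a path in the synchronous product of a specification FSM and a test purpose automaton, is sketched below. The complete test graph, verdict handling, and the hierarchical extension of the paper are not modelled, and the example machines and action names are invented.

```python
# Illustrative sketch only; not the CTG construction of the paper.
from collections import deque

def test_sequence(spec, purpose, spec_init, purpose_init, accept):
    """Shortest input sequence driving the spec into a state where the test
    purpose automaton accepts.

    spec, purpose : dicts mapping (state, action) -> next state
    accept        : set of accepting purpose states
    """
    start = (spec_init, purpose_init)
    queue, seen = deque([(start, [])]), {start}
    while queue:
        (s, p), path = queue.popleft()
        if p in accept:
            return path
        for (st, act), s2 in spec.items():
            if st != s:
                continue
            p2 = purpose.get((p, act), p)   # purpose stays put on unmatched actions
            nxt = (s2, p2)
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [act]))
    return None

spec = {("idle", "req"): "busy", ("busy", "ack"): "idle", ("busy", "err"): "idle"}
purpose = {("p0", "err"): "p_ok"}           # purpose: observe an 'err' at least once
print(test_sequence(spec, purpose, "idle", "p0", {"p_ok"}))   # ['req', 'err']
```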

Relevance: 80.00%

Publisher:

Abstract:

In spite of numerous research advances made in recent years in the area of formal techniques, the specification of real-time systems is still proving to be a very challenging and difficult problem. In this context, this paper critically examines state-of-the-art specification techniques for real-time systems and analyzes the emerging trends.

Relevance: 80.00%

Publisher:

Abstract:

Over the last few decades, there has been a significant land cover (LC) change across the globe due to the increasing demand of the burgeoning population and urban sprawl. In order to take account of the change, there is a need for accurate and up-to-date LC maps. Mapping and monitoring of LC in India is being carried out at national level using multi-temporal IRS AWiFS data. Multispectral data such as IKONOS, Landsat-TM/ETM+, IRS-1C/D LISS-III/IV, AWiFS and SPOT-5, etc. have adequate spatial resolution (~1 m to 56 m) for LC mapping to generate 1:50,000 maps. However, for developing countries and those with large geographical extent, seasonal LC mapping is prohibitive with data from commercial sensors of limited spatial coverage. Superspectral data from the MODIS sensor are freely available, have better temporal (8 day composites) and spectral information. MODIS pixels typically contain a mixture of various LC types (due to coarse spatial resolution of 250, 500 and 1000 m), especially in more fragmented landscapes. In this context, linear spectral unmixing would be useful for mapping patchy land covers, such as those that characterise much of the Indian subcontinent. This work evaluates the existing unmixing technique for LC mapping using MODIS data, using end-members that are extracted through Pixel Purity Index (PPI), Scatter plot and N-dimensional visualisation. The abundance maps were generated for agriculture, built up, forest, plantations, waste land/others and water bodies. The assessment of the results using ground truth and a LISS-III classified map shows 86% overall accuracy, suggesting the potential for broad-scale applicability of the technique with superspectral data for natural resource planning and inventory applications.

Relevance: 80.00%

Publisher:

Abstract:

Denial-of-service (DoS) attacks form a very important category of security threats that are prevalent in MIPv6 (Mobile Internet Protocol version 6) today. Many schemes have been proposed to alleviate such threats, including one of our own [9]. However, reasoning about the correctness of such protocols is not trivial. In addition, new solutions to mitigate attacks may need to be deployed in the network on a frequent basis, as and when attacks are detected, since it is practically impossible to anticipate all attacks and provide solutions in advance. This makes it necessary to validate the solutions in a timely manner before deployment in the real network. However, the threshold schemes needed in group protocols make analysis complex, and model checking of threshold-based group protocols that employ cryptography has not been successful so far. Here, we propose a new simulation-based approach for validation using a tool called FRAMOGR that supports executable specification of group protocols that use cryptography. FRAMOGR allows one to specify attackers and track probability distributions of values or paths. We believe that infrastructure such as FRAMOGR will be required in the future for validating new group-based threshold protocols that may be needed to make MIPv6 more robust.

Relevance: 80.00%

Publisher:

Abstract:

This paper presents a method for the partial automation of specification-based regression testing, which we call ESSE (Explicit State Space Enumeration). The first step in the ESSE method is the extraction of a finite state model of the system, making use of an already tested version of the system under test (SUT). Thereafter, the finite state model thus obtained is used to compute good test sequences that can be used to regression test subsequent versions of the system. We present two new algorithms for test sequence computation, both based on the finite state model generated by the above method. We also provide the details and results of an experimental evaluation of the ESSE method. Comparison with a practically used random-testing algorithm shows substantial improvements.
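As an illustration of the second step (computing test sequences from an extracted finite state model), the sketch below greedily builds input sequences that together cover every transition of a toy FSM. It is not the ESSE algorithms themselves, whose details are not given in the abstract, and the example machine is invented; the model-extraction step is likewise not shown.

```python
# Illustrative sketch of transition-coverage test sequence generation;
# not the ESSE algorithms described in the paper.
from collections import deque

def transition_covering_sequences(fsm, init):
    """Greedy transition coverage: repeatedly find a shortest path from the
    initial state that exercises at least one not-yet-covered transition.

    fsm : dict mapping (state, input) -> next state, e.g. a model extracted
          from an already-tested version of the system.
    Returns a list of input sequences that together cover every reachable transition.
    """
    uncovered = set(fsm)
    sequences = []
    while uncovered:
        # BFS over (state, inputs-so-far), stopping at the first uncovered edge.
        queue, seen = deque([(init, [])]), {init}
        found = None
        while queue and found is None:
            state, path = queue.popleft()
            for (s, inp), nxt in fsm.items():
                if s != state:
                    continue
                if (s, inp) in uncovered:
                    found = path + [inp]
                    break
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, path + [inp]))
        if found is None:          # remaining transitions are unreachable
            break
        sequences.append(found)
        # Mark every transition exercised by the found sequence as covered.
        state = init
        for inp in found:
            uncovered.discard((state, inp))
            state = fsm[(state, inp)]
    return sequences

fsm = {("s0", "a"): "s1", ("s1", "b"): "s0", ("s1", "c"): "s2", ("s2", "d"): "s0"}
print(transition_covering_sequences(fsm, "s0"))
```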