986 results for Validated Interval Software


Relevance:

30.00%

Publisher:

Abstract:

A solution to the problem of software system reusability for batch production systems is proposed. It is based on the ISA S88 standard, which prescribes the abstraction of the elements of the manufacturing system (equipment, processes, and procedures) required to make a product batch. An easy-to-apply data scheme, compatible with the standard, is developed for the management of production information. In addition to the flexibility provided by the S88 standard, software system reusability requires a solution supporting manufacturing equipment reconfigurability. Toward this end, a coupling mechanism is developed. A software tool incorporating these solutions was developed and validated at laboratory level, using product manufacturing information from an actual plant.
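
As a concrete flavour of what an S88-compatible data scheme and coupling mechanism could look like, here is a minimal Python sketch; all class and function names are hypothetical illustrations, not the paper's actual schema.

```python
# Hypothetical sketch of an S88-style separation of equipment,
# process and procedure abstractions, plus a simple coupling step.
from dataclasses import dataclass, field

@dataclass
class Equipment:                 # physical plant element, e.g. a reactor
    name: str
    capabilities: list[str] = field(default_factory=list)

@dataclass
class Phase:                     # smallest procedural element
    name: str
    required_capability: str

@dataclass
class Recipe:                    # procedure: ordered phases for one batch
    product: str
    phases: list[Phase]

def couple(recipe: Recipe, plant: list[Equipment]) -> dict[str, str]:
    """Map each phase to some unit offering the capability it needs,
    so the same recipe can run on a reconfigured plant."""
    return {
        phase.name: next(e.name for e in plant
                         if phase.required_capability in e.capabilities)
        for phase in recipe.phases
    }

plant = [Equipment("R-101", ["mix", "heat"]), Equipment("F-201", ["fill"])]
recipe = Recipe("syrup", [Phase("blend", "mix"), Phase("bottle", "fill")])
print(couple(recipe, plant))   # {'blend': 'R-101', 'bottle': 'F-201'}
```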

Relevance:

30.00%

Publisher:

Abstract:

Costs and environmental impacts are key elements in forest logistics and must be integrated into forest decision-making. The evaluation of transportation fuel costs and carbon emissions depends on spatial and non-spatial data, but in many cases the former type of data is difficult to obtain. Moreover, the availability of software tools to evaluate transportation fuel consumption, as well as the associated costs and carbon dioxide emissions, is limited. We developed a software tool that combines two empirically validated models of truck transportation using Digital Elevation Model (DEM) data and an open spatial data tool, specifically OpenStreetMap©. The tool generates tabular data and spatial outputs (maps) with information on fuel consumption, cost and CO2 emissions for four types of trucks. It also generates maps of the distribution of transport performance indicators (the relation between beeline and real road distances). These outputs can be easily included in forest decision-making support systems. Finally, we applied the tool to a particular case of forest logistics in north-eastern Portugal.
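
To illustrate the kind of segment-level computation such a tool performs, here is a hedged Python sketch; the coefficients, emission factor and function names are illustrative placeholders, not the paper's two validated truck models.

```python
# Illustrative segment-level fuel model: base consumption plus an
# uphill-grade penalty derived from DEM elevation differences.
CO2_KG_PER_LITRE_DIESEL = 2.64   # commonly cited emission factor

def segment_fuel_litres(distance_km: float, rise_m: float,
                        base_l_per_km: float = 0.45,
                        grade_penalty: float = 4.0) -> float:
    grade = rise_m / (distance_km * 1000.0)        # dimensionless slope
    return distance_km * base_l_per_km * (1.0 + grade_penalty * max(grade, 0.0))

def route_summary(segments, fuel_price_per_litre: float = 1.5) -> dict:
    fuel = sum(segment_fuel_litres(d, r) for d, r in segments)
    return {"fuel_l": round(fuel, 2),
            "cost": round(fuel * fuel_price_per_litre, 2),
            "co2_kg": round(fuel * CO2_KG_PER_LITRE_DIESEL, 2)}

# Two road segments as (distance_km, elevation_rise_m) pairs:
print(route_summary([(12.0, 150.0), (8.0, -40.0)]))
```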

Relevance:

30.00%

Publisher:

Abstract:

We provide an axiomatisation of the Timed Interval Calculus, a set-theoretic notation for expressing properties of time intervals. We implement the axiomatisation in the Ergo theorem prover in order to allow machine-checked proofs of laws for reasoning about predicates expressed using interval operators. These laws can then be used in the machine-assisted verification of real-time applications.
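
For flavour, here is a typical interval law of the kind such an axiomatisation makes machine-checkable; this particular law and its chop-style notation come from interval temporal logic generally and are illustrative, not quoted from the paper.

```latex
% Chop-associativity: composing interval predicates sequentially is
% associative. P ; Q holds on [a, b] iff some m with a <= m <= b
% splits it so that P holds on [a, m] and Q holds on [m, b].
\[
  (P \mathbin{;} Q) \mathbin{;} R \;=\; P \mathbin{;} (Q \mathbin{;} R)
\]
```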

Relevance:

30.00%

Publisher:

Abstract:

The traditional waterfall software life cycle model has several weaknesses. One problem is that a working version of a system is unavailable until a late stage in the development; any omissions and mistakes in the specification that remain undetected until that stage can be costly to correct. The operational approach, which emphasises the construction of executable specifications, can help to remedy this problem. An operational specification may be exercised to generate the behaviours of the specified system, thereby serving as a prototype to facilitate early validation of the system's functional requirements. Recent ideas have centred on using an existing operational method such as JSD in the specification phase of object-oriented development. An explicit transformation phase following specification is necessary in this approach because differences in abstraction between the two domains need to be bridged. This research explores an alternative approach: developing an operational specification method specifically for object-oriented development. By incorporating object-oriented concepts into operational specifications, the specifications directly facilitate implementation in an object-oriented language without requiring further significant transformations. In addition, object-oriented concepts help the developer manage the complexity of the problem domain specification while providing the user with a specification that closely reflects the real world, so the specification and its execution can be readily understood and validated. A graphical notation has been developed for the specification method which can capture the dynamic properties of an object-oriented system. A tool has also been implemented, comprising an editor to facilitate the input of specifications and an interpreter which can execute the specifications and graphically animate the behaviours of the specified systems.
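
A minimal sketch of what an executable object-oriented specification might look like, assuming invented entity and event names (the method's actual graphical notation cannot be reproduced here):

```python
# Hypothetical executable specification: each entity is an object whose
# permitted events are guarded by preconditions, so running the spec
# animates system behaviour and surfaces specification errors early.
class Account:
    def __init__(self, owner: str):
        self.owner, self.balance, self.closed = owner, 0, False

    def deposit(self, amount: int) -> None:
        assert not self.closed and amount > 0    # precondition
        self.balance += amount

    def close(self) -> None:
        assert self.balance == 0                 # precondition
        self.closed = True

acct = Account("alice")
acct.deposit(100)                # a valid behaviour of the specified system
try:
    acct.deposit(-5)             # an invalid event: the spec rejects it
except AssertionError:
    print("specification violation detected during animation")
```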

Relevance:

30.00%

Publisher:

Abstract:

Most parametric software cost estimation models used today evolved in the late 1970s and early 1980s. At that time, the dominant software development techniques were the early 'structured methods'. Since then, several new systems development paradigms and methods have emerged, one being Jackson System Development (JSD). Because current cost estimating methods do not take account of these developments, they cannot provide adequate estimates of effort and hence cost for such projects. To address these shortcomings, two new estimation methods have been developed for JSD projects. One of these methods, JSD-FPA, is a top-down estimating method based on the existing MkII function point method. The other method, JSD-COCOMO, is a sizing technique which sizes a project, in terms of lines of code, from the process structure diagrams and thus provides an input to the traditional COCOMO method.

The JSD-FPA method allows JSD projects in both the real-time and scientific application areas to be costed, as well as the commercial information systems applications to which FPA is usually applied. The method is based upon a three-dimensional view of a system specification, as opposed to the largely data-oriented view traditionally used by FPA. It uses counts of various attributes of a JSD specification to develop a metric which indicates the size of the system to be developed. This size metric is then transformed into an estimate of effort by calculating past project productivity and using this figure to predict the effort, and hence cost, of a future project. The effort estimates produced were validated by comparing them against the effort figures for six actual projects.

The JSD-COCOMO method uses counts of the levels in a process structure chart as the input to an empirically derived model which transforms them into an estimate of delivered source code instructions.
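
The core arithmetic of the top-down step is straightforward; the following worked example uses invented numbers to illustrate it.

```python
# Worked illustration of the top-down step: a size metric derived from
# JSD specification counts, converted to effort via past-project
# productivity. All numbers are invented for the example.
def estimate_effort(new_size_points: float,
                    past_size_points: float,
                    past_effort_pm: float) -> float:
    """Effort = size / productivity, with productivity measured in
    size points per person-month on a comparable completed project."""
    productivity = past_size_points / past_effort_pm
    return new_size_points / productivity

# Past project: 400 points in 25 person-months (16 points/pm);
# new specification counts 560 points -> 35.0 person-months.
print(estimate_effort(560, 400, 25))
```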

Relevance:

30.00%

Publisher:

Abstract:

Although crisp data are fundamentally indispensable for determining the profit Malmquist productivity index (MPI), the observed values in real-world problems are often imprecise or vague. These imprecise or vague data can be suitably characterized with fuzzy and interval methods. In this paper, we reformulate the conventional profit MPI problem as an imprecise data envelopment analysis (DEA) problem, and propose two novel methods for measuring the overall profit MPI when the inputs, outputs, and price vectors are fuzzy or vary in intervals. We develop a fuzzy version of the conventional MPI model by using a ranking method, and solve the model with a commercial off-the-shelf DEA software package. In addition, we define an interval for the overall profit MPI of each decision-making unit (DMU) and divide the DMUs into six groups according to the intervals obtained for their overall profit efficiency and MPIs. We also present two numerical examples to demonstrate the applicability of the two proposed models and exhibit the efficacy of the procedures and algorithms. © 2011 Elsevier Ltd.
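
A hedged sketch of the interval idea only: if a DMU's profit efficiency lies in an interval in each period, an index defined as their ratio also lies in an interval. The arrangement below is illustrative and is not the paper's actual DEA formulation.

```python
# Interval bounds of a ratio-style productivity index.
def interval_ratio(num: tuple, den: tuple) -> tuple:
    """Bounds of x / y for x in num, y in den, all bounds positive."""
    assert num[0] > 0 and den[0] > 0
    return num[0] / den[1], num[1] / den[0]

eff_t  = (0.60, 0.75)            # profit efficiency interval, period t
eff_t1 = (0.80, 0.90)            # profit efficiency interval, period t+1
lo, hi = interval_ratio(eff_t1, eff_t)
print(lo, hi)                    # index > 1 throughout => clear progress
```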

Relevance:

30.00%

Publisher:

Abstract:

The focus of our work is the verification of tight functional properties of numerical programs, such as showing that a floating-point implementation of Riemann integration computes a close approximation of the exact integral. Programmers and engineers writing such programs will benefit from verification tools that support an expressive specification language and that are highly automated. Our work provides a new method for the verification of numerical software, supporting a substantially more expressive specification language than other publicly available automated tools. The additional expressivity is provided by two constructs. First, specifications can feature inclusions between interval arithmetic expressions. Second, the integral operator from classical analysis can be used in specifications, where the integration bounds can be arbitrary expressions over real variables. To support our claim of expressivity, we outline the verification of four example programs, including the integration example mentioned earlier. A key component of our method is an algorithm for proving numerical theorems, based on automatic polynomial approximation of non-linear real and real-interval functions defined by expressions. The PolyPaver tool is our implementation of the algorithm, and its source code is publicly available. In this paper we report on experiments using PolyPaver indicating that the additional expressivity does not come at a performance cost when compared with other publicly available state-of-the-art provers. We also include a scalability study that explores the limits of PolyPaver in proving tight functional specifications of progressively larger randomly generated programs. © 2014 Springer International Publishing Switzerland.
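
To illustrate what an inclusion between interval arithmetic expressions asserts as a specification, here is a toy Python sketch; it is not PolyPaver's implementation, and it omits the outward rounding a real checker performs.

```python
# Sketch of an interval-inclusion specification: the computed enclosure
# must lie inside the specified one.
def interval_add(x: tuple, y: tuple) -> tuple:
    return (x[0] + y[0], x[1] + y[1])

def included(inner: tuple, outer: tuple) -> bool:
    return outer[0] <= inner[0] and inner[1] <= outer[1]

result = interval_add((0.1, 0.2), (0.3, 0.4))
print(included(result, (0.0, 1.0)))   # True: the inclusion holds
```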

Relevance:

30.00%

Publisher:

Abstract:

Electrocardiography (ECG) has recently been proposed as a biometric trait for identification purposes. Intra-individual variations of the ECG, mainly due to Heart Rate Variability (HRV), might affect identification performance. In particular, HRV causes changes in the QT intervals along the ECG waveforms. This work analyses the influence of seven QT interval correction methods (based on population models) on the performance of ECG-fiducial-based identification systems. In addition, we consider the influence of training set size, classifier, classifier ensemble, and the number of consecutive heartbeats in a majority voting scheme. The ECG signals used in this study were collected from thirty-nine subjects in the Physionet open-access database. Public domain software was used for fiducial point detection. Results suggest that QT correction is indeed required to improve performance; however, there is no clear winner among the seven explored approaches (identification rate between 0.97 and 0.99). MultiLayer Perceptron and Support Vector Machine classifiers showed better generalization capabilities, in terms of classification performance, than Decision Tree-based classifiers. No comparably strong influence of training set size or of the number of consecutive heartbeats in the majority voting scheme was observed.
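
Two widely used population-based QT corrections are Bazett's and Fridericia's; the abstract does not name its seven methods, so whether these two are among them is an assumption. A small sketch of both:

```python
import math

def qtc_bazett(qt_s: float, rr_s: float) -> float:
    """Bazett correction: QTc = QT / sqrt(RR), intervals in seconds."""
    return qt_s / math.sqrt(rr_s)

def qtc_fridericia(qt_s: float, rr_s: float) -> float:
    """Fridericia correction: QTc = QT / RR^(1/3)."""
    return qt_s / rr_s ** (1.0 / 3.0)

# QT = 0.38 s at RR = 0.80 s (75 bpm):
print(qtc_bazett(0.38, 0.80), qtc_fridericia(0.38, 0.80))
```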

Relevance:

30.00%

Publisher:

Abstract:

As researchers and practitioners move towards a vision of software systems that configure, optimize, protect, and heal themselves, they must also consider the implications of such self-management activities on software reliability. Autonomic computing (AC) describes a new generation of software systems that are characterized by dynamically adaptive self-management features. During dynamic adaptation, autonomic systems modify their own structure and/or behavior in response to environmental changes. Adaptation can result in new system configurations and capabilities, which need to be validated at runtime to prevent costly system failures. However, although the pioneers of AC recognize that validating autonomic systems is critical to the success of the paradigm, the architectural blueprint for AC does not provide a workflow or supporting design models for runtime testing.

This dissertation presents a novel approach for seamlessly integrating runtime testing into autonomic software. The approach introduces an implicit self-test feature into autonomic software by tailoring the existing self-management infrastructure to runtime testing. Autonomic self-testing facilitates activities such as test execution, code coverage analysis, timed test performance, and post-test evaluation. In addition, the approach is supported by automated testing tools and a detailed design methodology. A case study that incorporates self-testing into three autonomic applications is also presented. The findings of the study reveal that autonomic self-testing provides a flexible approach for building safe, reliable autonomic software, while limiting the development and performance overhead through software reuse.
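
A hypothetical sketch of the core idea, with invented names: reuse the self-management loop to run an implicit self-test on an adapted configuration before it replaces the running one.

```python
# Toy autonomic manager: adaptation is accepted only if the candidate
# configuration passes the runtime test suite.
class System:
    def __init__(self, cfg: str = "v1"):
        self.cfg = cfg
    def reconfigured(self, cfg: str) -> "System":
        return System(cfg)

class AutonomicManager:
    def __init__(self, system: System, test_suite):
        self.system, self.test_suite = system, test_suite

    def adapt(self, new_cfg: str) -> bool:
        candidate = self.system.reconfigured(new_cfg)
        if all(test(candidate) for test in self.test_suite):
            self.system = candidate          # tests passed: go live
            return True
        return False                         # reject failing adaptation

mgr = AutonomicManager(System(), [lambda s: s.cfg != "bad"])
print(mgr.adapt("v2"), mgr.adapt("bad"))     # True False
```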

Relevance:

30.00%

Publisher:

Abstract:

The purpose of this study was to determine the flooding potential of contaminated areas within the White Oak Creek watershed on the Oak Ridge Reservation in Tennessee. The watershed was analyzed with an integrated surface and subsurface numerical model based on the MIKE SHE/MIKE 11 software. The model was calibrated and validated using five decades of historical data. A series of simulations was conducted to determine the watershed response to 25-year, 100-year, and 500-year precipitation events, and flooding maps were generated for those events. Predicted flood events were compared to Log-Pearson Type III flood-flow frequency values for validation. This investigation also provides an improved understanding of the water fluxes between the surface and subsurface subdomains as they affect flood frequencies. In sum, this study presents crucial information for further assessing the environmental risks of potential mobilization of contaminants of concern during extreme precipitation events.
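
A hedged sketch of a Log-Pearson Type III flood-flow estimate, the method the simulated peaks were compared against. K is the frequency factor, normally read from tables as a function of the skew and the return period; a zero-skew 100-year value is used here, and the peak data are invented.

```python
import math
import statistics

def lp3_flood_flow(annual_peaks: list, k: float) -> float:
    """Q_T = 10 ** (mean + K * stdev) of the log10-transformed peaks."""
    logs = [math.log10(q) for q in annual_peaks]
    return 10 ** (statistics.mean(logs) + k * statistics.stdev(logs))

peaks = [12.0, 18.5, 9.7, 25.1, 14.3, 30.2, 11.8]   # m^3/s, invented
print(lp3_flood_flow(peaks, k=2.326))               # ~100-year flow
```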

Relevance:

30.00%

Publisher:

Abstract:

The large upfront investments required for game development pose a severe barrier to the wider uptake of serious games in education and training. There is also a lack of well-established methods and tools that support game developers in preserving and enhancing the games' pedagogical effectiveness. The RAGE project, a Horizon 2020 funded research project on serious games, addresses these issues by making available reusable software components that aim to support the pedagogical qualities of serious games. In order to easily deploy and integrate these game components in a multitude of game engines, platforms and programming languages, RAGE has developed and validated a hybrid component-based software architecture that preserves component portability and interoperability. While a first set of software components is being developed, this paper presents selected examples to explain the overall system's concept and its practical benefits. First, the Emotion Detection component uses the learners' webcams to capture their emotional states from facial expressions. Second, the Performance Statistics component is an add-on for learning analytics data processing, which allows instructors to track and inspect learners' progress without having to perform the required statistical computations. Third, a set of language processing components supports the analysis of learners' textual inputs, facilitating comprehension assessment and prediction. Fourth, the Shared Data Storage component provides a technical solution for data storage (e.g. player data or game world data) across multiple software components. The presented components are exemplary of the anticipated RAGE library, which will include up to forty reusable software components for serious gaming, addressing diverse pedagogical dimensions.
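
A hypothetical sketch of an engine-agnostic component contract in the spirit of such an architecture: the component reaches the host game engine only through a thin bridge, keeping it portable. All names are invented, not RAGE's actual API.

```python
class Bridge:
    """Services the host engine provides to any component."""
    def log(self, msg: str) -> None:
        print(f"[host] {msg}")

class PerformanceStatistics:
    """Toy analytics add-on: tracks scores without engine knowledge."""
    def __init__(self, bridge: Bridge):
        self.bridge = bridge
        self.scores: list[float] = []

    def record(self, score: float) -> None:
        self.scores.append(score)
        self.bridge.log(f"scores recorded: {len(self.scores)}")

    def mean(self) -> float:
        return sum(self.scores) / len(self.scores)

stats = PerformanceStatistics(Bridge())
for s in (0.6, 0.8, 0.7):
    stats.record(s)
print(stats.mean())   # ~0.7
```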

Relevance:

30.00%

Publisher:

Abstract:

This work provides a holistic investigation into feature modeling within software product lines. It identifies limitations and challenges in current feature modeling approaches. Those limitations include, but are not limited to, the lack of a satisfactory cognitive presentation, poor support for scalable systems, inflexibility in adapting to change, the absence of predictability of model behaviour, and the lack of probabilistic quantification of a model's implications and of decision support for reasoning under uncertainty. The work in this thesis addresses these challenges by proposing a series of solutions. The first solution is the construction of a Bayesian Belief Feature Model, a novel modeling approach capable of quantifying the uncertainty in model parameters by incorporating probabilistic modeling into a conventional modeling approach. The Bayesian Belief Feature Model presents an enhanced feature modeling approach in terms of truth quantification and visual expressiveness. The second solution addresses the unclear support for reasoning under uncertainty and the challenging constraint satisfaction problem in software product lines. This has been done through the development of a mathematical reasoner, designed to satisfy the model constraints by considering a probability weight for every involved parameter and quantifying the actual implications of the problem constraints. The resulting Uncertain Constraint Satisfaction Problem approach has been tested and validated through a set of designated experiments. The main contributions of this thesis are the following:

• A framework for probabilistic graphical modeling to build the proposed Bayesian Belief Feature Model.
• An extension of the model that enhances visual expressiveness through colour degree variation, in which the colour varies with the predefined probabilistic weights.
• An enhancement of the constraint satisfaction problem with uncertainty measures of the parameters' truth assumptions.
• A validation of the developed approach against different experimental settings to determine its functionality and performance.
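
A hedged sketch of the belief idea only: attach a probability to each feature and score configurations, rejecting those that violate hard cross-tree constraints. Feature names, weights and the independence assumption are invented for illustration and are not the thesis's actual model.

```python
features = {"gui": 0.9, "cli": 0.6, "telemetry": 0.3}
excludes = [("gui", "cli")]          # mutually exclusive features

def config_belief(selection: set) -> float:
    """Joint probability of a selection assuming feature independence;
    zero if a hard constraint is violated."""
    if any(a in selection and b in selection for a, b in excludes):
        return 0.0
    p = 1.0
    for f, prob in features.items():
        p *= prob if f in selection else 1.0 - prob
    return p

print(config_belief({"gui"}))         # 0.9 * 0.4 * 0.7 = 0.252
print(config_belief({"gui", "cli"}))  # 0.0: violates the excludes rule
```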

Relevance:

30.00%

Publisher:

Abstract:

Work carried out at the company CAF Power&Automation.

Relevance:

30.00%

Publisher:

Abstract:

BACKGROUND: High-intensity interval training (HIIT) may be a feasible and efficacious strategy for improving health-related fitness in young people. The objective of this systematic review and meta-analysis was to evaluate the utility of HIIT for improving health-related fitness in adolescents and to identify potential moderators of training effects.

METHODS: Studies were considered eligible if they: (1) examined adolescents (13-18 years); (2) examined health-related fitness outcomes; (3) involved an intervention of ≥4 weeks in duration; (4) included a control or moderate-intensity comparison group; and (5) prescribed high-intensity activity for the HIIT condition. Meta-analyses were conducted to determine the effect of HIIT on health-related fitness components using Comprehensive Meta-Analysis software, and potential moderators were explored (i.e., study duration, risk of bias and type of comparison group).

RESULTS: The effects of HIIT on cardiorespiratory fitness and body composition were large and medium, respectively. Study duration was a moderator of the effect of HIIT on body fat percentage. Intervention effects for waist circumference and muscular fitness were not statistically significant.

CONCLUSIONS: HIIT is a feasible and time-efficient approach for improving cardiorespiratory fitness and body composition in adolescent populations.
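
For readers unfamiliar with the pooled quantities, here is an illustrative computation of a standardized mean difference (Cohen's d), the kind of effect size such a meta-analysis aggregates; the data are invented and this is not the review's actual analysis.

```python
import math
import statistics

def cohens_d(treatment: list, control: list) -> float:
    """Standardized mean difference with a pooled standard deviation."""
    n1, n2 = len(treatment), len(control)
    s1, s2 = statistics.variance(treatment), statistics.variance(control)
    pooled_sd = math.sqrt(((n1 - 1) * s1 + (n2 - 1) * s2) / (n1 + n2 - 2))
    return (statistics.mean(treatment) - statistics.mean(control)) / pooled_sd

vo2_hiit = [47.1, 49.3, 52.0, 48.5]   # post-test VO2max, ml/kg/min
vo2_ctrl = [44.0, 45.2, 43.1, 46.0]
print(cohens_d(vo2_hiit, vo2_ctrl))   # > 0.8, conventionally 'large'
```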