967 results for driver verification


Relevance:

20.00%

Publisher:

Abstract:

This paper focuses on the low-volume incident detection and subsequent driver warning objectives of the PORTICO project. It proposes an Automatic Incident Detection (AID) scheme that uses a multi-model approach comprising a number of different algorithms. A set of thresholds and conditions is defined to determine which algorithm(s) must indicate an incident, under different traffic conditions, in order to trigger an alarm.
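
A minimal sketch of this kind of threshold-and-voting fusion is given below; the detector names, flow threshold, and voting rules are invented for illustration and are not taken from the PORTICO specification.

```python
# Illustrative multi-detector alarm fusion; the detector names, the flow
# threshold, and the voting rules are hypothetical, not PORTICO's.

def trigger_alarm(detections, flow_veh_per_hour):
    """detections maps detector name -> True if that algorithm flags an incident."""
    if flow_veh_per_hour < 300:
        # Low-volume traffic: require two independent detectors to agree.
        voters = {"occupancy_based", "speed_drop"}
        votes_needed = 2
    else:
        # Normal traffic: any single detector is enough.
        voters = set(detections)
        votes_needed = 1
    votes = sum(1 for name in voters if detections.get(name, False))
    return votes >= votes_needed

# Under low-volume conditions a single detection is not enough to raise an alarm.
print(trigger_alarm({"occupancy_based": True, "speed_drop": False}, 150))  # False
print(trigger_alarm({"occupancy_based": True, "speed_drop": True}, 150))   # True
```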

Relevance:

20.00%

Publisher:

Abstract:

COSTA, Umberto Souza; MOREIRA, Anamaria Martins; MUSICANTE, Martin A.; SOUZA NETO, Plácido A. JCML: A specification language for the runtime verification of Java Card programs. Science of Computer Programming. [S.l.]: [s.n.], 2010.

Relevance:

20.00%

Publisher:

Abstract:

COSTA, Umberto Souza da; MOREIRA, Anamaria Martins; MUSICANTE, Martin A. Specification and Runtime Verification of Java Card Programs. Electronic Notes in Theoretical Computer Science. [S.l.]: [s.n.], 2009.

Relevance:

20.00%

Publisher:

Abstract:

This paper presents a technique called Improved Squeaky Wheel Optimisation (ISWO) for driver scheduling problems. It improves the effectiveness and execution speed of the original Squeaky Wheel Optimisation (SWO) by incorporating two additional steps, Selection and Mutation, which implement evolution within a single solution. In the ISWO, a cycle of Analysis-Selection-Mutation-Prioritization-Construction continues until stopping conditions are reached. The Analysis step first computes the fitness of the current solution to identify troublesome components. The Selection step then discards these troublesome components probabilistically using the fitness measure, and the Mutation step further discards a small number of components at random. After these steps the solution becomes partial and needs to be repaired. The repair is carried out by the Prioritization step, which produces priorities that determine the order in which the subsequent Construction step schedules the remaining components. Optimisation in the ISWO is therefore achieved by solution disruption, iterative improvement, and an iterative constructive repair process. Encouraging experimental results are reported.
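
A minimal Python sketch of the Analysis-Selection-Mutation-Prioritization-Construction cycle, applied to a toy assignment problem, is given below; the fitness measure, selection rule, and construction heuristic are simple placeholders rather than the paper's driver-scheduling model.

```python
import random

# A sketch of the ISWO cycle on a toy problem: assign drivers (integers) to a
# row of shifts so that adjacent shifts get different drivers. All problem
# details below are stand-ins, not the paper's formulation.

def fitness_of(i, solution):
    """Component fitness in [0, 1]: penalise clashes with neighbouring shifts."""
    clashes = sum(1 for j in (i - 1, i + 1)
                  if 0 <= j < len(solution) and solution[j] == solution[i])
    return 1.0 - 0.5 * clashes

def iswo(solution, drivers, iterations=200, mutation_rate=0.05):
    for _ in range(iterations):
        # Analysis: compute the fitness of every component of the current solution.
        fitness = [fitness_of(i, solution) for i in range(len(solution))]
        # Selection: probabilistically discard troublesome (low-fitness) components.
        removed = [i for i, f in enumerate(fitness) if random.random() > f]
        # Mutation: discard a small number of additional components at random.
        removed += [i for i in range(len(solution))
                    if i not in removed and random.random() < mutation_rate]
        # Prioritization: components with the worst fitness are rebuilt first.
        removed.sort(key=lambda i: fitness[i])
        for i in removed:
            solution[i] = None                  # the solution becomes partial
        # Construction: greedily repair the partial solution in priority order.
        for i in removed:
            neighbours = {solution[j] for j in (i - 1, i + 1)
                          if 0 <= j < len(solution)}
            options = [d for d in drivers if d not in neighbours]
            solution[i] = random.choice(options or drivers)
    return solution

# Usage: start from a random assignment of three drivers to twelve shifts.
print(iswo([random.choice([0, 1, 2]) for _ in range(12)], drivers=[0, 1, 2]))
```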

Relevance:

20.00%

Publisher:

Abstract:

In this thesis, we present a quantitative approach that uses probabilistic verification techniques to analyse the reliability, availability, maintainability, and safety (RAMS) properties of satellite systems. The subject of our research is satellites used in mission-critical industrial applications. Our verification results make a strong case for using probabilistic model checking to support the RAMS analysis of satellite systems. This study is intended to build a foundation that helps reliability engineers with a basic background in model checking to apply probabilistic model checking to small satellite systems. We make two major contributions. The first is the application of RAMS analysis to satellite systems. In the past, RAMS analysis has been extensively applied in the field of electrical and electronics engineering; it allows system designers and reliability engineers to predict the likelihood of failures from historical or current operational data. There is a high potential for the application of RAMS analysis in the field of space science and engineering, yet there is a lack of standardisation and of suitable procedures for the correct study of the RAMS characteristics of satellite systems. This thesis considers the promising application of RAMS analysis to satellite design, use, and maintenance, focusing on the system segments. Data collection and verification procedures are discussed, and a number of considerations are presented on how to predict the probability of failure. Our second contribution is leveraging the power of probabilistic model checking to analyse satellite systems. We present techniques for analysing satellite systems that differ from the more common quantitative approaches based on traditional simulation and testing; these techniques have not been applied in this context before. We present the use of probabilistic techniques via a suite of detailed examples, together with their analysis. The presentation proceeds incrementally, in terms of the complexity of the application domains and system models, with a detailed PRISM model for each scenario. We also provide results from practical work, together with a discussion of future improvements.
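
As a rough illustration of the kind of quantity a probabilistic model checker such as PRISM computes in this setting, the Python sketch below evaluates a bounded reachability probability on a small discrete-time Markov chain of a hypothetical satellite component; the states and transition probabilities are invented for illustration and are not taken from the thesis.

```python
import numpy as np

# Three-state toy component model (operational, degraded, failed); the
# transition probabilities are illustrative assumptions only.
P = np.array([
    [0.990, 0.008, 0.002],   # operational -> operational / degraded / failed
    [0.100, 0.880, 0.020],   # degraded    -> operational / degraded / failed
    [0.000, 0.000, 1.000],   # failed is absorbing
])

def prob_failed_within(steps, start=0, failed=2):
    """Bounded reachability: P(reach 'failed' within 'steps' transitions)."""
    x = np.zeros(len(P))
    x[failed] = 1.0                    # x_0: indicator of the target state
    for _ in range(steps):
        x = P @ x                      # x_{k+1}(s) = sum_s' P(s, s') * x_k(s')
    return x[start]

# e.g. probability of failure within 1000 steps, starting from "operational".
print(prob_failed_within(1000))
```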

Relevance:

20.00%

Publisher:

Abstract:

Benthic microorganisms are key players in the recycling of organic matter and of recalcitrant compounds such as polyaromatic hydrocarbons (PAHs) in coastal sediments. Despite their ecological importance, the response of microbial communities to chronic PAH pollution, one of the major threats to coastal ecosystems, has received very little attention. In one of the largest surveys performed so far on coastal sediments, the diversity and composition of microbial communities inhabiting both chronically contaminated and non-contaminated coastal sediments were investigated using high-throughput sequencing of the 18S and 16S rRNA genes. Prokaryotic alpha-diversity showed significant associations with salinity, temperature, and organic carbon content, while particle size distribution had a strong effect on eukaryotic diversity. Like alpha-diversity, beta-diversity patterns were strongly influenced by environmental filtering, whereas PAHs had no influence on prokaryotic community structure and only a weak impact on eukaryotic community structure at the continental scale. At the regional scale, however, PAHs became the main driver shaping the structure of bacterial and eukaryotic communities. These patterns were not found for PICRUSt-predicted prokaryotic functions, indicating some degree of functional redundancy. Eukaryotes show greater potential for use as PAH contamination biomarkers, owing to their stronger response at both regional and continental scales.

Relevance:

20.00%

Publisher:

Abstract:

Part 21: Mobility and Logistics

Relevance:

20.00%

Publisher:

Abstract:

This project aims to analyse the technology of the gate-driver board of the power converter of an electric vehicle. An in-depth study of the protections has been carried out, providing the knowledge needed to develop a robust design capable of coping with fault conditions such as short circuits, voltage spikes, parasitic turn-on of the IGBT, and so on. In addition, the protections studied have been simulated, which makes it possible to visualise how they operate and to fully understand how they act in the different fault cases. With all of this, it is possible to build a gate-driver board for the inverter of an electric vehicle motor, which can also be used in other applications.

Relevance:

20.00%

Publisher:

Abstract:

On-time completion is an important temporal QoS (Quality of Service) dimension and one of the fundamental requirements for high-confidence workflow systems. In recent years, a workflow temporal verification framework, which generally consists of temporal constraint setting, temporal checkpoint selection, temporal verification, and temporal violation handling, has been the major approach for assuring the high temporal QoS of workflow systems. Within this framework, effective temporal checkpoint selection, which aims to detect intermediate temporal violations along workflow execution in a timely fashion, plays a critical role; it has therefore been a major topic and has attracted significant research effort. In this paper, we present an overview of workflow temporal checkpoint selection for temporal verification. Specifically, we first introduce the throughput-based and response-time-based temporal consistency models for business and scientific cloud workflow systems, respectively. We then present the corresponding benchmarking checkpoint selection strategies that satisfy the property of “necessity and sufficiency”. We also provide experimental results to demonstrate the effectiveness of our checkpoint selection strategies, and finally point out some possible future issues in this research area.
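
The sketch below illustrates, in Python, the general idea behind response-time-based checkpoint selection: a checkpoint is taken only where the accumulated actual duration already threatens the overall deadline. The concrete rule and the numbers are illustrative assumptions, not the benchmarking strategies from the paper.

```python
# Illustrative response-time based checkpoint selection: select a checkpoint
# at an activity only when the elapsed time plus the remaining expected
# durations exceeds the deadline (an intermediate temporal violation), so
# selection is both necessary and sufficient under this toy model.

def select_checkpoints(actual, expected, deadline):
    """actual/expected: per-activity durations; returns indices of checkpoints."""
    checkpoints = []
    elapsed = 0.0
    for i, duration in enumerate(actual):
        elapsed += duration
        remaining_expected = sum(expected[i + 1:])
        if elapsed + remaining_expected > deadline:
            checkpoints.append(i)      # temporal violation detected here
    return checkpoints

# Example: the third activity runs long enough to threaten a deadline of 10.
print(select_checkpoints(actual=[2.0, 2.5, 4.0],
                         expected=[2.0, 2.0, 2.0, 2.0],
                         deadline=10.0))            # -> [2]
```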

Relevance:

20.00%

Publisher:

Abstract:

Cloud computing has established itself as the latest computing paradigm in recent years. As doing science in the cloud becomes a reality, scientists are now able to access public cloud centers and employ high-performance computing resources to run scientific applications. However, due to the dynamic nature of the cloud environment, the usability of scientific cloud workflow systems can deteriorate significantly without effective service quality assurance strategies. Specifically, workflow temporal verification, as the major approach for workflow temporal QoS (Quality of Service) assurance, plays a critical role in the on-time completion of large-scale scientific workflows. Great effort has been dedicated to workflow temporal verification in recent years, and it is time to define the key research issues for scientific cloud workflows in order to keep this research on the right track. In this paper, we systematically investigate this problem and present four key research issues based on the introduction of a generic temporal verification framework. State-of-the-art solutions for each research issue and open challenges are also presented. Finally, SwinDeW-V, an ongoing research project on temporal verification that is part of our SwinDeW-C cloud workflow system, is demonstrated.

Relevance:

20.00%

Publisher:

Abstract:

Workflow temporal verification is conducted to guarantee on-time completion, which is one of the most important QoS (Quality of Service) dimensions for business processes running in the cloud. However, as today's business systems often need to handle a large number of concurrent customer requests, conventional response-time-based process monitoring strategies, conducted in a one-by-one fashion, cannot be applied efficiently to a large batch of parallel processes because of the significant time overhead. Similar situations also exist in software companies where multiple software projects are carried out at the same time by software developers. To address this problem, based on a novel runtime throughput consistency model, this paper proposes a QoS-aware throughput-based checkpoint selection strategy that can dynamically select a small number of checkpoints along the system timeline to facilitate the temporal verification of throughput constraints and achieve the target on-time completion rate. Experimental results demonstrate that our strategy achieves the best efficiency and effectiveness compared with the state-of-the-art and other representative response-time-based checkpoint selection strategies.
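
The Python sketch below illustrates the general idea of a throughput-based consistency check: a checkpoint is selected whenever the observed throughput falls below what is still required to finish the remaining instances by the deadline. The sampling scheme and numbers are illustrative assumptions rather than the paper's runtime throughput consistency model.

```python
# Illustrative throughput-based checkpoint selection along the system timeline;
# the parameters and sampling points are hypothetical.

def throughput_checkpoints(completed_at_time, total_instances, deadline):
    """completed_at_time: list of (time, instances completed so far)."""
    checkpoints = []
    for t, done in completed_at_time:
        required = (total_instances - done) / max(deadline - t, 1e-9)
        observed = done / t
        if observed < required:
            checkpoints.append(t)      # throughput violation: verify/handle here
    return checkpoints

# Example: 1000 concurrent instances, deadline at t = 100; progress is sampled
# every 20 time units and falls behind from t = 60 onwards.
samples = [(20, 220), (40, 430), (60, 580), (80, 700)]
print(throughput_checkpoints(samples, total_instances=1000, deadline=100))  # [60, 80]
```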

Relevance:

20.00%

Publisher:

Abstract:

An inverse model is proposed to construct the mathematical relationship between continuous cooling transformation (CCT) kinetics at constant cooling rates and isothermal kinetics. With this model, the kinetic parameters of the JMAK equation for isothermal kinetics can be deduced from experimental CCT kinetics. Furthermore, a generalized model with a new additivity rule is developed for predicting the kinetics of nucleation and growth during diffusional phase transformation along arbitrary cooling paths, based only on the CCT curve. A generalized contribution coefficient is introduced into the new additivity rule to describe the influence of the current temperature and cooling rate on the incubation time of nuclei. Finally, the reliability of the proposed model is validated using dilatometry experiments on a microalloyed steel with a fully bainitic microstructure under various cooling routes.
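
For reference, the standard isothermal JMAK equation and the classical Scheil additivity rule that the generalized model builds on are recalled below; the paper's generalized contribution coefficient modifies the latter and is not reproduced here.

```latex
% Isothermal JMAK kinetics: transformed fraction X after holding time t at temperature T
X(t, T) = 1 - \exp\!\left[-k(T)\, t^{\,n}\right]

% Classical (Scheil) additivity rule: under an arbitrary cooling path T(t),
% transformation starts at time t_s when the summed fractional incubation
% reaches unity, where \tau(T) is the isothermal incubation time.
\int_{0}^{t_s} \frac{\mathrm{d}t}{\tau\!\left(T(t)\right)} = 1
```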