991 results for Software defect prediction
Abstract:
Concurrent software executes multiple threads or processes to achieve high performance. However, concurrency results in a huge number of different system behaviors that are difficult to test and verify. The aim of this dissertation is to develop new methods and tools for modeling and analyzing concurrent software systems at the design and code levels. This dissertation consists of several related results. First, a formal model of Mondex, an electronic purse system, is built from user requirements using Petri nets and formally verified using model checking. Second, Petri net models are automatically mined from the event traces generated by scientific workflows. Third, partial order models are automatically extracted from instrumented concurrent program executions, and potential atomicity violation bugs are automatically verified against the partial order models using model checking. Our formal specification and verification of Mondex contribute to the worldwide effort of developing a verified software repository. Our method for mining Petri net models automatically from provenance offers a new approach to building scientific workflows. Our dynamic prediction tool, named McPatom, can predict several known bugs in real-world systems, including one that evades several other existing tools. McPatom is efficient and scalable because it takes advantage of the nature of atomicity violations and considers only a pair of threads and accesses to a single shared variable at a time. However, predictive tools need to consider the trade-offs between precision and coverage. Based on McPatom, this dissertation presents two methods for improving the coverage and precision of atomicity violation predictions: 1) a post-prediction analysis method to increase coverage while ensuring precision; 2) a follow-up replaying method to further increase coverage. Both methods are implemented in a completely automatic tool.
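As an aside, here is a minimal Python sketch of the bug class McPatom targets: an atomicity violation between two threads on a single shared variable. This is an illustration of the bug pattern only, not McPatom itself or its partial-order analysis.

```python
import threading
import time

balance = 100  # the single shared variable

def withdraw(amount):
    global balance
    # The check and the update below are meant to execute atomically, but no
    # lock enforces this, so the other thread can interleave between them.
    if balance >= amount:      # first access: read of the shared variable
        time.sleep(0.001)      # widen the window in which a context switch occurs
        balance -= amount      # second access: read-modify-write of the shared variable

threads = [threading.Thread(target=withdraw, args=(80,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Under an unserializable interleaving both withdrawals pass the check and the
# balance goes negative -- the kind of interleaving a predictive tool explores
# over a partial-order model extracted from one observed execution.
print("final balance:", balance)
```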
Abstract:
The large upfront investments required for game development pose a severe barrier to the wider uptake of serious games in education and training. There is also a lack of well-established methods and tools that support game developers in preserving and enhancing the games' pedagogical effectiveness. The RAGE project, a Horizon 2020 funded research project on serious games, addresses these issues by making available reusable software components that aim to support the pedagogical qualities of serious games. In order to deploy and integrate these game components easily in a multitude of game engines, platforms and programming languages, RAGE has developed and validated a hybrid component-based software architecture that preserves component portability and interoperability. While a first set of software components is being developed, this paper presents selected examples to explain the overall system concept and its practical benefits. First, the Emotion Detection component uses the learners' webcams to capture their emotional states from facial expressions. Second, the Performance Statistics component is an add-on for learning analytics data processing, which allows instructors to track and inspect learners' progress without having to deal with the required statistics computations. Third, a set of language processing components accommodates the analysis of learners' textual inputs, facilitating comprehension assessment and prediction. Fourth, the Shared Data Storage component provides a technical solution for data storage - e.g. for player data or game world data - across multiple software components. The presented components are exemplary of the anticipated RAGE library, which will include up to forty reusable software components for serious gaming, addressing diverse pedagogical dimensions.
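For illustration only, a hypothetical Python sketch of the bridge idea behind engine-agnostic components: each component depends only on an abstract interface supplied by the host game engine, never on the engine itself. All class and method names below are invented and do not reflect the actual RAGE API.

```python
from abc import ABC, abstractmethod

class EngineBridge(ABC):
    """What the host engine must provide, regardless of platform or language (hypothetical)."""
    @abstractmethod
    def log(self, message: str) -> None: ...
    @abstractmethod
    def store(self, key: str, value: str) -> None: ...

class PerformanceStatisticsComponent:
    """Illustrative learning-analytics add-on: tracks average scores per learner."""
    def __init__(self, bridge: EngineBridge):
        self.bridge = bridge
        self.scores: dict[str, list[float]] = {}

    def record(self, learner: str, score: float) -> None:
        self.scores.setdefault(learner, []).append(score)
        average = sum(self.scores[learner]) / len(self.scores[learner])
        self.bridge.store(f"stats/{learner}", str(average))   # persist via the host engine
        self.bridge.log(f"recorded score {score} for {learner}")

class ConsoleBridge(EngineBridge):
    """A trivial host-side implementation used here only for demonstration."""
    def log(self, message: str) -> None:
        print("[engine]", message)
    def store(self, key: str, value: str) -> None:
        print("[storage]", key, "=", value)

PerformanceStatisticsComponent(ConsoleBridge()).record("learner-1", 0.8)
```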
Abstract:
Most liquid electrolytes used in commercial lithium-ion batteries are composed of alkyl-carbonate mixtures containing a lithium salt. The decomposition of these solvents by oxidation or reduction during cycling of the cell induces the generation of gases (CO2, CH4, C2H4, CO, ...), increasing the pressure in the sealed cell and causing a safety problem [1]. A prior understanding of the parameters that influence gas solubility and the vapor pressure of electrolytes, such as the structure and nature of the salt, temperature, pressure, concentration, salting effects and solvation parameters, is required to formulate safer and more suitable electrolytes, especially at high temperature.
In this work we present the CO2, CH4, C2H4 and CO solubility in different pure alkyl-carbonate solvents (PC, DMC, EMC, DEC) and their binary or ternary mixtures, as well as the effect of temperature and of the lithium salt LiX (X = PF6, TFSI or FAP) structure and concentration on these properties. Furthermore, in order to understand the parameters that influence the choice of solvent structure and its ability to dissolve gas upon addition of a salt, we first analyzed experimentally the transport properties (self-diffusion coefficient (D), fluidity (η⁻¹), conductivity (σ) and lithium transport number (tLi)) using the Stokes-Einstein and extended Jones-Dole equations [2]. Measured data for the CO2, C2H4, CH4 and CO solubility in pure alkyl carbonates and their mixtures containing LiPF6, LiFAP or LiTFSI salts are then reported as a function of temperature and salt concentration. Based on the experimental solubility data, the Henry's law constants of the gases in these solvents and electrolytes were deduced and compared with values predicted using the COSMO-RS methodology within the COSMOthermX software. From these results, the molar thermodynamic functions of dissolution, such as the standard Gibbs energy, enthalpy and entropy, as well as the mixing enthalpy of the solvents and electrolytes with the gases in their hypothetical liquid state, were calculated and discussed [3]. Finally, the variation of CO2 solubility with salt addition was evaluated by determining specific ion parameters (Hi) using the Setchenov coefficients in solution. This study showed that gas solubility is entropy driven and can be influenced by the shape, charge density and size of the anion of the lithium salt.
References
[1] S.A. Freunberger, Y. Chen, Z. Peng, J.M. Griffin, L.J. Hardwick, F. Bardé, P. Novák, P.G. Bruce, Journal of the American Chemical Society 133 (2011) 8040-8047.
[2] P. Porion, Y.R. Dougassa, C. Tessier, L. El Ouatani, J. Jacquemin, M. Anouti, Electrochimica Acta 114 (2013) 95-104.
[3] Y.R. Dougassa, C. Tessier, L. El Ouatani, M. Anouti, J. Jacquemin, The Journal of Chemical Thermodynamics 61 (2013) 32-44.
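For illustration, a minimal Python sketch (with invented numbers, not the measured data of this study) of how Henry's law constants and the molar thermodynamic functions of dissolution can be derived from solubility data via the van 't Hoff relation:

```python
import numpy as np

R = 8.314          # J mol-1 K-1
p_standard = 1e5   # Pa, standard pressure

# Hypothetical equilibrium data: temperature (K) and CO2 mole-fraction
# solubility at a gas partial pressure of 1 bar in an alkyl-carbonate solvent.
T = np.array([283.15, 298.15, 313.15, 333.15])
x_gas = np.array([0.012, 0.0085, 0.0062, 0.0043])
p_gas = 1e5  # Pa

KH = p_gas / x_gas                      # Henry's law constant, Pa
dG = R * T * np.log(KH / p_standard)    # standard Gibbs energy of dissolution, J/mol

# van 't Hoff: the slope of ln(KH) versus 1/T gives dH / R
slope, _ = np.polyfit(1.0 / T, np.log(KH), 1)
dH = R * slope                          # dissolution enthalpy, J/mol
dS = (dH - dG) / T                      # dissolution entropy, J mol-1 K-1

for Ti, KHi, dGi, dSi in zip(T, KH, dG, dS):
    print(f"T={Ti:6.2f} K  KH={KHi/1e6:6.2f} MPa  dG={dGi/1e3:5.2f} kJ/mol  dS={dSi:6.1f} J/(mol K)")
```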
Abstract:
Protective factors are neglected in risk assessment in adult psychiatric and criminal justice populations. This review investigated the predictive efficacy of selected tools that assess protective factors. Five databases were searched using comprehensive terms for records up to June 2014, resulting in 17 studies (n = 2,198). Results were combined in a multilevel meta-analysis using the metafor package (Viechtbauer, Journal of Statistical Software, 2010, 36, 1) for R (R Core Team, R: A Language and Environment for Statistical Computing, Vienna, Austria: R Foundation for Statistical Computing, 2015). Prediction of outcomes was poor relative to a reference category of violent offending, with the exception of prediction of discharge from secure units. There were no significant differences between the predictive efficacy of risk scales, protective scales, and summary judgments. Protective factor assessment may be clinically useful, but more development is required. Claims that the use of these tools is therapeutically beneficial require testing.
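As an illustration of the kind of pooling such a review performs, a minimal random-effects meta-analysis sketch in Python (a DerSimonian-Laird estimator with hypothetical effect sizes); the study itself used the R metafor package and a multilevel model.

```python
import numpy as np

def random_effects_pool(effects, variances):
    """Pool per-study effect sizes with DerSimonian-Laird random-effects weights."""
    effects = np.asarray(effects, dtype=float)
    v = np.asarray(variances, dtype=float)
    w = 1.0 / v                                  # fixed-effect weights
    fixed = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - fixed) ** 2)       # Cochran's Q
    df = len(effects) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)                # between-study variance
    w_star = 1.0 / (v + tau2)                    # random-effects weights
    pooled = np.sum(w_star * effects) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return pooled, se, tau2

# Hypothetical per-study effect sizes and sampling variances
pooled, se, tau2 = random_effects_pool([0.35, 0.20, 0.55, 0.10], [0.02, 0.03, 0.05, 0.01])
print(f"pooled effect = {pooled:.2f} +/- {1.96 * se:.2f} (tau^2 = {tau2:.3f})")
```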
Abstract:
For nearly two centuries, gas hydrates have played an important role in process engineering because of their economic and environmental impact on industry. Every day more companies and engineers take an interest in this topic, as new challenges show gas hydrates to be a crucial factor, making their study a solution for the near future. Gas hydrates are ice-like structures composed of host water molecules enclosing gaseous compounds. They occur naturally under conditions of high pressure and low temperature, conditions typical of some chemical and petrochemical processes [1]. Based on the doctoral work of Windmeier [2] and the doctoral work of Rock [3], the thermodynamic description of the gas hydrate phases is implemented following the state of the art of science and technology. With the help of the Dortmund Data Bank (DDB) and the corresponding software package (DDBSP) [26], the performance of the method was improved and compared against a large amount of data published around the world. The applicability of gas hydrate prediction was also studied with a focus on process engineering, through a case study on the extraction, production and transport of natural gas. It was determined that gas hydrate prediction is crucial in natural gas process design: while no hydrate formation occurs in the gas treatment and liquid processing stages, a minimum temperature of 290.15 K is critical in the dehydration stage, and the use of inhibitors is essential for extraction and transport. A mass composition of 40% ethylene glycol was found appropriate to prevent gas hydrate formation during extraction, and a mass composition of 20% methanol during transport.
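As a rough illustration of inhibitor dosing, a Python sketch using the classical Hammerschmidt rule of thumb for the hydrate-formation temperature depression produced by an inhibitor. The constant K ≈ 1297 (Celsius basis) is the commonly quoted textbook value and only an approximation; this is not the DDBSP-based method used in the thesis.

```python
def hammerschmidt_depression(wt_percent, molar_mass, K=1297.0):
    """Approximate hydrate temperature depression (K) for an inhibitor
    at wt_percent in the aqueous phase, with molar_mass in g/mol."""
    return K * wt_percent / (molar_mass * (100.0 - wt_percent))

# 40 wt% ethylene glycol (M ~ 62.07 g/mol) and 20 wt% methanol (M ~ 32.04 g/mol)
print("MEG 40 wt%:  dT ~", round(hammerschmidt_depression(40.0, 62.07), 1), "K")
print("MeOH 20 wt%: dT ~", round(hammerschmidt_depression(20.0, 32.04), 1), "K")
```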
Abstract:
Phosphorylation is among the most crucial and well-studied post-translational modifications. It is involved in multiple cellular processes, which makes phosphorylation prediction vital for understanding protein functions. However, wet-lab techniques are labour- and time-intensive, so computational tools are required for efficiency. This project aims to provide a novel way to predict phosphorylation sites from protein sequences by adding flexibility and the Sezerman Grouping amino acid similarity measure to previous methods, since new protein sequences are discovered at a greater rate than protein structures are determined. The predictor, NOPAY, relies on Support Vector Machines (SVMs) for classification. The features include amino acid encoding, amino acid grouping, predicted secondary structure, predicted protein disorder, predicted protein flexibility, solvent accessibility, hydrophobicity and volume. As a result, we have improved phosphorylation prediction accuracy by 3% for Homo sapiens and by 6.1% for Mus musculus. Sensitivity at 99% specificity was also increased by 6% for Homo sapiens and by 5% for Mus musculus on independent test sets. Given enough data, future versions of the software may also be able to predict phosphorylation sites in other organisms.
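A minimal Python sketch (with a hypothetical one-hot window encoding and toy data, not the NOPAY feature set) of SVM-based phosphorylation-site classification:

```python
import numpy as np
from sklearn.svm import SVC

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def one_hot_window(window):
    """Encode a sequence window centred on a candidate S/T/Y residue as a flat one-hot vector."""
    vec = np.zeros((len(window), len(AMINO_ACIDS)))
    for i, aa in enumerate(window):
        if aa in AMINO_ACIDS:
            vec[i, AMINO_ACIDS.index(aa)] = 1.0
    return vec.ravel()

# Hypothetical labelled 9-residue windows (1 = phosphorylated centre residue)
windows = ["RRASVAGLK", "LKRSYSDEE", "AAAGTAAAA", "PPPSPPPPP", "KKKSEEDDD", "GGGTGGGGG"]
labels = [1, 1, 0, 1, 1, 0]

X = np.array([one_hot_window(w) for w in windows])
y = np.array(labels)

# Train an RBF-kernel SVM and score a new (hypothetical) candidate window
clf = SVC(kernel="rbf", C=1.0).fit(X, y)
print("predicted label:", clf.predict([one_hot_window("KRRSSDEEA")])[0])
```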
Abstract:
The aim of this thesis is to test the ability of several correlative models, such as the Alpert correlations published in 1972 and re-examined in 2011, the investigation of Heskestad and Delichatsios in 1978, and the correlations produced by Cooper in 1982, to define both the dynamic and thermal characteristics of a fire-induced ceiling-jet flow. The flow occurs when the fire plume impinges on the ceiling and develops in the radial direction from the fire axis. Both temperature and velocity predictions are decisive for sprinkler positioning, fire alarm positions, detector (heat, smoke) positions and activation times, and back-layering predictions. These correlative models are compared with the numerical fire simulation software CFAST in terms of the predicted temperature and velocity near the ceiling. The results are also compared with a Computational Fluid Dynamics (CFD) analysis using ANSYS FLUENT.
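For reference, a short Python sketch of the 1972 Alpert ceiling-jet correlations as commonly quoted in fire-safety handbooks (coefficients should be checked against the original source before use):

```python
# Q: total heat release rate in kW, H: ceiling height above the fire source (m),
# r: radial distance from the plume axis (m), T_inf: ambient temperature (degC).

def alpert_ceiling_jet(Q, H, r, T_inf=20.0):
    """Return (maximum gas temperature [degC], maximum velocity [m/s]) in the ceiling jet."""
    if r / H <= 0.18:                     # near-plume (turning) region
        T_max = T_inf + 16.9 * Q ** (2.0 / 3.0) / H ** (5.0 / 3.0)
    else:                                 # ceiling-jet region
        T_max = T_inf + 5.38 * (Q / r) ** (2.0 / 3.0) / H

    if r / H <= 0.15:
        U_max = 0.96 * (Q / H) ** (1.0 / 3.0)
    else:
        U_max = 0.195 * Q ** (1.0 / 3.0) * H ** 0.5 / r ** (5.0 / 6.0)
    return T_max, U_max

# Example: 1 MW fire, 5 m ceiling, sprinkler head 3 m from the fire axis
T, U = alpert_ceiling_jet(Q=1000.0, H=5.0, r=3.0)
print(f"ceiling-jet temperature ~ {T:.0f} degC, velocity ~ {U:.1f} m/s")
```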
Abstract:
FEA simulation of thermal metal cutting is central to interactive design and manufacturing. It is therefore relevant to assess the applicability of open FEA software for simulating 2D heat transfer in metal sheet laser cuts. The use of open source codes (e.g. FreeFem++, FEniCS, MOOSE) makes additional scenarios possible (e.g. parallel, CUDA, etc.) at lower cost. However, a precise assessment is required of the scenarios in which open software can be a sound alternative to commercial software. This article contributes in this regard by presenting a comparison of the aforementioned free FEM packages for the simulation of heat transfer in thin (i.e. 2D) sheets subject to a gliding laser point source. We use the commercial ABAQUS software as the reference against which the open software is compared. A convective linear thin-sheet heat transfer model, with and without material removal, is used. This article does not intend a full design of computer experiments. Our partial assessment shows that the thin-sheet approximation is adequate in terms of the relative error for linear alumina sheets. For mesh resolutions finer than 10e−5 m, the open and reference software temperatures differ by at most 1% of the temperature prediction. Ongoing work includes adaptive re-meshing, nonlinearities, sheet stress analysis and Mach (also called 'relativistic') effects.
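As an illustration of the underlying model (not the FEM codes compared in the article), a finite-difference Python sketch of transient 2D heat conduction in a thin sheet with convective losses and a gliding Gaussian laser source, using rough alumina-like placeholder properties:

```python
import numpy as np

k, rho, cp = 30.0, 3900.0, 880.0            # W/mK, kg/m3, J/kgK (placeholder alumina values)
h, thickness, T_amb = 20.0, 1e-3, 300.0     # W/m2K, sheet thickness (m), ambient (K)
P, r0, v = 200.0, 0.5e-3, 0.02              # laser power (W), spot radius (m), traverse speed (m/s)

nx = ny = 101
L = 0.05                                    # 5 cm square sheet
dx = L / (nx - 1)
alpha = k / (rho * cp)
dt = 0.2 * dx ** 2 / alpha                  # stable explicit time step

T = np.full((nx, ny), T_amb)
x = y = np.linspace(0.0, L, nx)
X, Y = np.meshgrid(x, y, indexing="ij")

t = 0.0
while t < 1.0:                              # simulate 1 s of the laser traverse
    lap = (np.roll(T, 1, 0) + np.roll(T, -1, 0) +
           np.roll(T, 1, 1) + np.roll(T, -1, 1) - 4 * T) / dx ** 2
    xs = 0.01 + v * t                       # the laser spot glides along the x-axis
    q = (P / (np.pi * r0 ** 2)) * np.exp(-((X - xs) ** 2 + (Y - 0.5 * L) ** 2) / r0 ** 2)
    # surface source and convective loss (both faces) averaged over the sheet thickness
    T = T + dt * (alpha * lap + (q - 2 * h * (T - T_amb)) / (rho * cp * thickness))
    T[0, :] = T[-1, :] = T[:, 0] = T[:, -1] = T_amb   # fixed-temperature edges
    t += dt

print("peak temperature after 1 s:", float(T.max()), "K")
```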
Abstract:
Purpose: To investigate the expression of the Myt272-3 recombinant protein and to predict a possible protein vaccine candidate against Mycobacterium tuberculosis. Methods: The Myt272-3 protein was expressed from the pET30a+-Myt272-3 clone. The purity of the protein was determined using Dynabeads® His-Tag Isolation & Pulldown. The protein sequence was analysed in silico using bioinformatics software for the prediction of allergenicity, antigenicity, MHC-I and MHC-II binding, and B-cell epitope binding. Results: The candidate protein was a non-allergen with a 15.19% positive predictive value. It was also predicted to be antigenic, with binding affinity to MHC-I and MHC-II, as well as B-cell epitope binding. Conclusion: The predictions obtained in this study provide a guide for the practical design of a new tuberculosis vaccine.
Abstract:
In this thesis work, a meshfree method with distance fields is applied to create a novel computational approach that enables the inclusion of realistic geometric models of the microstructure and liberates Finite Element Analysis (FEA) from the dependence on, and limitations of, meshing fine microstructural features such as splats and porosity. Manufacturing processes of ceramics produce materials with a complex porosity microstructure. The geometry of the pores, their size and their location substantially affect the macro-scale physical properties of the material. The complex structure and geometry of the pores severely limit the application of modern Finite Element Analysis methods because they require the construction of spatial grids (meshes) that conform to the geometric shape of the structure. As a result, there are virtually no effective tools available for predicting the overall mechanical and thermal properties of porous materials based on their microstructure. The contribution of this thesis is a separate handling and control of geometric and physical computational models that are seamlessly combined at solution run time. Using the proposed approach, we determine the effective thermal conductivity tensor of real porous ceramic materials featuring both isotropic and anisotropic thermal properties. This work involved the development and implementation of numerical algorithms, data structures, and software.
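A small Python sketch of the distance-field idea: the pore geometry (here, hypothetical circular pores) is represented implicitly by a signed distance function on a regular grid instead of a body-fitted mesh.

```python
import numpy as np

n, L = 200, 1.0                        # grid resolution and domain size
x = np.linspace(0.0, L, n)
X, Y = np.meshgrid(x, x, indexing="ij")

# Hypothetical pore set: (centre_x, centre_y, radius)
pores = [(0.3, 0.3, 0.08), (0.7, 0.45, 0.12), (0.45, 0.75, 0.06)]

# Signed distance to the solid: negative inside a pore, positive in the material.
phi = np.full_like(X, np.inf)
for cx, cy, r in pores:
    phi = np.minimum(phi, np.sqrt((X - cx) ** 2 + (Y - cy) ** 2) - r)

solid = phi > 0.0                      # pointwise material indicator
porosity = 1.0 - solid.mean()
print(f"porosity of the sample microstructure: {porosity:.3f}")

# In a meshfree solution, the field phi (rather than a conforming mesh) is what
# enforces boundary conditions on the pore surfaces at solution run time.
```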
Abstract:
Security defects are common in large software systems because of their size and complexity. Although efficient development processes, testing, and maintenance policies are applied to software systems, a large number of vulnerabilities can still remain despite these measures. Some vulnerabilities stay in a system from one release to the next because they cannot be easily reproduced through testing. These vulnerabilities endanger the security of the systems. We propose vulnerability classification and prediction frameworks based on vulnerability reproducibility. The frameworks are effective at identifying the types and locations of vulnerabilities at an early stage and at improving the security of the software in subsequent versions (referred to as releases). We expand an existing concept of software bug classification to vulnerability classification (easily reproducible and hard to reproduce) to develop a classification framework for differentiating between these vulnerabilities based on code fixes and textual reports. We then investigate the potential correlations between the vulnerability categories and classical software metrics, as well as other runtime environmental factors of reproducibility, to develop a vulnerability prediction framework. The classification and prediction frameworks help developers adopt corresponding mitigation or elimination actions and develop appropriate test cases. The vulnerability prediction framework also helps security experts focus their effort on the top-ranked vulnerability-prone files. As a result, the frameworks decrease the number of attacks that exploit security vulnerabilities in the next versions of the software. To build the classification and prediction frameworks, different machine learning techniques (C4.5 Decision Tree, Random Forest, Logistic Regression, and Naive Bayes) are employed. The effectiveness of the proposed frameworks is assessed on collected security defects of Mozilla Firefox.
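A minimal Python sketch (with hypothetical features and labels, not the Mozilla Firefox data) of the kind of classifier such a framework can use to predict whether a reported vulnerability is easily reproducible from code metrics:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical per-file metrics: lines of code, cyclomatic complexity, churn, past fixes
X = np.array([
    [120,  8,  3, 1],
    [950, 42, 17, 6],
    [300, 15,  5, 2],
    [780, 35, 11, 4],
    [ 60,  4,  1, 0],
    [500, 22,  9, 3],
    [870, 39, 14, 5],
    [210, 10,  2, 1],
])
y = np.array([1, 0, 1, 0, 1, 1, 0, 0])  # 1 = easily reproducible, 0 = hard to reproduce

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=4).mean())
```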
Abstract:
Monitoring user interaction activities provides the basis for creating a user model that can be used to predict user behaviour and enable user assistant services. The BaranC framework provides components that perform UI monitoring (and collect all associated context data), build a user model, and support services that make use of that user model. In this case study, a Next-App prediction service is built to demonstrate the use of the framework and to evaluate the usefulness of such a prediction service. Next-App analyses a user's data, learns patterns, builds a model for the user, and finally predicts, based on the user model and the current context, which application(s) the user is likely to want to use. The prediction is pro-active and dynamic: it is dynamic both in responding to the current context and in responding to changes in the user model, as may occur over time as a user's habits change. Initial evaluation of Next-App indicates a high level of satisfaction with the service.
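A toy Python sketch (not the BaranC implementation) of context-aware next-app prediction: transitions are counted conditioned on a simple context feature and the most frequent follow-up application is predicted.

```python
from collections import Counter, defaultdict

# Hypothetical interaction log: (hour, current_app, next_app)
log = [
    (9,  "mail",    "calendar"),
    (9,  "mail",    "calendar"),
    (9,  "mail",    "browser"),
    (13, "browser", "editor"),
    (13, "browser", "editor"),
    (20, "browser", "media"),
]

def hour_bucket(hour):
    """Reduce the raw hour to a coarse context feature."""
    return "morning" if hour < 12 else "afternoon" if hour < 18 else "evening"

# User model: counts of next apps conditioned on (context, current app)
model = defaultdict(Counter)
for hour, current, nxt in log:
    model[(hour_bucket(hour), current)][nxt] += 1

def predict_next(hour, current):
    counts = model.get((hour_bucket(hour), current))
    return counts.most_common(1)[0][0] if counts else None

print(predict_next(10, "mail"))      # -> 'calendar'
print(predict_next(21, "browser"))   # -> 'media'
```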
Abstract:
Predicting user behaviour enables user assistant services to provide personalized services to users. This requires a comprehensive user model, which can be created by monitoring user interactions and activities. BaranC is a framework that performs user interface (UI) monitoring (and collects all associated context data), builds a user model, and supports services that make use of the user model. A prediction service, Next-App, is built to demonstrate the use of the framework and to evaluate the usefulness of such a prediction service. Next-App analyses a user's data, learns patterns, builds a model for the user, and finally predicts, based on the user model and the current context, which application(s) the user is likely to want to use. The prediction is pro-active and dynamic, reflecting the current context, and is also dynamic in that it responds to changes in the user model, as may occur over time as a user's habits change. Initial evaluation of Next-App indicates a high level of satisfaction with the service.
Stochastic individual growth models and development of estimation and prediction software
Abstract:
Individual growth models are usually adaptations of population growth models. Initially these models were only deterministic, that is, they did not incorporate the random fluctuations of the environment. With the development of the theory of stochastic calculus, we can add a stochastic term that represents the random environmental influences on the process under study. Currently, the study of individual growth in a random environment is increasingly important, not only for its financial relevance but also because of its applications in health care and livestock production, among others. Problems such as the fitting of individual growth models, parameter estimation and prediction of future sizes are treated in this work. New applications of the generalized stochastic monomolecular model are presented, together with new software that implements this and other models.
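A minimal Euler-Maruyama sketch in Python of one common stochastic formulation of the monomolecular growth model, dX_t = β(α − X_t) dt + σ dW_t, with made-up parameter values, shown only to illustrate the class of models treated:

```python
import numpy as np

alpha, beta, sigma = 100.0, 0.4, 3.0      # asymptotic size, growth rate, noise intensity
x0, t_end, dt = 10.0, 20.0, 0.01
n_steps = int(t_end / dt)

rng = np.random.default_rng(1)

def simulate_path():
    """Simulate one growth trajectory with the Euler-Maruyama scheme."""
    x = np.empty(n_steps + 1)
    x[0] = x0
    for i in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt))                       # Brownian increment
        x[i + 1] = x[i] + beta * (alpha - x[i]) * dt + sigma * dW
    return x

paths = np.array([simulate_path() for _ in range(500)])
print("mean size at t =", t_end, ":", paths[:, -1].mean())
print("deterministic asymptotic size:", alpha)
```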