946 results for software failure prediction


Relevance:

30.00%

Publisher:

Abstract:

Software is patentable in Europe so long as there is sufficient ‘technical contribution’ under the decades-long interpretation of the European Patent Convention made by the Boards of Appeal of the European Patent Office. Despite the failure of the proposed Directive on Computer Implemented Inventions, opponents of software patents have failed to have any effect upon this technical contrivance. Yet, while national courts find the Boards of Appeal decisions persuasive, ‘technical contribution’ remains a difficult test for these various courts to apply. In this article I argue that the test is difficult to utilise in national litigation (it is an engineering approach, rather than a legal one) and suggest that, as the Boards of Appeal become less important (and thus less persuasive) should the proposed Unified Patent Court come to fruition, the ‘technical contribution’ test is unlikely to last. This may reopen the debate over what/whether/how software should be patentable, hopefully in a less aggressive environment than has existed to date.

Relevance:

30.00%

Publisher:

Abstract:

This paper presents a scalable, statistical ‘black-box’ model for predicting the performance of parallel programs on multi-core non-uniform memory access (NUMA) systems. We derive a model with low overhead by reducing data collection and model training time. The model can accurately predict the behaviour of parallel applications in response to changes in their concurrency, thread layout on NUMA nodes, and core voltage and frequency. We present a framework that applies the model to achieve significant energy and energy-delay-squared (ED2) savings (9% and 25%, respectively) along with performance improvement (10% mean) on an actual 16-core NUMA system running realistic application workloads. Our prediction model proves substantially more accurate than previous efforts.
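The abstract does not specify the statistical model used; purely as a minimal sketch of the black-box idea, the hypothetical example below fits an ordinary least-squares regression that predicts runtime from concurrency, NUMA node placement, and core frequency, using assumed training data.

```python
# Minimal sketch of a 'black-box' performance model: the feature set, data,
# and the choice of plain linear regression are assumptions, not the model
# actually used in the paper.
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical training samples: (threads, NUMA nodes used, frequency in GHz)
X = np.array([
    [4, 1, 1.2], [8, 1, 2.0], [8, 2, 2.0],
    [16, 2, 2.6], [16, 4, 1.2], [12, 2, 2.6],
])
runtime_s = np.array([41.0, 22.5, 19.8, 12.1, 17.3, 14.6])  # measured runtimes

model = LinearRegression().fit(X, runtime_s)

# Predict runtime for an untested configuration (12 threads, 4 nodes, 2.0 GHz)
print(model.predict(np.array([[12, 4, 2.0]])))
```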

Relevance:

30.00%

Publisher:

Abstract:

Knowledge of the life span of the riveting dies used in the automotive industry is sparse. It is often the case that only when faulty products are produced do workers become aware that their tool needs to be changed. This is of course costly both in terms of time and money. Responding to this challenge, this paper proposes a methodology which integrates wear and stress analysis to quantify the life of a riveting die. Experiments are carried out to measure the applied load required to split a rivet. The obtained results (i.e. force curves) are used to validate the wear mechanisms of the die observed using scanning electron microscopy. Sliding, impact, and adhesive wear are observed on the riveting die after a certain number of riveting cycles. The stress distribution on the die during riveting is simulated using a finite element (FE) approach. In order to confirm the accuracy of the FE model, the experimental force results are compared with those produced by the FE simulation. The maximum and minimum von Mises stresses generated from the FE model are input into a Goodman diagram and an S-N curve to compute the life of the riveting die. The riveting die is predicted to run for 4 980 000 cycles before failure.
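As a minimal sketch of the final life-estimation step, the example below applies a Goodman mean-stress correction to hypothetical maximum/minimum von Mises stresses and reads the life off a Basquin-type S-N curve; the stresses, ultimate strength, and S-N coefficients are illustrative assumptions, not values from the paper.

```python
# Hedged sketch: Goodman correction + Basquin S-N life estimate.
# All numeric values below are illustrative assumptions.

sigma_max = 1400.0   # MPa, max von Mises stress from the FE model (assumed)
sigma_min = 200.0    # MPa, min von Mises stress from the FE model (assumed)
sigma_uts = 2200.0   # MPa, ultimate tensile strength of the die steel (assumed)

sigma_a = (sigma_max - sigma_min) / 2.0   # stress amplitude
sigma_m = (sigma_max + sigma_min) / 2.0   # mean stress

# Goodman correction: equivalent fully reversed stress amplitude
sigma_ar = sigma_a / (1.0 - sigma_m / sigma_uts)

# Basquin S-N curve: sigma_ar = sigma_f * (2N)**b  (coefficients assumed)
sigma_f = 3200.0     # MPa, fatigue strength coefficient
b = -0.09            # fatigue strength exponent

N = 0.5 * (sigma_ar / sigma_f) ** (1.0 / b)   # cycles to failure
print(f"Estimated life: {N:,.0f} cycles")
```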

Relevance:

30.00%

Publisher:

Abstract:

The research aims to carry out a detailed analysis of the loads applied by ambulance workers when loading/unloading ambulance stretchers. The forces required of the ambulance workers for each system are measured using a load cell in a force handle arrangement. The process of loading and unloading is video recorded for all the systems to register the posture of the ambulance workers in different stages of the process. The postures and forces exerted by the ambulance workers are analyzed using biomechanical assessment software to examine whether the workloads at any stage of the process are harmful. Kinetic analysis of each stretcher loading system is performed. Comparison of the kinetic analysis and measurements shows very close agreement for most of the cases. The force analysis results are evaluated against derived failure criteria. The evaluation is extended to a biomechanical failure analysis of the ambulance worker's lower back using the 3DSSPP software developed at the Center for Ergonomics at the University of Michigan. The critical tasks of each ambulance worker during the loading and unloading operations for each system are identified. Design recommendations are made to reduce the forces exerted, based on loading requirements from the kinetic analysis. © 2006 IPEM.
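The paper's own analysis relies on the 3DSSPP package; purely as a hedged illustration of the kind of static low-back calculation involved, the sketch below estimates the L5/S1 moment and an approximate disc compression force for an assumed hand load and posture. All lever arms, segment masses, and the erector spinae moment arm are assumptions, not values from the study.

```python
# Hedged static low-back sketch (not the 3DSSPP model): estimate the L5/S1
# moment and an approximate compressive force for one lifting posture.
# Every numeric value below is an assumption for illustration only.

g = 9.81                    # m/s^2
hand_force_N = 250.0        # handle force from the load cell (assumed)
hand_lever_m = 0.45         # horizontal distance hands -> L5/S1 (assumed)
upper_body_mass_kg = 45.0   # trunk + arms + head mass (assumed)
upper_body_lever_m = 0.20   # its horizontal distance to L5/S1 (assumed)
erector_arm_m = 0.05        # erector spinae moment arm (commonly ~5 cm)

# Static moment about L5/S1
moment_Nm = hand_force_N * hand_lever_m + upper_body_mass_kg * g * upper_body_lever_m

# Extensor muscle force needed to balance the moment, and resulting compression
muscle_force_N = moment_Nm / erector_arm_m
compression_N = muscle_force_N + hand_force_N + upper_body_mass_kg * g

print(f"L5/S1 moment: {moment_Nm:.0f} N·m, est. compression: {compression_N:.0f} N")
# Compression estimates of this kind are often compared against the 3400 N
# NIOSH back compression action limit.
```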

Relevance:

30.00%

Publisher:

Abstract:

The methane solubility in five pure electrolyte solvents and one binary solvent mixture for lithium ion batteries – namely ethylene carbonate (EC), propylene carbonate (PC), dimethyl carbonate (DMC), ethyl methyl carbonate (EMC), diethyl carbonate (DEC) and the (50:50 wt%) mixture of EC:DMC – was studied experimentally at pressures close to atmospheric and as a function of temperature between (280 and 343) K by using an isochoric saturation technique. The effect of selected anions of a lithium salt LiX (X = hexafluorophosphate, PF6−; tris(pentafluoroethane)trifluorophosphate, FAP; bis(trifluoromethylsulfonyl)imide, TFSI) on the methane solubility in electrolytes for lithium ion batteries was then investigated using a model electrolyte based on the binary mixture of EC:DMC (50:50 wt%) + 1 mol · dm−3 of lithium salt, in the same temperature and pressure ranges. Based on the experimental solubility data, the Henry's law constants of methane in these solutions were then deduced and compared with each other and with those predicted using the COSMO-RS methodology within the COSMOthermX software. From this study, it appears that the methane solubility in each pure solvent decreases with temperature and increases in the following order: EC < PC < EC:DMC (50:50 wt%) < DMC < EMC < DEC, showing that it increases with the van der Waals forces in solution. Additionally, in all investigated EC:DMC (50:50 wt%) + 1 mol · dm−3 lithium salt electrolytes, the methane solubility also decreases with temperature, and it is highest in the electrolyte containing the LiFAP salt, followed by that based on LiTFSI. From the variation of the Henry's law constants with temperature, the partial molar thermodynamic functions of solvation, such as the standard Gibbs free energy, the enthalpy, and the entropy, were then calculated, as well as the mixing enthalpy of the solvent with methane in its hypothetical liquid state. Finally, the effect of the gas structure on its solubility in the selected solutions was discussed by comparing the methane solubility data reported in the present work with carbon dioxide solubility data available for the same solvents or mixtures, in order to discern the more harmful gas generated during the degradation of the electrolyte, which limits the battery lifetime.
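As a minimal sketch of the thermodynamic derivation described above, the example below fits ln KH against 1/T for hypothetical Henry's law constants (van't Hoff form) and extracts the standard Gibbs energy, enthalpy, and entropy of solvation; the numerical KH values are assumptions, not data from the paper.

```python
# Hedged sketch: solvation thermodynamics from temperature-dependent
# Henry's law constants, with ln(KH/p0) fitted linearly in 1/T.
# The KH values below are illustrative assumptions, not measured data.
import numpy as np

R = 8.314            # J mol^-1 K^-1
p0 = 1.0e5           # Pa, standard pressure

T = np.array([283.15, 303.15, 323.15, 343.15])       # K
KH = np.array([55e5, 68e5, 82e5, 97e5])              # Pa, assumed Henry's constants

# ln(KH/p0) = a + b/T  ->  the slope b gives the solvation enthalpy via van't Hoff
b_slope, a_int = np.polyfit(1.0 / T, np.log(KH / p0), 1)
dH_solv = R * b_slope                 # J/mol, d ln(KH)/d(1/T) = dH_solv / R

for Ti, KHi in zip(T, KH):
    dG = R * Ti * np.log(KHi / p0)    # standard Gibbs energy of solvation
    dS = (dH_solv - dG) / Ti          # entropy of solvation
    print(f"T={Ti:.2f} K  dG={dG/1000:6.2f} kJ/mol  "
          f"dH={dH_solv/1000:6.2f} kJ/mol  dS={dS:7.2f} J/(mol K)")
```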

Relevance:

30.00%

Publisher:

Abstract:

Thesis (Master's)--University of Washington, 2015

Relevance:

30.00%

Publisher:

Abstract:

Master's thesis in Mechanical Engineering, in the field of Maintenance and Production

Relevance:

30.00%

Publisher:

Abstract:

Therapeutic drug monitoring (TDM) aims to optimize treatments by individualizing dosage regimens based on the measurement of blood concentrations. Dosage individualization to maintain concentrations within a target range requires pharmacokinetic and clinical capabilities. Bayesian calculation currently represents the gold-standard TDM approach but requires computational assistance. In recent decades, computer programs have been developed to assist clinicians in this task. The aim of this survey was to assess and compare computer tools designed to support TDM clinical activities. The literature and the Internet were searched to identify software. All programs were tested on personal computers. Each program was scored against a standardized grid covering pharmacokinetic relevance, user friendliness, computing aspects, interfacing and storage. A weighting factor was applied to each criterion of the grid to account for its relative importance. To assess the robustness of the software, six representative clinical vignettes were processed through each of them. Altogether, 12 software tools were identified, tested and ranked, representing a comprehensive review of the available software. The number of drugs handled by the tools varies widely (from two to 180), and eight programs offer users the possibility of adding new drug models based on population pharmacokinetic analyses. Bayesian computation to predict dosage adaptation from blood concentrations (a posteriori adjustment) is performed by ten tools, while nine are also able to propose a priori dosage regimens based only on individual patient covariates such as age, sex and bodyweight. Among those applying Bayesian calculation, MM-USC*PACK© uses a non-parametric approach. The top two programs emerging from this benchmark were MwPharm© and TCIWorks. Most other programs evaluated had good potential while being less sophisticated or less user friendly. Programs vary in complexity and might not fit all healthcare settings. Each software tool must therefore be regarded with respect to the individual needs of hospitals or clinicians. Programs should be easy and fast to use for routine activities, including for non-experienced users. Computer-assisted TDM is gaining growing interest and should further improve, especially in terms of information system interfacing, user friendliness, data storage capability and report generation.
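The abstract describes Bayesian dose individualization only in general terms; as a hedged sketch of the underlying calculation (not the method of any particular program in the survey), the example below computes a maximum a posteriori (MAP) estimate of clearance and volume for a one-compartment IV bolus model from a single observed concentration. All priors, doses, and observations are assumed for illustration.

```python
# Hedged sketch of MAP Bayesian individualization: one-compartment IV bolus
# model, log-normal priors on CL and V, one observed concentration.
# All numbers are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize

dose_mg = 500.0
t_obs_h = 8.0
c_obs = 6.2                      # mg/L, measured concentration (assumed)

# Population priors (assumed): geometric means and log-scale SDs
cl_pop, v_pop = 4.0, 30.0        # L/h, L
omega_cl, omega_v = 0.3, 0.2     # between-subject variability (log SD)
sigma = 0.15                     # residual error on log concentrations

def neg_log_posterior(theta):
    log_cl, log_v = theta
    cl, v = np.exp(log_cl), np.exp(log_v)
    c_pred = (dose_mg / v) * np.exp(-(cl / v) * t_obs_h)
    # residual term (log scale) + prior penalties
    return ((np.log(c_obs) - np.log(c_pred)) ** 2 / (2 * sigma ** 2)
            + (log_cl - np.log(cl_pop)) ** 2 / (2 * omega_cl ** 2)
            + (log_v - np.log(v_pop)) ** 2 / (2 * omega_v ** 2))

res = minimize(neg_log_posterior, x0=[np.log(cl_pop), np.log(v_pop)])
cl_map, v_map = np.exp(res.x)
print(f"MAP estimates: CL = {cl_map:.2f} L/h, V = {v_map:.1f} L")
# A dose to reach a target concentration could then be back-calculated
# from these individualized estimates.
```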

Relevance:

30.00%

Publisher:

Abstract:

The main objective of this master's thesis is to examine whether Weibull analysis is a suitable method for warranty forecasting in the Case Company. The Case Company has used Reliasoft's Weibull++ software, which is based on the Weibull method, but the Company has noticed that the analysis has not given correct results. This study was conducted by making Weibull simulations in different profit centers of the Case Company and then comparing actual costs with forecasted costs. Simulations were made using different time frames and two methods for determining future deliveries. The first sub-objective is to examine which simulation parameters give the best result for each profit center. The second sub-objective of this study is to create a simple control model for following forecasted costs and actual realized costs. The third sub-objective is to document all Qlikview parameters of the profit centers. This study is constructive research, and solutions for the company's problems are worked out in this master's thesis. The theory part introduces quality issues, for example what quality is, quality costing, and the cost of poor quality. Quality is one of the major aspects in the Case Company, so understanding the link between quality and warranty forecasting is important. Warranty management and other tools for warranty forecasting were also introduced, as were the Weibull method, its mathematical properties, and reliability engineering. The main result of this master's thesis is that the Weibull analysis forecasted too high costs when calculating the provision. Although some forecasted values of profit centers were lower than the actual values, the method works better for planning purposes. One of the reasons is that quality improvement, or alternatively quality deterioration, does not show in the results of the analysis in the short run. The other reason for the too high values is that the products of the Case Company are complex and the analyses were made at the profit-center level. The Weibull method was developed for standard products, but the products of the Case Company consist of many complex components. According to the theory, this method was developed for homogeneous data. The most important finding is therefore that the analysis should be made at the product level, not the profit-center level, where the data is more homogeneous.
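As a minimal sketch of the kind of calculation a Weibull-based warranty forecast involves (not the Weibull++ workflow itself), the example below fits a two-parameter Weibull distribution to hypothetical times-to-failure and projects the expected number of warranty claims for a fleet of delivered units; the failure data, fleet size, and warranty length are assumptions.

```python
# Hedged sketch of Weibull-based warranty forecasting: fit a 2-parameter
# Weibull to failure ages, then estimate expected claims within the warranty
# period for a delivered population. All data are assumed; censoring of
# surviving units is ignored for brevity.
import numpy as np
from scipy.stats import weibull_min

ages_at_failure_months = np.array([3.0, 7.5, 9.0, 14.0, 16.5, 21.0, 22.0, 30.0])

# Fit shape (beta) and scale (eta); location fixed at 0
beta, loc, eta = weibull_min.fit(ages_at_failure_months, floc=0)

warranty_months = 24.0
delivered_units = 1200                 # assumed fleet size
prob_fail = weibull_min.cdf(warranty_months, beta, loc=0, scale=eta)

expected_claims = delivered_units * prob_fail
print(f"beta={beta:.2f}, eta={eta:.1f} months, "
      f"expected claims in {warranty_months:.0f} months: {expected_claims:.0f}")
```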

Relevance:

30.00%

Publisher:

Abstract:

The purpose of this study is to examine the impact of the choice of cut-off points, sampling procedures, and the business cycle on the accuracy of bankruptcy prediction models. Misclassification can result in erroneous predictions leading to prohibitive costs to firms, investors and the economy. To test the impact of the choice of cut-off points and sampling procedures, three bankruptcy prediction models are assessed: Bayesian, Hazard and Mixed Logit. A salient feature of the study is that the analysis includes both parametric and nonparametric bankruptcy prediction models. A sample of firms from the Lynn M. LoPucki Bankruptcy Research Database in the U.S. was used to evaluate the relative performance of the three models. The choice of a cut-off point and sampling procedures were found to affect the rankings of the various models. In general, the results indicate that the empirical cut-off point estimated from the training sample resulted in the lowest misclassification costs for all three models. Although the Hazard and Mixed Logit models resulted in lower costs of misclassification in the randomly selected samples, the Mixed Logit model did not perform as well across varying business cycles. In general, the Hazard model has the highest predictive power. However, the higher predictive power of the Bayesian model, when the ratio of the cost of Type I errors to the cost of Type II errors is high, is relatively consistent across all sampling methods. Such an advantage of the Bayesian model may make it more attractive in the current economic environment. This study extends recent research comparing the performance of bankruptcy prediction models by identifying the conditions under which a model performs better. It also allays the concerns of a range of user groups, including auditors, shareholders, employees, suppliers, rating agencies, and creditors, with respect to assessing failure risk.
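As a hedged sketch of the cut-off question the study raises, the example below chooses the probability threshold that minimizes expected misclassification cost on a training sample when Type I errors (missing a bankruptcy) are costlier than Type II errors; the scores, labels, and 10:1 cost ratio are assumptions, not figures from the study.

```python
# Hedged sketch: pick an empirical cut-off that minimizes misclassification
# cost when Type I errors (classifying a failing firm as healthy) cost more
# than Type II errors. Scores, labels, and the cost ratio are assumptions.
import numpy as np

p_bankrupt = np.array([0.92, 0.71, 0.55, 0.40, 0.33, 0.20, 0.12, 0.08, 0.05, 0.02])
is_bankrupt = np.array([True, True, True, False, True,
                        False, False, False, False, False])

cost_type1 = 10.0   # failing firm predicted as healthy
cost_type2 = 1.0    # healthy firm predicted as failing

def expected_cost(cutoff):
    pred_fail = p_bankrupt >= cutoff
    type1 = np.sum(is_bankrupt & ~pred_fail)   # missed bankruptcies
    type2 = np.sum(~is_bankrupt & pred_fail)   # false alarms
    return cost_type1 * type1 + cost_type2 * type2

cutoffs = np.linspace(0.01, 0.99, 99)
best = min(cutoffs, key=expected_cost)
print(f"Empirical cut-off: {best:.2f}, training cost: {expected_cost(best):.0f}")
```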

Relevance:

30.00%

Publisher:

Abstract:

Modern societies depend more and more on computer systems, and there is thus increasing pressure on development teams to produce high-quality software. Many companies use quality models, suites of programs that analyze and evaluate the quality of other programs, but building quality models is difficult because several questions remain unanswered in the literature. We studied quality-modelling practices at a large company and identified three dimensions where additional research is desirable: support for the subjectivity of quality, techniques for tracking quality as software evolves, and the composition of quality across different levels of abstraction. Regarding subjectivity, we proposed the use of Bayesian models because they can handle ambiguous data. We applied our models to the problem of design-defect detection. In a study of two open-source systems, we found that our approach outperforms the rule-based techniques described in the state of the art. To support software evolution, we treated the scores produced by a quality model as signals that can be analyzed using data-mining techniques to identify patterns in the evolution of quality. We studied how design defects appear in, and disappear from, software systems. Software is typically designed as a hierarchy of components, but quality models do not take this organization into account. In the last part of the dissertation, we present a two-level quality model. Such models have three parts: a model at the component level, a model that evaluates the importance of each component, and another that evaluates the quality of a composite by combining the quality of its components. The approach was tested on predicting change-prone classes from the quality of their methods. We found that our two-level models allow a better identification of change-prone classes. Finally, we applied our two-level models to evaluating the navigability of web sites from the quality of their pages. Our models were able to distinguish very high-quality sites from randomly chosen ones. Throughout the dissertation, we present not only theoretical problems and their solutions, but also experiments conducted to demonstrate the advantages and limitations of our solutions. Our results indicate that the state of the art can be improved along the three dimensions presented. In particular, our work on quality composition and importance modelling is the first to target this problem. We believe our two-level models are an interesting starting point for further research.
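As a minimal, hypothetical sketch of the two-level idea (the abstract does not give the thesis's actual formulas), the example below combines method-level quality scores into a class-level score using per-method importance weights; the scores, weights, and the weighted-average rule are all assumptions.

```python
# Hedged sketch of two-level quality composition: combine component-level
# (method) quality scores into a composite (class) score weighted by an
# importance model. Not the thesis's actual models.
from dataclasses import dataclass

@dataclass
class MethodQuality:
    name: str
    quality: float      # component-level model output, in [0, 1]
    importance: float   # importance-model output (e.g. call frequency, size)

def class_quality(methods: list[MethodQuality]) -> float:
    """Composite quality as an importance-weighted average of method scores."""
    total_weight = sum(m.importance for m in methods)
    if total_weight == 0:
        return 0.0
    return sum(m.quality * m.importance for m in methods) / total_weight

methods = [
    MethodQuality("parse", quality=0.4, importance=5.0),   # heavily used, low quality
    MethodQuality("render", quality=0.9, importance=2.0),
    MethodQuality("toString", quality=0.8, importance=0.5),
]
print(f"class-level quality: {class_quality(methods):.2f}")
```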

Relevance:

30.00%

Publisher:

Abstract:

Learning Disability (LD) is a classification including several disorders in which a child has difficulty in learning in a typical manner, usually caused by an unknown factor or factors. LD affects about 15% of children enrolled in schools. The prediction of learning disability is a complicated task since identifying LD from diverse features or signs is itself a difficult problem. There is no cure for learning disabilities and they are life-long. The problems of children with specific learning disabilities have been a cause of concern to parents and teachers for some time. The aim of this paper is to develop a new algorithm for imputing missing values and to determine the significance of the missing value imputation method and the dimensionality reduction method in the performance of fuzzy and neuro-fuzzy classifiers, with specific emphasis on the prediction of learning disabilities in school-age children. In the basic assessment method for prediction of LD, checklists are generally used, and the data cases thus collected depend heavily on the mood of the children and may also contain redundant as well as missing values. Therefore, in this study, we propose a new correlation-based algorithm for imputing the missing values, and Principal Component Analysis (PCA) for reducing the irrelevant attributes. After the study, it is found that the preprocessing methods applied by us improve the quality of the data and thereby increase the accuracy of the classifiers. The system is implemented in MathWorks MATLAB 7.10. The results obtained from this study have illustrated that the developed missing value imputation method is a very good contribution to the prediction system and is capable of improving the performance of a classifier.
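The paper's own imputation algorithm is not spelled out in the abstract; as a hedged sketch of a correlation-based approach, the example below fills a missing value using a linear fit against the attribute most correlated with the incomplete one, followed by PCA for attribute reduction. The data and column choices are assumptions.

```python
# Hedged sketch: correlation-based imputation of a missing value followed by
# PCA. Not the paper's exact algorithm; all data are assumed.
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical checklist scores (rows = children, columns = attributes); NaN = missing
X = np.array([
    [3.0, 4.0, 2.0],
    [5.0, 6.0, 4.0],
    [2.0, 3.0, 1.0],
    [4.0, np.nan, 3.0],
    [6.0, 7.0, 5.0],
])

col = 1                                        # column containing the missing value
missing = np.isnan(X[:, col])
complete = ~missing

# Pick the predictor column most correlated with the incomplete column
others = [c for c in range(X.shape[1]) if c != col]
corrs = [abs(np.corrcoef(X[complete, c], X[complete, col])[0, 1]) for c in others]
best = others[int(np.argmax(corrs))]

# Simple linear fit on complete rows, then predict the missing entry
slope, intercept = np.polyfit(X[complete, best], X[complete, col], 1)
X[missing, col] = slope * X[missing, best] + intercept

# Dimensionality reduction on the completed data
X_reduced = PCA(n_components=2).fit_transform(X)
print(X_reduced.shape)   # (5, 2)
```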

Relevance:

30.00%

Publisher:

Abstract:

Learning Disability (LD) is a neurological condition that affects a child's brain and impairs the ability to carry out one or many specific tasks. LD affects about 15% of children enrolled in schools. The prediction of LD is a vital and intricate task. The aim of this paper is to design an effective and powerful tool, using two intelligent methods, viz. Artificial Neural Networks and the Adaptive Neuro-Fuzzy Inference System, for measuring the percentage of LD affecting school-age children. In this study, we propose some soft computing methods for data preprocessing to improve the accuracy of the tool as well as the classifier. The data preprocessing is performed through Principal Component Analysis for attribute reduction, and a closest-fit algorithm is used for imputing missing values. The main idea in developing the LD prediction tool is not only to predict the LD present in children but also to measure its percentage along with its class (low, minor or major). The system is implemented in MathWorks MATLAB 7.10. The results obtained from this study have illustrated that the designed prediction tool is capable of measuring the LD effectively.
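As a hedged, generic sketch of such a tool's prediction stage (the paper's own ANN/ANFIS configuration is not given in the abstract), the example below chains PCA with a small neural-network regressor to output an LD percentage and bins it into low/minor/major classes; the data, network size, and thresholds are assumptions.

```python
# Hedged sketch of the prediction stage: PCA for attribute reduction feeding a
# small neural-network regressor that outputs an LD percentage, then binned
# into low/minor/major. Data, network size, and bin thresholds are assumptions.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.random((60, 10))                       # hypothetical checklist features
y = 100 * X[:, :3].mean(axis=1)                # hypothetical LD percentage target

model = make_pipeline(PCA(n_components=4),
                      MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                                   random_state=0))
model.fit(X, y)

def ld_class(pct: float) -> str:
    # Assumed thresholds for low / minor / major
    return "low" if pct < 30 else "minor" if pct < 60 else "major"

pct = float(model.predict(X[:1])[0])
print(f"predicted LD: {pct:.1f}% ({ld_class(pct)})")
```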

Relevance:

30.00%

Publisher:

Abstract:

Memory errors are a common cause of incorrect software execution and security vulnerabilities. We have developed two new techniques that help software continue to execute successfully through memory errors: failure-oblivious computing and boundless memory blocks. The foundation of both techniques is a compiler that generates code that checks accesses via pointers to detect out-of-bounds accesses. Instead of terminating or throwing an exception, the generated code takes another action that keeps the program executing without memory corruption. Failure-oblivious code simply discards invalid writes and manufactures values to return for invalid reads, enabling the program to continue its normal execution path. Code that implements boundless memory blocks stores invalid writes away in a hash table to return as the values for corresponding out-of-bounds reads. The net effect is to (conceptually) give each allocated memory block unbounded size and to eliminate out-of-bounds accesses as a programming error. We have implemented both techniques and acquired several widely used open source servers (Apache, Sendmail, Pine, Mutt, and Midnight Commander). With standard compilers, all of these servers are vulnerable to buffer overflow attacks as documented at security tracking web sites. Both failure-oblivious computing and boundless memory blocks eliminate these security vulnerabilities (as well as other memory errors). Our results show that our compiler enables the servers to execute successfully through buffer overflow attacks and to continue to correctly service user requests without security vulnerabilities.
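As a conceptual sketch of the boundless-memory-block semantics described above (the actual technique is implemented as compiler-generated checks for C programs, not as a library class), the hypothetical example below keeps a fixed-size buffer, redirects out-of-bounds writes into a hash table, and manufactures a default value for out-of-bounds reads that were never written.

```python
# Conceptual sketch of boundless-memory-block semantics; the real technique is
# compiler-generated bounds checking for C code, not a Python class.
class BoundlessBlock:
    def __init__(self, size: int, manufactured: int = 0):
        self._data = [0] * size          # the allocated block
        self._overflow = {}              # hash table for out-of-bounds writes
        self._manufactured = manufactured

    def write(self, index: int, value: int) -> None:
        if 0 <= index < len(self._data):
            self._data[index] = value
        else:
            # Out-of-bounds write: stored away instead of corrupting memory
            self._overflow[index] = value

    def read(self, index: int) -> int:
        if 0 <= index < len(self._data):
            return self._data[index]
        # Out-of-bounds read: return the stored value, or manufacture one
        return self._overflow.get(index, self._manufactured)

buf = BoundlessBlock(size=8)
buf.write(100, 42)                    # would be a buffer overflow in C; harmless here
print(buf.read(100), buf.read(200))   # 42, 0 (manufactured value)
```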