Abstract:
Detailed knowledge of genetic diversity among germplasm is important for hybrid maize (Zea mays L.) breeding. The objective of this study was to determine genetic diversity among widely grown hybrids in Southern Africa and to compare the effectiveness of phenotypic analysis models for determining genetic distances between hybrids. Fifty hybrids were evaluated at one site in a randomized complete block design with two replicates. Phenotypic and genotypic data were analyzed using SAS and PowerMarker, respectively. There was significant (p < 0.01) variation and diversity among hybrid brands but little diversity within brand clusters. Polymorphism information content (PIC) ranged from 0.07 to 0.38 with an average of 0.34, and genetic distance ranged from 0.08 to 0.50 with an average of 0.43. SAH23 and SAH21 (0.48) and SAH33 and SAH3 (0.47) were the most distantly related hybrids. Both single nucleotide polymorphism (SNP) markers and phenotypic data models were effective for discriminating genotypes according to genetic distance. SNP markers revealed nine clusters of hybrids. The 12-trait phenotypic analysis model revealed eight clusters at 85%, while the five-trait model revealed six clusters. Path analysis revealed significant direct and indirect effects of secondary traits on yield. Plant height and ear height were negatively correlated with grain yield, meaning that shorter hybrids gave higher yields. Ear weight, days to anthesis, and number of ears had the highest positive direct effects on yield. These traits can provide a good selection index for high-yielding maize hybrids. The results confirmed that hybrid diversity within brands is small and that phenotypic trait models are effective for discriminating hybrids.
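For readers unfamiliar with the marker statistic reported above, here is a minimal sketch (Python, with hypothetical allele frequencies rather than the study's data) of how the polymorphism information content (PIC) of a marker is typically computed; the biallelic ceiling of 0.375 is consistent with the reported maximum of 0.38.

```python
# Illustrative sketch (hypothetical allele frequencies, not the study's data):
# PIC = 1 - sum(p_i^2) - sum_{i<j} 2 * p_i^2 * p_j^2
def pic(allele_freqs):
    s1 = sum(p ** 2 for p in allele_freqs)
    s2 = sum(2 * allele_freqs[i] ** 2 * allele_freqs[j] ** 2
             for i in range(len(allele_freqs))
             for j in range(i + 1, len(allele_freqs)))
    return 1.0 - s1 - s2

print(round(pic([0.5, 0.5]), 3))   # 0.375, the biallelic maximum (cf. the reported ceiling of 0.38)
print(round(pic([0.9, 0.1]), 3))   # 0.164, a much less informative marker
```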
Abstract:
In this thesis, we present a quantitative approach using probabilistic verification techniques for the analysis of reliability, availability, maintainability, and safety (RAMS) properties of satellite systems. The subject of our research is satellites used in mission-critical industrial applications. Our verification results make a strong case for using probabilistic model checking to support RAMS analysis of satellite systems. This study is intended to build a foundation to help reliability engineers with a basic background in model checking to apply probabilistic model checking to small satellite systems. We make two major contributions. The first is the application of RAMS analysis to satellite systems. In the past, RAMS analysis has been extensively applied in the field of electrical and electronics engineering. It allows system designers and reliability engineers to predict the likelihood of failures from historical or current operational data. There is high potential for the application of RAMS analysis in the field of space science and engineering; however, there is a lack of standardisation and suitable procedures for the correct study of RAMS characteristics of satellite systems. This thesis considers the promising application of RAMS analysis to satellite design, use, and maintenance, focusing on the system segments. Data collection and verification procedures are discussed, and a number of considerations are presented on how to predict the probability of failure. Our second contribution is leveraging the power of probabilistic model checking to analyse satellite systems. We present techniques for analysing satellite systems that differ from the more common quantitative approaches based on traditional simulation and testing; these techniques have not been applied in this context before. We present the use of probabilistic techniques via a suite of detailed examples, together with their analysis. The presentation is incremental, in terms of the complexity of the application domains and system models, and includes a detailed PRISM model of each scenario. We also provide results from practical work together with a discussion of future improvements.
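The thesis itself works with PRISM models; as a language-neutral illustration only (not the thesis's models), the sketch below computes the steady-state availability of a single repairable component from the generator matrix of a two-state continuous-time Markov chain, the kind of quantity a RAMS analysis reports. The failure and repair rates are hypothetical.

```python
# Illustrative sketch (not the thesis's PRISM models): steady-state availability of a
# two-state repairable component from the generator matrix of a continuous-time
# Markov chain with failure rate lam and repair rate mu.
import numpy as np

lam, mu = 1e-4, 1e-2          # hypothetical failure and repair rates (per hour)
Q = np.array([[-lam,  lam],   # state 0: operational
              [  mu,  -mu]])  # state 1: failed

# The stationary distribution pi solves pi @ Q = 0 subject to sum(pi) = 1.
A = np.vstack([Q.T, np.ones(2)])
b = np.array([0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print(f"availability = {pi[0]:.6f}")   # analytically mu / (lam + mu) ~= 0.990099
```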
Abstract:
Many companies are trying to improve their operational efficiency by implementing an enterprise resource planning (ERP) system that makes it possible to control the resources of the company in real time. However, the success of an implementation project is not a foregone conclusion; a significant proportion of these projects end in failure, one way or another. Therefore, it is important to investigate ERP system implementation more closely in order to increase understanding of the factors influencing ERP system success and to improve the probability of a successful ERP implementation project. Consequently, this study was initiated because a manufacturing case company wanted to review the success of its ERP implementation project. To be exact, the case company hoped to gain both information about the success of the project and insight for future implementation improvement. This study investigated ERP success specifically by examining factors that influence ERP key-user satisfaction. User satisfaction is one of the most commonly applied indicators of information system success. The research data were mainly collected through theme interviews. The subjects of the interviews were six key-users of the newly implemented ERP system. The interviewees were closely involved in the implementation project; furthermore, they act as representative users who utilize the new system in everyday business processes. The collected data were analyzed by thematizing. Both data collection and analysis were guided by a theoretical frame of reference based on previous research on the subject. The results of the study aligned with the theoretical framework to a large extent. The four principal factors influencing key-user satisfaction were change management, contractor service, the key-users' system knowledge, and the characteristics of the ERP product itself. One of the most significant contributions of the research is that it confirmed the existence of a connection between change management and ERP key-user satisfaction. Furthermore, it discovered two new sub-factors influencing contractor-service-related key-user satisfaction. In addition, the research findings indicated that in order to improve the current level of key-user satisfaction, the case company should pay special attention to system functionality improvement and enhancement of the key-users' knowledge. During similar implementation projects in the future, it would be important to assure the success of change management and contractor-service-related processes.
Abstract:
The goal of this thesis is to explore the seismic potential of pulsating white dwarf stars, in particular those with hydrogen-rich atmospheres, the ZZ Ceti stars. Asteroseismology exploits the information contained in the normal modes of vibration that can be excited during particular phases of a star's evolution. These modes modulate the emergent flux of the pulsating star and manifest themselves mainly as multi-periodic luminosity variations. Asteroseismology therefore consists of examining the luminosity of pulsating stars as a function of time in order to extract the periods, apparent amplitudes, and relative phases of the detected pulsation modes, using standard signal-processing methods such as Fourier techniques. The next step is to compare the observed pulsation periods with periods generated by a stellar model, searching for the optimal agreement with a physical model that reproduces the pulsating star as faithfully as possible. To ensure an optimal search of the parameter space, good physical models, an efficient period-matching optimization algorithm, and considerable computing power are required. The pulsation periods of white dwarf stellar models can generally be computed accurately and reliably on the basis of the linear theory of stellar pulsations in its adiabatic version. To fully define a static white dwarf model suitable for asteroseismological analysis, it is necessary to specify the surface gravity, the effective temperature, and several parameters describing the layered structure of the envelope. By using in parallel the information obtained independently (effective temperature and surface gravity) from the spectroscopic method, it becomes possible to verify the validity of the solution obtained and to restrict the parameter space remarkably. A successful asteroseismological exercise therefore leads to a precise determination of the global structural parameters of the pulsating star and provides unique information on its internal structure and evolutionary state. This thesis presents the complete, successful analysis, from frequency extraction to the seismic solution, of four pulsating white dwarf stars. It was possible to determine the structural parameters of these stars and to compare them remarkably well with all the independent constraints available in the literature, as well as to make inferences about their internal dynamics and to reconstruct the internal rotation profile. First, the pair of ZZ Ceti stars GD 165 and Ross 548 is analysed in order to understand the differences between their pulsation properties, despite the fact that the two stars are spectroscopically similar in every respect. The seismic analysis reveals different internal structures and uncovers the sensitivity of certain pulsation modes to the internal composition of the stellar core. To address this newly discovered sensitivity, and to keep pace with the exceptional-quality data provided by the Kepler and K2 space missions, a new parametrization of the chemical profiles in the core is developed, and the robustness of our technique and models is validated through numerous tests.
With the new core parametrization in hand, we finally reach the "Holy Grail" of asteroseismology: being able, for the first time, to reproduce the observed periods to the precision of the observations, in the seismic studies of the stars KIC 08626021 and GD 1212.
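As an illustration of the period-matching step described above, the following sketch (hypothetical periods, not the published seismic solutions) scores candidate model period spectra against observed periods with a simple sum-of-squared-differences merit function and keeps the best-matching model.

```python
# Illustrative sketch of asteroseismic period matching (hypothetical values).
import numpy as np

def period_merit(observed, model):
    """Match each observed period to the closest model period and sum the
    squared differences (a common asteroseismic merit function)."""
    observed = np.asarray(observed)
    model = np.asarray(model)
    diffs = np.abs(observed[:, None] - model[None, :]).min(axis=1)
    return float(np.sum(diffs ** 2))

observed = [213.0, 274.5, 318.1]            # observed periods (s), hypothetical
candidate_models = {
    "model_A": [212.4, 275.0, 320.2, 401.7],
    "model_B": [213.1, 274.3, 318.0, 398.9],
}
best = min(candidate_models, key=lambda k: period_merit(observed, candidate_models[k]))
print(best)   # the model grid whose periods best match the observations
```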
Abstract:
Mechanistic models used for prediction should be parsimonious, as models which are over-parameterised may have poor predictive performance. Determining whether a model is parsimonious requires comparisons with alternative model formulations with differing levels of complexity. However, creating alternative formulations for large mechanistic models is often problematic and usually time-consuming; consequently, few are ever investigated. In this paper, we present an approach which rapidly generates reduced model formulations by replacing a model's variables with constants. These reduced alternatives can be compared to the original model, using data-based model selection criteria, to assist in the identification of potentially unnecessary model complexity and thereby inform reformulation of the model. To illustrate the approach, we present its application to a published radiocaesium plant-uptake model, which predicts uptake on the basis of soil characteristics (e.g. pH, organic matter content, clay content). A total of 1024 reduced model formulations were generated and ranked according to five model selection criteria: the residual sum of squares (RSS), AICc, BIC, MDL and ICOMP. The lowest scores for RSS and AICc occurred for the same reduced model, in which the pH-dependent model components were replaced. The lowest scores for BIC, MDL and ICOMP occurred for a further reduced model, in which the model components related to the distinction between adsorption on clay and organic surfaces were replaced. Both of these reduced models had a lower RSS for the parameterisation dataset than the original model. As a test of their predictive performance, the original model and the two reduced models outlined above were used to predict an independent dataset. The reduced models had lower prediction sums of squares than the original model, suggesting that the latter may be overfitted. The approach presented has the potential to inform model development by rapidly creating a class of alternative model formulations which can be compared.
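A minimal sketch of this reduce-and-rank idea, under the assumption of a toy linear surrogate rather than the published radiocaesium model: every subset of driving variables is frozen at a constant, the reduced formulation is refitted, and candidates are ranked by information criteria such as AICc and BIC.

```python
# Illustrative sketch only: generate reduced formulations by freezing variables
# at constants and rank them by data-based selection criteria.
import itertools
import numpy as np

def fit_and_score(X, y, frozen):
    """Replace the columns listed in `frozen` by their means, refit by least
    squares, and return RSS, AICc and BIC (up to additive constants)."""
    Xr = X.copy()
    for j in frozen:
        Xr[:, j] = Xr[:, j].mean()                  # variable -> constant
    A = np.column_stack([np.ones(len(y)), Xr])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    rss = float(np.sum((y - A @ beta) ** 2))
    n, k = len(y), A.shape[1] - len(frozen)         # effective number of parameters
    aicc = n * np.log(rss / n) + 2 * k + 2 * k * (k + 1) / max(n - k - 1, 1)
    bic = n * np.log(rss / n) + k * np.log(n)
    return rss, aicc, bic

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 5))                        # 5 hypothetical driving variables
y = 2.0 * X[:, 0] - 1.5 * X[:, 2] + rng.normal(scale=0.3, size=60)

results = []
for r in range(X.shape[1] + 1):
    for frozen in itertools.combinations(range(X.shape[1]), r):
        results.append((frozen, *fit_and_score(X, y, frozen)))

# Rank the 2^5 = 32 candidate formulations by AICc (lower is better).
for frozen, rss, aicc, bic in sorted(results, key=lambda t: t[2])[:5]:
    print(f"frozen={frozen} RSS={rss:.2f} AICc={aicc:.1f} BIC={bic:.1f}")
```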
Abstract:
The model presented allows simulation of the pesticide concentration in fruit trees and estimation of the pesticide bioconcentration factor in fruits of woody species. The model allows estimation of pesticide uptake by plants through the water transpiration stream, and also of the time at which the maximum pesticide concentration occurs in the fruits. The proposed equation presents the relationships between the bioconcentration factor (BCF) and the following variables: plant water transpiration volume (Q), pesticide transpiration stream concentration factor (TSCF), pesticide stem-water partition coefficient (KWood,w), stem dry biomass (M) and pesticide dissipation rate in the soil-plant system (kEGS). The modeling was developed from a previous model, the Fruit Tree Model (FTM) reported by Trapp and collaborators in 2003, to which was added the hypothesis that pesticide degradation in the soil follows first-order kinetics. Model fitness was evaluated through a sensitivity analysis of the pesticide BCF values in fruits with respect to variability in the model input data.
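The abstract does not reproduce the full BCF equation, but the added hypothesis can be stated explicitly. A minimal sketch in my own notation: first-order dissipation of the pesticide in the soil-plant system with rate kEGS, and the BCF understood generically as the ratio of the concentration in fruit to that in the surrounding medium (the work itself derives a specific expression in terms of Q, TSCF, KWood,w, M and kEGS).

```latex
\frac{dC_{\mathrm{soil}}}{dt} = -k_{\mathrm{EGS}}\,C_{\mathrm{soil}}
\quad\Longrightarrow\quad
C_{\mathrm{soil}}(t) = C_{\mathrm{soil}}(0)\,e^{-k_{\mathrm{EGS}}\,t},
\qquad
\mathrm{BCF} = \frac{C_{\mathrm{fruit}}}{C_{\mathrm{soil}}}
```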
Abstract:
Costs related to the quality of fiscal practice in Portuguese firms. Comparative case study. Abstract: Every organization should be concerned about analyzing its quality costs, since that analysis, besides allowing identification of aspects to improve, is a fundamental tool for the management bodies of those organizations. This analysis of quality costs should also cover firms' activities related to their fiscal practice. However, no reference is found in the literature to the relationship between these two topics: quality costs and business taxation. This research therefore analyzes the relationship between the principles of quality costs and business taxation in Portugal. The case study methodology, more specifically the comparative case study methodology, was chosen because it was understood, and demonstrated, to be the methodology best suited to the complexity of the subject analyzed. Besides relating quality costs to business taxation, this study presents and applies a methodology for implementing the Prevention – Appraisal – Failure (PAF) model in companies' fiscal practice, with the objective of decreasing the quality costs of that practice and reaching the economic level of quality, together with an efficiency index that allows the level of efficiency achieved, and the way to improve it, to be determined at any time. Overall, the study showed that Portuguese companies in general do not apply the principles of quality costs to their tax department or fiscal practice, whether that activity is performed internally in the firm or externally.
Abstract:
For climate risk management, cumulative distribution functions (CDFs) are an important source of information. They are ideally suited to compare probabilistic forecasts of primary (e.g. rainfall) or secondary data (e.g. crop yields). Summarised as CDFs, such forecasts allow an easy quantitative assessment of possible alternative actions. Although the degree of uncertainty associated with CDF estimation could influence decisions, such information is rarely provided. Hence, we propose Cox-type regression models (CRMs) as a statistical framework for making inferences on CDFs in climate science. CRMs were designed for modelling probability distributions rather than just mean or median values. This makes the approach appealing for risk assessments, where probabilities of extremes are often more informative than central tendency measures. CRMs are semi-parametric approaches originally designed for modelling risks arising from time-to-event data. Here we extend this original concept beyond time-dependent measures to other variables of interest. We also provide tools for estimating CDFs and surrounding uncertainty envelopes from empirical data. These statistical techniques intrinsically account for non-stationarities in time series that might be the result of climate change. This feature makes CRMs attractive candidates for investigating the feasibility of developing rigorous global circulation model (GCM)-CRM interfaces for the provision of user-relevant forecasts. To demonstrate the applicability of CRMs, we present two examples of El Niño/Southern Oscillation (ENSO)-based forecasts: the onset date of the wet season (Cairns, Australia) and total wet season rainfall (Quixeramobim, Brazil). This study emphasises the methodological aspects of CRMs rather than discussing the merits or limitations of the ENSO-based predictors.
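As a sketch of how a Cox-type model yields a full CDF rather than a point forecast, the example below uses the lifelines package (an assumption; the paper does not name an implementation) with synthetic data for a hypothetical ENSO-based predictor of wet-season onset.

```python
# Illustrative sketch (not the authors' code): a conditional CDF from a Cox model.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 200
enso = rng.normal(size=n)                                  # hypothetical ENSO-based predictor
onset = rng.exponential(scale=np.exp(0.4 * enso) * 30)     # "time-to-event": wet-season onset (days)
df = pd.DataFrame({"onset_day": onset, "observed": 1, "enso": enso})

cph = CoxPHFitter()
cph.fit(df, duration_col="onset_day", event_col="observed")

# Survival function for a new ENSO state; the CDF is its complement.
new = pd.DataFrame({"enso": [1.0]})
surv = cph.predict_survival_function(new)                  # indexed by time
cdf = 1.0 - surv
print(cdf.tail())                                          # P(onset day <= t | ENSO = 1.0)
```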
Abstract:
Background: Rotavirus diarrhea is one of the most important causes of death among under-five children. Anti-rotavirus vaccination of these children may reduce the disease burden. Objectives: This study is intended to inform the country's health policy-makers about the optimal decision and policy development in this area by performing cost-effectiveness and cost-utility analyses of anti-rotavirus vaccination for under-5 children. Patients and Methods: A cost-effectiveness analysis was performed using a decision tree model to analyze rotavirus vaccination compared with no vaccination, from the perspective of Iran's Ministry of Health, over a 5-year time horizon. Epidemiological data were collected from published and unpublished sources. Four different assumptions were considered for the disease episode rate. To analyze costs, the costs of implementing the vaccination program were calculated assuming 98% coverage and a cost of USD 7 per dose. Medical and social costs of the disease were evaluated by sampling patients with rotavirus diarrhea, and sensitivity analyses were performed for different episode rates and vaccine prices per dose. Results: For the most optimistic assumption for the episode of illness (10.2 per year), the cost per DALY averted is USD 12,760 and USD 7,404 for the RotaTeq and Rotarix vaccines, respectively, while assuming the episode of illness is 300%, the values are USD 2,395 and USD 354, respectively, which is highly cost-effective. The number of life-years gained is 3,533. Conclusions: Assuming illness episodes of 100% and 300% for Rotarix and 300% for RotaTeq, the cost per DALY averted is highly cost-effective based on the World Health Organization threshold (< 1 × GDP per capita = USD 4,526). The implementation of a national rotavirus vaccination program is suggested.
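The cost-effectiveness arithmetic reduces to an incremental cost-effectiveness ratio; here is a minimal sketch with hypothetical numbers (not the study's inputs) of cost per DALY averted compared against the WHO threshold used in the paper.

```python
# Illustrative sketch with hypothetical numbers (not the study's data):
# cost per DALY averted for a vaccination programme versus no vaccination.

def cost_per_daly_averted(vaccination_cost, treatment_cost_saved, dalys_averted):
    """Incremental cost-effectiveness ratio: net incremental cost divided by DALYs averted."""
    net_cost = vaccination_cost - treatment_cost_saved
    return net_cost / dalys_averted

programme_cost = 3_000_000.0      # vaccine purchase + delivery (hypothetical, USD)
averted_treatment = 1_200_000.0   # medical and social costs avoided (hypothetical)
dalys_averted = 650.0             # from the decision-tree model (hypothetical)

icer = cost_per_daly_averted(programme_cost, averted_treatment, dalys_averted)
gdp_per_capita = 4526.0           # WHO threshold used in the study: < 1 x GDP per capita
verdict = "highly cost-effective" if icer < gdp_per_capita else "not below threshold"
print(f"Cost per DALY averted: {icer:.0f} USD ({verdict})")
```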
Abstract:
Discrepancies between classical model predictions and experimental data for deep bed filtration have been reported by various authors. In order to understand these discrepancies, an analytic continuum model for deep bed filtration is proposed. In this model, a filter coefficient is attributed to each distinct retention mechanism (straining, diffusion, gravity interception, etc.). It was shown that these coefficients generally cannot be merged into an effective filter coefficient, as is assumed in the classical model. Furthermore, the derived analytic solutions for the proposed model were applied to fit experimental data, and very good agreement between the experimental data and the proposed model predictions was obtained. Comparison of the obtained results with empirical correlations allowed identification of the dominant retention mechanisms. In addition, it was shown that the larger the ratio of particle to pore sizes, the more intensive the straining mechanism and the larger the discrepancies between experimental data and classical model predictions. The classical and proposed models were compared via statistical analysis; the obtained p-values allow the conclusion that the proposed model should be preferred, especially when straining plays an important role. This work also studies deep bed filtration with finite retention capacity, that is, the filtration of particles through porous media with a finite filtration capacity. In this case, it was observed that changes in the boundary conditions over time must be considered. A solution for such a model was obtained using different filtration coefficient functions, and it was shown how to build a solution for any filtration coefficient. It was found that, even for the same filtration coefficient, the classical model and the model proposed here give different predictions for the concentration of particles retained in the porous medium and for the suspended particle concentration at the outlet of the medium.
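For reference, the classical deep-bed-filtration formulation the abstract refers to can be written, in my own notation, as a first-order capture law for the suspended concentration over depth with a single effective filter coefficient; the classical treatment lumps the mechanisms into that one coefficient (often taken as the sum of the individual ones), which is precisely the merging the abstract argues is generally not valid.

```latex
\frac{\partial c}{\partial x} = -\lambda\, c,
\qquad
\frac{\partial \sigma}{\partial t} = \lambda\, U\, c,
\qquad
\lambda_{\mathrm{eff}} \;\overset{\text{classical}}{=}\; \sum_i \lambda_i
\quad (\lambda_i:\ \text{straining, diffusion, gravity interception, } \dots)
```

Here c is the suspended particle concentration, σ the retained (deposited) concentration, U the flow velocity and x the filter depth; the proposed model instead tracks a separate coefficient per retention mechanism.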
Abstract:
The goal of this project is to learn the necessary steps to create a finite element model that can accurately predict the dynamic response of a Kohler Engines Heavy Duty Air Cleaner (HDAC). This air cleaner is composed of three glass-reinforced plastic components and two air filters. Several uncertainties arose in the finite element (FE) model due to the HDAC's component material properties and assembly conditions. To help understand and mitigate these uncertainties, analytical and experimental modal models were created concurrently to perform a model correlation and calibration. Over the course of the project, simple and practical methods were found for future FE model creation. Similarly, an experimental method for the optimal acquisition of experimental modal data was arrived at. After the model correlation and calibration were performed, a validation experiment was used to confirm the FE model's predictive capabilities.
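The abstract does not name the correlation metric used; as one common choice for FE/test modal correlation, a sketch of the Modal Assurance Criterion (MAC) with synthetic mode shapes is shown below.

```python
# Illustrative sketch: the Modal Assurance Criterion (MAC), a common metric for
# correlating analytical (FE) and experimental mode shapes. The abstract does not
# name the specific metric used, so this is an example only.
import numpy as np

def mac(phi_a, phi_e):
    """MAC matrix between columns of two mode-shape matrices (n_dof x n_modes)."""
    num = np.abs(phi_a.conj().T @ phi_e) ** 2
    den = np.outer(np.sum(np.abs(phi_a) ** 2, axis=0),
                   np.sum(np.abs(phi_e) ** 2, axis=0))
    return num / den

rng = np.random.default_rng(2)
fe_modes = rng.normal(size=(12, 3))                      # hypothetical FE mode shapes
test_modes = fe_modes + 0.1 * rng.normal(size=(12, 3))   # hypothetical measured shapes
print(np.round(mac(fe_modes, test_modes), 2))            # values near 1 on the diagonal indicate good correlation
```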
Abstract:
Credible spatial information characterizing the structure and site quality of forests is critical to sustainable forest management and planning, especially given the increasing demands on, and threats to, forest products and services. Forest managers and planners are required to evaluate forest conditions over a broad range of scales, contingent on operational or reporting requirements. Traditionally, forest inventory estimates are generated via a design-based approach that involves generalizing sample plot measurements to characterize an unknown population across a larger area of interest. However, field plot measurements are costly and, as a consequence, spatial coverage is limited. Remote sensing technologies have shown remarkable success in augmenting limited sample plot data to generate stand- and landscape-level spatial predictions of forest inventory attributes. Further enhancement of forest inventory approaches that couple field measurements with cutting-edge remotely sensed and geospatial datasets is essential to sustainable forest management. We evaluated a novel Random Forest-based k-Nearest Neighbors (RF-kNN) imputation approach to couple remote sensing and geospatial data with field inventory collected by different sampling methods, in order to generate forest inventory information across large spatial extents. The forest inventory data collected by the FIA program of the US Forest Service were integrated with optical remote sensing and other geospatial datasets to produce biomass distribution maps for part of the Lake States and species-specific site index maps for the entire Lake States region. Targeting small-area application of state-of-the-art remote sensing, LiDAR (light detection and ranging) data were integrated with field data collected by an inexpensive method, called variable plot sampling, in the Ford Forest of Michigan Tech to derive a standing volume map in a cost-effective way. The outputs of the RF-kNN imputation were compared with independent validation datasets and extant map products based on different sampling and modeling strategies. The RF-kNN modeling approach was found to be very effective, especially for large-area estimation, and produced results statistically equivalent to the field observations or the estimates derived from secondary data sources. The models are useful to resource managers for operational and strategic purposes.
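A minimal sketch of the RF-kNN idea under stated assumptions (scikit-learn, synthetic data, terminal-node co-occurrence as the proximity measure): a random forest trained on reference plots defines a nearness between target map units and plots, and each unit is imputed from its most proximate plots.

```python
# Illustrative sketch (not the study's implementation) of Random Forest-based
# nearest-neighbour imputation: target units receive attributes of the reference
# plots that most often share terminal nodes with them.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(3)
X_ref = rng.normal(size=(100, 4))                         # spectral/geospatial predictors at field plots
y_ref = 50 + 10 * X_ref[:, 0] + rng.normal(size=100)      # e.g. biomass measured at plots (hypothetical)
X_target = rng.normal(size=(5, 4))                        # predictors at map units to be imputed

rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_ref, y_ref)

# Terminal-node co-occurrence as a proximity measure.
leaves_ref = rf.apply(X_ref)                              # (n_ref, n_trees) leaf indices
leaves_tgt = rf.apply(X_target)                           # (n_target, n_trees)
proximity = (leaves_tgt[:, None, :] == leaves_ref[None, :, :]).mean(axis=2)

k = 3
for i, prox in enumerate(proximity):
    nn = np.argsort(prox)[::-1][:k]                       # k most proximate reference plots
    imputed = y_ref[nn].mean()                            # simple mean of neighbours
    print(f"target unit {i}: imputed biomass = {imputed:.1f}")
```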
Abstract:
Intracochlear trauma from surgical insertion of bulky electrode arrays and inadequate pitch perception are areas of concern with current hand-assembled commercial cochlear implants. Parylene thin-film arrays with higher electrode densities and lower profiles are a potential solution, but they lack rigidity and hence depend on bulky backing devices based on manually fabricated, permanently attached polyethylene terephthalate (PET) tubing. As a solution, we investigated a new backing device with two sub-systems. The first sub-system is a thin poly(lactic acid) (PLA) stiffener that will be embedded in the parylene array. The second sub-system is an attaching and detaching mechanism, utilizing a poly(N-vinylpyrrolidone)-block-poly(d,l-lactide) (PVP-b-PDLLA) copolymer-based biodegradable and water-soluble adhesive, that will help retract the PET insertion tool after implantation. As a proof of concept of sub-system one, a microfabrication process for patterning PLA stiffeners embedded in parylene has been developed. Conventional hot embossing, mechanical micromachining, and standard cleanroom processes were integrated for patterning fully released and discrete stiffeners coated with parylene. The released embedded stiffeners were thermoformed to demonstrate that imparting perimodiolar shapes to stiffener-embedded arrays will be possible. The developed process, when integrated with the array fabrication process, will allow fabrication of stiffener-embedded arrays in a single process. As a proof of concept of sub-system two, the feasibility of the attaching and detaching mechanism was demonstrated by adhering 1x and 1.5x scale PET tube-based insertion tools to PLA stiffeners embedded in parylene using the copolymer adhesive. The attached devices survived qualitative adhesion tests, thermoforming, and flexing. The viability of the detaching mechanism was tested by aging the assemblies in vitro in phosphate buffer solution. The average detachment times, 2.6 minutes and 10 minutes for the 1x and 1.5x scale devices respectively, were found to be clinically relevant with respect to the reported array insertion times during surgical implantation. Eventually, the stiffener-embedded arrays would not need to be permanently attached to current insertion tools, which are left behind after implantation and congest the cochlear scala tympani chamber. Finally, a simulation-based approach for accelerated failure analysis of PLA stiffeners and characterization of the PVP-b-PDLLA copolymer adhesive has been explored. The residual functional life of embedded PLA stiffeners exposed to body fluid, and thereby subjected to degradation and erosion, has been estimated by simulating PLA stiffeners with different parylene coating failure types and different PLA types for a given parylene coating failure type. For characterizing the PVP-b-PDLLA copolymer adhesive, several formulations of the copolymer adhesive were simulated and compared based on the insertion tool detachment times predicted from the dissolution, degradation, and erosion behavior of the simulated adhesive formulations. Results indicate that the simulation-based approaches could be used to reduce the total number of time-consuming and expensive in-vitro tests that must be conducted.
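The degradation models used in the work are not specified in the abstract; as one way such a residual-life estimate is often framed, the sketch below assumes pseudo-first-order hydrolytic decay of the PLA molecular weight, with a parylene coating defect represented crudely as a larger exposed fraction. All parameters are hypothetical.

```python
# Illustrative sketch only: the abstract does not specify the degradation model, so this
# uses a common pseudo-first-order hydrolysis assumption for PLA, in which the
# number-average molecular weight decays exponentially and the stiffener is deemed
# functional until the molecular weight falls below a critical value.
import math

def residual_life(mn0, mn_critical, k_per_day, exposed_fraction=1.0):
    """Days until Mn drops below mn_critical; a larger exposed fraction (e.g. from a
    parylene coating defect) is modelled here as a proportionally faster rate."""
    k_eff = k_per_day * exposed_fraction
    return math.log(mn0 / mn_critical) / k_eff

# Hypothetical parameters: initial Mn, failure threshold, hydrolysis rate, defect size.
print(residual_life(mn0=100_000, mn_critical=30_000, k_per_day=0.002, exposed_fraction=0.1))
```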
Abstract:
To analyze the characteristics and predict the dynamic behaviors of complex systems over time, comprehensive research is needed to enable the development of systems that can intelligently adapt to evolving conditions and infer new knowledge with algorithms that are not predesigned. This dissertation research studies the integration of techniques and methodologies resulting from the fields of pattern recognition, intelligent agents, artificial immune systems, and distributed computing platforms, to create technologies that can more accurately describe and control the dynamics of real-world complex systems. The need for such technologies is emerging in manufacturing, transportation, hazard mitigation, weather and climate prediction, homeland security, and emergency response. Motivated by the ability of mobile agents to dynamically incorporate additional computational and control algorithms into executing applications, mobile agent technology is employed in this research for adaptive sensing and monitoring in a wireless sensor network. Mobile agents are software components that can travel from one computing platform to another in a network and carry the programs and data states needed to perform their assigned tasks. To support the generation, migration, communication, and management of mobile monitoring agents, an embeddable mobile agent system (Mobile-C) is integrated with sensor nodes. Mobile monitoring agents visit distributed sensor nodes, read real-time sensor data, and perform anomaly detection using the equipped pattern recognition algorithms. The optimal control of agents is achieved by mimicking the adaptive immune response and applying multi-objective optimization algorithms. The mobile agent approach has the potential to reduce the communication load and energy consumption in monitoring networks. The major research work of this dissertation project includes: (1) studying effective feature extraction methods for time series measurement data; (2) investigating the impact of the feature extraction methods and dissimilarity measures on the performance of pattern recognition; (3) researching the effects of environmental factors on the performance of pattern recognition; (4) integrating an embeddable mobile agent system with wireless sensor nodes; (5) optimizing agent generation and distribution using artificial immune system concepts and multi-objective algorithms; (6) applying mobile agent technology and pattern recognition algorithms to adaptive structural health monitoring and driving cycle pattern recognition; (7) developing a web-based monitoring network to enable the visualization and analysis of real-time sensor data remotely. Techniques and algorithms developed in this dissertation project will contribute to research advances in networked distributed systems operating under changing environments.
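Items (1) and (2) above concern feature extraction and dissimilarity measures; the sketch below is a minimal illustration (not the dissertation's algorithms) of simple time-domain features and a nearest-neighbour dissimilarity score for flagging anomalous sensor windows.

```python
# Illustrative sketch: statistical feature extraction from time-series sensor windows
# and a nearest-neighbour dissimilarity score for anomaly detection.
import numpy as np

def extract_features(window):
    """A few generic time-domain features of one measurement window."""
    return np.array([window.mean(), window.std(), window.min(), window.max(),
                     np.abs(np.diff(window)).mean()])

def anomaly_score(features, reference_features):
    """Euclidean dissimilarity to the closest 'normal' reference pattern."""
    d = np.linalg.norm(reference_features - features, axis=1)
    return d.min()

rng = np.random.default_rng(4)
normal_windows = [np.sin(np.linspace(0, 6, 200)) + 0.1 * rng.normal(size=200) for _ in range(20)]
reference = np.array([extract_features(w) for w in normal_windows])

test = np.sin(np.linspace(0, 6, 200)) + 0.8 * rng.normal(size=200)   # noisier, "damaged" signal
print(anomaly_score(extract_features(test), reference))              # larger score => more anomalous
```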