924 results for "failure time model"
Abstract:
Submicroscopic changes in chromosomal DNA copy number dosage are common and have been implicated in many heritable diseases and cancers. Recent high-throughput technologies have a resolution that permits the detection of segmental changes in DNA copy number that span thousands of basepairs across the genome. Genome-wide association studies (GWAS) may simultaneously screen for copy number-phenotype and SNP-phenotype associations as part of the analytic strategy. However, genome-wide array analyses are particularly susceptible to batch effects as the logistics of preparing DNA and processing thousands of arrays often involves multiple laboratories and technicians, or changes over calendar time to the reagents and laboratory equipment. Failure to adjust for batch effects can lead to incorrect inference and requires inefficient post-hoc quality control procedures that exclude regions that are associated with batch. Our work extends previous model-based approaches for copy number estimation by explicitly modeling batch effects and using shrinkage to improve locus-specific estimates of copy number uncertainty. Key features of this approach include the use of diallelic genotype calls from experimental data to estimate batch- and locus-specific parameters of background and signal without the requirement of training data. We illustrate these ideas using a study of bipolar disease and a study of chromosome 21 trisomy. The former has batch effects that dominate much of the observed variation in quantile-normalized intensities, while the latter illustrates the robustness of our approach to datasets where as many as 25% of the samples have altered copy number. Locus-specific estimates of copy number can be plotted on the copy-number scale to investigate mosaicism and guide the choice of appropriate downstream approaches for smoothing the copy number as a function of physical position. The software is open source and implemented in the R package CRLMM available at Bioconductor (http://www.bioconductor.org).
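The batch-adjustment idea described above, estimating background and signal per batch and per locus while borrowing strength across batches, can be illustrated with a minimal sketch. This is not the CRLMM implementation; the data, column names, and shrinkage weight below are hypothetical.

```python
# Minimal sketch (NOT the CRLMM implementation): estimate a per-batch mean
# intensity for one locus and shrink it toward the across-batch average,
# which is the general "borrowing strength" idea described in the abstract.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
# Hypothetical quantile-normalized intensities for a single locus.
df = pd.DataFrame({
    "batch": rng.choice(["plate1", "plate2", "plate3"], size=300),
    "intensity": rng.normal(loc=2.0, scale=0.3, size=300),
})

grand_mean = df["intensity"].mean()
batch_stats = df.groupby("batch")["intensity"].agg(["mean", "count"])

# Shrink each batch mean toward the grand mean; the prior weight n0 stands in
# for the hierarchical variance components a full empirical-Bayes model
# would estimate from the data.
n0 = 20.0
batch_stats["shrunken_mean"] = (
    (batch_stats["count"] * batch_stats["mean"] + n0 * grand_mean)
    / (batch_stats["count"] + n0)
)
print(batch_stats)
```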
Abstract:
Numerous time series studies have provided strong evidence of an association between increased levels of ambient air pollution and increased levels of hospital admissions, typically at 0, 1, or 2 days after an air pollution episode. An important research aim is to extend existing statistical models so that a more detailed understanding of the time course of hospitalization after exposure to air pollution can be obtained. Information about this time course, combined with prior knowledge about biological mechanisms, could provide the basis for hypotheses concerning the mechanism by which air pollution causes disease. Previous studies have identified two important methodological questions: (1) How can we estimate the shape of the distributed lag between increased air pollution exposure and increased mortality or morbidity? and (2) How should we estimate the cumulative population health risk from short-term exposure to air pollution? Distributed lag models are appropriate tools for estimating air pollution health effects that may be spread over several days. However, estimation for distributed lag models in air pollution and health applications is hampered by the substantial noise in the data and the inherently weak signal that is the target of investigation. We introduce a hierarchical Bayesian distributed lag model that incorporates prior information about the time course of pollution effects and combines information across multiple locations. The model has a connection to penalized spline smoothing using a special type of penalty matrix. We apply the model to estimating the distributed lag between exposure to particulate matter air pollution and hospitalization for cardiovascular and respiratory disease using data from a large United States air pollution and hospitalization database of Medicare enrollees in 94 counties covering the years 1999-2002.
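For orientation, a generic log-linear distributed lag specification is sketched below; the notation is illustrative and omits the paper's hierarchical priors on the lag coefficients and the pooling across counties.

```latex
% Generic log-linear distributed lag specification (notation illustrative):
% Y_t : daily admission count, x_t : pollution level, z_t : confounders.
\log \mathbb{E}[Y_t] = \alpha + \sum_{\ell=0}^{L} \theta_\ell \, x_{t-\ell} + \gamma^{\top} z_t,
\qquad \text{cumulative effect} = \sum_{\ell=0}^{L} \theta_\ell .
```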
Abstract:
Wind energy has been one of the fastest-growing sectors of the nation's renewable energy portfolio over the past decade, and the same trend is projected for the coming years given aggressive governmental policies for reducing fossil fuel dependency. The so-called Horizontal Axis Wind Turbine (HAWT) technologies have shown great technological promise and outstanding commercial penetration. Given this broad acceptance, the size of wind turbines has increased exponentially over time. However, safety and economic concerns have emerged as a result of new design tendencies for massive-scale wind turbine structures with high slenderness ratios and complex shapes, typically located in remote areas (e.g. offshore wind farms). In this regard, safe operation requires not only first-hand information on the actual structural dynamic conditions under aerodynamic action, but also a deep understanding of the environmental factors in which these multibody rotating structures operate. Given the cyclo-stochastic patterns of the wind loading exerting pressure on a HAWT, a probabilistic framework is appropriate to characterize the risk of failure in terms of resistance and serviceability conditions at any given time. Furthermore, sources of uncertainty such as material imperfections, buffeting and flutter, aeroelastic damping, gyroscopic effects, and turbulence, among others, have called for a more sophisticated mathematical framework that can properly handle all these sources of indetermination. The modeling complexity that arises from these characterizations demands a data-driven experimental validation methodology to calibrate and corroborate the model. To this end, System Identification (SI) techniques offer a spectrum of well-established numerical methods appropriate for stationary, deterministic, data-driven schemes, capable of predicting the actual dynamic states (eigenrealizations) of traditional time-invariant dynamic systems. Consequently, a modified data-driven SI metric is proposed, based on so-called Subspace Realization Theory and adapted here for stochastic, non-stationary, and time-varying systems, as is the case for the complex aerodynamics of HAWTs. Simultaneously, this investigation explores the characterization of the turbine loading and response envelopes for critical failure modes of the structural components the wind turbine is made of. In the long run, the aerodynamic framework (theoretical model) and the system identification framework (experimental model) will be merged in a numerical engine formulated as a search algorithm for model updating, also known as an Adaptive Simulated Annealing (ASA) process. This iterative engine is based on a set of function minimizations computed with a metric called the Modal Assurance Criterion (MAC). In summary, the Thesis is composed of four major parts: (1) development of an analytical aerodynamic framework that predicts interacting wind-structure stochastic loads on wind turbine components; (2) development of a novel tapered-swept-curved Spinning Finite Element (SFE) that includes damped gyroscopic effects and axial-flexural-torsional coupling; (3) a novel data-driven structural health monitoring (SHM) algorithm via stochastic subspace identification methods; and (4) a numerical search (optimization) engine based on ASA and MAC capable of updating the SFE aerodynamic model.
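As an illustration of the model-updating metric mentioned above, the following sketch computes the Modal Assurance Criterion between two sets of mode shapes. The mode shapes here are random placeholders rather than SFE or subspace-identification output.

```python
# Minimal sketch of the Modal Assurance Criterion (MAC). In the thesis the two
# mode-shape sets would come from the SFE model and from stochastic subspace
# identification; here they are placeholders.
import numpy as np

def mac(phi: np.ndarray, psi: np.ndarray) -> np.ndarray:
    """MAC matrix between mode-shape sets phi (n x m1) and psi (n x m2)."""
    num = np.abs(phi.conj().T @ psi) ** 2
    den = np.outer(
        np.sum(np.abs(phi) ** 2, axis=0),
        np.sum(np.abs(psi) ** 2, axis=0),
    )
    return num / den

rng = np.random.default_rng(1)
phi_model = rng.normal(size=(12, 3))                           # analytical shapes
phi_identified = phi_model + 0.05 * rng.normal(size=(12, 3))   # "measured" shapes
print(np.round(mac(phi_model, phi_identified), 3))             # near-identity if shapes match
```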
Evolutionary demography of long-lived monocarpic perennials: a time-lagged integral projection model
Abstract:
1. The evolution of flowering strategies (when and at what size to flower) in monocarpic perennials is determined by balancing current reproduction with expected future reproduction, and these are largely determined by size-specific patterns of growth and survival. However, because of the difficulty in following long-lived individuals throughout their lives, this theory has largely been tested using short-lived species (< 5 years). 2. Here, we tested this theory using the long-lived monocarpic perennial Campanula thyrsoides which can live up to 16 years. We used a novel approach that combined permanent plot and herb chronology data from a 3-year field study to parameterize and validate integral projection models (IPMs). 3. Similar to other monocarpic species, the rosette leaves of C. thyrsoides wither over winter and so size cannot be measured in the year of flowering. We therefore extended the existing IPM framework to incorporate an additional time delay that arises because flowering demography must be predicted from rosette size in the year before flowering. 4. We found that all main demographic functions (growth, survival probability, flowering probability and fecundity) were strongly size-dependent and there was a pronounced threshold size of flowering. There was good agreement between the predicted distribution of flowering ages obtained from the IPMs and that estimated in the field. Mostly, there was good agreement between the IPM predictions and the direct quantitative field measurements regarding the demographic parameters λ, R0 and T. We therefore conclude that the model captures the main demographic features of the field populations. 5. Elasticity analysis indicated that changes in the survival and growth function had the largest effect (c. 80%) on λ and this was considerably larger than in short-lived monocarps. We found only weak selection pressure operating on the observed flowering strategy, which was close to the predicted evolutionary stable strategy. 6. Synthesis. The extended IPM accurately described the demography of a long-lived monocarpic perennial using data collected over a relatively short period. We showed that the evolution of flowering strategies in short- and long-lived monocarps seems to follow the same general rules but with a longevity-related emphasis on survival over fecundity.
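For orientation, a minimal sketch of a standard (non-lagged) IPM iteration is given below. The vital-rate functions are hypothetical placeholders, not the fitted C. thyrsoides functions, and the paper's time-lagged extension (flowering predicted from size in the previous year) is not implemented here.

```python
# Minimal sketch of an integral projection model, n_{t+1}(y) = ∫ K(y, x) n_t(x) dx,
# discretized with the midpoint rule. All vital-rate functions are placeholders.
import numpy as np
from scipy.stats import norm

L_size, U_size, m = 0.0, 10.0, 100
edges = np.linspace(L_size, U_size, m + 1)
x = 0.5 * (edges[:-1] + edges[1:])            # size-mesh midpoints
h = edges[1] - edges[0]

surv = lambda z: 1 / (1 + np.exp(-(-2 + 0.5 * z)))                 # survival prob.
growth = lambda zp, z: norm.pdf(zp, loc=0.8 * z + 1.0, scale=0.8)  # size transition
flower = lambda z: 1 / (1 + np.exp(-(-8 + 1.2 * z)))               # flowering prob.
fec = lambda z: 20 * flower(z)                                     # recruits per plant
recruit_size = norm.pdf(x, loc=1.5, scale=0.5)                     # offspring sizes

Zp, Z = np.meshgrid(x, x, indexing="ij")
P = h * growth(Zp, Z) * surv(Z) * (1 - flower(Z))   # survival/growth kernel
F = h * np.outer(recruit_size, fec(x))              # fecundity kernel
K = P + F

lam = np.max(np.abs(np.linalg.eigvals(K)))          # population growth rate
print(f"lambda ≈ {lam:.3f}")
```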
Abstract:
In this paper, a simulation model of glucose-insulin metabolism for Type 1 diabetes patients is presented. The proposed system is based on the combination of Compartmental Models (CMs) and artificial Neural Networks (NNs). The model aims to provide an accurate system that assists Type 1 diabetes patients in managing their blood glucose profile and recognizing dangerous metabolic states. Data from a Type 1 diabetes patient, stored in a database, have been used as input to the hybrid system. The data contain information about measured blood glucose levels, insulin intake, and description of food intake, along with the corresponding time. The data are passed to three separate CMs, which produce estimations about (i) the effect of Short Acting (SA) insulin intake on blood insulin concentration, (ii) the effect of Intermediate Acting (IA) insulin intake on blood insulin concentration, and (iii) the effect of carbohydrate intake on blood glucose absorption from the gut. The outputs of the three CMs are passed to a Recurrent NN (RNN) in order to predict subsequent blood glucose levels. The RNN is trained with the Real Time Recurrent Learning (RTRL) algorithm. The resulting blood glucose predictions are promising and support the use of the proposed model for blood glucose level estimation in Type 1 diabetes patients.
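The CM-to-RNN data flow can be sketched as follows. This is a structural illustration only: the compartmental model is a toy first-order absorption, the network is untrained, and all doses and parameters are hypothetical (the actual system uses three physiologically motivated CMs and trains the RNN with RTRL).

```python
# Structural sketch only: a toy one-compartment absorption model feeding an
# untrained Elman-style RNN, mirroring the CM -> RNN flow described above.
import numpy as np

def one_compartment(doses, k_abs=0.05, dt=1.0):
    """Toy first-order absorption: x'(t) = -k_abs * x + dose input."""
    x, out = 0.0, []
    for d in doses:
        x = x + dt * (-k_abs * x + d)
        out.append(x)
    return np.array(out)

T = 120                                        # minutes (hypothetical horizon)
doses_sa = np.zeros(T); doses_sa[0] = 6.0      # short-acting insulin bolus
doses_ia = np.zeros(T); doses_ia[0] = 12.0     # intermediate-acting insulin
carbs = np.zeros(T); carbs[30] = 60.0          # carbohydrate intake (g)

# Stack the three CM outputs as the RNN input sequence (T x 3).
inputs = np.stack([
    one_compartment(doses_sa, k_abs=0.08),
    one_compartment(doses_ia, k_abs=0.02),
    one_compartment(carbs, k_abs=0.04),
], axis=1)

rng = np.random.default_rng(2)
W_in = rng.normal(scale=0.1, size=(3, 8))
W_h = rng.normal(scale=0.1, size=(8, 8))
w_out = rng.normal(scale=0.1, size=8)
h = np.zeros(8)
preds = []
for u in inputs:                       # simple Elman recurrence over time
    h = np.tanh(u @ W_in + h @ W_h)
    preds.append(h @ w_out)            # placeholder glucose prediction
print(np.round(preds[:5], 3))
```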
Abstract:
A viscosity model calibrated on process data is proposed and applied to predict the viscosity of a polyamide 12 (PA12) polymer melt as a function of time, temperature, and shear rate. In a first step, the viscosity model was derived from experimental data. It is based primarily on the three-parameter Carreau approach, supplemented by two additional shift factors. The temperature dependence of the viscosity is accounted for by the Arrhenius shift factor aT. A further shift factor aSC (structural change) is introduced to describe the structural change of PA12 resulting from the process conditions during laser sintering. This structural change was observed as a significant increase in viscosity. It was concluded that the viscosity increase is attributable to molar mass build-up and can be understood as post-condensation. Depending on the time and temperature conditions, the viscosity was found to approach an irreversible limit exponentially as a consequence of the molar mass build-up. The rate of this post-condensation is time- and temperature-dependent. It is assumed that the powder-bed temperature causes molar mass build-up and thus chain extension. This progressive increase in chain length reduces molecular mobility and suppresses further post-condensation. The shift factor aSC expresses this physico-chemical model concept and contains two additional parameters: aSC,UL corresponds to the upper viscosity limit, whereas k0 specifies the rate of structural change. It was further found useful to distinguish between an activation energy of flow and an activation energy of structural change when computing aT and aSC. The model parameters were optimized using a genetic algorithm. Good agreement was found between calculated and measured viscosities, so the viscosity model is able to predict the viscosity of a PA12 melt under the combined time and temperature influence of laser sintering. In a second step, the model was applied to calculate the viscosity during the laser-sintering process as a function of energy density. For this purpose, process data such as melt temperature and exposure time, measured on-line with a high-speed thermographic camera, were used. Finally, the influence of the structural change on the in-process viscosity level was demonstrated.
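A sketch of the model structure, as far as it can be reconstructed from the abstract, is given below; the exact way the shift factors enter the Carreau equation and the functional form of aSC are assumptions, not the paper's equations.

```latex
% Illustrative reconstruction (assumed, not taken from the paper).
% Three-parameter Carreau approach with temperature shift a_T and
% structural-change shift a_SC applied to viscosity and shear rate:
\eta(\dot{\gamma}, T, t) = a_T\, a_{SC}\, \frac{A}{\left(1 + B\, a_T\, a_{SC}\, \dot{\gamma}\right)^{C}}

% Arrhenius shift with the activation energy of flow E_A:
a_T = \exp\!\left[\frac{E_A}{R}\left(\frac{1}{T} - \frac{1}{T_0}\right)\right]

% One plausible structural-change factor with exponential saturation toward
% the upper limit a_{SC,UL} at a temperature-dependent rate k_0(T):
a_{SC}(t, T) = 1 + \left(a_{SC,UL} - 1\right)\left(1 - e^{-k_0(T)\, t}\right)
```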
Abstract:
In process industries, make-and-pack production is used to produce food and beverages, chemicals, and metal products, among others. This type of production process allows the fabrication of a wide range of products in relatively small amounts using the same equipment. In this article, we consider a real-world production process (cf. Honkomp et al. 2000. The curse of reality – why process scheduling optimization problems are difficult in practice. Computers & Chemical Engineering, 24, 323–328.) comprising sequence-dependent changeover times, multipurpose storage units with limited capacities, quarantine times, batch splitting, partial equipment connectivity, and transfer times. The planning problem consists of computing a production schedule such that a given demand of packed products is fulfilled, all technological constraints are satisfied, and the production makespan is minimised. None of the models in the literature covers all of the technological constraints that occur in such make-and-pack production processes. To close this gap, we develop an efficient mixed-integer linear programming model that is based on a continuous time domain and general-precedence variables. We propose novel types of symmetry-breaking constraints and a preprocessing procedure to improve the model performance. In an experimental analysis, we show that small- and moderate-sized instances can be solved to optimality within short CPU times.
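As background on the modelling approach, a generic continuous-time, general-precedence building block with sequence-dependent changeovers is sketched below; it is a textbook-style fragment, not the paper's full formulation.

```latex
% Generic general-precedence building block on a continuous time axis
% (standard formulation sketch, not the paper's full model).
% X_{ij} = 1 if batch i precedes batch j, Y_{iu} = 1 if i runs on unit u,
% S_i = start time, p_{iu} = processing time, c_{ij} = sequence-dependent
% changeover time, M = big-M constant.
S_j \ge S_i + p_{iu} + c_{ij} - M\,(3 - X_{ij} - Y_{iu} - Y_{ju})
  \qquad \forall\, i \ne j,\ \forall\, u
% Makespan objective:
\min\, C_{\max}, \qquad C_{\max} \ge S_i + p_{iu} - M\,(1 - Y_{iu}) \qquad \forall\, i, u
```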
Abstract:
In the context of expensive numerical experiments, a promising solution for alleviating the computational costs consists of using partially converged simulations instead of exact solutions. The gain in computational time is at the price of precision in the response. This work addresses the issue of fitting a Gaussian process model to partially converged simulation data for further use in prediction. The main challenge consists of the adequate approximation of the error due to partial convergence, which is correlated in both design variables and time directions. Here, we propose fitting a Gaussian process in the joint space of design parameters and computational time. The model is constructed by building a nonstationary covariance kernel that reflects accurately the actual structure of the error. Practical solutions are proposed for solving parameter estimation issues associated with the proposed model. The method is applied to a computational fluid dynamics test case and shows significant improvement in prediction compared to a classical kriging model.
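The idea of modelling the response jointly over design variables and computational time can be illustrated with a minimal sketch. The separable squared-exponential kernel used here is only a stand-in for the paper's nonstationary kernel, and the data and hyperparameters are hypothetical.

```python
# Minimal sketch of a GP over the joint (design variable, computational time)
# space; prediction of the converged response is obtained by querying the GP
# at a large "time" value. Kernel, data, and hyperparameters are placeholders.
import numpy as np

def kernel(X1, X2, lx=0.3, lt=2.0, var=1.0):
    """Separable squared-exponential kernel over columns (x, t)."""
    dx = X1[:, [0]] - X2[:, 0]
    dt = X1[:, [1]] - X2[:, 1]
    return var * np.exp(-0.5 * (dx / lx) ** 2 - 0.5 * (dt / lt) ** 2)

rng = np.random.default_rng(3)
X = np.column_stack([rng.uniform(0, 1, 30),        # design variable
                     rng.integers(1, 10, 30)])     # solver iterations ("time")
y = np.sin(6 * X[:, 0]) + 1.0 / X[:, 1] + 0.01 * rng.normal(size=30)

K = kernel(X, X) + 1e-6 * np.eye(30)               # jitter for stability
alpha = np.linalg.solve(K, y)

# Predict the (nearly) converged response at new designs.
X_new = np.column_stack([np.linspace(0, 1, 5), np.full(5, 50.0)])
y_pred = kernel(X_new, X) @ alpha
print(np.round(y_pred, 3))
```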
Abstract:
This study tests whether cognitive failures mediate effects of work-related time pressure and time control on commuting accidents and near-accidents. Participants were 83 employees (56% female) who each commuted between their regular place of residence and place of work using vehicles. The Workplace Cognitive Failure Scale (WCFS) asked for the frequency of failure in memory function, failure in attention regulation, and failure in action execution. Time pressure and time control at work were assessed by the Instrument for Stress Oriented Task Analysis (ISTA). Commuting accidents in the last 12 months were reported by 10% of participants, and half of the sample reported commuting near-accidents in the last 4 weeks. Cognitive failure significantly mediated the influence of time pressure at work on near-accidents even when age, gender, neuroticism, conscientiousness, commuting duration, commuting distance, and time pressure during commuting were controlled for. Time control was negatively related to cognitive failure and neuroticism, but no association with commuting accidents or near-accidents was found. Time pressure at work is likely to increase cognitive load. Time pressure might, therefore, increase cognitive failures during work and also during commuting. Hence, time pressure at work can decrease commuting safety. The results suggest that a reduction of time pressure at work should improve commuting safety.
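The mediation logic reported above can be sketched as a product-of-coefficients calculation on simulated data; the real analysis includes the listed control variables and an outcome model appropriate for accident reports, so everything below is illustrative.

```python
# Illustrative product-of-coefficients mediation sketch: time pressure ->
# cognitive failure (path a), cognitive failure -> near-accidents given
# time pressure (path b). Data are simulated, not the study's.
import numpy as np

rng = np.random.default_rng(4)
n = 83
time_pressure = rng.normal(size=n)
cog_failure = 0.5 * time_pressure + rng.normal(scale=0.8, size=n)        # path a
near_acc = 0.4 * cog_failure + 0.1 * time_pressure + rng.normal(size=n)  # paths b, c'

def ols_slope(y, x):
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

a = ols_slope(cog_failure, time_pressure)             # exposure -> mediator
X2 = np.column_stack([np.ones(n), cog_failure, time_pressure])
b = np.linalg.lstsq(X2, near_acc, rcond=None)[0][1]   # mediator -> outcome
print(f"indirect effect a*b ≈ {a * b:.2f}")
```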
Abstract:
Although several detailed models of molecular processes essential for circadian oscillations have been developed, their complexity makes intuitive understanding of the oscillation mechanism difficult. The goal of the present study was to reduce a previously developed, detailed model to a minimal representation of the transcriptional regulation essential for circadian rhythmicity in Drosophila. The reduced model contains only two differential equations, each with time delays. A negative feedback loop is included, in which PER protein represses per transcription by binding the dCLOCK transcription factor. A positive feedback loop is also included, in which dCLOCK indirectly enhances its own formation. The model simulated circadian oscillations, light entrainment, and a phase-response curve with qualitative similarities to experiment. Time delays were found to be essential for simulation of circadian oscillations with this model. To examine the robustness of the simplified model to fluctuations in molecule numbers, a stochastic variant was constructed. Robust circadian oscillations and entrainment to light pulses were simulated with fewer than 80 molecules of each gene product present on average. Circadian oscillations persisted when the positive feedback loop was removed. Moreover, elimination of positive feedback did not decrease the robustness of oscillations to stochastic fluctuations or to variations in parameter values. Such reduced models can aid understanding of the oscillation mechanisms in Drosophila and in other organisms in which feedback regulation of transcription may play an important role.
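A generic illustration of how a time delay in a transcriptional negative feedback loop can sustain oscillations is sketched below. It uses a single delayed-repression equation rather than the paper's two-equation model, and the parameters are hypothetical, chosen only to produce limit-cycle behavior.

```python
# Generic delayed transcriptional repression (NOT the paper's model):
# dP/dt = beta / (1 + (P(t - tau)/K)**n) - gamma * P, integrated with Euler.
import numpy as np

beta, gamma, K, n = 4.0, 1.0, 1.0, 10.0   # production, degradation, threshold, Hill
tau, dt, t_end = 2.0, 0.01, 60.0          # delay, time step, horizon
lag = int(tau / dt)
steps = int(t_end / dt)

P = np.empty(steps + 1)
P[:lag + 1] = 0.1                         # constant initial history over one delay

for i in range(lag, steps):
    P_delayed = P[i - lag]
    P[i + 1] = P[i] + dt * (beta / (1 + (P_delayed / K) ** n) - gamma * P[i])

t = np.arange(steps + 1) * dt
peaks = np.where((P[1:-1] > P[:-2]) & (P[1:-1] > P[2:]))[0] + 1
if len(peaks) > 1:
    print(f"approximate period ≈ {np.mean(np.diff(t[peaks])):.1f} time units")
```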
Abstract:
BACKGROUND: Renal involvement is a serious manifestation of systemic lupus erythematosus (SLE); it may portend a poor prognosis as it may lead to end-stage renal disease (ESRD). The purpose of this study was to determine the factors predicting the development of renal involvement and its progression to ESRD in a multi-ethnic SLE cohort (PROFILE). METHODS AND FINDINGS: PROFILE includes SLE patients from five different United States institutions. We examined at baseline the socioeconomic-demographic, clinical, and genetic variables associated with the development of renal involvement and its progression to ESRD by univariable and multivariable Cox proportional hazards regression analyses. Analyses of onset of renal involvement included only patients with renal involvement after SLE diagnosis (n = 229). Analyses of ESRD included all patients, regardless of whether renal involvement occurred before, at, or after SLE diagnosis (34 of 438 patients). In addition, we performed a multivariable logistic regression analysis of the variables associated with the development of renal involvement at any time during the course of SLE. In the time-dependent multivariable analysis, patients developing renal involvement were more likely to have more American College of Rheumatology criteria for SLE, and to be younger, hypertensive, and of African-American or Hispanic (from Texas) ethnicity. Alternative regression models were consistent with these results. In addition to greater accrued disease damage (renal damage excluded), younger age, and Hispanic ethnicity (from Texas), homozygosity for the valine allele of FcgammaRIIIa (FCGR3A*GG) was a significant predictor of ESRD. Results from the multivariable logistic regression model that included all cases of renal involvement were consistent with those from the Cox model. CONCLUSIONS: Fcgamma receptor genotype is a risk factor for progression of renal disease to ESRD. Since the frequency distribution of FCGR3A alleles does not vary significantly among the ethnic groups studied, the additional factors underlying the ethnic disparities in renal disease progression remain to be elucidated.
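The Cox proportional hazards model underlying the reported analyses has the generic form sketched below; the covariates listed are examples taken from the abstract.

```latex
% Cox proportional hazards model (generic form); h_0(t) is the unspecified
% baseline hazard and exp(beta_k) the hazard ratio for covariate k:
h(t \mid x_i) = h_0(t)\, \exp\!\left(\beta^{\top} x_i\right),
\qquad x_i = (\text{age},\ \text{ethnicity},\ \text{hypertension},\ \text{FCGR3A genotype},\ \ldots)
```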