11 results for "Time-varying covariance matrices"

in Biblioteca Digital da Produção


Relevance: 100.00%

Abstract:

Background: Several models have been designed to predict survival of patients with heart failure. These, while available and widely used for both stratifying patients and deciding upon different treatment options at the individual level, have several limitations. Specifically, some clinical variables that influence prognosis may have an influence that changes over time. Statistical models that include such characteristics may help in evaluating prognosis. The aim of the present study was to analyze and quantify the impact of modeling heart failure survival allowing for covariates with time-varying effects known to be independent predictors of overall mortality in this clinical setting. Methodology: Survival data from an inception cohort of five hundred patients diagnosed with heart failure functional class III and IV between 2002 and 2004 and followed up to 2006 were analyzed using the proportional hazards Cox model, variations of the Cox model, and Aalen's additive model. Principal Findings: One hundred and eighty-eight (188) patients died during follow-up. For the patients under study, age, serum sodium, hemoglobin, serum creatinine, and left ventricular ejection fraction were significantly associated with mortality. Evidence of a time-varying effect was suggested for the last three. Both high hemoglobin and high LV ejection fraction were associated with a reduced risk of dying, with a stronger initial effect. High creatinine, associated with an increased risk of dying, also presented a stronger initial effect. The impacts of age and sodium were constant over time.
Conclusions: The current study points to the importance of evaluating covariates with time-varying effects in heart failure models. The analysis performed suggests that variations of the Cox and Aalen models constitute a valuable tool for identifying these variables.
The implementation of covariates with time-varying effects into heart failure prognostication models may reduce bias and increase the specificity of such models.
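The notion of a covariate whose effect decays over follow-up can be sketched numerically. The following is a minimal Python illustration — not the study's fitted model; all parameter values are made up — of a Cox-type hazard with a time-varying coefficient beta(t) that is strongest at baseline, as reported here for hemoglobin, creatinine, and ejection fraction:

```python
import numpy as np

def survival_tv(x, b0=0.8, tau=2.0, h0=0.1, t_max=10.0, dt=0.01):
    """Survival under a Cox-type hazard h(t) = h0 * exp(beta(t) * x),
    where beta(t) = b0 * exp(-t / tau) decays, i.e. the covariate's
    effect is strongest early in follow-up (illustrative values only)."""
    t = np.arange(0.0, t_max, dt)
    beta = b0 * np.exp(-t / tau)          # time-varying effect
    hazard = h0 * np.exp(beta * x)        # hazard for covariate value x
    cum_hazard = np.cumsum(hazard) * dt   # numerical integration of H(t)
    return t, np.exp(-cum_hazard)         # S(t) = exp(-H(t))

t, s_high = survival_tv(x=1.0)   # e.g. an elevated (hypothetical) covariate
_, s_low = survival_tv(x=0.0)    # reference level
```

At baseline the hazard ratio is exp(b0) and it decays toward 1, which is the qualitative pattern a proportional-hazards model (constant beta) cannot capture.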

Relevance: 100.00%

Abstract:

Linear parameter varying (LPV) control is a model-based control technique that takes into account time-varying parameters of the plant. In the case of rotating systems supported by lubricated bearings, the dynamic characteristics of the bearings change over time as a function of the rotating speed. Hence, LPV control can tackle run-up and run-down operating conditions, in which the dynamic characteristics of the rotating system change significantly over time because of the bearings and high vibration levels occur. In this work, the LPV control design for a flexible shaft supported by plain journal bearings is presented. The model used in the LPV control design is updated from unbalance-response experimental results, and dynamic coefficients for the entire range of rotating speeds are obtained by numerical optimization. Experimental implementation of the designed LPV control resulted in a strong reduction of vibration amplitudes when crossing the critical speed, without affecting system behavior at sub- or supercritical speeds. (C) 2012 Elsevier Ltd. All rights reserved.
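The core idea of scheduling controller gains on a time-varying parameter (here, rotating speed) can be sketched as follows. This is a hypothetical gain-interpolation illustration, not the paper's LPV synthesis; the speeds and gain values are invented:

```python
import numpy as np

# Hypothetical state-feedback gains designed at three rotating speeds
# (rad/s); a real LPV design guarantees stability for all intermediate
# speeds, whereas this sketch only interpolates point designs.
speeds = np.array([50.0, 150.0, 300.0])
gains = np.array([[12.0, 0.8],
                  [20.0, 1.5],
                  [35.0, 2.4]])  # one [position, velocity] gain pair per speed

def scheduled_gain(omega):
    """Linearly interpolate the feedback gain at rotating speed omega."""
    omega = np.clip(omega, speeds[0], speeds[-1])
    return np.array([np.interp(omega, speeds, gains[:, j])
                     for j in range(gains.shape[1])])

def control(omega, state):
    """u = -K(omega) @ x : speed-scheduled state feedback."""
    return -scheduled_gain(omega) @ state
```

A full LPV design would additionally certify stability along all admissible speed trajectories; simple interpolation between point designs carries no such guarantee.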

Relevance: 100.00%

Abstract:

Most biological systems are formed by component parts that are to some degree interrelated. Groups of parts that are more associated among themselves and are relatively autonomous from others are called modules. One of the consequences of modularity is that biological systems usually present an unequal distribution of genetic variation among traits. Estimating the covariance matrix that describes these systems is a difficult problem due to a number of factors, such as small sample sizes and measurement error. We show that this problem is exacerbated whenever matrix inversion is required, as in directional selection reconstruction analysis. We explore the consequences of varying degrees of modularity and signal-to-noise ratio on selection reconstruction. We then present and test the efficiency of available methods for controlling noise in matrix estimates. In our simulations, controlling matrices for noise vastly improves the reconstruction of selection gradients. We also perform an analysis of selection gradient reconstruction on a New World monkey skull database to illustrate the impact of noise on such analyses. Noise-controlled estimates render far more plausible interpretations that are in full agreement with previous results.
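One common way to control noise before inverting a covariance estimate is to shrink it toward a simple target, which lowers the condition number while preserving total variance. The sketch below shows a generic shrinkage ("bending") step — not necessarily the specific method tested in the study, and `lam` is an arbitrary intensity — ahead of the inversion beta = P^{-1} s that amplifies noise:

```python
import numpy as np

def shrink(P, lam=0.2):
    """Shrink a noisy covariance estimate toward a scaled identity.
    The trace (total variance) is preserved; the eigenvalue spread,
    and hence the condition number, is reduced. lam is illustrative."""
    target = np.eye(P.shape[0]) * np.trace(P) / P.shape[0]
    return (1.0 - lam) * P + lam * target

def selection_gradient(P, s, lam=0.2):
    """beta = P^{-1} s -- the inversion step that amplifies noise
    when P is poorly estimated."""
    return np.linalg.solve(shrink(P, lam), s)

# Example: a noisy 5-trait estimate from only 6 'individuals' (made up)
rng = np.random.default_rng(0)
A = rng.normal(size=(6, 5))
P_noisy = A.T @ A / 6
beta = selection_gradient(P_noisy, np.ones(5))
```

Because every eigenvalue is pulled toward the mean eigenvalue, the smallest (noisiest) directions are inflated the most, which is exactly what stabilizes the inverse.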

Relevance: 100.00%

Abstract:

Background: This study evaluated a wide range of viral load (VL) thresholds to identify a cut-point that best predicts new clinical events in children on stable highly active antiretroviral therapy (HAART). Methods: Cox proportional hazards modeling was used to assess the adjusted risk for World Health Organization stage 3 or 4 clinical events (WHO events) as a function of time-varying CD4, VL, and hemoglobin values in a cohort study of Latin American children on HAART for >= 6 months. Models were fit using different VL cut-points between 400 and 50,000 copies per milliliter, with model fit evaluated on the basis of the minimum Akaike information criterion (AIC) value, a standard model fit statistic. Results: Models were based on 67 subjects with WHO events out of 550 subjects in the study. The VL cut-points of >2600 and >32,000 copies per milliliter corresponded to the lowest AIC values and were associated with the highest hazard ratios (2.0, P = 0.015; and 2.1, P = 0.0058, respectively) for WHO events. Conclusions: In HIV-infected Latin American children on stable HAART, 2 distinct VL thresholds (>2600 and >32,000 copies/mL) were identified for predicting children at significantly increased risk for HIV-related clinical illness, after accounting for CD4 level, hemoglobin level, and other significant factors.
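The cut-point search by minimum AIC can be illustrated with a deliberately simplified model. The sketch below uses a closed-form two-group exponential survival model instead of the study's adjusted Cox model, so it only mimics the scanning logic; all data in the example are synthetic:

```python
import numpy as np

def exp_aic(times, events, group):
    """AIC of a two-group exponential survival model with closed-form
    MLE lambda_g = events_g / time_g (2 rate parameters, so k = 2)."""
    ll = 0.0
    for g in (0, 1):
        d = events[group == g].sum()
        T = times[group == g].sum()
        if d > 0:
            ll += d * np.log(d / T) - d   # d*log(lam) - lam*T at lam = d/T
    return 2 * 2 - 2 * ll

def best_cutpoint(times, events, vl, cuts):
    """Scan candidate viral-load cut-points; return the minimum-AIC one."""
    aics = [exp_aic(times, events, (vl > c).astype(int)) for c in cuts]
    return cuts[int(np.argmin(aics))], aics

# Synthetic cohort: two hypothetical VL levels with different event rates
times = np.array([10.0] * 50 + [1.0] * 50)    # follow-up times
events = np.ones(100)                          # all subjects had an event
vl = np.array([1000.0] * 50 + [50000.0] * 50)  # viral loads (made up)
best, aics = best_cutpoint(times, events, vl,
                           np.array([400.0, 3000.0, 32000.0]))
```

In the synthetic data the 400 copies/mL cut puts everyone in one group and fits worst, while any cut separating the two VL levels attains the minimum AIC.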

Relevance: 100.00%

Abstract:

The escape dynamics of a classical light ray inside a corrugated waveguide is characterised by the use of scaling arguments. The model is described via a two-dimensional nonlinear and area-preserving mapping. The phase space of the mapping contains a set of periodic islands surrounded by a large chaotic sea that is confined by a set of invariant tori. When a hole is introduced in the chaotic sea, letting the ray escape, the histogram of the frequency of escaping particles exhibits rapid growth, reaching a maximum value at n_p and later decaying asymptotically to zero. The behaviour of the histogram of escape frequency is characterised using scaling arguments. The scaling formalism is widely applicable to critical phenomena and is useful in the characterisation of phase transitions, including transitions from limited to unlimited energy growth in two-dimensional time-varying billiard problems. (C) 2011 Elsevier B.V. All rights reserved.
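The escape-histogram construction can be sketched with any two-dimensional area-preserving map. The code below uses the Chirikov standard map as a stand-in for the corrugated-waveguide mapping; the hole position, parameter k, and initial conditions are all illustrative:

```python
import numpy as np

def escape_histogram(n_particles=5000, n_max=1000, k=1.5,
                     hole=(0.45, 0.55), seed=1):
    """Iterate the (Chirikov) standard map -- a 2-D area-preserving map,
    here a stand-in for the waveguide mapping -- and record the iteration
    at which each orbit first enters the 'hole' in the chaotic sea."""
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0.0, 1.0, n_particles)   # position (mod 1)
    p = rng.uniform(0.6, 0.9, n_particles)       # start in the chaotic region
    escape_n = np.full(n_particles, -1)          # -1 = never escaped
    alive = np.ones(n_particles, dtype=bool)
    for n in range(1, n_max + 1):
        p[alive] = (p[alive] + (k / (2 * np.pi))
                    * np.sin(2 * np.pi * theta[alive])) % 1.0
        theta[alive] = (theta[alive] + p[alive]) % 1.0
        hit = alive & (theta > hole[0]) & (theta < hole[1])
        escape_n[hit] = n                        # record first passage
        alive &= ~hit                            # remove escaped orbits
    return escape_n

esc = escape_histogram()
counts = np.bincount(esc[esc > 0])   # escape-frequency histogram vs. n
```

Plotting `counts` against the iteration number reproduces the qualitative shape described in the abstract: rapid growth to a peak followed by an asymptotic decay to zero.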

Relevance: 100.00%

Abstract:

Background: Magnetic hyperthermia is currently a clinical therapy approved in the European Union for treatment of tumor cells, and uses magnetic nanoparticles (MNPs) under time-varying magnetic fields (TVMFs). The same basic principle seems promising against the trypanosomatids causing Chagas disease and sleeping sickness, given that the therapeutic drugs available have severe side effects and that there are drug-resistant strains. However, no applications of this strategy against protozoan-induced diseases have been reported so far. In the present study, Crithidia fasciculata, a widely used model for therapeutic strategies against pathogenic trypanosomatids, was targeted with Fe3O4 MNPs in order to provoke cell death remotely using TVMFs. Methods: Iron oxide MNPs with average diameters of approximately 30 nm were synthesized by precipitation of FeSO4 in basic medium. The MNPs were added to C. fasciculata choanomastigotes in the exponential phase and incubated overnight, with excess MNPs removed using a DEAE-cellulose resin column. The amount of MNPs taken up per cell was determined by magnetic measurement. The cells bearing MNPs were subjected to TVMFs using a homemade AC field applicator (f = 249 kHz, H = 13 kA/m), and the temperature variation during the experiments was measured. Scanning electron microscopy was used to assess morphological changes after the TVMF experiments. Cell viability was analyzed using an MTT colorimetric assay and flow cytometry. Results: MNPs were incorporated into the cells, with no noticeable cytotoxicity. When a TVMF was applied to cells bearing MNPs, massive cell death was induced via a nonapoptotic mechanism. No effects were observed by applying TVMF to control cells not loaded with MNPs. No macroscopic rise in temperature was observed in the extracellular medium during the experiments.
Conclusion: As a proof of principle, these data indicate that intracellular hyperthermia is a suitable technology to induce death of protozoan parasites bearing MNPs. These findings expand the possibilities for new therapeutic strategies to combat parasitic infections.

Relevance: 100.00%

Abstract:

A total of 46,089 individual monthly test-day (TD) milk yields (10 test-days) from 7,331 complete first lactations of Holstein cattle were analyzed. A standard multivariate analysis (MV), reduced-rank analyses fitting the first 2, 3, and 4 genetic principal components (PC2, PC3, PC4), and analyses that fitted a factor-analytic structure considering 2, 3, and 4 factors (FAS2, FAS3, FAS4) were carried out. The models included the random animal genetic effect and the fixed effects of contemporary group (herd-year-month of test-day), age of cow (linear and quadratic effects), and days in milk (linear effect). The residual covariance matrix was assumed to have full rank. Moreover, 2 random regression models were applied. Variance components were estimated by the restricted maximum likelihood method. The heritability estimates ranged from 0.11 to 0.24. The genetic correlation estimates between TD obtained with the PC2 model were higher than those obtained with the MV model, especially between adjacent test-days at the end of lactation, where they were close to unity. The results indicate that, for the data considered in this study, only 2 principal components are required to summarize the bulk of genetic variation among the 10 traits.
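The reduced-rank idea — summarizing a genetic covariance matrix by its leading principal components — can be sketched directly from the eigendecomposition. The 4-trait matrix below is made up; only the mechanics mirror the PC2-PC4 models:

```python
import numpy as np

def low_rank_approx(G, k):
    """Approximate a covariance matrix by its leading k principal
    components: G_k = V_k diag(lam_k) V_k^T, plus the fraction of
    total variance those components retain."""
    lam, V = np.linalg.eigh(G)             # eigenvalues, ascending
    lam, V = lam[::-1], V[:, ::-1]         # reorder to descending
    Gk = (V[:, :k] * lam[:k]) @ V[:, :k].T # rank-k reconstruction
    explained = lam[:k].sum() / lam.sum()  # variance retained
    return Gk, explained

# Hypothetical 4-trait genetic covariance matrix (made-up values)
G = np.array([[4.0, 2.0, 1.0, 0.5],
              [2.0, 3.0, 0.8, 0.4],
              [1.0, 0.8, 2.0, 0.3],
              [0.5, 0.4, 0.3, 1.0]])
G2, explained = low_rank_approx(G, 2)
```

If two components already capture most of the variance, the rank-2 reconstruction is close to G — the situation the study reports for its 10 test-day traits.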

Relevance: 30.00%

Abstract:

Drug testing is used by employers to detect drug use by employees or job candidates. It can identify recent use of alcohol, prescription drugs, and illicit drugs, serving as a screening tool for potential health, safety, and performance issues. Urine is the most commonly used sample for illicit drugs. It detects the use of a drug within the last few days and as such is evidence of recent use; but a positive test does not necessarily mean that the individual was impaired at the time of the test. Abstention from use for three days will often produce a negative test result. Analysis of hair provides a much longer window of detection, typically 1 to 3 months. Hence the likelihood of a false-negative result using hair is very much less than with a urine test. Conversely, a negative hair test is a substantially stronger indicator of a non-drug user than a negative urine test. Oral fluid (saliva) is also easy to collect. Drugs remain in oral fluid for a similar time as in blood. The method is a good way of detecting current use and is more likely to reflect current impairment. It offers promise as a test in post-accident, for-cause, and on-duty situations. Studies have shown that, within the same industrial settings, hair testing can detect twice as many drug users as urine testing. Copyright (C) 2012 John Wiley & Sons, Ltd.

Relevance: 30.00%

Abstract:

Objectives: This study evaluated the degree of conversion (DC) and working time (WT) of two commercial, dual-cured resin cements polymerized at varying temperatures and under different curing-light access conditions, using Fourier transform infrared (FTIR) analysis. Materials and Methods: Calibra (Cal; Dentsply Caulk) and Variolink II (Ivoclar Vivadent) were tested at 25°C or preheated to 37°C or 50°C and applied to a similar-temperature surface of a horizontal attenuated-total-reflectance (ATR) unit attached to an infrared spectrometer. The products were polymerized under one of four conditions: direct light exposure only (600 mW/cm²) through a glass slide, through a 1.5- or 3.0-mm-thick ceramic disc (A2 shade, IPS e.max, Ivoclar Vivadent), or allowed to self-cure in the absence of light curing. FTIR spectra were recorded for 20 min (1 spectrum/s, 16 scans/spectrum, resolution 4 cm⁻¹) immediately after application to the ATR. DC was calculated using standard techniques of observing changes in aliphatic-to-aromatic peak ratios pre-curing and 20 min post-curing, as well as during each 1-second interval. Time-based monomer conversion analysis was used to determine WT at each temperature. DC and WT data (n=6) were analyzed by two-way analysis of variance and Tukey post hoc tests (p=0.05). Results: Higher temperatures increased DC regardless of curing mode and product. For Calibra, only the 3.0-mm-thick ceramic group showed lower DC than the other groups at 25°C (p=0.01830), while no significant difference was observed among groups at 37°C and 50°C. For Variolink, the 3.0-mm-thick ceramic group showed lower DC than the 1.5-mm-thick group only at 25°C, while the self-cure group showed lower DC than the others at all temperatures (p=0.00001).
WT decreased with increasing temperature: a reduction near 70% at 37°C and near 90% at 50°C for both products, with the WT reduction reaching clinically inappropriate times in some cases (p=0.00001). Conclusion: Elevated temperature during polymerization of dual-cured cements increased DC. WT was reduced at elevated temperature, but the extent of the reduction might not be clinically acceptable.
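The DC calculation from aliphatic-to-aromatic peak ratios follows the standard methacrylate formula DC = (1 − R_cured / R_uncured) × 100. A minimal sketch, with made-up peak absorbance values:

```python
def degree_of_conversion(aliph_cured, arom_cured, aliph_uncured, arom_uncured):
    """Degree of conversion (%) from FTIR absorbance peak heights:
    DC = (1 - R_cured / R_uncured) * 100, where R is the ratio of the
    aliphatic C=C peak to the aromatic C=C internal-standard peak
    (the standard methacrylate calculation; inputs below are made up)."""
    r_cured = aliph_cured / arom_cured
    r_uncured = aliph_uncured / arom_uncured
    return (1.0 - r_cured / r_uncured) * 100.0

dc = degree_of_conversion(0.30, 1.0, 0.75, 1.0)  # approx. 60% conversion
```

The aromatic peak is unaffected by polymerization, so it serves as an internal standard that cancels differences in sample thickness and contact with the ATR crystal.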

Relevance: 30.00%

Abstract:

Chemically resolved submicron (PM1) particle mass fluxes were measured by eddy covariance with a high-resolution time-of-flight aerosol mass spectrometer over temperate and tropical forests during the BEARPEX-07 and AMAZE-08 campaigns. Fluxes during AMAZE-08 were small and close to the detection limit (<1 ng m−2 s−1) due to low particle mass concentrations (<1 μg m−3). During BEARPEX-07, concentrations were five times larger, with mean mid-day deposition fluxes of −4.8 ng m−2 s−1 for total nonrefractory PM1 (Vex,PM1 = −1 mm s−1) and emission fluxes of +2.6 ng m−2 s−1 for organic PM1 (Vex,org = +1 mm s−1). Biosphere–atmosphere fluxes of different chemical components are affected by in-canopy chemistry, vertical gradients in gas-particle partitioning due to canopy temperature gradients, emission of primary biological aerosol particles, and wet and dry deposition. As a result of these competing processes, individual chemical components had fluxes of varying magnitude and direction during both campaigns. Oxygenated organic components representing regionally aged aerosol were deposited, while components of fresh secondary organic aerosol (SOA) were emitted. During BEARPEX-07, rapid in-canopy oxidation caused SOA growth on the timescale of biosphere-atmosphere exchange. In-canopy SOA mass yields were 0.5-4%. During AMAZE-08, the net organic aerosol flux was influenced by deposition, in-canopy SOA formation, and thermal shifts in gas-particle partitioning. Wet deposition was estimated to be an order of magnitude larger than dry deposition during AMAZE-08. Small shifts in organic aerosol concentrations from anthropogenic sources such as urban pollution or biomass burning alter the balance between the flux terms. The semivolatile nature of the Amazonian organic aerosol suggests a feedback in which warmer temperatures will partition SOA to the gas phase, reducing its light scattering and thus its potential to cool the region.
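The eddy-covariance flux itself is just the covariance of vertical-wind and concentration fluctuations. A minimal sketch of the flux and exchange-velocity calculation on a synthetic series (real processing also involves despiking, detrending, and coordinate rotation, all omitted here):

```python
import numpy as np

def eddy_flux(w, c):
    """Eddy-covariance flux F = mean(w'c'), the covariance of the
    fluctuations of vertical wind w (m/s) and particle mass
    concentration c (ng/m^3); exchange velocity V_ex = F / mean(c).
    Signs follow the paper's convention: negative = deposition."""
    wp = w - w.mean()
    cp = c - c.mean()
    flux = np.mean(wp * cp)   # ng m^-2 s^-1
    v_ex = flux / c.mean()    # m/s (multiply by 1000 for mm/s)
    return flux, v_ex

# Synthetic series: updrafts (w > 0) coincide with cleaner air, so the
# net transport is downward (deposition, negative flux)
w = np.array([1.0, -1.0, 1.0, -1.0])
c = np.array([2.0, 4.0, 2.0, 4.0])
flux, v_ex = eddy_flux(w, c)
```

Normalizing the flux by the mean concentration, as in Vex,PM1 and Vex,org above, makes fluxes comparable between campaigns with very different aerosol loadings.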

Relevance: 30.00%

Abstract:

Modern GPUs are well suited to intensive computational tasks and massive parallel computation. Sparse matrix-vector multiplication and the linear triangular solver are among the most important and heavily used kernels in scientific computation, and several challenges in developing a high-performance kernel from these two modules are investigated. The main interest is to solve linear systems derived from elliptic equations with triangular elements. The resulting linear system has a symmetric positive definite matrix. The sparse matrix is stored in the compressed sparse row (CSR) format. A CUDA algorithm is proposed to execute the matrix-vector multiplication directly in the CSR format. A dependence-tree algorithm is used to determine which variables the linear triangular solver can compute in parallel. To increase the number of parallel threads, a graph-coloring algorithm is implemented to reorder the mesh numbering in a pre-processing phase. The proposed method is compared with available parallel and serial libraries. The results show that the proposed method improves the computational cost of the matrix-vector multiplication. The pre-processing associated with the triangular solver needs to be executed only once in the proposed method. The conjugate gradient method was implemented and showed a similar convergence rate for all the compared methods. The proposed method showed significantly smaller execution time.
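The CSR matrix-vector product at the heart of the method can be sketched serially; in a CUDA kernel of the kind the abstract describes, each row's dot product would typically be assigned to a thread or a warp. The 3x3 example system below is made up (symmetric positive definite, matching the class of systems considered):

```python
import numpy as np

def csr_matvec(data, indices, indptr, x):
    """y = A @ x for A stored in compressed sparse row (CSR) format:
    data    -- nonzero values, stored row by row
    indices -- column index of each nonzero
    indptr  -- indptr[i]:indptr[i+1] delimits row i's nonzeros"""
    n = len(indptr) - 1
    y = np.zeros(n)
    for i in range(n):              # one row per GPU thread/warp in CUDA
        lo, hi = indptr[i], indptr[i + 1]
        y[i] = np.dot(data[lo:hi], x[indices[lo:hi]])
    return y

# Made-up symmetric positive definite system:
# A = [[4, 1, 0],
#      [1, 3, 0],
#      [0, 0, 2]]
data = np.array([4.0, 1.0, 1.0, 3.0, 2.0])
indices = np.array([0, 1, 0, 1, 2])
indptr = np.array([0, 2, 4, 5])
y = csr_matvec(data, indices, indptr, np.array([1.0, 2.0, 3.0]))
```

CSR keeps each row's nonzeros contiguous, which is why the row-parallel decomposition is natural; the rows are independent, unlike the triangular solve, which needs the dependence-tree analysis described above.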