8 results for H-Infinity Time-Varying Adaptive Algorithm
in Biblioteca Digital da Produção Intelectual da Universidade de São Paulo
Abstract:
Background: Several models have been designed to predict the survival of patients with heart failure. These models, while readily available and widely used both for risk stratification and for deciding among treatment options at the individual level, have several limitations. In particular, the influence that some clinical variables exert on prognosis may change over time. Statistical models that accommodate such time-varying effects may help in evaluating prognosis. The aim of the present study was to analyze and quantify the impact of modeling heart failure survival while allowing time-varying effects for covariates known to be independent predictors of overall mortality in this clinical setting. Methodology: Survival data from an inception cohort of five hundred patients diagnosed with functional class III and IV heart failure between 2002 and 2004 and followed up to 2006 were analyzed using the Cox proportional hazards model, variations of the Cox model, and the Aalen additive model. Principal Findings: One hundred and eighty-eight (188) patients died during follow-up. For the patients under study, age, serum sodium, hemoglobin, serum creatinine, and left ventricular ejection fraction were significantly associated with mortality. Evidence of a time-varying effect was found for the last three. Both high hemoglobin and high LV ejection fraction were associated with a reduced risk of dying, with a stronger effect early in follow-up. High creatinine, associated with an increased risk of dying, also showed a stronger initial effect. The effects of age and sodium were constant over time. Conclusions: The current study points to the importance of evaluating covariates with time-varying effects in heart failure models. The analysis performed suggests that variations of the Cox and Aalen models constitute a valuable tool for identifying these variables. Incorporating covariates with time-varying effects into heart failure prognostication models may reduce bias and increase the specificity of such models.
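For readers who want to experiment with the kind of model described above, here is a minimal sketch (not the study's code) of how a time-varying covariate effect can be probed in Python with the lifelines library: follow-up is split into episodes and a covariate is interacted with log(time), so a nonzero interaction coefficient signals an effect that changes over follow-up. The synthetic data, column names, and two-interval split are illustrative assumptions.

```python
# Sketch: probing a time-varying effect of creatinine in a Cox-type model.
# Data are in long format, one row per patient-interval (episode-split).
import numpy as np
import pandas as pd
from lifelines import CoxTimeVaryingFitter

rng = np.random.default_rng(0)
rows = []
for pid in range(200):
    creat = rng.lognormal(0.2, 0.3)              # toy creatinine value
    death_time = rng.exponential(5.0 / creat)    # higher creatinine -> earlier death
    for start, stop in [(0.0, 1.0), (1.0, 4.0)]: # crude two-interval split
        if start >= death_time:
            break
        s = min(stop, death_time)
        rows.append(dict(id=pid, start=start, stop=s,
                         event=int(s == death_time),
                         creatinine=creat,
                         # interaction with log(time + 1): captures an effect
                         # that is stronger early in follow-up and then fades
                         creat_x_logt=creat * np.log(s + 1.0)))
df = pd.DataFrame(rows)

ctv = CoxTimeVaryingFitter()
ctv.fit(df, id_col="id", event_col="event", start_col="start", stop_col="stop")
ctv.print_summary()  # a nonzero creat_x_logt coefficient suggests a time-varying effect
```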
Abstract:
Linear parameter-varying (LPV) control is a model-based control technique that takes into account time-varying parameters of the plant. In the case of rotating systems supported by lubricated bearings, the dynamic characteristics of the bearings change over time as a function of the rotating speed. Hence, LPV control can tackle run-up and run-down operating conditions, in which the dynamic characteristics of the rotating system change significantly over time because of the bearings and high vibration levels occur. In this work, the LPV control design for a flexible shaft supported by plain journal bearings is presented. The model used in the LPV control design is updated from unbalance response experimental results, and dynamic coefficients for the entire range of rotating speeds are obtained by numerical optimization. Experimental implementation of the designed LPV control resulted in a strong reduction of vibration amplitudes when crossing the critical speed, without affecting system behavior at sub- or supercritical speeds.
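The essence of the LPV approach described above can be illustrated with a small gain-scheduling sketch. This is a schematic Python illustration, not the authors' controller: gains assumed to be designed at the two extreme rotating speeds are convexly interpolated as the speed (the scheduling parameter) varies; the speed range, gain values, and toy state are assumptions.

```python
# Schematic sketch of polytopic LPV gain scheduling: the controller gain
# tracks the time-varying parameter (rotating speed) by interpolating
# between vertex gains designed at the extremes of the operating range.
import numpy as np

OMEGA_MIN, OMEGA_MAX = 20.0, 300.0   # rad/s, assumed operating range
K_min = np.array([[12.0, 3.0]])      # gain designed at OMEGA_MIN (assumed)
K_max = np.array([[45.0, 9.0]])      # gain designed at OMEGA_MAX (assumed)

def lpv_gain(omega: float) -> np.ndarray:
    """Convex interpolation of the vertex gains at the current speed."""
    a = np.clip((omega - OMEGA_MIN) / (OMEGA_MAX - OMEGA_MIN), 0.0, 1.0)
    return (1.0 - a) * K_min + a * K_max

def control(omega: float, x: np.ndarray) -> np.ndarray:
    """State feedback u = -K(omega) x with the scheduled gain."""
    return -lpv_gain(omega) @ x

# Example: gain applied while crossing a critical speed during run-up
x = np.array([1e-4, 0.0])            # toy shaft displacement and velocity
print(control(150.0, x))
```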
Abstract:
Background: This study evaluated a wide range of viral load (VL) thresholds to identify a cut-point that best predicts new clinical events in children on stable highly active antiretroviral therapy (HAART). Methods: Cox proportional hazards modeling was used to assess the adjusted risk of World Health Organization stage 3 or 4 clinical events (WHO events) as a function of time-varying CD4, VL, and hemoglobin values in a cohort study of Latin American children on HAART for at least 6 months. Models were fit using different VL cut-points between 400 and 50,000 copies per milliliter, and model fit was evaluated on the basis of the minimum Akaike information criterion (AIC) value, a standard model fit statistic. Results: Models were based on 67 subjects with WHO events out of 550 subjects in the study. The VL cut-points of >2600 and >32,000 copies per milliliter yielded the lowest AIC values and were associated with the highest hazard ratios (2.0, P = 0.015, and 2.1, P = 0.0058, respectively) for WHO events. Conclusions: In HIV-infected Latin American children on stable HAART, two distinct VL thresholds (>2600 and >32,000 copies/mL) were identified for predicting which children are at significantly increased risk of HIV-related clinical illness, after accounting for CD4 level, hemoglobin level, and other significant factors.
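The threshold-scanning procedure described in the Methods can be sketched as follows (illustrative Python, not the study's code): for each candidate cut-point, a Cox model with a binary above-threshold indicator is fit, and the models are ranked by AIC. The synthetic data and the use of the lifelines library are assumptions of this sketch.

```python
# Sketch: scan viral-load cut-points, fit a Cox model per cut-point,
# and pick the one with the lowest AIC (illustrative data and columns).
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 550
vl = rng.lognormal(8.0, 2.0, n)                       # toy viral loads, copies/mL
time = rng.exponential(24.0, n)                       # toy follow-up, months
event = rng.binomial(1, np.clip(vl / vl.max(), 0.05, 0.6))

results = {}
for cut in [400, 1000, 2600, 10_000, 32_000, 50_000]:
    df = pd.DataFrame({"time": time, "event": event,
                       "vl_above_cut": (vl > cut).astype(int)})
    cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
    k = len(cph.params_)                              # number of fitted coefficients
    results[cut] = -2.0 * cph.log_likelihood_ + 2.0 * k   # partial-likelihood AIC

best = min(results, key=results.get)
print(f"best cut-point by AIC: >{best} copies/mL")
```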
Abstract:
The escape dynamics of a classical light ray inside a corrugated waveguide is characterised by the use of scaling arguments. The model is described via a two-dimensional nonlinear and area-preserving mapping. The phase space of the mapping contains a set of periodic islands surrounded by a large chaotic sea that is confined by a set of invariant tori. When a hole is introduced in the chaotic sea, letting the ray escape, the histogram of the frequency of escaping particles exhibits rapid growth, reaching a maximum value at n_p and later decaying asymptotically to zero. The behaviour of the escape-frequency histogram is characterised using scaling arguments. The scaling formalism is widely applicable to critical phenomena and is useful in the characterisation of phase transitions, including transitions from limited to unlimited energy growth in two-dimensional time-varying billiard problems.
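A toy version of the escape-histogram experiment is sketched below. As an assumption, the paper's corrugated-waveguide mapping is replaced by a generic two-dimensional area-preserving map on the torus (a standard-map-like system); the nonlinearity strength, hole position, and ensemble sizes are illustrative.

```python
# Sketch: open a "hole" in the chaotic sea of an area-preserving map and
# histogram the iteration at which each orbit escapes. The map below is a
# generic stand-in, not the paper's corrugated-waveguide mapping.
import numpy as np
import matplotlib.pyplot as plt

K = 1.5                       # nonlinearity, assumed chaotic regime
HOLE = (0.45, 0.55)           # escape window in the action-like variable
N_ORBITS, N_MAX = 20_000, 5_000

rng = np.random.default_rng(2)
theta = rng.uniform(0, 2 * np.pi, N_ORBITS)
p = rng.uniform(0.0, 0.05, N_ORBITS)        # start deep in the chaotic sea
escape_n = np.full(N_ORBITS, -1)

alive = np.arange(N_ORBITS)
for n in range(1, N_MAX + 1):
    p[alive] = (p[alive] + (K / (2 * np.pi)) * np.sin(theta[alive])) % 1.0
    theta[alive] = (theta[alive] + 2 * np.pi * p[alive]) % (2 * np.pi)
    escaped = alive[(p[alive] > HOLE[0]) & (p[alive] < HOLE[1])]
    escape_n[escaped] = n                    # record escape iteration
    alive = alive[escape_n[alive] < 0]       # keep only orbits still inside
    if alive.size == 0:
        break

plt.hist(escape_n[escape_n > 0], bins=100)
plt.xlabel("escape iteration n")
plt.ylabel("frequency")
plt.show()   # rapid growth to a peak near n_p, then slow asymptotic decay
```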
Abstract:
Background: Magnetic hyperthermia is a clinical therapy approved in the European Union for the treatment of tumor cells that uses magnetic nanoparticles (MNPs) under time-varying magnetic fields (TVMFs). The same basic principle seems promising against the trypanosomatids that cause Chagas disease and sleeping sickness, given that the available therapeutic drugs have severe side effects and that drug-resistant strains exist. However, no applications of this strategy against protozoan-induced diseases have been reported so far. In the present study, Crithidia fasciculata, a widely used model for therapeutic strategies against pathogenic trypanosomatids, was targeted with Fe3O4 MNPs in order to provoke cell death remotely using TVMFs. Methods: Iron oxide MNPs with average diameters of approximately 30 nm were synthesized by precipitation of FeSO4 in basic medium. The MNPs were added to C. fasciculata choanomastigotes in the exponential phase and incubated overnight, and excess MNPs were removed using a DEAE-cellulose resin column. The amount of MNPs taken up per cell was determined by magnetic measurement. The cells bearing MNPs were subjected to TVMFs using a homemade AC field applicator (f = 249 kHz, H = 13 kA/m), and the temperature variation during the experiments was measured. Scanning electron microscopy was used to assess morphological changes after the TVMF experiments. Cell viability was analyzed using an MTT colorimetric assay and flow cytometry. Results: MNPs were incorporated into the cells, with no noticeable cytotoxicity. When a TVMF was applied to cells bearing MNPs, massive cell death was induced via a nonapoptotic mechanism. No effects were observed when a TVMF was applied to control cells not loaded with MNPs. No macroscopic rise in temperature was observed in the extracellular medium during the experiments. Conclusion: As a proof of principle, these data indicate that intracellular hyperthermia is a suitable technology to induce the death of protozoan parasites bearing MNPs. These findings expand the possibilities for new therapeutic strategies to combat parasitic infection.
Abstract:
A deep theoretical analysis of the graph cut image segmentation framework presented in this paper simultaneously translates into important contributions in several directions. The most important practical contribution of this work is a full theoretical description, and implementation, of a novel powerful segmentation algorithm, GC_max. The output of GC_max coincides with a version of a segmentation algorithm known as Iterative Relative Fuzzy Connectedness, IRFC. However, GC_max is considerably faster than the classic IRFC algorithm, which we prove theoretically and show experimentally. Specifically, we prove that, in the worst-case scenario, the GC_max algorithm runs in linear time with respect to the variable M = |C| + |Z|, where |C| is the image scene size and |Z| is the size of the allowable range Z of the associated weight/affinity function. For most implementations, Z is identical to the set of allowable image intensity values, and its size can be treated as small with respect to |C|, meaning that O(M) = O(|C|). In such a situation, GC_max runs in linear time with respect to the image size |C|. We show that the output of GC_max constitutes a solution of a graph cut energy minimization problem, in which the energy is defined as the ℓ_∞ norm ‖F_P‖_∞ of the map F_P that associates, with every element e from the boundary of an object P, its weight w(e). This formulation brings IRFC algorithms into the realm of graph cut energy minimizers, with energy functions ‖F_P‖_q for q ∈ [1, ∞]. Of these, the best-known minimization problem is for the energy ‖F_P‖_1, which is solved by the classic min-cut/max-flow algorithm, often referred to as the Graph Cut algorithm. We note that the minimization problem for ‖F_P‖_q, q ∈ [1, ∞), is identical to that for ‖F_P‖_1 when the original weight function w is replaced by w^q. Thus, any algorithm GC_sum solving the ‖F_P‖_1 minimization problem also solves the one for ‖F_P‖_q with q ∈ [1, ∞), so just two algorithms, GC_sum and GC_max, suffice to solve all ‖F_P‖_q-minimization problems. We also show that, for any fixed weight assignment, the solutions of the ‖F_P‖_q-minimization problems converge to a solution of the ‖F_P‖_∞-minimization problem (the identity ‖F_P‖_∞ = lim_{q→∞} ‖F_P‖_q alone is not enough to deduce that). An experimental comparison of the performance of the GC_max and GC_sum algorithms is included. This concentrates on comparing the actual (as opposed to provable worst-case) running times of the algorithms, as well as the influence of the choice of seeds on the output.
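The reduction noted above, that the ‖F_P‖_q problem becomes an ordinary min-cut problem once each weight w(e) is replaced by w(e)^q, can be sketched in a few lines (illustrative Python using networkx, not the paper's implementation; the toy graph, weights, and seed nodes are assumptions):

```python
# Sketch: solve the ‖F_P‖_q minimization by running standard min-cut
# (the GC_sum role) on capacities w**q.
import networkx as nx

def gc_sum_q(G: nx.Graph, source, sink, q: float) -> set:
    """Minimize the ‖F_P‖_q energy via min-cut on reweighted capacities."""
    H = nx.DiGraph()
    for u, v, data in G.edges(data=True):
        cap = data["weight"] ** q
        H.add_edge(u, v, capacity=cap)   # undirected edge as two arcs
        H.add_edge(v, u, capacity=cap)
    _, (reachable, _) = nx.minimum_cut(H, source, sink)
    return reachable                     # the object P: source side of the cut

# Toy 4-node graph; 's' and 't' play the role of object/background seeds.
G = nx.Graph()
G.add_edge("s", "a", weight=3.0)
G.add_edge("a", "b", weight=1.0)         # weak link where the cut should pass
G.add_edge("b", "t", weight=3.0)
G.add_edge("s", "b", weight=0.5)

for q in (1, 2, 8):
    print(q, sorted(gc_sum_q(G, "s", "t", q)))
```

As q grows, the w^q reweighting lets the largest boundary edge dominate the sum, which is the intuition behind the convergence of the ‖F_P‖_q solutions toward the ‖F_P‖_∞ (GC_max) solution.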
Abstract:
Current SoC design trends are characterized by the integration of a growing number of IPs targeting a wide range of application fields. Such multi-application systems are constrained by a set of requirements. In this scenario, networks-on-chip (NoCs) are becoming more important as the on-chip communication structure. Designing an optimal NoC that satisfies the requirements of each individual application requires the specification of a large set of configuration parameters, leading to a wide solution space. It has been shown that IP mapping is one of the most critical parameters in NoC design, strongly influencing SoC performance. IP mapping has been solved for single-application systems using single- and multi-objective optimization algorithms. In this paper we propose the use of a multi-objective adaptive immune algorithm (M²AIA), an evolutionary approach, to solve the multi-application NoC mapping problem. Latency and power consumption were adopted as the target objective functions. To assess the efficiency of our approach, our results are compared with those of genetic and branch-and-bound multi-objective mapping algorithms. We tested 11 well-known benchmarks, including random and real applications, combining up to 8 applications on the same SoC. The experimental results show that M²AIA reduces power consumption and latency by, on average, 27.3% and 42.1% compared to the branch-and-bound approach and 29.3% and 36.1% compared to the genetic approach.
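The multi-objective character of the mapping problem can be illustrated with a toy sketch (not the paper's M²AIA): candidate placements of IPs onto a small mesh are scored on two objectives and filtered by Pareto dominance. The 2x2 mesh, traffic volumes, and cost proxies below are assumptions for illustration.

```python
# Sketch: enumerate IP-to-tile mappings on a tiny mesh NoC, score each on
# toy latency and power proxies, and keep the Pareto-optimal set.
import itertools

MESH = [(x, y) for x in range(2) for y in range(2)]   # 2x2 mesh tiles
TRAFFIC = {(0, 1): 10.0, (1, 2): 5.0, (2, 3): 8.0}    # toy IP-to-IP volumes
POWER_W = {(0, 1): 1.0, (1, 2): 3.0, (2, 3): 0.5}     # toy per-flow energy weights

def hops(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])        # XY-routing distance

def evaluate(mapping):
    """Return (latency, power) proxies weighted differently per flow."""
    lat = sum(v * hops(mapping[s], mapping[d]) for (s, d), v in TRAFFIC.items())
    pw = sum(w * hops(mapping[s], mapping[d]) for (s, d), w in POWER_W.items())
    return lat, pw

def dominates(f, g):
    return all(a <= b for a, b in zip(f, g)) and any(a < b for a, b in zip(f, g))

front = []
for perm in itertools.permutations(MESH):             # all placements of 4 IPs
    cand = (perm, evaluate(dict(enumerate(perm))))
    front = [c for c in front if not dominates(cand[1], c[1])]
    if not any(dominates(c[1], cand[1]) for c in front):
        front.append(cand)

for mapping, objs in front:
    print(objs, mapping)
```

A metaheuristic such as an immune or genetic algorithm replaces the exhaustive enumeration above once the mesh and application set grow beyond toy sizes.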
Abstract:
The main objective of this work is to present an efficient method for phasor estimation based on a compact Genetic Algorithm (cGA) implemented in a Field Programmable Gate Array (FPGA). To validate the proposed method, an Electrical Power System (EPS) simulated with the Alternative Transients Program (ATP) provides the data used by the cGA. These data are as close as possible to the actual data provided by the EPS. Real-life situations such as islanding, sudden load increase, and permanent faults were considered. The implementation aims to take advantage of the inherent parallelism of Genetic Algorithms in a compact and optimized way, making them an attractive option for practical real-time estimation in Phasor Measurement Units (PMUs).
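The core cGA loop, a probability vector updated by pairwise tournaments in place of an explicit population, can be sketched in software as follows (illustrative Python; the paper's FPGA encoding, sampling rate, and fitness function are not reproduced here, so the waveform setup and bit widths below are assumptions):

```python
# Sketch: compact GA estimating the amplitude and phase of a 60 Hz phasor
# from samples of an assumed noiseless test waveform.
import numpy as np

FS, F0, N_SAMP = 960.0, 60.0, 16             # assumed 16 samples per cycle
t = np.arange(N_SAMP) / FS
TRUE_AMP, TRUE_PH = 1.3, 0.7                 # toy "measured" phasor
signal = TRUE_AMP * np.cos(2 * np.pi * F0 * t + TRUE_PH)

BITS = 10                                    # bits per parameter (assumed)
rng = np.random.default_rng(3)

def decode(bits):
    """Map two 10-bit fields to amplitude in [0, 2] and phase in [-pi, pi]."""
    a = int("".join(map(str, bits[:BITS])), 2) / (2**BITS - 1) * 2.0
    ph = int("".join(map(str, bits[BITS:])), 2) / (2**BITS - 1) * 2 * np.pi - np.pi
    return a, ph

def fitness(bits):
    a, ph = decode(bits)
    return -np.sum((signal - a * np.cos(2 * np.pi * F0 * t + ph)) ** 2)

L, POP = 2 * BITS, 100                       # chromosome length, virtual population
p = np.full(L, 0.5)                          # cGA probability vector
for _ in range(10_000):
    x = (rng.random(L) < p).astype(int)      # sample two virtual individuals
    y = (rng.random(L) < p).astype(int)
    win, lose = (x, y) if fitness(x) >= fitness(y) else (y, x)
    p = np.clip(p + (win - lose) / POP, 0.0, 1.0)  # shift toward the winner

print("estimated (amp, phase):", decode((p > 0.5).astype(int)))
```

The appeal of the cGA for hardware is that it stores only one probability value per bit rather than a full population, which maps naturally onto small on-chip memories and simple update logic.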