193 results for Enhanced genetic algorithms
Abstract:
Beta-blockers, as a class, improve cardiac function and survival in heart failure (HF). However, the molecular mechanisms underlying these beneficial effects remain elusive. In the present study, metoprolol and carvedilol were used at doses that produce comparable heart rate reduction to assess their beneficial effects in a genetic model of sympathetic hyperactivity-induced HF (alpha(2A)/alpha(2C)-ARKO mice). Five-month-old HF mice were randomly assigned to receive saline, metoprolol or carvedilol for 8 weeks, and age-matched wild-type (WT) mice were used as controls. HF mice displayed baseline tachycardia, systolic dysfunction evaluated by echocardiography, a 50% mortality rate, increased cardiac myocyte width (50%) and ventricular fibrosis (3-fold) compared with WT. All these responses were significantly improved by both treatments. Cardiomyocytes from HF mice showed a reduced peak [Ca(2+)](i) transient (13%) by confocal microscopy imaging. Interestingly, while metoprolol improved the [Ca(2+)](i) transient, carvedilol had no effect on the peak [Ca(2+)](i) transient but accelerated [Ca(2+)](i) transient decay. We then examined the influence of carvedilol on cardiac oxidative stress as an alternative target to explain its beneficial effects. Indeed, HF mice showed a 10-fold decrease in the cardiac reduced/oxidized glutathione ratio compared with WT, which was significantly improved only by carvedilol treatment. Taken together, we provide direct evidence that the beneficial effects of metoprolol were mainly associated with improved cardiac Ca(2+) transients and the net balance of cardiac Ca(2+) handling proteins, while carvedilol preferentially improved the cardiac redox state. (C) 2008 Elsevier Inc. All rights reserved.
Abstract:
Sympathetic hyperactivity (SH) and renin-angiotensin system (RAS) activation are commonly associated with heart failure (HF), even though the relative contribution of these factors to the cardiac derangement is less understood. The role of SH on RAS components and its consequences for HF were investigated in alpha(2A)/alpha(2C) adrenoceptor knockout (alpha(2A)/alpha(2C)ARKO) mice, which present SH with evidence of HF by 7 mo of age. Cardiac and systemic RAS components and plasma norepinephrine (PN) levels were evaluated in male adult mice at 3 and 7 mo of age. In addition, cardiac morphometric analysis, collagen content, exercise tolerance, and hemodynamic assessments were made. At 3 mo, alpha(2A)/alpha(2C)ARKO mice showed no signs of HF, while displaying elevated PN, activation of local and systemic RAS components, and increased cardiomyocyte width (16%) compared with wild-type mice (WT). In contrast, at 7 mo, alpha(2A)/alpha(2C)ARKO mice presented clear signs of HF accompanied only by cardiac activation of angiotensinogen, elevated ANG II levels, and increased collagen content (twofold). Consistent with this local activation of RAS, 8 wk of ANG II AT(1) receptor blocker treatment restored cardiac structure and function to levels comparable to the WT. Collectively, these data provide direct evidence that cardiac RAS activation plays a major role underlying the structural and functional abnormalities associated with genetic SH-induced HF in mice.
Abstract:
BACKGROUND: Xylitol is a sugar alcohol (polyalcohol) with many interesting properties for pharmaceutical and food products. It is currently produced by a chemical process, which has some disadvantages such as high energy requirements. Therefore, microbiological production of xylitol has been studied as an alternative, but its viability depends on optimisation of the fermentation variables. Among these, aeration is fundamental, because xylitol is produced only under adequate oxygen availability. In most experiments with xylitol-producing yeasts, low volumetric oxygen transfer coefficient (K(L)a) values are used to maintain microaerobic conditions. However, in the present study the use of relatively high K(L)a values resulted in high xylitol production. The effect of aeration was also evaluated via the profiles of xylose reductase (XR) and xylitol dehydrogenase (XD) activities during the experiments. RESULTS: The highest XR specific activity (1.45 +/- 0.21 U mg(protein)(-1)) was achieved during the experiment with the lowest K(L)a value (12 h(-1)), while the highest XD specific activity (0.19 +/- 0.03 U mg(protein)(-1)) was observed with a K(L)a value of 25 h(-1). Xylitol production was enhanced when K(L)a was increased from 12 to 50 h(-1), which resulted in the best condition observed, corresponding to a xylitol volumetric productivity of 1.50 +/- 0.08 g(xylitol) L(-1) h(-1) and an efficiency of 71 +/- 6.0%. CONCLUSION: The results showed that the enzyme activities during xylitol bioproduction depend greatly on the initial K(L)a value (oxygen availability). This finding supplies important information for further studies in molecular biology and genetic engineering aimed at improving xylitol bioproduction. (C) 2008 Society of Chemical Industry
Abstract:
Motivation: Understanding the patterns of association between polymorphisms at different loci in a population (linkage disequilibrium, LD) is of fundamental importance in various genetic studies. Many coefficients have been proposed for measuring the degree of LD, but they provide only a static view of the current LD structure. Generative models (GMs) were proposed to go beyond these measures, giving not only a description of the actual LD structure but also a tool to help in understanding the process that generated such structure. GMs based on coalescent theory have been the most appealing because they link LD to evolutionary factors. Nevertheless, inference and parameter estimation for such models remain computationally challenging. Results: We present a more practical method to build GMs that describe LD. The method is based on learning weighted Bayesian network structures from haplotype data, extracting equivalence structure classes and using them to model LD. The results obtained on public data from the HapMap database show that the method is a promising tool for modeling LD. The associations represented by the learned models are correlated with the traditional LD measure D'. The method was able to represent LD blocks found by standard tools. The granularity of the association blocks and the readability of the models can be controlled in the method. The results suggest that the causality information gained by our method can be useful for assessing the conservation of genetic markers and for guiding the selection of a subset of representative markers.
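The comparison above against the traditional measure D' can be made concrete. Below is a minimal sketch of computing D' for two biallelic loci from the four haplotype counts; the counts used in the example are hypothetical illustration data, not HapMap values:

```python
def d_prime(n_ab, n_aB, n_Ab, n_AB):
    """D' for two biallelic loci from the four haplotype counts.

    D = p(AB) - p(A)p(B); D' normalizes D by its maximum attainable
    value given the allele frequencies, so |D'| lies in [0, 1]."""
    n = n_ab + n_aB + n_Ab + n_AB
    p_AB = n_AB / n
    p_A = (n_Ab + n_AB) / n          # frequency of allele A at locus 1
    p_B = (n_aB + n_AB) / n          # frequency of allele B at locus 2
    d = p_AB - p_A * p_B
    if d >= 0:
        d_max = min(p_A * (1 - p_B), (1 - p_A) * p_B)
    else:
        d_max = min(p_A * p_B, (1 - p_A) * (1 - p_B))
    return 0.0 if d_max == 0 else d / d_max

# Strong positive association between alleles A and B:
print(d_prime(40, 10, 10, 40))   # -> 0.6
```

A learned model's edge strengths can then be checked for correlation against D' values computed this way over all marker pairs.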
Abstract:
Power loss reduction in distribution systems (DSs) is a nonlinear and multiobjective problem. Service restoration in DSs is computationally even harder, since it additionally requires a solution in real time. Both DS problems are computationally complex: for large-scale networks, the usual problem formulation has thousands of constraint equations. The node-depth encoding (NDE) enables a modeling of DS problems that eliminates several constraint equations from the usual formulation, making the problem solution simpler. In addition, a multiobjective evolutionary algorithm (EA) based on subpopulation tables adequately models several objectives and constraints, enabling a better exploration of the search space. The combination of the multiobjective EA with NDE (MEAN) results in the proposed approach for solving DS problems on large-scale networks. Simulation results show that MEAN is able to find adequate restoration plans for a real DS with 3860 buses and 632 switches in a running time of 0.68 s. Moreover, MEAN exhibits sublinear running time as a function of system size. Tests with networks ranging from 632 to 5166 switches indicate that MEAN can find network configurations corresponding to a power loss reduction of 27.64% for very large networks while requiring relatively little running time.
Abstract:
Modal filters may be obtained by a properly designed weighted sum of the output signals of an array of sensors distributed on the host structure. Although several research groups have been interested in techniques for designing and implementing modal filters based on a given array of sensors, the effect of the array topology on the effectiveness of the modal filter has received much less attention. In particular, it is known that some parameters, such as the size, shape and location of a sensor, are very important in determining the observability of a vibration mode. Hence, this paper presents a methodology for the topological optimization of an array of sensors in order to maximize the effectiveness of a set of selected modal filters. This is done using a genetic algorithm optimization technique to select, from an array of 36 piezoceramic sensors regularly distributed on an aluminum plate, the 12 sensors that maximize the filtering performance, over a given frequency range, of a set of modal filters, each one aiming to isolate one of the first vibration modes. The vectors of weighting coefficients for each modal filter are evaluated using QR decomposition of the complex frequency response function matrix. Results show that the array topology is not very important at lower frequencies but greatly affects the filter effectiveness at higher frequencies. Therefore, it is possible to improve the effectiveness and frequency range of a set of modal filters by optimizing the topology of an array of sensors. Indeed, using 12 properly located piezoceramic sensors bonded on an aluminum plate, it is shown that the frequency range of a set of modal filters may be enlarged by 25-50%.
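The selection of 12 sensors out of 36 can be sketched as a subset-selection genetic algorithm. This is a minimal illustration, not the paper's implementation: the per-sensor observability scores are randomly generated stand-ins for the measured FRF data, the fitness simply rewards subsets that jointly observe every mode well, and the QR-based weighting step is omitted:

```python
import random

random.seed(1)

N_SENSORS, N_PICK, N_MODES = 36, 12, 5

# Hypothetical observability score of each sensor for each mode; in the
# paper this role is played by the measured FRF data of the 36 patches.
SCORES = [[random.random() for _ in range(N_MODES)] for _ in range(N_SENSORS)]

def fitness(subset):
    """Reward subsets whose sensors jointly observe every mode well."""
    return sum(max(SCORES[s][m] for s in subset) for m in range(N_MODES))

def crossover(a, b):
    # Child is a random 12-element subset of the parents' combined sensors.
    return tuple(sorted(random.sample(sorted(set(a) | set(b)), N_PICK)))

def mutate(sub):
    # Swap one selected sensor for any sensor not otherwise selected.
    sub = list(sub)
    out = random.randrange(N_PICK)
    stay = set(sub) - {sub[out]}
    sub[out] = random.choice([s for s in range(N_SENSORS) if s not in stay])
    return tuple(sorted(sub))

pop = [tuple(sorted(random.sample(range(N_SENSORS), N_PICK)))
       for _ in range(40)]
for _ in range(60):                       # generations
    pop.sort(key=fitness, reverse=True)
    elite = pop[:10]
    pop = elite + [mutate(crossover(*random.sample(elite, 2)))
                   for _ in range(30)]

best = max(pop, key=fitness)
print(len(best), round(fitness(best), 3))
```

In the paper's setting, the fitness of a candidate subset would instead be computed from the modal filters obtained via QR decomposition of that subset's FRF matrix.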
Abstract:
Converting aeroelastic vibrations into electricity for low power generation has received growing attention over the past few years. In addition to potential applications for aerospace structures, the goal is to develop alternative and scalable configurations for wind energy harvesting to use in wireless electronic systems. This paper presents modeling and experiments of aeroelastic energy harvesting using piezoelectric transduction with a focus on exploiting combined nonlinearities. An airfoil with plunge and pitch degrees of freedom (DOF) is investigated. Piezoelectric coupling is introduced to the plunge DOF while nonlinearities are introduced through the pitch DOF. A state-space model is presented and employed for the simulations of the piezoaeroelastic generator. A two-state approximation to Theodorsen aerodynamics is used in order to determine the unsteady aerodynamic loads. Three case studies are presented. First, the interaction between piezoelectric power generation and linear aeroelastic behavior of a typical section is investigated for a set of resistive loads. Model predictions are compared to experimental data obtained from wind tunnel tests at the flutter boundary. In the second case study, free play nonlinearity is added to the pitch DOF and it is shown that nonlinear limit-cycle oscillations can be obtained not only above but also below the linear flutter speed. The experimental results are successfully predicted by the model simulations. Finally, the combination of cubic hardening stiffness and free play nonlinearities is considered in the pitch DOF. The nonlinear piezoaeroelastic response is investigated for different values of the nonlinear-to-linear stiffness ratio. The free play nonlinearity reduces the cut-in speed while the hardening stiffness helps in obtaining persistent oscillations of acceptable amplitude over a wider range of airflow speeds. Such nonlinearities can be introduced to aeroelastic energy harvesters (exploiting piezoelectric or other transduction mechanisms) for performance enhancement.
Abstract:
This paper presents a strategy for solving the WDM optical network planning problem: specifically, the Routing and Wavelength Allocation (RWA) problem with the objective of minimizing the number of wavelengths used, known in this form as Min-RWA. Two meta-heuristics (Tabu Search and Simulated Annealing) are applied to obtain solutions of good quality and high performance. The key point is trading a degradation of the maximum load on the virtual links for a reduction in the number of wavelengths used; the objective is a good compromise between the virtual-topology metric (load in Gb/s) and the physical-topology metric (number of wavelengths). The simulations suggest good results when compared to existing approaches in the literature.
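The Simulated Annealing side of such an approach can be sketched on a toy instance of the wavelength-assignment subproblem. Everything below is hypothetical: six lightpaths with fixed routes, a conflict whenever two lightpaths share a fiber link, and a cost that counts distinct wavelengths while penalizing conflicts heavily:

```python
import math
import random

random.seed(7)

# Hypothetical lightpaths, each given by the set of fiber links it uses;
# two lightpaths routed over a shared link must get distinct wavelengths.
PATHS = [{0, 1}, {1, 2}, {2, 3}, {0, 3}, {1, 3}, {0, 2}]
MAX_WL = len(PATHS)                      # trivial upper bound on wavelengths

def cost(assign):
    """Number of distinct wavelengths plus a heavy penalty per conflict."""
    conflicts = sum(
        1
        for i in range(len(PATHS))
        for j in range(i + 1, len(PATHS))
        if assign[i] == assign[j] and PATHS[i] & PATHS[j]
    )
    return len(set(assign)) + 100 * conflicts

assign = list(range(MAX_WL))             # start: one wavelength per path
best, best_cost = assign[:], cost(assign)
temp = 5.0
for step in range(4000):
    # Move: reassign one randomly chosen lightpath's wavelength.
    cand = assign[:]
    cand[random.randrange(len(PATHS))] = random.randrange(MAX_WL)
    delta = cost(cand) - cost(assign)
    if delta <= 0 or random.random() < math.exp(-delta / temp):
        assign = cand
        if cost(assign) < best_cost:
            best, best_cost = assign[:], cost(assign)
    temp *= 0.999                        # geometric cooling schedule

print(best_cost)
```

A full Min-RWA solver would also search over the routing of each lightpath; here the routes are fixed so the sketch stays compact.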
Abstract:
This technical note develops information filter and array algorithms for a linear minimum mean square error estimator of discrete-time Markovian jump linear systems. A numerical example for a two-mode Markovian jump linear system is provided to show the advantage of using array algorithms to filter this class of systems.
Abstract:
The continuous growth of peer-to-peer networks has made them responsible for a considerable portion of current Internet traffic. For this reason, improvements in P2P network resource usage are of central importance. One effective approach for addressing this issue is the deployment of locality algorithms, which allow the system to optimize the peer-selection policy for different network situations and thus maximize performance. To date, several locality algorithms have been proposed for use in P2P networks. However, they usually adopt heterogeneous criteria for measuring the proximity between peers, which hinders a coherent comparison between the different solutions. In this paper, we develop a thorough review of popular locality algorithms, based on three main characteristics: the adopted network architecture, the distance metric, and the resulting peer selection algorithm. As a result of this study, we propose a novel and generic taxonomy for locality algorithms in peer-to-peer networks, aiming to enable a better and more coherent evaluation of any individual locality algorithm.
Abstract:
This paper presents an Adaptive Maximum Entropy (AME) approach for modeling biological species. The Maximum Entropy algorithm (MaxEnt) is one of the most widely used methods for modeling the geographical distribution of biological species. The approach presented here is an alternative to the classical algorithm: instead of using the same set of features throughout training, the AME approach tries to insert or remove a single feature at each iteration. The aim is to reach convergence faster without affecting the performance of the generated models. Preliminary experiments showed improvements in both accuracy and execution time. Comparisons with other algorithms are beyond the scope of this paper. Several important research directions are proposed as future work.
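The AME idea of inserting or removing one feature per iteration can be sketched as a greedy best-move search over feature subsets. The validation score below is a hypothetical stand-in (per-feature gains with redundancy penalties), not MaxEnt's actual log-likelihood:

```python
import random

random.seed(3)

FEATURES = list(range(10))

# Hypothetical validation score for a feature subset: each feature has a
# standalone gain, and redundant pairs cancel part of each other's gain.
GAIN = {f: random.uniform(0.0, 1.0) for f in FEATURES}
REDUNDANT = {(0, 1), (2, 3), (4, 5)}

def score(subset):
    s = sum(GAIN[f] for f in subset)
    s -= sum(min(GAIN[a], GAIN[b]) for a, b in REDUNDANT
             if a in subset and b in subset)
    return s

def adaptive_selection(max_iters=2000):
    """One insertion or removal per iteration, kept only if it helps."""
    current, best = set(), 0.0
    for _ in range(max_iters):
        moves = [current | {f} for f in FEATURES if f not in current]
        moves += [current - {f} for f in current]
        cand = max(moves, key=score)
        if score(cand) <= best:
            break                        # converged: no single move helps
        current, best = cand, score(cand)
    return current, best

subset, val = adaptive_selection()
print(sorted(subset), round(val, 3))
```

In the real algorithm, each candidate move would trigger a (cheap) retraining or score update of the MaxEnt model rather than this closed-form surrogate.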
Abstract:
This paper presents a free software tool that supports next-generation mobile communications through the automatic generation of neural-network models of electronic components and devices. The tool enables the creation, training, validation and simulation of a model directly from measurements made on the devices of interest, through an interface fully oriented toward non-experts in neural modeling. The resulting model can be exported automatically to a traditional circuit simulator to test different scenarios.
Abstract:
This paper presents a family of algorithms for approximate inference in credal networks (that is, models based on directed acyclic graphs and set-valued probabilities) that contain only binary variables. Such networks can represent incomplete or vague beliefs, lack of data, and disagreements among experts; they can also encode models based on belief functions and possibilistic measures. All algorithms for approximate inference in this paper rely on exact inferences in credal networks based on polytrees with binary variables, as these inferences have polynomial complexity. We are inspired by approximate algorithms for Bayesian networks; thus the Loopy 2U algorithm resembles Loopy Belief Propagation, while the Iterated Partial Evaluation and Structured Variational 2U algorithms are, respectively, based on Localized Partial Evaluation and variational techniques. (C) 2007 Elsevier Inc. All rights reserved.
Abstract:
The cost of a new ship design heavily depends on the principal dimensions of the ship; however, minimizing these dimensions often conflicts with minimizing mean oil outflow (in the event of an accidental spill). This study demonstrates a rational methodology for selecting the optimal dimensions and coefficients of form of tankers via a genetic algorithm. A multi-objective optimization problem was formulated using two objective attributes in the evaluation of each design: total cost and mean oil outflow. In addition, a procedure that can be used to balance the designs in terms of weight and useful space is proposed. A genetic algorithm was implemented to search for optimal design parameters and to identify the nondominated Pareto frontier. At the end of this study, three real ships are used as case studies. [DOI:10.1115/1.4002740]
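Identifying the nondominated Pareto frontier over the two attributes (total cost and mean oil outflow, both minimized) can be sketched as follows; the design points are hypothetical:

```python
def pareto_front(designs):
    """Keep designs not dominated on (cost, outflow); both minimized.

    A design is dominated if some other design is at least as good in
    both attributes (and is a distinct point)."""
    front = []
    for d in designs:
        dominated = any(
            o[0] <= d[0] and o[1] <= d[1] and o != d
            for o in designs
        )
        if not dominated:
            front.append(d)
    return front

# Hypothetical (total cost, mean oil outflow) pairs for candidate designs.
designs = [(100, 9.0), (120, 5.0), (110, 6.5), (130, 8.0), (105, 9.5)]
print(pareto_front(designs))   # -> [(100, 9.0), (120, 5.0), (110, 6.5)]
```

In a multi-objective GA, this extraction runs on each generation's population, and selection pressure is applied toward the current front.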
Abstract:
Solid-liquid phase equilibrium modeling of triacylglycerol mixtures is essential for lipid design. Treating the alpha polymorph and the liquid phase as ideal, the 2-suffix Margules excess Gibbs energy model with predictive binary-parameter correlations describes the non-ideal beta and beta' solid polymorphs. Solving by direct minimization of the Gibbs free energy enables one to predict, from a bulk mixture composition, the phase compositions at a given temperature and thus the solid fat content (SFC) curve, the melting profile and the Differential Scanning Calorimetry (DSC) curve, which are related to end-user lipid properties. Phase diagram, SFC and DSC curve experimental data are qualitatively and quantitatively well predicted for the binary mixture of 1,3-dipalmitoyl-2-oleoyl-sn-glycerol (POP) and 1,2,3-tripalmitoyl-sn-glycerol (PPP); for the ternary mixture of 1,3-dimyristoyl-2-palmitoyl-sn-glycerol (MPM), 1,2-distearoyl-3-oleoyl-sn-glycerol (SSO) and 1,2,3-trioleoyl-sn-glycerol (OOO); and for palm oil and cocoa butter. The addition to palm oil of medium-long-medium type structured lipids is then evaluated, using caprylic acid as the medium chain and long-chain fatty acids (EPA, eicosapentaenoic acid; DHA, docosahexaenoic acid; gamma-linolenic, octadecatrienoic acid; and AA, arachidonic acid) as sn-2 substitutes. EPA, DHA and AA increase the melting range on both the fusion and crystallization sides, while gamma-linolenic acid shifts the melting range upwards. This predictive tool is useful for pre-screening lipids matching desired properties set a priori.
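Under the ideal-liquid assumption above, solid-liquid equilibrium with a pure solid phase reduces to the standard ideal solubility equation, ln x = (DH_fus/R)(1/Tm - 1/T). A minimal sketch, with hypothetical fusion data standing in for the polymorph-specific property set used in the full model:

```python
import math

R = 8.314                                 # J mol^-1 K^-1

# Hypothetical fusion properties for a high-melting triacylglycerol;
# real modeling would use polymorph-specific (beta, beta') data.
DH_FUS = 170_000.0                        # J mol^-1, enthalpy of fusion
T_M = 338.0                               # K, pure-component melting point

def liquidus_T(x):
    """Ideal-solution liquidus temperature for liquid mole fraction x.

    Rearranged from ln x = (DH_fus/R)(1/Tm - 1/T), the equation the
    ideal-liquid assumption reduces to for a pure solid phase."""
    return 1.0 / (1.0 / T_M - R * math.log(x) / DH_FUS)

# Dilution depresses the temperature at which solid starts to form:
for x in (1.0, 0.8, 0.5):
    print(round(liquidus_T(x), 1))
```

The full Gibbs-minimization approach generalizes this by letting the solid phases be mixed and non-ideal (Margules activity coefficients), which is what produces the SFC and DSC curves.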