996 results for Genetic generalized epilepsy
Abstract:
This article focuses on identifying the number of paths of different lengths between pairs of nodes in complex networks and on how these paths can be used to characterize topological properties of theoretical and real-world complex networks. This analysis revealed that the number of paths can discriminate between network models better than traditional network measurements. In addition, the analysis of real-world networks suggests that long-range connectivity tends to be limited in these networks and may be strongly related to network growth and organization.
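A minimal sketch of the kind of path statistic this abstract describes, counting simple (self-avoiding) paths of each length between a pair of nodes; this is an illustration built on networkx, not the authors' code, and the graph, node pair and length cutoff are arbitrary choices:

import networkx as nx
from collections import Counter

def path_length_profile(G, source, target, max_len=5):
    """Count simple (self-avoiding) paths of each length up to max_len edges."""
    counts = Counter()
    for path in nx.all_simple_paths(G, source, target, cutoff=max_len):
        counts[len(path) - 1] += 1  # path length = number of edges
    return counts

G = nx.barabasi_albert_graph(100, 2, seed=1)  # a toy scale-free network
print(path_length_profile(G, 0, 10))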
Abstract:
In the last decade the Sznajd model has been successfully employed in modeling some properties and scale features of both proportional and majority elections. We propose a version of the Sznajd model with a generalized bounded confidence rule, i.e., a rule that limits the convincing capability of agents and that is essential to allow the coexistence of opinions in the stationary state. With an appropriate choice of parameters it can be reduced to previous models. We solved this model both in a mean-field approach (for an arbitrary number of opinions) and numerically on a Barabási-Albert network (for three and four opinions), studying the transient and the possible stationary states. We built the phase portrait for the special cases of three and four opinions, defining the attractors and their basins of attraction. Through this analysis, we were able to understand and explain discrepancies between mean-field and simulation results obtained in previous works for the usual Sznajd model with bounded confidence and three opinions. Both the dynamical-system approach and our generalized bounded confidence rule are quite general, and we believe they can be useful for understanding other similar models.
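A hedged sketch of one possible discrete-opinion Sznajd update with a bounded-confidence restriction, in the spirit of the model described above (the update rule and parameter names are illustrative assumptions, not the paper's exact formulation):

import random
import networkx as nx

def sznajd_step(G, opinion, eps=1):
    """One update: an agreeing pair convinces neighbors within distance eps."""
    i = random.choice(list(G))            # pick a random agent
    nbrs = list(G[i])
    if not nbrs:
        return
    j = random.choice(nbrs)               # and one of its neighbors
    if opinion[i] != opinion[j]:
        return                            # only agreeing pairs convince
    for k in set(G[i]) | set(G[j]):
        # bounded confidence: convinced only if opinions are close enough
        if abs(opinion[k] - opinion[i]) <= eps:
            opinion[k] = opinion[i]

G = nx.barabasi_albert_graph(500, 3, seed=0)
opinion = {n: random.randrange(3) for n in G}  # three opinions: 0, 1, 2
for _ in range(10000):
    sznajd_step(G, opinion, eps=1)

With eps=1 and three opinions, the extreme opinions 0 and 2 cannot convert each other directly, which is the kind of restriction that permits opinion coexistence.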
Abstract:
The Sznajd model is a sociophysics model that mimics the propagation of opinions in a closed society, where the interactions favor groups of agreeing people. It is based on the Ising and Potts ferromagnetic models and, although the original model used only linear chains, it has since been adapted to general networks. This model has a very rich transient, which has been used to model several aspects of elections, but its stationary states are always consensus states. In order to model more complex behaviors, we have, in a recent work, introduced biases and prejudices into the Sznajd model by generalizing the bounded confidence rule, which is common to many continuous opinion models, to what we called confidence rules. In that work we found that the mean-field version of this model (corresponding to a complete network) allows for stationary states where noninteracting opinions survive, but never for the coexistence of interacting opinions. In the present work, we provide networks that allow for the coexistence of interacting opinions for certain confidence rules. Moreover, we show that the model does not become inactive; that is, the opinions keep changing even in the stationary regime. This is an important result in the context of understanding how a rule that breeds local conformity is still able to sustain global diversity while avoiding a frozen stationary state. We also provide results that give some insight into how this behavior approaches the mean-field behavior as the networks are changed.
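As a toy illustration of the confidence-rule generalization mentioned above, such a rule can be encoded as a matrix of convincing probabilities; the bounded-confidence rule is recovered when entries are 1 for sufficiently close opinions and 0 otherwise. The values below are made up for illustration and are not the paper's rules:

import random

# R[a][b]: probability that an agreeing pair pushing opinion a
# converts a neighbor currently holding opinion b (illustrative values;
# here opinions 0 and 2 ignore each other, encoding a "prejudice").
R = [[1.0, 0.5, 0.0],
     [0.5, 1.0, 0.5],
     [0.0, 0.5, 1.0]]

def convince(a, b):
    """Neighbor holding opinion b adopts opinion a with probability R[a][b]."""
    return a if random.random() < R[a][b] else b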
Abstract:
A simple and completely general representation of the exact exchange-correlation functional of density-functional theory is derived from the universal Lieb-Oxford bound, which holds for any Coulomb-interacting system. This representation leads to an alternative point of view on popular hybrid functionals, providing a rationale for why they work and how they can be constructed. A similar representation of the exact correlation functional allows the construction of fully nonempirical hyper-generalized-gradient approximations (HGGAs), radically departing from established paradigms of functional construction. Numerical tests of these HGGAs for atomic and molecular correlation energies and molecular atomization energies show that even simple HGGAs match or outperform state-of-the-art correlation functionals currently used in solid-state physics and quantum chemistry.
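For reference, the universal Lieb-Oxford bound invoked above is commonly written as follows (in Hartree atomic units; the constant shown is the commonly quoted value, not a number taken from this article):

% Lieb-Oxford lower bound on the exchange-correlation energy
E_{xc}[n] \;\ge\; -C_{\mathrm{LO}} \int n(\mathbf{r})^{4/3}\, d^{3}r,
\qquad C_{\mathrm{LO}} \approx 1.68 .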
Abstract:
A simultaneous optimization strategy based on a neuro-genetic approach is proposed for the selection of laser-induced breakdown spectroscopy operational conditions for the simultaneous determination of macronutrients (Ca, Mg and P), micronutrients (B, Cu, Fe, Mn and Zn), Al and Si in plant samples. A laser-induced breakdown spectroscopy system equipped with a 10 Hz Q-switched Nd:YAG laser (12 ns, 532 nm, 140 mJ) and an Echelle spectrometer with an intensified charge-coupled device was used. Integration time gate, delay time, amplification gain and number of pulses were optimized. Pellets of spinach leaves (NIST 1570a) were employed as laboratory samples. In order to find a model that could correlate laser-induced breakdown spectroscopy operational conditions with simultaneously high peak areas for all elements, a Bayesian Regularized Artificial Neural Network approach was employed. Subsequently, a genetic algorithm was applied to find optimal conditions for the neural network model, in an approach called neuro-genetic. A single laser-induced breakdown spectroscopy working condition that maximizes the peak areas of all elements simultaneously was obtained with the following optimized parameters: 9.0 µs integration time gate, 1.1 µs delay time, 225 (a.u.) amplification gain and 30 accumulated laser pulses. The proposed approach is a useful and suitable tool for the optimization of such a complex analytical problem.
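A schematic sketch of the neuro-genetic idea: a genetic algorithm searches the operating-condition space of a trained surrogate model. The surrogate below is a stand-in peaked function, not the paper's Bayesian-regularized network, and the parameter bounds are hypothetical:

import random

BOUNDS = [(0.5, 10.0),   # integration time gate (us)
          (0.1, 5.0),    # delay time (us)
          (50, 255),     # amplification gain (a.u.)
          (1, 50)]       # accumulated pulses

def surrogate(x):
    # Stand-in for the trained network: a smooth peaked response.
    targets = (9.0, 1.1, 225, 30)
    return -sum(((xi - t) / (hi - lo)) ** 2
                for xi, t, (lo, hi) in zip(x, targets, BOUNDS))

def ga(pop_size=40, generations=60, mut=0.2):
    pop = [[random.uniform(lo, hi) for lo, hi in BOUNDS]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=surrogate, reverse=True)
        parents = pop[:pop_size // 2]            # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = [random.choice(g) for g in zip(a, b)]  # uniform crossover
            if random.random() < mut:                      # random-reset mutation
                i = random.randrange(len(BOUNDS))
                lo, hi = BOUNDS[i]
                child[i] = random.uniform(lo, hi)
            children.append(child)
        pop = parents + children
    return max(pop, key=surrogate)

print(ga())  # approaches the surrogate's optimum operating condition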
Abstract:
Objective: We carry out a systematic assessment of a suite of kernel-based learning machines for the task of epilepsy diagnosis through automatic electroencephalogram (EEG) signal classification. Methods and materials: The kernel machines investigated include the standard support vector machine (SVM), the least squares SVM, the Lagrangian SVM, the smooth SVM, the proximal SVM, and the relevance vector machine. An extensive series of experiments was conducted on publicly available data, whose clinical EEG recordings were obtained from five normal subjects and five epileptic patients. The performance levels delivered by the different kernel machines are contrasted in terms of predictive accuracy, sensitivity to the kernel function/parameter value, and sensitivity to the type of features extracted from the signal. For this purpose, 26 values of the kernel parameter (radius) of two well-known kernel functions (namely, Gaussian and exponential radial basis functions) were considered, as well as 21 types of features extracted from the EEG signal, including statistical values derived from the discrete wavelet transform, Lyapunov exponents, and combinations thereof. Results: We first quantitatively assess the impact of the choice of the wavelet basis on the quality of the features extracted; four wavelet basis functions were considered in this study. Then, we provide the average cross-validation accuracy values delivered by 252 kernel machine configurations; in particular, 40%/35% of the best-calibrated models of the standard and least squares SVMs reached a 100% accuracy rate for the two kernel functions considered. Moreover, we show the sensitivity profiles exhibited by a large sample of the configurations, whereby one can visually inspect their levels of sensitivity to the type of feature and to the kernel function/parameter value. Conclusions: Overall, the results show that all kernel machines are competitive in terms of accuracy, with the standard and least squares SVMs prevailing more consistently. Moreover, the choice of the kernel function and parameter value, as well as the choice of the feature extractor, are critical decisions, although the choice of the wavelet family seems not to be so relevant. Also, the statistical values calculated over the Lyapunov exponents were good sources of signal representation, but not as informative as their wavelet counterparts. Finally, a typical sensitivity profile emerged among all types of machines, involving regions of stability separated by zones of sharp variation, with some kernel parameter values clearly associated with better accuracy rates (zones of optimality).
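A hedged sketch of the kind of pipeline assessed above: statistics of discrete-wavelet-transform sub-bands as features and an RBF-kernel SVM scored by cross-validation. Synthetic spike-train signals stand in for the clinical EEG recordings, and all parameter values are illustrative:

import numpy as np
import pywt
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def features(signal, wavelet="db4", level=4):
    """Statistical summary of each wavelet sub-band, as in the abstract."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    return np.concatenate([(c.mean(), c.std(), np.abs(c).max())
                           for c in coeffs])

# Synthetic stand-ins: "normal" = noise, "epileptic" = noise + spike train.
X, y = [], []
for label in (0, 1):
    for _ in range(50):
        s = rng.standard_normal(512)
        if label:
            s[::64] += 8.0  # periodic spikes
        X.append(features(s))
        y.append(label)

clf = SVC(kernel="rbf", gamma=0.1)  # gamma plays the kernel-radius role
print(cross_val_score(clf, np.array(X), y, cv=5).mean())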
Abstract:
Stingless bees play an important ecological role as pollinators of many wild plant species in the tropics and have significant potential for the pollination of agricultural crops. Nevertheless, conservation efforts as well as commercial breeding programmes require better guidelines on the amount of genetic variation that is needed to maintain viable populations. In this context, we carried out a long-term genetic study on the stingless bee Melipona scutellaris to evaluate the population-viability consequences of prolonged breeding from a small number of founder colonies. In particular, we artificially imposed a genetic bottleneck by setting up a population starting from only two founder colonies and continued breeding from it for over 10 years in a location outside its natural area of occurrence. We show that, despite a great reduction in the number of alleles present at both neutral microsatellite loci and the sex-determining locus relative to the natural source population, and an increased frequency of sterile diploid male production, the genetically impoverished population could be successfully bred and maintained for at least 10 years. This shows that in stingless bees, breeding from a small stock of colonies may have less severe consequences than previously suspected. In addition, we provide a simulation model to determine the number of colonies needed to maintain a certain number of sex alleles in a population, thereby providing useful guidelines for stingless bee breeding and conservation efforts.
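A heavily simplified sketch of the kind of simulation the abstract mentions: how many sex (csd) alleles survive genetic drift in a breeding population of a given size. This Wright-Fisher-style toy ignores much of the bees' actual breeding biology and is not the authors' model; all parameter values are arbitrary:

import random

def surviving_sex_alleles(n_colonies=2, n_alleles_start=20, generations=40):
    """Distinct sex alleles left after drift in a pool of 2 copies per colony."""
    pool = [random.randrange(n_alleles_start) for _ in range(2 * n_colonies)]
    for _ in range(generations):
        # each generation resamples the allele pool with replacement (drift)
        pool = [random.choice(pool) for _ in range(len(pool))]
    return len(set(pool))

# average over replicates for a few population sizes
for n in (2, 10, 50):
    runs = [surviving_sex_alleles(n_colonies=n) for _ in range(200)]
    print(n, sum(runs) / len(runs))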
Abstract:
We evaluated the reliability and validity of a Brazilian-Portuguese version of the Epilepsy Medication Treatment Complexity Index (EMTCI). Interrater reliability was evaluated with the intraclass correlation coefficient (ICC), and validity was evaluated by correlating mean EMTCI scores with the following variables: number of antiepileptic drugs (AEDs), seizure control, patients' perception of seizure control, and adherence to the therapeutic regimen as measured with the Morisky scale. We studied patients with epilepsy followed in a tertiary university-based hospital outpatient clinic, aged 18 years or older, independent in activities of daily living, and without cognitive impairment or active psychiatric disease. ICCs ranged from 0.721 to 0.999. Mean EMTCI scores were significantly correlated with the variables assessed. Higher EMTCI scores were associated with an increasing number of AEDs, uncontrolled seizures, patients' perception of lack of seizure control, and poorer adherence to the therapeutic regimen. The results indicate that the Brazilian-Portuguese EMTCI is reliable and valid for clinical use in Brazil. The Brazilian-Portuguese EMTCI version may be a useful tool in developing strategies to minimize treatment complexity, possibly improving seizure control and quality of life in people with epilepsy in our milieu.
Abstract:
The role of exercise training (ET) on the cardiac renin-angiotensin system (RAS) was investigated in 3-5-month-old mice lacking alpha(2A)- and alpha(2C)-adrenoceptors (alpha(2A)/alpha(2C)ARKO), which present heart failure (HF), and in wild-type controls (WT). ET consisted of 8 weeks of 60-min running sessions, 5 days/week. In addition, exercise tolerance and cardiac structure and function were assessed. At 3 months, fractional shortening and exercise tolerance were similar between groups. At 5 months, alpha(2A)/alpha(2C)ARKO mice displayed ventricular dysfunction and fibrosis associated with increased cardiac angiotensin (Ang) II levels (2.9-fold) and increased local angiotensin-converting enzyme (ACE) activity (18%). ET decreased alpha(2A)/alpha(2C)ARKO cardiac Ang II levels and ACE activity to the levels of age-matched untrained WT mice, while increasing ACE2 expression and preventing exercise intolerance and ventricular dysfunction, with little impact on cardiac remodeling. Altogether, these data provide evidence that reduced cardiac RAS explains, at least in part, the beneficial effects of ET on cardiac function in a genetic model of HF.
Abstract:
Beta-blockers, as a class, improve cardiac function and survival in heart failure (HF). However, the molecular mechanisms underlying these beneficial effects remain elusive. In the present study, metoprolol and carvedilol were used in doses that produce comparable heart rate reduction to assess their beneficial effects in a genetic model of sympathetic hyperactivity-induced HF (alpha(2A)/alpha(2C)-ARKO mice). Five-month-old HF mice were randomly assigned to receive saline, metoprolol or carvedilol for 8 weeks, and age-matched wild-type mice (WT) were used as controls. HF mice displayed baseline tachycardia, systolic dysfunction evaluated by echocardiography, a 50% mortality rate, increased cardiac myocyte width (50%) and ventricular fibrosis (3-fold) compared with WT. All of these responses were significantly improved by both treatments. Cardiomyocytes from HF mice showed a reduced peak [Ca(2+)](i) transient (13%) on confocal microscopy imaging. Interestingly, while metoprolol improved the [Ca(2+)](i) transient, carvedilol had no effect on the peak [Ca(2+)](i) transient but accelerated [Ca(2+)](i) transient decay. We then examined the influence of carvedilol on cardiac oxidative stress as an alternative target to explain its beneficial effects. Indeed, HF mice showed a 10-fold decrease in the cardiac reduced/oxidized glutathione ratio compared with WT, which was significantly improved only by carvedilol treatment. Taken together, we provide direct evidence that the beneficial effects of metoprolol were mainly associated with improved cardiac Ca(2+) transients and the net balance of cardiac Ca(2+) handling proteins, while carvedilol preferentially improved the cardiac redox state.
Abstract:
Sympathetic hyperactivity (SH) and renin-angiotensin system (RAS) activation are commonly associated with heart failure (HF), even though the relative contribution of these factors to the cardiac derangement is less understood. The role of SH on RAS components and its consequences for HF were investigated in alpha(2A)- and alpha(2C)-adrenoceptor knockout mice (alpha(2A)/alpha(2C)ARKO), which present SH with evidence of HF by 7 mo of age. Cardiac and systemic RAS components and plasma norepinephrine (PN) levels were evaluated in male adult mice at 3 and 7 mo of age. In addition, cardiac morphometric analysis, collagen content, exercise tolerance, and hemodynamic assessments were made. At 3 mo, alpha(2A)/alpha(2C)ARKO mice showed no signs of HF, while displaying elevated PN, activation of local and systemic RAS components, and increased cardiomyocyte width (16%) compared with wild-type mice (WT). In contrast, at 7 mo, alpha(2A)/alpha(2C)ARKO mice presented clear signs of HF accompanied only by cardiac activation of angiotensinogen and ANG II levels and increased collagen content (twofold). Consistent with this local activation of RAS, 8 wk of ANG II AT(1) receptor blocker treatment restored cardiac structure and function to levels comparable to the WT. Collectively, these data provide direct evidence that cardiac RAS activation plays a major role underlying the structural and functional abnormalities associated with a genetic SH-induced HF in mice.
Abstract:
Motivation: Understanding the patterns of association between polymorphisms at different loci in a population (linkage disequilibrium, LD) is of fundamental importance in various genetic studies. Many coefficients have been proposed for measuring the degree of LD, but they provide only a static view of the current LD structure. Generative models (GMs) were proposed to go beyond these measures, giving not only a description of the actual LD structure but also a tool to help understand the process that generated such structure. GMs based on coalescent theory have been the most appealing because they link LD to evolutionary factors. Nevertheless, the inference and parameter estimation of such models are still computationally challenging. Results: We present a more practical method to build GMs that describe LD. The method is based on learning weighted Bayesian network structures from haplotype data, extracting equivalence structure classes and using them to model LD. The results obtained on public data from the HapMap database showed that the method is a promising tool for modeling LD. The associations represented by the learned models are correlated with the traditional LD measure D'. The method was able to represent LD blocks found by standard tools. The granularity of the association blocks and the readability of the models can be controlled in the method. The results suggest that the causality information gained by our method can be useful for assessing the conservation of genetic markers and for guiding the selection of a subset of representative markers.
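For reference, the classical LD coefficient D' mentioned above, for two loci with alleles A and B, haplotype frequency p_AB and marginal allele frequencies p_A and p_B, in its standard form:

% linkage disequilibrium coefficient D and its normalized form D'
D = p_{AB} - p_A\, p_B, \qquad D' = \frac{D}{D_{\max}},
\quad
D_{\max} =
\begin{cases}
\min\{\, p_A (1 - p_B),\; (1 - p_A)\, p_B \,\}, & D > 0,\\
\min\{\, p_A\, p_B,\; (1 - p_A)(1 - p_B) \,\}, & D < 0.
\end{cases}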
Abstract:
Voltage and current waveforms of a distribution or transmission power system are not pure sinusoids; there are distortions that can be represented as a combination of the fundamental frequency, harmonics and high-frequency transients. This paper presents a novel approach to identifying harmonics in distorted power system waveforms. The proposed method is based on Genetic Algorithms, an optimization technique inspired by genetics and natural evolution. GOOAL, a specially designed intelligent algorithm for optimization problems, was successfully implemented and tested. Two chromosome representations are used: binary and real. The results show that the proposed method is more precise than the traditional Fourier transform, especially with the real representation of the chromosomes.
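A toy sketch of the harmonic-identification idea: a real-coded chromosome holds the amplitudes of the fundamental and a few odd harmonics, and fitness is the negated squared error against the measured waveform. This is a generic GA for illustration, not the GOOAL algorithm, and all constants are made up:

import math
import random

F0 = 60.0                       # fundamental frequency (Hz)
HARMONICS = (1, 3, 5, 7)        # harmonic orders in the chromosome
T = [i / 2000 for i in range(200)]   # 0.1 s sampled at 2 kHz
true_amps = (1.0, 0.20, 0.10, 0.05)  # "measured" waveform to recover
measured = [sum(a * math.sin(2 * math.pi * h * F0 * t)
                for a, h in zip(true_amps, HARMONICS)) for t in T]

def fitness(amps):
    """Negated sum of squared errors against the measured waveform."""
    return -sum((sum(a * math.sin(2 * math.pi * h * F0 * t)
                     for a, h in zip(amps, HARMONICS)) - m) ** 2
                for t, m in zip(T, measured))

random.seed(0)
pop = [[random.uniform(0, 2) for _ in HARMONICS] for _ in range(60)]
for _ in range(200):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:20]            # keep the best, refill by mutating elites
    pop = elite + [[g + random.gauss(0, 0.05) for g in random.choice(elite)]
                   for _ in range(40)]
print([round(a, 2) for a in max(pop, key=fitness)])  # ~ true_amps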
Abstract:
The general flowshop scheduling problem is a production problem where a set of n jobs have to be processed with an identical flow pattern on m machines. In permutation flowshops the sequence of jobs is the same on all machines. A significant research effort has been devoted to sequencing jobs in a flowshop so as to minimize the makespan. This paper describes the application of a Constructive Genetic Algorithm (CGA) to makespan minimization in flowshop scheduling. The CGA was proposed recently as an alternative to traditional GA approaches, particularly for evaluating schemata directly. The population, initially formed only by schemata, evolves under recombination into a population of well-adapted structures (schemata instantiation). The CGA implemented is based on the classic NEH heuristic and a local search heuristic used to define the fitness functions. The parameters of the CGA are calibrated using a Design of Experiments (DOE) approach. The computational results are compared against some other successful algorithms from the literature on Taillard's well-known standard benchmark. The computational experience shows that this innovative CGA approach provides competitive results for flowshop scheduling problems.
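For context, a compact sketch of the classic NEH heuristic on which the CGA's fitness definition is built (illustrative Python with random processing times, not the paper's implementation):

import random

def makespan(seq, p):
    """Completion time of the last job on the last machine (permutation flowshop)."""
    m = len(p[0])
    c = [0.0] * m                       # completion times per machine
    for j in seq:
        c[0] += p[j][0]
        for k in range(1, m):
            c[k] = max(c[k], c[k - 1]) + p[j][k]
    return c[-1]

def neh(p):
    """NEH: order jobs by decreasing total time, insert each at its best slot."""
    jobs = sorted(range(len(p)), key=lambda j: -sum(p[j]))
    seq = []
    for j in jobs:
        seq = min((seq[:i] + [j] + seq[i:] for i in range(len(seq) + 1)),
                  key=lambda s: makespan(s, p))
    return seq

random.seed(1)
p = [[random.randint(1, 20) for _ in range(5)] for _ in range(10)]  # 10 jobs x 5 machines
s = neh(p)
print(s, makespan(s, p))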
Abstract:
The Generalized Finite Element Method (GFEM) is employed in this paper for the numerical analysis of three-dimensional solids under nonlinear behavior. A brief summary of the GFEM as well as a description of the formulation of the hexahedral element based on the proposed enrichment strategy are initially presented. Next, in order to introduce the nonlinear analysis of solids, two constitutive models are briefly reviewed: Lemaitre's model, in which damage and plasticity are coupled, and Mazars's damage model, suitable for concrete under increasing loading. Both models are employed in the framework of a nonlocal approach to ensure solution objectivity. In the numerical analyses carried out, a selective enrichment of the approximation at regions of concern in the domain (mainly those with high strain and damage gradients) is exploited. Such a possibility makes the three-dimensional analysis less expensive and more practicable, since the re-meshing resources characteristic of h-adaptivity can be minimized. Moreover, the combination of three-dimensional analysis and selective enrichment provides a valuable tool for a better description of the distributions of both damage and plastic strain.
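For reference, the scalar isotropic-damage relation that both constitutive models build on, together with Mazars's equivalent strain measure, in their standard textbook forms (not the paper's complete formulation):

% stress-strain law degraded by a scalar damage variable d,
% and Mazars's equivalent strain built from positive principal strains
\boldsymbol{\sigma} = (1 - d)\,\mathbb{C} : \boldsymbol{\varepsilon},
\qquad
\tilde{\varepsilon} = \sqrt{\sum_{i=1}^{3} \langle \varepsilon_i \rangle_+^{2}},
\qquad 0 \le d \le 1,

where the epsilon_i are the principal strains and the angle brackets with subscript + denote the positive part.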