177 results for Standard models
Abstract:
With each directed acyclic graph (this includes some D-dimensional lattices) one can associate some Abelian algebras that we call directed Abelian algebras (DAAs). On each site of the graph one attaches a generator of the algebra. These algebras depend on several parameters and are semisimple. Using any DAA, one can define a family of Hamiltonians which give the continuous time evolution of a stochastic process. The calculation of the spectra and ground-state wave functions (stationary state probability distributions) is an easy algebraic exercise. If one considers D-dimensional lattices and chooses Hamiltonians linear in the generators, in finite-size scaling the Hamiltonian spectrum is gapless with a critical dynamic exponent z=D. One possible application of the DAA is to sandpile models. In the paper we present this application, considering one- and two-dimensional lattices. In the one-dimensional case, when the DAA conserves the number of particles, the avalanches belong to the random walker universality class (critical exponent sigma(tau)=3/2). We study the local density of particles inside large avalanches, showing a depletion of particles at the source of the avalanche and an enrichment at its end. In two dimensions we did extensive Monte-Carlo simulations and found sigma(tau)=1.780 +/- 0.005.
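To make the quoted exponent concrete: in the random-walker universality class, avalanche sizes are distributed like the first-return times of an unbiased random walk, whose tail exponent is 3/2. The Python sketch below is not the DAA dynamics of the paper; it merely samples such first-return times and fits the tail exponent, with all run parameters chosen for illustration.

    import numpy as np

    rng = np.random.default_rng(0)

    def first_return_time(max_steps=50_000):
        # Unbiased +/-1 random walk started at the origin; the first-return time
        # plays the role of an avalanche size in the random-walker class.
        pos = 0
        for t in range(1, max_steps + 1):
            pos += 1 if rng.random() < 0.5 else -1
            if pos == 0:
                return t
        return None   # censored: the walk did not return within the cutoff

    sizes = np.array([s for s in (first_return_time() for _ in range(5_000)) if s])

    # Crude tail-exponent estimate from a log-log fit over logarithmic bins;
    # the expected value is sigma(tau) ~ 3/2.
    bins = np.logspace(0, np.log10(sizes.max()), 30)
    hist, edges = np.histogram(sizes, bins=bins, density=True)
    centers = np.sqrt(edges[:-1] * edges[1:])
    mask = hist > 0
    slope, _ = np.polyfit(np.log(centers[mask]), np.log(hist[mask]), 1)
    print("estimated exponent ~", -slope)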
Abstract:
The structure of laser glasses in the system (Y(2)O(3))(0.2){(Al(2)O(3))(x)(B(2)O(3))(0.8-x)} (0.15 <= x <= 0.40) has been investigated by means of (11)B, (27)Al, and (89)Y solid state NMR as well as electron spin echo envelope modulation (ESEEM) of Yb-doped samples. The latter technique has been applied for the first time to an aluminoborate glass system. (11)B magic-angle spinning (MAS)-NMR spectra reveal that, while the majority of the boron atoms are three-coordinated over the entire composition region, the fraction of three-coordinated boron atoms increases significantly with increasing x. Charge balance considerations as well as (11)B NMR lineshape analyses suggest that the dominant borate species are singly charged metaborate (BO(2/2)O(-)), doubly charged pyroborate (BO(1/2)(O(-))(2)), and (at x = 0.40) triply charged orthoborate groups. As x increases along this series, the average anionic charge per trigonal borate group increases from 1.38 to 2.91. (27)Al MAS-NMR spectra show that the alumina species are present in the coordination states four, five and six, and the fraction of four-coordinated Al increases markedly with increasing x. All of the Al coordination states are in intimate contact with both the three- and the four-coordinate boron species and vice versa, as indicated by (11)B/(27)Al rotational echo double resonance (REDOR) data. These results are consistent with the formation of a homogeneous, non-segregated glass structure. (89)Y solid state NMR spectra show a significant chemical shift trend, reflecting that the second coordination sphere becomes increasingly "aluminate-like" with increasing x. This conclusion is supported by ESEEM data of Yb-doped glasses, which indicate that both borate and aluminate species participate in the medium range structure of the rare-earth ions, consistent with a random spatial distribution of the glass components.
Abstract:
Background: There are several studies in the literature depicting measurement error in gene expression data and several others describing regulatory network models. However, only a small fraction combines measurement error with mathematical regulatory network models and shows how to identify these networks under different noise levels. Results: This article investigates the effects of measurement error on the estimation of the parameters in regulatory networks. Simulation studies indicate that, in both time series (dependent) and non-time series (independent) data, the measurement error strongly affects the estimated parameters of the regulatory network models, biasing them as predicted by the theory. Moreover, when testing the parameters of the regulatory network models, p-values computed by ignoring the measurement error are not reliable, since the rate of false positives is not controlled under the null hypothesis. In order to overcome these problems, we present an improved version of the Ordinary Least Squares estimator for independent (regression models) and dependent (autoregressive models) data when the variables are subject to noise. Moreover, measurement error estimation procedures for microarrays are also described. Simulation results also show that both corrected methods perform better than the standard ones (i.e., ignoring measurement error). The proposed methodologies are illustrated using microarray data from lung cancer patients and mouse liver time series data. Conclusions: Measurement error severely affects the identification of regulatory network models and must therefore be reduced or taken into account in order to avoid erroneous conclusions. This could be one of the reasons for the high biological false-positive rates identified in actual regulatory network models.
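As a hedged illustration of how a known measurement-error variance can be used to de-bias an ordinary least squares fit, the sketch below applies the classical errors-in-variables (attenuation) correction to a single noisy regressor. It is a textbook correction written in Python, not necessarily the exact estimator developed in the paper, and all numbers are simulated.

    import numpy as np

    rng = np.random.default_rng(1)

    # Simulated example: y depends on x_true with slope 2, but only a noisy
    # version x_obs is observed, with known measurement-error variance sigma_u2.
    n, true_slope, sigma_u2 = 2000, 2.0, 0.5
    x_true = rng.normal(size=n)
    y = true_slope * x_true + rng.normal(scale=0.3, size=n)
    x_obs = x_true + rng.normal(scale=np.sqrt(sigma_u2), size=n)

    # Naive OLS slope on the noisy regressor: attenuated (biased towards zero).
    slope_naive = np.cov(x_obs, y)[0, 1] / np.var(x_obs, ddof=1)

    # Corrected slope: divide by the estimated reliability ratio
    # lambda = var(x_true) / var(x_obs), using the known sigma_u2.
    reliability = (np.var(x_obs, ddof=1) - sigma_u2) / np.var(x_obs, ddof=1)
    slope_corrected = slope_naive / reliability

    print(slope_naive, slope_corrected)  # the corrected value should be close to 2.0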
Abstract:
An analytical procedure for multiple standard additions of arsenic species using sequential injection analysis (SIA) is proposed for their quantification in seafood extracts. SIA offered the flexibility to generate multiple species standards at the ng mL(-1) concentration level by adding different volumes of As(III), As(V), monomethylarsonic acid (MMA) and dimethylarsinic acid (DMA) to the sample. The mixed sample plus standard solutions were delivered from the SIA system to fill the HPLC injection loop. Subsequently, the As species were separated by HPLC and analyzed by atomic fluorescence spectrometry (AFS). The proposed system comprised two independently controlled modules, with the HPLC loop acting as the intermediary device. The analytical frequency was enhanced by combining the actions of both modules: while the added sample was flowing through the chromatographic column towards the detection system, the SIA program started performing the standard additions to another sample. The proposed method was applied to spoiled seafood extracts. Detection limits based on 3 sigma for As(III), As(V), MMA and DMA were 0.023, 0.39, 0.45 and 1.0 ng mL(-1), respectively. (C) 2011 Elsevier B.V. All rights reserved.
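For readers unfamiliar with the standard-additions calculation that underlies this procedure, the short Python sketch below regresses the measured signal against the added concentration and extrapolates to zero signal; the numbers are purely illustrative, not data from the paper.

    import numpy as np

    # Hypothetical standard-additions series for one As species:
    # concentration added to the sample aliquot (ng/mL) vs. measured AFS signal.
    added = np.array([0.0, 1.0, 2.0, 4.0, 8.0])
    signal = np.array([0.52, 0.78, 1.03, 1.55, 2.57])

    slope, intercept = np.polyfit(added, signal, 1)
    # Extrapolating the calibration line to zero signal gives the analyte
    # concentration already present in the measured solution.
    c_sample = intercept / slope
    print(f"concentration in the measured solution ~ {c_sample:.2f} ng/mL")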
Abstract:
Objective: We carry out a systematic assessment of a suite of kernel-based learning machines on the task of epilepsy diagnosis through automatic electroencephalogram (EEG) signal classification. Methods and materials: The kernel machines investigated include the standard support vector machine (SVM), the least squares SVM, the Lagrangian SVM, the smooth SVM, the proximal SVM, and the relevance vector machine. An extensive series of experiments was conducted on publicly available data, whose clinical EEG recordings were obtained from five normal subjects and five epileptic patients. The performance levels delivered by the different kernel machines are contrasted in terms of predictive accuracy, sensitivity to the kernel function/parameter value, and sensitivity to the type of features extracted from the signal. For this purpose, 26 values for the kernel parameter (radius) of two well-known kernel functions (namely, Gaussian and exponential radial basis functions) were considered, as well as 21 types of features extracted from the EEG signal, including statistical values derived from the discrete wavelet transform, Lyapunov exponents, and combinations thereof. Results: We first quantitatively assess the impact of the choice of the wavelet basis on the quality of the features extracted. Four wavelet basis functions were considered in this study. Then, we provide the average cross-validation accuracy values delivered by 252 kernel machine configurations; in particular, 40%/35% of the best-calibrated models of the standard and least squares SVMs reached a 100% accuracy rate for the two kernel functions considered. Moreover, we show the sensitivity profiles exhibited by a large sample of the configurations, whereby one can visually inspect their levels of sensitivity to the type of feature and to the kernel function/parameter value. Conclusions: Overall, the results indicate that all kernel machines are competitive in terms of accuracy, with the standard and least squares SVMs prevailing more consistently. Moreover, the choice of the kernel function and parameter value as well as the choice of the feature extractor are critical decisions to be taken, although the choice of the wavelet family seems not to be so relevant. Also, the statistical values calculated over the Lyapunov exponents were good sources of signal representation, but not as informative as their wavelet counterparts. Finally, a typical sensitivity profile has emerged among all types of machines, involving some regions of stability separated by zones of sharp variation, with some kernel parameter values clearly associated with better accuracy rates (zones of optimality). (C) 2011 Elsevier B.V. All rights reserved.
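A minimal sketch of the kind of sensitivity sweep described above, written with scikit-learn: a Gaussian (RBF) kernel SVM is cross-validated over a few kernel-width settings (scikit-learn parameterizes the kernel by gamma rather than by a radius). The feature matrix here is a random placeholder standing in for wavelet- or Lyapunov-based EEG features, so the printed accuracies are meaningless; only the workflow is illustrated.

    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.default_rng(2)
    X = rng.normal(size=(200, 21))       # placeholder: 200 EEG segments x 21 features
    y = rng.integers(0, 2, size=200)     # placeholder labels: normal vs. epileptic

    # Sweep the Gaussian-kernel width as one axis of the sensitivity analysis.
    for gamma in (0.01, 0.1, 1.0):
        clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", gamma=gamma, C=1.0))
        accuracy = cross_val_score(clf, X, y, cv=5).mean()
        print(gamma, accuracy)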
Abstract:
Here, I investigate the use of Bayesian updating rules applied to modeling how social agents change their minds in the case of continuous opinion models. Given another agent's statement about the continuous value of a variable, we will see that interesting dynamics emerge when an agent assigns a likelihood to that value that is a mixture of a Gaussian and a uniform distribution. This represents the idea that the other agent might have no idea about what is being talked about. The effect of updating only the first moment of the distribution will be studied, and we will see that this generates results similar to those of the bounded confidence models. On also updating the second moment, several different opinions always survive in the long run, as agents become more stubborn with time. However, depending on the probability of error and the initial uncertainty, those opinions might be clustered around a central value.
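A minimal Python sketch of a moment-matched update of this kind, under illustrative parameter choices (mixture weight, communication noise, and a constant density standing in for the uniform component); it follows the abstract's description rather than reproducing the paper's exact equations.

    import math

    def normal_pdf(x, mean, var):
        return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

    def update(mu, var, x, eps=0.1, tau2=0.1, flat_density=1.0):
        # One Bayesian update of an agent's opinion, summarized by its mean mu and
        # variance var, after hearing the value x from another agent. The other
        # agent's statement is modelled as Gaussian around the truth (weight 1-eps)
        # or completely uninformative, i.e. a flat density (weight eps).
        m_gauss = normal_pdf(x, mu, var + tau2)
        q = (1 - eps) * m_gauss / ((1 - eps) * m_gauss + eps * flat_density)
        # Conjugate (Gaussian-component) posterior.
        gain = var / (var + tau2)
        mu_g, var_g = mu + gain * (x - mu), var * tau2 / (var + tau2)
        # Moment matching: mean and variance of the two-component posterior mixture.
        mean = q * mu_g + (1 - q) * mu
        second_moment = q * (var_g + mu_g ** 2) + (1 - q) * (var + mu ** 2)
        return mean, second_moment - mean ** 2

    print(update(0.0, 1.0, 0.5))   # nearby statement: the opinion moves towards 0.5
    print(update(0.0, 1.0, 10.0))  # distant statement: largely discounted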
Abstract:
Today several different unsupervised classification algorithms are commonly used to cluster similar patterns in a data set based only on its statistical properties. Especially in image data applications, self-organizing methods for unsupervised classification have been successfully applied for clustering pixels or groups of pixels in order to perform segmentation tasks. The first important contribution of this paper is the development of a self-organizing method for data classification, named Enhanced Independent Component Analysis Mixture Model (EICAMM), built by introducing modifications to the Independent Component Analysis Mixture Model (ICAMM). These modifications address some of the model's limitations and aim to make it more efficient. Moreover, a pre-processing methodology is also proposed, based on combining Sparse Code Shrinkage (SCS) for image denoising with the Sobel edge detector. In the experiments of this work, the EICAMM and other self-organizing models were applied to segment images in their original and pre-processed versions. A comparative analysis showed satisfactory and competitive image segmentation results obtained by the proposals presented herein. (C) 2008 Published by Elsevier B.V.
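The Sobel half of that pre-processing step is standard and easy to illustrate; the sketch below computes a gradient-magnitude edge map with SciPy on a placeholder image (the Sparse Code Shrinkage denoising stage is omitted, so this is only a partial, assumed version of the pipeline).

    import numpy as np
    from scipy import ndimage

    rng = np.random.default_rng(3)
    image = rng.random((64, 64))   # placeholder grayscale image

    # Sobel derivatives along rows and columns, combined into an edge-strength map.
    gx = ndimage.sobel(image, axis=0)
    gy = ndimage.sobel(image, axis=1)
    edges = np.hypot(gx, gy)
    print(edges.shape, edges.max())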
Abstract:
Background: Although the Clock Drawing Test (CDT) is the second most used test in the world for the screening of dementia, there is still debate over its sensitivity, specificity, application and interpretation in dementia diagnosis. This study has three main aims: to evaluate the sensitivity and specificity of the CDT in a sample composed of older adults with Alzheimer's disease (AD) and normal controls; to compare CDT accuracy to that of the Mini-Mental State Examination (MMSE) and the Cambridge Cognitive Examination (CAMCOG); and to test whether the combination of the MMSE with the CDT yields accuracy higher than or comparable to that reported for the CAMCOG. Methods: Cross-sectional assessment was carried out for 121 AD patients and 99 elderly controls with heterogeneous educational levels from a geriatric outpatient clinic who completed the Cambridge Examination for Mental Disorders of the Elderly (CAMDEX). The CDT was evaluated according to the Shulman, Mendez and Sunderland scales. Results: The CDT showed high sensitivity and specificity. There were significant correlations between the CDT and the MMSE (0.700-0.730; p < 0.001) and between the CDT and the CAMCOG (0.753-0.779; p < 0.001). The combination of the CDT with the MMSE improved sensitivity and specificity (SE = 89.2-90%; SP = 71.7-79.8%). Subgroup analysis indicated that for elderly people with lower education, sensitivity and specificity were both adequate and high. Conclusions: The CDT is a robust screening test when compared with the MMSE or the CAMCOG, independent of the scale used for its interpretation. The combination with the MMSE improves its performance significantly, making it equivalent to the CAMCOG.
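For reference, sensitivity and specificity are simple functions of the screening test's confusion matrix. The sketch below uses illustrative counts chosen to be consistent with the sample sizes and the ranges quoted above; they are not the study's actual cell counts.

    # Illustrative confusion-matrix counts for the combined CDT + MMSE rule,
    # assuming 121 AD patients and 99 controls (not the study's actual cells).
    true_pos, false_neg = 108, 13    # AD patients flagged / missed
    true_neg, false_pos = 79, 20     # controls correctly cleared / falsely flagged

    sensitivity = true_pos / (true_pos + false_neg)   # ~0.892, within 89.2-90%
    specificity = true_neg / (true_neg + false_pos)   # ~0.798, within 71.7-79.8%
    print(round(sensitivity, 3), round(specificity, 3))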
Abstract:
The aims of the present study were to compare the effects of two periodization models on metabolic syndrome risk factors in obese adolescents and to verify whether the angiotensin-converting enzyme (ACE) genotype is important in establishing these effects. A total of 32 post-pubertal obese adolescents underwent aerobic training (AT) and resistance training (RT) for 14 weeks. The subjects were divided into linear periodization (LP, n = 16) or daily undulating periodization (DUP, n = 16) groups. Body composition, visceral and subcutaneous fat, glycemia, insulinemia, homeostasis model assessment of insulin resistance (HOMA-IR), lipid profiles, blood pressure, maximal oxygen consumption (VO(2max)), resting metabolic rate (RMR) and muscular endurance were analyzed at baseline and after the intervention. Both groups demonstrated a significant reduction in body mass, BMI, body fat, visceral and subcutaneous fat, total and low-density lipoprotein cholesterol and blood pressure, and an increase in fat-free mass, VO(2max) and muscular endurance. However, only DUP promoted a reduction in insulin concentrations and HOMA-IR. It is important to emphasize that there was no statistical difference between the LP and DUP groups; however, judging by the effect size (ES), changes in some of the metabolic syndrome risk factors of obese adolescents may be larger in the DUP group than in the LP group. Both periodization models presented a large effect on muscular endurance. Despite the limited sample size, our results suggest that the ACE genotype may influence the functional and metabolic characteristics of obese adolescents and may be considered in future strategies for massive obesity control.
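HOMA-IR, the insulin-resistance index cited above, is a simple closed-form computation from fasting glucose and insulin; a worked example with illustrative values (not the study's data):

    # HOMA-IR = (fasting glucose [mg/dL] * fasting insulin [uU/mL]) / 405,
    # equivalently (glucose [mmol/L] * insulin [uU/mL]) / 22.5.
    glucose_mg_dl = 90.0    # illustrative fasting glycemia
    insulin_uU_ml = 12.0    # illustrative fasting insulinemia

    homa_ir = glucose_mg_dl * insulin_uU_ml / 405.0
    print(round(homa_ir, 2))   # ~2.67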
Abstract:
Background: Leptin-deficient mice (Lep(ob)/Lep(ob), also known as ob/ob) are of great importance for studies of obesity, diabetes and other correlated pathologies. Thus, the generation of animals carrying the Lep(ob) gene mutation as well as additional genomic modifications has been used to associate genes with metabolic diseases. However, the infertility of Lep(ob)/Lep(ob) mice impairs this kind of breeding experiment. Objective: To propose a new method for the production of Lep(ob)/Lep(ob) animals and Lep(ob)/Lep(ob)-derived animal models by restoring the fertility of Lep(ob)/Lep(ob) mice in a stable way through white adipose tissue transplantation. Methods: For this purpose, 1 g of peri-gonadal adipose tissue from lean donors was transplanted subcutaneously into Lep(ob)/Lep(ob) animals, and a crossing strategy was established to generate Lep(ob)/Lep(ob)-derived mice. Results: The presented method reduced the number of animals used to generate double transgenic models approximately fourfold (from about 20 to 5 animals per double mutant produced) and minimized the number of genotyping steps (from 3 to 1, reducing the number of Lep gene genotyping assays from 83 to 6). Conclusion: The application of the adipose transplantation technique drastically improves both the production of Lep(ob)/Lep(ob) animals and the generation of Lep(ob)/Lep(ob)-derived animal models. International Journal of Obesity (2009) 33, 938-944; doi: 10.1038/ijo.2009.95; published online 16 June 2009
Abstract:
A highly cost-effective treatment of sulphochromic waste is proposed, employing raw coconut coir as a biosorbent for Cr(VI) removal. The ideal pH, sorption kinetics, sorption capacity and sorption sites were the biosorbent parameters studied. After testing five different isotherm models with standard solutions, the Redlich-Peterson and Toth models best fitted the experimental data, giving a theoretical Cr(VI) sorption capacity (SC) of 6.3 mg g(-1). Acid-base potentiometric titration indicated that around 73% of the sorption sites were from phenolic compounds, probably lignin. Differences between the sorption sites of the coconut coir before and after Cr adsorption, identified from Fourier transform infrared spectra, suggested a modification of the sorption sites after the sulphochromic waste treatment, indicating that the sorption mechanism involves organic matter oxidation and chromium uptake. For the sulphochromic waste treatment, the SC was improved to 26.8 +/- 0.2 mg g(-1), and the non-adsorbed Cr(VI) was reduced, so that only Cr(III) remained in the final solution. The adsorbed material was calcined to obtain Cr2O3, with a reduction of more than 60% of the original mass. (c) 2008 Elsevier B.V. All rights reserved.
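A brief sketch of how an isotherm such as Redlich-Peterson can be fitted by nonlinear least squares; the equilibrium data below are hypothetical and the code is only meant to illustrate the fitting step, not to reproduce the paper's analysis.

    import numpy as np
    from scipy.optimize import curve_fit

    def redlich_peterson(c, kr, ar, g):
        # q = Kr*C / (1 + ar*C^g); reduces to the Langmuir isotherm when g = 1.
        return kr * c / (1.0 + ar * c ** g)

    # Hypothetical equilibrium data: Ce in mg/L, qe in mg/g (not the paper's values).
    ce = np.array([2.0, 5.0, 10.0, 20.0, 40.0, 80.0])
    qe = np.array([1.1, 2.3, 3.5, 4.6, 5.5, 6.0])

    params, _ = curve_fit(redlich_peterson, ce, qe, p0=(1.0, 0.2, 1.0), maxfev=10000)
    print(dict(zip(("Kr", "ar", "g"), params)))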
Abstract:
Obesity has been shown to impair myocardial performance. Nevertheless, the mechanisms underlying the participation of calcium (Ca(2+)) handling in the cardiac dysfunction of obesity models remain unknown. L-type Ca(2+) channels and the sarcoplasmic reticulum (SR) Ca(2+)-ATPase (SERCA2a) may contribute to the cardiac dysfunction induced by obesity. The purpose of this study was to investigate whether myocardial dysfunction in obese rats is related to decreased activity and/or expression of L-type Ca(2+) channels and SERCA2a. Male 30-day-old Wistar rats were fed either a standard diet (C) or, on an alternating basis, four palatable high-fat diets (Ob) for 15 weeks. Obesity was determined by the adiposity index, and comorbidities were evaluated. Myocardial function was evaluated in isolated left ventricular papillary muscles under basal conditions and after inotropic and lusitropic maneuvers. L-type Ca(2+) channel and SERCA2a activities were determined using specific blockers, while changes in the amount of channels were evaluated by Western blot analysis. Phospholamban (PLB) protein expression and the SERCA2a/PLB ratio were also determined. Compared with C rats, the Ob rats had increased body fat, adiposity index and several comorbidities. The Ob muscles developed similar baseline data, but myocardial responsiveness to the post-rest contraction stimulus and to increased extracellular Ca(2+) was compromised. Diltiazem promoted greater inhibition of developed tension in obese rats. In addition, there were no changes in L-type Ca(2+) channel protein content or in SERCA2a behavior (activity and expression). In conclusion, the myocardial dysfunction caused by obesity is related to impaired L-type Ca(2+) channel activity, without significant changes in SERCA2a expression and function or in L-type Ca(2+) channel protein levels. J. Cell. Physiol. 226: 2934-2942, 2011. (C) 2011 Wiley-Liss, Inc.
Abstract:
Fourier transform near infrared (FT-NIR) spectroscopy was evaluated as an analytical tool for monitoring residual lignin, kappa number and hexenuronic acids (HexA) content in kraft pulps of Eucalyptus globulus. Sets of pulp samples were prepared under different cooking conditions to obtain a wide range of compound concentrations and were characterised by conventional wet chemistry analytical methods. The sample group was also analysed using FT-NIR spectroscopy in order to establish prediction models for the pulp characteristics. Several models were applied to correlate the chemical composition of the samples with the NIR spectral data by means of PCR or PLS algorithms. Calibration curves were built by using all the spectral data or selected regions. The best calibration models for the quantification of lignin, kappa number and HexA presented R(2) values of 0.99. The calibration models were used to predict the pulp characteristics of 20 external samples in a validation set. The lignin concentration and kappa number, in the ranges of 1.4-18% and 8-62, respectively, were predicted fairly accurately (standard error of prediction, SEP, of 1.1% for lignin and 2.9 for kappa number). The HexA concentration (range of 5-71 mmol kg(-1) pulp) was more difficult to predict: the SEP was 7.0 mmol kg(-1) pulp in a model of HexA quantified by an ultraviolet (UV) technique and 6.1 mmol kg(-1) pulp in a model of HexA quantified by anion-exchange chromatography (AEC). Even among the wet chemical procedures used for HexA determination there is no good agreement between methods, as demonstrated by the UV and AEC methods described in the present work. NIR spectroscopy did provide a rapid estimate of HexA content in kraft pulps prepared in routine cooking experiments.
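The calibration/validation workflow described above (PLS regression on spectra, then SEP on an external validation set) can be sketched with scikit-learn; the spectra and reference values below are synthetic placeholders, so only the procedure, not the reported figures, is reproduced.

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(4)
    X = rng.normal(size=(60, 500))                              # placeholder NIR spectra
    y = X[:, :10].sum(axis=1) + rng.normal(scale=0.5, size=60)  # placeholder reference values

    # Hold out 20 samples as an external validation set, as in the study design.
    X_cal, X_val, y_cal, y_val = train_test_split(X, y, test_size=20, random_state=0)

    pls = PLSRegression(n_components=5).fit(X_cal, y_cal)
    y_pred = pls.predict(X_val).ravel()
    sep = np.sqrt(np.mean((y_val - y_pred) ** 2))               # standard error of prediction
    print(round(sep, 3))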
Abstract:
Motivation: Understanding the patterns of association between polymorphisms at different loci in a population (linkage disequilibrium, LD) is of fundamental importance in various genetic studies. Many coefficients have been proposed for measuring the degree of LD, but they provide only a static view of the current LD structure. Generative models (GMs) have been proposed to go beyond these measures, giving not only a description of the actual LD structure but also a tool to help understand the process that generated such structure. GMs based on coalescent theory have been the most appealing because they link LD to evolutionary factors. Nevertheless, the inference and parameter estimation of such models are still computationally challenging. Results: We present a more practical method to build GMs that describe LD. The method is based on learning weighted Bayesian network structures from haplotype data, extracting equivalence structure classes and using them to model LD. The results obtained on public data from the HapMap database showed that the method is a promising tool for modeling LD. The associations represented by the learned models are correlated with the traditional LD measure D'. The method was able to represent LD blocks found by standard tools. The granularity of the association blocks and the readability of the models can be controlled in the method. The results suggest that the causal information gained by our method can be useful to assess the conservability of the genetic markers and to guide the selection of a subset of representative markers.
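For reference, the D' coefficient mentioned above is a normalized version of the basic LD coefficient D; a small Python sketch with illustrative (non-HapMap) frequencies:

    def d_prime(p_a, p_b, p_ab):
        # |D'|: normalized linkage disequilibrium for two biallelic loci, from the
        # allele frequencies p_a, p_b and the haplotype frequency p_ab.
        d = p_ab - p_a * p_b
        if d >= 0:
            d_max = min(p_a * (1 - p_b), (1 - p_a) * p_b)
        else:
            d_max = min(p_a * p_b, (1 - p_a) * (1 - p_b))
        return abs(d) / d_max if d_max > 0 else 0.0

    print(d_prime(0.3, 0.4, 0.2))   # D = 0.08, |D'| ~ 0.44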
Abstract:
The airflow velocities and pressures are calculated from a three-dimensional model of the human larynx by using the finite element method. The laryngeal airflow is assumed to be incompressible, isothermal, steady, and created by fixed pressure drops. The influence of different laryngeal profiles (convergent, parallel, and divergent), of the glottal area, and of the dimensions of the false vocal folds on the airflow is investigated. The results indicate that vertical and horizontal phase differences in the laryngeal tissue movements are influenced by the nonlinear pressure distribution across the glottal channel, and that the glottal entrance shape influences the air pressure distribution inside the glottis. Additionally, the false vocal folds increase the glottal duct pressure drop by creating a new constricted channel in the larynx, and they alter the airflow vortices formed downstream of the true vocal folds. (C) 2007 Elsevier Ltd. All rights reserved.
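For a sense of the scale of the pressure drops involved, a much cruder inviscid estimate (continuity plus Bernoulli across a single constriction) can be computed in a few lines; the areas and upstream velocity are illustrative assumptions, and this back-of-envelope model is in no way a substitute for the 3D finite element solution described above.

    # Crude continuity + Bernoulli estimate of the pressure drop across a glottal
    # constriction (illustrative values, not taken from the paper's FEM model).
    rho = 1.2            # air density, kg/m^3
    a_sub = 3.0e-4       # assumed subglottal duct cross-section, m^2
    a_glottis = 5.0e-6   # assumed glottal area, m^2
    v_sub = 0.5          # assumed upstream airflow velocity, m/s

    v_glottis = v_sub * a_sub / a_glottis              # continuity: A1*v1 = A2*v2
    delta_p = 0.5 * rho * (v_glottis**2 - v_sub**2)    # Bernoulli along a streamline
    print(round(delta_p, 1), "Pa")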