926 results for ASSESSMENT MODELS
Abstract:
Umbilical cord mesenchymal stromal cells (MSCs) have been widely investigated for cell-based therapy as an alternative to bone marrow transplantation. Umbilical cord tissue is a rich source of MSCs, with the potential to differentiate into at least muscle, cartilage, fat, and bone cells in vitro. The possibility of replacing defective muscle cells through cell therapy is a promising approach for the treatment of progressive muscular dystrophies (PMDs), independently of the specific gene mutation. Preclinical studies in different models of muscular dystrophy are therefore of utmost importance. The main objective of the present study is to evaluate whether umbilical cord MSCs have the potential to reach and differentiate into muscle cells in vivo in two animal models of PMDs. To address this question, we injected (1) human umbilical cord tissue (hUCT) MSCs into the caudal vein of SJL mice and (2) hUCT and canine umbilical cord vein (cUCV) MSCs intra-arterially into GRMD dogs. The results reported here support the safety of the procedure and indicate that the injected cells could engraft in the host muscle in both animal models but could not differentiate into muscle cells. These observations may provide important information for future therapies for muscular dystrophies.
Abstract:
We consider a simple Maier-Saupe statistical model with the inclusion of disorder degrees of freedom to mimic the phase diagram of a mixture of rodlike and disklike molecules. A quenched distribution of shapes leads to a phase diagram with two uniaxial phases and a biaxial nematic structure. A thermalized distribution, however, which is more adequate for liquid mixtures, precludes the stability of this biaxial phase. We then use a two-temperature formalism, and assume a separation of relaxation times, to show that a partial degree of annealing is already sufficient to stabilize a biaxial nematic structure.
Abstract:
We propose and analyze two different Bayesian online algorithms for learning in discrete Hidden Markov Models and compare their performance with that of the well-known Baldi-Chauvin algorithm. Using the Kullback-Leibler divergence as a measure of generalization, we draw learning curves for these algorithms in simplified situations and compare their performance.
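For reference, the Kullback-Leibler divergence used above as a generalization measure can be computed for discrete distributions as follows (a generic sketch, not the paper's code):

```python
import numpy as np

def kl_divergence(p, q):
    """Kullback-Leibler divergence D(p || q) between two discrete
    probability distributions given as arrays of probabilities.
    Only terms with p > 0 contribute to the sum."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

# D(p || p) = 0; the divergence is nonnegative and asymmetric.
p = [0.5, 0.3, 0.2]
q = [0.4, 0.4, 0.2]
```

The asymmetry is why the direction of the divergence (true model versus learned model) must be fixed when drawing learning curves.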
Abstract:
The lightest supersymmetric particle may decay with branching ratios that correlate with neutrino oscillation parameters. In this case the CERN Large Hadron Collider (LHC) has the potential to probe the atmospheric neutrino mixing angle with a sensitivity competitive with its low-energy determination by underground experiments. Under realistic detection assumptions, we identify the necessary conditions for the experiments at CERN's LHC to probe the simplest scenario for neutrino masses induced by minimal supergravity with bilinear R-parity violation.
Abstract:
We study the potential of the CERN Large Hadron Collider (LHC) to probe the spin of new massive vector boson resonances predicted by Higgsless models. We consider their production via weak boson fusion, which relies only on the coupling between the new resonances and the weak gauge bosons. We show that the LHC will be able to unravel the spin of the particles associated with the partial restoration of unitarity in vector boson scattering for integrated luminosities of 150-560 fb^-1, depending on the mass of the new state and on the method used in the analyses.
Abstract:
We show that the common singularities present in generic modified gravity models governed by actions of the type S = ∫ d^4x √(-g) f(R, φ, X), with X = -(1/2) g^ab ∂_a φ ∂_b φ, are essentially the same anisotropic instabilities associated with the hypersurface F(φ) = 0 in the case of a nonminimal coupling of the type F(φ)R, clarifying the physical origin of such singularities, which typically arise in rather complex and cumbersome inhomogeneous perturbation analyses. We show, moreover, that such anisotropic instabilities typically give rise to dynamically unavoidable singularities, completely precluding the possibility of physically viable models for which the hypersurface ∂f/∂R = 0 is attained. Some examples are explicitly discussed.
Abstract:
In one-component Abelian sandpile models, the toppling probabilities are independent quantities. This is not the case in multicomponent models. The condition of associativity of the underlying Abelian algebras imposes nonlinear relations among the toppling probabilities. These relations are derived for the case of two-component quadratic Abelian algebras. We show that Abelian sandpile models with two conservation laws have only trivial avalanches.
Abstract:
With each directed acyclic graph (this includes some D-dimensional lattices) one can associate some Abelian algebras that we call directed Abelian algebras (DAAs). On each site of the graph one attaches a generator of the algebra. These algebras depend on several parameters and are semisimple. Using any DAA, one can define a family of Hamiltonians which give the continuous-time evolution of a stochastic process. The calculation of the spectra and ground-state wave functions (stationary-state probability distributions) is an easy algebraic exercise. If one considers D-dimensional lattices and chooses Hamiltonians linear in the generators, in finite-size scaling the Hamiltonian spectrum is gapless with a critical dynamic exponent z = D. One possible application of the DAA is to sandpile models. In the paper we present this application, considering one- and two-dimensional lattices. In the one-dimensional case, when the DAA conserves the number of particles, the avalanches belong to the random-walker universality class (critical exponent σ_τ = 3/2). We study the local density of particles inside large avalanches, showing a depletion of particles at the source of the avalanche and an enrichment at its end. In two dimensions we performed extensive Monte Carlo simulations and found σ_τ = 1.780 ± 0.005.
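The random-walker universality class mentioned above shares its t^(-3/2) power law with the first-return times of a simple symmetric random walk; this connection can be sketched as follows (an illustrative analogy with assumed sample sizes, not the paper's simulation):

```python
import random

def first_return_time(max_steps=10**5, rng=random.Random(0)):
    """Number of steps until a symmetric 1D random walk first returns
    to the origin, capped at max_steps (None if it never returns in
    time).  First-return times follow a t^(-3/2) power law, the same
    exponent as random-walker-class avalanche statistics."""
    pos, t = 0, 0
    while t < max_steps:
        pos += rng.choice((-1, 1))
        t += 1
        if pos == 0:
            return t
    return None

# Sample many walks; a return to the origin always takes an even
# number of steps, and a small heavy-tail fraction never returns.
times = [first_return_time() for _ in range(1000)]
returned = [t for t in times if t is not None]
```

A log-log histogram of `returned` would show the characteristic slope of roughly -3/2 over the intermediate range.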
Abstract:
The structure of laser glasses in the system (Y2O3)0.2{(Al2O3)x(B2O3)0.8-x} (0.15 ≤ x ≤ 0.40) has been investigated by means of ^11B, ^27Al, and ^89Y solid-state NMR as well as electron spin echo envelope modulation (ESEEM) of Yb-doped samples. The latter technique has been applied for the first time to an aluminoborate glass system. ^11B magic-angle spinning (MAS) NMR spectra reveal that, while the majority of the boron atoms are three-coordinated over the entire composition region, the fraction of three-coordinated boron atoms increases significantly with increasing x. Charge-balance considerations as well as ^11B NMR lineshape analyses suggest that the dominant borate species are singly charged metaborate (BO2/2O^-), doubly charged pyroborate (BO1/2(O^-)2), and (at x = 0.40) triply charged orthoborate groups. As x increases along this series, the average anionic charge per trigonal borate group increases from 1.38 to 2.91. ^27Al MAS-NMR spectra show that the alumina species are present in the coordination states four, five, and six, and that the fraction of four-coordinated Al increases markedly with increasing x. All of the Al coordination states are in intimate contact with both the three- and four-coordinated boron species and vice versa, as indicated by ^11B/^27Al rotational echo double resonance (REDOR) data. These results are consistent with the formation of a homogeneous, non-segregated glass structure. ^89Y solid-state NMR spectra show a significant chemical shift trend, reflecting that the second coordination sphere becomes increasingly "aluminate-like" with increasing x. This conclusion is supported by ESEEM data of Yb-doped glasses, which indicate that both borate and aluminate species participate in the medium-range structure of the rare-earth ions, consistent with a random spatial distribution of the glass components.
Abstract:
No-tillage mulch-based (NTM) cropping systems have been widely adopted by farmers in the Brazilian savanna region (Cerrado biome). We hypothesized that this new type of management should have a profound impact on soil organic carbon (SOC) at the regional scale and consequently on climate change mitigation. The objective of this study was thus to quantify the SOC storage potential of NTM in the oxisols of the Cerrado using a synchronic approach based on a chronosequence of fields with different numbers of years under NTM. The study consisted of three phases: (1) a farm/cropping-system survey to identify the main types of NTM systems to be chosen for the chronosequence; (2) a field survey to identify a homogeneous set of situations for the chronosequence; and (3) the characterization of the chronosequence to assess the SOC storage potential. The main NTM system practiced by farmers is an annual succession of soybean (Glycine max) or maize (Zea mays) with another cereal crop. This cropping system covers 54% of the total cultivated area in the region. At the regional level, soil organic C concentrations from NTM fields were closely correlated with the clay + silt content of the soil (r^2 = 0.64). No significant correlation was observed (r^2 = 0.07), however, between these two variables when we considered only the fields with a clay + silt content in the 500-700 g kg^-1 range. The final chronosequence of NTM fields was therefore based on a subsample of eight fields within this textural range. The SOC stocks in the 0-30 cm topsoil layer of these selected fields varied between 4.2 and 6.7 kg C m^-2 and increased on average (r^2 = 0.97) by 0.19 kg C m^-2 year^-1. After 12 years of NTM management, SOC stocks were no longer significantly different (p < 0.05) from the stocks under natural Cerrado vegetation, whereas a 23-year-old conventionally tilled and cropped field showed SOC stocks about 30% below this level.
Confirming our hypotheses, this study clearly illustrated the high potential of NTM systems for increasing SOC storage under tropical conditions, and how a synchronic approach may be used to assess such modifications efficiently on farmers' fields by identifying and excluding undesirable sources of heterogeneity (management, soils and climate).
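As a rough consistency check (our own arithmetic, not the authors'; treating the highest chronosequence stock as the Cerrado-level reference is an assumption), the reported accumulation rate is compatible with the roughly 12-year recovery time:

```python
# Values taken from the abstract above.
soc_initial = 4.2    # kg C m^-2, lowest stock in the chronosequence
soc_reference = 6.7  # kg C m^-2, highest stock in the chronosequence
rate = 0.19          # kg C m^-2 per year accumulated under NTM

# Years needed to close the gap at the measured accumulation rate;
# comes out close to the observed ~12-year convergence.
years_to_close = (soc_reference - soc_initial) / rate
```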
Abstract:
Soil bulk density values are needed to convert organic carbon content to mass of organic carbon per unit area. However, field sampling and measurement of soil bulk density are labour-intensive, costly and tedious. Near-infrared reflectance spectroscopy (NIRS) is a non-destructive, rapid, reproducible and low-cost method that characterizes materials according to their reflectance in the near-infrared spectral region. The aim of this paper was to investigate the ability of NIRS to predict soil bulk density and to compare its performance with that of published pedotransfer functions. The study was carried out on a dataset of 1184 soil samples originating from a reforestation area in the Brazilian Amazon basin; conventional soil bulk density values were obtained with metallic "core cylinders". The results indicate that the modified partial least squares regression applied to the spectral data is an alternative to the published pedotransfer functions tested in this study for predicting soil bulk density. The NIRS method presented the closest-to-zero accuracy error (-0.002 g cm^-3) and the lowest prediction error (0.13 g cm^-3), and the coefficient of variation of the validation sets ranged from 8.1 to 8.9% of the mean reference values. Further research is nevertheless required to assess the limits and specificities of the NIRS method, but it may have advantages for soil bulk density prediction, especially in environments such as the Amazon forest.
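A minimal PLS1 (NIPALS) regression of the kind applied to spectral data can be sketched as follows; this is a generic sketch of the model family, not the authors' "modified partial least squares" implementation:

```python
import numpy as np

def pls1(X, y, n_components=2):
    """Minimal PLS1 regression via NIPALS: returns coefficients b and
    intercept b0 so that y is approximated by X @ b + b0."""
    Xm, ym = X.mean(axis=0), y.mean()
    Xc, yc = X - Xm, y - ym
    W, P, Q = [], [], []
    for _ in range(n_components):
        w = Xc.T @ yc
        w = w / np.linalg.norm(w)   # weight vector (spectral direction)
        t = Xc @ w                  # scores
        tt = t @ t
        p = Xc.T @ t / tt           # X loadings
        q = (yc @ t) / tt           # y loading
        Xc = Xc - np.outer(t, p)    # deflate X and y
        yc = yc - q * t
        W.append(w); P.append(p); Q.append(q)
    W, P = np.array(W).T, np.array(P).T
    b = W @ np.linalg.lstsq(P.T @ W, np.array(Q), rcond=None)[0]
    return b, ym - Xm @ b

# Synthetic check: with as many components as variables and noiseless
# linear data, PLS1 recovers the underlying linear relation exactly.
rng = np.random.RandomState(0)
X = rng.randn(60, 5)
y = X @ np.array([1.0, -2.0, 0.0, 0.5, 3.0]) + 1.3
b, b0 = pls1(X, y, n_components=5)
```

In practice far fewer components than spectral variables are retained, which is what makes PLS suitable for highly collinear NIR spectra.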
Abstract:
Single interface flow analysis (SIFA) systems present some noteworthy advantages compared with other flow systems, such as a simpler configuration, more straightforward operation and control, and an undemanding optimisation routine. Moreover, the simple establishment of the reaction zone, which relies strictly on the mutual inter-dispersion of the adjoining solutions, can be exploited to set up multiple sequential reaction schemes providing supplementary information about the species under determination. In this context, strategies for accuracy assessment can be favourably implemented: the sample can be processed by two quasi-independent analytical methods and the final result calculated after considering both. Intrinsically more precise and accurate results would then be obtained. To demonstrate the feasibility of the approach, a SIFA system with spectrophotometric detection was designed for the determination of lansoprazole in pharmaceutical formulations. Two reaction interfaces with two distinct pi-acceptors, chloranilic acid (CIA) and 2,3-dichloro-5,6-dicyano-p-benzoquinone (DDQ), were implemented. Linear working concentration ranges of 2.71 × 10^-4 to 8.12 × 10^-4 mol L^-1 and 2.17 × 10^-4 to 8.12 × 10^-4 mol L^-1 were obtained for the DDQ and CIA methods, respectively. Compared with the results furnished by the reference procedure, the results showed relative deviations lower than 2.7%. Furthermore, the repeatability was good, with r.s.d. lower than 3.8% and 4.7% for the DDQ and CIA methods, respectively. The determination rate was about 30 h^-1.
Abstract:
Objective: We carry out a systematic assessment of a suite of kernel-based learning machines on the task of epilepsy diagnosis through automatic electroencephalogram (EEG) signal classification. Methods and materials: The kernel machines investigated include the standard support vector machine (SVM), the least squares SVM, the Lagrangian SVM, the smooth SVM, the proximal SVM, and the relevance vector machine. An extensive series of experiments was conducted on publicly available data whose clinical EEG recordings were obtained from five normal subjects and five epileptic patients. The performance levels delivered by the different kernel machines are contrasted in terms of predictive accuracy, sensitivity to the kernel function/parameter value, and sensitivity to the type of features extracted from the signal. For this purpose, 26 values for the kernel parameter (radius) of two well-known kernel functions (namely, Gaussian and exponential radial basis functions) were considered, as well as 21 types of features extracted from the EEG signal, including statistical values derived from the discrete wavelet transform, Lyapunov exponents, and combinations thereof. Results: We first quantitatively assess the impact of the choice of the wavelet basis on the quality of the features extracted. Four wavelet basis functions were considered in this study. Then, we provide the average accuracy (i.e., cross-validation error) values delivered by 252 kernel machine configurations; in particular, 40%/35% of the best-calibrated models of the standard and least squares SVMs reached a 100% accuracy rate for the two kernel functions considered. Moreover, we show the sensitivity profiles exhibited by a large sample of the configurations, whereby one can visually inspect their levels of sensitivity to the type of feature and to the kernel function/parameter value.
Conclusions: Overall, the results show that all kernel machines are competitive in terms of accuracy, with the standard and least squares SVMs prevailing more consistently. Moreover, the choice of the kernel function and parameter value, as well as the choice of the feature extractor, are critical decisions, although the choice of the wavelet family seems not to be so relevant. The statistical values calculated over the Lyapunov exponents were good sources of signal representation, but not as informative as their wavelet counterparts. Finally, a typical sensitivity profile emerged among all types of machines, involving regions of stability separated by zones of sharp variation, with some kernel parameter values clearly associated with better accuracy rates (zones of optimality).
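The two kernel functions named in the study, Gaussian and exponential radial basis functions, can be written in terms of a single radius parameter as follows (the 2r^2 scaling is one common convention and an assumption here, not necessarily the paper's exact parameterization):

```python
import numpy as np

def gaussian_rbf(x, y, radius):
    """Gaussian radial basis kernel: exp(-||x - y||^2 / (2 r^2))."""
    return np.exp(-np.sum((x - y) ** 2) / (2 * radius ** 2))

def exponential_rbf(x, y, radius):
    """Exponential radial basis kernel: exp(-||x - y|| / (2 r^2)).
    Decays with the distance itself rather than its square, so it is
    less smooth at x = y than the Gaussian kernel."""
    return np.exp(-np.sqrt(np.sum((x - y) ** 2)) / (2 * radius ** 2))

# Example feature vectors, distance 5 apart.
x0 = np.array([0.0, 0.0])
x1 = np.array([3.0, 4.0])
```

Both kernels are symmetric, equal 1 at zero distance, and decay toward 0; sweeping the radius over many values, as done in the study, trades off smoothness against locality of the decision surface.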
Abstract:
Here, I investigate the use of Bayesian updating rules to model how social agents change their minds in the case of continuous opinion models. Given another agent's statement about the continuous value of a variable, we will see that interesting dynamics emerge when an agent assigns to that value a likelihood that is a mixture of a Gaussian and a uniform distribution. This represents the idea that the other agent might have no idea what is being talked about. The effect of updating only the first moment of the distribution is studied, and we will see that this generates results similar to those of the bounded confidence models. When the second moment is also updated, several different opinions always survive in the long run, as agents become more stubborn with time. However, depending on the probability of error and the initial uncertainty, those opinions might be clustered around a central value.
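A moment-matched Bayesian update of this kind can be sketched as follows; the parameter names and default values (p, tau2, U) are assumptions for illustration, not the paper's notation:

```python
import math

def update_opinion(mu, sigma2, x, p=0.9, tau2=1.0, U=0.1):
    """One Bayesian update of an agent's opinion N(mu, sigma2) after
    hearing another agent state the value x.  The statement is modelled
    as a mixture: with probability p it is Gaussian around the true
    value (variance tau2), otherwise uninformative uniform noise with
    density U."""
    s2 = sigma2 + tau2
    gauss = math.exp(-(x - mu) ** 2 / (2 * s2)) / math.sqrt(2 * math.pi * s2)
    # Posterior responsibility of the informative (Gaussian) component:
    w = p * gauss / (p * gauss + (1 - p) * U)
    shift = sigma2 * (x - mu) / s2
    mu_new = mu + w * shift
    # Moment-matched variance of the mixture posterior; repeated
    # informative updates shrink it, making agents stubborn over time.
    sigma2_new = sigma2 - w * sigma2 ** 2 / s2 + w * (1 - w) * shift ** 2
    return mu_new, sigma2_new

# A nearby statement pulls the opinion; a far-off one is mostly ignored,
# reproducing the bounded-confidence-like behaviour described above.
near = update_opinion(0.0, 1.0, 0.5)
far = update_opinion(0.0, 1.0, 10.0)
```

Because the responsibility w collapses toward zero for implausible statements, distant opinions barely interact, which is the mechanism behind the surviving opinion clusters.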
Abstract:
Today, several unsupervised classification algorithms are commonly used to cluster similar patterns in a data set based only on its statistical properties. Especially in image data applications, self-organizing methods for unsupervised classification have been successfully applied to cluster pixels or groups of pixels in order to perform segmentation tasks. The first important contribution of this paper is the development of a self-organizing method for data classification, named the Enhanced Independent Component Analysis Mixture Model (EICAMM), built by proposing some modifications to the Independent Component Analysis Mixture Model (ICAMM). These modifications address some of the original model's limitations and make it more efficient. Moreover, a pre-processing methodology is also proposed, based on combining Sparse Code Shrinkage (SCS) for image denoising with the Sobel edge detector. In the experiments of this work, EICAMM and other self-organizing models were applied to segmenting images in their original and pre-processed versions. A comparative analysis showed satisfactory and competitive image segmentation results obtained by the proposals presented herein.