909 results for Data accuracy
Abstract:
Perturbative Quantum Chromodynamics (pQCD) predicts that the small-x gluons in the hadron wavefunction should form a Color Glass Condensate (CGC), whose properties are universal, i.e., the same for nucleons and nuclei. Making use of the results in V.P. Goncalves, M.S. Kugeratski, M.V.T. Machado, F.S. Navarra, Phys. Lett. B643, 273 (2006), we study the behavior of the anomalous dimension in the saturation models as a function of the photon virtuality and of the scaling variable rQ_s, since the main differences among the known parameterizations are characterized by this quantity.
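For orientation (a generic form used in the saturation literature, not necessarily the exact parameterization of the paper cited above), the dipole scattering amplitude in the geometric-scaling region is often written with an effective anomalous dimension,

\[
\mathcal{N}(x,r) \simeq \left(\frac{r^{2}Q_{s}^{2}(x)}{4}\right)^{\gamma_{\rm eff}(x,r)},
\qquad
Q_{s}^{2}(x)=\left(\frac{x_{0}}{x}\right)^{\lambda}\,{\rm GeV}^{2},
\]

so that the known parameterizations (GBW-like versus CGC-inspired fits) differ mainly in how \(\gamma_{\rm eff}\) depends on the scaling variable \(rQ_{s}(x)\), the dipole size r probing the photon virtuality through r ~ 1/Q.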
Abstract:
The MINOS experiment at Fermilab has recently reported a tension between the oscillation results for neutrinos and antineutrinos. We show that this tension, if it persists, can be understood in the framework of nonstandard neutrino interactions (NSI). While neutral current NSI (nonstandard matter effects) are disfavored by atmospheric neutrinos, a new charged current coupling between tau neutrinos and nucleons can fit the MINOS data without violating other constraints. In particular, we show that loop-level contributions to flavor-violating tau decays are sufficiently suppressed. However, conflicts with existing bounds could arise once the effective theory considered here is embedded into a complete renormalizable model. We predict the future sensitivity of the T2K and NOvA experiments to the NSI parameter region favored by the MINOS fit, and show that both experiments are excellent tools to test the NSI interpretation of the MINOS data.
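For reference, charged-current NSI are commonly parameterized by dimension-six four-fermion operators of the generic form below (this is the standard notation of the NSI literature, not an operator quoted from the abstract above),

\[
\mathcal{L}_{\rm CC\text{-}NSI}
= -2\sqrt{2}\,G_{F}\,\varepsilon^{ud}_{\alpha\beta}
\left(\bar{\ell}_{\alpha}\gamma^{\mu}P_{L}\nu_{\beta}\right)
\left(\bar{u}\,\gamma_{\mu}P\,d\right) + \mathrm{h.c.},
\]

where \(\varepsilon^{ud}_{\alpha\beta}\) measures the strength of the new interaction relative to \(G_{F}\), \(P \in \{P_{L}, P_{R}\}\) is the quark chirality projector, and the case motivated by the MINOS tension involves couplings to \(\nu_{\tau}\).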
Abstract:
We consider the gravitational recoil due to nonreflection-symmetric gravitational wave emission in the context of axisymmetric Robinson-Trautman spacetimes. We show that regular initial data evolve generically into a final configuration corresponding to a Schwarzschild black hole moving with constant speed. For the case of (reflection-)symmetric initial configurations, the mass of the remnant black hole and the total energy radiated away are completely determined by the initial data, allowing us to obtain analytical expressions for some recent numerical results that have appeared in the literature. Moreover, by using the Galerkin spectral method to analyze the nonlinear regime of the Robinson-Trautman equations, we show that the recoil velocity can be estimated with good accuracy from some asymmetry measures (namely the first odd moments) of the initial data. The extension for the nonaxisymmetric case and the implications of our results for realistic situations involving head-on collision of two black holes are also discussed.
Abstract:
The pre-Mesozoic geodynamic evolution of SW Iberia has been investigated on the basis of detailed structural analysis, isotope dating, and petrologic study of high-pressure (HP) rocks, revealing the superposition of several tectonometamorphic events: (1) An HP event older than circa 358 Ma is recorded in basic rocks preserved inside marbles, which suggests subduction of a continental margin. The deformation associated with this stage is recorded by a refractory graphite fabric and noncoaxial mesoscopic structures found within the host metasediments. The sense of shear is top to south, revealing thrusting synthetic with subduction (underthrusting) to the north. (2) Recrystallization before circa 358 Ma is due to a regional-scale thermal episode and magmatism. (3) Noncoaxial deformation with top to north sense of shear in northward dipping large-scale shear zones is associated with pervasive hydration and metamorphic retrogression under mostly greenschist facies. This indicates exhumation by normal faulting in a detachment zone confined to the top to north and north dipping shear zones during postorogenic collapse soon after circa 358 Ma (inversion of earlier top to south thrusts). (4) Static recrystallization at circa 318 Ma is due to regional-scale granitic intrusions. Citation: Rosas, F. M., F. O. Marques, M. Ballevre, and C. Tassinari (2008), Geodynamic evolution of the SW Variscides: Orogenic collapse shown by new tectonometamorphic and isotopic data from western Ossa-Morena Zone, SW Iberia, Tectonics, 27, TC6008, doi:10.1029/2008TC002333.
Abstract:
Thanks to recent advances in molecular biology, allied to an ever increasing amount of experimental data, the functional state of thousands of genes can now be extracted simultaneously by using methods such as cDNA microarrays and RNA-Seq. Particularly important related investigations are the modeling and identification of gene regulatory networks from expression data sets. Such knowledge is fundamental for many applications, such as disease treatment, therapeutic intervention strategies and drug design, as well as for planning high-throughput new experiments. Methods have been developed for gene network modeling and identification from expression profiles. However, an important open problem regards how to validate such approaches and their results. This work presents an objective approach for the validation of gene network modeling and identification which comprises the following three main aspects: (1) Artificial Gene Network (AGN) model generation through theoretical models of complex networks, which is used to simulate temporal expression data; (2) a computational method for gene network identification from the simulated data, founded on a feature selection approach in which a target gene is fixed and the expression profiles of all other genes are observed in order to identify a relevant subset of predictors; and (3) validation of the identified AGN-based network through comparison with the original network. The proposed framework allows several types of AGNs to be generated and used in order to simulate temporal expression data. The results of the network identification method can then be compared to the original network in order to estimate its properties and accuracy. Some of the most important theoretical models of complex networks have been assessed: the uniformly-random Erdos-Renyi (ER), the small-world Watts-Strogatz (WS), the scale-free Barabasi-Albert (BA), and geographical networks (GG). The experimental results indicate that the inference method was sensitive to variation of the average degree k, its network recovery rate decreasing as k increases. The signal size was important for the inference method to achieve better accuracy in the network identification rate, presenting very good results with small expression profiles. However, the adopted inference method was not able to recognize distinct structures of interaction among genes, presenting a similar behavior when applied to different network topologies. In summary, the proposed framework, though simple, was adequate for the validation of the inferred networks by identifying some properties of the evaluated method, and it can be extended to other inference methods.
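To illustrate the validation step (3), a minimal sketch of scoring an inferred network against the original AGN via precision and recall over recovered edges is shown below; the adjacency matrices are hypothetical placeholders, and the similarity measures used in the work itself may differ.

```python
import numpy as np

def edge_recovery_scores(true_adj, inferred_adj):
    """Compare an inferred gene network with the ground-truth AGN.

    Both arguments are boolean adjacency matrices (predictor -> target).
    Returns precision, recall and F1 over the recovered edges.
    """
    tp = np.sum(true_adj & inferred_adj)      # correctly recovered edges
    fp = np.sum(~true_adj & inferred_adj)     # spurious edges
    fn = np.sum(true_adj & ~inferred_adj)     # missed edges
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Toy example with hypothetical 5-gene networks (not data from the study).
rng = np.random.default_rng(0)
true_adj = rng.random((5, 5)) < 0.3
inferred_adj = rng.random((5, 5)) < 0.3
print(edge_recovery_scores(true_adj, inferred_adj))
```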
Abstract:
Background: The inference of gene regulatory networks (GRNs) from large-scale expression profiles is one of the most challenging problems of Systems Biology nowadays. Many techniques and models have been proposed for this task. However, it is not generally possible to recover the original topology with great accuracy, mainly due to the short time series data in the face of the high complexity of the networks and the intrinsic noise of the expression measurements. In order to improve the accuracy of entropy-based (mutual information) GRN inference methods, a new criterion function is proposed here. Results: In this paper we introduce the use of the generalized entropy proposed by Tsallis for the inference of GRNs from time series expression profiles. The inference process is based on a feature selection approach, and the conditional entropy is applied as the criterion function. In order to assess the proposed methodology, the algorithm is applied to recover the network topology from temporal expressions generated by an artificial gene network (AGN) model as well as from the DREAM challenge. The adopted AGN is based on theoretical models of complex networks, and its gene transfer functions are obtained by random drawing from the set of possible Boolean functions, thus creating its dynamics. On the other hand, the DREAM time series data present variation of network size, and their topologies are based on real networks. The dynamics are generated by continuous differential equations with noise and perturbation. By adopting both data sources, it is possible to estimate the average quality of the inference with respect to different network topologies, transfer functions and network sizes. Conclusions: A remarkable improvement of accuracy was observed in the experimental results, with the non-Shannon entropy reducing the number of false connections in the inferred topology. The best value obtained for the free parameter of the Tsallis entropy was on average in the range 2.5 <= q <= 3.5 (hence, subextensive entropy), which opens new perspectives for GRN inference methods based on information theory and for the investigation of the nonextensivity of such networks. The inference algorithm and criterion function proposed here were implemented and included in the DimReduction software, which is freely available at http://sourceforge.net/projects/dimreduction and http://code.google.com/p/dimreduction/.
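As a rough illustration of the criterion function, the Tsallis entropy of a discrete distribution is S_q = (1 - sum_i p_i^q)/(q - 1), and the feature-selection step scores candidate predictor sets by the conditional entropy of the target gene given the joint predictor state. The sketch below assumes Boolean (0/1) discretized profiles and may differ from the actual DimReduction implementation in normalization and tie-breaking.

```python
import numpy as np

def tsallis_entropy(p, q=2.5):
    """Tsallis entropy S_q = (1 - sum p_i^q) / (q - 1) of a probability vector."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    if q == 1.0:                        # Shannon limit
        return -np.sum(p * np.log2(p))
    return (1.0 - np.sum(p ** q)) / (q - 1.0)

def conditional_tsallis_entropy(target, predictors, q=2.5):
    """Mean Tsallis entropy of the target conditioned on each joint predictor state.

    `target` holds discretized expression values of the target gene at time t+1;
    `predictors` is a 2-D array (time x candidate predictor genes) at time t.
    """
    keys = [tuple(row) for row in predictors]
    h, total = 0.0, len(target)
    for key in set(keys):
        mask = np.array([k == key for k in keys])
        _, counts = np.unique(target[mask], return_counts=True)
        h += (mask.sum() / total) * tsallis_entropy(counts / mask.sum(), q)
    return h

# Toy usage with hypothetical Boolean profiles (time x genes):
rng = np.random.default_rng(0)
genes = rng.integers(0, 2, size=(50, 4))
target_next = np.roll(genes[:, 0], -1)[:-1]     # target gene at t+1
print(conditional_tsallis_entropy(target_next, genes[:-1, 1:3], q=2.5))
```

A feature-selection loop would then, for each target gene, evaluate candidate predictor subsets with this criterion and keep the subset with the lowest conditional Tsallis entropy.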
Abstract:
Tibolone is used for hormone replacement in postmenopausal women, and isotibolone is considered the major degradation product of tibolone. Isotibolone can also be present in tibolone API raw materials due to inadequate synthesis. Its presence must therefore be identified and quantified in the quality control of both the API and drug products. In this work we present the indexing of an isotibolone X-ray diffraction pattern measured with synchrotron light (lambda=1.2407 angstrom) in transmission mode. The characterization of the isotibolone sample by IR spectroscopy, elemental analysis, and thermal analysis is also presented. The isotibolone crystallographic data are a=6.8066 angstrom, b=20.7350 angstrom, c=6.4489 angstrom, beta=76.428 degrees, V=884.75 angstrom(3), space group P2(1), rho(o)=1.187 g cm(-3), Z=2. (C) 2009 International Centre for Diffraction Data. [DOI: 10.1154/1.3257612]
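As a quick consistency check of the reported indexing, the monoclinic cell volume follows from V = a·b·c·sin(beta); the short sketch below reproduces the reported value from the lattice parameters.

```python
import math

# Reported isotibolone lattice parameters (monoclinic, space group P2_1).
a, b, c = 6.8066, 20.7350, 6.4489      # angstrom
beta = math.radians(76.428)            # beta angle, degrees -> radians

# Monoclinic cell volume: V = a * b * c * sin(beta)
V = a * b * c * math.sin(beta)
print(f"V = {V:.2f} A^3")              # ~884.7 A^3, matching the reported 884.75 A^3
```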
Abstract:
Agricultural management practices that promote net carbon (C) accumulation in the soil have been considered an important potential mitigation option to combat global warming. The change in the sugarcane harvesting system to one which incorporates C into the soil from crop residues is the focus of this work. The main objective was to assess and discuss the changes in soil organic C stocks caused by the conversion from burnt to unburnt sugarcane harvesting systems in Brazil, considering the main soils and climates associated with this crop. For this purpose, a dataset was obtained from a literature review of soils under sugarcane in Brazil. Although not necessarily from experimental studies, only paired comparisons were examined, and for each site the dominant soil type, topography and climate were similar. The results show a mean annual C accumulation rate of 1.5 Mg ha-1 year-1 for the 0- to 30-cm depth (0.73 and 2.04 Mg ha-1 year-1 for sandy and clay soils, respectively) caused by the conversion from a burnt to an unburnt sugarcane harvesting system. The findings suggest that soil should be included in future studies related to life cycle assessment and the C footprint of Brazilian sugarcane ethanol.
Abstract:
The Brazilian Amazon is one of the most rapidly developing agricultural frontiers in the world. The authors assess changes in cropland area and the intensification of cropping in the Brazilian agricultural frontier state of Mato Grosso using remote sensing, and develop a greenhouse gas emissions budget. The most common type of intensification in this region is a shift from single- to double-cropping patterns and associated changes in management, including increased fertilization. Using the enhanced vegetation index (EVI) from the Moderate Resolution Imaging Spectroradiometer (MODIS) sensor, the authors created a green-leaf phenology for 2001-06 that was temporally smoothed with a wavelet filter. The wavelet-smoothed green-leaf phenology was analyzed to detect cropland areas and their cropping patterns. The authors document cropland extensification and double-cropping intensification, validated with field data, with 85% accuracy for detecting croplands and 64% and 89% accuracy for detecting single- and double-cropping patterns, respectively. The results show that croplands more than doubled from 2001 to 2006 to cover about 100 000 km(2) and that new double-cropping intensification occurred on over 20% of croplands. Variations are seen in the annual rates of extensification and double-cropping intensification. Greenhouse gas emissions estimated for the period 2001-06 due to the conversion of natural vegetation and pastures to row-crop agriculture in Mato Grosso averaged 179 Tg CO(2)-e yr(-1), over half the typical fossil fuel emissions for the country in recent years.
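A conceptual sketch of the detection idea (wavelet-smoothing an annual EVI series and counting green-up peaks to separate single- from double-cropping) is given below; it uses synthetic data, the PyWavelets and SciPy libraries, and hypothetical thresholds, so it is only a stand-in for the wavelet filter and classification rules actually used by the authors.

```python
import numpy as np
import pywt
from scipy.signal import find_peaks

def smooth_evi(evi, wavelet="db4", drop_levels=1):
    """Wavelet-smooth a 16-day EVI time series by zeroing the finest detail levels."""
    coeffs = pywt.wavedec(evi, wavelet, mode="periodization")
    for i in range(len(coeffs) - drop_levels, len(coeffs)):
        coeffs[i] = np.zeros_like(coeffs[i])       # drop high-frequency noise
    return pywt.waverec(coeffs, wavelet, mode="periodization")[: len(evi)]

def cropping_pattern(evi_year, green_threshold=0.45):
    """Classify one crop year as single or double cropping by counting green-leaf peaks."""
    smoothed = smooth_evi(evi_year)
    peaks, _ = find_peaks(smoothed, height=green_threshold, distance=4)
    return "double" if len(peaks) >= 2 else "single" if len(peaks) == 1 else "non-crop"

# Synthetic one-year series (23 16-day composites) with two green-up cycles.
t = np.linspace(0, 2 * np.pi, 23)
evi = 0.3 + 0.35 * np.abs(np.sin(t)) + 0.02 * np.random.default_rng(1).normal(size=23)
print(cropping_pattern(evi))   # expected: "double" for this two-cycle series
```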
Abstract:
Soil bulk density values are needed to convert organic carbon content to mass of organic carbon per unit area. However, field sampling and measurement of soil bulk density are labour-intensive, costly and tedious. Near-infrared reflectance spectroscopy (NIRS) is a physically non-destructive, rapid, reproducible and low-cost method that characterizes materials according to their reflectance in the near-infrared spectral region. The aim of this paper was to investigate the ability of NIRS to predict soil bulk density and to compare its performance with published pedotransfer functions. The study was carried out on a dataset of 1184 soil samples originating from a reforestation area in the Brazilian Amazon basin, and conventional soil bulk density values were obtained with metallic "core cylinders". The results indicate that the modified partial least squares regression applied to the spectral data is an alternative to the published pedotransfer functions tested in this study for soil bulk density prediction. The NIRS method presented the closest-to-zero accuracy error (-0.002 g cm-3) and the lowest prediction error (0.13 g cm-3), and the coefficient of variation of the validation sets ranged from 8.1 to 8.9% of the mean reference values. Nevertheless, further research is required to assess the limits and specificities of the NIRS method, but it may have advantages for soil bulk density predictions, especially in environments such as the Amazon forest.
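A minimal sketch of a PLS calibration of bulk density from NIR spectra is shown below, using scikit-learn's standard PLSRegression as a stand-in for the modified partial least squares algorithm of the study; the spectra and reference values are synthetic placeholders, and the bias and RMSEP statistics mirror the accuracy and prediction errors reported above.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

# X: NIR reflectance spectra (n_samples x n_wavelengths), y: measured bulk density (g cm-3).
# Synthetic placeholders here; the study used 1184 field-sampled soils.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 700))
y = 1.2 + 0.3 * X[:, 100] - 0.2 * X[:, 450] + 0.05 * rng.normal(size=200)

pls = PLSRegression(n_components=10)            # number of latent variables, to be tuned by CV
y_cv = cross_val_predict(pls, X, y, cv=10).ravel()

bias = np.mean(y_cv - y)                         # accuracy error (bias)
rmsep = np.sqrt(np.mean((y_cv - y) ** 2))        # prediction error
print(f"bias = {bias:.3f} g cm-3, RMSEP = {rmsep:.3f} g cm-3")
```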
Abstract:
The application of laser-induced breakdown spectrometry (LIBS) aiming at the direct analysis of plant materials is a great challenge that still requires efforts for its development and validation. To this end, a series of experimental approaches has been carried out in order to show that LIBS can be used as an alternative to methods based on wet acid digestion for the analysis of agricultural and environmental samples. The large amount of information provided by LIBS spectra for these complex samples increases the difficulty of selecting the most appropriate wavelengths for each analyte. Some applications have suggested that improvements in both accuracy and precision can be achieved by applying multivariate calibration to LIBS data when compared to univariate regression developed with line emission intensities. In the present work, the performance of univariate and multivariate calibration, based on partial least squares regression (PLSR), was compared for the analysis of pellets of plant materials made from an appropriate mixture of cryogenically ground samples with cellulose as the binding agent. The development of a specific PLSR model for each analyte and the selection of spectral regions containing only lines of the analyte of interest were the best conditions for the analysis. In this particular application, these models showed a similar performance, but PLSR seemed to be more robust due to a lower occurrence of outliers in comparison to the univariate method. The data suggest that efforts dealing with sample presentation and the fitness of standards for LIBS analysis must be made in order to fulfill the boundary conditions for matrix-independent development and validation. (C) 2009 Elsevier B.V. All rights reserved.
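The univariate-versus-multivariate comparison can be sketched as follows, with synthetic spectra standing in for the LIBS measurements and a hypothetical emission-line window; cross-validated prediction errors are compared for a single-line calibration and a PLSR model restricted to a spectral region around the analyte lines.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(42)
n_samples, n_pixels = 60, 300
spectra = rng.normal(0.0, 0.05, size=(n_samples, n_pixels))   # synthetic LIBS spectra (a.u.)
conc = rng.uniform(1.0, 50.0, size=n_samples)                  # analyte concentration (mg kg-1)
spectra[:, 148:153] += conc[:, None] * 0.02                    # hypothetical analyte emission line

def rmsecv(model, X, y):
    """Root mean square error of cross-validated predictions."""
    y_cv = cross_val_predict(model, X, y, cv=5).reshape(-1)
    return np.sqrt(np.mean((y_cv - y) ** 2))

# Univariate calibration: integrated intensity of one emission line vs concentration.
line_intensity = spectra[:, 148:153].sum(axis=1, keepdims=True)
print("univariate RMSECV:", rmsecv(LinearRegression(), line_intensity, conc))

# Multivariate calibration: PLSR over a spectral window around the analyte lines.
print("PLSR RMSECV:", rmsecv(PLSRegression(n_components=3), spectra[:, 120:180], conc))
```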
Abstract:
Single interface flow analysis (SIFA) systems present some noteworthy advantages when compared to other flow systems, such as a simpler configuration, a more straightforward operation and control, and an undemanding optimisation routine. Moreover, the straightforward establishment of the reaction zone, which relies strictly on the mutual inter-dispersion of the adjoining solutions, could be exploited to set up multiple sequential reaction schemes providing supplementary information regarding the species under determination. In this context, strategies for accuracy assessment could be favourably implemented. To this end, the sample could be processed by two quasi-independent analytical methods and the final result calculated after considering the two different methods. Intrinsically more precise and accurate results would then be gathered. In order to demonstrate the feasibility of the approach, a SIFA system with spectrophotometric detection was designed for the determination of lansoprazole in pharmaceutical formulations. Two reaction interfaces with two distinct pi-acceptors, chloranilic acid (CIA) and 2,3-dichloro-5,6-dicyano-p-benzoquinone (DDQ), were implemented. Linear working concentration ranges between 2.71 x 10(-4) and 8.12 x 10(-4) mol L(-1) and between 2.17 x 10(-4) and 8.12 x 10(-4) mol L(-1) were obtained for the DDQ and CIA methods, respectively. When compared with the results furnished by the reference procedure, the results showed relative deviations lower than 2.7%. Furthermore, the repeatability was good, with r.s.d. lower than 3.8% and 4.7% for the DDQ and CIA methods, respectively. The determination rate was about 30 h(-1). (C) 2009 Elsevier B.V. All rights reserved.
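One simple way to exploit two quasi-independent determinations, sketched below with hypothetical numbers, is to pool them by inverse-variance weighting and use their relative difference as an internal accuracy check; the abstract does not state how the two results are actually combined, so this is an illustration rather than the authors' procedure.

```python
import numpy as np

def combine_two_methods(x1, s1, x2, s2, max_rel_diff=0.05):
    """Pool results of two quasi-independent methods (e.g. the DDQ and CIA interfaces).

    Inverse-variance weighting is one standard way of combining two estimates;
    the relative difference between them doubles as an internal accuracy check.
    Illustrative scheme only, not necessarily the one used in the paper.
    """
    w1, w2 = 1.0 / s1**2, 1.0 / s2**2
    pooled = (w1 * x1 + w2 * x2) / (w1 + w2)
    rel_diff = abs(x1 - x2) / pooled
    return pooled, rel_diff, rel_diff <= max_rel_diff

# Hypothetical duplicate determinations of lansoprazole (mol L-1) by the two interfaces.
print(combine_two_methods(5.1e-4, 0.1e-4, 5.0e-4, 0.15e-4))
```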
Abstract:
Objective: We carry out a systematic assessment of a suite of kernel-based learning machines on the task of epilepsy diagnosis through automatic electroencephalogram (EEG) signal classification. Methods and materials: The kernel machines investigated include the standard support vector machine (SVM), the least squares SVM, the Lagrangian SVM, the smooth SVM, the proximal SVM, and the relevance vector machine. An extensive series of experiments was conducted on publicly available data, whose clinical EEG recordings were obtained from five normal subjects and five epileptic patients. The performance levels delivered by the different kernel machines are contrasted in terms of predictive accuracy, sensitivity to the kernel function/parameter value, and sensitivity to the type of features extracted from the signal. For this purpose, 26 values for the kernel parameter (radius) of two well-known kernel functions (namely, Gaussian and exponential radial basis functions) were considered, as well as 21 types of features extracted from the EEG signal, including statistical values derived from the discrete wavelet transform, Lyapunov exponents, and combinations thereof. Results: We first quantitatively assess the impact of the choice of the wavelet basis on the quality of the features extracted. Four wavelet basis functions were considered in this study. Then, we provide the average cross-validation accuracy values delivered by 252 kernel machine configurations; in particular, 40%/35% of the best-calibrated models of the standard and least squares SVMs reached a 100% accuracy rate for the two kernel functions considered. Moreover, we show the sensitivity profiles exhibited by a large sample of the configurations, whereby one can visually inspect their levels of sensitivity to the type of feature and to the kernel function/parameter value. Conclusions: Overall, the results show that all kernel machines are competitive in terms of accuracy, with the standard and least squares SVMs prevailing more consistently. Moreover, the choice of the kernel function and parameter value as well as the choice of the feature extractor are critical decisions to be taken, albeit the choice of the wavelet family seems not to be so relevant. Also, the statistical values calculated over the Lyapunov exponents were good sources of signal representation, but not as informative as their wavelet counterparts. Finally, a typical sensitivity profile has emerged among all types of machines, involving some regions of stability separated by zones of sharp variation, with some kernel parameter values clearly associated with better accuracy rates (zones of optimality). (C) 2011 Elsevier B.V. All rights reserved.
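A minimal sketch of the kernel-parameter sensitivity analysis, limited to the standard SVM with a Gaussian RBF kernel, is shown below; the feature matrix is a synthetic placeholder for the DWT-derived EEG features, and the grid of 26 kernel widths only mirrors the size of the grid described above.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV, StratifiedKFold
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Placeholder feature matrix: e.g. DWT sub-band statistics per EEG segment (rows) with a
# binary label (normal vs epileptic). Real features would come from the EEG recordings.
rng = np.random.default_rng(7)
X = rng.normal(size=(200, 12))
y = (X[:, 0] + 0.5 * X[:, 3] + 0.3 * rng.normal(size=200) > 0).astype(int)

# Sweep the Gaussian RBF kernel width (gamma ~ 1 / (2 * radius^2)) as in a sensitivity study.
param_grid = {"svc__gamma": np.logspace(-3, 1, 26), "svc__C": [1.0, 10.0]}
grid = GridSearchCV(
    make_pipeline(StandardScaler(), SVC(kernel="rbf")),
    param_grid,
    cv=StratifiedKFold(n_splits=5, shuffle=True, random_state=0),
    scoring="accuracy",
)
grid.fit(X, y)
print(grid.best_params_, f"CV accuracy = {grid.best_score_:.3f}")
```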
Abstract:
Background: Although the Clock Drawing Test (CDT) is the second most used test in the world for the screening of dementia, there is still debate over its sensitivity, specificity, application and interpretation in dementia diagnosis. This study has three main aims: to evaluate the sensitivity and specificity of the CDT in a sample composed of older adults with Alzheimer's disease (AD) and normal controls; to compare CDT accuracy to that of the Mini-Mental State Examination (MMSE) and the Cambridge Cognitive Examination (CAMCOG); and to test whether the combination of the MMSE with the CDT leads to higher or comparable accuracy to that reported for the CAMCOG. Methods: A cross-sectional assessment was carried out on 121 AD patients and 99 elderly controls with heterogeneous educational levels from a geriatric outpatient clinic who completed the Cambridge Examination for Mental Disorders of the Elderly (CAMDEX). The CDT was evaluated according to the Shulman, Mendez and Sunderland scales. Results: The CDT showed high sensitivity and specificity. There were significant correlations between the CDT and the MMSE (0.700-0.730; p < 0.001) and between the CDT and the CAMCOG (0.753-0.779; p < 0.001). The combination of the CDT with the MMSE improved sensitivity and specificity (SE = 89.2-90%; SP = 71.7-79.8%). Subgroup analysis indicated that for elderly people with lower education, sensitivity and specificity were both adequate and high. Conclusions: The CDT is a robust screening test when compared with the MMSE or the CAMCOG, independent of the scale used for its interpretation. The combination with the MMSE improves its performance significantly, becoming equivalent to the CAMCOG.
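A small sketch of how sensitivity and specificity behave for the CDT alone and for a simple CDT-or-MMSE combination rule is shown below; the scores, cutoffs and combination rule are hypothetical, since the abstract does not specify how the two tests were combined.

```python
import numpy as np

def sens_spec(pred_positive, has_dementia):
    """Sensitivity and specificity of a screening rule against the clinical diagnosis."""
    tp = np.sum(pred_positive & has_dementia)
    tn = np.sum(~pred_positive & ~has_dementia)
    fn = np.sum(~pred_positive & has_dementia)
    fp = np.sum(pred_positive & ~has_dementia)
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical screening data: CDT (0-5, Shulman-style) and MMSE (0-30) scores plus diagnosis.
rng = np.random.default_rng(3)
has_dementia = rng.random(220) < 0.55
cdt = np.where(has_dementia, rng.normal(2.2, 1.0, 220), rng.normal(4.2, 0.8, 220))
mmse = np.where(has_dementia, rng.normal(19, 4, 220), rng.normal(27, 2, 220))

cdt_pos = cdt <= 3             # hypothetical cutoffs, not the study's
mmse_pos = mmse <= 24
print("CDT alone  :", sens_spec(cdt_pos, has_dementia))
print("CDT or MMSE:", sens_spec(cdt_pos | mmse_pos, has_dementia))
```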
Abstract:
The ""Short Cognitive Performance Test"" (Syndrom Kurztest, SKT) is a cognitive screening battery designed to detect memory and attention deficits. The aim of this study was to evaluate the diagnostic accuracy of the SKT as a screening tool for mild cognitive impairment (MCI) and dementia. A total of 46 patients with Alzheimer`s disease (AD), 82 with MCI, and 56 healthy controls were included in the study. Patients and controls were allocated into two groups according to educational level (< 8 years or > 8 years). ROC analyses suggested that the SKT adequately discriminates AD from non-demented subjects (MCI and controls), irrespective of the education group. The test had good sensitivity to discriminate MCI from unimpaired controls in the sub-sample of individuals with more than 8 years of schooling. Our findings suggest that the SKT is a good screening test for cognitive impairment and dementia. However, test results must be interpreted with caution when administered to less-educated individuals.