962 results for De Gennes parameter
Abstract:
The Z-scan and thermal-lens techniques have been used to obtain the energy transfer upconversion parameter in Nd(3+)-doped materials. A comparison between these methods is made, showing that they are independent and provide similar results. Moreover, the advantages and applicability of each one are also discussed. The results point to these approaches as valuable alternative methods because of their sensitivity, which allows measurements to be performed in a pump-power regime that does not damage the investigated material. (C) 2009 Optical Society of America
Abstract:
We present parameter-free calculations of electronic properties of InGaN, InAlN, and AlGaN alloys. The calculations are based on a generalized quasichemical approach, to account for disorder and composition effects, and first-principles calculations within the density functional theory with the LDA-1/2 approach, to accurately determine the band gaps. We provide precise results for AlGaN, InGaN, and AlInN band gaps for the entire range of compositions, and their respective bowing parameters. (C) 2011 American Institute of Physics. [doi:10.1063/1.3576570]
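For readers unfamiliar with bowing parameters, the reported values enter the usual quadratic interpolation of an alloy band gap. The sketch below is illustrative only; the end-point gaps and bowing value are placeholders, not numbers from the paper.

```python
def alloy_band_gap(x, eg_a, eg_b, b):
    """Bowing interpolation of an A_x B_(1-x) alloy band gap (in eV).

    x    : mole fraction of compound A (0 <= x <= 1)
    eg_a : band gap of pure compound A
    eg_b : band gap of pure compound B
    b    : bowing parameter
    """
    return x * eg_a + (1.0 - x) * eg_b - b * x * (1.0 - x)

# Hypothetical end-point gaps and bowing value, for illustration only.
if __name__ == "__main__":
    for x in (0.0, 0.25, 0.5, 0.75, 1.0):
        print(f"x = {x:.2f}  Eg = {alloy_band_gap(x, eg_a=3.5, eg_b=0.7, b=1.4):.3f} eV")
```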
Abstract:
We study the free fall of a quantum particle in the context of noncommutative quantum mechanics (NCQM). Assuming noncommutativity of the canonical type between the coordinates of a two-dimensional configuration space, we consider a neutral particle trapped in a gravitational well and exactly solve the energy eigenvalue problem. By resorting to experimental data from the GRANIT experiment, in which the first energy levels of freely falling ultracold neutrons were determined, we impose an upper bound on the noncommutativity parameter. We also investigate the time of flight of a quantum particle moving in a uniform gravitational field in NCQM, which is related to the weak equivalence principle. Since we consider stationary energy eigenstates, i.e., delocalized states, the time of flight must be measured by a quantum clock suitably coupled to the particle. By treating the clock as a small perturbation, we solve the associated (stationary) scattering problem and show that the time of flight is equal to the classical result when the measurement is made far from the turning point. This result is interpreted as an extension of the equivalence principle to the realm of NCQM. (C) 2010 American Institute of Physics. [doi:10.1063/1.3466812]
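For context, a sketch of the standard (commutative) gravitational quantum well spectrum against which the noncommutative corrections and the GRANIT data are compared (the noncommutative result itself is derived in the paper):

```latex
% Particle of mass m bouncing above a mirror in a uniform field:
% V(z) = m g z for z > 0, infinite wall at z = 0.
E_n = \left(\frac{\hbar^{2} m g^{2}}{2}\right)^{1/3} \alpha_n ,
\qquad \operatorname{Ai}(-\alpha_n) = 0 , \quad n = 1, 2, \dots
```

Here the alpha_n are the (magnitudes of the) zeros of the Airy function, with alpha_1 approximately 2.338; for ultracold neutrons this puts the ground-state energy at roughly 1.4 peV, the scale probed by GRANIT.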
Abstract:
Background: The inference of gene regulatory networks (GRNs) from large-scale expression profiles is one of the most challenging problems in Systems Biology today. Many techniques and models have been proposed for this task. However, it is generally not possible to recover the original topology with great accuracy, mainly because of the short time series available in the face of the high complexity of the networks and the intrinsic noise of the expression measurements. In order to improve the accuracy of GRN inference methods based on entropy (mutual information), a new criterion function is proposed here. Results: In this paper we introduce the use of the generalized entropy proposed by Tsallis for the inference of GRNs from time series expression profiles. The inference process is based on a feature selection approach, and the conditional entropy is applied as the criterion function. In order to assess the proposed methodology, the algorithm is applied to recover the network topology from temporal expression data generated by an artificial gene network (AGN) model as well as from the DREAM challenge. The adopted AGN is based on theoretical models of complex networks, and its gene transfer functions are obtained by random drawing from the set of possible Boolean functions, thus creating its dynamics. The DREAM time series data, on the other hand, comprise networks of varying size whose topologies are based on real networks; their dynamics are generated by continuous differential equations with noise and perturbation. By adopting both data sources, it is possible to estimate the average quality of the inference with respect to different network topologies, transfer functions and network sizes. Conclusions: A remarkable improvement in accuracy was observed in the experimental results, with the non-Shannon entropy reducing the number of false connections in the inferred topology. The best value of the Tsallis free parameter was on average in the range 2.5 <= q <= 3.5 (hence, subextensive entropy), which opens new perspectives for GRN inference methods based on information theory and for the investigation of the nonextensivity of such networks. The inference algorithm and criterion function proposed here were implemented and included in the DimReduction software, which is freely available at http://sourceforge.net/projects/dimreduction and http://code.google.com/p/dimreduction/.
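As a rough illustration of the generalized entropy involved (a minimal sketch, not the authors' DimReduction implementation; the toy distribution below is a placeholder), the Tsallis entropy of a discrete distribution with entropic index q reduces to the Shannon entropy as q tends to 1:

```python
import numpy as np

def tsallis_entropy(p, q):
    """Tsallis (non-extensive) entropy S_q = (1 - sum_i p_i**q) / (q - 1).

    p : 1-D array of probabilities summing to 1
    q : entropic index; S_q -> Shannon entropy (in nats) as q -> 1
    """
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                       # 0 * log 0 = 0 convention
    if np.isclose(q, 1.0):
        return -np.sum(p * np.log(p))  # Shannon limit
    return (1.0 - np.sum(p ** q)) / (q - 1.0)

# The range 2.5 <= q <= 3.5 reported in the abstract lies in the
# sub-extensive regime (q > 1).
print(tsallis_entropy([0.5, 0.3, 0.2], q=3.0))
```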
Abstract:
This article presents maximum likelihood estimators (MLEs) and log-likelihood ratio (LLR) tests for the eigenvalues and eigenvectors of Gaussian random symmetric matrices of arbitrary dimension, where the observations are independent repeated samples from one or two populations. These inference problems are relevant in the analysis of diffusion tensor imaging data and polarized cosmic background radiation data, where the observations are, respectively, 3 x 3 and 2 x 2 symmetric positive definite matrices. The parameter sets involved in the inference problems for eigenvalues and eigenvectors are subsets of Euclidean space that are either affine subspaces, embedded submanifolds that are invariant under orthogonal transformations or polyhedral convex cones. We show that for a class of sets that includes the ones considered in this paper, the MLEs of the mean parameter do not depend on the covariance parameters if and only if the covariance structure is orthogonally invariant. Closed-form expressions for the MLEs and the associated LLRs are derived for this covariance structure.
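A minimal numerical sketch of the setting described, assuming the simplest reading of the stated result: under an orthogonally invariant covariance the MLE of the mean matrix is the sample average of the observed symmetric matrices, and its eigendecomposition furnishes the eigenvalue and eigenvector estimates. The 3 x 3 matrices below are synthetic placeholders, not diffusion tensor data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in: n independent 3x3 symmetric matrices around a known mean.
n, d = 50, 3
M_true = np.diag([3.0, 2.0, 1.0])
samples = []
for _ in range(n):
    noise = rng.normal(scale=0.2, size=(d, d))
    samples.append(M_true + (noise + noise.T) / 2.0)   # keep symmetry

# Sample average as the mean MLE (covariance-free under orthogonal invariance),
# then its eigendecomposition for the eigenvalue/eigenvector estimates.
M_hat = np.mean(samples, axis=0)
eigvals, eigvecs = np.linalg.eigh(M_hat)
print("estimated eigenvalues:", eigvals[::-1])
```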
Abstract:
The dynamical discrete web (DyDW), introduced in the recent work of Howitt and Warren, is a system of coalescing simple symmetric one-dimensional random walks which evolve in an extra continuous dynamical time parameter tau. The evolution is by independent updating of the underlying Bernoulli variables indexed by discrete space-time that define the discrete web at any fixed tau. In this paper, we study the existence of exceptional (random) values of tau where the paths of the web do not behave like usual random walks and the Hausdorff dimension of the set of such exceptional tau. Our results are motivated by those about exceptional times for dynamical percolation in high dimension by Haggstrom, Peres and Steif, and in dimension two by Schramm and Steif. The exceptional behavior of the walks in the DyDW is rather different from the situation for the dynamical random walks of Benjamini, Haggstrom, Peres and Steif. For example, we prove that the walk from the origin S(0)(tau) violates the law of the iterated logarithm (LIL) on a set of tau of Hausdorff dimension one. We also discuss how these and other results should extend to the dynamical Brownian web, the natural scaling limit of the DyDW. (C) 2009 Elsevier B.V. All rights reserved.
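A simplified simulation sketch of the object described (not code from the paper, and tracking only the single walk from the origin rather than the full coalescing family): the web at a fixed dynamical time tau is built from i.i.d. Bernoulli arrows on discrete space-time, and advancing tau independently resamples each arrow via rate-1 Poisson clocks.

```python
import numpy as np

rng = np.random.default_rng(1)

T = 1000          # number of discrete time steps
arrows = {}       # Bernoulli arrows, drawn lazily and keyed by (time, site)

def step(t, x):
    """Arrow (+1 or -1) at space-time point (t, x), drawn on first use."""
    if (t, x) not in arrows:
        arrows[(t, x)] = rng.choice([-1, 1])
    return arrows[(t, x)]

def walk_from_origin():
    x, path = 0, [0]
    for t in range(T):
        x += step(t, x)
        path.append(x)
    return path

path_tau0 = walk_from_origin()

# Advance the dynamical time by d_tau: each existing arrow is independently
# resampled with probability 1 - exp(-d_tau) (rate-1 clocks), then re-walk.
d_tau = 0.1
for key in arrows:
    if rng.random() < 1.0 - np.exp(-d_tau):
        arrows[key] = rng.choice([-1, 1])
path_tau1 = walk_from_origin()

print(path_tau0[-1], path_tau1[-1])   # endpoints before and after updating
```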
Abstract:
In this paper an alternative approach to the one in Henze (1986) is proposed for deriving the odd moments of the skew-normal distribution considered in Azzalini (1985). The approach is based on a Pascal-type triangle, which seems to greatly simplify the computation of the moments. Moreover, it is shown that the likelihood equation for estimating the asymmetry parameter in this model is generated as orthogonal functions to the sample vector. As a consequence, conditions for a unique solution of the likelihood equation are established, which seem to hold in a more general setting.
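A quick numerical cross-check of the quantities involved (a sketch by direct quadrature, not the Pascal-triangle recursion of the paper): the Azzalini (1985) skew-normal density is f(x; lambda) = 2 phi(x) Phi(lambda x), and its odd moments can be evaluated by integration.

```python
import numpy as np
from scipy import integrate
from scipy.stats import norm

def skew_normal_pdf(x, lam):
    """Azzalini (1985) skew-normal density: 2 * phi(x) * Phi(lam * x)."""
    return 2.0 * norm.pdf(x) * norm.cdf(lam * x)

def moment(k, lam):
    """k-th raw moment E[X^k] by numerical quadrature."""
    val, _ = integrate.quad(lambda x: x**k * skew_normal_pdf(x, lam),
                            -np.inf, np.inf)
    return val

lam = 2.0
delta = lam / np.sqrt(1.0 + lam**2)
# The first odd moment has the closed form sqrt(2/pi) * delta; the higher
# odd moments are what the Pascal-type triangle in the paper generates.
print(moment(1, lam), np.sqrt(2.0 / np.pi) * delta)
print(moment(3, lam))
```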
Abstract:
Based on solvation studies of polymers, the sum (1:1) of the electron acceptor (AN) and electron donor (DN) values of solvents has been proposed as an alternative polarity scale. To test this, the electron paramagnetic resonance isotropic hyperfine splitting constant, a parameter known to be dependent on the polarity/proticity of the medium, was correlated with the (AN+DN) term using three paramagnetic probes. The linear regression coefficient calculated for 15 different solvents was approximately 0.9, quite similar to those of other well-known polarity parameters, attesting to the validity of the (AN+DN) term as a novel "two-parameter" solvent polarity scale.
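A minimal sketch of the kind of correlation analysis described; the (AN+DN) sums and hyperfine constants below are placeholders, not the measured values for the 15 solvents studied.

```python
import numpy as np
from scipy import stats

# Placeholder data: hypothetical (AN + DN) sums and EPR isotropic hyperfine
# splitting constants a_N (in gauss) for a handful of solvents.
an_plus_dn = np.array([20.0, 25.0, 31.0, 38.0, 44.0, 52.0])
a_n        = np.array([14.1, 14.3, 14.6, 14.9, 15.2, 15.6])

res = stats.linregress(an_plus_dn, a_n)
print(f"slope = {res.slope:.4f}, r = {res.rvalue:.3f}")
```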
Abstract:
We have investigated the stability, electronic properties, Rayleigh (elastic) and Raman (inelastic) depolarization ratios, and infrared and Raman vibrational absorption spectra of fullerenols [C(60)(OH)(n)] with different degrees of hydroxylation by using all-electron density-functional-theory (DFT) methods. Stable arrangements of these molecules were found by means of full geometry optimizations using Becke's three-parameter exchange functional with the Lee, Yang, and Parr correlation functional. This DFT level was combined with the 6-31G(d,p) Gaussian-type basis set, as a compromise between accuracy and the capability to treat highly hydroxylated fullerenes, e.g., C(60)(OH)(36). The molecular properties of fullerenols were thus systematically analyzed for structures with n=1, 2, 3, 4, 8, 10, 16, 18, 24, 32, and 36. From the electronic structure analysis of these molecules, we have found evidence of an important effect related to the weak chemical reactivity of a possible C(60)(OH)(24) isomer. To investigate the Raman scattering and vibrational spectra of the different fullerenols, frequency calculations were carried out within the harmonic approximation; in this case a systematic study was performed only for n=1-4, 8, 10, 16, 18, and 24. Our results are in good agreement with the expected changes in the spectral absorptions due to the hydroxylation of the fullerenes.
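The DFT recipe mentioned (Becke's three-parameter exchange with LYP correlation, i.e., B3LYP, combined with a 6-31G(d,p)-quality basis) can be reproduced at small scale with any quantum-chemistry package. Below is a hedged sketch using PySCF on a tiny stand-in molecule, since a full C(60)(OH)(n) geometry is far too large to inline here.

```python
# Sketch of a B3LYP single point with a 6-31G(d,p)-equivalent basis in PySCF,
# on water as a stand-in; the fullerenol structures themselves would be
# optimized geometries with dozens of atoms.
from pyscf import gto, dft

mol = gto.M(
    atom="""O  0.0000  0.0000  0.1173
            H  0.0000  0.7572 -0.4692
            H  0.0000 -0.7572 -0.4692""",
    basis="6-31g**",     # 6-31G** is equivalent to 6-31G(d,p)
    unit="Angstrom",
)

mf = dft.RKS(mol)
mf.xc = "b3lyp"          # Becke three-parameter exchange + LYP correlation
energy = mf.kernel()     # SCF total energy in Hartree
print(energy)
```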
Abstract:
We report a detailed numerical investigation of a prototype electrochemical oscillator, in terms of high-resolution phase diagrams for an experimentally relevant section of the control (parameter) space. The prototype model consists of a set of three autonomous ordinary differential equations which captures the general features of electrochemical oscillators characterized by a partially hidden negative differential resistance in an N-shaped current-voltage stationary curve. By computing Lyapunov exponents, we provide a detailed discrimination between the chaotic and periodic phases of the electrochemical oscillator. These phases reveal the existence of an intricate structure of domains of periodicity self-organized into a chaotic background. Shrimp-like periodic regions previously observed in other discrete and continuous systems were also observed here, corroborating the universal nature of such structures. In addition, we have also found a structured period distribution within the ordered region. Finally, we discuss the possible experimental realization of comparable phase diagrams.
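The Lyapunov-exponent discrimination between chaotic and periodic phases can be illustrated in a few lines. The sketch below uses the Rossler system as a stand-in for the three-variable electrochemical model (which the abstract does not reproduce), estimating the largest exponent by the standard two-trajectory (Benettin-type) renormalization.

```python
import numpy as np
from scipy.integrate import solve_ivp

def rossler(t, s, a=0.2, b=0.2, c=5.7):
    """Stand-in three-variable flow (Rossler); chaotic for these parameters."""
    x, y, z = s
    return [-y - z, x + a * y, b + z * (x - c)]

def largest_lyapunov(f, s0, dt=0.5, steps=2000, d0=1e-8):
    """Benettin-type estimate: evolve two nearby trajectories, renormalize."""
    s, s_pert = np.array(s0, float), np.array(s0, float)
    s_pert[0] += d0
    log_sum = 0.0
    for _ in range(steps):
        s = solve_ivp(f, (0, dt), s, rtol=1e-8, atol=1e-10).y[:, -1]
        s_pert = solve_ivp(f, (0, dt), s_pert, rtol=1e-8, atol=1e-10).y[:, -1]
        d = np.linalg.norm(s_pert - s)
        log_sum += np.log(d / d0)
        s_pert = s + (s_pert - s) * (d0 / d)   # renormalize the separation
    return log_sum / (steps * dt)

# Positive estimate -> chaotic phase; near zero or negative -> periodic.
print(largest_lyapunov(rossler, [1.0, 1.0, 1.0]))
```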
Abstract:
Although the catalytic electro-oxidation of most small organic molecules presents oscillatory kinetics under certain conditions, there are few systematic studies concerning the influence of experimental parameters on the oscillatory dynamics. Of the studies available, most are devoted to C1 molecules, and only scattered data are available for C2 molecules. We present in this work a comprehensive study of the electro-oxidation of ethylene glycol (EG) on polycrystalline platinum surfaces in alkaline media. The system was studied by means of electrochemical impedance spectroscopy, cyclic voltammetry, and chronoamperometry, and the impact of parameters such as the applied current, ethylene glycol concentration, and temperature was investigated. As in other parent systems, the instabilities in this system were associated with a hidden negative differential resistance, as identified by the impedance data. Very rich and robust dynamics were observed, including harmonic and mixed-mode oscillations and chaotic states in some parameter regions. Oscillation frequencies of about 16 Hz characterized the fastest oscillations ever reported for the electro-oxidation of small organic molecules. These high frequencies were strongly influenced by the electrolyte pH and far less affected by the EG concentration. The system showed a regular dependence on temperature under voltammetric conditions but was rather temperature-independent within the oscillatory regime.
Abstract:
Due to the worldwide increase in demand for biofuels, the area cultivated with sugarcane is expected to increase. For environmental and economic reasons, an increasing proportion of these areas is being harvested without burning, leaving the residues on the soil surface. This periodic input of residues affects soil physical, chemical and biological properties, as well as plant growth and nutrition. Modeling can be a useful tool in the study of the complex interactions between climate, residue quality, and the biological factors controlling plant growth and residue decomposition. The approach taken in this work was to parameterize the CENTURY model for the sugarcane crop, to simulate the temporal dynamics of aboveground phytomass and litter decomposition, and to validate the model against field experiment data. When studying aboveground growth, burned and unburned harvest systems were compared, as well as the effect of mineral fertilizer and organic residue applications. The simulations were performed with data from experiments with different durations, from 12 months to 60 years, in Goiana, Timbaúba and Pradópolis, Brazil; Harwood, Mackay and Tully, Australia; and Mount Edgecombe, South Africa. The differentiation of two pools in the litter, with different decomposition rates, was found to be a relevant factor in the simulations. Originally, the model had an essentially unlimited layer of mulch directly available for decomposition, 5,000 g m(-2). Through a parameter optimization process, the thickness of the mulch layer closer to the soil, and thus more vulnerable to decomposition, was set to 110 g m(-2). By changing the layer of mulch available for decomposition at any given time, the sugarcane residue decomposition simulations were close to the measured values (R(2) = 0.93), contributing to making the CENTURY model a tool for the study of sugarcane litter decomposition patterns. The CENTURY model accurately simulated aboveground carbon stalk values (R(2) = 0.76), considering burned and unburned harvest systems, plots with and without nitrogen fertilizer and organic amendment applications, in different climates and soil conditions.
Abstract:
Estimates of greenhouse-gas emissions from deforestation are highly uncertain because of high variability in key parameters and because of the limited number of studies providing field measurements of these parameters. One such parameter is burning efficiency, which determines how much of the original forest's aboveground carbon stock will be released in the burn, as well as how much will later be released by decay and how much will remain as charcoal. In this paper we examined the fate of biomass from a semideciduous tropical forest in the "arc of deforestation," where clearing activity is concentrated along the southern edge of the Amazon forest. We estimated carbon content, charcoal formation and burning efficiency by direct measurements (cutting and weighing) and by line-intersect sampling (LIS) done along the axis of each plot before and after burning of felled vegetation. The total aboveground dry biomass found here (219.3 Mg ha(-1)) is lower than the values found in studies that have been done in other parts of the Amazon region. Values for burning efficiency (65%) and charcoal formation (6.0%, or 5.98 Mg C ha(-1)) were much higher than those found in past studies in tropical areas. The percentage of trunk biomass lost in burning (49%) was substantially higher than has been found in previous studies. This difference may be explained by the concentration of more stems in the smaller diameter classes and the low humidity of the fuel (the dry season was unusually long in 2007, the year of the burn). This study provides the first measurements of forest burning parameters for a group of forest types that is now undergoing rapid deforestation. The burning parameters estimated here indicate substantially higher burning efficiency than has been found in other Amazonian forest types. Quantification of burning efficiency is critical to estimates of trace-gas emissions from deforestation. (C) 2009 Elsevier B.V. All rights reserved.
Abstract:
The Zr-Au set for monitoring the thermal and epithermal neutron fluence rates and the epithermal spectrum parameter alpha is not always practicable for routine application of INAA in well-thermalized facilities. An alternative set consisting of Cr, Au and Mo provides values for the thermal neutron fluence rate, f, and alpha that are not significantly different from those found via the Zr-Au method and the Cd-covered Zr method. The IRMM standard SMELS-II was analyzed using the (Au-Cr-Mo) monitor set and good agreement was obtained. (C) 2008 Elsevier Ltd. All rights reserved.
Abstract:
Objective: We carry out a systematic assessment of a suite of kernel-based learning machines on the task of epilepsy diagnosis through automatic electroencephalogram (EEG) signal classification. Methods and materials: The kernel machines investigated include the standard support vector machine (SVM), the least squares SVM, the Lagrangian SVM, the smooth SVM, the proximal SVM, and the relevance vector machine. An extensive series of experiments was conducted on publicly available data, whose clinical EEG recordings were obtained from five normal subjects and five epileptic patients. The performance levels delivered by the different kernel machines are contrasted in terms of predictive accuracy, sensitivity to the kernel function/parameter value, and sensitivity to the type of features extracted from the signal. For this purpose, 26 values for the kernel parameter (radius) of two well-known kernel functions (namely, Gaussian and exponential radial basis functions) were considered, as well as 21 types of features extracted from the EEG signal, including statistical values derived from the discrete wavelet transform, Lyapunov exponents, and combinations thereof. Results: We first quantitatively assess the impact of the choice of the wavelet basis on the quality of the features extracted; four wavelet basis functions were considered in this study. Then, we provide the average accuracy (i.e., cross-validation) values delivered by 252 kernel machine configurations; in particular, 40%/35% of the best-calibrated models of the standard and least squares SVMs reached a 100% accuracy rate for the two kernel functions considered. Moreover, we show the sensitivity profiles exhibited by a large sample of the configurations, whereby one can visually inspect their levels of sensitivity to the type of feature and to the kernel function/parameter value. Conclusions: Overall, the results show that all kernel machines are competitive in terms of accuracy, with the standard and least squares SVMs prevailing more consistently. Moreover, the choice of the kernel function and parameter value, as well as the choice of the feature extractor, are critical decisions, although the choice of the wavelet family seems not to be so relevant. The statistical values calculated over the Lyapunov exponents were good sources of signal representation, but not as informative as their wavelet counterparts. Finally, a typical sensitivity profile emerged among all types of machines, involving some regions of stability separated by zones of sharp variation, with some kernel parameter values clearly associated with better accuracy rates (zones of optimality). (C) 2011 Elsevier B.V. All rights reserved.
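For readers who want to reproduce the style of experiment (not the authors' exact pipeline or EEG features), a minimal scikit-learn sketch of sweeping the Gaussian (RBF) kernel radius of a standard SVM under cross-validation, on synthetic stand-in features:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for the wavelet/Lyapunov feature vectors extracted
# from the EEG recordings in the study.
X, y = make_classification(n_samples=200, n_features=20, n_informative=8,
                           random_state=0)

# Sweep the RBF kernel width, analogous to the 26 kernel-radius values
# examined in the paper, and record cross-validated accuracy.
for gamma in np.logspace(-4, 1, 6):
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", gamma=gamma, C=1.0))
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"gamma = {gamma:.1e}  accuracy = {acc:.3f}")
```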