223 results for best estimate method


Relevance:

20.00%

Publisher:

Abstract:

The authors describe a novel approach to the measurement of nanofriction and demonstrate the method by measuring the coefficient of friction for diamond-like carbon (DLC) on DLC, Si on DLC, and Si on Si surfaces. The technique employs an atomic force microscope in a mode in which the tip moves only in the z (vertical) direction while the sample surface is sloped. As the tip moves vertically on the sloped surface, lateral tip slipping occurs, allowing the cantilever vertical deflection and the frictional (lateral) force to be monitored as a function of tip vertical deflection. The advantage of the approach is that cantilever calibration to obtain its spring constants is not necessary. Using this method, the authors measured friction coefficients, for the load range 0 < L < 6 µN, of 0.047 ± 0.002 for Si on Si, 0.0173 ± 0.0009 for Si on DLC, and 0.0080 ± 0.0005 for DLC on DLC. For the load range 9 < L < 13 µN, the DLC-on-DLC coefficient of friction increased to 0.051 ± 0.003. (C) 2008 American Vacuum Society.
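Since the coefficient of friction reported above is the proportionality between lateral force and normal load, it can be recovered as the slope of a least-squares line through (load, lateral force) data. A minimal sketch with invented numbers (not the authors' measurements):

```python
import numpy as np

# Hypothetical example: the friction coefficient is the slope of the
# lateral (frictional) force versus applied normal load.  The loads and
# forces below are made-up numbers for illustration only.
loads_uN = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])   # normal load, µN
friction_uN = 0.047 * loads_uN + 0.002                # synthetic lateral force, µN

# Least-squares line through the data; the slope is the friction coefficient.
slope, intercept = np.polyfit(loads_uN, friction_uN, 1)
print(f"friction coefficient ≈ {slope:.3f}")
```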

Relevance:

20.00%

Publisher:

Abstract:

We have studied some possible four-quark and molecular configurations of the X(3872) using double ratios of sum rules, which are more accurate than the usual simple ratios often used in the literature to obtain hadron masses. We found that the different structures (3̄–3̄ and 6̄–6 tetraquarks and a D–D* molecule) lead to the same prediction for the mass (within the accuracy of the method), indicating that the prediction of the X mass alone may not be sufficient to reveal its nature. In these analyses, we also find that (within our approximation) the use of the MS-bar running mass m̄_c(m_c²), rather than the on-shell mass, is more appropriate for obtaining the J/ψ and X meson masses. Using vertex sum rules to roughly estimate the X(3872) hadronic and radiative widths, we found that the available experimental data do not exclude a λ–J/ψ-like molecule current.

Relevance:

20.00%

Publisher:

Abstract:

In the case of quantum wells, indium segregation leads to complex potential profiles that are rarely considered in theoretical models. The authors demonstrated that the split-operator method is a useful tool for obtaining the electronic properties in these cases. In particular, they studied the influence of indium surface segregation on the optical properties of InGaAs/GaAs quantum wells. Photoluminescence measurements were carried out for a set of InGaAs/GaAs quantum wells and compared with the results obtained theoretically via the split-operator method, showing good agreement.
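The split-operator method mentioned above can be sketched in a few lines: a trial wavefunction is propagated in imaginary time, alternating exact kinetic steps in Fourier space with potential steps in real space, until it relaxes to the ground state. In the toy below a harmonic well stands in (as an assumption) for the segregation-modified quantum-well profile, with ħ = m = 1:

```python
import numpy as np

# Minimal sketch of imaginary-time split-operator propagation on a 1-D
# potential.  All parameters are illustrative, not taken from the paper.
N, L = 512, 20.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = L / N
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)
V = 0.5 * x**2                      # toy potential (harmonic oscillator)

dt = 0.01
psi = np.exp(-x**2)                 # arbitrary starting guess
for _ in range(5000):
    psi = psi * np.exp(-0.5 * V * dt)                              # half step: potential
    psi = np.fft.ifft(np.exp(-0.5 * k**2 * dt) * np.fft.fft(psi))  # full step: kinetic
    psi = psi * np.exp(-0.5 * V * dt)                              # half step: potential
    psi = psi / np.sqrt(np.sum(np.abs(psi)**2) * dx)               # renormalize

# Ground-state energy <psi|H|psi>; the exact value for this potential is 0.5.
phi = np.fft.fft(psi)
kinetic = np.sum(0.5 * k**2 * np.abs(phi)**2) / np.sum(np.abs(phi)**2)
energy = kinetic + np.sum(V * np.abs(psi)**2) * dx
print(round(float(energy.real), 3))
```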

Relevance:

20.00%

Publisher:

Abstract:

The structure of probability currents is studied for the dynamical network obtained after consecutive contractions of two-state, nonequilibrium lattice systems. This procedure allows us to investigate the transition rates between configurations on small clusters and highlights some relevant effects of lattice symmetries on the elementary transitions that are responsible for entropy production. A method is suggested to estimate the entropy production at different levels of approximation (cluster sizes), as demonstrated for the two-dimensional contact process with mutation.
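For a concrete (and much simpler) illustration of how transition rates determine entropy production, the sketch below evaluates the standard Schnakenberg expression σ = ½ Σᵢⱼ (pᵢwᵢⱼ − pⱼwⱼᵢ) ln(pᵢwᵢⱼ / pⱼwⱼᵢ) for an invented three-state Markov chain, not for the contact process of the paper:

```python
import numpy as np

# Hedged sketch: Schnakenberg entropy-production rate of a generic
# continuous-time Markov chain.  The rate matrices are invented toys.
def entropy_production(W, p):
    """W[i, j]: transition rate i -> j; p: stationary distribution."""
    sigma = 0.0
    n = len(p)
    for i in range(n):
        for j in range(n):
            if i != j and W[i, j] > 0 and W[j, i] > 0:
                flux = p[i] * W[i, j] - p[j] * W[j, i]
                sigma += 0.5 * flux * np.log((p[i] * W[i, j]) / (p[j] * W[j, i]))
    return sigma

# Detailed balance (p_i W_ij = p_j W_ji) must give zero entropy production.
W_eq = np.array([[0.0, 1.0, 2.0], [1.0, 0.0, 1.0], [2.0, 1.0, 0.0]])
p_eq = np.array([1 / 3, 1 / 3, 1 / 3])
print(entropy_production(W_eq, p_eq))        # 0.0

# A driven cycle (clockwise rates 2, counter-clockwise 1) produces entropy.
W_neq = np.array([[0.0, 2.0, 1.0], [1.0, 0.0, 2.0], [2.0, 1.0, 0.0]])
p_neq = np.array([1 / 3, 1 / 3, 1 / 3])      # uniform is stationary here
print(entropy_production(W_neq, p_neq) > 0)  # True
```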

Relevance:

20.00%

Publisher:

Abstract:

The solvent effects on the low-lying absorption spectrum and on the ¹⁵N chemical shielding of pyrimidine in water are calculated using combined and sequential Monte Carlo simulation and quantum mechanical calculations. Special attention is devoted to the solute polarization. This is included by a previously developed iterative procedure in which the solute is electrostatically equilibrated with the solvent. In addition, we verify the simple yet unexplored alternative of combining the polarizable continuum model (PCM) and the hybrid QM/MM method. We use PCM to obtain the average solute polarization and include this in the MM part of the sequential QM/MM methodology, PCM-MM/QM. These procedures are compared and further used in the discrete and the explicit solvent models. The use of the PCM polarization implemented in the MM part seems to generate a very good description of the average solute polarization, leading to very good results for the n–π* excitation energy and the ¹⁵N nuclear chemical shielding of pyrimidine in an aqueous environment. The best results obtained here, using the solute pyrimidine surrounded by 28 explicit water molecules embedded in the electrostatic field of the remaining 472 molecules, give statistically converged values for the low-lying n–π* absorption transition in water of 36 900 ± 100 (PCM polarization) and 36 950 ± 100 cm⁻¹ (iterative polarization), in excellent agreement with one another and with the experimental band maximum at 36 900 cm⁻¹. For the ¹⁵N nuclear shielding, the corresponding gas-to-water chemical shifts, obtained using the solute pyrimidine surrounded by 9 explicit water molecules embedded in the electrostatic field of the remaining 491 molecules, give statistically converged values of 24.4 ± 0.8 and 28.5 ± 0.8 ppm, compared with the inferred experimental value of 19 ± 2 ppm.
Considering the simplicity of the PCM relative to the iterative polarization, this is an important aspect, and the computational savings point to the possibility of dealing with larger solute molecules. This PCM-MM/QM approach reconciles the simplicity of the PCM model with the reliability of the combined QM/MM approaches.
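Stripped to its bare essentials, the iterative solute-polarization idea is a self-consistency loop: the solute dipole sets up a reaction field, the field re-polarizes the solute, and the cycle repeats until the dipole stops changing. The toy below uses an invented linear reaction-field model; all numbers are assumptions, not values from the paper:

```python
# Hedged toy model of iterative (self-consistent) solute polarization.
# mu0: gas-phase dipole; alpha: polarizability; f: reaction-field factor.
mu0, alpha, f = 2.3, 8.0, 0.05   # invented numbers, alpha*f < 1 for convergence

mu = mu0
for step in range(100):
    mu_new = mu0 + alpha * f * mu     # solute re-polarized by the reaction field
    if abs(mu_new - mu) < 1e-10:      # converged: solute and solvent equilibrated
        break
    mu = mu_new

# The fixed point of this linear model has the closed form mu0 / (1 - alpha*f).
print(round(mu, 6), round(mu0 / (1 - alpha * f), 6))
```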

Relevance:

20.00%

Publisher:

Abstract:

We use the density functional theory/local-density approximation (DFT/LDA)-1/2 method [L. G. Ferreira, Phys. Rev. B 78, 125116 (2008)], which attempts to fix the electron self-energy deficiency of DFT/LDA by half-ionizing the whole Bloch band of the crystal, to calculate the band offsets of two Si/SiO₂ interface models. Our results are similar to those obtained with a "state-of-the-art" GW approach [R. Shaltaf, Phys. Rev. Lett. 100, 186401 (2008)], with the advantage of being as computationally inexpensive as usual DFT/LDA. Our band-gap and band-offset predictions are in excellent agreement with experiment.

Relevance:

20.00%

Publisher:

Abstract:

The objective of this work was to develop and validate a rapid reversed-phase high-performance liquid chromatography method for the quantification of 3,5,3′-triiodothyroacetic acid (TRIAC) in nanoparticle delivery systems prepared with different polymeric matrices. Special attention was given to developing a reliable, reproducible technique for the pretreatment of the samples. Chromatographic runs were performed on an Agilent 1200 Series HPLC with an RP Phenomenex® Gemini C18 (150 × 4.6 mm i.d., 5 µm) column, using acetonitrile and 0.1% triethylamine (TEA) buffer (40:60 v/v) as the mobile phase in isocratic elution, pH 5.6, at a flow rate of 1 ml min⁻¹. TRIAC was detected at a wavelength of 220 nm. The injection volume was 20 µl and the column temperature was maintained at 35 °C. The validation characteristics included accuracy, precision, specificity, linearity, recovery, and robustness. The standard curve was found to be linear (r² = 0.9996) over the analytical range of 5–100 µg ml⁻¹. The detection and quantitation limits were 1.3 and 3.8 µg ml⁻¹, respectively. The recovery and loaded TRIAC in the colloidal delivery system were nearly 100% and 98%, respectively. The method was successfully applied to polycaprolactone, polyhydroxybutyrate, and polymethylmethacrylate nanoparticles.
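Detection and quantitation limits of the kind quoted above are conventionally obtained from the calibration line (ICH-style LOD = 3.3 s/S and LOQ = 10 s/S, with S the slope and s the residual standard deviation). A minimal sketch with invented calibration points, not the paper's data:

```python
import numpy as np

# Hedged sketch of ICH-style limit estimates from a calibration line.
# The calibration points below are invented for illustration.
conc = np.array([5, 10, 25, 50, 75, 100], dtype=float)        # µg/ml
area = 12.0 * conc + 3.0 + np.array([0.4, -0.6, 0.5, -0.3, 0.2, -0.2])

slope, intercept = np.polyfit(conc, area, 1)
residuals = area - (slope * conc + intercept)
s = np.sqrt(np.sum(residuals**2) / (len(conc) - 2))           # residual std dev

lod = 3.3 * s / slope
loq = 10.0 * s / slope
print(f"LOD ≈ {lod:.2f} µg/ml, LOQ ≈ {loq:.2f} µg/ml")
```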

Relevance:

20.00%

Publisher:

Abstract:

In this article, we evaluate the use of simple Lee-Goldburg cross-polarization (LG-CP) NMR experiments for obtaining quantitative information on molecular motion in the intermediate regime. In particular, we introduce the measurement of Hartmann-Hahn matching profiles for the assessment of heteronuclear dipolar couplings, as well as dynamics, as a reliable and robust alternative to the more common analysis of build-up curves. We have carried out spin-dynamics simulations in order to test the method's sensitivity to intermediate motion and to address its limitations concerning possible experimental imperfections. We further demonstrate the successful use of simple theoretical concepts, most prominently Anderson-Weiss (AW) theory, to analyze the data. We also propose an alternative way to estimate activation energies of molecular motions, based upon the acquisition of only two LG-CP spectra per temperature, at different temperatures. As experimental tests, molecular jumps in imidazole methyl sulfonate, trimethylsulfoxonium iodide, and bisphenol A polycarbonate were investigated with the new method.
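Estimating an activation energy from rates at different temperatures rests on the usual Arrhenius relation k = A·exp(−Ea/RT); two temperatures suffice to fix Ea. A minimal sketch with invented rates (not the paper's data):

```python
import math

# Hedged illustration of the two-point Arrhenius estimate:
# Ea = R * ln(k1/k2) / (1/T2 - 1/T1).  The rates are generated from a
# known Ea to check the formula; none of this is measured data.
R = 8.314  # J mol^-1 K^-1

def activation_energy(k1, T1, k2, T2):
    return R * math.log(k1 / k2) / (1 / T2 - 1 / T1)

Ea_true, A = 40e3, 1e12                       # assumed values
k = lambda T: A * math.exp(-Ea_true / (R * T))
print(round(activation_energy(k(300.0), 300.0, k(320.0), 320.0)))  # 40000
```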

Relevance:

20.00%

Publisher:

Abstract:

In this work, the time-resolved thermal lens method is combined with an interferometric technique, thermal relaxation calorimetry, photoluminescence, and lifetime measurements to determine the thermophysical properties of Nd₂O₃-doped sodium zincborate glass as a function of temperature up to the glass transition region. Thermal diffusivity, thermal conductivity, fluorescence quantum efficiency, linear thermal expansion coefficient, and the thermal coefficient of electronic polarizability were determined. In conclusion, the results showed the ability of the thermal lens and interferometric methods to perform measurements very close to the phase transition region. These techniques provide absolute values for the measured physical quantities and are advantageous when low scan rates are required. (c) 2008 Optical Society of America
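One relation commonly used in time-resolved thermal lens analysis connects the fitted characteristic time of the transient, t_c, to the thermal diffusivity via t_c = w²/(4D), where w is the excitation beam waist. A minimal sketch with assumed numbers (not the measured values of this work):

```python
# Hedged example; w and t_c below are illustrative assumptions.
w = 40e-6        # beam waist at the sample (m), assumed
t_c = 1.0e-3     # fitted characteristic time (s), assumed

D = w**2 / (4 * t_c)           # thermal diffusivity, m^2/s
print(f"D = {D:.2e} m^2/s")    # 4.00e-07 m^2/s

# Thermal conductivity then follows from k = rho * c_p * D, given the
# density and specific heat (values assumed for a typical oxide glass).
rho, c_p = 2.5e3, 8.0e2        # kg/m^3, J/(kg K), assumed
k = rho * c_p * D
print(f"k = {k:.2f} W/(m K)")  # 0.80 W/(m K)
```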

Relevance:

20.00%

Publisher:

Abstract:

Base-level maps (or "isobase maps", as originally defined by Filosofov, 1960) express a relationship between valley order and topography. The base-level map can be seen as a "simplified" version of the original topographic surface, from which the "noise" of low-order stream erosion has been removed. This method is able to identify areas with possible tectonic influence even within lithologically uniform domains. Base-level maps have recently been applied in semi-detail-scale (e.g., 1:50 000 or larger) morphotectonic analysis. In this paper, we present an evaluation of the method's applicability to regional-scale analysis (e.g., 1:250 000 or smaller). A test area was selected in northern Brazil, at the lower course of the Araguaia and Tocantins rivers. The drainage network extracted from SRTM30_PLUS DEMs with a spatial resolution of approximately 900 m was visually compared with available topographic maps and considered to be compatible with a 1:1 000 000 scale. Regarding the interpretation of regional-scale morphostructures, the map constructed with 2nd- and 3rd-order valleys was considered to present the best results. Some of the interpreted base-level anomalies correspond to important shear zones and geological contacts present in the 1:5 000 000 Geological Map of South America. Others have no correspondence with mapped Precambrian structures and are considered to represent younger, probably neotectonic, features. A strong E-W orientation of the base-level lines over the inflexion of the Araguaia and Tocantins rivers suggests a major drainage capture. A N-S topographic swath profile over the Tocantins and Araguaia rivers reveals a topographic pattern which, allied with seismic data showing a roughly N-S direction of extension in the area, leads us to interpret this lineament as an E-W, southward-dipping normal fault. There is also a good visual correspondence between the base-level lineaments and geophysical anomalies.
A NW-SE lineament in the southeast of the study area partially corresponds to the northern border of the Mosquito lava field, of Jurassic age, and a NW-SE lineament traced in the northeastern sector of the study area can be interpreted as the Picos-Santa Ines lineament, identifiable in geophysical maps but with little expression in hypsometric or topographic maps.
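The core of constructing a base-level map can be sketched as: keep only the elevations at valley points of the chosen orders, then interpolate them into a continuous surface. The toy below uses synthetic points and plain inverse-distance weighting; a real workflow would use a GIS interpolator such as kriging or splines:

```python
import numpy as np

# Hedged sketch of a base-level (isobase) surface: interpolate elevations
# sampled only at 2nd/3rd-order valley points.  The points are synthetic.
rng = np.random.default_rng(0)
px = rng.uniform(0, 10, 50)          # x of valley points
py = rng.uniform(0, 10, 50)          # y of valley points
pz = 100.0 + 5.0 * px - 3.0 * py     # their elevations (synthetic planar trend)

def idw(xg, yg, px, py, pz, power=2.0):
    """Inverse-distance-weighted estimate of z at one grid node."""
    d2 = (px - xg) ** 2 + (py - yg) ** 2
    if d2.min() < 1e-12:             # grid node coincides with a data point
        return pz[d2.argmin()]
    w = 1.0 / d2 ** (power / 2.0)
    return np.sum(w * pz) / np.sum(w)

# Interpolate the base-level surface on a coarse grid.
xs = np.linspace(1, 9, 9)
ys = np.linspace(1, 9, 9)
surface = np.array([[idw(x, y, px, py, pz) for x in xs] for y in ys])
print(surface.shape)                 # (9, 9)
```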

Relevance:

20.00%

Publisher:

Abstract:

Thanks to recent advances in molecular biology, allied to an ever-increasing amount of experimental data, the functional state of thousands of genes can now be extracted simultaneously by methods such as cDNA microarrays and RNA-Seq. Particularly important related investigations are the modeling and identification of gene regulatory networks from expression data sets. Such knowledge is fundamental for many applications, such as disease treatment, therapeutic intervention strategies and drug design, as well as for planning new high-throughput experiments. Methods have been developed for gene network modeling and identification from expression profiles. However, an important open problem is how to validate such approaches and their results. This work presents an objective approach for the validation of gene network modeling and identification which comprises three main aspects: (1) Artificial Gene Network (AGN) model generation through theoretical models of complex networks, which is used to simulate temporal expression data; (2) a computational method for gene network identification from the simulated data, founded on a feature-selection approach in which a target gene is fixed and the expression profiles of all other genes are observed in order to identify a relevant subset of predictors; and (3) validation of the identified AGN-based network through comparison with the original network. The proposed framework allows several types of AGNs to be generated and used to simulate temporal expression data. The results of the network identification method can then be compared to the original network in order to estimate its properties and accuracy. Some of the most important theoretical models of complex networks have been assessed: the uniformly random Erdős-Rényi (ER), the small-world Watts-Strogatz (WS), the scale-free Barabási-Albert (BA), and geographical networks (GG).
The experimental results indicate that the inference method was sensitive to variation of the average degree k, its network recovery rate decreasing as k increases. The signal size was important for the inference method to achieve better accuracy in the network identification rate, with very good results for small expression profiles. However, the adopted inference method was not able to recognize distinct structures of interaction among genes, presenting similar behavior when applied to different network topologies. In summary, the proposed framework, though simple, was adequate for the validation of the inferred networks by identifying some properties of the evaluated method, and it can be extended to other inference methods.
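Step (3) of the framework, comparing the identified network with the original one, amounts to edge-set bookkeeping. The sketch below scores a stand-in "inferred" network (a randomly perturbed copy, not the paper's feature-selection method) against a synthetic Erdős-Rényi-style ground truth:

```python
import random

# Hedged sketch of validating an inferred network against a known
# artificial (AGN) ground truth by counting true/false positive edges.
random.seed(1)
genes = range(20)

# Ground truth: each directed edge present with probability 0.1.
truth = {(i, j) for i in genes for j in genes if i != j and random.random() < 0.1}

# Fake "inferred" network: keep ~80% of true edges, add a few spurious ones.
inferred = {e for e in truth if random.random() < 0.8}
inferred |= {(i, j) for i in genes for j in genes if i != j and random.random() < 0.02}

tp = len(truth & inferred)
precision = tp / len(inferred)
recall = tp / len(truth)
print(f"precision = {precision:.2f}, recall = {recall:.2f}")
```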

Relevance:

20.00%

Publisher:

Abstract:

Background: The inference of gene regulatory networks (GRNs) from large-scale expression profiles is one of the most challenging problems in Systems Biology today. Many techniques and models have been proposed for this task. However, it is generally not possible to recover the original topology with great accuracy, mainly because of the short time series data in face of the high complexity of the networks and the intrinsic noise of the expression measurements. In order to improve the accuracy of GRN inference methods based on entropy (mutual information), a new criterion function is proposed here. Results: In this paper we introduce the use of the generalized entropy proposed by Tsallis for the inference of GRNs from time series expression profiles. The inference process is based on a feature-selection approach, and the conditional entropy is applied as the criterion function. In order to assess the proposed methodology, the algorithm is applied to recover the network topology from temporal expressions generated by an artificial gene network (AGN) model as well as from the DREAM challenge. The adopted AGN is based on theoretical models of complex networks, and its gene transfer function is obtained by random drawing from the set of possible Boolean functions, thus creating its dynamics. The DREAM time series data, on the other hand, vary in network size, and their topologies are based on real networks; their dynamics are generated by continuous differential equations with noise and perturbation. By adopting both data sources, it is possible to estimate the average quality of the inference with respect to different network topologies, transfer functions and network sizes. Conclusions: A remarkable improvement in accuracy was observed in the experimental results, the non-Shannon entropy reducing the number of false connections in the inferred topology.
The best value of the free parameter of the Tsallis entropy was on average in the range 2.5 ≤ q ≤ 3.5 (hence, subextensive entropy), which opens new perspectives for GRN inference methods based on information theory and for investigation of the nonextensivity of such networks. The inference algorithm and criterion function proposed here were implemented and included in the DimReduction software, which is freely available at http://sourceforge.net/projects/dimreduction and http://code.google.com/p/dimreduction/.
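The quantity at the heart of the criterion is the Tsallis entropy S_q = (1 − Σᵢ pᵢ^q)/(q − 1), which recovers the Shannon entropy in the limit q → 1. A minimal sketch with an arbitrary distribution (not data from the paper):

```python
import math

# Hedged sketch of the generalized (Tsallis) entropy used as a criterion
# function; the distribution p is an arbitrary example.
def tsallis(p, q):
    if abs(q - 1.0) < 1e-9:                       # Shannon limit q -> 1
        return -sum(x * math.log(x) for x in p if x > 0)
    return (1.0 - sum(x**q for x in p)) / (q - 1.0)

p = [0.5, 0.25, 0.25]
print(round(tsallis(p, 1.0), 4))   # Shannon entropy (nats): 1.0397
print(round(tsallis(p, 3.0), 4))   # subextensive regime of the paper: 0.4219
```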

Relevance:

20.00%

Publisher:

Abstract:

Background: Identifying local similarity between two or more sequences, or identifying repeats occurring at least twice in a sequence, is an essential part of the analysis of biological sequences and of their phylogenetic relationships. Finding such fragments while allowing for a certain number of insertions, deletions, and substitutions is, however, known to be a computationally expensive task, and consequently exact methods can usually not be applied in practice. Results: The filter TUIUIU that we introduce in this paper provides a possible solution to this problem. It can be used as a preprocessing step for any multiple alignment or repeat inference method, eliminating a possibly large fraction of the input that is guaranteed not to contain any approximate repeat. It consists of the verification of several strong necessary conditions that can be checked quickly. We implemented three versions of the filter. The first is simply a straightforward extension to the case of multiple sequences of conditions already existing in the literature. The second uses a stronger condition which, as our results show, enables noticeably more filtering with negligible (if any) additional time. The third version uses an additional condition and pushes the sensitivity of the filter even further, with a non-negligible additional time in many circumstances; our experiments show that it is particularly useful with large error rates. The latter version was applied as a preprocessing step for a multiple alignment tool, obtaining an overall time (filter plus alignment) on average 63 and at best 530 times smaller than before (direct alignment), in most cases with a better-quality alignment. Conclusion: To the best of our knowledge, TUIUIU is the first filter designed for multiple repeats and for dealing with error rates greater than 10% of the repeat length.
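A classic example of the kind of strong necessary condition such filters verify is the q-gram lemma: two strings of length m within edit distance e must share at least m + 1 − q(e + 1) q-gram occurrences, so window pairs sharing fewer can be discarded without alignment. The sketch below is a generic illustration of this idea, not TUIUIU's actual conditions:

```python
from collections import Counter

# Hedged sketch of q-gram filtering.  shared_qgrams counts common q-gram
# occurrences (multiset intersection), as the lemma requires.
def shared_qgrams(a, b, q):
    ca = Counter(a[i:i + q] for i in range(len(a) - q + 1))
    cb = Counter(b[i:i + q] for i in range(len(b) - q + 1))
    return sum(min(ca[g], cb[g]) for g in ca)

def may_match(a, b, q, e):
    threshold = len(a) + 1 - q * (e + 1)     # q-gram lemma lower bound
    return shared_qgrams(a, b, q) >= threshold

a = "ACGTACGTACGT"
b = "ACGTACCTACGT"      # one substitution away from a
c = "TTTTTTTTTTTT"      # unrelated
print(may_match(a, b, q=3, e=1))   # True  -> must still be verified by alignment
print(may_match(a, c, q=3, e=1))   # False -> safely filtered out
```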

Relevance:

20.00%

Publisher:

Abstract:

Background: Feature selection is a pattern recognition approach for choosing important variables according to some criterion in order to distinguish or explain certain phenomena (i.e., for dimensionality reduction). Many genomic and proteomic applications rely on feature selection to answer questions such as selecting signature genes which are informative about some biological state, e.g., normal tissues and several types of cancer, or inferring a prediction network among elements such as genes, proteins and external stimuli. In these applications, a recurrent problem is the lack of samples needed to perform an adequate estimate of the joint probabilities between element states. A myriad of feature selection algorithms and criterion functions have been proposed, although it is difficult to point to the best solution for each application. Results: The intent of this work is to provide an open-source multiplatform graphical environment for bioinformatics problems, which supports many feature selection algorithms, criterion functions and graphical visualization tools such as scatterplots, parallel coordinates and graphs. A feature selection approach for growing genetic networks from seed genes (targets or predictors) is also implemented in the system. Conclusion: The proposed feature selection environment allows data analysis using several algorithms, criterion functions and graphical visualization tools. Our experiments have shown the software's effectiveness in two distinct types of biological problems. Moreover, the environment can be used in different pattern recognition applications, although its main concern is bioinformatics tasks.
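One classic algorithm such an environment typically offers is sequential forward selection with a pluggable criterion function. The sketch below uses an invented toy criterion; a real criterion would be, e.g., a classifier error or an entropy-based measure:

```python
# Hedged sketch of sequential forward selection (SFS).
def sfs(features, criterion, k):
    selected = []
    while len(selected) < k:
        # Greedily add the feature that improves the criterion the most.
        best = max((f for f in features if f not in selected),
                   key=lambda f: criterion(selected + [f]))
        selected.append(best)
    return selected

INFORMATIVE = {"gene_3", "gene_7"}          # invented ground truth
def toy_criterion(subset):
    # Reward informative features, lightly penalize subset size.
    return len(INFORMATIVE & set(subset)) - 0.01 * len(subset)

features = [f"gene_{i}" for i in range(10)]
print(sorted(sfs(features, toy_criterion, k=2)))   # ['gene_3', 'gene_7']
```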

Relevance:

20.00%

Publisher:

Abstract:

This paper describes methods for the direct determination of Cd and Pb in hair segments (ca. 5 mm, ~80 µg) by solid sampling graphite furnace atomic absorption spectrometry, making longitudinal profiles along a single strand of hair possible. To distinguish endogenous from exogenous content, strands of hair were washed using two different procedures: the IAEA protocol (acetone + water + acetone) and the combination of the IAEA protocol with an HCl wash (acetone + water + acetone + 0.1 mol l⁻¹ HCl). The concentrations of Cd and Pb increased from the root to the tip of hair washed according to the IAEA protocol. However, when the strand of hair was washed using the combination of the IAEA protocol and 0.1 mol l⁻¹ HCl, Cd concentrations decreased in all segments, while Pb concentrations decreased drastically near the root (5 to 12 mm) and were systematically higher at the end. The proposed method proved useful for assessing temporal variation in Cd and Pb exposure and can be used for toxicological and environmental investigations. The limits of detection were 2.8 ng g⁻¹ for Cd and 40 ng g⁻¹ for Pb. The characteristic masses, based on integrated absorbance, were 2.4 pg for Cd and 22 pg for Pb.