120 results for Scale density
Abstract:
The performance of the SAOP potential for the calculation of NMR chemical shifts was evaluated. SAOP results show considerable improvement with respect to previous potentials, such as VWN or BP86, at least for the carbon, nitrogen, oxygen, and fluorine chemical shifts. Furthermore, a few NMR calculations carried out on third-period atoms (S, P, and Cl) improved when using the SAOP potential.
Abstract:
The effect of basis set superposition error (BSSE) on molecular complexes is analyzed. The BSSE causes artificial delocalizations which modify the first-order electron density. The mechanism of this effect is assessed for the hydrogen fluoride dimer with several basis sets. The BSSE-corrected first-order electron density is obtained using the chemical Hamiltonian approach versions of the Roothaan and Kohn-Sham equations. The corrected densities are compared to uncorrected densities based on the charge density critical points. Contour difference maps between BSSE-corrected and uncorrected densities on the molecular plane are also plotted to gain insight into the effects of BSSE correction on the electron density.
Abstract:
The total energy of a molecule, presented as a sum of one- and two-atomic energy components in terms of 'fuzzy atoms', is described. The division of three-dimensional physical space into atomic regions exhibits a continuous transition from one region to another. With proper definitions, the energy components are on a chemical energy scale. The method is realized using Becke's integration scheme and weight function, which permits effective numerical integration.
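Becke's weight function mentioned above assigns each grid point a smooth share per atom, so the atomic regions overlap with no sharp boundaries. A minimal numpy sketch of the standard Becke construction (the centers and evaluation point are illustrative, not from the paper):

```python
import numpy as np

def becke_step(mu, k=3):
    """Becke's smoothing polynomial p(mu) = 1.5*mu - 0.5*mu**3, iterated k times."""
    for _ in range(k):
        mu = 1.5 * mu - 0.5 * mu ** 3
    return mu

def becke_weights(point, centers):
    """Fuzzy-atom weights w_A(r) at one grid point, one weight per atomic center."""
    centers = np.asarray(centers, dtype=float)
    n = len(centers)
    P = np.ones(n)                      # unnormalized cell functions
    for a in range(n):
        for b in range(n):
            if a == b:
                continue
            ra = np.linalg.norm(point - centers[a])
            rb = np.linalg.norm(point - centers[b])
            rab = np.linalg.norm(centers[a] - centers[b])
            mu = (ra - rb) / rab        # elliptical coordinate in [-1, 1]
            P[a] *= 0.5 * (1.0 - becke_step(mu))   # cell function s(mu)
    return P / P.sum()                  # weights sum to 1 at every point

# two centers: a point much nearer the first center gets almost all the weight
w = becke_weights(np.array([0.3, 0.0, 0.0]),
                  [[0.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
print(w.round(3))
```

Because the weights sum to one everywhere, integrating a property weighted by w_A over all space decomposes it exactly into atomic contributions.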
Abstract:
Quantum molecular similarity (QMS) techniques are used to assess the response of the electron density of various small molecules to the application of a static, uniform electric field. Likewise, QMS is used to analyze the changes in electron density generated by the process of floating a basis set. The results obtained show an interrelation between the floating process, the optimum geometry, and the presence of an external field. Cases involving the Le Chatelier principle are discussed, and insight is given into the changes of bond critical point properties, self-similarity values, and density differences.
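A common quantum-similarity measure is the overlap integral Z_AB between two densities, normalized into the Carbó index C_AB = Z_AB / sqrt(Z_AA Z_BB). A minimal 1D sketch with model Gaussian densities (the densities and the shift, standing in for a field-induced change, are illustrative):

```python
import numpy as np

def carbo_index(rho_a, rho_b, dx):
    """Carbo similarity index C_AB = Z_AB / sqrt(Z_AA * Z_BB),
    where Z_XY is the overlap integral of the two densities (1D grid here)."""
    z_ab = (rho_a * rho_b).sum() * dx
    z_aa = (rho_a * rho_a).sum() * dx
    z_bb = (rho_b * rho_b).sum() * dx
    return z_ab / np.sqrt(z_aa * z_bb)

x = np.linspace(-10.0, 10.0, 2001)
dx = x[1] - x[0]
rho = np.exp(-x ** 2)                  # model unperturbed density
rho_shift = np.exp(-(x - 0.5) ** 2)    # same density, displaced (e.g. by a field)

print(round(carbo_index(rho, rho, dx), 3))        # identical densities give 1.0
print(round(carbo_index(rho, rho_shift, dx), 3))  # similarity drops below 1
```

The index equals 1 only for identical (proportional) densities, so its deviation from 1 quantifies how much the perturbation reshaped the density.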
Abstract:
A procedure based on quantum molecular similarity measures (QMSM) has been used to compare electron densities obtained from conventional ab initio and density functional methodologies at their respective optimized geometries. This method has been applied to a series of small molecules which have experimentally known properties and molecular bonds of diverse degrees of ionicity and covalency. Results show that in most cases the electron densities obtained from density functional methodologies are of similar quality to post-Hartree-Fock generalized densities. For molecules where Hartree-Fock methodology yields erroneous results, the density functional methodology is shown usually to yield more accurate densities than those provided by second-order Møller-Plesset perturbation theory.
Abstract:
We report here a new empirical density functional that is constructed based on the performance of OPBE and PBE for spin states and SN2 reaction barriers and how these are affected by different regions of the reduced gradient expansion. In a previous study [Swart, Solà, and Bickelhaupt, J. Comput. Methods Sci. Eng. 9, 69 (2009)] we already reported how, by switching between OPBE and PBE, one could obtain both the good performance of OPBE for spin states and reaction barriers and that of PBE for weak interactions within one and the same (SSB-sw) functional. Here we fine-tuned this functional and include a portion of the KT functional and Grimme's dispersion correction to account for π-π stacking. Our new SSB-D functional is found to be a clear improvement and functions very well for biological applications (hydrogen bonding, π-π stacking, spin-state splittings, accuracy of geometries, reaction barriers).
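Grimme's dispersion correction mentioned above adds a damped pairwise -C6/R^6 term on top of the functional. A sketch of that pairwise form (the D2-style damping; the C6 coefficients, radii, and scaling factor below are illustrative placeholders, not SSB-D's fitted parameters):

```python
import numpy as np

def pairwise_dispersion(coords, c6, r0, s6=1.0, d=20.0):
    """Damped pairwise -C6/R^6 dispersion energy (the form of Grimme's
    D2 correction; all parameters here are illustrative)."""
    coords = np.asarray(coords, dtype=float)
    e = 0.0
    n = len(coords)
    for i in range(n):
        for j in range(i + 1, n):
            r = np.linalg.norm(coords[i] - coords[j])
            c6ij = np.sqrt(c6[i] * c6[j])       # geometric-mean combination rule
            rr = r0[i] + r0[j]                  # sum of van der Waals radii
            fdmp = 1.0 / (1.0 + np.exp(-d * (r / rr - 1.0)))  # Fermi damping
            e -= s6 * c6ij / r ** 6 * fdmp      # attractive contribution
    return e

# two stacked carbon-like atoms; C6 and radii in atomic units (hypothetical values)
e = pairwise_dispersion([[0.0, 0.0, 0.0], [0.0, 0.0, 6.0]],
                        c6=[30.0, 30.0], r0=[2.7, 2.7])
print(e < 0.0)   # dispersion lowers the energy
```

The damping function switches the correction off at short range, where the functional itself already describes the interaction.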
Abstract:
A comparative systematic study of the CrO2F2 compound has been performed using different conventional ab initio methodologies and density functional procedures. Two points have been analyzed: first, the accuracy of the results yielded by each method under study, and second, the computational cost required to reach such results. Weighing up both aspects, density functional theory has been found to be more appropriate than the Hartree-Fock (HF) and the analyzed post-HF methods. Hence, the structural characterization and spectroscopic elucidation of the full CrO2X2 series (X=F,Cl,Br,I) have been carried out at this level of theory. Emphasis has been given to the unknown CrO2I2 species, and especially to the UV/visible spectra of all four compounds. Furthermore, a topological analysis in terms of charge density distributions has revealed why the valence shell electron pair repulsion model fails in predicting the molecular shape of such CrO2X2 complexes.
Abstract:
A set of connections among several nuclear and electronic indexes of reactivity in the framework of conceptual Density Functional Theory is derived by using an expansion of the energy functional in terms of the total number of electrons and the normal coordinates within a canonical ensemble. The relations obtained provide explicit links between important quantities related to the chemical reactivity of a system. This paper particularly demonstrates that the derivative of the electronic energy with respect to the external potential of a system in its equilibrium geometry is equal to the negative of the derivative of the nuclear repulsion with respect to the external potential.
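The generic second-order form of such an expansion, in standard conceptual-DFT notation (this is the textbook template, not the paper's exact derivation; N is the number of electrons, Q_i the normal coordinates, μ the chemical potential, η the hardness):

```latex
E(N,\{Q_i\}) \approx E(N_0,\{Q_i^0\})
  + \mu\,\Delta N + \tfrac{1}{2}\,\eta\,(\Delta N)^2
  + \sum_i \left(\frac{\partial E}{\partial Q_i}\right)_{N}\!\Delta Q_i
  + \sum_i \left(\frac{\partial \mu}{\partial Q_i}\right)_{N}\!\Delta N\,\Delta Q_i
  + \dots
```

where μ = (∂E/∂N)_v and η = (∂²E/∂N²)_v; the mixed derivatives couple the electronic indexes to the nuclear (geometric) degrees of freedom, which is what yields the links among reactivity descriptors.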
Abstract:
A conceptually new approach is introduced for the decomposition of the molecular energy, calculated at the density functional level of theory, into a sum of one- and two-atomic energy components, and is realized in the "fuzzy atoms" framework. (Fuzzy atoms mean that the three-dimensional physical space is divided into atomic regions having no sharp boundaries but exhibiting a continuous transition from one to another.) The new scheme uses the new concept of "bond order density" to calculate the diatomic exchange energy components and yields values unexpectedly close to those calculated by the exact (Hartree-Fock) exchange for the same Kohn-Sham orbitals.
Abstract:
The project aims at advancing the state of the art in the use of context information for the classification of image and video data. The use of context in image classification has been shown to be of great importance for improving the performance of current object recognition systems. In our project we proposed the concept of Multi-scale Feature Labels as a general and compact method to exploit local and global context. Extracting features from the discriminative-probability or classification-confidence label field is highly novel. Moreover, the use of a multi-scale representation of the feature labels leads to a compact and efficient description of the context. The goal of the project has also been to provide a general-purpose method and prove its suitability in different image/video analysis problems. The two-year project generated 5 journal publications (plus 2 under submission), 10 conference publications (plus 2 under submission) and one patent (plus 1 pending). Of these publications, a relevant number make use of the main result of this project to improve results in the detection and/or segmentation of objects.
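A multi-scale representation of a per-pixel label (confidence) field can be built with a simple pyramid of average-pooled maps; a minimal numpy sketch of that idea (the pooling scheme and toy data are illustrative, not the project's actual feature extractor):

```python
import numpy as np

def label_pyramid(prob_map, levels=3):
    """Multi-scale representation of a per-pixel classification-confidence
    map, built by repeated 2x2 average pooling."""
    pyramid = [prob_map]
    for _ in range(levels - 1):
        p = pyramid[-1]
        h, w = p.shape[0] // 2 * 2, p.shape[1] // 2 * 2   # crop to even size
        p = p[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
        pyramid.append(p)
    return pyramid

# toy 8x8 confidence field: the classifier is confident in the top-left region
probs = np.zeros((8, 8))
probs[:4, :4] = 1.0
pyr = label_pyramid(probs)
print([p.shape for p in pyr])   # coarser maps summarize ever-larger context
```

The coarse levels aggregate label evidence over large neighborhoods (global context) while the fine levels keep local detail, giving a compact context descriptor.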
Abstract:
A number of experimental methods have been reported for estimating the number of genes in a genome, or the closely related coding density of a genome, defined as the fraction of base pairs in codons. Recently, DNA sequence data representative of the genome as a whole have become available for several organisms, making the problem of estimating coding density amenable to sequence analytic methods. Estimates of coding density for a single genome vary widely, so that methods with characterized error bounds have become increasingly desirable. We present a method to estimate the protein coding density in a corpus of DNA sequence data, in which a 'coding statistic' is calculated for a large number of windows of the sequence under study, and the distribution of the statistic is decomposed into two normal distributions, assumed to be the distributions of the coding statistic in the coding and noncoding fractions of the sequence windows. The accuracy of the method is evaluated using known data and application is made to the yeast chromosome III sequence and to C. elegans cosmid sequences. It can also be applied to fragmentary data, for example a collection of short sequences determined in the course of STS mapping.
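The decomposition step described above is a two-component normal mixture fit, which can be done with expectation-maximization; a minimal numpy sketch on synthetic data (the simulated statistic values and the EM initialization are illustrative, not the paper's actual coding statistic):

```python
import numpy as np

rng = np.random.default_rng(0)
# synthetic 'coding statistic' over sequence windows: the coding and
# noncoding windows follow different normals; true coding fraction is 0.4
stat = np.concatenate([rng.normal(2.0, 0.5, 400),    # coding windows
                       rng.normal(0.0, 0.5, 600)])   # noncoding windows

def norm_pdf(x, mu, sd):
    return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

def em_two_normals(x, iters=200):
    """EM fit of a two-component normal mixture; returns (weights, means, sds)."""
    w = np.array([0.5, 0.5])
    mu = np.array([x.min(), x.max()])       # spread the initial means apart
    sd = np.array([x.std(), x.std()])
    for _ in range(iters):
        resp = w[:, None] * norm_pdf(x[None, :], mu[:, None], sd[:, None])
        resp /= resp.sum(axis=0)            # E-step: posterior responsibilities
        n = resp.sum(axis=1)                # M-step: re-estimate parameters
        w = n / len(x)
        mu = (resp * x).sum(axis=1) / n
        sd = np.sqrt((resp * (x[None, :] - mu[:, None]) ** 2).sum(axis=1) / n)
    return w, mu, sd

w, mu, sd = em_two_normals(stat)
coding_fraction = w[np.argmax(mu)]   # higher-mean component = coding windows
print(round(float(coding_fraction), 2))
```

The mixture weight of the higher-mean component directly estimates the fraction of windows that are coding, which is the quantity of interest.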
Abstract:
Background: Despite the fact that labour market flexibility has resulted in an expansion of precarious employment in industrialized countries, to date there is limited empirical evidence about its health consequences. The Employment Precariousness Scale (EPRES) is a newly developed, theory-based, multidimensional questionnaire specifically devised for epidemiological studies among waged and salaried workers. Objective: To assess acceptability, reliability and construct validity of EPRES in a sample of waged and salaried workers in Spain. Methods: Cross-sectional study, using a sub-sample of 6,968 temporary and permanent workers from a population-based survey carried out in 2004-2005. The survey questionnaire was interviewer administered and included the six EPRES subscales, measures of the psychosocial work environment (COPSOQ ISTAS21), and perceived general and mental health (SF-36). Results: A high response rate to all EPRES items indicated good acceptability; Cronbach's alpha coefficients, over 0.70 for all subscales and the global score, demonstrated good internal consistency reliability; exploratory factor analysis using principal axis analysis and varimax rotation confirmed the six-subscale structure and the theoretical allocation of all items. Patterns across known groups and correlation coefficients with psychosocial work environment measures and perceived health demonstrated the expected relations, providing evidence of construct validity. Conclusions: Our results provide evidence in support of the psychometric properties of EPRES, which appears to be a promising tool for the measurement of employment precariousness in public health research.
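The internal-consistency statistic used above, Cronbach's alpha, has a closed form: alpha = k/(k-1) * (1 - sum of item variances / variance of the total score). A minimal numpy sketch on simulated item scores (the data are illustrative, not the EPRES sample):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the scale total
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# simulated subscale: four items all driven by one latent trait plus small
# noise, so the items are highly consistent and alpha should be near 1
rng = np.random.default_rng(1)
trait = rng.normal(size=200)
scores = np.column_stack([trait + rng.normal(scale=0.3, size=200)
                          for _ in range(4)])
alpha = cronbach_alpha(scores)
print(round(float(alpha), 2))
```

Values above the conventional 0.70 threshold, as reported for all EPRES subscales, indicate that the items within a subscale measure a common construct.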
Abstract:
In the past, sensor networks in cities have been limited to fixed sensors, embedded in particular locations, under centralised control. Today, new applications can leverage wireless devices and use them as sensors to create aggregated information. In this paper, we show that the emerging patterns unveiled through the analysis of large sets of aggregated digital footprints can provide novel insights into how people experience the city and into some of the drivers behind these emerging patterns. We particularly explore the capacity to quantify the evolution of the attractiveness of urban space with a case study in the area of the New York City Waterfalls, a public art project of four man-made waterfalls rising from the New York Harbor. Methods to study the impact of an event of this nature are traditionally based on the collection of static information such as surveys and ticket-based people counts, which allow estimates to be generated about visitors' presence in specific areas over time. In contrast, our contribution makes use of the dynamic data that visitors generate, such as the density and distribution of aggregate phone calls and photos taken in different areas of interest and over time. Our analysis provides novel ways to quantify the impact of a public event on the distribution of visitors and on the evolution of the attractiveness of the points of interest in proximity. This information has potential uses for local authorities, researchers, as well as service providers such as mobile network operators.
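One simple way to quantify such shifts in spatial activity is to bin geotagged events into a grid before and during an event and compare the normalized densities; a minimal numpy sketch on synthetic coordinates (the locations, grid, and shift are hypothetical, not the Waterfalls data):

```python
import numpy as np

rng = np.random.default_rng(2)
# hypothetical geotagged activity (e.g. call or photo locations) as x/y pairs;
# during the event the activity drifts toward the event site at (0.3, 0.3)
before = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(1000, 2))
during = rng.normal(loc=[0.3, 0.3], scale=0.5, size=(1500, 2))

edges = np.linspace(-2.0, 2.0, 9)   # 8x8 spatial grid over the study area
h_before, _, _ = np.histogram2d(before[:, 0], before[:, 1], bins=[edges, edges])
h_during, _, _ = np.histogram2d(during[:, 0], during[:, 1], bins=[edges, edges])

# normalize to densities so differing sample sizes are comparable, then take
# the difference as a map of where relative attractiveness rose or fell
d_before = h_before / h_before.sum()
d_during = h_during / h_during.sum()
change = d_during - d_before
print(change.shape)
```

Cells with positive values in the change map gained relative share of activity during the event, which is the kind of attractiveness signal the paper extracts from aggregate phone and photo data.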
Abstract:
Endovascular coiling is a well-established therapy for treating intracranial aneurysms. Nonetheless, postoperative hemodynamic changes induced by this therapy remain not fully understood. The purpose of this work is to assess the influence of coil configuration and packing density on intra-aneurysmal hemodynamics.
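Packing density here is the ratio of inserted coil volume to aneurysm volume, with each coil modeled as a cylinder of wire; a minimal sketch (the coil dimensions and aneurysm size below are hypothetical example values):

```python
import math

def packing_density(coil_specs, aneurysm_volume_mm3):
    """Coil packing density = total inserted coil volume / aneurysm volume.
    coil_specs: list of (wire_diameter_mm, length_mm), one entry per coil."""
    coil_volume = sum(math.pi * (d / 2.0) ** 2 * length
                      for d, length in coil_specs)
    return coil_volume / aneurysm_volume_mm3

# hypothetical case: ~7 mm spherical aneurysm, three 0.25 mm x 200 mm coils
v_aneurysm = 4.0 / 3.0 * math.pi * 3.5 ** 3   # sphere volume, ~180 mm^3
pd = packing_density([(0.25, 200.0)] * 3, v_aneurysm)
print(f"{pd:.1%}")
```

Higher packing density leaves less free lumen for flow, which is why it is a natural control parameter when studying post-coiling hemodynamics.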
Abstract:
We design powerful low-density parity-check (LDPC) codes with iterative decoding for the block-fading channel. We first study the case of maximum-likelihood decoding, and show that the design criterion is rather straightforward. Since optimal constructions for maximum-likelihood decoding do not perform well under iterative decoding, we introduce a new family of full-diversity LDPC codes that exhibit near-outage-limit performance under iterative decoding for all block-lengths. This family competes favorably with multiplexed parallel turbo codes for nonergodic channels.
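Iterative decoding of a parity-check code can be illustrated with hard-decision bit flipping, a simple relative of the belief-propagation decoders used with LDPC codes; a minimal sketch using a tiny parity-check matrix (a Hamming(7,4) matrix for brevity, which is not a true low-density code, and not the codes designed in the paper):

```python
import numpy as np

def bitflip_decode(H, y, max_iters=20):
    """Iterative hard-decision bit-flipping decoding.
    H: binary parity-check matrix (checks x bits); y: received hard bits."""
    x = y.copy()
    for _ in range(max_iters):
        syndrome = H @ x % 2          # which parity checks are violated
        if not syndrome.any():        # all checks satisfied: done
            break
        counts = H.T @ syndrome       # violated checks each bit touches
        x[np.argmax(counts)] ^= 1     # flip the most-suspect bit
    return x

# toy parity-check matrix (Hamming(7,4); real LDPC matrices are large and sparse)
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
sent = np.zeros(7, dtype=int)              # the all-zeros codeword
received = sent.copy()
received[2] ^= 1                           # one-bit channel error
print(bitflip_decode(H, received))         # the single error is corrected
```

Full belief-propagation decoding passes soft reliabilities instead of hard flips, which is what lets the full-diversity LDPC constructions in the paper approach the outage limit on block-fading channels.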