43 results for Polyhedral sets
in QUB Research Portal - Research Directory and Institutional Repository for Queen's University Belfast
Abstract:
Density functional calculations, using the B3LYP/6-31G(d) method, have been used to investigate the conformations and vibrational (Raman) spectra of three short-chain fatty acid methyl esters (FAMEs) with the formula CnH2nO2 (n = 3-5). In all three FAMEs, the lowest-energy conformer has a simple 'all-trans' structure, but there are other conformers, with different torsions about the backbone, which lie reasonably close in energy to the global minimum. One consequence is that the solid samples we studied do not appear to consist entirely of the lowest-energy conformer. Indeed, to account for the 'extra' bands that were observed in the Raman data but were not predicted for the all-trans conformer, it was necessary to add in contributions from other conformers before a complete set of vibrational assignments could be made. Once this was done, the agreement between experimental Raman frequencies and 6-31G(d) values (after scaling) was excellent, with RSD = 12.6 cm⁻¹. However, the agreement between predicted and observed intensities was much less satisfactory. To confirm the validity of the 6-31G(d) approach, we used a larger basis set, Sadlej pVTZ, and found that these calculations gave accurate Raman intensities and simulated spectra (summed from two different conformers) that were in quantitative agreement with experiment. In addition, the unscaled Sadlej pVTZ and the scaled 6-31G(d) calculations gave the same vibrational mode assignments for all bands in the experimental data. This work provides the foundation for calculations on longer-chain FAMEs (which are closer to those found as triglycerides in edible fats and oils) because it shows that scaled 6-31G(d) calculations give frequency predictions as accurate as, and vibrational mode assignments identical to, those of the much more CPU-expensive Sadlej pVTZ calculations.
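The frequency-scaling and comparison step described above can be reproduced in a few lines. The Python sketch below fits a single least-squares scale factor to computed harmonic frequencies and reports the residual standard deviation against experiment; the numbers are placeholders, not values from the study, and the study may have used a fixed literature scale factor rather than a fitted one.

import numpy as np

# Hypothetical experimental and computed (unscaled) harmonic
# frequencies in cm^-1 for matched vibrational modes.
exp_freqs = np.array([1750.0, 1440.0, 1160.0, 880.0])
calc_freqs = np.array([1810.2, 1492.5, 1201.8, 905.3])

# Least-squares scale factor c minimising sum((exp - c*calc)^2).
scale = np.dot(exp_freqs, calc_freqs) / np.dot(calc_freqs, calc_freqs)
scaled = scale * calc_freqs

# Residual standard deviation between scaled and experimental values.
residuals = exp_freqs - scaled
rsd = np.sqrt(np.sum(residuals**2) / (len(residuals) - 1))
print(f"scale factor = {scale:.4f}, RSD = {rsd:.1f} cm^-1")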
Abstract:
Both the existence and the non-existence of an effective set of comparison functions (i.e., of dense comparison classes) that is linearly ordered by certain natural order relations are compatible with the ZFC axioms of set theory.
Abstract:
Support vector machine (SVM) is a powerful technique for data classification. Despite its sound theoretical foundations and high classification accuracy, standard SVM is not suitable for classifying large data sets, because the training complexity of SVM depends heavily on the size of the data set. This paper presents a novel SVM classification approach for large data sets based on minimum enclosing ball clustering. After the training data are partitioned by the proposed clustering method, the cluster centers are used for a first-stage SVM classification. The clusters whose centers are support vectors, together with clusters containing more than one class, are then used for a second-stage SVM classification; most of the data are removed at this stage. Experimental results show that the proposed approach achieves classification accuracy comparable to classic SVM while training significantly faster than several other SVM classifiers.
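The two-stage scheme lends itself to a compact sketch. The Python code below uses scikit-learn, with per-class KMeans standing in for the paper's minimum enclosing ball clustering (not reproduced here); the cluster count and RBF kernel are illustrative assumptions. Because clustering is done per class, every cluster is single-class by construction, so the paper's additional rule of keeping mixed-class clusters does not arise in this variant.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def two_stage_svm(X, y, clusters_per_class=50):
    # Partition each class into clusters (stand-in for minimum
    # enclosing ball clustering) and remember cluster membership.
    centers, center_y, members = [], [], []
    for cls in np.unique(y):
        idx = np.flatnonzero(y == cls)
        k = min(clusters_per_class, len(idx))
        km = KMeans(n_clusters=k, n_init=10).fit(X[idx])
        centers.append(km.cluster_centers_)
        center_y.append(np.full(k, cls))
        members.extend(idx[km.labels_ == j] for j in range(k))

    # Stage 1: train an SVM on the cluster centers only.
    svm1 = SVC(kernel="rbf").fit(np.vstack(centers),
                                 np.concatenate(center_y))

    # Stage 2: retrain on the members of clusters whose centers are
    # support vectors; clusters far from the decision boundary are
    # discarded, which removes most of the data.
    kept = np.concatenate([members[i] for i in svm1.support_])
    return SVC(kernel="rbf").fit(X[kept], y[kept])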
Abstract:
Reduced-size polarized (ZmPolX) basis sets are developed for the second-row atoms X = Si, P, S, and Cl. The generation of these basis sets follows from a simple physical model of the polarization effect of an external electric field, which leads to highly compact polarization functions that are added to the chosen initial basis set. The performance of the ZmPolX sets has been investigated in calculations of molecular dipole moments and polarizabilities. Only a small deterioration in the quality of the calculated molecular electric properties was found, while the reduced-size ZmPolX basis sets are about one-third smaller than the usual polarized (PolX) sets. This reduction considerably widens the range of applications of the ZmPolX sets in calculations of molecular dipole moments, dipole polarizabilities, and related properties.
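Dipole-moment and polarizability tests of this kind are often carried out with a finite-field scheme, in which alpha_ij ≈ (mu_i(+F e_j) - mu_i(-F e_j)) / (2F). The Python sketch below applies this central-difference formula; it assumes the dipole vectors at each applied field come from an external electronic-structure calculation, and the numerical values are placeholders.

import numpy as np

def polarizability(mu_plus, mu_minus, field):
    # Central-difference polarizability tensor alpha[i, j] =
    # d(mu_i)/d(F_j); row j of each input holds the dipole vector
    # computed at field +F e_j or -F e_j (atomic units).
    mu_plus = np.asarray(mu_plus)
    mu_minus = np.asarray(mu_minus)
    return (mu_plus - mu_minus).T / (2.0 * field)

# Placeholder dipole vectors (a.u.) at a field strength of 0.001 a.u.
F = 0.001
mu_p = [[0.0121, 0.0003, 0.0005],
        [0.0003, 0.0098, 0.0002],
        [0.0005, 0.0002, 0.0143]]
mu_m = [[-0.0121, -0.0003, -0.0005],
        [-0.0003, -0.0098, -0.0002],
        [-0.0005, -0.0002, -0.0143]]
print(polarizability(mu_p, mu_m, F))  # roughly diagonal alpha, a.u.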
Abstract:
Complexity is conventionally defined as the level of detail or intricacy contained within a picture. The study of complexity has received relatively little attention, in part because of the absence of an acceptable metric. Traditionally, normative ratings of complexity have been based on human judgments. However, this study demonstrates that published norms for visual complexity are biased: familiarity and learning influence the subjective complexity scores for nonsense shapes, with a significant training × familiarity interaction [F(1,52) = 17.53, p < .05]. Several image-processing techniques were explored as alternative measures of picture and image complexity. A perimeter detection measure correlates strongly with human judgments of the complexity of line drawings of real-world objects and nonsense shapes; it captures some of the processes important in judgments of subjective complexity while removing the bias due to familiarity effects.
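A perimeter-detection measure of the kind described can be approximated simply: threshold the image, mark foreground pixels that touch the background, and take the area-normalised edge-pixel count as the complexity score. The Python sketch below is one plausible variant, not the paper's exact procedure; the threshold value and the normalisation by area are assumptions.

import numpy as np

def perimeter_complexity(gray, threshold=128):
    # Dark pixels are treated as the figure.
    fg = np.asarray(gray) < threshold
    # A foreground pixel lies on the perimeter if at least one of
    # its 4-neighbours is background.
    padded = np.pad(fg, 1, constant_values=False)
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                padded[1:-1, :-2] & padded[1:-1, 2:])
    perimeter = fg & ~interior
    area = fg.sum()
    return perimeter.sum() / area if area else 0.0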