145 results for "semi empirical calculations"
Abstract:
The magnetic structure of the edge-sharing cuprate compound Li2CuO2 has been investigated with highly correlated ab initio electronic structure calculations. The first- and second-neighbor in-chain magnetic interactions are calculated to be 142 and -22 K, respectively. The ratio between the two parameters is smaller than previously suggested in the literature. The interchain interactions are antiferromagnetic in nature and of the order of only a few K. Monte Carlo simulations using the ab initio parameters to define the spin model Hamiltonian yield a Néel temperature in good agreement with experiment. Spin population analysis places the magnetic moment on the copper and oxygen ions between the completely localized picture derived from experiment and the more delocalized picture based on local-density calculations.
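As a rough illustration of the kind of Monte Carlo sampling the abstract refers to (not the paper's actual quantum S = 1/2 model, which also includes interchain terms), the sketch below samples a classical Heisenberg chain with the quoted in-chain couplings, assuming the convention H = -Σ J_ij S_i·S_j so that J1 = +142 K is ferromagnetic and J2 = -22 K antiferromagnetic; temperature and chain length are arbitrary illustrative values.

```python
import numpy as np

# Classical Heisenberg chain with first- and second-neighbor couplings.
# Convention: H = -sum_ij J_ij S_i . S_j  (J > 0 ferromagnetic), energies in K.
J1, J2 = 142.0, -22.0      # in-chain couplings quoted in the abstract
N, T = 200, 100.0          # chain length and temperature (K); illustrative only
rng = np.random.default_rng(0)

def random_unit_vectors(n):
    v = rng.normal(size=(n, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

def site_energy(spins, i):
    """Energy of all bonds touching site i (periodic chain)."""
    e = 0.0
    for d, J in ((1, J1), (2, J2)):
        e -= J * spins[i] @ (spins[(i - d) % N] + spins[(i + d) % N])
    return e

spins = random_unit_vectors(N)
for sweep in range(1000):                          # Metropolis sweeps
    for i in rng.integers(0, N, size=N):
        old = spins[i].copy()
        e_old = site_energy(spins, i)
        spins[i] = random_unit_vectors(1)[0]       # propose a new orientation
        dE = site_energy(spins, i) - e_old
        if dE > 0 and rng.random() >= np.exp(-dE / T):
            spins[i] = old                         # reject the move

m = np.linalg.norm(spins.mean(axis=0))
print(f"average magnetization per spin at T = {T} K: {m:.3f}")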
Abstract:
The magnetic coupling constant of selected cuprate superconductor parent compounds has been determined by means of embedded cluster model and periodic calculations carried out at the same level of theory. The agreement between both approaches validates the cluster model. This model is subsequently employed in state-of-the-art configuration interaction calculations aimed at obtaining accurate values of the magnetic coupling constant and hopping integral for a series of superconducting cuprates. Likewise, a systematic study of the performance of different ab initio explicitly correlated wave function methods and of several density functional approaches is presented. The accurate determination of the parameters of the t-J Hamiltonian has several consequences. First, it suggests that the appearance of high-Tc superconductivity in existing monolayered cuprates occurs with J/t in the 0.20–0.35 regime. Second, J/t = 0.20 is predicted to be the threshold for the existence of superconductivity and, third, a simple and accurate relationship between the critical temperatures at optimum doping and these parameters is found. However, this quantitative electronic structure versus Tc relationship is only found when both J and t are obtained at the most accurate level of theory.
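For context (the abstract does not write it out), the t-J model in which the fitted hopping integral t and magnetic coupling J enter is conventionally written, in the space of configurations with no doubly occupied sites, as:

```latex
H_{tJ} \;=\; -t \sum_{\langle i,j\rangle,\sigma}
        \left( \tilde{c}^{\dagger}_{i\sigma}\tilde{c}_{j\sigma} + \mathrm{h.c.} \right)
      \;+\; J \sum_{\langle i,j\rangle}
        \left( \mathbf{S}_i \cdot \mathbf{S}_j - \tfrac{1}{4}\, n_i n_j \right)
```

Here the tilde operators are projected to exclude double occupancy, and the dimensionless ratio J/t discussed in the abstract is the single control parameter relating the cuprates to each other.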
Abstract:
The observation of coherent tunnelling in Cu2+- and Ag2+-doped MgO and in CaO:Cu2+ was a crucial discovery in the realm of the Jahn-Teller (JT) effect. The main reasons favoring this dynamic behavior are now clarified through ab initio calculations on Cu2+- and Ag2+-doped cubic oxides. Small JT distortions and an unexpectedly low anharmonicity of the eg JT mode lie behind the energy barriers, smaller than 25 cm-1, derived from CASPT2 calculations for Cu2+- and Ag2+-doped MgO and for CaO:Cu2+. The low anharmonicity is shown to come from a strong vibrational coupling of MO6^10- units (M = Cu, Ag) to the host lattice. The average distance between the d9 impurity and the ligands is found to vary significantly on passing from MgO to SrO, following the lattice parameter to a good extent.
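As textbook background rather than a result of the paper, the lower adiabatic sheet of the E⊗e Jahn-Teller energy surface, with linear coupling F, quadratic (warping) coupling G and harmonic force constant K of the eg mode, can be written as:

```latex
E_{-}(\rho,\varphi) \;=\; \tfrac{1}{2} K \rho^{2}
  \;-\; \rho \sqrt{\,F^{2} + G^{2}\rho^{2} + 2 F G \rho \cos 3\varphi\,}
```

where (ρ, φ) are the polar coordinates of the eg distortion. The cos 3φ warping, to which anharmonicity of the eg mode also contributes, is what generates the three equivalent wells and the barriers between them; small distortions ρ and low anharmonicity therefore translate into the very small barriers the abstract reports.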
Abstract:
Applying semi-distributed hydrological models to large, heterogeneous watersheds poses several problems. On the one hand, the spatial and temporal variability of catchment features should be adequately represented in the model parameterization, while keeping the model complexity at an acceptable level so that state-of-the-art calibration techniques can be exploited. On the other hand, model complexity increases the uncertainty in the adjusted parameter values, and therefore in the water routing across the watershed. This is critical for water quality applications, where not only streamflow but also a reliable estimate of the surface versus subsurface contributions to runoff is needed. In this study, we show how a regularized inversion procedure combined with a multiobjective calibration strategy successfully solves the parameterization of a complex application of a water quality-oriented hydrological model. The final values of several optimized parameters showed significant and consistent differences across geological and landscape features. Although the number of optimized parameters was significantly increased by the spatial and temporal discretization of adjustable parameters, the uncertainty in the water routing results remained at reasonable values. In addition, a stepwise numerical analysis showed that the effects on calibration performance of including different data types in the objective function can be inextricably linked; caution should therefore be taken when adding or removing data from an aggregated objective function.
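A minimal sketch of what an aggregated, regularized calibration objective of the general kind described above could look like; all function and variable names are hypothetical and the regularization term is a simple Tikhonov penalty, not the paper's specific inversion scheme.

```python
import numpy as np

def aggregated_objective(params, groups, lam=1.0, prior=None):
    """Weighted sum of data-group misfits plus a Tikhonov penalty pulling
    parameters toward prior (e.g. spatially homogeneous) values.

    groups : list of (weight, simulate, observed) tuples, where simulate(params)
             returns the simulated counterpart of the observed series.
    """
    phi = 0.0
    for weight, simulate, observed in groups:
        residuals = simulate(params) - observed
        phi += weight * np.sum(residuals ** 2)      # misfit of this data type
    if prior is not None:
        phi += lam * np.sum((params - prior) ** 2)  # regularization term
    return phi

# Illustrative use with two synthetic "data types" (streamflow and a tracer):
rng = np.random.default_rng(1)
obs_flow, obs_tracer = rng.random(50), rng.random(50)
sim_flow = lambda p: p[0] * np.linspace(0, 1, 50)
sim_tracer = lambda p: p[1] * np.ones(50)
groups = [(1.0, sim_flow, obs_flow), (0.5, sim_tracer, obs_tracer)]
print(aggregated_objective(np.array([0.4, 0.6]), groups, lam=0.1,
                           prior=np.array([0.5, 0.5])))
```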
Abstract:
The prediction of rockfall travel distance below a rock cliff is an indispensable activity in rockfall susceptibility, hazard and risk assessment. Although the size of the detached rock mass may differ considerably at each specific rock cliff, small rockfalls (<100 m3) are the most frequent process. Empirical models can provide suitable information for predicting the travel distance of small rockfalls over an extensive area at a medium scale (1:100 000–1:25 000). "Solà d'Andorra la Vella" is a rocky slope located close to the town of Andorra la Vella, where the government has been documenting rockfalls since 1999. This documentation consists of mapping the release point and the individual fallen blocks immediately after each event. The documentation of historical rockfalls through morphological analysis, eyewitness accounts and historical images serves to increase the available information. In total, data from twenty small rockfalls have been gathered, comprising about a hundred individual fallen rock blocks. The data acquired have been used to check the reliability of the most widely adopted empirical models (the reach and shadow angle models) and to analyse the influence of the parameters that affect the travel distance (rockfall size, height of fall along the rock cliff and volume of the individual fallen rock block). For predicting travel distances on medium-scale maps, a method based on the "reach probability" concept has been proposed. The accuracy of the results has been tested against the line enclosing the farthest fallen boulders, which represents the maximum travel distance of past rockfalls. The paper concludes with a discussion of the application of both empirical models to other study areas.
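A minimal sketch of the two empirical angles mentioned in the abstract (not code from the paper): the reach angle is measured from the release point to the stopping point of a block, the shadow angle from the apex of the talus slope to the stopping point, and lower angles correspond to longer runout. The numerical values below are purely hypothetical.

```python
import math

def reach_angle(drop_height, horizontal_distance):
    """Reach (Fahrboeschung) angle, in degrees: from the rockfall release point
    down to the stopping point of the block."""
    return math.degrees(math.atan2(drop_height, horizontal_distance))

def shadow_angle(height_above_talus_apex, distance_from_talus_apex):
    """Shadow angle, in degrees: same construction but measured from the apex
    of the talus slope instead of the release point."""
    return math.degrees(math.atan2(height_above_talus_apex, distance_from_talus_apex))

# Example: a block released 80 m above its stopping point, 150 m away
# horizontally, with the talus apex 30 m above and 90 m away from it.
print(f"reach angle : {reach_angle(80.0, 150.0):.1f} deg")
print(f"shadow angle: {shadow_angle(30.0, 90.0):.1f} deg")

# With a sample of documented blocks, an empirical "reach probability" for a
# given angle can be estimated as the fraction of blocks travelling beyond the
# line defined by that angle (i.e. with a reach angle at or below it).
angles = [32.1, 33.5, 34.0, 35.2, 36.8]   # hypothetical reach angles (deg)
threshold = 33.0
p_reach = sum(a <= threshold for a in angles) / len(angles)
print(f"fraction of blocks reaching beyond the {threshold} deg line: {p_reach:.2f}")
```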
Abstract:
Purpose: The application of NIC 32 (IAS 32) to cooperatives has generated considerable controversy in recent years. Several studies have so far attempted to anticipate the possible effects of its application. This paper analyses the impact of the first-time application of NIC 32 on the cooperative sector. Design/methodology/approach: A sample of 98 cooperatives was selected, and a comparative analysis of their financial information reported before and after the application of NIC 32 was carried out to determine the existing differences. The Wilcoxon signed-rank test was used to check whether these differences are significant. The Mann-Whitney U test was also used to check whether there are significant differences in the relative impact of the application of NIC 32 across several groups of cooperatives. Finally, the effects of the application of NIC 32 on the financial and economic position of the cooperatives, and on the evolution of their intangible assets, were analysed using financial statement analysis techniques. Contributions and results: The results confirm that the application of NIC 32 causes significant differences in several balance sheet and income statement items, as well as in the ratios analysed. The main differences consist of a reduction in the level of capitalization and an increase in the indebtedness of the cooperatives, together with a general worsening of the solvency and financial autonomy ratios. Limitations: It should be borne in mind that the study was carried out on a sample of cooperatives that are required to have their annual accounts audited; the results must therefore be interpreted in a context of large cooperatives. It should also be noted that we performed a comparative analysis of the 2011 and 2010 annual accounts. This allowed us to identify the differences in the financial information of the cooperatives before and after applying NIC 32, although some of these differences could also be caused by other factors such as the economic situation, changes in the application of accounting standards, etc. Originality/added value: We believe this is the right moment for this research, since from 2011 onwards all Spanish cooperatives must apply the accounting standards adapted to NIC 32. Moreover, to the best of our knowledge, there are no other comparable studies based on the annual accounts of cooperatives that have already applied the accounting standards adapted to NIC 32. We believe the results of this research can be useful to several stakeholder groups. First, so that accounting standard setters can assess the reach of NIC 32 in cooperatives and propose improvements to the content of the standard. Second, so that the cooperatives themselves, federations, confederations and other cooperative bodies have information on the economic impact of the first-time application of NIC 32 and can make whatever assessments they deem appropriate. And third, so that financial institutions, auditors, advisors of cooperatives and other stakeholders have information on the changes in the annual accounts of cooperatives and can take them into account when making decisions.
Keywords: cooperatives, net equity, share capital, NIC 32, solvency, effects of accounting regulation, financial information, ratios.
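A minimal sketch of the two non-parametric tests named in the abstract, applied to synthetic stand-in data (the variable names and values are hypothetical, not the paper's dataset).

```python
import numpy as np
from scipy import stats

# Hypothetical paired ratios (e.g. equity / total assets) for the same
# cooperatives before (2010) and after (2011) applying NIC 32.
rng = np.random.default_rng(3)
before = rng.normal(0.45, 0.10, size=98)
after = before - rng.normal(0.08, 0.03, size=98)   # capitalization drops

# Wilcoxon signed-rank test: are the paired before/after differences significant?
w_stat, w_p = stats.wilcoxon(before, after)
print(f"Wilcoxon signed-rank: statistic={w_stat:.1f}, p-value={w_p:.4f}")

# Mann-Whitney U test: does the relative impact differ between two groups of
# cooperatives (e.g. one sector vs. another)?
relative_impact = (after - before) / before
group_a, group_b = relative_impact[:50], relative_impact[50:]
u_stat, u_p = stats.mannwhitneyu(group_a, group_b)
print(f"Mann-Whitney U: statistic={u_stat:.1f}, p-value={u_p:.4f}")
```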
Abstract:
In this paper, we present a computer simulation study of the ion binding process at an ionizable surface using a semi-grand canonical Monte Carlo method that models the surface as a discrete distribution of charged and neutral functional groups in equilibrium with explicit ions described within the primitive model. The parameters of the simulation model were tuned and checked by comparison with experimental titrations of carboxylated latex particles in the presence of different ionic strengths of monovalent ions. The titration of these particles was analysed by calculating curves of the degree of dissociation of the latex functional groups versus pH at different background salt concentrations. As the charge of the titrated surface changes during the simulation, a procedure to maintain the electroneutrality of the system is required. Here, two approaches are used, the choice depending on the ion selected to maintain electroneutrality: the counterion and coion procedures. We compare and discuss the differences between the two procedures. The simulations also provide a microscopic description of the electrostatic double layer (EDL) structure as a function of pH and ionic strength. The results allow us to quantify the effect of the size of the background salt ions and of the surface functional groups on the degree of dissociation. The non-homogeneous structure of the EDL is revealed by plotting the counterion density profiles around charged and neutral surface functional groups.
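A schematic acceptance rule for a single titration move of the general constant-pH / semi-grand canonical type the abstract describes; this is a generic sketch, not the paper's exact scheme, and the numbers in the example are assumptions.

```python
import math
import random

def accept_titration_move(currently_charged, delta_u_el, pH, pKa):
    """Metropolis-style acceptance for a protonation/deprotonation trial move.

    delta_u_el : electrostatic energy change of the move in units of kT,
                 including insertion/removal of the counterion (or coion) that
                 keeps the simulation box electroneutral.
    """
    if currently_charged:
        log10_ideal = pKa - pH    # protonation: A- + H+ -> AH (group neutralized)
    else:
        log10_ideal = pH - pKa    # deprotonation: AH -> A- + H+ (group charged)
    acc = math.exp(-delta_u_el + math.log(10.0) * log10_ideal)
    return random.random() < min(1.0, acc)

# Example: trying to deprotonate a neutral carboxyl group (pKa ~ 4.8) at pH 6,
# with an assumed electrostatic penalty of 1.5 kT for creating the new charge.
print(accept_titration_move(currently_charged=False, delta_u_el=1.5,
                            pH=6.0, pKa=4.8))
```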
Abstract:
In this work we explore multivariate empirical mode decomposition (EMD) combined with a neural network classifier as a technique for face recognition tasks. Images are simultaneously decomposed by means of EMD, and the distance between the modes of an image and the modes of the representative image of each class is then calculated using three different distance measures. A neural network is then trained using 10-fold cross-validation in order to derive a classifier. Preliminary results (over 98% classification rate) are satisfactory and justify a deeper investigation of how to apply multivariate EMD to face recognition.
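A minimal sketch of the classification stage described above, assuming the intrinsic mode functions (IMFs) have already been produced by some multivariate EMD implementation; the single Euclidean distance, the synthetic data and the small MLP are illustrative choices, not the paper's.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

def mode_distances(image_modes, class_modes):
    """Per-mode Euclidean distances between the IMFs of an image and those of a
    class-representative image; both have shape (n_modes, n_pixels)."""
    return np.linalg.norm(image_modes - class_modes, axis=1)

# Synthetic stand-in for the EMD output: 200 images, 5 modes, 64x64 pixels,
# 4 classes with one representative image per class.
rng = np.random.default_rng(0)
n_images, n_modes, n_pixels, n_classes = 200, 5, 64 * 64, 4
labels = rng.integers(0, n_classes, n_images)
representatives = rng.normal(size=(n_classes, n_modes, n_pixels))
images = representatives[labels] + rng.normal(scale=2.0,
                                              size=(n_images, n_modes, n_pixels))

# Feature vector of each image: its mode distances to every class representative.
features = np.array([
    np.concatenate([mode_distances(img, rep) for rep in representatives])
    for img in images
])

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
scores = cross_val_score(clf, features, labels, cv=10)   # 10-fold CV
print(f"mean 10-fold accuracy: {scores.mean():.3f}")
```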
Abstract:
Artifacts are present in most electroencephalography (EEG) recordings, making it difficult to interpret or analyze the data. In this paper, a cleaning procedure based on a multivariate extension of empirical mode decomposition is used to improve the quality of the data. This is achieved by applying the cleaning method to the raw EEG data. A synchrony measure is then computed on both the raw and the cleaned data in order to compare the improvement in classification rate. Two classifiers are used, linear discriminant analysis and neural networks. In both cases, the classification rate improves by about 20%.
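A schematic illustration of the kind of EMD-based cleaning step the abstract refers to (not the paper's exact criterion): the signal is decomposed into intrinsic mode functions and reconstructed while discarding the modes judged to be artifact-dominated. Here the decomposition is mimicked by two known components rather than a real EMD call.

```python
import numpy as np

def clean_by_partial_reconstruction(imfs, keep):
    """Reconstruct a signal from a subset of its intrinsic mode functions.

    imfs : array of shape (n_modes, n_samples), assumed to come from an EMD or
           multivariate EMD step provided by a separate decomposition library.
    keep : indices of the modes considered artifact-free.
    """
    return imfs[keep].sum(axis=0)

# Synthetic example: a 10 Hz oscillation plus a slow drift "artifact".
fs = 250.0
t = np.arange(0, 2.0, 1.0 / fs)
oscillation = np.sin(2 * np.pi * 10 * t)
drift = 0.8 * t                              # artifact-like low-frequency trend
imfs = np.stack([oscillation, drift])

cleaned = clean_by_partial_reconstruction(imfs, keep=[0])
raw = imfs.sum(axis=0)
print(f"raw vs. cleaned RMS: {np.sqrt((raw**2).mean()):.2f} "
      f"vs. {np.sqrt((cleaned**2).mean()):.2f}")
```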
Abstract:
This study analysed the interaction between organizational change, cultural values and technological change in the Catalan health system. The study is divided into five parts. The first is a content analysis of health-related websites in Catalonia. The second is a study of Internet use for health-related matters among the general population, patient associations and health professionals, based on an online survey adapted to each of these groups. The third part is a fieldwork study of the pilot programmes carried out by the Catalan Government in several local areas and hospitals to electronically integrate patients' clinical records. The fourth is a study of the organizational implications of introducing information systems in the management of hospitals and primary care centres at the Institut Català de Salut, the main public health provider in Catalonia, based on an online survey and in-depth interviews. The fifth part is a case study of the organizational and social effects of introducing information and communication technologies in one of Catalonia's leading hospitals, the Hospital Clínic de Barcelona. The study was carried out between May 2005 and July 2007.
Abstract:
Background: In longitudinal studies where subjects experience recurrent events over a period of time, such as respiratory infections, fever or diarrhea, statistical methods are required to take the within-subject correlation into account. Methods: For repeated events data with censored failure times, the independent increment (AG), marginal (WLW) and conditional (PWP) models are three multiple-failure models that generalize Cox's proportional hazards model. In this paper, we review the efficiency, accuracy and robustness of all three models under simulated scenarios with varying degrees of within-subject correlation, censoring levels, maximum number of possible recurrences and sample size. We also study the performance of the methods on a real dataset from a cohort study of bronchial obstruction. Results: We find substantial differences between the methods, and no single method is optimal. AG and PWP seem preferable to WLW for low correlation levels, but the situation is reversed for high correlations. Conclusions: All methods are stable with respect to censoring, worsen with increasing numbers of recurrences and share a bias problem which, among other consequences, makes asymptotic normal confidence intervals not fully reliable, although they are well developed theoretically.
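A minimal sketch of how an Andersen-Gill (AG) style counting-process fit could be set up in Python with the lifelines library, assuming recurrent-event data in start-stop format; the tiny dataset and the "exposed" covariate are invented for illustration, and the WLW and PWP variants (which differ in their risk sets and stratification) are not shown.

```python
import pandas as pd
from lifelines import CoxTimeVaryingFitter

# Recurrent events in counting-process (start, stop] format: each row is one
# at-risk interval for one subject; "event" flags whether the interval ends
# with a recurrence.
data = pd.DataFrame({
    "id":      [1, 1, 1, 2, 2, 3],
    "start":   [0, 30, 75, 0, 50, 0],
    "stop":    [30, 75, 120, 50, 110, 90],
    "event":   [1, 1, 0, 1, 0, 0],
    "exposed": [1, 1, 1, 0, 0, 1],
})

# The AG model treats increments as independent given covariates, so the
# standard counting-process Cox fit applies directly.
ctv = CoxTimeVaryingFitter()
ctv.fit(data, id_col="id", event_col="event", start_col="start", stop_col="stop")
ctv.print_summary()
```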
Abstract:
This paper analyses the effect of R&D investment on firm growth. We use an extensive sample of Spanish manufacturing and service firms. The database comprises several waves of the Spanish Community Innovation Survey and covers the period 2004–2008. First, a probit model corrected for sample selection analyses the role of innovation in the probability of being a high-growth firm (HGF). Second, a quantile regression technique is applied to explore the determinants of firm growth. Our database shows that a small number of firms experience fast growth rates in terms of sales or employees. Our results reveal that R&D investment positively affects the probability of becoming an HGF. However, differences appear between manufacturing and service firms. Finally, when we study the impact of R&D investment on firm growth, the quantile estimates show that internal R&D has a significant positive impact in the upper quantiles, while external R&D shows a significant positive impact up to the median.
Keywords: high-growth firms, firm growth, innovation activity. JEL classifications: L11, L25, L26, O30
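A minimal sketch of the quantile regression step on synthetic stand-in data (variable names, coefficients and the data-generating process are illustrative, not the paper's); the point is that the estimated effect of a covariate is allowed to differ across quantiles of the growth distribution.

```python
import numpy as np
import statsmodels.api as sm

# Synthetic firm-level data: growth regressed on internal and external R&D
# intensity, with heteroscedastic noise so that quantile effects differ.
rng = np.random.default_rng(0)
n = 500
internal_rd = rng.exponential(1.0, n)
external_rd = rng.exponential(1.0, n)
noise = rng.normal(0, 1, n) * (1 + 0.5 * internal_rd)
growth = 0.05 + 0.10 * internal_rd + 0.05 * external_rd + noise

X = sm.add_constant(np.column_stack([internal_rd, external_rd]))

# Quantile regression at several points of the growth distribution.
for q in (0.10, 0.50, 0.90):
    res = sm.QuantReg(growth, X).fit(q=q)
    print(f"q={q:.2f}: internal R&D coef={res.params[1]:.3f}, "
          f"external R&D coef={res.params[2]:.3f}")
```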
Abstract:
Some faculty members at different universities around the world have begun to use Wikipedia as a teaching tool in recent years. These experiences show, in most cases, very satisfactory results and a substantial improvement in various basic skills, as well as a positive influence on students' motivation. Nevertheless, and despite the growing importance of e-learning methodologies based on the use of the Internet for higher education, the use of Wikipedia as a teaching resource remains scarce among university faculty. Our investigation tries to identify the main factors that determine acceptance of, or resistance to, that use. We approach the decision to use Wikipedia as a teaching tool by analyzing both the individual attributes of faculty members and the characteristics of the environment in which they carry out their teaching activity. From a specific survey sent to all faculty of the Universitat Oberta de Catalunya (UOC), a pioneer and leader in online education in Spain, we have tried to infer the influence of these internal and external elements. The questionnaire was designed to measure different constructs: perceived quality of Wikipedia, teaching practices involving Wikipedia, use experience, perceived usefulness, and use of 2.0 tools. Control items were also included to gather information on gender, age, teaching experience, academic rank and area of expertise. Our results reveal that academic rank, teaching experience, age and gender are not decisive factors in explaining the educational use of Wikipedia. Instead, the decision to use it is closely linked to the perception of Wikipedia's quality, the use of other collaborative learning tools, an active attitude towards web 2.0 applications, and connections with the professional non-academic world. Situational context is also very important, since use is higher when faculty members have reference models in their close environment and when they perceive that it is positively valued by their colleagues. As these attitudes, practices and cultural norms diverge across scientific disciplines, we have also detected clear differences in the use of Wikipedia among areas of academic expertise. As a consequence, a greater application of Wikipedia, both as a teaching resource and as a driver of teaching innovation, would require much more active institutional policies and some changes in the dominant academic culture among faculty members.