977 results for Benchmark in variable-income index
Abstract:
To obtain a state-of-the-art benchmark potential energy surface (PES) for the archetypal oxidative addition of the methane C-H bond to the palladium atom, we have explored this PES using a hierarchical series of ab initio methods (Hartree-Fock, second-order Møller-Plesset perturbation theory, fourth-order Møller-Plesset perturbation theory with single, double and quadruple excitations, coupled cluster theory with single and double excitations (CCSD), and with triple excitations treated perturbatively [CCSD(T)]) and hybrid density functional theory using the B3LYP functional, in combination with a hierarchical series of ten Gaussian-type basis sets, up to g polarization. Relativistic effects are taken into account either through a relativistic effective core potential for palladium or through a full four-component all-electron approach. Counterpoise-corrected relative energies of stationary points are converged to within 0.1-0.2 kcal/mol as a function of the basis-set size. Our best estimate of kinetic and thermodynamic parameters is -8.1 (-8.3) kcal/mol for the formation of the reactant complex, 5.8 (3.1) kcal/mol for the activation energy relative to the separate reactants, and 0.8 (-1.2) kcal/mol for the reaction energy (zero-point vibrational energy-corrected values in parentheses). This agrees well with available experimental data. Our work highlights the importance of sufficient higher angular momentum polarization functions, f and g, for correctly describing metal-d-electron correlation and, thus, for obtaining reliable relative energies. We show that standard basis sets, such as LANL2DZ+1f for palladium, are not sufficiently polarized for this purpose and lead to erroneous CCSD(T) results. B3LYP is associated with smaller basis set superposition errors and shows faster convergence with basis-set size but yields relative energies (in particular, a reaction barrier) that are ca. 3.5 kcal/mol higher than the corresponding CCSD(T) values.
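As a small illustration of the counterpoise procedure referred to above, the sketch below assembles a counterpoise-corrected interaction energy from complex and fragment energies evaluated in the full dimer basis. The energies are placeholders (chosen only so the result lands near the reported -8.1 kcal/mol complexation energy), not values taken from the paper's calculations.

```python
# Counterpoise (Boys-Bernardi) interaction energy sketch.
# All energies in hartree; the numbers are placeholders, not results from the paper.

HARTREE_TO_KCAL = 627.509

def counterpoise_interaction(e_complex_ab, e_pd_ab, e_ch4_ab):
    """Interaction energy with both fragments evaluated in the full complex basis (ab)."""
    return e_complex_ab - e_pd_ab - e_ch4_ab

# Hypothetical energies for Pd + CH4 -> reactant complex.
e_int = counterpoise_interaction(e_complex_ab=-167.4512,
                                 e_pd_ab=-127.0503,
                                 e_ch4_ab=-40.3880)
print(f"CP-corrected interaction energy: {e_int * HARTREE_TO_KCAL:.1f} kcal/mol")
```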
Abstract:
The aim of this work is to offer a solution for creating a spatial index for the JASPA (Java SPAtial) extension on top of the H2 database. The proposal is limited to two-dimensional spatial operations. The indexing algorithm chosen to implement the spatial index is the R-tree.
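As a language-neutral illustration of what such an R-tree index provides, the sketch below uses the Python rtree package (an assumption for illustration; JASPA itself is implemented in Java): bounding boxes are inserted, and a two-dimensional window query returns the candidate geometries whose boxes intersect the search window.

```python
from rtree import index  # requires the rtree package (libspatialindex)

idx = index.Index()  # 2-D R-tree by default

# Insert bounding boxes (minx, miny, maxx, maxy) for some hypothetical geometries.
geometries = {
    1: (0.0, 0.0, 2.0, 2.0),
    2: (1.5, 1.5, 4.0, 4.0),
    3: (10.0, 10.0, 12.0, 11.0),
}
for gid, bbox in geometries.items():
    idx.insert(gid, bbox)

# Window query: only candidates whose boxes intersect the window are returned;
# an exact geometric test would then filter these candidates.
hits = list(idx.intersection((1.0, 1.0, 3.0, 3.0)))
print(hits)  # ids 1 and 2 (order may vary)
```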
Abstract:
The terms growth and development, when applied to social dynamics, are often used interchangeably. Today, however, there is a growing tendency to use growth when referring to the economic aspects of that dynamics, and development when referring to the evolution, towards more and better, of the interconnection of all aspects of social life. In the analysis proposed here it is above all development that is considered, and a measure for it is sought. Starting from a notion of development simple and precise enough to be quantifiable, we therefore seek to construct a Sustainable Weighted Human Development Index, drawing on the information available from reputable international bodies and on empirical data about the Mozambican situation around the turn of the century.
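To make the construction concrete, the sketch below shows one plausible way such a weighted composite index can be assembled: each dimension is normalised against fixed goalposts and the normalised indices are combined with chosen weights. The dimensions, goalposts and weights are placeholders, not those used in the study.

```python
def normalise(value, minimum, maximum):
    """Rescale an indicator to [0, 1] against fixed goalposts."""
    return (value - minimum) / (maximum - minimum)

def weighted_index(indicators, goalposts, weights):
    """Weighted arithmetic mean of normalised dimension indices."""
    total_weight = sum(weights[name] for name in indicators)
    return sum(
        weights[name] * normalise(indicators[name], *goalposts[name])
        for name in indicators
    ) / total_weight

# Placeholder dimensions and values, purely illustrative.
indicators = {"life_expectancy": 52.0, "schooling_years": 4.5, "income_per_capita": 800.0}
goalposts = {"life_expectancy": (20.0, 85.0),
             "schooling_years": (0.0, 15.0),
             "income_per_capita": (100.0, 40000.0)}
weights = {"life_expectancy": 0.4, "schooling_years": 0.3, "income_per_capita": 0.3}

print(f"Weighted development index: {weighted_index(indicators, goalposts, weights):.3f}")
```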
Abstract:
This study surveys and characterises the students of the Health Technologies programmes at ERISA with respect to eating habits, body mass index (BMI) and physical exercise. Most of the population is female (81%), aged between 18 and 23 (68%). Most respondents do not consume milk or bread every day (40-57%), drink less than 1 L of water per day (51%), and only a minority eats vegetables and garden produce daily (13%). Regarding BMI, we found that 10.3% are below normal weight (only women) and 13.8% above it (mostly men), with some (1.7%) even reaching obesity levels according to WHO criteria. We also found that more than half of the students of both sexes do not do any kind of physical exercise (51.3%). The eating habits of ERISA students are inadequate with respect to the consumption of some food groups, namely garden produce, vegetables and fruit, as well as the daily consumption of water and milk. Most respondents do not practise any kind of physical exercise, and a significant number of students have a BMI above or below the normal values established by the WHO.
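For reference, the sketch below computes BMI and assigns the standard WHO adult categories underlying classifications like the one above; the example values are hypothetical, not data from the survey.

```python
def bmi(weight_kg, height_m):
    """Body mass index: weight divided by height squared (kg/m^2)."""
    return weight_kg / height_m ** 2

def who_category(bmi_value):
    """Standard WHO adult BMI categories."""
    if bmi_value < 18.5:
        return "underweight"
    if bmi_value < 25.0:
        return "normal weight"
    if bmi_value < 30.0:
        return "overweight"
    return "obese"

# Example: a hypothetical respondent, not a student from the survey.
value = bmi(weight_kg=58.0, height_m=1.64)
print(f"BMI = {value:.1f} -> {who_category(value)}")
```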
Abstract:
Little is known about the prevalence of physical activity in Portugal stratified by body mass index categories. The objective of this project was to assess the association of physical activity with (a) sociodemographic characteristics and (b) body mass index. This is an observational, cross-sectional study. Data were collected between January 2003 and January 2005 by structured questionnaire (face-to-face interview) and anthropometric assessment (weight, height, and waist and hip circumferences), in a representative sample of the adult population of mainland Portugal. Physical activity was assessed with the Baecke Questionnaire of Habitual Physical Activity. A total of 8116 people participated; 27.9% reported practising some kind of sport. The proportion of those doing sport decreases with age. The proportion of men reporting higher activity levels is significantly higher than that found for women. The scores obtained for physical activity in all contexts (leisure, sport and work) correlate significantly with educational level (mainly in leisure contexts). In leisure and sport activities, the physical activity score is negatively correlated with body mass index, whereas physical activity at work is positively correlated with body mass index. We conclude that public health strategies are needed to facilitate and promote leisure-time physical activity, especially targeted at the elderly and at demographic groups with lower educational levels.
Abstract:
The problem addressed in this study is the system of relations between organisational trust, organisational commitment and the behavioural strategy of neglect. The main hypotheses are that the Trust construct influences the Neglect behavioural strategy and that this relation is mediated by Organisational Commitment as framed in Allen and Meyer's (1991) Three-Component Model. A questionnaire was built from three scales: Robinson's (1996) Trust Scale; the Organisational Commitment Scale developed by Allen and Meyer (1997) and validated for the Portuguese population by Nascimento, Lopes and Salgueiro (2008); and an EVLN Model scale built on three existing scales, one used by Rusbult et al. (1998), another by Withey and Cooper (1989) and another by Hagedoorn et al. (1999). The questionnaire was applied to a random sample. The literature review for this article was based on articles and books in the organisational field. The main findings are that Trust influences Neglect and that Affective Commitment moderates this relation.
Abstract:
Quasi-Newton-Raphson minimization and conjugate gradient minimization have been used to solve the crystal structures of famotidine form B and capsaicin from X-ray powder diffraction data and characterize the χ² agreement surfaces. One million quasi-Newton-Raphson minimizations found the famotidine global minimum with a frequency of ca 1 in 5000 and the capsaicin global minimum with a frequency of ca 1 in 10 000. These results, which are corroborated by conjugate gradient minimization, demonstrate the existence of numerous pathways from some of the highest points on these χ² agreement surfaces to the respective global minima, which are passable using only downhill moves. This important observation has significant ramifications for the development of improved structure determination algorithms.
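A generic sketch of the protocol described, under the assumption of a toy objective standing in for the powder-diffraction χ² surface: many quasi-Newton (BFGS) minimisations are started from random points and the fraction of runs reaching the known global minimum is tallied.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def chi_squared(x):
    """Toy stand-in for a chi-squared agreement surface with many local minima."""
    return np.sum(x ** 2) + 2.0 * np.sum(1.0 - np.cos(3.0 * x))

n_starts, n_dim, tol = 5000, 3, 1e-3
global_minimum = 0.0  # known value of the toy surface at x = 0
hits = 0

for _ in range(n_starts):
    x0 = rng.uniform(-4.0, 4.0, size=n_dim)            # random starting model
    result = minimize(chi_squared, x0, method="BFGS")  # quasi-Newton local search
    if result.fun < global_minimum + tol:
        hits += 1

print(f"Global minimum reached in {hits}/{n_starts} runs "
      f"(frequency ~ 1 in {n_starts / max(hits, 1):.0f})")
```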
Abstract:
A novel algorithm for solving nonlinear discrete time optimal control problems with model-reality differences is presented. The technique uses Dynamic Integrated System Optimisation and Parameter Estimation (DISOPE) which has been designed to achieve the correct optimal solution in spite of deficiencies in the mathematical model employed in the optimisation procedure. A method based on Broyden's ideas is used for approximating some derivative trajectories required. Ways for handling constraints on both manipulated and state variables are described. Further, a method for coping with batch-to-batch dynamic variations in the process, which are common in practice, is introduced. It is shown that the iterative procedure associated with the algorithm naturally suits applications to batch processes. The algorithm is successfully applied to a benchmark problem consisting of the input profile optimisation of a fed-batch fermentation process.
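The derivative approximation mentioned is based on Broyden's ideas; the sketch below shows the classic rank-one ("good Broyden") Jacobian update that such schemes build on. It illustrates the general idea only and is not the DISOPE implementation.

```python
import numpy as np

def broyden_update(B, dx, dy):
    """Good Broyden rank-one update: B_new satisfies the secant condition B_new @ dx = dy."""
    dx = dx.reshape(-1, 1)
    dy = dy.reshape(-1, 1)
    return B + (dy - B @ dx) @ dx.T / float(dx.T @ dx)

# Hypothetical use: refine a Jacobian estimate from one step of a trajectory.
B = np.eye(2)                    # initial Jacobian guess
dx = np.array([0.1, -0.05])      # change in inputs between iterations
dy = np.array([0.23, 0.04])      # observed change in outputs
B = broyden_update(B, dx, dy)
print(np.allclose(B @ dx, dy))   # True: the secant condition holds
```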
Abstract:
There is a rising demand for the quantitative performance evaluation of automated video surveillance. To advance research in this area, it is essential that comparisons between detection and tracking approaches can be drawn and that improvements in existing methods can be measured. There are a number of challenges related to the proper evaluation of motion segmentation, tracking, event recognition, and other components of a video surveillance system that are unique to the video surveillance community. These include the volume of data that must be evaluated, the difficulty in obtaining ground truth data, the definition of appropriate metrics, and achieving meaningful comparison of diverse systems. This chapter provides descriptions of useful benchmark datasets and their availability to the computer vision community. It outlines some ground truth and evaluation techniques, and provides links to useful resources. It concludes by discussing the future direction for benchmark datasets and their associated processes.
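As one concrete building block of such evaluations, the sketch below scores bounding-box detections against ground truth at a fixed IoU threshold and reports precision and recall; it is a simplified stand-in, since real surveillance benchmarks add identity and temporal tracking metrics on top.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter > 0 else 0.0

def precision_recall(detections, ground_truth, threshold=0.5):
    """Greedy one-to-one matching of detections to ground-truth boxes at a fixed IoU threshold."""
    unmatched_gt = list(ground_truth)
    true_positives = 0
    for det in detections:
        best = max(unmatched_gt, key=lambda gt: iou(det, gt), default=None)
        if best is not None and iou(det, best) >= threshold:
            true_positives += 1
            unmatched_gt.remove(best)
    precision = true_positives / len(detections) if detections else 0.0
    recall = true_positives / len(ground_truth) if ground_truth else 0.0
    return precision, recall

# Hypothetical single-frame example.
dets = [(10, 10, 50, 80), (200, 40, 260, 120)]
gts = [(12, 8, 52, 78), (300, 300, 340, 360)]
print(precision_recall(dets, gts))  # -> (0.5, 0.5)
```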
Abstract:
An important test of the quality of a computational model is its ability to reproduce standard test cases or benchmarks. For steady open-channel flow based on the Saint Venant equations, some benchmarks exist for simple geometries from the work of Bresse, Bakhmeteff and Chow, but these are tabulated in the form of standard integrals. This paper provides benchmark solutions for a wider range of cases, which may have a nonprismatic cross section, nonuniform bed slope, and transitions between subcritical and supercritical flow. This makes it possible to assess the underlying quality of computational algorithms in more difficult cases, including those with hydraulic jumps. Several new test cases are given in detail and the performance of a commercial steady flow package is evaluated against two of them. The test cases may also be used as benchmarks for both steady flow models and unsteady flow models in the steady limit.
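A minimal sketch of the kind of computation these benchmarks exercise: stepping the gradually varied flow equation dh/dx = (S0 - Sf)/(1 - Fr^2) along a rectangular channel with Manning friction. The geometry and flow values are placeholders, not one of the paper's test cases.

```python
import numpy as np

G = 9.81  # gravitational acceleration, m/s^2

def gvf_profile(q, b, n, s0, h0, dx, steps):
    """Explicit-Euler integration of dh/dx = (S0 - Sf) / (1 - Fr^2)
    for a rectangular channel with Manning friction (placeholder data)."""
    h = h0
    profile = [h]
    for _ in range(steps):
        area = b * h
        velocity = q / area
        hydraulic_radius = area / (b + 2.0 * h)
        sf = (n * velocity) ** 2 / hydraulic_radius ** (4.0 / 3.0)  # Manning friction slope
        froude_sq = velocity ** 2 / (G * h)
        h += dx * (s0 - sf) / (1.0 - froude_sq)
        profile.append(h)
    return np.array(profile)

# Hypothetical subcritical backwater profile: 20 m3/s in a 10 m wide channel.
depths = gvf_profile(q=20.0, b=10.0, n=0.03, s0=1e-4, h0=2.5, dx=100.0, steps=50)
print(f"Depth after {50 * 100} m: {depths[-1]:.2f} m")
```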
Abstract:
A precipitation downscaling method is presented using precipitation from a general circulation model (GCM) as predictor. The method extends a previous method from monthly to daily temporal resolution. The simplest form of the method corrects for biases in wet-day frequency and intensity. A more sophisticated variant also takes account of flow-dependent biases in the GCM. The method is flexible and simple to implement. It is proposed here as a correction of GCM output for applications where sophisticated methods are not available, or as a benchmark for the evaluation of other downscaling methods. Applied to output from reanalyses (ECMWF, NCEP) in the region of the European Alps, the method is capable of reducing large biases in the precipitation frequency distribution, even for high quantiles. The two variants exhibit similar performances, but the ideal choice of method can depend on the GCM/reanalysis and it is recommended to test the methods in each case. Limitations of the method are found in small areas with unresolved topographic detail that influence higher-order statistics (e.g. high quantiles). When used as benchmark for three regional climate models (RCMs), the corrected reanalysis and the RCMs perform similarly in many regions, but the added value of the latter is evident for high quantiles in some small regions.
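A minimal sketch of the simplest variant described, correcting wet-day frequency and mean intensity: a drizzle threshold is calibrated so the GCM wet-day frequency matches observations, and wet-day amounts are then rescaled to match the observed mean wet-day intensity. The daily series here are synthetic placeholders.

```python
import numpy as np

def calibrate(gcm_precip, obs_precip, obs_wet_threshold=0.1):
    """Return (threshold, scale) so corrected GCM precipitation matches the
    observed wet-day frequency and mean wet-day intensity."""
    obs_wet_fraction = np.mean(obs_precip >= obs_wet_threshold)
    # Threshold below which GCM days are treated as dry (drizzle removal).
    threshold = np.quantile(gcm_precip, 1.0 - obs_wet_fraction)
    gcm_wet = gcm_precip[gcm_precip >= threshold]
    obs_wet = obs_precip[obs_precip >= obs_wet_threshold]
    scale = obs_wet.mean() / gcm_wet.mean()
    return threshold, scale

def correct(gcm_precip, threshold, scale):
    """Apply the calibrated drizzle threshold and intensity rescaling."""
    return np.where(gcm_precip >= threshold, gcm_precip * scale, 0.0)

# Hypothetical daily series (mm/day), purely illustrative.
rng = np.random.default_rng(1)
obs = rng.gamma(shape=0.4, scale=8.0, size=3650)
gcm = rng.gamma(shape=0.9, scale=3.0, size=3650)  # too many drizzle days, too weak
thr, sc = calibrate(gcm, obs)
print(f"threshold = {thr:.2f} mm/day, scale = {sc:.2f}")
```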
Abstract:
Accurate decadal climate predictions could be used to inform adaptation actions to a changing climate. The skill of such predictions from initialised dynamical global climate models (GCMs) may be assessed by comparing with predictions from statistical models which are based solely on historical observations. This paper presents two benchmark statistical models for predicting both the radiatively forced trend and internal variability of annual mean sea surface temperatures (SSTs) on a decadal timescale based on the gridded observation data set HadISST. For both statistical models, the trend related to radiative forcing is modelled using a linear regression of SST time series at each grid box on the time series of equivalent global mean atmospheric CO2 concentration. The residual internal variability is then modelled by (1) a first-order autoregressive model (AR1) and (2) a constructed analogue model (CA). From the verification of 46 retrospective forecasts with start years from 1960 to 2005, the correlation coefficient for anomaly forecasts using trend with AR1 is greater than 0.7 over parts of the extra-tropical North Atlantic, the Indian Ocean and western Pacific. This is primarily related to the prediction of the forced trend. More importantly, both CA and AR1 give skillful predictions of the internal variability of SSTs in the subpolar gyre region over the far North Atlantic for lead times of 2 to 5 years, with correlation coefficients greater than 0.5. For the subpolar gyre and parts of the South Atlantic, CA is superior to AR1 for lead times of 6 to 9 years. These statistical forecasts are also compared with ensemble mean retrospective forecasts by DePreSys, an initialised GCM. DePreSys is found to outperform the statistical models over large parts of the North Atlantic for lead times of 2 to 5 years and 6 to 9 years; however, trend with AR1 is generally superior to DePreSys in the North Atlantic Current region, while trend with CA is superior to DePreSys in parts of the South Atlantic for lead times of 6 to 9 years. These findings encourage further development of benchmark statistical decadal prediction models, and methods to combine different predictions.
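A minimal sketch of the "trend with AR1" benchmark at a single grid box, under the assumption of synthetic data rather than HadISST: SST is regressed on equivalent CO2 concentration, an AR(1) model is fitted to the residual, and the two are combined for an out-of-sample forecast.

```python
import numpy as np

def fit_trend_ar1(sst, co2):
    """Linear trend of SST on CO2 plus an AR(1) model of the residual variability."""
    slope, intercept = np.polyfit(co2, sst, 1)
    residual = sst - (slope * co2 + intercept)
    phi = np.corrcoef(residual[:-1], residual[1:])[0, 1]  # lag-1 autocorrelation
    return slope, intercept, phi, residual[-1]

def forecast(co2_future, slope, intercept, phi, last_residual):
    """Forced trend plus damped persistence of the last observed anomaly."""
    steps = np.arange(1, len(co2_future) + 1)
    return slope * co2_future + intercept + last_residual * phi ** steps

# Synthetic 50-year training series at one grid box (placeholder data).
rng = np.random.default_rng(2)
co2 = np.linspace(315.0, 380.0, 50)                   # ppm equivalent
sst = 0.01 * co2 + 10.0 + rng.normal(0.0, 0.2, 50)    # trend + noise
params = fit_trend_ar1(sst, co2)
print(forecast(np.linspace(381.0, 390.0, 10), *params))
```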
Abstract:
The complexity of current and emerging high performance architectures provides users with options about how best to use the available resources, but makes predicting performance challenging. In this work a benchmark-driven performance modelling approach is outlined that is appropriate for modern multicore architectures. The approach is demonstrated by constructing a model of a simple shallow water code on a Cray XE6 system, from application-specific benchmarks that illustrate precisely how architectural characteristics impact performance. The model is found to recreate observed scaling behaviour up to 16K cores, and is used to predict optimal rank-core affinity strategies, exemplifying the type of problem such a model can be used for.
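A minimal sketch of a benchmark-driven model in this spirit: per-core compute throughput from a compute benchmark is combined with latency and bandwidth from a message-passing benchmark to predict time per step as a function of core count. All coefficients are placeholders, not measurements from the Cray XE6 study.

```python
def predicted_time(cores, grid_points, compute_rate, latency, bandwidth, halo_bytes_per_core):
    """Simple additive performance model: local compute plus halo-exchange communication.

    compute_rate        grid points processed per second per core (from a compute benchmark)
    latency, bandwidth  message latency (s) and bandwidth (bytes/s) (from a ping-pong benchmark)
    """
    compute = grid_points / (cores * compute_rate)
    communication = latency + halo_bytes_per_core / bandwidth
    return compute + communication

# Placeholder coefficients, purely illustrative.
for p in (64, 256, 1024, 4096, 16384):
    t = predicted_time(cores=p, grid_points=5.0e8, compute_rate=2.0e6,
                       latency=2.0e-6, bandwidth=5.0e9, halo_bytes_per_core=4.0e5)
    print(f"{p:6d} cores: predicted {t * 1e3:7.2f} ms per step")
```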
Abstract:
Geomagnetic activity has long been known to exhibit approximately 27 day periodicity, resulting from solar wind structures repeating each solar rotation. Thus a very simple near-Earth solar wind forecast is 27 day persistence, wherein the near-Earth solar wind conditions today are assumed to be identical to those 27 days previously. Effective use of such a persistence model as a forecast tool, however, requires the performance and uncertainty to be fully characterized. The first half of this study determines which solar wind parameters can be reliably forecast by persistence and how the forecast skill varies with the solar cycle. The second half of the study shows how persistence can provide a useful benchmark for more sophisticated forecast schemes, namely physics-based numerical models. Point-by-point assessment methods, such as correlation and mean-square error, find persistence skill comparable to numerical models during solar minimum, despite the 27 day lead time of persistence forecasts, versus 2–5 days for numerical schemes. At solar maximum, however, the dynamic nature of the corona means 27 day persistence is no longer a good approximation and skill scores suggest persistence is outperformed by numerical models for almost all solar wind parameters. But point-by-point assessment techniques are not always a reliable indicator of usefulness as a forecast tool. An event-based assessment method, which focusses on key solar wind structures, finds persistence to be the most valuable forecast throughout the solar cycle. This reiterates the fact that the means of assessing the “best” forecast model must be specifically tailored to its intended use.
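A minimal sketch of the persistence benchmark and a point-by-point assessment: the forecast for each day is the observed value 27 days earlier, scored by correlation and by an MSE-based skill score against a reference forecast. The daily solar wind series here is synthetic, not observational data.

```python
import numpy as np

def persistence_forecast(series, lag_days=27):
    """Forecast: each day equals the observed value lag_days earlier."""
    return series[:-lag_days], series[lag_days:]  # (forecast, verifying observation)

def skill(forecast, observed, reference):
    """Correlation and MSE-based skill score relative to a reference forecast."""
    corr = np.corrcoef(forecast, observed)[0, 1]
    mse = np.mean((forecast - observed) ** 2)
    mse_ref = np.mean((reference - observed) ** 2)
    return corr, 1.0 - mse / mse_ref

# Synthetic daily solar wind speed with a 27-day recurrent component (placeholder).
rng = np.random.default_rng(3)
days = np.arange(2000)
speed = 450.0 + 80.0 * np.sin(2.0 * np.pi * days / 27.0) + rng.normal(0.0, 40.0, days.size)

fcst, obs = persistence_forecast(speed)
climatology = np.full_like(obs, speed.mean())  # reference: climatological mean
print(skill(fcst, obs, climatology))
```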
Abstract:
Using a linear factor model, we study the behaviour of French, German, Italian and British sovereign yield curves in the run-up to EMU. This allows us to determine which of these yield curves might best approximate a benchmark yield curve post EMU. We find that the best approximation for the risk-free yield is the UK three-month T-bill yield, followed by the German three-month T-bill yield. As no one sovereign yield curve dominates all others, we find that a composite yield curve, consisting of French, Italian and UK bonds at different maturity points along the yield curve, should be the benchmark post EMU.
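A minimal sketch of a linear factor model of this kind: principal components are extracted from a panel of yields at several maturities, with the leading factors conventionally read as level, slope and curvature. The yield panel is randomly generated for illustration, not the French, German, Italian or UK data used in the paper.

```python
import numpy as np

def yield_curve_factors(yields, n_factors=3):
    """Extract linear factors (principal components) from a T x M panel of yields."""
    demeaned = yields - yields.mean(axis=0)
    covariance = np.cov(demeaned, rowvar=False)
    eigenvalues, eigenvectors = np.linalg.eigh(covariance)
    order = np.argsort(eigenvalues)[::-1][:n_factors]  # largest eigenvalues first
    loadings = eigenvectors[:, order]                   # M x n_factors
    factors = demeaned @ loadings                        # T x n_factors factor scores
    explained = eigenvalues[order] / eigenvalues.sum()
    return factors, loadings, explained

# Synthetic monthly yields at 5 maturities (3m, 1y, 3y, 5y, 10y), placeholder data.
rng = np.random.default_rng(4)
level = rng.normal(5.0, 0.5, (120, 1))
slope = np.linspace(0.0, 1.5, 5)
yields = level + slope + rng.normal(0.0, 0.1, (120, 5))

_, _, explained = yield_curve_factors(yields)
print("share of variance explained by each factor:", np.round(explained, 3))
```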