48 results for Accelerated tests
at Consorci de Serveis Universitaris de Catalunya (CSUC), Spain
Abstract:
The Hausman (1978) test is based on the vector of differences of two estimators. It is usually assumed that one of the estimators is fully efficient, since this simplifies calculation of the test statistic. However, this assumption limits the applicability of the test, since widely used estimators such as the generalized method of moments (GMM) or quasi maximum likelihood (QML) are often not fully efficient. This paper shows that the test may easily be implemented, using well-known methods, when neither estimator is efficient. To illustrate, we present both simulation results and empirical results for utilization of health care services.
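A generalized Hausman statistic of the kind this abstract describes can be sketched as follows. When neither estimator is efficient, the variance of the difference no longer simplifies to V1 − V2, so the full form V1 + V2 − C12 − C12′ is used together with a generalized inverse in case it is singular. The function name, the zero-covariance default, and all numbers are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from scipy import stats

def hausman(b1, b2, v1, v2, c12=None):
    """Generalized Hausman statistic H = d' V^- d, where d = b1 - b2 and
    V = V1 + V2 - C12 - C12'; degrees of freedom = rank(V)."""
    d = b1 - b2
    if c12 is None:
        c12 = np.zeros_like(v1)  # assumption: estimators uncorrelated
    v = v1 + v2 - c12 - c12.T
    v_inv = np.linalg.pinv(v)    # Moore-Penrose generalized inverse
    h = float(d @ v_inv @ d)
    dof = int(np.linalg.matrix_rank(v))
    return h, dof, float(stats.chi2.sf(h, dof))

# Toy example with invented estimates and covariance matrices
h, dof, p = hausman(np.array([1.0, 2.0]), np.array([1.1, 1.9]),
                    0.01 * np.eye(2), 0.02 * np.eye(2))
```

A large H (small p-value) would indicate a systematic difference between the two estimators.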
Abstract:
Report for the scientific sojourn carried out at Albert Einstein Institut in Germany, from April to July 2006.
Abstract:
This paper tests for real interest rate parity (RIRP) among the nineteen major OECD countries over the period 1978:Q2-1998:Q4. The econometric approach combines several unit root and stationarity tests designed for panels, valid under cross-section dependence and in the presence of multiple structural breaks. Our results strongly support the fulfillment of the weak version of RIRP over the studied period once dependence and structural breaks are accounted for.
Abstract:
This project consists of creating a tool to automate the tests that must be run on a commercial J2EE application, with the aim of saving work for the people in charge of testing the application and helping them in their search for errors. Specifically, we built a layered, modular and easily extensible application that goes beyond automating the most common tests, allowing a set of tests to be run in order to validate whether the version of the application under test is valid or not.
Abstract:
Report for the scientific sojourn carried out at Dartmouth College from August 2007 until February 2008. It has been very successful from different viewpoints: scientific, philosophical, human. During the past six months we have definitely advanced towards understanding the behaviour of the fluctuations of the quantum vacuum in the presence of boundaries, moving and non-moving, and also in situations where the topology of space-time changes: the dynamical Casimir effect, regularization problems, particle-creation statistics under different boundary conditions, etc. We have solved some longstanding problems and obtained quite remarkable results on this subject (as we explain in more detail below). We also pursued a general approach towards a viable modified f(R) gravity in both the Jordan and the Einstein frames (which are known to be mathematically equivalent, but not physically). A class of exponential, realistic modified gravities has been introduced by us and investigated with care. Special focus was placed on step-class models, which are the most promising from the phenomenological viewpoint and provide a natural way to classify all viable modified gravities. One- and two-step models were considered, but the analysis is extensible to N-step models. Both inflation in the early universe and the onset of the recent accelerated expansion arise in these models in a natural, unified way, which makes them very promising. Moreover, our work demonstrates that models in this category easily pass all local tests, including stability of the spherical-body solution, non-violation of Newton's law, and generation of a very heavy positive mass for the additional scalar degree of freedom.
Abstract:
We explore in depth the validity of a recently proposed scaling law for earthquake inter-event time distributions in the case of Southern California, using the waveform cross-correlation catalog of Shearer et al. Two statistical tests are used. On the one hand, the standard two-sample Kolmogorov-Smirnov test is in agreement with the scaling of the distributions. On the other hand, the one-sample Kolmogorov-Smirnov statistic, complemented with Monte Carlo simulation of the inter-event times as done by Clauset et al., supports the validity of the gamma distribution as a simple model for the scaling function appearing in the scaling law, for rescaled inter-event times above 0.01, except for the largest data set (magnitude greater than 2). A discussion of these results is provided.
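The two tests mentioned can be illustrated on synthetic data (not the Shearer et al. catalog): a two-sample KS test between two sets of rescaled inter-event times, and a one-sample KS test of one set against a fitted gamma distribution. The gamma parameters and sample sizes below are invented; note that fitting parameters to the same data biases the one-sample p-value, which is precisely why Clauset-style Monte Carlo is needed in practice.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic rescaled inter-event times for two "magnitude classes",
# drawn from the same gamma law so the scaling hypothesis holds by design
t1 = rng.gamma(shape=0.7, scale=1.0 / 0.7, size=2000)
t2 = rng.gamma(shape=0.7, scale=1.0 / 0.7, size=2000)

# Two-sample KS: do the two rescaled distributions coincide?
ks2 = stats.ks_2samp(t1, t2)

# One-sample KS against a gamma model fitted to the data (biased p-value;
# a Monte Carlo calibration would be required for a proper test)
shape, loc, scale = stats.gamma.fit(t1, floc=0)
ks1 = stats.kstest(t1, 'gamma', args=(shape, loc, scale))
```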
Abstract:
An approach to the psychology of the reception of the work of art based on projective tests.
Abstract:
This is a Web application that lets people practise and improve their knowledge in preparation for the theoretical driving-licence exam. The analysis, design and implementation were carried out using a four-layer .NET architecture.
Abstract:
Create a module for the InnovaCampus self-assessment project that allows us to generate tests using questions with resources, such as images, audio, video or documents, and to generate questions that depend on previous ones or that are linked together.
Abstract:
The classical description of Si oxidation given by Deal and Grove has well-known limitations for thin oxides (below 200 Å). Among the large number of alternative models published so far, the interfacial emission model has shown the greatest ability to fit the experimental oxidation curves. It relies on the assumption that during oxidation Si interstitials are emitted into the oxide to release strain, and that the accumulation of these interstitials near the interface reduces the reaction rate there. The resulting set of differential equations makes it possible to model diverse oxidation experiments. In this paper, we have compared its predictions with two sets of experiments: (1) the pressure dependence for subatmospheric oxygen pressure and (2) the enhancement of the oxidation rate after annealing in an inert atmosphere. The result is not satisfactory and raises serious doubts about the model's correctness.
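For reference, the classical Deal-Grove description criticized above reduces to the quadratic relation x² + A·x = B·(t + τ) for oxide thickness x at time t; the interfacial emission model replaces this with a system of differential equations. A minimal sketch of the classical relation, with placeholder parameter values (not fitted constants from the paper):

```python
import math

def deal_grove_thickness(t, A=0.165, B=0.0117, tau=0.37):
    """Oxide thickness (um) after time t (h) from the positive root of
    the Deal-Grove quadratic x^2 + A*x = B*(t + tau):
        x = (A/2) * (sqrt(1 + (t + tau) / (A^2 / (4*B))) - 1)
    A, B, tau here are illustrative placeholders."""
    return (A / 2.0) * (math.sqrt(1.0 + (t + tau) / (A * A / (4.0 * B))) - 1.0)
```

For long times the relation tends to the parabolic limit x ≈ sqrt(B·t); for short times (thin oxides) it is linear, which is exactly the regime where the model breaks down.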
Abstract:
I joined the research group of Prof. McCammon (University of California San Diego) as a postdoctoral researcher with a Beatriu de Pinós fellowship on 1 December 2010, and carried out my research there until 1 April 2012. Prof. McCammon is a world reference in the application of molecular dynamics (MD) simulations to biological systems of human interest. His most important contribution to the simulation of biological systems is the development of the accelerated molecular dynamics (AMD) method. Conventional MD simulations, which are limited to the nanosecond time scale (~10⁻⁹ s), are not suitable for studying biological systems that are relevant on longer time scales (µs, ms, ...). AMD makes it possible to explore rare molecular events that are key to understanding many biological systems and that could not be observed otherwise. During my stay at the University of California San Diego, I worked on different applications of AMD simulations, including photochemistry and computer-aided drug design. Specifically, I first successfully developed a combination of AMD with Car-Parrinello simulations to improve the exploration of deactivation pathways (conical intersections) in photoactivated chemical reactions. Second, I applied statistical techniques (Replica Exchange) together with AMD to the description of protein-ligand interactions. Finally, I carried out a computer-aided drug design study on the Rho G-protein (involved in the development of human cancer), combining structural analyses and AMD simulations. The projects I took part in have been published (or are still under review) in different scientific journals and have been presented at several international conferences. The report below contains more details on each of these projects.
Abstract:
Test-based assessment tools are mostly focused on the use of computers. However, advanced Information and Communication Technologies, such as handheld devices, open up the possibility of creating new assessment scenarios, increasing teachers' choices when designing tests more appropriate to their subject areas. In this paper we use the term Computing-Based Testing (CBT) instead of Computer-Based Testing, as it better captures the emerging trends. Within the CBT context, the paper is centred on proposing an approach for "assessment in situ" activities, where questions have to be answered in front of a real space or location (situ). In particular, we present the QuesTInSitu software implementation, which includes both an editor and a player based on the IMS Question and Test Interoperability specification and Google Maps. With QuesTInSitu, teachers can create geolocated questions and tests (routes), and students can answer the tests using GPS-equipped mobile devices while following a route. Three illustrative scenarios, and the results from implementing one of them in a real educational situation, show that QuesTInSitu enables the creation of innovative, enriched and context-aware assessment activities. The results also indicate that using mobile devices and location-based systems in assessment activities helps students put explorative and spatial skills into practice and fosters their motivation, reflection and personal observation.
Abstract:
Sobriety checkpoints are not usually randomly located by traffic authorities. As such, information provided by non-random alcohol tests cannot be used to infer the characteristics of the general driving population. In this paper a case study is presented in which the prevalence of alcohol-impaired driving is estimated for the general population of drivers. A stratified probabilistic sample was designed to represent vehicles circulating in non-urban areas of Catalonia (Spain), a region characterized by its complex transportation network and dense traffic around the metropolis of Barcelona. Random breath alcohol concentration tests were performed during spring 2012 on 7,596 drivers. The estimated prevalence of alcohol-impaired drivers was 1.29%, which is roughly a third of the rate obtained in non-random tests. Higher rates were found on weekends (1.90% on Saturdays, 4.29% on Sundays) and especially at night. The rate is higher for men (1.45%) than for women (0.64%) and the percentage of positive outcomes shows an increasing pattern with age. In vehicles with two occupants, the proportion of alcohol-impaired drivers is estimated at 2.62%, but when the driver was alone the rate drops to 0.84%, which might reflect the socialization of drinking habits. The results are compared with outcomes in previous surveys, showing a decreasing trend in the prevalence of alcohol-impaired drivers over time.
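The stratified estimate described here combines within-stratum positive rates with stratum weights: prevalence = Σ_h W_h · p_h. A minimal sketch with invented weights and counts (the paper's actual strata and figures are not reproduced):

```python
def stratified_prevalence(strata):
    """strata: list of (weight, positives, tested) per stratum;
    weights are population shares and should sum to 1."""
    return sum(w * pos / n for w, pos, n in strata)

# Invented example: weekday / Saturday / Sunday strata
est = stratified_prevalence([
    (0.70, 40, 5000),   # weekdays: low positive rate
    (0.15, 24, 1300),   # Saturdays
    (0.15, 56, 1300),   # Sundays: highest rate
])
```

Weighting by circulation shares is what lets a checkpoint sample represent the general driving population, in contrast with the non-random tests discussed above.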
Abstract:
We present a new method for constructing exact distribution-free tests (and confidence intervals) for variables that can generate more than two possible outcomes. This method separates the search for an exact test from the goal of creating a non-randomized test. Randomization is used to extend any exact test relating to means of variables with finitely many outcomes to variables with outcomes belonging to a given bounded set. Tests in terms of variance and covariance are reduced to tests relating to means. Randomness is then eliminated in a separate step. This method is used to create confidence intervals for the difference between two means (or variances) and tests of stochastic inequality and correlation.
Abstract:
This paper explores biases in the elicitation of utilities under risk and the contribution that generalizations of expected utility can make to resolving these biases. We used five methods to measure utilities under risk and found clear violations of expected utility. Of the theories studied, prospect theory was most consistent with our data. The main improvement of prospect theory over expected utility was in comparisons between a riskless and a risky prospect (riskless-risk methods). We observed no improvement over expected utility in comparisons between two risky prospects (risk-risk methods). One explanation for why we found no improvement of prospect theory over expected utility in risk-risk methods may be that there was less overweighting of small probabilities in our study than has commonly been observed.
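The overweighting of small probabilities referred to here can be illustrated with the Tversky-Kahneman (1992) probability weighting function, a standard prospect-theory parameterization; this is a generic sketch, not the paper's elicitation procedure, and the parameter values are the commonly cited estimates, not values fitted in this study.

```python
def tk_weight(p, gamma=0.61):
    """Tversky-Kahneman (1992) weighting: w(p) = p^g / (p^g + (1-p)^g)^(1/g).
    For gamma < 1 it overweights small p and underweights large p."""
    return p**gamma / (p**gamma + (1.0 - p)**gamma) ** (1.0 / gamma)

def pt_value(p, x_high, x_low=0.0, alpha=0.88, gamma=0.61):
    """Prospect-theory value of the gamble (x_high with prob p, else x_low),
    gains only, with power value function v(x) = x^alpha."""
    w = tk_weight(p, gamma)
    return w * x_high**alpha + (1.0 - w) * x_low**alpha
```

With gamma = 0.61, w(0.01) is several times larger than 0.01, which is the overweighting of small probabilities whose weakness in this study's data may explain the risk-risk results.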