1000 results for dynamic mis-specification
Abstract:
We propose new methods for evaluating predictive densities that focus on the models' actual predictive ability in finite samples. The tests offer a simple way of evaluating the correct specification of predictive densities, either parametric or non-parametric. The results indicate that our tests are well sized and have good power in detecting mis-specification in predictive densities. An empirical application to the Survey of Professional Forecasters and a baseline Dynamic Stochastic General Equilibrium model shows the usefulness of our methodology.
Abstract:
This paper uses forecasts from the European Central Bank's Survey of Professional Forecasters to investigate the relationship between inflation and inflation expectations in the euro area. We use theoretical structures based on the New Keynesian and Neoclassical Phillips curves to inform our empirical work. Given the relatively short data span of the Survey of Professional Forecasters and the need to control for many explanatory variables, we use dynamic model averaging in order to ensure a parsimonious econometric specification. We use both regression-based and VAR-based methods. We find no support for the backward-looking behavior embedded in the Neoclassical Phillips curve. Much more support is found for the forward-looking behavior of the New Keynesian Phillips curve, but most of this support is found after the beginning of the financial crisis.
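A minimal sketch of the regression-based exercise this abstract describes, assuming a hybrid Phillips curve in which SPF expectations carry the forward-looking (New Keynesian) term and lagged inflation the backward-looking (Neoclassical) term. Variable names and the plain OLS estimator are illustrative; the paper itself averages over many such specifications with dynamic model averaging.

```python
import numpy as np

def hybrid_phillips_ols(pi, pi_expect, slack):
    """OLS on a hybrid Phillips curve:
        pi_t = c + a * E_t[pi_{t+1}] + b * pi_{t-1} + g * slack_t + e_t
    pi        : realized inflation, shape (T,)
    pi_expect : survey expectations of next-period inflation, shape (T,)
    slack     : an activity measure such as the output gap, shape (T,)
    A large `a` relative to `b` favors forward-looking (NK) behavior.
    """
    y = pi[1:]                         # pi_t for t = 1..T-1
    X = np.column_stack([
        np.ones(len(y)),               # constant
        pi_expect[1:],                 # E_t[pi_{t+1}]
        pi[:-1],                       # pi_{t-1}
        slack[1:],
    ])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef                        # (c, a, b, g)
```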
Abstract:
The crisis that broke out in the United States mortgage market in 2008 and spread throughout the entire financial system exposed the level of interconnection that currently exists among the entities of the financial sector and their relationships with the productive sector, making evident the need to identify and characterize the systemic risk inherent in the system, so that regulators can pursue the stability both of individual institutions and of the system as a whole. This paper shows, through a model that combines the informative power of networks with a spatial autoregressive (panel-type) model, the importance of adding to the micro-prudential approach (proposed in Basel II) a variable that captures the effect of being connected to other entities, thereby providing a macro-prudential analysis (proposed in Basel III).
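A minimal sketch of the spatial-lag idea described above, in a cross-sectional form for brevity (the paper uses a panel variant): a row-normalized interbank exposure matrix W makes each entity's outcome depend on its neighbors' outcomes, y = ρWy + Xβ + ε. Estimation here uses a simple Kelejian-Prucha-style 2SLS with [1, X, WX] as instruments rather than maximum likelihood; all names are illustrative.

```python
import numpy as np

def row_normalize(exposures):
    """Turn a raw interbank exposure matrix into a spatial weight matrix W."""
    W = np.asarray(exposures, dtype=float)
    np.fill_diagonal(W, 0.0)                     # no self-links
    rs = W.sum(axis=1, keepdims=True)
    return np.divide(W, rs, out=np.zeros_like(W), where=rs > 0)

def sar_2sls(y, X, W):
    """2SLS for y = rho * W@y + c + X@beta + e, instruments [1, X, W@X].
    X holds exogenous entity characteristics WITHOUT a constant column."""
    ones = np.ones((len(y), 1))
    R = np.column_stack([W @ y, ones, X])        # endogenous W@y first
    Z = np.column_stack([ones, X, W @ X])        # instrument matrix
    Pz = Z @ np.linalg.solve(Z.T @ Z, Z.T)       # projection onto instruments
    coef = np.linalg.solve(R.T @ Pz @ R, R.T @ Pz @ y)
    return coef[0], coef[1:]                     # rho, then (c, beta)
```

The estimated ρ is the "effect of being connected" the abstract refers to: how strongly an entity's risk co-moves with that of its counterparties.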
Abstract:
Three sets of laboratory column experiments on the hydrogeochemistry of seawater intrusion have been modelled using two codes: ACUAINTRUSION (Chemical Engineering Department, University of Alicante) and PHREEQC (U.S.G.S.). These reactive models use the hydrodynamic parameters determined with the ACUAINTRUSION TRANSPORT software and fit the chloride breakthrough curves perfectly. The ACUAINTRUSION code was improved, and its instabilities were studied relative to the discretisation. Relative square errors were obtained for different combinations of the spatial and temporal steps: a global error for the full experimental data set and a partial error for each element. Good simulations of the three experiments were obtained with the ACUAINTRUSION software using slight variations of the selectivity coefficients determined for both sediments in batch experiments with fresh water. The cation exchange parameters included in ACUAINTRUSION follow the Gapon convention with modified exponents for the Ca/Mg exchange. PHREEQC simulations performed using the Gaines-Thomas convention were unsatisfactory with the exchange coefficients from the PHREEQC database (or their reported range), while those determined with fresh water and natural sediment allowed only an approximation to be obtained. For the treated sediment, adjusted exchange coefficients were determined to improve the simulation; they differ greatly from the PHREEQC database and batch-experiment values but are of a similar order to the others determined under dynamic conditions. The two software packages simulated different cation concentrations, a disparity attributable to the defined selectivity coefficients, which affect the gypsum equilibrium. Consequently, each code yields different calculated sulphate concentrations, with the smaller mismatch predicted by ACUAINTRUSION. In general, the ACUAINTRUSION and PHREEQC simulations produced similar results and predictions consistent with the experimental data. The simulated results are not identical to the experimental data, however; sulphate (total S) is overpredicted by both models, most likely due to factors such as gypsum kinetics, possible variation of the exchange coefficients with salinity and the neglect of other processes.
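For reference, a sketch of the Gapon-convention exchange expression that the abstract says was modified, written here for Ca/Mg exchange on exchanger X. This rendering is our reconstruction of the standard convention, not the exact ACUAINTRUSION formulation; braces denote solution activities and brackets denote exchanger fractions.

```latex
% Gapon convention for Ca/Mg exchange (standard exponents of 1/2):
%   Mg_{1/2}X + \tfrac{1}{2}\,Ca^{2+} \rightleftharpoons Ca_{1/2}X + \tfrac{1}{2}\,Mg^{2+}
\[
  K_G \;=\;
  \frac{[\mathrm{Ca_{1/2}X}]\;\{\mathrm{Mg^{2+}}\}^{1/2}}
       {[\mathrm{Mg_{1/2}X}]\;\{\mathrm{Ca^{2+}}\}^{1/2}}
\]
% The "modified exponents" mentioned in the abstract generalize the 1/2
% powers on the activities to fitted values calibrated against the columns.
```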
Abstract:
The teaching-learning process is increasingly focused on the combination of the paradigms "learning by viewing" and "learning by doing." In this context, educational resources, either expository or evaluative, play a pivotal role. Both types of resources are interdependent, and their sequencing creates a richer educational experience for the end user. However, there is a lack of tools that support sequencing, largely because existing specifications are complex. Seqins is a sequencing tool for digital resources that has a fairly simple sequencing model. The tool communicates through the IMS LTI specification with a plethora of e-learning systems, such as learning management systems, repositories, authoring and evaluation systems. To validate Seqins, we integrated it into an instance of the Ensemble e-learning framework for learning computer programming.
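A minimal sketch of the IMS LTI 1.1 basic launch through which an LMS (or the Ensemble instance) would hand a user over to a tool such as Seqins. The parameter names are the standard LTI 1.1 ones, but the identifiers and URLs are placeholders, and the mandatory OAuth 1.0a signing step is only indicated.

```python
# Hypothetical LTI 1.1 basic launch from a consumer (LMS) to a tool (Seqins).
# A real launch must be signed with OAuth 1.0a (HMAC-SHA1) and form-POSTed
# to the tool's launch URL, e.g. https://seqins.example.org/lti/launch
launch_params = {
    "lti_message_type": "basic-lti-launch-request",
    "lti_version": "LTI-1p0",
    "resource_link_id": "seq-unit-42",      # which sequenced resource to open
    "user_id": "student-123",
    "roles": "Learner",
    "launch_presentation_return_url": "https://lms.example.org/return",
    "oauth_consumer_key": "example-key",    # paired with a shared secret
}
```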
Abstract:
Existing gamification services have features that preclude their use by e-learning tools. Odin is a gamification service that mimics the API of state-of-the-art services without these limitations. This paper describes Odin and its role in an e-learning system architecture that requires gamification, and details its implementation. The validation of Odin involved creating a small e-learning game, integrated into a Learning Management System (LMS) using the Learning Tools Interoperability (LTI) specification.
Abstract:
"Series Title: IFIP - The International Federation for Information Processing, ISSN 1868-4238"
Abstract:
"A workshop within the 19th International Conference on Applications and Theory of Petri Nets - ICATPN’1998"
Abstract:
We propose new methods for evaluating predictive densities. The methods include Kolmogorov-Smirnov and Cramér-von Mises-type tests for the correct specification of predictive densities robust to dynamic mis-specification. The novelty is that the tests can detect mis-specification in the predictive densities even if it appears only over a fraction of the sample, due to the presence of instabilities. Our results indicate that our tests are well sized and have good power in detecting mis-specification in predictive densities, even when it is time-varying. An application to density forecasts of the Survey of Professional Forecasters demonstrates the usefulness of the proposed methodologies.
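A minimal sketch of the standard machinery behind such tests, assuming the probability integral transform (PIT) approach: if the predictive densities are correctly specified, the PITs z_t = F_t(y_t) are i.i.d. U(0,1), and a Kolmogorov-Smirnov-type statistic measures their distance from uniformity. Note that the plain KS p-value below assumes i.i.d. PITs; the paper's contribution is precisely critical values that remain valid under dynamic mis-specification and instabilities.

```python
import numpy as np
from scipy import stats

def pit_ks_test(y, cdf_forecasts):
    """Kolmogorov-Smirnov-type check of density forecasts via PITs.

    y             : realized outcomes, shape (T,)
    cdf_forecasts : callables; cdf_forecasts[t](x) is the predictive CDF
                    issued for period t.
    """
    z = np.array([F(x) for F, x in zip(cdf_forecasts, y)])
    # distance between the empirical CDF of the PITs and the U(0,1) CDF
    return stats.kstest(z, "uniform")

# Illustrative use: correctly specified Gaussian forecasts
rng = np.random.default_rng(0)
y = rng.normal(size=200)
forecasts = [lambda x: stats.norm.cdf(x)] * 200
print(pit_ks_test(y, forecasts))
```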
Abstract:
With the evolution of the P2P research field, new problems, such as those related to information security, have arisen. It is important to provide security mechanisms to P2P systems, since security has already become one of the key issues when evaluating them. However, even though many P2P systems have been adapted to provide a security baseline to their underlying applications, more advanced capabilities are becoming necessary. Specifically, privacy preservation and anonymity are deemed essential to make the information society sustainable. Unfortunately, it may sometimes be difficult to attain anonymity unless it is included in the system's initial design. The JXTA open protocols specification is a good example of this kind of scenario. This work studies how to provide anonymity to JXTA's architecture in a feasible manner and proposes an extension which allows deployed services to process two-way messaging without disclosing the endpoints' identities to third parties.
Abstract:
The recent rapid development of biotechnological approaches has enabled the production of large, whole-genome-level biological data sets. Handling these data sets requires reliable and efficient automated tools and methods for data processing and result interpretation. Bioinformatics, the field of studying and processing biological data, tries to answer this need by combining methods and approaches from computer science, statistics, mathematics and engineering. The need is also increasing for tools that can be used by biological researchers themselves, who may not have a strong statistical or computational background; this requires tools and pipelines with intuitive user interfaces, robust analysis workflows and a strong emphasis on result reporting and visualization. Within this thesis, several data analysis tools and methods have been developed for analyzing high-throughput biological data sets. These approaches, covering several aspects of high-throughput data analysis, are aimed specifically at gene expression and genotyping data, although in principle they are suitable for analyzing other data types as well. Coherent handling of the data across the various analysis steps is highly important for robust and reliable results; thus, robust data analysis workflows are also described, putting the developed tools and methods into a wider context. Since the choice of the correct analysis method may depend on the properties of the specific data set, guidelines for choosing an optimal method are given. The tools, methods and workflows developed within this thesis have been applied to several research studies, of which two representative examples are included in the thesis. The first study focuses on spermatogenesis in murine testis, and the second examines cell lineage specification in mouse embryonic stem cells.
Abstract:
This study analyzes the effect of Internet use on labor income, removing the endogeneity problem that arises from selection. The effects by place of access and by the activities carried out online are also analyzed. For this, Propensity Score Matching is used with data for Colombia from 2009 to 2011. The results show a positive, significant and heterogeneous effect across salaried and self-employed workers, with the latter benefiting the most. Regarding the place of use, a positive impact is found for Internet use at work and/or at home, with a larger magnitude for the former. Regarding how the Internet is used, no effect is found when it is used for entertainment activities, but there is an effect when it is used daily for educational tasks. This suggests that using this service raises productivity and human capital, which translates into higher income. Thus, the place of use is an informative signal for the labor market, while the impact of how the tool is used depends on the frequency with which the activities are carried out.
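A minimal sketch of the matching estimator described, assuming a logit propensity score for Internet use given observables and one-nearest-neighbor matching with replacement; variable names are illustrative, and scikit-learn is used for the logit only for brevity.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def att_nn_matching(income, internet, X):
    """ATT of Internet use on labor income via propensity score matching.

    income   : (log) labor income, shape (n,)
    internet : 1 if the worker uses the Internet, else 0, shape (n,)
    X        : observables (education, age, region, ...), shape (n, k)
    """
    score = LogisticRegression(max_iter=1000).fit(X, internet).predict_proba(X)[:, 1]
    treated = np.flatnonzero(internet == 1)
    control = np.flatnonzero(internet == 0)
    # for each treated unit, the control with the closest propensity score
    nn = control[np.abs(score[control][None, :] - score[treated][:, None]).argmin(axis=1)]
    return np.mean(income[treated] - income[nn])
```

Separate ATTs by subgroup (salaried vs. self-employed, place of access, type of activity) can be obtained by running the estimator on the corresponding subsamples, which is the kind of heterogeneity the study reports.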
Abstract:
The main objective of this thesis is to bring the existing empirical evidence on macroeconomic aggregates closer to the new empirical evidence based on micro data on consumer prices, building on the standard price-rigidity models used in the monetary policy literature. To this end, the thesis uses the database of individual consumer prices in Brazil provided by Fundação Getulio Vargas. Specifically, it focuses on three main themes: the existence of temporary price changes, the heterogeneity of price rigidity across firms within the same sector, and the shape of hazard functions. The results show that there is indeed a correlation between the variables describing temporary price changes and the macroeconomic aggregates; that heterogeneity in price rigidity across firms within the same sector has significant effects on the dynamics of the macroeconomic aggregates; and, finally, that the more general form of the hazard function proposed in this thesis allows for new dynamics of the macroeconomic aggregates.
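A minimal sketch of the hazard-function object at the center of the thesis: the empirical hazard of price adjustment by spell age, computed from completed price-spell durations. Under Calvo pricing the hazard is flat, so departures from flatness (declining hazards, seasonal spikes) are what a more general hazard specification is designed to capture. The implementation below is illustrative.

```python
import numpy as np

def empirical_hazard(spell_lengths, max_age=24):
    """Hazard of price adjustment by price-spell age (e.g. in months).

    spell_lengths : completed spell durations, one entry per price spell.
    hazard(a) = P(price changes at age a | spell survived to age a).
    """
    spells = np.asarray(spell_lengths)
    ages = np.arange(1, max_age + 1)
    at_risk = np.array([(spells >= a).sum() for a in ages])
    ending = np.array([(spells == a).sum() for a in ages])
    return np.divide(ending, at_risk,
                     out=np.zeros(len(ages)), where=at_risk > 0)
```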
Abstract:
Economists and policymakers have long been concerned with increasing the supply of health professionals in rural and remote areas. This work seeks to understand which factors influence physicians' choice of practice location right after completing residency. Unlike previous papers, we analyse the Brazilian misallocation and assess the particularities of developing countries. We use a discrete choice model approach with a multinomial logit specification. Two rich databases are employed, containing the location and wages of formally employed physicians as well as details of their post-graduation. Our main findings are that amenities matter, that physicians have a strong tendency to remain in the region where they completed residency, and that salaries are significant in the choice of urban, but not rural, communities. We conjecture this is due to attachments built during training and infrastructure concerns.
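A minimal sketch of the multinomial (conditional) logit specification described, assuming each physician chooses among J regions with utility linear in region attributes such as wage, amenities and a dummy for the residency region; names are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def fit_location_logit(X, choice):
    """Conditional logit for physicians' choice among J regions.

    X      : attributes, shape (n, J, k) -- e.g. wage, amenity index,
             and a 'did residency here' dummy per physician-region pair.
    choice : index of the chosen region per physician, shape (n,)
    """
    n, J, k = X.shape

    def neg_loglik(beta):
        v = X @ beta                            # utilities, shape (n, J)
        v = v - v.max(axis=1, keepdims=True)    # numerical stability
        logp = v - np.log(np.exp(v).sum(axis=1, keepdims=True))
        return -logp[np.arange(n), choice].sum()

    return minimize(neg_loglik, np.zeros(k), method="BFGS").x
```

A strongly positive coefficient on the residency dummy would capture the paper's finding that physicians tend to remain where they trained.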
Abstract:
The synthetic control (SC) method has recently been proposed as an alternative method to estimate treatment effects in comparative case studies. Abadie et al. [2010] and Abadie et al. [2015] argue that one of the advantages of the SC method is that it imposes a data-driven process to select the comparison units, providing more transparency and less discretionary power to the researcher. However, an important limitation of the SC method is that it does not provide clear guidance on the choice of predictor variables used to estimate the SC weights. We show that this lack of specific guidance provides significant opportunities for the researcher to search for specifications with statistically significant results, undermining one of the main advantages of the method. Considering six alternative specifications commonly used in SC applications, we calculate in Monte Carlo simulations the probability of finding a statistically significant result at 5% in at least one specification. We find that this probability can be as high as 13% (23% for a 10% significance test) when there are 12 pre-intervention periods and decays slowly with the number of pre-intervention periods. With 230 pre-intervention periods, this probability is still around 10% (18% for a 10% significance test). We show that the specification that uses the average pre-treatment outcome values to estimate the weights performed particularly badly in our simulations. However, the specification-searching problem remains relevant even when we do not consider this specification. We also show that this specification-searching problem is relevant in simulations with real datasets looking at placebo interventions in the Current Population Survey (CPS). In order to mitigate this problem, we propose a criterion to select among different SC specifications based on the prediction error of each specification in placebo estimations.
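A minimal sketch of the object behind the specification search: SC weights solve a simplex-constrained least-squares problem, so changing which predictors enter the fit (all pre-treatment outcomes, a few averages, covariates) generally changes the weights and hence the estimated effect. The solver call is illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def sc_weights(x1, X0):
    """Synthetic control weights: min ||x1 - X0 @ w||^2, w >= 0, sum(w) = 1.

    x1 : predictor values of the treated unit, shape (m,)
    X0 : predictor values of the J donor units, shape (m, J)
    Each choice of the m predictors is one 'specification'; comparing
    results across such choices is the specification-search margin.
    """
    J = X0.shape[1]
    res = minimize(
        lambda w: np.sum((x1 - X0 @ w) ** 2),
        np.full(J, 1.0 / J), method="SLSQP",
        bounds=[(0.0, 1.0)] * J,
        constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}],
    )
    return res.x
```

The selection criterion the abstract proposes amounts to computing placebo prediction errors under each candidate specification and keeping the one that predicts best.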