905 results for bootstrap percolation
Abstract:
This work evaluates the mechanical behavior of cementitious materials at different length scales. First, the mechanical properties of concrete produced with a bioplasticizer based on effective microorganisms (EM) are studied by statistical nanoindentation and compared with the mechanical properties of concrete produced with an ordinary superplasticizer (SP). The addition of the EM-based bioplasticizer is found to improve the strength of the C–S–H by increasing the cohesion and friction of the solid nanograins. Statistical analysis of the indentation results suggests that the EM-based bioplasticizer inhibits the precipitation of C–S–H with a higher solid volume fraction. Second, a multiscale micromechanics-based model is derived for the poroelastic behavior of cement paste at early age. The proposed approach yields the poroelastic properties required for modeling the partially saturated mechanical behavior of aging cement pastes. The model is shown to predict the percolation threshold and the undrained Young's modulus in agreement with experimental data. A stochastic metamodel is built on the basis of polynomial chaos to propagate the uncertainty of the model parameters across several length scales. A sensitivity analysis is carried out by post-processing the metamodel for cement pastes with water-to-cement ratios between 0.35 and 0.70. The underlying uncertainty of the homogenized poroelastic properties is found to be mainly due to the activation energy of the calcium aluminates at early age and, later, to the elastic modulus of the low-density calcium silicate hydrates.
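The uncertainty-propagation step described above can be illustrated with a minimal polynomial-chaos sketch. The two-parameter toy model, sample size, and function names below are invented for illustration; the thesis's actual multiscale cement model is not reproduced here.

```python
import math
from itertools import product

import numpy as np

# Probabilists' Hermite polynomials He_n, orthogonal under N(0,1) with E[He_n^2] = n!
def hermite_e(n, x):
    if n == 0:
        return np.ones_like(x)
    if n == 1:
        return x
    return x * hermite_e(n - 1, x) - (n - 1) * hermite_e(n - 2, x)

def pce_sobol(model, dim=2, degree=2, n_samples=4000, seed=0):
    """Fit a tensor-product Hermite PCE by least squares on Monte Carlo samples
    of standard-normal inputs; return first-order Sobol sensitivity indices."""
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((n_samples, dim))
    y = model(X)
    # multi-indices with total degree <= degree
    idx = [a for a in product(range(degree + 1), repeat=dim) if sum(a) <= degree]
    Phi = np.column_stack(
        [np.prod([hermite_e(a[d], X[:, d]) for d in range(dim)], axis=0) for a in idx]
    )
    coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    # each basis term contributes coef^2 * prod(k!) to the variance
    norms = np.array([np.prod([math.factorial(k) for k in a]) for a in idx])
    var_terms = coef ** 2 * norms
    total = sum(v for a, v in zip(idx, var_terms) if sum(a) > 0)
    S = []
    for d in range(dim):
        s = sum(v for a, v in zip(idx, var_terms)
                if a[d] > 0 and all(a[e] == 0 for e in range(dim) if e != d))
        S.append(s / total)
    return np.array(S)

# Toy model: variance split 4:1 between the two inputs, so S1 ~ 0.8, S2 ~ 0.2
S = pce_sobol(lambda X: 2.0 * X[:, 0] + 1.0 * X[:, 1])
```

Post-processing the surrogate's coefficients gives the Sobol indices directly, which is how a fitted chaos expansion supports sensitivity analysis without extra model runs.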
Abstract:
In recent decades the public sector has come under pressure to improve its performance, and information technology (IT) has increasingly been used as a tool toward that goal. It has therefore become important for public organizations, particularly higher-education institutions, to determine which factors influence the acceptance and use of technology, as these affect the success of implementation and the desired organizational results. The Technology Acceptance Model (TAM), based on the constructs of perceived usefulness and perceived ease of use, was used as the basis for this study. However, because of the complexity of implementing integrated management systems, organizational factors were added in order to seek a fuller explanation of the acceptance of such systems. Five constructs related to critical success factors in implementing ERP systems were thus added to the TAM: top management support, communication, training, cooperation, and technological complexity (BUENO and SALMERON, 2008). From the foregoing arises the following research problem: which factors influence the acceptance and use of the SIE / academic module at the Federal University of Pará, from the perspective of its teacher and technician users? The purpose of this study was to identify the influence of organizational factors and behavioral antecedents on the behavioral intention to use the SIE / academic module at UFPA, from the perspective of teacher and technical users. This is applied, exploratory, and descriptive quantitative research implemented as a survey; data were collected through a structured questionnaire applied to a sample of 229 teachers and 30 technical and administrative staff. Data analysis was carried out through descriptive statistics and structural equation modeling with the partial least squares (PLS) technique.
The measurement model was assessed first, verifying reliability and convergent and discriminant validity for all indicators and constructs. The structural model was then analyzed using the bootstrap resampling technique. In the assessment of statistical significance, all hypotheses were supported. The coefficient of determination (R²) was high or moderate for five of the six endogenous variables, and the model explains 47.3% of the variation in behavioral intention. Notably, among the antecedents of behavioral intention (BI) analyzed in this study, perceived usefulness is the variable with the greatest effect on behavioral intention, followed by perceived ease of use (PEU) and attitude (AT). Among the organizational aspects (critical success factors) studied, technological complexity (TC) and training (ERT) had the greatest effects on behavioral intention to use, although these effects were weaker than those produced by the behavioral factors (originating from the TAM). Top management support (TMS) showed the smallest effect of all variables on intention to use (BI), followed by communication (COM) and cooperation (CO), which exert a low effect on behavioral intention (BI). Therefore, as in other studies, the TAM constructs were adequate for the present research. The study thus contributes evidence that the Technology Acceptance Model can be applied to predict the acceptance of integrated management systems, even in public organizations. Keywords: Technology
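The bootstrap step used above to test path significance can be sketched as follows; plain OLS on a single standardized path stands in for the full PLS structural model, and the construct data are synthetic.

```python
import numpy as np

def bootstrap_path_coefficient(x, y, n_boot=2000, seed=0):
    """Bootstrap a standardized path coefficient: resample cases with
    replacement, re-estimate, and return (estimate, SE, t-statistic)."""
    rng = np.random.default_rng(seed)
    n = len(x)

    def coef(xs, ys):
        xs = (xs - xs.mean()) / xs.std()
        ys = (ys - ys.mean()) / ys.std()
        return float(xs @ ys) / len(xs)   # standardized slope = correlation

    estimate = coef(x, y)
    boots = []
    for _ in range(n_boot):
        i = rng.integers(0, n, n)         # resample case indices
        boots.append(coef(x[i], y[i]))
    se = np.array(boots).std(ddof=1)
    return estimate, se, estimate / se

# Synthetic 'perceived usefulness' -> 'behavioral intention' relation
rng = np.random.default_rng(1)
pu = rng.standard_normal(250)
bi = 0.6 * pu + 0.8 * rng.standard_normal(250)
est, se, t = bootstrap_path_coefficient(pu, bi)
```

A path is conventionally deemed significant at the 5% level when the bootstrap t-statistic exceeds 1.96, which is the criterion the hypotheses above were tested against.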
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
In the absence of a precise, single measure of efficiency for hockey players, this study aims to evaluate the efficiency of players in the National Hockey League (NHL) and to show how it can affect the decision to buy out a player's contract. To do so, individual player statistics for the 2007-2008 through 2010-2011 NHL seasons are used. Efficiency is estimated using data envelopment analysis (DEA) with bootstrap. Inputs include salary and minutes played, while outputs include each player's defensive and offensive contributions. A logistic regression is used to estimate the association between individual efficiency and the probability of a contract buyout. The analysis shows that, across 3,159 observations, mean efficiency is 0.635, and is similar across positions and seasons. A strong positive relationship is found between a team's point total in the overall standings and the mean efficiency of its players (correlation coefficient = 0.43, p-value < 0.01). Players with higher efficiency have a lower probability of having their contract bought out (odds ratio = 0.01, p-value < 0.01). The study therefore concludes that most NHL players exhibit a non-negligible degree of inefficiency, that higher efficiency is associated with better team performance, and that efficient players are less likely to have their contracts bought out.
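The core of the efficiency estimation can be sketched with an input-oriented CCR DEA linear program. The bootstrap correction used in the study (in the spirit of Simar and Wilson) is omitted here, and the two-player data are invented.

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_input(X, Y, o):
    """Input-oriented CCR efficiency of unit o.
    X: (n_units, n_inputs), Y: (n_units, n_outputs). Returns theta in (0, 1]."""
    n, m = X.shape
    s = Y.shape[1]
    # decision variables: [theta, lambda_1 .. lambda_n], minimize theta
    c = np.r_[1.0, np.zeros(n)]
    # inputs:  sum_j lambda_j * x_ij - theta * x_io <= 0
    A_in = np.c_[-X[o].reshape(m, 1), X.T]
    # outputs: -sum_j lambda_j * y_rj <= -y_ro  (i.e. outputs at least y_o)
    A_out = np.c_[np.zeros((s, 1)), -Y.T]
    A_ub = np.r_[A_in, A_out]
    b_ub = np.r_[np.zeros(m), -Y[o]]
    bounds = [(0, None)] * (n + 1)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    return res.x[0]

# Toy league: inputs = (salary, minutes played), output = points produced.
# Player B produces the same output as A with twice the inputs.
X = np.array([[1.0, 1.0],
              [2.0, 2.0]])
Y = np.array([[1.0],
              [1.0]])
theta_A = dea_ccr_input(X, Y, 0)   # efficient: 1.0
theta_B = dea_ccr_input(X, Y, 1)   # could do the same with half the inputs: 0.5
```

In the bootstrap variant, the efficiency scores are re-estimated on resampled (smoothed) pseudo-samples to correct their finite-sample bias before feeding them into the logistic regression.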
Abstract:
The structural build-up of fresh cement-based materials has a great impact on their structural performance after casting. Accordingly, the mixture design should be tailored to adapt the kinetics of build-up to the application at hand. The rate of structural build-up of cement-based suspensions at rest is a complex phenomenon affected by both physical and chemical structuration processes. The structuration kinetics are strongly dependent on the mixture composition, the testing parameters, and the shear history. Accurate measurement of build-up relies on the efficiency of the applied pre-shear regime in achieving an initial well-dispersed state, as well as on the stress applied during the liquid-solid transition. Studying the physical and chemical mechanisms of build-up of cement suspensions at rest can enhance the fundamental understanding of this phenomenon and can, therefore, allow better control of the rheological and time-dependent properties of cement-based materials. This research focused on the use of dynamic rheology to investigate the kinetics of structural build-up of fresh cement pastes. The research program was conducted in three phases. The first phase was devoted to evaluating the dispersing efficiency of various disruptive shear techniques. The investigated shearing profiles included rotational, oscillatory, and combined profiles. The initial and final states of the suspension's structure, before and after disruption, were determined by applying a small-amplitude oscillatory shear (SAOS). The difference between the viscoelastic values before and after disruption was used to express the degree of dispersion. An efficient technique to disperse concentrated cement suspensions was developed. The second phase aimed to establish a rheometric approach to dissociate and monitor the individual physical and chemical mechanisms of build-up of cement paste.
In this regard, non-destructive dynamic rheometry was used to investigate the evolution of both the storage modulus and the phase angle of inert calcium carbonate and cement suspensions. Two independent build-up indices were proposed. The structural build-up of various cement suspensions made with different cement contents, silica fume replacement percentages, and high-range water-reducer dosages was evaluated using the proposed indices. These indices were then compared to the well-known thixotropic index (Athix). Furthermore, the proposed indices were correlated to the decay in lateral pressure determined for various cement pastes cast in a pressure column. The proposed pre-shearing protocol and build-up indices (phases 1 and 2) were then used to investigate the effect of mixture parameters on the kinetics of structural build-up in phase 3. The investigated mixture parameters included cement content and fineness, alkali sulfate content, and temperature of the cement suspension. Zeta-potential, calorimetric, and spectrometric measurements were performed to explore the corresponding microstructural changes in cement suspensions, such as inter-particle cohesion, rate of Brownian flocculation, and nucleation rate. A model linking the build-up indices and the microstructural characteristics was developed to predict the build-up behaviour of cement-based suspensions. The obtained results showed that oscillatory shear may have a greater effect on dispersing concentrated cement suspensions than rotational shear. Furthermore, the increase in induced shear strain was found to enhance the breakdown of the suspension's structure up to a critical point, after which thickening effects dominate. An effective dispersing method is then proposed.
This consists of applying a rotational shear around the transitional value between the linear and non-linear variations of the apparent viscosity with shear rate, followed by an oscillatory shear at the crossover shear strain and a high angular frequency of 100 rad/s. Investigating the evolution of the viscoelastic properties of inert calcite-based and cement suspensions allowed establishing two independent build-up indices. The first (the percolation time) represents the rest time needed to form the elastic network. The second (the rigidification rate) describes the increase in stress-bearing capacity of the formed network due to cement hydration. In addition, results showed that combining the percolation time and the rigidification rate can provide deeper insight into the structuration process of cement suspensions. Furthermore, these indices were found to be well correlated with the decay in the lateral pressure of cement suspensions. The variation of the proposed build-up indices with mixture parameters showed that the percolation time is most likely controlled by the frequency of Brownian collisions, the distance between dispersed particles, and the intensity of cohesion between cement particles. On the other hand, a higher rigidification rate can be secured by increasing the number of contact points per unit volume of paste, the nucleation rate of cement hydrates, and the intensity of inter-particle cohesion.
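A minimal sketch of extracting two such indices from a storage-modulus time series follows. The synthetic G'(t) curve and the simple threshold definition of the percolation time are illustrative assumptions, not the thesis's exact rheometric criteria.

```python
import numpy as np

def buildup_indices(t, G, threshold=5.0):
    """From a storage-modulus record G'(t) at rest, estimate:
    - a percolation time: first time G' exceeds `threshold` (network formed);
    - a rigidification rate: slope of G' vs. time after that point (Pa/min)."""
    above = np.flatnonzero(G > threshold)
    t_perc = t[above[0]]
    rate = np.polyfit(t[above], G[above], 1)[0]
    return t_perc, rate

# Synthetic rest test: the elastic network forms at 10 min,
# then the paste stiffens linearly at 80 Pa/min.
t = np.linspace(0.0, 60.0, 601)                      # minutes
G = np.where(t < 10.0, 1.0, 1.0 + 80.0 * (t - 10.0))  # Pa
t_perc, rate = buildup_indices(t, G)
```

Reporting the pair (percolation time, rigidification rate) separates the "when does the network form" question from the "how fast does it stiffen" question, which is the point of using two independent indices.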
Abstract:
The MGH44 geometric geoid model is the result of a direct comparison between GPS and conventional leveling measurements at points of a geodetic network located in the 50 km² urban area of the city of Heredia, Costa Rica. The MGH44 grid gives the geoid undulation at any point in that area, a value that can be used to estimate the height above mean sea level from GPS ellipsoidal height measurements. This paper describes the procedures and computations carried out to assess the vertical quality of the MGH44 model by applying the National Standard for Spatial Data Accuracy (NSSDA). By generating a new grid from only 36 data points, called MGH36, new geoid undulation values were obtained for the remaining 20 points chosen as control points. In processing the data, different algorithms were applied to check whether the 20 control points follow a normal distribution and to verify that the set contained no gross errors. The mean geoid undulation of the control points is 14.287 m, and the computation according to the NSSDA standard yielded a vertical accuracy of ±0.045 m. Subsequently, using the bootstrap technique, the values 14.233 m and 14.353 m were computed, at the 95% confidence level, as the limits of the confidence interval of the mean.
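The NSSDA computation and the bootstrap confidence interval can be sketched as follows; the check-point errors here are simulated, not the 20 real control points.

```python
import numpy as np

def nssda_vertical_accuracy(errors):
    """NSSDA vertical accuracy at the 95% confidence level: 1.9600 * RMSE_z,
    where errors are model-minus-leveling height differences at check points."""
    rmse = np.sqrt(np.mean(np.square(errors)))
    return 1.9600 * rmse

def bootstrap_mean_ci(values, n_boot=10000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for the mean."""
    rng = np.random.default_rng(seed)
    means = np.array([
        rng.choice(values, size=len(values), replace=True).mean()
        for _ in range(n_boot)
    ])
    return np.quantile(means, [alpha / 2, 1 - alpha / 2])

# Simulated vertical discrepancies (m) at 20 control points
rng = np.random.default_rng(2)
errors = rng.normal(0.0, 0.023, size=20)
acc = nssda_vertical_accuracy(errors)   # reported as +/- acc metres
lo, hi = bootstrap_mean_ci(errors)      # 95% CI for the mean discrepancy
```

The 1.9600 factor is the NSSDA's normal-theory multiplier for vertical data; the percentile bootstrap gives the interval around the mean without assuming normality.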
Abstract:
This paper presents the development and evaluation of PICTOAPRENDE, an interactive software application designed to improve oral communication and to contribute to the development of children and youth diagnosed with autism spectrum disorder (ASD) in Ecuador. To this end, it first analyzes the intervention area, describing the general characteristics of people with ASD and their status in Ecuador. The statistical techniques used for this evaluation constitute the basis of the study. A section presenting the research-based cognitive and social parameters of the intervention area is also included. Finally, the algorithms used to obtain the measurements and the experimental results, along with their analysis, are presented.
Abstract:
Single-walled carbon nanotubes (SWNTs) have been studied as a prominent class of high-performance electronic materials for next-generation electronics. Their geometry-dependent electronic structure, ballistic transport and low power dissipation due to quasi-one-dimensional transport, and their capability of carrying high current densities are some of the main reasons for the optimistic expectations placed on SWNTs. However, device applications of individual SWNTs have been hindered by uncontrolled variations in characteristics and the lack of scalable methods to integrate SWNTs into electronic devices. One relatively new direction in SWNT electronics, which avoids these issues, is using arrays of SWNTs, where the ensemble average may provide uniformity from device to device, and this new breed of electronic material can be integrated into electronic devices in a scalable fashion. This dissertation describes (1) methods for characterization of SWNT arrays, (2) how the electrical transport in these two-dimensional arrays depends on length scales and spatial anisotropy, (3) the interaction of aligned SWNTs with the underlying substrate, and (4) methods for scalable integration of SWNT arrays into electronic devices. The electrical characterization of SWNT arrays has been realized by polymer electrolyte-gated SWNT thin film transistors (TFTs). Polymer electrolyte-gating addresses many technical difficulties inherent to electrical characterization by gating through oxide dielectrics. Having shown that polymer electrolyte-gating can be successfully applied to SWNT arrays, we have studied the length-scaling dependence of electrical transport in SWNT arrays. Ultrathin films formed by sub-monolayer surface coverage of SWNT arrays are very interesting systems in terms of the physics of two-dimensional electronic transport. We have observed that they behave qualitatively differently from classical conducting films, which obey Ohm's law.
The resistance of an ultrathin film of SWNT arrays is indeed non-linear in the length of the film across which transport occurs. More interestingly, a transition between conducting and insulating states is observed at a critical surface coverage, called the percolation limit. The surface coverage of conducting SWNTs can be manipulated by turning the semiconductors in the SWNT array on and off, which is the operating principle of SWNT TFTs. The percolation limit also depends on the length and spatial orientation of the SWNTs. We have also observed that the percolation limit increases abruptly for aligned arrays of SWNTs grown on single-crystal quartz substrates. In this dissertation, we also compare our experimental results with a two-dimensional stick network model, which gives a good qualitative picture of the electrical transport in SWNT arrays in terms of surface coverage, length scaling, and spatial orientation, and briefly discuss the validity of this model. However, the electronic properties of SWNT arrays are not determined by geometrical arguments alone. The contact resistances at the nanotube-nanotube and nanotube-electrode (bulk metal) interfaces, and interactions with local chemical groups and the underlying substrates, are among the other issues relevant to electronic transport in SWNT arrays. Different aspects of these factors have been studied in detail by many groups. In fact, I have also included a brief discussion of electron injection into semiconducting SWNTs by polymer dopants. On the other hand, we have compared the substrate-SWNT interactions for isotropic (in two dimensions) arrays of SWNTs grown on Si/SiO2 substrates and horizontally (on-substrate) aligned arrays of SWNTs grown on single-crystal quartz substrates.
The anisotropic interaction between quartz and SWNTs, associated with the quartz lattice, which allows near-perfect horizontal alignment on the substrate along a particular crystallographic direction, is examined by Raman spectroscopy and shown to lead to uniaxial compressive strain in as-grown SWNTs on single-crystal quartz. This is the first experimental demonstration of the hard-to-achieve uniaxial compression of SWNTs. The temperature dependence of Raman G-band spectra along the length of individual nanotubes reveals that the compressive strain is non-uniform and can locally exceed 1% at room temperature. Effects of device fabrication steps on the non-uniform strain are also examined, and implications for electrical performance are discussed. Based on our findings, discussions of device performance and design are included in this dissertation. The channel-length dependences of device mobilities and on/off ratios are included for SWNT TFTs. The time response of polymer electrolyte-gated SWNT TFTs has been measured to be ~300 Hz, and a proof-of-concept logic inverter has been fabricated using polymer electrolyte-gated SWNT TFTs for macroelectronic applications. Finally, I dedicate a chapter to scalable device designs based on aligned arrays of SWNTs, including a design for SWNT memory devices.
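The two-dimensional stick network model mentioned above can be sketched with a small Monte Carlo: random widthless sticks in a unit square, union-find connectivity, and a left-to-right spanning test. The densities and stick lengths below are illustrative only, not the dissertation's calibrated parameters.

```python
import numpy as np

def _segments_intersect(p1, p2, p3, p4):
    """True if segments p1-p2 and p3-p4 cross (general position)."""
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    d1, d2 = cross(p3, p4, p1), cross(p3, p4, p2)
    d3, d4 = cross(p1, p2, p3), cross(p1, p2, p4)
    return (d1 * d2 < 0) and (d3 * d4 < 0)

def percolates(n_sticks, length, seed=0):
    """Monte Carlo: do n_sticks random sticks span the unit square left-to-right?"""
    rng = np.random.default_rng(seed)
    centers = rng.random((n_sticks, 2))
    angles = rng.random(n_sticks) * np.pi          # isotropic orientations
    d = 0.5 * length * np.c_[np.cos(angles), np.sin(angles)]
    a, b = centers - d, centers + d                # stick endpoints
    # union-find over sticks plus two virtual electrodes (left = n, right = n+1)
    parent = list(range(n_sticks + 2))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]          # path halving
            i = parent[i]
        return i
    def union(i, j):
        parent[find(i)] = find(j)
    for i in range(n_sticks):
        if min(a[i, 0], b[i, 0]) <= 0.0:
            union(i, n_sticks)                     # touches the left edge
        if max(a[i, 0], b[i, 0]) >= 1.0:
            union(i, n_sticks + 1)                 # touches the right edge
        for j in range(i + 1, n_sticks):
            if _segments_intersect(a[i], b[i], a[j], b[j]):
                union(i, j)
    return find(n_sticks) == find(n_sticks + 1)
```

Sweeping `n_sticks` at fixed `length` reproduces the qualitative conducting/insulating transition at a critical surface coverage; restricting `angles` to a narrow range shows why the percolation limit rises abruptly for aligned arrays.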
Abstract:
Faced with the observed increase in regeneration failures in the boreal forest and their impact on the productivity and resilience of dense black spruce stands, a better understanding of resilience mechanisms and monitoring of regeneration-failure risk are needed. The main objective of this study was therefore to develop predictive, spatially explicit models of black spruce regeneration. More specifically, two models were developed: (1) a theoretical model, built from in situ data and spatial data, and (2) a cartographic model, using only readily available spatial data such as provincial forest inventories and the differenced Normalized Burn Ratio (dNBR) spectral fire-severity index. The results show that the close succession (< 55 years) of a harvest and a fire does not automatically lead to the opening of black spruce stands. Stands affected by salvage logging of burned timber (1963), which were immature at the time of the 2005 fire, are characterized by poor regeneration. In contrast, regeneration following the 2005 fire in stands harvested between 1948 and 1967 is similar to that observed in stands undisturbed during the 60 years preceding the fire. The theoretical model, selected using the Akaike information criterion, identified three decisive variables in the success or failure of black spruce regeneration: (1) potential vegetation, (2) the percentage of ground cover by sphagnum mosses, and (3) fire severity assessed with the dNBR.
Bootstrap and cross-validation showed that a model using these three variables explains 59% of the variability in regeneration observed in the study area. The cartographic model, which uses only the potential-vegetation and dNBR variables, explains 32% of the variability. Finally, this model was used to create a regeneration-failure risk map. Given the model's accuracy, this map offers interesting potential for targeting the areas most at risk and thus supporting reforestation decisions in burned areas.
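The bootstrap validation of a model's explained variance can be sketched with an Efron-style optimism correction; the predictors and data below are synthetic stand-ins for the study's ecological variables.

```python
import numpy as np

def r2(y, yhat):
    ss_res = np.sum((y - yhat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

def bootstrap_validated_r2(X, y, n_boot=500, seed=0):
    """Efron-style correction: apparent R^2 minus the mean bootstrap optimism,
    where optimism = (R^2 on the bootstrap training set) - (R^2 of the same
    fitted model evaluated on the original data)."""
    rng = np.random.default_rng(seed)

    def fit(Xs, ys):
        A = np.c_[np.ones(len(Xs)), Xs]
        beta, *_ = np.linalg.lstsq(A, ys, rcond=None)
        return beta

    def predict(beta, Xs):
        return np.c_[np.ones(len(Xs)), Xs] @ beta

    beta_full = fit(X, y)
    apparent = r2(y, predict(beta_full, X))
    optimism = []
    n = len(y)
    for _ in range(n_boot):
        i = rng.integers(0, n, n)
        b = fit(X[i], y[i])
        optimism.append(r2(y[i], predict(b, X[i])) - r2(y, predict(b, X)))
    return apparent, apparent - float(np.mean(optimism))

# Invented plot-level data: three site variables, moderate noise
rng = np.random.default_rng(3)
X = rng.standard_normal((80, 3))
y = X @ np.array([1.0, 0.5, -0.7]) + rng.standard_normal(80)
apparent, validated = bootstrap_validated_r2(X, y)
```

The validated figure is the honest one to report for a risk map, since the apparent R² is inflated by having fit and evaluated on the same observations.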
Abstract:
The second-generation biofuel industry uses, among other feedstocks, lignocellulosic biomass from forestry and agricultural residues and from energy crops. Sweet sorghum [Sorghum bicolor (L.) Moench] is one of these energy crops. The growing interest of the food and biofuel industries in this plant is due to its high sugar content (up to 60% by dry mass). Besides developing rapidly (in 5-6 months), sweet sorghum has the advantage of being able to grow on nutrient-poor soils and under low water inputs, making it an attractive feedstock for industry, notably for bioethanol production. The biorefinery concept, combining biofuel production with that of bioenergy or bioproducts, is increasingly studied as a way to add value to biofuel production. In the context of a biorefinery exploiting lignocellulosic biomass, attention must be paid to the various extractable metabolites in addition to the macromolecules used to make biofuels and biocommodities, since the former can have high added value and interest the pharmaceutical or cosmetic industries, for example. The classical techniques for extracting these metabolites, notably Soxhlet extraction and maceration or percolation, are long and energy-intensive. This project therefore focuses on a shorter, less costly method for extracting the primary and secondary metabolites of sweet sorghum, making the industrial exploitation of this energy crop more economically viable. This work within the CRIEC-B focused specifically on the use of an ultrasonic water/dimethyl carbonate emulsion, which reduces operating times (to less than one hour instead of several hours) and the quantities of solvents involved in the extraction process.
This extractive emulsion can thus solubilize both hydrophilic and hydrophobic metabolites. Moreover, the environmental impact is limited by the use of environmentally friendly solvents (80% water and 20% dimethyl carbonate). Two extraction systems were studied. One consists of continuously recirculating the emulsion through the biomass bed; the second brings the biomass and solvents into contact with the ultrasonic probe, creating the emulsion and promoting sonolysis of the biomass. Thus, in a batch reactor with recirculation of the water/DMC emulsion at 370 mL·min⁻¹ through the biomass bed, extraction reaches 37.91% in 5 minutes, which is higher than the ASTM D1105-96 method (34.01% in 11 h). Moreover, in a batch-piston reactor, where the biomass is in direct contact with the ultrasound and the water/DMC emulsion, the best yields are 35.39% in 17.5 minutes, at 15 psig of pressure and 70% ultrasound amplitude. Tests on coarse sorghum particles gave similar results: 30.23% extract in the batch reactor with emulsion recirculation (5 min, 370 mL·min⁻¹) and 34.66% with the batch-piston reactor (30 psig, 30 minutes, 95% amplitude).
Abstract:
We determine numerically the single-particle and the two-particle spectrum of the three-state quantum Potts model on a lattice by using the density matrix renormalization group method, and extract information on the asymptotic (small momentum) S-matrix of the quasiparticles. The low energy part of the finite size spectrum can be understood in terms of a simple effective model introduced in a previous work, and is consistent with an asymptotic S-matrix of an exchange form below a momentum scale p*. This scale appears to vanish faster than the Compton scale, mc, as one approaches the critical point, suggesting that a dangerously irrelevant operator may be responsible for the behaviour observed on the lattice.
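As a concrete entry point to the model above, the low-energy spectrum can be obtained on very short chains by exact diagonalization (rather than DMRG). The standard 3-state quantum Potts Hamiltonian H = -J Σᵢ (Zᵢ†Zᵢ₊₁ + h.c.) - g Σᵢ (Xᵢ + Xᵢ†) is assumed here, with an illustrative coupling at the self-dual point g = J; this is a toy check, not the paper's computation.

```python
import numpy as np

def potts3_spectrum(n_sites, J=1.0, g=1.0, n_levels=5):
    """Lowest eigenvalues of the 3-state quantum Potts chain (periodic
    boundary conditions) by dense exact diagonalization."""
    w = np.exp(2j * np.pi / 3)
    Z = np.diag([1, w, w ** 2])                 # clock operator
    X = np.roll(np.eye(3), 1, axis=0)           # cyclic shift operator
    I = np.eye(3)

    def site_op(op, i):
        out = op if i == 0 else I
        for k in range(1, n_sites):
            out = np.kron(out, op if k == i else I)
        return out

    dim = 3 ** n_sites
    H = np.zeros((dim, dim), dtype=complex)
    for i in range(n_sites):
        j = (i + 1) % n_sites
        ZZ = site_op(Z, i).conj().T @ site_op(Z, j)
        H -= J * (ZZ + ZZ.conj().T)             # ferromagnetic coupling term
        Xi = site_op(X, i)
        H -= g * (Xi + Xi.conj().T)             # transverse-field term
    assert np.allclose(H, H.conj().T)           # Hamiltonian must be Hermitian
    return np.linalg.eigvalsh(H)[:n_levels]

levels = potts3_spectrum(5)                     # 3^5 = 243 basis states
gap = levels[1] - levels[0]                     # finite-size gap
```

On such short chains the finite-size gap is large; extracting quasiparticle dispersions and S-matrix information, as in the paper, requires the much longer chains that DMRG makes accessible.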
Abstract:
Despite record-setting performance demonstrated by superconducting Transition Edge Sensors (TESs) and growing utilization of the technology, a theoretical model of the physics governing the superconducting phase transition of TES devices has proven elusive. Earlier attempts to describe TESs assumed them to be uniform superconductors. Sadleir et al. (2010) showed that TESs are weak links and that the superconducting order-parameter strength has significant spatial variation. Measurements are presented of the temperature T and magnetic field B dependence of the critical current Ic, measured over 7 orders of magnitude on square Mo/Au bilayers ranging in length from 8 to 290 microns. We find that our measurements have a natural explanation in terms of a spatially varying order parameter that is enhanced in proximity to the higher-transition-temperature superconducting leads (the longitudinal proximity effect) and suppressed in proximity to the added normal-metal structures (the lateral inverse proximity effect). These in-plane proximity effects and scaling relations are observed over unprecedentedly long lengths (in excess of 1000 times the mean free path) and explained in terms of a Ginzburg-Landau model. Our low-temperature Ic(B) measurements are found to agree with a general derivation for a superconducting strip with an edge or geometric barrier to vortex entry, and we also derive two conditions that lead to Ic rectification. At high temperatures the Ic(B) exhibits distinct Josephson-effect behavior over long length scales, following functional dependences not previously reported. We also investigate how film stress changes the transition, explain some transition features in terms of a nonequilibrium superconductivity effect, and show that our measurements of the resistive transition are not consistent with a percolating resistor network model.
Abstract:
This thesis describes the modification of the commercial TFC-S nanofiltration membrane with shape-persistent dendritic architectures. Amphiphilic aromatic polyamide dendrimers (G1-G3) are synthesized via a divergent approach and used for membrane modification by direct percolation. The permeate samples collected from the percolation experiments are analyzed by UV-Vis spectroscopy to instantly monitor the influence of dendrimer generation on percolation behavior and new active-layer formation. The membrane structures are further characterized by Rutherford backscattering spectrometry (RBS) and atomic force microscopy (AFM), suggesting a low-level accumulation of dendrimers inside the TFC-S NF membranes and the subsequent formation of an additional aramid dendrimer active layer. Thus, all the modified TFC-S membranes have a double active-layer structure. A PES-PVA film is used as a control membrane, showing that structural compatibility between the dendrimer and the support plays an important role in the membrane modification process. The performance of the modified TFC-S membranes is evaluated on the basis of their rejection of a variety of water contaminants spanning a range of sizes and chemistries. As the water flux is inversely proportional to the thickness of the active layer, we optimize the amount of dendrimer deposited for specific contaminants to improve solute rejection while maintaining high water flux.
Abstract:
Many exchange rate papers articulate the view that instabilities constitute a major impediment to exchange rate predictability. In this thesis we implement Bayesian and other techniques to account for such instabilities, and examine some of the main obstacles to exchange rate models' predictive ability. We first consider in Chapter 2 a time-varying parameter model in which fluctuations in exchange rates are related to short-term nominal interest rates ensuing from monetary policy rules, such as Taylor rules. Unlike the existing exchange rate studies, the parameters of our Taylor rules are allowed to change over time, in light of the widespread evidence of shifts in fundamentals - for example in the aftermath of the Global Financial Crisis. Focusing on quarterly data frequency from the crisis, we detect forecast improvements upon a random walk (RW) benchmark for at least half, and for as many as seven out of 10, of the currencies considered. Results are stronger when we allow the time-varying parameters of the Taylor rules to differ between countries. In Chapter 3 we look closely at the role of time-variation in parameters and other sources of uncertainty in hindering exchange rate models' predictive power. We apply a Bayesian setup that incorporates the notion that the relevant set of exchange rate determinants and their corresponding coefficients, change over time. Using statistical and economic measures of performance, we first find that predictive models which allow for sudden, rather than smooth, changes in the coefficients yield significant forecast improvements and economic gains at horizons beyond 1-month. At shorter horizons, however, our methods fail to forecast better than the RW. And we identify uncertainty in coefficients' estimation and uncertainty about the precise degree of coefficients variability to incorporate in the models, as the main factors obstructing predictive ability. 
Chapter 4 focuses on the time-varying predictive ability of economic fundamentals for exchange rates. It uses bootstrap-based methods to uncover the time-specific conditioning information for predicting fluctuations in exchange rates. Employing several metrics for statistical and economic evaluation of forecasting performance, we find that our approach, based on pre-selecting and validating fundamentals across bootstrap replications, generates more accurate forecasts than the RW. The approach, known as bumping, robustly reveals parsimonious models with out-of-sample predictive power at the 1-month horizon, and outperforms alternative methods, including Bayesian methods, bagging, and standard forecast combinations. Chapter 5 exploits the predictive content of daily commodity prices for monthly commodity-currency exchange rates. It builds on the idea that the effect of daily commodity price fluctuations on commodity currencies is short-lived, and therefore harder to pin down at low frequencies. Using MIxed DAta Sampling (MIDAS) models, and Bayesian estimation methods to account for time-variation in predictive ability, the chapter demonstrates the usefulness of suitably exploiting such short-lived effects to improve exchange rate forecasts. It further shows that the usual low-frequency predictors, such as money supplies and interest rate differentials, typically receive little support from the data at the monthly frequency, whereas MIDAS models featuring daily commodity prices receive strong support. The chapter also introduces the random walk Metropolis-Hastings technique as a new tool to estimate MIDAS regressions.
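The bumping idea, fitting candidate models on bootstrap replications and keeping the fit that scores best on the original sample, can be sketched as follows. Scoring subsets with BIC on the original data is one common variant, and the predictors here are synthetic, not the thesis's fundamentals.

```python
from itertools import combinations

import numpy as np

def bumping_select(X, y, n_boot=50, seed=0):
    """Bumping (after Tibshirani & Knight): fit every candidate predictor
    subset on each bootstrap sample, score each fitted model on the ORIGINAL
    data (BIC here), and return the best-scoring subset."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    subsets = [s for r in range(1, p + 1) for s in combinations(range(p), r)]

    def fit(cols, idx):
        A = np.c_[np.ones(len(idx)), X[np.ix_(idx, list(cols))]]
        beta, *_ = np.linalg.lstsq(A, y[idx], rcond=None)
        return beta

    def bic_on_original(cols, beta):
        A = np.c_[np.ones(n), X[:, list(cols)]]
        sse = float(np.sum((y - A @ beta) ** 2))
        return n * np.log(sse / n) + (len(cols) + 1) * np.log(n)

    best_score, best_cols = np.inf, None
    # include the original sample itself among the candidates
    samples = [np.arange(n)] + [rng.integers(0, n, n) for _ in range(n_boot)]
    for idx in samples:
        for cols in subsets:
            score = bic_on_original(cols, fit(cols, idx))
            if score < best_score:
                best_score, best_cols = score, cols
    return best_cols

# Invented monthly data: only the first fundamental actually predicts the return
rng = np.random.default_rng(4)
X = rng.standard_normal((120, 3))
y = 0.8 * X[:, 0] + 0.5 * rng.standard_normal(120)
selected = bumping_select(X, y)
```

Because every candidate is judged on the original sample, bumping tends to settle on parsimonious models that survive resampling noise, which is the property exploited for out-of-sample forecasting.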
Abstract:
The Internet has grown in size at rapid rates since BGP records began, and continues to do so. This has raised concerns about the scalability of the current BGP routing system, as the routing state at each router in a shortest-path routing protocol grows supra-linearly as the network grows. The concerns are that the memory capacity of routers will not be able to keep up with demand, and that the growth of the Internet will become ever more cramped as more and more of the world seeks the benefits of being connected. Compact routing schemes, in which the routing state grows only sub-linearly relative to the growth of the network, could solve this problem and ensure that router memory is not a bottleneck to Internet growth. These schemes trade away shortest-path routing for scalable memory state by allowing some paths a certain amount of bounded "stretch". The most promising such scheme is Cowen Routing, which can provide scalable, compact routing state for Internet routing while still providing shortest-path routing to nearly all other nodes, with only slightly stretched paths to a very small subset of the network. Currently, there is no fully distributed form of Cowen Routing that would be practical for the Internet. This dissertation describes a fully distributed and compact protocol for Cowen Routing, using the k-core graph decomposition. Previous compact routing work showed that the k-core graph decomposition is useful for Cowen Routing on the Internet, but no distributed form existed. This dissertation gives a distributed k-core algorithm optimised to be efficient on dynamic graphs, along with proofs of its correctness. The performance and efficiency of this distributed k-core algorithm are evaluated on large Internet AS graphs, with excellent results. This dissertation then goes on to describe a fully distributed and compact Cowen Routing protocol.
The protocol comprises: a landmark selection process for Cowen Routing using the k-core algorithm, with mechanisms to ensure compact state at all times, including at bootstrap; a local cluster routing process, with mechanisms for policy application and control of cluster sizes, again ensuring that state remains compact at all times; and a landmark routing process with a prioritisation mechanism for announcements that ensures compact state at all times.
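The k-core decomposition underlying the landmark selection can be sketched with the standard sequential peeling algorithm; the dissertation's distributed, dynamic-graph version is not reproduced here.

```python
from collections import defaultdict

def k_core_decomposition(edges):
    """Sequential peeling: repeatedly remove a minimum-degree node. A node's
    core number is the largest k such that it lies in a subgraph in which
    every node has degree >= k. High-core nodes are landmark candidates."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    degree = {u: len(adj[u]) for u in adj}
    core = {}
    k = 0
    remaining = set(adj)
    while remaining:
        u = min(remaining, key=lambda x: degree[x])  # current minimum degree
        k = max(k, degree[u])                        # core number never decreases
        core[u] = k
        remaining.remove(u)
        for w in adj[u]:
            if w in remaining:
                degree[w] -= 1                       # peel u away
    return core

# Toy AS-like graph: a triangle (2-core) with a pendant node (1-core)
edges = [("a", "b"), ("b", "c"), ("c", "a"), ("a", "d")]
core = k_core_decomposition(edges)
```

On Internet AS graphs the highest cores pick out the densely interconnected "center" of the topology, which is why the k-core decomposition is a natural landmark-selection criterion for Cowen Routing.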