914 results for distribution (probability theory)


Relevance:

30.00%

Publisher:

Abstract:

This thesis develops bootstrap methods for factor models, which have been widely used to generate forecasts since Stock and Watson's (2002) pioneering article on diffusion indices. These models accommodate a large number of macroeconomic and financial variables as predictors, a useful feature for incorporating the diverse information available to economic agents. The thesis therefore proposes econometric tools that improve inference in factor models using latent factors extracted from a large panel of observed predictors. It is divided into three complementary chapters, the first two of which are joint work with Sílvia Gonçalves and Benoit Perron.

In the first chapter, we study how bootstrap methods can be used for inference in models that forecast h periods into the future. To this end, we examine bootstrap inference in a factor-augmented regression setting where the errors may be autocorrelated. The chapter generalizes the results of Gonçalves and Perron (2014) and proposes and justifies two residual-based approaches: the block wild bootstrap and the dependent wild bootstrap. Our simulations show improved coverage rates for confidence intervals of the estimated coefficients under these approaches, compared with asymptotic theory and the wild bootstrap, when the regression errors are serially correlated.

The second chapter proposes bootstrap methods for constructing prediction intervals that relax the assumption of normally distributed innovations. We propose bootstrap prediction intervals for an observation h periods into the future and for its conditional mean. These forecasts are assumed to be based on a set of factors extracted from a large panel of variables. Because we treat the factors as latent, our forecasts depend on both the estimated factors and the estimated regression coefficients. Under regularity conditions, Bai and Ng (2006) proposed asymptotic intervals under the assumption of Gaussian innovations. The bootstrap allows us to relax this assumption and to construct prediction intervals that are valid under more general conditions. Moreover, even under Gaussianity, the bootstrap yields more accurate intervals when the cross-sectional dimension is relatively small, because it accounts for the bias of the ordinary least squares estimator, as shown in a recent study by Gonçalves and Perron (2014).

In the third chapter, we suggest consistent selection procedures for factor-augmented regressions in finite samples. We first show that the usual cross-validation method is inconsistent, but that its generalization, leave-d-out cross-validation, selects the smallest set of estimated factors that spans the space generated by the true factors. The second criterion, whose validity we also establish, generalizes Shao's (1996) bootstrap approximation to factor-augmented regressions. Simulations show an improved probability of parsimoniously selecting the estimated factors compared with available selection methods. The empirical application revisits the relationship between macroeconomic and financial factors and excess returns on the US stock market. Among the factors estimated from a large panel of US macroeconomic and financial data, factors strongly correlated with interest rate spreads and the Fama-French factors have good predictive power for excess returns.
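To make the residual-based bootstrap idea concrete, the following Python sketch applies a block wild bootstrap to a factor-augmented regression with factors estimated by principal components. It is only an illustration of the mechanics, not the authors' procedure: it bootstraps the regression step alone (ignoring factor-estimation uncertainty), and the block length, factor count and simulated data are arbitrary choices.

```python
# Illustrative sketch, not the thesis's code: a block wild bootstrap for a
# factor-augmented regression y_{t+h} = alpha + beta' F_t + e_{t+h}.
import numpy as np

rng = np.random.default_rng(0)

def estimate_factors(X, r):
    """Principal-component factors from a T x N panel X (r factors)."""
    X = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return np.sqrt(X.shape[0]) * U[:, :r]          # T x r factor estimates

def ols(Z, y):
    coef, *_ = np.linalg.lstsq(Z, y, rcond=None)
    return coef

def block_wild_bootstrap_ci(X, y, h=1, r=2, L=4, B=999, level=0.95):
    """Percentile CIs for the factor-regression coefficients via a block wild bootstrap."""
    F = estimate_factors(X, r)
    Z = np.column_stack([np.ones(len(y) - h), F[:-h]])   # regressors dated t
    yh = y[h:]                                            # target dated t + h
    beta_hat = ols(Z, yh)
    resid = yh - Z @ beta_hat
    n = len(resid)
    draws = np.empty((B, Z.shape[1]))
    for b in range(B):
        # one external N(0,1) multiplier per block of L consecutive residuals,
        # which preserves serial dependence within each block
        eta = np.repeat(rng.standard_normal(int(np.ceil(n / L))), L)[:n]
        y_star = Z @ beta_hat + resid * eta
        draws[b] = ols(Z, y_star)
    lo, hi = np.quantile(draws, [(1 - level) / 2, 1 - (1 - level) / 2], axis=0)
    return beta_hat, lo, hi

# toy usage with simulated factor data
T, N = 200, 50
F_true = rng.standard_normal((T, 2))
X = F_true @ rng.standard_normal((2, N)) + rng.standard_normal((T, N))
y = 0.5 * F_true[:, 0] - 0.3 * F_true[:, 1] + rng.standard_normal(T)
beta_hat, lo, hi = block_wild_bootstrap_ci(X, y)
print(beta_hat, lo, hi)
```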

Relevance:

30.00%

Publisher:

Abstract:

The identification of archival records depends on description, a process that is ultimately condensed in designation. The titles given to different groups of documents have a strong bearing on archival functions. Taking this premise as a starting point, this paper presents theoretical contributions made in Spain concerning the designation of documentary types, series and units. The state of the issue presented here seeks to delimit the concept of “documentary typology”, which is essential for Archival Science.

Relevance:

30.00%

Publisher:

Abstract:

The blast furnace is the main ironmaking production unit in the world: it converts iron ore, together with coke and hot blast, into liquid iron (hot metal), which is used for steelmaking. The furnace acts as a counter-current reactor charged with layers of raw material of very different gas permeability. The arrangement of these layers, or burden distribution, is the most important factor influencing the gas flow conditions inside the furnace, which dictate the efficiency of the heat transfer and reduction processes. For proper control, the furnace operators should know the overall conditions in the furnace and be able to predict how control actions affect its state. However, because of high temperatures and pressure, a hostile atmosphere and mechanical wear, it is very difficult to measure internal variables. Instead, the operators have to rely extensively on measurements obtained at the boundaries of the furnace and make their decisions on the basis of heuristic rules and results from mathematical models. It is particularly difficult to understand the distribution of the burden materials because of the complex behavior of the particulate materials during charging.

The aim of this doctoral thesis is to clarify some aspects of burden distribution and to develop tools that can aid the decision-making process in the control of the burden and gas distribution in the blast furnace. A relatively simple mathematical model was created for simulating the distribution of the burden material with a bell-less top charging system. The model developed is fast and can therefore be used by the operators to gain understanding of the formation of layers for different charging programs. The results were verified by findings from charging experiments using a small-scale charging rig in the laboratory. A basic gas flow model was developed which utilized the results of the burden distribution model to estimate the gas permeability of the upper part of the blast furnace. This combined formulation of gas and burden distribution made it possible to implement a search for the best combination of charging parameters to achieve a target gas temperature distribution. As this mathematical task is discontinuous and non-differentiable, a genetic algorithm was applied to solve the optimization problem. It was demonstrated that the method was able to evolve optimal charging programs that fulfilled the target conditions.

Even though the burden distribution model provides information about the layer structure, it neglects some effects that influence the results, such as mixed layer formation and coke collapse. A more accurate numerical method for studying particle mechanics, the Discrete Element Method (DEM), was therefore used to study some aspects of the charging process more closely. Model charging programs were simulated using DEM and compared with the results from small-scale experiments. The mixed layer was defined and its voidage estimated; it was found to have about 12% less voidage than layers of the individual burden components. Finally, a model for predicting the extent of coke collapse when heavier pellets are charged over a layer of lighter coke particles was formulated based on slope stability theory, and was used to update the coke layer distribution after charging in the mathematical model. In designing this revision, results from DEM simulations and charging experiments for some charging programs were used. The findings from the coke collapse analysis can be used to design charging programs with more stable coke layers.
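As an illustration of the optimization step described above, the sketch below runs a small genetic algorithm over a hypothetical charging program (an ore-to-coke ratio per radial ring) to match a target radial gas-temperature profile. The surrogate profile_model, the gene encoding and all parameters are assumptions made for this example; they stand in for the thesis's burden-distribution and gas-flow models.

```python
# Illustrative sketch only: a small genetic algorithm searching for a charging
# program that matches a target radial gas-temperature profile.
import numpy as np

rng = np.random.default_rng(1)
N_RINGS = 8                                     # radial positions at the stock line
TARGET = np.linspace(900.0, 150.0, N_RINGS)     # hypothetical target temperatures (°C)

def profile_model(ore_coke_ratio):
    """Made-up surrogate: more ore (lower permeability) -> cooler local gas flow."""
    return 1000.0 - 550.0 * ore_coke_ratio + 30.0 * np.gradient(ore_coke_ratio)

def fitness(individual):
    return -np.sum((profile_model(individual) - TARGET) ** 2)   # negative squared error

def genetic_search(pop_size=40, generations=200, p_mut=0.2):
    pop = rng.uniform(0.2, 1.8, size=(pop_size, N_RINGS))       # ore/coke ratio per ring
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        # tournament selection of parents
        idx = rng.integers(0, pop_size, size=(pop_size, 2))
        parents = pop[np.where(scores[idx[:, 0]] > scores[idx[:, 1]], idx[:, 0], idx[:, 1])]
        # one-point crossover between consecutive parent pairs
        children = parents.copy()
        cuts = rng.integers(1, N_RINGS, size=pop_size)
        for i in range(0, pop_size - 1, 2):
            c = cuts[i]
            children[i, c:] = parents[i + 1, c:]
            children[i + 1, c:] = parents[i, c:]
        # Gaussian mutation, clipped to the allowed ratio range
        mask = rng.random(children.shape) < p_mut
        children[mask] += rng.normal(0.0, 0.1, size=mask.sum())
        pop = np.clip(children, 0.2, 1.8)
    best = pop[np.argmax([fitness(ind) for ind in pop])]
    return best, profile_model(best)

best_program, achieved = genetic_search()
print(np.round(best_program, 2))
print(np.round(achieved, 0), "vs target", TARGET)
```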

Relevance:

30.00%

Publisher:

Abstract:

This dissertation develops an explanation of damage and reliability of critical components and structures within the framework of the second law of thermodynamics. The approach relies on the fundamentals of irreversible thermodynamics, specifically the concept of entropy generation due to materials degradation as an index of damage. All failure mechanisms that cause degradation, damage accumulation and ultimate failure share a common feature, namely energy dissipation. Energy dissipation, as a fundamental measure of irreversibility in a thermodynamic treatment of non-equilibrium processes, leads to and can be expressed in terms of entropy generation. The dissertation proposes a theory of damage that relates entropy generation to energy dissipation via generalized thermodynamic forces and thermodynamic fluxes, and that formally describes the resulting damage. Following the proposed theory of entropic damage, an approach to reliability and integrity characterization based on thermodynamic entropy is discussed. It is shown that the variability in the amount of thermodynamically based damage, together with uncertainty about the parameters of a distribution model describing that variability, leads to a more consistent and broader definition of the well-known time-to-failure distribution in reliability engineering. In particular, the reliability function can be derived from the thermodynamic laws rather than estimated from observed failure histories. Furthermore, exploiting the advantages of entropy generation and accumulation as a damage index over common observable markers of damage such as crack size, a method is proposed to recast prognostics and health management (PHM) in terms of entropic damage. The proposed entropic damage approach to reliability and integrity is then demonstrated through experimental validation: the corrosion-fatigue entropy generation function is derived, evaluated and employed for structural integrity and reliability assessment and for remaining useful life (RUL) prediction of tested Aluminum 7075-T651 specimens.
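The link between entropic damage and the time-to-failure distribution can be illustrated with a small Monte Carlo sketch: if failure occurs once cumulative entropy generation reaches an entropic endurance, and the entropy generation rate varies across nominally identical specimens, a reliability function follows directly. The lognormal rate distribution and all numerical values below are assumptions for illustration only, not results from the dissertation.

```python
# Hedged illustration: reliability implied by an entropic failure criterion.
import numpy as np

rng = np.random.default_rng(2)

S_F = 4.0           # hypothetical entropic endurance to failure
MEDIAN_RATE = 2e-3  # hypothetical median entropy generation per cycle
SIGMA = 0.3         # lognormal scatter of the rate across specimens

# sample specimen-to-specimen entropy generation rates
rates = rng.lognormal(mean=np.log(MEDIAN_RATE), sigma=SIGMA, size=100_000)

# cycles to failure: entropic threshold divided by the (constant) rate
cycles_to_failure = S_F / rates

def reliability(n_cycles):
    """Empirical reliability R(n) = P(N_f > n) implied by the entropic model."""
    return np.mean(cycles_to_failure > n_cycles)

for n in (1000, 2000, 3000):
    print(f"R({n}) = {reliability(n):.3f}")
```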

Relevance:

30.00%

Publisher:

Abstract:

Schedules can be built in a way similar to how a human scheduler would proceed, by using a set of rules that involve domain knowledge. This paper presents an Estimation of Distribution Algorithm (EDA) for the nurse scheduling problem, which involves choosing a suitable scheduling rule from a set for the assignment of each nurse. Unlike previous work that used Genetic Algorithms (GAs) to implement implicit learning, the learning in the proposed algorithm is explicit, i.e. we identify and mix building blocks directly. The EDA implements this explicit learning by building a Bayesian network of the joint distribution of solutions. The conditional probability of each variable in the network is computed from an initial set of promising solutions. Subsequently, each new instance of each variable is generated using the corresponding conditional probabilities, until all variables have been generated, i.e., in our case, a new rule string has been obtained. Another set of rule strings is generated in this way, some of which replace previous strings based on fitness selection. If the stopping conditions are not met, the conditional probabilities for all nodes in the Bayesian network are updated again using the current set of promising rule strings. Computational results from 52 real data instances demonstrate the success of this approach. It is also suggested that the learning mechanism in the proposed approach might be suitable for other scheduling problems.
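A minimal sketch of the approach, under simplifying assumptions, is given below: the Bayesian network is reduced to a chain over consecutive nurses, and the fitness function is a stand-in for a real schedule evaluator. It only illustrates the estimate-sample-select loop described in the abstract.

```python
# Simplified EDA over "rule strings" (rule i is chosen for nurse i).
import numpy as np

rng = np.random.default_rng(3)
N_NURSES, N_RULES = 20, 4

def fitness(rule_string):
    # hypothetical evaluator: rewards strings close to a fixed target assignment
    target = np.arange(N_NURSES) % N_RULES
    return np.sum(rule_string == target)

def sample_strings(p_first, p_cond, n):
    strings = np.empty((n, N_NURSES), dtype=int)
    strings[:, 0] = rng.choice(N_RULES, size=n, p=p_first)
    for i in range(1, N_NURSES):
        for s in range(n):
            strings[s, i] = rng.choice(N_RULES, p=p_cond[i, strings[s, i - 1]])
    return strings

def eda(pop_size=100, n_best=30, generations=40, smooth=0.5):
    p_first = np.full(N_RULES, 1.0 / N_RULES)
    p_cond = np.full((N_NURSES, N_RULES, N_RULES), 1.0 / N_RULES)
    pop = sample_strings(p_first, p_cond, pop_size)
    for _ in range(generations):
        best = pop[np.argsort([-fitness(s) for s in pop])[:n_best]]
        # re-estimate the chain's conditional probabilities from promising strings
        counts1 = np.bincount(best[:, 0], minlength=N_RULES) + smooth
        p_first = counts1 / counts1.sum()
        for i in range(1, N_NURSES):
            for prev in range(N_RULES):
                c = np.bincount(best[best[:, i - 1] == prev, i], minlength=N_RULES) + smooth
                p_cond[i, prev] = c / c.sum()
        pop = sample_strings(p_first, p_cond, pop_size)
    return max(pop, key=fitness)

best = eda()
print(best, fitness(best))
```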

Relevance:

30.00%

Publisher:

Abstract:

Vector-borne disease emergence in recent decades has been associated with different environmental drivers, including changes in habitat, hosts and climate. Lyme borreliosis is among the most important vector-borne diseases in the Northern hemisphere and is an emerging disease in Scotland. Transmitted by Ixodid tick vectors between large numbers of wild vertebrate host species, Lyme borreliosis is caused by bacteria from the Borrelia burgdorferi sensu lato species group. Ecological studies can inform how environmental factors such as host abundance and community composition, habitat and landscape heterogeneity contribute to spatial and temporal variation in risk from B. burgdorferi s.l. In this thesis, a range of approaches was used to investigate the effects of vertebrate host communities and individual host species as drivers of B. burgdorferi s.l. dynamics and its tick vector Ixodes ricinus.

Host species differ in reservoir competence for B. burgdorferi s.l. and as hosts for ticks. Deer are incompetent transmission hosts for B. burgdorferi s.l. but are significant hosts of all life stages of I. ricinus, whereas rodents and birds are important transmission hosts of B. burgdorferi s.l. and common hosts of the immature life stages of I. ricinus. Surveys of woodland sites revealed variable effects of deer density on B. burgdorferi prevalence, from no effect (Chapter 2) to a possible ‘dilution’ effect resulting in lower prevalence at higher deer densities (Chapter 3). An invasive species in Scotland, the grey squirrel (Sciurus carolinensis), was found to host diverse genotypes of B. burgdorferi s.l. and may act as a spill-over host for strains maintained by native host species (Chapter 4).

Habitat fragmentation may alter the dynamics of B. burgdorferi s.l. via effects on the host community and host movements. In this thesis, there was a lack of persistence of the rodent-associated genospecies of B. burgdorferi s.l. within a naturally fragmented landscape (Chapter 3). Rodent host biology, particularly population cycles and dispersal ability, is likely to affect pathogen persistence and recolonization in fragmented habitats.

Heterogeneity in disease dynamics can occur spatially and temporally due to differences in the host community, habitat and climatic factors. Higher numbers of I. ricinus nymphs, and a higher probability of detecting a nymph infected with B. burgdorferi s.l., were found in areas with warmer climates as estimated by growing degree days (Chapter 2). The ground vegetation type associated with the highest number of I. ricinus nymphs varied between the studies in this thesis (Chapters 2 & 3) and does not appear to be a reliable predictor across large areas. B. burgdorferi s.l. prevalence and genospecies composition were highly variable for the same sites sampled in subsequent years (Chapter 2). This suggests that dynamic variables such as reservoir host and deer densities should be measured, as well as more static habitat and climatic factors, to understand the drivers of B. burgdorferi s.l. infection in ticks.

Heterogeneity in parasite loads amongst hosts is a common finding with implications for disease ecology and management. Using a 17-year data set of tick infestations in a wild bird community in Scotland, different effects of age and sex on tick burdens were found among four species of passerine bird (Chapter 5). There were also different rates of decline in tick burdens among bird species in response to a long-term decrease in questing tick pressure over the study. These species-specific patterns may be driven by differences in behaviour and immunity and highlight the importance of comparative approaches.

Combining whole genome sequencing (WGS) and population genetics offers a novel way to identify ecological drivers of pathogen populations. An initial analysis of WGS data from B. burgdorferi s.s. isolates sampled 16 years apart suggests a signal of measurable evolution (Chapter 6), which indicates that demographic analyses may be applied to understand the ecological and evolutionary processes of these bacteria. Overall, this work shows how host communities, habitat and climatic factors can affect the local transmission dynamics of B. burgdorferi s.l. and the potential risk of infection to humans. Spatial and temporal heterogeneity in pathogen dynamics poses challenges for the prediction of risk. New tools such as WGS of the pathogen (Chapter 6) and blood meal analysis techniques will add power to future studies on the ecology and evolution of B. burgdorferi s.l.

Relevance:

30.00%

Publisher:

Abstract:

This paper proposes a new memetic evolutionary algorithm to achieve explicit learning in rule-based nurse rostering, which involves applying a set of heuristic rules for each nurse's assignment. The main framework of the algorithm is an estimation of distribution algorithm, in which an ant-miner methodology improves the individual solutions produced in each generation. Unlike our previous work (where learning is implicit), the learning in the memetic estimation of distribution algorithm is explicit, i.e. we are able to identify building blocks directly. The overall approach learns by building a probabilistic model, i.e. an estimate of the probability distribution of the individual nurse-rule pairs that are used to construct schedules. The local search processor (i.e. the ant-miner) reinforces nurse-rule pairs that receive higher rewards. A challenging real-world nurse rostering problem is used as the test problem. Computational results show that the proposed approach outperforms most existing approaches. It is suggested that the learning methodologies presented in this paper may be applied to other scheduling problems where schedules are built systematically according to specific rules.
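The sketch below reduces the memetic idea to its core for illustration: a probability matrix over nurse-rule pairs is sampled, a toy local search (standing in for the ant-miner) improves each string, and pairs that appear in improved solutions are reinforced. The evaluator, constants and local-search routine are all hypothetical.

```python
# Sketch under stated assumptions: probability model over nurse-rule pairs
# plus a toy local-search (memetic) step.
import numpy as np

rng = np.random.default_rng(4)
N_NURSES, N_RULES = 20, 4
TARGET = (np.arange(N_NURSES) * 2) % N_RULES        # stand-in for real constraints

def evaluate(s):
    return np.sum(s == TARGET)                       # hypothetical quality measure

def local_search(s, tries=10):
    """Toy stand-in for the ant-miner: greedy single-position changes."""
    best = s.copy()
    for _ in range(tries):
        cand = best.copy()
        cand[rng.integers(N_NURSES)] = rng.integers(N_RULES)
        if evaluate(cand) > evaluate(best):
            best = cand
    return best

P = np.full((N_NURSES, N_RULES), 1.0 / N_RULES)      # P[nurse, rule]
for _ in range(60):
    pop = np.array([[rng.choice(N_RULES, p=P[i]) for i in range(N_NURSES)]
                    for _ in range(50)])
    pop = np.array([local_search(s) for s in pop])            # memetic step
    elite = pop[np.argsort([-evaluate(s) for s in pop])[:15]]
    counts = np.stack([np.bincount(elite[:, i], minlength=N_RULES)
                       for i in range(N_NURSES)]) + 0.5        # smoothing
    P = counts / counts.sum(axis=1, keepdims=True)             # reinforce good pairs

best = max(pop, key=evaluate)
print(best, evaluate(best))
```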

Relevance:

30.00%

Publisher:

Abstract:

Entanglement distribution between distant parties is an essential component of most quantum communication protocols. Unfortunately, decoherence effects such as phase noise in optical fibres are known to demolish entanglement. Iterative (multistep) entanglement distillation protocols have long been proposed to overcome decoherence, but their probabilistic nature makes them inefficient, since the success probability decays exponentially with the number of steps. Quantum memories have been contemplated to make entanglement distillation practical, but suitable quantum memories have not been realised to date. Here, we present the theory for an efficient iterative entanglement distillation protocol without quantum memories and provide a proof-of-principle experimental demonstration. The scheme is applied to phase-diffused two-mode-squeezed states and proven to distil entanglement for up to three iteration steps. The data are indistinguishable from those that an efficient scheme using quantum memories would produce. Since our protocol includes the final measurement, it is particularly promising for enhancing continuous-variable quantum key distribution.
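A back-of-the-envelope illustration of the motivation stated above: if each distillation step succeeds only with probability p and, without memories, a failure forces a restart, the probability that n consecutive steps succeed falls as p**n. The value of p below is an arbitrary assumption.

```python
# Illustrative arithmetic only: exponential decay of the overall success
# probability with the number of iteration steps.
p = 0.25                      # assumed per-step success probability
for n in (1, 2, 3):
    overall = p ** n
    print(f"{n} step(s): success probability {overall:.4f}, "
          f"~{1 / overall:.0f} attempts needed on average")
```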

Relevance:

30.00%

Publisher:

Abstract:

In this thesis, we study the causal relationship between the functional distribution of income and economic growth. In particular, we focus on some of the aspects that might alter the effect of the profit share on growth. After a brief introduction and literature review, the empirical contributions are presented in Chapters 3, 4 and 5.

Chapter 3 analyses the effect of a contemporaneous decrease in the wage share among countries that are major trade partners. A falling wage share and wage moderation are a global phenomenon that is hardly opposed by governments, because lower wages are associated with lower export prices and therefore have a positive effect on net exports. There is, however, a fallacy-of-composition problem: not all countries can improve their balance of payments contemporaneously. Studying the member countries of the North American Free Trade Agreement (NAFTA), we find that the effect on exports of a contemporaneous decrease in the wage share in Mexico, Canada and the United States is negative in all three countries. In other words, the competitive advantage that each country gains from a reduction in its wage share (associated with a decrease in export prices) is offset by a contemporaneous increase in competitiveness in the other two countries. Moreover, we find that NAFTA is overall wage-led: the profit share has a negative effect on aggregate demand.

Chapter 4 tests whether the effect of the profit share on growth can differ between the long run and the short run. Following Blecker (2014), our hypothesis is that in the short run the growth regime is less wage-led than in the long run. The results of our empirical investigation support this hypothesis, at least for the United States over the period 1950-2014. From the short run to the long run, the effect of wages on consumption increases more than proportionally compared with the effect of profits on consumption. Moreover, consumer debt seems to have only a short-run effect on consumption, indicating that in the long run, when debt has to be repaid, consumption depends more on the level of income and on how it is distributed. Regarding investment, the effect of capacity utilization is always larger than the effect of the profit share, and the difference between the two effects is larger in the long run than in the short run. This confirms the hypothesis that in the long run, unless there is an increase in demand, firms are unlikely to increase investment even in the presence of high profits. In addition, the rentier share of profits – which comprises dividends and interest payments – has a long-run negative effect on investment: in the long run, rentiers divert firms' profits from investment and therefore weaken the effect of profits on investment.

Finally, Chapter 5 studies the possibility of structural breaks in the relationship between the functional distribution of income and growth. We argue that, from the 1980s, financialization and the European exchange rate agreements weakened the positive effect of the profit share on growth in Italy; the growth regime has therefore become less profit-led and more wage-led. Our results confirm this hypothesis and also shed light on the concepts of cooperative and conflictual regimes as defined by Bhaduri and Marglin (1990).
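The short-run versus long-run distinction in Chapter 4 can be illustrated with a toy autoregressive distributed-lag consumption equation: in c_t = a + rho*c_{t-1} + b_w*w_t + b_p*p_t + e_t, the short-run propensities are b_w and b_p, while the long-run propensities are b_w/(1-rho) and b_p/(1-rho). The simulated data and specification below are purely illustrative, not the thesis's estimates.

```python
# Hedged sketch: short-run vs long-run propensities in an ARDL(1,0,0) model.
import numpy as np

rng = np.random.default_rng(5)
T = 300
w = np.cumsum(rng.normal(size=T)) + 100          # wage income (random walk)
p = np.cumsum(rng.normal(size=T)) + 100          # profit income (random walk)

# simulate consumption with a stronger wage effect than profit effect
c = np.zeros(T)
for t in range(1, T):
    c[t] = 5 + 0.6 * c[t - 1] + 0.30 * w[t] + 0.10 * p[t] + rng.normal(scale=0.5)

# OLS on the ARDL(1,0,0) specification
X = np.column_stack([np.ones(T - 1), c[:-1], w[1:], p[1:]])
y = c[1:]
a, rho, b_w, b_p = np.linalg.lstsq(X, y, rcond=None)[0]

print(f"short-run propensities: wages {b_w:.2f}, profits {b_p:.2f}")
print(f"long-run propensities:  wages {b_w / (1 - rho):.2f}, profits {b_p / (1 - rho):.2f}")
```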

Relevance:

30.00%

Publisher:

Abstract:

Growing models have been widely used for clustering or topology learning. Traditionally, these models work in stationary environments, grow incrementally and adapt their nodes to a given distribution based on global parameters. In this paper, we present an enhanced unsupervised self-organising network for the modelling of visual objects. We first develop a framework for building non-rigid shapes using the growth mechanism of self-organising maps, and then define an optimal number of nodes, without overfitting or underfitting the network, based on knowledge obtained from information-theoretic considerations. We present experimental results for hands and quantitatively evaluate the matching capabilities of the proposed method with the topographic product.
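For illustration, a bare-bones growing-network loop in the spirit of the paper is sketched below: nodes accumulate local quantization error, the winner moves towards each input, and new nodes are inserted near the highest-error node. It omits the paper's edge topology, information-theoretic stopping rule and topographic-product evaluation; all parameters are arbitrary.

```python
# Simplified growing self-organising network adapting nodes to a 2-D point
# distribution (a noisy ring standing in for an object contour).
import numpy as np

rng = np.random.default_rng(6)

def sample_shape(n):
    theta = rng.uniform(0, 2 * np.pi, n)
    r = 1.0 + 0.05 * rng.normal(size=n)
    return np.column_stack([r * np.cos(theta), r * np.sin(theta)])

nodes = rng.normal(size=(2, 2))                  # start with two random nodes
error = np.zeros(2)

for step, x in enumerate(sample_shape(5000), start=1):
    d = np.linalg.norm(nodes - x, axis=1)
    winner = int(np.argmin(d))
    error[winner] += d[winner] ** 2              # accumulate local error
    nodes[winner] += 0.05 * (x - nodes[winner])  # move the winner towards the input
    # every 200 inputs, insert a node near the highest-error node
    if step % 200 == 0 and len(nodes) < 40:
        worst = int(np.argmax(error))
        nodes = np.vstack([nodes, nodes[worst] + 0.1 * rng.normal(size=2)])
        error = np.append(error * 0.5, 0.0)

mean_dist = np.mean([np.min(np.linalg.norm(nodes - x, axis=1)) for x in sample_shape(500)])
print(f"{len(nodes)} nodes; mean distance to the shape ~ {mean_dist:.3f}")
```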

Relevance:

30.00%

Publisher:

Abstract:

In quantitative risk analysis, the problem of estimating small threshold exceedance probabilities and extreme quantiles arises ubiquitously in bio-surveillance, economics, natural disaster insurance, quality control schemes, and elsewhere. A useful way to assess extreme events is to estimate the probabilities of exceeding large threshold values and the extreme quantiles judged relevant by interested authorities. Such information about extremes serves as essential guidance in decision-making processes. However, in this context data are usually skewed, and the rarity of exceedances of a large threshold implies large fluctuations in the distribution's upper tail, precisely where accuracy is most desired. Extreme Value Theory (EVT) is the branch of statistics that characterizes the behavior of the upper or lower tails of probability distributions. However, existing EVT methods for the estimation of small threshold exceedance probabilities and extreme quantiles often lead to poor predictive performance when the underlying sample is not large enough or does not contain values in the distribution's tail. In this dissertation, we are concerned with an out-of-sample semiparametric (SP) method for the estimation of small threshold exceedance probabilities and extreme quantiles. The proposed SP method for interval estimation calls for the fusion, or integration, of a given data sample with external, computer-generated independent samples. Because more data are used, real as well as artificial, under certain conditions the method produces relatively short yet reliable confidence intervals for small exceedance probabilities and extreme quantiles.
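For context, the sketch below shows the standard peaks-over-threshold baseline from EVT (not the dissertation's semiparametric fusion method): a generalized Pareto distribution is fitted to excesses over a high threshold and combined with the empirical exceedance rate to estimate a small tail probability and an extreme quantile. The data and threshold choice are illustrative.

```python
# Peaks-over-threshold (POT) baseline with a generalized Pareto tail fit.
import numpy as np
from scipy.stats import genpareto, lognorm

rng = np.random.default_rng(7)
sample = lognorm.rvs(s=0.9, scale=1.0, size=2000, random_state=rng)  # skewed data

u = np.quantile(sample, 0.95)                    # high threshold
excesses = sample[sample > u] - u
zeta_u = np.mean(sample > u)                     # empirical exceedance rate of u

# fit the GPD to the excesses (location fixed at 0)
shape, _, scale = genpareto.fit(excesses, floc=0)

def tail_prob(x):
    """Estimated P(X > x) for x above the threshold u."""
    return zeta_u * genpareto.sf(x - u, shape, loc=0, scale=scale)

def extreme_quantile(p):
    """Estimated x such that P(X > x) = p, for p smaller than zeta_u."""
    return u + genpareto.isf(p / zeta_u, shape, loc=0, scale=scale)

x0 = 3.0 * u
print(f"P(X > {x0:.2f}) ~ {tail_prob(x0):.2e} (true {lognorm.sf(x0, 0.9):.2e})")
print(f"99.9% quantile ~ {extreme_quantile(0.001):.2f} (true {lognorm.isf(0.001, 0.9):.2f})")
```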