964 results for Distribution (Probability theory)
Abstract:
Aims. Projected rotational velocities (ve sin i) have been estimated for 334 targets in the VLT-FLAMES Tarantula Survey that do not manifest significant radial velocity variations and are not supergiants. They have spectral types from approximately O9.5 to B3. The estimates have been analysed to infer the underlying rotational velocity distribution, which is critical for understanding the evolution of massive stars. Methods. Projected rotational velocities were deduced from the Fourier transforms of spectral lines, with upper limits also being obtained from profile fitting. For the narrower lined stars, metal and non-diffuse helium lines were adopted, and for the broader lined stars, both non-diffuse and diffuse helium lines; the estimates obtained using the different sets of lines are in good agreement. The uncertainty in the mean estimates is typically 4% for most targets. The iterative deconvolution procedure of Lucy has been used to deduce the probability density distribution of the rotational velocities. Results. Projected rotational velocities range up to approximately 450 km s-1 and show a bi-modal structure. This is also present in the inferred rotational velocity distribution, with 25% of the sample having 0 < ve < 100 km s-1 and the high-velocity component having ve ∼ 250 km s-1. There is no evidence from the spatial and radial velocity distributions of the two components that they represent either field and cluster populations or different episodes of star formation. Be-type stars have also been identified. Conclusions. The bi-modal rotational velocity distribution in our sample resembles that found for late-B and early-A type stars. While magnetic braking appears to be a possible mechanism for producing the low-velocity component, we cannot rule out alternative explanations. © ESO 2013.
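As a hedged sketch of the Fourier method mentioned in the abstract (the survey's actual pipeline is more elaborate), ve sin i can be recovered from the first zero of the Fourier transform of a rotationally broadened line profile. In the Python snippet below, the line choice, the synthetic profile, and the assumption of zero limb darkening are all illustrative, not taken from the paper:

```python
import numpy as np

C_KMS = 2.998e5           # speed of light [km/s]
LAMBDA0 = 4471.5          # example rest wavelength [Angstrom] (He I line)
VSINI_TRUE = 250.0        # km/s, used only to synthesize the test profile

# Synthetic rotational broadening kernel (zero limb darkening for simplicity).
dlam_max = LAMBDA0 * VSINI_TRUE / C_KMS        # half-width of the kernel
dlam = np.linspace(-dlam_max, dlam_max, 2001)
profile = np.sqrt(1.0 - (dlam / dlam_max) ** 2)

# Zero-padded Fourier transform; the first minimum of |FT| marks the first zero.
n_pad = 1 << 18
ft = np.abs(np.fft.rfft(profile, n=n_pad))
freq = np.fft.rfftfreq(n_pad, d=dlam[1] - dlam[0])   # cycles per Angstrom
minima = np.where((ft[1:-1] < ft[:-2]) & (ft[1:-1] < ft[2:]))[0] + 1
sigma1 = freq[minima[0]]                             # first-zero frequency

# First-zero relation: sigma1 * (lambda0 * vsini / c) = 0.610 for zero limb
# darkening (the often-quoted 0.660 corresponds to a limb-darkening coeff ~0.6).
vsini_est = 0.610 * C_KMS / (LAMBDA0 * sigma1)
print(f"recovered v sin i ~= {vsini_est:.0f} km/s (true {VSINI_TRUE:.0f})")
```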
Abstract:
The strong mixing of many-electron basis states in excited atoms and ions with open f shells results in very large numbers of complex, chaotic eigenstates that cannot be computed to any degree of accuracy. Describing the processes which involve such states requires the use of a statistical theory. Electron capture into these “compound resonances” leads to electron-ion recombination rates that are orders of magnitude greater than those of direct, radiative recombination and cannot be described by standard theories of dielectronic recombination. Previous statistical theories considered this as a two-electron capture process which populates a pair of single-particle orbitals, followed by “spreading” of the two-electron states into chaotically mixed eigenstates. This method is similar to a configuration-average approach because it neglects potentially important effects of spectator electrons and conservation of total angular momentum. In this work we develop a statistical theory which considers electron capture into “doorway” states with definite angular momentum obtained by the configuration interaction method. We apply this approach to electron recombination with W^20+, considering 2 × 10^6 doorway states. Despite strong effects from the spectator electrons, we find that the results of the earlier theories largely hold. Finally, we extract the fluorescence yield (the probability of photoemission and hence recombination) by comparison with experiment.
Abstract:
In this paper, we consider the transmission of confidential information over a κ-μ fading channel in the presence of an eavesdropper who also experiences κ-μ fading. In particular, we obtain novel analytical solutions for the probability of strictly positive secrecy capacity (SPSC) and a lower bound of secure outage probability (SOPL) for independent and non-identically distributed channel coefficients without parameter constraints. We also provide a closed-form expression for the probability of SPSC when the μ parameter is assumed to take positive integer values. Monte Carlo simulations are performed to verify the derived results. The versatility of the κ-μ fading model means that the results presented in this paper can be used to determine the probability of SPSC and SOPL for a large number of other fading scenarios, such as Rayleigh, Rice (Nakagami-n), Nakagami-m, One-Sided Gaussian, and mixtures of these common fading models. In addition, due to the duality of the analysis of secrecy capacity and co-channel interference (CCI), the results presented here will have immediate applicability in the analysis of outage probability in wireless systems affected by CCI and background noise (BN). To demonstrate the efficacy of the novel formulations proposed here, we use the derived equations to provide a useful insight into the probability of SPSC and SOPL for a range of emerging wireless applications, such as cellular device-to-device, peer-to-peer, vehicle-to-vehicle, and body-centric communications using data obtained from real channel measurements.
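Since the derived expressions are verified by Monte Carlo simulation, a minimal sketch of such a check may be useful. The Python snippet below samples squared κ-μ envelopes from the physical cluster model (with μ restricted to integers, as in the paper's closed-form case) and estimates the probability of SPSC as P(γM > γE); all parameter values are illustrative assumptions, not figures from the paper:

```python
import numpy as np

rng = np.random.default_rng(3)

def kappa_mu_power(kappa: float, mu: int, omega: float, n: int) -> np.ndarray:
    """Sample n squared kappa-mu envelopes (mean power omega) from the
    physical model: mu clusters of in-phase/quadrature Gaussians around a
    dominant component (mu restricted to integers for simulation)."""
    sigma2 = omega / (2.0 * mu * (1.0 + kappa))   # scattered power per component
    d2 = omega * kappa / (1.0 + kappa)            # total dominant power
    p = np.sqrt(d2 / (2.0 * mu))                  # dominant part, split evenly
    x = np.sqrt(sigma2) * rng.standard_normal((n, mu)) + p
    y = np.sqrt(sigma2) * rng.standard_normal((n, mu)) + p
    return (x**2 + y**2).sum(axis=1)

# Illustrative average SNRs and fading parameters (assumptions):
N = 1_000_000
snr_main = 10.0 ** (10.0 / 10.0)      # 10 dB, legitimate channel
snr_eave = 10.0 ** (5.0 / 10.0)       # 5 dB, eavesdropper channel
gamma_m = snr_main * kappa_mu_power(kappa=2.0, mu=2, omega=1.0, n=N)
gamma_e = snr_eave * kappa_mu_power(kappa=1.0, mu=1, omega=1.0, n=N)

# Strictly positive secrecy capacity occurs when the main channel SNR
# exceeds the eavesdropper's: Cs = log2(1+gamma_m) - log2(1+gamma_e) > 0.
p_spsc = np.mean(gamma_m > gamma_e)
print(f"P(SPSC) ~ {p_spsc:.4f}")
```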
Abstract:
Queueing theory is the mathematical study of ‘queues’ or ‘waiting lines’, where an item from inventory is provided to the customer on completion of service. A typical queueing system consists of a queue and a server. Customers arrive in the system from outside and join the queue in a certain way. The server picks up customers and serves them according to a certain service discipline. Customers leave the system immediately after their service is completed. For queueing systems, queue length, waiting time and busy period are of primary interest to applications. The theory permits the derivation and calculation of several performance measures, including the average waiting time in the queue or the system, mean queue length, traffic intensity, the expected number waiting or receiving service, mean busy period, distribution of queue length, and the probability of encountering the system in certain states, such as empty, full, having an available server or having to wait a certain time to be served.
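As a concrete instance of these measures, consider the single-server M/M/1 queue (an assumed model here; the passage above does not fix one), for which the standard closed forms are easy to compute:

```python
# Closed-form performance measures for an M/M/1 queue (illustrative model).
arrival_rate = 2.0    # lambda: customer arrivals per unit time
service_rate = 3.0    # mu: customers served per unit time
rho = arrival_rate / service_rate              # traffic intensity (must be < 1)

L_system = rho / (1 - rho)                     # mean number in the system
L_queue = rho**2 / (1 - rho)                   # mean queue length
W_system = 1 / (service_rate - arrival_rate)   # mean time in the system
W_queue = rho / (service_rate - arrival_rate)  # mean waiting time in the queue
p_empty = 1 - rho                              # probability the system is empty

print(f"rho={rho:.2f}  L={L_system:.2f}  Lq={L_queue:.2f}  "
      f"W={W_system:.2f}  Wq={W_queue:.2f}  P(empty)={p_empty:.2f}")
```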
Abstract:
This thesis develops bootstrap methods for factor models, which have been widely used to generate forecasts since the pioneering work of Stock and Watson (2002) on diffusion indices. These models accommodate a large number of macroeconomic and financial variables as predictors, a useful feature for incorporating the diverse information available to economic agents. The thesis therefore proposes econometric tools that improve inference in factor models using latent factors extracted from a large panel of observed predictors. It is divided into three complementary chapters, the first two written in collaboration with Sílvia Gonçalves and Benoit Perron. In the first chapter, we study how bootstrap methods can be used for inference in models that forecast h periods into the future. To this end, the chapter examines bootstrap inference in a factor-augmented regression setting where the errors may be autocorrelated. It generalizes the results of Gonçalves and Perron (2014) and proposes and justifies two residual-based approaches: the block wild bootstrap and the dependent wild bootstrap. Our simulations show improved coverage rates of confidence intervals for the estimated coefficients using these approaches, compared with asymptotic theory and the wild bootstrap, in the presence of serial correlation in the regression errors. The second chapter proposes bootstrap methods for constructing prediction intervals that relax the assumption of normality of the innovations. We propose bootstrap prediction intervals for an observation h periods into the future and for its conditional mean. We assume that these forecasts are made using a set of factors extracted from a large panel of variables. Because we treat the factors as latent, our forecasts depend on both the estimated factors and the estimated regression coefficients. Under regularity conditions, Bai and Ng (2006) proposed the construction of asymptotic intervals under the assumption of Gaussian innovations. The bootstrap allows us to relax this assumption and to construct valid prediction intervals under more general assumptions. Moreover, even under Gaussianity, the bootstrap leads to more accurate intervals when the cross-sectional dimension is relatively small, because it accounts for the bias of the ordinary least squares estimator, as shown in a recent study by Gonçalves and Perron (2014). In the third chapter, we propose consistent selection procedures for factor-augmented regressions in finite samples. We first show that the usual cross-validation method is inconsistent, but that its generalization, leave-d-out cross-validation, selects the smallest set of estimated factors spanning the space generated by the true factors. The second criterion, whose validity we also establish, generalizes the bootstrap approximation of Shao (1996) to factor-augmented regressions. Simulations show an improved probability of parsimoniously selecting the estimated factors compared with available selection methods.
The empirical application revisits the relationship between macroeconomic and financial factors and excess returns on the US stock market. Among the factors estimated from a large panel of US macroeconomic and financial data, factors strongly correlated with interest-rate spreads and the Fama-French factors have good predictive power for excess returns.
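As a hedged illustration of the residual-based schemes described above, the sketch below implements a block wild bootstrap for a regression with given regressors in Python; the data-generating process, the block length, and the treatment of the factors as observed are all assumptions made for exposition, not the thesis's actual design:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated factor-augmented regression: y_t = F_t' beta + e_t
T, k = 200, 3
F = rng.standard_normal((T, k))          # factors, treated as given here
beta = np.array([0.5, -0.3, 0.2])
y = F @ beta + rng.standard_normal(T)

# OLS fit and residuals
beta_hat, *_ = np.linalg.lstsq(F, y, rcond=None)
resid = y - F @ beta_hat

# Block wild bootstrap: Rademacher weights held constant within blocks,
# which preserves serial correlation in the residuals within each block.
B, l = 999, 10                            # number of draws, block length
boot_betas = np.empty((B, k))
for b in range(B):
    w = rng.choice([-1.0, 1.0], size=T // l + 1)
    v = np.repeat(w, l)[:T]               # piecewise-constant weights
    y_star = F @ beta_hat + resid * v
    boot_betas[b], *_ = np.linalg.lstsq(F, y_star, rcond=None)

# Percentile confidence interval for the first coefficient
lo, hi = np.percentile(boot_betas[:, 0], [2.5, 97.5])
print(f"95% bootstrap CI for beta_1: [{lo:.3f}, {hi:.3f}]")
```

A dependent wild bootstrap would instead draw serially dependent weights rather than piecewise-constant ones; the overall residual-based structure stays the same.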
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
The identification of archival records depends on description, a process that culminates in designation. The title given to different groups of documents has a great impact on archival functions. Taking this premise as our starting point, this paper presents theoretical contributions made in Spain related to the designation of documentary types, series and units. The state of the question presented here seeks to delimit the concept of “documentary typology”, which is essential for Archival Science.
Abstract:
The blast furnace is the main ironmaking production unit in the world, converting iron ore with coke and hot blast into liquid iron (hot metal), which is used for steelmaking. The furnace acts as a counter-current reactor charged with layers of raw material of very different gas permeability. The arrangement of these layers, or burden distribution, is the most important factor influencing the gas flow conditions inside the furnace, which dictate the efficiency of the heat transfer and reduction processes. For proper control, the furnace operators should know the overall conditions in the furnace and be able to predict how control actions affect its state. However, due to high temperatures and pressure, a hostile atmosphere and mechanical wear, it is very difficult to measure internal variables. Instead, the operators have to rely extensively on measurements obtained at the boundaries of the furnace and make their decisions on the basis of heuristic rules and results from mathematical models. It is particularly difficult to understand the distribution of the burden materials because of the complex behavior of the particulate materials during charging. The aim of this doctoral thesis is to clarify some aspects of burden distribution and to develop tools that can aid the decision-making process in the control of the burden and gas distribution in the blast furnace. A relatively simple mathematical model was created for simulating the distribution of the burden material with a bell-less top charging system. The model developed is fast, and it can therefore be used by the operators to gain understanding of the formation of layers for different charging programs. The results were verified by findings from charging experiments using a small-scale charging rig in the laboratory. A basic gas flow model was developed which utilizes the results of the burden distribution model to estimate the gas permeability of the upper part of the blast furnace. This combined formulation for gas and burden distribution made it possible to implement a search for the best combination of charging parameters to achieve a target gas temperature distribution. As this mathematical task is discontinuous and non-differentiable, a genetic algorithm was applied to solve the optimization problem. It was demonstrated that the method was able to evolve optimal charging programs that fulfilled the target conditions. Even though the burden distribution model provides information about the layer structure, it neglects some effects which influence the results, such as mixed-layer formation and coke collapse. A more accurate numerical method for studying particle mechanics, the Discrete Element Method (DEM), was used to study some aspects of the charging process more closely. Model charging programs were simulated using DEM and compared with the results from small-scale experiments. The mixed layer was defined and its voidage estimated; the mixed layer was found to have about 12% less voidage than layers of the individual burden components. Finally, a model for predicting the extent of coke collapse when heavier pellets are charged over a layer of lighter coke particles was formulated based on slope stability theory, and was used to update the coke layer distribution after charging in the mathematical model. In designing this revision, results from DEM simulations and charging experiments for some charging programs were used.
The findings from the coke collapse analysis can be used to design charging programs with more stable coke layers.
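As a hedged sketch of how such a genetic-algorithm search might be set up (the thesis couples the GA to its burden-distribution and gas-flow models, which are replaced here by a stand-in objective over two hypothetical charging parameters), a minimal version could look as follows in Python:

```python
import numpy as np

rng = np.random.default_rng(1)

# Placeholder objective: mismatch between a modelled radial gas-temperature
# profile and a target profile. In the thesis this would be evaluated by the
# combined burden-distribution/gas-flow model; here it is a stand-in.
TARGET = np.array([950.0, 800.0, 650.0, 550.0, 500.0])   # assumed target [C]

def temperature_profile(params: np.ndarray) -> np.ndarray:
    # Hypothetical smooth mapping from charging parameters to temperatures.
    radii = np.linspace(0.0, 1.0, TARGET.size)
    return 1000.0 * np.exp(-params[0] * radii) + 100.0 * params[1] * radii

def fitness(params: np.ndarray) -> float:
    return -np.sum((temperature_profile(params) - TARGET) ** 2)

# Minimal generational GA: tournament selection, blend crossover, mutation.
pop = rng.uniform(0.0, 2.0, size=(40, 2))
for generation in range(200):
    scores = np.array([fitness(p) for p in pop])
    new_pop = []
    for _ in range(len(pop)):
        i, j = rng.integers(len(pop), size=2)
        a = pop[i] if scores[i] > scores[j] else pop[j]    # tournament pick 1
        i, j = rng.integers(len(pop), size=2)
        b = pop[i] if scores[i] > scores[j] else pop[j]    # tournament pick 2
        child = 0.5 * (a + b)                              # blend crossover
        child += rng.normal(0.0, 0.05, size=2)             # Gaussian mutation
        new_pop.append(np.clip(child, 0.0, 2.0))
    pop = np.array(new_pop)

best = pop[np.argmax([fitness(p) for p in pop])]
print("best parameters:", best)
print("achieved profile:", temperature_profile(best))
```

A GA suits this task because, as the abstract notes, the objective is discontinuous and non-differentiable, so gradient-based optimizers are not applicable.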
Abstract:
This dissertation develops an explanation of damage and reliability of critical components and structures within the framework of the second law of thermodynamics. The approach relies on the fundamentals of irreversible thermodynamics, specifically the concept of entropy generation due to materials degradation as an index of damage. All failure mechanisms that cause degradation, damage accumulation and ultimate failure share a common feature, namely energy dissipation. Energy dissipation, as a fundamental measure of irreversibility in a thermodynamic treatment of non-equilibrium processes, leads to and can be expressed in terms of entropy generation. The dissertation proposes a theory of damage by relating entropy generation to energy dissipation via generalized thermodynamic forces and fluxes, formally describing the resulting damage. Following the proposed theory of entropic damage, an approach to reliability and integrity characterization based on thermodynamic entropy is discussed. It is shown that the variability in the amount of thermodynamic damage and the uncertainties about the parameters of a distribution model describing that variability lead to a more consistent and broader definition of the well-known time-to-failure distribution in reliability engineering. As such, it is shown that the reliability function can be derived from thermodynamic laws rather than estimated from observed failure histories. Furthermore, exploiting the advantages of entropy generation and accumulation as a damage index over common observable markers of damage such as crack size, a method is proposed to frame prognostics and health management (PHM) in terms of entropic damage. The proposed entropic damage theory of reliability and integrity is then demonstrated through experimental validation. Using this framework, the corrosion-fatigue entropy generation function is derived, evaluated and employed for structural integrity and reliability assessment and for remaining useful life (RUL) prediction of tested Aluminum 7075-T651 specimens.
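A minimal sketch of the bookkeeping this entropic-damage view rests on (notation assumed here, not taken from the dissertation): entropy generation is the sum of thermodynamic fluxes J_k times their conjugate forces X_k; accumulated entropy serves as the damage index D(t); and reliability is the probability that accumulated entropy has not yet reached an entropy-to-failure threshold s_f:

```latex
\dot{s}_{\mathrm{gen}} \;=\; \sum_{k} J_k X_k \;\ge\; 0,
\qquad
D(t) \;\propto\; \int_{0}^{t} \dot{s}_{\mathrm{gen}}(\tau)\,\mathrm{d}\tau,
\qquad
R(t) \;=\; \Pr\!\Big[\,\int_{0}^{t} \dot{s}_{\mathrm{gen}}(\tau)\,\mathrm{d}\tau \;<\; s_f\Big].
```

On this reading, randomness in the time to failure is inherited from randomness in the dissipation history and in s_f, which is why a reliability function can be written down from the thermodynamic description itself.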
Abstract:
Schedules can be built in a similar way to a human scheduler by using a set of rules that involve domain knowledge. This paper presents an Estimation of Distribution Algorithm (EDA) for the nurse scheduling problem, which involves choosing a suitable scheduling rule from a set for the assignment of each nurse. Unlike previous work that used Genetic Algorithms (GAs) to implement implicit learning, the learning in the proposed algorithm is explicit, i.e. we identify and mix building blocks directly. The EDA is applied to implement such explicit learning by building a Bayesian network of the joint distribution of solutions. The conditional probability of each variable in the network is computed according to an initial set of promising solutions. Subsequently, each new instance for each variable is generated by using the corresponding conditional probabilities, until all variables have been generated, i.e. in our case, a new rule string has been obtained. Another set of rule strings will be generated in this way, some of which will replace previous strings based on fitness selection. If stopping conditions are not met, the conditional probabilities for all nodes in the Bayesian network are updated again using the current set of promising rule strings. Computational results from 52 real data instances demonstrate the success of this approach. It is also suggested that the learning mechanism in the proposed approach might be suitable for other scheduling problems.
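A minimal sketch of the sampling-and-re-estimation loop described above is given below in Python. For brevity it uses independent per-nurse marginals (a univariate simplification of the paper's Bayesian network, which also captures dependencies between variables), and the rule set and fitness function are placeholders:

```python
import numpy as np

rng = np.random.default_rng(2)

N_NURSES, N_RULES = 30, 4          # each position picks one scheduling rule
POP, ELITE, GENS = 100, 20, 50

def fitness(rule_string: np.ndarray) -> float:
    # Placeholder: the real EDA scores the schedule each rule string builds.
    return -np.abs(rule_string - 2).sum()

# Start from uniform marginals P(rule r chosen for nurse n).
probs = np.full((N_NURSES, N_RULES), 1.0 / N_RULES)
for gen in range(GENS):
    # Sample a population of rule strings from the current model.
    pop = np.array([
        [rng.choice(N_RULES, p=probs[n]) for n in range(N_NURSES)]
        for _ in range(POP)
    ])
    scores = np.array([fitness(ind) for ind in pop])
    elite = pop[np.argsort(scores)[-ELITE:]]          # promising solutions
    # Re-estimate the marginals from the elite set (Laplace smoothing).
    for n in range(N_NURSES):
        counts = np.bincount(elite[:, n], minlength=N_RULES) + 1.0
        probs[n] = counts / counts.sum()

best = pop[np.argmax(scores)]
print("best rule string:", best)
```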
Abstract:
Vector-borne disease emergence in recent decades has been associated with different environmental drivers including changes in habitat, hosts and climate. Lyme borreliosis is among the most important vector-borne diseases in the Northern hemisphere and is an emerging disease in Scotland. Transmitted by Ixodid tick vectors between large numbers of wild vertebrate host species, Lyme borreliosis is caused by bacteria from the Borrelia burgdorferi sensu lato species group. Ecological studies can inform how environmental factors such as host abundance and community composition, habitat and landscape heterogeneity contribute to spatial and temporal variation in risk from B. burgdorferi s.l. In this thesis a range of approaches were used to investigate the effects of vertebrate host communities and individual host species as drivers of B. burgdorferi s.l. dynamics and its tick vector Ixodes ricinus. Host species differ in reservoir competence for B. burgdorferi s.l. and as hosts for ticks. Deer are incompetent transmission hosts for B. burgdorferi s.l. but are significant hosts of all life-stages of I. ricinus. Rodents and birds are important transmission hosts of B. burgdorferi s.l. and common hosts of immature life-stages of I. ricinus. In this thesis, surveys of woodland sites revealed variable effects of deer density on B. burgdorferi prevalence, from no effect (Chapter 2) to a possible ‘dilution’ effect resulting in lower prevalence at higher deer densities (Chapter 3). An invasive species in Scotland, the grey squirrel (Sciurus carolinensis), was found to host diverse genotypes of B. burgdorferi s.l. and may act as a spill-over host for strains maintained by native host species (Chapter 4). Habitat fragmentation may alter the dynamics of B. burgdorferi s.l. via effects on the host community and host movements. In this thesis, there was lack of persistence of the rodent associated genospecies of B. burgdorferi s.l. within a naturally fragmented landscape (Chapter 3). Rodent host biology, particularly population cycles and dispersal ability are likely to affect pathogen persistence and recolonization in fragmented habitats. Heterogeneity in disease dynamics can occur spatially and temporally due to differences in the host community, habitat and climatic factors. Higher numbers of I. ricinus nymphs, and a higher probability of detecting a nymph infected with B. burgdorferi s.l., were found in areas with warmer climates estimated by growing degree days (Chapter 2). The ground vegetation type associated with the highest number of I. ricinus nymphs varied between studies in this thesis (Chapter 2 & 3) and does not appear to be a reliable predictor across large areas. B. burgdorferi s.l. prevalence and genospecies composition was highly variable for the same sites sampled in subsequent years (Chapter 2). This suggests that dynamic variables such as reservoir host densities and deer should be measured as well as more static habitat and climatic factors to understand the drivers of B. burgdorferi s.l. infection in ticks. Heterogeneity in parasite loads amongst hosts is a common finding which has implications for disease ecology and management. Using a 17-year data set for tick infestations in a wild bird community in Scotland, different effects of age and sex on tick burdens were found among four species of passerine bird (Chapter 5). There were also different rates of decline in tick burdens among bird species in response to a long term decrease in questing tick pressure over the study. 
Species-specific patterns may be driven by differences in behaviour and immunity and highlight the importance of comparative approaches. Combining whole genome sequencing (WGS) and population genetics offers a novel way to identify ecological drivers of pathogen populations. An initial analysis of WGS from B. burgdorferi s.s. isolates sampled 16 years apart suggests that there is a signal of measurable evolution (Chapter 6). This suggests that demographic analyses may be applied to understand the ecological and evolutionary processes of these bacteria. This work shows how host communities, habitat and climatic factors can affect the local transmission dynamics of B. burgdorferi s.l. and the potential risk of infection to humans. Spatial and temporal heterogeneity in pathogen dynamics poses challenges for the prediction of risk. New tools such as WGS of the pathogen (Chapter 6) and blood meal analysis techniques will add power to future studies on the ecology and evolution of B. burgdorferi s.l.
An Estimation of Distribution Algorithm with Intelligent Local Search for Rule-based Nurse Rostering
Abstract:
This paper proposes a new memetic evolutionary algorithm to achieve explicit learning in rule-based nurse rostering, which involves applying a set of heuristic rules for each nurse's assignment. The main framework of the algorithm is an estimation of distribution algorithm, in which an ant-miner methodology improves the individual solutions produced in each generation. Unlike our previous work (where learning is implicit), the learning in the memetic estimation of distribution algorithm is explicit, i.e. we are able to identify building blocks directly. The overall approach learns by building a probabilistic model, i.e. an estimation of the probability distribution of individual nurse-rule pairs that are used to construct schedules. The local search processor (i.e. the ant-miner) reinforces nurse-rule pairs that receive higher rewards. A challenging real-world nurse rostering problem is used as the test problem. Computational results show that the proposed approach outperforms most existing approaches. It is suggested that the learning methodologies proposed in this paper may be applied to other scheduling problems where schedules are built systematically according to specific rules.
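As a loose illustration of the reinforcement step (the paper's ant-miner is considerably richer), one might update the probability model by blending in normalized rewards for nurse-rule pairs, in the style of a pheromone update; the evaporation rate and the assumption of non-negative rewards are illustrative choices:

```python
import numpy as np

def reinforce(probs: np.ndarray, rewards: np.ndarray,
              evaporation: float = 0.1) -> np.ndarray:
    """probs: (nurses, rules) probability model; rewards: same shape,
    non-negative scores from the local search. Pairs with higher rewards
    gain probability mass; each row is renormalized afterwards."""
    blended = (1.0 - evaporation) * probs \
        + evaporation * rewards / rewards.sum(axis=1, keepdims=True)
    return blended / blended.sum(axis=1, keepdims=True)
```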