93 results for Probabilities.
Abstract:
A reduced-density-operator description is developed for coherent optical phenomena in many-electron atomic systems, utilizing a Liouville-space, multiple-mode Floquet–Fourier representation. The Liouville-space formulation provides a natural generalization of the ordinary Hilbert-space (Hamiltonian) R-matrix-Floquet method, which has been developed for multi-photon transitions and laser-assisted electron–atom collision processes. In these applications, the R-matrix-Floquet method has been demonstrated to be capable of providing an accurate representation of the complex, multi-level structure of many-electron atomic systems in bound, continuum, and autoionizing states. The ordinary Hilbert-space (Hamiltonian) formulation of the R-matrix-Floquet method has been implemented in highly developed computer programs, which can provide a non-perturbative treatment of the interaction of a classical, multiple-mode electromagnetic field with a quantum system. This quantum system may correspond to a many-electron, bound atomic system and a single continuum electron. However, including pseudo-states in the expansion of the many-electron atomic wave function can provide a representation of multiple continuum electrons. The 'dressed' many-electron atomic states thereby obtained can be used in a realistic non-perturbative evaluation of the transition probabilities for an extensive class of atomic collision and radiation processes in the presence of intense electromagnetic fields. In order to incorporate environmental relaxation and decoherence phenomena, we propose to utilize the ordinary Hilbert-space (Hamiltonian) R-matrix-Floquet method as a starting-point for a Liouville-space (reduced-density-operator) formulation. 
To illustrate how the Liouville-space R-matrix-Floquet formulation can be implemented for coherent atomic radiative processes, we discuss applications to electromagnetically induced transparency, as well as to related pump–probe optical phenomena, and also to the unified description of radiative and dielectronic recombination in electron–ion beam interactions and high-temperature plasmas.
Abstract:
A wireless energy harvesting protocol is proposed for a decode-and-forward relay-assisted secondary user (SU) network in a cognitive spectrum sharing paradigm. An expression for the outage probability of the relay-assisted cognitive network is derived subject to the following power constraints: 1) the maximum power that the source and the relay in the SU network can transmit from the harvested energy, 2) the peak interference power from the source and the relay in the SU network at the primary user (PU) network, and 3) the interference power of the PU network at the relay-assisted SU network. The results show that as the energy harvesting conversion efficiency improves, the relay-assisted network with the proposed wireless energy harvesting protocol can operate with outage probabilities below 20% for some practical applications.
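Outage probability here is the chance that the instantaneous SNR falls below a target threshold. As a hedged illustration only (a hypothetical single Rayleigh-fading link, not the paper's closed-form expression with harvesting and interference constraints), a Monte Carlo estimate might look like:

```python
import math
import random

def outage_probability(snr_db, threshold_db, trials=200_000, seed=1):
    """Monte Carlo outage estimate for a single Rayleigh-fading link:
    outage occurs when the instantaneous SNR drops below the target.
    (Illustrative only; the paper derives a closed-form expression
    for a far richer relay-assisted, energy-harvesting setting.)"""
    rng = random.Random(seed)
    mean_snr = 10 ** (snr_db / 10)
    threshold = 10 ** (threshold_db / 10)
    # |h|^2 is exponentially distributed with unit mean under Rayleigh fading
    outages = sum(1 for _ in range(trials)
                  if mean_snr * rng.expovariate(1.0) < threshold)
    return outages / trials

# Analytic check for this toy link: P_out = 1 - exp(-threshold / mean_snr)
```

At 10 dB mean SNR and a 0 dB threshold the estimate lands near the analytic value 1 - e^{-0.1}, and outage falls as the power budget grows.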
Abstract:
The timing and sequencing of fertility transitions and early-life mortality declines in historical Western societies indicate that reductions in sibship (number of siblings) may have contributed to improvements in infant health. Surprisingly, however, this demographic relationship has received little attention in empirical research. We outline the difficulties associated with establishing the effect of sibship on infant mortality and discuss the inherent bias associated with conventional empirical approaches. We offer a solution that permits an empirical test of this relationship while accounting for reverse causality and potential omitted variable bias. Our approach is illustrated by evaluating the causal impact of family size on infant mortality using genealogical data from 13 German parishes spanning the sixteenth, seventeenth, eighteenth, and nineteenth centuries. Overall, our findings do not support the hypothesis that declining fertility led to increased infant survival probabilities in historical populations.
Abstract:
Why do some banks fail in financial crises while others survive? This article answers this question by analysing the effect of the Dutch financial crisis of the 1920s on 142 banks, of which 33 failed. We find that choices of balance sheet composition and product market strategy made in the lead-up to the crisis had a significant impact on banks’ subsequent chances of experiencing distress. We document that high-risk banks – those operating highly-leveraged portfolios and attracting large quantities of deposits – were more likely to fail. Branching and international activities also increased banks’ default probabilities. We measure the effects of board interlocks, which have been characterized in the extant literature as contributing to the Dutch crisis. We find that boards mattered: failing banks had smaller boards, shared directors with smaller and very profitable banks and had a lower concentration of interlocking directorates in non-financial firms.
Abstract:
Hardware impairments in physical transceivers are known to have a deleterious effect on communication systems; however, very few contributions have investigated their impact on relaying. This paper quantifies the impact of transceiver impairments in a two-way amplify-and-forward configuration. More specifically, the effective signal-to-noise-and-distortion ratios at both transmitter nodes are obtained. These are used to deduce exact and asymptotic closed-form expressions for the outage probabilities (OPs), as well as tractable formulations for the symbol error rates (SERs). It is explicitly shown that non-zero lower bounds on the OP and SER exist in the high-power regime; this stands in contrast to the special case of ideal hardware, where the OP and SER go asymptotically to zero.
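The saturation behind those non-zero lower bounds can be seen in a common per-hop distortion model (an assumption on our part; the paper's two-way AF expressions are more involved): with aggregate impairment level kappa, distortion power scales with the signal, so the SNDR is capped at 1/kappa^2 and an outage floor appears whenever the target threshold exceeds that ceiling.

```python
def sndr(snr, kappa):
    """Effective signal-to-noise-and-distortion ratio for a single hop
    under an assumed aggregate impairment level kappa. Distortion power
    grows with the signal, so the SNDR saturates at 1 / kappa**2."""
    return snr / (kappa ** 2 * snr + 1.0)

# With kappa = 0.1 the ceiling is 100 (20 dB): raising transmit power
# beyond that point cannot reduce the outage probability any further.
```

With ideal hardware (kappa = 0) the SNDR reduces to the SNR and grows without bound, matching the asymptotically vanishing OP of the ideal case.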
Abstract:
We consider the dynamics of an array of mutually interacting cavities, each containing an ensemble of N two-level atoms. By exploring the possibilities offered by ensembles of various sizes and a range of atom-light coupling and photon-hopping strengths, we investigate the generation of multisite entanglement, as well as the performance of excitation transfer across the array, resulting from the competition between on-site nonlinearities of the matter-light interaction and intersite photon hopping. In particular, for a three-cavity interacting system it is observed that the initial excitation in the first cavity completely transfers to the ensemble in the third cavity through the hopping of photons between the adjacent cavities. Probabilities of the transfer of excitation of the cavity modes and ensembles exhibit characteristics of fast and slow oscillations governed by the coupling and hopping parameters, respectively. In the large-hopping case, by seeding an initial excitation in the cavity at the center of the array, a tripartite W state, as well as a bipartite maximally entangled state, is obtained, depending on the interaction time. Population of the ensemble in a cavity has a positive impact on the rate of excitation transfer between the ensembles and their local cavity modes. In particular, for ensembles of five to seven atoms, tripartite W states can be produced even when the hopping rate is comparable to the cavity-atom coupling rate. A similar behavior of the transfer of excitation is observed for a four-coupled-cavity system with two initial excitations.
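Complete first-to-third-cavity transfer has a simple single-excitation analogue. In a bare three-site hopping chain (a toy stand-in we introduce here, without the atomic ensembles and on-site nonlinearities of the actual model), the Hamiltonian H = J(|1><2| + |2><3| + h.c.) has eigenvalues 0 and +/- sqrt(2) J, and the excitation initially on site 1 arrives entirely at site 3 at time t = pi / (sqrt(2) J):

```python
import math

def site3_population(t, J=1.0):
    """Population on site 3 of a three-site hopping chain when the
    excitation starts on site 1. Diagonalizing
    H = J(|1><2| + |2><3| + h.c.)  (eigenvalues 0, +/- sqrt(2) J)
    gives the site-3 amplitude c3(t) = -1/2 + (1/2) cos(sqrt(2) J t)."""
    c3 = -0.5 + 0.5 * math.cos(math.sqrt(2) * J * t)
    return c3 ** 2

# Complete transfer to site 3 occurs at t* = pi / (sqrt(2) J).
t_star = math.pi / math.sqrt(2)
```

The cosine argument sqrt(2) J t is the "fast" coupling-governed oscillation in this stripped-down picture; the ensemble dynamics of the full model add the slower envelope described above.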
Abstract:
The equiprobability bias (EB) is a tendency to believe that every process in which randomness is involved corresponds to a fair distribution, with equal probabilities for any possible outcome. The EB is known to affect both children and adults, and to increase with probability education. Because it results in probability errors resistant to pedagogical interventions, it has been described as a deep misconception about randomness: the erroneous belief that randomness implies uniformity. In the present paper, we show that the EB is actually not the result of a conceptual error about the definition of randomness. On the contrary, the mathematical theory of randomness does imply uniformity. However, the EB is still a bias, because people tend to assume uniformity even in the case of events that are not random. The pervasiveness of the EB reveals a paradox: The combination of random processes is not necessarily random. The link between the EB and this paradox is discussed, and suggestions are made regarding educational design to overcome difficulties encountered by students as a consequence of the EB.
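The paradox in the last point is easy to demonstrate: each die is uniform, but the sum of two dice is not. A minimal check:

```python
from collections import Counter
from itertools import product

# Each fair die is uniform on 1..6, yet the sum of two dice is not
# uniform on 2..12: combining random processes need not preserve
# equiprobability.
counts = Counter(a + b for a, b in product(range(1, 7), repeat=2))
probs = {s: c / 36 for s, c in sorted(counts.items())}
# P(sum = 7) = 6/36, while P(sum = 2) = 1/36.
```

A student exhibiting the equiprobability bias would assign 1/11 to every sum, which is exactly the error the paper traces back to this paradox.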
Abstract:
Credal networks are graph-based statistical models whose parameters take values in a set, instead of being sharply specified as in traditional statistical models (e.g., Bayesian networks). The computational complexity of inferences on such models depends on the irrelevance/independence concept adopted. In this paper, we study inferential complexity under the concepts of epistemic irrelevance and strong independence. We show that inferences under strong independence are NP-hard even in trees with binary variables except for a single ternary one. We prove that under epistemic irrelevance the polynomial-time complexity of inferences in credal trees is not likely to extend to more general models (e.g., singly connected topologies). These results clearly distinguish networks that admit efficient inferences and those where inferences are most likely hard, and settle several open questions regarding their computational complexity. We show that these results remain valid even if we disallow the use of zero probabilities. We also show that the computation of bounds on the probability of the future state in a hidden Markov model is the same whether we assume epistemic irrelevance or strong independence, and we prove an analogous result for inference in Naive Bayes structures. These inferential equivalences are important for practitioners, as hidden Markov models and Naive Bayes networks are used in real applications of imprecise probability.
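To make the strong-independence setting concrete, here is a toy sketch (a two-node chain with hypothetical interval parameters we invent for illustration; real credal-network inference is far harder, as the complexity results above show). Under strong independence the bounds on a marginal are attained at extreme points of the local credal sets, so brute-force vertex enumeration works at this scale:

```python
from itertools import product

# Tiny credal chain X -> Y with interval-valued parameters
# (hypothetical numbers chosen for illustration).
px1_bounds = (0.3, 0.5)           # P(X=1) in [0.3, 0.5]
py1_given_x = {0: (0.1, 0.2),     # P(Y=1 | X=0) in [0.1, 0.2]
               1: (0.6, 0.9)}     # P(Y=1 | X=1) in [0.6, 0.9]

# Under strong independence the bounds on P(Y=1) are attained at
# vertices of the local credal sets, so we enumerate extreme points.
values = [(1 - p) * q0 + p * q1
          for p, q0, q1 in product(px1_bounds,
                                   py1_given_x[0],
                                   py1_given_x[1])]
lower, upper = min(values), max(values)  # bounds on P(Y=1)
```

The number of vertex combinations grows exponentially with the network size, which is why the hardness results above matter in practice.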
Abstract:
We present TANC, a TAN classifier (tree-augmented naive) based on imprecise probabilities. TANC models prior near-ignorance via the Extreme Imprecise Dirichlet Model (EDM). A first contribution of this paper is the experimental comparison between the EDM and the global Imprecise Dirichlet Model (IDM) using the naive credal classifier (NCC), with the aim of showing that the EDM is a sensible approximation of the global IDM. TANC is able to deal with missing data in a conservative manner by considering all possible completions (without assuming them to be missing-at-random), but avoiding an exponential increase of the computational time. By experiments on real data sets, we show that TANC is more reliable than the Bayesian TAN and that it provides better performance compared to previous TANs based on imprecise probabilities. Yet, TANC is sometimes outperformed by NCC because the learned TAN structures are too complex; this calls for novel algorithms for learning the TAN structures, better suited for an imprecise probability classifier.
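For context, the interval estimates involved can be sketched for a single count via the standard IDM formulas with equivalent sample size s (the actual classifiers operate on full contingency tables, and the EDM works with the extreme points of such models):

```python
def idm_interval(count, total, s=2.0):
    """Lower/upper probability of an outcome under the Imprecise
    Dirichlet Model: [n / (N + s), (n + s) / (N + s)], where s is the
    equivalent sample size. With little data the interval is wide and
    a credal classifier can suspend judgment; it narrows as N grows."""
    return count / (total + s), (count + s) / (total + s)

lo, hi = idm_interval(count=3, total=10)   # (0.25, 5/12)
```

Note how the same count fraction observed in a larger sample, say 300 of 1000, yields a much tighter interval around 0.3.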
Abstract:
Credal networks generalize Bayesian networks by relaxing the requirement of precision of probabilities. Credal networks are considerably more expressive than Bayesian networks, but this makes belief updating NP-hard even on polytrees. We develop a new efficient algorithm for approximate belief updating in credal networks. The algorithm is based on an important representation result we prove for general credal networks: that any credal network can be equivalently reformulated as a credal network with binary variables; moreover, the transformation, which is considerably more complex than in the Bayesian case, can be implemented in polynomial time. The equivalent binary credal network is then updated by L2U, a loopy approximate algorithm for binary credal networks. Overall, we generalize L2U to non-binary credal networks, obtaining a scalable algorithm for the general case, which is approximate only because of its loopy nature. The accuracy of the inferences with respect to other state-of-the-art algorithms is evaluated by extensive numerical tests.
Abstract:
This paper explores the application of semi-qualitative probabilistic networks (SQPNs) that combine numeric and qualitative information to computer vision problems. Our version of SQPN allows qualitative influences and imprecise probability measures using intervals. We describe an Imprecise Dirichlet model for parameter learning and an iterative algorithm for evaluating posterior probabilities, maximum a posteriori and most probable explanations. Experiments on facial expression recognition and image segmentation problems are performed using real data.
Abstract:
Hidden Markov models (HMMs) are widely used models for sequential data. As with other probabilistic graphical models, they require the specification of precise probability values, which can be too restrictive for some domains, especially when data are scarce or costly to acquire. We present a generalized version of HMMs, whose quantification can be done by sets of, instead of single, probability distributions. Our models have the ability to suspend judgment when there is not enough statistical evidence, and can serve as a sensitivity analysis tool for standard non-stationary HMMs. Efficient inference algorithms are developed to address standard HMM usage such as the computation of likelihoods and most probable explanations. Experiments with real data show that the use of imprecise probabilities leads to more reliable inferences without compromising efficiency.
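For reference, the precise-probability building block being generalized is the standard forward recursion for HMM likelihoods; the imprecise variant propagates lower/upper bounds over sets of such parameter choices instead of the single point values used here (a toy sketch, not the authors' algorithm):

```python
def forward_likelihood(pi, A, B, obs):
    """Standard HMM forward recursion: P(obs) under initial
    distribution pi, transition matrix A, and emission matrix B.
    An imprecise HMM replaces these point-valued parameters with
    sets and computes lower/upper bounds on such quantities."""
    n = len(pi)
    alpha = [pi[s] * B[s][obs[0]] for s in range(n)]
    for o in obs[1:]:
        alpha = [B[s][o] * sum(alpha[r] * A[r][s] for r in range(n))
                 for s in range(n)]
    return sum(alpha)

# Two deterministic, non-mixing states: the likelihood of an all-zeros
# observation sequence is just the initial mass placed on state 0.
lik = forward_likelihood([0.5, 0.5],
                         [[1.0, 0.0], [0.0, 1.0]],
                         [[1.0, 0.0], [0.0, 1.0]],
                         [0, 0, 0])
```

Running this recursion at the extreme points of the parameter sets, rather than at a single point, is the kind of computation whose complexity the imprecise-HMM inference algorithms must control.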
Abstract:
Credal nets are probabilistic graphical models which extend Bayesian nets to cope with sets of distributions. This feature makes the model particularly suited for the implementation of classifiers and knowledge-based systems. When working with sets of (instead of single) probability distributions, the identification of the optimal option can be based on different criteria, some of them possibly leading to multiple choices. Yet, most of the inference algorithms for credal nets are designed to compute only the bounds of the posterior probabilities. This prevents some of the existing criteria from being used. To overcome this limitation, we present two simple transformations for credal nets which make it possible to compute decisions based on the maximality and E-admissibility criteria without any modification to the inference algorithms. We also prove that these decision problems have the same complexity as standard inference, being NP^PP-hard for general credal nets and NP-hard for polytrees.
Abstract:
In this paper we present TANC, i.e., a tree-augmented naive credal classifier based on imprecise probabilities; it models prior near-ignorance via the Extreme Imprecise Dirichlet Model (EDM) (Cano et al., 2007) and deals conservatively with missing data in the training set, without assuming them to be missing-at-random. The EDM is an approximation of the global Imprecise Dirichlet Model (IDM), which considerably simplifies the computation of upper and lower probabilities; yet, having been only recently introduced, the quality of the approximation it provides still needs to be verified. As a first contribution, we extensively compare the output of the naive credal classifier (one of the few cases in which the global IDM can be exactly implemented) when learned with the EDM and the global IDM; the output of the classifier appears to be identical in the vast majority of cases, thus supporting the adoption of the EDM in real classification problems. Then, by experiments we show that TANC is more reliable than the precise TAN (learned with a uniform prior), and also that it provides better performance compared to a previous TAN model based on imprecise probabilities (Zaffalon, 2003). TANC treats missing data by considering all possible completions of the training set, while avoiding an exponential increase of the computational time; finally, we present some preliminary results with missing data.
Abstract:
Credal nets generalize Bayesian nets by relaxing the requirement of precision of probabilities. Credal nets are considerably more expressive than Bayesian nets, but this makes belief updating NP-hard even on polytrees. We develop a new efficient algorithm for approximate belief updating in credal nets. The algorithm is based on an important representation result we prove for general credal nets: that any credal net can be equivalently reformulated as a credal net with binary variables; moreover, the transformation, which is considerably more complex than in the Bayesian case, can be implemented in polynomial time. The equivalent binary credal net is updated by L2U, a loopy approximate algorithm for binary credal nets. Thus, we generalize L2U to non-binary credal nets, obtaining an accurate and scalable algorithm for the general case, which is approximate only because of its loopy nature. The accuracy of the inferences is evaluated by empirical tests.