878 results for Uncertainty in generation
Abstract:
In recent years, the issue of life expectancy has become of utmost importance to pension providers, insurance companies, and government bodies in the developed world. Significant and consistent improvements in mortality rates, and hence life expectancy, have led to unprecedented increases in the cost of providing for older ages. This has resulted in an explosion of stochastic mortality models that forecast trends in mortality data in order to anticipate future life expectancy and hence quantify the costs of providing for future aging populations. Many stochastic models of mortality rates identify linear trends in mortality rates by time, age, and cohort, and forecast these trends into the future using standard statistical methods. These approaches rely on the assumption that structural breaks in the trend do not exist or do not have a significant impact on the mortality forecasts. Recent literature has started to question this assumption. In this paper, we carry out a comprehensive investigation of the presence of structural breaks in a selection of leading mortality models. We find that structural breaks are present in the majority of cases. In particular, we find that allowing for structural breaks, where present, improves the forecast results significantly.
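To illustrate why a structural break matters for forecasting, here is a minimal sketch (not any of the paper's models): a synthetic, Lee–Carter-style mortality index is fitted with and without a slope change at a hypothetical break year, and the two fits extrapolate to different forecasts. The data, the break year, and all parameter values are illustrative assumptions.

```python
# Minimal sketch: single linear trend vs. broken trend for a mortality index.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1960, 2011)
t = years - years[0]
# Hypothetical mortality index: decline steepens after an assumed break in 1985.
kappa = -0.5 * t - 1.0 * np.maximum(t - 25, 0) + rng.normal(0, 1.5, t.size)

# Model 1: single linear trend.
X1 = np.column_stack([np.ones_like(t), t])
beta1, rss1 = np.linalg.lstsq(X1, kappa, rcond=None)[:2]

# Model 2: linear trend with a slope change (structural break) at the candidate year.
X2 = np.column_stack([np.ones_like(t), t, np.maximum(t - 25, 0)])
beta2, rss2 = np.linalg.lstsq(X2, kappa, rcond=None)[:2]

print("RSS without break:", float(rss1[0]))
print("RSS with break:   ", float(rss2[0]))

# Forecasts diverge because the post-break slope, not the full-sample slope,
# is extrapolated once the break is allowed for.
future = np.arange(t[-1] + 1, t[-1] + 21)
f1 = beta1[0] + beta1[1] * future
f2 = beta2[0] + beta2[1] * future + beta2[2] * np.maximum(future - 25, 0)
print(f"Divergence after 20 years: {f1[-1] - f2[-1]:.1f}")
```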
Abstract:
To provide in-time reactions to a large volume of surveillance data, uncertainty-enabled event reasoning frameworks for CCTV- and sensor-based intelligent surveillance systems have been introduced to model and infer events of interest. However, most of the existing works do not consider decision making under uncertainty, which is important for surveillance operators. In this paper, we extend an event reasoning framework for decision support, which enables our framework to predict, rank, and raise alarms for threats from multiple heterogeneous sources.
Abstract:
Credal networks relax the precise probability requirement of Bayesian networks, enabling a richer representation of uncertainty in the form of closed convex sets of probability measures. The increase in expressiveness comes at the expense of higher computational costs. In this paper, we present a new variable elimination algorithm for exactly computing posterior inferences in extensively specified credal networks, which is empirically shown to outperform a state-of-the-art algorithm. The algorithm is then turned into a provably good approximation scheme, that is, a procedure that for any input is guaranteed to return a solution not worse than the optimum by a given factor. Remarkably, we show that when the networks have bounded treewidth and bounded number of states per variable the approximation algorithm runs in time polynomial in the input size and in the inverse of the error factor, thus being the first known fully polynomial-time approximation scheme for inference in credal networks.
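As background for readers unfamiliar with the setting (standard notation, not taken from the paper): inference in a credal network with credal set K amounts to computing bounds on posterior probabilities, and an approximation scheme returns a value within a prescribed factor of such a bound.

```latex
% Lower/upper posterior probabilities over the credal set K (a closed convex
% set of joint distributions); exact inference computes these extrema.
\underline{P}(x \mid e) = \min_{P \in K} P(x \mid e),
\qquad
\overline{P}(x \mid e) = \max_{P \in K} P(x \mid e).
```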
Abstract:
This paper explores semi-qualitative probabilistic networks (SQPNs) that combine numeric and qualitative information. We first show that exact inference with SQPNs is NP^PP-complete. We then show that existing qualitative relations in SQPNs (plus probabilistic logic and imprecise assessments) can be dealt with effectively through multilinear programming. We then discuss learning: we consider a maximum likelihood method that generates point estimates given an SQPN and empirical data, and we describe a Bayesian-minded method that employs the Imprecise Dirichlet Model to generate set-valued estimates.
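For the set-valued estimates mentioned above, Walley's Imprecise Dirichlet Model yields interval estimates of the following standard form (notation is mine, not the paper's): with N observations, n_x of them in category x, and hyperparameter s > 0,

```latex
\underline{P}(x) = \frac{n_x}{N + s},
\qquad
\overline{P}(x) = \frac{n_x + s}{N + s}.
```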
Abstract:
Radiotherapy is commonly planned on the basis of physical dose received by the tumour and surrounding normal tissue, with margins added to address the possibility of geometric miss. However, recent experimental evidence suggests that intercellular signalling results in a given cell's survival also depending on the dose received by neighbouring cells. A model of radiation-induced cell killing and signalling was used to analyse how this effect depends on dose and margin choices. Effective Uniform Doses were calculated for model tumours in both idealised cases with no delivery uncertainty and more realistic cases incorporating geometric uncertainty. In highly conformal irradiation, a lack of signalling from outside the target leads to reduced target cell killing, equivalent to under-dosing by up to 10% compared to large uniform fields. This effect is significantly reduced when higher doses per fraction are considered, both increasing the level of cell killing and reducing margin sensitivity. These effects may limit the achievable biological precision of techniques such as stereotactic radiotherapy even in the absence of geometric uncertainties, although it is predicted that larger fraction sizes reduce the relative contribution of cell signalling driven effects. These observations may contribute to understanding the efficacy of hypo-fractionated radiotherapy.
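For context, the quantities referred to above are conventionally built on the linear-quadratic cell-survival model, with the equivalent (effective) uniform dose defined as the uniform dose producing the same mean survival as the actual dose distribution. This is standard background only; the intercellular-signalling term used in the paper is not reproduced here.

```latex
% Linear-quadratic survival after dose D, mean survival over N voxels/cells,
% and the equivalent uniform dose defined implicitly through equal survival.
S(D) = \exp\!\bigl(-(\alpha D + \beta D^{2})\bigr),
\qquad
\overline{S} = \frac{1}{N}\sum_{i=1}^{N} S(D_i),
\qquad
S(\mathrm{EUD}) = \overline{S}.
```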
Abstract:
In many CCTV and sensor network based intelligent surveillance systems, a number of attributes or criteria are used to individually evaluate the degree of potential threat of a suspect. The outcomes for these attributes generally come from analytical algorithms operating on data that are often pervaded with uncertainty and incompleteness. As a result, such individual threat evaluations are often inconsistent, and individual evaluations can change as time elapses. Therefore, integrating heterogeneous threat evaluations with temporal influence to obtain a better overall evaluation is a challenging issue. So far, this issue has rarely been considered by existing event reasoning frameworks under uncertainty in sensor network based surveillance. In this paper, we first propose a weighted aggregation operator based on a set of principles that constrain the fusion of individual threat evaluations. Then, we propose a method to integrate the temporal influence on threat evaluation changes. Finally, we demonstrate the usefulness of our system with a decision support event modeling framework using an airport security surveillance scenario.
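A minimal, hypothetical sketch of the two ingredients described above: a weighted aggregation of per-attribute threat scores, followed by an exponential temporal discount of older overall evaluations. The weights, decay rate, and scores are illustrative and are not taken from the paper.

```python
# Sketch: weighted fusion of attribute scores plus temporal discounting.
from math import exp

def aggregate(scores, weights):
    """Weighted average of per-attribute threat scores in [0, 1]."""
    return sum(s * w for s, w in zip(scores, weights)) / sum(weights)

def temporal_fuse(evaluations, decay=0.1):
    """Fuse timestamped overall evaluations, discounting older ones.

    evaluations: list of (age_in_seconds, score) pairs.
    """
    weights = [exp(-decay * age) for age, _ in evaluations]
    return sum(w * s for w, (_, s) in zip(weights, evaluations)) / sum(weights)

# Example: three hypothetical attributes (loitering, bag abandonment, access violation).
now_score = aggregate([0.8, 0.3, 0.6], weights=[0.5, 0.2, 0.3])
overall = temporal_fuse([(0, now_score), (60, 0.4), (300, 0.2)])
print(round(overall, 3))
```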
Abstract:
The situation calculus has been applied widely in artificial intelligence to model and reason about actions and changes in dynamic systems. Since actions carried out by agents constantly change the agents' beliefs, how to manage these changes is a very important issue. Shapiro et al. [22] is one of the studies that considered this issue. However, that framework does not consider the problem of noisy sensing, which is often present in real-world applications. As a consequence, noisy sensing actions in that framework lead to an agent facing an inconsistent situation, after which the agent cannot proceed further. In this paper, we investigate how noisy sensing actions can be handled in iterated belief change within the situation calculus formalism. We extend the framework proposed in [22] with the capability of managing noisy sensing actions. We demonstrate that an agent can still detect the actual situation when the ratio of noisy to accurate sensing actions is limited. We prove that our framework subsumes the iterated belief change strategy in [22] when all sensing actions are accurate. Furthermore, we prove that our framework can adequately handle belief introspection, mistaken beliefs, belief revision and belief update even with noisy sensing, as done in [22] with accurate sensing actions only.
Abstract:
Photodynamic therapy involves delivery of a photosensitising drug that is activated by light of a specific wavelength, resulting in the generation of highly reactive radicals. This activated species can cause destruction of targeted cells. Application of this process for the treatment of microbial infections has been termed "photodynamic antimicrobial chemotherapy" (PACT). In the treatment of chronic wounds, the delivery of photosensitising agents is often impeded by the presence of a thick hyperkeratotic/necrotic tissue layer, reducing their therapeutic efficacy. Microneedles (MNs) are an emerging drug delivery technology that has been demonstrated to successfully penetrate the outer layers of the skin whilst minimising damage to skin barrier function. Delivering photosensitising drugs using this platform has been demonstrated to have several advantages over conventional photodynamic therapy, such as painless application, reduced erythema, enhanced cosmetic results and improved intradermal delivery. The aim of this study was to physically characterise dissolving MNs loaded with the photosensitising agent methylene blue and to assess their photodynamic antimicrobial activity. Dissolving MNs were fabricated from aqueous blends of Gantrez(®) AN-139 co-polymer containing varying loadings of methylene blue. A height reduction of 29.8% was observed for MNs prepared from blends containing 0.5% w/w methylene blue following application of a total force of 70.56 N/array. A previously validated insertion test was used to assess the effect of drug loading on MN insertion into a wound model. Staphylococcus aureus, Escherichia coli and Candida albicans biofilms were incubated with various methylene blue concentrations within the range delivered by MNs in vitro (0.1-2.5 mg/mL) and either irradiated at 635 nm using a Paterson Lamp or subjected to a dark period. Microbial susceptibility to PACT was determined by assessing the total viable count. Kill rates of >96% were achieved for S. aureus, and >99% for E. coli and C. albicans, with the combination of PACT and methylene blue concentrations between 0.1 and 2.5 mg/mL. A reduction in the colony count was also observed when the photosensitiser was incorporated without irradiation; this reduction was more notable in S. aureus and E. coli strains than in C. albicans.
Abstract:
Possibilistic answer set programming (PASP) unites answer set programming (ASP) and possibilistic logic (PL) by associating certainty values with rules. The resulting framework allows non-monotonic reasoning and reasoning under uncertainty to be combined in a single framework. While PASP has been well studied for possibilistic definite and possibilistic normal programs, we argue that the current semantics of possibilistic disjunctive programs are not entirely satisfactory. The problem is twofold. First, the treatment of negation-as-failure in existing approaches follows an all-or-nothing scheme that is hard to match with the graded notion of proof underlying PASP. Second, we advocate that the notion of disjunction can be interpreted in several ways. In particular, in addition to the view of ordinary ASP, where disjunctions are used to induce a non-deterministic choice, the possibilistic setting naturally leads to a more epistemic view of disjunction. In this paper, we propose a semantics for possibilistic disjunctive programs, discussing both views on disjunction. Extending our earlier work, we interpret such programs as sets of constraints on possibility distributions, whose least specific solutions correspond to answer sets.
Abstract:
Inferences in directed acyclic graphs associated with probability intervals and sets of probabilities are NP-hard, even for polytrees. We propose: 1) an improvement on Tessem’s A/R algorithm for inferences on polytrees associated with probability intervals; 2) a new algorithm for approximate inferences based on local search; 3) branch-and-bound algorithms that combine the previous techniques. The first two algorithms produce complementary approximate solutions, while branch-and-bound procedures can generate either exact or approximate solutions. We report improvements on existing techniques for inference with probability sets and intervals, in some cases reducing computational effort by several orders of magnitude.
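To make the interval setting concrete, the sketch below (which is neither Tessem's A/R algorithm nor the paper's branch-and-bound procedure) computes bounds on P(B = 1) for a two-node chain with interval-valued tables by enumerating the interval endpoints; since the target probability is multilinear in the free parameters, its extrema lie at these vertices. All numbers are illustrative.

```python
# Sketch: bounds on P(B=1) for a tiny chain A -> B with interval-valued tables.
from itertools import product

P_A1 = (0.3, 0.5)           # P(A=1) in [0.3, 0.5]
P_B1_given_A1 = (0.7, 0.9)  # P(B=1 | A=1) in [0.7, 0.9]
P_B1_given_A0 = (0.1, 0.2)  # P(B=1 | A=0) in [0.1, 0.2]

# Enumerate the corner points of the box of admissible parameters.
values = [pa * pb1 + (1 - pa) * pb0
          for pa, pb1, pb0 in product(P_A1, P_B1_given_A1, P_B1_given_A0)]

print("P(B=1) in [%.3f, %.3f]" % (min(values), max(values)))
```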
Abstract:
In the reinsurance market, the risks natural catastrophes pose to portfolios of properties must be quantified, so that they can be priced, and insurance offered. The analysis of such risks at a portfolio level requires a simulation of up to 800 000 trials with an average of 1000 catastrophic events per trial. This is sufficient to capture risk for a global multi-peril reinsurance portfolio covering a range of perils including earthquake, hurricane, tornado, hail, severe thunderstorm, wind storm, storm surge and riverine flooding, and wildfire. Such simulations are both computation and data intensive, making the application of high-performance computing techniques desirable.
In this paper, we explore the design and implementation of portfolio risk analysis on both multi-core and many-core computing platforms. Given a portfolio of property catastrophe insurance treaties, key risk measures, such as probable maximum loss, are computed by taking both primary and secondary uncertainties into account. Primary uncertainty is associated with whether or not an event occurs in a simulated year, while secondary uncertainty captures the uncertainty in the level of loss due to the use of simplified physical models and limitations in the available data. A combination of fast lookup structures, multi-threading and careful hand tuning of numerical operations is required to achieve good performance. Experimental results are reported for multi-core processors and for systems using NVIDIA graphics processing units and Intel Xeon Phi many-core accelerators.
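A simplified, hypothetical sketch of the simulation structure described above: primary uncertainty is modelled as whether and how many events occur in a simulated year, secondary uncertainty as the loss amount given an occurrence, and the probable maximum loss is read off as a tail quantile of the simulated annual losses. The event rate, loss parameters, and return period are illustrative, not the paper's.

```python
# Sketch: annual-loss simulation with primary and secondary uncertainty.
import numpy as np

rng = np.random.default_rng(42)
n_trials = 100_000            # simulated years (the paper uses up to 800 000)
event_rate = 10.0             # hypothetical mean number of events per year

annual_losses = np.empty(n_trials)
for i in range(n_trials):
    n_events = rng.poisson(event_rate)                            # primary uncertainty
    losses = rng.lognormal(mean=12.0, sigma=1.0, size=n_events)   # secondary uncertainty
    annual_losses[i] = losses.sum()

# Probable maximum loss at a chosen return period, e.g. 250 years (99.6% quantile).
pml_250yr = np.quantile(annual_losses, 1 - 1 / 250)
print(f"PML (250-year return period): {pml_250yr:,.0f}")
```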
Abstract:
Observations of the Rossiter–McLaughlin (RM) effect provide information on star–planet alignments, which can inform planetary migration and evolution theories. Here, we go beyond classical RM modeling and explore the impact of a convective blueshift that varies across the stellar disk and of non-Gaussian stellar photospheric profiles. We simulated an aligned hot Jupiter with a four-day orbit about a Sun-like star and injected center-to-limb velocity (and profile shape) variations based on radiative 3D magnetohydrodynamic simulations of solar surface convection. The residuals between our modeling and classical RM modeling depended on the intrinsic profile width and v sin i; the amplitude of the residuals increased with increasing v sin i and with decreasing intrinsic profile width. For slowly rotating stars the center-to-limb convective variation dominated the residuals (with amplitudes of tens of cm s−1 to ~1 m s−1); however, for faster rotating stars the dominant residual signature was due to a non-Gaussian intrinsic profile (with amplitudes from 0.5 to 9 m s−1). When the impact parameter was 0, neglecting to account for the convective center-to-limb variation led to an uncertainty in the obliquity of ~10°–20°, even though the true v sin i was known. Additionally, neglecting to properly model an asymmetric intrinsic profile had a greater impact for more rapidly rotating stars (e.g., v sin i = 6 km s−1) and caused systematic errors on the order of ~20° in the measured obliquities. Hence, neglecting the impact of stellar surface convection may bias star–planet alignment measurements and consequently theories of planetary migration and evolution.
Abstract:
Complexity and environmental uncertainty in public sector systems require leaders to balance the administrative practices necessary to be aligned and efficient in the management of routine challenges with the adaptive practices required to respond to complex and dynamic circumstances. Conventional notions of leadership in the field of public administration do not fully explain the role of leadership in enabling and balancing the entanglement of formal, top-down, administrative functions and informal, emergent, adaptive functions within public sector settings with different levels of complexity. Drawing on and extending existing complexity leadership constructs, this paper explores how change was enabled over the duration of three urban regeneration projects, representing high, medium and low levels of project complexity respectively. The data reveal six distinct yet interconnected functions of enabling leadership identified within the three urban regeneration projects. The paper contributes to our understanding of how leadership is enacted and poses questions for those engaged in leading in complex public sector settings.
Abstract:
The distribution of a clock signal with high spatial precision (low skew) and temporal precision (low jitter) in high-speed synchronous systems has become an increasingly time-consuming and complex task due to technology scaling. With shrinking device dimensions and the growing integration of more functionality into Integrated Circuits (ICs), the precision of clock signal transitions has been increasingly affected by process, voltage, and temperature variations. This thesis addresses the problem of clock uncertainty in high-speed ICs, with the aim of determining the limits of the synchronous design paradigm. In pursuit of this main objective, the thesis proposes four new uncertainty models with different scopes of application. The first model estimates the uncertainty introduced by a static CMOS inverter, based on parameters that are simple and sufficiently generic to be used to predict the timing limitations of more complex circuits, even in the early design phase. The second model estimates the uncertainty in repeaters driving RC interconnects, allowing the clock distribution network to be sized with low computational effort. The third model estimates the accumulation of uncertainty in chains of repeaters. Since this model takes the correlation between noise sources into account, it is especially useful for guiding clock distribution and power delivery techniques that minimise the accumulation of uncertainty. The fourth model estimates the timing uncertainty in systems with multiple synchronisation domains. This model can easily be incorporated into an automatic tool to determine the best topology for a given application or to evaluate the system's tolerance to power supply noise. Finally, using the proposed models, trends in clock precision are discussed. It is concluded that the limits of clock precision are ultimately imposed by sources of dynamic variation, which are expected to grow under the current device scaling approach. The thesis therefore advocates seeking solutions at levels of abstraction other than the physical level, solutions that can contribute to increasing IC performance while having a smaller impact on the assumptions of the synchronous design paradigm.
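As background to the third model described above (standard variance algebra, not the thesis's formulation): when per-stage timing errors with standard deviations sigma_i and pairwise correlations rho_ij accumulate along a chain of n repeaters, the total jitter is

```latex
\sigma_{\mathrm{total}}^{2} = \sum_{i=1}^{n} \sigma_i^{2}
 + 2 \sum_{i<j} \rho_{ij}\,\sigma_i\,\sigma_j ,
```

which reduces to a sum in quadrature for uncorrelated noise sources (rho_ij = 0) and to (sum_i sigma_i)^2 for fully correlated ones (rho_ij = 1), which is why the correlation structure between noise sources matters for how uncertainty accumulates along the chain.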