986 results for sample complexity


Relevance:

20.00%

Publisher:

Abstract:

In recent years, wide-field sky surveys providing deep multi-band imaging have presented a new path for indirectly characterizing the progenitor populations of core-collapse supernovae (SN): systematic light curve studies. We assemble a set of 76 grizy-band Type IIP SN light curves from Pan-STARRS1, obtained over a constant survey program of 4 years and classified using both spectroscopy and machine learning-based photometric techniques. We develop and apply a new Bayesian model for the full multi-band evolution of each light curve in the sample. We find no evidence of a sub-population of fast-declining explosions (historically referred to as "Type IIL" SNe). However, we identify a highly significant relation between the plateau-phase decay rate and peak luminosity among our SNe IIP. These results argue in favor of a single parameter, likely determined by initial stellar mass, predominantly controlling the explosions of red supergiants. This relation could also be applied to supernova cosmology, offering a standardizable candle good to an intrinsic scatter of 0.2 mag. We compare each light curve to physical models from hydrodynamic simulations to estimate progenitor initial masses and other properties of the Pan-STARRS1 Type IIP SN sample. We show that correction of systematic discrepancies between modeled and observed SN IIP light curve properties, together with an expanded grid of progenitor properties, is needed to enable robust progenitor inferences from multi-band light curve samples of this kind. This work will serve as a pathfinder for photometric studies of core-collapse SNe to be conducted through future wide-field transient searches.
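The relation between plateau decay rate and peak luminosity can be illustrated with a simple least-squares fit. This is a minimal sketch on synthetic data: the linear form and the ~0.2 mag scatter mirror the abstract, but the slope, intercept, and decline-rate range below are invented for illustration and are not the paper's fitted values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical Type IIP SN sample: plateau-phase decline rates
# (mag / 100 days) and peak absolute magnitudes. Coefficients are
# illustrative placeholders, not values from the paper.
n = 76
decline_rate = rng.uniform(0.0, 2.0, n)      # mag per 100 d
true_slope, true_intercept = -1.2, -16.5     # illustrative only
peak_mag = (true_intercept + true_slope * decline_rate
            + rng.normal(0.0, 0.2, n))       # 0.2 mag intrinsic scatter

# Ordinary least-squares fit of peak magnitude against decline rate.
A = np.vstack([decline_rate, np.ones(n)]).T
(slope, intercept), *_ = np.linalg.lstsq(A, peak_mag, rcond=None)

# Residual scatter after "standardizing" with the fitted relation.
residuals = peak_mag - (intercept + slope * decline_rate)
print(f"slope={slope:.2f}, intercept={intercept:.2f}, "
      f"scatter={residuals.std():.2f} mag")
```

With a relation like this in hand, the decline rate standardizes the peak magnitude in the same spirit as the stretch correction for Type Ia SNe.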


Credal networks are graph-based statistical models whose parameters take values in a set, instead of being sharply specified as in traditional statistical models (e.g., Bayesian networks). The computational complexity of inferences on such models depends on the irrelevance/independence concept adopted. In this paper, we study inferential complexity under the concepts of epistemic irrelevance and strong independence. We show that inferences under strong independence are NP-hard even in trees with binary variables except for a single ternary one. We prove that under epistemic irrelevance the polynomial-time complexity of inferences in credal trees is not likely to extend to more general models (e.g., singly connected topologies). These results clearly distinguish networks that admit efficient inferences and those where inferences are most likely hard, and settle several open questions regarding their computational complexity. We show that these results remain valid even if we disallow the use of zero probabilities. We also show that the computation of bounds on the probability of the future state in a hidden Markov model is the same whether we assume epistemic irrelevance or strong independence, and we prove an analogous result for inference in Naive Bayes structures. These inferential equivalences are important for practitioners, as hidden Markov models and Naive Bayes networks are used in real applications of imprecise probability.
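Inference in a credal network can be illustrated on a minimal two-node model X → Y with binary variables. Under strong independence, the bounds on a marginal are attained at vertices of the (interval-valued) credal sets, so a tiny network can be solved by enumerating endpoint combinations. This is a sketch with invented interval parameters, not an algorithm from the paper; enumeration scales exponentially, which is consistent with the hardness results above.

```python
from itertools import product

# Interval-valued parameters (illustrative numbers).
p_x1 = (0.3, 0.5)            # interval for P(X=1)
p_y1_given_x0 = (0.1, 0.2)   # interval for P(Y=1 | X=0)
p_y1_given_x1 = (0.6, 0.9)   # interval for P(Y=1 | X=1)

# Enumerate all endpoint combinations (vertices of the credal sets).
values = []
for px, py0, py1 in product(p_x1, p_y1_given_x0, p_y1_given_x1):
    # P(Y=1) = P(Y=1|X=0) P(X=0) + P(Y=1|X=1) P(X=1)
    values.append(py0 * (1 - px) + py1 * px)

lower, upper = min(values), max(values)
print(f"P(Y=1) in [{lower:.3f}, {upper:.3f}]")
```

A sharp Bayesian network would return a single number here; the credal model returns the interval of all values compatible with the set-valued parameters.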


Influence diagrams are intuitive and concise representations of structured decision problems. When the problem is non-Markovian, an optimal strategy can be exponentially large in the size of the diagram. We can avoid the inherent intractability by constraining the size of admissible strategies, giving rise to limited memory influence diagrams. A natural question is then how small strategies need to be to enable efficient optimal planning. Arguably, the smallest strategies one can conceive simply prescribe an action for each time step, without considering past decisions or observations. Previous work has shown that finding such optimal strategies even for polytree-shaped diagrams with ternary variables and a single value node is NP-hard, but the case of binary variables was left open. In this paper we address this case, first noting that optimal strategies can be obtained in polynomial time for polytree-shaped diagrams with binary variables and a single value node. We then show that the same problem is NP-hard if the diagram has multiple value nodes. These two results close the fixed-parameter complexity analysis of optimal strategy selection in influence diagrams parametrized by the shape of the diagram, the number of value nodes and the maximum variable cardinality.
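The "smallest" strategies discussed above, one fixed action per time step that ignores all history, can be illustrated by brute-force enumeration on a toy two-step decision problem. All transition probabilities and utilities below are invented for the sketch; this is not the paper's construction, only an illustration of what such a strategy space looks like.

```python
from itertools import product

actions = (0, 1)
steps = 2

# P(next state = 1 | current state, action) -- illustrative numbers.
trans = {(0, 0): 0.2, (0, 1): 0.7, (1, 0): 0.4, (1, 1): 0.9}
utility = {0: 0.0, 1: 1.0}   # utility of the final state
p_s1_init = 0.5              # P(initial state = 1)

def expected_utility(strategy):
    # Propagate the marginal P(state=1) through the fixed actions:
    # the strategy never looks at observations or past decisions.
    p1 = p_s1_init
    for a in strategy:
        p1 = (1 - p1) * trans[(0, a)] + p1 * trans[(1, a)]
    return (1 - p1) * utility[0] + p1 * utility[1]

# Enumerate all |actions|^steps history-independent strategies.
best = max(product(actions, repeat=steps), key=expected_utility)
print(best, expected_utility(best))
```

Even this tiny strategy space grows exponentially in the horizon, and the results above show that with multiple value nodes the selection problem is NP-hard despite the strategies themselves being linear-sized objects.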


Credal networks are graph-based statistical models whose parameters take values in a set, instead of being sharply specified as in traditional statistical models (e.g., Bayesian networks). The result of inferences with such models depends on the irrelevance/independence concept adopted. In this paper, we study the computational complexity of inferences under the concepts of epistemic irrelevance and strong independence. We strengthen complexity results by showing that inferences with strong independence are NP-hard even in credal trees with ternary variables, which indicates that tractable algorithms, including the existing one for epistemic trees, cannot be used for strong independence. We prove that the polynomial-time complexity of inferences in credal trees under epistemic irrelevance is not likely to extend to more general models, because the problem becomes NP-hard even in simple polytrees. These results draw a definite line between networks with efficient inferences and those where inferences are hard, and close several open questions regarding the computational complexity of such models.


Semi-qualitative probabilistic networks (SQPNs) merge two important graphical model formalisms: Bayesian networks and qualitative probabilistic networks. They provide a very general modeling framework by allowing the combination of numeric and qualitative assessments over a discrete domain, and can be compactly encoded by exploiting the same factorization of joint probability distributions that underlies Bayesian networks. This paper explores the computational complexity of semi-qualitative probabilistic networks, taking polytree-shaped networks as its main target. We show that the inference problem is coNP-complete for binary polytrees with multiple observed nodes. We also show that inferences can be performed in linear time if there is a single observed node, which is a relevant practical case. Because our proof is constructive, we obtain an efficient linear-time algorithm for SQPNs under such assumptions. To the best of our knowledge, this is the first exact polynomial-time algorithm for SQPNs. Together these results provide a clear picture of the inferential complexity in polytree-shaped SQPNs.
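The mix of numeric and qualitative assessments that SQPNs allow can be illustrated with the simplest qualitative constraint: a positive influence of X on Y, which requires the probability of Y to be non-decreasing in X. The check below is a minimal sketch with an invented CPT, not code from the paper.

```python
# A qualitative "+" influence of binary X on binary Y requires
#   P(Y=1 | X=1) >= P(Y=1 | X=0).
# In an SQPN this sign constraint can coexist with sharply specified
# numeric parameters elsewhere in the network.

def satisfies_positive_influence(cpt):
    """cpt[x] = P(Y=1 | X=x) for binary X."""
    return cpt[1] >= cpt[0]

numeric_cpt = {0: 0.25, 1: 0.6}   # sharply specified part (illustrative)
print(satisfies_positive_influence(numeric_cpt))   # True: 0.6 >= 0.25
```

Inference must then reason over all numeric completions consistent with such signs, which is where the coNP-completeness for multiple observations comes from.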


This paper presents new results for the (partial) maximum a posteriori (MAP) problem in Bayesian networks, which is the problem of querying the most probable state configuration of some of the network variables given evidence. It is demonstrated that the problem remains hard even in networks with very simple topology, such as binary polytrees and simple trees (including the Naive Bayes structure), which extends previous complexity results. Furthermore, a Fully Polynomial Time Approximation Scheme for MAP in networks with bounded treewidth and bounded number of states per variable is developed. Approximation schemes were thought to be impossible, but here it is shown otherwise under the assumptions just mentioned, which are adopted in most applications.
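The partial MAP problem mixes maximization over query variables with summation over the remaining ones, which is exactly what makes it harder than either MPE or marginal inference. A brute-force sketch on a tiny binary chain X → Y → Z with evidence Z = 1 makes the structure concrete; all probabilities are invented for illustration.

```python
# Chain X -> Y -> Z, all binary. Query variable: X. Evidence: Z = 1.
# Partial MAP: argmax_x sum_y P(x) P(y|x) P(z=1|y).
p_x = {0: 0.6, 1: 0.4}
p_y_given_x = {(0, 0): 0.8, (1, 0): 0.2,   # keyed (y, x)
               (0, 1): 0.3, (1, 1): 0.7}
p_z_given_y = {(0, 0): 0.9, (1, 0): 0.1,   # keyed (z, y)
               (0, 1): 0.2, (1, 1): 0.8}

def score(x, z_obs=1):
    # Sum out the non-query variable Y.
    return sum(p_x[x] * p_y_given_x[(y, x)] * p_z_given_y[(z_obs, y)]
               for y in (0, 1))

map_x = max((0, 1), key=score)
print(map_x, score(map_x))
```

Enumeration is exponential in the number of query variables in general; the FPTAS described above applies when treewidth and variable cardinalities are bounded.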


Influence diagrams allow for intuitive and yet precise description of complex situations involving decision making under uncertainty. Unfortunately, most of the problems described by influence diagrams are hard to solve. In this paper we discuss the complexity of approximately solving influence diagrams. We do not assume no-forgetting or regularity, which makes the class of problems we address very broad. Remarkably, we show that when both the treewidth and the cardinality of the variables are bounded the problem admits a fully polynomial-time approximation scheme.


This paper presents new results on the complexity of graph-theoretical models that represent probabilities (Bayesian networks) and that represent interval- and set-valued probabilities (credal networks). We define a new class of networks with bounded width, and introduce a new decision problem for Bayesian networks, the maximin a posteriori. We present new links between Bayesian and credal networks, and present new results both for Bayesian networks (most probable explanation with observations, maximin a posteriori) and for credal networks (bounds on probabilities a posteriori, most probable explanation with and without observations, maximum a posteriori).
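The maximin a posteriori criterion mentioned above can be sketched in miniature: choose the state that maximizes the lower posterior probability across a credal set. Here the credal set is represented, purely for illustration, by a small finite collection of candidate posterior distributions; the distributions are invented and the reduction to finitely many candidates is an assumption of the sketch.

```python
# Illustrative credal set: candidate posterior distributions over a
# binary variable, e.g. vertices of a posterior credal set.
credal_set = [
    {0: 0.5, 1: 0.5},
    {0: 0.3, 1: 0.7},
    {0: 0.4, 1: 0.6},
]

def lower_prob(x):
    # Lower envelope: worst case over the credal set.
    return min(p[x] for p in credal_set)

# Maximin a posteriori: the state with the best worst-case probability.
maximin = max((0, 1), key=lower_prob)
print(maximin, lower_prob(maximin))
```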


MarcoPolo-R is a sample return mission to a primitive Near-Earth Asteroid (NEA) proposed in collaboration with NASA. It will rendezvous with a primitive NEA, scientifically characterize it at multiple scales, and return a unique sample to Earth unaltered by the atmospheric entry process or terrestrial weathering. MarcoPolo-R will return bulk samples (up to 2 kg) from an organic-rich binary asteroid to Earth for laboratory analyses, allowing us to: explore the origin of planetary materials and initial stages of habitable planet formation; identify and characterize the organics and volatiles in a primitive asteroid; understand the unique geomorphology, dynamics and evolution of a binary NEA. This project is based on the previous Marco Polo mission study, which was selected for the Assessment Phase of the first round of Cosmic Vision. Its scientific rationale was highly ranked by ESA committees and it was not selected only because the estimated cost was higher than the allotted amount for an M-class mission. The cost of Marco Polo-R will be reduced to within the ESA medium mission budget by collaboration with APL (Johns Hopkins University) and JPL in the NASA program for coordination with ESA's Cosmic Vision Call. The baseline target is the binary asteroid (175706) 1996 FG3, which offers a very efficient operational and technical mission profile. A binary target also provides enhanced science return. The choice of this target will allow new investigations to be performed more easily than at a single object, and also enables investigations of the fascinating geology and geophysics of asteroids that are impossible at a single object. Several launch windows have been identified in the time span 2020-2024. A number of other possible primitive single targets of high scientific interest have been identified, covering a wide range of possible launch dates.
The baseline mission scenario of Marco Polo-R to 1996 FG3 is as follows: a single primary spacecraft provided by ESA, carrying the Earth Re-entry Capsule and the sample acquisition and transfer system provided by NASA, will be launched by a Soyuz-Fregat rocket from Kourou into GTO, using two space segment stages. Two similar missions with two launch windows, in 2021 and 2022, both with sample return in 2029 (mission durations of 7 and 8 years), have been defined. Earlier or later launches, in 2020 or 2024, also offer good opportunities. All manoeuvres are carried out by a chemical propulsion system. MarcoPolo-R takes advantage of three industrial studies completed as part of the previous Marco Polo mission (see ESA/SRE(2009)3, Marco Polo Yellow Book) and of the expertise of the consortium led by Dr. A.F. Cheng (PI of the NASA NEAR Shoemaker mission) of JHU-APL, including JPL, NASA ARC, NASA LaRC, and MIT.


2. K. Gow, P.K.J. Robertson, P.M. Pollard, and K. Christidis


Electing a leader is a fundamental task in distributed computing. In its implicit version, only the leader must know who is the elected leader. This article focuses on studying the message and time complexity of randomized implicit leader election in synchronous distributed networks. Surprisingly, the most "obvious" complexity bounds have not been proven for randomized algorithms. In particular, the seemingly obvious lower bounds of Ω(m) messages, where m is the number of edges in the network, and Ω(D) time, where D is the network diameter, are nontrivial to show for randomized (Monte Carlo) algorithms. (Recent results, showing that even Ω(n), where n is the number of nodes in the network, is not a lower bound on the messages in complete networks, make the above bounds somewhat less obvious). To the best of our knowledge, these basic lower bounds have not been established even for deterministic algorithms, except for the restricted case of comparison algorithms, where it was also required that nodes may not wake up spontaneously and that D and n were not known. We establish these fundamental lower bounds in this article for the general case, even for randomized Monte Carlo algorithms. Our lower bounds are universal in the sense that they hold for all universal algorithms (namely, algorithms that work for all graphs), apply to every D, m, and n, and hold even if D, m, and n are known, all the nodes wake up simultaneously, and the algorithms can make any use of the nodes' identities. To show that these bounds are tight, we present an O(m)-message algorithm. An O(D)-time leader election algorithm is known. A slight adaptation of our lower bound technique gives rise to an Ω(m) message lower bound for randomized broadcast algorithms.

An interesting fundamental problem is whether both upper bounds (messages and time) can be reached simultaneously in the randomized setting for all graphs. The answer is known to be negative in the deterministic setting. We answer this question partially by presenting a randomized algorithm that matches both complexities in some cases. This already separates (for some cases) randomized algorithms from deterministic ones. As first steps towards the general case, we present several universal leader election algorithms with bounds that trade off messages against time. We view our results as a step towards understanding the complexity of universal leader election in distributed networks.
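The flavor of the Ω(m) message and Ω(D) time bounds can be illustrated with a simple flooding-based randomized election (this is a generic textbook-style sketch, not the paper's algorithm): every node draws a random rank, ranks are flooded over all edges each round, and after enough rounds the node holding the global maximum elects itself. Only that node learns it is the leader, matching the implicit version. The graph below is an arbitrary example.

```python
import random

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (1, 3)]   # m = 5, n = 4
n = 4

random.seed(1)
rank = {u: random.random() for u in range(n)}   # collisions negligible

known_max = dict(rank)   # each node's best-known rank so far
messages = 0
for _ in range(n):       # n rounds suffice (diameter <= n - 1)
    new_max = dict(known_max)
    for u, v in edges:   # both endpoints exchange their current maximum
        messages += 2
        new_max[u] = max(new_max[u], known_max[v])
        new_max[v] = max(new_max[v], known_max[u])
    known_max = new_max

# Implicit election: a node leads iff its own rank is its known maximum.
leader = [u for u in range(n) if rank[u] == known_max[u]]
print(leader, messages)
```

Flooding sends Θ(m) messages per round over D rounds, so it is message-heavier than the tight O(m) bound discussed above, but it shows why every edge plausibly has to carry information in the worst case.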


Purpose: The selection of suitable outcomes and sample size calculation are critical factors in the design of a randomised controlled trial (RCT). The goal of this study was to identify the range of outcomes and information on sample size calculation in RCTs on geographic atrophy (GA).
Methods: We carried out a systematic review of age-related macular degeneration (AMD) RCTs. We searched MEDLINE, EMBASE, Scopus, the Cochrane Library, www.controlled-trials.com, and www.ClinicalTrials.gov. Two independent reviewers screened records. One reviewer collected data and the second reviewer appraised 10% of the collected data. We scanned the reference lists of selected papers to include other relevant RCTs.
Results: The literature and registry search identified 3816 abstracts of journal articles and 493 records from trial registries. From a total of 177 RCTs on all types of AMD, 23 RCTs on GA were included. Eighty-one clinical outcomes were identified. Visual acuity (VA) was the most frequently used outcome, presented in 18 out of 23 RCTs, followed by measures of lesion area. For the sample size analysis, 8 GA RCTs were included. None of them provided sufficient information on sample size calculations.
Conclusions: This systematic review illustrates a lack of standardisation in outcome reporting in GA trials and issues regarding sample size calculation. These limitations significantly hamper attempts to compare outcomes across studies and to perform meta-analyses.
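A conventional sample size calculation of the kind the review found missing looks like the standard two-sample z-approximation for comparing means. The sketch below is generic; the example values (a 5-letter visual-acuity difference with a 12-letter SD) are illustrative placeholders, not figures from the review.

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_group(sigma, delta, alpha=0.05, power=0.8):
    """Per-group n to detect a mean difference delta with SD sigma,
    two-sided test, via the two-sample z-approximation:
        n = 2 (z_{1-alpha/2} + z_{power})^2 sigma^2 / delta^2
    """
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    return ceil(2 * (z_alpha + z_beta) ** 2 * sigma ** 2 / delta ** 2)

# e.g. detect a 5-letter difference assuming a 12-letter SD
print(sample_size_per_group(sigma=12, delta=5))
```

Reporting the assumed effect size, SD, alpha, and power alongside a formula like this is what allows readers to appraise whether a trial was adequately powered.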


Young people in long-term foster care are at risk of experiencing poor social, emotional, behavioural and educational outcomes. Moreover, these placements have a significantly greater chance of breaking down than those involving younger children. This article critically evaluates the factors associated with this particular outcome. It was carried out through a literature review conducted by a social work practitioner in one Health and Social Care Trust in Northern Ireland. The findings showed that, apart from overriding safety concerns, placement breakdown was not a one-off event but rather a complex process involving the interplay between a range of dynamic risk and protective factors over time, operating in the wider context of the young person's history and life experiences. The significance of these findings for social work practitioners is finally considered by identifying key theories to inform understanding and intervention.