987 results for Cognitive complexity
Abstract:
A multiuser scheduling multiple-input multiple-output (MIMO) cognitive radio network (CRN) with space-time block coding (STBC) is considered in this paper, where one secondary base station (BS) communicates with one secondary user (SU) selected from K candidates. The joint impact on outage performance of imperfect channel state information (CSI) in the BS → SUs and BS → primary user (PU) links, due to channel estimation errors and feedback delay, is first investigated. We obtain exact outage probability expressions for the considered network under the peak interference power IP at the PU and the maximum transmit power Pm at the BS, covering both perfect and imperfect CSI scenarios in BS → SUs and BS → PU. In addition, asymptotic expressions for the outage probability in the high-SNR region are derived, from which we obtain several important insights into the system design. For example, multiuser diversity can be exploited only with perfect CSI in BS → SUs, i.e., without channel estimation errors or feedback delay. Finally, simulation results confirm the correctness of our analysis.
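The outage setup described above can be illustrated with a small Monte Carlo sketch. This is not the paper's closed-form analysis: it assumes perfect CSI, Rayleigh fading with unit-mean channel gains, and hypothetical parameter values, and it only simulates the transmit power cap min(Pm, IP/|g|²) together with best-user scheduling.

```python
import numpy as np

rng = np.random.default_rng(0)

def outage_prob(K, Pm, Ip, R=1.0, N0=1.0, trials=200_000):
    """Monte Carlo outage probability for an underlay CRN sketch: the BS
    transmit power is capped at min(Pm, Ip/|g|^2), where g is the BS->PU
    channel, and the best of K secondary users (Rayleigh fading) is served."""
    g = rng.exponential(1.0, trials)            # |g|^2: BS->PU channel gain
    h = rng.exponential(1.0, (trials, K))       # |h_k|^2: BS->SU_k channel gains
    P = np.minimum(Pm, Ip / g)                  # power under both constraints
    snr = P * h.max(axis=1) / N0                # SNR of the scheduled (best) user
    return float(np.mean(np.log2(1.0 + snr) < R))
```

Scheduling the strongest of K users should visibly reduce the outage probability relative to K = 1, which is the multiuser-diversity effect the abstract refers to (under perfect CSI).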
Abstract:
We consider transmit antenna selection with receive generalized selection combining (TAS/GSC) for cognitive decode-and-forward (DF) relaying in Nakagami-m fading channels. To assess the performance, the probability density function and the cumulative distribution function of the end-to-end SNR are derived using the moment generating function, from which new exact closed-form expressions for the outage probability and the symbol error rate are obtained. We then derive a new closed-form expression for the ergodic capacity. More importantly, by deriving the asymptotic expressions for the outage probability and the symbol error rate, as well as the high-SNR approximations of the ergodic capacity, we establish new design insights under two distinct constraint scenarios: 1) a proportional interference power constraint, and 2) a fixed interference power constraint. Several pivotal conclusions are reached. For the first scenario, the full diversity order of the outage probability and the symbol error rate is achieved, and the high-SNR slope of the ergodic capacity is 1/2. For the second scenario, the diversity order of the outage probability and the symbol error rate is zero with error floors, and the high-SNR slope of the ergodic capacity is zero with a capacity ceiling.
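The TAS/GSC mechanism itself is easy to sketch by simulation. The following is a minimal illustration, not the paper's model: it drops the DF relay and interference constraints entirely, simulates a single hop with Nakagami-m branch gains (Gamma-distributed power), and uses made-up parameter values.

```python
import numpy as np

rng = np.random.default_rng(1)

def tas_gsc_snr(Nt, Nr, L, m=2.0, snr_bar=10.0, trials=100_000):
    """Post-combining SNR samples under TAS/GSC: for each of Nt transmit
    antennas, the receiver sums the L strongest of its Nr branch SNRs (GSC),
    and the antenna with the best combined SNR is selected (TAS)."""
    # Gamma(shape=m, scale=snr_bar/m) gives Nakagami-m power with mean snr_bar
    g = rng.gamma(m, snr_bar / m, (trials, Nt, Nr))
    gsc = np.sort(g, axis=2)[:, :, -L:].sum(axis=2)  # GSC over receive branches
    return gsc.max(axis=1)                           # TAS over transmit antennas

def outage(snr_samples, R=2.0):
    return float(np.mean(np.log2(1.0 + snr_samples) < R))
```

Combining more branches (larger L) can only increase the combined SNR, so the simulated outage should drop as L grows, consistent with GSC approaching maximal-ratio combining at L = Nr.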
Abstract:
Credal networks are graph-based statistical models whose parameters take values in a set, instead of being sharply specified as in traditional statistical models (e.g., Bayesian networks). The computational complexity of inferences on such models depends on the irrelevance/independence concept adopted. In this paper, we study inferential complexity under the concepts of epistemic irrelevance and strong independence. We show that inferences under strong independence are NP-hard even in trees whose variables are all binary except for a single ternary one. We prove that under epistemic irrelevance the polynomial-time complexity of inferences in credal trees is not likely to extend to more general models (e.g., singly connected topologies). These results clearly distinguish networks that admit efficient inferences from those where inferences are most likely hard, and settle several open questions regarding their computational complexity. We show that these results remain valid even if we disallow the use of zero probabilities. We also show that the computation of bounds on the probability of the future state in a hidden Markov model is the same whether we assume epistemic irrelevance or strong independence, and we prove an analogous result for inference in Naive Bayes structures. These inferential equivalences are important for practitioners, as hidden Markov models and Naive Bayes networks are used in real applications of imprecise probability.
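To make the strong-independence semantics concrete, here is a toy inference on a two-node credal chain A → B with interval-valued parameters (the intervals below are invented for illustration). Under strong independence, tight bounds are attained at combinations of extreme points of the local credal sets, so a brute-force enumeration of vertices suffices for this tiny model; the NP-hardness results above say precisely that this kind of enumeration does not scale.

```python
from itertools import product

def credal_bounds_b1(pa1_iv, pb1_a0_iv, pb1_a1_iv):
    """Lower/upper probability of B=1 in the credal chain A -> B under
    strong independence, by enumerating every combination of extreme
    points (interval endpoints) of the local credal sets."""
    vals = [(1 - pa1) * p0 + pa1 * p1                 # P(B=1) for one vertex choice
            for pa1, p0, p1 in product(pa1_iv, pb1_a0_iv, pb1_a1_iv)]
    return min(vals), max(vals)
```

For example, with P(A=1) ∈ [0.3, 0.5], P(B=1|A=0) ∈ [0.1, 0.2] and P(B=1|A=1) ∈ [0.7, 0.9], the bounds on P(B=1) come out as [0.28, 0.55]. The enumeration visits 2³ vertex combinations here, but exponentially many in general.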
On the complexity of solving polytree-shaped limited memory influence diagrams with binary variables
Abstract:
Influence diagrams are intuitive and concise representations of structured decision problems. When the problem is non-Markovian, an optimal strategy can be exponentially large in the size of the diagram. We can avoid the inherent intractability by constraining the size of admissible strategies, giving rise to limited memory influence diagrams. A valuable question is then how small strategies need to be to enable efficient optimal planning. Arguably, the smallest strategies one can conceive simply prescribe an action for each time step, without considering past decisions or observations. Previous work has shown that finding such optimal strategies even for polytree-shaped diagrams with ternary variables and a single value node is NP-hard, but the case of binary variables was left open. In this paper we address this case, first noting that optimal strategies can be obtained in polynomial time for polytree-shaped diagrams with binary variables and a single value node. We then show that the same problem is NP-hard if the diagram has multiple value nodes. These two results close the fixed-parameter complexity analysis of optimal strategy selection in influence diagrams parametrized by the shape of the diagram, the number of value nodes and the maximum variable cardinality.
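The "smallest strategies" discussed above fix one action per decision node, ignoring observations. On a toy diagram this can be solved by brute force, which also makes the source of hardness visible: the number of such strategies is exponential in the number of decision nodes. The diagram below (one binary chance node, two binary decisions, one value node) and its utility function are invented for illustration.

```python
from itertools import product

def best_memoryless_strategy(p_s, utility, n_decisions=2):
    """Enumerate all memoryless strategies (one fixed binary action per
    decision node, ignoring observations) for a toy diagram with one
    discrete chance node S and a single value node.
    p_s: tuple of P(S=s); utility(s, actions) -> real.
    Returns (best_actions, expected_utility); cost is 2**n_decisions."""
    best = None
    for actions in product((0, 1), repeat=n_decisions):
        eu = sum(p * utility(s, actions) for s, p in enumerate(p_s))
        if best is None or eu > best[1]:
            best = (actions, eu)
    return best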
Resumo:
Credal networks are graph-based statistical models whose parameters take values on a set, instead of being sharply specified as in traditional statistical models (e.g., Bayesian networks). The result of inferences with such models depends on the irrelevance/independence concept adopted. In this paper, we study the computational complexity of inferences under the concepts of epistemic irrelevance and strong independence. We strengthen complexity results by showing that inferences with strong independence are NP-hard even in credal trees with ternary variables, which indicates that tractable algorithms, including the existing one for epistemic trees, cannot be used for strong independence. We prove that the polynomial time of inferences in credal trees under epistemic irrelevance is not likely to extend to more general models, because the problem becomes NP-hard even in simple polytrees. These results draw a definite line between networks with efficient inferences and those where inferences are hard, and close several open questions regarding the computational complexity of such models.
Resumo:
Semi-qualitative probabilistic networks (SQPNs) merge two important graphical model formalisms: Bayesian networks and qualitative probabilistic networks. They provide a very general modeling framework by allowing the combination of numeric and qualitative assessments over a discrete domain, and can be compactly encoded by exploiting the same factorization of joint probability distributions that are behind the Bayesian networks. This paper explores the computational complexity of semi-qualitative probabilistic networks, and takes the polytree-shaped networks as its main target. We show that the inference problem is coNP-Complete for binary polytrees with multiple observed nodes. We also show that inferences can be performed in linear time if there is a single observed node, which is a relevant practical case. Because our proof is constructive, we obtain an efficient linear time algorithm for SQPNs under such assumptions. To the best of our knowledge, this is the first exact polynomial-time algorithm for SQPNs. Together these results provide a clear picture of the inferential complexity in polytree-shaped SQPNs.
Resumo:
This paper presents new results for the (partial) maximum a posteriori (MAP) problem in Bayesian networks, which is the problem of querying the most probable state configuration of some of the network variables given evidence. It is demonstrated that the problem remains hard even in networks with very simple topology, such as binary polytrees and simple trees (including the Naive Bayes structure), which extends previous complexity results. Furthermore, a Fully Polynomial Time Approximation Scheme for MAP in networks with bounded treewidth and bounded number of states per variable is developed. Approximation schemes were thought to be impossible, but here it is shown otherwise under the assumptions just mentioned, which are adopted in most applications.
Resumo:
Influence diagrams allow for intuitive and yet precise description of complex situations involving decision making under uncertainty. Unfortunately, most of the problems described by influence diagrams are hard to solve. In this paper we discuss the complexity of approximately solving influence diagrams. We do not assume no-forgetting or regularity, which makes the class of problems we address very broad. Remarkably, we show that when both the treewidth and the cardinality of the variables are bounded the problem admits a fully polynomial-time approximation scheme.
Resumo:
This paper presents new results on the complexity of graph-theoretical models that represent probabilities (Bayesian networks) and that represent interval and set valued probabilities (credal networks). We define a new class of networks with bounded width, and introduce a new decision problem for Bayesian networks, the maximin a posteriori. We present new links between the Bayesian and credal networks, and present new results both for Bayesian networks (most probable explanation with observations, maximin a posteriori) and for credal networks (bounds on probabilities a posteriori, most probable explanation with and without observations, maximum a posteriori).
Resumo:
Simulation of disorders of respiratory mechanics shown by spirometry provides insight into the pathophysiology of disease but some clinically important disorders have not been simulated and none have been formally evaluated for education. We have designed simple mechanical devices which, along with existing simulators, enable all the main dysfunctions which have diagnostic value in spirometry to be simulated and clearly explained with visual and haptic feedback. We modelled the airways as Starling resistors by a clearly visible mechanical action to simulate intra- and extra-thoracic obstruction. A narrow tube was used to simulate fixed large airway obstruction and inelastic bands to simulate restriction. We hypothesized that using simulators whose action explains disease promotes learning especially in higher domain educational objectives. The main features of obstruction and restriction were correctly simulated. Simulation of variable extra-thoracic obstruction caused blunting and plateauing of inspiratory flow, and simulation of intra-thoracic obstruction caused limitation of expiratory flow with marked dynamic compression. Multiple choice tests were created with questions allocated to lower (remember and understand) or higher cognitive domains (apply, analyse and evaluate). In a cross-over design, overall mean scores increased after 1½ h simulation spirometry (43-68 %, effect size 1.06, P < 0.0001). In higher cognitive domains the mean score was lower before and increased further than lower domains (Δ 30 vs 20 %, higher vs lower effect size 0.22, P < 0.05). In conclusion, the devices successfully simulate various patterns of obstruction and restriction. Using these devices medical students achieved marked enhancement of learning especially in higher cognitive domains.
Resumo:
This study combined high resolution mass spectrometry (HRMS), advanced chemometrics and pathway enrichment analysis to analyse the blood metabolome of patients attending the memory clinic: cases of mild cognitive impairment (MCI; n = 16), cases of MCI who upon subsequent follow-up developed Alzheimer's disease (MCI_AD; n = 19), and healthy age-matched controls (Ctrl; n = 37). Plasma was extracted in acetonitrile and applied to an Acquity UPLC HILIC (1.7μm x 2.1 x 100 mm) column coupled to a Xevo G2 QTof mass spectrometer using a previously optimised method. Data comprising 6751 spectral features were used to build an OPLS-DA statistical model capable of accurately distinguishing Ctrl, MCI and MCI_AD. The model accurately distinguished (R2 = 99.1%; Q2 = 97%) those MCI patients who later went on to develop AD. S-plots were used to shortlist ions of interest which were responsible for explaining the maximum amount of variation between patient groups. Metabolite database searching and pathway enrichment analysis indicated disturbances in 22 biochemical pathways, and excitingly it discovered two interlinked areas of metabolism (polyamine metabolism and L-Arginine metabolism) were differentially disrupted in this well-defined clinical cohort. The optimised untargeted HRMS methods described herein not only demonstrate that it is possible to distinguish these pathologies in human blood but also that MCI patients 'at risk' from AD could be predicted up to 2 years earlier than conventional clinical diagnosis. Blood-based metabolite profiling of plasma from memory clinic patients is a novel and feasible approach in improving MCI and AD diagnosis and, refining clinical trials through better patient stratification.
Resumo:
Aim: Substantial evidence links atherosclerosis and Alzheimer's disease (AD). Apolipoproteins, such as apolipoprotein E, have a causal relationship with both diseases. The rs11136000 SNP within the CLU gene, which encodes clusterin (apolipoprotein J), is also associated with increased AD risk. The aim of this study was to investigate the relationship between plasma clusterin and the rs11136000 genotype in mild cognitive impairment (MCI) and AD.
Methods: Plasma and DNA samples were collected from control, MCI and AD subjects (n=142, 111, 154, respectively). Plasma clusterin was determined by ELISA and DNA samples were genotyped for rs11136000 by TaqMan assay.
Results: Plasma clusterin levels were higher in MCI and AD subjects vs. controls (222.3 +/- 61.3 and 193.6 +/- 58.2 vs. 178.6 +/- 52.3 mu g/ml, respectively; p
Conclusion: This study examined control, MCI and AD subjects, identifying for the first time that plasma clusterin levels were influenced, not only by the presence of AD, but also the transitional stage of MCI, while rs11136000 genotype only influenced plasma clusterin levels in the control group. The increase in plasma clusterin in MCI and AD subjects may occur in response to the disease process and would be predicted to increase binding capacity for amyloid-beta peptides in plasma, enhancing their removal from the brain.
Resumo:
Electing a leader is a fundamental task in distributed computing. In its implicit version, only the leader must know who is the elected leader. This article focuses on studying the message and time complexity of randomized implicit leader election in synchronous distributed networks. Surprisingly, the most "obvious" complexity bounds have not been proven for randomized algorithms. In particular, the seemingly obvious lower bounds of Ω(m) messages, where m is the number of edges in the network, and Ω(D) time, where D is the network diameter, are nontrivial to show for randomized (Monte Carlo) algorithms. (Recent results, showing that even Ω(n), where n is the number of nodes in the network, is not a lower bound on the messages in complete networks, make the above bounds somewhat less obvious). To the best of our knowledge, these basic lower bounds have not been established even for deterministic algorithms, except for the restricted case of comparison algorithms, where it was also required that nodes may not wake up spontaneously and that D and n were not known. We establish these fundamental lower bounds in this article for the general case, even for randomized Monte Carlo algorithms. Our lower bounds are universal in the sense that they hold for all universal algorithms (namely, algorithms that work for all graphs), apply to every D, m, and n, and hold even if D, m, and n are known, all the nodes wake up simultaneously, and the algorithms can make any use of node's identities. To show that these bounds are tight, we present an O(m) messages algorithm. An O(D) time leader election algorithm is known. A slight adaptation of our lower bound technique gives rise to an Ω(m) message lower bound for randomized broadcast algorithms.
An interesting fundamental problem is whether both upper bounds (messages and time) can be reached simultaneously in the randomized setting for all graphs. The answer is known to be negative in the deterministic setting. We answer this problem partially by presenting a randomized algorithm that matches both complexities in some cases. This already separates (for some cases) randomized algorithms from deterministic ones. As first steps towards the general case, we present several universal leader election algorithms with bounds that tradeoff messages versus time. We view our results as a step towards understanding the complexity of universal leader election in distributed networks.