Abstract:
Distributed space-time coding for wireless relay networks in which the source, the destination and the relays have multiple antennas has been studied by Jing and Hassibi. In that set-up, the transmit and receive signals at different antennas of the same relay are processed and designed independently, even though the antennas are colocated. In this paper, a wireless relay network with a single antenna at the source and the destination and two antennas at each of the R relays is considered. A new class of distributed space-time block codes called Co-ordinate Interleaved Distributed Space-Time Codes (CIDSTC) is introduced in which, in the first phase, the source transmits a T-length complex vector to all the relays; in the second phase, at each relay, the in-phase and quadrature component vectors of the complex vectors received at the two antennas are interleaved and processed before being forwarded to the destination. Compared to the scheme proposed by Jing and Hassibi, for T >= 4R, the CIDSTC scheme is shown to provide the same asymptotic diversity order of 2R together with an asymptotic coding gain, at the cost of a negligible increase in processing complexity at the relays. However, for moderate and large values of P, the CIDSTC scheme is shown to provide more diversity than the scheme proposed by Jing and Hassibi. CIDSTCs are shown to be fully diverse provided the information symbols take values from an appropriate multidimensional signal set.
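As a rough illustration of the interleaving step, here is a minimal NumPy sketch under our own reading of the abstract; the relay-specific processing matrices of the actual CIDSTC construction are omitted, and all names are ours:

```python
import numpy as np

def coordinate_interleave(r1: np.ndarray, r2: np.ndarray):
    """Swap the quadrature (imaginary) components of the vectors received
    at a relay's two antennas -- one plausible reading of the coordinate
    interleaving step (the CIDSTC relay processing matrices are omitted)."""
    s1 = r1.real + 1j * r2.imag
    s2 = r2.real + 1j * r1.imag
    return s1, s2

# Toy usage with T = 4 received samples per antenna.
rng = np.random.default_rng(0)
r1 = rng.standard_normal(4) + 1j * rng.standard_normal(4)
r2 = rng.standard_normal(4) + 1j * rng.standard_normal(4)
s1, s2 = coordinate_interleave(r1, r2)
print(s1, s2)
```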
Abstract:
This paper proposes new metrics and a performance-assessment framework for vision-based weed and fruit detection and classification algorithms. In order to compare algorithms, and to decide which one to use for a particular application, it is necessary to take into account that the performance obtained in a series of tests is subject to uncertainty. This characterisation of uncertainty does not seem to be captured by the performance metrics currently reported in the literature. Therefore, we pose the problem as a general problem of scientific inference, which arises out of incomplete information, and propose as a metric of performance the (posterior) predictive probabilities that the algorithms will provide a correct outcome for target and background detection. We detail the framework, Bayesian in nature, through which these predictive probabilities can be obtained. As an illustrative example, we apply the framework to the assessment of the performance of four algorithms that could potentially be used in the detection of capsicums (peppers).
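For intuition only, a minimal sketch of a posterior predictive probability of a correct outcome under the simplest conjugate model one might use (a Beta-Binomial; the paper's framework is not necessarily this model, and all counts below are hypothetical):

```python
def posterior_predictive_correct(k: int, n: int, a: float = 1.0, b: float = 1.0) -> float:
    """Posterior predictive probability that the next detection is correct,
    under a Beta(a, b) prior on the per-test success probability and
    k successes observed in n independent tests (Beta-Binomial model)."""
    return (a + k) / (a + b + n)

# Two hypothetical algorithms evaluated on the same 200-image test set:
print(posterior_predictive_correct(k=183, n=200))  # ~0.91
print(posterior_predictive_correct(k=158, n=200))  # ~0.79
```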
Abstract:
A flexible and simple Bayesian decision-theoretic design for dose-finding trials is proposed in this paper. In order to reduce the computational burden, we adopt a working model with conjugate priors, which is flexible enough to fit all monotonic dose-toxicity curves and produces analytic posterior distributions. We also discuss how to use a proper utility function to reflect the interest of the trial. Patients are allocated based not only on the utility function but also on the chosen dose selection rule. The most popular dose selection rule is the one-step-look-ahead (OSLA), which selects the best-so-far dose. A more complicated rule, such as the two-step-look-ahead, is theoretically more efficient than the OSLA only when the required distributional assumptions are met, which is often not the case in practice. We carried out extensive simulation studies to evaluate these two dose selection rules and found that the OSLA was often more efficient than the two-step-look-ahead under the proposed Bayesian structure. Moreover, our simulation results show that the proposed Bayesian method's performance is superior to that of several popular Bayesian methods and that the negative impact of prior misspecification can be managed in the design stage.
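As a toy illustration of one-step-look-ahead dose selection (our own simplified stand-in: independent Beta-Binomial toxicity models per dose and a rule that targets a fixed toxicity rate, not the paper's conjugate working model or utility function):

```python
import numpy as np

def osla_next_dose(tox_events, n_treated, target=0.25, a0=0.5, b0=0.5):
    """One-step-look-ahead under a hypothetical per-dose Beta-Binomial
    model: choose the dose whose posterior mean toxicity is closest to
    the target (a simple stand-in for a utility-based selection)."""
    post_mean = (a0 + np.asarray(tox_events)) / (a0 + b0 + np.asarray(n_treated))
    return int(np.argmin(np.abs(post_mean - target)))

# Hypothetical trial state after 15 patients across 4 doses:
print(osla_next_dose(tox_events=[0, 1, 2, 3], n_treated=[3, 4, 4, 4]))  # -> 1
```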
Abstract:
So far, most Phase II trials have been designed and analysed under a frequentist framework, in which a trial is designed so that the overall Type I and Type II errors of the trial are controlled at some desired levels. Recently, a number of articles have advocated the use of Bayesian designs in practice. Under a Bayesian framework, a trial is designed to stop when the posterior probability of the treatment being effective crosses certain prespecified thresholds. In this article, we argue that trials under a Bayesian framework can also be designed to control frequentist error rates, and we introduce a Bayesian version of Simon's well-known two-stage design to achieve this goal. We also consider two other errors, called Bayesian errors in this article because of their similarity to posterior probabilities, and show that our method can control these Bayesian-type errors as well. We compare our method with other recent Bayesian designs in a numerical study and discuss the implications of the different designs for error rates. An example of a clinical trial for patients with nasopharyngeal carcinoma is used to illustrate the differences between the designs.
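To make the idea of controlling a frequentist error rate with a Bayesian stopping rule concrete, here is a minimal single-stage sketch (our own simplification of the idea; the paper's design is two-stage, and p0, n, and the threshold below are hypothetical):

```python
from math import comb
from scipy.stats import beta

def post_prob_gt(p0, a, b, x, n):
    """Posterior P(p > p0 | x responses in n), under a Beta(a, b) prior."""
    return beta.sf(p0, a + x, b + n - x)

def type_one_error(p0, n, threshold, a=1.0, b=1.0):
    """Frequentist Type I error of the single-stage Bayesian rule
    'declare efficacy iff P(p > p0 | data) >= threshold', evaluated
    at the null response rate p = p0 by exact binomial enumeration."""
    return sum(comb(n, x) * p0**x * (1 - p0)**(n - x)
               for x in range(n + 1)
               if post_prob_gt(p0, a, b, x, n) >= threshold)

print(type_one_error(p0=0.2, n=40, threshold=0.95))
```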
Abstract:
Stallard (1998, Biometrics 54, 279-294) recently used Bayesian decision theory for sample-size determination in phase II trials. His design maximizes the expected financial gain in the development of a new treatment. However, it results in a very high probability (0.65) of recommending an ineffective treatment for phase III testing. On the other hand, the expected gain using his design is more than 10 times that of a design that tightly controls the false positive error (Thall and Simon, 1994, Biometrics 50, 337-349). Stallard's design maximizes the expected gain per phase II trial, but it does not maximize the rate of gain, or the total gain over a fixed length of time, because the rate of gain depends on the proportion of treatments forwarded to the phase III study. We suggest maximizing the rate of gain instead; the resulting optimal one-stage design is twice as efficient as Stallard's one-stage design. Furthermore, the new design has a probability of only 0.12 of passing an ineffective treatment to the phase III study.
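The distinction can be written out in simple notation (the symbols are ours, not the paper's):

```latex
% Let G be the gain from one phase II trial, t_2 its duration, q the
% probability that the treatment is forwarded to phase III, and t_3 the
% phase III duration. The long-run rate of gain is then
\[
  R \;=\; \frac{\mathbb{E}[G]}{t_2 + q\, t_3},
\]
% so maximizing \(\mathbb{E}[G]\) per trial (Stallard) is not the same as
% maximizing R: a design with a somewhat lower expected gain but a much
% lower q can achieve a higher rate of gain over a fixed period.
```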
Abstract:
The minimum cost classifier, when general cost functions are associated with the tasks of feature measurement and classification, is formulated as a decision graph that does not reject class labels at intermediate stages. Noting its complexity, a heuristic procedure that simplifies this scheme to a binary decision tree is presented. The optimization of the binary tree in this context is carried out using dynamic programming. The technique is applied to voiced-unvoiced-silence classification in speech processing.
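A toy sketch of dynamic programming over a binary decision tree (our own simplified cost model with binary features, a fixed per-measurement cost and 0/1 misclassification cost, uniform sample weights; not the paper's formulation):

```python
# Each sample is (binary feature vector, class label); classes echo the
# voiced-unvoiced-silence setting but the data are invented.
SAMPLES = [((0, 0), 'silence'), ((0, 1), 'unvoiced'),
           ((1, 0), 'unvoiced'), ((1, 1), 'voiced')]
N_FEATURES = 2
MEASURE_COST = 0.1   # cost of measuring one feature

def best_tree(items, used):
    """Return (expected cost, tree) for the samples reaching this node."""
    labels = [lab for _, lab in items]
    majority = max(set(labels), key=labels.count)
    # Option 1: stop and classify with the majority label (pay 1 per error).
    best = (sum(lab != majority for lab in labels) / len(SAMPLES),
            ('leaf', majority))
    # Option 2: measure an unused feature, split on it, and recurse.
    for f in range(N_FEATURES):
        if f in used:
            continue
        lo = [it for it in items if it[0][f] == 0]
        hi = [it for it in items if it[0][f] == 1]
        if not lo or not hi:
            continue
        cost_lo, tree_lo = best_tree(lo, used | {f})
        cost_hi, tree_hi = best_tree(hi, used | {f})
        cost = MEASURE_COST * len(items) / len(SAMPLES) + cost_lo + cost_hi
        if cost < best[0]:
            best = (cost, ('split', f, tree_lo, tree_hi))
    return best

print(best_tree(SAMPLES, frozenset()))
```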
Abstract:
A relay network with N relays and a single source-destination pair is called a partially-coherent relay channel (PCRC) if the destination has perfect channel state information (CSI) of all the channels and the relays have only the phase information of the source-to-relay channels. In this paper, first, a new set of necessary and sufficient conditions for a space-time block code (STBC) to be single-symbol decodable (SSD) for colocated multiple-antenna communication is obtained. This is then extended to a set of necessary and sufficient conditions for a distributed STBC (DSTBC) to be SSD for a PCRC, and several SSD DSTBCs for PCRC are identified. It is proved that even if an SSD STBC for a colocated MIMO channel does not satisfy the additional conditions for the code to be SSD for a PCRC, single-symbol decoding of it in a PCRC still gives full diversity; only coding gain is lost. It is shown that when a DSTBC is SSD for a PCRC, arbitrary coordinate interleaving of the in-phase and quadrature-phase components of the variables does not disturb its SSD property for the PCRC. Finally, it is shown that the possibility of channel phase compensation at the relay nodes, using the partial CSI available at the relays, increases the possible rate of SSD DSTBCs from 2/N, when the relays do not have CSI, to 1/2, which is independent of N.
Abstract:
Disease mapping involves the description and analysis of geographically indexed health data with respect to demographic, environmental, behavioural, socioeconomic, genetic, and infectious risk factors (Elliott and Wartenberg 2004). Disease maps can be useful for estimating relative risk; for ecological analyses incorporating area- and/or individual-level covariates; or for cluster analyses (Lawson 2009). As aggregated data are often more readily available, one common method of mapping disease is to aggregate the counts of disease at some geographical areal level and present them as choropleth maps (Devesa et al. 1999; Population Health Division 2006). Therefore, this chapter will focus exclusively on methods appropriate for areal data...
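As a minimal, hypothetical illustration of the kind of areal summary such choropleth maps display (a standardized incidence ratio under a constant-rate assumption; the counts are invented and this is not a method from the chapter):

```python
import numpy as np

# Hypothetical areal data: observed disease counts and populations.
observed = np.array([12, 30, 7, 19])
population = np.array([10_000, 25_000, 4_000, 15_000])

# Expected counts under a constant overall rate, then the standardized
# incidence ratio (SIR = observed / expected) commonly mapped per area.
overall_rate = observed.sum() / population.sum()
expected = overall_rate * population
sir = observed / expected
print(np.round(sir, 2))
```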
Abstract:
This paper proposes solutions to three issues pertaining to the estimation of finite mixture models with an unknown number of components: the non-identifiability induced by overfitting the number of components, the mixing limitations of standard Markov Chain Monte Carlo (MCMC) sampling techniques, and the related label switching problem. An overfitting approach is used to estimate the number of components in a finite mixture model via the Zmix algorithm. Zmix provides a bridge between multidimensional samplers and test-based estimation methods, whereby priors are chosen to encourage extra groups to have weights approaching zero. MCMC sampling is made possible by the implementation of prior parallel tempering, an extension of parallel tempering. Zmix can accurately estimate the number of components, posterior parameter estimates and allocation probabilities given a sufficiently large sample size. The results reflect uncertainty in the final model, reporting the range of possible candidate models and their respective estimated probabilities from a single run. Label switching is resolved with a computationally lightweight method, Zswitch, developed for overfitted mixtures by exploiting the intuitiveness of allocation-based relabelling algorithms and the precision of label-invariant loss functions. Four simulation studies and three case studies from the literature are included to illustrate Zmix and Zswitch. All methods are available as part of the R package Zmix, which can currently be applied to univariate Gaussian mixture models.
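For intuition about the overfitting prior, a small illustration of the general idea (not code from the Zmix package; K_max, alpha, and the occupancy cutoff are our choices):

```python
import numpy as np

# With a sparse Dirichlet(alpha) prior on K_max component weights and
# alpha small, superfluous components get weights near zero, so only a
# few components are large enough to claim any of n observations.
rng = np.random.default_rng(1)
K_max, alpha, n = 10, 0.01, 500

weights = rng.dirichlet(alpha * np.ones(K_max), size=2000)
occupied = (weights > 1.0 / n).sum(axis=1)  # components big enough to matter
print(occupied.mean())  # small under the prior alone
```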
Abstract:
Interdependence is a central concept in systems and organizations, yet our methods for measuring it are not well developed. Here, we report on a novel method for transforming digital trace data into networks of events that can be used to visualize and measure interdependence. The edges in the network represent sequential flow and the vertices represent actors, actions and artifacts. We refer to this representation as an affordance network. As with conventional approaches such as process mining, our method uses input from a stream of time-stamped occurrences, but the representation is simpler and more appropriate for exploration and theory building. As digital trace data becomes more widely available, this method may become more useful in information systems research and practice. Like a thermometer, it helps us measure a basic property of a system that would otherwise be difficult to see.
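A minimal sketch of how such an event network might be assembled from time-stamped occurrences (the event schema and the rule linking consecutive events are our own assumptions, not the authors' implementation):

```python
from collections import Counter

# Hypothetical time-stamped occurrences: (time, actor, action, artifact).
events = [
    (1, "alice", "edit", "doc_A"),
    (2, "bob", "comment", "doc_A"),
    (3, "alice", "commit", "repo_X"),
]

# Sequential-flow edges: connect the entities (actors, actions, artifacts)
# of each event to those of the next event in time order.
edges = Counter()
events.sort(key=lambda e: e[0])
for (_, *prev), (_, *curr) in zip(events, events[1:]):
    for u in prev:
        for v in curr:
            edges[(u, v)] += 1

for (u, v), w in edges.items():
    print(f"{u} -> {v}  (weight {w})")
```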
Abstract:
For many complex natural resources problems, planning and management efforts involve groups of organizations working collaboratively through networks (Agranoff, 2007; Booher & Innes, 2010). These networks sometimes involve formal roles and relationships, but often include informal elements (Edelenbos & Klijn, 2007). All of these roles and relationships undergo change in response to changes in personnel, priorities and policy. There has been considerable focus in the planning and public policy literature on describing and characterizing these networks (Mandell & Keast, 2008; Provan & Kenis, 2007). However, there has been far less research assessing how networks change and adjust in response to policy and political change. In the Australian state of Queensland, Natural Resource Management (NRM) organizations were created as lead organizations to address land and water management issues on a regional basis with Commonwealth funding and state support. In 2012, a change in state government signaled a dramatic change in policy that resulted in a significant reduction of state support and commitment. In response to this change, NRM organizations have had to adapt their networks and relationships. In this study, we examine the issues of network relationships, capacity and changing relationships over time using written surveys and focus groups with NRM CEOs, managers and planners (note: data collection events scheduled for March and April 2015). The research team will meet with each of these three groups separately and conduct an in-person survey followed by a facilitated focus group discussion. The NRM participant focus groups will also be subdivided by region, which correlates with capacity (inland/low capacity; coastal/high capacity). The findings focus on how changes in state government commitment have affected NRM networks and their relationships with state agencies. We also examine how these changes vary according to the level within the organization and the capacity of the organization. We hypothesize that: (1) NRM organizations have struggled to maintain capacity in the wake of state agency withdrawal of support; (2) NRM organizations with the lowest capacity have been most adversely affected, while some high-capacity NRM organizations may have become more resilient as they have sought out other partners; (3) network relationships at the highest levels of the organization have been affected the most by state policy change; (4) NRM relationships at the lowest levels of the organizations have changed the least, as formal relationships are replaced by informal networks and relationships.
Abstract:
Aminoacyl-tRNA synthetases (aaRS) are enzymes crucial to the translation of the genetic code. The enzyme acylates the acceptor stem of tRNA with the cognate amino acid bound at the active site when the anticodon is recognized by the anticodon-binding site of the aaRS. In a typical aaRS, the distance between the anticodon region and the aminoacylation site is approximately 70 Å. We have investigated this allosteric phenomenon at the molecular level by MD simulations followed by the analysis of protein structure networks (PSN) of non-covalent interactions. Specifically, we have generated conformational ensembles by performing MD simulations on different liganded states of methionyl-tRNA synthetase (MetRS) from Escherichia coli and tryptophanyl-tRNA synthetase (TrpRS) from human. Residues that are correlated during the MD simulations are identified from cross-correlation maps. We have identified the amino acids connecting the correlated residues by the shortest path between the two selected members of the PSN, and the frequencies of the paths have been evaluated from the MD snapshots [1]. The conformational populations in different liganded states of the protein have been captured in terms of network parameters such as hubs, cliques and communities [2]. These parameters reflect the rigidity and plasticity of the protein conformations and can be associated with the free energy landscape. A comparison of the allosteric communication in MetRS and TrpRS [3] elucidated in this study highlights the diverse means adopted by different enzymes to perform a similar function. The computational method described for these two enzymes can be applied to the investigation of allostery in other systems.
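A toy sketch of the shortest-path analysis on a PSN-like graph (using networkx; the node names and edge weights are invented, and the MD/cross-correlation analysis that produces the real network is not reproduced):

```python
import networkx as nx

# Toy protein structure network: nodes are residues/sites, edges are
# non-covalent contacts with hypothetical weights (lower = stronger link).
G = nx.Graph()
G.add_weighted_edges_from([
    ("ANTICODON_SITE", "R150", 1.0), ("R150", "D210", 0.8),
    ("D210", "K320", 0.6), ("K320", "ACTIVE_SITE", 1.0),
    ("R150", "ACTIVE_SITE", 3.5),   # weaker direct route
])

# Shortest weighted path between the two functional sites, analogous to
# tracing allosteric communication between anticodon and acylation sites.
print(nx.shortest_path(G, "ANTICODON_SITE", "ACTIVE_SITE", weight="weight"))
```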
Abstract:
This study addresses four issues concerning technological product innovations. First, the nature of the very early phases or "embryonic stages" of technological innovation is addressed. Second, this study analyzes why and by what means people initiate innovation processes outside the technological community and the field of expertise of the established industry. In other words, this study addresses the initiation of innovation that occurs without the expertise of established organizations, such as technology firms, professional societies and research institutes operating in the technological field under consideration. Third, the significance of interorganizational learning processes for technological innovation is dealt with. Fourth, this is supplemented by an examination of how network collaboration and learning change as formalized product development work and the commercialization of the innovation advance. These issues are addressed through the empirical analysis of the following three product innovations: Benecol margarine, the Nordic Mobile Telephone system (NMT) and the ProWellness Diabetes Management System (PDMS). This study utilizes the theoretical insights of cultural-historical activity theory on the development of human activities and learning. Activity-theoretical conceptualizations are used in the critical assessment and advancement of the concept of networks of learning. This concept was originally proposed by the research group of organizational scientist Walter Powell. A network of learning refers to interorganizational collaboration that pools resources, ideas and know-how without market-based or hierarchical relations. The concept of an activity system is used in defining the nodes of the networks of learning. Network collaboration and learning are analyzed with regard to the shared object of development work. According to this study, enduring dilemmas and tensions in activity explain the participants' motives for carrying out actions that lead to novel product concepts in the early phases of technological innovation. These actions comprise the initiation of development work outside the relevant fields of expertise, and collaboration and learning across fields of expertise in the absence of market-based or hierarchical relations. These networks of learning are fragile and impermanent. This study suggests that networks of learning across fields of expertise are becoming increasingly crucial for innovation activities.
Abstract:
This thesis is a study of the Chinese One-Child Generation's digital and social sharing. It examines urban youth grassroots communities, including an urban farmers' community and volunteers in educational camps. These case studies explain the emergence of 'sharism' as a reaction to growing risks in China, such as food safety problems and environmental degradation stemming from China's rapid economic development, as well as growing urbanism, globalisation, and consumerism. The new forms of 'sharism' are linked to guanxi (social relations) and to connected youth communities, made possible by the increasing accessibility of digital and networked technologies.
Abstract:
Structural and rheological features of a series of molecular hydrogels formed by synthetic bile salt analogues have been scrutinized. Among the seven gelators, two are neutral compounds, while the others are cationic systems, among which one is a tripodal steroid derivative. Although the chemical structures are closely related, the physical characteristics vary widely: in the structure of the connected fibers (either plain cylinders or ribbons), in the dynamical modes of stress relaxation of the associated SAFINs (self-assembled fibrillar networks), in the scaling laws of the shear elasticity (typical of either cellular solids or fractal floc-like assemblies), in the micron-scale texture and the distribution of ordered domains (spherulites, crystallites) embedded in a random mesh, in the type of nodal zones (crystalline-like zones, fiber entanglements, or bundles), in the evolution of the distribution and morphology of fibers and nodes, and in the sensitivity to added salt. SANS appears to be a suitable technique to infer all the geometrical parameters defining the fibers, their interaction modes, and the volume fraction of nodes in a SAFIN. The tripodal system is particularly singular in the series: it exhibits viscosity overshoots at the startup of shear flows, an “umbrella-like” molecular packing mode involving three molecules per fiber cross-section, and scattering correlation peaks revealing the ordering and overlap of 1D self-assembled polyelectrolyte species.