648 results for bayesian networks
Abstract:
Enterprise social networks (ESNs) often fail if there are few or no contributors of content. Promotional messages are among the common interventions used to improve participation. Most users only read others' content (i.e., lurk), while contributors who create content (i.e., post) account for only 1% of users. Research on interventions to improve participation across dissimilar groups is scarce, especially in work settings. We develop a model that examines four key motivations for posting and lurking. We employ the elaboration likelihood model to understand how promotional messages influence lurkers' and posters' beliefs and participation. We test our model with data collected from 366 members of two corporate Google communities in a large Australian retail organization. We find that posters and lurkers are motivated and hindered by different factors. Promotional messages do not always yield the hoped-for results among lurkers; however, they do make posters more enthusiastic about participating.
Abstract:
This project was a step forward in introducing suitable cooperative diversity transmission techniques for vehicle-to-vehicle communications. The contributions are intended to aid the successful implementation of future vehicular safety and autonomous control systems. Several protocols were introduced for vehicles to communicate effectively without losing connectivity. This study investigated novel protocols in terms of diversity-multiplexing trade-off and outage performance for a range of potential vehicular safety and infotainment applications.
Abstract:
The effect of tunnel junction resistances on the electronic properties and the magneto-resistance of few-layer graphene sheet networks is investigated. By decreasing the tunnel junction resistances, a transition from strong localization to weak localization occurs and the magneto-resistance changes from positive to negative. It is shown that the positive magneto-resistance is due to Zeeman splitting of the electronic states at the Fermi level, as it changes with the bias voltage. As the tunnel junction resistances decrease, the network resistance is well described by the 2D weak localization model. The sensitivity of the magneto-resistance to the bias voltage becomes negligible and diminishes with increasing temperature. It is shown that the 2D weak localization effect mainly occurs inside the few-layer graphene sheets, and that the minimum temperature of 5 K in our experiments is not sufficiently low to allow us to observe the 2D weak localization effect of the networks as it occurs in 2D disordered metal films. Furthermore, defects inside the few-layer graphene sheets have a negligible effect on the resistance of networks with small tunnel junction resistances between the sheets.
Abstract:
In this work, we study the fractal and multifractal properties of a family of fractal networks introduced by Gallos et al (2007 Proc. Nat. Acad. Sci. USA 104 7746). In this fractal network model there is a parameter e, between 0 and 1, that allows for tuning the level of fractality in the network. Here we examine the multifractal behavior of these networks and the dependence of the fractal dimension and the multifractal parameters on e. First, we find that the empirical fractal dimensions of these networks obtained by our program coincide with the theoretical formula given by Song et al (2006 Nature Phys. 2 275). Then, from the shape of the τ(q) and D(q) curves, we find the existence of multifractality in these networks. Finally, we find a linear relationship between the average information dimension D(1) and the parameter e.
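The fractal dimension analysis summarized above rests on box covering. The sketch below is purely illustrative (a hypothetical path graph and a greedy ball-covering heuristic, not the authors' program): it counts the boxes N_B needed at several diameters l_B and estimates the box dimension from the slope of log N_B versus log l_B.

```python
import math
from collections import deque

def bfs_dists(adj, src):
    """Shortest-path distances from src via breadth-first search."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def box_count(adj, lb):
    """Greedy covering: count balls of radius lb-1 needed to cover all nodes."""
    uncovered = set(adj)
    boxes = 0
    while uncovered:
        seed = next(iter(uncovered))
        d = bfs_dists(adj, seed)
        uncovered -= {v for v in uncovered if d.get(v, lb) < lb}
        boxes += 1
    return boxes

# hypothetical 64-node path network (box dimension should be close to 1)
n = 64
adj = {i: [j for j in (i - 1, i + 1) if 0 <= j < n] for i in range(n)}
counts = [(lb, box_count(adj, lb)) for lb in (2, 4, 8)]

# slope of log N_B vs log l_B approximates -d_B
xs = [math.log(lb) for lb, _ in counts]
ys = [math.log(nb) for _, nb in counts]
slope = (ys[-1] - ys[0]) / (xs[-1] - xs[0])
print(counts, round(-slope, 2))
```

For a path graph the estimated dimension lands near 1, as expected; on the model networks the same procedure would be repeated while varying e.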
Abstract:
Background: Multilevel and spatial models are increasingly used to obtain substantive information on area-level inequalities in cancer survival. Multilevel models assume independent geographical areas, whereas spatial models explicitly incorporate geographical correlation, often via a conditional autoregressive prior. However, the relative merits of these methods for large population-based studies have not been explored. Using a case-study approach, we report on the implications of using multilevel and spatial survival models to study geographical inequalities in all-cause survival. Methods: Multilevel discrete-time and Bayesian spatial survival models were used to study geographical inequalities in all-cause survival for a population-based colorectal cancer cohort of 22,727 cases aged 20–84 years diagnosed during 1997–2007 in Queensland, Australia. Results: Both approaches were viable on this large dataset and produced similar estimates of the fixed effects. After adding area-level covariates, the between-area variability in survival using multilevel discrete-time models was no longer significant. Spatial inequalities in survival were also markedly reduced after adjusting for aggregated area-level covariates. Only the multilevel approach, however, provided an estimate of the contribution of geographical variation to the total variation in survival between individual patients. Conclusions: With little difference observed between the two approaches in the estimation of fixed effects, multilevel models should be favored if there is a clear hierarchical data structure and measuring the independent impact of individual- and area-level effects on survival differences is of primary interest. Bayesian spatial analyses may be preferred if spatial correlation between areas is important and the priority is to assess small-area variations in survival and map spatial patterns. Both approaches can be readily fitted to geographically enabled survival data from international settings.
Abstract:
A theoretical basis is required for comparing key features and critical elements in wild fisheries and aquaculture supply chains under a changing climate. Here we develop a new quantitative metric that is analogous to indices used to analyse food webs and identify key species. The Supply Chain Index (SCI) identifies critical elements as those with large throughput rates and greater connectivity. The sum of the scores for a supply chain provides a single metric that roughly captures both the resilience and the connectedness of a supply chain. Standardised scores can facilitate cross-comparisons under current conditions as well as under a changing climate. Identification of key elements along the supply chain may assist in informing adaptation strategies to reduce anticipated future risks posed by climate change. The SCI also provides information on the relative stability of different supply chains, based on whether there is a fairly even spread in the individual scores of the top few key elements or a more critical dependence on a few key individual elements. We use as a case study the Australian southern rock lobster Jasus edwardsii fishery, which is challenged by a number of climate change drivers such as impacts on recruitment and growth due to changes in large-scale and local oceanographic features. The SCI identifies airports, processors and Chinese consumers as the key elements in the lobster supply chain that merit attention to enhance stability and potentially enable growth. We also apply the index to four additional real-world Australian commercial fishery supply chains and two aquaculture industry supply chains to highlight the utility of a systematic method for describing supply chains. Overall, our simple methodological approach to empirically based supply chain research provides an objective method for comparing the resilience of supply chains and highlighting components that may be critical.
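The abstract does not give the exact SCI formula, so the sketch below is only an illustrative assumption: each hypothetical supply-chain element is scored as throughput times connectivity (number of links), and the scores are summed into a single chain-level metric with the top-scoring element flagged as critical.

```python
# hypothetical supply-chain graph: element -> (throughput, linked elements)
chain = {
    "fishers":    (100, ["processors"]),
    "processors": (100, ["fishers", "airports"]),
    "airports":   (95,  ["processors", "consumers"]),
    "consumers":  (90,  ["airports"]),
}

def sci_scores(chain):
    """Score each element by throughput x connectivity (illustrative formula)."""
    return {name: tput * len(links) for name, (tput, links) in chain.items()}

scores = sci_scores(chain)
index = sum(scores.values())        # single chain-level metric
key = max(scores, key=scores.get)   # most critical element by score
print(scores, index, key)
```

Under this toy scoring, well-connected high-throughput intermediaries (here "processors") dominate the index, mirroring the paper's finding that airports and processors are critical elements.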
Abstract:
Statistical comparison of oil samples is an integral part of oil spill identification, which deals with the process of linking an oil spill with its source of origin. In current practice, a frequentist hypothesis test is often used to evaluate evidence in support of a match between a spill and a source sample. As frequentist tests are only able to evaluate evidence against a hypothesis but not in support of it, we argue that this leads to unsound statistical reasoning. Moreover, currently only verbal conclusions on a very coarse scale can be made about the match between two samples, whereas a finer quantitative assessment would often be preferred. To address these issues, we propose a Bayesian predictive approach for evaluating the similarity between the chemical compositions of two oil samples. We derive the underlying statistical model from some basic assumptions on modeling assays in analytical chemistry, and to further facilitate and improve numerical evaluations, we develop analytical expressions for the key elements of Bayesian inference for this model. The approach is illustrated with both simulated and real data and is shown to have appealing properties in comparison with both standard frequentist and Bayesian approaches.
Abstract:
In this paper the issue of finding uncertainty intervals for queries in a Bayesian network is reconsidered. The investigation focuses on Bayesian networks with discrete nodes and finite populations. An earlier asymptotic approach is compared with a simulation-based approach, together with further alternatives: one based on a single sample of the Bayesian network at a particular finite population size, and another which uses expected population sizes together with exact probabilities. We conclude that a query of a Bayesian network should be expressed as a probability embedded in an uncertainty interval. Based on an investigation of two Bayesian network structures, the preferred method is the simulation method. However, both the single sample method and the expected sample size method may be useful and are simpler to compute. Any method at all is more useful than none when assessing a Bayesian network under development, or when drawing conclusions from an expert system.
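The simulation-based approach can be sketched on a hypothetical two-node network A -> B: repeatedly simulate finite populations from assumed conditional probability tables and take percentiles of the empirical query probability as the uncertainty interval. The CPT values, population size and replicate count below are illustrative, not from the paper.

```python
import random

random.seed(1)
p_a = 0.3                      # P(A = 1), hypothetical CPT
p_b_given = {0: 0.2, 1: 0.9}   # P(B = 1 | A), hypothetical CPT
N = 200                        # finite population size
reps = 2000                    # simulated populations

def query_prob():
    """Empirical P(B = 1) in one simulated finite population of size N."""
    hits = 0
    for _ in range(N):
        a = 1 if random.random() < p_a else 0
        b = 1 if random.random() < p_b_given[a] else 0
        hits += b
    return hits / N

samples = sorted(query_prob() for _ in range(reps))
lo, hi = samples[int(0.025 * reps)], samples[int(0.975 * reps)]
exact = p_a * 0.9 + (1 - p_a) * 0.2   # marginal P(B = 1) = 0.41
print(exact, (lo, hi))
```

The interval (lo, hi) brackets the exact query probability and shows how much a finite population of 200 individuals can deviate from it, which is the sense in which a query should be reported as a probability embedded in an interval.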
Abstract:
Major advances in power electronics during recent years have prompted considerable interest within the traction community. The capability of new technologies to reduce the AC railway networks' effect on power quality and improve their supply efficiency is expected to significantly decrease the cost of electric rail supply systems. Of particular interest are Static Frequency Converter (SFC), Rail Power Conditioner (RPC), High Voltage Direct Current (HVDC) and Energy Storage Systems (ESS) solutions. Substantial impacts on future feasibility of railway electrification are anticipated. Aurizon, Australia's largest heavy haul railway operator, has recently commissioned the world's first 50Hz/50Hz SFC installation and is currently investigating SFC, RPC, HVDC and ESS solutions. This paper presents a summary of current and emerging technologies with a particular focus on the potential techno-economic benefits.
Abstract:
Gene expression is arguably the most important indicator of biological function. Thus, identifying differentially expressed genes is one of the main aims of high-throughput studies that use microarray and RNAseq platforms to study deregulated cellular pathways. There are many tools for analysing differential gene expression from transcriptomic datasets. The major challenge is to estimate gene expression variance, due to the high amount of background noise generated by biological equipment and the lack of biological replicates. Bayesian inference has been widely used in the bioinformatics field. In this work, we show that the prior knowledge employed in the Bayesian framework also helps to improve the accuracy of differential gene expression analysis when using a small number of replicates. We have developed a differential analysis tool that uses Bayesian estimation of the variance of gene expression for use with small numbers of biological replicates. Our method is more consistent than the widely used Cyber-T tool, which successfully introduced the Bayesian framework to differential analysis. We also provide a user-friendly web-based graphical user interface for biologists to use with microarray and RNAseq data. Bayesian inference can compensate for the instability of variance estimates caused by a small number of biological replicates by using pseudo replicates as prior knowledge. We also show that our new strategy for selecting pseudo replicates improves the performance of the analysis.
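The variance-shrinkage idea can be sketched with a simple moderated estimator that pools a noisy per-gene sample variance with a prior variance acting as pseudo replicates. The weighting below is a common illustrative form (in the spirit of Cyber-T-style moderation), not the paper's exact estimator.

```python
def moderated_variance(sample_var, n, prior_var, n0):
    """Shrink a per-gene variance toward a prior (pseudo-replicate) variance,
    weighting by the real (n - 1) and pseudo (n0) degrees of freedom."""
    return (n0 * prior_var + (n - 1) * sample_var) / (n0 + n - 1)

# hypothetical gene measured with only 3 replicates; the prior variance is
# assumed to be pooled from genes with similar expression levels
v = moderated_variance(sample_var=4.0, n=3, prior_var=1.0, n0=10)
print(v)   # shrunk well below the raw estimate of 4.0
```

With only three replicates the raw variance is unreliable, so the pseudo replicates dominate and pull the estimate toward the pooled prior, stabilising downstream test statistics.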
Abstract:
Cancer is the leading contributor to the disease burden in Australia. This thesis develops and applies Bayesian hierarchical models to facilitate an investigation of the spatial and temporal associations for cancer diagnosis and survival among Queenslanders. The key objectives are to document and quantify the importance of spatial inequalities, explore factors influencing these inequalities, and investigate how spatial inequalities change over time. Existing Bayesian hierarchical models are refined, new models and methods developed, and tangible benefits obtained for cancer patients in Queensland. The versatility of Bayesian models in cancer control is clearly demonstrated through these detailed and comprehensive analyses.
Abstract:
The prospect of synthesizing ordered, covalently bonded structures directly on a surface has recently attracted considerable attention due to its fundamental interest and for potential applications in electronics and photonics. This prospective article focuses on efforts to synthesize and characterize epitaxial one- and two-dimensional (1D and 2D, respectively) polymeric networks on single crystal surfaces. Recent studies, mostly performed using scanning tunneling microscopy (STM), demonstrate the ability to induce polymerization based on Ullmann coupling, thermal dehalogenation and dehydration reactions. The 2D polymer networks synthesized to date have exhibited structural limitations and have been shown to form only small domains on the surface. We discuss different approaches to control 1D and 2D polymerization, with particular emphasis on the surface phenomena that are critical to the formation of larger ordered domains.
Abstract:
This thesis presents a novel approach to building large-scale agent-based models of networked physical systems using a compositional approach to provide extensibility and flexibility in building the models and simulations. A software framework (MODAM - MODular Agent-based Model) was implemented for this purpose, and validated through simulations. These simulations allow assessment of the impact of technological change on the electricity distribution network looking at the trajectories of electricity consumption at key locations over many years.
Abstract:
Ship seakeeping operability refers to the quantification of motion performance in waves relative to mission requirements. This is used to make decisions about preferred vessel designs, but it can also be used as a comprehensive assessment of the benefits of ship-motion-control systems. Traditionally, operability computation aggregates statistics of motion computed over the envelope of likely environmental conditions in order to determine a coefficient in the range from 0 to 1 called operability. When used for assessment of motion-control systems, the increase of operability is taken as the key performance indicator. The operability coefficient is often given the interpretation of the percentage of time operable. This paper considers an alternative probabilistic approach to this traditional computation of operability. It characterises operability not as a number to which a frequency interpretation is attached, but as a hypothesis that a vessel will attain the desired performance in one mission considering the envelope of likely operational conditions. This enables the use of Bayesian theory to compute the probability that this hypothesis is true, conditional on data from simulations. Thus, the metric considered is the probability of operability. This formulation not only adheres to recent developments in reliability and risk analysis, but also allows incorporating into the analysis more accurate descriptions of ship-motion-control systems, since the analysis is not limited to linear ship responses in the frequency domain. The paper also discusses an extension of the approach to the case of assessment of increased levels of autonomy for unmanned marine craft.
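The probability-of-operability idea can be sketched with a Beta-Binomial model: treat each simulated mission as a success or failure against the performance criteria and compute the posterior over the operability probability. The prior, the success/failure counts and the 0.8 threshold below are illustrative assumptions, not values from the paper.

```python
import math

def beta_cdf(x, a, b, steps=20000):
    """Numerical CDF of Beta(a, b) on [0, x] via a simple Riemann sum."""
    lognorm = math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
    h = x / steps
    total = 0.0
    for i in range(1, steps):
        t = i * h
        total += math.exp(lognorm + (a - 1) * math.log(t) + (b - 1) * math.log(1 - t))
    return total * h

# hypothetical simulation outcomes: missions meeting the motion criteria
successes, failures = 46, 4
a, b = 1 + successes, 1 + failures     # Beta(1, 1) uniform prior
post_mean = a / (a + b)                # posterior mean probability of operability
p_above_08 = 1 - beta_cdf(0.8, a, b)   # P(operability > 0.8 | simulation data)
print(round(post_mean, 3), round(p_above_08, 3))
```

Unlike a single operability coefficient, the posterior supports statements such as "the probability that operability exceeds 0.8 is high", and it sharpens or widens automatically with the number of simulated missions.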
Abstract:
This paper proposes new metrics and a performance-assessment framework for vision-based weed and fruit detection and classification algorithms. In order to compare algorithms, and make a decision on which one to use for a particular application, it is necessary to take into account that the performance obtained in a series of tests is subject to uncertainty. Such characterisation of uncertainty seems not to be captured by the performance metrics currently reported in the literature. Therefore, we pose the problem as a general problem of scientific inference, which arises out of incomplete information, and propose as a metric of performance the (posterior) predictive probabilities that the algorithms will provide a correct outcome for target and background detection. We detail the framework through which these predicted probabilities can be obtained, which is Bayesian in nature. As an illustrative example, we apply the framework to the assessment of performance of four algorithms that could potentially be used in the detection of capsicums (peppers).
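As an illustration of posterior predictive probabilities for detection outcomes, the sketch below applies Laplace's rule of succession under a uniform Beta(1, 1) prior. The algorithm names and test counts are hypothetical, and the paper's actual model may be richer than this.

```python
def predictive_correct(successes, trials):
    """Posterior predictive P(next detection is correct) under a uniform prior:
    (s + 1) / (n + 2), Laplace's rule of succession."""
    return (successes + 1) / (trials + 2)

# hypothetical test results: (correct detections, total test images)
results = {"algo_A": (88, 100), "algo_B": (90, 120)}
pred = {name: round(predictive_correct(s, n), 3) for name, (s, n) in results.items()}
print(pred)
```

Because the predictive probability accounts for the number of trials as well as the raw success rate, an algorithm tested on fewer images is penalised less confidently than its empirical accuracy alone would suggest.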