884 results for Inference module


Relevance: 10.00%

Abstract:

Effective risk management is crucial for any organisation. One of its key steps is risk identification, but few tools exist to support this process. Here we present a method for the automatic discovery of a particular type of process-related risk, the danger of deadline transgressions or overruns, based on the analysis of event logs. We define a set of time-related process risk indicators, i.e., patterns observable in event logs that highlight the likelihood of an overrun, and then show how instances of these patterns can be identified automatically using statistical principles. To demonstrate its feasibility, the approach has been implemented as a plug-in module to the process mining framework ProM and tested using an event log from a Dutch financial institution.
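The idea of a time-related risk indicator can be illustrated with a minimal sketch that flags cases whose duration is a statistical outlier relative to the rest of the log (toy data and a simple mean-plus-z-standard-deviations threshold; the actual ProM plug-in defines a richer set of indicators):

```python
from datetime import datetime
from statistics import mean, stdev

# Hypothetical event log: (case_id, activity, timestamp) tuples.
events = [
    ("c1", "start", datetime(2024, 1, 1)), ("c1", "end", datetime(2024, 1, 3)),
    ("c2", "start", datetime(2024, 1, 1)), ("c2", "end", datetime(2024, 1, 2)),
    ("c3", "start", datetime(2024, 1, 1)), ("c3", "end", datetime(2024, 1, 9)),
]

def case_durations(events):
    """Duration in days of each case, from its first to its last event."""
    times = {}
    for case, _, ts in events:
        lo, hi = times.get(case, (ts, ts))
        times[case] = (min(lo, ts), max(hi, ts))
    return {c: (hi - lo).days for c, (lo, hi) in times.items()}

def overrun_risk_cases(events, z=1.0):
    """Flag cases whose duration exceeds the mean by z standard deviations."""
    d = case_durations(events)
    mu, sd = mean(d.values()), stdev(d.values())
    return [c for c, v in d.items() if v > mu + z * sd]

print(overrun_risk_cases(events))  # -> ['c3']: c3 takes far longer than the others
```

A real indicator would condition on activity types and deadlines rather than raw case duration, but the statistical principle (flagging patterns that make an overrun likely) is the same.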

Relevance: 10.00%

Abstract:

This paper addresses the issue of analogical inference, and its potential role as the mediator of new therapeutic discoveries, by using disjunction operators based on quantum connectives to combine many potential reasoning pathways into a single search expression. We extend our previous work, in which we developed an approach to analogical retrieval using the Predication-based Semantic Indexing (PSI) model, which encodes both concepts and the relationships between them in a high-dimensional vector space. As in our previous work, we leverage the ability of PSI to infer predicate pathways connecting two example concepts, in this case comprising known therapeutic relationships. For example, given that drug x TREATS disease z, we might infer the predicate pathway drug x INTERACTS WITH gene y ASSOCIATED WITH disease z, and use this pathway to search for drugs related to another disease in similar ways. As biological systems tend to be characterized by networks of relationships, we evaluate the ability of quantum-inspired operators to mediate inference and retrieval across multiple relations by testing the ability of different approaches to recover known therapeutic relationships. In addition, we introduce a novel complex-vector implementation of PSI, based on Plate’s Circular Holographic Reduced Representations, which we utilize for all experiments alongside the binary-vector approach applied in our previous research.
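The binding operations underlying this style of vector-symbolic inference can be sketched with unit-magnitude complex phase vectors, in the spirit of Plate’s Circular Holographic Reduced Representations (a simplified toy, not the paper's actual PSI implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1024

def rand_vec(n):
    """Random unit-magnitude complex phase vector (a CHRR-style symbol)."""
    return np.exp(1j * rng.uniform(0, 2 * np.pi, n))

def bind(x, y):
    # Elementwise product of phase vectors corresponds to circular
    # convolution in the Fourier domain; it plays the role of predication.
    return x * y

def unbind(b, x):
    # The complex conjugate is the exact inverse for unit phase vectors.
    return b * np.conj(x)

def sim(a, b):
    """Normalized similarity: ~1 for matching vectors, ~0 for unrelated ones."""
    return float(np.abs(np.vdot(a, b)) / len(a))

drug, treats, disease = rand_vec(n), rand_vec(n), rand_vec(n)
fact = bind(treats, disease)            # encode "TREATS disease"
memory = bind(drug, fact)               # encode "drug TREATS disease"
query = unbind(memory, drug)            # ask: what does drug do?

print(sim(query, fact))        # ~1.0: the predication is recovered
print(sim(query, rand_vec(n))) # ~0.0: unrelated vectors are near-orthogonal
```

Chaining such bindings along a predicate pathway (INTERACTS WITH, ASSOCIATED WITH) is what lets similar relational structure be retrieved for a different disease.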

Relevance: 10.00%

Abstract:

Motor unit number estimation (MUNE) is a method which aims to provide a quantitative indicator of the progression of diseases that lead to loss of motor units, such as motor neurone disease. However, the development of a reliable, repeatable and fast real-time MUNE method has hitherto proved elusive. Ridall et al. (2007) implemented a reversible jump Markov chain Monte Carlo (RJMCMC) algorithm to produce a posterior distribution for the number of motor units, using a Bayesian hierarchical model that takes into account biological information about motor unit activation. However, we find that the approach can be unreliable for some datasets, since it can suffer from poor cross-dimensional mixing. Here we focus on improved inference by marginalising over latent variables to create the likelihood. In particular, we explore how this can improve the RJMCMC mixing, and investigate alternative approaches that utilise the likelihood (e.g. DIC; Spiegelhalter et al., 2002). For this model, the marginalisation is over latent variables which, for a larger number of motor units, involves an intractable summation over all combinations of a set of latent binary variables whose joint sample space increases exponentially with the number of motor units. We provide a tractable and accurate approximation for this quantity, and also investigate simulation approaches incorporated into RJMCMC using results of Andrieu and Roberts (2009).
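For a handful of units the marginalisation can be written down exactly, which makes the exponential cost concrete: the sum runs over all 2^N firing patterns of the latent binary variables (all values below are hypothetical toy numbers, not the model's actual parameterisation):

```python
import itertools
import math

# Toy MUNE-style marginal likelihood: the observed compound response y is
# the sum of the amplitudes of the motor units that fired, plus Gaussian
# noise. Marginalising the latent firing indicators z requires summing
# over all 2**N combinations.
amps = [1.0, 2.0, 3.0]    # hypothetical per-unit amplitudes
probs = [0.5, 0.7, 0.9]   # hypothetical per-unit firing probabilities
sigma = 0.5               # observation noise standard deviation

def normal_pdf(x, mu, sd):
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def marginal_likelihood(y):
    """p(y) = sum over firing patterns z of p(z) * N(y | sum(z * amps), sigma)."""
    total = 0.0
    for z in itertools.product([0, 1], repeat=len(amps)):
        pz = math.prod(p if zi else 1 - p for zi, p in zip(z, probs))
        mu = sum(zi * a for zi, a in zip(z, amps))
        total += pz * normal_pdf(y, mu, sigma)
    return total

# y = 5.0 is dominated by the pattern where units 2 and 3 fire (2 + 3 = 5).
print(marginal_likelihood(5.0))
```

With, say, 30 units the same sum has over a billion terms, which is why a tractable approximation to this quantity is needed.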

Relevance: 10.00%

Abstract:

Exponential growth of genomic data in the last two decades has made manual analyses impractical for all but trial studies. As genomic analyses have become more sophisticated, and move toward comparisons across large datasets, computational approaches have become essential. One of the most important biological questions is to understand the mechanisms underlying gene regulation. Genetic regulation is commonly investigated and modelled through the use of transcriptional regulatory network (TRN) structures. These model the regulatory interactions between two key components: transcription factors (TFs) and the target genes (TGs) they regulate. Transcriptional regulatory networks have proven to be invaluable scientific tools in bioinformatics. When used in conjunction with comparative genomics, they have provided substantial insights into the evolution of regulatory interactions. Current approaches to regulatory network inference, however, omit two additional key entities: promoters and transcription factor binding sites (TFBSs). In this study, we attempted to explore the relationships among these regulatory components in bacteria. Our primary goal was to identify relationships that can assist in reducing the high false positive rates associated with transcription factor binding site predictions, and thereby enhance the reliability of the inferred transcription regulatory networks. In our preliminary exploration of relationships between the key regulatory components in Escherichia coli transcription, we discovered a number of potentially useful features. The combination of location score and sequence dissimilarity scores increased de novo binding site prediction accuracy by 13.6%. Another important observation concerned the relationship between transcription factors, grouped by their regulatory role, and the corresponding promoter strength.
Our study of E. coli σ70 promoters found support at the 0.1 significance level for our hypothesis that weak promoters are preferentially associated with activator binding sites to enhance gene expression, whilst strong promoters have more repressor binding sites to repress or inhibit gene transcription. Although the observations were specific to σ70, they nevertheless strongly encourage additional investigation when more experimentally confirmed data are available. In our preliminary exploration of relationships between the key regulatory components in E. coli transcription, we discovered a number of potentially useful features, some of which proved successful in reducing the number of false positives when applied to re-evaluate binding site predictions. Of chief interest was the relationship observed between promoter strength and TFs with respect to their regulatory role. Based on the common assumption that promoter homology positively correlates with transcription rate, we hypothesised that weak promoters would have more transcription factors that enhance gene expression, whilst strong promoters would have more repressor binding sites. The t-tests assessed for E. coli σ70 promoters returned a p-value of 0.072, which at the 0.1 significance level suggested support for our (alternative) hypothesis, albeit this trend may only be present for promoters whose corresponding TFBSs are either all repressors or all activators. Nevertheless, such suggestive results strongly encourage additional investigation when more experimentally confirmed data become available. Much of the remainder of the thesis concerns a machine learning study of binding site prediction, using the SVM and kernel methods, principally the spectrum kernel.
Spectrum kernels have been successfully applied in previous studies of protein classification [91, 92], as well as to the related problem of promoter prediction [59], and we have here successfully applied the technique to refining TFBS predictions. The advantages provided by the SVM classifier were best seen in 'moderately' conserved transcription factor binding sites, as represented by our E. coli CRP case study. Inclusion of additional position feature attributes further increased accuracy by 9.1%, but more notable was the considerable decrease in false positive rate from 0.8 to 0.5 while retaining 0.9 sensitivity. Improved prediction of transcription factor binding sites is in turn extremely valuable in improving inference of regulatory relationships, a problem notoriously prone to false positive predictions. Here, the number of false regulatory interactions inferred using the conventional two-component model was substantially reduced when we integrated de novo transcription factor binding site predictions as an additional criterion for acceptance in a case study of inference in the Fur regulon. This initial work was extended to a comparative study of the iron regulatory system across 20 Yersinia strains. This work revealed interesting, strain-specific differences, especially between pathogenic and non-pathogenic strains. Such differences were made clear through interactive visualisations using the TRNDiff software developed as part of this work, and would have remained undetected using conventional methods. This approach led to the nomination of the Yfe iron-uptake system as a candidate for further wet-lab experimentation, due to its potential active functionality in non-pathogens and its known participation in full virulence of the bubonic plague strain. Building on this work, we introduced novel structures we have labelled 'regulatory trees', inspired by the phylogenetic tree concept.
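The spectrum kernel used in this classification work is simply the inner product of k-mer count vectors; a minimal sketch on toy DNA strings (the thesis work used it inside an SVM, with additional positional feature attributes):

```python
from collections import Counter

def spectrum(seq, k):
    """k-mer count vector of a sequence (its 'spectrum')."""
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def spectrum_kernel(a, b, k=3):
    """Inner product of the two k-mer spectra."""
    sa, sb = spectrum(a, k), spectrum(b, k)
    return sum(sa[kmer] * sb[kmer] for kmer in sa)

site = "TGTGATCTAGATCACA"  # hypothetical CRP-like palindromic site
print(spectrum_kernel(site, site))            # high self-similarity
print(spectrum_kernel(site, "A" * 16))        # 0: no shared 3-mers
```

Because the kernel only counts shared substrings, it handles the 'moderately' conserved sites well: two sites need not align exactly to share many k-mers.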
Instead of using gene or protein sequence similarity, the regulatory trees were constructed based on the number of similar regulatory interactions. While common phylogenetic trees convey information regarding changes in gene repertoire, which we might regard as analogous to 'hardware', the regulatory tree informs us of changes in regulatory circuitry, in some respects analogous to 'software'. In this context, we explored the 'pan-regulatory network' for the Fur system: the entire set of regulatory interactions found for the Fur transcription factor across a group of genomes. In the pan-regulatory network, emphasis is placed on how the regulatory network for each target genome is inferred from multiple sources instead of a single source, as is the common approach. The benefit of using multiple reference networks is a more comprehensive survey of the relationships, and increased confidence in the regulatory interactions predicted. In the present study, we distinguish between relationships found across the full set of genomes, the 'core-regulatory-set', and interactions found only in a subset of the genomes explored, the 'sub-regulatory-set'. We found nine Fur target gene clusters present across the four genomes studied; this core set potentially identifies basic regulatory processes essential for survival. Species-level differences are seen at the sub-regulatory-set level; for example, the known virulence factors YbtA and PchR were found in Y. pestis and P. aeruginosa respectively, but were not present in either E. coli or B. subtilis. Such factors, and the iron-uptake systems they regulate, are ideal candidates for wet-lab investigation to determine whether or not they are pathogen-specific. In this study, we employed a broad range of approaches to address our goals and assessed these methods using the Fur regulon as our initial case study.
We identified a set of promising feature attributes, demonstrated their success in increasing transcription factor binding site prediction specificity while retaining sensitivity, and showed the importance of binding site predictions in enhancing the reliability of regulatory interaction inferences. Most importantly, these outcomes led to the introduction of a range of visualisations and techniques which are applicable across the entire bacterial spectrum and can be utilised in studies beyond the understanding of transcriptional regulatory networks.

Relevance: 10.00%

Abstract:

Food insecurity is the limited access to, or availability of, nutritious, culturally appropriate and safe foods, or the inability to access these foods by socially acceptable means. In Australia, the monitoring of food insecurity is limited to the use of a single item included in the three-yearly National Health Survey (NHS). The current research comprised a) a review of the literature and available tools to measure food security, b) piloting and adaptation of the more comprehensive 16-item United States Department of Agriculture (USDA) Food Security Survey Module (FSSM), and c) a cross-sectional study comparing this more comprehensive tool, and its 10- and 6-item short forms, with the current single item used in the NHS, among a sample of households in disadvantaged urban areas of Brisbane, Australia. Findings have shown that internationally the 16-item USDA-FSSM is the most widely used tool for the measurement of food insecurity. Furthermore, among the validated tools that exist to measure food insecurity, sensitivity and reliability decline as the number of questions in a tool decreases. Among an Australian sample, the current single measure utilised in the NHS yielded a significantly lower prevalence of food insecurity compared to the 16-item USDA-FSSM and its two shorter forms (four and two percentage points lower, respectively). These findings suggest that the current prevalence of food insecurity (estimated at 6% in the most recent NHS) may have been underestimated, and have important implications for the development of an effective means of monitoring food security within the context of a developed country.

Relevance: 10.00%

Abstract:

Objective: Food insecurity may be associated with a number of adverse health and social outcomes; however, our knowledge of its public health significance in Australia has been limited by the use of a single-item measure in the Australian National Health Surveys (NHS) and, more recently, the exclusion of food security items from these surveys. The current study compares prevalence estimates of food insecurity in disadvantaged urban areas of Brisbane using the one-item NHS measure with three adaptations of the United States Department of Agriculture Food Security Survey Module (USDA-FSSM). Design: Data were collected by postal survey (n = 505, 53% response). Food security status was ascertained by the measure used in the NHS, and by the 6-, 10- and 18-item versions of the USDA-FSSM. Demographic characteristics of the sample, prevalence estimates of food insecurity and the different levels of food insecurity estimated by each tool were determined. Setting: Disadvantaged suburbs of Brisbane city, Australia, 2009. Subjects: Individuals aged ≥ 18 years. Results: Food insecurity was prevalent in socioeconomically disadvantaged urban areas, estimated at 19.5% using the single-item NHS measure. This was significantly less than the 24.6% (P < 0.01), 22.0% (P = 0.01) and 21.3% (P = 0.03) identified using the 18-item, 10-item and 6-item versions of the USDA-FSSM, respectively. The proportions of the sample reporting more severe levels of food insecurity were 10.7%, 10.0% and 8.6% for the 18-, 10- and 6-item USDA measures respectively; however, this degree of food insecurity could not be ascertained using the NHS measure. Conclusions: The measure of food insecurity employed in the NHS may underestimate its prevalence and public health significance. Future monitoring and surveillance efforts should seek to employ a more accurate measure.
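The short-form modules classify a household by counting affirmative responses; the sketch below applies the category thresholds commonly published for the USDA 6-item short form (raw score 0-1, 2-4, 5-6). The thresholds and field names here are illustrative and should be verified against the current USDA module documentation before any real use:

```python
def classify_6item(raw_score):
    """Categorise a household from the number of affirmative responses
    to the 6-item short form (thresholds as commonly published by USDA)."""
    if not 0 <= raw_score <= 6:
        raise ValueError("raw score must be between 0 and 6")
    if raw_score <= 1:
        return "high or marginal food security"
    if raw_score <= 4:
        return "low food security"
    return "very low food security"

responses = [True, True, False, True, False, False]  # hypothetical household
print(classify_6item(sum(responses)))  # -> low food security
```

This also makes the paper's comparison concrete: the single-item NHS measure can only split households into food secure versus insecure, whereas the graded score distinguishes "low" from the more severe "very low" category.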

Relevance: 10.00%

Abstract:

This tutorial is designed to help new users become familiar with using the PicoBlaze microcontroller with the Spartan-3E board. The tutorial gives a brief introduction to the PicoBlaze microcontroller, and then steps through the following:

- Writing a small PicoBlaze assembly language (.psm) file, and stepping through the process of assembling the .psm file using KCPSM3;
- Writing a top-level VHDL module to connect the PicoBlaze microcontroller (KCPSM3 component) and the program ROM, and to connect the required input and output ports;
- Connecting the top-level module inputs and outputs to the switches, buttons and LEDs on the Spartan-3E board;
- Downloading the program to the Spartan-3E board using the Project Navigator software.

Relevance: 10.00%

Abstract:

Modern applications comprise multiple components, such as browser plug-ins, often of unknown provenance and quality. Statistics show that failure of such components accounts for a high percentage of software faults. Enabling isolation of such fine-grained components is therefore necessary to increase the robustness and resilience of security-critical and safety-critical computer systems. In this paper, we evaluate whether such fine-grained components can be sandboxed through the use of the hardware virtualization support available in modern Intel and AMD processors. We compare the performance and functionality of this approach to two previous software-based approaches. The results demonstrate that hardware isolation minimizes the difficulties encountered with software-based approaches, while also reducing the size of the trusted computing base, thus increasing confidence in the solution's correctness. We also show that our relatively simple implementation has equivalent run-time performance, with overheads of less than 34%, does not require custom tool chains, and provides enhanced functionality over software-only approaches, confirming that hardware virtualization technology is a viable mechanism for fine-grained component isolation.

Relevance: 10.00%

Abstract:

The use of the Trusted Platform Module (TPM) is becoming increasingly popular in many security systems. To access objects protected by the TPM (such as cryptographic keys), several cryptographic protocols, such as the Object Specific Authorization Protocol (OSAP), can be used. Given the sensitivity and the importance of the objects protected by the TPM, the security of this protocol is vital. Formal methods allow a precise and complete analysis of cryptographic protocols, such that their security properties can be asserted with high assurance. Unfortunately, formal verification of these protocols is limited, despite the abundance of formal tools that one can use. In this paper, we demonstrate the use of Coloured Petri Nets (CPN), a type of formal technique, to formally model the OSAP. Using this model, we then verify the authentication property of this protocol using the state space analysis technique. The results of the analysis demonstrate that, as reported by Chen and Ryan, the authentication property of OSAP can be violated.

Relevance: 10.00%

Abstract:

The residence time distribution (RTD) is a crucial parameter when treating engine exhaust emissions with a Dielectric Barrier Discharge (DBD) reactor. In this paper, the residence time of such a reactor is investigated using finite-element-based software, COMSOL Multiphysics 4.3. Non-thermal plasma (NTP) discharge is being introduced as a promising method for pollutant emission reduction, and DBD is one of the most advantageous NTP technologies. In a two-cylinder co-axial DBD reactor, tubes are placed between two electrodes and flow passes through the annulus between these barrier tubes. If the mean residence time increases in a DBD reactor, there will be a corresponding increase in reaction time and consequently the pollutant removal efficiency can increase. However, pollutant formation can also occur during an increased mean residence time, and so the proportion of fluid that may remain for periods significantly longer than the mean residence time is of great importance. In this study, first, the residence time distribution is calculated for the standard reactor used by the authors for ultrafine particle (10-500 nm) removal. Then, different geometries and various inlet velocities are considered. Finally, for selected cases, roughness elements are added inside the reactor and the residence time is recalculated. These results will form the basis for a COMSOL plasma and CFD module investigation.
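The mean residence time referred to above can be computed from a tracer pulse response as t_mean = ∫ t·C(t) dt / ∫ C(t) dt; a minimal sketch with hypothetical concentration data and trapezoidal integration (the paper itself obtains C(t) from COMSOL simulations):

```python
# Hypothetical tracer pulse response at the reactor outlet.
t = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]   # time, s
c = [0.0, 0.4, 1.0, 0.6, 0.2, 0.0]   # tracer concentration, arbitrary units

def trapz(y, x):
    """Trapezoidal-rule integral of y over x."""
    return sum((y[i] + y[i + 1]) * (x[i + 1] - x[i]) / 2
               for i in range(len(x) - 1))

area = trapz(c, t)                    # normalisation: E(t) = C(t) / area
t_mean = trapz([ti * ci for ti, ci in zip(t, c)], t) / area

print(round(t_mean, 3))  # -> 2.273 s for this toy curve
```

The long tail of E(t), the fluid staying much longer than t_mean, is the part relevant to unwanted pollutant formation, and is read directly off the same normalised curve.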

Relevance: 10.00%

Abstract:

In 2008 a move away from medical staff providing nursing education in Vietnam saw the employment of many new nurse academics. To assist in the instruction of these novice academics and provide them with sound teaching and learning practice, as well as curriculum design and implementation skills, Queensland University of Technology (QUT) successfully tendered for an international grant. One of QUT’s initiatives in educating the Vietnamese academics was a distance learning programme. Developed specifically for Vietnamese nurse academics, the programme was designed for Australian-based delivery to academics in Vietnam. This paper will present an overview of why four separate modules were utilised for the delivery of content (modules were delivered at a rate of one per semester). It will address the bilingual online discussion boards used in each of the modules, and the process of moderating these given that comments were posted in both Vietnamese and English. It will describe how content was scaffolded across the four modules and how the modules themselves modelled new teaching delivery strategies. Lastly, it will discuss the considerations of programme delivery given the logistics of an Australian-based delivery. Feedback from the Vietnamese nurse academics across their involvement in the programme (and at the conclusion of their fourth and final module) has been overwhelmingly positive. Feedback suggests the programme has altered the teaching and assessment approaches used by some Vietnamese nurse academics. Additionally, Vietnamese nurse academics report that they are engaging more with the application of their content, indicating a cultural shift in the approach taken in Vietnamese nurse education.

Relevance: 10.00%

Abstract:

Advances in algorithms for approximate sampling from a multivariable target function have led to solutions to challenging statistical inference problems that would otherwise not be considered by the applied scientist. Such sampling algorithms are particularly relevant to Bayesian statistics, since the target function is the posterior distribution of the unobservables given the observables. In this thesis we develop, adapt and apply Bayesian algorithms, whilst addressing substantive applied problems in biology and medicine as well as other applications. For an increasing number of high-impact research problems, the primary models of interest are often sufficiently complex that the likelihood function is computationally intractable. Rather than discard these models in favour of inferior alternatives, a class of Bayesian "likelihood-free" techniques (often termed approximate Bayesian computation (ABC)) has emerged in the last few years, which avoids direct likelihood computation through repeated sampling of data from the model and comparison of observed and simulated summary statistics. In Part I of this thesis we utilise sequential Monte Carlo (SMC) methodology to develop new algorithms for ABC that are more efficient in terms of the number of model simulations required, and are almost black-box since very little algorithmic tuning is required. In addition, we address the issue of deriving appropriate summary statistics to use within ABC via a goodness-of-fit statistic and indirect inference. Another important problem in statistics is the design of experiments, that is, how one should select the values of the controllable variables in order to achieve some design goal. The presence of parameter and/or model uncertainty is a computational obstacle when designing experiments, and can lead to inefficient designs if not accounted for correctly. The Bayesian framework accommodates such uncertainties in a coherent way.
If the amount of uncertainty is substantial, it can be of interest to perform adaptive designs in order to accrue information to make better decisions about future design points. This is of particular interest if the data can be collected sequentially. In a sense, the current posterior distribution becomes the new prior distribution for the next design decision. Part II of this thesis creates new algorithms for Bayesian sequential design to accommodate parameter and model uncertainty using SMC. The algorithms are substantially faster than previous approaches allowing the simulation properties of various design utilities to be investigated in a more timely manner. Furthermore the approach offers convenient estimation of Bayesian utilities and other quantities that are particularly relevant in the presence of model uncertainty. Finally, Part III of this thesis tackles a substantive medical problem. A neurological disorder known as motor neuron disease (MND) progressively causes motor neurons to no longer have the ability to innervate the muscle fibres, causing the muscles to eventually waste away. When this occurs the motor unit effectively ‘dies’. There is no cure for MND, and fatality often results from a lack of muscle strength to breathe. The prognosis for many forms of MND (particularly amyotrophic lateral sclerosis (ALS)) is particularly poor, with patients usually only surviving a small number of years after the initial onset of disease. Measuring the progress of diseases of the motor units, such as ALS, is a challenge for clinical neurologists. Motor unit number estimation (MUNE) is an attempt to directly assess underlying motor unit loss rather than indirect techniques such as muscle strength assessment, which generally is unable to detect progressions due to the body’s natural attempts at compensation. 
Part III of this thesis builds upon a previous Bayesian technique, which develops a sophisticated statistical model that takes into account physiological information about motor unit activation and various sources of uncertainties. More specifically, we develop a more reliable MUNE method by applying marginalisation over latent variables in order to improve the performance of a previously developed reversible jump Markov chain Monte Carlo sampler. We make other subtle changes to the model and algorithm to improve the robustness of the approach.
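The likelihood-free idea from Part I can be illustrated with the simplest possible ABC rejection sampler: keep prior draws whose simulated summary statistic lies within a tolerance of the observed one (a toy normal-mean example; the thesis develops far more efficient SMC-based variants and principled summary-statistic choices):

```python
import random
import statistics

random.seed(1)

# "Observed" data from an unknown-mean normal model; summary = sample mean.
observed = [random.gauss(3.0, 1.0) for _ in range(200)]
s_obs = statistics.mean(observed)

def simulate(theta, n=200):
    """Simulate data from the model and return its summary statistic."""
    return statistics.mean(random.gauss(theta, 1.0) for _ in range(n))

accepted = []
while len(accepted) < 100:
    theta = random.uniform(-10, 10)           # draw from a flat prior
    if abs(simulate(theta) - s_obs) < 0.1:    # tolerance eps = 0.1
        accepted.append(theta)

# Accepted draws approximate the posterior; mean should be near 3.
print(round(statistics.mean(accepted), 2))
```

The inefficiency is visible here: most prior draws are rejected, and every proposal costs a full model simulation, which is exactly what SMC-based ABC is designed to reduce.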

Relevance: 10.00%

Abstract:

Modelling video sequences by subspaces has recently shown promise for recognising human actions. Subspaces are able to accommodate the effects of various image variations and can capture the dynamic properties of actions. Subspaces form a non-Euclidean, curved Riemannian manifold known as a Grassmann manifold. Inference on manifold spaces is usually achieved by embedding the manifolds in higher-dimensional Euclidean spaces. In this paper, we instead propose to embed the Grassmann manifolds into reproducing kernel Hilbert spaces and then tackle the problem of discriminant analysis on such manifolds. To achieve efficient machinery, we propose graph-based local discriminant analysis that utilises within-class and between-class similarity graphs to characterise intra-class compactness and inter-class separability, respectively. Experiments on the KTH, UCF Sports, and Ballet datasets show that the proposed approach obtains marked improvements in discrimination accuracy in comparison to several state-of-the-art methods, such as the kernel version of the affine hull image-set distance, tensor canonical correlation analysis, spatial-temporal words, and the hierarchy of discriminative space-time neighbourhood features.
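The kernel-embedding idea can be sketched with the projection kernel, a standard positive-definite kernel between points on a Grassmann manifold represented by orthonormal subspace bases (a simplified illustration of the embedding step only, not the paper's full graph-based discriminant analysis):

```python
import numpy as np

rng = np.random.default_rng(0)

def orth_basis(m, p):
    """Orthonormal basis of a random p-dimensional subspace of R^m
    (standing in for a subspace fitted to a video sequence)."""
    q, _ = np.linalg.qr(rng.standard_normal((m, p)))
    return q

def projection_kernel(a, b):
    """k(A, B) = ||A^T B||_F^2; a positive-definite kernel on the
    Grassmann manifold, enabling RKHS methods such as kernel LDA."""
    return np.linalg.norm(a.T @ b, "fro") ** 2

p = 3
a = orth_basis(20, p)
b = orth_basis(20, p)

print(projection_kernel(a, a))  # ~= p = 3 for identical subspaces
print(projection_kernel(a, b))  # between 0 and p, larger = more similar
```

Because the kernel's value equals the sum of squared cosines of the principal angles between the two subspaces, it is invariant to the choice of basis, which is what makes it a valid function on the manifold rather than on individual matrices.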

Relevance: 10.00%

Abstract:

Driving on the approach to a signalized intersection while distracted is particularly dangerous, as potential vehicular conflicts and resulting angle collisions tend to be severe. Given the prevalence and importance of this particular scenario, the decisions and actions of distracted drivers during the onset of yellow lights are the focus of this study. Driving simulator data were obtained from a sample of 58 drivers under baseline and handheld mobile phone conditions at the University of Iowa National Advanced Driving Simulator. Explanatory variables included age, gender, cell phone use, distance to stop line, and speed. Although there is extensive research on drivers’ responses to yellow traffic signals, the examination has been conducted from a traditional regression-based approach, which does not necessarily reveal the underlying relations and patterns in the sampled data. In this paper, we exploit the benefits of both classical statistical inference and data mining techniques to identify a priori relationships among main effects, non-linearities, and interaction effects. Results suggest that novice (16-17 years) and young (18-25 years) drivers have heightened yellow light running risk while distracted by a cell phone conversation. Driver experience, captured by age, has a multiplicative effect with distraction, making the combined effect of being inexperienced and distracted particularly risky. Overall, distracted drivers across most tested groups tend to reduce their propensity for yellow light running as the distance to the stop line increases, exhibiting risk compensation in a critical driving situation.

Relevance: 10.00%

Abstract:

Background: Extra-corporeal membrane oxygenation (ECMO) is a complex rescue therapy used to provide cardiac and/or respiratory support for critically ill patients who have failed maximal conventional medical management. ECMO is based on a modified cardiopulmonary bypass (CPB) circuit, and can provide cardiopulmonary support for up to several months. It can be used in a veno-venous configuration (VV-ECMO) for isolated respiratory failure, or in a veno-arterial configuration (VA-ECMO) where support is necessary for cardiac and/or respiratory failure. The ECMO circuit consists of five main components: large-bore access cannulae for drainage of the venous system, and return cannulae to either the venous (in VV-ECMO) or arterial (in VA-ECMO) system; an oxygenator, whose vast surface area of hollow filaments allows addition of oxygen and removal of carbon dioxide; a centrifugal blood pump that propels blood through the circuit at up to 10 L/minute; a control module; and a thermoregulatory unit, which allows exact temperature control of the extra-corporeal blood. Methods: The first successful use of ECMO for ARDS in adults occurred in 1972, and its use has become more commonplace over the last 30 years, supported by improvements in the design and biocompatibility of the equipment, which have reduced the morbidity associated with this modality. Whilst the use of ECMO in the neonatal population has been supported by numerous studies, the evidence upon which ECMO was integrated into adult practice was substantially less robust. Results: Recent data, including the CESAR study (Conventional Ventilatory Support versus Extracorporeal Membrane Oxygenation for Severe Respiratory Failure), have added a degree of evidence for the role of ECMO in this patient population. The CESAR study analysed 180 patients, and confirmed that ECMO was associated with an improved rate of survival.
More recently, ECMO has been utilized in numerous situations within the critical care area, including support for high-risk percutaneous interventions in the cardiac catheter lab, the operating room, the emergency department, and specialized inter-hospital retrieval services. The increased understanding of the risk:benefit profile of ECMO, along with a reduction in the morbidity associated with its use, will doubtless lead to a substantial rise in the utilisation of this modality. As with all extra-corporeal circuits, ECMO opposes the basic premises of the mammalian inflammation and coagulation cascades: when blood comes into contact with a foreign circuit, both cascades are activated. Anticoagulation is readily dealt with through the use of agents such as heparin, but the inflammatory excess, whilst less macroscopically obvious, continues unabated. Platelet consumption and neutrophil activation occur rapidly, and the clinician is faced with balancing the need for anticoagulation of the circuit against haemostasis in an acutely bleeding patient. Alterations in pharmacokinetics may result in inadequate levels of disease-modifying therapeutics, such as antibiotics, hence paradoxically delaying recovery from conditions such as pneumonia. Key elements of nutrition and the innate immune system may similarly be affected. Summary: This presentation will discuss the basic features of ECMO for the non-specialist, and review the clinical conundrum faced by the team treating these most complex cases.