903 results for inductive inference
Abstract:
Consider a person searching electronic health records: a search for the term ‘cracked skull’ should return documents that contain the term ‘cranium fracture’. An information retrieval system is required that matches concepts, not just keywords. Furthermore, determining the relevance of a document to a query requires inference – it is not simply a matter of matching concepts. For example, a document containing ‘dialysis machine’ should align with a query for ‘kidney disease’. Collectively, we describe this problem as the ‘semantic gap’ – the difference between raw medical data and the way a human interprets it. This paper presents an approach to semantic search of health records that combines two previous approaches: an ontological approach using the SNOMED CT medical ontology, and a distributional approach using vector space models of semantics. Our approach will be applied to a specific problem in health informatics: the matching of electronic patient records to clinical trials.
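A minimal sketch of the distributional side of this approach, using invented toy vectors rather than the paper's actual SNOMED CT-based representation: concepts are compared by cosine similarity in a vector space, so ‘cracked skull’ can match ‘cranium fracture’ even though the surface terms never overlap.

```python
# Toy illustration of concept matching in a vector space. The vectors
# and vocabulary below are hypothetical stand-ins for illustration only.
import numpy as np

def cosine(u, v):
    """Cosine similarity between two concept vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

concept_vectors = {
    "cracked skull":    np.array([0.9, 0.1, 0.0]),
    "cranium fracture": np.array([0.8, 0.2, 0.1]),
    "kidney disease":   np.array([0.1, 0.9, 0.3]),
    "dialysis machine": np.array([0.2, 0.8, 0.4]),
}

# High similarity despite zero keyword overlap:
print(cosine(concept_vectors["cracked skull"], concept_vectors["cranium fracture"]))
# Much lower similarity for unrelated concepts:
print(cosine(concept_vectors["cracked skull"], concept_vectors["dialysis machine"]))
```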
Abstract:
Hydroxyapatite (HAP) is a major component of bone and has osteoconductive and osteoinductive properties. It has been successfully applied as a substrate in bone tissue engineering, either with or without a biodegradable polymer such as polycaprolactone or polylactide. Recently, we have developed a stereolithography resin based on poly(D,L-lactide) (PDLLA) and a non-reactive diluent that allows for the preparation of tissue engineering scaffolds with designed architectures. In this work, designed porous composite structures of PDLLA and HAP are prepared by stereolithography.
Abstract:
The menopausal transition is a marker of aging for women and a time when health professionals urge women to prevent disease. In this research we adopted a constructivist, inductive approach in exploring how and why midlife women think about health in general, about being healthy, and about factors that influence engaging in healthy behaviors. The sample comprised 23 women who had participated in a women’s wellness program intervention trial and subsequent interviews. The women described lives of healthy eating and exercise, yet their perceptions of health and healthy behavior at midlife contradicted that history. Midlife was associated with risk and guilt at not doing enough to be healthy. Health professionals provided a very limited frame within which to judge what is healthy; mostly, this was left up to individual women. Those who were successful framed health as “being able to do what you want to do when you want to do it.” In this article we present study findings of how meanings attached to health and being healthy were constructed through social expectations, family relationships, and life experiences.
Abstract:
We estimate the parameters of a stochastic process model for a macroparasite population within a host using approximate Bayesian computation (ABC). The immunity of the host is an unobserved model variable, and only mature macroparasites at sacrifice of the host are counted. With very limited data, process rates are inferred reasonably precisely. Modeling involves a three-variable Markov process for which the observed data likelihood is computationally intractable. ABC methods are particularly useful when the likelihood is analytically or computationally intractable. The ABC algorithm we present is based on sequential Monte Carlo, is adaptive in nature, and overcomes some drawbacks of previous approaches to ABC. The algorithm is validated on a test example involving simulated data from an autologistic model before being used to infer parameters of the Markov process model for experimental data. The fitted model explains the observed extra-binomial variation in terms of a zero-one immunity variable, which has a short-lived presence in the host.
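The algorithm described above is an adaptive ABC scheme based on sequential Monte Carlo; as a much simpler illustration of the underlying likelihood-free idea, here is a plain ABC rejection sketch on a toy Poisson model. The model, prior, summary statistic and tolerance are all invented for the example and are not those of the paper.

```python
# ABC rejection on a toy model: accept parameter draws whose simulated
# summary statistic lands close to the observed one. No likelihood is
# ever evaluated, which is the point of ABC.
import numpy as np

rng = np.random.default_rng(0)
observed = rng.poisson(lam=4.0, size=50)   # stand-in for experimental counts
obs_summary = observed.mean()

def simulate(rate):
    """Forward-simulate the toy stochastic model and summarize."""
    return rng.poisson(lam=rate, size=50).mean()

tolerance, accepted = 0.2, []
for _ in range(20000):
    rate = rng.uniform(0.0, 10.0)          # draw from a flat prior
    if abs(simulate(rate) - obs_summary) < tolerance:
        accepted.append(rate)              # approximate posterior sample

print(f"posterior mean ~ {np.mean(accepted):.2f} from {len(accepted)} accepted draws")
```

The sequential Monte Carlo version in the paper improves on this by shrinking the tolerance over a sequence of weighted particle populations instead of rejecting from the prior at a single strict tolerance.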
Abstract:
With regard to the long-standing problem of the semantic gap between low-level image features and high-level human knowledge, the image retrieval community has recently shifted its emphasis from low-level feature analysis to high-level image semantics extraction. User studies reveal that users tend to seek information using high-level semantics. Therefore, image semantics extraction is of great importance to content-based image retrieval because it allows users to freely express what images they want. Semantic content annotation is the basis for semantic content retrieval. The aim of image annotation is to automatically obtain keywords that can be used to represent the content of images. The major research challenges in image semantic annotation are: what is the basic unit of semantic representation? How can the semantic unit be linked to high-level image knowledge? How can contextual information be stored and utilized for image annotation? In this thesis, Semantic Web technology (i.e. ontology) is introduced to the image semantic annotation problem. The Semantic Web, the next generation of the Web, aims at making the content of any type of media understandable not only to humans but also to machines. Due to the large amounts of multimedia data prevalent on the Web, researchers and industry are beginning to pay more attention to the Multimedia Semantic Web. Semantic Web technology provides a new opportunity for multimedia-based applications, but research in this area is still in its infancy. Whether ontology can be used to improve image annotation, and how best to use ontology in semantic representation and extraction, remain worthwhile questions for investigation. This thesis deals with the problem of image semantic annotation using ontology and machine learning techniques in four phases, as follows. 1) Salient object extraction. A salient object serves as the basic unit in image semantic extraction as it captures the common visual properties of the objects. Image segmentation is often used as the first step in detecting salient objects, but most segmentation algorithms fail to generate meaningful regions due to over-segmentation and under-segmentation. We develop a new salient object detection algorithm by combining multiple homogeneity criteria in a region merging framework. 2) Ontology construction. Since real-world objects tend to exist in a context within their environment, contextual information has been increasingly used for improving object recognition. In the ontology construction phase, visual-contextual ontologies are built from a large set of fully segmented and annotated images. The ontologies are composed of several types of concepts (i.e. mid-level and high-level concepts) and domain contextual knowledge. The visual-contextual ontologies stand as a user-friendly interface between low-level features and high-level concepts. 3) Image object annotation. In this phase, each object is labelled with a mid-level concept from the ontologies. First, a set of candidate labels is obtained by training Support Vector Machines with features extracted from salient objects. After that, contextual knowledge contained in the ontologies is used to obtain the final labels by removing ambiguous concepts. 4) Scene semantic annotation. The scene semantic extraction phase determines the scene type by using both mid-level concepts and domain contextual knowledge in the ontologies. Domain contextual knowledge is used to create a scene configuration that describes which objects co-exist more frequently with which scene type. The scene configuration is represented in a probabilistic graphical model, and probabilistic inference is employed to calculate the scene type given an annotated image. To evaluate the proposed methods, a series of experiments was conducted on a large set of fully annotated outdoor scene images. These include a subset of the Corel database, a subset of the LabelMe dataset, the evaluation dataset of localized semantics in images, the spatial context evaluation dataset, and the segmented and annotated IAPR TC-12 benchmark.
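A hypothetical sketch of the phase 3 idea (candidate labels from a classifier, then contextual disambiguation), with invented scores and context weights standing in for the trained Support Vector Machines and the ontology's contextual knowledge:

```python
# One ambiguous blue region: the classifier alone slightly prefers "sky",
# but co-occurrence context from already-recognized objects (e.g. a beach
# umbrella) favours "sea". All numbers are illustrative.
import numpy as np

labels = ["sky", "sea", "sand"]
svm_scores = np.array([0.45, 0.40, 0.15])   # candidate-label scores
context = np.array([0.2, 0.5, 0.3])         # compatibility with the scene

combined = svm_scores * context
combined /= combined.sum()
print(labels[int(np.argmax(combined))])     # "sea": context resolves ambiguity
```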
Abstract:
Over the past two decades there has been a remarkable expansion in the use of executive coaching as an executive development technique. The increasing prominence of executive coaching has been attributed to the emergence of new organisational cultures and the subtler competencies needed by executives in these faster-moving organisations. The widespread popularity of executive coaching has been based largely on anecdotal feedback regarding its effectiveness. The small body of empirical research has been growing, but conclusive outcomes are rare. The prominent question for those with the business imperative to implement executive coaching has been: what are the ingredients of the process that engender an effective outcome? This investigation focused on the factors of executive coaching that contribute to effectiveness. A qualitative methodology facilitated an in-depth study of the experiences of the participants of executive coaching, with the perceptions of both executives and coaches being sought. Semi-structured interviews and a focus group provided rich, thick descriptions and, together with a process of inductive analysis, produced findings that confidently identify the key factors that contribute to coaching effectiveness. Six major themes were identified, each comprising a collection of meanings. These themes have been labelled Executive Engagement, Preliminary Assessment and Feedback, Coaching Process, Coach's Contribution, Trusting Relationship and Support from the Organisation. One theme, Coaching Process, comprises three significant sub-themes, namely Encouragement and Emotional Support, Challenge and Reflection, and Enhancing Executive Performance. The findings of this study add value to the field by identifying factors contributing to coaching effectiveness, and providing for the coaching practitioner a basis for enhancing their practice of executive coaching to better meet the needs of executives and their organisations.
Abstract:
Methicillin-resistant Staphylococcus aureus (MRSA) is a pathogen that continues to be of major concern in hospitals. We develop models and computational schemes based on observed weekly incidence data to estimate MRSA transmission parameters. We extend the deterministic model of McBryde, Pettitt, and McElwain (2007, Journal of Theoretical Biology 245, 470–481) involving an underlying population of MRSA-colonized patients and health-care workers that describes, among other processes, transmission between uncolonized patients and colonized health-care workers and vice versa. We develop new bivariate and trivariate Markov models to include incidence, so that estimated transmission rates can be based directly on new colonizations rather than indirectly on prevalence. Imperfect sensitivity of pathogen detection is modeled using a hidden Markov process. The advantages of our approach include (i) a discrete-valued assumption for the number of colonized health-care workers, (ii) the incorporation of two transmission parameters into the likelihood, (iii) a likelihood that depends on the number of new cases, improving the precision of inference, (iv) no requirement for individual patient records, and (v) the incorporation of possibly imperfect detection of colonization. We compare our approach with that used by McBryde et al. (2007), which is based on an approximation that eliminates the health-care workers from the model and uses Markov chain Monte Carlo and individual patient data. We apply these models to MRSA colonization data collected in a small intensive care unit at the Princess Alexandra Hospital, Brisbane, Australia.
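A small sketch of the imperfect-detection layer described above, with illustrative numbers: the true number of new colonizations is a hidden state, and each colonized patient is detected only with some sensitivity, so the observation model is binomial.

```python
# Imperfect detection as a binomial observation model. The sensitivity,
# hidden-state values, and prior weights below are illustrative only.
from scipy.stats import binom

sensitivity = 0.8        # probability a colonization is detected
true_new_cases = 5       # hidden state in a given week
observed_cases = 3       # what the weekly incidence data records

# P(observe 3 | 5 true colonizations, sensitivity 0.8):
print(binom.pmf(observed_cases, true_new_cases, sensitivity))

# The likelihood of the observed count marginalizes over plausible
# hidden states, weighted by their model probabilities:
likelihood = sum(binom.pmf(observed_cases, n, sensitivity) * w
                 for n, w in [(3, 0.2), (4, 0.3), (5, 0.3), (6, 0.2)])
print(likelihood)
```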
Abstract:
Plant biosecurity requires statistical tools to interpret field surveillance data in order to manage pest incursions that threaten crop production and trade. Ultimately, management decisions need to be based on the probability that an area is infested or free of a pest. Current informal approaches to delimiting pest extent rely upon expert ecological interpretation of presence/absence data over space and time. Hierarchical Bayesian models provide a cohesive statistical framework that can formally integrate the available information on both pest ecology and data. The overarching method involves constructing an observation model for the surveillance data, conditional on the hidden extent of the pest and uncertain detection sensitivity. The extent of the pest is then modelled as a dynamic invasion process that includes uncertainty in ecological parameters. Modelling approaches to assimilate this information are explored through case studies on spiralling whitefly, Aleurodicus dispersus, and red banded mango caterpillar, Deanolis sublimbalis. Markov chain Monte Carlo simulation is used to estimate the probable extent of pests, given the observation and process model conditioned by surveillance data. Statistical methods, based on time-to-event models, are developed to apply hierarchical Bayesian models to early detection programs and to demonstrate area freedom from pests. The value of early detection surveillance programs is demonstrated through an application to interpret surveillance data for exotic plant pests with uncertain spread rates. The model suggests that typical early detection programs provide a moderate reduction in the probability of an area being infested but a dramatic reduction in the expected area of incursions at a given time. Estimates of spiralling whitefly extent are examined at local, district and state-wide scales. The local model estimates the rate of natural spread and the influence of host architecture, host suitability and inspector efficiency. These parameter estimates can support the development of robust surveillance programs. Hierarchical Bayesian models for the human-mediated spread of spiralling whitefly are developed for the colonisation of discrete cells connected by a modified gravity model. By estimating dispersal parameters, the model can be used to predict the extent of the pest over time. An extended model predicts the climate-restricted distribution of the pest in Queensland. These novel human-mediated movement models are well suited to demonstrating area freedom at coarse spatio-temporal scales. At finer scales, and in the presence of ecological complexity, exploratory models are developed to investigate the capacity for surveillance information to estimate the extent of red banded mango caterpillar. It is apparent that excessive uncertainty about observation and ecological parameters can impose limits on inference at the scales required for effective management of response programs. The thesis contributes novel statistical approaches to estimating the extent of pests and develops applications to assist decision-making across a range of plant biosecurity surveillance activities. Hierarchical Bayesian modelling is demonstrated as both a useful analytical tool for estimating pest extent and a natural investigative paradigm for developing and focussing biosecurity programs.
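As a worked example of the core calculation behind demonstrating area freedom (with invented numbers, not the thesis's case study values): Bayes' rule combines a prior probability of infestation with a run of negative surveys of imperfect sensitivity.

```python
# Posterior probability an area is infested after repeated negative
# surveys, assuming independent surveys with known sensitivity.
prior_infested = 0.10    # prior probability the area is infested
sensitivity = 0.6        # chance one survey detects the pest if present
n_negative = 4           # number of surveys, all negative

p_neg_given_infested = (1 - sensitivity) ** n_negative
posterior = (prior_infested * p_neg_given_infested) / (
    prior_infested * p_neg_given_infested + (1 - prior_infested))
print(f"P(infested | {n_negative} negative surveys) = {posterior:.4f}")  # ~0.0028
```

Repeated negative surveillance drives the infestation probability down, which is what quantifying "area freedom" amounts to.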
Abstract:
Intelligible and accurate risk-based decision-making requires a complex balance of information from different sources, appropriate statistical analysis of this information, and consequent intelligent inference and decisions made on the basis of these analyses. Importantly, this requires an explicit acknowledgement of uncertainty in the inputs and outputs of the statistical model. The aim of this paper is to progress a discussion of these issues in the context of several motivating problems related to the wider scope of agricultural production. These problems include biosecurity surveillance design, pest incursion, environmental monitoring and import risk assessment. The information to be integrated includes observational and experimental data, remotely sensed data and expert information. We describe our efforts in addressing these problems using Bayesian models and Bayesian networks. These approaches provide a coherent and transparent framework for modelling complex systems, combining the different information sources, and allowing for uncertainty in inputs and outputs. While the theory underlying Bayesian modelling has a long and well-established history, its application is only now becoming feasible for complex problems, due to the increased availability of methodological and computational tools. Of course, there are still hurdles and constraints, which we also address through sharing our endeavours and experiences.
Abstract:
Association rule mining has contributed to many advances in the area of knowledge discovery. However, the quality of the discovered association rules is a significant concern that has drawn increasing attention. One problem with the quality of the discovered association rules is the huge size of the extracted rule set. Often a huge number of rules can be extracted from a dataset, but many of them are redundant with respect to other rules and thus useless in practice. Mining non-redundant rules is a promising approach to solving this problem. In this paper, we first propose a definition of redundancy, then propose a concise representation, called a Reliable basis, for representing non-redundant association rules. The Reliable basis contains a set of non-redundant rules which are derived using frequent closed itemsets and their generators, instead of the frequent itemsets usually used by traditional association rule mining approaches. An important contribution of this paper is the proposal to use the certainty factor as the criterion for measuring the strength of the discovered association rules. Using this criterion, we can ensure the elimination of as many redundant rules as possible without reducing the inference capacity of the remaining extracted non-redundant rules. We prove that the redundancy elimination based on the proposed Reliable basis does not reduce the strength of belief in the extracted rules. We also prove that all association rules, with their supports and confidences, can be retrieved from the Reliable basis without accessing the dataset. Therefore the Reliable basis is a lossless representation of association rules. Experimental results show that the proposed Reliable basis can significantly reduce the number of extracted rules. We also conduct experiments on the application of association rules to the area of product recommendation. The experimental results show that the non-redundant association rules extracted using the proposed method retain the same inference capacity as the entire rule set. This result indicates that using only the non-redundant rules is sufficient to solve real problems, without the need to use the entire rule set.
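A toy computation of support, confidence, and one common form of the certainty factor for a rule A → B, to make the strength criterion concrete. The transactions are invented, and the precise certainty factor definition used in the paper may differ in detail.

```python
# Support, confidence, and certainty factor for the rule {bread} -> {milk}
# over a tiny invented transaction database.
transactions = [
    {"bread", "milk"}, {"bread", "milk"}, {"bread", "butter"},
    {"butter"}, {"bread", "milk"},
]
N = len(transactions)

def support(itemset):
    """Fraction of transactions containing every item in itemset."""
    return sum(itemset <= t for t in transactions) / N

A, B = {"bread"}, {"milk"}
supp_rule = support(A | B)           # 0.6
conf = supp_rule / support(A)        # 0.75
supp_B = support(B)                  # 0.6

# The certainty factor rescales confidence by how much it improves on
# B's baseline support (one standard formulation):
cf = ((conf - supp_B) / (1 - supp_B) if conf > supp_B
      else (conf - supp_B) / supp_B)
print(f"support={supp_rule:.2f} confidence={conf:.2f} certainty_factor={cf:.2f}")
```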
Abstract:
Introduction and aims: For a scaffold material to be considered effective and efficient for tissue engineering, it must be biocompatible as well as bioinductive. Silk fiber is a natural biocompatible material suitable for scaffold fabrication; however, silk is tissue-conductive and lacks tissue-inductive properties. One proposed method to make the scaffold tissue-inductive is to introduce plasmids or viruses encoding a specific growth factor into the scaffold. In this study, we constructed adenoviruses encoding bone morphogenetic protein-7 (BMP-7) and incorporated these into silk scaffolds. The osteo-inductive and new bone formation properties of these constructs were assessed in vivo in a critical-sized skull defect animal model. Materials and methods: Silk fibroin scaffolds containing adenovirus particles encoding BMP-7 were prepared. The release of the adenovirus particles from the scaffolds was quantified by tissue-culture infective dose (TCID50), and the bioactivity of the released viruses was evaluated on human bone marrow mesenchymal stromal cells (BMSCs). To demonstrate the in vivo bone-forming ability of the virus-carrying silk fibroin scaffold, the scaffold constructs were implanted into calvarial defects in SCID mice. Results: In vitro studies demonstrated that the virus-carrying silk fibroin scaffold released virus particles over a 3-week period while preserving their bioactivity. In vivo testing of the scaffold constructs in critical-sized skull defect areas revealed that silk scaffolds were capable of delivering the adenovirus encoding BMP-7, resulting in significantly enhanced new bone formation. Conclusions: Silk scaffolds carrying BMP-7-encoding adenoviruses can effectively transfect cells and enhance both in vitro and in vivo osteogenesis. The findings of this study indicate that silk fibroin is a promising biomaterial for gene delivery to repair critical-sized bone defects.
Abstract:
Smart matrices are required in bone tissue-engineered grafts that provide an optimal environment for cells and retain osteo-inductive factors for sustained biological activity. We hypothesized that a slow-degrading, heparin-incorporated hyaluronan (HA) hydrogel can preserve BMP-2, while an arterio–venous (A–V) loop can support axial vascularization to provide nutrition for a bioartificial bone graft. HA was evaluated for osteoblast growth and BMP-2 release. Porous PLDLLA–TCP–PCL scaffolds were produced by rapid prototyping technology and applied in vivo along with HA hydrogel loaded with either primary osteoblasts or BMP-2. A microsurgically created A–V loop was placed around the scaffold, encased in an isolation chamber, in Lewis rats. The HA hydrogel supported the growth of osteoblasts over 8 weeks and allowed sustained release of BMP-2 over 35 days. The A–V loop provided an angiogenic stimulus, with the formation of vascularized tissue in the scaffolds. Bone-specific genes were detected by real-time RT-PCR after 8 weeks. However, no significant amount of bone was observed histologically. The heterotopic isolation chamber, in combination with the absence of biomechanical stimulation, might explain the insufficient bone formation despite adequate expression of bone-related genes. Optimizing the interplay of osteogenic cells and osteo-inductive factors might eventually generate sufficient amounts of axially vascularized bone grafts for reconstructive surgery.
Abstract:
We consider the problem of how to efficiently and safely design dose finding studies. Both current and novel utility functions are explored using Bayesian adaptive design methodology for the estimation of a maximum tolerated dose (MTD). In particular, we explore widely adopted approaches such as the continual reassessment method and minimizing the variance of the estimate of an MTD. New utility functions are constructed in the Bayesian framework and are evaluated against current approaches. To reduce computing time, importance sampling is implemented to re-weight posterior samples, thus avoiding the need to draw samples using Markov chain Monte Carlo techniques. Further, as such studies are generally first-in-man, the safety of patients is paramount. We therefore explore methods for incorporating safety considerations into utility functions to ensure that only safe and well-predicted doses are administered. The amalgamation of Bayesian methodology, adaptive design and compound utility functions is termed adaptive Bayesian compound design (ABCD). The performance of this amalgamation of methodology is investigated via the simulation of dose finding studies. The paper concludes with a discussion of results and extensions that could be included in our approach.
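A minimal sketch of the importance sampling trick mentioned above, on a toy normal model rather than the paper's dose-response model: posterior samples drawn once are re-weighted by a likelihood ratio to approximate an updated posterior, avoiding a fresh MCMC run after each hypothetical design point.

```python
# Re-weighting existing posterior samples instead of re-running MCMC.
# The densities and the new observation are a toy example.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
samples = rng.normal(loc=0.0, scale=1.0, size=5000)  # draws from the old posterior

new_obs = 1.5                                        # hypothetical new datum
weights = norm.pdf(new_obs, loc=samples, scale=1.0)  # likelihood of new datum
weights /= weights.sum()                             # normalize importance weights

updated_mean = float(np.sum(weights * samples))
print(f"re-weighted posterior mean ~ {updated_mean:.2f}")  # pulled toward 1.5
```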
Abstract:
This paper presents an approach to predicting the operating conditions of a machine based on classification and regression trees (CART) and an adaptive neuro-fuzzy inference system (ANFIS), in association with the direct prediction strategy for multi-step-ahead time series prediction. In this study, the number of available observations and the number of predicted steps are initially determined using the false nearest neighbor method and the auto mutual information technique, respectively. These values are subsequently utilized as inputs for the prediction models to forecast future values of the machine's operating conditions. The performance of the proposed approach is evaluated using real trending data from a low methane compressor. A comparative study of the predicted results obtained from the CART and ANFIS models is also carried out to appraise the prediction capability of these models. The results show that the ANFIS prediction model can track changes in machine conditions and has potential for use as a tool for machine fault prognosis.
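A sketch of the direct multi-step-ahead strategy with CART, under simplifying assumptions: one regression tree per horizon h, each mapping a window of lagged values straight to the value h steps ahead. In the paper the window length and the number of steps come from the false nearest neighbor and auto mutual information analyses; here they are fixed by hand on a synthetic series, and the ANFIS counterpart is omitted.

```python
# Direct multi-step prediction: a separate tree is trained for each
# horizon, so no forecast is fed back as an input (unlike the recursive
# strategy). The series and hyperparameters are illustrative.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(2)
t = np.arange(500)
series = np.sin(0.1 * t) + 0.1 * rng.standard_normal(500)  # toy condition data

window, horizons = 10, [1, 2, 3]
models = {}
for h in horizons:
    n = len(series) - window - h + 1
    X = np.array([series[i:i + window] for i in range(n)])        # lagged inputs
    y = np.array([series[i + window + h - 1] for i in range(n)])  # h steps ahead
    models[h] = DecisionTreeRegressor(max_depth=6).fit(X, y)

last_window = series[-window:].reshape(1, -1)
for h in horizons:
    print(f"{h}-step-ahead forecast: {models[h].predict(last_window)[0]:.3f}")
```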