958 results for Standard models
Abstract:
Skin is the largest and, arguably, the most important organ of the body. It is a complex and multi-dimensional tissue, making it essentially impossible to model fully in vitro in conventional 2-dimensional culture systems. In view of this, rodents or pigs are utilised to study wound healing therapeutics or to investigate the biological effects of treatments on skin. However, there are many differences between the wound healing processes in rodents and humans (contraction vs. re-epithelialisation), and there are also ethical issues associated with animal testing for scientific research. Therefore, the development of human skin equivalent (HSE) models from surgical discard human skin has become an important area of research. The studies in this thesis compare, for the first time, native human skin and the epidermogenesis process in a HSE model. The HSE was found to be a comparable model for human skin in terms of expression and localisation of key epidermal cell markers. This validated HSE model was then utilised to study a potential wound healing therapeutic, hyperbaric oxygen (HBO) therapy. There is a significant body of evidence suggesting that lack of cutaneous oxygen results in and potentiates the chronic, non-healing wound environment. Although the evidence is anecdotal, HBO therapy has displayed positive effects on re-oxygenation of chronic wounds, and the clinical outcomes suggest that HBO treatment may be beneficial. Therefore, the HSE was subjected to a daily clinical HBO regime and assessed in terms of keratinocyte migration, proliferation, differentiation and epidermal thickening. HBO treatment was observed to increase epidermal thickness, in particular stratum corneum thickening, but it did not alter the expression or localisation of standard epidermal cell markers. To elucidate the mechanistic changes occurring in response to HBO treatment in the HSE model, gene microarrays were performed, followed by qRT-PCR of selected genes that were differentially regulated in response to HBO treatment. The biological diversity of the HSEs created from individual skin donors, however, overrode the differences in gene expression between treatment groups. Network analysis of functional changes in the HSE model revealed general trends consistent with normal skin growth and maturation. As a more robust and longer-term study of these molecular changes, protein localisation and expression were investigated in sections from the HSEs undergoing epidermogenesis in response to HBO treatment. These proteins were CDCP1, metallothionein, kallikrein (KLK) 1, KLK7 and early growth response 1. While protein expression within the HSE models exposed to HBO treatment was not consistent across HSEs derived from all skin donors, this is the first study to detect and compare both KLK1 and CDCP1 protein expression in both a HSE model and native human skin. Furthermore, this is the first study to provide such an in-depth analysis of the effect of HBO treatment on a HSE model. The data presented in this thesis demonstrate high levels of variation between individuals in their response to HBO treatment, consistent with the clinical variation that is currently observed.
Abstract:
Introduction: Recent advances in the planning and delivery of radiotherapy treatments have resulted in improvements in the accuracy and precision with which therapeutic radiation can be administered. As the complexity of the treatments increases, it becomes more difficult to predict the dose distribution in the patient accurately. Monte Carlo (MC) methods have the potential to improve the accuracy of the dose calculations and are increasingly being recognised as the ‘gold standard’ for predicting dose deposition in the patient [1]. This project has three main aims: 1. To develop tools that enable the transfer of treatment plan information from the treatment planning system (TPS) to a MC dose calculation engine. 2. To develop tools for comparing the 3D dose distributions calculated by the TPS and the MC dose engine. 3. To investigate the radiobiological significance of any differences between the TPS patient dose distribution and the MC dose distribution in terms of Tumour Control Probability (TCP) and Normal Tissue Complication Probability (NTCP). The work presented here addresses the first two aims. Methods: (1a) Plan Importing: A database of commissioned accelerator models (Elekta Precise and Varian 2100CD) has been developed for treatment simulations in the MC system (EGSnrc/BEAMnrc). Beam descriptions can be exported from the TPS using the widespread DICOM framework, and the resultant files are parsed with the assistance of a software library (PixelMed Java DICOM Toolkit). The information in these files (such as the monitor units, the jaw positions and the gantry orientation) is used to construct a plan-specific accelerator model which allows an accurate simulation of the patient treatment field. (1b) Dose Simulation: The calculation of a dose distribution requires patient CT images, which are prepared for the MC simulation using a tool (CTCREATE) packaged with the system. Beam simulation results are converted to absolute dose per MU using calibration factors recorded during the commissioning process and treatment simulation. These distributions are combined according to the MU meter settings stored in the exported plan to produce an accurate description of the prescribed dose to the patient. (2) Dose Comparison: TPS dose calculations can be obtained either via a DICOM export or by direct retrieval of binary dose files from the file system. Dose difference, gamma evaluation and normalised dose difference algorithms [2] were employed for the comparison of the TPS dose distribution and the MC dose distribution. These implementations are independent of spatial resolution and able to interpolate between distributions for comparison. Results and Discussion: The tools successfully produced Monte Carlo input files for a variety of plans exported from the Eclipse (Varian Medical Systems) and Pinnacle (Philips Medical Systems) planning systems, ranging in complexity from a single uniform square field to a five-field step-and-shoot IMRT treatment. The simulation of collimated beams has been verified geometrically, and validation of dose distributions in a simple body phantom (QUASAR) will follow. The developed dose comparison algorithms have also been tested with controlled dose distribution changes. Conclusion: The capability of the developed code to independently process treatment plans has been demonstrated.
A number of limitations exist: only static fields are currently supported (dynamic wedges and dynamic IMRT will require further development), and the process has not been tested for planning systems other than Eclipse and Pinnacle. The tools will be used to independently assess the accuracy of the current treatment planning system dose calculation algorithms for complex treatment deliveries such as IMRT in treatment sites where patient inhomogeneities are expected to be significant. Acknowledgements: Computational resources and services used in this work were provided by the HPC and Research Support Group, Queensland University of Technology, Brisbane, Australia. Pinnacle dose parsing made possible with the help of Paul Reich, North Coast Cancer Institute, North Coast, New South Wales.
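The gamma evaluation mentioned above is a standard dose-comparison metric combining a dose-difference criterion with a distance-to-agreement criterion. Below is a minimal 1-D sketch, assuming illustrative 3%/3 mm tolerances and hypothetical NumPy arrays for the TPS and MC dose profiles; it is not the implementation developed in the project.

```python
import numpy as np

def gamma_1d(dose_ref, dose_eval, spacing_mm, dose_tol=0.03, dist_tol_mm=3.0):
    """Minimal 1-D gamma evaluation with global dose normalisation.

    dose_ref, dose_eval : 1-D dose arrays on the same regular grid
    spacing_mm          : grid spacing in mm
    dose_tol            : dose-difference criterion (fraction of max reference dose)
    dist_tol_mm         : distance-to-agreement criterion in mm
    """
    positions = np.arange(len(dose_ref)) * spacing_mm
    norm = dose_tol * dose_ref.max()  # global dose normalisation
    gamma = np.empty(len(dose_ref), dtype=float)
    for i, (x_ref, d_ref) in enumerate(zip(positions, dose_ref)):
        dose_term = (dose_eval - d_ref) / norm
        dist_term = (positions - x_ref) / dist_tol_mm
        # Gamma is the minimum combined distance in dose/space to any evaluated point
        gamma[i] = np.sqrt(dose_term ** 2 + dist_term ** 2).min()
    return gamma  # gamma <= 1 means the point passes

# Example: compare a reference profile with a 1 mm shifted copy
ref = np.exp(-((np.arange(100) - 50) / 15.0) ** 2)
ev = np.roll(ref, 1)
print((gamma_1d(ref, ev, spacing_mm=1.0) <= 1.0).mean())  # pass rate
```

In practice the comparison is done on 3-D dose grids with interpolation between voxels, as the abstract notes, but the pass/fail logic is the same.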
Abstract:
This thesis makes several contributions towards improved methods for encoding structure in computational models of word meaning. New methods are proposed that allow linguistic structural features to be encoded easily within a computational representation while retaining the ability to scale to large volumes of textual data. These methods are implemented and assessed on a range of evaluation tasks to demonstrate their effectiveness.
Abstract:
This book presents readers with the opportunity to fundamentally re-evaluate the processes of innovation and entrepreneurship, and to rethink how they might best be stimulated and fostered within our organizations and communities. The fundamental thesis of the book is that the entrepreneurial process is not a linear progression from novel idea to successful innovation, but an iterative series of experiments in which progress depends on the persistence and resilience of the individuals involved, and on their ability to learn from failure as well as success. From this premise, the authors argue that the ideal environment for new venture creation is a form of “experimental laboratory”: a community of innovators where ideas are generated, shared, and refined; where experiments are encouraged; and which in itself serves as a test environment for those ideas and experiments. This environment is quite different from the traditional “incubator,” which may impose the disciplines of the established firm too early in the development of the new venture.
Abstract:
A range of authors from the risk management, crisis management, and crisis communications literature have proposed different models as a means of understanding the components of crisis. A common element of these sources is a focus on preparedness practices before disturbance events and response practices during events. This paper provides a critical analysis of three key explanatory models of how crises escalate, highlighting the strengths and limitations of each approach. The paper introduces an optimised conceptual model utilising components from the previous work under the four phases of pre-event, response, recovery, and post-event. Within these four phases, a ten-step process is introduced that can enhance understanding of the progression of distinct stages of disturbance for different types of events. This crisis evolution framework is examined as a means to provide clarity and applicability to a range of infrastructure failure contexts and to provide a path for further empirical investigation in this area.
Abstract:
The traditional hospital-based model of cardiac rehabilitation faces substantial challenges, such as cost and accessibility. These challenges have led to the development of alternative models of cardiac rehabilitation in recent years. The aim of this study was to identify and critique evidence for the effectiveness of these alternative models. A total of 22 databases were searched to identify quantitative studies or systematic reviews of quantitative studies regarding the effectiveness of alternative models of cardiac rehabilitation. Included studies were appraised using a Critical Appraisal Skills Programme tool and the National Health and Medical Research Council's designations for Level of Evidence. The 83 included articles described interventions in the following broad categories of alternative models of care: multifactorial individualized telehealth, internet based, telehealth focused on exercise, telehealth focused on recovery, community- or home-based, and complementary therapies. Multifactorial individualized telehealth and community- or home-based cardiac rehabilitation are effective alternative models of cardiac rehabilitation, as they have produced reductions in cardiovascular disease risk factors similar to those of hospital-based programmes. While further research is required to address the paucity of data available regarding the effectiveness of alternative models of cardiac rehabilitation in rural, remote, and culturally and linguistically diverse populations, our review indicates there is no need to rely on hospital-based strategies alone to deliver effective cardiac rehabilitation. Local healthcare systems should strive to integrate alternative models of cardiac rehabilitation, such as brief telehealth interventions tailored to individuals' risk factor profiles as well as community- or home-based programmes, to ensure that patients have choices available that best fit their needs, risk factor profiles, and preferences.
Abstract:
The emergence of pseudo-marginal algorithms has led to improved computational efficiency for dealing with complex Bayesian models with latent variables. Here an unbiased estimator of the likelihood replaces the true likelihood in order to produce a Bayesian algorithm that remains on the marginal space of the model parameter (with latent variables integrated out), with a target distribution that is still the correct posterior distribution. Very efficient proposal distributions can be developed on the marginal space relative to the joint space of model parameter and latent variables, so pseudo-marginal algorithms tend to have substantially better mixing properties. However, for pseudo-marginal approaches to perform well, the likelihood has to be estimated rather precisely, which can be difficult to achieve in complex applications. In this paper we propose to take advantage of the multiple central processing units (CPUs) that are readily available on most standard desktop computers. Here the likelihood is estimated independently on the multiple CPUs, with the ultimate estimate of the likelihood being the average of the estimates obtained from the individual CPUs. The estimate remains unbiased, but its variability is reduced. We compare and contrast two different technologies that allow the implementation of this idea, both of which require a negligible amount of extra programming effort. The superior performance of this idea over the standard approach is demonstrated on simulated data from a stochastic volatility model.
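The averaging idea is simple to sketch. The following illustrative Python fragment assumes a hypothetical unbiased estimator `estimate_likelihood(theta, seed)` (in the paper's setting this would be, for example, a particle filter for the stochastic volatility model) and uses the standard library's multiprocessing pool; it is not the paper's implementation. The average of independent unbiased estimates remains unbiased, while its variance is reduced by roughly the number of workers.

```python
from multiprocessing import Pool

import numpy as np


def estimate_likelihood(theta, seed):
    """Placeholder for an unbiased likelihood estimator (hypothetical)."""
    rng = np.random.default_rng(seed)
    # A noisy but unbiased estimate of some target likelihood value.
    return np.exp(-0.5 * theta ** 2) * (1.0 + 0.1 * rng.standard_normal())


def averaged_likelihood(theta, n_workers=4):
    """Estimate the likelihood independently on each CPU and average.

    The averaged estimate stays unbiased, but its variability is reduced,
    which helps the pseudo-marginal chain mix."""
    seeds = np.random.SeedSequence().spawn(n_workers)
    args = [(theta, int(s.generate_state(1)[0])) for s in seeds]
    with Pool(n_workers) as pool:
        estimates = pool.starmap(estimate_likelihood, args)
    return float(np.mean(estimates))


if __name__ == "__main__":
    # Usage inside a pseudo-marginal Metropolis-Hastings step: the averaged,
    # lower-variance estimate simply replaces the single-CPU estimate in the
    # acceptance ratio.
    print(averaged_likelihood(theta=0.3, n_workers=4))
```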
Abstract:
This thesis describes the use of 2- and 3-dimensional cell-based models for studying how skin cells respond to ultraviolet radiation. These methods were used to investigate skin damage and repair after exposure to radiation in the context of skin cancer development. Interactions between different skin cell types were shown to play a significant role in protecting against ultraviolet radiation-induced skin damage. This has important implications for understanding how skin cancers occur, as well as for the development of new strategies to prevent and treat them.
Abstract:
Chlamydia is responsible for a wide range of diseases with enormous global economic and health burden. As the majority of chlamydial infections are asymptomatic, a vaccine has the greatest potential to reduce infection and disease prevalence. Protective immunity against Chlamydia requires the induction of a mucosal immune response, ideally at the multiple sites in the body where an infection can be established. Mucosal immunity is most effectively stimulated by targeting vaccination to the epithelium, which is best accomplished by direct vaccine application to mucosal surfaces rather than by injection. The efficacy of needle-free vaccines, however, relies on a powerful adjuvant to overcome mucosal tolerance. As very few adjuvants have proven able to elicit mucosal immunity without harmful side effects, there is a need to develop non-toxic adjuvants or safer ways to administer pre-existing toxic adjuvants. In the present study we investigated the novel non-toxic mucosal adjuvant CTA1-DD. The immunogenicity of CTA1-DD was compared to our "gold-standard" mucosal adjuvant combination of cholera toxin (CT) and cytosine-phosphate-guanosine oligodeoxynucleotide (CpG-ODN). We also utilised different needle-free immunisation routes, intranasal (IN), sublingual (SL) and transcutaneous (TC), to stimulate the induction of immunity at the multiple mucosal surfaces in the body where Chlamydia are known to infect. Moreover, administering each adjuvant by different routes may also limit the toxicity of the CT/CpG adjuvant, currently restricted from use in humans. Mice were immunised with either adjuvant together with the chlamydial major outer membrane protein (MOMP) to evaluate vaccine safety and quantify the induction of antigen-specific mucosal immune responses. The level of protection against infection and disease was also assessed in vaccinated animals following a live genital or respiratory tract infectious challenge. The non-toxic CTA1-DD was found to be safe and immunogenic when delivered via the IN route in mice, inducing a mucosal response and level of protective immunity against chlamydial challenge comparable to its toxic CT/CpG counterpart administered by the same route. The utilisation of different routes of immunisation strongly influenced the distribution of antigen-specific responses to distant mucosal surfaces and also abrogated the toxicity of CT/CpG. The CT/CpG-adjuvanted vaccine was safe when administered by the SL and TC routes and conferred partial immunity against infection and pathology in both challenge models. This protection was attributed to the induction of antigen-specific pro-inflammatory cellular responses in the lymph nodes regional to the site of infection rather than in the spleen. Development of non-toxic adjuvants and effective ways to reduce the side effects of toxic adjuvants has profound implications for vaccine development, particularly against mucosal pathogens like Chlamydia. Interestingly, we also identified, in both infection models, two contrasting vaccines each capable of exclusively preventing either infection or pathology. This indicated that the development of pathology following infection of vaccinated animals was independent of bacterial load and was instead the result of immunopathology, potentially driven by the adaptive immune response generated following immunisation.
While both vaccines induced high levels of interleukin (IL)-17 cytokines, the pathology-protected group displayed significantly reduced expression of the corresponding IL-17 receptors and hence an inhibition of signalling. This indicated that the balance of IL-17-mediated responses defines the degree of protection against infection and tissue damage generated following vaccination. This study has enabled us to better understand the immune basis of pathology and protection, which is necessary to design more effective vaccines.
Abstract:
Process-aware information systems (PAISs) can be configured using a reference process model, which is typically obtained via expert interviews. Over time, however, contextual factors and system requirements may cause the operational process to start deviating from this reference model. While a reference model should ideally be updated to remain aligned with such changes, this is a costly and often neglected activity. We present a new process mining technique that automatically improves the reference model on the basis of the observed behavior as recorded in the event logs of a PAIS. We discuss how to balance the four basic quality dimensions for process mining (fitness, precision, simplicity and generalization) and a new dimension, namely the structural similarity between the reference model and the discovered model. We demonstrate the applicability of this technique using a real-life scenario from a Dutch municipality.
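The abstract does not say how the five quality dimensions are balanced; as a purely illustrative sketch, per-dimension scores (assumed here to lie in [0, 1]) could be combined with a weighted sum, where the weight on structural similarity controls how far the improved model may drift from the reference model. None of the names or weights below come from the paper.

```python
def weighted_quality(scores, weights):
    """Combine fitness, precision, simplicity, generalization and structural
    similarity scores (all assumed in [0, 1]) into one objective value.
    Illustrative only; not the balancing scheme used in the paper."""
    dims = ("fitness", "precision", "simplicity", "generalization", "similarity")
    total_weight = sum(weights[d] for d in dims)
    return sum(weights[d] * scores[d] for d in dims) / total_weight


# Example: favour fitness to the event log and similarity to the reference model
scores = {"fitness": 0.92, "precision": 0.70, "simplicity": 0.65,
          "generalization": 0.60, "similarity": 0.80}
weights = {"fitness": 2.0, "precision": 1.0, "simplicity": 1.0,
           "generalization": 1.0, "similarity": 2.0}
print(round(weighted_quality(scores, weights), 3))
```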
Abstract:
Most security models for authenticated key exchange (AKE) do not explicitly model the associated certification system, which includes the certification authority (CA) and its behaviour. However, there are several well-known and realistic attacks on AKE protocols which exploit various forms of malicious key registration and which therefore lie outside the scope of these models. We provide the first systematic analysis of AKE security incorporating certification systems (ASICS). We define a family of security models that, in addition to allowing different sets of standard AKE adversary queries, also permit the adversary to register arbitrary bitstrings as keys. For this model family we prove generic results that enable the design and verification of protocols that achieve security even if some keys have been produced maliciously. Our approach is applicable to a wide range of models and protocols; as a concrete illustration of its power, we apply it to the CMQV protocol in the natural strengthening of the eCK model to the ASICS setting.
Abstract:
This article focuses on how the information seeker makes decisions about relevance. It employs a novel decision theory based on quantum probabilities. This direction derives from mounting research within the field of cognitive science showing that decision theory based on quantum probabilities is superior to standard probability models for modelling human judgements [2, 1]. By quantum probabilities, we mean that the decision event space is modelled as a vector space rather than the usual Boolean algebra of sets. In this way, incompatible perspectives around a decision can be modelled, leading to an interference term which modifies the law of total probability. The interference term is crucial in modifying the probability judgements made by current probabilistic systems so that they align better with human judgement. The goal of this article is thus to model the information seeker as a decision maker. For this purpose, signal detection models are sketched which are in principle applicable in a wide variety of information seeking scenarios.
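For context, the interference-modified law of total probability referred to above is usually written as follows; this is the generic quantum-probability form from the cognitive science literature, not a formula quoted from the article itself.

```latex
P(A) = P(B)\,P(A \mid B) + P(\neg B)\,P(A \mid \neg B)
       + 2\sqrt{P(B)\,P(A \mid B)\,P(\neg B)\,P(A \mid \neg B)}\,\cos\theta
```

The final cosine term is the interference term: when the two perspectives are compatible it vanishes and the classical law of total probability is recovered.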
Abstract:
This presentation discusses topics and issues that connect closely with the Conference Themes and themes in the ARACY Report Card. For example, developing models of public space that are safe, welcoming and relevant to children and young people will impact on their overall wellbeing and may help to prevent many of the tensions occurring in Australia and elsewhere around the world. This area is the subject of ongoing international debate, research and policy formation, relevant to concerns in the ARACY Report Card about children and young people’s health and safety, participation, behaviours and risks, and peer and family relationships.
Abstract:
Background: Developing sampling strategies to target biological pests such as insects in stored grain is inherently difficult owing to species biology and behavioural characteristics. The design of robust sampling programmes should be based on an underlying statistical distribution that is sufficiently flexible to capture variations in the spatial distribution of the target species. Results: Comparisons are made of the accuracy of four probability-of-detection sampling models - the negative binomial model [1], the Poisson model [1], the double logarithmic model [2] and the compound model [3] - for detection of insects over a broad range of insect densities. Although the double log and negative binomial models performed well under specific conditions, it is shown that, of the four models examined, the compound model performed the best over a broad range of insect spatial distributions and densities. In particular, this model predicted well the number of samples required when insect density was high and clumped within experimental storages. Conclusions: This paper reinforces the need for effective sampling programmes designed to detect insects over a broad range of spatial distributions. The compound model is robust over a broad range of insect densities and leads to substantial improvement in detection probabilities within highly variable systems such as grain storage.
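To illustrate the kind of probability-of-detection calculation involved, the sketch below uses the negative binomial case only (the paper's compound model is not reproduced here). Under a negative binomial count distribution, the chance that a single sample contains at least one insect is 1 - (1 + m/k)^(-k), and the number of samples needed for a target detection probability follows from treating samples as independent. All parameter values are illustrative.

```python
import numpy as np


def prob_detect_negbin(mean_density, k, sample_size_kg=1.0):
    """Probability that one sample contains at least one insect, assuming a
    negative binomial count distribution with mean `mean_density` insects per
    kg and aggregation parameter k (smaller k = more clumped)."""
    m = mean_density * sample_size_kg
    return 1.0 - (1.0 + m / k) ** (-k)


def samples_required(mean_density, k, target=0.95, sample_size_kg=1.0):
    """Number of independent samples needed to reach the target probability
    of detecting at least one insect."""
    p_one = prob_detect_negbin(mean_density, k, sample_size_kg)
    return int(np.ceil(np.log(1.0 - target) / np.log(1.0 - p_one)))


# Example: at the same mean density, a clumped population (small k) needs
# more samples for 95% detection than a near-random (Poisson-like) one.
print(samples_required(mean_density=0.1, k=0.2))   # highly aggregated
print(samples_required(mean_density=0.1, k=10.0))  # nearly random
```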
Abstract:
This paper details the participation of the Australian e-Health Research Centre (AEHRC) in the ShARe/CLEF 2013 eHealth Evaluation Lab - Task 3. This task aims to evaluate the use of information retrieval (IR) systems to aid consumers (e.g. patients and their relatives) in seeking health advice on the Web. Our submissions to the ShARe/CLEF challenge are based on language models generated from the web corpus provided by the organisers. Our baseline system is a standard Dirichlet smoothed language model. We enhance the baseline by identifying and correcting spelling mistakes in queries, as well as expanding acronyms using AEHRC's Medtex medical text analysis platform. We then consider the readability and the authoritativeness of web pages to further enhance the quality of the document ranking. Measures of readability are integrated in the language models used for retrieval via prior probabilities. Prior probabilities are also used to encode authoritativeness information derived from a list of top-100 consumer health websites. Empirical results show that correcting spelling mistakes and expanding acronyms found in queries significantly improves the effectiveness of the language model baseline. Readability priors seem to increase retrieval effectiveness for graded relevance at early ranks (nDCG@5, but not precision), but no improvements are found at later ranks and when considering binary relevance. The authoritativeness prior does not appear to provide retrieval gains over the baseline: this is likely to be because of the small overlap between websites in the corpus and those in the top-100 consumer health websites we acquired.
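The combination of Dirichlet-smoothed query likelihood with a document prior can be sketched as follows. This is a minimal, generic illustration in log space, with a hypothetical readability score used as the prior; it is not the Medtex-based system or the exact prior used in the submission.

```python
import math
from collections import Counter


def dirichlet_lm_score(query_terms, doc_terms, collection_tf, collection_len, mu=2000):
    """Query log-likelihood under a Dirichlet-smoothed document language model."""
    doc_tf = Counter(doc_terms)
    doc_len = len(doc_terms)
    score = 0.0
    for term in query_terms:
        # Background (collection) probability; 0.5 is a small pseudo-count for unseen terms.
        p_coll = collection_tf.get(term, 0.5) / collection_len
        p_term = (doc_tf[term] + mu * p_coll) / (doc_len + mu)
        score += math.log(p_term)
    return score


def score_with_readability_prior(query_terms, doc_terms, collection_tf,
                                 collection_len, readability_prob):
    """Add a log document prior derived from a readability score in (0, 1],
    as a rough analogue of the prior-probability integration described above."""
    lm = dirichlet_lm_score(query_terms, doc_terms, collection_tf, collection_len)
    return lm + math.log(readability_prob)
```

An authoritativeness prior would be added in the same way, e.g. a higher prior probability for documents whose host appears in the top-100 consumer health website list.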