Abstract:
This dissertation is primarily an applied statistical modelling investigation, motivated by a case study comprising real data and real questions. Theoretical questions on modelling and computation of normalization constants arose from pursuit of these data analytic questions. The essence of the thesis can be described as follows. Consider binary data observed on a two-dimensional lattice. A common problem with such data is the ambiguity of the recorded zeroes. These may represent a zero response given some threshold (presence) or that the threshold has not been triggered (absence). Suppose that the researcher wishes to estimate the effects of covariates on the binary responses, whilst taking into account underlying spatial variation, which is itself of some interest. This situation arises in many contexts, and the dingo, cypress and toad case studies described in the motivation chapter are examples of this. Two main approaches to modelling and inference are investigated in this thesis. The first is frequentist and based on generalized linear models, with spatial variation modelled by using a block structure or by smoothing the residuals spatially. The EM algorithm can be used to obtain point estimates, coupled with bootstrapping or asymptotic MLE estimates for standard errors. The second approach is Bayesian and based on a three- or four-tier hierarchical model, comprising a logistic regression with covariates for the data layer, a binary Markov random field (MRF) for the underlying spatial process, and suitable priors for the parameters in these main models. The three-parameter autologistic model is a particular MRF of interest. Markov chain Monte Carlo (MCMC) methods comprising hybrid Metropolis/Gibbs samplers are suitable for computation in this situation. Model performance can be gauged by MCMC diagnostics. Model choice can be assessed by incorporating another tier in the modelling hierarchy.
This requires evaluation of a normalization constant, a notoriously difficult problem. Difficulty with estimating the normalization constant for the MRF can be overcome by using a path integral approach, although this is a highly computationally intensive method. Different methods of estimating ratios of normalization constants (NCs) are investigated, including importance sampling Monte Carlo (ISMC), dependent Monte Carlo based on MCMC simulations (MCMC), and reverse logistic regression (RLR). I develop an idea present, though not fully developed, in the literature, and propose the integrated mean canonical statistic (IMCS) method for estimating log NC ratios for binary MRFs. The IMCS method falls within the framework of the newly identified path sampling methods of Gelman & Meng (1998) and outperforms ISMC, MCMC and RLR. It also does not rely on simplifying assumptions, such as ignoring spatio-temporal dependence in the process. A thorough investigation is made of the application of IMCS to the three-parameter autologistic model. This work introduces background computations required for the full implementation of the four-tier model in Chapter 7. Two different extensions of the three-tier model to a four-tier version are investigated. The first extension incorporates temporal dependence in the underlying spatio-temporal process. The second extension allows the successes and failures in the data layer to depend on time. The MCMC computational method is extended to incorporate the extra layer. A major contribution of the thesis is the development, for the first time, of a fully Bayesian approach to inference for these hierarchical models. Note: The author of this thesis has agreed to make it open access but invites people downloading the thesis to send her an email via the 'Contact Author' function.
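The path-sampling identity behind the IMCS method can be illustrated in a one-parameter toy case: for an MRF p(x | θ) ∝ exp(θ S(x)), the derivative of log Z(θ) equals E_θ[S(X)], so a log NC ratio is the integral of the mean canonical statistic over θ. The following is a minimal sketch, assuming a single-parameter Ising-type model on a small lattice with a plain Gibbs sampler; the thesis's three-parameter autologistic model extends this to a vector of canonical statistics, and all names here are illustrative:

```python
import numpy as np

def gibbs_mean_stat(theta, shape=(3, 3), sweeps=3000, burn=500, rng=None):
    # Estimate E_theta[S(X)] for p(x) proportional to exp(theta * S(x)),
    # x_i in {-1, +1}, S(x) = sum of x_i * x_j over nearest-neighbour
    # pairs (free boundary), via single-site Gibbs sampling.
    if rng is None:
        rng = np.random.default_rng(0)
    h, w = shape
    x = rng.choice(np.array([-1, 1]), size=shape)
    total, count = 0.0, 0
    for sweep in range(sweeps):
        for i in range(h):
            for j in range(w):
                nb = 0
                if i > 0:     nb += x[i - 1, j]
                if i < h - 1: nb += x[i + 1, j]
                if j > 0:     nb += x[i, j - 1]
                if j < w - 1: nb += x[i, j + 1]
                # full conditional P(x_ij = +1 | rest)
                p_plus = 1.0 / (1.0 + np.exp(-2.0 * theta * nb))
                x[i, j] = 1 if rng.random() < p_plus else -1
        if sweep >= burn:
            total += np.sum(x[:-1, :] * x[1:, :]) + np.sum(x[:, :-1] * x[:, 1:])
            count += 1
    return total / count

def log_nc_ratio(theta1, theta0=0.0, grid=9, **kw):
    # Path-sampling (IMCS-style) estimate of log Z(theta1) - log Z(theta0):
    # d/dtheta log Z(theta) = E_theta[S], so integrate the estimated mean
    # canonical statistic over a theta grid (trapezoidal rule).
    thetas = np.linspace(theta0, theta1, grid)
    means = [gibbs_mean_stat(t, **kw) for t in thetas]
    return sum(0.5 * (means[k] + means[k + 1]) * (thetas[k + 1] - thetas[k])
               for k in range(grid - 1))
```

On a 3x3 lattice the estimate can be checked against brute-force enumeration of all 512 configurations, which is how a sketch like this would be validated before scaling up.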
Abstract:
In the context of learning paradigms of identification in the limit, we address the question: why is uncertainty sometimes desirable? We use mind change bounds on the output hypotheses as a measure of uncertainty, and interpret 'desirable' as reduction in data memorization, also defined in terms of mind change bounds. The resulting model is closely related to iterative learning with bounded mind change complexity, but the dual use of mind change bounds (for hypotheses and for data) is a key distinctive feature of our approach. We show that situations exist where the more mind changes the learner is willing to accept, the less data it needs to remember in order to converge to the correct hypothesis. We also investigate relationships between our model and learning from good examples, set-driven, monotonic and strong-monotonic learners, as well as class-comprising versus class-preserving learnability.
Abstract:
Keyword spotting is the task of detecting keywords of interest within continuous speech. The applications of this technology range from call centre dialogue systems to covert speech surveillance devices. Keyword spotting is particularly well suited to data mining tasks such as real-time keyword monitoring and unrestricted vocabulary audio document indexing. However, to date, many keyword spotting approaches have suffered from poor detection rates, high false alarm rates, or slow execution times, thus reducing their commercial viability. This work investigates the application of keyword spotting to data mining tasks. The thesis makes a number of major contributions to the field of keyword spotting. The first major contribution is the development of a novel keyword verification method named Cohort Word Verification. This method combines high level linguistic information with cohort-based verification techniques to obtain dramatic improvements in verification performance, in particular for the problematic short duration target word class. The second major contribution is the development of a novel audio document indexing technique named Dynamic Match Lattice Spotting. This technique augments lattice-based audio indexing principles with dynamic sequence matching techniques to provide robustness to erroneous lattice realisations. The resulting algorithm obtains significant improvement in detection rate over lattice-based audio document indexing while still maintaining extremely fast search speeds. The third major contribution is the study of multiple verifier fusion for the task of keyword verification. The reported experiments demonstrate that substantial improvements in verification performance can be obtained through the fusion of multiple keyword verifiers. The research focuses on combinations of speech background model based verifiers and cohort word verifiers.
The final major contribution is a comprehensive study of the effects of limited training data for keyword spotting. This study is performed with consideration as to how these effects impact the immediate development and deployment of speech technologies for non-English languages.
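The dynamic sequence matching idea behind Dynamic Match Lattice Spotting can be sketched as edit-distance scoring of lattice phone sequences against the keyword's canonical pronunciation, so that erroneous lattice realisations within a small edit distance are still detected. This toy version uses plain Levenshtein distance over already-extracted lattice paths; the thesis's actual algorithm operates on weighted lattices directly and is not reproduced here:

```python
def edit_distance(a, b):
    # Classic Levenshtein dynamic programme over phone sequences.
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution / match
    return d[m][n]

def spot(keyword_phones, lattice_sequences, max_dist=1):
    # Keep the lattice phone sequences lying within max_dist edits
    # of the keyword's canonical phone sequence.
    return [seq for seq in lattice_sequences
            if edit_distance(keyword_phones, seq) <= max_dist]
```

For example, with the (hypothetical) pronunciation ['d', 'ey', 't', 'ax'] for "data", a lattice path ['d', 'ae', 't', 'ax'] containing one recognition error is still spotted at max_dist=1, while an unrelated path is rejected.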
Abstract:
In the field of tissue engineering new polymers are needed to fabricate scaffolds with specific properties depending on the targeted tissue. This work aimed at designing and developing a 3D scaffold with variable mechanical strength, fully interconnected porous network, controllable hydrophilicity and degradability. For this, a desktop-robot-based melt-extrusion rapid prototyping technique was applied to a novel tri-block co-polymer, namely poly(ethylene glycol)-block-poly(ε-caprolactone)-block-poly(DL-lactide), PEG-PCL-P(DL)LA. This co-polymer was melted by electrical heating and directly extruded using computer-controlled rapid prototyping by means of compressed purified air to build porous scaffolds. Various lay-down patterns (0/30/60/90/120/150°, 0/45/90/135°, 0/60/120° and 0/90°) were produced by appropriate positioning of the robotic control system. Scanning electron microscopy and micro-computed tomography were used to show that the 3D scaffold architectures were honeycomb-like, with completely interconnected and controlled channel characteristics. Compression tests were performed and the data obtained agreed well with the typical behavior of a porous material undergoing deformation. Preliminary cell response to the as-fabricated scaffolds was studied with primary human fibroblasts. The results demonstrated the suitability of the process and the cell biocompatibility of the polymer, two important properties among the many required for effective clinical use and efficient tissue-engineering scaffolding.
Abstract:
Computer-aided technologies, medical imaging, and rapid prototyping have created new possibilities in biomedical engineering. The systematic variation of scaffold architecture, as well as the mineralization inside a scaffold/bone construct, can be studied using computer imaging technology, CAD/CAM and micro-computed tomography (micro-CT). In this paper, the potential of combining these technologies has been exploited in the study of scaffolds and osteochondral repair. Porosity, surface area per unit volume and the degree of interconnectivity were evaluated through imaging and computer-aided manipulation of the scaffold scan data. For the osteochondral model, the spatial distribution and the degree of bone regeneration were evaluated. In this study, the versatility of two software packages, Mimics (Materialize) and CTan with 3D realistic visualization (Skyscan), was also assessed.
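As a rough illustration of the kind of quantities evaluated from scan data, porosity and surface area per unit volume can be computed directly from a binarised voxel volume. The following is a minimal sketch, assuming a 3D 0/1 array (1 = material) with cubic voxels; the face-counting surface estimate is far cruder than what packages such as Mimics or CTan report, and the function names are illustrative:

```python
import numpy as np

def porosity(voxels):
    # Porosity of a binary voxel volume: fraction of void (0) voxels.
    v = np.asarray(voxels, dtype=bool)
    return 1.0 - v.mean()

def surface_area_per_volume(voxels, voxel_size=1.0):
    # Approximate specific surface area by counting exposed voxel faces:
    # material voxels adjacent to a void voxel or to the volume boundary.
    v = np.asarray(voxels, dtype=bool)
    faces = 0
    for axis in range(3):
        a = np.swapaxes(v, 0, axis)
        faces += np.sum(a[0]) + np.sum(a[-1])   # faces on the outer boundary
        faces += np.sum(a[1:] != a[:-1])        # internal material/void interfaces
    area = faces * voxel_size ** 2
    total_volume = v.size * voxel_size ** 3
    return area / total_volume
```

For a fully solid 2x2x2 block with unit voxels this yields porosity 0 and a surface-to-volume ratio of 24/8 = 3, matching the geometric cube.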
Abstract:
Purpose – This paper aims to present a novel rapid prototyping (RP) fabrication method and preliminary characterization for chitosan scaffolds. Design – A desktop rapid prototyping robot dispensing (RPBOD) system has been developed to fabricate scaffolds for tissue engineering (TE) applications. The system is a computer-controlled four-axis machine with a multiple-dispenser head. Neutralization of the acetic acid by the sodium hydroxide produces a precipitate that forms a gel-like chitosan strand. The scaffold properties were characterized by scanning electron microscopy, porosity calculation and compression testing. An example of the fabrication of a freeform hydrogel scaffold is demonstrated. The required geometric data for the freeform scaffold were obtained from CT-scan images and the dispensing path control data were converted from its volume model. The applications of the scaffolds are discussed based on their potential for TE. Findings – It is shown that the RPBOD system can be interfaced with imaging techniques and computational modeling to produce scaffolds which can be customized in overall size and shape, allowing tissue-engineered grafts to be tailored to specific applications or even to individual patients. Research limitations/implications – Important challenges for further research are the incorporation of growth factors, as well as cell seeding, into the 3D dispensing plotting materials. Improvements regarding the mechanical properties of the scaffolds are also necessary. Originality/value – One of the important aspects of TE is the design of scaffolds. For customized TE, it is essential to be able to fabricate 3D scaffolds of various geometric shapes in order to repair tissue defects. RP or solid free-form fabrication techniques hold great promise for designing 3D customized scaffolds; yet traditional cell-seeding techniques may not provide enough cell mass for larger constructs.
This paper presents a novel attempt to fabricate 3D scaffolds using hydrogels, which in the future can be combined with cells.
Abstract:
We propose a model-based approach to unify clustering and network modeling using time-course gene expression data. Specifically, our approach uses a mixture model to cluster genes. Genes within the same cluster share a similar expression profile. The network is built over cluster-specific expression profiles using state-space models. We discuss the application of our model to simulated data as well as to time-course gene expression data arising from animal models on prostate cancer progression. The latter application shows that with a combined statistical/bioinformatics analysis, we are able to extract gene-to-gene relationships supported by the literature as well as new plausible relationships.
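The clustering step of such a model can be sketched with a minimal EM algorithm for a spherical Gaussian mixture over time-course profiles, where each gene's responsibility vector gives its soft cluster membership and the component means are the cluster-specific profiles. This is only an illustration under simplifying assumptions (shared spherical variance, deterministic farthest-point initialisation); the paper's model and its state-space network layer over cluster profiles are more elaborate:

```python
import numpy as np

def em_mixture(X, k, iters=50):
    # Minimal EM for a spherical Gaussian mixture over expression profiles.
    # X: (genes, timepoints). Returns (responsibilities, cluster mean profiles).
    n, d = X.shape
    # Farthest-point initialisation of the k means (deterministic).
    mu = [X[0]]
    for _ in range(1, k):
        dist2 = np.min(((X[:, None, :] - np.array(mu)[None]) ** 2).sum(-1), axis=1)
        mu.append(X[np.argmax(dist2)])
    mu = np.array(mu, dtype=float)
    pi = np.full(k, 1.0 / k)
    var = X.var() + 1e-6
    for _ in range(iters):
        # E-step: responsibilities under spherical Gaussians, in log space.
        d2 = ((X[:, None, :] - mu[None]) ** 2).sum(-1)   # (n, k)
        logp = np.log(pi) - 0.5 * d2 / var
        logp -= logp.max(axis=1, keepdims=True)
        r = np.exp(logp)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: update mixing weights, means and the shared variance.
        nk = r.sum(axis=0)
        mu = (r.T @ X) / nk[:, None]
        pi = nk / n
        var = (r * d2).sum() / (n * d) + 1e-6
    return r, mu
```

With two well-separated groups of simulated profiles, the responsibilities recover the group structure, and the fitted means could then feed a downstream network model.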
Abstract:
Rapid prototyping (RP) techniques have been utilised by tissue engineers to produce three-dimensional (3D) porous scaffolds. RP technologies allow the design and fabrication of complex scaffold geometries with a fully interconnected pore network. The three-dimensional printing (3DP) technique was used to fabricate scaffolds with a novel micro- and macro-architecture. In this study, a unique blend of starch-based polymer powders (cornstarch, dextran and gelatin) was developed for the 3DP process. Cylindrical scaffolds of five different designs were fabricated and post-processed to enhance the mechanical and chemical properties. The scaffold properties were characterised by scanning electron microscopy (SEM), differential scanning calorimetry (DSC), porosity analysis and compression tests.
Abstract:
Cell-cell and cell-matrix interactions play a major role in tumor morphogenesis and cancer metastasis. Therefore, it is crucial to create a model with a biomimetic microenvironment that allows such interactions to fully represent the pathophysiology of a disease for an in vitro study. This is achievable by using three-dimensional (3D) models instead of conventional two-dimensional (2D) cultures with the aid of tissue engineering technology. We are now able to better address the complex intercellular interactions underlying prostate cancer (CaP) bone metastasis through such models. In this study, we assessed the interaction of CaP cells and human osteoblasts (hOBs) within a tissue engineered bone (TEB) construct. Consistent with other in vivo studies, our findings show that intercellular and CaP cell-bone matrix interactions lead to elevated levels of matrix metalloproteinases, steroidogenic enzymes and the CaP biomarker, prostate specific antigen (PSA); all associated with CaP metastasis. Hence, this highlights the physiological relevance of the model. We believe that this model will provide new insights into the previously poorly understood molecular mechanisms of bone metastasis, which will foster further translational studies, and ultimately offer a potential tool for drug screening. © 2010 Landes Bioscience.
Abstract:
Data breach notification laws require organisations to notify affected persons or regulatory authorities when an unauthorised acquisition of personal data occurs. Most laws provide a safe harbour to this obligation if acquired data has been encrypted. There are three types of safe harbour: an exemption, a rebuttable presumption, and a factor-based analysis. We demonstrate, using three condition-based scenarios, that the broad formulation of most encryption safe harbours is based on the flawed assumption that encryption is the silver bullet for personal information protection. We then contend that reliance upon an encryption safe harbour should be dependent upon a rigorous and competent risk-based review that is required on a case-by-case basis. Finally, we recommend the use of both an encryption safe harbour and a notification trigger as our preferred choice for a data breach notification regulatory framework.
Abstract:
The advent of data breach notification laws in the United States (US) has unearthed a significant problem involving the mismanagement of personal information by a range of public and private sector organisations. At present, there is no statutory obligation under Australian law requiring public or private sector organisations to report a data breach of personal information to law enforcement agencies or affected persons. However, following a comprehensive review of Australian privacy law, the Australian Law Reform Commission (ALRC) has recommended the introduction of a mandatory data breach notification scheme. The issue of data breach notification has ignited fierce debate amongst stakeholders, especially larger private sector entities. The purpose of this article is to document the perspectives of key industry and government representatives to identify their standpoints regarding an appropriate regulatory approach to data breach notification in Australia.