973 results for "low-dimensional system"
Abstract:
During the ARCTIC '91 expedition with RV 'Polarstern', several multicorer and kastenlot cores were recovered along a profile crossing the eastern part of the Arctic Ocean. The investigated cores consist mainly of clayey-silty sediments, with some units of higher sand content. In this thesis, detailed sedimentological and organic-geochemical investigations were performed. In part, the near-surface sediments were AMS-14C dated, making it possible to interpret the results of the organic-geochemical investigations in terms of climatic changes (isotope stage 2 to the Holocene). The near-absence of foraminifers within the long cores prevented the development of an oxygen isotope stratigraphy. Only the results of core PS2174-5 from the Amundsen Basin could be discussed in terms of climatic change, dated back to oxygen isotope stage 7. Detailed organic-geochemical investigations in the central Arctic Ocean are rare; therefore, several different organic-geochemical methods were used to obtain a wide range of data for the interpretation of the organic matter. The high organic carbon content of the surface sediments derives from a high input of terrigenous organic matter. The terrigenous organic material is most likely entrained within the sea ice on the Siberian shelves and released during ice drift over the Arctic Ocean. Other factors such as iceberg transport and turbidites also contribute to the high input of terrigenous organic matter. Due to the more or less closed sea-ice cover, the Arctic Ocean is known as a low-productivity system. A model shows that only 2 % of the organic matter in central Arctic Ocean sediments is of marine origin; the influence of the West Spitsbergen Current increases the marine organic matter content to 16 %. Short-chain n-alkanes (C17 and C19) can be used as a marker of marine productivity in the Arctic Ocean.
Higher contents of short-chain n-alkanes occur in surface sediments of the Lomonosov Ridge and the Makarov Basin, indicating a higher marine productivity caused by a reduced sea-ice cover. The Beaufort Gyre and Transpolar Drift patterns could be responsible for the lower sea-ice distribution in this region. The sediments of stages 2 and 3 in this region are also dominated by a higher content of short-chain n-alkanes, indicating a comparable ice-drift pattern during that time. The content and composition of organic carbon in the sediments of core PS2174-5 reflect glacial/interglacial changes. Interglacial stages 7 and 5e show a low organic carbon content (< 0.5 %) and, as indicated by high hydrogen indices, low C/N ratios, a higher content of n-alkanes (C17 and C19) and a higher opal content, a higher marine productivity. In the Holocene, a high content of foraminifers, coccoliths, ostracodes, and sponge spicules indicates higher surface-water productivity. Nevertheless, the low hydrogen indices reveal a high content of terrigenous organic matter; the Holocene therefore seems to differ from interglacials 7 and 5e. During the glacial periods (stages 6, upper 5, and 4), TOC values are significantly higher (0.7 to 1.3 %). In addition, low hydrogen indices, high C/N ratios, and low short-chain n-alkane and opal contents provide evidence for a higher input of terrigenous organic matter and reduced marine productivity. The high lignin content in core sections with high TOC contents substantiates the high input of terrigenous organic matter. Changes in the content and composition of the organic carbon are believed to vary with fluctuations in sea level and sea-ice coverage.
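The two-end-member reasoning behind the quoted 2 % and 16 % marine fractions can be sketched as a simple linear unmixing. The hydrogen-index end-member values below are illustrative assumptions, not the thesis's calibration:

```python
# Hypothetical two-end-member mixing sketch: estimating the marine
# fraction of sedimentary organic matter from a bulk proxy value.
# End-member values are illustrative assumptions only.

def marine_fraction(bulk, marine_end, terrigenous_end):
    """Linear unmixing of a bulk proxy (e.g. hydrogen index) into a
    marine fraction between the two end-member proxy values."""
    frac = (bulk - terrigenous_end) / (marine_end - terrigenous_end)
    return min(max(frac, 0.0), 1.0)  # clamp to the physical range [0, 1]

# Assumed end-members: HI ~ 400 for marine, ~ 100 for terrigenous OM.
f_central = marine_fraction(bulk=106.0, marine_end=400.0, terrigenous_end=100.0)
f_fram = marine_fraction(bulk=148.0, marine_end=400.0, terrigenous_end=100.0)
print(round(f_central, 2), round(f_fram, 2))  # 0.02 0.16
```

With these assumed end-members, bulk values of 106 and 148 reproduce the quoted 2 % and 16 % marine fractions purely as an arithmetic illustration of how such a mixing model works.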
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-06
Abstract:
This thesis describes the Generative Topographic Mapping (GTM), a non-linear latent variable model intended for modelling continuous, intrinsically low-dimensional probability distributions embedded in high-dimensional spaces. It can be seen as a non-linear form of principal component analysis or factor analysis. It also provides a principled alternative to the self-organizing map, a widely established neural network model for unsupervised learning, resolving many of its associated theoretical problems. An important potential application of the GTM is visualization of high-dimensional data. Since the GTM is non-linear, the relationship between data and its visual representation may be far from trivial, but a better understanding of this relationship can be gained by computing the so-called magnification factor. In essence, the magnification factor relates the distances between data points, as they appear when visualized, to the actual distances between those data points. There are two principal limitations of the basic GTM model. The computational effort required grows exponentially with the intrinsic dimensionality of the density model; however, if the intended application is visualization, this will typically not be a problem. The other limitation is the inherent structure of the GTM, which makes it most suitable for modelling moderately curved probability distributions of approximately rectangular shape. When the target distribution is very different from that, the aim of maintaining an 'interpretable' structure, suitable for visualizing data, may come into conflict with the aim of providing a good density model. The fact that the GTM is a probabilistic model means that results from probability theory and statistics can be used to address problems such as model complexity. Furthermore, this framework provides solid ground for extending the GTM to wider contexts than that of this thesis.
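The magnification factor described above can be sketched numerically: for a smooth mapping from latent space to data space, the local area stretch is sqrt(det(JᵀJ)), where J is the Jacobian of the mapping. The mapping below is a toy non-linear embedding chosen for illustration, not a trained GTM:

```python
import numpy as np

# Toy non-linear embedding of a 2-D latent space into 3-D data space.
def f(z):
    x, y = z
    return np.array([x, y, 0.5 * (x**2 + y**2)])  # paraboloid surface

def magnification(f, z, h=1e-6):
    """Local area stretch sqrt(det(J^T J)) via a central-difference Jacobian."""
    z = np.asarray(z, dtype=float)
    J = np.empty((3, 2))
    for j in range(2):
        dz = np.zeros(2); dz[j] = h
        J[:, j] = (f(z + dz) - f(z - dz)) / (2 * h)
    return np.sqrt(np.linalg.det(J.T @ J))

print(magnification(f, [0.0, 0.0]))  # flat at the origin: 1.0
print(magnification(f, [1.0, 1.0]))  # curved region stretches areas: sqrt(3)
```

Regions where this factor is large appear compressed in the latent (visualization) space relative to their true extent in data space, which is exactly the distortion the magnification factor is meant to reveal.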
Abstract:
We have recently proposed the framework of independent blind source separation as an advantageous approach to steganography. Amongst the several characteristics noted was a sensitivity of message reconstruction to small perturbations in the sources. This characteristic is not common in most other approaches to steganography. In this paper we discuss how this sensitivity relates to the joint diagonalisation inside the independent component approach and to the reliance on exact knowledge of secret information, and how it can be used as an additional and inherent security mechanism against malicious attempts to discover the hidden messages. The paper therefore provides an enhanced mechanism that can be used for e-document forensic analysis and can be applied to digital data media of different dimensionality. In this paper we use a low-dimensional example of biomedical time series as might occur in the electronic patient health record, where protection of private patient information is paramount.
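The sensitivity to exact knowledge of the secret information can be illustrated with a minimal mixing sketch. The signals, amplitudes, and key matrix below are hypothetical stand-ins, not the paper's actual scheme:

```python
import numpy as np

# A message is hidden by mixing it with a cover signal through a secret
# mixing matrix A. The exact key recovers the message; a slightly
# perturbed key degrades reconstruction sharply.
rng = np.random.default_rng(0)
n = 1000
cover = rng.standard_normal(n)                       # cover time series
message = np.where(np.sin(np.linspace(0, 20, n)) >= 0, 1.0, -1.0)

A = np.array([[1.0, 0.02],                           # secret key: near-identity
              [0.01, 1.0]])                          # mixing, low-amplitude message
X = A @ np.vstack([cover, 0.05 * message])           # stego mixtures

exact = np.linalg.inv(A) @ X                         # unmix with the exact key
perturbed = np.linalg.inv(A + 0.01 * rng.standard_normal((2, 2))) @ X

err_exact = np.abs(exact[1] / 0.05 - message).mean()
err_wrong = np.abs(perturbed[1] / 0.05 - message).mean()
print(err_exact, err_wrong)  # exact key recovers the message; perturbed key does not
```

The cover signal is much stronger than the embedded message, so any key error leaks cover energy into the reconstructed message channel, which is the inherent security mechanism discussed above.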
Abstract:
We present a thorough study on the development of a polymer optical fibre-based tuneable filter utilizing an intra-core Bragg grating that is electrically tuneable, operating at 1.55 µm. The Bragg grating is made tuneable using a thin-film resistive heater deposited on the surface of the fibre. The polymer fibre was coated via the photochemical deposition of a Pd/Cu metallic layer, with the procedure induced by VUV radiation at room temperature. The resulting device, when wavelength-tuned via Joule heating, underwent a wavelength shift of 2 nm for a moderate input power of 160 mW, with a wavelength-to-input-power coefficient of -13.4 pm/mW and a time constant of 1.7 s. A basic theoretical study verified that for this fibre type one can treat the device as a one-dimensional system. The model was extended to include the effect of input electrical power changes on the refractive index of the fibre and subsequently on the Bragg wavelength of the grating, showing excellent agreement with the experimental measurements.
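The quoted figures imply a simple linear electro-thermal tuning model, sketched below. The first-order step response is an assumption consistent with a single thermal time constant, not the paper's full model:

```python
import math

k = -13.4e-3          # nm per mW (quoted tuning coefficient)
tau = 1.7             # s (quoted thermal time constant)

def wavelength_shift(power_mw):
    """Steady-state Bragg wavelength shift in nm for a given power in mW."""
    return k * power_mw

def step_response(power_mw, t):
    """Assumed first-order approach to the steady-state shift after a power step."""
    return wavelength_shift(power_mw) * (1.0 - math.exp(-t / tau))

print(wavelength_shift(160))        # about -2.1 nm at 160 mW, matching the 2 nm shift
print(step_response(160, 5 * tau))  # ~99 % of the final shift after five time constants
```

Multiplying the coefficient by the quoted 160 mW input reproduces the reported ~2 nm shift, confirming the internal consistency of the figures.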
Abstract:
This thesis addresses the problem of information hiding in low-dimensional digital data, focussing on issues of privacy and security in Electronic Patient Health Records (EPHRs). The thesis proposes a new security protocol based on data hiding techniques for EPHRs, contending that embedding sensitive patient information inside the EPHR is the most appropriate solution currently available to resolve its security issues. Watermarking techniques are applied to one-dimensional time series data such as the electroencephalogram (EEG) to show that they add a level of confidence (in terms of privacy and security) in an individual's diverse bio-profile (the digital fingerprint of an individual's medical history), ensure that the data being analysed does indeed belong to the correct person, and ensure that it is not being accessed by unauthorised personnel. Embedding information inside single-channel biomedical time series data is more difficult than the standard application to images because of the reduced redundancy. A data hiding approach with an in-built capability to protect against illegal data snooping is developed. The capability of this secure method is enhanced by embedding not just a single message but multiple messages into an example one-dimensional EEG signal. Embedding multiple messages of similar characteristics, for example the identities of clinicians accessing the medical record, helps create a log of access, while embedding multiple messages of dissimilar characteristics into an EPHR enhances confidence in its use. The novel method of embedding multiple messages of both similar and dissimilar characteristics into a single-channel EEG demonstrated in this thesis shows how such data embedding supports the secure implementation and use of the EPHR.
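Multi-message embedding in a one-dimensional signal can be sketched with generic additive spread-spectrum watermarking, where each message is carried on a key-seeded pseudo-random carrier. This is an illustrative stand-in, not the thesis's exact scheme, and the signal is a white-noise surrogate rather than real EEG:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4096
eeg = rng.standard_normal(n)   # white-noise surrogate for an EEG channel

def carrier(key, n):
    """Pseudo-random +/-1 carrier deterministically derived from a secret key."""
    return np.random.default_rng(key).choice([-1.0, 1.0], size=n)

def embed(signal, bit, key, alpha=0.5):
    """Add a low-amplitude carrier whose sign encodes one message bit."""
    return signal + alpha * (1.0 if bit else -1.0) * carrier(key, len(signal))

def extract(signal, key):
    """Correlate against the key's carrier; the sign recovers the bit."""
    return float(signal @ carrier(key, len(signal))) > 0.0

# Two independent messages embedded with two different keys.
marked = embed(embed(eeg, True, key=101), False, key=202)
print(extract(marked, 101), extract(marked, 202))  # True False
```

Because the two carriers are nearly orthogonal, each key recovers only its own message, which mirrors the idea of separate embedded logs and payloads coexisting in one record.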
Abstract:
This thesis is a study of low-dimensional visualisation methods for data visualisation under uncertainty in the input data. It focuses on the two main feed-forward neural network algorithms, NeuroScale and the Generative Topographic Mapping (GTM), and seeks to make both algorithms able to accommodate that uncertainty. The two models are shown not to work well under high levels of noise within the data and need to be modified. The modifications of both models, NeuroScale and GTM, are verified using synthetic data to show their ability to accommodate the noise. The thesis then turns to the controversy surrounding the non-uniqueness of predictive gene lists (PGLs) for predicting the prognosis of breast cancer patients from DNA microarray experiments. Many of these studies have ignored the uncertainty issue, resulting in random correlations of sparse model selection in high-dimensional spaces. The visualisation techniques are used to confirm that the patients involved in such medical studies are intrinsically unclassifiable on the basis of the provided PGL evidence. This additional category of 'unclassifiable' should be accommodated within medical decision support systems if serious errors and unnecessary adjuvant therapy are to be avoided.
Abstract:
An alkali- and nitrate-free hydrotalcite coating has been grafted onto the surface of a hierarchically ordered macroporous-mesoporous SBA-15 template via stepwise growth of conformal alumina adlayers and their subsequent reaction with magnesium methoxide. The resulting low-dimensional hydrotalcite crystallites exhibit excellent per-site activity for the base-catalysed transesterification of glyceryl triolein with methanol for FAME production.
Abstract:
Protein-DNA interactions are an essential feature of the genetic activities of life, and the ability to predict and manipulate such interactions has applications in a wide range of fields. This thesis presents methods for modelling the properties of protein-DNA interactions. In particular, it investigates methods for visualising and predicting the specificity of the DNA-binding Cys2His2 zinc finger interaction. Cys2His2 zinc finger proteins interact via their individual fingers with base-pair subsites on the target DNA. Four key residue positions on the α-helix of the zinc fingers make non-covalent, sequence-specific interactions with the DNA. Mutating these key residues generates combinatorial possibilities that could potentially bind to any DNA segment of interest. Many attempts have been made to predict the binding interaction using structural and chemical information, but with only limited success. The most important contribution of the thesis is that the developed model allows the binding properties of a given protein-DNA pair to be visualised in relation to other protein-DNA combinations without having to explicitly model the specific protein molecule and specific DNA sequence physically. To demonstrate this, various databases were generated, including a synthetic database covering all possible combinations of the DNA-binding Cys2His2 zinc finger interactions. NeuroScale, a topographic visualisation technique, is exploited to represent the geometric structures of the protein-DNA interactions by measuring dissimilarity between the data points. In order to verify the effect of visualisation on understanding the binding properties of the DNA-binding Cys2His2 zinc finger interaction, various prediction models are constructed using both the high-dimensional original data and the represented data in a low-dimensional feature space.
Finally, novel data sets are studied through the selected visualisation models based on the experimental DNA-zinc finger protein database. The results of the NeuroScale projection show that different dissimilarity representations give distinctive structural groupings, yet cluster in biologically interesting ways. This method can be used to forecast the physicochemical properties of novel proteins, which may be beneficial for therapeutic purposes involving genome targeting in general.
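The dissimilarity-driven projection idea can be sketched with classical MDS over a pairwise dissimilarity matrix. The residue strings below are hypothetical, and classical MDS stands in for NeuroScale only to keep the example self-contained:

```python
import numpy as np

# Hypothetical key-residue strings standing in for zinc finger variants.
seqs = ["QSNR", "QSNK", "RDHT", "RDNT", "QANR"]

def hamming(a, b):
    """Simple dissimilarity: number of differing residue positions."""
    return sum(x != y for x, y in zip(a, b))

n = len(seqs)
D2 = np.array([[hamming(a, b) ** 2 for b in seqs] for a in seqs], float)
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ D2 @ J                                # double-centred Gram matrix
w, V = np.linalg.eigh(B)                             # eigenvalues in ascending order
coords = V[:, -2:] * np.sqrt(np.maximum(w[-2:], 0))  # top-2 embedding
print(coords.shape)  # (5, 2)
```

Any dissimilarity measure (structural, chemical, or sequence-based) can be substituted for the Hamming distance here; the projection then groups items whose mutual dissimilarities are small, which is the property the visualisation above exploits.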
Abstract:
This paper examines a method for locating a distribution of an absorbing gas within a scene using a passive imaging technique. An oscillatory modulation of the angle of a narrowband dielectric filter located in front of a camera imaging the scene gives rise to an intensity modulation that differs in regions occupied by the absorbing gas. A preliminary low-cost system, constructed from readily available components, demonstrates how the location of gas within a scene can be determined. Modelling of the system has been carried out, especially highlighting the transmission effects of the dielectric filter upon different regions of the image.
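Why angle modulation produces an intensity modulation can be sketched with the standard tilt-tuning relation for an interference filter: tilting blue-shifts the passband, sweeping it across the gas absorption line. The centre wavelength and effective index below are assumed values, not the paper's:

```python
import math

def centre_wavelength(lam0_nm, theta_deg, n_eff=2.0):
    """Passband centre of a tilted interference filter:
    lambda(theta) = lambda0 * sqrt(1 - (sin(theta)/n_eff)^2)."""
    s = math.sin(math.radians(theta_deg)) / n_eff
    return lam0_nm * math.sqrt(1.0 - s * s)

lam0 = 1650.0  # nm, assumed filter centre near a gas absorption band
for theta in (0, 5, 10, 15):
    print(theta, round(centre_wavelength(lam0, theta), 2))
```

Oscillating the tilt angle therefore oscillates the sampled wavelength; pixels viewing regions where the gas absorbs show a different modulation depth than gas-free regions, which is the contrast the method detects.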
Abstract:
The problem of strongly correlated electrons in one dimension has attracted the attention of condensed matter physicists since the early 1950s. After the seminal paper of Tomonaga [1], who suggested the first soluble model in 1950, there were essential achievements reflected in the papers by Luttinger [2] (1963) and Mattis and Lieb [3] (1963). A considerable contribution to the understanding of the generic properties of the 1D electron liquid was made by Dzyaloshinskii and Larkin [4] (1973) and Efetov and Larkin [5] (1976). Despite the fact that the main features of the 1D electron liquid were captured and described by the end of the 1970s, investigators remained dissatisfied with the rigour of the theoretical description. The most famous example is the paper by Haldane [6] (1981), where the author developed the fundamentals of the modern bosonisation technique known as the operator approach. This paper became famous because the author rigorously showed how to construct the Fermi creation/annihilation operators out of the Bose ones. The most recent example of such dissatisfaction is the review by von Delft and Schoeller [7] (1998), who revised the approach to bosonisation and came up with what they called constructive bosonisation.
Abstract:
Analysing the molecular polymorphism and interactions of DNA, RNA and proteins is of fundamental importance in biology. Predicting the functions of polymorphic molecules is important in order to design more effective medicines. Analysing major histocompatibility complex (MHC) polymorphism is important for mate choice, epitope-based vaccine design, transplantation rejection, etc. Most of the existing exploratory approaches cannot analyse these datasets because of the large number of molecules and the high number of descriptors per molecule. This thesis develops novel methods for data projection in order to explore high-dimensional biological datasets by visualising them in a low-dimensional space. With increasing dimensionality, some existing data visualisation methods, such as generative topographic mapping (GTM), become computationally intractable. We propose variants of these methods in which log-transformations are used at certain steps of the expectation-maximisation (EM) based parameter learning process to make them tractable for high-dimensional datasets. We demonstrate these proposed variants on both synthetic data and an electrostatic potential dataset of MHC class I. We also propose to extend a latent trait model (LTM), suitable for visualising high-dimensional discrete data, to simultaneously estimate feature saliency as an integrated part of the parameter learning process of the visualisation model. This LTM variant not only gives a better visualisation by modifying the projection map based on feature relevance, but also helps users assess the significance of each feature. Another problem which has received little attention in the literature is the visualisation of mixed-type data. We propose to combine GTM and LTM in a principled way, using appropriate noise models for each type of data, in order to visualise mixed-type data in a single plot. We call this model a generalised GTM (GGTM).
We also propose to extend the GGTM to estimate feature saliencies while training a visualisation model; this is called GGTM with feature saliency (GGTM-FS). We demonstrate the effectiveness of these proposed models for both synthetic and real datasets. We evaluate visualisation quality using quality metrics such as a distance-distortion measure and rank-based measures: trustworthiness, continuity, and mean relative rank errors with respect to data space and latent space. In cases where the labels are known, we also use KL divergence and nearest-neighbour classification error to determine the separation between classes. We demonstrate the efficacy of the proposed models for both synthetic and real biological datasets, with a main focus on the MHC class I dataset.
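The log-transformation idea mentioned for the EM steps can be sketched with the log-sum-exp trick: responsibilities are normalised entirely in the log domain, so products of many tiny per-dimension likelihoods never underflow to zero. The numbers below are illustrative:

```python
import numpy as np

def responsibilities(log_lik):
    """Posterior responsibilities from per-component log-likelihoods.
    Rows: data points; columns: mixture components."""
    m = log_lik.max(axis=1, keepdims=True)             # stabilising shift
    log_norm = m + np.log(np.exp(log_lik - m).sum(axis=1, keepdims=True))
    return np.exp(log_lik - log_norm)

# High-dimensional data gives per-point log-likelihoods around -1000;
# naive exponentiation would underflow, but the log-domain version is stable.
log_lik = np.array([[-1000.0, -1001.0],
                    [-990.0, -995.0]])
R = responsibilities(log_lik)
print(R.sum(axis=1))  # each row sums to 1
```

Only the differences between log-likelihoods matter after the shift, which is why the computation survives magnitudes far below the smallest representable float.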
Abstract:
We have devised a general scheme that reveals multiple duality relations valid for all multi-channel Luttinger liquids. The relations are universal and should be used for establishing phase diagrams and searching for new non-trivial phases in low-dimensional strongly correlated systems. The technique developed provides a universal correspondence between the scaling dimensions of local perturbations in different phases. These multiple relations between scaling dimensions lead to a connection between different inter-phase boundaries on the phase diagram. The dualities, in particular, constrain the phase diagram and allow the emergence and observation of new phases to be predicted without explicit model-dependent calculations. As an example, we demonstrate the impossibility of a non-trivial phase for fermions coupled to phonons in one dimension. © 2013 EPLA.
Abstract:
This paper considers the problem of low-dimensional visualisation of very high-dimensional information sources for the purpose of situation awareness in the maritime environment. In response to the requirement for human decision support aids that reduce information overload in the below-water maritime domain (specifically, for data amenable to inter-point relative similarity measures), we are investigating a preliminary prototype topographic visualisation model. The focus of the current paper is the mathematical problem of exploiting a relative dissimilarity representation of signals in a visual informatics mapping model, driven by real-world sonar systems. A realistic noise model is explored and incorporated into non-linear and topographic visualisation algorithms, building on the approach of [9]. Concepts are illustrated using a real-world dataset of 32 hydrophones monitoring a shallow-water environment in which targets are present and dynamic.