933 results for Computer Generated Proofs
Abstract:
The analysis of lipid compositions from biological samples has become increasingly important. Lipids play a role in cardiovascular disease, metabolic syndrome, and diabetes. They also participate in cellular processes such as signalling, inflammatory response, aging, and apoptosis. Moreover, the mechanisms regulating cell membrane lipid compositions are poorly understood, partially because of a lack of good analytical methods. Mass spectrometry has opened up new possibilities for lipid analysis due to its high resolving power, its sensitivity, and the possibility of structural identification by fragment analysis. The introduction of electrospray ionization (ESI) and advances in instrumentation revolutionized the analysis of lipid compositions. ESI is a soft ionization method, i.e. it avoids unwanted fragmentation of the lipids. Mass spectrometric analysis of lipid compositions is complicated by incomplete separation of the signals, differences in the instrument response of different lipids, and the large amount of data generated by the measurements. These factors necessitate the use of computer software for the analysis of the data. The topic of the thesis is the development of methods for mass spectrometric analysis of lipids. The work includes both computational and experimental aspects of lipid analysis. The first article explores the practical aspects of quantitative mass spectrometric analysis of complex lipid samples and describes how the properties of phospholipids and their concentrations affect the response of the mass spectrometer. The second article describes a new algorithm for computing the theoretical mass spectrometric peak distribution, given the elemental isotope composition and the molecular formula of a compound. The third article introduces programs aimed specifically at the analysis of complex lipid samples and discusses different computational methods for separating the overlapping mass spectrometric peaks of closely related lipids.
The fourth article applies the methods developed by simultaneously measuring the progress curves of enzymatic hydrolysis for a large number of phospholipids, which are used to determine the substrate specificity of various A-type phospholipases. The data provide evidence that substrate efflux from the bilayer is the key factor determining the rate of hydrolysis.
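The polynomial-convolution idea underlying theoretical isotope-distribution algorithms of the kind mentioned in the second article can be sketched as follows. This is a generic illustration using standard natural abundances, not the thesis's own algorithm; the names `ISOTOPES`, `convolve`, and `isotope_pattern` are hypothetical.

```python
# Sketch: theoretical isotope pattern by convolving per-element distributions.
# Abundances are standard natural values, keyed by nominal mass shift
# relative to the lightest isotope (0 = monoisotopic).
ISOTOPES = {
    "C": {0: 0.9893, 1: 0.0107},                 # 12C, 13C
    "H": {0: 0.999885, 1: 0.000115},             # 1H, 2H
    "O": {0: 0.99757, 1: 0.00038, 2: 0.00205},   # 16O, 17O, 18O
}

def convolve(a, b):
    """Multiply two abundance 'polynomials' (dict: mass shift -> probability)."""
    out = {}
    for ma, pa in a.items():
        for mb, pb in b.items():
            out[ma + mb] = out.get(ma + mb, 0.0) + pa * pb
    return out

def isotope_pattern(formula):
    """formula: dict element -> count, e.g. {"C": 2, "H": 6, "O": 1}."""
    dist = {0: 1.0}
    for elem, count in formula.items():
        for _ in range(count):
            dist = convolve(dist, ISOTOPES[elem])
    return dist

# Ethanol, C2H6O: the monoisotopic peak dominates and the M+1 peak
# comes mostly from 13C.
pattern = isotope_pattern({"C": 2, "H": 6, "O": 1})
```

Repeated convolution like this is quadratic in the number of atoms; published algorithms (including the one the abstract refers to) use pruning or transform tricks to scale to large molecules.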
Abstract:
Reorganizing a dataset so that its hidden structure can be observed is useful in any data analysis task. For example, detecting a regularity in a dataset helps us to interpret the data, compress the data, and explain the processes behind the data. We study datasets that come in the form of binary matrices (tables with 0s and 1s). Our goal is to develop automatic methods that bring out certain patterns by permuting the rows and columns. We concentrate on the following patterns in binary matrices: consecutive-ones (C1P), simultaneous consecutive-ones (SC1P), nestedness, k-nestedness, and bandedness. These patterns reflect specific types of interplay and variation between the rows and columns, such as continuity and hierarchies. Furthermore, their combinatorial properties are interlinked, which helps us to develop the theory of binary matrices and efficient algorithms. Indeed, we can detect all these patterns in a binary matrix efficiently, that is, in polynomial time in the size of the matrix. Since real-world datasets often contain noise and errors, we rarely witness perfect patterns. Therefore, we also need to assess how far an input matrix is from a pattern: we count the number of flips (from 0s to 1s or vice versa) needed to bring out the perfect pattern in the matrix. Unfortunately, for most patterns it is an NP-complete problem to find the minimum distance to a matrix that has the perfect pattern, which means that the existence of a polynomial-time algorithm is unlikely. To find patterns in datasets with noise, we need methods that are noise-tolerant and work in practical time on large datasets. The theory of binary matrices gives rise to robust heuristics that perform well on synthetic data and discover easily interpretable structures in real-world datasets: dialectal variation in spoken Finnish, a division of European locations by the hierarchies found in mammal occurrences, and co-occurring groups in network data.
In addition to determining the distance from a dataset to a pattern, we need to determine whether the pattern is significant or a mere product of random chance. To this end, we use significance testing: we deem a dataset significant if it appears exceptional when compared to datasets generated from a certain null hypothesis. After detecting a significant pattern in a dataset, it is up to domain experts to interpret the results in terms of the application.
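For a fixed column order, checking the consecutive-ones condition row by row is straightforward, as the following minimal sketch shows; the hard (but still polynomial, PQ-tree-based) problem the abstract refers to is finding a column permutation that achieves it, and the NP-complete problem is minimizing flips. The name `has_consecutive_ones` is a hypothetical helper, not from the thesis.

```python
def has_consecutive_ones(matrix):
    """Check whether every row of a 0/1 matrix has its 1s in one
    contiguous block under the current column order."""
    for row in matrix:
        ones = [j for j, v in enumerate(row) if v == 1]
        # A row satisfies the condition iff the span between its first
        # and last 1 contains no 0s, i.e. span length == number of 1s.
        if ones and ones[-1] - ones[0] + 1 != len(ones):
            return False
    return True

# Contiguous blocks in every row -> C1P holds for this column order.
assert has_consecutive_ones([[0, 1, 1, 0],
                             [1, 1, 0, 0]])
# A gap inside a row's 1s violates it.
assert not has_consecutive_ones([[1, 0, 1]])
```

This check runs in time linear in the matrix size, which is consistent with the abstract's claim that perfect patterns are detectable in polynomial time.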
Abstract:
The mechanism of action of ribonuclease (RNase) T1 is still a matter of considerable debate, as the results of X-ray, 2-D NMR, and site-directed mutagenesis studies disagree regarding the role of the catalytically important residues. Hence, computer modelling studies were carried out by energy minimisation of the complexes of RNase T1 and some of its mutants (His40Ala, His40Lys, and Glu58Ala) with the substrate guanyl cytosine (GpC), and of native RNase T1 with the reaction intermediate guanosine 2',3'-cyclic phosphate (G>p). The puckering of the guanosine ribose moiety in the minimum-energy conformer of the RNase T1-GpC (substrate) complex was found to be O4'-endo and not C3'-endo as in the RNase T1-3'-guanylic acid (inhibitor/product) complex. A possible scheme for the mechanism of action of RNase T1 is proposed on the basis of the arrangement of the catalytically important amino acid residues His40, Glu58, Arg77, and His92 around the guanosine ribose and phosphate moieties in the RNase T1-GpC and RNase T1-G>p complexes. In this scheme, Glu58 serves as the general base and His92 as the general acid in the transphosphorylation step. His40 may be essential for stabilising the negatively charged phosphate moiety in the enzyme-transition-state complex.
Abstract:
We present a low-complexity algorithm for intrusion detection in the presence of clutter arising from wind-blown vegetation, using Passive Infra-Red (PIR) sensors in a Wireless Sensor Network (WSN). The algorithm is based on a combination of the Haar Transform (HT) and Support Vector Machine (SVM) based training, and was field-tested in a network setting comprising 15-20 sensing nodes. Also contained in this paper is a closed-form expression for the signal generated by an intruder moving at a constant velocity. It is shown how this expression can be exploited to determine the direction of motion and the velocity of the intruder from the signals of three well-positioned sensors.
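One level of the Haar transform, the pairwise averages and differences that a front end like this could feed to a classifier, can be sketched as follows. This is a generic illustration, not the paper's exact feature extraction; `haar_step` is a hypothetical name.

```python
def haar_step(signal):
    """One level of the (unnormalised) Haar transform: pairwise
    averages (approximation) and differences (detail).
    Assumes len(signal) is even."""
    approx = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return approx, detail

# A slowly varying (clutter-like) segment yields near-zero detail
# coefficients, while an abrupt intruder-like edge shows up strongly:
approx, detail = haar_step([1.0, 1.0, 2.0, 2.0])
```

Applying `haar_step` recursively to the approximation gives the full multi-level decomposition; the energies of the detail bands are a common low-complexity feature vector for SVM training, which is plausibly why HT suits resource-constrained WSN nodes.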
Abstract:
Computer-modelling studies on the modes of binding of the three guanosine monophosphate inhibitors 2'-GMP, 3'-GMP, and 5'-GMP to ribonuclease (RNase) T1 have been carried out by energy minimization in Cartesian-coordinate space. The inhibitory power was found to decrease in the order 2'-GMP > 3'-GMP > 5'-GMP, in agreement with experimental observations. The ribose moiety was found to form hydrogen bonds with the protein in all the enzyme-inhibitor complexes, indicating that it contributes to the binding energy and does not merely act as a spacer between the base and phosphate moieties as suggested earlier. 2'-GMP and 5'-GMP bind to RNase T1 in either of the two ribose-puckered forms (with C3'-endo favoured over C2'-endo), whereas 3'-GMP binds to RNase T1 predominantly in the C3'-endo form. The catalytically important residue His-92 was found to form a hydrogen bond with the phosphate moiety in all the enzyme-inhibitor complexes, indicating that this residue may serve as a general acid during catalysis. Such an interaction was not found in either X-ray or two-dimensional NMR studies.
Abstract:
A new algorithm based on the signal-subspace approach is proposed for localizing a sound source in shallow water. In the first instance, we assumed an ideal channel with plane-parallel boundaries and known reflection properties. The sound source is assumed to emit a broadband stationary stochastic signal. The algorithm takes into account the spatial distribution of all images and the reflection characteristics of the sea bottom. It is shown that both the range and depth of a source can be measured accurately with the help of a vertical array of sensors. For good results the number of sensors should be greater than the number of significant images; however, localization is possible even with a smaller array, at the cost of higher side lobes. Next, we allowed the channel to be stochastically perturbed, which resulted in random phase errors in the reflection coefficients. The most significant effect of the phase errors is to introduce into the spectral matrix an extra term which may be regarded as signal-generated coloured noise. It is shown through computer simulations that the signal peak height is reduced considerably as a consequence of random phase errors.
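The signal-subspace idea can be illustrated with a generic narrowband MUSIC-style estimator on a uniform linear array. This is a toy sketch only: the paper's shallow-water method exploits the image structure and sea-bottom reflections with a vertical array, rather than the simple plane-wave model assumed here, and all names below are hypothetical.

```python
import numpy as np

def music_spectrum(R, steering, n_sources):
    """Build a MUSIC pseudo-spectrum from a sensor covariance matrix R.
    Eigenvectors of the smallest eigenvalues span the noise subspace,
    which is orthogonal to the true steering vectors."""
    eigvals, eigvecs = np.linalg.eigh(R)          # eigenvalues ascending
    En = eigvecs[:, :R.shape[0] - n_sources]      # noise subspace
    def p(theta):
        a = steering(theta)
        return 1.0 / np.linalg.norm(En.conj().T @ a) ** 2
    return p

# Uniform linear array, half-wavelength spacing, M sensors.
M = 8
def steering(theta):
    return np.exp(1j * np.pi * np.arange(M) * np.sin(theta))

# Covariance for one source at bearing 0.3 rad plus a small noise floor.
true_theta = 0.3
a = steering(true_theta)
R = np.outer(a, a.conj()) + 0.01 * np.eye(M)

p = music_spectrum(R, steering, n_sources=1)
thetas = np.linspace(-np.pi / 2, np.pi / 2, 361)
est = thetas[np.argmax([p(t) for t in thetas])]   # peak near true_theta
```

The abstract's observation about phase errors maps onto this picture: perturbing R with a signal-dependent coloured-noise term blurs the orthogonality between the noise subspace and the true steering vector, lowering the spectrum's peak.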
Abstract:
In order to generate normal Penrose tilings by inflation/deflation, decisions have to be made regarding the matching of the rhombuses/tiles with their neighbours. We show here that this decision-making problem can be avoided by adopting a deflation/inflation procedure which uses decorated rhombuses with identical boundaries. The procedure enables both kinds of inflated rhombuses to match in any orientation along their edges. The tilings so generated are quasiperiodic. These structures appear to have a close relationship with the growth mechanism of quasicrystals.
Abstract:
Thermonuclear fusion is a sustainable energy solution in which energy is produced by processes similar to those in the sun. In this technology, hydrogen isotopes are fused to gain energy and consequently to produce electricity. In a fusion reactor, hydrogen isotopes are confined by magnetic fields as an ionized gas, the plasma. Since the core plasma is millions of degrees hot, there are special requirements for the plasma-facing materials. Moreover, the fusion of hydrogen isotopes in the plasma produces highly energetic neutrons, which places demanding requirements on the structural materials of the reactor. This thesis investigates the irradiation response of materials to be used in future fusion reactors. Interaction of the plasma with the reactor wall leads to the removal of surface atoms, their migration, and the formation of co-deposited layers such as tungsten carbide. Sputtering of tungsten carbide and deuterium trapping in tungsten carbide were investigated in this thesis. As the second topic, the primary interaction of the neutrons with the structural material steel was examined, with iron-chromium and iron-nickel used as model materials for steel. The study was performed theoretically, by means of computer simulations at the atomic level. In contrast to previous studies in the field, in which simulations were limited to pure elements, this work used more complex, multi-elemental materials containing two or more atom species. The results of this thesis are on the microscale. One result is a catalogue of the atom species removed from tungsten carbide by the plasma; another is, for example, the atomic distributions of defects caused in iron-chromium by the energetic neutrons. These microscopic results feed into databases for multiscale modelling of fusion reactor materials, which aims to explain the macroscopic degradation of the materials.
This thesis is therefore a relevant contribution to investigating the connection between microscopic and macroscopic radiation effects, which is one objective of fusion reactor materials research.
Abstract:
This monograph describes the emergence of independent research on logic in Finland. The emphasis is placed on three well-known students of Eino Kaila: Georg Henrik von Wright (1916-2003), Erik Stenius (1911-1990), and Oiva Ketonen (1913-2000), and their research between the early 1930s and the early 1950s. The early academic work of these scholars laid the foundations for today's strong tradition in logic in Finland and became internationally recognized. These works have, however, not received due attention since, nor have they been comprehensively presented together. Each chapter of the book focuses on the life and work of one of Kaila's aforementioned students, with a fourth chapter discussing works on logic by authors who would later become known within other disciplines. Through extensive use of correspondence and other archived material, some insight has been gained into the persons behind the academic personae. Unique and unpublished biographical material has been available for this task. The chapter on Oiva Ketonen focuses primarily on his work on what is today known as proof theory, especially on his proof-theoretical system with invertible rules that permits a terminating root-first proof search. The independence of the parallel postulate is proved as an example of the strength of root-first proof search. Ketonen was, to our knowledge, Gerhard Gentzen's (the 'father' of proof theory) only student. Correspondence and a hitherto unavailable autobiographical manuscript, in addition to an unpublished article on the relationship between logic and epistemology, are presented. The chapter on Erik Stenius discusses his work on paradoxes and set theory, more specifically on how a rigid theory of definitions is employed to avoid these paradoxes. A presentation by Paul Bernays on Stenius' attempted proof of the consistency of arithmetic is reconstructed from Bernays' lecture notes.
Stenius' correspondence with Paul Bernays, Evert Beth, and Georg Kreisel is discussed. The chapter on Georg Henrik von Wright presents his early work on probability and epistemology, along with his later work on modal logic that made him internationally famous. Correspondence from various archives (especially with Kaila and Charlie Dunbar Broad) further illuminates his academic achievements and his experiences during the challenging circumstances of the 1940s.
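The flavour of a terminating root-first proof search with invertible rules can be conveyed by a toy prover for classical propositional sequents. This is a generic G3-style sketch, not Ketonen's actual calculus; the encoding and the name `provable` are hypothetical. Because every rule is invertible, no backtracking over rule choices is needed, and because each step removes one connective, the search terminates.

```python
# Formulas: atoms are strings; compound formulas are tuples
# ('and', A, B), ('or', A, B), ('imp', A, B), ('not', A).

def provable(gamma, delta):
    """Root-first search: is the sequent gamma |- delta provable?"""
    # Left rules: decompose the first compound formula in gamma.
    for i, f in enumerate(gamma):
        if isinstance(f, tuple):
            rest, op = gamma[:i] + gamma[i + 1:], f[0]
            if op == 'and':
                return provable(rest + [f[1], f[2]], delta)
            if op == 'or':   # branch: both premisses must be provable
                return provable(rest + [f[1]], delta) and \
                       provable(rest + [f[2]], delta)
            if op == 'imp':
                return provable(rest, delta + [f[1]]) and \
                       provable(rest + [f[2]], delta)
            if op == 'not':
                return provable(rest, delta + [f[1]])
    # Right rules: decompose the first compound formula in delta.
    for i, f in enumerate(delta):
        if isinstance(f, tuple):
            rest, op = delta[:i] + delta[i + 1:], f[0]
            if op == 'and':
                return provable(gamma, rest + [f[1]]) and \
                       provable(gamma, rest + [f[2]])
            if op == 'or':
                return provable(gamma, rest + [f[1], f[2]])
            if op == 'imp':
                return provable(gamma + [f[1]], rest + [f[2]])
            if op == 'not':
                return provable(gamma + [f[1]], rest)
    # Only atoms remain: axiom iff some atom is on both sides.
    return any(a in delta for a in gamma)

# Peirce's law ((p -> q) -> p) -> p, a classically valid formula:
peirce = ('imp', ('imp', ('imp', 'p', 'q'), 'p'), 'p')
```

Failed branches of the same search yield countermodels, which is one way such calculi double as decision procedures for propositional logic.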
Abstract:
An algorithm to generate a minimal spanning tree is presented for the case where the nodes, with their coordinates in some m-dimensional Euclidean space, and the corresponding metric are given. The algorithm has been tested on manually generated data sets. Its worst-case time complexity is O(n log2 n) for a collection of n data samples.
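A standard heap-based Prim's construction over the complete Euclidean distance graph conveys the flavour of such an algorithm. This generic sketch is not the paper's own method (which achieves the stated O(n log2 n) bound; the naive complete-graph approach below does not); `euclidean_mst` is a hypothetical name.

```python
import heapq
import math

def euclidean_mst(points):
    """Prim's algorithm on the complete graph over m-dimensional points.
    Returns (total weight, list of (i, j) tree edges)."""
    n = len(points)
    dist = lambda a, b: math.dist(points[a], points[b])
    in_tree = [False] * n
    in_tree[0] = True
    # Candidate edges from the growing tree, keyed by length.
    heap = [(dist(0, j), 0, j) for j in range(1, n)]
    heapq.heapify(heap)
    total, edges = 0.0, []
    while len(edges) < n - 1:
        d, i, j = heapq.heappop(heap)
        if in_tree[j]:
            continue          # stale edge: j was added via a shorter path
        in_tree[j] = True
        total += d
        edges.append((i, j))
        for k in range(n):
            if not in_tree[k]:
                heapq.heappush(heap, (dist(j, k), j, k))
    return total, edges

# Three collinear points: the MST uses the two short edges (1 + 2 = 3).
total, edges = euclidean_mst([(0, 0), (0, 1), (0, 3)])
```

For low-dimensional Euclidean inputs, subquadratic bounds like the paper's are typically obtained by restricting candidate edges geometrically (e.g. to a sparse neighbour graph) instead of considering all n(n-1)/2 pairs.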