28 results for Distance convex simple graphs
Abstract:
This thesis studies gray-level distance transforms, particularly the Distance Transform on Curved Space (DTOCS). The transform is produced by calculating distances on a gray-level surface. The DTOCS is improved by defining more accurate local distances and by developing a faster transformation algorithm. The Optimal DTOCS enhances the locally Euclidean Weighted DTOCS (WDTOCS) with local distance coefficients that minimize the maximum error from the Euclidean distance in the image plane and produce more accurate global distance values. Convergence properties of the traditional mask operation, or sequential local transformation, and of the ordered propagation approach are analyzed and compared to the new, efficient priority pixel queue algorithm. The Route DTOCS algorithm developed in this work can be used to find and visualize shortest routes between two points, or two point sets, along a varying-height surface. In a digital image there can be several paths sharing the same minimal length, and the Route DTOCS visualizes them all. A single optimal path can be extracted from the route set using a simple backtracking algorithm. A new extension of the priority pixel queue algorithm produces the nearest neighbor transform, or Voronoi or Dirichlet tessellation, simultaneously with the distance map. The transformation divides the image into regions so that each pixel belongs to the region surrounding the reference point that is nearest according to the distance definition used. Applications and application ideas for the DTOCS and its extensions are presented, including obstacle avoidance, image compression and surface roughness evaluation.
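The priority pixel queue algorithm mentioned above is, in essence, a Dijkstra-style propagation over the pixel grid. The following is a minimal sketch, assuming the integer DTOCS local distance between 8-neighbours is the gray-level difference plus one; the function name and seed interface are illustrative, not the thesis's actual implementation.

```python
import heapq

def dtocs_priority_queue(gray, seeds):
    """Dijkstra-style priority pixel queue for a DTOCS-like distance map.

    Assumed local distance between 8-neighbours p, q: |gray[p]-gray[q]| + 1,
    i.e. a chessboard distance weighted by gray-level differences.
    `seeds` are (row, col) reference pixels with distance 0.
    """
    h, w = len(gray), len(gray[0])
    INF = float("inf")
    dist = [[INF] * w for _ in range(h)]
    heap = []
    for (y, x) in seeds:
        dist[y][x] = 0
        heapq.heappush(heap, (0, y, x))
    while heap:
        d, y, x = heapq.heappop(heap)
        if d > dist[y][x]:
            continue  # stale queue entry, already improved
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                if dy == 0 and dx == 0:
                    continue
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w:
                    nd = d + abs(gray[ny][nx] - gray[y][x]) + 1
                    if nd < dist[ny][nx]:
                        dist[ny][nx] = nd
                        heapq.heappush(heap, (nd, ny, nx))
    return dist
```

On a flat surface this degenerates to the plain chessboard distance, which is a quick sanity check for the weighting.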
Abstract:
This thesis deals with distance transforms, which are a fundamental issue in image processing and computer vision. In this thesis, two new distance transforms for gray-level images are presented. As a new application for distance transforms, they are applied to gray-level image compression. The new distance transforms are both extensions of the well-known distance transform algorithm developed by Rosenfeld, Pfaltz and Lay. With some modification, their algorithm, which calculates a distance transform on binary images with a chosen kernel, has been made to calculate a chessboard-like distance transform with integer values (DTOCS) and a real-valued distance transform (EDTOCS) on gray-level images. Both distance transforms, the DTOCS and the EDTOCS, require only two passes over the gray-level image and are extremely simple to implement. Only two image buffers are needed: the original gray-level image and the binary image which defines the region(s) of calculation. No other image buffers are needed, even if more than one iteration round is performed. For large neighborhoods and complicated images the two-pass distance algorithm has to be applied to the image more than once, typically 3–10 times. Different types of kernels can be adopted. It is important to notice that no other existing transform calculates the same kind of distance map as the DTOCS. All other gray-weighted distance algorithms, such as GRAYMAT, find the minimum path joining two points by the smallest sum of gray levels, or weight the distance values directly by the gray levels in some manner. The DTOCS does not weight them that way. The DTOCS gives a weighted version of the chessboard distance map, where the weights are not constant but the gray-value differences of the original image. The difference between the DTOCS map and other distance transforms for gray-level images is shown. The difference between the DTOCS and the EDTOCS is that the EDTOCS calculates these gray-level differences in a different way.
It propagates local Euclidean distances inside a kernel. Analytical derivations of some results concerning the DTOCS and the EDTOCS are presented. Distance transforms are commonly used for feature extraction in pattern recognition and learning; their use in image compression is very rare. This thesis introduces a new application area for distance transforms. Three new image compression algorithms based on the DTOCS and one based on the EDTOCS are presented. Control points, i.e., points that are considered fundamental for the reconstruction of the image, are selected from the gray-level image using the DTOCS and the EDTOCS. The first group of methods selects the maxima of the distance image as new control points, and the second group of methods compares the DTOCS distance to the binary-image chessboard distance. The effect of applying threshold masks of different sizes along the threshold boundaries is studied. The time complexity of the compression algorithms is analyzed both analytically and experimentally. It is shown that the time complexity of the algorithms is independent of the number of control points, i.e., of the compression ratio. A new morphological image decompression scheme, the 8-kernels method, is also presented. Several decompressed images are shown. The best results are obtained using the Delaunay triangulation. The obtained image quality equals that of the DCT images with a 4 x 4
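The two-pass calculation described above can be sketched as a forward and a backward raster scan with half-masks, in the spirit of the Rosenfeld-Pfaltz sequential transform. This is an illustrative sketch only, again assuming the integer DTOCS local distance |gray difference| + 1; the exact kernels in the thesis may differ, and for complicated images the two passes are repeated until the map stops changing, as noted above.

```python
def dtocs_two_pass(gray, inside):
    """One iteration of a two-pass (forward/backward) DTOCS-like transform.

    `inside` marks pixels whose distance is to be computed; the remaining
    pixels form the reference region(s) with distance 0. Assumed local
    distance to an 8-neighbour: |gray difference| + 1.
    """
    h, w = len(gray), len(gray[0])
    INF = float("inf")
    dist = [[INF if inside[y][x] else 0 for x in range(w)] for y in range(h)]

    def relax(y, x, offsets):
        best = dist[y][x]
        for dy, dx in offsets:
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                cand = dist[ny][nx] + abs(gray[ny][nx] - gray[y][x]) + 1
                best = min(best, cand)
        dist[y][x] = best

    fwd = [(-1, -1), (-1, 0), (-1, 1), (0, -1)]  # upper half of the 3x3 mask
    bwd = [(1, -1), (1, 0), (1, 1), (0, 1)]      # lower half of the 3x3 mask
    for y in range(h):                           # forward raster scan
        for x in range(w):
            relax(y, x, fwd)
    for y in range(h - 1, -1, -1):               # backward raster scan
        for x in range(w - 1, -1, -1):
            relax(y, x, bwd)
    return dist
```

Only the distance buffer and the original image are touched, which mirrors the two-buffer property claimed in the abstract.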
Abstract:
The advancement of science and technology makes it clear that no single perspective is any longer sufficient to describe the true nature of any phenomenon. That is why interdisciplinary research is gaining more attention over time. An excellent example of this type of research is natural computing, which stands on the borderline between biology and computer science. The contribution of research done in natural computing is twofold: on one hand, it sheds light on how nature works and how it processes information and, on the other hand, it provides some guidelines on how to design bio-inspired technologies. The first direction in this thesis focuses on a nature-inspired process called gene assembly in ciliates. The second one studies reaction systems, a modeling framework whose rationale is built upon the biochemical interactions happening within a cell. The process of gene assembly in ciliates has attracted a lot of attention as a research topic in the past 15 years. Two main modeling frameworks were initially proposed at the end of the 1990s to capture the ciliates' gene assembly process, namely the intermolecular model and the intramolecular model. They were followed by other model proposals such as template-based assembly and DNA rearrangement pathway recombination models. In this thesis we are interested in a variation of the intramolecular model called the simple gene assembly model, which focuses on the simplest possible folds in the assembly process. We propose a new framework called directed overlap-inclusion (DOI) graphs to overcome the limitations that previously introduced models faced in capturing all the combinatorial details of the simple gene assembly process. We investigate a number of combinatorial properties of these graphs, including a necessary property in terms of forbidden induced subgraphs.
We also introduce DOI graph-based rewriting rules that capture all the operations of the simple gene assembly model and prove that they are equivalent to the string-based formalization of the model. Reaction systems (RS) is another nature-inspired modeling framework studied in this thesis. The rationale of reaction systems is based upon two main regulation mechanisms, facilitation and inhibition, which control the interactions between biochemical reactions. Reaction systems is a modeling framework complementary to traditional quantitative frameworks, focusing on explicit cause-effect relationships between reactions. The explicit formulation of the facilitation and inhibition mechanisms behind reactions, as well as the focus on interactions between reactions (rather than on the dynamics of concentrations), makes their applicability potentially wide and useful beyond biological case studies. In this thesis, we construct a reaction system model corresponding to the heat shock response mechanism based on a novel concept of dominance graph that captures the competition for resources in the ODE model. We also introduce for RS various concepts inspired by biology, e.g., mass conservation, steady state and periodicity, to enable model checking of reaction-system-based models. We prove that the complexity of the decision problems related to these properties varies from P to NP- and coNP-complete to PSPACE-complete. We further focus on the mass conservation relation in an RS, introduce the conservation dependency graph to capture the relation between the species, and propose an algorithm to list the conserved sets of a given reaction system.
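The basic reaction systems semantics referred to above, with facilitation and inhibition and no permanency of entities, can be stated compactly. Below is a minimal sketch following the standard Ehrenfeucht-Rozenberg definition; the function name and the triple encoding of a reaction are illustrative conventions, not the thesis's notation.

```python
def rs_step(state, reactions):
    """One step of a reaction system.

    Each reaction is a triple (reactants, inhibitors, products) of sets.
    A reaction is enabled on `state` iff all its reactants are present and
    none of its inhibitors is. The next state is the union of the products
    of all enabled reactions; entities not produced simply vanish
    (no permanency), which is characteristic of the RS framework.
    """
    nxt = set()
    for reactants, inhibitors, products in reactions:
        if reactants <= state and not (inhibitors & state):
            nxt |= products
    return nxt
```

Iterating `rs_step` from an initial state yields the qualitative dynamics on which properties such as steady states and periodicity, mentioned above, are checked.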
Abstract:
Summary: A simple culture method for time-lapse video recording of bovine embryos
Abstract:
Summary
Abstract:
Abstract
Abstract:
RFID is a technology for identifying various kinds of objects that became generally available in the 2000s. In RFID, identification is based on small tags whose data content can be read wirelessly, without line of sight, using a suitable reader device. The tags are cheap and simple. They usually contain no power source of their own and operate solely on the power of the field generated by the reader. This work studies the suitability of RFID technology for identifying precast concrete building elements. The effects of the environment on the use of the technology are investigated, and the best practices for identifying the elements, taking these effects into account, are determined. The work first presents the operating principles of RFID technology and the structure of tags and readers. The classification of tags according to their various properties is reviewed, and the standards most relevant to the application area are presented. The practical part describes the application of RFID technology to the identification of precast concrete elements. The measurement results achieved are presented, together with the architecture of the system implemented for identifying the concrete elements and managing their data.
Abstract:
An accidental burst of a pressure vessel is an uncontrollable and explosion-like batch process. In this study it is called an explosion. The destructive effect of a pressure vessel explosion is relative to the amount of energy released in it. However, in the field of pressure vessel safety, a mutual understanding concerning the definition of explosion energy has not yet been achieved. In this study the definition of isentropic exergy is presented. Isentropic exergy is the greatest possible destructive energy which can be obtained from a pressure vessel explosion when its state changes in an isentropic way from the initial to the final state. At the end of the change process, the gas has the same pressure and flow velocity as the environment. Isentropic exergy differs from common exergy in that the process is assumed to be isentropic and the final gas temperature usually differs from the ambient temperature. The explosion process is so fast that there is no time for the significant heat exchange needed for common exergy. Therefore an explosion is better characterized by isentropic exergy. Isentropic exergy is a characteristic of a pressure vessel, and it is simple to calculate. Isentropic exergy can also be defined for any thermodynamic system, such as the shock wave system developing around an exploding pressure vessel. At the beginning of the explosion process the shock wave system has the same isentropic exergy as the pressure vessel. When the system expands into the environment, its isentropic exergy decreases because of the increase of entropy in the shock wave. The shock wave system contains the pressure vessel gas and a growing amount of ambient gas. The destructive effect of the shock wave on the surrounding structures decreases as its distance from the starting point increases. This arises firstly from the fact that the shock wave system is distributed over a larger space. Secondly, the increase of entropy in the shock waves reduces the amount of isentropic exergy.
Equations concerning the change of isentropic exergy in shock waves are derived. By means of isentropic exergy and known flow theories, equations describing the pressure of the shock wave as a function of distance are derived. A method is proposed as an application of the equations. The method is applicable to all shapes of pressure vessels in general use, such as spheres, cylinders and tubes. The results of this method are compared to measurements made by various researchers and to accident reports on pressure vessel explosions. The test measurements are found to agree with the proposed method, and the findings in the accident reports do not contradict it.
Abstract:
Fatigue life assessment of welded structures is commonly based on the nominal stress method, but more flexible and accurate methods have been introduced. In general, the assessment accuracy improves as more localized information about the weld is incorporated. The structural hot spot stress method includes the influence of macro-geometric effects and structural discontinuities on the design stress but excludes the local features of the weld. In this thesis, the limitations of the structural hot spot stress method are discussed, and a modified structural stress method with improved accuracy is developed and verified for selected welded details. The fatigue life of structures in the as-welded state consists mainly of crack growth from pre-existing cracks or defects. Crack growth rate depends on crack geometry and the stress state in the crack face plane. This means that the stress level and the shape of the stress distribution along the assumed crack path govern the total fatigue life. In many structural details the stress distribution is similar, and adequate fatigue life estimates can be obtained just by adjusting the stress level based on a single stress value, i.e., the structural hot spot stress. There are, however, cases for which the structural stress approach is less appropriate because the stress distribution differs significantly from the more common cases. Plate edge attachments and plates on elastic foundations are examples of structures with this type of stress distribution. The importance of fillet weld size and weld load variation for the stress distribution is another central topic of this thesis. Structural hot spot stress determination is generally based on a procedure that involves extrapolation of plate surface stresses.
Other possibilities for determining the structural hot spot stress are to extrapolate stresses through the thickness at the weld toe, or to use Dong's method, which includes through-thickness extrapolation at some distance from the weld toe. Both of these latter methods are less sensitive to the FE mesh used. Structural stress based on surface extrapolation is sensitive to the extrapolation points selected and to the FE mesh used near these points. Rules for proper meshing, however, are well defined and not difficult to apply. To improve the accuracy of the traditional structural hot spot stress, a multi-linear stress distribution is introduced. The magnitude of the weld toe stress after linearization depends on the weld size, the weld load and the plate thickness. Simple equations have been derived by comparing assessment results based on the local linear stress distribution with LEFM-based calculations. The proposed method is called the modified structural stress method (MSHS), since the structural hot spot stress (SHS) value is corrected using information on weld size and weld load. The correction procedure is verified using fatigue test results found in the literature. A test case comparing the proposed method with other local fatigue assessment methods was also conducted.
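For the surface-extrapolation procedure discussed above, a common concrete rule is the IIW-type linear extrapolation from stresses read at 0.4t and 1.0t from the weld toe (t being the plate thickness). The helper below states that generic rule only; it is background for the surface-extrapolation approach, not the thesis's MSHS correction.

```python
def hot_spot_stress(sigma_04t, sigma_10t):
    """Linear surface-stress extrapolation to the weld toe.

    Uses stresses at the reference points 0.4t and 1.0t from the toe,
    following the common IIW-type rule
        sigma_hs = 1.67 * sigma(0.4t) - 0.67 * sigma(1.0t).
    Generic illustration of the structural hot spot stress concept.
    """
    return 1.67 * sigma_04t - 0.67 * sigma_10t
```

With a uniform surface stress the extrapolation returns that same value; a stress gradient rising toward the toe yields a hot spot stress above the value at 0.4t, which is the intended amplification.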
Abstract:
This Master's thesis presents the protocol development of a wireless measurement and monitoring system. The work examines the issues that must be considered in protocol development and presents the implementation of a pilot system based on wireless condition monitoring. The pilot system is the Jussi moisture-guard system of Ensto Busch-Jaeger Oy, which is converted to wireless operation. The system's data transfer is one-way and takes place over a radio link at a frequency of 433.92 MHz. The goal of the work was to develop a simple but reliable signaling system. The protocol implemented for it encodes the transmitted data in a manner similar to NRZ-L coding. Error handling is performed using a parity bit and the Hamming distance. In addition, redundancy has been added to the communication protocol to secure the one-way data transfer. Tests performed on the developed protocol show it to be reliable in the chosen communication environment.
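The parity-bit and Hamming-distance checks mentioned above can be illustrated with a minimal sketch. The frame layout (data bits followed by one even-parity bit) and the accept/reject rule are assumptions for illustration only, not the actual Jussi protocol; on a one-way link a frame that fails the checks can only be dropped, since no retransmission can be requested.

```python
def parity_bit(bits):
    """Even parity bit for a sequence of 0/1 bits."""
    return sum(bits) % 2

def hamming_distance(a, b):
    """Number of bit positions in which two equal-length words differ."""
    return sum(x != y for x, y in zip(a, b))

def decode(word, codebook):
    """Accept `word` (data bits + trailing even-parity bit) only if the
    parity checks out and the word exactly matches a valid codeword;
    otherwise drop it (illustrative rule for a one-way link)."""
    if parity_bit(word[:-1]) != word[-1]:
        return None  # parity error: silently drop the frame
    dists = {c: hamming_distance(word, c) for c in codebook}
    best = min(dists, key=dists.get)
    return best if dists[best] == 0 else None
```

Separating valid codewords by a Hamming distance greater than one is what lets the single parity bit detect any one-bit error.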
Abstract:
Fluorescence resonance energy transfer (FRET) is a non-radiative energy transfer from a fluorescent donor molecule to an appropriate acceptor molecule, and a commonly used technique for developing homogeneous assays. If the emission spectrum of the donor overlaps with the excitation spectrum of the acceptor, FRET may occur. As a consequence, the emission of the donor is decreased and the emission of the acceptor (if fluorescent) is increased. Furthermore, the distance between the donor and the acceptor needs to be short enough, commonly 10-100 Å. Typically, the close proximity between the donor and the acceptor is achieved via bioaffinity interactions, e.g., an antibody binding its antigen. A large variety of donors and acceptors exists. The selection of the donor/acceptor pair should be based not only on the requirements of FRET but also on the expected performance and the objectives of the application. In this study, the exceptional fluorescence properties of lanthanide chelates were employed to develop two novel homogeneous immunoassays: a non-competitive hapten (estradiol) assay based on a single binder, and a dual-parametric total and free PSA assay. In addition, the quenching efficiencies and energy transfer properties of various donor/acceptor pairs were studied. The applied donors were either europium(III) or terbium(III) chelates, whereas several organic dyes (both fluorescent and quenchers) acted as acceptors. First, it was shown that if the interaction between the donor/acceptor complexes is of high quality (e.g., biotin-streptavidin), the fluorescence of the europium(III) chelate could be quenched rather efficiently. Furthermore, the quenching-based homogeneous non-competitive assay for estradiol had significantly better sensitivity (~67 times) than a corresponding homogeneous competitive assay using the same assay components.
Second, if the acceptors were chosen to emit at the emission minima of the terbium(III) chelate, several acceptor emissions could be measured simultaneously without significant cross-talk from the other acceptors. Based on these results, appropriate acceptors were chosen for the dual-parameter assay. The developed homogeneous dual-parameter assay was able to measure both total and free PSA simultaneously using a simple mix-and-measure protocol. The correlation of this assay with a heterogeneous single-parameter assay was excellent (above 0.99 for both) when spiked human plasma samples were used. However, due to the interference of the sample material, the obtained concentrations were slightly lower with the homogeneous than with the heterogeneous assay, especially for free PSA. To conclude, two novel immunoassay principles were developed in this work, both of which are adaptable to other analytes. However, the hapten assay requires a rather good antibody with a low dissociation rate and high affinity, whereas the dual-parameter assay principle is applicable whenever two immunometric complexes can form simultaneously, provided that the requirements of FRET are fulfilled.
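The distance dependence underlying both assay principles is the standard Förster relation E = 1 / (1 + (r/R0)^6), where R0 is the pair-specific Förster radius (the distance at which the transfer efficiency is 50%). The small helper below is an illustration of this textbook relation, not a calculation from the thesis; it makes the steep sixth-power falloff behind the 10-100 Å working range concrete.

```python
def fret_efficiency(r_angstrom, r0_angstrom):
    """Förster energy-transfer efficiency E = 1 / (1 + (r/R0)^6).

    r:  donor-acceptor distance (Å).
    R0: Förster radius of the donor/acceptor pair (Å), i.e. the
        distance at which E = 0.5. Both values are illustrative inputs.
    """
    return 1.0 / (1.0 + (r_angstrom / r0_angstrom) ** 6)
```

At twice the Förster radius the efficiency has already dropped below 2%, which is why FRET works only when a biomolecular binding event brings the labels into close proximity.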
Abstract:
This PhD thesis in Mathematics belongs to the field of Geometric Function Theory. The thesis consists of four original papers. The topic studied deals with quasiconformal mappings and their distortion theory in Euclidean n-dimensional spaces. This theory has its roots in the pioneering papers of F. W. Gehring and J. Väisälä published in the early 1960s, and it has been studied by many mathematicians thereafter. In the first paper we refine the known bounds for the so-called Mori constant and also estimate the distortion in the hyperbolic metric. The second paper deals with radial functions, which are simple examples of quasiconformal mappings. These radial functions lead us to the study of the so-called p-angular distance, which has been studied recently, e.g., by L. Maligranda and S. Dragomir. In the third paper we study a class of functions of a real variable studied by P. Lindqvist in an influential paper. This leads one to study parametrized analogues of the classical trigonometric and hyperbolic functions which for the parameter value p = 2 coincide with the classical functions. Gaussian hypergeometric functions have an important role in the study of these special functions. Several new inequalities and identities involving p-analogues of these functions are also given. In the fourth paper we study the generalized complete elliptic integrals, modular functions and some related functions. We find upper and lower bounds for these functions, and those bounds are given in a simple form. This theory has a long history which goes back two centuries and includes names such as A. M. Legendre, C. Jacobi and C. F. Gauss. Modular functions also occur in the study of quasiconformal mappings. Conformal invariants, such as the modulus of a curve family, are often applied in quasiconformal mapping theory. The invariants can sometimes be expressed in terms of special conformal mappings. This fact explains why special functions often occur in this theory.
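The p-analogues mentioned above are commonly introduced via a generalized inverse sine; the following is the standard Lindqvist-style definition, stated here as background (the abstract itself does not spell it out):

```latex
\[
\arcsin_p(x) = \int_0^x \frac{dt}{(1 - t^p)^{1/p}}, \qquad 0 \le x \le 1,
\]
\[
\pi_p = 2 \arcsin_p(1), \qquad \sin_p := (\arcsin_p)^{-1} \ \text{on} \ [0, \pi_p/2],
\]
so that for $p = 2$ these reduce to the classical $\arcsin$, $\pi$ and $\sin$.
```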
Abstract:
The aim of the present study was to demonstrate the wide applicability of the novel photoluminescent labels called upconverting phosphors (UCPs) in proximity-based bioanalytical assays. The exceptional features of the lanthanide-doped inorganic UCP compounds stem from their capability for photon upconversion, resulting in anti-Stokes photoluminescence at visible wavelengths under near-infrared (NIR) excitation. Major limitations related to conventional photoluminescent labels are avoided, rendering the UCPs a competitive next-generation label technology. First, the background luminescence is minimized due to the total elimination of autofluorescence. Consequently, improvements in detectability are expected. Second, at the long wavelengths (>600 nm) used for exciting and detecting the UCPs, the transmittance of sample matrices is significantly greater than at shorter wavelengths. Colored samples are no longer an obstacle to the luminescence measurement, and more flexibility is allowed even in homogeneous assay concepts, where the sample matrix remains present during the entire analysis procedure, including label detection. To transform a UCP particle into a biocompatible label suitable for bioanalytical assays, it must be colloidal in an aqueous environment and covered with biomolecules capable of recognizing the analyte molecule. At the beginning of this study, only UCP bulk material was available, and it was necessary to process the material into submicrometer-sized particles prior to use. Later, the ground UCPs, with irregular shape, a wide size distribution and heterogeneous luminescence properties, were substituted by a smaller-sized spherical UCP material. The surface functionalization of the UCPs was realized by producing a thin hydrophilic coating.
Polymer adsorption on the UCP surface is a simple way to introduce functional groups for bioconjugation purposes, but possible stability issues encouraged us to optimize an optional silica-encapsulation method which produces a coating that is not detached in storage or assay conditions. An extremely thin monolayer around the UCPs was pursued due to their intended use as short-distance energy donors, and much attention was paid to controlling the thickness of the coating. The performance of the UCP technology was evaluated in three different homogeneous resonance energy transfer-based bioanalytical assays: a competitive ligand binding assay, a hybridization assay for nucleic acid detection and an enzyme activity assay. To complete the list, a competitive immunoassay has been published previously. Our systematic investigation showed that a nonradiative energy transfer mechanism is indeed involved, when a UCP and an acceptor fluorophore are brought into close proximity in aqueous suspension. This process is the basis for the above-mentioned homogeneous assays, in which the distance between the fluorescent species depends on a specific biomolecular binding event. According to the studies, the submicrometer-sized UCP labels allow versatile proximity-based bioanalysis with low detection limits (a low-nanomolar concentration for biotin, 0.01 U for benzonase enzyme, 0.35 nM for target DNA sequence).
Abstract:
The world's population is increasing, and cities have become more crowded with people and vehicles. Communities on the fringes of metropolitan areas increase the traffic made with private cars, but also increase the need for public transportation. People typically need to travel to workplaces located in city centers in the morning and return to the suburbs in the afternoon or evening. Rail-based passenger transport is an environmentally friendly transport mode with a high capacity for moving large volumes of people. Railways have been regulated markets, with a national incumbent holding a monopoly position. Opening the market to competition is believed to have a positive effect by increasing the efficiency of the industry. The national passenger railway market has been opened to competition in only a few countries, whereas international traffic in EU countries was deregulated in 2010. The objective of this study is to examine the passenger railway markets of three North European countries: Sweden, Denmark and Estonia. A further aim was to gain an understanding of the current situation and of how deregulation has proceeded. The theory of deregulation is unfolded through a literature analysis, and the empirical part of the study consists of two parts. A customer satisfaction survey was chosen as the method for collecting real-life experiences from passengers and measuring their knowledge of the market situation and of any changes that have appeared. Interviews with experts from the industry and labor unions provide more insight and enable a better understanding, for example, of the social consequences of opening the market to competition. The expert interviews were conducted as semi-structured theme interviews. Based on the results of this study, deregulation has proceeded quite differently in the three countries researched. Sweden is the most advanced country, with a passenger railway market open to new entrants. Denmark and Estonia are lagging behind.
Opening the market is considered positive among passengers and most of the experts interviewed. Common to the interviews was the labor unions' negative perspective on deregulation. Although deregulation is considered positive among the respondents of the customer satisfaction survey, they could not name the railway undertakings operating in their country. In general, respondents were satisfied with the commuter trains. Ticket price, punctuality of trains and itinerary affect customer satisfaction the most.