830 results for multiresolution filtering
Abstract:
The year-on-year growth in computer processing power makes it possible to use spectral images, instead of grayscale and RGB color images, in solving an ever wider range of problems. Unfortunately, research on noise filtering has lagged behind this development. Most methods have been tested only on grayscale or RGB color images, and their performance on spectral images has not been verified. This master's thesis studies different methods for removing bit errors from spectral images. As new methods, the thesis employs a cube median filter and a multi-stage cube median filter. The other methods studied were vector median filtering, multi-stage vector median filtering, and trimmed mean filtering. The cube filters were designed to exploit the correlation between the bands of spectral images, and overall they achieved the best results. The performance of all filters was evaluated on two different 224-band spectral images by adding random bit errors to the images.
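The vector median principle behind the filters above can be sketched in a few lines of Python (an illustrative sketch, not the thesis implementation): within each window, the output is the pixel vector minimizing the total distance to all the other vectors, so an isolated bit-error vector is never selected.

```python
def vector_median(vectors):
    # Vector median filter: return the window vector that minimizes
    # the sum of Euclidean distances to all other vectors. A pixel
    # corrupted by a bit error lies far from its neighbours and
    # therefore accumulates a large total distance.
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(vectors, key=lambda v: sum(dist(v, w) for w in vectors))
```

Because the output is always one of the input vectors, no artificial spectra are introduced; the cube filters additionally exploit the correlation across spectral bands.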
Abstract:
Gene filtering is a useful preprocessing technique often applied to microarray datasets. However, it is not common practice because clear guidelines are lacking and it bears the risk of excluding some potentially relevant genes. In this work, we propose to model microarray data as a mixture of two Gaussian distributions, which allows us to obtain an optimal filter threshold in terms of the gene expression level.
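For such a two-component Gaussian mixture, the optimal threshold lies where the weighted component densities intersect; taking logarithms turns this into a quadratic equation. A minimal sketch under that assumption (function and parameter names are illustrative, not from the paper):

```python
import math

def mixture_threshold(mu1, s1, w1, mu2, s2, w2):
    # Solve w1*N(x; mu1, s1) = w2*N(x; mu2, s2) for x.
    # Taking logs of both Gaussian densities gives a*x^2 + b*x + c = 0.
    a = 1 / (2 * s2 ** 2) - 1 / (2 * s1 ** 2)
    b = mu1 / s1 ** 2 - mu2 / s2 ** 2
    c = (mu2 ** 2 / (2 * s2 ** 2) - mu1 ** 2 / (2 * s1 ** 2)
         + math.log((w1 * s2) / (w2 * s1)))
    if abs(a) < 1e-12:                  # equal variances: linear case
        return -c / b
    d = math.sqrt(b ** 2 - 4 * a * c)
    roots = [(-b + d) / (2 * a), (-b - d) / (2 * a)]
    # Return the root between the two component means.
    return min(roots, key=lambda r: abs(r - (mu1 + mu2) / 2))
```

With equal weights and variances the threshold reduces to the midpoint of the two means, as expected.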
Abstract:
The term proteome denotes the complete set of proteins expressed in the cells or tissues of an organism at a certain time point. Correspondingly, proteomics describes the methods used to study such proteomes. These methods include chromatographic and electrophoretic techniques for protein or peptide fractionation, mass spectrometry for their identification, and computational methods to assist the complicated data analysis. A primary aim of this Ph.D. thesis was to set up, optimize, and develop proteomics methods for analysing proteins extracted from T-helper (Th) lymphocytes. First, high-throughput LC-MS/MS and ICAT labeling methods were set up and optimized for analysing the microsomal fraction proteins extracted from Th lymphocytes. Later, the iTRAQ method was optimized to study cytokine-regulated protein expression in the nuclei of Th lymphocytes. High-throughput LC-MS/MS analyses, like ICAT and iTRAQ, produce large quantities of data, and robust software and data analysis pipelines are needed. Therefore, different software programs used for analysing such data were evaluated. Moreover, a pre-filtering algorithm was developed to classify good-quality and bad-quality spectra prior to the database searches. Th lymphocytes can differentiate into Th1 or Th2 cells based on surrounding antigens, co-stimulatory molecules, and cytokines. Both subsets have individual cytokine secretion profiles and specific functions. Th1 cells participate in the cellular immunity against intracellular pathogens, while Th2 cells have an important role in the humoral immunity against extracellular parasites. An abnormal response of Th1 and Th2 cells and an imbalance between the subsets are characteristic of several diseases. Th1-specific reactions and cytokines have been detected in autoimmune diseases, while a Th2-specific response and cytokine profile is common in allergy and asthma. In this Ph.D. thesis, mass spectrometry-based proteomics was used to study the effects of the Th1- and Th2-promoting cytokines IL-12 and IL-4 on the proteome of Th lymphocytes. Characterization of the microsomal fraction proteome extracted from IL-12-treated lymphoblasts and IL-4-stimulated cord blood CD4+ cells resulted in the identification of cytokine-regulated proteins. Galectin-1 and CD7 were down-regulated in IL-12-treated cells, while IL-4 stimulation decreased the expression of STAT1, MXA, GIMAP1, and GIMAP4. Interestingly, the transcription of both GIMAP genes was up-regulated in Th1-polarized cells and down-regulated in Th2-promoting conditions.
Abstract:
This thesis presents the design and implementation of a GPS signal source suitable for receiver measurements. The developed signal source is based on direct digital synthesis, which generates the intermediate frequency. The intermediate frequency is transferred to the final frequency with the aid of an in-phase/quadrature (I/Q) modulator. The modulating GPS data was generated with MATLAB. The signal source was duplicated to form a multi-channel source. It was shown that GPS signals meant for civil navigation are easy to generate in the laboratory. The hardware does not need to be technically advanced if navigation with a high level of accuracy is not needed. It was also shown that the I/Q modulator can function as a single-sideband upconverter even with a high intermediate frequency. This concept reduces the demands on output filtering.
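The single-sideband behaviour of the I/Q upconverter follows from the identity cos(a)cos(b) − sin(a)sin(b) = cos(a + b): mixing a quadrature IF pair with a quadrature carrier leaves only the sum frequency, so the image sideband needs no filtering. A small Python sketch (parameter values illustrative) demonstrates this numerically:

```python
import math

def iq_upconvert(n, fs, f_if, f_c):
    # Ideal I/Q upconversion: I*cos(wc*t) - Q*sin(wc*t), where Q is
    # the 90-degree shifted copy of the IF signal. The result is a
    # pure tone at f_c + f_if; the f_c - f_if image is cancelled.
    out = []
    for k in range(n):
        t = k / fs
        i = math.cos(2 * math.pi * f_if * t)   # in-phase IF
        q = math.sin(2 * math.pi * f_if * t)   # quadrature IF
        out.append(i * math.cos(2 * math.pi * f_c * t)
                   - q * math.sin(2 * math.pi * f_c * t))
    return out
```

In hardware, gain and phase imbalance between the I and Q paths limits how well the image is suppressed, which is why the claim concerns relaxed rather than eliminated filtering demands.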
Abstract:
Multicast is one method of transferring information in IPv4-based communication; the other methods are unicast and broadcast. Multicast is based on the group concept, where data is sent from one point to a group of receivers, which saves bandwidth remarkably. Group members express an interest in receiving data by using the Internet Group Management Protocol (IGMP), and traffic is received only by those receivers who want it. The most common multicast applications are media streaming, surveillance, and data collection applications. There are many data security methods to protect unicast communication, which is the most common transfer method on the Internet. Popular data security methods are encryption, authentication, access control, and firewalls. Characteristics of multicast, such as dynamic membership, mean that not all of these data security mechanisms can be used to protect multicast traffic. Nowadays the protection of multicast traffic is possible via traffic restrictions, where traffic is allowed to propagate only to certain areas; one way to implement this is packet filters. The methods tested in this thesis are MVR, IGMP Filtering, and access control lists, all of which worked as expected. These methods restrict the propagation of multicast but are laborious to configure on a large scale. There are also a few manufacturer-specific products that make it possible to encrypt multicast traffic; these separate products are expensive and mainly intended to protect video transmissions via satellite. Research on multicast security has been going on for several years, and the resulting security methods are nearing completion. An IETF working group called MSEC is standardizing these security methods; the target of this working group is to standardize data security protocols for multicast during 2004.
Abstract:
In this thesis, a motion platform was designed for a training simulator of a mobile working machine. The design work began by measuring the dynamic properties of a loader. Based on the measurement data and an analysis of the machine's operation, the basic structure of the motion platform was selected. The actuators were dimensioned with a simulation model that used the measured accelerations, signal filtering, inverse kinematics, and inverse dynamics. The simulation model was also used in dimensioning the mechanical structure. In addition, the implementation of visualization and control was studied. The goal of the work was to develop a cost-effective motion platform providing as realistic a sense of motion as possible. A low and easily transportable structure was also pursued, and the movements of the platform were to be implemented with electric drives. The design resulted in a three-degree-of-freedom motion platform implemented with servo motors. A physical prototype of the designed motion platform is intended to be built and connected to the real-time simulator of the loader.
Abstract:
This master's thesis examines the operation and control of frequency converters. In addition, the motor overvoltage caused by the fast transient states of the inverter was studied. Reflections in the motor cable were treated by comparing the motor cable to a transmission line, and the causes of the overvoltage were verified. Several filtering methods have been developed to reduce the overvoltage; the thesis compares these methods and surveys commercial alternatives. To date, frequency converter control has usually been implemented with a microprocessor and a logic circuit. In the future, control will probably be implemented with reprogrammable FPGAs (Field Programmable Gate Arrays). The advantages of an FPGA include reprogrammability and the concentration of the control onto a single chip.
Abstract:
An increasing number of studies in recent years have sought to identify individual inventors from patent data. A variety of heuristics have been proposed for using the names and other information disclosed in patent documents to establish who is who in patents. This paper contributes to this literature by describing a methodology for identifying inventors using patents filed with the European Patent Office (EPO hereafter). As in much of this literature, we basically follow a three-step procedure: (1) the parsing stage, aimed at reducing the noise in the inventor's name and other fields of the patent; (2) the matching stage, where name-matching algorithms are used to group similar names; and (3) the filtering stage, where additional information and various scoring schemes are used to filter out these similarly-named inventors. The paper presents the results obtained by applying the algorithms to the set of European inventors filing with the EPO over a long period of time.
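The parsing and matching stages can be illustrated with a toy Python sketch (not the paper's actual algorithms): names are first normalized, then compared with a token-based Jaccard similarity, with pairs above a cut-off grouped as candidate same-inventor records for the filtering stage.

```python
import re

def normalize(name):
    # Parsing stage: lowercase, strip punctuation, split into tokens.
    return re.sub(r"[^a-z ]", "", name.lower()).split()

def similarity(a, b):
    # Matching stage: Jaccard similarity over the token sets, so
    # token order ("Smith, John" vs "John Smith") does not matter.
    sa, sb = set(normalize(a)), set(normalize(b))
    return len(sa & sb) / len(sa | sb)
```

Real disambiguation pipelines add further evidence (addresses, co-inventors, applicants, technology classes) in the filtering stage to separate homonyms that simple name similarity cannot.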
Abstract:
Online paper web analysis relies on traversing scanners that criss-cross on top of a rapidly moving paper web. The sensors embedded in the scanners measure many important quality variables of paper, such as basis weight, caliper, and porosity. Most of these quantities vary considerably, and the measurements are noisy at many different scales. The zigzagging nature of scanning makes it difficult to separate machine direction (MD) and cross direction (CD) variability from one another. To improve the 2D resolution of the quality variables above, the paper quality control team at the Department of Mathematics and Physics at LUT has implemented efficient Kalman filtering based methods that currently use 2D Fourier series. Fourier series are global and therefore resolve local spatial detail on the paper web rather poorly. The target of the current thesis is to study alternative wavelet-based representations as candidates to replace the Fourier basis for a higher-resolution spatial reconstruction of these quality variables. The accuracy of wavelet-compressed 2D web fields will be compared with corresponding truncated Fourier series based fields.
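The locality difference can be seen with the simplest wavelet, the one-level Haar transform (a didactic Python sketch, unrelated to the LUT implementation): each detail coefficient depends on only two neighbouring samples, so thresholding small details affects the field only locally, whereas truncating a Fourier series spreads the error across the whole web.

```python
def haar_step(signal):
    # One level of the orthonormal Haar transform: averages capture
    # the coarse field, details capture local variation.
    s2 = 2 ** 0.5
    avg = [(a + b) / s2 for a, b in zip(signal[0::2], signal[1::2])]
    det = [(a - b) / s2 for a, b in zip(signal[0::2], signal[1::2])]
    return avg, det

def compress(signal, threshold):
    # Keep only detail coefficients above the threshold.
    avg, det = haar_step(signal)
    return avg, [d if abs(d) > threshold else 0.0 for d in det]

def reconstruct(avg, det):
    # Invert the Haar step pair by pair.
    s2 = 2 ** 0.5
    out = []
    for a, d in zip(avg, det):
        out += [(a + d) / s2, (a - d) / s2]
    return out
```

A locally flat signal has zero details and survives compression losslessly; an isolated spike perturbs only its own pair of coefficients.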
Abstract:
Peer-reviewed
Abstract:
This paper presents a novel technique to align partial 3D reconstructions of the seabed acquired by a stereo camera mounted on an autonomous underwater vehicle. Vehicle localization and seabed mapping are performed simultaneously by means of an Extended Kalman Filter. Passive landmarks are detected on the images and characterized using 2D and 3D features. Landmarks are re-observed while the robot is navigating, making data association easier and more robust. Once the survey is completed, the vehicle trajectory is smoothed by a Rauch-Tung-Striebel filter, obtaining an even better alignment of the 3D views and a large-scale acquisition of the seabed.
Abstract:
A visual SLAM system has been implemented and optimised for real-time deployment on an AUV equipped with calibrated stereo cameras. The system incorporates a novel approach to landmark description in which landmarks are local submaps that consist of a cloud of 3D points and their associated SIFT/SURF descriptors. Landmarks are also sparsely distributed, which simplifies and accelerates data association and map updates. In addition to landmark-based localisation, the system utilises visual odometry to estimate the pose of the vehicle in 6 degrees of freedom by identifying temporal matches between consecutive local submaps and computing the motion. Both the extended Kalman filter and the unscented Kalman filter have been considered for filtering the observations. The output of the filter is also smoothed using the Rauch-Tung-Striebel (RTS) method to obtain a better alignment of the sequence of local submaps and to deliver a large-scale 3D acquisition of the surveyed area. Synthetic experiments have been performed using a simulation environment in which ray tracing is used to generate synthetic images for the stereo system.
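The filter-then-smooth pipeline used here reduces, in the scalar linear case, to a few lines (a didactic random-walk sketch, not the AUV system): a forward Kalman pass followed by the Rauch-Tung-Striebel backward recursion, which revises every past estimate using information gathered later in the survey.

```python
def kalman_rts(zs, q, r, x0, p0):
    # Forward Kalman pass for a scalar random-walk state observed in
    # noise (process variance q, measurement variance r), followed by
    # the Rauch-Tung-Striebel backward smoothing pass.
    xs, ps, xpred, ppred = [], [], [], []
    x, p = x0, p0
    for z in zs:
        xp, pp = x, p + q                 # predict (identity dynamics)
        k = pp / (pp + r)                 # Kalman gain
        x, p = xp + k * (z - xp), (1 - k) * pp
        xpred.append(xp); ppred.append(pp)
        xs.append(x); ps.append(p)
    sm = xs[:]                            # smoothed estimates
    for i in range(len(zs) - 2, -1, -1):
        g = ps[i] / ppred[i + 1]          # smoother gain
        sm[i] = xs[i] + g * (sm[i + 1] - xpred[i + 1])
    return xs, sm
```

The backward pass is what pulls early, poorly constrained pose estimates toward the trajectory implied by later observations, improving the alignment of the submap sequence.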
Abstract:
Simultaneous localization and mapping (SLAM) is a very important problem in mobile robotics. Many solutions have been proposed by different scientists during the last two decades; nevertheless, few studies have considered the use of multiple sensors simultaneously. The solution is based on combining several data sources with the aid of an Extended Kalman Filter (EKF). Two approaches are proposed. The first is to run the ordinary EKF SLAM algorithm for each data source separately in parallel and then, at the end of each step, fuse the results into one solution. The second is to use multiple data sources simultaneously in a single filter. A comparison of the computational complexity of the two methods is also presented: the first method is almost four times faster than the second.
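The relationship between the two approaches can be illustrated in the scalar case (a simplified sketch, not the paper's implementation): processing both sources' measurements through a single filter sequentially yields the same posterior as fusing the sources at once by inverse-variance weighting, which is the information-form view of the end-of-step fusion.

```python
def update(x, p, z, r):
    # Scalar Kalman measurement update with measurement variance r.
    k = p / (p + r)
    return x + k * (z - x), (1 - k) * p

def fuse_sequential(x, p, readings):
    # Single-filter approach: feed each source's measurement into
    # the same filter, one after another.
    for z, r in readings:
        x, p = update(x, p, z, r)
    return x, p

def fuse_batch(x, p, readings):
    # Fusion approach: combine the prior and all measurements at
    # once by inverse-variance (information) weighting.
    info, mean = 1 / p, x / p
    for z, r in readings:
        info += 1 / r
        mean += z / r
    return mean / info, 1 / info
```

The equivalence holds only for the linear-Gaussian update; with EKF linearization and full covariance matrices the two schemes diverge in cost, which is where the reported factor-of-four difference comes from.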
Abstract:
In many industrial applications, accurate and fast surface reconstruction is essential for quality control. Variation in surface finishing parameters, such as surface roughness, can reflect defects in a manufacturing process, non-optimal product operational efficiency, and reduced life expectancy of the product. This thesis considers the reconstruction and analysis of high-frequency variation, that is, roughness, on planar surfaces. Standard roughness measures in industry are calculated from surface topography. A fast and non-contact method to obtain surface topography is to apply photometric stereo in the estimation of surface gradients and to reconstruct the surface by integrating the gradient fields. Alternatively, visual methods, such as statistical measures, fractal dimension, and distance transforms, can be used to characterize surface roughness directly from gray-scale images. In this thesis, the accuracy of distance transforms, statistical measures, and fractal dimension is evaluated in the estimation of surface roughness from gray-scale images and topographies. The results are contrasted with standard industry roughness measures. In distance transforms, the key idea is that distance values calculated along a highly varying surface are greater than distances calculated along a smoother surface. Statistical measures and fractal dimension are common surface roughness measures. In the experiments, the skewness and variance of the brightness distribution, fractal dimension, and distance transforms exhibited strong linear correlations with standard industry roughness measures. One of the key strengths of the photometric stereo method is the acquisition of higher-frequency variation of surfaces. In this thesis, the reconstruction of planar high-frequency varying surfaces is studied in the presence of imaging noise and blur.
Two Wiener filter-based methods are proposed, of which one is optimal in the sense of surface power spectral density, given the spectral properties of the imaging noise and blur. Experiments show that the proposed methods preserve the inherent high-frequency variation in the reconstructed surfaces, whereas traditional reconstruction methods typically handle incorrect measurements by smoothing, which dampens the high-frequency variation.
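The core of a Wiener filter is a per-frequency gain S/(S + N), built from the surface's power spectral density S and the noise PSD N: bands where the surface signal dominates pass nearly unchanged, while noise-dominated bands are attenuated instead of being smoothed away wholesale. A minimal sketch (illustrative only; the thesis methods also account for blur):

```python
def wiener_gain(signal_power, noise_power):
    # Per-frequency Wiener gain: close to 1 where signal dominates,
    # close to 0 where noise dominates.
    return signal_power / (signal_power + noise_power)

def wiener_filter(coeffs, signal_psd, noise_psd):
    # Apply the Wiener gain to each spectral coefficient of the
    # observed (noisy) surface.
    return [c * wiener_gain(s, n)
            for c, s, n in zip(coeffs, signal_psd, noise_psd)]
```

Because the gain adapts per frequency band, genuine high-frequency roughness with sufficient power survives, which is the behaviour the experiments contrast with uniform smoothing.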
Abstract:
Peer-reviewed