828 results for GRASP filtering
Abstract:
Multicast is one method of transferring information in IPv4-based communication; the other methods are unicast and broadcast. Multicast is based on the group concept: data is sent from one point to a group of receivers, which saves bandwidth considerably. Group members express their interest in receiving data by using the Internet Group Management Protocol (IGMP), so traffic is delivered only to the receivers that want it. The most common multicast applications are media streaming, surveillance, and data collection applications. Many data security methods exist to protect unicast communication, the most common transfer method on the Internet; popular methods include encryption, authentication, access control, and firewalls. Characteristics of multicast such as dynamic membership mean that not all of these security mechanisms can be used to protect multicast traffic. At present, multicast traffic can be protected through traffic restrictions, where traffic is allowed to propagate only to certain areas; one way to implement this is packet filters. The methods tested in this thesis are MVR, IGMP filtering, and access control lists, all of which worked as intended. These methods restrict the propagation of multicast but are laborious to configure on a large scale. There are also a few manufacturer-specific products that make it possible to encrypt multicast traffic, but these separate products are expensive and mainly intended to protect video transmissions via satellite. Multicast security has been investigated for several years, and the resulting security methods are nearing completion. An IETF working group called MSEC is standardizing these methods, with the goal of standardizing data security protocols for multicast during 2004.
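The filtering idea itself is simple to illustrate. Below is a minimal, hypothetical Python sketch (not the switch configurations tested in the thesis) that mimics what an IGMP filter or access control list does: it permits traffic only for destination groups on a configured allow-list. The group addresses are illustrative assumptions.

```python
import ipaddress

# Hypothetical allow-list of multicast groups a port may receive,
# mimicking an IGMP filter profile or an ACL on a switch port.
ALLOWED_GROUPS = {
    ipaddress.ip_address("239.1.1.1"),
    ipaddress.ip_address("239.1.1.2"),
}

def permit(dst: str) -> bool:
    """Permit a packet only if its destination is an allowed multicast group."""
    addr = ipaddress.ip_address(dst)
    if not addr.is_multicast:          # unicast/broadcast: out of scope here
        return True
    return addr in ALLOWED_GROUPS

print(permit("239.1.1.1"))  # True  - group on the allow-list
print(permit("239.9.9.9"))  # False - multicast group not allowed
print(permit("10.0.0.5"))   # True  - ordinary unicast passes through
```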
Abstract:
In this thesis, a motion platform was designed for a training simulator of a mobile work machine. The design began by measuring the dynamic properties of a loading machine. Based on an analysis of the measurement data and the machine's operation, the basic structure of the motion platform was selected. The actuators were dimensioned with a simulation model that used the measured accelerations, signal filtering, inverse kinematics, and inverse dynamics. The simulation model was also used in dimensioning the mechanical structure. In addition, the implementation of visualization and control was studied. The goal of the work was to develop a cost-effective motion platform that produces as realistic a sense of motion as possible; a low and easily transportable structure was also pursued, and the platform's motions were to be implemented with electric drives. The design resulted in a three-degree-of-freedom motion platform actuated by servo motors. The intention is to build a physical prototype of the motion platform designed in this work and to connect it to the real-time simulator of the loading machine.
Abstract:
This master's thesis examined the operation and control of frequency converters. In addition, it studied the motor overvoltage caused by fast transient states of the inverter. Reflections in the motor cable were analyzed by comparing the motor cable to a transmission line, and the causes of the overvoltage were verified. Several filtering methods have been developed to reduce the overvoltage; the thesis compared these methods and surveyed commercial alternatives. To date, frequency converter control has usually been implemented using a microprocessor together with a logic circuit. In the future, control will probably be implemented with reprogrammable FPGA (Field Programmable Gate Array) circuits, whose advantages include reprogrammability and the concentration of the control on a single chip.
Abstract:
An increasing number of studies in recent years have sought to identify individual inventors from patent data, and a variety of heuristics have been proposed for using the names and other information disclosed in patent documents to establish who is who in patents. This paper contributes to this literature by describing a methodology for identifying inventors in patents filed with the European Patent Office (EPO hereafter). As in much of this literature, we follow a three-step procedure: (1) the parsing stage, aimed at reducing the noise in the inventor's name and other fields of the patent; (2) the matching stage, where name-matching algorithms are used to group similar names; and (3) the filtering stage, where additional information and various scoring schemes are used to filter out false matches among similarly named inventors. The paper presents the results obtained by applying the algorithms to the set of European inventors filing with the EPO over a long period of time.
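As an illustration of the three stages, here is a hedged Python sketch. The normalization rules, similarity threshold, and the use of the applicant field as filtering evidence are illustrative assumptions, not the paper's actual algorithms.

```python
import unicodedata
from difflib import SequenceMatcher

def parse(name: str) -> str:
    """Parsing stage: strip accents, case, and punctuation noise."""
    name = unicodedata.normalize("NFKD", name)
    name = "".join(c for c in name if not unicodedata.combining(c))
    return " ".join("".join(c if c.isalnum() else " " for c in name.lower()).split())

def similar(a: str, b: str, threshold: float = 0.9) -> bool:
    """Matching stage: group names whose string similarity is high."""
    return SequenceMatcher(None, parse(a), parse(b)).ratio() >= threshold

def same_inventor(rec_a: dict, rec_b: dict) -> bool:
    """Filtering stage: require extra evidence (here, a shared applicant)
    before declaring two similarly named inventors identical."""
    return similar(rec_a["name"], rec_b["name"]) and rec_a["applicant"] == rec_b["applicant"]

a = {"name": "Müller, Hans", "applicant": "Siemens"}
b = {"name": "Muller, Hans", "applicant": "Siemens"}
c = {"name": "Muller, Hans", "applicant": "Bosch"}
print(same_inventor(a, b))  # True: names match and applicants agree
print(same_inventor(a, c))  # False: filtered out despite matching names
```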
Abstract:
Online paper web analysis relies on traversing scanners that criss-cross on top of a rapidly moving paper web. The sensors embedded in the scanners measure many important quality variables of the paper, such as basis weight, caliper, and porosity. Most of these quantities vary considerably, and the measurements are noisy at many different scales. The zigzagging nature of scanning makes it difficult to separate machine-direction (MD) and cross-direction (CD) variability from one another. To improve the 2D resolution of the quality variables above, the paper quality control team at the Department of Mathematics and Physics at LUT has implemented efficient Kalman filtering based methods that currently use 2D Fourier series. Fourier series are global and therefore resolve local spatial detail on the paper web rather poorly. The aim of this thesis is to study alternative wavelet-based representations as candidates to replace the Fourier basis for a higher-resolution spatial reconstruction of these quality variables. The accuracy of wavelet-compressed 2D web fields is compared with that of corresponding truncated Fourier series based fields.
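A minimal Python sketch of the kind of comparison described above, using NumPy's FFT for the truncated Fourier representation and PyWavelets for the wavelet compression. The synthetic test field, the basis choices, and the retention rate are illustrative assumptions, not the thesis's data or settings.

```python
import numpy as np
import pywt

rng = np.random.default_rng(0)
field = rng.standard_normal((64, 64)).cumsum(0).cumsum(1)  # smooth 2D "web" field

def keep_largest(a, frac):
    """Zero out all but the largest-magnitude fraction of coefficients."""
    flat = np.abs(a).ravel()
    thresh = np.sort(flat)[int((1 - frac) * flat.size)]
    return np.where(np.abs(a) >= thresh, a, 0)

frac = 0.05  # retain 5% of coefficients in both bases

# Truncated 2D Fourier representation (global basis)
F = np.fft.fft2(field)
fourier_rec = np.fft.ifft2(keep_largest(F, frac)).real

# Wavelet representation (localized basis), same retention rate
coeffs = pywt.wavedec2(field, "db4", level=3)
arr, slices = pywt.coeffs_to_array(coeffs)
compressed = pywt.array_to_coeffs(keep_largest(arr, frac), slices, output_format="wavedec2")
wavelet_rec = pywt.waverec2(compressed, "db4")

rmse = lambda x: np.sqrt(np.mean((x[:64, :64] - field) ** 2))
print(f"Fourier RMSE: {rmse(fourier_rec):.3f}  Wavelet RMSE: {rmse(wavelet_rec):.3f}")
```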
Abstract:
This paper presents a novel technique for aligning partial 3D reconstructions of the seabed acquired by a stereo camera mounted on an autonomous underwater vehicle. Vehicle localization and seabed mapping are performed simultaneously by means of an Extended Kalman Filter. Passive landmarks are detected in the images and characterized using 2D and 3D features. Landmarks are re-observed while the robot is navigating, making data association easier yet robust. Once the survey is completed, the vehicle trajectory is smoothed by a Rauch-Tung-Striebel filter, yielding an even better alignment of the 3D views and a large-scale acquisition of the seabed.
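For reference, a minimal Python sketch of the filter-then-smooth pattern described above, on a toy linear constant-velocity model rather than the paper's full EKF SLAM state; the model matrices and noise levels are illustrative assumptions.

```python
import numpy as np

# Toy 1D constant-velocity model standing in for the vehicle state.
F = np.array([[1.0, 1.0], [0.0, 1.0]])   # state transition
H = np.array([[1.0, 0.0]])               # position-only measurement
Q = 0.01 * np.eye(2)                     # process noise
R = np.array([[0.5]])                    # measurement noise

def kalman_rts(zs, x0, P0):
    xs, Ps, xps, Pps = [], [], [], []
    x, P = x0, P0
    for z in zs:                          # forward Kalman filter pass
        xp, Pp = F @ x, F @ P @ F.T + Q   # predict
        K = Pp @ H.T @ np.linalg.inv(H @ Pp @ H.T + R)
        x = xp + K @ (z - H @ xp)         # update
        P = (np.eye(2) - K @ H) @ Pp
        xs.append(x); Ps.append(P); xps.append(xp); Pps.append(Pp)
    # backward Rauch-Tung-Striebel smoothing pass
    xs_s, Ps_s = [xs[-1]], [Ps[-1]]
    for k in range(len(zs) - 2, -1, -1):
        C = Ps[k] @ F.T @ np.linalg.inv(Pps[k + 1])
        xs_s.insert(0, xs[k] + C @ (xs_s[0] - xps[k + 1]))
        Ps_s.insert(0, Ps[k] + C @ (Ps_s[0] - Pps[k + 1]) @ C.T)
    return np.array(xs_s)

rng = np.random.default_rng(1)
truth = np.arange(20.0)
zs = [np.array([t + rng.normal(0, 0.7)]) for t in truth]
smoothed = kalman_rts(zs, np.zeros(2), np.eye(2))
print(smoothed[:, 0])  # smoothed position estimates
```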
Abstract:
A visual SLAM system has been implemented and optimised for real-time deployment on an AUV equipped with calibrated stereo cameras. The system incorporates a novel approach to landmark description in which landmarks are local submaps consisting of a cloud of 3D points and their associated SIFT/SURF descriptors. Landmarks are also sparsely distributed, which simplifies and accelerates data association and map updates. In addition to landmark-based localisation, the system utilises visual odometry to estimate the pose of the vehicle in 6 degrees of freedom by identifying temporal matches between consecutive local submaps and computing the motion. Both the extended Kalman filter and the unscented Kalman filter have been considered for filtering the observations. The output of the filter is also smoothed using the Rauch-Tung-Striebel (RTS) method to obtain a better alignment of the sequence of local submaps and to deliver a large-scale 3D acquisition of the surveyed area. Synthetic experiments have been performed using a simulation environment in which ray tracing is used to generate synthetic images for the stereo system.
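A hedged Python/OpenCV sketch of the temporal matching step between consecutive submaps. ORB features are used here as a freely available stand-in for the SIFT/SURF descriptors in the paper, and the function name, feature count, and match limit are illustrative assumptions.

```python
import cv2
import numpy as np

def match_submaps(img_prev, img_next, max_matches=100):
    """Find temporal feature matches between consecutive submap images
    (grayscale uint8 arrays)."""
    orb = cv2.ORB_create(nfeatures=1000)   # stand-in for SIFT/SURF
    kp1, des1 = orb.detectAndCompute(img_prev, None)
    kp2, des2 = orb.detectAndCompute(img_next, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:max_matches]
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
    return pts1, pts2

# With calibrated cameras the matched points constrain the relative motion,
# e.g. via the essential matrix (K being the camera intrinsic matrix):
#   E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
#   _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)
```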
Abstract:
Simultaneous localization and mapping (SLAM) is a very important problem in mobile robotics. Many solutions have been proposed by different scientists during the last two decades; nevertheless, few studies have considered the use of multiple sensors simultaneously. The solution presented here is based on combining several data sources with the aid of an Extended Kalman Filter (EKF). Two approaches are proposed. The first is to run the ordinary EKF SLAM algorithm for each data source separately in parallel and, at the end of each step, fuse the results into one solution. The second is to use multiple data sources simultaneously in a single filter. A comparison of the computational complexity of the two methods is also presented: the first method is almost four times faster than the second.
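The per-step fusion in the first approach can be sketched as information-form (inverse-covariance) fusion of the two filters' Gaussian estimates. Below is a minimal numpy example; the specific fusion rule and the independence assumption are illustrative, not necessarily the thesis's exact scheme.

```python
import numpy as np

def fuse(x1, P1, x2, P2):
    """Fuse two Gaussian state estimates by adding their information.
    Assumes the two estimates' errors are independent."""
    I1, I2 = np.linalg.inv(P1), np.linalg.inv(P2)
    P = np.linalg.inv(I1 + I2)                  # fused covariance
    x = P @ (I1 @ x1 + I2 @ x2)                 # fused mean
    return x, P

# Example: 2D state estimates from two sensor-specific EKFs
x1, P1 = np.array([2.0, 1.0]), np.diag([0.5, 0.2])
x2, P2 = np.array([2.4, 0.8]), np.diag([0.1, 0.4])
x, P = fuse(x1, P1, x2, P2)
print(x)            # fused estimate, pulled toward the more confident sensor
print(np.diag(P))   # fused variances are smaller than either input's
```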
Abstract:
In many industrial applications, accurate and fast surface reconstruction is essential for quality control. Variation in surface finishing parameters, such as surface roughness, can reflect defects in the manufacturing process, non-optimal operational efficiency of the product, and reduced life expectancy of the product. This thesis considers the reconstruction and analysis of high-frequency variation, that is, roughness, on planar surfaces. Standard roughness measures in industry are calculated from surface topography. A fast and non-contact way to obtain surface topography is to apply photometric stereo to estimate surface gradients and to reconstruct the surface by integrating the gradient fields. Alternatively, visual methods, such as statistical measures, fractal dimension, and distance transforms, can be used to characterize surface roughness directly from gray-scale images. In this thesis, the accuracy of distance transforms, statistical measures, and fractal dimension is evaluated for estimating surface roughness from gray-scale images and topographies, and the results are contrasted with standard industry roughness measures. In distance transforms, the key idea is that distance values calculated along a highly varying surface are greater than distances calculated along a smoother surface. Statistical measures and fractal dimension are common surface roughness measures. In the experiments, the skewness and variance of the brightness distribution, fractal dimension, and distance transforms exhibited strong linear correlations with standard industry roughness measures. One of the key strengths of the photometric stereo method is its ability to acquire the higher-frequency variation of surfaces. In this thesis, the reconstruction of planar, high-frequency varying surfaces is studied in the presence of imaging noise and blur. Two Wiener filter-based methods are proposed, one of which is optimal in the sense of surface power spectral density, given the spectral properties of the imaging noise and blur. Experiments show that the proposed methods preserve the inherent high-frequency variation in the reconstructed surfaces, whereas traditional reconstruction methods typically handle incorrect measurements by smoothing, which dampens the high-frequency variation.
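A minimal frequency-domain Wiener deconvolution sketch in Python of the kind of reconstruction discussed above; the blur kernel, noise level, and regularization constant are illustrative assumptions, not the thesis's PSD-optimal formulation.

```python
import numpy as np

def wiener_deconvolve(observed, psf, k=0.01):
    """Frequency-domain Wiener deconvolution: G = H* / (|H|^2 + k).
    k stands in for the noise-to-signal power ratio."""
    H = np.fft.fft2(psf, s=observed.shape)
    G = np.conj(H) / (np.abs(H) ** 2 + k)
    return np.fft.ifft2(np.fft.fft2(observed) * G).real

rng = np.random.default_rng(0)
surface = rng.standard_normal((128, 128))          # high-frequency "rough" surface
psf = np.ones((5, 5)) / 25.0                       # assumed imaging blur
blurred = np.fft.ifft2(np.fft.fft2(surface) * np.fft.fft2(psf, s=surface.shape)).real
observed = blurred + 0.05 * rng.standard_normal(surface.shape)

restored = wiener_deconvolve(observed, psf)
print("RMSE blurred :", np.sqrt(np.mean((blurred - surface) ** 2)))
print("RMSE restored:", np.sqrt(np.mean((restored - surface) ** 2)))
```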
Comparative study on the filtering of instrumental signals using Fourier and wavelet transforms
Abstract:
A comparative study of the Fourier transform (FT) and the wavelet transform (WT) for instrumental signal denoising is presented. The basic principles of wavelet theory are described in a succinct and simplified manner. For illustration, the FT and WT are used to filter UV-VIS and plasma emission spectra, using MATLAB software for the computations. The results show that FT and WT filters are comparable when the signal does not display sharp peaks (UV-VIS spectra), but the WT yields better filtering when the filling factor of the signal is small (plasma spectra), since it causes less peak distortion.
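A Python analogue of the MATLAB comparison described above, using NumPy for the Fourier filter and PyWavelets for the wavelet filter; the synthetic spectrum, cutoff frequency, and threshold value are illustrative assumptions.

```python
import numpy as np
import pywt

# Synthetic "plasma-like" spectrum: sharp peaks on a flat baseline, plus noise
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 1024)
clean = np.exp(-((x - 0.3) / 0.004) ** 2) + 0.6 * np.exp(-((x - 0.7) / 0.004) ** 2)
noisy = clean + 0.05 * rng.standard_normal(x.size)

# Fourier low-pass filter: zero out high-frequency coefficients
F = np.fft.rfft(noisy)
F[60:] = 0
ft_filtered = np.fft.irfft(F, n=x.size)

# Wavelet filter: soft-threshold the detail coefficients
coeffs = pywt.wavedec(noisy, "sym8", level=5)
coeffs[1:] = [pywt.threshold(c, 0.15, mode="soft") for c in coeffs[1:]]
wt_filtered = pywt.waverec(coeffs, "sym8")[: x.size]

rmse = lambda y: np.sqrt(np.mean((y - clean) ** 2))
print(f"FT RMSE: {rmse(ft_filtered):.4f}  WT RMSE: {rmse(wt_filtered):.4f}")
# For narrow peaks (small filling factor) the wavelet filter typically
# distorts the peaks less than the Fourier low-pass filter.
```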
Abstract:
Learning of preference relations has recently received significant attention in the machine learning community. It is closely related to classification and regression analysis and can be reduced to these tasks; however, preference learning involves predicting an ordering of the data points rather than a single numerical value, as in regression, or a class label, as in classification. Therefore, studying preference relations within a separate framework not only facilitates better theoretical understanding of the problem but also motivates the development of efficient algorithms for the task. Preference learning has many applications in domains such as information retrieval, bioinformatics, and natural language processing. For example, algorithms that learn to rank are frequently used in search engines for ordering documents retrieved by a query. Preference learning methods have also been applied to collaborative filtering problems for predicting individual customer choices from the vast amount of user-generated feedback. In this thesis we propose several algorithms for learning preference relations. These algorithms stem from the well-founded and robust class of regularized least-squares methods and have many attractive computational properties. To improve the performance of our methods, we introduce several non-linear kernel functions. Thus, the contribution of this thesis is twofold: kernel functions for structured data, used to take advantage of various non-vectorial data representations, and preference learning algorithms suitable for different tasks, namely efficient learning of preference relations, learning with large amounts of training data, and semi-supervised preference learning. The proposed kernel-based algorithms and kernels are applied to the parse ranking task in natural language processing, document ranking in information retrieval, and remote homology detection in bioinformatics. Training kernel-based ranking algorithms can be infeasible when the training set is large. This problem is addressed by proposing a preference learning algorithm whose computational complexity scales linearly with the number of training data points. We also introduce a sparse approximation of the algorithm that can be trained efficiently with large amounts of data. For situations where a small amount of labeled data but a large amount of unlabeled data is available, we propose a co-regularized preference learning algorithm. To conclude, the methods presented in this thesis address not only the efficient training of the algorithms but also fast regularization parameter selection, multiple output prediction, and cross-validation. Furthermore, the proposed algorithms lead to notably better performance in many of the preference learning tasks considered.
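A minimal Python sketch of a regularized least-squares scorer with an RBF kernel used for ranking. This is a plain kernel RLS illustration under assumed synthetic data, not the thesis's RankRLS, sparse, or co-regularized algorithms.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """Gaussian (RBF) kernel matrix between row-vector sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit_rls(X, y, lam=1.0, gamma=1.0):
    """Kernel regularized least squares: a = (K + lam*I)^-1 y."""
    K = rbf_kernel(X, X, gamma)
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

def rank(X_train, a, X_test, gamma=1.0):
    """Score test items and return them ordered by predicted relevance."""
    scores = rbf_kernel(X_test, X_train, gamma) @ a
    return np.argsort(-scores), scores

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 4))
y = X[:, 0] - 0.5 * X[:, 1]               # assumed relevance signal
a = fit_rls(X, y, lam=0.1)
X_new = rng.standard_normal((5, 4))
order, scores = rank(X, a, X_new)
print(order, scores[order])               # items from most to least relevant
```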
Abstract:
The points of departure of this article are our own earlier studies of Maria of Portugal and of widowhood during the 14th century, respectively. We were interested in the figure of Maria of Portugal as an example of those people, mostly women but also children, who were treated as bargaining chips by their relatives: they were used to guarantee peace treaties and other agreements, but also served as hostages in wartime. This article is based on a study of Maria's will, which is preserved in the Arxiu de la Corona d'Aragó. However, we also thought it necessary to present the figure of Maria of Portugal in order to grasp not only the events that marked her life but also the way they influenced her thoughts and feelings.
Abstract:
This paper presents empirical research comparing the accounting difficulties that arise from the use of two valuation methods for biological assets, fair value (FV) and historical cost (HC) accounting, in the agricultural sector. It also compares how reliable each valuation method is in the decision-making processes of agents within the sector. By conducting an experiment with students, farmers, and accountants operating in the agricultural sector, we find that they have more difficulties, make larger miscalculations, and make poorer judgements with HC accounting than with FV accounting. In-depth interviews uncover flawed accounting practices used in the Spanish agricultural sector to meet HC accounting requirements. Given the complexities of cost calculation for biological assets and the predominance of small family business units in advanced Western countries, the study concludes that accounting can be applied more easily in the agricultural sector under FV than under HC accounting, and that HC conveys a less accurate grasp of the real situation of a farm.