20 results for binary soliton


Relevance: 10.00%

Publisher:

Abstract:

The purpose of this Master's thesis is to study the applicability of different calculation methods to light sulfur compounds, and how binary interaction parameters fitted from measured vapour-liquid equilibrium data improve the vapour-liquid equilibrium calculations in simulations. The literature part focuses on light sulfur compounds and their physical properties. In addition, desulfurization methods currently used in oil refining and new methods under development are discussed. The experimental part examines the applicability of different calculation methods to the vapour-liquid equilibrium calculation of sulfur compounds and light hydrocarbons. Binary interaction parameters are fitted from measured vapour-liquid equilibria of sulfur compounds and hydrocarbons to refine the calculation methods used. Measured results for binary mixtures are compared with simulation results obtained with the different calculation methods, and on this basis conclusions are drawn about the applicability of the calculation methods to light hydrocarbons and sulfur compounds. The calculation of sulfur compounds in two processes (a hydrogen sulfide stripper and a debutanizer column) is examined: mass balance runs are carried out for the processes, and the resulting analysis results are compared with the simulation results. The thesis also examines the calculation of water solubility and the effect of the possible calculation methods on the calculation of sulfur compounds.
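As a minimal illustration of where such a fitted binary interaction parameter enters the calculation (an assumption for illustration: the thesis does not specify the equation of state, but cubic equations of state with the van der Waals one-fluid mixing rule are the usual setting), the parameter k_ij corrects the geometric-mean combining rule of the mixture energy parameter:

```python
# Illustrative sketch (not code from the thesis): how a fitted binary
# interaction parameter k_ij enters the van der Waals one-fluid mixing
# rule used by cubic equations of state (e.g. Soave-Redlich-Kwong,
# Peng-Robinson). All numbers below are hypothetical.
from math import sqrt, isclose

def a_mix(x, a, k):
    """Mixture energy parameter:
    a = sum_i sum_j x_i x_j sqrt(a_i a_j) (1 - k_ij)."""
    n = len(x)
    return sum(
        x[i] * x[j] * sqrt(a[i] * a[j]) * (1.0 - k[i][j])
        for i in range(n)
        for j in range(n)
    )

# Hypothetical values for a hydrocarbon / sulfur compound pair:
x = [0.7, 0.3]               # mole fractions
a = [0.23, 0.45]             # pure-component a_i (arbitrary units)
k = [[0.0, 0.08],
     [0.08, 0.0]]            # fitted, symmetric k_ij with k_ii = 0

print(a_mix(x, a, k))
```

With all k_ij = 0 the rule reduces to the plain geometric-mean combining rule; fitting k_ij to measured binary vapour-liquid equilibrium data shifts the mixture parameter and thereby the predicted equilibrium.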

Abstract:

The purpose of this Master's thesis is to examine the liquid-liquid equilibria of the isobutene dimerization process. The goal is to determine the liquid-liquid equilibria between the components present in the process. The literature part reviews the theory of liquid-liquid equilibria, with particular attention to measurement methods and to apparatus reported in the literature for determining the liquid-liquid equilibria of binary and ternary systems. The methods and apparatus are presented separately for measurements carried out at low and at high pressure. Sampling and sample analysis methods are also reviewed. The literature part also touches on the determination of vapour-liquid-liquid equilibria, but the main subject of the thesis is the determination of liquid-liquid equilibria. In the experimental part, binary and ternary liquid-liquid equilibria between components present in the isooctane process were determined. The number of component pairs to be measured was narrowed down, and the measurements for the remaining pairs were divided into determinations carried out at low and at high pressure. Ternary measurements were needed for component pairs in which adding a third component to a system of two fully miscible liquids produced two liquid phases. From such measurement data, parameters of liquid-liquid equilibrium models can be determined. In addition to the measurements, the experimental part examined sampling and sample analysis.
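The abstract notes that liquid-liquid equilibrium model parameters can be fitted to such tie-line data. As an illustration only (the thesis does not name a model; NRTL is a common choice for liquid-liquid equilibria), the binary NRTL activity coefficients can be sketched as:

```python
# Sketch of the binary NRTL activity coefficient model, a common (but
# here assumed, not thesis-specified) model whose parameters tau12,
# tau21, alpha are fitted to measured liquid-liquid equilibrium data.
from math import exp, isclose

def nrtl_gammas(x1, tau12, tau21, alpha=0.2):
    """Activity coefficients (gamma1, gamma2) of a binary mixture."""
    x2 = 1.0 - x1
    G12 = exp(-alpha * tau12)
    G21 = exp(-alpha * tau21)
    ln_g1 = x2 ** 2 * (tau21 * (G21 / (x1 + x2 * G21)) ** 2
                       + tau12 * G12 / (x2 + x1 * G12) ** 2)
    ln_g2 = x1 ** 2 * (tau12 * (G12 / (x2 + x1 * G12)) ** 2
                       + tau21 * G21 / (x1 + x2 * G21) ** 2)
    return exp(ln_g1), exp(ln_g2)
```

In a fit, the parameters would be adjusted until the model reproduces the measured compositions of the two coexisting liquid phases (equal activities of each component in both phases).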

Abstract:

This thesis deals with distance transforms, which are a fundamental issue in image processing and computer vision. In this thesis, two new distance transforms for gray level images are presented. As a new application for distance transforms, they are applied to gray level image compression. The new distance transforms are both extensions of the well known distance transform algorithm developed by Rosenfeld, Pfaltz and Lay. With some modification, their algorithm, which calculates a distance transform on binary images with a chosen kernel, has been made to calculate a chessboard-like distance transform with integer numbers (DTOCS) and a real-valued distance transform (EDTOCS) on gray level images. Both distance transforms, the DTOCS and the EDTOCS, require only two passes over the gray level image and are extremely simple to implement. Only two image buffers are needed: the original gray level image and the binary image which defines the region(s) of calculation. No other image buffers are needed, even if more than one iteration round is performed. For large neighborhoods and complicated images the two-pass distance algorithm has to be applied to the image more than once, typically 3 to 10 times. Different types of kernels can be adopted. It is important to notice that no other existing transform calculates the same kind of distance map as the DTOCS. All other gray-weighted distance function algorithms (GRAYMAT etc.) find the minimum path joining two points by the smallest sum of gray levels, or weight the distance values directly by the gray levels in some manner. The DTOCS does not weight them in that way: it gives a weighted version of the chessboard distance map, in which the weights are not constant but the gray value differences of the original image. The difference between the DTOCS map and other distance transforms for gray level images is shown. The difference between the DTOCS and the EDTOCS is that the EDTOCS calculates these gray level differences in a different way.
It propagates local Euclidean distances inside a kernel. Analytical derivations of some results concerning the DTOCS and the EDTOCS are presented. Commonly, distance transforms are used for feature extraction in pattern recognition and learning; their use in image compression is very rare. This thesis introduces a new application area for distance transforms. Three new image compression algorithms based on the DTOCS and one based on the EDTOCS are presented. Control points, i.e. points that are considered fundamental for the reconstruction of the image, are selected from the gray level image using the DTOCS and the EDTOCS. The first group of methods selects the maxima of the distance image as new control points, and the second group of methods compares the DTOCS distance to the binary image chessboard distance. The effect of applying threshold masks of different sizes along the threshold boundaries is studied. The time complexity of the compression algorithms is analyzed both analytically and experimentally. It is shown that the time complexity of the algorithms is independent of the number of control points, i.e. of the compression ratio. A new morphological image decompression scheme, the 8 kernels' method, is also presented. Several decompressed images are presented. The best results are obtained using the Delaunay triangulation. The obtained image quality equals that of the DCT images with a 4 x 4
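The two-pass idea described above can be sketched as follows. This is a simplified reconstruction from the abstract, not the thesis implementation: the local step cost between 8-neighbours is taken as the gray value difference plus one (the chessboard step), with distance zero inside the seed region defined by the binary image, and the forward/backward sweeps are repeated until nothing changes:

```python
# Minimal sketch of a DTOCS-style two-pass distance transform on a
# gray level image (reconstructed from the abstract, not thesis code):
# the local cost between 8-neighbours p and q is |g(p) - g(q)| + 1,
# and distance is 0 where the binary seed image is 1.
INF = float("inf")

def dtocs(gray, seed, max_rounds=10):
    """gray: 2D list of gray values; seed: 2D 0/1 list, 1 = distance 0."""
    h, w = len(gray), len(gray[0])
    d = [[0.0 if seed[y][x] else INF for x in range(w)] for y in range(h)]
    fwd = [(-1, -1), (-1, 0), (-1, 1), (0, -1)]   # forward half of 8-mask
    bwd = [(1, 1), (1, 0), (1, -1), (0, 1)]       # backward half

    def sweep(ys, xs, mask):
        changed = False
        for y in ys:
            for x in xs:
                for dy, dx in mask:
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w and d[ny][nx] < INF:
                        cand = d[ny][nx] + abs(gray[y][x] - gray[ny][nx]) + 1
                        if cand < d[y][x]:
                            d[y][x] = cand
                            changed = True
        return changed

    for _ in range(max_rounds):  # complicated images may need several rounds
        c1 = sweep(range(h), range(w), fwd)
        c2 = sweep(range(h - 1, -1, -1), range(w - 1, -1, -1), bwd)
        if not (c1 or c2):
            break
    return d
```

On a constant image this reduces to the plain chessboard distance; gray value differences along the path add extra "height" cost, which is the curved-space interpretation the DTOCS name refers to.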

Abstract:

Streaming potential measurements were used for the surface charge characterisation of different filter media types and materials. The equipment was developed further so that measurements could be taken along the surfaces, and so that tubular membranes could also be measured. The streaming potential proved to be a very useful tool in the charge analysis of both clean and fouled filter media. Adsorption and fouling could be studied, as could flux, as functions of time. A module to determine the membrane potential was also constructed. The results collected from the experiments conducted with these devices were used in the study of the theory of the streaming potential as an electrokinetic phenomenon. Several correction factors, derived to take into account the surface conductance and the electrokinetic flow in very narrow capillaries, were tested in practice. The surface materials were studied using FTIR, and the results were compared with those from the streaming potentials. FTIR analysis was also found to be a useful tool in the characterisation of filters, as well as in the fouling studies. Upon examination of the recorded spectra from different depths in a sample, it was possible to determine the adsorption sites. The influence of an external electric field on the cross-flow microfiltration of a binary protein system was investigated using a membrane electrofiltration apparatus. The results showed that a significant improvement could be achieved in membrane filtration by using the measured electrochemical properties to help adjust the process conditions.
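The uncorrected baseline that such correction factors refine is the classical Helmholtz-Smoluchowski relation, which converts the measured streaming potential slope into a zeta potential. A minimal sketch (the numbers are hypothetical, and the correction factors the thesis tests are deliberately omitted):

```python
# Sketch of the classical Helmholtz-Smoluchowski relation (without the
# surface-conductance / narrow-capillary corrections evaluated in the
# work): zeta = (dU/dP) * eta * kappa / (eps_r * eps_0).
EPS0 = 8.8541878128e-12   # vacuum permittivity, F/m

def zeta_hs(dU_dP, eta, kappa, eps_r):
    """Zeta potential [V] from the streaming potential slope
    dU_dP [V/Pa], viscosity eta [Pa s], electrolyte conductivity
    kappa [S/m] and relative permittivity eps_r."""
    return dU_dP * eta * kappa / (eps_r * EPS0)

# Hypothetical water-like values at room temperature:
zeta = zeta_hs(dU_dP=-1.0e-6, eta=1.0e-3, kappa=0.015, eps_r=78.5)
print(f"{zeta * 1000:.1f} mV")
```

Surface conductance makes the effective kappa in narrow pores larger than the bulk value, which is why uncorrected Helmholtz-Smoluchowski underestimates the magnitude of zeta for fine filter media; this is the role of the tested correction factors.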

Abstract:

Fluent health information flow is critical for clinical decision-making. However, a considerable part of this information is free-form text, and an inability to utilize it creates risks to patient safety and cost-effective hospital administration. Methods for automated processing of clinical text are emerging. The aim in this doctoral dissertation is to study machine learning and clinical text in order to support health information flow. First, by analyzing the content of authentic patient records, the aim is to specify clinical needs in order to guide the development of machine learning applications. The contributions are a model of the ideal information flow, a model of the problems and challenges in reality, and a road map for the technology development. Second, by developing applications for practical cases, the aim is to concretize ways to support health information flow. Altogether five machine learning applications for three practical cases are described: the first two applications are binary classification and regression, related to the practical case of topic labeling and relevance ranking. The third and fourth applications are supervised and unsupervised multi-class classification for the practical case of topic segmentation and labeling. These four applications are tested with Finnish intensive care patient records. The fifth application is multi-label classification for the practical task of diagnosis coding; it is tested with English radiology reports. The performance of all these applications is promising. Third, the aim is to study how the quality of machine learning applications can be reliably evaluated. The associations between performance evaluation measures and methods are addressed, and a new hold-out method is introduced. This method contributes not only to processing time but also to the evaluation diversity and quality. The main conclusion is that developing machine learning applications for text requires interdisciplinary, international collaboration. Practical cases are very different, and hence the development must begin from genuine user needs and domain expertise. The technological expertise must cover linguistics, machine learning, and information systems. Finally, the methods must be evaluated both statistically and through authentic user feedback.
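As a toy illustration of the kind of supervised text classification the dissertation's first applications perform (this is not the thesis system; the classifier, features, and example documents are all invented for illustration), a bag-of-words multinomial Naive Bayes can be sketched in plain Python:

```python
# Toy illustration (not the thesis system): supervised topic labeling
# of short clinical-style texts with a multinomial Naive Bayes over a
# bag-of-words representation. All documents and labels are invented.
from collections import Counter
from math import log

class NaiveBayes:
    def fit(self, docs, labels):
        self.classes = sorted(set(labels))
        self.prior, self.counts, self.total = {}, {}, {}
        self.vocab = set(w for d in docs for w in d.split())
        for c in self.classes:
            cdocs = [d for d, l in zip(docs, labels) if l == c]
            self.prior[c] = log(len(cdocs) / len(docs))
            self.counts[c] = Counter(w for d in cdocs for w in d.split())
            self.total[c] = sum(self.counts[c].values())
        return self

    def predict(self, doc):
        V = len(self.vocab)
        def score(c):  # log-prior plus Laplace-smoothed log-likelihoods
            return self.prior[c] + sum(
                log((self.counts[c][w] + 1) / (self.total[c] + V))
                for w in doc.split())
        return max(self.classes, key=score)

docs = ["patient fever infection", "fever high temperature",
        "ventilator airway pressure", "airway intubation pressure"]
labels = ["infection", "infection", "respiratory", "respiratory"]
clf = NaiveBayes().fit(docs, labels)
print(clf.predict("fever and infection"))
```

Real clinical text systems of the kind described in the dissertation involve far richer linguistic preprocessing and careful evaluation, but the supervised train-then-predict structure is the same.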