88 results for Particle Image Velocimetry –mittaustekniikka


Relevance:

20.00%

Publisher:

Abstract:

The particle size of the raw material is a key material parameter in drug development. The particle size of a drug substance affects many important properties of the drug product, for example the bioavailability of the drug. This Master's thesis focused on determining the particle size of powdered drug substances by the laser diffraction method. The method is based on the fact that the angular distribution of the intensity of light scattered by the particles depends on their size distribution. The literature part of the thesis presents the theory of the laser diffraction method, as well as the PIDS (Polarization Intensity Differential Scattering) technique, which can be used in conjunction with laser diffraction. Of the analysis methods based on other principles, microscopy and a method based on measuring the aerodynamic time-of-flight were reviewed. The literature part also presents the most common ways of expressing particle size. The aim of the experimental part was to develop and validate a laser diffraction based particle size determination method for a specific drug substance. Method development was carried out with a Beckman Coulter LS 13 320 laser diffraction analyzer, which allows the PIDS technique to be used alongside the laser diffraction technique. Method development began with the assessment that the drug substance in question is best measured dispersed in a liquid. Based on its solubility, an aqueous solution saturated with the drug substance was chosen as the dispersion medium. The use of a dispersing agent and an ultrasonic bath was found necessary when dispersing the drug substance into the saturated aqueous solution. Finally, the stirring speed of the sample delivery unit was adjusted to a suitable level. In the validation phase, the developed method was found to be well suited for the drug substance in question, and the results were shown to be accurate and repeatable. The method was also insensitive to small disturbances.
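As a hedged illustration of the physical principle (not the instrument's actual inversion algorithm: the LS 13 320 fits a full optical model over a whole size distribution), the sketch below evaluates the classical Fraunhofer (Airy) scattering pattern of a single sphere and shows that larger particles push the first diffraction minimum to smaller angles; the 0.78 µm wavelength is an assumed near-infrared diode laser value.

```python
import numpy as np
from scipy.special import j1  # first-order Bessel function of the first kind

def airy_intensity(theta_rad, diameter_um, wavelength_um=0.78):
    # Fraunhofer (Airy) pattern of a sphere: I/I0 = (2*J1(x)/x)^2,
    # with x = pi * d * sin(theta) / lambda.
    x = np.pi * diameter_um * np.sin(theta_rad) / wavelength_um
    x = np.where(x == 0.0, 1e-12, x)   # avoid 0/0 on the optical axis
    return (2.0 * j1(x) / x) ** 2

theta = np.radians(np.linspace(0.01, 15.0, 4000))
for d in (5.0, 20.0, 100.0):           # assumed diameters in micrometres
    i_rel = airy_intensity(theta, d)
    # first local minimum of the sampled pattern
    k = 1 + np.argmax((i_rel[1:-1] < i_rel[:-2]) & (i_rel[1:-1] < i_rel[2:]))
    print(f"d = {d:5.1f} um -> first minimum near {np.degrees(theta[k]):5.2f} deg")
```

The printed angles follow the familiar sin(theta) = 1.22 lambda/d relation, which is why the measured angular intensity distribution encodes the particle size distribution.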

Relevance:

20.00%

Publisher:

Abstract:

The main objective of the study was to help the Myllykoski Group determine which factors the future image of the group's new sales organization, Myllykoski Sales, should consist of. The study therefore sought to establish the current state of the Myllykoski Group's corporate identity and the desired image factors of Myllykoski Sales, and to compare how well they correspond. In addition, the study aimed to examine the group's current and desired future image. For the image-building process to succeed, the image to be built and the image factors to be communicated should be based on the actual elements of the corporate identity. Corporate identity can be defined as equivalent to the company image held by internal stakeholders, and the current corporate identity can therefore be revealed by studying employees' opinions of their work organization. Accordingly, the study was carried out with two e-mail surveys directed at the sales and marketing personnel of the Myllykoski Group. The response rates were 71.4% (management, 14 questionnaires sent) and 51.9% (other personnel, 108 questionnaires sent). The responses were analyzed both qualitatively and quantitatively. The responses clearly revealed the current state of the Myllykoski Group's corporate identity, its current and desired image, and the desired image factors of Myllykoski Sales. When the desired image factors were compared with the group's corporate identity, it was found that most of them corresponded to characteristics of the current state of the identity, and these factors could therefore be safely communicated when building the image of Myllykoski Sales. The communication of some of the desired image factors should, however, be seriously reconsidered so that an unrealistic image is not built.

Relevance:

20.00%

Publisher:

Abstract:

OBJECTIVES OF THE STUDY: The aim of the thesis was first to form an overview of the role of brand marketing in industrial markets and of the significance of relationship marketing in industrial brand marketing. The second essential objective was to describe theoretically the structure of brand identity in an industrial company and its effects on the sales personnel; in addition, the added value of brands to both the customer and the seller was studied. The identity and its effects, especially the image, were also studied empirically. SOURCES AND RESEARCH METHODS: The theoretical part of the thesis is based on literature, academic journals, and earlier studies, focusing on brand marketing, identity and image, and relationship marketing as part of brand marketing. The approach of the study is descriptive, and both qualitative and quantitative. The study is a case study in which an international packaging board company was chosen as the case company. The empirical part was carried out with a web-based survey used to collect data from the sales personnel of the case company. In addition, the empirical part was extended by studying secondary sources such as the company's internal written documents and studies. RESULTS: As a result of the theoretical and empirical research, a model was created that can be used to support brand marketing decision-making in the packaging board industry. Industrial brand management should focus in particular on the branding of customer relationships; this could be called industrial relationship branding. Product elements and values, differentiation and positioning, internal company image, and communication are the cornerstones of industrial brand identity, which together create the brand image. The product and company images held by the case company's sales personnel proved to be good overall. The CKB products have the best image, while the WLC products have the weakest. Industrial brands can create many kinds of added value for both the customer and the seller company.

Relevance:

20.00%

Publisher:

Abstract:

The semiconductor particle detectors used in CERN experiments are exposed to radiation. Under radiation, the formation of lattice defects is unavoidable. The defects affect the depletion voltage and leakage current of the detectors, and hence the signal-to-noise ratio, which shortens the operational lifetime of the detectors. For this reason, understanding the formation and the effects of radiation-induced defects is crucial for the development of radiation-hard detectors. In this work, I have studied the effects of radiation-induced defects, mostly vacancy-related defects, with the Silvaco simulation package. This work thus essentially concerns the effects of radiation-induced defects, and of native defects, on the leakage currents of particle detectors. Impurity donor atom-vacancy complexes were shown to cause an insignificant increase in leakage current compared with the trivacancy and divacancy-oxygen centres. Native defects and divacancies were shown to contribute a share of the leakage current that is relatively small compared with that of the trivacancy and divacancy-oxygen centres.
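As a back-of-the-envelope complement to the device simulations (this is standard Shockley-Read-Hall theory, not taken from the thesis, and all numbers below are hypothetical), the bulk generation current added by deep levels in the depleted volume can be estimated as follows:

```python
import numpy as np

Q = 1.602e-19      # elementary charge [C]
NI = 9.65e9        # intrinsic carrier density of Si at 300 K [cm^-3]

def generation_current(area_cm2, depth_cm, trap_conc_cm3,
                       sigma_cm2=1e-15, v_th_cm_s=1e7):
    """Rough SRH estimate of bulk generation current from midgap traps.

    tau_g ~ 1 / (sigma * v_th * N_t);  I_gen = q * n_i * W * A / (2 * tau_g).
    The cross-section and thermal velocity are illustrative assumptions.
    """
    tau_g = 1.0 / (sigma_cm2 * v_th_cm_s * trap_conc_cm3)
    return Q * NI * depth_cm * area_cm2 / (2.0 * tau_g)

# hypothetical 1 cm^2 pad sensor, 300 um thick, increasing trap concentration
for n_t in (1e10, 1e11, 1e12):
    i = generation_current(1.0, 0.03, n_t)
    print(f"N_t = {n_t:.0e} cm^-3 -> I_gen ~ {i * 1e9:.1f} nA")
```

The linear growth of the generation current with trap concentration is the mechanism by which the defect centres discussed above degrade the signal-to-noise ratio.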

Relevance:

20.00%

Publisher:

Abstract:

The Large Hadron Collider, constructed at the European Organization for Nuclear Research, CERN, is the world's largest single measuring instrument ever built, and currently the most powerful particle accelerator in existence. The Large Hadron Collider includes six experiment stations, one of which is the Compact Muon Solenoid, or CMS. The main purpose of the CMS is to track and study the particles produced in proton-proton collisions. Among the detectors utilized in the CMS are resistive plate chambers (RPCs). To obtain data from these detectors, a link system has been designed. The main idea of the link system is to receive data from the detector front-end electronics in parallel form and to transmit it onwards in serial form via an optical fiber. The system is mostly ready and in place. However, a problem has occurred with the innermost RPC detectors, located in the sector labeled RE1/1: the transmission lines for parallel data suffer from signal integrity issues over long distances. As a solution, a new version of the link system has been devised, one that fits in a smaller space and can be located within the CMS, closer to the detectors. So far this RE1/1 link system has been completed only partially, with just the mechanical design and casing done. In this thesis, the link system electronics for the RE1/1 sector were designed by modifying the existing link system concept to better meet the requirements of the RE1/1 sector. In addition to completing the prototype of the RE1/1 link system electronics, some testing was done to ensure the functionality of the design.
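As a toy illustration of the parallel-in, serial-out idea (the real link system is dedicated hardware with framing, encoding and clock recovery, none of which is modeled here), a minimal sketch might look like this:

```python
from typing import Iterable, Iterator

def serialize(words: Iterable[int], width: int = 8) -> Iterator[int]:
    """Emit the bits of fixed-width parallel words MSB first.

    A toy model of what a serializer does before driving an optical link.
    """
    for word in words:
        for bit in range(width - 1, -1, -1):
            yield (word >> bit) & 1

def deserialize(bits: Iterator[int], width: int = 8) -> Iterator[int]:
    """Reassemble fixed-width words from a serial bit stream."""
    word, count = 0, 0
    for bit in bits:
        word = (word << 1) | bit
        count += 1
        if count == width:
            yield word
            word, count = 0, 0

data = [0x3A, 0xF0, 0x01]                     # hypothetical front-end words
assert list(deserialize(serialize(data))) == data
```

The point of the conversion is that one serial optical fiber replaces many parallel copper lines, which is exactly what sidesteps the signal integrity problem of long parallel transmission lines.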

Relevance:

20.00%

Publisher:

Abstract:

This thesis deals with distance transforms, which are a fundamental tool in image processing and computer vision. Two new distance transforms for gray level images are presented, and as a new application, they are applied to gray level image compression. Both new distance transforms are extensions of the well known distance transform algorithm developed by Rosenfeld, Pfaltz and Lay. With some modification, their algorithm, which calculates a distance transform on binary images with a chosen kernel, has been made to calculate a chessboard-like distance transform with integer values (the DTOCS) and a real-valued distance transform (the EDTOCS) on gray level images. Both distance transforms, the DTOCS and the EDTOCS, require only two passes over the gray level image and are extremely simple to implement. Only two image buffers are needed: the original gray level image and the binary image which defines the region(s) of calculation. No other image buffers are needed even if more than one iteration round is performed. For large neighborhoods and complicated images, the two-pass distance algorithm has to be applied to the image more than once, typically 3 to 10 times. Different types of kernels can be adopted. It is important to notice that no other existing transform calculates the same kind of distance map as the DTOCS. All other gray-weighted distance algorithms, such as GRAYMAT, find the minimum path joining two points by the smallest sum of gray levels, or weight the distance values directly by the gray levels in some manner. The DTOCS does not weight them that way: it gives a weighted version of the chessboard distance map in which the weights are not constant, but are the gray value differences of the original image. The difference between the DTOCS map and other distance transforms for gray level images is shown. The difference between the DTOCS and the EDTOCS is that the EDTOCS calculates the gray level differences in a different way: it propagates local Euclidean distances inside a kernel. Analytical derivations of some results concerning the DTOCS and the EDTOCS are presented.

Commonly, distance transforms are used for feature extraction in pattern recognition and learning; their use in image compression is very rare. This thesis introduces a new application area for distance transforms. Three new image compression algorithms based on the DTOCS and one based on the EDTOCS are presented. Control points, i.e. points that are considered fundamental for the reconstruction of the image, are selected from the gray level image using the DTOCS and the EDTOCS. The first group of methods selects the maxima of the distance image as new control points, and the second group compares the DTOCS distance to the binary-image chessboard distance. The effect of applying threshold masks of different sizes along the threshold boundaries is studied. The time complexity of the compression algorithms is analyzed both analytically and experimentally, and it is shown to be independent of the number of control points, i.e. of the compression ratio. Also a new morphological image decompression scheme, the 8 kernels' method, is presented. Several decompressed images are shown. The best results are obtained using the Delaunay triangulation. The obtained image quality equals that of the DCT images with a 4 x 4
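A minimal sketch of the two-pass idea behind the DTOCS, under the assumption that the local step between 8-connected neighbours costs the gray-level difference plus one; the thesis algorithm also supports other kernels and repeated iteration rounds:

```python
import numpy as np

def dtocs(gray, region):
    """Two-pass DTOCS-style transform on a gray level image.

    gray   : 2-D array of gray values
    region : 2-D bool array; True marks pixels whose distance is computed,
             False pixels act as sources with distance 0.
    Local step cost between 8-neighbours is |gray difference| + 1
    (a sketch; the thesis allows other kernels and several iterations).
    """
    h, w = gray.shape
    d = np.where(region, np.inf, 0.0)
    fwd = [(-1, -1), (-1, 0), (-1, 1), (0, -1)]   # forward-pass neighbours
    bwd = [(1, 1), (1, 0), (1, -1), (0, 1)]       # backward-pass neighbours

    def sweep(rows, cols, offsets):
        for y in rows:
            for x in cols:
                if not region[y, x]:
                    continue
                for dy, dx in offsets:
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        step = abs(float(gray[y, x]) - float(gray[ny, nx])) + 1.0
                        d[y, x] = min(d[y, x], d[ny, nx] + step)

    sweep(range(h), range(w), fwd)                          # top-left down
    sweep(range(h - 1, -1, -1), range(w - 1, -1, -1), bwd)  # bottom-right up
    return d
```

On a flat image the gray differences vanish and the result reduces to the ordinary chessboard distance, which makes the "weighted chessboard" interpretation above concrete.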

Relevance:

20.00%

Publisher:

Abstract:

Multispectral images are becoming more common in the fields of remote sensing, computer vision, and industrial applications. Because of the high accuracy of the multispectral information, it can be used as an important quality factor in the inspection of industrial products. Recently, the development of multispectral imaging systems and the computational analysis of multispectral images have been the focus of growing interest. In this thesis, three areas of multispectral image analysis are considered. First, a method for analyzing multispectral textured images was developed. The method is based on a spectral co-occurrence matrix, which contains information on the joint distribution of spectral classes in the spectral domain. Next, a procedure for estimating the illumination spectrum of color images was developed. The proposed method can be used, for example, in color constancy, color correction, and content-based search from color image databases. Finally, color filters for optical pattern recognition were designed, and a prototype of a spectral vision system was constructed. The spectral vision system can be used to acquire a low-dimensional component image set for two-dimensional spectral image reconstruction. The amount of data produced by the spectral vision system is small and therefore convenient for storing and transmitting a spectral image.
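A hedged sketch of the first method: the abstract does not say how the spectral classes are formed, so plain k-means quantization of the pixel spectra is assumed here, and the single fixed offset is a simplification:

```python
import numpy as np
from sklearn.cluster import KMeans

def spectral_cooccurrence(img, n_classes=8, offset=(0, 1), seed=0):
    """Co-occurrence matrix of spectral classes at a fixed spatial offset.

    img: (H, W, B) multispectral image. Each pixel spectrum is first
    quantized into n_classes spectral classes (k-means is an assumption
    here), then pairs of classes at the given offset are counted.
    """
    h, w, b = img.shape
    labels = KMeans(n_clusters=n_classes, n_init=10, random_state=seed) \
        .fit_predict(img.reshape(-1, b)).reshape(h, w)
    dy, dx = offset
    a = labels[max(0, -dy):h - max(0, dy), max(0, -dx):w - max(0, dx)]
    c = labels[max(0, dy):h - max(0, -dy), max(0, dx):w - max(0, -dx)]
    m = np.zeros((n_classes, n_classes), dtype=np.int64)
    np.add.at(m, (a.ravel(), c.ravel()), 1)   # count class pairs
    return m / m.sum()                        # joint distribution of pairs
```

Texture features can then be derived from this matrix in the same spirit as from a classical gray level co-occurrence matrix, but with spectral classes in place of gray levels.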

Relevance:

20.00%

Publisher:

Abstract:

The ongoing development of digital media has brought a new set of challenges with it. As images containing more than three wavelength bands, often called spectral images, are becoming a more integral part of everyday life, problems in the quality of the RGB reproduction of spectral images have turned into an important area of research. The notion of image quality is often thought to comprise two distinct areas: image quality itself and image fidelity. Both deal with similar questions, image quality being the degree of excellence of the image, and image fidelity the measure of the match between the image under study and the original. In this thesis, both image fidelity and image quality are considered, with an emphasis on the influence of color and spectral image features on both. Very few works have been dedicated to the quality and fidelity of spectral images. Several novel image fidelity measures were developed in this study, including kernel similarity measures and 3D-SSIM (structural similarity index). The kernel measures incorporate the polynomial, Gaussian radial basis function (RBF) and sigmoid kernels, and the 3D-SSIM is an extension of the traditional gray-scale SSIM measure developed to incorporate spectral data. The novel image quality model presented in this study is based on the assumption that the statistical parameters of the spectra of an image influence its overall appearance. The spectral image quality model comprises three quality attributes: colorfulness, vividness and naturalness. The quality prediction is done by modeling the preference function expressed in JNDs (just noticeable differences). Both the image fidelity measures and the image quality model proved effective in the respective experiments.
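A minimal sketch of what pixelwise kernel similarity between an original and a reproduced spectral image could look like; the kernel forms are the standard ones named in the abstract, while the parameter values and the mean aggregation are assumptions:

```python
import numpy as np

def kernel_similarity(ref, test, kind="rbf", sigma=0.1,
                      degree=2, c=1.0, a=1.0, b=0.0):
    """Mean pixelwise kernel similarity of two (H, W, B) spectral images.

    Implements the three standard kernels named in the abstract; the
    aggregation and parameter defaults are assumptions for illustration.
    """
    x = ref.reshape(-1, ref.shape[-1]).astype(float)
    y = test.reshape(-1, test.shape[-1]).astype(float)
    dot = np.einsum("ij,ij->i", x, y)          # per-pixel spectral dot product
    if kind == "poly":
        k = (dot + c) ** degree
    elif kind == "rbf":
        sq = np.einsum("ij,ij->i", x - y, x - y)
        k = np.exp(-sq / (2.0 * sigma ** 2))
    elif kind == "sigmoid":
        k = np.tanh(a * dot + b)
    else:
        raise ValueError(kind)
    return float(k.mean())
```

With the RBF kernel, identical images score 1.0 and the score decays with the per-pixel spectral error, which is the sense in which such a measure quantifies fidelity rather than quality.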

Relevance:

20.00%

Publisher:

Abstract:

In the last two decades of studying the Solar Energetic Particle (SEP) phenomenon, intensive emphasis has been put on how, when and where these SEPs are injected into interplanetary space. It is well known that SEPs are related to solar flares and CMEs. However, the role of each in the acceleration of SEPs has been under debate ever since the major role was, step by step, reattributed from flares to CMEs after the Skylab mission, which started the era of space-borne CME observations. Since then, the shock wave generated by powerful CMEs at 2-5 solar radii has been considered the major accelerator. The current paradigm interprets the prolonged proton intensity-time profile in gradual SEP events as a direct effect of SEPs accelerated by a shock wave propagating in the interplanetary medium. The powerful CME is thus thought of as a starter of the acceleration and its shock wave as a continuing accelerator, which together produce such an intensity-time profile. It is generally believed that a single powerful CME, which may or may not be associated with a flare, is always the reason behind such gradual events.

In this work we use the Energetic and Relativistic Nuclei and Electron (ERNE) instrument on board the Solar and Heliospheric Observatory (SOHO) to present an empirical study of the possibility of multiple accelerations in SEP events. By examining 88 SEP events we first found 18 double-peaked SEP events, in which the peaks in the intensity-time profile were separated by 3-24 hours. We divided these SEP events into four groups according to the possible multiple acceleration, and in one of these groups we find evidence for multiple acceleration in the velocity dispersion and in a change of the abundance ratio at the transition to the second peak. We then explored the intensity-time profiles of all SEP events during solar cycle 23 and found that most SEP events are associated with multiple eruptions at the Sun; we call these Multi-Eruption Solar Energetic Particle (MESEP) events. We use the data available from the Large Angle and Spectrometric Coronagraph (LASCO) on board SOHO to determine the CMEs associated with such events, and Yohkoh and GOES satellite data to determine the associated flares. We found four types of MESEP according to the appearance of the peaks in the intensity-time profile over a wide range of energies. We found that it is not possible to determine whether the peaks are related to an eruption at the Sun by examining only the anisotropy of the flux, the He/p ratio and the velocity dispersion. We then chose a rare event in which there is evidence of SEP acceleration from behind a previous CME; this work resulted in a conclusion which is inconsistent with the current SEP paradigm. Next we discovered, by examining another MESEP event, that energetic particles accelerated by a second CME can penetrate a previous CME-driven decelerating shock. Finally, we consider the previous two MESEP events together with two new events and find a common basis for SEPs from a second CME penetrating previous decelerating shocks. This phenomenon is reported for the first time and is expected to have a significant impact on the modification of the current paradigm of solar energetic particle events.
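Velocity dispersion, used above as one of the diagnostics, lends itself to a simple worked example: if particles of speed v leave the Sun together and travel a common path length L, their onset times obey t_onset = t_inj + L/v, so a linear fit of onset time against 1/v recovers the injection time and path length. The numbers below are made up for illustration and are not ERNE data:

```python
import numpy as np

# hypothetical proton onset times (seconds after an arbitrary reference)
# observed at several energies; speeds given in units of c
v_over_c = np.array([0.35, 0.45, 0.55, 0.65, 0.75])
t_onset_s = np.array([4711.0, 4331.0, 4089.0, 3921.0, 3798.0])

AU_KM = 1.496e8       # astronomical unit [km]
C_KM_S = 2.998e5      # speed of light [km/s]

# t_onset = t_inj + L / v  ->  linear in 1/v
slope, t_inj = np.polyfit(1.0 / (v_over_c * C_KM_S), t_onset_s, 1)
path_au = slope / AU_KM
print(f"injection time ~ {t_inj:.0f} s, path length ~ {path_au:.2f} AU")
```

For this fabricated data the fit returns roughly 3000 s and 1.2 AU; a second, later dispersion signature in a real intensity-time profile is what points to a second injection.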

Relevance:

20.00%

Publisher:

Abstract:

Cooling crystallization is one of the most important purification and separation techniques in the chemical and pharmaceutical industries. The product of a cooling crystallization process is always a suspension that contains both the mother liquor and the product crystals, and therefore the first process step following crystallization is usually solid-liquid separation. The properties of the produced crystals, such as their size and shape, can be affected by modifying the conditions during the crystallization process. The filtration characteristics of solid-liquid suspensions, on the other hand, are strongly influenced by the particle properties as well as the properties of the liquid phase. It is thus obvious that the effects of changes made to the crystallization parameters can also be seen in the course of the filtration process. Although the relationship between crystallization and filtration is widely recognized, the number of publications where these unit operations have been considered in the same context is surprisingly small. This thesis explores the influence of different crystallization parameters in an unseeded batch cooling crystallization process on the external appearance of the product crystals and on the pressure filtration characteristics of the obtained product suspensions. In the crystallization experiments, sulphathiazole (C9H9N3O2S2), a well-known antibiotic agent, is crystallized from different mixtures of water and n-propanol in an unseeded batch crystallizer. The crystallization parameters studied are the composition of the solvent, the cooling rate in experiments carried out with a constant cooling rate throughout the whole batch, the cooling profile, and the mixing intensity during the batch. The obtained crystals are characterized using an automated image analyzer, and the crystals are separated from the solvent in constant-pressure batch filtration experiments. The separation characteristics of the suspensions are described by means of the average specific cake resistance and the average filter cake porosity, and the compressibilities of the cakes are also determined. The results show that fairly large differences can be observed in the size and shape of the crystals, and it is also shown experimentally that the changes in crystal size and shape have a direct impact on the pressure filtration characteristics of the crystal suspensions. The experimental results are used to create a procedure for estimating the filtration characteristics of solid-liquid suspensions from the particle size and shape data obtained by image analysis. Multilinear partial least squares regression (N-PLS) models are created between the filtration parameters and the particle size and shape data, and the results presented in this thesis show that fairly clear correlations can be detected with the obtained models.
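As a hedged sketch of how the average specific cake resistance is conventionally extracted from constant-pressure filtration data (the classical cake filtration equation; the thesis may process its data differently, and the run below is hypothetical):

```python
import numpy as np

def specific_cake_resistance(t_s, v_m3, dp_pa, area_m2, mu_pa_s, c_kg_m3):
    """Average specific cake resistance from a constant-pressure run.

    Classical cake filtration:
        t/V = (mu * alpha * c / (2 * dP * A^2)) * V + mu * R_m / (dP * A),
    so a linear fit of t/V against V gives alpha from the slope and the
    filter medium resistance R_m from the intercept.
    """
    slope, intercept = np.polyfit(v_m3, t_s / v_m3, 1)
    alpha = slope * 2.0 * dp_pa * area_m2 ** 2 / (mu_pa_s * c_kg_m3)
    r_medium = intercept * dp_pa * area_m2 / mu_pa_s
    return alpha, r_medium

# hypothetical run: 2 bar, 45 cm^2 filter area, aqueous mother liquor,
# 20 kg of dry cake deposited per m^3 of filtrate
t = np.array([30.0, 75.0, 135.0, 210.0, 300.0])     # cumulative time [s]
v = np.array([1.0, 2.0, 3.0, 4.0, 5.0]) * 1e-4      # cumulative filtrate [m^3]
alpha, r_m = specific_cake_resistance(t, v, 2e5, 45e-4, 1e-3, 20.0)
print(f"alpha ~ {alpha:.2e} m/kg, R_m ~ {r_m:.2e} 1/m")
```

Repeating the fit at several pressures then yields the cake compressibility from the pressure dependence of alpha.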

Relevance:

20.00%

Publisher:

Abstract:

Dirt counting and dirt particle characterisation of pulp samples is an important part of quality control in pulp and paper production, and the need for an automatic image analysis system for dirt particle characterisation in various pulp samples is critical. However, existing image analysis systems utilise a single threshold to segment the dirt particles in all pulp samples, which limits their precision. An automatic image analysis system that overcomes this deficiency would therefore be very useful. In this study, a developed Niblack thresholding method is proposed, which defines the threshold based on the number of segmented particles. In addition, Kittler thresholding is utilised. Both of these thresholding methods can determine the dirt count of different pulp samples accurately compared to visual inspection and the Digital Optical Measuring and Analysis System (DOMAS). In addition, the minimum resolution needed for acquiring a scanner image is defined. Of the dirt particle features considered, curl shows a sufficient difference to discriminate between bark and fibre bundles in different pulp samples. Three classifiers, k-Nearest Neighbour, Linear Discriminant Analysis and Multi-layer Perceptron, are utilised to categorise the dirt particles. Linear Discriminant Analysis and Multi-layer Perceptron are the most accurate in classifying the dirt particles segmented by Kittler thresholding with morphological processing. The results show that the dirt particles are successfully categorised as bark and as fibre bundles.
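For reference, standard Niblack thresholding computes a local threshold map T = m + k*s from the local mean m and standard deviation s. The sketch below implements this textbook form; the developed method in the thesis additionally tunes the threshold from the number of segmented particles, which is only hinted at here by exposing k as the tunable parameter:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def niblack_threshold(img, window=25, k=-0.2):
    """Standard Niblack local threshold map: T = m + k * s.

    img is a 2-D float array in [0, 1]; dark dirt particles are the
    pixels falling below the local threshold. The developed method in
    the thesis additionally adjusts the threshold based on the number
    of segmented particles.
    """
    mean = uniform_filter(img, window)
    mean_sq = uniform_filter(img * img, window)
    std = np.sqrt(np.maximum(mean_sq - mean * mean, 0.0))
    return mean + k * std

def segment_dirt(img, window=25, k=-0.2):
    return img < niblack_threshold(img, window, k)   # boolean dirt mask
```

Because the threshold adapts to the local statistics, the same parameter setting copes with pulp samples of different overall brightness, which is exactly where a single global threshold fails.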

Relevance:

20.00%

Publisher:

Abstract:

This work is devoted to the development of a numerical method for convection-diffusion dominated problems with a reaction term, covering both non-stiff and stiff chemical reactions. The technique is based on unifying Eulerian-Lagrangian schemes (the particle transport method) within the framework of the operator splitting method. In the computational domain, a particle set is assigned to solve the convection-reaction subproblem along the characteristic curves created by the convective velocity. At each time step, the convection, diffusion and reaction terms are solved separately, by assuming that each phenomenon occurs in a sequential fashion. Moreover, adaptivity and projection techniques are used to add particles in regions of high gradients (steep fronts) and discontinuities, and to transfer the solution from the particle set onto the grid points, respectively. The numerical results show that the particle transport method improves the solutions of CDR problems. Nevertheless, the method is time-consuming compared with classical techniques such as the method of lines. Apart from this drawback, the particle transport method can be used to simulate problems that involve moving steep/smooth fronts, such as the separation of two or more components in a system.
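A minimal one-dimensional sketch of the splitting idea: convection and reaction are solved along characteristics (a semi-Lagrangian stand-in for the particle step) and diffusion on the grid. The explicit diffusion step, the linear reaction and the periodic boundaries are simplifying assumptions; the thesis uses true particle sets with adaptivity and projection:

```python
import numpy as np

def cdr_split_step(c, x, u, d_coef, rate, dt, dx):
    """One operator-splitting step for c_t + u*c_x = d_coef*c_xx - rate*c.

    1) convection + reaction: trace characteristics back to their feet
       and apply the (here exactly solvable) linear reaction on the way;
    2) diffusion: explicit finite differences on the grid.
    A 1-D sketch with periodic boundaries.
    """
    length = x[-1] + dx
    departure = (x - u * dt) % length            # foot of each characteristic
    c = np.interp(departure, x, c, period=length)
    c = c * np.exp(-rate * dt)                   # exact linear reaction step
    # explicit diffusion; stable while d_coef * dt / dx**2 <= 0.5
    lap = (np.roll(c, -1) - 2.0 * c + np.roll(c, 1)) / dx ** 2
    return c + dt * d_coef * lap

# hypothetical setup: an advecting, slowly reacting Gaussian front
n, dx, dt = 200, 0.01, 0.002
x = np.arange(n) * dx
c = np.exp(-((x - 0.5) / 0.05) ** 2)
for _ in range(100):
    c = cdr_split_step(c, x, u=0.5, d_coef=5e-3, rate=0.5, dt=dt, dx=dx)
```

Because the convection step follows the characteristics exactly, no CFL restriction comes from the advection term; only the explicit diffusion step limits dt, which is one practical attraction of such Eulerian-Lagrangian splittings for steep fronts.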