14 results for Scanner intraoral
in Doria (National Library of Finland DSpace Services) - National Library of Finland, Finland
Abstract:
Summary: Calibration of a flatbed scanner and a digital image analysis method for quantifying root morphology
Abstract:
Most of the applications of airborne laser scanner data to forestry require that the point cloud be normalized, i.e., each point represents height from the ground instead of elevation. To normalize the point cloud, a digital terrain model (DTM), which is derived from the ground returns in the point cloud, is employed. Unfortunately, extracting accurate DTMs from airborne laser scanner data is a challenging task, especially in tropical forests where the canopy is normally very thick (partially closed), leading to a situation in which only a limited number of laser pulses reach the ground. Therefore, robust algorithms for extracting accurate DTMs in low-ground-point-density situations are needed in order to realize the full potential of airborne laser scanner data in forestry. The objective of this thesis is to develop algorithms for processing airborne laser scanner data in order to: (1) extract DTMs in demanding forest conditions (complex terrain and a low number of ground points) for applications in forestry; (2) estimate canopy base height (CBH) for forest fire behavior modeling; and (3) assess the robustness of LiDAR-based high-resolution biomass estimation models against different field plot designs. Here, the aim is to find out whether field plot data gathered by professional foresters can be combined with field plot data gathered by professionally trained community foresters and used in LiDAR-based high-resolution biomass estimation modeling without affecting prediction performance. The question of interest in this case is whether or not local forest communities can achieve the level of technical proficiency required for accurate forest monitoring. The algorithms for extracting DTMs from LiDAR point clouds presented in this thesis address the challenges of extracting DTMs in low-ground-point situations and in complex terrain, while the algorithm for CBH estimation addresses the challenge of variations in the distribution of points in the LiDAR point cloud caused by factors such as variations in tree species and the season of data acquisition. These algorithms are adaptive (with respect to point cloud characteristics) and exhibit a high degree of tolerance to variations in the density and distribution of points in the LiDAR point cloud. Comparison with existing DTM extraction algorithms showed that the DTM extraction algorithms proposed in this thesis performed better with respect to the accuracy of estimating tree heights from airborne laser scanner data. On the other hand, the proposed DTM extraction algorithms, being mostly based on trend surface interpolation, cannot retain small artifacts in the terrain (e.g., bumps, small hills and depressions). Therefore, the DTMs generated by these algorithms are only suitable for forestry applications where the primary objective is to estimate tree heights from normalized airborne laser scanner data. The algorithm for estimating CBH proposed in this thesis is based on the idea of a moving voxel in which gaps (openings in the canopy) that act as fuel breaks are located and their height is estimated. Test results showed a slight improvement in CBH estimation accuracy over existing CBH estimation methods, which are based on height percentiles in the airborne laser scanner data. However, being based on the idea of a moving voxel, this algorithm has one main advantage over existing CBH estimation methods in the context of forest fire modeling: it has great potential to provide information about vertical fuel continuity.
This information can be used to create vertical fuel continuity maps, which can provide more realistic information on the risk of crown fires than CBH alone.
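A minimal sketch of the point-cloud normalization step described above, assuming simple linear interpolation of the ground returns (the thesis develops more robust, adaptive DTM interpolation):

```python
import numpy as np
from scipy.interpolate import griddata

def normalize_point_cloud(points, ground_points):
    """Convert point elevations to heights above ground.

    points, ground_points: arrays of shape (N, 3) with columns X, Y, Z.
    The DTM is approximated here by linear interpolation of the ground
    returns; points outside the ground-return hull get NaN heights.
    """
    dtm_z = griddata(ground_points[:, :2], ground_points[:, 2],
                     points[:, :2], method="linear")
    heights = points[:, 2] - dtm_z           # height above ground
    return np.column_stack([points[:, :2], heights])
```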
Abstract:
Abstract
Abstract:
This work is devoted to the problem of reconstructing the basis weight structure of the paper web with black-box techniques. The data that is analyzed comes from a real paper machine and is collected by an off-line scanner. The principal mathematical tool used in this work is Autoregressive Moving Average (ARMA) modelling. When coupled with the Discrete Fourier Transform (DFT), it gives a very flexible and interesting tool for analyzing properties of the paper web. Both ARMA and the DFT are used independently to represent the given signal in a simplified version of our algorithm, but the final goal is to combine the two. The Ljung-Box Q-statistic lack-of-fit test combined with the Root Mean Squared Error coefficient gives a tool to separate significant signals from noise.
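A minimal sketch of this screening step, assuming the statsmodels implementations of ARMA fitting and the Ljung-Box test (illustrative only; the thesis couples ARMA with the DFT):

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.stats.diagnostic import acorr_ljungbox

def arma_significance(signal, p=2, q=2, lags=10):
    """Fit an ARMA(p, q) model and test whether its residuals are white noise."""
    fit = ARIMA(signal, order=(p, 0, q)).fit()
    resid = fit.resid
    rmse = np.sqrt(np.mean(resid ** 2))
    lb = acorr_ljungbox(resid, lags=[lags])
    # A large Ljung-Box p-value suggests the residuals are noise, i.e. the
    # ARMA model has captured the significant structure in the signal.
    return rmse, float(lb["lb_pvalue"].iloc[0])
```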
Abstract:
Oral mucosa is a frequent site of primary herpes simplex virus type 1 (HSV-1) infection, whereas intraoral recurrent disease is very rare. Instead, reactivation from latency predominantly results in asymptomatic HSV shedding into saliva or in recurrent labial herpes (RLH) with highly individual frequency. The current study aimed to elucidate the role of human oral innate and acquired immune mechanisms in the modulation of HSV infection in the orolabial region. Saliva was found to neutralize HSV-1 and to protect cells from infection independently of salivary antibodies. Neutralization capacity was higher in saliva from asymptomatic HSV-seropositive individuals compared to subjects with a history of RLH or seronegative controls. Neutralization was at least partially associated with salivary lactoferrin content. Further, lactoferrin and peroxidase-generated hypothiocyanite were found to either neutralize HSV-1 or interfere with HSV-1 replication, whereas lysozyme displayed no anti-HSV-1 activity. Lactoferrin was also shown to modulate HSV-1 infection by inhibiting keratinocyte proliferation. RLH susceptibility was further found to be associated with Th2-biased cytokine responses against HSV and a higher level of anti-HSV IgG with Th2 polarization, indicating a lack of efficiency of the humoral response in the control of HSV disease. In a three-dimensional cell culture, keratinocytes were found to support both lytic and nonproductive infection, suggesting HSV persistence in epithelial cells and further emphasizing the importance of peripheral immune control of HSV. These results suggest that certain innate salivary antimicrobial compounds and Th1-type cellular responses are critically important in protecting the host against HSV disease, implying possible applications in drug, vaccine and gene therapy design.
Abstract:
Imaging systems have developed in recent years, and development will continue in the years to come. Manufacturers of imaging systems make promises about the performance quality of their products in order to advertise them; these promises are often so good that they are never tested in normal usage. The main target of this research is to evaluate the performance quality of two imaging systems: a scanner and a CCD color camera. Optical measurement procedures were planned to evaluate the quality of the imaging performance. Another target of this research is to evaluate calibration programs for the camera and the scanner. Measuring targets had to be chosen to evaluate the quality of the imaging performance; the manufacturers have provided definitions for these targets. The third task in this research is to evaluate and consider how good the measuring targets are.
Abstract:
Performance standards for positron emission tomography (PET) were developed to make it possible to compare systems from different generations and manufacturers. This resulted in the NEMA methodology in North America and the IEC methodology in Europe. In practice, NEMA NU 2-2001 is the method of choice today. These standardized methods allow assessment of the physical performance of new commercial dedicated PET/CT tomographs. The point spread in image formation is one of the factors that blur the image; the phenomenon is often called the partial volume effect. Several methods for correcting for partial volume are under research, but no real agreement exists on how to solve it. The influence of the effect varies in different clinical settings, and it is likely that new methods are needed to solve this problem. Most of the clinical PET work is done in the field of oncology, where whole-body PET combined with CT is the standard investigation today. Despite the progress in PET imaging techniques, visualization, and especially quantification, of small lesions remains a challenge. In addition to partial volume, movement of the object is a significant source of error; the main causes of movement are respiratory and cardiac motion. Most of the new commercial scanners are, in addition to cardiac gating, also capable of respiratory gating, and this technique has been used in patients with cancer of the thoracic region and in patients being studied for the planning of radiation therapy. For routine cardiac applications, such as assessment of viability and perfusion, only cardiac gating has been used. However, new targets such as plaque imaging or molecular imaging of new therapies require better control of cardiac motion, which is also affected by respiratory motion. To overcome these problems in cardiac work, a dual gating approach has been proposed. In this study we investigated the physical performance of a new whole-body PET/CT scanner with the NEMA standard, compared methods for partial volume correction in PET studies of the brain, and developed and tested a new robust method for dual cardiac-respiratory gated PET with phantom, animal and human data. Results from the performance measurements showed the feasibility of the new scanner design in 2D and 3D whole-body studies. Partial volume was corrected, but there is no single best method among those tested, as the correction also depends on the radiotracer and its distribution; new methods need to be developed for proper correction. The dual gating algorithm generated is shown to handle dual-gated data, preserving quantification and clearly eliminating the majority of contraction and respiration movement.
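A minimal sketch of one way dual gating could be organized, binning events by cardiac phase (fraction of the R-R interval) and respiratory amplitude; the input names and the binning scheme are assumptions for illustration, not the method developed in the thesis:

```python
import numpy as np

def dual_gate(event_times, ecg_r_peaks, resp_signal, resp_times,
              n_cardiac_bins=8, n_resp_bins=4):
    """Assign each PET event a (cardiac, respiratory) gate index."""
    # Cardiac phase: position of the event within its surrounding R-R interval.
    idx = np.clip(np.searchsorted(ecg_r_peaks, event_times) - 1,
                  0, len(ecg_r_peaks) - 2)
    rr = ecg_r_peaks[idx + 1] - ecg_r_peaks[idx]
    cardiac_phase = (event_times - ecg_r_peaks[idx]) / rr
    cardiac_bin = np.clip((cardiac_phase * n_cardiac_bins).astype(int),
                          0, n_cardiac_bins - 1)

    # Respiratory gate: amplitude of the respiration signal at the event time,
    # split into equal-count (quantile) bins.
    amp = np.interp(event_times, resp_times, resp_signal)
    edges = np.quantile(amp, np.linspace(0.0, 1.0, n_resp_bins + 1))
    resp_bin = np.clip(np.searchsorted(edges, amp, side="right") - 1,
                       0, n_resp_bins - 1)
    return cardiac_bin, resp_bin
```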
Abstract:
Controlling the quality variables (such as basis weight, moisture, etc.) is a vital part of making top-quality paper or board. In this thesis, an advanced data assimilation tool is applied to the quality control system (QCS) of a paper or board machine. The functionality of the QCS is based on quality observations measured with a traversing scanner following a zigzag path. The basic idea is the following: the measured quality variable has to be separated into its machine direction (MD) and cross direction (CD) variations, because the QCS works separately in MD and CD. Traditionally this is done simply by assuming one scan of the zigzag path to be the CD profile and its mean value to be one point of the MD trend. In this thesis, a more advanced method is introduced. The fundamental idea is to use the signal's frequency components to represent the variation in both CD and MD. To get to the frequency domain, the Fourier transform is utilized. The frequency-domain representation, that is, the Fourier components, is then used as the state vector in a Kalman filter. The Kalman filter is a widely used data assimilation tool for combining noisy observations with a model; here the observations are the quality measurements and the model consists of the Fourier frequency components. By implementing the two-dimensional Fourier transform in the Kalman filter, we get an advanced tool for the separation of the CD and MD components of the total variation or, more generally, for data assimilation. A piece of a paper roll is analyzed and this tool is applied to model the dataset. As a result, it is clear that the Kalman filter algorithm is able to reconstruct the main features of the dataset from a zigzag path. Although the results were obtained from a very short sample of a paper roll, the method seems to have great potential to be used later on as part of the quality control system.
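A minimal sketch of the measurement-update step with a Fourier-series observation model, assuming a truncated cosine basis and the standard Kalman equations (an illustration of the idea, not the thesis's implementation):

```python
import numpy as np

def observation_matrix(md_pos, cd_pos, n_md, n_cd, web_len, web_width):
    """Rows evaluate a truncated 2D cosine series at the zigzag sample points."""
    rows = []
    for u, v in zip(md_pos, cd_pos):
        row = [np.cos(2 * np.pi * (k * u / web_len + m * v / web_width))
               for k in range(n_md) for m in range(n_cd)]
        rows.append(row)
    return np.asarray(rows)

def kalman_update(x, P, H, y, R):
    """One Kalman measurement update: x holds the Fourier coefficients of the
    web, H maps them to the scanner readings along the zigzag path."""
    S = H @ P @ H.T + R                        # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)             # Kalman gain
    x_new = x + K @ (y - H @ x)                # updated coefficient estimate
    P_new = (np.eye(len(x)) - K @ H) @ P       # updated covariance
    return x_new, P_new
```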
Abstract:
Dirt counting and dirt particle characterisation of pulp samples are an important part of quality control in pulp and paper production. There is also a critical need for an automatic image analysis system that can characterise dirt particles in various pulp samples. However, existing image analysis systems utilise a single threshold to segment the dirt particles in different pulp samples, which limits their precision. Designing an automatic image analysis system that overcomes this deficiency would therefore be very useful. In this study, a further developed Niblack thresholding method is proposed; the method defines the threshold based on the number of segmented particles. In addition, Kittler thresholding is utilised. Both of these thresholding methods can determine the dirt count of the different pulp samples accurately, as compared to visual inspection and the Digital Optical Measuring and Analysis System (DOMAS). In addition, the minimum resolution needed for acquiring a scanner image is defined. Among the dirt particle features considered, curl shows a sufficient difference to discriminate between bark and fibre bundles in different pulp samples. Three classifiers, k-Nearest Neighbour, Linear Discriminant Analysis and Multi-layer Perceptron, are utilised to categorise the dirt particles. Linear Discriminant Analysis and Multi-layer Perceptron are the most accurate in classifying the dirt particles segmented by Kittler thresholding with morphological processing. The results show that the dirt particles are successfully categorised as bark and fibre bundles.
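A minimal sketch of choosing a Niblack threshold from the behaviour of the particle count, assuming the scikit-image implementations of Niblack thresholding and connected-component labelling; the stability criterion shown here is an illustration, not the rule developed in the thesis:

```python
import numpy as np
from skimage.filters import threshold_niblack
from skimage.measure import label

def adaptive_niblack_dirt_count(gray, window_size=51,
                                k_values=np.linspace(-0.2, 0.4, 13)):
    """Pick the Niblack parameter k from the behaviour of the dirt count.

    Here k is selected where the particle count is least sensitive to k
    (a plateau of the count-vs-k curve), one plausible stability criterion.
    """
    counts = []
    for k in k_values:
        t = threshold_niblack(gray, window_size=window_size, k=k)
        dirt = gray < t                        # dirt particles assumed dark
        counts.append(label(dirt).max())       # number of connected particles
    counts = np.asarray(counts)
    best = np.argmin(np.abs(np.gradient(counts)))   # flattest point of the curve
    return k_values[best], counts[best]
```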
Abstract:
The topic of this thesis is the simulation of a combination of several control and data assimilation methods meant to be used for controlling the quality of paper in a paper machine. Paper making is a very complex process, and the information obtained from the web is sparse: a paper web scanner can only measure a zigzag path on the web. An assimilation method is needed to produce estimates of the Machine Direction (MD) and Cross Direction (CD) profiles of the web, and quality control is based on these estimates. There is an increasing need for intelligent methods to assist in data assimilation. The target of this thesis is to study how such intelligent assimilation methods affect paper web quality. This work is based on a paper web simulator developed in the TEKES-funded MASI NoTes project. The simulator is a valuable tool for comparing different assimilation methods. The thesis contains a comparison of four different assimilation methods: a first-order Bayesian model estimator, an ARMA model based on a higher-order Bayesian estimator, a Fourier-transform-based Kalman filter estimator, and a simple block estimator. The last one can be considered close to current operational methods. Of these methods, the Bayesian, ARMA and Kalman estimators all seem to have advantages over the commercial one, and the Kalman and ARMA estimators seem to be the best in overall performance.
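For reference, a minimal sketch of the simple block estimator that the other methods are compared against (the array layout is an assumption):

```python
import numpy as np

def block_estimator(scans):
    """Baseline block estimate: each scan of the zigzag path is taken as the
    CD profile and its mean value as one point of the MD trend.

    scans : array of shape (n_scans, n_cd_positions), i.e. the zigzag
            measurements resampled onto a fixed CD grid (an assumption).
    """
    md_trend = scans.mean(axis=1)                   # one MD point per scan
    cd_profile = scans.mean(axis=0) - scans.mean()  # mean-removed CD profile
    return md_trend, cd_profile
```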
Abstract:
Scanning optics introduces different phenomena and limitations to the cladding process compared to cladding with static optics. This work concentrates on identifying and explaining the special features of laser cladding with scanning optics. Scanner optics changes the energy input mechanism of the cladding process: laser energy is introduced into the process through a relatively small laser spot which moves rapidly back and forth, distributing the energy over a relatively large area. The moving laser spot was noticed to cause dynamic movement in the melt pool. Due to the different energy input mechanism, scanner optics can make the cladding process unstable if parameters are not selected carefully. Laser beam intensity and scanning frequency in particular have a significant role in process stability. The scanning frequency determines how long the laser beam interacts with a specific location, i.e., the local specific energy input. It was determined that if the scanning frequency is too low, under 40 Hz, the scanned beam can start to vaporize material. The intensity, in turn, determines in how large a package this energy is delivered; if the intensity of the laser beam was too high, over 191 kW/cm², the laser beam started to vaporize material. If vapor formation was noticed in the melt pool, the process started to resemble laser alloying due to deep penetration of the laser beam into the substrate. Scanner optics gives the process more flexibility than static optics. Numerical adjustment of the scanning amplitude enables adjustment of the clad bead width. Scanner power modulation (where laser power is adjusted according to where the scanner is pointing) in turn enables modification of the clad bead cross-section geometry, since laser power can be adjusted locally and thus affect how much material the laser beam melts in each sector. Power modulation is also an important factor in terms of process stability. When a linear scanner is used, oscillating the scanning mirror causes a dwell time at the scanning amplitude border area, where the scanning mirror changes its direction of movement. This can cause excessive energy input to this area, which in turn can cause vaporization and process instability. This instability can be avoided by decreasing the energy in this region through power modulation. Powder feeding parameters have a significant role in terms of process stability. It was determined that with certain powder feeding parameter combinations the powder cloud behavior became unstable, due to vaporization of powder material in the powder cloud. This was mainly noticed when the scanning frequency or the powder feeding gas flow (or both) was low, or when a steep powder feeding angle was used. When powder material vaporization occurred, it created a vapor flow which prevented powder material from reaching the melt pool, and thus dilution increased. Powder material vaporization was also noticed to produce emission in the visible wavelength range, and the intensity of this emission was correlated with the amount of vaporization in the powder cloud.
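A rough back-of-the-envelope sketch of the two stability-related quantities mentioned above, beam intensity and local dwell time, checked against the 191 kW/cm² and 40 Hz limits reported in the abstract (the dwell-time formula is a simplification that ignores mirror dynamics):

```python
import math

def scanning_parameters(power_w, spot_diameter_mm, scan_width_mm, frequency_hz):
    """Estimate beam intensity and per-sweep dwell time for a scanned cladding beam."""
    spot_area_cm2 = math.pi * (spot_diameter_mm / 20.0) ** 2     # radius in cm
    intensity_kw_cm2 = (power_w / 1000.0) / spot_area_cm2
    # Time the spot spends over one spot diameter during a single sweep.
    sweep_speed_mm_s = 2.0 * scan_width_mm * frequency_hz
    dwell_ms = 1000.0 * spot_diameter_mm / sweep_speed_mm_s
    # Limits taken from the abstract; real behaviour depends on the material.
    vaporization_risk = intensity_kw_cm2 > 191.0 or frequency_hz < 40.0
    return intensity_kw_cm2, dwell_ms, vaporization_risk
```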
Abstract:
Log measurement before sawing and sawing pattern optimization have developed considerably over the last 10 years. As the profitability of sawing has declined, efficient utilization of the raw material has become an important part of the process. As measurement technology has advanced, it has become possible to measure the shape of the log and its diameters at different points more accurately than before. Sawing pattern optimization aims at the most efficient use of the raw material, i.e., obtaining the best possible yield from every individual sawn log. Measurement accuracy is directly linked to the success of the sawing pattern optimization result. Generally, log measurement before sawing and sawing pattern optimization are supplied by the same vendor. This work examined log scanners from two different vendors and the success of the optimization based on them. A fibreglass model log was used and measured with both vendors' scanners, so the success of measurement and optimization could be compared directly against the optimal results. The work used the procedure I created in my bachelor's thesis for verifying the accuracy of a log scanner. From the measurement and optimization errors it was possible to calculate how much loss was caused to the sawmill compared to the optimal measurement and optimization result. Even small errors in optimization and measurement affect the profitability of sawing when considering a sawmill that saws 8,000 - 10,000 logs during a single work shift. Based on the results, the scanners measure with a slight error, and the measurements of the two scanners produced different sawing patterns as the optimization result. Because of the measurement error, it could be concluded that improving measurement accuracy can improve the profitability of sawing.
Abstract:
The human striatum is a heterogeneous structure representing a major part of the dopamine (DA) system's basal ganglia input and output. Positron emission tomography (PET) is a powerful tool for imaging DA neurotransmission. However, PET measurements suffer from bias caused by the low spatial resolution, especially when imaging small, D2/3-rich structures such as the ventral striatum (VST). The brain-dedicated high-resolution PET scanner, ECAT HRRT (Siemens Medical Solutions, Knoxville, TN, USA), has superior resolution capabilities compared to its predecessors. In the quantification of striatal D2/3 binding, the highly selective D2/3 antagonist [11C]raclopride is recognized as a well-validated tracer in vivo. The aim of this thesis was to use a traditional test-retest setting to evaluate the feasibility of utilizing the HRRT scanner for exploring not only small brain regions such as the VST but also low-density D2/3 areas such as the cortex. It was demonstrated that the measurement of striatal D2/3 binding was very reliable, even when studying small brain structures or prolonging the scanning interval. Furthermore, the cortical test-retest parameters displayed good to moderate reproducibility. For the first time in vivo, it was revealed that there are significant divergent rostrocaudal gradients of [11C]raclopride binding in striatal subregions. These results indicate that high-resolution [11C]raclopride PET is very reliable, and its improved sensitivity means that it should be possible to detect the often very subtle changes occurring in DA transmission. Another major advantage is the possibility to measure striatal and cortical areas simultaneously. The divergent gradients of D2/3 binding may have functional significance, and the average binding distribution could serve as the basis for a future database. Key words: dopamine, PET, HRRT, [11C]raclopride, striatum, VST, gradients, test-retest.
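A minimal sketch of two reliability measures commonly reported in test-retest PET studies, absolute variability and the intraclass correlation coefficient; these are standard formulas for two repeated measurements per subject, not necessarily the exact metrics used in the thesis:

```python
import numpy as np

def test_retest_metrics(bp_test, bp_retest):
    """Absolute variability (%) and one-way ICC for two repeated regional
    binding measurements per subject (hypothetical input arrays)."""
    bp_test = np.asarray(bp_test, float)
    bp_retest = np.asarray(bp_retest, float)
    subject_mean = (bp_test + bp_retest) / 2.0
    variability = 100.0 * np.mean(np.abs(bp_test - bp_retest) / subject_mean)
    msb = 2.0 * np.var(subject_mean, ddof=1)         # between-subject mean square
    msw = np.mean((bp_test - bp_retest) ** 2) / 2.0  # within-subject mean square
    icc = (msb - msw) / (msb + msw)                  # ICC(1,1) with k = 2
    return variability, icc
```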
Abstract:
Currently, laser scribing is a growing material processing method in industry. The benefits of laser scribing technology are being studied, for example, for improving the efficiency of solar cells. Due to the high quality requirements of the fast scribing process, it is important to monitor the process in real time to detect possible defects during the process. However, there is a lack of studies on real-time monitoring of laser scribing. Commonly used monitoring methods developed for other laser processes, such as laser welding, are rather slow, and existing applications cannot be applied to the monitoring of fast laser scribing. The aim of this thesis is to find a method for laser scribing monitoring with a high-speed camera and to evaluate the reliability and performance of the developed monitoring system with experiments. The laser used in the experiments is an IPG ytterbium pulsed fiber laser with 20 W maximum average power, and the scan head used with the laser is Scanlab's Hurryscan 14 II with an f100 telecentric lens. The camera was connected to the laser scanner using a camera adapter to follow the laser process. A powerful, fully programmable industrial computer was chosen for image processing and analysis. Algorithms for defect analysis, based on particle analysis, were developed using LabVIEW system design software. The performance of the algorithms was analyzed on a non-moving image of the scribing line with a resolution of 960x20 pixels. As a result, the maximum analysis speed was 560 frames per second. The reliability of the algorithm was evaluated by imaging a scribing path with a variable number of defects at 2000 mm/s with the laser turned off; the image analysis speed was 430 frames per second. The experiment was successful and, as a result, the algorithms detected all defects on the scribing path. The final monitoring experiment was performed during a laser process. However, it was challenging to get active laser illumination to work with the laser scanner due to the physical dimensions of the laser lens and the scanner. For reliable defect detection, the illumination system needs to be replaced.
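A minimal sketch of particle-analysis-based defect detection on a single frame, written here in Python with OpenCV for illustration (the thesis's algorithms were implemented in LabVIEW; the thresholding choice and area limit are assumptions):

```python
import cv2

def detect_defects(frame, min_area=5):
    """Particle-analysis style defect detection on one scribing-line frame.

    frame : 8-bit grayscale image of the scribing line (e.g. 960x20 pixels).
    A defect is assumed to appear as a dark interruption of the bright
    scribed line; Otsu thresholding and the area limit are illustrative.
    """
    _, binary = cv2.threshold(frame, 0, 255,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(binary)
    # Skip label 0 (background); keep blobs larger than the area limit.
    defects = [tuple(centroids[i]) for i in range(1, n)
               if stats[i, cv2.CC_STAT_AREA] >= min_area]
    return defects
```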