67 results for Image sensor
Abstract:
RESEARCH OBJECTIVES: The aim of the thesis was first to form an overall view of the role of brand marketing in industrial markets and of the significance of relationship marketing in industrial brand marketing. The second key objective was to describe, in theoretical terms, the structure of brand identity in an industrial company and its effects on the sales force; in addition, the added value of brands to both the customer and the seller was examined. Identity and its effects, in particular the image, were also studied empirically. SOURCE MATERIAL AND RESEARCH METHODS: The theoretical part of this thesis is based on literature, academic journals and earlier studies, focusing on brand marketing, identity and image, and on relationship marketing as part of brand marketing. The research approach is descriptive, and both qualitative and quantitative. The study is a case study in which an international packaging board company was chosen as the case company. The empirical part was carried out with a web-based survey used to collect data from the sales personnel of the case company. In addition, the empirical part was extended by examining secondary sources such as the company's internal written documents and studies. RESULTS: As a result of the theoretical and empirical research, a model was created that can be used to support brand marketing decision making in the packaging board industry. Industrial brand management should focus in particular on branding customer relationships; this could be called industrial relationship branding. Product elements and values, differentiation and positioning, internal corporate image and communication are the cornerstones of industrial brand identity, which together create the brand image. The product and corporate images held by the case company's sales personnel proved to be good overall. CKB products have the best image, while WLC products have the weakest. Industrial brands can create many kinds of added value for both the customer and the seller company.
Abstract:
Simultaneous localization and mapping (SLAM) is a very important problem in mobile robotics. Many solutions have been proposed by different scientists during the last two decades; nevertheless, few studies have considered the use of multiple sensors simultaneously. The solution proposed here combines several data sources with the aid of an Extended Kalman Filter (EKF). Two approaches are proposed. The first is to run the ordinary EKF SLAM algorithm for each data source separately in parallel and then, at the end of each step, fuse the results into one solution. The other approach is to use multiple data sources simultaneously in a single filter. A comparison of the computational complexity of the two methods is also presented. The first method is almost four times faster than the second one.
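To make the first fusion strategy concrete, the following Python sketch fuses two per-sensor Gaussian pose estimates in information form. The sensor names and numbers are purely illustrative, and the sketch assumes the two estimates are independent, which parallel SLAM filters only approximate.

```python
import numpy as np

def fuse_gaussian_estimates(x1, P1, x2, P2):
    """Fuse two independent Gaussian state estimates (information-form fusion).

    Corresponds to the first approach described above: one EKF SLAM filter is
    run per sensor and the per-step results are merged into a single estimate.
    x1, x2 are state mean vectors; P1, P2 their covariance matrices.
    """
    I1 = np.linalg.inv(P1)          # information matrix of estimate 1
    I2 = np.linalg.inv(P2)          # information matrix of estimate 2
    P = np.linalg.inv(I1 + I2)      # fused covariance
    x = P @ (I1 @ x1 + I2 @ x2)     # fused mean
    return x, P

# Hypothetical per-sensor estimates of a robot pose (x, y, heading):
x_lidar, P_lidar = np.array([1.02, 2.01, 0.10]), np.diag([0.04, 0.04, 0.01])
x_cam,   P_cam   = np.array([0.98, 1.97, 0.12]), np.diag([0.09, 0.09, 0.02])
x_fused, P_fused = fuse_gaussian_estimates(x_lidar, P_lidar, x_cam, P_cam)
```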
Abstract:
This thesis explains the operating principle of eddy current position sensors and the main precautions that must be taken into account during the sensor design process. A method for the automated measurement of the electrical characteristics of eddy current position sensors is suggested. A prototype of the eddy current position sensor and its electrical characteristics are investigated. The results obtained by means of the automated measuring system are explained.
Abstract:
Coating and filler pigments have a strong influence on the properties of paper. The filler content can be over 30 %, and the pigment content of a coating is about 85-95 weight percent. The physical and chemical properties of the pigments differ, and knowledge of these properties is important for optimising the optical and printing properties of the paper. The size and shape of pigment particles can be measured by different analysers, which can be based on sedimentation, laser diffraction, changes in an electric field, etc. In this master's thesis, particle properties were studied especially by scanning electron microscopy (SEM) and image analysis programs. The research included nine pigments with different particle sizes and shapes. The pigments were analysed with two image analysis programs (INCA Feature and Poikki), a Coulter LS230 (laser diffraction) and a SediGraph 5100 (sedimentation). The results were compared to assess the effect of particle shape on the performance of the analysers. Only the image analysis programs gave parameters of particle shape. One part of the research was also sample preparation for SEM; in an ideal sample, individual particles should be separated and distinct. The analysing methods gave different results, but the results from the image analysis programs corresponded to either the sedimentation or the laser diffraction results, depending on the particle shape. Detailed analysis of particle shape required high magnification in the SEM, but the measured parameters described the shape of the particles very well. Large particles (ecd ~1 µm) could also be used in 3D modelling, which enabled measurement of the thickness of the particles. The scanning electron microscope and image analysis programs were effective and versatile tools for particle analysis. Further development and experience will determine the usability of the analysing method in routine use.
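As an illustration of the image-analysis step, the following Python sketch uses scikit-image as a generic stand-in for the programs used in the thesis (INCA Feature and Poikki are not public); the Otsu threshold and the reported parameter names are assumptions, not the thesis settings.

```python
import numpy as np
from skimage import filters, measure

def particle_shape_parameters(sem_image, microns_per_pixel):
    """Segment particles in an SEM image and report size/shape parameters.

    sem_image: 2-D gray-level array; microns_per_pixel: pixel size scale.
    Returns one dict per detected particle.
    """
    # Segment bright particles from the background with Otsu's threshold.
    binary = sem_image > filters.threshold_otsu(sem_image)
    labels = measure.label(binary)
    results = []
    for region in measure.regionprops(labels):
        # Equivalent circular diameter (ecd) converted to micrometres.
        ecd = region.equivalent_diameter * microns_per_pixel
        aspect = region.major_axis_length / max(region.minor_axis_length, 1e-9)
        results.append({"ecd_um": ecd,
                        "aspect_ratio": aspect,
                        "eccentricity": region.eccentricity})
    return results
```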
Abstract:
This thesis deals with distance transforms, which are a fundamental issue in image processing and computer vision. Two new distance transforms for gray-level images are presented. As a new application for distance transforms, they are applied to gray-level image compression. The new distance transforms are both extensions of the well-known distance transform algorithm developed by Rosenfeld, Pfaltz and Lay. With some modification, their algorithm, which calculates a distance transform on binary images with a chosen kernel, has been made to calculate a chessboard-like distance transform with integer numbers (DTOCS) and a real-valued distance transform (EDTOCS) on gray-level images. Both distance transforms, the DTOCS and the EDTOCS, require only two passes over the gray-level image and are extremely simple to implement. Only two image buffers are needed: the original gray-level image and the binary image which defines the region(s) of calculation. No other image buffers are needed even if more than one iteration round is performed. For large neighborhoods and complicated images the two-pass distance algorithm has to be applied to the image more than once, typically 3 to 10 times. Different types of kernels can be adopted. It is important to notice that no other existing transform calculates the same kind of distance map as the DTOCS. All other gray-weighted distance function algorithms (GRAYMAT etc.) find the minimum path joining two points by the smallest sum of gray levels, or weight the distance values directly by the gray levels in some manner. The DTOCS does not weight them that way. The DTOCS gives a weighted version of the chessboard distance map; the weights are not constant but are the gray-value differences of the original image. The difference between the DTOCS map and other distance transforms for gray-level images is shown. The difference between the DTOCS and the EDTOCS is that the EDTOCS calculates these gray-level differences in a different way: it propagates local Euclidean distances inside a kernel. Analytical derivations of some results concerning the DTOCS and the EDTOCS are presented. Distance transforms are commonly used for feature extraction in pattern recognition and learning; their use in image compression is very rare. This thesis introduces a new application area for distance transforms. Three new image compression algorithms based on the DTOCS and one based on the EDTOCS are presented. Control points, i.e. points that are considered fundamental for the reconstruction of the image, are selected from the gray-level image using the DTOCS and the EDTOCS. The first group of methods selects the maxima of the distance image as new control points, and the second group compares the DTOCS distance to the binary-image chessboard distance. The effect of applying threshold masks of different sizes along the threshold boundaries is studied. The time complexity of the compression algorithms is analyzed both analytically and experimentally. It is shown that the time complexity of the algorithms is independent of the number of control points, i.e. the compression ratio. A new morphological image decompression scheme, the 8 kernels' method, is also presented. Several decompressed images are presented. The best results are obtained using the Delaunay triangulation. The obtained image quality equals that of the DCT images with a 4 x 4
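A minimal sketch of the two-pass idea behind the DTOCS (the integer-valued transform) is given below in Python. It is a didactic reconstruction from the description above, not the thesis implementation, and for complicated images the two passes would be iterated several times as described.

```python
import numpy as np

def dtocs(gray, inside):
    """One iteration (two raster passes) of a DTOCS-style transform.

    A chessboard-like distance whose local step cost is 1 plus the gray-level
    difference between neighbouring pixels. `gray` is the gray-level image,
    `inside` a boolean mask of the region whose distances are computed;
    pixels outside the mask act as zero-distance seeds.
    """
    h, w = gray.shape
    big = np.iinfo(np.int64).max // 4
    d = np.where(inside, big, 0).astype(np.int64)

    # Forward pass uses already-visited neighbours (above and to the left),
    # the backward pass the remaining ones (below and to the right).
    fwd = [(-1, -1), (-1, 0), (-1, 1), (0, -1)]
    bwd = [(1, 1), (1, 0), (1, -1), (0, 1)]
    for offsets, rows, cols in ((fwd, range(h), range(w)),
                                (bwd, range(h - 1, -1, -1), range(w - 1, -1, -1))):
        for y in rows:
            for x in cols:
                for dy, dx in offsets:
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        step = 1 + abs(int(gray[y, x]) - int(gray[ny, nx]))
                        d[y, x] = min(d[y, x], d[ny, nx] + step)
    return d
```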
Abstract:
Multispectral images are becoming more common in remote sensing, computer vision, and industrial applications. Due to the high accuracy of multispectral information, it can be used as an important quality factor in the inspection of industrial products. Recently, the development of multispectral imaging systems and the computational analysis of multispectral images have been the focus of growing interest. In this thesis, three areas of multispectral image analysis are considered. First, a method for analyzing multispectral textured images was developed. The method is based on a spectral cooccurrence matrix, which contains information on the joint distribution of spectral classes in the spectral domain. Next, a procedure for estimating the illumination spectrum of color images was developed. The proposed method can be used, for example, in color constancy, color correction, and content-based search of color image databases. Finally, color filters for optical pattern recognition were designed, and a prototype of a spectral vision system was constructed. The spectral vision system can be used to acquire a low-dimensional component image set for two-dimensional spectral image reconstruction. The amount of data obtained by the spectral vision system is small and therefore convenient for storing and transmitting a spectral image.
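The spectral co-occurrence idea can be sketched in Python as follows. The sketch assumes that each pixel's spectrum has already been quantised into one of K spectral classes (for example by clustering, not shown), and the exact definition used in the thesis may differ.

```python
import numpy as np

def spectral_cooccurrence(class_map, offset=(0, 1), n_classes=None):
    """Co-occurrence matrix of spectral-class labels at a given pixel offset.

    class_map: 2-D integer array of spectral class labels per pixel.
    offset: (dy, dx) spatial displacement; components assumed non-negative
    in this sketch. Returns a normalised joint-probability estimate.
    """
    dy, dx = offset
    assert dy >= 0 and dx >= 0, "offset components assumed non-negative here"
    if n_classes is None:
        n_classes = int(class_map.max()) + 1
    h, w = class_map.shape
    a = class_map[:h - dy, :w - dx]      # reference pixels
    b = class_map[dy:, dx:]              # pixels displaced by the offset
    cm = np.zeros((n_classes, n_classes), dtype=np.int64)
    np.add.at(cm, (a.ravel(), b.ravel()), 1)   # count label pairs
    return cm / cm.sum()                 # normalise to a joint distribution
```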
Abstract:
Sensor-based robot control allows manipulation in dynamic environments with uncertainties. Vision is a versatile low-cost sensory modality, but low sample rate, high sensor delay and uncertain measurements limit its usability, especially in strongly dynamic environments. Force is a complementary sensory modality allowing accurate measurements of local object shape when a tooltip is in contact with the object. In multimodal sensor fusion, several sensors measuring different modalities are combined to give a more accurate estimate of the environment. As force and vision are fundamentally different sensory modalities not sharing a common representation, combining the information from these sensors is not straightforward. In this thesis, methods for fusing proprioception, force and vision together are proposed. Making assumptions of object shape and modeling the uncertainties of the sensors, the measurements can be fused together in an extended Kalman filter. The fusion of force and visual measurements makes it possible to estimate the pose of a moving target with an end-effector mounted moving camera at high rate and accuracy. The proposed approach takes the latency of the vision system into account explicitly, to provide high sample rate estimates. The estimates also allow a smooth transition from vision-based motion control to force control. The velocity of the end-effector can be controlled by estimating the distance to the target by vision and determining the velocity profile giving rapid approach and minimal force overshoot. Experiments with a 5-degree-of-freedom parallel hydraulic manipulator and a 6-degree-of-freedom serial manipulator show that integration of several sensor modalities can increase the accuracy of the measurements significantly.
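As a simplified illustration of fusing the two modalities in a single filter, the sketch below implements a one-dimensional constant-velocity Kalman filter that applies force-based and vision-based position measurements as sequential updates. It omits the explicit latency compensation and the nonlinear (extended) formulation of the thesis, and all noise values are hypothetical.

```python
import numpy as np

class FusionKF:
    """Minimal 1-D constant-velocity Kalman filter fusing two modalities.

    Vision gives low-rate, noisier absolute position; force/proprioception
    gives high-rate position while the tooltip is in contact. Both are fused
    as sequential measurement updates on the state [position, velocity].
    """
    def __init__(self, q=1e-3, dt=0.01):
        self.x = np.zeros(2)                      # state: position, velocity
        self.P = np.eye(2)
        self.F = np.array([[1.0, dt], [0.0, 1.0]])
        self.Q = q * np.eye(2)
        self.H = np.array([[1.0, 0.0]])           # both sensors observe position

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q

    def update(self, z, r):
        S = self.H @ self.P @ self.H.T + r        # innovation covariance (1x1)
        K = self.P @ self.H.T @ np.linalg.inv(S)  # Kalman gain
        self.x = self.x + K @ (np.atleast_1d(z) - self.H @ self.x)
        self.P = (np.eye(2) - K @ self.H) @ self.P

kf = FusionKF()
kf.predict()
kf.update(z=0.05, r=np.array([[1e-4]]))   # force-based contact measurement (hypothetical noise)
kf.update(z=0.06, r=np.array([[1e-2]]))   # vision measurement, higher noise (hypothetical)
```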
Abstract:
The ongoing development of digital media has brought a new set of challenges with it. As images containing more than three wavelength bands, often called spectral images, are becoming a more integral part of everyday life, problems in the quality of the RGB reproduction from spectral images have turned into an important area of research. The notion of image quality is often thought to comprise two distinct areas: image quality itself and image fidelity, both dealing with similar questions, image quality being the degree of excellence of the image, and image fidelity the measure of the match of the image under study to the original. In this thesis, both image fidelity and image quality are considered, with an emphasis on the influence of color and spectral image features on both. Very few works are dedicated to the quality and fidelity of spectral images. Several novel image fidelity measures were developed in this study, including kernel similarity measures and 3D-SSIM (structural similarity index). The kernel measures incorporate the polynomial, Gaussian radial basis function (RBF) and sigmoid kernels. The 3D-SSIM is an extension of a traditional gray-scale SSIM measure, developed to incorporate spectral data. The novel image quality model presented in this study is based on the assumption that the statistical parameters of the spectra of an image influence its overall appearance. The spectral image quality model comprises three parameters of quality: colorfulness, vividness and naturalness. The quality prediction is done by modeling the preference function expressed in JNDs (just noticeable differences). Both the image fidelity measures and the image quality model proved effective in the respective experiments.
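One kernel-based fidelity measure of this kind can be sketched as the mean per-pixel Gaussian RBF kernel between corresponding spectra. The formulation below is an assumption in the spirit of the description above, not necessarily the exact measure developed in the thesis.

```python
import numpy as np

def rbf_kernel_fidelity(img_a, img_b, gamma=1.0):
    """Mean per-pixel Gaussian RBF kernel similarity between two spectral images.

    img_a, img_b: arrays of shape (height, width, bands) with matching shapes.
    Returns a value in (0, 1]; 1 means the images are identical.
    """
    diff = img_a.astype(float) - img_b.astype(float)
    sq = np.sum(diff * diff, axis=-1)             # squared spectral distance per pixel
    return float(np.mean(np.exp(-gamma * sq)))    # k(x, y) = exp(-gamma * ||x - y||^2)
```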
Abstract:
Many industrial machine vision and pattern recognition problems are very similar, so largely the same components could be reused when designing prototype applications. Object-oriented application frameworks offer an excellent way to speed up software development by improving reusability. In this way, wider use of machine vision applications can be enabled and costs can be saved. This thesis presents a machine vision application framework whose basic architecture is pipeline-like. The top-level structure consists of a sensor, data processing operations, a feature extractor and a classifier. In addition to the framework itself, a set of image processing and pattern recognition operations has been implemented. The framework clearly speeds up programming work and makes it easier to add new image processing operations.
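The top-level structure described above (sensor, data processing operations, feature extractor, classifier) could look roughly like the following Python sketch. The real framework's interfaces are not given in the abstract, so the class and method names here are illustrative only.

```python
from typing import Callable, Sequence

class VisionPipeline:
    """Minimal sketch of a pipeline-like machine vision application framework."""

    def __init__(self, sensor: Callable, operations: Sequence[Callable],
                 feature_extractor: Callable, classifier: Callable):
        self.sensor = sensor                       # acquires raw image data
        self.operations = list(operations)         # chained data-processing steps
        self.feature_extractor = feature_extractor
        self.classifier = classifier

    def run_once(self):
        data = self.sensor()                       # grab one image
        for op in self.operations:                 # e.g. filtering, segmentation
            data = op(data)
        features = self.feature_extractor(data)    # e.g. shape or texture features
        return self.classifier(features)           # e.g. accept/reject decision
```

New image-processing operations would then be added simply by appending callables to the operations list, which is the reusability benefit the abstract refers to.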
Abstract:
The number of autonomous wireless sensor and control nodes has been increasing rapidly during the last decade. Until recently, these wireless nodes have been powered with batteries, which has led to a short life cycle and a high need for maintenance. Due to these battery-related problems, new energy sources have been studied to power wireless nodes. One solution is energy harvesting, i.e. extracting energy from the ambient environment. Energy harvesting can provide a long-lasting power source for sensor nodes, with no need for maintenance. In this thesis, various energy harvesting technologies are studied, focusing on the theory of each technology and on state-of-the-art solutions from published studies and commercial products. In addition to energy harvesting, energy storage and energy management solutions are also studied as subsystems of a complete energy source solution. Wireless nodes are also used in heavy-duty vehicles, so a reliable, long-lasting and maintenance-free power source is needed in this kind of environment as well. The sliding boom of a forestry harvester is used as a case study of the feasibility of energy harvesting in such conditions. The energy harvester should be able to produce a few milliwatts to power the target system, an independent limit switch.
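To illustrate the kind of power budget such a harvester must meet, the following sketch computes the average power of a duty-cycled wireless node. All figures are hypothetical placeholders, not measurements from the thesis.

```python
def average_power_mw(active_mw, active_ms, sleep_uw, period_s):
    """Average power of a duty-cycled wireless node (all figures hypothetical).

    The harvester must supply at least this average power over time.
    """
    active_s = active_ms / 1000.0
    sleep_s = period_s - active_s
    # Energy per cycle in millijoules: active burst plus sleep current.
    energy_mj = active_mw * active_s + (sleep_uw / 1000.0) * sleep_s
    return energy_mj / period_s

# Example: a 30 mW measure-and-transmit burst of 20 ms once per second, 10 uW sleep.
print(average_power_mw(active_mw=30.0, active_ms=20.0, sleep_uw=10.0, period_s=1.0))
```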
Abstract:
The main objective of this study was to explore an organization's product line rebranding process and its impact on the product line's perceived image. The case company is a global paper, packaging and forest products company; the business segment studied is paperboard. The audience explored is one of the company's major customers, a merchant in Germany. The research was performed as a descriptive case study with the purpose of providing longitudinal insight into the product line image and its eventual alteration as a result of the case company's rebranding process. Mainly qualitative methods were used for conducting the research. The data for the empirical part was collected with a web-based survey at two points in time: before the rebranded products entered the market and after they had been available for approximately six months. The results of this study reveal that the case company has performed well in its attempt to improve the product line's brand image through rebranding. Between the two brand image measurements, the product brand image seems to have improved in all of the areas which, according to the theoretical framework of this study, contribute to the formation of brand image: brand associations, marketing communications and interpersonal relationships, not forgetting the original platform that initiated the change, namely technical quality modifications. In other words, it may be concluded that as technical quality was brought to a new level, assessments of the brand image improved accordingly.
Abstract:
Diabetes is a rapidly increasing worldwide problem characterised by defective metabolism of glucose that causes long-term dysfunction and failure of various organs. The most common complication of diabetes is diabetic retinopathy (DR), which is one of the primary causes of blindness and visual impairment in adults. The rapid increase of diabetes pushes the limits of current DR screening capabilities, for which digital imaging of the eye fundus (retinal imaging) together with automatic or semi-automatic image analysis algorithms provides a potential solution. In this work, the use of colour in the detection of diabetic retinopathy is statistically studied using a supervised algorithm based on one-class classification and Gaussian mixture model estimation. The presented algorithm distinguishes a certain diabetic lesion type from all other possible objects in eye fundus images by estimating only the probability density function of that lesion type. For training and ground truth estimation, the algorithm combines manual annotations of several experts, for which the best practices were experimentally selected. By assessing the algorithm's performance in experiments with colour space selection, illuminance and colour correction, and background class information, the use of colour in the detection of diabetic retinopathy was quantitatively evaluated. Another contribution of this work is a benchmarking framework for eye fundus image analysis algorithms, needed for the development of automatic DR detection algorithms. The benchmarking framework provides guidelines on how to construct a benchmarking database that comprises true patient images, ground truth, and an evaluation protocol. The evaluation is based on standard receiver operating characteristic analysis and follows medical practice in decision making, providing protocols for image- and pixel-based evaluation. During the work, two public medical image databases with ground truth were published: DIARETDB0 and DIARETDB1. The framework, the DR databases and the final algorithm are made public on the web to set baseline results for automatic detection of diabetic retinopathy. Although deviating from the general context of the thesis, a simple and effective optic disc localisation method is also presented. Optic disc localisation is discussed because normal eye fundus structures are fundamental in the characterisation of DR.
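The one-class, GMM-based detection step can be sketched with scikit-learn as a stand-in. The colour space, component count and threshold selection below are illustrative assumptions, not the settings used in the work.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def train_lesion_density(lesion_pixels, n_components=4):
    """Fit a GMM to the colours of annotated lesion pixels (one-class setting).

    lesion_pixels: array of shape (n_pixels, n_channels), e.g. RGB values
    collected from expert-annotated lesion regions.
    """
    gmm = GaussianMixture(n_components=n_components, covariance_type="full")
    gmm.fit(lesion_pixels)
    return gmm

def detect_lesion_pixels(gmm, image, log_density_threshold):
    """Mark pixels whose estimated lesion-colour density exceeds a threshold."""
    h, w, c = image.shape
    scores = gmm.score_samples(image.reshape(-1, c).astype(float))
    return (scores >= log_density_threshold).reshape(h, w)
```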
Abstract:
Adequate supply of oxygen is essential for the survival of multicellular organisms. However, in several conditions the supply of oxygen can be disturbed and tissue oxygenation is compromised; this condition is termed hypoxia. Oxygen homeostasis is maintained by regulating both the use and the delivery of oxygen through complex, sensitive and cell-type-specific transcriptional responses to hypoxia. This is mainly achieved by one master regulator, a transcription factor called hypoxia-inducible factor 1 (HIF-1). The amount of HIF-1 is under tight oxygen-dependent control by a family of oxygen-dependent prolyl hydroxylase domain proteins (PHDs) that function as the cellular oxygen sensors. Three family members (PHD1-3) are known to regulate HIF, of which the PHD2 isoform is thought to be the main regulator of HIF-1. The supply of oxygen can be disturbed in pathophysiological conditions, such as ischemic disorders and cancer. Cancer cells in the hypoxic parts of tumors exploit the ability of HIF-1 to turn on mechanisms for their survival, resistance to treatment, and escape from the oxygen- and nutrient-deprived environment. In this study, the expression and regulation of PHD2 were studied in normal and cancerous tissues, together with its significance in tumor growth. The results show that the expression of PHD2 is induced in hypoxic cells. It is overexpressed in head and neck squamous cell carcinomas and colon adenocarcinomas. Although PHD2 normally resides in the cytoplasm, nuclear translocation of PHD2 was also seen in a subset of tumor cells. Together with the overexpression, the nuclear localization correlated with the aggressiveness of the tumors. The nuclear localization of PHD2 caused an increase in the anchorage-independent growth of cancer cells. This study provides information on the role of PHD2, the main regulator of HIF expression, in cancer progression. This knowledge may prove valuable in targeting the HIF pathway in cancer treatment.
Abstract:
Wireless sensor networks and their applications have been widely researched and implemented in both commercial and non-commercial areas. The use of wireless sensor networks has expanded from military applications to everyday life. Monitoring applications of wireless sensor networks range from home monitoring, farm field and habitat monitoring to structural monitoring of buildings. As the usage boundaries of wireless sensor networks keep expanding, research is ongoing on topics such as wireless sensor network lifetime, sensor node security and extending applications to modern scenarios such as web services. The main focus of this thesis work is to study and implement a monitoring application for an infrastructure-based sensor network and to expand its usability as a web service for mobile clients. The developed application collects and monitors information from wireless sensor nodes, enabling remote monitoring of a home or office environment for the user.
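A minimal sketch of exposing such sensor readings as a web service for mobile clients is shown below, using Flask as an illustrative stand-in for whatever technology the thesis actually used; node names, endpoints and readings are hypothetical.

```python
from flask import Flask, jsonify

app = Flask(__name__)

# Hypothetical in-memory store of the latest readings; in the described system
# these would be pushed or polled from the wireless sensor network gateway.
latest_readings = {"node-1": {"temperature_c": 21.4, "humidity_pct": 38.0}}

@app.route("/nodes")
def list_nodes():
    """List known sensor nodes so a mobile client can discover them."""
    return jsonify(sorted(latest_readings))

@app.route("/nodes/<node_id>")
def node_reading(node_id):
    """Return the most recent reading of one node as JSON."""
    reading = latest_readings.get(node_id)
    if reading is None:
        return jsonify({"error": "unknown node"}), 404
    return jsonify(reading)

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```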