990 results for second image reversed


Relevance: 30.00%

Abstract:

Time-resolved particle image velocimetry (PIV) was performed inside the nozzle of a commercially available inkjet printhead to obtain the time-dependent velocity waveform. A printhead with a single transparent nozzle 80 μm in orifice diameter was used to eject single droplets at a speed of 5 m/s. An optical microscope with an ultra-high-speed camera captured the motion of particles suspended in a transparent liquid at the center of the nozzle and above the fluid meniscus at a rate of half a million frames per second. Time-resolved velocity fields were obtained from a fluid layer approximately 200 μm thick within the nozzle for a complete jetting cycle. A Lagrangian finite-element numerical model with experimental measurements as inputs was used to predict the meniscus movement, and the model predictions showed good agreement with the experimental results. This work provides the first experimental verification of physical models and numerical simulations of flows within a drop-on-demand nozzle. © 2012 Society for Imaging Science and Technology.
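At its core, PIV recovers velocity by cross-correlating successive particle-image windows and converting the correlation-peak displacement into a velocity. As an illustration of that standard step (a generic sketch, not this paper's actual processing chain), a minimal numpy example with a synthetic particle field and a known shift:

```python
import numpy as np

def piv_displacement(win_a, win_b):
    """Estimate the integer pixel shift between two interrogation windows
    from the peak of their FFT-based circular cross-correlation."""
    a = win_a - win_a.mean()
    b = win_b - win_b.mean()
    corr = np.fft.ifft2(np.fft.fft2(a).conj() * np.fft.fft2(b)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Map wrapped peak indices into the signed range [-N/2, N/2).
    return tuple(p if p < s // 2 else p - s for p, s in zip(peak, corr.shape))

# Synthetic check: a random particle field shifted by (3, -2) pixels.
rng = np.random.default_rng(0)
frame_a = rng.random((64, 64))
frame_b = np.roll(frame_a, (3, -2), axis=(0, 1))
dy, dx = piv_displacement(frame_a, frame_b)
# Given a frame interval dt and pixel pitch, velocity = shift * pitch / dt.
```

Real PIV processing adds sub-pixel peak interpolation and window overlap; this sketch shows only the displacement-from-correlation idea.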

Relevance: 30.00%

Abstract:

Image contrast reversal using films of the photochromic material bacteriorhodopsin (BR) is demonstrated with two methods based on the optical properties of BR. The first exploits the absorption difference between the B and M states: images recorded with green light can be read out with reversed contrast using violet light. The second exploits the photoinduced anisotropy of BR when it is excited by linearly polarized light. By placing the BR film between two crossed polarizers (a polarizer and an analyser), the difference in polarization state between the recorded and unrecorded areas can be detected, and images of different contrast can be obtained by rotating the polarization axis of the analyser.

Relevance: 30.00%

Abstract:

This paper presents a new region-based unified tensor level set model for image segmentation. The model introduces a third-order tensor to comprehensively describe pixel features, e.g., gray value and local geometrical features such as orientation and gradient; by defining a weighted distance, we generalize the representative region-based level set method from scalar to tensor data. The proposed model has four main advantages over the traditional representative method. First, by involving a Gaussian filter bank, the model is robust against noise, particularly salt-and-pepper noise. Second, by considering local geometrical features such as orientation and gradient, the model pays more attention to boundaries and makes the evolving curve stop more easily at boundary locations. Third, because pixels are described by a unified tensor representation, the model segments images more accurately and naturally. Fourth, based on the weighted distance definition, the model can cope with data ranging from scalars to vectors to high-order tensors. We apply the proposed method to synthetic, medical, and natural images, and the results suggest that it is superior to the available representative region-based level set method.

Relevance: 30.00%

Abstract:

This paper consists of two major parts. First, we present the outline of a simple approach to a very-low-bandwidth video-conferencing system relying on an example-based hierarchical image compression scheme. In particular, we discuss the use of example images as a model, the number of required examples, faces as a class of semi-rigid objects, a hierarchical model based on decomposition into different time scales, and the decomposition of face images into patches of interest. In the second part, we present several algorithms for image processing and animation, together with experimental evaluations. Among the original contributions of this paper is an automatic algorithm for pose estimation and normalization. We also review and compare different algorithms for finding the nearest neighbors in a database for a new input, as well as a generalized algorithm for blending patches of interest in order to synthesize new images. Finally, we outline a possible integration of these algorithms into a simple model-based video-conference system.

Relevance: 30.00%

Abstract:

This thesis addresses the problem of recognizing solid objects in the three-dimensional world using two-dimensional shape information extracted from a single image. Objects can be partly occluded and can occur in cluttered scenes. A model-based approach is taken, in which stored models are matched to an image. The matching problem is separated into two stages that employ different representations of objects. The first stage uses the smallest possible number of local features to find transformations from a model to an image, which minimizes the amount of search required in recognition. The second stage uses the entire edge contour of an object to verify each transformation, which reduces the chance of finding false matches.
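The two-stage strategy can be illustrated with a toy sketch: a minimal set of local features (here, two point correspondences) fixes a 2-D similarity transform, and the full contour then verifies the hypothesis. The model shape, points, and tolerance below are hypothetical illustrations, not the thesis's actual implementation:

```python
import numpy as np

def similarity_from_two_points(m1, m2, i1, i2):
    """Stage 1: two point correspondences (the minimal feature set) fix a
    2-D similarity transform. Points are complex numbers x + iy and the
    transform is z -> a*z + b (a encodes scale and rotation)."""
    a = (i2 - i1) / (m2 - m1)
    b = i1 - a * m1
    return a, b

def verify(a, b, model_contour, image_edges, tol=1.0):
    """Stage 2: map the full model contour into the image and accept the
    hypothesis only if every mapped point lies near some image edge point."""
    mapped = a * model_contour + b
    dists = np.abs(mapped[:, None] - image_edges[None, :]).min(axis=1)
    return bool((dists < tol).all())

# Hypothetical model: a unit square contour sampled at its corners.
model = np.array([0 + 0j, 1 + 0j, 1 + 1j, 0 + 1j])
# Image edges: the same square scaled by 2, rotated 90 degrees, translated.
true_a, true_b = 2j, 5 + 3j
image = true_a * model + true_b
a, b = similarity_from_two_points(model[0], model[1], image[0], image[1])
ok = verify(a, b, model, image)
```

Hypothesizing from few features keeps the search small, while verification against the whole contour filters out the false matches that few features alone would admit.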

Relevance: 30.00%

Abstract:

A method is proposed that can generate a ranked list of plausible three-dimensional hand configurations that best match an input image. Hand pose estimation is formulated as an image database indexing problem, in which the closest matches for an input hand image are retrieved from a large database of synthetic hand images. In contrast to previous approaches, the system can function in the presence of clutter, thanks to two novel clutter-tolerant indexing methods. First, a computationally efficient approximation of the image-to-model chamfer distance is obtained by embedding binary edge images into a high-dimensional Euclidean space. Second, a general-purpose probabilistic line matching method identifies those line segment correspondences between model and input images that are the least likely to have occurred by chance. The performance of this clutter-tolerant approach is demonstrated in quantitative experiments with hundreds of real hand images.
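For orientation, the quantity being approximated is the directed chamfer distance between edge point sets; the contribution above is an embedding that avoids computing it exactly at query time. A brute-force numpy sketch of the exact distance (with hypothetical point sets, not the paper's data):

```python
import numpy as np

def directed_chamfer(pts_a, pts_b):
    """Directed chamfer distance: for each point of A, the Euclidean
    distance to its nearest point of B, averaged over A. Small values
    mean A's edges are well covered by B's edges."""
    diff = pts_a[:, None, :] - pts_b[None, :, :]
    nearest = np.sqrt((diff ** 2).sum(-1)).min(axis=1)
    return nearest.mean()

# Hypothetical edge maps as (x, y) point lists.
a = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
b = np.array([[0.0, 1.0], [1.0, 1.0], [2.0, 1.0]])  # a shifted up by 1
d = directed_chamfer(a, b)  # every point of a is exactly 1.0 from b
```

The brute-force form costs O(|A| x |B|) per comparison, which is why an efficient approximation matters when ranking a large database of synthetic images.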

Relevance: 30.00%

Abstract:

Nearest neighbor retrieval is the task of identifying, given a database of objects and a query object, the objects in the database that are the most similar to the query. Retrieving nearest neighbors is a necessary component of many practical applications, in fields as diverse as computer vision, pattern recognition, multimedia databases, bioinformatics, and computer networks. At the same time, finding nearest neighbors accurately and efficiently can be challenging, especially when the database contains a large number of objects, and when the underlying distance measure is computationally expensive. This thesis proposes new methods for improving the efficiency and accuracy of nearest neighbor retrieval and classification in spaces with computationally expensive distance measures. The proposed methods are domain-independent, and can be applied in arbitrary spaces, including non-Euclidean and non-metric spaces. In this thesis particular emphasis is given to computer vision applications related to object and shape recognition, where expensive non-Euclidean distance measures are often needed to achieve high accuracy. The first contribution of this thesis is the BoostMap algorithm for embedding arbitrary spaces into a vector space with a computationally efficient distance measure. Using this approach, an approximate set of nearest neighbors can be retrieved efficiently - often orders of magnitude faster than retrieval using the exact distance measure in the original space. The BoostMap algorithm has two key distinguishing features with respect to existing embedding methods. First, embedding construction explicitly maximizes the amount of nearest neighbor information preserved by the embedding. Second, embedding construction is treated as a machine learning problem, in contrast to existing methods that are based on geometric considerations. 
The second contribution is a method for constructing query-sensitive distance measures for the purposes of nearest neighbor retrieval and classification. In high-dimensional spaces, query-sensitive distance measures allow for automatic selection of the dimensions that are the most informative for each specific query object. It is shown theoretically and experimentally that query-sensitivity increases the modeling power of embeddings, allowing embeddings to capture a larger amount of the nearest neighbor structure of the original space. The third contribution is a method for speeding up nearest neighbor classification by combining multiple embedding-based nearest neighbor classifiers in a cascade. In a cascade, computationally efficient classifiers are used to quickly classify easy cases, and classifiers that are more computationally expensive and also more accurate are only applied to objects that are harder to classify. An interesting property of the proposed cascade method is that, under certain conditions, classification time actually decreases as the size of the database increases, a behavior that is in stark contrast to the behavior of typical nearest neighbor classification systems. The proposed methods are evaluated experimentally in several different applications: hand shape recognition, off-line character recognition, online character recognition, and efficient retrieval of time series. In all datasets, the proposed methods lead to significant improvements in accuracy and efficiency compared to existing state-of-the-art methods. In some datasets, the general-purpose methods introduced in this thesis even outperform domain-specific methods that have been custom-designed for such datasets.
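The cascade idea can be sketched in a few lines: cheap classifiers answer confident cases immediately, and only unresolved inputs reach the expensive ones. The stage functions and thresholds below are hypothetical placeholders, not the thesis's actual embedding-based classifiers:

```python
def cascade_classify(x, stages):
    """Run classifiers from cheapest to most expensive. Each stage returns
    (label, confidence); stop at the first stage that is confident enough,
    so easy inputs never pay for the expensive classifiers."""
    for classifier, threshold in stages:
        label, confidence = classifier(x)
        if confidence >= threshold:
            return label
    return label  # fall back to the final (most accurate) stage's answer

# Hypothetical stages: a crude sign test, then an always-confident one.
cheap = lambda x: ("pos" if x > 0 else "neg", abs(x))  # unsure near zero
exact = lambda x: ("pos" if x >= 0 else "neg", 1.0)
stages = [(cheap, 0.5), (exact, 0.5)]
easy = cascade_classify(4.0, stages)    # settled by the cheap stage
hard = cascade_classify(-0.1, stages)   # falls through to the exact stage
```

With this structure, a larger database mainly adds easy cases resolved by the cheap stages, which is consistent with the counterintuitive property that classification time can fall as the database grows.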

Relevance: 30.00%

Abstract:

The fourth-order partial differential equation (PDE) proposed by You and Kaveh (the You-Kaveh fourth-order PDE), which replaces the gradient operator of classical second-order nonlinear diffusion methods with a Laplacian operator, is able to avoid the blocky effects often caused by second-order nonlinear PDEs. However, the You-Kaveh equation tends to leave the processed images with isolated black and white speckles. Although You and Kaveh use median filters to remove these speckles, median filters blur the processed images to some extent, which weakens the result of the You-Kaveh fourth-order PDE. In this paper, the reason why the You-Kaveh fourth-order PDE leaves isolated black and white speckles is analyzed, and a new fourth-order PDE based on changes of the Laplacian (the LC fourth-order PDE) is proposed and tested. The new fourth-order PDE preserves the advantages of the You-Kaveh fourth-order PDE while avoiding isolated black and white speckles. Moreover, the new fourth-order PDE keeps boundaries from being blurred and preserves nuance in the processed images, so the processed images look natural.
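One explicit iteration of the You-Kaveh form u_t = -Lap( c(|Lap u|) Lap u ) can be sketched with numpy. The diffusivity c(s) = 1/(1 + (s/k)^2), the step size, and the periodic boundaries below are illustrative choices, not the paper's exact discretization:

```python
import numpy as np

def laplacian(u):
    """Five-point Laplacian with periodic boundaries."""
    return (np.roll(u, 1, 0) + np.roll(u, -1, 0)
            + np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u)

def you_kaveh_step(u, k=0.1, dt=0.05):
    """One explicit step of u_t = -Lap( c(|Lap u|) * Lap u ) with
    c(s) = 1 / (1 + (s/k)^2): diffusion is suppressed where the Laplacian
    is large, smoothing near-planar regions while keeping sharp features."""
    lap = laplacian(u)
    c = 1.0 / (1.0 + (np.abs(lap) / k) ** 2)
    return u - dt * laplacian(c * lap)

rng = np.random.default_rng(1)
u0 = rng.random((32, 32))
u1 = you_kaveh_step(u0)
```

Because the update is the (discrete) Laplacian of another field, each step conserves the image's total intensity on a periodic grid, while the speckle issue discussed above arises from how c reacts at isolated extreme pixels.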

Relevance: 30.00%

Abstract:

In this paper, a parallel-matching processor architecture with early jump-out (EJO) control is proposed to carry out high-speed retrieval from biometric fingerprint databases. The processor performs fingerprint retrieval by minutia point matching. An EJO method is applied to the proposed architecture to speed up large-database retrieval. The processor is implemented on a Xilinx Virtex-E, occupies 6,825 slices, and runs at up to 65 MHz. A software/hardware co-simulation benchmark with a database of 10,000 fingerprints verifies that the matching speed can reach 1.22 million fingerprints per second. EJO yields about a 22% gain in computing efficiency.

Relevance: 30.00%

Abstract:

A new front-end image processing chip is presented for real-time small-object detection. It has been implemented in a 0.6 µm, 3.3 V CMOS technology and operates on 10-bit input data at 54 megasamples per second. It occupies an area of 12.9 mm × 13.6 mm (including pads), dissipates 1.5 W, has 92 I/O pins, and is to be housed in a 160-pin ceramic quad flat-pack. It performs both one- and two-dimensional FIR filtering and a multilayer perceptron (MLP) neural network function using a reconfigurable array of 21 multiplication-accumulation cells, which corresponds to a window size of 7×3. The chip can cope with images of 2047 pixels per line and can be cascaded to handle larger window sizes. The chip performs two billion fixed-point multiplications and additions per second.
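The 21-cell multiply-accumulate array computes, per output sample, a dot product between the coefficient window and one image window. A plain numpy reference of that 2-D FIR operation (without the chip's fixed-point arithmetic or line buffering):

```python
import numpy as np

def fir_2d(image, kernel):
    """2-D FIR filtering as a sliding multiply-accumulate: each output
    sample is the dot product of the kernel with one image window
    ('valid' region only, no padding)."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for y in range(oh):
        for x in range(ow):
            out[y, x] = (image[y:y + kh, x:x + kw] * kernel).sum()
    return out

# A 3x7 averaging kernel (21 taps, matching the 21-cell array) applied
# to a constant image leaves it unchanged.
kernel = np.full((3, 7), 1.0 / 21.0)
image = np.full((10, 20), 5.0)
filtered = fir_2d(image, kernel)
```

Cascading chips corresponds to tiling larger kernels out of 7×3 sub-windows and summing their partial multiply-accumulate results.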

Relevance: 30.00%

Abstract:

A new high-performance, programmable image processing chip targeted at video and HDTV applications is described. It was initially developed for small-object recognition in images but has much broader application, including 1D and 2D FIR filtering as well as neural network computation. The core of the circuit is an array of twenty-one multiplication-accumulation cells based on a systolic architecture. Devices can be cascaded to increase the order of the filter both vertically and horizontally. The chip has been fabricated in a 0.6 µm, low-power CMOS technology and operates on 10-bit input data at over 54 megasamples per second. The introduction gives some background to the chip design and highlights that there are few comparable devices. Section 2 gives a brief introduction to small-object detection. The chip architecture and design are described in detail in the later sections.

Relevance: 30.00%

Abstract:

This research aims, through performance, fashion photography, video-making and the theatrical devices that accompany such practice, to explore the style of a contemporary, largely male, subcultural collective. The common term that joins these loosely bound groups is revival, as they appear driven by an impulse to simulate and re-enact the dress, rites and rituals of British and American subcultures from a perceived golden era. The similarities with re-enactment societies are also explored and exploited to the end of developing new style-based aesthetics in male fashion image-making, formed around an elaborate re-enactment of Spartacus and the Third Servile War. Examined through comparative visuals (revivalists / re-enactors), a common thread is found in the wearing of leather as a metaphor for resistance, style and a pupa-like second skin. Subsequent findings of this research suggest that the cuirass of popular culture emerges as the motorcycle jacket of both the sword-and-sandal epic and the historical re-enactor. Addressing extremes in narcissistic dress and behaviour amongst certain individuals within these older male communities, this study also questions parts of established theory on subcultural development within the field of cultural studies and postulates a metaphorical dandy gene. Citing two leading practitioners in the field of fashion photography, the work of both Richard Prince and Bruce Weber is viewed through the lens of the subcultural aesthete, and conclusions are drawn as to their role as agents provocateurs in the development of the fashion image with a revival-based narrative. In addition, the often-used term retro is examined, categorised and granted its own genre within fashion image-making, defined as separate from the practice element of this research.

Reflecting a multi-disciplinary approach that engages the researcher as bricoleur and participant observer, this research operates in the reflexive realm and uses simulation as a key method of enquiry. The practice-led outcome of this investigation is a final research exhibition comprising a substantial installation of photography, video, clothing and textile prints. Key terms: dandy gene, historical re-enactment groups, internal theatre, narcissism, narrative image-making, reflexive practice, revival as theatre, subcultures.

Relevance: 30.00%

Abstract:

Thesis (Master's)--University of Washington, 2015

Relevance: 30.00%

Abstract:

RESEARCH OBJECTIVES. The aim of the thesis was first to form an overall view of the role of brand marketing in industrial markets, and of the significance of relationship marketing in industrial brand marketing. A second key objective was to describe theoretically the structure of brand identity in an industrial company and its effects on the sales force; in addition, the added value of brands to both the customer and the seller was studied. Identity and its effects, especially image, were also examined empirically.

DATA AND METHODS. The theoretical part of this thesis is based on literature, academic journals and earlier studies, focusing on brand marketing, identity and image, and on relationship marketing as part of brand marketing. The approach of the study is descriptive, and both qualitative and quantitative. The study is a case study, with an international packaging-board company chosen as the case company. The empirical part was carried out with a web-based survey used to collect data from the sales personnel of the case company. The empirical part was further extended by examining secondary sources such as the company's internal written documents and studies.

RESULTS. As a result of the theoretical and empirical research, a model was created that can be used to support brand-marketing decision-making in the packaging-board industry. Industrial brand management should focus in particular on branding customer relationships; this could be called industrial relationship branding. Product elements and values, differentiation and positioning, internal corporate image and communication are the cornerstones of industrial brand identity, which together create the brand image. The product and company images held by the case company's sales personnel proved to be good overall. The best image is held by CKB products, while the weakest is held by WLC products. Industrial brands can create many kinds of added value for both the customer and the seller company.

Relevance: 30.00%

Abstract:

Confocal and two-photon microscopy have become essential tools in biological research, and today many investigations are not possible without them. The valuable advantage these two techniques offer is the ability of optical sectioning. Optical sectioning makes it possible to obtain 3D visualization of structures, and hence valuable information about structural relationships and the geometrical and morphological aspects of the specimen. The lateral and axial resolutions achievable by confocal and two-photon microscopy, as in other optical imaging systems, are both defined by the diffraction theorem. Any aberration or imperfection present during imaging broadens the calculated theoretical resolution, blurs and geometrically distorts the acquired images in ways that interfere with the analysis of the structures, and lowers the fluorescence collected from the specimen. The aberrations may have different causes, and they can be classified by source: specimen-induced aberrations, optics-induced aberrations, illumination aberrations and misalignment aberrations. This thesis presents an investigation and study of image enhancement, approached in two directions. Initially, we investigated the sources of the imperfections. We propose methods to eliminate or minimize aberrations introduced during image acquisition by optimizing the acquisition conditions. The impact on resolution of using a coverslip whose thickness is mismatched with the one the objective lens is designed for was shown, and a novel technique was introduced for setting the proper value on the correction collar of the lens. The amount of spherical aberration with regard to the numerical aperture of the objective lens was investigated, and it was shown that, depending on the purpose of the imaging task, different numerical apertures must be used.

The deformed beam cross-section of the single-photon excitation source was corrected, and the resulting enhancement of resolution and image quality was shown. Furthermore, the dependency of the scattered light on the excitation wavelength was shown empirically. In the second part, we continued the study of image enhancement with deconvolution techniques. Although deconvolution algorithms are widely used to improve image quality, how well a deconvolution algorithm performs depends strongly on the point spread function (PSF) of the imaging system supplied to the algorithm and on its accuracy. We investigated approaches for obtaining a more precise PSF. Novel methods to improve the pattern of the PSF and reduce noise are proposed. Furthermore, multiple sources for extracting the PSFs of the imaging system are introduced, and the empirical deconvolution results obtained with each of these PSFs are compared. The results confirm that a greater improvement is attained by applying the in situ PSF during the deconvolution process.
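The point that deconvolution quality hinges on the supplied PSF can be illustrated with the classic Richardson-Lucy algorithm (one common deconvolution method; the abstract does not name the specific algorithms used). The PSF, image and iteration count below are illustrative:

```python
import numpy as np

def fft_conv(x, otf):
    """Circular convolution via the FFT; `otf` is the FFT of the PSF."""
    return np.real(np.fft.ifft2(np.fft.fft2(x) * otf))

def richardson_lucy(observed, otf, iterations=50):
    """Minimal Richardson-Lucy deconvolution (periodic boundaries):
    repeatedly re-blur the estimate, compare with the observation, and
    back-project the ratio through the flipped PSF (conj(otf))."""
    estimate = np.full_like(observed, observed.mean())
    for _ in range(iterations):
        blurred = fft_conv(estimate, otf)
        ratio = observed / np.maximum(blurred, 1e-12)
        estimate = estimate * fft_conv(ratio, np.conj(otf))
    return estimate

# Hypothetical example: two point sources blurred by a small Gaussian PSF.
n = 32
yy, xx = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
r2 = np.minimum(yy, n - yy) ** 2 + np.minimum(xx, n - xx) ** 2  # torus radius
psf = np.exp(-r2 / 4.0)
psf /= psf.sum()
otf = np.fft.fft2(psf)
truth = np.zeros((n, n))
truth[8, 8] = 1.0
truth[20, 16] = 2.0
observed = fft_conv(truth, otf)
restored = richardson_lucy(observed, otf)
```

If the `otf` handed to `richardson_lucy` differed from the one that produced `observed`, the restoration would degrade, which is exactly why an in situ PSF outperforms a theoretical one.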