970 results for Image readout process


Relevance:

100.00%

Publisher:

Abstract:

Digital detector technologies have developed rapidly, and new digital technologies are now available for clinical practice. This chapter gives a state-of-the-art technical overview of computed radiography (CR) and digital radiography (DR) detectors. CR systems use storage-phosphor image plates with a separate image readout process, whereas DR technology converts X-rays into electrical charges in a readout process based on thin-film transistor (TFT) arrays. Digital detectors offer several advantages over analogue detectors. Knowledge of digital detector technology for plain radiograph examinations is therefore fundamental for radiology professionals and students. This chapter provides an overview of the digital radiography systems (both CR and DR) currently available for clinical practice.

Relevance:

90.00%

Publisher:

Abstract:

In this thesis a semi-automated cell analysis system based on image processing is described. An image-processing algorithm was studied in order to segment cells semi-automatically. The main goal of this analysis is to speed up the cell image segmentation process without significantly affecting the results. Although a fully manual system can produce the best results, it is slow and repetitive when a large number of images must be processed. An active contour algorithm, more commonly known as snakes, was tested on a sequence of images taken by a microscope. The user defines an initial region enclosing the cell; the algorithm then runs several times, making the initial contour converge to the cell boundary. From the final contour it is possible to extract region properties and produce statistical data. These data show that the algorithm produces results similar to those of a purely manual system but at a faster rate. It is slower than a fully automatic approach, but it allows the user to adjust the contour, making it more versatile and tolerant to image variations.
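As an illustration of the active-contour idea, the following toy greedy snake (a simplification: a shrinking internal force plus an image-gradient edge force, with made-up weights, not the thesis's exact energy) shrinks a contour onto the boundary of a synthetic disk:

```python
import math

def make_disk(n=21, cx=10, cy=10, r=5):
    # Binary image: bright disk of radius r on a dark background.
    return [[1.0 if (x - cx) ** 2 + (y - cy) ** 2 <= r * r else 0.0
             for y in range(n)] for x in range(n)]

def grad_mag(img, x, y):
    # Central-difference gradient magnitude (zero on the image border).
    n = len(img)
    if 0 < x < n - 1 and 0 < y < n - 1:
        gx = (img[x + 1][y] - img[x - 1][y]) / 2.0
        gy = (img[x][y + 1] - img[x][y - 1]) / 2.0
        return math.hypot(gx, gy)
    return 0.0

def greedy_snake(img, points, cx, cy, w_edge=8.0, iters=10):
    # Each point greedily moves within its 3x3 neighbourhood to minimise
    # distance-to-centre (shrinking force) minus a weighted edge strength,
    # so the contour shrinks until it locks onto the disk boundary.
    n = len(img)
    for _ in range(iters):
        points = [min([(px + dx, py + dy)
                       for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                       if 0 <= px + dx < n and 0 <= py + dy < n],
                      key=lambda p: math.hypot(p[0] - cx, p[1] - cy)
                                    - w_edge * grad_mag(img, p[0], p[1]))
                  for (px, py) in points]
    return points
```

Initialising eight points on a circle of radius 8, the contour settles near the disk boundary after a few iterations.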

Relevance:

90.00%

Publisher:

Abstract:

Image registration is an important component of image analysis used to align two or more images. In this paper, we present a new framework for image registration based on compression. The basic idea underlying our approach is the conjecture that two images are correctly registered when we can maximally compress one image given the information in the other. The contribution of this paper is twofold. First, we show that the image registration process can be dealt with from the perspective of a compression problem. Second, we demonstrate that the similarity metric introduced by Li et al. performs well in image registration. Two different versions of the similarity metric have been used: the Kolmogorov version, computed using standard real-world compressors, and the Shannon version, calculated from an estimation of the entropy rate of the images.

Relevance:

90.00%

Publisher:

Abstract:

The main objective of this study was to help the Myllykoski Group determine which factors the future image of the group's new sales organisation, Myllykoski Sales, should consist of. The study therefore sought to establish the current state of the Myllykoski Group's corporate identity and the desired image factors of Myllykoski Sales, and to compare how well they correspond. In addition, the study examined the group's current and desired future image. For an image-building process to succeed, the image being built and the image factors being communicated should be based on the actual attributes of the corporate identity. Corporate identity can be defined as equivalent to the company image held by internal stakeholders, so the current identity can be revealed by studying employees' opinions of their work organisation. The study was therefore carried out with two e-mail surveys addressed to the Myllykoski Group's sales and marketing personnel. The response rates were 71.4% (management, 14 questionnaires sent) and 51.9% (other personnel, 108 questionnaires sent). The responses were analysed both qualitatively and quantitatively. They clearly revealed the current state of the group's corporate identity, its current and desired image, and the desired image factors of Myllykoski Sales. When the desired image factors were compared with the group's corporate identity, most of them were found to match the current attributes of the identity, and these factors could therefore safely be communicated when building the image of Myllykoski Sales. The communication of some desired image factors, however, should be seriously reconsidered so that an unrealistic image is not built.

Relevance:

90.00%

Publisher:

Abstract:

Confocal and two-photon microscopy have become essential tools in biological research, and today many investigations are not possible without their help. The valuable advantage that these two techniques offer is the ability of optical sectioning. Optical sectioning makes it possible to obtain 3D visualization of the structures and hence valuable information about the structural relationships and the geometrical and morphological aspects of the specimen. The achievable lateral and axial resolutions of confocal and two-photon microscopy, as in other optical imaging systems, are both defined by the diffraction theorem. Any aberration or imperfection present during imaging broadens the theoretical resolution, blurs and geometrically distorts the acquired images in ways that interfere with the analysis of the structures, and lowers the fluorescence collected from the specimen. The aberrations may have different causes and can be classified by their sources: specimen-induced aberrations, optics-induced aberrations, illumination aberrations, and misalignment aberrations. This thesis presents an investigation and study of image enhancement. The goal of the thesis was approached in two directions. Initially, we investigated the sources of the imperfections. We propose methods to eliminate or minimize aberrations introduced during image acquisition by optimizing the acquisition conditions. The impact on resolution of using a coverslip whose thickness is mismatched with the one the objective lens is designed for was shown, and a novel technique was introduced to set the proper value on the correction collar of the lens. The amount of spherical aberration with regard to the numerical aperture of the objective lens was investigated, and it was shown that, depending on the purpose of the imaging task, different numerical apertures must be used.
The deformed beam cross-section of the single-photon excitation source was corrected, and the resulting enhancement of resolution and image quality was shown. Furthermore, the dependency of the scattered light on the excitation wavelength was shown empirically. In the second part, we continued the study of image enhancement with deconvolution techniques. Although deconvolution algorithms are widely used to improve image quality, how well a deconvolution algorithm performs depends strongly on the point spread function (PSF) of the imaging system applied to the algorithm and on its accuracy. We investigated approaches for obtaining a more precise PSF. Novel methods to improve the pattern of the PSF and reduce noise are proposed. Furthermore, multiple sources for extracting the PSFs of the imaging system are introduced, and the empirical deconvolution results obtained with each of these PSFs are compared. The results confirm that the greatest improvement is attained by applying the in situ PSF during the deconvolution process.
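To illustrate why the PSF matters in deconvolution, here is a one-dimensional Richardson-Lucy sketch (a standard algorithm, shown only as an illustration of PSF-based deconvolution, not the thesis's own methods): each update divides the data by the current blurred estimate and correlates the ratio with the PSF.

```python
def cconv(u, p):
    # Circular convolution of signal u with kernel p (both length n).
    n = len(u)
    return [sum(u[j] * p[(i - j) % n] for j in range(n)) for i in range(n)]

def ccorr(v, p):
    # Circular correlation: the adjoint of cconv.
    n = len(v)
    return [sum(v[i] * p[(i - j) % n] for i in range(n)) for j in range(n)]

def richardson_lucy(d, p, iters=100):
    # Multiplicative RL updates; p must be non-negative and sum to 1.
    n = len(d)
    u = [sum(d) / n] * n          # flat initial estimate
    for _ in range(iters):
        est = cconv(u, p)          # blur current estimate with the PSF
        ratio = [di / max(ei, 1e-12) for di, ei in zip(d, est)]
        u = [uj * cj for uj, cj in zip(u, ccorr(ratio, p))]
    return u
```

With an accurate PSF a blurred spike is sharpened back to its true location; feeding the algorithm a wrong PSF degrades exactly this step, which is why the thesis invests in more precise (in situ) PSF estimates.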

Relevance:

90.00%

Publisher:

Abstract:

A fully 3D iterative image reconstruction algorithm has been developed for high-resolution PET cameras composed of pixelated scintillator crystal arrays and rotating planar detectors, based on the ordered subsets approach. The associated system matrix is precalculated with Monte Carlo methods that incorporate physical effects not included in analytical models, such as positron range effects and interaction of the incident gammas with the scintillator material. Custom Monte Carlo methodologies have been developed and optimized for modelling of system matrices for fast iterative image reconstruction adapted to specific scanner geometries, without redundant calculations. According to the methodology proposed here, only one-eighth of the voxels within two central transaxial slices need to be modelled in detail. The rest of the system matrix elements can be obtained with the aid of axial symmetries and redundancies, as well as in-plane symmetries within transaxial slices. Sparse matrix techniques for the non-zero system matrix elements are employed, allowing for fast execution of the image reconstruction process. This 3D image reconstruction scheme has been compared in terms of image quality to a 2D fast implementation of the OSEM algorithm combined with Fourier rebinning approaches. This work confirms the superiority of fully 3D OSEM in terms of spatial resolution, contrast recovery and noise reduction as compared to conventional 2D approaches based on rebinning schemes. At the same time it demonstrates that fully 3D methodologies can be efficiently applied to the image reconstruction problem for high-resolution rotational PET cameras by applying accurate pre-calculated system models and taking advantage of the system's symmetries.
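The EM-style update at the heart of OSEM can be sketched in a few lines; the tiny hand-written system matrix and subset split below are illustrative stand-ins for the Monte Carlo-modelled matrix described in the abstract:

```python
def osem(y, A, subsets, iters=5):
    # y: measured counts per LOR; A: system matrix rows (LOR x voxel);
    # subsets: disjoint lists of row indices processed in turn.
    nv = len(A[0])
    x = [1.0] * nv  # uniform initial image
    for _ in range(iters):
        for sub in subsets:
            sens = [sum(A[i][j] for i in sub) for j in range(nv)]
            back = [0.0] * nv
            for i in sub:
                proj = sum(A[i][j] * x[j] for j in range(nv))  # forward projection
                r = y[i] / max(proj, 1e-12)
                for j in range(nv):
                    back[j] += A[i][j] * r  # back-project the data/model ratio
            x = [x[j] * back[j] / max(sens[j], 1e-12) for j in range(nv)]
    return x
```

Each subset triggers one multiplicative image update, which is what makes OSEM converge faster than plain MLEM; exploiting sparsity and symmetry in A, as the abstract describes, is what makes this affordable in full 3D.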

Relevance:

90.00%

Publisher:

Abstract:

These slides present several 3-D reconstruction methods for obtaining the geometric structure of a scene that is viewed by multiple cameras. We focus on combining the geometric modeling of the image formation process with standard optimization tools to estimate the characteristic parameters that describe the geometry of the 3-D scene. In particular, linear, non-linear and robust methods to estimate the monocular and epipolar geometry are introduced as cornerstones to generate 3-D reconstructions with multiple cameras. Some examples of systems that use this constructive strategy are Bundler, PhotoSynth, VideoSurfing, etc., which are able to obtain 3-D reconstructions with several hundreds or thousands of cameras.
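The geometric image-formation model the slides start from is the pinhole projection x = K[R|t]X; a minimal sketch with an assumed intrinsic matrix K:

```python
def project(K, R, t, X):
    # Pinhole camera: x = K [R | t] X, returned in pixel coordinates.
    # First transform the world point X into camera coordinates...
    Xc = [sum(R[i][k] * X[k] for k in range(3)) + t[i] for i in range(3)]
    # ...then divide by depth and apply focal lengths and principal point.
    u = K[0][0] * Xc[0] / Xc[2] + K[0][2]
    v = K[1][1] * Xc[1] / Xc[2] + K[1][2]
    return (u, v)
```

Estimating K, R and t from image correspondences (the inverse of this mapping) is exactly what the linear, non-linear and robust methods in the slides address.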

Relevance:

90.00%

Publisher:

Abstract:

This layer is a georeferenced raster image of the historic paper manuscript map entitled: Plan of the town and basin of Quebec : and part of the adjacent country shewing the principal encampments and works of the British army commanded by Major Genl. Wolfe and those of the French army by Lieut. Genl. the Marquis of Montcalm during the attack in 1759. Scale [ca. 1:9,600]. This image consists of images of a two sheet source map that have been stitched together using image editing software to create one image. Manuscript copy of a map. Copied by [Whlkington?] in 1857. The image inside the map neatline is georeferenced to the surface of the earth and fit to the Universal Transverse Mercator (UTM) Zone 19N NAD 1983 coordinate system. All map collar and inset information is also available as part of the raster image, including any inset maps, profiles, statistical tables, directories, text, illustrations, index maps, legends, or other information associated with the principal map. This map shows features such as roads, drainage, selected buildings, fortification, ship and troop movements, and places of military interest for the Battle of Quebec, 1759, and more. Relief is shown by hachures. Includes index and text. This layer is part of a selection of digitally scanned and georeferenced historic maps from The Harvard Map Collection as part of the Imaging the Urban Environment project. Maps selected for this project represent major urban areas and cities of the world, at various time periods. These maps typically portray both natural and manmade features at a large scale. The selection represents a range of regions, originators, ground condition dates, scales, and purposes.

Relevance:

90.00%

Publisher:

Abstract:

With the rise of smart phones, lifelogging devices (e.g. Google Glass) and the popularity of image sharing websites (e.g. Flickr), users are capturing and sharing every aspect of their life online, producing a wealth of visual content. Of these uploaded images, the majority are poorly annotated or exist in complete semantic isolation, making the process of building retrieval systems difficult, as one must first understand the meaning of an image in order to retrieve it. To alleviate this problem, many image sharing websites offer manual annotation tools which allow the user to “tag” their photos; however, these techniques are laborious and as a result have been poorly adopted: Sigurbjörnsson and van Zwol (2008) showed that 64% of images uploaded to Flickr are annotated with < 4 tags. Due to this, an entire body of research has focused on the automatic annotation of images (Hanbury, 2008; Smeulders et al., 2000; Zhang et al., 2012a), where one attempts to bridge the semantic gap between an image’s appearance and its meaning, e.g. the objects present. Despite two decades of research the semantic gap still largely exists, and as a result automatic annotation models often offer unsatisfactory performance for industrial implementation. Further, these techniques can only annotate what they see, ignoring the “bigger picture” surrounding an image (e.g. its location, the event, the people present, etc.). Much work has therefore focused on building photo tag recommendation (PTR) methods which aid the user in the annotation process by suggesting tags related to those already present. These works have mainly focused on computing relationships between tags based on historical images, e.g. that NY and timessquare co-occur in many images and are therefore highly correlated. However, tags are inherently noisy, sparse and ill-defined, often resulting in poor PTR accuracy, e.g. does NY refer to New York or New Year?
This thesis proposes the exploitation of an image’s context which, unlike textual evidence, is always present, in order to alleviate this ambiguity in the tag recommendation process. Specifically, we exploit the “what, who, where, when and how” of the image capture process to complement textual evidence in various photo tag recommendation and retrieval scenarios. In part II, we combine textual, content-based (e.g. number of faces present) and contextual (e.g. day of the week taken) signals for tag recommendation, achieving up to a 75% improvement in precision@5 over a text-only TF-IDF baseline. We then consider external knowledge sources (i.e. Wikipedia and Twitter) as an alternative to the (slower-moving) Flickr on which to build recommendation models, showing that similar accuracy can be achieved on these faster-moving, yet entirely textual, datasets. In part II we also highlight the merits of diversifying tag recommendation lists, before discussing at length various problems with existing automatic image annotation and photo tag recommendation evaluation collections. In part III, we propose three new image retrieval scenarios, namely “visual event summarisation”, “image popularity prediction” and “lifelog summarisation”. In the first scenario, we attempt to produce a ranking of relevant and diverse images for various news events by (i) removing irrelevant images such as memes and visual duplicates, and then (ii) semantically clustering images based on the tweets in which they were originally posted. Using this approach, we were able to achieve over 50% precision for images in the top five ranks. In the second retrieval scenario, we show that by combining contextual and content-based features, we are able to predict whether an image will become “popular” (or not) with 74% accuracy, using an SVM classifier.
Finally, in chapter 9 we employ blur detection and perceptual-hash clustering to remove noisy images from lifelogs, before combining visual and geo-temporal signals to capture a user’s “key moments” within their day. We believe that the results of this thesis mark an important step towards building effective image retrieval models where sufficient textual content is lacking (i.e. a cold start).
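A minimal co-occurrence PTR baseline of the kind described (scoring candidate tags by how often they historically co-occur with the seed tags) might look as follows; the toy photo history is invented for illustration:

```python
from collections import Counter

def recommend_tags(history, seed_tags, k=3):
    # Score each candidate tag by how many historical photos contain
    # both it and at least one seed tag, then return the top-k.
    seeds = set(seed_tags)
    scores = Counter()
    for tags in history:
        if seeds & tags:
            for t in tags - seeds:
                scores[t] += 1
    return [t for t, _ in scores.most_common(k)]
```

This baseline inherits exactly the ambiguity noted in the abstract: a seed tag like NY pulls in co-occurring tags from every sense of NY, which is what the contextual signals are meant to disambiguate.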

Relevance:

80.00%

Publisher:

Abstract:

Virtual Reality (VR) has grown to become state-of-the-art technology in many business- and consumer-oriented E-Commerce applications. One of the major design challenges of VR environments is the placement of the rendering process, which converts the abstract description of a scene, as contained in an object database, into an image. This process is usually done at the client side, as in VRML [1], a technology that requires the client's computational power for smooth rendering. The vision of VR is also strongly connected to the issue of Quality of Service (QoS), as the perceived realism depends on an interactive frame rate of 10 to 30 frames per second (fps), real-time feedback mechanisms and realistic image quality. These requirements push traditional home computers, and even highly sophisticated graphical workstations, beyond their limits. Our work therefore introduces a distributed rendering architecture that gracefully balances the workload between the client and a cluster-based server. We believe that a distributed rendering approach as described in this paper has three major benefits: it reduces the client's workload, it decreases the network traffic, and it allows already rendered scenes to be re-used.
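The client/server balancing decision can be caricatured as a makespan problem: choose how many scene objects the server renders (and ships over the network) so that the slower side finishes as early as possible. The cost model and numbers below are illustrative assumptions, not the paper's scheduler:

```python
def best_server_share(n_objects, t_client, t_server, t_net):
    # k objects rendered server-side cost k * (t_server + t_net)
    # (render + transfer); the remaining n - k cost (n - k) * t_client
    # on the client. Pick the k minimising the slower of the two sides.
    def makespan(k):
        return max((n_objects - k) * t_client, k * (t_server + t_net))
    return min(range(n_objects + 1), key=makespan)
```

In this toy model a fast server with a slow link ends up with a smaller share, which mirrors the paper's point that a good split reduces both client workload and network traffic.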

Relevance:

80.00%

Publisher:

Abstract:

Partial dynamic reconfiguration of FPGAs can be used to implement complex applications using the concept of virtual hardware. In this work we have used partial dynamic reconfiguration to implement a JPEG decoder with reduced area. The image decoding process was adapted for implementation on the FPGA fabric using this technique. The architecture was tested on a low-cost ZYNQ-7020 FPGA that supports dynamic reconfiguration. The results show that the proposed solution needs only 40% of the resources used by a static implementation. In exchange for the saved FPGA resources, the dynamic solution runs about 9x slower than the static one; a throughput of 7 images per second is achievable with the proposed partial dynamic reconfiguration solution.

Relevance:

80.00%

Publisher:

Abstract:

We advocate the use of a novel compressed sensing technique for accelerating the magnetic resonance image acquisition process, coined spread spectrum MR imaging or simply s2MRI. The method consists of pre-modulating the signal of interest with a linear chirp, resulting from the application of quadratic phase profiles, before random k-space under-sampling with uniform average density. The effectiveness of the procedure is theoretically underpinned by the optimization of the coherence between the sparsity and sensing bases. The application of the technique to single-coil acquisitions is thoroughly studied by means of numerical simulations as well as phantom and in vivo experiments on a 7T scanner. The corresponding results suggest a favorable comparison with state-of-the-art variable density k-space under-sampling approaches.
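The coherence argument can be illustrated with a toy 1-D experiment: a signal concentrated in a single k-space bin is spread across many bins after chirp pre-modulation, so random under-sampling is less likely to miss its energy. The chirp rate below is an arbitrary illustrative value:

```python
import cmath

def dft(x):
    # Naive O(n^2) discrete Fourier transform.
    n = len(x)
    return [sum(x[j] * cmath.exp(-2j * cmath.pi * k * j / n) for j in range(n))
            for k in range(n)]

def chirp_modulate(x, gamma=1.0):
    # Multiply by a linear chirp exp(i*pi*gamma*j^2/n): a quadratic
    # phase profile, as in the abstract.
    n = len(x)
    return [x[j] * cmath.exp(1j * cmath.pi * gamma * j * j / n) for j in range(n)]
```

For a constant signal all spectral energy sits in the DC bin; after chirp modulation the same energy (Parseval) is spread over many bins, which is the spread-spectrum effect the technique exploits.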

Relevance:

80.00%

Publisher:

Abstract:

This thesis presents two graphical user interfaces for the project DigiQ - Fusion of Digital and Visual Print Quality, a project for computationally modelling the subjective human experience of print quality by measuring the image with certain metrics. After presenting the user interfaces, methods are given for reducing the computation time of several of the metrics and of the image registration process required to compute them, together with details of their performance. The weighted-sample method for image registration decreased calculation times significantly, at the cost of some error. The random-sampling method for the metrics greatly reduced calculation time while maintaining excellent accuracy, but worked with only two of the metrics.
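The random-sampling idea can be imitated with a small sketch that estimates a global image statistic (here plain mean intensity, standing in for one of the DigiQ metrics, whose definitions are not given in the abstract) from a random subset of pixels:

```python
import random

def full_metric(img):
    # Exact mean intensity over every pixel.
    vals = [v for row in img for v in row]
    return sum(vals) / len(vals)

def sampled_metric(img, n_samples, seed=0):
    # Monte Carlo estimate of the same metric from n_samples random pixels;
    # the fixed seed makes the estimate reproducible.
    rng = random.Random(seed)
    h, w = len(img), len(img[0])
    total = sum(img[rng.randrange(h)][rng.randrange(w)] for _ in range(n_samples))
    return total / n_samples
```

Sampling a few hundred of the ten thousand pixels cuts the work by more than an order of magnitude while keeping the estimate close to the exact value, which is the trade-off the thesis reports for its random-sampling method.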