988 results for IMAGE ENHANCEMENT


Relevance: 30.00%

Abstract:

We present a controlled image smoothing and enhancement method based on a curvature flow interpretation of the geometric heat equation. Compared to existing techniques, the model has several distinct advantages. (i) It contains just one enhancement parameter. (ii) The scheme naturally inherits a stopping criterion from the image; continued application of the scheme produces no further change. (iii) The method is one of the fastest possible schemes based on a curvature-controlled approach.
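The abstract does not reproduce the scheme itself. For orientation, methods in this family evolve the image so that each iso-intensity contour moves with its own curvature, I_t = κ|∇I|. Below is a minimal NumPy sketch of one explicit time step of generic level-set curvature flow; the paper's single enhancement parameter and its image-derived stopping behaviour are not modelled here.

```python
import numpy as np

def curvature_flow_step(I, dt=0.1, eps=1e-8):
    """One explicit step of level-set curvature flow, I_t = kappa * |grad I|.
    np.roll gives periodic boundaries, adequate for a demonstration."""
    Ix  = (np.roll(I, -1, axis=1) - np.roll(I, 1, axis=1)) / 2.0
    Iy  = (np.roll(I, -1, axis=0) - np.roll(I, 1, axis=0)) / 2.0
    Ixx = np.roll(I, -1, axis=1) - 2.0 * I + np.roll(I, 1, axis=1)
    Iyy = np.roll(I, -1, axis=0) - 2.0 * I + np.roll(I, 1, axis=0)
    Ixy = (np.roll(np.roll(I, -1, 0), -1, 1) - np.roll(np.roll(I, -1, 0), 1, 1)
         - np.roll(np.roll(I,  1, 0), -1, 1) + np.roll(np.roll(I,  1, 0), 1, 1)) / 4.0
    # kappa * |grad I| = (Ixx*Iy^2 - 2*Ix*Iy*Ixy + Iyy*Ix^2) / (Ix^2 + Iy^2)
    speed = (Ixx * Iy**2 - 2.0 * Ix * Iy * Ixy + Iyy * Ix**2) / (Ix**2 + Iy**2 + eps)
    return I + dt * speed
```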

Relevance: 30.00%

Abstract:

Current image database metadata schemas require users to adopt a specific text-based vocabulary. Text-based metadata is good for searching but not for browsing. Existing image-based search facilities, on the other hand, are highly specialised and so suffer similar problems. Wexelblat's semantic dimensional spatial visualisation schemas go some way towards addressing this problem by making both searching and browsing more accessible to the user in a single interface. But the question of how, and what, initial metadata to enter into a database remains. Different people see different things in an image and will organise a collection in equally diverse ways. However, some similarity can be found across groups of users regardless of their reasoning. For example, a search on Amazon.com also returns other products, based on an averaging of how users navigate the database. In this paper, we report on applying this concept to a set of images, which we visualised using both traditional methods and the Amazon.com method. We report the findings of this comparative investigation in a case study setting involving a group of randomly selected participants. We conclude with the recommendation that, in combination, the traditional and averaging methods would enhance current database visualisation, searching, and browsing facilities.
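The abstract does not specify how the Amazon.com-style averaging works; as a loose illustration only, one common reading of it is item-to-item co-occurrence counting over user navigation sessions. All names in this sketch are hypothetical.

```python
from collections import defaultdict
from itertools import combinations

def co_viewed_counts(sessions):
    """Count how often two images were viewed in the same user session."""
    counts = defaultdict(int)
    for session in sessions:
        for a, b in combinations(sorted(set(session)), 2):
            counts[(a, b)] += 1
    return counts

def related_images(image, counts, top=3):
    """Images most often co-viewed with `image`, most frequent first."""
    scores = {(b if a == image else a): n
              for (a, b), n in counts.items() if image in (a, b)}
    return sorted(scores, key=scores.get, reverse=True)[:top]

# Toy navigation log: each inner list is one user's browsing session.
sessions = [["img1", "img2", "img3"], ["img2", "img3"], ["img1", "img3"]]
print(related_images("img3", co_viewed_counts(sessions)))
```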

Relevance: 30.00%

Abstract:

Visual perception depends on both light transmission through the eye and neuronal conduction through the visual pathway. Advances in clinical diagnostics and treatment modalities in recent years have increased the opportunities to improve the optical path and retinal image quality. Higher-order aberrations and retinal straylight are two major factors that influence light transmission through the eye and, ultimately, visual outcome. Recent technological advancements have brought these important factors into the clinical domain; however, the potential applications of these tools, and the considerations involved in interpreting their data, are widely underestimated. The purpose of this thesis was to validate and optimise wavefront analysers and a new clinical tool for the objective evaluation of intraocular scatter. The application of these methods in a clinical setting involving a range of conditions was also explored. The work was divided into two principal sections.

1. Wavefront aberrometry: optimisation, validation and clinical application. The main findings of this work were:
• Observer manipulation of the aberrometer increases variability by a factor of 3.
• Ocular misalignment can profoundly affect reliability, notably for off-axis aberrations.
• Aberrations measured with wavefront analysers based on different principles are not interchangeable, with poor relationships and significant differences between values.
• Instrument myopia of around 0.30 D is induced when performing wavefront analysis in non-cyclopleged eyes; values can be as high as 3 D, increasing as the baseline level of myopia decreases. Associated accommodation changes may produce relevant changes in the aberration profile, particularly with respect to spherical aberration.
• Young adult healthy Caucasian eyes have significantly more spherical aberration than Asian eyes when matched for age, gender, axial length and refractive error. Axial length is significantly correlated with most components of the aberration profile.

2. Intraocular light scatter: evaluation of subjective measures, and validation and application of a new objective method utilising clinically derived wavefront patterns. The main findings of this work were:
• Subjective measures of clinical straylight are highly repeatable; three measurements are suggested as the optimum number for increased reliability.
• Significant differences in straylight values were found for contact lenses designed for contrast enhancement compared with clear lenses of the same design and material specifications. Specifically, grey/green tints induced significantly higher values of retinal straylight.
• Wavefront patterns from a commercial Hartmann-Shack device can be used to obtain objective measures of scatter, and these correlate well with subjective straylight values.
• Perceived retinal straylight was similar in groups of patients implanted with monofocal and multifocal intraocular lenses. Correlation between objective and subjective measurements of scatter was poor, possibly due to different illumination conditions between the testing procedures, or to a neural component which may alter with age.

Careful acquisition yields highly reproducible in vivo measures of higher-order aberrations; however, data from different devices are not interchangeable, which brings the accuracy of measurement into question. Objective measures of intraocular straylight can be derived from clinical aberrometry and may be of great diagnostic and management importance in the future.
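As background to the aberration comparisons above: devices of this kind report Zernike coefficients, and higher-order aberrations are conventionally summarised as the root-mean-square of the orthonormal coefficients above second radial order. A small sketch with hypothetical coefficient values follows; the thesis's own analysis pipeline is not described in the abstract.

```python
import math

def higher_order_rms(zernike):
    """Total higher-order RMS wavefront error from orthonormal Zernike
    coefficients: sqrt of the sum of squared coefficients with radial
    order n >= 3 (lower orders are sphere/cylinder, not HOA)."""
    return math.sqrt(sum(c ** 2 for (n, m), c in zernike.items() if n >= 3))

# Illustrative coefficients in micrometres: vertical coma Z(3,-1),
# horizontal coma Z(3,1) and spherical aberration Z(4,0).
coeffs = {(3, -1): 0.05, (3, 1): -0.03, (4, 0): 0.12}
print(f"Higher-order RMS: {higher_order_rms(coeffs):.3f} um")
```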

Relevance: 30.00%

Abstract:

The aim of art as a transformer of the individual has numerous facets that give human beings a sense of enhancement and growth. The university recognises the need for our students to take part in this process of social transformation, in which they feel the need to help the community when its members are at risk of social exclusion. Art is considered a means, a tool and a purpose for the artist-pedagogue, to be used as a guide for renewal. The university, likewise, is seen as a focus of commitment, through the development of good practices and the adoption of an open, innovative attitude to any changes aimed at living harmoniously within a more just society.

Relevance: 30.00%

Abstract:

The goal of this study is to better simulate microscopic and voxel-based dynamic contrast enhancement in magnetic resonance imaging. Specifically, errors imposed by the traditional two-compartment model are reduced by introducing a novel Krogh cylinder network. The two-compartment model was developed for macroscopic pharmacokinetic analysis of dynamic contrast enhancement, and generalizing it to voxel dimensions imposes physiologically unrealistic assumptions because of the significant decrease in scale. In this project, a system of microscopic exchange between plasma and the extravascular-extracellular space is built while numerically simulating the local contrast agent flow between and inside image elements. To do this, tissue parameter maps were created, contrast agent was introduced to the tissue via a flow lattice, and various data sets were simulated. The effects of sources, tissue heterogeneity, and the contribution of individual tissue parameters to an image are modeled. Further, the study attempts to demonstrate the effects of a priori flow maps on image contrast, indicating that flow data are as important as permeability data when analyzing tumor contrast enhancement. In addition, the simulations indicate that it may be possible to obtain tumor-type diagnostic information by acquiring both flow and permeability data.
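For reference, the traditional two-compartment model mentioned above (the Kety/Tofts form standard in DCE-MRI) predicts the tissue concentration as the arterial input convolved with an exponential washout. A minimal discrete-time sketch follows; the function and variable names are illustrative, not taken from the paper, whose Krogh-cylinder refinement is not reproduced here.

```python
import numpy as np

def tofts_tissue_curve(t, cp, ktrans, kep):
    """Two-compartment tissue curve:
    Ct(t) = Ktrans * integral_0^t cp(tau) * exp(-kep * (t - tau)) dtau,
    evaluated by discrete convolution on a uniform time grid t."""
    dt = t[1] - t[0]
    decay = np.exp(-kep * t)
    return ktrans * dt * np.convolve(cp, decay)[: len(t)]

# Illustrative arterial input function (gamma-variate bolus) and parameters.
t = np.linspace(0.0, 300.0, 301)             # seconds
cp = 5.0 * (t / 30.0) * np.exp(-t / 30.0)    # plasma concentration, mM
ct = tofts_tissue_curve(t, cp, ktrans=0.25 / 60, kep=0.8 / 60)
```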

Relevance: 30.00%

Abstract:

Nanotechnology has revolutionised humanity's capability to build microscopic systems by manipulating materials on a molecular and atomic scale. Nanosystems are becoming increasingly smaller and more chemically complex, which increases the demand for microscopic characterisation techniques. Among others, transmission electron microscopy (TEM) is an indispensable tool that is increasingly used to study the structures of nanosystems down to the molecular and atomic scale. However, despite the effectiveness of this tool, it can only provide 2-dimensional projection (shadow) images of the 3D structure, leaving the 3-dimensional information hidden, which can lead to incomplete or erroneous characterisation. One very promising inspection method is electron tomography (ET), which is rapidly becoming an important tool for exploring the 3D nano-world. ET provides (sub-)nanometre resolution in all three dimensions of the sample under investigation. However, the fidelity of the tomogram achieved by current ET reconstruction procedures remains a major challenge. This thesis addresses the assessment and advancement of electron tomographic methods to enable high-fidelity three-dimensional investigations. A quality assessment investigation was conducted to provide a quantitative analysis of the quality of the main established ET reconstruction algorithms and to study the influence of the experimental conditions on the quality of the reconstructed tomogram. Regularly shaped nanoparticles were used as a ground truth for this study. It is concluded that the fidelity of post-reconstruction quantitative analysis and segmentation is limited mainly by the fidelity of the reconstructed tomogram. This motivates the development of an improved tomographic reconstruction process. In this thesis, a novel ET method is proposed, named dictionary learning electron tomography (DLET). DLET is based on the recent mathematical theory of compressed sensing (CS), which exploits the sparsity of ET tomograms to enable accurate reconstruction from undersampled (S)TEM tilt series. DLET learns the sparsifying transform (dictionary) adaptively and simultaneously reconstructs the tomogram from highly undersampled tilt series. In this method, sparsity is applied to overlapping image patches, favouring local structures. Furthermore, the dictionary is adapted to the specific tomogram instance, thereby favouring better sparsity and consequently higher-quality reconstructions. The reconstruction algorithm alternates between two steps: it learns the sparsifying dictionary and employs it to remove artifacts and noise in one step, and restores the tomogram data in the other. Simulated and real ET experiments on several morphologies were performed with a variety of setups. The reconstruction results validate the method's efficiency in both noiseless and noisy cases and show that it yields improved reconstruction quality with fast convergence. The proposed method enables the recovery of high-fidelity information without the need to choose a sparsifying transform in advance, or to ensure that the images strictly satisfy the pre-conditions of a particular transform (e.g. strictly piecewise constant for Total Variation minimisation). This can also avoid artifacts that specific sparsifying transforms can introduce (e.g. the staircase artifacts that may result from Total Variation minimisation).
Moreover, this thesis shows how reliable, elementally sensitive tomography using electron energy loss spectroscopy (EELS) is possible with the aid of both the appropriate use of dual EELS (DualEELS) and the DLET compressed sensing algorithm, making the best use of the limited data volume and signal-to-noise ratio inherent in core-loss EELS from nanoparticles of an industrially important material. Taken together, the results presented in this thesis demonstrate how high-fidelity ET reconstructions can be achieved using a compressed sensing approach.
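The DLET algorithm itself is defined in the thesis; the sketch below only illustrates the general shape of such an alternating scheme (dictionary-based patch denoising alternated with a data-fidelity correction), improvised here from off-the-shelf scikit-learn and scikit-image components. Every routine, constant and step size is an assumption, not the thesis's own solver.

```python
import numpy as np
from skimage.transform import radon, iradon
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import (extract_patches_2d,
                                              reconstruct_from_patches_2d)

def alternating_reconstruction(sinogram, angles, n_iter=5,
                               patch_size=(8, 8), n_atoms=64, step=0.1):
    size = sinogram.shape[0]
    x = iradon(sinogram, theta=angles, output_size=size)  # initial estimate
    for _ in range(n_iter):
        # Step 1: learn a dictionary on patches of the current estimate and
        # re-synthesise each patch sparsely (artifact/noise removal).
        patches = extract_patches_2d(x, patch_size)
        flat = patches.reshape(len(patches), -1)
        mean = flat.mean(axis=1, keepdims=True)
        dico = MiniBatchDictionaryLearning(n_components=n_atoms,
                                           transform_algorithm="omp",
                                           transform_n_nonzero_coefs=4)
        code = dico.fit(flat - mean).transform(flat - mean)
        denoised = (code @ dico.components_ + mean).reshape(patches.shape)
        x = reconstruct_from_patches_2d(denoised, x.shape)
        # Step 2: restore fidelity to the measured tilt series with an
        # unfiltered back-projection of the projection-space residual.
        residual = sinogram - radon(x, theta=angles)
        x += step * iradon(residual, theta=angles,
                           output_size=size, filter_name=None)
    return x
```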

Relevance: 30.00%

Abstract:

The abundance of visual data and the push for robust AI are driving the need for automated visual sensemaking. Computer Vision (CV) faces growing demand for models that can discern not only what images "represent," but also what they "evoke." This is a demand for tools that mimic human perception at a high semantic level, categorizing images based on concepts like freedom, danger, or safety. However, automating this process is challenging due to entropy, scarcity, subjectivity, and ethical considerations. These challenges not only impact performance but also underscore the critical need for interpretability. This dissertation focuses on abstract concept-based (AC) image classification, guided by three technical principles: situated grounding, performance enhancement, and interpretability. We introduce ART-stract, a novel dataset of cultural images annotated with ACs, which serves as the foundation for a series of experiments across four key domains: assessing the effectiveness of the end-to-end deep learning paradigm, exploring cognitively inspired semantic intermediaries, incorporating cultural and commonsense aspects, and neuro-symbolic integration of sensory-perceptual data with cognition-based knowledge. Our results demonstrate that integrating CV approaches with semantic technologies yields methods that surpass the current state of the art in AC image classification, outperforming the end-to-end deep vision paradigm. The results emphasize the role semantic technologies can play in developing systems that are both effective and interpretable, through capturing, situating, and reasoning over knowledge related to visual data. Furthermore, this dissertation explores the complex interplay between technical and socio-technical factors. By merging technical expertise with an understanding of human and societal aspects, we advocate for responsible labeling and training practices in visual media. These insights and techniques not only advance efforts in CV and explainable artificial intelligence but also propel us toward an era of AI development that harmonizes technical prowess with a deep awareness of its human and societal implications.

Relevance: 20.00%

Abstract:

Universidade Estadual de Campinas. Faculdade de Educação Física

Relevance: 20.00%

Abstract:

PURPOSE: Juvenile idiopathic arthritis (JIA) has an unknown etiology, and involvement of the temporomandibular joint (TMJ) is rare in the early phase of the disease. The present article describes the use of computed tomography (CT) and magnetic resonance imaging (MRI) for the diagnosis of an affected TMJ in JIA.
CASE DESCRIPTION: A 12-year-old female Caucasian patient with systemic rheumatoid arthritis and involvement of multiple joints was referred to the Imaging Center for TMJ assessment. The patient reported TMJ pain and limited mouth opening. Helical CT examination of the TMJ region showed asymmetric mandibular condyles, erosion of the right condyle and an osteophyte-like formation. The MRI examination showed erosion of the right mandibular condyle, osteophytes, and displacement without reduction and disruption of the articular disc.
CONCLUSION: TMJ disorders occurring as a consequence of JIA must be carefully assessed by modern imaging methods such as CT and MRI. CT is very useful for the evaluation of discrete bone changes, which are not identified by conventional radiographs in the early phase of JIA. MRI allows the evaluation of soft tissues, the identification of acute articular inflammation and the differentiation between pannus and synovial hypertrophy.

Relevance: 20.00%

Abstract:

Fifty bursae of Fabricius (BF) were examined by conventional optical microscopy, and digital images were acquired and processed using Matlab® 6.5 software. An Artificial Neural Network (ANN) was generated using Neuroshell® Classifier software, and the optical and digital data were compared. The ANN produced a classification of the digital scores comparable to the optical scores. The ANN correctly classified the majority of the follicles, reaching a sensitivity of 89% and a specificity of 96%. When the follicles were scored and grouped in a binary fashion, the sensitivity increased to 90% and the specificity was 92%. These results demonstrate that the combination of digital image analysis and an ANN is a useful tool for the pathological classification of BF lymphoid depletion. In addition, it provides objective results that allow the magnitude of error in diagnosis and classification to be measured, making comparisons between databases feasible.
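For reference, the two rates quoted above follow directly from the classifier's confusion matrix. A minimal sketch with hypothetical counts (the paper reports the resulting rates, not the underlying counts):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Illustrative confusion-matrix counts chosen to reproduce 89% / 96%.
sens, spec = sensitivity_specificity(tp=89, fn=11, tn=96, fp=4)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")
```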

Relevance: 20.00%

Abstract:

Background: Lignin and hemicelluloses are the major components limiting enzyme infiltration into cell walls. Determining the topochemical distribution of lignin and aromatics in sugar cane might provide important data on the recalcitrance of specific cells. We used cellular ultraviolet (UV) microspectrophotometry (UMSP) to topochemically detect lignin and hydroxycinnamic acids in individual fiber, vessel and parenchyma cell walls of untreated and chlorite-treated sugar cane. Internodes, presenting typical vascular bundles and sucrose-storing parenchyma cells, were divided into rind and pith fractions.
Results: Vascular bundles were more abundant in the rind, whereas parenchyma cells predominated in the pith region. UV measurements of untreated fiber cell walls gave absorbance spectra typical of grass lignin, with a band at 278 nm and a pronounced shoulder at 315 nm, assigned to the presence of hydroxycinnamic acids linked to lignin and/or to arabino-methylglucurono-xylans. The cell walls of vessels had the highest level of lignification, followed by those of fibers and parenchyma. Pith parenchyma cell walls were characterized by very low absorbance values at 278 nm; however, a distinct peak at 315 nm indicated that pith parenchyma cells are not extensively lignified but contain significant amounts of hydroxycinnamic acids. Cellular UV image profiles scanned at the absorbance maximum of 278 nm identified the pattern of lignin distribution in the individual cell walls, with the highest concentration occurring in the middle lamella and cell corners. Chlorite treatment caused a rapid removal of hydroxycinnamic acids from parenchyma cell walls, whereas the thicker fiber cell walls were delignified only after a long treatment duration (4 hours). Untreated pith samples were promptly hydrolyzed by cellulases, reaching 63% cellulose conversion after 72 hours of hydrolysis, whereas untreated rind samples achieved only 20%.
Conclusion: The low recalcitrance of pith cells correlated with the low UV absorbance values seen in parenchyma cells. Chlorite treatment of pith cells did not enhance cellulose conversion. By contrast, application of the same treatment to rind cells led to significant removal of hydroxycinnamic acids and lignin, resulting in marked enhancement of cellulose conversion by cellulases.

Relevance: 20.00%

Abstract:

We report large photoluminescence (PL) enhancement in Eu³⁺-doped GeO₂-Bi₂O₃ glasses containing gold nanoparticles (NPs). Growth of approximately 1000% in the PL intensity corresponding to the Eu³⁺ transition ⁵D₀ → ⁷F₂, at 614 nm, was observed in comparison with a reference sample containing no gold NPs. Other PL bands from 580 to 700 nm are also enhanced. The enhancement of the PL intensity is attributed to the increased local field at the Eu³⁺ sites due to the presence of the NPs, and to energy transfer from the excited NPs to the Eu³⁺ ions.

Relevance: 20.00%

Abstract:

Background: The present work applies decision theory to radiological image quality control (QC) in the diagnostic routine. The main problem addressed in the framework of decision theory is whether to accept or reject a film lot from a radiology service. The probability of each decision, for a determined set of variables, was obtained from the selected films.
Methods: Based on the routine of a radiology service, a decision probability function was determined for each considered group of combined characteristics. These characteristics were related to film quality control. The parameters were framed in a set of 8 possibilities, resulting in 256 possible decision rules. To determine a general utility function for assessing decision risk, we used a single parameter, r. The payoffs chosen were: diagnostic result (correct/incorrect), cost (high/low), and patient satisfaction (yes/no), resulting in eight possible combinations.
Results: Depending on the value of r, more or less risk is associated with the decision-making. The utility function was evaluated to determine the probability of a decision. The decision was made using the opinions of patients or administrators from a radiology service center.
Conclusion: The model is a formal quantitative approach to decisions about medical imaging quality, providing an instrument to discriminate what is really necessary in order to accept or reject a film or a film lot. The method presented herein can help to assess the risk level of an incorrect radiological diagnosis decision.
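To make the combinatorics explicit: three binary payoffs give 2³ = 8 outcome combinations, and a decision rule assigns accept or reject to each combination, hence 2⁸ = 256 rules. A schematic sketch follows; the paper's utility function in r is not given in the abstract, so `utility` here is a placeholder.

```python
from itertools import product

# Three binary payoffs: diagnosis correct?, cost low?, patient satisfied?
states = list(product([True, False], repeat=3))            # 8 combinations

# A decision rule maps each of the 8 states to accept/reject: 2**8 = 256.
rules = list(product(["accept", "reject"], repeat=len(states)))
assert len(rules) == 256

def expected_utility(rule, p_state, utility):
    """Risk-weighted value of one rule. p_state holds the decision
    probabilities estimated from the film sample; utility(decision, state)
    would encode the paper's single risk parameter r."""
    return sum(p * utility(d, s) for d, s, p in zip(rule, states, p_state))
```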