950 results for Image processing -- Digital techniques
Abstract:
OBJECTIVE: To evaluate the effects of carbon dioxide infiltration on adipocytes in the abdominal wall. METHODS: Fifteen female volunteers underwent CO2 infusion sessions over three consecutive weeks (two sessions per week, with intervals of two to three days between sessions). The volume of carbon dioxide infused per session, at previously marked points, was always calculated from the surface of the area to be treated, with a fixed infused volume of 250 mL/100 cm² of treated surface. The infiltration points were marked at an equal distance of 2 cm from one another. At each point, 10 mL was injected per session at a flow rate of 80 mL/min. Fragments of subcutaneous tissue were collected from the anterior abdominal wall before and after treatment. The number of adipocytes and their histomorphological changes (mean diameter, perimeter, length, width and number of adipocytes per observation field) were measured by computerized cytometry. Results were analyzed with the paired Student's t-test, adopting a significance level of 5% (p<0.05). RESULTS: A significant reduction was found in the number of adipocytes in the abdominal wall and in their area, diameter, perimeter, length and width after the use of hypercapnia (p=0.0001). CONCLUSION: Percutaneous CO2 infiltration reduces the population and modifies the morphology of the adipocytes in the anterior abdominal wall.
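For illustration, the paired Student's t-test used in this study can be reproduced in a few lines of Python; the before/after arrays below are hypothetical placeholders, not measurements from the study.

    # Hedged sketch: paired Student's t-test at the 5% significance level, as used
    # to compare adipocyte measurements before and after CO2 treatment. The numbers
    # are illustrative placeholders, not data from the study.
    import numpy as np
    from scipy import stats

    before = np.array([62.1, 58.4, 60.3, 65.0, 59.7])   # e.g. mean adipocyte diameter (um)
    after = np.array([55.2, 54.9, 56.1, 60.3, 55.0])

    t_stat, p_value = stats.ttest_rel(before, after)     # paired (dependent) samples
    print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
    print("significant at the 5% level" if p_value < 0.05 else "not significant at the 5% level")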
Abstract:
PURPOSE: The ability to predict and understand which biomechanical properties of the cornea are responsible for the stability or progression of keratoconus may be an important clinical and surgical tool for the eye-care professional. We have developed a finite element model of the cornea that attempts to predict keratoconus-like behavior and its evolution based on material properties of the corneal tissue. METHODS: Corneal material properties were modeled using bibliographic data, and corneal topography was based on literature values from a schematic eye model. Commercial software was used to simulate mechanical and surface properties when the cornea was subjected to different local parameters, such as elasticity. RESULTS: The simulation showed that, depending on the initial corneal surface shape, changes in local material properties and different intraocular pressure values induce a localized protuberance and an increase in curvature relative to the remaining portion of the cornea. CONCLUSIONS: This technique provides a quantitative and accurate approach to the problem of understanding the biomechanical nature of keratoconus. The implemented model has shown that changes in local material properties of the cornea and in intraocular pressure are intrinsically related to keratoconus pathology and its shape/curvature.
Abstract:
PURPOSE: To develop the instrumentation and software for wide-angle corneal topography using the traditional Placido disc. The goal is to allow a larger region of the cornea to be mapped by corneal topographers that use the Placido technique, through a simple adaptation of the target. METHODS: Using the traditional Placido disc of a conventional corneal topographer, 9 LEDs (Light Emitting Diodes) were fitted to the conical target so that the volunteer patient could fixate in different directions. For each direction, Placido images were digitized and processed to build, through an algorithm involving sophisticated computer graphics elements, a complete three-dimensional map of the entire cornea. RESULTS: The results presented in this work show that a region up to 100% larger can be mapped with this technique, allowing the clinician to map close to the corneal limbus. Results are presented for a spherical calibration surface and also for an in vivo cornea with a high degree of astigmatism, showing curvature and elevation. CONCLUSION: It is believed that this new technique can improve several procedures, for example contact lens fitting and algorithms for customized ablations for hyperopia, among others.
Abstract:
Oscillator networks have been developed to perform specific tasks related to image processing. Here we analytically investigate the existence of synchronism in a pair of phase oscillators that are dynamically coupled over short range. We then use these analytical results to design a network able to detect the borders of black-and-white figures. Each unit composing this network is a pair of such phase oscillators and is assigned to a pixel in the image. The couplings among the units forming the network are also dynamical. Border detection emerges from the network activity.
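The border-detection idea described above can be sketched with a generic Kuramoto-style grid of phase oscillators; the coupling form, the coherence threshold and all parameter values below are assumptions for illustration, not the pair dynamics analysed in the paper.

    # Hedged sketch: border detection with locally coupled phase oscillators.
    # One oscillator per pixel; Kuramoto-style coupling to the 4 nearest neighbours
    # is an assumption here and does not reproduce the paper's exact pair dynamics.
    import numpy as np

    def detect_borders(img, steps=200, dt=0.05, K=2.0):
        h, w = img.shape
        theta = 2 * np.pi * np.random.rand(h, w)        # random initial phases
        omega = 1.0 + img                               # pixel intensity sets natural frequency
        neighbours = ((1, 0), (-1, 0), (0, 1), (0, -1))
        for _ in range(steps):
            coupling = np.zeros_like(theta)
            for dy, dx in neighbours:
                nb = np.roll(np.roll(theta, dy, axis=0), dx, axis=1)
                coupling += np.sin(nb - theta)
            theta += dt * (omega + K * coupling / 4.0)
        # oscillators inside a uniform region synchronise; those straddling a
        # black/white border do not, so low local coherence marks the border
        coherence = np.zeros_like(theta)
        for dy, dx in neighbours:
            nb = np.roll(np.roll(theta, dy, axis=0), dx, axis=1)
            coherence += np.cos(nb - theta)
        return coherence / 4.0 < 0.9

    img = np.zeros((32, 32))
    img[8:24, 8:24] = 1.0                               # black-and-white test figure
    border_mask = detect_borders(img)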
Abstract:
Acoustic resonances are observed in high-pressure discharge lamps operated with AC input power modulated at frequencies in the kilohertz range. This paper describes an optical resonance detection method for high-intensity discharge lamps using computer-controlled cameras and image processing software. Experimental results showing acoustic resonances in high-pressure sodium lamps are presented.
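As a generic illustration of camera-based resonance detection, the sketch below scores arc movement by frame differencing; the threshold and the detection criterion are assumptions, since the abstract does not detail the actual image processing pipeline.

    # Hedged sketch: flagging arc instability from consecutive camera frames by
    # simple frame differencing. A resonant, flickering arc displaces the bright
    # discharge channel between frames and raises the motion score.
    import numpy as np

    def arc_motion_score(prev_frame, frame):
        # mean absolute intensity change between two consecutive frames
        return np.abs(frame.astype(float) - prev_frame.astype(float)).mean()

    def resonance_detected(frames, threshold=10.0):
        # 'threshold' is an illustrative value, not a calibrated one
        scores = [arc_motion_score(a, b) for a, b in zip(frames, frames[1:])]
        return max(scores) > threshold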
Abstract:
In Part I ["Fast Transforms for Acoustic Imaging-Part I: Theory," IEEE Transactions on Image Processing], we introduced the Kronecker array transform (KAT), a fast transform for imaging with separable arrays. Given a source distribution, the KAT produces the spectral matrix which would be measured by a separable sensor array. In Part II, we establish connections between the KAT, beamforming and 2-D convolutions, and show how these results can be used to accelerate classical and state-of-the-art array imaging algorithms. We also propose using the KAT to accelerate general-purpose regularized least-squares solvers. Using this approach, we avoid ill-conditioned deconvolution steps and obtain more accurate reconstructions than previously possible, while maintaining low computational costs. We also show how the KAT performs when imaging near-field source distributions, and illustrate the trade-off between accuracy and computational complexity. Finally, we show that separable designs can deliver accuracy competitive with multi-arm logarithmic spiral geometries, while having the computational advantages of the KAT.
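The computational advantage of a Kronecker-structured (separable) operator can be illustrated with the standard identity (A ⊗ B) vec(X) = vec(B X Aᵀ); this is the generic algebraic shortcut behind fast separable transforms, not the KAT itself.

    # Hedged sketch: evaluating a Kronecker-structured operator without forming
    # the full Kronecker product. This illustrates the generic identity
    # (A kron B) vec(X) = vec(B X A^T), not the KAT as defined in the paper.
    import numpy as np

    m, n = 32, 24
    A = np.random.randn(m, m)
    B = np.random.randn(n, n)
    X = np.random.randn(n, m)                          # data on a separable grid

    slow = np.kron(A, B) @ X.flatten(order="F")        # explicit product: O((mn)^2) memory/work
    fast = (B @ X @ A.T).flatten(order="F")            # separable evaluation: O(mn(m+n)) work

    assert np.allclose(slow, fast)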
Abstract:
This paper provides insights into liquid free water dynamics in wood vessels based on Lattice Boltzmann experiments. The anatomy of real wood samples was reconstructed from systematic 3-D analyses of the vessel contours derived from successive microscopic images. This virtual vascular system was then used to supply fluid-solid boundary conditions to a two-phase Lattice Boltzmann scheme and investigate capillary invasion of this hydrophilic porous medium. Behavior of the liquid phase was strongly dependent on anatomical features, especially vessel bifurcations and reconnections. Various parameters were examined in numerical experiments with ideal vessel bifurcations, to clarify our interpretation of these features.
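Why capillary invasion depends so strongly on vessel bifurcations can be made concrete with the generic Young-Laplace estimate of capillary pressure; the surface tension, contact angle and branch radii below are assumed values, and this is not the two-phase Lattice Boltzmann scheme used in the paper.

    # Hedged sketch: at a bifurcation, the narrower branch exerts the larger
    # capillary suction and tends to be invaded first (Young-Laplace estimate).
    import math

    gamma = 0.0728                      # surface tension of water at 20 C (N/m)
    theta = math.radians(30.0)          # assumed contact angle on the hydrophilic wall

    def capillary_pressure(radius_m):
        return 2.0 * gamma * math.cos(theta) / radius_m    # Pa

    for r in (20e-6, 50e-6):            # hypothetical branch radii after a bifurcation
        print(f"r = {r * 1e6:.0f} um -> Pc = {capillary_pressure(r):.0f} Pa")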
Abstract:
This paper provides a computational framework, based on Defeasible Logic, to capture some aspects of institutional agency. Our background is the Kanger-Lindahl-Pörn account of organised interaction, which describes this interaction within a multi-modal logical setting. This work focuses in particular on the notion of the counts-as link and on those of attempt and of personal and direct action to realise states of affairs. We show how standard Defeasible Logic can be extended to represent these concepts: the resulting system preserves some basic properties commonly attributed to them. In addition, the framework enjoys nice computational properties, as it turns out that the extension of any theory can be computed in time linear in the size of the theory itself.
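The flavour of defeasible reasoning with a superiority relation can be conveyed with a toy evaluator; the rules, the counts-as-style predicate names and the single round of conflict resolution below are illustrative assumptions and do not implement the paper's extended logic or its modal operators.

    # Hedged sketch: a toy defeasible-rule check with a superiority relation.
    # It handles only one level of conflict and none of the institutional-agency
    # operators introduced in the paper.
    facts = {"signed_by_clerk"}

    # each rule: (name, antecedents, conclusion); "~p" is the negation of "p"
    defeasible_rules = [
        ("r1", {"signed_by_clerk"}, "counts_as_official"),
        ("r2", {"clerk_suspended"}, "~counts_as_official"),
    ]
    superiority = {("r2", "r1")}        # r2 defeats r1 when both are applicable

    def negation(literal):
        return literal[1:] if literal.startswith("~") else "~" + literal

    def defeasibly_provable(literal):
        for name, body, head in defeasible_rules:
            if head != literal or not body <= facts:
                continue
            attackers = [n for n, b, h in defeasible_rules
                         if h == negation(literal) and b <= facts]
            if all((name, n) in superiority for n in attackers):
                return True
        return False

    print(defeasibly_provable("counts_as_official"))    # True: r1 fires, r2 is not applicable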
Abstract:
This article describes a method to turn astronomical imaging into a random number generator by using the positions of incident cosmic rays and hot pixels to generate bit streams. We subject the resultant bit streams to a battery of standard benchmark statistical tests for randomness and show that these bit streams are statistically the same as a perfect random bit stream. Strategies for improving and building upon this method are outlined.
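The basic bit-extraction step can be sketched as follows; the sigma-clipping detection of bright events and the use of coordinate parity bits are assumptions for illustration, and the article's whitening and benchmarking steps are not reproduced.

    # Hedged sketch: turning bright events (cosmic-ray hits, hot pixels) in an
    # astronomical frame into a raw bit stream via the parity of their coordinates.
    import numpy as np

    def bits_from_frame(frame, sigma=5.0):
        threshold = frame.mean() + sigma * frame.std()   # flag pixels far above the noise floor
        ys, xs = np.nonzero(frame > threshold)
        bits = []
        for y, x in zip(ys, xs):
            bits.append(int(x) & 1)                      # low-order bits of each event's position
            bits.append(int(y) & 1)
        return np.array(bits, dtype=np.uint8)

    # simulated dark frame with a few bright hits
    rng = np.random.default_rng(0)
    frame = rng.normal(100.0, 5.0, size=(512, 512))
    hits = rng.integers(0, 512, size=(200, 2))
    frame[hits[:, 0], hits[:, 1]] += 5000.0

    stream = bits_from_frame(frame)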
Abstract:
This paper proposes some variants of Temporal Defeasible Logic (TDL) for reasoning about normative modifications. These variants make it possible to differentiate cases in which, for example, modifications at some time change legal rules but their conclusions persist afterwards, from cases in which their conclusions are also blocked.
Abstract:
Extracting human postural information from video sequences has proved a difficult research problem. The most successful approaches to date have been based on particle filtering, whereby the underlying probability distribution is approximated by a set of particles. The shape of the underlying observational probability distribution plays a significant role in determining the success, in both accuracy and efficiency, of any visual tracker. In this paper we compare approaches used by other authors and present a cost-path approach that is commonly used in image segmentation problems but is not yet widely used in tracking applications.
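For readers unfamiliar with the baseline, a minimal bootstrap particle filter (predict, weight, resample) looks like the sketch below; this is a generic 1-D example with assumed motion and noise parameters, not the cost-path observation model proposed in the paper.

    # Hedged sketch: a minimal bootstrap particle filter tracking a 1-D state.
    import numpy as np

    rng = np.random.default_rng(1)
    n_particles, n_steps = 500, 50
    true_state = 0.0
    particles = rng.normal(0.0, 1.0, n_particles)

    for t in range(n_steps):
        true_state += 0.5 + rng.normal(0.0, 0.1)              # hidden motion
        observation = true_state + rng.normal(0.0, 0.5)       # noisy measurement

        particles += 0.5 + rng.normal(0.0, 0.1, n_particles)  # predict with the motion model
        weights = np.exp(-0.5 * ((observation - particles) / 0.5) ** 2)  # observation likelihood
        weights /= weights.sum()

        idx = rng.choice(n_particles, n_particles, p=weights) # resample in proportion to weight
        particles = particles[idx]

    estimate = particles.mean()     # approximates the posterior mean of the state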
Abstract:
The demand for more pixels is beginning to be met as manufacturers increase the native resolution of projector chips. Tiling several projectors still offers a solution to augment the pixel capacity of a display. However, problems of color and illumination uniformity across projectors need to be addressed, as well as the computer software required to drive such devices. We present the results obtained on a desktop-size tiled projector array of three D-ILA projectors sharing a common illumination source. A short-throw lens (0.8:1) on each projector yields a 21-in. diagonal for each image tile; the composite image on a 3×1 array is 3840×1024 pixels with a resolution of about 80 dpi. The system preserves desktop resolution, is compact, and can fit in a normal room or laboratory. The projectors are mounted on precision six-axis positioners, which allow pixel-level alignment. A fiber-optic beamsplitting system and a single set of red, green, and blue dichroic filters are the key to color and illumination uniformity. The D-ILA chips inside each projector can be adjusted separately to set or change characteristics such as contrast, brightness, or gamma curves. The projectors were then matched carefully: photometric variations were corrected, leading to a seamless image. Photometric measurements were performed to characterize the display and are reported here. This system is driven by a small PC cluster fitted with graphics cards and running Linux. It can be scaled to accommodate an array of 2×3 or 3×3 projectors, thus increasing the number of pixels of the final image. Finally, we present current uses of the display in fields such as astrophysics and archaeology (remote sensing).
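The per-chip gamma matching mentioned above can be illustrated with a small lookup table; the gamma values are hypothetical, not the measured responses of the D-ILA projectors.

    # Hedged sketch: remapping projector B's input levels so its response tracks
    # projector A's, given assumed gamma values for the two chips.
    import numpy as np

    levels = np.arange(256) / 255.0
    gamma_a, gamma_b = 2.2, 1.8                  # hypothetical measured responses

    # require B(lut(x)) = A(x):  lut(x)^gamma_b = x^gamma_a  =>  lut(x) = x^(gamma_a/gamma_b)
    lut_b = np.clip(levels ** (gamma_a / gamma_b) * 255.0, 0, 255).astype(np.uint8)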
Abstract:
One of the challenges in scientific visualization is to generate software libraries suitable for the large-scale data emerging from tera-scale simulations and instruments. We describe the efforts currently under way at SDSC and NPACI to address these challenges. The scope of the SDSC project spans data handling, graphics, visualization, and scientific application domains. Components of the research focus on the following areas: intelligent data storage, layout and handling, using an associated “Floor-Plan” (metadata); performance optimization on parallel architectures; extension of SDSC’s scalable, parallel, direct volume renderer to allow perspective viewing; and interactive rendering of fractional images (“imagelets”), which facilitates the examination of large datasets. These concepts are coordinated within a data-visualization pipeline, which operates on component data blocks sized to fit within the available computing resources. A key feature of the scheme is that the metadata, which tag the data blocks, can be propagated and applied consistently. This is possible at the disk level; in distributing the computations across parallel processors; in “imagelet” composition; and in feature tagging. The work reflects the emerging challenges and opportunities presented by the ongoing progress in high-performance computing (HPC) and the deployment of the data, computational, and visualization Grids.
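The idea of metadata travelling with each data block through the pipeline can be sketched with a small container type; the field names below are illustrative and are not the SDSC “Floor-Plan” schema.

    # Hedged sketch: a data block that carries its metadata through pipeline stages
    # so later steps (compositing, feature tagging) can place it consistently.
    from dataclasses import dataclass
    import numpy as np

    @dataclass
    class Block:
        data: np.ndarray        # one component block, sized to fit in memory
        origin: tuple           # position of the block within the full volume
        spacing: tuple          # voxel spacing, updated as the data are resampled
        features: set           # feature tags propagated with the block

    def downsample(block: Block, factor: int = 2) -> Block:
        return Block(block.data[::factor, ::factor, ::factor],
                     block.origin,
                     tuple(s * factor for s in block.spacing),
                     block.features)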
Abstract:
Trust is a vital feature for the Semantic Web: if users (humans and agents) are to use and integrate system answers, they must trust them. Thus, systems should be able to explain their actions, sources, and beliefs, and this issue is the topic of the proof layer in the design of the Semantic Web. This paper presents the design and implementation of a system for proof explanation on the Semantic Web, based on defeasible reasoning. The basis of this work is the DR-DEVICE system, which is extended to handle proofs. A critical aspect is the representation of proofs in an XML language, which is achieved by an extension of the RuleML language.
Abstract:
This article extends Defeasible Logic to deal with the contextual deliberation process of cognitive agents. First, we introduce meta-rules to reason with rules. Meta-rules are rules whose consequents are rules for motivational components, such as obligations, intentions and desires; in other words, they include nested rules. Second, we introduce explicit preferences among rules. These preferences deal with complex structures in which nested rules can be involved.