970 results for low-resolution
Abstract:
Low-energy x-ray fluorescence (LEXRF) detection was optimized for imaging cerebral glucose metabolism by mapping the fluorine LEXRF signal of 19F in 19FDG, trapped as intracellular 19F-deoxyglucose-6-phosphate (19FDG-6P), at 1 μm spatial resolution from 3 μm thick brain slices. 19FDG metabolism was evaluated in brain structures closely resembling the general cerebral cytoarchitecture following formalin fixation of brain slices and their inclusion in an Epon matrix. Two-dimensional distribution maps of 19FDG-6P were placed in a cytoarchitectural and morphological context by simultaneous LEXRF mapping of N and O, and scanning transmission x-ray microscopy (STXM) imaging. A disproportionately high uptake and metabolism of glucose was found in neuropil relative to intracellular domains of the cell body of hypothalamic neurons, showing directly that neurons, like glial cells, also metabolize glucose. As 19F-deoxyglucose-6P is structurally identical to 18F-deoxyglucose-6P, LEXRF of subcellular 19F provides a link to in vivo 18FDG PET, forming a novel basis for understanding the physiological mechanisms underlying the 18FDG PET image and the contribution of neurons and glia to the PET signal.
Abstract:
Computed tomography (CT) is an imaging technique in which interest has been quickly growing since it began to be used in the 1970s. Today, it has become an extensively used modality because of its ability to produce accurate diagnostic images. However, even if a direct benefit to patient healthcare is attributed to CT, the dramatic increase in the number of CT examinations performed has raised concerns about the potential negative effects of ionising radiation on the population. Among those negative effects, one of the major risks remaining is the development of cancers associated with exposure to diagnostic X-ray procedures. In order to ensure that the benefit-risk ratio remains in favour of the patient, it is necessary to make sure that the delivered dose leads to the proper diagnosis without producing unnecessarily high-quality images. This optimisation scheme is already an important concern for adult patients, but it must become an even greater priority when examinations are performed on children or young adults, in particular with follow-up studies which require several CT procedures over the patient's life. Indeed, children and young adults are more sensitive to radiation due to their faster metabolism.
In addition, harmful consequences have a higher probability of occurring because of a younger patient's longer life expectancy. The recent introduction of iterative reconstruction algorithms, which were designed to substantially reduce dose, is certainly a major achievement in CT evolution, but it has also created difficulties in the quality assessment of the images produced using those algorithms. The goal of the present work was to propose a strategy to investigate the potential of iterative reconstructions to reduce dose without compromising the ability to answer the diagnostic questions. The major difficulty lies in finding a clinically relevant way to estimate image quality. To ensure the choice of pertinent image quality criteria, this work was performed in close collaboration with radiologists. The work began by tackling the way to characterise image quality in musculoskeletal examinations. We focused, in particular, on the behaviour of image noise and spatial resolution when iterative image reconstruction was used. The analysis of these physical parameters allowed radiologists to adapt their image acquisition and reconstruction protocols while knowing what loss of image quality to expect. This work also dealt with the loss of low-contrast detectability associated with dose reduction, a major concern in abdominal investigations. Knowing that alternatives to classical Fourier-space metrics had to be used to assess image quality, we focused on the use of mathematical model observers. Our experimental parameters determined the type of model to use.
Ideal model observers were applied to characterise image quality when purely objective results about signal detectability were sought, whereas anthropomorphic model observers were used in a more clinical context, when the results had to be compared with those of human observers, thus taking advantage of their incorporation of human visual system elements. This work confirmed that the use of model observers makes it possible to assess image quality using a task-based approach, which, in turn, establishes a bridge between medical physicists and radiologists. It also demonstrated that statistical iterative reconstructions have the potential to reduce the delivered dose without impairing the quality of the diagnosis. Among the different types of iterative reconstructions, model-based ones offer the greatest potential, since images produced using this modality can still lead to an accurate diagnosis even when acquired at very low dose. This work has also clarified the role of medical physicists in CT imaging: standard metrics remain important for assessing unit compliance with legal requirements, but model observers are the tool of choice for optimising imaging protocols.
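The model-observer approach described above can be illustrated with a minimal non-prewhitening (NPW) observer, one of the simplest variants used in task-based image quality assessment. This is a generic sketch, not the implementation from the thesis; the function name and the synthetic data are assumptions:

```python
import numpy as np

def npw_dprime(signal_present, signal_absent):
    """Non-prewhitening model observer.

    Each input has shape (n_images, n_pixels). The template is the mean
    signal-present minus mean signal-absent image; the detectability index
    d' is computed from the scalar template responses of both classes."""
    sp = np.asarray(signal_present, dtype=float)
    sa = np.asarray(signal_absent, dtype=float)
    template = sp.mean(axis=0) - sa.mean(axis=0)  # NPW matched filter
    t_sp = sp @ template                          # responses, signal present
    t_sa = sa @ template                          # responses, signal absent
    num = t_sp.mean() - t_sa.mean()
    den = np.sqrt(0.5 * (t_sp.var(ddof=1) + t_sa.var(ddof=1)))
    return num / den
```

On synthetic image ensembles with a known inserted signal, d' quantifies how detectable the signal is at a given dose or reconstruction setting.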
Abstract:
OBJECTIVE: To gather and clarify the actual effects of low-level laser therapy on wound healing and its most effective modes of application in human and veterinary medicine. METHODS: We searched original articles published in journals between the years 2000 and 2011, in Spanish, English, French and Portuguese, in the following databases: Lilacs, Medline, PubMed and Bireme; articles had to contain a methodological description of the experimental design and the parameters used. RESULTS: Doses ranging from 3 to 6 J/cm2 appear to be the most effective, and doses above 10 J/cm2 are associated with deleterious effects. Wavelengths ranging from 632.8 to 1000 nm remain those that provide the most satisfactory results in the wound healing process. CONCLUSION: Low-level laser can be safely applied to accelerate the resolution of cutaneous wounds, although this is closely related to the choice of parameters such as dose, time of exposure and wavelength.
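The dose window reported above (3-6 J/cm2 effective, above 10 J/cm2 deleterious) relates to treatment settings through the standard fluence relation dose = power x time / area. The helper below is a hypothetical illustration, not taken from the review; the example numbers are assumptions:

```python
def laser_dose_j_per_cm2(power_mw, spot_area_cm2, exposure_s):
    """Fluence delivered to the wound: dose = power x time / area.

    power_mw      -- laser output power in milliwatts
    spot_area_cm2 -- irradiated spot area in cm^2
    exposure_s    -- exposure time in seconds
    Returns the energy density in J/cm^2."""
    return (power_mw / 1000.0) * exposure_s / spot_area_cm2

# e.g. 30 mW over a 0.1 cm^2 spot for 20 s delivers 6 J/cm^2,
# at the top of the reportedly effective 3-6 J/cm^2 window.
```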
Abstract:
High resolution proton nuclear magnetic resonance spectroscopy (¹H MRS) can be used to detect biochemical changes in vitro caused by distinct pathologies. It can reveal distinct metabolic profiles of brain tumors although the accurate analysis and classification of different spectra remains a challenge. In this study, the pattern recognition method partial least squares discriminant analysis (PLS-DA) was used to classify 11.7 T ¹H MRS spectra of brain tissue extracts from patients with brain tumors into four classes (high-grade neuroglial, low-grade neuroglial, non-neuroglial, and metastasis) and a group of control brain tissue. PLS-DA revealed 9 metabolites as the most important in group differentiation: γ-aminobutyric acid, acetoacetate, alanine, creatine, glutamate/glutamine, glycine, myo-inositol, N-acetylaspartate, and choline compounds. Leave-one-out cross-validation showed that PLS-DA was efficient in group characterization. The metabolic patterns detected can be explained on the basis of previous multimodal studies of tumor metabolism and are consistent with neoplastic cell abnormalities possibly related to high turnover, resistance to apoptosis, osmotic stress and tumor tendency to use alternative energetic pathways such as glycolysis and ketogenesis.
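The leave-one-out cross-validation used above can be sketched in a few lines. The classifier here is a simple nearest-centroid stand-in for brevity (PLS-DA itself adds a latent-variable projection step, omitted here); all names and data are illustrative:

```python
import numpy as np

def loo_accuracy(X, y):
    """Leave-one-out CV: hold out each spectrum in turn, fit a
    nearest-centroid classifier on the rest, and return the fraction
    of held-out spectra assigned to the correct class."""
    X, y = np.asarray(X, dtype=float), np.asarray(y)
    correct = 0
    for i in range(len(X)):
        mask = np.arange(len(X)) != i        # training fold
        Xtr, ytr = X[mask], y[mask]
        centroids = {c: Xtr[ytr == c].mean(axis=0) for c in np.unique(ytr)}
        pred = min(centroids, key=lambda c: np.linalg.norm(X[i] - centroids[c]))
        correct += pred == y[i]
    return correct / len(X)
```

With real metabolite-concentration features, a high LOO accuracy indicates the groups are separable, which is the sense in which the abstract reports efficient group characterization.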
Abstract:
The human striatum is a heterogeneous structure representing a major part of the dopamine (DA) system's basal ganglia input and output. Positron emission tomography (PET) is a powerful tool for imaging DA neurotransmission. However, PET measurements suffer from bias caused by low spatial resolution, especially when imaging small, D2/3-rich structures such as the ventral striatum (VST). The brain-dedicated high-resolution PET scanner ECAT HRRT (Siemens Medical Solutions, Knoxville, TN, USA) has superior resolution capabilities compared with its predecessors. In the quantification of striatal D2/3 binding, the highly selective D2/3 antagonist [11C]raclopride is recognized as a well-validated tracer in vivo. The aim of this thesis was to use a traditional test-retest setting to evaluate the feasibility of utilizing the HRRT scanner for exploring not only small brain regions such as the VST but also low-density D2/3 areas such as the cortex. It was demonstrated that the measurement of striatal D2/3 binding was very reliable, even when studying small brain structures or prolonging the scanning interval. Furthermore, the cortical test-retest parameters displayed good to moderate reproducibility. For the first time in vivo, it was revealed that there are significant divergent rostrocaudal gradients of [11C]raclopride binding in striatal subregions. These results indicate that high-resolution [11C]raclopride PET is very reliable, and its improved sensitivity means that it should be possible to detect the often very subtle changes occurring in DA transmission. Another major advantage is the possibility of measuring striatal and cortical areas simultaneously. The divergent gradients of D2/3 binding may have functional significance, and the average binding distribution could serve as the basis for a future database. Key words: dopamine, PET, HRRT, [11C]raclopride, striatum, VST, gradients, test-retest.
Abstract:
A Czerny mount double monochromator is used to measure Raman scattered radiation near 90° from a crystalline silicon sample. Incident light is provided by a mixed-gas Kr-Ar laser operating at 5145 Å. The double monochromator is calibrated to true wavelength by comparison of Kr and Ar emission line positions (Å) to the grating position display (Å) [1]. The relationship was found to be linear and can be described by y = 1.219873x − 1209.32, (1) where y is true wavelength (Å) and x is grating position display (Å). The Raman emission spectra are collected via C++ encoded software, which displays a mV signal from a photodetector and allows stepping control of the gratings via an A/D interface [2]. The software collection parameters, detector temperature and optics are optimised to yield the best quality spectra. The inclusion of a cryostat allows for temperature-dependent capability ranging from 4 K to ≈ 350 K. Silicon Stokes temperature-dependent Raman spectra generally show agreement with literature results [3] in their frequency hardening, FWHM reduction and intensity increase as temperature is reduced. Tests reveal that a re-alignment of the double monochromator is necessary before spectral resolution can approach literature standard. This has not yet been carried out due to time constraints.
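The linear calibration (1) can be applied directly to convert the grating position display to true wavelength. The function below encodes the fitted coefficients reported in the abstract; the function name is an assumption:

```python
def true_wavelength(grating_position):
    """Linear calibration from the Kr/Ar emission-line fit, Eq. (1):
    y = 1.219873 * x - 1209.32,
    where x is the grating position display (Angstrom) and y the true
    wavelength (Angstrom)."""
    return 1.219873 * grating_position - 1209.32
```

Any spectrum recorded against the grating display axis can then be re-plotted on a true-wavelength axis point by point.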
Abstract:
In this paper we investigate the problem of cache resolution in a mobile peer-to-peer ad hoc network. In our vision, cache resolution should satisfy the following requirements: (i) it should result in low message overhead, and (ii) the information should be retrieved with minimum delay. We show that these goals can be achieved by splitting the one-hop neighbours into two sets based on the transmission range. The proposed approach reduces the number of messages flooded into the network to find the requested data. This scheme is fully distributed and comes at very low cost in terms of cache overhead. The experimental results are promising with respect to the metrics studied.
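The neighbour-splitting idea can be sketched as a simple partition of the one-hop neighbour table by distance relative to a range threshold, so that queries are forwarded to one set before the other. The data layout and threshold are assumptions for illustration, not the paper's protocol:

```python
def split_neighbours(neighbours, split_range):
    """Partition one-hop neighbours into near/far sets.

    neighbours  -- dict mapping node id -> estimated distance (same units
                   as split_range, e.g. metres)
    split_range -- distance threshold derived from the transmission range
    Returns (near, far) lists of node ids; a query can be sent to the
    near set first, flooding the far set only on a cache miss."""
    near = [n for n, d in neighbours.items() if d <= split_range]
    far = [n for n, d in neighbours.items() if d > split_range]
    return near, far
```

Forwarding to the near set first keeps message overhead low in the common case, which is the behaviour the abstract's metrics measure.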
Abstract:
Information display technology is a rapidly growing research and development field. Using state-of-the-art technology, optical resolution can be increased dramatically with organic light-emitting diodes, since the light-emitting layer is very thin, under 100 nm. The main question is what pixel size is technologically achievable. The next generation of displays will consider three-dimensional image display. In 2D, one considers vertical and horizontal resolutions. In 3D or holographic images, there is another dimension: depth. The major requirement is high resolution in the horizontal dimension in order to sustain the third dimension, using special lenticular glass or barrier masks to separate the views for each eye. A high-resolution 3D display offers hundreds of different views of objects or landscapes. OLEDs have the potential to be a key technology for information displays in the future. The display technology presented in this work promises to bring into use bright-colour 3D flat panel displays in a unique way. Unlike the conventional TFT matrix, OLED displays have constant brightness and colour, independent of the viewing angle, i.e. the observer's position in front of the screen. A sandwich (just 0.1 micron thick) of organic thin films between two conductors makes an OLED device. These special materials are named electroluminescent organic semiconductors (or organic photoconductors, OPCs). When electrical current is applied, a bright light is emitted (electrophosphorescence) from the formed organic light-emitting diode. Usually an ITO layer is used as a transparent electrode for OLEDs. Such displays were the first in volume manufacture, and only a few products are available in the market at present. The key challenges that OLED technology faces in these application areas are: producing high-quality white light, achieving low manufacturing costs, and increasing efficiency and lifetime at high brightness.
Looking towards the future, by combining OLED with specially constructed surface lenses and proper image management software it will be possible to achieve 3D images.
Abstract:
Relativistic multi-configuration Dirac-Fock wavefunctions, coupled to good angular momentum J, have been calculated for low lying states of Ba I and Ba II. The resulting electronic factors show good agreement with data derived from recent high-resolution laser spectroscopy experiments and results from a comparison of muonic and optical data.
The Inertio-Elastic Planar Entry Flow of Low-Viscosity Elastic Fluids in Micro-fabricated Geometries
Abstract:
The non-Newtonian flow of dilute aqueous polyethylene oxide (PEO) solutions through microfabricated planar abrupt contraction-expansions is investigated. The contraction geometries are fabricated from a high-resolution chrome mask and cross-linked PDMS gels using the tools of soft lithography. The small length scales and high deformation rates in the contraction throat lead to significant extensional flow effects even with dilute polymer solutions having time constants on the order of milliseconds. The dimensionless extra pressure drop across the contraction increases by more than 200% and is accompanied by significant upstream vortex growth. Streak photography and videomicroscopy using epifluorescent particles show that the flow ultimately becomes unstable and three-dimensional. The moderate Reynolds numbers (0.03 ≤ Re ≤ 44) associated with these high Deborah number (0 ≤ De ≤ 600) microfluidic flows result in the exploration of new regions of the Re–De parameter space in which the effects of both elasticity and inertia can be observed. Understanding such interactions will be increasingly important in microfluidic applications involving complex fluids and can best be interpreted in terms of the elasticity number, El = De/Re, which is independent of the flow kinematics and depends only on the fluid rheology and the characteristic size of the device.
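The kinematics-independence of the elasticity number can be seen directly: with the conventional definitions De = λU/L and Re = ρUL/η sharing the same characteristic velocity U and length L, the velocity cancels in El = De/Re. A sketch under those assumed definitions (symbols: λ relaxation time, ρ density, η viscosity, L characteristic size):

```python
def deborah(relax_time, velocity, length):
    """De = lambda * U / L."""
    return relax_time * velocity / length

def reynolds(density, velocity, length, viscosity):
    """Re = rho * U * L / eta."""
    return density * velocity * length / viscosity

def elasticity(relax_time, density, viscosity, length):
    """El = De / Re = lambda * eta / (rho * L^2).

    The velocity cancels, so El depends only on fluid rheology
    and the characteristic size of the device."""
    return relax_time * viscosity / (density * length ** 2)
```

Shrinking L by a factor of 10 raises El a hundredfold, which is why microfabricated geometries access inertio-elastic regimes unreachable at macroscopic scales.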
Abstract:
Flooding is a major hazard in both rural and urban areas worldwide, but it is in urban areas that the impacts are most severe. An investigation of the ability of high resolution TerraSAR-X data to detect flooded regions in urban areas is described. An important application for this would be the calibration and validation of the flood extent predicted by an urban flood inundation model. To date, research on such models has been hampered by lack of suitable distributed validation data. The study uses a 3m resolution TerraSAR-X image of a 1-in-150 year flood near Tewkesbury, UK, in 2007, for which contemporaneous aerial photography exists for validation. The DLR SETES SAR simulator was used in conjunction with airborne LiDAR data to estimate regions of the TerraSAR-X image in which water would not be visible due to radar shadow or layover caused by buildings and taller vegetation, and these regions were masked out in the flood detection process. A semi-automatic algorithm for the detection of floodwater was developed, based on a hybrid approach. Flooding in rural areas adjacent to the urban areas was detected using an active contour model (snake) region-growing algorithm seeded using the un-flooded river channel network, which was applied to the TerraSAR-X image fused with the LiDAR DTM to ensure the smooth variation of heights along the reach. A simpler region-growing approach was used in the urban areas, which was initialized using knowledge of the flood waterline in the rural areas. Seed pixels having low backscatter were identified in the urban areas using supervised classification based on training areas for water taken from the rural flood, and non-water taken from the higher urban areas. Seed pixels were required to have heights less than a spatially-varying height threshold determined from nearby rural waterline heights. 
Seed pixels were clustered into urban flood regions based on their close proximity, rather than requiring that all pixels in the region should have low backscatter. This approach was taken because it appeared that urban water backscatter values were corrupted in some pixels, perhaps due to contributions from side-lobes of strong reflectors nearby. The TerraSAR-X urban flood extent was validated using the flood extent visible in the aerial photos. It turned out that 76% of the urban water pixels visible to TerraSAR-X were correctly detected, with an associated false positive rate of 25%. If all urban water pixels were considered, including those in shadow and layover regions, these figures fell to 58% and 19% respectively. These findings indicate that TerraSAR-X is capable of providing useful data for the calibration and validation of urban flood inundation models.
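The seeded region-growing step can be sketched as a breadth-first flood fill over the image grid. This simplified version uses a fixed backscatter threshold, 4-connectivity, and a uniform height threshold, whereas the study clusters seed pixels by proximity and uses a spatially varying height threshold derived from rural waterline heights; all names and values here are illustrative:

```python
from collections import deque

def grow_flood(backscatter, height, seeds, db_max, h_max):
    """Grow a flood region from seed pixels.

    backscatter -- 2D list of SAR backscatter values (dB)
    height      -- 2D list of LiDAR heights (m)
    seeds       -- list of (row, col) seed pixels
    db_max      -- accept pixels with backscatter below this threshold
    h_max       -- accept pixels with height below this threshold
    Returns the set of (row, col) pixels classified as flooded."""
    rows, cols = len(backscatter), len(backscatter[0])
    flooded, queue = set(seeds), deque(seeds)
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):  # 4-connected
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and (nr, nc) not in flooded
                    and backscatter[nr][nc] < db_max
                    and height[nr][nc] < h_max):
                flooded.add((nr, nc))
                queue.append((nr, nc))
    return flooded
```

Relaxing the per-pixel backscatter test to a proximity-based clustering of seeds, as the study does, makes the method robust to pixels corrupted by side-lobes of nearby strong reflectors.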
Abstract:
This article describes the development and evaluation of the U.K.’s new High-Resolution Global Environmental Model (HiGEM), which is based on the latest climate configuration of the Met Office Unified Model, known as the Hadley Centre Global Environmental Model, version 1 (HadGEM1). In HiGEM, the horizontal resolution has been increased to 0.83° latitude × 1.25° longitude for the atmosphere, and 1/3° × 1/3° globally for the ocean. Multidecadal integrations of HiGEM, and the lower-resolution HadGEM, are used to explore the impact of resolution on the fidelity of climate simulations. Generally, SST errors are reduced in HiGEM. Cold SST errors associated with the path of the North Atlantic Drift improve, and warm SST errors are reduced in upwelling stratocumulus regions, where the simulation of low-level cloud is better at higher resolution. The ocean model in HiGEM allows ocean eddies to be partially resolved, which dramatically improves the representation of sea surface height variability. In the Southern Ocean, most of the heat transport in HiGEM is achieved by resolved eddy motions, replacing the parameterized eddy heat transport of the lower-resolution model. HiGEM is also able to simulate more realistically small-scale features in the wind stress curl around islands and oceanic SST fronts, which may have implications for oceanic upwelling and ocean biology. Higher resolution in both the atmosphere and the ocean allows coupling to occur on small spatial scales. In particular, the small-scale interaction recently seen in satellite imagery between the atmosphere and tropical instability waves in the tropical Pacific Ocean is realistically captured in HiGEM. Tropical instability waves play a role in improving the simulation of the mean state of the tropical Pacific, which has important implications for climate variability.
In particular, all aspects of the simulation of ENSO (spatial patterns, the time scales at which ENSO occurs, and global teleconnections) are much improved in HiGEM.
Abstract:
The intraseasonal variability (ISV) of the Indian summer monsoon is dominated by a 30–50 day oscillation between “active” and “break” events of enhanced and reduced rainfall over the subcontinent, respectively. These organized convective events form in the equatorial Indian Ocean and propagate north to India. Coupled atmosphere–ocean processes are thought to play a key role in the intensity and propagation of these events. A high-resolution, coupled atmosphere–mixed-layer-ocean model is assembled: HadKPP, comprising the Hadley Centre Atmospheric Model (HadAM3) and the K Profile Parameterization (KPP) mixed-layer ocean model. Following studies showing that upper-ocean vertical resolution and sub-diurnal coupling frequencies improve the simulation of ISV in SSTs, KPP is run at 1 m vertical resolution near the surface, and the atmosphere and ocean are coupled every three hours. HadKPP accurately simulates the 30–50 day ISV in rainfall and SSTs over India and the Bay of Bengal, respectively, but suffers from low ISV on the equator. This is due to the HadAM3 convection scheme producing limited ISV in surface fluxes. HadKPP demonstrates little of the observed northward propagation of intraseasonal events, producing instead a standing oscillation. The lack of equatorial ISV in convection in HadAM3 constrains the ability of KPP to produce equatorial SST anomalies, which further weakens the ISV of convection. It is concluded that while atmosphere–ocean interactions are undoubtedly essential to an accurate simulation of ISV, they are not a panacea for model deficiencies. In regions where the atmospheric forcing is adequate, such as the Bay of Bengal, KPP produces SST anomalies that are comparable to the Tropical Rainfall Measuring Mission Microwave Imager (TMI) SST analyses in both their magnitude and their timing with respect to rainfall anomalies over India.
HadKPP also displays a much-improved phase relationship between rainfall and SSTs over a HadAM3 ensemble forced by observed SSTs, when both are compared to observations. Coupling to mixed-layer models such as KPP has the potential to improve operational predictions of ISV, particularly when the persistence time of SST anomalies is shorter than the forecast lead time.
Abstract:
Absolute infrared photoabsorption cross-sections have been measured over the range 600–1500 cm⁻¹ for the powerful greenhouse gas SF5CF3 at high resolution (0.03 cm⁻¹) and at temperatures between 203 and 298 K. Our data indicate that the integrated absorption intensity shows a weak negative dependence on temperature. It is concluded therefore that previous calculations of radiative forcings and global warming potentials based on room-temperature data are reasonable estimates for the atmosphere, but may be low by a few percent. (C) 2002 Elsevier Science B.V. All rights reserved.
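The integrated absorption intensity discussed above is the integral of the cross-section over wavenumber, which can be approximated from tabulated spectra by the trapezoidal rule. A minimal sketch; the function name, grid and values are illustrative:

```python
def integrated_intensity(wavenumbers, cross_sections):
    """Trapezoidal integral of the absorption cross-section over
    wavenumber, e.g. (cm^2 molecule^-1) * cm^-1.

    wavenumbers    -- monotonically increasing grid (cm^-1)
    cross_sections -- cross-section at each grid point."""
    total = 0.0
    for i in range(1, len(wavenumbers)):
        dw = wavenumbers[i] - wavenumbers[i - 1]
        total += 0.5 * (cross_sections[i] + cross_sections[i - 1]) * dw
    return total
```

Comparing this integral for spectra recorded at 203 K and 298 K is how a weak negative temperature dependence of the band intensity would be quantified.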
Abstract:
The atmospheric component of the United Kingdom’s new High-resolution Global Environmental Model (HiGEM) has been run with interactive aerosol schemes that include biomass burning and mineral dust. Dust emission, transport, and deposition are parameterized within the model using six particle size divisions, which are treated independently. The biomass is modeled in three nonindependent modes, and emissions are prescribed from an external dataset. The model is shown to produce realistic horizontal and vertical distributions of these aerosols for each season when compared with available satellite- and ground-based observations and with other models. Combined aerosol optical depths off the coast of North Africa exceed 0.5 both in boreal winter, when biomass is the main contributor, and also in summer, when the dust dominates. The model is capable of resolving smaller-scale features, such as dust storms emanating from the Bodélé and Saharan regions of North Africa and the wintertime Bodélé low-level jet. This is illustrated by February and July case studies, in which the diurnal cycles of model variables in relation to dust emission and transport are examined. The top-of-atmosphere annual mean radiative forcing of the dust is calculated and found to be globally quite small but locally very large, exceeding 20 W m⁻² over the Sahara, where inclusion of dust aerosol is shown to improve the model radiative balance. This work extends previous aerosol studies by combining complexity with increased global resolution and represents a step toward the next generation of models to investigate aerosol–climate interactions.
1. Introduction
Accurate modeling of mineral dust is known to be important because of its radiative impact in both numerical weather prediction models (Milton et al. 2008; Haywood et