799 results for Content Based Image Retrieval (CBIR)


Relevance: 40.00%

Publisher:

Abstract:

Vertical profiles of stratospheric water vapour measured by the Michelson Interferometer for Passive Atmospheric Sounding (MIPAS) in the full-resolution mode between September 2002 and March 2004, and retrieved with the IMK/IAA scientific retrieval processor, were compared to a number of independent measurements in order to estimate the bias and to validate the existing precision estimates of the MIPAS data. The estimated precision for MIPAS is 5 to 10% in the stratosphere, depending on altitude, latitude, and season. The independent instruments were: the Halogen Occultation Experiment (HALOE), the Atmospheric Chemistry Experiment Fourier Transform Spectrometer (ACE-FTS), the Improved Limb Atmospheric Spectrometer-II (ILAS-II), the Polar Ozone and Aerosol Measurement (POAM III) instrument, the Middle Atmospheric Water Vapour Radiometer (MIAWARA), the Michelson Interferometer for Passive Atmospheric Sounding, balloon-borne version (MIPAS-B), the Airborne Microwave Stratospheric Observing System (AMSOS), the Fluorescent Stratospheric Hygrometer for Balloon (FLASH-B), the NOAA frostpoint hygrometer, and the Fast In Situ Hygrometer (FISH). For the in situ measurements and the ground-based, airborne, and balloon-borne remote sensing instruments, the comparisons are restricted to central and northern Europe. The comparisons to satellite-borne instruments are predominantly at mid to high latitudes in both hemispheres. In the stratosphere there is no clear indication of a bias in the MIPAS data, because the independent measurements are in some cases drier and in other cases moister than the MIPAS measurements. Compared to the infrared measurements of MIPAS, measurements in the ultraviolet and visible tend to be high, whereas microwave measurements tend to be low. The results of the χ²-based precision validation vary among the comparison estimates. However, for comparison instruments whose error budget also includes errors due to uncertainties in spectrally interfering species, and for which good coincidences were found, the χ² values are in the expected range or even below. This suggests that there is no evidence of systematically underestimated MIPAS random errors.

Relevance: 40.00%

Publisher:

Abstract:

Image-based Relighting (IBRL) has recently attracted considerable research interest for its ability to relight real objects or scenes under novel illuminations captured in natural or synthetic environments. Complex lighting effects such as subsurface scattering, interreflection, shadowing, mesostructural self-occlusion, refraction and other relevant phenomena can be generated using IBRL. The main advantage of image-based graphics is that the rendering time is independent of scene complexity, because rendering is a process of manipulating image pixels rather than simulating light transport. The goal of this paper is to provide a complete and systematic overview of the research in Image-based Relighting. We observe that essentially all IBRL techniques can be broadly classified into three categories (Fig. 9), based on how the scene/illumination information is captured: reflectance function-based, basis function-based and plenoptic function-based. We discuss the characteristics of each of these categories and their representative methods. We also discuss sampling density and the types of light source(s), both relevant issues in IBRL.
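The basis function-based category can be illustrated with a toy numerical sketch (shapes and weights invented here): because light transport is linear, an image under a novel illumination is simply a weighted sum of images captured under basis lights.

```python
import numpy as np

# Toy data: 3 basis images of a 4x4 scene, one per basis light.
rng = np.random.default_rng(0)
h, w, n_basis = 4, 4, 3
basis_images = rng.random((n_basis, h, w))

# A novel illumination expressed in the light basis (invented weights).
weights = np.array([0.5, 0.2, 0.3])

# Relighting is a per-pixel weighted sum -- pure pixel manipulation,
# no light-transport simulation, so cost is independent of scene complexity.
relit = np.tensordot(weights, basis_images, axes=1)
```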

Relevance: 40.00%

Publisher:

Abstract:

BACKGROUND: Tumor bed stereotactic radiosurgery (SRS) after resection of brain metastases is a new strategy to delay or avoid whole-brain radiotherapy (WBRT) and its associated toxicities. This retrospective study analyzes results of frameless image-guided linear accelerator (LINAC)-based SRS and stereotactic hypofractionated radiotherapy (SHRT) as adjuvant treatment without WBRT. MATERIALS AND METHODS: Between March 2009 and February 2012, 44 resection cavities in 42 patients were treated with SRS (23 cavities) or SHRT (21 cavities). All treatments were delivered using a stereotactic LINAC. All cavities were expanded by ≥ 2 mm in all directions to create the clinical target volume (CTV). RESULTS: The median planning target volume (PTV) for SRS was 11.1 cm³. The median dose prescribed to the PTV margin for SRS was 17 Gy. The median PTV for SHRT was 22.3 cm³. The fractionation schemes applied were: 4 fractions of 6 Gy (5 patients), 6 fractions of 4 Gy (6 patients) and 10 fractions of 4 Gy (10 patients). Median follow-up was 9.6 months. Local control (LC) rates after 6 and 12 months were 91% and 77%, respectively. No statistically significant differences in LC rates between SRS and SHRT treatments were observed. Distant brain control (DBC) rates at 6 and 12 months were 61% and 33%, respectively. Overall survival (OS) at 6 and 12 months was 87% and 63.5%, respectively, with a median OS of 15.9 months. One patient treated by SRS showed symptoms of radionecrosis, which was confirmed histologically. CONCLUSION: Frameless image-guided LINAC-based adjuvant SRS and SHRT are effective and well tolerated local treatment strategies after resection of brain metastases in patients with oligometastatic disease.

Relevance: 40.00%

Publisher:

Abstract:

We study state-based video communication, in which a client simultaneously informs the server about the presence status of various packets in its buffer. In sender-driven transmission, the client periodically sends the server a single acknowledgement packet that provides information about all packets that have arrived at the client by the time the acknowledgement is sent. In receiver-driven streaming, the client periodically sends the server a single request packet that comprises a transmission schedule for sending missing data to the client over a time horizon. We develop a comprehensive optimization framework that enables computing packet transmission decisions that maximize the end-to-end video quality for the given bandwidth resources, in both scenarios. The core step of the optimization is computing the probability that a single packet will be communicated in error as a function of the expected transmission redundancy (or cost) used to communicate the packet. Through comprehensive simulation experiments, we carefully examine the performance gains that our framework enables relative to state-of-the-art scheduling systems that employ regular acknowledgement or request packets. Consistent gains in video quality of up to 2 dB are demonstrated across a variety of content types. We show that there is a direct analogy between the error-cost efficiency of streaming a single packet and the overall rate-distortion performance of streaming the whole content. In the case of sender-driven transmission, we develop an effective modeling approach that accurately characterizes the end-to-end performance as a function of the packet loss rate on the backward channel and the source encoding characteristics.
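The error-versus-redundancy trade-off at the core of the optimization can be sketched with a deliberately simplified channel model (ours, not the paper's): with an i.i.d. loss rate and n blind transmissions of the same packet, the residual error probability decays geometrically while the cost grows linearly.

```python
# Simplified model: i.i.d. packet loss rate eps, n blind transmissions of
# the same packet. The packet is in error only if every copy is lost, so
# P_err(n) = eps**n, while the redundancy (cost) grows linearly with n.
EPS = 0.1  # assumed forward-channel loss rate

def error_cost_curve(eps, max_n):
    """(cost n, residual error probability) for n = 1..max_n copies."""
    return [(n, eps ** n) for n in range(1, max_n + 1)]

curve = error_cost_curve(EPS, 4)
# Each extra copy multiplies the residual error probability by eps.
```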

Relevance: 40.00%

Publisher:

Abstract:

Introduction: Tuberculosis (TB) is a global health concern with one-third of the world’s population infected. With the goal of eliminating TB, a component of appropriate management of the disease is ensuring baccalaureate nursing students receive current and consistent TB education. [See PDF for complete abstract]

Relevance: 40.00%

Publisher:

Abstract:

Computer vision-based food recognition could be used to estimate a meal's carbohydrate content for diabetic patients. This study proposes a methodology for automatic food recognition based on the Bag of Features (BoF) model. An extensive technical investigation was conducted to identify and optimize the best performing components of the BoF architecture, as well as to estimate the corresponding parameters. For the design and evaluation of the prototype system, a visual dataset of nearly 5,000 food images was created and organized into 11 classes. The optimized system computes dense local features using the scale-invariant feature transform on the HSV color space, builds a visual dictionary of 10,000 visual words using hierarchical k-means clustering, and finally classifies the food images with a linear support vector machine classifier. The system achieved a classification accuracy of about 78%, demonstrating the feasibility of the proposed approach on a very challenging image dataset.
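The quantization step at the heart of the BoF model can be sketched as follows; synthetic random descriptors stand in for the dense SIFT-on-HSV features, and the vocabulary here is tiny rather than 10,000 words.

```python
import numpy as np

rng = np.random.default_rng(1)
vocab = rng.random((5, 8))      # 5 "visual words" in an 8-D descriptor space

def bof_histogram(descriptors, vocab):
    """Assign each descriptor to its nearest visual word, then histogram."""
    d2 = ((descriptors[:, None, :] - vocab[None, :, :]) ** 2).sum(axis=-1)
    words = d2.argmin(axis=1)
    hist = np.bincount(words, minlength=len(vocab)).astype(float)
    return hist / hist.sum()    # normalized BoF vector for one image

image_descriptors = rng.random((50, 8))   # stand-in for dense SIFT features
h = bof_histogram(image_descriptors, vocab)
# In the full system such histograms are fed to a linear SVM classifier.
```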

Relevance: 40.00%

Publisher:

Abstract:

Source materials like fine art, over-sized or fragile maps, and delicate artifacts have traditionally been digitally converted using controlled lighting and high-resolution scanners and camera backs. In addition, the capture of items such as general- and special-collections bound monographs has recently grown, both through consortial efforts like the Internet Archive's Open Content Alliance and locally at the individual institution level. These projects, in turn, have introduced increasingly high-resolution consumer-grade digital single-lens reflex cameras, or "DSLRs", as a significant part of the general cultural heritage digital conversion workflow. Central to the authors' discussion is the fact that both camera backs and DSLRs commonly share the ability to capture native raw file formats. Because these formats include such advantages as access to an image's raw mosaic sensor data within their architecture, many institutions choose raw for initial capture due to its high bit depth and unprocessed nature. However, to date these same raw formats, so important to many at the point of capture, have yet to be considered "archival" within most published still-imaging standards, if they are considered at all. In many workflows raw files are deleted after more traditionally "archival" uncompressed TIFF or JPEG 2000 files have been derived downstream from their raw source formats [1][2]. As a result, the authors examine the nature of raw anew and consider the basic questions: Should raw files be retained? What might their role be? Might they in fact form a new archival format space? Included in the discussion is a survey of assorted raw file types and their attributes. Also addressed are various sustainability issues as they pertain to archival formats, with special emphasis on both raw's positive and negative characteristics as they apply to archival practice. Current common archival workflows are compared with possible raw-based ones, in the context of each approach's differing levels of usable captured image data, various preservation virtues, and the divergent ideas of strictly fixed renditions versus the potential for improved renditions over time. Special attention is given to the DNG raw format through a detailed inspection of a number of its structural components and the roles they play in the format's latest specification. Finally, an evaluation is made of both proprietary raw formats in general and DNG in particular as possible alternative archival formats for still imaging.

Relevância:

40.00% 40.00%

Publicador:

Resumo:

In attempts to elucidate the underlying mechanisms of spinal injuries and spinal deformities, several experimental and numerical studies have been conducted to understand the biomechanical behavior of the spine. However, numerical biomechanical studies suffer from uncertainties associated with hard- and soft-tissue anatomies. Currently, these parameters are identified manually on each mesh model prior to simulation. The determination of soft connective tissues on finite element meshes can be a tedious procedure, which limits the number of models used in numerical studies to a few instances. To address these limitations, an image-based method for automatic morphing of soft connective tissues is proposed. Results showed that the proposed method is capable of accurately determining the spatial locations of predetermined bony landmarks. The method can be used to automatically generate patient-specific models, which may be helpful for designing studies involving a large number of instances and for understanding the mechanical behavior of biomechanical structures across a given population.
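As a much-simplified stand-in for the landmark-transfer idea (the actual method is image-based morphing, and all names and coordinates here are invented), one can fit an affine map from template landmarks to patient landmarks by least squares and use it to place new points:

```python
import numpy as np

# Template landmarks (invented coordinates, in mm).
template = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
# Synthetic "patient" landmarks: a scaled and shifted copy of the template.
patient = template * 1.2 + np.array([5., 0., -2.])

# Fit a 3-D affine transform in homogeneous coordinates by least squares.
A = np.hstack([template, np.ones((4, 1))])
T, *_ = np.linalg.lstsq(A, patient, rcond=None)   # 4x3 parameter matrix

# Map a new template point into the patient's anatomy.
mapped = np.array([0.5, 0.5, 0.0, 1.0]) @ T
```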

Relevance: 40.00%

Publisher:

Abstract:

Statistical appearance models have recently been introduced in bone mechanics to investigate bone geometry and mechanical properties in population studies. The establishment of accurate anatomical correspondences is a critical aspect for the construction of reliable models. Depending on the representation of a bone as an image or a mesh, correspondences are detected using image registration or mesh morphing. The objective of this study was to compare image-based and mesh-based statistical appearance models of the femur for finite element (FE) simulations. To this aim, (i) we compared correspondence detection methods on bone surface and in bone volume; (ii) we created an image-based and a mesh-based statistical appearance models from 130 images, which we validated using compactness, representation and generalization, and we analyzed the FE results on 50 recreated bones vs. original bones; (iii) we created 1000 new instances, and we compared the quality of the FE meshes. Results showed that the image-based approach was more accurate in volume correspondence detection and quality of FE meshes, whereas the mesh-based approach was more accurate for surface correspondence detection and model compactness. Based on our results, we recommend the use of image-based statistical appearance models for FE simulations of the femur.
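Of the validation metrics mentioned, compactness is the easiest to sketch: the fraction of total variance captured by the first k principal modes of the training set. The data below are toy random vectors, not femur images or meshes.

```python
import numpy as np

rng = np.random.default_rng(2)
shapes = rng.random((130, 30))     # 130 toy training instances, 30 features

X = shapes - shapes.mean(axis=0)   # center the training data
_, s, _ = np.linalg.svd(X, full_matrices=False)
explained = s ** 2 / np.sum(s ** 2)   # variance fraction per principal mode

def compactness(explained, k):
    """Cumulative variance captured by the first k modes: higher at small
    k means a more compact statistical model."""
    return float(np.sum(explained[:k]))
```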

Relevance: 40.00%

Publisher:

Abstract:

Life expectancy continuously increases, but our society faces age-related conditions. Among musculoskeletal diseases, osteoporosis, with its associated risk of vertebral fracture, and intervertebral disc (IVD) degeneration are painful pathologies responsible for tremendous healthcare costs. Hence, reliable diagnostic tools are necessary to plan a treatment or follow up on its efficacy. Yet radiographic and MRI techniques, respectively the clinical standards for evaluation of bone strength and IVD degeneration, are unspecific and subjective. Increasingly used in biomedical engineering, CT-based finite element (FE) models constitute the state of the art for vertebral strength prediction. However, as non-invasive biomechanical evaluation and personalised FE models of the IVD are not available, rigid boundary conditions (BCs) are applied to the FE models to avoid uncertainties of disc degeneration that might bias the predictions. Moreover, considering the impact of low back pain, the biomechanical status of the IVD is needed as a criterion for early disc degeneration. Thus, the first FE study focuses on two rigid BCs applied to the vertebral bodies during compression tests of cadaver vertebral bodies: vertebral sectioning and PMMA embedding. The second FE study highlights the large influence of the intervertebral disc's compliance on the vertebral strength, the damage distribution and its initiation. The third study introduces a new protocol for normalisation of the IVD stiffness in compression, torsion and bending, using MRI-based data to account for its morphology. In the last study, a new criterion (Otsu threshold) for disc degeneration based on quantitative MRI data (axial T2 map) is proposed. The results show that vertebral strength and damage distribution computed with the two rigid BCs are identical. Yet large discrepancies in strength and damage localisation were observed when the vertebral bodies were loaded via IVDs. The normalisation protocol attenuated the effect of geometry on the IVD stiffnesses without suppressing it completely. Finally, the Otsu threshold computed in the posterior part of the annulus fibrosus was related to the disc biomechanics and meets the objectivity and simplicity required for a clinical application. In conclusion, the stiffness normalisation protocol necessary for consistent IVD comparisons and the relation found between degeneration, the mechanical response of the IVD and the Otsu threshold lead the way towards non-invasive evaluation of the biomechanical status of the IVD. As the FE prediction of vertebral strength is largely influenced by the IVD conditions, these data could also improve future FE models of the osteoporotic vertebra.
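The Otsu criterion used in the last study can be sketched directly: choose the intensity cut that maximizes the between-class variance of the histogram. The "T2" values below are synthetic bimodal samples, not patient data.

```python
import numpy as np

def otsu_threshold(values, nbins=64):
    """Otsu's method: threshold maximizing between-class variance."""
    hist, edges = np.histogram(values, bins=nbins)
    p = hist.astype(float) / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)                 # class-0 probability for each cut
    m = np.cumsum(p * centers)        # cumulative first moment
    mt = m[-1]                        # global mean
    w1 = 1.0 - w0
    valid = (w0 > 0) & (w1 > 0)
    between = np.zeros_like(w0)
    between[valid] = (mt * w0[valid] - m[valid]) ** 2 / (w0[valid] * w1[valid])
    return float(centers[np.argmax(between)])

# Synthetic bimodal intensities: the threshold should fall between the modes.
rng = np.random.default_rng(3)
t2 = np.concatenate([rng.normal(40, 5, 500), rng.normal(120, 10, 500)])
thr = otsu_threshold(t2)
```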

Relevance: 40.00%

Publisher:

Abstract:

Automated identification of vertebrae from X-ray image(s) is an important step for various medical image computing tasks such as 2D/3D rigid and non-rigid registration. In this chapter we present a graphical model-based solution for automated vertebra identification from X-ray image(s). Our solution does not require a training process or training data and can automatically determine the number of vertebrae visible in the image(s). This is achieved by combining a graphical model-based maximum a posteriori (MAP) estimate with mean-shift-based clustering. Experiments conducted on simulated X-ray images, as well as on a low-dose, low-quality X-ray spinal image of a scoliotic patient, verified its performance.
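A minimal 1-D flat-kernel mean-shift sketch (our simplification, with synthetic detections) shows how clustering lets the number of vertebrae emerge from the data rather than being fixed in advance:

```python
import numpy as np

def mean_shift_1d(points, bandwidth, iters=30):
    """Flat-kernel mean shift on 1-D points; returns the cluster modes."""
    points = np.asarray(points, dtype=float)
    modes = points.copy()
    for _ in range(iters):
        for i, m in enumerate(modes):
            # Shift each mode to the mean of the points within one bandwidth.
            modes[i] = points[np.abs(points - m) < bandwidth].mean()
    # Merge converged modes that lie within one bandwidth of each other.
    centers = []
    for m in np.sort(modes):
        if not centers or m - centers[-1] > bandwidth:
            centers.append(float(m))
    return centers

# Candidate vertebra positions along the spine axis (synthetic, in mm).
detections = np.array([10., 11., 12., 40., 41., 70., 71., 72.])
centers = mean_shift_1d(detections, bandwidth=10.0)
# Three modes emerge -> three vertebrae, with no count fixed beforehand.
```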

Relevance: 40.00%

Publisher:

Abstract:

In this paper, we propose a new method for stitching multiple fluoroscopic images taken by a C-arm. We employ an X-ray radiolucent ruler with numbered graduations while acquiring the images, and the stitching is based on detecting ruler parts in the images and matching them to the corresponding parts of a virtual ruler. To this end, we first detect the regularly spaced graduations and the numbers on the ruler. After graduation labeling, we have, for each image, the location and the associated number of every graduation on the ruler. Then we initialize the panoramic X-ray image with the virtual ruler and "paste" each image by aligning the detected ruler part in the original image to the corresponding part of the virtual ruler in the panoramic image. Because our method matches each image against the ruler rather than matching feature points between pairs of images, it does not require overlap between the images. We tested our method on eight datasets of X-ray images, including long bones and a complete spine. Qualitative and quantitative experiments show that our method achieves good results.
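The "paste" step can be sketched with a toy calculation (the scale and coordinates are invented): each detected graduation pairs a millimetre number with a pixel position, and a single per-image offset aligns the image with the virtual ruler on the panorama.

```python
SCALE = 2.0  # assumed panorama pixels per millimetre on the virtual ruler

def image_offset(graduations, scale=SCALE):
    """Offset that maps image pixels onto the panoramic (virtual) ruler.

    `graduations` holds (millimetre number, pixel x) pairs detected in one
    image; averaging makes the estimate robust to detection noise."""
    offsets = [number * scale - pixel_x for number, pixel_x in graduations]
    return sum(offsets) / len(offsets)

# One image shows the 100 mm and 110 mm graduations at pixels 40 and 60.
off = image_offset([(100, 40.0), (110, 60.0)])
# Paste this image at panorama x = pixel_x + off; no pairwise image matching.
```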

Relevance: 40.00%

Publisher:

Abstract:

XMapTools is a MATLAB®-based graphical user interface program for electron microprobe X-ray image processing, which can be used to estimate the pressure–temperature conditions of crystallization of minerals in metamorphic rocks. This program (available online at http://www.xmaptools.com) provides a method to standardize raw electron microprobe data and includes functions to calculate oxide weight percent compositions for various minerals. A set of external functions is provided to calculate structural formulae from the standardized analyses, as well as to estimate pressure–temperature conditions of crystallization using empirical and semi-empirical thermobarometers from the literature. Two graphical user interface modules, Chem2D and Triplot3D, are used to plot mineral compositions in binary and ternary diagrams. As an example, the software is used to study a high-pressure Himalayan eclogite sample from the Stak massif in Pakistan. The high-pressure paragenesis, consisting of omphacite and garnet, has been retrogressed to a symplectitic assemblage of amphibole, plagioclase and clinopyroxene. Mineral compositions corresponding to ~165,000 analyses yield estimates for the eclogitic pressure–temperature retrograde path from 25 kbar to 9 kbar. Corresponding pressure–temperature maps were plotted and used to interpret the link between the equilibrium conditions of crystallization and the symplectitic microstructures. This example illustrates the usefulness of XMapTools for studying variations in the chemical composition of minerals and for retrieving information on metamorphic conditions at the microscale, towards the computation of continuous pressure–temperature–relative time paths in zoned metamorphic minerals not affected by post-crystallization diffusion.
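The standardization step can be caricatured as a first-order calibration (real microprobe quantification also involves background and matrix corrections; the numbers and names here are invented): raw counts are scaled against a standard of known composition measured under the same conditions.

```python
# Invented reference values for a hypothetical SiO2 standard.
STD_COUNTS = {"SiO2": 5000.0}   # raw counts measured on the standard
STD_WTPCT = {"SiO2": 65.0}      # known oxide wt% of the standard

def standardize(raw_counts, oxide):
    """First-order calibration: wt% taken proportional to raw counts."""
    return raw_counts * STD_WTPCT[oxide] / STD_COUNTS[oxide]

# A sample pixel with 2500 counts maps to half the standard's SiO2 content.
wtpct = standardize(2500.0, "SiO2")
```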

Relevance: 40.00%

Publisher:

Abstract:

Specification consortia and standardization bodies concentrate on e-Learning objects to ensure reusability of content. Learning objects may be collected in a library and used for deriving course offerings that are customized to the needs of different learning communities. However, customization of courses is possible only if the logical dependencies between the learning objects are known. Metadata for describing object relationships have been proposed in several e-Learning specifications. This paper discusses the customization potential of e-Learning objects, but also the pitfalls that exist if content is customized inappropriately.
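The role of logical dependencies can be made concrete: if the prerequisite relations between learning objects are recorded as a graph, any topological order of that graph is a valid course sequence, which is what makes safe customization possible. A small sketch with hypothetical learning objects:

```python
from graphlib import TopologicalSorter

# Hypothetical learning objects mapped to their prerequisite objects.
deps = {
    "sql_joins": {"sql_basics"},
    "sql_basics": {"relational_model"},
    "normalization": {"relational_model"},
}

# static_order() yields objects with every prerequisite listed first,
# so a customized course can drop or reorder objects without breaking
# any logical dependency.
order = list(TopologicalSorter(deps).static_order())
```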