934 results for depth image


Relevance: 20.00%

Abstract:

The aim of this article is to show that different films can carry the viewer intellectually to the Platonic image of the cave. Some make the reference explicit, such as The Conformist by B. Bertolucci, Shadowlands by R. Attenborough (if one bears in mind the Chronicles of Narnia by C. S. Lewis) or The Picture of Dorian Gray (if one bears in mind the well-known novel by O. Wilde). In other films, such as The Truman Show, A Room with a View or Brideshead Revisited, a Platonic influence cannot be proved, yet one can still use them to guide contemporary audiences to that Platonic image, since Plato himself affirms that it is an image that can be readily applied, first of all to his idealistic philosophy.

Relevance: 20.00%

Abstract:

The Cave by José Saramago takes as a clear reference the image of the cave in Book VII of Plato's Republic, and yet Saramago is not an idealistic or metaphysical writer. Taking advantage of the applicability with which Plato endowed his image, this article defends the need to remain open to the messages sent by the earth, by matter; the need not to become prisoners in the golden caves of Western society; and, finally, the need to find our freedom in Nature, phýsis, rather than beyond it, metá.

Relevance: 20.00%

Abstract:

Translations of the first chapters of Book VII of Plato's Republic, in which he introduces the well-known image of the cave, eikón, reveal an astonishing and intriguing variety of interpretations of this image: "allegory", "myth", "fable", "parable", "simile" and "comparison", to cite but a few. Taking as an example the work of Benjamin Jowett, the Victorian translator of Plato remarkable for his textual accuracy, and by means of a close analysis of the terms related to the image, this paper insists on the need neither to interpret nor to correct the great idealist philosopher. It shows the evident contradictions that arise when this advice is not followed and points out the occasional use of terms extraneous to the Platonic lexicon, such as "allegory".

Relevance: 20.00%

Abstract:

BACKGROUND: Burn depth assessment by clinical evaluation has an accuracy of between 60% and 80%, and it determines whether a burn injury needs tangential excision and skin grafting or can heal spontaneously. Laser Doppler Imaging (LDI) techniques allow improved burn depth assessment, but their use is limited by time-consuming image acquisition, which may take up to 6 min per image. METHODS: To evaluate the effectiveness and reliability of a newly developed full-field LDI technology, 15 consecutive patients presenting with intermediate-depth burns were assessed both clinically and with the FluxExplorer LDI system, and the two methods of assessment were compared. RESULTS: Image acquisition was completed within 6 s. The FluxExplorer LDI system achieved significantly higher accuracy in burn depth assessment than the clinical judgement of board-certified plastic and reconstructive surgeons (P < 0.05; 93% of burn injuries correctly assessed vs. 80% for clinical assessment). CONCLUSION: Technological improvements of LDI that decrease image acquisition time while keeping burn depth assessment reliable allow the routine use of such devices in the acute setting of burn care without interfering with the patient's treatment. Rapid and reliable LDI may assist clinicians in burn depth assessment and may limit the morbidity of burn patients by minimizing the area of surgical debridement. Future technological improvements allowing the miniaturization of the device will further ease its clinical application.

Relevance: 20.00%

Abstract:

In the search for high efficiency in root studies, computational systems have been developed to analyze digital images. ImageJ and Safira are public-domain systems that may be used for image analysis of washed roots; however, the root properties measured with ImageJ and Safira are expected to differ. This study compared values of root length and surface area obtained with these public-domain systems with values obtained by a reference method. Root samples were collected in a banana plantation, in an area of a shallower Typic Carbonatic Haplic Cambisol (CXk) and an area of a deeper Typic Haplic Ta Eutrophic Cambisol (CXve), at six depths with five replications. Root images were digitized, and root length and surface area were determined with ImageJ and Safira. The line-intersect method modified by Tennant was used as reference; values of root length and surface area measured with the different systems were analyzed by Pearson's correlation coefficient and compared by confidence interval and t-test. Both ImageJ and Safira correlated positively with the reference method for root length and surface area data in CXk and CXve. The correlation coefficient ranged from 0.54 to 0.80, with the lowest value observed for ImageJ in the measurement of the surface area of roots sampled in CXve. The confidence interval (95 %) showed that root length measured with Safira did not differ from the reference method in CXk (-77.3 to 244.0 mm). For surface area, Safira did not differ from the reference method for samples collected in CXk (-530.6 to 565.8 mm²) or in CXve (-4231 to 612.1 mm²). Measurements with ImageJ, however, differed from those obtained by the reference method, underestimating length and surface area in samples collected in CXk and CXve. Both ImageJ and Safira can detect increases or decreases in root length and surface area, but Safira's results for root length and surface area are closer to those obtained with the reference method.
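
A rough sketch of the statistical comparison described above: Pearson's correlation coefficient and a paired t-test between root lengths measured by an image-analysis system and by the reference line-intersect method. The measurement values below are hypothetical placeholders, not data from the study.

# Hypothetical paired root-length measurements (mm) for the same samples;
# "system" stands in for ImageJ or Safira, "reference" for the
# Tennant line-intersect method.
import numpy as np
from scipy import stats

reference = np.array([120.0, 85.5, 230.1, 60.3, 150.7])
system = np.array([115.2, 90.1, 221.8, 64.0, 148.9])

r, p_corr = stats.pearsonr(system, reference)    # strength of agreement
t, p_diff = stats.ttest_rel(system, reference)   # paired difference test

print(f"Pearson r = {r:.2f} (p = {p_corr:.3f})")
print(f"paired t-test: t = {t:.2f}, p = {p_diff:.3f}")
# A high r together with a non-significant paired difference would mirror
# the behaviour reported above for Safira relative to the reference method.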

Relevance: 20.00%

Abstract:

We have used the unique spatial sensitivity of polarized neutron and soft x-ray beams in reflection geometry to measure the depth dependence of magnetization across the interface between a ferromagnet and an antiferromagnet. The net uncompensated magnetization near the interface responds to applied field, while uncompensated spins in the antiferromagnet bulk are pinned, thus providing a means to establish exchange bias.

Relevance: 20.00%

Abstract:

Cortical folding (gyrification) is determined during the first months of life, so adverse events occurring during this period leave traces that will be identifiable at any age. As recently reviewed by Mangin and colleagues(2), several methods exist to quantify different characteristics of gyrification. For instance, sulcal morphometry can be used to measure shape descriptors such as depth, length or indices of inter-hemispheric asymmetry(3). These geometrical properties have the advantage of being easy to interpret, but sulcal morphometry relies tightly on the accurate identification of a given set of sulci and hence provides a fragmented description of gyrification. A more fine-grained quantification of gyrification can be achieved with curvature-based measurements, where smoothed absolute mean curvature is typically computed at thousands of points over the cortical surface(4). Curvature, however, is not straightforward to interpret, as it remains unclear whether there is any direct relationship between curvedness and a biologically meaningful correlate such as cortical volume or surface area.

To address the diverse issues raised by the measurement of cortical folding, we previously developed an algorithm that quantifies local gyrification with high spatial resolution and a simple interpretation. Our method is inspired by the Gyrification Index(5), a method originally used in comparative neuroanatomy to evaluate cortical folding differences across species. In our implementation, named local Gyrification Index (lGI)(1), we measure the amount of cortex buried within the sulcal folds compared with the amount of visible cortex in circular regions of interest. Given that the cortex grows primarily through radial expansion(6), our method was specifically designed to identify early defects of cortical development.

In this article, we detail the computation of the local Gyrification Index, which is now freely distributed as part of the FreeSurfer software (http://surfer.nmr.mgh.harvard.edu/, Martinos Center for Biomedical Imaging, Massachusetts General Hospital). FreeSurfer provides a set of automated tools for reconstructing the brain's cortical surface from structural MRI data. The cortical surface, extracted in the native space of the images with sub-millimeter accuracy, is then used to create an outer surface, which serves as the basis for the lGI calculation. A circular region of interest is delineated on the outer surface, and its corresponding region of interest on the cortical surface is identified using a matching algorithm, as described in our validation study(1). This process is iterated with largely overlapping regions of interest, resulting in cortical maps of gyrification for subsequent statistical comparisons (Fig. 1).

Of note, a measurement of local gyrification with a similar inspiration was proposed by Toro and colleagues(7), where the folding index at each point is computed as the ratio of the cortical area contained in a sphere to the area of a disc with the same radius. The two implementations differ in that the one by Toro et al. is based on Euclidean distances and thus considers discontinuous patches of cortical area, whereas ours uses a strict geodesic algorithm and includes only the continuous patch of cortical area opening at the brain surface in a circular region of interest.
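
To make the area-ratio idea concrete, here is a minimal sketch in the spirit of the Euclidean folding index of Toro and colleagues, computed on a generic triangle mesh. It is a simplified illustration, not the geodesic lGI algorithm distributed with FreeSurfer; the function name and mesh arrays are hypothetical.

import numpy as np

def folding_index(vertices, faces, center, radius):
    """Cortical area within a Euclidean sphere around `center`, divided by
    the area of a disc of the same radius (> 1 suggests buried, folded cortex).

    vertices: (N, 3) float array; faces: (M, 3) int array; center: (3,) point.
    """
    v0, v1, v2 = vertices[faces[:, 0]], vertices[faces[:, 1]], vertices[faces[:, 2]]
    # Triangle areas via the cross product, and centroids for the inclusion test
    tri_area = 0.5 * np.linalg.norm(np.cross(v1 - v0, v2 - v0), axis=1)
    centroids = (v0 + v1 + v2) / 3.0
    inside = np.linalg.norm(centroids - center, axis=1) <= radius
    return tri_area[inside].sum() / (np.pi * radius ** 2)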

Relevance: 20.00%

Abstract:

Advanced kernel methods for remote sensing image classification. Devis Tuia, Institut de Géomatique et d'Analyse du Risque, September 2009.

The technical developments of recent years have brought the quantity and quality of digital information to an unprecedented level: enormous archives of satellite images are available to users. Yet even as these advances open more and more possibilities in the use of digital imagery, they also raise problems of storage and processing. The latter is considered in this Thesis: the processing of very high spatial and spectral resolution images is treated with data-driven algorithms relying on kernel methods. In particular, the problem of image classification, i.e. the categorization of the image's pixels into a reduced number of classes reflecting spectral and contextual properties, is studied through the different models presented. The accent is put on algorithmic efficiency and on the simplicity of the proposed approaches, so as to avoid overly complex models that users would not adopt. The major challenge of the Thesis is to remain close to concrete remote sensing problems without losing the methodological interest from the machine learning viewpoint: in this sense, this work aims at building a bridge between the machine learning and remote sensing communities, and all the proposed models have been developed keeping in mind the need for such a synergy.

Four models are proposed. First, an adaptive model that learns the relevant image features addresses the problem of the high dimensionality and collinearity of those features; it automatically provides an accurate classifier together with a ranking of the relevance of the individual features. The scarcity and unreliability of labeled information are the common root of the second and third models: faced with such problems, the user can either construct the labeled set iteratively by direct interaction with the machine or use the unlabeled data to increase the robustness and quality of the data description. Both solutions have been explored, resulting in two methodological contributions based respectively on active learning and semi-supervised learning. Finally, the more theoretical issue of structured outputs is considered in the last model: integrating output similarity, a source of information so far never considered in remote sensing, opens new challenges and opportunities for remote sensing image processing.
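
As a toy illustration of the active-learning idea behind the second model, the sketch below repeatedly adds the pixels the classifier is least certain about to the training set, with an oracle array standing in for the interacting user. The data, kernel, class structure and batch size are illustrative assumptions, not the thesis's actual models.

import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))                  # stand-in for pixel features
y_oracle = (X[:, 0] + X[:, 1] > 0).astype(int)   # stand-in for true labels

# Seed the labeled set with one example of each class, plus a few at random.
labeled = [int(np.argmax(y_oracle == 0)), int(np.argmax(y_oracle == 1))]
labeled += [int(i) for i in rng.choice(len(X), size=8, replace=False)]

for _ in range(5):                               # five interaction rounds
    clf = SVC(kernel="rbf", probability=True).fit(X[labeled], y_oracle[labeled])
    proba = clf.predict_proba(X)
    margin = np.abs(proba[:, 0] - proba[:, 1])   # small margin = uncertain
    queries = [int(i) for i in np.argsort(margin) if int(i) not in labeled]
    labeled += queries[:10]                      # "ask the user" for 10 labels

print(f"final training set size: {len(labeled)}")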

Relevance: 20.00%

Abstract:

Photon migration in a turbid medium has been modeled in many different ways. The motivation for such modeling is technology that can be used to probe potentially diagnostic optical properties of biological tissue. Surprisingly, one of the more effective models is also one of the simplest: it is based on the statistical properties of a nearest-neighbor lattice random walk. Here we develop a theory that allows one to calculate the number of visits by a photon to a given depth, given that it is eventually detected at an absorbing surface. This mimics cw measurements made on biological tissue and is directed towards characterizing the depth reached by photons injected at the surface. Our development of the theory uses a formalism based on the theory of the continuous-time random walk (CTRW). Formally exact results are given in the Fourier-Laplace domain; these, in turn, are used to generate approximations for parameters of physical interest.
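
A crude Monte Carlo sketch can make the quantity of interest concrete: a walker injected just below the surface performs a nearest-neighbor random walk, and we count its visits to a given depth before it is absorbed at the surface. This one-dimensional, depth-only simulation is an illustrative assumption, not the CTRW formalism of the paper, which yields these quantities analytically.

import random

def visits_to_depth(target_depth, max_steps=10_000):
    """Walk from depth 1 until absorbed at depth 0; count visits to target_depth."""
    depth, visits = 1, 0
    for _ in range(max_steps):
        if depth == target_depth:
            visits += 1
        depth += random.choice((-1, 1))      # nearest-neighbor step in depth
        if depth == 0:                       # absorbed at the surface
            return visits
    return None                              # not absorbed within the step cap

random.seed(1)
samples = [visits_to_depth(3) for _ in range(20_000)]
absorbed = [v for v in samples if v is not None]
print(f"mean visits to depth 3 over {len(absorbed)} absorbed walks: "
      f"{sum(absorbed) / len(absorbed):.2f}")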