986 results for: Evaluate image retention
Abstract:
In the evaluation of soil quality for agricultural use, soil structure is one of the most important properties; it is influenced not only by climate, biological activity, and management practices but also by mechanical and physico-chemical forces acting in the soil. The purpose of this study was to evaluate the influence of conventional agricultural management on the structure and microstructure of a Latossolo Vermelho distroférrico típico (Rhodic Hapludox) in an experimental area planted to maize. Soil morphology was described using the crop profile method by identifying the distinct structural volumes called Morphologically Homogeneous Units (MHUs). For comparison, we also described a profile in an adjacent area without agricultural use and under natural regrowth, referred to as the Memory plot. We took undisturbed samples from the main MHUs to prepare thin sections and soil blocks for micromorphological and micromorphometric analyses. Application of the crop profile method showed the occurrence of the following structural types in both profiles analyzed: loose (L), fragmented (F), and continuous (C). In the Memory profile, the fragmented structures were classified as Fptμ∆+tf and Fmt∆μ, whose micromorphology shows an enaulic-porphyric (porous) relative distribution with a great deal of biological activity, as indicated by the presence of vughs and channels. Lower down, from 0.20 to 0.35 m, there was a continuous soil volume (sub-type C∆μ) with a subangular blocky microstructure and an enaulic-porphyric relative distribution, though in this case more compact, with aggregate coalescence and less biological activity. The micromorphometric study of the Memory plot showed the predominance of complex pores in NAM (15.03 %), Fmt∆μ (11.72 %), and Fptμ∆+tf (7.73 %), and of rounded pores in C∆μ (8.21 %). In the soil under conventional agricultural management, we observed fragmented structures similar to those of the Memory plot from 0.02 to 0.20 m, followed by a volume with a compact continuous structure (C∆μ), without visible porosity and with few roots. In the MHUs under conventional management, a reduction in packing pores (40 %) was observed, mainly in the continuous units (C). The microstructure had well-defined blocks, with the occurrence of planar pores and less evidence of biological activity. In conclusion, the morphological and micromorphological analyses of the soil profiles studied offered complementary information regarding soil structural quality, especially concerning the changes in pore types as a result of the mechanical stress undergone by the soil.
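The pore-shape percentages reported above come from micromorphometric image analysis of the thin sections. As a rough illustration of that kind of measurement (not the authors' procedure), the sketch below classifies pores in a binarized thin-section image by a simple circularity threshold and reports the image-area percentage of each class; the file name and thresholds are hypothetical.

```python
# Illustrative sketch only: pore-shape classification on a binary thin-section
# image, in the spirit of the micromorphometric analysis described above.
# The input file, the circularity thresholds, and the class names are
# hypothetical choices, not those used in the study.
import numpy as np
from skimage import io, measure

img = io.imread("thin_section_binary.png") > 0   # assumed single-channel, True where pores are
labels = measure.label(img)                      # connected pore regions

total_px = img.size
shares = {"rounded": 0.0, "elongated": 0.0, "complex": 0.0}

for region in measure.regionprops(labels):
    area, perim = region.area, region.perimeter
    if perim == 0:                               # skip single-pixel regions
        continue
    circularity = 4.0 * np.pi * area / perim ** 2   # 1.0 for a perfect circle
    if circularity > 0.5:
        kind = "rounded"
    elif circularity > 0.2:
        kind = "elongated"
    else:
        kind = "complex"
    shares[kind] += 100.0 * area / total_px          # percent of image area

print(shares)
```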
Abstract:
Cortical folding (gyrification) is determined during the first months of life, so that adverse events occurring during this period leave traces that remain identifiable at any age. As recently reviewed by Mangin and colleagues(2), several methods exist to quantify different characteristics of gyrification. For instance, sulcal morphometry can be used to measure shape descriptors such as depth, length, or indices of inter-hemispheric asymmetry(3). These geometrical properties have the advantage of being easy to interpret. However, sulcal morphometry relies tightly on the accurate identification of a given set of sulci and hence provides a fragmented description of gyrification. A more fine-grained quantification of gyrification can be achieved with curvature-based measurements, where the smoothed absolute mean curvature is typically computed at thousands of points over the cortical surface(4). Curvature, however, is not straightforward to interpret, as it remains unclear whether there is any direct relationship between curvedness and a biologically meaningful correlate such as cortical volume or surface area. To address the diverse issues raised by the measurement of cortical folding, we previously developed an algorithm that quantifies local gyrification with exquisite spatial resolution and a simple interpretation. Our method is inspired by the Gyrification Index(5), a method originally used in comparative neuroanatomy to evaluate differences in cortical folding across species. In our implementation, which we name the local Gyrification Index (lGI(1)), we measure the amount of cortex buried within the sulcal folds compared with the amount of visible cortex in circular regions of interest. Given that the cortex grows primarily through radial expansion(6), our method was specifically designed to identify early defects of cortical development. In this article, we detail the computation of the local Gyrification Index, which is now freely distributed as part of the FreeSurfer software (http://surfer.nmr.mgh.harvard.edu/, Martinos Center for Biomedical Imaging, Massachusetts General Hospital). FreeSurfer provides a set of automated tools for reconstructing the brain's cortical surface from structural MRI data. The cortical surface, extracted in the native space of the images with sub-millimeter accuracy, is then used to create an outer surface, which serves as the basis for the lGI calculation. A circular region of interest is delineated on the outer surface, and its corresponding region of interest on the cortical surface is identified using a matching algorithm, as described in our validation study(1). This process is iterated with largely overlapping regions of interest, resulting in cortical maps of gyrification for subsequent statistical comparisons (Fig. 1). Of note, another measurement of local gyrification with a similar inspiration was proposed by Toro and colleagues(7), in which the folding index at each point is computed as the ratio of the cortical area contained in a sphere to the area of a disc of the same radius. The two implementations differ in that the one by Toro et al. is based on Euclidean distances and thus considers discontinuous patches of cortical area, whereas ours uses a strict geodesic algorithm and includes only the continuous patch of cortical area opening at the brain surface within a circular region of interest.
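As a conceptual illustration of the measurement described above (not the FreeSurfer code, which is distributed with the package), the sketch below computes an lGI-like value as the ratio of pial surface area to outer-hull surface area for one matched pair of regions of interest. The mesh arrays and ROI face masks are assumed to come from an upstream surface reconstruction and ROI-matching step.

```python
# Conceptual sketch, not the FreeSurfer implementation: the lGI of a region of
# interest is the cortical (pial) surface area associated with the ROI divided
# by the area of the corresponding patch on the outer hull. Inputs (meshes and
# boolean face masks) are assumed to be provided by earlier processing steps.
import numpy as np

def triangle_areas(vertices: np.ndarray, faces: np.ndarray) -> np.ndarray:
    """Area of each triangle of a mesh (vertices: Nx3 coords, faces: Mx3 indices)."""
    a, b, c = vertices[faces[:, 0]], vertices[faces[:, 1]], vertices[faces[:, 2]]
    return 0.5 * np.linalg.norm(np.cross(b - a, c - a), axis=1)

def local_gyrification_index(pial_v, pial_f, pial_roi_faces,
                             outer_v, outer_f, outer_roi_faces):
    """Ratio of pial area to outer-hull area for matched ROIs (boolean face masks)."""
    pial_area = triangle_areas(pial_v, pial_f)[pial_roi_faces].sum()
    outer_area = triangle_areas(outer_v, outer_f)[outer_roi_faces].sum()
    return pial_area / outer_area
```

Iterating this ratio over overlapping circular ROIs centered on every outer-surface vertex, as the article describes, yields the cortical lGI maps used for statistics.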
Abstract:
Following recent technological advances, digital image archives have grown in quality and quantity at an unprecedented rate. Despite the enormous possibilities they offer, these advances raise new questions about the processing of the masses of data acquired. That question is at the heart of this Thesis: the processing of digital information at very high spatial and/or spectral resolution is addressed using statistical learning approaches, namely kernel methods. This Thesis studies image classification problems, i.e., the categorization of pixels into a reduced number of classes reflecting the spectral and contextual properties of the objects they represent. The emphasis is on the efficiency of the algorithms and on their simplicity, so as to increase their potential for adoption by users. Moreover, the challenge of this Thesis is to stay close to the concrete problems of satellite image users without losing sight of the interest of the proposed methods for the machine learning community from which they originate. In this sense, the work is deliberately transdisciplinary, maintaining a strong link between the two fields in all the developments proposed. Four models are proposed. The first addresses the problem of high dimensionality and data redundancy with a model that optimizes classification performance by adapting to the particularities of the image; this is made possible by a ranking of the variables (the bands) that is optimized jointly with the base model, so that only the variables relevant to the problem are used by the classifier. The scarcity of labeled information and the uncertainty about its relevance to the problem motivate the next two models, based respectively on active learning and semi-supervised methods: the former improves the quality of a training set through direct interaction between the user and the machine, while the latter uses unlabeled pixels to improve the description of the available data and the robustness of the model. Finally, the last model considers the more theoretical question of structure among the outputs; integrating this source of information, never before considered in remote sensing, opens new research challenges.

Advanced kernel methods for remote sensing image classification. Devis Tuia, Institut de Géomatique et d'Analyse du Risque, September 2009.

Abstract: The technical developments in recent years have brought the quantity and quality of digital information to an unprecedented level, as enormous archives of satellite images are available to users. However, even if these advances open more and more possibilities in the use of digital imagery, they also raise several problems of storage and processing. The latter is considered in this Thesis: the processing of very high spatial and spectral resolution images is treated with approaches based on data-driven algorithms relying on kernel methods. In particular, the problem of image classification, i.e., the categorization of the image's pixels into a reduced number of classes reflecting spectral and contextual properties, is studied through the different models presented.
The emphasis is on algorithmic efficiency and on the simplicity of the proposed approaches, to avoid overly complex models that users would not adopt. The major challenge of the Thesis is to remain close to concrete remote sensing problems without losing the methodological interest from the machine learning viewpoint: in this sense, this work aims at building a bridge between the machine learning and remote sensing communities, and all the models proposed have been developed keeping in mind the need for such a synergy. Four models are proposed: first, an adaptive model learning the relevant image features is proposed to solve the problem of high dimensionality and collinearity of the image features. This model automatically provides an accurate classifier and a ranking of the relevance of the individual features. The scarcity and unreliability of labeled information were the common root of the second and third models proposed: when confronted with such problems, the user can either construct the labeled set iteratively by direct interaction with the machine or use the unlabeled data to increase the robustness and quality of the description of the data. Both solutions have been explored, resulting in two methodological contributions based respectively on active learning and semi-supervised learning. Finally, the more theoretical issue of structured outputs is considered in the last model, which, by integrating output similarity into the model, opens new challenges and opportunities for remote sensing image processing.
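To make the active-learning contribution concrete, the sketch below shows the generic uncertainty-sampling loop that such a model builds on: train an SVM, query the pool pixel closest to the decision boundary, ask the user (the oracle) for its label, and retrain. It is an illustrative sketch under assumed array shapes and a placeholder oracle function, not the thesis implementation.

```python
# Generic uncertainty-sampling active-learning loop with an SVM classifier.
# X_labeled/y_labeled: initial training set; X_pool: unlabeled pixels (2-D
# feature arrays); oracle: placeholder callable returning the true label of a
# queried pixel (in practice, a photo-interpreter).
import numpy as np
from sklearn.svm import SVC

def active_learning_loop(X_labeled, y_labeled, X_pool, oracle, n_queries=20):
    X_lab, y_lab = X_labeled.copy(), y_labeled.copy()
    pool = X_pool.copy()
    for _ in range(n_queries):
        clf = SVC(kernel="rbf", gamma="scale").fit(X_lab, y_lab)
        margins = np.abs(clf.decision_function(pool))   # distance to boundary
        if margins.ndim > 1:                            # multi-class: smallest margin heuristic
            margins = margins.min(axis=1)
        idx = int(np.argmin(margins))                   # most uncertain pool sample
        X_lab = np.vstack([X_lab, pool[idx]])
        y_lab = np.append(y_lab, oracle(pool[idx]))     # user supplies the label
        pool = np.delete(pool, idx, axis=0)
    return SVC(kernel="rbf", gamma="scale").fit(X_lab, y_lab)
```

The semi-supervised variant mentioned above would instead exploit the pool pixels without querying labels, e.g. to regularize the classifier with the structure of the unlabeled data.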
Abstract:
The high cost and long time required to determine a retention curve by the conventional Richards chamber and Haines funnel methods limit their use; therefore, alternative methods to facilitate this routine are needed. The filter paper method for determining the soil water retention curve was evaluated and compared with the conventional method. Undisturbed samples were collected from five different soils. Using a Haines funnel and a Richards chamber, moisture content was obtained for tensions of 2, 4, 6, 8, 10, 33, 100, 300, 700, and 1,500 kPa. In the filter paper test, the soil matric potential was obtained from the filter-paper calibration equation, and the moisture content was subsequently determined from the gravimetric difference. The van Genuchten model was fitted to the observed data of soil matric potential versus moisture. Moisture values from the conventional and filter paper methods, estimated by the van Genuchten model, were compared. The filter paper method, with an R² of 0.99, can be used to determine water retention curves of agricultural soils as an alternative to the conventional method.
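For reference, fitting the van Genuchten model to tension/moisture pairs is a standard nonlinear least-squares problem. The sketch below uses the tensions listed above but synthetic moisture values (clearly not data from the study) and placeholder initial guesses.

```python
# Sketch of fitting the van Genuchten (1980) retention model theta(h) =
# theta_r + (theta_s - theta_r) / (1 + (alpha*h)**n)**(1 - 1/n) to paired
# tension/moisture data. The moisture values below are synthetic demo data,
# not measurements from the paper.
import numpy as np
from scipy.optimize import curve_fit

def van_genuchten(h, theta_r, theta_s, alpha, n):
    """Volumetric water content at tension h (kPa), with m = 1 - 1/n."""
    m = 1.0 - 1.0 / n
    return theta_r + (theta_s - theta_r) / (1.0 + (alpha * h) ** n) ** m

h = np.array([2, 4, 6, 8, 10, 33, 100, 300, 700, 1500], dtype=float)  # kPa
rng = np.random.default_rng(0)
theta = van_genuchten(h, 0.12, 0.48, 0.06, 1.6) + rng.normal(0, 0.005, h.size)  # synthetic

params, _ = curve_fit(van_genuchten, h, theta,
                      p0=[0.10, 0.45, 0.05, 1.5],                 # initial guesses
                      bounds=([0, 0, 1e-4, 1.01], [1, 1, 10, 10]))
theta_fit = van_genuchten(h, *params)
r2 = 1 - np.sum((theta - theta_fit) ** 2) / np.sum((theta - theta.mean()) ** 2)
print(params, r2)
```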
Abstract:
Plutonium and Sr-90 are considered to be among the most radiotoxic nuclides produced by nuclear fission. In spite of numerous studies on mammals and humans, there is still no general agreement on the retention half-time of either radionuclide in the skeleton of the general population. Here we determined plutonium and Sr-90 in human vertebrae from individuals who died between 1960 and 2004 in Switzerland. Plutonium was measured by sensitive SF-ICP-MS techniques and Sr-90 by radiometric methods. We compared our results with those obtained for other environmental compartments to reveal the retention half-time of NBT fallout Pu-239 and Sr-90 in trabecular bone of the Swiss population. The results show that plutonium has a retention half-time of 40 ± 14 years. In contrast, Sr-90 has a shorter retention half-time of 13.5 ± 1.0 years. Moreover, the Sr-90 retention half-time in vertebrae is shown to be linked to its retention half-time in food and other environmental compartments. These findings demonstrate that the renewal of the vertebrae through calcium homeostatic control removes Sr-90 faster than plutonium. The precise determination of the retention half-time of plutonium in the skeleton will improve the biokinetic model of plutonium metabolism in humans.
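To make the reported half-times concrete: under first-order (exponential) clearance, the fraction of the initial deposit still retained after t years is 0.5^(t/T), where T is the retention half-time. The snippet below evaluates this for the two half-times reported above at a few arbitrary elapsed times.

```python
# Illustration of what a retention half-time implies under first-order
# clearance. The half-times are those reported above; the elapsed times are
# arbitrary examples.
half_time_pu, half_time_sr = 40.0, 13.5   # years

def retained_fraction(t_years: float, half_time: float) -> float:
    return 0.5 ** (t_years / half_time)

for t in (10, 20, 40):
    print(t, round(retained_fraction(t, half_time_pu), 2),
             round(retained_fraction(t, half_time_sr), 2))
```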
Abstract:
The concept of soil physical quality (SPQ) is currently under discussion, and no agreement has been reached on which soil physical properties should be included in SPQ characterization. The objectives of this study were to evaluate the ability of SPQ indicators based on static and dynamic soil properties to assess the effects of two loosening treatments (chisel plowing to 0.20 m [ChT] and subsoiling to 0.35 m [DL]) on a soil under no-tillage (NT), and to compare the performance of static- and dynamic-based SPQ indicators in defining proper soil conditions for soybean yield. Soil sampling and field determinations were carried out after crop harvest. The soil water retention curve was determined using a tension table, and field infiltration was measured using a tension disc infiltrometer. Most dynamic SPQ indicators (field-saturated hydraulic conductivity, K0; effective macroporosity, εma; and the total connectivity and macroporosity indexes, CwTP and Cwmac) were affected by the treatments studied and were greater for DL than for NT and ChT (K0 values were 2.17, 2.55, and 4.37 cm h-1 for NT, ChT, and DL, respectively). However, static SPQ indicators (calculated from the water retention curve) were not capable of distinguishing effects among treatments. Crop yield was significantly lower for the DL treatment (NT: 2,400 kg ha-1; ChT: 2,358 kg ha-1; DL: 2,105 kg ha-1), in agreement with the significantly higher values of the dynamic SPQ indicators K0, εma, CwTP, and Cwmac in this treatment. The results support the idea that SPQ indicators based on static properties are not capable of distinguishing tillage effects or predicting crop yield, whereas dynamic SPQ indicators are useful for distinguishing tillage effects and can explain differences in crop yield when used together with information on weather conditions. However, future studies monitoring years with different weather conditions would be useful for increasing knowledge on this topic.
Abstract:
Three-dimensional imaging and quantification of myocardial function are essential steps in the evaluation of cardiac disease. We propose a tagged magnetic resonance imaging methodology called zHARP that encodes and automatically tracks myocardial displacement in three dimensions. Unlike other motion encoding techniques, zHARP encodes both in-plane and through-plane motion in a single image plane without affecting the acquisition speed. Postprocessing unravels this encoding in order to directly track the 3-D displacement of every point within the image plane throughout an entire image sequence. Experimental results include a phantom validation experiment, which compares zHARP to phase contrast imaging, and an in vivo study of a normal human volunteer. Results demonstrate that the simultaneous extraction of in-plane and through-plane displacements from tagged images is feasible.
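For orientation, the harmonic-phase idea that zHARP builds on can be sketched as follows: band-pass one spectral peak of the tagged image in k-space and take the phase of the filtered complex image, which serves as a material marker for tracking. This is a simplified 2D illustration with hypothetical inputs, not the zHARP through-plane encoding or tracking itself.

```python
# Simplified HARP-style harmonic phase extraction from a 2-D tagged image.
# The input image, tag spatial frequency, and filter bandwidth are
# hypothetical; zHARP adds through-plane (z) encoding on top of this idea.
import numpy as np

def harmonic_phase(tagged_image: np.ndarray, tag_freq_px: float,
                   bandwidth_px: float) -> np.ndarray:
    """Harmonic phase of a 1-1 SPAMM tagged image (tag lines modulating along x)."""
    ny, nx = tagged_image.shape
    k = np.fft.fftshift(np.fft.fft2(tagged_image))
    kx = np.arange(nx) - nx // 2
    ky = np.arange(ny) - ny // 2
    KX, KY = np.meshgrid(kx, ky)
    # circular band-pass centred on the first harmonic peak at kx = tag_freq_px
    mask = (KX - tag_freq_px) ** 2 + KY ** 2 <= bandwidth_px ** 2
    harmonic = np.fft.ifft2(np.fft.ifftshift(k * mask))
    return np.angle(harmonic)          # wrapped phase in (-pi, pi]
```

Because the harmonic phase is (up to wrapping) a material property of the tissue, points of constant phase can be tracked from frame to frame to recover displacement.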
Abstract:
BACKGROUND: Direct noninvasive visualization of the coronary vessel wall may enhance risk stratification by quantifying subclinical coronary atherosclerotic plaque burden. We sought to evaluate high-resolution black-blood 3D cardiovascular magnetic resonance (CMR) imaging for in vivo visualization of the proximal coronary artery vessel wall. METHODS AND RESULTS: Twelve adult subjects, including 6 clinically healthy subjects and 6 patients with nonsignificant coronary artery disease (10% to 50% x-ray angiographic diameter reduction), were studied with the use of a commercial 1.5 Tesla CMR scanner. Free-breathing 3D coronary vessel wall imaging was performed along the major axis of the right coronary artery with isotropic spatial resolution (1.0 × 1.0 × 1.0 mm³) with the use of a black-blood spiral image acquisition. The proximal vessel wall thickness and luminal diameter were objectively determined with an automated edge detection tool. The 3D CMR vessel wall scans allowed for visualization of the contiguous proximal right coronary artery in all subjects. Both mean vessel wall thickness (1.7 ± 0.3 versus 1.0 ± 0.2 mm) and wall area (25.4 ± 6.9 versus 11.5 ± 5.2 mm²) were significantly increased in the patients compared with the healthy subjects (both P<0.01). The lumen diameter (3.6 ± 0.7 versus 3.4 ± 0.5 mm, P=0.47) and lumen area (8.9 ± 3.4 versus 7.9 ± 3.5 mm², P=0.47) were similar in both groups. CONCLUSIONS: Free-breathing 3D black-blood coronary CMR with isotropic resolution identified an increased coronary vessel wall thickness with preservation of lumen size in patients with nonsignificant coronary artery disease, consistent with a "Glagov-type" outward arterial remodeling. This novel approach has the potential to quantify subclinical disease.
Abstract:
Retention elections are intended to focus on the professional competency of Iowa’s judges rather than the popularity of individual rulings. In a retention election, voters decide whether a judge should be retained or removed from office. If a judge receives a majority of “yes” votes, the judge serves another full term. If a judge receives a majority of “no” votes, the judge is removed from office at the end of the year.