942 results for trapping depth
Abstract:
[EN] A range of factors may affect the composition and abundance of macroalgae on subtidal rocky reefs. We experimentally determined the interactive effect of the occurrence of the long-spined sea urchin, Diadema antillarum, depth, and sedimentation levels on macroalgal assemblage structure on eastern Atlantic rocky reefs. Specifically, we manipulated sea urchin densities (removal of all individuals vs. untouched controls at natural densities) on rocky reefs devoid of erect vegetation, and predicted (1) that removal of sea urchins would differently affect macroalgal assemblage structure between deep (16-18 m) and shallow (8-9 m) reef strata, and (2) that the effect of sea urchin removal on macroalgae would be altered under different scenarios of sedimentation (ambient vs. enhanced). Experimental circular plots (2 m in diameter) were set up at 3 locations at Gran Canaria (Canarian Archipelago), and were maintained and monitored every 4 wk for 1 yr. At the end of the experimental period, the structure of the algal assemblages differed between urchin treatments and depth strata, with a larger cover of turf and bushlike algae where urchins were removed and at the shallow reef stratum. More importantly, differences in algal assemblage structure between urchin treatments were irrespective of sedimentation levels, but shifted from the shallow to the deep stratum. This interactive effect was, in turn, observed for bushlike algae, as a result of a larger magnitude of response (i.e., larger cover) in the shallow stratum relative to the deep stratum, but was not detected for either turf or crustose coralline algae. These results highlight the importance of some physical conditions (here, differences in depth) interacting with biotic processes (here, urchin abundance) to create patterns in the organization of subtidal benthic assemblages.
Abstract:
Work carried out by: Packard, T. T., Osma, N., Fernández Urruzola, I., Gómez, M.
Abstract:
Máster Oficial en Gestión Costera (Official Master's Degree in Coastal Management)
Abstract:
[EN] Low-cost real-time depth cameras offer new sensors for a wide field of applications beyond the gaming world. Other active research scenarios, such as surveillance, can take advantage of the capabilities offered by this kind of sensor, which integrates depth and visual information. In this paper, we present a system that operates in a novel application context for these devices: troublesome scenarios where illumination conditions can suffer sudden changes. We focus on the people counting problem with re-identification and trajectory analysis.
Abstract:
Visual perception relies on a two-dimensional projection of the viewed scene on the retinas of both eyes. Thus, visual depth has to be reconstructed from a number of different cues that are subsequently integrated to obtain robust depth percepts. Existing models of sensory integration are mainly based on the reliabilities of individual cues and disregard potential cue interactions. In the current study, an extended Bayesian model is proposed that takes into account both cue reliability and consistency. Four experiments were carried out to test this model's predictions. Observers had to judge visual displays of hemi-cylinders with an elliptical cross section, which were constructed to allow for an orthogonal variation of several competing depth cues. In Experiments 1 and 2, observers estimated the cylinder's depth as defined by shading, texture, and motion gradients. The degree of consistency among these cues was systematically varied. It turned out that the extended Bayesian model provided a better fit to the empirical data compared to the traditional model, which disregards covariations among cues. To circumvent the potentially problematic assessment of single-cue reliabilities, Experiment 3 used a multiple-observation task, which allowed for estimating perceptual weights from multiple-cue stimuli. Using the same multiple-observation task, the integration of stereoscopic disparity, shading, and texture gradients was examined in Experiment 4. It turned out that less reliable cues were downweighted in the combined percept. Moreover, a specific influence of cue consistency was revealed. Shading and disparity seemed to be processed interactively while other cue combinations could be well described by additive integration rules. These results suggest that cue combination in visual depth perception is highly flexible and depends on single-cue properties as well as on interrelations among cues.
The extension of the traditional cue combination model is defended in terms of the necessity for robust perception in ecologically valid environments and the current findings are discussed in the light of emerging computational theories and neuroscientific approaches.
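As an illustration of the "traditional" model the abstract contrasts against (not code from the study itself), reliability-weighted linear cue combination can be sketched as follows; the function name and the numerical values are purely hypothetical:

```python
import numpy as np

def combine_cues(estimates, reliabilities):
    """Reliability-weighted linear cue combination: each cue's weight is
    its reliability (e.g. inverse variance) normalized across all cues,
    so less reliable cues are downweighted in the combined percept."""
    w = np.asarray(reliabilities, dtype=float)
    w = w / w.sum()  # normalize weights to sum to 1
    return float(np.dot(w, estimates))

# Hypothetical depth estimates from shading, texture, and motion cues:
# the texture cue is four times as reliable, so it dominates the percept.
depth = combine_cues([10.0, 12.0, 11.0], [1.0, 4.0, 1.0])
```

The extended model discussed above would additionally modulate these weights by the consistency among cues, which this additive sketch deliberately omits.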
Abstract:
In this thesis we have developed solutions to common issues regarding widefield microscopes, facing the problem of the intensity inhomogeneity of an image and dealing with two strong limitations: the impossibility of acquiring either highly detailed images representative of whole samples or deep 3D objects. First, we cope with the problem of the non-uniform distribution of the light signal inside a single image, known as vignetting. In particular, we proposed, for both light and fluorescence microscopy, non-parametric multi-image-based methods, where the vignetting function is estimated directly from the sample without requiring any prior information. After obtaining flat-field-corrected images, we studied how to overcome the limitation of the camera's field of view, so as to be able to acquire large areas at high magnification. To this purpose, we developed mosaicing techniques capable of working on-line. Starting from a set of overlapping images acquired manually, we validated a fast registration approach to accurately stitch the images together. Finally, we worked to virtually extend the field of view of the camera in the third dimension, with the purpose of reconstructing a single image completely in focus from objects that have significant depth or lie in different focal planes. After studying the existing approaches for extending the depth of focus of the microscope, we proposed a general method that does not require any prior information. In order to compare the outcomes of existing methods, different standard metrics are commonly used in the literature. However, no metric is available to compare different methods in real cases. First, we validated a metric able to rank the methods as the Universal Quality Index does, but without needing any reference ground truth. Second, we proved that the approach we developed performs better in both synthetic and real cases.
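As a minimal sketch of the flat-field correction step described above (not the thesis's own non-parametric estimator, which recovers the vignetting function from the sample itself), dividing out a known vignetting function might look like this; the function name is illustrative:

```python
import numpy as np

def flat_field_correct(image, vignetting):
    """Divide out an estimated vignetting function (values in (0, 1],
    1 = no attenuation) and rescale so the corrected image keeps the
    original mean intensity."""
    img = np.asarray(image, dtype=float)
    v = np.asarray(vignetting, dtype=float)
    corrected = img / np.clip(v, 1e-6, None)  # avoid division by zero
    return corrected * (img.mean() / corrected.mean())
```

A uniformly lit sample attenuated by the lens should come back flat after correction; in the thesis's setting the vignetting function would be estimated from multiple images rather than given.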
Abstract:
This work began with a theoretical study of the main image classification techniques known in the literature, with particular attention to the most widespread image representation models, such as the Bag of Visual Words model, and to the main Machine Learning tools. Attention then focused on the analysis of what constitutes the state of the art in image classification, namely Deep Learning. To experiment with the advantages of this set of Image Classification methodologies, Torch7 was used: an open-source numerical computing framework, scriptable via the Lua language, with broad support for state-of-the-art Deep Learning methods. The actual image classification was implemented with Torch7 because this framework, thanks also to the analysis previously carried out by some of my colleagues, proved to be very effective at categorizing objects in images. The images on which the experimental tests were based belong to a dataset created ad hoc for the 3D vision system, with the aim of testing the system for visually impaired and blind users; it contains some of the main obstacles that a visually impaired person may encounter in everyday life. In particular, the dataset consists of potential obstacles relating to a hypothetical outdoor usage scenario. Having thus established that Torch7 was the tool to use for classification, attention turned to the possibility of exploiting Stereo Vision to increase classification accuracy. In fact, the images in the dataset were acquired with an FPGA-based Stereo Camera developed by the research group where this work was carried out. This made it possible to use 3D information, such as the depth level of each object in the image, to segment the objects of interest with an algorithm implemented in C++, excluding the rest of the scene. The final phase of the work was to test Torch7 on the image dataset, previously segmented with the segmentation algorithm just outlined, in order to recognize the type of obstacle detected by the system.
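The depth-based segmentation step described above was implemented in C++; a minimal Python sketch of the same idea (keep only pixels whose depth falls in a band of interest, zero out the rest of the scene) might look like this, with illustrative names throughout:

```python
import numpy as np

def segment_by_depth(image, depth_map, d_min, d_max):
    """Mask an image to the pixels whose depth lies in [d_min, d_max],
    zeroing everything else (background and far-away objects)."""
    mask = (depth_map >= d_min) & (depth_map <= d_max)
    out = image.copy()
    out[~mask] = 0  # suppress pixels outside the depth band
    return out, mask
```

In the pipeline above, the segmented crops (rather than the full scene) would then be fed to the Torch7 classifier.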
Abstract:
This thesis deals with topics in particle beam dynamics: in particular, the betatron motion of a charged particle inside a circular accelerator is considered. Some aspects of transverse dynamics are then discussed, introducing the Hamiltonian formalism and the model presented by Hénon for the two-dimensional case. Adiabatic theory is then introduced in order to study the trapping effects on an ensemble of particles. Finally, some simulations are presented that show how noise is a factor of major importance in the study of such phenomena.
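As an illustration (not code from the thesis), one common form of the 2D Hénon map for transverse betatron motion combines a linear rotation by the phase advance with a quadratic, sextupole-like kick; the tune and starting amplitude below are arbitrary illustrative values:

```python
import math

def henon_step(x, p, omega):
    """One turn of the 2D Henon map: a linear betatron rotation by the
    phase advance omega, applied after a quadratic (sextupole-like)
    kick to the momentum."""
    c, s = math.cos(omega), math.sin(omega)
    kick = p + x * x  # momentum after the nonlinear kick
    return c * x + s * kick, -s * x + c * kick

# Track a small-amplitude particle for 100 turns; at small amplitude
# the motion is close to a pure linear rotation and stays bounded.
x, p = 0.05, 0.0
for _ in range(100):
    x, p = henon_step(x, p, omega=2 * math.pi * 0.205)
```

Trapping studies like those in the thesis would track an ensemble of such particles while slowly modulating parameters (and optionally adding noise), rather than a single orbit.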
Abstract:
The aim of this in vitro study was to evaluate the relationship between laser fluorescence values and sealant penetration depth in occlusal fissures. One hundred and sixty-six permanent molars were selected and divided into four groups, each treated with a different sealant (two clear and two opaque). The teeth were independently measured twice by two experienced dentists using two laser fluorescence devices (DIAGNOdent: LF and LFpen) before and after sealing, and were then thermocycled. After measurement, the teeth were histologically prepared and assessed for caries extension. Digital photographs of the cut sealed sites were assessed, and the sealant penetration depth was measured. All 166 sites were measured by one of the examiners, taking the outer and inner surfaces of the sealant within the fissure as limits. For each device (LF and LFpen) and each group, the difference between the values at baseline and after sealing was plotted against the sealant penetration depth, and scatter plots were produced. Most of the points were concentrated around the zero line for both LF and LFpen in all four groups. In conclusion, there is no relation between changes in DIAGNOdent values and increasing sealant penetration depth within occlusal fissures.