927 results for Penetration Depth


Relevance: 20.00%

Abstract:

Visual perception relies on a two-dimensional projection of the viewed scene on the retinas of both eyes. Thus, visual depth has to be reconstructed from a number of different cues that are subsequently integrated to obtain robust depth percepts. Existing models of sensory integration are mainly based on the reliabilities of individual cues and disregard potential cue interactions. In the current study, an extended Bayesian model is proposed that takes into account both cue reliability and cue consistency. Four experiments were carried out to test this model's predictions. Observers had to judge visual displays of hemi-cylinders with an elliptical cross section, constructed to allow for an orthogonal variation of several competing depth cues. In Experiments 1 and 2, observers estimated the cylinder's depth as defined by shading, texture, and motion gradients. The degree of consistency among these cues was systematically varied. The extended Bayesian model provided a better fit to the empirical data than the traditional model, which disregards covariations among cues. To circumvent the potentially problematic assessment of single-cue reliabilities, Experiment 3 used a multiple-observation task, which allowed perceptual weights to be estimated from multiple-cue stimuli. Using the same multiple-observation task, the integration of stereoscopic disparity, shading, and texture gradients was examined in Experiment 4. Less reliable cues were downweighted in the combined percept. Moreover, a specific influence of cue consistency was revealed: shading and disparity appeared to be processed interactively, while other cue combinations were well described by additive integration rules. These results suggest that cue combination in visual depth perception is highly flexible and depends on single-cue properties as well as on interrelations among cues. The extension of the traditional cue combination model is defended in terms of the necessity for robust perception in ecologically valid environments, and the current findings are discussed in the light of emerging computational theories and neuroscientific approaches.
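
For reference, the "traditional" reliability-based model mentioned above is usually formalized as a weighted linear combination of single-cue estimates, with weights proportional to the cues' inverse variances; the notation below is generic and is not taken from the thesis itself:

    \hat{d} = \sum_i w_i \hat{d}_i, \quad w_i = \frac{1/\sigma_i^2}{\sum_j 1/\sigma_j^2}, \quad \sigma_{\hat{d}}^2 = \Big( \sum_i 1/\sigma_i^2 \Big)^{-1}

where \hat{d}_i is the depth estimate delivered by cue i and \sigma_i^2 its variance (the inverse of its reliability). An extended model of the kind described would additionally let the weights depend on the covariances (consistency) among cues instead of treating the cue estimates as conditionally independent.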

Relevance: 20.00%

Abstract:

In this thesis we developed solutions to common issues affecting widefield microscopes, addressing the problem of intensity inhomogeneity within an image and dealing with two strong limitations: the impossibility of acquiring either highly detailed images representative of whole samples or images of deep 3D objects. First, we addressed the non-uniform distribution of the light signal within a single image, known as vignetting. In particular, for both light and fluorescence microscopy, we proposed non-parametric multi-image methods in which the vignetting function is estimated directly from the sample without requiring any prior information. After obtaining flat-field corrected images, we studied how to overcome the limited field of view of the camera, so as to be able to acquire large areas at high magnification. To this end, we developed mosaicing techniques capable of working on-line. Starting from a set of overlapping images acquired manually, we validated a fast registration approach to accurately stitch the images together. Finally, we worked on virtually extending the field of view of the camera in the third dimension, with the aim of reconstructing a single, fully in-focus image from objects that have substantial depth or lie in different focal planes. After reviewing existing approaches for extending the depth of focus of the microscope, we proposed a general method that does not require any prior information. To compare the outcome of existing methods, several standard metrics are commonly used in the literature; however, no metric is available to compare different methods in real cases. First, we validated a metric able to rank the methods as the Universal Quality Index does, but without needing any reference ground truth. Second, we showed that the approach we developed performs better in both synthetic and real cases.
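
As an illustration of the multi-image idea behind non-parametric vignetting estimation, a minimal Python sketch is given below. The median/Gaussian-smoothing choices, function names, and parameters are assumptions for illustration and are not the specific estimator proposed in the thesis.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def estimate_vignetting(images, smoothing_sigma=25):
        """Estimate a multiplicative vignetting field from several images of
        different fields of view: the per-pixel median suppresses sample
        structure, and smoothing keeps only the slow illumination falloff."""
        stack = np.stack([np.asarray(img, dtype=np.float64) for img in images])
        field = np.median(stack, axis=0)
        field = gaussian_filter(field, smoothing_sigma)
        return field / field.max()          # normalize: brightest point = 1

    def flat_field_correct(image, vignetting, eps=1e-6):
        """Divide out the estimated vignetting to obtain a flat-field image."""
        return np.asarray(image, dtype=np.float64) / (vignetting + eps)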

Relevance: 20.00%

Abstract:

Traffic management is one of the main problems of modern cities and poses new challenges for the optimization of vehicular flow. Traffic-light control is one of the fundamental elements in optimizing traffic management. Currently, traffic is detected with sensors, most commonly inductive loops, whose installation and maintenance entail high costs. In this context, the European project COLOMBO aims to design new traffic-light control systems able to detect vehicular traffic with sensors that are cheaper to install and maintain and, on the basis of these measurements, to self-organize, drawing inspiration from the field of artificial intelligence known as swarm intelligence. COLOMBO's traffic-light self-organization rests on two different levels of policies: macroscopic and microscopic. Macroscopic policies, using pheromone as an abstraction of the current traffic level, choose the control policy according to the amount of pheromone present on the incoming and outgoing lanes. Microscopic policies, in turn, decide the duration of the red and green periods by modifying a sequence of phases, called a chain in COLOMBO. Chains can be selected by the system according to the current value of a desirability threshold, and each chain has its own desirability threshold. The aim of this work is to propose alternative methods to the current computation of this desirability threshold in scenarios with a low density of vehicle-detection devices. Every complex algorithm needs to be tuned to improve its performance; the proposed algorithms therefore underwent a parameter-tuning process to optimize their performance in such low-detection scenarios. Finally, based on the parameter tuning, simulations were run to evaluate which of the proposed approaches performs best.
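
To make the pheromone-based macroscopic level more concrete, here is a minimal Python sketch of how detector counts could be turned into a pheromone level and a policy choice. The function names, constants, and policy labels are illustrative assumptions and do not reproduce the actual COLOMBO policies.

    def update_pheromone(pheromone, detected_vehicles, evaporation=0.9, deposit=1.0):
        """Detected vehicles deposit pheromone on a lane; the level evaporates
        over time so that it tracks the current traffic intensity."""
        return evaporation * pheromone + deposit * detected_vehicles

    def choose_macro_policy(pheromone_in, pheromone_out, congestion_threshold=10.0):
        """Select a macroscopic policy from the pheromone levels measured on the
        incoming and outgoing lanes of the intersection."""
        if pheromone_in > congestion_threshold and pheromone_in >= pheromone_out:
            return "drain_incoming_queues"
        if pheromone_out > congestion_threshold:
            return "meter_inflow"        # downstream congestion: hold traffic back
        return "default_cycle"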

Relevance: 20.00%

Abstract:

This work began with a theoretical study of the main image classification techniques known in the literature, with particular attention to the most widespread image representation models, such as the Bag of Visual Words model, and to the main Machine Learning tools. Attention then focused on the current state of the art for image classification, namely Deep Learning. To experiment with these image classification methods, Torch7 was used: an open-source numerical computing framework, scriptable in Lua, with broad support for state-of-the-art Deep Learning techniques. The actual image classification was implemented with Torch7 because this framework, also thanks to the analysis previously carried out by colleagues, proved very effective at categorizing objects in images. The images used in the experiments belong to a dataset created ad hoc for a 3D vision system intended for visually impaired and blind users; it contains some of the main obstacles that a visually impaired person may encounter in everyday life. In particular, the dataset consists of potential obstacles from a hypothetical outdoor usage scenario. Having established that Torch7 was the tool to use for classification, attention turned to the possibility of exploiting stereo vision to increase classification accuracy. The images in the dataset were in fact acquired with a stereo camera with FPGA processing developed by the research group where this work was carried out. This made it possible to use 3D information, namely the depth level of each object in the image, to segment the objects of interest with an algorithm written in C++, excluding the rest of the scene. The final phase of the work consisted in testing Torch7 on the dataset of images, previously segmented with the segmentation algorithm just described, in order to recognize the type of obstacle detected by the system.
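
The depth-based segmentation step can be illustrated with a short sketch. The thesis used a C++ implementation whose details are not given in the abstract, so the Python code below, its thresholds, and its function names are illustrative assumptions only.

    import numpy as np
    from scipy import ndimage

    def segment_nearest_object(image, depth, max_depth_m=3.0, min_area_px=500):
        """Keep only the nearest obstacle: threshold the depth map, take the
        largest connected component, and blank out the rest of the scene."""
        mask = (depth > 0) & (depth < max_depth_m)     # valid pixels close to the camera
        labels, n_components = ndimage.label(mask)
        if n_components == 0:
            return None
        sizes = ndimage.sum(mask, labels, index=range(1, n_components + 1))
        largest = int(np.argmax(sizes)) + 1
        if sizes[largest - 1] < min_area_px:
            return None
        segmented = image.copy()
        segmented[labels != largest] = 0
        return segmented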

Relevance: 20.00%

Abstract:

Among various groups of fishes, a shift in peak wavelength sensitivity has been correlated with changes in their photic environments. The genus Sebastes is a radiation of marine fish species that inhabit a wide range of depths from intertidal to over 600 m. We examined 32 species of Sebastes for evidence of adaptive amino acid substitution at the rhodopsin gene. Fourteen amino acid positions were variable among these species. Maximum likelihood analyses identify several of these as targets of positive selection. None of these correspond to previously identified critical amino acid sites, yet they may in fact be functionally important. The occurrence of independent parallel changes at certain amino acid positions reinforces this idea. Reconstruction of habitat depths of ancestral nodes in the phylogeny suggests that shallow habitats have been colonized independently in different lineages. The evolution of rhodopsin appears to be associated with changes in depth, with accelerated evolution in lineages that have had large changes in depth.

Relevance: 20.00%

Abstract:

The Advanced Very High Resolution Radiometer (AVHRR) carried on board the National Oceanic and Atmospheric Administration (NOAA) and the Meteorological Operational Satellite (MetOp) polar orbiting satellites is the only instrument offering more than 25 years of satellite data to analyse aerosols on a daily basis. The present study assessed a modified AVHRR aerosol optical depth τa retrieval over land for Europe. The algorithm could also be applied to other parts of the world with surface characteristics similar to Europe's; only the aerosol properties would have to be adapted to the new region. The initial approach used a relationship between Sun photometer measurements from the Aerosol Robotic Network (AERONET) and the satellite data to post-process the retrieved τa. Here a quasi-stand-alone procedure, which is more suitable for the pre-AERONET era, is presented. In addition, the estimation of surface reflectance, the aerosol model, and other processing steps have been adapted. The method's cross-platform applicability was tested by validating τa from NOAA-17 and NOAA-18 AVHRR at 15 AERONET sites in Central Europe (40.5° N–50° N, 0° E–17° E) from August 2005 to December 2007. Furthermore, the accuracy of the AVHRR retrieval was compared with products from two newer instruments, the Medium Resolution Imaging Spectrometer (MERIS) on board the Environmental Satellite (ENVISAT) and the Moderate Resolution Imaging Spectroradiometer (MODIS) on board Aqua/Terra. In terms of the linear correlation coefficient R, the AVHRR results were similar to those of MERIS, with an even lower root mean square error (RMSE). Not surprisingly, MODIS, with its high spectral coverage, gave the highest R and the lowest RMSE. Regarding monthly averaged τa, the results were ambiguous. Focusing on small-scale structures, R was reduced for all sensors, whereas the RMSE increased substantially only for MERIS. For larger areas such as Central Europe, the error statistics were similar to those of the individual match-ups; this was mainly explained by sampling issues. With the successful validation of AVHRR we are now able to concentrate on our large data archive dating back to 1985, a unique opportunity for both climate and air pollution studies over land surfaces.
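
The validation metrics quoted above (R and RMSE over AERONET match-ups) can be computed as in the short Python sketch below. It simply shows the conventional definitions and is not the study's actual processing chain; the function name is an assumption.

    import numpy as np

    def matchup_statistics(tau_satellite, tau_aeronet):
        """Correlation coefficient R, root mean square error RMSE, and mean bias
        between retrieved aerosol optical depth and AERONET reference values."""
        sat = np.asarray(tau_satellite, dtype=float)
        ref = np.asarray(tau_aeronet, dtype=float)
        valid = ~np.isnan(sat) & ~np.isnan(ref)     # keep only co-located, valid pairs
        sat, ref = sat[valid], ref[valid]
        r = np.corrcoef(sat, ref)[0, 1]
        rmse = np.sqrt(np.mean((sat - ref) ** 2))
        bias = np.mean(sat - ref)
        return r, rmse, bias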

Relevance: 20.00%

Abstract:

Outside of relatively limited crash testing with large trucks, very little is known about the performance of traffic barriers subjected to real-world large truck impacts. The purpose of this study was to investigate real-world large truck impacts into traffic barriers to determine barrier crash involvement rates, the impact performance of barriers not specifically designed to redirect large trucks, and the real-world performance of large-truck-specific barriers. Data sources included the Fatality Analysis Reporting System (2000-2009), the General Estimates System (2000-2009), and 155 in-depth large truck-to-barrier crashes from the Large Truck Crash Causation Study. Large truck impacts with a longitudinal barrier were found to comprise 3 percent of all police-reported longitudinal barrier impacts and roughly the same proportion of barrier fatalities. Based on a logistic regression model predicting barrier penetration, large truck barrier penetration risk was found to increase by a factor of 6 for impacts with barriers designed primarily for passenger vehicles. Although large-truck-specific barriers were found to perform better than barriers not designed for heavy vehicles, their penetration rate was still 17 percent. This penetration rate is of particular concern because the higher test level barriers are designed to protect other road users, not the occupants of the large truck. Surprisingly, barriers not specifically designed for large truck impacts were found to prevent large truck penetration approximately half of the time. This suggests that adding costlier, higher test level barriers may not always be warranted, especially on roadways with lower truck volumes.
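
As a sketch of how a "factor of 6" risk increase can be obtained from a logistic regression on crash data, the hypothetical Python example below fits such a model and converts the coefficients to odds ratios. The toy data frame, variable names, and predictors are invented for illustration and are not the study's data.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    # Hypothetical toy data: one row per large-truck barrier impact.
    # 'penetrated' is the outcome; 'passenger_vehicle_barrier' flags barriers
    # designed primarily for passenger vehicles. All values are invented.
    crashes = pd.DataFrame({
        "penetrated":                [1, 0, 1, 0, 0, 1, 0, 1, 0, 1],
        "passenger_vehicle_barrier": [1, 0, 1, 0, 1, 1, 0, 1, 0, 0],
        "impact_angle_deg":          [25, 10, 30, 15, 20, 35, 12, 14, 8, 22],
    })

    X = sm.add_constant(crashes[["passenger_vehicle_barrier", "impact_angle_deg"]])
    model = sm.Logit(crashes["penetrated"], X).fit(disp=False)

    # exp(coefficient) is the multiplicative change in the odds of penetration,
    # which is how a statement like "risk increases by a factor of 6" is derived.
    print(np.exp(model.params))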

Relevance: 20.00%

Abstract:

To analyze the influence of corneal cross-linking (CXL) using ultraviolet-A and riboflavin on the corneal penetration of topically applied drugs.

Relevance: 20.00%

Abstract:

To evaluate whether the depth of cure D(ISO) determined by the ISO 4049 method is accurately reflected for bulk-fill materials when compared with the depth of cure D(new) determined from Vickers microhardness profiles.

Relevance: 20.00%

Abstract:

The human epithelial cell adhesion molecule (EpCAM) is highly expressed in a variety of clinical tumour entities. Although an antibody against EpCAM has successfully been used as an adjuvant therapy in colon cancer, this therapy has never gained widespread use. We have therefore investigated the possibilities and limitations of EpCAM as a possible molecular imaging target using a panel of preclinical cancer models. Twelve human cancer cell lines representing six tumour entities were tested for EpCAM expression by qPCR, flow cytometry and immunocytochemistry. In addition, EpCAM expression was analyzed in vivo in xenograft models of tumours derived from these cells. Except for melanoma, all cell lines expressed EpCAM mRNA and protein when grown in vitro. Although they exhibited different mRNA levels, all cell lines showed similar EpCAM protein levels upon detection with monoclonal antibodies. When grown in vivo, EpCAM expression was unchanged compared with in vitro conditions, except for the pancreatic carcinoma cell line 5072, which lost its EpCAM expression in vivo. Intravenously applied, radio-labelled anti-EpCAM antibody MOC31 was enriched in HT29 primary tumour xenografts, indicating that EpCAM binding sites are accessible in vivo. However, bound antibody could only be detected immunohistochemically in the vicinity of perfused blood vessels. Investigation of the fine structure of the HT29 tumour blood vessels showed that they were immature and prone to a higher fluid flux into the interstitial space. Consistent with this hypothesis, a higher interstitial fluid pressure of about 12 mbar was measured in the HT29 primary tumour with the "wick-in-needle" technique, which could explain the limited diffusion of the antibody into the tumour observed by immunohistochemistry.