146 results for Thresholding


Relevance:

10.00%

Publisher:

Abstract:

Background: We aimed to investigate the performance of five different trend analysis criteria for the detection of glaucomatous progression and to determine the most frequently and rapidly progressing locations of the visual field. Design: Retrospective cohort. Participants or Samples: Treated glaucoma patients with ≥8 Swedish Interactive Thresholding Algorithm (SITA)-standard 24-2 visual field tests. Methods: Progression was determined using trend analysis. Five different criteria were used: (A) ≥1 significantly progressing point; (B) ≥2 significantly progressing points; (C) ≥2 progressing points located in the same hemifield; (D) at least two adjacent progressing points located in the same hemifield; (E) ≥2 progressing points in the same Garway-Heath map sector. Main Outcome Measures: Number of progressing eyes and false-positive results. Results: We included 587 patients. The number of eyes reaching a progression endpoint using each criterion was: A = 300 (51%); B = 212 (36%); C = 194 (33%); D = 170 (29%); and E = 186 (31%) (P = 0.03). The numbers of eyes with positive slopes were: A = 13 (4.3%); B = 3 (1.4%); C = 3 (1.5%); D = 2 (1.1%); and E = 3 (1.6%) (P = 0.06). The global slopes for progressing eyes were more negative in Groups B, C and D than in Group A (P = 0.004). The visual field locations that progressed most often were those in the nasal field adjacent to the horizontal midline. Conclusions: Pointwise linear regression criteria that take into account the retinal nerve fibre layer anatomy enhance the specificity of trend analysis for the detection of glaucomatous visual field progression.
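The pointwise trend-analysis idea can be illustrated with a minimal sketch: fit a least-squares slope per visual-field location, then apply one of the counting criteria. The data, cutoff, and helper names below are hypothetical, and the study additionally requires each slope to be statistically significant, which this sketch omits.

```python
import numpy as np

def pointwise_slopes(series, years):
    """Least-squares slope (dB/year) at each visual-field location.
    series: (n_visits, n_points) sensitivities; years: (n_visits,) test times."""
    X = np.column_stack([years, np.ones_like(years)])
    coef, *_ = np.linalg.lstsq(X, series, rcond=None)
    return coef[0]  # one slope per location

def criterion_b(slopes, cutoff=-1.0):
    """Simplified criterion B: >=2 locations declining faster than cutoff.
    (The study also demands statistical significance of each slope.)"""
    return int((slopes < cutoff).sum()) >= 2

# Hypothetical series: 8 visits, 4 locations, two of them declining steeply.
years = np.arange(8, dtype=float)
fields = np.full((8, 4), 30.0)
fields[:, 0] -= 1.5 * years
fields[:, 1] -= 1.2 * years
slopes = pointwise_slopes(fields, years)
```

With these toy data, criterion B flags the eye as progressing because two locations decline faster than 1 dB/year.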

Relevance:

10.00%

Publisher:

Abstract:

This paper describes a logic-based formalism for qualitative spatial reasoning with cast shadows (Perceptual Qualitative Relations on Shadows, or PQRS) and presents results of a mobile robot qualitative self-localisation experiment using this formalism. Shadow detection was accomplished by mapping the images from the robot’s monocular colour camera into an HSV colour space and then thresholding on the V dimension. We present results of self-localisation using two methods for obtaining the threshold automatically: in one method the images are segmented according to their grey-scale histograms, in the other, the threshold is set according to a prediction about the robot’s location, based upon a qualitative spatial reasoning theory about shadows. This theory-driven threshold search and the qualitative self-localisation procedure are the main contributions of the present research. To the best of our knowledge this is the first work that uses qualitative spatial representations both to perform robot self-localisation and to calibrate a robot’s interpretation of its perceptual input.

Relevance:

10.00%

Publisher:

Abstract:

Distribution of the code is permitted under the terms of the three-clause BSD license.

Relevance:

10.00%

Publisher:

Abstract:

The project contains simulation, data processing, mapping and localisation modules, developed in C++ using ROS (Robot Operating System) and PCL (Point Cloud Library). It was developed within the AVORA underwater robotics project. The vehicle and the sensor were characterised, and different sensor and mapping technologies were analysed. The data pass through three stages: conversion to a point cloud, threshold filtering with removal of spurious points, and, optionally, shape detection. These data are used to build a multi-level surface map. The other tool developed is a modified Iterative Closest Point (ICP) algorithm that takes into account the operating mode of the imaging sonar used.
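The filtering stage can be sketched in numpy as an intensity threshold followed by statistical outlier removal in the spirit of PCL's SOR filter; the point cloud, intensities, and parameters below are synthetic stand-ins, not the project's actual pipeline.

```python
import numpy as np

def threshold_filter(points, intensity, min_intensity):
    """Keep only sonar returns whose intensity clears a threshold."""
    return points[intensity >= min_intensity]

def remove_outliers(points, k=8, std_ratio=2.0):
    """Statistical outlier removal, re-sketched in numpy: drop points whose
    mean distance to their k nearest neighbours exceeds the global mean
    by more than std_ratio standard deviations."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    knn = np.sort(d, axis=1)[:, 1:k + 1]      # skip self-distance
    mean_d = knn.mean(axis=1)
    cut = mean_d.mean() + std_ratio * mean_d.std()
    return points[mean_d <= cut]

# Synthetic cloud: a dense cluster plus one spurious return.
rng = np.random.default_rng(1)
cloud = rng.normal(0.0, 0.1, size=(200, 3))
cloud = np.vstack([cloud, [[5.0, 5.0, 5.0]]])
intensity = rng.uniform(0.5, 1.0, size=201)
strong = threshold_filter(cloud, intensity, 0.4)  # nothing removed here
clean = remove_outliers(strong)                   # spurious point dropped
```

The lone distant point has a far larger mean neighbour distance than the cluster members, so only it is discarded.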

Relevance:

10.00%

Publisher:

Abstract:

Structural Health Monitoring (SHM) is the process of characterizing existing civil structures for damage detection and structural identification. It relies first on the collection of data, which are inevitably affected by noise. In this work a procedure to denoise the measured acceleration signal is proposed, based on EMD-thresholding techniques. The velocity and displacement responses are then estimated from the measured acceleration.

Relevance:

10.00%

Publisher:

Abstract:

A density function is to be estimated under the model assumption that it lies in a suitable Besov class and has compact support. To this end, a wavelet estimator TW that employs thresholding methods is examined in detail. The asymptotic rate of convergence of TW for a large number of observations is stated and proved. Finally, an overview discusses further wavelet estimators and compares them with TW. It turns out that TW attains the optimal rate of convergence under many model assumptions.

Relevance:

10.00%

Publisher:

Abstract:

The finite element method has been used to evaluate the distribution of loads and deformations in numerous components of the human body. The application of this method has been particularly successful in joints with simple geometry and well-defined loading conditions, whereas it has had less impact on the understanding of the biomechanics of multi-bone joints such as the wrist. The aim of this work is to evaluate the clinical and biomechanical aspects of the distal radio-ulnar joint through the use of modelling methods and finite element analysis. Two 3D models were built from CT images in DICOM format. The images belonged to a patient with a healthy joint and to a patient with a pathological joint, specifically a traumatic ulnar dislocation. The main components of the models considered were: radius, ulna, cartilage, and the interosseous, palmar and distal ligaments. The radius and the ulna were built using the "Thresholding" and "RegionGrowing" segmentation methods on the images, and morphological operators made it possible to distinguish cortical bone from spongy bone. The cartilage between the two bones was then created through Boolean operations, while the ligaments were constructed by taking node points on the radius and the ulna and forming surfaces between them. The corresponding material properties were assigned to each of these components. "Smoothing" and "Autoremesh" operations were needed to improve the quality of the models. A finite element analysis was then performed by applying constraints and forces, so as to simulate the behaviour of the joints; in particular, stress and deformation were simulated.
Finally, the results of the simulations made it possible to assess the potential fracture risk at different anatomical points of the radius and the ulna in the healthy and the pathological joint.

Relevance:

10.00%

Publisher:

Abstract:

We consider a time-inhomogeneous diffusion process given by a stochastic differential equation whose drift term contains a deterministic T-periodic signal with known periodicity. This signal is assumed to lie in a Besov space. We estimate it by means of a nonparametric wavelet estimator. Our estimator is inspired by a wavelet density estimator with thresholding that was constructed in 1996 by Donoho, Johnstone, Kerkyacharian and Picard in a classical i.i.d. model. Under certain ergodicity assumptions on the process, we can state nonparametric convergence rates that match those of the classical i.i.d. case up to a logarithmic factor. These rates are proved by means of oracle inequalities that rely on results on discrete-time Markov chains by Clémençon (2001). In addition, we consider a technically simpler special case and present some computer simulations of this estimator.
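For reference, the hard-thresholding wavelet estimator of Donoho, Johnstone, Kerkyacharian and Picard that inspires the construction has, in the i.i.d. density setting, the standard form below, with empirical wavelet coefficients and level-dependent thresholds \(\lambda_j\) (notation illustrative):

```latex
\hat f_n(x) = \sum_{k} \hat\alpha_{j_0,k}\,\varphi_{j_0,k}(x)
  + \sum_{j=j_0}^{j_1} \sum_{k} \hat\beta_{j,k}\,
    \mathbf{1}\!\left\{|\hat\beta_{j,k}| > \lambda_j\right\}\psi_{j,k}(x),
\qquad
\hat\beta_{j,k} = \frac{1}{n}\sum_{i=1}^{n} \psi_{j,k}(X_i).
```

Coefficients whose magnitude falls below the threshold are discarded, which is what adapts the estimator to unknown Besov smoothness.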

Relevance:

10.00%

Publisher:

Abstract:

Satellite image classification involves designing and developing efficient image classifiers. With satellite image data and image analysis methods multiplying rapidly, selecting the right mix of data sources and data analysis approaches has become critical to the generation of quality land-use maps. In this study, a new postprocessing information fusion algorithm for the extraction and representation of land-use information based on high-resolution satellite imagery is presented. This approach can produce land-use maps with sharp interregional boundaries and homogeneous regions. The proposed approach is conducted in five steps. First, a GIS layer (ATKIS data) was used to generate two coarse homogeneous regions, i.e. urban and rural areas. Second, a thematic (class) map was generated by use of a hybrid spectral classifier combining the Gaussian Maximum Likelihood algorithm (GML) and the ISODATA classifier. Third, a probabilistic relaxation algorithm was performed on the thematic map, resulting in a smoothed thematic map. Fourth, edge detection and edge thinning techniques were used to generate a contour map with pixel-width interclass boundaries. Fifth, the contour map was superimposed on the thematic map by use of a region-growing algorithm with the contour map and the smoothed thematic map as two constraints. For the operation of the proposed method, a software package was developed in the C programming language. This software package comprises the GML algorithm, a probabilistic relaxation algorithm, the TBL edge detector, an edge thresholding algorithm, a fast parallel thinning algorithm, and a region-growing information fusion algorithm. The county of Landau in the state of Rheinland-Pfalz, Germany, was selected as a test site. The high-resolution IRS-1C imagery was used as the principal input data.
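The edge-detection-plus-thresholding stage (step four) can be sketched with a finite-difference gradient standing in for the TBL detector; the image and threshold below are synthetic and purely illustrative.

```python
import numpy as np

def edge_threshold(image, t):
    """Gradient-magnitude edge map (central differences) thresholded at t,
    a simplified stand-in for the TBL-detector-plus-thresholding stage."""
    gy, gx = np.gradient(image.astype(float))   # per-axis gradients
    mag = np.hypot(gx, gy)
    return (mag > t).astype(np.uint8)

# Synthetic scene: a vertical step edge between two homogeneous regions.
img = np.zeros((20, 20))
img[:, 10:] = 100.0
edges = edge_threshold(img, t=10.0)
```

The resulting contour is a narrow band along the class boundary, which a thinning pass would then reduce to pixel width.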

Relevance:

10.00%

Publisher:

Abstract:

The main goal of this thesis is the characterization of 3D video content. In a first analysis, the spatial and temporal complexity of 3D content was studied following the conventional techniques applied to 2D video. In particular, Spatial Information (SI) and Temporal Information (TI) are the two indicators used in the 3D characterization of spatial and temporal content. To give a complete description of 3D video, characterization in terms of depth must also be considered. In this regard, new depth indicators were proposed based on statistical evaluations of depth-map histograms. The first depth indicator is based on the mean and standard deviation of the data distribution in the depth map. Another metric proposed in this work estimates depth by computing the entropy of the depth map. Finally, the fourth algorithm implemented jointly applies a thresholding technique and analyses the residual histogram values by computing the kurtosis index. The proposed algorithms were tested by comparing the metrics proposed in this work both with earlier ones and with subjective test results. The experimental results show the effectiveness of the proposed solutions in assessing depth in 3D video. Finally, one of the new indicators was applied to a 3D video database to complete the characterization of the 3D content.
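Three of the histogram-based depth indicators (mean/standard deviation, entropy, kurtosis) can be sketched as follows; the depth maps are synthetic and the exact formulas in the thesis may differ.

```python
import numpy as np

def depth_indicators(depth_map, bins=256):
    """Illustrative depth indicators from a depth map: (mean, std),
    Shannon entropy of the depth histogram, and excess kurtosis."""
    d = depth_map.ravel().astype(float)
    mean, std = d.mean(), d.std()
    p, _ = np.histogram(d, bins=bins, range=(0, 255))
    p = p / p.sum()
    p_nz = p[p > 0]
    entropy = -np.sum(p_nz * np.log2(p_nz))
    kurt = np.mean(((d - mean) / std) ** 4) - 3.0 if std > 0 else 0.0
    return mean, std, entropy, kurt

# Synthetic maps: a flat scene (no depth variation) vs. a varied one.
rng = np.random.default_rng(2)
flat = np.full((64, 64), 128.0)
varied = rng.uniform(0, 255, size=(64, 64))
m1, s1, e1, k1 = depth_indicators(flat)
m2, s2, e2, k2 = depth_indicators(varied)
```

A flat depth map yields zero spread and zero histogram entropy, while a depth map spanning the whole range scores high on both, which is the intuition behind using these statistics as depth-richness indicators.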

Relevance:

10.00%

Publisher:

Abstract:

Spectrum sensing is currently one of the most challenging design problems in cognitive radio. A robust spectrum sensing technique is important in allowing implementation of a practical dynamic spectrum access in noisy and interference-uncertain environments. In addition, it is desired to minimize the sensing time, while meeting the stringent cognitive radio application requirements. To cope with this challenge, cyclic spectrum sensing techniques have been proposed. However, such techniques require very high sampling rates in the wideband regime and thus are costly in hardware implementation and power consumption. In this thesis the concept of compressed sensing is applied to circumvent this problem by utilizing the sparsity of the two-dimensional cyclic spectrum. Compressive sampling is used to reduce the sampling rate, and a recovery method is developed for reconstructing the sparse cyclic spectrum from the compressed samples. The reconstruction solution used exploits the sparsity structure in the two-dimensional cyclic spectrum domain, which is different from conventional compressed sensing techniques for vector-form sparse signals. The entire wideband cyclic spectrum is reconstructed from sub-Nyquist-rate samples for simultaneous detection of multiple signal sources. After the cyclic spectrum recovery, two methods are proposed to make spectral occupancy decisions from the recovered cyclic spectrum: a band-by-band multi-cycle detector which works for all modulation schemes, and a fast and simple thresholding method that works for Binary Phase Shift Keying (BPSK) signals only. In addition, a method for recovering the power spectrum of stationary signals is developed as a special case. Simulation results demonstrate that the proposed spectrum sensing algorithms can significantly reduce sampling rate without sacrificing performance. The robustness of the algorithms to the noise uncertainty of the wireless channel is also shown.
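The occupancy-decision idea behind the simple thresholding detector can be sketched as a per-band energy comparison against the noise floor; the PSD, band layout, and margin below are synthetic stand-ins, not the thesis's cyclic-spectrum detector.

```python
import numpy as np

def band_occupancy(psd, band_edges, noise_floor, margin_db=6.0):
    """Declare a band occupied when its mean PSD exceeds the noise floor
    by margin_db (a simplified stand-in for the thresholding detector)."""
    thr = noise_floor * 10 ** (margin_db / 10.0)
    return [bool(psd[a:b].mean() > thr) for a, b in band_edges]

# Synthetic PSD: 256 bins of unit-mean noise with one active band.
rng = np.random.default_rng(5)
psd = rng.exponential(1.0, size=256)
psd[64:96] += 20.0
bands = [(0, 64), (64, 96), (96, 256)]
occ = band_occupancy(psd, bands, noise_floor=1.0)
```

Only the band carrying the signal clears the 6 dB margin over the noise floor, so only it is declared occupied.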

Relevance:

10.00%

Publisher:

Abstract:

We consider the problem of fitting a union of subspaces to a collection of data points drawn from one or more subspaces and corrupted by noise and/or gross errors. We pose this problem as a non-convex optimization problem, where the goal is to decompose the corrupted data matrix as the sum of a clean and self-expressive dictionary plus a matrix of noise and/or gross errors. By self-expressive we mean a dictionary whose atoms can be expressed as linear combinations of themselves with low-rank coefficients. In the case of noisy data, our key contribution is to show that this non-convex matrix decomposition problem can be solved in closed form from the SVD of the noisy data matrix. The solution involves a novel polynomial thresholding operator on the singular values of the data matrix, which requires minimal shrinkage. For one subspace, a particular case of our framework leads to classical PCA, which requires no shrinkage. For multiple subspaces, the low-rank coefficients obtained by our framework can be used to construct a data affinity matrix from which the clustering of the data according to the subspaces can be obtained by spectral clustering. In the case of data corrupted by gross errors, we solve the problem using an alternating minimization approach, which combines our polynomial thresholding operator with the more traditional shrinkage-thresholding operator. Experiments on motion segmentation and face clustering show that our framework performs on par with state-of-the-art techniques at a reduced computational cost.
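The classical shrinkage-thresholding operator on singular values that the paper contrasts with its polynomial variant can be sketched as follows; the data are synthetic and the polynomial operator itself is not reproduced here.

```python
import numpy as np

def svt(X, tau):
    """Singular value soft-thresholding: shrink every singular value
    by tau and zero out those that fall below it, giving a low-rank
    estimate of X (the classical shrinkage operator)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)
    return U @ np.diag(s_shrunk) @ Vt, s_shrunk

# Synthetic rank-2 matrix corrupted by small Gaussian noise.
rng = np.random.default_rng(3)
A = rng.normal(size=(50, 2)) @ rng.normal(size=(2, 50))
noisy = A + 0.01 * rng.normal(size=(50, 50))
recovered, s = svt(noisy, tau=1.0)
```

The noise-level singular values are annihilated while the two dominant ones survive (minus the shrinkage tau), which is exactly the behaviour the polynomial operator is designed to improve on by shrinking the retained values less.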

Relevance:

10.00%

Publisher:

Abstract:

The brain is a complex neural network with a hierarchical organization and the mapping of its elements and connections is an important step towards the understanding of its function. Recent developments in diffusion-weighted imaging have provided the opportunity to reconstruct the whole-brain structural network in-vivo at a large scale level and to study the brain structural substrate in a framework that is close to the current understanding of brain function. However, methods to construct the connectome are still under development and they should be carefully evaluated. To this end, the first two studies included in my thesis aimed at improving the analytical tools specific to the methodology of brain structural networks. The first of these papers assessed the repeatability of the most common global and local network metrics used in literature to characterize the connectome, while in the second paper the validity of further metrics based on the concept of communicability was evaluated. Communicability is a broader measure of connectivity which accounts also for parallel and indirect connections. These additional paths may be important for reorganizational mechanisms in the presence of lesions as well as to enhance integration in the network. These studies showed good to excellent repeatability of global network metrics when the same methodological pipeline was applied, but more variability was detected when considering local network metrics or when using different thresholding strategies. In addition, communicability metrics have been found to add some insight into the integration properties of the network by detecting subsets of nodes that were highly interconnected or vulnerable to lesions. The other two studies used methods based on diffusion-weighted imaging to obtain knowledge concerning the relationship between functional and structural connectivity and about the etiology of schizophrenia. 
The third study integrated functional oscillations measured using electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) as well as diffusion-weighted imaging data. The multimodal approach that was applied revealed a positive relationship between individual fluctuations of the EEG alpha-frequency and diffusion properties of specific connections of two resting-state networks. Finally, in the fourth study diffusion-weighted imaging was used to probe for a relationship between the underlying white matter tissue structure and season of birth in schizophrenia patients. The results are in line with the neurodevelopmental hypothesis of early pathological mechanisms as the origin of schizophrenia. The different analytical approaches selected in these studies also provide arguments for discussion of the current limitations in the analysis of brain structural networks. To sum up, the first studies presented in this thesis illustrated the potential of brain structural network analysis to provide useful information on features of brain functional segregation and integration using reliable network metrics. In the other two studies alternative approaches were presented. The common discussion of the four studies enabled us to highlight the benefits and possibilities for the analysis of the connectome as well as some current limitations.

Relevance:

10.00%

Publisher:

Abstract:

The nematode Caenorhabditis elegans is a well-known model organism used to investigate fundamental questions in biology. Motility assays of this small roundworm are designed to study the relationships between genes and behavior. Commonly, motility analysis is used to classify nematode movements and characterize them quantitatively. Over the past years, C. elegans' motility has been studied across a wide range of environments, including crawling on substrates, swimming in fluids, and locomoting through microfluidic substrates. However, each environment often requires customized image processing tools relying on heuristic parameter tuning. In the present study, we propose a novel Multi-Environment Model Estimation (MEME) framework for automated image segmentation that is versatile across various environments. The MEME platform is constructed around the concept of Mixture of Gaussian (MOG) models, where statistical models for both the background environment and the nematode appearance are explicitly learned and used to accurately segment a target nematode. Our method is designed to simplify the burden often imposed on users; here, only a single image which includes a nematode in its environment must be provided for model learning. In addition, our platform enables the extraction of nematode ‘skeletons’ for straightforward motility quantification. We test our algorithm on various locomotive environments and compare performances with an intensity-based thresholding method. Overall, MEME outperforms the threshold-based approach for the overwhelming majority of cases examined. Ultimately, MEME provides researchers with an attractive platform for C. elegans' segmentation and ‘skeletonizing’ across a wide range of motility assays.

Relevance:

10.00%

Publisher:

Abstract:

Seizure freedom in patients suffering from pharmacoresistant epilepsies is still not achieved in 20–30% of all cases. Hence, current therapies need to be improved, based on a more complete understanding of ictogenesis. In this respect, the analysis of functional networks derived from intracranial electroencephalographic (iEEG) data has recently become a standard tool. Functional networks however are purely descriptive models and thus are conceptually unable to predict fundamental features of iEEG time-series, e.g., in the context of therapeutical brain stimulation. In this paper we present some first steps towards overcoming the limitations of functional network analysis, by showing that its results are implied by a simple predictive model of time-sliced iEEG time-series. More specifically, we learn distinct graphical models (so called Chow–Liu (CL) trees) as models for the spatial dependencies between iEEG signals. Bayesian inference is then applied to the CL trees, allowing for an analytic derivation/prediction of functional networks, based on thresholding of the absolute value Pearson correlation coefficient (CC) matrix. Using various measures, the thus obtained networks are then compared to those which were derived in the classical way from the empirical CC-matrix. In the high threshold limit we find (a) an excellent agreement between the two networks and (b) key features of periictal networks as they have previously been reported in the literature. Apart from functional networks, both matrices are also compared element-wise, showing that the CL approach leads to a sparse representation, by setting small correlations to values close to zero while preserving the larger ones. Overall, this paper shows the validity of CL-trees as simple, spatially predictive models for periictal iEEG data. Moreover, we suggest straightforward generalizations of the CL-approach for modeling also the temporal features of iEEG signals.
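The classical construction the paper starts from, thresholding the absolute Pearson correlation matrix to obtain a binary functional network, can be sketched on synthetic multichannel signals (the channels and threshold below are hypothetical, not iEEG data).

```python
import numpy as np

def functional_network(signals, threshold):
    """Binary functional network from multichannel time-series: an edge
    links channels whose |Pearson correlation| exceeds the threshold."""
    cc = np.corrcoef(signals)
    adj = (np.abs(cc) > threshold).astype(int)
    np.fill_diagonal(adj, 0)   # no self-loops
    return adj

# Synthetic channels: ch0 and ch1 share a common source, ch2 is independent.
rng = np.random.default_rng(4)
common = rng.normal(size=1000)
sigs = np.vstack([
    common + 0.1 * rng.normal(size=1000),
    common + 0.1 * rng.normal(size=1000),
    rng.normal(size=1000),
])
net = functional_network(sigs, threshold=0.8)
```

Only the two channels driven by the shared source end up connected; the Chow-Liu approach described above predicts this same adjacency structure analytically rather than computing it from the empirical correlation matrix.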