846 results for "Wide baseline matching"


Relevance: 100.00%

Publisher:

Abstract:

This paper presents a strategy for solving the feature matching problem in calibrated, very wide-baseline camera settings. In this kind of setting, perspective distortion, depth discontinuities and occlusion pose enormous challenges. The proposed strategy addresses them by using geometric information, specifically by exploiting epipolar constraints. As a result, it provides a sparse set of reliable feature points whose 3D positions are accurately recovered. Special features known as junctions are used for robust matching. In particular, a strategy for refining junction end-point matches is proposed, which enhances the usual junction-based approaches. This makes it possible to compute cross-correlation between perfectly aligned plane patches in both images, thus yielding better matching results. Evaluation of experimental results demonstrates the effectiveness of the proposed algorithm in very wide-baseline environments.
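The core geometric filter described above (discarding candidate matches that violate the epipolar constraint, then scoring survivors by cross-correlation of aligned patches) can be sketched as follows. This is a minimal illustration, not the authors' implementation; the function names and the 1.5-pixel default threshold are my own choices, and `F` is assumed to be the fundamental matrix obtained from the calibrated setup.

```python
import numpy as np

def epipolar_distance(F, x1, x2):
    """Distance of point x2 from the epipolar line F @ x1.
    x1, x2 are homogeneous 2D points (3-vectors)."""
    l = F @ x1                      # epipolar line (a, b, c) in image 2
    return abs(l @ x2) / np.hypot(l[0], l[1])

def ncc(patch_a, patch_b):
    """Normalized cross-correlation of two equally sized patches."""
    a = patch_a - patch_a.mean()
    b = patch_b - patch_b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def filter_matches(F, pts1, pts2, max_dist=1.5):
    """Keep only candidate matches consistent with the epipolar geometry."""
    keep = []
    for i, (p1, p2) in enumerate(zip(pts1, pts2)):
        x1 = np.append(p1, 1.0)
        x2 = np.append(p2, 1.0)
        if epipolar_distance(F, x1, x2) <= max_dist:
            keep.append(i)
    return keep
```

Surviving matches would then be ranked by `ncc` on the rectified plane patches around each junction.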

Relevance: 100.00%

Publisher:

Abstract:

This paper presents an empirical study of affine-invariant feature detectors used to perform matching on video sequences of people with non-rigid surface deformation. Recent advances in feature detection and wide baseline matching have focused on static scenes. Video frames of human movement capture highly non-rigid deformations such as loose hair, cloth creases, skin stretching and free-flowing clothing. This study evaluates the performance of six widely used feature detectors for sparse temporal correspondence on single-view and multiple-view video sequences. A quantitative evaluation is performed of both the number of features detected and their temporal matching, both with and without ground-truth correspondence. Recall-accuracy analysis of feature matching is reported for temporal correspondence on single-view and multiple-view sequences of people with variation in clothing and movement. This analysis identifies that existing feature detection and matching algorithms are unreliable for fast movement with common clothing.
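The recall-accuracy analysis described above can be sketched as a simple set comparison between predicted and ground-truth correspondences. This is a hypothetical minimal form; the paper's exact matching criteria (e.g. spatial tolerances for counting a correspondence as correct) are not given here.

```python
def match_recall_accuracy(pred_matches, gt_matches):
    """Recall and matching accuracy (precision) against ground-truth
    correspondences, each given as a set of (feature_a, feature_b) pairs."""
    pred, gt = set(pred_matches), set(gt_matches)
    true_pos = len(pred & gt)
    recall = true_pos / len(gt) if gt else 0.0
    accuracy = true_pos / len(pred) if pred else 0.0
    return recall, accuracy
```

Sweeping a detector's matching threshold and recording (recall, accuracy) pairs yields the recall-accuracy curves used for comparison.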

Relevance: 80.00%

Publisher:

Abstract:

A "second generation" matching-to-sample procedure that minimizes past sources of artifacts involves (1) successive discrimination between sample stimuli, (2) stimulus displays ranging from four to 16 comparisons, (3) variable stimulus locations to avoid unwanted stimulus-location control, and (4) high accuracy levels (e.g., 90% correct on a 16-choice task in which chance accuracy is 6%). Examples of behavioral engineering with experienced capuchin monkeys included four-choice matching problems with video images of monkeys, with substantially above-chance matching in a single session and 90% matching within six sessions. Exclusion performance was demonstrated by interspersing non-identical sample-comparison pairs within a baseline of a nine-comparison identity matching-to-sample procedure with pictures as stimuli. The test for exclusion presented the newly "mapped" stimulus in a situation in which exclusion was not possible. Degradation of matching between physically non-identical forms occurred while baseline identity accuracy was sustained at high levels, thus confirming that Cebus cf. apella is capable of exclusion. Additionally, exclusion performance was demonstrated when the baseline matching relations involved non-identical stimuli.

Relevance: 80.00%

Publisher:

Abstract:

The evolution of smartphones, all equipped with digital cameras, is driving a growing demand for ever more complex applications that need to rely on real-time computer vision algorithms. However, video signals are only increasing in size, whereas the performance of single-core processors has stagnated in recent years. Consequently, new computer vision algorithms will need to be parallel in order to run on multiple processors and to be computationally scalable. One of the most promising classes of processors nowadays can be found in graphics processing units (GPUs). These are devices offering a high degree of parallelism, excellent numerical performance and increasing versatility, which makes them attractive for scientific computing. In this thesis, we explore two computer vision applications whose high computational complexity precludes them from running in real time on traditional uniprocessors. We show that by parallelizing their subtasks and implementing them on a GPU, both applications attain their goal of running at interactive frame rates. In addition, we propose a technique for the fast evaluation of arbitrarily complex functions, specially designed for GPU implementation. First, we explore the application of depth-image-based rendering techniques to the unusual configuration of two convergent, wide-baseline cameras, in contrast to the narrow-baseline, parallel cameras usually employed in 3D TV. Using a backward-mapping approach with a depth-inpainting scheme based on modified median filters, we show that these techniques are adequate for free-viewpoint video applications. We also show that referring depth information to a global reference system is ill-advised and should be avoided. Then, we propose a background subtraction system based on kernel density estimation. Such techniques are well suited to modelling complex scenes with multimodal backgrounds, but have seen little use owing to their huge computational and memory complexity. The proposed system, implemented in real time on a GPU, features novel proposals for dynamic kernel bandwidth estimation for the background model, selective updating of the background model, updating of the positions of the reference samples of the foreground model using a multi-region particle filter, and automatic selection of regions of interest to reduce computational cost. The results, evaluated on several databases and compared with other state-of-the-art algorithms, demonstrate the high quality and versatility of the proposal. Finally, we propose a general method for approximating arbitrarily complex functions using continuous piecewise linear functions, specially formulated for GPU implementation by leveraging the texture filtering units, which are normally unused for numerical computation. The proposal includes a rigorous mathematical analysis of the approximation error as a function of the number of samples, as well as a method for obtaining a quasi-optimal partition of the domain of the function that minimizes the error.
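The piecewise linear function approximation proposed in the final part can be sketched in a few lines: tabulate the function at uniform nodes and evaluate it by linear interpolation, which is exactly the operation a GPU texture filtering unit performs in hardware. This sketch uses uniform sampling rather than the thesis's quasi-optimal domain partition, and the names are illustrative.

```python
import numpy as np

def pwl_approx(f, lo, hi, n_samples):
    """Tabulate f at n_samples uniform nodes; evaluation by linear
    interpolation mimics what GPU texture filtering units do for free."""
    xs = np.linspace(lo, hi, n_samples)
    ys = f(xs)
    return lambda x: np.interp(x, xs, ys)

def max_abs_error(f, approx, lo, hi, n_test=10_000):
    """Maximum absolute approximation error on a dense test grid."""
    x = np.linspace(lo, hi, n_test)
    return float(np.max(np.abs(f(x) - approx(x))))
```

For a smooth function, the error of a uniform piecewise linear table shrinks roughly quadratically with the number of samples, which is why a modest-sized texture suffices in practice.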

Relevance: 40.00%

Publisher:

Abstract:

The impact of peritoneal dialysis modality on patient survival and peritonitis rates is not fully understood, and no large-scale randomized clinical trial (RCT) is available. In the absence of an RCT, the use of an advanced matching procedure to reduce selection bias in large cohort studies may be the best approach. The aim of this study is to compare automated peritoneal dialysis (APD) and continuous ambulatory peritoneal dialysis (CAPD) with respect to peritonitis risk, technique failure and patient survival in a large nationwide PD cohort. This is a prospective cohort study that included all incident PD patients with at least 90 days of PD recruited in the BRAZPD study. All patients who were treated exclusively with either APD or CAPD were matched on 15 covariates using a propensity score calculated with the nearest-neighbor method. The clinical outcomes analyzed were overall mortality, technique failure and time to first peritonitis. All analyses were also adjusted for the presence of competing risks using the Fine and Gray method. After the matching procedure, 2,890 patients were included in the analysis (1,445 in each group). Baseline characteristics were similar for all covariates, including age, diabetes, BMI, center experience, coronary artery disease, cancer, literacy, hypertension, race, previous HD, gender, pre-dialysis care, family income, peripheral artery disease and year of starting PD. Mortality was higher in CAPD patients (SHR 1.44, 95% CI 1.21-1.71) than in APD patients, but no difference was observed in technique failure (SHR 0.83, 95% CI 0.69-1.02) or in time to the first peritonitis episode (SHR 0.96, 95% CI 0.93-1.11). In the first large PD cohort study with groups balanced on several covariates using propensity score matching, PD modality was associated with differences in neither time to first peritonitis nor technique failure. Nevertheless, patient survival was significantly better in APD patients.
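The nearest-neighbour propensity score matching step can be sketched as a greedy 1:1 matching without replacement. This is a generic illustration, assuming the propensity scores have already been estimated (e.g. by logistic regression on the 15 covariates); the study's exact matching options (ordering, caliper width) are not specified here, so the ones below are placeholders.

```python
import numpy as np

def propensity_nn_match(ps_treated, ps_control, caliper=None):
    """Greedy 1:1 nearest-neighbour matching on propensity scores,
    without replacement. Returns a list of (treated_idx, control_idx)."""
    available = list(range(len(ps_control)))
    pairs = []
    # Match treated units in order of score, for determinism.
    for i in np.argsort(ps_treated):
        if not available:
            break
        dists = [abs(ps_treated[i] - ps_control[j]) for j in available]
        k = int(np.argmin(dists))
        if caliper is None or dists[k] <= caliper:
            pairs.append((int(i), available.pop(k)))
    return pairs
```

Covariate balance between the matched groups would then be checked before outcome analysis, as the study reports.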

Relevance: 40.00%

Publisher:

Abstract:

In this thesis, the use of wide-field imaging techniques and VLBI observations with a limited number of antennas is explored. I present techniques to efficiently and accurately image extremely large UV datasets. Very large VLBI datasets must be reduced into multiple smaller datasets if today's imaging algorithms are to be used on them. I present a procedure for accurately shifting the phase centre of a visibility dataset. This procedure has been thoroughly tested and found to be almost two orders of magnitude more accurate than existing techniques, with errors at the level of one part in 1.1 million; these are unlikely to be measurable except in the very largest UV datasets. Results of a four-station VLBI observation of a field containing multiple sources are presented. A 13-gigapixel image was constructed to search for sources across the entire primary beam of the array by generating over 700 smaller UV datasets. The source 1320+299A was detected, and its astrometric position with respect to the calibrator J1329+3154 is presented. Various techniques for phase calibration and imaging across this field are explored, including using the detected source as an in-beam calibrator and peeling distant confusing sources from VLBI visibility datasets. A range of issues pertaining to wide-field VLBI is explored, including: parameterising the wide-field performance of VLBI arrays; estimating the sensitivity across the primary beam for both homogeneous and heterogeneous arrays; applying techniques such as mosaicing and primary-beam correction to VLBI observations; quantifying the effects of time-average and bandwidth smearing; and calibrating and imaging wide-field VLBI datasets. The performance of a computer cluster at the Istituto di Radioastronomia in Bologna has been characterised with regard to its ability to correlate using the DiFX software correlator. Using existing software, it was possible to characterise the network speed, particularly for MPI applications. The capabilities of the DiFX software correlator running on this cluster were measured for a range of observation parameters and shown to be commensurate with the generic performance parameters measured. The feasibility of an Italian VLBI array has been explored, with discussion of the infrastructure required, the performance of such an array, possible collaborations, and the science that could be achieved. Results from a 22-GHz calibrator survey are also presented: 21 of 33 sources were detected on a single baseline between two Italian antennas (Medicina to Noto). The results and discussions presented in this thesis suggest that wide-field VLBI is a technique whose time has finally come. Prospects for exciting new science are discussed in the final chapter.
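The basic operation behind shifting the phase centre of a visibility dataset is a per-visibility phase rotation. The following is a simplified sketch of that rotation only; the thesis's procedure necessarily goes further (its accuracy gains over existing techniques come from how the shift is applied to already-averaged data), and the parameter names are mine.

```python
import numpy as np

def shift_phase_centre(vis, u, v, w, dl, dm):
    """Rotate complex visibilities to a new phase centre offset by the
    direction cosines (dl, dm). u, v, w are baseline coordinates in
    wavelengths; all arguments are NumPy arrays of equal length
    except dl, dm which are scalars."""
    dn = np.sqrt(1.0 - dl ** 2 - dm ** 2) - 1.0
    phase = -2.0j * np.pi * (u * dl + v * dm + w * dn)
    return vis * np.exp(phase)
```

The rotation is unitary, so visibility amplitudes are preserved; only the phases change, which is what steers the delay centre of subsequent imaging.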

Relevance: 30.00%

Publisher:

Abstract:

BACKGROUND: Only multifaceted, hospital-wide interventions have been successful in achieving sustained improvements in hand hygiene (HH) compliance. METHODOLOGY/PRINCIPAL FINDINGS: Pre-post intervention study of HH performance at baseline (October 2007-December 2009) and during an intervention comprising two phases. Phase 1 (2010) applied the multimodal WHO approach. Phase 2 (2011) added Continuous Quality Improvement (CQI) tools and was based on: a) increased placement of alcohol hand rub (AHR) dispensers (from 0.57 dispensers/bed to 1.56); b) increased audit frequency (three days every three weeks: the "3/3 strategy"); c) implementation of a standardized register of HH corrective actions; d) Statistical Process Control (SPC) as the time-series analysis methodology, through appropriate control charts. During the intervention period we performed 819 scheduled direct-observation audits, which provided data from 11,714 HH opportunities. The most remarkable findings were: a) a significant improvement in HH compliance with respect to baseline (25% mean increase); b) a sustained high level (82%) of HH compliance during the intervention; c) a significant increase in AHR consumption over time; d) a significant decrease in the rate of healthcare-acquired MRSA; e) small but significant improvements in HH compliance when comparing phase 2 to phase 1 [79.5% (95% CI: 78.2-80.7) vs 84.6% (95% CI: 83.8-85.4), p<0.05]; f) successful use of control charts to identify significant negative and positive deviations (special causes) in HH compliance over time ("positive": 90.1%, the highest HH compliance, coinciding with World Hand Hygiene Day; "negative": 73.7%, the lowest HH compliance, coinciding with a statutory lay-off proceeding). CONCLUSIONS/SIGNIFICANCE: CQI tools may be a key addition to the WHO strategy for maintaining good HH performance over time. In addition, SPC has been shown to be a powerful methodology for detecting special causes in HH performance (positive and negative) and for helping to establish adequate feedback to healthcare workers.
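A p-chart of the kind used in the SPC analysis can be sketched as follows: compliance proportions per audit window are compared against 3-sigma binomial control limits, and windows falling outside the limits are flagged as special causes. This is a generic illustration, not the study's exact chart configuration.

```python
import numpy as np

def p_chart_limits(compliant, observed):
    """3-sigma control limits for a proportion (p) chart, where
    compliant[i] / observed[i] is the HH compliance in audit window i."""
    compliant = np.asarray(compliant, dtype=float)
    observed = np.asarray(observed, dtype=float)
    p_bar = compliant.sum() / observed.sum()        # centre line
    sigma = np.sqrt(p_bar * (1 - p_bar) / observed)  # per-window sigma
    ucl = np.clip(p_bar + 3 * sigma, 0.0, 1.0)
    lcl = np.clip(p_bar - 3 * sigma, 0.0, 1.0)
    return p_bar, lcl, ucl

def special_causes(compliant, observed):
    """Indices of audit windows outside the control limits."""
    p_bar, lcl, ucl = p_chart_limits(compliant, observed)
    p = np.asarray(compliant, dtype=float) / np.asarray(observed, dtype=float)
    return [i for i, pi in enumerate(p) if pi < lcl[i] or pi > ucl[i]]
```

A window like the 73.7% lay-off dip would fall below the lower control limit and be flagged for follow-up feedback.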

Relevance: 30.00%

Publisher:

Abstract:

PURPOSE: To investigate the ability of inversion recovery ON-resonant water suppression (IRON), in conjunction with P904 (superparamagnetic nanoparticles consisting of a maghemite core coated with a low-molecular-weight amino-alcohol derivative of glucose), to perform steady-state equilibrium-phase MR angiography (MRA) over a wide dose range. MATERIALS AND METHODS: Experiments were approved by the institutional animal care committee. Rabbits (n = 12) were imaged at baseline and serially after the administration of 10 incremental dosages of 0.57-5.7 mgFe/kg P904. Conventional T1-weighted and IRON MRA were obtained on a clinical 1.5 Tesla (T) scanner to image the thoracic and abdominal aorta and peripheral vessels. Contrast-to-noise ratios (CNR) and vessel sharpness were quantified. RESULTS: Using IRON MRA, CNR and vessel sharpness progressively increased with incremental dosages of the contrast agent P904, exhibiting consistently higher contrast values than T1-weighted MRA over a very wide range of contrast agent doses (CNR of 18.8 ± 5.6 for IRON versus 11.1 ± 2.8 for T1-weighted MRA at 1.71 mgFe/kg, P = 0.02, and 19.8 ± 5.9 for IRON versus -0.8 ± 1.4 for T1-weighted MRA at 3.99 mgFe/kg, P = 0.0002). Similar results were obtained for vessel sharpness in peripheral vessels (46.76 ± 6.48% for IRON versus 33.20 ± 3.53% for T1-weighted MRA at 1.71 mgFe/kg, P = 0.002, and 48.66 ± 5.50% for IRON versus 19.00 ± 7.41% for T1-weighted MRA at 3.99 mgFe/kg, P = 0.003). CONCLUSION: Our study suggests that quantitative CNR and vessel sharpness after the injection of P904 are consistently higher for IRON MRA than for conventional T1-weighted MRA. These findings apply over a wide range of contrast agent dosages.
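As an illustration of the quantification step, a common definition of CNR takes the mean-signal difference between a vessel region and a background region, divided by the noise standard deviation. The paper's precise ROI and noise definitions are not given here, so this sketch is a generic form rather than the study's exact measurement.

```python
import numpy as np

def cnr(vessel_roi, background_roi, noise_roi):
    """Contrast-to-noise ratio between a vessel and a background ROI,
    with noise taken as the standard deviation of a signal-free ROI."""
    return (np.mean(vessel_roi) - np.mean(background_roi)) / np.std(noise_roi)
```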

Relevance: 30.00%

Publisher:

Abstract:

Edge-matching puzzles, an NP-complete problem, have recently received considerable attention from wide audiences thanks to money-prize contests. We consider these competitions not only a challenge for SAT/CSP solving techniques but also an opportunity to showcase the advances of the SAT/CSP community to a general audience. This paper studies the NP-complete problem of edge-matching puzzles, focusing on generation models for problem instances of variable hardness and on their resolution through the application of SAT and CSP techniques. On the generation side, we also identify the phase-transition phenomena for each model. As solving methods, we employ both SAT solvers, through translation to a SAT formula, and two ad hoc CSP solvers we have developed with different levels of consistency, employing several generic and specialized heuristics. Finally, we conducted an extensive experimental investigation to identify the hardest generation models and the best-performing solving techniques.
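An edge-matching puzzle instance can be solved directly with a small backtracking search, which also makes the constraint structure explicit: each tile placement must agree with its left and upper neighbours, under any of four rotations. This is a toy ad hoc solver for illustration only, far simpler than the paper's CSP solvers with their consistency levels and heuristics.

```python
def rotations(tile):
    """All four rotations of a tile given as (top, right, bottom, left)."""
    t = tile
    out = []
    for _ in range(4):
        out.append(t)
        t = (t[3], t[0], t[1], t[2])  # rotate 90 degrees clockwise
    return out

def solve_edge_matching(tiles, n):
    """Brute-force solver for an n x n edge-matching puzzle. Returns a
    row-major grid of oriented tiles, or None. Exponential: toy sizes only."""
    def fits(grid, pos, t):
        r, c = divmod(pos, n)
        if c > 0 and grid[pos - 1][1] != t[3]:   # left neighbour's right edge
            return False
        if r > 0 and grid[pos - n][2] != t[0]:   # upper neighbour's bottom edge
            return False
        return True

    def backtrack(grid, used):
        pos = len(grid)
        if pos == n * n:
            return grid
        for i, tile in enumerate(tiles):
            if i in used:
                continue
            for t in rotations(tile):
                if fits(grid, pos, t):
                    result = backtrack(grid + [t], used | {i})
                    if result is not None:
                        return result
        return None

    return backtrack([], frozenset())
```

The SAT route described in the paper encodes the same placement and edge-agreement constraints as Boolean clauses instead of searching over them directly.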

Relevance: 30.00%

Publisher:

Abstract:

This doctoral dissertation investigates the adult education policy of the European Union (EU) in the framework of the Lisbon agenda 2000-2010, with a particular focus on the changes of policy orientation that occurred during this reference decade. The year 2006 can be considered a turning point for EU policy-making in the adult learning sector: a radical shift from a wide-ranging and comprehensive conception of educating adults towards a vocationally oriented understanding of this field and policy area has been observed, in particular in the second half of the so-called 'Lisbon decade'. In this light, one of the principal objectives of the mainstream policy set by the Lisbon Strategy, that of fostering all forms of participation of adults in lifelong learning paths, appears to have changed its political background and vision in a very short period of time, reflecting an underlying polarisation and progressive transformation of European policy orientations. Hence, by means of content analysis and process tracing, it is shown that the new target of EU adult education policy in this framework has shifted from citizens to workers, and that the competence development model, borrowed from the corporate sector, has been established as the reference for the new policy road maps. This study draws on the theory of governance architectures and applies a post-ontological perspective to discuss whether the above trends are intrinsically due to the nature of the Lisbon Strategy, which encompasses education policies, and to what extent supranational actors and phenomena such as globalisation influence European governance and decision-making. Moreover, it is shown that the way in which the EU is shaping the upgrading of skills and competences of adult learners is modeled around the needs of the 'knowledge economy', thus according a great deal of importance to 'new skills for new jobs' and perhaps not enough to life skills in the broader sense, which include, for example, social and civic competences: these are often promoted but rarely implemented in depth in EU policy documents. In this framework, it is shown how different EU policy areas are intertwined and interrelated with global phenomena, and it is emphasised how far the building of EU education systems should play a crucial role in the formation of critical thinking, civic competences and skills for a sustainable democratic citizenship, on which a truly cohesive and inclusive society fundamentally depends; a model of environmental and cosmopolitan adult education is proposed to address the challenges of the new millennium. In conclusion, an appraisal of the EU's public policy is outlined, along with some personal thoughts on how progress might be pursued and actualised.

Relevance: 30.00%

Publisher:

Abstract:

Understanding the relationships between trait diversity, species diversity and ecosystem functioning is essential for sustainable management. For functions comprising two trophic levels, trait matching between interacting partners should also drive functioning. However, the predictive ability of trait diversity and matching is unclear for most functions, particularly for crop pollination, where interacting partners did not necessarily co-evolve. World-wide, we collected data on traits of flower visitors and crops, visitation rates to crop flowers per insect species and fruit set in 469 fields of 33 crop systems. Through hierarchical mixed-effects models, we tested whether flower visitor trait diversity and/or trait matching between flower visitors and crops improve the prediction of crop fruit set (functioning) beyond flower visitor species diversity and abundance. Flower visitor trait diversity was positively related to fruit set, but surprisingly did not explain more variation than flower visitor species diversity. The best prediction of fruit set was obtained by matching traits of flower visitors (body size and mouthpart length) and crops (nectar accessibility of flowers) in addition to flower visitor abundance, species richness and species evenness. Fruit set increased with species richness, and more so in assemblages with high evenness, indicating that additional species of flower visitors contribute more to crop pollination when species abundances are similar. Synthesis and applications. Despite contrasting floral traits for crops world-wide, only the abundance of a few pollinator species is commonly managed for greater yield. Our results suggest that the identification and enhancement of pollinator species with traits matching those of the focal crop, as well as the enhancement of pollinator richness and evenness, will increase crop yield beyond current practices. 
Furthermore, we show that field practitioners can predict and manage agroecosystems for pollination services based on knowledge of just a few traits that are known for a wide range of flower visitor species.

Relevance: 30.00%

Publisher:

Abstract:

Background Anxiety disorders are common, and cognitive–behavioural therapy (CBT) is a first-line treatment. Candidate gene studies have suggested a genetic basis to treatment response, but findings have been inconsistent. Aims To perform the first genome-wide association study (GWAS) of psychological treatment response in children with anxiety disorders (n = 980). Method Presence and severity of anxiety were assessed using semi-structured interviews at baseline, on completion of treatment (post-treatment), and 3 to 12 months after treatment completion (follow-up). DNA was genotyped using the Illumina Human Core Exome-12v1.0 array. Linear mixed models were used to test associations between genetic variants and response (change in symptom severity) immediately post-treatment and at 6-month follow-up. Results No variants passed the genome-wide significance threshold (P < 5×10⁻⁸) in either analysis. Four variants met criteria for suggestive significance (P < 5×10⁻⁶) in association with response post-treatment, and three variants did so in the 6-month follow-up analysis. Conclusions This is the first genome-wide therapygenetic study. It suggests that no common variants of very high effect underlie response to CBT. Future investigations should maximise power to detect single-variant and polygenic effects by using larger, more homogeneous cohorts.

Relevance: 30.00%

Publisher:

Abstract:

Do capuchin monkeys respond to photos as icons? Do they discriminate photos of capuchin monkeys' faces? Looking for answers to these questions, we trained three capuchin monkeys in simple and conditional discrimination tasks and tested the discriminations when the comparison stimuli were partially covered. Three capuchin monkeys experienced in simultaneous simple discrimination and identity matching-to-sample (IDMTS) were trained with repeated shifts of simple discriminations (RSSD) with four simultaneous choices, and with IDMTS (1-s delay, 4 choices) using pictures of familiar capuchin monkeys' faces. All monkeys discriminated the pictures in both procedures. Performance in probes with partial masks, in which one fourth of the stimulus was hidden, was consistent with baseline levels. Errors occurred when a picture similar to the correct one was available among the comparison stimuli, when the covered part was the most distinctive, or when the pictures displayed the same monkey. Capuchin monkeys do match pictures of capuchin monkeys' faces to the sample. The monkeys treated different pictures of the same monkey as equivalent, suggesting that they respond to the pictures as icons, although this did not hold for pictures of other monkeys. Subsequent studies may bring more evidence that capuchin monkeys treat pictures as depictions of real scenes.

Relevance: 30.00%

Publisher:

Abstract:

Despite the efficacy of minutia-based fingerprint matching techniques on good-quality images captured by optical sensors, minutia-based techniques often do not perform well on poor-quality images or on fingerprint images captured by small solid-state sensors. Solid-state fingerprint sensors are being increasingly deployed in a wide range of applications for user authentication purposes. It is therefore necessary to develop new fingerprint matching techniques that utilize other features to deal with fingerprint images captured by solid-state sensors. This paper presents a new fingerprint matching technique based on fingerprint ridge features. The technique was assessed on the MSU-VERIDICOM database, which consists of fingerprint impressions obtained from 160 users (4 impressions per finger) using a solid-state sensor. Combining the ridge-based matching scores computed by the proposed technique with minutia-based matching scores reduces the false non-match rate by approximately 1.7% at a false match rate of 0.1%. © 2005 IEEE.
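The score-level combination described in the evaluation can be sketched as a weighted sum-rule fusion of the two matchers' scores, with the false non-match rate then measured on genuine-comparison scores. The equal-weight sum rule and the [0, 1] score normalization below are assumptions for illustration; the paper's actual fusion rule is not specified here.

```python
def fuse_scores(minutia_score, ridge_score, w=0.5):
    """Weighted sum-rule fusion of two matcher scores, both assumed
    normalized to [0, 1]. w balances minutia vs. ridge evidence."""
    return w * minutia_score + (1.0 - w) * ridge_score

def false_non_match_rate(genuine_scores, threshold):
    """Fraction of genuine comparisons scoring below the decision threshold."""
    return sum(s < threshold for s in genuine_scores) / len(genuine_scores)
```

In practice the threshold is chosen from impostor scores to hit a target false match rate (e.g. 0.1%), and the FNMR of the fused scores is then compared against the minutia-only baseline.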