901 results for Filter-rectify-filter-model
Abstract:
The evolution of smartphones, all equipped with digital cameras, is driving a growing demand for ever more complex applications that need to rely on real-time computer vision algorithms. However, video signals are only increasing in size, whereas the performance of single-core processors has stagnated in the past few years. Consequently, new computer vision algorithms will need to be parallel, so that they can run on multiple processors and be computationally scalable. One of the most promising classes of processors nowadays can be found in graphics processing units (GPU). These are devices offering a high degree of parallelism, excellent numerical performance and increasing versatility, which makes them well suited to scientific computing. In this thesis, we explore two computer vision applications whose high computational complexity precludes them from running in real time on traditional uniprocessors. However, we show that by parallelizing the subtasks and implementing them on a GPU, both applications attain their goal of running at interactive frame rates. In addition, we propose a technique for the fast evaluation of arbitrarily complex functions, specially designed for GPU implementation. First, we explore the application of depth-image–based rendering techniques to the unusual configuration of two convergent, wide-baseline cameras, in contrast to the narrow-baseline, parallel cameras usually employed in 3D TV. Using a backward-mapping approach with a depth-inpainting scheme based on median filters, we show that these techniques are adequate for free-viewpoint video applications. We also show that referring depth information to a global reference system is ill-advised and should be avoided. Second, we propose a background subtraction system based on kernel density estimation techniques. These techniques are well suited to modelling complex scenes featuring multimodal backgrounds, but have seen limited use because of their high computational and memory cost. The proposed system, implemented in real time on a GPU, features novel proposals for dynamic kernel bandwidth estimation for the background model, selective update of the background model, update of the positions of the reference samples of the foreground model using a multi-region particle filter, and automatic selection of regions of interest to reduce computational cost. The results, evaluated on several databases and compared with other state-of-the-art algorithms, demonstrate the high quality and versatility of our proposal. Finally, we propose a general method for the approximation of arbitrarily complex functions using continuous piecewise linear functions, specifically formulated for GPU implementation by leveraging the texture filtering units, which are normally unused for numerical computation. Our proposal includes a rigorous mathematical analysis of the approximation error as a function of the number of samples, as well as a method to obtain a quasi-optimal partition of the domain of the function that minimizes the approximation error.
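To make the final contribution concrete, here is a minimal NumPy sketch of the core idea: sample the function at uniform knots (the analogue of a texture) and evaluate it by continuous piecewise linear interpolation, which is the operation a GPU texture filtering unit performs in hardware. The function, knot count and error check below are illustrative assumptions, not the thesis's actual implementation.

import numpy as np

def build_lut(f, a, b, n):
    """Sample f at n+1 uniform knots on [a, b] (the 'texture')."""
    x = np.linspace(a, b, n + 1)
    return x, f(x)

def eval_pwl(xq, knots, values):
    """Continuous piecewise linear interpolation, as texture filtering would do."""
    return np.interp(xq, knots, values)

# Illustrative example: approximate an arbitrary (here, transcendental) function.
f = lambda x: np.exp(-x) * np.sin(5 * x)
knots, values = build_lut(f, 0.0, np.pi, 64)

xq = np.linspace(0.0, np.pi, 10_000)
max_err = np.max(np.abs(f(xq) - eval_pwl(xq, knots, values)))
print(f"max abs error with 64 segments: {max_err:.2e}")

For a smooth function and uniform knots, the maximum interpolation error scales with the square of the knot spacing, which is the kind of bound the thesis's error analysis makes precise; a non-uniform, quasi-optimal partition concentrates knots where the function curves most.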
Abstract:
Zinc finger domains are structures that mediate sequence recognition for a large number of DNA-binding proteins. These domains consist of sequences of amino acids containing cysteine and histidine residues tetrahedrally coordinated to a zinc ion. In this report, we present a means to selectively inhibit a zinc finger transcription factor with cobalt(III) Schiff-base complexes. 1H NMR spectroscopy confirmed that the structure of a zinc finger peptide is disrupted by axial ligation of the cobalt(III) complex to the nitrogen of the imidazole ring of a histidine residue. Fluorescence studies reveal that the zinc ion is displaced from the model zinc finger peptide in the presence of the cobalt complex. In addition, gel-shift and filter-binding assays reveal that cobalt complexes inhibit binding of a complete zinc finger protein, human transcription factor Sp1, to its consensus sequence. Finally, a DNA-coupled conjugate of the cobalt complexes selectively inhibited Sp1 in the presence of several other transcription factors.
Abstract:
Efficient and safe heparin anticoagulation has remained a problem for continuous renal replacement therapies and intermittent hemodialysis for patients with acute renal failure. To make heparin therapy safer for the patient with acute renal failure at high risk of bleeding, we have proposed regional heparinization of the circuit via an immobilized heparinase I filter. This study tested a device based on Taylor-Couette flow and simultaneous separation/reaction for efficacy and safety of heparin removal in a sheep model. Heparinase I was immobilized onto agarose beads via cyanogen bromide activation. The device, referred to as a vortex flow plasmapheretic reactor, consisted of two concentric cylinders, a priming volume of 45 ml, a microporous membrane for plasma separation, and an outer compartment where the immobilized heparinase I was fluidized separately from the blood cells. Manual white cell and platelet counts, hematocrit, total protein, and fibrinogen assays were performed. Heparin levels were indirectly measured via whole-blood recalcification times (WBRTs). The vortex flow plasmapheretic reactor maintained significantly higher heparin levels in the extracorporeal circuit than in the sheep (device inlet WBRTs were 1.5 times the device outlet WBRTs) with no hemolysis. The reactor treatment did not effect any physiologically significant changes in complete blood cell counts, platelets, and protein levels for up to 2 hr of operation. Furthermore, gross necropsy and histopathology did not show any significant abnormalities in the kidney, liver, heart, brain, and spleen.
Abstract:
We study the effect of sublattice symmetry breaking on the electronic, magnetic, and transport properties of two-dimensional graphene as well as zigzag terminated one- and zero-dimensional graphene nanostructures. The systems are described with the Hubbard model within the collinear mean field approximation. We prove that for the noninteracting bipartite lattice with an unequal number of atoms in each sublattice, in-gap states still exist in the presence of a staggered on-site potential ±Δ/2. We compute the phase diagram of both 2D and 1D graphene with zigzag edges, at half filling, defined by the normalized interaction strength U/t and Δ/t, where t is the first neighbor hopping. In the case of 2D we find that the system is always insulating, and we find the Uc(Δ) curve above which the system goes antiferromagnetic. In 1D we find that the system undergoes a phase transition from nonmagnetic insulator for U
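For reference, the model described above is usually written (in generic notation that may differ from the paper's) as a Hubbard Hamiltonian with a staggered on-site potential, decoupled in the collinear mean-field approximation:

\[
H = -t \sum_{\langle i,j\rangle,\sigma} c^{\dagger}_{i\sigma} c_{j\sigma}
  + \frac{\Delta}{2} \sum_{i,\sigma} \xi_i\, n_{i\sigma}
  + U \sum_{i} n_{i\uparrow} n_{i\downarrow},
\qquad \xi_i = +1 \ (\text{sublattice } A), \quad \xi_i = -1 \ (\text{sublattice } B),
\]
\[
U\, n_{i\uparrow} n_{i\downarrow} \;\approx\;
U\big( n_{i\uparrow}\langle n_{i\downarrow}\rangle
   + \langle n_{i\uparrow}\rangle n_{i\downarrow}
   - \langle n_{i\uparrow}\rangle\langle n_{i\downarrow}\rangle \big),
\]

so the phase diagrams quoted above are naturally parametrized by the two dimensionless ratios U/t and Δ/t.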
Abstract:
In autumn 2012, the new release 05 (RL05) of monthly geopotential spherical harmonic Stokes coefficients (SC) from the GRACE (Gravity Recovery and Climate Experiment) mission was published. This release reduces the noise in high-degree and high-order SC, but the coefficients still need to be filtered. One of the most common filtering approaches is the combination of decorrelation and Gaussian filters. Both are parameter-dependent and must be tuned by the user. Previous studies have analyzed the choice of parameters for RL05 GRACE data in oceanic applications, and for RL04 data in global applications. This study updates the latter for RL05 data, extending the statistical analysis. The parameters of the decorrelation filter have been optimized to: (1) balance the noise reduction and the geophysical signal attenuation produced by the filtering process; (2) minimize the differences between GRACE and model-based data; and (3) maximize the ratio of variability between continents and oceans. The Gaussian filter has been optimized following the latter criteria. In addition, an anisotropic filter, the fan filter, has been analyzed as an alternative to the Gaussian filter, and it produces better statistics.
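For context, the filters being tuned act multiplicatively on the Stokes coefficients; in standard (non-paper-specific) notation, the smoothed coefficients are

\[
\Delta\hat{C}_{lm} = W_{lm}\,\Delta C_{lm}, \qquad
\Delta\hat{S}_{lm} = W_{lm}\,\Delta S_{lm}, \qquad
W_{lm} =
\begin{cases}
W_l, & \text{isotropic Gaussian filter},\\
W_l\,W_m, & \text{fan filter},
\end{cases}
\]

where W_l are the Gaussian (Jekeli) averaging weights determined by the chosen half-width radius, and, in the standard destriping approach, the decorrelation step first fits and removes a smooth polynomial in degree from coefficients of equal order and parity to suppress the characteristic north-south striping.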
Abstract:
There is increasing evidence to support the notion that membrane proteins, instead of being isolated components floating in a fluid lipid environment, can be assembled into supramolecular complexes that take part in a variety of cooperative cellular functions. The interplay between lipid-protein and protein-protein interactions is expected to be a determinant factor in the assembly and dynamics of such membrane complexes. Here we report on a role of anionic phospholipids in determining the extent of clustering of KcsA, a model potassium channel. Assembly/disassembly of channel clusters occurs, at least partly, as a consequence of competing lipid-protein and protein-protein interactions at nonannular lipid binding sites on the channel surface and brings about profound changes in the gating properties of the channel. Our results suggest that these latter effects of anionic lipids are mediated via the Trp67–Glu71–Asp80 inactivation triad within the channel structure and its bearing on the selectivity filter.
Abstract:
In this paper, a novel approach for exploiting multitemporal remote sensing data, focused on real-time monitoring of agricultural crops, is presented. The methodology is defined in a dynamical-system context using state-space techniques, which makes it possible to merge past temporal information with an update at each new acquisition. The dynamical-system context also allows classical tools from this domain to be exploited to estimate the relevant variables. A general methodology is proposed, and a particular instance is defined in this study, based on polarimetric radar data, to track the phenological stages of a set of crops. Model generation from empirical data through principal component analysis is presented, and an extended Kalman filter is adapted to perform phenological stage estimation. Results employing quad-pol Radarsat-2 data over three different cereals are analyzed. The potential of this methodology to retrieve vegetation variables in real time is shown.
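Because the paper's exact state-space model is not reproduced here, the following is only a generic extended Kalman filter predict/update sketch of the kind such phenology tracking relies on; the state vector, the model functions f and h, their Jacobians and the noise covariances are placeholders.

import numpy as np

def ekf_step(x, P, z, f, h, F_jac, H_jac, Q, R):
    """One EKF cycle: propagate the state, then correct it with a new acquisition.

    x, P         : prior state estimate and covariance
    z            : new measurement (e.g., polarimetric observables)
    f, h         : state transition and observation functions
    F_jac, H_jac : their Jacobians, evaluated at the current estimate
    Q, R         : process and measurement noise covariances
    """
    # Predict
    x_pred = f(x)
    F = F_jac(x)
    P_pred = F @ P @ F.T + Q

    # Update
    H = H_jac(x_pred)
    y = z - h(x_pred)                      # innovation
    S = H @ P_pred @ H.T + R               # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)    # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

In the application described above, the state would presumably hold the phenological-stage variables, h would map them to the polarimetric observables through the PCA-derived empirical model, and each new Radarsat-2 acquisition would trigger one predict/update cycle.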
Abstract:
The analysis of clusters has attracted considerable interest over the last few decades. The articulation of clusters into complex networks and systems of innovation -- generally known as regional innovation systems -- has, in particular, been associated with the delivery of greater innovation and growth. However, despite the growing economic and policy relevance of clusters, little systematic research has been conducted into their association with other factors promoting innovation and economic growth. This article addresses this issue by looking at the relationship between innovation and economic growth in 152 regions of Europe during the period between 1995 and 2006. Using an econometric model with a static and a dynamic dimension, the results of the analysis highlight that: a) regional growth through innovation in Europe is fundamentally connected to the presence of an adequate socioeconomic environment and, in particular, to the existence of a well-trained and educated pool of workers; b) the presence of clusters matters for regional growth, but only in combination with a good ‘social filter’, and this association wanes over time; c) more traditional R&D variables have a weak initial connection to economic development, but this connection increases over time and is, once again, contingent on the existence of adequate socioeconomic conditions.
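As a purely illustrative sketch (variable names and functional form are assumptions, not the article's specification), an econometric model combining a static and a dynamic dimension with a cluster-'social filter' interaction might look like:

\[
\Delta \ln y_{i,t} = \alpha
 + \beta_1\,\mathrm{Innovation}_{i,t-1}
 + \beta_2\,\mathrm{Cluster}_{i,t-1}
 + \beta_3\,\mathrm{SocialFilter}_{i,t-1}
 + \beta_4\,\big(\mathrm{Cluster}\times\mathrm{SocialFilter}\big)_{i,t-1}
 + \gamma\,\Delta \ln y_{i,t-1}
 + \varepsilon_{i,t},
\]

where the lagged growth term supplies the dynamic dimension and the interaction term captures finding b), that clusters matter only in combination with a favourable social filter.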
Abstract:
This paper argues that the Phillips curve relationship is not sufficient to trace back the output gap, because the effect of excess demand is not symmetric across the tradeable and non-tradeable sectors. In the non-tradeable sector, excess demand creates excess employment and inflation via the Phillips curve, while in the tradeable sector much of the excess demand is absorbed by the trade balance. We set up an unobserved-components model including both a Phillips curve and a current account equation to estimate ‘sustainable output’ for 45 countries. Our estimates for many countries differ substantially from the potential output estimates of the European Commission, IMF and OECD. We assemble a comprehensive real-time dataset to estimate our model on the data that were available in each year from 2004 to 2015. Our model was able to identify correctly the sign of pre-crisis output gaps using real-time data for countries such as the United States, Spain and Ireland, in contrast to the estimates of the three institutions, which estimated negative output gaps in real time, while their current estimates for the pre-crisis period suggest positive gaps. In the past five years, the annual output gap estimate revisions of our model, the European Commission, the IMF, the OECD and the Hodrick-Prescott filter were broadly similar, in the range of 0.5-1.0 percent of GDP for advanced countries. Such large revisions are worrisome, because the European fiscal framework can translate the imprecision of output gap estimates into poorly grounded fiscal policymaking in the EU.
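For comparison purposes, the Hodrick-Prescott benchmark mentioned above extracts a trend τ_t from (log) output y_t by solving the textbook penalized least-squares problem

\[
\min_{\{\tau_t\}} \sum_{t=1}^{T} (y_t - \tau_t)^2
 + \lambda \sum_{t=2}^{T-1} \big[(\tau_{t+1} - \tau_t) - (\tau_t - \tau_{t-1})\big]^2,
\]

with the output gap taken as y_t − τ_t and λ conventionally set to 1600 for quarterly data (100 or 6.25 are common choices for annual data); this is a statement of the standard filter, not of the unobserved-components model proposed in the paper.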
Validation of the Swiss methane emission inventory by atmospheric observations and inverse modelling
Abstract:
Atmospheric inverse modelling has the potential to provide observation-based estimates of greenhouse gas emissions at the country scale, thereby allowing for an independent validation of national emission inventories. Here, we present a regional-scale inverse modelling study to quantify the emissions of methane (CH₄) from Switzerland, making use of the newly established CarboCount-CH measurement network and a high-resolution Lagrangian transport model. In our reference inversion, prior emissions were taken from the "bottom-up" Swiss Greenhouse Gas Inventory (SGHGI) as published by the Swiss Federal Office for the Environment in 2014 for the year 2012. Overall we estimate national CH₄ emissions to be 196 ± 18 Gg yr⁻¹ for the year 2013 (1σ uncertainty). This result is in close agreement with the recently revised SGHGI estimate of 206 ± 33 Gg yr⁻¹ as reported in 2015 for the year 2012. Results from sensitivity inversions using alternative prior emissions, uncertainty covariance settings, large-scale background mole fractions, two different inverse algorithms (Bayesian and extended Kalman filter), and two different transport models confirm the robustness and independent character of our estimate. According to the latest SGHGI estimate the main CH₄ source categories in Switzerland are agriculture (78 %), waste handling (15 %) and natural gas distribution and combustion (6 %). The spatial distribution and seasonal variability of our posterior emissions suggest an overestimation of agricultural CH₄ emissions by 10 to 20 % in the most recent SGHGI, which is likely due to an overestimation of emissions from manure handling. Urban areas do not appear as emission hotspots in our posterior results, suggesting that leakages from natural gas distribution are only a minor source of CH₄ in Switzerland. This is consistent with rather low emissions of 8.4 Gg yr⁻¹ reported by the SGHGI but inconsistent with the much higher value of 32 Gg yr⁻¹ implied by the EDGARv4.2 inventory for this sector. Increased CH₄ emissions (up to 30 % compared to the prior) were deduced for the north-eastern parts of Switzerland. This feature was common to most sensitivity inversions, which is a strong indicator that it is a real feature and not an artefact of the transport model and the inversion system. However, it was not possible to assign an unambiguous source process to the region. The observations of the CarboCount-CH network provided invaluable and independent information for the validation of the national bottom-up inventory. Similar systems need to be sustained to provide independent monitoring of future climate agreements.
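For readers unfamiliar with the technique, a reference inversion of this kind typically minimizes the standard linear-Gaussian Bayesian cost function (generic notation, not the paper's):

\[
J(\mathbf{x}) = (\mathbf{x}-\mathbf{x}_b)^{\mathrm T}\mathbf{B}^{-1}(\mathbf{x}-\mathbf{x}_b)
 + (\mathbf{y}-\mathbf{H}\mathbf{x})^{\mathrm T}\mathbf{R}^{-1}(\mathbf{y}-\mathbf{H}\mathbf{x}),
\qquad
\hat{\mathbf{x}} = \mathbf{x}_b + \mathbf{B}\mathbf{H}^{\mathrm T}
 \big(\mathbf{H}\mathbf{B}\mathbf{H}^{\mathrm T} + \mathbf{R}\big)^{-1}
 (\mathbf{y}-\mathbf{H}\mathbf{x}_b),
\]

where x_b holds the prior (SGHGI-based) emissions, B and R are the prior and observation error covariances, y the CarboCount-CH mole fraction observations and H the transport operator; the extended Kalman filter variant mentioned above updates the estimate sequentially as new observations arrive.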
Abstract:
Ecological succession provides a widely accepted description of seasonal changes in phytoplankton and mesozooplankton assemblages in the natural environment, but concurrent changes in smaller (i.e. microbes) and larger (i.e. macroplankton) organisms are not included in the model because plankton ranging from bacteria to jellies are seldom sampled and analyzed simultaneously. Here we studied, for the first time in the aquatic literature, the succession of marine plankton in a whole-plankton assemblage spanning 5 orders of magnitude in size, from microbes to macroplankton predators (not including fish or fish larvae, for which no consistent data were available). Samples were collected in the northwestern Mediterranean Sea (Bay of Villefranche) weekly for 10 months. Simultaneously collected samples were analyzed by flow cytometry, inverse microscopy, FlowCam, and ZooScan. The whole-plankton assemblage underwent sharp reorganizations that corresponded to bottom-up events of vertical mixing in the water column, and its development was top-down controlled by large gelatinous filter feeders and predators. Based on the results provided by our novel whole-plankton assemblage approach, we propose a new comprehensive conceptual model of the annual plankton succession (i.e. a whole-plankton model) characterized both by the stepwise stacking of four broad trophic communities from early spring through summer, which is a new concept, and by the progressive replacement of ecological plankton categories within the different trophic communities, as traditionally recognised.
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-06
Abstract:
The marginalisation of the teaching and learning of legal research in the Australian law school curriculum is, in the author's experience, a condition common to many law schools. This is reflected in the reluctance of some law teachers to include legal research skills in the substantive law teaching schedule — often the result of unwillingness on the part of law school administrators to provide the resources necessary to ensure that such integration does not place a disproportionately heavy burden of assessment on those who are tempted. However, this may only be one of many reasons for the marginalisation of legal research in the law school experience. Rather than analyse the reasons for this marginalisation, this article deals with what needs to be done to rectify the situation, and to ensure that the teaching of legal research can be integrated into the law school curriculum in a meaningful way. This requires the use of teaching and learning theory which focuses on student-centred learning. This article outlines a model of legal research. It incorporates five transparent stages which are: analysis, contextualisation, bibliographic skills, interpretation and assessment and application.
Abstract:
Objective: Existing evidence suggests that vocational rehabilitation services, in particular individual placement and support (IPS), are effective in assisting people with schizophrenia and related conditions to gain open employment. Despite this, such services are not available to all unemployed people with schizophrenia who wish to work. Existing evidence suggests that while IPS confers no clinical advantages over routine care, it does improve the proportion of people returning to employment. The objective of the current study is to investigate the net benefit of introducing IPS services into current mental health services in Australia. Method: The net benefit of IPS is assessed from a health sector perspective using cost-benefit analysis. A two-stage approach is taken to the assessment of benefit. The first stage involves a quantitative analysis of the net benefit, defined as the benefits of IPS (comprising transfer payments averted, income tax accrued and individual income earned) minus the costs. The second stage involves application of 'second-filter' criteria (including equity, strength of evidence, feasibility and acceptability to stakeholders) to the results. The robustness of the results is tested using multivariate probabilistic sensitivity analysis. Results: The costs of IPS are $A10.3M (95% uncertainty interval $A7.4M-$A13.6M), the benefits are $A4.7M ($A3.1M-$A6.5M), resulting in a negative net benefit of $A5.6M ($A8.4M-$A3.4M). Conclusions: The current analysis suggests that IPS costs are greater than the monetary benefits. However, the evidence base of the current analysis is weak. Structural conditions surrounding welfare payments in Australia create disincentives to full-time employment for people with disabilities.
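In other words, the headline figure is simply the net-benefit identity applied to the central estimates quoted above:

\[
\mathrm{NB} = \underbrace{\$A4.7\mathrm{M}}_{\text{transfers averted}\,+\,\text{tax accrued}\,+\,\text{income earned}}
 - \underbrace{\$A10.3\mathrm{M}}_{\text{IPS programme costs}}
 = -\$A5.6\mathrm{M}.
\]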
Abstract:
We discuss the construction of a photometric redshift catalogue of luminous red galaxies (LRGs) from the Sloan Digital Sky Survey (SDSS), emphasizing the principal steps necessary for constructing such a catalogue: (i) photometrically selecting the sample, (ii) measuring photometric redshifts and their error distributions, and (iii) estimating the true redshift distribution. We compare two photometric redshift algorithms for these data and find that they give comparable results. Calibrating against the SDSS and SDSS-2dF (Two Degree Field) spectroscopic surveys, we find that the photometric redshift accuracy is σ ∼ 0.03 for redshifts less than 0.55 and worsens at higher redshift (∼0.06 for z < 0.7). These errors are caused by photometric scatter, as well as systematic errors in the templates, filter curves and photometric zero-points. We also parametrize the photometric redshift error distribution with a sum of Gaussians and use this model to deconvolve the errors from the measured photometric redshift distribution to estimate the true redshift distribution. We pay special attention to the stability of this deconvolution, regularizing the method with a prior on the smoothness of the true redshift distribution. The methods that we develop are applicable to general photometric redshift surveys.
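Schematically (in generic notation that may differ from the authors'), the deconvolution step models the observed photometric redshift distribution as the true redshift distribution convolved with the Gaussian-mixture error model and solves for n(z) under a smoothness prior:

\[
n_p(z_p) = \int n(z)\, P(z_p \mid z)\, \mathrm{d}z,
\qquad
P(z_p \mid z) = \sum_{k} w_k\, \mathcal{N}\!\big(z_p - z;\ \mu_k,\ \sigma_k^2\big),
\qquad \sum_k w_k = 1.
\]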