921 results for Adaptative projection


Relevance:

60.00%

Publisher:

Abstract:

This master's thesis deals with computer vision applied to technological art projects. The subject is the calibration of camera and projector systems for tracking and 3D reconstruction applications in the visual and performing arts. The thesis is built around two collaborations with the Quebec artists Daniel Danis and Nicolas Reeves. Projective geometry and classical calibration methods, such as planar calibration and calibration from epipolar geometry, are presented to introduce the techniques used in these two projects. The collaboration with Nicolas Reeves consists in calibrating a camera-projector system mounted on a robotic head in order to project video in real time onto mobile cubic screens. Beyond applying classical calibration methods, we propose a new technique for calibrating the pose of a camera mounted on a robotic head. This technique uses elliptical planes, generated by observing a single point in the world, to determine the pose of the camera with respect to the centre of rotation of the robotic head. The project with stage director Daniel Danis addresses calibration techniques for multi-camera systems. For his theatre project, we developed a calibration algorithm for a network of Wiimote cameras. This technique, based on epipolar geometry, allows 3D reconstruction of a trajectory over a large volume at minimal cost. The results of the calibration techniques developed are presented, together with their use in real performance settings in front of an audience.
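As an illustration of the planar calibration mentioned in the abstract, here is a minimal sketch using OpenCV's standard checkerboard routine; the board size and image file names are hypothetical, and this is not the thesis's robotic-head or camera-projector procedure.

```python
# A minimal sketch of planar (checkerboard) camera calibration with OpenCV.
# The board size and image file names below are illustrative assumptions.
import glob
import cv2
import numpy as np

board = (9, 6)                                   # inner corners per row/column (assumed)
objp = np.zeros((board[0] * board[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:board[0], 0:board[1]].T.reshape(-1, 2)

obj_points, img_points, image_size = [], [], None
for path in glob.glob("calib_*.png"):            # hypothetical image files
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    image_size = gray.shape[::-1]                # (width, height)
    found, corners = cv2.findChessboardCorners(gray, board)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# Intrinsics K, distortion coefficients, and one rotation/translation per view
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, image_size, None, None)
print("RMS reprojection error:", rms)
```

The per-view extrinsics (rvecs, tvecs) give the camera pose relative to each board placement, the basic building block that epipolar and camera-projector calibration extend.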

Relevance:

30.00%

Publisher:

Abstract:

This paper is concerned with the numerical solution of time-dependent two-dimensional incompressible flows. By using the primitive variables of velocity and pressure, the Navier-Stokes and mass conservation equations are solved by a semi-implicit finite difference projection method. A new bounded higher-order upwind convection scheme is employed to deal with the non-linear (advective) terms. The procedure is an adaptation of the GENSMAC (J. Comput. Phys. 1994; 110: 171-186) methodology for calculating confined and free-surface fluid flows at both low and high Reynolds numbers. The calculations were performed using the 2D version of the Freeflow simulation system (J. Comp. Visual. Science 2000; 2:199-210). In order to demonstrate the capabilities of the numerical method, various test cases are presented: fully developed flow in a channel, flow over a backward-facing step, the die-swell problem, the broken-dam flow, and a jet impinging onto a flat plate. The numerical results compare favourably with the experimental data and the analytical solutions. Copyright (c) 2006 John Wiley & Sons, Ltd.
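To make the projection idea concrete, here is a minimal explicit Chorin-type projection step on a periodic grid in NumPy. It is an illustrative sketch only (advection omitted, plain Jacobi pressure solve), not the semi-implicit GENSMAC/Freeflow scheme used in the paper.

```python
# One explicit Chorin-type projection step on a periodic 2D grid (NumPy).
import numpy as np

def projection_step(u, v, dt, dx, nu, n_jacobi=200):
    """Advance velocities (u, v) one step: diffuse, then project onto a divergence-free field."""
    lap = lambda f: (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
                     np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4.0 * f) / dx**2
    # 1) provisional (non-solenoidal) velocity with the viscous term only
    u_star = u + dt * nu * lap(u)
    v_star = v + dt * nu * lap(v)
    # 2) pressure Poisson equation: lap(p) = div(u*) / dt, solved by Jacobi iteration
    div = ((np.roll(u_star, -1, 0) - np.roll(u_star, 1, 0)) +
           (np.roll(v_star, -1, 1) - np.roll(v_star, 1, 1))) / (2.0 * dx)
    div -= div.mean()                      # solvability on a periodic domain
    p = np.zeros_like(u)
    for _ in range(n_jacobi):
        p = (np.roll(p, 1, 0) + np.roll(p, -1, 0) +
             np.roll(p, 1, 1) + np.roll(p, -1, 1) - dx**2 * div / dt) / 4.0
    # 3) projection: subtract the pressure gradient to enforce incompressibility
    u_new = u_star - dt * (np.roll(p, -1, 0) - np.roll(p, 1, 0)) / (2.0 * dx)
    v_new = v_star - dt * (np.roll(p, -1, 1) - np.roll(p, 1, 1)) / (2.0 * dx)
    return u_new, v_new, p
```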

Relevance:

20.00%

Publisher:

Abstract:

The ability to control both the minimum size of holes and the minimum size of structural members is an essential requirement in the topology optimization design process for manufacturing. This paper addresses both requirements by means of a unified approach involving mesh-independent projection techniques. An inverse projection is developed to control the minimum hole size, while a standard direct projection scheme is used to control the minimum length of structural members. In addition, a heuristic scheme combining both contrasting requirements simultaneously is discussed. Two topology optimization implementations are contributed: one in which the projection (either inverse or direct) is used at each iteration, and another in which a two-phase scheme is explored. In the first phase, the compliance minimization is carried out without any projection until convergence. In the second phase, the chosen projection scheme is applied iteratively until a solution is obtained that satisfies either the minimum member size or the minimum hole size. Examples demonstrate the various features of the projection-based techniques presented.
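For reference, below is a minimal sketch of a Guest-type direct projection on a regular grid: element densities are weighted averages of nodal design variables within a radius r_min, passed through a regularized Heaviside step. This only illustrates mesh-independent minimum member-size control; the paper's exact direct and inverse projection functions may differ.

```python
# Guest-type direct projection sketch for density-based topology optimization.
import numpy as np

def heaviside(mu, beta=8.0):
    """Regularized Heaviside step; beta controls how sharp the projection is."""
    return 1.0 - np.exp(-beta * mu) + mu * np.exp(-beta)

def direct_projection(design, r_min, dx=1.0, beta=8.0):
    """Project nodal design variables (2D array) to element densities."""
    ny, nx = design.shape
    rho = np.zeros_like(design, dtype=float)
    r = int(np.ceil(r_min / dx))
    for i in range(ny):
        for j in range(nx):
            i0, i1 = max(i - r, 0), min(i + r + 1, ny)
            j0, j1 = max(j - r, 0), min(j + r + 1, nx)
            ii, jj = np.meshgrid(np.arange(i0, i1), np.arange(j0, j1), indexing="ij")
            dist = np.hypot(ii - i, jj - j) * dx
            w = np.maximum(0.0, r_min - dist)        # linear hat weights
            mu = np.sum(w * design[i0:i1, j0:j1]) / np.sum(w)
            rho[i, j] = heaviside(mu, beta)
    return rho
```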

Relevance:

20.00%

Publisher:

Abstract:

We study the problem of distributed estimation based on the affine projection algorithm (APA), which is developed from Newton's method for minimizing a cost function. The proposed solution is formulated to ameliorate the limited convergence properties of least-mean-square (LMS) type distributed adaptive filters with colored inputs. The analysis of transient and steady-state performances at each individual node within the network is developed by using a weighted spatial-temporal energy conservation relation and confirmed by computer simulations. The simulation results also verify that the proposed algorithm provides not only a faster convergence rate but also an improved steady-state performance as compared to an LMS-based scheme. In addition, the new approach attains an acceptable misadjustment performance with lower computational and memory cost, provided the number of regressor vectors and filter length parameters are appropriately chosen, as compared to a distributed recursive-least-squares (RLS) based method.
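For context, the sketch below shows a standard single-node affine projection (APA) update in NumPy; the step size and regularization values are illustrative, and the combination step that makes the algorithm distributed in the paper is omitted.

```python
# Single-node affine projection algorithm (APA) update sketch.
import numpy as np

def apa_update(w, X, d, mu=0.5, delta=1e-4):
    """One APA iteration.
    w : (M,)   current filter weights
    X : (M, K) last K regressor vectors stacked as columns
    d : (K,)   corresponding desired responses
    """
    e = d - X.T @ w                         # a priori errors for the K regressors
    # Newton-like correction through (X^T X + delta I)^{-1}
    w_new = w + mu * X @ np.linalg.solve(X.T @ X + delta * np.eye(X.shape[1]), e)
    return w_new, e
```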

Relevance:

20.00%

Publisher:

Abstract:

Recent advances in the control of molecular engineering architectures have allowed an unprecedented ability of molecular recognition in biosensing, with a promising impact on clinical diagnosis and environment control. The availability of large amounts of data from electrical, optical, or electrochemical measurements requires, however, sophisticated data treatment in order to optimize sensing performance. In this study, we show how an information visualization system based on projections, referred to as Projection Explorer (PEx), can be used to achieve high performance for biosensors made with nanostructured films containing immobilized antigens. As a proof of concept, various visualizations were obtained with impedance spectroscopy data from an array of sensors whose electrical response could be specific toward a given antibody (analyte) owing to molecular recognition processes. In addition to discussing the distinct methods for projection and normalization of the data, we demonstrate that an excellent distinction can be made between real samples tested positive for Chagas disease and Leishmaniasis, which could not be achieved with conventional statistical methods. Such high performance probably arose from the possibility of treating the data over the whole frequency range. Through a systematic analysis, it was inferred that Sammon's mapping with standardization to normalize the data gives the best results, allowing distinction of blood serum samples containing 10⁻⁷ mg/mL of the antibody. The method inherent in PEx and the procedures for analyzing the impedance data are entirely generic and can be extended to optimize any type of sensor or biosensor.
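As a rough illustration of the projection named above, the sketch below standardizes the feature vectors and computes a Sammon mapping by plain gradient descent on the Sammon stress. PEx itself and the impedance-specific preprocessing are not reproduced, and Sammon's original method uses a quasi-Newton update rather than this simple descent.

```python
# Standardization followed by a gradient-descent Sammon mapping (sketch).
import numpy as np

def sammon_map(X, n_iter=500, lr=0.3, eps=1e-9, seed=0):
    """Standardize X (z-scores per feature) and project its rows to 2D."""
    X = (X - X.mean(axis=0)) / (X.std(axis=0) + eps)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)   # input distances
    np.fill_diagonal(D, 1.0)                                     # avoid divide-by-zero
    c = D.sum()
    rng = np.random.default_rng(seed)
    Y = rng.normal(scale=1e-2, size=(len(X), 2))                 # random 2D start
    for _ in range(n_iter):
        d = np.linalg.norm(Y[:, None, :] - Y[None, :, :], axis=-1)
        np.fill_diagonal(d, 1.0)
        # gradient of sum_(i!=j) (D_ij - d_ij)^2 / D_ij, scaled by 1/c
        coef = (d - D) / (d * D)                                 # zero on the diagonal
        grad = (2.0 / c) * np.einsum("ij,ijk->ik", coef, Y[:, None, :] - Y[None, :, :])
        Y -= lr * grad
    return Y
```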

Relevance:

20.00%

Publisher:

Abstract:

Master's degree in Informatics Engineering, with specialization in Knowledge and Decision Technologies.

Relevance:

20.00%

Publisher:

Abstract:

Objective: To summarize all relevant findings in the published literature regarding the potential dose reduction, in relation to image quality, of Sinogram-Affirmed Iterative Reconstruction (SAFIRE) compared with Filtered Back Projection (FBP). Background: Computed Tomography (CT) is one of the most used radiographic modalities in clinical practice, providing high spatial and contrast resolution. However, it also delivers a relatively high radiation dose to the patient. Reconstructing raw data using Iterative Reconstruction (IR) algorithms has the potential to iteratively reduce image noise while maintaining or improving the image quality of low-dose standard FBP reconstructions. Nevertheless, long reconstruction times made IR impractical for clinical use until recently. Siemens Medical developed a new IR algorithm called SAFIRE, which offers up to five different strength levels and provides an alternative to conventional IR with a significantly reduced reconstruction time. Methods: The MEDLINE, ScienceDirect and CINAHL databases were used for gathering literature. Eleven articles were included in this review (from 2012 to July 2014). Discussion: This narrative review summarizes the results of eleven articles (covering studies on both patients and phantoms) and describes the strengths of SAFIRE for noise reduction in low-dose acquisitions while providing acceptable image quality. Conclusion: Even though the results differ slightly, the literature gathered for this review suggests that the dose in current CT protocols can be reduced by at least 50% while maintaining or improving image quality. There is, however, a lack of literature concerning the paediatric population (which has increased radiation sensitivity). Further studies should also assess the impact of SAFIRE on diagnostic accuracy.

Relevance:

20.00%

Publisher:

Abstract:

Background: Computed tomography (CT) is one of the most used modalities for diagnostics in paediatric populations, which is a concern as it also delivers a high patient dose. Research has focused on developing computer algorithms that provide better image quality at lower dose. The iterative reconstruction algorithm Sinogram-Affirmed Iterative Reconstruction (SAFIRE) was introduced as a new technique that reduces noise to increase image quality. Purpose: The aim of this study is to compare SAFIRE with the current gold standard, Filtered Back Projection (FBP), and to assess whether SAFIRE alone permits a reduction in dose while maintaining image quality in paediatric head CT. Methods: Images of a paediatric head phantom were acquired on a SIEMENS SOMATOM PERSPECTIVE 128 using a modulated acquisition. Fifty-four images were reconstructed using FBP and the five different SAFIRE strength levels. Objective image quality was determined by measuring SNR and CNR. Visual image quality was assessed by 17 observers with different levels of radiographic experience. Images were randomized and displayed using a two-alternative forced choice (2AFC) paradigm; observers scored the images by answering five questions on a Likert scale. Results: At different dose levels, SAFIRE significantly increased SNR (by up to 54%) in the acquired images compared with FBP at 80 kVp (5.2-8.4), 110 kVp (8.2-12.3) and 130 kVp (8.8-13.1). Visual image quality was higher with increasing SAFIRE strength; the highest image quality was scored with SAFIRE level 3 and higher. Conclusion: The SAFIRE algorithm is suitable for image noise reduction in paediatric head CT. Our data demonstrate that SAFIRE enhances SNR while reducing noise, with a possible dose reduction of 68%.
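The objective metrics named above are commonly computed from region-of-interest statistics; the sketch below uses the usual definitions (SNR as ROI mean over ROI standard deviation, CNR as the absolute mean difference between two ROIs over the noise standard deviation). The exact ROI placement and definitions used in this study may differ.

```python
# Common ROI-based image-quality metrics (illustrative definitions).
import numpy as np

def snr(roi):
    """Signal-to-noise ratio of a region of interest: mean / standard deviation."""
    roi = np.asarray(roi, dtype=float)
    return roi.mean() / roi.std()

def cnr(roi_a, roi_b, noise_roi):
    """Contrast-to-noise ratio between two tissues, normalized by background noise."""
    return abs(np.mean(roi_a) - np.mean(roi_b)) / np.std(noise_roi)
```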

Relevance:

20.00%

Publisher:

Abstract:

The development of an algorithm for the construction of auxiliary projection nets (conformal, equal-area, and orthographic), in the equatorial and polar versions, is presented. The algorithm for drawing the "IGAREA 220" counting net (ALYES & MENDES, 1972) is also presented. These algorithms form the basis of the STEGRAPH program (version 2.0) for MS-DOS computers, which also has other applications.
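For illustration, the sketch below maps a line given by trend and plunge to lower-hemisphere net coordinates for the three projections named above (conformal/equal-angle, equal-area, and orthographic). It is a generic textbook formulation, not the STEGRAPH drawing code.

```python
# Lower-hemisphere projection of a line (trend/plunge) to net coordinates.
import numpy as np

def project_line(trend_deg, plunge_deg, kind="equal_area", radius=1.0):
    """Return (x, y) net coordinates; x points east, y points north."""
    t = np.radians(trend_deg)
    half = np.radians(90.0 - plunge_deg) / 2.0        # half-angle from the vertical
    if kind == "conformal":          # stereographic / Wulff (angle-preserving)
        r = radius * np.tan(half)
    elif kind == "equal_area":       # Schmidt / Lambert azimuthal equal-area
        r = radius * np.sqrt(2.0) * np.sin(half)
    elif kind == "orthographic":
        r = radius * np.cos(np.radians(plunge_deg))
    else:
        raise ValueError(kind)
    return r * np.sin(t), r * np.cos(t)
```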

Relevance:

20.00%

Publisher:

Abstract:

Hyperspectral remote sensing exploits the electromagnetic scattering patterns of the different materials at specific wavelengths [2, 3]. Hyperspectral sensors have been developed to sample the scattered portion of the electromagnetic spectrum extending from the visible region through the near-infrared and mid-infrared, in hundreds of narrow contiguous bands [4, 5]. The number and variety of potential civilian and military applications of hyperspectral remote sensing is enormous [6, 7]. Very often, the resolution cell corresponding to a single pixel in an image contains several substances (endmembers) [4]. In this situation, the scattered energy is a mixture of the endmember spectra. A challenging task underlying many hyperspectral imagery applications is then decomposing a mixed pixel into a collection of reflectance spectra, called endmember signatures, and the corresponding abundance fractions [8–10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds approximately when the mixing scale is macroscopic [13] and there is negligible interaction among distinct endmembers [3, 14]. If, however, the mixing scale is microscopic (intimate mixtures) [15, 16] and the incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [17], the linear model is no longer accurate.

Linear spectral unmixing has been intensively researched in recent years [9, 10, 12, 18–21]. It considers that a mixed pixel is a linear combination of endmember signatures weighted by the corresponding abundance fractions. Under this model, and assuming that the number of substances and their reflectance spectra are known, hyperspectral unmixing is a linear problem for which many solutions have been proposed (e.g., maximum likelihood estimation [8], spectral signature matching [22], spectral angle mapper [23], subspace projection methods [24, 25], and constrained least squares [26]). In most cases, however, the number of substances and their reflectances are not known, and hyperspectral unmixing then falls into the class of blind source separation problems [27]. Independent component analysis (ICA) has recently been proposed as a tool to blindly unmix hyperspectral data [28–31]. ICA is based on the assumption of mutually independent sources (abundance fractions), which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying statistical dependence among them. This dependence compromises the applicability of ICA to hyperspectral images, as shown in Refs. [21, 32]. In fact, ICA finds the endmember signatures by multiplying the spectral vectors with an unmixing matrix which minimizes the mutual information among sources. If the sources are independent, ICA provides the correct unmixing, since the minimum of the mutual information is obtained only when the sources are independent. This is no longer true for dependent abundance fractions. Nevertheless, some endmembers may be approximately unmixed. These aspects are addressed in Ref. [33].

Under the linear mixing model, the observations from a scene lie in a simplex whose vertices correspond to the endmembers. Several approaches [34–36] have exploited this geometric feature of hyperspectral mixtures [35]. The minimum volume transform (MVT) algorithm [36] determines the simplex of minimum volume containing the data. The method presented in Ref. [37] is also of MVT type but, by introducing the notion of bundles, it takes into account the endmember variability usually present in hyperspectral mixtures. The MVT-type approaches are computationally complex. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. For example, the gift wrapping algorithm [38] computes the convex hull of n data points in a d-dimensional space with a computational complexity of O(n^(⌊d/2⌋+1)), where ⌊x⌋ is the largest integer less than or equal to x and n is the number of samples. The complexity of the method presented in Ref. [37] is even higher, since the temperature of the simulated annealing algorithm used must follow a log(·) law [39] to assure convergence (in probability) to the desired solution.

Aiming at a lower computational complexity, some algorithms such as the pixel purity index (PPI) [35] and N-FINDR [40] still find the minimum volume simplex containing the data cloud, but they assume the presence of at least one pure pixel of each endmember in the data. This is a strong requisite that may not hold in some data sets. In any case, these algorithms find the set of most pure pixels in the data. The PPI algorithm uses the minimum noise fraction (MNF) [41] as a preprocessing step to reduce dimensionality and to improve the signal-to-noise ratio (SNR). The algorithm then projects every spectral vector onto skewers (a large number of random vectors) [35, 42, 43]. The points corresponding to extremes, for each skewer direction, are stored. A cumulative account records the number of times each pixel (i.e., a given spectral vector) is found to be an extreme. The pixels with the highest scores are the purest ones. The N-FINDR algorithm [40] is based on the fact that in p spectral dimensions, the p-volume defined by a simplex formed by the purest pixels is larger than any other volume defined by any other combination of pixels. This algorithm finds the set of pixels defining the largest volume by inflating a simplex inside the data.

ORASIS [44, 45] is a hyperspectral framework developed by the U.S. Naval Research Laboratory consisting of several algorithms organized in six modules: exemplar selector, adaptive learner, demixer, knowledge base or spectral library, and spatial postprocessor. The first step consists in flat-fielding the spectra. Next, the exemplar selection module is used to select the spectral vectors that best represent the smaller convex cone containing the data. The other pixels are rejected when the spectral angle distance (SAD) is less than a given threshold. The procedure finds the basis for a subspace of a lower dimension using a modified Gram–Schmidt orthogonalization. The selected vectors are then projected onto this subspace, and a simplex is found by an MVT process. ORASIS is oriented to real-time target detection from uncrewed air vehicles using hyperspectral data [46].

In this chapter we develop a new algorithm to unmix linear mixtures of endmember spectra. First, the algorithm determines the number of endmembers and the signal subspace using a newly developed concept [47, 48]. Second, the algorithm extracts the purest pixels present in the data. Unlike other methods, this algorithm is completely automatic and unsupervised. To estimate the number of endmembers and the signal subspace in hyperspectral linear mixtures, the proposed scheme begins by estimating the signal and noise correlation matrices. The latter is based on multiple regression theory. The signal subspace is then identified by selecting the set of signal eigenvalues that best represents the data in the least-squares sense [48, 49]. We note, however, that the proposed vertex component analysis (VCA) algorithm works with projected and with unprojected data. The extraction of the endmembers exploits two facts: (1) the endmembers are the vertices of a simplex and (2) the affine transformation of a simplex is also a simplex. Like the PPI and N-FINDR algorithms, VCA also assumes the presence of pure pixels in the data. The algorithm iteratively projects the data onto a direction orthogonal to the subspace spanned by the endmembers already determined. The new endmember signature corresponds to the extreme of the projection. The algorithm iterates until all endmembers are exhausted. VCA performs much better than PPI and better than or comparably to N-FINDR; yet it has a computational complexity between one and two orders of magnitude lower than N-FINDR.

The chapter is structured as follows. Section 19.2 describes the fundamentals of the proposed method. Section 19.3 and Section 19.4 evaluate the proposed algorithm using simulated and real data, respectively. Section 19.5 presents some concluding remarks.
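As a rough sketch of the iterative projection step described above (not the full VCA algorithm: dimensionality reduction, the signal-subspace estimation of Refs. [47, 48], and SNR-dependent details are omitted), the code below repeatedly projects the data onto a random direction orthogonal to the subspace spanned by the endmembers found so far and takes the extreme pixel as the next endmember.

```python
# VCA-style pure-pixel extraction by iterative orthogonal projection (sketch).
import numpy as np

def extract_endmembers(R, p, seed=0):
    """R : (L, N) matrix of N spectral vectors with L bands; p : number of endmembers."""
    rng = np.random.default_rng(seed)
    L, N = R.shape
    E = np.zeros((L, p))                 # endmember signatures (columns)
    indices = []
    for k in range(p):
        if k == 0:
            P = np.eye(L)                # no endmembers yet: full space
        else:
            A = E[:, :k]
            # projector onto the orthogonal complement of the current endmember subspace
            P = np.eye(L) - A @ np.linalg.pinv(A)
        f = P @ rng.normal(size=L)       # random direction in that complement
        f /= np.linalg.norm(f) + 1e-12
        proj = f @ R                     # project every pixel onto the direction
        idx = int(np.argmax(np.abs(proj)))
        E[:, k] = R[:, idx]              # extreme of the projection = new endmember
        indices.append(idx)
    return E, indices
```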

Relevance:

20.00%

Publisher:

Abstract:

The aim of this work is to characterize the response of P. putida to adverse environmental conditions imposed by the presence of the cationic detergent tetradecyltrimethylammonium (TDTMA). The final goal is to use this microorganism as a vehicle in bioremediation processes. The project covers aspects related to degradation and to the adaptive response that allows P. putida to tolerate high concentrations of the biocide. The degradation of TDTMA by P. putida involves a monooxygenase activity, which produces trimethylamine (TMA) and tetradecyl aldehyde. Part of the TMA produced is demethylated by a TMA dehydrogenase (TMADH) and used by the bacterium as a nitrogen source, and part accumulates intracellularly, inhibiting bacterial growth. Considering the importance of oxygenases and dehydrogenases in the chemical transformation of recalcitrant compounds, the genes responsible for the monooxygenase and TMADH activities will be identified and the enzymes characterized, which will also provide evolutionary data on them. Given that the intracellular accumulation of TMA leads to partial degradation of the detergent, an effect counteracted by the addition of aluminium (Al), we will investigate whether other nutritional factors participate in the control of TDTMA degradation by P. putida. We will also investigate whether the global regulator NtrC, which is activated in response to nitrogen limitation, participates in TDTMA metabolism. We plan to construct mutants in the genes encoding the monooxygenase and TMADH and to analyse the response of these strains to the stress caused by TDTMA and Al. This project further postulates that changes at the level of membrane phospholipids (PL) are a strategy used by P. putida to survive in the presence of TDTMA. To establish whether phosphatidylglycerol is primarily responsible for the adaptation of P. putida to the stress caused by TDTMA, we intend to obtain mutants affected in de novo PL biosynthesis, particularly in cardiolipin synthase. In parallel, we will study whether phospholipase D participates in the response, which would make it possible to assign a role to this enzyme in signalling processes analogous to those occurring in eukaryotic organisms. In the presence of TDTMA and Al, P. putida responds by increasing its phosphatidylcholine content, and this PL possibly acts as a temporary reservoir of the ion. Identifying in P. putida the genes encoding the enzymes responsible for its biosynthesis, particularly phosphatidylcholine synthase and/or phospholipid N-methyltransferase, will reveal the mechanism by which phosphatidylcholine would be involved in the response to Al.

Relevance:

20.00%

Publisher:

Abstract:

Cryptococcosis is caused by the inhalation of encapsulated yeasts of Cryptococcus neoformans or Cryptococcus gattii. It is one of the three most serious opportunistic infections in AIDS patients, and there is approximately a 6 percent incidence of clinical cryptococcosis in solid-organ transplant patients. These two species differ in their pathophysiology during infection. The main virulence factor of Cryptococcus sp. is the capsular polysaccharide glucuronoxylomannan (GXM), of high molecular weight, which is continuously secreted by the yeasts. Macrophages are central cells in the innate response to the fungus and must be activated by T helper 1 lymphocytes for efficient control of the infection. However, these cells are also susceptible to intracellular parasitism, allowing persistent infection and dissemination to extrapulmonary sites. This project proposes to investigate the capacity of C. neoformans and C. gattii yeasts, and of the capsular polysaccharides, to modulate the proinflammatory response of macrophages. We want to study whether treating macrophages with yeasts or polysaccharide can induce profiles that suppress the protective T helper 1 response, such as T helper 2 or regulatory T lymphocytes, favouring intracellular survival of the fungus. In addition, we hypothesize that C. neoformans and C. gattii could induce differential activation of macrophages, which would condition the adaptive response and could explain the differences in the pathophysiology of these two species.

Experimental procedures. Microorganisms and GXM preparation: we will work with C. neoformans variety grubii, strain ATCC 62067, and C. gattii serotype B, strain NIH112B. Capsular polysaccharides (GXM) from C. neoformans and C. gattii will be obtained by ethanol precipitation and selective complexation with CTAB. Murine macrophages and cell cultures: macrophages will be obtained by peritoneal and/or alveolar lavage of BALB/c mice and cultured for 24 h in the absence or presence of killed or live yeasts (unopsonized or opsonized) of C. neoformans or C. gattii, or in the presence of purified GXM. Aim 1, study of the modulation of the proinflammatory properties of macrophages: cytokines in culture supernatants will be measured by capture ELISA, and the expression of the enzymes iNOS, arginase and IDO in cell lysates by western blot; the expression of MHC II and of the CD80, CD86, CD40 and CTLA-4 molecules will be analysed by flow cytometry. Aim 2, in vitro studies of the capacity of macrophages treated with yeasts or GXM to induce Th1, Th2 or Treg lymphocytes: macrophages pre-incubated with GXM or yeasts will be incubated with autologous lymphocytes stimulated with anti-CD3, and cell proliferation and the cytokine profile will be measured by flow cytometry; CD4+ CD25- T cells will be purified from splenic suspensions of normal mice, incubated with macrophages (untreated or treated with yeasts or GXM), stimulated with anti-CD3, and analysed for proliferation with CFSE and for expression of CD4, CD25 and Foxp3. Aim 3, in vivo studies of the capacity of yeasts or GXM to induce Th1, Th2 or Treg lymphocytes, and of the role of macrophages in vivo: mice will be injected intravenously with 100,000 yeasts or with 200 µg of pure GXM, and after 7, 14, 30 and 40 days the splenic cell populations will be evaluated by flow cytometry using simultaneous staining for CD4, CD8, CD25, Foxp3 and intracellular cytokines. To investigate the in vivo participation of macrophages, these cells will be depleted by injecting the animals with PBS-liposomes or clodronate (DMDP)-liposomes intravenously or by inhalation (200-300 µl per mouse); after 24 h, the animals will be infected with yeasts or inoculated with GXM, and the splenic or lymph-node T cell profiles will be evaluated.