10 results for Distortions

at Universidad Politécnica de Madrid


Relevance:

10.00%

Publisher:

Abstract:

The wake produced by the structural supports of ultrasonic anemometers (UAs) causes distortions in the velocity field in the vicinity of the sonic path. These distortions are measured by the UA, inducing errors in the determination of the mean velocity, turbulence intensity, spectrum, etc., basic parameters for determining the effect of wind on structures. Additionally, these distortions can lead to indefinition of the calibration function of the sensors (Cuerva et al., 2004). Several wind tunnel tests have been dedicated to obtaining experimental data, from which fit models have been developed to describe and correct these distortions (Kaimal, 1978 and Wyngaard, 1985). This work explores the effect of a vortex wake generated by the supports of a UA on the wind speed measured by this instrument. To do this, the von Kármán vortex street potential model is combined with the mathematical model of the measuring process carried out by UAs developed by Franchini et al. (2007). The results obtained are correction functions for the measured wind velocity, which depend on the geometry of the sonic anemometer and the aerodynamic conditions. These results have been validated against those obtained in a wind tunnel test on a single-path UA specially developed for research. The supports of this UA were modified in order to reproduce the conditions of the theoretical model. Good agreement between experimental and theoretical results has been found.
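
As a rough illustration of the kind of model involved (not the Franchini et al. (2007) measurement model itself), the sketch below evaluates the classical potential-flow velocity of an idealized von Kármán vortex street and averages the streamwise perturbation along a hypothetical sonic path; all parameters and the path location are illustrative assumptions.

    import numpy as np

    def street_velocity(z, gamma=1.0, a=1.0, h=0.3):
        # Complex velocity u - i*v induced by an idealized von Karman vortex
        # street (two staggered infinite rows of point vortices), from the
        # classical potential-flow result; signs and parameters are illustrative.
        k = np.pi / a
        upper = -1j * gamma / (2 * a) / np.tan(k * (z - 1j * h / 2))
        lower = 1j * gamma / (2 * a) / np.tan(k * (z - a / 2 + 1j * h / 2))
        return upper + lower

    # Average the streamwise perturbation along a hypothetical sonic path,
    # mimicking the line averaging performed by an ultrasonic anemometer.
    s = np.linspace(0.0, 1.0, 200)
    path = (0.2 + 0.15 * s) + 1j * (0.8 - 0.1 * s)
    u = street_velocity(path).real
    print("path-averaged streamwise perturbation:", u.mean())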

Relevance:

10.00%

Publisher:

Abstract:

In this PhD thesis proposal, the principles of diffusion MRI (dMRI) as applied to mapping human brain connectivity are reviewed. The background section covers the fundamentals of dMRI, with special focus on the distortions caused by susceptibility inhomogeneity across tissues. A thorough survey of the available correction methodologies for this common dMRI artifact is also presented, and two methodological approaches to improved correction are introduced. Finally, the proposal describes its objectives, the research plan, and the necessary resources.

Relevance:

10.00%

Publisher:

Abstract:

The development of (static and dynamic) programs with constant and linear elements has shown good behaviour. It seems natural to combine the advantages of both so that the results are not affected by local distortions. This paper presents research on mixed elements and on the way to solve the over-determination that appears in some cases. Although the whole study has been carried out with potential theory, its application to elastic problems is straightforward.
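
A common way to handle such an over-determined set of equations A x ≈ b is a least-squares solve; the sketch below uses synthetic matrices (not the paper's boundary-element formulation) purely to illustrate the idea.

    import numpy as np

    # Illustrative over-determined system A x ~ b (more equations than
    # unknowns), as can arise when constant and linear boundary elements
    # impose redundant collocation conditions; all data are synthetic.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((12, 8))                  # 12 equations, 8 unknowns
    x_true = rng.standard_normal(8)
    b = A @ x_true + 1e-3 * rng.standard_normal(12)   # slightly inconsistent RHS

    x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)      # least-squares solution
    print("recovery error:", np.linalg.norm(x_ls - x_true))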

Relevance:

10.00%

Publisher:

Abstract:

Connectivity analysis of whole-brain diffusion MRI data suffers from distortions caused by the standard echo-planar imaging acquisition strategies. These images show characteristic geometrical deformations and signal dropout, an important drawback limiting the success of tractography algorithms. Several retrospective correction techniques are readily available. In this work, we use a digital phantom designed for the evaluation of connectivity pipelines. We subject the phantom to a “theoretically correct” and plausible deformation that resembles the artifact under investigation. We then correct the data back with three standard methodologies (fieldmap-based, reversed encoding-based, and registration-based). Finally, we rank the methods by their geometrical accuracy, their dropout compensation, and their impact on the resulting connectivity matrices.
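
As a toy illustration of the evaluation idea (not the actual whole-brain phantom or any of the three correction methods), the sketch below warps a synthetic 2-D image with a smooth displacement field along one axis, applies an approximate inverse as an "oracle" correction, and scores the geometric accuracy with an RMSE; every array and parameter is made up for the example.

    import numpy as np
    from scipy.ndimage import map_coordinates

    # Toy 2-D "phantom" and a smooth displacement field along one
    # (phase-encoding-like) axis, imitating a susceptibility distortion.
    ny, nx = 64, 64
    yy, xx = np.mgrid[0:ny, 0:nx]
    phantom = np.exp(-((yy - 32) ** 2 + (xx - 32) ** 2) / 200.0)
    disp = 3.0 * np.exp(-((yy - 20) ** 2 + (xx - 40) ** 2) / 300.0)  # voxels

    distorted = map_coordinates(phantom, [yy + disp, xx], order=3)    # apply warp
    corrected = map_coordinates(distorted, [yy - disp, xx], order=3)  # approx. inverse

    rmse = np.sqrt(np.mean((corrected - phantom) ** 2))
    print("geometric-accuracy RMSE after correction:", rmse)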

Relevance:

10.00%

Publisher:

Abstract:

A real-time, large-scale part-to-part video matching algorithm, based on the cross-correlation of the intensity-of-motion curves, is proposed with a view to originality recognition, video database cleansing, copyright enforcement, video tagging and video result re-ranking. Moreover, it is suggested how the most representative hashes and distance functions (strada, discrete cosine transform, Marr-Hildreth and radial) should be integrated so that the matching algorithm is invariant against blur, compression and rotation distortions: (R, σ) ∈ [1, 20] × [1, 8], from 512×512 down to 32×32 pixels², and from 10° to 180°. The DCT hash is invariant against blur and compression down to 64×64 pixels². Nevertheless, although its performance against rotation is the best, with a success rate of up to 70%, it should be combined with the Marr-Hildreth distance function. With the latter, the image selected by the DCT hash should be at a distance lower than 1.15 times the Marr-Hildreth minimum distance.
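
To make the DCT-hash idea concrete, here is a generic pHash-style sketch (not the exact hash or thresholds used in the paper): the frame's low-frequency DCT block is thresholded at its median to form a binary hash, and frames are compared by Hamming distance. The frame data below are random placeholders.

    import numpy as np
    from scipy.fft import dct

    def dct_hash(gray, hash_size=8):
        # pHash-style DCT hash of a grayscale frame (2-D float array):
        # keep the low-frequency DCT block and threshold it at its median.
        c = dct(dct(gray, axis=0, norm="ortho"), axis=1, norm="ortho")
        low = c[:hash_size, :hash_size]
        return (low > np.median(low)).flatten()

    def hamming(h1, h2):
        return int(np.count_nonzero(h1 != h2))

    # Hypothetical usage: compare a frame with a mildly blurred copy.
    rng = np.random.default_rng(1)
    frame = rng.random((32, 32))
    blurred = (frame + np.roll(frame, 1, 0) + np.roll(frame, 1, 1)) / 3.0
    print("hash distance:", hamming(dct_hash(frame), dct_hash(blurred)))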

Relevance:

10.00%

Publisher:

Abstract:

The value chain in agriculture is a current issue affecting everyone from farmers to consumers. It raises important questions about profitability and even about the continuity of certain sectors. Although the structure and concentration of the intermediate and final levels of the value chain, between the distribution and retail sectors, have evolved over time, a similar evolution does not seem to have reached the initial level of the chain, the production sector. This produces large imbalances in power and leverage between the levels of the value chain that can cause several problems for rural actors. Relatively little attention has been paid to possible market distortions caused by the high level of concentration on the distribution side of the agrifood system.

Relevance:

10.00%

Publisher:

Abstract:

This paper gives an overview of three recent studies by the authors on the topic of 3D video Quality of Experience (QoE). Two of the studies [1,2] investigated the different psychological dimensions that may be needed to describe 3D video QoE, and the third investigated the visibility and annoyance of crosstalk [3]. The results show that the video quality scale can be sufficient for evaluating S3D video experience for coding and spatial-resolution-reduction distortions. It was also confirmed that, with a more complex mixture of degradations, more than one scale should be used to capture the QoE. The third study found a linear relationship between the perceived crosstalk and the amount of crosstalk.

Relevance:

10.00%

Publisher:

Abstract:

This work explores the feasibility of automatic gamma-radiation spectral decomposition by linear algebraic equation-solving algorithms based on pseudo-inverse techniques. The algorithms have been designed with their possible implementation on specific-purpose processors of low complexity in mind. In the first chapter, the techniques for the detection and measurement of gamma radiation used to construct the spectra treated throughout the work are reviewed. The basic concepts related to the nature and properties of hard electromagnetic radiation are also re-examined, together with the physical and electronic processes involved in its detection, with special emphasis on the intrinsically statistical nature of the spectrum build-up process, which is regarded as a classification of the number of individual photon detections as a function of the energy associated with each photon. To this end, a brief description of the most important matter-radiation interaction phenomena conditioning the detection and spectrum-formation processes is given. The radiation detector is considered the most critical element of the measurement system, since it strongly conditions the detection process. For this reason, the characteristics of the most common detectors are reviewed, with special emphasis on those of semiconductor type, as these are the most frequently employed nowadays. Finally, the fundamental electronic subsystems for conditioning and pre-treating the signal delivered by the detector, traditionally referred to as Nuclear Electronics, are described. As far as spectroscopy is concerned, the subsystem of most interest for the present work is the multichannel analyzer, which carries out the qualitative treatment of the signal and builds a histogram of radiation intensity over the range of energies to which the detector is sensitive. The resulting N-dimensional vector is what is generally known as the radiation spectrum, and the different radionuclides contributing to a non-pure radiation source leave their fingerprint in it.

In the second chapter, an exhaustive review is made of the mathematical methods devised to date to identify the radionuclides present in a composite spectrum and to quantify their relative activities. One of them, multiple linear regression, is proposed as the approach best suited to the constraints and restrictions of the problem: the ability to deal with low-resolution spectra, the absence of a human operator (unsupervised operation), and the possibility of being supported by low-complexity algorithms that can be implemented on dedicated VLSI processors. The analysis problem is formally stated in the third chapter following these guidelines, and it is shown that it admits a solution within the theory of linear associative memories: an operator based on this kind of structure can provide the desired spectral decomposition. In the same context, a pair of complementary adaptive algorithms is proposed for the construction of the operator, whose arithmetic characteristics make them especially suitable for implementation on VLSI processors. The adaptive nature of the associative memory gives it great flexibility with respect to the progressive incorporation of new information.

The fourth chapter deals with an additional and highly complex problem: the treatment of the deformations introduced into the spectrum by the instrumental drifts present in the detector and in the pre-conditioning electronics. These deformations invalidate the linear regression model used to describe the measured spectrum. A model is therefore derived that includes these deformations as additional contributions to the composite spectrum, which entails a simple extension of the associative memory capable of tolerating drifts in the problem mixture and of carrying out a robust analysis of contributions. The extension method is based on the assumption of small perturbations. Laboratory practice shows that, on occasion, instrumental drifts can cause severe distortions in the spectrum that cannot be handled by this model. Therefore, in the fifth chapter, the problem of measurements affected by strong drifts is addressed from the point of view of non-linear optimization theory. This reformulation leads to the introduction of a recursive algorithm inspired by the Gauss-Newton method, which allows the concept of a feedback linear memory to be introduced. This operator offers a considerably improved capability for decomposing mixtures with strong drift without the excessive computational load of classical non-linear optimization algorithms. The work concludes with a discussion of the results obtained at the three main levels of study addressed in the third, fourth and fifth chapters, with the main conclusions derived from the study, and with an outline of possible lines of continuation of the present work.
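
A minimal sketch of the pseudo-inverse idea described above, assuming a synthetic library: a measured spectrum y is modeled as a linear mixture of known reference spectra (the columns of S), and the relative activities are recovered with the Moore-Penrose pseudo-inverse. The Gaussian "reference spectra" below are illustrative stand-ins, not real radionuclide responses.

    import numpy as np

    # Synthetic reference spectra: three "radionuclides", each a Gaussian
    # peak over 128 energy channels (illustrative library spectra only).
    channels = np.arange(128)
    def peak(center, width=4.0):
        return np.exp(-0.5 * ((channels - center) / width) ** 2)

    S = np.column_stack([peak(30), peak(60), peak(95)])   # 128 x 3 library
    a_true = np.array([5.0, 2.0, 0.7])                    # relative activities
    y = S @ a_true + 0.05 * np.random.default_rng(2).standard_normal(128)

    # Pseudo-inverse solution of the multiple linear regression  y ~ S a
    a_hat = np.linalg.pinv(S) @ y
    print("estimated activities:", np.round(a_hat, 3))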

Relevance:

10.00%

Publisher:

Abstract:

The structural connectivity of the brain is considered to encode species-wise and subject-wise patterns that will unlock large areas of understanding of the human brain. Currently, diffusion MRI of the living brain makes it possible to map the microstructure of tissue, allowing the pathways of the fiber bundles connecting cortical regions to be tracked across the brain. These bundles are summarized in a network representation called the connectome, which is analyzed using graph theory. The extraction of the connectome from diffusion MRI requires a large processing flow including image enhancement, reconstruction, segmentation, registration, diffusion tracking, etc. Although a concerted effort has been devoted to the definition of standard pipelines for connectome extraction, it is still crucial to define quality assessment protocols for these workflows. The definition of quality control protocols is hindered by the complexity of the pipelines under test and the absolute lack of gold standards for diffusion MRI data. Here we characterize the impact on structural connectivity workflows of the geometrical deformation typically shown by diffusion MRI data due to the inhomogeneity of magnetic susceptibility across the imaged object. We propose an evaluation framework, including whole-brain realistic phantoms, to compare the existing methodologies for correcting these artifacts. Additionally, we design and implement an image segmentation and registration method that avoids the correction task altogether and enables processing in the native space of the diffusion data. We release PySDCev, an evaluation framework for the quality control of connectivity pipelines, specialized in the study of susceptibility-derived distortions. In this context, we propose Diffantom, a whole-brain phantom that provides a solution to the lack of gold-standard data. The three correction methodologies under comparison performed reasonably well, and it is difficult to determine which method is more advisable. We demonstrate that susceptibility-derived correction is necessary to increase the sensitivity of connectivity pipelines, at the cost of specificity. Finally, with the registration and segmentation tool called regseg, we demonstrate how the problem of susceptibility-derived distortion can be overcome, allowing data to be used in their original coordinates. This is crucial to increase the sensitivity of the whole pipeline without any loss in specificity.
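
Since the evaluation above is summarized in terms of the sensitivity and specificity of connectivity pipelines, the sketch below shows one generic way to score a recovered connectivity matrix edge-wise against a ground-truth (phantom) connectome; it is not part of PySDCev or regseg, and the matrices are random placeholders.

    import numpy as np

    def edge_scores(recovered, truth, thr=0.0):
        # Edge-wise sensitivity and specificity of a recovered connectivity
        # matrix against a ground-truth connectome.  Both matrices are
        # symmetric with zero diagonal; entries above `thr` count as edges.
        iu = np.triu_indices_from(truth, k=1)
        t = truth[iu] > thr
        r = recovered[iu] > thr
        tp = np.count_nonzero(r & t)
        tn = np.count_nonzero(~r & ~t)
        sensitivity = tp / max(np.count_nonzero(t), 1)
        specificity = tn / max(np.count_nonzero(~t), 1)
        return sensitivity, specificity

    # Hypothetical usage with a small random "phantom" connectome.
    rng = np.random.default_rng(3)
    truth = np.triu(rng.random((10, 10)) > 0.7, k=1).astype(float)
    truth = truth + truth.T
    noisy = truth.copy()
    noisy[1, 4] = noisy[4, 1] = 1.0      # introduce a spurious edge
    print(edge_scores(noisy, truth))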

Relevance:

10.00%

Publisher:

Abstract:

The thesis “Peter Celsing in the complex of Sergels torg. The House of Culture in Stockholm” delves into the work of this architect and into his connections with other architectures and architects of his physical and temporal surroundings, which constitute one of the most interesting and least known episodes of Nordic architecture. The particular focus of the study is the House of Culture within the complex of Sergels torg, a key work that marks a before and after in his career. A process of constant development can be observed, one that took shape gradually and reaches its peak in this project. In addition, the projects that coexisted with its evolution, and those that followed, filtered the latent concerns of the greatest challenge he had faced, giving way to novel results in his production. The study is divided into three chapters. The first, “Learning”, examines his youthful experiences, study trips and the lessons of his masters, as well as his professional beginnings in the restoration project of Uppsala Cathedral and in his churches alongside Lewerentz. In that period the formal and the sculptural prevailed, together with concrete and handmade brick, with Le Corbusier and the chapel at Ronchamp as references. The second chapter, “Work”, studies the competition that gave rise to the House of Culture, its gestation and design process, and the subsequent modifications during its construction. Suddenly the spatial qualities and structural systems learned from Mies in the Crown Hall in Chicago emerge, together with the large scale of the metropolis and light industrial solutions. The third chapter, “Maturity”, serves as a closure and reviews his later career in relation to this work. After this building, the design of his contemporary proposals becomes more abstract and simple, gaining autonomy, forcefulness, boldness and character. The conclusions verify the change of attitude and of paradigms. There are elements such as distortions, contrasts and manipulations, learned in his early years of training and alongside Lewerentz, although they have now grown in proportion to the scale of his interventions and can even be observed in the detail, as happens in the meeting of materials and their assembly. His vision, his point of view, is elevated, and the piece adopts a compact and unitary volume. Each new work synthesizes a more universal and open approach in concepts and operating methodology. His concerns speak of an architect aware of his time and whose final architecture already looks toward the 21st century.