957 results for Place recognition algorithm
Abstract:
Master's dissertation in Systems Engineering
Abstract:
Integrated master's dissertation in Information Systems Engineering and Management
Abstract:
Doctoral thesis in Electronic and Computer Engineering
Abstract:
The MAP-i Doctoral Programme in Informatics, of the Universities of Minho, Aveiro and Porto
Abstract:
Feminisms in Portugal, as elsewhere, have been shaped historically. From the revolutions of the late 19th and early 20th centuries, which ended monarchy and established a republican system, women have taken a stand. In the late 1970s, after 48 years of dictatorship during which feminist issues were effectively silenced, feminist groups began to appear in Portugal. It was then, in 1976, that UMAR (União de Mulheres Alternativa e Resposta [‘Union of Women for Alternatives and Answers’]) began its fight against discrimination and violence against women.
Abstract:
Master's dissertation in Human Rights
Abstract:
This chapter identifies and analyzes some characteristics of working situations that are important in the construction of professional identities. It is claimed that professional identity presumes a dynamic process and is constructed and reconstructed in social situations and interactions. Work has a central place in the life of individuals and societies. The contexts of work are places par excellence for investment, expression, negotiation and recognition of the actors themselves and of others. They are, thus, situations for the attribution of feelings about work, of relational transactions, of learning and production of professional knowledge, which are fundamental elements in the (re)construction of professional identity and which are emphasized below.
Abstract:
Human Immunodeficiency Virus type 1 (HIV-1) primarily affects the specific immune response, causing a progressive loss of CD4+ T lymphocytes. However, the virus also affects cells of the innate immune system, such as polymorphonuclear neutrophils (PMN). There is evidence of functional alterations of PMN during the progression of HIV infection, and one explanation attributes these defects to increased constitutive programmed cell death, or apoptosis. The involvement of PMN apoptosis in HIV infection has not been fully elucidated; the objectives of this project are therefore to investigate the effect of HIV infection on PMN apoptosis, to analyze the expression of pattern-recognition molecules and receptors in these cells, and to evaluate the impact of antiretroviral therapy on apoptosis and on the expression of these molecules and receptors in PMN. Individuals at different clinical and immunological stages of infection, with or without antiretroviral treatment, will be included, and hematological, immunological and virological parameters will be determined in order to correlate the level of apoptosis and the expression of molecules and receptors with CD4+ T lymphocyte counts and viral load. The importance of PMN in the control of HIV infection is currently an area of great interest, since these cells can exert a direct anti-HIV effect while at the same time being targets of viral infection. The mechanisms leading to the accelerated death of PMN have not been fully elucidated; studying them will make it possible to understand the biochemical basis of the morphological changes and to determine the mechanisms that govern their initiation and regulation. In the present project, the study of PMN apoptosis in patients with HIV/AIDS will make it possible to characterize the survival of these cells and its relationship with immunological and virological status and with antiretroviral therapy.
In addition, the study of receptors that recognize pathogen-associated molecular patterns will help clarify aspects of the activation of the innate immune response and its connection with adaptive immunity. Understanding key aspects of the PMN apoptosis cascade and of the expression of pattern-recognition receptors in HIV/AIDS could in the future provide potential therapeutic targets for restoring the function of these cells during this infection.
Abstract:
Today's advances in high-performance computing are driven by the parallel processing capabilities of available hardware architectures. These architectures enable the acceleration of algorithms when the algorithms are properly parallelized to exploit the specific processing power of the underlying architecture. Most current processors are targeted at general-purpose use and integrate several processor cores on a single chip, resulting in what is known as a Symmetric Multiprocessing (SMP) unit. Nowadays even desktop computers make use of multicore processors, and the industry trend is to increase the number of integrated processor cores as technology matures. Graphics Processor Units (GPU), originally designed to handle only video processing, have meanwhile emerged as interesting alternatives for algorithm acceleration. Currently available GPUs are able to run from 200 to 400 threads in parallel. Scientific computing can be implemented on this hardware thanks to the programmability of newer GPUs, which have come to be known as General Processing Graphics Processor Units (GPGPU). However, GPGPUs offer little memory compared with that available to general-purpose processors, so the implementation of algorithms needs to be addressed carefully. Finally, Field Programmable Gate Arrays (FPGA) are programmable devices which can implement hardware logic with low latency, high parallelism and deep pipelines. These devices can be used to implement specific algorithms that need to run at very high speeds; however, programming them is harder than software approaches and debugging is typically time-consuming.
In this context, where several alternatives for speeding up algorithms are available, our work aims at determining the main features of these architectures and developing the know-how required to accelerate algorithm execution on them. Starting from the characteristics of the hardware, we seek to determine the properties a parallel algorithm must have in order to be accelerated, to identify which of these architectures best fits a given algorithm, and to combine them so that they complement one another. In particular, we take into account the level of data dependence, the need for synchronisation during parallel processing, the size of the data to be processed, and the complexity of parallel programming on each type of hardware.
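The trade-offs described in this abstract hinge on how much data dependence an algorithm has. As a minimal illustration (in Python; all names here are illustrative, not from the work itself), a map with no data dependence is the easiest case to parallelize, since each element can be processed without synchronisation:

```python
from concurrent.futures import ThreadPoolExecutor

def transform(x):
    # Stand-in for per-element work with no data dependence: each
    # element is processed independently of the others, so no locks
    # or synchronisation are needed -- the case that maps most easily
    # onto SMP cores or GPU threads.
    return x * x

def run_serial(data):
    return [transform(x) for x in data]

def run_parallel(data, workers=4):
    # A thread pool illustrates the fork/join structure of a parallel
    # map; for CPU-bound Python code one would use processes instead,
    # but the shape of the computation is the same.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(transform, data))

# Both schedules produce the same result because no element depends
# on any other.
assert run_serial(range(100)) == run_parallel(range(100))
```

Algorithms with heavy inter-element dependence or frequent synchronisation lose this property, which is exactly why the abstract treats data dependence and synchronisation needs as the deciding factors when matching an algorithm to SMP, GPGPU or FPGA hardware.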
Abstract:
There is considerable interest in alcohol in Irish society, yet minimal sociological understanding of its consumption, particularly of the sites where most drinking occurs: the country's 8750 pubs. Despite widespread public discussion of the role of the pub, there is scant social science evidence to better inform debate. Pubs are central to Irish community life and are key sites of social interaction. American sociologist Ray Oldenburg has argued that "third places" (neither workplace nor home) are crucial to the maintenance of community and the enhancement of social capital. According to Oldenburg, the role of the third place in the community is to provide continuity, regularity and a sense of place, all of which conceptually contribute to the construction of the self, the projection of the self within the public sphere, the distribution of social capital and the generation of a collective identity. The pub is the archetypal third place, but Oldenburg is concerned that modern pubs are less able to provide this vital function. Social scientists have suggested that community is in a state of fragmentation and decline due to changes in modes of social interaction and a decrease in shared spaces, resulting in a weakened connection to place. Community without propinquity has been characterised by social alienation, fragmentation and what Oldenburg refers to as the "problem of place" (13). Third places, and thus the Irish pub, have been particularly affected. In order to increase sociological knowledge of the pub in Ireland, this project critically engages with the pub to assess the importance that public drinking houses have in the everyday. Moreover, this research sets out to investigate the people/place relationship using the pub as an investigative lens, examining the ways in which people shape place, place shapes people, and how that relationship is implicated in the construction of Irish identities.
Furthermore, this is also an articulation of a cultural shift within Ireland and Irish places whose effects are deep and multi-layered. This project aims to explore the development of the contemporary geography of identity as the Irish pub as a third place is transformed or disappears from the social landscape.
Abstract:
The idea for this thesis arose from a chain of reactions first set in motion by a particular experience. In keeping with the contemporary need to deconstruct every phenomenon, it seemed important to analyse this experience in the hope of a satisfactory explanation. The experience referred to is the aesthetic experience provoked by works of art. The plan for the thesis involved trying to establish whether the aesthetic experience is unique and individual, or whether it is one that is experienced universally. Each question that arises in the course of this exploration promotes a dialectical reaction. I rely on the history of aesthetics as a philosophical discipline to supply the answers. This study concentrates on the efforts of philosophers and critical theorists to understand the tensions between the empirical and the emotional, and between the individual and the universal responses to the sociological, political and material conditions that prevail and are expressed through the medium of art. What I found is that the history of aesthetics is full of contradictory evidence and cannot provide a dogmatic solution to the questions posed. In fact, what is indicated is that the mystery that attaches to the aesthetic experience is one that can also apply to the spiritual or transcendent experience. The aim of this thesis is to support the contribution of visual art to the spiritual well-being of human development and to support the uniqueness of the individual's evaluation and aesthetic judgement of a work of art. I suggest that mystery will continue to be of value in the holistic development of human beings and that this mystery can be expressed through visual art. Furthermore, this thesis might suggest that what could be looked at is whether a work of art may be redemptive in its effect and offset the current decline in affective religious practice.
Abstract:
As digital image processing techniques become increasingly used in a broad range of consumer applications, the critical need to evaluate algorithm performance has become recognised by developers as an area of vital importance. With digital image processing algorithms now playing a greater role in security and protection applications, it is of crucial importance that we are able to study their performance empirically. Apart from the field of biometrics, little emphasis has been placed on algorithm performance evaluation until now, and where evaluation has taken place, it has been carried out in a somewhat cumbersome and unsystematic fashion, without any standardised approach. This paper presents a comprehensive testing methodology and framework aimed at automating the evaluation of image processing algorithms. Ultimately, the test framework aims to shorten the algorithm development life cycle by helping to identify algorithm performance problems more quickly and efficiently.
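The core of any such automated evaluation is running an algorithm over a labelled test set and reporting aggregate metrics. A minimal sketch of that loop follows; the function names and report fields are illustrative assumptions, not the API of the framework this abstract describes:

```python
def evaluate(algorithm, test_cases):
    """Run `algorithm` over (input, expected) pairs and summarise the results.

    This is a generic harness sketch: it records which cases failed so a
    developer can localise performance problems quickly, which is the goal
    the abstract attributes to its test framework.
    """
    passed = 0
    failures = []
    for idx, (inp, expected) in enumerate(test_cases):
        result = algorithm(inp)
        if result == expected:
            passed += 1
        else:
            # Keep the mismatch for later inspection.
            failures.append((idx, expected, result))
    total = len(test_cases)
    return {
        "total": total,
        "passed": passed,
        "accuracy": passed / total if total else 0.0,
        "failures": failures,
    }

# Example: evaluating a trivial pixel-thresholding "algorithm" against
# hand-labelled ground truth.
threshold = lambda px: 1 if px >= 128 else 0
report = evaluate(threshold, [(0, 0), (127, 0), (128, 1), (255, 1)])
assert report["accuracy"] == 1.0
```

A real image-processing harness would compare outputs with a tolerance metric (e.g. per-pixel error) rather than strict equality, but the structure of the evaluation loop is the same.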
Abstract:
This project was funded under the Applied Research Grants Scheme administered by Enterprise Ireland. The project was a partnership between Galway - Mayo Institute of Technology and an industrial company, Tyco/Mallinckrodt Galway. The project aimed to develop a semi - automatic, self - learning pattern recognition system capable of detecting defects on the printed circuits boards such as component vacancy, component misalignment, component orientation, component error, and component weld. The research was conducted in three directions: image acquisition, image filtering/recognition and software development. Image acquisition studied the process of forming and digitizing images and some fundamental aspects regarding the human visual perception. The importance of choosing the right camera and illumination system for a certain type of problem has been highlighted. Probably the most important step towards image recognition is image filtering, The filters are used to correct and enhance images in order to prepare them for recognition. Convolution, histogram equalisation, filters based on Boolean mathematics, noise reduction, edge detection, geometrical filters, cross-correlation filters and image compression are some examples of the filters that have been studied and successfully implemented in the software application. The software application developed during the research is customized in order to meet the requirements of the industrial partner. The application is able to analyze pictures, perform the filtering, build libraries, process images and generate log files. It incorporates most of the filters studied and together with the illumination system and the camera it provides a fully integrated framework able to analyze defects on printed circuit boards.
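Convolution and edge detection, two of the filter families named above, can be sketched in a few lines. The following is an illustrative Python implementation (the project's actual filters are not public); it applies a Sobel-style horizontal-gradient kernel, the kind of operation used to make edges such as component outlines stand out before recognition:

```python
def convolve2d(image, kernel):
    """Valid-mode 2D convolution of a list-of-lists grayscale image.

    Slides the kernel over every position where it fits entirely inside
    the image and accumulates the weighted sum at each position.
    """
    kh, kw = len(kernel), len(kernel[0])
    ih, iw = len(image), len(image[0])
    out = []
    for y in range(ih - kh + 1):
        row = []
        for x in range(iw - kw + 1):
            acc = 0
            for j in range(kh):
                for i in range(kw):
                    acc += image[y + j][x + i] * kernel[j][i]
            row.append(acc)
        out.append(row)
    return out

# Sobel-style kernel responding to horizontal intensity changes
# (i.e. vertical edges).
SOBEL_X = [[-1, 0, 1],
           [-2, 0, 2],
           [-1, 0, 1]]

# A vertical edge (dark left half, bright right half) produces a
# strong positive response at the transition.
img = [[0, 0, 255, 255]] * 4
edges = convolve2d(img, SOBEL_X)
assert edges[0][0] > 0
```

A production system would use an optimised library routine rather than nested loops, but the arithmetic is the same, and the other filters listed (noise reduction, geometrical filters, cross-correlation) follow the same sliding-window pattern with different kernels or accumulation rules.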