854 results for visual information
Abstract:
Dissertation presented in fulfilment of the requirements for the degree of Master in Educational Sciences, specialization in Pedagogical Supervision.
Abstract:
This project was carried out during the Erasmus+ Programme at Instituto Superior de Engenharia do Porto, Portugal. I had the pleasure of doing it at Gislotica Mechanical Solution, Lda. This document presents the process of designing a vertical inspection station for truck tires. The first part is an introduction, with information about the Gislotica company and an initial analysis of the problem. The next part presents the approach taken to solve the task and describes all the issues connected with the designed machine. The last part draws conclusions about the problems and results; there is room not only to sum up the design process but also to reflect on my own development during the project. I repeatedly point out which issues were new to me, and I often focus on the experience and knowledge about the design process that I gained.
Abstract:
Visual recognition is a fundamental research topic in computer vision. This dissertation explores the datasets, features, learning methods, and models used for visual recognition. In order to train visual models and evaluate different recognition algorithms, this dissertation develops an approach to collecting object image datasets from web pages using an analysis of the text around each image and of the image's appearance. The method exploits established online knowledge resources (Wikipedia pages for text; the Flickr and Caltech datasets for images), which provide rich text and object appearance information. This dissertation describes results on two datasets. The first is Berg's collection of 10 animal categories, on which we significantly outperform previous approaches. On an additional set of 5 categories, experimental results show the effectiveness of the method. Images are represented as features for visual recognition. This dissertation introduces a text-based image feature and demonstrates that it consistently improves performance on hard object classification problems. The feature is built using an auxiliary dataset of tag-annotated images downloaded from the Internet. Image tags are noisy, so the method obtains the text features of an unannotated image from the tags of its k nearest neighbors in this auxiliary collection. A visual classifier presented with an object viewed under novel circumstances (say, a new viewing direction) must rely on its visual examples, but this text feature need not change, because the auxiliary dataset likely contains a similar picture: while the tags associated with images are noisy, they are more stable when appearance changes. The performance of this feature is tested on the PASCAL VOC 2006 and 2007 datasets. The feature performs well; it consistently improves the performance of visual object classifiers, and it is particularly effective when the training dataset is small. As more and more training data is collected, computational cost becomes a bottleneck, especially when training sophisticated classifiers such as kernelized SVMs. This dissertation proposes a fast training algorithm called the Stochastic Intersection Kernel Machine (SIKMA). The proposed training method will be useful for many vision problems, as it can produce a kernel classifier that is more accurate than a linear classifier and can be trained on tens of thousands of examples in two minutes. It processes training examples one by one in a sequence, so memory cost is no longer the bottleneck for processing large-scale datasets. This dissertation applies this approach to training classifiers for Flickr groups, each with many training examples. The resulting Flickr group prediction scores can be used to measure the similarity between two images. Experimental results on the Corel dataset and a PASCAL VOC dataset show that the learned Flickr features perform better on image matching, retrieval, and classification than conventional visual features. Visual models are usually trained to best separate positive and negative training examples. However, when recognizing a large number of object categories, there may not be enough training examples for most objects, due to the intrinsic long-tailed distribution of objects in the real world. This dissertation proposes an approach that uses comparative object similarity.
The key insight is that, given a set of categories that are similar to the target and a set of categories that are dissimilar, a good object model should respond more strongly to examples from the similar categories than to examples from the dissimilar ones. This dissertation develops a regularized kernel machine algorithm that uses this category-dependent similarity regularization. Experiments on hundreds of categories show that the method yields significant improvements for categories with few or even no positive training examples.
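The tag-transfer step behind the text-based feature can be sketched compactly. The following is a minimal illustration under stated assumptions, not the dissertation's implementation: `aux_features`, `aux_tags`, and the Euclidean metric are hypothetical stand-ins for the visual descriptors and Flickr tag vocabulary described above.

```python
import numpy as np

def knn_text_feature(query_feat, aux_features, aux_tags, k=5):
    """Build a text feature for an unannotated image by pooling the
    (noisy) tags of its k nearest neighbors in an auxiliary,
    tag-annotated image collection.

    query_feat   : (d,) visual descriptor of the query image
    aux_features : (n, d) visual descriptors of the auxiliary images
    aux_tags     : (n, t) binary tag-occurrence matrix
    returns      : (t,) soft tag histogram usable as a text feature
    """
    # Distance from the query to every auxiliary image (the metric is
    # an assumption; any descriptor-appropriate distance would do).
    dists = np.linalg.norm(aux_features - query_feat, axis=1)
    nn = np.argsort(dists)[:k]  # indices of the k nearest images
    # Averaging the neighbors' tag vectors lets tags that recur among
    # visually similar images dominate, damping individual tag noise.
    return aux_tags[nn].mean(axis=0)
```

The pooled histogram can then be appended to the visual representation before training, consistent with the claim that the feature helps most when visual training data is scarce.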
Abstract:
This thesis presents an interoperable architecture designed for the acquisition, manipulation, processing, and analysis of geographic information. The three-dimensional interface, implemented as part of the architecture, besides allowing the visualization and manipulation of spatial data within a 3D environment, offers methods for discovering, accessing, and using geo-processes available through Web Services. Furthermore, user interaction follows an approach that breaks the typical complexity of most Geographic Information Systems. This simplicity is in general achieved through a visual programming approach that allows operators to take advantage of location and to use processes through abstract representations. Processing units are represented on the terrain through 3D components, which can be directly manipulated and linked to create complex process chains. New processes can also be visually created and deployed online.
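The process-chaining idea reads naturally as code. The sketch below is a generic abstraction assumed for illustration, not the thesis's architecture: `GeoProcess`, `run_chain`, and the example process names are hypothetical, and in the real system each step would be a remote Web Service invocation rather than a local function.

```python
from typing import Callable, List

# A geo-process is modeled as any callable from one spatial dataset to
# another; remote Web Service calls can be wrapped to fit this shape.
GeoProcess = Callable[[object], object]

def run_chain(data: object, chain: List[GeoProcess]) -> object:
    """Feed a dataset through an ordered chain of geo-processes, the
    textual analogue of linking 3D components on the terrain."""
    for process in chain:
        data = process(data)
    return data

# Hypothetical usage with two made-up processes:
# slope_map = run_chain(elevation_model, [clip_to_region, compute_slope])
```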
Abstract:
It has been recently shown that local field potentials (LFPs) from the auditory and visual cortices carry information about sensory stimuli, but whether this is a universal property of sensory cortices remains to be determined. Moreover, little is known about the temporal dynamics of the sensory information contained in LFPs following stimulus onset. Here we investigated the time course of the amount of stimulus information in LFPs and spikes from the gustatory cortex of awake rats subjected to tastant and water delivery on the tongue. We found that the phase and amplitude of multiple LFP frequencies carry information about stimuli, with specific time courses after stimulus delivery. The information carried by LFP phase and amplitude was independent within frequency bands, since the joint information exhibited neither synergy nor redundancy. Tastant information in LFPs was also independent of, and had a different time course from, the information carried by spikes. These findings support the hypothesis that the brain uses different frequency channels to dynamically code for multiple features of a stimulus.
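To suggest the kind of computation involved, the sketch below gives a plug-in estimate of the mutual information between a discrete stimulus label and the binned phase of one LFP frequency band. It is a generic estimator with hypothetical variable names, not the authors' analysis pipeline (which would also involve bias correction and amplitude and spike counterparts).

```python
import numpy as np

def stimulus_phase_information(stimuli, phases, n_phase_bins=8):
    """Plug-in estimate of I(stimulus; LFP phase) in bits.

    stimuli : (n_trials,) integer stimulus labels (e.g., tastant id)
    phases  : (n_trials,) instantaneous phase in radians, in [-pi, pi)
    """
    # Discretize phase into equal bins around the circle.
    edges = np.linspace(-np.pi, np.pi, n_phase_bins + 1)[1:-1]
    bins = np.digitize(phases, edges)
    # Joint histogram of (stimulus, phase bin), normalized to a pmf.
    joint = np.zeros((int(stimuli.max()) + 1, n_phase_bins))
    for s, b in zip(stimuli, bins):
        joint[s, b] += 1
    joint /= joint.sum()
    ps = joint.sum(axis=1, keepdims=True)  # stimulus marginal
    pb = joint.sum(axis=0, keepdims=True)  # phase-bin marginal
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / (ps @ pb)[nz])).sum())
```

Under this framing, independence of the phase and amplitude codes corresponds to the joint information being close to the sum of the two individual estimates, i.e., neither synergy nor redundancy.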
Abstract:
Aims: To compare reading performance in children with and without visual function anomalies and to identify the influence of abnormal visual function and other variables on reading ability. Methods: A cross-sectional study was carried out in 110 school-age children (6-11 years) with Abnormal Visual Function (AVF) and 562 children with Normal Visual Function (NVF). An orthoptic assessment (visual acuity, ocular alignment, near point of convergence and accommodation, stereopsis, and vergences) and autorefraction were performed. Oral reading of a list of 34 words was analyzed. The number of errors, accuracy (percentage of success), and reading speed (words per minute, wpm) were used as reading indicators. Sociodemographic information was obtained from parents (n=670) and teachers (n=34). Results: Children with AVF had a higher number of errors (AVF=3.00 errors; NVF=1.00 errors; p<0.001), lower accuracy (AVF=91.18%; NVF=97.06%; p<0.001), and lower reading speed (AVF=24.71 wpm; NVF=27.39 wpm; p=0.007). Reading speed in the 3rd school grade was not statistically different between the two groups (AVF=31.41 wpm; NVF=32.54 wpm; p=0.113). Children with uncorrected hyperopia (p=0.003) and astigmatism (p=0.019) had worse reading performance. Children in the 2nd, 3rd, or 4th grade presented a lower risk of reading impairment than children in the 1st grade. Conclusion: Children with AVF showed reading impairment in the first school grade. Reading ability seems to vary widely, and this disparity lessens in older children. The slow reading of children with AVF resembles that of dyslexic children, which suggests the need for an eye examination before classifying a child as dyslexic.
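For concreteness, the three reading indicators reduce to simple arithmetic over the 34-word list; the sketch below is only an illustration of one plausible scoring, not the study's software, and the exact wpm definition is an assumption.

```python
def reading_indicators(errors: int, seconds: float, n_words: int = 34):
    """One plausible computation of the indicators for a single child.

    errors  : misread words out of the 34-word list
    seconds : time taken to read the whole list aloud
    """
    accuracy = 100.0 * (n_words - errors) / n_words  # percentage of success
    speed_wpm = n_words / (seconds / 60.0)           # words per minute
    return errors, accuracy, speed_wpm

# Example: 3 errors gives 31/34 = 91.18% accuracy, matching the AVF
# group figure above; 1 error gives 33/34 = 97.06%, the NVF figure.
```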
Abstract:
Sequences of timestamped events are currently being generated across nearly every domain of data analytics, from e-commerce web logging to the electronic health records used by doctors and medical researchers. Every day, this data type is reviewed by humans who apply statistical tests, hoping to learn everything they can about how these processes work, why they break, and how they can be improved. To further uncover how these processes work the way they do, researchers often compare two groups, or cohorts, of event sequences to find the differences and similarities between outcomes and processes. With temporal event sequence data, this task is complex because of the variety of ways single events and sequences of events can differ between the two cohorts of records: the structure of the event sequences (e.g., event order, co-occurring events, or frequencies of events), the attributes of the events and records (e.g., gender of a patient), or metrics about the timestamps themselves (e.g., duration of an event). Running statistical tests to cover all these cases and determining which results are significant becomes cumbersome. Current visual analytics tools for comparing groups of event sequences emphasize either a purely statistical or a purely visual approach to comparison. Visual analytics tools leverage humans' ability to easily see patterns and anomalies they were not expecting, but they are limited by uncertainty in the findings. Statistical tools emphasize finding significant differences in the data, but they often require researchers to have a concrete question and do not facilitate more general exploration of the data. Combining visual analytics tools with statistical methods leverages the benefits of both approaches for quicker and easier insight discovery. Integrating statistics into a visualization tool presents many challenges on the frontend (e.g., displaying the results of many different metrics concisely) and in the backend (e.g., scalability challenges in running various metrics on multi-dimensional data at once). I begin by exploring the problem of comparing cohorts of event sequences and understanding the questions that analysts commonly ask in this task. From there, I demonstrate that combining automated statistics with an interactive user interface amplifies the benefits of both types of tools, thereby enabling analysts to conduct quicker and easier data exploration, hypothesis generation, and insight discovery. The direct contributions of this dissertation are: (1) a taxonomy of metrics for comparing cohorts of temporal event sequences, (2) a statistical framework for exploratory data analysis with a method I refer to as high-volume hypothesis testing (HVHT), (3) a family of visualizations and guidelines for interaction techniques that are useful for understanding and parsing the results, and (4) a user study, five long-term case studies, and five short-term case studies which demonstrate the utility and impact of these methods in various domains: four in the medical domain, one in web log analysis, two in education, and one each in social networks, sports analytics, and security. My dissertation contributes an understanding of how cohorts of temporal event sequences are commonly compared and the difficulties associated with applying these metrics and parsing their results. It also contributes a set of visualizations, algorithms, and design guidelines for balancing automated statistics with user-driven analysis to guide users to significant, distinguishing features between cohorts.
This work opens avenues for future research in comparing two or more groups of temporal event sequences, opening traditional machine learning and data mining techniques to user interaction, and extending the principles found in this dissertation to data types beyond temporal event sequences.
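The statistical half of HVHT can be suggested with a short sketch: run one test per shared metric and control the false discovery rate across the whole batch. This is a generic illustration using a Mann-Whitney U test and a Benjamini-Hochberg correction, assumptions on my part rather than the dissertation's framework.

```python
import numpy as np
from scipy import stats

def high_volume_tests(cohort_a, cohort_b, alpha=0.05):
    """Test every shared metric and flag those surviving FDR control.

    cohort_a, cohort_b : dicts mapping metric name -> 1-D array of
                         per-record values (e.g., event durations)
    returns            : dict of metric -> (p-value, significant?)
    """
    names = sorted(set(cohort_a) & set(cohort_b))
    pvals = np.array([stats.mannwhitneyu(cohort_a[n], cohort_b[n]).pvalue
                      for n in names])
    # Benjamini-Hochberg: keep the largest k with p_(k) <= (k/m) * alpha.
    order = np.argsort(pvals)
    m = len(pvals)
    passed = pvals[order] <= (np.arange(1, m + 1) / m) * alpha
    k = int(np.max(np.nonzero(passed)[0]) + 1) if passed.any() else 0
    significant = {names[i] for i in order[:k]}
    return {n: (float(p), n in significant) for n, p in zip(names, pvals)}
```

The frontend challenge mentioned above then becomes how to surface this dictionary of results, potentially hundreds of entries, without overwhelming the analyst.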
Abstract:
Architects of the modern era in Ecuador regarded photography as an important means of recording and disseminating their work. The present study builds on that importance, focusing on the photographic documentation of Ecuadorian modern architecture. The thesis begins by contextualizing the topic, describing how photography developed and how it became tied to architecture in different countries of America and Europe. Research was then conducted on the photography of the country's modern architecture, covering the various photographic archives, publications, and photographers of the period in Cuenca, Guayaquil, and Quito. From this research, two principal photographers were selected: Rolf Blomberg and Christoph Hirtz. Their photographic work was analyzed according to criteria of value for good architectural photography. The images were then used to analyze the modifications that the architectural works and their surroundings have undergone up to the present day. The study made it possible to gather information that had been scattered. For its dissemination, a database was created and is available as a reference for future research; it contains all the information about Ecuadorian photographers of modern architecture that could be found in the various archives.
Abstract:
Field lab in marketing: Children consumer behaviour
Abstract:
While a variety of crisis types loom as real risks for organizations and communities, and the media landscape continues to evolve, research is needed to help explain and predict how people respond to various kinds of crisis and disaster information. For example, despite the rising prevalence of digital and mobile media centered on still and moving visuals, and stark increases in Americans' use of visual-based platforms for seeking and sharing disaster information, relatively little is known about how the presence or absence of disaster visuals online might prompt or deter resilience-related feelings, thoughts, and behaviors. Yet with such insights, governmental and other organizational entities, as well as communities themselves, may best help individuals and communities prepare for, cope with, and recover from adverse events. Thus, this work uses the theoretical lens of the social-mediated crisis communication model (SMCC), coupled with the limited capacity model of motivated mediated message processing (LC4MP), to explore the effects of disaster information source and visuals on viewers' resilience-related responses to an extreme flooding scenario. Results from two experiments are reported. First, a preliminary 2 (disaster information source: organization/US National Weather Service vs. news media/USA Today) x 2 (disaster visuals: no-visual podcast vs. moving-visual video) factorial between-subjects online experiment with a convenience sample of university students probes the effects of crisis source and visuals on a variety of cognitive, affective, and behavioral outcomes. A second between-subjects online experiment manipulating still and moving visual pace in online videos (no visual vs. still slow-pace vs. still medium-pace vs. still fast-pace vs. moving slow-pace vs. moving medium-pace vs. moving fast-pace) with a convenience sample recruited from Amazon's Mechanical Turk (mTurk) similarly probes a variety of potentially resilience-related cognitive, affective, and behavioral outcomes. The role of biological sex as a quasi-experimental variable is also investigated in both studies. Various implications for community resilience and recommendations for risk and disaster communicators are explored, along with implications for theory building and future research. Resulting modifications of the SMCC model (i.e., removing "message strategy" and adding the new category of "message content elements" under organizational considerations) are proposed.
Abstract:
There is no journalistic research and audiovisual production project in Cuenca that investigates, compiles, and presents information about the traditional trades inherited over time, which are gradually being lost and are headed toward complete extinction. This project can be considered innovative in that it involves two areas: audiovisual communication and journalistic writing. They are brought together by presenting relevant information through a final product, both visual and written, that shows how these trades are practiced by different people, in their contexts and through their processes, with the intention of serving as cultural research support at the local and national levels.
Abstract:
Education in and through art stimulates the creativity and cultural awareness of everyone involved, providing means for them to express themselves and participate actively in the world around us. The integration of information and communication technologies into the teaching-learning process has broadened the role that art can play in this process, promoting new ways of learning, teaching, and thinking. The use of virtual environments in educational contexts has thus shown enormous potential, above all for communication and interaction between students and works of art. In this light, it seemed important to develop a case study in the Visual Education classroom, promoting learning based on the articulation between the observation, interpretation, and analysis of works of art and the virtual museum. The main objective of this study was therefore to evaluate the potential of the Google Art Project, as a learning object, for promoting learning in arts literacy. We also sought to assess whether multimedia tools such as the Google Art Project and the interactive whiteboard are motivating factors in learning Visual Education. Methodologically, we followed an action-research strategy, seeking on the one hand to discover and understand the meaning of a reality experienced by a group of students and, on the other, to reflect on educational practice with the aim of improving and transforming it. The study involved five sixth-grade classes in public education. Data were collected through conversation- and observation-based techniques, a questionnaire, and field notes. The results show that the technological tools used can effectively contribute to promoting student learning in Visual Education, specifically in artistic literacy, representation, and visual interpretation.
Abstract:
Data without labels are commonly analyzed with unsupervised machine learning techniques. Such techniques provide representations that are more meaningful for understanding the problem at hand than the raw data alone. Although abundant expert knowledge exists in many areas where unlabelled data are examined, such knowledge is rarely incorporated into automatic analysis. Incorporating expert knowledge is frequently a matter of combining multiple data sources from disparate hypothetical spaces, and in cases where those spaces belong to different data types the task becomes even more challenging. In this paper we present a novel immune-inspired method that enables the fusion of such disparate types of data for a specific set of problems. We show that our method provides a better visual understanding of one hypothetical space with the help of data from another hypothetical space. We believe that our model has implications for the field of exploratory data analysis and knowledge discovery.
Abstract:
Posttraumatic stress and PTSD are becoming familiar terms for what we often call the invisible wounds of war, yet they are recent additions to a popular discourse in which images of and ideas about combat-affected veterans have long circulated. A legacy of ideas about combat veterans and war trauma thus intersects with more recent clinical information about PTSD to become part of a discourse of visual media that has defined and continues to redefine veteran for popular audiences. In this dissertation I examine realist combat veteran representations in selected films and other visual media from three periods: during and after World Wars I and II (James Allen from I Am a Fugitive from a Chain Gang, Fred Derry and Al Stephenson from The Best Years of Our Lives); after the Vietnam War (Michael from The Deer Hunter, Eriksson from Casualties of War); and post-9/11 (Will James from The Hurt Locker, a collection of veterans from Wartorn: 1861-2010). Employing a theoretical framework informed by visual media studies, Barthes' concept of myth, and Foucault's concept of discursive unity, I analyze how these veteran representations are endowed with PTSD-symptom-like behaviors and responses that seem reasonable and natural within the narrative arc. I contend that veteran myths appear through each veteran representation as the narrative develops and resolves. I argue that these veteran myths are many and varied but that they crystallize in a dominant veteran discourse, a discursive unity that I term veteranness. I further argue that veteranness entangles discrete categories such as veteran, combat veteran, and PTSD with veteran myths, often tying dominant discourse about combat-related PTSD to outdated or outmoded notions that significantly affect our attitudes about, and treatment of, veterans. A basic premise of my research is that unless and until we learn about the lasting effects of the trauma inherent to combat, we hinder our ability to fulfill our responsibilities to war veterans. A society that limits its understanding of posttraumatic stress, PTSD, and the post-war experiences of actual veterans affected by war trauma to veteranness or veteran myths risks normalizing or naturalizing an unexamined set of sociocultural expectations of all veterans, rendering them voiceless, invisible, and, ultimately, disposable.
Abstract:
With the progress of computer technology, computers are expected to be more intelligent in their interaction with humans, presenting information according to the user's psychological and physiological characteristics. However, computer users with visual problems may encounter difficulties perceiving icons, menus, and other graphical information displayed on the screen, which limits the efficiency of their interaction with computers. In this dissertation, a personalized and dynamic image precompensation method was developed to improve the visual performance of computer users with ocular aberrations. The precompensation was applied to graphical targets before presenting them on the screen, aiming to counteract the visual blurring caused by the ocular aberration of the user's eye. A complete and systematic modeling approach to describe the retinal image formation of the computer user was presented, taking advantage of modeling tools such as Zernike polynomials, the wavefront aberration, the Point Spread Function, and the Modulation Transfer Function. The ocular aberration of the computer user was first measured by a wavefront aberrometer, as a reference for the precompensation model. The dynamic precompensation was then generated from the aberration, rescaled according to the pupil diameter monitored in real time. The potential visual benefit of the dynamic precompensation method was explored through software simulation, using aberration data from a real human subject. An "artificial eye" experiment was conducted by simulating the human eye with a high-definition camera, providing an objective evaluation of image quality after precompensation. In addition, an empirical evaluation with 20 human participants was designed and implemented, involving image recognition tests performed under a more realistic viewing environment of computer use. The statistical analysis of the empirical experiment confirmed the effectiveness of the dynamic precompensation method by showing significant improvement in recognition accuracy. The merit and necessity of the dynamic precompensation were also substantiated by comparing it with static precompensation, and its visual benefit was further confirmed by the subjective assessments collected from the participants.
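The core precompensation idea, pre-distorting the displayed image so that the eye's own blur approximately cancels it, can be illustrated with a frequency-domain sketch. This is a generic Wiener-style inverse filter assumed for illustration, not the dissertation's personalized pipeline; in the actual method the PSF would be rebuilt from the user's measured Zernike coefficients whenever the monitored pupil diameter changes.

```python
import numpy as np

def precompensate(image, psf, snr=100.0):
    """Wiener-style inverse filtering of a graphical target.

    image : 2-D grayscale array with values in [0, 1]
    psf   : 2-D point spread function of the user's eye, same shape as
            the image, centered and normalized to sum to 1
    snr   : assumed signal-to-noise ratio; regularizes the inversion
    """
    # Optical transfer function of the eye (PSF moved to the origin).
    otf = np.fft.fft2(np.fft.ifftshift(psf))
    # Regularized inverse filter: ~1/OTF where the OTF is strong,
    # attenuated where it is weak, to avoid amplifying noise.
    wiener = np.conj(otf) / (np.abs(otf) ** 2 + 1.0 / snr)
    pre = np.real(np.fft.ifft2(np.fft.fft2(image) * wiener))
    # Displayed intensities must stay in the physical range.
    return np.clip(pre, 0.0, 1.0)
```

The final clipping hints at the central trade-off in precompensation: inverse filtering pushes values outside the displayable range, so cancellation quality has to be balanced against contrast loss.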