978 results for open video tools
Abstract:
This research discusses experiences from a four-year school research study conducted in cooperation with teachers from four municipalities in Dalarna. The aim of the research was to examine teachers' professional development when they participated in collaborative discussions based on video recordings and video-edited material from specific lessons in their own practice. The study had two foci: one was to investigate methods and tools that teachers can use to develop their ability to assess their students while working on multimodal tasks; the other was to examine how video can be used by teachers wanting to obtain knowledge about assessing students. The study is based on several theories about how teachers collaborate to create new knowledge. The first is the design-theoretical approach, in which visual ethnography and a semiotic approach help to problematize the use and mixture of different modes. A basic assumption of this framework is that meanings are made and communicated in mathematics through a wide range of semiotic modes. Because video was an essential tool in the research, framework theories concerning visual ethnography, video documentation, and individuals as reflective practitioners were also needed. The findings can be divided into the following themes: the use of tasks for assessment, collaborative discussions, equipment, and ethical dilemmas. Collaborative discussions were evaluated as a meaningful way of sharing knowledge. The use of video recordings in association with these discussions raised important ethical issues. Working with the assessment framework was of great interest to the teachers, but it took a lot of time from their ordinary work. In this way, the project highlighted more general aspects of school development. The research also concerns teachers' use of collaborative discussions in assessment work, multimodal tasks in mathematics, and video as a research tool in general.
Abstract:
The Open Provenance Model is a model of provenance that is designed to meet the following requirements: (1) To allow provenance information to be exchanged between systems, by means of a compatibility layer based on a shared provenance model. (2) To allow developers to build and share tools that operate on such a provenance model. (3) To define provenance in a precise, technology-agnostic manner. (4) To support a digital representation of provenance for any 'thing', whether produced by computer systems or not. (5) To allow multiple levels of description to coexist. (6) To define a core set of rules that identify the valid inferences that can be made on provenance representations. This document contains the specification of the Open Provenance Model (v1.1), resulting from a community effort to achieve interoperability in the Provenance Challenge series.
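As a rough illustration of what "tools that operate on such a provenance model" might look like, the minimal sketch below represents a provenance graph with artifact and process nodes connected by typed causal edges (such as used and wasGeneratedBy). It is an assumption-laden toy data structure, not the normative OPM serialization; all identifiers are hypothetical.

```python
from dataclasses import dataclass, field

# Minimal illustrative provenance graph loosely modeled on OPM-style concepts:
# artifacts and processes as nodes, causal dependencies as typed edges.

@dataclass(frozen=True)
class Node:
    node_id: str
    kind: str  # "artifact" or "process"

@dataclass(frozen=True)
class Edge:
    relation: str  # e.g. "used", "wasGeneratedBy"
    source: str    # effect node id
    target: str    # cause node id

@dataclass
class ProvenanceGraph:
    nodes: dict = field(default_factory=dict)
    edges: list = field(default_factory=list)

    def add_node(self, node_id, kind):
        self.nodes[node_id] = Node(node_id, kind)

    def add_edge(self, relation, source, target):
        self.edges.append(Edge(relation, source, target))

    def derived_from(self, artifact_id):
        """Follow 'wasGeneratedBy' then 'used' edges one step back to infer
        which artifacts an artifact was derived from."""
        processes = [e.target for e in self.edges
                     if e.relation == "wasGeneratedBy" and e.source == artifact_id]
        return {e.target for e in self.edges
                if e.relation == "used" and e.source in processes}

# Example: a plot generated by a script that used a raw data file.
g = ProvenanceGraph()
g.add_node("raw.csv", "artifact")
g.add_node("plot.png", "artifact")
g.add_node("make_plot", "process")
g.add_edge("used", "make_plot", "raw.csv")
g.add_edge("wasGeneratedBy", "plot.png", "make_plot")
print(g.derived_from("plot.png"))  # {'raw.csv'}
```

A rule like derived_from is the kind of valid inference the model's core rules are meant to pin down precisely.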
Abstract:
Videos are among the main existing means of disseminating knowledge, information, and entertainment. However, despite their good quality and good public acceptance, current videos still restrict the viewer to a single point of view. Some studies are currently being developed with the goal of giving the viewer more freedom to decide from where they would like to watch the scene. The type of video produced by these initiatives has been generically called 3D video. This work proposes an architecture for capturing and displaying 3D video in real time using the color and depth information of the scene, captured for each pixel of each video frame. Depth information can be obtained using 3D cameras, disparity extraction algorithms based on stereo, or with the help of structured light. From the depth information it is possible to compute new viewpoints of the scene using a 3D warping algorithm. Because 3D cameras were not available during this work, the proposed architecture was validated using a synthetic environment built with computer graphics techniques. This prototype was also used to analyze several computer vision algorithms that use stereoscopic images to extract scene depth in real time. The use of a controlled environment allowed a very thorough analysis of the quality of the depth maps produced by these algorithms, leading us to conclude that they are not yet appropriate for applications that require real-time 3D video capture.
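To make the 3D warping step concrete, the sketch below shows the usual per-pixel recipe: back-project each pixel with its depth, apply a rigid transform to the new viewpoint, and re-project. It is a minimal forward-warping sketch assuming a pinhole camera model with intrinsics K; the function name and parameters are illustrative, holes and occlusions are ignored.

```python
import numpy as np

def warp_to_new_view(depth, color, K, R, t):
    """Illustrative 3D warping (assumed pinhole model): back-project pixels
    using their depth, move them into the new camera frame (R, t), and
    re-project with the same intrinsics K. Forward warping only; no hole
    filling or occlusion handling."""
    h, w = depth.shape
    out = np.zeros_like(color)
    K_inv = np.linalg.inv(K)
    # Pixel grid in homogeneous coordinates (3 x N).
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T
    # Back-project to 3D camera coordinates using per-pixel depth.
    pts = K_inv @ pix * depth.reshape(1, -1)
    # Rigid transform into the new camera frame, then re-project.
    proj = K @ (R @ pts + t.reshape(3, 1))
    uv = (proj[:2] / np.clip(proj[2], 1e-6, None)).round().astype(int)
    valid = (uv[0] >= 0) & (uv[0] < w) & (uv[1] >= 0) & (uv[1] < h)
    out[uv[1, valid], uv[0, valid]] = color.reshape(-1, color.shape[-1])[valid]
    return out
```

In practice, splatting or backward warping with hole filling is used to avoid the gaps that this simple forward projection leaves.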
Abstract:
Image stitching is the process of joining several images to obtain a bigger view of a scene. It is used, for example, in tourism to transmit to the viewer the sensation of being in another place. I am presenting an inexpensive solution for automatic real-time video and image stitching with two web cameras as the video/image sources. The proposed solution relies on the usage of several markers in the scene as reference points for the stitching algorithm. The implemented algorithm is divided into four main steps: marker detection, camera pose determination (in reference to the markers), video/image sizing and 3D transformation, and image translation. Wii remote controllers are used to support several steps in the process. The built-in IR camera provides clean marker detection, which facilitates the camera pose determination. The only restriction in the algorithm is that the markers have to be in the field of view when capturing the scene. Several tests were made to evaluate the final algorithm. The algorithm is able to perform video stitching at a frame rate between 8 and 13 fps. The joining of the two videos/images is good, with minor misalignments in objects at the same depth as the markers; misalignments in the background and foreground are larger. The capture process is simple enough that anyone can perform a stitching with a very short explanation. Although real-time video stitching can be achieved by this affordable approach, there are a few shortcomings in the current version. For example, contrast inconsistency along the stitching line could be reduced by applying a color correction algorithm to every source video. In addition, misalignments in stitched images due to camera lens distortion could be eased by an optical correction algorithm. The work was developed in Apple's Quartz Composer, a visual programming environment. A library of extended functions was developed using Xcode tools, also from Apple.
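The thesis implements these steps in Quartz Composer with Wii-remote marker detection; as a language-agnostic illustration of the geometric core only, the sketch below uses OpenCV (an assumed substitute, not the author's toolchain) to estimate a homography from marker correspondences and warp one view into the other's frame.

```python
import cv2
import numpy as np

def stitch_pair(img_left, img_right, pts_left, pts_right):
    """Illustrative marker-based stitching: pts_left/pts_right are
    corresponding marker positions (N x 2 arrays, N >= 4) detected in each
    view. A homography maps the right image into the left image's frame,
    and the two are composited onto a shared canvas (no seam blending)."""
    H, _ = cv2.findHomography(np.float32(pts_right), np.float32(pts_left))
    h, w = img_left.shape[:2]
    canvas = cv2.warpPerspective(img_right, H, (w * 2, h))
    canvas[:h, :w] = img_left  # simple overwrite; a real system would blend
    return canvas
```

A color-correction pass along the seam, as suggested in the abstract, would be applied to the canvas after this compositing step.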
Abstract:
The Finite Element Method is a well-known technique, extensively applied in different areas. Studies using the Finite Element Method (FEM) are targeted at improving cardiac ablation procedures. For such simulations, the finite element meshes should take into account the size and histological features of the target structures. However, some methods and tools used to generate meshes of human body structures are still limited, due to non-detailed models, non-trivial preprocessing, or, above all, restrictive conditions of use. In this paper, alternatives are demonstrated for solid modeling and for the automatic generation of highly refined tetrahedral meshes, with quality comparable to other studies focused on mesh generation. The innovations presented here are strategies to integrate Open Source Software (OSS). The chosen techniques and strategies are presented and discussed, considering cardiac structures as a first application context. © 2013 E. Pavarino et al.
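The abstract does not enumerate which open-source tools are integrated; purely as one assumed example of the kind of scripted, automatic tetrahedral meshing being described, the sketch below uses the Gmsh Python API with a placeholder solid standing in for segmented cardiac anatomy.

```python
# Illustrative sketch only: Gmsh (via its Python API) is used here as one
# example of open-source, scripted tetrahedral mesh generation from a solid.
import gmsh

gmsh.initialize()
gmsh.model.add("cardiac_demo")

# Stand-in geometry: a sphere as a placeholder for an imported cardiac solid
# (a real pipeline would load segmented anatomy, e.g. a STEP or STL model).
gmsh.model.occ.addSphere(0, 0, 0, 10.0)
gmsh.model.occ.synchronize()

# Control refinement so element size matches the structures of interest.
gmsh.option.setNumber("Mesh.MeshSizeMax", 1.0)

gmsh.model.mesh.generate(3)   # 3 = volumetric (tetrahedral) mesh
gmsh.write("cardiac_demo.msh")
gmsh.finalize()
```

The same script-level control over element size is what lets a mesh be tuned to the histological scale of a target structure before FEM simulation.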
Abstract:
This paper proposes a rank aggregation framework for video multimodal geocoding. Textual and visual descriptions associated with videos are used to define ranked lists. These ranked lists are later combined, and the resulting ranked list is used to define appropriate locations for videos. An architecture that implements the proposed framework is designed. In this architecture, there are specific modules for each modality (e.g., textual and visual) that can be developed and evolved independently. Another component is a data fusion module responsible for seamlessly combining the ranked lists defined for each modality. We have validated the proposed framework in the context of the MediaEval 2012 Placing Task, whose objective is to automatically assign geographical coordinates to videos. The results obtained show how our multimodal approach improves the geocoding results when compared to methods that rely on a single modality (either textual or visual descriptors). We also show that the proposed multimodal approach yields results comparable to the best submissions to the Placing Task in 2012, using no extra information besides the available development/training data. Another contribution of this work is the proposal of a new effectiveness evaluation measure. The proposed measure is based on distance scores that summarize how effective a designed/tested approach is, considering its overall result for a test dataset. © 2013 Springer Science+Business Media New York.
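The abstract does not state which fusion formula the data fusion module uses; as one common way such per-modality ranked lists can be combined, the sketch below shows reciprocal rank fusion. The candidate locations and the choice of formula are illustrative assumptions, not the paper's method.

```python
def reciprocal_rank_fusion(ranked_lists, k=60):
    """Combine several ranked lists into one: each item accumulates
    1 / (k + rank) from every list in which it appears, and items are
    re-sorted by the accumulated score (higher is better)."""
    scores = {}
    for ranking in ranked_lists:
        for rank, item in enumerate(ranking, start=1):
            scores[item] = scores.get(item, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical ranked candidate locations for one video, per modality.
textual_ranking = ["paris", "lyon", "berlin", "rome"]
visual_ranking = ["lyon", "paris", "rome", "madrid"]
print(reciprocal_rank_fusion([textual_ranking, visual_ranking]))
```

Keeping the fusion step independent of how each ranking was produced is what lets the textual and visual modules evolve separately, as the architecture intends.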
Abstract:
The growing use of telecommunication services, especially wireless ones, has demanded the adoption of new network standards that offer high transmission rates and reach a larger number of users. In this sense, the IEEE 802.16 standard, on which WiMAX is based, emerges as a potential technology for providing broadband in the next generation of wireless networks, mainly because it natively offers Quality of Service (QoS) for voice, data, and video flows. Video-based applications have grown strongly in recent years; in 2011, this type of content was expected to exceed 50% of all traffic originating from mobile devices. Video applications have a strong appeal to the end user, who should in fact be the one to judge the level of quality received. Therefore, new forms of performance evaluation are needed that take the user's perception into account, complementing traditional techniques based only on network aspects (QoS). In this sense, performance evaluation based on Quality of Experience (QoE) has emerged, in which the end user's assessment of the application is the main parameter measured. The results of QoE investigations can be used as an extension of the traditional QoS methods, while providing information about the delivery of multimedia services from the user's point of view. Examples of control mechanisms that could be included in QoE-aware networks are new routing approaches, base station selection processes, and traffic conditioning. The two evaluation methodologies are complementary and, if used in combination, can produce a more robust evaluation; however, the large amount of information makes this combination difficult. In this context, the main objective of this dissertation is to create a methodology for predicting video quality in WiMAX networks using a combination of simulations and Computational Intelligence (CI) techniques. From QoS and QoE parameters obtained through the simulations, the future behavior of the video is predicted using Artificial Neural Networks (ANNs). While simulations offer a range of options, such as extrapolating scenarios to imitate real-world situations, CI techniques speed up the analysis of the results so that predictions of future behavior, correlations, and the like can be made. In this work, ANNs were chosen because they are the most widely used technique for behavior prediction, as proposed in this dissertation.
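To illustrate the prediction step described above, the sketch below maps QoS measurements to a perceived-quality score with a small neural network. It is a hedged stand-in: the feature set (delay, jitter, packet loss), the synthetic training data, and the use of scikit-learn's MLPRegressor are assumptions for illustration, not the dissertation's simulation setup or network architecture.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# Hypothetical QoS features per simulated video flow: delay (ms), jitter (ms),
# packet loss (%); the target is a MOS-like subjective quality score (1-5).
X = rng.uniform([10, 0, 0], [200, 50, 10], size=(500, 3))
y = 5.0 - 0.01 * X[:, 0] - 0.02 * X[:, 1] - 0.2 * X[:, 2] + rng.normal(0, 0.1, 500)

# Small multilayer perceptron standing in for the ANN-based predictor.
model = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0)
model.fit(X, np.clip(y, 1.0, 5.0))

# Predict quality for a new simulated condition: 80 ms delay, 5 ms jitter, 1% loss.
print(model.predict([[80.0, 5.0, 1.0]]))
```

In the combined methodology, the training pairs would come from simulation traces (QoS) aligned with QoE scores rather than from a synthetic formula as above.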
Abstract:
The Finite Element Method (FEM) is a numerical solution technique applied in different areas, such as the simulations used in studies to improve cardiac ablation procedures. For this purpose, the meshes should reflect the size and histological features of the structures of interest. Some methods and tools used to generate tetrahedral meshes are limited mainly by their conditions of use. In this paper, the integration of Open Source Software is presented as an alternative for solid modeling and automatic mesh generation. To demonstrate its efficiency, cardiac structures were considered as a first application context: atria, ventricles, valves, arteries, and pericardium. The proposed method makes it feasible to obtain refined meshes in an acceptable time and with the quality required for FEM simulations.
Abstract:
This work focuses on the study of the MPEG video compression standards. To this end, a study was undertaken starting from the basics of digital video, addressing the components necessary for understanding the tools used by the MPEG video coding standards. The Moving Picture Experts Group (MPEG) was formed in the late 1980s by a group of experts in order to create international standards for encoding and decoding audio and video. This paper discusses the techniques present in the MPEG video compression standards, as well as their evolution. The MPEG-1, MPEG-2, MPEG-4, and H.264 (MPEG-4 Part 10) standards are described; the last two are presented with more emphasis because they are present in most modern video technologies, such as HDTV broadcasts.
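One tool shared by all of these standards is block-based motion-compensated prediction. The sketch below is an illustrative, brute-force block-matching motion search (not taken from the paper, and far simpler than the fast searches real encoders use) showing the idea of finding, for each block, the displacement in the reference frame that minimizes the residual.

```python
import numpy as np

def block_motion_search(ref, cur, block=16, radius=8):
    """Exhaustive block matching: for each block of the current frame, find
    the displacement within +/-radius in the reference frame that minimizes
    the sum of absolute differences (SAD). Frames are 2D grayscale arrays."""
    h, w = cur.shape
    vectors = {}
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            target = cur[by:by + block, bx:bx + block].astype(int)
            best, best_sad = (0, 0), np.inf
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    y, x = by + dy, bx + dx
                    if 0 <= y <= h - block and 0 <= x <= w - block:
                        sad = np.abs(ref[y:y + block, x:x + block].astype(int)
                                     - target).sum()
                        if sad < best_sad:
                            best_sad, best = sad, (dy, dx)
            vectors[(by, bx)] = best  # motion vector for this block
    return vectors
```

The residual left after this prediction is what the transform and entropy-coding stages of the standards then compress.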
Abstract:
Wild bearded capuchins, Cebus libidinosus, in Fazenda Boa Vista, Brazil, crack tough palm nuts using hammer stones. We analysed the contribution of intrinsic factors (body weight, behaviour), size of the nuts and the anvil surface (flat or pit) to the efficiency of cracking. We provided capuchins with local palm nuts and a single hammer stone at an anvil. From video we scored the capuchins' position and actions with the nut prior to each strike, and the outcomes of each strike. The most efficient capuchin opened 15 nuts per 100 strikes (6.6 strikes per nut). The least efficient capuchin that succeeded in opening a nut opened 1.32 nuts per 100 strikes (more than 75 strikes per nut). Body weight and diameter of the nut best predicted whether a capuchin would crack a nut on a given strike. All the capuchins consistently placed nuts into pits. To provide an independent analysis of the effect of placing the nut into a pit, we filmed an adult human cracking nuts on the same anvil using the same stone. The human displaced the nut on proportionally fewer strikes when he placed it into a pit rather than on a flat surface. Thus the capuchins placed the nut in a more effective location on the anvil to crack it. Nut cracking as practised by bearded capuchins is a striking example of a plastic behaviour where costs and benefits vary enormously across individuals, and where efficiency requires years to attain. (C) 2009 The Association for the Study of Animal Behaviour. Published by Elsevier Ltd. All rights reserved.