974 results for Video recordings - Production and direction
Abstract:
In Brazil, the pineapple plant, Ananas comosus (L.), is cultivated in most regions, with 'Pérola' being the most widely planted cultivar. Despite its hardy appearance, in commercial production this bromeliad demands rigorous cultural and phytosanitary practices to prevent problems such as the wilt associated with the mealybug Dysmicoccus brevipes Cockerell (1893) (Hemiptera: Pseudococcidae), whose production losses in susceptible cultivars can exceed 80% (SANCHES, 2005). The domestic market remains the main target of pineapple growers, and the purchase or sale of planting material among growers is a very common practice that spreads this insect from one property, or one region, to another. The Integrated Pineapple Production System is a practice that supports growers in meeting the consumer market's growing demands for safe food. The system is based on good agricultural practices, which translate into valuing people, conserving the environment (soil and water), improving the quality of life of rural producers, and respecting labor legislation and worker safety.
Abstract:
This paper consists of two major parts. First, we present the outline of a simple approach to a very-low-bandwidth video-conferencing system relying on an example-based hierarchical image compression scheme. In particular, we discuss the use of example images as a model, the number of required examples, faces as a class of semi-rigid objects, a hierarchical model based on decomposition into different time-scales, and the decomposition of face images into patches of interest. In the second part, we present several algorithms for image processing and animation as well as experimental evaluations. Among the original contributions of this paper is an automatic algorithm for pose estimation and normalization. We also review and compare different algorithms for finding the nearest neighbors of a new input in a database, as well as a generalized algorithm for blending patches of interest in order to synthesize new images. Finally, we outline the possible integration of several algorithms to illustrate a simple model-based video-conferencing system.
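To make the example-based idea concrete, here is a minimal sketch (not the paper's implementation) of the nearest-neighbor lookup and patch-blending steps: a new pose-normalized patch is matched against a database of stored example patches and synthesized as a distance-weighted blend of its neighbors. The function names, patch size, and inverse-distance weighting are illustrative assumptions.

```python
# Illustrative sketch only: names (patch_db, find_nearest, blend_patches) and the
# weighting scheme are assumptions, not details taken from the paper.
import numpy as np

def find_nearest(query, patch_db, k=3):
    """Return indices and distances of the k database patches closest to `query`.
    `query` is an (h, w) grayscale patch; `patch_db` is (n, h, w)."""
    diffs = patch_db.reshape(len(patch_db), -1) - query.reshape(1, -1)
    dists = np.linalg.norm(diffs, axis=1)
    idx = np.argsort(dists)[:k]
    return idx, dists[idx]

def blend_patches(patches, dists, eps=1e-6):
    """Blend neighbor patches with weights inversely proportional to their distances."""
    weights = 1.0 / (dists + eps)
    weights /= weights.sum()
    return np.tensordot(weights, patches, axes=1)

# Usage: synthesize a patch for a new input from its nearest stored examples.
rng = np.random.default_rng(0)
patch_db = rng.random((500, 16, 16))   # stand-in for a database of example face patches
query = rng.random((16, 16))           # stand-in for a pose-normalized input patch
idx, dists = find_nearest(query, patch_db, k=3)
synthesized = blend_patches(patch_db[idx], dists)
```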
Abstract:
Passive monitoring of large sites typically requires coordination between multiple cameras, which in turn requires methods for automatically relating events between distributed cameras. This paper tackles the problem of self-calibration of multiple cameras which are very far apart, using feature correspondences to determine the camera geometry. The key problem is finding such correspondences. Since the camera geometry and photometric characteristics vary considerably between images, one cannot use brightness and/or proximity constraints. Instead we apply planar geometric constraints to moving objects in the scene in order to align the scene's ground plane across multiple views. We do not assume synchronized cameras, and we show that enforcing geometric constraints enables us to align the tracking data in time. Once we have recovered the homography which aligns the planar structure in the scene, we can compute from the homography matrix the 3D position of the plane and the relative camera positions. This in turn enables us to recover a homography matrix which maps the images to an overhead view. We demonstrate this technique in two settings: a controlled lab setting where we test the effects of errors in internal camera calibration, and an uncontrolled, outdoor setting in which the full procedure is applied to external camera calibration and ground plane recovery. In spite of noise in the internal camera parameters and image data, the system successfully recovers both planar structure and relative camera positions in both settings.
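The core geometric step can be illustrated with a short sketch, under stated assumptions: OpenCV is used for the estimation, and the correspondences below are synthetic stand-ins for tracked ground-plane points, not the paper's data. A planar homography is fit robustly to the correspondences, and with known intrinsics it can be decomposed into a relative camera pose and plane normal (up to scale).

```python
# Illustrative sketch: synthetic correspondences and hard-coded intrinsics stand in
# for real tracking data and calibrated cameras.
import cv2
import numpy as np

# Ground-plane points (e.g., tracked object footprints) seen in camera A.
pts_a = np.array([[100, 200], [150, 210], [300, 260],
                  [320, 400], [420, 380], [500, 300]], dtype=np.float32)

# For this sketch, generate camera B's observations from a known planar homography.
H_true = np.array([[0.90, 0.05, 15.0],
                   [-0.03, 0.95, -10.0],
                   [1e-4, 0.00, 1.0]])
hom = (H_true @ np.c_[pts_a, np.ones(len(pts_a))].T).T
pts_b = (hom[:, :2] / hom[:, 2:3]).astype(np.float32)

# Robustly estimate the homography aligning the ground plane across the two views.
H_ab, inlier_mask = cv2.findHomography(pts_a, pts_b, cv2.RANSAC, 3.0)

# With known intrinsics K, the homography decomposes into relative rotation,
# translation, and the plane normal (each recovered up to scale).
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
_, rotations, translations, normals = cv2.decomposeHomographyMat(H_ab, K)

# A frame could then be rectified toward an overhead-style view with a suitable
# homography, e.g.: overhead = cv2.warpPerspective(frame, H_overhead, (width, height))
```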
Abstract:
This memo describes the initial results of a project to create a self-supervised algorithm for learning object segmentation from video data. Developmental psychology and computational experience have demonstrated that the motion segmentation of objects is a simpler, more primitive process than the detection of object boundaries by static image cues. Therefore, motion information provides a plausible supervision signal for learning the static boundary detection task and for evaluating performance on a test set. A video camera and previously developed background subtraction algorithms can automatically produce a large database of motion-segmented images at minimal cost. The purpose of this work is to use the information in such a database to learn how to detect the object boundaries in novel images using static information, such as color, texture, and shape. This work was funded in part by the Office of Naval Research contract #N00014-00-1-0298, in part by the Singapore-MIT Alliance agreement of 11/6/98, and in part by a National Science Foundation Graduate Student Fellowship.
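A rough sketch of how such a motion-segmented database might be collected follows; the memo's own background-subtraction algorithms are not specified here, so OpenCV's MOG2 subtractor, the file name, and the filtering thresholds are illustrative stand-ins.

```python
# Illustrative sketch: collect (frame, motion mask) pairs as free supervision for
# learning static boundary detection. Parameters and video path are assumptions.
import cv2

cap = cv2.VideoCapture("scene.avi")            # any fixed-camera video
bg = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16, detectShadows=True)

samples = []                                   # (frame, motion mask) training pairs
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = bg.apply(frame)                     # 255 = moving foreground, 127 = shadow
    mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)[1]   # drop shadow pixels
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN,                # remove speckle noise
                            cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5)))
    if cv2.countNonZero(mask) > 500:           # keep frames with a sizeable moving object
        samples.append((frame, mask))

cap.release()
# `samples` now pairs each frame with a motion-derived segmentation that a static
# boundary detector (using color, texture, and shape cues) could be trained against.
```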