907 results for cameras and camera accessories
Abstract:
Acknowledgements: This work received funding from the Marine Alliance for Science and Technology for Scotland (MASTS) pooling initiative, and their support is gratefully acknowledged. MASTS is funded by the Scottish Funding Council (grant reference HR09011) and contributing institutions. We thank Joshua Lawrence and Niall Fallon for their assistance in collecting some of the video data.
Abstract:
Real-time data on key performance enablers in logistics warehouses are of growing importance, as they allow decision-makers to react instantly to alerts, deviations and damage. Several technologies appear to be adequate data sources for collecting the required information. In the present research paper, the load status of a forklift's fork is recognized using both a sensor-based and a camera-based approach. A comparison of initial experimental results indicates which direction is the more promising for further research.
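As a rough illustration of the sensor-based branch, a single distance sensor looking along the fork can already separate "loaded" from "empty". This is only a sketch; the baseline and tolerance values below are hypothetical, not figures from the study.

```python
def fork_loaded(distance_mm: float, empty_baseline_mm: float = 500.0,
                tolerance_mm: float = 50.0) -> bool:
    """Infer load status from a distance sensor on the fork carriage.

    A reading much shorter than the empty-fork baseline suggests a pallet
    is present. Baseline and tolerance are illustrative values only.
    """
    return distance_mm < empty_baseline_mm - tolerance_mm

# Illustrative readings: pallet close to the sensor vs. an empty fork.
print(fork_loaded(120.0))  # True  (loaded)
print(fork_loaded(495.0))  # False (empty)
```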
Abstract:
The stylistic strategies, in particular those concerning camera placement and movement, of The Shield (FX, 2002-08) seem to fit directly into an aesthetic tradition developed by US cop dramas like Hill Street Blues (NBC, 1981-87), Homicide: Life on the Street (NBC, 1993-99) and NYPD Blue (ABC, 1993-2005). In these precinct dramas, decisions concerning the spatial arrangement of camera and performer foreground a desire to present and react to action while it is happening, and with a minimum of apparent construction. As Jonathan Bignell (2009) has argued, the intimacy and immediacy of this stylistic approach, which has at its core an attempt at a documentary-like realism, is important to the police drama as a genre; these are also tendencies that have been taken as specific characteristics of television more generally. I explore how The Shield develops this tradition of a reactive camera style in its strategy of shooting with two cameras rather than one, with specific attention to how this shapes the presentation of performance. Through a detailed examination of the relationship between performer and camera(s), the chapter considers the way the series establishes access to the fictional world, which is crucial to the manner of police investigation central to its drama, and the impact of this on how we engage with performance. The cameras' placement appears to balance various impulses, including the demands of attending to an ensemble cast, a spontaneous performance style, and action that is physically dynamic and involving. In a series that makes stylistic decisions around presentation of the body on-screen deliberately close yet obstructive, involving yet fleeting, the chapter explores the effect of this on the watching experience.
Abstract:
Activities involving fauna monitoring are usually limited by a lack of resources; therefore, choosing a proper and efficient methodology is fundamental to maximizing the cost-benefit ratio. Both direct and indirect methods can be used to survey mammals, but the latter are preferred due to the difficulty of sighting and/or capturing the individuals, besides being cheaper. We compared the performance and costs of two methods for surveying medium- and large-sized mammals: track plot recording and camera trapping. At Jatai Ecological Station (21°31'15" S, 47°34'42" W, Brazil) we installed ten camera traps along a dirt road, directly in front of ten track plots, and monitored them for 10 days. We cleaned the plots, adjusted the cameras, and noted the recorded species daily. Records taken by both methods showed they sample the local richness in different ways (Wilcoxon, T = 231; p < 0.01). The track plot method performed better at registering individuals, whereas camera trapping provided records that permitted more accurate species identification. The type of infra-red sensor camera used showed a strong bias towards individual body mass (R² = 0.70; p = 0.017), and the variable expenses of this method in a 10-day survey were estimated to be about 2.04 times higher than those of the track plot method; in the long run, however, camera trapping becomes cheaper than track plot recording. In conclusion, track plot recording is good enough for quick surveys under a limited budget, while camera trapping is best for precise species identification and the investigation of species details, performing better for large animals. When used together, these methods can be complementary.
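The cost trade-off described above (camera trapping more expensive over a short survey, but cheaper in the long run) can be illustrated with a simple break-even calculation. All cost figures below are hypothetical placeholders, not the study's estimates:

```python
def break_even_day(fixed_cam, daily_cam, fixed_track, daily_track):
    """First survey day on which cumulative camera-trap cost drops below
    cumulative track-plot cost (toy linear cost model)."""
    if daily_cam >= daily_track:
        raise ValueError("camera trapping never becomes cheaper")
    day = 1
    while fixed_cam + daily_cam * day >= fixed_track + daily_track * day:
        day += 1
    return day

# Hypothetical costs: cameras are expensive up front but cheap to run,
# track plots are cheap up front but need daily labour.
print(break_even_day(fixed_cam=2000, daily_cam=10,
                     fixed_track=100, daily_track=60))  # 39
```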
Abstract:
Digital still cameras capable of filming short video clips are readily available, but the quality of these recordings for telemedicine has not been reported. We performed a blinded study using four commonly available digital cameras. A simulated patient with a hemiplegic gait pattern was filmed by the same videographer in an identical, brightly lit indoor setting. Six neurologists viewed the blinded video clips on their PC and comparisons were made between cameras, between video clips recorded with and without a tripod, and between video clips filmed on high- or low-quality settings. Use of a tripod had a smaller effect than expected, while images taken on a high-quality setting were strongly preferred to those taken on a low-quality setting. Although there was some variability in video quality between selected cameras, all were of sufficient quality to identify physical signs such as gait and tremor. Adequate-quality video clips of movement disorders can be produced with low-cost cameras and transmitted by email for teleneurology purposes.
Abstract:
In this article we present an approach to object tracking handover in a network of smart cameras, based on self-interested autonomous agents that exchange responsibility for tracking objects in a market mechanism in order to maximise their own utility. A novel ant-colony-inspired mechanism is used to learn the vision graph (that is, the camera neighbourhood relations) at runtime, which may then be used to optimise communication between cameras. The key benefits of our completely decentralised approach are, on the one hand, generating the vision graph online, enabling efficient deployment in unknown scenarios and camera network topologies, and, on the other hand, relying only on local information, which increases the robustness of the system. Since our market-based approach does not rely on a priori topology information, the need for any multicamera calibration can be avoided. We have evaluated our approach both in a simulation study and in a network of real distributed smart cameras.
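The ant-colony-style vision-graph learning can be sketched as pheromone bookkeeping on camera pairs: each successful handover reinforces an edge, and all edges decay so stale relations fade. The deposit and decay rates below are illustrative placeholders, not the paper's parameters:

```python
class VisionGraph:
    """Pheromone-style vision-graph learning (illustrative sketch)."""

    def __init__(self, decay=0.9, deposit=1.0):
        self.decay = decay
        self.deposit = deposit
        self.edges = {}  # frozenset({cam_a, cam_b}) -> pheromone level

    def handover(self, cam_a, cam_b):
        # A successful tracking handover reinforces the pair's edge.
        key = frozenset((cam_a, cam_b))
        self.edges[key] = self.edges.get(key, 0.0) + self.deposit

    def tick(self):
        # Periodic decay: neighbourhood relations that stop being used fade.
        for key in self.edges:
            self.edges[key] *= self.decay

    def neighbours(self, cam, threshold=0.5):
        return sorted(other
                      for key, level in self.edges.items() if cam in key
                      for other in key - {cam} if level > threshold)

g = VisionGraph()
g.handover("cam1", "cam2")
g.handover("cam1", "cam2")
g.tick()
print(g.neighbours("cam1"))  # ['cam2']
```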
Abstract:
This work explores the use of statistical methods in describing and estimating camera poses, as well as the information feedback loop between camera pose and object detection. Surging development in robotics and computer vision has pushed the need for algorithms that infer, understand, and utilize information about the position and orientation of the sensor platforms when observing and/or interacting with their environment.
The first contribution of this thesis is the development of a set of statistical tools for representing and estimating the uncertainty in object poses. A distribution for representing the joint uncertainty over multiple object positions and orientations is described, called the mirrored normal-Bingham distribution. This distribution generalizes both the normal distribution in Euclidean space, and the Bingham distribution on the unit hypersphere. It is shown to inherit many of the convenient properties of these special cases: it is the maximum-entropy distribution with fixed second moment, and there is a generalized Laplace approximation whose result is the mirrored normal-Bingham distribution. This distribution and approximation method are demonstrated by deriving the analytical approximation to the wrapped-normal distribution. Further, it is shown how these tools can be used to represent the uncertainty in the result of a bundle adjustment problem.
Another application of these methods is illustrated as part of a novel camera pose estimation algorithm based on object detections. The autocalibration task is formulated as a bundle adjustment problem using prior distributions over the 3D points to enforce the objects' structure and their relationship with the scene geometry. This framework is very flexible and enables the use of off-the-shelf computational tools to solve specialized autocalibration problems. Its performance is evaluated using a pedestrian detector to provide head and foot location observations, and it proves much faster and potentially more accurate than existing methods.
Finally, the information feedback loop between object detection and camera pose estimation is closed by utilizing camera pose information to improve object detection in scenarios with significant perspective warping. Methods are presented that allow the inverse perspective mapping traditionally applied to images to be applied instead to features computed from those images. For the special case of HOG-like features, which are used by many modern object detection systems, these methods are shown to provide substantial performance benefits over unadapted detectors while achieving real-time frame rates, orders of magnitude faster than comparable image warping methods.
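The idea of applying the inverse perspective mapping to feature coordinates rather than to the image itself can be sketched as a plain homography applied to feature-cell centres. The matrix below is an arbitrary stand-in for the calibrated ground-plane homography a real system would derive from the camera pose:

```python
import numpy as np

def warp_points(H, pts):
    """Apply a homography to an array of (x, y) points: the coordinate-level
    analogue of warping the image itself."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])  # to homogeneous coords
    out = pts_h @ H.T
    return out[:, :2] / out[:, 2:3]                   # back to Cartesian

# Illustrative homography (a pure scaling), standing in for the
# perspective rectification a real calibration would provide.
H = np.array([[2.0, 0.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 1.0]])
cells = np.array([[1.0, 1.0], [3.0, 2.0]])  # feature-cell centres
print(warp_points(H, cells))  # [[2. 2.] [6. 4.]]
```

Warping a handful of cell coordinates in this way is far cheaper than resampling every pixel, which is the intuition behind the reported speedup over image-warping methods.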
The statistical tools and algorithms presented here are especially promising for mobile cameras, providing the ability to autocalibrate and adapt to the camera pose in real time. In addition, these methods have wide-ranging potential applications in diverse areas of computer vision, robotics, and imaging.
Abstract:
Universidade Estadual de Campinas. Faculdade de Educação Física
Abstract:
BACKGROUND AND STUDY AIMS Colon capsule endoscopy (CCE) was developed for the evaluation of colorectal pathology. In this study, our aim was to assess whether a dual-camera analysis using CCE allows better evaluation of the whole gastrointestinal (GI) tract than a single-camera analysis. PATIENTS AND METHODS We included 21 patients (12 males, mean age 56.20 years) submitted for a CCE examination. After standard colon preparation, the colon capsule endoscope (PillCam Colon™) was swallowed after being reactivated from its "sleep" mode. Four physicians performed the analysis: two reviewed both video streams at the same time (dual-camera analysis); one analyzed images from one side of the device ("camera 1"); and the other reviewed the opposite side ("camera 2"). We compared the number of findings from different parts of the entire GI tract and the level of agreement among reviewers. RESULTS A complete evaluation of the GI tract was possible in all patients. Dual-camera analysis provided 16% and 5% more findings than camera 1 and camera 2 analysis, respectively. Overall agreement was 62.7% (kappa = 0.44, 95% CI: 0.373-0.510). Esophageal (kappa = 0.611) and colorectal (kappa = 0.595) findings had a good level of agreement, while the small bowel (kappa = 0.405) showed moderate agreement. CONCLUSION The use of dual-camera analysis with CCE for the evaluation of the GI tract is feasible and detects more abnormalities than single-camera analysis.
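The inter-reader agreement figures above are Cohen's kappa, which corrects raw agreement for the agreement expected by chance. A minimal sketch of the computation on hypothetical labels (not the study's data):

```python
def cohens_kappa(a, b):
    """Cohen's kappa for two raters' categorical labels."""
    assert len(a) == len(b)
    n = len(a)
    cats = set(a) | set(b)
    po = sum(x == y for x, y in zip(a, b)) / n                 # observed agreement
    pe = sum((a.count(c) / n) * (b.count(c) / n) for c in cats)  # chance agreement
    return (po - pe) / (1 - pe)

# Toy labels for 10 segments reviewed by two readers (hypothetical data).
r1 = ["lesion", "normal", "normal", "lesion", "normal",
      "normal", "lesion", "normal", "normal", "normal"]
r2 = ["lesion", "normal", "lesion", "lesion", "normal",
      "normal", "normal", "normal", "normal", "normal"]
print(round(cohens_kappa(r1, r2), 3))  # 0.524
```

Here the readers agree on 8 of 10 segments (0.8 observed), but chance alone would produce 0.58 agreement, hence the lower kappa.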
Abstract:
A visual telepresence system has been developed at the University of Reading which utilizes eye tracking to adjust the horizontal orientation of the cameras and display system according to the convergence state of the operator's eyes. Slaving the cameras to the operator's direction of gaze enables the object of interest to be centered on the displays. The advantage of this is that the camera field of view may be decreased to maximize the achievable depth resolution. An active camera system requires an active display system if appropriate binocular cues are to be preserved. For some applications, which critically depend upon the veridical perception of an object's location and dimensions, it is imperative that the contribution of binocular cues to these judgements be ascertained, because they are directly influenced by camera and display geometry. Using the active telepresence system, we investigated the contribution of ocular convergence information to judgements of size, distance and shape. Participants performed an open-loop reach and grasp of the virtual object under reduced-cue conditions where the orientations of the cameras and the displays were either matched or unmatched. Inappropriate convergence information produced weak perceptual distortions and caused problems in fusing the images.
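The convergence geometry at issue follows from simple trigonometry: the vergence angle needed to fixate an object shrinks rapidly with distance, which is why mismatched camera and display convergence distorts distance judgements most for near objects. A sketch, assuming a typical 65 mm interocular separation (an illustrative value, not the study's):

```python
import math

def vergence_angle_deg(fixation_distance_m, interocular_m=0.065):
    """Ocular convergence angle for a fixation distance (thin geometry model).

    The cameras' convergence must reproduce this angle on the displays if
    binocular distance cues are to remain veridical.
    """
    return math.degrees(2 * math.atan(interocular_m / (2 * fixation_distance_m)))

print(round(vergence_angle_deg(0.5), 2))  # 7.44  (near object: several degrees)
print(round(vergence_angle_deg(5.0), 2))  # 0.74  (far object: under a degree)
```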
Abstract:
Automated virtual camera control has been widely used in animation and interactive virtual environments. We have developed a multiple-sparse-camera free-view video system prototype that allows users to control the position and orientation of a virtual camera, enabling the observation of a real scene in three dimensions (3D) from any desired viewpoint. Automatic camera control can be activated to follow objects selected by the user. Our method combines a simple geometric model of the scene composed of planes (virtual environment), augmented with visual information from the cameras and pre-computed tracking information of moving targets, to generate novel perspective-corrected 3D views of the virtual camera and moving objects. To achieve real-time rendering performance, view-dependent texture-mapped billboards are used to render the moving objects at their correct locations, and foreground masks are used to remove the moving objects from the projected video streams. The current prototype runs on a PC with a common graphics card and can generate virtual 2D views from three cameras of resolution 768 x 576, with several moving objects, at about 11 fps.
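Placing a billboard at a tracked object's location amounts to projecting its 3D position into the virtual camera. A minimal pinhole-projection sketch; the camera parameters below are made up for illustration:

```python
import numpy as np

def project(K, R, t, X):
    """Project a 3D point into pixel coordinates with a pinhole camera model,
    as used to place a moving object's billboard in the virtual view."""
    x = K @ (R @ X + t)
    return x[:2] / x[2]

# Illustrative virtual camera: identity rotation, 2 m back along the optical
# axis, 800 px focal length, principal point (320, 240).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)
t = np.array([0.0, 0.0, 2.0])
print(project(K, R, t, np.array([0.5, 0.0, 2.0])))  # [420. 240.]
```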
Abstract:
Bilayer segmentation of live video in uncontrolled environments is an essential task for home applications in which the original background of the scene must be replaced, as in video chats or traditional videoconferencing. The main challenge in such conditions is to overcome the difficulties of problem situations (e.g., illumination changes, distracting events such as elements moving in the background, and camera shake) that may occur while the video is being captured. This paper presents a survey of segmentation methods for background-substitution applications, describes the main concepts, and identifies events that may cause errors. Our analysis shows that robust methods tend to rely on specific devices (multiple cameras, or sensors that generate depth maps) to aid the process. To achieve the same results using conventional devices (monocular video cameras), most current research relies on energy-minimization frameworks, in which temporal and spatial information is probabilistically combined with color and contrast information.
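For contrast with the energy-minimization frameworks the survey covers, the crudest possible bilayer segmenter, a per-pixel colour difference against a known background, can be sketched as follows. The threshold and frames are toy values; a real method would add the temporal, spatial and contrast terms described above:

```python
import numpy as np

def foreground_mask(frame, background, threshold=30.0):
    """Per-pixel colour-difference mask: a deliberately minimal baseline,
    brittle under illumination change and camera shake."""
    diff = np.linalg.norm(frame.astype(float) - background.astype(float), axis=-1)
    return diff > threshold

def substitute_background(frame, background, new_background, threshold=30.0):
    mask = foreground_mask(frame, background, threshold)
    out = new_background.copy()
    out[mask] = frame[mask]  # keep foreground pixels, replace the rest
    return out

# Toy 2x2 frames: one pixel differs from the known background.
bg  = np.zeros((2, 2, 3), dtype=np.uint8)
frm = bg.copy(); frm[0, 0] = [200, 50, 50]       # "person" pixel
new = np.full((2, 2, 3), 255, dtype=np.uint8)    # white replacement background
out = substitute_background(frm, bg, new)
print(out[0, 0])  # [200  50  50]  foreground kept
print(out[1, 1])  # [255 255 255]  background replaced
```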
Abstract:
Obesity is becoming an epidemic phenomenon in most developed countries. The fundamental cause of obesity and overweight is an energy imbalance between calories consumed and calories expended. It is essential to monitor everyday food intake for obesity prevention and management. Existing dietary assessment methods usually require manual recording and recall of food types and portions. The accuracy of the results largely relies on many uncertain factors such as the user's memory, food knowledge, and portion estimations. As a result, accuracy is often compromised. Accurate and convenient dietary assessment methods are still lacking and needed in both the general population and the research community. In this thesis, an automatic food intake assessment method using the cameras and inertial measurement units (IMUs) on smartphones was developed to help people foster a healthy lifestyle. With this method, users use their smartphones before and after a meal to capture images or videos around the meal. The smartphone recognizes the food items, calculates the volume of the food consumed, and provides the results to users. The technical objective is to explore the feasibility of image-based food recognition and image-based volume estimation. This thesis comprises five publications that address four specific goals of this work: (1) to develop a prototype system with existing methods in order to review the literature, find its drawbacks, and explore the feasibility of developing novel methods; (2) based on the prototype system, to investigate new food classification methods that improve recognition accuracy to a field-application level; (3) to design indexing methods for large-scale image databases to facilitate the development of new food image recognition and retrieval algorithms; (4) to develop novel, convenient, and accurate food volume estimation methods using only smartphones with cameras and IMUs. A prototype system was implemented to review existing methods.
An image feature detector and descriptor were developed, and a nearest-neighbor classifier was implemented to classify food items. A credit card marker method was introduced for metric-scale 3D reconstruction and volume calculation. To increase recognition accuracy, novel multi-view food recognition algorithms were developed to recognize regular-shaped food items. To further increase accuracy and make the algorithm applicable to arbitrary food items, new food features and new classifiers were designed. The efficiency of the algorithm was increased by developing a novel image indexing method for large-scale image databases. Finally, the volume calculation was enhanced by reducing the dependence on the marker and introducing IMUs. Sensor fusion techniques combining measurements from cameras and IMUs were explored to infer the metric scale of the 3D model as well as to reduce noise from these sensors.
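At its core, inferring metric scale from camera-IMU fusion amounts to comparing translation magnitudes over the same interval: the visual reconstruction gives motion in arbitrary units, the IMU in metres. The sketch below is a noise-free simplification of that idea, not the thesis's actual fusion method, and all numbers are hypothetical:

```python
import numpy as np

def metric_scale(t_visual, t_imu):
    """Recover the metric scale of an up-to-scale visual reconstruction by
    comparing the camera translation estimated from vision with the
    IMU-derived translation over the same interval (noise-free sketch)."""
    return np.linalg.norm(t_imu) / np.linalg.norm(t_visual)

# Hypothetical interval: vision says the phone moved 0.2 arbitrary units,
# the IMU says 0.1 m, so one visual unit corresponds to 0.5 m.
t_vis = np.array([0.2, 0.0, 0.0])
t_imu = np.array([0.1, 0.0, 0.0])
s = metric_scale(t_vis, t_imu)
print(s)                    # 0.5
print(round(s**3, 3))       # 0.125: volumes scale by s**3, so a 1 unit^3
                            # food model corresponds to 0.125 m^3
```

In practice both translations are noisy, which is why the thesis explores proper sensor fusion rather than this single-interval ratio.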
Abstract:
Introduction: Photography through a microscope is virtually identical to photography with an astronomical telescope. For years, the 35mm camera was the choice for microphotography, but we now live in a digital camera age. We describe a custom homemade adapter that can fit most cameras and microscopes. [See PDF for complete abstract]