999 results for Digital cameras
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
The aim of the study was to analyze the relationship between run-up spatial-temporal variables and ball velocity in dominant and non-dominant kicks, and to compare ball velocity between the contralateral limbs. Six futsal players (aged 13 and 14 years) participated in the study. The participants performed 4 maximal-velocity kicks of a stationary ball with each limb. Participants' movements were recorded by 4 digital cameras (120 Hz), and the Dvideow software was used for the kinematic procedures. The variables analyzed were: length and width of the penultimate and last steps before ball contact, distance of the support foot to the ball, run-up velocity, and ball velocity. The relationship between the spatial-temporal variables and ball velocity was analyzed by linear regression with ball velocity as the dependent variable, and Student's t test for paired samples was used to compare ball velocity between dominant and non-dominant kicks. For the dominant limb, ball velocity was predicted only by run-up velocity (16.7%), while for the non-dominant limb only the distance of the support foot to the ball was a predictive variable (11.9%). Ball velocity was greater for the dominant limb, and the run-up variables that predicted ball velocity differed between the dominant and non-dominant kicks.
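For readers unfamiliar with this kind of analysis, a minimal Python sketch of a comparable workflow is shown below; the run-up and ball-velocity values are hypothetical and this is not the study's actual data or code.

```python
# Illustrative sketch only: simple linear regression of ball velocity on a
# run-up variable, plus a paired t-test between dominant and non-dominant kicks.
# Variable names and data are hypothetical, not the study's dataset.
import numpy as np
from scipy import stats

# Hypothetical measurements (m/s): one value per kick trial
run_up_velocity = np.array([3.1, 3.4, 2.9, 3.6, 3.2, 3.5])
ball_velocity_dom = np.array([22.5, 23.1, 21.8, 24.0, 22.9, 23.6])
ball_velocity_nondom = np.array([19.8, 20.4, 19.1, 21.0, 20.2, 20.7])

# Simple linear regression: ball velocity (dependent) vs. run-up velocity
slope, intercept, r, p, se = stats.linregress(run_up_velocity, ball_velocity_dom)
print(f"R^2 = {r**2:.3f} (share of ball-velocity variance explained), p = {p:.3f}")

# Paired t-test comparing dominant vs. non-dominant ball velocity
t, p_paired = stats.ttest_rel(ball_velocity_dom, ball_velocity_nondom)
print(f"paired t = {t:.2f}, p = {p_paired:.3f}")
```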
Abstract:
The widespread availability of portable computing power and inexpensive digital cameras is opening up new possibilities for retailers. One example is in optical shops, where a number of systems exist that facilitate eyeglasses selection. These systems are all the more necessary as the market is saturated with an increasingly complex array of lenses, frames, coatings, tints, photochromic and polarizing treatments, etc. Research challenges encompass Computer Vision, Multimedia and Human-Computer Interaction. Cost factors are also important for widespread product acceptance. This paper describes a low-cost system that allows the user to visualize different spectacle models in live video. The user can also move the spectacles to adjust their position on the face. Experiments show the potential of the system.
Abstract:
Transmission electron microscopy has provided most of what is known about the ultrastructural organization of tissues, cells, and organelles. Due to tremendous advances in crystallography and magnetic resonance imaging, almost any protein can now be modeled at atomic resolution. To fully understand the workings of biological "nanomachines" it is necessary to obtain images of intact macromolecular assemblies in situ. Although the resolving power of electron microscopes is on the atomic scale, in biological samples artifacts introduced by aldehyde fixation, dehydration and staining, as well as section thickness, reduce it to a few nanometers. Cryofixation by high pressure freezing circumvents many of these artifacts, since it allows the vitrification of biological samples up to about 200 μm thick and immobilizes complex macromolecular assemblies in their native state in situ. To exploit the perfect structural preservation of frozen hydrated sections, sophisticated instruments are needed, e.g., high voltage electron microscopes equipped with precise goniometers that work at low temperature and digital cameras of high sensitivity and pixel count. With them, it is possible to generate high resolution tomograms, i.e., 3D views of subcellular structures. This review describes the theory and applications of the high pressure cryofixation methodology and compares its results with those of conventional procedures. Moreover, recent findings are discussed showing that molecular models of proteins can be fitted into the organellar ultrastructure depicted in images of frozen hydrated sections. High pressure freezing of tissue is the basis that may lead to precise models of macromolecular assemblies in situ, and thus to a better understanding of the function of complex cellular structures.
Abstract:
The instantaneous three-dimensional velocity field past a bioprosthetic heart valve was measured using tomographic particle image velocimetry (PIV). Two digital cameras were used together with a mirror setup to record PIV images from four different angles. Measurements were conducted in a transparent silicone phantom with a simplified geometry of the aortic root. The refractive indices of the silicone phantom and the working fluid were matched to minimize optical distortion between the flow field and the cameras. The silicone phantom of the aorta was integrated in a flow loop driven by a piston pump. Measurements were conducted for steady and pulsatile flow conditions. Results of the instantaneous, ensemble-averaged and phase-averaged flow fields are presented. The three-dimensional velocity field reveals a flow topology that can be related to features of the aortic valve prosthesis.
Abstract:
In this article, the authors examine the current status of the different elements that make up the landscape of the municipality of Olias del Rey in Toledo (Spain). A methodology is presented for the study of rural roads, farming activity and local hunting management. We used Geographic Information Technologies (GIT) to optimize spatial information, including the design of a Geographic Information System (GIS). For the acquisition of field data we used a "mobile mapping" vehicle equipped with GNSS, LiDAR, digital cameras and an odometer. The main objective is the integration and geovisualization of this geoinformation to provide a fundamental tool for rural planning and management.
Abstract:
The evolution of smartphones, all equipped with digital cameras, is driving a growing demand for ever more complex applications that need to rely on real-time computer vision algorithms. However, video signals are only increasing in size, whereas the performance of single-core processors has stagnated in the past few years. Consequently, new computer vision algorithms will need to be parallel in order to run on multiple processors and be computationally scalable. One of the most promising classes of processors nowadays can be found in graphics processing units (GPUs). These are devices offering a high degree of parallelism, excellent numerical performance and increasing versatility, which makes them interesting for scientific computing. In this thesis, we explore two computer vision applications whose high computational complexity precludes them from running in real time on traditional uniprocessors. However, we show that by parallelizing their subtasks and implementing them on a GPU, both applications attain their goal of running at interactive frame rates. In addition, we propose a technique for the fast evaluation of arbitrarily complex functions, specially designed for GPU implementation. First, we explore the application of depth-image-based rendering techniques to the unusual configuration of two convergent, wide-baseline cameras, in contrast to the narrow-baseline, parallel cameras usually used in 3D TV. Using a backward-mapping approach with a depth-inpainting scheme based on median filters, we show that these techniques are adequate for free-viewpoint video applications. We also show that referring depth information to a global reference system is ill-advised and should be avoided. Then, we propose a background subtraction system based on kernel density estimation techniques. These techniques are well suited to modelling complex scenes featuring multimodal backgrounds, but have seen little use due to their large computational and memory cost. The proposed system, implemented in real time on a GPU, features novel proposals for dynamic kernel bandwidth estimation for the background model, selective update of the background model, update of the positions of the reference samples of the foreground model using a multi-region particle filter, and automatic selection of regions of interest to reduce the computational cost. The results, evaluated on several databases and compared to other state-of-the-art algorithms, demonstrate the high quality and versatility of our proposal. Finally, we propose a general method for the approximation of arbitrarily complex functions using continuous piecewise linear functions, specially formulated for GPU implementation by leveraging the texture filtering units, normally unused for numerical computation. Our proposal includes a rigorous mathematical analysis of the approximation error as a function of the number of samples, as well as a method to obtain a quasi-optimal partition of the function's domain that minimizes the approximation error.
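As a rough illustration of the last point, the sketch below builds a continuous piecewise linear approximation of an arbitrary function from uniformly spaced samples and reports how the maximum error falls as the number of samples grows; it is a CPU-side stand-in (using NumPy interpolation) for the texture-filtering evaluation described in the thesis, with a test function chosen purely for illustration.

```python
# Illustrative sketch: approximate an arbitrary 1D function with a continuous
# piecewise linear interpolant built from N uniform samples, and report the
# maximum approximation error. The GPU method described in the thesis would
# evaluate a comparable interpolant with texture filtering units; here
# np.interp stands in for that hardware interpolation.
import numpy as np

def f(x):
    return np.exp(-x) * np.sin(4 * np.pi * x)   # arbitrary test function

domain = (0.0, 1.0)
x_dense = np.linspace(*domain, 10_000)          # dense grid to estimate error

for n_samples in (8, 16, 32, 64, 128):
    x_knots = np.linspace(*domain, n_samples)   # uniform partition of the domain
    y_knots = f(x_knots)                        # sampled function values
    approx = np.interp(x_dense, x_knots, y_knots)  # piecewise linear evaluation
    max_err = np.max(np.abs(approx - f(x_dense)))
    print(f"{n_samples:4d} samples -> max error {max_err:.2e}")
```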
Abstract:
We examined the feasibility of a low-cost, store-and-forward teledermatology service for general practitioners (GPs) in regional Queensland. Digital pictures and a brief case history were transmitted by email. A service coordinator carried out quality control checks and then forwarded these email messages to a consultant dermatologist. On receiving a clinical response from the dermatologist, the service coordinator returned the message to the referring GP. The aim was to provide advice to rural GPs within one working day. Over six months, 63 referrals were processed by the teledermatology service, covering a wide range of dermatological conditions. In the majority of cases the referring doctors were able to treat the condition after receipt of email advice from the dermatologist; however, in 10 cases (16%) additional images or biopsy results were requested because image quality was inadequate. The average time between a referral being received and clinical advice being provided to the referring GP was 46 hours. The number of referrals in the present study, 1.05 per month per site, was similar to that reported in other primary care studies. While the use of low-cost digital cameras and public email is feasible, there may be other issues, for example remuneration, that will militate against the widespread introduction of primary care teledermatology in Australia.
Abstract:
This article presents two novel approaches for incorporating sentiment prior knowledge into the topic model for weakly supervised sentiment analysis, where sentiment labels are considered as topics. One is by modifying the Dirichlet prior for the topic-word distribution (LDA-DP); the other is by augmenting the model objective function with terms that express preferences on the expectations of sentiment labels of the lexicon words using generalized expectation criteria (LDA-GE). We conducted extensive experiments on English movie review data and a multi-domain sentiment dataset, as well as Chinese product reviews about mobile phones, digital cameras, MP3 players, and monitors. The results show that while both LDA-DP and LDA-GE perform comparably to existing weakly supervised sentiment classification algorithms, they are much simpler and more computationally efficient, rendering them more suitable for online and real-time sentiment classification on the Web. We observed that LDA-GE is more effective than LDA-DP, suggesting that it should be preferred when employing the topic model for sentiment analysis. Moreover, both models are able to extract highly domain-salient polarity words from text.
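To make the LDA-DP idea more concrete, the hedged sketch below shows one way a sentiment lexicon could be folded into an asymmetric Dirichlet prior over topic-word distributions, with topics identified with sentiment labels; the vocabulary, lexicon and prior magnitudes are hypothetical and this is not the authors' implementation.

```python
# Illustrative sketch: build an asymmetric Dirichlet prior (beta matrix) over
# topic-word distributions so that lexicon words favour their sentiment label.
# Topics are identified with sentiment labels, as in the weakly supervised setup.
# Vocabulary, lexicon and prior magnitudes are hypothetical.
import numpy as np

vocab = ["excellent", "terrible", "battery", "great", "awful", "screen"]
lexicon = {"excellent": "positive", "great": "positive",
           "terrible": "negative", "awful": "negative"}
topics = ["positive", "negative"]

base_beta, boost = 0.01, 1.0            # symmetric base prior and lexicon boost
beta = np.full((len(topics), len(vocab)), base_beta)

for w, word in enumerate(vocab):
    label = lexicon.get(word)
    if label is not None:
        k = topics.index(label)
        beta[k, w] += boost             # raise prior mass under the matching label
        beta[1 - k, w] = base_beta * 0.1  # optionally suppress it under the other label

print(beta)  # this matrix would replace the symmetric beta prior during LDA inference
```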
Abstract:
Image collections are ever growing and hence visual information is becoming more and more important. Moreover, the classical paradigm of taking pictures has changed, first with the spread of digital cameras and, more recently, with mobile devices equipped with integrated cameras. Clearly, these image repositories need to be managed, and tools for effectively and efficiently searching image databases are highly sought after, especially on mobile devices where more and more images are being stored. In this paper, we present an image browsing system for interactive exploration of image collections on mobile devices. Images are arranged so that visually similar images are grouped together while large image repositories become accessible through a hierarchical, browsable tree structure, arranged on a hexagonal lattice. The developed system provides an intuitive and fast interface for navigating through image databases using a variety of touch gestures. © 2012 Springer-Verlag.
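As a small illustration of the lattice geometry involved (not the system's similarity-based layout algorithm or hierarchical tree), the sketch below converts axial hexagonal-grid coordinates to screen positions at which thumbnails could be placed; the cell size is arbitrary.

```python
# Illustrative geometry only: place items on a pointy-top hexagonal lattice by
# converting axial coordinates (q, r) to screen positions. The browsing system's
# similarity-based assignment and hierarchical tree are not modelled here.
import math

def hex_to_pixel(q: int, r: int, size: float = 60.0) -> tuple:
    """Centre of the hex cell at axial coordinates (q, r), pointy-top layout."""
    x = size * math.sqrt(3) * (q + r / 2)
    y = size * 1.5 * r
    return (x, y)

# Example: thumbnail slots in a small neighbourhood around the origin
for q, r in [(0, 0), (1, 0), (0, 1), (-1, 1)]:
    print((q, r), hex_to_pixel(q, r))
```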
Abstract:
A parallel pipelined array of cells suitable for real-time computation of histograms is proposed. The cell architecture builds on previous work to allow operating on a stream of data at one pixel per clock cycle. This new cell is better suited for interfacing to camera sensors or to microprocessors with 8-bit data buses, which are common in consumer digital cameras. Arrays using the newly proposed cells are obtained via C-slow retiming techniques and can be clocked at a 65% higher frequency than previous arrays. This achieves over 80% of the performance of two-pixel-per-clock-cycle parallel pipelined arrays.
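As a purely software reference for the computation the cell array performs, the sketch below accumulates a 256-bin histogram from a stream of 8-bit pixels, one pixel per iteration; it models only the arithmetic, not the pipelined cells or the C-slow retiming.

```python
# Software reference model: accumulate a 256-bin histogram from a stream of
# 8-bit pixels, one pixel per iteration. The hardware array performs the
# equivalent update at one pixel per clock cycle; the frame data here is
# randomly generated and purely hypothetical.
import numpy as np

rng = np.random.default_rng(0)
pixel_stream = rng.integers(0, 256, size=640 * 480, dtype=np.uint8)

histogram = np.zeros(256, dtype=np.uint32)
for pixel in pixel_stream:          # one pixel consumed per iteration
    histogram[pixel] += 1

assert histogram.sum() == pixel_stream.size
print(histogram[:8])                # first few bins
```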
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
Clouds are important in weather prediction, climate studies and aviation safety. Important parameters include cloud height, type and cover percentage. In this paper, recent improvements in the development of a low-cost cloud height measurement setup are described. It is based on stereo vision with consumer digital cameras. The positioning of the cameras is calibrated using the positions of stars in the night sky. An experimental uncertainty analysis of the calibration parameters is performed. Cloud height measurement results are presented and compared with LIDAR measurements.
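For intuition about the underlying stereo geometry (not the paper's star-based calibration or matching pipeline), the sketch below applies standard triangulation: for two parallel, zenith-pointing cameras with baseline B and focal length f in pixels, a cloud feature matched with disparity d lies at range Z = f·B/d. The numbers used are hypothetical.

```python
# Illustrative triangulation only: for two upward-looking, parallel cameras
# separated by a baseline B, a cloud feature matched in both images with
# pixel disparity d lies at range Z = f * B / d along the optical axis.
# Numbers below are hypothetical; calibration and matching are not modelled.
def cloud_height_m(focal_length_px: float, baseline_m: float, disparity_px: float) -> float:
    """Range (approximately height for zenith-pointing cameras) from disparity."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_length_px * baseline_m / disparity_px

# Example: 2800-px focal length, 100 m baseline, 140 px disparity -> ~2 km
print(f"{cloud_height_m(2800, 100.0, 140.0):.0f} m")
```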
Abstract:
The job of a historian is to understand what happened in the past, resorting in many cases to written documents as a firsthand source of information. Text, however, is not the only source of knowledge. Pictorial representations have also accompanied the main events of the historical timeline. In particular, the opportunity to visually represent circumstances has bloomed since the invention of photography, with the possibility of capturing specific events in real time. Thanks to the widespread use of digital technologies (e.g. smartphones and digital cameras), networking capabilities and the consequent availability of multimedia content, the academic and industrial research communities have developed artificial intelligence (AI) paradigms with the aim of inferring, transferring and creating new layers of information from images, videos, etc. Now, while AI communities devote much of their attention to analyzing digital images, from a historical research standpoint more interesting results may be obtained by analyzing analog images from the pre-digital era. Within this scenario, the aim of this work is to analyze a collection of analog documentary photographs, building upon state-of-the-art deep learning techniques. In particular, the analysis carried out in this thesis aims at producing two results: (a) estimating the date of an image, and (b) recognizing its background socio-cultural context, as defined by a group of historical-sociological researchers. Given these premises, the contribution of this work amounts to: (i) the introduction of a historical dataset of "Family Album" images spanning the twentieth century, (ii) the introduction of a new classification task regarding the identification of the socio-cultural context of an image, and (iii) the exploitation of different deep learning architectures to perform image dating and image socio-cultural context classification.
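A minimal sketch of the kind of transfer-learning setup such an analysis could use is shown below, assuming PyTorch/torchvision; the dataset folder, decade classes and hyperparameters are hypothetical and this is not the thesis' exact architecture.

```python
# Illustrative transfer-learning sketch for image dating: fine-tune a pretrained
# CNN to classify photographs into decade bins. Dataset path, class count and
# hyperparameters are hypothetical; the thesis compares several architectures.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

NUM_DECADES = 10  # e.g. 1900s ... 1990s for a twentieth-century family album

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

# Expects an ImageFolder layout: family_album/<decade_label>/<image>.jpg (hypothetical path)
dataset = datasets.ImageFolder("family_album", transform=preprocess)
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, NUM_DECADES)  # replace the classifier head

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:             # one epoch shown for brevity
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```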
Abstract:
Capsule: Avian predators are principally responsible. Aims: To document the fate of Spotted Flycatcher nests and to identify the species responsible for nest predation. Methods: During 2005-06, purpose-built, remote digital nest-cameras were deployed at 65 of 141 Spotted Flycatcher nests monitored in two study areas, one in south Devon and the second on the border of Bedfordshire and Cambridgeshire. Results: Of the 141 nests monitored, 90 were successful (non-camera nests: 49 of 76 successful; camera nests: 41 of 65). Fate was determined for 63 of the 65 nests monitored by camera, with 20 predation events documented, all of which occurred during daylight hours. Avian predators carried out 17 of the 20 predations, with the principal nest predator identified as the Eurasian Jay Garrulus glandarius. The only mammal recorded predating nests was the Domestic Cat Felis catus; the study therefore provided no evidence that Grey Squirrels Sciurus carolinensis are an important predator of Spotted Flycatcher nests. There was no evidence of differences in nest survival rates between nests with and without cameras. Nest remains following predation events gave little clue as to the identity of the predator species responsible. Conclusions: Nest-cameras can be useful tools in the identification of nest predators, and may be deployed with no subsequent effect on nest survival. The majority of predation of Spotted Flycatcher nests in this study was by avian predators, principally the Jay. There was little evidence of predation by mammalian predators. Identification of specific nest predators enhances studies of breeding productivity and predation risk.