979 results for Night vision devices
Abstract:
Background: Falls among hospitalised patients impose a considerable burden on health systems globally, and prevention is a priority. Some patient-level interventions have been effective in reducing falls, but others have not. An alternative and promising approach to reducing inpatient falls is modification of the hospital physical environment, and the night lighting of hospital wards is a leading candidate for investigation. In this pilot trial, we will determine the feasibility of conducting a main trial to evaluate the effects of modified night lighting on ward-level inpatient fall rates. We will also test the feasibility of collecting novel forms of patient-level data through a concurrent observational sub-study. Methods/design: A stepped wedge, cluster randomised controlled trial will be conducted in six inpatient wards over 14 months in a metropolitan teaching hospital in Brisbane (Australia). The intervention will consist of supplementary night lighting installed across all patient rooms within study wards. The planned placement of luminaires, their configurations and spectral characteristics are based on prior published research and pre-trial testing and modification. We will collect data on rates of falls on study wards (falls per 1000 patient days), the proportion of patients who fall once or more, and average length of stay. We will recruit two patients per ward per month to a concurrent observational sub-study aimed at understanding potential impacts on a range of patient sleep and mobility behaviours. The effect on the environment will be monitored with sensors to detect variation in light levels and night-time room activity. We will also collect data on possible patient-level confounders, including demographics, pre-admission sleep quality, reported vision, hearing impairment and functional status.
Discussion: This pragmatic pilot trial will assess the feasibility of conducting a main trial to investigate the effects of modified night lighting on inpatient fall rates using several new methods previously untested in the context of environmental modifications and patient safety. Pilot data collected through both parts of the trial will be utilised to inform sample size calculations, trial design and final data collection methods for a subsequent main trial.
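The primary outcome above, falls per 1000 patient days, is a simple rate; a minimal sketch of the calculation with illustrative numbers (not trial data):

```python
def fall_rate_per_1000(falls: int, patient_days: int) -> float:
    """Ward-level fall rate expressed per 1000 patient days."""
    return falls / patient_days * 1000

# e.g. a ward with 12 recorded falls over 3400 patient days
print(round(fall_rate_per_1000(12, 3400), 2))  # 3.53
```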
Abstract:
Two bycatch reduction devices (BRDs)—the extended mesh funnel (EMF) and the Florida fisheye (FFE)—were evaluated in otter trawls with net mouth circumferences of 14 m, 17 m, and 20 m and total net areas of 45 m2. Each test net was towed 20 times in parallel with a control net that had the same dimensions and configuration but no BRD. Both BRDs were tested at night during fall 1996 and winter 1997 in Tampa Bay, Florida. Usually, the bycatch was composed principally of finfish (44 species were captured); horseshoe crabs and blue crabs seasonally predominated in some trawls. Ten finfish species composed 92% of the total finfish catch; commercially or recreationally valuable species accounted for 7% of the catch. Mean finfish size in the BRD-equipped nets was usually slightly smaller than that in the control nets. Compared with the corresponding control nets, both biomass and number of finfish were almost always lower in the BRD-equipped nets, but neither shrimp number nor shrimp biomass was significantly reduced. The differences in proportions of both shrimp and finfish catch between the BRD-equipped and control nets varied between seasons and among net sizes, and differences in finfish catch were specific to each BRD type and season. In winter, shrimp catch was highest and the size range of shrimp was greater than in fall. Season-specific differences in shrimp catch among the BRD types occurred only in the 14-m EMF nets. Finfish bycatch species composition was also highly seasonal; each species was captured mainly during only one season. However, regardless of the finfish composition, the shrimp catch was relatively constant. In part as a result of this study, the State of Florida now requires the use of BRDs in state waters.
Abstract:
Gemstone Team FACE
Abstract:
To date, the processing of wildlife location data has relied on a diversity of software and file formats. Data management and the subsequent spatial and statistical analyses were undertaken in multiple steps, involving many time-consuming importing/exporting phases. Recent technological advancements in tracking systems have made large, continuous, high-frequency datasets of wildlife behavioral data available, such as those derived from the global positioning system (GPS) and other animal-attached sensor devices. These data can be further complemented by a wide range of other information about the animals’ environment. Management of these large and diverse datasets for modelling animal behaviour and ecology can prove challenging, slowing down analysis and increasing the probability of mistakes in data handling. We address these issues by critically evaluating the requirements for good management of GPS data for wildlife biology. We highlight that dedicated data management tools and expertise are needed. We explore current research in wildlife data management. We suggest a general direction of development, based on a modular software architecture with a spatial database at its core, where interoperability, data model design and integration with remote-sensing data sources play an important role in successful GPS data handling.
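The "spatial database at its core" architecture can be sketched minimally as a single table of GPS fixes that analysis modules query. The schema and values below are illustrative assumptions, not taken from the paper; a production system would typically use a spatial database such as PostgreSQL/PostGIS rather than plain SQLite:

```python
import sqlite3

# One table of GPS fixes as the shared core that downstream
# analysis modules query (table and column names are illustrative).
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE gps_fix (
        animal_id   TEXT,
        acquired_at TEXT,   -- ISO 8601 timestamp
        lon         REAL,
        lat         REAL,
        dop         REAL    -- dilution of precision (fix quality)
    )
""")
conn.execute(
    "INSERT INTO gps_fix VALUES ('roe01', '2024-05-01T02:00:00', 11.05, 46.01, 2.1)"
)

# An analysis module filters out poor-quality fixes in the database,
# instead of re-implementing that step in every script.
rows = conn.execute(
    "SELECT animal_id, lon, lat FROM gps_fix WHERE dop < 5"
).fetchall()
print(rows)  # [('roe01', 11.05, 46.01)]
```

Keeping quality filtering and joins with environmental layers inside the database is what gives the interoperability the authors argue for.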
Abstract:
The aim of this paper is to demonstrate the applicability and the effectiveness of a computationally demanding stereo matching algorithm in different low-cost and low-complexity embedded devices, focusing on the analysis of timing and image-quality performance. Various optimizations have been implemented to allow its deployment on specific hardware architectures while decreasing memory and processing time requirements: (1) reduction of color channel information and resolution for input images, (2) low-level software optimizations such as parallel computation, replacement of function calls or loop unrolling, (3) reduction of redundant data structures and internal data representation. The feasibility of a stereovision system on a low-cost platform is evaluated by using standard datasets and images taken from Infra-Red (IR) cameras. Analysis of the resulting disparity map accuracy with respect to a full-size dataset is performed, as well as the testing of suboptimal solutions.
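The abstract names the optimizations but not the matching algorithm itself. As an illustrative stand-in (an assumption, not the paper's algorithm), here is a minimal block-matching disparity sketch using the SAD cost on single-channel images, consistent with the color-channel reduction in optimization (1):

```python
import numpy as np

def disparity_sad(left, right, max_disp=16, win=3):
    """Block-matching disparity via Sum of Absolute Differences on
    single-channel (grayscale) images. Brute-force reference version;
    embedded deployments would add the optimizations listed above."""
    h, w = left.shape
    half = win // 2
    disp = np.zeros((h, w), dtype=np.uint8)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = left[y - half:y + half + 1,
                         x - half:x + half + 1].astype(np.int32)
            best_cost, best_d = None, 0
            for d in range(max_disp):
                cand = right[y - half:y + half + 1,
                             x - d - half:x - d + half + 1].astype(np.int32)
                cost = int(np.abs(patch - cand).sum())
                if best_cost is None or cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp
```

Working on one grayscale channel instead of three cuts memory traffic and per-pixel cost by roughly a factor of three, which is the kind of saving that optimization targets.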
Abstract:
Aims: Cataract surgery is one of the most common surgeries performed, but its overuse has been reported. The threshold for cataract surgery has become increasingly lenient; therefore, the selection process and surgical need have been questioned. The aim of this study was to evaluate the changes associated with cataract surgery in patient-reported vision-related quality of life (VR-QoL).
Methods: A prospective cohort study was conducted. Consecutive patients referred to cataract clinics in an NHS unit in Scotland were identified. Those listed for surgery were invited to complete a validated questionnaire (TyPE) to measure VR-QoL pre- and post-operatively. TyPE has five different domains (near vision, distance vision, daytime driving, night-time driving, and glare) and a global score of vision. The influence of pre-operative visual acuity (VA) levels, vision, and lens status of the fellow eye on changes in VR-QoL were explored.
Results: A total of 320 listed patients were approached, of whom 36 were excluded. Among the 284 enrolled patients, 229 (81%) returned the questionnaire after surgery. Results revealed that the mean overall vision improved, as reported by patients. Improvements were also seen in all sub-domains of the questionnaire.
Conclusion: The majority of patients appear to have improvement in patient-reported VR-QoL, including those with good pre-operative VA and previous surgery to the fellow eye. VA thresholds may not capture the effects of cataract surgery on patients' quality of life. This information can assist clinicians in making more informed decisions when weighing the benefits of listing a patient for cataract extraction.
Abstract:
It is well known that image processing requires a huge amount of computation, mainly at the low-level processing stage, where the algorithms deal with a great number of pixels. One solution for estimating motion involves detecting correspondences between two images. For normalised correlation criteria, previous experiments showed that the result is not altered in the presence of non-uniform illumination. Usually, hardware for motion estimation has been limited to simple correlation criteria. The main goal of this paper is to propose a VLSI architecture for motion estimation using a matching criterion more complex than the Sum of Absolute Differences (SAD) criterion. Today's hardware devices provide many facilities for the integration of increasingly complex designs, as well as the possibility to easily communicate with general-purpose processors.
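The illumination robustness of normalised correlation can be seen in a small sketch of zero-mean normalised cross-correlation, one common form of the criterion; the uniform gain-and-offset change below is a simple stand-in for an illumination change (an illustration, not the paper's experiment):

```python
import numpy as np

def ncc(a, b):
    """Zero-mean normalised cross-correlation of two equal-size blocks.
    Subtracting the mean and dividing by the norms makes the score
    invariant to gain and offset changes in intensity."""
    a = a.astype(np.float64) - a.mean()
    b = b.astype(np.float64) - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom else 0.0

block = np.arange(9.0).reshape(3, 3)
brighter = 2.0 * block + 30.0   # same pattern under changed lighting
print(ncc(block, brighter))     # 1.0 -- the match score is unaffected
```

A plain SAD cost between `block` and `brighter` would be large, which is why SAD-only hardware struggles under varying illumination while a normalised criterion does not.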
Abstract:
Animal behavioral parameters can be used to assess welfare status in commercial broiler breeders. Behavioral parameters can be monitored with a variety of sensing devices; for instance, the use of video cameras allows comprehensive assessment of animal behavioral expressions. Nevertheless, the development of efficient methods and algorithms to continuously identify and differentiate animal behavior patterns is needed. The objective of this study was to provide a methodology to identify white broiler breeder hen behavior using combined techniques of image processing and computer vision. These techniques were applied to differentiate body shapes from a sequence of frames as the birds expressed their behaviors. The method comprised four stages: (1) identification of body positions and their relationship with typical behaviors; for this stage, the number of frames required to identify each behavior was determined; (2) collection of image samples, with the isolation of the birds that expressed a behavior of interest; (3) image processing and analysis using a filter developed to separate white birds from the dark background; and finally (4) construction and validation of a behavioral classification tree, using the software tool Weka (J48 model). The constructed tree was structured in 8 levels and 27 leaves, and it was validated using two modes: the training set mode, with an overall rate of success of 96.7%, and the cross-validation mode, with an overall rate of success of 70.3%. The results presented here confirmed the feasibility of the method developed to identify white broiler breeder behavior for a particular group of study. Nevertheless, the method can be improved further in order to increase the overall rate of success in validation.
Abstract:
The integration of CMOS cameras with embedded processors and wireless communication devices has enabled the development of distributed wireless vision systems. Wireless Vision Sensor Networks (WVSNs), which consist of wirelessly connected embedded systems with vision and sensing capabilities, provide a wide variety of application areas that have not been possible to realize with wall-powered vision systems with wired links or scalar-data-based wireless sensor networks. In this paper, the design of a middleware for a wireless vision sensor node is presented for the realization of WVSNs. The implemented wireless vision sensor node is tested through a simple vision application to study and analyze its capabilities, and to determine the challenges in distributed vision applications over a wireless network of low-power embedded devices. The results of this paper highlight the practical concerns for the development of efficient image processing and communication solutions for WVSNs and emphasize the need for cross-layer solutions that unify these two so-far-independent research areas.
Abstract:
PURPOSE: To describe and interpret teachers' opinions about, and responsiveness to, guidance on optical aids for low vision. METHODS: A cross-sectional analytical study was conducted. The convenience, non-random sample consisted of 58 teachers from the public school network of the city of Campinas. A structured questionnaire, available online at the assessed website, was constructed and applied. For qualitative data collection, an exploratory study was conducted using the focus group technique. RESULTS: Responses expressed, for the most part, a marked interest in the website, its ease of access, and the comprehensive nature of the information provided. Most people reported frequent use of the Internet to seek information, and found it easier to access the Internet at home. Among the qualitative aspects of the evaluation, we should mention the perceived importance of the website as a source of information, despite some criticism about the accessibility and reliability of the information found on the Internet. CONCLUSION: Teachers' need for training to deal with visually impaired students, and their positive response to advice and information, lead to the conclusion that web-based guidelines on the use of optical aids were considered beneficial in easing the understanding of visual impairment and the rehabilitation of the affected subjects.
Abstract:
Electronic applications are nowadays converging under the umbrella of the cloud computing vision. The future ecosystem of information and communication technology will integrate clouds of portable clients and embedded devices exchanging information, through the internet layer, with processing clusters of servers, data centers and high-performance computing systems. Even though the whole of society is waiting to embrace this revolution, there is a downside to the story. Portable devices require batteries to work far from power plugs, and their storage capacity does not scale as the increasing power requirement does. At the other end, processing clusters such as data centers and server farms are built upon the integration of thousands of multiprocessors. For each of them, technology scaling over the last decade has produced a dramatic increase in power density with significant spatial and temporal variability. This leads to power and temperature hot-spots, which may cause non-uniform ageing and accelerated chip failure. Furthermore, all the heat removed from the silicon translates into high cooling costs. Moreover, trends in the ICT carbon footprint show that the run-time power consumption of the whole spectrum of devices accounts for a significant slice of the world's entire carbon emissions. This thesis embraces the full ICT ecosystem and its dynamic power consumption concerns by describing a set of new and promising system-level resource management techniques to reduce power consumption and related issues for two corner cases: Mobile Devices and High Performance Computing.
Abstract:
BACKGROUND: Central and peripheral vision is needed for object detection. Previous research has shown that visual target detection is affected by age. In addition, light conditions also influence visual exploration. The aim of the study was to investigate the effects of age and different light conditions on visual exploration behavior and on driving performance during simulated driving. METHODS: A fixed-base simulator with a 180-degree field of view was used to simulate a motorway route under daylight and night conditions to test 29 young subjects (25-40 years) and 27 older subjects (65-78 years). Drivers' eye fixations were analyzed and assigned to regions of interest (ROI) such as street, road signs, car ahead, environment, rear view mirror, side mirror left, side mirror right, incoming car, parked car, and road repair. In addition, lane-keeping and driving speed were analyzed as measures of driving performance. RESULTS: Older drivers had longer fixations on the task-relevant ROI, but checked the mirrors less frequently than younger drivers. In both age groups, night driving led to fewer fixations on the mirrors. At the performance level, older drivers showed more variation in driving speed and lane-keeping behavior, which was especially prominent at night. In younger drivers, night driving had no impact on driving speed or lane-keeping behavior. CONCLUSIONS: Older drivers' visual exploration behavior is more focused on the task-relevant ROI, especially at night, when driving performance becomes more heterogeneous than in younger drivers.
Abstract:
Nowadays, we can send audio over the Internet for multiple uses, such as telephony, broadcast audio or teleconferencing. The issue arises when the sound from different sources must be synchronized, because the network may lose packets and introduce delay in delivery; the sound cards may also run at different speeds. In this project, we work with two computers emitting sound (one simulates the left channel (mono) of a stereo signal, and the other the right channel), connected with a third computer over a TCP network. The third computer must receive the sound from both computers and reproduce it through a speaker properly (without delay). So, basically, the main goal of the project is to synchronize multi-track sound over a network. TCP networks introduce latency into data transfers, and streaming audio suffers from two problems: a delay and an offset between the channels. This project explores the causes of latency, investigates the effect of the inter-channel offset and proposes a solution to synchronize the received channels. In conclusion, good synchronization of sound is required at a time when several audio applications are being developed. When two devices send audio over a network, the multi-track sound arrives at the third computer with an offset, giving a negative effect to the listener. This project has dealt with this offset, achieving good synchronization of the multi-track sound and a good effect for the listener. This was achieved thanks to the division of the project into several steps, maintaining throughout a clear view of the problem and good scalability, and keeping the latency under control at all times. As shown in chapter 4 of the project, a lack of synchronization over c. 100 μs is audible to the listener.
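The inter-channel offset can be estimated and removed at the receiver. A minimal sketch using cross-correlation (an illustrative approach, not necessarily the project's exact method), assuming both channels share a common sample clock:

```python
import numpy as np

def estimate_offset(a, b):
    """Lag (in samples) by which channel `a` trails channel `b`,
    found at the peak of their full cross-correlation."""
    corr = np.correlate(a, b, mode="full")
    return int(np.argmax(corr)) - (len(b) - 1)

# Simulated channels: the right channel arrives 48 samples
# (1 ms at 48 kHz) after the left one; white noise stands in
# for real audio content.
rng = np.random.default_rng(0)
sig = rng.standard_normal(1000)
left = sig
right = np.concatenate([np.zeros(48), sig])

offset = estimate_offset(right, left)
print(offset)  # 48 -- trim 48 leading samples from `right` to align
```

At 48 kHz, the c. 100 μs audibility threshold mentioned above corresponds to roughly 5 samples, so the estimate must be accurate to a handful of samples.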
Abstract:
In this work we address a scenario where 3D content is transmitted to a mobile terminal with 3D display capabilities. We consider the use of the 2D plus depth format to represent the 3D content and focus on the generation of synthetic views in the terminal. We evaluate different types of smoothing filters that are applied to depth maps with the aim of reducing the disoccluded regions. The evaluation takes into account the reduction of holes in the synthetic view as well as the presence of geometrical distortion caused by the smoothing operation. The selected filter has been included within an implemented module for the VideoLAN Client (VLC) software in order to render 3D content from the 2D plus depth data format.
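The effect the filters aim for can be sketched simply: smoothing a depth map spreads out sharp depth discontinuities, which are what produce large disoccluded regions during view synthesis. A plain moving-average filter below stands in for the filters actually compared in the paper:

```python
import numpy as np

def smooth_depth(depth, k=3):
    """Naive k x k moving-average smoothing of a depth map
    (edge-replicated borders). Illustrative stand-in for the
    smoothing filters evaluated in the paper."""
    pad = k // 2
    padded = np.pad(depth, pad, mode="edge")
    out = np.zeros_like(depth, dtype=np.float64)
    h, w = depth.shape
    for y in range(h):
        for x in range(w):
            out[y, x] = padded[y:y + k, x:x + k].mean()
    return out

depth = np.zeros((5, 8))
depth[:, 4:] = 100.0            # a sharp depth edge between two planes
smoothed = smooth_depth(depth)
# the 100-unit jump is now spread over neighbouring columns
print(depth[2, 3:6], smoothed[2, 3:6])
```

The trade-off the paper evaluates is visible even here: the softened edge shrinks the hole in the synthetic view, but bends the geometry near the discontinuity.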
Abstract:
The evolution of smartphones, all equipped with digital cameras, is driving a growing demand for ever more complex applications that need to rely on real-time computer vision algorithms. However, video signals are only increasing in size, whereas the performance of single-core processors has somewhat stagnated in the past few years. Consequently, new computer vision algorithms will need to be parallel in order to run on multiple processors and be computationally scalable. One of the most promising classes of processors nowadays can be found in graphics processing units (GPUs). These are devices offering a high degree of parallelism, excellent numerical performance and increasing versatility, which makes them interesting for scientific computing. In this thesis, we explore two computer vision applications whose high computational complexity precludes them from running in real time on traditional uniprocessors. However, we show that by parallelizing subtasks and implementing them on a GPU, both applications attain their goals of running at interactive frame rates. In addition, we propose a technique for the fast evaluation of arbitrarily complex functions, specially designed for GPU implementation. First, we explore the application of depth-image-based rendering techniques to the unusual configuration of two convergent, wide-baseline cameras, in contrast to the narrow-baseline, parallel cameras usually used in 3D TV. By using a backward mapping approach with a depth inpainting scheme based on modified median filters, we show that these techniques are adequate for free-viewpoint video applications. In addition, we show that referring depth information to a global reference system is ill-advised and should be avoided. Then, we propose a background subtraction system based on kernel density estimation techniques. These techniques are well suited to modelling complex scenes featuring multimodal backgrounds, but have not been popular owing to their huge computational and memory complexity. The proposed system, implemented in real time on a GPU, features novel proposals for dynamic kernel bandwidth estimation for the background model, selective update of the background model, update of the position of reference samples of the foreground model using a multi-region particle filter, and automatic selection of regions of interest to reduce computational cost. The results, evaluated on several databases and compared to other state-of-the-art algorithms, demonstrate the high quality and versatility of our proposal. Finally, we propose a general method for the approximation of arbitrarily complex functions using continuous piecewise linear functions, specially formulated for GPU implementation by leveraging the texture filtering units, normally unused for numerical computation. Our proposal features a rigorous mathematical analysis of the approximation error as a function of the number of samples, as well as a method to obtain a quasi-optimal partition of the domain of the function to minimize approximation error.
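The final contribution, piecewise linear approximation evaluated by GPU texture filtering units, can be sketched on the CPU: store samples of the function on a partition and interpolate linearly between them, which is exactly what a texture unit does in hardware. The uniform partition below is a simplification; the thesis derives a partition of the domain chosen to minimize the error:

```python
import numpy as np

def pwl_approx(f, lo, hi, n_samples):
    """Build a continuous piecewise linear approximation of f on
    [lo, hi] from n_samples uniformly spaced samples. Evaluating it
    is one table lookup plus linear interpolation -- the operation a
    GPU texture filtering unit performs in hardware."""
    xs = np.linspace(lo, hi, n_samples)
    ys = f(xs)                          # precomputed sample table
    def approx(x):
        return np.interp(x, xs, ys)     # linear interpolation
    return approx

g = pwl_approx(np.exp, 0.0, 1.0, 64)
x = np.linspace(0.0, 1.0, 1000)
err = np.max(np.abs(g(x) - np.exp(x)))
print(err < 1e-3)  # True: for smooth f the error decays as O(1/n^2)
```

For a twice-differentiable function the interpolation error on each segment is bounded by h²·max|f''|/8, which is the kind of bound the thesis's error analysis makes rigorous and then minimizes by adapting the partition.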