786 results for video cameras


Relevance:

100.00%

Publisher:

Abstract:

Visual recording devices such as video cameras, CCTVs, or webcams have been broadly used to facilitate work progress or safety monitoring on construction sites. Without human intervention, however, both real-time reasoning about captured scenes and interpretation of recorded images are challenging tasks. This article presents an exploratory method for automated object identification using standard video cameras on construction sites. The proposed method supports real-time detection and classification of mobile heavy equipment and workers. A background subtraction algorithm extracts motion pixels from an image sequence, the pixels are then grouped into regions that represent moving objects, and finally the regions are identified as particular objects using classifiers. To evaluate the method, the formulated computer-aided process was implemented on actual construction sites, and promising results were obtained. This article is expected to contribute to future applications of automated monitoring systems for work-zone safety and productivity.
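The abstract gives no implementation details; the pipeline it describes (background subtraction, grouping motion pixels into regions, then classification) can be sketched roughly as below. This is a minimal illustration using simple frame differencing and connected-component grouping, with hypothetical thresholds; the classifier stage is omitted.

```python
import numpy as np
from collections import deque

def motion_regions(background, frame, thresh=25, min_area=4):
    """Extract motion pixels by background subtraction, then group them
    into 4-connected regions (candidate moving objects)."""
    motion = np.abs(frame.astype(int) - background.astype(int)) > thresh
    labels = np.zeros(motion.shape, dtype=int)
    regions, next_label = [], 0
    h, w = motion.shape
    for y in range(h):
        for x in range(w):
            if motion[y, x] and labels[y, x] == 0:
                next_label += 1
                labels[y, x] = next_label
                q, pixels = deque([(y, x)]), []
                while q:  # flood fill one connected region
                    cy, cx = q.popleft()
                    pixels.append((cy, cx))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = cy + dy, cx + dx
                        if 0 <= ny < h and 0 <= nx < w and motion[ny, nx] and labels[ny, nx] == 0:
                            labels[ny, nx] = next_label
                            q.append((ny, nx))
                if len(pixels) >= min_area:  # discard tiny noise blobs
                    ys, xs = zip(*pixels)
                    regions.append((min(xs), min(ys), max(xs), max(ys)))  # bounding box
    return regions

# toy example: a static background and one bright moving blob
bg = np.zeros((10, 10), dtype=np.uint8)
fr = bg.copy()
fr[2:5, 3:6] = 200
print(motion_regions(bg, fr))  # → [(3, 2, 5, 4)]
```

In a real deployment each bounding box would be passed to the classifiers to decide whether it is a worker or a piece of heavy equipment.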

Relevance:

70.00%

Publisher:

Abstract:

Topographic structural complexity of a reef is highly correlated with coral growth rates, coral cover and overall levels of biodiversity, and is therefore integral in determining ecological processes. Modeling these processes commonly includes measures of rugosity obtained from a wide range of survey techniques that often fail to capture rugosity at different spatial scales. Here we show that accurate estimates of rugosity can be obtained from video footage captured using underwater video cameras (i.e., monocular video). To demonstrate the accuracy of our method, we compared the results to in situ measurements of a 2 m x 20 m area of forereef from Glovers Reef atoll in Belize. Sequential pairs of images were used to compute fine-scale bathymetric reconstructions of the reef substrate, from which precise measurements of rugosity and reef topographic structural complexity can be derived across multiple spatial scales. To achieve accurate bathymetric reconstructions from uncalibrated monocular video, the position of the camera for each image in the video sequence and the intrinsic parameters (e.g., focal length) must be computed simultaneously. We show that these parameters can often be determined when the data exhibit parallax-type motion, and that rugosity and reef complexity can be accurately computed from existing video sequences taken with any type of underwater camera in any reef habitat or location. This technique opens a wide array of possibilities for future coral reef research by providing a cost-effective and automated method of determining structural complexity and rugosity in both new and historical video surveys of coral reefs.
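As a minimal illustration of the rugosity index itself (the standard contour-to-linear-distance ratio, here applied to a one-dimensional bathymetric profile such as the method reconstructs; the structure-from-motion step is not shown):

```python
import math

def rugosity(depths, spacing=1.0):
    """Linear rugosity of a bathymetric profile: length of the depth
    contour divided by the straight-line (planar) length.
    A perfectly flat profile yields 1.0; rougher substrate yields more."""
    contour = sum(math.hypot(spacing, depths[i + 1] - depths[i])
                  for i in range(len(depths) - 1))
    planar = spacing * (len(depths) - 1)
    return contour / planar

# hypothetical depth profiles sampled every 1 m
flat = [5.0] * 11
bumpy = [5.0, 6.0, 5.0, 6.0, 5.0, 6.0, 5.0, 6.0, 5.0, 6.0, 5.0]
print(rugosity(flat))   # 1.0
print(rugosity(bumpy))  # ≈ 1.414 (sqrt(2): every segment rises or falls 1 m)
```

Evaluating the same index at several sample spacings gives the multi-scale rugosity estimates the abstract refers to.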

Relevance:

70.00%

Publisher:

Abstract:

Based on analyses of high-speed video recordings of cloud-to-ground lightning in Brazil and the USA, the characteristics of positive cloud-to-ground (+CG) leaders are presented. The high frame rates permitted the average two-dimensional speeds of development along the channel paths to be resolved with good accuracy. The values range from 0.3 to 6.0 x 10^5 m s^-1, with a mean of 2.7 x 10^5 m s^-1. Contrary to what is usually assumed, downward +CG leader speeds are similar to downward -CG leader speeds. Our observations also show that the speeds tend to increase by a factor of 1.1 to 6.5 as the leaders approach the ground. The presence of short-duration recoil leaders (RLs) during the development of positive leaders reveals a highly branched structure that is not usually recorded with conventional photographic and video cameras. The existence of the RLs may help to explain observations of UHF-VHF radiation during the development of +CG flashes.

Relevance:

70.00%

Publisher:

Abstract:

A target tracking algorithm able to identify the position of moving targets and pursue them in digital video sequences is proposed in this paper. The proposed approach aims to track moving targets inside the field of view of a digital camera. The position and trajectory of the target are identified using a neural network with a competitive learning technique. The winning neuron is trained to approach the target and then pursue it. A digital camera provides a sequence of images, and the algorithm processes these frames in real time, tracking the moving target. The algorithm was evaluated on both black-and-white and multi-colored images to simulate real-world situations. Results show the effectiveness of the proposed algorithm, since the neurons tracked the moving targets even without any image pre-processing. Single and multiple moving targets are followed in real time.
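The paper's exact network is not described in this summary; a toy sketch of the winner-take-all competitive learning idea (the neuron closest to the detected target wins and is pulled toward it) follows, with a hypothetical neuron count and learning rate:

```python
import numpy as np

def track(frame_targets, n_neurons=5, lr=0.5, seed=0):
    """Competitive learning tracker: for each frame, the neuron nearest
    the detected target position (the winner) moves toward the target."""
    rng = np.random.default_rng(seed)
    neurons = rng.uniform(0, 100, size=(n_neurons, 2))  # random 2-D positions
    path = []
    for target in frame_targets:
        t = np.asarray(target, float)
        winner = np.argmin(np.linalg.norm(neurons - t, axis=1))
        neurons[winner] += lr * (t - neurons[winner])  # pull winner toward target
        path.append(neurons[winner].copy())
    return np.array(path)

# target moving along a diagonal; the winning neuron converges onto it
targets = [(i, i) for i in range(0, 50, 5)]
path = track(targets)
print(np.linalg.norm(path[-1] - np.array(targets[-1], float)))  # small residual
```

With several neurons, multiple simultaneous targets can each capture their own winner, which is how a competitive layer follows several objects at once.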

Relevance:

70.00%

Publisher:

Abstract:

Bilayer segmentation of live video in uncontrolled environments is an essential task for home applications in which the original background of the scene must be replaced, as in video chats or traditional videoconferencing. The main challenge in such conditions is overcoming the difficulties that may occur while the video is being captured, e.g., illumination changes, distracting events such as elements moving in the background, and camera shake. This paper presents a survey of segmentation methods for background substitution applications, describes the main concepts, and identifies events that may cause errors. Our analysis shows that the most robust methods rely on specific devices (multiple cameras, or sensors that generate depth maps) to aid the process. To achieve the same results using conventional devices (monocular video cameras), most current research relies on energy minimization frameworks, in which temporal and spatial information are probabilistically combined with color and contrast.
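As a rough illustration of the probabilistic combination such frameworks perform, the fragment below fuses a per-pixel color likelihood with a temporal prior in the log domain. The spatial contrast (smoothness) term of a full energy minimization is omitted, and all numeric values are hypothetical:

```python
import math

def fg_posterior(color_ll_fg, color_ll_bg, temporal_prior_fg):
    """Per-pixel foreground posterior: combine log-likelihoods of the
    pixel's color under foreground/background color models with a
    temporal prior (how likely the pixel was foreground previously)."""
    log_fg = math.log(temporal_prior_fg) + color_ll_fg
    log_bg = math.log(1.0 - temporal_prior_fg) + color_ll_bg
    return 1.0 / (1.0 + math.exp(log_bg - log_fg))  # softmax over two labels

# a pixel whose color fits the foreground model and that was foreground before
p = fg_posterior(color_ll_fg=-1.0, color_ll_bg=-4.0, temporal_prior_fg=0.7)
print(round(p, 3))  # 0.979: confidently labeled foreground
```

A full bilayer method would then smooth these per-pixel posteriors with a contrast-sensitive pairwise term, typically solved by graph cuts.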

Relevance:

70.00%

Publisher:

Abstract:

Oral presentation, SPIE Photonics Europe, Brussels, 16-19 April 2012.

Relevance:

70.00%

Publisher:

Abstract:

This work has as its objective the development of non-invasive and low-cost systems for monitoring and automatically diagnosing specific neonatal diseases by means of the analysis of suitable video signals. We focus on monitoring infants potentially at risk of diseases characterized by the presence or absence of rhythmic movements of one or more body parts. Seizures and respiratory diseases are specifically considered, but the approach is general.

Seizures are defined as sudden neurological and behavioural alterations. They are age-dependent phenomena and the most common sign of central nervous system dysfunction. Neonatal seizures have onset within the 28th day of life in newborns at term and within the 44th week of conceptional age in preterm infants. Their main causes are hypoxic-ischaemic encephalopathy, intracranial haemorrhage, and sepsis. Studies indicate an incidence rate of neonatal seizures of 0.2% of live births, 1.1% for preterm neonates, and 1.3% for infants weighing less than 2500 g at birth. Neonatal seizures can be classified into four main categories: clonic, tonic, myoclonic, and subtle. Seizures in newborns have to be promptly and accurately recognized in order to establish timely treatments that could avoid an increase of the underlying brain damage.

Respiratory diseases related to the occurrence of apnoea episodes may be caused by cerebrovascular events. Among the wide range of causes of apnoea, besides seizures, a relevant one is Congenital Central Hypoventilation Syndrome (CCHS). With a reported prevalence of 1 in 200,000 live births, CCHS, formerly known as Ondine's curse, is a rare life-threatening disorder characterized by a failure of the automatic control of breathing, caused by mutations in the PHOX2B gene. CCHS manifests itself in the neonatal period with episodes of cyanosis or apnoea, especially during quiet sleep. The reported mortality rates range from 8% to 38% of newborns with genetically confirmed CCHS.
Nowadays, CCHS is considered a disorder of autonomic regulation, with a related risk of sudden infant death syndrome (SIDS). Currently, the standard method of diagnosis for both diseases is polysomnography, which relies on a set of sensors: electroencephalography (EEG), electromyography (EMG), electrocardiography (ECG), elastic belt sensors, pulse oximeters and nasal flow meters. This monitoring system is very expensive, time-consuming, moderately invasive, and requires particularly skilled medical personnel, not always available in a Neonatal Intensive Care Unit (NICU). Therefore, automatic, real-time and non-invasive monitoring equipment able to reliably recognize these diseases would be of significant value in the NICU. A very appealing monitoring tool to automatically detect neonatal seizures or breathing disorders may be based on acquiring, through a network of sensors (e.g., a set of video cameras), the movements of the newborn's body (e.g., limbs, chest) and properly processing the relevant signals. An automatic multi-sensor system could be used to permanently monitor every patient in the NICU, or specific patients at home. Furthermore, a wire-free technique is more user-friendly and highly desirable when used with infants, in particular with newborns. This work has focused on a reliable method to estimate the periodicity of pathological movements based on the Maximum Likelihood (ML) criterion. In particular, average differential luminance signals are extracted from multiple Red, Green and Blue (RGB) cameras or depth-sensor devices, and the presence or absence of a significant periodicity is analysed in order to detect possible pathological conditions. The efficacy of this monitoring system has been measured on the basis of video recordings provided by the Department of Neurosciences of the University of Parma.
Concerning clonic seizures, a kinematic analysis was performed to establish a relationship between neonatal seizures and the human inborn pattern of quadrupedal locomotion. Moreover, we decided to build simulators able to replicate the symptomatic movements characteristic of the diseases under consideration. The reason is, essentially, the opportunity to have, at any time, a 'subject' on which to test the continuously evolving detection algorithms. Finally, we developed a smartphone app, called 'Smartphone-based Contactless Epilepsy Detector' (SmartCED), able to detect neonatal clonic seizures and warn the user about their occurrence in real time.
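The ML periodicity estimator used in the work is not detailed in this summary; a simplified stand-in conveys the idea: test whether the dominant spectral peak of an average-luminance signal stands well above the background power, using a hypothetical detection threshold.

```python
import numpy as np

def detect_periodicity(signal, fs, threshold=10.0):
    """Detect a significant periodicity in an average-luminance signal:
    find the dominant spectral peak and compare its power to the mean
    power of the remaining bins (a simple peak-to-background test)."""
    x = np.asarray(signal, float) - np.mean(signal)   # remove DC offset
    spectrum = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    peak = np.argmax(spectrum[1:]) + 1                # skip the DC bin
    background = (np.sum(spectrum[1:]) - spectrum[peak]) / (len(spectrum) - 2)
    periodic = spectrum[peak] > threshold * background
    return bool(periodic), float(freqs[peak])

fs = 30.0                                             # camera frame rate, Hz
t = np.arange(0, 10, 1 / fs)
clonic = np.sin(2 * np.pi * 3.0 * t)                  # 3 Hz rhythmic movement
noise = np.random.default_rng(1).normal(0, 1, t.size) # non-rhythmic motion
print(detect_periodicity(clonic, fs))  # (True, 3.0)
print(detect_periodicity(noise, fs)[0])
```

A rhythmic clonic movement produces a sharp peak at its repetition frequency, while normal infant motion spreads power across the spectrum.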

Relevance:

60.00%

Publisher:

Abstract:

This presentation explores the molarization and overcoding of social machines and relationality within an assemblage consisting of empirical data on immigrant families in Australia. Immigration is key to the sustainable development of Western societies like Australia and Canada. Newly arrived immigrants enter a country and are literally taken over by the Ministry of Immigration regarding housing, health, education and access to job possibilities. If the immigrants do not know the official language(s) of the country, they enroll in language classes for new immigrants. Language classes do more than simply teach language. Language is presented in local contexts (celebrating the national day, what to do to get a job), and in control societies, language classes foreground the values of a nation state in order for immigrants to integrate. In the current project, policy documents from Australia reveal that while immigration is the domain of government, the subject/immigrant is nevertheless at the core of policy. While support is provided, it is the transcendent view of the subject/immigrant that prevails. The onus remains on the immigrant to “succeed”. My perspective lies within transcendental empiricism and deploys Deleuzian ontology (how one might live) to examine how segmentary lines of power (pouvoir), reflected in policy documents and operationalized in language classes, rupture into lines of flight of nomad immigrants. The theoretical framework is Multiple Literacies Theory (MLT); reading is intensive and immanent. The participants are one Korean and one Sudanese family and their children, who have recently immigrated to Australia. Classroom observations were conducted, followed by interviews based on the observations. Families also borrowed small video cameras and filmed places, people and things relevant to them in terms of becoming citizen and immigrating to and living in a different country. Interviews followed. Rhizoanalysis informs the process of reading the data.
Rhizoanalysis is a research event, performed with an assemblage (MLT, data/vignettes, researcher, etc.). It is a way to work with transgressive data. Based on the concept of the rhizome, a bloc of data has no beginning and no ending. A researcher enters in the middle and exits somewhere in the middle, an intermezzo suggesting that the challenges to molar immigration lie in experimenting and creating molecular processes of becoming citizen.

Relevance:

60.00%

Publisher:

Abstract:

Safety is one of the major world health issues, and is even more acute for “vulnerable” road users, pedestrians and cyclists. At the same time, public authorities are promoting the active modes of transportation that involve these very users, for their health benefits. It is therefore important to understand the factors and designs that provide the best safety for vulnerable road users and encourage more people to use these modes. The qualitative and quantitative shortcomings of collision data make it necessary to use surrogate measures of safety in studying these modes. Interactions without a collision, such as conflicts, can be good surrogates for collisions, as they are more frequent and less costly to observe. To address the subjectivity and reliability shortcomings of manual conflict analysis, automatic conflict analysis, using video cameras and deriving users' trajectories, is a solution. The goal of this paper is to identify and characterize various interactions between cyclists and pedestrians at bus stops along bike paths using a fully automated process. Three conflict severity indicators are calculated and adapted to the situation of interest to capture those interactions. A microscopic analysis of users' behavior is proposed to explain the interactions more precisely. Ultimately, the study aims to show the capability of automatically collecting and analyzing data on pedestrian-cyclist interactions at bus stops along segregated bike paths in order to better understand the actual and perceived risks of these facilities.
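The paper's three severity indicators are not named in this summary; one widely used indicator of this kind, Time-to-Collision (TTC) under a constant-velocity assumption, can be sketched as follows (positions, speeds and the collision radius are hypothetical):

```python
import math
import numpy as np

def time_to_collision(p1, v1, p2, v2, radius=1.0):
    """Constant-velocity Time-to-Collision: the earliest time at which
    the distance between two road users would drop below a collision
    radius, or None if their extrapolated paths never come that close."""
    dp = np.asarray(p1, float) - np.asarray(p2, float)
    dv = np.asarray(v1, float) - np.asarray(v2, float)
    # |dp + t*dv|^2 = radius^2  ->  quadratic a*t^2 + b*t + c = 0
    a = dv @ dv
    b = 2.0 * (dp @ dv)
    c = dp @ dp - radius ** 2
    disc = b * b - 4.0 * a * c
    if a == 0.0 or disc < 0.0:
        return None
    t = (-b - math.sqrt(disc)) / (2.0 * a)  # first crossing time
    return t if t >= 0.0 else None

# cyclist heading east at 4 m/s toward a stationary pedestrian 10 m ahead
ttc = time_to_collision(p1=(0, 0), v1=(4, 0), p2=(10, 0), v2=(0, 0), radius=1.0)
print(ttc)  # 2.25 s: the 10 m gap closes at 4 m/s down to the 1 m radius
```

Lower TTC values indicate more severe conflicts, which is why trajectory-based indicators like this serve as surrogates for actual collisions.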

Relevance:

60.00%

Publisher:

Abstract:

It is commonplace to use digital video cameras in robotic applications. These cameras have built-in exposure control, but they have no knowledge of the environment, the lens being used, or the important areas of the image, and so do not always produce optimal image exposure. It is therefore desirable, and often necessary, to control the exposure off the camera. In this paper we present a scheme for exposure control that enables the user application to determine the area of interest. The proposed scheme introduces an intermediate transparent layer between the camera and the user application, which combines information from both to produce optimal exposure. We present results from indoor and outdoor scenarios, using directional and fish-eye lenses, showing the performance and advantages of this framework.
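The paper's control law is not given in this summary; a minimal sketch of the core idea (meter only the application-defined region of interest and nudge the exposure toward a target brightness) follows, with hypothetical gain and target values:

```python
import numpy as np

def exposure_update(image, roi, target=0.5, gain=0.8, ev=0.0):
    """One step of off-camera exposure control: meter only the
    application-defined region of interest (ROI) and apply a
    proportional correction to the exposure value."""
    x0, y0, x1, y1 = roi
    mean = np.mean(image[y0:y1, x0:x1]) / 255.0  # metered ROI brightness in [0, 1]
    error = target - mean                         # positive -> underexposed
    return ev + gain * error                      # new exposure value (EV offset)

# a dark region of interest pushes the exposure value up
img = np.full((100, 100), 40, dtype=np.uint8)
new_ev = exposure_update(img, roi=(10, 10, 50, 50))
print(round(new_ev, 3))  # 0.275: positive, i.e. brighten
```

Running this loop between the application (which supplies the ROI) and the camera driver (which applies the EV) is essentially the role of the intermediate transparent layer the paper describes.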

Relevance:

60.00%

Publisher:

Abstract:

In recent years, rapid advances in information technology have led to various data collection systems that are enriching the sources of empirical data for use in transport systems. Currently, traffic data are collected through various sensors, including loop detectors, probe vehicles, cell phones, Bluetooth, video cameras, remote sensing and public transport smart cards. It has been argued that combining the complementary information from multiple sources will generally result in better accuracy, increased robustness and reduced ambiguity. Despite substantial advances in data assimilation techniques to reconstruct and predict the traffic state from multiple data sources, such methods are generally data-driven and do not fully utilize the power of traffic models. Furthermore, the existing methods are still limited to freeway networks and are not yet applicable in the urban context due to the increased complexity of the flow behavior. The main traffic phenomena on urban links are generally caused by the boundary conditions at intersections, signalized or not, at which the switching of the traffic lights and the turning maneuvers of the road users lead to shock-wave phenomena that propagate upstream of the intersections. This paper develops a new model-based methodology to build a real-time traffic prediction model for arterial corridors using data from multiple sources, particularly loop detectors and partial observations from Bluetooth and GPS devices.
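As a one-line illustration of the shock-wave phenomena mentioned above: in kinematic-wave (LWR) traffic theory, the speed of the wave separating two traffic states follows directly from flow conservation. The numbers below are hypothetical but typical.

```python
def shockwave_speed(q1, k1, q2, k2):
    """Kinematic-wave (LWR) shockwave speed between two traffic states,
    with flow q in veh/h and density k in veh/km:
        w = (q2 - q1) / (k2 - k1)   [km/h]
    A negative speed means the wave front moves upstream."""
    return (q2 - q1) / (k2 - k1)

# a red light: free flow (1800 veh/h at 30 veh/km) meets a standing
# queue (0 veh/h at 150 veh/km) at a signalized intersection
w = shockwave_speed(1800, 30, 0, 150)
print(w)  # -15.0 km/h: the queue tail propagates upstream of the stop line
```

It is exactly this upstream propagation from signalized intersections that makes urban traffic state estimation harder than the freeway case.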

Relevance:

60.00%

Publisher:

Abstract:

Introduction
Markerless motion capture systems are relatively new devices that can significantly speed up capturing full-body motion. The precision of this type of equipment in assessing finger position was evaluated at 17.30 ± 9.56 mm when compared to an active-marker system [1]. The Microsoft Kinect has been proposed to standardize and enhance the clinical evaluation of patients with hemiplegic cerebral palsy [2]. Markerless motion capture systems have the potential to be used in a clinical setting for movement analysis, as well as for large-cohort research. However, the precision of such systems needs to be characterized.

Global objectives
• To assess the precision within the recording field of the markerless motion capture system OpenStage 2 (Organic Motion, NY).
• To compare the markerless motion capture system with an optoelectric motion capture system with active markers.

Specific objectives
• To assess the noise of a static body at 13 different locations within the recording field of the markerless motion capture system.
• To assess the smallest oscillation detected by the markerless motion capture system.
• To assess the difference between both systems regarding body joint angle measurement.

Methods
Equipment
• OpenStage® 2 (Organic Motion, NY)
  o Markerless motion capture system
  o 16 video cameras (acquisition rate: 60 Hz)
  o Recording zone: 4 m * 5 m * 2.4 m (depth * width * height)
  o Provides position and angle of 23 different body segments
• VisualeyezTM VZ4000 (PhoeniX Technologies Incorporated, BC)
  o Optoelectric motion capture system with active markers
  o 4-tracker system (total of 12 cameras)
  o Accuracy: 0.5~0.7 mm

Protocol & Analysis
• Static noise:
  o Motion recording of a humanoid mannequin was done in 13 different locations.
  o RMSE was calculated for each segment in each location.
• Smallest oscillation detected:
  o Small oscillations were induced to the humanoid mannequin and motion was recorded until it stopped.
  o The correlation between the displacement of the head recorded by both systems was measured. A corresponding magnitude was also measured.
• Body joint angles:
  o Body motion was recorded simultaneously with both systems (left side only).
  o 6 participants (3 females; 32.7 ± 9.4 years old).
  o Tasks: walk, squat, shoulder flexion & abduction, elbow flexion, wrist extension, pronation/supination (not in results), head flexion & rotation (not in results), leg rotation (not in results), trunk rotation (not in results).
  o Several body joint angles were measured with both systems.
  o RMSE was calculated between the signals of both systems.

Results

Conclusion
Results show that the Organic Motion markerless system has the potential to be used for the assessment of clinical motor symptoms or motor performance. However, the following points should be considered:
• Precision of the OpenStage system varied within the recording field.
• Precision is not constant between limb segments.
• The error seems to be higher close to the range-of-motion extremities.
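The RMSE used to compare the two systems' joint-angle signals is the standard root-mean-square error; a minimal sketch with made-up angle values:

```python
import numpy as np

def rmse(a, b):
    """Root-mean-square error between two joint-angle time series,
    e.g. the same elbow flexion recorded by both capture systems."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(np.sqrt(np.mean((a - b) ** 2)))

# hypothetical elbow-flexion angles (degrees), sampled simultaneously
marker_based = [10.0, 45.0, 90.0, 45.0, 10.0]  # active-marker reference
markerless   = [12.0, 43.0, 93.0, 46.0,  9.0]  # markerless estimate
print(round(rmse(marker_based, markerless), 3))  # 1.949 degrees
```

In practice the two systems' signals would first be time-synchronized and resampled to a common rate before the RMSE is computed.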

Relevance:

60.00%

Publisher:

Abstract:

Technology is increasingly infiltrating all aspects of our lives, and the rapid uptake of devices that live near, on or in our bodies is facilitating radical new ways of working, relating and socialising. This distribution of technology into the very fabric of our everyday life creates new possibilities, but also raises questions regarding our future relationship with data and the quantified self. By embedding technology into the fabric of our clothes and accessories, it becomes ‘wearable’. Such ‘wearables’ enable the acquisition of, and the connection to, vast amounts of data about people and environments in order to provide life-augmenting levels of interactivity. Wearable sensors, for example, offer the potential for significant benefits in the future management of our wellbeing. Fitness trackers such as ‘Fitbit’ and ‘Garmin’ provide wearers with the ability to monitor their personal fitness indicators, while other wearables provide healthcare professionals with information that improves diagnosis. While the rapid uptake of wearables may offer unique and innovative opportunities, there are also concerns surrounding the high levels of data sharing that come as a consequence of these technologies. As more ‘smart’ devices connect to the Internet, and as connectivity becomes increasingly available (e.g. via Wi-Fi, Bluetooth), more products, artefacts and things are becoming interconnected. This digital connection of devices is called the ‘Internet of Things’ (IoT). The IoT is spreading rapidly, with many traditionally non-online devices becoming increasingly connected: products such as mobile phones, fridges, pedometers, coffee machines, video cameras, cars and clothing. The IoT is growing at a rapid rate, with estimates indicating that by 2020 there will be over 25 billion connected things globally. As the number of devices connected to the Internet increases, so too does the amount of data collected and the type of information that is stored and potentially shared.
The ability to collect massive amounts of data - known as ‘big data’ - can be used to better understand and predict behaviours across all areas of research, from the societal and economic to the environmental and biological. With this kind of information at our disposal, we have a more powerful lens with which to perceive the world, and the resulting insights can be used to design more appropriate products, services and systems. It can, however, also be used as a method of surveillance, suppression and coercion by governments or large organisations. This is becoming particularly apparent in advertising that targets audiences based on the individual preferences revealed by the data collected from social media and online devices such as GPS systems or pedometers. This type of technology also provides fertile ground for public debates around future fashion, identity and broader social issues such as culture, politics and the environment. The potential implications of these types of technological interactions via wearables, through and with the IoT, have never been more real or more accessible. But, as highlighted, this interconnectedness also brings with it complex technical, ethical and moral challenges. Data security and the protection of privacy and personal information will become ever more present in the ethical and moral debates of the 21st century. This type of technology is also a stepping-stone to a future that includes implantable technology, biotechnologies, interspecies communication and augmented humans (cyborgs). Technologies that live symbiotically and perpetually in our bodies, the built environment and the natural environment are no longer the stuff of science fiction; they are in fact a reality. So, where next? The works exhibited in Wear Next_ provide a snapshot of the broad spectrum of wearables in design and in development internationally.
This exhibition has been curated to serve as a platform for broader debate around future technology, our mediated future selves and the evolution of human interactions. As you explore the exhibition, may we ask that you pause and think to yourself: what might we... Wear Next_?

WEARNEXT ONLINE LISTINGS AND MEDIA COVERAGE:
http://indulgemagazine.net/wear-next/
http://www.weekendnotes.com/wear-next-exhibition-gallery-artisan/
http://concreteplayground.com/brisbane/event/wear-next_/
http://www.nationalcraftinitiative.com.au/news_and_events/event/48/wear-next
http://bneart.com/whats-on/wear-next_/
http://creativelysould.tumblr.com/post/124899079611/creative-weekend-art-edition
http://www.abc.net.au/radionational/programs/breakfast/smartly-dressed-the-future-of-wearable-technology/6744374
http://couriermail.newspaperdirect.com/epaper/viewer.aspx

RADIO COVERAGE:
http://www.abc.net.au/radionational/programs/breakfast/wear-next-exhibition-whats-next-for-wearable-technology/6745986

TELEVISION COVERAGE:
http://www.abc.net.au/radionational/programs/breakfast/wear-next-exhibition-whats-next-for-wearable-technology/6745986
https://au.news.yahoo.com/video/watch/29439742/how-you-could-soon-be-wearing-smart-clothes/#page1