834 results for Cameras
Detecting the attributes of a wheat crop using digital imagery acquired from a low-altitude platform
Abstract:
A low-altitude platform utilising a 1.8-m diameter tethered helium balloon was used to position a multispectral sensor, consisting of two digital cameras, above a fertiliser trial plot where wheat (Triticum spp.) was being grown. Located in Cecil Plains, Queensland, Australia, the plot was part of a long-term fertiliser trial being conducted by a fertiliser company to monitor the response of crops to various levels of nutrition. The different levels of nutrition were achieved by varying nitrogen application rates between 0 and 120 units of N in 40-unit increments. Each plot had received the same application rate for 10 years. Colour and near-infrared images were acquired that captured the whole 2-ha plot. These images were examined, and relationships were sought between the captured digital information, the crop parameters imaged at anthesis, and the at-harvest quality and quantity parameters. The statistical analysis techniques used were correlation analysis, discriminant analysis and partial least squares regression. A high correlation was found between the image and yield (R2 = 0.91) and a moderate correlation between the image and grain protein content (R2 = 0.66). The utility of the system could be extended by choosing a more mobile platform. This would increase the potential for the system to be used to diagnose the causes of the variability and allow remediation, and/or to segregate the crop at harvest to meet certain quality parameters.
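The correlation analysis behind an R2 such as the 0.91 reported for yield can be sketched in a few lines. The per-plot values below are invented, and NDVI is assumed as the image-derived index purely for illustration; the abstract does not specify which index or band combination was used.

```python
import numpy as np

# Hypothetical per-plot values: mean red and near-infrared reflectance
# extracted from the two-camera imagery, and the measured grain yield (t/ha).
nir = np.array([0.42, 0.48, 0.55, 0.61, 0.66])
red = np.array([0.30, 0.26, 0.21, 0.17, 0.14])
yield_t_ha = np.array([2.1, 2.9, 3.8, 4.6, 5.2])

# NDVI = (NIR - red) / (NIR + red), a common vegetation index (assumed here).
ndvi = (nir - red) / (nir + red)

# Correlation analysis: R^2 is the squared Pearson correlation
# coefficient between the image-derived index and yield.
r = np.corrcoef(ndvi, yield_t_ha)[0, 1]
r_squared = r ** 2
print(round(r_squared, 3))
```

With real per-plot imagery the same calculation would be run against each candidate index or band before moving on to discriminant analysis or partial least squares regression.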
Abstract:
The vision sense of standalone robots is limited by line of sight and onboard camera capabilities, but processing video from remote cameras puts a high computational burden on robots. This paper describes the Distributed Robotic Vision Service, DRVS, which implements an on-demand distributed visual object detection service. Robots specify visual information requirements in terms of regions of interest and object detection algorithms. DRVS dynamically distributes the object detection computation to remote vision systems with processing capabilities, and the robots receive high-level object detection information. DRVS relieves robots of managing sensor discovery and reduces data transmission compared to image sharing models of distributed vision. Navigating a sensorless robot from remote vision systems is demonstrated in simulation as a proof of concept.
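The request/dispatch flow described above can be sketched as follows. The class and field names, the rectangle-based coverage model, and the dispatch rule are all assumptions for illustration; the paper does not publish a concrete API.

```python
from dataclasses import dataclass

@dataclass
class DetectionRequest:
    region: tuple        # (x_min, y_min, x_max, y_max) in world coordinates
    object_type: str     # e.g. "person" (hypothetical label)

@dataclass
class CameraNode:
    node_id: str
    coverage: tuple      # (x_min, y_min, x_max, y_max) the camera observes

def overlaps(a, b):
    """Axis-aligned rectangle overlap test."""
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

def dispatch(request, nodes):
    """Forward the detection request only to camera nodes whose coverage
    intersects the requested region; the robot never receives raw video,
    only the nodes' high-level detection results."""
    return [n.node_id for n in nodes if overlaps(n.coverage, request.region)]

nodes = [CameraNode("cam-a", (0, 0, 5, 5)),
         CameraNode("cam-b", (4, 4, 10, 10)),
         CameraNode("cam-c", (20, 20, 25, 25))]
req = DetectionRequest(region=(3, 3, 6, 6), object_type="person")
print(dispatch(req, nodes))   # cam-a and cam-b cover the region
```

The key point the sketch captures is that sensor discovery happens on the service side: the robot states only the region and object type, and DRVS decides which vision systems do the work.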
Abstract:
The Great Barrier Reef is a unique World Heritage Area of national and international significance. As a multiple use Marine Park, activities such as fishing and tourism occur along with conservation goals. Managers need information on habitats and biodiversity distribution and risks to ensure these activities are conducted sustainably. However, while the coral reefs have been relatively well studied, less was known about the deeper seabed in the region. From 2003 to 2006, the GBR Seabed Biodiversity Project has mapped habitats and their associated biodiversity across the length and breadth of the Marine Park to provide information that will help managers with conservation planning and to assess whether fisheries are ecologically sustainable, as required by environmental protection legislation (e.g. EPBC Act 1999). Holistic information on the biodiversity of the seabed was acquired by visiting almost 1,500 sites, representing a full range of known environments, during 10 month-long voyages on two vessels and deploying several types of devices such as: towed video and digital cameras, baited remote underwater video stations (BRUVS), a digital echo-sounder, an epibenthic sled and a research trawl to collect samples for more detailed data about plants, invertebrates and fishes on the seabed. Data were collected and processed from >600 km of towed video and almost 100,000 photos, 1150 BRUVS videos, ~140 GB of digital echograms, and from sorting and identification of ~14,000 benthic samples, ~4,000 seabed fish samples, and ~1,200 sediment samples.
Abstract:
Introduction
Markerless motion capture systems are relatively new devices that can significantly speed up capturing full-body motion. The precision of finger-position assessment with this type of equipment was evaluated at 17.30 ± 9.56 mm when compared to an active marker system [1]. The Microsoft Kinect has been proposed to standardize and enhance the clinical evaluation of patients with hemiplegic cerebral palsy [2]. Markerless motion capture systems have the potential to be used in a clinical setting for movement analysis, as well as for large-cohort research. However, the precision of such systems needs to be characterized.
Global objectives
• To assess the precision within the recording field of the markerless motion capture system OpenStage 2 (Organic Motion, NY).
• To compare the markerless motion capture system with an optoelectric motion capture system with active markers.
Specific objectives
• To assess the noise of a static body at 13 different locations within the recording field of the markerless motion capture system.
• To assess the smallest oscillation detected by the markerless motion capture system.
• To assess the difference between both systems regarding body joint angle measurement.
Methods
Equipment
• OpenStage® 2 (Organic Motion, NY)
o Markerless motion capture system
o 16 video cameras (acquisition rate: 60 Hz)
o Recording zone: 4 m * 5 m * 2.4 m (depth * width * height)
o Provides position and angle of 23 different body segments
• Visualeyez™ VZ4000 (PhoeniX Technologies Incorporated, BC)
o Optoelectric motion capture system with active markers
o 4-tracker system (total of 12 cameras)
o Accuracy: 0.5~0.7 mm
Protocol & Analysis
• Static noise:
o Motion recording of a humanoid mannequin was done in 13 different locations.
o RMSE was calculated for each segment in each location.
• Smallest oscillation detected:
o Small oscillations were induced in the humanoid mannequin and motion was recorded until it stopped.
o The correlation between the displacement of the head recorded by both systems was measured, along with the corresponding magnitude.
• Body joint angles:
o Body motion was recorded simultaneously with both systems (left side only).
o 6 participants (3 females; 32.7 ± 9.4 years old).
o Tasks: Walk, Squat, Shoulder flexion & abduction, Elbow flexion, Wrist extension, Pronation/supination (not in results), Head flexion & rotation (not in results), Leg rotation (not in results), Trunk rotation (not in results).
o Several body joint angles were measured with both systems.
o RMSE was calculated between the signals of both systems.
Results
Conclusion
Results show that the Organic Motion markerless system has the potential to be used for the assessment of clinical motor symptoms or motor performance. However, the following points should be considered:
• Precision of the OpenStage system varied within the recording field.
• Precision is not constant between limb segments.
• The error seems to be higher close to the extremities of the range of motion.
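The RMSE comparison used throughout the protocol can be sketched in a few lines: the same joint angle is recorded by both systems and the root-mean-square error between the two signals quantifies their disagreement. The signals below are synthetic (a 0.5 Hz sine standing in for a joint-angle trace, with added noise standing in for inter-system differences); the real data come from the two capture systems after temporal alignment.

```python
import numpy as np

t = np.linspace(0.0, 2.0, 121)                              # 2 s at 60 Hz
angle_optoelectric = 45 + 20 * np.sin(2 * np.pi * 0.5 * t)  # reference system
noise = np.random.default_rng(0).normal(0.0, 1.5, t.size)   # synthetic disagreement
angle_markerless = angle_optoelectric + noise               # markerless system

# RMSE between the two systems' signals, in the same units as the angle (deg).
rmse = float(np.sqrt(np.mean((angle_markerless - angle_optoelectric) ** 2)))
print(round(rmse, 2))
```

In the actual protocol this is computed per joint angle and per task, which is what exposes the variation in precision between limb segments noted in the conclusion.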
Abstract:
Red light cameras were introduced in Victoria in August 1983, with the intention of reducing the number of accidents that result from motorists disobeying red traffic signals at signalised intersections. Accident data from 46 treated and 46 control sites from 1981 to 1986 were analysed. The analysis indicated that red light camera use resulted in a reduction in the incidence of right-angle accidents, and in the number of accident casualties. Legislation was introduced in March 1986 to place the onus for red light camera offences onto the vehicle owner. This legislation was intended to improve Police efficiency and thereby increase the number of red light cameras in operation. Data supplied by the Police indicated that these aims were achieved, with beneficial road safety effects.
Abstract:
EXECUTIVE SUMMARY (excerpts) The red light camera (RLC) program commenced in July 1988, with five cameras operating at 15 sites in metropolitan Adelaide. This report deals with the first eighteen months of operation, to December 1989. A number of recommendations have been made… PROGRAM EVALUATION … In 1989 dollars, the program was estimated to have achieved an accident reduction benefit of $1.4m in the first 12 months of operation, which is almost twice the benefit expected using the assumptions made when selecting the sites. (There are 8 recommendations, mostly specific to the particular program characteristics)
Abstract:
Red light cameras were introduced in August 1983 to deter run-the-red offences and therefore to reduce the incidence of right-angle accidents at signalised intersections in Melbourne. This report was prepared after two years of operation of the program. It provides a detailed account of the technical aspects of the program, but does not provide any detailed, evaluative analyses of accident data.
Abstract:
Due to recent developments in CCD technology, aerial photography is now slowly changing from film to digital cameras. This new aspect of remote sensing both allows and requires new automated analysis methods. Basic research on the reflectance properties of natural targets is needed so that computerized processes can be fully utilized. For this reason, an instrument was developed at the Finnish Geodetic Institute for measuring the multiangular reflectance of small remote sensing targets, e.g. forest understorey or asphalt. The Finnish Geodetic Institute Field Goniospectrometer (FiGIFiGo) is a portable device that is operated by 1 or 2 persons. It can be reassembled at a new location in 15 minutes, after which a target's multiangular reflectance can be measured in 10 - 30 minutes (with one illumination angle). FiGIFiGo has an effective spectral range of approximately 400 nm to 2000 nm. The measurements can be made either outside in sunlight or in the laboratory with a 1000 W QTH light source. In this thesis, FiGIFiGo is introduced and the theoretical basis of such reflectance measurements is discussed. A new method is introduced for extracting subcomponent proportions from the reflectance of a mixture sample, e.g. for retrieving the proportion of lingonberry's reflectance in an observation of a lingonberry-lichen sample. This method was tested by conducting a series of measurements of the reflectance properties of artificial samples. The component separation method yielded sound results and brought up interesting aspects of the targets' reflectances. The method and the results still need to be verified in further studies, but the preliminary results imply that this method could be a valuable tool in the analysis of such mixture samples.
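Under a linear mixing assumption, the subcomponent-separation idea can be sketched as a least-squares problem: the measured mixture reflectance is modelled as a weighted sum of pure-component spectra, and the weights (proportions) are recovered by solving the overdetermined system. The spectra below are invented, and the thesis' actual separation method may differ from this plain linear unmixing.

```python
import numpy as np

# Invented reflectance spectra of the two pure components on a coarse
# wavelength grid spanning FiGIFiGo's 400-2000 nm range.
lingonberry = np.array([0.05, 0.08, 0.12, 0.35, 0.40, 0.42, 0.38, 0.30, 0.25])
lichen      = np.array([0.20, 0.25, 0.30, 0.45, 0.50, 0.52, 0.48, 0.45, 0.40])

# Synthetic mixture: 30% lingonberry, 70% lichen.
true_props = np.array([0.3, 0.7])
mixture = true_props[0] * lingonberry + true_props[1] * lichen

# Solve  mixture ~= A @ p  for the proportion vector p by least squares.
A = np.column_stack([lingonberry, lichen])
p, *_ = np.linalg.lstsq(A, mixture, rcond=None)
print(np.round(p, 3))   # recovers approximately [0.3, 0.7]
```

With real goniometer data the fit would be repeated per view/illumination angle, and the recovered proportions checked for physical plausibility (non-negative, summing to roughly one).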
Abstract:
Bait containing sodium fluoroacetate (1080) is widely used for the routine control of feral pigs in Australia. In Queensland, meat baits are popular in western and northern pastoral areas where they are readily accepted by feral pigs and can be distributed aerially. Field studies have indicated some levels of interference and consumption of baits by nontarget species and, based on toxicity data and the 1080 content of baits, many nontarget species (particularly birds and varanids) are potentially at risk through primary poisoning. While occasional deaths of species have been recorded, it remains unclear whether the level of mortality is sufficient to threaten the viability or ecological function of species. A series of field trials at Culgoa National Park in south-western Queensland was conducted to determine the effect of broadscale aerial baiting (1.7 baits per km2) on the density of nontarget avian species that may consume baits. Counts of susceptible bird species were conducted prior to and following aerial baiting, and on three nearby unbaited properties, in May and November 2011, and May 2012. A sample of baits was monitored with remote cameras in the November 2011 and May 2012 trials. Over the three baiting campaigns, there was no evidence of a population-level decline among the seven avian nontarget species that were monitored. Of the baits monitored by remote cameras, 30% and 15% were sampled by birds, varanids or other reptiles in the November 2011 and May 2012 trials, respectively. These results support the continued use of 1080 meat baits for feral pig management in western Queensland and similar environs.
Abstract:
User-generated information such as product reviews has been booming due to the advent of Web 2.0. In particular, rich information associated with the reviewed products lies buried in such big data. To facilitate identifying useful information in product reviews (e.g., of cameras), opinion mining has been proposed and widely used in recent years. As the most critical step of opinion mining, feature extraction aims to extract significant product features from review texts. However, most existing approaches only find individual features rather than identifying the hierarchical relationships between product features. In this paper, we propose an approach which finds both features and feature relationships, structured as a feature hierarchy, referred to as a feature taxonomy in the remainder of the paper. Specifically, by making use of frequent patterns and association rules, we construct the feature taxonomy to profile the product at multiple levels instead of a single level, which provides more detailed information about the product. Experiments conducted on real-world review datasets show that our proposed method is capable of identifying product features and relations effectively.
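The frequent-pattern / association-rule idea behind the taxonomy can be sketched as follows: a feature becomes a child of another when the pair is frequent and the child co-occurs with the candidate parent in most of its reviews. The toy reviews, thresholds, and attachment rule are illustrative; the paper's actual construction is more elaborate.

```python
from collections import Counter
from itertools import combinations

# Toy camera reviews, each reduced to its set of mentioned features.
reviews = [
    {"camera", "lens", "zoom"},
    {"camera", "lens", "autofocus"},
    {"camera", "battery"},
    {"camera", "lens", "zoom"},
    {"battery", "charger"},
]
min_support, min_confidence = 2, 0.8

support = Counter()
for r in reviews:
    support.update(r)                            # single-feature support
    support.update(combinations(sorted(r), 2))   # feature-pair support

frequent = sorted(f for f in support
                  if isinstance(f, str) and support[f] >= min_support)

taxonomy = {}   # child feature -> parent feature
for child in frequent:
    # Association-rule confidence: conf(child -> parent) = supp(pair) / supp(child).
    parents = [p for p in frequent if p != child
               and support[tuple(sorted((child, p)))] >= min_support
               and support[tuple(sorted((child, p)))] / support[child] >= min_confidence]
    if parents:
        # Attach to the most frequent (most general) qualifying parent.
        taxonomy[child] = max(parents, key=lambda p: support[p])
print(taxonomy)   # {'lens': 'camera', 'zoom': 'camera'}
```

On this toy data "lens" and "zoom" attach under "camera", giving a two-level profile of the product rather than a flat feature list.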
Abstract:
Sensor networks represent an attractive tool to observe the physical world. Networks of tiny sensors can be used to detect a fire in a forest, to monitor the level of pollution in a river, or to check on the structural integrity of a bridge. Application-specific deployments of static-sensor networks have been widely investigated. Commonly, these networks involve a centralized data-collection point and no sharing of data outside the organization that owns it. Although this approach can accommodate many application scenarios, it significantly deviates from the pervasive computing vision of ubiquitous sensing where user applications seamlessly access anytime, anywhere data produced by sensors embedded in the surroundings. With the ubiquity and ever-increasing capabilities of mobile devices, urban environments can help give substance to the ubiquitous sensing vision through Urbanets, spontaneously created urban networks. Urbanets consist of mobile multi-sensor devices, such as smart phones and vehicular systems, public sensor networks deployed by municipalities, and individual sensors incorporated in buildings, roads, or daily artifacts. My thesis is that "multi-sensor mobile devices can be successfully programmed to become the underpinning elements of an open, infrastructure-less, distributed sensing platform that can bring sensor data out of their traditional closed-loop networks into everyday urban applications". Urbanets can support a variety of services ranging from emergency and surveillance to tourist guidance and entertainment. For instance, cars can be used to provide traffic information services to alert drivers to upcoming traffic jams, and phones to provide shopping recommender services to inform users of special offers at the mall. Urbanets cannot be programmed using traditional distributed computing models, which assume underlying networks with functionally homogeneous nodes, stable configurations, and known delays.
Conversely, Urbanets have functionally heterogeneous nodes, volatile configurations, and unknown delays. Instead, solutions developed for sensor networks and mobile ad hoc networks can be leveraged to provide novel architectures that address Urbanet-specific requirements, while providing useful abstractions that hide the network complexity from the programmer. This dissertation presents two middleware architectures that can support mobile sensing applications in Urbanets. Contory offers a declarative programming model that views Urbanets as a distributed sensor database and exposes an SQL-like interface to developers. Context-aware Migratory Services provides a client-server paradigm, where services are capable of migrating to different nodes in the network in order to maintain a continuous and semantically correct interaction with clients. Compared to previous approaches to supporting mobile sensing urban applications, our architectures are entirely distributed and do not assume constant availability of Internet connectivity. In addition, they allow on-demand collection of sensor data with the accuracy and at the frequency required by every application. These architectures have been implemented in Java and tested on smart phones. They have proved successful in supporting several prototype applications and experimental results obtained in ad hoc networks of phones have demonstrated their feasibility with reasonable performance in terms of latency, memory, and energy consumption.
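Contory's declarative view of an Urbanet as a distributed sensor database can be sketched as follows. The class names, query function, and evaluation logic are invented for illustration; Contory's real SQL-like grammar and runtime are defined in the dissertation.

```python
class Node:
    """A mobile device in the Urbanet carrying zero or more sensors."""
    def __init__(self, node_id, sensors):
        self.node_id = node_id
        self.sensors = sensors          # e.g. {"temperature": 21.5}

def query(nodes, select, where):
    """Evaluate a SELECT-style query against whichever nodes currently
    carry the requested sensor; nodes without it are transparently
    skipped, mirroring on-demand, application-driven data collection."""
    return [
        {"node": n.node_id, select: n.sensors[select]}
        for n in nodes
        if select in n.sensors and where(n.sensors[select])
    ]

urbanet = [
    Node("phone-1", {"temperature": 21.5, "noise_db": 60}),
    Node("car-7",   {"temperature": 24.0}),
    Node("phone-2", {"noise_db": 72}),
]
# Roughly: SELECT temperature FROM urbanet WHERE temperature > 22
print(query(urbanet, "temperature", lambda v: v > 22))
```

The point the sketch captures is the programming model: the application names the data it wants and a predicate, and the middleware, not the application, copes with heterogeneous nodes and volatile membership.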
Abstract:
Fresh meat baits containing sodium fluoroacetate (1080) are widely used for controlling feral pigs in Queensland, but there is a potential poisoning risk to non-target species. This study investigated non-target species' interactions with meat bait by comparing the time until first approach, investigation, sample and consumption, and whether dyeing bait green would reduce interactions. A trial assessing species' interactions with undyed bait was completed at Culgoa Floodplain National Park, Queensland. Meat baits were monitored for 79 consecutive days with camera traps. Of 40 baits, 100% were approached, 35% were investigated (moved), 25% were sampled and 25% were consumed. Monitors approached (P < 0.05) and investigated (P < 0.05) the bait more rapidly than pigs or birds, but the median time until first sampling was not significantly different (P > 0.05), and monitors did not consume any entire bait. A second trial was conducted at Whetstone State Forest, southern Queensland, with green-dyed and undyed baits monitored for eight consecutive days with cameras. Of 60 baits, 92% were approached and also investigated by one or more non-target species. Most (85%) were sampled and 57% were consumed, with monitors having slightly more interaction with undyed baits than with green-dyed baits. Mean time until first approach and first sample differed significantly between species groups (P = 0.038 and 0.007 respectively), with birds approaching sooner (P < 0.05) and monitors sampling later (P < 0.05) than other (unknown) species (P > 0.05). Undyed bait was sampled earlier (mean 2.19 days) than green-dyed bait (2.7 days) (P = 0.003). Data from the two trials demonstrate that many non-target species regularly visit and sample baits. The use of green-dyed baits may help reduce non-target uptake, but testing is required to determine the effect on attractiveness to feral pigs.
Further research is recommended to quantify the benefits of potential strategies to reduce the non-target uptake of meat baits to help improve the availability of bait to feral pigs.
Abstract:
Robotic vision is limited by line of sight and onboard camera capabilities. Robots can acquire video or images from remote cameras, but processing the additional data imposes a computational burden. This paper applies the Distributed Robotic Vision Service, DRVS, to robot path planning using data from outside the robot's line of sight. DRVS implements a distributed visual object detection service that distributes the computation to remote camera nodes with processing capabilities. Robots request task-specific object detection from DRVS by specifying a geographic region of interest and an object type. The remote camera nodes perform the visual processing and send the high-level object information to the robot. Additionally, DRVS relieves robots of sensor discovery by dynamically distributing object detection requests to remote camera nodes. Tested on two different indoor path planning tasks, DRVS showed a dramatic reduction in mobile robot compute load and wireless network utilization.
Abstract:
This paper argues that the Panopticon is an accurate model for and illustration of policing and security methods in the modern society. Initially, I overview the theoretical concept of the Panopticon as a structure of perceived universal surveillance which facilitates automatic obedience in its subjects as identified by the theorists Jeremy Bentham and Michel Foucault. The paper subsequently moves to identify how the Panopticon, despite being a theoretical construct, is nevertheless instantiated to an extent through the prevalence of security cameras as a means of sovereignly regulating human conduct; speeding is an ordinary example. It could even be contended that increasing surveillance according to the model of the Panopticon would reduce the frequency of offences. However, in the final analysis the paper considers that even if adopting an approach based on the Panopticon is a more effective method of policing, it is not necessarily a more desirable one.
Abstract:
This article addresses the problem of how to select the optimal combination of sensors and how to determine their optimal placement in a surveillance region in order to meet the given performance requirements at a minimal cost for a multimedia surveillance system. We propose to solve this problem by obtaining a performance vector, with its elements representing the performances of subtasks, for a given input combination of sensors and their placement. We then show that the optimal sensor selection problem can be converted into an Integer Linear Programming (ILP) problem by using a linear model for computing the optimal performance vector corresponding to a sensor combination, i.e. the performance vector corresponding to the optimal placement of that sensor combination. To demonstrate the utility of our technique, we design and build a surveillance system consisting of PTZ (Pan-Tilt-Zoom) cameras and active motion sensors for capturing faces. Finally, we show experimentally that optimal placement of sensors based on this design maximizes the system performance.
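The structure of the selection problem can be sketched with a tiny exhaustive search: choose how many units of each sensor type to deploy so that every subtask's performance requirement is met at minimal cost. The paper solves this as an ILP with a linear performance model; the sensor types, costs, and performance contributions below are invented, and brute force stands in for the ILP solver at this toy scale.

```python
from itertools import product

# Per sensor type: unit cost, and performance contribution per subtask
# (face capture, motion detection). Values are illustrative only.
sensors = {
    "ptz_camera":    {"cost": 500, "perf": (0.4, 0.1)},
    "motion_sensor": {"cost": 100, "perf": (0.0, 0.3)},
}
required = (0.7, 0.5)   # minimum performance per subtask
max_units = 3           # deploy at most 3 units of each type

best = None
for counts in product(range(max_units + 1), repeat=len(sensors)):
    perf = [0.0, 0.0]
    cost = 0
    for c, spec in zip(counts, sensors.values()):
        cost += c * spec["cost"]
        for i, p in enumerate(spec["perf"]):
            perf[i] += c * p            # linear performance model
    if all(p >= r for p, r in zip(perf, required)):
        if best is None or cost < best[0]:
            best = (cost, dict(zip(sensors, counts)))
print(best)   # (1100, {'ptz_camera': 2, 'motion_sensor': 1})
```

At realistic problem sizes the same constraints and objective would be handed to an ILP solver rather than enumerated; the additive performance model is the simplification that makes the ILP formulation possible.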