839 results for Automated Guideways.
Abstract:
Automated crowd counting allows excessive crowding to be detected immediately, without the need for constant human surveillance. Current crowd counting systems are location-specific: to function properly they must be trained on a large amount of data from the target location. As such, configuring multiple systems for use is a tedious and time-consuming exercise. We propose a scene-invariant crowd counting system which can easily be deployed at a location different from the one where it was trained. This is achieved using a global scaling factor to relate crowd sizes from one scene to another. We demonstrate that a crowd counting system trained at one viewpoint can achieve a correct classification rate of 90% at a different viewpoint.
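The abstract does not spell out how the global scaling factor is obtained, so the following Python sketch shows one plausible reading: fit a single least-squares scale from a few manually counted calibration frames in the new scene, then rescale the source-scene model's predictions. All function names and numbers are hypothetical, not the paper's method.

```python
import numpy as np

def estimate_scaling_factor(source_predictions, target_counts):
    """Least-squares scalar relating counts predicted by the source-scene model
    to true counts observed in the target scene (hypothetical calibration step)."""
    s = np.asarray(source_predictions, dtype=float)
    t = np.asarray(target_counts, dtype=float)
    return float(np.dot(s, t) / np.dot(s, s))

def rescale_count(predicted_count, scale):
    """Map a count predicted by the source-scene model into the target scene."""
    return scale * predicted_count

# A handful of calibration frames from the new viewpoint (illustrative values).
calibration_predictions = [12.0, 18.5, 25.0]   # model trained at viewpoint A
calibration_ground_truth = [15.0, 23.0, 31.0]  # manual counts at viewpoint B
scale = estimate_scaling_factor(calibration_predictions, calibration_ground_truth)
print(rescale_count(20.0, scale))
```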
Abstract:
Objective: To determine whether there are clinical and public health dilemmas resulting from the reproducibility of routine vitamin D assays. Methods: Blinded agreement studies were conducted in eight clinical laboratories in Australasia and Canada using two commonly used assays to measure serum 25-hydroxyvitamin D (25(OH)D) levels (the DiaSorin radioimmunoassay (RIA) and the DiaSorin LIAISON® assay). Results: Only one laboratory measured 25(OH)D with excellent precision. Replicate 25(OH)D measurements varied by up to 97%, and 15% of paired results differed by more than 50%. Thirteen percent of subjects received one result indicating insufficiency (25-50 nmol/l) and another suggesting adequacy (>50 nmol/l). Agreement ranged from poor to excellent for laboratories using the manual RIA, while the precision of the semi-automated LIAISON® system was consistently poor. Conclusions: Recent interest in the relevance of vitamin D to human health has increased demand for 25(OH)D testing and the associated costs. Our results suggest clinicians and public health authorities are making decisions about treatment or changes to public health policy based on imprecise data. Clinicians, researchers and policy makers should be made aware of the imprecision of current 25(OH)D testing so that they exercise caution when using these assays in clinical practice, and when interpreting the findings of epidemiological studies based on vitamin D levels measured with these assays. Development of a rapid, reproducible, accurate and robust assay should be a priority, given the interest in population-based screening programs and in research to inform public health policy about the amount of sun exposure required for human health. In the interim, 25(OH)D results should routinely include a statement of measurement uncertainty.
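To make the reported imprecision concrete, here is a small Python sketch (with made-up replicate values) of the two quantities the study reports: the percent difference between paired replicate results and the fraction of pairs that fall on opposite sides of the 50 nmol/l adequacy cutoff.

```python
import numpy as np

def percent_difference(rep1, rep2):
    """Percent difference between paired replicate results, relative to their mean."""
    a, b = np.asarray(rep1, float), np.asarray(rep2, float)
    return 100.0 * np.abs(a - b) / ((a + b) / 2.0)

def discordant_at_cutoff(rep1, rep2, cutoff_nmol_l=50.0):
    """Pairs where one replicate suggests insufficiency (<cutoff) and the other adequacy (>=cutoff)."""
    a, b = np.asarray(rep1, float), np.asarray(rep2, float)
    return (a < cutoff_nmol_l) != (b < cutoff_nmol_l)

# Hypothetical replicate 25(OH)D results (nmol/l) for the same four specimens.
first_run = np.array([42.0, 55.0, 61.0, 38.0])
second_run = np.array([58.0, 47.0, 66.0, 80.0])
print(percent_difference(first_run, second_run).round(1))
print(discordant_at_cutoff(first_run, second_run).mean())  # fraction of discordant pairs
```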
Abstract:
Secondary tasks such as cell phone calls or interaction with automated speech dialog systems (SDSs) increase the driver's cognitive load as well as the probability of driving errors. This study analyzes speech production variations due to cognitive load and emotional state of drivers in real driving conditions. Speech samples were acquired from 24 female and 17 male subjects (approximately 8.5 h of data) while talking to a co-driver and communicating with two automated call centers, with emotional states (neutral, negative) and the number of necessary SDS query repetitions also labeled. A consistent shift in a number of speech production parameters (pitch, first formant center frequency, spectral center of gravity, spectral energy spread, and duration of voiced segments) was observed when comparing SDS interaction against co-driver interaction; further increases were observed when considering negative emotion segments and the number of requested SDS query repetitions. A mel-frequency cepstral coefficient based Gaussian mixture classifier trained on 10 male and 10 female sessions provided 91% accuracy in the open test set task of distinguishing co-driver interactions from SDS interactions, suggesting, together with the acoustic analysis, that it is possible to monitor the level of driver distraction directly from their speech.
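As a rough, hypothetical reconstruction of an MFCC-based Gaussian mixture classifier of this general kind (the paper's exact front end and model sizes are not given), one GMM could be trained per interaction class and each utterance assigned to the class with the higher average frame log-likelihood; librosa and scikit-learn are assumed here purely for illustration.

```python
import librosa
import numpy as np
from sklearn.mixture import GaussianMixture

def mfcc_frames(wav_path, n_mfcc=13):
    """Frame-level MFCCs for one utterance, shaped (frames, coefficients)."""
    samples, sr = librosa.load(wav_path, sr=16000)
    return librosa.feature.mfcc(y=samples, sr=sr, n_mfcc=n_mfcc).T

def train_class_gmm(stacked_frames, n_components=32):
    """One diagonal-covariance GMM per class, fitted on frames pooled from
    labelled training sessions (co-driver or SDS interaction)."""
    gmm = GaussianMixture(n_components=n_components, covariance_type="diag", max_iter=200)
    gmm.fit(stacked_frames)
    return gmm

def classify_utterance(frames, gmm_codriver, gmm_sds):
    """Assign the utterance to the class whose GMM gives the higher mean log-likelihood."""
    if gmm_codriver.score_samples(frames).mean() > gmm_sds.score_samples(frames).mean():
        return "co-driver interaction"
    return "SDS interaction"
```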
Abstract:
This paper discusses the use of models in automatic computer forensic analysis, and proposes and elaborates on a novel model for use in computer profiling, the computer profiling object model. The computer profiling object model is an information model which models a computer as objects with various attributes and inter-relationships. These together provide the information necessary for a human investigator or an automated reasoning engine to make judgements as to the probable usage and evidentiary value of a computer system. The computer profiling object model can be implemented so as to support automated analysis to provide an investigator with the information needed to decide whether manual analysis is required.
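The abstract leaves the model's concrete schema open; purely as an illustrative, hypothetical sketch, "a computer modelled as objects with attributes and inter-relationships" could be represented as follows, with a trivial traversal of the kind an investigator or reasoning engine might perform.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class ProfileObject:
    """An artefact on the examined computer (user account, application, document, ...)."""
    object_id: str
    object_type: str
    attributes: Dict[str, str] = field(default_factory=dict)
    related_to: List[str] = field(default_factory=list)  # ids of related objects

@dataclass
class ComputerProfile:
    """The profiled computer as a collection of inter-related objects."""
    objects: Dict[str, ProfileObject] = field(default_factory=dict)

    def add(self, obj: ProfileObject) -> None:
        self.objects[obj.object_id] = obj

    def related(self, object_id: str) -> List[ProfileObject]:
        """Objects directly related to a given object, e.g. for assessing probable usage."""
        return [self.objects[i] for i in self.objects[object_id].related_to if i in self.objects]

profile = ComputerProfile()
profile.add(ProfileObject("user:alice", "user account", {"last_login": "2011-03-02"}, ["app:browser"]))
profile.add(ProfileObject("app:browser", "application", {"history_entries": "5120"}))
print([o.object_id for o in profile.related("user:alice")])
```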
Abstract:
With the emergence of multi-cores into the mainstream, there is a growing need for systems to allow programmers and automated systems to reason about data dependencies and inherent parallelism in imperative object-oriented languages. In this paper we exploit the structure of object-oriented programs to abstract computational side-effects. We capture and validate these effects using a static type system. We use these as the basis of sufficient conditions for several different data and task parallelism patterns. We complement our static type system with a lightweight runtime system to allow for parallelization in the presence of complex data flows. We have a functioning compiler and worked examples to demonstrate the practicality of our solution.
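The paper's static effect system is not reproduced here, but the sufficient condition it alludes to, that tasks touching disjoint state may safely run in parallel, can be illustrated with a small dynamic check over declared read/write sets. Everything below is a toy Python stand-in, not the authors' type system.

```python
from concurrent.futures import ThreadPoolExecutor

def effects_disjoint(task_a, task_b):
    """Bernstein-style sufficient condition: nothing written by one task is
    read or written by the other."""
    return (not (task_a["writes"] & (task_b["reads"] | task_b["writes"]))
            and not (task_b["writes"] & task_a["reads"]))

def run_pair(task_a, task_b):
    """Run two tasks in parallel only when their declared effects are disjoint."""
    if effects_disjoint(task_a, task_b):
        with ThreadPoolExecutor(max_workers=2) as pool:
            futures = [pool.submit(task_a["fn"]), pool.submit(task_b["fn"])]
            return [f.result() for f in futures]
    return [task_a["fn"](), task_b["fn"]()]  # conservative sequential fallback

data = {"x": list(range(10)), "y": list(range(10))}
sum_x = {"fn": lambda: sum(data["x"]), "reads": {"x"}, "writes": set()}
sum_y = {"fn": lambda: sum(data["y"]), "reads": {"y"}, "writes": set()}
print(run_pair(sum_x, sum_y))
```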
Abstract:
Monitoring Internet traffic is critical in order to acquire a good understanding of threats to computer and network security and in designing efficient computer security systems. Researchers and network administrators have applied several approaches to monitoring traffic for malicious content. These techniques include monitoring network components, aggregating IDS alerts, and monitoring unused IP address spaces. Another method for monitoring and analyzing malicious traffic, which has been widely tried and accepted, is the use of honeypots. Honeypots are very valuable security resources for gathering artefacts associated with a variety of Internet attack activities. As honeypots run no production services, any contact with them is considered potentially malicious or suspicious by definition. This unique characteristic of the honeypot reduces the amount of collected traffic and makes it a more valuable source of information than other existing techniques. Currently, there is insufficient research in the honeypot data analysis field. To date, most of the work on honeypots has been devoted to the design of new honeypots or to optimizing the current ones. Approaches for analyzing data collected from honeypots, especially low-interaction honeypots, are presently immature, while analysis techniques are manual and focus mainly on identifying existing attacks. This research addresses the need for developing more advanced techniques for analyzing Internet traffic data collected from low-interaction honeypots. We believe that characterizing honeypot traffic will improve the security of networks and, if the honeypot data is handled in time, give early signs of new vulnerabilities or outbreaks of new automated malicious code, such as worms. The outcomes of this research include:
• Identification of repeated use of attack tools and attack processes through grouping activities that exhibit similar packet inter-arrival time distributions using the cliquing algorithm;
• Application of principal component analysis to detect the structure of attackers' activities present in low-interaction honeypots and to visualize attackers' behaviors;
• Detection of new attacks in low-interaction honeypot traffic through the use of the principal component residual space and the squared prediction error statistic;
• Real-time detection of new attacks using recursive principal component analysis;
• A proof-of-concept implementation for honeypot traffic analysis and real-time monitoring.
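As a rough sketch of the residual-space detection idea in the third outcome above (the thesis' actual feature set and recursive formulation are not reproduced), PCA can be fitted to baseline honeypot traffic features and new traffic flagged when its squared prediction error (SPE) exceeds a threshold learned from the baseline; the feature construction below is hypothetical.

```python
import numpy as np
from sklearn.decomposition import PCA

def fit_spe_detector(baseline_features, n_components=3, percentile=99.0):
    """Fit PCA on baseline honeypot traffic features and set an SPE threshold."""
    pca = PCA(n_components=n_components).fit(baseline_features)
    reconstructed = pca.inverse_transform(pca.transform(baseline_features))
    spe = np.sum((baseline_features - reconstructed) ** 2, axis=1)  # squared prediction error
    return pca, float(np.percentile(spe, percentile))

def looks_like_new_attack(pca, threshold, feature_vector):
    """Flag traffic whose residual-space energy (SPE) exceeds the baseline threshold."""
    x = np.atleast_2d(np.asarray(feature_vector, float))
    reconstructed = pca.inverse_transform(pca.transform(x))
    return float(np.sum((x - reconstructed) ** 2)) > threshold

# Hypothetical per-source features, e.g. packet counts towards eight monitored ports.
baseline = np.random.RandomState(0).poisson(5.0, size=(500, 8)).astype(float)
detector, spe_threshold = fit_spe_detector(baseline)
print(looks_like_new_attack(detector, spe_threshold, [5, 4, 6, 5, 40, 3, 5, 4]))
```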
Abstract:
It is easy to create new combinatorial games but more difficult to predict those that will interest human players. We examine the concept of game quality, its automated measurement through self-play simulations, and its use in the evolutionary search for new high-quality games. A general game system called Ludi is described and experiments are conducted to test its ability to synthesize and evaluate new games. Results demonstrate the validity of the approach through the automated creation of novel, interesting, and publishable games. Index terms: aesthetics, artificial intelligence (AI), combinatorial game, evolutionary search, game design.
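Ludi's aesthetic criteria are far richer than anything shown here, but the basic mechanism of measuring game quality through self-play can be sketched as follows; the toy game (simple Nim) and the two indicators (balance and decisiveness) are illustrative stand-ins only.

```python
import random

# Toy game used only to make the sketch runnable: Nim with 15 stones, take 1-3,
# and the player who removes the last stone wins.
def legal_moves(stones):
    return [m for m in (1, 2, 3) if m <= stones]

def play_one_game(rng):
    stones, player = 15, 1
    while True:
        stones -= rng.choice(legal_moves(stones))
        if stones == 0:
            return player        # the current player took the last stone and wins
        player = 3 - player      # no draws are possible in this toy game

def self_play_quality(n_games=500, seed=0):
    """Two illustrative quality indicators from random self-play: balance
    (both players win about equally often) and decisiveness (few drawn games)."""
    rng = random.Random(seed)
    results = [play_one_game(rng) for _ in range(n_games)]
    p1_wins, p2_wins = results.count(1), results.count(2)
    draws = results.count(None)
    return {"balance": 1.0 - abs(p1_wins - p2_wins) / n_games,
            "decisiveness": 1.0 - draws / n_games}

print(self_play_quality())
```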
Abstract:
New air traffic automated separation management concepts are constantly under investigation, yet most of the automated separation management algorithms proposed over the last few decades have assumed either perfect communication or exact knowledge of all aircraft locations. In realistic environments these idealized assumptions do not hold, and any communication failure can potentially lead to disastrous outcomes. This paper examines the separation performance of several popular algorithms during periods of information loss. The comparison is carried out through simulation studies, which suggest that communication failure can cause the performance of these separation management algorithms to degrade significantly. This paper also describes some preliminary flight tests.
Abstract:
Since 1993 we have been working on the automation of dragline excavators, the largest earthmoving machines in existence. Recently we completed a large-scale experimental program in which the automation system was used for production purposes over a two-week period and moved over 200,000 tonnes of overburden. This is a landmark achievement in the history of automated excavation. In this paper we briefly describe the robotic system and how it works cooperatively with the machine operator. We then describe our methodology for gauging machine performance, analyze results from the production trial and comment on the effectiveness of the system that we have created.
Automation of an underground mining vehicle using reactive navigation and opportunistic localization
Abstract:
This paper describes the implementation of an autonomous navigation system on a 30-tonne Load-Haul-Dump truck. The control architecture is based on a robust reactive wall-following behaviour. To make it purposeful, we provide driving hints derived from an approximate nodal map. For most of the time, the vehicle is driven with weak localization (odometry). This need only be improved at intersections where decisions must be made, a technique we refer to as opportunistic localization. The truck has achieved full-speed autonomous operation at an artificial test mine and, subsequently, at an operational underground mine.
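The paper's controller is not reproduced here, but a reactive wall-following behaviour of this general kind can be sketched as a simple proportional law on the lateral range to the tunnel wall; the gains, set points and units below are hypothetical.

```python
def wall_following_steer(range_to_wall_m, desired_range_m=2.0, k_p=0.8, max_rate_rad_s=0.5):
    """Proportional steering-rate command from the lateral range to the wall.
    A positive command steers toward the wall when the vehicle has drifted away."""
    error = range_to_wall_m - desired_range_m
    rate = k_p * error
    return max(-max_rate_rad_s, min(max_rate_rad_s, rate))

# Driving hints from an approximate nodal map would only be consulted at intersections;
# between nodes the vehicle simply tracks the wall.
for measured_range in (1.2, 2.0, 3.5):
    print(measured_range, wall_following_steer(measured_range))
```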
Abstract:
The LiteSteel Beam (LSB) is a new hollow flange section developed by OneSteel Australian Tube Mills using their patented dual electric resistance welding and automated continuous roll-forming technologies. It has a unique geometry consisting of torsionally rigid rectangular hollow flanges and a relatively slender web. It has found increasing popularity as a flexural member in residential, industrial and commercial buildings. The LSB is considerably lighter than traditional hot-rolled steel beams and provides both structural and construction efficiencies. However, LSB flexural members are subject to a relatively new lateral distortional buckling mode, which reduces their member moment capacities. Unlike the commonly observed lateral torsional buckling of steel beams, the lateral distortional buckling of LSBs is characterised by simultaneous lateral deflection, twist and cross-sectional change due to web distortion. The current design rules in AS/NZS 4600 (SA, 2005) for flexural members subject to lateral distortional buckling were found to be conservative by about 8% in the inelastic buckling region. Therefore, a new design rule was developed for LSBs subject to lateral distortional buckling based on finite element analyses of LSBs. The effect of section geometry was then considered and several geometrical parameters were used to develop an advanced set of design rules. This paper presents the details of the finite element analyses and the development of the design curve for hollow flange sections subject to lateral distortional buckling.
Abstract:
Non-driving-related cognitive load and variations in emotional state may impair a driver's ability to control a vehicle and introduce driving errors. The availability of reliable cognitive load and emotion detection in drivers would benefit the design of active safety systems and other intelligent in-vehicle interfaces. In this study, speech produced by 68 subjects while driving in urban areas is analyzed. A particular focus is on speech production differences between two secondary cognitive tasks, interactions with a co-driver and calls to automated spoken dialog systems (SDS), and between two emotional states during the SDS interactions (neutral and negative). A number of speech parameters are found to vary across the cognitive/emotion classes. The suitability of selected cepstral- and production-based features for automatic cognitive task/emotion classification is investigated. A fusion of GMM/SVM classifiers yields an accuracy of 94.3% in cognitive task classification and 81.3% in emotion classification.
Abstract:
This paper presents the results from a study of information behaviors in the context of people's everyday lives, undertaken in order to develop an integrated model of information behavior (IB). 34 participants from across 6 countries maintained a daily information journal or diary – mainly through a secure web log – for two weeks, giving an aggregate of 468 participant days over five months. The text-rich diary data was analyzed using a multi-method qualitative-quantitative analysis in the following order: Grounded Theory analysis with manual coding, automated concept analysis using thesaurus-based visualization, and finally a statistical analysis of the coding data. The findings indicate that people engage in several information behaviors simultaneously throughout their everyday lives (including home and work life) and that sense-making is entangled in all aspects of them. Participants engaged in many of the information behaviors in a parallel, distributed, and concurrent fashion: many information behaviors for one information problem, one information behavior across many information problems, and many information behaviors concurrently across many information problems. The findings also indicate that information avoidance – both active and passive – is a common phenomenon, and that information organizing behaviors, or the lack thereof, caused the most problems for participants. An integrated model of information behaviors is presented based on these findings.
Abstract:
This paper presents an automated system for 3D assembly of tissue engineering (TE) scaffolds made from biocompatible microscopic building blocks with relatively large fabrication errors. It focuses on the pin-into-hole force control developed for this demanding microassembly task. A beam-like gripper with integrated force sensing, offering 3 mN resolution over a 500 mN measuring range, is designed and used to implement an admittance force-controlled insertion using commercial precision stages. Vision-based alignment followed by insertion is complemented by a haptic exploration strategy using force and position information. The system demonstrates fully automated construction of TE scaffolds from 50 microparts whose dimensional error is larger than 5%.
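One common way to realize an admittance force-controlled insertion of this kind (not necessarily the authors' exact law) is to reduce a nominal feed rate in proportion to the excess contact force sensed by the gripper, retracting when the force grows large; the gains and force levels below are hypothetical.

```python
def admittance_insertion_velocity(force_mN, force_ref_mN=20.0,
                                  compliance_mm_s_per_mN=0.02,
                                  nominal_feed_mm_s=0.5, v_max_mm_s=1.0):
    """Admittance law: nominal downward feed reduced in proportion to excess contact
    force; a negative return value retracts the pin to relieve jamming."""
    velocity = nominal_feed_mm_s - compliance_mm_s_per_mN * (force_mN - force_ref_mN)
    return max(-v_max_mm_s, min(v_max_mm_s, velocity))

# Light contact advances near the nominal feed; excessive force backs the part out.
for sensed_force in (5.0, 20.0, 60.0, 120.0):
    print(sensed_force, admittance_insertion_velocity(sensed_force))
```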
Abstract:
This paper presents research being conducted by the Commonwealth Scientific and Industrial Research Organisation (CSIRO) with the aim of investigating the use of wireless sensor networks for automated livestock monitoring and control. It is difficult to achieve practical and reliable cattle monitoring with current conventional technologies due to challenges such as the large grazing areas of cattle, long data sampling periods, and constantly varying physical environments. Wireless sensor networks bring a new level of possibilities into this area, with the potential for greatly increased spatial and temporal resolution of measurement data. CSIRO has created a wireless sensor platform for animal behaviour monitoring with which we are able to observe and collect information about animals without significantly interfering with them. Based on such monitoring information, we can successfully identify each animal's behaviour and activities.