Abstract:
Criminal intelligence is an area of expertise highly sought-after internationally and within a variety of justice-related professions; however, producing university graduates with the requisite professional knowledge, as well as analytical, organisational and technical skills presents a pedagogical and technical challenge to university educators. The situation becomes even more challenging when students are undertaking their studies by distance education. This best practice session showcases the design of an online undergraduate unit for final year justice students which uses an evolving real-time criminal scenario as the focus of authentic learning activities in order to prepare students for graduate roles within the criminal intelligence and justice professions. Within the unit, students take on the role of criminal intelligence analysts, applying relevant theories, models and strategies to solve a complex but realistic crime and complete briefings and documentation to industry standards as their major summative assessment task. The session will demonstrate how the design of the online unit corresponds to authentic learning principles, and will specifically map the elements of the unit design to Herrington & Oliver’s instructional design framework for authentic learning (2000; Herrington & Herrington 2006). The session will show how a range of technologies was used to create a rich learning experience for students that could be easily maintained over multiple unit iterations without specialist technical support. The session will also discuss the unique pedagogical affordances and challenges implicated in the location of the unit within an online learning environment, and will reflect on some of the lessons learned from the development which may be relevant to other authentic online learning contexts.
Abstract:
Vehicular accidents are one of the deadliest safety hazards and accordingly an immense concern of individuals and governments. Although a wide range of active autonomous safety systems, such as advanced driving assistance and lane keeping support, has been introduced to facilitate a safer driving experience, these stand-alone systems have limited capabilities in providing safety. Therefore, cooperative vehicular systems were proposed to fulfill more safety requirements. Most cooperative vehicle-to-vehicle safety applications require relative positioning accuracy at the decimeter level with an update rate of at least 10 Hz. These requirements cannot be met via direct navigation or differential positioning techniques. This paper studies a cooperative vehicle platform that aims to facilitate real-time relative positioning (RRP) among adjacent vehicles. The developed system is capable of exchanging both GPS position solutions and raw observations in the RTCM-104 format over vehicular dedicated short range communication (DSRC) links. The real-time kinematic (RTK) positioning technique is integrated into the system to enable RRP to serve as an embedded real-time warning system. The 5.9 GHz DSRC technology is adopted as the communication channel among road-side units (RSUs) and on-board units (OBUs), both to distribute GPS correction data received from a nearby reference station via the Internet using cellular technologies, by means of RSUs, and to exchange the vehicles' real-time GPS raw observation data. Ultimately, each receiving vehicle calculates the relative positions of its neighbors to attain an RRP map. A series of real-world data collection experiments was conducted to explore the synergies of both DSRC and positioning systems. The results demonstrate a significant enhancement in the precision and availability of relative positioning at mobile vehicles.
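As an illustration of the final step described above (each receiving vehicle computing the relative positions of its neighbors), here is a minimal sketch assuming the exchanged position solutions are already in ECEF coordinates; the function names are hypothetical and the RTK carrier-phase processing itself is not shown:

```python
import math

def ecef_to_enu(dx, dy, dz, lat_deg, lon_deg):
    """Rotate an ECEF baseline vector into the local East-North-Up frame
    at the receiving vehicle (latitude/longitude in degrees)."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    east = -math.sin(lon) * dx + math.cos(lon) * dy
    north = (-math.sin(lat) * math.cos(lon) * dx
             - math.sin(lat) * math.sin(lon) * dy
             + math.cos(lat) * dz)
    up = (math.cos(lat) * math.cos(lon) * dx
          + math.cos(lat) * math.sin(lon) * dy
          + math.sin(lat) * dz)
    return east, north, up

def relative_position(own_ecef, neighbour_ecef, own_lat, own_lon):
    """Baseline from own vehicle to a neighbour, in metres (ENU)."""
    dx = neighbour_ecef[0] - own_ecef[0]
    dy = neighbour_ecef[1] - own_ecef[1]
    dz = neighbour_ecef[2] - own_ecef[2]
    return ecef_to_enu(dx, dy, dz, own_lat, own_lon)
```

Repeating this for every neighbour whose solution arrives over DSRC yields the RRP map.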
Abstract:
The existence of the Macroscopic Fundamental Diagram (MFD), which relates space-mean density and flow, has been shown in urban networks under homogeneous traffic conditions. Since the MFD represents area-wide network traffic performance, studies on perimeter control strategies and area traffic state estimation utilizing the MFD concept have been reported. One of the key requirements for a well-defined MFD is the homogeneity of the area-wide traffic condition across links of similar properties, which cannot be universally expected in the real world. For practical application of the MFD concept, several researchers have identified the influencing factors for network homogeneity. However, they did not explicitly take into account the impact of drivers' behaviour and information provision, which has a significant effect on simulation outputs. This research aims to demonstrate the effect of dynamic information provision on network performance by employing the MFD as a measurement. A microscopic simulator, AIMSUN, is chosen as the experiment platform. By changing the ratio of en-route informed drivers to pre-trip informed drivers, different scenarios are simulated in order to investigate how drivers' adaptation to traffic congestion influences network performance with respect to the MFD shape as well as other indicators, such as total travel time. This study confirmed the impact of information provision on the MFD shape, and demonstrated the usefulness of the MFD for measuring the benefit of dynamic information provision.
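The space-mean quantities the MFD relates can be sketched as length-weighted averages over the network's links; a minimal illustration (the `links` tuple layout is an assumption, not AIMSUN output):

```python
def mfd_point(links):
    """One (density, flow) observation of the MFD from per-link data.
    links: list of (length_km, flow_veh_per_h, density_veh_per_km).
    Space-mean values are length-weighted averages over all links."""
    total_len = sum(l for l, _, _ in links)
    flow = sum(l * q for l, q, _ in links) / total_len
    density = sum(l * k for l, _, k in links) / total_len
    return density, flow
```

Sampling this pair every few minutes of simulated time traces out the MFD for a scenario.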
Abstract:
In our laboratory, we have developed real-time detection and quantitative polymerase chain reaction (Q-PCR) methods to analyse the relative levels of gene expression in post-mortem brain tissues. We have then applied this method to examine differences in gene activity between normal white matter (NWM) and plaque tissue from multiple sclerosis (MS) patients. Genes were selected based on their association with pathology and through identification by previously conducted global gene expression analysis. Plaque tissue was obtained from secondary progressive (SP) patients displaying chronic active as well as acute pathologies, while NWM from the same location was obtained from age- and sex-matched controls (normal patients). In this study, we used both SYBR Green I supplementation and commercially available mixes to assess both comparative and absolute levels of gene activity. The results of both methods compared favourably for four of the five genes examined (P < 0.05, Pearson's), while differences in gene expression between chronic active and acute pathologies were also identified. For example, a >50-fold increase in osteopontin (Spp1) and inositol 1,4,5-trisphosphate 3-kinase B (Itpkb) levels in acute plaques contrasted with the 5-fold or less increase in chronic active plaques (P < 0.05, unpaired t test). By contrast, there was no significant difference in the levels of the MS marker and calcium-dependent protease calpain (Capns1) in MS plaque tissue. In summary, Q-PCR analysis using SYBR Green I has allowed us to economically obtain what may be clinically significant information from small amounts of CNS tissue, providing an opportunity for further clinical investigations.
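Comparative quantification of the kind described above is commonly computed with the 2^-ΔΔCt method; a minimal sketch assuming Ct values taken from the SYBR Green I amplification curves and roughly 100% amplification efficiency (the study's exact calibration may differ):

```python
def fold_change(ct_target_case, ct_ref_case, ct_target_ctrl, ct_ref_ctrl):
    """Relative expression via the 2^-ddCt method (assumes ~100% efficiency).
    Each Ct is the threshold cycle from a real-time amplification curve."""
    d_ct_case = ct_target_case - ct_ref_case  # normalise to reference gene
    d_ct_ctrl = ct_target_ctrl - ct_ref_ctrl
    dd_ct = d_ct_case - d_ct_ctrl
    return 2.0 ** (-dd_ct)
```

For example, a target gene whose ΔCt drops by 6 cycles relative to control corresponds to a 64-fold increase, i.e. well into the ">50-fold" range the abstract reports for acute plaques.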
Abstract:
With an increased emphasis on genotyping of single nucleotide polymorphisms (SNPs) in disease association studies, the genotyping platform of choice is constantly evolving. In addition, the development of more specific SNP assays and appropriate genotype validation applications is becoming increasingly critical to elucidate ambiguous genotypes. In this study, we have used SNP-specific Locked Nucleic Acid (LNA) hybridization probes on a real-time PCR platform to genotype an association cohort, and propose three criteria to address ambiguous genotypes. Based on the kinetic properties of PCR amplification, the three criteria address PCR amplification efficiency, the net fluorescent difference between maximal and minimal fluorescent signals, and the beginning of the exponential growth phase of the reaction. Initially observed SNP allelic discrimination curves were confirmed by DNA sequencing (n = 50), and application of our three genotype criteria corroborated both sequencing and observed real-time PCR results. In addition, the tested Caucasian association cohort was in Hardy-Weinberg equilibrium, and observed allele frequencies were very similar to two independently tested Caucasian association cohorts for the same tested SNP. We present here a novel approach to effectively determine ambiguous genotypes generated from a real-time PCR platform. Application of our three novel criteria provides an easy-to-use semi-automated genotype confirmation protocol.
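The three criteria could be checked programmatically along these lines; the thresholds and the crude efficiency estimate below are illustrative placeholders, not the study's calibrated values:

```python
def curve_criteria(fluor, baseline_cycles=5, noise_factor=10.0):
    """Three illustrative quality checks on one real-time PCR curve
    (fluorescence reading per cycle). Returns (efficiency, delta_rn,
    takeoff_cycle); None values mean the check could not be evaluated."""
    base = fluor[:baseline_cycles]
    mean = sum(base) / len(base)
    sd = (sum((x - mean) ** 2 for x in base) / len(base)) ** 0.5 or 1e-9
    # criterion 2: net fluorescent difference between max and min signal
    delta_rn = max(fluor) - min(fluor)
    # criterion 3: first cycle clearly above baseline noise (exponential onset)
    takeoff = next((i for i, x in enumerate(fluor)
                    if x > mean + noise_factor * sd), None)
    # criterion 1: per-cycle amplification ratio at takeoff (~1.0 = 100%)
    eff = None
    if takeoff is not None and takeoff + 1 < len(fluor) and fluor[takeoff] > 0:
        eff = fluor[takeoff + 1] / fluor[takeoff] - 1.0
    return eff, delta_rn, takeoff
```

A genotype call could then be accepted only when all three values fall inside empirically chosen bounds.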
Abstract:
The early warning based on real-time prediction of rain-induced instability of natural residual slopes helps to minimise human casualties due to such slope failures. Slope instability prediction is complicated, as it is influenced by many factors, including soil properties, soil behaviour, slope geometry, and the location and size of deep cracks in the slope. These deep cracks can facilitate rainwater infiltration into the deep soil layers and reduce the unsaturated shear strength of residual soil. Subsequently, a slip surface can form, triggering a landslide even in partially saturated soil slopes. Although past research has shown the effects of surface cracks on soil stability, research examining the influence of deep cracks on soil stability is very limited. This study aimed to develop methodologies for predicting the real-time rain-induced instability of natural residual soil slopes with deep cracks. The results can be used to warn against potential rain-induced slope failures. The literature review conducted on rain-induced slope instability of unsaturated residual soil associated with soil cracks reveals that only limited studies have been done in the following areas related to this topic:
- Methods for detecting deep cracks in residual soil slopes.
- Practical application of unsaturated soil theory in slope stability analysis.
- Mechanistic methods for real-time prediction of rain-induced residual soil slope instability in critical slopes with deep cracks.
Two natural residual soil slopes at Jombok Village, Ngantang City, Indonesia, which are located near a residential area, were investigated to obtain the parameters required for the stability analysis of the slope. A survey first identified all related field geometrical information, including slopes, roads, rivers, buildings, and boundaries of the slope. Second, the electrical resistivity tomography (ERT) method was used on the slope to identify the location and geometrical characteristics of deep cracks.
The two ERT array models employed in this research are dipole-dipole and azimuthal. Next, bore-hole tests were conducted at different locations in the slope to identify soil layers and to collect undisturbed soil samples for laboratory measurement of the soil parameters required for the stability analysis. At the same bore-hole locations, the Standard Penetration Test (SPT) was undertaken. Undisturbed soil samples taken from the bore-holes were tested in a laboratory to determine the variation of the following soil properties with depth:
- Classification and physical properties such as grain size distribution, Atterberg limits, water content, dry density and specific gravity.
- Saturated and unsaturated shear strength properties, using a direct shear apparatus.
- Soil water characteristic curves (SWCC), using the filter paper method.
- Saturated hydraulic conductivity.
The following three methods were used to detect and simulate the location and orientation of cracks in the investigated slope:
(1) The electrical resistivity distribution of the sub-soil obtained from ERT.
(2) The profile of classification and physical properties of the soil, based on laboratory testing of soil samples collected from bore-holes and visual observations of the cracks on the slope surface.
(3) The results of stress distribution obtained from 2D dynamic analysis of the slope using QUAKE/W software, together with the laboratory-measured soil parameters and earthquake records of the area. It was assumed that the deep crack in the slope under investigation was generated by earthquakes.
A good agreement was obtained when comparing the location and orientation of the cracks detected by Method-1 and Method-2. However, the cracks simulated by Method-3 were not in good agreement with the output of Method-1 and Method-2. This may have been due to the material properties used and the assumptions made for the analysis.
From Method-1 and Method-2, it can be concluded that the ERT method can be used to detect the location and orientation of a crack in a soil slope when the ERT is conducted in very dry or very wet soil conditions. In this study, the cracks detected by the ERT were used for the stability analysis of the slope. The stability of the slope was determined using the factor of safety (FOS) of a critical slip surface obtained by SLOPE/W using the limit equilibrium method. Pore-water pressure values for the stability analysis were obtained by coupling a transient seepage analysis of the slope using the finite-element-based software SEEP/W. A parametric study conducted on the stability of the investigated slope revealed that the existence of deep cracks and their location in the soil slope are critical for its stability. The following two steps are proposed to predict the rain-induced instability of a residual soil slope with cracks.
(a) Step-1: A transient stability analysis of the slope is conducted from the date of the investigation (initial conditions are based on the investigation) to the current date, using measured rainfall data. The stability analyses are then continued for the next 12 months using predicted annual rainfall based on the previous five years' rainfall data for the area.
(b) Step-2: The stability of the slope is calculated in real time using real-time measured rainfall. In this calculation, rainfall is predicted for the next hour or 24 hours, and the stability of the slope is calculated one hour or 24 hours in advance using real-time rainfall data.
If the Step-1 analysis shows critical stability for the forthcoming year, it is recommended that Step-2 be used for more accurate warning against future failure of the slope.
In this research, the application of Step-1 to an investigated slope (Slope-1) showed that its stability was not approaching a critical value during 2012 (until 31st December 2012); therefore, the application of Step-2 was not necessary for that year. A case study (Slope-2) was used to verify the applicability of the complete proposed predictive method. A landslide event occurred at Slope-2 on 31st October 2010. Transient seepage and stability analyses of the slope, using data obtained from field tests such as bore-hole, SPT and ERT investigations and laboratory tests, were conducted on 12th June 2010 following Step-1, and found that the slope was in a critical condition on that date. It was then shown that the application of Step-2 could have predicted this failure by giving sufficient warning time.
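The kind of FOS calculation performed by SLOPE/W can be illustrated, in heavily simplified form, by the classical infinite-slope limit-equilibrium formula; this is a sketch only (the thesis uses a critical slip surface and coupled transient seepage, not this closed form), but it shows how rising pore-water pressure from rain infiltration lowers the factor of safety:

```python
import math

def infinite_slope_fos(c_eff, phi_deg, gamma, depth, beta_deg, pore_pressure):
    """Factor of safety of an infinite slope by limit equilibrium.
    c_eff: effective cohesion [kPa]; phi_deg: friction angle [deg];
    gamma: unit weight [kN/m^3]; depth: slip depth [m];
    beta_deg: slope angle [deg]; pore_pressure: u on the slip plane [kPa]."""
    phi = math.radians(phi_deg)
    beta = math.radians(beta_deg)
    shear_stress = gamma * depth * math.sin(beta) * math.cos(beta)
    normal_stress = gamma * depth * math.cos(beta) ** 2
    return (c_eff + (normal_stress - pore_pressure) * math.tan(phi)) / shear_stress
```

Running this hour by hour with pore pressures from a seepage model, and raising an alarm when the FOS approaches 1, mirrors the spirit of the Step-2 real-time warning.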
Abstract:
We advocate for the use of predictive techniques in interactive computer music systems. We suggest that the inclusion of prediction can assist in the design of proactive rather than reactive computational performance partners. We summarize the significant role prediction plays in human musical decisions, and the only modest use made of prediction in interactive music systems to date. After describing how we are working toward employing predictive processes in our own metacreation software, we reflect on future extensions to these approaches.
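One of the simplest predictive processes such a system could employ is a first-order Markov model over musical events; this hypothetical sketch (not the authors' metacreation software) shows how a performance partner could anticipate, rather than merely react to, the next pitch:

```python
from collections import Counter, defaultdict

def train_markov(pitch_sequence):
    """First-order Markov model over MIDI pitches: counts of each
    observed transition current_pitch -> next_pitch."""
    model = defaultdict(Counter)
    for a, b in zip(pitch_sequence, pitch_sequence[1:]):
        model[a][b] += 1
    return model

def predict_next(model, current):
    """Most likely next pitch after `current`, or None if unseen."""
    if current not in model:
        return None
    return model[current].most_common(1)[0][0]
```

A proactive system could schedule its own material against the predicted continuation instead of waiting for the human's note to arrive.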
Abstract:
This paper presents an investigation into event detection in crowded scenes, where the event of interest co-occurs with other activities and only binary labels at the clip level are available. The proposed approach incorporates a fast feature descriptor from the MPEG domain, and a novel multiple instance learning (MIL) algorithm using sparse approximation and random sensing. MPEG motion vectors are used to build particle trajectories that represent the motion of objects in uniform video clips, and the MPEG DCT coefficients are used to compute a foreground map to remove background particles. Trajectories are transformed into the Fourier domain, and the Fourier representations are quantized into visual words using the K-Means algorithm. The proposed MIL algorithm models the scene as a linear combination of independent events, where each event is a distribution of visual words. Experimental results show that the proposed approaches achieve promising results for event detection compared to the state-of-the-art.
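The quantization of Fourier-domain trajectory descriptors into visual words can be sketched as nearest-centroid assignment against a K-Means codebook; a minimal illustration assuming the codebook has already been learned:

```python
import numpy as np

def quantize_to_histogram(descriptors, codebook):
    """Assign each Fourier-domain trajectory descriptor to its nearest
    visual word (codebook row, e.g. learned by K-Means) and return the
    normalised word histogram for one video clip."""
    # pairwise squared distances, shape (n_descriptors, n_words)
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    words = d2.argmin(axis=1)
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / hist.sum()
```

The MIL stage then models each clip's histogram as a mixture of per-event word distributions, with only the clip-level binary label supervising the decomposition.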
Abstract:
Safety concerns in the operation of autonomous aerial systems require that safe-landing protocols be followed in situations where a mission must be aborted due to mechanical or other failure. On-board cameras provide information that can be used in the determination of potential landing sites, which are continually updated and ranked to prevent injury and minimize damage. Pulse-Coupled Neural Networks (PCNNs) have been used for the detection of features in images that assist in the classification of vegetation and can be used to minimize damage to the aerial vehicle. However, a significant drawback in the use of PCNNs is that they are computationally expensive and have been more suited to off-line applications on conventional computing architectures. As heterogeneous computing architectures become more common, an OpenCL implementation of a PCNN feature generator is presented and its performance is compared across OpenCL kernels designed for CPU, GPU and FPGA platforms. This comparison examines the compute times required for network convergence under a variety of images obtained during unmanned aerial vehicle trials, to determine the feasibility of real-time feature detection.
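A minimal discrete PCNN iteration (in the common Lindblad-Kinser form) illustrates why the network is computationally expensive: every pixel-neuron updates several coupled state maps per iteration, which is exactly the data-parallel workload that suits OpenCL kernels. Parameter values below are illustrative, not those of the paper's kernels:

```python
import numpy as np

def neighbour_sum(y):
    """Sum of the 8 neighbours of each pixel (zero padding at the border)."""
    p = np.pad(y, 1)
    return (p[:-2, :-2] + p[:-2, 1:-1] + p[:-2, 2:] +
            p[1:-1, :-2]               + p[1:-1, 2:] +
            p[2:, :-2]  + p[2:, 1:-1] + p[2:, 2:])

def pcnn(stimulus, iters=10, beta=0.2, v_theta=20.0,
         a_f=0.1, a_l=1.0, a_t=0.5, v_f=0.5, v_l=0.2):
    """Minimal Pulse-Coupled Neural Network: one neuron per pixel.
    Returns the binary firing map produced at each iteration."""
    s = stimulus.astype(float)
    f = np.zeros_like(s); l = np.zeros_like(s)
    y = np.zeros_like(s); theta = np.ones_like(s)
    fire_maps = []
    for _ in range(iters):
        link = neighbour_sum(y)                      # 8-connected linking input
        f = np.exp(-a_f) * f + v_f * link + s        # feeding compartment
        l = np.exp(-a_l) * l + v_l * link            # linking compartment
        u = f * (1.0 + beta * l)                     # internal activity
        y = (u > theta).astype(float)                # pulse output
        theta = np.exp(-a_t) * theta + v_theta * y   # dynamic threshold
        fire_maps.append(y.copy())
    return fire_maps
```

The sequence of firing maps (which pixels pulse at which iteration) is the feature signature used for segmentation and classification.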
Abstract:
Price-based techniques are one way to handle increases in peak demand and to deal with voltage violations in residential distribution systems. This paper proposes an improved real-time pricing scheme for residential customers with a demand response option. Smart meters and in-home display units are used to broadcast the price and appropriate load adjustment signals. Customers are given an opportunity to respond to the signals and adjust their loads. This scheme helps distribution companies to deal with overloading problems and voltage issues in a more efficient way. Also, variations in wholesale electricity prices are passed on to electricity customers so they can take collective measures to reduce network peak demand. The scheme ensures that both customers and the utility benefit.
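The customer-side response to a broadcast price signal can be sketched as a simple threshold rule over deferrable appliances; the rule and data layout here are hypothetical illustrations, not the paper's scheme:

```python
def respond_to_price(loads_kw, price, threshold_price):
    """Total household demand [kW] after a simple demand-response rule:
    when the broadcast price exceeds the customer's threshold, every
    appliance flagged as deferrable is switched off for that interval.
    loads_kw: list of (kw, deferrable) appliance tuples."""
    return sum(kw for kw, deferrable in loads_kw
               if not (deferrable and price > threshold_price))
```

Aggregated across a feeder, such responses are what let the utility relieve overloads and voltage violations during price peaks.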
Abstract:
This paper describes the theory and practice for stable haptic teleoperation of a flying vehicle. It extends a passivity-based control framework for haptic teleoperation of aerial vehicles to the longest intercontinental setting to date, which presents great challenges. The practicality of the control architecture has been shown in maneuvering and obstacle-avoidance tasks over the Internet in the presence of significant time-varying delays and packet losses. Experimental results are presented for teleoperation of a slave quadrotor in Australia from a master station in the Netherlands. The results show that the remote operator is able to safely maneuver the flying vehicle through a structure using haptic feedback of the state of the slave and the perceived obstacles.
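Passivity-based teleoperation commonly transmits wave variables instead of raw velocity and force, because the wave transform keeps the communication channel passive under constant delay; a sketch of the standard transform (the paper's exact architecture may differ):

```python
import math

def to_wave(velocity, force, b=1.0):
    """Encode a velocity/force pair into wave variables (u, v) with
    characteristic impedance b; (u, v) are what cross the channel."""
    s = math.sqrt(2.0 * b)
    u = (b * velocity + force) / s
    v = (b * velocity - force) / s
    return u, v

def from_wave(u, v, b=1.0):
    """Recover velocity and force from the wave pair at the other side."""
    s = math.sqrt(2.0 * b)
    velocity = (u + v) / s
    force = (u - v) * s / 2.0
    return velocity, force
```

The key property is the power identity (u² - v²)/2 = velocity · force: delaying u and v can only store or dissipate energy, never generate it, which is what preserves stability under the Australia-Netherlands round-trip delay.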
Abstract:
The ability to measure surface temperature and represent it on a metrically accurate 3D model has proven applications in many areas such as medical imaging, building energy auditing, and search and rescue. A system is proposed that enables this task to be performed with a handheld sensor, and, for the first time, with results visualized and analyzed in real time. A device comprising a thermal-infrared camera and range sensor is calibrated geometrically and used for data capture. The device is localized using a combination of ICP and video-based pose estimation from the thermal-infrared video footage, which is shown to reduce the occurrence of failure modes. Furthermore, the problem of misregistration, which can introduce severe distortions in assigned surface temperatures, is avoided through the use of a risk-averse neighborhood weighting mechanism. Results demonstrate that the system is more stable and accurate than previous approaches, and can be used to accurately model complex objects and environments for practical tasks.
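The risk-averse weighting idea can be illustrated as down-weighting temperature readings whose local neighbourhood disagrees strongly, since such disagreement is a symptom of misregistration near depth edges; this formulation is an assumption for illustration, not the paper's exact mechanism:

```python
def risk_averse_temperature(samples, eps=1e-6):
    """Fuse candidate temperature readings for one surface point.
    samples: list of (reading, neighbourhood_values); readings whose
    neighbourhood has high variance (likely misregistered) get low trust."""
    weights, readings = [], []
    for reading, neigh in samples:
        mean = sum(neigh) / len(neigh)
        var = sum((x - mean) ** 2 for x in neigh) / len(neigh)
        weights.append(1.0 / (var + eps))  # high local variance -> low weight
        readings.append(reading)
    total = sum(weights)
    return sum(w * r for w, r in zip(weights, readings)) / total
```

A reading taken from a flat, consistent region thus dominates one taken across an object boundary, avoiding the severe temperature distortions misregistration would otherwise cause.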
Abstract:
Technological advances have led to an influx of affordable hardware that supports sensing, computation and communication. This hardware is increasingly deployed in public and private spaces, tracking and aggregating a wealth of real-time environmental data. Although these technologies are the focus of several research areas, there is a lack of research dealing with the problem of making these capabilities accessible to everyday users. This thesis represents a first step towards developing systems that will allow users to leverage the available infrastructure and create custom tailored solutions. It explores how this notion can be utilized in the context of energy monitoring to improve conventional approaches. The project adopted a user-centered design process to inform the development of a flexible system for real-time data stream composition and visualization. This system features an extensible architecture and defines a unified API for heterogeneous data streams. Rather than displaying the data in a predetermined fashion, it makes this information available as building blocks that can be combined and shared. It is based on the insight that individual users have diverse information needs and presentation preferences. Therefore, it allows users to compose rich information displays, incorporating personally relevant data from an extensive information ecosystem. The prototype was evaluated in an exploratory study to observe its natural use in a real-world setting, gathering empirical usage statistics and conducting semi-structured interviews. The results show that a high degree of customization alone does not guarantee sustained usage. Other factors were identified, yielding recommendations for increasing the impact on energy consumption.
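A unified API over heterogeneous real-time sources could, in spirit, look like a small composable publish/subscribe abstraction; all names here are illustrative, not the thesis's actual interface:

```python
class Stream:
    """Toy unified stream: every source pushes (timestamp, value) samples,
    and streams can be composed into derived streams that users mix into
    custom displays, regardless of the underlying sensor hardware."""
    def __init__(self):
        self._subscribers = []

    def subscribe(self, fn):
        """Register a callback fn(timestamp, value)."""
        self._subscribers.append(fn)

    def push(self, ts, value):
        """Emit one sample to all subscribers."""
        for fn in self._subscribers:
            fn(ts, value)

    def map(self, fn):
        """Derived stream applying fn to each value (e.g. unit conversion)."""
        out = Stream()
        self.subscribe(lambda ts, v: out.push(ts, fn(v)))
        return out
```

For example, a raw power meter stream in watts could be mapped to kilowatts and fed into a user-composed display block alongside streams from entirely different sensors.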
Abstract:
Currently, GNSS computing modes fall into two classes: network-based data processing and user receiver-based processing. A GNSS reference receiver station essentially contributes raw measurement data in either the RINEX file format or as real-time data streams in the RTCM format. Very little computation is carried out by the reference station. The existing network-based processing modes, regardless of whether they are executed in real-time or post-processed modes, are centralised or sequential. This paper describes a distributed GNSS computing framework that incorporates three GNSS modes: reference station-based, user receiver-based and network-based data processing. Raw data streams from each GNSS reference receiver station are processed in a distributed manner, i.e., either at the station itself or at a hosting data server/processor, to generate station-based solutions, or reference receiver-specific parameters. These may include precise receiver clock, zenith tropospheric delay, differential code biases, ambiguity parameters, ionospheric delays, as well as line-of-sight information such as azimuth and elevation angles. Covariance information for estimated parameters may also be optionally provided. In such a mode the nearby precise point positioning (PPP) or real-time kinematic (RTK) users can directly use the corrections from all or some of the stations for real-time precise positioning via a data server. At the user receiver, PPP and RTK techniques are unified under the same observation models, and the distinction lies in how the user receiver software deals with corrections from the reference station solutions and the ambiguity estimation in the observation equations. Numerical tests demonstrate good convergence behaviour for differential code bias and ambiguity estimates derived individually with single reference stations.
With station-based solutions from three reference stations within distances of 22–103 km the user receiver positioning results, with various schemes, show an accuracy improvement of the proposed station-augmented PPP and ambiguity-fixed PPP solutions with respect to the standard float PPP solutions without station augmentation and ambiguity resolutions. Overall, the proposed reference station-based GNSS computing mode can support PPP and RTK positioning services as a simpler alternative to the existing network-based RTK or regionally augmented PPP systems.
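One of the per-station line-of-sight products mentioned above, the azimuth and elevation angles, reduces to a small computation once the station-to-satellite vector is expressed in a local East-North-Up frame; a minimal sketch:

```python
import math

def azimuth_elevation(e, n, u):
    """Line-of-sight azimuth and elevation (degrees) from the ENU
    components of the station-to-satellite vector. Azimuth is measured
    clockwise from North; elevation is the angle above the horizon."""
    az = math.degrees(math.atan2(e, n)) % 360.0
    el = math.degrees(math.atan2(u, math.hypot(e, n)))
    return az, el
```

Publishing such geometry alongside clock, tropospheric and bias estimates lets a nearby PPP/RTK user apply the station's corrections without re-deriving the satellite geometry itself.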
Abstract:
Real-time image analysis and classification onboard robotic marine vehicles, such as AUVs, is a key step in the realisation of adaptive mission planning for large-scale habitat mapping in previously unexplored environments. This paper describes a novel technique to train, process, and classify images collected onboard an AUV used in relatively shallow waters with poor visibility and non-uniform lighting. The approach utilises Förstner feature detectors and Laws texture energy masks for image characterisation, and a bag-of-words approach for feature recognition. To improve classification performance, we propose a usefulness gain to learn the importance of each histogram component for each class. Experimental results illustrate the performance of the system in characterisation of a variety of marine habitats, and its ability to operate onboard an AUV's main processor, making it suitable for real-time mission planning.
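The usefulness gain can be pictured as a per-class weight on each bag-of-words histogram bin: bins frequent in one habitat class and rare elsewhere receive a high weight. The formulation below is a hypothetical stand-in for the paper's definition, shown only to make the idea concrete:

```python
import numpy as np

def usefulness_gain(histograms, labels, eps=1e-9):
    """Per-class weight for each histogram bin: the fraction of that bin's
    mass contributed by the class, so discriminative bins score near 1."""
    h = np.asarray(histograms, dtype=float)
    y = np.asarray(labels)
    gains = {}
    for c in sorted(set(labels)):
        in_c = h[y == c].mean(axis=0)
        out_c = h[y != c].mean(axis=0)
        gains[c] = in_c / (in_c + out_c + eps)
    return gains

def classify(hist, gains):
    """Pick the class whose gain-weighted score of the histogram is highest."""
    scores = {c: float(np.dot(g, hist)) for c, g in gains.items()}
    return max(scores, key=scores.get)
```

Weighting bins this way lets a lightweight classifier ignore words shared by all habitats, which matters when the whole pipeline must run on the AUV's main processor.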