610 results for "video data"


Relevance: 20.00%

Publisher:

Abstract:

Traffic congestion has a significant impact on the economy and the environment. Encouraging the use of multi-modal transport (public transport, bicycle, park’n’ride, etc.) has been identified by traffic operators as a good strategy for tackling congestion and its detrimental environmental impacts. A multi-modal, multi-objective trip planner provides users with various multi-modal options optimised for the objectives they prefer (cheapest, fastest, safest, etc.) and has the potential to reduce congestion on both temporal and spatial scales. Computing multi-modal, multi-objective trips is a complicated mathematical problem: it must integrate and utilise a diverse range of large data sets, including both road network information and public transport schedules, while optimising a number of competing objectives, where fully optimising one objective, such as travel time, can adversely affect another, such as cost. The relationship between these objectives can also be quite subjective, as their priorities vary from user to user. This paper first outlines the data requirements and formats needed for the multi-modal, multi-objective trip planner to operate, including static information about the physical infrastructure within Brisbane as well as real-time and historical data for predicting traffic flow on the road network and the status of public transport. It then presents the graph data structures representing the road and public transport networks within Brisbane that the trip planner uses to calculate optimal routes. This provides grounds for an investigation into the shortest path algorithms researched over the last few decades, and a foundation for the construction of the multi-modal, multi-objective trip planner through the development of innovative new algorithms that can operate on these large, diverse data sets and competing objectives.
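As a concrete illustration of the multi-objective routing problem this abstract describes, the sketch below implements a bi-objective (time, cost) variant of Dijkstra's algorithm that keeps a Pareto front of non-dominated labels per node. The graph and its weights are invented for illustration and are unrelated to the Brisbane networks the paper uses.

```python
import heapq

def pareto_shortest_paths(graph, source, target):
    """Enumerate Pareto-optimal (time, cost) labels from source to target.

    graph: dict mapping node -> list of (neighbour, time, cost) edges.
    Returns the sorted list of non-dominated (time, cost) pairs at target.
    """
    frontier = [(0, 0, source)]  # labels expanded in lexicographic order
    best = {}  # node -> list of non-dominated (time, cost) labels

    def dominated(labels, t, c):
        return any(lt <= t and lc <= c for lt, lc in labels)

    while frontier:
        t, c, node = heapq.heappop(frontier)
        labels = best.setdefault(node, [])
        if dominated(labels, t, c):
            continue
        # Drop labels the new one dominates, then keep it.
        labels[:] = [(lt, lc) for lt, lc in labels
                     if not (t <= lt and c <= lc)]
        labels.append((t, c))
        for nbr, dt, dc in graph.get(node, []):
            heapq.heappush(frontier, (t + dt, c + dc, nbr))
    return sorted(best.get(target, []))

# Toy network: a fast-but-expensive route competes with a slow-but-cheap one.
graph = {
    "A": [("B", 10, 1), ("C", 2, 5)],
    "B": [("D", 1, 1)],
    "C": [("D", 2, 5)],
}
print(pareto_shortest_paths(graph, "A", "D"))  # [(4, 10), (11, 2)]
```

On the toy network the planner returns two non-dominated options, mirroring the travel time versus cost trade-off described above: neither route fully optimises both objectives at once.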

Relevance: 20.00%

Publisher:

Abstract:

Big data is big news in almost every sector, including crisis communication. However, not everyone has access to big data, and even those who do often lack the tools needed to analyze and cross-reference such large data sets. This paper therefore looks for patterns in the small data sets that we are able to collect with our current tools, to see whether actionable information can be found in what we already have. We analyzed 164,390 tweets collected during the 2011 earthquake to find out what type of location-specific information people mention in their tweets, and when they mention it. Based on our analysis, we find that even a small data set, containing far less data than a big data set, can be useful for quickly identifying priority disaster-specific areas.
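A minimal sketch of the kind of small-data analysis this abstract describes: counting, per hour, tweets that mention a watched place name. The place names and example tweets below are invented; a real study would use a gazetteer and the collected tweet corpus.

```python
from collections import Counter
from datetime import datetime

# Hypothetical watch-list of place names (a real analysis would use a gazetteer).
PLACES = {"riverside", "hillview", "milltown"}

def place_mentions_by_hour(tweets):
    """Count tweets mentioning a watched place, bucketed by hour.

    tweets: iterable of (iso_timestamp, text) pairs.
    Returns a Counter mapping (hour, place) -> mention count.
    """
    counts = Counter()
    for ts, text in tweets:
        hour = datetime.fromisoformat(ts).strftime("%Y-%m-%d %H:00")
        words = set(text.lower().split())
        for place in PLACES & words:
            counts[(hour, place)] += 1
    return counts

# Invented example tweets.
tweets = [
    ("2011-02-22T12:55:00", "Building down on main street in Riverside"),
    ("2011-02-22T13:10:00", "Power out across Hillview"),
    ("2011-02-22T13:20:00", "Riverside bridge closed"),
]
counts = place_mentions_by_hour(tweets)
print(counts[("2011-02-22 13:00", "riverside")])  # 1
```

Sorting the resulting counts by hour gives a quick view of which areas people mention, and when, which is the kind of actionable signal the paper extracts from its small data set.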

Relevance: 20.00%

Publisher:

Abstract:

The use of Wireless Sensor Networks (WSNs) for Structural Health Monitoring (SHM) has become a promising approach owing to advantages such as low cost and fast, flexible deployment. However, inherent technical issues such as data synchronisation error and data loss have prevented such systems from being used extensively. Recently, several SHM-oriented WSNs have been proposed and are believed to overcome a large number of these technical uncertainties. Nevertheless, there is limited research verifying the applicability of such WSNs to demanding SHM applications like modal analysis and damage identification. This paper first presents a brief review of the most significant uncertainties of SHM-oriented WSN platforms and then investigates their effects on the outcomes and performance of the most robust Output-only Modal Analysis (OMA) techniques when employing merged data from multiple tests. The two OMA families selected for this investigation are Frequency Domain Decomposition (FDD) and Data-driven Stochastic Subspace Identification (SSI-data), both of which have been widely applied over the past decade. Experimental accelerations collected by a wired sensor system on a large-scale laboratory bridge model are used as clean data, then contaminated with different data pollutants in a sequential manner to simulate practical SHM-oriented WSN uncertainties. The results of this study show the robustness of FDD and the precautions needed for the SSI-data family when dealing with SHM-WSN uncertainties. Finally, the use of measurement channel projection for the time-domain OMA techniques and a preferred combination of OMA techniques for coping with SHM-WSN uncertainties are recommended.
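The contamination step described above — clean wired-sensor data degraded by simulated WSN pollutants — can be sketched as follows. The loss rate, synchronisation offset, and the signal itself are invented for illustration; the paper's actual bridge data and pollutant models are not reproduced.

```python
import math
import random

def pollute(samples, loss_rate=0.05, sync_error_s=0.002, fs=100.0, seed=1):
    """Simulate two SHM-WSN uncertainties on a clean sample sequence:
    random data loss and a constant clock-synchronisation offset.

    samples: clean values assumed uniformly sampled at fs Hz.
    Returns surviving (timestamp, value) pairs with shifted timestamps.
    """
    rng = random.Random(seed)
    polluted = []
    for i, v in enumerate(samples):
        if rng.random() < loss_rate:
            continue  # dropped packet
        t = i / fs + sync_error_s  # timestamp shifted by the sync error
        polluted.append((t, v))
    return polluted

# Clean "acceleration" record: 2 s of a 5 Hz sine sampled at 100 Hz.
fs = 100.0
clean = [math.sin(2 * math.pi * 5 * i / fs) for i in range(200)]
polluted = pollute(clean, loss_rate=0.05, sync_error_s=0.002, fs=fs)
print(len(clean), len(polluted))
```

Feeding such progressively polluted records into FDD or SSI-data, as the paper does, reveals how each OMA family degrades as loss and synchronisation error grow.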

Relevance: 20.00%

Publisher:

Abstract:

This paper describes the implementation of the first portable, embedded data acquisition unit (BabelFuse) that is able to acquire and timestamp generic sensor data and trigger General Purpose I/O (GPIO) events against a microsecond-accurate, wirelessly-distributed ‘global’ clock. A significant issue encountered when fusing data received from multiple sensors is the accuracy of the timestamp associated with each piece of data. This is particularly important in applications such as Simultaneous Localisation and Mapping (SLAM), where vehicle velocity forms an important part of the mapping algorithms; on fast-moving vehicles, even millisecond inconsistencies in data timestamping can produce errors which need to be compensated for. The timestamping problem is compounded in a robot swarm environment, especially if non-deterministic communication hardware (such as IEEE 802.11-based wireless) and inaccurate clock synchronisation protocols are used. Differing timebases make correlation of data difficult and prevent the units from reliably performing synchronised operations or manoeuvres. By utilising hardware-assisted timestamping, clock synchronisation protocols based on industry standards, and firmware designed to minimise indeterminism, an embedded data acquisition unit capable of microsecond-level clock synchronisation is presented.
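One of the industry-standard clock synchronisation schemes the abstract alludes to is the NTP-style two-way timestamp exchange. The sketch below shows its standard offset and delay estimates with invented timestamps; it illustrates the idea only and is not the BabelFuse firmware.

```python
def ntp_offset_delay(t1, t2, t3, t4):
    """Estimate clock offset and round-trip delay from one NTP-style
    exchange: the client sends at t1, the server receives at t2 and
    replies at t3, and the client receives at t4 (t1, t4 on the client
    clock; t2, t3 on the server clock)."""
    offset = ((t2 - t1) + (t3 - t4)) / 2.0
    delay = (t4 - t1) - (t3 - t2)
    return offset, delay

# Invented scenario: server clock runs 0.004 s ahead of the client,
# and the one-way network delay is 0.001 s in each direction.
t1 = 10.000000
t2 = t1 + 0.001 + 0.004           # arrival on the (ahead) server clock
t3 = t2 + 0.0005                  # server processing time
t4 = t1 + 0.001 + 0.0005 + 0.001  # return on the client clock
offset, delay = ntp_offset_delay(t1, t2, t3, t4)
print(round(offset, 6), round(delay, 6))  # 0.004 0.002
```

The estimate recovers the true 4 ms offset because the forward and return delays are symmetric; asymmetric delays bias it, which is one reason hardware-assisted timestamping (removing software jitter from t1–t4) improves accuracy.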

Relevance: 20.00%

Publisher:

Abstract:

Objective: To describe unintentional injuries to children aged less than one year, using coded and textual information, in three-month age bands to reflect their development over the year. Methods: Data from the Queensland Injury Surveillance Unit were used. The Unit collects demographic, clinical and circumstantial details about injured persons presenting to selected emergency departments across the State. Only injuries coded as unintentional in children admitted to hospital were included in this analysis. Results: After editing, 1,082 children remained for analysis, 24 of them with transport-related injuries. Falls were the most common injury but became proportionately less common over the year, whereas burns and scalds and foreign-body injuries increased. The proportion of injuries due to contact with persons or objects varied little, but poisonings were relatively more common in the first and fourth three-month periods. Descriptions indicated that family members were somehow causally involved in 16% of injuries. Our findings are in qualitative agreement with comparable previous studies. Conclusion: The pattern of injuries varies over the first year of life and is clearly linked to the child's increasing mobility. Implications: Injury patterns in the first year of life should be reported over shorter intervals. Preventive measures for young children need to be designed with their rapidly changing developmental stage in mind, using a variety of strategies, one of which could be opportunistic, developmentally specific education of parents. Injuries in young children are of abiding concern, given their immediate health and emotional effects and their potential for long-term adverse sequelae. In Australia, in the 2006/07 financial year, 2,869 children less than 12 months of age were admitted to hospital for an unintentional injury, a rate of 10.6 per 1,000, representing a considerable economic and social burden.
Given that many of these injuries are preventable, this is particularly concerning. Most epidemiologic studies analyse data in five-year age bands, so children less than five years of age are examined as a group. This study includes only children younger than one year of age, to identify injury detail lost in analyses of the larger group, as we hypothesised that the injury pattern varies with the developmental stage of the child. The authors of several North American studies have commented that, in dealing with injuries in pre-school children, broad age groupings cannot do justice to the rapid developmental changes of infancy and early childhood, and have in consequence analysed injuries over shorter intervals. To our knowledge, no similar analysis of Australian infant injuries has been published to date. This paper describes injury in children less than 12 months of age using data from the Queensland Injury Surveillance Unit (QISU).

Relevance: 20.00%

Publisher:

Abstract:

To recognize faces in video, face appearances have been widely modeled as piece-wise local linear models that linearly approximate the smooth yet non-linear, low-dimensional face appearance manifolds. The choice of representation for the local models is crucial. Most existing methods learn each local model individually, meaning that they only anticipate variations within each class. In this work, we propose to represent local models as Gaussian distributions that are learned simultaneously using heteroscedastic probabilistic linear discriminant analysis (PLDA). Each gallery video is therefore represented as a collection of such distributions. With PLDA, not only are the within-class variations estimated during training, but the separability between classes is also maximized, leading to improved discrimination. The heteroscedastic PLDA itself is adapted from the standard PLDA to approximate face appearance manifolds more accurately: instead of assuming a single global within-class covariance, it learns a different within-class covariance specific to each local model. In the recognition phase, a probe video is matched against gallery samples through the fusion of point-to-model distances. Experiments on the Honda and MoBo datasets show the merit of the proposed method, which achieves better performance than the state-of-the-art technique.
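A much-simplified sketch of the recognition phase described above: a probe video is matched against gallery identities, each represented as a collection of Gaussian local models, by fusing point-to-model distances. For brevity it uses diagonal covariances (note they differ per local model, echoing the heteroscedastic idea) and fuses by taking the minimum distance; the PLDA training itself is omitted, and all features and numbers are invented.

```python
def mahalanobis_diag(x, mean, var):
    """Squared Mahalanobis distance under a diagonal-covariance Gaussian."""
    return sum((xi - mi) ** 2 / vi for xi, mi, vi in zip(x, mean, var))

def classify(probe_frames, galleries):
    """Match a probe video (list of feature vectors) against galleries.

    galleries: dict person -> list of local models, each a (mean, var) pair.
    Point-to-model distances are fused by taking the minimum over all
    probe frames and local models (a simple stand-in for the paper's fusion).
    """
    def score(models):
        return min(mahalanobis_diag(f, m, v)
                   for f in probe_frames for m, v in models)
    return min(galleries, key=lambda person: score(galleries[person]))

# Two hypothetical identities, each with two local (piece-wise) models
# whose within-class variances differ per model.
galleries = {
    "alice": [((0.0, 0.0), (1.0, 1.0)), ((2.0, 2.0), (0.5, 0.5))],
    "bob":   [((5.0, 5.0), (1.0, 1.0)), ((7.0, 4.0), (0.5, 0.5))],
}
probe = [(1.9, 2.2), (2.1, 1.8)]
print(classify(probe, galleries))  # alice
```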

Relevance: 20.00%

Publisher:

Abstract:

The Queensland University of Technology (QUT) Library, like many other academic and research institution libraries in Australia, has been collaborating with a range of academic and service provider partners to develop research data management services and collections. Three main strategies are being employed, and an overview of the process, infrastructure, usage and benefits of each of these service aspects is provided. A major focus has been the development of processes and infrastructure to facilitate the strategic identification and management of QUT-developed datasets. A number of Australian National Data Service (ANDS) sponsored projects - including Seeding the Commons, Metadata Hub/Store, Data Capture and Gold Standard Record Exemplars - have provided, or will provide, QUT with a data registry system, linkages to storage, processes for identifying and describing datasets, and a degree of academic awareness. QUT supports open access and has established a culture of making its research outputs available via the QUT ePrints institutional repository. Incorporating open access research datasets into the library collections is an equally important aspect of facilitating the adoption of data-centric eresearch methods. Some datasets are available commercially, and the library has collaborated with QUT researchers, especially in the QUT Business School, to identify and procure a rapidly growing range of financial datasets to support research. The library undertakes the licensing and uses the Library Resource Allocation to pay for the subscriptions. It is a new area of collection development, with much to be learned. The final strategy discussed is the library acting as a “data broker”: QUT Library has been working with researchers to identify these datasets and undertake the licensing, payment and access as a centrally supported service on behalf of researchers.

Relevance: 20.00%

Publisher:

Abstract:

Management of groundwater systems requires realistic conceptual hydrogeological models, as a framework for numerical simulation modelling but also for system understanding and for communicating this understanding to stakeholders and the broader community. To help overcome these challenges we developed GVS (Groundwater Visualisation System), a stand-alone desktop software package that uses interactive 3D visualisation and animation techniques. The goal was a user-friendly groundwater management tool that could support a range of existing real-world and pre-processed data, both surface and subsurface, including geology and various types of temporal hydrological information. GVS allows these data to be integrated into a single conceptual hydrogeological model. In addition, 3D geological models produced externally with other software packages can readily be imported into GVS models, as can the outputs of simulations (e.g. piezometric surfaces) produced by software such as MODFLOW or FEFLOW. Boreholes can be integrated, showing any down-hole data and properties, including screen information, intersected geology, water level data and water chemistry. Animation is used to display spatial and temporal changes, with time-series data such as rainfall, standing water levels and electrical conductivity displaying dynamic processes. Time and space variations can be presented using a range of contouring and colour-mapping techniques, in addition to interactive plots of time-series parameters. Other types of data, for example demographics and cultural information, can also be readily incorporated. The GVS software runs on a standard Windows or Linux PC with a minimum of 2 GB RAM, and the model output is easy and inexpensive to distribute by download or via USB/DVD/CD.
Example models are described here for three groundwater systems in Queensland, northeastern Australia: two unconfined alluvial groundwater systems with intensive irrigation, the Lockyer Valley and the upper Condamine Valley, and the Surat Basin, a large sedimentary basin of confined artesian aquifers. The latter example required more detail in the hydrostratigraphy, correlation of formations with drillholes, and visualisation of simulated piezometric surfaces. Both alluvial-system GVS models were developed during drought conditions to support government strategies for implementing groundwater management. The Surat Basin model was industry-sponsored research for coal seam gas groundwater management and community information and consultation. The “virtual” groundwater systems in these 3D GVS models can be interactively interrogated through standard functions, plus production of 2D cross-sections, data selection from the 3D scene, back-end database queries and plot displays. A unique feature is that GVS allows investigation of time-series data across different display modes, both 2D and 3D. GVS has been used successfully as a tool to enhance community and stakeholder understanding and knowledge of groundwater systems, and is of value for training and educational purposes. Completed projects confirm that GVS provides powerful support for management and decision making, and serves as a tool for interpreting groundwater system hydrological processes. A highly effective visualisation output is the production of short videos (e.g. 2–5 min) based on sequences of camera ‘fly-throughs’ and screen images. Further work involves developing support for multi-screen displays and touch-screen technologies, distributed rendering, and gestural interaction systems. To highlight the visualisation and animation capability of the GVS software, links to related multimedia hosted online are included in the references.

Relevance: 20.00%

Publisher:

Abstract:

This study uses borehole geophysical log data of sonic velocity and electrical resistivity to estimate permeability in sandstones of the northern Galilee Basin, Queensland. Prior estimates of permeability are calculated from deterministic log–log linear empirical correlations between electrical resistivity and measured permeability; both negative and positive relationships are influenced by clay content. The prior estimates of permeability are then updated in a Bayesian framework for three boreholes, using both the cokriging (CK) method and a normal linear regression (NLR) approach to infer the likelihood function. The results show that the mean permeability estimated by the CK-based Bayesian method agrees better with the measured permeability when a fairly clear linear relationship exists between the logarithm of permeability and sonic velocity. In contrast, the NLR-based Bayesian approach gives better estimates of permeability for boreholes where no linear relationship exists between the logarithm of permeability and sonic velocity.
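In the simplest conjugate (normal-normal) case, the Bayesian updating described above reduces to a precision-weighted combination of the resistivity-based prior and the sonic-velocity likelihood. The sketch below illustrates that combination with invented numbers; the paper's CK and NLR machinery for inferring the likelihood is not reproduced.

```python
def bayes_update(prior_mean, prior_var, like_mean, like_var):
    """Combine a Gaussian prior with a Gaussian likelihood (here both
    for log10-permeability): precision-weighted posterior mean/variance."""
    post_var = 1.0 / (1.0 / prior_var + 1.0 / like_var)
    post_mean = post_var * (prior_mean / prior_var + like_mean / like_var)
    return post_mean, post_var

# Hypothetical numbers: prior from the resistivity correlation,
# likelihood from a regression of log-permeability on sonic velocity.
prior_mean, prior_var = -1.0, 0.40   # log10 permeability
like_mean, like_var = -0.4, 0.10     # tighter, so it dominates

post_mean, post_var = bayes_update(prior_mean, prior_var, like_mean, like_var)
print(round(post_mean, 3), round(post_var, 3))  # -0.52 0.08
```

Because the likelihood variance is smaller than the prior variance, the posterior mean sits closer to the sonic-velocity estimate, which is the behaviour one expects when the velocity-permeability relationship is clear.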

Relevance: 20.00%

Publisher:

Abstract:

“Supermassive” is a synchronised four-channel video installation with sound. Each video channel shows a different camera view of an animated three-dimensional scene that visually references galactic or astral imagery. The scene comprises forty-four separate clusters of slowly orbiting white text, each cluster referring to a different topic sourced online. The topics are diverse, with recurring subjects relating to spirituality, science, popular culture, food and experiences of contemporary urban life. The slow movements of the text and camera views are reinforced by a rhythmic, contemplative soundtrack. As an immersive installation, “Supermassive” operates somewhere between a meditational mind map and a representation of a contemporary data stream. “Supermassive” contributes to studies in the field of contemporary art. It is particularly concerned with the ways that graphic representations of language can operate in the exploration of contemporary lived experiences, whether actual or virtual. Artists such as Ed Ruscha and Christopher Wool have long explored the emotive and psychological potentials of graphic text. Other artists, such as Doug Aitken and Pipilotti Rist, have engaged with the physical and spatial potentials of audio-visual installations to create emotive and symbolic experiences for their audiences. Using a practice-led research methodology, “Supermassive” extends these creative inquiries. By creating a reflective atmosphere in which divergent textual subjects are pictured together, the work explores not only how we navigate information, but also how such navigations inform understandings of our physical and psychological realities. “Supermassive” has been exhibited internationally at LA Louver Gallery, Venice, California in 2013 and nationally with GBK as part of Art Month Sydney, also in 2013. It has been critically reviewed in The Los Angeles Times.

Relevance: 20.00%

Publisher:

Abstract:

The increasing demand for mobile video has attracted much attention from both industry and researchers. To satisfy users and facilitate the uptake of mobile video, providing optimal quality to users is necessary. As a result, quality of experience (QoE) has become an important focus in measuring the overall quality perceived by end-users, covering both objective system performance and subjective experience. However, due to the complexity of user experience and the diversity of resources (videos, networks and mobile devices), it remains challenging to develop QoE models for mobile video that can represent how user-perceived value varies with changing conditions. Previous QoE modelling research has two main limitations: the aspects influencing QoE are insufficiently considered, and acceptability as a measure of user value is seldom studied. Focusing on these QoE modelling issues, this thesis defines two aims: (i) investigating the key influencing factors of mobile video QoE; and (ii) establishing QoE prediction models based on the relationships between user acceptability and the influencing factors, in order to help provide optimal mobile video quality. To achieve the first aim, a comprehensive user study was conducted. It investigated the main influences on user acceptance: video encoding parameters such as quantization parameter, spatial resolution, frame rate and encoding bitrate; video content type; mobile device display resolution; and user profile, including gender, preference for video content, and prior viewing experience. Results from both quantitative and qualitative analyses revealed the significance of these factors, as well as how and why they influenced user acceptance of mobile video quality.
Based on the results of the user study, statistical techniques were used to generate a set of QoE models that predict the subjective acceptability of mobile video quality from a group of measurable influencing factors, including encoding parameters and bitrate, content type, and mobile device display resolution. By applying the proposed QoE models in a mobile video delivery system, optimal decisions can be made about video coding parameters so that the most suitable quality is delivered to users. This leads to consistent user experience across different mobile video content and efficient resource allocation. The findings of this research enhance the understanding of user experience in the field of mobile video, which will benefit mobile video design and research. This thesis presents a way of modelling QoE that emphasises user acceptability of mobile video quality, providing a strong connection between technical parameters and user-desired quality. Managing QoE based on acceptability promises the potential to adapt to resource limitations and achieve an optimal QoE in the provision of mobile video content.
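One plausible form for such an acceptability model is logistic regression on the measurable factors. The sketch below evaluates a hypothetical fitted model of that form; the coefficients and the exact factor set are invented for illustration and are not the thesis's estimates.

```python
import math

# Hypothetical fitted coefficients; in the thesis these would be
# estimated from the user-study data.
COEF = {
    "intercept": -6.0,
    "log_bitrate_kbps": 1.2,   # higher bitrate -> higher acceptability
    "frame_rate_fps": 0.05,
    "is_high_motion": -0.8,    # high-motion content is harder to encode
}

def acceptability(bitrate_kbps, frame_rate_fps, is_high_motion):
    """Predicted probability that a user finds the quality acceptable."""
    z = (COEF["intercept"]
         + COEF["log_bitrate_kbps"] * math.log(bitrate_kbps)
         + COEF["frame_rate_fps"] * frame_rate_fps
         + COEF["is_high_motion"] * (1 if is_high_motion else 0))
    return 1.0 / (1.0 + math.exp(-z))

p_low = acceptability(bitrate_kbps=128, frame_rate_fps=15, is_high_motion=True)
p_high = acceptability(bitrate_kbps=1024, frame_rate_fps=30, is_high_motion=False)
print(round(p_low, 3), round(p_high, 3))
```

A delivery system could use such a model as the thesis suggests: sweep candidate encoding parameters, predict acceptability for each, and pick the cheapest configuration whose predicted acceptability clears a target threshold.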

Relevance: 20.00%

Publisher:

Abstract:

This study used a video-based hazard perception dual task to compare the hazard perception skills of young drivers with those of middle-aged, more experienced drivers, and to determine whether these skills can be improved with video-based road commentary training. The primary task required participants to detect and verbally identify immediate hazards in video-based traffic scenarios while concurrently performing a secondary tracking task simulating the steering of real driving. The results showed that the young drivers perceived fewer immediate hazards (mean = 75.2%, n = 24, 19 females) than the more experienced drivers (mean = 87.5%, n = 8, all females) and had longer hazard perception times, but performed better on the secondary tracking task. After the road commentary training, the mean percentage of hazards detected and identified by the young drivers improved to the level of the experienced drivers and was significantly higher than that of a control group matched for age and driving experience. The results are discussed in the context of psychological theories of hazard perception and in relation to road commentary as an evidence-based training intervention that appears to improve many aspects of unsafe driving behaviour in young drivers.

Relevance: 20.00%

Publisher:

Abstract:

Classroom emotional climates are interrelated with students’ engagement with university courses. Despite growing interest in emotions and emotional climate research, little is known about the ways in which social interactions and different subject matter mediate emotional climates in preservice science teacher education classes. In this study we investigated the emotional climate and associated classroom interactions in a preservice science teacher education class. We were interested in the ways in which salient classroom interactions were related to the emotional climate during lessons centered on debates about science-based issues (e.g., nuclear energy alternatives). Participants used audience response technology to indicate their perceptions of the emotional climate. Analysis of conversation for salient video clips and analysis of non-verbal conduct (acoustic parameters, body movements, and facial expressions) supplemented emotional climate data. One key contribution that this study makes to preservice science teacher education is to identify the micro-processes of successful and unsuccessful class interactions that were associated with positive and neutral emotional climate. The structure of these interactions can inform the practice of other science educators who wish to produce positive emotional climates in their classes. The study also extends and explicates the construct of intensity of emotional climate.