931 results for scenarios


Relevance: 10.00%

Abstract:

In this paper, a novel 2×2 multiple-input multiple-output orthogonal frequency division multiplexing (MIMO-OFDM) testbed based on the Analog Devices AD9361 highly integrated radio frequency (RF) agile transceiver was implemented specifically to estimate and analyze MIMO-OFDM channel capacity in vehicle-to-infrastructure (V2I) environments using the 920 MHz industrial, scientific, and medical (ISM) band. We implemented two-dimensional discrete cosine transform-based filtering to reduce channel estimation errors and show its effectiveness on our measurement results. We also analyzed, by simulation, the effects of channel estimation error on MIMO channel capacity. Three subcarrier spacing scenarios were investigated, corresponding to the IEEE 802.11p, Long-Term Evolution (LTE), and Digital Video Broadcasting - Terrestrial (DVB-T, 2k mode) standards. An extensive MIMO-OFDM V2I channel measurement campaign was performed in a suburban environment. Analysis of the measured MIMO channel capacity as a function of transmitter-to-receiver (TX-RX) separation distance up to 250 m shows that, under a fixed receiver signal-to-noise ratio (SNR) criterion, the variance of the MIMO channel capacity is larger for near-range line-of-sight (LOS) scenarios than for long-range non-LOS cases. We observed that the largest capacity values were achieved under LOS propagation, despite the common assumption that a LOS MIMO channel is degenerate. We attribute this to the large angular spacing between MIMO subchannels that occurs when the receiver vehicle's rooftop antennas pass the fixed transmitter antennas at close range, causing the MIMO subchannels to become orthogonal. In addition, analysis of the effects of different subcarrier spacings on MIMO-OFDM channel capacity showed negligible differences in mean channel capacity over the range investigated. The measured channels described in this paper are available on request.
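For context, the fixed-SNR MIMO capacity criterion referred to above is conventionally computed, for equal power allocation across transmit antennas, as C = log2 det(I + (SNR/Nt) H H^H). A minimal NumPy sketch of this textbook formula (our illustration; the function names and normalisation are ours, not the paper's):

```python
import numpy as np

def mimo_capacity_bits(H, snr_linear):
    """Capacity (bits/s/Hz) of a MIMO channel H with equal power
    allocation across the transmit antennas."""
    n_rx, n_tx = H.shape
    det = np.linalg.det(np.eye(n_rx) + (snr_linear / n_tx) * (H @ H.conj().T))
    return float(np.log2(det.real))

# Example: a random 2x2 Rayleigh channel evaluated at a fixed receiver
# SNR of 10 dB, mirroring the fixed-SNR criterion used in the paper.
rng = np.random.default_rng(0)
H = (rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))) / np.sqrt(2)
print(mimo_capacity_bits(H, 10 ** (10 / 10)))
```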

Relevance: 10.00%

Abstract:

Public buildings and large infrastructure are typically monitored by tens or hundreds of cameras, all capturing different physical spaces and observing different types of interactions and behaviours. To date, however, in large part due to limited data availability, crowd monitoring and operational surveillance research has focused on single-camera scenarios, which are not representative of real-world applications. In this paper we present a new, publicly available database for large-scale crowd surveillance. Footage from 12 cameras covering the main floor of a busy university campus building for a full work day, including an internal and an external foyer, elevator foyers, and the main external approach, is provided, alongside annotations for crowd counting (single- or multi-camera) at 10 sites and for pedestrian flow analysis at 6 sites. We describe how this large dataset can be used to perform distributed monitoring of building utilisation, and demonstrate its potential for understanding and learning the relationship between different areas of a building.

Relevance: 10.00%

Abstract:

Conceptual combination performs a fundamental role in creating the broad range of compound phrases utilised in everyday language. While the systematicity and productivity of language provide a strong argument in favour of assuming compositionality, this very assumption is still regularly questioned in both cognitive science and philosophy. This article provides a novel probabilistic framework for assessing whether the semantics of conceptual combinations are compositional, that is, whether they can be considered a function of the semantics of the constituent concepts. Rather than adjudicating between different grades of compositionality, the framework presented here contributes formal methods for determining a clear dividing line between compositional and non-compositional semantics. Compositionality is equated with a joint probability distribution modelling how the constituent concepts in the combination are interpreted. Marginal selectivity is emphasised as a pivotal probabilistic constraint for the application of the Bell/CH and CHSH systems of inequalities (referred to collectively as Bell-type). Non-compositionality is then equated with either a failure of marginal selectivity, or, in the presence of marginal selectivity, with a violation of Bell-type inequalities. In both non-compositional scenarios, the conceptual combination cannot be modelled using a joint probability distribution with variables corresponding to the interpretation of the individual concepts. The framework is demonstrated by applying it to an empirical scenario of twenty-four non-lexicalised conceptual combinations.
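To make the Bell-type test concrete, here is a toy sketch with made-up interpretation frequencies (our own example, not the study's data). For ±1-valued interpretation outcomes measured under two contexts per concept, any single joint distribution forces the CHSH quantity S = E(A1,B1) + E(A1,B2) + E(A2,B1) - E(A2,B2) to satisfy |S| <= 2, provided marginal selectivity holds (it does for the tables below, whose marginals are all 0.5/0.5):

```python
import numpy as np

def correlation(p):
    """Expectation E(A,B) for ±1-valued outcomes, given a 2x2 joint
    probability table p[a, b] with index 0 meaning +1 and 1 meaning -1."""
    return p[0, 0] - p[0, 1] - p[1, 0] + p[1, 1]

def chsh(p11, p12, p21, p22):
    """CHSH quantity; |S| <= 2 for any single joint distribution."""
    return (correlation(p11) + correlation(p12)
            + correlation(p21) - correlation(p22))

# Hypothetical interpretation frequencies for two concepts in two contexts:
p = np.array([[0.4, 0.1], [0.1, 0.4]])   # strongly correlated context pair
q = np.array([[0.1, 0.4], [0.4, 0.1]])   # strongly anti-correlated pair
print(chsh(p, p, p, q))  # 2.4 > 2: no joint distribution reproduces these tables
```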

Relevance: 10.00%

Abstract:

The QUT-NOISE-SRE protocol is designed to mix the large QUT-NOISE database, consisting of over 10 hours of background noise collected across 10 unique locations covering 5 common noise scenarios, with commonly used speaker recognition datasets such as Switchboard, Mixer, and the speaker recognition evaluation (SRE) datasets provided by NIST. By allowing common clean-speech corpora to be mixed with a wide variety of noise conditions, environmental reverberant responses, and signal-to-noise ratios, the protocol provides a solid basis for developing, evaluating, and benchmarking robust speaker recognition algorithms, and is freely available to download alongside the QUT-NOISE database. In this work, we use the QUT-NOISE-SRE protocol to evaluate a state-of-the-art PLDA i-vector speaker recognition system, demonstrating the importance of designing voice-activity-detection front-ends specifically for speaker recognition rather than aiming for perfect coherence with the true speech/non-speech boundaries.
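Mixing clean speech with noise at a prescribed SNR, as such a protocol requires, amounts to scaling the noise segment before adding it. A minimal sketch under our own assumptions (power-based SNR over the whole segment, arrays at a common sample rate; not the protocol's actual implementation):

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Add noise to speech, scaled so the speech-to-noise power ratio
    over the segment equals snr_db."""
    noise = noise[:len(speech)]
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2)
    scale = np.sqrt(p_speech / (p_noise * 10 ** (snr_db / 10)))
    return speech + scale * noise

# Toy usage with surrogate signals (1 s at 16 kHz):
rng = np.random.default_rng(0)
speech = rng.standard_normal(16000)
noise = rng.standard_normal(16000)
noisy = mix_at_snr(speech, noise, snr_db=5)
```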

Relevance: 10.00%

Abstract:

Particle swarm optimization (PSO), a population-based algorithm, has recently been applied to multi-robot systems. Although the algorithm solves many optimization problems well, it has drawbacks when applied to multi-robot search for a target in a space containing large static obstacles. One defect is premature convergence: particles initially spread across the search space tend, as time passes, to converge into a small area. In a multi-robot search system this is particularly harmful when large static obstacles prevent the robots from finding the target easily; the robots converge to a small area that may not contain the target and become trapped there. Another shortcoming is that basic PSO cannot guarantee global convergence: particles initially explore different areas but are sometimes poor at exploiting promising ones, which increases the search time.

This study proposes a PSO-based method for a multi-robot system searching for a target in a space containing large static obstacles. The method not only overcomes the premature convergence problem but also establishes an efficient balance between exploration and exploitation and guarantees global convergence, reducing the search time by combining PSO with a local search method such as A-star. To validate the effectiveness and usefulness of the algorithms, a simulation environment was developed for conducting simulation-based experiments in different scenarios and reporting experimental results. These results demonstrate that the proposed method overcomes the premature convergence problem and guarantees global convergence.
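For reference, the canonical inertia-weight PSO update that underlies such approaches is shown below in standard textbook form (the parameter values are illustrative defaults, not the study's tuned settings):

```python
import numpy as np

def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5, rng=None):
    """One velocity/position update of inertia-weight PSO.
    x, v, pbest have shape (n_particles, dims); gbest has shape (dims,)."""
    if rng is None:
        rng = np.random.default_rng()
    r1 = rng.random(x.shape)  # stochastic pull toward personal bests
    r2 = rng.random(x.shape)  # stochastic pull toward the global best
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    return x + v, v
```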

Relevance: 10.00%

Abstract:

Organisations are constantly seeking new ways to improve operational efficiency. This study investigates a novel way to identify potential efficiency gains in business operations by observing how they were carried out in the past and then exploring better ways of executing them, taking into account trade-offs between time, cost, and resource utilisation. This paper demonstrates how these trade-offs can be incorporated into the assessment of alternative process execution scenarios by making use of a cost environment. A number of optimisation techniques are proposed to explore and assess alternative execution scenarios, with the objective function represented by a cost structure that captures different process dimensions. An experimental evaluation is conducted to analyse the performance and scalability of the optimisation techniques: integer linear programming (ILP), hill climbing, tabu search, and our previously proposed hybrid genetic algorithm. The findings demonstrate that the hybrid genetic algorithm is scalable and outperforms the other techniques. Moreover, we argue that ILP is unrealistic in this setting and cannot handle complex cost functions such as the ones we propose. Finally, we show how cost-related insights can be gained from improved execution scenarios and how these can be used to put forward recommendations for reducing process-related cost and overhead within organisations.
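To make the search concrete, here is a minimal hill-climbing sketch over alternative execution scenarios with a weighted time/cost/utilisation objective. The scenario encoding, neighbour function, and weights are our illustrative assumptions, not the paper's cost structure:

```python
import random

def objective(s, w_time=1.0, w_cost=1.0, w_util=1.0):
    """Weighted trade-off for one scenario s, a dict with
    'time', 'cost' and 'utilisation'; lower is better."""
    return w_time * s["time"] + w_cost * s["cost"] - w_util * s["utilisation"]

def hill_climb(start, neighbours, iters=1000):
    """Greedy local search: accept a random neighbour only if it improves."""
    best = start
    for _ in range(iters):
        cand = random.choice(neighbours(best))
        if objective(cand) < objective(best):
            best = cand
    return best

# Toy usage: neighbours perturb each dimension slightly (demonstration only).
def neighbours(s):
    return [{k: max(0.0, v + random.uniform(-1, 1)) for k, v in s.items()}
            for _ in range(5)]

print(hill_climb({"time": 10.0, "cost": 8.0, "utilisation": 0.5}, neighbours))
```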

Relevance: 10.00%

Abstract:

BACKGROUND CONTEXT: The Neck Disability Index is frequently used to measure neck outcomes. The statistical rigor of the Neck Disability Index has been assessed with conflicting outcomes. To date, Confirmatory Factor Analysis of the Neck Disability Index has not been reported for a suitably large population study. Because the Neck Disability Index is not a condition-specific measure of neck function, initial Confirmatory Factor Analysis should consider problematic neck patients as a homogeneous group. PURPOSE: We sought to analyze the factor structure of the Neck Disability Index through Confirmatory Factor Analysis in a symptomatic, homogeneous neck population, with respect to the pooled population and sex subgroups. STUDY DESIGN: This was a secondary analysis of pooled data. PATIENT SAMPLE: A total of 1,278 symptomatic neck patients (67.5% female, median age 41 years): 803 nonspecific and 475 with whiplash-associated disorder. OUTCOME MEASURES: The Neck Disability Index was used to measure outcomes. METHODS: We analyzed pooled baseline data from six independent studies of patients with neck problems who completed Neck Disability Index questionnaires at baseline. The Confirmatory Factor Analysis was considered in three scenarios: the full sample and each sex separately. Models were compared empirically for best fit. RESULTS: Two-factor models have good psychometric properties across both the pooled and sex subgroups. However, according to these analyses, the one-factor solution is preferable on grounds of both statistical fit and parsimony. The two-factor model was close to significant for the male subgroup (p<.07), where questions separated into constructs of mental function (pain, reading, headaches, and concentration) and physical function (personal care, lifting, work, driving, sleep, and recreation). CONCLUSIONS: The Neck Disability Index demonstrated a one-factor structure when analyzed by Confirmatory Factor Analysis in a pooled, homogeneous sample of neck problem patients. However, a two-factor model did approach significance for male subjects, where questions separated into constructs of mental and physical function. Further investigations in different conditions, subgroups, and sex-specific populations are warranted.
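A sketch of how such a one- versus two-factor comparison can be set up in Python, assuming the semopy SEM package and hypothetical item-level column names; this is our illustration, not the study's analysis code:

```python
import pandas as pd
import semopy  # assumed SEM package; uses lavaan-style model syntax

ONE_FACTOR = ("disability =~ pain + personal_care + lifting + reading"
              " + headaches + concentration + work + driving + sleep + recreation")
TWO_FACTOR = ("mental =~ pain + reading + headaches + concentration\n"
              "physical =~ personal_care + lifting + work + driving + sleep + recreation")

df = pd.read_csv("ndi_items.csv")  # hypothetical item-level responses

for name, desc in [("one-factor", ONE_FACTOR), ("two-factor", TWO_FACTOR)]:
    model = semopy.Model(desc)
    model.fit(df)
    # Compare fit indices empirically, as the abstract describes.
    print(name, semopy.calc_stats(model)[["chi2", "CFI", "RMSEA", "AIC"]])
```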

Relevance: 10.00%

Abstract:

Roads and road infrastructure will face multiple challenges over the coming decades, challenges that in many ways bear little resemblance to those previously faced and that will require new approaches. The opportunity exists to transform the way road infrastructure is conceived and constructed, as a key part of helping society respond to climate change and reduce other environmental pressures. Innovations in road construction, use, and management aimed at managing these changes can already be seen. Scenario planning is one tool that can take emerging challenges into account, develop or adopt new approaches, and thus help this transformation occur. The paper explores scenario planning methodologies, global innovations and trends in road construction and maintenance, and the findings from stakeholder workshops in Brisbane and Perth. It highlights key opportunities for road agencies to use scenarios to enable planning that, in the face of future uncertainties, facilitates appropriate responses.

Relevance: 10.00%

Abstract:

Creative and ad-hoc work often involves non-digital artifacts, such as whiteboards and post-it notes. While this preferred method of brainstorming and idea development facilitates work among collocated participants, it makes it particularly tricky to involve remote participants, let alone cases where live social involvement is required and the number and locations of remote participants can be vast. Our work originally focused on large distributed teams in business entities; the vast majority of teams in large organizations are distributed. Our team of corporate researchers set out to identify state-of-the-art technologies that could facilitate the scenarios mentioned above. This paper is an account of a currently running corporate research project in the area of enterprise collaboration, with a strong focus on human-computer interaction in mixed-mode environments, especially in areas of collaboration where computers still play a secondary role. We signal the potential use of the technology in situations where community involvement is either required or desirable. The goal of the paper is to initiate a discussion on the use of technologies, initially designed to support enterprise collaboration, in situations requiring community engagement. In other words, it is a contribution of technically focused research exploring the uses of the technology in areas such as social engagement and community involvement. © 2012 IEEE.

Relevance: 10.00%

Abstract:

In this paper we present a robust method to detect handwritten text in unconstrained drawings on ordinary whiteboards. Unlike printed text in documents, free-form handwritten text has no regular pattern of size, orientation, or font, and it is often mixed with other drawings such as lines and shapes. Unlike handwriting on paper, handwriting on a whiteboard cannot be scanned, so detection must be based on photos. Our method traces straight edges in photos of the whiteboard and builds a graph representation of connected components. We use geometric properties such as edge density, graph density, aspect ratio, and neighborhood similarity to differentiate handwritten text from other drawings. The experimental results show that our method achieves satisfactory precision and recall. Furthermore, the method is robust and efficient enough to be deployed on a mobile device. This is an important enabler of business applications that support whiteboard-centric visual meetings in enterprise scenarios. © 2012 IEEE.
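As an illustration of the kind of geometric filtering described, here is a simplified rendition with OpenCV; the thresholds and the feature set are placeholders of ours, not the authors' tuned values, and the snippet expects a BGR image loaded by the caller:

```python
import cv2
import numpy as np

def text_like_components(image_bgr, min_area=30):
    """Return bounding boxes of edge-image components whose geometry
    resembles handwriting: moderate aspect ratio, enough edge density."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    n, labels, stats, _ = cv2.connectedComponentsWithStats(edges)
    boxes = []
    for i in range(1, n):  # label 0 is the background
        x, y, w, h, area = stats[i]
        aspect = w / max(h, 1)
        edge_density = area / max(w * h, 1)  # edge pixels per bounding-box pixel
        if area >= min_area and 0.1 < aspect < 10 and edge_density > 0.05:
            boxes.append((x, y, w, h))
    return boxes
```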

Relevance: 10.00%

Abstract:

Experience has shown that developing business applications based on text analysis normally requires a great deal of time and expertise in computational linguistics. Several approaches to integrating text analysis systems with business applications have been proposed, but so far there has been no coordinated approach that would enable building scalable and flexible text analysis applications in enterprise scenarios. In this paper, a service-oriented architecture for text processing applications in the business domain is introduced. It comprises various groups of processing components and knowledge resources. The architecture, created as a result of our experience building natural language processing applications in business scenarios, allows text analysis and other components to be reused, and facilitates the development of business applications. We verify our approach by showing how the proposed architecture can be applied to create a text-analytics-enabled business application that addresses a concrete business scenario. © 2010 IEEE.
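A minimal sketch of the reuse idea behind such an architecture, using our own hypothetical component interface (the paper does not prescribe these names or this composition mechanism):

```python
from typing import Protocol

class TextProcessor(Protocol):
    """Common contract so analysis components can be reused and recombined."""
    def process(self, text: str) -> str: ...

class WhitespaceNormalizer:
    def process(self, text: str) -> str:
        return " ".join(text.split())

class Lowercaser:
    def process(self, text: str) -> str:
        return text.lower()

class Pipeline:
    """Composes processing components that all expose the same interface."""
    def __init__(self, *stages: TextProcessor) -> None:
        self.stages = stages

    def process(self, text: str) -> str:
        for stage in self.stages:
            text = stage.process(text)
        return text

print(Pipeline(WhitespaceNormalizer(), Lowercaser()).process("  Invoice   OVERDUE "))
```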

Relevance: 10.00%

Abstract:

Process variability in pollutant build-up and wash-off generates inherent uncertainty that affects the outcomes of stormwater quality models. Poor characterisation of process variability constrains the accurate accounting of the uncertainty associated with pollutant processes, which is a significant limitation on effective decision making in relation to stormwater pollution mitigation. The study developed three theoretical scenarios, based on research findings that variations in the particle size fractions <150 µm and >150 µm during pollutant build-up and wash-off primarily determine the variability associated with these processes. These scenarios, which combine pollutant build-up and wash-off processes taking place on a continuous timeline, are able to explain process variability under different field conditions. Given the variability characteristics of a specific build-up or wash-off event, the theoretical scenarios help to infer the variability characteristics of the pollutant process that follows. Mathematical formulation of the theoretical scenarios enables the incorporation of the variability characteristics of pollutant build-up and wash-off processes into stormwater quality models. The research outcomes will contribute to the quantitative assessment of uncertainty as an integral part of interpreting stormwater quality modelling outcomes.
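For orientation, stormwater quality models commonly describe these two processes with exponential build-up and wash-off equations. A sketch of those conventional forms follows (these are the standard textbook equations with made-up coefficients, not the scenario formulations developed in this study):

```python
import numpy as np

def buildup(t_days, b_max=10.0, k_b=0.4):
    """Exponential pollutant build-up (kg/ha) after t dry days,
    saturating toward the maximum load b_max."""
    return b_max * (1.0 - np.exp(-k_b * t_days))

def washoff(b0, rain_mm_h, duration_h, k_w=0.02):
    """Mass washed off (kg/ha) from an initial surface load b0
    by a storm of given intensity and duration."""
    return b0 * (1.0 - np.exp(-k_w * rain_mm_h * duration_h))

b = buildup(7)  # load after a 7-day dry spell
print(b, washoff(b, rain_mm_h=20, duration_h=1))
```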

Relevance: 10.00%

Abstract:

In life cycle assessment studies, greenhouse gas (GHG) emissions from direct land-use change have been estimated to make a significant contribution to the global warming potential of agricultural products. However, these estimates have high uncertainty due to complex data requirements and the difficulty of attributing land-use change. This paper presents estimates of GHG emissions from direct land-use change from native woodland to grazing land for two beef production regions in eastern Australia, which were the subject of a multi-impact life cycle assessment study of premium beef production. Spatially and temporally consistent datasets were derived for areas of forest cover and biomass carbon stocks using published remotely sensed tree-cover data and regionally applicable allometric equations consistent with Australia's national GHG inventory report. Standard life cycle assessment methodology was used to estimate GHG emissions and removals from direct land-use change attributed to beef production. For the northern-central New South Wales region, estimates ranged from a net emission of 0.03 t CO2-e/ha/year to a net removal of 0.12 t CO2-e/ha/year under the low and high scenarios, respectively, for sequestration in regrowing forests. For the same period (1990-2010), the study region in southern-central Queensland was estimated to have net emissions from land-use change in the range of 0.45-0.25 t CO2-e/ha/year. The difference between regions reflects the continuation of higher rates of deforestation in Queensland until strict regulation in 2006, whereas native vegetation protection laws were introduced earlier in New South Wales. On the basis of liveweight produced at the farm gate, emissions from direct land-use change for 1990-2010 were comparable in magnitude to those from other on-farm sources, which were dominated by enteric methane. However, calculating land-use change impacts for the Queensland region over a period starting in 2006 gave a range from net emissions of 0.11 t CO2-e/ha/year to net removals of 0.07 t CO2-e/ha/year. This study demonstrated a method for deriving spatially and temporally consistent datasets to improve estimates of direct land-use change impacts in life cycle assessment. It identified areas of uncertainty, including rates of sequestration in woody regrowth and the impacts of land-use change on soil carbon stocks in grazed woodlands, but also showed the potential for direct land-use change to represent a net sink for GHG.
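As a worked illustration of the unit conversions behind such per-hectare figures (the carbon stocks below are made up; only the 44/12 CO2-to-carbon molecular-weight ratio and the common 20-year amortisation convention are standard):

```python
C_TO_CO2 = 44.0 / 12.0  # molecular-weight ratio of CO2 to carbon

def luc_emissions(c_before_t_ha, c_after_t_ha, years=20):
    """Annualised direct land-use-change emissions (t CO2-e/ha/year)
    from a biomass carbon stock change, amortised over `years`.
    Positive = net emission, negative = net removal."""
    return (c_before_t_ha - c_after_t_ha) * C_TO_CO2 / years

# Hypothetical: woodland at 25 t C/ha converted to pasture at 22 t C/ha.
print(luc_emissions(25.0, 22.0))  # 0.55 t CO2-e/ha/year over 20 years
```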

Relevance: 10.00%

Abstract:

This project developed three mathematical models for scheduling ambulances and ambulance crews, and solved each model for test scenarios based on real data. Results from these models can serve as decision aids for dispatching or relocating ambulances, and for strategic decisions on the ambulance crews needed each shift. The thesis used flexible flow shop scheduling techniques to formulate strategic, dynamic, and real-time models, and metaheuristic solution techniques were applied in a case study with realistic data. These models are suitable for ambulance planners and dispatchers.
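A toy sketch of the scheduling viewpoint, treating ambulances as parallel machines and incidents as jobs; this is entirely our simplification for illustration, not the thesis's flexible-flow-shop formulation:

```python
import heapq

def greedy_dispatch(incident_times, service_min, n_ambulances):
    """Assign each incident (in arrival order) to the earliest-free
    ambulance; returns the per-incident response start times."""
    free_at = [0.0] * n_ambulances  # min-heap of ambulance availability times
    heapq.heapify(free_at)
    starts = []
    for arrival, service in zip(incident_times, service_min):
        t = heapq.heappop(free_at)      # earliest-available ambulance
        start = max(t, arrival)         # cannot start before the call arrives
        starts.append(start)
        heapq.heappush(free_at, start + service)
    return starts

# Four incidents (arrival minute, service minutes) handled by two ambulances:
print(greedy_dispatch([0, 5, 7, 9], [30, 20, 25, 15], n_ambulances=2))
```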

Relevance: 10.00%

Abstract:

The Australian Law Reform Commission is conducting an inquiry into copyright law and the digital economy in 2012 and 2013. The President, Rosalind Croucher, stated: “While the Copyright Act has been amended on occasion over the past 12 years to account for digital developments, these changes occurred before the digital economy took off. The Australian Law Reform Commission will need to find reforms that are responsive to this new environment, and to future scenarios that are still in the realm of the imagination. It is a complex and important area of law and we are looking forward to some robust debate and discussion during the course of this very important Inquiry.”