64 results for system performance evaluation
Abstract:
Several pixel-based people counting methods have been developed over the years. Among these, the product of scale-weighted pixel sums and a linear correlation coefficient is a popular people counting approach. However, most approaches have paid little attention to resolving the true background and instead take all foreground pixels into account. With large crowds moving at varying speeds, and with the presence of other moving objects such as vehicles, this approach is prone to problems. In this paper we present a method which concentrates on determining the true foreground, i.e. human-image pixels only. To do this we have proposed, implemented and comparatively evaluated a human detection layer to make people counting more robust in the presence of noise and the absence of empty background sequences. We show the effect of combining human detection with a pixel-map based algorithm to i) count only human-classified pixels and ii) prevent foreground pixels belonging to humans from being absorbed into the background model. We evaluate the performance of this approach on the PETS 2009 dataset using various configurations of the proposed methods. Our evaluation demonstrates that the basic benchmark method we implemented can achieve an accuracy of up to 87% on sequence "S1.L1 13-57 View 001", and our proposed approach can achieve up to 82% on sequence "S1.L3 14-33 View 001", where the crowd stops and the benchmark accuracy falls to 64%.
Abstract:
This paper describes the crowd image analysis challenge that forms part of the PETS 2009 workshop. The aim of this challenge is to use new or existing systems for i) crowd count and density estimation, ii) tracking of individual(s) within a crowd, and iii) detection of separate flows and specific crowd events, in a real-world environment. The dataset scenarios were filmed from multiple cameras and involve multiple actors.
Abstract:
Dual Carrier Modulation (DCM) was chosen as the higher data rate modulation scheme for MB-OFDM (Multiband Orthogonal Frequency Division Multiplexing) in the UWB (Ultra-Wideband) radio platform ECMA-368. ECMA-368 has been chosen as the physical implementation for high data rate Wireless USB (W-USB) and Bluetooth 3.0. In this paper, different demapping methods for the DCM demapper are presented: Soft Bit, Maximum Likelihood (ML) Soft Bit and Log Likelihood Ratio (LLR). Frequency diversity and Channel State Information (CSI) are further techniques used to enhance these demapping methods. The system performance of these DCM demapping methods, simulated in realistic multi-path environments, is provided and compared.
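The soft-bit and LLR demapping methods compared in this abstract all reduce, in essence, to comparing distances between the received sample and candidate constellation points. A minimal max-log LLR sketch is shown below; it is a generic illustration, not the ECMA-368 DCM demapper itself, which operates jointly on pairs of 16-point constellations across two subcarriers.

```python
import numpy as np

def maxlog_llr(y, noise_var, constellation, bit_map):
    """Max-log LLR soft demapping of one received complex sample.

    constellation: array of complex points; bit_map: one bit-tuple per point.
    Sign convention: a positive LLR favours bit value 0.
    """
    d2 = np.abs(y - constellation) ** 2  # squared Euclidean distances
    llrs = []
    for k in range(len(bit_map[0])):
        # nearest point carrying a 0 (resp. 1) at bit position k
        d0 = min(d for d, b in zip(d2, bit_map) if b[k] == 0)
        d1 = min(d for d, b in zip(d2, bit_map) if b[k] == 1)
        llrs.append((d1 - d0) / noise_var)
    return llrs
```

A CSI-aided variant would additionally scale each LLR by a per-subcarrier channel reliability estimate before channel decoding.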
Abstract:
This paper presents a novel intelligent multiple-controller framework incorporating a fuzzy-logic-based switching and tuning supervisor along with a generalised learning model (GLM) for an autonomous cruise control application. The proposed methodology combines the benefits of a conventional proportional-integral-derivative (PID) controller, and a PID structure-based (simultaneous) zero and pole placement controller. The switching decision between the two nonlinear fixed structure controllers is made on the basis of the required performance measure using a fuzzy-logic-based supervisor, operating at the highest level of the system. The supervisor is also employed to adaptively tune the parameters of the multiple controllers in order to achieve the desired closed-loop system performance. The intelligent multiple-controller framework is applied to the autonomous cruise control problem in order to maintain a desired vehicle speed by controlling the throttle plate angle in an electronic throttle control (ETC) system. Sample simulation results using a validated nonlinear vehicle model are used to demonstrate the effectiveness of the multiple-controller with respect to adaptively tracking the desired vehicle speed changes and achieving the desired speed of response, whilst penalising excessive control action. Crown Copyright (C) 2008 Published by Elsevier B.V. All rights reserved.
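One element of the framework above is a conventional PID controller whose gains the fuzzy supervisor can retune online. As a point of reference, a minimal discrete-time PID loop can be sketched as follows; the gains and time step are hypothetical, and the paper's pole-placement controller and fuzzy switching logic are not shown.

```python
class PID:
    """Minimal discrete PID controller (illustrative sketch; gains are hypothetical)."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = None

    def step(self, error):
        # accumulate the integral term and differentiate the error
        self.integral += error * self.dt
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv
```

In a multiple-controller arrangement of the kind described, a supervisor would compute the speed-tracking error each sample and either feed it to this controller or to the zero/pole-placement controller, depending on the required performance measure.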
Abstract:
Quadrature Phase Shift Keying (QPSK) and Dual Carrier Modulation (DCM) are currently used as the modulation schemes for Multiband Orthogonal Frequency Division Multiplexing (MB-OFDM) in the ECMA-368 defined Ultra-Wideband (UWB) radio platform. ECMA-368 has been chosen as the physical radio platform for many systems including Wireless USB (W-USB), Bluetooth 3.0 and Wireless HDMI; hence ECMA-368 is important to consumer electronics and to the user experience of these products. To enable the transport of high-rate USB, ECMA-368 offers up to 480 Mb/s instantaneous bit rate to the Medium Access Control (MAC) layer, but depending on radio channel conditions dropped packets unfortunately result in a lower throughput. This paper presents an alternative high data rate modulation scheme that fits within the configuration of the current standard, increasing system throughput by achieving 600 Mb/s (reliable to 3.1 meters) and thus maintaining the high rate USB throughput even with a moderate level of dropped packets. The modulation scheme is termed Dual Circular 32-QAM (DC 32-QAM). The system performance of DC 32-QAM modulation is presented and compared with 16-QAM and DCM.
Abstract:
The work reported in this paper is motivated by the need to handle single node failures in parallel summation algorithms on computer clusters. An agent-based approach is proposed in which a task to be executed is decomposed into sub-tasks and mapped onto agents that traverse computing nodes. The agents intercommunicate across computing nodes to share information in the event of a predicted node failure. Two single node failure scenarios are considered. The Message Passing Interface is employed to implement the proposed approach. Quantitative results obtained from experiments reveal that the agent-based approach can handle failures more efficiently than traditional failure handling approaches.
Abstract:
With continually increasing demands for improvements to atmospheric and planetary remote-sensing instrumentation, for both high optical system performance and extended operational lifetimes, an investigation to assess the effects of prolonged exposure to the space environment on a series of infrared interference filters and optical materials was promoted on the NASA LDEF mission. The NASA Long Duration Exposure Facility (LDEF) was launched by the Space Shuttle to transport various science and technology experiments both to and from space, providing investigators with the opportunity to study the effects of the space environment on materials and systems used in space-flight applications. Preliminary results to be discussed consist of transmission measurements obtained and processed from an infrared spectrophotometer both before (1983) and after (1990) exposure, compared with unexposed control specimens, together with results of detailed microscopic and general visual examinations performed on the experiment. The principal lead telluride (PbTe) and zinc sulphide (ZnS) based multilayer filters selected for this preliminary investigation consist of: an 8-12µm low pass edge filter, a 10.6µm 2.5% half bandwidth (HBW) double half-wave narrow bandpass filter, and a 10% HBW triple half-wave wide bandpass filter at 15µm. Optical substrates of MgF2 and KRS-5 (TlBrI) will also be discussed.
Abstract:
The use of data reconciliation techniques can considerably reduce the inaccuracy of process data due to measurement errors. This in turn results in improved control system performance and process knowledge. Dynamic data reconciliation techniques are applied to a model-based predictive control scheme. It is shown through simulations on a chemical reactor system that the overall performance of the model-based predictive controller is enhanced considerably when data reconciliation is applied. The dynamic data reconciliation techniques used include a combined strategy for the simultaneous identification of outliers and systematic bias.
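For linear balance constraints, the reconciliation step described above has a closed-form weighted least-squares solution: adjust the measurements as little as possible (in the covariance-weighted sense) so that the balances hold exactly. The sketch below illustrates this for a steady-state case with a hypothetical flow-split example; the paper's dynamic scheme with outlier and bias identification is more involved.

```python
import numpy as np

def reconcile(y, A, cov):
    """Weighted least-squares data reconciliation.

    Returns x minimising (x - y)' inv(cov) (x - y) subject to A @ x = 0,
    via x = y - cov A' (A cov A')^-1 A y  (linear constraints, steady state).
    """
    S = A @ cov @ A.T
    return y - cov @ A.T @ np.linalg.solve(S, A @ y)

# Hypothetical example: a splitter with balance f1 = f2 + f3.
A = np.array([[1.0, -1.0, -1.0]])            # f1 - f2 - f3 = 0
y = np.array([10.0, 4.0, 5.5])               # raw measurements (imbalance 0.5)
x = reconcile(y, A, np.eye(3))               # reconciled flows satisfy the balance
```

Gross-error detection schemes typically test the constraint residual `A @ y` against its expected variance before reconciling, so that an outlier biases the adjustment as little as possible.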
Abstract:
This chapter considers Multiband Orthogonal Frequency Division Multiplexing (MB-OFDM) modulation and demodulation with the intention of optimizing Ultra-Wideband (UWB) system performance. OFDM is a type of multicarrier modulation and is the most important factor in MB-OFDM system performance. It is also a low-cost digital signal component, efficiently using the Fast Fourier Transform (FFT) algorithm to implement multicarrier orthogonality. Within the MB-OFDM approach, OFDM modulation is employed in each 528 MHz wide band to transmit the data, while also using frequency hopping across the different bands. Each parallel bit stream can be mapped onto one of the OFDM subcarriers. Quadrature Phase Shift Keying (QPSK) and Dual Carrier Modulation (DCM) are currently used as the modulation schemes for MB-OFDM in the ECMA-368 defined UWB radio platform. A dual QPSK soft-demapper is suitable for ECMA-368 that exploits the inherent Time-Domain Spreading (TDS) and guard symbol subcarrier diversity to improve receiver performance, yet merges decoding operations together to minimize hardware and power requirements. There are several methods to demap the DCM: soft bit demapping, Maximum Likelihood (ML) soft bit demapping, and Log Likelihood Ratio (LLR) demapping. A Channel State Information (CSI) aided scheme, coupled with the band hopping information, is used as a further technique to improve DCM demapping performance. ECMA-368 offers up to 480 Mb/s instantaneous bit rate to the Medium Access Control (MAC) layer, but depending on radio channel conditions dropped packets unfortunately result in a lower throughput. An alternative high data rate modulation scheme, termed Dual Circular 32-QAM, fits within the configuration of the current standard and increases system throughput, thus maintaining a high rate throughput even with a moderate level of dropped packets.
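The FFT-based multicarrier orthogonality described above can be sketched in a few lines: data symbols are placed on subcarriers, an IFFT produces the time-domain OFDM symbol, and the receiver recovers the subcarriers with an FFT. The sketch uses ECMA-368's 128-point FFT size but omits the standard's pilot, guard and null tones, so the subcarrier mapping here is a simplifying assumption.

```python
import numpy as np

def ofdm_symbol(data, n_fft=128, n_zps=37):
    """One simplified OFDM symbol: data onto subcarriers, IFFT, zero-padded suffix.

    n_fft=128 matches ECMA-368's FFT size; a zero-padded suffix (ZPS) is used
    rather than a cyclic prefix. Pilot/guard/null tone placement is omitted.
    """
    assert len(data) <= n_fft
    freq = np.zeros(n_fft, dtype=complex)
    freq[: len(data)] = data                       # simplified subcarrier mapping
    time = np.fft.ifft(freq) * np.sqrt(n_fft)      # unitary scaling
    return np.concatenate([time, np.zeros(n_zps)])  # append the ZPS
```

Because the IFFT/FFT pair is exactly invertible, a receiver applying `np.fft.fft` to the first `n_fft` samples recovers each subcarrier without inter-carrier interference on an ideal channel, which is the orthogonality property the chapter relies on.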
Abstract:
In this paper, numerical analyses of the thermal performance of an indirect evaporative air cooler incorporating an M-cycle cross-flow heat exchanger have been carried out. The numerical model was established by solving the coupled governing equations for heat and mass transfer between the product and working air, using the finite-element method. The model was developed in the EES (Engineering Equation Solver) environment and validated against published experimental data. A correlation between the cooling (wet-bulb) effectiveness, system COP and a number of air flow/exchanger parameters was developed. It is found that lower channel air velocity, lower inlet air relative humidity, and a higher working-to-product air ratio yield higher cooling effectiveness. The recommended average air velocities in the dry and wet channels should not be greater than 1.77 m/s and 0.7 m/s, respectively. The optimum working-to-product air flow ratio for this cooler is 50%. The channel geometric sizes, i.e. channel length and height, also have a significant impact on system performance. A longer channel and a smaller channel height increase the system cooling effectiveness but reduce the system COP. The recommended channel height is 4 mm, and the dimensionless channel length, i.e. the ratio of channel length to height, should be in the range 100 to 300. The numerical results indicate that this new type of M-cycle heat and mass exchanger can achieve 16.7% higher cooling effectiveness than the conventional cross-flow heat and mass exchanger for indirect evaporative cooling. A model of this kind is new and not yet reported in the literature. The results of the study help with the design and performance analysis of this new type of indirect evaporative air cooler and, further, help increase the market share of the technology within the building air conditioning sector, which is currently dominated by conventional compression refrigeration technology.
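The two figures of merit traded off above, wet-bulb effectiveness and COP, have standard definitions that can be sketched directly; the numbers in the example are hypothetical, not values from the paper.

```python
def wetbulb_effectiveness(t_dry_in, t_out, t_wb_in):
    """Cooling (wet-bulb) effectiveness of an indirect evaporative cooler.

    Values above 1 are possible for an M-cycle exchanger, whose product-air
    outlet temperature can fall below the inlet wet-bulb temperature.
    """
    return (t_dry_in - t_out) / (t_dry_in - t_wb_in)

def cooling_cop(m_dot, cp_air, t_dry_in, t_out, fan_pump_power):
    """System COP: sensible cooling delivered per unit electrical input (W/W)."""
    q_cooling = m_dot * cp_air * (t_dry_in - t_out)  # W
    return q_cooling / fan_pump_power
```

These definitions make the paper's trade-off concrete: a longer channel lowers `t_out` (raising effectiveness) but also raises fan pressure drop and hence `fan_pump_power`, lowering COP.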
Abstract:
In practice, all I/Q signal processing receivers face the problem of I/Q imbalance. In this paper, we investigate the effect of I/Q imbalance on the performance of MIMO maximal ratio combining (MRC) systems that perform the combining at the radio frequency (RF) level, thereby requiring only one RF chain. Based on a system modeling that takes the I/Q imbalance into account, we evaluate the performance in terms of average symbol error probability (SEP), outage probability and system capacity, which are derived considering transmission over uncorrelated Rayleigh fading channels. Numerical results are provided to illustrate the effects of system parameters, such as the image-leakage ratio, numbers of transmit and receive antennas, and modulation order of quadrature amplitude modulation (QAM), on the system performance.
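As a baseline for the impaired system analysed above, ideal MRC weights each branch by the conjugate of its channel gain, making the post-combining SNR the sum of the branch SNRs. The sketch below shows this ideal case only; under I/Q imbalance each branch would additionally carry an image term proportional to the conjugate of the transmitted symbol, scaled by the image-leakage ratio, which is not modelled here.

```python
import numpy as np

def mrc_combine(y, h):
    """Ideal maximal ratio combining: weight branches by conj(channel gains)."""
    w = np.conj(h)                                  # matched-filter weights
    return (w @ y) / np.sum(np.abs(h) ** 2)         # normalised symbol estimate

def mrc_output_snr(h, noise_var):
    """Post-combining SNR equals the sum of the per-branch SNRs."""
    return np.sum(np.abs(h) ** 2) / noise_var
```

On a noiseless channel the combiner returns the transmitted symbol exactly; the paper's contribution is quantifying how far the imbalanced, single-RF-chain combiner falls short of this ideal.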
Abstract:
Capacity dimensioning is one of the key problems in wireless network planning. Analytical and simulation methods are usually used to pursue accurate capacity dimensioning of wireless networks. In this paper, an analytical capacity dimensioning method for WCDMA with high speed wireless links is proposed, based on an analysis of the relations between system performance and high speed wireless transmission technologies such as H-ARQ, AMC and fast scheduling. It evaluates system capacity in closed-form expressions at both link level and system level. Numerical results show that the proposed method can calculate link level and system level capacity for a WCDMA system with HSDPA and HSUPA.