726 results for "Performance degradation"

in the Queensland University of Technology ePrints Archive


Relevance: 100.00%

Abstract:

In this note, we present a method to characterize the degradation in performance that arises in linear systems due to constraints imposed on the magnitude of the control signal to avoid saturation effects. We do this in the context of cheap control for tracking step signals.
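
For orientation, the cheap-control setting invoked here is usually posed as a tracking problem with a vanishing penalty on control effort; a standard formulation (the note's exact setup may differ) is

\[
J(\varepsilon) = \int_0^\infty \left[ e(t)^2 + \varepsilon^2\, u(t)^2 \right] dt, \qquad \varepsilon \to 0,
\]

where \(e\) is the step-tracking error and \(u\) the control signal. The question addressed is how the achievable infimum of \(J\) deteriorates once a magnitude bound such as \(|u(t)| \le u_{\max}\) is imposed to avoid saturation.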

Relevance: 100.00%

Abstract:

Patients with a number of psychiatric and neuropathological conditions demonstrate problems in recognising facial expressions of emotion. Research indicating that patients with schizophrenia perform more poorly in the recognition of negative valence facial stimuli than positive valence stimuli has been interpreted as evidence of a negative emotion specific deficit. An alternative explanation lies in the psychometric properties of the stimulus materials. This model suggests that the pattern of impairment observed in schizophrenia may reflect initial discrepancies in task difficulty between stimulus categories, which are not apparent in healthy subjects because of ceiling effects. This hypothesis is tested by examining the performance of healthy subjects in a facial emotion categorisation task with three levels of stimulus resolution. Results confirm the predictions of the model, showing that performance degrades differentially across emotion categories, with the greatest deterioration for negative valence stimuli. In the light of these results, a possible methodology for detecting emotion specific deficits in clinical samples is discussed.

Relevance: 60.00%

Abstract:

Carrier frequency offset (CFO) and I/Q mismatch can cause significant performance degradation in OFDM systems. Their estimation and compensation are generally difficult because the two impairments are entangled in the received signal. In this paper, we propose low-complexity estimation and compensation schemes for the receiver which are robust to a wide range of CFO and I/Q mismatch values, although performance degrades slightly for very small CFO. These schemes consist of three steps: forming a cosine estimator free of I/Q mismatch interference, estimating the I/Q mismatch using the estimated cosine value, and forming a sine estimator using samples after I/Q mismatch compensation. The estimators are based on the observation that an estimate of the cosine serves much better as the basis for I/Q mismatch estimation than an estimate of CFO derived from the cosine function. Simulation results show that the proposed schemes improve system performance significantly and are robust to CFO and I/Q mismatch.
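
For context, a common baseband model (an illustration under standard assumptions, not necessarily the exact model used in the paper; the parameterisation of alpha and beta varies across the literature) shows why CFO and I/Q mismatch become entangled in the received samples:

```python
import numpy as np

def apply_cfo_and_iq_mismatch(x, cfo_norm, gain_imb, phase_imb):
    """Distort a baseband signal x with carrier frequency offset (CFO)
    and receiver I/Q imbalance.

    cfo_norm  : CFO normalised to the sample rate (cycles per sample)
    gain_imb  : I/Q gain imbalance ratio (1.0 means perfectly balanced)
    phase_imb : I/Q phase imbalance in radians (0.0 means balanced)
    """
    n = np.arange(len(x))
    y = x * np.exp(2j * np.pi * cfo_norm * n)   # CFO: constant-rate rotation
    # One common I/Q imbalance parameterisation: r = alpha*y + beta*conj(y).
    # The conj(y) image term is what entangles the two impairments.
    alpha = (1 + gain_imb * np.exp(-1j * phase_imb)) / 2
    beta = (1 - gain_imb * np.exp(1j * phase_imb)) / 2
    return alpha * y + beta * np.conj(y)
```

With gain_imb = 1.0 and phase_imb = 0.0 the model reduces to pure CFO, which is the sanity check that the parameterisation is balanced.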

Relevance: 60.00%

Abstract:

An element spacing of less than half a wavelength introduces strong mutual coupling between the ports of compact antenna arrays. The strong coupling causes significant system performance degradation. A decoupling network may compensate for the mutual coupling. Alternatively, port decoupling can be achieved using a modal feed network. In response to an input signal at one of the input ports, this feed network excites the antenna elements in accordance with one of the eigenvectors of the array scattering parameter matrix. In this paper, a novel 4-element monopole array is described. The feed network of the array is implemented as a planar ring-type circuit in stripline with four coupled line sections. The new configuration offers a significant reduction in size, resulting in a very compact array.
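
The modal feed idea rests on a linear-algebra fact that is easy to verify numerically: for a circulant-symmetric array, the DFT vectors are eigenvectors of the scattering matrix, so each feed-network input maps to one eigenmode. A minimal sketch with made-up S-parameters for a 4-port array:

```python
import numpy as np

# Hypothetical scattering parameters of a 4-element circulant-symmetric
# array: self term s0, adjacent-element coupling s1, opposite-element s2.
s0, s1, s2 = 0.20 + 0.10j, -0.40 + 0.20j, 0.15 - 0.05j
row = [s0, s1, s2, s1]
S = np.array([[row[(j - i) % 4] for j in range(4)] for i in range(4)])

# The (unitary) DFT matrix: its columns are the modal excitation vectors.
N = 4
k, n = np.meshgrid(np.arange(N), np.arange(N))
F = np.exp(-2j * np.pi * k * n / N) / np.sqrt(N)

for m in range(N):
    v = F[:, m]                       # excitation of eigenmode m
    w = S @ v
    lam = np.vdot(v, w)               # modal reflection coefficient
    assert np.allclose(w, lam * v)    # v is indeed an eigenvector of S
```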

Relevance: 60.00%

Abstract:

Speaker verification is the process of verifying the identity of a person by analysing their speech. There are several important applications for automatic speaker verification (ASV) technology, including suspect identification, tracking terrorists and detecting a person's presence at a remote location in the surveillance domain, as well as person authentication for phone banking and credit card transactions in the private sector. Telephones and telephony networks provide a natural medium for these applications. The aim of this work is to improve the usefulness of ASV technology for practical applications in the presence of adverse conditions. In a telephony environment, background noise, handset mismatch, channel distortions, room acoustics and restrictions on the available testing and training data are common sources of errors for ASV systems. Two research themes were pursued to overcome these adverse conditions: modelling mismatch and modelling uncertainty.

To address the performance degradation incurred through mismatched conditions, it was proposed to model this mismatch directly. Feature mapping was evaluated for combating handset mismatch and was extended through the use of a blind clustering algorithm to remove the need for accurate handset labels for the training data. Mismatch modelling was then generalised by explicitly modelling the session conditions as a constrained offset of the speaker model means. This session variability modelling approach enabled the modelling of arbitrary sources of mismatch, including handset type, and halved the error rates in many cases.

Methods to model the uncertainty in speaker model estimates and verification scores were developed to address the difficulties of limited training and testing data. The Bayes factor was introduced to account for the uncertainty of the speaker model estimates in testing by applying Bayesian theory to the verification criterion, with improved performance in matched conditions. Modelling the uncertainty in the verification score itself met with significant success: estimating a confidence interval for the "true" verification score enabled an order-of-magnitude reduction in the average quantity of speech required to make a confident verification decision based on a threshold. The confidence measures developed in this work may also have significant applications for forensic speaker verification tasks.
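
The confidence-interval result can be sketched in a few lines: accumulate scores as speech arrives and stop as soon as an interval around the running mean clears the decision threshold. A minimal illustration assuming roughly normal per-frame scores (the thesis's actual estimator and score definition may differ; all names are illustrative):

```python
import numpy as np

def early_decision(frame_scores, threshold, z=2.33, min_frames=20):
    """Average per-frame verification scores sequentially and stop as soon
    as a normal-approximation confidence interval for the mean lies
    entirely above (accept) or below (reject) the decision threshold."""
    frame_scores = np.asarray(frame_scores, dtype=float)
    for n in range(min_frames, len(frame_scores) + 1):
        s = frame_scores[:n]
        half_width = z * s.std(ddof=1) / np.sqrt(n)   # CI half-width
        if s.mean() - half_width > threshold:
            return True, n      # confident accept after n frames
        if s.mean() + half_width < threshold:
            return False, n     # confident reject after n frames
    return None, len(frame_scores)  # speech exhausted, undecided
```

Because most trials terminate well before the speech runs out, the average amount of speech consumed drops sharply, which is the order-of-magnitude reduction reported above.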

Relevance: 60.00%

Abstract:

The Lane Change Test (LCT) is one of the growing number of methods developed to quantify driving performance degradation brought about by the use of in-vehicle devices. Beyond its validity and reliability, for such a test to be of practical use, it must also be sensitive to the varied demands of individual tasks. The current study evaluated the ability of several recent LCT lateral control and event detection parameters to discriminate between visual-manual and cognitive surrogate In-Vehicle Information System tasks with different levels of demand. Twenty-seven participants (mean age 24.4 years) completed a PC version of the LCT while performing visual search and math problem solving tasks. A number of the lateral control metrics were found to be sensitive to task differences, but the event detection metrics were less able to discriminate between tasks. The mean deviation and lane excursion measures were able to distinguish between the visual and cognitive tasks, but were less sensitive to the different levels of task demand. The other LCT metrics examined were less sensitive to task differences. A major factor influencing the sensitivity of at least some of the LCT metrics could be the type of lane change instructions given to participants. The provision of clear and explicit lane change instructions and further refinement of its metrics will be essential for increasing the utility of the LCT as an evaluation tool.
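
For readers unfamiliar with these measures: mean deviation is the average lateral gap between the driven path and a normative reference path, and lane excursions count departures beyond the lane boundary. A minimal sketch (array names and the boundary value are illustrative, not the LCT specification):

```python
import numpy as np

def mean_deviation(y_driven, y_norm):
    """Average absolute lateral distance (m) between the driven lateral
    position and a normative reference path, sampled at the same points."""
    return float(np.mean(np.abs(np.asarray(y_driven) - np.asarray(y_norm))))

def lane_excursions(y_driven, boundary=2.0):
    """Count excursions beyond the lane boundary (m from lane centre)."""
    outside = np.abs(np.asarray(y_driven)) > boundary
    # An excursion starts wherever `outside` flips from False to True.
    return int(np.sum(outside[1:] & ~outside[:-1]) + outside[0])
```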

Relevance: 60.00%

Abstract:

Small element spacing in compact arrays results in strong mutual coupling between array elements. Performance degradation associated with the strong coupling can be avoided through the introduction of a decoupling network consisting of interconnected reactive elements. We present a systematic design procedure for decoupling networks of symmetrical arrays with more than three elements and characterized by circulant scattering parameter matrices. The elements of the decoupling network are obtained through repeated decoupling of the characteristic eigenmodes of the array, which allows the calculation of element values using closed-form expressions.
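
The closed-form expressions exist because the eigenvalues of a circulant matrix follow directly from its first row; in standard notation (ours, not necessarily the paper's),

\[
\lambda_k = \sum_{n=0}^{N-1} s_n\, e^{-j 2\pi k n / N}, \qquad k = 0, 1, \dots, N-1,
\]

where \(s_0, \dots, s_{N-1}\) is the first row of the \(N \times N\) scattering matrix. Each \(\lambda_k\) is the reflection coefficient seen by one characteristic eigenmode, so decoupling the modes one at a time reduces the network synthesis to a sequence of scalar matching problems.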

Relevance: 60.00%

Abstract:

Reduced element spacing in antenna arrays gives rise to strong mutual coupling between array elements and may cause significant performance degradation. These effects can be alleviated by introducing a decoupling network consisting of interconnected reactive elements. The existing design approach for the synthesis of a decoupling network for circulant symmetric arrays allows calculation of element values using closed-form expressions, but the resulting circuit configuration requires multilayer technology for implementation. In this paper, a new structure for the decoupling of circulant symmetric arrays of more than four elements is presented. Element values are no longer obtained in closed form, but the resulting circuit is much simpler and can be implemented on a single layer.

Relevance: 60.00%

Abstract:

A Networked Control System (NCS) is a feedback-driven control system wherein the control loops are closed through a real-time network. Control and feedback signals in an NCS are exchanged among the system's components in the form of information packets via the network. Nowadays, wireless technologies such as IEEE 802.11 are being introduced to modern NCSs as they offer better scalability, larger bandwidth and lower costs. However, this type of network was not designed for NCSs: the characteristics of wireless channels introduce a large amount of dropped data and unpredictable, long transmission latencies, which are not acceptable for real-time control systems. Real-time control is a class of time-critical application which requires lossless data transmission, small and deterministic delays and jitter. For a real-time control system, network-introduced problems may degrade the system's performance significantly or even cause system instability. It is therefore important to develop solutions that satisfy real-time requirements in terms of delays, jitter and data losses, and guarantee high levels of performance for time-critical communications in Wireless Networked Control Systems (WNCSs).

To improve or even guarantee real-time performance in wireless control systems, this thesis presents several network layout strategies and a new transport layer protocol. Firstly, the real-time performance, in terms of data transmission delays and reliability, of IEEE 802.11b-based UDP/IP NCSs is evaluated through simulations. After analysis of the simulation results, network layout strategies are presented that achieve relatively small and deterministic network-introduced latencies and reduce data loss rates. These are effective in providing better network performance without degrading the performance of other services.

After the investigation into layout strategies, the thesis presents a new transport protocol which is more efficient than UDP and TCP for guaranteeing reliable and time-critical communications in WNCSs. From the networking perspective, introducing appropriate communication schemes, modifying existing network protocols and devising new protocols have been the most effective and popular ways to improve or even guarantee real-time performance to a certain extent. However, most previously proposed schemes and protocols were designed for real-time multimedia communication and are not suitable for real-time control systems. Devising a new network protocol able to satisfy real-time requirements in WNCSs is therefore the main objective of this research project.

The Conditional Retransmission Enabled Transport Protocol (CRETP) is the new network protocol presented in this thesis. Retransmitting unacknowledged data packets is effective in compensating for data losses; however, every data packet in a real-time control system has a deadline, and data is assumed invalid or even harmful once its deadline expires. CRETP performs data retransmission only while the data is still valid, which guarantees data timeliness and saves memory and network resources. A trade-off between delivery reliability, transmission latency and network resources can be achieved by the conditional retransmission mechanism.

Protocol performance was evaluated through extensive simulations, including comparative studies between CRETP, UDP and TCP. The results showed that CRETP significantly: 1) improved the reliability of communication; 2) guaranteed the validity of received data; 3) reduced transmission latency to an acceptable value; and 4) made delays relatively deterministic and predictable. Furthermore, CRETP achieved the best overall performance in the comparative studies, making it the most suitable of the three transport protocols for real-time communications in a WNCS.
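
The conditional-retransmission rule at the heart of CRETP can be sketched in a few lines (an illustration of the idea only; the attribute names and return values are invented, not CRETP's packet format):

```python
import time

def handle_unacked(packet, send, now=time.monotonic):
    """Retransmit an unacknowledged packet only while its deadline has not
    expired; expired control data is dropped rather than retransmitted,
    saving memory and bandwidth for data that is still valid."""
    if packet.acked:
        return "delivered"
    if now() >= packet.deadline:
        return "dropped"          # stale: useless or harmful to the control loop
    send(packet.payload)          # still valid: worth another attempt
    return "retransmitted"
```

This single check is what buys the trade-off described above: reliability improves for data that can still meet its deadline, while latency and resources are not wasted on data that cannot.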

Relevance: 60.00%

Abstract:

In this paper we analyze the performance degradation of the slotted amplify-and-forward protocol in wireless environments with high node density, where the number of relays grows asymptotically large. Channel gains between source-destination pairs in such networks can no longer be assumed independent. We analyze the degradation of performance in wireless environments where the channel gains are exponentially correlated by examining the capacity per channel use. Theoretical results for the eigenvalue distribution and the capacity are derived and compared with simulation results. Both analytical and simulated results show that the capacity given by the asymptotic mutual information decreases with network density.
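
The effect being analysed can be reproduced in a toy Monte-Carlo experiment: draw channel gains with exponential correlation R[i, j] = rho**abs(i - j) and watch the capacity per channel use fall as the correlation rises. A sketch under simplified assumptions (a plain vector Gaussian channel, not the paper's slotted amplify-and-forward system model):

```python
import numpy as np

rng = np.random.default_rng(0)

def avg_capacity(n_relays, rho, snr, trials=2000):
    """Monte-Carlo average of log2(1 + SNR*|h|^2/n) for channel gains h
    with exponential correlation R[i, j] = rho**|i - j|."""
    idx = np.arange(n_relays)
    R = rho ** np.abs(idx[:, None] - idx[None, :])   # correlation matrix
    L = np.linalg.cholesky(R)                        # colouring transform
    total = 0.0
    for _ in range(trials):
        g = (rng.standard_normal(n_relays)
             + 1j * rng.standard_normal(n_relays)) / np.sqrt(2)
        h = L @ g                                    # correlated gains
        total += np.log2(1 + snr * np.vdot(h, h).real / n_relays)
    return total / trials

# Denser networks mean stronger correlation (rho closer to 1), lower capacity:
print(avg_capacity(8, 0.1, snr=10.0), avg_capacity(8, 0.9, snr=10.0))
```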

Relevance: 60.00%

Abstract:

Cognitive radio is an emerging technology proposing the concept of dynamic spectrum access as a solution to the looming problem of spectrum scarcity caused by the growth in wireless communication systems. Under the proposed concept, non-licensed, secondary users (SU) can access spectrum owned by licensed, primary users (PU) so long as interference to the PU is kept minimal. Spectrum sensing is a crucial task in cognitive radio whereby the SU senses the spectrum to detect the presence or absence of any PU signal. Conventional spectrum sensing assumes the PU signal is 'stationary' and remains in the same activity state during the sensing cycle, while an emerging trend models the PU as 'non-stationary', undergoing state changes. Existing studies have focused on non-stationary PU behaviour during the transmission period; very little research has considered the impact on spectrum sensing when the PU is non-stationary during the sensing period.

The concept of PU duty cycle is developed as a tool to analyse the performance of spectrum sensing detectors when detecting non-stationary PU signals. New detectors are also proposed to optimise detection with respect to the duty cycle exhibited by the PU. This research consists of two major investigations. The first investigates the impact of duty cycle on the performance of existing detectors and the extent of the problem in existing studies. The second develops new detection models and frameworks to ensure the integrity of spectrum sensing when detecting non-stationary PU signals.

The first investigation demonstrates that the conventional signal model formulated for a stationary PU does not accurately reflect the behaviour of a non-stationary PU. Therefore the performance calculated and assumed to be achievable by the conventional detector does not reflect the performance actually achieved. By analysing the statistical properties of duty cycle, performance degradation is shown to be a problem that cannot be easily neglected in existing sensing studies when the PU is modelled as non-stationary.

The second investigation presents detectors that are aware of the duty cycle exhibited by a non-stationary PU. A two-stage detection model is proposed to improve detection performance and robustness to changes in duty cycle; this detector is most suitable for applications that require long sensing periods. A second detector, the duty cycle based energy detector, is formulated by integrating the distribution of duty cycle into the test statistic of the energy detector and is suitable for short sensing periods. The decision threshold is optimised with respect to the traffic model of the PU, hence the proposed detector can calculate average detection performance that reflects realistic results. A detection framework for spectrum sensing optimisation is proposed to provide clear guidance on the constraints on the sensing and detection model. Following this framework ensures that the signal model accurately reflects practical behaviour and that the detection model implemented suits the desired detection assumption. Based on this framework, a spectrum sensing optimisation algorithm is further developed to maximise sensing efficiency for a non-stationary PU. New optimisation constraints are derived to account for any PU state changes within the sensing cycle while implementing the proposed duty cycle based detector.
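
The core problem in the first investigation can be reproduced with a toy energy detector: if the PU is active for only a fraction of the sensing window, the mean of the test statistic shifts towards its noise-only value, so a threshold calibrated for a stationary PU overstates detection performance. A sketch under simplified assumptions (real Gaussian signal and noise, a single on-period per window; the thesis's traffic model is richer):

```python
import numpy as np

rng = np.random.default_rng(1)

def energy_statistic(n, duty_cycle, snr):
    """Normalised energy over a sensing window of n samples in which the
    PU is active for only a duty_cycle fraction of the window."""
    x = rng.standard_normal(n)                            # unit-variance noise
    n_on = int(duty_cycle * n)
    x[:n_on] += np.sqrt(snr) * rng.standard_normal(n_on)  # PU on-period
    return np.sum(x**2) / n

# The mean statistic is roughly 1 + duty_cycle*snr instead of 1 + snr,
# so the detector's apparent SNR shrinks with the duty cycle:
print(np.mean([energy_statistic(1000, 1.0, 0.5) for _ in range(200)]))
print(np.mean([energy_statistic(1000, 0.3, 0.5) for _ in range(200)]))
```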

Relevance: 60.00%

Abstract:

Reliability of the performance of biometric identity verification systems remains a significant challenge. Individual biometric samples of the same person (identity class) are not identical at each presentation, and performance degradation arises from intra-class variability and inter-class similarity. These limitations lead to false accepts and false rejects that are dependent: it is difficult to reduce the rate of one type of error without increasing the other. The focus of this dissertation is a method based on classifier fusion techniques to better control the trade-off between the verification errors, using text-dependent speaker verification as the test platform.

A sequential classifier fusion architecture that integrates multi-instance and multi-sample fusion schemes is proposed. This fusion method enables a controlled trade-off between false alarms and false rejects. For statistically independent classifier decisions, analytical expressions for each type of verification error are derived using the base classifier performances. As this assumption may not always be valid, these expressions are modified to incorporate the correlation between statistically dependent decisions from clients and impostors.

The architecture is empirically evaluated for text-dependent speaker verification using Hidden Markov Model based, digit-dependent speaker models at each stage, with multiple attempts for each digit utterance. The trade-off between the verification errors is controlled using two parameters, the number of decision stages (instances) and the number of attempts at each decision stage (samples), fine-tuned on an evaluation/tuning set. The statistical validity of the derived expressions for the error estimates is evaluated on test data. The performance of the sequential method is further shown to depend on the order of the combination of digits (instances) and the nature of the repeated attempts (samples). The false rejection and false acceptance rates for the proposed fusion are estimated using the base classifier performances, the variance in correlation between classifier decisions, and the sequence of classifiers with favourable dependence selected using the 'Sequential Error Ratio' criterion. The error rates are better estimated by incorporating user-dependent information (such as speaker-dependent thresholds and speaker-specific digit combinations) and class-dependent information (such as client-impostor dependent favourable combinations and class-error based threshold estimation).

The proposed architecture is desirable in most speaker verification applications, such as remote authentication and telephone and internet shopping, where tuning the two parameters serves both the security and user-convenience requirements of speaker-specific verification. The architecture investigated here is also applicable to verification using other biometric modalities such as handwriting, fingerprints and keystrokes.
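
For intuition about the independent-decision case mentioned above, consider the simplest serial rule in which a claim is accepted only if it passes all \(N\) stages; the closed-form error rates (a textbook special case, not necessarily the thesis's exact expressions) are

\[
\mathrm{FAR} = \prod_{i=1}^{N} \mathrm{FAR}_i,
\qquad
\mathrm{FRR} = 1 - \prod_{i=1}^{N} \bigl(1 - \mathrm{FRR}_i\bigr),
\]

so adding stages (instances) drives false accepts down while pushing false rejects up, and allowing repeated attempts (samples) at a stage acts in the opposite direction: precisely the trade-off the two architecture parameters control.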

Relevance: 30.00%

Abstract:

With increasingly complex engineering assets and tight economic requirements, asset reliability becomes more crucial in Engineering Asset Management (EAM). Improving the reliability of systems has always been a major aim of EAM, and reliability assessment using degradation data has become a significant approach to evaluating the reliability and safety of critical systems. Degradation data often provide more information than failure time data for assessing reliability and predicting the remnant life of systems. In general, degradation is the reduction in performance, reliability and life span of assets, and many failure mechanisms can be traced to an underlying degradation process. Degradation is a stochastic process and can therefore be modelled in several ways; degradation modelling techniques have generated a great amount of research in the reliability field. While degradation models play a significant role in reliability analysis, there are few review papers on them. This paper presents a review of the existing literature on commonly used degradation models in reliability analysis. Current research and developments in degradation models are reviewed and summarised; the models are synthesised and classified into groups; and the merits, limitations and applications of each model are identified. Potential applications of these degradation models in asset health and reliability prediction are also provided.
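
As a concrete instance of the model families such a review covers, the linear Wiener (drifted Brownian motion) degradation model is among the most commonly used; a simulation sketch with illustrative parameters:

```python
import numpy as np

rng = np.random.default_rng(7)

def wiener_failure_time(mu, sigma, dt, threshold, max_steps=100_000):
    """Simulate the degradation path X(t) = mu*t + sigma*B(t) and return
    the first time X crosses the failure threshold (the asset's life)."""
    x, t = 0.0, 0.0
    for _ in range(max_steps):
        x += mu * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        t += dt
        if x >= threshold:
            return t              # first-passage (failure) time
    return None                   # no failure within the horizon

# E.g. drift 0.1 units/day, noise 0.3, failure at 10 units of degradation:
print(wiener_failure_time(mu=0.1, sigma=0.3, dt=1.0, threshold=10.0))
```

Under this model the first-passage time follows an inverse Gaussian distribution, which is why it admits closed-form reliability and remaining-life estimates.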

Relevance: 30.00%

Abstract:

The performance criteria of piezoelectric polymers based on polyvinylidene fluoride (PVDF) in complex space environments have been evaluated. Thin films of these materials are being explored as in-situ responsive materials for large aperture space-based telescopes, with the shape deformation and optical features dependent on long-term degradation effects, mainly due to thermal cycling, vacuum UV exposure and atomic oxygen. A summary of previous studies related to materials testing and performance prediction based on a laboratory environment is presented. The degradation pathways are a combination of molecular chemical changes, primarily induced via radiative damage, and physical degradation processes due to temperature and atomic oxygen exposure, resulting in depoling, loss of orientation and surface erosion. Experimental validation for these materials to be used in space is being conducted as part of MISSE-6 (Materials International Space Station Experiment), with an overview of the experimental strategies discussed here.

Relevance: 30.00%

Abstract:

Piezoelectric polymers based on polyvinylidene fluoride (PVDF) are of interest for large aperture space-based telescopes. Dimensional adjustments of adaptive polymer films are achieved via charge deposition and require a detailed understanding of the piezoelectric material responses, which are expected to suffer under low Earth orbit exposure conditions due to strong vacuum UV, gamma, X-ray, energetic particle and atomic oxygen exposure. The degradation of PVDF and its copolymers under various stress environments has been investigated. Initial radiation aging studies using gamma- and e-beam irradiation have shown complex material changes with significant crosslinking, lowered melting and Curie points (where observable) and effects on crystallinity, but little influence on overall piezoelectric properties. Surprisingly, complex aging processes have also been observed in elevated temperature environments, with annealing phenomena and cyclic stresses resulting in thermal depoling of domains. Overall materials performance appears to be governed by a combination of chemical and physical degradation processes. Molecular changes are primarily induced via radiative damage, and physical damage from temperature and AO exposure is evident as depoling and surface erosion. Major differences between individual copolymers have been observed, providing feedback on material selection strategies.