875 results for Wang, Yiyuan.
Abstract:
This chapter discusses the various modes of operation of the Doubly Fed Induction Generator (DFIG) based wind farm system. The impact of an auxiliary damping controller on the different modes of operation of the DFIG-based wind generation system is investigated. The coordinated tuning of the damping controller to enhance the damping of the oscillatory modes using the Bacterial Foraging (BF) technique is presented. Results from eigenvalue analysis are presented to elucidate the effectiveness of the tuned damping controller in the DFIG system under super- and sub-synchronous speeds of operation. The robustness of the damping controller is also investigated.
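The eigenvalue analysis referred to above reduces to inspecting the damping ratio of each oscillatory mode of the linearised system. A minimal numpy sketch, using a made-up two-state system rather than the chapter's DFIG model:

```python
import numpy as np

# Illustrative 2-state oscillatory system (NOT the DFIG model):
# x' = A x, with omega_n^2 = 25 and 2*zeta*omega_n = 1.
A = np.array([[0.0, 1.0],
              [-25.0, -1.0]])

eigvals = np.linalg.eigvals(A)

# Damping ratio of each mode: zeta = -Re(lambda) / |lambda|.
# A damping controller is tuned (e.g. via Bacterial Foraging) to push
# the zeta of the poorly damped mode above a target such as 0.05.
zeta = -eigvals.real / np.abs(eigvals)
print(zeta)  # 0.1 for both conjugate modes of this A
```

For this A the characteristic polynomial is s^2 + s + 25, giving |lambda| = 5 and zeta = 0.5/5 = 0.1 for the complex pair.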
Abstract:
This chapter focuses on the implementation of the Takagi-Sugeno (TS) fuzzy controller for the Doubly Fed Induction Generator (DFIG) based wind generator. The conventional PI control loops for maintaining the desired active power and DC capacitor voltage are compared with the TS fuzzy controllers. The DFIG system is represented by a third-order model in which the electromagnetic transients of the stator are neglected. The effectiveness of the TS fuzzy controller in damping the rotor speed oscillations and DC capacitor voltage variations, and its impact on converter ratings, are also investigated. Results from time-domain simulations are presented to elucidate the advantage of the TS fuzzy controller over the conventional PI controller in the DFIG system. The proposed TS fuzzy controller can improve the fault ride-through capability of the DFIG compared with the conventional PI controller.
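A TS controller of the kind compared here blends the outputs of linear rules by fuzzy membership weights. A hypothetical two-rule, zero-order sketch (the gains and membership shapes are illustrative, not the chapter's design):

```python
import numpy as np

def ts_fuzzy_control(error):
    """Two-rule Takagi-Sugeno controller (illustrative gains).

    Rule 1: IF error is Negative THEN u1 = -2.0 * error
    Rule 2: IF error is Positive THEN u2 = -0.5 * error
    """
    # Complementary linear memberships on error in [-1, 1].
    mu_neg = np.clip((1.0 - error) / 2.0, 0.0, 1.0)
    mu_pos = np.clip((1.0 + error) / 2.0, 0.0, 1.0)
    u1 = -2.0 * error
    u2 = -0.5 * error
    # Weighted-average defuzzification over the rule outputs.
    return (mu_neg * u1 + mu_pos * u2) / (mu_neg + mu_pos)

print(ts_fuzzy_control(0.5))  # -0.4375: a blend of the two rule outputs
```

Unlike a single fixed-gain PI loop, the effective gain here varies smoothly with the operating point, which is the property the chapter exploits for damping.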
Abstract:
Background: This study attempted to develop health risk-based metrics for defining a heatwave in Brisbane, Australia. Methods: A Poisson generalised additive model was used to assess the impact of heatwaves on mortality and emergency hospital admissions (EHAs) in Brisbane. Results: In general, the higher the intensity and the longer the duration of a heatwave, the greater the health impacts. There was no apparent difference in EHA risk during different periods of a warm season. However, the risk of mortality was greater in the second half of a warm season than in the first half. While the elderly (>75 years) were particularly vulnerable to both the EHA and mortality effects of a heatwave, the risk of EHAs also significantly increased for two other age groups (0-64 years and 65-74 years) during severe heatwaves. Different patterns between cardiorespiratory mortality and EHAs were observed. Based on these findings, we propose the use of a tiered heat warning system based on the health risks of heatwaves. Conclusions: Health risk-based metrics are a useful tool for the development of local heatwave definitions. This tool may have significant implications for the assessment of heatwave-related health consequences and for the development of heatwave response plans and implementation strategies.
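The core quantity behind such a Poisson model is the heatwave rate ratio. A toy sketch on simulated data (not the study's GAM, which additionally smooths over temperature and season; the heatwave definition and rates here are invented):

```python
import numpy as np

rng = np.random.default_rng(42)

days = 180
temp = rng.normal(30.0, 3.0, days)
temp[100:105] = 38.0          # an injected 5-day heat spell

# Toy heatwave definition: today AND yesterday above 35 degC.
hot = temp > 35.0
heatwave = np.zeros(days, dtype=bool)
heatwave[1:] = hot[1:] & hot[:-1]

# Simulate daily deaths: baseline 10/day, 60% excess on heatwave days.
deaths = rng.poisson(np.where(heatwave, 16.0, 10.0))

# For a single binary covariate, the Poisson MLE of the rate ratio is
# simply the ratio of mean counts on heatwave vs non-heatwave days.
rate_ratio = deaths[heatwave].mean() / deaths[~heatwave].mean()
print(rate_ratio)  # estimate of the true rate ratio 1.6
```

A tiered warning system then amounts to mapping intensity/duration strata to their estimated rate ratios and alerting at the corresponding level.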
Abstract:
Andreasen (2003) argues that there is a ‘starting change’ bias in the social marketing field as much research is centred on inducing initial behavioural change. However, repeat or maintenance behaviour is often critical to achieving social goals across many domains. For instance, the repeat use of professional therapeutic services is vital for improved mental health, although premature discontinuance of service use is common (Wang, 2007). This study contributes to addressing this gap in the social marketing literature by exploring key drivers of maintenance behaviour, in the form of repeat service use, in mental health. This is in line with Andreasen’s (1994) argument that social marketing is an appropriate approach to addressing mental health challenges.
Abstract:
Local spatio-temporal features with a bag-of-visual-words model are a popular approach to human action recognition. Bag-of-features methods face several challenges, such as extracting appropriate appearance and motion features from videos, converting the extracted features into a form suitable for classification, and designing a suitable classification framework. In this paper we address the problem of efficiently representing the extracted features for classification in order to improve overall performance. We introduce two generative supervised topic models, maximum entropy discrimination LDA (MedLDA) and class-specific simplex LDA (css-LDA), to encode the raw features for discriminative SVM-based classification. Unsupervised LDA models disconnect topic discovery from the classification task and hence yield poor results compared with the baseline bag-of-words framework. Supervised LDA techniques, on the other hand, learn the topic structure by considering the class labels and improve recognition accuracy significantly. MedLDA maximises the likelihood and within-class margins using max-margin techniques and yields a sparse, highly discriminative topic structure, while css-LDA learns separate class-specific topics instead of a common set of topics across the entire dataset. In our representation, topics are first learned, and each video is then represented as a topic-proportion vector, comparable to a histogram of topics. Finally, SVM classification is performed on the learned topic-proportion vectors. We demonstrate the efficiency of these two representation techniques through experiments on two popular datasets. Experimental results show significantly improved performance compared with the baseline bag-of-features framework, which uses k-means to construct a histogram of words from the feature vectors.
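The bag-of-features baseline that the topic models are compared against can be sketched end-to-end on toy data (numpy only; the descriptors, dimensions and cluster count are invented, and MedLDA/css-LDA training itself is beyond a short example):

```python
import numpy as np

rng = np.random.default_rng(0)

def kmeans(X, k, iters=20):
    """Plain Lloyd's k-means; the centroids act as the 'visual words'."""
    centroids = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centroids[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return centroids

def histogram(descriptors, centroids):
    """Quantise local descriptors and return a normalised word histogram."""
    labels = np.argmin(((descriptors[:, None] - centroids[None]) ** 2).sum(-1), axis=1)
    h = np.bincount(labels, minlength=len(centroids)).astype(float)
    return h / h.sum()

# Toy 'videos': each one a set of 5-D local spatio-temporal descriptors.
videos = [rng.normal(c, 1.0, size=(40, 5)) for c in (0.0, 0.0, 3.0, 3.0)]
centroids = kmeans(np.vstack(videos), k=8)
hists = np.array([histogram(v, centroids) for v in videos])
# Each row of `hists` is the fixed-length vector an SVM would consume;
# the paper replaces this histogram with a learned topic-proportion vector.
```

The paper's contribution is swapping the k-means histogram for a supervised topic-proportion vector of the same fixed length, keeping the SVM stage unchanged.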
Abstract:
The commercialization of aerial image processing is highly dependent on platforms such as Unmanned Aerial Vehicles (UAVs). However, the lack of an automated UAV forced landing site detection system has been identified as one of the main impediments to allowing UAV flight over populated areas in civilian airspace. This article proposes a UAV forced landing site detection system based on machine learning approaches, including the Gaussian Mixture Model (GMM) and the Support Vector Machine (SVM). A range of learning parameters is analysed, including the number of Gaussian mixtures; the support vector kernel, covering the linear, radial basis function (RBF) and polynomial kernels; and the order of the RBF and polynomial kernels. Moreover, a modified footprint operator is employed during feature extraction to better describe the geometric characteristics of the local area surrounding a pixel. The performance of the presented system is compared with a baseline UAV forced landing site detection system that uses edge features and an Artificial Neural Network (ANN) region-type classifier. Experiments conducted on aerial image datasets captured over typical urban environments reveal that improved landing site detection can be achieved with an SVM classifier with an RBF kernel using a combination of colour and texture features. Compared with the baseline system, the proposed system provides a significantly better chance of detecting a safe landing area, and its performance is more stable than the baseline's in the presence of changes to the UAV altitude.
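Of the kernels compared, the best-performing RBF kernel is just a Gaussian similarity on the colour/texture feature vectors. A minimal sketch of the Gram matrix such an SVM would be trained on (the gamma value and feature vectors are illustrative):

```python
import numpy as np

def rbf_kernel(X, Y, gamma=0.5):
    """K[i, j] = exp(-gamma * ||x_i - y_j||^2)."""
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

X = np.array([[0.2, 0.4],   # toy colour/texture feature vectors,
              [0.9, 0.1],   # one per candidate landing region
              [0.5, 0.5]])
K = rbf_kernel(X, X)
# K is symmetric with unit diagonal, and entries decay with feature
# distance; gamma controls how locally the classifier generalises.
```

The "order" of the RBF and polynomial kernels studied in the article corresponds to tuning parameters like this gamma (or the polynomial degree), which trade off smoothness against locality of the decision boundary.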
Abstract:
Solvothermally synthesized Ga2O3 nanoparticles are incorporated into liquid metal/metal oxide (LM/MO) frameworks to form enhanced photocatalytic systems. The LM/MO frameworks, both with and without incorporated Ga2O3 nanoparticles, show photocatalytic activity due to a plasmonic effect, where performance is related to the loading of Ga2O3 nanoparticles. Optimum photocatalytic efficiency is obtained with 1 wt% incorporation of Ga2O3 nanoparticles. This can be attributed to the sub-bandgap states of the LM/MO frameworks, which contribute to pseudo-ohmic contacts that reduce the free-carrier injection barrier to Ga2O3.
Abstract:
So far, low-probability differentials for the key schedule of block ciphers have been used as a straightforward proof of security against related-key differential analysis. To achieve resistance, it is believed that for a cipher with a k-bit key it suffices that the upper bound on the probability be 2^-k. Surprisingly, we show that this reasonable assumption is incorrect, and that the probability should be (much) lower than 2^-k. Our counterexample is a related-key differential analysis of the well-established block cipher CLEFIA-128. We show that although the key schedule of CLEFIA-128 prevents differentials with a probability higher than 2^-128, the linear part of the key schedule that produces the round keys, together with the Feistel structure of the cipher, allows particularly chosen differentials with a probability as low as 2^-128 to be exploited. CLEFIA-128 has 2^14 such differentials, which translate into 2^14 pairs of weak keys. The probability of each differential is too low on its own, but the weak keys have a special structure which, with a divide-and-conquer approach, allows an advantage of 2^7 over generic analysis. We exploit this advantage to give a membership test for the weak-key class and provide an analysis of the hashing modes. The proposed analysis has been tested with computer experiments on small-scale variants of CLEFIA-128. Our results do not threaten the practical use of CLEFIA.
Abstract:
The development and maintenance of large and complex ontologies are often time-consuming and error-prone. Automated ontology learning and revision have therefore attracted intensive research interest. In data-centric applications where ontologies are designed or automatically learnt from data, when new data instances are added that contradict the ontology, it is often desirable to revise the ontology incrementally according to the added data. This problem can be intuitively formulated as the problem of revising a TBox by an ABox. In this paper we introduce a model-theoretic approach to such ontology revision using a novel alternative semantic characterisation of DL-Lite ontologies. We show that our ontology revision satisfies several desirable properties. We have also developed an algorithm for reasoning with the ontology revision without computing the revision result. The algorithm is efficient, as its computational complexity is in coNP in the worst case and in PTIME when the size of the new data is bounded.
Abstract:
Stochastic modelling is critical in GNSS data processing. Currently, GNSS data processing commonly relies on an empirical stochastic model which may not reflect the actual data quality or noise characteristics. This paper examines real-time GNSS observation noise estimation methods that determine the observation variance from a single receiver's data stream. The methods involve three steps: forming a linear combination, handling the ionosphere and ambiguity biases, and estimating the variance. Two distinct approaches are applied to overcome the ionosphere and ambiguity biases: the time-differenced method and the polynomial prediction method. The real-time variance estimation methods are compared with the zero-baseline and short-baseline methods. The proposed method requires only single-receiver observations and is thus applicable to both differenced and un-differenced data processing modes. However, the methods may be limited to normal ionosphere conditions and to GNSS receivers with low autocorrelation. Experimental results also indicate that the proposed method can produce more realistic parameter precision.
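The time-differenced idea can be illustrated in a few lines: differencing consecutive epochs nearly cancels the slowly varying ionosphere and ambiguity bias while doubling the white-noise variance, so the observation sigma falls out of the differenced series. A sketch on simulated single-receiver data (the bias shape and noise level are invented):

```python
import numpy as np

rng = np.random.default_rng(1)

n = 2000
t = np.arange(n, dtype=float)
sigma_true = 0.5

# Simulated observable: a slowly varying bias (standing in for the
# ionosphere delay plus a constant ambiguity) plus white noise.
bias = 0.5 + 1e-3 * t + 1e-7 * t**2
obs = bias + rng.normal(0.0, sigma_true, n)

# First difference between epochs: the smooth bias nearly cancels,
# while the noise variance doubles, so var(diff) ~= 2 * sigma^2.
diff = np.diff(obs)
sigma_est = np.sqrt(np.var(diff) / 2.0)
print(sigma_est)  # close to sigma_true = 0.5
```

This is why the paper notes the methods depend on normal ionosphere conditions and low receiver autocorrelation: both assumptions are needed for the bias to cancel and the factor of 2 to hold.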
Abstract:
An outbreak detection and response system using a time-series moving percentile method based on historical data has been used in China to identify dengue fever outbreaks since 2008. For dengue fever outbreaks reported from 2009 to 2012, this system achieved a sensitivity of 100%, a specificity of 99.8% and a median time to detection of 3 days, indicating that the system is a useful decision tool for dengue fever control and risk-management programs in China.
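A moving-percentile detector of this kind can be sketched directly: each day's count is compared with a high percentile of counts from the surrounding calendar window in the historical years. The data, window width and percentile below are toy choices, not the operational thresholds of China's system:

```python
import numpy as np

rng = np.random.default_rng(7)

days = 100
hist_years = rng.poisson(5.0, size=(3, days))  # 3 historical years of counts

current = np.full(days, 3)     # current year: quiet baseline
current[50:54] = 30            # injected 4-day outbreak

half = 2                       # +/- 2-day calendar window around each day
alarm = np.zeros(days, dtype=bool)
for d in range(days):
    lo, hi = max(0, d - half), min(days, d + half + 1)
    # Threshold: 95th percentile of historical counts in the window.
    threshold = np.percentile(hist_years[:, lo:hi], 95)
    alarm[d] = current[d] > threshold

print(alarm[50], alarm[10])  # outbreak day flagged, quiet day not
```

Sensitivity and specificity, as reported in the abstract, then follow from comparing the flagged days against outbreaks confirmed by public health staff.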
Hand, foot and mouth disease in China: Evaluating an automated system for the detection of outbreaks
Abstract:
Objective To evaluate the performance of China’s infectious disease automated alert and response system in the detection of outbreaks of hand, foot and mouth (HFM) disease. Methods We estimated the size, duration and reporting delay of HFM disease outbreaks from cases notified between 1 May 2008 and 30 April 2010 and between 1 May 2010 and 30 April 2012, before and after HFM disease was included in the automated alert and response system. The sensitivity, specificity and timeliness of the detection of aberrations in the incidence of HFM disease were estimated by comparing automated detections with the observations of public health staff. Findings The alert and response system recorded 106 005 aberrations in the incidence of HFM disease between 1 May 2010 and 30 April 2012 – a mean of 5.6 aberrations per 100 days in each county that reported HFM disease. The system had a sensitivity of 92.7% and a specificity of 95.0%. The mean delay between the reporting of the first case of an outbreak and the detection of that outbreak by the system was 2.1 days. Between the first and second study periods, the mean size of an HFM disease outbreak decreased from 19.4 to 15.8 cases, and the mean interval between the onset of such an outbreak and its initial reporting to the public health emergency reporting system decreased from 10.0 to 9.1 days. Conclusion The automated alert and response system shows good sensitivity in the detection of HFM disease outbreaks and appears to be relatively rapid. Continued use of this system should allow more effective prevention and limitation of such outbreaks in China.
Abstract:
This study uses the concept of ‘place-making’ to consider political engagement on Sina Weibo, one of the most popular microblogging services in China. Besides articulating state-public confrontation during major social controversies, Weibo has been used to recollect and re-narrate the memories of a city such as Guangzhou, where dramatic social and cultural changes took place during the economic reform era. The Chinese government’s ongoing project to create a culturally indifferent ‘national identity’ triggers a defensive response from local places. Through consuming news and information about leisure and entertainment in Guangzhou, the digital narration of the city becomes an important source for Guangzhou people to learn about their geo-identity and the rights and responsibilities attached to it.
Abstract:
Due to the increasing recognition of global climate change, the building and construction industry is under pressure to reduce carbon emissions. A central issue in striving towards reduced carbon emissions is the need for a practicable and meaningful yardstick for assessing and communicating greenhouse gas (GHG) results. ISO 14067 was published by the International Organization for Standardization in May 2013. By providing specific requirements for the life cycle assessment (LCA) approach, the standard clarifies GHG assessment with respect to choosing system boundaries and simulating use and end-of-life phases when quantifying the carbon footprint of products (CFPs). More importantly, the standard, for the first time, provides step-by-step guidance and a standardized template for communicating CFPs in the form of a CFP external communication report, a CFP performance tracking report, a CFP declaration and a CFP label. ISO 14067 therefore makes a valuable contribution to GHG quantification and to the transparent communication and comparison of CFPs. In addition, as cradle-to-grave should be used as the system boundary when use and end-of-life phases can be simulated, ISO 14067 will hopefully promote the development and implementation of simulation technologies, particularly Building Information Modelling (BIM), in the building and construction industry.