292 results for RM extended algorithm
Abstract:
The paper presents an improved Phase-Locked Loop (PLL) for measuring the fundamental frequency and selective harmonic content of a distorted signal. This information can be used by grid-interfaced devices and harmonic compensators. The single-phase structure is based on the Synchronous Reference Frame (SRF) PLL. The proposed PLL needs only a limited number of harmonic stages by incorporating Moving Average Filters (MAF) to eliminate the undesired harmonic content at each stage. The frequency dependency of the MAF in effectively filtering undesired harmonics is also addressed by a proposed method for adapting to frequency variations of the input signal. The method is suitable for high sampling rates and a wide frequency measurement range. Furthermore, an extended model of this structure is proposed which includes the response to both frequency and phase-angle variations. The proposed algorithm is simulated and verified using Hardware-in-the-Loop (HIL) testing.
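The MAF idea and its frequency adaptation can be illustrated with a minimal sketch (Python). It assumes the MAF acts on SRF (dq-frame) quantities, where the wanted signal is a DC value and undesired harmonics appear as ripple, and that the window length is simply recomputed from the current frequency estimate; the function and parameters are illustrative, not the paper's implementation.

```python
import numpy as np

def moving_average_filter(x, fs, f_fund):
    """MAF with a window of one fundamental period, applied to a dq-frame quantity.

    fs     : sampling rate (Hz)
    f_fund : current fundamental-frequency estimate (Hz); recomputing the window
             length from this estimate is one simple way to keep the filter's
             notches aligned with the harmonic ripple as the frequency drifts.
    """
    n = max(1, int(round(fs / f_fund)))     # samples per fundamental period
    return np.convolve(x, np.ones(n) / n, mode="same")

# Example: a dq-axis signal = DC component (the wanted quantity) + 100 Hz ripple
# caused by undesired harmonic content; the MAF suppresses the ripple.
fs, f = 10_000, 50.0
t = np.arange(0, 0.2, 1 / fs)
dq_signal = 1.0 + 0.2 * np.sin(2 * np.pi * 2 * f * t)
dc_estimate = moving_average_filter(dq_signal, fs, f)
```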
Abstract:
Rolling-element bearing failures are the most frequent problems in rotating machinery; they can be catastrophic and cause major downtime. Hence, providing advance failure warning and precise fault detection in such components is pivotal and cost-effective. The vast majority of past research has focused on signal processing and spectral analysis for fault diagnostics in rotating components. In this study, a data mining approach using a machine learning technique called anomaly detection (AD) is presented. This method employs classification techniques to discriminate between defect examples. Two features, kurtosis and Non-Gaussianity Score (NGS), are extracted to develop anomaly detection algorithms. The performance of the developed algorithms was examined using real data from a run-to-failure bearing test. Finally, the anomaly detection approach is compared with a popular method, the Support Vector Machine (SVM), to investigate its sensitivity and accuracy and its ability to detect anomalies at early stages.
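As an illustration of the feature-extraction side only, a minimal sketch of windowed kurtosis with a simple mean-plus-k-sigma alarm threshold is given below; the NGS feature and the paper's actual anomaly detection algorithms are not reproduced, and the window size and threshold rule are assumptions.

```python
import numpy as np
from scipy.stats import kurtosis

def window_kurtosis(signal, win):
    """Kurtosis of consecutive, non-overlapping windows of a vibration signal."""
    n = len(signal) // win
    return np.array([kurtosis(signal[i * win:(i + 1) * win]) for i in range(n)])

def fit_threshold(healthy_features, k=3.0):
    """Simple anomaly threshold: mean + k*std of features from healthy-run data."""
    return healthy_features.mean() + k * healthy_features.std()

# Hypothetical usage on accelerometer data (window size chosen arbitrarily):
# healthy = window_kurtosis(baseline_signal, win=20_000)
# thr = fit_threshold(healthy)
# alarms = window_kurtosis(test_signal, win=20_000) > thr
```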
Resumo:
We present an algorithm for multiarmed bandits that achieves almost optimal performance in both stochastic and adversarial regimes without prior knowledge about the nature of the environment. Our algorithm is based on augmentation of the EXP3 algorithm with a new control lever in the form of exploration parameters that are tailored individually for each arm. The algorithm simultaneously applies the “old” control lever, the learning rate, to control the regret in the adversarial regime and the new control lever to detect and exploit gaps between the arm losses. This secures problem-dependent “logarithmic” regret when gaps are present without compromising on the worst-case performance guarantee in the adversarial regime. We show that the algorithm can exploit both the usual expected gaps between the arm losses in the stochastic regime and deterministic gaps between the arm losses in the adversarial regime. The algorithm retains “logarithmic” regret guarantee in the stochastic regime even when some observations are contaminated by an adversary, as long as on average the contamination does not reduce the gap by more than a half. Our results for the stochastic regime are supported by experimental validation.
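The exact form of the per-arm exploration parameters is not given in the abstract, so the sketch below only illustrates the idea: an EXP3-style sampler whose mixing term carries one exploration parameter per arm alongside the usual learning rate. The default schedules and the loss model are placeholder assumptions, not the paper's tuning.

```python
import numpy as np

def exp3_with_per_arm_exploration(K, T, loss_fn, eta=None, eps=None, rng=None):
    """EXP3-style bandit sketch with a per-arm exploration term.

    eta : learning rate (the "old" control lever)
    eps : length-K array of per-arm exploration parameters (the new lever),
          requiring eps.sum() <= 1; both defaults below are illustrative.
    """
    rng = rng or np.random.default_rng(0)
    eta = eta if eta is not None else np.sqrt(np.log(K) / (K * T))
    eps = eps if eps is not None else np.full(K, 1.0 / (2 * K))
    cum_loss = np.zeros(K)                       # importance-weighted loss estimates
    for t in range(T):
        w = np.exp(-eta * (cum_loss - cum_loss.min()))
        p = (1 - eps.sum()) * w / w.sum() + eps  # mix exponential weights with exploration
        arm = rng.choice(K, p=p)
        loss = loss_fn(arm, t)                   # observed loss in [0, 1]
        cum_loss[arm] += loss / p[arm]           # unbiased loss estimate
    return cum_loss

# Hypothetical usage with Bernoulli losses whose means differ across arms:
# losses = lambda arm, t: float(np.random.random() < [0.3, 0.5, 0.7][arm])
# exp3_with_per_arm_exploration(K=3, T=10_000, loss_fn=losses)
```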
Abstract:
We investigate the terminating concept of BKZ reduction first introduced by Hanrot et al. [Crypto'11] and conduct extensive experiments to predict the number of tours necessary to obtain the best possible trade-off between reduction time and quality. We then improve Buchmann and Lindner's result [Indocrypt'09] for finding sub-lattice collisions in SWIFFT. We illustrate that further improvement in time is possible through a special setting of the SWIFFT parameters and also through adaptively combining different reduction parameters. Our contributions also include a probabilistic simulation approach that tops up the deterministic simulation described by Chen and Nguyen [Asiacrypt'11] and can predict the Gram-Schmidt norms more accurately for large block sizes.
Abstract:
There are numerous load estimation methods available, some of which are captured in various online tools. However, most estimators are subject to large statistical biases, and their associated uncertainties are often not reported. This makes interpretation difficult and the estimation of trends or determination of optimal sampling regimes impossible to assess. In this paper, we first propose two indices for measuring the extent of sampling bias, and then provide steps for obtaining reliable load estimates by minimizing the biases and making use of possible predictive variables. The load estimation procedure can be summarized by the following four steps:
- (i) output the flow rates at regular time intervals (e.g. 10 minutes) using a time series model that captures all the peak flows;
- (ii) output the predicted flow rates as in (i) at the concentration sampling times, if the corresponding flow rates are not collected;
- (iii) establish a predictive model for the concentration data, which incorporates all possible predictor variables, and output the predicted concentrations at the regular time intervals as in (i); and
- (iv) obtain the sum of all the products of the predicted flow and the predicted concentration over the regular time intervals to represent an estimate of the load.
The key step in this approach is the development of an appropriate predictive model for concentration. This is achieved using a generalized regression (rating-curve) approach with additional predictors that capture unique features in the flow data, namely the concept of the first flush, the location of the event on the hydrograph (e.g. rise or fall) and cumulative discounted flow. The latter may be thought of as a measure of constituent exhaustion occurring during flood events. The model also has the capacity to accommodate autocorrelation in model errors resulting from intensive sampling during floods. Incorporating this additional information can significantly improve the predictability of concentration, and ultimately the precision with which the pollutant load is estimated. We also provide a measure of the standard error of the load estimate which incorporates model, spatial and/or temporal errors. The method can also incorporate measurement error incurred through the sampling of flow. We illustrate this approach using concentrations of total suspended sediment (TSS) and nitrogen oxide (NOx) and gauged flow data from the Burdekin River, a catchment delivering to the Great Barrier Reef. The sampling biases for NOx concentrations range from 2 to 10 times, indicating severe bias. As expected, the traditional average and extrapolation methods produce much higher estimates than those obtained when sampling bias is taken into account.
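A minimal sketch of step (iv) is shown below (Python). It assumes the predicted flow and concentration series from steps (i)-(iii) are already available at the regular intervals; the unit choices are illustrative.

```python
import numpy as np

def estimate_load(flow, conc, dt_seconds):
    """Step (iv): total load as the sum of flow x concentration over regular intervals.

    flow       : predicted flow rates at regular intervals (m^3/s)
    conc       : predicted concentrations at the same intervals (mg/L == g/m^3)
    dt_seconds : interval length, e.g. 600 for 10-minute steps
    Returns the load in kilograms.
    """
    flow = np.asarray(flow)
    conc = np.asarray(conc)
    grams = np.sum(flow * conc * dt_seconds)   # (m^3/s) * (g/m^3) * s = g
    return grams / 1000.0

# Example with 10-minute intervals:
# load_kg = estimate_load(predicted_flow, predicted_conc, dt_seconds=600)
```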
Abstract:
During the past few decades, the development of efficient methods to solve dynamic facility layout problems has received significant attention from practitioners and researchers. More specifically, meta-heuristic algorithms, especially the genetic algorithm, have proven increasingly helpful for generating sub-optimal solutions to large-scale dynamic facility layout problems. Nevertheless, the uncertainty of the manufacturing factors, in addition to the scale of the layout problem, calls for a mixed genetic algorithm-robust approach that can provide a single layout design valid across all periods. The present research devises a customized permutation-based robust genetic algorithm for dynamic manufacturing environments that generates a unique robust layout for all the manufacturing periods. The numerical outcomes of the proposed robust genetic algorithm indicate significant cost improvements compared to conventional genetic algorithm methods and a selection of other heuristic and meta-heuristic techniques.
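To make the encoding concrete, here is a minimal permutation-based GA sketch (Python) whose fitness charges one fixed layout against every period's flow matrix, so the search returns a single robust layout. The QAP-style cost, the operators and all parameters are generic assumptions, not the paper's customized algorithm.

```python
import numpy as np

def period_cost(perm, flow, dist):
    """QAP-style cost of one period: sum of flow[i,j] * dist[loc(i), loc(j)]."""
    loc = np.asarray(perm)
    return float(np.sum(flow * dist[np.ix_(loc, loc)]))

def robust_cost(perm, flows, dist):
    """Robust fitness: the same single layout is charged over every period."""
    return sum(period_cost(perm, f, dist) for f in flows)

def order_crossover(p1, p2, rng):
    """Order crossover (OX) for permutation chromosomes."""
    n = len(p1)
    a, b = sorted(rng.choice(n, size=2, replace=False))
    child = [-1] * n
    child[a:b + 1] = p1[a:b + 1]
    fill = [g for g in p2 if g not in child[a:b + 1]]
    j = 0
    for i in range(n):
        if child[i] == -1:
            child[i] = fill[j]
            j += 1
    return child

def robust_ga(flows, dist, pop_size=60, generations=300, mut_rate=0.2, seed=0):
    """Minimal permutation GA returning one layout used in all periods."""
    rng = np.random.default_rng(seed)
    n = dist.shape[0]
    pop = [list(rng.permutation(n)) for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=lambda p: robust_cost(p, flows, dist))
        pop = scored[: pop_size // 2]                     # elitist truncation selection
        while len(pop) < pop_size:
            i1, i2 = rng.choice(len(scored) // 2, size=2, replace=False)
            child = order_crossover(scored[i1], scored[i2], rng)
            if rng.random() < mut_rate:                   # swap mutation
                i, j = rng.choice(n, size=2, replace=False)
                child[i], child[j] = child[j], child[i]
            pop.append(child)
    return min(pop, key=lambda p: robust_cost(p, flows, dist))
```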
Resumo:
This study examined an aspect of adolescent writing development, specifically whether teaching secondary school students to use strategies to enhance succinctness in their essays changed the grammatical sophistication of their sentences. A quasi-experimental intervention was used to compare changes in syntactic complexity and lexical density between one-draft and polished essays. No link was demonstrated between the intervention and the changes. A thematic analysis of teacher interviews explored links between changes to student texts and teaching approaches. The study has implications for making syntactic complexity an explicit goal of student drafting.
Abstract:
Computational techniques that capture the full range of second-order inelastic behaviour of steel-concrete composite structures are not readily available, which hinders the development and application of the performance-based design approach for composite structures. To this end, this paper presents an advanced computational technique based on a higher-order element with refined plastic hinges to capture the full-range behaviour of an entire steel-concrete composite structure. Moreover, this paper presents an efficient and economical cross-section analysis to evaluate the section capacity of non-uniform and arbitrary composite sections subjected to combined axial force and bending. Based on a single algorithm, it can accurately and efficiently evaluate the nearly continuous interaction capacity curve from decompression to pure bending, an important but highly nonlinear capacity range. Hence, this cross-section analysis provides a simple but unique algorithm for the design approach. In summary, the present nonlinear computational technique can simulate both material and geometric nonlinearities of composite structures in an accurate, efficient and reliable fashion, including partial shear connection and gradual yielding at the pre-yield stage, plasticity and strain-hardening effects due to combined axial force and bending at the post-yield stage, load redistribution, second-order P-δ and P-Δ effects, and stiffness and strength deterioration. Because of its reliable and accurate behavioural evaluation, the present technique can be extended to the design of high-strength composite structures and potentially to fibre-reinforced concrete structures.
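As a rough illustration of what a cross-section analysis for combined axial force and bending computes, the sketch below traces an N-M interaction curve by fibre discretisation of a simplified encased section. The geometry, material laws and neutral-axis sweep are placeholder simplifications for illustration only, not the paper's refined algorithm.

```python
import numpy as np

def nm_interaction(b=0.3, h=0.5, fc=35e6, fy=355e6, Es=200e9,
                   As=((0.05, 2.0e-3), (0.45, 2.0e-3)),
                   eps_cu=0.003, n_fib=200):
    """Generic fibre-section sketch of the axial force-bending moment (N-M)
    interaction of a concrete section with lumped embedded steel areas.

    b, h   : section width and depth (m); fc, fy, Es in Pa
    As     : (depth from top, area in m^2) pairs for lumped steel
    Returns an array of (N, M) points from a neutral-axis depth sweep.
    Concrete carries no tension and uses a crude stress-strain law here.
    """
    y = (np.arange(n_fib) + 0.5) * h / n_fib        # concrete fibre depths from top
    a_fib = b * h / n_fib                           # fibre area
    points = []
    for c in np.linspace(0.02 * h, 3.0 * h, 120):   # neutral-axis depth sweep
        eps = eps_cu * (c - y) / c                  # plane-sections strain profile
        sig_c = np.where(eps > 0, 0.85 * fc * np.clip(eps / 0.002, 0, 1), 0.0)
        N = np.sum(sig_c) * a_fib
        M = np.sum(sig_c * (h / 2 - y)) * a_fib     # moment about mid-depth
        for ys, As_i in As:
            eps_s = eps_cu * (c - ys) / c
            sig_s = np.clip(Es * eps_s, -fy, fy)    # elastic-perfectly-plastic steel
            N += sig_s * As_i
            M += sig_s * As_i * (h / 2 - ys)
        points.append((N, M))
    return np.array(points)

# Each row is one (axial force, moment) point on the interaction curve.
curve = nm_interaction()
```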
Abstract:
Ramp metering (RM) is an access control for motorways in which a traffic signal is placed at on-ramps to regulate the rate of vehicles entering the motorway and thus preserve the motorway capacity. In general, RM algorithms fall into two categories by their effective scope: local control and coordinated control. A local control algorithm determines the metering rate based on the traffic condition on the adjacent motorway mainline and the on-ramp. Conversely, coordinated RM strategies make use of measurements from the entire motorway network to operate individual ramp signals for optimal performance at the network level. This study proposes a multi-hierarchical strategy for on-ramp coordination. The strategy is structured in two layers. At the higher layer, a centralised, predictive controller plans the coordination control over a long update interval based on the location of high-risk breakdown flow. At the lower layer, reactive controllers determine the metering rates of the ramps involved in the coordination over a short update interval. This strategy is modelled and applied to the northbound model of the Pacific Motorway in a micro-simulation platform (AIMSUN). The simulation results show that the proposed strategy effectively delays the onset of congestion and reduces total congestion while better managing on-ramp queues.
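As one concrete example of the kind of reactive, local metering rule used at a lower layer, the sketch below implements a generic occupancy-feedback update (ALINEA-style). The gain, bounds and occupancy set-point are placeholders, and this is not the paper's specific controller.

```python
def local_metering_rate(prev_rate, occupancy, target_occupancy,
                        gain=70.0, r_min=200.0, r_max=1800.0):
    """ALINEA-style local feedback: adjust the metering rate (veh/h) in proportion
    to the gap between the measured downstream occupancy and a target occupancy.
    The gain, rate bounds and set-point are illustrative placeholders."""
    rate = prev_rate + gain * (target_occupancy - occupancy)
    return min(max(rate, r_min), r_max)

# Example: each control interval (e.g. 60 s), update the rate from loop-detector data.
# rate = local_metering_rate(rate, occupancy=0.28, target_occupancy=0.25)
```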
Resumo:
Sustainable implementation of new workforce redesign initiatives requires strategies that minimize barriers and optimize supports. Such strategies could be provided by a set of guiding principles. A broad understanding of the concerns of all the key stakeholder groups is required before effective strategies and initiatives are developed. Many new workforce redesign initiatives are not underpinned by prior planning, and this threatens their uptake and sustainability. This study reports on a cross-sectional qualitative study that sought the perspectives of representatives of key stakeholders in a new workforce redesign initiative (extended-scope-of-practice physiotherapy) in one Australian tertiary hospital. The key stakeholder groups were those that had been involved in some way in the development, management, training, funding, and/or delivery of the initiative. Data were collected using semistructured questions, answered individually by interview or in writing. Responses were themed collaboratively, using descriptive analysis. Key identified themes comprised: the importance of service marketing; proactively addressing barriers; using readily understood nomenclature; demonstrating service quality and safety, monitoring adverse events, measuring health and cost outcomes; legislative issues; registration; promoting viable career pathways; developing, accrediting, and delivering a curriculum supporting physiotherapists to work outside of the usual scope; and progression from "a good idea" to established service. Health care facilities planning to implement new workforce initiatives that extend scope of usual practice should consider these issues before instigating workforce/model of care changes. © 2014 Morris et al.
Abstract:
Aim: Frail older people typically suffer several chronic diseases, receive multiple medications and are more likely to be institutionalized in residential aged care facilities. In such patients, optimizing prescribing and avoiding use of high-risk medications might prevent adverse events. The present study aimed to develop a pragmatic, easily applied algorithm for medication review to help clinicians identify and discontinue potentially inappropriate high-risk medications. Methods: The literature was searched for robust evidence of the association of adverse effects with potentially inappropriate medications in older patients, in order to identify high-risk medications. Prior research into the cessation of potentially inappropriate medications in older patients in different settings was synthesized into a four-step algorithm for incorporation into clinical assessment protocols for patients, particularly those in residential aged care facilities. Results: The algorithm comprises several steps leading to individualized prescribing recommendations: (i) identify a high-risk medication; (ii) ascertain the current indications for the medication and assess their validity; (iii) assess whether the drug is providing ongoing symptomatic benefit; and (iv) consider withdrawing, altering or continuing the medication. Decision support resources were developed to complement the algorithm and ensure a systematic and patient-centered approach to medication discontinuation. These include a comprehensive list of high-risk medications and the reasons for inappropriateness, lists of alternative treatments, and suggested medication withdrawal protocols. Conclusions: The algorithm captures a range of clinical scenarios in relation to potentially inappropriate medications and offers an evidence-based approach to identifying and, if appropriate, discontinuing such medications. Studies are required to evaluate the algorithm's effects on prescribing decisions and patient outcomes.
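A schematic rendering of the four steps as a decision function is sketched below (Python). The inputs merely stand in for the clinical assessments and decision-support resources described in the paper; the sketch carries no clinical authority.

```python
def review_medication(drug, high_risk_list, indications_valid, symptomatic_benefit):
    """Schematic of the four-step review: (i) is the drug high-risk, (ii) is the
    indication still valid, (iii) is there ongoing symptomatic benefit, and
    (iv) recommend withdrawal, alteration or continuation."""
    if drug not in high_risk_list:                       # (i)
        return "not flagged by this review"
    if not indications_valid:                            # (ii)
        return "consider withdrawing (no valid indication)"
    if not symptomatic_benefit:                          # (iii)
        return "consider withdrawing or altering (no ongoing benefit)"
    return "continue, with monitoring"                   # (iv)
```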
Abstract:
There is increased interest in the use of UAVs for environmental research, such as tracking bush fires, volcanic eruptions, chemical accidents or pollution sources. The aim of this paper is to describe the theory and results of a bio-inspired plume tracking algorithm. A method for generating sparse plumes in a virtual environment was also developed. Results indicated the ability of the algorithms to track plumes in 2D and 3D. The system has been tested with hardware-in-the-loop (HIL) simulations and in flight using a CO2 gas sensor mounted on a multi-rotor UAV. The UAV is controlled by the plume tracking algorithm running on the ground control station (GCS).
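The abstract does not specify which bio-inspired rule is used, so as an illustration the sketch below implements a common moth-inspired cast-and-surge heuristic in 2D; the threshold, step sizes and sensor model are assumptions.

```python
import math

def plume_step(pos, heading_upwind, co2_ppm, t, threshold=450.0,
               surge_dist=5.0, cast_amp=3.0):
    """One decision step of a cast-and-surge tracker (2D sketch).

    If the CO2 reading exceeds a background threshold, surge upwind;
    otherwise cast across the wind with a sinusoidal sweep to reacquire
    the plume. All distances and thresholds are illustrative placeholders.
    """
    x, y = pos
    ux, uy = heading_upwind                     # unit vector pointing upwind
    if co2_ppm > threshold:                     # in the plume: surge upwind
        return (x + surge_dist * ux, y + surge_dist * uy)
    cx, cy = -uy, ux                            # crosswind direction
    sweep = cast_amp * math.sin(0.5 * t)        # widening back-and-forth cast
    return (x + sweep * cx, y + sweep * cy)

# Example: called each control cycle with the latest sensor reading.
# pos = plume_step(pos, heading_upwind=(1.0, 0.0), co2_ppm=reading, t=cycle)
```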
Abstract:
The family of location and scale mixtures of Gaussians can generate a number of flexible distributional forms. The family nests as particular cases several important asymmetric distributions, such as the Generalized Hyperbolic distribution, which in turn nests many other well-known distributions such as the Normal Inverse Gaussian. In a multivariate setting, the standard location and scale mixture concept is extended into a so-called multiple scaled framework, which has the advantage of allowing different tail and skewness behaviours in each dimension with arbitrary correlation between dimensions. Parameter estimation is provided via an EM algorithm and extended to cover mixtures of such multiple scaled distributions for application to clustering. Assessments on simulated and real data confirm the gain in degrees of freedom and flexibility when modelling data of varying tail behaviour and directional shape.
Abstract:
Web data can often be represented in free tree form; however, few free tree mining methods exist. In this paper, a computationally fast algorithm, FreeS, is presented to discover all frequently occurring free subtrees in a database of labelled free trees. FreeS is designed using an optimal canonical form, BOCF, that can uniquely represent free trees even in the presence of isomorphism. To avoid enumerating false positive candidates, it utilises an enumeration approach based on a tree-structure guided scheme. This paper presents lemmas that introduce conditions governing the generation of free tree candidates during enumeration. Empirical study using both real and synthetic datasets shows that FreeS is scalable and significantly outperforms (i.e. is a few orders of magnitude faster than) the state-of-the-art frequent free tree mining algorithms, HybridTreeMiner and FreeTreeMiner.
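BOCF itself is not described in the abstract. To illustrate what a canonical form for free trees accomplishes, the sketch below roots a labelled free tree at its centre(s) and builds an AHU-style canonical string, so isomorphic free trees map to the same key; this is a generic construction, not BOCF.

```python
def tree_centres(adj):
    """Return the 1 or 2 centre vertices of a free tree (adjacency dict) by
    repeatedly stripping leaves."""
    degree = {v: len(nbrs) for v, nbrs in adj.items()}
    leaves = [v for v, d in degree.items() if d <= 1]
    remaining = len(adj)
    while remaining > 2:
        remaining -= len(leaves)
        new_leaves = []
        for leaf in leaves:
            for nbr in adj[leaf]:
                degree[nbr] -= 1
                if degree[nbr] == 1:
                    new_leaves.append(nbr)
            degree[leaf] = 0
        leaves = new_leaves
    return leaves

def canonical_form(adj, labels, root, parent=None):
    """Canonical string of the tree rooted at `root` (AHU-style encoding)."""
    children = [canonical_form(adj, labels, c, root) for c in adj[root] if c != parent]
    return labels[root] + "(" + "".join(sorted(children)) + ")"

def free_tree_canonical(adj, labels):
    """Root the free tree at its centre(s) and keep the smallest encoding, so
    isomorphic labelled free trees produce identical strings."""
    return min(canonical_form(adj, labels, c) for c in tree_centres(adj))

# Example: the path a-b-c has centre b, giving the canonical string "b(a()c())".
adj = {1: [2], 2: [1, 3], 3: [2]}
labels = {1: "a", 2: "b", 3: "c"}
print(free_tree_canonical(adj, labels))
```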
Abstract:
A5-GMR-1 is a synchronous stream cipher used to provide confidentiality for communications between satellite phones and satellites. The keystream generator may be considered as a finite state machine, with an internal state of 81 bits. The design is based on four linear feedback shift registers, three of which are irregularly clocked. The keystream generator takes a 64-bit secret key and 19-bit frame number as inputs, and produces an output keystream of length between $2^8$ and $2^{10}$ bits. Analysis of the initialisation process for the keystream generator reveals serious flaws which significantly reduce the number of distinct keystreams that the generator can produce. Multiple (key, frame number) pairs produce the same keystream, and the relationship between the various pairs is easy to determine. Additionally, many of the keystream sequences produced are phase shifted versions of each other, for very small phase shifts. These features increase the effectiveness of generic time-memory tradeoff attacks on the cipher, making such attacks feasible.
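The abstract gives the register count and state size but not the taps or clocking rule, so the sketch below is only a toy: a three-register, majority-clocked keystream generator in the style of A5-family ciphers, illustrating irregular clocking. All register lengths, taps and clocking positions are placeholders; this is not A5-GMR-1.

```python
def lfsr_step(state, taps):
    """Advance a Fibonacci LFSR (list of bits, index 0 = output end) by one step."""
    feedback = 0
    for t in taps:
        feedback ^= state[t]
    return state[1:] + [feedback]

def majority(bits):
    return 1 if sum(bits) >= 2 else 0

def keystream(regs, taps, clock_bits, n_bits):
    """Majority-rule irregular clocking: a register steps only when its clocking bit
    agrees with the majority of all clocking bits; the output bit is the XOR of the
    registers' leading bits. All parameters here are toy placeholders."""
    out = []
    for _ in range(n_bits):
        m = majority([regs[i][clock_bits[i]] for i in range(3)])
        bit = 0
        for i in range(3):
            if regs[i][clock_bits[i]] == m:
                regs[i] = lfsr_step(regs[i], taps[i])
            bit ^= regs[i][0]
        out.append(bit)
    return out

# Toy usage with three short registers (purely illustrative sizes and taps):
regs = [[1, 0, 1, 1, 0], [0, 1, 1, 0, 1, 1], [1, 1, 0, 0, 1, 0, 1]]
taps = [[0, 2], [0, 3], [0, 4]]
clock_bits = [2, 3, 3]
print(keystream(regs, taps, clock_bits, 16))
```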