7 results for NETWORK REDUCTION
in QUB Research Portal - Research Directory and Institutional Repository for Queen's University Belfast
Abstract:
Wavelet transforms provide basis functions for time-frequency analysis and have properties that are particularly useful for the compression of analogue point-on-wave transient and disturbance power system signals. This paper evaluates the data reduction properties of the wavelet transform using real power system data and discusses the application of the reduction method for information transfer in network communications.
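A minimal sketch of the kind of wavelet-based data reduction the abstract refers to, assuming the PyWavelets package; the synthetic point-on-wave record, wavelet choice and threshold are illustrative assumptions, not the paper's settings.

```python
# Hypothetical sketch: compress a synthetic power-system transient by thresholding
# wavelet detail coefficients, then measure how many coefficients must be kept.
import numpy as np
import pywt

fs = 6400                                    # assumed sampling rate (samples/s)
t = np.arange(0, 0.2, 1 / fs)
signal = np.sin(2 * np.pi * 50 * t)          # 50 Hz fundamental
signal[640:680] += 0.8 * np.exp(-np.arange(40) / 8.0)  # injected switching transient

# Decompose, discard small detail coefficients, reconstruct
coeffs = pywt.wavedec(signal, 'db4', level=5)
thresh = 0.05 * np.max(np.abs(coeffs[-1]))
compressed = [coeffs[0]] + [pywt.threshold(c, thresh, mode='hard') for c in coeffs[1:]]

kept = sum(np.count_nonzero(c) for c in compressed)
total = sum(c.size for c in coeffs)
reconstructed = pywt.waverec(compressed, 'db4')[:signal.size]

print(f"retained coefficients: {kept}/{total} ({100 * kept / total:.1f}%)")
print(f"reconstruction RMS error: {np.sqrt(np.mean((signal - reconstructed) ** 2)):.4f}")
```

The retained-coefficient count stands in for the reduced data volume that would be sent over the communications channel.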
Abstract:
An artificial neural network (ANN) model is developed for the analysis and simulation of the correlation between the properties of maraging steels and their composition, processing and working conditions. The input parameters of the model consist of alloy composition, processing parameters (including cold deformation degree, ageing temperature and ageing time), and working temperature. The outputs of the ANN model include the following property parameters: ultimate tensile strength, yield strength, elongation, reduction in area, hardness, notched tensile strength, Charpy impact energy, fracture toughness, and martensitic transformation start temperature. Good performance of the ANN model is achieved. The model can be used to calculate the properties of maraging steels as functions of alloy composition, processing parameters, and working conditions. The combined influence of Co and Mo on the properties of maraging steels is simulated using the model, and the results are in agreement with experimental data. An explanation of the calculated results from a metallurgical point of view is attempted. The model can be used as a guide for further alloy development.
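As an illustration only, the sketch below builds a small multi-output feed-forward network of the general kind described, using scikit-learn; the feature list, synthetic property values and network size are assumptions, not the authors' dataset or architecture.

```python
# Illustrative composition/processing -> properties surrogate (not the paper's model).
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# Inputs: Co, Mo, Ti content; cold deformation; ageing temperature/time; working temperature.
X = rng.uniform([8, 3, 0.2, 0, 400, 1, 20],
                [13, 6, 1.6, 50, 550, 10, 500], size=(200, 7))
Y = np.column_stack([                      # synthetic stand-in property values
    1800 + 40 * X[:, 0] + 60 * X[:, 1],    # ultimate tensile strength (MPa)
    1750 + 35 * X[:, 0] + 55 * X[:, 1],    # yield strength (MPa)
    12 - 0.2 * X[:, 0],                    # elongation (%)
    520 + 5 * X[:, 1],                     # hardness (HV)
]) + rng.normal(0, 10, size=(200, 4))

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000,
                                   random_state=0))
model.fit(X, Y)

# Query the trained surrogate, e.g. to explore the combined influence of Co and Mo.
query = np.array([[10.0, 4.8, 0.8, 30, 480, 3, 25]])
print(model.predict(query))
```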
Abstract:
The future convergence of voice, video and data applications on the Internet requires that next-generation technology provide bandwidth and delay guarantees. Current technology trends are moving towards scalable aggregate-based systems in which applications are grouped together and guarantees are provided at the aggregate level only. This solution alone is not enough for interactive video applications with sub-second delay bounds. This paper introduces a novel packet marking scheme that controls the end-to-end delay of an individual flow as it traverses a network enabled to supply aggregate-granularity Quality of Service (QoS). IPv6 Hop-by-Hop extension header fields are used to track the packet delay encountered at each network node, and autonomous decisions are made on the best queuing strategy to employ. The results of network simulations are presented, and it is shown that when the proposed mechanism is employed the requested delay bound is met with a 20% reduction in resource reservation and no packet loss in the network.
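A hypothetical sketch of the per-hop decision the abstract describes: each node reads the delay accumulated so far (carried in an IPv6 Hop-by-Hop option in the real scheme), compares the remaining budget with what the default aggregate queue would add, and escalates the packet to a priority queue only when necessary. All names and delay figures are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Packet:
    delay_budget_ms: float       # end-to-end bound requested by the flow
    accumulated_ms: float = 0.0  # running delay recorded hop by hop

def forward(pkt: Packet, hops_remaining: int,
            aggregate_delay_ms: float, priority_delay_ms: float) -> str:
    """Pick a queue for this hop and update the delay carried in the packet."""
    remaining_budget = pkt.delay_budget_ms - pkt.accumulated_ms
    # If the aggregate queue would exceed this hop's share of the remaining budget,
    # use the priority queue; otherwise stay in the cheaper aggregate class.
    per_hop_share = remaining_budget / max(hops_remaining, 1)
    queue = "priority" if aggregate_delay_ms > per_hop_share else "aggregate"
    pkt.accumulated_ms += priority_delay_ms if queue == "priority" else aggregate_delay_ms
    return queue

pkt = Packet(delay_budget_ms=150.0)
for hop, (agg, prio) in enumerate([(30, 5), (60, 5), (25, 5), (40, 5)], start=1):
    q = forward(pkt, hops_remaining=5 - hop, aggregate_delay_ms=agg, priority_delay_ms=prio)
    print(f"hop {hop}: queue={q}, accumulated={pkt.accumulated_ms:.0f} ms")
print("bound met:", pkt.accumulated_ms <= pkt.delay_budget_ms)
```

Escalating only when the budget is threatened is what allows the reported reduction in resource reservation compared with reserving priority treatment at every hop.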
Abstract:
This paper investigates the center selection of multi-output radial basis function (RBF) networks, and a multi-output fast recursive algorithm (MFRA) is proposed. This method not only reveals the significance of each candidate center based on the reduction in the trace of the error covariance matrix, but also estimates the network weights simultaneously using a back substitution approach. The main contribution is that the center selection procedure and the weight estimation are performed within a well-defined regression context, leading to a significantly reduced computational complexity. The efficiency of the algorithm is confirmed by a computational complexity analysis, and simulation results demonstrate its effectiveness.
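The sketch below is a simplified, brute-force illustration of the underlying idea: at each step the candidate center giving the largest drop in the summed squared error over all outputs is added and the weights are re-estimated by least squares. It is not the paper's MFRA (which avoids exactly this repeated refitting); data and network size are assumptions.

```python
# Greedy multi-output RBF center selection by residual error reduction (illustrative only).
import numpy as np

def rbf(X, centers, width=1.0):
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width ** 2))

rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(200, 2))
Y = np.column_stack([np.sin(X[:, 0]) * np.cos(X[:, 1]),      # two outputs
                     np.exp(-X[:, 0] ** 2 - X[:, 1] ** 2)])

candidates = list(range(X.shape[0]))   # every training point is a candidate center
selected: list[int] = []
for _ in range(15):                    # build a 15-center network
    best, best_err = None, np.inf
    for c in candidates:
        Phi = rbf(X, X[selected + [c]])
        W, *_ = np.linalg.lstsq(Phi, Y, rcond=None)
        err = np.sum((Y - Phi @ W) ** 2)   # summed over both outputs
        if err < best_err:
            best, best_err = c, err
    selected.append(best)
    candidates.remove(best)
    print(f"centers={len(selected):2d}  residual={best_err:.4f}")
```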
Abstract:
A reduction in the time required to locate and restore faults on a utility's distribution network improves the customer minutes lost (CML) measurement and hence brings direct cost savings to the operating company. The traditional approach to fault location involves fault impedance determination from high-volume waveform files dispatched across a communications channel to a central location for processing and analysis. This paper examines an alternative scheme in which data processing is undertaken locally within a recording instrument, thus reducing the volume of data to be transmitted. Processed event fault reports may be emailed to relevant operational staff for the timely repair and restoration of the line.
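A minimal sketch of the kind of local processing such a recorder might perform, assuming a simple impedance-based distance estimate: fundamental-frequency voltage and current phasors are extracted from one cycle of the captured waveform and reduced to a short report. The line parameters and synthetic record are assumptions, not the paper's data or method.

```python
import numpy as np

fs, f0 = 3200, 50                          # assumed sampling rate and system frequency
n = fs // f0                               # samples per fundamental cycle
t = np.arange(n) / fs

# Synthetic one-cycle fault record: 11 kV phase voltage, heavily increased current.
v = 11e3 * np.sqrt(2 / 3) * np.sin(2 * np.pi * f0 * t)
i = 1200 * np.sqrt(2) * np.sin(2 * np.pi * f0 * t - 1.2)   # lagging fault current

def phasor(x):
    """Fundamental-frequency phasor from one cycle of samples (DFT bin 1)."""
    return 2 * np.fft.rfft(x)[1] / len(x)

z_fault = phasor(v) / phasor(i)            # apparent impedance seen from the recorder
x_per_km = 0.35                            # assumed line reactance, ohm/km
distance_km = z_fault.imag / x_per_km

print(f"apparent impedance: {z_fault.real:.2f} + j{z_fault.imag:.2f} ohm")
print(f"estimated distance to fault: {distance_km:.1f} km")   # contents of the emailed report
```

A few lines of report like this replace the megabytes of raw waveform data that would otherwise cross the communications channel.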
Abstract:
Retrospective clinical datasets are often characterized by a relatively small sample size and a large amount of missing data. In this case, a common way of handling the missingness consists in discarding from the analysis patients with missing covariates, further reducing the sample size. Alternatively, if the mechanism that generated the missing data allows, incomplete data can be imputed on the basis of the observed data, avoiding the reduction of the sample size and allowing complete-data methods to be applied afterwards. Moreover, methodologies for data imputation might depend on the particular purpose and might achieve better results by considering specific characteristics of the domain. The problem of missing data treatment is studied in the context of survival tree analysis for the estimation of a prognostic patient stratification. Survival tree methods usually address this problem by using surrogate splits, that is, splitting rules that use other variables yielding results similar to the original ones. Instead, our methodology consists in modeling the dependencies among the clinical variables with a Bayesian network, which is then used to perform data imputation, thus allowing the survival tree to be applied to the completed dataset. The Bayesian network is learned directly from the incomplete data using a structural expectation–maximization (EM) procedure in which the maximization step is performed with an exact anytime method, so that the only source of approximation is the EM formulation itself. On both simulated and real data, our proposed methodology usually outperformed several existing methods for data imputation, and the imputation so obtained improved the stratification estimated by the survival tree (especially with respect to using surrogate splits).
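The paper's structural-EM Bayesian-network imputation is not reproduced here; as a stand-in, the sketch below shows the same two-step idea using scikit-learn's model-based IterativeImputer to complete the covariates before any complete-data survival method is applied. The cohort, variable names and missingness pattern are invented.

```python
# Impute-then-analyse sketch (substitute imputer, not the paper's Bayesian network).
import numpy as np
import pandas as pd
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(42)
n = 120                                           # small retrospective cohort
data = pd.DataFrame({
    "age": rng.normal(65, 10, n),
    "tumour_size_mm": rng.gamma(4, 5, n),
    "marker_level": rng.lognormal(1.0, 0.5, n),
})
# Introduce roughly 20% missingness at random in two covariates.
for col in ("tumour_size_mm", "marker_level"):
    data.loc[rng.random(n) < 0.2, col] = np.nan

imputer = IterativeImputer(random_state=0, max_iter=20)
completed = pd.DataFrame(imputer.fit_transform(data), columns=data.columns)

print("missing values before:", int(data.isna().sum().sum()))
print("missing values after: ", int(completed.isna().sum().sum()))
# `completed` would then be passed, together with (time, event) outcomes, to a
# survival tree instead of relying on surrogate splits.
```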
Abstract:
A number of neural networks can be formulated as linear-in-the-parameters models. Training such networks can then be transformed into a model selection problem in which a compact model is selected from all the candidates using subset selection algorithms. Forward selection methods are popular fast subset selection approaches; however, they may produce only suboptimal models and can become trapped in a local minimum. More recently, a two-stage fast recursive algorithm (TSFRA) combining forward selection and backward model refinement has been proposed to improve the compactness and generalization performance of the model. This paper proposes unified two-stage orthogonal least squares methods instead of the fast recursive-based methods. In contrast to the TSFRA, this paper derives a new simplified relationship between the forward and the backward stages to avoid repetitive computations, using the inherent orthogonal properties of the least squares methods. Furthermore, a new term exchanging scheme for backward model refinement is introduced to reduce the computational demand. Finally, based on the error reduction ratio criterion, effective and efficient forward and backward subset selection procedures are proposed. Extensive examples are presented to demonstrate the improved model compactness achieved by the proposed technique in comparison with some popular methods.
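For context, the sketch below shows the classical forward orthogonal least squares step from which the error reduction ratio (ERR) criterion comes: each candidate regressor is orthogonalised against the terms already selected, and the one explaining the largest share of the output energy is added. The paper's two-stage backward refinement and term exchanging scheme are not reproduced; the candidate matrix is synthetic.

```python
import numpy as np

def forward_ols(Phi, y, n_terms):
    """Greedy subset selection on the columns of Phi using the ERR criterion."""
    selected, Q = [], []
    for _ in range(n_terms):
        best, best_err, best_q = None, 0.0, None
        for j in range(Phi.shape[1]):
            if j in selected:
                continue
            w = Phi[:, j].copy()
            for q in Q:                               # orthogonalise against chosen terms
                w -= (q @ Phi[:, j]) / (q @ q) * q
            if w @ w < 1e-12:
                continue
            err = (w @ y) ** 2 / ((w @ w) * (y @ y))  # error reduction ratio of this term
            if err > best_err:
                best, best_err, best_q = j, err, w
        selected.append(best)
        Q.append(best_q)
    return selected

rng = np.random.default_rng(3)
Phi = rng.normal(size=(300, 20))                      # 20 candidate basis functions
y = 2.0 * Phi[:, 3] - 1.5 * Phi[:, 7] + 0.5 * Phi[:, 12] + rng.normal(0, 0.1, 300)
print("selected terms:", forward_ols(Phi, y, 3))      # expect columns 3, 7 and 12
```

Because forward ERR selection alone can be trapped in a local minimum, the paper's backward stage revisits and exchanges selected terms at reduced computational cost.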