76 results for Density-based Scanning Algorithm


Relevance: 40.00%

Abstract:

This letter presents a microprocessor-based algorithm for calculating symmetrical components from distorted transient voltage and current signals in a power system. The fundamental-frequency components of the three-phase signals are first extracted using an algorithm based on Haar functions, and the symmetrical-component transformation is then applied to obtain the sequence components. The algorithm is computationally efficient and fast, making it well suited to microprocessor-based protection schemes for synchronous and induction machines.
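
As an illustration, a minimal Python sketch of the symmetrical-component step is given below; the single-bin DFT used here to extract the fundamental phasors is a stand-in assumption for the Haar-function-based extraction described in the letter.

import numpy as np

def fundamental_phasor(x, fs, f0=50.0):
    """Single-bin DFT estimate of the fundamental phasor (assumption: this
    stands in for the letter's Haar-function-based extraction)."""
    n = np.arange(len(x))
    return 2.0 / len(x) * np.sum(x * np.exp(-2j * np.pi * f0 * n / fs))

def symmetrical_components(va, vb, vc):
    """Classical Fortescue transformation of three phase phasors."""
    a = np.exp(2j * np.pi / 3)            # 120-degree rotation operator
    A = np.array([[1, 1,    1   ],
                  [1, a,    a**2],
                  [1, a**2, a   ]]) / 3.0
    return A @ np.array([va, vb, vc])     # [zero, positive, negative] sequence

# Example: a balanced set yields only a positive-sequence component.
fs, f0 = 5000.0, 50.0
t = np.arange(0, 0.2, 1 / fs)
va = np.cos(2 * np.pi * f0 * t)
vb = np.cos(2 * np.pi * f0 * t - 2 * np.pi / 3)
vc = np.cos(2 * np.pi * f0 * t + 2 * np.pi / 3)
v0, v1, v2 = symmetrical_components(*(fundamental_phasor(v, fs, f0) for v in (va, vb, vc)))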

Relevance: 40.00%

Abstract:

This paper presents an artificial feed-forward neural network (FFNN) approach for the assessment of power system voltage stability. A novel approach based on the input-output relation between real and reactive power, as well as voltage vectors for generators and load buses, is used to train the neural net (NN). The input features of the feed-forward network are generated from offline training data, over various simulated loading conditions, using a conventional voltage stability algorithm based on the L-index. The neural network is trained with the L-index output as the target vector for each of the system loads. Two separately trained NNs, corresponding to normal loading and contingency conditions, are investigated on a practical 367-node power system network. The performance of the trained artificial neural network (ANN) is also investigated under various voltage stability assessment conditions. Compared to the computationally intensive conventional benchmark software, near-exact values of the L-index, and thus of the voltage profile, were obtained. The proposed algorithm is fast, robust and accurate, and can be used online for predicting the L-indices of all the power system buses. The proposed ANN approach is also shown to be effective and computationally feasible for voltage stability assessment, and as a potential enhancement within an overall energy management system for determining local and global stability indices.
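
A minimal sketch of the training setup follows, assuming scikit-learn's MLPRegressor and toy (P, Q, |V|) feature vectors with placeholder L-index targets; the network architecture and data shapes are illustrative assumptions, not the paper's configuration.

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n_samples, n_buses = 500, 30                           # toy dimensions
X = rng.uniform(0.0, 1.5, (n_samples, 3 * n_buses))    # [P, Q, |V|] per bus
y = rng.uniform(0.0, 1.0, (n_samples, n_buses))        # placeholder L-indices
# In the paper's setting, (X, y) would come from offline runs of the
# conventional L-index computation under simulated loading conditions.

nn = MLPRegressor(hidden_layer_sizes=(64,), max_iter=2000, random_state=0)
nn.fit(X, y)                  # offline training
L_pred = nn.predict(X[:5])    # fast online L-index prediction for all buses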

Relevance: 40.00%

Abstract:

A density matrix renormalization group (DMRG) algorithm is presented for the Bethe lattice with connectivity Z = 3 and antiferromagnetic exchange between nearest-neighbor spins s = 1/2 or 1 sites in successive generations g. The algorithm is accurate for s = 1 sites. The ground states are magnetic with spin S(g) = 2^g s, staggered magnetization that persists for large g > 20, and short-range spin correlation functions that decrease exponentially. A finite energy gap to S > S(g) leads to a magnetization plateau in the extended lattice. Closely similar DMRG results for s = 1/2 and 1 are interpreted in terms of an analytical three-site model.
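
For reference, the three-site model mentioned above can be diagonalized exactly in a few lines; the open-chain geometry and unit exchange constant in this sketch are illustrative assumptions.

import numpy as np

sx = np.array([[0, 1], [1, 0]]) / 2.0
sy = np.array([[0, -1j], [1j, 0]]) / 2.0
sz = np.array([[1, 0], [0, -1]]) / 2.0
I2 = np.eye(2)

def two_site(op, i, j, n=3):
    """Embed op (x) op acting on sites i and j of an n-site chain."""
    mats = [I2] * n
    mats[i], mats[j] = op, op
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

# H = S1.S2 + S2.S3: antiferromagnetic exchange between nearest neighbors.
H = sum(two_site(s, 0, 1) + two_site(s, 1, 2) for s in (sx, sy, sz))
energies = np.linalg.eigvalsh(H)
print(energies[0])   # ground-state energy, -1.0 for this s = 1/2 geometry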

Relevance: 40.00%

Abstract:

Composite T-joints are commonly used in modern composite airframes, pressure vessels and piping structures, and in multi-cell thin-walled structures, mainly to increase the bending strength of the joint and to prevent buckling of plates and shells. Here we report a detailed study on the propagation of guided ultrasonic wave modes in a composite T-joint and their interactions with a delamination in the co-cured, co-bonded flange. A well-designed guiding path is employed wherein the waves undergo a two-step mode conversion process: one due to the web and joint filler on the back face of the flange, and the other due to the delamination edges close beneath the accessible surface of the flange. A 3D laser Doppler vibrometer is used to obtain the three components of the surface displacements/velocities on the accessible face of the flange of the T-joint. The waves are launched by a piezoceramic wafer bonded onto the back surface of the flange. What is novel in the proposed method is that the location of any change in material/geometric properties can be traced by computing a frequency-domain power flow along a scan line. The scan line can be chosen over a grid either during the scan or during off-line post-processing of the scan data. The proposed technique eliminates the need for baseline data and for disassembly of the structure for structural interrogation.
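
A hedged sketch of the scan-line processing idea follows: the measured velocities are transformed to the frequency domain and a power-flow-like quantity is tracked along the line. The adjacent-point cross-spectral proxy used here is an illustrative assumption, not the paper's exact estimator.

import numpy as np

def band_power_flow(v, fs, band=(100e3, 500e3)):
    """v: (n_points, n_samples) out-of-plane velocities along a scan line.
    Returns a per-point band-limited cross-spectral magnitude whose abrupt
    changes flag material/geometric discontinuities (e.g. delamination
    edges). The frequency band is a placeholder."""
    n_pts, n = v.shape
    V = np.fft.rfft(v, axis=1)
    f = np.fft.rfftfreq(n, d=1.0 / fs)
    sel = (f >= band[0]) & (f <= band[1])
    flow = np.empty(n_pts - 1)
    for i in range(n_pts - 1):
        cross = V[i, sel] * np.conj(V[i + 1, sel])   # cross-spectrum
        flow[i] = np.abs(cross.sum())
    return flow

# A sharp change in `flow` along the line marks the location of a material
# or geometric change, so no baseline (undamaged) record is required.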

Relevance: 40.00%

Abstract:

Several research groups have attempted to optimize photopolymerization parameters to increase the throughput of scanning-based microstereolithography (MSL) systems through modified beam scanning techniques. Efforts to reduce the curing line width have relied on high numerical aperture (NA) optical setups. However, the intensity contour symmetry and the depth of field of focus have led to grossly non-vertical and non-uniform curing profiles. This work revisits the photopolymerization process in a scanning-based MSL system from the standpoint of material functionality and optical design. The focus has been to exploit the rich potential of the photoreactor scanning system in achieving the desired fabrication modalities (minimum curing width, uniform depth profile, and vertical curing profile) even with a reduced-NA optical setup and a single movable stage. The present study exploits the effect of an optimized, lower photoinitiator (PI) concentration [c] in reducing the minimum curing width to ~10-20 um, even with a larger spot size (~21.36 um), through a judiciously chosen "monomer-PI" system. Increasing E_max (the maximum laser exposure energy at the surface) by optimizing the scan rate provides enough time for the monomer or resin to cure across the entire resist thickness (surface to substrate, ~10-100 um), leading to uniform depth profiles along the entire scan lengths. (C) 2012 American Institute of Physics. http://dx.doi.org/10.1063/1.4750975
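
For context, the trade-offs above can be illustrated with the standard stereolithography working-curve (Jacobs) relations; these are textbook equations, not this paper's model, and the numbers below are placeholders.

import numpy as np

def cure_depth(E_max, E_c, D_p):
    """Working-curve cure depth: C_d = D_p * ln(E_max / E_c)."""
    return D_p * np.log(E_max / E_c)

def cured_line_width(B, C_d, D_p):
    """Cured line width: L_w = B * sqrt(C_d / (2 * D_p)), B = beam spot size."""
    return B * np.sqrt(C_d / (2.0 * D_p))

# Placeholder values (assumptions): lowering the PI concentration raises both
# the penetration depth D_p and the critical exposure E_c, so only the centre
# of the Gaussian spot cures, narrowing the line below the spot size.
E_max, E_c = 15.0, 10.0      # mJ/cm^2: peak surface exposure vs. critical
D_p, B = 50.0, 21.36         # micrometres: penetration depth, spot size
C_d = cure_depth(E_max, E_c, D_p)
print(C_d, cured_line_width(B, C_d, D_p))   # ~20 um depth, ~9.6 um width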

Relevance: 40.00%

Abstract:

In this letter, we characterize the extrinsic information transfer (EXIT) behavior of a factor-graph-based message passing algorithm for detection in large multiple-input multiple-output (MIMO) systems with tens to hundreds of antennas. The EXIT curves of a joint detection-decoding receiver are obtained for low-density parity-check (LDPC) codes of given degree distributions. From the obtained EXIT curves, an optimization of the LDPC code degree profiles is carried out to design irregular LDPC codes matched to the large-MIMO channel and the joint message passing receiver. With low-complexity joint detection-decoding, these codes are shown to perform better than off-the-shelf irregular codes in the literature by about 1 to 1.5 dB at a coded BER of 10^-5 in 16 x 16, 64 x 64 and 256 x 256 MIMO systems.
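
A minimal Monte Carlo sketch of how one point of a variable-node EXIT curve can be estimated under the usual consistent-Gaussian assumption on LLRs; the node degree and channel quality are illustrative, and the MIMO detector side of the joint receiver is not modeled.

import numpy as np

rng = np.random.default_rng(0)

def llr_samples(sigma, n):
    """Consistent Gaussian LLRs for the all-plus-one codeword: N(sigma^2/2, sigma^2)."""
    return rng.normal(sigma**2 / 2.0, sigma, n)

def mutual_info(llr):
    """I = 1 - E[log2(1 + exp(-L))] for consistent LLRs given +1 sent."""
    return 1.0 - np.mean(np.log2(1.0 + np.exp(-llr)))

def j_inverse(target, n=100000):
    """Crude bisection inverse of the sigma -> mutual information map."""
    lo, hi = 1e-3, 20.0
    for _ in range(40):
        mid = 0.5 * (lo + hi)
        if mutual_info(llr_samples(mid, n)) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def vnd_exit_point(I_A, dv, sigma_ch, n=100000):
    """Extrinsic information out of a degree-dv variable node: channel LLR
    plus (dv - 1) independent a priori LLRs."""
    sigma_a = j_inverse(I_A)
    L = llr_samples(sigma_ch, n) + sum(llr_samples(sigma_a, n) for _ in range(dv - 1))
    return mutual_info(L)

print(vnd_exit_point(I_A=0.5, dv=3, sigma_ch=1.5))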

Relevance: 40.00%

Abstract:

While it is well known that extremely long low-density parity-check (LDPC) codes perform exceptionally well in error correction applications, short-length codes are preferable in practical applications. However, short-length LDPC codes suffer from performance degradation owing to graph-based impairments such as short cycles, trapping sets and stopping sets in the bipartite graph of the LDPC matrix. In particular, performance degradation at moderate to high E_b/N_0 is caused by oscillations in the bit-node a posteriori probabilities induced by short cycles and trapping sets in the bipartite graph. In this study, a computationally efficient algorithm is proposed to improve the performance of short-length LDPC codes at moderate to high E_b/N_0. The algorithm makes use of the information generated by the belief propagation (BP) algorithm in the iterations before a decoding failure occurs. Using this information, a reliability-based estimation is performed on each bit node to supplement the BP algorithm. The proposed algorithm gives an appreciable coding gain over BP decoding for LDPC codes with code rates of 1/2 or less. The coding gains are modest to significant for regular LDPC codes optimized for bipartite-graph conditioning, and substantial for unoptimized codes. Hence, this algorithm is useful for relaxing some stringent constraints on the graphical structure of the LDPC code and for developing hardware-friendly designs.
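
One simple instance of reusing earlier-iteration BP output is sketched below: averaging per-bit a posteriori LLRs over the last few iterations to damp cycle-induced oscillations. This averaging rule is an illustrative stand-in for the paper's reliability-based estimation, not its exact algorithm.

import numpy as np

def rescue_decisions(llr_history, window=5):
    """llr_history: (n_iters, n_bits) a posteriori LLRs recorded while BP ran.
    Returns hard decisions (0/1 per bit) from the time-averaged LLRs of the
    last `window` iterations, damping oscillations caused by short cycles."""
    avg = np.mean(llr_history[-window:], axis=0)
    return (avg < 0).astype(int)

# Usage inside a decoder loop (hypothetical names):
#   history.append(posterior_llrs.copy())  each BP iteration;
#   if the syndrome check still fails after the final iteration:
#       bits = rescue_decisions(np.array(history))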

Relevance: 40.00%

Abstract:

Chebyshev-inequality-based convex relaxations of Chance-Constrained Programs (CCPs) are shown to be useful for learning classifiers on massive datasets. In particular, an algorithm that integrates efficient clustering procedures and CCP approaches for computing classifiers on large datasets is proposed. The key idea is to identify high-density regions, or clusters, from the individual class-conditional densities and then use a CCP formulation to learn a classifier on the clusters. The CCP formulation ensures that most of the data points in a cluster are correctly classified by employing a Chebyshev-inequality-based convex relaxation. This relaxation is heavily dependent on second-order statistics; however, this formulation, and in general such relaxations that depend on second-order moments, are susceptible to moment estimation errors. One of the contributions of the paper is to propose several formulations that are robust to such errors. In particular, a generic way of making such formulations robust to moment estimation errors is illustrated using two novel confidence sets. An important contribution is to show that when either of the confidence sets is employed, for the special case of spherical normal clusters, the robust variant of the formulation can be posed as a second-order cone program. Empirical results show that the robust formulations achieve accuracies comparable to those obtained with true moments, even when the moment estimates are erroneous. Results also illustrate the benefits of employing the proposed methodology for robust classification of large-scale datasets.
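
For illustration, the basic Chebyshev cone constraint can be written directly in cvxpy; this is a minimal sketch with toy cluster moments, and the robust variants described above would replace these moments with their confidence-set counterparts.

# A cluster with mean mu, covariance square root S, and coverage eta is
# classified correctly with probability >= eta if
#     y * (w @ mu + b) >= kappa * ||S w||,   kappa = sqrt(eta / (1 - eta)).
import cvxpy as cp
import numpy as np

mus = [np.array([0.0, 0.0]), np.array([3.0, 3.0])]     # toy cluster means
sqrt_covs = [np.eye(2) * 0.5, np.eye(2) * 0.7]         # toy S = Sigma^{1/2}
labels = [-1.0, 1.0]
eta = 0.9
kappa = np.sqrt(eta / (1.0 - eta))

w = cp.Variable(2)
b = cp.Variable()
constraints = [y * (mu @ w + b) >= kappa * cp.norm(S @ w, 2) + 1.0
               for mu, S, y in zip(mus, sqrt_covs, labels)]
prob = cp.Problem(cp.Minimize(cp.norm(w, 2)), constraints)   # max-margin style
prob.solve()
print(w.value, b.value)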

Relevance: 40.00%

Abstract:

Motivated by the observation that communities in real world social networks form due to actions of rational individuals in networks, we propose a novel game theory inspired algorithm to determine communities in networks. The algorithm is decentralized and only uses local information at each node. We show the efficacy of the proposed algorithm through extensive experimentation on several real world social network data sets.
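
A minimal sketch of decentralized best-response label dynamics in this spirit, using only each node's local neighborhood; the specific payoff (the count of same-label neighbors) is an illustrative assumption, not the paper's utility function.

import random

def find_communities(adj, n_rounds=50, seed=0):
    """adj: dict mapping node -> list of neighbor nodes."""
    rng = random.Random(seed)
    label = {v: v for v in adj}              # every node starts alone
    nodes = list(adj)
    for _ in range(n_rounds):
        rng.shuffle(nodes)
        changed = False
        for v in nodes:                      # each node uses local info only
            counts = {}
            for u in adj[v]:
                counts[label[u]] = counts.get(label[u], 0) + 1
            if counts:
                best = max(counts, key=counts.get)   # best response
                if best != label[v]:
                    label[v], changed = best, True
        if not changed:                      # Nash-like fixed point reached
            break
    return label

# Toy graph: two triangles joined by a single edge.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
print(find_communities(adj))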

Relevance: 40.00%

Abstract:

The recent focus of flood frequency analysis (FFA) studies has been on developing methods to model joint distributions of variables such as peak flow, volume, and duration that characterize a flood event, as comprehensive knowledge of a flood event is often necessary in hydrological applications. A diffusion-process-based adaptive kernel (D-kernel) is suggested in this paper for this purpose. It is data-driven, flexible and, unlike most kernel density estimators, always yields a bona fide probability density function. It overcomes shortcomings associated with the use of conventional kernel density estimators in FFA, such as the boundary leakage problem and the normal reference rule. The potential of the D-kernel is demonstrated by application to synthetic samples of various sizes drawn from known unimodal and bimodal populations, and to five typical peak flow records from different parts of the world. It is shown to be effective when compared to the conventional Gaussian kernel and the best of seven commonly used copulas (Gumbel-Hougaard, Frank, Clayton, Joe, Normal, Plackett, and Student's t) in estimating the joint distribution of peak flow characteristics and extrapolating beyond historical maxima. Selection of the optimum number of bins is found to be critical in modeling with the D-kernel.
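
The core diffusion idea can be sketched in a few lines: place the sample on a histogram grid and evolve it under the discrete heat equation with no-flux boundaries, which behaves like Gaussian smoothing in the interior while mitigating boundary leakage. The bin count and stopping time below are assumptions, and the paper's adaptive, multivariate machinery is omitted.

import numpy as np

def diffusion_kde(samples, n_bins=128, t=0.02, dt=1e-4):
    lo, hi = samples.min(), samples.max()
    edges = np.linspace(lo, hi, n_bins + 1)
    h = np.histogram(samples, bins=edges, density=True)[0]
    dx = edges[1] - edges[0]
    for _ in range(int(t / dt)):
        lap = np.zeros_like(h)
        lap[1:-1] = (h[2:] - 2 * h[1:-1] + h[:-2]) / dx**2
        lap[0] = (h[1] - h[0]) / dx**2       # reflecting (no-flux) boundary
        lap[-1] = (h[-2] - h[-1]) / dx**2
        h = h + dt * lap                     # one explicit heat-equation step
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, h                        # mass is conserved, so h still integrates to ~1

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0, 1, 400), rng.normal(5, 0.5, 200)])  # bimodal
centers, density = diffusion_kde(x)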

Relevance: 40.00%

Abstract:

Granger causality is increasingly being applied to multi-electrode neurophysiological and functional imaging data to characterize directional interactions between neurons and brain regions. For a multivariate dataset, one might be interested in different subsets of the recorded neurons or brain regions. According to the current estimation framework, for each subset, one conducts a separate autoregressive model fitting process, introducing the potential for unwanted variability and uncertainty. In this paper, we propose a multivariate framework for estimating Granger causality. It is based on spectral density matrix factorization and offers the advantage that the estimation of such a matrix needs to be done only once for the entire multivariate dataset. For any subset of recorded data, Granger causality can be calculated through factorizing the appropriate submatrix of the overall spectral density matrix.
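
Once the factorization S(f) = H(f) Sigma H(f)* of the relevant (sub)spectral matrix is available (e.g., via Wilson's algorithm, which is assumed here and not shown), Geweke's frequency-domain Granger causality follows in closed form; a minimal bivariate sketch:

import numpy as np

def gc_2_to_1(S11, H12, Sigma):
    """Geweke causality, channel 2 -> channel 1, at one frequency.
    S11: spectrum of channel 1; H12: transfer-function entry from the
    factorization at that frequency; Sigma: 2x2 innovation covariance."""
    # Partial variance of channel 2's innovations given channel 1's.
    sigma2_partial = Sigma[1, 1] - Sigma[0, 1] ** 2 / Sigma[0, 0]
    intrinsic = np.real(S11) - sigma2_partial * np.abs(H12) ** 2
    return np.log(np.real(S11) / intrinsic)

# Toy numbers (assumptions) at a single frequency:
Sigma = np.array([[1.0, 0.2], [0.2, 1.0]])
print(gc_2_to_1(S11=2.0, H12=0.5 + 0.3j, Sigma=Sigma))
# Evaluating this over all frequencies gives the causality spectrum; for any
# channel pair, only the corresponding 2x2 submatrix of the already-estimated
# overall spectral matrix needs to be factorized, as the abstract notes.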
