355 results for Linear rock cutting
Abstract:
This paper presents methodologies for incorporating phasor measurements into a conventional state estimator. The angle measurements obtained from Phasor Measurement Units (PMUs) are handled as angle-difference measurements rather than being incorporated directly; handling them in this manner overcomes the problems arising from the choice of reference bus. Current measurements obtained from PMUs are treated as equivalent pseudo-voltage measurements at the neighboring buses. Two solution approaches, the normal-equations approach and the linear-programming approach, are presented to show how the PMU measurements can be handled, together with a comparative evaluation of both. Test results on the IEEE 14-bus system validate both approaches.
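As an illustrative sketch (not taken from the paper), the key property of angle-difference measurements, invariance to the choice of reference bus, can be checked in a few lines; the bus numbers and angle values below are hypothetical:

```python
import math

def angle_difference_measurements(pmu_angles_rad, pairs):
    """Convert raw PMU angle readings into angle-difference pseudo-measurements.

    pmu_angles_rad: dict bus -> measured voltage angle (radians)
    pairs: list of (bus_i, bus_j) tuples for which differences are formed
    """
    return [pmu_angles_rad[i] - pmu_angles_rad[j] for i, j in pairs]

# Hypothetical raw angles, referenced to bus 1 (angle 0 at the reference).
angles = {1: 0.0, 2: -0.087, 5: -0.152}
pairs = [(2, 1), (5, 2)]
diffs = angle_difference_measurements(angles, pairs)

# Shifting every angle by a constant (equivalent to picking a different
# reference bus) leaves the difference measurements unchanged.
shift = 0.3
shifted = {b: a + shift for b, a in angles.items()}
diffs_shifted = angle_difference_measurements(shifted, pairs)
assert all(math.isclose(d1, d2) for d1, d2 in zip(diffs, diffs_shifted))
```

Raw angles, by contrast, all change when the reference bus changes, which is the problem the angle-difference formulation sidesteps.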
Abstract:
Let X_1, ..., X_m be a set of m statistically dependent sources over the common alphabet F_q that are linearly independent when considered as functions over the sample space. We consider a distributed function computation setting in which the receiver is interested in the lossless computation of the elements of an s-dimensional subspace W spanned by the elements of the row vector [X_1, ..., X_m]Gamma, in which the (m x s) matrix Gamma has rank s. A sequence of three increasingly refined approaches is presented, all based on linear encoders. The first approach uses a common matrix to encode all the sources and a Korner-Marton-like receiver to directly compute W. The second improves upon the first by showing that it is often more efficient to compute a carefully chosen superspace U of W. The superspace is identified by showing that the joint distribution of the {X_i} induces a unique decomposition of the set of all linear combinations of the {X_i} into a chain of subspaces identified by a normalized measure of entropy. This subspace chain also suggests a third approach, one that employs nested codes. For any joint distribution of the {X_i} and any W, the sum rate of the nested-code approach is no larger than that under the Slepian-Wolf (SW) approach, in which W is computed by first recovering each of the {X_i}. For a large class of joint distributions and subspaces W, the nested-code approach is shown to improve upon SW. Additionally, a class of source distributions and subspaces is identified for which the nested-code approach is sum-rate optimal.
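A hedged numerical illustration of the final claim, using the classic Korner-Marton binary example rather than the paper's general subspace setting: with X_2 = X_1 XOR Z, computing only the function W = X_1 XOR X_2 = Z at sum rate 2H(Z) can beat the Slepian-Wolf sum rate H(X_1, X_2) needed to recover both sources:

```python
from math import log2

def h2(p):
    """Binary entropy in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

# X1 ~ Bernoulli(1/2); X2 = X1 XOR Z with Z ~ Bernoulli(p) independent of X1.
p = 0.1
sum_rate_sw = 1 + h2(p)   # Slepian-Wolf: recover (X1, X2); H(X1, X2) = 1 + h2(p)
sum_rate_km = 2 * h2(p)   # Korner-Marton: compute X1 XOR X2 only; 2 * H(Z)

assert sum_rate_km < sum_rate_sw  # ~0.938 bits vs ~1.469 bits for p = 0.1
```

The gap widens as the sources become more correlated (p -> 0), which is the regime where function computation pays off most.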
Abstract:
In the document classification community, support vector machines and the naive Bayes classifier are known for their simple yet excellent performance. The feature subsets used by these two approaches usually complement each other; however, little has been done to combine them. The essence of this paper is a linear classifier, very similar to these two. We propose a novel way of combining the two approaches that synthesizes the best of each into a hybrid model. We evaluate the proposed approach on the 20 Newsgroups (20NG) dataset and compare it with its counterparts. Our results strongly corroborate the effectiveness of the approach.
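One well-known way to realize such a hybrid (a sketch of the general idea, not necessarily the paper's exact model) is to use smoothed naive Bayes log-count ratios as the weights of a linear classifier; the toy documents below are made up:

```python
from math import log
from collections import Counter

def nb_linear_weights(pos_docs, neg_docs, alpha=1.0):
    """Multinomial naive Bayes reduces to a linear classifier: each word's
    weight is its Laplace-smoothed log-count ratio between the two classes."""
    vocab = {w for d in pos_docs + neg_docs for w in d}
    pos = Counter(w for d in pos_docs for w in d)
    neg = Counter(w for d in neg_docs for w in d)
    pos_total = sum(pos.values()) + alpha * len(vocab)
    neg_total = sum(neg.values()) + alpha * len(vocab)
    return {w: log((pos[w] + alpha) / pos_total) - log((neg[w] + alpha) / neg_total)
            for w in vocab}

def score(doc, weights):
    """Linear decision function: sign(score) gives the predicted class."""
    return sum(weights.get(w, 0.0) for w in doc)

pos_docs = [["great", "movie"], ["great", "fun"]]
neg_docs = [["boring", "movie"], ["awful", "boring"]]
w = nb_linear_weights(pos_docs, neg_docs)
assert score(["great", "fun"], w) > 0
assert score(["boring", "awful"], w) < 0
```

These ratio weights could then be combined with margin-based training (the SVM side), which is the usual motivation for such hybrids.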
Abstract:
Seismic site classifications are used to represent site effects when estimating hazard parameters (response spectral ordinates) at the soil surface. Site classification has generally been carried out using the average shear-wave velocity and/or standard penetration test N-values of the top 30 m of soil, following the recommendations of the National Earthquake Hazards Reduction Program (NEHRP) or the International Building Code (IBC). The NEHRP/IBC site classification system is based on studies carried out in the United States, where soil layers extend to several hundred meters before reaching any distinct soil-bedrock interface, and it may not be directly applicable to other regions, especially those with shallow geological deposits. This paper investigates the influence of rock depth on site classes assigned per the NEHRP and IBC recommendations. For this study, soil sites with a wide range of average shear-wave velocities (or standard penetration test N-values) were collected from different parts of Australia, China, and India. Shear-wave velocities of the rock layers underlying the soil were also collected, at depths from a few meters to 180 m. It is shown that a classification based on the top 30 m of soil often assigns stiffer site classes to soil sites with shallow rock (rock depth less than 25 m from the soil surface). A new site classification system based on average soil thickness down to engineering bedrock is proposed, which is considered more representative for soil sites in shallow-bedrock regions. Response spectral ordinates, amplification factors, and site periods estimated from one-dimensional shear-wave analysis that accounts for the depth of engineering bedrock are found to differ from those obtained considering only the top 30 m of soil.
Abstract:
We present a study correlating uniaxial stress in a polymer with its underlying structure when it is strained. The uniaxial stress is significantly influenced by the mean-square bond length and mean bond angle. In contrast, the size and shape of the polymer, typically represented by the end-to-end length, mass ratio, and radius of gyration, contribute negligibly. Among externally set control variables, density and polymer chain length play a critical role in influencing the anisotropic uniaxial stress. Short chain polymers more or less behave like rigid molecules. Temperature and rate of loading, in the range considered, have a very mild effect on the uniaxial stress.
Abstract:
The delineation of seismic source zones plays an important role in the evaluation of seismic hazard. In most studies, seismic source delineation is based on geological features. In the present study, an attempt has been made to delineate seismic source zones in the study area (south India) based on seismicity parameters. Seismicity parameters and the maximum probable earthquake for these source zones were evaluated and used in the hazard evaluation. The probabilistic evaluation of seismic hazard for south India was carried out using a logic-tree approach. Two types of seismic sources, linear and areal, were considered to model the seismic sources in the region more precisely. To properly account for the attenuation characteristics of the region, three different attenuation relations were used with different weighting factors. Seismic hazard was evaluated for probabilities of exceedance (PE) of 10% and 2% in 50 years. The spatial variation of rock-level peak horizontal acceleration (PHA) and spectral acceleration (Sa) values corresponding to return periods of 475 and 2500 years for the entire study area is presented in this work. Peak ground acceleration (PGA) values at the ground surface were estimated for different NEHRP site classes by considering local site effects.
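The quoted PE/return-period pairs follow from the standard Poisson hazard relation PE = 1 - exp(-t / T), i.e. T = -t / ln(1 - PE); a quick check, assuming this conventional model:

```python
from math import log

def return_period(prob_exceedance, exposure_years):
    """Return period under a Poisson occurrence model:
    PE = 1 - exp(-t / T)  =>  T = -t / ln(1 - PE)."""
    return -exposure_years / log(1.0 - prob_exceedance)

t475 = return_period(0.10, 50)   # ~475 years for 10% PE in 50 years
t2475 = return_period(0.02, 50)  # ~2475 years for 2% PE in 50 years
```

The 2% in 50 years case gives roughly 2475 years, which is conventionally rounded to the 2500-year return period cited in such studies.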
Abstract:
Slow flow in granular materials is characterized by a high solid fraction and sustained inter-particle interaction. The kinematics of trawling or cutting arises in processes such as the locomotion of organisms in sand, trawl-gear movement on a soil deposit, plow movement, and the movement of rovers and earth-moving equipment. This configuration is also very similar to the shallow drilling configurations encountered in the mining and petroleum industries. An experimental study was made to understand the velocity and deformation fields in the cutting of a model rounded sand. Under nominal plane-strain conditions, the sand is subjected to orthogonal cutting at different tool-rake angles. High-resolution optical images of the cutting region were obtained during the flow of the granular ensemble around the tool. These experiments reveal interesting kinematics underlying the formation of a chip and the evolution of the deformation field. The images are also analyzed using a PIV algorithm, yielding detailed information on deformation parameters such as velocity, strain rate, and volume change.
Abstract:
State estimation is one of the most important functions in an energy control centre. A computationally efficient state estimator that is free from numerical instability/ill-conditioning is essential for security assessment of the electric power grid. Whereas approaches that successfully overcome numerical ill-conditioning have been proposed, an efficient algorithm for addressing convergence issues in the presence of topological errors is yet to be evolved. Trust-region (TR) methods have been successfully employed to overcome the divergence problem to a certain extent. In this study, case studies are presented in which conventional algorithms, including the existing TR methods, fail to converge. A linearised model-based TR method that successfully overcomes these convergence issues is proposed. On the computational front, unlike existing TR methods for state estimation, which employ quadratic models, the proposed linear model-based estimator is computationally efficient because the model minimiser can be computed in a single step. The model minimiser at each step is computed by minimising the linearised model subject to TR and measurement mismatch constraints. The infinity norm is used to define the geometry of the TR, and measurement mismatch constraints are employed to improve accuracy. The proposed algorithm is compared with a quadratic model-based TR algorithm in case studies on the IEEE 30-bus system and on 205-bus and 514-bus equivalent systems of part of the Indian grid.
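Why a linear model with an infinity-norm trust region admits a single-step minimiser can be seen in a small sketch (the measurement mismatch constraints, which would require a linear program, are omitted here): minimising m(p) = f0 + g.p over the box ||p||_inf <= radius decouples coordinate-wise.

```python
def linear_model_minimizer(gradient, radius):
    """Closed-form minimizer of the linear model m(p) = f0 + g.p over the box
    ||p||_inf <= radius: each component moves to the box face opposite the
    sign of its gradient component (no iterative subproblem solve needed)."""
    step = []
    for g in gradient:
        if g > 0:
            step.append(-radius)
        elif g < 0:
            step.append(radius)
        else:
            step.append(0.0)  # flat direction: any feasible value is optimal
    return step

g = [2.0, -1.5, 0.0]
p = linear_model_minimizer(g, 0.1)
assert p == [-0.1, 0.1, 0.0]
```

A quadratic model, by contrast, couples the components through the Hessian, which is why quadratic-model TR subproblems cost substantially more per iteration.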
Abstract:
The random eigenvalue problem arises in determining the frequencies and mode shapes of a linear system with uncertain structural properties. Among several methods of characterizing this problem, one computationally fast method that gives good accuracy is a weak formulation using polynomial chaos expansion (PCE), in which the eigenvalues and eigenvectors are expanded in PCE and the residual is minimized by a Galerkin projection. The goals of the current work are (i) to implement this PCE-characterized random eigenvalue problem in dynamic response calculations under random loading and (ii) to explore the computational advantages and challenges. In the proposed method, the response quantities are also expressed in PCE, followed by a Galerkin projection. A numerical comparison with a perturbation method and Monte Carlo simulation shows that when the loading has a random amplitude but deterministic frequency content, the proposed method is more accurate than a first-order perturbation method and attains accuracy comparable to Monte Carlo simulation in lower computational time. However, as the frequency content of the loading becomes random, or for general random-process loadings, the method loses its accuracy and computational efficiency. Issues in implementation, limitations, and further challenges are also addressed.
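A minimal sketch of projecting a random eigenvalue onto a Hermite polynomial chaos basis, using a hypothetical single-degree-of-freedom system with Gaussian stiffness perturbation (this toy problem is an assumption for illustration, not the paper's example):

```python
from math import sqrt

# Probabilists' Gauss-Hermite rule (3 points), exact for polynomials to degree 5.
nodes = [-sqrt(3.0), 0.0, sqrt(3.0)]
weights = [1.0 / 6.0, 2.0 / 3.0, 1.0 / 6.0]

def he(n, x):
    """Probabilists' Hermite polynomials He_0, He_1, He_2."""
    return [1.0, x, x * x - 1.0][n]

def pce_coefficients(eigenvalue, order=2):
    """Galerkin projection onto Hermite chaos: lam_n = E[lambda He_n] / E[He_n^2],
    with the expectations evaluated by Gauss-Hermite quadrature."""
    norms = [1.0, 1.0, 2.0]  # E[He_n(xi)^2] for n = 0, 1, 2
    return [sum(w * eigenvalue(x) * he(n, x) for w, x in zip(weights, nodes)) / norms[n]
            for n in range(order + 1)]

# Single-DOF system: k = k0 * (1 + sigma * xi), so lambda = k / m is linear in xi.
k0, m, sigma = 4.0, 1.0, 0.1
coeffs = pce_coefficients(lambda x: k0 * (1.0 + sigma * x) / m)
# Exact expansion here: lam_0 = k0/m, lam_1 = k0*sigma/m, lam_2 = 0.
```

For a multi-DOF system the eigenvalue is a nonlinear function of xi and the higher-order coefficients become nonzero, which is where the PCE truncation error enters.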
Abstract:
We study the absorption spectra and two-photon absorption coefficients of expanded porphyrins (EPs) by the density matrix renormalization group (DMRG) technique. We employ the Pariser-Parr-Pople (PPP) Hamiltonian, which includes long-range electron-electron interactions. We find that in the 4n+2 EPs there are two prominent low-lying one-photon excitations, while in 4n EPs there is only one such excitation. We also find that 4n+2 EPs have large two-photon absorption cross sections compared to 4n EPs. The charge-density rearrangement in the one-photon excited state occurs mostly at the pyrrole nitrogen site and at the meso carbon sites; in the two-photon states it occurs mostly at the aza-ring sites. In the one-photon state, the C-C bond lengths in the aza rings show a tendency to become uniform. In the two-photon state, the bond distortions are on the C-N bonds of the pyrrole ring and the adjoining C-C bonds that connect the pyrrole ring to the aza or meso carbon sites.
Abstract:
The magnetic saw effect, induced by the Lorentz force generated by applying a series of electromagnetic (EM) pulses, can be used to cut a metallic component containing a pre-existing cut or crack. By combining a mechanical force with the Lorentz force, the cut can be propagated along any arbitrary direction in a controlled fashion, producing an `electromagnetic jigsaw': a novel tool-less, free-form manufacturing process particularly suitable for hard-to-cut metals. This paper presents a validation of this concept based on a simple analytical model, along with experiments on two materials, Pb foil and steel plate.
Abstract:
We describe the synthesis, crystal structure, and magnetic and electrochemical characterization of new rock-salt-related oxides of formula Li3M2RuO6 (M = Co, Ni). The M = Co oxide adopts the LiCoO2 (R-3m) structure, in which sheets of LiO6 and (Co2/Ru)O6 octahedra are alternately stacked along the c-direction. The M = Ni oxide also adopts a similar layered structure, related to Li2TiO3, in which partial mixing of Li and Ni/Ru atoms lowers the symmetry to monoclinic (C2/c). Magnetic susceptibility measurements reveal that in Li3Co2RuO6 the oxidation states of the transition metal ions are Co3+ (S=0), Co2+ (S=1/2), and Ru4+ (S=1), all in low-spin configuration, and that at 10 K the material orders antiferromagnetically. The analogous Li3Ni2RuO6 presents ferrimagnetic behavior with a Curie temperature of 100 K. The differences in magnetic behavior are explained in terms of differences in crystal structure. The electrochemical studies correlate well with both the magnetic properties and the crystal structure. Li-transition metal intermixing may be at the origin of the more impeded oxidation of Li3Ni2RuO6 compared to Li3Co2RuO6. Interestingly, high first-charge capacities (between ca. 160 and 180 mAh g(-1)), corresponding to ca. 2/3 of the theoretical capacity, are reached, although in both cases capacity retention and cyclability are not good enough to consider these materials as alternatives to LiCoO2.
Abstract:
The algebraic formulation for linear network coding in acyclic networks in which each link has an integer delay is well known. Under this formulation, for a given set of connections over an arbitrary acyclic network with integer link delays, the output symbols at the sink nodes at any given time instant are Fq-linear combinations of the input symbols across different generations, where Fq denotes the field over which the network operates. We use the finite-field discrete Fourier transform (DFT) to convert the output symbols at the sink nodes at any given time instant into Fq-linear combinations of the input symbols generated during the same generation. We call this transforming the acyclic network with delay into n instantaneous networks (for n sufficiently large). We show that, under certain conditions, there exists a network code satisfying the sink demands in the usual (non-transform) approach if and only if there exists a network code satisfying the sink demands in the transform approach. Furthermore, assuming time-invariant local encoding kernels, we show that the transform method, employed with alignment strategies, can achieve half the rate corresponding to the individual source-destination min-cut (assumed equal to 1) for some classes of three-source, three-destination multiple-unicast networks with delays when the zero-interference condition is not satisfied.
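A hedged sketch of the finite-field DFT underlying the transform approach, over the toy field GF(5) (the field, length, and input sequence are assumptions for illustration; the rule is that the transform length n must divide q - 1 so a primitive n-th root of unity exists):

```python
def ff_dft(x, omega, p):
    """DFT over GF(p): X_k = sum_j x_j * omega^(j*k) mod p, where omega is a
    primitive n-th root of unity in GF(p) (n must divide p - 1)."""
    n = len(x)
    return [sum(x[j] * pow(omega, j * k, p) for j in range(n)) % p
            for k in range(n)]

def ff_idft(X, omega, p):
    """Inverse DFT over GF(p), with inverses via Fermat's little theorem."""
    n = len(X)
    n_inv = pow(n, p - 2, p)
    omega_inv = pow(omega, p - 2, p)
    return [(n_inv * sum(X[k] * pow(omega_inv, j * k, p) for k in range(n))) % p
            for j in range(n)]

# GF(5): 4 divides 5 - 1, and omega = 2 has order 4 (2, 4, 3, 1 mod 5).
x = [1, 3, 0, 2]
X = ff_dft(x, 2, 5)
assert ff_idft(X, 2, 5) == x  # the transform is invertible, losing nothing
```

In the network setting, the same invertibility is what lets symbols mixed across generations be disentangled into per-generation ("instantaneous") combinations.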
Abstract:
Visual search in real life involves complex displays with a target among multiple types of distracters, but in the laboratory, it is often tested using simple displays with identical distracters. Can complex search be understood in terms of simple searches? This link may not be straightforward if complex search has emergent properties. One such property is linear separability, whereby search is hard when a target cannot be separated from its distracters using a single linear boundary. However, evidence in favor of linear separability is based on testing stimulus configurations in an external parametric space that need not be related to their true perceptual representation. We therefore set out to assess whether linear separability influences complex search at all. Our null hypothesis was that complex search performance depends only on classical factors such as target-distracter similarity and distracter homogeneity, which we measured using simple searches. Across three experiments involving a variety of artificial and natural objects, differences between linearly separable and nonseparable searches were explained using target-distracter similarity and distracter heterogeneity. Further, simple searches accurately predicted complex search regardless of linear separability (r = 0.91). Our results show that complex search is explained by simple search, refuting the widely held belief that linear separability influences visual search.
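The notion of linear separability used here can be made concrete with a perceptron check on hypothetical 2-D stimulus coordinates (a sketch, not the authors' analysis): the perceptron provably converges when target and distracters are linearly separable, and never converges when they are not.

```python
def linearly_separable(target, distracters, max_epochs=1000):
    """Check whether a target point can be split from its distracters by a
    single linear boundary, using a perceptron with a bias term. Convergence
    within max_epochs indicates separability; non-convergence is taken as
    non-separability (a practical cutoff for this toy check)."""
    data = [(target, 1)] + [(d, -1) for d in distracters]
    w = [0.0] * (len(target) + 1)  # last entry is the bias weight
    for _ in range(max_epochs):
        errors = 0
        for x, y in data:
            xb = list(x) + [1.0]
            if y * sum(wi * xi for wi, xi in zip(w, xb)) <= 0:
                w = [wi + y * xi for wi, xi in zip(w, xb)]
                errors += 1
        if errors == 0:
            return True
    return False

# Target outside the distracter pair: separable by a single line.
assert linearly_separable((2.0, 2.0), [(0.0, 0.0), (1.0, 1.0)])
# Target midway between two distracters on a line: no single line separates it.
assert not linearly_separable((0.5, 0.5), [(0.0, 0.0), (1.0, 1.0)])
```

The abstract's point is that this geometric property, measured in an external stimulus space, turned out not to predict search difficulty once similarity and heterogeneity were controlled.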