Abstract:
During a lightning strike to ground or to a nearby structure, currents are induced in all conducting structures, including tall towers. Compared with a direct strike, these induced currents are of much lower amplitude but occur far more frequently. Quantitative knowledge of these induced currents is therefore of interest for instrumented and communication towers. A preliminary analysis of their characteristics was reported in an earlier work [1], which simplified the problem by neglecting the induced charge on the tower as well as the contribution from the upward connecting leader. This work aims to make further progress by considering all the essential aspects in ascertaining the induced currents. The field produced by the developing return stroke is determined using a macro-physical return-stroke model, and the induced currents are evaluated with an in-house time-domain numerical electromagnetic code, suitably modified to incorporate the dynamics of the upward leader.
Abstract:
UHV power transmission lines have a high probability of shielding failure owing to their greater height, larger exposure area and high operating voltage. Upward-leader inception and propagation is an integral part of lightning shielding-failure analysis and needs to be studied in detail. In this paper, a model for lightning attachment is proposed based on the present knowledge of lightning physics. Leader inception is modeled from the corona charge present near the conductor region, and the propagation model is based on the correlation between the lightning-induced voltage on the conductor and the voltage drop along the upward-leader channel. The inception model is compared with previous inception models, and the results obtained with the present and previous models are comparable. Lightning striking distances (final jump) for various return-stroke currents were computed for different conductor heights. The computed striking distances showed good correlation with the values calculated using the equation proposed by the IEEE working group for the applicable conductor heights of up to 8 m. The model is applied to a 1200 kV AC power transmission line, and inception of the upward leader is analyzed for this configuration.
Abstract:
A lightning strike to an instrumented or communication tower can be a source of electromagnetic disturbance to the connected systems. Long cables running along these towers can pick up significant induction on their sheath/core, which then couples to the connected equipment. A quantitative analysis of this situation requires a suitable theoretical treatment. Owing to the dominance of the transverse magnetic mode during the fast-rising portion of the stroke current, which is the period of significant induction, a full-wave solution based on Maxwell's equations is necessary. Because of the large geometric aspect ratio of tower lattice elements, and to make a numerical solution feasible, the thin-wire formulation of the electric-field integral equation is generally adopted. However, the classical thin-wire formulation cannot handle non-cylindrical conductors such as tower lattice elements, nor the proximity of other conductors. The present work investigates further a recently proposed method for handling such situations and optimizes the numerical solution approach.
Abstract:
For a multilayered specimen, the back-scattered signal in frequency-domain optical coherence tomography (FDOCT) is expressible as a sum of cosines, each corresponding to a change of refractive index in the specimen. Each cosine represents a peak in the reconstructed tomogram. We consider a truncated cosine-series representation of the signal, with the constraint that the coefficients in the basis expansion be sparse. An l2 (sum of squared errors) data term is combined with an l1 (sum of absolute values) constraint on the coefficients. The optimization problem is solved using Weiszfeld's iteratively reweighted least-squares (IRLS) algorithm. On real FDOCT data, the strong l1 penalty yields improved results over the standard reconstruction technique, with lower background measurement noise and fewer artifacts. Previous sparse tomogram-reconstruction techniques in the literature proposed collecting sparse samples, necessitating a change in the data-capture process conventionally used in FDOCT; the IRLS-based method proposed in this paper does not suffer from this drawback.
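The core computation, an l2 data fit with an l1 penalty solved by IRLS, can be sketched as follows. This is a minimal illustration: the toy cosine dictionary, the penalty weight, and the signal sizes are assumptions for demonstration, not the authors' actual FDOCT setup.

```python
import numpy as np

def irls_l1(A, y, lam=0.5, n_iter=50, eps=1e-8):
    """Minimize ||y - A c||_2^2 + lam * ||c||_1 by iteratively
    reweighted least squares: each pass replaces the l1 term by a
    weighted l2 term with weights lam / (|c_i| + eps)."""
    c = np.linalg.lstsq(A, y, rcond=None)[0]      # plain least-squares init
    for _ in range(n_iter):
        W = np.diag(lam / (np.abs(c) + eps))      # reweighting of the penalty
        c = np.linalg.solve(A.T @ A + W, A.T @ y)
    return c

# Toy "interference signal": two active cosines out of a dictionary of 20.
np.random.seed(0)
k = np.arange(256)
freqs = np.linspace(0.01, 0.2, 20)
A = np.cos(2 * np.pi * np.outer(k, freqs))        # cosine dictionary
c_true = np.zeros(20)
c_true[3], c_true[11] = 1.0, 0.5
y = A @ c_true + 0.01 * np.random.randn(256)
c_hat = irls_l1(A, y)
```

The sparsity penalty drives the small spurious coefficients toward zero, which is what suppresses background noise and artifacts in the reconstructed tomogram.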
Abstract:
We address the problem of separating a speech signal into its excitation and vocal-tract filter components, which falls within the framework of blind deconvolution. Typically, the excitation in the case of voiced speech is assumed to be sparse and the vocal-tract filter stable. We develop an alternating lp-l2 projections algorithm (ALPA) to perform deconvolution under these constraints. The algorithm is iterative and alternates between two solution spaces. The initialization is based on the standard linear-prediction decomposition of a speech signal into an autoregressive filter and a prediction residue. In every iteration, a sparse excitation is estimated by optimizing an lp-norm-based cost, and the vocal-tract filter is obtained as the solution to a standard least-squares minimization problem. We validate the algorithm on voiced segments of natural speech and show applications to epoch estimation. We also present comparisons with state-of-the-art techniques and show that ALPA gives a sparser, impulse-like excitation, where the impulses directly denote the epochs, or instants of significant excitation.
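The alternation can be sketched on a toy AR model. Everything here is an illustrative assumption: the elementwise IRLS shrinkage used for the lp step, the parameter values, and the synthetic impulse-train signal stand in for the paper's actual ALPA formulation.

```python
import numpy as np

def alpa_sketch(s, order=2, p=0.8, lam=0.1, n_iter=20, eps=1e-6):
    """Alternate between (1) an lp step that sparsifies the excitation
    via elementwise IRLS shrinkage and (2) an l2 step that refits the
    AR (vocal-tract) coefficients by least squares."""
    n = len(s)
    # Lagged-signal matrix: column k holds s delayed by k+1 samples.
    S = np.column_stack([np.r_[np.zeros(k + 1), s[:n - k - 1]] for k in range(order)])
    a = np.linalg.lstsq(S, s, rcond=None)[0]          # linear-prediction init
    e = s - S @ a                                     # prediction residue
    for _ in range(n_iter):
        w = lam * p * (np.abs(e) + eps) ** (p - 2)    # IRLS weights for the lp cost
        e = (s - S @ a) / (1.0 + w)                   # sparse excitation estimate
        a = np.linalg.lstsq(S, s - e, rcond=None)[0]  # least-squares filter refit
    return a, e

# Toy example: a sparse impulse train driving a known AR(2) filter.
a_true = np.array([1.2, -0.6])
e_true = np.zeros(400)
e_true[::50] = 1.0
s = np.zeros(400)
for t in range(400):
    s[t] = e_true[t] + sum(a_true[k] * s[t - 1 - k]
                           for k in range(2) if t - 1 - k >= 0)
a_hat, e_hat = alpa_sketch(s)
```

On this toy signal, the surviving entries of `e_hat` sit exactly at the impulse locations, which is the impulse-like excitation that makes epoch estimation direct.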
Abstract:
In big-data image/video analytics, we encounter the problem of learning an overcomplete dictionary for sparse representation from a large training dataset that cannot be processed at once because of storage and computational constraints. To tackle dictionary learning in such scenarios, we propose an algorithm that exploits the inherent clustered structure of the training data and makes use of a divide-and-conquer approach. The fundamental idea is to partition the training dataset into smaller clusters and learn a local dictionary for each cluster. The local dictionaries are then merged into a global dictionary by solving another dictionary-learning problem on the atoms of the locally trained dictionaries. We refer to this as the split-and-merge algorithm. We show that the proposed algorithm is efficient in memory usage and computational complexity, and performs on par with the standard learning strategy, which operates on the entire dataset at once. As an application, we consider image denoising and present a comparative analysis with standard techniques that use the entire database at a time, in terms of training and denoising performance. We observe that the split-and-merge algorithm yields a remarkable reduction in training time without significantly affecting denoising performance.
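The split-and-merge flow can be illustrated with a deliberately simplified stand-in for the dictionary learner: here each "dictionary" is just the leading singular vectors of its data, and the split is a small k-means with farthest-point seeding. All function names, parameters, and the toy data are illustrative assumptions, not the paper's learner.

```python
import numpy as np

def learn_dict(X, n_atoms):
    """Stand-in dictionary learner: leading left singular vectors
    (the paper would run a full dictionary-learning step here)."""
    U, _, _ = np.linalg.svd(X, full_matrices=False)
    return U[:, :n_atoms]

def split_and_merge(X, n_clusters, local_atoms, global_atoms, seed=0):
    """Split the training columns into clusters, learn a local
    dictionary per cluster, then merge by learning a global
    dictionary on the pooled local atoms."""
    rng = np.random.default_rng(seed)
    d, n = X.shape
    # -- split: k-means with farthest-point seeding
    centers = [X[:, rng.integers(n)]]
    for _ in range(n_clusters - 1):
        d2 = np.min([((X - c[:, None]) ** 2).sum(0) for c in centers], axis=0)
        centers.append(X[:, int(np.argmax(d2))])
    centers = np.stack(centers, axis=1)
    for _ in range(10):
        d2 = ((X[:, :, None] - centers[:, None, :]) ** 2).sum(axis=0)
        labels = np.argmin(d2, axis=1)
        for c in range(n_clusters):
            if np.any(labels == c):
                centers[:, c] = X[:, labels == c].mean(axis=1)
    # -- learn local dictionaries per cluster
    local = [learn_dict(X[:, labels == c], local_atoms)
             for c in range(n_clusters) if np.sum(labels == c) >= local_atoms]
    # -- merge: another "dictionary learning" pass on the pooled atoms
    pooled = np.hstack(local)
    return learn_dict(pooled, min(global_atoms, pooled.shape[1]))

# Toy data: two clusters, each near a different low-dimensional subspace.
rng = np.random.default_rng(1)
d = 20
B1, B2 = rng.standard_normal((d, 5)), rng.standard_normal((d, 5))
X1 = 10.0 + B1 @ rng.standard_normal((5, 100))
X2 = -10.0 + B2 @ rng.standard_normal((5, 100))
X = np.hstack([X1, X2]) + 0.01 * rng.standard_normal((d, 200))
D = split_and_merge(X, n_clusters=2, local_atoms=6, global_atoms=12)
```

The key point is the merge step: the global dictionary is learned from the pooled local atoms (a few hundred vectors at most), never from the full dataset, which is where the memory and training-time savings come from.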
Abstract:
Multiplicative noise distorts a signal far more severely than additive noise. In this paper, we address the problem of suppressing multiplicative noise in one-dimensional signals. We propose a denoising algorithm based on minimization of an unbiased estimator (MURE) of the mean-square error (MSE), and derive an expression for this unbiased estimate. Denoising is carried out in the wavelet domain (soft thresholding), with the time-domain MURE as the criterion: the parameters of the thresholding function are obtained by minimizing MURE. We show that the MURE-optimal parameters are very close to the parameters that are optimal under the oracle MSE. Experiments show that the SNR improvement of the proposed algorithm is competitive with a state-of-the-art method.
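The principle of tuning a threshold by minimizing an unbiased risk estimate, without access to the clean signal, is easiest to see in its well-known additive-Gaussian counterpart, SURE-based soft thresholding. The sketch below is that additive analogue for intuition only; it is not the paper's multiplicative-noise MURE.

```python
import numpy as np

def sure_soft(y, t, sigma):
    """Stein's unbiased risk estimate of the MSE of soft-thresholding
    y at threshold t, under additive N(0, sigma^2) noise."""
    n = len(y)
    return (n * sigma ** 2
            - 2 * sigma ** 2 * np.sum(np.abs(y) <= t)
            + np.sum(np.minimum(y ** 2, t ** 2)))

def sure_shrink(y, sigma):
    """Soft-threshold with the threshold minimizing the unbiased risk
    estimate -- no access to the clean signal is needed."""
    cands = np.abs(y)
    t = cands[np.argmin([sure_soft(y, c, sigma) for c in cands])]
    return np.sign(y) * np.maximum(np.abs(y) - t, 0.0), t

# Sparse clean signal in additive unit-variance Gaussian noise.
rng = np.random.default_rng(0)
x = np.zeros(1000)
x[::50] = 5.0
y = x + rng.standard_normal(1000)
x_hat, t_opt = sure_shrink(y, 1.0)
```

The paper's contribution is the analogous unbiased MSE estimator for the multiplicative-noise model, which then plays the role that SURE plays here.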
Abstract:
Local polynomial approximation of data is one approach to signal denoising. Savitzky-Golay (SG) filters are finite-impulse-response kernels that are convolved with the data to produce a polynomial approximation for a chosen set of filter parameters. When the noise follows Gaussian statistics, minimizing the mean-squared error (MSE) between the noisy signal and its polynomial approximation is optimal in the maximum-likelihood (ML) sense, but the MSE criterion is not optimal under non-Gaussian noise. In this paper, we robustify the SG filter for applications involving noise with a heavy-tailed distribution. The optimal filtering criterion is achieved by l1-norm minimization of the error through the iteratively reweighted least-squares (IRLS) technique. Interestingly, at every iteration we solve a weighted SG filter by minimizing an l2 norm, yet the process converges to the l1-minimized output. The results show consistent improvement over the standard SG filter.
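A sketch of the idea follows; the window length, polynomial degree, and IRLS damping constant are illustrative choices. Each IRLS pass is an ordinary weighted least-squares polynomial fit, i.e. a weighted SG filter, and the passes move the local fit toward the l1 optimum.

```python
import numpy as np

def robust_sg(x, window=11, degree=2, n_iter=10, eps=1e-6):
    """Robust (l1) Savitzky-Golay smoothing via IRLS.
    n_iter=0 reduces to the standard (l2) SG filter."""
    half = window // 2
    t = np.arange(-half, half + 1)
    V = np.vander(t, degree + 1, increasing=True)    # local polynomial basis
    xp = np.pad(x, half, mode='edge')
    y = np.empty(len(x))
    for i in range(len(x)):
        seg = xp[i:i + window]
        c = np.linalg.lstsq(V, seg, rcond=None)[0]   # l2 fit (standard SG)
        for _ in range(n_iter):
            w = 1.0 / (np.abs(seg - V @ c) + eps)    # IRLS weights for l1
            c = np.linalg.solve(V.T @ (V * w[:, None]), V.T @ (w * seg))
        y[i] = c[0]                                  # fitted value at window center
    return y

# Smooth signal hit by heavy-tailed (sparse, large) noise.
rng = np.random.default_rng(0)
n = 200
clean = np.sin(2 * np.pi * np.arange(n) / 100)
noisy = clean + 0.05 * rng.standard_normal(n)
hits = rng.choice(n, 20, replace=False)
noisy[hits] += 3.0 * rng.choice([-1.0, 1.0], 20)
mse_l1 = np.mean((robust_sg(noisy) - clean) ** 2)
mse_l2 = np.mean((robust_sg(noisy, n_iter=0) - clean) ** 2)
```

On such outlier-contaminated data, the l1-weighted fits largely ignore the isolated large errors that the plain l2 SG filter smears across the window.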
Abstract:
The Exact Cover problem takes a universe U of n elements, a family F of m subsets of U and a positive integer k, and decides whether there exists a subfamily (set cover) F' of size at most k such that each element is covered by exactly one set. The Unique Cover problem takes the same input and decides whether there is a subfamily F' of F such that at least k of the elements F' covers are covered uniquely (by exactly one set). Both problems are known to be NP-complete. In the parameterized setting, with parameter k, Exact Cover is W[1]-hard, while Unique Cover is FPT but known not to admit a polynomial kernel under standard complexity-theoretic assumptions. In this paper, we investigate these two problems under the assumption that every set satisfies a given geometric property Pi. Specifically, we take the universe to be a set of n points in a real space R^d, d a positive integer. When d = 2, we consider the problem when Pi requires all sets to be unit squares or lines; when d > 2, we consider the problem where Pi requires all sets to be hyperplanes in R^d. These special versions of the problems are also known to be NP-complete. When parameterized by k, Unique Cover has a polynomial-size kernel for all the above geometric versions. Exact Cover turns out to be W[1]-hard for squares, but FPT for lines and hyperplanes. Further, we consider the Unique Set Cover problem, which takes the same input and decides whether there is a set cover that covers at least k elements uniquely. To the best of our knowledge, this is a new problem, and we show that it is NP-complete (even for lines). In fact, the problem turns out to be W[1]-hard in the abstract setting when parameterized by k; however, restricted to the lines and hyperplanes versions, we obtain FPT algorithms.
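For intuition, the Exact Cover decision itself is easy to state in code as a brute-force check over subfamilies, exponential in k; the paper's contribution is doing much better than this for the geometric versions. The function name and encoding are illustrative.

```python
from itertools import combinations

def exact_cover(universe, family, k):
    """Is there a subfamily of at most k sets covering every element
    of the universe exactly once? A subfamily works iff its sets are
    pairwise disjoint (their sizes sum to |U|) and their union is U."""
    U = frozenset(universe)
    F = [frozenset(s) for s in family]
    for r in range(1, k + 1):
        for combo in combinations(F, r):
            if (sum(len(s) for s in combo) == len(U)
                    and frozenset().union(*combo) == U):
                return True
    return False
```

Checking "sizes sum to |U| and union is U" is equivalent to requiring every element to be covered exactly once, which is what distinguishes Exact Cover from ordinary Set Cover.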
Abstract:
We give an overview of recent results and techniques in parameterized algorithms for graph modification problems.
Abstract:
Oversmoothing of speech-parameter trajectories is one cause of quality degradation in HMM-based speech synthesis. Various methods have been proposed to overcome this effect, the most recent being global variance (GV) and the modulation-spectrum-based post-filter (MSPF). However, a significant quality gap remains between natural and synthesized speech. In this paper, we propose a two-fold post-filtering technique to alleviate, to a certain extent, the oversmoothing of the spectral and excitation parameter trajectories of HMM-based synthesis. For the spectral parameters, we propose a sparse-coding-based post-filter that matches the trajectories of synthetic speech to those of natural speech; for the excitation trajectory, we introduce a perceptually motivated post-filter. Experimental evaluations show quality improvement over existing methods.
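As a baseline illustration, the global-variance idea that this paper improves upon amounts to rescaling each synthesized parameter trajectory so that its variance matches natural-speech statistics. This is only a sketch of that baseline; the paper's sparse-coding and perceptual post-filters go further.

```python
import numpy as np

def gv_postfilter(traj, target_var):
    """Global-variance-style post-filter: rescale each parameter
    trajectory about its temporal mean so its variance matches the
    target (natural-speech) variance, countering oversmoothing."""
    mu = traj.mean(axis=0)
    scale = np.sqrt(target_var / traj.var(axis=0))
    return mu + scale * (traj - mu)

# Oversmoothed toy trajectories: variance well below the "natural" target.
rng = np.random.default_rng(0)
traj = 0.3 * rng.standard_normal((100, 3))    # frames x parameters
target_var = np.array([1.0, 2.0, 0.5])
sharpened = gv_postfilter(traj, target_var)
```

The transform leaves each trajectory's mean untouched and restores only its dynamic range, which is exactly the statistic that HMM averaging flattens.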
Abstract:
Quantum cellular automata (QCA) is a new technology at the nanometer scale and has been considered one of the alternatives to CMOS technology. In this paper, we describe the design and layout of a serial memory and a parallel memory, showing the layout of the individual memory cells. Assuming that cells separated by 10 nm can be fabricated, memory capacities of over 1.6 Gbit/cm2 can be achieved. The proposed memories were simulated using QCADesigner, a layout and simulation tool for QCA. During the design, we sought to reduce both the number of cells and the area, which is found to be 86.16 sq mm and 0.12 nm2 with the QCA-based memory cell. We also achieved a 40% increase in efficiency. These circuits are building blocks of nano-processors and help us understand the nano-devices of the future.
Abstract:
Speech enhancement in stationary noise is addressed within the ideal channel-selection framework. To estimate the binary mask, we propose classifying each time-frequency (T-F) bin of the noisy signal as speech or noise using Discriminative Random Fields (DRF). The DRF function contains two terms: an enhancement function and a smoothing term. On each T-F bin, we use an enhancement function based on a likelihood-ratio test for speech presence, while an Ising model serves as the smoothing function, enforcing spectro-temporal continuity in the estimated binary mask. Over successive iterations, the smoothing function is found to reduce musical noise compared with using the enhancement function alone. The binary mask is inferred from the noisy signal using the Iterated Conditional Modes (ICM) algorithm. Sentences from the NOIZEUS corpus are evaluated from 0 dB to 15 dB signal-to-noise ratio (SNR) in four additive-noise settings: white Gaussian, car, street and pink noise. The reconstructed speech is evaluated in terms of average segmental SNR, Perceptual Evaluation of Speech Quality (PESQ) and Mean Opinion Score (MOS).
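The ICM inference on an Ising-smoothed binary mask can be sketched as follows. The coupling constant, the 4-neighbourhood, and the toy log-likelihood ratios are illustrative assumptions standing in for the paper's DRF terms.

```python
import numpy as np

def icm_mask(llr, beta=1.5, n_iter=10):
    """Iterated Conditional Modes for a binary T-F mask with an Ising
    smoothing term: a bin is labelled speech when its log-likelihood
    ratio plus beta times the sum of its neighbours' +/-1 spins is
    positive. Isolated disagreements with the neighbourhood get
    flipped, which is what suppresses musical noise."""
    m = (llr > 0).astype(int)          # enhancement-only initialization
    F, T = llr.shape
    for _ in range(n_iter):
        changed = False
        for i in range(F):
            for j in range(T):
                s = 0
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < F and 0 <= nj < T:
                        s += 2 * m[ni, nj] - 1      # neighbour as a +/-1 spin
                new = int(llr[i, j] + beta * s > 0)
                changed |= new != m[i, j]
                m[i, j] = new
        if not changed:
            break
    return m

# Toy log-likelihood ratios: a speech block with a few flipped bins.
llr = -2.0 * np.ones((10, 10))
llr[2:8, 2:8] = 2.0                    # speech region
clean = (llr > 0).astype(int)
llr[4, 4] = -2.0                       # hole inside the block
llr[0, 9] = 2.0                        # isolated speckle in the background
mask = icm_mask(llr)
```

ICM fills the hole inside the speech block and removes the isolated background speckle, the T-F analogue of the musical-noise artifacts that an enhancement-only mask produces.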
Abstract:
In this contribution, Carlos Galli engages with the valuable paper by Cardinal Walter Kasper, summarizing the reception of conciliar ecclesiology from Argentina. The dialogue centers on four points: the liberation-freedom dialectic in the Church's dialogue with modernity; the Argentine reception of his ecclesiology, considering the relations between the faith of the People of God and the cultures of the peoples; and the radical evangelical reform arising from the paradigm of the missionary conversion of the whole People of God and of everyone within the People of God. The genre of the text combines interpretive commentary with a theological dialogue that is more constructive than critical.
Abstract:
Beginning with the first vernacular literary version of the "Tale of the Maiden Without Hands" ("Cuento de la doncella sin manos"), written by Philippe de Remi in the thirteenth century, medieval literature continually reworked the story across Western Europe. From the period spanning the thirteenth to the seventeenth century, at least thirty-four written versions survive in the Romance and Germanic spheres alone. There is also an Arabic tradition of the tale, probably of Semitic origin, which, according to some authors, constitutes an independent narrative branch. In the oral tradition the story has survived to the present day in various countries around the world, including South America, particularly Brazil, Chile and Argentina. The folkloric legacy in Europe, first collected and set down in writing by the Brothers Grimm in 1812, certainly shows numerous points of contact with the American versions. However, an even closer link has been established between the latter and the Cuentos populares españoles collected by Aurelio Espinosa in 1923, as well as with one of the three versions from the Arab world. After outlining the history of the corpus and examining the points of contact between the European and American traditions, we focus on an analysis of the South American versions, particularly those collected in Argentina.