44 results for "Real and imaginary journeys"
Abstract:
When a uniform flow of any nature is interrupted, the readjustment of the flow results in concentrations and rarefactions, so that the peak value of the flow parameter is higher than an elementary computation would suggest. When stress flow in a structure is interrupted, there are stress concentrations. These are generally localized and often large in relation to the values indicated by simple equilibrium calculations. With the advent of the industrial revolution, dynamic and repeated loading of materials became commonplace in engine parts and fast-moving vehicles, leading to serious fatigue failures arising from stress concentrations. Many metal-forming processes, fabrication techniques and weak-link safety systems also benefit substantially from the intelligent use or avoidance, as appropriate, of stress concentrations. As a result, over the last 80 years, the study and evaluation of stress concentrations has been a primary objective of solid mechanics. Exact mathematical analysis of stress concentrations in finite bodies presents considerable difficulty for all but a few problems of infinite fields, concentric annuli and the like, treated under the presumption of small-deformation, linear elasticity. A whole series of techniques has been developed to deal with different classes of shapes and domains, causes and sources of concentration, material behaviour, phenomenological formulation, etc. These include real and complex functions, conformal mapping, transform techniques, integral equations, finite differences and relaxation, and, more recently, finite element methods. With the advent of large high-speed computers, the development of finite element concepts and a good understanding of functional analysis, it is now possible, in principle, to obtain economical and satisfactory solutions to a whole range of concentration problems by intelligently combining theory and computer application. 
An example is the hybridization of continuum concepts with computer-based finite element formulations. This new situation also makes possible a more direct approach to design, which is the primary purpose of most engineering analyses. The trend appears clear: the computer will shape theory, analysis and design.
Abstract:
Expressions for the phase change Φ suffered by microwaves transmitted through an artificial dielectric composed of metallic discs arranged in a three-dimensional array have been derived using three approaches: (i) molecular theory, (ii) electromagnetic theory and (iii) transmission line theory. The phase change depends on the distance t that the wave traverses inside the dielectric and also on the centre-to-centre spacing d of any two adjacent discs in the three principal directions. Molecular theory indicates Φ is an increasing function of t, whereas the other two theories indicate Φ is an oscillatory function of t. Transmission line theory further shows that Φ can be real or imaginary depending on t. Experimental values of Φ as a function of t have been obtained with a microwave (3.2 cm wavelength) interferometer for two dielectrics with d of 1.91 cm and 2.22 cm, respectively.
Abstract:
We study charge pumping when a combination of static potentials and potentials oscillating with a time period T is applied in a one-dimensional system of noninteracting electrons. We consider both an infinite system using the Dirac equation in the continuum approximation and a periodic ring with a finite number of sites using the tight-binding model. The infinite system is taken to be coupled to reservoirs on the two sides which are at the same chemical potential and temperature. We consider a model in which oscillating potentials help the electrons to access a transmission resonance produced by the static potentials and show that nonadiabatic pumping violates the simple sin phi rule which is obeyed by adiabatic two-site pumping. For the ring, we do not introduce any reservoirs, and we present a method for calculating the current averaged over an infinite time using the time evolution operator U(T) assuming a purely Hamiltonian evolution. We analytically show that the averaged current is zero if the Hamiltonian is real and time-reversal invariant. Numerical studies indicate another interesting result, namely, that the integrated current is zero for any time dependence of the potential if it is applied to only one site. Finally we study the effects of pumping at two sites on a ring at resonant and nonresonant frequencies, and show that the pumped current has different dependences on the pumping amplitude in the two cases.
Abstract:
In rapid parallel magnetic resonance imaging, image reconstruction is challenging. Here, a novel image reconstruction technique for data acquired along any general trajectory in a neural network framework, called "Composite Reconstruction And Unaliasing using Neural Networks" (CRAUNN), is proposed. CRAUNN is based on the observation that the nature of aliasing remains unchanged whether the undersampled acquisition contains only low frequencies or includes high frequencies too. The transformation needed to reconstruct the alias-free image from the aliased coil images is learnt using acquisitions consisting of densely sampled low frequencies. Neural networks serve as the machine-learning tool for learning this transformation, in order to obtain the desired alias-free image for actual acquisitions containing sparsely sampled low as well as high frequencies. CRAUNN operates in the image domain, does not require explicit coil sensitivity estimation, and is independent of the sampling trajectory, so it can be applied to arbitrary trajectories. As a pilot trial, the technique is first applied to Cartesian trajectory-sampled data. Experiments performed using radial and spiral trajectories on real and synthetic data illustrate the performance of the method. The reconstruction errors depend on the acceleration factor as well as the sampling trajectory; higher acceleration factors can be obtained when radial trajectories are used. Comparisons against existing techniques show that CRAUNN performs on par with the state of the art. Acceleration factors of up to 4, 6 and 4 are achieved in the Cartesian, radial and spiral cases, respectively. (C) 2010 Elsevier Inc. All rights reserved.
Abstract:
Multiple beam interference of light in a wedge is considered when the wedge is filled with an absorbing medium. The aim is to examine a method that may give values of both the real and the imaginary parts of the refractive index of the absorbing medium. We propose here a method to determine these quantities from simple techniques like fringe counting and interferometry, by using as the incident wave either a single Gaussian beam or two parallel Gaussian beams.
Abstract:
Results of frequency- and temperature-dependent dielectric measurements performed on the double perovskite Tb2NiMnO6 are presented. The real (ε1(f,T)) and imaginary (ε2(f,T)) parts of the dielectric permittivity show three plateaus, suggesting dielectric relaxations originating from the bulk, the grain boundaries and the sample-electrode interfaces, respectively. ε1(f,T) and ε2(f,T) are successfully simulated by an RC circuit model. The complex impedance plane, Z'-Z'', is simulated using a series network of a resistor R and a constant phase element. Through the analysis of ε(f,T) using the modified Debye model, two relaxation-time regimes separated by a characteristic temperature, T*, are identified. The temperature variation of R and C corresponding to the bulk, and of the parameter α from the modified Debye fit, lend support to this hypothesis. Interestingly, T* compares with the Griffiths temperature observed for this compound in magnetic measurements. Though these results cannot be interpreted as magnetoelectric coupling, the relationship between lattice and magnetism is markedly clear. We assume that the observed features have their origin in polar nanoregions arising from the inherent cationic defect structure of double perovskites. Copyright (C) EPLA, 2013
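The "modified Debye" analysis mentioned above is commonly written in the Cole-Cole form ε*(ω) = ε∞ + Δε / (1 + (iωτ)^(1−α)), where α = 0 recovers ideal Debye relaxation. A minimal sketch of this form (function name and parameter values are illustrative, not taken from the paper):

```python
import numpy as np

def cole_cole(omega, eps_inf, delta_eps, tau, alpha):
    """Modified Debye (Cole-Cole) permittivity:
    eps* = eps_inf + delta_eps / (1 + (i*omega*tau)**(1 - alpha)).
    alpha = 0 recovers the ideal Debye relaxation."""
    return eps_inf + delta_eps / (1.0 + (1j * omega * tau) ** (1.0 - alpha))

omega = np.logspace(-2, 2, 201) / 1e-3       # frequencies around tau = 1 ms
eps = cole_cole(omega, eps_inf=10.0, delta_eps=100.0, tau=1e-3, alpha=0.1)
eps1, eps2 = eps.real, -eps.imag             # real and imaginary parts
```

The broadening parameter α flattens the loss peak, which is what the fit to ε(f,T) extracts in each relaxation-time regime.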
Abstract:
Although various strategies have been developed for scheduling parallel applications with independent tasks, very little work exists on scheduling tightly coupled parallel applications in cluster environments. In this paper, we compare four strategies based on performance models of tightly coupled parallel applications for scheduling the applications on clusters. In addition to algorithms based on existing popular optimization techniques, we propose a new algorithm, called Box Elimination, that searches the space of performance-model parameters to determine the best schedule of machines. By means of real and simulation experiments, we evaluate the algorithms on single-cluster and multi-cluster setups. We show that our Box Elimination algorithm generates schedules up to 80% more efficient than those of the other algorithms, and that the execution times of its schedules are more robust against performance-modeling errors.
Abstract:
The superfluid state of fermion-antifermion fields developed in our previous papers is generalized to include higher orbital and spin states. In addition to single-particle excitations, the system is capable of having real and virtual bound or quasibound composite excitations which are akin to bosons of spin J^P equal to 0^-, 1^-, 2^+, etc. These pseudoscalar, vector, and tensor bosons can be massive or massless and provide the vehicles for strong, electromagnetic, weak, and gravitational interactions. The concept that the basic (unmanifest) fermion-antifermion interaction can lead to a multiplicity of manifest interactions seems to provide a basis for a unified field theory.
Abstract:
The moments of the real and absorptive parts of the antiproton optical potential are evaluated for the first time to study the geometries of the potentials at 180 MeV. The features revealed are, in general, comparable to the proton case despite the presence of strong annihilation, although a few interesting deviations from the proton case are also found.
Abstract:
This paper deals with haptic realism as it relates to the kinematic capabilities of the devices used to manipulate virtual objects in virtual assembly environments. Haptic realism implies realistic touch sensation: in the virtual world, all operations should be performed in the same way, and with the same level of accuracy, as in the real world. Achieving such realism requires a complete mapping between real- and virtual-world dimensions. Experiments are conducted to assess the kinematic capabilities of the device by comparing the dimensions of an object in the real and virtual worlds. Registered dimensions in the virtual world are found to be approximately 1.5 times those in the real world. The dimensional variations observed were a discrepancy due to the exoskeleton and a discrepancy due to the real and virtual hands. Experiments show that the exoskeleton discrepancy can be handled at either the hardware or the software level. A mathematical model is proposed to characterize the discrepancy between the real and virtual hands; this does not give a fixed value and cannot be removed by calibration. Further experiments determine how much compensation can be applied to achieve haptic realism.
Abstract:
In this paper we consider the process of discovering frequent episodes in event sequences. The most computationally intensive part of this process is counting the frequencies of a set of candidate episodes. We present two new frequency-counting algorithms, referred to as non-overlapped and non-interleaved frequency counts, for speeding up this part. They are based on directly counting suitable subsets of the occurrences of an episode, and hence differ from the frequency counts of Mannila et al. [1], which count the number of windows in which the episode occurs. Our new frequency counts offer a speed-up factor of 7 or more on real and synthetic datasets. We also show how the new frequency counts can be used when the events in episodes have time durations as well.
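For serial episodes, a non-overlapped count can be obtained with a single greedy left-to-right scan: once an occurrence completes, the recognizing automaton resets, so counted occurrences never share or interleave events. A minimal sketch under those assumptions (names invented here; no time-window or expiry constraints):

```python
def non_overlapped_count(sequence, episode):
    """Greedily count non-overlapped occurrences of a serial episode.

    `sequence` is a list of event types; `episode` is the ordered tuple
    of event types that must occur in order. Once a full occurrence is
    seen, the scan restarts, so counted occurrences never overlap.
    """
    pos = 0    # index of the next episode event we are waiting for
    freq = 0
    for event in sequence:
        if event == episode[pos]:
            pos += 1
            if pos == len(episode):   # full occurrence completed
                freq += 1
                pos = 0
    return freq

print(non_overlapped_count(list("ABCABXAB"), ("A", "B")))  # 3
```

Each event is examined once, so the count is linear in the sequence length, in contrast to windows-based counting, which revisits events once per sliding window.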
Abstract:
This paper presents an artificial feed-forward neural network (FFNN) approach for the assessment of power system voltage stability. A novel approach based on the input-output relation between real and reactive power, as well as the voltage vectors at generator and load buses, is used to train the neural network (NN). The inputs of the feed-forward network are generated from offline training data, under various simulated loading conditions, using a conventional voltage stability algorithm based on the L-index. The network is trained with the L-index as the target output for each of the system loads. Two separately trained NNs, corresponding to normal loading and contingency conditions, are investigated on a practical 367-node power system network. The performance of the trained artificial neural network (ANN) is also investigated under various voltage stability assessment conditions. Compared to the computationally intensive conventional benchmark software, near-accurate values of the L-index, and thus of the voltage profile, were obtained. The proposed algorithm is fast, robust and accurate, and can be used online to predict the L-indices of all power system buses. The ANN approach is also shown to be effective and computationally feasible for voltage stability assessment, as well as for potential enhancements within an overall energy management system for determining local and global stability indices.
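The L-index referred to above is commonly computed, following Kessel and Glavitsch, as L_j = |1 − Σ_i F_ji V_i / V_j| over generator buses i, with F = −Y_LL⁻¹ Y_LG from the partitioned bus admittance matrix; values approach 1 as a load bus nears voltage collapse. A minimal sketch (function names and the tiny two-bus example are assumptions, not the paper's system):

```python
import numpy as np

def l_indices(Y_LL, Y_LG, V_load, V_gen):
    """L-index of each load bus: L_j = |1 - sum_i F_ji * V_i / V_j|,
    with F = -inv(Y_LL) @ Y_LG. Values near 1 indicate proximity
    to voltage collapse; near 0 indicates a stable operating point."""
    F = -np.linalg.solve(Y_LL, Y_LG)      # load-to-generator participation
    return np.abs(1.0 - (F @ V_gen) / V_load)

# Two-bus example: one generator feeding one load through admittance y.
y = 1.0 - 5.0j                            # line admittance (p.u.), assumed
Y_LL = np.array([[y]])
Y_LG = np.array([[-y]])
V_gen = np.array([1.00 + 0.0j])
V_load = np.array([0.95 + 0.0j])
print(l_indices(Y_LL, Y_LG, V_load, V_gen))  # ≈ [0.0526]
```

In the paper's setup, such values would form the target vector the NN is trained to reproduce from power and voltage inputs.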
Abstract:
Exascale systems of the future are predicted to have a mean time between failures (MTBF) of less than one hour. Malleable applications, in which the number of processors on which an application executes can be changed during execution, can exploit their malleability to better tolerate high failure rates. We present AdFT, an adaptive fault tolerance framework for long-running malleable applications that maximizes application performance in the presence of failures. The AdFT framework includes cost models for evaluating the benefits of various fault tolerance actions, including checkpointing, live migration and rescheduling, and makes runtime decisions to dynamically select among these actions at different points of application execution. Simulations with real and synthetic failure traces show that our approach outperforms existing fault tolerance mechanisms for malleable applications, yielding up to 23% improvement in application performance, and is effective even for petascale systems and beyond.
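The abstract does not spell out AdFT's cost models; a classical baseline for the checkpointing component is Young's first-order approximation of the optimal checkpoint interval, τ ≈ √(2δM), for checkpoint cost δ and MTBF M. A sketch under that assumption (the numbers are illustrative):

```python
import math

def young_interval(checkpoint_cost_s, mtbf_s):
    """Young's first-order approximation of the optimal interval
    between checkpoints: tau = sqrt(2 * delta * MTBF)."""
    return math.sqrt(2.0 * checkpoint_cost_s * mtbf_s)

# Exascale-style numbers: 60 s to write a checkpoint, 1 h MTBF.
tau = young_interval(60.0, 3600.0)
print(round(tau))  # 657, i.e. checkpoint roughly every 11 minutes
```

A runtime framework would compare the expected overhead of such an interval against the costs of live migration or rescheduling before choosing an action.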
Abstract:
Image inpainting is the process of filling an unwanted region of an image marked by the user. It is used for restoring old paintings and photographs, removing red eyes from pictures, etc. In this paper, we propose an efficient inpainting algorithm that takes care of false edge propagation. We use the classical exemplar-based technique to compute the priority term for each patch. To ensure that the edge content of the nearest-neighbour patch, found by minimizing the L2 distance between patches, matches that of the target, we impose an additional constraint that the entropies of the patches be similar; the entropy of a patch acts as a good measure of its edge content. Additionally, we fill the image using overlapping patches to ensure smoothness in the output, and we use the structural similarity index as the measure of similarity between the ground truth and the inpainted image. Results of the proposed approach on a number of real and synthetic images show the effectiveness of our algorithm in removing objects as well as thin scratches or text written on an image. The approach is also shown to be robust to the shape of the manually selected target, and our results compare favourably with those obtained by existing techniques.
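The abstract does not give the exact entropy definition used as the edge-content measure; a plausible histogram-based version (names and bin count assumed) is:

```python
import numpy as np

def patch_entropy(patch, bins=32):
    """Shannon entropy of a patch's intensity histogram, a rough
    proxy for its edge/texture content (intensities in [0, 1])."""
    hist, _ = np.histogram(patch, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]                      # drop empty bins before the log
    return float(-(p * np.log2(p)).sum())

flat = np.full((8, 8), 0.5)                            # featureless patch
edge = np.hstack([np.zeros((8, 4)), np.ones((8, 4))])  # strong vertical edge
# patch_entropy(flat) is 0 bits; patch_entropy(edge) is 1 bit
```

A candidate patch that minimizes L2 distance but has a very different entropy from the target would be rejected under the paper's constraint, which is how false edges are kept from propagating.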