92 results for L10 - General
Abstract:
In this paper, we propose FeatureMatch, a generalised approximate nearest-neighbour field (ANNF) computation framework between a source and a target image. The proposed algorithm can estimate ANNF maps between any image pairs, not necessarily related. This generalisation is achieved through appropriate spatial-range transforms. To compute ANNF maps, global colour adaptation is applied as a range transform on the source image. Image patches from the pair of images are approximated using low-dimensional features, which are used along with a KD-tree to estimate the ANNF map. This ANNF map is further improved based on image coherency and spatial transforms. The proposed generalisation enables us to handle a wider range of vision applications, which have not been tackled using the ANNF framework. We illustrate two such applications, namely: 1) optic disk detection and 2) super resolution. The first application deals with medical imaging, where we locate optic disks in retinal images using a healthy optic disk image as the common target image. The second application deals with super resolution of synthetic images using a common source image as the dictionary. We make use of ANNF mappings in both these applications and show experimentally that our proposed approaches are faster and more accurate than state-of-the-art techniques.
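A minimal sketch of the KD-tree matching step described above, assuming 8x8 grayscale patches and PCA-based low-dimensional features with scikit-learn's KDTree (the paper's exact features and parameters may differ):

```python
# Illustrative ANNF sketch, not the authors' implementation.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KDTree

def extract_patches(img, size=8, stride=4):
    """Collect flattened grayscale patches and their top-left coordinates."""
    h, w = img.shape
    patches, coords = [], []
    for y in range(0, h - size + 1, stride):
        for x in range(0, w - size + 1, stride):
            patches.append(img[y:y+size, x:x+size].ravel())
            coords.append((y, x))
    return np.array(patches, dtype=np.float32), np.array(coords)

def annf_map(source, target, n_features=8):
    """Approximate nearest-neighbour field from source patches to target patches."""
    src, src_xy = extract_patches(source)
    tgt, tgt_xy = extract_patches(target)
    pca = PCA(n_components=n_features).fit(tgt)    # low-dimensional patch features
    tree = KDTree(pca.transform(tgt))              # KD-tree over target features
    _, idx = tree.query(pca.transform(src), k=1)   # nearest target patch per source patch
    return src_xy, tgt_xy[idx[:, 0]]               # source coord -> matched target coord
```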
Abstract:
Solder joints in electronic packages undergo thermo-mechanical cycling, resulting in nucleation of micro-cracks, especially at the solder/bond-pad interface, which may lead to fracture of the joints. The fracture toughness of a solder joint depends on material properties, process conditions and service history, as well as strain rate and mode-mixity. This paper reports on a methodology for determining the mixed-mode fracture toughness of solder joints with an interfacial starter-crack, using a modified compact mixed mode (CMM) specimen containing an adhesive joint. Expressions for the stress intensity factor (K) and the strain energy release rate (G) are developed using a combination of experiments and finite element (FE) analysis. In this methodology, (i) crack-length-dependent geometry factors for the modified CMM sample, f_1(a) and f_2(a), are first obtained via the crack-tip opening displacement (CTOD)-based linear extrapolation method under far-field mode I and mode II conditions, (ii) a master plot is generated to determine a_c, and (iii) K and G are computed to analyze the fracture behavior of the joints. The developed methodology was verified using J-integral calculations, and was also used to calculate experimental fracture toughness values of a few lead-free solder-Cu joints. (C) 2014 Elsevier Ltd. All rights reserved.
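For orientation only, a generic homogeneous, isotropic LEFM calculation of K, G, and mode-mixity from geometry factors; the paper's expressions for the interfacial modified-CMM specimen are more involved and are not reproduced here, and the geometry factors and elastic constants below are placeholders:

```python
# Generic LEFM illustration, not the paper's modified-CMM expressions.
import numpy as np

def mixed_mode_K_G(sigma, a, f1, f2, E=50e9, nu=0.35):
    """Stress intensity factors, energy release rate, and mode-mixity angle.
    E, nu are placeholder elastic constants; f1, f2 are placeholder geometry factors."""
    K1 = f1 * sigma * np.sqrt(np.pi * a)      # far-field mode I
    K2 = f2 * sigma * np.sqrt(np.pi * a)      # far-field mode II
    E_prime = E / (1.0 - nu**2)               # plane strain
    G = (K1**2 + K2**2) / E_prime             # strain energy release rate
    psi = np.degrees(np.arctan2(K2, K1))      # mode-mixity angle
    return K1, K2, G, psi

# Example: 10 MPa remote stress, 0.5 mm crack, unit geometry factors
print(mixed_mode_K_G(10e6, 0.5e-3, 1.0, 1.0))
```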
Abstract:
Frequent episode discovery is one of the methods used for temporal pattern discovery in sequential data. An episode is a partially ordered set of nodes with each node associated with an event type. For more than a decade, algorithms existed for episode discovery only when the associated partial order is total (serial episode) or trivial (parallel episode). Recently, the literature has seen algorithms for discovering episodes with general partial orders. In frequent pattern mining, the threshold beyond which a pattern is inferred to be interesting is typically user-defined and arbitrary. One way of addressing this issue in the pattern mining literature has been based on the framework of statistical hypothesis testing. This paper presents a method of assessing statistical significance of episode patterns with general partial orders. A method is proposed to calculate thresholds, on the non-overlapped frequency, beyond which an episode pattern would be inferred to be statistically significant. The method is first explained for the case of injective episodes with general partial orders. An injective episode is one where event-types are not allowed to repeat. Later it is pointed out how the method can be extended to the class of all episodes. The significance threshold calculations for general partial order episodes proposed here also generalize the existing significance results for serial episodes. Through simulation studies, the usefulness of these statistical thresholds in pruning uninteresting patterns is illustrated. (C) 2014 Elsevier Inc. All rights reserved.
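As a generic illustration of a frequency threshold obtained from hypothesis testing (the paper's null model and counting for general partial-order episodes differ), assuming a Binomial null with a normal approximation; the sequence length and null occurrence probability below are hypothetical:

```python
# Generic significance-threshold sketch, not the paper's calculation.
from math import sqrt
from scipy.stats import norm

def frequency_threshold(T, p_null, alpha=0.05):
    """Smallest count that is significant at level alpha, using a normal
    approximation to a Binomial(T, p_null) null (assumed null model)."""
    mean = T * p_null
    sd = sqrt(T * p_null * (1 - p_null))
    return int(mean + norm.ppf(1 - alpha) * sd) + 1

# e.g. 10000 counting opportunities, null occurrence probability 0.01
print(frequency_threshold(10000, 0.01))
```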
Abstract:
We compute the instantaneous contributions to the spherical harmonic modes of gravitational waveforms from compact binary systems in general orbits up to the third post-Newtonian (PN) order. We further extend these results for compact binaries in quasielliptical orbits using the 3PN quasi-Keplerian representation of the conserved dynamics of compact binaries in eccentric orbits. Using the multipolar post-Minkowskian formalism, starting from the different mass and current-type multipole moments, we compute the spin-weighted spherical harmonic decomposition of the instantaneous part of the gravitational waveform. These are terms which are functions of the retarded time and do not depend on the history of the binary evolution. Together with the hereditary part, which depends on the binary's dynamical history, these waveforms form the basis for construction of accurate templates for the detection of gravitational wave signals from binaries moving in quasielliptical orbits.
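For reference, the standard spin-weighted spherical harmonic decomposition that such mode computations feed into (standard notation only, not a result quoted from the paper):

```latex
h_+ - i\,h_\times \;=\; \sum_{\ell \geq 2}\;\sum_{m=-\ell}^{\ell} h_{\ell m}(t_{\mathrm{ret}})\; {}_{-2}Y_{\ell m}(\theta,\phi)
```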
Abstract:
Eleven general circulation models/global climate models (GCMs) - BCCR-BCCM2.0, INGV-ECHAM4, GFDL2.0, GFDL2.1, GISS, IPSL-CM4, MIROC3, MRI-CGCM2, NCAR-PCMI, UKMO-HADCM3 and UKMO-HADGEM1 - are evaluated for Indian climate conditions using the performance indicator, skill score (SS). Two climate variables, temperature T (at three levels, i.e. 500, 700 and 850 mb) and precipitation rate (Pr), are considered, resulting in four SS-based evaluation criteria (T500, T700, T850, Pr). The multicriterion decision-making method, technique for order preference by similarity to an ideal solution (TOPSIS), is applied to rank the 11 GCMs. Efforts are made to rank GCMs for the Upper Malaprabha catchment and two river basins, namely, Krishna and Mahanadi (covered by 17 and 15 grids of size 2.5° x 2.5°, respectively). Similar efforts are also made for India (covered by 73 grid points of size 2.5° x 2.5°), for which an ensemble of GFDL2.0, INGV-ECHAM4, UKMO-HADCM3, MIROC3, BCCR-BCCM2.0 and GFDL2.1 is found to be suitable. It is concluded that the proposed methodology can be applied to similar situations with ease.
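A minimal TOPSIS sketch illustrating the ranking step with hypothetical skill scores and equal weights (the study's actual scores and weighting are not reproduced):

```python
# Illustrative TOPSIS ranking; scores and weights below are placeholders.
import numpy as np

def topsis_rank(scores, weights):
    """scores: (n_models, n_criteria) benefit-type skill scores.
    Returns model indices ordered from best to worst."""
    norm = scores / np.linalg.norm(scores, axis=0)   # vector normalisation per criterion
    v = norm * weights                                # weighted normalised matrix
    ideal, anti = v.max(axis=0), v.min(axis=0)        # ideal best / worst on each criterion
    d_best = np.linalg.norm(v - ideal, axis=1)
    d_worst = np.linalg.norm(v - anti, axis=1)
    closeness = d_worst / (d_best + d_worst)          # relative closeness to the ideal
    return np.argsort(-closeness)

# 3 hypothetical GCMs x 4 criteria (T500, T700, T850, Pr), equal weights
ss = np.array([[0.6, 0.5, 0.7, 0.4],
               [0.5, 0.6, 0.6, 0.5],
               [0.4, 0.4, 0.5, 0.6]])
print(topsis_rank(ss, weights=np.full(4, 0.25)))
```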
Abstract:
This paper studies a pilot-assisted physical layer data fusion technique known as Distributed Co-Phasing (DCP). In this two-phase scheme, the sensors first estimate the channel to the fusion center (FC) using pilots sent by the latter; and then they simultaneously transmit their common data by pre-rotating them by the estimated channel phase, thereby achieving physical layer data fusion. First, by analyzing the symmetric mutual information of the system, it is shown that the use of higher order constellations (HOC) can improve the throughput of DCP compared to the binary signaling considered heretofore. Using an HOC in the DCP setting requires the estimation of the composite DCP channel at the FC for data decoding. To this end, two blind algorithms are proposed: 1) power method, and 2) modified K-means algorithm. The latter algorithm is shown to be computationally efficient and converges significantly faster than the conventional K-means algorithm. Analytical expressions for the probability of error are derived, and it is found that even at moderate to low SNRs, the modified K-means algorithm achieves a probability of error comparable to that achievable with a perfect channel estimate at the FC, while requiring no pilot symbols to be transmitted from the sensor nodes. Also, the problem of signal corruption due to imperfect DCP is investigated, and constellation shaping to minimize the probability of signal corruption is proposed and analyzed. The analysis is validated, and the promising performance of DCP for energy-efficient physical layer data fusion is illustrated, using Monte Carlo simulations.
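A generic power-iteration sketch (dominant eigenvector and eigenvalue of a sample covariance matrix); how the paper specializes this to blind estimation of the composite DCP channel is not shown here, and the data below are toy values:

```python
# Generic power method, not tied to the paper's channel model.
import numpy as np

def power_method(R, n_iter=200):
    """Dominant eigenvector/eigenvalue of a Hermitian matrix R via power iteration."""
    v = np.random.default_rng(0).standard_normal(R.shape[0]).astype(complex)
    v /= np.linalg.norm(v)
    for _ in range(n_iter):
        v = R @ v
        v /= np.linalg.norm(v)
    lam = np.real(np.vdot(v, R @ v))   # Rayleigh quotient at convergence
    return v, lam

rng = np.random.default_rng(1)
R = np.cov(rng.standard_normal((4, 500)))     # toy sample covariance matrix
v, lam = power_method(R)
print(lam, np.linalg.norm(R @ v - lam * v))   # residual should be small
```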
Abstract:
Despite the long history, so far there is no general theoretical framework for calculating the acoustic emission spectrum accompanying any plastic deformation. We set up a discrete wave equation with plastic strain rate as a source term and include the Rayleigh-dissipation function to represent dissipation accompanying acoustic emission. We devise a method of bridging the widely separated time scales of plastic deformation and elastic degrees of freedom. While this equation is applicable to any type of plastic deformation, it should be supplemented by evolution equations for the dislocation microstructure for calculating the plastic strain rate. The efficacy of the framework is illustrated by considering three distinct cases of plastic deformation. The first one is the acoustic emission during a typical continuous yield exhibiting a smooth stress-strain curve. We first construct an appropriate set of evolution equations for two types of dislocation densities and then show that the shape of the model stress-strain curve and accompanying acoustic emission spectrum match very well with experimental results. The second and the third are the more complex cases of the Portevin-Le Chatelier bands and the Lüders band. These two cases are dealt with in the context of the Ananthakrishna model since the model predicts the three types of the Portevin-Le Chatelier bands and also Lüders-like bands. Our results show that for the type-C bands where the serration amplitude is large, the acoustic emission spectrum consists of well-separated bursts of acoustic emission. At higher strain rates of hopping type-B bands, the burst-type acoustic emission spectrum tends to overlap, forming a nearly continuous background with some sharp acoustic emission bursts. The latter can be identified with the nucleation of new bands. The acoustic emission spectrum associated with the continuously propagating type-A band is continuous. These predictions are consistent with experimental results. More importantly, our study shows that the low-amplitude continuous acoustic emission spectrum seen in both the type-B and type-A band regimes is directly correlated to small-amplitude serrations induced by propagating bands. The acoustic emission spectrum of the Lüders-like band matches with recent experiments as well. In all of these cases, acoustic emission signals are burstlike, reflecting the intermittent character of dislocation-mediated plastic flow.
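As a schematic of the kind of equation described, a one-dimensional damped wave equation in which the plastic strain \epsilon^{p} enters the stress as an eigenstrain and dissipation appears as a Rayleigh-type damping term (an assumption for illustration; the paper's discrete equation and its coupling to the dislocation-density evolution equations are not reproduced here):

```latex
\rho\,\ddot{u}(x,t) \;=\; \frac{\partial}{\partial x}\Big[\mu\big(\partial_x u(x,t) - \epsilon^{p}(x,t)\big)\Big] \;-\; \gamma\,\dot{u}(x,t)
```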
Abstract:
In this paper we first derive a necessary and sufficient condition for a stationary strategy to be the Nash equilibrium of a discounted constrained stochastic game under certain assumptions. In this process we also develop a nonlinear (non-convex) optimization problem for a discounted constrained stochastic game. We use the linear best response functions of every player and the complementary slackness theorem for linear programs to derive both the optimization problem and the equivalent condition. We then extend this result to average reward constrained stochastic games. Finally, we present a heuristic algorithm motivated by our necessary and sufficient conditions for a discounted cost constrained stochastic game. We numerically observe the convergence of this algorithm to a Nash equilibrium. (C) 2015 Elsevier B.V. All rights reserved.
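A sketch of a single player's best-response problem, posed as an occupation-measure linear program for a discounted constrained MDP (the other players' strategies held fixed), using scipy's linprog; the transition, reward, and cost arrays below are toy values, not from the paper:

```python
# Illustrative best-response LP for a discounted constrained MDP.
import numpy as np
from scipy.optimize import linprog

def best_response_lp(P, r, cost, budget, beta=0.9, alpha=None):
    """P: (S, A, S) transitions, r: (S, A) rewards, cost: (S, A) constraint costs."""
    S, A, _ = P.shape
    alpha = np.full(S, 1.0 / S) if alpha is None else alpha   # initial distribution
    # Balance equations for the normalised occupation measure x(s, a), flattened to length S*A
    A_eq = np.zeros((S, S * A))
    for s_next in range(S):
        for s in range(S):
            for a in range(A):
                A_eq[s_next, s * A + a] = (1.0 if s == s_next else 0.0) - beta * P[s, a, s_next]
    b_eq = (1.0 - beta) * alpha
    res = linprog(c=-r.ravel(),                               # maximise expected reward
                  A_ub=cost.ravel()[None, :], b_ub=[budget],  # expected-cost constraint
                  A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
    x = res.x.reshape(S, A)
    return x / np.maximum(x.sum(axis=1, keepdims=True), 1e-12)  # stationary strategy pi(a|s)

# Toy 2-state, 2-action example (hypothetical numbers)
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.7, 0.3], [0.05, 0.95]]])
r = np.array([[1.0, 0.5], [0.2, 0.8]])
c = np.array([[0.3, 0.1], [0.2, 0.4]])
print(best_response_lp(P, r, c, budget=0.25))
```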
Abstract:
Quantum ensembles form easily accessible architectures for studying various phenomena in quantum physics, quantum information science and spectroscopy. Here we review some recent protocols for measurements in quantum ensembles by utilizing ancillary systems. We also illustrate these protocols experimentally via nuclear magnetic resonance techniques. In particular, we shall review noninvasive measurements, extracting expectation values of various operators, characterizations of quantum states and quantum processes, and finally quantum noise engineering.
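A generic ancilla-assisted measurement sketch (a Hadamard test) for extracting the real part of an expectation value; the NMR protocols reviewed in the paper differ in their implementation details:

```python
# Generic Hadamard-test sketch, not the paper's NMR protocol.
import numpy as np
from scipy.linalg import block_diag

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate on the ancilla

def hadamard_test(U, psi):
    """Return Re<psi|U|psi> using one ancilla qubit."""
    d = U.shape[0]
    state = np.kron(np.array([1.0, 0.0]), psi)     # ancilla |0>  (x)  system |psi>
    state = np.kron(H, np.eye(d)) @ state          # Hadamard on the ancilla
    state = block_diag(np.eye(d), U) @ state       # controlled-U (control = ancilla)
    state = np.kron(H, np.eye(d)) @ state          # second Hadamard on the ancilla
    p0 = np.sum(np.abs(state[:d])**2)              # probability the ancilla reads 0
    return 2 * p0 - 1                              # = Re<psi|U|psi>

# Example: expectation of Pauli-Z in the |+> state (should be ~0)
Z = np.diag([1.0, -1.0])
plus = np.array([1.0, 1.0]) / np.sqrt(2)
print(hadamard_test(Z, plus))
```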
Abstract:
Einstein established the theory of general relativity and the corresponding field equation in 1915; its vacuum solutions were obtained by Schwarzschild and Kerr for static and rotating black holes in 1916 and 1963, respectively. These solutions still play an indispensable role, even 100 years after their original discovery, in explaining high energy astrophysical phenomena. Application of the solutions of Einstein's equation to resolve astrophysical phenomena has formed an important branch, namely relativistic astrophysics. I devote this article to illuminating some of the current astrophysical problems based on general relativity. However, there seem to be some issues with regard to explaining certain astrophysical phenomena based on Einstein's theory alone. I show that Einstein's theory and its modified form are both necessary to explain modern astrophysical processes, in particular those related to compact objects.
Abstract:
Schemes that can be proven to be unconditionally stable in the linear context can yield unstable solutions when used to solve nonlinear dynamical problems. Hence, the formulation of numerical strategies for nonlinear dynamical problems can be particularly challenging. In this work, we show that time finite element methods, because of their inherent energy-momentum conserving property (in the case of linear and nonlinear elastodynamics), provide a robust time-stepping method for nonlinear dynamic equations (including chaotic systems). We also show that most of the existing schemes that are known to be robust for parabolic or hyperbolic problems can be derived within the time finite element framework; thus, the time finite element method provides a unification of time-stepping schemes used in diverse disciplines. We demonstrate the robust performance of the time finite element method on several challenging examples from the literature where the solution behavior is known to be chaotic. (C) 2015 Elsevier Inc. All rights reserved.
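A minimal stand-in sketch: implicit midpoint stepping of a Duffing oscillator, which coincides with the lowest-order time finite element scheme for linear problems (an assumption for illustration; the paper's general time finite element formulation is broader), showing the near-conservation of energy over long integrations:

```python
# Illustrative implicit-midpoint integrator for a Duffing oscillator.
import numpy as np

def duffing_force(x, k=1.0, eps=1.0):
    return -k * x - eps * x**3                 # nonlinear restoring force

def midpoint_step(x, v, dt, n_fixed_point=20):
    """One implicit midpoint step solved by fixed-point iteration."""
    x_new, v_new = x, v
    for _ in range(n_fixed_point):
        x_mid = 0.5 * (x + x_new)
        v_mid = 0.5 * (v + v_new)
        x_new = x + dt * v_mid
        v_new = v + dt * duffing_force(x_mid)
    return x_new, v_new

energy = lambda x, v: 0.5 * v**2 + 0.5 * x**2 + 0.25 * x**4
x, v, dt = 1.0, 0.0, 0.01
E0 = energy(x, v)
for _ in range(10000):
    x, v = midpoint_step(x, v, dt)
print(abs(energy(x, v) - E0))                  # energy drift stays small
```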