866 results for Reproducing kernel
Abstract:
The Indian subcontinent divides the north Indian Ocean into two tropical basins, the Arabian Sea and the Bay of Bengal. The Arabian Sea has high salinity, whereas the salinity of the Bay of Bengal is much lower owing to the contrast in freshwater forcing of the two basins. The freshwater received by the Bay in large amounts during the summer monsoon through river discharge is flushed out annually by ocean circulation. After the withdrawal of the summer monsoon, the Ganga–Brahmaputra river plume flows first along the Indian coast and then around Sri Lanka into the Arabian Sea, creating a low-salinity pool in the southeastern Arabian Sea (SEAS). In the same region, during the pre-monsoon months of February–April, a warm pool known as the Arabian Sea Mini Warm Pool (ASMWP), distinctly warmer than the rest of the Indian Ocean, takes shape; it is in fact the warmest region of the world's oceans during this period. Simulating the river plume and its movement, as well as its implications for thermodynamics, has been a challenging problem for models of the Indian Ocean. Here we address these issues using an ocean general circulation model: we first show that the model is capable of reproducing the fresh plume in the Bay of Bengal as well as its movement, and then use the model to determine the processes that lead to the formation of the ASMWP. Hydrographic observations from the western Bay of Bengal have shown the presence of a fresh plume along the northern part of the Indian coast during the summer monsoon. The Indian Ocean model, when forced by realistic winds and climatological river discharge, reproduces the fresh plume with reasonable accuracy. The fresh plume does not advect along the Indian coast until the end of the summer monsoon. The North Bay Monsoon Current, which flows eastward in the northern Bay, separates the low-salinity water from the more saline southern parts of the Bay and thus plays an important role in the freshwater budget of the Bay of Bengal. The model also reproduces the surge of the fresh plume along the Indian coast into the Arabian Sea during the northeast monsoon. Mechanisms that lead to the formation of the Arabian Sea Mini Warm Pool are investigated using several numerical experiments. Contrary to existing theories, we find that salinity effects are not necessary for the formation of the ASMWP. The orographic effect of the Sahyadris (Western Ghats) and the resulting reduction in wind speed lead to the formation of the ASMWP. During November–April, the SEAS behaves as a low-wind, heat-dominated regime in which the evolution of sea surface temperature is determined solely by atmospheric forcing. In such regions the evolution of surface-layer temperature does not depend on characteristics of the subsurface ocean such as the barrier layer and temperature inversion.
Abstract:
MATLAB is an array language, initially popular for rapid prototyping, that is now increasingly used to develop production code for numerical and scientific applications. Typical MATLAB programs have abundant data parallelism. These programs also have control-flow-dominated scalar regions that have an impact on the program's execution time. Today's computer systems have tremendous computing power in the form of traditional CPU cores and throughput-oriented accelerators such as graphics processing units (GPUs). Thus, an approach that maps the control-flow-dominated regions to the CPU and the data-parallel regions to the GPU can significantly improve program performance. In this paper, we present the design and implementation of MEGHA, a compiler that automatically compiles MATLAB programs to enable synergistic execution on heterogeneous processors. Our solution is fully automated and does not require programmer input for identifying data-parallel regions. We propose a set of compiler optimizations tailored for MATLAB. Our compiler identifies data-parallel regions of the program and composes them into kernels. The problem of combining statements into kernels is formulated as a constrained graph-clustering problem. Heuristics are presented to map identified kernels to either the CPU or the GPU so that kernel execution on the CPU and the GPU happens synergistically and the amount of data transfer needed is minimized. To ensure the required data movement for dependencies across basic blocks, we propose a data-flow analysis and edge-splitting strategy. Thus our compiler automatically handles the composition of kernels, the mapping of kernels to the CPU and GPU, scheduling, and the insertion of required data transfers. The proposed compiler was implemented, and experimental evaluation on a set of MATLAB benchmarks shows that our approach achieves a geometric mean speedup of 19.8X for data-parallel benchmarks over native execution of MATLAB.
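To make the flavour of the kernel-to-device mapping concrete, here is a generic greedy heuristic of the kind described (a hedged sketch; the cost model, names, and the assumption of a topologically ordered kernel list are illustrative, not MEGHA's actual algorithm):

```python
# Illustrative greedy device assignment: place each kernel on the CPU or GPU by comparing
# estimated execution time plus the transfer cost of operands whose producers ended up on
# the other device. exec_cost, transfer_cost and producers are hypothetical inputs.
def assign_devices(kernels, exec_cost, transfer_cost, producers):
    placement = {}
    for k in kernels:                      # kernels assumed listed in dependence (topological) order
        cost = {}
        for dev in ("cpu", "gpu"):
            moved = sum(transfer_cost[p] for p in producers[k]
                        if p in placement and placement[p] != dev)
            cost[dev] = exec_cost[k][dev] + moved
        placement[k] = min(cost, key=cost.get)
    return placement
```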
Abstract:
This paper addresses the problem of maximum-margin classification given the moments of the class-conditional densities and the false-positive and false-negative error rates. Using Chebyshev inequalities, the problem can be posed as a second-order cone programming problem. The dual of the formulation leads to a geometric optimization problem, that of computing the distance between two ellipsoids, which is solved by an iterative algorithm. The formulation is extended to nonlinear classifiers using kernel methods. The resulting classifiers are applied to the classification of unbalanced datasets with asymmetric misclassification costs. Experimental results on benchmark datasets show the efficacy of the proposed method.
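For context, the Chebyshev-type bound that typically underlies such formulations (a standard result; the paper's exact notation and constants may differ) turns a worst-case error-rate requirement over all distributions with mean $\mu$ and covariance $\Sigma$ into a second-order cone constraint:

\[
\inf_{x \sim (\mu, \Sigma)} \Pr\big(w^{\top} x \ge b\big) \;\ge\; \eta
\quad\Longleftrightarrow\quad
w^{\top}\mu - b \;\ge\; \kappa(\eta)\,\sqrt{w^{\top}\Sigma\,w},
\qquad
\kappa(\eta) = \sqrt{\frac{\eta}{1-\eta}}.
\]

Imposing one such constraint per class, with $\eta$ set from the prescribed false-positive and false-negative rates, and maximizing the margin yields a second-order cone program of the kind mentioned in the abstract.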
Abstract:
Many downscaling techniques have been developed in the past few years for projecting station-scale hydrological variables from the large-scale atmospheric variables simulated by general circulation models (GCMs), in order to assess the hydrological impacts of climate change. This article compares the performance of three downscaling methods, viz. the conditional random field (CRF), K-nearest neighbour (KNN) and support vector machine (SVM) methods, in downscaling precipitation in the Punjab region of India, which belongs to the monsoon regime. The CRF model is a recently developed method for downscaling hydrological variables in a probabilistic framework, while the SVM model is a popular machine learning tool valued for its ability to generalize and to capture nonlinear relationships between predictors and predictand. The KNN model is an analogue-type method that queries days similar to a given feature vector from the training data and classifies future days by random sampling from a weighted set of the K closest training examples. The models are applied to downscaling monsoon (June to September) daily precipitation at six locations in Punjab. Model performance with respect to the reproduction of various statistics, such as dry and wet spell length distributions, the daily rainfall distribution, and intersite correlations, is examined. It is found that the CRF and KNN models perform slightly better than the SVM model in reproducing most daily rainfall statistics. These models are then used to project future precipitation at the six locations. Output from the Canadian global climate model (CGCM3) for three scenarios, viz. A1B, A2, and B1, is used for the projection of future precipitation. The projections show a change in the probability density functions of daily rainfall amount and changes in the wet and dry spell distributions of daily precipitation. Copyright (C) 2011 John Wiley & Sons, Ltd.
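A minimal sketch of the KNN analogue resampling step described above (illustrative only; the predictor set, distance metric, and weighting kernel used in the paper may differ):

```python
import numpy as np

# For a query day's large-scale predictor vector, find the K most similar training days
# and resample one of their observed station rainfall values with rank-based weights.
def knn_downscale(x_query, X_train, rain_train, K=10, rng=None):
    rng = rng or np.random.default_rng()
    d = np.linalg.norm(X_train - x_query, axis=1)   # distance in predictor space
    nearest = np.argsort(d)[:K]                     # indices of the K closest analogue days
    w = 1.0 / np.arange(1, K + 1)                   # a common rank-based weighting kernel
    w /= w.sum()
    return rain_train[rng.choice(nearest, p=w)]     # resampled station-scale rainfall
```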
Optimised form of acceleration correction algorithm within SPH-based simulations of impact mechanics
Abstract:
In the context of SPH-based simulations of impact dynamics, an optimised and automated form of the acceleration correction algorithm (Shaw and Reid, 2009a) is developed so as to remove spurious high-frequency oscillations in the computed responses whilst retaining the stabilizing characteristics of the artificial viscosity in the presence of shocks and layers with sharp gradients. A rational framework for an insightful characterisation of the original acceleration correction method is first set up. This is followed by an optimised version of the method, wherein the strength of the correction term in the momentum balance and energy equations is optimised. For the first time, this leads to an automated procedure for arriving at the artificial viscosity term. In particular, this is achieved by taking a spatially varying, response-dependent support size for the kernel function through which the correction term is computed. The optimum value of the support size is deduced by minimising the (spatially localised) total variation of the high-frequency oscillation in the acceleration term with respect to its (local) mean. The derivation of the method, its advantages over the heuristic method and issues related to its numerical implementation are discussed in detail. (C) 2011 Elsevier Ltd. All rights reserved.
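In symbols, the support-size selection described above can be read schematically as follows (a restatement of the abstract, not the paper's exact expression; $\Omega(\mathbf{x})$ denotes a local neighbourhood and $a_h$ the corrected acceleration computed with kernel support $h$):

\[
h^{*}(\mathbf{x}) \;=\; \arg\min_{h}\; \mathrm{TV}_{\Omega(\mathbf{x})}\!\left( a_h - \langle a_h \rangle_{\Omega(\mathbf{x})} \right),
\]

so that the correction damps high-frequency oscillation about the local mean acceleration without washing out genuine shocks.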
Abstract:
Long-distance dispersal (LDD) events, although rare for most plant species, can strongly influence population and community dynamics. Animals function as a key biotic vector of seeds, and thus a mechanistic and quantitative understanding of how individual animal behaviors scale to dispersal patterns at different spatial scales is a question of critical importance from both basic and applied perspectives. Using a diffusion-theory-based analytical approach for a wide range of animal movement and seed transportation patterns, we show that the scale (a measure of local dispersal) of the seed dispersal kernel increases with the organism's rate of movement and mean seed retention time. We reveal that variation in seed retention time is a key determinant of various measures of LDD, such as the kurtosis (or shape) of the kernel, the thickness of its tails, and the absolute number of seeds falling beyond a threshold distance. Using empirical data sets of frugivores, we illustrate the importance of variability in retention times for predicting the key disperser species that influence LDD. Our study makes testable predictions linking animal movement behaviors and gut retention times to dispersal patterns and, more generally, highlights the potential importance of animal behavioral variability for the LDD of seeds.
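A simple special case consistent with this claim (an illustrative calculation, not the paper's full model): if an animal moves by two-dimensional diffusion with diffusion constant $D$ and retains a seed for an exponentially distributed time with mean $\tau$, the resulting seed-deposition density at distance $r$ is

\[
p(r) \;=\; \frac{1}{2\pi D\tau}\, K_0\!\left(\frac{r}{\sqrt{D\tau}}\right),
\]

a fat-tailed kernel whose scale $\sqrt{D\tau}$ grows with both the movement rate and the mean retention time, while greater variability in retention time fattens the tail further.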
Abstract:
The characteristic function of a contraction is a classical complete unitary invariant devised by Sz.-Nagy and Foias. Just as a contraction is related to the Szego kernel $k_S(z, w) = (1 - z\bar{w})^{-1}$ for $|z|, |w| < 1$ by means of $(1/k_S)(T, T^*) \ge 0$, we consider an arbitrary open connected domain $\Omega$ in $\mathbb{C}^n$, a kernel $k$ on $\Omega$ such that $1/k$ is a polynomial, and a tuple $T = (T_1, T_2, \ldots, T_n)$ of commuting bounded operators on a complex separable Hilbert space $H$ such that $(1/k)(T, T^*) \ge 0$. Under some standard assumptions on $k$, it turns out that whether a characteristic function can be associated with $T$ or not depends not only on $T$ but also on the kernel $k$. We give a necessary and sufficient condition. When this condition is satisfied, a functional model can be constructed. Moreover, the characteristic function is then a complete unitary invariant for a suitable class of tuples $T$.
Abstract:
In this note, we show that a quasi-free Hilbert module $R$ defined over the polydisk algebra with kernel function $k(z, w)$ admits a unique minimal dilation (actually an isometric co-extension) to the Hardy module over the polydisk if and only if $S^{-1}(z, w)\,k(z, w)$ is a positive kernel function, where $S(z, w)$ is the Szego kernel for the polydisk. Moreover, we establish the equivalence of such a factorization of the kernel function and a positivity condition, defined using the hereditary functional calculus, which was introduced earlier by Athavale [8] and Ambrozie, Englis and Muller [2]. An explicit realization of the dilation space is given along with the isometric embedding of the module $R$ in it. The proof works for a wider class of Hilbert modules in which the Hardy module is replaced by more general quasi-free Hilbert modules such as the classical spaces on the polydisk or the unit ball in $\mathbb{C}^m$. Some consequences of this more general result are then explored in the case of several natural function algebras.
Abstract:
We solve the wave equations of arbitrary integer-spin fields in the BTZ black hole background and obtain exact expressions for their quasinormal modes. We show that these quasinormal modes agree precisely with the locations of the poles of the corresponding two-point function in the dual conformal field theory, as predicted by the AdS/CFT correspondence. We then use these quasinormal modes to construct the one-loop determinant of the higher-spin field in the thermal BTZ background. This is shown to agree with the result obtained from the corresponding heat kernel constructed recently by group-theoretic methods.
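For orientation, the familiar scalar-field pattern that this matching generalizes (quoted in one common convention; the higher-spin case modifies the weights, and sign conventions vary) is

\[
\omega_{L,n} = k - 4\pi i\, T_L\,(n + h_L), \qquad
\omega_{R,n} = -k - 4\pi i\, T_R\,(n + h_R), \qquad n = 0, 1, 2, \ldots,
\]

with $T_{L,R}$ the left- and right-moving temperatures and $(h_L, h_R)$ the conformal weights of the dual operator; these frequencies sit exactly at the poles of the retarded thermal two-point function in the boundary CFT.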
Abstract:
Recently it has been shown that the wave equations of bosonic higher-spin fields in the BTZ background can be solved exactly. In this work we extend the analysis to fermionic higher-spin fields. We solve the wave equations for arbitrary half-integer-spin fields in the BTZ black hole background and obtain exact expressions for their quasinormal modes. These quasinormal modes are shown to agree precisely with the poles of the corresponding two-point function in the dual conformal field theory, as predicted by the AdS/CFT correspondence. We also obtain an expression for the one-loop determinant for the Euclidean non-rotating BTZ black hole in terms of the quasinormal modes, which agrees with that obtained by integrating the heat kernel found by group-theoretic methods.
Abstract:
We consider the speech production mechanism and the associated linear source-filter model. For voiced speech sounds in particular, the source/glottal excitation is modeled as a stream of impulses and the filter as a cascade of second-order resonators. We show that the process of sampling speech signals can be modeled as filtering a stream of Dirac impulses (a model for the excitation) with a kernel function (the vocal tract response), and then sampling uniformly. We show that the problem of estimating the excitation is equivalent to the problem of recovering a stream of Dirac impulses from samples of its filtered version. We present associated algorithms based on the annihilating filter and also make a comparison with the classical linear prediction technique, which is well known in speech analysis. Results on synthesized as well as natural speech data are presented.
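A minimal sketch of the annihilating-filter step on which such algorithms rest (the generic finite-rate-of-innovation recovery, not the authors' exact algorithm; the function name and the assumption of noiseless Fourier coefficients are illustrative):

```python
import numpy as np

# Recover the locations of K Dirac impulses in one period from consecutive Fourier-series
# coefficients X[m] = sum_k a_k * exp(-j*2*pi*m*t_k/period), using an annihilating filter.
def annihilating_filter_locations(X, K, period=1.0):
    X = np.asarray(X, dtype=complex)
    rows = len(X) - K                     # number of annihilation equations available
    # Toeplitz system A h = 0, where h holds the K+1 annihilating-filter taps.
    A = np.array([X[i + K - np.arange(K + 1)] for i in range(rows)])
    _, _, Vh = np.linalg.svd(A)
    h = Vh[-1].conj()                     # null vector of A gives the filter coefficients
    u = np.roots(h)                       # filter roots u_k = exp(-j*2*pi*t_k/period)
    return np.sort(np.mod(-np.angle(u) * period / (2 * np.pi), period))
```

In the speech setting the recovered locations play the role of excitation instants; the impulse amplitudes can subsequently be estimated by a Vandermonde least-squares fit.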
Abstract:
Signal acquisition under a compressed sensing (CS) scheme offers the possibility of acquiring and reconstructing signals that are sparse in some basis incoherent with the measurement kernel, using a sub-Nyquist number of measurements. In particular, when the sole objective of the acquisition is the detection of the frequency of a signal rather than its exact reconstruction, an undersampling framework like CS is able to perform the task. In this paper we explore the possibility of acquiring and detecting the frequencies of multiple analog signals heavily corrupted by additive white Gaussian noise. We improve upon the MOSAICS architecture proposed in our previous work to include a wider class of signals having non-integral frequency components. This makes it possible to perform multiplexed compressed sensing for general frequency-sparse signals.
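A toy illustration of frequency detection from sub-Nyquist random projections (a generic sketch, not the MOSAICS architecture; the signal length, measurement count, and grid resolution are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 1024, 128                       # ambient length and number of measurements (M << N)
true_bin = 203.4                       # non-integral frequency, in DFT-bin units
n = np.arange(N)
x = np.cos(2 * np.pi * true_bin * n / N) + 0.5 * rng.standard_normal(N)   # noisy sparse signal

Phi = rng.standard_normal((M, N)) / np.sqrt(M)    # random measurement kernel
y = Phi @ x                                       # compressed measurements

# Detect the frequency by correlating y against candidate sinusoids as seen through Phi.
grid = np.arange(0.0, N / 2, 0.25)                # fine grid of candidate frequencies
atoms = np.cos(2 * np.pi * np.outer(grid, n) / N)
compressed = atoms @ Phi.T
scores = np.abs(compressed @ y) / np.linalg.norm(compressed, axis=1)
print("estimated frequency bin:", grid[np.argmax(scores)])
```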
Abstract:
The paper presents experiments and modeling studies on the use of producer gas, a bio-derived low-energy-content fuel, in a spark-ignited engine. Producer gas, generated in situ, has thermophysical properties different from those of fossil fuels. Experiments on naturally aspirated and turbocharged engine operation, and subsequent analysis of the cylinder pressure traces, reveal significant differences in the heat release pattern within the cylinder compared with a typical fossil fuel. The heat release patterns for gasoline and producer gas compare well over the initial 50%, but beyond this producer gas combustion tends to be sluggish, leading to an overall increase in the combustion duration. This is rather unexpected considering that producer gas, with nearly 20% hydrogen, has higher flame speeds than gasoline. The influence of hydrogen on the initial flame-kernel development period and the combustion duration, and hence on the overall heat release pattern, is addressed. The significant deviations in the heat release profiles between conventional fuels and producer gas necessitate the estimation of producer-gas-specific Wiebe coefficients. The experimental heat release profiles, together with experimental evidence of lower fuel conversion efficiency based on chemical and thermal analysis of the engine exhaust gas, are used to arrive at the Wiebe coefficients. The efficiency factor a is found to be 2.4, while the shape factor m is estimated at 0.7 for the 2% to 90% burn duration. The standard Wiebe coefficients for conventional fuels and the fuel-specific coefficients for producer gas are used in a zero-dimensional model to predict the performance of a six-cylinder gas engine under naturally aspirated and turbocharged conditions. While simulation results with the standard Wiebe coefficients deviate substantially from the experimental results, an excellent match is observed when the producer-gas-specific coefficients are used. Predictions using the same coefficients on a three-cylinder gas engine with different geometry and compression ratio show a close match with the experimental traces, highlighting the versatility of the coefficients.
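For reference, the single-Wiebe burn-rate form to which such coefficients conventionally refer (the standard textbook expression; the paper's exact variant is not restated in the abstract) is

\[
x_b(\theta) \;=\; 1 - \exp\!\left[-a\left(\frac{\theta - \theta_0}{\Delta\theta}\right)^{m+1}\right],
\]

where $x_b$ is the mass fraction burned, $\theta_0$ the start of combustion, $\Delta\theta$ the combustion duration, $a$ the efficiency factor and $m$ the shape factor; the reported producer-gas values are $a \approx 2.4$ and $m \approx 0.7$ over the 2% to 90% burn duration.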
Abstract:
Medical image segmentation finds application in computer-aided diagnosis, computer-guided surgery, measuring tissue volumes, and locating tumors and other pathologies. One approach to segmentation is to use active contours or snakes. Active contours start from an initialization (often manually specified) and are guided by image-dependent forces to the object boundary. Snakes may also be guided by gradient vector fields associated with an image. The first main result in this direction is that of Xu and Prince, who proposed the notion of gradient vector flow (GVF), which is computed iteratively. We propose a new formalism to compute the vector flow based on the notion of bilateral filtering of the gradient field associated with the edge map; we refer to it as the bilateral vector flow (BVF). The range kernel that we employ is different from the one used in the standard Gaussian bilateral filter. The advantage of the BVF formalism is that smooth gradient vector flow fields with enhanced edge information can be computed noniteratively. The quality of image segmentation turned out to be on par with, and in some cases better than, that obtained using the GVF.
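A minimal sketch of bilaterally filtering a gradient field (illustrative only: it uses a Gaussian range kernel, whereas the BVF of the abstract employs a different range kernel, and the window size and scale parameters are hypothetical):

```python
import numpy as np

# Noniterative, bilaterally smoothed vector flow from the gradient field of an edge map.
def bilateral_vector_flow(edge_map, sigma_s=2.0, sigma_r=0.1, radius=3):
    gy, gx = np.gradient(edge_map.astype(float))
    H, W = edge_map.shape
    dy, dx = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(dx**2 + dy**2) / (2 * sigma_s**2))     # spatial (domain) kernel
    vx, vy = np.zeros_like(gx), np.zeros_like(gy)
    gx_p = np.pad(gx, radius, mode='edge')
    gy_p = np.pad(gy, radius, mode='edge')
    for i in range(H):
        for j in range(W):
            wx = gx_p[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            wy = gy_p[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            # Range weight: penalize gradient vectors that differ from the centre vector.
            diff2 = (wx - gx[i, j])**2 + (wy - gy[i, j])**2
            w = spatial * np.exp(-diff2 / (2 * sigma_r**2))
            w /= w.sum()
            vx[i, j] = (w * wx).sum()
            vy[i, j] = (w * wy).sum()
    return vx, vy
```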