975 results for Sequential Gaussian simulation
Abstract:
We address the issue of rate-distortion (R/D) performance optimality of the recently proposed switched split vector quantization (SSVQ) method. The distribution of the source is modeled using a Gaussian mixture density, and thus the non-parametric SSVQ is analyzed in a parametric, model-based framework for achieving optimum R/D performance. Using high-rate quantization theory, we derive the optimum bit allocation formulae for the intra-cluster split vector quantizer (SVQ) and the inter-cluster switching. For wide-band speech line spectrum frequency (LSF) parameter quantization, it is shown that the Gaussian mixture model (GMM) based parametric SSVQ method provides a 1 bit/vector advantage over the non-parametric SSVQ method.
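The bit allocation formulae derived in the paper are not reproduced in the abstract. As a rough illustration only, the following sketch applies the standard textbook high-rate rule for independent Gaussian components (bits assigned according to the log of each component variance relative to the geometric mean); this rule is an assumption here, not necessarily the intra-cluster or inter-cluster formulae of the SSVQ method.

import numpy as np

def high_rate_bit_allocation(variances, total_bits):
    """Textbook high-rate MSE-optimal bit allocation for independent
    Gaussian components: b_i = B/n + 0.5*log2(var_i / geometric_mean(var)).
    Illustrative only; not the SSVQ formulae derived in the paper."""
    variances = np.asarray(variances, dtype=float)
    n = variances.size
    geo_mean = np.exp(np.mean(np.log(variances)))
    bits = total_bits / n + 0.5 * np.log2(variances / geo_mean)
    bits = np.clip(bits, 0.0, None)        # no negative allocations
    bits *= total_bits / bits.sum()        # keep the total at total_bits
    return bits

# Example: spread 24 bits/vector over the components of one GMM cluster
print(high_rate_bit_allocation([4.0, 2.0, 1.0, 0.5, 0.25, 0.1], 24))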
Abstract:
We propose certain discrete-parameter variants of well-known simulation optimization algorithms. Two of these algorithms are based on the smoothed functional (SF) technique, while the other two are based on the simultaneous perturbation stochastic approximation (SPSA) method. They differ from each other in the way perturbations are obtained and also in the manner in which projections and parameter updates are performed. All our algorithms use two simulations and two-timescale stochastic approximation. As an application setting, we consider the important problem of admission control of packets in communication networks under dependent service times. We consider a discrete-time slotted queueing model of the system and study two different scenarios: one where the service times depend on the system state, and the other where they depend on the number of arrivals in a time slot. Under our settings, the simulated objective function appears ill-behaved, with multiple local minima and a unique global minimum characterized by a sharp dip in the objective function in a small region of the parameter space. We compare the performance of our algorithms on these settings and observe that the two SF algorithms show the best results overall. In fact, in many of the cases studied, the SF algorithms converge to the global minimum.
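The paper's algorithms are two-simulation, two-timescale, discrete-parameter variants. As a loose illustration of the SPSA ingredient only, here is a minimal textbook SPSA loop on a generic noisy objective; the objective, the gain constants, and the continuous parameter space are assumptions, not the paper's setting.

import numpy as np

def spsa_minimize(noisy_f, theta0, n_iter=2000, a=0.1, c=0.1,
                  alpha=0.602, gamma=0.101, seed=0):
    """Basic simultaneous perturbation stochastic approximation (SPSA):
    two noisy evaluations per iteration give a gradient estimate.
    Illustrative textbook form, not the discrete two-timescale variants
    proposed in the paper."""
    rng = np.random.default_rng(seed)
    theta = np.array(theta0, dtype=float)
    for k in range(1, n_iter + 1):
        a_k = a / k**alpha                                 # step size
        c_k = c / k**gamma                                 # perturbation size
        delta = rng.choice([-1.0, 1.0], size=theta.size)   # Bernoulli +/-1
        y_plus = noisy_f(theta + c_k * delta)
        y_minus = noisy_f(theta - c_k * delta)
        ghat = (y_plus - y_minus) / (2.0 * c_k * delta)    # gradient estimate
        theta -= a_k * ghat
    return theta

# Example: noisy quadratic with minimum at (1, -2)
f = lambda x: (x[0] - 1)**2 + (x[1] + 2)**2 + 0.01 * np.random.randn()
print(spsa_minimize(f, [5.0, 5.0]))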
Abstract:
Hydrographic observations were taken along two coastal sections and one open-ocean section in the Bay of Bengal during the 1999 southwest monsoon, as a part of the Bay of Bengal Monsoon Experiment (BOBMEX). The coastal section in the northwestern Bay of Bengal, which was occupied twice, captured a freshwater plume in its two stages: first when the plume was restricted to the coastal region although separated from the coast, and then when the plume spread offshore. Below the freshwater layer there were indications of an undercurrent. The coastal section in the southern Bay of Bengal was marked by intense coastal upwelling in a 50 km wide band. In regions under the influence of the freshwater plume, the mixed layer was considerably thinner, and a temperature inversion occasionally formed. The mixed layer and isothermal layer were of similar depth for most of the profiles within and outside the freshwater plume, and the temperature below the mixed layer decreased rapidly down to the top of the seasonal thermocline. There was no barrier layer even in regions well under the influence of the freshwater plume. The freshwater plume in the open Bay of Bengal does not advect to the south of 16 degrees N during the southwest monsoon. A model of the Indian Ocean, forced by heat, momentum and freshwater fluxes for the year 1999, reproduces the freshwater plume in the Bay of Bengal reasonably well. Model currents, as well as the surface circulation calculated as the sum of geostrophic and Ekman drift, show a southeastward North Bay Monsoon Current (NBMC) across the Bay, which forms the southern arm of a cyclonic gyre. The NBMC separates the very low salinity waters of the northern Bay from the higher salinities in the south and thus plays an important role in the regulation of near-surface stratification. (c) 2007 Elsevier Ltd. All rights reserved.
Abstract:
Well injection replenishes depleting water levels in a well field. Water levels in observation wells some distance away from the injection well are the indicators of the success of a well injection program. Simulation of the response of an observation well located a few tens of meters from the injection well is likely to be affected by a nonhomogeneous medium, an inclined initial water table, and aquifer clogging. Existing algorithms, such as the U.S. Geological Survey groundwater flow software MODFLOW, are capable of handling the first two conditions, whereas time-dependent clogging effects are yet to be introduced into groundwater flow models. Elsewhere, aquifer clogging has been extensively researched in filtration theory; the scope for its application in a well field is a potential research problem. In the present paper, the coupling of one such filtration theory to MODFLOW is introduced. Simulation of clogging effects during the “Hansol” well recharge in parts of western India is found to be encouraging.
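The filtration-theory clogging law coupled to MODFLOW is not specified in the abstract. As a purely hypothetical illustration of a time-dependent clogging effect, the sketch below reduces hydraulic conductivity exponentially with cumulative injected volume; the functional form and the coefficient beta are assumptions for illustration, not the model used in the paper.

import numpy as np

def clogged_conductivity(K0, injected_volume, beta=1e-4):
    """Hypothetical clogging law: hydraulic conductivity decays
    exponentially with cumulative injected volume. The form and beta are
    illustrative assumptions, not the filtration model coupled to MODFLOW
    in the paper."""
    return K0 * np.exp(-beta * np.asarray(injected_volume, dtype=float))

# Example: conductivity of the well-face cells after 0-5000 m3 of injection
volumes = np.linspace(0.0, 5000.0, 6)
print(clogged_conductivity(K0=25.0, injected_volume=volumes))  # m/day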
Abstract:
In this paper we present and compare the results obtained from semi-classical and quantum mechanical simulations of a Double Gate MOSFET structure in order to analyze the electrostatics and carrier dynamics of this device. Geometric parameters of the device, such as gate length and body thickness, have been chosen according to the ITRS specifications for the different technology nodes. We show the extent of deviation between the semi-classical and quantum mechanical results, and hence the need for quantum simulations of the promising nanoscale devices at the future technology nodes predicted in the ITRS.
Abstract:
We consider the problem of detecting statistically significant sequential patterns in multineuronal spike trains. These patterns are characterized by ordered sequences of spikes from different neurons with specific delays between spikes. We have previously proposed a data-mining scheme to efficiently discover such patterns that occur often enough in the data. Here we propose a method to determine the statistical significance of such repeating patterns. The novelty of our approach is that we use a compound null hypothesis that includes not only models of independent neurons but also models where neurons have weak dependencies. The strength of interaction among the neurons is represented in terms of certain pairwise conditional probabilities. We specify our null hypothesis by putting an upper bound on all such conditional probabilities. We construct a probabilistic model that captures the counting process and use this to derive a test of significance for rejecting such a compound null hypothesis. The structure of our null hypothesis also allows us to rank-order different significant patterns. We illustrate the effectiveness of our approach using spike trains generated with a simulator.
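The counting-process model and the exact test are not given in the abstract. The sketch below illustrates the general idea in a deliberately simplified way: it assumes that bounding every pairwise conditional probability by eps bounds the chance of completing a k-neuron pattern at any opportunity by eps**(k-1), and then applies a binomial tail. This simplification is an assumption for illustration, not the paper's test.

from scipy.stats import binom

def pattern_count_pvalue(n_occurrences, n_opportunities, k, eps):
    """Rough significance check for a repeated k-neuron sequential pattern
    under a compound null where all pairwise conditional firing
    probabilities are bounded by eps: each opportunity completes the
    pattern with probability at most eps**(k-1), so the count is bounded
    by a binomial tail. A simplification, not the paper's model."""
    p_null = eps ** (k - 1)
    # P(count >= observed) under the bounding binomial
    return binom.sf(n_occurrences - 1, n_opportunities, p_null)

# Example: a 4-neuron pattern seen 30 times in 10000 windows, eps = 0.05
print(pattern_count_pvalue(30, 10000, k=4, eps=0.05))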
Abstract:
A test for time-varying correlation is developed within the framework of a dynamic conditional score (DCS) model for both Gaussian and Student t-distributions. The test may be interpreted as a Lagrange multiplier test and modified to allow for the estimation of models for time-varying volatility in the individual series. Unlike standard moment-based tests, the score-based test statistic includes information on the level of correlation under the null hypothesis and local power arguments indicate the benefits of doing so. A simulation study shows that the performance of the score-based test is strong relative to existing tests across a range of data generating processes. An application to the Hong Kong and South Korean equity markets shows that the new test reveals changes in correlation that are not detected by the standard moment-based test.
Abstract:
We investigate the events near the fusion interfaces of dissimilar welds using a phase-field model developed for single-phase solidification of binary alloys. The parameters used here correspond to the dissimilar welding of a Ni/Cu couple. The events at the Ni and the Cu interfaces are very different, which illustrates the importance of the phase diagram through the slopes of the liquidus curves. On the Ni side, where the liquidus temperature decreases with increasing alloying, solutal melting of the base metal takes place; the resolidification, with continuously increasing solid composition, is very sluggish until the interface encounters a homogeneous melt composition. The growth difficulty of the base metal increases with increasing initial melt composition, which is equivalent to a steeper slope of the liquidus curve. On the Cu side, the initial conditions result in a deeply undercooled melt, and contributions from both constrained and unconstrained modes of growth are observed. The simulations bring out the possibility of nucleation of a concentrated solid phase from the melt, and of secondary melting of the substrate due to the associated recalescence event. The results for the Ni and Cu interfaces can be used to understand more complex dissimilar weld interfaces involving multiphase solidification.
Abstract:
We develop several hardware and software simulation blocks for the TinyOS-2 (TOSSIM-T2) simulator. The simulated hardware platform chosen is the popular MICA2 mote. While the hardware simulation elements comprise the radio and external flash memory, the software blocks include an environment noise model, a packet delivery model, and an energy estimator block for the complete system. The hardware radio block uses the software environment noise model to sample the noise floor. The packet delivery model is built by establishing the SNR-PRR curve for the MICA2 system. The energy estimator block models energy consumption by the Micro Controller Unit (MCU), radio, LEDs, and external flash memory. Using the manufacturer's data sheets, we provide an estimate of the energy consumed by the hardware during transmission and reception, and also track several of the MCU's states with the associated energy consumption. To study the effectiveness of this work, we take as a case study the paper presented in [1]. We obtain three sets of results for energy consumption: through mathematical analysis, through simulation using the blocks built into PowerTossim-T2, and finally through laboratory measurements. Since there is a significant match between these result sets, we propose our blocks for the T2 community to effectively test application energy requirements and node lifetimes.
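As a minimal sketch of data-sheet-style energy bookkeeping of the kind described above (per-state current draw multiplied by supply voltage and time in state), the following uses placeholder current values; they are not the actual MICA2 data-sheet figures or the PowerTossim-T2 estimator.

def energy_mJ(current_mA, voltage_V, duration_ms):
    """Energy = V * I * t, converted to millijoules."""
    return voltage_V * current_mA * duration_ms / 1000.0

# Placeholder per-state current draws (mA) -- NOT the MICA2 data-sheet values.
CURRENT_mA = {"mcu_active": 8.0, "mcu_sleep": 0.01,
              "radio_tx": 27.0, "radio_rx": 10.0,
              "led": 2.2, "flash_write": 15.0}

def node_energy(state_durations_ms, voltage_V=3.0):
    """Sum energy over the time spent in each hardware state."""
    return sum(energy_mJ(CURRENT_mA[s], voltage_V, t)
               for s, t in state_durations_ms.items())

# Example: 1 s active MCU, 50 ms TX, 200 ms RX, rest asleep in a 10 s window
profile = {"mcu_active": 1000, "radio_tx": 50, "radio_rx": 200, "mcu_sleep": 8750}
print(node_energy(profile), "mJ")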
Abstract:
Agricultural pests are responsible for millions of dollars in crop losses and management costs every year. In order to implement optimal site-specific treatments and reduce control costs, new methods to accurately monitor and assess pest damage need to be investigated. In this paper we explore the combination of unmanned aerial vehicles (UAVs), remote sensing and machine learning techniques as a promising technology to address this challenge. The deployment of UAVs as a sensor platform is a rapidly growing field of study for biosecurity and precision agriculture applications. In this experiment, a data collection campaign was performed over a sorghum crop severely damaged by white grubs (Coleoptera: Scarabaeidae). The larvae of these scarab beetles feed on the roots of plants, which in turn impairs root exploration of the soil profile. In the field, crop health status could be classified into three levels: bare soil where plants were decimated, transition zones of reduced plant density, and healthy canopy areas. In this study, we describe the UAV platform deployed to collect high-resolution RGB imagery as well as the image processing pipeline implemented to create an orthoimage. An unsupervised machine learning approach is formulated in order to create a meaningful partition of the image into each of the crop levels. The aim of the approach is to simplify the image analysis step by minimizing user input requirements and avoiding the manual data labeling necessary in supervised learning approaches. The implemented algorithm is based on K-means clustering. In order to control high-frequency components present in the feature space, a neighbourhood-oriented parameter is introduced by applying Gaussian convolution kernels prior to K-means. The outcome of this approach is a soft K-means algorithm similar to the EM algorithm for Gaussian mixture models. The results show that the algorithm delivers decision boundaries that consistently classify the field into three clusters, one for each crop health level. The methodology presented in this paper represents an avenue for further research towards automated crop damage assessments and biosecurity surveillance.
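As a minimal sketch of the smoothing-then-clustering idea described above, the code below Gaussian-filters each channel of an RGB orthoimage and runs K-means with three clusters. The kernel width, the synthetic input array, and the use of plain (hard) K-means are assumptions for illustration, not the soft K-means formulation of the paper.

import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.cluster import KMeans

def classify_crop_levels(rgb, sigma=5.0, n_clusters=3, seed=0):
    """Smooth each colour channel with a Gaussian kernel (to suppress
    high-frequency texture), then cluster pixels with K-means into
    n_clusters crop-health levels. A simplified sketch, not the
    soft-K-means / EM variant described in the paper."""
    rgb = np.asarray(rgb, dtype=float)
    smoothed = np.stack([gaussian_filter(rgb[..., c], sigma=sigma)
                         for c in range(rgb.shape[-1])], axis=-1)
    h, w, d = smoothed.shape
    labels = KMeans(n_clusters=n_clusters, n_init=10,
                    random_state=seed).fit_predict(smoothed.reshape(-1, d))
    return labels.reshape(h, w)

# Example with a synthetic image standing in for the UAV orthoimage
fake_orthoimage = np.random.rand(120, 160, 3)
label_map = classify_crop_levels(fake_orthoimage)
print(np.unique(label_map, return_counts=True))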