989 results for sequential methods
Abstract:
We develop a communication-theoretic framework for modeling 2-D magnetic recording channels. Using the model, we define the signal-to-noise ratio (SNR) for the channel considering several physical parameters, such as the channel bit density, code rate, bit aspect ratio, and noise parameters. We analyze the problem of optimizing the bit aspect ratio to maximize SNR. The read channel architecture comprises a novel 2-D joint self-iterating equalizer and detection system with noise prediction capability. We evaluate the system performance based on our channel model through simulations. The coded performance with the 2-D equalizer-detector indicates approximately 5.5 dB of SNR gain over uncoded data.
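For reference (a schematic sketch of the geometry, not the paper's exact SNR definition): if $B_x$ and $B_y$ denote the down-track and cross-track bit dimensions, the channel bit density, bit aspect ratio, and user density are related by

\[
D_{\mathrm{ch}} = \frac{1}{B_x B_y}, \qquad \mathrm{BAR} = \frac{B_y}{B_x}, \qquad D_{\mathrm{user}} = R \, D_{\mathrm{ch}},
\]

where $R$ is the code rate. At fixed areal density, varying the bit aspect ratio trades down-track against cross-track interference, which is why an SNR-maximizing aspect ratio exists.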
Abstract:
In this article, we analyse several discontinuous Galerkin (DG) methods for the Stokes problem under minimal regularity on the solution. We assume that the velocity $u$ belongs to $[H_0^1(\Omega)]^d$ and the pressure $p \in L_0^2(\Omega)$. First, we analyse standard DG methods assuming that the right-hand side $f$ belongs to $[H^{-1}(\Omega) \cap L^1(\Omega)]^d$. A DG method that is well defined for $f$ belonging to $[H^{-1}(\Omega)]^d$ is then investigated. The methods under study include stabilized DG methods using equal-order spaces and inf-sup stable ones where the pressure space is one polynomial degree less than the velocity space.
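For context, the Stokes problem in question reads (in strong form, with unit viscosity and homogeneous Dirichlet boundary conditions):

\[
-\Delta u + \nabla p = f \quad \text{in } \Omega, \qquad
\nabla \cdot u = 0 \quad \text{in } \Omega, \qquad
u = 0 \quad \text{on } \partial\Omega,
\]

with the pressure normalized to zero mean, which is what the space $L_0^2(\Omega)$ encodes.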
Abstract:
In this article, we prove convergence of the weakly penalized adaptive discontinuous Galerkin methods. Unlike other works, we derive the contraction property for various discontinuous Galerkin methods assuming only that the stabilizing parameters are large enough to stabilize the method. A central idea in the analysis is to construct an auxiliary solution from the discontinuous Galerkin solution by a simple post-processing. Based on the auxiliary solution, we define the adaptive algorithm that leads to the convergence of adaptive discontinuous Galerkin methods.
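A contraction property in this setting typically takes the following schematic form (notation ours, not the article's exact statement): for consecutive adaptive iterates $u_k$ with error estimators $\eta_k$, there exist constants $\gamma > 0$ and $0 < \alpha < 1$ such that

\[
\|u - u_{k+1}\|^2 + \gamma\, \eta_{k+1}^2 \;\le\; \alpha \left( \|u - u_k\|^2 + \gamma\, \eta_k^2 \right),
\]

so the combined quasi-error decreases geometrically and the adaptive loop converges.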
Abstract:
This study considers linear filtering methods for minimising the end-to-end average distortion of a fixed-rate source quantisation system. For the source encoder, both scalar and vector quantisation are considered. The codebook index output by the encoder is sent over a noisy discrete memoryless channel whose statistics could be unknown at the transmitter. At the receiver, the code vector corresponding to the received index is passed through a linear receive filter, whose output is an estimate of the source instantiation. Under this setup, an approximate expression for the average weighted mean-square error (WMSE) between the source instantiation and the reconstructed vector at the receiver is derived using high-resolution quantisation theory. Also, a closed-form expression for the linear receive filter that minimises the approximate average WMSE is derived. The generality of the framework developed is further demonstrated by theoretically analysing the performance of other adaptation techniques that can be employed when the channel statistics are also available at the transmitter, such as joint transmit-receive linear filtering and codebook scaling. Monte Carlo simulation results validate the theoretical expressions, and illustrate the improvement in the average distortion that can be obtained using linear filtering techniques.
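For intuition, a linear receive filter minimising a (weighted) mean-square error has the familiar Wiener/LMMSE structure; schematically, writing $s$ for the source vector and $\hat{c}$ for the code vector selected by the received index (notation ours),

\[
W^\star = R_{s\hat{c}}\, R_{\hat{c}\hat{c}}^{-1},
\qquad
R_{s\hat{c}} = \mathbb{E}\!\left[s\,\hat{c}^{\top}\right],
\quad
R_{\hat{c}\hat{c}} = \mathbb{E}\!\left[\hat{c}\,\hat{c}^{\top}\right].
\]

The closed form derived in the study is of this type, obtained under high-resolution quantisation approximations.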
Abstract:
Lime stabilization remains the most widely adopted in situ stabilization method for controlling the swell-shrink potential of expansive soils, despite construction difficulties and its ineffectiveness in certain conditions. In addition to the in situ stabilization methods presently practiced, it is theoretically possible to facilitate in situ precipitation of lime in soil by successive permeation of calcium chloride (CaCl2) and sodium hydroxide (NaOH) solutions into the expansive soil. In this laboratory investigation, an attempt is made to study the precipitation of lime in soil by successive mixing of CaCl2 and NaOH solutions with the expansive soil in two different sequences. Experimental results indicated that in situ precipitation of lime by sequential mixing of CaCl2 and NaOH solutions with expansive soil developed strong lime-modification and soil-lime pozzolanic reactions. The lime-modification reactions together with the poorly developed cementation products controlled the swelling potential, reduced the plasticity index, and increased the unconfined compressive strength of the expansive clay cured for 24 h. Comparatively, both lime-modification reactions and well-developed crystalline cementation products (formed by lime-soil pozzolanic reactions) contributed to the marked increase in the unconfined compressive strength of the expansive soil that was cured for 7-21 days. Results also show that the sequential mixing of expansive soil with CaCl2 solution followed by NaOH solution is more effective than mixing expansive soil with NaOH solution followed by CaCl2 solution. DOI: 10.1061/(ASCE)MT.1943-5533.0000483.
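The underlying precipitation step is the standard double-displacement reaction between the two permeants,

\[
\mathrm{CaCl_2 + 2\,NaOH \longrightarrow Ca(OH)_2\!\downarrow + 2\,NaCl},
\]

with the precipitated hydrated lime, Ca(OH)2, then driving the cation-exchange (lime-modification) reactions immediately and the slower soil-lime pozzolanic reactions over curing.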
Abstract:
Knowledge of the plasticity associated with the incipient stage of chip formation is useful toward developing an understanding of the deformation field underlying severe plastic deformation processes. The transition from a transient state of straining to a steady state was investigated in plane strain machining of a model material system, copper. Characterization of the evolution to a steady-state deformation field was made by image correlation, hardness mapping, load analysis, and microstructure characterization. Empirical relationships between the deformation heterogeneity and the process parameters were found and explained by the corresponding effects on shear plane geometry. The results are potentially useful to facilitate a framework for process design of large strain deformation configurations, wherein transient deformation fields prevail. These implications are considered in the present study to quantify the efficiency of processing methods for bulk ultrafine-grained metals by large strain extrusion machining and equal channel angular pressing.
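For reference, in plane strain machining the strain imposed on the chip follows directly from the shear plane geometry; with rake angle $\alpha$ and shear angle $\phi$, the standard single-shear-plane relation is

\[
\gamma = \frac{\cos\alpha}{\sin\phi\,\cos(\phi - \alpha)},
\]

which makes explicit why process parameters that alter the shear plane geometry alter the imposed strain.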
Abstract:
Our work is motivated by impromptu (or "as-you-go") deployment of wireless relay nodes along a path, a need that arises in many situations. In this paper, the path is modeled as starting at the origin (where there is the data sink, e.g., the control center), and evolving randomly over a lattice in the positive quadrant. A person walks along the path deploying relay nodes as he goes. At each step, the path can, randomly, either continue in the same direction, take a turn, or come to an end, at which point a data source (e.g., a sensor) has to be placed that will send packets to the data sink. A decision has to be made at each step whether or not to place a wireless relay node. Assuming that the packet generation rate by the source is very low, and simple link-by-link scheduling, we consider the problem of sequential relay placement so as to minimize the expectation of an end-to-end cost metric (a linear combination of the sum of convex hop costs and the number of relays placed). This impromptu relay placement problem is formulated as a total cost Markov decision process. First, we derive the optimal policy in terms of an optimal placement set and show that this set is characterized by a boundary (with respect to the position of the last placed relay) beyond which it is optimal to place the next relay. Next, based on a simpler one-step-look-ahead characterization of the optimal policy, we propose an algorithm which is proved to converge to the optimal placement set in a finite number of steps and which is faster than value iteration. We show by simulations that the distance-threshold-based heuristic, usually assumed in the literature, is close to the optimal, provided that the threshold distance is carefully chosen.
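Schematically (notation ours, chosen for illustration), the objective is a total expected cost of the form

\[
\min_{\pi}\; \mathbb{E}_{\pi}\!\left[ \sum_{i=1}^{N+1} c(d_i) + \xi N \right],
\]

where $N$ is the number of relays placed, $d_i$ is the length of the $i$-th hop, $c(\cdot)$ is a convex hop cost, and $\xi > 0$ prices each relay; the one-step-look-ahead rule then places a relay exactly when deferring placement can no longer lower the expected cost-to-go.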
Abstract:
This paper considers cooperative spectrum sensing algorithms for Cognitive Radios which focus on reducing the number of samples needed to make a reliable detection. We propose algorithms based on decentralized sequential hypothesis testing in which the Cognitive Radios sequentially collect the observations, make local decisions, and send them to the fusion center for further processing to make a final decision on spectrum usage. The reporting channel between the Cognitive Radios and the fusion center is modeled, more realistically, as a Multiple Access Channel (MAC) with receiver noise. Furthermore, the communication for reporting is limited, thereby reducing the communication cost. We start with an algorithm where the fusion center uses an SPRT-like (Sequential Probability Ratio Test) procedure and theoretically analyze its performance. Asymptotically, its performance is close to that of the optimal centralized test without fusion center noise. We further modify this algorithm to improve its performance at practical operating points. Later we generalize these algorithms to handle uncertainties in SNR and fading.
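For readers unfamiliar with the SPRT building block the fusion rule is based on, a minimal sketch of Wald's sequential probability ratio test for two simple hypotheses is shown below (illustrative only; the decentralized, noisy-MAC variant in the paper modifies this basic recursion):

```python
import numpy as np

def sprt(samples, logpdf0, logpdf1, alpha=0.01, beta=0.01):
    """Wald's SPRT: accumulate the log-likelihood ratio until it
    crosses one of two thresholds set by the target error rates."""
    upper = np.log((1 - beta) / alpha)   # accept H1 when crossed
    lower = np.log(beta / (1 - alpha))   # accept H0 when crossed
    llr = 0.0
    for n, x in enumerate(samples, start=1):
        llr += logpdf1(x) - logpdf0(x)
        if llr >= upper:
            return 1, n   # decide H1 after n samples
        if llr <= lower:
            return 0, n   # decide H0 after n samples
    return None, len(samples)  # no decision within the sample budget

# Example: H0: N(0, 1) vs H1: N(0.5, 1); constants cancel in the LLR
rng = np.random.default_rng(0)
xs = rng.normal(0.5, 1.0, size=1000)
decision, n = sprt(xs,
                   logpdf0=lambda x: -0.5 * x**2,
                   logpdf1=lambda x: -0.5 * (x - 0.5)**2)
print(decision, n)
```

The appeal of the sequential test, which the paper exploits, is that the expected number of samples to reach a decision is typically much smaller than that of a fixed-sample-size test with the same error probabilities.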
Abstract:
Today's programming languages are supported by powerful third-party APIs. For a given application domain, it is common to have many competing APIs that provide similar functionality. Programmer productivity therefore depends heavily on the programmer's ability to discover suitable APIs both during an initial coding phase and during software maintenance. The aim of this work is to support the discovery and migration of math APIs. Math APIs are at the heart of many application domains ranging from machine learning to scientific computations. Our approach, called MATHFINDER, combines executable specifications of mathematical computations with unit tests (operational specifications) of API methods. Given a math expression, MATHFINDER synthesizes pseudo-code composed of API methods to compute the expression by mining unit tests of the API methods. We present a sequential version of our unit test mining algorithm and also design a more scalable data-parallel version. We perform an extensive evaluation of MATHFINDER (1) for API discovery, where math algorithms are to be implemented from scratch, and (2) for API migration, where client programs utilizing a math API are to be migrated to another API. We evaluated the precision and recall of MATHFINDER on a diverse collection of math expressions, culled from algorithms used in a wide range of application areas such as control systems and structural dynamics. In a user study to evaluate the productivity gains obtained by using MATHFINDER for API discovery, the programmers who used MATHFINDER finished their programming tasks twice as fast as their counterparts who used the usual techniques like web and code search, IDE code completion, and manual inspection of library documentation. For the problem of API migration, as a case study, we used MATHFINDER to migrate Weka, a popular machine learning library. Overall, our evaluation shows that MATHFINDER is easy to use, provides highly precise results across several math APIs and application domains even with a small number of unit tests per method, and scales to large collections of unit tests.
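The core mining idea can be sketched as follows (a simplified, hypothetical reconstruction; the names and data structures here are illustrative, not MATHFINDER's actual API): treat each unit test as an input-output record for an API method, evaluate the target math expression on the same inputs, and keep the methods whose recorded outputs match.

```python
import math

# Hypothetical records mined from unit tests: (method_name, inputs, output)
unit_tests = [
    ("LinAlg.dot",  ((1.0, 2.0), (3.0, 4.0)), 11.0),
    ("LinAlg.norm", ((3.0, 4.0),),            5.0),
    ("Stats.mean",  ((2.0, 4.0, 6.0),),       4.0),
]

def spec_norm(v):
    """Executable specification of the target expression ||v||_2."""
    return math.sqrt(sum(x * x for x in v))

def matching_methods(spec, tests, tol=1e-9):
    """Return API methods whose unit-test I/O behaviour matches the spec."""
    hits = []
    for name, inputs, output in tests:
        try:
            if abs(spec(*inputs) - output) <= tol:
                hits.append(name)
        except TypeError:
            continue  # arity mismatch: spec does not apply to these inputs
    return hits

print(matching_methods(spec_norm, unit_tests))  # ['LinAlg.norm']
```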
Abstract:
We present a survey of different numerical interpolation schemes used for two-phase transient heat conduction problems in the context of interface-capturing phase-field methods. Examples are general transport problems in the context of diffuse interface methods with unequal heat conductivity in the normal and tangential directions to the interface. We extend the tensorial approach recently published by Nicoli M et al (2011 Phys. Rev. E 84 1-6) to the general three-dimensional (3D) transient evolution equations. Validations for one-dimensional, two-dimensional and 3D transient test cases are provided, and the results are in good agreement with analytical and numerical reference solutions.
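A schematic form of the tensorial conductivity in such diffuse-interface formulations (notation ours) splits the conductivity into components normal and tangential to the interface with unit normal $\mathbf{n}$:

\[
\mathbf{K} = k_{\perp}\,\mathbf{n}\otimes\mathbf{n} + k_{\parallel}\left(\mathbf{I} - \mathbf{n}\otimes\mathbf{n}\right),
\qquad
\partial_t T = \nabla\cdot\left(\mathbf{K}\,\nabla T\right),
\]

so that heat fluxes across and along the diffuse interface see different effective conductivities.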
Abstract:
The concurrent planning of sequential saccades offers a simple model to study the nature of visuomotor transformations since the second saccade vector needs to be remapped to foveate the second target following the first saccade. Remapping is thought to occur through egocentric mechanisms involving an efference copy of the first saccade that is available around the time of its onset. In contrast, an exocentric representation of the second target relative to the first target, if available, can be used to directly code the second saccade vector. While human volunteers performed a modified double-step task, we examined the role of exocentric encoding in concurrent saccade planning by shifting the first target location well before the efference copy could be used by the oculomotor system. The impact of the first target shift on concurrent processing was tested by examining the end-points of second saccades following a shift of the second target during the first saccade. The frequency of second saccades to the old versus new location of the second target, as well as the propagation of first saccade localization errors, both indices of concurrent processing, were found to be significantly reduced in trials with the first target shift compared to those without it. A similar decrease in concurrent processing was obtained when we shifted the first target but kept constant the second saccade vector. Overall, these results suggest that the brain can use relatively stable visual landmarks, independent of efference copy-based egocentric mechanisms, for concurrent planning of sequential saccades.
Abstract:
Magnetic Resonance Imaging (MRI) has been widely used in cancer treatment planning, taking advantage of the high resolution and high contrast it provides. The raw data collected in MRI can also be used to obtain temperature maps, and this has been explored for performing MR thermometry. This review article describes the methods used in performing MR thermometry, with an emphasis on reconstruction methods that are useful for obtaining these temperature maps in real time over a large region of interest. This article also proposes a prior-image constrained reconstruction method for temperature reconstruction in MR thermometry, and presents a systematic comparison with a state-of-the-art reconstruction method using ex vivo tissue experiments.
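For context (the article surveys several methods; this is the most widely used one, stated here as background): proton resonance frequency (PRF) shift thermometry maps temperature change from the phase difference between gradient-echo images,

\[
\Delta T = \frac{\Delta\phi}{\gamma\,\alpha\,B_0\,\mathrm{TE}},
\]

where $\Delta\phi$ is the phase change, $\gamma$ the gyromagnetic ratio, $B_0$ the main field strength, $\mathrm{TE}$ the echo time, and $\alpha \approx -0.01~\mathrm{ppm/^{\circ}C}$ the PRF thermal coefficient.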
Abstract:
Package-board co-design plays a crucial role in determining the performance of high-speed systems. Although several commercial solutions exist for electromagnetic analysis and verification, the lack of Computer Aided Design (CAD) tools for signal-integrity (SI) aware design and synthesis leads to longer design cycles and non-optimal package-board interconnect geometries. In this work, the functional similarities between package-board design and radio-frequency (RF) imaging are explored. Consequently, qualitative methods common to the imaging community, like Tikhonov Regularization (TR) and the Landweber method, are applied to solve multi-objective, multi-variable package design problems. In addition, a new hierarchical iterative piecewise linear algorithm is developed as a wrapper over LBP for an efficient solution in the design space.
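Both inversion tools named here are standard; a minimal sketch of each for a linear model $Ax \approx b$ (illustrative, not the authors' implementation):

```python
import numpy as np

def tikhonov(A, b, lam):
    """Tikhonov regularization: minimize ||Ax - b||^2 + lam * ||x||^2."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

def landweber(A, b, omega, iters):
    """Landweber iteration: gradient descent on ||Ax - b||^2 / 2.
    Converges for 0 < omega < 2 / ||A||_2^2; early stopping regularizes."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = x + omega * A.T @ (b - A @ x)
    return x

# Toy problem: a well-posed random system with small additive noise
rng = np.random.default_rng(1)
A = rng.normal(size=(50, 20))
b = A @ rng.normal(size=20) + 0.01 * rng.normal(size=50)
omega = 1.0 / np.linalg.norm(A, 2) ** 2  # spectral norm bound on the step
print(np.linalg.norm(A @ tikhonov(A, b, 1e-3) - b))
print(np.linalg.norm(A @ landweber(A, b, omega, 5000) - b))
```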
Abstract:
Frequent episode discovery is one of the methods used for temporal pattern discovery in sequential data. An episode is a partially ordered set of nodes with each node associated with an event type. For more than a decade, algorithms existed for episode discovery only when the associated partial order is total (serial episode) or trivial (parallel episode). Recently, the literature has seen algorithms for discovering episodes with general partial orders. In frequent pattern mining, the threshold beyond which a pattern is inferred to be interesting is typically user-defined and arbitrary. One way of addressing this issue in the pattern mining literature has been based on the framework of statistical hypothesis testing. This paper presents a method of assessing the statistical significance of episode patterns with general partial orders. A method is proposed to calculate thresholds, on the non-overlapped frequency, beyond which an episode pattern would be inferred to be statistically significant. The method is first explained for the case of injective episodes with general partial orders. An injective episode is one where event types are not allowed to repeat. Later it is pointed out how the method can be extended to the class of all episodes. The significance threshold calculations for general partial order episodes proposed here also generalize the existing significance results for serial episodes. Through simulation studies, the usefulness of these statistical thresholds in pruning uninteresting patterns is illustrated.
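A schematic version of such a frequency threshold (a generic normal-approximation calculation under an i.i.d. null model; the paper's actual null model and counts for general partial orders are more involved):

```python
from math import sqrt
from statistics import NormalDist

def frequency_threshold(T, p_null, eps=0.01):
    """Smallest frequency at which a pattern is declared significant at
    level eps, using a normal approximation to the null count
    distribution over T possible occurrence positions."""
    mu = T * p_null                       # expected count under the null
    sigma = sqrt(T * p_null * (1 - p_null))
    z = NormalDist().inv_cdf(1 - eps)     # upper-tail critical value
    return mu + z * sigma

# Example: 10^5 positions, null occurrence probability 10^-3, eps = 1%
print(frequency_threshold(100_000, 1e-3))  # ~ 123: prune patterns below this
```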
Abstract:
Ice volume estimates are crucial for assessing the water reserves stored in glaciers. Due to its large glacier coverage, such estimates are of particular interest for the Himalayan-Karakoram (HK) region. In this study, different existing methodologies are used to estimate the ice reserves: three area-volume relations, one slope-dependent volume estimation method, and two ice-thickness distribution models are applied to a recent, detailed, and complete glacier inventory of the HK region, spanning the period 2000-2010 and revealing an ice coverage of 40 775 km². An uncertainty and sensitivity assessment is performed to investigate the influence of the observed glacier area and important model parameters on the resulting total ice volume. Results of the two ice-thickness distribution models are validated with local ice-thickness measurements at six glaciers. The resulting ice volumes for the entire HK region range from 2955 to 4737 km³, depending on the approach. This range is lower than most previous estimates. Results from the ice-thickness distribution models and the slope-dependent thickness estimations agree well with measured local ice thicknesses. However, total volume estimates from area-related relations are larger than those from other approaches. The study provides evidence of the significant effect of the selected method on the results and underlines the importance of a careful and critical evaluation.
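The simplest of the approaches compared, area-volume scaling, can be sketched as follows (with one widely used parameter pair from the literature; the study's three relations use different calibrations):

```python
def volume_area_scaling(areas_km2, c=0.034, gamma=1.375):
    """Estimate total ice volume (km^3) from per-glacier areas (km^2)
    via V = c * A**gamma. The values c = 0.034, gamma = 1.375 are the
    commonly cited glacier parameters of Bahr et al. (1997)."""
    return sum(c * a ** gamma for a in areas_km2)

# Toy inventory: the relation is nonlinear, so it must be applied
# glacier by glacier, not to the summed regional area.
print(volume_area_scaling([1.0, 5.0, 120.0]))
```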