Abstract:
Statistical learning algorithms provide a viable framework for geotechnical engineering modeling. This paper describes two statistical learning algorithms applied to site characterization modeling based on standard penetration test (SPT) data. More than 2700 field SPT values (N) have been collected from 766 boreholes spread over an area of 220 sq. km in Bangalore. To obtain the corrected value (N_c), the N values have been corrected for different parameters such as overburden stress, size of borehole, type of sampler, length of connecting rod, etc. In the three-dimensional site characterization model, the function N_c = N_c(X, Y, Z), where X, Y and Z are the coordinates of a point corresponding to an N_c value, is to be approximated, so that the N_c value at any half-space point in Bangalore can be determined. The first algorithm uses the least-squares support vector machine (LSSVM), which is a ridge-regression-type support vector machine. The second algorithm uses the relevance vector machine (RVM), which combines the strengths of kernel-based methods and Bayesian theory to establish the relationships between a set of input vectors and a desired output. The paper also presents a comparative study between the developed LSSVM and RVM models for site characterization. Copyright (C) 2009 John Wiley & Sons, Ltd.
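A minimal sketch of LSSVM regression of the kind named above, assuming an RBF kernel and made-up (X, Y, Z) -> N_c data; it illustrates the ridge-regression-type dual linear system, not the authors' exact model or hyperparameters.

```python
import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    # Gaussian (RBF) kernel matrix between two sets of points
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def lssvm_fit(X, y, gamma=10.0, sigma=1.0):
    n = len(y)
    K = rbf_kernel(X, X, sigma)
    # LSSVM optimality conditions give one linear system:
    # [ 0   1^T         ] [b]   [0]
    # [ 1   K + I/gamma ] [a] = [y]
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]  # bias b, dual coefficients alpha

def lssvm_predict(Xtrain, b, alpha, Xnew, sigma=1.0):
    return rbf_kernel(Xnew, Xtrain, sigma) @ alpha + b

# Synthetic stand-in for borehole coordinates and corrected SPT values:
coords = np.random.rand(50, 3)
n_c = 10 + 5 * coords[:, 2] + 0.5 * np.random.randn(50)
b, alpha = lssvm_fit(coords, n_c)
print(lssvm_predict(coords, b, alpha, np.array([[0.5, 0.5, 0.5]])))
```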
Abstract:
Spike detection in neural recordings is the initial step in the creation of brain-machine interfaces. The Teager energy operator (TEO) treats a spike as an increase in the 'local' energy and detects this increase. The performance of the TEO in detecting action potential spikes suffers from its sensitivity to the frequency of spikes in the presence of the noise inherent in microelectrode array (MEA) recordings. The multiresolution TEO (mTEO) overcomes this shortcoming by tuning the parameter k to an optimal value m so as to match the frequency of the spike. In this paper, we present an algorithm for the mTEO using the multiresolution structure of wavelets along with inbuilt lowpass filtering of the subband signals. The algorithm is efficient and can be implemented for real-time processing of neural signals for spike detection. The performance of the algorithm is tested on a simulated neural signal with 10 spike templates obtained from [14]. The background noise is modeled as a colored Gaussian random process. Using the noise standard deviation and autocorrelation functions obtained from recorded data, background noise was simulated by an autoregressive (AR(5)) filter. The simulations show a spike detection accuracy of 90% and above, with less than 5% false positives at an SNR of 2.35 dB, as compared to the 80% accuracy and 10% false positives reported in [6] on simulated neural signals.
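A short sketch of the k-parameterised Teager energy operator and one simple multiresolution combination; the smoothing window, the rule for combining resolutions, and the threshold are assumptions for illustration, not the paper's wavelet-based algorithm.

```python
import numpy as np

def teo_k(x, k=1):
    # k-TEO: psi_k[x(n)] = x(n)^2 - x(n+k) * x(n-k)
    psi = np.zeros_like(x, dtype=float)
    psi[k:-k] = x[k:-k] ** 2 - x[2 * k:] * x[:-2 * k]
    return psi

def mteo(x, ks=(1, 2, 3, 4), win_len=7):
    window = np.hamming(win_len)
    window /= window.sum()
    # Lowpass-smooth each resolution, then take the pointwise maximum so the
    # resolution best matched to the spike frequency dominates.
    layers = [np.convolve(teo_k(x, k), window, mode="same") for k in ks]
    return np.max(np.stack(layers), axis=0)

# Detection by thresholding the mTEO output (threshold choice illustrative):
x = np.random.randn(1000)            # stand-in for an MEA recording
e = mteo(x)
spike_indices = np.where(e > 8 * np.median(e))[0]
```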
Abstract:
The paper proposes two methodologies for damage identification from measured natural frequencies of a contiguously damaged reinforced concrete beam, idealised with a distributed damage model. The first method identifies damage from Iso-Eigen-Value-Change contours, plotted between pairs of different frequencies. The performance of the method is checked for a wide variation of damage positions and extents. The method is also extended to a discrete structure in the form of a five-storey shear building, and the simplicity of the method is demonstrated. The second method uses a smeared damage model, where the damage is assumed constant over different segments of the beam and the lengths and centres of these segments are the known inputs. A first-order perturbation method is used to derive the relevant expressions. Both methods are based on distributed damage models and have been checked against an experimental programme on simply supported reinforced concrete beams subjected to different stages of symmetric and unsymmetric damage. The results of the experiments are encouraging and show that both methods can be adopted together in a damage identification scenario.
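For orientation, the standard first-order eigenvalue perturbation result that underlies frequency-based damage identification is sketched below; the segment-wise form is a generic illustration under the stated assumptions, not the paper's exact expressions.

```latex
% For mass-normalised mode \phi_i and a stiffness change \Delta K due to
% damage (mass assumed unchanged), first-order perturbation gives
\Delta\lambda_i \;=\; \Delta(\omega_i^2) \;\approx\; \phi_i^{T}\,\Delta K\,\phi_i .
% For a smeared damage model with segment-wise constant damage extents d_j
% and segment stiffness contributions K_j, \Delta K = -\sum_j d_j K_j, so
\Delta\lambda_i \;\approx\; -\sum_j d_j\, \phi_i^{T} K_j \,\phi_i ,
% which is linear in the unknown damage extents d_j.
```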
Abstract:
The element-based piecewise smooth functional approximation in the conventional finite element method (FEM) results in discontinuous first and higher order derivatives across element boundaries. Despite the significant advantages of the FEM in modelling complicated geometries, a motivation in developing mesh-free methods has been the ease with which higher order globally smooth shape functions can be derived via the reproduction of polynomials. There is thus a case for combining these advantages in a so-called hybrid scheme or a 'smooth FEM' that, whilst retaining the popular mesh-based discretization, obtains shape functions with uniform C^p (p >= 1) continuity. One such recent attempt, a NURBS-based parametric bridging method (Shaw et al. 2008b), uses polynomial reproducing, tensor-product non-uniform rational B-splines (NURBS) over a typical FE mesh and relies upon a (possibly piecewise) bijective geometric map between the physical domain and a rectangular (cuboidal) parametric domain. The present work aims at a significant extension and improvement of this concept by replacing NURBS with DMS-splines (say, of degree n > 0) that are defined over triangles and provide C^(n-1) continuity across the triangle edges. This relieves the need for a geometric map that could precipitate ill-conditioning of the discretized equations. Delaunay triangulation is used to discretize the physical domain, and shape functions are constructed via the polynomial reproduction condition, which quite remarkably relieves the solution of its sensitive dependence on the selected knotsets. Derivatives of shape functions are also constructed based on the principle of reproduction of derivatives of polynomials (Shaw and Roy 2008a). Within the present scheme, the triangles also serve as background integration cells in weak formulations, thereby overcoming non-conformability issues. Numerical examples involving the evaluation of derivatives of targeted functions up to the fourth order and applications of the method to a few boundary value problems of general interest in solid mechanics over (non-simply connected) bounded domains in 2D are presented towards the end of the paper.
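The polynomial reproduction condition referred to above can be stated generically as follows (a standard form, not a restatement of the paper's derivation):

```latex
% Shape functions N_I(x), built over the DMS-spline basis at nodes x_I,
% are required to satisfy
\sum_{I} N_I(\mathbf{x})\, p(\mathbf{x}_I) \;=\; p(\mathbf{x})
\quad \text{for all polynomials } p \text{ of degree} \le n,
% which in particular enforces partition of unity (p \equiv 1) and
% linear completeness (p = x, \, p = y).
```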
Abstract:
We consider numerical solutions of nonlinear multiterm fractional integrodifferential equations, where the order of the highest derivative is fractional and positive but is otherwise arbitrary. Here, we extend and unify our previous work, where a Galerkin method was developed for efficiently approximating fractional order operators and where elements of the present differential algebraic equation (DAE) formulation were introduced. The DAE system developed here for arbitrary orders of the fractional derivative includes an added block of equations for each fractional order operator, as well as forcing terms arising from nonzero initial conditions. We motivate and explain the structure of the DAE in detail. We explain how nonzero initial conditions should be incorporated within the approximation. We point out that our approach approximates the system and not a specific solution. Consequently, some questions not easily accessible to solvers of initial value problems, such as stability analyses, can be tackled using our approach. Numerical examples show excellent accuracy. DOI: 10.1115/1.4002516
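For reference, a commonly used definition of the fractional derivative in such problems is the Caputo form, assumed here purely for illustration; the paper's own operator definition and Galerkin approximation are not reproduced.

```latex
% Caputo fractional derivative of order \alpha, with m-1 < \alpha < m,
% m a positive integer:
D^{\alpha} x(t) \;=\; \frac{1}{\Gamma(m-\alpha)}
  \int_{0}^{t} \frac{x^{(m)}(\tau)}{(t-\tau)^{\alpha-m+1}}\, d\tau .
% The hereditary (convolution) character of this operator is what a
% finite-dimensional Galerkin approximation must capture, and the
% integer-order initial values x(0), x'(0), ... enter as forcing terms.
```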
Abstract:
Relentless CMOS scaling coupled with lower design tolerances is making ICs increasingly susceptible to wear-out related permanent faults and transient faults, necessitating on-chip fault tolerance in future chip multiprocessors (CMPs). In this paper, we introduce a new energy-efficient fault-tolerant CMP architecture known as Redundant Execution using Critical Value Forwarding (RECVF). RECVF is based on two observations: (i) forwarding critical instruction results from the leading to the trailing core enables the latter to execute faster, and (ii) this speedup can be exploited to reduce energy consumption by operating the trailing core at a lower voltage-frequency level. Our evaluation shows that RECVF consumes 37% less energy than conventional dual modular redundant (DMR) execution of a program. It consumes only 1.26 times the energy of a non-fault-tolerant baseline and has a performance overhead of just 1.2%.
Abstract:
In this paper we present a cache coherence protocol for multistage interconnection network (MIN)-based multiprocessors with two distinct private caches: private-blocks caches (PCache) containing blocks private to a process and shared-blocks caches (SCache) containing data accessible by all processes. The architecture is extended by a coherence control bus connecting all shared-block cache controllers. Timing problems due to variable transit delays through the MIN are dealt with by introducing Transient states in the proposed cache coherence protocol. The impact of the coherence protocol on system performance is evaluated through a performance study of three phases. Assuming homogeneity of all nodes, a single-node queuing model (phase 3) is developed to analyze system performance. This model is solved for processor and coherence bus utilizations using the mean value analysis (MVA) technique with shared-blocks steady state probabilities (phase 1) and communication delays (phase 2) as input parameters. The performance of our system is compared to that of a system with an equivalent-sized unified cache and with a multiprocessor implementing a directory-based coherence protocol. System performance measures are verified through simulation.
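A minimal sketch of single-class exact mean value analysis (MVA), the queueing technique named above, as a generic illustration; the stations and service demands are invented and do not reproduce the paper's three-phase model.

```python
def mva(service_demands, n_customers):
    """Exact MVA for a closed, single-class network of queueing stations."""
    k = len(service_demands)
    q = [0.0] * k                        # mean queue lengths
    x = 0.0                              # system throughput
    for n in range(1, n_customers + 1):
        # Arrival theorem: residence time = demand * (1 + queue seen on arrival)
        r = [d * (1.0 + q[i]) for i, d in enumerate(service_demands)]
        x = n / sum(r)                   # throughput with n customers
        q = [x * ri for ri in r]         # Little's law per station
    utilisations = [x * d for d in service_demands]
    return x, utilisations, q

# Hypothetical stations: processor, memory via the MIN, coherence bus
throughput, utilisations, queues = mva([1.0, 0.4, 0.2], n_customers=16)
print(throughput, utilisations)
```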
Temperature dependent electrical transport behavior of InN/GaN heterostructure based Schottky diodes
Abstract:
InN/GaN heterostructure based Schottky diodes were fabricated by plasma-assisted molecular beam epitaxy. Temperature-dependent electrical transport measurements were carried out on the InN/GaN heterostructure. The barrier height and the ideality factor of the Schottky diodes were found to be temperature dependent. The temperature dependence of the barrier height indicates that the Schottky barrier height is inhomogeneous in nature at the heterostructure interface. The higher value of the ideality factor and its temperature dependence suggest that the current transport is dominated primarily by thermionic field emission (TFE) rather than thermionic emission (TE). The room-temperature barrier heights obtained using the TE and TFE models were 1.08 and 1.43 eV, respectively. (C) 2011 American Institute of Physics. doi: 10.1063/1.3549685
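The standard thermionic emission relation used to extract the barrier height and ideality factor from I-V data is given below in its generic form; it is shown for context and is not a restatement of the paper's fitting procedure.

```latex
% Thermionic emission (TE) current across a Schottky barrier:
I \;=\; A^{*} A\, T^{2} \exp\!\left(-\frac{q\phi_B}{k_B T}\right)
        \left[\exp\!\left(\frac{qV}{n k_B T}\right) - 1\right],
% where A^{*} is the effective Richardson constant, A the diode area,
% \phi_B the barrier height and n the ideality factor. A strongly
% temperature-dependent \phi_B and n well above unity are the usual
% signatures that thermionic field emission (TFE) dominates over TE.
```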
Abstract:
A simple, cost-effective and environment-friendly pathway for preparing a highly porous matrix of the giant dielectric material CaCu3Ti4O12 (CCTO) through combustion of a completely aqueous precursor solution is presented. The pathway yields phase-pure, impurity-free CCTO ceramic at an ultra-low temperature (700 degrees C) and is better than traditional solid-state reaction schemes, which fail to produce the pure phase even at temperatures as high as 1000 degrees C (Li, Schwartz, Phys. Rev. B 75, 012104). On grinding, the porous ceramic matrix produced CCTO powder with submicron particle size, averaging 300 nm. On sintering at 1050 degrees C for 5 h, the powder shows high dielectric constants (>10^4 at all frequencies from 100 Hz to 100 kHz) and low loss (with 0.05 as the lowest value), which is suitable for device applications. The reaction pathway is expected to be extensible to the preparation of other multifunctional complex perovskite materials. (C) 2010 Elsevier B.V. All rights reserved.
Abstract:
Timer-based mechanisms are often used in several wireless systems to help a given (sink) node select the best helper node among many available nodes. Specifically, a node transmits a packet when its timer expires, and the timer value is a function of its local suitability metric. In practice, the best node gets selected successfully only if no other node's timer expires within a `vulnerability' window after its timer expiry. In this paper, we provide a complete closed-form characterization of the optimal metric-to-timer mapping that maximizes the probability of success for any probability distribution function of the metric. The optimal scheme is scalable, distributed, and much better than the popular inverse metric timer mapping. We also develop an asymptotic characterization of the optimal scheme that is elegant and insightful, and accurate even for a small number of nodes.
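A Monte Carlo sketch of the selection mechanism described above, using the popular inverse-metric mapping as the baseline; the uniform metric distribution and parameter values are assumptions, and the optimal metric-to-timer mapping derived in the paper is not reproduced here.

```python
import numpy as np

def success_probability(n_nodes, c, t_max, n_trials=50_000, seed=0):
    """Estimate P(best node selected) under an inverse-metric timer mapping.

    Selection succeeds only if no other node's timer expires within the
    vulnerability window c after the earliest timer expiry.
    """
    rng = np.random.default_rng(seed)
    successes = 0
    for _ in range(n_trials):
        metrics = rng.random(n_nodes)          # metric CDF assumed uniform(0, 1)
        timers = t_max * (1.0 - metrics)       # larger metric -> earlier expiry
        order = np.sort(timers)
        if n_nodes == 1 or order[1] - order[0] >= c:
            successes += 1
    return successes / n_trials

print(success_probability(n_nodes=10, c=0.05, t_max=1.0))
```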
Abstract:
In this paper, we consider the problem of selecting, for any given positive integer k, the top-k nodes in a social network, based on a certain measure appropriate for the social network. This problem is relevant in many settings such as analysis of co-authorship networks, diffusion of information, viral marketing, etc. However, in most situations, this problem turns out to be NP-hard. The existing approaches for solving this problem are based on approximation algorithms and assume that the objective function is sub-modular. In this paper, we propose a novel and intuitive algorithm based on the Shapley value, for efficiently computing an approximate solution to this problem. Our proposed algorithm does not use the sub-modularity of the underlying objective function and hence it is a general approach. We demonstrate the efficacy of the algorithm using a co-authorship data set from e-print arXiv (www.arxiv.org), having 8361 authors.
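A short sketch of Shapley-value-based node ranking via random permutations, shown as a generic illustration of the idea; the coverage-style characteristic function and the toy graph are assumptions, not the paper's measure, algorithm, or data set.

```python
import random

def shapley_values(nodes, value_fn, n_samples=2000, seed=0):
    """Monte Carlo Shapley estimate: average marginal contributions over
    random permutations of the node set."""
    rng = random.Random(seed)
    shapley = {v: 0.0 for v in nodes}
    for _ in range(n_samples):
        perm = nodes[:]
        rng.shuffle(perm)
        coalition, prev_value = set(), 0.0
        for v in perm:
            coalition.add(v)
            cur_value = value_fn(coalition)
            shapley[v] += cur_value - prev_value   # marginal contribution of v
            prev_value = cur_value
    return {v: s / n_samples for v, s in shapley.items()}

# Toy characteristic function: size of the coalition plus its neighbourhood.
adj = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}
cover = lambda s: len(s | set().union(*(adj[v] for v in s)))
scores = shapley_values(list(adj), cover)
top_k = sorted(scores, key=scores.get, reverse=True)[:2]
print(top_k)
```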
Abstract:
In this paper we address the problem of forming procurement networks for items with value adding stages that are linearly arranged. Formation of such procurement networks involves a bottom-up assembly of complex production, assembly, and exchange relationships through supplier selection and contracting decisions. Research in supply chain management has emphasized that such decisions need to take into account the fact that suppliers and buyers are intelligent and rational agents who act strategically. In this paper, we view the problem of procurement network formation (PNF) for multiple units of a single item as a cooperative game where agents cooperate to form a surplus maximizing procurement network and then share the surplus in a fair manner. We study the implications of using the Shapley value as a solution concept for forming such procurement networks. We also present a protocol, based on the extensive form game realization of the Shapley value, for forming these networks.
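The Shapley value used as the surplus-sharing solution concept is defined below in its standard form; the interpretation in terms of network assembly orders is a gloss, not the paper's protocol.

```latex
% For a cooperative game (N, v), agent i's Shapley share is
\varphi_i(v) \;=\; \sum_{S \subseteq N \setminus \{i\}}
  \frac{|S|!\,\bigl(|N|-|S|-1\bigr)!}{|N|!}\,
  \bigl[\,v(S \cup \{i\}) - v(S)\,\bigr],
% i.e. agent i's marginal contribution to the coalition surplus, averaged
% over all orders in which the agents can join the procurement network.
```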
Abstract:
In this article, finite-time consensus algorithms for a swarm of self-propelling agents based on sliding mode control and graph algebraic theories are presented. Algorithms are developed for swarms that can be described by balanced graphs and that are composed of agents with dynamics of the same order. Agents with first and higher order dynamics are considered. For consensus, the agents' inputs are chosen to enforce sliding mode on surfaces dependent on the graph Laplacian matrix. The algorithms allow for the tuning of the time taken by the swarm to reach a consensus as well as the consensus value. As an example, the case when a swarm of first-order agents is in cyclic pursuit is considered.
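A minimal numerical sketch of a Laplacian-based sliding-mode consensus rule for first-order agents in cyclic pursuit; the control gain, step size and the specific input u = -k*sign(Lx) are illustrative assumptions rather than the article's algorithms.

```python
import numpy as np

n, k, dt = 5, 1.0, 1e-3
# Laplacian of a directed cycle (cyclic pursuit): agent i follows agent i+1.
P = np.roll(np.eye(n), 1, axis=1)          # P[i, i+1] = 1
L = np.eye(n) - P

x = np.array([0.0, 2.0, 5.0, 7.0, 11.0])   # initial agent states
for _ in range(20_000):
    s = L @ x                              # sliding surface per agent
    x = x + dt * (-k * np.sign(s))         # discontinuous consensus input
print(x)  # states move toward a common consensus value (chatter ~ k*dt)
```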
Optimised form of acceleration correction algorithm within SPH-based simulations of impact mechanics
Abstract:
In the context of SPH-based simulations of impact dynamics, an optimised and automated form of the acceleration correction algorithm (Shaw and Reid, 2009a) is developed so as to remove spurious high frequency oscillations in computed responses whilst retaining the stabilizing characteristics of the artificial viscosity in the presence of shocks and layers with sharp gradients. A rational framework for an insightful characterisation of the erstwhile acceleration correction method is first set up. This is followed by the proposal of an optimised version of the method, wherein the strength of the correction term in the momentum balance and energy equations is optimised. For the first time, this leads to an automated procedure to arrive at the artificial viscosity term. In particular, this is achieved by taking a spatially varying response-dependent support size for the kernel function through which the correction term is computed. The optimum value of the support size is deduced by minimising the (spatially localised) total variation of the high oscillation in the acceleration term with respect to its (local) mean. The derivation of the method, its advantages over the heuristic method and issues related to its numerical implementation are discussed in detail. (C) 2011 Elsevier Ltd. All rights reserved.
Abstract:
The storage capacity of an activated carbon bed is studied using a 2D transport model with constant inlet flow conditions. The predicted filling times and variation in bed pressure and temperature are in good agreement with experimental observations obtained using a 1.82 L prototype ANG storage cylinder. Storage efficiencies based on the maximum achievable V/V (volume of gas/volume of container) and filling times are used to quantify the performance of the charging process. For the high permeability beds used in the experiments, storage efficiencies are controlled by the rate of heat removal. Filling times, defined as the time at which the bed pressure reaches 3.5 MPa, range from 120 to 3.4 min for inlet flow rates of 1.0 L min(-1) and 30.0 L min(-1), respectively. The corresponding storage efficiencies, eta(s), vary from 90% to 76%, respectively. Simulations with L/D ratios ranging from 0.35 to 7.8 indicate that the storage efficiencies can be improved with an increase in the L/D ratio and/or with water-cooled convection. Thus, for an inlet flow rate of 30.0 L min(-1), an eta(s) value of 90% can be obtained with water cooling for an L/D ratio of 7.8 and a filling time of a few minutes. In the absence of water cooling the eta(s) value reduces to 83% at the same L/D ratio. Our study suggests that with an appropriate choice of cylinder dimensions, solutions based on convective cooling during adsorptive storage are possible with some compromise in the storage capacity.