164 results for Empirical Functions


Relevance:

20.00%

Publisher:

Abstract:

It is shown that at most n + 3 tests are required to detect any single stuck-at fault in an AND gate, or any single faulty EXCLUSIVE-OR (EOR) gate, in a Reed-Muller canonical form realization of a switching function.
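
As a hedged illustration of the testing setting (not taken from the paper; the three-variable function, fault model details, and test set below are made up), the sketch simulates an AND/EXCLUSIVE-OR Reed-Muller realization, injects single stuck-at faults on AND-gate inputs, and checks whether a small test set detects all of them.

```python
# Illustrative sketch (not from the paper): simulate a positive-polarity
# Reed-Muller (AND/EXCLUSIVE-OR) realization of a small switching function
# and check whether a candidate test set detects every single stuck-at fault
# injected on an AND-gate input. Function, terms and test set are hypothetical.

# f(x1, x2, x3) = 1 XOR x1 XOR x1x2 XOR x2x3; each term is a tuple of input indices
TERMS = [(), (0,), (0, 1), (1, 2)]

def rm_output(x, stuck=None):
    """Evaluate the AND/EOR network; stuck = (term_idx, input_idx, value)
    forces one AND-gate input line to a constant (stuck-at fault)."""
    out = 0
    for t, term in enumerate(TERMS):
        val = 1
        for i in term:
            bit = x[i]
            if stuck is not None and stuck[0] == t and stuck[1] == i:
                bit = stuck[2]          # faulty input line
            val &= bit
        out ^= val                       # EXCLUSIVE-OR of all product terms
    return out

def detects(tests, stuck):
    """A test detects the fault if faulty and fault-free outputs differ."""
    return any(rm_output(x) != rm_output(x, stuck) for x in tests)

# Hypothetical test set; check it against all single stuck-at-0/1 AND-input faults
tests = [(0, 0, 0), (1, 1, 1), (1, 0, 1), (0, 1, 1), (1, 1, 0)]
faults = [(t, i, v) for t, term in enumerate(TERMS) for i in term for v in (0, 1)]
print(all(detects(tests, f) for f in faults))
```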

Relevance:

20.00%

Publisher:

Abstract:

A nonexhaustive procedure for obtaining minimal Reed-Muller canonical (RMC) forms of switching functions is presented. The procedure is a modification of one presented earlier in the literature and yields an upper bound on the number of RMC forms that must be examined to choose a minimal one. It is shown that the task of obtaining minimal RMC forms is simplified in the case of symmetric functions and self-dual functions.
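
For context, here is a minimal sketch of computing the positive-polarity Reed-Muller coefficients of a function from its truth table via the standard GF(2) butterfly; enumerating polarities on top of such a transform is the kind of step a minimal-RMC search performs. This is an illustration, not the paper's procedure, and the example function is made up.

```python
# Sketch (not the paper's procedure): compute the positive-polarity
# Reed-Muller coefficients of a switching function from its truth table
# using the in-place GF(2) butterfly (Reed-Muller / ANF transform).
def reed_muller_coeffs(truth_table):
    """truth_table[i] = f(binary expansion of i); length must be 2**n."""
    a = list(truth_table)
    n = len(a).bit_length() - 1
    step = 1
    for _ in range(n):
        for i in range(0, len(a), 2 * step):
            for j in range(i, i + step):
                a[j + step] ^= a[j]      # XOR butterfly over GF(2)
        step *= 2
    return a                             # a[m] = coefficient of the monomial
                                         # whose variables are the set bits of m

# Example: f(x2, x1, x0) = x0 XOR x1x2, truth table indexed by (x2 x1 x0)
tt = [0, 1, 0, 1, 0, 1, 1, 0]
print(reed_muller_coeffs(tt))            # coefficients 1 for x0 and x1x2 only
```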

Relevance:

20.00%

Publisher:

Abstract:

We extend some of the classical connections between automata and logic, due to Büchi (1960) [5] and McNaughton and Papert (1971) [12], to languages of finitely varying functions or “signals”. In particular, we introduce a natural class of automata for generating finitely varying functions and show that it coincides, in terms of language definability, with a natural monadic second-order logic interpreted over finitely varying functions (Rabinovich, 2002 [15]). We also identify a “counter-free” subclass of these automata which characterises the first-order definable languages of finitely varying functions. Our proofs mainly factor through the classical results for word languages. These results have applications in automata characterisations for continuously interpreted real-time logics like Metric Temporal Logic (MTL) (Chevalier et al., 2006, 2007 [6], [7]).

Relevance:

20.00%

Publisher:

Abstract:

EEG recordings are often contaminated with ocular artifacts such as eye blinks and eye movements. These artifacts may obscure underlying brain activity in the electroencephalogram (EEG) data and make analysis of the data difficult. In this paper, we explore the use of an empirical mode decomposition (EMD) based filtering technique to correct eye blink and eye movement artifacts in single-channel EEG data. In this method, the single-channel EEG data containing the ocular artifact is segmented such that the artifact in each segment can be treated as a slowly varying trend in the data, and EMD is used to remove that trend. The filtering is done by partial reconstruction from the components of the decomposition. The method is completely data dependent, and hence adaptive and nonlinear. Experimental results are provided to check the applicability of the method on real EEG data, and the results are quantified using power spectral density (PSD) as a measure. The method gives fairly good results and does not require any prior knowledge of the artifacts or of the EEG data used.
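
A minimal sketch of the partial-reconstruction idea, assuming the IMFs come from some EMD implementation; the third-party PyEMD package, the synthetic "EEG" segment, and the number of components dropped are illustrative assumptions, not the authors' implementation or data.

```python
# Minimal sketch of partial reconstruction from an empirical mode decomposition.
# The synthetic signal and the choice of components to drop are assumptions.
import numpy as np
from PyEMD import EMD          # third-party package "EMD-signal" (assumed available)

fs = 256                                        # hypothetical sampling rate (Hz)
t = np.arange(2 * fs) / fs
eeg = np.sin(2 * np.pi * 10 * t) + 0.3 * np.random.randn(t.size)
blink = 3.0 * np.exp(-((t - 1.0) ** 2) / 0.01)  # slow, large-amplitude "blink" trend
contaminated = eeg + blink

imfs = EMD().emd(contaminated)                  # rows: fastest IMF first, trend last

# Drop the last few (slowly varying) components, which capture the ocular trend,
# and partially reconstruct the signal from the remaining IMFs.
n_drop = 2                                      # assumed; chosen per segment in practice
cleaned = imfs[:-n_drop].sum(axis=0)
print(contaminated.std(), cleaned.std())
```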

Relevance:

20.00%

Publisher:

Abstract:

Inventory management (IM) has a decisive role in enhancing the competitiveness of the manufacturing industry. Therefore, major manufacturing industries follow IM practices with the intention of improving their performance. However, efforts to introduce IM in small and medium enterprises (SMEs) are very limited owing to lack of initiative, lack of expertise, and financial constraints. This paper aims to provide a guideline for entrepreneurs to enhance their IM performance, as it presents the results of a survey-based study carried out among machine tool SMEs in Bangalore. Having established the significance of inventory as an input, we probed the relationship between IM performance and the economic performance of these SMEs. To the extent possible, all the factors of production and performance indicators were deliberately considered in purely economic terms. All economic performance indicators adopted appear to have a positive and significant association with IM performance in SMEs. On the whole, we found that SMEs which are IM efficient are likely to perform better on the economic front as well and to experience higher returns to scale.

Relevance:

20.00%

Publisher:

Abstract:

Close relationships between guessing functions and length functions are established. Good length functions lead to good guessing functions. In particular, guessing in increasing order of Lempel-Ziv lengths has certain universality properties for finite-state sources. As an application, these results show that hiding the parameters of the key-stream-generating source in a private-key cryptosystem may not enhance the privacy of the system, where the privacy level is measured by the difficulty of brute-force guessing of the key stream.
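
As a hedged illustration, the sketch below ranks candidate key streams by their LZ78 phrase count, a simple stand-in for a Lempel-Ziv length function, and guesses in increasing order of that length; the candidate list is made up and the length function is only illustrative.

```python
# Illustrative sketch: guess candidate key streams in increasing order of a
# Lempel-Ziv style length (here the LZ78 phrase count). Candidates are made up.
def lz78_length(s):
    """Number of phrases in the LZ78 parsing of the string s."""
    dictionary, phrase, count = set(), "", 0
    for ch in s:
        phrase += ch
        if phrase not in dictionary:
            dictionary.add(phrase)
            count += 1
            phrase = ""
    return count + (1 if phrase else 0)

candidates = ["0000000000", "0101010101", "1101001011", "1111100000"]
guessing_order = sorted(candidates, key=lz78_length)
for rank, key in enumerate(guessing_order, 1):
    print(rank, key, lz78_length(key))
```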

Relevance:

20.00%

Publisher:

Abstract:

A rotating beam finite element, in which the interpolating shape functions are obtained by satisfying the governing static homogeneous differential equation of Euler–Bernoulli rotating beams, is developed in this work. The shape functions turn out to be rational functions which also depend on the rotation speed and the element position along the beam, and which account for the centrifugal stiffening effect. These rational functions reduce to the Hermite cubics when the rotation speed becomes zero. The new element is applied to the static and dynamic analysis of rotating beams. In the static case, a cantilever beam with a tip load and a radially varying axial force is considered. It is found that the new element approximates the analytical series solution for the tip deflection very well, compared with the classical finite element based on Hermite cubic shape functions. In the dynamic analysis, the new element is applied to uniform and tapered rotating beams with cantilever and hinged boundary conditions to determine the natural frequencies, and the results compare very well with published results in the literature.
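
For reference, here is a sketch of the classical Hermite cubic shape functions that serve as the baseline element above and that the rotation-speed-dependent rational shape functions reduce to at zero speed; the element length is an assumed value, and the rational functions themselves are not reproduced here.

```python
# Sketch of the classical Hermite cubic shape functions on an element of length le,
# the baseline referred to above and the zero-rotation-speed limit of the rational
# shape functions. The numerical element length is assumed, for illustration only.
import numpy as np

def hermite_cubics(xi, le):
    """Shape functions for the nodal DOFs (w1, w1', w2, w2') at xi = x/le in [0, 1]."""
    N1 = 1 - 3 * xi**2 + 2 * xi**3          # transverse displacement, node 1
    N2 = le * (xi - 2 * xi**2 + xi**3)      # slope, node 1
    N3 = 3 * xi**2 - 2 * xi**3              # transverse displacement, node 2
    N4 = le * (xi**3 - xi**2)               # slope, node 2
    return np.array([N1, N2, N3, N4])

le = 0.5                                     # assumed element length (m)
for xi in (0.0, 0.5, 1.0):
    print(xi, hermite_cubics(xi, le))
```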

Relevance:

20.00%

Publisher:

Abstract:

The window technique is one of the simplest methods for designing Finite Impulse Response (FIR) filters. It uses special functions to truncate an infinite sequence to a finite one. In this paper, we propose window techniques based on integer sequences. The striking feature of the proposed work is that it avoids the problems of floating-point representation and the associated inaccuracy, since the sequences consist only of integers. Some of these integer window sequences yield sharp transitions, while others result in zero ripple in the passband and stopband.
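
A hedged sketch of the general idea: window an ideal low-pass impulse response with a window built from an integer sequence. Binomial coefficients are used purely as an example of such a sequence; the paper's specific sequences, filter length, and cutoff are not reproduced here.

```python
# Illustration only: FIR design by windowing an ideal low-pass impulse response
# with an integer-sequence window (binomial coefficients as an example sequence).
import numpy as np
from math import comb

N = 31                                          # assumed odd filter length
n = np.arange(N) - (N - 1) / 2
wc = 0.3 * np.pi                                # assumed cutoff frequency (rad/sample)

h_ideal = (wc / np.pi) * np.sinc(wc * n / np.pi)          # ideal low-pass response
win = np.array([comb(N - 1, k) for k in range(N)], dtype=float)  # integer sequence
h = h_ideal * (win / win.max())                 # scale only; the sequence is integer

# Inspect the magnitude response at a few frequencies
w = np.linspace(0, np.pi, 5)
H = np.array([abs(np.sum(h * np.exp(-1j * wk * np.arange(N)))) for wk in w])
print(np.round(H, 4))
```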

Relevance:

20.00%

Publisher:

Abstract:

Electrical conduction in insulating materials is a complex process, and several theories have been suggested in the literature. Many phenomenological empirical models are in use in the DC cable literature. However, the impact of using different models for cable insulation has not been investigated so far, beyond claims of relative accuracy. The steady-state electric field in DC cable insulation is known to be a strong function of the DC conductivity. The DC conductivity, in turn, is a complex function of electric field and temperature. As a result, under certain conditions, the stress at the cable screen is higher than that at the conductor boundary. The paper presents detailed investigations of the different empirical conductivity models suggested in the literature for HV DC cable applications. It is expressly shown that certain models give rise to erroneous results in electric field and temperature computations. It is pointed out that the use of these models in the design or evaluation of cables will lead to errors.
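
For illustration only, the sketch below evaluates two empirical conductivity forms commonly quoted in the HVDC cable literature, one with exponential and one with power-law field dependence; the coefficient values are assumptions and these are not presented as the specific models or parameters investigated in the paper.

```python
# Two commonly quoted empirical DC-conductivity forms, shown only to illustrate
# how the choice of model changes the field/temperature dependence.
# All coefficients below are assumed, not taken from the paper.
import numpy as np

def sigma_exponential(E, T, s0=1e-16, a=0.08, b=0.03):
    """sigma = s0 * exp(a*T) * exp(b*E)   (T in deg C, E in kV/mm)."""
    return s0 * np.exp(a * T) * np.exp(b * E)

def sigma_power_law(E, T, s0=1e-16, a=0.08, g=1.5):
    """sigma = s0 * exp(a*T) * E**g       (power-law field dependence)."""
    return s0 * np.exp(a * T) * E**g

E = np.linspace(5.0, 30.0, 6)     # kV/mm, assumed stress range across the insulation
T = 70.0                          # deg C, assumed conductor-side temperature
print(sigma_exponential(E, T))
print(sigma_power_law(E, T))
```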

Relevance:

20.00%

Publisher:

Abstract:

A new rotating beam finite element is developed in which the basis functions are obtained from the exact solution of the governing static homogeneous differential equation of a stiff string, which results from an approximation of the rotating beam equation. These shape functions depend on the rotation speed and the element position along the beam, and account for the centrifugal stiffening effect. Using this new element and the Hermite cubic finite element, a convergence study of natural frequencies is performed, and it is found that the new element converges much more rapidly than the conventional Hermite cubic element for the first two modes at higher rotation speeds. The new element is also applied to uniform and tapered rotating beams to determine the natural frequencies, and the results compare very well with published results in the literature.
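
A minimal sketch of the construction under stated assumptions: the static homogeneous stiff-string equation EI·w'''' - T·w'' = 0 has the general solution c1 + c2·x + c3·cosh(kx) + c4·sinh(kx) with k = sqrt(T/EI), and shape functions follow by enforcing unit nodal displacement/slope conditions. The values of EI, T, and the element length below are made up; in the actual element the tension would come from the centrifugal force at the element's position.

```python
# Sketch: build shape functions from the exact static stiff-string solution basis
# [1, x, cosh(kx), sinh(kx)], k = sqrt(T/EI), by enforcing nodal conditions.
# EI, T and le are assumed values for illustration only.
import numpy as np

def stiff_string_shape_functions(EI, T, le, npts=5):
    k = np.sqrt(T / EI)
    def basis(x):                       # basis values and first derivatives at x
        return (np.array([1.0, x, np.cosh(k * x), np.sinh(k * x)]),
                np.array([0.0, 1.0, k * np.sinh(k * x), k * np.cosh(k * x)]))
    b0, db0 = basis(0.0)
    bL, dbL = basis(le)
    A = np.vstack([b0, db0, bL, dbL])   # rows: w(0), w'(0), w(le), w'(le)
    C = np.linalg.solve(A, np.eye(4))   # column j: coefficients of shape function N_j
    xs = np.linspace(0.0, le, npts)
    N = np.array([[basis(x)[0] @ C[:, j] for j in range(4)] for x in xs])
    return N                            # N[i, j] = N_j evaluated at xs[i]

print(np.round(stiff_string_shape_functions(EI=50.0, T=2000.0, le=0.5), 4))
```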

Relevance:

20.00%

Publisher:

Abstract:

We consider a scenario in which a wireless sensor network is formed by randomly deploying n sensors to measure some spatial function over a field, with the objective of computing a function of the measurements and communicating it to an operator station. We restrict ourselves to the class of type-threshold functions (as defined in the work of Giridhar and Kumar, 2005), of which max, min, and indicator functions are important examples: our discussions are couched in terms of the max function. We view the problem as one of message-passing distributed computation over a geometric random graph. The network is assumed to be synchronous, and the sensors synchronously measure values and then collaborate to compute and deliver the function computed with these values to the operator station. Computation algorithms differ in (1) the communication topology assumed and (2) the messages that the nodes need to exchange in order to carry out the computation. The focus of our paper is to establish (in probability) scaling laws for the time and energy complexity of the distributed function computation over random wireless networks, under the assumption of centralized contention-free scheduling of packet transmissions. First, without any constraint on the computation algorithm, we establish scaling laws for the computation time and energy expenditure for one-time maximum computation. We show that for an optimal algorithm, the computation time and energy expenditure scale, respectively, as Θ(√(n/log n)) and Θ(n) asymptotically as the number of sensors n → ∞. Second, we analyze the performance of three specific computation algorithms that may be used in specific practical situations, namely, the tree algorithm, multihop transmission, and the Ripple algorithm (a type of gossip algorithm), and obtain scaling laws for the computation time and energy expenditure as n → ∞. In particular, we show that the computation time for these algorithms scales as Θ(√(n/log n)), Θ(n), and Θ(√(n log n)), respectively, whereas the energy expended scales as Θ(n), Θ(√(n/log n)), and Θ(√(n log n)), respectively. Finally, simulation results are provided to show that our analysis indeed captures the correct scaling. The simulations also yield estimates of the constant multipliers in the scaling laws. Our analyses throughout assume a centralized optimal scheduler, and hence, our results can be viewed as providing bounds for the performance with practical distributed schedulers.
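
As a hedged illustration of one of the algorithms analysed above, the sketch below runs a tree-style max computation over a random geometric graph, fusing values up a BFS tree under an idealized contention-free, level-by-level schedule; the node count, radius, and the use of the networkx package are assumptions made only for this sketch.

```python
# Illustrative sketch of a tree algorithm for one-shot max computation over a
# random geometric graph. Parameters are assumed; density is chosen so the
# graph is connected with high probability.
import random
import networkx as nx

n, radius = 200, 0.15
G = nx.random_geometric_graph(n, radius, seed=1)
sink = 0                                      # operator station / root
tree = nx.bfs_tree(G, sink)                   # directed edges point away from the root

values = {v: random.random() for v in G}      # synthetic sensor measurements
partial = dict(values)

# Process nodes from the deepest level upward: each node fuses its children's
# partial maxima, then forwards a single message to its parent.
order = list(nx.topological_sort(tree))       # root first, leaves last
for v in reversed(order):
    for child in tree.successors(v):
        partial[v] = max(partial[v], partial[child])

depth = max(nx.shortest_path_length(tree, sink).values())
ok = partial[sink] == max(values[v] for v in tree)   # max over nodes reached by the tree
print(ok, "rounds ~", depth, "messages =", tree.number_of_nodes() - 1)
```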

Relevance:

20.00%

Publisher:

Abstract:

Supercritical processes have been gaining importance in recent years in food, environmental, and pharmaceutical product processing. The design of any supercritical process needs accurate experimental data on the solubilities of solids in supercritical fluids (SCFs). Empirical equations are quite successful in correlating the solubilities of solid compounds in SCFs, both in the presence and in the absence of cosolvents. In this work, existing solvate complex models are discussed and a new set of empirical equations is proposed. These equations correlate the solubilities of solids in supercritical carbon dioxide (both in the presence and absence of cosolvents) as a function of temperature, the density of supercritical carbon dioxide, and the mole fraction of cosolvent. The accuracy of the proposed models was evaluated by correlating 15 binary and 18 ternary systems. The proposed models provided the best overall correlations. (C) 2009 Elsevier B.V. All rights reserved.
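
As an illustration of the kind of empirical correlation involved (not the models proposed in the paper), the sketch below fits a classic density-based Chrastil-type equation, ln(y) = k·ln(rho) + a/T + b, to made-up solubility data.

```python
# Hedged illustration: fit a Chrastil-type density-based correlation,
# ln(y) = k*ln(rho) + a/T + b, to hypothetical solubility data. This is a
# stand-in for the family of empirical equations discussed above.
import numpy as np

# Hypothetical data: temperature (K), CO2 density (kg/m^3), solubility (mole fraction)
T   = np.array([308.0, 318.0, 328.0, 308.0, 318.0, 328.0])
rho = np.array([700.0, 650.0, 600.0, 850.0, 800.0, 750.0])
y   = np.array([2.1e-5, 1.8e-5, 1.5e-5, 6.0e-5, 5.2e-5, 4.4e-5])

X = np.column_stack([np.log(rho), 1.0 / T, np.ones_like(T)])
coef, *_ = np.linalg.lstsq(X, np.log(y), rcond=None)
k, a, b = coef
y_pred = np.exp(X @ coef)
aard = np.mean(np.abs(y_pred - y) / y) * 100      # average absolute relative deviation (%)
print(k, a, b, round(aard, 2))
```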

Relevance:

20.00%

Publisher:

Abstract:

This paper deals with the development of simplified semi-empirical relations for predicting the residual velocities of small-calibre projectiles impacting mild steel target plates, normally or at an angle, and the ballistic limits of such plates. It is shown, for several impact cases for which test results on the perforation of mild steel plates are available, that most of the existing semi-empirical relations, which are applicable only to normal projectile impact, do not yield satisfactory estimates of residual velocity. Furthermore, it is difficult to quantify some of the empirical parameters present in these relations for a given problem. With an eye towards simplicity and ease of use, two new regression-based relations employing standard material parameters are discussed here for predicting residual velocity and ballistic limit for both normal and oblique impact. The two expressions differ in their use of quasi-static or strain-rate-dependent average plate material strength. Residual velocities yielded by the present semi-empirical models compare well with the experimental results. Additionally, ballistic limits from these relations show close correlation with the corresponding finite element-based predictions.
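
For orientation only, the sketch below evaluates the widely used Lambert-Jonas regression form v_r = a·(v_i^p - v_bl^p)^(1/p), a classical semi-empirical relation of the kind surveyed above; it is not one of the new relations proposed in the paper, and the parameter values are assumed.

```python
# Illustration only: a Lambert-Jonas type residual-velocity relation with
# assumed regression parameters; not the relations proposed in the paper.
def residual_velocity(v_impact, v_ballistic_limit, a=0.9, p=2.2):
    """Residual velocity (m/s); returns 0.0 at or below the ballistic limit."""
    if v_impact <= v_ballistic_limit:
        return 0.0
    return a * (v_impact**p - v_ballistic_limit**p) ** (1.0 / p)

v_bl = 420.0                                   # assumed ballistic limit (m/s)
for v_i in (400.0, 450.0, 550.0, 700.0):
    print(v_i, round(residual_velocity(v_i, v_bl), 1))
```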

Relevance:

20.00%

Publisher:

Abstract:

This paper proposes the use of empirical modeling techniques for building microarchitecture-sensitive models for compiler optimizations. The models we build relate program performance to settings of compiler optimization flags, associated heuristics, and key microarchitectural parameters. Unlike traditional analytical modeling methods, this relationship is learned entirely from data obtained by measuring performance at a small number of carefully selected compiler/microarchitecture configurations. We evaluate three different learning techniques in this context, viz. linear regression, adaptive regression splines, and radial basis function networks. We use the generated models to (a) predict program performance at arbitrary compiler/microarchitecture configurations, (b) quantify the significance of complex interactions between optimizations and the microarchitecture, and (c) efficiently search for 'optimal' settings of optimization flags and heuristics for any given microarchitectural configuration. Our evaluation using benchmarks from the SPEC CPU2000 suite suggests that accurate models (< 5% average prediction error) can be generated using a reasonable number of simulations. We also find that using compiler settings prescribed by a model-based search can improve program performance by as much as 19% (with an average of 9.5%) over highly optimized binaries.
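
A minimal sketch of the empirical-modelling idea under synthetic assumptions: fit a linear model (with one interaction term) that maps flag settings and a microarchitectural parameter to a stand-in performance measurement, then search flag settings for a fixed configuration. The "measurement" function, features, and parameter values are all made up for illustration and are not the paper's benchmarks or models.

```python
# Sketch: learn an empirical performance model over flag settings plus a
# microarchitectural parameter, then search flags for a fixed configuration.
# The training data and "true" response below are synthetic assumptions.
from itertools import product
import numpy as np

rng = np.random.default_rng(0)

def measure(flags, cache_kb):
    """Stand-in for a simulator/measurement run (cycles, lower is better)."""
    f1, f2, f3 = flags
    return (1000 - 60 * f1 - 40 * f2 + 25 * f3 * f1     # flag interaction effect
            - 0.5 * cache_kb + rng.normal(0, 5))

# Small, deliberately chosen sample of configurations (flags in {0,1}, cache size in KB)
configs = [(f, c) for f in product((0, 1), repeat=3) for c in (256, 1024)]
X = np.array([[f1, f2, f3, f1 * f3, c, 1.0] for (f1, f2, f3), c in configs])
yv = np.array([measure(f, c) for f, c in configs])
w, *_ = np.linalg.lstsq(X, yv, rcond=None)            # linear regression model

# Use the model to pick 'optimal' flags for a cache size not used in training
cache = 512
best = min(product((0, 1), repeat=3),
           key=lambda f: np.array([f[0], f[1], f[2], f[0] * f[2], cache, 1.0]) @ w)
print("predicted best flag setting:", best)
```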