105 results for benchmark

at Indian Institute of Science - Bangalore - India


Relevance: 20.00%

Abstract:

A new two-stage state feedback control design approach has been developed to monitor the voltage supplied to magnetorheological (MR) dampers for semi-active vibration control of the benchmark highway bridge. The first stage contains a primary controller, which provides the force required to obtain a desired closed-loop response of the system. In the second stage, an optimal dynamic inversion (ODI) approach has been developed to obtain the voltage to be supplied to each MR damper so that it provides the force prescribed by the primary controller. ODI combines optimization with dynamic inversion, so that an optimal voltage is supplied to each damper in a set. The proposed control design has been simulated for both the phase-I and phase-II studies of the recently developed benchmark highway bridge problem. The efficiency of the proposed controller is analyzed in terms of the performance indices defined in the benchmark problem definition. Simulation results demonstrate that the proposed approach generally reduces peak response quantities below those obtained from the sample semi-active controller, although a few response quantities increase. Overall, the proposed control approach is quite competitive with the sample semi-active control approach.
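As a minimal sketch of the second-stage idea, suppose each damper's force is linear in its commanded voltage, f_i = k_i * V_i (a hypothetical simplification of the MR damper model); the minimum-norm voltages matching a prescribed total force then follow in closed form:

```python
import numpy as np

def allocate_voltages(f_required, gains, v_max=10.0):
    """Minimum-norm voltage allocation under a hypothetical linear damper
    model f_i = gains[i] * V_i: solve sum_i gains[i]*V_i = f_required with
    the smallest sum of squared voltages, then clip to the admissible range."""
    gains = np.asarray(gains, dtype=float)
    v = gains * f_required / np.dot(gains, gains)  # least-norm solution
    return np.clip(v, 0.0, v_max)
```

In the paper the damper model is nonlinear and the inversion is posed as an optimization; this closed-form least-norm allocation only illustrates the structure of the problem.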

Relevance: 20.00%

Abstract:

We discuss constrained and semi-constrained versions of the next-to-minimal supersymmetric extension of the Standard Model (NMSSM), in which a singlet Higgs superfield is added to the two doublet superfields present in the minimal extension (MSSM). This leads to a richer Higgs and neutralino spectrum and allows for many interesting phenomena that are not present in the MSSM. In particular, light Higgs particles are still allowed by current constraints and could appear as decay products of the heavier Higgs states, rendering their search rather difficult at the LHC. We propose benchmark scenarios which address the new phenomenological features, consistent with present constraints from colliders and with the dark matter relic density, and with (semi-)universal soft terms at the GUT scale. We present the corresponding spectra for the Higgs particles, their couplings to gauge bosons and fermions, and their most important decay branching ratios. A brief survey of the search strategies for these states at the LHC is given.

Relevance: 20.00%

Abstract:

Fuzzy logic control (FLC) systems have been applied as effective control systems in various fields, including vibration control of structures. The advantage of this approach is its inherent robustness and its ability to handle non-linearities and uncertainties in structural behavior and loading. This study evaluates the three-dimensional benchmark control problem for a seismically excited highway bridge using ANFIS-driven hydraulic actuators. An ANN-based training strategy that considers both velocity and acceleration feedback, together with a fuzzy logic rule base, is developed. The present study needs only four accelerometers and four fuzzy rule bases to determine the control force, instead of the eight accelerometers and four displacement transducers used in the benchmark study problem. The results obtained are better than those obtained from the benchmark control algorithm.
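A zero-order Sugeno controller gives the flavor of computing a control force from acceleration and velocity feedback through a fuzzy rule base; the membership functions and rule consequents below are illustrative assumptions, not the trained ANFIS rules of the study:

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def fuzzy_force(accel, vel, f_max=1000.0):
    """Zero-order Sugeno sketch of a fuzzy rule base mapping acceleration
    and velocity feedback to a control force (all values illustrative)."""
    # Fuzzify each input into Negative / Zero / Positive sets.
    mu_a = [tri(accel, -2, -1, 0), tri(accel, -1, 0, 1), tri(accel, 0, 1, 2)]
    mu_v = [tri(vel, -2, -1, 0), tri(vel, -1, 0, 1), tri(vel, 0, 1, 2)]
    # Rule consequents: oppose the motion (rows: accel N/Z/P, cols: vel N/Z/P).
    out = np.array([[1.0, 0.5, 0.0], [0.5, 0.0, -0.5], [0.0, -0.5, -1.0]])
    w = np.outer(mu_a, mu_v)                     # rule firing strengths
    return f_max * float((w * out).sum() / max(w.sum(), 1e-12))
```

With both inputs zero, only the Zero/Zero rule fires and the commanded force is zero; strongly negative motion commands the full positive force.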

Relevance: 20.00%

Abstract:

Structures with governing equations having identical inertial terms but somewhat differing stiffness terms can be termed flexurally analogous. An example of such a pair is an axially loaded non-uniform beam and an unloaded uniform beam, for which an exact solution exists. We find that shared eigenpairs (frequency and mode shape) exist for a particular mode between such structures. Non-uniform beams with uniform axial loads, gravity-loaded beams and rotating beams are considered, and shared eigenpairs with uniform beams are found. In general, the derived flexural stiffness functions (FSFs) for the non-uniform beams, required for the existence of a shared eigenpair, have internal singularities, but some of the singularities can be removed by an appropriate selection of integration constants using the theory of limits. The derived functions yield insight into the relationship between the axial load and the flexural stiffness of axially loaded beam structures, and they can serve as benchmark solutions for numerical methods.
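As an illustration of the uniform-beam side of the analogy, the pinned-pinned uniform Euler-Bernoulli beam has the exact natural frequencies omega_n = (n*pi/L)^2 * sqrt(EI/m), the kind of closed-form result that serves as a benchmark for numerical methods. The sketch below checks the first frequency against a finite-difference eigensolver (parameter values are arbitrary):

```python
import numpy as np

# Uniform pinned-pinned Euler-Bernoulli beam: EI w'''' = m omega^2 w.
EI, m, L, N = 1.0, 1.0, 1.0, 400
h = L / (N + 1)
# Second-difference matrix with Dirichlet ends (w = 0).
D2 = (np.diag(-2.0 * np.ones(N)) + np.diag(np.ones(N - 1), 1)
      + np.diag(np.ones(N - 1), -1)) / h**2
# Pinned ends (w = w'' = 0) let the fourth derivative be D2 @ D2.
D4 = D2 @ D2
omega_fd = np.sqrt(EI / m * np.sort(np.linalg.eigvalsh(D4))[0])
omega_exact = (np.pi / L) ** 2 * np.sqrt(EI / m)   # analytic first frequency
```

The lowest finite-difference frequency converges to the analytic value at second order in h.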

Relevance: 10.00%

Abstract:

Some well-known formulations for topology optimization of compliant mechanisms can lead to lumped compliant mechanisms. In lumped compliance, most of the elastic deformation in a mechanism occurs at a few points, while the rest of the mechanism remains more or less rigid. Such points are referred to as point-flexures. It has been noted in the literature that high relative rotation is associated with point-flexures. The literature also contains a formulation of a local constraint on relative rotations to avoid lumped compliance; however, a global constraint is easier for a numerical optimization algorithm to handle than a local one. The current work presents a way of imposing a global constraint on relative rotations. This constraint is also simpler to implement, since it uses the linearized rotation at the center of each finite element to compute relative rotations. I show the results obtained by using this constraint on the following benchmark problems: the displacement inverter and the gripper.
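One common way to turn per-pair quantities into a single global constraint is a p-norm aggregate; the sketch below applies this to relative rotations between adjacent elements (the p-norm form and the exponent are illustrative assumptions, not the paper's formulation):

```python
import numpy as np

def global_rotation_constraint(theta, pairs, theta_max, p=8):
    """Aggregate relative rotations between connected finite elements into
    one global constraint via a p-norm (a common smooth-max surrogate).
    theta[i] is the linearized rotation at the center of element i; the
    constraint is satisfied when the returned value is <= 0."""
    d = np.array([abs(theta[a] - theta[b]) for a, b in pairs])
    g = np.sum(d ** p) ** (1.0 / p)    # approaches max(d) as p grows
    return g - theta_max
```

A single global value like this replaces one constraint per element pair, which is the ease-of-handling advantage the abstract refers to.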

Relevance: 10.00%

Abstract:

Forested areas play a dominant role in the global hydrological cycle, and evapotranspiration is a dominant component of their water balance, often approaching the rainfall in magnitude. Although sophisticated estimation methods are available, a simple, reliable tool is needed for good budgeting. Studies have established that evapotranspiration in forested areas is much higher than in agricultural areas; latitude, forest type, climate and geological characteristics add to the complexity of its estimation. Few studies have compared different methods of estimating evapotranspiration on forested watersheds in semi-arid tropical forests. In this paper, a comparative study of different estimation methods is made with reference to actual measurements from an all-parameter climatological station in the small deciduous forested watershed of Mulehole (area of 4.5 km²), South India. Potential evapotranspiration (ETo) was calculated using ten physically based and empirical methods. Actual evapotranspiration (AET) was calculated through a water balance computed with the SWAT model. The Penman-Monteith method has been used as a benchmark against which the other estimates are compared, and error estimates have been made with respect to it. The calculated AET shows good agreement with the worldwide evapotranspiration curve for forests. This study gives an idea of the errors involved whenever methods with limited data are used, and also illustrates the use of indirect methods for estimating evapotranspiration, which are more suitable for regional-scale studies.
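For reference, the FAO-56 form of the Penman-Monteith equation for reference evapotranspiration (mm/day) can be sketched as:

```python
import math

def penman_monteith_eto(t_mean, rn, g, u2, ea, gamma=0.0665):
    """FAO-56 Penman-Monteith reference evapotranspiration in mm/day.
    t_mean: mean air temperature (degC); rn, g: net radiation and soil heat
    flux (MJ m-2 day-1); u2: wind speed at 2 m (m/s); ea: actual vapour
    pressure (kPa); gamma: psychrometric constant (kPa/degC, the default
    assumes roughly sea-level pressure)."""
    es = 0.6108 * math.exp(17.27 * t_mean / (t_mean + 237.3))   # sat. vapour pressure, kPa
    delta = 4098.0 * es / (t_mean + 237.3) ** 2                 # slope of sat. curve
    num = (0.408 * delta * (rn - g)
           + gamma * 900.0 / (t_mean + 273.0) * u2 * (es - ea))
    return num / (delta + gamma * (1.0 + 0.34 * u2))
```

ETo rises with net radiation and with the vapour pressure deficit, as expected.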

Relevance: 10.00%

Abstract:

Gaussian processes (GPs) are promising Bayesian methods for classification and regression problems. Designing a GP classifier and making predictions with it are, however, computationally demanding, especially when the training set is large. Sparse GP classifiers are known to overcome this limitation. In this letter, we propose and study a validation-based method for sparse GP classifier design. The proposed method uses a negative log predictive (NLP) loss measure, which is easy to compute for GP models. We use this measure for both basis vector selection and hyperparameter adaptation. Experimental results on several real-world benchmark data sets show better or comparable generalization performance over existing methods.

Relevance: 10.00%

Abstract:

Data-flow analysis is an integral part of any aggressive optimizing compiler. We propose a framework for improving the precision of data-flow analysis in the presence of complex control flow. We initially perform data-flow analysis to determine those control-flow merges which cause the loss in data-flow analysis precision. The control-flow graph of the program is then restructured such that performing data-flow analysis on the restructured graph gives more precise results. The proposed framework is both simple, involving the familiar notion of product automata, and general, since it is applicable to any forward data-flow analysis. Apart from proving that our restructuring process is correct, we also show that restructuring is effective in that it necessarily leads to more optimization opportunities. Furthermore, the framework handles the trade-off between the increase in data-flow precision and the code-size increase inherent in the restructuring. We show that determining an optimal restructuring is NP-hard, and propose and evaluate a greedy strategy. The framework has been implemented in the Scale research compiler and instantiated for the specific problem of constant propagation. On the SPECINT 2000 benchmark suite we observe an average speedup of 4% in running times over the Wegman-Zadeck conditional constant propagation algorithm and 2% over a purely path-profile-guided approach.
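The precision loss at a merge is easy to see for constant propagation: the meet keeps a variable constant only when all incoming paths agree. A toy sketch (variable names are illustrative):

```python
TOP = object()  # lattice element meaning "not a constant"

def meet(env_a, env_b):
    """Constant-propagation meet at a control-flow merge: a variable stays
    constant only if both incoming paths agree on its value."""
    out = {}
    for var in env_a.keys() & env_b.keys():
        out[var] = env_a[var] if env_a[var] == env_b[var] else TOP
    return out

# At a merge of {x: 1} and {x: 2}, x drops to TOP while y survives.
# Restructuring in the paper's sense keeps the two paths distinct past the
# merge (a product with an automaton tracking the branch), so each
# duplicated block still sees its own constant environment for x.
merged = meet({"x": 1, "y": 3}, {"x": 2, "y": 3})
```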

Relevance: 10.00%

Abstract:

A numerical scheme is presented for accurate simulation of fluid flow using the lattice Boltzmann equation (LBE) on unstructured meshes. A finite volume approach is adopted to discretize the LBE on a cell-centered, arbitrarily shaped, triangular tessellation. The formulation includes a formal second-order discretization using a Total Variation Diminishing (TVD) scheme for the terms representing advection of the distribution function in physical space due to microscopic particle motion. The advantage of the LBE approach is exploited by implementing the scheme in a new computer code that runs on a parallel computing system. Performance of the new formulation is systematically investigated by simulating four benchmark flows of increasing complexity, namely (1) flow in a plane channel, (2) unsteady Couette flow, (3) flow driven by a moving lid over a 2D square cavity and (4) flow over a circular cylinder. For each of these flows, the present scheme is validated against results from Navier-Stokes computations as well as lattice Boltzmann simulations on a regular mesh. It is shown that the scheme is robust and accurate for the different test problems studied.
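On a regular lattice the collision part of an LBE scheme reduces to a local BGK relaxation toward the D2Q9 equilibrium; the sketch below shows only that step (the paper's contribution, the TVD finite-volume advection on an unstructured mesh, is omitted):

```python
import numpy as np

# D2Q9 lattice weights and discrete velocities.
W = np.array([4/9] + [1/9]*4 + [1/36]*4)
C = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])

def equilibrium(rho, u):
    """Standard D2Q9 equilibrium distribution for density rho, velocity u."""
    cu = C @ u                                   # c_i . u for each direction
    return rho * W * (1 + 3*cu + 4.5*cu**2 - 1.5*(u @ u))

def bgk_collide(f, tau=0.8):
    """One BGK relaxation step f <- f - (f - f_eq)/tau at a single node.
    Mass and momentum are conserved exactly by construction."""
    rho = f.sum()
    u = (C.T @ f) / rho
    return f - (f - equilibrium(rho, u)) / tau
```

Collision conserves the moments that define density and momentum; the advection step then moves the distributions through physical space.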

Relevance: 10.00%

Abstract:

The StreamIt programming model has been proposed to exploit parallelism in streaming applications on general-purpose multicore architectures. StreamIt graphs describe task, data and pipeline parallelism, which can be exploited on accelerators such as Graphics Processing Units (GPUs) or the Cell BE that support abundant parallelism in hardware. In this paper, we describe a novel method to orchestrate the execution of a StreamIt program on a multicore platform equipped with an accelerator. The proposed approach identifies, using profiling, the relative benefits of executing a task on the superscalar CPU cores and on the accelerator. We formulate the problem of partitioning the work between the CPU cores and the GPU, taking into account the latencies for data transfers and the buffer layout transformations required by the partitioning, as an integrated Integer Linear Program (ILP), which can then be solved by an ILP solver. We also propose an efficient heuristic algorithm for work-partitioning between the CPU and the GPU, which provides solutions within 9.05% of the optimal solution on average across the benchmark suite. The partitioned tasks are then software-pipelined to execute on the multiple CPU cores and the Streaming Multiprocessors (SMs) of the GPU. The software pipelining algorithm orchestrates the execution between the CPU cores and the GPU by emitting the code for the CPU and the GPU, along with the code for the required data transfers. Our experiments on a platform with 8 CPU cores and a GeForce 8800 GTS 512 GPU show a geometric mean speedup of 6.94X, with a maximum of 51.96X, over single-threaded CPU execution across the StreamIt benchmarks. This is an 18.9% improvement over a partitioning strategy that maps only the filters that cannot be executed on the GPU - the filters with state that is persistent across firings - onto the CPU.
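A greedy work-partitioning heuristic in this spirit can be sketched as follows; the cost model (per-device loads plus a fixed cost per cut edge) and the single-filter move strategy are illustrative assumptions, not the paper's algorithm:

```python
def greedy_partition(cpu_cost, gpu_cost, transfer_cost, edges):
    """Greedy sketch of CPU/GPU work-partitioning: start each filter on its
    cheaper device, then greedily move single filters while the objective
    (bottleneck device load plus transfer cost per cut edge) improves."""
    n = len(cpu_cost)
    place = ["gpu" if gpu_cost[i] < cpu_cost[i] else "cpu" for i in range(n)]

    def cost(p):
        load_c = sum(cpu_cost[i] for i in range(n) if p[i] == "cpu")
        load_g = sum(gpu_cost[i] for i in range(n) if p[i] == "gpu")
        xfer = sum(transfer_cost for (a, b) in edges if p[a] != p[b])
        return max(load_c, load_g) + xfer

    improved = True
    while improved:
        improved = False
        for i in range(n):
            trial = place.copy()
            trial[i] = "cpu" if place[i] == "gpu" else "gpu"
            if cost(trial) < cost(place):
                place, improved = trial, True
    return place, cost(place)
```

The ILP in the paper solves this placement exactly; the greedy local search only illustrates the trade-off between balancing device loads and avoiding cross-device transfers.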

Relevance: 10.00%

Abstract:

This paper presents a new numerical integration technique on arbitrary polygonal domains. The polygonal domain is mapped conformally to the unit disk using Schwarz-Christoffel mapping, and a midpoint quadrature rule defined on this unit disk is used. This method eliminates the need for the two-level isoparametric mapping usually required. Moreover, the positivity of the Jacobian is guaranteed. Numerical results presented for a few benchmark problems in the context of polygonal finite elements show that the proposed method yields accurate results.
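The midpoint rule on the unit disk itself can be sketched in polar coordinates; composing it with the Schwarz-Christoffel map and its Jacobian, as the paper does, is omitted here:

```python
import math

def disk_midpoint_quadrature(f, nr=64, nt=64):
    """Midpoint-rule quadrature over the unit disk in polar coordinates:
    integrate f(x, y) over cells of size dr x dt, evaluating at cell
    midpoints and weighting by the polar Jacobian r."""
    total = 0.0
    dr, dt = 1.0 / nr, 2.0 * math.pi / nt
    for i in range(nr):
        r = (i + 0.5) * dr                      # cell-midpoint radius
        for j in range(nt):
            t = (j + 0.5) * dt                  # cell-midpoint angle
            total += f(r * math.cos(t), r * math.sin(t)) * r * dr * dt
    return total
```

Integrating the constant 1 recovers the disk area pi exactly (up to rounding), and smooth integrands converge at second order in the mesh spacing.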

Relevance: 10.00%

Abstract:

A new 8-node serendipity quadrilateral plate bending element (MQP8), based on Mindlin-Reissner theory, for the analysis of thin and moderately thick plate bending problems using the Integrated Force Method is presented in this paper. The accuracy and convergence of the new element are studied by analyzing many standard benchmark plate bending problems. MQP8 performs excellently in both thin and moderately thick plate bending situations, and it is free from spurious zero-energy modes and from shear locking.

Relevance: 10.00%

Abstract:

Enhanced Scan design can significantly improve the fault coverage of two-pattern delay tests, at the cost of exorbitantly high area overhead. The redundant flip-flops introduced in the scan chains have traditionally been used only to launch the two-pattern delay test inputs, not to capture test results. This paper presents a new, much lower cost partial Enhanced Scan methodology with both improved controllability and observability. Facilitating observation of some hard-to-observe internal nodes, by capturing their responses in the already available and underutilized redundant flip-flops, improves delay fault coverage at minimal or almost negligible cost. Experimental results on the ISCAS'89 benchmark circuits show significant improvement in transition delay fault (TDF) coverage for the new partial Enhanced Scan methodology.

Relevance: 10.00%

Abstract:

A method of testing for parametric faults of analog circuits, based on a polynomial representation of the fault-free function of the circuit, is presented. The response of the circuit under test (CUT) is estimated as a polynomial in the applied input voltage at relevant frequencies, in addition to DC. Classification of the CUT is based on a comparison of the estimated polynomial coefficients with those of the fault-free circuit. This testing method requires no design-for-test hardware, as might be added to the circuit by some other methods. The proposed method is illustrated for a benchmark elliptic filter and is shown to uncover several parametric faults causing deviations as small as 5% from the nominal values.
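The classification step can be sketched as a polynomial fit followed by a coefficient comparison against the fault-free ("golden") coefficients; the fit degree and tolerance below are illustrative choices, not values from the paper:

```python
import numpy as np

def classify_cut(v_in, v_out, golden_coeffs, tol=0.05, degree=3):
    """Fit the measured CUT response as a polynomial in the input voltage
    and flag a parametric fault when any estimated coefficient deviates
    from the fault-free ('golden') coefficient by more than tol (relative)."""
    coeffs = np.polyfit(v_in, v_out, degree)
    rel = np.abs(coeffs - golden_coeffs) / np.maximum(np.abs(golden_coeffs), 1e-12)
    return "faulty" if np.any(rel > tol) else "fault-free"
```

A clean response reproduces the golden coefficients, while a drifted circuit parameter shifts a coefficient past the tolerance.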

Relevance: 10.00%

Abstract:

The problem of reconstructing a refractive-index distribution (RID) in optical refraction tomography (ORT) from optical path-length difference (OPD) data is solved using two adaptive-estimation-based extended-Kalman-filter (EKF) approaches. First, a basic single-resolution EKF (SR-EKF) is applied to a state variable model describing the tomographic process, to estimate the RID of an optically transparent refracting object from noisy OPD data. The initialization of the biases and covariances corresponding to the state and measurement noise is discussed, and these biases and covariances are adaptively estimated. An EKF is then applied to the wavelet-transformed state variable model to yield a wavelet-based multiresolution EKF (MR-EKF) solution approach. To numerically validate the adaptive EKF approaches, we evaluate them with benchmark studies of standard stationary cases, where comparative results with commonly used efficient deterministic approaches can be obtained. Detailed reconstruction studies for the SR-EKF and two versions of the MR-EKF (with Haar and Daubechies-4 wavelets) compare well with those obtained from a typically used variant of the (deterministic) algebraic reconstruction technique, the average correction per projection method, thus establishing the capability of the EKF for ORT. To the best of our knowledge, the present work contains unique reconstruction studies encompassing the use of the EKF for ORT in single-resolution and multiresolution formulations, and also the use of adaptive estimation of the EKF's noise covariances.
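A generic EKF predict/update cycle underlies both the SR-EKF and the MR-EKF; the sketch below shows one cycle (the paper's state variable model for ORT and its adaptive noise estimation are not reproduced):

```python
import numpy as np

def ekf_step(x, P, z, f, F, h, H, Q, R):
    """One extended-Kalman-filter predict/update cycle. f and h are the
    (possibly nonlinear) process and measurement maps, F and H their
    Jacobians at the current estimate, Q and R the process and measurement
    noise covariances. In the ORT setting x would hold refractive-index
    values and z the OPD data."""
    # Predict
    x_pred = f(x)
    P_pred = F @ P @ F.T + Q
    # Update
    S = H @ P_pred @ H.T + R                     # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)          # Kalman gain
    x_new = x_pred + K @ (z - h(x_pred))
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```

With linear f and h the cycle reduces to the ordinary Kalman filter, which gives an easy sanity check.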