25 results for Value engineering
at Indian Institute of Science - Bangalore - India
Abstract:
For a feedback system consisting of a transfer function $G(s)$ in the forward path and a time-varying gain $n(t)$ $(0 \leqq n(t) \leqq k)$ in the feedback loop, a stability multiplier $Z(s)$ has been constructed (and used to prove stability) by Freedman [2] such that $Z(s)(G(s) + 1/k)$ and $Z(s - \sigma)$ $(0 < \sigma < \sigma_*)$ are strictly positive real, where $\sigma_*$ can be computed from a knowledge of the phase-angle characteristic of $G(i\omega) + 1/k$, and the time-varying gain $n(t)$ is restricted by $\sigma_*$ by means of an integral inequality. In this note it is shown that an improved value for $\sigma_*$ is possible by making some modifications in his derivation. ©1973 Society for Industrial and Applied Mathematics.
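As a purely illustrative aside, the strict positive realness asserted above can be probed numerically: the sketch below checks $\mathrm{Re}[Z(i\omega)(G(i\omega) + 1/k)] > 0$ on a frequency grid for assumed, hypothetical choices of $G$, $Z$, and $k$; none of these come from Freedman's construction.

```python
# A minimal numerical sketch (not Freedman's construction): check the
# strict-positive-real condition Re[Z(iw)(G(iw) + 1/k)] > 0 on a grid,
# for assumed example choices of G, Z and the gain bound k.
import numpy as np

k = 2.0                        # assumed gain bound
w = np.logspace(-2, 2, 400)    # frequency grid (rad/s)
s = 1j * w

G = 1.0 / (s**2 + s + 1.0)     # hypothetical forward-path transfer function
Z = (s + 1.0) / (s + 3.0)      # hypothetical stability multiplier

spr = np.real(Z * (G + 1.0 / k))
print("min Re[Z(iw)(G(iw)+1/k)] =", spr.min(), "-> SPR on grid:", bool((spr > 0).all()))
```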
Abstract:
Purpose: A computationally efficient algorithm (linear iterative type) based on singular value decomposition (SVD) of the Jacobian has been developed that can be used in rapid dynamic near-infrared (NIR) diffuse optical tomography. Methods: Numerical and experimental studies have been conducted to prove the computational efficacy of this SVD-based algorithm over conventional optical image reconstruction algorithms. Results: These studies indicate that the performance of linear iterative algorithms in terms of contrast recovery (quantitation of optical images) is better compared to nonlinear iterative (conventional) algorithms, provided the initial guess is close to the actual solution. The nonlinear algorithms can provide better quality images compared to the linear iterative type algorithms. Moreover, the analytical and numerical equivalence of the SVD-based algorithm to linear iterative algorithms was also established as a part of this work. It is also demonstrated that the SVD-based image reconstruction typically requires $O(NN^2)$ operations per iteration, as contrasted with linear and nonlinear iterative methods that, respectively, require $O(NN^3)$ and $O(NN^6)$ operations, with "NN" being the number of unknown parameters in the optical image reconstruction procedure. Conclusions: This SVD-based computationally efficient algorithm can make the integration of the image reconstruction procedure with data acquisition feasible, in turn making rapid dynamic NIR tomography viable in the clinic to continuously monitor hemodynamic changes in tissue pathophysiology.
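As a rough illustration of why a one-time SVD brings the per-iteration cost down to $O(NN^2)$, the sketch below (assumed problem sizes and synthetic data, not the authors' implementation) factors a stand-in Jacobian once and then forms each regularized linear update from matrix-vector products only.

```python
# A minimal sketch (hypothetical shapes and data) of the idea behind an
# SVD-based linear iterative update: factor the Jacobian J once, then each
# iteration reuses U, S, Vt so per-iteration work is only matrix-vector.
import numpy as np

rng = np.random.default_rng(0)
m, n = 120, 80                        # measurements x unknowns (assumed sizes)
J = rng.standard_normal((m, n))       # stand-in for the sensitivity Jacobian
y = rng.standard_normal(m)            # stand-in for the data residual

U, S, Vt = np.linalg.svd(J, full_matrices=False)  # one-time factorization

lam = 1e-2                            # Tikhonov regularization weight (assumed)
filt = S / (S**2 + lam)               # regularized inverse singular values
dx = Vt.T @ (filt * (U.T @ y))        # per-iteration update from mat-vec products

print("update norm:", np.linalg.norm(dx))
```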
Abstract:
In this paper we address the problem of forming procurement networks for items with value adding stages that are linearly arranged. Formation of such procurement networks involves a bottom-up assembly of complex production, assembly, and exchange relationships through supplier selection and contracting decisions. Recent research in supply chain management has emphasized that such decisions need to take into account the fact that suppliers and buyers are intelligent and rational agents who act strategically. In this paper, we view the problem of Procurement Network Formation (PNF) for multiple units of a single item as a cooperative game where agents cooperate to form a surplus maximizing procurement network and then share the surplus in a fair manner. We study the implications of using the Shapley value as a solution concept for forming such procurement networks. We also present a protocol, based on the extensive form game realization of the Shapley value, for forming these networks.
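For readers unfamiliar with the solution concept, the toy sketch below computes the Shapley value of a small, assumed three-agent surplus game by averaging marginal contributions over all orderings; the characteristic function is invented for illustration and is not the paper's procurement model.

```python
# A toy sketch of the Shapley value for an assumed characteristic function v:
# the surplus-sharing idea from the abstract, not the paper's actual protocol.
from itertools import permutations

players = ["supplier", "assembler", "buyer"]   # hypothetical agents
def v(coalition):
    # assumed surplus: the full chain creates 10, a partial chain creates 4
    s = frozenset(coalition)
    return {frozenset(players): 10.0,
            frozenset(["supplier", "assembler"]): 4.0}.get(s, 0.0)

shapley = {p: 0.0 for p in players}
for order in permutations(players):
    seen = []
    for p in order:
        before = v(seen)
        seen.append(p)
        shapley[p] += v(seen) - before   # p's marginal contribution
for p in shapley:
    shapley[p] /= 6                      # average over all 3! orderings
print(shapley)
```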
Abstract:
The reduction in natural frequencies, however small, of a civil engineering structure is the first and easiest means of estimating its impending damage. As a first level of screening for health monitoring, information on the frequency reduction of a few fundamental modes can be used to estimate the positions and the magnitude of damage in a smeared fashion. The paper presents the eigenvalue sensitivity equations, derived from the first-order perturbation technique, for typical infrastructural systems such as a simply supported bridge girder modelled as a beam, an end-bearing pile modelled as an axial rod, and a simply supported plate as a continuum dynamic system. A discrete structure, such as a building frame, is solved for damage using eigen-sensitivity derived from a computational model. Lastly, neural-network-based damage identification is also demonstrated for a simply supported bridge beam, where known pairs of damage-frequency vectors are used to train a neural network. The performance of these methods under the influence of measurement error is outlined. It is hoped that the developed method could be integrated into a typical infrastructural management program, such that magnitudes of damage and their positions can be obtained from acquired natural frequencies, synthesized from excited/ambient vibration signatures.
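A minimal sketch of the first-order eigenvalue sensitivity idea, on an assumed 3-DOF shear-frame model rather than any of the systems treated in the paper: for mass-normalized modes, $\Delta\lambda_i \approx \phi_i^T \Delta K \phi_i$.

```python
# First-order eigenvalue perturbation on an assumed 3-DOF shear frame:
# for M-orthonormal modes, dlambda_i ~ phi_i^T dK phi_i.
import numpy as np
from scipy.linalg import eigh

k0 = 1000.0
K = k0 * np.array([[ 2., -1.,  0.],
                   [-1.,  2., -1.],
                   [ 0., -1.,  1.]])      # hypothetical stiffness matrix
M = np.eye(3)                             # unit storey masses (assumed)

lam, phi = eigh(K, M)                     # modes come out M-orthonormal
dK = np.zeros_like(K)
dK[0, 0] = -0.1 * k0                      # assumed 10% stiffness loss, storey 1

dlam_pred = np.array([phi[:, i] @ dK @ phi[:, i] for i in range(3)])
lam_dmg, _ = eigh(K + dK, M)              # exact eigenvalues of damaged frame
print("predicted dlambda:", dlam_pred)
print("actual    dlambda:", lam_dmg - lam)
```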
Abstract:
It is well known that the numerical accuracy of a series solution to a boundary-value problem by the direct method depends on the technique of approximate satisfaction of the boundary conditions and on the stage of truncation of the series. On the other hand, it does not appear to be generally recognized that, when the boundary conditions can be described in alternative equivalent forms, the convergence of the solution is significantly affected by the actual form in which they are stated. The importance of the last aspect is studied for three different techniques of computing the deflections of simply supported regular polygonal plates under uniform pressure. It is also shown that it is sometimes possible to modify the technique of analysis to make the accuracy independent of the description of the boundary conditions.
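One standard instance of such alternative equivalent forms (offered here only as context; the paper's own cases may differ) arises along a straight, simply supported plate edge with normal $n$ and tangent $s$: the exact conditions are $w = 0$ and $M_n = -D(\partial^2 w/\partial n^2 + \nu\,\partial^2 w/\partial s^2) = 0$, and since $w \equiv 0$ along a straight edge forces $\partial^2 w/\partial s^2 = 0$ there, the moment condition may equally be stated as $\partial^2 w/\partial n^2 = 0$ or as $\nabla^2 w = 0$. A truncated series that satisfies these conditions only approximately can converge quite differently under each statement.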
Abstract:
Design creativity involves developing novel and useful solutions to design problems. The research in this article is an attempt to understand how the novelty of a design resulting from a design process is related to the kind of outcomes, described here as constructs, involved in the design process. A model of causality, the SAPPhIRE model, is used as the basis of the analysis. The analysis is based on previous research which shows that designing involves the development and exploration of the seven basic constructs of the SAPPhIRE model that constitute the causal connection between the various levels of abstraction at which a design can be described. The constructs are state change, action, parts, phenomenon, input, organs, and effect. The following two questions are asked: Is there a relationship between novelty and the constructs? If there is a relationship, what is the degree of this relationship? A hypothesis is developed to answer the questions: an increase in the number and variety of ideas explored while designing should enhance the variety of the concept space, leading to an increase in the novelty of the concept space. Eight existing observational studies of designing sessions are used to empirically validate the hypothesis. Each designing session involves an individual designer, experienced or novice, solving a design problem by producing concepts and following a think-aloud protocol. The results indicate dependence of the novelty of the concept space on the variety of the concept space, and dependence of the variety of the concept space on the variety of the idea space, thereby validating the hypothesis. The results also reveal a strong correlation between novelty and the constructs; the correlation value decreases as the abstraction level of the constructs reduces, signifying the importance of using constructs at higher abstraction levels for enhancing novelty.
Abstract:
The paper proposes two methodologies for damage identification from measured natural frequencies of a contiguously damaged reinforced concrete beam, idealised with a distributed damage model. The first method identifies damage from Iso-Eigen-Value-Change contours, plotted between pairs of different frequencies. The performance of the method is checked for a wide variation of damage positions and extents. The method is also extended to a discrete structure in the form of a five-storied shear building, and the simplicity of the method is demonstrated. The second method works through a smeared damage model, where the damage is assumed constant over different segments of the beam and the lengths and centres of these segments are the known inputs. A first-order perturbation method is used to derive the relevant expressions. Both methods are based on distributed damage models and have been checked against an experimental program on simply supported reinforced concrete beams subjected to different stages of symmetric and unsymmetric damage. The results of the experiments are encouraging and show that both methods can be adopted together in a damage identification scenario.
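To make the contour idea concrete, the sketch below uses an assumed one-parameter damage model for a simply supported beam (not the paper's formulation): each measured frequency change defines a contour in the damage location-severity plane, and the estimate is read off where contours for different modes intersect.

```python
# Schematic Iso-Eigen-Value-Change sketch on an assumed beam model: each
# mode's measured frequency change defines a contour in the (location,
# severity) plane; contours from different modes intersect at the damage.
import numpy as np

L = 1.0
def dfreq(mode, x, a):
    # assumed first-order model: change proportional to modal energy at x
    return a * np.sin(mode * np.pi * x / L) ** 2

xs = np.linspace(0.05, 0.95, 200)                # candidate damage locations
measured = {m: dfreq(m, 0.30, 0.08) for m in (1, 2)}   # synthetic "data"

# severity each mode's contour would require at each candidate location
a1 = measured[1] / np.sin(np.pi * xs / L) ** 2
a2 = measured[2] / np.sin(2 * np.pi * xs / L) ** 2
i = int(np.argmin(np.abs(a1 - a2)))              # contour intersection
# note: a simply supported beam has a mirror-image ambiguity (x and L - x)
print("estimated location ~", xs[i], " severity ~", a1[i])
```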
Abstract:
The problem of time variant reliability analysis of existing structures subjected to stationary random dynamic excitations is considered. The study assumes that samples of the dynamic response of the structure, under the action of external excitations, have been measured at a set of sparse points on the structure. The utilization of these measurements in updating reliability models, postulated prior to making any measurements, is considered. This is achieved by using dynamic state estimation methods which combine results from Markov process theory and Bayes' theorem. The uncertainties present in measurements as well as in the postulated model for the structural behaviour are accounted for. The samples of external excitations are taken to emanate from known stochastic models and allowance is made for the ability (or lack of it) to measure the applied excitations. The future reliability of the structure is modeled using the expected structural response conditioned on all the measurements made. This expected response is shown to have a time varying mean and a random component that can be treated as being weakly stationary. For linear systems, an approximate analytical solution for the problem of reliability model updating is obtained by combining theories of the discrete Kalman filter and level crossing statistics. For the case of nonlinear systems, the problem is tackled by combining particle filtering strategies with data based extreme value analysis. In all these studies, the governing stochastic differential equations are discretized using the strong forms of Ito-Taylor discretization schemes. The possibility of using conditional simulation strategies, when applied external actions are measured, is also considered. The proposed procedures are exemplified by considering the reliability analysis of a few low-dimensional dynamical systems based on synthetically generated measurement data. The performance of the procedures developed is also assessed based on a limited amount of pertinent Monte Carlo simulations. (C) 2010 Elsevier Ltd. All rights reserved.
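As a pointer to the linear-systems machinery invoked above, the sketch below runs a scalar discrete Kalman filter, fusing a postulated state model with synthetic noisy measurements via Bayes' theorem; all parameters are assumed toy values, not the paper's.

```python
# A scalar discrete Kalman filter sketch: the measurement-update step that
# such reliability-model updating rests on for linear systems. All model
# and noise parameters below are assumed toy values.
import numpy as np

a, q, h, r = 0.95, 0.1, 1.0, 0.5   # assumed state, process, observation, noise
x, P = 0.0, 1.0                    # prior mean and variance
xt = 0.0                           # true (hidden) response, simulated here
rng = np.random.default_rng(1)

for _ in range(50):
    xt = a * xt + rng.normal(0.0, np.sqrt(q))     # hidden response sample
    y = h * xt + rng.normal(0.0, np.sqrt(r))      # sparse noisy measurement
    x, P = a * x, a * P * a + q                   # time update (prediction)
    K = P * h / (h * P * h + r)                   # Kalman gain
    x, P = x + K * (y - h * x), (1 - K * h) * P   # Bayes measurement update

print("posterior mean:", x, " posterior variance:", P)
```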
Abstract:
Our study concerns an important current problem, that of diffusion of information in social networks. This problem has received significant attention from the Internet research community in recent times, driven by many potential applications such as viral marketing and sales promotions. In this paper, we focus on the target set selection problem, which involves discovering a small subset of influential players in a given social network, to perform a certain task of information diffusion. The target set selection problem manifests in two forms: 1) the top-k nodes problem and 2) the lambda-coverage problem. In the top-k nodes problem, we are required to find a set of k key nodes that would maximize the number of nodes being influenced in the network. The lambda-coverage problem is concerned with finding a set of key nodes of minimal size that can influence a given percentage lambda of the nodes in the entire network. We propose a new way of solving these problems using the concept of the Shapley value, which is a well-known solution concept in cooperative game theory. Our approach leads to algorithms which we call the ShaPley value-based Influential Nodes (SPIN) algorithms for solving the top-k nodes problem and the lambda-coverage problem. We compare the performance of the proposed SPIN algorithms with well-known algorithms in the literature. Through extensive experimentation on four synthetically generated random graphs and six real-world data sets (Celegans, Jazz, NIPS coauthorship data set, Netscience data set, High-Energy Physics data set, and Political Books data set), we show that the proposed SPIN approach is more powerful and computationally efficient. Note to Practitioners: In recent times, social networks have received a high level of attention due to their proven ability to improve the performance of web search, recommendations in collaborative filtering systems, spreading a technology in the market using viral marketing techniques, etc. It is well known that the interpersonal relationships (or ties or links) between individuals cause change or improvement in the social system, because the decisions made by individuals are influenced heavily by the behavior of their neighbors. An interesting and key problem in social networks is to discover the most influential nodes in the social network, which can influence other nodes in a strong and deep way. This problem is called the target set selection problem and has two variants: 1) the top-k nodes problem, where we are required to identify a set of k influential nodes that maximize the number of nodes being influenced in the network, and 2) the lambda-coverage problem, which involves finding a set of influential nodes of minimum size that can influence a given percentage lambda of the nodes in the entire network. There are many existing algorithms in the literature for solving these problems. In this paper, we propose a new algorithm which is based on a novel interpretation of information diffusion in a social network as a cooperative game. Using this analogy, we develop an algorithm based on the Shapley value of the underlying cooperative game. The proposed algorithm outperforms the existing algorithms in terms of generality or computational complexity or both. Our results are validated through extensive experimentation on both synthetically generated and real-world data sets.
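A schematic sketch of the Shapley-value ranking idea (not the published SPIN algorithm): approximate each node's Shapley value for a simple, assumed coverage game by sampling permutations, then report the top-k nodes. The graph and value function are toy choices.

```python
# Monte Carlo Shapley ranking on an assumed toy graph: coalition value is
# the number of nodes covered (members plus neighbours); nodes with the
# highest estimated Shapley value are reported as the top-k.
import random

adj = {0: {1, 2}, 1: {0, 2, 3}, 2: {0, 1}, 3: {1, 4}, 4: {3, 5}, 5: {4}}

def coverage(S):
    covered = set(S)
    for u in S:
        covered |= adj[u]
    return len(covered)

nodes = list(adj)
shap = {u: 0.0 for u in nodes}
samples = 2000
random.seed(0)
order = nodes[:]
for _ in range(samples):
    random.shuffle(order)
    S, prev = [], 0
    for u in order:
        S.append(u)
        val = coverage(S)
        shap[u] += (val - prev) / samples   # running average of marginals
        prev = val

k = 2
print("top-k influential nodes:", sorted(nodes, key=lambda u: -shap[u])[:k])
```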
Abstract:
A discussion of a technical note with the aforementioned title by Day and Marsh, published in this journal (Volume 121, Number 7, July 1995), is presented. Discussers Robinson and Allam assert that the authors' application of the pore-pressure parameter A to predict and quantify swell or collapse of compacted soils is difficult to use, because the authors visualize the collapse-swell phenomenon as occurring in compacted soils broadly classified as sands and clays. The literature demonstrates that mineralogy plays an important role in the volume change behavior of fine-grained soils. Robinson and Allam state that A-value measurements alone may not completely predict the type of volume change anticipated in compacted soils on soaking, without details of the soil clay mineralogy. The discussion is followed by a closure from the authors.