85 results for Information search – models
Abstract:
With many innovations in process technology, forging is establishing itself as a precision manufacturing process: since forging is used to produce complex shapes in difficult materials, it requires dies of complex configuration, made of high-strength, wear-resistant materials. Extensive research and development work is being undertaken, internationally, to analyse the stresses in forging dies and the flow of material in forged components. Identification of the location, size and shape of dead-metal zones is required for component design. Further, knowledge of the strain distribution in the flowing metal indicates the degree to which the component is being work hardened. Such information is helpful in the selection of process parameters such as dimensional allowances and interface lubrication, as well as in the determination of post-forging operations such as heat treatment and machining. In the work reported here, the effect of aperture width and initial specimen height on the strain distribution in the plane-strain extrusion forging of machined lead billets is observed: the distortion of grids inscribed on the face of the specimen gives the strain distribution. The stress-equilibrium approach is used to optimise a model of flow in extrusion forging; this model is found to be effective in estimating the size of the dead-metal zone. The work carried out so far indicates that the methodology of using the stress-equilibrium approach to develop models of flow in closed-die forging can be a useful tool in component, process and die design.
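As a rough illustration of the stress-equilibrium (slab) approach mentioned above, the following sketch computes the classical friction-hill pressure distribution for plane-strain compression of a strip. It is a minimal textbook example under Coulomb friction, not the authors' extrusion-forging model, and the material and geometry values are hypothetical.

```python
import numpy as np

# Hypothetical values: shear yield strength k (MPa), Coulomb friction
# coefficient mu, strip half-width a (mm), current strip height h (mm).
k, mu, a, h = 10.0, 0.15, 20.0, 8.0

x = np.linspace(0.0, a, 200)                   # distance from the centreline
p = 2.0 * k * np.exp(2.0 * mu * (a - x) / h)   # slab-method friction hill

print(f"edge pressure   : {p[-1]:.1f} MPa (= 2k)")
print(f"centre pressure : {p[0]:.1f} MPa")
print(f"mean pressure   : {np.trapz(p, x) / a:.1f} MPa")
```

The exponential rise of die pressure towards the centreline is what makes interface lubrication and die strength such sensitive process parameters.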
Abstract:
Knowledge of drag force is an important design parameter in aerodynamics. Measurement of aerodynamic forces at hypersonic speed is a challenge, and ground test facilities such as shock tunnels are usually used to carry out such tests. Accelerometer-based force balances are commonly employed for measuring aerodynamic drag around bodies in hypersonic shock tunnels. In this study, we present an analysis of the effect of model material on the performance of an accelerometer balance used for measurement of drag in impulse facilities. From the experimental studies performed on models constructed out of Bakelite HYLEM and Aluminum, it is clear that the rigid-body assumption does not hold good during the short testing duration available in shock tunnels. This is notwithstanding the fact that the rubber bush used for supporting the model allows unconstrained motion of the model during the short testing time available in the shock tunnel. The vibrations induced in the model on impact loading in the shock tunnel are damped out in the metallic model, resulting in a smooth acceleration signal, while the signal becomes noisy and non-linear when non-isotropic materials such as Bakelite HYLEM are used. This also implies that careful analysis and proper data-reduction methodologies are necessary for measuring aerodynamic drag for non-metallic models in shock tunnels. The results from the drag measurements carried out using a 60-degree half-angle blunt cone are given in the present analysis.
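The baseline data reduction behind such a balance is simple: under the rigid-body assumption the abstract examines, drag follows from Newton's second law applied to the measured axial acceleration. A minimal sketch, with the flow conditions and the synthetic signal being hypothetical:

```python
import numpy as np

# Hypothetical shock-tunnel test conditions.
m   = 1.2        # model mass, kg
rho = 0.02       # freestream density, kg/m^3
V   = 2000.0     # freestream velocity, m/s
A   = 0.008      # reference (base) area, m^2

# Synthetic axial accelerometer signal over the steady test window.
t = np.linspace(0.0, 1e-3, 500)                # 1 ms test time, s
a = 90.0 + 5.0 * np.random.randn(t.size)       # m/s^2, with noise

D  = m * np.mean(a)            # rigid-body drag, F = m*a
q  = 0.5 * rho * V**2          # dynamic pressure
Cd = D / (q * A)
print(f"drag = {D:.1f} N, Cd = {Cd:.2f}")
```

The paper's point is precisely that for non-metallic models the averaging step above is not enough, since the signal also carries undamped structural vibration.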
Abstract:
The phenomenological theory of hemispherical growth is generalised to time-dependent nucleation and growth rates. Special cases, which include models with diffusion-controlled rates, are analysed. Expressions are obtained for the small- and large-time behaviour and peak characteristics of potentiostatic transients, and their use in model parameter estimation is discussed. Two earlier equations are corrected. The numerically calculated transients presented here exhibit some interesting features, such as a maximum preceding the steady state, oscillations and a shoulder.
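The abstract does not reproduce the model equations. For orientation, phenomenological nucleation-and-growth theories of this kind typically build on the Kolmogorov-Johnson-Mehl-Avrami extended-coverage construction; with a time-dependent nucleation rate I(u) and radial growth rate v(s), the covered fraction is

```latex
\theta(t) = 1 - \exp\!\bigl[-\theta_{\mathrm{ext}}(t)\bigr],
\qquad
\theta_{\mathrm{ext}}(t) = \pi \int_0^{t} I(u) \left[ \int_u^{t} v(s)\,\mathrm{d}s \right]^{2} \mathrm{d}u ,
```

with the potentiostatic current proportional to the deposition rate. This is standard background, not the authors' exact generalisation to hemispherical growth.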
Abstract:
A general theory is developed for a class of macrogrowth models which possess two independent growth rates. Relations connecting growth rates to growth geometry are established, and some new growth forms are shown to result for models with passivation or diffusion-controlled rates. The corresponding potentiostatic responses, their small- and large-time behaviours and peak characteristics are obtained. Numerical transients are also presented. An empirical equation is derived as a special case and an earlier equation is corrected. An interesting stochastic result pertaining to nucleation events in the successive layers is proved.
Abstract:
It is shown, in the composite fermion models studied by 't Hooft and others, that the requirements of Adler-Bell-Jackiw anomaly matching and n-independence are sufficient to fix the indices of composite representations. The third requirement, namely that of decoupling relations, follows from these two constraints in such models and hence is inessential.
Abstract:
New dimensionally consistent modified solvate complex models are derived to correlate solubilities of solids in supercritical fluids, both in the presence and absence of entrainers (cosolvents). These models are compared against the standard solvate complex models [J. Chrastil, J. Phys. Chem. 86 (1982) 3016-3021; J.C. Gonzalez, M.R. Vieytes, A.M. Botana, J.M. Vieites, L.M. Botana, J. Chromatogr. A 910 (2001) 119-125; Y. Adachi, B.C.Y. Lu, Fluid Phase Equilib. 14 (1983) 147-156; J.M. del Valle, J.M. Aguilera, Ind. Eng. Chem. Res. 27 (1988) 1551-1553] by correlating the solubilities of 13 binary and 12 ternary systems. Though the newly derived models are not significantly better than the standard models in predicting the solubilities, they are dimensionally consistent.
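For background, the Chrastil model cited above correlates solubility as S = rho^k * exp(a/T + b), which becomes linear in (k, a, b) after taking logarithms. A minimal sketch of fitting it by least squares, on synthetic (hypothetical) data:

```python
import numpy as np

# Hypothetical data: solute solubility S (kg/m^3) in supercritical CO2
# of density rho (kg/m^3) at temperature T (K).
rho = np.array([600.0, 700.0, 800.0, 600.0, 700.0, 800.0])
T   = np.array([308.0, 308.0, 308.0, 318.0, 318.0, 318.0])
S   = np.array([0.11, 0.25, 0.52, 0.16, 0.37, 0.78])

# Chrastil (1982): ln S = k*ln(rho) + a/T + b.
X = np.column_stack([np.log(rho), 1.0 / T, np.ones_like(T)])
(k, a, b), *_ = np.linalg.lstsq(X, np.log(S), rcond=None)
print(f"k = {k:.2f}, a = {a:.0f} K, b = {b:.2f}")
```

The dimensional inconsistency the paper addresses arises because rho^k carries units that change with the fitted k; the modified models themselves are not reproduced here.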
Abstract:
Various intrusion detection systems (IDSs) reported in the literature have shown distinct preferences for detecting a certain class of attack with improved accuracy, while performing moderately on the other classes. In view of the enormous computing power available in present-day processors, deploying multiple IDSs in the same network to obtain best-of-breed solutions has been attempted earlier. The paper presented here addresses the problem of optimizing the performance of IDSs using sensor fusion with multiple sensors. The trade-off between the detection rate and false alarms with multiple sensors is highlighted. It is illustrated that the performance of the detector is better when the fusion threshold is determined according to the Chebyshev inequality. In the proposed data-dependent decision (DD) fusion method, the performance optimization of individual IDSs is first addressed. A neural network supervised learner has been designed to determine the weights of individual IDSs depending on their reliability in detecting a certain attack. The final stage of this DD fusion architecture is a sensor fusion unit which does the weighted aggregation in order to make an appropriate decision. This paper theoretically models the fusion of IDSs for the purpose of demonstrating the improvement in performance, supplemented with empirical evaluation.
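A minimal sketch of the two ingredients named above, weighted aggregation and a Chebyshev-based threshold. The sensor scores, weights and benign-traffic statistics are hypothetical; the paper's neural-network learner that produces the weights is not reproduced.

```python
import numpy as np

# Scores in [0, 1] from three hypothetical IDS sensors for one event,
# and per-sensor reliability weights (learned, in the paper, by a
# supervised neural network).
scores  = np.array([0.82, 0.40, 0.65])
weights = np.array([0.5, 0.2, 0.3])

fused = float(weights @ scores)        # weighted aggregation

# Chebyshev: P(score >= mu + k*sigma) <= 1/k^2 for benign traffic with
# mean mu and std sigma, so a false-alarm budget alpha gives k = 1/sqrt(alpha).
mu, sigma, alpha = 0.30, 0.12, 0.05
threshold = mu + sigma / np.sqrt(alpha)

print("alert" if fused >= threshold else "no alert",
      f"(fused={fused:.2f}, threshold={threshold:.2f})")
```

The appeal of the Chebyshev bound is that it needs only the mean and variance of the benign score distribution, not its exact form.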
Abstract:
A new automata model, Mr,k, with a conceptually significant innovation in the form of multi-state alternatives at each instance, is proposed in this study. Computer simulations of the Mr,k model in the context of feature selection in an unsupervised environment have demonstrated the superiority of the model over similar models without this multi-state-choice innovation.
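The Mr,k model's multi-state-choice mechanism is the paper's contribution and is not reproduced here. For contrast, a minimal sketch of the standard variable-structure learning automaton baseline (a linear reward-inaction scheme) that such models extend, with hypothetical reward probabilities standing in for the feature-selection environment:

```python
import numpy as np

rng = np.random.default_rng(0)

def lri_step(p, reward_prob, lam=0.05):
    """One linear reward-inaction (L_RI) update of action probabilities."""
    i = rng.choice(p.size, p=p)          # choose an action stochastically
    if rng.random() < reward_prob[i]:    # environment rewards the action
        p = p - lam * p                  # shrink all probabilities...
        p[i] += lam                      # ...and boost the rewarded one
    return p                             # on penalty: no change

# Four candidate actions (e.g., feature subsets) with unknown rewards.
reward_prob = np.array([0.2, 0.8, 0.5, 0.3])
p = np.full(4, 0.25)
for _ in range(5000):
    p = lri_step(p, reward_prob)
print(p)   # mass concentrates on the best action (index 1)
```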
Abstract:
The availability of a small fleet of aircraft in a flying-base, repair-depot combination is modeled and studied. First, a deterministic flow model relates the parameters of interest and represents the state of the art in the planning of such systems. Second, a cyclic queue model shows the effect of the principal uncertainties in operation and repair, and the consequent decrease in the availability of aircraft at the flying-base. Several options, such as increasing fleet size, investing in additional repair facilities, or building reliability and maintainability into the individual aircraft during its life-cycle, are open for increasing the availability. A life-cycle cost criterion brings out some of these features. Numerical results confirm Rose's prediction that there exists a minimal-cost combination of end products and repair-depot capability to achieve a prescribed operational availability.
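The cyclic queue described above is closely related to the classical finite-source (machine-repair) model, for which steady-state availability follows from birth-death balance equations. A minimal sketch with hypothetical fleet size, repair channels and rates; the paper's life-cycle cost analysis is not reproduced:

```python
def availability(N=10, c=2, lam=0.02, mu=0.10):
    """Steady-state fleet availability: N aircraft, c repair channels,
    per-aircraft failure rate lam, per-channel repair rate mu
    (rates per hour; all values hypothetical)."""
    # Birth-death chain on n = number of aircraft down for repair:
    # pi_n is proportional to w[n].
    w = [1.0]
    for n in range(1, N + 1):
        w.append(w[-1] * (N - n + 1) * lam / (min(n, c) * mu))
    Z = sum(w)
    expected_down = sum(n * wn for n, wn in enumerate(w)) / Z
    return (N - expected_down) / N

print(f"availability = {availability():.3f}")
print(f"with 3 channels = {availability(c=3):.3f}")
```

Varying N, c, lam and mu reproduces qualitatively the trade-offs listed in the abstract: more aircraft, more repair capacity, or better reliability (smaller lam) all raise availability, at different costs.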
Abstract:
In this paper, we first describe a framework to model the sponsored search auction on the web as a mechanism design problem. Using this framework, we describe two well-known mechanisms for sponsored search auctions: Generalized Second Price (GSP) and Vickrey-Clarke-Groves (VCG). We then derive a new mechanism for the sponsored search auction which we call the optimal (OPT) mechanism. The OPT mechanism maximizes the search engine's expected revenue, while achieving Bayesian incentive compatibility and individual rationality of the advertisers. We then undertake a detailed comparative study of the mechanisms GSP, VCG, and OPT. We compute and compare the expected revenue earned by the search engine under the three mechanisms when the advertisers are symmetric and some special conditions are satisfied. We also compare the three mechanisms in terms of incentive compatibility, individual rationality, and computational complexity. Note to Practitioners: The advertiser-supported web site is one of the successful business models in the emerging web landscape. When an Internet user enters a keyword (i.e., a search phrase) into a search engine, the user gets back a page with results containing the links most relevant to the query and also sponsored links (also called paid advertisement links). When a sponsored link is clicked, the user is directed to the corresponding advertiser's web page. The advertiser pays the search engine in some appropriate manner for sending the user to its web page. Against every search performed by any user on any keyword, the search engine faces the problem of matching a set of advertisers to the sponsored slots. In addition, the search engine also needs to decide on a price to be charged to each advertiser. Due to increasing demand for Internet advertising space, most search engines currently use auction mechanisms for this purpose. These are called sponsored search auctions. A significant percentage of the revenue of Internet giants such as Google, Yahoo!, MSN, etc., comes from sponsored search auctions. In this paper, we study two auction mechanisms, GSP and VCG, which are quite popular in the sponsored auction context, and pursue the objective of designing a mechanism that is superior to these two mechanisms. In particular, we propose a new mechanism which we call the OPT mechanism. This mechanism maximizes the search engine's expected revenue subject to achieving Bayesian incentive compatibility and individual rationality. Bayesian incentive compatibility guarantees that it is optimal for each advertiser to bid his/her true value provided that all other agents also bid their respective true values. Individual rationality ensures that the agents participate voluntarily in the auction since they are assured of gaining a non-negative payoff by doing so.
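The GSP and VCG payment rules discussed above have standard closed forms for slot auctions ranked by bid. A minimal sketch on a toy instance; bids and click-through rates are hypothetical, and the paper's OPT mechanism is not reproduced:

```python
def slot_payments(bids, ctrs):
    """Expected per-impression GSP and VCG payments for each slot.
    bids: descending per-click bids; ctrs: descending slot
    click-through rates; assumes len(bids) > len(ctrs)."""
    K = len(ctrs)
    alpha = list(ctrs) + [0.0]   # alpha[K] = 0 sentinel
    # GSP: slot i pays the (i+1)-th highest bid per click.
    gsp = [alpha[i] * bids[i + 1] for i in range(K)]
    # VCG: slot i pays the externality it imposes on lower bidders.
    vcg = [sum((alpha[j] - alpha[j + 1]) * bids[j + 1] for j in range(i, K))
           for i in range(K)]
    return gsp, vcg

gsp, vcg = slot_payments(bids=[5.0, 4.0, 2.0, 1.0], ctrs=[0.30, 0.15])
print("GSP:", gsp)   # [1.2, 0.3]
print("VCG:", vcg)   # [0.9, 0.3]
```

On this instance GSP extracts more revenue than VCG from the top slot under identical truthful bids, which is one reason the revenue comparison across mechanisms is of practical interest.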
Abstract:
We propose a quantity called information ambiguity that plays the same role in worst-case information-theoretic analyses as the well-known notion of information entropy plays in the corresponding average-case analyses. We prove various properties of information ambiguity and illustrate its usefulness in performing the worst-case analysis of a variant of the distributed source coding problem.
Abstract:
We consider the problem of transmission of several discrete sources over a multiple access channel (MAC) with side information at the sources and the decoder. Source-channel separation does not hold for this channel. Sufficient conditions are provided for transmission of sources with a given distortion. The channel could have continuous alphabets (Gaussian MAC is a special case). Various previous results are obtained as special cases.
Abstract:
In this paper we analyze a deploy and search strategy for multi-agent systems. Mobile agents equipped with sensors carry out a search operation in the search space. The lack of information about the search space is modeled as an uncertainty density distribution over the space, and is assumed to be known to the agents a priori. In each step, the agents deploy themselves in an optimal way so as to maximize the per-step reduction in the uncertainty density. We analyze the proposed strategy for convergence and spatial distributedness. The control law moving the agents has been analyzed for stability and convergence using LaSalle's invariance principle, and for spatial distributedness under a few realistic constraints on the control input, such as constant speed, a limit on maximum speed, and sensor range limits. The simulation experiments show that the strategy successfully reduces the average uncertainty density below the required level.
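A simplified sketch of one such strategy: agents repeatedly move toward the uncertainty-weighted centroids of their Voronoi cells (discretized on a grid), with a cap on speed, and sensing reduces the density near each agent. This is a generic centroidal-Voronoi-style step under hypothetical parameters, not the authors' exact control law:

```python
import numpy as np

rng = np.random.default_rng(1)

# Unit-square search space discretized into grid points, with an
# uncertainty density phi (hypothetical Gaussian bump).
pts = np.stack(np.meshgrid(np.linspace(0, 1, 50),
                           np.linspace(0, 1, 50)), -1).reshape(-1, 2)
phi = np.exp(-10.0 * np.sum((pts - [0.7, 0.3])**2, axis=1))

agents = rng.random((4, 2))
vmax, dt = 0.05, 1.0                       # speed limit and step size

for step in range(50):
    # Voronoi partition: each grid point belongs to its nearest agent.
    owner = np.argmin(np.linalg.norm(pts[:, None] - agents, axis=2), axis=1)
    for i in range(len(agents)):
        cell, w = pts[owner == i], phi[owner == i]
        if w.sum() < 1e-12:
            continue
        target = (cell * w[:, None]).sum(0) / w.sum()   # density centroid
        move = target - agents[i]
        dist = np.linalg.norm(move)
        if dist > vmax * dt:               # saturate to the speed limit
            move *= vmax * dt / dist
        agents[i] += move
    # Sensing step: reduce uncertainty near the agents (simplified model).
    d = np.linalg.norm(pts[:, None] - agents, axis=2).min(axis=1)
    phi *= 1.0 - 0.5 * np.exp(-(d / 0.1)**2)

print("remaining average uncertainty:", phi.mean())
```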
Abstract:
In a three-player quantum 'Dilemma' game, each player takes independent decisions to maximize his/her individual gain. The optimal strategy in the quantum version of this game has a higher payoff compared to its classical counterpart. However, this advantage is lost if the initial qubits provided to the players are from a noisy source. We have experimentally implemented the three-player quantum version of the 'Dilemma' game as described by Johnson [N.F. Johnson, Phys. Rev. A 63 (2001) 020302(R)] using a nuclear magnetic resonance quantum information processor, and have experimentally verified that the payoff of the quantum game for various levels of corruption matches the theoretical payoff.
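To illustrate how a corrupted initial state erodes the payoff, the sketch below mixes a three-qubit GHZ state with depolarizing noise and evaluates an expected payoff under local unitary strategies. The GHZ state stands in for the game's entangling step, and the payoff vector and Hadamard strategies are hypothetical placeholders, not Johnson's protocol:

```python
import numpy as np

# Hypothetical payoff to one player over the 8 outcomes |abc>.
payoff = np.array([2.0, 0.0, 0.0, 4.0, 1.0, 3.0, 3.0, 5.0])

# GHZ initial state (stand-in for the protocol's entangling gate).
ghz = np.zeros(8)
ghz[0] = ghz[7] = 1.0 / np.sqrt(2.0)
rho0 = np.outer(ghz, ghz)

def expected_payoff(rho, U1, U2, U3):
    U = np.kron(np.kron(U1, U2), U3)       # players' local strategies
    rho_f = U @ rho @ U.conj().T
    return float(np.real(payoff @ np.diag(rho_f)))

H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)
for p in (0.0, 0.25, 0.5):                 # corruption (noise) level
    rho = (1.0 - p) * rho0 + p * np.eye(8) / 8.0
    print(f"p = {p:.2f}, payoff = {expected_payoff(rho, H, H, H):.3f}")
```

As p grows, the state loses the correlations the quantum strategy exploits and the payoff drifts towards the uniform-outcome average, mirroring the loss of quantum advantage reported above.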
Abstract:
In this paper, we are concerned with energy-efficient area monitoring using information coverage in wireless sensor networks, where collaboration among multiple sensors can enable accurate sensing of a point in a given area-to-monitor even if that point falls outside the physical coverage of all the sensors. We refer to any set of sensors that can collectively sense all points in the entire area-to-monitor as a full area information cover. We first propose a low-complexity heuristic algorithm to obtain full area information covers. Using these covers, we then obtain the optimum schedule for activating the sensing activity of various sensors that maximizes the sensing lifetime. The scheduling of sensor activity using the optimum schedules obtained by the proposed algorithm is shown to achieve significantly longer sensing lifetimes than those achieved using physical coverage. Relaxing the full area coverage requirement to partial area coverage (e.g., treating 95% area coverage as adequate instead of 100%) further enhances the lifetime.
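As a baseline for the cover-based scheduling idea, the sketch below greedily builds successive full-area covers under plain physical (disk) coverage and counts how many rounds the network sustains. The geometry and energy budget are hypothetical, and the paper's information-coverage criterion and optimum schedule are not reproduced:

```python
import numpy as np

rng = np.random.default_rng(2)

sensors = rng.random((20, 2))              # sensor positions in unit square
points  = rng.random((200, 2))             # discretized area-to-monitor
r, battery = 0.25, 3                       # sensing radius; rounds per sensor

# point i is covered by sensor j if within physical range r.
covers = np.linalg.norm(points[:, None] - sensors, axis=2) <= r

energy, schedule = np.full(len(sensors), battery), []
while True:
    # Greedily assemble one full-area cover from sensors with energy left.
    active, covered = [], np.zeros(len(points), dtype=bool)
    for s in np.argsort(-covers.sum(axis=0)):       # most useful first
        if energy[s] > 0 and not covered.all():
            if (covers[:, s] & ~covered).any():     # adds new coverage
                active.append(s)
                covered |= covers[:, s]
    if not covered.all():
        break                              # no further full cover exists
    for s in active:
        energy[s] -= 1
    schedule.append(active)

print("sensing lifetime (rounds):", len(schedule))
```

Information coverage would enlarge the effective range of each set of collaborating sensors, so the same greedy idea yields more covers and hence a longer lifetime.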