243 results for computer science and engineering
at Indian Institute of Science - Bangalore - India
Abstract:
Indian logic has a long history. It roughly covers the domains of two of the six schools (darsanas) of Indian philosophy, namely Nyaya and Vaisesika. The generally accepted definition of Indian logic over the ages is the science which ascertains valid knowledge either by means of the six senses or by means of the five members of the syllogism. In other words, perception and inference constitute the subject matter of logic. The science of logic evolved in India through three ages: the ancient, the medieval and the modern, spanning almost thirty centuries. Over the past three decades, advances in computer science, and in artificial intelligence in particular, have drawn researchers in these areas to the basic problems of language, logic and cognition. In the 1980s, artificial intelligence evolved into knowledge-based and intelligent system design, and the knowledge base and inference engine became standard subsystems of an intelligent system. One of the important issues in the design of such systems is acquiring knowledge from humans who are experts in a branch of learning (such as medicine or law) and transferring that knowledge to a computing system. A second important issue is validating the knowledge base of the system, i.e. ensuring that the knowledge is complete and consistent. It is in this context that a comparative study of Indian logic with recent theories of logic, language and knowledge engineering will help computer scientists understand the deeper implications of the terms and concepts they are currently using and attempting to develop.
Abstract:
In this paper, we first describe a framework to model the sponsored search auction on the web as a mechanism design problem. Using this framework, we describe two well-known mechanisms for sponsored search auctions: Generalized Second Price (GSP) and Vickrey-Clarke-Groves (VCG). We then derive a new mechanism for sponsored search auctions which we call the optimal (OPT) mechanism. The OPT mechanism maximizes the search engine's expected revenue, while achieving Bayesian incentive compatibility and individual rationality of the advertisers. We then undertake a detailed comparative study of the mechanisms GSP, VCG, and OPT. We compute and compare the expected revenue earned by the search engine under the three mechanisms when the advertisers are symmetric and some special conditions are satisfied. We also compare the three mechanisms in terms of incentive compatibility, individual rationality, and computational complexity. Note to Practitioners: The advertiser-supported web site is one of the successful business models in the emerging web landscape. When an Internet user enters a keyword (i.e., a search phrase) into a search engine, the user gets back a page with results containing the links most relevant to the query and also sponsored links (also called paid advertisement links). When a sponsored link is clicked, the user is directed to the corresponding advertiser's web page. The advertiser pays the search engine in some appropriate manner for sending the user to its web page. Against every search performed by any user on any keyword, the search engine faces the problem of matching a set of advertisers to the sponsored slots. In addition, the search engine also needs to decide on a price to be charged to each advertiser. Due to increasing demand for Internet advertising space, most search engines currently use auction mechanisms for this purpose. These are called sponsored search auctions.
A significant percentage of the revenue of Internet giants such as Google, Yahoo!, MSN, etc., comes from sponsored search auctions. In this paper, we study two auction mechanisms, GSP and VCG, which are quite popular in the sponsored auction context, and pursue the objective of designing a mechanism that is superior to these two mechanisms. In particular, we propose a new mechanism which we call the OPT mechanism. This mechanism maximizes the search engine's expected revenue subject to achieving Bayesian incentive compatibility and individual rationality. Bayesian incentive compatibility guarantees that it is optimal for each advertiser to bid his/her true value provided that all other agents also bid their respective true values. Individual rationality ensures that the agents participate voluntarily in the auction since they are assured of gaining a non-negative payoff by doing so.
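The textbook payment rules for GSP and VCG in a slot auction with known click-through rates can be sketched as follows. The bids and click-through rates below are illustrative values, not data from the paper, and the sketch shows the standard GSP and VCG rules only, not the proposed OPT mechanism:

```python
def gsp_payments(bids, ctrs):
    """Generalized Second Price: the advertiser in slot j pays, per impression,
    slot j's click-through rate times the next-highest bid.
    `bids` must be sorted in decreasing order; len(ctrs) slots are sold."""
    return [ctrs[j] * bids[j + 1] for j in range(len(ctrs))]

def vcg_payments(bids, ctrs):
    """VCG: the advertiser in slot j pays the externality it imposes on the
    lower-ranked advertisers, i.e. the value of the clicks they lose
    because slot j is occupied."""
    ctrs_ext = list(ctrs) + [0.0]   # click-through rate below the last slot is zero
    payments = []
    for j in range(len(ctrs)):
        p = sum((ctrs_ext[k] - ctrs_ext[k + 1]) * bids[k + 1]
                for k in range(j, len(ctrs)))
        payments.append(p)
    return payments

bids = [10.0, 8.0, 5.0, 2.0]   # hypothetical per-click valuations, sorted
ctrs = [0.5, 0.3, 0.1]         # hypothetical click-through rates per slot
```

On this example the per-slot VCG payments never exceed the GSP payments, consistent with the known revenue comparison between the two rules under truthful bidding.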
Abstract:
A parallel matrix multiplication algorithm is presented, and its performance is analysed and estimated. The algorithm is implemented on a network of transputers connected in a ring topology. An efficient scheme for partitioning the input matrices is introduced which enables overlapping computation with communication. This makes the algorithm achieve near-ideal speed-up for reasonably large matrices. Analytical expressions for the execution time of the algorithm have been derived by analysing its computation and communication characteristics. These expressions are validated by comparing the theoretical performance results with the experimental values obtained on a four-transputer network for both square and irregular matrices. The analytical model is also used to estimate the performance of the algorithm for a varying number of transputers and varying problem sizes. Although the algorithm is implemented on transputers, the methodology and the partitioning scheme presented in this paper are quite general and can be implemented on other processors which have the capability of overlapping computation with communication. The equations for performance prediction can also be extended to other multiprocessor systems.
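The ring-based data movement behind such algorithms can be illustrated with a serial simulation. This is a generic ring matrix-multiplication sketch, not necessarily the paper's exact partitioning: each of `p` logical processors owns a row-block of A and a row-block of B, the B blocks circulate around the ring, and each step multiplies the matching column-block of the local A rows. A real implementation would overlap the block transfer with the local multiplication, which this serial loop cannot show:

```python
def ring_matmul(A, B, p):
    """Simulate ring-based C = A * B with p logical processors.
    Processor i owns rows i*r..(i+1)*r of A and of B (r = n/p); B blocks
    are shifted around the ring so every processor sees each block once."""
    n = len(A)
    assert n % p == 0, "matrix order must be divisible by the ring size"
    r = n // p
    C = [[[0.0] * n for _ in range(r)] for _ in range(p)]  # per-processor result rows
    held = list(range(p))  # held[i]: index of the B row-block currently at processor i
    for _ in range(p):
        for i in range(p):
            k = held[i]
            # multiply column-block k of the local A rows by B row-block k
            for a in range(r):
                for b in range(n):
                    s = 0.0
                    for c in range(r):
                        s += A[i * r + a][k * r + c] * B[k * r + c][b]
                    C[i][a][b] += s
        held = [held[(i + 1) % p] for i in range(p)]  # shift blocks one hop on the ring
    return [row for blk in C for row in blk]
```

After `p` steps every processor has combined all `p` blocks, so the gathered rows equal the ordinary matrix product.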
Abstract:
The basic framework and conceptual understanding of the metallurgy of Ti alloys is strong, and this has enabled the use of titanium and its alloys in safety-critical structures such as those in aircraft and aircraft engines. Nevertheless, a focus on cost-effectiveness and the compression of product development time by effectively integrating design with manufacturing in these applications, as well as those emerging in bioengineering, has driven research in recent decades towards a greater predictive capability through the use of computational materials engineering tools. This paper therefore focuses on the complexity and variety of fundamental phenomena in this material system, with a focus on phase transformations and mechanical behaviour, in order to delineate the challenges that lie ahead in achieving these goals. (C) 2012 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.
Abstract:
The problem addressed in this paper concerns an important issue faced by any green-aware global company: keeping its emissions within a prescribed cap. The specific problem is to allocate carbon reductions to its different divisions and supply chain partners so as to achieve a required target of reductions in its carbon reduction program. The problem is challenging because the divisions and supply chain partners, being autonomous, may exhibit strategic behavior. We use a standard mechanism design approach to solve this problem. While designing a mechanism for the emission reduction allocation problem, the key properties that need to be satisfied are dominant strategy incentive compatibility (DSIC) (also called strategy-proofness), strict budget balance (SBB), and allocative efficiency (AE). Mechanism design theory has shown that it is not possible to achieve these three properties simultaneously. In the literature, a mechanism that satisfies DSIC and AE has recently been proposed in this context, keeping the budget imbalance minimal. Motivated by the observation that SBB is an important requirement, in this paper, we propose a mechanism that satisfies DSIC and SBB with a slight compromise in allocative efficiency. Our experimentation with a stylized case study shows that the proposed mechanism performs satisfactorily and provides an attractive alternative mechanism for carbon footprint reduction by global companies.
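The abstract does not spell out the proposed mechanism, but the DSIC property it relies on can be illustrated with the simplest possible example: a second-price (Vickrey) procurement auction, in which reporting one's true cost is a dominant strategy. This is a hedged, generic sketch for intuition only, not the paper's allocation mechanism:

```python
def vickrey_procurement(reported_costs):
    """Second-price procurement: the lowest-cost bidder wins and is paid
    the second-lowest reported cost. Truthful cost reporting is a dominant
    strategy, which is the DSIC (strategy-proofness) property."""
    order = sorted(range(len(reported_costs)), key=lambda i: reported_costs[i])
    winner = order[0]
    payment = reported_costs[order[1]]
    return winner, payment

# Intuition: a division's payoff on winning is payment - true_cost.
# Under-reporting cannot raise the payment (it is set by the runner-up),
# and over-reporting can only forfeit a profitable win.
```

With hypothetical reported costs [4.0, 7.0, 5.0], division 0 wins and is paid 5.0; misreporting its cost cannot improve its payoff of 1.0.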
Abstract:
The present investigation deals with grain boundary engineering (GBE) of a modified austenitic stainless steel to obtain a material with enhanced properties. Three types of processing generally in agreement with the principles of grain boundary engineering were carried out. The parameters for each of the processing routes were fine-tuned and optimized. The as-processed samples were characterized for microstructure and texture. The influence of processing on properties was estimated by evaluating the room-temperature mechanical properties through micro-tensile tests. Remarkably high fractions of coincidence site lattice (CSL) boundaries were obtained in certain samples. The results of the micro-tensile tests indicate that the grain boundary engineered samples exhibited higher ductility than the conventionally processed samples. The investigation provides a detailed account of the approach to be adopted for GBE processing of this grade of steel. (C) 2014 Elsevier B.V. All rights reserved.
Abstract:
We present a nanostructured "super surface" fabricated using a simple recipe based on deep reactive ion etching of a silicon wafer. The topography of the surface is inspired by the surface topographical features of dragonfly wings. The super surface comprises nanopillars 4 μm in height and 220 nm in diameter with random inter-pillar spacing. The surface exhibited superhydrophobicity with a static water contact angle of 154.0° and contact angle hysteresis of 8.3°. Bacterial studies revealed the bactericidal property of the surface against both Gram-negative (Escherichia coli) and Gram-positive (Staphylococcus aureus) strains through mechanical rupture of the cells by the sharp nanopillars. The cell viability on these nanostructured surfaces was nearly six-fold lower than on the unmodified silicon wafer. The nanostructured surface also killed mammalian cells (mouse osteoblasts) through mechanical rupture of the cell membrane. Thus, such nanostructured super surfaces could find use in the design of self-cleaning and anti-bacterial surfaces for diverse applications such as microfluidics, surgical instruments, pipelines and food packaging.
Abstract:
The hot deformation behavior of hot isostatically pressed (HIPed) P/M IN-100 superalloy has been studied in the temperature range 1000-1200 °C and strain rate range 0.0003-10 s⁻¹ using hot compression testing. A processing map has been developed on the basis of these data using the principles of dynamic materials modelling. The map exhibited three domains: one at 1050 °C and 0.01 s⁻¹, with a peak efficiency of power dissipation of ≈32%; the second at 1150 °C and 10 s⁻¹, with a peak efficiency of ≈36%; and the third at 1200 °C and 0.1 s⁻¹, with a similar efficiency. On the basis of optical and electron microscopic observations, the first domain was interpreted to represent dynamic recovery of the γ phase, the second domain represents dynamic recrystallization (DRX) of γ in the presence of softer γ′, while the third domain represents DRX of the γ phase only. The γ′ phase is stable up to 1150 °C and gets deformed below this temperature; the chunky γ′ accumulates dislocations, which at larger strains cause cracking of this phase. At temperatures lower than 1080 °C and strain rates higher than 0.1 s⁻¹, the material exhibits flow instability, manifested in the form of adiabatic shear bands. The material may be subjected to mechanical processing without cracking or instabilities at 1200 °C and 0.1 s⁻¹, which are the conditions for DRX of the γ phase.
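For background, the efficiency of power dissipation used in processing maps under dynamic materials modelling is conventionally defined from the strain-rate sensitivity m of the flow stress; this standard relation is stated here as context rather than taken from the abstract itself:

```latex
\eta = \frac{2m}{m+1},
\qquad
m = \left.\frac{\partial \ln \sigma}{\partial \ln \dot{\varepsilon}}\right|_{\varepsilon,\,T}
```

Under this definition, the quoted peak efficiencies of roughly 32% and 36% correspond to strain-rate sensitivities of about m = 0.19 and m = 0.22, respectively.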
Abstract:
Texture evolution in a low-cost beta titanium alloy was studied for different modes of rolling and heat treatment. The alloy was cold rolled by unidirectional rolling (UDR) and multi-step cross rolling (MSCR). The cold rolled material was either aged directly or recrystallized and then aged. The evolution of texture in the α and β phases was studied. The rolling texture of the β phase, which is characterized by the γ fiber, is stronger for MSCR than for UDR, while the trend is reversed on recrystallization. The mode of rolling affects the α transformation texture on aging, with smaller α lath size and stronger α texture in UDR than in MSCR. The defect structure in the β phase influences the evolution of α texture on aging: a stronger defect structure in the β phase leads to variant selection, with the rolled samples showing fewer variants than the recrystallized samples.
Abstract:
A numerical model of the entire casting process starting from the mould filling stage to complete solidification is presented. The model takes into consideration any phase change taking place during the filling process. A volume of fluid method is used for tracking the metal–air interface during filling and an enthalpy based macro-scale solidification model is used for the phase change process. The model is demonstrated for the case of filling and solidification of Pb–15 wt%Sn alloy in a side-cooled two-dimensional rectangular cavity, and the resulting evolution of a mushy region and macrosegregation are studied. The effects of process parameters related to filling, namely degree of melt superheat and filling velocity on macrosegregation in the cavity, are also investigated. Results show significant differences in the progress of the mushy zone and macrosegregation pattern between this analysis and conventional analysis without the filling effect.
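The enthalpy-based treatment of phase change can be illustrated with a minimal one-dimensional sketch. This is a pure-substance model with dimensionless illustrative properties, not the paper's Pb-Sn alloy model, and it omits mould filling, flow, and volume-of-fluid interface tracking entirely; it only shows how temperature and liquid fraction are recovered from a single enthalpy field during conduction-driven solidification:

```python
# Minimal 1-D enthalpy method: a liquid bar cooled from the left wall.
# All properties are dimensionless and illustrative (rho = c = k = 1),
# latent heat L_HEAT = 1, melting point TM = 0.
L_HEAT, TM = 1.0, 0.0
N, DX, DT = 20, 0.1, 0.002      # cells, spacing, time step (stable: DT < DX**2 / 2)
T_WALL, T_INIT = -1.0, 0.2      # cold-wall temperature; initial superheated liquid

def temperature(h):
    """Recover temperature from enthalpy across solid / mushy / liquid states."""
    if h <= TM:                 # solid: h = c*T
        return h
    if h >= TM + L_HEAT:        # liquid: h = c*T + L
        return h - L_HEAT
    return TM                   # mushy zone: temperature pinned at the melting point

def liquid_fraction(h):
    return min(1.0, max(0.0, (h - TM) / L_HEAT))

h = [T_INIT + L_HEAT] * N       # start fully liquid with superheat
for _ in range(250):            # explicit time marching to t = 0.5
    T = [temperature(v) for v in h]
    h_new = h[:]
    for i in range(N):
        t_left = T_WALL if i == 0 else T[i - 1]       # fixed-temperature wall
        t_right = T[i] if i == N - 1 else T[i + 1]    # insulated far end (mirror)
        h_new[i] += DT * (t_left - 2.0 * T[i] + t_right) / DX**2
    h = h_new
```

By t = 0.5 the cell next to the wall has fully solidified while the far end of the bar is still liquid, so the mushy front sits in between; a macro-scale solidification model of the kind used in the paper couples a field like this to momentum and species transport.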