977 results for "federated search tool"
Abstract:
Inspired by the demonstration that tool-use variants among wild chimpanzees and orangutans qualify as traditions (or cultures), we developed a formal model to predict the incidence of these acquired specializations among wild primates and to examine the evolution of their underlying abilities. We assumed that the acquisition of the skill by an individual in a social unit is crucially controlled by three main factors, namely the probability of innovation, the probability of socially biased learning, and the prevailing social conditions (sociability, or the number of potential experts in close proximity). The model reconfirms the restriction of customary tool use in wild primates to the most intelligent radiation, the great apes; the greater incidence of tool use in more sociable populations of orangutans and chimpanzees; and tendencies toward tool manufacture among the most sociable monkeys. However, it also indicates that gregarious sociality is far more likely to sustain invented skills in a population than a solitary life, where the mother is the only accessible expert. We therefore used the model to explore the evolution of the three key parameters. The most likely evolutionary scenario is one where, if complex skills contribute to fitness, sociability and/or the capacity for socially biased learning increase, whereas innovative abilities (i.e., intelligence) follow indirectly. We suggest that the evolution of high intelligence will often be a byproduct of selection on abilities for socially biased learning that are needed to acquire important skills, and hence that high intelligence should be most common in sociable rather than solitary organisms. Evidence for increased sociability during hominin evolution is consistent with this new hypothesis.
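To make the interaction of the three parameters concrete, here is a minimal, hypothetical simulation sketch (not the authors' published model): an individual acquires the skill either by independent innovation or by socially biased learning from nearby experts, and the expected number of accessible experts encodes sociability. All parameter values are illustrative assumptions.

```python
import random

def skill_persistence(p_innovate, p_social, n_experts_nearby,
                      pop_size=100, generations=500, seed=1):
    """Fraction of generations in which at least one individual is skilled."""
    rng = random.Random(seed)
    skilled_frac = 0.0
    alive = 0
    for _ in range(generations):
        # chance to acquire the skill: innovate, or learn from nearby experts
        exposure = n_experts_nearby * skilled_frac
        p = p_innovate + (1 - p_innovate) * (1 - (1 - p_social) ** exposure)
        n_skilled = sum(rng.random() < p for _ in range(pop_size))
        skilled_frac = n_skilled / pop_size
        alive += n_skilled > 0
    return alive / generations

# Same innovation and learning abilities; only sociability differs.
print(skill_persistence(0.001, 0.5, n_experts_nearby=5))  # gregarious group
print(skill_persistence(0.001, 0.5, n_experts_nearby=1))  # solitary: mother only
```

Under these stand-in values, the gregarious population maintains the invented skill across most generations while the solitary one repeatedly loses it, which is the qualitative contrast the abstract reports.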
Abstract:
Low-pressure MOCVD, with tris(2,4-pentanedionato)aluminum(III) as the precursor, was used in the present investigation to coat alumina onto cemented carbide cutting tools. To evaluate the MOCVD process, the efficiency in cutting operations of MOCVD-coated tools was compared with that of tools coated using the industry-standard CVD process. Three multilayer cemented carbide cutting tool inserts, viz., TiN/TiC/WC, CVD-coated Al2O3 on TiN/TiC/WC, and MOCVD-coated Al2O3 on TiN/TiC/WC, were compared in the dry turning of mild steel. Turning tests were conducted at cutting speeds ranging from 14 to 47 m/min and depths of cut from 0.25 to 1 mm, at a constant feed rate of 0.2 mm/rev. The axial, tangential, and radial forces were measured using a lathe tool dynamometer for the different cutting parameters, and the machined workpieces were tested for surface roughness. The results indicate that, in most of the cases examined, the MOCVD-coated inserts produced a smoother surface finish while requiring lower cutting forces, indicating that MOCVD produces the best-performing insert, followed by the CVD-coated one. The superior performance of MOCVD-alumina is attributed to the co-deposition of carbon with the oxide, due to the very nature of the precursor used, leading to enhanced mechanical properties for cutting applications in harsh environments.
Abstract:
In this paper, we outline an approach to the task of designing network codes in a non-multicast setting. Our approach makes use of the concept of interference alignment. As an example, we consider the distributed storage problem, in which data is stored across a network of n nodes, a data collector can recover the data by connecting to any k of the n nodes, and, upon failure of a node, a new node can replicate the data stored in the failed node while minimizing the repair bandwidth.
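As a toy illustration of the storage property described (any k of the n nodes suffice to recover the data), the sketch below uses a plain systematic Reed-Solomon-style erasure code over a small prime field. This is only for intuition; the paper's interference-alignment construction and its repair-bandwidth optimization are not reproduced here.

```python
P = 257  # prime field size; each data symbol is an int in [0, P)

def lagrange_eval(points, x):
    """Evaluate, at x, the unique polynomial through `points` (mod P)."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (x - xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P  # modular inverse
    return total

def encode(data, n):
    """Node i stores the code polynomial's value at point i (1-based)."""
    k = len(data)
    points = list(zip(range(1, k + 1), data))  # systematic part
    return [(i, data[i - 1]) if i <= k else (i, lagrange_eval(points, i))
            for i in range(1, n + 1)]

data = [42, 7, 199]                            # k = 3 symbols
shares = encode(data, n=5)                     # stored on n = 5 nodes
survivors = [shares[0], shares[3], shares[4]]  # any 3 of the 5 nodes
print([lagrange_eval(survivors, x) for x in (1, 2, 3)])  # -> [42, 7, 199]
```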
Abstract:
Bid optimization is now becoming quite popular in sponsored search auctions on the Web. Given a keyword and the maximum willingness to pay of each advertiser interested in the keyword, the bid optimizer generates a profile of bids for the advertisers with the objective of maximizing customer retention without compromising the revenue of the search engine. In this paper, we present a bid optimization algorithm that is based on a Nash bargaining model where the first player is the search engine and the second player is a virtual agent representing all the bidders. We make the realistic assumption that each bidder specifies a maximum willingness-to-pay value and a discrete, finite set of bid values. We show that the Nash bargaining solution for this problem always lies on a certain edge of the convex hull, such that one endpoint of the edge is the vector of the maximum willingness to pay of all the bidders. We show that the other endpoint of this edge can be computed as the solution of a linear programming problem. We also show how the solution can be transformed into a bid profile for the advertisers.
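A hypothetical sketch of the bargaining step follows: the Nash bargaining solution maximizes the product of the two players' utility gains, searched here numerically along the edge whose one endpoint is the willingness-to-pay vector. The utility functions, the disagreement points, and the second endpoint are stand-ins, not the paper's formulation or its LP.

```python
def nash_point_on_edge(u, v, engine_util, bidders_util,
                       d_engine=0.0, d_bidders=0.0, steps=1000):
    """Grid-search t in [0, 1] maximizing the Nash product on edge b(t)."""
    best_t, best_val = 0.0, float("-inf")
    for s in range(steps + 1):
        t = s / steps
        b = [(1 - t) * ui + t * vi for ui, vi in zip(u, v)]  # point on the edge
        val = (engine_util(b) - d_engine) * (bidders_util(b) - d_bidders)
        if val > best_val:
            best_t, best_val = t, val
    return best_t

# Toy utilities: engine revenue grows with bids, bidder surplus shrinks.
wtp = [5.0, 3.0]    # endpoint 1: vector of maximum willingness to pay
other = [2.0, 1.0]  # endpoint 2: e.g., the one the LP would return
t = nash_point_on_edge(
    wtp, other,
    engine_util=lambda b: sum(b),
    bidders_util=lambda b: sum(w - x for w, x in zip(wtp, b)))
print(t)  # interior point of the edge balancing revenue and retention
```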
Abstract:
This research presents a new approach to, and the development of, a design methodology based on the perspective of meanings. In this study, the design process is explored as the development of a structure of meanings. The processes of searching for and evaluating meanings form the foundation of developing this structure. To facilitate the use and operation of the meanings, the WordNet lexical database and an existing visualization of WordNet, Visuwords, are used for the meaning-search process. The basic tool used for the evaluation process is the WordNet::Similarity software, which measures the relatedness of meanings in the database; in this way, it measures the degree of interconnection between different meanings. These search and evaluation techniques are then incorporated into our methodology of the structure of meanings to support the design process. The measures of relatedness of meanings are developed into convergence criteria for application in the evaluation processes. The methodology for the structure of meanings developed here is then used to construct meanings in a verification of a product design. The steps of the design methodology, including the search and evaluation processes involved in developing the structure of meanings, are elucidated. The choices made by the designer in terms of meanings are supported by subsequent searches and evaluations of meanings to be implemented in the designed product. In conclusion, the paper presents directions for further development and extensions of the proposed design methodology.
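A minimal sketch of the relatedness measurement the methodology relies on, using NLTK's WordNet interface in place of the Perl WordNet::Similarity package the paper uses (the scores differ, but the role in the evaluation step is the same; the example words are illustrative):

```python
from nltk.corpus import wordnet as wn  # requires: nltk.download('wordnet')

def relatedness(word_a, word_b):
    """Best path similarity over all sense pairs of the two words."""
    scores = [s1.path_similarity(s2) or 0.0
              for s1 in wn.synsets(word_a)
              for s2 in wn.synsets(word_b)]
    return max(scores, default=0.0)

# Candidate meanings for a product concept, ranked against a target
# meaning; a threshold on this score can act as a convergence criterion.
target = "comfort"
for candidate in ("softness", "support", "speed"):
    print(candidate, relatedness(target, candidate))
```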
Abstract:
In this thesis, we address the problem of multi-agent search. We formulate two deploy-and-search strategies based on the optimal deployment of agents in the search space so as to maximize the search effectiveness in a single step. We show that a variation of the centroidal Voronoi configuration is the optimal deployment. When the agents have sensors with different capabilities, the problem becomes heterogeneous in nature. We introduce a new concept, namely the generalized Voronoi partition, in order to formulate and solve the heterogeneous multi-agent search problem. We address a few theoretical issues, such as the optimality of the deployment, convergence, and the spatial distributedness of the control law and the search strategies. Simulation experiments are carried out to compare the performance of the proposed strategies with a few simple search strategies.
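A minimal sketch of one deployment step toward a centroidal Voronoi configuration, under assumed details (Monte Carlo sampling of the search space, homogeneous sensors; the thesis' generalized Voronoi extension for heterogeneous agents is not shown): each sample point is assigned to its nearest agent, and each agent moves to the density-weighted centroid of its cell.

```python
import numpy as np

def deploy_step(agents, samples, density):
    """One Lloyd-style step toward a centroidal Voronoi configuration."""
    d2 = ((samples[:, None, :] - agents[None, :, :]) ** 2).sum(-1)
    owner = d2.argmin(axis=1)                 # nearest-agent assignment
    new_agents = agents.copy()
    for i in range(len(agents)):
        w = density[owner == i]
        pts_i = samples[owner == i]
        if w.sum() > 0:                       # density-weighted centroid of cell i
            new_agents[i] = (pts_i * w[:, None]).sum(0) / w.sum()
    return new_agents

rng = np.random.default_rng(0)
samples = rng.uniform(0, 1, size=(5000, 2))              # discretized search space
density = np.exp(-((samples - 0.7) ** 2).sum(1) / 0.02)  # where targets are likely
agents = rng.uniform(0, 1, size=(4, 2))
for _ in range(20):                                      # iterate toward the CVT
    agents = deploy_step(agents, samples, density)
print(agents)
```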
Abstract:
We investigate the spatial search problem on the two-dimensional square lattice, using the Dirac evolution operator discretized according to the staggered lattice fermion formalism. d=2 is the critical dimension for the spatial search problem, where the infrared divergence of the evolution operator leads to logarithmic factors in the scaling behavior. As a result, the construction used in our accompanying article [A. Patel and M. A. Rahaman, Phys. Rev. A 82, 032330 (2010)] provides an O(√N ln N) algorithm, which is not optimal. The scaling behavior can be improved to O(√(N ln N)) by cleverly controlling the massless Dirac evolution operator with an ancilla qubit, as proposed by Tulsi [Phys. Rev. A 78, 012310 (2008)]. We reinterpret the ancilla control as the introduction of an effective mass at the marked vertex, and optimize the proportionality constants of the scaling behavior of the algorithm by numerically tuning the parameters.
Abstract:
In this paper, we address a key problem faced by advertisers in sponsored search auctions on the web: how much to bid, given the bids of the other advertisers, so as to maximize individual payoffs? Assuming the generalized second price auction as the auction mechanism, we formulate this problem in the framework of an infinite-horizon alternating-move game of advertiser bidding behavior. For a sponsored search auction involving two advertisers, we characterize all the pure strategy and mixed strategy Nash equilibria. We also prove that the bid prices converge to a Nash equilibrium if the advertisers follow a myopic best-response bidding strategy. Following this, we investigate the bidding behavior of the advertisers when they use Q-learning. We discover empirically an interesting trend: the Q-values converge even when both advertisers learn simultaneously.
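A hypothetical sketch of the learning experiment described (the paper's exact valuations, click-through rates, and learning schedule may differ): two advertisers with a discrete bid set learn bids via stateless (bandit-form) Q-learning in a two-slot generalized second price auction.

```python
import random

BIDS = [1, 2, 3, 4, 5]   # discrete, finite bid set
VALUES = [5, 4]          # assumed per-click valuations of the two advertisers
CTR = [1.0, 0.5]         # assumed click-through rates of slots 1 and 2

def gsp_payoffs(b0, b1):
    """GSP: higher bidder gets slot 1 and pays the rival's bid; ties split."""
    if b0 == b1:
        order = random.sample([0, 1], 2)
    else:
        order = [0, 1] if b0 > b1 else [1, 0]
    bids, pay = [b0, b1], [0.0, 0.0]
    pay[order[0]] = CTR[0] * (VALUES[order[0]] - bids[order[1]])
    pay[order[1]] = CTR[1] * VALUES[order[1]]   # no lower bid, so slot 2 pays 0
    return pay

Q = [{b: 0.0 for b in BIDS} for _ in range(2)]
alpha, eps = 0.1, 0.1
for _ in range(20000):
    # epsilon-greedy action selection, both agents learning simultaneously
    acts = [random.choice(BIDS) if random.random() < eps
            else max(Q[i], key=Q[i].get) for i in range(2)]
    r = gsp_payoffs(*acts)
    for i in range(2):                          # stateless Q-update
        Q[i][acts[i]] += alpha * (r[i] - Q[i][acts[i]])
print([max(q, key=q.get) for q in Q])           # learned bid of each advertiser
```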
INTACTE: An Interconnect Area, Delay, and Energy Estimation Tool for Microarchitectural Explorations
Abstract:
Prior work on modeling interconnects has focused on optimizing wire and repeater design to trade off energy and delay, and is largely based on low-level circuit parameters. Hence, these models are hard to use directly for high-level microarchitectural trade-offs in the initial exploration phase of a design. In this paper, we propose INTACTE, a tool that architects can use to get reasonably accurate interconnect area, delay, and power estimates from a few architecture-level parameters of the interconnect, such as length, width (in number of bits), frequency, and latency, for a specified technology and voltage. The tool uses well-known models of interconnect delay and energy that take into account the wire pitch, repeater size, and spacing for a range of voltages and technologies. It then solves the optimization problem of finding the lowest-energy interconnect design, in terms of the low-level circuit parameters, that meets the architectural constraints given as inputs. In addition, the tool provides the area, energy, and delay for a range of supply voltages and degrees of pipelining, which can be used for microarchitectural exploration of a chip. The delay and energy models used by the tool have been validated against low-level circuit simulations. We discuss several potential applications of the tool and present an example of optimizing interconnect design in the context of clustered VLIW architectures.
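A back-of-the-envelope sketch in the spirit of the tool (all constants are assumed placeholders; INTACTE's calibrated models and optimizer are not reproduced): Elmore delay and switching energy of a repeated wire, swept over repeater size and spacing to find the lowest-energy design meeting a latency constraint.

```python
R0, C0 = 10e3, 0.1e-15  # assumed: min-size inverter resistance (ohm), input cap (F)
r, c = 200.0, 0.2e-12   # assumed: wire resistance (ohm/mm), capacitance (F/mm)
VDD = 1.0               # supply voltage (V)

def wire_delay_energy(length_mm, seg_mm, size):
    """Elmore delay and per-transition switching energy of a repeated wire."""
    n = max(1, round(length_mm / seg_mm))   # number of repeater segments
    seg = length_mm / n
    t_seg = 0.69 * (R0 / size) * (size * C0 + c * seg) \
          + 0.69 * r * seg * (c * seg / 2 + size * C0)
    energy = (c * length_mm + n * size * C0) * VDD ** 2
    return n * t_seg, energy

# Sweep repeater size and spacing; keep the lowest-energy design meeting the
# architectural latency constraint -- the kind of optimization INTACTE solves.
best = None
for size in range(10, 200, 10):             # repeater size (x min inverter)
    for seg in (0.25, 0.5, 1.0, 2.0):       # repeater spacing (mm)
        t, e = wire_delay_energy(length_mm=5.0, seg_mm=seg, size=size)
        if t <= 250e-12 and (best is None or e < best[0]):
            best = (e, t, size, seg)
print(best)  # (energy J, delay s, repeater size, spacing mm)
```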
Abstract:
This paper addresses the problem of multi-agent search in an unknown environment. The agents are autonomous in nature and are equipped with the sensors necessary to carry out the search operation. The uncertainty, or lack of information, about the search area is known a priori as a probability density function. The agents are deployed optimally so as to maximize the one-step uncertainty reduction. The agents continue to redeploy themselves and reduce uncertainty until the uncertainty density over the search space falls below a minimum acceptable level. It is shown, using LaSalle's invariance principle, that a distributed control law which moves each agent toward the centroid of its Voronoi partition, modified by the sensor range, leads to single-step optimal deployment. This principle is then used to devise search trajectories for the agents. Simulations were carried out in 2D space with saturation on the agents' speeds. The results show that the per-step control strategy indeed moves the agents to their respective centroids, and that the algorithm reduces the uncertainty distribution to the required level within a few steps.
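A simplified sketch of the search loop described, under assumed details (a discretized uncertainty map and a disc sensor that multiplicatively reduces local uncertainty per visit; the paper's exact control law and LaSalle-based analysis are not reproduced): agents move to the uncertainty-weighted centroids of their cells, sense, and repeat until the density falls below a threshold.

```python
import numpy as np

rng = np.random.default_rng(1)
pts = rng.uniform(0, 1, (4000, 2))               # discretized search space
unc = np.exp(-((pts - 0.3) ** 2).sum(1) / 0.05)  # a priori uncertainty density
agents = rng.uniform(0, 1, (3, 2))
R, beta = 0.15, 0.6        # assumed sensor range and detection effectiveness

step = 0
while unc.max() > 0.05 and step < 100:           # minimum acceptable level
    owner = ((pts[:, None] - agents[None]) ** 2).sum(-1).argmin(1)
    for i in range(len(agents)):                 # move to the cell centroid,
        m = owner == i                           # weighted by remaining uncertainty
        if unc[m].sum() > 0:
            agents[i] = (pts[m] * unc[m, None]).sum(0) / unc[m].sum()
    for a in agents:                             # a visit reduces local uncertainty
        near = ((pts - a) ** 2).sum(1) < R ** 2
        unc[near] *= 1 - beta
    step += 1
print(step, unc.max())   # steps taken, residual peak uncertainty
```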
Abstract:
This paper addresses a search problem with multiple limited-capability search agents in a partially connected, dynamic networked environment under different information structures. A self-assessment-based decision-making scheme for multiple agents is proposed that uses a modified negotiation scheme with low communication overheads. The scheme has the attractive features of fast decision-making and scalability to a large number of agents without increasing the complexity of the algorithm. Two models of the self-assessment scheme are developed to study the effect of increased information exchange during decision-making. Some analytical results are also obtained: the maximum number of self-assessment cycles, the effect of increasing communication range, the completeness of the algorithm, and lower and upper bounds on the search time. The performance of the various self-assessment schemes, in terms of total uncertainty reduction in the search region using different information structures, is studied. It is shown that the communication requirement of the self-assessment scheme is almost half that of the negotiation schemes, while its performance is close to the optimal solution. Comparisons with different sequential search schemes are also carried out. Note to Practitioners: In futuristic military and civilian applications such as search and rescue, surveillance, patrol, and oil-spill monitoring, a swarm of UAVs can be deployed to carry out information-collection missions. These UAVs have limited sensor and communication ranges. To enhance mission performance and complete the mission quickly, cooperation between UAVs is important. Designing cooperative search strategies for multiple UAVs under these constraints is a difficult task. A further requirement in hostile territory is to minimize communication while making decisions, which adds complexity to the decision-making algorithms. In this paper, a self-assessment-based decision-making scheme for multiple UAVs performing a search mission is proposed. The agents make their decisions based on the information acquired through their sensors and through cooperation with neighbors. The complexity of the decision-making scheme is very low. It arrives at decisions quickly, with low communication overheads, while accommodating the various information structures used to increase the fidelity of the uncertainty maps. Theoretical results proving the completeness of the algorithm, along with lower and upper bounds on the search time, are also provided.