37 results for Definition of cuisine


Relevance:

90.00%

Publisher:

Abstract:

CMPs enable simultaneous execution of multiple applications on the same platform, sharing cache resources. Diversity in the cache access patterns of these simultaneously executing applications can trigger inter-application interference, leading to cache pollution. While a larger cache can ameliorate this problem, the growing power consumption that accompanies increasing cache size, amplified at sub-100 nm technologies, makes this solution prohibitive. To address power-aware cache performance, we propose a caching structure that provides: 1. Definition of application-specific cache partitions as aggregations of caching units (molecules). The parameters of each molecule, namely size, associativity and line size, are chosen so that its power consumption and access time are optimal for the given technology. 2. Application-specific resizing of cache partitions with variable and adaptive associativity per cache line, way size and variable line size. 3. A replacement policy that is transparent to the partition in terms of size and heterogeneity in associativity and line size. Through simulation studies we establish the superiority of molecular caches (caches built as aggregations of molecules), which offer a 29% power advantage over an equivalently performing traditional cache.


In this article, a general definition of the process average temperature has been developed, and the impact of the various dissipative mechanisms on 1/COP of the chiller evaluated. The present component-by-component black box analysis removes the assumptions regarding the generator outlet temperature(s) and the component effective thermal conductances. Mass transfer resistance is also incorporated into the absorber analysis to arrive at a more realistic upper limit to the cooling capacity. Finally, the theoretical foundation for the absorption chiller T-s diagram is derived. This diagrammatic approach only requires the inlet and outlet conditions of the chiller components and can be employed as a practical tool for system analysis and comparison. (C) 2000 Elsevier Science Ltd and IIR. All rights reserved.


We study the elasticity, topological defects, and hydrodynamics of the recently discovered incommensurate smectic (AIC) phase, characterized by two collinear mass density waves of incommensurate spatial frequency. The low-energy long-wavelength excitations of the system can be described by a displacement field u(x) and a "phason" field w(x) associated, respectively, with collective and relative motion of the two constituent density waves. We formulate the elastic free energy in terms of these two variables and find that when w=0, its functional dependence on u is identical to that of a conventional smectic liquid crystal, while when u=0, its functional dependence on w is the same as that for the angle variable in a slightly anisotropic XY model. An arbitrariness in the definition of u and w allows a choice that eliminates all relevant couplings between them in the long-wavelength elastic energy. The topological defects of the system are dislocations with nonzero u and w components. We introduce a two-dimensional Burgers lattice for these dislocations, and compute the interaction between them. This has two parts: one arising from the u field that is short ranged and identical to the interaction between dislocations in an ordinary smectic liquid crystal, and one arising from the w field that is long ranged and identical to the logarithmic interaction between vortices in an XY model. The hydrodynamic modes of the AIC include first- and second-sound modes whose direction-dependent velocities are identical to those in ordinary smectics. The sound attenuations have a different direction dependence, however. The breakdown of hydrodynamics found in conventional smectic liquid crystals, with three of the five viscosities diverging as 1/ω at small frequencies ω, occurs in these systems as well and is identical in all its details.
In addition, there is a diffusive phason mode, not found in ordinary smectic liquid crystals, that leads to anomalously slow mechanical response analogous to that predicted in quasicrystals, but on a far more experimentally accessible time scale.
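The decoupled elastic energy described above can be written schematically; the stiffnesses B, K, ρ∥, ρ⊥ below are generic labels assumed here for illustration, not notation taken from the paper:

```latex
F = \int d^3x\,\Big[
      \underbrace{\tfrac{B}{2}(\partial_z u)^2 + \tfrac{K}{2}(\nabla_\perp^2 u)^2}_{\text{smectic-like, }u}
    + \underbrace{\tfrac{\rho_\parallel}{2}(\partial_z w)^2 + \tfrac{\rho_\perp}{2}(\nabla_\perp w)^2}_{\text{anisotropic XY-like, }w}
    \Big]
```

With the couplings between u and w eliminated by the choice of variables, the u sector reproduces conventional smectic elasticity and the w sector a slightly anisotropic XY model.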


Among various methods, the transmission line or the impedance tube method has been most popular for the experimental evaluation of the acoustical impedance of a termination. The existing methods, including the ones reported earlier by the authors, require location of the sound pressure minima and/or maxima, or else make use of some iterative procedures. The present paper deals with a method of analysis of standing waves which does not depend on any of these involved procedures. It is applicable to the case of stationary as well as moving media. It enables one to evaluate the impedance of any passive black box, as well as the aeroacoustic characteristics of a source of pulsating gas flow, with the least experimental work and computation time and with the extra advantage of using a given impedance tube for wavelengths as large as four times its length. A method of external measurements, not involving use of any impedance tube, for evaluating the aeroacoustic characteristics of a source of pulsating gas flow is also dealt with, based on the definition of attenuation or insertion loss of a muffler.
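The insertion loss invoked in the last sentence is a standard muffler metric: the drop in radiated sound pressure level when the muffler is inserted into the duct. A minimal sketch (the pressure amplitudes here are illustrative, not values from the paper):

```python
import math

def insertion_loss_db(p_without, p_with):
    """Insertion loss of a muffler: the drop in radiated sound
    pressure level (dB) when the muffler is inserted into the duct."""
    return 20.0 * math.log10(p_without / p_with)

# A muffler that cuts the radiated pressure amplitude to one tenth
# gives 20 dB of insertion loss.
il = insertion_loss_db(1.0, 0.1)
```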


Indian logic has a long history. It broadly covers the domains of two of the six schools (darsanas) of Indian philosophy, namely Nyaya and Vaisesika. The generally accepted definition of Indian logic over the ages is the science which ascertains valid knowledge either by means of the six senses or by means of the five members of the syllogism. In other words, perception and inference constitute the subject matter of logic. The science of logic evolved in India through three ages: the ancient, the medieval and the modern, spanning almost thirty centuries. Advances in Computer Science, in particular in Artificial Intelligence, have interested researchers in these areas in the basic problems of language, logic and cognition over the past three decades. In the 1980s, Artificial Intelligence evolved into knowledge-based and intelligent system design, and the knowledge base and inference engine became standard subsystems of an intelligent system. One important issue in the design of such systems is knowledge acquisition from humans who are experts in a branch of learning (such as medicine or law) and transferring that knowledge to a computing system. A second important issue is the validation of the knowledge base of the system, i.e., ensuring that the knowledge is complete and consistent. It is in this context that a comparative study of Indian logic with recent theories of logic, language and knowledge engineering will help the computer scientist understand the deeper implications of the terms and concepts he is currently using and attempting to develop.
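The knowledge-base-plus-inference-engine architecture mentioned above can be sketched with a toy forward-chaining engine; the rules below encode the classic Nyaya smoke-fire inference and are invented for illustration:

```python
# A toy knowledge base with a forward-chaining inference engine. The
# rules encode the classic Nyaya smoke-fire example; all facts and
# rules here are invented for illustration.
rules = [
    ({"smoke"}, "fire"),   # where there is smoke, there is fire
    ({"fire"}, "heat"),    # where there is fire, there is heat
]
facts = {"smoke"}

# Apply rules until no new fact can be derived (a fixpoint).
changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True
```

Validating such a knowledge base, in the sense discussed above, amounts to checking that this fixpoint contains no contradictory pair and that every expected conclusion is derivable.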


The principle of the conservation of bond orders during radical-exchange reactions is examined using Mayer's definition of bond orders. This simple intuitive approximation is not valid in a quantitative sense: ab initio results reveal that free valences (or spin densities) develop on the migrating atom during reactions. For several examples of hydrogen-transfer reactions, the sum of the reaction coordinate bond orders in the transition state was found to be 0.92 +/- 0.04 instead of the theoretical 1.00. It is shown that the free valence is almost equal to the square of the spin density on the migrating hydrogen atom, and that the maxima in the free valence (or spin density) profiles coincide (or nearly coincide) with the saddle points in the corresponding energy profiles.
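The arithmetic behind the 0.92 figure can be made concrete; the even split between the breaking and forming bonds below is an assumption made purely for illustration:

```python
import math

# Illustrative arithmetic for the transition state of a hydrogen
# transfer X-H + Y -> X + H-Y. The 0.92 total comes from the text;
# the even split between the two partial bonds is an assumption.
n_breaking = 0.46                        # Mayer bond order of the breaking X-H bond
n_forming = 0.46                         # Mayer bond order of the forming H-Y bond

bond_order_sum = n_breaking + n_forming  # 0.92, short of the "conserved" 1.00
free_valence = 1.0 - bond_order_sum      # develops on the migrating H atom
spin_density = math.sqrt(free_valence)   # free valence ~ (spin density)^2
```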


A clear definition of an approximate parametrization of the curve of intersection of (n-1) implicit surfaces in R^n is given. It is justified that marching methods yield such an approximation.
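A marching method of the kind referred to can be sketched for n = 3, where the curve is the intersection of two implicit surfaces. The sphere-plane example and the Gauss-Newton corrector below are illustrative choices, not the article's construction:

```python
import numpy as np

# Two implicit surfaces in R^3 (an assumed example): a unit sphere and
# the plane z = 0; their intersection is the unit circle.
def F(x):
    return np.array([x @ x - 1.0, x[2]])

def J(x):
    return np.array([2.0 * x, [0.0, 0.0, 1.0]])

def march(x0, step=0.05, n_steps=100, newton_iters=5):
    """Predictor-corrector marching: step along the tangent (the cross
    product of the two gradients), then pull the point back onto the
    curve with a few Gauss-Newton iterations."""
    x = x0.copy()
    pts = [x.copy()]
    for _ in range(n_steps):
        g1, g2 = J(x)
        t = np.cross(g1, g2)
        x = x + step * t / np.linalg.norm(t)       # predictor
        for _ in range(newton_iters):              # corrector
            Jx = J(x)
            x = x - Jx.T @ np.linalg.solve(Jx @ Jx.T, F(x))
        pts.append(x.copy())
    return np.array(pts)

pts = march(np.array([1.0, 0.0, 0.0]))
```

Each returned point satisfies both implicit equations to corrector tolerance, so the polyline through `pts` is an approximate parametrization of the intersection curve.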


The conventional definition of redundancy is applicable to skeletal structural systems only, whereas the concept of redundancy has never been discussed in the context of a continuum. Generally, structures in civil engineering constitute a combination of both skeletal and continuum segments. Hence, this paper presents a generalized definition of redundancy, defined in terms of structural response sensitivity, which is applicable to both continuum and discrete structures. In contrast to the conventional definition of redundancy, which is assumed to be fixed for a given structure and is believed to be independent of loading and material properties, the new definition depends on the strength and response of the structure at a given stage of its service life. The redundancy measure proposed in this paper is linked to structural response sensitivities. Thus, the structure can have different degrees of redundancy during its lifetime, depending on the response sensitivity under consideration. It is believed that this new redundancy measure would be more relevant in structural evaluation, damage assessment, and reliability analysis of structures at large.


In this paper, we present a novel differential geometric characterization of two- and three-degree-of-freedom rigid body kinematics, using a metric defined on dual vectors. The instantaneous angular and linear velocities of a rigid body are expressed as a dual velocity vector, and dual inner product is defined on this dual vector, resulting in a positive semi-definite and symmetric dual matrix. We show that the maximum and minimum magnitude of the dual velocity vector, for a unit speed motion, can be obtained as eigenvalues of this dual matrix. Furthermore, we show that the tip of the dual velocity vector lies on a dual ellipse for a two-degree-of-freedom motion and on a dual ellipsoid for a three-degree-of-freedom motion. In this manner, the velocity distribution of a rigid body can be studied algebraically in terms of the eigenvalues of a dual matrix or geometrically with the dual ellipse and ellipsoid. The second-order properties of the two- and three-degree-of-freedom motions of a rigid body are also obtained from the derivatives of the elements of the dual matrix. This results in a definition of the geodesic motion of a rigid body. The theoretical results are illustrated with the help of a spatial 2R and a parallel three-degree-of-freedom manipulator.
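The dual-number algebra underlying the dual velocity vectors and the dual inner product can be sketched as follows; this is a minimal illustration, and the numerical values are arbitrary:

```python
from dataclasses import dataclass

# Minimal dual-number arithmetic (eps^2 = 0), the algebra behind the
# dual vectors and dual matrices described above.
@dataclass
class Dual:
    re: float   # real part
    du: float   # dual part

    def __add__(self, o):
        return Dual(self.re + o.re, self.du + o.du)

    def __mul__(self, o):
        # (a + eps*b)(c + eps*d) = ac + eps(ad + bc), since eps^2 = 0
        return Dual(self.re * o.re, self.re * o.du + self.du * o.re)

def dual_dot(u, v):
    """Dual inner product of two dual vectors."""
    s = Dual(0.0, 0.0)
    for a, b in zip(u, v):
        s = s + a * b
    return s

# Dual velocity vector: real part = angular velocity components,
# dual part = linear velocity components (illustrative values).
V = [Dual(1.0, 0.5), Dual(0.0, 2.0), Dual(0.0, 0.0)]
m = dual_dot(V, V)   # squared dual magnitude of the velocity
```

The real part of `m` carries the angular-velocity magnitude and the dual part mixes angular and linear terms, which is why extremizing this dual magnitude leads to the dual eigenvalue problem described in the abstract.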


"The purpose of life is to obtain knowledge, use it to live with as much satisfaction as possible, and pass it on with improvements and modifications to the next generation." This may sound philosophical, and the interpretation of words may be subjective, yet it is fairly clear that this is what all living organisms, from bacteria to human beings, do in their lifetime. Indeed, this can be adopted as the information-theoretic definition of life. Over billions of years, biological evolution has experimented with a wide range of physical systems for acquiring, processing and communicating information. We are now in a position to make the principles behind these systems mathematically precise, and then extend them as far as the laws of physics permit. Therein lies the future of computation, of ourselves, and of life.


The Reeb graph of a scalar function represents the evolution of the topology of its level sets. This paper describes a near-optimal output-sensitive algorithm for computing the Reeb graph of scalar functions defined over manifolds or non-manifolds in any dimension. Key to the simplicity and efficiency of the algorithm is an alternate definition of the Reeb graph that considers equivalence classes of level sets instead of individual level sets. The algorithm works in two steps. The first step locates all critical points of the function in the domain. Critical points correspond to nodes in the Reeb graph. Arcs connecting the nodes are computed in the second step by a simple search procedure that works on a small subset of the domain that corresponds to a pair of critical points. The paper also describes a scheme for controlled simplification of the Reeb graph and two different graph layout schemes that help in the effective presentation of Reeb graphs for visual analysis of scalar fields. Finally, the Reeb graph is employed in four different applications: surface segmentation, spatially-aware transfer function design, visualization of interval volumes, and interactive exploration of time-varying data.
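The first step of the algorithm, locating critical points, can be illustrated in a simplified form on a scalar function sampled at graph vertices. This sketch finds only minima and maxima and omits the saddle detection the full algorithm needs:

```python
from collections import defaultdict

def critical_points(values, edges):
    """Minima and maxima of a scalar function sampled on graph
    vertices: a vertex is a minimum (maximum) if all its neighbors
    take strictly larger (smaller) values."""
    nbrs = defaultdict(list)
    for u, v in edges:
        nbrs[u].append(v)
        nbrs[v].append(u)
    minima = [v for v in values if all(values[v] < values[u] for u in nbrs[v])]
    maxima = [v for v in values if all(values[v] > values[u] for u in nbrs[v])]
    return minima, maxima

# A 6-cycle whose function values alternate low/high: three minima and
# three maxima, each of which would become a node of the Reeb graph.
values = {0: 0.0, 1: 5.0, 2: 1.0, 3: 6.0, 4: 2.0, 5: 7.0}
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0)]
minima, maxima = critical_points(values, edges)
```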


Homogenization of partial differential equations is a relatively new area with tremendous applications in various branches of the engineering sciences: materials science, porous media, the study of vibrations of thin structures, and composite materials, to name a few. Though material scientists and others had a reasonable idea about the homogenization process, it lacked a good mathematical theory till the early seventies. The first proper mathematical procedure was developed in the seventies, and over the last 30 years or so the field has flourished in various ways, both in applications and mathematically. This is not a full survey article, nor does it concentrate on a specialized problem; we do indicate certain specialized problems of our interest, without much detail, but that is not the main theme of the article. I plan to give an introductory presentation with the aim of catering to a wider audience. We go through a few examples to understand the homogenization procedure in a general perspective, together with applications. We also present the various mathematical techniques available and, where possible, some details about some of the techniques. A possible definition of homogenization would be that it is a process of understanding a heterogeneous (in-homogeneous) medium, where the heterogeneities are at the microscopic level, as in composite materials, through a homogeneous medium. In other words, one would like to obtain a homogeneous description of a highly oscillating in-homogeneous medium. We also present generalizations to nonlinear problems, porous media and so on. Finally, we look at the closely related issue of optimal bounds, which is itself an independent area of research.
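The flavor of a homogenized description can be seen in the classical 1-D example (a standard textbook fact, not one of the article's specific problems): for -(a(x/ε)u')' = f with a periodic coefficient a, the homogenized coefficient is the harmonic mean of a over one period, not its arithmetic mean.

```python
import numpy as np

# Classical 1-D homogenization example: the effective coefficient a*
# for a 1-periodic a(y) is the harmonic mean over one period.
N = 100_000
y = np.arange(N) / N                    # one period, endpoint excluded
a = 2.0 + np.sin(2.0 * np.pi * y)       # oscillating coefficient in [1, 3]

arithmetic_mean = a.mean()              # naive average: 2.0
a_star = 1.0 / np.mean(1.0 / a)         # harmonic mean: sqrt(3) ~ 1.732
```

For this coefficient the harmonic mean is sqrt(3), noticeably below the arithmetic mean: the fine-scale oscillations make the effective medium softer than naive averaging would suggest.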


The notion of the 1-D analytic signal is well understood and has found many applications. At the heart of the analytic signal concept is the Hilbert transform. The problem in extending the concept of the analytic signal to higher dimensions is that there is no unique multidimensional definition of the Hilbert transform; moreover, the notion of analyticity is not so well understood in higher dimensions. Of the several 2-D extensions of the Hilbert transform, the spiral-phase quadrature transform, or Riesz transform, seems to be the natural extension and has attracted a lot of attention, mainly due to its isotropic properties. From the Riesz transform, Larkin et al. constructed a vortex operator, which approximates the quadratures based on asymptotic stationary-phase analysis. In this paper, we show an alternative proof of the quadrature approximation property by invoking the quasi-eigenfunction property of linear, shift-invariant systems. We show that the vortex operator arises as a natural consequence of applying this property. We also characterize the quadrature approximation error in terms of its energy as well as the peak spatial-domain error. Such results are available for 1-D signals, but their counterparts for 2-D signals have not been provided. We also provide simulation results to supplement the analytical calculations.
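The spiral-phase (Riesz) quadrature transform can be sketched in the Fourier domain; the sign convention of the transfer function below is one common choice, not necessarily the one used in the paper:

```python
import numpy as np

def riesz_transform(f):
    """Complex (spiral-phase) Riesz transform of a 2-D real signal,
    computed in the Fourier domain with transfer function
    (wx + i*wy) / |w|; the DC term is set to zero."""
    ny, nx = f.shape
    wx = np.fft.fftfreq(nx)[None, :]
    wy = np.fft.fftfreq(ny)[:, None]
    mag = np.hypot(wx, wy)
    mag[0, 0] = 1.0                      # avoid division by zero at DC
    h = (wx + 1j * wy) / mag
    h[0, 0] = 0.0
    return np.fft.ifft2(h * np.fft.fft2(f))

# For a pure horizontal cosine the operator returns i*sin, i.e. the
# quadrature approximation is exact for a single-orientation,
# single-frequency signal.
x = np.arange(64)
f = np.tile(np.cos(2 * np.pi * 8 * x / 64), (64, 1))
q = riesz_transform(f)
```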


An analysis of the energy budget for the general case of a body translating in a stationary fluid under the action of an external force is used to define a power loss coefficient. This universal definition of power loss coefficient gives a measure of the energy lost in the wake of the translating body and, in general, is applicable to a variety of flow configurations including active drag reduction, self-propulsion and thrust generation. The utility of the power loss coefficient is demonstrated on a model bluff body flow problem concerning a two-dimensional elliptical cylinder in a uniform cross-flow. The upper and lower boundaries of the elliptic cylinder undergo continuous motion due to a prescribed reflectionally symmetric constant tangential surface velocity. It is shown that a decrease in drag resulting from an increase in the strength of tangential surface velocity leads to an initial reduction and eventual rise in the power loss coefficient. A maximum in energetic efficiency is attained for a drag reducing tangential surface velocity which minimizes the power loss coefficient. The effect of the tangential surface velocity on drag reduction and self-propulsion of both bluff and streamlined bodies is explored through a variation in the thickness ratio (ratio of the minor and major axes) of the elliptical cylinders.


The problem of semantic interoperability arises while integrating applications in different task domains across the product life cycle. A new shape-function-relationship (SFR) framework is proposed as a taxonomy, based on which an ontology is developed. An ontology based on the SFR framework, which captures explicit definitions of terminology and knowledge relationships in terms of shape, function and relationship descriptors, offers an attractive approach to solving the semantic interoperability issue. Since all instances of terms are based on a single taxonomy with a formal classification, mapping of terms requires a simple check on the attributes used in the classification. As a preliminary study, the framework is used to develop an ontology of terms used in the aero-engine domain, and the ontology is used to resolve the semantic interoperability problem in the integration of design and maintenance. Since the framework allows a single term to have multiple classifications, handling context-dependent usage of terms becomes possible. Automating the classification of terms and establishing the completeness of the classification scheme are presently being addressed.