954 results for Analogy (Linguistics)


Relevance: 10.00%

Abstract:

Life is full of difficult choices. Everyone has their own way of dealing with these, some effective, some not. The problem is particularly acute in engineering design because of the vast amount of information designers have to process. This paper deals with one subset of these problems: the selection of materials and processes, and their links to the design of products. Even this subset, though, presents many of the generic problems of choice, and the challenges in creating tools to assist the designer in making them. The key elements are those of classification, of indexing, of reaching decisions using incomplete data in many different formats, and of devising effective strategies for selection. This final element - that of selection strategies - poses particular challenges. Product design, as an example, is an intricate blend of the technical and (for want of a better word) the aesthetic. To meet these needs, a tool that allows selection by analysis, by analogy, by association and simply by 'browsing' is necessary. An example of such a tool, its successes and remaining challenges, will be described.
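
As an illustration of the 'selection by analysis' strategy mentioned above, the sketch below screens a set of candidate materials against property limits and ranks the survivors by a performance index (specific stiffness). The material records, limits and index are hypothetical placeholders, not data or logic from the tool described in the paper.

    # Minimal sketch of screen-then-rank materials selection (hypothetical data).
    materials = [
        {"name": "Al 6061",    "E_GPa": 69,  "density_kg_m3": 2700, "max_T_C": 200},
        {"name": "CFRP",       "E_GPa": 110, "density_kg_m3": 1600, "max_T_C": 150},
        {"name": "Mild steel", "E_GPa": 210, "density_kg_m3": 7850, "max_T_C": 400},
        {"name": "PEEK",       "E_GPa": 3.6, "density_kg_m3": 1300, "max_T_C": 250},
    ]

    def select(materials, min_E=50.0, min_service_T=180.0):
        """Screen by property limits, then rank by specific stiffness E/rho."""
        passed = [m for m in materials
                  if m["E_GPa"] >= min_E and m["max_T_C"] >= min_service_T]
        return sorted(passed, key=lambda m: m["E_GPa"] / m["density_kg_m3"], reverse=True)

    for m in select(materials):
        print(m["name"], round(m["E_GPa"] / m["density_kg_m3"], 4))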

Relevance: 10.00%

Abstract:

A computer can assist the process of design by analogy by recording past designs. The experience these represent could be much wider than that of designers using the system, who therefore need to identify potential cases of interest. If the computer assists with this lookup, the designers can concentrate on the more interesting aspect of extracting and using the ideas which are found. However, as the knowledge base grows it becomes ever harder to find relevant cases using a keyword indexing scheme without knowing precisely what to look for. Therefore a more flexible searching system is needed.

If a similarity measure can be defined for the features of the designs, then it is possible to match and cluster them. Using a simple measure like co-occurrence of features within a particular case would allow this to happen without human intervention, which is tedious and time-consuming. Any knowledge that is acquired about how features are related to each other will be very shallow: it is not intended as a cognitive model for how humans understand, learn, or retrieve information, but more an attempt to make effective, efficient use of the information available. The question remains of whether such shallow knowledge is sufficient for the task.

A system to retrieve information from a large database is described. It uses co-occurrences to relate keywords to each other, and then extends search queries with similar words. This seems to make relevant material more accessible, providing hope that this retrieval technique can be applied to a broader knowledge base.
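
A minimal sketch of the retrieval idea just described is given below: keyword co-occurrence counts within cases define a similarity between keywords, and a query is extended with similar words before matching. The toy case index, the cosine-style measure and the similarity threshold are illustrative assumptions, not the system's actual implementation.

    # Sketch of co-occurrence based query expansion over keyword-indexed cases.
    import math
    from collections import defaultdict
    from itertools import combinations

    cases = {
        "case1": {"gear", "torque", "shaft"},
        "case2": {"gear", "belt", "pulley"},
        "case3": {"shaft", "bearing", "torque"},
    }

    # Count how often each keyword, and each pair of keywords, appears in a case.
    cooc = defaultdict(int)
    freq = defaultdict(int)
    for keywords in cases.values():
        for w in keywords:
            freq[w] += 1
        for a, b in combinations(sorted(keywords), 2):
            cooc[(a, b)] += 1

    def similarity(a, b):
        """Cosine-style similarity derived from within-case co-occurrence counts."""
        fa, fb = freq.get(a, 0), freq.get(b, 0)
        if fa == 0 or fb == 0:
            return 0.0
        return cooc.get(tuple(sorted((a, b))), 0) / math.sqrt(fa * fb)

    def expand(query, threshold=0.4):
        """Extend a keyword query with words that frequently co-occur with its terms."""
        extra = {w for w in list(freq) for q in query
                 if w not in query and similarity(w, q) >= threshold}
        return set(query) | extra

    def retrieve(query):
        """Return the cases whose keywords overlap the expanded query."""
        expanded = expand(set(query))
        return [name for name, kws in sorted(cases.items()) if kws & expanded]

    print(retrieve({"torque"}))  # also finds cases indexed under related keywords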

Relevance: 10.00%

Abstract:

To calculate the noise emanating from a turbulent flow using an acoustic analogy, knowledge concerning the unsteady characteristics of the turbulence is required. Specifically, the form of the turbulent correlation tensor, together with various time- and length-scales, is needed. However, if a Reynolds Averaged Navier-Stokes calculation is used as the starting point, then one can only obtain steady characteristics of the flow, and it is necessary to model the unsteady behavior in some way. While considerable attention has been given to the correct way to model the form of the correlation tensor, less attention has been given to the underlying physics that dictate the proper choice of time-scale. In this paper the authors recognize that there are several time-dependent processes occurring within a turbulent flow and propose a new way of obtaining the time-scale. Isothermal single-stream flow jets with Mach numbers 0.75 and 0.90 have been chosen for the present study. The Mani-Gliebe-Balsa-Khavaran method has been used for prediction of noise at different angles, and there is good agreement between the noise predictions and observations. Furthermore, the new time-scale has an inherent frequency dependency that arises naturally from the underlying physics, thus avoiding supplementary mathematical enhancements needed in previous modeling.
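
For orientation only, the classical starting point for such a time-scale is the ratio of turbulence kinetic energy to its dissipation rate, both available from a steady RANS solution. The short sketch below computes that baseline estimate; it does not reproduce the frequency-dependent time-scale proposed in the paper, whose form is not given here, and the example values are illustrative.

    # Baseline turbulent time-scale from RANS quantities (a generic k/epsilon
    # estimate, not the frequency-dependent scale proposed in the paper).
    def turbulent_time_scale(k, epsilon, c_tau=1.0):
        """k: turbulence kinetic energy [m^2/s^2]; epsilon: dissipation rate [m^2/s^3]."""
        return c_tau * k / epsilon

    # Illustrative values for a high-subsonic jet shear layer.
    print(turbulent_time_scale(k=200.0, epsilon=5.0e4))  # about 4 ms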

Relevance: 10.00%

Abstract:

We present the results of a computational study of the post-processed Galerkin methods put forward by Garcia-Archilla et al. applied to the non-linear von Karman equations governing the dynamic response of a thin cylindrical panel periodically forced by a transverse point load. We spatially discretize the shell using finite differences to produce a large system of ordinary differential equations (ODEs). By analogy with spectral non-linear Galerkin methods we split this large system into a 'slowly' contracting subsystem and a 'quickly' contracting subsystem. We then compare the accuracy and efficiency of (i) ignoring the dynamics of the 'quick' system (analogous to a traditional spectral Galerkin truncation and sometimes referred to as 'subspace dynamics' in the finite element community when applied to numerical eigenvectors), (ii) slaving the dynamics of the quick system to the slow system during numerical integration (analogous to a non-linear Galerkin method), and (iii) ignoring the influence of the dynamics of the quick system on the evolution of the slow system until we require some output, when we 'lift' the variables from the slow system to the quick using the same slaving rule as in (ii). This corresponds to the post-processing of Garcia-Archilla et al. We find that method (iii) produces essentially the same accuracy as method (ii) but requires only the computational power of method (i) and is thus more efficient than either. In contrast with spectral methods, this type of finite-difference technique can be applied to irregularly shaped domains. We feel that post-processing of this form is a valuable method that can be implemented in computational schemes for a wide variety of partial differential equations (PDEs) of practical importance.
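
The sketch below illustrates the three strategies (truncation, slaving, post-processed lifting) on a hypothetical two-variable stiff system rather than the discretised panel equations; the equations, coupling strength and slaving rule are stand-in assumptions chosen only to make the roles of the 'slow' and 'quick' parts visible.

    # Toy slow/quick system (not the panel model):
    #   dx/dt = -x + 0.1*y          (slow)
    #   dy/dt = (x**2 - y) / eps    (quick; slaving rule y ~ x**2 for small eps)
    import numpy as np
    from scipy.integrate import solve_ivp

    eps, x0, t_end = 1e-3, 0.8, 2.0
    t_out = np.linspace(0.0, t_end, 21)

    # Reference: integrate the full stiff system.
    full = solve_ivp(lambda t, u: [-u[0] + 0.1 * u[1], (u[0] ** 2 - u[1]) / eps],
                     (0.0, t_end), [x0, x0 ** 2], t_eval=t_out, method="Radau")

    # (i) Truncation: the quick variable is dropped entirely (y = 0).
    trunc = solve_ivp(lambda t, x: [-x[0]], (0.0, t_end), [x0], t_eval=t_out)

    # (ii) Slaving during integration (non-linear Galerkin): y replaced by x**2.
    slaved = solve_ivp(lambda t, x: [-x[0] + 0.1 * x[0] ** 2],
                       (0.0, t_end), [x0], t_eval=t_out)

    # (iii) Post-processing: integrate as in (i), lift y = x**2 only at output time.
    lifted_y = trunc.y[0] ** 2

    print("y(t_end): full =", round(full.y[1, -1], 4),
          " slaved =", round(slaved.y[0, -1] ** 2, 4),
          " lifted =", round(lifted_y[-1], 4))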

Relevance: 10.00%

Abstract:

A novel test method for the characterisation of flexible forming processes is proposed and applied to four flexible forming processes: Incremental Sheet Forming (ISF), conventional spinning, the English wheel and power hammer. The proposed method is developed in analogy with time-domain control engineering, where a system is characterised by its impulse response. The spatial impulse response is used to characterise the change in workpiece deformation created by a process, but has also been applied with a strain spectrogram, as a novel way to characterise a process and the physical effect it has on the workpiece. Physical and numerical trials to study the effects of process and material parameters on spatial impulse response lead to three main conclusions. Incremental sheet forming is particularly sensitive to process parameters. The English wheel and power hammer are strongly similar and largely insensitive to both process and material parameters. Spinning develops in two stages and is sensitive to most process parameters, but insensitive to prior deformation. Finally, the proposed method could be applied to modelling, classification of existing and novel processes, product-process matching and closed-loop control of flexible forming processes. © 2012 Elsevier B.V.
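
In the spirit of the spatial impulse response described above, a minimal sketch is to take the change in a measured workpiece deformation field before and after one localised process increment; the synthetic Gaussian dent and grid used below are stand-in data, not measurements from any of the four processes.

    # Sketch: spatial impulse response as the difference between deformation
    # fields measured before and after one localised tool increment.
    import numpy as np

    x, y = np.meshgrid(np.linspace(-50, 50, 101), np.linspace(-50, 50, 101))  # mm grid

    z_before = np.zeros_like(x)                                   # flat sheet (synthetic)
    z_after = -0.2 * np.exp(-((x - 5) ** 2 + y ** 2) / (2 * 8.0 ** 2))  # one synthetic dent

    impulse_response = z_after - z_before                         # change from one increment
    print("peak deflection [mm]:", impulse_response.min())
    print("affected area [mm^2]:", (np.abs(impulse_response) > 0.01).sum() * 1.0 ** 2)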

Relevance: 10.00%

Abstract:

Recent work in the area of probabilistic user simulation for training statistical dialogue managers has investigated a new agenda-based user model and presented preliminary experiments with a handcrafted model parameter set. Training the model on dialogue data is an important next step, but non-trivial since the user agenda states are not observable in data and the space of possible states and state transitions is intractably large. This paper presents a summary-space mapping which greatly reduces the number of state transitions and introduces a tree-based method for representing the space of possible agenda state sequences. Treating the user agenda as a hidden variable, the forward/backward algorithm can then be successfully applied to iteratively estimate the model parameters on dialogue data. © 2007 Association for Computational Linguistics.
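
The summary-space mapping and tree-based agenda representation from the paper are not reproduced here; the sketch below shows only the generic forward/backward (Baum-Welch) re-estimation on a small discrete hidden-state model, which is the underlying algorithm applied once the hidden sequences have been mapped to a tractable space. The state and symbol counts and the observation sequence are arbitrary toy values.

    # Generic forward/backward (Baum-Welch) re-estimation on a small discrete HMM.
    import numpy as np

    def baum_welch(obs, n_states, n_symbols, n_iter=20, seed=0):
        rng = np.random.default_rng(seed)
        A = rng.random((n_states, n_states)); A /= A.sum(axis=1, keepdims=True)
        B = rng.random((n_states, n_symbols)); B /= B.sum(axis=1, keepdims=True)
        pi = np.full(n_states, 1.0 / n_states)
        T = len(obs)
        for _ in range(n_iter):
            # Forward pass with per-step scaling to avoid underflow.
            alpha = np.zeros((T, n_states)); scale = np.zeros(T)
            alpha[0] = pi * B[:, obs[0]]; scale[0] = alpha[0].sum(); alpha[0] /= scale[0]
            for t in range(1, T):
                alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
                scale[t] = alpha[t].sum(); alpha[t] /= scale[t]
            # Backward pass reusing the same scaling factors.
            beta = np.zeros((T, n_states)); beta[-1] = 1.0
            for t in range(T - 2, -1, -1):
                beta[t] = (A @ (B[:, obs[t + 1]] * beta[t + 1])) / scale[t + 1]
            gamma = alpha * beta
            gamma /= gamma.sum(axis=1, keepdims=True)
            xi = np.zeros((n_states, n_states))
            for t in range(T - 1):
                x = alpha[t][:, None] * A * (B[:, obs[t + 1]] * beta[t + 1])[None, :]
                xi += x / x.sum()
            # Re-estimate parameters from expected counts.
            pi = gamma[0]
            A = xi / gamma[:-1].sum(axis=0)[:, None]
            for k in range(n_symbols):
                B[:, k] = gamma[np.array(obs) == k].sum(axis=0)
            B /= gamma.sum(axis=0)[:, None]
        return pi, A, B

    pi, A, B = baum_welch(obs=[0, 1, 0, 2, 1, 0, 0, 2], n_states=2, n_symbols=3)
    print(np.round(A, 3))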

Relevance: 10.00%

Abstract:

This paper presents an agenda-based user simulator which has been extended to be trainable on real data with the aim of more closely modelling the complex rational behaviour exhibited by real users. The trainable part is formed by a set of random decision points that may be encountered during the process of receiving a system act and responding with a user act. A sample-based method is presented for using real user data to estimate the parameters that control these decisions. Evaluation results are given both in terms of statistics of generated user behaviour and the quality of policies trained with different simulators. Compared to a handcrafted simulator, the trained system provides a much better fit to corpus data and evaluations suggest that this better fit should result in improved dialogue performance. © 2010 Association for Computational Linguistics.
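
A minimal sketch of the sample-based idea: each random decision point is treated as a discrete choice whose probabilities are estimated from counts over a corpus of user turns. The corpus entries, decision-point names and smoothing constant below are illustrative assumptions, not the paper's actual decision set.

    # Sketch: estimating random-decision-point probabilities from counted behaviour.
    from collections import Counter, defaultdict

    # Hypothetical log: (decision_point, observed_choice) pairs from real dialogues.
    observations = [
        ("answer_style", "full"), ("answer_style", "partial"), ("answer_style", "full"),
        ("add_extra_constraint", "yes"), ("add_extra_constraint", "no"),
        ("add_extra_constraint", "no"), ("add_extra_constraint", "no"),
    ]

    choices = {"answer_style": ["full", "partial"],
               "add_extra_constraint": ["yes", "no"]}

    def estimate(observations, choices, alpha=0.5):
        """Additive-smoothed maximum-likelihood estimates per decision point."""
        counts = defaultdict(Counter)
        for point, choice in observations:
            counts[point][choice] += 1
        probs = {}
        for point, options in choices.items():
            total = sum(counts[point].values()) + alpha * len(options)
            probs[point] = {o: (counts[point][o] + alpha) / total for o in options}
        return probs

    print(estimate(observations, choices))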

Relevance: 10.00%

Abstract:

Modelling dialogue as a Partially Observable Markov Decision Process (POMDP) enables a dialogue policy robust to speech understanding errors to be learnt. However, a major challenge in POMDP policy learning is to maintain tractability, so the use of approximation is inevitable. We propose applying Gaussian processes in reinforcement learning of optimal POMDP dialogue policies, in order (1) to make the learning process faster and (2) to obtain an estimate of the uncertainty of the approximation. We first demonstrate the idea on a simple voice mail dialogue task and then apply this method to a real-world tourist information dialogue task. © 2010 Association for Computational Linguistics.
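
The sketch below shows only the core ingredient the abstract relies on: Gaussian-process regression giving both a mean estimate and an uncertainty for a value function over a one-dimensional toy state. The kernel parameters and synthetic returns are assumptions, and the full GP reinforcement-learning machinery over belief states is not reproduced.

    # Sketch: GP regression giving a value estimate plus uncertainty (toy 1-D state).
    import numpy as np

    def rbf(a, b, lengthscale=0.5, variance=1.0):
        d = a[:, None] - b[None, :]
        return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

    # Hypothetical training data: states visited and observed (noisy) returns.
    X = np.array([0.1, 0.4, 0.5, 0.9])
    y = np.array([0.0, 0.6, 0.7, 0.2])
    noise = 1e-2

    K = rbf(X, X) + noise * np.eye(len(X))
    K_inv_y = np.linalg.solve(K, y)

    X_star = np.linspace(0.0, 1.0, 5)
    K_s = rbf(X_star, X)                        # cross-covariance test/train
    mean = K_s @ K_inv_y                        # predictive mean of the value
    cov = rbf(X_star, X_star) - K_s @ np.linalg.solve(K, K_s.T)
    std = np.sqrt(np.clip(np.diag(cov), 0.0, None))  # predictive uncertainty

    for s, m, u in zip(X_star, mean, std):
        print(f"state {s:.2f}: value {m:+.2f} +/- {u:.2f}")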

Relevance: 10.00%

Abstract:

Discrete particle simulations of a column of an aggregate of identical particles impacting a rigid, fixed target and a rigid, movable target are presented with the aim of understanding the interaction of an aggregate of particles with a structure. In most cases the column of particles is constrained against lateral expansion. The pressure exerted by the particles upon the fixed target (and the momentum transferred) is independent of the coefficient of restitution and the friction coefficient between the particles but is strongly dependent upon the relative density of the particles in the column. There is a mild dependence on the contact stiffness between the particles, which controls the elastic deformation of the densified aggregate of particles. In contrast, the momentum transfer to a movable target is strongly sensitive to the mass ratio of column to target. The impact event can be viewed as an inelastic collision between the sand column and the target with an effective coefficient of restitution between 0 and 0.35, depending upon the relative density of the column. We present a foam analogy in which the impact of the aggregate of particles can be modelled by the impact of an equivalent foam projectile. The calculations on the equivalent projectile are significantly less intensive computationally and yet give predictions to within 5% of the full discrete particle calculations. They also suggest that "model" materials can be used to simulate the loading by an aggregate of particles within a laboratory setting. © 2012 Elsevier Ltd. All rights reserved.
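
A minimal sketch of the collision picture described above: treating the sand column and the movable target as two bodies with an effective coefficient of restitution, the momentum handed to the target follows from the standard restitution relation. The masses, impact velocity and value of e below are illustrative, not values from the simulations.

    # Sketch: momentum transferred to a movable target, modelling the impact as a
    # 1-D collision with an effective coefficient of restitution e (0 <= e <= 0.35).
    def momentum_to_target(m_column, m_target, v_column, e):
        """Column (velocity v_column) hits a target initially at rest."""
        v_target_after = m_column * (1.0 + e) * v_column / (m_column + m_target)
        return m_target * v_target_after

    # Illustrative numbers: 1 kg column at 100 m/s onto a 10 kg target, e = 0.2.
    print(momentum_to_target(m_column=1.0, m_target=10.0, v_column=100.0, e=0.2))  # kg m/s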

Relevance: 10.00%

Abstract:

Most previous work on trainable language generation has focused on two paradigms: (a) using a statistical model to rank a set of generated utterances, or (b) using statistics to inform the generation decision process. Both approaches rely on the existence of a handcrafted generator, which limits their scalability to new domains. This paper presents BAGEL, a statistical language generator which uses dynamic Bayesian networks to learn from semantically-aligned data produced by 42 untrained annotators. A human evaluation shows that BAGEL can generate natural and informative utterances from unseen inputs in the information presentation domain. Additionally, generation performance on sparse datasets is improved significantly by using certainty-based active learning, yielding ratings close to the human gold standard with a fraction of the data. © 2010 Association for Computational Linguistics.
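
A minimal sketch of certainty-based active learning as described: after each training round, the generator's least-confident outputs on unlabelled inputs are sent for annotation. The confidence scores, dialogue-act strings and batch size below are placeholder values, not BAGEL's actual model scores.

    # Sketch: certainty-based active learning - pick the least-confident
    # unlabelled inputs for annotation (confidence scores are placeholders).
    def select_for_annotation(unlabelled, confidence, batch_size=2):
        """Return the inputs whose current model confidence is lowest."""
        ranked = sorted(unlabelled, key=lambda x: confidence[x])
        return ranked[:batch_size]

    confidence = {"inform(area=centre)": 0.91,
                  "inform(food=thai,pricerange=cheap)": 0.42,
                  "confirm(near=museum)": 0.55,
                  "request(phone)": 0.88}

    to_annotate = select_for_annotation(list(confidence), confidence)
    print("send to annotators:", to_annotate)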

Relevance: 10.00%

Abstract:

In this paper a method to incorporate linguistic information regarding single-word and compound verbs is proposed, as a first step towards an SMT model based on linguistically-classified phrases. By substituting these verb structures by the base form of the head verb, we achieve a better statistical word alignment performance, and are able to better estimate the translation model and generalize to unseen verb forms during translation. Preliminary experiments for the English-Spanish language pair are performed, and future research lines are detailed. © 2005 Association for Computational Linguistics.
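
A minimal sketch of the preprocessing step described: compound and inflected verb forms are replaced by the base form of their head verb before word alignment. The tiny lemma table and compound patterns below are illustrative stand-ins for a real morphological analyser, not the paper's resources.

    # Sketch: replace single-word and compound verb structures by the base form
    # of the head verb before running word alignment (toy lemma table, English side).
    lemma = {"is": "be", "was": "be", "been": "be", "going": "go", "gone": "go",
             "has": "have", "had": "have", "taken": "take", "taking": "take"}

    compounds = {("has", "been", "taken"): "take", ("is", "going"): "go"}

    def normalise_verbs(tokens):
        out, i = [], 0
        while i < len(tokens):
            for pattern, base in compounds.items():
                if tuple(tokens[i:i + len(pattern)]) == pattern:
                    out.append(base); i += len(pattern); break
            else:
                out.append(lemma.get(tokens[i], tokens[i])); i += 1
        return out

    print(normalise_verbs("the decision has been taken by the committee".split()))
    # -> ['the', 'decision', 'take', 'by', 'the', 'committee']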