29 results for Computational Lexical Semantics
in University of Queensland eSpace - Australia
Abstract:
This paper provides a computational framework, based on Defeasible Logic, to capture some aspects of institutional agency. Our background is the Kanger-Lindahl-Pörn account of organised interaction, which describes this interaction within a multi-modal logical setting. This work focuses in particular on the notion of a counts-as link and on those of attempt and of personal and direct action to realise states of affairs. We show how standard Defeasible Logic can be extended to represent these concepts: the resulting system preserves some basic properties commonly attributed to them. In addition, the framework enjoys nice computational properties, as it turns out that the extension of any theory can be computed in time linear in the size of the theory itself.
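To make the flavour of the formalism concrete, here is a minimal, self-contained sketch of one-step defeasible inference over propositional rules. The encoding, the rule names, and the single-pass simplification are illustrative assumptions, not the paper's formalism or its linear-time algorithm.

    # One-step sketch of defeasible inference (illustrative only).
    # superior[r] is the set of rules that rule r overrides.
    def negate(lit):
        return lit[1:] if lit.startswith("-") else "-" + lit

    def one_step_extension(facts, rules, superior):
        """Conclude a literal if some rule for it fires from the facts and
        every firing rule for its negation is overridden."""
        concluded = set(facts)
        for rid, (body, head) in rules.items():
            if not body <= facts:
                continue  # rule not applicable
            attackers = [aid for aid, (abody, ahead) in rules.items()
                         if ahead == negate(head) and abody <= facts]
            if all(aid in superior.get(rid, set()) for aid in attackers):
                concluded.add(head)
        return concluded

    # Hypothetical example: an attempt defeasibly counts as realising a
    # state of affairs, unless a stronger contrary rule also fires.
    rules = {
        "r1": ({"attempt_A"}, "realises_A"),
        "r2": ({"attempt_A", "prevented"}, "-realises_A"),
    }
    superior = {"r2": {"r1"}}  # r2 beats r1 when both fire
    print(one_step_extension({"attempt_A"}, rules, superior))
    # r2 does not fire here, so r1 wins: {'attempt_A', 'realises_A'}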
Abstract:
Nine individuals with complex language deficits following left-hemisphere cortical lesions and a matched control group (n = 9) performed speeded lexical decisions on the third word of auditory word triplets containing a lexical ambiguity. The critical conditions were concordant (e.g., coin–bank–money), discordant (e.g., river–bank–money), neutral (e.g., day–bank–money), and unrelated (e.g., river–day–money). Triplets were presented with interstimulus intervals (ISIs) of 100 and 1250 ms. Overall, the left-hemisphere-damaged subjects appeared able to exhaustively access meanings for lexical ambiguities rapidly, but, unlike control subjects, were unable to reduce the level of activation for contextually inappropriate meanings at both short and long ISIs. These findings are consistent with a disruption of the proposed role of the left hemisphere in selecting and suppressing meanings via contextual integration, and a sparing of the right-hemisphere mechanisms responsible for maintaining alternative meanings.
Abstract:
This paper explains and explores the concept of "semantic molecules" in the NSM methodology of semantic analysis. A semantic molecule is a complex lexical meaning which functions as an intermediate unit in the structure of other, more complex concepts. The paper undertakes an overview of different kinds of semantic molecule, showing how they enter into more complex meanings and how they themselves can be explicated. It shows that four levels of "nesting" of molecules within molecules are attested, and it argues that while some molecules, such as 'hands' and 'make', may well be language-universal, many others are language-specific.
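As a rough illustration of the nesting idea, the sketch below records which molecules occur inside the explications of other molecules and computes the nesting depth. The molecule inventory and dependencies are invented for illustration, not drawn from the paper.

    # Hypothetical dependency table: each molecule lists the molecules
    # occurring inside its own explication (inventory invented).
    uses = {
        "hands": [],                # explicated via semantic primes only
        "make": [],
        "knife": ["make", "hands"],
        "cut": ["knife"],
        "bread": ["cut"],
    }

    def nesting_depth(molecule):
        """Levels of molecules-within-molecules below this one."""
        inner = uses.get(molecule, [])
        return 0 if not inner else 1 + max(nesting_depth(m) for m in inner)

    print(nesting_depth("bread"))   # 3: bread -> cut -> knife -> make/hands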
Abstract:
This paper is concerned with the problem of argument-function mismatch observed in the apparent subject-object inversion in Chinese consumption verbs, e.g., chi 'eat' and he 'drink', and accommodation verbs, e.g., zhu 'live' and shui 'sleep'. These verbs seem to allow the linking of [agent-SUBJ theme-OBJ] as well as [agent-OBJ theme-SUBJ], but only when the agent is also the semantic role denoting the measure or extent of the action. The account offered is formulated within LFG's lexical mapping theory (LMT). Under the simplest and also the strictest interpretation of the one-to-one argument-function mapping principle (or the theta-criterion), a composite role such as ag-ext receives syntactic assignment via one composing role only. One-to-one linking thus entails the suppression of the other composing role. Apparent subject-object inversion occurs when the more prominent agent role is suppressed and thus allows the less prominent extent role to dictate the linking of the entire ag-ext composite role. This LMT account also potentially facilitates a natural explanation of markedness among the competing syntactic structures.
Abstract:
The theory of Owicki and Gries has been used as a platform for safety-based verification and derivation of concurrent programs. It has also been integrated with the progress logic of UNITY, which has allowed newer techniques of progress-based verification and derivation to be developed. However, a theoretical basis for the integrated theory has thus far been missing. In this paper, we provide a theoretical background for the logic of Owicki and Gries integrated with the logic of progress from UNITY. An operational semantics for the new framework is provided, which is used to prove soundness of the progress logic.
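For readers unfamiliar with the UNITY side of the integration, the following are the textbook progress definitions over a set S of atomic statements (standard formulations; the paper's own notation and proof obligations may differ):

    \[
      p \;\textit{ensures}\; q \;\equiv\;
        \bigl(\forall s \in S :\ \{\, p \land \lnot q \,\}\; s\; \{\, p \lor q \,\}\bigr)
        \;\land\;
        \bigl(\exists s \in S :\ \{\, p \land \lnot q \,\}\; s\; \{\, q \,\}\bigr)
    \]
    \[
      \frac{p \;\textit{ensures}\; q}{p \leadsto q}
      \qquad
      \frac{p \leadsto q \quad q \leadsto r}{p \leadsto r}
      \qquad
      \frac{\forall i :\ p_i \leadsto q}{(\exists i :\ p_i) \leadsto q}
    \]

Here leads-to (\(\leadsto\)) is the least relation closed under the three rules: promotion from ensures, transitivity, and disjunction.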
Abstract:
Action systems are a construct for reasoning about concurrent, reactive systems, in which concurrent behaviour is described by interleaving atomic actions. Sere and Troubitsyna have proposed an extension to action systems in which actions may be expressed and composed using discrete probabilistic choice as well as demonic nondeterministic choice. In this paper we develop a trace-based semantics for probabilistic action systems. This semantics provides a simple theoretical base on which practical refinement rules for probabilistic action systems may be justified.
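A toy sketch of the trace-based view: the program below enumerates the finite traces of a two-action system whose choices are resolved probabilistically, yielding a distribution over traces. Demonic nondeterminism, which would yield sets of such distributions, is omitted for brevity; the actions and probabilities are invented assumptions, not the paper's semantics.

    # Distribution over length-n traces of a toy probabilistic action
    # system: action 'a' with probability 0.7, 'b' with probability 0.3.
    from itertools import product

    choices = [("a", 0.7), ("b", 0.3)]

    def trace_distribution(steps):
        dist = {}
        for combo in product(choices, repeat=steps):
            trace = "".join(act for act, _ in combo)
            prob = 1.0
            for _, p in combo:
                prob *= p
            dist[trace] = prob
        return dist

    print(trace_distribution(2))
    # probabilities sum to 1: aa ~ 0.49, ab ~ 0.21, ba ~ 0.21, bb ~ 0.09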
Abstract:
The Coefficient of Variance (CV; the standard deviation of response time divided by the mean response time) is a measure of response-time variability that corrects for differences in mean response time (RT) (Segalowitz & Segalowitz, 1993). A positive correlation between decreasing mean RTs and CVs (rCV-RT) has been proposed as an indicator of L2 automaticity and, more generally, as an index of processing efficiency. The current study evaluates this claim by examining lexical decision performance by individuals from three levels of English proficiency (Intermediate ESL, Advanced ESL and L1 controls) on stimuli from four levels of item familiarity, as defined by frequency of occurrence. A three-phase model of skill development defined by changing rCV-RT values was tested. Results showed that RTs and CVs systematically decreased as a function of increasing proficiency and frequency levels, with the rCV-RT serving as a stable indicator of individual differences in lexical decision performance. The rCV-RT and automaticity/restructuring account is discussed in light of the findings. The CV is also evaluated as a more general quantitative index of processing efficiency in the L2.
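For concreteness, this is the computation the CV measure involves; the response times and variable names below are invented for illustration:

    # CV = SD(RT) / mean(RT), computed per participant over correct trials
    # (response times invented for illustration).
    import statistics

    rts_ms = [612, 580, 655, 601, 590, 640]
    mean_rt = statistics.mean(rts_ms)
    cv = statistics.stdev(rts_ms) / mean_rt
    print(f"mean RT = {mean_rt:.1f} ms, CV = {cv:.3f}")

    # rCV-RT is then the correlation, across participants, between each
    # participant's mean RT and CV; a positive rCV-RT as mean RTs fall is
    # the proposed signature of automaticity rather than mere speed-up.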
Abstract:
This paper describes the emergence of new functional items in the Mauritian Creole noun phrase, following the collapse of the French determiner system when superstrate and substrate came into contact. The aim of the paper is to show how the new language strove to express the universal semantic contrasts of (in)definiteness and singular vs. plural. The process of grammaticalization of new functional items in the determiner system was accompanied by changes in the syntax from French to creole. An analysis within Chomsky's Minimalist framework (1995, 2000, 2001) suggests that these changes were driven by the need to map semantic features onto the syntax.
Abstract:
Traditional waste stabilisation pond (WSP) models encounter problems predicting pond performance because they cannot account for the influence of pond features, such as inlet structure or pond geometry, on fluid hydrodynamics. In this study, two-dimensional (2-D) computational fluid dynamics (CFD) models were compared to experimental residence time distributions (RTD) from the literature. In one of the three geometries simulated, the 2-D CFD model successfully predicted the experimental RTD. However, flow patterns in the other two geometries were not well described, due to the difficulty of representing the three-dimensional (3-D) experimental inlet in the 2-D CFD model and the sensitivity of the model results to the assumptions used to characterise the inlet. Neither a velocity similarity nor a geometric similarity approach to inlet representation in 2-D gave results correlating with experimental data. However, it was shown that 2-D CFD models were not affected by changes in values of model parameters which are difficult to predict, particularly the turbulent inlet conditions. This work suggests that 2-D CFD models cannot be used a priori to give an adequate description of the hydrodynamic patterns in WSP. (C) 1998 Elsevier Science Ltd. All rights reserved.
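As background to the RTD comparison, the sketch below computes a pond's mean hydraulic residence time as the first moment of a tracer response curve; the synthetic data and names are assumptions, not the study's measurements:

    # Mean residence time from a tracer RTD curve via the first moment of
    # E(t) (standard definition; tracer data synthetic).
    import numpy as np

    t = np.linspace(0.0, 100.0, 201)          # time since tracer pulse, hours
    c = np.exp(-((t - 30.0) ** 2) / 200.0)    # synthetic tracer concentration
    dt = t[1] - t[0]
    E = c / (c.sum() * dt)                    # normalise so sum(E) * dt ~ 1
    t_mean = (t * E).sum() * dt               # first moment = mean residence time
    print(f"mean residence time ~ {t_mean:.1f} h")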
Abstract:
This paper describes U2DE, a finite-volume code that numerically solves the Euler equations. The code was used to perform multi-dimensional simulations of the gradual opening of a primary diaphragm in a shock tube. From the simulations, the speed of the developing shock wave was recorded and compared with other estimates. The ability of U2DE to compute shock speed was confirmed by comparing numerical results with the analytic solution for an ideal shock tube. For high initial pressure ratios across the diaphragm, previous experiments have shown that the measured shock speed can exceed the shock speed predicted by one-dimensional models. The shock speeds computed with the present multi-dimensional simulation were higher than those estimated by previous one-dimensional models and, thus, were closer to the experimental measurements. This indicates that multi-dimensional flow effects were partly responsible for the relatively high shock speeds measured in the experiments.
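The analytic benchmark mentioned is the classical ideal shock-tube relation for a perfect gas. The sketch below inverts it by bisection to recover the shock speed from the diaphragm pressure ratio, assuming the same gas and sound speed on both sides; the numbers are illustrative, not the paper's conditions.

    # Shock speed in an ideal shock tube (textbook relation; same gamma and
    # sound speed assumed on both sides of the diaphragm).
    import math

    gamma, a1 = 1.4, 347.0          # specific-heat ratio; driven sound speed, m/s
    p4_over_p1 = 10.0               # diaphragm pressure ratio (illustrative)

    def p4p1(p2p1):
        """Diaphragm pressure ratio implied by shock strength p2/p1."""
        num = (gamma - 1.0) * (p2p1 - 1.0)
        den = math.sqrt(2.0 * gamma * (2.0 * gamma + (gamma + 1.0) * (p2p1 - 1.0)))
        base = 1.0 - num / den
        if base <= 0.0:
            return math.inf         # beyond the maximum attainable shock strength
        return p2p1 * base ** (-2.0 * gamma / (gamma - 1.0))

    lo, hi = 1.0 + 1e-9, 100.0      # bracket for p2/p1; p4p1 is monotone here
    for _ in range(100):            # bisection
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if p4p1(mid) < p4_over_p1 else (lo, mid)
    p2p1 = 0.5 * (lo + hi)
    Ms = math.sqrt((gamma + 1.0) / (2.0 * gamma) * (p2p1 - 1.0) + 1.0)
    print(f"shock Mach number ~ {Ms:.2f}, shock speed ~ {Ms * a1:.0f} m/s")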
Abstract:
Computer models can be combined with laboratory experiments for the efficient determination of (i) peptides that bind MHC molecules and (ii) T-cell epitopes. For maximum benefit, the use of computer models must be treated as experiments analogous to standard laboratory procedures. This requires the definition of standards and experimental protocols for model application. We describe the requirements for validation and assessment of computer models. The utility of combining accurate predictions with a limited number of laboratory experiments is illustrated by practical examples. These include the identification of T-cell epitopes from IDDM-, melanoma- and malaria-related antigens by combining computational predictions with conventional laboratory assays. The success rate in determining antigenic peptides, each in the context of a specific HLA molecule, ranged from 27 to 71%, while the natural prevalence of MHC-binding peptides is 0.1-5%.
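A schematic of the screening step such a combined pipeline relies on: score candidate 9-mers with a scoring matrix and forward only the top-ranked ones to the bench. The matrix, sequences, and cutoff below are invented placeholders, not the models used in the paper.

    # Score sliding 9-mer windows of an antigen with a toy position-specific
    # preference table (all residues and weights invented).
    antigen = "MKWVTFISLLLLFSSAYSRGV"          # hypothetical protein sequence
    favoured = {0: "ML", 1: "AV", 8: "VLI"}    # toy anchor preferences by position

    def score(peptide):
        """+1 for each anchor position holding a favoured residue."""
        return sum(peptide[pos] in aas for pos, aas in favoured.items())

    nine_mers = [antigen[i:i + 9] for i in range(len(antigen) - 8)]
    ranked = sorted(nine_mers, key=score, reverse=True)
    for pep in ranked[:3]:                     # candidates for laboratory assay
        print(pep, score(pep))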
Abstract:
In this and a preceding paper, we provide an introduction to the Fujitsu VPP range of vector-parallel supercomputers and to some of the computational chemistry software available for the VPP. Here, we consider the implementation and performance of seven popular chemistry application packages. The codes discussed range from classical molecular dynamics to semiempirical and ab initio quantum chemistry. All have evolved from sequential codes, and have typically been parallelised using a replicated data approach. As such they are well suited to the large-memory/fast-processor architecture of the VPP. For one code, CASTEP, a distributed-memory data-driven parallelisation scheme is presented. (C) 2000 Published by Elsevier Science B.V. All rights reserved.
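The replicated-data idea mentioned above can be sketched in a few lines: every process holds the full coordinate array, computes a slice of the pairwise interactions, and the partial forces are summed globally. This uses mpi4py for illustration; the round-robin decomposition and the toy pair force are simplified assumptions, not any particular package's scheme.

    # Replicated-data parallel force evaluation (sketch; mpi4py assumed).
    # Every rank holds all N coordinates, computes its share of i<j pairs,
    # then the partial forces are summed so each rank ends with the total.
    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    N = 64
    rng = np.random.default_rng(0)             # same seed: data replicated
    pos = rng.random((N, 3))
    forces = np.zeros((N, 3))

    for i in range(rank, N, size):             # round-robin slice of atoms
        for j in range(i + 1, N):
            r = pos[i] - pos[j]
            f = r / np.dot(r, r) ** 2          # toy inverse-cube pair force
            forces[i] += f
            forces[j] -= f

    comm.Allreduce(MPI.IN_PLACE, forces, op=MPI.SUM)   # sum partial forces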
Abstract:
A case-sensitive intelligent model editor has been developed for constructing consistent lumped dynamic process models and for simplifying them using modelling assumptions. The approach is based on a systematic assumption-driven modelling procedure and on the syntax and semantics of process models and the simplifying assumptions.
Abstract:
Protein kinases exhibit various degrees of substrate specificity. The large number of different protein kinases in the eukaryotic proteomes makes it impractical to determine the specificity of each enzyme experimentally. To test whether it is possible to discriminate potential substrates from non-substrates by simple computational techniques, we analysed the binding enthalpies of modelled enzyme-substrate complexes and attempted to correlate them with experimental enzyme kinetics measurements. The crystal structures of phosphorylase kinase and cAMP-dependent protein kinase were used to generate models of the enzyme with a series of known peptide substrates and non-substrates, and the approximate enthalpy of binding was assessed following energy minimization. We show that the computed enthalpies do not correlate closely with kinetic measurements, but the method can distinguish good substrates from weak substrates and non-substrates. Copyright (C) 2002 John Wiley & Sons, Ltd.
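The kind of comparison reported can be reproduced schematically: correlate modelled binding enthalpies with measured catalytic efficiencies, and separate substrates from non-substrates with a simple enthalpy cutoff. All numbers below are invented placeholders, not the paper's data.

    # Correlate modelled binding enthalpies with kinetic measurements
    # (all values invented; units arbitrary).
    import math

    dH   = [-42.0, -38.5, -35.0, -20.1, -18.7]   # modelled binding enthalpies
    kcat = [ 12.0,   9.5,   7.8,   0.4,   0.1]   # measured catalytic efficiency

    def pearson(x, y):
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sx = math.sqrt(sum((a - mx) ** 2 for a in x))
        sy = math.sqrt(sum((b - my) ** 2 for b in y))
        return cov / (sx * sy)

    print(f"r = {pearson(dH, kcat):.2f}")        # strong but imperfect correlation
    predicted = [i for i, h in enumerate(dH) if h < -30.0]   # enthalpy cutoff
    print("predicted substrates:", predicted)    # indices 0-2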
Abstract:
The explosive growth in biotechnology combined with major advances in information technology has the potential to radically transform immunology in the postgenomics era. Not only do we now have ready access to vast quantities of existing data, but new data with relevance to immunology are being accumulated at an exponential rate. Resources for computational immunology include biological databases and methods for data extraction, comparison, analysis and interpretation. Publicly accessible biological databases of relevance to immunologists number in the hundreds and are growing daily. The ability to efficiently extract and analyse information from these databases is vital for efficient immunology research. Most importantly, a new generation of computational immunology tools enables modelling of peptide transport by the transporter associated with antigen processing (TAP), modelling of antibody binding sites, identification of allergenic motifs and modelling of T-cell receptor serial triggering.