22 results for Encyclopedias and dictionaries, Arabic
Abstract:
Cognitive linguistics scholars argue that metaphor is fundamentally a conceptual process of mapping one domain of experience onto another. The study of metaphor within Translation Studies has not, unfortunately, kept pace with discoveries about the nature and role of metaphor in the cognitive sciences. This study aims primarily to fill part of this knowledge gap. Specifically, the thesis explores some implications of the conceptual theory of metaphor for translation. Because the study of metaphor in translation also rests on views about the nature of translation, the thesis first presents a general overview of the discipline of Translation Studies, describing the major models of translation. The study (in Chapter Two) then discusses the major traditional theories of metaphor (comparison, substitution and interaction theories) and shows how the ideas of those theories were adopted in specific translation studies of metaphor. After that, the study presents a detailed account of the conceptual theory of metaphor and some hypothetical implications for the study of metaphor in translation from the perspective of cognitive linguistics. The data and methodology are presented in Chapter Four. A novel classification of conceptual metaphor is presented which distinguishes between different source domains of conceptual metaphors: physical, human-life and intertextual. It is suggested that each source domain places different demands on translators. The major sources of data for this study are (1) translations, produced by the Foreign Broadcast Information Service (FBIS), a translation service of the Central Intelligence Agency (CIA) in the United States of America, of a number of speeches by the Iraqi president Saddam Hussein during the Gulf Crisis (1990-1991), and (2) official (governmental) Omani translations of National Day speeches of Sultan Qaboos bin Said of Oman.
Abstract:
Wavelet families arise by scaling and translation of a prototype function, called the mother wavelet. The construction of wavelet bases for cardinal spline spaces is generally carried out within the multi-resolution analysis scheme, so the usual way of increasing the dimension of the multi-resolution subspaces is to augment the scaling factor. We show here that, when working on a compact interval, the same effect can be achieved without changing the wavelet scale, by reducing the translation parameter instead. Such a procedure generates a redundant frame, called a dictionary, spanning the same spaces as a wavelet basis but with wavelets of broader support. We characterize the correlation of the dictionary elements by measuring their 'coherence' and produce examples illustrating the relevance of highly coherent dictionaries to problems of sparse signal representation.
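The 'coherence' mentioned above is, in its standard form, the largest absolute inner product between distinct unit-norm atoms of the dictionary. Below is a minimal sketch of that measurement, assuming atoms are stored as the columns of a matrix; the translated hat functions merely stand in for the spline wavelets of the paper, and all names are illustrative.

```python
import numpy as np

def mutual_coherence(D):
    """Largest absolute inner product between distinct, unit-norm atoms.

    D holds one atom per column; values near 1 indicate a highly
    coherent (strongly correlated) dictionary.
    """
    Dn = D / np.linalg.norm(D, axis=0, keepdims=True)  # normalise atoms
    G = np.abs(Dn.T @ Dn)                              # absolute Gram matrix
    np.fill_diagonal(G, 0.0)                           # drop self-products
    return G.max()

# Toy dictionary: hat functions translated by a step smaller than their
# support, imitating the reduced translation parameter described above.
t = np.linspace(0.0, 1.0, 200)
hat = lambda c, w: np.maximum(1.0 - np.abs(t - c) / w, 0.0)
D = np.stack([hat(c, 0.1) for c in np.arange(0.1, 0.91, 0.05)], axis=1)
print(f"coherence: {mutual_coherence(D):.3f}")
```

The smaller the translation step relative to the atoms' support, the more the neighbouring atoms overlap and the higher the resulting coherence.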
Abstract:
Non-uniform B-spline dictionaries on a compact interval are discussed in the context of sparse signal representation. For each given partition, dictionaries of B-spline functions for the corresponding spline space are built up by dividing the partition into subpartitions and joining together the bases for the concomitant subspaces. The resulting slightly redundant dictionaries are composed of B-spline functions of broader support than those corresponding to the B-spline basis for the identical space. Such dictionaries are meant to assist in the construction of adaptive sparse signal representation through a combination of stepwise optimal greedy techniques.
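As a rough sketch of this kind of construction, assuming cubic B-splines and scipy's `BSpline.basis_element`: the particular partition, the choice of coarser subpartition, and all variable names below are illustrative, not taken from the paper.

```python
import numpy as np
from scipy.interpolate import BSpline

def bspline_atoms(knots, degree, grid):
    """Sample each B-spline basis function of the given degree; every
    atom is determined by degree + 2 consecutive knots."""
    k = degree + 2  # knots per basis element
    atoms = []
    for i in range(len(knots) - k + 1):
        b = BSpline.basis_element(knots[i:i + k], extrapolate=False)
        atoms.append(np.nan_to_num(b(grid)))  # zero outside the support
    return np.stack(atoms, axis=1)

grid = np.linspace(0.0, 1.0, 400)
partition = np.linspace(0.0, 1.0, 11)   # full partition of [0, 1]
subpartition = partition[::2]           # coarser subpartition

# Joining the two families yields a slightly redundant dictionary whose
# extra atoms (from the subpartition) have broader support.
D = np.hstack([bspline_atoms(partition, 3, grid),
               bspline_atoms(subpartition, 3, grid)])
print(D.shape)
```

The atoms contributed by the coarser subpartition span wider intervals, which is what makes the joint dictionary slightly redundant yet richer in broad-support functions.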
Abstract:
A property of sparse representations in relation to their capacity for information storage is discussed. It is shown that this feature can be used for an application that we term Encrypted Image Folding. The proposed procedure is realizable through any suitable transformation. In particular, in this paper we illustrate the approach by recourse to the Discrete Cosine Transform and a combination of redundant Cosine and Dirac dictionaries. The main advantage of the proposed technique is that both storage and encryption can be achieved simultaneously using simple processing steps.
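To make the joint Cosine-plus-Dirac representation concrete, here is a minimal sketch (not the Encrypted Image Folding procedure itself) in which a plain Orthogonal Matching Pursuit, standing in for generic greedy sparse-approximation techniques, recovers a signal that is sparse in a redundant dictionary joining Dirac and DCT atoms; the signal, sparsity level and all names are illustrative.

```python
import numpy as np
from scipy.fft import idct

n = 64
# Redundant dictionary: Dirac (identity) atoms joined with orthonormal
# DCT atoms, echoing the Cosine-plus-Dirac combination described above.
dirac = np.eye(n)
cosine = idct(np.eye(n), norm='ortho', axis=0)  # columns = DCT atoms
D = np.hstack([dirac, cosine])

def omp(D, y, n_atoms):
    """Greedy pursuit: pick the atom most correlated with the residual,
    then re-fit all selected coefficients by least squares."""
    residual, support = y.copy(), []
    for _ in range(n_atoms):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    return support, coef

# A signal that is sparse in the joint dictionary: one cosine + one spike.
y = cosine[:, 5] + 0.8 * dirac[:, 20]
support, coef = omp(D, y, 2)
print(support, np.round(coef, 3))
```

Sparse decompositions of this kind, which leave most coefficients unused, are the raw material for the simultaneous storage-and-encryption idea described in the abstract.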
Abstract:
The Semantic Web relies on carefully structured, well-defined data to allow machines to communicate and understand one another. In many domains (e.g. geospatial) the data being described contains some uncertainty, often due to incomplete knowledge; meaningful processing of this data requires these uncertainties to be carefully analysed and integrated into the process chain. Currently, within the Semantic Web there is no standard mechanism for interoperable description and exchange of uncertain information, which renders the automated processing of such information implausible, particularly where error must be considered and captured as it propagates through a processing sequence. In particular, we adopt a Bayesian perspective and focus on the case where the inputs/outputs are naturally treated as random variables. This paper discusses a solution to the problem in the form of the Uncertainty Markup Language (UncertML). UncertML is a conceptual model, realised as an XML schema, that allows uncertainty to be quantified in a variety of ways, i.e. as realisations, statistics and probability distributions. UncertML is based upon a soft-typed XML schema design that provides a generic framework from which any statistic or distribution may be created. Making extensive use of Geography Markup Language (GML) dictionaries, UncertML provides a collection of definitions for common uncertainty types. Containing both written descriptions and mathematical functions, encoded as MathML, the definitions within these dictionaries provide a robust mechanism for defining any statistic or distribution and can be easily extended. Uniform Resource Identifiers (URIs) introduce semantics to the soft-typed elements by linking to these dictionary definitions. The INTAMAP (INTeroperability and Automated MAPping) project provides a use case for UncertML. This paper demonstrates how observation errors can be quantified using UncertML and wrapped within an Observations & Measurements (O&M) Observation. The interpolation service uses the information within these observations to influence the prediction outcome. The output uncertainties may be encoded in a variety of UncertML types, e.g. a series of marginal Gaussian distributions, a set of statistics such as the first three marginal moments, or a set of realisations from a Monte Carlo treatment. Quantifying and propagating uncertainty in this way allows such interpolation results to be consumed by other services; this could form part of a risk management chain or a decision support system, and ultimately paves the way for complex data processing chains in the Semantic Web.
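To make the soft-typed pattern concrete, here is a minimal sketch of how a marginal Gaussian might be encoded in that style, using only the Python standard library; the element names, attribute names and dictionary URIs below are invented for illustration and are not the published UncertML schema.

```python
import xml.etree.ElementTree as ET

# Illustrative only: generic element tags carry their meaning through a
# 'definition' URI pointing at a dictionary entry, as described above.
def gaussian_element(mean, variance,
                     base='http://dictionary.example.org/distributions'):
    dist = ET.Element('Distribution', definition=f'{base}/gaussian')
    for name, value in (('mean', mean), ('variance', variance)):
        p = ET.SubElement(dist, 'Parameter', definition=f'{base}/{name}')
        p.text = str(value)
    return dist

elem = gaussian_element(12.4, 2.25)
print(ET.tostring(elem, encoding='unicode'))
```

Because the tags themselves are generic, new statistics or distributions can be introduced simply by adding dictionary entries and linking to them, with no change to the schema.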
Abstract:
Due to copyright restrictions, only available for consultation at Aston University Library and Information Services with prior arrangement.
Abstract:
From Platonic and Galenic roots, the first well-developed ventricular theory of brain function is due to Bishop Nemesius, fourth century C.E. Although more interested in the Christian concept of the soul, St. Augustine, too, addressed the question of the location of the soul, a problem that has endured in various guises to the present day. Other notable contributions to ventricular psychology come from the ninth-century C.E. Arabic writer Qusta ibn Lūqā and from an early European medical text written by the twelfth-century C.E. author Nicolai the Physician. By the time of Albertus Magnus, the so-called medieval cell doctrine was a well-developed model of brain function. By the sixteenth century, Vesalius no longer understood the ventricles as imaginary cavities designed to provide a physical basis for faculty psychology but as fluid-filled spaces in the brain whose function was yet to be determined.