76 results for "Translating and interpreting"
Abstract:
There are still major challenges in the area of automatic indexing and retrieval of digital data. The main problem arises from the ever-increasing mass of digital media and the lack of efficient methods for indexing and retrieving such data based on semantic content rather than keywords. To enable intelligent web interactions, or even web filtering, we need to be capable of interpreting the information base in an intelligent manner. Research has been ongoing for some years in the field of ontological engineering with the aim of using ontologies to add knowledge to information. In this paper we describe the architecture of a system designed to automatically and intelligently index huge repositories of special-effects video clips, based on their semantic content, using a network of scalable ontologies to enable intelligent retrieval.
Abstract:
Automatic indexing and retrieval of digital data poses major challenges. The main problem arises from the ever-increasing mass of digital media and the lack of efficient methods for indexing and retrieving such data based on semantic content rather than keywords. To enable intelligent web interactions, or even web filtering, we need to be capable of interpreting the information base in an intelligent manner. For a number of years research has been ongoing in the field of ontological engineering with the aim of using ontologies to add such (meta) knowledge to information. In this paper, we describe the architecture of a system, Dynamic REtrieval Analysis and semantic metadata Management (DREAM), designed to automatically and intelligently index huge repositories of special-effects video clips, based on their semantic content, using a network of scalable ontologies to enable intelligent retrieval. The DREAM Demonstrator has been evaluated as deployed in the film post-production phase, supporting the storage, indexing and retrieval of large data sets of special-effects video clips as an exemplar application domain. This paper reports its performance and usability results and highlights the scope for future enhancements of the DREAM architecture, which has proven successful in its first and possibly most challenging proving ground, namely film production, where it is already in routine use within our test-bed partners' creative processes.
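As a rough illustration of the ontology-driven indexing idea described in the two abstracts above (not the actual DREAM implementation), the sketch below expands a query concept through a toy concept hierarchy so that clips annotated with narrower concepts are also retrieved. All concept names and clip identifiers are invented for illustration.

```python
# Minimal sketch of ontology-based retrieval (hypothetical data, not DREAM's API):
# a query concept is expanded through a small concept hierarchy so that clips
# annotated with any narrower concept are also returned.

from collections import defaultdict

# toy ontology: concept -> narrower (child) concepts
ontology = {
    "explosion": ["fireball", "debris_cloud"],
    "fireball": [],
    "debris_cloud": [],
    "water_effect": ["splash", "rain"],
    "splash": [],
    "rain": [],
}

# semantic index: concept -> clip identifiers annotated with that concept
index = defaultdict(set)
index["fireball"].update({"clip_017", "clip_042"})
index["debris_cloud"].add("clip_042")
index["splash"].add("clip_101")


def expand(concept):
    """Return the concept and all of its descendants in the ontology."""
    seen, stack = set(), [concept]
    while stack:
        c = stack.pop()
        if c not in seen:
            seen.add(c)
            stack.extend(ontology.get(c, []))
    return seen


def retrieve(concept):
    """Retrieve clips annotated with the concept or any narrower concept."""
    clips = set()
    for c in expand(concept):
        clips |= index[c]
    return clips


print(retrieve("explosion"))   # {'clip_017', 'clip_042'}
```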
Abstract:
This paper introduces a new neurofuzzy model construction and parameter estimation algorithm from observed finite data sets, based on a Takagi and Sugeno (T-S) inference mechanism and a new extended Gram-Schmidt orthogonal decomposition algorithm, for the modeling of a priori unknown dynamical systems in the form of a set of fuzzy rules. The first contribution of the paper is the introduction of a one-to-one mapping between a fuzzy rule base and a model matrix feature subspace using the T-S inference mechanism. This link enables the numerical properties associated with a rule-based matrix subspace, the relationships amongst these matrix subspaces, and the correlation between the output vector and a rule-based matrix subspace to be investigated and extracted as rule-based knowledge to enhance model transparency. The matrix subspace spanned by a fuzzy rule is initially derived as the input regression matrix multiplied by a weighting matrix consisting of the corresponding fuzzy membership functions over the training data set. Model transparency is explored by deriving an equivalence between an A-optimality experimental design criterion of the weighting matrix and the average model output sensitivity to the fuzzy rule, so that rule bases can be effectively measured by their identifiability via the A-optimality criterion. The A-optimality criterion of the weighting matrices of the fuzzy rules is used to construct an initial model rule base. An extended Gram-Schmidt algorithm is then developed to estimate the parameter vector for each rule. This new algorithm decomposes the model rule bases via an orthogonal subspace decomposition approach, so as to enhance model transparency with the capability of interpreting the derived rule-base energy level. The new approach is computationally simpler than the conventional Gram-Schmidt algorithm for high-dimensional regression problems, where it is desirable to decompose a complex model into a few submodels rather than a single model with a large number of input variables and the associated curse of dimensionality. Numerical examples are included to demonstrate the effectiveness of the proposed algorithm.
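To make the rule-ranking idea concrete, here is a minimal sketch of one plausible reading of the abstract: each T-S rule weights the input regression matrix by its membership values over the training data, and rules are scored by an A-optimality-style measure, trace((Phi_i^T Phi_i)^-1), with smaller values indicating better identifiability. The Gaussian fuzzy sets, toy data, and scoring details below are assumptions for illustration, not the authors' exact formulation.

```python
# Simplified sketch of A-optimality-based rule ranking for a T-S model
# (my reading of the abstract; data, fuzzy sets and constants are invented).

import numpy as np

rng = np.random.default_rng(0)
N = 200
x = rng.uniform(-1.0, 1.0, size=(N, 1))          # scalar input, toy data
P = np.hstack([np.ones((N, 1)), x])               # affine T-S consequents: [1, x]

centres, width = np.array([-0.6, 0.0, 0.6]), 0.4  # assumed Gaussian fuzzy sets


def membership(x, c, s):
    """Gaussian membership of each sample in the fuzzy set centred at c."""
    return np.exp(-0.5 * ((x - c) / s) ** 2).ravel()


scores = []
for c in centres:
    w = membership(x, c, width)                   # rule firing strengths
    Phi = w[:, None] * P                          # weighted regression matrix
    M = Phi.T @ Phi                               # rule "moment" matrix
    scores.append(np.trace(np.linalg.inv(M)))     # A-optimality-style score

order = np.argsort(scores)                        # most identifiable rules first
print(list(zip(centres[order], np.array(scores)[order])))
```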
Abstract:
A new robust neurofuzzy model construction algorithm is introduced for the modeling of a priori unknown dynamical systems from observed finite data sets in the form of a set of fuzzy rules. Based on a Takagi-Sugeno (T-S) inference mechanism, a one-to-one mapping between a fuzzy rule base and a model matrix feature subspace is established. This link enables rule-based knowledge to be extracted from the matrix subspace to enhance model transparency. In order to achieve maximal model robustness and sparsity, a new robust extended Gram-Schmidt (G-S) method is introduced via two effective and complementary approaches: regularization and D-optimality experimental design. Model rule bases are decomposed into orthogonal subspaces, so as to enhance model transparency with the capability of interpreting the derived rule-base energy level. A locally regularized orthogonal least squares algorithm, combined with a D-optimality criterion for subspace-based rule selection, has been extended for fuzzy rule regularization and subspace-based information extraction. By using a weighting for the D-optimality cost function, the entire model construction procedure becomes automatic. Numerical examples are included to demonstrate the effectiveness of the proposed algorithm.
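The combined regularization and D-optimality idea can be sketched as follows. This is a simplified reading of the abstract rather than the authors' algorithm: each candidate rule subspace is scored by the error reduction of a ridge-regularized fit plus a weighted log-determinant (D-optimality) term, and the highest-scoring rules are kept. All data, fuzzy sets, and weights below are invented.

```python
# Simplified sketch of regularized, D-optimality-weighted rule selection
# (not the authors' exact algorithm; all quantities are illustrative).

import numpy as np

rng = np.random.default_rng(1)
N = 300
x = rng.uniform(-1.0, 1.0, size=(N, 1))
y = np.sin(3 * x).ravel() + 0.05 * rng.standard_normal(N)

P = np.hstack([np.ones((N, 1)), x])               # affine consequents [1, x]
centres, width = np.linspace(-0.8, 0.8, 5), 0.3   # assumed Gaussian fuzzy sets
lam, beta = 1e-3, 1e-2                            # ridge / D-optimality weights


def rule_subspace(c):
    """Weighted regression matrix spanned by the rule centred at c."""
    w = np.exp(-0.5 * ((x - c) / width) ** 2).ravel()
    return w[:, None] * P


def score(Phi):
    """Error reduction of a regularized fit plus weighted D-optimality term."""
    M = Phi.T @ Phi
    theta = np.linalg.solve(M + lam * np.eye(M.shape[1]), Phi.T @ y)
    err_reduction = 1.0 - np.sum((y - Phi @ theta) ** 2) / np.sum(y ** 2)
    d_optimality = np.linalg.slogdet(M)[1]        # log det of the moment matrix
    return err_reduction + beta * d_optimality


ranked = sorted(centres, key=lambda c: score(rule_subspace(c)), reverse=True)
print("rule centres ranked by combined criterion:", ranked)
```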
Abstract:
Can human social cognitive processes and social motives be grasped by the methods of experimental economics? Experimental studies of strategic cognition and social preferences contribute to our understanding of the social aspects of economic decision-making. Yet the papers in this issue argue that the social aspects of decision-making introduce several difficulties for interpreting the results of economic experiments. In particular, the laboratory is itself a social context, and in many respects a rather distinctive one, which raises questions of external validity.
Abstract:
The winter climate of Europe and the Mediterranean is dominated by the weather systems of the mid-latitude storm tracks. The behaviour of the storm tracks is highly variable, particularly in the eastern North Atlantic, and has a profound impact on the hydroclimate of the Mediterranean region. A deeper understanding of the storm tracks and the factors that drive them is therefore crucial for interpreting past changes in Mediterranean climate and the civilizations it has supported over the last 12 000 years (broadly the Holocene period). This paper presents a discussion of how changes in climate forcing (e.g. orbital variations, greenhouse gases, ice sheet cover) may have impacted on the ‘basic ingredients’ controlling the mid-latitude storm tracks over the North Atlantic and the Mediterranean on intermillennial time scales. Idealized simulations using the HadAM3 atmospheric general circulation model (GCM) are used to explore the basic processes, while a series of timeslice simulations from a similar atmospheric GCM coupled to a thermodynamic slab ocean (HadSM3) is examined to identify the impact these drivers have on the storm track during the Holocene. The results suggest that the North Atlantic storm track has moved northward and strengthened with time since the Early to Mid-Holocene. In contrast, the Mediterranean storm track may have weakened over the same period. It is emphasized, however, that much still remains to be understood about the evolution of the North Atlantic and Mediterranean storm tracks during the Holocene period.
Abstract:
As integrated software solutions reshape project delivery, they alter the bases for collaboration and competition across firms in complex industries. This paper synthesises and extends literatures on strategy in project-based industries and digitally integrated work to understand how project-based firms interact with digital infrastructures for project delivery. Four identified strategies are to: 1) develop and use capabilities to shape the integrated software solutions that are used in projects; 2) co-specialize, developing complementary assets to work repeatedly with a particular integrator firm; 3) retain flexibility by developing and maintaining capabilities in multiple digital technologies and processes; and 4) manage interfaces, translating work into project formats for coordination while hiding proprietary data and capabilities in internal systems. The paper articulates the strategic importance of digital infrastructures for delivery as well as product architectures. It concludes by discussing the managerial implications of the identified strategies and areas for further research.
Abstract:
Interview with Peter Robinson (pp. 195-201) and ‘three uncollected translations’ of Luciano Erba by Peter Robinson (pp. 202-204).
Abstract:
By comparing annual and seasonal changes in precipitation over land and ocean since 1950 simulated by the CMIP5 (Coupled Model Intercomparison Project, phase 5) climate models in which natural and anthropogenic forcings have been included, we find that clear global-scale and regional-scale changes due to human influence are expected to have occurred over both land and ocean. These include moistening over northern high-latitude land and ocean throughout all seasons and over the northern subtropical oceans during boreal winter. However, we show that this signal of human influence is less distinct when considered over the relatively small area of land for which there are adequate observations to assess multi-decadal trends. These results imply that extensive and significant changes in precipitation over land and ocean may already have happened, even though inadequacies in observations in some parts of the world make it difficult to identify such a human fingerprint on the global water cycle conclusively. In some regions and seasons, observed trends appear to have been inflated by the aliasing of different kinds of variability resulting from sub-sampling by the sparse and changing observational coverage, underscoring the difficulties of interpreting the apparent magnitude of observed changes in precipitation.
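The sub-sampling effect mentioned above can be illustrated with a purely synthetic sketch (invented data, not CMIP5 or observational output): a linear trend estimated only where "observations" exist can differ noticeably from the trend over full coverage, because sparse and changing sampling aliases variability onto the estimate.

```python
# Illustrative sketch of how sparse, time-varying sampling can distort an
# estimated trend. All numbers and the sampling pattern are invented.

import numpy as np

rng = np.random.default_rng(2)
years = np.arange(1950, 2013)
true_trend = 0.02                                  # units per decade, invented
series = true_trend * (years - years[0]) / 10 + 0.3 * rng.standard_normal(years.size)

# "observational" mask: sparse early coverage, denser later (invented pattern)
observed = rng.random(years.size) < np.linspace(0.3, 0.9, years.size)

full_fit = np.polyfit(years, series, 1)[0] * 10            # trend per decade
masked_fit = np.polyfit(years[observed], series[observed], 1)[0] * 10

print(f"full-coverage trend: {full_fit:.3f} per decade")
print(f"sub-sampled trend:   {masked_fit:.3f} per decade")
```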
Abstract:
This article argues that a native-speaker baseline is a neglected dimension of studies into second language (L2) performance. If we investigate how learners perform language tasks, we should distinguish which performance features are due to their processing an L2 and which are due to their performing a particular task. Having defined what we mean by “native speaker,” we present the background to a research study into the effects of task features on nonnative task performance, designed to include native-speaker data as a baseline for interpreting nonnative-speaker performance. The nonnative results, published in this journal (Tavakoli & Foster, 2008), are recapitulated, and the native-speaker results are then presented and discussed in the light of them. The study is guided by the assumption that limited attentional resources impact on L2 performance and explores how narrative design features, namely complexity of storyline and tightness of narrative structure, affect complexity, fluency, accuracy, and lexical diversity in language. The results show that both native and nonnative speakers are prompted by storyline complexity to use more subordinated language, but narrative structure had different effects on native and nonnative fluency. The learners, who were based in either London or Tehran, did not differ from each other in their performance, except in lexical diversity, where the learners in London were close to native-speaker levels. The implications of the results for the applicability of Levelt’s model of speaking to an L2 are discussed, as is the potential for further L2 research using native speakers as a baseline.
Abstract:
A high-resolution GCM is found to simulate the precipitation and surface energy balance of high latitudes with high accuracy. This opens new possibilities for investigating the future mass balance of polar glaciers and its effect on sea level. The surface mass balance of the Greenland and Antarctic ice sheets is simulated using the ECHAM3 GCM at T106 horizontal resolution. With this model, two 5-year integrations for present and doubled carbon dioxide conditions, based on the boundary conditions provided by the ECHAM1/T21 transient experiment, have been conducted. A comparison of the two experiments over Greenland and Antarctica shows to what extent the effect of climate change on the mass balance of the two largest glaciers of the world can differ. On Greenland one sees a slight decrease in accumulation and a substantial increase in melt, while on Antarctica a large increase in accumulation without melt is projected. Translating the mass balances into terms of sea-level equivalent, the Greenland discharge causes a sea-level rise of 1.1 mm yr⁻¹, while the accumulation on Antarctica tends to lower it by 0.9 mm yr⁻¹. The change in the combined mass balance of the two continents is almost zero. The sea-level change of the next century is likely to be affected more strongly by the thermal expansion of seawater and by the mass balance of smaller glaciers outside Greenland and Antarctica.
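For reference, the conversion between ice-sheet mass flux and sea-level equivalent behind the figures above can be sketched as follows. The ocean area and water density are standard reference values; the implied mass fluxes are back-calculated for illustration and are not numbers quoted in the paper.

```python
# Back-of-envelope conversion between ice-sheet mass flux (Gt/yr) and
# sea-level equivalent (mm/yr). Constants are standard reference values;
# the implied mass fluxes are illustrative, not quoted from the paper.

OCEAN_AREA_M2 = 3.61e14      # approximate global ocean surface area
RHO_WATER = 1000.0           # kg per cubic metre of (melt)water


def gt_per_yr_to_mm_per_yr(mass_gt):
    """Convert an ice-sheet mass loss (Gt/yr) to sea-level rise (mm/yr)."""
    volume_m3 = mass_gt * 1e12 / RHO_WATER         # 1 Gt = 1e12 kg
    return volume_m3 / OCEAN_AREA_M2 * 1000.0      # metres -> millimetres


def mm_per_yr_to_gt_per_yr(slr_mm):
    """Inverse conversion: sea-level rise (mm/yr) to mass flux (Gt/yr)."""
    return slr_mm / 1000.0 * OCEAN_AREA_M2 * RHO_WATER / 1e12


print(gt_per_yr_to_mm_per_yr(361.0))   # roughly 361 Gt/yr of melt ~ 1.0 mm/yr
# The abstract's +1.1 mm/yr (Greenland) and -0.9 mm/yr (Antarctica) imply
# mass fluxes of roughly these magnitudes:
print(mm_per_yr_to_gt_per_yr(1.1))     # ~ 400 Gt/yr net loss from Greenland
print(mm_per_yr_to_gt_per_yr(-0.9))    # ~ -325 Gt/yr, i.e. net gain on Antarctica
```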