863 results for Information needs – representation
Abstract:
The paper is an investigation of the exchange of ideas and information between an architect and building users in the early stages of the building design process, before the design brief or any drawings have been produced. The purpose of the research is to gain insight into the type of information users exchange with architects in early design conversations and to better understand the influence that the format of design interactions and interactional behaviours have on the exchange of information. We report an empirical study of pre-briefing conversations in which the overwhelming majority of the exchanges were about the functional or structural attributes of space; discussions that touched on the phenomenological, perceptual and symbolic meanings of space were rare. We explore the contextual features of meetings and the conversational strategies taken by the architect to prompt the users for information, and the influence these had on the information provided. Recommendations are made on the format and structure of pre-briefing conversations and on designers' strategies for raising the level of information provided by the user beyond the functional or structural attributes of space.
Abstract:
This article reviews current technological developments, particularly Peer-to-Peer technologies and Distributed Data Systems, and their value to community memory projects, particularly those concerned with preserving the cultural, literary and administrative data of cultures which have suffered genocide or are at risk of genocide. It draws attention to the comparatively good online representation of genocide denial groups and to changes in the technological strategies of Holocaust denial and other far-right groups. It draws on the author's work in providing IT support for a UK-based Non-Governmental Organization that supports survivors of genocide in Rwanda.
Abstract:
To-be-enacted material is more accessible in tests of recognition and lexical decision than material not intended for action (T. Goschke & J. Kuhl, 1993; R. L. Marsh, J. L. Hicks, & M. L. Bink, 1998). This finding has been attributed to the superior status of intention-related information. The current article explores an alternative (action-superiority) account that draws parallels between the intended enactment effect (IEE) and the subject-performed task effect. Using 2 paradigms, the authors observed faster recognition latencies for both enacted and to-be-enacted material. It is crucial to note that there was no evidence of an IEE for items that had already been executed during encoding. The IEE was also eliminated when motor processing was prevented after verbal encoding. These findings suggest an overlap between overt and intended enactment and indicate that motor information may be activated for verbal material in preparation for subsequent execution.
Abstract:
The nature of the spatial representations that underlie simple visually guided actions early in life was investigated in toddlers with Williams syndrome (WS), Down syndrome (DS), and healthy chronological age- and mental age-matched controls, through the use of a "double-step" saccade paradigm. The experiment tested the hypothesis that, compared to typically developing infants and toddlers, and toddlers with DS, those with WS display a deficit in using spatial representations to guide actions. Levels of sustained attention were also measured within these groups, to establish whether differences in levels of engagement influenced performance on the double-step saccade task. The results showed that toddlers with WS were unable to combine extra-retinal information with retinal information to the same extent as the other groups, and displayed evidence of other deficits in saccade planning, suggesting a greater reliance on sub-cortical mechanisms than the other populations. Results also indicated that their exploration of the visual environment is less developed. The sustained attention task revealed shorter and fewer periods of sustained attention in toddlers with DS, but not those with WS, suggesting that WS performance on the double-step saccade task is not explained by poorer engagement. The findings are also discussed in relation to a possible attention disengagement deficit in WS toddlers. Our study highlights the importance of studying genetic disorders early in development.
Abstract:
Accessing information that is spread across multiple sources in a structured and connected way is a general problem for enterprises. A unified structure for knowledge representation is urgently needed to enable the integration of heterogeneous information resources. Topic Maps seem to be a solution to this problem: the Topic Map technology enables connecting information, through concepts and relationships, with their occurrences across multiple systems. In this paper, we address this problem by describing a framework built on topic maps to support current knowledge management needs. New approaches for information integration, intelligent search and topic map exploration are introduced within this framework.
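As an illustration of the topic-map data model this abstract refers to, here is a minimal in-memory sketch: topics (concepts), associations (relationships), and occurrences (links to resources in backend systems). All names and data are illustrative assumptions, not the paper's framework.

```python
# Toy sketch of the Topic Maps triad: topics, associations, occurrences.
from dataclasses import dataclass, field

@dataclass
class Topic:
    name: str
    occurrences: list = field(default_factory=list)  # URIs of source resources

@dataclass
class Association:
    kind: str       # e.g. "is-technology-for"
    members: tuple  # topic names joined by the relationship

class TopicMap:
    def __init__(self):
        self.topics: dict[str, Topic] = {}
        self.associations: list[Association] = []

    def add_occurrence(self, name: str, uri: str) -> None:
        # Attach a resource from any backend system to a concept.
        self.topics.setdefault(name, Topic(name)).occurrences.append(uri)

    def relate(self, kind: str, *names: str) -> None:
        self.associations.append(Association(kind, names))

    def related(self, name: str) -> set[str]:
        # Navigate from a concept to connected concepts -- the basis of the
        # kind of intelligent search and exploration the paper mentions.
        out = set()
        for a in self.associations:
            if name in a.members:
                out.update(m for m in a.members if m != name)
        return out

tm = TopicMap()
tm.add_occurrence("Knowledge Management", "crm://doc/417")   # hypothetical URI
tm.add_occurrence("Topic Maps", "wiki://topic-maps")         # hypothetical URI
tm.relate("is-technology-for", "Topic Maps", "Knowledge Management")
print(tm.related("Topic Maps"))  # {'Knowledge Management'}
```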
Abstract:
Automatic indexing and retrieval of digital data pose major challenges. The main problem arises from the ever-increasing mass of digital media and the lack of efficient methods for indexing and retrieving such data based on semantic content rather than keywords. To enable intelligent web interactions, or even web filtering, we need to be capable of interpreting the information base in an intelligent manner. For a number of years research has been ongoing in the field of ontological engineering with the aim of using ontologies to add such (meta) knowledge to information. In this paper, we describe the architecture of a system, DREAM (Dynamic REtrieval Analysis and semantic metadata Management), designed to automatically and intelligently index huge repositories of special effects video clips, based on their semantic content, using a network of scalable ontologies to enable intelligent retrieval. The DREAM Demonstrator has been evaluated as deployed in the film post-production phase to support the storage, indexing and retrieval of large data sets of special effects video clips as an exemplar application domain. This paper provides its performance and usability results and highlights the scope for future enhancements of the DREAM architecture, which has proven successful in its first and possibly most challenging proving ground, namely film production, where it is already in routine use within our test-bed partners' creative processes.
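The core idea (semantic rather than keyword retrieval via an ontology) can be sketched generically. The toy ontology, clip identifiers, and function names below are invented for illustration; this is not the DREAM implementation.

```python
# Hedged sketch of ontology-assisted retrieval: clips are indexed by
# concept, and a query is expanded through ontology links (here,
# narrower sub-concepts) before matching.
ONTOLOGY = {                      # concept -> narrower concepts
    "explosion": ["fireball", "debris-cloud"],
    "water":     ["splash", "wave"],
}
INDEX = {                         # concept -> clip ids (hypothetical)
    "fireball": ["fx_0042"],
    "wave":     ["fx_0101", "fx_0107"],
}

def expand(concept: str) -> set[str]:
    # Collect the concept plus everything reachable via ontology links.
    seen, stack = set(), [concept]
    while stack:
        c = stack.pop()
        if c not in seen:
            seen.add(c)
            stack.extend(ONTOLOGY.get(c, []))
    return seen

def retrieve(concept: str) -> list[str]:
    # Semantic rather than keyword match: any clip tagged with the
    # concept or one of its sub-concepts is returned.
    return sorted(cid for c in expand(concept) for cid in INDEX.get(c, []))

print(retrieve("explosion"))  # ['fx_0042'] -- matched via "fireball"
```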
Abstract:
The quality of information provision considerably influences knowledge construction driven by individual users' needs. In the design of information systems for e-learning, personal information requirements should be incorporated to determine a selection of suitable learning content, an instructive sequencing of that content, and an effective presentation of it. This is an important part of instructional design for a personalised information package. The current research reveals a lack of means by which individual users' information requirements can be effectively incorporated to support personal knowledge construction. This paper presents a method that enables the articulation of users' requirements, grounded in established learning theories and requirements engineering paradigms. The user's information requirements can be systematically encapsulated in a user profile (i.e. the user requirements space) and further transformed into instructional design specifications (i.e. the information space). These two spaces allow the discovery of information-requirement patterns for self-maintaining and self-adapting personalisation that enhances the experience of the knowledge construction process.
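The two-space mapping the abstract describes can be illustrated with a small sketch: a user profile (requirements space) is transformed into a selection and ordering of learning content (information space). All field names, content items, and rules are hypothetical, not the paper's specification.

```python
# Toy mapping from a user requirements space to an information space.
from dataclasses import dataclass

@dataclass
class UserProfile:          # requirements space (hypothetical fields)
    prior_knowledge: str    # "novice" | "intermediate" | "advanced"
    preferred_format: str   # "text" | "video"

CONTENT = [  # (topic, level, format) -- invented catalogue
    ("intro", "novice", "video"),
    ("intro", "novice", "text"),
    ("core", "intermediate", "text"),
    ("advanced-topics", "advanced", "text"),
]
LEVELS = ["novice", "intermediate", "advanced"]

def build_package(p: UserProfile) -> list[tuple]:
    # Select content at or above the user's level, prefer the user's
    # format, and sequence from simpler to harder material.
    start = LEVELS.index(p.prior_knowledge)
    eligible = [c for c in CONTENT if LEVELS.index(c[1]) >= start]
    eligible.sort(key=lambda c: (LEVELS.index(c[1]), c[2] != p.preferred_format))
    return eligible

print(build_package(UserProfile("intermediate", "text")))
# [('core', 'intermediate', 'text'), ('advanced-topics', 'advanced', 'text')]
```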
Abstract:
The Global Earth Observation System of Systems (GEOSS) is a co-ordinated initiative by many nations to address the needs for earth-system information expressed by the 2002 World Summit on Sustainable Development. We discuss the role of earth-system modelling and data assimilation in transforming earth-system observations into the predictive and status-assessment products required by GEOSS, across many areas of socio-economic interest. First we review recent gains in the predictive skill of operational global earth-system models, on time-scales of days to several seasons. We then discuss recent work to develop from the global predictions a diverse set of end-user applications which can meet GEOSS requirements for information of socio-economic benefit; examples include forecasts of coastal storm surges, floods in large river basins, seasonal crop yield forecasts and seasonal lead-time alerts for malaria epidemics. We note ongoing efforts to extend operational earth-system modelling and assimilation capabilities to atmospheric composition, in support of improved services for air-quality forecasts and for treaty assessment. We next sketch likely GEOSS observational requirements in the coming decades. In concluding, we reflect on the cost of earth observations relative to the modest cost of transforming the observations into information of socio-economic value.
Abstract:
Background: The information processing capacity of the human mind is limited, as is evidenced by the attentional blink (AB), a deficit in identifying the second of two temporally close targets (T1 and T2) embedded in a rapid stream of distracters. Theories of the AB generally agree that it results from competition between stimuli for conscious representation. However, they disagree on the specific mechanisms, in particular about how attentional processing of T1 determines the AB to T2. Methodology/Principal Findings: The present study used the high spatial resolution of functional magnetic resonance imaging (fMRI) to examine the neural mechanisms underlying the AB. Our research approach was to design T1 and T2 stimuli that activate distinguishable brain areas involved in visual categorization and representation. ROI and functional connectivity analyses were then used to examine how attentional processing of T1, as indexed by activity in the T1 representation area, affected T2 processing. Our main finding was that attentional processing of T1 at the level of the visual cortex predicted T2 detection rates: those individuals who activated the T1 encoding area more strongly in blink versus no-blink trials generally detected T2 on a lower percentage of trials. The coupling of activity between T1 and T2 representation areas did not vary as a function of conscious T2 perception. Conclusions/Significance: These data are consistent with the notion that the AB is related to the attentional demands of T1 for selection, and indicate that these demands are reflected at the level of visual cortex. They also highlight the importance of individual differences in attentional settings in explaining AB task performance.
Abstract:
One of the most pervading concepts underlying computational models of information processing in the brain is linear input integration of rate-coded uni-variate information by neurons. After a suitable learning process this results in neuronal structures that statically represent knowledge as a vector of real-valued synaptic weights. Although this general framework has contributed to the many successes of connectionism, in this paper we argue that for all but the most basic of cognitive processes, a more complex, multi-variate dynamic neural coding mechanism is required: knowledge should not be spatially bound to a particular neuron or group of neurons. We conclude the paper with a discussion of a simple experiment that illustrates dynamic knowledge representation in a spiking neuron connectionist system.
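The contrast the abstract draws can be made concrete with a minimal sketch: (1) static rate coding, where the output is a weighted sum of real-valued rates and knowledge lives in the weight vector; versus (2) a toy leaky integrate-and-fire neuron, where information is carried dynamically in spike timing. All parameters are illustrative; this is not the paper's experiment.

```python
# (1) Static rate coding: linear input integration, y = w . x.
def rate_coded(weights, rates):
    return sum(w * r for w, r in zip(weights, rates))

# (2) Leaky integrate-and-fire: dv/dt = (-v + I) / tau; spike and reset
# when v crosses threshold. The *times* of spikes, not a static weight
# vector, carry the signal.
def lif_spike_times(input_current=1.5, tau=10.0, threshold=1.0,
                    dt=0.1, t_max=100.0):
    v, t, spikes = 0.0, 0.0, []
    while t < t_max:
        v += dt * (-v + input_current) / tau   # Euler integration step
        if v >= threshold:
            spikes.append(round(t, 1))
            v = 0.0                            # reset after spike
        t += dt
    return spikes

print(rate_coded([0.2, -0.5, 0.9], [3.0, 1.0, 2.0]))  # ≈ 1.9
print(lif_spike_times()[:5])   # first few spike times (ms)
```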
Abstract:
Although the use of climate scenarios for impact assessment has grown steadily since the 1990s, uptake of such information for adaptation is lagging by nearly a decade in terms of scientific output. Nonetheless, integration of climate risk information in development planning is now a priority for donor agencies because of the need to prepare for climate change impacts across different sectors and countries. This urgency stems from concerns that progress made against Millennium Development Goals (MDGs) could be threatened by anthropogenic climate change beyond 2015. Up to this time the human signal, though detectable and growing, will be a relatively small component of climate variability and change. This implies the need for a twin-track approach: on the one hand, vulnerability assessments of social and economic strategies for coping with present climate extremes and variability, and, on the other hand, development of climate forecast tools and scenarios to evaluate sector-specific, incremental changes in risk over the next few decades. This review starts by describing the climate outlook for the next couple of decades and the implications for adaptation assessments. We then review ways in which climate risk information is already being used in adaptation assessments and evaluate the strengths and weaknesses of three groups of techniques. Next we identify knowledge gaps and opportunities for improving the production and uptake of climate risk information for the 2020s. We assert that climate change scenarios can meet some, but not all, of the needs of adaptation planning. Even then, the choice of scenario technique must be matched to the intended application, taking into account local constraints of time, resources, human capacity and supporting infrastructure. We also show that much greater attention should be given to improving and critiquing models used for climate impact assessment, as standard practice. Finally, we highlight the over-arching need for the scientific community to provide more information and guidance on adapting to the risks of climate variability and change over nearer time horizons (i.e. the 2020s). Although the focus of the review is on information provision and uptake in developing regions, it is clear that many developed countries are facing the same challenges.
Abstract:
Monitoring Earth's terrestrial water conditions is critically important to many hydrological applications such as global food production; assessing water resources sustainability; and flood, drought, and climate change prediction. These needs have motivated the development of pilot monitoring and prediction systems for terrestrial hydrologic and vegetative states, but to date only at the rather coarse spatial resolutions (∼10–100 km) over continental to global domains. Adequately addressing critical water cycle science questions and applications requires systems that are implemented globally at much higher resolutions, on the order of 1 km, resolutions referred to as hyperresolution in the context of global land surface models. This opinion paper sets forth the needs and benefits for a system that would monitor and predict the Earth's terrestrial water, energy, and biogeochemical cycles. We discuss six major challenges in developing a system: improved representation of surface‐subsurface interactions due to fine‐scale topography and vegetation; improved representation of land‐atmospheric interactions and resulting spatial information on soil moisture and evapotranspiration; inclusion of water quality as part of the biogeochemical cycle; representation of human impacts from water management; utilizing massively parallel computer systems and recent computational advances in solving hyperresolution models that will have up to 10⁹ unknowns; and developing the required in situ and remote sensing global data sets. We deem the development of a global hyperresolution model for monitoring the terrestrial water, energy, and biogeochemical cycles a “grand challenge” to the community, and we call upon the international hydrologic community and the hydrological science support infrastructure to endorse the effort.
Abstract:
We consider the finite sample properties of model selection by information criteria in conditionally heteroscedastic models. Recent theoretical results show that certain popular criteria are consistent in that they will select the true model asymptotically with probability 1. To examine the empirical relevance of this property, Monte Carlo simulations are conducted for a set of non-nested data generating processes (DGPs) with the set of candidate models consisting of all types of model used as DGPs. In addition, not only is the best model considered but also those with similar values of the information criterion, called close competitors, thus forming a portfolio of eligible models. To supplement the simulations, the criteria are applied to a set of economic and financial series. In the simulations, the criteria are largely ineffective at identifying the correct model, either as best or a close competitor, the parsimonious GARCH(1,1) model being preferred for most DGPs. In contrast, asymmetric models are generally selected to represent actual data. This leads to the conjecture that the properties of parameterizations of processes commonly used to model heteroscedastic data are more similar than may be imagined and that more attention needs to be paid to the behaviour of the standardized disturbances of such models, both in simulation exercises and in empirical modelling.
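The selection procedure the abstract describes reduces to simple arithmetic: rank candidate models by an information criterion (e.g. AIC = 2k − 2 ln L, BIC = k ln n − 2 ln L) and keep "close competitors" within a tolerance of the best. The sketch below illustrates this; the log-likelihoods, sample size, and the 2-point tolerance are invented for illustration, not taken from the paper.

```python
import math

def aic(loglik, k):      # Akaike: 2k - 2 ln L
    return 2 * k - 2 * loglik

def bic(loglik, k, n):   # Schwarz: k ln n - 2 ln L
    return k * math.log(n) - 2 * loglik

n = 1000  # hypothetical sample size
candidates = {            # model -> (log-likelihood, number of parameters)
    "ARCH(1)":     (-1432.0, 2),
    "GARCH(1,1)":  (-1418.5, 3),
    "EGARCH(1,1)": (-1417.9, 4),
}

scores = {m: bic(ll, k, n) for m, (ll, k) in candidates.items()}
best = min(scores, key=scores.get)
# Portfolio of eligible models: the best plus close competitors, here
# taken as any model within 2 BIC points of the minimum.
portfolio = [m for m, s in scores.items() if s - scores[best] <= 2.0]
print(best, sorted(portfolio))
```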