51 results for recommender system, user profiling, personalization, implicit feedbacks
in CentAUR: Central Archive University of Reading - UK
Abstract:
The mesospheric response to the 2002 Antarctic Stratospheric Sudden Warming (SSW) is analysed using the Canadian Middle Atmosphere Model Data Assimilation System (CMAM-DAS), in which the mesospheric response represents a vertical propagation of information from the observations into the data-free mesosphere. The CMAM-DAS simulates a cooling in the lowest part of the mesosphere that is accomplished by resolved motions but is extended to the mid- to upper mesosphere by the response of the model's non-orographic gravity-wave drag parameterization to the change in zonal winds. The basic mechanism is that elucidated by Holton, consisting of a net eastward wave-drag anomaly in the mesosphere during the SSW, although in this case there is net upwelling in the polar mesosphere. Since the zonal-mean mesospheric response is shown to be predictable, this demonstrates that variations in the mesospheric state can be slaved to the lower atmosphere through gravity-wave drag.
Abstract:
Increasingly, distributed systems are being used to host all manner of applications. While these platforms provide a relatively cheap and effective means of executing applications, so far there has been little work on developing tools and utilities that can help application developers understand problems with the supporting software or with the executing applications. To fully understand why an application executing on a distributed system is not behaving as expected, it is important that not only the application but also the underlying middleware and the operating system are analysed; otherwise issues can be missed, and overall performance profiling and fault diagnosis become harder. We believe that one approach to profiling and analysing distributed systems and their applications is via the plethora of log files generated at runtime. In this paper we report on a system (Slogger) that utilises various emerging Semantic Web technologies to gather the heterogeneous log files generated by the various layers in a distributed system and unify them in a common data store. Once unified, the log data can be queried and visualised in order to highlight potential problems or issues that may be occurring in the supporting software or the application itself.
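The abstract does not give Slogger's implementation details, but the core idea, lifting heterogeneous log lines from different layers into one queryable store, can be sketched with standard RDF tooling. In the Python sketch below, the ex: vocabulary, the field names and the example log entries are illustrative assumptions, not Slogger's actual schema or data.

    # Illustrative sketch only: unify log lines from different layers as RDF
    # triples in one graph, then query across all layers at once.
    from rdflib import Graph, Literal, Namespace, RDF, URIRef

    EX = Namespace("http://example.org/log#")

    def add_log_entry(graph, source, timestamp, level, message):
        """Store one parsed log line as a small set of triples."""
        entry = URIRef(f"http://example.org/log/{source}/{timestamp}")
        graph.add((entry, RDF.type, EX.LogEntry))
        graph.add((entry, EX.source, Literal(source)))       # e.g. "app", "middleware", "os"
        graph.add((entry, EX.timestamp, Literal(timestamp)))
        graph.add((entry, EX.level, Literal(level)))
        graph.add((entry, EX.message, Literal(message)))

    g = Graph()
    add_log_entry(g, "middleware", "2024-01-01T12:00:01", "WARN", "queue length 950")
    add_log_entry(g, "app", "2024-01-01T12:00:03", "ERROR", "request timed out")

    # Once unified, a single SPARQL query spans every layer of the stack.
    query = """
        SELECT ?source ?timestamp ?message WHERE {
            ?e a ex:LogEntry ; ex:level ?lvl ;
               ex:source ?source ; ex:timestamp ?timestamp ; ex:message ?message .
            FILTER (?lvl IN ("WARN", "ERROR"))
        } ORDER BY ?timestamp"""
    for row in g.query(query, initNs={"ex": EX}):
        print(row.source, row.timestamp, row.message)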
Abstract:
Where users are interacting in a distributed virtual environment, the actions of each user must be observed by peers with sufficient consistency and within a limited delay so as not to be detrimental to the interaction. The consistency control issue may be split into three parts: update control; consistent enactment and evolution of events; and causal consistency. The delay in the presentation of events, termed latency, is primarily dependent on the network propagation delay and the consistency control algorithms. The latency induced by the consistency control algorithm, in particular causal ordering, is proportional to the number of participants. This paper describes how the effect of network delays may be reduced and introduces a scalable solution that provides sufficient consistency control while minimising its effect on latency. The principles described have been developed at Reading over the past five years. Similar principles are now emerging in the simulation community through the HLA standard. This paper attempts to validate the suggested principles within the schema of distributed simulation and virtual environments and to compare and contrast with those described by the HLA definition documents.
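The paper's own consistency control algorithm is not reproduced in the abstract, so the sketch below shows only the textbook mechanism behind the scaling claim: with vector-clock-based causal ordering, every update carries one counter per participant, so both the message overhead and the delivery test grow with the number of peers. The class and delivery rule are a generic illustration, not the scheme developed at Reading.

    # Generic vector-clock sketch: per-message metadata is one counter per
    # participant, which is one reason causal ordering costs grow with group size.
    class Participant:
        def __init__(self, pid, n_participants):
            self.pid = pid
            self.clock = [0] * n_participants   # one entry per peer

        def send_event(self):
            """Stamp an outgoing update with this peer's vector clock."""
            self.clock[self.pid] += 1
            return self.pid, list(self.clock)

        def can_deliver(self, sender, stamp):
            """Causal delivery rule: the sender's entire causal past must already be seen."""
            if stamp[sender] != self.clock[sender] + 1:
                return False
            return all(stamp[k] <= self.clock[k]
                       for k in range(len(stamp)) if k != sender)

        def deliver(self, sender, stamp):
            self.clock[sender] = stamp[sender]

    # With 3 participants each update carries 3 counters; with 300, it carries 300.
    a, b = Participant(0, 3), Participant(1, 3)
    sender, stamp = a.send_event()
    if b.can_deliver(sender, stamp):
        b.deliver(sender, stamp)
    print(b.clock)   # [1, 0, 0]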
Abstract:
The magnitude and direction of the coupled feedbacks between the biotic and abiotic components of the terrestrial carbon cycle are a major source of uncertainty in coupled climate–carbon-cycle models [1-3]. Materially closed, energetically open biological systems continuously and simultaneously allow the two-way feedback loop between the biotic and abiotic components to take place [4-7], but so far they have not been used to their full potential in ecological research, owing to the challenge of achieving sustainable model systems [6, 7]. We show that using materially closed soil–vegetation–atmosphere systems with pro rata carbon amounts for the main terrestrial carbon pools enables the establishment of conditions that balance plant carbon assimilation and autotrophic and heterotrophic respiration fluxes over periods suitable for investigating short-term biotic carbon feedbacks. Using this approach, we tested an alternative way of assessing the impact of increased CO2 and temperature on biotic carbon feedbacks. The results show that, without nutrient and water limitations, the short-term biotic responses could potentially buffer a temperature increase of 2.3 °C without significant positive feedbacks to atmospheric CO2. We argue that such closed-system research represents an important test-bed platform for model validation and parameterization of plant and soil biotic responses to environmental changes.
Abstract:
The terrestrial biosphere is a key regulator of atmospheric chemistry and climate. During past periods of climate change, vegetation cover and interactions between the terrestrial biosphere and atmosphere changed within decades. Modern observations show a similar responsiveness of terrestrial biogeochemistry to anthropogenically forced climate change and air pollution. Although interactions between the carbon cycle and climate have been a central focus, other biogeochemical feedbacks could be as important in modulating future climate change. Total positive radiative forcings resulting from feedbacks between the terrestrial biosphere and the atmosphere are estimated to reach up to 0.9 or 1.5 W m⁻² K⁻¹ towards the end of the twenty-first century, depending on the extent to which interactions with the nitrogen cycle stimulate or limit carbon sequestration. This substantially reduces and potentially even eliminates the cooling effect owing to carbon dioxide fertilization of the terrestrial biota. The overall magnitude of the biogeochemical feedbacks could potentially be similar to that of feedbacks in the physical climate system, but there are large uncertainties in the magnitude of individual estimates and in accounting for synergies between these effects.
Abstract:
The quality control, validation and verification of the European Flood Alert System (EFAS) are described. EFAS is designed as a flood early warning system at the pan-European scale, complementing national systems and providing flood warnings more than 2 days before a flood. On average 20–30 alerts per year are sent out to the EFAS partner network, which consists of 24 national hydrological authorities responsible for transnational river basins. Quality control of the system includes the evaluation of hits, misses and false alarms, showing that EFAS produces hits more than 50% of the time. Furthermore, the skill of both the meteorological and the hydrological forecasts is evaluated and presented here for a 10-year period. Next, end-user needs and feedback are systematically analysed. Suggested improvements, such as real-time river discharge updating, are currently being implemented.
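The hits, misses and false alarms mentioned above are conventionally summarised by standard forecast-verification scores. The short sketch below computes two of them from a 2x2 contingency table; the counts are invented for illustration and are not EFAS statistics.

    # Illustrative forecast-verification sketch; the counts are invented, not EFAS data.
    def verification_scores(hits, misses, false_alarms):
        """Probability of detection and false alarm ratio from a 2x2 contingency table."""
        pod = hits / (hits + misses)                  # share of observed floods that were forecast
        far = false_alarms / (hits + false_alarms)    # share of alerts that did not verify
        return pod, far

    pod, far = verification_scores(hits=18, misses=7, false_alarms=10)
    print(f"POD = {pod:.2f}, FAR = {far:.2f}")   # POD = 0.72, FAR = 0.36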
Abstract:
Large changes in the extent of northern subtropical arid regions during the Holocene are attributed to orbitally forced variations in monsoon strength and have been implicated in the regulation of atmospheric trace gas concentrations on millennial timescales. Models that omit biogeophysical feedback, however, are unable to account for the full magnitude of African monsoon amplification and extension during the early to middle Holocene (~9500–5000 years B.P.). A data set describing land-surface conditions 6000 years B.P. on a 1° × 1° grid across northern Africa and the Arabian Peninsula has been prepared from published maps and other sources of palaeoenvironmental data, with the primary aim of providing a realistic lower boundary condition for atmospheric general circulation model experiments similar to those performed in the Palaeoclimate Modelling Intercomparison Project. The data set includes information on the percentage of each grid cell occupied by specific vegetation types (steppe, savanna, xerophytic woods/scrub, tropical deciduous forest, and tropical montane evergreen forest), open water (lakes), and wetlands, plus information on the flow direction of major drainage channels for use in large-scale palaeohydrological modeling.
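The abstract lists the variables held for each 1° × 1° grid cell; a minimal record type for one such cell might look like the sketch below. The field names, types and example values are assumptions made for illustration and do not reproduce the published data format.

    # Hypothetical record for one 1-degree grid cell; field names and values are illustrative.
    from dataclasses import dataclass

    @dataclass
    class GridCell:
        lat: float                      # cell-centre latitude (degrees N)
        lon: float                      # cell-centre longitude (degrees E)
        vegetation_pct: dict            # % of cell per vegetation type
        open_water_pct: float           # lakes
        wetland_pct: float
        flow_direction: str             # direction of the major drainage channel, e.g. "NE"

    cell = GridCell(
        lat=18.5, lon=10.5,
        vegetation_pct={"steppe": 40.0, "savanna": 25.0, "xerophytic woods/scrub": 10.0},
        open_water_pct=5.0, wetland_pct=2.0, flow_direction="NE",
    )
    print(sum(cell.vegetation_pct.values()))   # vegetated fraction of the cell: 75.0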
Abstract:
Perturbations to the carbon cycle could constitute large feedbacks on future changes in atmospheric CO2 concentration and climate. This paper demonstrates how carbon cycle feedback can be expressed in formally similar ways to climate feedback, and thus compares their magnitudes. The carbon cycle gives rise to two climate feedback terms: the concentration–carbon feedback, resulting from the uptake of carbon by land and ocean as a biogeochemical response to the atmospheric CO2 concentration, and the climate–carbon feedback, resulting from the effect of climate change on carbon fluxes. In the earth system models of the Coupled Climate–Carbon Cycle Model Intercomparison Project (C4MIP), climate–carbon feedback on warming is positive and of a similar size to the cloud feedback. The concentration–carbon feedback is negative; it has generally received less attention in the literature, but in magnitude it is 4 times larger than the climate–carbon feedback and more uncertain. The concentration–carbon feedback is the dominant uncertainty in the allowable CO2 emissions that are consistent with a given CO2 concentration scenario. In modeling the climate response to a scenario of CO2 emissions, the net carbon cycle feedback is of comparable size and uncertainty to the noncarbon–climate response. To quantify simulated carbon cycle feedbacks satisfactorily, a radiatively coupled experiment is needed, in addition to the fully coupled and biogeochemically coupled experiments, which are referred to as coupled and uncoupled in C4MIP. The concentration–carbon and climate–carbon feedbacks do not combine linearly, and the concentration–carbon feedback is dependent on scenario and time.
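The two feedback terms named above are conventionally written as a linearization of the land-plus-ocean carbon uptake; one common C4MIP-style form is sketched in LaTeX below. The symbols beta and gamma are the usual concentration–carbon and climate–carbon sensitivity parameters; this is illustrative notation rather than necessarily the exact formalism of the paper.

    % Conventional linearized decomposition of carbon uptake (illustrative notation):
    % beta  = concentration-carbon sensitivity (uptake per unit CO2 increase)
    % gamma = climate-carbon sensitivity (uptake change per unit warming, typically negative)
    \Delta C_{\mathrm{land+ocean}} \;\approx\; \beta\, \Delta C_{\mathrm{atm}} + \gamma\, \Delta T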
Abstract:
Negative correlations between task performance in dynamic control tasks and verbalizable knowledge, as assessed by a post-task questionnaire, have been interpreted as dissociations that indicate two antagonistic modes of learning, one being “explicit”, the other “implicit”. This paper views the control tasks as finite-state automata and offers an alternative interpretation of these negative correlations. It is argued that “good controllers” observe fewer different state transitions and, consequently, can answer fewer post-task questions about system transitions than can “bad controllers”. Two experiments demonstrate the validity of the argument by showing the predicted negative relationship between control performance and the number of explored state transitions, and the predicted positive relationship between the number of explored state transitions and questionnaire scores. However, the experiments also elucidate important boundary conditions for the critical effects. We discuss the implications of these findings, and of other problems arising from the process control paradigm, for conclusions about implicit versus explicit learning processes.
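The core argument, that "good controllers" simply traverse fewer distinct state transitions and can therefore report on fewer of them afterwards, can be made concrete with a toy finite-state automaton. The transition table and input sequences below are invented for illustration and do not correspond to the control tasks used in the experiments.

    # Toy finite-state automaton: count the distinct (state, input) transitions a
    # controller actually explores. The transition table is invented.
    TRANSITIONS = {                       # (state, input) -> next state
        ("low", "up"): "mid",   ("mid", "up"): "high",   ("high", "up"): "high",
        ("high", "down"): "mid", ("mid", "down"): "low", ("low", "down"): "low",
    }

    def run_controller(inputs, start="low"):
        """Drive the automaton and record which transitions were visited."""
        state, seen = start, set()
        for x in inputs:
            nxt = TRANSITIONS[(state, x)]
            seen.add((state, x, nxt))
            state = nxt
        return state, seen

    # A controller that reaches and holds the target explores fewer distinct
    # transitions than one that wanders through the state space.
    _, good = run_controller(["up", "up", "up", "up"])            # settles at "high"
    _, bad = run_controller(["up", "down", "up", "up", "down"])   # oscillates
    print(len(good), len(bad))   # 3 4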
Abstract:
The ultimate criterion of success for interactive expert systems is that they will be used, and used to effect, by individuals other than the system developers. A key ingredient of success in most systems is involving users in the specification and development of systems as they are being built. However, until recently, system designers have paid little attention to ascertaining user needs and to developing systems with corresponding functionality and appropriate interfaces to match those requirements. Although the situation is beginning to change, many developers do not know how to go about involving users, or else tackle the problem in an inadequate way. This paper discusses the need for user involvement and considers why many developers are still not involving users in an optimal way. It looks at the different ways in which users can be involved in the development process and describes how to select appropriate techniques and methods for studying users. Finally, it discusses some of the problems inherent in involving users in expert system development, and recommends an approach which incorporates both ethnographic analysis and formal user testing.
Abstract:
This paper describes the user modeling component of EPIAIM, a consultation system for data analysis in epidemiology. The component is aimed at representing knowledge of concepts in the domain, so that their explanations can be adapted to user needs. The first part of the paper describes two studies aimed at analysing user requirements. The first is a questionnaire study which examines the respondents' familiarity with concepts. The second is an analysis of concept descriptions in textbooks and from expert epidemiologists, which examines how discourse strategies are tailored to the level of experience of the expected audience. The second part of the paper describes how the results of these studies have been used to design the user modeling component of EPIAIM. This component works in two steps. In the first step, a few trigger questions allow the activation of a stereotype that includes a "body" and an "inference component". The body is a representation of the body of knowledge that a class of users is expected to know, along with the probability that each item of knowledge is known. In the inference component, the learning process of concepts is represented as a belief network. Hence, in the second step the belief network is used to refine the initial default information in the stereotype's body. This is done by asking a few questions on those concepts for which it is uncertain whether or not they are known to the user, and propagating this new evidence to revise the whole model. The system has been implemented on a workstation under UNIX. An example of the system in operation is presented, and the advantages and limitations of the approach are discussed.
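EPIAIM's actual stereotype bodies and belief network are not detailed in the abstract, so the sketch below only illustrates the two-step idea: activate a stereotype carrying default probabilities that each concept is known, then revise individual probabilities as the user answers a few questions. The stereotype names, concepts, probabilities and the simple Bayesian update (standing in for full belief-network propagation) are all assumptions made for illustration.

    # Illustrative two-step user-model sketch (not EPIAIM's implementation).
    STEREOTYPES = {    # step 1: trigger questions activate one of these default profiles
        "clinician":    {"odds ratio": 0.8, "confounding": 0.6, "logistic regression": 0.3},
        "statistician": {"odds ratio": 0.9, "confounding": 0.7, "logistic regression": 0.9},
    }

    def revise(prior_known, answered_correctly, p_correct_if_known=0.9, p_correct_if_unknown=0.2):
        """Step 2: Bayesian update of P(concept is known) from one answer
        (a stand-in for propagating evidence through the belief network)."""
        like_known = p_correct_if_known if answered_correctly else 1 - p_correct_if_known
        like_unknown = p_correct_if_unknown if answered_correctly else 1 - p_correct_if_unknown
        evidence = like_known * prior_known + like_unknown * (1 - prior_known)
        return like_known * prior_known / evidence

    profile = dict(STEREOTYPES["clinician"])
    profile["logistic regression"] = revise(profile["logistic regression"], answered_correctly=True)
    print(profile)   # explanations can now be tailored to what is probably unknown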
Abstract:
In recent years there has been a growing debate over whether or not standards should be produced for user system interfaces. Those in favor of standardization argue that standards in this area will result in more usable systems, while those against argue that standardization is neither practical nor desirable. The present paper reviews both sides of this debate in relation to expert systems. It argues that in many areas guidelines are more appropriate than standards for user interface design.
Abstract:
Europe's widely distributed climate modelling expertise, now organized in the European Network for Earth System Modelling (ENES), is both a strength and a challenge. Recognizing this, the European Union's Program for Integrated Earth System Modelling (PRISM) infrastructure project aims at designing a flexible and user-friendly environment to assemble, run and post-process Earth System models. PRISM was started in December 2001 with a duration of three years. This paper presents the major stages of PRISM, including: (1) the definition and promotion of scientific and technical standards to increase component modularity; (2) the development of an end-to-end software environment (graphical user interface, coupling and I/O system, diagnostics, visualization) to launch, monitor and analyse complex Earth system models built around state-of-the-art community component models (atmosphere, ocean, atmospheric chemistry, ocean biochemistry, sea ice, land surface); and (3) testing and quality standards to ensure high performance on a variety of computing platforms. PRISM is emerging as a core strategic software infrastructure for building the European research area in Earth system sciences.
Abstract:
Context: Learning can be regarded as knowledge construction in which prior knowledge and experience serve as the basis for learners to expand their knowledge base. Such a process of knowledge construction has to take place continuously in order to enhance the learners' competence in a competitive working environment. As information consumers, individual users demand personalised information provision which meets their own specific purposes, goals, and expectations. Objectives: The current methods in requirements engineering are capable of modelling the common user's behaviour in the domain of knowledge construction. The users' requirements can be represented as a case in the defined structure, which can be reasoned over to enable requirements analysis. Such analysis needs to be enhanced so that personalised information provision can be tackled and modelled; however, there is a lack of suitable modelling methods to achieve this end. This paper presents a new ontological method for capturing an individual user's requirements and transforming them into personalised information provision specifications, so that the right information can be provided to the right user for the right purpose. Method: An experiment was conducted based on a qualitative method. A medium-sized group of users participated to validate the method and its techniques, i.e. articulates, maps, configures, and learning content. The results were used as feedback for improvement. Results: The research work has produced an ontology model with a set of techniques which support the functions of profiling users' requirements, reasoning over requirements patterns, generating workflows from norms, and formulating information provision specifications. Conclusion: Current requirements engineering approaches provide the methodical capability for developing solutions. Our research outcome, i.e. the ontology model with its techniques, can further enhance these requirements engineering approaches for modelling individual users' needs and discovering users' requirements.