851 results for "perceived environmental uncertainty"
Abstract:
Recent developments in service-oriented and distributed computing have created exciting opportunities for the integration of models in service chains to create the Model Web. This offers the potential for orchestrating web data and processing services in complex chains: a flexible approach which exploits the increased access to products and tools, and the scalability offered by the Web. However, the uncertainty inherent in data and models must be quantified and communicated in an interoperable way in order for its effects to be effectively assessed as errors propagate through complex automated model chains. We describe a proposed set of tools for handling, characterizing and communicating uncertainty in this context, and show how they can be used to 'uncertainty-enable' Web Services in a model chain. An example implementation is presented, which combines environmental and publicly contributed data to produce estimates of sea-level air pressure, with estimates of uncertainty which incorporate the effects of model approximation as well as the uncertainty inherent in the observational and derived data.
Abstract:
Traditionally, geostatistical algorithms are contained within specialist GIS and spatial statistics software. Such packages are often expensive, with relatively complex user interfaces and steep learning curves, and cannot be easily integrated into more complex process chains. In contrast, Service Oriented Architectures (SOAs) promote interoperability and loose coupling within distributed systems, typically using XML (eXtensible Markup Language) and Web services. Web services provide a mechanism for a user to discover and consume a particular process, often as part of a larger process chain, with minimal knowledge of how it works. Wrapping current geostatistical algorithms with a Web service layer would thus increase their accessibility, but raises several complex issues. This paper discusses a solution to providing interoperable, automatic geostatistical processing through the use of Web services, developed in the INTAMAP project (INTeroperability and Automated MAPping). The project builds upon Open Geospatial Consortium standards for describing observations, typically used within sensor webs, and employs Geography Markup Language (GML) to describe the spatial aspect of the problem domain. Thus the interpolation service is extremely flexible, being able to support a range of observation types, and can cope with issues such as change of support and differing error characteristics of sensors (by utilising descriptions of the observation process provided by SensorML). XML is accepted as the de facto standard for describing Web services, due to its expressive capabilities which allow automatic discovery and consumption by ‘naive’ users. Any XML schema employed must therefore be capable of describing every aspect of a service and its processes. However, no schema currently exists that can define the complex uncertainties and modelling choices that are often present within geostatistical analysis. 
We show a solution to this problem, developing a family of XML schemata to enable the description of a full range of uncertainty types. These types will range from simple statistics, such as the kriging mean and variances, through to a range of probability distributions and non-parametric models, such as realisations from a conditional simulation. By employing these schemata within a Web Processing Service (WPS) we show a prototype moving towards a truly interoperable geostatistical software architecture.
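As a rough illustration of the two ends of this range, the sketch below encodes a parametric (Gaussian) result and a set of conditional-simulation realisations as XML. The element names are illustrative stand-ins, not the actual UncertML vocabulary:

```python
# Illustrative encoding of two uncertainty types as XML.
# Element names are hypothetical, NOT the real UncertML schema.
import xml.etree.ElementTree as ET

def encode_gaussian(mean, variance):
    # Parametric case: e.g. a kriging mean and variance.
    root = ET.Element("GaussianDistribution")
    ET.SubElement(root, "mean").text = str(mean)
    ET.SubElement(root, "variance").text = str(variance)
    return ET.tostring(root, encoding="unicode")

def encode_realisations(samples):
    # Non-parametric case: realisations from a conditional simulation.
    root = ET.Element("Realisations")
    ET.SubElement(root, "values").text = " ".join(str(s) for s in samples)
    return ET.tostring(root, encoding="unicode")

print(encode_gaussian(1012.5, 4.2))
print(encode_realisations([1010.1, 1013.7, 1011.9]))
```

In a WPS setting, fragments like these would travel as the outputs of an interpolation process, letting a downstream service decide how to consume each uncertainty type.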
Abstract:
The automatic interpolation of environmental monitoring network data, such as air quality or radiation levels, in a real-time setting poses a number of practical and theoretical questions. Among the problems encountered are (i) handling and communicating the uncertainty of predictions, (ii) automatic (hyper)parameter estimation, (iii) monitoring network heterogeneity, (iv) dealing with outlying extremes, and (v) quality control. In this paper we discuss these issues in light of the spatial interpolation comparison exercise held in 2004.
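A minimal sketch of issues (iv) and prediction itself: a median/MAD screen for outlying extremes followed by inverse-distance-weighted prediction. This is an illustrative toy, not one of the methods compared in the 2004 exercise:

```python
import math
import statistics

def screen_outliers(obs, k=3.0):
    """Flag observations more than k scaled MADs from the median.
    obs is a list of (x, y, value) tuples."""
    vals = [v for (_, _, v) in obs]
    med = statistics.median(vals)
    mad = statistics.median(abs(v - med) for v in vals) or 1e-9
    return [(x, y, v) for (x, y, v) in obs
            if abs(v - med) <= k * 1.4826 * mad]

def idw(obs, x0, y0, power=2.0):
    """Inverse-distance-weighted prediction at (x0, y0)."""
    num = den = 0.0
    for x, y, v in obs:
        d = math.hypot(x - x0, y - y0)
        if d == 0:
            return v  # exact hit on an observation
        w = d ** -power
        num += w * v
        den += w
    return num / den
```

A real automatic-mapping service would replace both steps with model-based equivalents (robust variogram estimation, kriging), but the reactive screen-then-predict structure is the same.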
Abstract:
When constructing and using environmental models, it is typical that many of the inputs to the models will not be known perfectly. In some cases, it will be possible to make observations, or occasionally to use physics-based uncertainty propagation, to ascertain the uncertainty on these inputs. However, such observations are often either unavailable or simply not possible, and another approach to characterising the uncertainty on the inputs must be sought. Even when observations are available, if the analysis is being carried out within a Bayesian framework then prior distributions will have to be specified. One option for gathering, or at least estimating, this information is to employ expert elicitation. Expert elicitation is well studied within statistics and psychology and involves the assessment of the beliefs of a group of experts about an uncertain quantity (for example, an input or parameter within a model), typically in terms of obtaining a probability distribution. One of the challenges in expert elicitation is to minimise the biases that might enter into the judgements made by the individual experts, and then to come to a consensus decision within the group of experts. Effort is made in the elicitation exercise to prevent biases from clouding the judgements through well-devised questioning schemes. It is also important that, when reaching a consensus, the experts are exposed to the knowledge of the others in the group. Within the FP7 UncertWeb project (http://www.uncertweb.org/), there is a requirement to build a Web-based tool for expert elicitation. In this paper, we discuss some of the issues of building a Web-based elicitation system, both the technological aspects and the statistical and scientific issues. In particular, we demonstrate two tools: a Web-based system for the elicitation of continuous random variables and a system designed to elicit uncertainty about categorical random variables in the setting of landcover classification uncertainty.
The first of these examples is a generic tool developed to elicit uncertainty about univariate continuous random variables. It is designed to be used within an application context and extends the existing SHELF method, adding a web interface and access to metadata. The tool is developed so that it can be readily integrated with environmental models exposed as web services. The second example was developed for the TREES-3 initiative, which monitors tropical landcover change through ground-truthing at confluence points. It allows experts to validate the accuracy of automated landcover classifications using site-specific imagery and local knowledge. Experts may provide uncertainty information at various levels: from a general rating of their confidence in a site validation to a numerical ranking of the possible landcover types within a segment. A key challenge in the web-based setting is the design of the user interface and the method of interaction between the problem owner and the problem experts. We present the workflow of the elicitation tool and show how the final elicited distributions and confusion matrices can be represented using UncertML, ready for integration into uncertainty-enabled workflows. We also show how the metadata associated with the elicitation exercise is captured and can be referenced from the elicited result, providing crucial lineage information and thus traceability in the decision-making process.
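The continuous case can be caricatured in a few lines: given an expert's elicited median and tail quantiles, fit a Normal distribution. This is a much-simplified stand-in for the SHELF fitting procedure; the quantile levels and the averaging rule are assumptions made purely for illustration:

```python
from statistics import NormalDist

def fit_normal_from_quantiles(q_low, q_med, q_high,
                              p_low=0.05, p_high=0.95):
    """Fit a Normal to an expert's elicited median and tail quantiles.

    The median is taken as mu, and sigma is the average of the two
    estimates implied by the lower and upper quantiles. A toy stand-in
    for the SHELF fitting step, not the actual algorithm."""
    z_low = NormalDist().inv_cdf(p_low)    # e.g. -1.645 for p = 0.05
    z_high = NormalDist().inv_cdf(p_high)  # e.g. +1.645 for p = 0.95
    sigma = ((q_low - q_med) / z_low + (q_high - q_med) / z_high) / 2
    return NormalDist(mu=q_med, sigma=sigma)

# Expert believes: 5% chance below 2.0, median 5.0, 95% chance below 8.0
fitted = fit_normal_from_quantiles(2.0, 5.0, 8.0)
```

The fitted distribution would then be serialised (e.g. as an UncertML Gaussian) together with the elicitation metadata, so the lineage of the prior is preserved downstream.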
Abstract:
The central argument of this thesis is that the nature and purpose of corporate reporting has changed over time to become a more outward-looking and forward-looking document, designed to promote the company and its performance to a wide range of shareholders rather than merely to report to its owners upon past performance. It is argued that the discourse of environmental accounting and reporting is one driver for this change, but that this discourse has been set up as conflicting with the discourse of traditional accounting and performance measurement. The effect of this opposition between the discourses is that the two have been interpreted as different and incompatible dimensions of performance, with good performance along one dimension only being achievable through a sacrifice of performance along the other. Thus a perceived dialectic in performance is believed to exist. One of the principal purposes of this thesis is to explore this perceived dialectic and, through analysis, to show that it does not exist and that there is no incompatibility. This exploration and analysis is based upon an investigation of the inherent inconsistencies in such corporate reports, and the analysis makes use of both a statistical analysis and a semiotic analysis of corporate reports and the reported performance of companies along these dimensions. Thus the development of a semiology of corporate reporting is one of the significant outcomes of this thesis. A further outcome is a consideration of the implications of the analysis for corporate performance and its measurement. The thesis concludes with a consideration of the way in which the advent of electronic reporting may affect the ability of organisations to maintain the dialectic, and the implications for corporate reporting.
Abstract:
A multi-variate descriptive model of environmental and nature conservation attitudes and values is proposed and empirically supported. A mapping sentence is developed out of an analysis of data from a series of Repertory Grid interviews addressing conservation employees' attitudes towards their profession's activities. The research is carried out within the meta-theoretical framework of Facet Theory. A mapping sentence is developed consisting of nine facets, from which three questionnaires are constructed addressing the selective orientations towards environmental concern. A mapping sentence and facet model are developed for each study. Once the internal structure of this model has been established using Similarity Structure Analysis, the elements of the facets are subjected to Partial Order Scalogram Analysis with base coordinates. A questionnaire is statistically analysed to assess the relationship between facet elements and four measures of attitudes towards, and involvement with, conservation. This enables the comparison of the relative strengths of attitudes associated with each facet element and each measure of conservation attitude. In general, the relationship between the social value of conservation and involvement pledges to conservation is monotonic; the perceived importance of a conservation issue appears predictive of personal involvement. Furthermore, the elements of the life area and scale facets are differentially related to attitude measures. The multi-variate descriptive model of environmental conservation values and attitudes is discussed in relation to its implications for psychological research into environmental concern and for environmental and nature conservation.
Abstract:
In recent years there has been a great effort to combine the technologies and techniques of GIS and process models. This project examines the issues of linking a standard current generation 2½d GIS with several existing model codes. The focus for the project has been the Shropshire Groundwater Scheme, which is being developed to augment flow in the River Severn during drought periods by pumping water from the Shropshire Aquifer. Previous authors have demonstrated that under certain circumstances pumping could reduce the soil moisture available for crops. This project follows earlier work at Aston in which the effects of drawdown were delineated and quantified through the development of a software package that implemented a technique which brought together the significant spatially varying parameters. This technique is repeated here, but using a standard GIS called GRASS. The GIS proved adequate for the task and the added functionality provided by the general purpose GIS - the data capture, manipulation and visualisation facilities - were of great benefit. The bulk of the project is concerned with examining the issues of the linkage of GIS and environmental process models. To this end a groundwater model (Modflow) and a soil moisture model (SWMS2D) were linked to the GIS and a crop model was implemented within the GIS. A loose-linked approach was adopted and secondary and surrogate data were used wherever possible. 
The implications of this work relate to: the justification of a loose-linked versus a closely integrated approach; how, technically, to achieve the linkage; how to reconcile the different data models used by the GIS and the process models; control of the movement of data between models of environmental subsystems in order to model the total system; the advantages and disadvantages of using a current-generation GIS as a medium for linking environmental process models; the generation of input data, including the use of geostatistics, stochastic simulation, remote sensing, regression equations and mapped data; issues of accuracy, uncertainty and simply providing adequate data for the complex models; and how such a modelling system fits into an organisational framework.
Abstract:
Trust is a critical component of business to consumer (B2C) e-Commerce success. In the absence of typical environmental cues that consumers use to assess vendor trustworthiness in the offline retail context, online consumers often rely on trust triggers embedded within e-Commerce websites to contribute to the establishment of sufficient trust to make an online purchase. This paper presents and discusses the results of a study which took an initial look at the extent to which the context or manner in which trust triggers are evaluated may exert influence on the importance attributed to individual triggers.
Abstract:
This paper presents an empirical study based on a survey of 399 owners of small and medium-sized companies in Lithuania. Applying bivariate and ordered probit estimators, we investigate why some business owners expect their firms to expand while others do not. Our main findings provide evidence that SME owners' generic and specific human capital matter. Those with higher education and 'learning by doing' attributes, either through previous job experience or additional entrepreneurial experience, expect their businesses to expand. Expectations of growth are positively related to exporting and non-monotonically to enterprise size. In addition, we analyse the link between perceptions of constraints to business activities and growth expectations, and find that the factors perceived as the main business barriers are not necessarily those associated with reduced growth expectations. In particular, perceptions of both corruption and of inadequate tax systems seem to affect growth expectations the most.
Abstract:
Self-adaptive systems have the capability to autonomously modify their behavior at run-time in response to changes in their environment. Self-adaptation is particularly necessary for applications that must run continuously, even under adverse conditions and changing requirements; sample domains include automotive systems, telecommunications, and environmental monitoring systems. While a few techniques have been developed to support the monitoring and analysis of requirements for adaptive systems, limited attention has been paid to the actual creation and specification of requirements of self-adaptive systems. As a result, self-adaptivity is often constructed in an ad-hoc manner. In order to support the rigorous specification of adaptive systems requirements, this paper introduces RELAX, a new requirements language for self-adaptive systems that explicitly addresses uncertainty inherent in adaptive systems. We present the formal semantics for RELAX in terms of fuzzy logic, thus enabling a rigorous treatment of requirements that include uncertainty. RELAX enables developers to identify uncertainty in the requirements, thereby facilitating the design of systems that are, by definition, more flexible and amenable to adaptation in a systematic fashion. We illustrate the use of RELAX on smart home applications, including an adaptive assisted living system.
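To give a flavour of the fuzzy-logic treatment, the sketch below assigns a RELAX-style requirement such as "AS CLOSE AS POSSIBLE TO" a graded satisfaction degree rather than a Boolean verdict. The linear membership shape, names, and example values are illustrative choices, not the published RELAX semantics:

```python
def as_close_as_possible_to(target, tolerance):
    """Return a fuzzy membership function for a relaxed requirement:
    satisfaction is 1.0 at the target and degrades linearly to 0.0
    at +/- tolerance. An illustrative shape, not RELAX's formal
    semantics."""
    def membership(value):
        return max(0.0, 1.0 - abs(value - target) / tolerance)
    return membership

# e.g. in an assisted-living smart home: keep room temperature
# AS CLOSE AS POSSIBLE TO 21 degrees, with an assumed 3-degree tolerance
satisfaction = as_close_as_possible_to(21.0, 3.0)
```

The point of the graded value is that an adaptive system can trade off partially satisfied requirements against each other instead of treating every deviation as outright failure.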
Abstract:
Requirements awareness should help optimize requirements satisfaction when factors that were uncertain at design time are resolved at runtime. We use the notion of claims to model assumptions that cannot be verified with confidence at design time. By monitoring claims at runtime, their veracity can be tested. If falsified, the effect of claim negation can be propagated to the system's goal model and an alternative means of goal realization selected automatically, allowing the dynamic adaptation of the system to the prevailing environmental context. © 2011 IEEE.
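The claim-monitoring idea can be sketched as follows: each alternative means of goal realization is guarded by a claim, and runtime evidence that falsifies the claim removes that alternative from consideration. The names and the utility-based selection rule are illustrative assumptions, not the authors' implementation:

```python
# Toy sketch of claim monitoring and dynamic goal re-realization.
# Strategy names, evidence keys, and utilities are hypothetical.
class Strategy:
    def __init__(self, name, utility, claim_holds):
        self.name = name
        self.utility = utility
        self.claim_holds = claim_holds  # callable: evidence dict -> bool

def select_strategy(strategies, evidence):
    """Keep only strategies whose guarding claim survives the runtime
    evidence, then pick the highest-utility survivor (None if none)."""
    viable = [s for s in strategies if s.claim_holds(evidence)]
    return max(viable, key=lambda s: s.utility).name if viable else None

strategies = [
    Strategy("gps_tracking", utility=0.9,
             claim_holds=lambda e: e.get("outdoors", False)),
    Strategy("wifi_tracking", utility=0.6,
             claim_holds=lambda e: True),  # fallback with no risky claim
]
```

When monitoring reports `outdoors: False`, the GPS claim is falsified and the selection automatically falls back to the lower-utility alternative, mirroring the propagation of claim negation through the goal model.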
Abstract:
Self-adaptive systems have the capability to autonomously modify their behaviour at run-time in response to changes in their environment. Self-adaptation is particularly necessary for applications that must run continuously, even under adverse conditions and changing requirements; sample domains include automotive systems, telecommunications, and environmental monitoring systems. While a few techniques have been developed to support the monitoring and analysis of requirements for adaptive systems, limited attention has been paid to the actual creation and specification of requirements of self-adaptive systems. As a result, self-adaptivity is often constructed in an ad-hoc manner. In this paper, we argue that a more rigorous treatment of requirements explicitly relating to self-adaptivity is needed and that, in particular, requirements languages for self-adaptive systems should include explicit constructs for specifying and dealing with the uncertainty inherent in self-adaptive systems. We present RELAX, a new requirements language for self-adaptive systems and illustrate it using examples from the smart home domain. © 2009 IEEE.
Abstract:
In this article we envision the factors and trends that shape the next generation of environmental monitoring systems. One key factor in this respect is the combined effect of end-user needs and the general development of IT services and their availability. Currently, an environmental (monitoring) system is assumed to be reactive: it delivers measurement data and computational results only if the user explicitly asks for them, either by query or by subscription. There is a temptation to automate this by simply pushing data to end-users. This, however, easily leads to an "advertisement strategy", where data is pushed to end-users regardless of their needs. Under this strategy, the sheer amount of received data obfuscates the individual messages; any "automatic" service, regardless of its fitness, overruns a system that requires the user's initiative. The foreseeable problem is that, unless there is some overall management, each new environmental service is going to compete for end-users' attention and thus inadvertently hinder the use of existing services. As the main contribution, we investigate the nature of proactive environmental systems and how they should be designed to avoid the aforementioned problem. We also discuss how semantics, participatory sensing, uncertainty management, and situational awareness link to proactive environmental systems. We illustrate our proposals with some real-life examples.
Abstract:
OpenMI is a widely used standard allowing exchange of data between integrated models, which has mostly been applied to dynamic, deterministic models. Within the FP7 UncertWeb project we are developing mechanisms and tools to support the management of uncertainty in environmental models. In this paper we explore the integration of the UncertWeb framework with OpenMI, to assess the issues that arise when propagating uncertainty in OpenMI model compositions, and the degree of integration possible with UncertWeb tools. In particular we develop an uncertainty-enabled model for a simple Lotka-Volterra system with an interface conforming to the OpenMI standard, exploring uncertainty in the initial predator and prey levels, and the parameters of the model equations. We use the Elicitator tool developed within UncertWeb to identify the initial condition uncertainties, and show how these can be integrated, using UncertML, with simple Monte Carlo propagation mechanisms. The mediators we develop for OpenMI models are generic and produce standard Web services that expose the OpenMI models to a Web based framework. We discuss what further work is needed to allow a more complete system to be developed and show how this might be used practically.
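A minimal sketch of the propagation step described above: sample the uncertain initial prey level from an elicited distribution, integrate the Lotka-Volterra equations, and summarise the resulting spread. The parameter values and the choice of a Gaussian initial-condition distribution are illustrative, not those elicited in the project:

```python
import random

def lotka_volterra(prey0, pred0, alpha, beta, delta, gamma,
                   dt=0.001, steps=5000):
    """Forward-Euler integration of the Lotka-Volterra equations:
    dx/dt = alpha*x - beta*x*y,  dy/dt = delta*x*y - gamma*y."""
    x, y = prey0, pred0
    for _ in range(steps):
        dx = (alpha * x - beta * x * y) * dt
        dy = (delta * x * y - gamma * y) * dt
        x, y = x + dx, y + dy
    return x, y

def monte_carlo(n=200, seed=1):
    """Propagate Gaussian uncertainty in the initial prey level through
    the model; returns the mean and variance of the final prey level.
    All numerical values here are illustrative assumptions."""
    rng = random.Random(seed)
    finals = [lotka_volterra(rng.gauss(10.0, 1.0), 5.0,
                             1.1, 0.4, 0.1, 0.4)[0] for _ in range(n)]
    mean = sum(finals) / n
    var = sum((f - mean) ** 2 for f in finals) / (n - 1)
    return mean, var
```

In the OpenMI composition, each sample corresponds to one run of the wrapped model component, and the resulting sample set would be encoded in UncertML rather than reduced to a mean and variance.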