35 results for Scenario Programming, Markup Language, End User Programming


Relevance: 100.00%

Abstract:

Models are central tools for modern scientists and decision makers, and there are many existing frameworks to support their creation, execution and composition. Many frameworks are based on proprietary interfaces and do not lend themselves to the integration of models from diverse disciplines. Web-based systems, or systems based on web services, such as Taverna and Kepler, allow composition of models based on standard web service technologies. At the same time, the Open Geospatial Consortium has been developing its own service stack, which includes the Web Processing Service, designed to facilitate the execution of geospatial processing, including complex environmental models. The current Open Geospatial Consortium service stack employs Extensible Markup Language as its default data exchange standard, and widely used encodings such as JavaScript Object Notation can often only be used when embedded in Extensible Markup Language. Similarly, the Web Processing Service standard has yet to engage successfully with the well-supported technologies of Simple Object Access Protocol and Web Services Description Language. In this paper we propose a pure Simple Object Access Protocol/Web Services Description Language processing service which addresses some of the issues with the Web Processing Service specification and brings us closer to achieving a degree of interoperability between geospatial models, and thus to realising the vision of a useful 'model web'.
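To make the contrast concrete, the sketch below shows what consuming such a service could look like from the client side, using the Python `zeep` library to read a WSDL contract. The endpoint URL, the `ExecuteProcess` operation and its parameters are hypothetical illustrations, not the service defined in the paper.

```python
# A minimal sketch, assuming a hypothetical WSDL endpoint and operation.
from zeep import Client

# The WSDL document gives the client a machine-readable contract:
# operation names and typed inputs/outputs, which is precisely the
# interoperability benefit argued for above.
client = Client("http://example.org/processing-service?wsdl")

# Invoke a (hypothetical) geospatial process with typed parameters.
result = client.service.ExecuteProcess(
    processId="interpolation",
    observations="http://example.org/data/observations.xml",
)
print(result)
```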

Relevance: 100.00%

Abstract:

The use of spreadsheets has become routine in all aspects of business, with usage growing across a range of functional areas and a continuing trend towards end-user spreadsheet development. However, several studies have raised concerns about the accuracy of spreadsheet models in general, and of end-user developed applications in particular, raising the risk element for users. High error rates have been discovered, even though the users/developers were confident that their spreadsheets were correct. The lack of an easy-to-use, context-sensitive validation methodology has been highlighted as a significant contributor to these accuracy problems. This paper describes experiences in using a practical, contingency factor-based methodology for validating spreadsheet-based decision support systems (DSS). Because the end user is often both the system developer and a stakeholder, the contingency factor-based validation methodology may need to be used in more than one way. The methodology can also be extended to encompass other DSS.
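As an illustration of what even a lightweight automated check can contribute to spreadsheet validation, the sketch below scans a workbook for formula cells containing hard-coded numeric constants, a common end-user error pattern. It uses the `openpyxl` library; the file name and the heuristic itself are illustrative assumptions, not part of the paper's methodology.

```python
# Illustrative sketch only -- not the paper's contingency factor-based
# methodology. Flags formula cells that embed numeric literals, a common
# source of hidden spreadsheet errors.
import re
from openpyxl import load_workbook

# Matches digits that do not belong to a cell reference such as A12 or $A$1.
CONSTANT = re.compile(r"(?<![A-Za-z0-9$])\d+(\.\d+)?")

wb = load_workbook("model.xlsx")            # hypothetical workbook
for ws in wb.worksheets:
    for row in ws.iter_rows():
        for cell in row:
            # data_type "f" marks a cell holding a formula string
            if cell.data_type == "f" and CONSTANT.search(str(cell.value)):
                print(f"{ws.title}!{cell.coordinate}: constant in {cell.value}")
```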

Relevance: 100.00%

Abstract:

In this article we envision the factors and trends that will shape the next generation of environmental monitoring systems. One key factor in this respect is the combined effect of end-user needs and the general development of IT services and their availability. Currently, an environmental (monitoring) system is assumed to be reactive: it delivers measurement data and computational results only if the user explicitly asks for them, either by query or by subscription. There is a temptation to automate this by simply pushing data to end users. This, however, easily leads to an "advertisement strategy", where data is pushed to end users regardless of their needs. Under this strategy, the sheer amount of received data obfuscates the individual messages; any "automatic" service, regardless of its fitness, overruns a system that requires the user's initiative. The foreseeable problem is that, without overall management, each new environmental service will compete for end-users' attention and thus inadvertently hinder the use of existing services. As the main contribution, we investigate the nature of proactive environmental systems and how they should be designed to avoid this problem. We also discuss how semantics, participatory sensing, uncertainty management, and situational awareness link to proactive environmental systems. We illustrate our proposals with some real-life examples.
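The sketch below makes the proactive/advertising distinction concrete: a measurement is pushed only when its estimated relevance to a user's interest profile clears a threshold, and otherwise remains silently queryable. The relevance model, topic names and threshold are deliberately naive placeholders, not the article's design.

```python
# Minimal sketch of "proactive, not advertising": push only what clears
# a per-user relevance threshold; everything else stays query-only.
from dataclasses import dataclass

@dataclass
class Measurement:
    topic: str
    value: float

def relevance(m: Measurement, interests: dict[str, float]) -> float:
    # interests maps topics to weights derived from the user's declared
    # needs or observed behaviour; unknown topics score zero.
    return interests.get(m.topic, 0.0)

def maybe_push(m: Measurement, interests: dict[str, float], threshold: float = 0.7):
    if relevance(m, interests) >= threshold:
        print(f"PUSH: {m.topic} = {m.value}")   # notify the end user
    # below threshold: stored and queryable, but never pushed

profile = {"air_quality.pm25": 0.9}             # hypothetical interest profile
maybe_push(Measurement("air_quality.pm25", 81.0), profile)   # pushed
maybe_push(Measurement("river.level", 2.1), profile)         # held back
```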

Relevance: 100.00%

Abstract:

The Semantic Web relies on carefully structured, well-defined data to allow machines to communicate with and understand one another. In many domains (e.g. geospatial) the data being described contains some uncertainty, often due to incomplete knowledge; meaningful processing of this data requires these uncertainties to be carefully analysed and integrated into the processing chain. Currently, within the Semantic Web there is no standard mechanism for the interoperable description and exchange of uncertain information, which renders the automated processing of such information implausible, particularly where error must be considered and captured as it propagates through a processing sequence. In particular, we adopt a Bayesian perspective and focus on the case where the inputs and outputs are naturally treated as random variables. This paper discusses a solution to the problem in the form of the Uncertainty Markup Language (UncertML). UncertML is a conceptual model, realised as an XML schema, that allows uncertainty to be quantified in a variety of ways, i.e. realisations, statistics and probability distributions. UncertML is based on a soft-typed XML schema design that provides a generic framework from which any statistic or distribution may be created. Making extensive use of Geography Markup Language (GML) dictionaries, UncertML provides a collection of definitions for common uncertainty types. Containing both written descriptions and mathematical functions encoded as MathML, the definitions within these dictionaries provide a robust mechanism for defining any statistic or distribution and can easily be extended. Uniform Resource Identifiers (URIs) introduce semantics to the soft-typed elements by linking to these dictionary definitions. The INTAMAP (INTeroperability and Automated MAPping) project provides a use case for UncertML. This paper demonstrates how observation errors can be quantified using UncertML and wrapped within an Observations & Measurements (O&M) Observation. The interpolation service uses the information within these observations to influence the prediction outcome. The output uncertainties may be encoded in a variety of UncertML types, e.g. a series of marginal Gaussian distributions, a set of statistics such as the first three marginal moments, or a set of realisations from a Monte Carlo treatment. Quantifying and propagating uncertainty in this way allows such interpolation results to be consumed by other services. This could form part of a risk management chain or a decision support system, and ultimately paves the way for complex data processing chains in the Semantic Web.
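The fragment below gives a flavour of the approach, building a Gaussian-distribution element carrying a dictionary-reference URI with Python's standard library. The element names, attribute and URI are simplified stand-ins for illustration and should not be read as the normative UncertML encoding.

```python
# Illustrative only: an UncertML-style soft-typed distribution element.
import xml.etree.ElementTree as ET

def gaussian_element(mean: float, variance: float) -> ET.Element:
    dist = ET.Element("GaussianDistribution")
    ET.SubElement(dist, "mean").text = str(mean)
    ET.SubElement(dist, "variance").text = str(variance)
    # In the soft-typed design described above, a URI links the element
    # to a GML dictionary definition of the distribution type.
    dist.set("definition", "http://example.org/dictionaries/gaussian")
    return dist

print(ET.tostring(gaussian_element(12.3, 0.8), encoding="unicode"))
```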

Relevance: 100.00%

Abstract:

Over recent years, hub-and-spoke distribution techniques have attracted widespread research attention. Despite a growing body of literature in this area, there is less focus on the spoke terminal as a key component in the overall service received by the end user. The current literature is geared towards the bulk optimization of freight units rather than towards the more discrete, individualistic profile characteristics of shared-user less-than-truckload (LTL) freight. In this paper, a literature review examines the role hub-and-spoke systems play in meeting multi-profile customer demands, particularly in developing sectors with more sophisticated needs, such as retail. The paper also looks at the use of simulation technology as a suitable tool for analyzing spoke-terminal operations within developing hub-and-spoke systems.
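As a hint of what such simulation looks like in practice, here is a toy discrete-event model (using the SimPy library) in which LTL consignments with varied handling times compete for a limited number of dock doors at a spoke terminal. All parameters and the model structure are invented for illustration.

```python
# A toy spoke-terminal model; numbers and structure are assumptions.
import random
import simpy

def consignment(env, name, docks):
    arrived = env.now
    with docks.request() as req:
        yield req                                   # queue for a dock door
        wait = env.now - arrived
        yield env.timeout(random.uniform(5, 30))    # profile-dependent handling
    print(f"{name}: queued {wait:.1f} min, cleared at {env.now:.1f} min")

def arrivals(env, docks, n=10, mean_gap=8.0):
    for i in range(n):
        env.process(consignment(env, f"LTL-{i}", docks))
        yield env.timeout(random.expovariate(1 / mean_gap))

env = simpy.Environment()
docks = simpy.Resource(env, capacity=2)             # two dock doors
env.process(arrivals(env, docks))
env.run()
```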

Relevance: 100.00%

Abstract:

We introduce ReDites, a system for real-time event detection, tracking, monitoring and visualisation. It is designed to assist information analysts in understanding and exploring complex events as they unfold in the world. Events are automatically detected from the Twitter stream; those categorised as security-relevant are then tracked, geolocated, summarised and visualised for the end user. Furthermore, the system tracks changes in emotion over the course of an event, signalling possible flashpoints or abatement. We demonstrate the capabilities of ReDites using an extended use case from the September 2013 Westgate shooting incident. Through an evaluation of system latencies, we also show that enriched events are made available for users to explore within seconds of the event occurring.
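The stub below sketches the pipeline shape the abstract describes (detect, filter for security relevance, summarise). The grouping heuristic, lexicon and one-line summariser are stand-ins; the system's actual components are not reproduced here.

```python
# Stand-in pipeline sketch; ReDites' real detection, geolocation and
# summarisation components are far more sophisticated.
SECURITY_TERMS = {"shooting", "attack", "explosion"}     # toy lexicon

def detect_events(tweets):
    """Group tweets sharing a hashtag into candidate events (naive)."""
    events = {}
    for t in tweets:
        for tag in (w for w in t.split() if w.startswith("#")):
            events.setdefault(tag, []).append(t)
    return events

def security_relevant(texts):
    return any(term in t.lower() for t in texts for term in SECURITY_TERMS)

stream = [
    "Explosion reported near the mall #westgate",
    "Great coffee this morning #latte",
    "Police converging on the scene #westgate",
]
for tag, texts in detect_events(stream).items():
    if security_relevant(texts):
        print(f"TRACK {tag}: {len(texts)} tweets | {texts[0][:60]}")
```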

Relevance: 100.00%

Abstract:

The world is connected by a core network of long-haul optical communication systems that link countries and continents, enabling long-distance phone calls, data-center communications, and the Internet. The demands on information rates have been constantly driven up by applications such as online gaming, high-definition video, and cloud computing. All over the world, end-user connection speeds are being increased by replacing conventional digital subscriber line (DSL) and asymmetric DSL (ADSL) with fiber to the home. Clearly, the capacity of the core network must also increase proportionally.

Relevance: 100.00%

Abstract:

Energy service companies (ESCOs) are faced with a range of challenges and opportunities associated with the rapidly changing and flexible requirements of energy customers (end users) and with rapid improvements in technologies associated with energy and ICT. These opportunities for innovation include better prediction of energy demand, transparency of data to the end user, flexible and time-dependent energy pricing, and a range of novel finance models. The liberalisation of energy markets across the world has led to a very small price differential between suppliers on the unit cost of energy. Energy companies are therefore looking to add additional layers of value using service models borrowed from the manufacturing industry. This opens a range of new product and service offerings to energy markets and consumers, and has implications for the overall efficiency, utility and price of energy provision.

Relevance: 100.00%

Abstract:

A real-time adaptive resource allocation algorithm that considers the end user's Quality of Experience (QoE) in the context of video streaming services is presented in this work. An objective no-reference quality metric, namely Pause Intensity (PI), is used to control the priority of resource allocation to users during the scheduling process. An online adjustment adaptively sets the scheduler's parameter to maintain a desired trade-off between fairness and efficiency. The correlation between the data rates (i.e. video code rates) demanded by users and the data rates allocated by the scheduler is taken into account as well. The final allocated rates are determined based on the channel status, the distribution of PI values among users, and the scheduling policy adopted. Furthermore, since a user's capability varies as environmental conditions change, the rate adaptation mechanism for video streaming is considered and its interaction with the scheduling process under the same PI metric is studied. The feasibility of implementing this algorithm is examined, and the results are compared with the most common existing scheduling methods.
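A toy version of the core idea follows: users whose playback is stalling more (higher PI) receive a larger share of the available rate, capped at the rate their video actually demands. The weighting rule and all numbers are illustrative assumptions, not the paper's algorithm.

```python
# Illustrative PI-weighted allocation; not the paper's actual scheduler.
def allocate(capacity, users):
    """users: dicts with 'pi' (pause intensity, 0..1, higher = worse QoE),
    'demand' (video code rate) and 'channel' (relative quality, 0..1)."""
    weights = [u["pi"] * u["channel"] for u in users]
    total = sum(weights) or 1.0
    # Proportional split by weight, never exceeding the demanded code rate.
    return [min(capacity * w / total, u["demand"])
            for u, w in zip(users, weights)]

users = [
    {"pi": 0.8, "demand": 2.0, "channel": 0.9},   # stalling badly
    {"pi": 0.1, "demand": 4.0, "channel": 0.7},   # smooth playback
]
print(allocate(5.0, users))   # e.g. [2.0, 0.44...]
```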

Relevance: 100.00%

Abstract:

A significant body of research investigates the acceptance of computer-based support (including devices and applications ranging from e-mail to specialized clinical systems, like PACS) among clinicians. Much of this research has focused on measuring the usability of systems using characteristics related to the clarity of interactions and ease of use. We propose that an important attribute of any clinical computer-based support tool is the intrinsic motivation of the end user (i.e. a clinician) to use the system in practice. In this paper we present the results of a study that investigated the factors motivating medical doctors (MDs) to use computer-based support. Our results demonstrate that MDs value computer-based support and find it useful and easy to use; however, uptake is hindered by perceived incompetence, and by the pressure and tension associated with using technology.

Relevance: 100.00%

Abstract:

Mobile communication and networking infrastructures play an important role in the development of smart cities, supporting the real-time information exchange and management required by modern urbanization. Mobile WiFi devices that help offload data traffic from the macro-cell base station and serve end users within a closer range can significantly improve the connectivity of wireless communications between essential components of a city, including infrastructural and human devices. However, this offloading function, achieved through interworking between LTE and WiFi systems, changes the pattern of resource distribution operated by the base station. In this paper, a resource allocation scheme is proposed to ensure stable service coverage and end-user quality of experience (QoE) when offloading takes place in a macro-cell environment. In this scheme, a rate redistribution algorithm is derived to form a parametric scheduler that meets the required levels of efficiency and fairness, guided by a no-reference quality assessment metric. We show that the performance of resource allocation can be regulated by this scheduler without affecting the service coverage offered by the WLAN access point. The performance of different interworking scenarios and macro-cell scheduling policies is also compared.
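One standard way to realise a tunable efficiency/fairness trade-off with a single parameter is the alpha-fair family, sketched below as a generic mechanism; the paper's specific rate-redistribution algorithm and QoE metric are not reproduced here.

```python
# Generic parametric trade-off via alpha-fair airtime shares.
def alpha_fair_time_shares(rates, alpha):
    """Airtime fraction per user on a shared channel (alpha > 0).

    alpha -> 0 : throughput-greedy (best channel dominates)
    alpha == 1 : proportional fairness (equal airtime)
    alpha large: approaches max-min fairness (equal achieved rates)
    """
    w = [r ** ((1 - alpha) / alpha) for r in rates]
    s = sum(w)
    return [x / s for x in w]

rates = [10.0, 4.0, 1.0]                      # achievable link rates, Mbit/s
for a in (0.2, 1.0, 5.0):
    shares = alpha_fair_time_shares(rates, a)
    print(a, [round(t * r, 2) for t, r in zip(shares, rates)])  # achieved rates
```

Sweeping the parameter shows the scheduler sliding from favouring the best channel towards equalising the achieved rates, which is the kind of regulation a parametric scheduler exposes.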

Relevance: 100.00%

Abstract:

eHabitat is a Web Processing Service (WPS) designed to compute the likelihood of finding ecosystems with equal properties. Inputs to the WPS, typically thematic geospatial "layers", can be discovered using standardised catalogues, and the outputs tailored to specific end-user needs. Because these layers can range from geophysical data captured through remote sensing to socio-economic indicators, eHabitat is exposed to a broad range of types and levels of uncertainty. Potentially chained to other services, to perform ecological forecasting for example, eHabitat would be one more component propagating uncertainties along a potentially long chain of model services. This integration of complex resources increases the challenge of dealing with uncertainty. For such a system, as envisaged by initiatives such as the "Model Web" of the Group on Earth Observations, to be used for policy or decision making, users must be provided with information on the quality of the outputs, since all system components will be subject to uncertainty. UncertWeb will create the Uncertainty-Enabled Model Web by promoting interoperability between data and models with quantified uncertainty, building on existing open, international standards. The objective of this paper is to illustrate a few key ideas behind UncertWeb, using eHabitat to discuss the main types of uncertainty the WPS has to deal with and to present the benefits of using the UncertWeb framework.
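The essence of uncertainty-enabled chaining can be sketched in a few lines of Monte Carlo: each input layer becomes a distribution rather than a single value, samples are pushed through a model, and the output is again a distribution the next service can consume. The habitat-similarity function and all parameters below are invented stand-ins, not eHabitat's model.

```python
# Minimal Monte Carlo propagation through a toy model chain.
import random
import statistics

def habitat_similarity(rainfall, temperature):   # toy stand-in for eHabitat
    return max(0.0, 1.0 - abs(rainfall - 800) / 800 - abs(temperature - 20) / 40)

samples = [
    habitat_similarity(random.gauss(800, 120),   # rainfall: mean 800, sd 120
                       random.gauss(21, 1.5))    # temperature: mean 21, sd 1.5
    for _ in range(10_000)
]
# The output is itself a distribution, summarised here by two moments.
print(f"mean={statistics.mean(samples):.3f}  sd={statistics.stdev(samples):.3f}")
```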

Relevance: 100.00%

Abstract:

The quest for renewable energy sources has led to growing research attention on organic photovoltaics (OPVs), a promising alternative to fossil fuels, since these devices have low manufacturing costs and attractive end-user qualities, such as ease of installation and maintenance. Wide application of OPVs is limited mainly by device lifetime. With the development of new encapsulation materials, some degradation factors, such as water and oxygen ingress, can almost be excluded, whereas thermal degradation of the devices remains a major issue. Two aspects have to be addressed to solve the problem of thermal instability: bulk effects in the photoactive layer, and interfacial effects between the photoactive layer and the charge-transporting layers. In this work, the interface between the photoactive layer and the electron-transporting zinc oxide (ZnO) in devices with an inverted architecture was engineered by introducing polymeric interlayers based on zinc-binding ligands, such as 3,4-dihydroxybenzene and 8-hydroxyquinoline. A cross-linkable layer of poly(3,4-dimethoxystyrene) and its fullerene derivative were also studied. First, controlled reversible addition-fragmentation chain transfer (RAFT) polymerisation was employed to achieve well-defined polymers in a range of molar masses, all bearing a chain-end functionality for further modifications. The resulting polymers were fully characterised, including their thermal and optical properties, and introduced as interlayers to study their effect on initial device performance and thermal stability. Poly(3,4-dihydroxystyrene) and its fullerene derivative were found unsuitable for application in devices, as they increased the work function of ZnO and created a barrier for electron extraction. On the other hand, their parent polymer, poly(3,4-dimethoxystyrene), and its fullerene derivative, upon cross-linking, resulted in enhanced efficiency and stability of devices compared to the control. Polymers based on the 8-hydroxyquinoline ligand had a negative effect on the initial performance of the devices but increased the lifetime of the cells under accelerated thermal stress. Comprehensive studies of the key mechanisms determining efficiency, such as charge generation and extraction, were performed using time-resolved electrical and spectroscopic techniques, in order to understand in detail the effect of the interlayers on device performance. The results obtained allow deeper insight into the degradation mechanisms that limit device lifetime and prompt the design of better materials for interface stabilisation.

Relevance: 100.00%

Abstract:

This thesis addressed the problem of risk analysis in mental healthcare, with respect to the GRiST project at Aston University. That project provides a risk-screening tool based on the knowledge of 46 experts, captured as mind maps that describe relationships between risks and patterns of behavioural cues. Mind mapping, though, fails to impose control over content and is not considered a formal representation of knowledge. In contrast, this thesis treated GRiST's mind maps as a rich knowledge base in need of refinement; that process drew on existing techniques for designing databases and knowledge bases. Identifying well-defined mind map concepts, though, was hindered by spelling mistakes, and by ambiguity and lack of coverage in the tools used for researching words. A novel use of the Edit Distance overcame those problems by assessing similarities between mind map texts, and between spelling mistakes and suggested corrections. That algorithm further identified stems, the shortest text strings found in related word forms. As opposed to existing approaches' reliance on built-in linguistic knowledge, this thesis devised a novel, more flexible text-based technique. An additional tool, Correspondence Analysis, found patterns in word usage that allowed machines to determine the likely intended meanings of ambiguous words. Correspondence Analysis further produced clusters of related concepts, which in turn drove the automatic generation of novel mind maps. Such maps underpinned adjuncts to the mind mapping software used by GRiST; one such new facility generated novel mind maps to reflect the collected expert knowledge on any specified concept. Mind maps from GRiST are stored as XML, which suggested storing them in an XML database. In fact, the entire approach here is "XML-centric", in that all stages rely on XML as far as possible. An XML-based query language allows users to retrieve information from the mind map knowledge base. The approach, it was concluded, will prove valuable to mind mapping in general, and to detecting patterns in any type of digital information.
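For reference, the classic dynamic-programming edit distance at the heart of the spelling-correction step is shown below, applied to matching a misspelt mind-map term against candidate corrections. The vocabulary and the misspelt term are invented examples; the thesis's full pipeline (stemming, Correspondence Analysis) is not reproduced.

```python
# Standard Levenshtein edit distance via two-row dynamic programming.
def edit_distance(a: str, b: str) -> int:
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,               # deletion
                           cur[j - 1] + 1,            # insertion
                           prev[j - 1] + (ca != cb))) # substitution
        prev = cur
    return prev[-1]

vocabulary = ["suicidal", "self-harm", "agitation"]   # toy concept list
misspelt = "suicdal"
print(min(vocabulary, key=lambda w: edit_distance(misspelt, w)))  # -> suicidal
```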