954 results for Context-aware applications
Abstract:
The article presents a new method for the automatic generation of help in software. Help generation is realized within the framework of a tool for the development and automatic generation of ontology-based user interfaces. The principal features of the approach are support for context-sensitive help, automatic generation of help from a task project, and an extensible help-generation system.
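A minimal sketch of how context-sensitive help lookup might work in such a tool, assuming a hypothetical mapping from UI context identifiers to generated help topics (the class, method and context names below are illustrative, not taken from the article):

```python
# Illustrative sketch only: context-sensitive help lookup keyed by the UI
# element the user is currently focused on. All names are hypothetical.
from dataclasses import dataclass, field


@dataclass
class HelpSystem:
    # help topics generated per UI context (e.g. from a task/ontology model)
    topics: dict = field(default_factory=dict)

    def register(self, context_id: str, text: str) -> None:
        """Store a generated help topic for a UI context."""
        self.topics[context_id] = text

    def show_help(self, context_id: str) -> str:
        """Return help for the active context, with a generic fallback."""
        return self.topics.get(context_id, "No specific help for this element.")


help_system = HelpSystem()
help_system.register("order_form.date_field", "Enter the delivery date (DD.MM.YYYY).")
print(help_system.show_help("order_form.date_field"))
```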
Abstract:
This work presents a real-time adaptive resource allocation algorithm that considers the end user's Quality of Experience (QoE) in the context of video streaming services. An objective no-reference quality metric, namely Pause Intensity (PI), is used to control the priority of resource allocation to users during the scheduling process. An online adjustment is introduced to adaptively set the scheduler's parameter and maintain a desired trade-off between fairness and efficiency. The correlation between the data rates (i.e. video code rates) demanded by users and the data rates allocated by the scheduler is also taken into account. The final allocated rates are determined based on the channel status, the distribution of PI values among users, and the scheduling policy adopted. Furthermore, since the user's capability varies as environmental conditions change, the rate adaptation mechanism for video streaming is considered and its interaction with the scheduling process under the same PI metric is studied. The feasibility of implementing this algorithm is examined and the result is compared with the most commonly used scheduling methods.
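A minimal sketch of how a PI-driven scheduler might prioritize users, assuming a hypothetical priority that weights channel rate by Pause Intensity with a tunable exponent (the weighting formula and the alpha parameter are assumptions for illustration, not the paper's algorithm):

```python
# Illustrative sketch: allocate the next resource block to the user whose
# priority, combining channel quality and Pause Intensity (PI), is highest.
def schedule(users, alpha=1.0):
    """users: list of dicts with 'id', 'channel_rate', 'pause_intensity' (0..1).

    alpha trades off efficiency (channel rate) against QoE fairness
    (favoring users with high PI, i.e. poor playback experience).
    """
    def priority(u):
        return u["channel_rate"] * (u["pause_intensity"] ** alpha)

    return max(users, key=priority)["id"]


users = [
    {"id": "A", "channel_rate": 5.0, "pause_intensity": 0.1},
    {"id": "B", "channel_rate": 2.0, "pause_intensity": 0.8},
]
print(schedule(users, alpha=1.5))  # favors user B despite the lower channel rate
```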
Abstract:
One of the problems in solving AI tasks by neurocomputing methods is the considerable training time. This problem is especially apparent when high quality must be reached in forecast reliability or pattern recognition. Some formalised ways of increasing networks' training speed without losing precision are proposed here. The proposed approaches are based on the Sufficiency Principle, which is a formal representation of the aim of a concrete task and the conditions (limitations) of its solving [1]. This is a development of the concept that extends the formal description of aims to the context of such AI tasks as classification, pattern recognition, estimation, etc.
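One way to read a sufficiency-style stopping criterion in code is to stop training as soon as the task's stated aim (here, a target accuracy) is met, rather than training to full convergence; the toy perceptron below is purely illustrative and is not the formalism of the cited work:

```python
import random

# Illustrative sketch only: training stops once a target accuracy (the task's
# stated aim) is reached, instead of running to full convergence. The toy
# perceptron and data are made up for this example.
random.seed(0)
data = [((x1, x2), int(x1 + x2 > 1.0))
        for x1, x2 in ((random.random(), random.random()) for _ in range(200))]
w, b, lr = [0.0, 0.0], 0.0, 0.1

def accuracy():
    return sum((w[0]*x1 + w[1]*x2 + b > 0) == bool(y) for (x1, x2), y in data) / len(data)

target_accuracy, epoch = 0.95, 0
while accuracy() < target_accuracy and epoch < 1000:   # sufficiency check: stop early
    for (x1, x2), y in data:
        pred = int(w[0]*x1 + w[1]*x2 + b > 0)
        err = y - pred
        w[0] += lr * err * x1; w[1] += lr * err * x2; b += lr * err
    epoch += 1

print(f"stopped after {epoch} epochs at accuracy {accuracy():.2f}")
```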
Abstract:
Formal grammars can be used to describe complex repeatable structures such as DNA sequences. In this paper, we describe the structural composition of DNA sequences using a context-free stochastic L-grammar. L-grammars are a special class of parallel grammars that can model the growth of living organisms, e.g. plant development, and the morphology of a variety of organisms. We believe that parallel grammars can also be used for modeling genetic mechanisms and sequences such as promoters. Promoters are short regulatory DNA sequences located upstream of a gene. Detection of promoters in DNA sequences is important for successful gene prediction. Promoters can be recognized by certain patterns that are conserved within a species, but there are many exceptions, which makes promoter recognition a complex problem. We replace the problem of promoter recognition by the induction of context-free stochastic L-grammar rules, which are later used for the structural analysis of promoter sequences. L-grammar rules are derived automatically from the drosophila and vertebrate promoter datasets using a genetic programming technique, and their fitness is evaluated using a Support Vector Machine (SVM) classifier. The artificial promoter sequences generated using the derived L-grammar rules are analyzed and compared with natural promoter sequences.
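A minimal sketch of how a stochastic L-grammar derivation proceeds, with parallel rewriting and probabilistic rule choice; the toy rules over the DNA alphabet are illustrative and are not the induced promoter grammar:

```python
import random

# Illustrative stochastic L-grammar: every symbol in the word is rewritten in
# parallel at each step, choosing among its productions by their probabilities.
RULES = {
    "S": [(0.6, ["T", "A", "T", "A"]), (0.4, ["G", "C", "S"])],
    # terminals rewrite to themselves
    "A": [(1.0, ["A"])], "C": [(1.0, ["C"])],
    "G": [(1.0, ["G"])], "T": [(1.0, ["T"])],
}

def rewrite(symbol):
    r, acc = random.random(), 0.0
    for prob, rhs in RULES[symbol]:
        acc += prob
        if r <= acc:
            return rhs
    return [symbol]

def derive(axiom, steps):
    word = list(axiom)
    for _ in range(steps):
        word = [s for sym in word for s in rewrite(sym)]  # parallel rewriting
    return "".join(word)

print(derive("S", steps=4))  # e.g. "GCTATA" or "TATA", depending on rule choices
```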
Abstract:
The development of methods and tools for modeling human reasoning (common sense reasoning) by analogy in intelligent decision support systems is considered. Special attention is drawn to modeling reasoning by structural analogy, taking the context into account. The possibility of estimating the obtained analogies with regard to the context is also studied. This work was supported by RFBR.
Abstract:
In the area of Software Engineering, traceability is defined as the capability to track requirements, their evolution and transformation in different components related to the engineering process, as well as the management of the relationships between those components. However, the current state of the art in traceability does not take into account many of the elements that compose a product, especially those created before requirements arise, nor the appropriate use of traceability to manage the underlying knowledge so that it can be handled by other organizational or engineering processes. In this work we describe the architecture of a reference model that establishes a set of definitions, processes and models which allow proper management of traceability and further uses of it, in a wider context than that of software development.
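A minimal sketch of how traceability links between engineering artifacts (including those created before requirements arise) might be represented and queried; the artifact and link types are hypothetical examples, not the reference model itself:

```python
# Illustrative sketch: a traceability link relates two artifacts (not necessarily
# requirements) and records the kind of relationship between them.
from dataclasses import dataclass

@dataclass(frozen=True)
class Artifact:
    id: str
    kind: str        # e.g. "business-goal", "requirement", "design", "test"

@dataclass(frozen=True)
class TraceLink:
    source: Artifact
    target: Artifact
    relation: str    # e.g. "derives", "satisfies", "verifies"

goal = Artifact("G1", "business-goal")   # an element created before requirements arise
req = Artifact("R12", "requirement")
links = [TraceLink(goal, req, "derives")]

# Query: which requirements are derived from goal G1?
print([l.target.id for l in links if l.source.id == "G1" and l.relation == "derives"])
```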
Abstract:
METPEX is a 3-year FP7 project which aims to develop a pan-European tool to measure the quality of the passenger's experience of multimodal transport. Initial work has led to the development of a comprehensive set of variables relating to different passenger groups, forms of transport and journey stages. This paper addresses the main challenges in transforming the variables into usable, accessible computer-based tools allowing for the real-time collection of information across multiple journey stages in different EU countries. Non-computer-based measurement instruments will be used to gather information from those who may not have or be familiar with mobile technology. Smartphone-based measurement instruments will also be used, hosted in two applications. The mobile applications need to be easy to use, configurable and adaptable according to the context of use. They should also be inherently interesting and rewarding for the participant, whilst allowing for the collection of high-quality, valid and reliable data from all journey types and stages (from planning, through entry into and egress from different transport modes, travel on public and personal vehicles, to support of active forms of transport such as cycling and walking). During all phases of data collection and processing, the privacy of the participant is respected and ensured. © 2014 Springer International Publishing.
Abstract:
Uncertainty text detection is important to many social-media-based applications, since more and more users use social media platforms (e.g., Twitter, Facebook) as an information source and produce or derive interpretations based on them. However, existing uncertainty cues are ineffective in the social media context because of its specific characteristics. In this paper, we propose a variant annotation scheme for uncertainty identification and construct the first uncertainty corpus based on tweets. We then conduct experiments on the generated tweet corpus to study the effectiveness of different types of features for uncertainty text identification. © 2013 Association for Computational Linguistics.
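A minimal sketch of cue-lexicon feature extraction for uncertainty detection in tweets; the tiny cue list and feature set are illustrative, not the annotation scheme or features used in the paper:

```python
import re

# Illustrative sketch: extract simple cue-based features from a tweet for an
# uncertainty classifier. The cue list and features are toy examples only.
UNCERTAINTY_CUES = {"maybe", "might", "reportedly", "allegedly", "rumor", "possibly"}

def uncertainty_features(tweet: str) -> dict:
    tokens = re.findall(r"[a-z']+", tweet.lower())
    cues = [t for t in tokens if t in UNCERTAINTY_CUES]
    return {
        "cue_count": len(cues),
        "has_cue": bool(cues),
        "has_question_mark": "?" in tweet,            # social-media-specific signal
        "has_retweet_marker": tweet.lower().startswith("rt "),
    }

print(uncertainty_features("RT @user: new phone might be announced tomorrow?"))
```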
Abstract:
Sustainability has become a watchword and guiding principle for modern society, and with it has come a growing appreciation that anthropogenic 'waste', in all its manifold forms, can offer a valuable source of energy, construction materials, chemicals and high-value functional products. In the context of chemical transformations, waste materials not only provide alternative renewable feedstocks, but also a resource from which to create catalysts. Such waste-derived heterogeneous catalysts serve to improve the overall energy and atom efficiency of existing and novel chemical processes. This review outlines key chemical transformations for which waste-derived heterogeneous catalysts have been developed, spanning biomass conversion to environmental remediation, and their benefits and disadvantages relative to conventional catalytic technologies.