855 results for Scenario Programming, Markup Language, End User Programming


Relevance:

100.00%

Publisher:

Abstract:

This PhD thesis contributes to the problem of resource and service discovery in the context of the composable web. In the current web, mashup technologies allow developers to reuse services and contents to build new web applications. However, developers face a problem of information flood when searching for appropriate services or resources to combine. To contribute to overcoming this problem, a framework is defined for the discovery of services and resources. In this framework, three levels are defined at which discovery is performed: the content, service and agent levels. The content level involves the information available in web resources. The web follows the Representational State Transfer (REST) architectural style, in which resources are returned as representations from servers to clients. These representations usually employ the HyperText Markup Language (HTML), which, along with Cascading Style Sheets (CSS), describes the markup employed to render representations in a web browser. Although the use of Semantic Web standards such as the Resource Description Framework (RDF) makes this architecture suitable for automatic processes to use the information present in web resources, these standards are too often not employed, so automation must rely on processing HTML. This process, often referred to as screen scraping in the literature, constitutes content discovery in the proposed framework. At this level, discovery rules indicate how the different pieces of data in resources' representations are mapped onto semantic entities. By processing discovery rules on web resources, semantically described contents can be obtained from them. The service level involves the operations that can be performed on the web. The current web allows users to perform different tasks such as search, blogging, e-commerce, or social networking. To describe the possible services in RESTful architectures, a high-level, feature-oriented service methodology is proposed at this level. This lightweight description framework allows defining service discovery rules to identify operations in interactions with REST resources. Discovery is thus performed by applying discovery rules to contents discovered in REST interactions, in a novel process called service probing. Service discovery can also be performed by modelling services as contents, i.e., by retrieving Application Programming Interface (API) documentation and API listings in service registries such as ProgrammableWeb. For this, a unified model for composable components in Mashup-Driven Development (MDD) has been defined after analysing service repositories from the web. The agent level involves the orchestration of the discovery of services and contents. At this level, agent rules allow specifying behaviours for crawling and executing services, which results in the fulfilment of a high-level goal. Agent rules are plans that introspect the data and services discovered from the web, together with the knowledge encoded in service and content discovery rules, to anticipate the contents and services to be found on specific web resources. By defining plans, an agent can be configured to target specific resources. The discovery framework has been evaluated on different scenarios, each one covering different levels of the framework. The Contenidos a la Carta project deals with the mashing-up of news from electronic newspapers, and the framework was used for the discovery and extraction of pieces of news from the web.
Similarly, the Resulta and VulneraNET projects cover the discovery of ideas and of security knowledge on the web, respectively. The service level is covered in the OMELETTE project, where mashup components such as services and widgets are discovered in component repositories on the web. The agent level is applied to the crawling of services and news in these scenarios, highlighting how the semantic description of rules and extracted data can provide complex behaviours and orchestrations of tasks on the web. The main contributions of the thesis are a unified discovery framework, which allows agents to be configured to perform automated tasks; a scraping ontology for the construction of mappings used to scrape web resources; a novel first-order logic rule induction algorithm for the automated construction and maintenance of these mappings from the visual information in web resources; and a common unified model for the discovery of services, which allows service descriptions to be shared. Future work comprises the further extension of service probing, resource ranking, the extension of the scraping ontology, extensions of the agent model, and the construction of a base of discovery rules.
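To make the content-discovery idea above concrete, the following is a minimal sketch, not the thesis's rule language or scraping ontology, of how declarative rules could map fragments of an HTML representation onto semantic entities. The CSS selectors, the schema.org vocabulary and the helper names are assumptions chosen for illustration (Python with BeautifulSoup and rdflib).

```python
# Minimal content-discovery sketch: apply declarative "discovery rules"
# (CSS selector -> RDF property) to an HTML representation of a resource.
from bs4 import BeautifulSoup
from rdflib import Graph, Literal, Namespace, URIRef

SCHEMA = Namespace("http://schema.org/")

# Hypothetical discovery rules: each rule maps a CSS selector to a property.
RULES = {
    "h1.headline": SCHEMA.headline,
    "span.author": SCHEMA.author,
    "div.article-body": SCHEMA.articleBody,
}

def discover(resource_uri: str, html: str) -> Graph:
    """Return semantically described contents extracted from one resource."""
    soup = BeautifulSoup(html, "html.parser")
    graph = Graph()
    subject = URIRef(resource_uri)
    for selector, prop in RULES.items():
        for node in soup.select(selector):
            graph.add((subject, prop, Literal(node.get_text(strip=True))))
    return graph

if __name__ == "__main__":
    sample = "<h1 class='headline'>Example news item</h1><span class='author'>Jane Doe</span>"
    print(discover("http://example.org/news/1", sample).serialize(format="turtle"))
```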

Relevance:

100.00%

Publisher:

Abstract:

This project involves the development of a technology and internet weblog following December's Methodology, covering all of its stages and adding other aspects to the methodology that have enriched the project up to its final result. The aim is to develop a web application with the functionality of a weblog, focusing on both the end user and the webmaster: a weblog for sharing knowledge in a dynamic, frequently updated way, aimed at users with an interest in technology and with varying levels of expertise. Special emphasis has been placed on the usability of the web tool, taking this aspect into account throughout the methodology's entire life cycle. As a starting point, the application structure is based on December's Methodology for web development, and the whole project is built from this methodology; all of its stages have been followed in turn to complete each part of the final development. The report also deals with the technical details of the web tool, from the choice of the programming languages used to the design of the database structure, the processes involved in the application, and the more subjective decisions of web interface design. Throughout, I have tried to structure the report in a clear, concise and easy-to-read way, documenting the whole process of the project.

Relevance:

100.00%

Publisher:

Abstract:

INTAMAP is a Web Processing Service for the automatic spatial interpolation of measured point data. The requirements were (i) to use open standards for spatial data such as those developed in the context of the Open Geospatial Consortium (OGC), (ii) to use a suitable environment for statistical modelling and computation, and (iii) to produce an integrated, open-source solution. The system couples an open-source Web Processing Service (developed by 52°North), accepting data in the form of standardised XML documents conforming to the OGC Observations and Measurements standard, with a computing back-end realised in the R statistical environment. The probability distribution of interpolation errors is encoded with UncertML, a markup language designed to encode uncertain data. Automatic interpolation needs to be useful for a wide range of applications, and the algorithms have been designed to cope with anisotropy, extreme values, and data with known error distributions. Besides a fully automatic mode, the system can be used with different levels of user control over the interpolation process.
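As an illustration of the kind of automatic interpolation with uncertainty that such a back-end performs, here is a minimal sketch in Python. INTAMAP's actual computations are done in R, so this is only an analogue; the point data and kernel choice are assumptions.

```python
# Illustrative sketch of automatic interpolation with an uncertainty estimate;
# scikit-learn's Gaussian process stands in for the R-based geostatistical
# machinery actually used by INTAMAP.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Measured point data: (x, y) locations and observed values (hypothetical).
coords = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]])
values = np.array([1.2, 2.3, 1.9, 2.8, 2.1])

# Fit a spatial model; the WhiteKernel term accounts for measurement error.
model = GaussianProcessRegressor(kernel=RBF(length_scale=1.0) + WhiteKernel(),
                                 normalize_y=True)
model.fit(coords, values)

# Predict on unmeasured locations; the standard deviation characterises the
# interpolation error, which INTAMAP would report via UncertML.
targets = np.array([[0.25, 0.75], [0.75, 0.25]])
mean, std = model.predict(targets, return_std=True)
for loc, m, s in zip(targets, mean, std):
    print(f"location {loc}: predicted {m:.2f} +/- {s:.2f}")
```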

Relevance:

100.00%

Publisher:

Abstract:

Linked Data semantic sources, in particular DBpedia, can be used to answer many user queries. PowerAqua is an open multi-ontology Question Answering (QA) system for the Semantic Web (SW). However, the emergence of Linked Data, characterized by its openness, heterogeneity and scale, introduces a new dimension to the Semantic Web scenario, in which exploiting the relevant information to extract answers for Natural Language (NL) user queries is a major challenge. In this paper we discuss the issues and lessons learned from our experience of integrating PowerAqua as a front-end for DBpedia and a subset of Linked Data sources. As such, we go one step beyond the state of the art in end-user interfaces for Linked Data by introducing the mapping and fusion techniques needed to translate a user query by means of multiple sources. Our first informal experiments probe whether, in fact, it is feasible to obtain answers to user queries by composing information across semantic sources and Linked Data, even in its current form, where the strength of Linked Data is more a by-product of its size than of its quality. We believe our experiences can be extrapolated to a variety of end-user applications that wish to scale, open up, exploit and re-use what is possibly the greatest wealth of data about everything in the history of Artificial Intelligence. © 2010 Springer-Verlag.
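The following minimal sketch shows only the final, single-source step of such a pipeline: retrieving candidate answers from DBpedia with a hand-written SPARQL query. It is not PowerAqua's mapping or fusion machinery; the query and entity choices are assumptions for illustration.

```python
# Minimal sketch of retrieving an answer from one Linked Data source (DBpedia).
# PowerAqua maps a natural-language question onto queries over several sources
# and fuses the results; this shows only the query-one-source step, with a
# hand-written SPARQL query standing in for automatically generated ones.
from SPARQLWrapper import SPARQLWrapper, JSON

endpoint = SPARQLWrapper("https://dbpedia.org/sparql")
endpoint.setQuery("""
    PREFIX dbo: <http://dbpedia.org/ontology/>
    PREFIX dbr: <http://dbpedia.org/resource/>
    SELECT ?person WHERE {
        ?person a dbo:Scientist ;
                dbo:birthPlace dbr:Edinburgh .
    } LIMIT 5
""")
endpoint.setReturnFormat(JSON)

results = endpoint.query().convert()
for binding in results["results"]["bindings"]:
    print(binding["person"]["value"])
```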

Relevance:

100.00%

Publisher:

Abstract:

Clinical decision support systems (CDSSs) often base their knowledge and advice on human expertise. Knowledge representation needs to be in a format that can be easily understood by human users as well as support ongoing knowledge engineering, including the evolution and consistency of knowledge. This paper reports on the development of an ontology specification for managing knowledge engineering in a CDSS for assessing and managing risks associated with mental-health problems. The Galatean Risk and Safety Tool, GRiST, represents mental-health expertise in the form of a psychological model of classification. The hierarchical structure was directly represented in the machine using an XML document. Functionality of the model and knowledge management were controlled using attributes in the XML nodes, with an accompanying paper manual specifying how end-user tools should behave when interfacing with the XML. This paper explains the advantages of using the Web Ontology Language (OWL) as the specification, details some of the issues and problems encountered in translating the psychological model to OWL, and shows how OWL benefits knowledge engineering. The conclusions are that OWL can have an important role in managing complex knowledge domains for systems based on human expertise without impeding the end-users' understanding of the knowledge base. The generic classification model underpinning GRiST makes it applicable to many decision domains, and the accompanying OWL specification facilitates its implementation.
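The following minimal sketch illustrates the pre-OWL representation described above: a hierarchical classification held in an XML document whose node attributes steer end-user tool behaviour. The element and attribute names are invented for illustration and are not GRiST's actual schema.

```python
# Sketch of a hierarchical classification in XML, with node attributes that
# drive how end-user tools behave. Element/attribute names are hypothetical.
import xml.etree.ElementTree as ET

document = """
<concept name="risk-of-self-harm" layer="generic">
  <concept name="current-intention" question="Does the person intend self-harm?"
           answer-type="scale" editable="false"/>
  <concept name="history" layer="generic">
    <concept name="previous-attempts" question="Number of previous attempts?"
             answer-type="integer" editable="true"/>
  </concept>
</concept>
"""

def walk(node, depth=0):
    """Print the hierarchy and the attributes that control tool behaviour."""
    attrs = {k: v for k, v in node.attrib.items() if k != "name"}
    print("  " * depth + node.get("name"), attrs)
    for child in node:
        walk(child, depth + 1)

walk(ET.fromstring(document))
```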

Relevance:

100.00%

Publisher:

Abstract:

The study shows an alternative solution to existing efforts at solving the problem of how to centrally manage and synchronise users' Multiple Profiles (MP) across multiple discrete social networks. Most social network users hold more than one social network account and utilise them in different ways depending on the digital context (Iannella, 2009a). They may, for example, enjoy friendly chat on Facebook, professional discussion on LinkedIn, and health information exchange on PatientsLikeMe. Therefore, many web users need to manage disparate profiles across many distributed online sources. Maintaining these profiles is cumbersome, time-consuming, inefficient, and may lead to lost opportunities. In this thesis the researcher proposes a framework for the management of a user's multiple online social network profiles. A demonstrator, called the Multiple Profile Manager (MPM), is showcased to illustrate how effective the framework can be. The MPM achieves the required profile management and synchronisation using a free, open, decentralised social networking platform (OSW) that was proposed by the Vodafone Group in 2010. The proposed MPM enables a user to create and manage an integrated profile (IP) and to share and synchronise this profile with all their social networks. The necessary protocols to support the prototype are also proposed by the researcher. The MPM protocol specification defines an Extensible Messaging and Presence Protocol (XMPP) extension for sharing vCard and social network account information between the MPM server, MPM client, and social network sites (SNSs). The writer of this thesis adopted a research approach and a number of use cases for the implementation of the project. The use cases were created to capture the functional requirements of the MPM and to describe the interactions between users and the MPM. In the research, a development process was followed in establishing the prototype and the related protocols. The use cases were subsequently used to illustrate the prototype via screenshots taken of the MPM client interfaces. The use cases also played a role in evaluating the outcomes of the research, such as the framework, the prototype, and the related protocols. An innovative application of this project is in the area of public health informatics. The researcher utilised the prototype to examine how the framework might benefit patients and physicians. The framework can greatly enhance health information management for patients and, more importantly, offer physicians a more comprehensive personal health overview of their patients. This will give a more complete picture of the patient's background than is currently available and will prove helpful in providing the right treatment. The MPM prototype and related protocols have a high application value as they can be integrated into the real OSW platform and so serve users in the modern digital world. They also provide online users with a real platform for centrally storing their complete profile data, efficiently managing their personal information and, moreover, synchronising the overall complete profile with each of their discrete profiles stored in their different social network sites.
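The sketch below illustrates, in broad strokes, the kind of XMPP stanza an MPM client might send to push part of the integrated profile to a social network site. The extension namespace and element names are hypothetical; only the vCard namespace (vcard-temp) is a real XMPP convention, and the thesis defines its own protocol extension.

```python
# Sketch of an XMPP-style stanza for pushing vCard data from an integrated
# profile to one social network site. Namespace and element names are invented.
import xml.etree.ElementTree as ET

MPM_NS = "urn:example:mpm:profile-sync"   # hypothetical extension namespace

def build_sync_stanza(user_jid: str, target_site: str, full_name: str, email: str) -> str:
    iq = ET.Element("iq", {"type": "set", "from": user_jid, "to": target_site})
    sync = ET.SubElement(iq, "{%s}sync" % MPM_NS)
    vcard = ET.SubElement(sync, "{vcard-temp}vCard")       # standard vCard namespace
    ET.SubElement(vcard, "{vcard-temp}FN").text = full_name
    ET.SubElement(vcard, "{vcard-temp}EMAIL").text = email
    return ET.tostring(iq, encoding="unicode")

print(build_sync_stanza("alice@opensocial.example", "sns.example.org",
                        "Alice Example", "alice@example.org"))
```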

Relevance:

100.00%

Publisher:

Abstract:

The Yet Another Workflow Language (YAWL) language and environment has been used to prototype, verify, execute and analyse business processes in a wide variety of industrial domains, such as telephony, construction, supply chain, insurance services, medical environments, personnel management and the creative arts. These engagements offer the YAWL researcher community a great opportunity to validate our research findings within an industry setting, as well as to discover possible enhancements from the end-user perspective. This paper describes three such industry projects, discusses why YAWL was chosen and how it was used in each, and reflects on the insights gained along the way.

Relevance:

100.00%

Publisher:

Abstract:

Mobile technologies are enabling access to information in diverse environments, and are exposing a wider group of individuals to said technology. Therefore, this paper proposes that a wider view of user relations than is usually considered in information systems research is required. Specifically, we examine the potential effects of emerging mobile technologies on end-user relations with a focus on the 'secondary user', those who are not intended to interact directly with the technology but are intended consumers of the technology's output. For illustration, we draw on a study of a U.K. regional Fire and Rescue Service and deconstruct mobile technology use at Fire Service incidents. Our findings provide insights which suggest that, because of the nature of mobile technologies and their context of use, secondary user relations in such emerging mobile environments are important and need further exploration.

Relevance:

100.00%

Publisher:

Abstract:

This study constructs performance prediction models to estimate the end-user perceived video quality on mobile devices for the latest video encoding techniques, VP9 and H.265. Both subjective and objective video quality assessments were carried out to collect data and select the most desirable predictors. Using statistical regression, two models were generated, achieving prediction accuracies of 94.5% and 91.5% respectively, depending on whether the predictor derived from the objective assessment is involved. These proposed models can be directly used by media industries for video quality estimation, and will ultimately help them to ensure a positive end-user quality of experience on future mobile devices after the adoption of the latest video encoding technologies.
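As a hedged illustration of building such a model by statistical regression, the sketch below fits a linear model to a tiny invented data set; the predictor names and values are assumptions, not the study's actual subjective and objective measurements.

```python
# Sketch of building a quality-prediction model by statistical regression.
# The predictors and the tiny data set are invented for illustration.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

# Hypothetical training data: bitrate (kbps), resolution height (pixels),
# and an objective quality score, with the subjective MOS as the target.
X = np.array([
    [500, 360, 0.72],
    [1000, 480, 0.80],
    [2000, 720, 0.88],
    [4000, 1080, 0.93],
    [8000, 1080, 0.96],
])
mos = np.array([2.1, 2.9, 3.6, 4.2, 4.6])

model = LinearRegression().fit(X, mos)
predicted = model.predict(X)
print("R^2 on training data:", round(r2_score(mos, predicted), 3))
print("Predicted MOS for a 3000 kbps, 720p clip with objective score 0.9:",
      round(float(model.predict([[3000, 720, 0.9]])[0]), 2))
```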

Relevance:

100.00%

Publisher:

Abstract:

This paper describes the design and implementation of ADAMIS (‘A database for medical information systems’). ADAMIS is a relational database management system for a general hospital environment. Apart from the usual database (DB) facilities of data definition and data manipulation, ADAMIS supports a query language called the ‘simplified medical query language’ (SMQL) which is completely end-user oriented and highly non-procedural. Other features of ADAMIS include provision of facilities for statistics collection and report generation. ADAMIS also provides adequate security and integrity features and has been designed mainly for use on interactive terminals.

Relevance:

100.00%

Publisher:

Abstract:

The performance of prediction models is often based on "abstract metrics" that estimate the model's ability to limit residual errors between the observed and predicted values. However, meaningful evaluation and selection of prediction models for end-user domains requires holistic and application-sensitive performance measures. Inspired by energy consumption prediction models used in the emerging "big data" domain of Smart Power Grids, we propose a suite of performance measures to rationally compare models along the dimensions of scale independence, reliability, volatility and cost. We include both application-independent and application-dependent measures, the latter parameterized to allow customization by domain experts to fit their scenario. While our measures are generalizable to other domains, we offer an empirical analysis using real energy use data for three Smart Grid applications: planning, customer education and demand response, which are relevant for energy sustainability. Our results underscore the value of the proposed measures to offer a deeper insight into models' behavior and their impact on real applications, which benefit both data mining researchers and practitioners.
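As a worked illustration of application-sensitive, scale-independent evaluation, the sketch below computes two generic measures (MAPE and CV(RMSE)) for an invented energy forecast; these are stand-ins for, and not, the suite of measures proposed in the paper.

```python
# Sketch of evaluating an energy-consumption forecast with scale-independent
# error measures. MAPE and CV(RMSE) are generic stand-ins used for illustration.
import numpy as np

observed = np.array([12.0, 15.5, 14.2, 18.9, 16.3])   # kWh, hypothetical
predicted = np.array([11.4, 16.1, 13.8, 17.5, 17.0])

def mape(y, y_hat):
    """Mean absolute percentage error (scale independent)."""
    return float(np.mean(np.abs((y - y_hat) / y)) * 100)

def cv_rmse(y, y_hat):
    """Coefficient of variation of the RMSE, relative to the mean load."""
    rmse = float(np.sqrt(np.mean((y - y_hat) ** 2)))
    return rmse / float(np.mean(y)) * 100

print(f"MAPE:     {mape(observed, predicted):.1f}%")
print(f"CV(RMSE): {cv_rmse(observed, predicted):.1f}%")
```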

Relevance:

100.00%

Publisher:

Abstract:

The main goal of this project is to analyse the performance of LTE technology in an urban area. To achieve this, the OMNeT++ and SUMO software packages have been integrated to work together; in this way, the routes of one or more vehicles holding an ongoing LTE communication with an access point are simulated, in order to measure significant parameters such as delay or packet loss. With the help of the results obtained, the capacity limits of LTE technology are analysed, determining those that guarantee a minimum QoS to the users.
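The sketch below illustrates only the post-processing step, computing mean delay and packet-loss rate from a simulation trace; the trace format is invented, since real OMNeT++/SUMO runs export results in their own vector and scalar files.

```python
# Sketch of computing delay and packet-loss figures from a simulation trace.
# The record format is invented for illustration.
from statistics import mean

# (packet_id, sent_time_s, received_time_s or None if lost) -- hypothetical trace
trace = [
    (1, 0.010, 0.042),
    (2, 0.020, 0.055),
    (3, 0.030, None),      # lost packet
    (4, 0.040, 0.071),
    (5, 0.050, 0.089),
]

delays = [rx - tx for _, tx, rx in trace if rx is not None]
loss_rate = sum(1 for _, _, rx in trace if rx is None) / len(trace)

print(f"mean end-to-end delay: {mean(delays) * 1000:.1f} ms")
print(f"packet loss rate:      {loss_rate:.1%}")
```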

Relevance:

100.00%

Publisher:

Abstract:

User-value is a determining factor for product acceptance in product design. Research on rural electrification to date, however, does not draw sufficient attention to the importance of user-value with regard to the overall success of a project. This is evident from the analysis of project reports and applicable indicators from agencies active in the sector. Learning from the design, psychology and sociology literatures, it is important that rural electrification projects incorporate the value perception of the end-user and extend their success beyond the commonly used criteria of financial value, the appropriateness of the technology, capacity building and technology uptake. Creating value for the end-user is particularly important for project acceptance and the sustainability of a scheme once it has been handed over to the local community. In this research paper, existing theories and models of value-theory are transposed and applied to community operated rural electrification schemes and a user-value framework is developed. Furthermore, the importance of value to the end-user is clarified. Current literature on product design reveals that user-value has different properties, many of which are applicable to rural electrification. Five value pillars and their sub-categories important for the users of rural electrification projects are identified, namely: functional; social significance; epistemic; emotional; and cultural values. These pillars provide the main structure for the conceptual framework developed in this research paper. It is proposed that by targeting the values of the end-user, the key factors of user-value applicable to rural electrification projects will be identified and the sustainability of the project will be better ensured. © 2014 The Authors.

Relevance:

100.00%

Publisher:

Abstract:

"End-User Development" (EUD) studies how to enable software system users who are not professional software developers to develop or modify software to some extent. EUD research falls mainly into three categories: theoretical research on EUD, research on general-purpose EUD techniques, and research on domain-oriented EUD techniques. As a domain-oriented EUD study, this thesis takes the tabular-data analysis problems common in e-government as its background and investigates EUD methods and techniques for tabular-data analysis. Tabular-data analysis is a widespread, everyday application problem. Besides the table-analysis systems embedded in various business systems, industry has also introduced EUD, for example the spreadsheet; however, neither dedicated analysis systems nor spreadsheets are sufficient to cope with the rapid growth in the scale and complexity of tabular-data analysis problems. On the basis of a model of the tabular-data analysis problem, this thesis proposes an EUD-based tabular-data analysis method (Methodology of EUD-Enabled Tabular-data Analysis, META) and studies the application of the META method and the key supporting techniques it requires. The contributions cover the following seven aspects: 1) Building on the spreadsheet model, large-scale tabular-data analysis problems are modelled; the model is characterised by the separation of the data, logic and presentation layers, and its expressive power for tabular data, table presentation and tabular-data analysis computations exceeds that of existing models. 2) A tabular-data analysis method centred on EUD (META) is proposed; it has three application modes suited to different levels of user proficiency and requirement complexity, and is applicable both to simple tabular-data generation problems and to complex EUD life-cycle processes. 3) Based on the model of the tabular-data analysis problem, a tabular-data analysis language supporting end-user development, ESL (EUD-Enabled Spreadsheet Language), is designed by extending the traditional spreadsheet language; it inherits the end-user programmability of spreadsheet languages while offering greater expressive power for tabular-data analysis than other existing languages. 4) Based on a model of the dependency relationships among ESL formulas, the factors affecting spreadsheet computation performance are studied in depth. A recalculation-reduction algorithm based on topological sorting is proposed, which solves the redundant recalculation present in traditional algorithms; on top of the topological sorting algorithm, a parallel computation method for ESL is proposed; and, for large-scale tabular-data access, a caching mechanism that significantly reduces the cost of SQL data access is implemented. Experiments verify that these techniques improve the execution efficiency of ESL. 5) To reduce the complexity of SQL programming in EUD, the ambiguity of join selection in automatic SQL generation is resolved by introducing domain semantics and context configuration, achieving automatic SQL generation for precise queries; the results can also be used by other EUD systems that access relational databases. 6) Effectiveness is an important issue in ESL programming, but research on EUD effectiveness has been limited by the lack of suitable study objects and user communities. Developing web plug-ins on social networks is a typical EUD activity, in which "pre-release errors" are an important source of risk. The Release-Waiting Farm (RWF) technique proposed in this thesis effectively guides end users in testing web plug-ins and regularises the end-user development process. 7) The thesis summarises the key factors behind the success of the RWF technique in social networks and, based on RWF, designs collaboration and testing environments supporting end-user development for the META method, which have been implemented and validated in a nationwide organisational-system software framework development project.
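As a minimal illustration of the recalculation-ordering idea in contribution 4, the sketch below topologically sorts a cell dependency graph so that each formula is evaluated exactly once, after its inputs. The cells and formulas are invented, and ESL itself is far richer than this.

```python
# Minimal sketch of recalculation ordering for spreadsheet-style formulas:
# topologically sort the cell dependency graph so each cell is recomputed
# exactly once, after the cells it depends on.
from graphlib import TopologicalSorter

# cell -> set of cells it depends on (hypothetical workbook)
dependencies = {
    "A1": set(),
    "A2": set(),
    "B1": {"A1", "A2"},        # e.g. B1 = A1 + A2
    "B2": {"B1"},              # e.g. B2 = B1 * 2
    "C1": {"B1", "B2"},        # e.g. C1 = B1 + B2
}

values = {"A1": 3, "A2": 4}
formulas = {
    "B1": lambda v: v["A1"] + v["A2"],
    "B2": lambda v: v["B1"] * 2,
    "C1": lambda v: v["B1"] + v["B2"],
}

for cell in TopologicalSorter(dependencies).static_order():
    if cell in formulas:                      # input cells keep their values
        values[cell] = formulas[cell](values)

print(values)   # {'A1': 3, 'A2': 4, 'B1': 7, 'B2': 14, 'C1': 21}
```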

Relevance:

100.00%

Publisher:

Abstract:

This research investigates some of the reasons for the reported difficulties experienced by writers when using editing software designed for structured documents. The overall objective was to determine whether there are aspects of the software interfaces which militate against optimal document construction by writers who are not computer experts, and to suggest possible remedies. Studies were undertaken to explore the nature and extent of the difficulties, and to identify which components of the software interfaces are involved. A model of a revised user interface was tested, and some possible adaptations to the interface are proposed which may help overcome the difficulties. The methodology comprised: 1. identification and description of the nature of a ‘structured document’ and what distinguishes it from other types of document used on computers; 2. isolation of the requirements of users of such documents, and the construction of a set of personas which describe them; 3. evaluation of other work on the interaction between humans and computers, specifically in software for creating and editing structured documents; 4. estimation of the levels of adoption of the available software for editing structured documents and the reactions of existing users to it, with specific reference to difficulties encountered in using it; 5. examination of the software and identification of any mismatches between the expectations of users and the facilities provided by the software; 6. assessment of any physical or psychological factors in the reported difficulties experienced, and determination of what (if any) changes to the software might affect these. The conclusions are that seven of the twelve modifications tested could contribute to an improvement in usability, effectiveness, and efficiency when writing structured text (new document selection; adding new sections and new lists; identifying key information typographically; the creation of cross-references and bibliographic references; and the inclusion of parts of other documents). The remaining five were seen as more applicable to editing existing material than to authoring new text (adding new elements; splitting and joining elements [before and after]; and moving block text).