965 results for End-user querying


Relevance:

30.00%

Publisher:

Abstract:

An article presented at the last ICAT conference stated at the end that buildings at all times tend to reflect the people who had them erected. This paper aims to show the correctness of that statement. To this end, it will examine a number of typical residential buildings dating from the beginning of the seventeenth century up to today, investigate who had the buildings erected, and relate that to the performance of the buildings. This analysis will mainly use scale, but also the degree of diversity in function and appearance, as factors. Furthermore, using economic data and data on the buildings to identify patterns, it will investigate how the size of the property and the relative size of the capital interest behind the building have developed. Since the authors live in Copenhagen and Copenhagen is very typical in its historical development, buildings and environments in and around the centre of Copenhagen are used as examples.

Relevance:

30.00%

Publisher:

Abstract:

The goal of this work was to provide professional and amateur writers with a new way of enhancing their productivity and mental well-being, by helping them overcome writer's block and achieve a state of optimal experience while writing. Our approach is based on bringing together different components to create what we call a creative moment. A creative moment is composed of an image, a text, a mood, a location and a color; the color presented in a creative moment varies according to the mood associated with it. With creative moments we hoped that our users would have a way to easily trigger their creativity and kick-start their work. The prototyping of a web crowdsourcing platform named CreativeWall and of a Microsoft Word Add-In used in the user study is described, and their implementations are discussed. The user study reveals that our approach has a positive influence on the productivity of the participants when compared with another existing approach. The study also revealed that our approach can ease the process of achieving a state of optimal experience by enhancing one of the dimensions of Flow Theory. Finally, we present what we consider to be possible future developments for the concept created during this work.
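To make the "creative moment" concrete, here is a minimal sketch of the data structure described above; the field names and the mood-to-color palette are assumptions for illustration, since the abstract names the five components but not their representation:

```python
from dataclasses import dataclass

# Hypothetical mood-to-color palette; the paper does not specify one.
MOOD_COLORS = {
    "joyful": "#FFD700",
    "calm": "#4FC3F7",
    "melancholic": "#5C6BC0",
}

@dataclass
class CreativeMoment:
    """Bundles the five components named in the abstract."""
    image_url: str
    text: str
    mood: str
    location: str

    @property
    def color(self) -> str:
        # The color is derived from the mood associated with the moment.
        return MOOD_COLORS.get(self.mood, "#9E9E9E")  # grey fallback

moment = CreativeMoment(
    image_url="https://example.org/sunset.jpg",
    text="The harbour was quiet at dusk.",
    mood="calm",
    location="Lisbon",
)
print(moment.color)  # "#4FC3F7"
```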

Relevance:

30.00%

Publisher:

Abstract:

The increasing cost of developing complex software systems has created a need for tools which aid software construction. One area in which significant progress has been made is with the so-called Compiler Writing Tools (CWTs); these aim at automated generation of various components of a compiler and hence at expediting the construction of complete programming language translators. A number of CWTs are already in quite general use, but investigation reveals significant drawbacks with current CWTs, such as lex and yacc. The effective use of a CWT typically requires a detailed technical understanding of its operation and involves tedious and error-prone input preparation. Moreover, CWTs such as lex and yacc address only a limited aspect of the compilation process; for example, actions necessary to perform lexical symbol valuation and abstract syntax tree construction must be explicitly coded by the user. This thesis presents a new CWT called CORGI (COmpiler-compiler from Reference Grammar Input) which deals with the entire "front-end" component of a compiler; this includes the provision of necessary data structures and routines to manipulate them, both generated from a single input specification. Compared with earlier CWTs, CORGI has a higher-level and hence more convenient user interface, operating on a specification derived directly from a "reference manual" grammar for the source language. Rather than developing a compiler-compiler from first principles, CORGI has been implemented by building a further shell around two existing compiler construction tools, namely lex and yacc. CORGI has been demonstrated to perform efficiently in realistic tests, both in terms of speed and the effectiveness of its user interface and error-recovery mechanisms.
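As a rough illustration of what a front-end generator such as CORGI automates, the hand-written sketch below shows tokenisation with lexical symbol valuation and abstract syntax tree construction for a toy grammar; CORGI's actual specification format and generated code are not given in the abstract, so everything here is an assumption:

```python
import re

# Token definitions; NUM carries a valuation (lexeme -> int).
TOKENS = [("NUM", r"\d+"), ("PLUS", r"\+"), ("WS", r"\s+")]

def tokenize(src):
    pos = 0
    while pos < len(src):
        for name, pat in TOKENS:
            m = re.match(pat, src[pos:])
            if m:
                if name == "NUM":
                    yield ("NUM", int(m.group()))  # lexical symbol valuation
                elif name != "WS":
                    yield (name, m.group())
                pos += m.end()
                break
        else:
            raise SyntaxError(f"bad character at position {pos}")

# An AST node of the kind a generator would derive from the grammar.
class Add:
    def __init__(self, left, right):
        self.left, self.right = left, right
    def __repr__(self):
        return f"Add({self.left!r}, {self.right!r})"

def parse(src):
    """Parse expr -> NUM ('+' NUM)* into a left-leaning AST."""
    toks = list(tokenize(src)) + [("EOF", None)]
    kind, value = toks[0]
    assert kind == "NUM", "expression must start with a number"
    node, i = value, 1
    while toks[i][0] == "PLUS":
        kind, value = toks[i + 1]
        assert kind == "NUM", "'+' must be followed by a number"
        node, i = Add(node, value), i + 2
    return node

print(parse("1 + 2 + 3"))  # Add(Add(1, 2), 3)
```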

Relevance:

30.00%

Publisher:

Abstract:

Database systems have a user interface, one component of which will normally be a query language based on a particular data model. Typically, data models provide primitives to define, manipulate and query databases, and often these primitives are designed to form self-contained query languages. This thesis describes a prototype implementation of a system which allows users to specify queries against the database in a query language whose primitives are not those provided by the actual model on which the database system is based, but those provided by a different data model. The implementation chosen is the Functional Query Language Front End (FQLFE), which uses the Daplex functional data model and query language. Using FQLFE, users can specify the underlying database (based on the relational model) in terms of Daplex, and queries against this specified view can then be made in Daplex. FQLFE transforms these queries into the query language (Quel) of the underlying target database system (Ingres). The automation of part of the Daplex function definition phase is also described and its implementation discussed.
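A minimal sketch of the kind of mapping FQLFE performs follows; the query representation and the function-to-column mapping are invented for illustration, and the thesis's actual Daplex-to-Quel translation rules are necessarily richer:

```python
# Maps Daplex-style functions over an entity to relational columns.
FUNCTION_TO_COLUMN = {"name": "student.name", "dept": "student.dept"}

def daplex_to_quel(entity, selections, condition):
    """Translate 'FOR EACH <entity> SUCH THAT <cond> PRINT f(<entity>)'
    into a Quel range/retrieve statement over the underlying relation."""
    target = ", ".join(
        FUNCTION_TO_COLUMN[f].replace(entity + ".", "s.") for f in selections
    )
    col, value = condition
    return (
        f"range of s is {entity}\n"
        f'retrieve ({target}) where s.{col} = "{value}"'
    )

# FOR EACH student SUCH THAT dept(student) = "CS" PRINT name(student)
print(daplex_to_quel("student", ["name"], ("dept", "CS")))
# range of s is student
# retrieve (s.name) where s.dept = "CS"
```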

Relevance:

30.00%

Publisher:

Abstract:

With the recent rapid growth of the Semantic Web (SW), the processes of searching and querying content that is both massive in scale and heterogeneous have become increasingly challenging. User-friendly interfaces, which can support end users in querying and exploring this novel and diverse, structured information space, are needed to make the vision of the SW a reality. We present a survey on ontology-based Question Answering (QA), which has emerged in recent years to exploit the opportunities offered by structured semantic information on the Web. First, we provide a comprehensive perspective by analyzing the general background and history of the QA research field, from influential works from the artificial intelligence and database communities developed in the 1970s and later decades, through open domain QA stimulated by the QA track in TREC since 1999, to the latest commercial semantic QA solutions, before tackling the current state of the art in open user-friendly interfaces for the SW. Second, we examine the potential of this technology to go beyond the current state of the art to support end users in reusing and querying SW content. We conclude our review with an outlook for this novel research area, focusing in particular on the R&D directions that need to be pursued to realize the goal of efficient and competent retrieval and integration of answers from large-scale, heterogeneous, and continuously evolving semantic sources.
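The core pattern shared by these systems, mapping a natural-language question onto a structured query over semantic data, can be sketched as follows; the toy triple store, the naive question template, and the example vocabulary are assumptions rather than features of any surveyed system (the sketch uses the rdflib library):

```python
from rdflib import Graph, Literal, Namespace

EX = Namespace("http://example.org/")
g = Graph()
g.add((EX.Portugal, EX.capital, EX.Lisbon))
g.add((EX.Lisbon, EX.label, Literal("Lisbon")))

def answer(question: str):
    # Naive template: "What is the capital of X?" -> SPARQL over ex:capital.
    country = question.rstrip("?").rsplit(" ", 1)[-1]
    sparql = f"""
        PREFIX ex: <http://example.org/>
        SELECT ?name WHERE {{
            ex:{country} ex:capital ?city .
            ?city ex:label ?name .
        }}"""
    return [str(row.name) for row in g.query(sparql)]

print(answer("What is the capital of Portugal?"))  # ['Lisbon']
```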

Relevance:

30.00%

Publisher:

Abstract:

This paper describes the work undertaken in the Scholarly Ontologies Project. The aim of the project has been to develop a computational approach to support scholarly sensemaking, through interpretation and argumentation, enabling researchers to make claims: to describe and debate their view of a document's key contributions and its relationships to the literature. The project has investigated the technicalities and practicalities of capturing conceptual relations, within and between conventional documents, in terms of abstract ontological structures. In this way, we have developed a new kind of index to distributed digital library systems. This paper reports a case study undertaken to test the sensemaking tools developed by the project. The tools used were ClaiMapper, which allows the user to sketch argument maps of individual papers and their connections; ClaiMaker, a server on which such models can be stored and which provides interpretative services to assist the querying of argument maps across multiple papers; and ClaimFinder, a novice interface to the search services in ClaiMaker.
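The underlying idea, claims as typed links between concepts that can be queried across papers, might be sketched as follows; the relation names and the query helper are illustrative assumptions, not ClaiMaker's actual model:

```python
# Claims as (source concept, typed relation, target concept) triples.
claims = [
    ("paperA:model-X", "supports", "paperB:hypothesis-H"),
    ("paperC:study-Y", "challenges", "paperB:hypothesis-H"),
    ("paperB:hypothesis-H", "uses", "paperD:method-M"),
]

def related(concept, relation=None):
    """All claims touching a concept, optionally filtered by relation type."""
    return [
        (s, r, o) for (s, r, o) in claims
        if concept in (s, o) and (relation is None or r == relation)
    ]

# Which claims, across all indexed papers, challenge hypothesis H?
print(related("paperB:hypothesis-H", relation="challenges"))
# [('paperC:study-Y', 'challenges', 'paperB:hypothesis-H')]
```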

Relevance:

30.00%

Publisher:

Abstract:

Clinical Decision Support Systems (CDSSs) need to disseminate expertise in formats that suit different end users and with functionality tuned to the context of assessment. This paper reports research into a method for designing and implementing knowledge structures that facilitate the required flexibility. A psychological model of expertise is represented using a series of formally specified and linked XML trees that capture increasing elements of the model, starting with hierarchical structuring, incorporating reasoning with uncertainty, and ending with delivering the final CDSS. The method was applied to the Galatean Risk and Safety Tool, GRiST, which is a web-based clinical decision support system (www.egrist.org) for assessing mental-health risks. Results of its clinical implementation demonstrate that the method can produce a system that is able to deliver expertise targeted and formatted for specific patient groups, different clinical disciplines, and alternative assessment settings. The approach may be useful for developing other real-world systems using human expertise and is currently being applied to a logistics domain. © 2013 Polish Information Processing Society.
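A rough sketch of such a linked XML knowledge structure follows, assuming invented node names, weights, and a simple weighted-average aggregation rule; GRiST's actual galatean model is more elaborate:

```python
import xml.etree.ElementTree as ET

XML = """
<concept name="overall-risk">
  <concept name="hopelessness" weight="0.6" value="0.8"/>
  <concept name="social-support" weight="0.4" value="0.3"/>
</concept>
"""

def assess(node):
    """Leaves carry assessed values; inner nodes aggregate by weight."""
    children = list(node)
    if not children:
        return float(node.get("value"))
    # Weights are assumed to sum to 1 at each level of the hierarchy.
    return sum(float(c.get("weight")) * assess(c) for c in children)

root = ET.fromstring(XML)
print(f"{root.get('name')}: {assess(root):.2f}")  # overall-risk: 0.60
```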

Relevance:

30.00%

Publisher:

Abstract:

Technology intermediaries are seen as potent vehicles for addressing perennial problems in transferring technology from university to industry in developed and developing countries. This paper examines what constitutes effective user-end intermediation in a low-technology, developing economy context, which is an under-researched topic. The social learning in technological innovation framework is extended using situated learning theory in a longitudinal instrumental case study of an exemplar technology intermediation programme. The paper documents the role that academic-related research and advisory centres can play as intermediaries in brokering, facilitating and configuring technology, against the backdrop of a group of small-scale pisciculture businesses in a rural area of Colombia. In doing so, it demonstrates how technology intermediation activities can be optimized in the domestication and innofusion of technology amongst end-users. The design components featured in this instrumental case of intermediation can inform policy making and practice relating to technology transfer from university to rural industry. Future research on this subject should consider the intermediation components put forward, as well as the impact of such interventions, in different countries and industrial sectors. Such research would allow for theoretical replication and help improve technology domestication and innofusion in different contexts, especially in less-developed countries.

Relevance:

30.00%

Publisher:

Abstract:

People manage a spectrum of identities in cyber domains. Profiling individuals and assigning them to distinct groups or classes have potential applications in targeted services, online fraud detection, extensive social sorting, and cyber-security. This paper presents the Uncertainty of Identity Toolset, a framework for the identification and profiling of users from their social media accounts and e-mail addresses. More specifically, in this paper we discuss the design and implementation of two tools of the framework. The Twitter Geographic Profiler tool builds a map of the ethno-cultural communities of a person's friends on Twitter social media service. The E-mail Address Profiler tool identifies the probable identities of individuals from their e-mail addresses and maps their geographical distribution across the UK. To this end, this paper presents a framework for profiling the digital traces of individuals.
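A toy version of the E-mail Address Profiler's first step, guessing a probable name from an address's local part, might look like the following; the splitting heuristics are invented for illustration, and the toolset's actual inference (including mapping identities geographically across the UK) goes much further:

```python
import re

def probable_name(email: str):
    """Guess candidate name parts from an e-mail address's local part."""
    local = email.split("@", 1)[0]
    local = re.sub(r"\d+$", "", local)   # drop trailing digits: jsmith84
    parts = re.split(r"[._-]", local)    # j.smith, j_smith, j-smith
    return [p.capitalize() for p in parts if p]

print(probable_name("jane.smith84@example.co.uk"))  # ['Jane', 'Smith']
```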

Relevance:

30.00%

Publisher:

Abstract:

Lappeenranta University of Technology studies the use of low-voltage direct current (LVDC) electricity distribution. In cooperation with Järvi-Suomen Energia Oy and Suur-Savon Sähkö Oy, the university has built an experimental low-voltage DC distribution network that provides field conditions for low-voltage research with real customers and makes it possible to verify LVDC technology and other smart grid functions in the field. The network's DC link is built between a 20 kV distribution network and four customers. The 20 kV medium voltage is converted at a rectifier substation to ±750 V low-voltage DC, and converted again to 400/230 V AC close to the customers. The purpose of this bachelor's thesis is to create a database for the data and measurement results accumulating from the LVDC network. The database was considered necessary so that the measurement results of the low-voltage network can later be examined in a single, consistent form. One research question was how to organise and visualise all the measurement data accumulating on the servers from the network. The work also considers three user groups that could potentially make use of the database (household customers, distribution network companies and the research laboratory) and discusses the benefit and significance of the database for these users. A second research question was therefore which of the data stored in the database would be essential to capture from these users' point of view, and how they could retrieve information from the database. The research methods are based on already existing measurement data, and both printed and electronic literature has been used. As a result, a database was created with the MySQL Workbench software, together with measurement data collection and processing programs written in Python. In addition, a separate MATLAB interface was created for visualising the data, illustrating the measurement data of the three customer groups. The database and its data visualisation give consumers the possibility to better understand their own electricity use, and give network companies and research laboratories information on, among other things, power quality and network loading.
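A minimal sketch of such a measurement database follows. The real schema was built with MySQL Workbench and fed by Python collection programs; sqlite3 stands in here only so the example runs without a database server, and the table and column names are assumptions:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE measurement (
        taken_at TEXT NOT NULL,     -- ISO 8601 timestamp
        customer INTEGER NOT NULL,  -- one of the four LVDC customers
        quantity TEXT NOT NULL,     -- e.g. 'voltage_v', 'power_kw'
        value    REAL NOT NULL
    )""")
conn.executemany(
    "INSERT INTO measurement VALUES (?, ?, ?, ?)",
    [("2015-03-01T12:00:00", 1, "voltage_v", 751.2),
     ("2015-03-01T12:00:00", 2, "power_kw", 3.4)],
)
# The kind of query a network company might run: average load per customer.
for row in conn.execute(
    "SELECT customer, AVG(value) FROM measurement "
    "WHERE quantity = 'power_kw' GROUP BY customer"
):
    print(row)  # (2, 3.4)
```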

Relevance:

30.00%

Publisher:

Abstract:

In today’s big data world, data is being produced in massive volumes, at great velocity and from a variety of different sources such as mobile devices, sensors, a plethora of small devices hooked to the internet (Internet of Things), social networks, communication networks and many others. Interactive querying and large-scale analytics are being increasingly used to derive value out of this big data. A large portion of this data is being stored and processed in the Cloud due to the several advantages provided by the Cloud, such as scalability, elasticity, availability, low cost of ownership and the overall economies of scale. There is thus a growing need for large-scale cloud-based data management systems that can support real-time ingest, storage and processing of large volumes of heterogeneous data. However, in the pay-as-you-go Cloud environment, the cost of analytics can grow linearly with the time and resources required. Reducing the cost of data analytics in the Cloud thus remains a primary challenge. In my dissertation research, I have focused on building efficient and cost-effective cloud-based data management systems for different application domains that are predominant in cloud computing environments. In the first part of my dissertation, I address the problem of reducing the cost of transactional workloads on relational databases to support database-as-a-service in the Cloud. The primary challenges in supporting such workloads include choosing how to partition the data across a large number of machines, minimizing the number of distributed transactions, providing high data availability, and tolerating failures gracefully. I have designed, built and evaluated SWORD, an end-to-end scalable online transaction processing system that utilizes workload-aware data placement and replication to minimize the number of distributed transactions, and that incorporates a suite of novel techniques to significantly reduce the overheads incurred both during the initial placement of data and during query execution at runtime. In the second part of my dissertation, I focus on sampling-based progressive analytics as a means to reduce the cost of data analytics in the relational domain. Sampling has been traditionally used by data scientists to get progressive answers to complex analytical tasks over large volumes of data. Typically, this involves manually extracting samples of increasing data size (progressive samples) for exploratory querying. This provides the data scientists with user control, repeatable semantics, and result provenance. However, such solutions result in tedious workflows that preclude the reuse of work across samples. On the other hand, existing approximate query processing systems report early results, but do not offer the above benefits for complex ad-hoc queries. I propose a new progressive data-parallel computation framework, NOW!, which provides support for progressive analytics over big data. In particular, NOW! enables progressive relational (SQL) query support in the Cloud using unique progress semantics that allow efficient and deterministic query processing over samples, providing meaningful early results and provenance to data scientists. NOW! enables the provision of early results using significantly fewer resources, thereby enabling a substantial reduction in the cost incurred during such analytics. Finally, I propose NSCALE, a system for efficient and cost-effective complex analytics on large-scale graph-structured data in the Cloud.
The system is based on the key observation that a wide range of complex analysis tasks over graph data require processing and reasoning about a large number of multi-hop neighborhoods or subgraphs in the graph; examples include ego network analysis, motif counting in biological networks, finding social circles in social networks, personalized recommendations, link prediction, etc. These tasks are not well served by existing vertex-centric graph processing frameworks, whose computation and execution models limit the user program to directly accessing the state of a single vertex, resulting in high execution overheads. Further, the lack of support for extracting the relevant portions of the graph that are of interest to an analysis task and loading them onto distributed memory leads to poor scalability. NSCALE allows users to write programs at the level of neighborhoods or subgraphs rather than at the level of vertices, and to declaratively specify the subgraphs of interest. It enables the efficient distributed execution of these neighborhood-centric complex analysis tasks over large-scale graphs, while minimizing resource consumption and communication cost, thereby substantially reducing the overall cost of graph data analytics in the Cloud. The results of our extensive experimental evaluation of these prototypes with several real-world data sets and applications validate the effectiveness of our techniques, which provide orders-of-magnitude reductions in the overheads of distributed data querying and analysis in the Cloud.
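The progressive-analytics idea behind NOW! can be illustrated by evaluating the same aggregate over progressively larger samples, with each early result carrying its provenance (here, just the sample size). The dataset and fixed seed below are illustrative; NOW! itself provides deterministic progress semantics for SQL queries rather than this toy loop:

```python
import random

random.seed(42)  # fixed seed for repeatable samples
population = [random.gauss(100.0, 15.0) for _ in range(1_000_000)]

def progressive_mean(data, sizes):
    """Yield (sample size, estimate) pairs of increasing fidelity."""
    for n in sizes:
        sample = random.sample(data, n)
        yield n, sum(sample) / n  # early result plus its provenance

for n, estimate in progressive_mean(population, [1_000, 10_000, 100_000]):
    print(f"sample={n:>7}: mean ~ {estimate:.2f}")
```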

Relevance:

30.00%

Publisher:

Abstract:

Several companies are trying to improve their operational efficiency by implementing an enterprise resource planning (ERP) system that makes it possible to control the resources of the company in real time. However, the success of the implementation project is not a foregone conclusion; a significant proportion of these projects end in failure, one way or another. Therefore, it is important to investigate ERP system implementation more closely in order to increase understanding of the factors influencing ERP system success and to improve the probability of a successful ERP implementation project. Consequently, this study was initiated because a manufacturing case company wanted to review the success of its ERP implementation project. To be exact, the case company hoped to gain both information about the success of the project and insight for future implementation improvement. This study investigated ERP success specifically by examining factors that influence ERP key-user satisfaction. User satisfaction is one of the most commonly applied indicators of information system success. The research data was mainly collected by conducting theme interviews. The subjects of the interviews were six key-users of the newly implemented ERP system. The interviewees were closely involved in the implementation project. Furthermore, they act as representative users who utilize the new system in everyday business processes. The collected data was analyzed by thematizing. Both data collection and analysis were guided by a theoretical frame of reference based on previous research on the subject. The results of the study aligned with the theoretical framework to a large extent. The four principal factors influencing key-user satisfaction were change management, contractor service, the key-user's system knowledge and the characteristics of the ERP product itself. One of the most significant contributions of the research is that it confirmed the existence of a connection between change management and ERP key-user satisfaction. Furthermore, it discovered two new sub-factors influencing contractor-service-related key-user satisfaction. In addition, the research findings indicated that in order to improve the current level of key-user satisfaction, the case company should pay special attention to system functionality improvement and enhancement of the key-users' knowledge. During similar implementation projects in the future, it will be important to assure the success of change management and contractor-service-related processes.

Relevance:

30.00%

Publisher:

Abstract:

Automation technologies are widely acclaimed to have the potential to significantly reduce energy consumption and energy-related costs in buildings. However, despite the abundance of commercially available technologies, automation in domestic environments keeps meeting with commercial failure. The main reason for this is the development process used to build automation applications, which tends to focus on technical aspects rather than on the needs and limitations of the users. An instance of this problem is the complex and poorly designed home automation front-end that deters customers from investing in a home automation product. On the other hand, developing a usable and interactive interface is a complicated task for developers, due to the multidisciplinary challenges that need to be identified and solved. In this context, the current research work investigates the different design problems associated with developing a home automation interface, as well as the existing design solutions that are applied to these problems. The Qualitative Data Analysis approach was used for collecting data from research papers, and the open coding process was used to cluster the findings. From the analysis of the data collected, requirements for designing the interface were derived. A home energy management functionality for a Web-based home automation front-end was developed as a proof of concept, and a user evaluation was used to assess the usability of the interface. The results of the evaluation showed that this holistic approach to designing interfaces improved usability, which increases the chances of commercial success.

Relevance:

30.00%

Publisher:

Abstract:

Measuring and fulfilling user requirements during medical device development will result in successful products that improve patient safety, improve device effectiveness and reduce product recalls and modifications. Medical device users are an extremely heterogeneous group, and for any one device the users may include patients and their carers as well as various healthcare professionals. There are a number of factors that make capturing user requirements for medical device development challenging, including the ethics and research governance involved in studying users, as well as the inevitable time and financial constraints. Most ergonomics research methods have been developed in response to such practical constraints, and a number of these have potential for medical device development. Some are suitable for specific points in the development cycle, such as contextual inquiry and ethnography; others, such as usability tests and focus groups, may be used throughout development. When designing user research, there are a number of factors that may affect the quality of the data collected, including the sample of users studied, the use of proxies instead of real end-users, and the context in which the research is performed. As different methods are effective in identifying different types of data, ideally more than one method should be used at each point in development; however, financial and time factors often constrain this.