962 results for End-user Queries


Relevance: 30.00%

Abstract:

The goal of this work was to provide professional and amateur writers with a new way of enhancing their productivity and mental well-being, by helping them overcome writer's block and achieve a state of optimal experience while writing. Our approach is based on bringing together different components to create what we call a creative moment. A creative moment is composed of an image, a text, a mood, a location and a color; the color presented in the creative moment varies according to the mood associated with it. With creative moments we hoped that our users could easily trigger their creativity and kick-start their work. We describe the prototyping of a web crowdsourcing platform, named CreativeWall, and of a Microsoft Word Add-In used in the user study, and discuss their implementations. The user study reveals that our approach has a positive influence on participants' productivity when compared with another existing approach. The study also revealed that our approach can ease the achievement of a state of optimal experience by enhancing one of the dimensions of Flow Theory. Finally, we present what we consider to be possible future developments for the concept created during this work.

Relevance: 30.00%

Abstract:

The increasing cost of developing complex software systems has created a need for tools which aid software construction. One area in which significant progress has been made is with the so-called Compiler Writing Tools (CWTs); these aim at automated generation of various components of a compiler and hence at expediting the construction of complete programming language translators. A number of CWTs are already in quite general use, but investigation reveals significant drawbacks with current CWTs, such as lex and yacc. The effective use of a CWT typically requires a detailed technical understanding of its operation and involves tedious and error-prone input preparation. Moreover, CWTs such as lex and yacc address only a limited aspect of the compilation process; for example, actions necessary to perform lexical symbol valuation and abstract syntax tree construction must be explicitly coded by the user. This thesis presents a new CWT called CORGI (COmpiler-compiler from Reference Grammar Input) which deals with the entire 'front-end' component of a compiler; this includes the provision of necessary data structures and routines to manipulate them, both generated from a single input specification. Compared with earlier CWTs, CORGI has a higher-level and hence more convenient user interface, operating on a specification derived directly from a 'reference manual' grammar for the source language. Rather than developing a compiler-compiler from first principles, CORGI has been implemented by building a further shell around two existing compiler construction tools, namely lex and yacc. CORGI has been demonstrated to perform efficiently in realistic tests, both in terms of speed and the effectiveness of its user interface and error-recovery mechanisms.
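The lexing and AST-building work that the abstract says lex and yacc leave to the user can be sketched concretely. The snippet below is a hypothetical, hand-written front-end fragment (it is not CORGI, and the grammar is invented for illustration): the tokenizer performs explicit lexical symbol valuation and the parser explicitly constructs an abstract syntax tree, exactly the boilerplate a tool like CORGI aims to generate from a reference-manual grammar.

```python
# Hypothetical sketch (not CORGI itself): the kind of hand-written
# front-end work -- token valuation and AST construction -- that lex
# and yacc leave to the user, for a toy grammar  n ('+' n)*.
import re
from dataclasses import dataclass

@dataclass
class Num:            # leaf node carrying a valuated lexical symbol
    value: int

@dataclass
class Add:            # interior node of the abstract syntax tree
    left: object
    right: object

TOKEN = re.compile(r"\s*(?:(\d+)|(\+))")

def tokenize(src):
    """Lexing with explicit symbol valuation: the text '42' becomes int 42."""
    pos, out = 0, []
    while pos < len(src):
        m = TOKEN.match(src, pos)
        if not m:
            raise SyntaxError(f"bad input at position {pos}")
        out.append(int(m.group(1)) if m.group(1) else "+")
        pos = m.end()
    return out

def parse(tokens):
    """Explicit AST construction -- the work yacc actions must hand-code."""
    tree = Num(tokens[0])
    for i in range(1, len(tokens), 2):
        assert tokens[i] == "+"
        tree = Add(tree, Num(tokens[i + 1]))
    return tree
```

For example, `parse(tokenize("1 + 2 + 3"))` yields the left-leaning tree `Add(Add(Num(1), Num(2)), Num(3))`.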

Relevance: 30.00%

Abstract:

This paper aims to identify the communication goal(s) of a user's information-seeking query, out of a finite set of within-domain goals, in natural language queries. It proposes using Tree-Augmented Naive Bayes networks (TANs) for goal detection. The problem is formulated as N binary decisions, each performed by a TAN. A comparative study has been carried out against Naive Bayes, fully-connected TANs, and multi-layer neural networks. Experimental results show that TANs consistently give better results when tested on the ATIS and DARPA Communicator corpora.
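The Naive Bayes baseline against which the TANs were compared can be illustrated with a minimal sketch. Everything below is invented for illustration (the goal label, the tiny corpus and the smoothing constant are assumptions, not the paper's setup): it shows one of the N binary decisions as a smoothed bag-of-words classifier, the structure a TAN extends with tree-shaped dependencies between features.

```python
# Sketch of the Naive Bayes baseline (not the paper's TAN): one of the
# N binary decisions -- does a query carry the hypothetical goal
# "flight_booking"?  Training data is invented for illustration.
import math
from collections import Counter

def train(docs, labels, alpha=1.0):
    """Per-class word counts with Laplace smoothing parameter alpha."""
    counts = {c: Counter() for c in (0, 1)}
    priors = Counter(labels)
    for words, y in zip(docs, labels):
        counts[y].update(words)
    vocab = {w for c in counts for w in counts[c]}
    return counts, priors, vocab, alpha

def predict(model, words):
    """Return the class with the higher smoothed log-posterior."""
    counts, priors, vocab, alpha = model
    scores = {}
    for c in (0, 1):
        total = sum(counts[c].values())
        s = math.log(priors[c] / sum(priors.values()))
        for w in words:
            if w in vocab:   # unseen words are simply ignored
                s += math.log((counts[c][w] + alpha) /
                              (total + alpha * len(vocab)))
        scores[c] = s
    return max(scores, key=scores.get)

docs = [["book", "flight", "boston"], ["show", "fares", "flight"],
        ["ground", "transport", "denver"], ["rental", "car", "denver"]]
labels = [1, 1, 0, 0]       # 1 = flight_booking, 0 = other
model = train(docs, labels)
```

Running the N binary detectors independently, as the paper formulates it, would simply mean training one such model per goal.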

Relevance: 30.00%

Abstract:

Clinical Decision Support Systems (CDSSs) need to disseminate expertise in formats that suit different end users and with functionality tuned to the context of assessment. This paper reports research into a method for designing and implementing knowledge structures that facilitate the required flexibility. A psychological model of expertise is represented using a series of formally specified and linked XML trees that capture increasing elements of the model, starting with hierarchical structuring, incorporating reasoning with uncertainty, and ending with delivering the final CDSS. The method was applied to the Galatean Risk and Safety Tool, GRiST, which is a web-based clinical decision support system (www.egrist.org) for assessing mental-health risks. Results of its clinical implementation demonstrate that the method can produce a system that is able to deliver expertise targeted and formatted for specific patient groups, different clinical disciplines, and alternative assessment settings. The approach may be useful for developing other real-world systems using human expertise and is currently being applied to a logistics domain. © 2013 Polish Information Processing Society.
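A toy illustration of the idea of formally specified XML trees may help. The schema, node names and weighting scheme below are hypothetical (GRiST's actual XML structures and uncertainty model are not reproduced here): a hierarchical expertise tree carries a relative-influence weight on each node, and a simple recursive roll-up shows how such a structure can combine hierarchical structuring with reasoning under uncertainty.

```python
# Hypothetical illustration (not GRiST's actual schema): an expertise
# hierarchy as an XML tree, with an invented relative-influence weight
# on each child node to support uncertainty-style roll-up of evidence.
import xml.etree.ElementTree as ET

SOURCE = """
<concept name="overall-risk">
  <concept name="current-intention" weight="0.6">
    <datum name="expressed-plans" weight="0.7"/>
    <datum name="means-available" weight="0.3"/>
  </concept>
  <concept name="history" weight="0.4">
    <datum name="previous-attempts" weight="1.0"/>
  </concept>
</concept>
"""

def weighted_score(node, answers):
    """Roll leaf datum values up the tree using the weights (toy scheme)."""
    children = list(node)
    if not children:                       # a leaf datum
        return answers.get(node.get("name"), 0.0)
    return sum(float(c.get("weight")) * weighted_score(c, answers)
               for c in children)

root = ET.fromstring(SOURCE)
```

With `answers = {"expressed-plans": 1.0, "means-available": 0.0, "previous-attempts": 0.5}`, the roll-up gives 0.6 × 0.7 + 0.4 × 0.5 = 0.62. The same tree, serialised for different audiences, is the kind of flexibility the paper's linked-tree method targets.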

Relevance: 30.00%

Abstract:

Technology intermediaries are seen as potent vehicles for addressing perennial problems in transferring technology from university to industry in developed and developing countries. This paper examines what constitutes effective user-end intermediation in a low-technology, developing economy context, which is an under-researched topic. The social learning in technological innovation framework is extended using situated learning theory in a longitudinal instrumental case study of an exemplar technology intermediation programme. The paper documents the role that academic-related research and advisory centres can play as intermediaries in brokering, facilitating and configuring technology, against the backdrop of a group of small-scale pisciculture businesses in a rural area of Colombia. In doing so, it demonstrates how technology intermediation activities can be optimized in the domestication and innofusion of technology amongst end-users. The design components featured in this instrumental case of intermediation can inform policy making and practice relating to technology transfer from university to rural industry. Future research on this subject should consider the intermediation components put forward, as well as the impact of such interventions, in different countries and industrial sectors. Such research would allow for theoretical replication and help improve technology domestication and innofusion in different contexts, especially in less-developed countries.

Relevance: 30.00%

Abstract:

People manage a spectrum of identities in cyber domains. Profiling individuals and assigning them to distinct groups or classes have potential applications in targeted services, online fraud detection, extensive social sorting, and cyber-security. This paper presents the Uncertainty of Identity Toolset, a framework for the identification and profiling of users from their social media accounts and e-mail addresses. More specifically, in this paper we discuss the design and implementation of two tools of the framework. The Twitter Geographic Profiler tool builds a map of the ethno-cultural communities of a person's friends on Twitter social media service. The E-mail Address Profiler tool identifies the probable identities of individuals from their e-mail addresses and maps their geographical distribution across the UK. To this end, this paper presents a framework for profiling the digital traces of individuals.

Relevance: 30.00%

Abstract:

This project is about retrieving data by range from a database stored on a server without allowing the server to read it. Our goal is to build a database that lets the client maintain the confidentiality of the stored data even though it all resides in a location other than the client's own hard disk. Information written to the server's disk could otherwise be read by a third party, who could sell it or log into accounts and use them to steal money or identities, so the data must be protected from eavesdroppers. To achieve this, we encrypt the data stored on the server, so that only the possessor of the key can read it, while everyone else sees only ciphertext. Accordingly, all key and data management must be done by the client; otherwise a malicious party could easily recover the data and misuse it. All the methods analysed here rely on the data being encrypted by the client. We analyse two methods, both theoretically and practically, for building such databases, and test them with three datasets and with 10, 100 and 1000 queries. The aim of this work is to identify trends that can be useful for future work based on this project.
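One well-known family of techniques for answering range queries over data the server cannot read is order-preserving encryption. The sketch below is a deliberately toy illustration of that idea only: it is not one of the two methods analysed in the project, the scheme shown leaks ordering by design, and a real system would need far stronger constructions.

```python
# Toy sketch of order-preserving "encryption": plaintext keys are mapped
# through a secret strictly increasing function, so the server can run
# ordinary range scans on ciphertext without learning plaintext values.
# Illustration of the idea only -- real OPE schemes are far subtler.
import bisect
import random

def make_ope_key(domain, spread=10, seed=None):
    """Secret mapping: plaintext i -> cumulative sum of random gaps."""
    rng = random.Random(seed)
    table, total = [], 0
    for _ in range(domain):
        total += rng.randint(1, spread)    # gap >= 1 keeps it increasing
        table.append(total)
    return table            # table[i] is the ciphertext of plaintext i

def encrypt(key, value):
    return key[value]

def server_range_query(ciphertexts, lo_ct, hi_ct):
    """Server side: a plain range scan over opaque ciphertexts."""
    cts = sorted(ciphertexts)
    lo = bisect.bisect_left(cts, lo_ct)
    hi = bisect.bisect_right(cts, hi_ct)
    return cts[lo:hi]

key = make_ope_key(100, seed=7)
stored = [encrypt(key, v) for v in (5, 20, 42, 77)]
hits = server_range_query(stored, encrypt(key, 10), encrypt(key, 50))
```

Because the mapping is strictly increasing, the client's range [10, 50] translates directly into a ciphertext range, and the server returns the ciphertexts of 20 and 42 without ever seeing a plaintext.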

Relevance: 30.00%

Abstract:

The paper addresses issues related to the design of a graphical query mechanism that can act as an interface to any object-oriented database system (OODBS) in general, and to the object model of ODMG 2.0 in particular. A brief literature survey of related work is given, and an analysis methodology that allows the evaluation of such languages is proposed. Moreover, the user's view level of a new graphical query language for ODMG 2.0, namely GOQL (Graphical Object Query Language), is presented. The user's view level provides a graphical schema that does not contain any of the perplexing details of an object-oriented database schema, and it also provides a foundation for a graphical interface that can support ad-hoc queries for object-oriented database applications. We illustrate, using an example, the user's view level of GOQL.

Relevance: 30.00%

Abstract:

ABSTRACT

Title of Document: AN ANALYSIS OF THE IMPLEMENTATION AND PERCEIVED EFFECTIVENESS OF THE SCHOOLMAX FAMILY PORTAL
Warren Wesley Watts, Doctor of Education, 2015
Directed By: Margaret J. McLaughlin, Ph.D., Department of Counseling, Higher Education and Special Education

School districts have spent millions of dollars implementing student information systems that offer family portals with web-based access to parents and students. One of the main purposes of these systems is to improve school-to-home communication. Research has shown that when school-to-home communication is implemented effectively, parent involvement improves and student achievement increases (Epstein, 2001). The purpose of the study was to (a) understand why parents used or refrained from using the family portal and (b) determine what barriers to use might exist. To this end, this descriptive study identified the information parent users accessed in the SchoolMAX family portal, determined how frequently parents accessed the portal, and ascertained whether parents perceived an increase in communication with their children about academic matters after they began accessing the portal. Finally, the study sought to identify whether barriers existed that prevented parents from using the family portal. The inquiry employed three data sources to answer these questions: (a) a survey sent electronically to 19,108 parents who registered online for the SchoolMAX family portal; (b) SchoolMAX portal usage data from the student information system for system usage between January 1, 2015 and June 30, 2015; and (c) a paper survey sent to 691 parents of students who had never used the SchoolMAX family portal, in one elementary school, one middle school and one high school that were representative of other schools in the district.

Survey results indicated that parents at all grade levels used the family portal. Usage data confirmed that approximately 19% of students had parents who monitored their progress through the family portal: approximately 25% of students in secondary schools (6th–12th grade) and 16% of students in elementary schools. Of the wide menu of resources available through the SchoolMAX family portal, parents used three areas most frequently: attendance, daily grades, and report cards. Approximately 70% of parents responded that communication with their children about academic matters had improved since they started using the SchoolMAX family portal, and 90% responded that the portal was an effective or somewhat effective tool. Parents also expressed interest in the addition of further information to the portal; the top three additions they wanted to see were homework assignments, high-stakes test scores, and graduation requirements. In addition, 92% of parents reported speaking to their children about academics at least 2 to 3 times per week. Due to the low response rate of the parent non-user survey, potential barriers to using the SchoolMAX family portal could not be addressed in this study; this issue may be a useful topic for future research.

Keywords: school to home communication, student information systems, family portal, parent portal

Relevance: 30.00%

Abstract:

In today’s big data world, data is being produced in massive volumes, at great velocity, and from a variety of sources such as mobile devices, sensors, a plethora of small devices hooked to the internet (the Internet of Things), social networks, communication networks and many others. Interactive querying and large-scale analytics are increasingly used to derive value from this big data. A large portion of this data is stored and processed in the Cloud due to the several advantages the Cloud provides, such as scalability, elasticity, availability, low cost of ownership and overall economies of scale. There is thus a growing need for large-scale cloud-based data management systems that can support real-time ingest, storage and processing of large volumes of heterogeneous data. However, in the pay-as-you-go Cloud environment, the cost of analytics can grow linearly with the time and resources required, so reducing the cost of data analytics in the Cloud remains a primary challenge. In my dissertation research, I have focused on building efficient and cost-effective cloud-based data management systems for application domains that are predominant in cloud computing environments. In the first part of my dissertation, I address the problem of reducing the cost of transactional workloads on relational databases to support database-as-a-service in the Cloud. The primary challenges in supporting such workloads include choosing how to partition the data across a large number of machines, minimizing the number of distributed transactions, providing high data availability, and tolerating failures gracefully.
I have designed, built and evaluated SWORD, an end-to-end scalable online transaction processing system that utilizes workload-aware data placement and replication to minimize the number of distributed transactions, and that incorporates a suite of novel techniques to significantly reduce the overheads incurred both during the initial placement of data and during query execution at runtime. In the second part of my dissertation, I focus on sampling-based progressive analytics as a means to reduce the cost of data analytics in the relational domain. Sampling has traditionally been used by data scientists to get progressive answers to complex analytical tasks over large volumes of data. Typically, this involves manually extracting samples of increasing data size (progressive samples) for exploratory querying. This provides the data scientists with user control, repeatable semantics, and result provenance. However, such solutions result in tedious workflows that preclude the reuse of work across samples. On the other hand, existing approximate query processing systems report early results, but do not offer the above benefits for complex ad-hoc queries. I propose a new progressive data-parallel computation framework, NOW!, that provides support for progressive analytics over big data. In particular, NOW! enables progressive relational (SQL) query support in the Cloud using unique progress semantics that allow efficient and deterministic query processing over samples, providing meaningful early results and provenance to data scientists. NOW! enables the provision of early results using significantly fewer resources, thereby enabling a substantial reduction in the cost incurred during such analytics. Finally, I propose NSCALE, a system for efficient and cost-effective complex analytics on large-scale graph-structured data in the Cloud.
The system is based on the key observation that a wide range of complex analysis tasks over graph data require processing and reasoning about a large number of multi-hop neighborhoods or subgraphs in the graph; examples include ego network analysis, motif counting in biological networks, finding social circles in social networks, personalized recommendations, link prediction, etc. These tasks are not well served by existing vertex-centric graph processing frameworks, whose computation and execution models limit the user program to directly accessing the state of a single vertex, resulting in high execution overheads. Further, the lack of support for extracting the relevant portions of the graph that are of interest to an analysis task and loading them onto distributed memory leads to poor scalability. NSCALE allows users to write programs at the level of neighborhoods or subgraphs rather than at the level of vertices, and to declaratively specify the subgraphs of interest. It enables the efficient distributed execution of these neighborhood-centric complex analysis tasks over large-scale graphs, while minimizing resource consumption and communication cost, thereby substantially reducing the overall cost of graph data analytics in the Cloud. The results of our extensive experimental evaluation of these prototypes with several real-world data sets and applications validate the effectiveness of our techniques, which provide orders-of-magnitude reductions in the overheads of distributed data querying and analysis in the Cloud.
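The progressive-sampling idea behind NOW! can be illustrated with a minimal sketch. The code below is not NOW! (which is a distributed SQL framework with its own progress semantics); it is an invented single-machine analogue showing how an aggregate can be answered over deterministic, progressively larger samples, so that cheap early estimates refine toward the exact answer with repeatable results.

```python
# Minimal illustration of sampling-based progressive analytics (the
# idea behind systems like NOW!, not its implementation): an aggregate
# is computed over progressively larger prefixes of a seeded shuffle,
# giving early estimates that deterministically refine to the exact
# answer once all rows have been seen.
import random

def progressive_mean(data, batches=4, seed=0):
    """Yield (rows_seen, running_mean) after each progressive sample."""
    rows = list(data)
    random.Random(seed).shuffle(rows)      # fixed seed => repeatable runs
    step = max(1, len(rows) // batches)
    total = count = 0
    for i in range(0, len(rows), step):
        for v in rows[i:i + step]:         # fold in the next sample batch
            total += v
            count += 1
        yield count, total / count

data = range(1, 101)                       # true mean is 50.5
estimates = list(progressive_mean(data))
```

Each yielded pair is an early result with known provenance (how many rows it covers), and reruns with the same seed reproduce the same sequence, which is the "repeatable semantics" property the abstract highlights.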

Relevance: 30.00%

Abstract:

Several companies are trying to improve their operational efficiency by implementing an enterprise resource planning (ERP) system that makes it possible to control the resources of the company in real time. However, the success of an implementation project is not a foregone conclusion; a significant share of these projects end in failure, one way or another. Therefore, it is important to investigate ERP system implementation more closely in order to increase understanding of the factors influencing ERP system success and to improve the probability of a successful ERP implementation project. Consequently, this study was initiated because a manufacturing case company wanted to review the success of its ERP implementation project. To be exact, the case company hoped to gain both information about the success of the project and insight for future implementation improvement. This study investigated ERP success specifically by examining factors that influence ERP key-user satisfaction. User satisfaction is one of the most commonly applied indicators of information system success. The research data was mainly collected by conducting theme interviews. The subjects of the interviews were six key-users of the newly implemented ERP system. The interviewees were closely involved in the implementation project. Furthermore, they act as representative users who utilize the new system in everyday business processes. The collected data was analyzed by thematizing. Both data collection and analysis were guided by a theoretical frame of reference based on previous research on the subject. The results of the study aligned with the theoretical framework to a large extent. The four principal factors influencing key-user satisfaction were change management, contractor service, the key-users' system knowledge and the characteristics of the ERP product itself.
One of the most significant contributions of the research is that it confirmed the existence of a connection between change management and ERP key-user satisfaction. Furthermore, it discovered two new sub-factors influencing contractor-service-related key-user satisfaction. In addition, the research findings indicated that in order to improve the current level of key-user satisfaction, the case company should pay special attention to system functionality improvement and to enhancement of the key-users' knowledge. During similar implementation projects in the future, it would be important to ensure the success of change management and contractor-service-related processes.

Relevance: 30.00%

Abstract:

Automation technologies are widely acclaimed to have the potential to significantly reduce energy consumption and energy-related costs in buildings. However, despite the abundance of commercially available technologies, automation in domestic environments keeps meeting commercial failure. The main reason for this is the development process used to build automation applications, which tends to focus more on technical aspects than on the needs and limitations of the users. An instance of this problem is the complex and poorly designed home automation front-ends that deter customers from investing in a home automation product. On the other hand, developing a usable and interactive interface is a complicated task for developers due to the multidisciplinary challenges that need to be identified and solved. In this context, the current research work investigates the design problems associated with developing a home automation interface, as well as the existing design solutions applied to these problems. A Qualitative Data Analysis approach was used for collecting data from research papers, and an open coding process was used to cluster the findings. From the analysis of the data collected, requirements for designing the interface were derived. A home energy management functionality for a Web-based home automation front-end was developed as a proof of concept, and a user evaluation was used to assess the usability of the interface. The results of the evaluation showed that this holistic approach to designing interfaces improved usability, which increases the chances of commercial success.

Relevance: 30.00%

Abstract:

Measuring and fulfilling user requirements during medical device development will result in successful products that improve patient safety, improve device effectiveness and reduce product recalls and modifications. Medical device users are an extremely heterogeneous group, and for any one device the users may include patients and their carers as well as various healthcare professionals. A number of factors make capturing user requirements for medical device development challenging, including the ethical and research governance involved in studying users, as well as the inevitable time and financial constraints. Most ergonomics research methods have been developed in response to such practical constraints, and a number of these have potential for medical device development. Some are suitable for specific points in the device cycle, such as contextual inquiry and ethnography; others, such as usability tests and focus groups, may be used throughout development. When designing user research, a number of factors may affect the quality of the data collected, including the sample of users studied, the use of proxies instead of real end-users, and the context in which the research is performed. As different methods are effective in identifying different types of data, ideally more than one method should be used at each point in development; however, financial and time factors may often constrain this.

Relevance: 30.00%

Abstract:

With the exponential growth in the usage of web-based map services, web GIS applications have become more and more popular. Spatial data indexing, search, analysis, visualization and the resource management of such services are becoming increasingly important to deliver user-desired Quality of Service. First, spatial indexing is typically time-consuming and is not available to end-users. To address this, we introduce TerraFly sksOpen, an open-source Online Indexing and Querying System for Big Geospatial Data. Integrated with the TerraFly Geospatial database [1-9], sksOpen is an efficient indexing and query engine for processing Top-k Spatial Boolean Queries. Further, we provide ergonomic visualization of query results on interactive maps to facilitate the user's data analysis. Second, due to the highly complex and dynamic nature of GIS systems, it is quite challenging for end users to quickly understand and analyze the spatial data, and to efficiently share their own data and analysis results with others. Built on the TerraFly Geospatial database, TerraFly GeoCloud is an extra layer running upon the TerraFly map that can efficiently support many different visualization functions and spatial data analysis models. Furthermore, users can create unique URLs to visualize and share the analysis results. TerraFly GeoCloud also provides the MapQL technology to customize map visualization using SQL-like statements [10]. Third, map systems often serve dynamic web workloads and involve multiple CPU- and I/O-intensive tiers, which makes it challenging to meet the response-time targets of map requests while using resources efficiently. Virtualization facilitates the deployment of web map services and improves their resource utilization through encapsulation and consolidation. Autonomic resource management allows resources to be automatically provisioned to a map service and its internal tiers on demand.
v-TerraFly is a set of techniques to predict the demand of map workloads online and to optimize resource allocation, considering both response time and data freshness as the QoS targets. The proposed v-TerraFly system is prototyped on TerraFly, a production web map service, and evaluated using real TerraFly workloads. The results show that v-TerraFly can predict workload demands 18.91% more accurately and can efficiently allocate resources to meet the QoS target, improving QoS by 26.19% and saving 20.83% in resource usage compared to traditional peak-load-based resource allocation.
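A Top-k Spatial Boolean Query of the kind sksOpen processes can be sketched in a few lines. The example below is purely illustrative: sksOpen uses proper spatial indexes rather than the linear scan shown here, and the point-of-interest data and keyword predicate are invented. It shows the query's two halves, a boolean filter over keyword sets and a distance-based top-k ranking.

```python
# Hypothetical sketch of a Top-k Spatial Boolean Query (not sksOpen's
# indexed implementation): return the k objects nearest to a query
# point whose keyword sets satisfy a boolean AND predicate.
import heapq
import math

def topk_spatial_boolean(objects, query_pt, required, k):
    """objects: iterable of (id, (x, y), set_of_keywords)."""
    qx, qy = query_pt
    matches = ((math.hypot(x - qx, y - qy), oid)
               for oid, (x, y), kws in objects
               if required <= kws)          # boolean AND over keywords
    return [oid for _, oid in heapq.nsmallest(k, matches)]

# Invented points of interest for illustration.
pois = [("cafe-a", (1.0, 1.0), {"cafe", "wifi"}),
        ("cafe-b", (4.0, 0.0), {"cafe"}),
        ("lib-c",  (0.5, 0.5), {"library", "wifi"}),
        ("cafe-d", (0.2, 0.1), {"cafe", "wifi"})]
```

For instance, `topk_spatial_boolean(pois, (0.0, 0.0), {"cafe", "wifi"}, 2)` filters out `cafe-b` and `lib-c` and ranks the rest by distance, returning `["cafe-d", "cafe-a"]`.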

Relevance: 30.00%

Abstract:

In recent decades, the national and international media contexts, in particular television media, have changed significantly. The role that social networks, in particular Facebook, have taken as a content diffusion platform is unquestionable. Nowadays, traditional media (radio, newspapers, television) use the Web's potential to distribute news content (Canelas, 2011). Currently, all TV news channels in Portugal have a website or a page on social networks. TV stations have increased communication channels with the public on digital platforms and study strategies that promote participation and interaction with news content (Cazajeira, 2015). The TV/Internet convergence reaches not only the content but also the consumer, who becomes an interactive and participative audience. This reality imposes on journalism a continuous and updated news production system, dependent on a user permanently connected to the Internet (Cazajeira, 2015). In fact, a report launched by the autonomous institution that supervises and regulates the media in Portugal (ERC, 2015) confirms the relevance that social media has assumed in the publication and consumption of news. Social networks are recognised as one of the most important means of news media consultation, right after television, and the practice of sharing news is very common among consumers of online news in Portugal. Furthermore, when compared to other countries analysed by the Reuters Institute (Newman, Levy, & Nielsen, 2015), Portuguese consumers are those who comment most on online news, preferring social networks to news sites. Considering the importance of new online platforms for journalism, this study presents a quantitative analysis of user participation on the Facebook pages of the three Portuguese TV news channels, specifically RTP3, SIC Notícias and TVI24, between 8 and 14 February 2016.
To track this participation, the following parameters were used: the "like" button, as a way to study demonstrations of interest in a publication; the "sharing" of a particular element, be it a photo, a video or a text, on the user's Timeline, on the Timeline of a friend or by private message, which is important for understanding the dissemination of news content; and the comments area, where the number of comments helps reveal the dynamics and discussion that a publication generates among the public. The results for 1063 posts indicate that, of the analysed parameters ("Like", "Comment" and "Share"), the one with the greatest power of participation among the users of the pages of the three Portuguese TV news channels is the "Like" system, followed by "Share" and then "Comment". The themes that generate the most user participation through the "Like" and "Comment" parameters are "Science and Technology", "Education" and "Humorous/Satirical/Unusual". Finally, publications posted late at night (10pm-1am) have better participation rates.