945 results for User-Computer Interface


Relevance: 30.00%

Abstract:

BACKGROUND. Bioinformatics is commonly presented as a well-assorted list of available web resources. Although diversity of services is positive in general, the proliferation of tools, their dispersion and their heterogeneity complicate the integrated exploitation of this data-processing capacity. RESULTS. To facilitate the construction of software clients and to make integrated use of this variety of tools, we present a modular programmatic application interface (MAPI) that provides the necessary functionality for uniform representation of Web Service metadata descriptors, including the management and invocation protocols of the services they represent. This document describes the main functionality of the framework and how it can be used to facilitate the deployment of new software under a unified structure of bioinformatics Web Services. A notable feature of MAPI is the organization of its functionality into modules associated with specific tasks. This means that only the modules needed by a client have to be installed, and that module functionality can be extended without rewriting the software client. CONCLUSIONS. The potential utility and versatility of the software library have been demonstrated by the implementation of several currently available clients covering different aspects of integrated data processing, ranging from service discovery to service invocation, with advanced features such as workflow composition and asynchronous service calls to multiple types of Web Services, including those registered in repositories (e.g. GRID-based, SOAP, BioMOBY, R-bioconductor and others).
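
To make the modular-client idea concrete, here is a minimal, hypothetical Python sketch of a client assembled from task-specific modules (discovery, invocation); the module and method names are illustrative and do not reproduce the actual MAPI API.

# Minimal sketch of a client built from pluggable, task-specific modules,
# so only the modules a client needs have to be present. Names are illustrative.
class ServiceModule:
    """Base class every functional module implements."""
    name = "base"
    def run(self, **kwargs):
        raise NotImplementedError

class DiscoveryModule(ServiceModule):
    name = "discovery"
    def run(self, keyword, registry):
        # Return metadata descriptors of services whose name matches the keyword.
        return [svc for svc in registry if keyword.lower() in svc["name"].lower()]

class InvocationModule(ServiceModule):
    name = "invocation"
    def run(self, service, inputs):
        # A real module would dispatch to the right protocol (SOAP, BioMOBY, ...).
        return f"invoked {service['name']} ({service['protocol']}) with {inputs}"

class Client:
    """Client assembled from only the modules it needs."""
    def __init__(self, *modules):
        self.modules = {m.name: m for m in modules}
    def call(self, module_name, **kwargs):
        return self.modules[module_name].run(**kwargs)

registry = [{"name": "ClustalW", "protocol": "SOAP"},
            {"name": "BLAST", "protocol": "BioMOBY"}]
client = Client(DiscoveryModule(), InvocationModule())
hits = client.call("discovery", keyword="blast", registry=registry)
print(client.call("invocation", service=hits[0], inputs={"sequence": "ACGT"}))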

Relevance: 30.00%

Abstract:

The EVS4CSCL project starts in the context of a Computer-Supported Collaborative Learning (CSCL) environment. Previous UOC projects created a generic CSCL platform (CLPL) to facilitate the development of CSCL applications. A discussion forum (DF) was the first application developed on top of the framework. This discussion forum differed from other products on the marketplace because of its focus on the learning process. The DF covered the specification and elaboration phases of the discussion learning process, but support for the consensus phase was missing. The consensus phase in a learning environment is not something to be achieved but something to be tested. Such tests are commonly done with Electronic Voting System (EVS) tools, but a consensus test is not an assessment test: we are not evaluating our students by their answers but by their discussion activity. Our educational EVS can be used as a discussion catalyst, proposing a discussion about the results of an initial query, or it can be used after a discussion period to show how the discussion changed the students' minds (consensus). It can also be used by the teacher as a quick way to know where a student needs reinforcement. That is important in a distance-learning environment, where there is no direct contact between teacher and student and it is difficult to detect learning gaps. In an educational environment, assessment is a must, and the EVS provides direct assessment through peer usefulness evaluation and teacher marks on every query created, and indirect assessment through statistics on user activity.
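
As an illustration of how consensus could be quantified from the voting data, the minimal Python sketch below compares majority agreement before and after the discussion period; the abstract does not specify a metric, so the measure and the sample votes are assumptions.

# Minimal sketch of one way to quantify consensus: compare the share of
# students backing the majority option before and after the discussion.
from collections import Counter

def majority_share(votes):
    """Fraction of voters supporting the most popular option."""
    counts = Counter(votes)
    return counts.most_common(1)[0][1] / len(votes)

pre_discussion  = ["A", "B", "A", "C", "B", "B", "A", "C"]
post_discussion = ["B", "B", "A", "B", "B", "B", "A", "B"]

before, after = majority_share(pre_discussion), majority_share(post_discussion)
print(f"Majority agreement moved from {before:.0%} to {after:.0%}")
# A rising majority share after the discussion period is one signal that the
# group converged toward consensus; a flat one can prompt teacher reinforcement.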

Relevance: 30.00%

Abstract:

Our work is concerned with user modelling in open environments. Our proposal is in the line of contributions to advances in user modelling in open environments enabled by Agent Technology, in what has been called the Smart User Model (SUM). Our research contains a holistic study of user modelling in several research areas related to users. We have developed a conceptualization of user modelling by means of examples from a broad range of research areas, with the aim of improving our understanding of user modelling and its role in the next generation of open and distributed service environments. This report is organized as follows: in chapter 1 we introduce our motivation and objectives. Chapters 2, 3, 4 and 5 then provide the state of the art on user modelling. In chapter 2, we give the main definitions of the elements described in the report. In chapter 3, we present a historical perspective on user models. In chapter 4, we review user models from the perspective of different research areas, with special emphasis on the give-and-take relationship between Agent Technology and user modelling. In chapter 5, we describe the main challenges that, from our point of view, need to be tackled by researchers wanting to contribute to advances in user modelling. The study of the state of the art is followed by exploratory work in chapter 6, where we define a SUM and a methodology to deal with it, and present some case studies to illustrate the methodology. Finally, we present the thesis proposal to continue the work, together with its corresponding work schedule and timeline.

Relevance: 30.00%

Abstract:

The explosive growth of the Internet in recent years has been reflected in the ever-increasing diversity and heterogeneity of user preferences, device types and features, and access networks. The heterogeneity of the context of the users who request Web contents is usually not taken into account by the servers that deliver them, which implies that these contents will not always suit users' needs. In the particular case of e-learning platforms this issue is especially critical, since it puts at risk the knowledge acquired by their users. In this paper we present a system that aims to provide the dotLRN e-learning platform with the capability to adapt to its users' context. By integrating dotLRN with a multi-agent hypermedia system, the online courses being undertaken by students, as well as their learning environment, are adapted in real time.
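
Below is a minimal sketch of the kind of context-driven adaptation described above, selecting a course-resource variant from the learner's device, bandwidth and preferences; the profile fields and variants are illustrative assumptions, not the actual dotLRN or multi-agent system interfaces.

# Minimal sketch: pick the course-resource variant that matches the learner's context.
from dataclasses import dataclass

@dataclass
class UserContext:
    device: str          # e.g. "mobile" or "desktop"
    bandwidth_kbps: int  # measured access-network capacity
    prefers_text: bool   # a stored user preference

VARIANTS = {
    "video_hd":   {"min_bandwidth": 3000, "devices": {"desktop"}},
    "video_low":  {"min_bandwidth": 500,  "devices": {"desktop", "mobile"}},
    "transcript": {"min_bandwidth": 0,    "devices": {"desktop", "mobile"}},
}

def select_variant(ctx: UserContext) -> str:
    if ctx.prefers_text:
        return "transcript"
    for name, req in VARIANTS.items():
        if ctx.device in req["devices"] and ctx.bandwidth_kbps >= req["min_bandwidth"]:
            return name
    return "transcript"   # safe fallback when nothing else fits

print(select_variant(UserContext(device="mobile", bandwidth_kbps=800, prefers_text=False)))
# -> "video_low": the delivered content is adapted to the learner's current context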

Relevance: 30.00%

Abstract:

Report for the scientific sojourn carried out at the School of Computing of the University of Dundee, United Kingdom, from 2010 to 2012. This document is a scientific report of the work done, the main results, publications and the accomplishment of the objectives of the 2-year post-doctoral research project with reference number BP-A 00239. The project has addressed the topic of older people (60+) and Information and Communication Technologies (ICT), a topic of growing social and research interest, from a Human-Computer Interaction perspective. Over a 2-year period (June 2010-June 2012), we conducted a classical ethnography of ICT use in a computer clubhouse in Scotland, addressing interaction barriers and strategies, social sharing practices in Social Network Sites, and ICT learning, and carried out rapid ethnographical studies related to geo-enabled ICT and e-government services supporting independent living and active ageing. The main results have provided a much deeper understanding of (i) the everyday use of Computer-Mediated Communication tools, such as video-chats and blogs, and its evolution as older people's experience with ICT increases over time, (ii) cross-cultural aspects of ICT use in the north and south of Europe, (iii) the relevance of cognition over vision in interacting with geographical information and a wide range of ICT tools, despite common stereotypes (e.g. make things bigger), (iv) the important offline-online relationship in providing older people with socially inclusive and meaningful e-services for independent living and active ageing, (v) how older people carry out social sharing practices on the popular YouTube, (vi) their user experiences, and (vii) the challenges they face in ICT learning and the strategies they use to become successful ICT learners over time. The research conducted in this project has been published in 17 papers: 4 in journals (two of which in JCR-indexed journals), 5 in conferences, 4 in workshops and 4 in magazines. Other public output consists of 10 invited talks and seminars.

Relevance: 30.00%

Abstract:

This paper presents a web service architecture for Statistical Machine Translation aimed at non-technical users. A workflow editor allows a user to combine different web services using a graphical user interface. In the current state of this project, the web services have been implemented for a range of sentential and sub-sentential aligners. A common interface and a common data format allow the user to build workflows that exchange different aligners.
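
The sketch below illustrates the common-interface idea in Python: every aligner exposes the same align() signature, so a workflow can swap one aligner for another. The class names and toy alignment logic are assumptions, not the project's actual web service interfaces.

# Minimal sketch: all aligners share one interface and one data format (index links).
from abc import ABC, abstractmethod

class Aligner(ABC):
    """Common interface shared by all sentential / sub-sentential aligners."""
    @abstractmethod
    def align(self, source_sentences, target_sentences):
        """Return a list of (source_index, target_index) links."""

class LengthBasedAligner(Aligner):
    def align(self, source_sentences, target_sentences):
        # Toy stand-in for a sentence aligner: pair sentences in order.
        return list(zip(range(len(source_sentences)), range(len(target_sentences))))

class OverlapAligner(Aligner):
    def align(self, source_sentences, target_sentences):
        # Toy sub-sentential aligner: link pairs that share at least one token.
        links = []
        for i, (s, t) in enumerate(zip(source_sentences, target_sentences)):
            links += [(i, i)] if set(s.split()) & set(t.split()) else []
        return links

def run_workflow(aligner: Aligner, src, tgt):
    return aligner.align(src, tgt)          # any Aligner slots into the workflow

src = ["la casa azul", "el perro"]
tgt = ["the blue house", "the dog"]
print(run_workflow(LengthBasedAligner(), src, tgt))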

Relevance: 30.00%

Abstract:

Introduction: Therapeutic drug monitoring (TDM) aims at optimizing treatment by individualizing dosage regimens based on the measurement of blood concentrations. Maintaining concentrations within a target range requires pharmacokinetic and clinical capabilities. Bayesian calculation represents a gold standard in the TDM approach but requires computing assistance. In the last decades, computer programs have been developed to assist clinicians in this task. The aim of this benchmarking was to assess and compare computer tools designed to support TDM clinical activities. Method: Literature and Internet searches were performed to identify software. All programs were tested on a common personal computer. Each program was scored against a standardized grid covering pharmacokinetic relevance, user-friendliness, computing aspects, interfacing and storage. A weighting factor was applied to each criterion of the grid to account for its relative importance. To assess the robustness of the software, six representative clinical vignettes were also processed through all of them. Results: 12 software tools were identified, tested and ranked, providing a comprehensive review of the available software's characteristics. The number of drugs handled varies widely, and 8 programs offer users the ability to add their own drug models. 10 programs are able to compute Bayesian dosage adaptation based on a blood concentration (a posteriori adjustment), while 9 are also able to suggest an a priori dosage regimen (prior to any blood concentration measurement) based on individual patient covariates such as age, gender and weight. Among those applying Bayesian analysis, one uses a non-parametric approach. The top 2 programs emerging from this benchmark are MwPharm and TCIWorks. The other programs evaluated also have good potential but are less sophisticated (e.g. in terms of storage or report generation) or less user-friendly. Conclusion: Whereas 2 integrated programs are at the top of the ranked list, such complex tools may not fit all institutions, and each software tool must be regarded with respect to the individual needs of hospitals or clinicians. Interest in computing tools to support therapeutic monitoring is still growing. Although developers have put effort into them in recent years, there is still room for improvement, especially in terms of institutional information system interfacing, user-friendliness, capacity of data storage and report generation.
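
For readers unfamiliar with the approach, the following Python sketch shows a generic Bayesian (MAP) a posteriori adjustment for a one-compartment IV bolus model; the population values, residual error and target concentration are illustrative and do not correspond to any of the evaluated programs.

# Minimal sketch of Bayesian (MAP) dose individualization for a one-compartment
# IV bolus model; all numerical values are illustrative placeholders.
import numpy as np
from scipy.optimize import minimize

# Illustrative population priors (log-normal): clearance CL (L/h), volume V (L)
pop_mean = np.log([5.0, 50.0])     # log of typical CL and V
pop_sd = np.array([0.3, 0.2])      # between-subject SD on the log scale
sigma = 1.0                        # residual (assay) SD, mg/L

def predict(log_params, dose, times):
    cl, v = np.exp(log_params)
    return (dose / v) * np.exp(-(cl / v) * times)

def neg_log_posterior(log_params, dose, times, obs):
    pred = predict(log_params, dose, times)
    loglik = -0.5 * np.sum(((obs - pred) / sigma) ** 2)
    logprior = -0.5 * np.sum(((log_params - pop_mean) / pop_sd) ** 2)
    return -(loglik + logprior)

# One measured trough concentration after a 500 mg dose (illustrative data).
dose, times, obs = 500.0, np.array([12.0]), np.array([3.2])
fit = minimize(neg_log_posterior, pop_mean, args=(dose, times, obs))
cl_i, v_i = np.exp(fit.x)          # individual (a posteriori) estimates

# Suggest the dose that reaches a target trough of 4 mg/L at t = 12 h.
target = 4.0
new_dose = target * v_i / np.exp(-(cl_i / v_i) * 12.0)
print(f"MAP CL={cl_i:.2f} L/h, V={v_i:.1f} L, suggested dose={new_dose:.0f} mg")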

Relevance: 30.00%

Abstract:

Objectives: Therapeutic drug monitoring (TDM) aims at optimizing treatment by individualizing dosage regimens based on blood concentration measurements. Maintaining concentrations within a target range requires pharmacokinetic (PK) and clinical capabilities. Bayesian calculation represents a gold standard in the TDM approach but requires computing assistance. The aim of this benchmarking was to assess and compare computer tools designed to support TDM clinical activities. Methods: Literature and Internet searches were performed to identify software. Each program was scored against a standardized grid covering pharmacokinetic relevance, user-friendliness, computing aspects, interfacing and storage. A weighting factor was applied to each criterion of the grid to account for its relative importance. To assess the robustness of the software, six representative clinical vignettes were also processed through all of them. Results: 12 software tools were identified, tested and ranked, providing a comprehensive review of the available software characteristics. The number of drugs handled varies from 2 to more than 180, and some programs integrate different population types. In addition, 8 programs offer the ability to add new drug models based on population PK data. 10 computer tools incorporate Bayesian computation to predict dosage regimens (individual parameters are calculated based on population PK models). All of them are able to compute Bayesian a posteriori dosage adaptation based on a blood concentration, while 9 are also able to suggest an a priori dosage regimen based only on individual patient covariates. Among those applying Bayesian analysis, MM-USC*PACK uses a non-parametric approach. The top 2 programs emerging from this benchmark are MwPharm and TCIWorks. The other programs evaluated also have good potential but are less sophisticated or less user-friendly. Conclusions: Whereas 2 software packages are ranked at the top of the list, such complex tools may not fit all institutions, and each program must be regarded with respect to the individual needs of hospitals or clinicians. Programs should be easy and fast for routine activities, including for non-experienced users. Although interest in TDM tools is growing and effort has been put into them in recent years, there is still room for improvement, especially in terms of institutional information system interfacing, user-friendliness, capability of data storage and automated report generation.
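
The benchmark's weighted scoring can be pictured with a minimal Python sketch like the one below; the criterion weights and raw scores are illustrative placeholders rather than the actual grid used in the study.

# Minimal sketch of a weighted scoring grid used to rank software tools.
weights = {          # relative importance of each criterion group (illustrative)
    "pharmacokinetic_relevance": 0.35,
    "user_friendliness": 0.25,
    "computing_aspects": 0.15,
    "interfacing": 0.15,
    "storage": 0.10,
}

def weighted_score(raw_scores):
    """Combine 0-10 raw scores per criterion into a single weighted score."""
    return sum(weights[c] * raw_scores[c] for c in weights)

programs = {
    "Program A": {"pharmacokinetic_relevance": 9, "user_friendliness": 8,
                  "computing_aspects": 7, "interfacing": 6, "storage": 7},
    "Program B": {"pharmacokinetic_relevance": 7, "user_friendliness": 9,
                  "computing_aspects": 6, "interfacing": 5, "storage": 6},
}

ranking = sorted(programs, key=lambda p: weighted_score(programs[p]), reverse=True)
for name in ranking:
    print(f"{name}: {weighted_score(programs[name]):.2f}")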

Relevance: 30.00%

Abstract:

Provides instructions for using the computer program developed under the research project "The Economics of Reducing the County Road System: Three Case Studies in Iowa". This program operates on an IBM personal computer with 300K of storage; a fixed disk with at least 3 megabytes of storage is required. The computer must be equipped with DOS version 3.0; the programs are written in Fortran. The user's manual describes all data requirements, including network preparation, trip information, and costs for maintenance, reconstruction, etc. Program operation instructions are presented, as well as sample solution output and a listing of the computer programs.

Relevance: 30.00%

Abstract:

Interdependence is the main feature of dyadic relationships and, in recent years, various statistical procedures have been proposed for quantifying and testing this social attribute in different dyadic designs. The purpose of this paper is to implement several functions for this kind of statistical test in an R package, called nonindependence, for use by applied social researchers. A Graphical User Interface (GUI) is also developed to facilitate the use of the functions included in the package. Examples drawn from psychological research and simulated data are used to illustrate how the software works.
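
To illustrate the kind of test the package provides, here is a minimal Python sketch of the pairwise (double-entry) correlation for exchangeable dyads; the data are simulated and the actual package is implemented in R.

# Minimal sketch of one common interdependence index for exchangeable dyads:
# the pairwise (double-entry) intraclass correlation.
import numpy as np

# Simulated scores for 6 dyads: one row per dyad, columns are the two members.
dyads = np.array([
    [4.0, 5.0],
    [2.0, 3.0],
    [5.0, 5.0],
    [1.0, 2.0],
    [3.0, 4.0],
    [4.0, 4.0],
])

# Double-entry: each dyad contributes both (x, y) and (y, x).
x = np.concatenate([dyads[:, 0], dyads[:, 1]])
y = np.concatenate([dyads[:, 1], dyads[:, 0]])
r_pairwise = np.corrcoef(x, y)[0, 1]
print(f"Pairwise intraclass correlation: {r_pairwise:.3f}")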

Relevance: 30.00%

Abstract:

The motivation for this research originated in the abrupt rise and fall of minicomputers, which were initially used both for industrial automation and for business applications because of their significantly lower cost than their predecessors, the mainframes. Later, industrial automation developed its own vertically integrated hardware and software to address the application needs of uninterrupted operation, real-time control and resilience to harsh environmental conditions. This led to the creation of an independent industry, namely industrial automation, used in PLC, DCS, SCADA and robot control systems. This industry today employs over 200,000 people in a profitable, slow-clockspeed context, in contrast to the two mainstream computing industries: information technology (IT), focused on business applications, and telecommunications, focused on communication networks and hand-held devices. Already in the 1990s it was foreseen that IT and communications would merge into one information and communication industry (ICT). The fundamental question of the thesis is: could industrial automation leverage a common technology platform with the newly formed ICT industry? Computer systems dominated by complex instruction set computers (CISC) were challenged during the 1990s by higher-performance reduced instruction set computers (RISC). RISC evolved in parallel with the constant advancement of Moore's law. These developments created the high-performance, low-energy-consumption System-on-Chip (SoC) architecture. Unlike with CISC processors, the RISC processor architecture business is an industry separate from RISC chip manufacturing. It also has several hardware-independent software platforms consisting of an integrated operating system, development environment, user interface and application market, which gives customers more choice through hardware-independent, real-time-capable software applications. An architecture disruption emerged, and the smartphone and tablet markets were formed with new rules and new key players in the ICT industry. Today there are more RISC computer systems running Linux (or other Unix variants) than any other computer system. The astonishing rise of SoC-based technologies and related software platforms in smartphones created, in unit terms, the largest installed base ever seen in the history of computers and is now being further extended by tablets. An underlying additional element of this transition is the increasing role of open source technologies in both software and hardware. This has driven the microprocessor-based personal computer industry, with its few dominating closed operating system platforms, into a steep decline. A significant factor in this process has been the separation of processor architecture from processor chip production, and the merger of operating systems and application development platforms into integrated software platforms with proprietary application markets. Furthermore, pay-by-click marketing has changed the way application development is compensated: freeware, ad-based or licensed, all at a lower price and used by a wider customer base than ever before. Moreover, the concept of a software maintenance contract is very remote in the app world. However, as a slow-clockspeed industry, industrial automation has remained intact during the disruptions based on SoC and related software platforms in the ICT industries.
Industrial automation incumbents continue to supply vertically integrated systems consisting of proprietary software and proprietary, mainly microprocessor-based hardware. They enjoy admirable profitability levels on a very narrow customer base thanks to strong technology-enabled customer lock-in and customers' high risk exposure, as their production depends on fault-free operation of the industrial automation systems. When will this balance of power be disrupted? The thesis suggests how industrial automation could join the mainstream ICT industry and create an information, communication and automation (ICAT) industry. Lately, the Internet of Things (IoT) and weightless networks, a new standard leveraging frequency channels earlier occupied by TV broadcasting, have gradually started to change the rigid world of machine-to-machine (M2M) interaction. It is foreseeable that enough momentum will be created that the industrial automation market will in due course face an architecture disruption empowered by these new trends. This thesis examines the current state of industrial automation and the competition between the incumbents, first through research on cost-competitiveness efforts in captive outsourcing of engineering, research and development, and second through research on process re-engineering in the case of complex-system global software support. Third, we investigate the views of the industry actors, namely customers, incumbents and newcomers, on the future direction of industrial automation, and conclude with our assessment of the possible routes industrial automation could take, considering the looming rise of the Internet of Things (IoT) and weightless networks. Industrial automation is an industry dominated by a handful of global players, each of them focused on maintaining its own proprietary solutions. The rise of de facto standards like the IBM PC, Unix, Linux and SoC, leveraged by IBM, Compaq, Dell, HP, ARM, Apple, Google, Samsung and others, has created new markets of personal computers, smartphones and tablets and will eventually also impact industrial automation through game-changing commoditization and related control-point and business-model changes. This trend will inevitably continue, but the transition to a commoditized industrial automation will not happen in the near future.

Relevance: 30.00%

Abstract:

The Learning Affect Monitor (LAM) is a new computer-based assessment system that integrates basic dimensional evaluation and discrete description of affective states in daily life, based on an autonomously adapting system. Subjects evaluate their affective states in a three-dimensional space (a valence and activation circumplex plus global intensity) and then qualify them using up to 30 adjective descriptors chosen from a list. The system gradually adapts to the user, enabling the affect descriptors it presents to be increasingly relevant. An initial study with 51 subjects, using a 1-week time-sampling with 8 to 10 randomized signals per day, produced n = 2,813 records with good reliability (e.g., a response rate of 88.8% and a mean split-half reliability of .86), user acceptance, and usability. Multilevel analyses show circadian and hebdomadal patterns, and significant individual and situational variance components in the basic dimension evaluations. Validity analyses indicate sound assignment of qualitative affect descriptors in the bidimensional semantic space according to the circumplex model of basic affect dimensions. The LAM assessment module can be implemented on different platforms (palmtop, desktop, mobile phone) and provides very rapid and meaningful data collection, preserving complex and interindividually comparable information in the domain of emotion and well-being.
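
One plausible way the adaptive descriptor list could work is to rank adjectives by how often the user has selected them, as in the minimal Python sketch below; the actual LAM adaptation mechanism is not described here, so this is purely illustrative.

# Minimal sketch of an adaptive descriptor list: adjectives the user selects
# often surface first. This is an assumed mechanism, not the LAM algorithm.
from collections import Counter

DESCRIPTORS = ["happy", "calm", "tense", "tired", "excited", "sad"]  # shortened list

class AdaptiveDescriptorList:
    def __init__(self, descriptors):
        self.usage = Counter({d: 0 for d in descriptors})

    def present(self, top_n=6):
        """Return descriptors ordered by past usage (most used first)."""
        return [d for d, _ in self.usage.most_common(top_n)]

    def record_selection(self, chosen):
        for d in chosen:
            self.usage[d] += 1

lam = AdaptiveDescriptorList(DESCRIPTORS)
lam.record_selection(["calm", "happy"])
lam.record_selection(["calm"])
print(lam.present())   # "calm" now ranks first for this user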

Relevance: 30.00%

Abstract:

This paper presents a customizable system used to develop a collaborative multi-user problem-solving game. It addresses the increasing demand for appealing informal learning experiences in museum-like settings. The system facilitates remote collaboration by allowing groups of learners to communicate through a videoconferencing system and to simultaneously interact through a shared multi-touch interactive surface. A user study with 20 user groups indicates that the game facilitates collaboration between local and remote groups of learners. The videoconference and multi-touch surface acted as communication channels, attracted students' interest, facilitated engagement, and promoted inter- and intra-group collaboration, favoring intra-group collaboration. Our findings suggest that augmenting videoconferencing systems with a shared multi-touch space offers new possibilities and scenarios for remote collaborative environments and collaborative learning.

Relevance: 30.00%

Abstract:

Two portable Radio Frequency IDentification (RFID) systems (made by Texas Instruments and HiTAG) were developed and tested for bridge scour monitoring by the Department of Civil and Environmental Engineering at the University of Iowa (UI). Both systems consist of three similar components: 1) a passive cylindrical transponder (transmitter/responder) 2.2 cm in length; 2) a low-frequency reader (~134.2 kHz); and 3) an antenna (with a rectangular or hexagonal loop). The Texas Instruments system can only read one smart particle at a time, while the HiTAG system was successfully modified at UI by adding an anti-collision feature. The HiTAG system was equipped with four antennas and could simultaneously detect thousands of smart particles located in close proximity. Computer code was written in C++ at UI for the HiTAG system to allow simultaneous, multiple readouts of smart particles under different flow conditions. The code was written for the Windows XP operating system and has a user-friendly Windows interface that provides detailed information on each smart particle, including its identification number, location (orientation in x, y, z), and the instant the particle was detected. These systems were examined within the context of this innovative research in order to identify the RFID system best suited for performing autonomous bridge scour monitoring. A comprehensive laboratory study that included 142 experimental runs, plus limited field testing, was performed to test the code and determine the performance of each system in terms of transponder orientation, transponder housing material, maximum antenna-transponder detection distance, minimum inter-particle distance and antenna sweep angle. The two RFID systems' capabilities to predict scour depth were also examined using pier models. The findings can be summarized as follows: 1) The first system (Texas Instruments) reads one smart particle at a time, and its effective read range was about 3 ft (~1 m). The second system (HiTAG) had similar detection ranges but permitted the addition of an anti-collision system to facilitate the simultaneous identification of multiple smart particles (transponders placed into marbles). It was therefore concluded that the HiTAG system, with the anti-collision feature (or a system with similar features), would be preferable to a single-readout system for bridge scour monitoring, as the former can provide repetitive readings at multiple locations, which helps in predicting the scour-hole bathymetry along with the maximum scour depth. 2) The HiTAG system provided reliable measures of the scour depth (z-direction) and the locations of the smart particles in the x-y plane within a distance of about 3 ft (~1 m) from the four antennas. A Multiplexer HTM4-I allowed the simultaneous use of four antennas with the HiTAG system. The four hexagonal loop antennas permitted the complete identification of the smart particles in an x, y, z orthogonal system as a function of time. The HiTAG system can also be used to measure the rate of sediment movement (in kg/s or tonnes/hr). 3) The maximum detection distance of the antenna did not change significantly for buried particles compared to particles tested in the air. Thus, low-frequency RFID systems (~134.2 kHz) are appropriate for monitoring bridge scour because their waves can penetrate water and sand bodies without significant loss of signal strength.
4) The pier-model experiments in a flume with the first RFID system showed that the system was able to successfully predict the maximum scour depth when used with a single particle in the vicinity of the pier model where the scour hole was expected. The pier-model experiments with the second RFID system, performed in a sandbox, showed that the system was able to successfully predict the maximum scour depth when two scour balls were used in the vicinity of the pier model where the scour hole developed. 5) The preliminary field experiments with the second RFID system, at the Raccoon River, IA, near the Railroad Bridge (located upstream of the 360th Street Bridge, near Booneville), showed that the RFID technology is transferable to the field. A practical method still needs to be developed for facilitating the placement of the smart particles within the river bed. This method needs to be straightforward enough for Department of Transportation (DOT) and county road working crews so it can be easily implemented at different locations. 6) Since the inception of this project, further research has shown significant progress in RFID technology. This includes the availability of waterproof RFID systems with passive or active transponders with detection ranges of up to 60 ft (~20 m) within the water-sediment column. These systems have anti-collision and can accommodate up to 8 powerful antennas, which can significantly increase the detection range. Such systems need to be further considered and modified for performing automatic bridge scour monitoring. The knowledge gained from the two systems, including the software, needs to be adapted to the new systems.
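
Below is a minimal Python sketch of a multi-antenna, anti-collision readout loop of the kind described above; the reader interface, particle positions and depth heuristic are hypothetical (the original code was written in C++ against the HiTAG hardware).

# Minimal sketch: poll four antennas, log tag detections, and use the deepest
# detected (i.e. exposed/mobilized) particle as a rough scour-depth estimate.
import time
from dataclasses import dataclass

@dataclass
class Detection:
    tag_id: str          # transponder identification number
    antenna: int         # which of the four antennas saw the tag (1-4)
    timestamp: float     # when the tag was detected

# Known burial positions (x, y, z in meters) of each smart particle (illustrative).
PARTICLE_POSITIONS = {"TAG-001": (0.2, 0.1, -0.15), "TAG-002": (0.3, 0.0, -0.30)}

def read_tags(antenna):
    """Placeholder for the hardware call that returns tag IDs seen by one antenna."""
    return []            # a real reader would return the IDs currently in range

def poll_once():
    detections = []
    for antenna in range(1, 5):                 # four hexagonal loop antennas
        for tag_id in read_tags(antenna):
            detections.append(Detection(tag_id, antenna, time.time()))
    return detections

def estimate_scour_depth(detections):
    """Deepest detected particle approximates the current scour depth."""
    depths = [PARTICLE_POSITIONS[d.tag_id][2] for d in detections
              if d.tag_id in PARTICLE_POSITIONS]
    return min(depths) if depths else None      # most negative z = deepest

if __name__ == "__main__":
    print(estimate_scour_depth(poll_once()))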

Relevance: 30.00%

Abstract:

The European Space Agency's Gaia mission will create the largest and most precise three-dimensional chart of our galaxy (the Milky Way) by providing unprecedented position, parallax, proper motion, and radial velocity measurements for about one billion stars. The resulting catalogue will be made available to the scientific community and will be analyzed in many different ways, including the production of a variety of statistics. The latter will often entail the generation of multidimensional histograms and hypercubes as part of the precomputed statistics for each data release, or for scientific analysis involving either the final data products or the raw data coming from the satellite instruments. In this paper we present and analyze a generic framework that allows hypercube generation to be done easily within a MapReduce infrastructure, providing all the advantages of the new Big Data analysis paradigm but without dealing with any specific interface to the lower-level distributed system implementation (Hadoop). Furthermore, we show how executing the framework with different data storage model configurations (i.e. row- or column-oriented) and compression techniques can considerably improve the response time of this type of workload for the currently available simulated data of the mission. In addition, we put forward the advantages and shortcomings of deploying the framework on a public cloud provider, benchmark it against other popular solutions available (which are not always the best for such ad hoc applications), and describe some user experiences with the framework, which was employed for a number of dedicated workshops on astronomical data analysis techniques.
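
A minimal Python sketch of hypercube (multidimensional histogram) generation in map/reduce style follows; the two binning dimensions and the tiny simulated catalogue are illustrative and do not reflect the actual framework interface.

# Minimal sketch: the mapper emits one (cell, 1) pair per source and the reducer
# sums the counts per hypercube cell, mirroring a MapReduce histogram job.
from collections import Counter
from itertools import chain

def mapper(star):
    """Emit one ((mag_bin, parallax_bin), 1) pair per source."""
    mag_bin = int(star["g_mag"])                  # 1-magnitude-wide bins
    plx_bin = int(star["parallax"] // 0.5)        # 0.5 mas-wide bins
    yield (mag_bin, plx_bin), 1

def reducer(pairs):
    """Sum the counts for each hypercube cell."""
    cube = Counter()
    for key, count in pairs:
        cube[key] += count
    return cube

stars = [                                         # tiny simulated catalogue
    {"g_mag": 14.2, "parallax": 2.3},
    {"g_mag": 14.8, "parallax": 2.1},
    {"g_mag": 17.5, "parallax": 0.4},
]
hypercube = reducer(chain.from_iterable(mapper(s) for s in stars))
print(hypercube)    # Counter({(14, 4): 2, (17, 0): 1})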