12 results for Web Service

in Helda - Digital Repository of University of Helsinki


Relevance:

60.00%

Publisher:

Abstract:

Usability testing is a productive and reliable method for evaluating the usability of software. Planning and implementing a test and analyzing its results are typically considered time-consuming, and applying usability methods in general is considered difficult. Because of this, usability testing is often given lower priority than more concrete issues in software engineering projects. The intranet Alma is a web service whose users are the students and personnel of the University of Helsinki. Alma was published in 2004 at the opening ceremony of the university. It has 45,000 users and replaces several former university network services. In this thesis, the usability of the intranet Alma is evaluated with usability testing. The testing method applied has been lightened to make its adoption as easy as possible. In the test, six students each tried to solve nine test tasks with Alma. As a result, concrete usability problems were described in the final test report. Goal orientation was given less weight in the applied usability testing, and the system was tested only with test users from the largest user group. The usability test found general usability problems that occurred regardless of the task or the user. However, further evaluation is needed: in addition to the general usability problems, there are task-dependent problems whose solution requires a thorough gathering of users' goals. The basic structure and central functionality of Alma, for example its navigation, contain serious and frequently recurring usability problems. It would be worthwhile to verify the redesigned user interface solutions to these problems before putting them into use. In the long run, the users' goals that the software is intended to support are worth gathering, and software development should be based on these goals.
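The distinction the abstract draws between general and task-dependent problems can be made mechanical: a problem observed across several tasks is general, while one tied to a single task is task-dependent. A minimal sketch, with a wholly invented test log (user, task, and problem identifiers are hypothetical, not from the thesis):

```python
# Hypothetical observation log from a usability test: each entry records
# which user hit which problem on which task. IDs are invented here.
observations = [
    ("u1", "task1", "nav-breadcrumb"),
    ("u2", "task1", "nav-breadcrumb"),
    ("u3", "task4", "search-labels"),
    ("u1", "task7", "nav-breadcrumb"),
    ("u4", "task7", "nav-breadcrumb"),
]

# Group the tasks in which each problem was observed.
tasks_per_problem = {}
for _, task, problem in observations:
    tasks_per_problem.setdefault(problem, set()).add(task)

# A problem seen in more than one task is "general"; otherwise it is
# "task-dependent" and would need goal-oriented follow-up evaluation.
general = [p for p, ts in tasks_per_problem.items() if len(ts) > 1]
task_dependent = [p for p, ts in tasks_per_problem.items() if len(ts) == 1]

print(general)         # problems occurring regardless of the task
print(task_dependent)  # problems tied to a single task
```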

Relevance:

60.00%

Publisher:

Abstract:

In Helsinki's Evangelical Lutheran congregations, the share of church members among all people living in each congregation's geographical area varies from 62.4 per cent in Paavali to 80.7 per cent in Munkkiniemi. The boundaries of the congregations are about to be redrawn to level out these differences. In this thesis, the reasons for the differences between Helsinki's districts were studied more closely. The data consisted of statistical information gathered from the Population Information System of Finland. It included information, by age group, on the population register keeper, marital status, native tongue, level of education and gender at the end of 2005. Additional data, gathered from the Helsinki Region Statistics web service, covered the dwellings, income levels and main activities of the inhabitants of the districts. The main method was stepwise linear regression; crosstabulation and correlation matrices were used as supporting methods. The result of the study is a statistical model that explains 72.2 per cent of the variation in membership shares between the congregations. The dependent variable was the share of Evangelical Lutheran church members in each district. The independent variables were the share of people with a native tongue other than Finnish or Swedish, the share of rented apartments, the share of apartments with four rooms and a kitchen, the share of detached houses, and the shares of women and of people with no income in the districts. The independent variables in the model reflect the number of foreigners, the housing stock, gender and the income level of the population. A high share of foreigners, people with no income and rented apartments explains a low share of church members; conversely, a high share of church members in a district is explained by large apartments, detached houses and the number of women living there.
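Forward stepwise regression of the kind used here greedily adds the predictor that most improves the model's R² until the gain becomes negligible. A minimal sketch with synthetic data: the predictor names echo the thesis's variables, but the values, coefficients and stopping threshold are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the district data: four candidate predictors
# and a response driven by two of them (columns 0 and 2) plus noise.
n = 30
X = rng.normal(size=(n, 4))
y = 0.8 * X[:, 0] - 0.5 * X[:, 2] + rng.normal(scale=0.3, size=n)
names = ["foreign_lang", "rented", "four_rooms", "no_income"]

def r_squared(cols):
    """R^2 of an ordinary least-squares fit on the given columns."""
    A = np.column_stack([np.ones(n)] + [X[:, c] for c in cols])
    resid = y - A @ np.linalg.lstsq(A, y, rcond=None)[0]
    return 1 - resid.var() / y.var()

# Forward selection: keep adding the best predictor while the R^2
# gain exceeds a (hypothetical) threshold of 0.01.
selected, best = [], 0.0
while True:
    gains = {c: r_squared(selected + [c]) for c in range(4) if c not in selected}
    c, r2 = max(gains.items(), key=lambda kv: kv[1])
    if r2 - best < 0.01:
        break
    selected.append(c)
    best = r2

print([names[c] for c in selected], round(best, 3))
```

With this setup the two truly predictive columns are picked up first, and uninformative predictors are left out once their marginal gain falls below the threshold.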

Relevance:

60.00%

Publisher:

Abstract:

Researchers and developers in academia and industry would benefit from a facility that enables them to easily locate, licence and use the kind of empirical data they need for testing and refining their hypotheses, and to deposit and disseminate their data, e.g. to support replication and validation of reported scientific experiments. To answer these needs, initially in Finland, an ongoing project at the University of Helsinki and its collaborators is creating a user-friendly web service for researchers and developers in Finland and other countries. In our talk, we describe ongoing work to create a palette of extensive but easily available Finnish language resources and technologies for the research community, including lexical resources, wordnets, morphologically tagged corpora, dependency syntactic treebanks and parsebanks, open-source finite-state toolkits and libraries, and language models to support text analysis and processing at the customer site. The first publicly available results are also presented.

Relevance:

30.00%

Publisher:

Abstract:

With the recent increase in interest in service-oriented architectures (SOA) and Web services, developing applications with the Web services paradigm has become feasible. Web services are self-describing, platform-independent computational elements. New applications can be assembled from a set of previously created Web services, which are composed into a new service that uses its components to perform a certain task. This is the idea of service composition. To bring service composition to the mobile phone, I have created Interactive Service Composer for mobile phones. With Interactive Service Composer, the user can build service compositions on his mobile phone, consisting of Web services or services available from the phone itself. The service compositions are reusable and can be saved in the phone's memory, and previously saved compositions can be used in new compositions. While developing applications for mobile phones has been possible for some time, the usability of the solutions is not the same as when developing for desktop computers. When developing for mobile phones, the developer has to consider design decisions more carefully. With less processing power and memory, applications cannot perform as well as on desktop PCs. On the other hand, this does not remove the appeal of developing applications for mobile devices.
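The core idea of service composition — chaining services so that one service's output feeds the next, and treating the resulting pipeline as a reusable service in its own right — can be sketched in a few lines. The service names and return values below are invented for illustration, not taken from Interactive Service Composer:

```python
# Each service is modeled as a callable; a composition chains them.
def compose(*services):
    """Build a reusable composition that pipes a value through services."""
    def composed(value):
        for service in services:
            value = service(value)
        return value
    return composed

# Hypothetical stand-ins for a remote Web service and a local phone service.
geocode = lambda city: {"city": city, "lat": 60.17, "lon": 24.94}
weather = lambda loc: f"forecast for {loc['city']}"

# A composition is itself a value: it can be saved, reused, and nested
# inside further compositions, mirroring the reuse described above.
city_forecast = compose(geocode, weather)
print(city_forecast("Helsinki"))  # forecast for Helsinki
```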

Relevance:

30.00%

Publisher:

Abstract:

In recent years, XML has been accepted as the message format for several applications. Prominent examples include SOAP for Web services, XMPP for instant messaging, and RSS and Atom for content syndication. This XML usage is understandable, as the format itself is a well-accepted standard for structured data, and it has excellent support in many popular programming languages, so inventing an application-specific format no longer seems worth the effort. Simultaneously with XML's rise to prominence, there has been an upsurge in the number and capabilities of various mobile devices. These devices are connected through various wireless technologies to larger networks, and a goal of current research is to integrate them seamlessly into these networks. These two developments seem to be at odds with each other. XML, as a fully text-based format, takes up more processing power and network bandwidth than binary formats would, whereas the battery-powered nature of mobile devices dictates that energy, in both processing and transmission, be utilized efficiently. This thesis presents the work we have performed to reconcile these two worlds. We present a message transfer service that we have developed to address what we have identified as the three key issues: XML processing at the application level, a more efficient XML serialization format, and the protocol used to transfer messages. Our presentation includes both a high-level architectural view of the whole message transfer service and detailed descriptions of the three new components. These components consist of an API, and an associated data model, for XML processing designed for messaging applications; a binary serialization format for the data model of the API; and a message transfer protocol providing two-way messaging capability with support for client mobility. We also present relevant performance measurements for the service and its components. As a result of this work, we do not consider XML to be inherently incompatible with mobile devices. As the fixed networking world moves toward XML for interoperable data representation, so should the wireless world, to provide a better-integrated networking infrastructure. However, the problems that XML adoption brings touch all of the higher layers of application programming, so instead of concentrating simply on the serialization format, we conclude that improvements need to be made in an integrated fashion at all of these layers.
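The bandwidth argument for a binary XML serialization can be made concrete by encoding the same message both ways. The naive length-prefixed binary form below is invented for illustration; it is not the format developed in the thesis, but it shows why the text form costs more bytes on the wire:

```python
import struct
import xml.etree.ElementTree as ET

# The same small message as text XML and as a naive binary encoding.
msg = ET.Element("reading")
ET.SubElement(msg, "sensor").text = "s1"
ET.SubElement(msg, "value").text = "23.5"
text_form = ET.tostring(msg)  # full tag names repeated in the output

# Hypothetical binary form: 1-byte tag id plus a length-prefixed
# UTF-8 payload per field, instead of spelled-out element names.
TAGS = {"sensor": 1, "value": 2}
binary_form = b""
for child in msg:
    payload = child.text.encode("utf-8")
    binary_form += struct.pack("BB", TAGS[child.tag], len(payload)) + payload

print(len(text_form), len(binary_form))  # the text form is several times larger
```

A real format must also carry the tag table (or negotiate it out of band), which is part of why such designs are protocol-level decisions rather than drop-in replacements.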

Relevance:

30.00%

Publisher:

Abstract:

As the virtual world grows more complex, finding a standard way of storing data becomes increasingly important. Ideally, each data item would be brought into the computer system only once. References to data items need to be cryptographically verifiable, so the data can maintain its identity while being passed around. This way there will be only one copy of the user's family photo album, while the user can use multiple tools to show or manipulate the album. Copies of the user's data could be stored on some of his family members' computers, on some of his own computers, but also at some online services he uses. When all actors operate over one replicated copy of the data, the system automatically avoids a single point of failure: the data will not disappear with one computer breaking or one service provider going out of business. One shared copy also makes it possible to delete a piece of data from all systems at once, at the user's request. In our research we tried to find a model that would make data manageable for users and make it possible to have the same data stored at various locations. We studied three systems, Persona, Freenet and GNUnet, that suggest different models for protecting user data. The main application areas of the systems studied include securing online social networks, providing an anonymous web, and preventing censorship in file sharing. Each of the systems studied stores user data on machines belonging to third parties. The systems differ in the measures they take to protect their users from data loss, forged information, censorship and monitoring. All of the systems use cryptography to secure the names used for content and to protect the data from outsiders. Based on the gained knowledge, we built a prototype platform called Peerscape, which stores user data in a synchronized, protected database. Data items themselves are protected with cryptography against forgery, but not encrypted, as the focus has been on disseminating the data directly among family and friends instead of letting third parties store the information. We turned the synchronizing database into a peer-to-peer web by exposing its contents through an integrated HTTP server. The REST-like HTTP API supports development of applications in JavaScript. To evaluate the platform's suitability for application development we wrote some simple applications, including a public chat room, a BitTorrent site, and a flower-growing game. During our early tests we came to the conclusion that using the platform for simple applications works well. As web standards develop further, writing applications for the platform should become easier. Any system this complex will have its problems, and we are not expecting our platform to replace the existing web, but we are fairly impressed with the results and consider our work important from the perspective of managing user data.
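The cryptographically verifiable references described above are typically realized by content addressing: a data item is referred to by a hash of its bytes, so a copy fetched from any replica can be checked against the reference. A minimal sketch (the specific scheme is illustrative, not Peerscape's actual design):

```python
import hashlib

def make_ref(data: bytes) -> str:
    """Derive a content-addressed reference from the item's bytes."""
    return hashlib.sha256(data).hexdigest()

def verify(data: bytes, ref: str) -> bool:
    """Check that bytes served by any replica match the reference."""
    return make_ref(data) == ref

photo = b"family photo album bytes"
ref = make_ref(photo)

# Any replica (a relative's machine, an online service) can serve the
# bytes; the receiver verifies them against the reference it holds.
print(verify(photo, ref))              # True
print(verify(b"tampered bytes", ref))  # False
```

This gives forgery protection without encryption, matching the design choice noted above: the bytes stay readable to whoever receives them, but their identity is fixed.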


Relevance:

30.00%

Publisher:

Abstract:

When authors of scholarly articles decide where to submit their manuscripts for peer review and eventual publication, they often base their choice of journals on very incomplete information about how well the journals serve the authors' purposes of informing about their research and advancing their academic careers. The purpose of this study was to develop and test a new method for benchmarking scientific journals, providing more information to prospective authors. The method estimates a number of journal parameters, including readership, scientific prestige, time from submission to publication, acceptance rate and the service provided by the journal during the review and publication process. The method draws on data directly obtainable from the web, data that can be calculated from it, data obtained from publishers and editors, and data obtained through author surveys, and has been tested on three different sets of journals, each from a different discipline. We found a number of problems with the different data acquisition methods, which limit the extent to which the method can be used. Publishers and editors are reluctant to disclose important information they have at hand (i.e. journal circulation, web downloads, acceptance rate). Some important parameters (for instance average time from submission to publication, or regional spread of authorship) can be calculated but require quite a lot of work, and it can be difficult to get reasonable response rates to author surveys. All in all, we believe that the method we propose, taking a "service to authors" perspective as a basis for benchmarking scientific journals, is useful and can provide information that is valuable to prospective authors in selected scientific disciplines.
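One of the calculable parameters mentioned above, average time from submission to publication, reduces to simple date arithmetic once per-article dates have been collected (the laborious part). A sketch with invented dates:

```python
from datetime import date

# Hypothetical (submitted, published) dates for articles in one journal;
# in the method described above these would be scraped or compiled
# from the articles themselves.
articles = [
    (date(2009, 1, 10), date(2009, 7, 20)),
    (date(2009, 3, 5), date(2010, 1, 15)),
    (date(2009, 6, 1), date(2009, 12, 1)),
]

# Average delay in days across the sampled articles.
delays = [(published - submitted).days for submitted, published in articles]
avg_days = sum(delays) / len(delays)
print(round(avg_days, 1))  # 230.0
```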

Relevance:

30.00%

Publisher:

Abstract:

Service researchers and practitioners have repeatedly claimed that customer service experiences are essential to all businesses. Comprehension of how service experience is characterised in research is therefore an essential element for developing it further. The importance of a deeper understanding of the phenomenon of service experience has been acknowledged by several researchers, such as Carú and Cova, and Vargo and Lusch. Furthermore, Service-Dominant (S-D) logic has integrated service experience into the value discussion by emphasising in its foundational premises that value is phenomenologically (experientially) determined. The present study analyses how the concept of service experience has been characterised in previous research and puts forward three ways to characterise it: 1) phenomenological service experience, which relates to the value discussion in S-D logic and interpretative consumer research; 2) process-based service experience, which relates to understanding service as a process; and 3) outcome-based service experience, which relates to understanding service experience as one element in models linking a number of variables or attributes to various outcomes. Focusing on the phenomenological service experience, the theoretical purpose of the study is to characterise service experience on the basis of the phenomenological approach. To do so, an additional methodological purpose was formulated: to find a suitable methodology for analysing service experience based on the phenomenological approach. The study relates phenomenology to a philosophical Husserlian and social constructionist tradition that studies phenomena as they appear in our experience in a social context. The study introduces the Event-Based Narrative Inquiry Technique (EBNIT), which combines critical events with narratives and metaphors. EBNIT enabled the analysis of lived and imaginary service experiences as expressed in individual narratives. The study presents findings of eight case studies within service innovation in Web 2.0, mobile services, location-aware services and public services in the municipal sector. Customers' and service managers' stories about their lived private and working lifeworlds were the foundation for their ideal service experiences. In general, the thesis finds that service experiences are (1) subjective, (2) context-specific, (3) cumulative, (4) partially socially constructed, (5) both lived and imaginary, (6) temporally multi-dimensional, and (7) iteratively related to perceived value. In addition to the customer service experience, the thesis brings empirical evidence of the managerial service experience of front-line managers experiencing the service they manage and develop in their working lifeworld. The study contributes to S-D logic, service innovation, and service marketing and management in general by characterising service experience on the basis of the phenomenological approach and integrating it into the value discussion. Additionally, the study offers a methodological approach for further exploration of service experiences. Managerial implications are discussed in conjunction with the case studies and in relation to service innovation.