959 results for user generated services
Abstract:
The use of Information and Communication Technology (ICT) by adults with learning disabilities has been positively promoted over the past decade. More recently, policy statements and guidance from the UK government have underlined the importance of ICT for adults with learning disabilities specifically, as well as for the population in general, through the potential it offers for social inclusion. The aim of the present study was to provide a picture of how ICT is currently being used within one organisation providing specialist services for adults with learning disabilities and, more specifically, of its use in promoting community participation. Nine day services and 14 residential services were visited as part of a qualitative study to answer three main questions: What kinds of computer programs are being used? What are they being used for? Does this differ between day and residential services? Computers and digital cameras were used for a wide range of activities, and ‘mainstream’ programs were used more widely than those developed for specific user groups. In day services, ICT was often embedded in wider projects and activities, whilst use in houses was based around leisure interests. In both contexts, ICT was being used to facilitate communication, although this was linked more to within-service activities than to activities external to service provision.
Abstract:
Background - Green infrastructure is a strategic network of green spaces designed to deliver ecosystem services to human communities. Green infrastructure is a convenient concept for urban policy makers, but the term is used too generically and with limited understanding of the relative values or benefits of different types of green space and how these complement one another. At a finer, more practical scale, little consideration is given to the composition of the plant communities, yet this is what ultimately defines the extent of service provision. This paper calls for greater attention to be paid to urban plantings with respect to ecosystem service delivery and for plant science to engage more fully in identifying those plants that promote various services. Scope - Many urban plantings are designed on aesthetics alone, with limited thought given to how plant choice/composition provides other ecosystem services. Research is beginning to demonstrate, however, that landscape plants provide a range of important services, such as helping to mitigate floods and alleviate heat islands, but that not all species are equally effective. The paper reviews a number of important services and demonstrates how genotype choice radically affects service delivery. Conclusions - Although research is in its infancy, data are being generated that relate plant traits to specific services, thereby helping to identify genotypes that optimise service delivery. The urban environment, however, will become exceedingly bland if future planting is simply restricted to monocultures of a few ‘functional’ genotypes. Therefore, further information is required on how to design plant communities where the plants identified: (a) provide more than a single benefit (multi-functionality), (b) complement each other in maximising the range of benefits that can be delivered in one location, and (c) continue to maintain public acceptance through diversity.
The identification/development of functional landscape plants is an exciting and potentially high impact arena for plant science.
Abstract:
OWL-S is an application of OWL, the Web Ontology Language, that describes the semantics of Web Services so that their discovery, selection, invocation and composition can be automated. The research literature reports the use of UML diagrams for the automatic generation of Semantic Web Service descriptions in OWL-S. This paper demonstrates a higher level of automation by generating complete Web applications from OWL-S descriptions that have themselves been generated from UML. Previously, we proposed an approach for processing OWL-S descriptions in order to produce MVC-based skeletons for Web applications. The OWL-S ontology undergoes a series of transformations in order to generate a Model-View-Controller application implemented by a combination of Java Beans, JSP, and Servlet code, respectively. In this paper, we show in detail the documents produced at each processing step. We highlight the connections between OWL-S specifications and executable code in the various Java dialects and show the Web interfaces that result from this process.
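The Bean/JSP/Servlet split described in this abstract can be illustrated with a small, purely hypothetical sketch: each atomic process yields a Java Bean stub for the model, a JSP page for the view, and a Servlet for the controller. The function, names and fields below are invented for illustration and are not the paper's actual transformation tool.

```python
# Hypothetical mapping of one OWL-S atomic process to MVC artifact stubs.
# Names and structure are illustrative only.

def mvc_skeleton(process_name, inputs, outputs):
    """Derive Model/View/Controller stub names for one OWL-S atomic process."""
    bean = {                                   # Model: a Java Bean holding the process I/O
        "class": f"{process_name}Bean",
        "fields": inputs + outputs,
    }
    view = f"{process_name.lower()}.jsp"       # View: a JSP page for input/result display
    controller = f"{process_name}Servlet"      # Controller: a Servlet dispatching the call
    return {"model": bean, "view": view, "controller": controller}

skeleton = mvc_skeleton("BookFlight", ["origin", "destination"], ["confirmation"])
print(skeleton["controller"])  # BookFlightServlet
```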
Abstract:
The main objective of this degree project is to implement an Application Availability Monitoring (AAM) system named Softek EnView for Fujitsu Services. The aim of implementing the AAM system is to proactively identify end user performance problems, such as application and site performance, before the actual end users experience them. No matter how well applications and sites are designed and no matter how well they meet business requirements, they are useless to the end users if their performance is slow and/or unreliable. It is important for the customers to find out whether end user problems are caused by the network or by application malfunction. Softek EnView comprises the following components: Robot, Monitor, Reporter, Collector and Repository. The implemented system, however, is designed to use only some of these elements: Robot, Reporter and Repository. Robots can be placed at any key user location and are dedicated to customers, which means that as the number of customers increases, the number of Robots increases with it. To make the AAM system ideal for the company to use, it was integrated with Fujitsu Services' centralised monitoring system, BMC PATROL Enterprise Manager (PEM); that was in fact the reason for deciding to drop the EnView Monitor element. After the system was fully implemented, the AAM system was ready for production. Transactions were (and are) written and deployed on Robots to simulate typical end user actions. These transactions are configured to run at certain intervals, which are defined together with customers. While they are driven against customers' applications automatically, the transactions continuously collect availability and response time data. In case of a failure in a transaction, the Robot immediately quits the transaction and writes detailed information to a log file about what went wrong and which element failed while going through an application.
An alert is then generated by a BMC PATROL Agent based on this data and is sent to the BMC PEM. Fujitsu Services' monitoring room receives the alert and reacts to it according to the ITIL incident management process, alerting system specialists to critical incidents so that problems are resolved. As a result of the data gathered by the Robots, weekly reports containing detailed statistics and trend analyses of the ongoing quality of IT services are provided for the customers.
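The robot behaviour described above (run a scripted transaction, record timings, and on failure log the failing step and raise an alert) can be sketched as follows. This is an illustration only: the function and step names are invented and are not Softek EnView's or BMC PATROL's API.

```python
# Illustrative synthetic-transaction robot loop; all names are hypothetical.
import logging
import time

def run_transaction(steps):
    """Run scripted steps in order; return (ok, elapsed_seconds, failed_step)."""
    start = time.monotonic()
    for name, action in steps:
        try:
            action()
        except Exception as exc:
            logging.error("step %r failed: %s", name, exc)  # detail for the log file
            return False, time.monotonic() - start, name
    return True, time.monotonic() - start, None

def failing_search():
    raise TimeoutError("no response")          # simulated application failure

steps = [("login", lambda: None), ("search", failing_search)]
ok, elapsed, failed = run_transaction(steps)
if not ok:
    alert = f"transaction failed at step {failed!r}"  # would be forwarded to the manager
print(ok, failed)  # False search
```

In a real deployment the loop would repeat at the customer-defined interval and the alert string would be handed to the central monitoring system rather than printed.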
Abstract:
Wholesale trade has an intermediate position between manufacturing and retail in the distribution channel. In modern economies, consumers buy few, if any, products directly from the manufacturer or producer. Instead, it is the wholesaler who is in direct contact with producers, buying goods in larger quantities and selling them in smaller quantities to retailers. Traditionally, the main function of a wholesaler has been to push goods along the distribution channel from producer to retailer, or other non-end user. However, the function of wholesalers usually goes beyond the process of the physical distribution of goods. Wholesalers also arrange storage, perform market analyses, promote trade or provide technical support to consumers (Riemers 1998). The existence of wholesalers (and other intermediaries) in the distribution channel is based on the effective and efficient performance of distribution services that are needed by producers and other members of the supply chain. Producers usually do not enjoy the economies of scale that they have in production when it comes to providing distribution services (Rosenbloom 2007), and this creates a space for wholesalers or other intermediaries. Even though recent developments in the distribution channel indicate that traditional wholesaling activities now also compete with other supply chain organizations, wholesaling still remains an important activity in many economies (Quinn and Sparks, 2007). In 2010, the Swedish wholesale trade sector consisted of approximately 46,000 firms and generated an annual turnover of 1,300 billion SEK (Företagsstatistiken, Statistics Sweden). In terms of turnover, wholesaling accounts for 20% of the gross domestic product and is thereby the third largest industry, behind manufacturing and a composite group of firms in other sectors of the service industry but ahead of retailing. This indicates that the wholesale trade sector is an important part of the Swedish economy.
The position of wholesaling is further reinforced when measuring productivity growth. Measured in terms of value added per employee, wholesaling experienced the largest productivity growth of all industries in the Swedish economy during the years 2000 through 2010. The fact that wholesale trade is one of the important parts of a modern economy, and the positive development of the Swedish wholesale trade sector in recent decades, leads to several questions related to industry dynamics. The three topics that will be examined in this thesis are firm entry, firm relocation and firm growth. The main question to be answered by this thesis is: what factors influence new firm formation, firm relocation and firm growth in the Swedish wholesale trade sector?
Abstract:
The Internet of Things is an umbrella term for the development whereby different types of devices can be equipped with sensors and data chips connected to the internet. A growing amount of data means a growing demand for solutions that can store, track, analyse and process data. One way to meet this demand is to use cloud-based real-time analytics services. Multi-tenant and single-tenant are two types of architectures for cloud-based real-time analytics services that can be used to solve the problems of handling the increased data volumes. These architectures differ in development complexity. In this work, Azure Stream Analytics represents a multi-tenant architecture and HDInsight/Storm represents a single-tenant architecture. To compare cloud-based real-time analytics services with different architectures, we chose to use the usability criteria of efficiency, effectiveness and user satisfaction. We wanted answers to the following questions related to these three usability criteria: • What similarities and differences can we see in development times? • Can we identify differences in functionality? • How do developers experience the two analytics services? We used a design-and-creation strategy to develop two proof-of-concept prototypes and collected data using several data collection methods. The proof-of-concept prototypes comprised two artefacts, one for Azure Stream Analytics and one for HDInsight/Storm. We evaluated them by carrying out five different scenarios, each with 2-5 sub-goals. We simulated streaming data by letting an application continuously generate random data, which we analysed using the two real-time analytics services. We used observations to document how we worked during the development of the analytics services, to measure development times and to identify differences in functionality.
We also used questionnaires to find out what users thought of the analytics services. We concluded that Azure Stream Analytics was initially more usable than HDInsight/Storm, but that the differences decreased over time. Azure Stream Analytics was easier to work with for simpler analyses, while HDInsight/Storm offered a wider range of functionality.
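The evaluation setup this abstract describes, an application that continuously emits random readings which the analytics job then aggregates, can be sketched in plain Python. No cloud service is involved here; the tumbling-window average below is a generic stand-in for the windowed queries such services offer, and all names are illustrative.

```python
# Minimal stand-in for a streaming-analytics evaluation: a random event source
# plus a tumbling-window aggregation over the stream.
import random

def simulate_stream(n, seed=42):
    """Continuously emit random sensor readings (here: a finite sample of n)."""
    rng = random.Random(seed)
    for i in range(n):
        yield {"device": f"sensor-{i % 3}", "value": rng.uniform(0.0, 100.0)}

def tumbling_window_avg(events, window_size):
    """Average the 'value' field over consecutive fixed-size windows."""
    window, averages = [], []
    for event in events:
        window.append(event["value"])
        if len(window) == window_size:
            averages.append(sum(window) / window_size)
            window = []
    return averages

avgs = tumbling_window_avg(simulate_stream(100), window_size=10)
print(len(avgs))  # 10
```

In the actual study the same role is played by the services' own query layers (e.g. a Stream Analytics query or a Storm topology), with the window driven by time rather than event count.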
Abstract:
Background: Established in 1999, the Swedish Maternal Health Care Register (MHCR) collects data on pregnancy, birth, and the postpartum period for most pregnant women in Sweden. Antenatal care (ANC) midwives manually enter data into the Web application that is designed for the MHCR. The aim of this study was to investigate midwives' experiences, opinions and use of the MHCR. Method: A national, cross-sectional questionnaire survey, addressing all Swedish midwives working in ANC, was conducted from January to March 2012. The questionnaire included demographic data, pre-formulated statements with six response options ranging from zero to five (0 = totally disagree and 5 = totally agree), and opportunities to add information or further clarification in the form of free-text comments. Parametric and non-parametric methods and logistic regression analyses were applied, and content analysis was used for the free-text comments. Results: The estimated response rate was 53.1%. Most participants were positive towards the Web application and the variables included in the MHCR. Midwives exclusively engaged in patient-related work tasks perceived the register as burdensome (70.3%) and 44.2% questioned the benefit of the register. The corresponding figures for midwives also engaged in administrative supervision were 37.8% and 18.5%, respectively. Direct electronic transfer of data from the medical records to the MHCR was emphasised as a significant future improvement. In addition, the midwives suggested that new variables of interest should be included in the MHCR, e.g., infertility, outcomes of previous pregnancy and birth, and complications of the index pregnancy. Conclusions: In general, the MHCR was valued positively, although perceived as burdensome. Direct electronic transfer of data from the medical records to the MHCR is a prioritised issue to facilitate the working situation for midwives.
Finally, the data suggest that the MHCR is an underused source for operational planning and quality assessment in local ANC centres.
Abstract:
The importance of understanding the process by which a result was generated in an experiment is fundamental to science. Without such information, other scientists cannot replicate, validate, or duplicate an experiment. We define provenance as the process that led to a result. With large-scale in silico experiments, it becomes increasingly difficult for scientists to record the process documentation that can be used to retrieve the provenance of a result. Provenance Recording for Services (PReServ) is a software package that allows developers to integrate process documentation recording into their applications. PReServ has been used by several applications and its performance has been benchmarked.
Abstract:
Service discovery in large scale, open distributed systems is difficult because of the need to filter out services suitable to the task at hand from a potentially huge pool of possibilities. Semantic descriptions have been advocated as the key to expressive service discovery, but the most commonly used service descriptions and registry protocols do not support such descriptions in a general manner. In this paper, we present a protocol, its implementation and an API for registering semantic service descriptions and other task/user-specific metadata, and for discovering services according to these. Our approach is based on a mechanism for attaching structured and unstructured metadata, which we show to be applicable to multiple registry technologies. The result is an extremely flexible service registry that can be the basis of a sophisticated semantically-enhanced service discovery engine, an essential component of a Semantic Grid.
Abstract:
Scientific workflows are becoming a valuable tool for scientists to capture and automate e-Science procedures. Their success brings the opportunity to publish, share, reuse and repurpose this explicitly captured knowledge. Within the myGrid project, we have identified key resources that can be shared, including complete workflows, fragments of workflows and constituent services. We have examined the alternative ways these can be described by their authors (and subsequent users), and developed a unified descriptive model to support their later discovery. By basing this model on existing standards, we have been able to extend existing Web Service and Semantic Web Service infrastructure whilst still supporting the specific needs of the e-Scientist. myGrid components enable a workflow life-cycle that extends beyond execution to include discovery of previous relevant designs, reuse of those designs, and subsequent publication. Experience with example groups of scientists indicates that this cycle is valuable. The growing number of workflows and services means more work is needed to support the user in effective ranking of search results, and to support the repurposing process.
Abstract:
While most countries are committed to increasing access to safe water and thereby reducing child mortality, there is little consensus on how to actually improve water services. One important proposal under discussion is whether to privatize water provision. In the 1990s Argentina embarked on one of the largest privatization campaigns in the world, including the privatization of local water companies covering approximately 30 percent of the country’s municipalities. Using the variation in ownership of water provision across time and space generated by the privatization process, we find that child mortality fell 8 percent in the areas that privatized their water services, and that the effect was largest (26 percent) in the poorest areas. We check the robustness of these estimates using cause-specific mortality. While privatization is associated with significant reductions in deaths from infectious and parasitic diseases, it is uncorrelated with deaths from causes unrelated to water conditions.
Abstract:
INTRODUCTION With the advent of Web 2.0, social networking websites like Facebook, MySpace and LinkedIn have become hugely popular. According to Nilsen (2009), the top five social networking websites count almost 250 million unique users globally, with the time people spend on those networks having increased 63% between 2007 and 2008. Facebook alone saw a massive growth of 566% in number of minutes in the same period. Their appeal is clear: they enable users to easily form persistent networks of friends with whom they can interact and share content. Users then use those networks to keep in touch with current friends and to reconnect with old ones. However, online social network services have rapidly evolved into highly complex systems which contain a large amount of personally salient information derived from large networks of friends. Since that information varies from simple links to music, photos and videos, users not only have to deal with the huge amount of data generated by them and their friends but also with the fact that it is composed of many different media forms. Users are presented with increasing challenges, especially as the number of friends on Facebook rises. One example of a problem is when a user performs a simple task like finding a specific friend in a group of 100 or more friends: he would most likely have to go through several pages and make several clicks until he finds the one he is looking for. Another example is a user with more than 100 friends, each of whom performs a status update or another action per day, resulting in some 10 updates per hour to keep up with. That is plausible, especially since Facebook's change of direction to rival Twitter by encouraging users to update their status as they do on Twitter. As a result, to better present the web of information connected to a user, the use of better visualizations is essential.
The visualizations used nowadays on social networking sites have not gone through major changes during their lifetimes. The sites have added more functionality and given more tools to their users, but the core of their visualization has not changed. The information is still presented in a flat way, in lists/groups of text and images, which cannot show the extra connections between pieces of information. Those extra connections can give new meaning and insights to the user, allowing him to see more easily whether content is important to him and what information is related to it. However, showing extra connections of information while still allowing the user to easily navigate through it and get the needed information at a quick glance is difficult. The use of color coding, clusters and shapes then becomes essential to attain that objective. Taking into consideration the advances in computer hardware in the last decade and the software platforms available today, there is also the opportunity to take advantage of 3D: we are at a phase where both the hardware and the software available are ready for the use of 3D on the web. With the extra dimension brought by 3D, visualizations can be constructed that show the content and its related information to the user on the same screen and in a clear way, while also allowing a great deal of interactivity. Another opportunity to create better information visualizations presents itself in the form of open APIs, specifically the ones made available by the social networking sites. Those APIs allow any developer to create applications or sites that take advantage of the huge amount of information on those networks; specifically, in this case, they open the door to the creation of new social network visualizations. Nevertheless, the third dimension is by itself not enough to create a better interface for a social networking website; there are some challenges to overcome.
One of those challenges is to make the user understand what the system is doing during the interaction. Even though that is important in 2D visualizations, it becomes essential in 3D due to the extra dimension. To overcome that challenge it is necessary to use the principles of animation defined by the artists at Walt Disney Studios (Johnston et al., 1995). By applying those principles in the development of the interface, the actions of the system in response to user inputs become clear and understandable. Furthermore, a user study needs to be performed so that users' main goals and motivations while navigating the social network are revealed. Their goals and motivations are important in the construction of an interface that reflects user expectations, but they also help in the development of appropriate metaphors. Those metaphors have an important role in the interface because, if correctly chosen, they help the user understand the elements of the interface instead of making him memorize them. The last challenge is the use of 3D visualization on the web: there have been several attempts to bring 3D to it, mainly with the various versions of VRML, which were destined to fail due to the hardware limitations of the time. In the last couple of years, however, there has been a movement to build the tools that finally allow developers to use 3D in a useful way, using X3D or OpenGL but especially Flash. This thesis argues that there is a need for a better social network visualization that shows all the dimensions of the information connected to the user and that allows him to move through it. But there are several characteristics the new visualization has to possess for it to present a real gain in usability to Facebook's users.
The first quality is to have the friends at the core of its design, and the second is to make use of the metaphor of circles of friends to separate users into groups according to the order of friendship. To achieve that, several methods have to be used: 3D, to gain an extra dimension for presenting relevant information; direct manipulation, to make the interface comprehensible, predictable and controllable; and animation, to make all the action on the screen perceptible to the user. Additionally, with the opportunity given by 3D-enabled hardware, the Flash platform, through the use of the Flash 3D engine Papervision3D, and the Facebook platform, everything is in place to make the visualization possible. Even so, there are challenges to overcome, like making the system's actions in 3D understandable to the user and creating correct metaphors that allow the user to understand the information and options available to him. This thesis document is divided into six chapters. Chapter 2 reviews the literature relevant to the work described in this thesis. Chapter 3 describes the design stage that resulted in the application presented in this thesis. Chapter 4 covers the development stage, describing the architecture and the components that compose the application. In Chapter 5 the usability test process is explained and the results obtained through it are presented and analyzed. Finally, Chapter 6 presents the conclusions arrived at in this thesis.
Abstract:
Ubiquitous computing raises new usability challenges that cut across design and development. We are particularly interested in environments enhanced with sensors, public displays and personal devices. How can prototypes be used to explore the users' mobility and interaction, both explicit and implicit, to access services within these environments? Because of the potential cost of development and design failure, these systems must be explored using early assessment techniques and versions of the systems that would not disrupt the target environment if deployed. These techniques are required to evaluate alternative solutions before making the decision to deploy the system on location. This is crucial for successful development that anticipates potential user problems and reduces the cost of redesign. This thesis reports on the development of a framework for the rapid prototyping and analysis of ubiquitous computing environments that facilitates the evaluation of design alternatives. It describes APEX, a framework that brings together an existing 3D Application Server with a modelling tool. APEX-based prototypes enable users to navigate a virtual world simulation of the envisaged ubiquitous environment. By this means users can experience many of the features of the proposed design. Prototypes and their simulations are generated in the framework to help the developer understand how the user might experience the system. These are supported through three different layers: a simulation layer (using a 3D Application Server), a modelling layer (using a modelling tool) and a physical layer (using external devices and real users). APEX allows the developer to move between these layers to evaluate different features. It supports exploration of user experience through observation of how users might behave with the system, as well as enabling exhaustive analysis based on models. The models support checking of properties based on patterns.
These patterns are based on ones that have been used successfully in interactive system analysis in other contexts. They help the analyst to generate and verify relevant properties. Where these properties fail, the scenarios suggested by the failure provide an important aid to redesign.
Abstract:
With the current proliferation of sensor-equipped mobile devices such as smartphones and tablets, location-aware services are expanding beyond the mere efficiency and work-related needs of users, evolving to incorporate fun, culture and the social life of users. Today people on the move have more and more connectivity and are expected to be able to communicate with their usual and familiar social networks. That means communication not only with their peers and colleagues, friends and family, but also with unknown people who might share their interests or curiosities, or who happen to use the same social network. Through social networks, location-aware blogging and cultural mobile applications, relevant information is now available at specific geographical locations and open to feedback and conversations among friends as well as strangers. In fact, smartphone technologies nowadays allow users to post and retrieve content while on the move, often relating to specific physical landmarks or locations, engaging and being engaged in conversations with strangers as much as with their own social network. The use of such technologies and applications while on the move can often lead people to serendipitous discoveries and interactions. Throughout our thesis we engage in a twofold investigation: how can we foster and support serendipitous discoveries, and what are the best interfaces for doing so? To read and write content while on the move is a cognitively intensive task: while the map serves the function of orienting the user, it also absorbs most of the user’s concentration. In order to address this kind of cognitive overload issue, with Breadcrumbs we propose a 360-degree interface that enables the user to find content around them by scanning the surrounding space with the mobile device.
Using a loose metaphor of a periscope and harnessing the power of the smartphone's sensors, we designed an interactive interface capable of detecting content around the users and displaying it in the form of two-dimensional bubbles whose diameter depends on their distance from the user. Users navigate the space in relation to the content they are curious about, rather than in relation to the traditional geographical map. Through this model we envisage alleviating some of the cognitive overload generated by having to continuously reconcile a two-dimensional map with the real three-dimensional space surrounding the user, while also using the content as a navigational filter. Furthermore, this alternative means of navigating space might bring serendipitous discoveries about places that users were not aware of or intending to reach. We conclude our thesis with the evaluation of the Breadcrumbs application and the comparison of the 360-degree interface with a traditional two-dimensional map displayed on the device screen. Results from the evaluation are compiled into findings and insights for future use in designing and developing context-aware mobile applications.
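The scanning idea described above can be sketched with the standard great-circle distance and bearing formulas: keep only the content points that fall inside the device's field of view around the current heading, and shrink each bubble's diameter with distance. Everything here (field-of-view angle, scaling constants, coordinates) is illustrative and not taken from the Breadcrumbs implementation.

```python
# Sketch of 360-degree content scanning: filter points by bearing relative to
# the device heading, and size bubbles inversely with distance. Constants and
# coordinates are made up for illustration.
import math

def distance_bearing(lat1, lon1, lat2, lon2):
    """Great-circle distance (metres) and initial bearing (degrees) between two points."""
    R = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlat, dlon = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dlat / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlon / 2) ** 2
    dist = 2 * R * math.asin(math.sqrt(a))                       # haversine distance
    y = math.sin(dlon) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dlon)
    bearing = (math.degrees(math.atan2(y, x)) + 360) % 360       # initial bearing
    return dist, bearing

def visible_bubbles(user, heading, points, fov=60.0, max_diam=80.0):
    """Points within the field of view, each paired with a distance-scaled diameter."""
    bubbles = []
    for p in points:
        dist, brg = distance_bearing(user[0], user[1], p[0], p[1])
        delta = min((brg - heading) % 360, (heading - brg) % 360)  # angular offset
        if delta <= fov / 2:
            bubbles.append((p, max_diam / (1 + dist / 100.0)))     # nearer => bigger
    return bubbles

user = (32.6669, -16.9241)                     # illustrative coordinates
points = [(32.6700, -16.9241),                 # due north of the user: visible at heading 0
          (32.6669, -16.9100)]                 # due east: outside a 60-degree field of view
print(len(visible_bubbles(user, heading=0.0, points=points)))  # 1
```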