770 results for Social web
Abstract:
Introduction: Axillary web syndrome (AWS) can result in early post-operative and long-term difficulties following lymphadenectomy for cancer and should be recognised by clinicians. This systematic review was conducted to synthesise information on AWS clinical presentation and diagnosis, frequency, natural progression, grading, pathoaetiology, risk factors, symptoms, interventions and outcomes. Methods: Electronic searches were conducted using Cochrane, Pubmed, MEDLINE, CINAHL, EMBASE, AMED, PEDro and Google Scholar until June 2013. The methodological quality of included studies was determined using the Downs and Black checklist. Narrative synthesis of results was undertaken. Results: Thirty-seven studies with methodological quality scores ranging from 11 to 26 on a 28-point scale were included. AWS diagnosis relies on inspection and palpation; grading has not been validated. AWS was reported in up to 85.4% of patients. Biopsies identified venous and lymphatic pathoaetiology, with five studies suggesting lymphatic involvement. Twenty-one studies reported AWS occurrence within eight post-operative weeks, but late occurrence beyond 3 months is possible. Pain was commonly reported, with shoulder abduction more restricted than flexion. AWS symptoms usually resolve within 3 months but may persist. Risk factors may include extensiveness of surgery, younger age, lower body mass index, ethnicity and healing complications. Low-quality studies suggest that conservative approaches including analgesics, non-steroidal anti-inflammatory drugs and/or physiotherapy may be safe and effective for early symptom reduction. Conclusions: AWS appears common. Current evidence for the treatment of AWS is insufficient to provide clear guidance for clinical practice. Implications for Cancer Survivors: Cancer survivors should be informed about AWS. Further investigation of pathoaetiology and long-term outcomes is needed, as is research to determine effective treatment using standardised outcome measures.
Abstract:
Rationale, aims and objectives: Patients with both cardiac disease and diabetes have poorer health outcomes than patients with only one chronic condition. While evidence indicates that internet-based interventions may improve health outcomes for patients with a chronic disease, there is no literature on internet programs specific to cardiac patients with comorbid diabetes. This study therefore aimed to develop a specific web-based program and then explore patients' perspectives on the usefulness of the new program. Methods: An interpretive approach was taken, using semi-structured interviews with a purposive sample of eligible patients with type 2 diabetes and a cardiac condition in a metropolitan hospital in Brisbane, Australia. Thematic analysis was undertaken to describe the perceived usefulness of the newly developed Heart2heart webpage. Results: Themes identified included confidence in hospital health professionals and reliance on doctors to manage conditions. Patients found the webpage useful for managing their conditions at home. Conclusions: The new Heart2heart webpage provided a positive and useful resource. Further research to determine the potential influence of this resource on patients' self-management behaviours is paramount. Implications for practice include using multimedia strategies to provide information to patients with comorbid cardiac disease and type 2 diabetes, and further developing and enhancing such strategies.
Abstract:
The study of social phenomena in the World Wide Web has been rather fragmentary, and there is no coherent, research-based theory about sense of community in a Web environment. Sense of community refers to the part of one's self-concept that involves perceiving oneself as belonging to, and feeling affinity with, a certain social grouping. The present study aimed to find evidence for sense of community in a Web environment, and specifically to find out what the most critical psychological factors of sense of community would be. Based on known characteristics of real-life communities and sense of community, and a few occasional studies of Web communities, it was hypothesized that the following factors would be the most critical ones and that they could be grouped as prerequisites, facilitators and consequences of sense of community: awareness and social presence (prerequisites); criteria for membership and borders, common purpose, social interaction and reciprocity, norms and conformity, and common history (facilitators); trust and accountability (consequences). In addition to critical factors, the present study aimed to find out whether this kind of grouping would be valid. Furthermore, the effect of Web community members' background variables on sense of community was of interest. In order to answer these questions, an online questionnaire was created and tested. It included propositions reflecting factors that precede, facilitate and follow the sense of community in a Web environment. A factor analysis was conducted to identify the critical factors, and analyses of variance were conducted to see whether the grouping into prerequisites, facilitators and consequences was valid and how the background variables affect the sense of community in a Web environment. The results indicated that the psychological structure of sense of community in a Web environment could not be represented by critical variables grouped as prerequisites, facilitators and consequences. Most factors did facilitate the sense of community, but based on these data it could not be argued that some of the factors chronologically precede sense of community and some follow it. Instead, the factor analysis revealed that the most critical factors in sense of community in a Web environment are 1) reciprocal involvement, 2) basic trust in others, 3) similarity and common purpose of members, and 4) shared history of members. The most influential background variables were the member's own participation activity (indicated by reading and writing messages) and the phase in the membership lifecycle (from visitor to leader). The more a member participated and the further along the membership life cycle he was, the stronger his sense of community. There are many descriptions of sense of community, but the present study was one of the first to actually measure the phenomenon in a Web environment, and it produced well-documented, valid results based on a large dataset, showing that sense of community in a Web environment is possible and clarifying its psychological structure, thus enhancing the understanding of sense of community in the Web environment. Keywords: sense of community, Web community, psychology of the Internet
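The abstract describes reducing questionnaire items to a small set of latent factors (factor analysis) before comparing respondent groups. The minimal Python sketch below illustrates that kind of analysis; the synthetic Likert responses, the number of factors and the item indices are hypothetical placeholders, not the study's instrument or results.

```python
# A minimal sketch (not the study's actual pipeline) of exploring survey responses
# with factor analysis. All data here are synthetic placeholders.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n_respondents, n_items = 300, 12
responses = rng.integers(1, 6, size=(n_respondents, n_items)).astype(float)  # Likert 1-5

fa = FactorAnalysis(n_components=4, random_state=0)
scores = fa.fit_transform(responses)   # respondent scores on 4 latent factors
loadings = fa.components_.T            # item loadings, shape (n_items, 4)

# Items loading highly on the same factor would be interpreted together,
# e.g. as "reciprocal involvement" or "basic trust".
for f in range(4):
    top_items = np.argsort(-np.abs(loadings[:, f]))[:3]
    print(f"Factor {f + 1}: strongest items {top_items.tolist()}")
```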
Abstract:
This article examines Greek activists’ use of a range of communication technologies, including social media, blogs, citizen journalism sites, Web radio, and anonymous networks. Drawing on Anna Tsing’s theoretical model, the article examines key frictions around digital technologies that emerged within a case study of the antifascist movement in Athens, focusing on the period around the 2013 shutdown of Athens Indymedia. Drawing on interviews with activists and analysis of online communications, including issue networks and social media activity, we find that the antifascist movement itself is created and recreated through a process of productive friction, as different groups and individuals with varying ideologies and experiences work together.
Abstract:
Social network sites (SNSs) such as Facebook have the potential to persuade people to adopt a lifestyle based on exercise and healthy nutrition. We report the findings of a qualitative study of an SNS for bodybuilders, looking at how bodybuilders present themselves online and how they orchestrate the SNS with their offline activities. Discussing the persuasive element of appreciation, we aim to extend previous work on persuasion in web 2.0 technologies.
Abstract:
This demonstration highlights the applications of our research work, i.e. the second-generation multi-agent system SAGE (Scalable Fault Tolerant Agent Grooming Environment), the integration of software agents and grid computing, and the autonomous agent architecture in the agent platform. It is a conference planner application that uses the collaborative effort of geographically distributed services implemented in different technologies, i.e. software agents, grid computing and Web services, to perform useful tasks as required. Copyright 2005 ACM.
Abstract:
E-government provides a platform for governments to implement web-enabled services that facilitate communication between citizens and the government. However, a technology-driven design approach and a limited understanding of citizens' requirements have led to a number of critical usability problems on government websites. Hitherto, there has been no systematic attempt to analyse the way in which the theory of User Centred Design (UCD) can contribute to addressing the usability issues of government websites. This research seeks to fill this gap by synthesising perspectives drawn from the study of User Centred Design and examining them against empirical data derived from a case study of the Scottish Executive website. The research employs a qualitative approach to the collection and analysis of data. The triangulated analysis of the findings reveals that e-government web designers take a commercial development approach and focus only on technical implementation, which leads to websites that do not meet citizens' expectations. The research identifies that e-government practitioners can overcome web usability issues by transferring the theory of UCD into practice.
Abstract:
Distributed collaborative computing services have taken over from centralized computing platforms, allowing the development of distributed collaborative user applications. These applications enable people and computers to work together more productively. The Multi-Agent System (MAS) has emerged as a distributed collaborative environment which allows a number of agents to cooperate and interact with each other in a complex environment. We want to apply our agents to problems whose solutions require the collation and fusion of information, knowledge or data from distributed and autonomous information sources. In this paper we present the design and implementation of an agent-based conference planner application that uses the collaborative effort of agents which function continuously and autonomously in a particular environment. The application also enables the collaborative use of services deployed across wide geographic areas using different technologies, i.e. software agents, grid computing and Web services. The premise of the application is that it allows autonomous agents interacting with web and grid services to plan a conference as proxies for their owners (humans). © 2005 IEEE.
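As a rough illustration of the architecture described above (an autonomous agent acting as a proxy for its owner and fusing results from distributed services), the following Python sketch uses hypothetical stand-in functions for the Web and Grid services; it is not the SAGE platform's actual API.

```python
# A minimal, illustrative sketch of an agent composing results from independent
# services on behalf of its owner. Service functions and the agent interface are
# hypothetical stand-ins, not the SAGE platform API.
from dataclasses import dataclass

def venue_service(city: str) -> str:           # stand-in for a Web service call
    return f"Conference Centre, {city}"

def schedule_service(days: int) -> list:       # stand-in for a Grid-computed schedule
    return [f"Day {d + 1}: sessions" for d in range(days)]

@dataclass
class PlannerAgent:
    owner: str

    def plan_conference(self, city: str, days: int) -> dict:
        # The agent autonomously collates information from distributed services
        # and fuses it into a single plan for its owner.
        return {
            "owner": self.owner,
            "venue": venue_service(city),
            "programme": schedule_service(days),
        }

print(PlannerAgent(owner="alice").plan_conference("Islamabad", 2))
```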
Abstract:
In this paper we draw on current research to explore notions of a socially just Health and Physical Education (HPE), in light of claims that neoliberal globalisation promotes markets over the state, and a new individualism that privileges self-interest over the collective good. We also invite readers to consider the United Nations Educational, Scientific and Cultural Organization's ambition for PE in light of preliminary findings from an Australian-led research project exploring national and international patterns of outsourcing HPE curricula. Data were sourced from this international research project through a mixed-method approach. Each external provider engaged in four phases of research activity: (a) web audits, (b) interviews with external providers, (c) network diagrams, and (d) school partner interviews and observations. Results: We use these data to pose what we believe to be three emerging lines of inquiry and challenge for a socially just school HPE within neoliberal times. In particular, our data indicate that the marketization of school HPE is strengthening an emphasis on individual responsibility for personal health, elevating expectations that schools and teachers will "fill the welfare gap" and, finally, influencing the nature and purchase of educative HPE programs in schools. The apparent proliferation of external providers of health work, HPE resources and services reflects the rise and pervasiveness of neoliberalism in education. We conclude that this global HPE landscape warrants attention to investigate the extent to which external providers' resources are compatible with schooling's educative and inclusive mandates.
Abstract:
The use of social networking has exploded, with millions of people using various web- and mobile-based services around the world. This increase in social networking use has led to user anxiety related to privacy and the unauthorised exposure of personal information. Large-scale sharing in virtual spaces means that researchers, designers and developers now need to re-consider the issues and challenges of maintaining privacy when using social networking services. This paper provides a comprehensive survey of the current state-of-the-art privacy in social networks for both desktop and mobile uses and devices from various architectural vantage points. The survey will assist researchers and analysts in academia and industry to move towards mitigating many of the privacy issues in social networks.
Abstract:
This study investigates the role of social media as a form of organizational knowledge sharing. Social media is investigated in terms of the Web 2.0 technologies that organizations provide their employees as tools of internal communication. The study is anchored in the theoretical understanding of social media as technologies which enable both knowledge collection and knowledge donation, and it investigates the factors influencing employees' use of social media in their working environment. The study presents the multidisciplinary research tradition concerning knowledge sharing. Social media is analyzed especially in relation to internal communication and knowledge sharing. Based on previous studies, it is assumed that personal, organizational and technological factors influence employees' use of social media in their working environment. The research is a case study focusing on the employees of the Finnish company Wärtsilä, which is an eligible case organization for this study given that it has deployed several Web 2.0 tools in its intranet. The research is based on quantitative methods. In total, 343 responses were obtained via an online survey available in Wärtsilä's intranet. The associations between the variables are analyzed using correlations. Finally, multiple linear regression analysis is used to test the causal relationships between the assumed factors and the use of social media. The analysis demonstrates that personal, organizational and technological factors influence the respondents' use of social media. Strong predictors include the benefits that respondents expect to receive from using social media and respondents' experience of using Web 2.0 in their private lives. Organizational factors, such as managers' and colleagues' activeness and organizational guidelines for using social media, also form a causal relationship with the use of social media. In addition, respondents' understanding of their responsibilities affects their use of social media: the more social media is considered part of individual responsibilities, the more frequently it is used. Finally, technological factors must be recognized: the more user-friendly the social media tools are considered to be, and the better the respondents' technical skills, the more frequently social media is used in the working environment. The central references in relation to knowledge sharing include Chun Wei Choo's (2006) The Knowing Organization, Ikujiro Nonaka and Hirotaka Takeuchi's (1995) The Knowledge-Creating Company and Linda Argote's (1999) Organizational Learning.
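The analysis described above (correlations followed by multiple linear regression of social media use on personal, organizational and technological factors) can be illustrated with a short sketch; the predictor names and synthetic data below are hypothetical placeholders for the survey constructs, not the study's dataset.

```python
# A minimal sketch, not the study's actual model: regressing reported social media
# use on a few personal, organizational and technological predictors.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 343  # the abstract reports 343 survey responses
predictors = {
    "expected_benefits": rng.normal(size=n),
    "private_web20_experience": rng.normal(size=n),
    "manager_activeness": rng.normal(size=n),
    "perceived_usability": rng.normal(size=n),
}
X = np.column_stack(list(predictors.values()))
# Hypothetical outcome: frequency of social media use at work
y = (0.5 * predictors["expected_benefits"]
     + 0.3 * predictors["private_web20_experience"]
     + rng.normal(scale=0.5, size=n))

model = sm.OLS(y, sm.add_constant(X)).fit()
print(model.summary())  # coefficients indicate each factor's association with use
```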
Abstract:
As the virtual world grows more complex, finding a standard way of storing data becomes increasingly important. Ideally, each data item would be brought into the computer system only once. References to data items need to be cryptographically verifiable, so the data can maintain its identity while being passed around. This way there will be only one copy of the user's family photo album, while the user can use multiple tools to show or manipulate the album. Copies of a user's data could be stored on some of his family members' computers, on some of his own computers, and also at some online services which he uses. When all actors operate over one replicated copy of the data, the system automatically avoids a single point of failure: the data will not disappear because one computer breaks or one service provider goes out of business. One shared copy also makes it possible to delete a piece of data from all systems at once, at the user's request. In our research we tried to find a model that would make data manageable for users and make it possible to have the same data stored in various locations. We studied three systems, Persona, Freenet and GNUnet, that suggest different models for protecting user data. The main application areas of the systems studied include securing online social networks, providing anonymous web access, and preventing censorship in file sharing. Each of the systems studied stores user data on machines belonging to third parties. The systems differ in the measures they take to protect their users from data loss, forged information, censorship and being monitored. All of the systems use cryptography to secure the names used for content and to protect the data from outsiders. Based on the gained knowledge, we built a prototype platform called Peerscape, which stores user data in a synchronized, protected database. Data items themselves are protected with cryptography against forgery, but not encrypted, as the focus has been on disseminating the data directly among family and friends instead of letting third parties store the information. We turned the synchronizing database into a peer-to-peer web by revealing its contents through an integrated HTTP server. The REST-like HTTP API supports development of applications in JavaScript. To evaluate the platform's suitability for application development we wrote some simple applications, including a public chat room, a BitTorrent site, and a flower-growing game. During our early tests we came to the conclusion that using the platform for simple applications works well. As web standards develop further, writing applications for the platform should become easier. Any system this complex will have its problems, and we are not expecting our platform to replace the existing web, but we are fairly impressed with the results and consider our work important from the perspective of managing user data.
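The abstract mentions a REST-like HTTP API for building applications on top of the synchronized database. The sketch below illustrates that general style of interaction from Python; the host, port and resource paths are hypothetical, not Peerscape's documented endpoints.

```python
# A hedged sketch of talking to a local, REST-like HTTP API over a synchronized
# data store. The base URL and paths below are assumptions for illustration only.
import json
import urllib.request

BASE = "http://localhost:8000"  # assumed local HTTP server exposing the database

def get_item(path: str) -> dict:
    with urllib.request.urlopen(f"{BASE}/{path}") as resp:
        return json.loads(resp.read().decode("utf-8"))

def put_item(path: str, payload: dict) -> int:
    req = urllib.request.Request(
        f"{BASE}/{path}",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="PUT",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

# Example with hypothetical resource names: store and read back a chat message.
# put_item("chat/room1/msg-1", {"from": "alice", "text": "hello"})
# print(get_item("chat/room1/msg-1"))
```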
Abstract:
The world of mapping has changed. Earlier, only professional experts were responsible for map production, but today ordinary people without any training or experience can become map-makers. The number of online mapping sites and the number of volunteer mappers have increased significantly. Developments in technology, such as satellite navigation systems, Web 2.0, broadband Internet connections and smartphones, have played a key role in enabling the rise of volunteered geographic information (VGI). As opening governmental data to the public is a current topic in many countries, the opening of high-quality geographical data has a central role in this study. The aim of this study is to investigate the quality of spatial data produced by volunteers by comparing it with map data produced by public authorities, to follow what occurs when spatial data are opened to users, and to become acquainted with the user profile of these volunteer mappers. A central part of this study is the OpenStreetMap project (OSM), whose aim is to create a map of the entire world through volunteer effort. Anyone can become an OpenStreetMap contributor, and the data created by the volunteers are free for anyone to use without restrictive copyrights or licence charges. In this study OpenStreetMap is investigated from two viewpoints. In the first part of the study, the aim was to investigate the quality of volunteered geographic information. A pilot project was implemented by following what occurs when high-resolution aerial imagery is released freely to OpenStreetMap contributors. The quality of VGI was investigated by comparing the OSM datasets with the map data of the National Land Survey of Finland (NLS). The quality of OpenStreetMap data was investigated by inspecting the positional accuracy and the completeness of the road datasets, as well as the differences in the attribute data between the studied datasets. The OSM community was also analysed, and the development of the OpenStreetMap map data was investigated by visual analysis. The aim of the second part of the study was to analyse the user profile of OpenStreetMap contributors and to investigate how the contributors act when collecting data and editing OpenStreetMap. The aim was also to investigate what motivates users to map and how the quality of volunteered geographic information is perceived. The second part of the study was implemented by conducting a web survey of OpenStreetMap contributors. The results of the study show that the quality of OpenStreetMap data, compared with the data of the National Land Survey of Finland, can be considered good. OpenStreetMap differs from the map of the National Land Survey especially in its degree of uncertainty; for example, the completeness and uniformity of the map are not known. The results of the study reveal that opening spatial data notably increased the amount of data in the study area, and both the positional accuracy and the completeness improved significantly. The study confirms earlier arguments that only a few contributors have created the majority of the data in OpenStreetMap. The survey of OpenStreetMap users revealed that data are most often collected on foot or by bicycle using a GPS device, or by editing the map with the help of aerial imagery.
According to the responses, users take part in the OpenStreetMap project because they want to make maps better and to produce maps containing up-to-date information that cannot be found on any other maps. Almost all of the users use the maps themselves, the most popular methods being downloading the map onto a navigator or a mobile device. The users regard the quality of OpenStreetMap as good, especially because of the currency and accuracy of the map.
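One common way to quantify the positional accuracy and completeness comparisons described above is a buffer-overlap test between matching road segments. The sketch below illustrates the idea with toy geometries; it is an assumption about the general method, not the study's actual procedure or data.

```python
# A minimal sketch, assuming two comparable road centreline datasets, of a
# buffer-overlap comparison of volunteered data against a reference. The
# geometries are toy examples, not the OSM or NLS data used in the study.
from shapely.geometry import LineString

reference = LineString([(0, 0), (100, 0)])      # e.g. an NLS road segment
volunteered = LineString([(0, 2), (100, 3)])    # e.g. the matching OSM segment

buffer_width = 5.0  # metres; tolerance for "agreement" with the reference
within_tolerance = volunteered.intersection(reference.buffer(buffer_width))

overlap_share = within_tolerance.length / volunteered.length
print(f"Share of volunteered road length within {buffer_width} m of the reference: {overlap_share:.0%}")
```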
Abstract:
Our study concerns an important current problem, that of diffusion of information in social networks. This problem has received significant attention from the Internet research community in recent times, driven by many potential applications such as viral marketing and sales promotions. In this paper, we focus on the target set selection problem, which involves discovering a small subset of influential players in a given social network, to perform a certain task of information diffusion. The target set selection problem manifests in two forms: 1) the top-k nodes problem and 2) the lambda-coverage problem. In the top-k nodes problem, we are required to find a set of k key nodes that would maximize the number of nodes being influenced in the network. The lambda-coverage problem is concerned with finding a set of key nodes of minimum size that can influence a given percentage lambda of the nodes in the entire network. We propose a new way of solving these problems using the concept of the Shapley value, a well-known solution concept in cooperative game theory. Our approach leads to algorithms which we call the ShaPley value-based Influential Nodes (SPIN) algorithms for solving the top-k nodes problem and the lambda-coverage problem. We compare the performance of the proposed SPIN algorithms with well-known algorithms in the literature. Through extensive experimentation on four synthetically generated random graphs and six real-world data sets (Celegans, Jazz, NIPS coauthorship data set, Netscience data set, High-Energy Physics data set, and Political Books data set), we show that the proposed SPIN approach is more powerful and computationally efficient. Note to Practitioners: In recent times, social networks have received a high level of attention due to their proven ability to improve the performance of web search, recommendations in collaborative filtering systems, spreading a technology in the market using viral marketing techniques, etc. It is well known that the interpersonal relationships (or ties or links) between individuals cause change or improvement in the social system because the decisions made by individuals are influenced heavily by the behavior of their neighbors. An interesting and key problem in social networks is to discover the most influential nodes in the social network, which can influence other nodes in a strong and deep way. This problem is called the target set selection problem and has two variants: 1) the top-k nodes problem, where we are required to identify a set of k influential nodes that maximize the number of nodes being influenced in the network, and 2) the lambda-coverage problem, which involves finding a set of influential nodes of minimum size that can influence a given percentage lambda of the nodes in the entire network. There are many existing algorithms in the literature for solving these problems. In this paper, we propose a new algorithm which is based on a novel interpretation of information diffusion in a social network as a cooperative game. Using this analogy, we develop an algorithm based on the Shapley value of the underlying cooperative game. The proposed algorithm outperforms the existing algorithms in terms of generality or computational complexity or both. Our results are validated through extensive experimentation on both synthetically generated and real-world data sets.
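The core idea of the Shapley value-based ranking can be sketched as a Monte Carlo estimate: sample random orderings of the nodes, record each node's marginal contribution to an influence "coverage" function, and rank nodes by their average contribution. The toy graph and the one-hop coverage function below are illustrative simplifications, not the authors' SPIN implementation or diffusion model.

```python
# A hedged sketch of Shapley value-based node ranking for influence, using random
# permutations and a toy one-hop coverage function as the characteristic function.
import random
from collections import defaultdict

def coverage(seed_nodes, graph):
    """Toy characteristic function: nodes reached within one hop of the seed set."""
    reached = set(seed_nodes)
    for n in seed_nodes:
        reached.update(graph.get(n, ()))
    return len(reached)

def shapley_ranking(graph, samples=200, seed=0):
    rng = random.Random(seed)
    nodes = list(graph)
    value = defaultdict(float)
    for _ in range(samples):
        rng.shuffle(nodes)
        current, prev = [], 0
        for n in nodes:
            current.append(n)
            now = coverage(current, graph)
            value[n] += (now - prev) / samples   # marginal contribution of n
            prev = now
    return sorted(value, key=value.get, reverse=True)

# Toy undirected graph as adjacency lists; the top-k nodes are the first k in the ranking.
g = {1: [2, 3], 2: [1, 3, 4], 3: [1, 2], 4: [2, 5], 5: [4]}
print(shapley_ranking(g)[:2])
```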