882 results for User generated content
Abstract:
Bioacoustic data can provide an important basis for environmental monitoring. To explore the large volumes of field recordings collected, this paper presents an automated similarity search algorithm. A user provides a region of audio defined by frequency and time bounds; the content of that region is used to construct a query. During retrieval, our algorithm automatically scans through recordings to search for similar regions. In detail, we present a feature extraction approach based on the visual content of vocalisations, in this case ridges, and develop a generic regional representation of vocalisations for indexing. Our feature extraction method works best for bird vocalisations that show ridge characteristics. The regional representation allows the content of an arbitrary region of a continuous recording to be described in a compressed format.
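As a rough illustration of this style of query-by-region search (not the authors' ridge-based implementation; the descriptor below is a simple band-energy profile, and all function and parameter names are assumptions), a spectrogram window matching the user's time/frequency box can be slid across a recording and scored against the query:

import numpy as np
from scipy.signal import spectrogram

def region_descriptor(spec_block, bins=8):
    """Compress a spectrogram block into a small energy-profile descriptor."""
    bands = np.array_split(spec_block, bins, axis=0)   # coarse frequency bands
    profile = np.array([b.mean() for b in bands])
    norm = np.linalg.norm(profile)
    return profile / norm if norm > 0 else profile

def search_similar_regions(audio, sr, query_box, top_k=5):
    """Scan a recording for regions similar to a user-drawn query box.

    query_box: (t_start, t_end, f_low, f_high) in seconds / Hz.
    Returns the top_k (start_time, similarity) candidates.
    """
    freqs, times, spec = spectrogram(audio, fs=sr, nperseg=512, noverlap=256)
    t0, t1, f0, f1 = query_box
    f_mask = (freqs >= f0) & (freqs <= f1)
    q_cols = (times >= t0) & (times <= t1)
    query = region_descriptor(spec[np.ix_(f_mask, q_cols)])

    width = q_cols.sum()                               # window width in frames
    scores = []
    for start in range(0, spec.shape[1] - width + 1):
        cand = region_descriptor(spec[f_mask, start:start + width])
        scores.append((times[start], float(np.dot(query, cand))))  # cosine score
    return sorted(scores, key=lambda s: -s[1])[:top_k]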
Abstract:
Few published studies have monitored destination brand image over time. This temporal aspect is an important gap in the literature, given the consensus around the role perceptions play in consumers' decision making and the ensuing emphasis on imagery in destination branding collateral. Whereas most destination image studies have been a snapshot of perceptions at one point in time, this paper presents findings from a survey implemented four times between 2003 and 2015. Brand image is the core construct in modelling destination branding performance, which has emerged as a relatively new field of research in the past decade. Using the consumer-based brand equity (CBBE) hierarchy, the project has benchmarked and monitored destination brand salience, image and resonance for an emerging regional destination, relative to key competitors, in the domestic Australian market; and the survey instrument has been demonstrated to be reliable in the context of short break holidays by car. What is particularly interesting to date is that there has been relatively little change in the market positions of the five destinations, in spite of over a decade of marketing communications by the regional tourism organisations and their stakeholders and, more recently, the mass of user-generated travel content on social media. The project did not analyse the actual marketing communications of each of the DMOs. An important implication, therefore, is that irrespective of the level of marketing undertaken, the DMOs seem to have had little control over the perceptions held in their largest market during this period. It must therefore be recognised that any improvement in perceptions will likely take a long time, so branding needs to be underpinned by a philosophy of long-term financial investment and a commitment to consistency of message over time, which, given the politics of DMO decision making, represents a considerable challenge.
Abstract:
In the recent past, hardly anyone could have predicted this course of GIS development. GIS is moving from the desktop to the cloud. Web 2.0 enabled people to input data into the web, and these data are becoming increasingly geolocated. Such large volumes of data have formed what is now called "Big Data", and scientists still do not know how to deal with it fully. Various Data Mining tools are used to try to extract useful information from this Big Data. In our study, we also deal with one part of these data: User Generated Geographic Content (UGGC). The Panoramio initiative allows people to upload photos and describe them with tags. These photos are geolocated, meaning that they have an exact location on the Earth's surface according to a certain spatial reference system. Using Data Mining tools, we try to answer whether it is possible to extract land use information from Panoramio photo tags, and to what extent such information can be accurate. Finally, we compared different Data Mining methods to determine which is best suited to this kind of data, namely text. Our answers are quite encouraging. With more than 70% accuracy, we showed that extracting land use information is possible to some extent, and we found the Memory Based Reasoning (MBR) method to be the most suitable for this kind of data in all cases.
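As a hedged sketch of the memory-based reasoning approach (essentially k-nearest neighbours over tag text; the tiny corpus and land-use labels below are invented for illustration and are not the study's Panoramio data), tag strings can be vectorised and a new photo labelled by its most similar tagged neighbours:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

train_tags = [
    "beach sand sea sunset",          # invented examples, not Panoramio data
    "office tower downtown glass",
    "wheat field tractor farm",
    "forest trail pine hiking",
]
train_labels = ["recreation", "urban", "agriculture", "forest"]

# TF-IDF turns tag strings into vectors; k-NN then labels a new photo by the
# land-use classes of its most similar tagged neighbours held in memory.
model = make_pipeline(TfidfVectorizer(), KNeighborsClassifier(n_neighbors=1))
model.fit(train_tags, train_labels)

print(model.predict(["harvest tractor barn field"]))   # -> ['agriculture']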
Abstract:
Next Generation Networks (NGN) give telecommunications operators the possibility to share their resources and infrastructure, facilitate interoperability with other networks, and simplify and unify the management, operation and maintenance of service offerings, thus enabling the fast and cost-effective creation of new personal, broadband, ubiquitous services. Unfortunately, service creation over NGN is far from matching the success of service creation on the Web, especially when it comes to Web 2.0. This paper presents a novel approach to service creation and delivery, with a platform that opens to non-technically skilled users the possibility of creating, managing and sharing their own convergent (NGN-based and Web-based) services. To this end, the business approach to user-generated services is analyzed and the technological bases supporting the proposal are explained.
Abstract:
Microblogging is one of the most popular forms of user-generated media, so its accessibility has a large impact on users. In practice, however, the accessibility of this medium is poor, owing to a combination of bad practices by different agents, ranging from the providers that host microblogging services to the prosumers who post content to them. Here we present an accessibility-oriented model of microblogging services, analyze the impact of its components, and propose guidelines for each of them to meet accessibility requirements. In particular, we build on a study we performed on Twitter, one of the most relevant microblogging platforms, to identify good and bad practices in microblogging content generation with regard to accessibility.
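To make this kind of guideline concrete, the sketch below applies a few hypothetical accessibility heuristics to a single post (these particular checks are illustrative assumptions, not the model or guidelines proposed in the paper):

import re

def accessibility_issues(text, image_alt_texts=None):
    """Return a list of heuristic accessibility warnings for a post."""
    issues = []
    # Attached images should carry alternative text for screen-reader users.
    if image_alt_texts is not None and any(not alt for alt in image_alt_texts):
        issues.append("image attached without alternative text")
    # Long lower-case hashtags are hard for screen readers; CamelCase helps.
    for tag in re.findall(r"#(\w+)", text):
        if tag.islower() and len(tag) > 12:
            issues.append(f"hashtag #{tag} may be unreadable; consider CamelCase")
    # Long runs of upper-case text are often read out letter by letter.
    if re.search(r"\b[A-Z]{8,}\b", text):
        issues.append("long all-caps sequence; may be spelled out by screen readers")
    return issues

print(accessibility_issues("CHECK THISOUTNOW #usergeneratedcontentrocks",
                           image_alt_texts=[""]))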
Abstract:
Starting from the Schumpeterian producer-driven understanding of innovation, followed by user-generated solutions and understandings of collaborative forms of co-creation, scholars have investigated the drivers and the nature of the interactions underpinning success in various ways. The innovation literature has come a long way, and open innovation has attracted researchers to investigate problems such as the compatibility of external resources, networks of innovation, and open source collaboration. Openness itself has gained various shades of meaning in the different strands of the literature. In this paper the author provides an overview and a draft evaluation of the different models of open innovation, illustrated with empirical findings from various fields drawn from the literature. She points to the relevance of transaction costs in determining viable forms of (open) innovation strategies for firms, and to the importance of defining the locus of innovation for further analyses of different firm-level and interaction-level formations.
Abstract:
This dissertation research points out major challenges with current Knowledge Organization (KO) systems, such as subject gateways and web directories: (1) the current systems use traditional knowledge organization schemes based on controlled vocabulary, which are not well suited to web resources, and (2) information is organized by professionals rather than by users, which means it does not reflect users' intuitively and instantaneously expressed current needs. In order to explore users' needs, I examined social tags, which are user-generated uncontrolled vocabulary. As investment in professionally developed subject gateways and web directories diminishes (support for both BUBL and Intute, examined in this study, is being discontinued), understanding the characteristics of social tagging becomes even more critical. Several researchers have discussed social tagging behavior and its usefulness for classification or retrieval; however, further qualitative and quantitative research is needed to verify its quality and benefit. This research specifically examined the indexing consistency of social tagging in comparison to professional indexing, in order to assess the quality and efficacy of tagging. The data analysis was divided into three phases: analysis of indexing consistency, analysis of tagging effectiveness, and analysis of tag attributes. Most indexing consistency studies have been conducted with a small number of professional indexers, and they have tended to exclude users; furthermore, they have mainly focused on physical library collections. This dissertation research bridged these gaps by (1) extending the scope of resources to various web documents indexed by users and (2) employing an Information Retrieval (IR) Vector Space Model (VSM) based indexing consistency method, since it is suitable for dealing with a large number of indexers. As a second phase, an analysis of tagging effectiveness in terms of tagging exhaustivity and tag specificity was conducted to ameliorate the drawbacks of a consistency analysis based only on quantitative measures of vocabulary matching. Finally, to investigate tagging patterns and behaviors, a content analysis of tag attributes was conducted based on the FRBR model. The findings revealed that there was greater consistency across all subjects among taggers than for the two groups of professionals. Examination of the exhaustivity and specificity of social tags provided insights into particular characteristics of tagging behavior and its variation across subjects. To further investigate the quality of tags, a Latent Semantic Analysis (LSA) was conducted to determine to what extent tags are conceptually related to professionals' keywords; it was found that tags of higher specificity tended to have higher semantic relatedness to professionals' keywords. This leads to the conclusion that a term's power as a differentiator is related to its semantic relatedness to documents. The findings on tag attributes identified important bibliographic attributes of tags beyond describing the subjects or topics of a document. The findings also showed that tags have essential attributes matching those defined in FRBR.
Furthermore, in terms of specific subject areas, the findings showed that taggers exhibited different tagging behaviors, with distinctive features and tendencies, on web documents characterizing heterogeneous digital media resources. These results lead to the conclusion that awareness of diverse user needs by subject should be increased in order to improve metadata in practical applications. This dissertation research is the first necessary step towards utilizing social tagging in digital information organization, by verifying its quality and efficacy. It combined quantitative (statistical) and qualitative (content analysis using FRBR) approaches to the vocabulary analysis of tags, which provided a more complete examination of their quality. Through the detailed analysis of tag properties undertaken in this dissertation, we have a clearer understanding of the extent to which social tagging can be used to replace (and in some cases to improve upon) professional indexing.
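A small sketch of the vector-space-model style of indexing-consistency measurement described above (not the dissertation's exact procedure): each indexer's term assignments for a document are treated as a vector, and consistency is their cosine similarity. The example terms are invented for illustration.

from collections import Counter
import math

def cosine_consistency(terms_a, terms_b):
    """Cosine similarity between two indexers' term assignments."""
    va, vb = Counter(terms_a), Counter(terms_b)
    shared = set(va) & set(vb)
    dot = sum(va[t] * vb[t] for t in shared)
    norm = math.sqrt(sum(c * c for c in va.values())) * \
           math.sqrt(sum(c * c for c in vb.values()))
    return dot / norm if norm else 0.0

tagger = ["social", "tagging", "folksonomy", "web"]        # invented example
professional = ["social tagging", "indexing", "web"]
print(round(cosine_consistency(tagger, professional), 2))  # partial overlap -> 0.29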
Abstract:
Online social networks, user-created content and participatory media are often still ignored by professionals, denounced in the press and banned in schools, but the potential of digital literacy should not be underestimated. Hartley reassesses the historical and global context, the commercial and cultural dynamics, and the potential of popular productivity through an analysis of the use of digital media in various domains, including the creative industries, digital storytelling, YouTube, journalism and mediated fashion.
Abstract:
Story Circle is the first collection ever devoted to a comprehensive international study of the digital storytelling movement. Exploring subjects of central importance on the emergent and ever-shifting digital landscape (consumer-generated content, memory grids, the digital storytelling youth movement, and micro-documentary), Story Circle pinpoints who is telling what stories, where, on what terms, and what they look and sound like.
Abstract:
Online technological advances are pioneering the wider distribution of geospatial information for general mapping purposes. The use of popular web-based applications such as Google Maps is ensuring that mapping-based applications become commonplace amongst Internet users, which has facilitated the rapid growth of geo-mashups. These user-generated creations enable Internet users to aggregate and publish information about specific geographical points. This article identifies privacy-invasive geo-mashups that involve the unauthorized use of personal information, the inadvertent disclosure of personal information, and invasions of privacy. Building on Zittrain's Privacy 2.0, the author contends that first-generation information privacy laws, founded on the notions of fair information practices or information privacy principles, may have limited impact on resolving the privacy problems arising from privacy-invasive geo-mashups, principally because geo-mashups have different patterns of personal information provision, collection, storage and use that reflect fundamental changes in the Web 2.0 environment. The author concludes by recommending embedded technical and social solutions to minimize the risks arising from privacy-invasive geo-mashups, which could lead to the establishment of guidelines for the general protection of privacy in geo-mashups.
Abstract:
In the context of a multi-paper special issue of TVNM on the future of media studies, this paper traces the tradition of 'active audience' theory in TV scholarship, arguing that it has much to offer the study of new digital media, especially as an approach to user-created content and the dynamics of change. The paper argues for a 'cultural science' approach to 'active audiences' in order to analyse and understand how non-professionals and consumers contribute to the growth of knowledge in complex open media systems.
Abstract:
Over the past few years, considerable public and commercial attention has turned to a phenomenon that is poised to fundamentally change the media landscape. Yahoo! bought Flickr. Google acquired YouTube. Rupert Murdoch bought MySpace and declared that the future of his NewsCorp empire lay in user-led content creation within such social media rather than in his many newspapers, television stations and other media interests (2005). Finally, TIME broke with its long-established tradition of nominating an outstanding individual as "Person of the Year" and instead chose "You": all of us who collaboratively create content online (2006). However, the significance of this user-led phenomenon does not lie in such (ultimately unimportant) honours, or even only in the content of central websites such as YouTube and Flickr. Rather, as a logical consequence of its underlying principles (which we examine further here), it is spread far more widely across the World Wide Web; what matters about the new phenomenon is not only the success of its most visible exponents, but also the "Long Tail" (Anderson 2006) of the many other user-led projects that have established themselves throughout the online world and are now beginning to spread even into the offline world.
Abstract:
Alvin Toffler's image of the prosumer continues to shape our understanding of many of the user-led, collaborative processes of content creation that are today described as "social media" or "Web 2.0". A closer look at Toffler's own description of his prosumer model, however, reveals that it remains firmly anchored in the age of mass media dominance: the prosumer is not the self-motivated, creative originator and developer of new content found today in projects ranging from open-source software to Wikipedia to Second Life, but merely a particularly well-informed, and therefore particularly critical and particularly active, consumer. Highly specialised, high-end consumers in, say, the hi-fi or automotive fields come much closer to the ideal of the prosumer than do the participants in user-led collaborative projects, which are often not (or at least not yet) commercially captured. To expect this of a model Toffler developed in the 1970s is certainly asking too much anyway. The problem therefore lies not with Toffler himself, but rather with the conceptions prevailing in the industrial age of a process divided fairly clearly into production, distribution and consumption. This tripartite division was necessary for the creation of material as well as immaterial goods; it holds even for the conventional mass media, in which content production was concentrated in a few institutions for commercial reasons, just as is the case for the production of consumer goods. In the emerging information age, dominated by decentralised media networks and widely available, affordable means of production, the situation is different. What happens when distribution is automatic, and when almost every consumer can also be a producer, rather than a small band of commercially supported producers assisted at best by a handful of near-professional prosumers? What happens when the number of consumers active as producers, described by Eric von Hippel as 'lead users', expands massively; when, as Wikipedia's slogan puts it, 'anyone can edit', so that potentially every user can take an active part in content creation? To describe the creative and collaborative participation that characterises user-led projects such as Wikipedia today, terms such as 'production' and 'consumption' are of only limited use, even in constructions such as 'user-led production' or 'peer-to-peer production'. In the user communities that take part in such forms of content creation, roles as consumers and users have long since become irreversibly mixed with roles as producers: users are always also, inevitably, producers of the shared information collection, whether or not they are aware of it. They have taken on a new, hybrid role that can perhaps best be described as that of the 'produser'. Projects built on such produsage are found in areas ranging from open-source software to citizen journalism to Wikipedia, and increasingly also in computer games, filesharing, and even the design of material goods.
Although they differ in their orientation, they are all built on a small number of universal underlying principles. This talk describes these principles and outlines the possible implications of this shift from production (and prosumption) to produsage.
Abstract:
Relations between brands and their users continue to be affected by a traditional perspective that sees the producers and consumers of goods and services as inherently different animals. In the emerging information and knowledge economy, and especially in online contexts, this model is no longer sustainable. Instead, spearheaded by the Web 2.0 phenomenon, there is a trend towards the fusing of production and usage as a new, hybrid process of produsage. This presentation presents the key characteristics driving produsage processes, and describes four guiding principles for businesses as they share their brand with users:
* Be open.
* Seed community processes by providing content and tools.
* Support community dynamics and devolve responsibilities.
* Don't exploit the community and its work.
Abstract:
The creative industries idea is better than even its original perpetrators might have imagined, judging from the original mapping documents. By throwing the heavy duty copyright industries into the same basket as public service broadcasting, the arts and a lot of not-for-profit activity (public goods) and commercial but non-copyright-based sectors (architecture, design, increasingly software), it really messed with the minds of economic and cultural traditionalists. And, perhaps unwittingly, it prepared the way for understanding the dynamics of contemporary cultural ‘prosumption’ or ‘playbour’ in an increasingly networked social and economic space.