51 results for user generated content
in Doria (National Library of Finland DSpace Services) - National Library of Finland, Finland
Abstract:
Presentation at a seminar organized by the KDK usability working group: How do users' expectations challenge our metadata practices? 30 September 2014.
Abstract:
This master's thesis examined how the familiarity of a magazine, i.e. the parent brand, affects willingness to participate in crowdsourced content production for its online service, i.e. the brand extension. The question relates to the changed situation in media consumption: technology has advanced, demand for print media products has declined, and magazines are developing their digital services. The media, however, no longer holds a monopoly on content production; anyone can create and publish content, and many media outlets have involved their readers in content production. According to previous research, parent-brand familiarity has a positive effect on customers' willingness to use a brand extension; it also influences purchase decisions and increases trust in the online service. In this study, the product was familiar to those who subscribe to the magazine. Reader-generated content includes blogs, online discussions, readers' stories, poems, photos, competitions, and polls. The data were collected with an online survey answered by 437 respondents. The results indicate that parent-brand familiarity has a positive effect on subscribers' willingness to use the magazine's online service. Subscribers visit the service more often than other users and participate, or would like to participate, in content production more eagerly than non-subscribers. They are also more interested in reader-generated content than non-subscribers.
Abstract:
The aim of this bachelor's thesis was to explore adolescents' personal branding practices in the social media environment of the photo and video sharing mobile application Instagram. As the theoretical background for personal branding is quite limited, this thesis combined concepts of personal branding and self-presentation to answer the research problems. Empirical data was collected through semi-structured individual interviews with 10-14-year-old adolescent girls. The photo-elicitation method was utilized in the interviews, as the participants were asked to present and discuss their Instagram accounts. The concepts of personal brand identity and personal brand positioning were found to be suitable descriptions of adolescents' personal branding practices on Instagram. It was found that adolescents consciously consider what kind of personal brand identity they aim to portray to their audience and that authenticity of the personal brand identity is valued. Personal brand positioning, on the other hand, was found to be achieved through impression management: adolescents make strategic disclosure decisions regarding the content they post on their Instagram accounts so that the content reflects the personal brand identity. Posting brand-related user-generated content on one's Instagram account was found to be one of the many disclosure decisions in personal brand positioning on Instagram, and this type of content was very common on the participants' accounts. Adolescents were also found to be interested in monitoring audience reactions to their personal branding efforts.
Abstract:
Internet content services have developed in recent years such that an increasing share of content is produced by the user community itself. Examples of such new services include weblogs (blogs), the photo and video databases Flickr and YouTube, and wiki sites such as the Wikipedia encyclopedia. This thesis investigated whether a corresponding shift in content production has taken place in open regional networks. The thesis also presents definitions for open and operator-neutral networks. A second objective was to examine whether the services of open regional networks exploit the networks' regional character. The work was carried out as surveys of regional network administrators and users. According to the survey results, user-generated content services are not widely offered in open regional networks, but services that exploit locality are available; indeed, the survey results show that locality-based services are precisely what users want from regional networks. User-generated content services are used heavily, but apparently at the level of the wider Internet.
Abstract:
Learning of preference relations has recently received significant attention in the machine learning community. It is closely related to classification and regression analysis and can be reduced to these tasks. However, preference learning involves predicting an ordering of the data points rather than a single numerical value, as in regression, or a class label, as in classification. Therefore, studying preference relations within a separate framework not only facilitates better theoretical understanding of the problem but also motivates the development of efficient algorithms for the task. Preference learning has many applications in domains such as information retrieval, bioinformatics, and natural language processing. For example, algorithms that learn to rank are frequently used in search engines for ordering the documents retrieved by a query. Preference learning methods have also been applied to collaborative filtering problems for predicting individual customer choices from the vast amount of user generated feedback. In this thesis we propose several algorithms for learning preference relations. These algorithms stem from the well-founded and robust class of regularized least-squares methods and have many attractive computational properties. In order to improve the performance of our methods, we introduce several non-linear kernel functions. Thus, the contribution of this thesis is twofold: kernel functions for structured data, used to take advantage of various non-vectorial data representations, and preference learning algorithms suitable for different tasks, namely efficient learning of preference relations, learning with large amounts of training data, and semi-supervised preference learning. The proposed kernel-based algorithms and kernels are applied to the parse ranking task in natural language processing, document ranking in information retrieval, and remote homology detection in the bioinformatics domain.
Training of kernel-based ranking algorithms can be infeasible when the size of the training set is large. This problem is addressed by proposing a preference learning algorithm whose computational complexity scales linearly with the number of training data points. We also introduce a sparse approximation of the algorithm that can be trained efficiently with large amounts of data. For situations where only a small amount of labeled data but a large amount of unlabeled data is available, we propose a co-regularized preference learning algorithm. To conclude, the methods presented in this thesis address not only the efficient training of the algorithms but also fast regularization parameter selection, multiple output prediction, and cross-validation. Furthermore, the proposed algorithms lead to notably better performance in many of the preference learning tasks considered.
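The core idea of the regularized least-squares family mentioned above can be illustrated with a minimal sketch: learn a linear scoring function from pairwise preferences by solving a regularized least-squares problem over pairwise difference vectors. This is only a simplified illustration of the general technique, not the thesis's actual (kernelized) algorithms; all names and the margin target of 1 are assumptions.

```python
import numpy as np

def fit_pairwise_rls(X, pairs, reg=1.0):
    """Fit a linear scoring vector w so that score(x_i) > score(x_j)
    for each preferred pair (i, j), via regularized least squares on
    pairwise difference vectors (a simplified, linear sketch)."""
    # One difference vector per stated preference (i preferred over j).
    D = np.array([X[i] - X[j] for i, j in pairs])
    n_features = X.shape[1]
    # Regularized least-squares solution of D w ≈ 1 (target margin 1):
    # w = (D^T D + reg * I)^{-1} D^T 1
    A = D.T @ D + reg * np.eye(n_features)
    b = D.T @ np.ones(len(pairs))
    return np.linalg.solve(A, b)

# Toy example: three items with two features; item 0 preferred over 1, 1 over 2.
X = np.array([[2.0, 1.0], [1.0, 1.0], [0.0, 1.0]])
pairs = [(0, 1), (1, 2)]
w = fit_pairwise_rls(X, pairs, reg=0.1)
scores = X @ w
# The learned scores respect the stated preference ordering.
assert scores[0] > scores[1] > scores[2]
```

Replacing the raw feature vectors with kernel expansions would turn this linear sketch into the kernel-based setting the thesis works in.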
Abstract:
The increasing usage of Web Services has been the result of efforts to automate Web Service discovery and interoperability. Semantic Web Service descriptions create the basis for automatic Web Service information management tasks such as discovery and interoperability. Discussion of the opportunities enabled by service descriptions has arisen in recent years. The end user has so far been considered only a consumer of services, with information flowing from a service provider to the public in service distribution. Social networking, however, has changed the nature of services: the end user can no longer be seen only as a service consumer, because given a semantically rich environment and the right tools, the end user will in the future also be a producer of services. This study investigates ways to empower end users to create service descriptions on a mobile device. Special focus is given to the changed role of the end user in service creation. In addition, Web Services technologies are presented and different Semantic Web Service description approaches are compared. The main focus of the study is on tools and techniques that enable service description creation and semantic information management on a mobile device.
Abstract:
The objective of this Bachelor's Thesis is to find out the role of social media in the B-to-B marketing environment of the information technology industry and to discover how IT firms utilize social media as part of their customer reference marketing. To reach these objectives, the concepts of customer reference marketing and social media are first defined. Customer reference marketing can be characterized as one of the most practically relevant but academically relatively overlooked ways in which a company can leverage its customers and delivered solutions, using them as references in its marketing activities. We cover the external and internal functions of customer references that contribute to the growth and performance of B-to-B firms. We also address the three mechanisms of customer reference marketing: 'status transfer', 'validation through testimonials', and 'demonstration of experience and prior performance'. The concept of social media stands for social interaction and the creation of user-generated content, which occurs exclusively on the Internet. Social media are excellent tools for networking thanks to fast and easy access, easy interaction, and a vast range of multimedia features. The allocation of social media channels is also determined. A case company helps clarify the specific characteristics of social media usage as part of customer reference marketing activities. For IT firms, the best social media channels for customer reference marketing are content publishing and distribution services and networking services.
Abstract:
This study discusses the value co-creation procedures at work in the gaming industry. Its purpose was to identify the procedures present in the current video gaming industry, answering the main research problem of how value is co-created in the video gaming industry, followed by three sub-questions: (i) What is value co-creation in the gaming industry? (ii) Who participates in value co-creation in the gaming industry? (iii) What procedures are involved in value co-creation in the gaming industry? The theoretical background consists of literature on marketing theory: the notion of value, the conventional understanding of value creation, the value chain, the co-creation approach, and the co-production approach. The research adopted a qualitative approach, using Web 2.0 tool interfaces as the relationship platform. Data were collected from social networks and analyzed using the netnography method. The findings show that customers and companies co-create an optimum level of value when they interact with each other, and also among the customers themselves. It was mostly the C2C interactions, discussions, and dialogue threads emerging around the main discussions that facilitated value co-creation. Companies therefore need to exploit, and further motivate, develop, and support, the interactions between customers participating in value creation. A hierarchy of value co-creation processes is derived from the identified challenges of the value co-creation approach and from the discussion-forum data analysis. Overall, three general sets and seven topics were found that explore the phenomenon of customer-to-customer (C2C) and business-to-customer (B2C) interaction and debate for value co-creation through user-generated content. These topics describe how gamers contribute and interact in co-creating value together with companies.
A systematic review of the current research literature identified several evolving streams of value research relevant to this study: the general management perspective, new product development and innovation, virtual customer environments, service science, and service-dominant logic. Overall, the topics yield a range of practical and conceptual implications for engaging and managing gamers in social networks to augment customers' value co-creation processes.
Abstract:
The thesis analyzes the liability of Internet news portals for third-party defamatory comments. After the case of Delfi AS v. Estonia, decided by the Grand Chamber of the European Court of Human Rights on 16 June 2015, a portal can be held liable for unlawful user-generated comments. The thesis aims to explore the consequences of the Delfi case for Internet news portals' business model. The model is described as a mixture of two modes of information production, the traditional industrial information economy and the new networked information economy, combined with a generative comment environment. I name this model "the Delfian model". The thesis analyzes three possible strategies that portals are likely to apply in the near future. I discuss these strategies from two perspectives: first, how each strategy can affect the Delfian model and, second, how changes in the model can, in turn, affect freedom of expression. The thesis is based on an analysis of case law and of legal and law-and-economics literature. I follow the law and technology approach in the vein of ideas developed by Lawrence Lessig, Yochai Benkler, and Jonathan Zittrain. The Delfian model is studied as an example of a local battle between the industrial and networked modes of information economy. The thesis concludes that this local battle is lost, because the Delfian model has to be replaced with a new walled-garden model. Such a change can seriously endanger freedom of expression.
Abstract:
Modern automobiles are no longer just mechanical tools: the electronics and computing services they ship with make them nothing less than computers. They are massive kinetic devices with sophisticated computing power. Most modern vehicles are built with added connectivity in mind, which may leave them vulnerable to outside attack. Researchers have shown that it is possible to infiltrate a vehicle's internal systems remotely and control physical components such as the steering and brakes; it is thus quite possible to experience such an attack in a moving vehicle and be unable to use the controls. Because these massive connected computers are part of everyday life, they can be life-threatening. The first part of this research studied the attack surfaces in the automotive cybersecurity domain and illustrated the attack methods and the scale of possible damage. An online survey was deployed as the data collection tool to learn about consumers' usage of such vulnerable automotive services. The second part of the research examined consumer privacy in the automotive world. It was found that almost one hundred percent of modern vehicles are capable of sending vehicle diagnostic data as well as user generated data to their manufacturers, and almost thirty-five percent of automotive companies are already collecting such data. Internet privacy has been studied before in many related domains, but no privacy scale had been matched to automotive consumers; this gap created the motivation for this thesis. A study was performed to apply a well-established consumer privacy scale, IUIPC, to the automotive consumers' privacy situation. Hypotheses were developed based on the IUIPC model of Internet consumers' privacy and were tested against the findings from the data collection.
Based on the key findings of the research, all the hypotheses were accepted; hence, automotive consumers' privacy does follow the IUIPC model under certain conditions. It was also found that a majority of automotive consumers use services and devices that are vulnerable and prone to cyber-attacks, and that there is a market for automotive cybersecurity services for which consumers are willing to pay.
Abstract:
This thesis studied effective knowledge management in the research and development network of a global forest industry company. The objective was to build a description of R&D content management using the knowledge management software in use at the target company. First, the concepts of knowledge and knowledge management were examined through the literature, and on that basis a process model for managing knowledge effectively in a company was presented. Next, the requirements that knowledge management places on information technology, and the role of information technology in the process model, were analyzed. The network's requirements for knowledge management were identified by interviewing the company's key personnel. Based on the interviews, the system had to effectively support the work of virtual project teams, enable knowledge sharing between mills, and support the management of content entered into the system. First, the structure and access controls of the system's user interface were adapted to the network's needs; the structure provides a workspace for project teams and areas for inter-mill knowledge sharing. For content management, a category scheme, a profiled portal, and predefined searches were developed. The resulting model makes project-team work more efficient, enables existing knowledge to be exploited at the mill level, and makes R&D activities easier to follow. As further measures, integrating the system with the mills' operational control systems and adopting the software as a mill-level project management tool are proposed. The aim of these proposals is to ensure both effective knowledge sharing between mills and effective knowledge management at the mill level.
Abstract:
Current-day web search engines (e.g., Google) do not crawl and index a significant portion of the Web; hence, web users relying on search engines alone are unable to discover and access a large amount of information in the non-indexable part of the Web. Specifically, dynamic pages generated from parameters that a user provides via web search forms (or search interfaces) are not indexed by search engines and cannot be found in search results. Such search interfaces give web users online access to myriad databases on the Web. To obtain information from a web database of interest, a user issues a query by specifying query terms in a search form and receives the query results: a set of dynamic pages that embed the required information from the database. At the same time, issuing a query via an arbitrary search interface is an extremely complex task for any kind of automatic agent, including web crawlers, which, at least up to the present day, do not even attempt to pass through web forms on a large scale. In this thesis, our primary object of study is the huge portion of the Web (hereafter referred to as the deep Web) hidden behind web search interfaces. We concentrate on three classes of problems around the deep Web: characterizing the deep Web, finding and classifying deep web resources, and querying web databases. Characterizing the deep Web: Though the term deep Web was coined in 2000, a long time ago for any web-related concept or technology, we still do not know many important characteristics of the deep Web. Another matter of concern is that the surveys of the deep Web conducted so far are predominantly based on studies of deep web sites in English. One can then expect that the findings of these surveys may be biased, especially owing to the steady increase in non-English web content.
In this way, surveying national segments of the deep Web is of interest not only to national communities but to the whole web community as well. In this thesis, we propose two new methods for estimating the main parameters of the deep Web. We use the suggested methods to estimate the scale of one specific national segment of the Web and report our findings. We also build and make publicly available a dataset describing more than 200 web databases from that national segment of the Web. Finding deep web resources: The deep Web has been growing at a very fast pace; it has been estimated that there are hundreds of thousands of deep web sites. Due to the huge volume of information in the deep Web, there has been significant interest in approaches that allow users and computer applications to leverage this information. Most approaches have assumed that the search interfaces to the web databases of interest are already discovered and known to query systems. However, such assumptions rarely hold, mostly because of the large scale of the deep Web: for any given domain of interest there are too many web databases with relevant content. Thus, the ability to locate search interfaces to web databases becomes a key requirement for any application accessing the deep Web. In this thesis, we describe the architecture of the I-Crawler, a system for finding and classifying search interfaces. The I-Crawler is intentionally designed to be used in deep Web characterization studies and for constructing directories of deep web resources. Unlike almost all other approaches to the deep Web so far, the I-Crawler is able to recognize and analyze JavaScript-rich and non-HTML searchable forms. Querying web databases: Retrieving information by filling out web search forms is a typical task for a web user, all the more so as the interfaces of conventional search engines are also web forms.
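The basic heuristic behind classifying a page as a search interface can be sketched in a few lines: look for an HTML form containing a free-text input. This is a deliberately minimal illustration of the kind of check a crawler might apply, not the I-Crawler's actual classifier (which, as noted above, also handles JavaScript-rich and non-HTML forms); the class and attribute choices are assumptions.

```python
from html.parser import HTMLParser

class SearchFormDetector(HTMLParser):
    """Flag a page as exposing a search interface if it contains a
    <form> with a text or search input — a minimal heuristic sketch."""
    def __init__(self):
        super().__init__()
        self.in_form = False
        self.has_search_form = False

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "form":
            self.in_form = True
        elif self.in_form and tag == "input":
            # A free-text field inside a form is a strong hint of a
            # searchable interface (type defaults to "text" in HTML).
            if attrs.get("type", "text") in ("text", "search"):
                self.has_search_form = True

    def handle_endtag(self, tag):
        if tag == "form":
            self.in_form = False

page = '<html><body><form action="/find"><input type="text" name="q"></form></body></html>'
detector = SearchFormDetector()
detector.feed(page)
assert detector.has_search_form
```

A production classifier would combine many such signals (field labels, submit-button text, page context) rather than a single structural rule.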
At present, a user must manually provide input values to search interfaces and then extract the required data from the result pages. Manually filling out forms is cumbersome and infeasible for complex queries, yet such queries are essential for many web searches, especially in the area of e-commerce. Automating the querying and retrieval of data behind search interfaces is therefore desirable and essential for such tasks as building domain-independent deep web crawlers and automated web agents, searching for domain-specific information (vertical search engines), and extracting and integrating information from various deep web resources. We present a data model for representing search interfaces and discuss techniques for extracting field labels, client-side scripts, and structured data from HTML pages. We also describe a representation of result pages and discuss how to extract and store the results of form queries. In addition, we present a user-friendly and expressive form query language that allows one to retrieve information behind search interfaces and extract useful data from the result pages based on specified conditions. We implement a prototype system for querying web databases and describe its architecture and component design.
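The automated form-querying step described above amounts to merging user-supplied query terms with a form's extracted fields and issuing the resulting request. A minimal sketch for a GET-based form, assuming a crawler has already extracted the form's action URL and default fields (all URLs and field names here are illustrative, not from the thesis):

```python
from urllib.parse import urlencode, urljoin

def build_form_query(page_url, form_action, form_fields, user_values):
    """Build the GET URL that a form submission would request:
    merge the form's default/hidden fields with the user's query
    terms and attach them to the resolved action URL."""
    values = dict(form_fields)   # hidden/default fields from the form
    values.update(user_values)   # user-specified query terms
    return urljoin(page_url, form_action) + "?" + urlencode(values)

url = build_form_query(
    "http://example.org/db/search.html",   # page hosting the form
    "/db/results",                         # the form's action attribute
    {"lang": "en", "max": "20"},           # defaults extracted from the form
    {"q": "user generated content"},       # the user's query
)
assert url == "http://example.org/db/results?lang=en&max=20&q=user+generated+content"
```

POST forms, client-side scripts, and result-page extraction require considerably more machinery, which is what the thesis's data model and form query language address.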
Abstract:
Presentation at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014