347 results for Viewer
Abstract:
Since it emerged more than 50 years ago, television has undergone many transformations, both at the technological level (for example, the transition from black-and-white to colour broadcasting, from analogue to digital sound, and to digital transmission) and in terms of its influence on societies. Among other technological factors, the consolidation of the Internet, with its highly personalised user experience and its enormous amount of available content, pushed television towards becoming more interactive. Viewers can thus enjoy a television experience that is, on the one hand, more participatory, allowing them, for example, to give an opinion on the quality of a programme while watching it, and, on the other, more personalised, enabling them, for example, to receive content automatically suited to their profile and context. However, this more participatory and personalisable experience requires an identification, ideally automatic and non-intrusive, of the person who can benefit from it: the viewer. Yet, despite significant advances in interactive television, both in the supporting infrastructure and in the services made available, user identification is still a field of study with many aspects yet to be understood. Seniors, in particular, are heavy television consumers and represent a very considerable share of the people who can benefit from the interactivity present in many current services. A growing number of these services are designed to promote active ageing and concrete support for daily living, so seniors can benefit in several aspects of their everyday lives by using them. For this age group, user identification, as an enabler of the user experience, plays an especially important role in the personalised and targeted use of these services. However, given the different combinations of physical, sensory, cognitive and even digital literacy characteristics that typify seniors, it was anticipated that the selection of the most suitable identification method would depend on the user's profile; such methods may be based, for example, on a fingerprint reader installed in the remote control, on reading a wearable tag or an RFID card, on face recognition and, possibly, on the user's voice. The research therefore unfolded in several phases, in order to underpin the construction of a technological decision matrix that, according to the user's profile, selects the most suitable identification system. The methodological procedure behind the construction of this decision matrix involved a long process with real users, which began with exploratory interviews aimed at getting to know seniors better and understanding how they regard technology and, more specifically, interactive television. A fully functional high-fidelity prototype was then implemented to run tests aimed at understanding users' preferences regarding a subset of identification technologies.
Since these tests did not cover all the technologies under study, they proved inconclusive, but they reinforced the need to identify and characterise the aspects of the user profile that may influence a user's preference regarding the identification system. The identified characteristics became the input parameters of the matrix; to fill in its cells, acceptance tests were carried out with a group of seniors, based on a wizard-of-oz prototype specifically implemented to let them experience all the technologies under study. These tests were preceded by an assessment of the participants' functional capacities across the defined parameters. This text thus reports the entire research process that was conducted, ending with a description of examples of use of the implemented decision matrix and with the identification of potential directions for developing this work further.
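To make the idea of such a decision matrix concrete, here is a minimal sketch in Python of how a user profile could be mapped to an identification technology. The characteristic names, capability thresholds and acceptance scores are invented for illustration; they are not the values derived from the thesis's acceptance tests.

```python
# Hypothetical sketch of a technological decision matrix.  Characteristic
# names, technologies, thresholds and acceptance scores are illustrative only.
MIN_CAPABILITY = {  # minimum capability (0..1) a technology demands per characteristic
    "fingerprint_remote": {"fine_motor": 0.6, "vision": 0.4, "memory": 0.2},
    "rfid_wearable_tag":  {"fine_motor": 0.2, "vision": 0.1, "memory": 0.5},
    "face_recognition":   {"fine_motor": 0.0, "vision": 0.0, "memory": 0.0},
    "voice_recognition":  {"fine_motor": 0.0, "vision": 0.0, "memory": 0.3},
}
ACCEPTANCE = {  # average acceptance observed in (hypothetical) user tests
    "fingerprint_remote": 0.7,
    "rfid_wearable_tag": 0.8,
    "face_recognition": 0.9,
    "voice_recognition": 0.6,
}

def select_identification_system(profile: dict[str, float]) -> str:
    """Return the best-accepted technology whose demands the user profile meets."""
    feasible = [
        tech for tech, demands in MIN_CAPABILITY.items()
        if all(profile.get(c, 0.0) >= level for c, level in demands.items())
    ]
    # Fall back to the least demanding option if nothing is feasible.
    if not feasible:
        feasible = ["face_recognition"]
    return max(feasible, key=lambda tech: ACCEPTANCE[tech])

if __name__ == "__main__":
    senior = {"fine_motor": 0.3, "vision": 0.6, "memory": 0.9}
    print(select_identification_system(senior))  # -> face_recognition
```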
Abstract:
In this paper, I look at Joanne Leonard’s Being in Pictures and engage in a critical dialogue with an assemblage of visual and textual narratives that comprise her intimate photo memoir. In doing this I draw on Hannah Arendt’s take on narratives as tangible traces of uniqueness and plurality, political traits par excellence in the cultural histories of the human condition. Being aware of my role as a reader/viewer/interpreter of a woman artist’s auto/biographical narratives, I move beyond dilemmas of representation or questions of unveiling “the real Leonard”. The artist is instead configured as a narrative persona, whose narratives respond to three interrelated themes of inquiry, namely the visualization of spatial technologies, vulnerability and the gendering of memory. Key words: gendered memories, narrative persona, spatial technologies, photo memoir, vulnerability
Abstract:
The number of software applications available on the Internet for distributing video streams in real time over P2P networks has grown quickly in the last two years. Typically, this kind of distribution is carried out by television channel broadcasters trying to make their content globally available, using viewers' resources to support large-scale video distribution without incurring incremental costs. However, the lack of adaptation in video quality, combined with the lack of a standard protocol for this kind of multimedia distribution, has driven content providers to largely ignore it as a solution for video delivery over the Internet. While the scalable extension of H.264 encoding (H.264/SVC) can be used to support terminal and network heterogeneity, it is not clear how it can be integrated into a P2P overlay to form a large-scale, real-time distribution system. In this paper, we start by defining a solution that combines the most popular P2P file-sharing protocol, BitTorrent, with H.264/SVC encoding for real-time video content delivery. Using this solution, we then evaluate the effect of several parameters on the quality received by peers.
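As a rough illustration of the scheduling problem such a combination raises, the sketch below shows a simplistic piece picker for layered streaming: base-layer pieces closest to the playback point are requested first, enhancement-layer pieces only afterwards. The class and function names are hypothetical; this is not the algorithm evaluated in the paper.

```python
# Hypothetical piece-selection sketch for layered (SVC-like) P2P streaming.
# Illustrates "lowest layer first, earliest playback deadline first" only.
from dataclasses import dataclass

@dataclass(frozen=True)
class Piece:
    index: int   # position in the stream (lower = earlier playback)
    layer: int   # 0 = base layer, 1+ = enhancement layers

def next_piece_to_request(missing: list[Piece], playback_index: int, window: int = 32) -> Piece | None:
    """Choose the next piece to request from peers.

    Only pieces inside the playback window are considered; lower layers win,
    then earlier playback positions.
    """
    candidates = [p for p in missing if playback_index <= p.index < playback_index + window]
    if not candidates:
        return None
    return min(candidates, key=lambda p: (p.layer, p.index))

if __name__ == "__main__":
    missing = [Piece(12, 1), Piece(10, 0), Piece(11, 0), Piece(50, 0)]
    print(next_piece_to_request(missing, playback_index=10))  # Piece(index=10, layer=0)
```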
Abstract:
Doctoral thesis, Fine Arts (Art Sciences), Universidade de Lisboa, Faculdade de Belas-Artes, 2014
Abstract:
E-poltergeist takes over the user’s internet browser, automatically initiating Web searches without their permission. It is a Web-based artwork which explores issues of user control when confronted with complex technological systems, questioning the limits of digital interactive arts as consensual reciprocal systems. e-poltergeist was a major web commission that marked an early stage of research in a larger enquiry by Craighead and Thomson into the relationship between live virtual data, global communications networks and instruction-based art, exploring how such systems can be re-contextualised within gallery environments. e-poltergeist presented the 'viewer' with a singular narrative by using live internet search-engine data that aimed to create a perpetual and virtually unstoppable cycle of search engine results, banner ads and moving windows as an interruption into the normal use of an internet browser. The work also addressed the ‘de-personalisation’ of internet use by sending a series of messages from the live search engine data that seemed to address the user directly: 'Is anyone there?'; 'Can anyone hear me?'; 'Please help me!'; 'Nobody cares!' e-poltergeist makes a significant contribution to the taxonomy of new media art by dealing with the way that new media art can re-address notions of existing traditions in art such as appropriation and manipulation, instruction-based art and conceptual art. e-poltergeist was commissioned ($12,000) for 010101: Art in Technological Times, a landmark international exhibition presented by the San Francisco Museum of Modern Art, which brought together leading international practitioners working with emergent technologies, including Tatsuo Miyajima, Janet Cardiff and Brian Eno. Peer recognition of the project in the form of reviews includes: Curating New Media, Gateshead: Baltic Centre for Contemporary Art, Cook, Sarah, Beryl Graham and Sarah Martin, ISBN: 1093655064; The Wire; http://www.wired.com/culture/lifestyle/news/2000/12/40464 (review by Reena Jana); Leonardo (review by Barbara Lee Williams and Sonya Rapoport) http://www.leonardo.info/reviews/feb2001/ex_010101_willrapop.html. All the work is developed jointly and equally between Craighead and her collaborator, Jon Thomson, Slade School of Fine Art.
Abstract:
Flat Earth is a desktop documentary which takes the viewer on a seven-minute trip around the world, encountering a series of fragments taken from real people's blogs. These fragments are knitted together to form a kind of story or singular narrative.
Abstract:
Don’t tell me the moon is shining; show me the glint of light on broken glass Anton Chekhov Representations of Africa in cinema are almost as old as cinema itself and date back to Hollywood’s silent era. Most early examples feature the continent as a mere exotic backdrop and include The Sheik (Melford 1921), soon followed, in 1926, by George Fitzmaurice’s Son of the Sheik starring Rudolph Valentino. The next decade brought Van Dyke’s Tarzan movies, Robert Stevenson’s King Solomon’s Mines (1937), and, on the European side, Duvivier’s Pépé le Moko (1936). For representations of Francophone Africa by Africans themselves, the viewing public more or less had to wait, however, until decolonisation in the 1960s (with, for example, Sembene Ousmane’s Borom Sarret and La Noire de…, both released in 1966 and, in 1968, Mandabi). Since then Francophone African cinema has come a long way and has diversified into various strands. Between Borom Sarret and Mahamat-Saleh Haroun’s 2006 Daratt, Saison sèche - or the same director’s Un homme qui crie, almost half a century has elapsed. Over this period, films inevitably have addressed a spectrum of visual, ideological and political tropes. They range from unadorned depictions of the newly independent states and their societies to highly aestheticised productions, not to mention surreal and poetic visions as displayed for instance in Djibril Diop Mambéty’s Touki Bouki (1973). Most of the early films send an overt socio-political message which is a clear and explicit denunciation of a corrupt state of affairs (Souleymane Cissé’s Baara, 1977). They aim to trigger strong emotional and political responses from the viewer, in unambiguous support for the film-maker’s stand. Sembene himself declared: “I consider cinema a means of political action” (Murphy 2000: 221). Similarly, the Mauritanian director Med Hondo wishes to “take up this technical medium and to make it a mouthpiece on behalf of [his] fellow Africans and Arabs” (Jeffries 2002: 11). All this echoes the claims of the Fédération Panafricaine des Cinéastes (FEPACI, founded in 1969), an organisation “dedicated to the liberation of Africa”. In sharp contrast to the incipient momentum given Francophonie by Bourguiba, the Nigerien Hamani Diori and the Senegalese Senghor, who invoked a worldwide communauté organique francophone, FEPACI called for “the creation of an aesthetics of disalienation… [using] didactic... forms to denounce the alienation of countries that were politically independent but culturally and economically dependent on the West” (Diawara 1996: 40). Sembene’s Xala (1974) became the blueprint for this, to this day the best-known vein of Francophone African cinema. Thus considered, this pedigree seems a million miles from mainstream global cinema with its overriding mission to entertain. A question therefore arises: to what extent can a cinema that sprang from such beginnings be seen to interface in any meaningful way with a global film industry that, overwhelmingly and for a century, has indeed entertained the world – with Hollywood at its centre?
Abstract:
A Short Film about War is a narrative documentary artwork made entirely from information found on the worldwide web. In ten minutes this two-screen gallery installation takes viewers around the world to a variety of war zones as seen through the collective eyes of the online photo-sharing community Flickr, and as witnessed by a variety of existing military and civilian bloggers. As the ostensibly documentary 'film' plays itself out, a second screen logs the provenance of images, blog fragments and GPS locations of each element comprising the work, so that the same information is simultaneously communicated to the viewer in two parallel formats: on the one hand as a dramatised reportage and on the other as a text log. In offering this tautology, we are attempting to explore and reveal the way in which information changes as it is gathered, edited and then mediated through networked communications technologies or broadcast media, and how that changes and distorts meaning, especially for (the generally wealthy minority of) the world's users of high-speed broadband networks, who have become used to the treacherously persuasive panoptic view that Google Earth (and the worldwide web) appears to give us.
Abstract:
Internship report presented to the Escola Superior de Comunicação Social as part of the requirements for the Master's degree in Journalism.
Abstract:
Internship report presented to the Escola Superior de Comunicação Social as part of the requirements for the Master's degree in Journalism.
Abstract:
Personalised video can be achieved by inserting objects into a video play-out according to the viewer's profile. Content which has been authored and produced for general broadcast can take on additional commercial service features when personalised either for individual viewers or for groups of viewers participating in entertainment, training, gaming or informational activities. Although several scenarios and use-cases can be envisaged, we are focussed on the application of personalised product placement. Targeted advertising and product placement are currently garnering intense interest in the commercial networked media industries. Personalisation of product placement is a relevant and timely service for next generation online marketing and advertising and for many other revenue generating interactive services. This paper discusses the acquisition and insertion of media objects into a TV video play-out stream where the objects are determined by the profile of the viewer. The technology is based on MPEG-4 standards using object-based video and MPEG-7 for metadata. No proprietary technology or protocol is proposed. To trade the objects into the video play-out, a Software-as-a-Service brokerage platform based on intelligent agent technology is adopted. Agencies, libraries and service providers are represented in a commercial negotiation to facilitate the contractual selection and usage of objects to be inserted into the video play-out.
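A minimal sketch of the profile-driven selection step is given below, assuming simplified keyword metadata in place of full MPEG-7 descriptors; the names and the matching rule are illustrative only, not the system described in the paper.

```python
# Hypothetical sketch of profile-driven object selection for personalised
# product placement.  Keyword sets stand in for MPEG-7 descriptors.
from dataclasses import dataclass, field

@dataclass
class MediaObject:
    object_id: str
    keywords: set[str] = field(default_factory=set)   # simplified descriptors

@dataclass
class ViewerProfile:
    viewer_id: str
    interests: set[str] = field(default_factory=set)

def pick_placement(objects: list[MediaObject], profile: ViewerProfile) -> MediaObject | None:
    """Return the candidate object whose descriptors best overlap the viewer's interests."""
    scored = [(len(obj.keywords & profile.interests), obj) for obj in objects]
    best_score, best_obj = max(scored, key=lambda pair: pair[0], default=(0, None))
    return best_obj if best_score > 0 else None

if __name__ == "__main__":
    catalogue = [
        MediaObject("soda_can", {"beverage", "sport"}),
        MediaObject("laptop", {"technology", "work"}),
    ]
    viewer = ViewerProfile("v42", {"technology", "gaming"})
    chosen = pick_placement(catalogue, viewer)
    print(chosen.object_id if chosen else "no insertion")  # -> laptop
```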
Abstract:
Media content personalisation is a major challenge involving viewers as well as media content producer and distributor businesses. The goal is to provide viewers with media items aligned with their interests. Producers and distributors engage in item negotiations to establish the corresponding service level agreements (SLA). In order to address automated partner lookup and item SLA negotiation, this paper proposes the MultiMedia Brokerage (MMB) platform, which is a multiagent system that negotiates SLA regarding media items on behalf of media content producer and distributor businesses. The MMB platform is structured in four service layers: interface, agreement management, business modelling and market. In this context, there are: (i) brokerage SLA (bSLA), which are established between individual businesses and the platform regarding the provision of brokerage services; and (ii) item SLA (iSLA), which are established between producer and distributor businesses about the provision of media items. In particular, this paper describes the negotiation, establishment and enforcement of bSLA and iSLA, which occur at the agreement and negotiation layers, respectively. The platform adopts a pay-per-use business model where the bSLA define the general conditions that apply to the related iSLA. To illustrate this process, we present a case study describing the negotiation of a bSLA instance and several related iSLA instances. The latter correspond to the negotiation of the Electronic Program Guide (EPG) for a specific end viewer.
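To make the distinction between the two agreement types concrete, the sketch below models a pay-per-use bSLA whose general conditions constrain the iSLA negotiated under it; the field names and the single validation rule are invented for illustration, not the MMB platform's actual schema.

```python
# Hypothetical sketch of brokerage (bSLA) and item (iSLA) agreements.
from dataclasses import dataclass

@dataclass
class BrokerageSLA:            # bSLA: business <-> platform, pay-per-use
    business_id: str
    fee_per_item: float        # brokerage fee charged per negotiated item
    max_item_price: float      # general condition applying to related iSLA

@dataclass
class ItemSLA:                 # iSLA: producer <-> distributor, per media item
    producer_id: str
    distributor_id: str
    item_id: str
    price: float

def conforms(bsla: BrokerageSLA, isla: ItemSLA) -> bool:
    """Check that an item agreement respects the general conditions of its bSLA."""
    return isla.price <= bsla.max_item_price

if __name__ == "__main__":
    bsla = BrokerageSLA("distributorA", fee_per_item=0.05, max_item_price=100.0)
    isla = ItemSLA("producerB", "distributorA", "epg_item_17", price=80.0)
    print(conforms(bsla, isla))  # True
```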
Abstract:
The year 2012 was the “boom year” for MOOCs, and their outstanding growth since then led us to move forward with designing the first MOOC at our institution (and the third in our country, Portugal). Most MOOCs are based on video lectures, and the learning analytics process for them is just taking its first steps. Designing a video-lecture seems, at first glance, very easy: one can just record a live lesson or lecture and turn it directly into a video-lecture (even here one may experience some “sound” and “camera” problems); but developing an engaging, appealing video-lecture that motivates students to embrace knowledge and really contributes to the teaching/learning process is not an easy task. Therefore, questions like “What kind of information can induce knowledge construction in a video-lecture?”, “How can a professor interact in a video-lecture when he is not really there?”, “What are the video-lecture attributes that contribute the most to viewer engagement?”, “What seems to be the maximum ‘time-resistance’ of a viewer?”, and many others arose in our minds when designing video-lectures for a Mathematics MOOC from scratch. We believe this technological resource can be a powerful tool to enhance students' learning process. Students who were born in the digital/image era respond and react somewhat differently to outside stimuli than their teachers/professors ever did or do. In this article we describe how we tried to overcome some of the difficulties and challenges we tackled when producing our own video-math-lectures, and in what way, we feel, videos can contribute to the teaching and learning process at higher education level.
Abstract:
This thesis examines the processes through which identity is acquired and the processes that Hollywood films employ to facilitate audience identification in order to determine the extent to which individuality is possible within postmodern society. Opposing views of identity formation are considered: on the one hand, that of the Frankfurt School which envisions the mass audience controlled by the culture industry and on the other, that of John Fiske which places control in the hands of the individual. The thesis takes a mediating approach, conceding that while the mass media do provide and influence identity formation, individuals can and do decode a variety of meanings from the material made available to them in accordance with the text's use-value in relation to the individual's circumstances. The analysis conducted in this thesis operates on the assumption that audiences acquire identity components in exchange for paying to see a particular film. Reality Bites (Ben Stiller 1994) and Scream (Wes Craven 1996) are analyzed as examples of mainstream 1990s films whose material circumstances encourage audience identification and whose popularity suggests that audiences did indeed identify with them. The Royal Tenenbaums (Wes Anderson 2001) is considered for its art film sensibilities and is examined in order to determine to what extent this film can be considered a counter example. The analysis consists of a combination of textual analysis and reception study in an attempt to avoid the problems associated with each approach when employed alone. My interpretation of the filmmakers' and marketers' messages will be compared with online reviews posted by film viewers to determine how audiences received and made use of the material available to them. Viewer-posted reviews, both unsolicited and unrestricted, as found online, will be consulted and will represent a segment of the popular audience for the three films to be analyzed.
The new blockbuster film sequel: changing cultural and economic conditions within the film industry
Abstract:
Film sequels are a pervasive part of film consumption practices and have become an important part of the decision making process for Hollywood studios and producers. This thesis indicates that sequels are not homogeneous groups of films, as they are often considered, but offer a variety of story construction and utilize a variety of production methods. Three types of blockbuster sequel sets are identified and discussed in this thesis. The Traditional Blockbuster Sequel Set, as exemplified by the Back to the Future (1985, 1989, 1990) films, is the most conventional type of sequel set and capitalizes on the winning formula of the first film in the franchise. The MultiMedia Sequel Set, such as The Matrix (1999, 2003) trilogy, allows the user/viewer to experience and consume the story as well as the world of the film through many different media. The Lord of the Rings (2001, 2002, 2003) set of films is an illustration of The Saga Sequel Set, where plot lines are continuous over the entire franchise, thus allowing the viewer to see the entire set as a unified work. The thesis also demonstrates how blockbuster sequel sets, such as the Pirates of the Caribbean (2003, 2006, 2007) franchise, restructure the production process of the Hollywood film industry.