3 results for Windows
in WestminsterResearch - UK
Abstract:
e-poltergeist takes over the user's internet browser, automatically initiating web searches without their permission. It is a web-based artwork which explores issues of user control when confronted with complex technological systems, questioning the limits of digital interactive arts as consensual, reciprocal systems. e-poltergeist was a major web commission that marked an early stage of research in a larger enquiry by Craighead and Thomson into the relationship between live virtual data, global communications networks and instruction-based art, exploring how such systems can be re-contextualised within gallery environments. e-poltergeist presented the 'viewer' with a singular narrative by using live internet search-engine data, aiming to create a perpetual and virtually unstoppable cycle of search-engine results, banner ads and moving windows as an interruption to the normal use of an internet browser. The work also addressed the 'de-personalisation' of internet use by sending a series of messages drawn from the live search-engine data that seemed to address the user directly: 'Is anyone there?'; 'Can anyone hear me?'; 'Please help me!'; 'Nobody cares!' e-poltergeist makes a significant contribution to the taxonomy of new media art by dealing with the way that new media art can re-address existing traditions in art such as appropriation and manipulation, instruction-based art and conceptual art.

e-poltergeist was commissioned ($12,000) for 010101: Art in Technological Times, a landmark international exhibition presented by the San Francisco Museum of Modern Art, which brought together leading international practitioners working with emergent technologies, including Tatsuo Miyajima, Janet Cardiff and Brian Eno. Peer recognition of the project in the form of reviews includes: Cook, Sarah, Beryl Graham and Sarah Martin, Curating New Media, Gateshead: Baltic Centre for Contemporary Art, ISBN: 1093655064; Wired, http://www.wired.com/culture/lifestyle/news/2000/12/40464 (review by Reena Jana); Leonardo, http://www.leonardo.info/reviews/feb2001/ex_010101_willrapop.html (review by Barbara Lee Williams and Sonya Rapoport). All the work is developed jointly and equally between Craighead and her collaborator, Jon Thomson, Slade School of Fine Art.
Abstract:
This paper describes the development of a generic tool for dynamic cost indexing (DCI), which encompasses the ability to manage flight delay costs on a dynamic basis, trading accelerated fuel burn against the 'cost of time'. Many airlines face significant barriers to identifying which costs should be included in 'cost of time' calculations and how to quantify them. The need to integrate historical passenger delay and policy data with real-time passenger connections data is highlighted, and the absence of industry standards for defining and interfacing the necessary tools is recognised. Delay recovery decision windows and ATC cooperation are key constraints. DCI tools could also be used in the pre-departure phase and may offer environmental decision-support functionality, which could serve as a differentiating technology required for access to designated, future 'green' airspace. Short-term opportunities for saving fuel and/or reducing emissions are also identified.
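As a minimal sketch of the core DCI trade-off described above (not the tool developed in the paper), the following Python example compares candidate delay-recovery options by weighing extra fuel cost against a simple 'cost of time' model. The fuel price, delay-cost rate and recovery options are illustrative assumptions only.

    # Sketch of the dynamic cost indexing trade-off: extra fuel burn vs. 'cost of time'.
    # All figures below are invented for illustration, not taken from the paper.

    FUEL_PRICE_PER_KG = 0.8          # EUR/kg (assumed)

    def delay_cost(delay_min, cost_per_min=40.0):
        """Crude linear 'cost of time' model, standing in for the historical
        passenger-delay and real-time connections data the paper calls for."""
        return max(delay_min, 0.0) * cost_per_min

    def best_recovery_option(options, current_delay_min):
        """Pick the option that minimises extra fuel cost plus residual cost of time."""
        def total_cost(opt):
            extra_fuel_cost = opt["extra_fuel_kg"] * FUEL_PRICE_PER_KG
            residual_delay = current_delay_min - opt["minutes_recovered"]
            return extra_fuel_cost + delay_cost(residual_delay)
        return min(options, key=total_cost)

    if __name__ == "__main__":
        # Illustrative options available within the delay-recovery decision window.
        options = [
            {"name": "hold plan",  "extra_fuel_kg": 0,   "minutes_recovered": 0},
            {"name": "speed up 1", "extra_fuel_kg": 300, "minutes_recovered": 5},
            {"name": "speed up 2", "extra_fuel_kg": 800, "minutes_recovered": 10},
        ]
        print(best_recovery_option(options, current_delay_min=12)["name"])

The same comparison could be re-run as delay estimates and connection data change in flight, which is the "dynamic" aspect of the indexing.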
Abstract:
Turbo codes experience a significant decoding delay because of the iterative nature of the decoding algorithms, the high number of metric computations and the complexity added by the (de)interleaver. The extrinsic information is exchanged sequentially between two Soft-Input Soft-Output (SISO) decoders. Instead of this sequential process, a received frame can be divided into smaller windows to be processed in parallel. In this paper, a novel parallel processing methodology is proposed, based on previous parallel decoding techniques. A novel Contention-Free (CF) interleaver is proposed as part of the decoding architecture, which allows extrinsic Log-Likelihood Ratios (LLRs) to be used immediately as a-priori LLRs to start the second half of the iterative turbo decoding. The simulation case studies performed in this paper show that our parallel decoding method can provide an 80% time saving compared to standard decoding and a 30% time saving compared to previous parallel decoding methods, at the expense of a 0.3 dB Bit Error Rate (BER) performance degradation.
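As an illustration of the contention-free property that window-parallel turbo decoding relies on (not the interleaver proposed in the paper), the Python sketch below checks whether a quadratic permutation polynomial (QPP) interleaver lets all parallel windows access distinct memory banks at every decoding step. The frame length, window size and QPP coefficients are illustrative values from the LTE interleaver family.

    # Sketch: checking the contention-free (CF) property for window-parallel decoding.
    # QPP parameters (K=40, f1=3, f2=10) are illustrative LTE values, not the paper's design.

    def qpp_interleaver(K, f1, f2):
        """Quadratic permutation polynomial interleaver: pi(i) = (f1*i + f2*i^2) mod K."""
        return [(f1 * i + f2 * i * i) % K for i in range(K)]

    def is_contention_free(pi, window_size):
        """True if, at every decoding step, the interleaved addresses used by the
        parallel windows all fall into distinct memory banks (one bank per window)."""
        K = len(pi)
        assert K % window_size == 0
        num_windows = K // window_size
        for j in range(window_size):                     # step j within each window
            banks = {pi[j + t * window_size] // window_size for t in range(num_windows)}
            if len(banks) != num_windows:                # two windows hit the same bank
                return False
        return True

    if __name__ == "__main__":
        K, W = 40, 8                                     # frame length and window size (illustrative)
        pi = qpp_interleaver(K, f1=3, f2=10)
        print(is_contention_free(pi, W))                 # expected: True for this QPP/window choice

When this property holds, each SISO window can read and write its extrinsic LLRs without memory contention, which is what allows them to be consumed immediately as a-priori LLRs in the next half-iteration.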