956 results for High definition television
Abstract:
This paper analyzes post-pornographic practices – an activist and theoretical movement that recognizes pornography as valuable in understanding the social, cultural, and political systems that construct and reflect identity – through the work of American artist Marilyn Minter. The analysis contextualizes post-pornography and concludes with an examination of several of Minter’s recent paintings and photographs through a post-pornographic lens to assert that these works explore sexuality and gender by incorporating aesthetic and ideological references to porn and by invoking the post-pornographic tenets of collaboration, disruption of public space, and the inversion of heteronormativity. Creating art with Wangechi Mutu, displaying in Times Square high-definition videos of lips that slurp green goo, and painting men garbed in lingerie are some of Minter’s endeavors, which re-envision pornographic relationships to authorship and agency, public versus private space, and the expression or repression of fantasy.
Abstract:
Senior thesis written for Oceanography 445
Abstract:
This study addresses questions of high-definition image production and composition for digital HDTV. Drawing on data from the specialized literature, both printed and electronic, together with interviews with professionals in the field and observation of the HDTV programming available in the city of São Paulo, the images and visual composition brought about by high-definition, interactive digital TV could be analyzed. High-definition image production must serve two kinds of audience: viewers of the digital broadcast and those who will continue watching on the analogue system, with low perception of visual detail. The results revealed two fundamental and interdependent issues: production practices, scenographic materials, and the processes for composing image elements need to be updated to match the new technological characteristics; and the process of implementing digital TV in Brazil must be revised, with corrections to the timelines and policies adopted, at the risk of delaying the entire process of producing content and high-definition images for this medium.
Abstract:
The world is connected by a core network of long-haul optical communication systems that link countries and continents, enabling long-distance phone calls, data-center communications, and the Internet. The demands on information rates have been constantly driven up by applications such as online gaming, high-definition video, and cloud computing. All over the world, end-user connection speeds are being increased by replacing conventional digital subscriber line (DSL) and asymmetric DSL (ADSL) with fiber to the home. Clearly, the capacity of the core network must also increase proportionally.
Abstract:
Continuous progress in optical communication technology, and the corresponding increase in data rates in core fiber communication systems, is driven by the ever-growing capacity demand from constantly emerging bandwidth-hungry services such as cloud computing and ultra-high-definition video streaming. This demand is pushing the required capacity of optical communication lines close to the theoretical limit of a standard single-mode fiber, which is imposed by the Kerr nonlinearity [1–4]. In recent years there have been extensive efforts to mitigate the detrimental impact of fiber nonlinearity on signal transmission through various compensation techniques. Many challenges remain in applying these methods, however, because the majority of technologies used in inherently nonlinear fiber communication systems were originally developed for linear communication channels. The application of "linear techniques" in a fiber communication system is therefore inevitably limited by the nonlinear properties of the fiber medium. The quest for the optimal design of nonlinear transmission channels, the development of nonlinear communication techniques, and the use of nonlinearity in a "constructive" way have occupied researchers for a long time.
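The capacity ceiling described in this abstract can be illustrated with a small numerical sketch. In the widely used Gaussian-noise picture of fiber nonlinearity, nonlinear interference grows roughly with the cube of launch power, so the effective SNR — and hence the Shannon capacity — peaks at a finite optimal power rather than growing without bound. The parameter values below are hypothetical, chosen only to make the peak visible; this is not the model used by the paper itself.

```python
import numpy as np

# Illustrative sketch (assumed parameters, not from the abstract):
# effective SNR = P / (P_ase + eta * P**3), which peaks at a finite power.
P_ase = 1e-6   # accumulated amplifier (ASE) noise power in W, assumed
eta = 1e3      # nonlinear interference coefficient in 1/W^2, assumed

P = np.logspace(-5, -1, 400)          # launch-power sweep (W)
snr = P / (P_ase + eta * P**3)        # effective per-channel SNR
capacity = np.log2(1 + snr)           # Shannon spectral efficiency (bit/s/Hz)

P_opt = P[np.argmax(capacity)]        # power where capacity peaks
# Analytically, d/dP of the SNR vanishes at P = (P_ase / (2*eta))**(1/3),
# so pushing power beyond this point only feeds the nonlinear noise term.
```

Raising launch power past `P_opt` lowers capacity again, which is the "theoretical limit imposed by Kerr nonlinearity" the abstract refers to.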
Abstract:
The move from Standard Definition (SD) to High Definition (HD) represents a sixfold increase in the amount of data that must be processed. With expanding resolutions and evolving compression standards, high-performance yet flexible architectures are needed to allow for quick upgradability. Technology is advancing in display resolutions, compression techniques, and video intelligence. Software implementations of these systems can attain accuracy, with trade-offs among processing performance (achieving specified frame rates on large image data sets), power, and cost constraints. New architectures are needed to keep pace with the fast innovations in video and imaging. This dissertation presents dedicated hardware implementations of the pixel- and frame-rate processes on a Field Programmable Gate Array (FPGA) to achieve real-time performance.
The contributions of the dissertation are as follows. (1) We develop a target detection system by applying a novel running average mean threshold (RAMT) approach to globalize the threshold required for background subtraction. This approach adapts the threshold automatically to different environments (indoor and outdoor) and different targets (humans and vehicles). For low power consumption and better performance, we implement the complete system on an FPGA. (2) We introduce a safe-distance factor and develop an algorithm for detecting the occurrence of occlusion during target tracking. A novel mean threshold is calculated by motion-position analysis. (3) A new strategy for gesture recognition is developed using Combinational Neural Networks (CNN) based on a tree structure. The method is analyzed on American Sign Language (ASL) gestures. We introduce a novel points-of-interest approach to reduce the feature-vector size and a gradient-threshold approach for accurate classification. (4) We design a gesture recognition system using a hardware/software co-simulated neural network to exploit the high speed and low memory-storage requirements of the FPGA. We develop an innovative maximum-distance algorithm which uses only 0.39% of the image as the feature vector to train and test the system. The gestures in a database may vary between applications, so it is essential to keep the feature vector as small as possible while maintaining the same accuracy and performance.
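The adaptive-threshold idea behind contribution (1) can be sketched in a few lines. The update rule and the use of the frame-wide mean absolute difference as a "globalized" threshold are my assumptions for illustration, not the dissertation's exact RAMT algorithm (which targets an FPGA, not software):

```python
import numpy as np

# Hedged software sketch of running-average background subtraction with a
# frame-global adaptive threshold, in the spirit of the RAMT approach above.
def detect_targets(frames, alpha=0.05, k=2.0):
    """Yield one boolean foreground mask per frame after the first.

    alpha: learning rate of the running-average background model (assumed).
    k:     scale factor on the frame-wide mean difference, giving a single
           global threshold that adapts to scene and lighting (assumed).
    """
    background = frames[0].astype(np.float64)
    for frame in frames[1:]:
        frame = frame.astype(np.float64)
        diff = np.abs(frame - background)
        threshold = k * diff.mean()      # global, self-adapting threshold
        yield diff > threshold           # foreground mask
        # blend the current frame into the background model
        background = (1 - alpha) * background + alpha * frame
```

Because the threshold is recomputed from each frame's own statistics, the same code works indoors and outdoors without manual retuning, which is the property the abstract highlights.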
Abstract:
With the progress of computer technology, computers are expected to be more intelligent in their interaction with humans, presenting information according to the user's psychological and physiological characteristics. However, computer users with visual problems may encounter difficulties in perceiving icons, menus, and other graphical information displayed on the screen, limiting the efficiency of their interaction with computers. In this dissertation, a personalized and dynamic image precompensation method was developed to improve the visual performance of computer users with ocular aberrations. The precompensation was applied to graphical targets before presenting them on the screen, aiming to counteract the visual blurring caused by the ocular aberration of the user's eye. A complete and systematic modeling approach to describe the retinal image formation of the computer user was presented, taking advantage of modeling tools such as Zernike polynomials, wavefront aberration, the Point Spread Function, and the Modulation Transfer Function. The ocular aberration of the computer user was first measured with a wavefront aberrometer, as a reference for the precompensation model. The dynamic precompensation was generated from the resized aberration, with the pupil diameter monitored in real time. The potential visual benefit of the dynamic precompensation method was explored through software simulation, with aberration data from a real human subject. An "artificial eye" experiment was conducted by simulating the human eye with a high-definition camera, providing an objective evaluation of the image quality after precompensation. In addition, an empirical evaluation with 20 human participants was designed and implemented, involving image recognition tests performed under a more realistic viewing environment of computer use.
The statistical analysis of the empirical experiment confirmed the effectiveness of the dynamic precompensation method, showing a significant improvement in recognition accuracy. The merit and necessity of the dynamic precompensation were also substantiated by comparing it with static precompensation. The visual benefit of the dynamic precompensation was further confirmed by subjective assessments collected from the evaluation participants.
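The core idea — pre-filtering an image so that the eye's own blur approximately undoes the filter — can be sketched as a regularized inverse (Wiener-style) filter built from a known Point Spread Function. This is a generic illustration of PSF-based precompensation under assumed inputs, not the dissertation's exact pipeline, which additionally rescales the measured wavefront aberration to the tracked pupil diameter:

```python
import numpy as np

# Minimal sketch of image precompensation by regularized inverse filtering,
# assuming the blur PSF is known. `eps` is an assumed regularization term
# that keeps near-zero frequencies of the OTF from blowing up the inverse.
def precompensate(image, psf, eps=1e-2):
    """Pre-filter `image` so that subsequent blurring by `psf` roughly cancels."""
    # optical transfer function of the blur (PSF centered, so shift to origin)
    otf = np.fft.fft2(np.fft.ifftshift(psf), s=image.shape)
    inv = np.conj(otf) / (np.abs(otf) ** 2 + eps)   # Wiener-style inverse
    return np.real(np.fft.ifft2(np.fft.fft2(image) * inv))
```

Applying the eye's blur to the precompensated image then yields something close to the original, which is what the "artificial eye" camera experiment evaluates objectively.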
Abstract:
Within the context of the overall ecological working programme Dynamics of Antarctic Marine Shelf Ecosystems (DynAMo) of the PS96 (ANT-XXXI/2) cruise of RV "Polarstern" to the Weddell Sea (Dec 2015 to Feb 2016), seabed imaging surveys were carried out along drift profiles by means of the Ocean Floor Observation System (OFOS) of the Alfred Wegener Institute, Helmholtz Centre for Polar and Marine Research (AWI), Bremerhaven. The setup and mode of deployment of the OFOS were similar to those described by Bergmann and Klages (2012, doi:10.1016/j.marpolbul.2012.09.018). OFOS is a surface-powered gear equipped with two downward-looking cameras installed side by side: one high-resolution, wide-angle still camera (CANON® EOS 5D Mark III; lens: Canon EF 24 f/1.4L II; f-stop: 13; exposure time: 1/125 s; in-air view angles: 74° (horizontal), 53° (vertical), 84° (diagonal); image size: 5760 x 3840 px = 21 MPix; the front of the pressure-resistant camera housing consists of a plexiglass dome port) and one high-definition colour video camera (SONY® FCB-H11). The system was vertically lowered over the stern of the ship on a broadband fibre-optic cable until it hovered approximately 1.5 m above the seabed. It was then towed behind the slowly sailing ship at a speed of approximately 0.5 kn (0.25 m/s). The ship's Global Acoustic Positioning System (GAPS), combining Ultra-Short Baseline (USBL), Inertial Navigation System (INS) and satellite-based Global Positioning System (GPS) technologies, was used to obtain highly precise underwater position data for the OFOS. During the profile, OFOS was kept at the preferred height above the seafloor by means of the live video feed and occasional minor cable-length adjustments with the winch, to compensate for small-scale bathymetric variations in seabed morphology. Information on water depth and height above the seafloor was continuously recorded by OFOS-mounted sensors (GAPS transponder, Tritech altimeter).
Three lasers placed beside the still camera emit parallel beams and project red light points, arranged as an equilateral triangle with a side length of 50 cm, into each photo, thus providing a scale that can be used to calculate the seabed area depicted in each image and/or to measure the size of organisms or seabed features visible in the image. In addition, the seabed area depicted was estimated using the altimeter-derived height above the seafloor and the optical characteristics of the OFOS still camera. In automatic mode, a seabed photo depicting an area of approximately 3.45 m² (= 2.3 m x 1.5 m, varying with the actual height above ground) was taken every 30 seconds to obtain a series of "TIMER" stills distributed at regular distances along profiles whose length varied with the duration of the cast. At a ship speed of 0.5 kn, the average distance between seabed images was approximately 5 m. Additional "HOTKEY" photos were taken of interesting objects (organisms, or seabed features such as putative iceberg scours) when they appeared in the live video feed (which was also recorded, in addition to the stills, for documentation and possible later analysis). If any image from this collection is used, please cite the reference as given above.
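The laser-triangle scaling described above reduces to simple arithmetic: the known 50 cm baseline between two laser dots gives metres per pixel, from which the imaged seabed area follows. The pixel distance in the example below is made up for illustration (chosen so that the result matches the ~3.45 m² frame footprint quoted in the text); the helper names are mine, not from the dataset description:

```python
# Back-of-envelope sketch of laser-dot scaling for seabed photos.
LASER_SIDE_M = 0.50  # side length of the projected laser triangle (from text)

def image_scale(px_between_lasers):
    """Metres per pixel, from the pixel distance between two laser dots."""
    return LASER_SIDE_M / px_between_lasers

def seabed_area(width_px, height_px, px_between_lasers):
    """Seabed area (m^2) depicted by a full frame at that scale."""
    s = image_scale(px_between_lasers)
    return (width_px * s) * (height_px * s)

# Hypothetical example: if the 50 cm baseline spans about 1252 px in a
# 5760 x 3840 px frame, the frame covers roughly 2.3 m x 1.53 m ≈ 3.5 m²,
# consistent with the ~3.45 m² quoted above.
area = seabed_area(5760, 3840, 1252)
```

The same per-pixel scale can be used to measure organisms or seabed features visible in an image.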
Abstract:
There is an increased need for 3D recording of archaeological sites and digital preservation of their artifacts. Digital photogrammetry with prosumer DSLR cameras is a suitable tool for recording epigraphy in particular, as it allows inscribed surfaces to be recorded with very high accuracy, often better than 2 mm, with only a short time spent in the field. When photogrammetry is fused with other computational photography techniques such as panoramic tours and Reflectance Transformation Imaging (RTI), the resulting workflow can rival traditional LiDAR-based methods. The difficulty, however, arises in the presentation of 3D data, which requires an enormous amount of storage and end-user sophistication. The proposed solution is to use game-engine technology and high-definition virtual tours to provide not only scholars but also the general public with an uncomplicated interface for interacting with detailed 3D epigraphic data. The site of Stobi, located near Gradsko in the Former Yugoslav Republic of Macedonia (FYROM), was used as a case study to demonstrate the effectiveness of RTI, photogrammetry, and virtual-tour imaging working in combination. Nine sets of inscriptions from the archaeological site were chosen to demonstrate the range of application of the techniques. The chosen marble, sandstone, and breccia inscriptions are representative of the varying levels of deterioration and degradation of the epigraphy at Stobi, whose rates of decay and resulting legibility vary. The selection includes both treated and untreated stones, as well as stones in situ and in storage, and consists of both Latin and Greek inscriptions, with content ranging from temple dedications to statue dedications. This combination of 3D modeling techniques presents a cost- and time-efficient solution both to increase the legibility of severely damaged stones and to digitally preserve the current state of the inscriptions.
Abstract:
To reach for a target, we must formulate a movement plan: a difference vector of the target position with respect to the starting hand position. While it is known that activity in the medial part of the intraparietal sulcus (mIPS) and the dorsal premotor cortex (PMd) reflects aspects of a kinematic plan for a reaching movement, it is unclear whether or how the two regions differ. We investigated the functional roles of the mIPS and PMd in the planning of reaching movements using high-definition transcranial direct current stimulation (HD-tDCS) and examined changes in horizontal endpoint error when participants were subjected to anodal and cathodal stimulation. The left mIPS and PMd were functionally localized with fMRI in each participant using an interleaved center-out pointing and saccade task, and mapped onto the scalp using Brainsight. We adopted a randomized, single-blind design and applied anodal and cathodal stimulation (2 mA for 20 min; 3 cm radius, 4x1 electrode placement) during 4 separate visits scheduled at least a week apart. Each participant performed 250 baseline, stimulation, and post-stimulation memory-guided reaches, starting from one of two initial hand positions (IHPs) to one of 4 briefly flashed targets (20 cm distant, 5 cm apart horizontally) while fixating on a straight-ahead cross located at the target line. Separate 2-way repeated-measures ANOVAs of the horizontal endpoint-error difference after cathodal tDCS at each stimulation site revealed a significant IHP-by-target-position interaction effect at the left mIPS, and significant IHP and target main effects at the left PMd. Behaviorally, these effects corresponded to IHP-dependent contractions after cathodal mIPS tDCS and IHP-independent contractions after cathodal PMd tDCS. These results suggest that the movement vector is not yet formed at the input level of mIPS but is encoded at the input of PMd.
These results also indicate that tDCS is a viable, useful method for investigating movement-planning properties through temporary perturbations of the system.
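The logic of the reported IHP-by-target interaction can be made concrete with synthetic numbers: the interaction asks whether the effect of initial hand position on endpoint error depends on which target was reached for, and for a 2x2 slice of the design it reduces to a difference of differences over cell means. The data below are simulated, not the study's; a full analysis would use a proper repeated-measures ANOVA over all four targets rather than this contrast:

```python
import numpy as np

# Synthetic sketch of the interaction contrast tested by the 2-way
# repeated-measures ANOVA described above (data and effect size invented).
rng = np.random.default_rng(0)

# errors[subject, ihp, target]: change in horizontal endpoint error (cm)
errors = rng.normal(0.0, 0.2, size=(10, 2, 2))
errors[:, 0, 0] += 0.5          # inject an IHP-dependent effect at target 0

cell_means = errors.mean(axis=0)                    # average over subjects
ihp_effect_per_target = cell_means[0] - cell_means[1]
# difference of differences: nonzero only if the IHP effect varies by target
interaction = ihp_effect_per_target[0] - ihp_effect_per_target[1]
```

An IHP-dependent contraction (as found after cathodal mIPS stimulation) produces a nonzero interaction; an IHP-independent contraction (as after cathodal PMd stimulation) moves only the main effects.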
Abstract:
This presentation was both an illustrated lecture and a published paper presented at the IMPACT 9 Conference, Printmaking in the Post-Print Age, Hangzhou, China, 2015. It was an extension of the exhibition catalogue essay for the Bluecoat Gallery exhibition of the same name. In 2014 I curated an exhibition, The Negligent Eye, at the Bluecoat Gallery in Liverpool, as the result of a longstanding interest in scanning and 3D printing and their role in changing the field of print within fine art practice. In the aftermath of curating the show I have continued to reflect on this material with reference to the writings of Vilém Flusser and Hito Steyerl. The work in the exhibition came from a wide range of artists of all generations, most of whom are not explicitly located within printmaking. While some of the work did not use any scanning technology at all, a shared fascination with the particular translating device of the systematizing ‘eye’ of a scanning digital video camera, flatbed or medical scanner was expressed by all the work in the show. Through writing this paper I aim to extend my own understanding of questions which arose from the juxtapositions of work and the production of the accompanying catalogue. The show developed in dialogue with curators Bryan Biggs and Sarah-Jane Parsons of the Bluecoat Gallery, who sent a series of questions about scanning to participating artists. In reflecting upon their answers I will extend the discussions begun in the process of this research. A kind of created attention deficit disorder seems to operate on us all today, pressing us to make and distribute images and information at speed. What value do ways of making that require slow looking or intensive material exploration have in this accelerated system? What model of the world is being constructed by the drive of simulated realities toward ever-greater resolution, so-called high definition?
How are our perceptions of reality being altered by the world-view presented in the smooth, colourful, ever-morphing simulations that surround us? The limitations of digital technology are often a starting point for artists to reflect on our relationship to real-world fragility. I will be looking at practices where tactility or dimensionality in a form of hard copy engages with these questions, using examples from the exhibition. Artists included in the show were: Cory Arcangel, Christiane Baumgartner, Thomas Bewick, Jyll Bradley, Maurice Carlin, Helen Chadwick, Susan Collins, Conroy/Sanderson, Nicky Coutts, Elizabeth Gossling, Beatrice Haines, Juneau Projects, Laura Maloney, Bob Matthews, London Fieldworks (with the participation of Gustav Metzger), Marilène Oliver, Flora Parrott, South Atlantic Souvenirs, Imogen Stidworthy, Jo Stockham, Wolfgang Tillmans, Alessa Tinne, Michael Wegerer, Rachel Whiteread, Jane and Louise Wilson. Keywords: scanning, art, technology, copy, materiality.
Abstract:
Master's dissertation submitted to obtain the degree of Master in Electrical Engineering, Automation and Industrial Electronics branch
Abstract:
In the medical field, images obtained from high-definition cameras and other medical imaging systems are an integral part of medical diagnosis. The analysis of these images is usually performed by physicians, who sometimes need to spend long hours reviewing the images before they are able to arrive at a diagnosis and decide on a course of action. In this dissertation we present a framework for computer-aided analysis of medical imagery via the use of an expert system. While this problem has been discussed before, we consider a system based on mobile devices. Since the release of the iPhone in 2007, the popularity of mobile devices has increased rapidly and our lives have become more reliant on them. This popularity, and the ease of developing mobile applications, has made it possible to perform on these devices many of the image analyses that previously required a personal computer. All of this has opened the door to a whole new set of possibilities and freed physicians from their reliance on desktop machines. The approach proposed in this dissertation aims to capitalize on these newfound opportunities by providing a framework for the analysis of medical images that physicians can use from their mobile devices, thus removing their reliance on desktop computers. We also provide an expert system to aid in the analysis and to advise on the selection of medical procedures. Finally, we enable other mobile applications to be developed by providing a generic mobile-application development framework that brings other applications into the mobile domain. In this dissertation we outline our work toward the development of the proposed methodology and the remaining work needed to solve the problem.
To make this difficult problem tractable, we divide it into three parts: the development of a user-interface modeling language and tooling, the creation of a game-development modeling language and tooling, and the development of a generic mobile-application framework. To make the problem more manageable, we narrow the initial scope to the hair-transplant and glaucoma domains.
Abstract:
Wideband high-linearity mixers for a double-conversion cable TV tuner are presented. The up-conversion mixer converts the 100 MHz to 1000 MHz input signal to an intermediate frequency (IF) above 1 GHz, and the down-conversion mixer converts it back down. Degeneration resistors are used to improve the linearity. The tuner is implemented in a 0.35 µm SiGe technology. The input power at the 1 dB compression point reaches +14.23 dBm. The lowest noise figure is 17.5 dB. The two mixers consume 103 mW from a 5 V supply.
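For readers unfamiliar with the logarithmic units in these figures, the quoted specs convert to linear quantities with standard dB arithmetic. This is a generic unit-conversion sketch (the helper names are mine): a +14.23 dBm input 1 dB compression point corresponds to roughly 26.5 mW of input power, and a 17.5 dB noise figure to a noise factor of about 56.

```python
# Standard dB/dBm conversions applied to the figures quoted above.
def dbm_to_mw(p_dbm):
    """Power in mW from power in dBm (0 dBm = 1 mW)."""
    return 10 ** (p_dbm / 10.0)

def db_to_linear(x_db):
    """Linear ratio from a value in dB."""
    return 10 ** (x_db / 10.0)

p1db_mw = dbm_to_mw(14.23)      # input P1dB ≈ 26.5 mW
nf_factor = db_to_linear(17.5)  # noise factor ≈ 56
```

Seen this way, the 103 mW total power consumption is only about four times the input power the mixers can handle at their compression point, which is what "high linearity" buys.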