884 results for Digital video
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
Graduate Program in Agronomy (Energy in Agriculture) - FCA
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
Graduate Program in Geography - IGCE
Abstract:
Muscle fatigue is described as one of the many causes of injuries related to running practice. Therefore, the purpose of this study was to analyze the behavior of the amplitude (RMS) and median frequency (MF) of the EMG signal of the iliocostalis (CI), rectus femoris (RF), vastus lateralis (VL), vastus medialis (VM), biceps femoris (long head) (BFCL), tibialis anterior (TA) and gastrocnemius (lateral) (LNG) muscles of the right lower limb, as well as the behavior of the stride amplitude (AP) and stride frequency (FP) parameters, at different percentages of maximum speed during an incremental treadmill running protocol. Ten male athlete volunteers, aged between 18 and 30 years, with similar anthropometry and no history of lower limb injury, participated in this study. The protocol consisted of a treadmill test with an initial velocity of 10 km·h⁻¹ and increments of 1 km·h⁻¹ every three minutes, without rest intervals, until volitional exhaustion. Electromyographic and kinematic data were collected synchronously. The signals were obtained with a biological signal acquisition module (Telemyo 900 - Noraxon - USA) and software (Myoresearch - Noraxon - USA) set to a sampling frequency of 1000 Hz and a gain of 2000. The raw data were filtered with a 60 Hz notch filter and a 20-500 Hz band-pass filter (high pass at 20 Hz, low pass at 500 Hz). Images were captured with a digital video recorder (Panasonic NV-GS320), and image digitization and kinematic data collection were performed with the Peak Motus 9.0 software (ViconPeak). RMS and MF values were obtained from the last ten strides at each speed through a specific routine (Matlab). AP (m) and FP (strides/min) were likewise obtained from the last ten strides at each speed using specific software (Peak Motus 9.0). After verification of data normality (Shapiro-Wilk) and homogeneity (Levene), the comparison ... (Complete abstract: click electronic access below)
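For illustration, a minimal Python sketch of how the two EMG descriptors above (RMS amplitude and median frequency of a stride window) can be computed; the original Matlab routine is not available, so the signal below is synthetic and only the 1000 Hz sampling rate is taken from the text.

import numpy as np

FS = 1000  # sampling frequency in Hz, as reported for the acquisition setup

def emg_rms(window):
    """Root-mean-square amplitude of one EMG window."""
    return np.sqrt(np.mean(np.square(window)))

def emg_median_frequency(window, fs=FS):
    """Median frequency: the frequency that splits the power spectrum into
    two halves of equal energy."""
    freqs = np.fft.rfftfreq(len(window), d=1.0 / fs)
    power = np.abs(np.fft.rfft(window)) ** 2
    cumulative = np.cumsum(power)
    idx = np.searchsorted(cumulative, cumulative[-1] / 2.0)
    return freqs[idx]

# Usage on a synthetic 1 s window (placeholder for a real stride segment)
rng = np.random.default_rng(0)
window = rng.normal(size=FS)
print(emg_rms(window), emg_median_frequency(window))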
Abstract:
Contemporary society is going through a cultural revolution driven by information technology, which has led to a transformation in social relations and in the classroom. Trying to adjust to these changes and to the high school curriculum, teachers can use visual aids, namely videos, as teaching materials, and the YouTube site can be used to select this educational material. However, given the vast amount of videos available on the site, a careful selection is required. This undergraduate thesis therefore aims to carry out a qualitative analysis of the videos freely available on the YouTube site.
Abstract:
Graduate Program in Veterinary Surgery - FCAV
Abstract:
Graduate Program in Teaching for Basic Education - FC
Abstract:
The last decades have seen an unrivaled growth and diffusion of mobile telecommunications. Several standards have been developed for this purpose, from GSM mobile phone communications to WLAN IEEE 802.11, providing different services for the transmission of signals ranging from voice to high data rate digital communications and Digital Video Broadcasting (DVB). In this wide research and market field, this thesis focuses on Ultra Wideband (UWB) communications, an emerging technology for providing very high data rate transmissions over very short distances. In particular, the presented research deals with the circuit design of enabling blocks for MB-OFDM UWB CMOS single-chip transceivers, namely the frequency synthesizer and the transmission mixer and power amplifier. First we discuss three different models for the simulation of charge-pump phase-locked loops, namely the continuous-time s-domain and discrete-time z-domain approximations and the exact semi-analytical time-domain model. The limitations of the two approximated models are analyzed in terms of the error in the computed settling time as a function of loop parameters, deriving practical conditions under which the different models are reliable for fast settling PLLs up to fourth order. Besides, a phase noise analysis method based upon the time-domain model is introduced and compared to the results obtained by means of the s-domain model. We compare the three models over the simulation of a fast switching PLL to be integrated in a frequency synthesizer for WiMedia MB-OFDM UWB systems. In the second part, the theoretical analysis is applied to the design of a 60 mW 3.4-to-9.2 GHz 12-band frequency synthesizer for MB-OFDM UWB based on two wide-band PLLs. The design is presented and discussed up to layout level. A test chip has been implemented in TSMC 90 nm CMOS technology, and measured data are provided. The functionality of the circuit is proved and specifications are met with state-of-the-art area occupation and power consumption. The last part of the thesis deals with the design of a transmission mixer and a power amplifier for MB-OFDM UWB band group 1. The design has been carried out up to layout level in STMicroelectronics 65 nm CMOS technology. The main characteristics of the system are its wideband behavior (1.6 GHz of bandwidth) and its constant behavior over process parameters, temperature and supply voltage, thanks to the design of dedicated adaptive biasing circuits.
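As a rough illustration of the s-domain approximation mentioned above, the following Python sketch models a second-order type-II charge-pump PLL and estimates its settling time after a normalized frequency step; all component values are assumptions chosen for illustration and do not correspond to the synthesizer design described in the thesis.

import numpy as np
from scipy import signal

Icp = 100e-6                # charge-pump current [A] (assumed)
Kvco = 2 * np.pi * 500e6    # VCO gain [rad/s per V] (assumed)
N = 64                      # feedback divider (assumed)
R, C = 4.7e3, 100e-12       # series-RC loop filter (assumed)

Kd = Icp / (2 * np.pi)             # phase detector / charge pump gain [A/rad]
wn = np.sqrt(Kd * Kvco / (N * C))  # loop natural frequency [rad/s]
zeta = R * C * wn / 2.0            # damping factor

# Normalized closed-loop response to a frequency step (2nd-order type-II PLL)
H = signal.TransferFunction([2 * zeta * wn, wn**2], [1, 2 * zeta * wn, wn**2])
t, y = signal.step(H, T=np.linspace(0, 200 / wn, 20000))

tol = 1e-3                                      # settle to within 0.1 % of the step
outside = np.where(np.abs(y - 1.0) > tol)[0]
t_settle = t[min(outside[-1] + 1, len(t) - 1)] if outside.size else t[0]
print(f"fn = {wn / (2 * np.pi) / 1e3:.0f} kHz, zeta = {zeta:.2f}, settling ~ {t_settle * 1e6:.2f} us")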
Abstract:
The thesis deals with channel coding theory applied to the upper layers of the protocol stack of a communication link, and it is the outcome of four years of research activity. A specific aspect of this activity has been the continuous interaction between the natural curiosity of academic blue-sky research and the system-oriented design deriving from the collaboration with European industry in the framework of European funded research projects. In this dissertation, classical channel coding techniques, traditionally applied at the physical layer, find their application at the upper layers, where the encoding units (symbols) are packets of bits and not just single bits; such upper layer coding techniques are therefore usually referred to as packet layer coding. The rationale behind the adoption of packet layer techniques is that physical layer channel coding is a suitable countermeasure against small-scale fading, while it is less efficient against large-scale fading. This is mainly due to the limited time diversity, a consequence of keeping the physical layer interleaver at a reasonable size so as to avoid increasing the modem complexity and the latency of all services. Packet layer techniques, thanks to the longer codeword duration (each codeword is composed of several packets of bits), have intrinsically longer protection against long fading events. Furthermore, being implemented at the upper layer, packet layer techniques have the indisputable advantages of simpler implementation (very close to a software implementation) and of selective applicability to different services, thus enabling a better match with the service requirements (e.g. latency constraints). Packet layer coding has been widely recognized in recent communication standards as a viable and efficient coding solution: Digital Video Broadcasting standards, like DVB-H, DVB-SH, and DVB-RCS mobile, and 3GPP standards (MBMS) employ packet coding techniques working at layers higher than the physical one. In this framework, the aim of the research work has been the study of state-of-the-art coding techniques working at the upper layer, the performance evaluation of these techniques in realistic propagation scenarios, and the design of new coding schemes for upper layer applications. After a review of the most important packet layer codes, i.e. Reed-Solomon, LDPC and Fountain codes, the thesis focuses on the performance evaluation of ideal codes (i.e. Maximum Distance Separable codes) working at the upper layer. In particular, we analyze the performance of UL-FEC techniques in Land Mobile Satellite channels. We derive an analytical framework which is a useful system design tool, allowing the performance of the upper layer decoder to be predicted. We also analyze a system in which upper layer and physical layer codes work together, and we derive the optimal splitting of redundancy when a frequency non-selective, slowly varying fading channel is taken into account. The whole analysis is supported and validated through computer simulation. In the last part of the dissertation, we propose LDPC Convolutional Codes (LDPCCC) as a possible coding scheme for future UL-FEC applications. Since one of the main drawbacks related to the adoption of packet layer codes is the large decoding latency, we introduce a latency-constrained decoder for LDPCCC (called the windowed erasure decoder). We analyze the performance of state-of-the-art LDPCCC when our decoder is adopted.
Finally, we propose a design rule which allows performance and latency to be traded off.
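As a simple reference point for the ideal (Maximum Distance Separable) codes discussed above, the Python sketch below computes the residual decoding-failure probability of an (n, k) MDS packet-layer code over an i.i.d. packet-erasure channel; the i.i.d. assumption is only illustrative, since the thesis evaluates such codes over correlated Land Mobile Satellite channels.

from math import comb

def mds_failure_probability(n, k, p):
    """An MDS code recovers the source iff at least k of the n packets arrive,
    so decoding fails when more than n - k packets are erased (erasure prob. p)."""
    return sum(comb(n, e) * p**e * (1 - p)**(n - e) for e in range(n - k + 1, n + 1))

# Usage: 30% redundancy against 10% and 35% packet loss
print(mds_failure_probability(n=100, k=70, p=0.10))  # well protected
print(mds_failure_probability(n=100, k=70, p=0.35))  # loss beyond the redundancy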
Abstract:
This work was carried out by the author during his PhD course in Electronics, Computer Science and Telecommunications at the University of Bologna, Faculty of Engineering, Italy. The thesis addresses important channel estimation aspects in wideband wireless communication systems, such as echo cancellation in digital video broadcasting systems and pilot-aided channel estimation through an innovative pilot design in a Multi-Cell Multi-User MIMO-OFDM network. All the documentation reported here summarizes years of work under the supervision of Prof. Oreste Andrisano, coordinator of the Wireless Communication Laboratory - WiLab, in Bologna. All the instrumentation used for the characterization of the telecommunication systems belongs to CNR (National Research Council), CNIT (National Inter-University Consortium for Telecommunications), and DEIS (Dept. of Electronics, Computer Science, and Systems). From November 2009 to May 2010, the author worked abroad in collaboration with DOCOMO - Communications Laboratories Europe GmbH (DOCOMO Euro-Labs) in Munich, Germany, in the Wireless Technologies Research Group. Several scientific papers, submitted and/or published in IEEE journals and conferences, have been produced by the author.
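As a minimal illustration of pilot-aided channel estimation in an OFDM link (the multi-cell, multi-user pilot design of the thesis is not reproduced here), the following Python sketch performs a least-squares estimate at comb-type pilot subcarriers and interpolates over the remaining subcarriers; the pilot spacing, channel, and noise level are all assumptions.

import numpy as np

rng = np.random.default_rng(1)
n_sc = 64                          # subcarriers (assumed)
pilot_idx = np.arange(0, n_sc, 8)  # comb-type pilots every 8th subcarrier (assumed)
pilots = np.ones(pilot_idx.size, dtype=complex)  # known pilot symbols

# Synthetic frequency-selective channel and received pilot observations
h_time = (rng.normal(size=4) + 1j * rng.normal(size=4)) / np.sqrt(8)
H_true = np.fft.fft(h_time, n_sc)
noise = (rng.normal(size=pilot_idx.size) + 1j * rng.normal(size=pilot_idx.size)) * 0.05
Y_pilots = H_true[pilot_idx] * pilots + noise

# Least-squares estimate at the pilots, then linear interpolation over all subcarriers
H_ls = Y_pilots / pilots
H_hat = np.interp(np.arange(n_sc), pilot_idx, H_ls.real) \
      + 1j * np.interp(np.arange(n_sc), pilot_idx, H_ls.imag)

print("mean estimation error:", np.mean(np.abs(H_hat - H_true)))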
Abstract:
IEF protein binary separations were performed in a 12-μL drop suspended between two palladium electrodes, using pH gradients created by electrolysis of simple buffers at low voltages (1.5-5 V). The dynamics of pH gradient formation and protein separation were investigated by computer simulation and experimentally via digital video microscope imaging in the presence and absence of a pH indicator solution. Albumin, ferritin, myoglobin, and cytochrome c were used as model proteins. A drop containing 2.4 μg of each protein was applied, electrophoresed, and allowed to evaporate until it split into two fractions, which were recovered by rinsing the electrodes with a few microliters of buffer. Analysis by gel electrophoresis revealed that the anode and cathode fractions were depleted of high-pI and low-pI proteins, respectively, whereas proteins with intermediate pI values were recovered in both fractions. Comparable data were obtained with diluted bovine serum fortified with myoglobin and cytochrome c.
Abstract:
Television and movie images have been altered ever since it became technically possible. Nowadays, embedding advertisements or incorporating text and graphics in TV scenes is common practice, but these elements cannot be considered an integral part of the scene. The introduction of new services for interactive augmented television is discussed in this paper. We analyse the main aspects related to the whole chain of augmented reality production. Interactivity is one of the most important added values of digital television: this paper aims to break the model where all TV viewers receive the same final image. Thus, we introduce and discuss the new concept of interactive augmented television, i.e. real-time composition of video and computer graphics - e.g. a real scene and freely selectable images or spatially rendered objects - edited and customized by the end user within the context of the user's set-top box and TV receiver.
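A minimal sketch, assuming plain NumPy arrays for frames, of the per-pixel "over" compositing that underlies the real-time combination of video and computer graphics described above; an actual set-top box would apply this per decoded frame, typically in hardware.

import numpy as np

def composite(frame, overlay_rgb, overlay_alpha):
    """Standard 'over' operator: out = alpha * overlay + (1 - alpha) * frame."""
    alpha = overlay_alpha[..., None].astype(np.float32)  # HxWx1, values in [0, 1]
    out = alpha * overlay_rgb.astype(np.float32) + (1.0 - alpha) * frame.astype(np.float32)
    return out.astype(np.uint8)

# Usage: a semi-transparent white box on a grey 720p frame
frame = np.full((720, 1280, 3), 64, dtype=np.uint8)
overlay = np.zeros_like(frame); overlay[100:200, 100:400] = 255
alpha = np.zeros((720, 1280), dtype=np.float32); alpha[100:200, 100:400] = 0.6
print(composite(frame, overlay, alpha)[150, 200])  # blended pixel value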
Abstract:
We investigated the effects of pH on movement behaviors of the harmful-bloom-forming raphidophyte Heterosigma akashiwo. Motility parameters from >8000 swimming tracks of individual cells were quantified using 3D digital video analysis over a 6-h period in 3 pH treatments reflecting marine carbonate chemistry of the pre-industrial era, the present day, and the year 2100. Movement behaviors were investigated under two acclimation-to-target-pH conditions: instantaneous exposure and acclimation of cells for at least 11 generations. Cell motility was not impaired by exposure to elevated PCO2 (i.e., low pH) conditions, but there were significant behavioral responses. Irrespective of acclimation condition, lower pH significantly increased downward velocity and the frequency of downward swimming cells (p < 0.001). Rapid exposure to lower pH resulted in 9% faster downward vertical velocity and up to 19% more cells swimming downwards (p < 0.001). Compared to pH-shock experiments, pre-acclimation of cells to the target pH resulted in ~30% faster swimming speed and up to 46% faster downward velocities (all p < 0.001). The effect of year 2100 PCO2 levels on population diffusivity in pre-acclimated cultures was >2-fold greater than in pH-shock treatments (2.2 × 10⁵ µm²/s vs. 8.4 × 10⁴ µm²/s). Predictions from an advection-diffusion model suggest that as PCO2 increased, the fraction of the population aggregated at the surface declined and the population moved deeper in the water column. Enhanced downward swimming of H. akashiwo at low pH suggests that these behavioral responses to elevated PCO2 could reduce the likelihood of dense surface slick formation through reductions in light exposure or growth-independent surface aggregation. We hypothesize that the HAB alga's response to higher PCO2 may exploit the signaling function of high PCO2 as an indicator of net heterotrophy in the system, and thus of high predation rates or nutrient depletion.
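For intuition only, a small Python sketch of the steady-state profile implied by a 1-D advection-diffusion balance with reflective boundaries, showing how a weaker net upward swimming velocity (or a larger diffusivity) lowers the fraction of cells aggregated near the surface; the parameter values are illustrative and not fitted to the experiments above.

from math import exp

def surface_fraction(w_up, D, z_surface, depth):
    """Fraction of a population within z_surface of the top of a column of the
    given depth, for net upward swimming speed w_up and diffusivity D, using the
    steady-state profile C(z) ~ exp(-w_up * z / D); units must be consistent
    (e.g. um, um/s, um^2/s)."""
    k = w_up / D
    return (1.0 - exp(-k * z_surface)) / (1.0 - exp(-k * depth))

# Same diffusivity, weaker net upward motility (more cells swimming downward)
print(surface_fraction(w_up=80.0, D=8.4e4, z_surface=5e3, depth=5e4))  # ~0.99
print(surface_fraction(w_up=40.0, D=8.4e4, z_surface=5e3, depth=5e4))  # ~0.91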
Abstract:
Since the beginning of digital video coding, both the uncompressed video signal fed to the encoder and the decompressed output signal of the decoder, independently of their resolution, chroma subsampling scheme, etc., have always used 8 bits to represent each sample. In the same way, video coding standards require encoders to work internally with 8 bits of precision when operating on samples that have not yet been transformed to the frequency domain. However, the H.264 standard, widely used today, allows coding video with more than 8 bits per sample in some of its professionally oriented profiles. When these profiles are used, the operations on samples still in the spatial domain are performed with the same precision as the bit depth of the video fed to the encoder. This increase in internal precision has the potential to allow more precise predictions, reducing the residual to be coded and increasing coding efficiency for a given bitrate. The goal of this final degree project is to study, using the objective visual quality metrics PSNR (Peak Signal to Noise Ratio) and SSIM (Structural Similarity), the effect on coding efficiency and performance of working with a 10-bit H.264 coding/decoding chain compared to a traditional 8-bit chain.
To this end, the open source x264 encoder is used, which is able to encode video with 8 and 10 bits per sample using the High, High 10, High 4:2:2 and High 4:4:4 Predictive profiles of the H.264 standard. Given the lack of adequate tools for computing PSNR and SSIM values for video with more than 8 bits per sample and chroma subsampling schemes other than 4:2:0, an analysis application written in the C programming language is also developed as part of this project, capable of computing both metrics from two uncompressed video files in the YUV or Y4M format.
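As a sketch of the kind of metric such an analysis tool needs to support (the actual C application is not reproduced here), the following Python function computes PSNR for planes with an arbitrary bit depth; the synthetic 10-bit luma planes in the usage example are assumptions, and YUV/Y4M parsing is omitted.

import numpy as np

def psnr(plane_a, plane_b, bit_depth=10):
    """PSNR in dB between two planes whose samples span [0, 2**bit_depth - 1]."""
    a = plane_a.astype(np.float64)
    b = plane_b.astype(np.float64)
    mse = np.mean((a - b) ** 2)
    if mse == 0:
        return float("inf")
    peak = (1 << bit_depth) - 1
    return 10.0 * np.log10(peak**2 / mse)

# Usage: two synthetic 10-bit luma planes differing by small rounding noise
rng = np.random.default_rng(2)
ref = rng.integers(0, 1024, size=(720, 1280), dtype=np.uint16)
deg = np.clip(ref + rng.integers(-2, 3, size=ref.shape), 0, 1023).astype(np.uint16)
print(f"{psnr(ref, deg, bit_depth=10):.2f} dB")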