28 results for metadata schemes
in Repositório Científico do Instituto Politécnico de Lisboa - Portugal
Abstract:
In MIMO systems, the antenna array configuration at the BS and MS has a large influence on the available channel capacity. In this paper, we first introduce a new Frequency Selective (FS) MIMO framework for macro-cells in a realistic urban environment. The MIMO channel is built over a previously developed directional channel model, which considers the terrain and clutter information in the cluster, line-of-sight and link loss calculations. Next, MIMO configuration characteristics are investigated in order to maximize capacity, mainly the number of antennas, the inter-antenna spacing and the impact of SNR. Channel and capacity simulation results are presented for the city of Lisbon, Portugal, using different antenna configurations. Two power allocation schemes are considered: uniform distribution and FS spatial water-filling. The results suggest optimized MIMO configurations given the antenna array size limitations, especially at the MS side.
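As a rough illustration of the two power allocation schemes compared above, the sketch below computes the capacity of a random narrowband MIMO channel under uniform allocation and under water-filling. The i.i.d. Rayleigh channel, the 4x4 array and the SNR values are illustrative placeholders, not the paper's directional channel model.

import numpy as np

def mimo_capacity(H, snr, waterfilling=False):
    """Capacity (bit/s/Hz) of a MIMO channel H at total SNR snr.

    Uniform allocation splits power equally over the transmit antennas;
    water-filling pours power into the strongest channel eigenmodes first.
    """
    nt = H.shape[1]
    g = np.linalg.svd(H, compute_uv=False) ** 2   # eigenmode gains
    if not waterfilling:
        return float(np.sum(np.log2(1.0 + snr * g / nt)))
    g = np.sort(g)[::-1]
    for k in range(len(g), 0, -1):                # try k active modes
        mu = (snr + np.sum(1.0 / g[:k])) / k      # water level
        p = mu - 1.0 / g[:k]                      # per-mode powers
        if p[-1] >= 0:                            # weakest active mode still non-negative
            return float(np.sum(np.log2(1.0 + p * g[:k])))
    return 0.0

rng = np.random.default_rng(0)
H = (rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))) / np.sqrt(2)
for snr_db in (0, 10, 20):
    snr = 10 ** (snr_db / 10)
    print(snr_db, "dB:",
          round(mimo_capacity(H, snr), 2), "bit/s/Hz uniform vs",
          round(mimo_capacity(H, snr, waterfilling=True), 2), "bit/s/Hz water-filling")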
Abstract:
Nowadays there is more and more audiovisual information, and multimedia streams or files can be shared easily and efficiently. However, the tampering of video content, such as financial information, news, or videoconference sessions used in court, can have serious consequences given the importance of this type of information. Hence the need to ensure the authenticity and integrity of audiovisual information. This dissertation proposes an authentication system for H.264/Advanced Video Coding (AVC) video, called Autenticação de Fluxos utilizando Projecções Aleatórias (AFPA, Stream Authentication using Random Projections), whose authentication procedures operate at the level of each video frame. This scheme allows a more flexible kind of authentication, since a maximum limit on the modifications between two frames can be defined. Authentication relies on a new image authentication technique that combines random projections with an error correction mechanism applied to the data, so each video frame can be authenticated with a reduced set of parity bits of the corresponding random projection. Since video information is typically carried over unreliable protocols, it may suffer packet losses. To reduce the effect of packet losses on video quality and on the authentication rate, Unequal Error Protection (UEP) is used. For validation and comparison of results, a classical system was implemented that authenticates video streams in the usual way, i.e., using digital signatures and hash codes. Both schemes were evaluated with respect to the introduced overhead and the authentication rate. The results show that the AFPA system, using high-quality video, reduces the authentication overhead fourfold compared with the scheme based on digital signatures and hash codes.
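A minimal sketch of the core mechanism, assuming a keyed Gaussian projection binarized by sign; a plain Hamming-distance threshold stands in for the parity-bit/error-correction step of the actual AFPA scheme, and the frame size, signature length and tolerance are placeholder choices.

import numpy as np

def frame_signature(frame, key, n_bits=256):
    # Keyed random projection of the frame, binarized by sign;
    # the seed plays the role of the shared secret.
    rng = np.random.default_rng(key)
    x = frame.astype(np.float64).ravel()
    P = rng.standard_normal((n_bits, x.size)) / np.sqrt(x.size)
    return (P @ x) >= 0.0

def authenticate(frame, reference_sig, key, max_flips=16):
    # Accept the frame if its signature stays within max_flips bits of
    # the reference, so a bounded amount of modification is tolerated.
    sig = frame_signature(frame, key, n_bits=reference_sig.size)
    return int(np.sum(sig != reference_sig)) <= max_flips

rng = np.random.default_rng(1)
frame = rng.integers(0, 256, size=(72, 88))
sig = frame_signature(frame, key=42)
tampered = frame.copy()
tampered[:36] = 255                          # heavy local tampering
print(authenticate(frame, sig, key=42))      # unmodified frame: True
print(authenticate(tampered, sig, key=42))   # tampered frame: False (with high probability)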
Abstract:
Radio over Fiber (RoF) is a technology that allows the transmission of high-bandwidth radio signals, provided by optical fiber, while simultaneously keeping the mobility characteristic of mobile communication networks. This master's dissertation aims to study, simulate and compare RoF systems with direct modulation and with external modulation, using a WiMAX signal, as it is a recent technology with potential for future use. Three types of external modulators were evaluated; the EA (Electro-Absorption) modulator yields the best EVM and SNR values because it has the lowest fiber insertion losses. From the comparison between the direct modulation scheme and the external modulation scheme, it can be concluded that for lower bandwidths direct modulation is more efficient than external modulation, but as the bandwidth increases external modulation clearly performs better. This is because direct modulation produces more chirp than external modulation, and chirp limits the bandwidth and the fiber length. To improve the performance of the direct modulation system, a dispersion-compensating fiber was introduced. It was concluded that dispersion-compensating fiber is a good solution when transmitting high-bandwidth signals in direct modulation schemes.
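Since the modulators above are ranked by EVM, the sketch below shows the standard EVM computation on a received constellation; the 16-QAM symbols and the additive noise standing in for link distortion are assumptions, not the dissertation's WiMAX simulation chain.

import numpy as np

def evm_percent(received, reference):
    # EVM: RMS error vector magnitude normalised by the RMS magnitude
    # of the reference constellation, expressed in percent.
    err = received - reference
    return 100.0 * np.sqrt(np.mean(np.abs(err) ** 2) /
                           np.mean(np.abs(reference) ** 2))

rng = np.random.default_rng(0)
levels = np.array([-3.0, -1.0, 1.0, 3.0])
ref = rng.choice(levels, 4096) + 1j * rng.choice(levels, 4096)   # 16-QAM symbols
rx = ref + 0.2 * (rng.standard_normal(4096) + 1j * rng.standard_normal(4096))
print(f"EVM = {evm_percent(rx, ref):.2f} %")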
Abstract:
Recently, several distributed video coding (DVC) solutions based on the distributed source coding (DSC) paradigm have appeared in the literature. Wyner-Ziv (WZ) video coding, a particular case of DVC where side information is made available at the decoder, enables a flexible distribution of the computational complexity between the encoder and decoder, promising to fulfill novel requirements from applications such as video surveillance, sensor networks and mobile camera phones. The quality of the side information at the decoder has a critical role in determining the WZ video coding rate-distortion (RD) performance, notably in raising it to a level as close as possible to the RD performance of standard predictive video coding schemes. Towards this target, efficient motion search algorithms for powerful frame interpolation are much needed at the decoder. In this paper, the RD performance of a Wyner-Ziv video codec is improved by using novel, advanced motion-compensated frame interpolation techniques to generate the side information. The development of this type of side information estimator is a difficult problem in WZ video coding, especially because only some decoded reference frames are available at the decoder. Based on the regularization of the motion field, novel side information creation techniques are proposed in this paper, along with a new frame interpolation framework able to generate higher quality side information at the decoder. To illustrate the RD performance improvements, this novel side information creation framework has been integrated into a transform-domain turbo-coding-based Wyner-Ziv video codec. Experimental results show that the novel side information creation solution leads to better RD performance than available state-of-the-art side information estimators, with improvements up to 2 dB; moreover, it outperforms H.264/AVC Intra by up to 3 dB with a lower encoding complexity.
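A minimal sketch of bidirectional motion-compensated frame interpolation, the building block the paper refines: for each block of the missing frame, a symmetric motion vector is searched between the two decoded reference frames and the displaced blocks are averaged. The block size, search range and SAD cost are placeholder choices, and the motion-field regularization proposed in the paper is omitted.

import numpy as np

def interpolate_frame(prev, nxt, block=8, search=4):
    h, w = prev.shape
    out = np.zeros((h, w))
    for by in range(0, h, block):
        for bx in range(0, w, block):
            best, best_mv = np.inf, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y0, x0 = by - dy, bx - dx      # block in previous frame
                    y1, x1 = by + dy, bx + dx      # symmetric block in next frame
                    if not (0 <= y0 <= h - block and 0 <= y1 <= h - block and
                            0 <= x0 <= w - block and 0 <= x1 <= w - block):
                        continue
                    a = prev[y0:y0 + block, x0:x0 + block].astype(float)
                    b = nxt[y1:y1 + block, x1:x1 + block].astype(float)
                    cost = np.abs(a - b).sum()     # SAD matching cost
                    if cost < best:
                        best, best_mv = cost, (dy, dx)
            dy, dx = best_mv
            a = prev[by - dy:by - dy + block, bx - dx:bx - dx + block]
            b = nxt[by + dy:by + dy + block, bx + dx:bx + dx + block]
            out[by:by + block, bx:bx + block] = 0.5 * (a + b)
    return out

rng = np.random.default_rng(0)
f0 = rng.integers(0, 256, (32, 32)).astype(float)
f2 = np.roll(f0, 2, axis=1)                        # pure horizontal motion
side_info = interpolate_frame(f0, f2)              # estimate of the middle frame
print(np.abs(side_info - np.roll(f0, 1, axis=1)).mean())   # small except at edge blocks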
Abstract:
As teachers, we are challenged every day to solve pedagogical problems, and we have to fight for our students' attention in a media-rich world. I will talk about how we use ICT in Initial Teacher Training and give you some insight into what we are doing. The most important benefit of using ICT in education is that it makes us reflect on our practice. There is no doubt that our classrooms need to be updated, but we need to be critical about every piece of hardware, software or service that we bring into them. It is not only because our budgets are tight, but also because e-learning is primarily about learning, not technology. Therefore, we need to have the knowledge and skills required to act in different situations and choose the best tool for the job. Not all subjects are suitable for e-learning, nor do all students have the skills to organize their own study time. Also, not all teachers want to spend time programming or learning about instructional design and metadata. The promised land in which easy-to-use authoring tools (e.g. eXe and Reload) would turn all teachers into authors of Learning Objects, shared in repositories, never materialized, just as HyperCard, Toolbook and others failed before. We need to know a little bit about many different technologies so we can mobilize this knowledge when a situation requires it: integrate e-learning technologies in the classroom, not a flipped classroom, just simple tools. Lecture capture, mobile phones and smartphones, pocket-size camcorders, VoIP, VLEs, live video broadcast, screen sharing, free services for collaborative work, and tools to save, share and sync your files. Do not feel pressured to use everything, every time. Just because we have a whiteboard does not mean we have to make it the centre of the classroom. Start from where you are, with your preferred subject and the tools you master. Then go slowly and try some new tool in a non-formal situation with just one or two students. And you don't need to be alone: subscribe to a mailing list and share your thoughts with other teachers in a dedicated forum, even better if both are part of a community of practice, and share resources. We did that for music teachers and it was a success, reaching 1,000 members in two years. Just do it.
Abstract:
Plain radiography still accounts for the vast majority of imaging studies performed at multiple clinical instances. Digital detectors are now prominent in many imaging facilities and are the main driving force towards filmless environments. There has been a shift in the working paradigm due to the functional separation of acquisition, visualization, and storage, with a deep impact on imaging workflows. Moreover, with direct digital detectors, images are made available almost immediately. Digital radiology is now completely integrated into Picture Archiving and Communication System (PACS) environments governed by the Digital Imaging and Communications in Medicine (DICOM) standard. In this chapter, a brief overview of PACS architectures and components is presented, together with a necessarily brief account of the DICOM standard. Special focus is given to the DICOM digital radiology objects and to how specific attributes may now be used to improve and enlarge the metadata repository associated with image data. Regular scrutiny of the metadata repository may serve as a valuable tool for improved, cost-effective, and multidimensional quality control procedures.
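As a sketch of the kind of metadata harvesting the chapter advocates, the snippet below walks a directory of DICOM files with the pydicom library and dumps a few header attributes to CSV; the attribute list and the paths are illustrative choices, not the chapter's.

from pathlib import Path
import csv
from pydicom import dcmread   # pip install pydicom

# A few radiology attributes worth collecting for quality control;
# this selection is an assumption made for the example.
FIELDS = ["SOPInstanceUID", "Modality", "StudyDate", "BodyPartExamined",
          "KVP", "ExposureTime", "Manufacturer"]

def harvest(dicom_dir, out_csv):
    # Read headers only (stop_before_pixels) and append one CSV row
    # per file, leaving absent attributes empty.
    with open(out_csv, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(FIELDS)
        for path in Path(dicom_dir).rglob("*.dcm"):
            ds = dcmread(path, stop_before_pixels=True)
            writer.writerow([ds.get(name, "") for name in FIELDS])

# harvest("/data/cr_exams", "qc_metadata.csv")   # hypothetical paths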
Abstract:
Tribimaximal leptonic mixing is a mass-independent mixing scheme consistent with the present solar and atmospheric neutrino data. By conveniently decomposing the effective neutrino mass matrix associated with it, we derive generic predictions in terms of the parameters governing the neutrino masses. We extend this phenomenological analysis to other mass-independent mixing schemes that are related to the tribimaximal form by a unitary transformation. We classify models that produce tribimaximal leptonic mixing through the group structure of their family symmetries in order to point out that there is often a direct connection between the group structure and the phenomenological analysis. The type of seesaw mechanism responsible for neutrino masses plays a role here, as it restricts the choices of family representations and affects the viability of leptogenesis. We also present a recipe for generalizing a given tribimaximal model to an associated model with a different mass-independent mixing scheme, which preserves the connection between group structure and phenomenology of the original model. This procedure is explicitly illustrated by constructing toy models with the transpose tribimaximal, bimaximal, golden ratio, and hexagonal leptonic mixing patterns.
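For reference, the tribimaximal pattern discussed above is, in the basis where the charged-lepton mass matrix is diagonal,

U_{\mathrm{TBM}} =
\begin{pmatrix}
 \sqrt{2/3} & 1/\sqrt{3} & 0 \\
 -1/\sqrt{6} & 1/\sqrt{3} & -1/\sqrt{2} \\
 -1/\sqrt{6} & 1/\sqrt{3} & 1/\sqrt{2}
\end{pmatrix},

corresponding to \sin^2\theta_{12} = 1/3, \sin^2\theta_{23} = 1/2 and \theta_{13} = 0; in one common Majorana convention the effective neutrino mass matrix then decomposes as m_\nu = U_{\mathrm{TBM}}\,\mathrm{diag}(m_1, m_2, m_3)\,U_{\mathrm{TBM}}^{T}, which is the kind of decomposition the abstract exploits.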
Abstract:
Personal memories composed of digital pictures are very popular at the moment. To retrieve these media items, annotation is required. In recent years, several approaches have been proposed to overcome the image annotation problem. This paper presents our proposals to address this problem: automatic and semi-automatic learning methods for semantic concepts. The automatic method estimates semantic concepts from visual content, context metadata and audio information. The semi-automatic method is based on the results provided by a computer game. The paper describes both proposals and presents their evaluation.
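A minimal late-fusion sketch of how an automatic method of this kind could combine per-modality evidence; the concepts, scores and weights below are invented placeholders, and the paper's actual estimation models are not reproduced here.

import numpy as np

concepts = ["beach", "indoor", "party", "snow"]
# Hypothetical per-modality confidences from three dedicated classifiers.
scores = {
    "visual":  np.array([0.80, 0.10, 0.30, 0.05]),
    "context": np.array([0.60, 0.20, 0.10, 0.02]),   # e.g. GPS, date/season
    "audio":   np.array([0.40, 0.30, 0.70, 0.10]),   # e.g. crowd noise
}
weights = {"visual": 0.5, "context": 0.3, "audio": 0.2}   # assumed fusion weights

fused = sum(w * scores[m] for m, w in weights.items())
for concept, score in sorted(zip(concepts, fused), key=lambda t: -t[1]):
    print(f"{concept}: {score:.2f}")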
Abstract:
Dissertation presented to the Escola Superior de Educação de Lisboa for the degree of Master in Educational Sciences, specialization in Supervision in Education
Abstract:
Master's in Accounting
Abstract:
Low-density parity-check (LDPC) codes are nowadays one of the hottest topics in coding theory, notably due to their advantages in terms of bit error rate performance and low complexity. In order to exploit the potential of the Wyner-Ziv coding paradigm, practical distributed video coding (DVC) schemes should use powerful error-correcting codes with near-capacity performance. In this paper, new ways to design LDPC codes for the DVC paradigm are proposed and studied. The new LDPC solutions rely on merging parity-check nodes, which corresponds to reducing the number of rows of the parity-check matrix. This makes it possible to gracefully change the compression ratio of the source (a DCT coefficient bitplane) according to the correlation between the original and the side information. The proposed LDPC codes perform well over a wide range of source correlations and achieve better RD performance than the popular turbo codes.
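A minimal sketch of the node-merging idea under stated assumptions: merging two parity-check nodes amounts to replacing two rows of H by their mod-2 sum, so fewer syndrome bits are sent for a bitplane and the compression ratio increases. The toy matrix and the merge pairs are arbitrary, not the paper's code designs.

import numpy as np

def merge_checks(H, pairs):
    # Each (i, j) pair of parity-check rows is replaced by their XOR,
    # shrinking the syndrome sent for a source bitplane.
    H = H.copy()
    keep = np.ones(H.shape[0], dtype=bool)
    for i, j in pairs:
        H[i] = (H[i] + H[j]) % 2
        keep[j] = False
    return H[keep]

rng = np.random.default_rng(0)
H = (rng.random((8, 16)) < 0.25).astype(np.uint8)   # toy sparse parity-check matrix
x = rng.integers(0, 2, 16, dtype=np.uint8)          # source bitplane
H2 = merge_checks(H, [(0, 1), (2, 3)])
print(len(H @ x % 2), "->", len(H2 @ x % 2), "syndrome bits")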
Abstract:
Final Master's project for the degree of Master in Mechanical Engineering
Abstract:
The Schwinger proper-time method is an effective calculation method that is explicitly gauge invariant and nonperturbative. We make use of this method to investigate the radiatively induced Lorentz- and CPT-violating effects in quantum electrodynamics when an axial-vector interaction term is introduced in the fermionic sector. The induced Lorentz- and CPT-violating Chern-Simons term coincides with the one obtained using a covariant derivative expansion, but differs from the result usually obtained in other regularization schemes. A possible ambiguity in the approach is also discussed.
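For context, with the axial-vector term \mathcal{L}_b = b_\mu \bar{\psi}\gamma^\mu\gamma_5\psi added to the QED Lagrangian, the induced term referred to above has the Chern-Simons form

\mathcal{L}_{\mathrm{CS}} = \frac{c}{2}\, b_\mu\, \epsilon^{\mu\nu\alpha\beta} A_\nu \partial_\alpha A_\beta,

where the finite coefficient c is precisely the regularization-dependent quantity the abstract discusses: the proper-time calculation reproduces the covariant-derivative-expansion value, while other schemes yield different (or vanishing) values.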
Abstract:
Internship report presented to the Escola Superior de Educação de Lisboa for the degree of Master in Teaching of the 1st and 2nd Cycles of Basic Education
Abstract:
Dissertation for the degree of Master in Informatics and Computer Engineering