973 results for High definition


Relevance:

60.00%

Publisher:

Abstract:

Large Display Arrays (LDAs) use Light Emitting Diodes (LEDs) to inform a viewing audience. A matrix of individually driven LEDs allows the display area to present text, images and video. LDAs have undergone rapid development over the past 10 years in both modular and semi-flexible formats. This thesis critically analyses the communication architecture and processor functionality of current LDAs and presents an alternative: Scalable Flexible Large Display Arrays (SFLDAs). SFLDAs are more adaptable to a variety of applications because of enhancements in scalability and flexibility. Scalability is the ability to configure SFLDAs from 0.8 m² to 200 m². Flexibility is increased functionality within the processors to handle changes in configuration, together with a communication architecture that standardises two-way communication throughout the SFLDA. Common video platforms such as Digital Visual Interface (DVI), Serial Digital Interface (SDI) and High Definition Multimedia Interface (HDMI) are considered as candidate communication architectures for SFLDAs, as are modulation, fibre optics, capacitive coupling and Ethernet. From an analysis of these architectures, Ethernet was identified as the best solution. The use of Ethernet as the communication architecture in SFLDAs means that both hardware and software modules are capable of interfacing to the SFLDAs. The Video to Ethernet Processor Unit (VEPU), Scoreboard, Image and Control Software (SICS) and Ethernet to LED Processor Unit (ELPU) were developed as the key components in designing and implementing the first SFLDA. Data throughput and spectrophotometer tests were used to measure the effectiveness of Ethernet within the SFLDA constructs. Testing and analysis showed that Ethernet satisfactorily met the requirements of SFLDAs.
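
As a purely illustrative sketch of the kind of video-to-Ethernet transfer implied by the VEPU/ELPU pairing, the snippet below splits a frame into row tiles and sends each tile as a UDP datagram with a small positional header. The packet layout, address, port and tile size are invented for illustration; the thesis does not specify them here.

```python
# Hypothetical sketch only: split an RGB frame into row tiles and send each
# tile as a UDP datagram with a (y, x, rows, cols) header. The address, port,
# header layout and tile size are invented, not taken from the thesis.
import socket
import struct
import numpy as np

def send_frame(frame, dest=("192.168.1.50", 5005), tile_rows=2):
    """frame: H x W x 3 uint8 RGB array destined for an LED panel."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    h, w, _ = frame.shape
    for y0 in range(0, h, tile_rows):
        tile = frame[y0:y0 + tile_rows]
        header = struct.pack("!HHHH", y0, 0, tile.shape[0], w)
        sock.sendto(header + tile.tobytes(), dest)   # one tile per datagram
    sock.close()

send_frame(np.zeros((64, 128, 3), dtype=np.uint8))   # blank 128x64 test frame
```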

Relevance:

60.00%

Publisher:

Abstract:

High-throughput DNA sequencing (HTS) instruments today are capable of generating millions of sequencing reads in a short period of time, which presents a serious challenge to current bioinformatics pipelines: processing such an enormous amount of data in a fast and economical fashion. Modern graphics cards are powerful processing units that contain hundreds of scalar processors operating in parallel to render high-definition graphics in real time. It is this computational capability that we propose to harness to accelerate some of the time-consuming steps in analyzing data generated by HTS instruments. We have developed BarraCUDA, a novel sequence mapping software package that utilizes the parallelism of NVIDIA CUDA graphics cards to map sequencing reads to particular locations on a reference genome. While delivering a similar mapping fidelity to other mainstream programs, BarraCUDA is an order of magnitude faster in mapping throughput than its CPU counterparts. The software can also use multiple CUDA devices in parallel to further increase mapping throughput. BarraCUDA is designed to take advantage of GPU parallelism to accelerate the mapping of the millions of sequencing reads generated by HTS instruments. By doing this we could, at least in part, streamline the current bioinformatics pipeline so that the wider scientific community can benefit from the sequencing technology. BarraCUDA is currently available at http://seqbarracuda.sf.net.
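
As a toy illustration of the thread-per-read GPU parallelism described above, the sketch below uses Numba's CUDA bindings to give each read its own GPU thread and perform brute-force exact matching against a numerically encoded reference. This is not BarraCUDA's actual algorithm; the read length, array sizes and base encoding are arbitrary choices.

```python
# Toy illustration of one-GPU-thread-per-read mapping via brute-force exact
# matching. Bases are encoded A,C,G,T -> 0..3. Not BarraCUDA's algorithm.
import numpy as np
from numba import cuda

@cuda.jit
def map_reads(reference, reads, read_len, hits):
    i = cuda.grid(1)                      # one thread per read
    if i >= reads.shape[0]:
        return
    hits[i] = -1
    for pos in range(reference.shape[0] - read_len + 1):
        match = True
        for j in range(read_len):
            if reads[i, j] != reference[pos + j]:
                match = False
                break
        if match:
            hits[i] = pos                 # record the first exact hit
            return

rng = np.random.default_rng(0)
ref = rng.integers(0, 4, size=100_000, dtype=np.uint8)
reads = np.stack([ref[k:k + 36] for k in range(0, 3600, 36)])   # 100 reads
d_ref, d_reads = cuda.to_device(ref), cuda.to_device(reads)
d_hits = cuda.device_array(reads.shape[0], dtype=np.int64)
threads = 128
blocks = (reads.shape[0] + threads - 1) // threads
map_reads[blocks, threads](d_ref, d_reads, 36, d_hits)
print(d_hits.copy_to_host()[:5])          # reference positions of first reads
```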

Relevance:

60.00%

Publisher:

Abstract:

Evaluating the mechanical properties of rock masses is the basis of rock engineering design and construction, and it has a great influence on the safety and cost of rock projects. The need for such evaluation is an inevitable consequence of new engineering activities in rock, including high-rise buildings, long-span bridges, complex underground installations and hydraulic projects. During construction, many engineering accidents have occurred, causing great harm to people, and investigations show that many failures are due to the choice of improper mechanical properties. The inability to assign proper properties has become one of the major obstacles to theoretical analysis and numerical simulation, so selecting the properties reasonably and effectively is very significant for the planning, design and construction of rock engineering works. A combined approach based on site investigation, theoretical analysis, model tests, numerical tests and back analysis by artificial neural network is used to determine and optimise the mechanical properties for engineering design. The following outcomes are obtained:

(1) Mapping of the rock mass structure. Detailed geological investigation is the heart of fine structure description. Based on statistical windows, geological sketches and digital photography, a new method for in-situ mapping of rock mass fine structure is developed. It has already been put into practice and well received at the Baihetan Hydropower Station.

(2) Theoretical analysis of rock mass containing intermittent joints. The shear strength mechanisms of joints and rock bridges are analysed respectively, and the multiple failure modes under different stress conditions are summarised and supplemented. Then, by introducing a deformation compatibility equation in the normal direction, the direct shear strength and compression-shear strength formulations for coplanar intermittent joints, as well as the compression-shear strength formulation for stepped intermittent joints, are derived. So that the derived formulations can be applied conveniently in real projects, a relationship between these formulations and the Mohr-Coulomb criterion is established.

(3) Model tests of rock mass containing intermittent joints. Model tests are used to study the mechanical contribution of joints to rock masses. The failure modes of rock mass containing intermittent joints are summarised from the tests: six typical failure modes are found, and brittle failure is the dominant mode. The evolution of shear stress, shear displacement, normal stress and normal displacement is monitored using a rigid servo test machine, and the deformation and failure characteristics during loading are analysed. According to the model tests, the failure modes depend strongly on joint distribution, connectivity and stress state. A comparative analysis of the complete stress-strain curves reveals different failure development stages in intact rock, rock mass with through-going joints and rock mass with intermittent joints. The stress-strain curve of intact rock shows four typical stages: shear contraction, linear elastic, failure and residual strength. The rock mass with through-going joints shows three typical stages: linear elastic, transition and sliding failure. Correspondingly, five typical stages are found in the rock mass with intermittent joints: linear elastic, joint sliding, steady post-crack growth, joint coalescence failure and residual strength. Strength analysis shows that the failure envelopes of intact rock and through-going jointed rock mass form the upper and lower bounds respectively; the strength of intermittent jointed rock mass can be evaluated by narrowing this envelope band through geo-mechanical analysis.

(4) Numerical tests of rock mass. Two methods are developed and introduced: the distinct element method (DEM) based on in-situ geological mapping, and realistic failure process analysis (RFPA) based on high-definition digital imaging. Their operation and analysis results are demonstrated in detail through studies of rock mass parameters based on numerical tests at the Jinping First Stage Hydropower Station and the Baihetan Hydropower Station. Their advantages and disadvantages are compared and their respective fields of application identified.

(5) Intelligent evaluation based on artificial neural networks (ANN). The characteristics of both ANNs and rock mass parameter evaluation are discussed and summarised; ANNs show strong potential for application in this field. Intelligent evaluation of mechanical parameters at the Jinping First Stage Hydropower Station is taken as an example to demonstrate the analysis process, and five practical issues are discussed in detail: sample selection, network design, initial value selection, learning rate and expected error.
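
A minimal sketch of the ANN back analysis described in outcome (5), assuming the network is trained on samples from numerical tests that relate monitored responses to mechanical parameters and is then applied to field measurements. All feature names, network sizes and numbers below are invented for illustration; they are not the values used at the Jinping or Baihetan projects.

```python
# Minimal sketch of ANN-based back analysis: train a small network on
# (monitored response -> mechanical parameter) samples from numerical tests,
# then invert hypothetical field measurements. All values are illustrative.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
displacements = rng.uniform(0.5, 5.0, size=(200, 3))    # monitored responses (mm)
parameters = np.column_stack([                          # e.g. E (GPa), c (MPa), phi (deg)
    30.0 / displacements[:, 0],
    2.0 / displacements[:, 1],
    45.0 - 3.0 * displacements[:, 2],
])

net = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=1)
net.fit(displacements, parameters)

field_measurement = np.array([[1.2, 0.8, 2.5]])         # hypothetical monitoring data
print(net.predict(field_measurement))                   # back-analysed E, c, phi
```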

Relevance:

60.00%

Publisher:

Abstract:

Recent years have witnessed rapid growth in demand for streaming video over the Internet, exposing challenges in coping with heterogeneous device capabilities and varying network throughput. When we couple this rise in streaming with the growing number of portable devices (smart phones, tablets, laptops), we see an ever-increasing demand for high-definition video online while on the move. Wireless networks are inherently characterised by restricted shared bandwidth and relatively high error rates, presenting a challenge for the efficient delivery of high quality video. Additionally, mobile devices can support and demand a range of video resolutions and qualities. This demand for mobile streaming highlights the need for adaptive video streaming schemes that can adjust to available bandwidth and device heterogeneity and can provide graceful changes in video quality while preserving viewing satisfaction. In this context the use of well-known scalable media streaming techniques, commonly known as scalable coding, is an attractive solution and the focus of this thesis. In this thesis we investigate the transmission of existing scalable video models over a lossy network and determine how the variation in viewable quality is affected by packet loss. This work focuses on leveraging the benefits of scalable media while reducing the effects of data loss on achievable video quality. The overall approach centres on the strategic packetisation of the underlying scalable video and how best to utilise error resiliency to maximise viewable quality. In particular, we examine the manner in which scalable video is packetised for transmission over lossy networks and propose new techniques that reduce the impact of packet loss on scalable video by selectively choosing how to packetise the data and which data to transmit. We also exploit redundancy techniques, such as error resiliency, to enhance stream quality by ensuring smooth play-out with fewer changes in achievable video quality. The contributions of this thesis are new segmentation and encapsulation techniques which increase the viewable quality of existing scalable models by fragmenting and re-allocating the video sub-streams based on user requirements, available bandwidth and variations in loss rates. We offer new packetisation techniques which reduce the effects of packet loss on viewable quality by leveraging an increased number of frames per group of pictures (GOP) and by providing equality of data in every packet transmitted per GOP. These provide novel mechanisms for packetisation and error resiliency, as well as new applications for existing techniques such as interleaving and Priority Encoded Transmission. We also introduce three new scalable coding models, which offer a balance between transmission cost and consistency of viewable quality.
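
A minimal sketch of the GOP-level packetisation idea described above ("equality of data in every packet transmitted per GOP"): the bytes of every frame in the group are distributed round-robin across the packets, so one lost packet removes a thin slice of every frame rather than an entire frame. The packet count and dummy frame sizes are illustrative, and a real scheme would also carry headers so the receiver can de-interleave.

```python
# Sketch of byte-level round-robin packetisation of a group of pictures (GOP):
# every frame contributes equally to every packet, so a single packet loss
# degrades all frames slightly instead of removing whole frames.
def packetise_gop(frames, n_packets):
    """frames: list of bytes objects, one encoded frame per GOP position."""
    packets = [bytearray() for _ in range(n_packets)]
    for frame in frames:
        for i, byte in enumerate(frame):
            packets[i % n_packets].append(byte)   # byte-level round robin
    return [bytes(p) for p in packets]

gop = [bytes(120) for _ in range(8)]              # 8 dummy frames of 120 bytes
pkts = packetise_gop(gop, n_packets=6)
assert all(len(p) == len(pkts[0]) for p in pkts)  # equal data in every packet
```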

Relevance:

60.00%

Publisher:

Abstract:

The paper begins by presenting the work initially carried out by Queen's University and RSRE (now QinetiQ) in the development of advanced architectures and microchips based on systolic array architectures. The paper outlines how this has led to the development of highly complex designs for high definition TV and highlights work on both advanced signal processing architectures and tool flows for advanced systems. © 2006 IEEE.
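
As a purely software illustration of the array-style processing behind the designs surveyed, the sketch below emulates a pipelined (transposed-form) FIR filter in which each cell holds one coefficient and partial sums advance one cell per simulated clock tick. It is a simplified relative of the bit-level systolic arrays referred to above, not a model of any particular chip.

```python
# Software illustration of array-style FIR filtering: the input sample is fed
# to every cell, each cell multiplies by its coefficient, and partial sums
# advance one cell per simulated clock tick (transposed direct form).
def pipelined_fir(samples, coeffs):
    n = len(coeffs)
    regs = [0.0] * n                      # partial-sum register in each cell
    out = []
    for x in samples:
        nxt = [x * coeffs[k] + (regs[k + 1] if k + 1 < n else 0.0)
               for k in range(n)]
        out.append(nxt[0])                # cell 0 delivers the filter output
        regs = nxt
    return out

print(pipelined_fir([1, 0, 0, 0], [0.5, 0.3, 0.2]))   # impulse -> coefficients
```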

Relevance:

60.00%

Publisher:

Abstract:

The initial part of this paper reviews the early challenges (c. 1980) in achieving real-time silicon implementations of DSP computations. In particular, it discusses research on application-specific architectures, including bit-level systolic circuits, that led to important advances in achieving the DSP performance levels then required. These were many orders of magnitude greater than those achievable using programmable (including early DSP) processors, and were demonstrated through the design of commercial digital correlator and digital filter chips. As is discussed, an important challenge was the application of these concepts to recursive computations, as occur, for example, in Infinite Impulse Response (IIR) filters. An important breakthrough was to show how fine-grained pipelining can be used if arithmetic is performed most significant bit (msb) first. This can be achieved using redundant number systems, including carry-save arithmetic. This research and its practical benefits were again demonstrated through a number of novel IIR filter chip designs which, at the time, exhibited performance much greater than previous solutions. The architectural insights gained, coupled with the regular nature of many DSP and video processing computations, also provided the foundation for new methods for the rapid design and synthesis of complex DSP System-on-Chip (SoC) Intellectual Property (IP) cores. This included the creation of a wide portfolio of commercial SoC video compression cores (MPEG-2, MPEG-4, H.264) for very high performance applications ranging from cell phones to High Definition TV (HDTV). The work provided the foundation for systematic methodologies, tools and design flows, including high-level design optimizations based on "algorithmic engineering", and also led to the creation of the Abhainn tool environment for the design of complex heterogeneous DSP platforms comprising processors and multiple FPGAs. The paper concludes with a discussion of the problems faced by designers in developing complex DSP systems using current SoC technology. © 2007 Springer Science+Business Media, LLC.
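
The carry-save arithmetic mentioned above can be illustrated in a few lines: three operands are reduced to a sum word and a carry word with no carry propagation, so the critical path is independent of word length and only the final conversion needs a conventional add. The sketch below is a word-level illustration, not the bit-serial, msb-first circuitry of the cited designs.

```python
# Word-level illustration of carry-save addition: three operands become a sum
# word and a carry word with no carry propagation between bit positions.
def carry_save_add(a, b, c):
    sum_word = a ^ b ^ c                                # bitwise sum, no carries
    carry_word = ((a & b) | (b & c) | (a & c)) << 1     # carries, shifted left
    return sum_word, carry_word

s, cy = carry_save_add(13, 7, 5)
assert s + cy == 13 + 7 + 5      # redundancy resolved by one final normal add
```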

Relevance:

60.00%

Publisher:

Abstract:

The widespread availability of, and demand for, multimedia-capable devices and multimedia content have fueled the need for high-speed wireless connectivity beyond the capabilities of existing commercial standards. While fiber optic data links can provide multi-gigabit-per-second data rates, their cost and deployment are often prohibitive in many applications. Wireless links, by contrast, can provide a cost-effective fiber alternative to interconnect the outlying areas beyond the reach of the fiber rollout. With this in mind, the ever-increasing demand for multi-gigabit wireless applications, fiber segment replacement, mobile backhauling and aggregation, and covering the last mile has posed enormous challenges for next generation wireless technologies. In particular, the unbalanced temporal and geographical variations of spectrum usage, along with the rapid proliferation of bandwidth-hungry mobile applications such as video streaming with high definition television (HDTV) and ultra-high definition video (UHDV), have inspired millimeter-wave (mmWave) communications as a promising technology to alleviate the pressure on scarce spectrum resources for fifth generation (5G) mobile broadband.

Relevance:

60.00%

Publisher:

Abstract:

This paper presents a critical analysis of ultrawideband (UWB) and considers the turbulent journey it has had from the Federal Communications Commission's bandwidth allocation in 2002 to today. It analyzes the standards, the standoffs, and the stalemate in standardization activities and investigates the past and present research and commercial activities in realizing the UWB dream. In this paper, statistical evidence is presented to depict UWB's changing fortunes and is utilized as an indicator of future prominence. This paper reviews some of the opinions and remarks from commentators and analyzes predictions that were made. Finally, it presents possible ways forward to reignite the high-data-rate UWB standardization pursuit.

Relevance:

60.00%

Publisher:

Abstract:

There is demand for an easily programmable, high performance image processing platform based on FPGAs. In previous work, a novel high performance processor, IPPro, was developed and a Histogram of Orientated Gradients (HOG) algorithm study was undertaken on a Xilinx Zynq platform. Here, we identify and explore a number of mapping strategies to improve processing efficiency for soft-cores, along with a number of options for the creation of a division coprocessor. This is demonstrated on a revised high definition HOG implementation on a Zynq platform, resulting in a performance of 328 fps, which represents a 146% speed improvement over the original realization and a tenfold reduction in energy.
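
For reference, the sketch below computes the core of the HOG descriptor mentioned above: per-pixel gradients followed by per-cell orientation histograms weighted by gradient magnitude. The cell size and bin count are common illustrative choices, not necessarily those of the cited IPPro implementation.

```python
# Minimal HOG core: per-pixel gradients, then per-cell orientation histograms
# weighted by gradient magnitude. Expects a 2-D grayscale image.
import numpy as np

def hog_cells(image, cell=8, bins=9):
    gy, gx = np.gradient(image.astype(np.float64))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180           # unsigned orientation
    h, w = image.shape
    hist = np.zeros((h // cell, w // cell, bins))
    for cy in range(h // cell):
        for cx in range(w // cell):
            m = mag[cy*cell:(cy+1)*cell, cx*cell:(cx+1)*cell]
            a = ang[cy*cell:(cy+1)*cell, cx*cell:(cx+1)*cell]
            idx = (a / (180 / bins)).astype(int) % bins   # orientation bin index
            np.add.at(hist[cy, cx], idx.ravel(), m.ravel())
    return hist
```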

Relevance:

60.00%

Publisher:

Abstract:

This paper presents the implementation of an OFDM demodulator and a Viterbi decoder, proposed as part of a wireless High Definition video receiver to be integrated into an FPGA. These blocks were implemented in a Xilinx Virtex-6 FPGA. The complete system was previously modeled and simulated using MATLAB/Simulink to extract important hardware characteristics for the FPGA implementation.
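
A minimal sketch of the central OFDM demodulation step implemented by such a block: strip the cyclic prefix from each received symbol and take an FFT to recover the per-subcarrier values. The FFT size and prefix length below are illustrative defaults, not the parameters of the cited design.

```python
# Minimal OFDM demodulation sketch: drop the cyclic prefix of each received
# symbol and FFT the remainder to recover per-subcarrier values.
import numpy as np

def ofdm_demodulate(rx, n_fft=64, n_cp=16):
    """rx: 1-D complex baseband sample stream."""
    sym_len = n_fft + n_cp
    n_syms = len(rx) // sym_len
    symbols = rx[:n_syms * sym_len].reshape(n_syms, sym_len)
    return np.fft.fft(symbols[:, n_cp:], axis=1)   # one row per OFDM symbol
```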

Relevance:

60.00%

Publisher:

Abstract:

The focus of this study was to determine whether soil texture and composition variables were related to vine water status, yield components and grape composition, and whether multispectral high definition airborne imagery could be used to segregate zones in vineyards to target fruit of highest quality for premium winemaking. The study took place on a 10-ha commercial Riesling vineyard at Thirty Bench Winemakers, in Beamsville (Ontario). Results showed that soil moisture and leaf water potential (ψ) were temporally stable and related to berry composition and remotely-sensed data. Remote sensing, through the calculation of vegetation indices, was particularly useful for predicting vine vigor, yield, fruit maturity and berry monoterpene concentration; it could also clearly assist in making wines that are more representative of the cultivar used, and wines that reflect a specific terroir, since the calculated vegetation indices were highly correlated to typical Riesling.
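
The abstract refers to calculated vegetation indices without naming them here; the snippet below shows one common example (NDVI) computed from near-infrared and red reflectance bands, purely as an illustration of the kind of index involved.

```python
# Illustrative computation of one common vegetation index (NDVI) from
# near-infrared and red bands of airborne imagery. Which indices the study
# actually used is not stated in this abstract.
import numpy as np

def ndvi(nir, red):
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    return (nir - red) / (nir + red + 1e-9)   # small epsilon avoids 0/0
```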

Relevance:

60.00%

Publisher:

Abstract:

The Ministère des Ressources Naturelles et de la Faune (MRNF) commissioned the Montreal geomatics company SYNETIX inc. and the remote sensing laboratory of the Université de Montréal to develop an application dedicated to the automatic detection and updating of the road network of 1:20,000 topographic maps from high-spatial-resolution optical imagery. To this end, the contractors undertook the adaptation of the SIGMA0 software package, which they had jointly developed for map updating from satellite images with a resolution of about 5 metres. The product derived from SIGMA0 was a module named SIGMA-ROUTES, whose road detection principle relies on sweeping a filter along the road vectors of the existing map. The filter responses on very-high-resolution colour images of great radiometric complexity (aerial photographs) lead to the detected road segments being labelled intact, suspect, missing or new. The overall objective of this project is to assess the correctness of these status assignments by quantifying performance on the basis of the total detected distances in agreement with the reference, and by carrying out a spatial analysis of the inconsistencies. The sequence of trials first targets the effect of resolution on the conformity rate and, second, the gains expected from a series of enhancement treatments intended to make the images more suitable for road network extraction. The overall approach first involves characterising a test site in the Sherbrooke region containing 40 km of roads of various categories, from wooded trails to wide collector roads, over an area of 2.8 km². A ground-truth map of the transport routes allowed us to establish reference data from a visual detection, against which the SIGMA-ROUTES detection results are compared. Our results confirm that the radiometric complexity of high resolution images in urban areas benefits from pre-processing such as segmentation and histogram compensation that homogenises road surfaces. We also note that performance is hypersensitive to variations in resolution: moving between our three resolutions (84, 168 and 210 cm) changes the detection rate by nearly 15% of the total distances in agreement with the reference and spatially splits long intact vectors into several portions alternating between the intact, suspect and missing statuses. Detection of existing roads in agreement with the reference reached 78% with our most effective combination of resolution and image pre-processing. Chronic detection problems were identified, including several segments left without any assignment and ignored by the process. There is also an overestimation of false detections labelled suspect when they should have been identified as intact. Based on the linear measurements and spatial analyses of the detections, we estimate that assignment of the intact status should reach 90% conformity with the reference after various adjustments to the algorithm. Detection of new roads was a failure regardless of resolution or image enhancement. The search for new segments, which relies on locating potential starting points of new roads connected to existing roads, generates a runaway of false detections wandering among non-road features. Related to these inconsistencies, we isolated numerous false detections of new roads generated parallel to roads previously assigned as intact. Finally, we suggest a procedure that takes advantage of certain enhanced images while integrating human intervention at a few pivotal stages of the process.
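
A hypothetical sketch of the conformity metric described above (detected road length whose assigned status agrees with the reference, as a share of total reference length). The segment records below are invented for illustration; they are not the study's data.

```python
# Hypothetical conformity metric: share of reference road length whose detected
# status matches the visually established reference status.
def conformity_rate(segments):
    """segments: iterable of (length_m, detected_status, reference_status)."""
    total = sum(length for length, _, _ in segments)
    agreed = sum(length for length, det, ref in segments if det == ref)
    return agreed / total if total else 0.0

rate = conformity_rate([(120.0, "intact", "intact"),
                        (80.0, "suspect", "intact"),
                        (40.0, "missing", "missing")])
print(f"{rate:.0%} of reference length in agreement")   # 67%
```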

Relevance:

60.00%

Publisher:

Abstract:

In machine learning, classification is the process of assigning a new observation to a particular category. Classifiers, which implement classification algorithms, have been widely studied over recent decades. Traditional classifiers are based on algorithms such as SVMs and neural networks, and are generally executed as software on CPUs, so the system suffers from a lack of performance and high energy consumption. Although GPUs can be used to accelerate the computation of some classifiers, their high power consumption prevents the technology from being deployed on portable devices such as embedded systems. To make the classification system more lightweight, classifiers should be able to run on more compact hardware instead of a group of CPUs or GPUs, and the classifiers themselves should be optimised for that hardware. In this thesis, we explore the implementation of a novel classifier on an FPGA-based hardware platform. The classifier, designed by Alain Tapp (Université de Montréal), is based on a large number of lookup tables that form tree-like circuits which carry out the classification tasks. The FPGA appears tailor-made to implement this classifier, with its rich lookup-table resources and highly parallel architecture. Our work shows that FPGAs can implement several classifiers and perform classification on high-definition images at very high speed.
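
The abstract describes the classifier only as a large collection of lookup tables arranged into tree-like circuits. The sketch below illustrates that general structure with a generic LUT tree; it is not Alain Tapp's actual design, and all tables and wiring are arbitrary.

```python
# Generic illustration of a classifier built from small lookup tables arranged
# in a tree: each LUT consumes a few bits (features or outputs of LUTs below
# it) and emits one bit. Not the specific design referred to above.
def lut(table, bits):
    """Look up a k-bit input in a 2**k entry truth table (list of 0/1)."""
    index = 0
    for b in bits:
        index = (index << 1) | b
    return table[index]

def lut_tree_classify(features, layers):
    """layers: list of layers; each layer is a list of (table, input_indices)
    pairs whose inputs index into the previous layer's outputs."""
    values = list(features)
    for layer in layers:
        values = [lut(table, [values[i] for i in idx]) for table, idx in layer]
    return values[0]          # root LUT output is the predicted class bit

# Tiny example: two 2-input LUTs feed a 2-input root LUT (tables are arbitrary).
layers = [[([0, 1, 1, 0], (0, 1)), ([0, 0, 0, 1], (2, 3))],   # first layer
          [([0, 1, 1, 1], (0, 1))]]                           # root LUT
print(lut_tree_classify([1, 0, 1, 1], layers))                # -> 1
```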

Relevance:

60.00%

Publisher:

Abstract:

The creation of OFDM-based Wireless Personal Area Networks (WPANs) has allowed the development of high bit-rate wireless communication devices suitable for streaming High Definition video between consumer products, as demonstrated in Wireless-USB and Wireless-HDMI. However, these devices need high clock frequencies, particularly for the OFDM, FFT and symbol processing sections, resulting in high silicon cost and high electrical power. The high clock rates make hardware prototyping difficult, so verification is very important but costly. Acknowledging that electrical power in wireless consumer devices is more critical than the number of implemented logic gates, this paper presents a Double Data Rate (DDR) architecture for implementation inside an OFDM baseband codec in order to reduce the high-frequency clock rates by a full factor of 2. The presented architecture has been implemented and tested for ECMA-368 (the Wireless-USB context), resulting in a maximum clock rate of 264 MHz, instead of the expected 528 MHz, anywhere on the baseband codec die.
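
A small arithmetic sketch of the double-data-rate principle described above: a datapath that consumes two samples per clock cycle sustains a given sample rate at half the clock frequency. The 528/264 MHz figures follow the abstract; the pairing helper is illustrative only.

```python
# Arithmetic sketch of the DDR idea: processing two samples per clock cycle
# halves the clock frequency needed to sustain a fixed sample rate.
SAMPLE_RATE_MHZ = 528

def clock_needed(samples_per_cycle):
    return SAMPLE_RATE_MHZ / samples_per_cycle

print(clock_needed(1))   # single-rate datapath: 528.0 MHz
print(clock_needed(2))   # DDR-style dual-sample datapath: 264.0 MHz

def ddr_stream(samples):
    """Deliver the sample stream as pairs, one pair per (half-rate) clock."""
    it = iter(samples)
    return list(zip(it, it))
```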

Relevance:

60.00%

Publisher:

Abstract:

The creation of OFDM-based Wireless Personal Area Networks (WPANs) has allowed high bit-rate wireless communication devices suitable for streaming High Definition video between consumer products, as demonstrated in Wireless-USB. However, these devices need high clock rates, particularly for the OFDM sections, resulting in high silicon cost and high electrical power. Acknowledging that electrical power in wireless consumer devices is more critical than the number of implemented logic gates, this paper presents a Double Data Rate (DDR) architecture to reduce the OFDM input and output clock rate by a factor of 2. The architecture has been implemented and tested for Wireless-USB (ECMA-368), resulting in a maximum clock of 264 MHz instead of 528 MHz anywhere on the die.