Design of improved rail-to-rail low-distortion and low-stress switches in advanced CMOS technologies
Abstract:
This paper describes the efficient design of an improved and dedicated switched-capacitor (SC) circuit capable of linearizing CMOS switches to allow SC circuits to reach low distortion levels. The described circuit (SC linearization control circuit, SLC) has the advantage over conventional clock-bootstrapping circuits of exhibiting low stress, since large gate voltages are avoided. This paper presents exhaustive corner simulation results of an SC sample-and-hold (S/H) circuit which employs the proposed and optimized circuits, together with the experimental evaluation of a complete 10-bit ADC using this S/H circuit. These results show that the SLC circuits can reduce distortion and increase dynamic linearity above 12 bits for wide input signal bandwidths.
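The dynamic-linearity figures quoted here are typically verified by spectral analysis of a sampled sine wave. Below is a minimal sketch of that kind of check, assuming an idealized S/H model in which a residual third-order switch nonlinearity (the error an SLC or bootstrapping circuit is meant to suppress) is modeled by an arbitrary coefficient; the sampling rate, tone bin and coefficient are illustrative values, not figures from the paper.

```python
import numpy as np

fs = 100e6                      # sampling rate (assumed)
n = 4096
k = 41                          # coherent tone bin (assumed)
fin = k * fs / n
t = np.arange(n) / fs
x = np.sin(2 * np.pi * fin * t)

# Hypothetical residual nonlinearity of a non-linearized CMOS switch;
# an SLC/bootstrapped switch would shrink this third-order term.
y = x + 1e-4 * x**3

spec = np.abs(np.fft.rfft(y * np.hanning(n))) ** 2
signal_power = spec[k - 2:k + 3].sum()          # tone power (Hann main lobe)
noise_dist_power = spec[1:].sum() - signal_power
sinad = 10 * np.log10(signal_power / noise_dist_power)
enob = (sinad - 1.76) / 6.02
print(f"SINAD = {sinad:.1f} dB, ENOB = {enob:.2f} bits")
```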
Abstract:
A package of B-spline finite strip models is developed for the linear analysis of piezolaminated plates and shells. This package is coupled with a global optimization technique in order to enhance the performance of these types of structures under various types of objective functions and/or constraints, with discrete and continuous design variables. The models considered are based on a higher-order displacement field and can be applied to the static, free vibration and buckling analyses of laminated adaptive structures with arbitrary lay-ups, loading and boundary conditions. Genetic algorithms, with either binary or floating-point encoding of the design variables, were used to find optimal locations for the piezoelectric actuators, as well as to determine the best voltages to apply to them in order to obtain a desired structure shape. These models provide an overall economy of computing effort for static and vibration problems.
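As a minimal illustration of the binary-encoded search described above, the toy genetic algorithm below chooses actuator sites to match a preset target layout. The fitness function is only a stand-in for the objective that, in the paper, would be evaluated with the B-spline finite strip model; all sizes, the target layout and the GA settings are assumptions.

```python
import random

N_SITES = 20              # candidate actuator locations (assumed)
N_ACT = 4                 # actuators to place (assumed)
TARGET = [5, 9, 13, 17]   # sites that best reproduce the desired shape (toy data)

def fitness(chrom):
    # Penalize using the wrong number of actuators, reward overlap with TARGET.
    active = [i for i, g in enumerate(chrom) if g]
    penalty = abs(len(active) - N_ACT)
    return sum(1 for i in active if i in TARGET) - penalty

def crossover(a, b):
    cut = random.randrange(1, N_SITES)
    return a[:cut] + b[cut:]

def mutate(chrom, rate=0.05):
    return [1 - g if random.random() < rate else g for g in chrom]

pop = [[random.randint(0, 1) for _ in range(N_SITES)] for _ in range(40)]
for gen in range(100):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:10]
    pop = elite + [mutate(crossover(random.choice(elite), random.choice(elite)))
                   for _ in range(30)]

best = max(pop, key=fitness)
print("best layout:", [i for i, g in enumerate(best) if g])
```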
Abstract:
Wyner-Ziv (WZ) video coding is a particular case of distributed video coding, the recent video coding paradigm based on the Slepian-Wolf and Wyner-Ziv theorems, which exploits the source correlation at the decoder and not at the encoder as in predictive video coding. Although many improvements have been made in recent years, the performance of state-of-the-art WZ video codecs has still not reached that of state-of-the-art predictive video codecs, especially for high and complex motion video content. This is also true in terms of subjective image quality, mainly because of the considerable amount of blocking artefacts present in the decoded WZ video frames. This paper proposes an adaptive deblocking filter to improve both the subjective and objective quality of the WZ frames in a transform domain WZ video codec. The proposed filter is an adaptation of the advanced deblocking filter defined in the H.264/AVC (advanced video coding) standard to a WZ video codec. The results obtained confirm the subjective quality improvement and show objective quality gains of up to 0.63 dB overall for sequences with high motion content when large group of pictures (GOP) sizes are used.
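For illustration, a much simplified, strength-dependent boundary filter over vertical 4x4 block edges is sketched below. It only conveys the idea of smoothing "soft" edges that are likely blocking artefacts while leaving strong (real) edges untouched; it is not the H.264/AVC-derived filter of the paper, and the thresholds and block size are assumptions.

```python
import numpy as np

def deblock_vertical_edges(frame, block=4, alpha=10, beta=3):
    """Smooth vertical block edges whose gradients look like blocking artefacts."""
    out = frame.astype(np.int32).copy()
    w = out.shape[1]
    for x in range(block, w - 1, block):
        p1, p0 = out[:, x - 2], out[:, x - 1]   # samples left of the edge
        q0, q1 = out[:, x], out[:, x + 1]       # samples right of the edge
        # Filter only "soft" edges: small jump across and along the boundary.
        mask = (np.abs(p0 - q0) < alpha) & (np.abs(p1 - p0) < beta) & (np.abs(q1 - q0) < beta)
        delta = (q0 - p0) // 4
        out[:, x - 1] = np.where(mask, p0 + delta, p0)
        out[:, x] = np.where(mask, q0 - delta, q0)
    return out.astype(frame.dtype)

frame = (np.random.rand(16, 16) * 255).astype(np.uint8)
filtered = deblock_vertical_edges(frame)
print(filtered.shape)
```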
Abstract:
Wyner-Ziv (WZ) video coding is a particular case of distributed video coding (DVC), the recent video coding paradigm based on the Slepian-Wolf and Wyner-Ziv theorems, which exploits the source temporal correlation at the decoder and not at the encoder as in predictive video coding. Although some progress has been made in recent years, WZ video coding is still far from the compression performance of predictive video coding, especially for high and complex motion content. The WZ video codec adopted in this study is based on a transform domain WZ video coding architecture with feedback channel-driven rate control, whose modules have been improved with some recent coding tools. This study proposes a novel motion learning approach to successively improve the rate-distortion (RD) performance of the WZ video codec as the decoding proceeds, making use of the already decoded transform bands to improve the decoding process for the remaining transform bands. The results obtained reveal gains of up to 2.3 dB in the RD curves against the performance of the same codec without the proposed motion learning approach, for high motion sequences and long group of pictures (GOP) sizes.
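The control flow of such a band-by-band refinement loop might look like the sketch below. The "decoding" and "motion re-estimation" steps are collapsed into simple numerical stand-ins (all sizes, band names and weights are assumptions), so only the idea of feeding already decoded bands back into the side information is illustrated.

```python
import numpy as np

def refine_side_info(side_info, partial_reco, weight=0.5):
    # Stand-in for re-running motion estimation with the already decoded bands:
    # pull the side information towards the partial reconstruction.
    return (1 - weight) * side_info + weight * partial_reco

bands = ["DC", "AC1", "AC2", "AC3"]                     # decoding order (illustrative)
true_frame = np.random.rand(8, 8)
side_info = true_frame + 0.3 * np.random.randn(8, 8)    # initial (noisier) SI
partial_reco = side_info.copy()

for band in bands:
    # "Decode" the band: here we simply assume decoding corrects part of the error.
    partial_reco = 0.5 * partial_reco + 0.5 * true_frame
    side_info = refine_side_info(side_info, partial_reco)
    err = float(np.mean((side_info - true_frame) ** 2))
    print(f"after {band}: SI mean squared error = {err:.4f}")
```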
Abstract:
The devastating impact of the Sumatra tsunami of 26 December 2004 raised the question for scientists of how to forecast a tsunami threat. In 2005, the IOC-UNESCO XXIII assembly decided to implement a global tsunami warning system to cover the regions that were not yet protected, namely the Indian Ocean, the Caribbean, and the North East Atlantic, the Mediterranean and connected seas (the NEAM region). Within NEAM, the Gulf of Cadiz is the most sensitive area, with an important record of devastating historical events. The objective of this paper is to present a preliminary design for a reliable tsunami detection network for the Gulf of Cadiz, based on a network of sea-level observatories. The tsunamigenic potential of this region has been reviewed in order to define the active tectonic structures. Tsunami hydrodynamic modeling and GIS technology have been used to identify the appropriate locations for the minimum number of sea-level stations. Results show that 3 tsunameters are required as the minimum number of stations necessary to ensure acceptable protection for the large coastal population in the Gulf of Cadiz. In addition, 29 tide gauge stations could be necessary to fully assess the effects of a tsunami along the affected coasts of Portugal, Spain and Morocco.
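Selecting a minimum set of stations subject to a detection-time constraint is essentially a coverage problem. The greedy sketch below uses toy travel times; in the paper these would come from the tsunami hydrodynamic modeling of the Gulf of Cadiz source zones, and the site names, times and warning budget shown here are assumptions.

```python
candidate_sites = ["A", "B", "C", "D", "E"]
sources = ["S1", "S2", "S3", "S4"]
# travel_time[site][source]: minutes from tsunami generation to detection (assumed)
travel_time = {
    "A": {"S1": 8, "S2": 25, "S3": 30, "S4": 12},
    "B": {"S1": 22, "S2": 9, "S3": 14, "S4": 28},
    "C": {"S1": 15, "S2": 18, "S3": 7, "S4": 26},
    "D": {"S1": 30, "S2": 12, "S3": 20, "S4": 10},
    "E": {"S1": 11, "S2": 27, "S3": 9, "S4": 24},
}
BUDGET = 15  # minutes allowed before a warning must be possible (assumed)

uncovered = set(sources)
chosen = []
while uncovered:
    # Pick the site that covers the most still-uncovered sources within the budget.
    best = max(candidate_sites,
               key=lambda s: sum(1 for src in uncovered if travel_time[s][src] <= BUDGET))
    newly = {src for src in uncovered if travel_time[best][src] <= BUDGET}
    if not newly:
        break  # remaining sources cannot be covered within the budget
    chosen.append(best)
    uncovered -= newly

print("stations:", chosen, "uncovered:", uncovered)
```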
Abstract:
A new tool-assisted methodology is proposed to create new products in the automobile industry, based on previously defined processes and experience and inspired by a set of best practices or principles: it is based on high-level models or specifications; it is component-based and architecture-centric; and it is based on generative programming techniques. This approach follows in essence the MDA (Model Driven Architecture) philosophy, with some specific characteristics. We propose a repository that keeps related information, such as models, applications, design information, generated artifacts and even information concerning the development process itself (e.g., generation steps, tests and integration milestones). Generically, this methodology receives the users' requirements for a new product (e.g., functional, non-functional, product specification) as its main inputs and produces a set of artifacts (e.g., design parts, process validation output) as its main output, which will be integrated into the engineer's design tool (e.g., a CAD system) to facilitate the work.
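To give the flavour of the generative step, the sketch below turns a hypothetical high-level product model into a textual design stub. The model fields, names and output format are illustrative only, not the repository schema or artifact types of the paper.

```python
# Toy high-level product model (all fields are assumptions)
product_model = {
    "name": "DoorModule",
    "components": [
        {"type": "WindowLifter", "variant": "electric"},
        {"type": "Speaker", "variant": "standard"},
    ],
}

def generate_artifact(model):
    """Template-based generation of a design stub from the high-level model."""
    lines = [f"# Generated design stub for {model['name']}"]
    for comp in model["components"]:
        lines.append(f"component {comp['type']} (variant={comp['variant']})")
    return "\n".join(lines)

print(generate_artifact(product_model))
```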
Abstract:
Demand for power is growing every day, mainly due to emerging economies in countries such as China, Russia, India, and Brazil. During the last 50 years, steam pressure and temperature in power plants have been continuously raised to improve thermal efficiency. Recent efforts to improve efficiency have led to the development of a new generation of heat recovery steam generators, where the Benson once-through technology is applied to improve thermal efficiency. The main purpose of this paper is to analyze the mechanical behavior of a high pressure superheater (HPSH) manifold by applying finite element modeling and analysis, with the objective of studying stress propagation and, from it, the relevant damage mechanisms, e.g., uniaxial fatigue and uniaxial creep, for life prediction. The objective of this paper is also to analyze the mechanical properties of the new high-temperature-resistant materials on the market, such as the 2Cr bainitic steels (T/P23 and T/P24) and the 9-12Cr martensitic steels (T/P91, T/P92, E911, and T/P122). For this study, the design rules for the construction of power boilers were applied to define the geometry of the HPSH manifold.
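As an example of the kind of life-prediction calculation that accompanies the finite element stress results, the sketch below estimates creep rupture life from the standard Larson-Miller relation. The temperature, parameter value and constant C are illustrative numbers, not material data for the steels listed above.

```python
def creep_rupture_hours(temp_kelvin, lmp, c=20.0):
    """Rupture time in hours from LMP = T * (C + log10(t_r))."""
    return 10 ** (lmp / temp_kelvin - c)

T = 873.0        # metal temperature in K (600 C, assumed)
LMP = 21000.0    # Larson-Miller parameter at the operating stress (assumed)
print(f"estimated rupture life: {creep_rupture_hours(T, LMP):.0f} h")
```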
Abstract:
Recently, several distributed video coding (DVC) solutions based on the distributed source coding (DSC) paradigm have appeared in the literature. Wyner-Ziv (WZ) video coding, a particular case of DVC where side information is made available at the decoder, enables a flexible distribution of the computational complexity between the encoder and decoder, promising to fulfill novel requirements from applications such as video surveillance, sensor networks and mobile camera phones. The quality of the side information at the decoder has a critical role in determining the WZ video coding rate-distortion (RD) performance, notably in raising it to a level as close as possible to the RD performance of standard predictive video coding schemes. Towards this target, efficient motion search algorithms for powerful frame interpolation are much needed at the decoder. In this paper, the RD performance of a Wyner-Ziv video codec is improved by using novel, advanced motion compensated frame interpolation techniques to generate the side information. The development of this type of side information estimator is a difficult problem in WZ video coding, especially because the decoder only has some decoded reference frames available. Based on the regularization of the motion field, novel side information creation techniques are proposed in this paper, along with a new frame interpolation framework able to generate higher quality side information at the decoder. To illustrate the RD performance improvements, this novel side information creation framework has been integrated in a transform domain turbo coding based Wyner-Ziv video codec. Experimental results show that the novel side information creation solution leads to better RD performance than available state-of-the-art side information estimators, with improvements of up to 2 dB; moreover, it allows outperforming H.264/AVC Intra by up to 3 dB with a lower encoding complexity.
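A minimal sketch of the block-based frame interpolation idea follows: estimate a motion vector between the two decoded key frames by full-search block matching and average the two motion-compensated predictions for the intermediate frame. This is a simplification for illustration (block size, search range and the halved-vector handling are assumptions), not the regularized motion-field estimation proposed in the paper.

```python
import numpy as np

def mcfi_block(prev, nxt, y, x, bs=8, search=4):
    """Interpolate one block of the intermediate frame from two key frames."""
    best, best_mv = None, (0, 0)
    target = nxt[y:y + bs, x:x + bs].astype(int)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            yy, xx = y + dy, x + dx
            if 0 <= yy and yy + bs <= prev.shape[0] and 0 <= xx and xx + bs <= prev.shape[1]:
                sad = np.abs(prev[yy:yy + bs, xx:xx + bs].astype(int) - target).sum()
                if best is None or sad < best:
                    best, best_mv = sad, (dy, dx)
    dy, dx = best_mv
    # Halve the vector for the temporally intermediate frame and average both predictions.
    hy, hx = y + dy // 2, x + dx // 2
    return (prev[hy:hy + bs, hx:hx + bs].astype(int) + target) // 2

prev = (np.random.rand(32, 32) * 255).astype(np.uint8)
nxt = np.roll(prev, 2, axis=1)            # toy motion of 2 pixels
print(mcfi_block(prev, nxt, 8, 8).shape)
```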
Abstract:
Papers on child-care attendance as a risk factor for acute respiratory infections and diarrhea were reviewed. There was great variety among the studies with regard to design, definition of exposure and definition of outcomes. All the traditional epidemiological study designs have been used. The studies varied in how child-care attendance was defined, both in general and for different settings; these definitions differed especially in relation to the minimum time of attendance required. The outcomes were also defined and measured in several different ways. The analyses performed were not always appropriate, leading to sets of results of uneven quality, composed of different measures of association relating different exposures and outcomes, which made summarizing difficult. Despite that, the results reported were remarkably consistent. Only two of the papers reviewed failed to show some association between child-care attendance and increased acute respiratory infections or diarrhea. On the other hand, the magnitude of the associations reported varied widely, especially for lower respiratory infections. Taken together, the studies published so far provide evidence that children attending child-care centers, especially those under three years of age, are at a higher risk of upper respiratory infections, lower respiratory infections, and diarrhea. The studies were not consistent, however, in relation to attendance at child-care homes: children in such settings were sometimes similar to those in child-care centers, sometimes similar to those cared for at home, and sometimes presented an intermediate risk.
Abstract:
This paper presents the results of an experimental study of the technical viability of two mixture designs for self-consolidating concrete (SCC) proposed by two Portuguese researchers in a previous work. The objective was to find the best method to provide the required characteristics of SCC in the fresh and hardened states without having to experiment with a large number of mixtures. Five SCC mixtures, each with a volume of 25 L (6.61 gal.), were prepared using a forced mixer with a vertical axis for each of three compressive strength targets: 40, 55, and 70 MPa (5.80, 7.98, and 10.15 ksi). The mixtures' fresh-state properties of fluidity, segregation resistance, and bleeding and blockage tendency, and their hardened-state property of compressive strength were compared. For this study, the following tests were performed: slump-flow, V-funnel, L-box, box, and compressive strength. The results of this study made it possible to identify the most influential factors in the design of the SCC mixtures.
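For illustration, the fresh-state results of such a campaign can be screened against acceptance ranges as in the sketch below. The thresholds shown are common guideline values used only as illustrative defaults; they are not the criteria or data of this study.

```python
# Illustrative fresh-state acceptance ranges (assumed, not from the paper)
LIMITS = {
    "slump_flow_mm": (650, 800),
    "v_funnel_s": (6, 12),
    "l_box_ratio": (0.8, 1.0),
}

def check_mixture(name, results):
    """Print whether a mixture's fresh-state results fall inside all ranges."""
    ok = all(lo <= results[k] <= hi for k, (lo, hi) in LIMITS.items())
    status = "pass" if ok else "check"
    print(f"{name}: {results} -> {status}")

check_mixture("SCC-40-1", {"slump_flow_mm": 700, "v_funnel_s": 9.5, "l_box_ratio": 0.85})
check_mixture("SCC-70-2", {"slump_flow_mm": 620, "v_funnel_s": 14.0, "l_box_ratio": 0.78})
```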
Abstract:
As teachers, we are challenged every day to solve pedagogical problems and we have to fight for our students' attention in a media-rich world. I will talk about how we use ICT in Initial Teacher Training and give you some insight into what we are doing. The most important benefit of using ICT in education is that it makes us reflect on our practice. There is no doubt that our classrooms need to be updated, but we need to be critical about every piece of hardware, software or service that we bring into them. It is not only because our budgets are short, but also because e-learning is primarily about learning, not technology. Therefore, we need to have the knowledge and skills required to act in different situations, and choose the best tool for the job. Not all subjects are suitable for e-learning, nor do all students have the skills to organize their own study time. Also, not all teachers want to spend time programming or learning about instructional design and metadata. The promised land in which easy-to-use authoring tools (e.g. eXe and Reload) would lead all teachers to become Learning Object authors and share these LOs in repositories failed, as HyperCard, Toolbook and others had failed before. We need to know a little bit about many different technologies so we can mobilize this knowledge when a situation requires it: integrate e-learning technologies in the classroom, not a flipped classroom, just simple tools. Lecture capture, mobile phones and smartphones, pocket-size camcorders, VoIP, VLEs, live video broadcast, screen sharing, free services for collaborative work, saving, sharing and syncing your files. Do not feel stressed to use everything, every time. Just because we have a whiteboard does not mean we have to make it the centre of the classroom. Start from where you are, with your preferred subject and the tools you master. Then go slowly and try some new tool in a non-formal situation and with just one or two students. And you don't need to be alone: subscribe to a mailing list and share your thoughts with other teachers in a dedicated forum, even better if both are part of a community of practice, and share resources. We did that for music teachers and it was a success, reaching 1,000 members in two years. Just do it.
Abstract:
A new high-throughput and scalable architecture for unified transform coding in H.264/AVC is proposed in this paper. Such a flexible structure is capable of computing all the 4x4 and 2x2 transforms for Ultra High Definition Video (UHDV) applications (4320x7680 @ 30 fps) in real time and with low hardware cost. These significantly high performance levels were proven with the implementation of several different configurations of the proposed structure using both FPGA and ASIC 90 nm technologies. In addition, this experimental evaluation also demonstrated the high area efficiency of the proposed architecture, which in terms of Data Throughput per Unit of Area (DTUA) is at least 1.5 times more efficient than its most prominent related designs (1).
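As a software reference for what such hardware computes, the forward 4x4 core transform of H.264/AVC is the integer matrix product below (scaling and quantization are omitted; the 2x2 Hadamard transform used for chroma DC follows the same pattern).

```python
import numpy as np

# H.264/AVC 4x4 forward core transform matrix
Cf = np.array([[1,  1,  1,  1],
               [2,  1, -1, -2],
               [1, -1, -1,  1],
               [1, -2,  2, -1]])

def forward_4x4(block):
    """Core 4x4 transform: Y = Cf . X . Cf^T (scaling/quantization omitted)."""
    return Cf @ block @ Cf.T

X = np.arange(16).reshape(4, 4)
print(forward_4x4(X))
```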
Abstract:
The advances made in channel-capacity codes, such as turbo codes and low-density parity-check (LDPC) codes, have played a major role in the emerging distributed source coding paradigm. LDPC codes can be easily adapted to new source coding strategies due to their natural representation as bipartite graphs and the use of quasi-optimal decoding algorithms, such as belief propagation. This paper tackles a relevant scenario in distributed video coding: lossy source coding when multiple side information (SI) hypotheses are available at the decoder, each one correlated with the source according to different correlation noise channels. Thus, it is proposed to exploit the multiple SI hypotheses through an efficient joint decoding technique with multiple LDPC syndrome decoders that exchange information to obtain coding efficiency improvements. At the decoder side, the multiple SI hypotheses are created with motion compensated frame interpolation and fused together in a novel iterative LDPC based Slepian-Wolf decoding algorithm. With the creation of multiple SI hypotheses and the proposed decoding algorithm, bitrate savings of up to 8.0% are obtained for similar decoded quality.
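A minimal sketch of how two SI hypotheses can be fused into a single soft input (log-likelihood ratio) for a Slepian-Wolf decoder follows, assuming each hypothesis behaves like an independent binary symmetric channel. This is an illustrative correlation model only, not the transform-domain noise model or the iterative decoder-exchange scheme of the paper.

```python
import math

def llr_from_hypotheses(hypotheses):
    """hypotheses: list of (observed_bit, crossover_probability) pairs."""
    llr = 0.0
    for y, p in hypotheses:
        # log P(y | x=0) / P(y | x=1) for a BSC with crossover probability p
        llr += math.log((1 - p) / p) if y == 0 else math.log(p / (1 - p))
    return llr

# Two SI hypotheses disagree; the more reliable one (p=0.05) dominates.
print(llr_from_hypotheses([(0, 0.05), (1, 0.20)]))
```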
Abstract:
A test chamber was designed and built (according to the ISO 16000-9 standard) to simulate the atmospheric conditions experienced by rubber infill (when applied in synthetic turf pitches) and to accurately measure the airborne emissions of pollutants such as dusts and volatile organic compounds (VOC), as well as the pollutants present in leachates. It should be pointed out that ISO 16000-9 is only concerned with the determination of VOC emissions from building products and furnishing (it is not specific to synthetic turf materials), whereas other standards are concerned with the emission of leachates only. This procedure is to be considered a technical alternative to the lysimeter-based "global turf system evaluation" when the rubber infill alone is to be evaluated. The advantages of the proposed "test chamber" option are its simplicity and economy. This test chamber is currently installed and being used for tests at LAIST.
Abstract:
A major determinant of the level of effective natural gas supply is the ease of supplying customers while minimizing total system costs. The aim of this work is to study the right number of Gas Supply Units (GSUs) and their optimal location in a gas network. This paper suggests a GSU location heuristic based on Lagrangean relaxation techniques. The heuristic is tested on the Iberian natural gas network, a system modeled with 65 demand nodes linked by physical and virtual pipelines. Results of the Lagrangean heuristic, along with the allocation of loads to gas sources, are presented for a 2015 forecast gas demand scenario.
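A compact sketch of the Lagrangean relaxation machinery on a toy uncapacitated facility-location model of GSU siting: dualize the "each demand node must be served" constraints and update the multipliers by subgradient steps. All costs and sizes below are illustrative, not the 65-node Iberian network data or the paper's exact formulation.

```python
F = [4.0, 3.0, 5.0]                                 # fixed cost of opening each GSU site (assumed)
C = [[1, 4, 6], [5, 1, 3], [6, 5, 1], [2, 6, 4]]    # supply cost C[i][j]: node i served from site j

lam = [0.0] * len(C)        # multipliers for the "each node served" constraints
best_lb = float("-inf")
for it in range(200):
    # Solve the relaxed problem: open site j if its reduced contribution is negative.
    contrib = [F[j] + sum(min(0.0, C[i][j] - lam[i]) for i in range(len(C)))
               for j in range(len(F))]
    open_sites = [j for j, v in enumerate(contrib) if v < 0]
    lb = sum(lam) + sum(contrib[j] for j in open_sites)
    best_lb = max(best_lb, lb)
    # Subgradient of the dualized constraints and diminishing-step multiplier update.
    g = [1 - sum(1 for j in open_sites if C[i][j] - lam[i] < 0) for i in range(len(C))]
    step = 1.0 / (it + 1)
    lam = [lam[i] + step * g[i] for i in range(len(C))]

print(f"best Lagrangean lower bound: {best_lb:.2f}, open sites: {open_sites}")
```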