930 results for High-speed video


Relevance: 80.00%

Publisher:

Abstract:

Experimental work has been carried out to investigate the effect of the major operating variables on the milling efficiency of calcium carbonate in laboratory- and pilot-scale Tower and Sala Agitated Mill (SAM) units. The results suggest that stirrer speed, media size and slurry density affect the specific energy consumption required to achieve a given product size. The media stress intensity analysis developed for high-speed horizontal mills was modified to include the effect of gravitational force in vertical stirred mills such as the Tower and SAM units. The results suggest that this approach can be successfully applied to both mill types. For a given specific energy input, an optimum stress intensity range existed for which the finest product was achieved. A finer product, and therefore higher milling efficiency, was obtained with the SAM over the range of operating conditions tested. (C) 2001 Elsevier Science Ltd. All rights reserved.
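
The stress-intensity idea can be sketched numerically. The kinetic term below is the standard media stress intensity used for horizontal stirred mills; the gravitational term for vertical (Tower/SAM) units is an illustrative assumption only, since the paper's exact modification is not reproduced here.

```python
# Hedged sketch, not the paper's model: Kwade-type media stress intensity for a
# stirred mill, plus an assumed gravitational contribution for vertical
# (Tower/SAM) units. All numbers below are illustrative.

def stress_intensity_horizontal(d_media, rho_media, rho_slurry, tip_speed):
    """SI = d_m^3 * (rho_m - rho_sl) * v_t^2  [J], the usual horizontal-mill form."""
    return d_media**3 * (rho_media - rho_slurry) * tip_speed**2

def stress_intensity_vertical(d_media, rho_media, rho_slurry, tip_speed,
                              bed_height, g=9.81):
    """Assumed vertical-mill form: add a hydrostatic term for the media bed weight."""
    kinetic = stress_intensity_horizontal(d_media, rho_media, rho_slurry, tip_speed)
    gravitational = d_media**3 * (rho_media - rho_slurry) * g * bed_height  # assumption
    return kinetic + gravitational

# 3 mm steel media, CaCO3 slurry, 3 m/s tip speed, 1 m media bed depth.
print(stress_intensity_horizontal(3e-3, 7800.0, 1600.0, 3.0))
print(stress_intensity_vertical(3e-3, 7800.0, 1600.0, 3.0, 1.0))
```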

Relevance: 80.00%

Publisher:

Abstract:

Wet agglomeration processes have traditionally been considered an empirical art, with great difficulties in predicting and explaining observed behaviour. Industry has faced a range of problems including large recycle ratios, poor product quality control, surging and even the total failure of scale up from laboratory to full scale production. However, in recent years there has been a rapid advancement in our understanding of the fundamental processes that control granulation behaviour and product properties. This review critically evaluates the current understanding of the three key areas of wet granulation processes: wetting and nucleation, consolidation and growth, and breakage and attrition. Particular emphasis is placed on the fact that there now exist theoretical models which predict or explain the majority of experimentally observed behaviour. Provided that the correct material properties and operating parameters are known, it is now possible to make useful predictions about how a material will granulate. The challenge that now faces us is to transfer these theoretical developments into industrial practice. Standard, reliable methods need to be developed to measure the formulation properties that control granulation behaviour, such as contact angle and dynamic yield strength. There also needs to be a better understanding of the flow patterns, mixing behaviour and impact velocities in different types of granulation equipment. (C) 2001 Elsevier Science B.V. All rights reserved.
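
On the wetting and nucleation side, one widely used quantity in granulation regime analysis is the dimensionless spray flux of Hapgood and Litster; the review does not necessarily rely on this exact model, so the sketch below is only an illustration of the kind of predictive calculation now possible, with hypothetical operating values.

```python
def dimensionless_spray_flux(spray_rate_m3_s, powder_flux_m2_s, drop_diameter_m):
    """Psi_a = 3*V_dot / (2*A_dot*d_d): ratio of drop footprint to fresh powder area.
    Low values (roughly < 0.1) are usually read as drop-controlled nucleation,
    high values as the mechanical-dispersion regime."""
    return 3.0 * spray_rate_m3_s / (2.0 * powder_flux_m2_s * drop_diameter_m)

# Hypothetical operating point: 100 mL/min spray, 0.05 m^2/s powder flux, 100 um drops.
psi_a = dimensionless_spray_flux(100e-6 / 60.0, 0.05, 100e-6)
print(f"dimensionless spray flux = {psi_a:.2f}")   # ~0.5: intermediate regime
```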

Relevance: 80.00%

Publisher:

Abstract:

Free-space optical interconnects (FSOIs), made up of dense arrays of vertical-cavity surface-emitting lasers, photodetectors and microlenses can be used for implementing high-speed and high-density communication links, and hence replace the inferior electrical interconnects. A major concern in the design of FSOIs is minimization of the optical channel cross talk arising from laser beam diffraction. In this article we introduce modifications to the mode expansion method of Tanaka et al. [IEEE Trans. Microwave Theory Tech. MTT-20, 749 (1972)] to make it an efficient tool for modelling and design of FSOIs in the presence of diffraction. We demonstrate that our modified mode expansion method has accuracy similar to the exact solution of the Huygens-Kirchhoff diffraction integral in cases of both weak and strong beam clipping, and that it is much more accurate than the existing approximations. The strength of the method is twofold: first, it is applicable in the region of pronounced diffraction (strong beam clipping) where all other approximations fail and, second, unlike the exact-solution method, it can be efficiently used for modelling diffraction on multiple apertures. These features make the mode expansion method useful for design and optimization of free-space architectures containing multiple optical elements inclusive of optical interconnects and optical clock distribution systems. (C) 2003 Optical Society of America.
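
The modified mode expansion method itself is not reproduced here, but the basic idea, expanding the clipped field on an aperture into a set of orthogonal free-space modes via overlap integrals and then propagating each mode analytically, can be illustrated with a minimal sketch for a radially symmetric clipped Gaussian beam expanded in Laguerre-Gaussian modes; the waist and aperture values are hypothetical.

```python
import numpy as np
from scipy.special import eval_laguerre

def lg_mode(p, r, w):
    """Radially symmetric (l = 0) Laguerre-Gaussian mode profile at its waist."""
    x = 2.0 * r**2 / w**2
    return np.sqrt(2.0 / np.pi) / w * eval_laguerre(p, x) * np.exp(-x / 2.0)

def expansion_coefficients(aperture_radius, w_beam, w_basis, n_modes=20, n_r=4000):
    """Overlap integrals c_p = int_0^a E(r) LG_p(r) 2*pi*r dr of the clipped field.
    Each c_p would then be propagated analytically (w(z), R(z), Gouy phase),
    which is what makes the method efficient for cascaded apertures."""
    r = np.linspace(0.0, aperture_radius, n_r)
    field = np.sqrt(2.0 / np.pi) / w_beam * np.exp(-r**2 / w_beam**2)  # unit-power Gaussian
    return np.array([np.trapz(field * lg_mode(p, r, w_basis) * 2.0 * np.pi * r, r)
                     for p in range(n_modes)])

a, w = 10e-6, 8e-6                      # hypothetical microlens aperture and beam waist
c = expansion_coefficients(a, w, w)
print(f"power captured by 20 modes: {np.sum(np.abs(c)**2):.4f}")  # bounded by the clipped power
```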

Relevance: 80.00%

Publisher:

Abstract:

The effect of electron beam radiation on a perfluoroalkoxy (PFA) resin was examined using solid-state high-speed magic angle spinning F-19 NMR spectroscopy and FT-IR spectroscopy. Samples were prepared for analysis by subjecting them to electron beam radiation in the dose range 0.5-2.0 MGy at 633 K, which is above the crystalline melting temperature. The new structures were identified and include new saturated chain ends, short and long branches, unsaturated groups, and cross-links. The radiation chemical yield (G value) of new long branch points was greater than the G value of new chain ends, suggesting that cross-linking is the net radiolytic process. This conclusion was supported by an observed decrease in the crystallinity and an increase in the optical clarity of the polymer.
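
As a numerical aside, the radiation chemical yield quoted above is conventionally the number of new groups formed per unit of absorbed energy; a minimal sketch of the conversion from an NMR-derived concentration change and an absorbed dose is shown below, with purely hypothetical input values.

```python
AVOGADRO = 6.02214e23          # 1/mol
EV_TO_J = 1.602177e-19         # J per eV

def g_value(delta_conc_mol_per_kg, dose_gray):
    """Radiation chemical yield of a new structure.
    Dose in gray is J/kg, so (mol/kg) / (J/kg) gives mol/J; the second value
    returned is the traditional 'events per 100 eV' form."""
    mol_per_joule = delta_conc_mol_per_kg / dose_gray
    per_100_ev = mol_per_joule * AVOGADRO * 100.0 * EV_TO_J
    return mol_per_joule, per_100_ev

# Hypothetical numbers: 0.15 mol/kg of new chain ends after a 1.5 MGy dose.
print(g_value(0.15, 1.5e6))    # -> (1e-07 mol/J, ~0.96 per 100 eV)
```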

Relevance: 80.00%

Publisher:

Abstract:

A numerical algorithm was developed to solve the thermochemical conversion of a solid fuel. It was designed to be flexible and to depend on the reaction mechanism being represented. To this end, the system of equations characteristic of this type of problem was solved with an iterative method combined with symbolic mathematics. Because of the nonlinearities in the equations, and since small particles are involved, Newton's method is applied to reduce the system of partial differential equations (PDEs) to a system of ordinary differential equations (ODEs). This reduction couples the iterative method with numerical differentiation, which allows analytical functions to be incorporated into the resulting ODEs. The reduced model is solved numerically using the biconjugate gradient (BiCG) technique. The model promises a high convergence rate with a small number of iterations, and delivers the solution of the resulting linear system quickly. In addition, the algorithm proves to be independent of the mesh size. For validation, the normalized mass is computed and compared with experimental thermogravimetry values from the literature, and a test with a simplified reaction mechanism is carried out.
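
To illustrate the linear-algebra step (a hedged sketch, not the paper's actual solver, mesh or mechanism): each Newton linearization of the reduced model yields a sparse system J·δ = −F, which can be solved with the biconjugate gradient method, here via SciPy on a hypothetical tridiagonal Jacobian.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import bicg

n = 200
# Hypothetical Jacobian: tridiagonal, diffusion-like, with a reaction term on the diagonal.
J = diags([-1.0, 2.4, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csr")
F = np.full(n, 0.01)                      # hypothetical Newton residual vector

delta, info = bicg(J, -F)                 # one Newton correction via BiCG
print("BiCG converged" if info == 0 else f"BiCG info = {info}",
      "| residual norm:", np.linalg.norm(J @ delta + F))
```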

Relevance: 80.00%

Publisher:

Abstract:

This work focused on the study of the impact event on molded parts in the framework of automotive components. The influence of the impact conditions and processing parameters on the mechanical behavior of talc-filled polypropylene specimens was analyzed. The specimens were lateral-gate discs produced by injection molding, and the mechanical characterization was performed through instrumented falling weight impact tests assisted by high-speed videography. Results analyzed using the analysis of variance (ANOVA) method show that, of the parameters considered, only dart diameter and test temperature have a significant influence on the falling weight impact properties. A larger dart diameter leads to higher peak force and peak energy, whereas higher test temperatures lead to lower values of peak force and peak energy. By means of high-speed videography, a more brittle fracture was observed for experiments with higher test velocity and dart diameter and lower test temperature. The injection-molding process conditions assessed in this study have an influence on the impact response of the moldings, mainly on their deformation capability.
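
The ANOVA screening described above can be reproduced in outline with standard statistical tooling; the data frame below is entirely hypothetical (the paper's raw measurements are not available) and only illustrates testing dart diameter and test temperature against peak force.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical 2x2 factorial impact-test results (dart diameter in mm,
# temperature in degC, peak force in kN), two replicates per cell.
data = pd.DataFrame({
    "dart_diameter": [10, 10, 20, 20, 10, 10, 20, 20],
    "temperature":   [-10, 23, -10, 23, -10, 23, -10, 23],
    "peak_force":    [1.85, 1.60, 2.40, 2.05, 1.90, 1.55, 2.45, 2.10],
})

model = ols("peak_force ~ C(dart_diameter) * C(temperature)", data=data).fit()
print(sm.stats.anova_lm(model, typ=2))   # factors with small p-values are significant
```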

Relevance: 80.00%

Publisher:

Abstract:

Lossless compression algorithms of the Lempel-Ziv (LZ) family are widely used nowadays. Regarding time and memory requirements, LZ encoding is much more demanding than decoding. In order to speed up the encoding process, efficient data structures, like suffix trees, have been used. In this paper, we explore the use of suffix arrays to hold the dictionary of the LZ encoder, and propose an algorithm to search over it. We show that the resulting encoder attains roughly the same compression ratios as those based on suffix trees. However, the amount of memory required by the suffix array is fixed, and much lower than the variable amount of memory used by encoders based on suffix trees (which depends on the text to encode). We conclude that suffix arrays, when compared to suffix trees in terms of the trade-off among time, memory, and compression ratio, may be preferable in scenarios (e.g., embedded systems) where memory is at a premium and high speed is not critical.
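
As an illustration of the approach (not the paper's exact algorithm), the sketch below builds a suffix array over the text and binary-searches it for the longest LZ77-style match starting before the current position; the naive construction and the truncated comparisons are deliberate simplifications.

```python
import bisect   # bisect(..., key=...) requires Python 3.10+

def build_suffix_array(text: bytes) -> list[int]:
    """Naive O(n^2 log n) construction; fine for a small illustration."""
    return sorted(range(len(text)), key=lambda i: text[i:])

def _lcp(text: bytes, i: int, j: int, limit: int) -> int:
    """Length of the common prefix of text[i:] and text[j:], capped at limit."""
    n, k = len(text), 0
    while k < limit and i + k < n and j + k < n and text[i + k] == text[j + k]:
        k += 1
    return k

def longest_match(text: bytes, sa: list[int], pos: int, max_len: int = 255):
    """LZ77-style search: longest prefix of text[pos:] occurring before pos.
    The insertion point of the pattern is found by binary search; the nearest
    dictionary suffix (start < pos) on each side gives the best candidate,
    because the common prefix can only shrink moving away from that point."""
    pattern = text[pos:pos + max_len]
    idx = bisect.bisect_left(sa, pattern, key=lambda i: text[i:i + max_len])
    best_pos, best_len = -1, 0
    j = idx - 1                                  # nearest valid suffix below
    while j >= 0 and sa[j] >= pos:
        j -= 1
    if j >= 0:
        length = _lcp(text, sa[j], pos, max_len)
        if length > best_len:
            best_pos, best_len = sa[j], length
    j = idx                                      # nearest valid suffix at/above
    while j < len(sa) and sa[j] >= pos:
        j += 1
    if j < len(sa):
        length = _lcp(text, sa[j], pos, max_len)
        if length > best_len:
            best_pos, best_len = sa[j], length
    return best_pos, best_len

text = b"abracadabra abracadabra"
sa = build_suffix_array(text)
print(longest_match(text, sa, 12))   # -> (0, 11): the earlier "abracadabra"
```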

Relevance: 80.00%

Publisher:

Abstract:

This work consists of the hardware implementation of dedicated, optimized functional units that perform the encoding and decoding operations defined in the lossy Joint Photographic Experts Group (JPEG) coding standard, ITU-T T.81 / ISO/IEC 10918-1. The standard is studied in order to characterize its main functional blocks. The purpose of this study is to investigate and propose optimizations that minimize the hardware required for each block, so that the resulting system achieves high compression ratios while minimizing distortion. The hardware reduction of each system, encoder and decoder, is achieved by manipulating the equations of the Forward Discrete Cosine Transform (FDCT) and Quantization (Q) blocks and of the Inverse Discrete Cosine Transform (IDCT) and Inverse Quantization (IQ) blocks. Based on the conclusions of this study, and on the analysis of known structures, each block was described in Very-High-Speed Integrated Circuits (VHSIC) Hardware Description Language (VHDL) and synthesized on a Field Programmable Gate Array (FPGA). Each implemented system executes its blocks in parallel in order to speed up encoding/decoding: in the encoder, the FDCT and Quantization operations are performed on two different matrices simultaneously, and the same applies to the decoder, composed of the Inverse Quantization and IDCT blocks. Each synthesized block was validated with test vectors obtained from the study. After integrating the blocks, it was verified that, for reference greyscale images with a resolution of 256 by 256 pixels, 820.5 µs are needed to encode an image and 830.5 µs to decode it. At a working frequency of 100 MHz, approximately 1200 images are processed per second.
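
The kind of FDCT/Quantization equation manipulation referred to above can be illustrated in software (a hedged numerical sketch, not the VHDL used in the work): the per-coefficient normalization factors of the DCT are pulled out of the transform and folded into the quantization divisors, so the transform itself becomes cheaper while the quantized output is unchanged. The quantization table is the example luminance table from Annex K of ITU-T T.81.

```python
import numpy as np

# Example luminance quantization table from Annex K of ITU-T T.81.
Q_LUM = np.array([
    [16, 11, 10, 16,  24,  40,  51,  61],
    [12, 12, 14, 19,  26,  58,  60,  55],
    [14, 13, 16, 24,  40,  57,  69,  56],
    [14, 17, 22, 29,  51,  87,  80,  62],
    [18, 22, 37, 56,  68, 109, 103,  77],
    [24, 35, 55, 64,  81, 104, 113,  92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103,  99]], dtype=float)

def fdct_quantize_reference(block):
    """Reference path: orthonormal 2-D FDCT followed by division by the table."""
    k = np.arange(8)
    C = 0.5 * np.cos((2 * k[None, :] + 1) * k[:, None] * np.pi / 16.0)
    C[0, :] = 1.0 / np.sqrt(8.0)
    return (C @ (block - 128.0) @ C.T) / Q_LUM

def fdct_quantize_merged(block):
    """Merged path: a cosine-only transform (cheaper in hardware) with the DCT
    normalization factors folded into the quantization divisors."""
    k = np.arange(8)
    Craw = np.cos((2 * k[None, :] + 1) * k[:, None] * np.pi / 16.0)
    alpha = np.full(8, 0.5)
    alpha[0] = 1.0 / np.sqrt(8.0)
    merged_q = Q_LUM / np.outer(alpha, alpha)   # scale factors absorbed by the quantizer
    return (Craw @ (block - 128.0) @ Craw.T) / merged_q

block = np.random.default_rng(0).integers(0, 256, (8, 8)).astype(float)
print(np.allclose(fdct_quantize_reference(block), fdct_quantize_merged(block)))  # True
```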

Relevance: 80.00%

Publisher:

Abstract:

Although the link between an RFID tag and the terminal is a well-known security weakness, RFID tags themselves carry no security mechanisms. There has recently been considerable research on protecting this link, but the physical constraints of RFID, namely low power consumption and high speed, make it impossible to apply conventional security techniques directly. At present, RFID security relies on a security server, a security policy and a security module; the best known of these, the security module, provides authentication and encryption. In this paper, we design and implement a modified version of the original SEED cipher for the tag, reduced to 8 rounds and a 64-bit block.
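
The modified cipher itself (SEED, with its S-box-based F-function and key schedule, cut down to 8 rounds and a 64-bit block) is not reproduced here; the sketch below is only a toy 8-round Feistel network on a 64-bit block with a placeholder round function, illustrating the structure such a reduction implies for a low-power tag.

```python
# Hedged illustration only: this is NOT the modified SEED, just a generic
# 8-round Feistel cipher on a 64-bit block with a toy round function.

MASK32 = 0xFFFFFFFF

def toy_round(half: int, subkey: int) -> int:
    """Placeholder round function (not SEED's F): add, rotate, xor."""
    x = (half + subkey) & MASK32
    x = ((x << 7) | (x >> 25)) & MASK32
    return x ^ subkey

def feistel_encrypt(block64: int, subkeys: list[int]) -> int:
    left, right = (block64 >> 32) & MASK32, block64 & MASK32
    for k in subkeys:                              # 8 rounds in the reduced variant
        left, right = right, left ^ toy_round(right, k)
    return (left << 32) | right

def feistel_decrypt(block64: int, subkeys: list[int]) -> int:
    left, right = (block64 >> 32) & MASK32, block64 & MASK32
    for k in reversed(subkeys):                    # undo the rounds in reverse order
        left, right = right ^ toy_round(left, k), left
    return (left << 32) | right

subkeys = [(0x9E3779B9 * (i + 1)) & MASK32 for i in range(8)]   # toy key schedule
c = feistel_encrypt(0x0123456789ABCDEF, subkeys)
assert feistel_decrypt(c, subkeys) == 0x0123456789ABCDEF
print(f"ciphertext: {c:016x}")
```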

Relevance: 80.00%

Publisher:

Abstract:

Introduction: Spinal manipulation is a manual therapy procedure performed at high velocity, with small amplitude, and usually at end range of motion. Recent studies suggest that lumbar spine manipulation has direct effects on the neurophysiological mechanisms of pain as well as on function. Objective: To evaluate the effects of lumbar manipulation on pain and function, in patients with acute low back pain of mechanical origin, on the day following the manipulation. Materials and Methods: Three patients of both sexes, aged between 31 and 35 years, with low back pain of less than eight days' duration and with restriction and pain on lumbar flexion, participated in this study. The Mitchell test was used to identify the dysfunctional lumbar vertebrae. The instruments used were the numeric pain scale (END) to assess pain and the Roland Morris low back disability questionnaire (QIRM) to assess function. Patients were assessed before the manipulation and on the day after its application. Only one lumbar manipulation was performed on each patient. Results: On the day after the intervention the patients showed reduced pain (6/10 vs 0/10; 5/10 vs 3/10; 4/10 vs 1/10) and improved function (7/24 vs 1/24; 16/24 vs 9/24; 8/24 vs 3/24). Conclusion: Based on the results obtained, it can be concluded that, in these three cases, the lumbar manipulation used had positive effects in reducing pain and improving function.

Relevance: 80.00%

Publisher:

Abstract:

Project work submitted in fulfilment of the requirements for the Master's degree in Electronics and Telecommunications Engineering.

Relevance: 80.00%

Publisher:

Abstract:

This project was developed within the ART-WiSe framework of the IPP-HURRAY group (http://www.hurray.isep.ipp.pt), at the Polytechnic Institute of Porto (http://www.ipp.pt). The ART-WiSe – Architecture for Real-Time communications in Wireless Sensor networks – framework (http://www.hurray.isep.ipp.pt/art-wise) aims at providing new communication architectures and mechanisms to improve the timing performance of Wireless Sensor Networks (WSNs). The architecture is based on a two-tiered protocol structure relying on existing standard communication protocols, namely IEEE 802.15.4 (Physical and Data Link Layers) and ZigBee (Network and Application Layers) for Tier 1, and IEEE 802.11 for Tier 2, which serves as a high-speed backbone for Tier 1 without energy consumption restrictions. Within this trend, an application test-bed is being developed with the objectives of implementing, assessing and validating the ART-WiSe architecture. Particularly for the ZigBee protocol, even though there is strong commercial backing from the ZigBee Alliance (http://www.zigbee.org), at the moment there is neither an open-source implementation available to the community nor any publication on its adequacy for larger-scale WSN applications. This project aims at filling these gaps by providing: a deep analysis of the ZigBee specification, mainly addressing the Network Layer and particularly its routing mechanisms; an identification of the ambiguities and open issues in the ZigBee protocol standard; proposals of solutions to those problems; an implementation of a subset of the ZigBee Network Layer, namely the association procedure and tree routing, on our technological platform (MICAz motes, TinyOS operating system and nesC programming language); and an experimental evaluation of that routing mechanism for WSNs.
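
The tree routing mentioned above builds on the distributed address assignment of the ZigBee specification. The sketch below (in Python rather than nesC, with illustrative Cm/Rm/Lm values) summarizes the Cskip address-block computation and the tree-routing next-hop rule that the project implements on the MICAz/TinyOS platform.

```python
# Hedged sketch of ZigBee distributed address assignment and tree routing.
# Parameter names (Cm = max children, Rm = max router children, Lm = max depth)
# follow the specification; the values below are illustrative only.

def cskip(depth, Cm, Rm, Lm):
    """Size of the address block handed to each router child of a node at this depth."""
    if depth >= Lm - 1:
        return 0
    if Rm == 1:
        return 1 + Cm * (Lm - depth - 1)
    return (1 + Cm - Rm - Cm * Rm ** (Lm - depth - 1)) // (1 - Rm)

def child_router_address(parent_addr, parent_depth, child_index, Cm, Rm, Lm):
    """Short address of the child_index-th (1-based) router child."""
    return parent_addr + (child_index - 1) * cskip(parent_depth, Cm, Rm, Lm) + 1

def tree_route_next_hop(my_addr, my_depth, my_parent, dest, Cm, Rm, Lm):
    """Next hop chosen by pure tree routing (no route discovery)."""
    skip = cskip(my_depth, Cm, Rm, Lm)
    if my_addr < dest < my_addr + cskip(my_depth - 1, Cm, Rm, Lm):
        if dest > my_addr + Rm * skip:
            return dest                                               # our end-device child
        return my_addr + 1 + ((dest - (my_addr + 1)) // skip) * skip  # route down the tree
    return my_parent                                                  # route up

Cm, Rm, Lm = 4, 4, 3
print(cskip(0, Cm, Rm, Lm))                              # coordinator's Cskip -> 21
print(tree_route_next_hop(my_addr=1, my_depth=1, my_parent=0,
                          dest=7, Cm=Cm, Rm=Rm, Lm=Lm))  # -> 7 (a router child)
```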

Relevance: 80.00%

Publisher:

Abstract:

Low-loss power transmission gears operate at lower temperature than conventional ones because their tooth geometry is optimized to reduce friction. The main objective of this work is to compare the operating stabilization temperature and efficiency of low-loss austempered ductile iron (ADI) and carburized steel gears. Three different low-loss tooth geometries were adopted (types 311, 411 and 611, all produced using standard 20° pressure angle tools) and the corresponding steel and ADI gears were tested in an FZG machine. The results showed that the low-loss geometries had a significant influence on power loss, with type 611 gears generating lower losses than type 311 gears. At low speeds (500 and 1000 rpm) and high torque, ADI gears generated lower power loss than steel gears. However, at high speed and high torque (high input power and high stabilization temperature) the steel gears were more efficient.
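
A commonly used first-order estimate of the average gear meshing (sliding) power loss is P_vzp = P_in · µ_mz · H_v, where H_v is Ohlendorf's gear loss factor; low-loss geometries reduce H_v mainly by shortening the addendum contact ratios. The sketch below is illustrative only (it is not claimed to be the loss model used in the paper) and all numbers are hypothetical.

```python
import math

def gear_loss_factor(z1, u, beta_b_deg, eps_alpha, eps_1, eps_2):
    """H_v = pi*(u+1)/(z1*u*cos(beta_b)) * (1 - eps_alpha + eps_1**2 + eps_2**2)."""
    beta_b = math.radians(beta_b_deg)
    return (math.pi * (u + 1.0) / (z1 * u * math.cos(beta_b))
            * (1.0 - eps_alpha + eps_1**2 + eps_2**2))

def meshing_power_loss(p_in_kw, mu_mz, hv):
    """Average sliding power loss (kW) for input power p_in_kw and mean friction mu_mz."""
    return p_in_kw * mu_mz * hv

# Hypothetical comparison: a conventional geometry vs. a low-loss geometry.
hv_conventional = gear_loss_factor(z1=20, u=1.5, beta_b_deg=0.0,
                                   eps_alpha=1.6, eps_1=0.8, eps_2=0.8)
hv_low_loss = gear_loss_factor(z1=38, u=1.5, beta_b_deg=0.0,
                               eps_alpha=1.1, eps_1=0.55, eps_2=0.55)
print(meshing_power_loss(200.0, 0.045, hv_conventional),   # ~1.6 kW
      meshing_power_loss(200.0, 0.045, hv_low_loss))        # ~0.6 kW
```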

Relevance: 80.00%

Publisher:

Abstract:

Dependability is a critical factor in computer systems, requiring high-quality validation and verification procedures during development. At the same time, digital devices are getting smaller and access to their internal signals and registers is increasingly difficult, requiring innovative debugging methodologies. To address this issue, most recent microprocessors include an on-chip debug (OCD) infrastructure to facilitate common debugging operations. This paper proposes an enhanced OCD infrastructure with the objective of supporting the verification of fault-tolerant mechanisms through fault injection campaigns. This upgraded on-chip debug and fault injection (OCD-FI) infrastructure provides an efficient fault injection mechanism with improved capabilities and dynamic behavior. Preliminary results show that this solution provides flexibility in fault triggering and allows high-speed, real-time fault injection in memory elements.
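
The OCD-FI hardware described above is not publicly available, so, as a loose software analogy only, the sketch below drives a generic on-chip debug port through OpenOCD's telnet interface to perform a single bit-flip in a memory word (halt, read, flip, write back, resume). The target address, bit position and port are hypothetical, and a real campaign would additionally automate triggering and result classification.

```python
import re
import socket

HOST, PORT = "localhost", 4444          # OpenOCD's default telnet interface

def ocd_command(sock, cmd):
    """Send one OpenOCD command and read text until the next prompt."""
    sock.sendall((cmd + "\n").encode())
    data = b""
    while not data.endswith(b"> "):     # OpenOCD's telnet prompt
        data += sock.recv(4096)
    return data.decode(errors="replace")

def inject_bit_flip(address, bit):
    """Flip one bit of a memory word on the halted target, then resume it."""
    with socket.create_connection((HOST, PORT), timeout=5) as sock:
        sock.recv(4096)                                  # consume the greeting banner
        ocd_command(sock, "halt")
        reply = ocd_command(sock, f"mdw {address:#x}")   # read the current word
        value = int(re.search(r":\s*([0-9a-fA-F]{8})", reply).group(1), 16)
        flipped = value ^ (1 << bit)
        ocd_command(sock, f"mww {address:#x} {flipped:#x}")
        ocd_command(sock, "resume")
        return value, flipped

if __name__ == "__main__":
    before, after = inject_bit_flip(0x20000000, bit=5)   # hypothetical SRAM address
    print(f"flipped 0x{before:08x} -> 0x{after:08x}")
```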