931 results for high-speed handpiece
Abstract:
Experimental work has been carried out to investigate the effect of major operating variables on the milling efficiency of calcium carbonate in laboratory- and pilot-scale Tower and Sala Agitated Mill (SAM) units. The results suggest that the stirrer speed, media size and slurry density affect the specific energy consumption required to achieve a given product size. Media stress intensity analysis developed for high-speed horizontal mills was modified to include the effect of gravitational force in vertical stirred mills such as the Tower and SAM units. The results suggest that this approach can be successfully applied to both mill types. For a given specific energy input, an optimum stress intensity range existed for which the finest product was achieved. A finer product, and therefore higher milling efficiency, was obtained with the SAM over the range of operating conditions tested. (C) 2001 Elsevier Science Ltd. All rights reserved.
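For context, the stress intensity analysis referred to above is usually based on Kwade's definition for high-speed horizontal stirred mills; a hedged sketch of that baseline expression (the gravity-corrected form for vertical mills is not given in the abstract, and the symbols are the usual ones from the stress-intensity literature, not the paper's) is:

```latex
% Stress intensity of the grinding media in a horizontal stirred mill:
% d_{GM}: media diameter, \rho_{GM}: media density, \rho: slurry density,
% v_t: stirrer tip speed.
SI_{GM} = d_{GM}^{3}\,(\rho_{GM}-\rho)\,v_{t}^{2}
```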
Abstract:
Wet agglomeration processes have traditionally been considered an empirical art, with great difficulties in predicting and explaining observed behaviour. Industry has faced a range of problems including large recycle ratios, poor product quality control, surging and even the total failure of scale up from laboratory to full scale production. However, in recent years there has been a rapid advancement in our understanding of the fundamental processes that control granulation behaviour and product properties. This review critically evaluates the current understanding of the three key areas of wet granulation processes: wetting and nucleation, consolidation and growth, and breakage and attrition. Particular emphasis is placed on the fact that there now exist theoretical models which predict or explain the majority of experimentally observed behaviour. Provided that the correct material properties and operating parameters are known, it is now possible to make useful predictions about how a material will granulate. The challenge that now faces us is to transfer these theoretical developments into industrial practice. Standard, reliable methods need to be developed to measure the formulation properties that control granulation behaviour, such as contact angle and dynamic yield strength. There also needs to be a better understanding of the flow patterns, mixing behaviour and impact velocities in different types of granulation equipment. (C) 2001 Elsevier Science B.V. All rights reserved.
Abstract:
There is considerable anecdotal evidence from industry that poor wetting and liquid distribution can lead to broad granule size distributions in mixer granulators. Current scale-up scenarios lead to poor liquid distribution and a wider product size distribution. There are two issues to consider when scaling up: the size and nature of the spray zone, and the powder flow patterns as a function of granulator scale. Short, nucleation-only experiments in a 25 L PMA Fielder mixer using lactose powder with water and HPC solutions demonstrated the existence of different nucleation regimes depending on the spray flux Psi(a), from drop-controlled nucleation to caking. In the drop-controlled regime at low Psi(a) values, each drop forms a single nucleus and the nuclei distribution is controlled by the spray droplet size distribution. As Psi(a) increases, the distribution broadens rapidly as the droplets overlap and coalesce in the spray zone. The results are in excellent agreement with previous experiments and confirm that for drop-controlled nucleation, Psi(a) should be less than 0.1. Granulator flow studies showed that there are two powder flow regimes: bumping and roping. The powder flow goes through a transition from bumping to roping as impeller speed is increased. The roping regime gives good bed turnover and stable flow patterns, and is recommended for good liquid distribution and nucleation. Powder surface velocities as a function of impeller speed were measured using high-speed video equipment and MetaMorph image analysis software. Powder surface velocities were 0.2 to 1 m s(-1), an order of magnitude lower than the impeller tip speed. Assuming geometrically similar granulators, impeller speed should be set to maintain a constant Froude number during scale-up, rather than constant tip speed, to ensure operation in the roping regime. (C) 2002 Published by Elsevier Science B.V.
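A short worked relation may make the scale-up rule concrete; this is the standard mixer Froude number, not a formula quoted from the paper:

```latex
% Froude number of a mixer granulator: N = impeller speed, D = impeller diameter.
Fr = \frac{N^{2} D}{g}
% Keeping Fr constant across geometrically similar scales gives
N_{2} = N_{1}\sqrt{D_{1}/D_{2}},
% whereas constant tip speed (N D = \text{const}) gives N_{2} = N_{1}\,D_{1}/D_{2},
% i.e. a lower impeller speed at the larger scale, risking a drop back into bumping flow.
```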
Abstract:
Free-space optical interconnects (FSOIs), made up of dense arrays of vertical-cavity surface-emitting lasers, photodetectors and microlenses can be used for implementing high-speed and high-density communication links, and hence replace the inferior electrical interconnects. A major concern in the design of FSOIs is minimization of the optical channel cross talk arising from laser beam diffraction. In this article we introduce modifications to the mode expansion method of Tanaka et al. [IEEE Trans. Microwave Theory Tech. MTT-20, 749 (1972)] to make it an efficient tool for modelling and design of FSOIs in the presence of diffraction. We demonstrate that our modified mode expansion method has accuracy similar to the exact solution of the Huygens-Kirchhoff diffraction integral in cases of both weak and strong beam clipping, and that it is much more accurate than the existing approximations. The strength of the method is twofold: first, it is applicable in the region of pronounced diffraction (strong beam clipping) where all other approximations fail and, second, unlike the exact-solution method, it can be efficiently used for modelling diffraction on multiple apertures. These features make the mode expansion method useful for design and optimization of free-space architectures containing multiple optical elements inclusive of optical interconnects and optical clock distribution systems. (C) 2003 Optical Society of America.
Abstract:
The effect of electron beam radiation on a perfluoroalkoxy (PFA) resin was examined using solid-state high-speed magic angle spinning F-19 NMR spectroscopy and FT-IR spectroscopy. Samples were prepared for analysis by subjecting them to electron beam radiation in the dose range 0.5-2.0 MGy at 633 K, which is above the crystalline melting temperature. The new structures were identified and include new saturated chain ends, short and long branches, unsaturated groups, and cross-links. The radiation chemical yield (G value) of new long branch points was greater than the G value of new chain ends, suggesting that cross-linking is the net radiolytic process. This conclusion was supported by an observed decrease in the crystallinity and an increase in the optical clarity of the polymer.
Abstract:
A numerical algorithm was developed to compute the solution of the thermochemical conversion of a solid fuel. It was designed to be flexible and to depend on the reaction mechanism being represented. To this end, the system of equations characteristic of this type of problem was solved with an iterative method combined with symbolic mathematics. Because of the nonlinearities in the equations, and because small particles are involved, Newton's method is applied to reduce the system of partial differential equations (PDEs) to a system of ordinary differential equations (ODEs). This reduction process is based on coupling the iterative method with numerical differentiation, since it allows analytical functions to be incorporated into the resulting ODEs. The reduced model is solved numerically using the bi-conjugate gradient (BCG) technique. The model promises a high convergence rate with a small number of iterations, as well as high speed in delivering the solutions of the resulting linear system. In addition, the algorithm is shown to be independent of the mesh size. For validation, the normalized mass is computed and compared with experimental thermogravimetric values from the literature, and a test with a simplified reaction mechanism is carried out.
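As an aside, the bi-conjugate gradient step mentioned above is available off the shelf; a minimal sketch (using scipy on a small hypothetical non-symmetric sparse system, not the authors' thermochemical model) would be:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import bicg

# Small hypothetical non-symmetric tridiagonal system standing in for the
# linear system produced at each step of the reduced ODE model.
n = 100
A = diags([-1.0, 2.5, -0.8], offsets=[-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

x, info = bicg(A, b)          # info == 0 signals convergence
print(info, np.linalg.norm(A @ x - b))
```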
Abstract:
This work focused on the study of the impact event on molded parts in the framework of automotive components. The influence of the impact conditions and processing parameters on the mechanical behavior of talc-filled polypropylene specimens was analyzed. The specimens were lateral-gate discs produced by injection molding, and the mechanical characterization was performed through instrumented falling weight impact tests concomitantly assisted with high-speed videography. Results analyzed using the analysis of variance (ANOVA) method have shown that from the considered parameters, only the dart diameter and test temperature have significant influence on the falling weight impact properties. Higher dart diameter leads to higher peak force and peak energy results. Conversely, higher levels of test temperatures lead to lower values of peak force and peak energy. By means of high-speed videography, a more brittle fracture was observed for experiments with higher levels of test velocity and dart diameter and lower levels of test temperature. The injection-molding process conditions assessed in this study have an influence on the impact response of moldings, mainly on the deformation capabilities of the moldings.
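For readers unfamiliar with the analysis used above, a one-way ANOVA on impact results can be run in a few lines; the numbers below are made-up placeholders, not data from the study:

```python
from scipy import stats

# Hypothetical peak-force readings (N) grouped by dart diameter (mm).
peak_force_d10 = [410, 425, 398, 417]
peak_force_d20 = [562, 548, 575, 559]

f_stat, p_value = stats.f_oneway(peak_force_d10, peak_force_d20)
print(f"F = {f_stat:.1f}, p = {p_value:.4f}")  # p < 0.05 -> diameter effect is significant
```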
Abstract:
Lossless compression algorithms of the Lempel-Ziv (LZ) family are widely used nowadays. Regarding time and memory requirements, LZ encoding is much more demanding than decoding. In order to speed up the encoding process, efficient data structures, like suffix trees, have been used. In this paper, we explore the use of suffix arrays to hold the dictionary of the LZ encoder, and propose an algorithm to search over it. We show that the resulting encoder attains roughly the same compression ratios as those based on suffix trees. However, the amount of memory required by the suffix array is fixed, and much lower than the variable amount of memory used by encoders based on suffix trees (which depends on the text to encode). We conclude that suffix arrays, when compared to suffix trees in terms of the trade-off among time, memory, and compression ratio, may be preferable in scenarios (e.g., embedded systems) where memory is at a premium and high speed is not critical.
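To make the suffix-array search concrete, here is a minimal sketch of longest-match lookup over a sorted suffix array; it uses a naive O(n^2 log n) construction and materializes suffix strings for clarity, which a real LZ encoder (including the paper's) would avoid:

```python
import bisect

def build_suffix_array(text):
    # Naive construction for illustration; production encoders use O(n log n) or O(n) algorithms.
    return sorted(range(len(text)), key=lambda i: text[i:])

def longest_match(text, sa, pattern):
    """Return (length, position) of the longest prefix of `pattern` found in `text`,
    by binary search over the suffix array: the best match is adjacent to the
    insertion point of `pattern` in the sorted suffixes."""
    suffixes = [text[i:] for i in sa]
    lo = bisect.bisect_left(suffixes, pattern)
    best_len, best_pos = 0, -1
    for idx in (lo - 1, lo):                   # only the two neighbours can be optimal
        if 0 <= idx < len(sa):
            s, l = suffixes[idx], 0
            while l < len(s) and l < len(pattern) and s[l] == pattern[l]:
                l += 1
            if l > best_len:
                best_len, best_pos = l, sa[idx]
    return best_len, best_pos

dictionary = "abracadabra"
sa = build_suffix_array(dictionary)
print(longest_match(dictionary, sa, "abrac"))  # -> (5, 0)
```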
Abstract:
The recent IEEE 802.11n standard offers high throughput in wireless local area networks and is therefore expected to see massive adoption, progressively replacing 802.11b/g networks. Due to its high capacity, this recent generation of 802.11n wireless networks enables a marked growth of audiovisual services. In this context, this dissertation studies the 802.11n network, characterizing the performance and quality associated with a video transmission service by means of an 802.11n network simulation architecture. The impact of the new MAC-layer features introduced in the 802.11n standard, such as A-MSDU and A-MPDU aggregation, is characterized, as well as the impact of the new physical-layer features such as MIMO; in both cases the parameterization is optimized. It is also shown that the main H.264/AVC video coding techniques used to optimize the video distribution process also improve the overall performance of the transmission system. Combining the optimization and parameterization of the MAC layer, the physical layer and the coding process, a set of configurations is proposed that yields the best quality-of-service performance for video content transmission over an 802.11n network. The simulation architecture built in this dissertation is specifically adapted to support the MAC-layer aggregation techniques, as well as the encapsulation in network protocols that allows the transmission of H.264/AVC-encoded RTP video packets.
Abstract:
This work consists of the hardware implementation of dedicated, optimized functional units for the encoding and decoding operations defined in the Joint Photographic Experts Group (JPEG) lossy coding standard, ITU-T T.81 / ISO/IEC 10918-1. A study of this standard is carried out in order to characterize its main functional blocks. The purpose of this study is to investigate and propose optimizations that minimize the hardware required for each block, so that the resulting system achieves high compression ratios while minimizing distortion. The hardware reduction of each system, encoder and decoder, is achieved by manipulating the equations of the Forward Discrete Cosine Transform (FDCT) and Quantization (Q) blocks and of the Inverse Discrete Cosine Transform (IDCT) and Inverse Quantization (IQ) blocks. Based on the conclusions of this study and on the analysis of known structures, each block was described in Very-High-Speed Integrated Circuits (VHSIC) Hardware Description Language (VHDL) and synthesized on a Field Programmable Gate Array (FPGA). Each implemented system executes its blocks in parallel in order to optimize encoding/decoding. Thus, in the encoder, the FDCT and Quantization operation is performed on two different matrices simultaneously. The same applies to the decoder, composed of the Inverse Quantization and IDCT blocks. The validation of each synthesized block is carried out using test vectors obtained from the study. After integrating the blocks, it was verified that, for greyscale reference images with a resolution of 256 by 256 pixels, 820.5 μs are required to encode an image and 830.5 μs to decode it. Considering a working frequency of 100 MHz, approximately 1200 images are processed per second.
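As a software reference for what the FDCT and Quantization blocks compute, here is a minimal sketch on one 8x8 tile, using the familiar Annex K luminance quantization table; this is an illustration only, not the optimized hardware datapath described above:

```python
import numpy as np
from scipy.fftpack import dct

# ITU-T T.81 Annex K luminance quantization table.
Q_LUMA = np.array([
    [16, 11, 10, 16, 24, 40, 51, 61],
    [12, 12, 14, 19, 26, 58, 60, 55],
    [14, 13, 16, 24, 40, 57, 69, 56],
    [14, 17, 22, 29, 51, 87, 80, 62],
    [18, 22, 37, 56, 68, 109, 103, 77],
    [24, 35, 55, 64, 81, 104, 113, 92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103, 99],
])

def fdct_quantize(block):
    """Level shift, 2-D FDCT (separable, orthonormal) and quantization of an 8x8 block."""
    shifted = block.astype(float) - 128.0
    coeffs = dct(dct(shifted, axis=0, norm="ortho"), axis=1, norm="ortho")
    return np.round(coeffs / Q_LUMA).astype(int)

block = np.random.default_rng(0).integers(0, 256, size=(8, 8))
print(fdct_quantize(block))
```

The quoted throughput is consistent with the per-image times: 1 / 830.5 μs is roughly 1200 images per second.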
Abstract:
The link between an RFID tag and a terminal has always been weak from a security standpoint, since RFID tags provide no built-in security mechanisms. Recently there has been considerable research into protecting this link, but the physical limitations of RFID, namely low power consumption and the need for high speed, make conventional techniques unsuitable. At present, RFID security methods rely on a security server, a security policy and security modules; the best-known of these is the security module, which provides authentication and encryption functions. In this paper, we designed and implemented a modified version of the original SEED cipher, reduced to 8 rounds and a 64-bit block, for use on the tag.
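To illustrate the kind of reduction described (8 rounds, 64-bit block), here is a toy Feistel sketch; it is emphatically not the SEED algorithm, whose round function, S-boxes and key schedule are defined in the standard, and the subkeys below are hypothetical:

```python
MASK32 = 0xFFFFFFFF

def toy_round_f(half, subkey):
    # Placeholder round function: NOT the real SEED F-function, which is
    # built from S-boxes and a G-function defined in the standard.
    x = (half ^ subkey) & MASK32
    return (((x << 5) | (x >> 27)) ^ (x * 0x9E3779B1)) & MASK32

def toy_encrypt_block(block64, subkeys):
    """Encrypt one 64-bit block with an 8-round Feistel network, mirroring the
    reduced round count and block size described above."""
    left, right = block64 >> 32, block64 & MASK32
    for k in subkeys:                      # 8 subkeys -> 8 rounds
        left, right = right, left ^ toy_round_f(right, k)
    return (left << 32) | right

subkeys = [0x0F0E0D0C + i for i in range(8)]     # hypothetical key schedule
print(hex(toy_encrypt_block(0x0123456789ABCDEF, subkeys)))
```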
Abstract:
Introduction: Spinal manipulation is a manual therapy procedure performed at high velocity, with small amplitude and usually at the end of the range of motion. Recent studies suggest that lumbar spine manipulation has direct effects on the neurophysiological mechanisms of pain as well as on function. Objective: To evaluate the effects of lumbar manipulation on pain and function, in patients with acute low back pain of mechanical origin, on the day following the manipulation. Materials and Methods: Three patients of both sexes, aged between 31 and 35 years, with low back pain complaints of less than eight days' duration and presenting restriction and pain in lumbar flexion movements, participated in this study. The Mitchell test was used to identify the dysfunctional lumbar vertebrae. The instruments used were the numeric pain rating scale to assess pain and the Roland Morris low back disability questionnaire to assess function. Patients were assessed before the manipulation and on the day after its application. Only one lumbar manipulation was performed on each patient. Results: On the day after the intervention the patients showed reduced pain (6/10 vs 0/10; 5/10 vs 3/10; 4/10 vs 1/10) and improved function (7/24 vs 1/24; 16/24 vs 9/24; 8/24 vs 3/24). Conclusion: Based on the results obtained, it can be concluded that, in these three cases, the lumbar manipulation used had positive effects in reducing pain and improving function.
Abstract:
Final project submitted for the Master's degree in Civil Engineering
Abstract:
Project work submitted for the Master's degree in Electronics and Telecommunications Engineering
Abstract:
This project was developed within the ART-WiSe framework of the IPP-HURRAY group (http://www.hurray.isep.ipp.pt), at the Polytechnic Institute of Porto (http://www.ipp.pt). The ART-WiSe (Architecture for Real-Time communications in Wireless Sensor networks) framework (http://www.hurray.isep.ipp.pt/art-wise) aims at providing new communication architectures and mechanisms to improve the timing performance of Wireless Sensor Networks (WSNs). The architecture is based on a two-tiered protocol structure, relying on existing standard communication protocols, namely IEEE 802.15.4 (Physical and Data Link Layers) and ZigBee (Network and Application Layers) for Tier 1, and IEEE 802.11 for Tier 2, which serves as a high-speed backbone for Tier 1 without energy consumption restrictions. In this context, an application test-bed is being developed with the objectives of implementing, assessing and validating the ART-WiSe architecture. In the particular case of the ZigBee protocol, even though there is a strong commercial lobby from the ZigBee Alliance (http://www.zigbee.org), there is currently neither an open-source implementation available to the community nor are there publications on its adequacy for larger-scale WSN applications. This project aims at filling these gaps by providing: a deep analysis of the ZigBee specification, mainly addressing the Network Layer and particularly its routing mechanisms; an identification of the ambiguities and open issues in the ZigBee protocol standard; proposed solutions to those problems; an implementation of a subset of the ZigBee Network Layer, namely the association procedure and tree routing, on our technological platform (MICAz motes, TinyOS operating system and nesC programming language); and an experimental evaluation of that routing mechanism for WSNs.
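For reference, the tree routing mentioned above is driven by ZigBee's distributed address assignment; a minimal sketch of the commonly cited Cskip rule follows (parameter and function names are ours, corner cases are simplified, and this is not the project's nesC implementation):

```python
def cskip(depth, cm, rm, lm):
    """Size of the address sub-block handed to each router child at `depth`
    (cm = nwkMaxChildren, rm = nwkMaxRouters, lm = nwkMaxDepth)."""
    if rm == 1:
        return 1 + cm * (lm - depth - 1)
    return (1 + cm - rm - cm * rm ** (lm - depth - 1)) // (1 - rm)

def tree_route_next_hop(my_addr, my_depth, dest, cm, rm, lm):
    """Next-hop address chosen by a router using only the addressing scheme
    (no route discovery). Returns 'parent' when the packet must go up."""
    block = cskip(my_depth, cm, rm, lm)
    is_descendant = my_depth == 0 or (
        my_addr < dest < my_addr + cskip(my_depth - 1, cm, rm, lm))
    if is_descendant:
        if dest > my_addr + rm * block:
            return dest                      # directly attached end device
        # Forward to the router child whose address block contains dest.
        return my_addr + 1 + ((dest - (my_addr + 1)) // block) * block
    return "parent"                          # route up the tree

# Example: coordinator (address 0, depth 0) with cm=4, rm=4, lm=3
# forwards a packet for address 9 to the router child that owns it (address 1).
print(tree_route_next_hop(0, 0, 9, cm=4, rm=4, lm=3))
```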