928 results for High Speed Flow


Relevance: 80.00%

Abstract:

Purpose: The objective of this in vitro study was to compare the degree of microleakage of composite restorations performed with lasers and conventional drills associated with two adhesive systems. Materials and Methods: Sixty bovine teeth were divided into 6 groups (n = 10). The preparations were performed in groups 1 and 2 with a high-speed drill (HD), in groups 3 and 5 with an Er:YAG laser, and in groups 4 and 6 with an Er,Cr:YSGG laser. The specimens were restored with resin composite associated with an etch-and-rinse two-step adhesive system (Single Bond 2 [SB]) (groups 1, 3, 4) or a self-etching adhesive (One-Up Bond F [OB]) (groups 2, 5, 6). After storage, the specimens were polished, thermocycled, immersed in 50% silver nitrate tracer solution, and then sectioned longitudinally. The specimens were placed under a stereomicroscope (25X) and digital images were obtained. These were evaluated by three blinded evaluators who assigned a microleakage score (0 to 3). The original data were submitted to Kruskal-Wallis and Mann-Whitney statistical tests. Results: The occlusal/enamel margins demonstrated no differences in microleakage for any treatment (p > 0.05). The gingival/dentin margins presented similar microleakage in cavities prepared with Er:YAG, Er,Cr:YSGG, and HD using the etch-and-rinse two-step adhesive system (SB) (p > 0.05); in contrast, both Er:YAG and Er,Cr:YSGG lasers demonstrated lower microleakage scores with the OB than with the SB adhesive (p < 0.05). Conclusion: The microleakage score at gingival margins depends on the interaction between the hard-tissue removal tool and the adhesive system used. The self-etching adhesive system had a lower microleakage score at dentin margins for cavities prepared with Er:YAG and Er,Cr:YSGG than the etch-and-rinse two-step adhesive system.
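The Mann-Whitney test used above is rank-based and works directly on ordinal scores like these. A minimal sketch of its U statistic, on hypothetical score data (the abstract does not report the raw scores, so these values are illustrative only), shows why a laser + OB group would separate from a laser + SB group:

```python
def mann_whitney_u(a, b):
    """Mann-Whitney U statistic: the number of (a_i, b_j) pairs with
    a_i > b_j, counting ties as 0.5. A large U relative to len(a)*len(b)
    means group a tends to score higher than group b."""
    u = 0.0
    for x in a:
        for y in b:
            if x > y:
                u += 1.0
            elif x == y:
                u += 0.5
    return u

# Hypothetical microleakage scores (0-3), n = 10 per group as in the study;
# the actual measured scores are not given in the abstract.
er_yag_sb = [2, 2, 3, 2, 3, 2, 2, 3, 3, 2]   # Er:YAG + etch-and-rinse (SB)
er_yag_ob = [0, 1, 0, 1, 1, 0, 0, 1, 0, 1]   # Er:YAG + self-etching (OB)

u = mann_whitney_u(er_yag_sb, er_yag_ob)
print(u, u / (len(er_yag_sb) * len(er_yag_ob)))   # -> 100.0 1.0
```

A U of 100 out of a possible 100 means every SB score exceeded every OB score; a significance test would then compare U against its null distribution.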

Relevance: 80.00%

Abstract:

Background and Objectives: The Er:YAG laser has been used for caries removal and cavity preparation, using ablative parameters. Its effect on the margins of restorations submitted to cariogenic challenge has not yet been sufficiently investigated. The aim of this study was to assess, under polarized light microscopy, the enamel adjacent to restored Er:YAG laser-prepared cavities submitted to cariogenic challenge in situ. Study Design/Materials and Methods: Ninety-one enamel slabs were randomly assigned to seven groups (n = 13): I, II, III - Er:YAG laser at 250 mJ, 62.5 J/cm², combined with 2, 3, and 4 Hz, respectively; IV, V, VI - Er:YAG laser at 350 mJ, 87.5 J/cm², combined with 2, 3, and 4 Hz, respectively; VII - high-speed handpiece (control). Cavities were restored and the restorations were polished. The slabs were fixed to intra-oral appliances, worn by 13 volunteers for 14 days. Sucrose solution was applied to each slab six times per day. Samples were removed, cleaned, sectioned and ground for polarized light microscopic analysis. Demineralized area and inhibition zone width were quantitatively assessed. The presence or absence of cracks was also analyzed. Scores for demineralization and inhibition zone were determined. Results: No difference was found among the groups with regard to demineralized area, inhibition zone width, presence or absence of cracks, or demineralization score. The inhibition zone score showed a difference among the groups. There was a correlation between the quantitative measures and the scores. Conclusion: The Er:YAG laser was similar to the high-speed handpiece with regard to alterations in enamel adjacent to restorations submitted to cariogenic challenge in situ. The inhibition zone score might suggest less demineralization at the restoration margin of the irradiated substrates. The correlation between the quantitative measures and the scores indicates that scoring was, in this case, a suitable complementary method for assessing caries lesions around restorations under polarized light microscopy. Lasers Surg. Med. 40:634-643, 2008. (c) 2008 Wiley-Liss, Inc.
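The correlation between continuous measures (e.g. demineralized area) and ordinal 0-3 scores reported above would typically be a rank correlation. A minimal Spearman's rho in pure Python, on hypothetical area/score pairs (the real data are not in the abstract), might look like:

```python
def ranks(xs):
    """Average ranks (1-based); tied values share the mean of their ranks."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1                      # extend over the tie group
        avg_rank = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg_rank
        i = j + 1
    return r

def spearman(x, y):
    """Spearman's rho = Pearson correlation of the rank vectors."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

# Hypothetical demineralized areas (arbitrary units) and matching 0-3 scores.
areas = [120.0, 85.0, 140.0, 60.0, 95.0]
scores = [2, 1, 3, 0, 1]
print(round(spearman(areas, scores), 3))   # -> 0.975
```

A rho near 1 is the situation the abstract describes: the score tracks the quantitative measure closely enough to serve as a complementary method.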

Relevance: 80.00%

Abstract:

This study evaluated the effect of 2% chlorhexidine digluconate (CHX), used as a therapeutic primer, on the long-term bond strengths of two etch-and-rinse adhesives to normal (ND) and caries-affected (CAD) dentin. Forty extracted human molars with coronal carious lesions, surrounded by normal dentin, were selected for this study. The flat surfaces of two types of dentin (ND and CAD) were prepared with a water-cooled high-speed diamond disc, then acid-etched, rinsed and air-dried. In the control groups, the dentin was re-hydrated with distilled water, blot-dried and bonded with a three-step (Scotchbond Multi-Purpose, MP) or two-step (Single Bond 2, SB) etch-and-rinse adhesive. In the experimental groups, the dentin was re-hydrated with 2% CHX (60 seconds), blot-dried and bonded with the same adhesives. Resin composite build-ups were made. The specimens were prepared for microtensile bond testing in accordance with the non-trimming technique, then tested either immediately or after six months of storage in artificial saliva. The data were analyzed by ANOVA/Bonferroni tests (alpha = 0.05). CHX did not affect the immediate bond strength to ND or CAD (p > 0.05). CHX treatment significantly lowered the loss of bond strength after six months seen in the control bonds for ND (p < 0.05), but it did not alter the bond strength to CAD (p > 0.05). The application of MP to CHX-treated ND or CAD produced bonds that did not change over six months of storage.

Relevance: 80.00%

Abstract:

Considering the increasing use of esthetic restorative materials and the need to replace unsatisfactory restorations with minimal inadvertent removal of healthy tissue, this study assessed the efficacy of the erbium:yttrium-aluminum-garnet (Er:YAG) laser for composite resin removal and the influence of pulse repetition rate on the morphology of the resulting cavity, analyzed by scanning electron microscopy. Composite resin fillings were placed in cavities (1.0 mm deep) prepared in bovine teeth, and the 75 specimens were randomly assigned to five groups according to the technique used for composite filling removal (high-speed diamond bur, group I, as a control, and Er:YAG laser, 250 mJ output energy and 80 J/cm² energy density, at different pulse repetition rates: group II, 2 Hz; group III, 4 Hz; group IV, 6 Hz; group V, 10 Hz). After the removal, the specimens were split in the middle, and the surrounding and deep walls were analyzed to check for the presence of restorative material. The assessment was qualitative. The surfaces were examined with a scanning electron microscope. The results revealed that the experimental groups presented greater amounts of remaining restorative material. The scanning electron microscopy (SEM) analyses showed irregularities in the resulting cavities of the experimental groups that increased proportionally with the repetition rate.

Relevance: 80.00%

Abstract:

Experimental work has been carried out to investigate the effect of major operating variables on milling efficiency of calcium carbonate in laboratory and pilot size Tower and Sala Agitated (SAM) mills. The results suggest that the stirrer speed, media size and slurry density affect the specific energy consumption required to achieve the given product size. Media stress intensity analysis developed for high-speed horizontal mills was modified to include the effect of gravitational force in the vertical stirred mills such as the Tower and SAM units. The results suggest that this approach can be successfully applied for both mill types. For a given specific energy input, an optimum stress intensity range existed, for which the finest product was achieved. Finer product and therefore higher milling efficiency was obtained with SAM in the range of operating conditions tested. (C) 2001 Elsevier Science Ltd. All rights reserved.
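The media stress intensity mentioned above is, for horizontal stirred mills, commonly written as SI = d³ · (ρ_media - ρ_slurry) · v², the kinetic-energy scale of a single bead collision; the paper's gravitational modification for vertical Tower/SAM mills is its own contribution and is not reproduced here. A sketch of the base quantity, with illustrative values (not taken from the abstract):

```python
def stress_intensity(d_media_m, rho_media, rho_slurry, tip_speed):
    """Kwade-type media stress intensity for horizontal stirred mills:
    SI = d^3 * (rho_media - rho_slurry) * v^2, in joules.
    d_media_m: bead diameter [m]; densities [kg/m^3]; tip_speed [m/s]."""
    return d_media_m ** 3 * (rho_media - rho_slurry) * tip_speed ** 2

# Illustrative operating point: 2 mm steel media in a calcium carbonate
# slurry at 4 m/s stirrer tip speed.
si = stress_intensity(2e-3, 7800.0, 1500.0, 4.0)
print(f"{si:.3e} J")   # -> 8.064e-04 J
```

Sweeping bead size and tip speed with this function is how the optimum stress-intensity range for a given specific energy input would be located.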

Relevance: 80.00%

Abstract:

Free-space optical interconnects (FSOIs), made up of dense arrays of vertical-cavity surface-emitting lasers, photodetectors and microlenses can be used for implementing high-speed and high-density communication links, and hence replace the inferior electrical interconnects. A major concern in the design of FSOIs is minimization of the optical channel cross talk arising from laser beam diffraction. In this article we introduce modifications to the mode expansion method of Tanaka et al. [IEEE Trans. Microwave Theory Tech. MTT-20, 749 (1972)] to make it an efficient tool for modelling and design of FSOIs in the presence of diffraction. We demonstrate that our modified mode expansion method has accuracy similar to the exact solution of the Huygens-Kirchhoff diffraction integral in cases of both weak and strong beam clipping, and that it is much more accurate than the existing approximations. The strength of the method is twofold: first, it is applicable in the region of pronounced diffraction (strong beam clipping) where all other approximations fail and, second, unlike the exact-solution method, it can be efficiently used for modelling diffraction on multiple apertures. These features make the mode expansion method useful for design and optimization of free-space architectures containing multiple optical elements inclusive of optical interconnects and optical clock distribution systems. (C) 2003 Optical Society of America.

Relevance: 80.00%

Abstract:

The ability to generate enormous random libraries of DNA probes via split-and-mix synthesis on solid supports is an important biotechnological application of colloids that has not been fully utilized to date. To discriminate between colloid-based DNA probes, each colloidal particle must be 'encoded' so that it is distinguishable from all other particles. To this end, we have used novel particle synthesis strategies to produce large numbers of optically encoded particles suitable for DNA library synthesis. Multifluorescent particles with unique and reproducible optical signatures (i.e., fluorescence and light-scattering attributes) suitable for high-throughput flow cytometry have been produced. In the spectroscopic study presented here, we investigated the optical characteristics of multifluorescent particles that were synthesized by coating silica 'core' particles with up to six different fluorescent dye shells alternated with non-fluorescent silica 'spacer' shells. It was observed that the diameter of the particles increased by up to 20% as a result of the addition of twelve concentric shells, and that there was a significant reduction in fluorescence emission intensities from inner shells as an increasing number of shells were deposited.

Relevance: 80.00%

Abstract:

The effect of electron beam radiation on a perfluoroalkoxy (PFA) resin was examined using solid-state high-speed magic angle spinning F-19 NMR spectroscopy and FT-IR spectroscopy. Samples were prepared for analysis by subjecting them to electron beam radiation in the dose range 0.5-2.0 MGy at 633 K, which is above the crystalline melting temperature. The new structures were identified and include new saturated chain ends, short and long branches, unsaturated groups, and cross-links. The radiation chemical yield (G value) of new long branch points was greater than the G value of new chain ends, suggesting that cross-linking is the net radiolytic process. This conclusion was supported by an observed decrease in the crystallinity and an increase in the optical clarity of the polymer.

Relevance: 80.00%

Abstract:

A numerical algorithm was developed to solve the thermochemical conversion of a solid fuel. It was designed to be flexible and dependent on the reaction mechanism being represented. To this end, a system of the equations characteristic of this type of problem was solved through an iterative method combined with symbolic mathematics. Because of nonlinearities in the equations, and since small particles are involved, Newton's method is applied to reduce the system of partial differential equations (PDEs) to a system of ordinary differential equations (ODEs). This reduction process is based on combining the iterative method with numerical differentiation, since it can incorporate analytical functions into the resulting ODEs. The reduced model is solved numerically using the biconjugate gradient (BCG) technique. This approach promises a high convergence rate with a low number of iterations, as well as high speed in producing the solutions of the newly generated linear system. Moreover, the algorithm proves to be independent of the mesh size. For validation, the normalized mass is calculated and compared with experimental thermogravimetry values found in the literature, and a test with a simplified reaction mechanism is carried out.
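The core of the reduction described above, Newton's method turning a nonlinear system into repeated linear solves, can be sketched on a toy two-equation system. The thesis hands the inner linear system to BiCG; for a 2x2 system a direct Cramer's-rule solve is enough to illustrate the outer loop:

```python
def newton_2x2(f, jac, x0, tol=1e-10, max_iter=50):
    """Newton's method for a two-variable nonlinear system F(x, y) = 0.
    jac returns the Jacobian entries (df1/dx, df1/dy, df2/dx, df2/dy);
    each iteration solves J * delta = -F (here by Cramer's rule; a large
    system would use an iterative solver such as BiCG)."""
    x, y = x0
    for _ in range(max_iter):
        f1, f2 = f(x, y)
        if abs(f1) < tol and abs(f2) < tol:
            break
        a, b, c, d = jac(x, y)
        det = a * d - b * c
        dx = (-f1 * d + f2 * b) / det   # Cramer's rule for the update
        dy = (-f2 * a + f1 * c) / det
        x, y = x + dx, y + dy
    return x, y

# Toy system (not from the thesis): x^2 + y^2 = 4 and x = y,
# whose positive solution is x = y = sqrt(2).
sol = newton_2x2(
    lambda x, y: (x * x + y * y - 4.0, x - y),
    lambda x, y: (2 * x, 2 * y, 1.0, -1.0),
    (1.0, 1.0),
)
print(sol)
```

The quadratic convergence visible here (a handful of iterations to machine precision) is the "high convergence rate with a low number of iterations" the abstract refers to.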

Relevance: 80.00%

Abstract:

This work focused on the study of the impact event on molded parts in the framework of automotive components. The influence of the impact conditions and processing parameters on the mechanical behavior of talc-filled polypropylene specimens was analyzed. The specimens were lateral-gate discs produced by injection molding, and the mechanical characterization was performed through instrumented falling weight impact tests concomitantly assisted with high-speed videography. Results analyzed using the analysis of variance (ANOVA) method have shown that from the considered parameters, only the dart diameter and test temperature have significant influence on the falling weight impact properties. Higher dart diameter leads to higher peak force and peak energy results. Conversely, higher levels of test temperatures lead to lower values of peak force and peak energy. By means of high-speed videography, a more brittle fracture was observed for experiments with higher levels of test velocity and dart diameter and lower levels of test temperature. The injection-molding process conditions assessed in this study have an influence on the impact response of moldings, mainly on the deformation capabilities of the moldings.

Relevance: 80.00%

Abstract:

A fast and direct surface plasmon resonance (SPR) method for the kinetic analysis of the interactions between peptide antigens and immobilised monoclonal antibodies (mAb) has been established. Protocols have been developed to overcome the problems posed by the small size of the analytes (< 1600 Da). The interactions were well described by a simple 1:1 bimolecular interaction and the rate constants were self-consistent and reproducible. The key features for the accuracy of the kinetic constants measured were high buffer flow rates, medium antibody surface densities and high peptide concentrations. The method was applied to an extensive analysis of over 40 peptide analogues towards two distinct anti-FMDV antibodies, providing data in total agreement with previous competition ELISA experiments. Eleven linear 15-residue synthetic peptides, reproducing all possible combinations of the four replacements found in foot-and-mouth disease virus (FMDV) field isolate C-S30, were evaluated. The direct kinetic SPR analysis of the interactions between these peptides and three anti-site A mAbs suggested additivity in all combinations of the four relevant mutations, which was confirmed by parallel ELISA analysis. The four-point mutant peptide (A15S30) reproducing site A from the C-S30 strain was the least antigenic of the set, in disagreement with previously reported studies with the virus isolate. Increasing peptide size from 15 to 21 residues did not significantly improve antigenicity. Overnight incubation of A15S30 with mAb 4C4 in solution showed a marked increase in peptide antigenicity not observed for other peptide analogues, suggesting that conformational rearrangement could lead to a stable peptide-antibody complex. In fact, peptide cyclization clearly improved antigenicity, confirming an antigenic reversion in a multiply substituted peptide. 
Solution NMR studies of both linear and cyclic versions of the antigenic loop of FMDV C-S30 showed that structural features previously correlated with antigenicity were more pronounced in the cyclic peptide. Twenty-six synthetic peptides, corresponding to all possible combinations of five single-point antigenicity-enhancing replacements in the GH loop of FMDV C-S8c1, were also studied. SPR kinetic screening of these peptides was not possible due to problems mainly related to the high mAb affinities displayed by these synthetic antigens. Solution affinity SPR analysis was employed and the affinities displayed were generally comparable to or even higher than those corresponding to the C-S8c1 reference peptide A15. The NMR characterisation of one of these multiple mutants in solution showed that it had a conformational behaviour quite similar to that of the native sequence A15, and the X-ray diffraction crystallographic analysis of the peptide-mAb 4C4 complex showed paratope-epitope interactions identical to those in all FMDV peptide-mAb complexes studied so far. Key residues for these interactions are those directly involved in epitope-paratope contacts (141Arg, 143Asp, 146His) as well as residues able to stabilise a particular global peptide folding. A quasi-cyclic conformation is held up by a hydrophobic cavity defined by residues 138, 144 and 147 and by other key intrapeptide hydrogen bonds, delineating an open turn at positions 141, 142 and 143 (corresponding to the Arg-Gly-Asp motif).
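The "simple 1:1 bimolecular interaction" model fitted throughout is dR/dt = ka*C*(Rmax - R) - kd*R; during the association phase it has a closed form, sketched below. The rate constants and Rmax are illustrative values, not the measured anti-FMDV numbers:

```python
import math

def spr_association(t, conc, ka, kd, rmax):
    """Analytic SPR response during association for 1:1 Langmuir binding:
    dR/dt = ka*C*(Rmax - R) - kd*R with R(0) = 0.
    conc [M], ka [1/(M*s)], kd [1/s], rmax [response units]."""
    k_obs = ka * conc + kd                  # observed rate constant
    r_eq = ka * conc * rmax / k_obs         # steady-state response
    return r_eq * (1.0 - math.exp(-k_obs * t))

# Illustrative constants: ka = 1e5 1/(M*s), kd = 1e-2 1/s,
# 1 uM peptide over a surface with Rmax = 100 RU.
r60 = spr_association(60.0, 1e-6, 1e5, 1e-2, 100.0)
print(round(r60, 1))   # -> 90.8
```

Fitting k_obs at several analyte concentrations separates ka and kd, which is why the protocol above stresses high peptide concentrations and a range of them.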

Relevance: 80.00%

Abstract:

Lossless compression algorithms of the Lempel-Ziv (LZ) family are widely used nowadays. Regarding time and memory requirements, LZ encoding is much more demanding than decoding. In order to speed up the encoding process, efficient data structures, like suffix trees, have been used. In this paper, we explore the use of suffix arrays to hold the dictionary of the LZ encoder, and propose an algorithm to search over it. We show that the resulting encoder attains roughly the same compression ratios as those based on suffix trees. However, the amount of memory required by the suffix array is fixed, and much lower than the variable amount of memory used by encoders based on suffix trees (which depends on the text to encode). We conclude that suffix arrays, when compared to suffix trees in terms of the trade-off among time, memory, and compression ratio, may be preferable in scenarios (e.g., embedded systems) where memory is at a premium and high speed is not critical.

Relevance: 80.00%

Abstract:

The recent IEEE 802.11n standard offers high throughput in wireless local area networks and is therefore expected to see massive adoption, progressively replacing 802.11b/g networks. Thanks to its high capacity, this recent generation of 802.11n wireless networks enables strong growth in audiovisual services. In this context, this dissertation studies the 802.11n network, characterizing the performance and quality associated with a video transmission service by means of an 802.11n network simulation architecture. The impact of the new MAC-layer features introduced in the 802.11n standard, such as A-MSDU and A-MPDU aggregation, is characterized, as is the impact of new physical-layer features such as MIMO; in both cases the parameterization is optimized. It is also shown that the main H.264/AVC video coding techniques for optimizing the video distribution process improve the overall performance of the transmission system. Combining the optimization and parameterization of the MAC layer, the physical layer, and the coding process, a set of configurations is proposed that yields the best quality-of-service performance for transmitting video content over an 802.11n network. The simulation architecture built in this dissertation is specifically adapted to support the MAC-layer aggregation techniques and the encapsulation in network protocols that allow the transmission of H.264/AVC-encoded RTP video packets.
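The gain from the A-MPDU aggregation studied above comes largely from amortizing the fixed PHY preamble over many MPDUs. A back-of-the-envelope goodput model makes the effect visible; all constants here are illustrative round numbers, not the dissertation's simulated values:

```python
def goodput_mbps(n_subframes, payload_bytes=1500, phy_rate_mbps=300.0,
                 preamble_us=40.0, per_mpdu_overhead_bytes=40):
    """Rough A-MPDU goodput: one PHY preamble per aggregate, plus a fixed
    per-MPDU overhead (MAC header + delimiter), all sent at the PHY rate.
    phy_rate_mbps is in Mbit/s, i.e. bits per microsecond."""
    payload_bits = n_subframes * payload_bytes * 8
    overhead_bits = n_subframes * per_mpdu_overhead_bytes * 8
    airtime_us = preamble_us + (payload_bits + overhead_bits) / phy_rate_mbps
    return payload_bits / airtime_us

# Goodput without aggregation vs. with 32 aggregated MPDUs.
print(round(goodput_mbps(1)), round(goodput_mbps(32)))   # -> 148 284
```

With one MPDU per preamble, barely half of the 300 Mbit/s PHY rate survives as goodput; aggregating 32 MPDUs recovers most of it, which is why A-MPDU parameterization matters so much for video throughput.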

Relevance: 80.00%

Abstract:

This paper studies all-equity firms and shows the main drivers of the zero-debt policy in US firms. I analyze 6,763 US listed companies over the years 1987-2009, a total of 77,442 firm-year observations. I find that financially constrained firms show a higher probability of becoming unlevered. On the other side, firms producing high cash flow are also likely to become unlevered by paying down their debt. Some firms create economies of scale in the use of funds, increasing the probability of becoming unlevered. Industry characteristics are also important in explaining the zero-debt policy. However, the high perception of risk is the most important factor influencing this extreme behavior, which is consistent with the trade-off theory.

Relevance: 80.00%

Abstract:

This work consists of the hardware implementation of dedicated, optimized functional units that perform the encoding and decoding operations defined in the Joint Photographic Experts Group (JPEG) lossy coding standard, ITU-T T.81 / ISO/IEC 10918-1. The standard is studied in order to characterize its main functional blocks. The purpose of this study is to research and propose optimizations that minimize the hardware required for each block, so that the resulting system achieves high compression ratios while minimizing distortion. The hardware reduction of each system, encoder and decoder, is achieved by manipulating the equations of the Forward Discrete Cosine Transform (FDCT) and Quantization (Q) blocks and of the Inverse Discrete Cosine Transform (IDCT) and Inverse Quantization (IQ) blocks. Based on the conclusions of this study and on the analysis of known structures, each block was described in Very-High-Speed Integrated Circuits (VHSIC) Hardware Description Language (VHDL) and synthesized on a Field Programmable Gate Array (FPGA). Each implemented system executes its blocks in parallel to optimize encoding/decoding: the encoder performs the FDCT and Quantization operations on two different matrices simultaneously, and the same applies to the decoder, composed of the Inverse Quantization and IDCT blocks. Each synthesized block is validated with test vectors obtained from the study. After integrating the blocks, it was found that, for reference greyscale images with a resolution of 256 by 256 pixels, 820.5 μs are needed to encode an image and 830.5 μs to decode it. At a working frequency of 100 MHz, approximately 1200 images are processed per second.
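The FDCT/Q pair implemented in hardware above follows, per 8x8 block, the reference transform of ITU-T T.81 (Annex A). A floating-point software model, useful for generating the kind of test vectors the validation step needs, is sketched below; the flat quantization table is an illustrative placeholder, not the standard's example tables:

```python
import math

def fdct_2d(block):
    """Reference 8x8 forward DCT (ITU-T T.81, Annex A), unoptimized:
    F(u,v) = 1/4 * C(u) * C(v) * sum f(x,y) cos((2x+1)u*pi/16) cos((2y+1)v*pi/16)."""
    n = 8
    out = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            cu = 1 / math.sqrt(2) if u == 0 else 1.0
            cv = 1 / math.sqrt(2) if v == 0 else 1.0
            s = sum(
                block[x][y]
                * math.cos((2 * x + 1) * u * math.pi / 16)
                * math.cos((2 * y + 1) * v * math.pi / 16)
                for x in range(n) for y in range(n)
            )
            out[u][v] = 0.25 * cu * cv * s
    return out

def quantize(coeffs, qtable):
    """Uniform quantization of the DCT coefficients, as in the Q block."""
    return [[round(coeffs[u][v] / qtable[u][v]) for v in range(8)]
            for u in range(8)]

# A flat block (constant value 100) puts all its energy in the DC term:
# the DCT gives F(0,0) = 800 and every AC coefficient quantizes to zero.
flat = [[100] * 8 for _ in range(8)]
q = quantize(fdct_2d(flat), [[16] * 8 for _ in range(8)])
print(q[0][0])               # -> 50  (i.e. round(800 / 16))

# Throughput check from the text: 830.5 us per decoded 256x256 image.
print(round(1 / 830.5e-6))   # -> 1204 images per second
```

Constant, ramp, and checkerboard blocks like this one, whose transforms are known in closed form, are typical test vectors for checking an FDCT/Q pipeline before running full images through it.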