Abstract:
In this paper, we propose an orthogonal chirp division multiplexing (OCDM) technique for coherent optical communication. OCDM is based on the principle of orthogonally multiplexing a group of linearly chirped waveforms for high-speed data communication, achieving the maximum spectral efficiency (SE) for chirp spread spectrum in much the same way that orthogonal frequency division multiplexing (OFDM) does for frequency division multiplexing. In coherent optical (CO)-OCDM, the Fresnel transform formulates the synthesis of the orthogonal chirps, and the discrete Fresnel transform (DFnT) realizes CO-OCDM in the digital domain. As both the Fresnel and Fourier transforms are trigonometric transforms, CO-OCDM can be easily integrated into existing CO-OFDM systems. Analyses and numerical results are provided to investigate the transmission of CO-OCDM signals over optical fibers. Moreover, experiments on a 36-Gbit/s CO-OCDM signal are carried out to validate its feasibility and confirm the analyses. It is shown that CO-OCDM can effectively compensate for dispersion and is more resilient to fading and noise impairments than OFDM.
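As an illustration of the transform described above, the short Python sketch below builds a DFnT matrix from the commonly cited even-order kernel Phi[m,n] = (1/sqrt(N))·exp(-jπ/4)·exp(jπ(m-n)²/N) (an assumption here, not a formula quoted from the paper), verifies that the chirps are orthogonal, and runs a back-to-back modulation/demodulation round trip.

import numpy as np

def dfnt_matrix(N: int) -> np.ndarray:
    """Return the N x N discrete Fresnel transform matrix (N assumed even)."""
    m, n = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
    return np.exp(-1j * np.pi / 4) * np.exp(1j * np.pi * (m - n) ** 2 / N) / np.sqrt(N)

N = 64                                   # number of orthogonal chirps (subcarriers)
Phi = dfnt_matrix(N)

# Unitarity: Phi @ Phi^H should be the identity, i.e. the chirps are orthogonal.
assert np.allclose(Phi @ Phi.conj().T, np.eye(N), atol=1e-10)

# Toy round trip: map QPSK symbols onto chirps with the inverse DFnT at the
# transmitter, then recover them with the forward DFnT at the receiver.
rng = np.random.default_rng(0)
symbols = (rng.choice([-1, 1], N) + 1j * rng.choice([-1, 1], N)) / np.sqrt(2)
tx_waveform = Phi.conj().T @ symbols     # synthesis of the chirped waveform
rx_symbols = Phi @ tx_waveform           # demodulation (back-to-back, no channel)
assert np.allclose(rx_symbols, symbols)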
Abstract:
The blast furnace is the main ironmaking production unit in the world which converts iron ore with coke and hot blast into liquid iron, hot metal, which is used for steelmaking. The furnace acts as a counter-current reactor charged with layers of raw material of very different gas permeability. The arrangement of these layers, or burden distribution, is the most important factor influencing the gas flow conditions inside the furnace, which dictate the efficiency of the heat transfer and reduction processes. For proper control the furnace operators should know the overall conditions in the furnace and be able to predict how control actions affect the state of the furnace. However, due to high temperatures and pressure, hostile atmosphere and mechanical wear it is very difficult to measure internal variables. Instead, the operators have to rely extensively on measurements obtained at the boundaries of the furnace and make their decisions on the basis of heuristic rules and results from mathematical models. It is particularly difficult to understand the distribution of the burden materials because of the complex behavior of the particulate materials during charging. The aim of this doctoral thesis is to clarify some aspects of burden distribution and to develop tools that can aid the decision-making process in the control of the burden and gas distribution in the blast furnace. A relatively simple mathematical model was created for simulation of the distribution of the burden material with a bell-less top charging system. The model developed is fast and it can therefore be used by the operators to gain understanding of the formation of layers for different charging programs. The results were verified by findings from charging experiments using a small-scale charging rig at the laboratory. A basic gas flow model was developed which utilized the results of the burden distribution model to estimate the gas permeability of the upper part of the blast furnace. This combined formulation for gas and burden distribution made it possible to implement a search for the best combination of charging parameters to achieve a target gas temperature distribution. As this mathematical task is discontinuous and non-differentiable, a genetic algorithm was applied to solve the optimization problem. It was demonstrated that the method was able to evolve optimal charging programs that fulfilled the target conditions. Even though the burden distribution model provides information about the layer structure, it neglects some effects which influence the results, such as mixed layer formation and coke collapse. A more accurate numerical method for studying particle mechanics, the Discrete Element Method (DEM), was used to study some aspects of the charging process more closely. Model charging programs were simulated using DEM and compared with the results from small-scale experiments. The mixed layer was defined and the voidage of mixed layers was estimated. The mixed layer was found to have about 12% less voidage than layers of the individual burden components. Finally, a model for predicting the extent of coke collapse when heavier pellets are charged over a layer of lighter coke particles was formulated based on slope stability theory, and was used to update the coke layer distribution after charging in the mathematical model. In designing this revision, results from DEM simulations and charging experiments for some charging programs were used. 
The findings from the coke collapse analysis can be used to design charging programs with more stable coke layers.
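As a rough illustration of the optimization step described above, the sketch below runs a simple genetic algorithm that searches for charging parameters reproducing a target radial gas temperature profile. The "furnace model" here is only a toy surrogate for the combined burden and gas distribution model of the thesis, and all parameter names and values are hypothetical.

import numpy as np

rng = np.random.default_rng(1)
N_RADIAL = 10                                   # radial grid points at the furnace top

def toy_gas_temperature(params: np.ndarray) -> np.ndarray:
    """Toy surrogate mapping four charging parameters to a radial temperature profile."""
    r = np.linspace(0.0, 1.0, N_RADIAL)
    a, b, c, d = params
    return 120 + 80 * a + 60 * b * r + 50 * c * r**2 + 40 * d * np.cos(np.pi * r)

target = toy_gas_temperature(np.array([0.6, 0.3, 0.5, 0.2]))    # desired profile

def fitness(params: np.ndarray) -> float:
    """Negative squared error between predicted and target profiles."""
    return -np.sum((toy_gas_temperature(params) - target) ** 2)

pop = rng.uniform(0.0, 1.0, size=(40, 4))       # population of candidate charging programs
for _ in range(200):
    scores = np.array([fitness(p) for p in pop])
    parents = pop[np.argsort(scores)[-20:]]     # truncation selection of the best half
    idx = rng.integers(0, 20, size=(40, 2))     # pick parent pairs
    mask = rng.random((40, 4)) < 0.5            # uniform crossover
    children = np.where(mask, parents[idx[:, 0]], parents[idx[:, 1]])
    children += rng.normal(0.0, 0.05, children.shape)   # Gaussian mutation
    pop = np.clip(children, 0.0, 1.0)

best = pop[np.argmax([fitness(p) for p in pop])]
print("best charging parameters (coded units):", np.round(best, 3))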
Abstract:
The performance of supersonic engine inlets and external aerodynamic surfaces can be critically affected by shock wave/boundary layer interactions (SBLIs), whose severe adverse pressure gradients can cause boundary layer separation. Currently, such problems are avoided primarily through the use of boundary layer bleed/suction, which can be a source of significant performance degradation. This study investigates a novel type of flow control device called micro-vortex generators (µVGs), which may offer similar control benefits without the bleed penalties. µVGs have the ability to alter the near-wall structure of compressible turbulent boundary layers to provide increased mixing of high-speed fluid, which improves boundary layer health when subjected to a flow disturbance. Due to their small size, µVGs are embedded in the boundary layer, which gives them reduced drag compared to traditional vortex generators, while they are also cost-effective, physically robust, and require no power source. To examine the potential of µVGs, a detailed experimental and computational study of micro-ramps in a supersonic boundary layer at Mach 3 subjected to an oblique shock was undertaken. The experiments employed a flat plate boundary layer with an impinging oblique shock and downstream total pressure measurements. The moderate Reynolds number of 3,800 based on displacement thickness allowed the computations to use Large Eddy Simulations without the subgrid stress model (LES-nSGS). The LES predictions indicated that the shock changes the structure of the turbulent eddies and the primary vortices generated from the micro-ramp. Furthermore, they generally reproduced the experimentally obtained mean velocity profiles, unlike similarly-resolved RANS computations. The experiments and the LES results indicate that the micro-ramps, whose height is h≈0.5δ, can significantly reduce boundary layer thickness and improve downstream boundary layer health as measured by the incompressible shape factor, H. Regions directly behind the ramp centerline tended to have increased boundary layer thickness, indicating the significant three-dimensionality of the flow field. Compared to baseline sizes, smaller micro-ramps yielded improved total pressure recovery. Moving the smaller ramps closer to the shock interaction also reduced the displacement thickness and the separated area. This effect is attributed to decreased wave drag and the closer proximity of the vortex pairs to the wall. In the second part of the study, various types of µVGs are investigated, including micro-ramps and micro-vanes. The results showed that vortices generated from µVGs can partially eliminate shock-induced flow separation and can continue to entrain high momentum flux for boundary layer recovery downstream. The micro-ramps resulted in a thinner downstream displacement thickness in comparison to the micro-vanes. However, the strength of the streamwise vorticity for the micro-ramps decayed faster due to dissipation, especially after the shock interaction. In addition, the close spanwise distance between each vortex for the ramp geometry causes the vortex cores to move upwards from the wall due to induced upwash effects. Micro-vanes, on the other hand, yielded an increased spanwise spacing of the streamwise vortices at the point of formation. This resulted in streamwise vortices staying closer to the wall with less circulation decay, and the reduction in overall flow separation is attributed to these effects.
Two hybrid concepts, named “thick-vane” and “split-ramp”, were also studied, where the former is a vane with side supports and the latter has a uniform spacing along the centerline of the baseline ramp. These geometries behaved similarly to the micro-vanes in terms of the streamwise vorticity and the ability to reduce flow separation, but are more physically robust than the thin vanes. Next, the Mach number effect on flow past the micro-ramps (h~0.5δ) is examined in a supersonic boundary layer at M=1.4, 2.2 and 3.0, but with no shock waves present. The LES results indicate that micro-ramps have a greater impact at lower Mach numbers near the device, but their influence decays faster than in the higher Mach number cases. This may be due to the additional dissipation caused by the primary vortices with smaller effective diameter at the lower Mach number, such that their coherency is easily lost, causing the streamwise vorticity and the turbulent kinetic energy to decay quickly. The normal distance between the vortex core and the wall had similar growth, indicating a weak correlation with the Mach number; however, the spanwise distance between the two counter-rotating cores further increases with lower Mach number. Finally, various µVGs, including the micro-ramp, the split-ramp and a new hybrid concept, the “ramped-vane”, are investigated under normal shock conditions at a Mach number of 1.3. In particular, the ramped-vane was studied extensively by varying its size, the interior spacing of the device and the streamwise position with respect to the shock. The ramped-vane provided increased vorticity compared to the micro-ramp and the split-ramp. This significantly reduced the separation length downstream of the device centerline, where a larger ramped-vane with an increased trailing edge gap yielded fully attached flow at the centerline of the separation region. The results from coarse-resolution LES studies show that the larger ramped-vane provided the largest reductions in turbulent kinetic energy and pressure fluctuation downstream of the shock compared to the other devices. Additional benefits include negligible device drag, together with reductions in displacement thickness and shape factor compared to the other devices. Increased wall shear stress and pressure recovery were found with the larger ramped-vane in the baseline-resolution LES studies, which also gave decreased amplitudes of the pressure fluctuations downstream of the shock.
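The boundary layer health metric quoted in both parts of this abstract, the incompressible shape factor H = δ*/θ, can be computed directly from a velocity profile. The sketch below is illustrative only; the 1/7th-power-law profile stands in for measured or LES data.

import numpy as np

def trapezoid(f: np.ndarray, x: np.ndarray) -> float:
    """Simple trapezoidal integration."""
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x)))

def shape_factor(y: np.ndarray, u_over_ue: np.ndarray):
    """Return (delta*, theta, H) from wall-normal coordinate y and velocity ratio u/Ue."""
    ub = np.clip(u_over_ue, 0.0, 1.0)
    delta_star = trapezoid(1.0 - ub, y)          # displacement thickness
    theta = trapezoid(ub * (1.0 - ub), y)        # momentum thickness
    return delta_star, theta, delta_star / theta

# Example: 1/7th power-law turbulent profile inside a delta = 5 mm boundary layer.
delta = 5e-3
y = np.linspace(0.0, delta, 400)
u_over_ue = (y / delta) ** (1.0 / 7.0)
d_star, theta, H = shape_factor(y, u_over_ue)
print(f"delta* = {d_star*1e3:.3f} mm, theta = {theta*1e3:.3f} mm, H = {H:.2f}")
# H is about 1.29 here; degraded or separating layers show markedly higher H.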
Abstract:
The aim of this thesis was threefold: firstly, to compare current player tracking technology in a single game of soccer; secondly, to investigate the running requirements of elite women’s soccer, in particular the use and application of athlete tracking devices; and finally, to examine how game style can be quantified and defined. Study One compared four different match analysis systems commonly used in both research and applied settings: video-based time-motion analysis, a semi-automated multiple camera based system, and two commercially available Global Positioning System (GPS) based player tracking systems at 1 Hertz (Hz) and 5 Hz respectively. A comparison was made between each of the systems when recording the same game. Total distance covered during the match for the four systems ranged from 10 830 ± 770 m (semi-automated multiple camera based system) to 9 510 ± 740 m (video-based time-motion analysis). At running speeds categorised as high-intensity running (>15 km⋅h-1), the semi-automated multiple camera based system reported the highest distance of 2 650 ± 530 m, with video-based time-motion analysis reporting the least distance covered, 1 610 ± 370 m. At speeds considered to be sprinting (>20 km⋅h-1), the video-based time-motion analysis reported the highest value (420 ± 170 m) and the 1 Hz GPS units the lowest value (230 ± 160 m). These results demonstrate there are differences in the determination of absolute distances, and that comparison of results between match analysis systems should be made with caution. Currently, there is no criterion measure for these match analysis methods and as such it was not possible to determine if one system was more accurate than another. Study Two provided an opportunity to apply player-tracking technology (GPS) to measure activity profiles and determine the physical demands of Australian international-level women soccer players. In four international women’s soccer games, data were collected on a total of 15 Australian women soccer players using a 5 Hz GPS based athlete tracking device. Results indicated that Australian women soccer players covered 9 140 ± 1 030 m during 90 min of play. The total distance covered by Australian women was less than the 10 300 m reportedly covered by female soccer players in the Danish First Division. However, there was no apparent difference in estimated maximal aerobic capacity (VO2max), as measured by multi-stage shuttle tests, between these studies. This study suggests that contextual information, including the “game style” of both the team and the opposition, may influence physical performance in games. Study Three examined the effect the level of the opposition had on the physical output of Australian women soccer players. In total, 58 game files from 5 Hz athlete-tracking devices from 13 international matches were collected. These files were analysed to examine relationships between physical demands, represented by total distance covered, high-intensity running (HIR) and distance covered sprinting, and the level of the opposition, as represented by the Fédération Internationale de Football Association (FIFA) ranking at the time of the match. Higher-ranking opponents elicited less high-speed running and greater low-speed activity compared to playing teams of similar or lower ranking. The results are important to coaches and practitioners in the preparation of players for international competition, and showed that the physical demands required differed depending on the level of the opponents.
The results also highlighted the need for continued research into integrating contextual information in team sports and demonstrated that soccer can be described as a dynamic and interactive system. The influence of playing strategy, tactics and, subsequently, the overall game style was highlighted as playing a significant part in the physical demands of the players. Study Four explored the concept of game style in field sports such as soccer. The aim of this study was to provide an applied framework with suggested metrics for use by coaches, media, practitioners and sports scientists. Based on the findings of Studies 1-3 and a systematic review of the relevant literature, a theoretical framework was developed to better understand how a team’s game style could be quantified. Soccer games can be broken into key moments of play, and for each of these moments we categorised metrics that provide insight into success or otherwise, to help quantify and measure different methods of playing styles. This study highlights that, to date, there had been no clear definition of game style in team sports, and as such a novel definition of game style is proposed that can be used by coaches, sport scientists, performance analysts, the media and the general public. Studies 1-3 outline four common methods of measuring the physical demands in soccer: video-based time-motion analysis, GPS at 1 Hz and at 5 Hz, and semi-automated multiple camera based systems. As there are no semi-automated multiple camera based systems available in Australia, primarily for cost and logistical reasons, GPS is widely accepted for use in team sports in tracking player movements in training and competition environments. This research identified that, although there are some limitations, GPS player-tracking technology may be a valuable tool in assessing running demands in soccer players and subsequently contribute to our understanding of game style. The results of the research undertaken also reinforce the differences between methods used to analyse player movement patterns in field sports such as soccer and demonstrate that the results from different systems, such as GPS based athlete tracking devices and semi-automated multiple camera based systems, cannot be used interchangeably. Indeed, the magnitude of measurement differences between methods suggests that significant measurement error is evident. This was apparent even when the same technology was used at different sampling rates, such as GPS systems measuring at either 1 Hz or 5 Hz. It was also recognised that other factors influence how team sport athletes behave within an interactive system. These factors included the strength of the opposition and their style of play. In turn, these can impact the physical demands of players, which change from game to game, and even within games, depending on these contextual features. Finally, the concept of game style and how it might be measured was examined. Game style was defined as "the characteristic playing pattern demonstrated by a team during games. It will be regularly repeated in specific situational contexts such that measurement of variables reflecting game style will be relatively stable. Variables of importance are player and ball movements, interaction of players, and will generally involve elements of speed, time and space (location)".
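The distance-in-speed-band bookkeeping used throughout Studies One to Three can be expressed compactly. The sketch below accumulates total, high-intensity running (>15 km/h) and sprinting (>20 km/h) distance from a sampled GPS speed trace; the thresholds follow the text, while the 5 Hz synthetic trace is illustrative only.

import numpy as np

def zone_distances(speed_kmh: np.ndarray, hz: float) -> dict:
    """Distance (m) covered in total and above the HIR and sprint thresholds."""
    dist_per_sample = speed_kmh / 3.6 / hz          # metres moved in each sample
    return {
        "total_m":  float(dist_per_sample.sum()),
        "hir_m":    float(dist_per_sample[speed_kmh > 15.0].sum()),
        "sprint_m": float(dist_per_sample[speed_kmh > 20.0].sum()),
    }

rng = np.random.default_rng(7)
speed = np.clip(rng.gamma(shape=2.0, scale=3.0, size=90 * 60 * 5), 0.0, 32.0)  # 90 min at 5 Hz
print(zone_distances(speed, hz=5.0))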
Abstract:
The transistor laser is a unique three-port device that operates simultaneously as a transistor and a laser. With quantum wells incorporated in the base regions of heterojunction bipolar transistors, the transistor laser possesses the advantageous characteristics of a fast base spontaneous carrier lifetime, high differential optical gain, and electrical-optical characteristics for direct “read-out” of its optical properties. These devices have demonstrated many useful features such as high-speed optical transmission without the limitations of resonance, non-linear mixing, frequency multiplication, negative resistance, and photon-assisted switching. To date, all of these devices have operated as multi-mode lasers without any type of wavelength selection or stabilizing mechanism. Stable single-mode distributed feedback diode laser sources are important in many applications, including spectroscopy, as pump sources for amplifiers and solid-state lasers, for use in coherent communication systems, and now, as transistor lasers (TLs), potentially for integrated optoelectronics. The subject of this work is to expand the future applications of the transistor laser by demonstrating the theoretical background, process development, and device design necessary to achieve single-longitudinal-mode operation in a three-port transistor laser. A third-order distributed feedback surface grating is fabricated in the top emitter AlGaAs confining layers using soft photocurable nanoimprint lithography. The device produces continuous-wave laser operation with a peak wavelength of 959.75 nm and a threshold current of 13 mA when operating at -70 °C. For devices with cleaved ends, a side-mode suppression ratio greater than 25 dB has been achieved.
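A quick check of the grating geometry implied by the abstract can be made with the Bragg condition m·λ_B = 2·n_eff·Λ for an m-th order grating. The effective index used below (n_eff ≈ 3.4 for an AlGaAs waveguide) is an assumed, illustrative value, not a number taken from the work.

# Third-order DFB surface grating period from the Bragg condition.
m_order = 3              # grating order (third-order surface grating)
lambda_b_nm = 959.75     # peak emission wavelength reported in the abstract
n_eff = 3.4              # ASSUMED effective modal index (illustrative only)

grating_period_nm = m_order * lambda_b_nm / (2.0 * n_eff)
print(f"required grating period: {grating_period_nm:.1f} nm")   # ~423 nm for these values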
Abstract:
A methodology has been developed and presented to enable the use of small to medium scale acoustic hover facilities for the quantitative measurement of rotor impulsive noise. The methodology was applied to the University of Maryland Acoustic Chamber resulting in accurate measurements of High Speed Impulsive (HSI) noise for rotors running at tip Mach numbers between 0.65 and 0.85 – with accuracy increasing as the tip Mach number was increased. Several factors contributed to the success of this methodology, including:
• High Speed Impulsive (HSI) noise is characterized by very distinct pulses radiated from the rotor. The pulses radiate high frequency energy – but the energy is contained in short duration time pulses.
• The first reflections from these pulses can be tracked (using ray theory) and, through adjustment of the microphone position and suitably applied acoustic treatment at the reflected surface, reduced to small levels. A computer code was developed that automates this process. The code also tracks first bounce reflection timing, making it possible to position the first bounce reflections outside of a measurement window.
• Using a rotor with a small number of blades (preferably one) reduces the number of interfering first bounce reflections and generally improves the measured signal fidelity.
The methodology will help the gathering of quantitative hovering rotor noise data in less than optimal acoustic facilities and thus enable basic rotorcraft research and rotor blade acoustic design.
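The first-bounce tracking described above can be illustrated with the image-source construction: mirror the source across a reflecting plane, compare the reflected path with the direct path, and check whether the reflection arrives after the measurement window closes. The geometry, window length and speed of sound below are assumed example values, not parameters of the actual chamber code.

import numpy as np

C_SOUND = 343.0   # m/s, assumed ambient speed of sound

def first_bounce_delay(src, mic, plane_z: float) -> float:
    """Extra arrival time (s) of the reflection off the plane z = plane_z versus the direct path."""
    src, mic = np.asarray(src, float), np.asarray(mic, float)
    image = src.copy()
    image[2] = 2.0 * plane_z - src[2]            # mirror the source across the plane
    direct = np.linalg.norm(mic - src)
    reflected = np.linalg.norm(mic - image)
    return (reflected - direct) / C_SOUND

src = [0.0, 0.0, 1.5]          # rotor hub position (m), illustrative
mic = [2.0, 0.0, 1.2]          # microphone position (m), illustrative
delay = first_bounce_delay(src, mic, plane_z=0.0)    # floor at z = 0
window = 3.0e-3                # assumed 3 ms measurement window around each pulse
print(f"first-bounce delay = {delay*1e3:.2f} ms -> "
      f"{'outside' if delay > window else 'inside'} the measurement window")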
Abstract:
Strawberries harvested for processing as frozen fruits are currently de-calyxed manually in the field. This process requires the removal of the stem cap with green leaves (i.e. the calyx) and incurs many disadvantages when performed by hand. Not only does it require maintaining cutting tool sanitation, but it also increases labor time and the exposure of the de-capped strawberries before in-plant processing. This leads to labor inefficiency and decreased harvest yield. By moving the calyx removal process from the fields to the processing plants, this new practice would reduce field labor and improve management and logistics, while increasing annual yield. As labor prices continue to increase, the strawberry industry has shown great interest in the development and implementation of an automated calyx removal system. In response, this dissertation describes the design, operation, and performance of a full-scale automatic vision-guided intelligent de-calyxing (AVID) prototype machine. The AVID machine utilizes commercially available equipment to produce a relatively low-cost automated de-calyxing system that can be retrofitted into existing food processing facilities. This dissertation is broken up into five sections. The first two sections include a machine overview and a 12-week processing plant pilot study. Results of the pilot study indicate the AVID machine is able to de-calyx grade-1-with-cap conical strawberries at roughly 66 percent output weight yield at a throughput of 10,000 pounds per hour. The remaining three sections describe in detail the three main components of the machine: a strawberry loading and orientation conveyor, a machine vision system for calyx identification, and a synchronized multi-waterjet knife calyx removal system. In short, the loading system utilizes rotational energy to orient conical strawberries. The machine vision system determines cut locations through real-time RGB feature extraction. The high-speed multi-waterjet knife system uses direct-drive actuation to position 30,000 psi cutting streams at precise coordinates for calyx removal. Based on the observations and studies performed within this dissertation, the AVID machine is seen to be a viable option for automated high-throughput strawberry calyx removal. A summary of future tasks and further improvements is discussed at the end.
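The abstract states only that cut locations come from real-time RGB feature extraction; as a hypothetical illustration of that idea (not the AVID algorithm itself), the sketch below flags green-dominant pixels as calyx and takes the lowest calyx row of a tip-down berry image as a candidate cut line.

import numpy as np

def candidate_cut_row(rgb: np.ndarray, margin: int = 20) -> int:
    """rgb: H x W x 3 uint8 image with the berry tip pointing down; returns a row index."""
    r = rgb[..., 0].astype(np.int16)
    g = rgb[..., 1].astype(np.int16)
    calyx_mask = (g - r) > margin               # green-dominant pixels ~ leaves / stem cap
    rows = np.where(calyx_mask.any(axis=1))[0]
    if rows.size == 0:
        raise ValueError("no calyx-like pixels found")
    return int(rows.max())                      # cut just below the lowest green row

# Synthetic 100 x 60 test image: green calyx in the top 25 rows, red flesh below.
img = np.zeros((100, 60, 3), dtype=np.uint8)
img[:25] = (40, 180, 40)
img[25:] = (200, 30, 30)
print("candidate cut at row", candidate_cut_row(img))   # -> 24 for this synthetic image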
Abstract:
Silver and mercury are both dissolved in cyanide leaching, and the mercury co-precipitates with silver during metal recovery. Mercury must then be removed from the silver/mercury amalgam by vaporizing the mercury in a retort, leading to environmental and health hazards. The need for retorting silver can be greatly reduced if mercury is selectively removed from leaching solutions. Theoretical calculations were carried out based on the thermodynamics of the Ag/Hg/CN- system in order to determine possible approaches to either preventing mercury dissolution or selectively precipitating it without silver loss. Preliminary experiments were then carried out based on these calculations to determine if the reactions would be spontaneous with reasonably fast kinetics. In an attempt to stop mercury from dissolving and leaching out of the heap, the first set of experiments was designed to determine whether selenium and mercury would form a mercury selenide under leaching conditions, lowering the amount of mercury in solution while forming a stable compound. From the results of the synthetic ore experiments with selenium, it was determined that another effect was already suppressing mercury dissolution, and the effect of the selenium could not be well analyzed given the small amount of change. The effect dominating the reactions led to the second set of experiments, which used silver sulfide as a selective precipitant of mercury. These experiments were to determine if adding solutions containing mercury cyanide to un-leached silver sulfide would facilitate a precipitation reaction, putting silver in solution and precipitating mercury as mercury sulfide. Counter-current flow experiments using the high-selenium ore showed a 99.8% removal of mercury from solution. As compared to leaching with only cyanide, about 60% of the silver was removed per pass for the high-selenium ore, and around 90% for the high-mercury ore. Since silver sulfide is rather expensive to use solely as a mercury precipitant, another compound was sought that could selectively precipitate mercury and leave silver in solution. In looking for a less expensive selective precipitant, zinc sulfide was tested. The third set of experiments showed that zinc sulfide (as sphalerite) could be used to selectively precipitate mercury while leaving silver cyanide in solution. Parameters such as particle size, reduction potential, and degree of oxidation of the sphalerite were tested. Batch experiments worked well, showing 99.8% mercury removal with only ≈1% silver loss (starting with 930 ppb mercury, 300 ppb silver) at one hour. A continuous flow process would work better for industrial applications, which was demonstrated with the filter funnel set-up. Funnels with filter paper and sphalerite showed good mercury removal (from 31 ppb mercury and 333 ppb silver, an 87% mercury removal and 7% silver loss through one funnel). A counter-current flow set-up showed 100% mercury removal and under 0.1% silver loss starting with 704 ppb silver and 922 ppb mercury. The resulting sphalerite coated with mercury sulfide was also shown to be stable (not releasing mercury) under leaching tests. Use of sphalerite could be easily implemented through such means as sphalerite-impregnated filter paper placed in currently existing processes. In summary, this work focuses on preventing mercury from following silver through the leaching circuit.
Currently, the only available means of removing mercury is by retort, creating possible health hazards in the distillation process and in the transportation and storage of the final mercury waste product. Preventing mercury from following silver in the earlier stages of the leaching process will greatly reduce the risk of mercury spills, human exposure to mercury, and possible environmental disasters. This will save mining companies millions of dollars in mercury handling and storage and in projects to clean up spilled mercury, and will result in better health for those living near and working in the mines.
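As an idealized illustration of why staged or counter-current contact can reach removals like the 99.8% quoted above (assuming, purely for illustration, that each contact stage independently removes a fixed fraction of the mercury reaching it), the removed fraction compounds as 1 - (1 - r)^n:

def overall_removal(per_stage: float, n_stages: int) -> float:
    """Overall removed fraction after n independent stages, each removing per_stage."""
    return 1.0 - (1.0 - per_stage) ** n_stages

# 87% per stage is the single-funnel figure reported in the abstract.
for n in (1, 2, 3):
    print(f"{n} stage(s) at 87% per stage -> {100 * overall_removal(0.87, n):.2f}% overall")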
Abstract:
Current copper-based circuit technology is becoming a limiting factor in high-speed data transfer applications, as processors are improving at a faster rate than developments to increase on-board data transfer. One solution is to utilize optical waveguide technology to overcome these bandwidth and loss restrictions. The use of this technology virtually eliminates the heat and cross-talk loss seen in copper circuitry, while also operating at a higher bandwidth. Transitioning current fabrication techniques from small-scale laboratory environments to large-scale manufacturing presents significant challenges. Optical-to-electrical connections and out-of-plane coupling are significant hurdles in the advancement of optical interconnects. The main goals of this research are the development of direct write material deposition and patterning tools for the fabrication of waveguide systems on large substrates, and the development of out-of-plane coupler components compatible with standard fiber optic cabling. Combining these elements with standard printed circuit boards allows for the fabrication of fully functional optical-electrical printed wiring boards (OEPWBs). A direct dispense tool was designed, assembled, and characterized for the repeatable dispensing of blanket waveguide layers over a range of thicknesses (25-225 µm), eliminating waste material and affording the ability to utilize large substrates. This tool was used to directly dispense multimode waveguide cores, which required no UV definition or development. These cores had circular cross sections and were comparable in optical performance to lithographically fabricated square waveguides. Laser direct writing is a non-contact process that allows for the dynamic UV patterning of waveguide material on large substrates, eliminating the need for high resolution masks. A laser direct write tool was designed, assembled, and characterized for direct write patterning of waveguides that were comparable in quality to those produced using standard lithographic practices (0.047 dB/cm loss for laser written waveguides compared to 0.043 dB/cm for lithographic waveguides). Straight waveguides and waveguide turns were patterned at multimode and single-mode sizes, and the process was characterized and documented. Support structures such as angled reflectors and vertical posts were produced, showing the versatility of the laser direct write tool. Commercially available components were implanted into the optical layer for out-of-plane routing of the optical signals. These devices featured spherical lenses on the input and output sides of a total internal reflection (TIR) mirror, as well as alignment pins compatible with standard MT design. Fully functional OEPWBs were fabricated featuring input and output out-of-plane optical signal routing with total optical losses not exceeding 10 dB. These prototypes survived thermal cycling (-40°C to 85°C) and humidity exposure (95±4% humidity), showing minimal degradation in optical performance. Operational failure occurred after environmental aging life testing at 110°C for 216 hours.
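The propagation losses quoted above translate into a simple link budget. The sketch below compares laser-written (0.047 dB/cm) and lithographic (0.043 dB/cm) waveguides over an assumed board-scale length against the 10 dB total loss bound reported for the OEPWB prototypes; the length and coupling loss are illustrative assumptions.

def link_loss_db(length_cm: float, prop_db_per_cm: float, coupling_db: float) -> float:
    """Total optical link loss: propagation plus in/out coupling."""
    return length_cm * prop_db_per_cm + coupling_db

LENGTH_CM = 30.0     # assumed board-scale waveguide length
COUPLING_DB = 6.0    # assumed combined coupling and TIR-mirror loss

for name, prop_loss in (("laser-written", 0.047), ("lithographic", 0.043)):
    total = link_loss_db(LENGTH_CM, prop_loss, COUPLING_DB)
    print(f"{name:13s}: {total:.2f} dB total "
          f"({'within' if total <= 10.0 else 'over'} the 10 dB budget)")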
Abstract:
The production of ethyl esters by alcoholysis is an alternative for splitting triacylglycerols due to the possibility of using low temperatures, which results in oxidative protection of the polyunsaturated fatty acids. Ethyl esters produced under mild temperature conditions could be used as a substrate for obtaining structured lipids. The reaction parameters for the production of ethyl esters by alcoholysis of fish oil with a high content of omega-3 fatty acids were optimized using response surface methodology. A 2³ experimental design was applied, with factorial levels of +1 and -1, six axial points at levels -alpha and +alpha, and three central points. The variables investigated were concentration of catalyst, amount of ethyl alcohol and temperature. Ethyl ester conversion was monitored by high performance size exclusion chromatography (HPSEC), and the best result obtained was a 95% conversion rate. The optimal conditions were 40 °C, 1% NaOH, and 36% ethanol.
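The design described above is a standard central composite design in three coded factors (catalyst concentration, ethanol amount, temperature). The sketch below generates the 8 factorial, 6 axial and 3 centre runs; the rotatable value alpha = (2³)^(1/4) ≈ 1.682 is assumed, since the abstract does not state which alpha was used.

import itertools
import numpy as np

k = 3
alpha = (2 ** k) ** 0.25                         # ~1.682 for a rotatable design (assumed)

factorial = np.array(list(itertools.product([-1.0, 1.0], repeat=k)))
axial = np.vstack([sign * alpha * np.eye(k)[i] for i in range(k) for sign in (-1.0, 1.0)])
center = np.zeros((3, k))

design = np.vstack([factorial, axial, center])   # 8 + 6 + 3 = 17 runs in coded units
print(design.shape)                              # (17, 3)
print(np.round(design, 3))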
Abstract:
Ethanolic extracts of propolis were prepared using water and various concentrations of ethanol as solvent. The extracts were investigated by measurement of the absorption spectrum with a UV spectrophotometer (UV scanning), reversed-phase high-performance thin-layer chromatography, and reversed-phase HPLC. Maximum absorption of all extracts was at 290 nm, resembling flavonoid compounds, and the 80% ethanolic extract showed the highest absorption at 290 nm. The most isosakuranetin, quercetin, and kaempferol were extracted from mixtures of propolis and 60% ethanol, whereas 70% ethanol extracted the most pinocembrin and sakuranetin, and 80% ethanol extracted more kaempferide, acacetin, and isorhamnetin from propolis. The 60 to 80% ethanolic extracts of propolis strongly inhibited microbial growth, the 70 and 80% ethanolic extracts showed the greatest antioxidant activity, and the 80% ethanolic extract strongly inhibited hyaluronidase activity.
Abstract:
Objective: To assess microleakage in conservative class V cavities prepared with aluminum-oxide air abrasion or turbine and restored with self-etching or etch-and-rinse adhesive systems. Materials and Methods: Forty premolars were randomly assigned to 4 groups (I and II: air abrasion; III and IV: turbine) and class V cavities were prepared on the buccal surfaces. Conditioning approaches were: groups I/III - 37% phosphoric acid; groups II/IV - self-priming etchant (Tyrian-SPE). Cavities were restored with One Step Plus/Filtek Z250. After finishing, specimens were thermocycled, immersed in 50% silver nitrate, and serially sectioned. Microleakage at the occlusal and cervical interfaces was measured in mm and calculated using software. Data were subjected to ANOVA and Tukey's test (α=0.05). Results: The marginal seal provided by air abrasion was similar to that of the high-speed handpiece, except for group I. There was a significant difference between enamel and dentin/cementum margins for groups I and II (air abrasion). The etch-and-rinse adhesive system promoted a better marginal seal. At enamel and dentin/cementum margins, the highest microleakage values were found in cavities treated with the self-etching adhesive system. At dentin/cementum margins, high-speed handpiece preparations associated with the etch-and-rinse system provided the least dye penetration. Conclusion: The marginal seal of cavities prepared with aluminum-oxide air abrasion differed from that of conventionally prepared cavities, and the etch-and-rinse system promoted a better marginal seal at both enamel and dentin margins.
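For readers who want to reproduce the statistics named above (ANOVA followed by Tukey's test at alpha = 0.05), the template below shows one way to run them in Python. The microleakage values are synthetic placeholders, not the study's data.

import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(3)
groups = {                                   # synthetic microleakage values (mm)
    "I_air_etch_rinse":    rng.normal(0.4, 0.15, 10),
    "II_air_self_etch":    rng.normal(0.9, 0.15, 10),
    "III_turb_etch_rinse": rng.normal(0.3, 0.15, 10),
    "IV_turb_self_etch":   rng.normal(0.8, 0.15, 10),
}

f_stat, p_val = stats.f_oneway(*groups.values())
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_val:.4f}")

values = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
print(pairwise_tukeyhsd(values, labels, alpha=0.05))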
Abstract:
Objective: The purpose of this study was to assess the efficacy of Er:YAG laser energy for composite resin removal and the influence of pulse repetition rate on the thermal alterations occurring during laser ablation. Materials and Methods: Composite resin fillings were placed in cavities (1.0 mm deep) prepared in bovine teeth, and the specimens were randomly assigned to five groups according to the technique used for composite filling removal. In group 1 (controls), the restorations were removed using a high-speed diamond bur. In the other groups, the composite fillings were removed using an Er:YAG laser at different pulse repetition rates: group 2: 2 Hz; group 3: 4 Hz; group 4: 6 Hz; and group 5: 10 Hz. The time required for complete removal of the restorative material and the temperature changes were recorded. Results: The temperature rise during composite resin removal with the Er:YAG laser occurred in the substrate underneath the restoration and was directly proportional to the increase in pulse repetition rate. None of the groups had a temperature increase during composite filling removal of more than 5.6 degrees C, which is considered the critical point above which irreversible thermal damage to the pulp may result. Regarding the time for composite filling removal, all the laser-ablated groups (except for group 5 [10 Hz]) required more time than the control group for complete elimination of the material from the cavity walls. Conclusion: Under the tested conditions, Er:YAG laser irradiation was efficient for composite resin ablation and did not cause a temperature increase above the limit considered safe for the pulp. Among the tested pulse repetition rates, 6 Hz produced minimal temperature change compared to the control group (high-speed bur) and allowed composite filling removal within a time period that is acceptable for clinical conditions.
Abstract:
It has been suggested that muscle tension plays a major role in the activation of intracellular pathways for skeletal muscle hypertrophy via an increase in mechano-growth factor (MGF) and other downstream targets. Eccentric exercise (EE) imposes a greater amount of tension on the active muscle. In particular, high-speed EE seems to exert an additional effect on muscle tension and, thus, on muscle hypertrophy. However, little is known about the effect of EE velocity on hypertrophy signaling. This study investigated the effect of acute EE-velocity manipulation on the Akt/mTORC1/p70(S6K) hypertrophy pathway. Twenty subjects were assigned to either a slow (20°·s(-1); ES) or fast (210°·s(-1); EF) EE group. Biopsies were taken from the vastus lateralis at baseline (B), immediately after (T1), and 2 h after (T2) the completion of 5 sets of 8 repetitions of eccentric knee extensions. Akt, mTOR, and p70(S6K) total protein were similar between groups and did not change post-intervention. Further, Akt and p70(S6K) protein phosphorylation were higher at T2 than at B for both ES and EF. MGF messenger RNA was similar between groups and was only significantly higher at T2 than at B in ES. The acute manipulation of EE velocity does not seem to differentially influence intracellular hypertrophy signaling through the Akt/mTORC1/p70(S6K) pathway.
Abstract:
This paper presents new experimental flow boiling heat transfer results in micro-scale tubes. The experimental data were obtained in a horizontal 2.3 mm I.D. stainless steel tube with a heating length of 464 mm, R134a and R245fa as working fluids, mass velocities ranging from 50 to 700 kg m(-2) s(-1), heat fluxes from 5 to 55 kW m(-2), exit saturation temperatures of 22, 31 and 41 degrees C, and vapor qualities ranging from 0.05 to 0.99. Flow pattern characterization was also performed from images obtained by high-speed filming. Heat transfer coefficients from 1 to 14 kW m(-2) K(-1) were measured. It was found that the heat transfer coefficient is a strong function of heat flux, mass velocity and vapor quality. The experimental data were compared against ten flow boiling predictive methods from the literature. The methods of Liu and Winterton [3], Zhang et al. [5] and Saitoh et al. [6] worked best for both fluids, capturing most of the experimental heat transfer trends.
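The heat transfer coefficients quoted above relate to the measured quantities through the local definition h = q'' / (T_wall - T_sat). The numbers in the sketch below are illustrative values chosen inside the ranges stated in the abstract, not data points from the study.

def heat_transfer_coefficient(q_flux_w_m2: float, t_wall_c: float, t_sat_c: float) -> float:
    """Local flow-boiling heat transfer coefficient, W m(-2) K(-1)."""
    return q_flux_w_m2 / (t_wall_c - t_sat_c)

q_flux = 25e3      # 25 kW m(-2), inside the 5-55 kW m(-2) range tested
t_sat = 31.0       # degrees C, one of the exit saturation temperatures
t_wall = 34.5      # degrees C, assumed inner-wall temperature

h = heat_transfer_coefficient(q_flux, t_wall, t_sat)
print(f"h = {h / 1e3:.1f} kW m(-2) K(-1)")   # ~7.1, inside the 1-14 range reported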