928 results for Motion study


Relevance:

30.00%

Publisher:

Abstract:

Any waterway with one end closed and the other open is generally called a blind channel. The main flow tends to expand, separate, and generate circulation at the mouth of a blind channel. The main flow continuously transfers momentum and sediment into the circulation region through the turbulent mixing region (TMR) between them, leading to heavy sediment deposition in blind channels. This paper experimentally investigates the properties of water flow and sediment diffusion in the TMR, demonstrating that both the water flow and the sediment motion in the TMR approximately follow a self-similar structure like that of the free mixing layer induced by a jet. Similarity functions for the flow velocity and sediment concentration are then assumed on the basis of these observations, which substantially facilitates their calculation. For the low-velocity flow system of blind channels with a finite width, a simple formula for the sediment deposition rate is established by analyzing the crosswise gradients of velocity and sediment concentration in the TMR.
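The assumed similarity structure can be illustrated with the classical error-function profile of a plane free mixing layer. This is a generic sketch under standard assumptions; the spreading parameter and function names are chosen here for illustration, not taken from the paper.

```python
import math

def similarity_velocity(y, x, u_main, sigma=11.0):
    """Self-similar streamwise velocity across a free mixing layer.

    y: crosswise coordinate measured from the layer centreline
    x: downstream distance from the channel mouth
    u_main: velocity of the main flow (the circulation side has u ~ 0)
    sigma: empirical spreading parameter (~11 for plane mixing layers)

    Classical error-function profile: u = (u_main/2) * (1 + erf(sigma*y/x)).
    """
    eta = sigma * y / x  # similarity variable
    return 0.5 * u_main * (1.0 + math.erf(eta))

# On the main-flow side (y >> 0) the profile approaches u_main;
# deep in the circulation region (y << 0) it approaches zero.
```

A sediment-concentration similarity function of the same form would then give the crosswise gradients from which a deposition-rate formula follows.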


The constitutive relations and kinematic assumptions for a composite beam with a shape memory alloy (SMA) layer arbitrarily embedded are discussed, and the results obtained under the different kinematic assumptions are compared. When the mechanics-of-materials approach is used to study a composite beam with an embedded SMA layer, the kinematic assumption is vital. In this paper, we systematically study the influence of the kinematic assumptions on the deflection and vibration characteristics of the composite beam. The equations of equilibrium/motion differ according to the kinematic assumption. Here three widely used kinematic assumptions are presented and the corresponding equations of equilibrium/motion are derived. As the kinematic assumptions progress from the simplest to the most complex, the governing equations evolve from linear to nonlinear. For the nonlinear equations of equilibrium, the numerical solution is obtained by using the Galerkin discretization method and the Newton-Raphson iteration method. The numerical difficulty of using the Galerkin method for post-buckling analysis is examined; for the post-buckling analysis, the finite element method is applied to avoid the singularity that arises in the Galerkin method. The natural frequencies of the composite beam with the nonlinear governing equation, obtained either by directly linearizing the equations or by locally linearizing them around each equilibrium, are compared. The influences of the SMA layer thickness and its shift from the neutral axis on the deflection, buckling, and post-buckling behavior are also investigated. This paper presents a very general way to treat the thermo-mechanical properties of a composite beam with SMA arbitrarily embedded. For each kinematic assumption, the governing equations consist of a third-order and a fourth-order differential equation with a total of seven boundary conditions.
Some previous studies on the SMA layer either ignore the thermal constraint effect or implicitly assume that the SMA is symmetrically embedded. The composite beam with the SMA layer asymmetrically embedded is studied here, of which symmetric embedding is a special case. Under the different kinematic assumptions, the results differ depending on the deflection magnitude because of the nonlinear hardening effect due to (large) deflection, and this difference is systematically compared for both the deflection and the natural frequencies. For the simple kinematic assumption, the governing equations are linear and an analytical solution is available. But as the deflection grows large, the simple kinematic assumption no longer reflects the structural deflection and the complex one must be used. Through the systematic comparison of computational results under the different kinematic assumptions, the application range of the simple kinematic assumption is also evaluated. Besides the equilibrium study of the composite laminate with SMA embedded, the buckling, post-buckling, and free and forced vibrations of the composite beam with the different configurations are also studied and compared.
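The solution strategy named above, Galerkin discretization followed by Newton-Raphson iteration, can be sketched on a toy algebraic system with a cubic hardening term. The matrices and coefficients below are illustrative stand-ins, not the discretized beam equations.

```python
import numpy as np

def newton_raphson(residual, jacobian, a0, tol=1e-10, max_iter=50):
    """Generic Newton-Raphson iteration for a discretized system R(a) = 0."""
    a = np.asarray(a0, dtype=float)
    for _ in range(max_iter):
        r = residual(a)
        if np.linalg.norm(r) < tol:
            break
        # Newton update: solve J(a) * da = -R(a)
        a = a - np.linalg.solve(jacobian(a), r)
    return a

# Toy Galerkin-style system: linear stiffness plus cubic hardening,
# K a + c a^3 = f, standing in for the discretized beam equations.
K = np.array([[2.0, -1.0], [-1.0, 2.0]])
c = 0.5
f = np.array([1.0, 0.0])

residual = lambda a: K @ a + c * a**3 - f
jacobian = lambda a: K + np.diag(3.0 * c * a**2)

a = newton_raphson(residual, jacobian, np.zeros(2))
```

Near a buckling point the Jacobian becomes singular, which is precisely the difficulty the abstract notes for Galerkin-based post-buckling analysis.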


By means of Tersoff and Morse potentials, a three-dimensional molecular dynamics simulation is performed to study atomic force microscopy cutting of a silicon monocrystal surface. The interatomic forces between the workpiece and the pin tool, and among the workpiece atoms themselves, are calculated. A screw dislocation is introduced into the Si workpiece. It is found that dislocation motion does not occur during the atomic force microscopy cutting process. Simulation results show that the shear stress acting on the dislocation is far below the yield strength of Si.
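For reference, the Morse pair potential mentioned above and the corresponding pair force have the following form; the parameter values here are illustrative placeholders, not those used in the simulation.

```python
import math

def morse_energy(r, D=2.32, a=1.55, r0=2.35):
    """Morse pair potential V(r) = D * (exp(-2a(r-r0)) - 2*exp(-a(r-r0))).

    D: well depth, a: stiffness parameter, r0: equilibrium separation.
    Parameter values are illustrative only.
    """
    x = math.exp(-a * (r - r0))
    return D * (x * x - 2.0 * x)

def morse_force(r, D=2.32, a=1.55, r0=2.35):
    """Pair force F = -dV/dr = 2*a*D*(exp(-2a(r-r0)) - exp(-a(r-r0)))."""
    x = math.exp(-a * (r - r0))
    return 2.0 * a * D * (x * x - x)
```

At r = r0 the energy is the minimum -D and the force vanishes, which is the check usually applied before using such a potential in an MD force loop.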


In order to obtain an overall and systematic understanding of the performance of a two-stage light gas gun (TLGG), a numerical code simulating the processes occurring during a gun shot is developed, based on the quasi-one-dimensional unsteady equations of motion with the real gas effect, friction, and heat transfer taken into account in a characteristic formulation for both the driver and propellant gas. Comparisons of projectile velocities and projectile pressures along the barrel with experimental results from JET (Joint European Torus) and with computational data obtained by the Lagrangian method indicate that this code provides results with good accuracy over a wide range of gun geometries and loading conditions.
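The full characteristic formulation is beyond a short sketch, but the projectile side of the problem reduces to integrating m v dv/dx = p(x) A along the barrel. The quasi-steady toy model below illustrates only that step, with a user-supplied base-pressure profile standing in for the gas-dynamics solution.

```python
def projectile_velocity(p_base, area, mass, barrel_length, n=10000):
    """Muzzle velocity from m v dv/dx = p(x) * A (quasi-steady toy model).

    p_base: base pressure on the projectile as a function of position x (Pa)
    area: bore cross-sectional area (m^2)
    mass: projectile mass (kg)
    barrel_length: barrel length (m)

    Since v dv = (p A / m) dx, v(L) = sqrt((2/m) * integral of p(x)*A dx).
    """
    dx = barrel_length / n
    # Riemann-sum approximation of the work done on the projectile
    work = sum(p_base(i * dx) * area * dx for i in range(n))
    return (2.0 * work / mass) ** 0.5
```

In the real TLGG code the base pressure comes from the unsteady characteristics solution rather than a prescribed profile, but the energy balance above is the same.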


A ground-based experimental study on the motion of solid particles in liquid media with a vertical temperature gradient is performed in this paper. The movement of solid spheres toward the heated end of a closed cell is observed. The behavior and features of the motions examined are quite similar to the thermocapillary migration of bubbles and drops in a liquid. The measured particle velocities are about 10⁻³ to 10⁻⁴ mm/s. This velocity is compared with the velocity of particles floating in two liquid media. The physical mechanism of the motion is explored.


A temperature-controlled pool boiling (TCPB) device was developed to perform pool boiling heat transfer studies at both normal gravity and microgravity. A platinum wire, 60 μm in diameter and 30 mm in length, was used simultaneously as heater and thermometer. The heater resistance, and thus the heater temperature, was kept constant by a feedback circuit. The fluid was R113 at 0.1 MPa, nominally subcooled by 24 K in all cases. The results of experiments at both normal gravity and microgravity in the Drop Tower Beijing are presented. Nucleate and two-mode transition boiling were observed. For nucleate boiling, the heat transfer was slightly enhanced in microgravity, with no more than a 10% increase of the heat flux, while the bubble pattern was dramatically altered by the variation of the acceleration. For two-mode transition boiling, about a 20% decrease of the heat flux was obtained in microgravity, although the film-boiling portion receded. A scale analysis of the Marangoni convection surrounding a bubble during subcooled nucleate pool boiling is also presented. The characteristic velocity of the lateral motion and its observability were obtained approximately. The predictions are consistent with the experimental observations.


Computer science and electrical engineering have been the great success story of the twentieth century. The neat modularity and mapping of a language onto circuits have led to robots on Mars, desktop computers and smartphones. But these devices are not yet able to do some of the things that life takes for granted: repair a scratch, reproduce, regenerate, or grow exponentially fast, all while remaining functional.

This thesis explores and develops algorithms, molecular implementations, and theoretical proofs in the context of “active self-assembly” of molecular systems. The long-term vision of active self-assembly is the theoretical and physical implementation of materials that are composed of reconfigurable units with the programmability and adaptability of biology’s numerous molecular machines. En route to this goal, we must first find a way to overcome the memory limitations of molecular systems, and to discover the limits of complexity that can be achieved with individual molecules.

One of the main thrusts in molecular programming is to use computer science as a tool for figuring out what can be achieved. While molecular systems that are Turing-complete have been demonstrated [Winfree, 1996], these systems still cannot achieve some of the feats biology has achieved.

One might think that because a system is Turing-complete, capable of computing “anything,” it can perform any arbitrary task. But while such a system can simulate any digital computational problem, there are many behaviors that are not “computations” in the classical sense and cannot be directly implemented. Examples include exponential growth and molecular motion relative to a surface.

Passive self-assembly systems cannot implement these behaviors because (a) molecular motion relative to a surface requires a source of fuel that is external to the system, and (b) passive systems are too slow to assemble exponentially-fast-growing structures. We call these behaviors “energetically incomplete” programmable behaviors. This class of behaviors includes any behavior where a passive physical system simply does not have enough physical energy to perform the specified tasks in the requisite amount of time.

As we will demonstrate and prove, a sufficiently expressive implementation of an “active” molecular self-assembly approach can achieve these behaviors. Using an external source of fuel solves part of the problem, so the system is not “energetically incomplete.” But the programmable system also needs to have sufficient expressive power to achieve the specified behaviors. Perhaps surprisingly, some of these systems do not even require Turing completeness to be sufficiently expressive.

Building on a large variety of work by other scientists in the fields of DNA nanotechnology, chemistry and reconfigurable robotics, this thesis introduces several research contributions in the context of active self-assembly.

We show that simple primitives such as insertion and deletion are able to generate complex and interesting results, such as the growth of a linear polymer in logarithmic time and the ability of a linear polymer to treadmill. To this end we developed a formal model for active self-assembly that is directly implementable with DNA molecules. We show that this model is computationally equivalent to a machine capable of producing languages that are strictly stronger than the regular languages and, at most, as strong as context-free grammars. This is a great advance in the theory of active self-assembly, as prior models were either entirely theoretical or only implementable in the context of macro-scale robotics.

We developed a chain reaction method for the autonomous exponential growth of a linear DNA polymer. Our method is based on the insertion of molecules into the assembly, which generates two new insertion sites for every initial one employed. The building of a line in logarithmic time is a first step toward building a shape in logarithmic time. We demonstrate the first construction of a synthetic linear polymer that grows exponentially fast via insertion. We show via spectrofluorimetry and gel electrophoresis experiments that monomer molecules are converted into the polymer in logarithmic time. We also demonstrate the division of these polymers via the addition of a single DNA complex that competes with the insertion mechanism. This shows the growth of a population of polymers in logarithmic time. We characterize the DNA insertion mechanism that we utilize in Chapter 4. We experimentally demonstrate that we can control the kinetics of this reaction over at least seven orders of magnitude by programming the sequences of DNA that initiate the reaction.
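The logarithmic-time claim follows from the doubling of insertion sites: each insertion exposes two new sites, so under idealized synchronous rounds the polymer length doubles per round. A minimal sketch of this counting argument (of the growth law only, not the chemistry):

```python
def steps_to_length(target_length):
    """Count synchronous insertion rounds needed to reach target_length.

    Idealized model: start with one insertion site; every round, each
    site accepts one monomer and the insertion exposes two new sites,
    so sites double per round and length grows exponentially,
    i.e. length n is reached in O(log n) rounds.
    """
    length, sites, steps = 1, 1, 0
    while length < target_length:
        length += sites   # every site inserts one monomer
        sites *= 2        # each insertion creates two new sites
        steps += 1
    return steps
```

In this model the length after k rounds is 2^k, so a polymer of 1024 monomers needs only 10 rounds, whereas end-growth alone would need about 1024.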

In addition, we review co-authored work on programming molecular robots using prescriptive landscapes of DNA origami; this was the first microscopic demonstration of programming a molecular robot to walk on a 2-dimensional surface. We developed a snapshot method for imaging these random-walking molecular robots and a CAPTCHA-like analysis method for difficult-to-interpret imaging data.


Designing for all requires the adaptation and modification of current design best practices to encompass a broader range of user capabilities. This is particularly the case in the design of the human-product interface. Product interfaces exist everywhere, and when designing them there is a very strong temptation to jump to prescribing a solution with only a cursory attempt to understand the nature of the problem. This is particularly the case when attempting to adapt existing designs, optimised for able-bodied users, for use by disabled users. However, such approaches have led to numerous products that are neither usable nor commercially successful. In order to develop a successful design approach it is necessary to consider the fundamental structure of the design process being applied. A three-stage design process development strategy, comprising problem definition, solution development and solution evaluation, should be adopted. This paper describes the development of a new design approach based on the application of usability heuristics to the design of interfaces. This is illustrated by reference to a particular case study of the re-design of a computer interface for controlling an assistive device.


The motion of a single Brownian particle of arbitrary size through a dilute colloidal dispersion of neutrally buoyant bath spheres of another characteristic size in a Newtonian solvent is examined in two contexts. First, the particle in question, the probe particle, is subject to a constant applied external force drawing it through the suspension as a simple model for active and nonlinear microrheology. The strength of the applied external force, normalized by the restoring forces of Brownian motion, is the Péclet number, Pe. This dimensionless quantity describes how strongly the probe is upsetting the equilibrium distribution of the bath particles. The mean motion and fluctuations in the probe position are related to interpreted quantities of an effective viscosity of the suspension. These interpreted quantities are calculated to first order in the volume fraction of bath particles and are intimately tied to the spatial distribution, or microstructure, of bath particles relative to the probe. For weak Pe, the disturbance to the equilibrium microstructure is dipolar in nature, with accumulation and depletion regions on the front and rear faces of the probe, respectively. With increasing applied force, the accumulation region compresses to form a thin boundary layer whose thickness scales with the inverse of Pe. The depletion region lengthens to form a trailing wake. The magnitude of the microstructural disturbance is found to grow with increasing bath particle size -- small bath particles in the solvent resemble a continuum with effective microviscosity given by Einstein's viscosity correction for a dilute dispersion of spheres. Large bath particles readily advect toward the minimum approach distance possible between the probe and bath particle, and the probe and bath particle pair rotating as a doublet is the primary mechanism by which the probe particle is able to move past; this is a process that slows the motion of the probe by a factor of the size ratio. 
The intrinsic microviscosity is found to force-thin at low Péclet number, due to decreasing contributions from Brownian motion, and to force-thicken at high Péclet number, due to the increasing influence of the configuration-averaged reduction in the probe's hydrodynamic self-mobility. Nonmonotonicity at finite sizes is evident in the limiting high-Pe intrinsic microviscosity plateau as a function of bath-to-probe particle size ratio. The intrinsic microviscosity is found to grow with the size ratio for very small probes even at large-but-finite Péclet numbers. However, even a small repulsive interparticle potential that excludes lubrication interactions can reduce this intrinsic microviscosity back to an order-one quantity. The results of this active microrheology study are compared to previous theoretical studies of falling-ball and towed-ball rheometry and of sedimentation and diffusion in polydisperse suspensions, and the singular limit of full hydrodynamic interactions is noted.
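Two quantities used throughout this analysis can be stated compactly: the Péclet number comparing the applied force to Brownian restoring forces, and Einstein's first-order viscosity correction recovered in the small-bath-particle limit. The sketch below uses one common convention for the length scale in Pe; conventions differ, so treat it as illustrative.

```python
def peclet_number(force, bath_radius, kT):
    """Pe = F*b / kT: the applied external force on the probe normalized
    by the Brownian restoring force over the bath-particle size b.
    (The choice of length scale in Pe varies between studies.)"""
    return force * bath_radius / kT

def einstein_viscosity(eta_solvent, phi):
    """Einstein's dilute-limit correction eta = eta_s * (1 + 2.5*phi),
    valid to first order in the bath-particle volume fraction phi."""
    return eta_solvent * (1.0 + 2.5 * phi)
```

For example, with kT ~ 4.1e-21 J at room temperature, a piconewton-scale force over a micron-scale bath particle gives Pe well above one, i.e. a strongly driven probe.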

Second, the probe particle in question is no longer subject to a constant applied external force. Rather, the particle is considered to be a catalytically-active motor, consuming the bath reactant particles on its reactive face while passively colliding with reactant particles on its inert face. By creating an asymmetric distribution of reactant about its surface, the motor is able to diffusiophoretically propel itself with some mean velocity. The effects of finite size of the solute are examined on the leading order diffusive microstructure of reactant about the motor. Brownian and interparticle contributions to the motor velocity are computed for several interparticle interaction potential lengths and finite reactant-to-motor particle size ratios, with the dimensionless motor velocity increasing with decreasing motor size. A discussion on Brownian rotation frames the context in which these results could be applicable, and future directions are proposed which properly incorporate reactant advection at high motor velocities.


The initial objective of Part I was to determine the nature of upper mantle discontinuities, the average velocities through the mantle, and differences between mantle structure under continents and oceans by the use of P'dP', the seismic core phase P'P' (PKPPKP) that reflects at depth d in the mantle. In order to accomplish this, it was found necessary to also investigate core phases themselves and their inferences on core structure. P'dP' at both single stations and at the LASA array in Montana indicates that the following zones are candidates for discontinuities with varying degrees of confidence: 800-950 km, weak; 630-670 km, strongest; 500-600 km, strong but interpretation in doubt; 350-415 km, fair; 280-300 km, strong, varying in depth; 100-200 km, strong, varying in depth, may be the bottom of the low-velocity zone. It is estimated that a single station cannot easily discriminate between asymmetric P'P' and P'dP' for lead times of about 30 sec from the main P'P' phase, but the LASA array reduces this uncertainty range to less than 10 sec. The problems of scatter of P'P' main-phase times, mainly due to asymmetric P'P', incorrect identification of the branch, and lack of the proper velocity structure at the velocity point, are avoided and the analysis shows that one-way travel of P waves through oceanic mantle is delayed by 0.65 to 0.95 sec relative to United States mid-continental mantle.

A new P-wave velocity core model is constructed from observed times, dt/dΔ's, and relative amplitudes of P'; the observed times of SKS, SKKS, and PKiKP; and a new mantle-velocity determination by Jordan and Anderson. The new core model is smooth except for a discontinuity at the inner-core boundary determined to be at a radius of 1215 km. Short-period amplitude data do not require the inner core Q to be significantly lower than that of the outer core. Several lines of evidence show that most, if not all, of the arrivals preceding the DF branch of P' at distances shorter than 143° are due to scattering as proposed by Haddon and not due to spherically symmetric discontinuities just above the inner core as previously believed. Calculation of the travel-time distribution of scattered phases and comparison with published data show that the strongest scattering takes place at or near the core-mantle boundary close to the seismic station.

In Part II, the largest events in the San Fernando earthquake series, initiated by the main shock at 14 00 41.8 GMT on February 9, 1971, were chosen for analysis from the first three months of activity, 87 events in all. The initial rupture location coincides with the lower, northernmost edge of the main north-dipping thrust fault and the aftershock distribution. The best focal mechanism fit to the main shock P-wave first motions constrains the fault plane parameters to: strike, N 67° (± 6°) W; dip, 52° (± 3°) NE; rake, 72° (67°-95°) left lateral. Focal mechanisms of the aftershocks clearly outline a downstep of the western edge of the main thrust fault surface along a northeast-trending flexure. Faulting on this downstep is left-lateral strike-slip and dominates the strain release of the aftershock series, which indicates that the downstep limited the main event rupture on the west. The main thrust fault surface dips at about 35° to the northeast at shallow depths and probably steepens to 50° below a depth of 8 km. This steep dip at depth is a characteristic of other thrust faults in the Transverse Ranges and indicates the presence at depth of laterally-varying vertical forces that are probably due to buckling or overriding that causes some upward redirection of a dominant north-south horizontal compression. Two sets of events exhibit normal dip-slip motion with shallow hypocenters and correlate with areas of ground subsidence deduced from gravity data. Several lines of evidence indicate that a horizontal compressional stress in a north or north-northwest direction was added to the stresses in the aftershock area 12 days after the main shock. After this change, events were contained in bursts along the downstep and sequencing within the bursts provides evidence for an earthquake-triggering phenomenon that propagates with speeds of 5 to 15 km/day. 
Seismicity before the San Fernando series and the mapped structure of the area suggest that the downstep of the main fault surface is not a localized discontinuity but is part of a zone of weakness extending from Point Dume, near Malibu, to Palmdale on the San Andreas fault. This zone is interpreted as a decoupling boundary between crustal blocks that permits them to deform separately in the prevalent crustal-shortening mode of the Transverse Ranges region.


Hydrogen is the only atom for which the Schrödinger equation is solvable. Consisting only of a proton and an electron, hydrogen is the lightest element and, nevertheless, is far from being simple. Under ambient conditions it forms diatomic H2 molecules in the gas phase, but different temperatures and pressures lead to a complex phase diagram, which is not completely known yet. Solid hydrogen was first documented in 1899 [1] and was found to be insulating. At higher pressures, however, hydrogen can be metallized. In 1935 Wigner and Huntington predicted that the metallization pressure would be 25 GPa [2], where the molecules would dissociate to form a monoatomic metal, like the alkali metals that lie below hydrogen in the periodic table. The prediction of the metallization pressure turned out to be wrong: metallic hydrogen has not been found yet, even under a pressure as high as 320 GPa. Nevertheless, extrapolations based on optical measurements suggest that a metallic phase may be attained at 450 GPa [3]. The interest of material scientists in metallic hydrogen can be attributed, at least to a great extent, to Ashcroft, who in 1968 suggested that such a system could be a high-temperature superconductor [4]. The temperature at which this material would exhibit a transition from a superconducting to a non-superconducting state (Tc) was estimated to be around room temperature. The implications of such a statement are very interesting in the field of astrophysics: in planets that contain a large quantity of hydrogen and whose temperature is below Tc, superconducting hydrogen may be found, especially at the center, where the gravitational pressure is high. This might be the case of Jupiter, whose proportion of hydrogen is about 90%. There are also speculations suggesting that the high magnetic field of Jupiter is due to persistent currents related to the superconducting phase [5].
The metallization and superconductivity of hydrogen have puzzled scientists for decades, and the community is trying to answer several questions. For instance, what is the structure of hydrogen at very high pressures? Or a more general one: what is the maximum Tc a phonon-mediated superconductor can have [6]? A great experimental effort has been carried out pursuing metallic hydrogen and trying to answer the questions above; however, the characterization of solid phases of hydrogen is a hard task. Achieving the high pressures needed to reach the sought phases requires advanced technologies. Diamond anvil cells (DAC) are commonly used devices. These devices consist of two diamonds with tips of small area; for this reason, when a force is applied, the pressure exerted is very large. This pressure is uniaxial, but it can be turned into hydrostatic pressure using transmitting media. Nowadays this method makes it possible to reach pressures higher than 300 GPa, but even at this pressure hydrogen does not show metallic properties. A recently developed technique that improves on the DAC can reach pressures as high as 600 GPa [7], so it is a promising step forward in high-pressure physics. Another drawback is that the electronic density of the structures is so low that X-ray diffraction patterns have low resolution. For these reasons, ab initio studies are an important source of knowledge in this field, within their limitations. When treating hydrogen, there are many subtleties in the calculations: as the atoms are so light, the ions forming the crystalline lattice have significant displacements even when temperatures are very low, and even at T = 0 K, due to Heisenberg's uncertainty principle. Thus, the energy corresponding to this zero-point (ZP) motion is significant and has to be included in an accurate determination of the most stable phase.
This has been done by including ZP vibrational energies within the harmonic approximation over a range of pressures at T = 0 K, giving rise to a series of structures that are stable in their respective pressure ranges [8]. Very recently, a treatment of the phases of hydrogen that includes anharmonicity in the ZP energies has suggested that the relative stability of the phases may change with respect to calculations within the harmonic approximation [9]. Many of the proposed structures for solid hydrogen have been investigated. In particular, the Cmca-4 structure, which was found to be the stable one from 385 to 490 GPa [8], is metallic. Calculations for this structure, within the harmonic approximation for the ionic motion, predict a Tc of up to 242 K at 450 GPa [10]. Nonetheless, due to the large ionic displacements, the harmonic approximation may not suffice to describe the system correctly. The aim of this work is to apply a recently developed method to treat anharmonicity, the stochastic self-consistent harmonic approximation (SSCHA) [11], to Cmca-4 metallic hydrogen. This way, we will be able to study the effects of anharmonicity on the phonon spectrum and to try to understand the changes it may provoke in the value of Tc. The work is structured as follows. First we present the theoretical basis of the calculations: Density Functional Theory (DFT) for the electronic calculations, phonons in the harmonic approximation, and the SSCHA. Then we apply these methods to Cmca-4 hydrogen and discuss the results obtained. In the last chapter we draw some conclusions and propose possible future work.
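The zero-point energy entering the harmonic stability comparison is simply the sum of ħω/2 over the phonon modes. A minimal sketch, where the mode frequencies are taken as inputs (e.g. from a DFT phonon calculation):

```python
HBAR = 1.054571817e-34  # reduced Planck constant, J*s

def zero_point_energy(frequencies_rad_s):
    """Harmonic zero-point energy E_ZP = sum over modes of (hbar*omega)/2.

    frequencies_rad_s: iterable of phonon angular frequencies (rad/s),
    e.g. the modes of one q-point grid from a DFT phonon calculation.
    """
    return sum(0.5 * HBAR * w for w in frequencies_rad_s)
```

Because hydrogen ions are so light, their mode frequencies are high and this term can shift which structure has the lowest total energy; the anharmonic (SSCHA) treatment replaces the bare harmonic frequencies in this sum with self-consistently renormalized ones.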


A study is made of the accuracy of electronic digital computer calculations of ground displacement and response spectra from strong-motion earthquake accelerograms. This involves an investigation of methods of the preparatory reduction of accelerograms into a form useful for the digital computation and of the accuracy of subsequent digital calculations. Various checks are made for both the ground displacement and response spectra results, and it is concluded that the main errors are those involved in digitizing the original record. Differences resulting from various investigators digitizing the same experimental record may become as large as 100% of the maximum computed ground displacements. The spread of the results of ground displacement calculations is greater than that of the response spectra calculations. Standardized methods of adjustment and calculation are recommended, to minimize such errors.
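The response-spectrum computation referred to above amounts to integrating a damped single-degree-of-freedom oscillator driven by the ground acceleration for each structural period, keeping the peak response. A minimal sketch using central-difference time stepping (a scheme chosen here for illustration; the paper's own numerical procedure is not specified):

```python
import math

def sdof_peak_displacement(accel, dt, period, damping=0.05):
    """Peak relative displacement of a damped SDOF oscillator driven by
    ground acceleration accel (m/s^2) sampled at interval dt (s), by
    central-difference stepping of u'' + 2*zeta*w*u' + w^2*u = -a_g.
    Stable for dt well below period/pi.
    """
    w = 2.0 * math.pi / period
    u_prev, u, peak = 0.0, 0.0, 0.0
    for a_g in accel:
        # (u_next - 2u + u_prev)/dt^2 + 2*zeta*w*(u_next - u_prev)/(2*dt)
        #   + w^2 * u = -a_g, solved for u_next:
        c1 = 1.0 / dt**2 + damping * w / dt
        c2 = 2.0 / dt**2 - w**2
        c3 = 1.0 / dt**2 - damping * w / dt
        u_next = (c2 * u - c3 * u_prev - a_g) / c1
        u_prev, u = u, u_next
        peak = max(peak, abs(u))
    return peak
```

Sweeping `period` over a range of structural periods and plotting the peak gives the displacement response spectrum; the digitization errors discussed above enter through the sampled `accel` record.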

Studies are made of the spread of response spectral values about their mean. The distribution is investigated experimentally by Monte Carlo techniques using an electric analog system with white noise excitation, and histograms are presented indicating the dependence of the distribution on the damping and period of the structure. Approximate distributions are obtained analytically by confirming and extending existing results with accurate digital computer calculations. A comparison of the experimental and analytical approaches indicates good agreement for low damping values where the approximations are valid. A family of distribution curves to be used in conjunction with existing average spectra is presented. The combination of analog and digital computations used with Monte Carlo techniques is a promising approach to the statistical problems of earthquake engineering.

Methods of analysis of very small earthquake ground motion records obtained simultaneously at different sites are discussed. The advantages of Fourier spectrum analysis for certain types of studies and methods of calculation of Fourier spectra are presented. The digitizing and analysis of several earthquake records is described and checks are made of the dependence of results on digitizing procedure, earthquake duration and integration step length. Possible dangers of a direct ratio comparison of Fourier spectra curves are pointed out and the necessity for some type of smoothing procedure before comparison is established. A standard method of analysis for the study of comparative ground motion at different sites is recommended.


Adenylate kinase (AK) is a signal-transducing protein that regulates cellular energy homeostasis by balancing between different conformations. An alteration of its activity can lead to severe pathologies such as heart failure, cancer and neurodegenerative diseases. A comprehensive elucidation of the large-scale conformational motions that rule the functional mechanism of this enzyme is of great value to rationally guide the development of new medications. Here, using a metadynamics-based computational protocol, we elucidate the thermodynamics and structural properties underlying the AK functional transitions. The free energy estimation of the conformational motions of the enzyme allows us to characterize the sequence of events that regulate its action. We reveal the atomistic details of the most relevant enzyme states, identifying residues such as Arg119 and Lys13, which play a key role during the conformational transitions and represent druggable spots for the design of enzyme inhibitors. Our study offers tools that open new areas of investigation into large-scale motion in proteins.


This paper describes an experimental investigation of tip clearance flow in a radial inflow turbine. Flow visualization and static pressure measurements were performed. These were combined with hot-wire traverses into the tip gap. The experimental data indicates that the tip clearance flow in a radial turbine can be divided into three regions. The first region is located at the rotor inlet, where the influence of relative casing motion dominates the flow over the tip. The second region is located towards midchord, where the effect of relative casing motion is weakened. Finally a third region exists in the exducer, where the effect of relative casing motion becomes small and the leakage flow resembles the tip flow behaviour in an axial turbine. Integration of the velocity profiles showed that there is little tip leakage in the first part of the rotor because of the effect of scraping. It was found that the bulk of tip leakage flow in a radial turbine passes through the exducer. The mass flow rate, measured at four chordwise positions, was compared with a standard axial turbine tip leakage model. The result revealed the need for a model suited to radial turbines. The hot-wire measurements also indicated a higher tip gap loss in the exducer of the radial turbine. This explains why the stage efficiency of a radial inflow turbine is more affected by increasing the radial clearance than by increasing the axial clearance.


An experimental investigation of the unsteady interaction between a turbulent boundary layer and a normal shock wave of strength M∞ = 1.4, subject to periodic forcing in a parallel-walled duct, has been conducted. Emphasis has been placed on the mechanism by which changes in the global flow field influence the local interaction structure. Static pressure measurements and high-speed Schlieren images of the unsteady interaction have been obtained. The pressure rise across the interaction and the appearance of the local SBLI structure have been observed to vary during the cycle of periodic shock wave motion. The magnitude of the pressure rise across the interaction is found to be related to the relative Mach number of the unsteady shock wave as it undergoes periodic motion. Variations in the upstream influence of the interaction are sensitive to the magnitude and direction of the shock wave velocity and acceleration, and it is proposed that a viscous lag exists between the point of boundary layer separation and the shock wave position. Further work exploring the implications of these findings is proposed, including studies of the variation in position of the points of boundary layer separation and reattachment.