990 results for Speed limits


Relevance:

20.00%

Publisher:

Abstract:

A waverider generated from a given flow field has a high lift-to-drag ratio because of the bow shock attached to its leading edge. However, leading-edge bluntness and off-design conditions can detach the bow shock from the leading edge and adversely affect the aerodynamic characteristics, so these two problems have long been regarded as important engineering science issues by aeronautical engineers. In this paper, a wide-speed-range vehicle is designed by applying low-speed and high-speed waverider design principles respectively; the vehicle can take off horizontally and accelerate to hypersonic speed for cruise. In addition, the sharp leading edge is blunted to alleviate aeroheating. Theoretical study and wind tunnel tests show that this vehicle has good aerodynamic performance over a wide speed range spanning subsonic, transonic, supersonic, and hypersonic speeds.
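
As an illustrative aside (not part of the paper), waverider forebodies are commonly carved from simple known flow fields such as planar wedge flow, where the shock attachment the abstract refers to is governed by the classical theta-beta-M oblique-shock relation. A minimal Python sketch solving that relation on its weak branch, with all numbers purely illustrative:

```python
import math

def shock_angle(M1, theta_deg, gamma=1.4):
    """Weak-branch oblique-shock angle (deg) for a wedge of angle theta_deg."""
    theta = math.radians(theta_deg)
    lo, hi = math.asin(1.0 / M1), math.radians(65.0)  # bracket the weak branch

    def f(beta):
        # theta-beta-M relation: tan(theta) = 2 cot(beta) (M^2 sin^2(beta) - 1)
        #                                     / (M^2 (gamma + cos 2beta) + 2)
        num = M1 ** 2 * math.sin(beta) ** 2 - 1.0
        den = M1 ** 2 * (gamma + math.cos(2.0 * beta)) + 2.0
        return math.atan(2.0 / math.tan(beta) * num / den) - theta

    for _ in range(60):  # bisection on the monotone weak branch
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < 0.0 else (lo, mid)
    return math.degrees(0.5 * (lo + hi))

print(f"Mach 6 flow over a 10 deg wedge: shock angle ~ {shock_angle(6.0, 10.0):.1f} deg")
```

Streamlines behind the computed shock would then be traced to form the compression surface, with the leading edge lying on the shock itself; that attachment is what yields the high lift-to-drag ratio the abstract describes.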

Relevance:

20.00%

Publisher:

Abstract:

The critical cavitating flow in liquid jet pumps under operating limits is investigated in this paper. Measurements of the axial pressure distribution along the wall of jet pumps indicate that two-phase critical flow occurs in the throat pipe under operating limits. The entrained flow rate and the distribution of wall pressure upstream of the lowest-pressure section do not change when the outlet pressure falls below a critical value. A liquid-vapor mixing shockwave is also observed under operating limits; the wave front moves back and forth at low frequency around the position of lowest pressure. From the measured axial wall pressures, the Mach number of the two-phase cavitating flow is calculated, and the maximum Mach number is found to be very close to 1 under operating limits. Further analysis infers a cross-section near the wave front where the Mach number approaches 1. Thus the liquid-vapor mixture velocity should reach the local sound speed, resulting in the occurrence of the operating limits.
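
To make the choking argument concrete, here is a minimal sketch (not the paper's method or data) of how a two-phase Mach number can be estimated, assuming the homogeneous-mixture sound speed given by Wood's equation; all numerical values are hypothetical:

```python
import math

def wood_sound_speed(alpha, rho_l=998.0, c_l=1480.0, rho_g=1.2, c_g=340.0):
    """Sound speed of a homogeneous liquid-gas mixture (Wood's equation).

    alpha -- vapor (gas) volume fraction; defaults are water/air-like values.
    """
    rho_m = alpha * rho_g + (1.0 - alpha) * rho_l
    compressibility = alpha / (rho_g * c_g**2) + (1.0 - alpha) / (rho_l * c_l**2)
    return 1.0 / math.sqrt(rho_m * compressibility)

# Illustrative numbers: even a modest vapor fraction collapses the mixture
# sound speed to a few tens of m/s, so a throat velocity of ~40 m/s is
# already transonic -- the choking condition the paper infers.
u_throat = 40.0                       # hypothetical mixture velocity, m/s
c_m = wood_sound_speed(alpha=0.1)     # hypothetical vapor fraction
print(f"c_mixture = {c_m:.1f} m/s, Mach = {u_throat / c_m:.2f}")
```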

Relevance:

20.00%

Publisher:

Abstract:

The flammability limits for flames propagating in rich propane/air mixtures under normal gravity were found to be 6.3% C3H8 for downward propagation and 9.2% C3H8 for upward propagation. The different limits may be explained by the action of preferential diffusion of the deficient reactant (Le < 1) on the limit flames, which are in different states of instability. In a previous study, the flammability limits under microgravity conditions were found to lie between the upward and downward limits obtained in a standard flammability tube under normal gravity. Those experiments revealed two limits under microgravity: one indicated by visible flame propagation and another indicated by a pressure rise without observed flame propagation. These limits were found to be far beyond the limit for downward-propagating flames at 1 g (6.3% C3H8) and close to the limit for upward-propagating flames at 1 g (9.2% C3H8). In the present work, a special schlieren system and an instantaneous temperature measurement system were applied in drop-tower experiments to observe combustion development during propagation of the flame front. A small closed cubic vessel (inner dimensions 9 cm × 9 cm × 9 cm) with schlieren-quality glass windows was used to study limit flames under gravity and microgravity conditions. Flame development in rich limit mixtures, not visible by straight photography in previous microgravity experiments, was identified with the schlieren method and the instantaneous temperature measurement system. The experiments in the small vessel showed practically no difference between the flammability limits under gravity and microgravity conditions. In this paper, the mechanism of flame propagation under these different conditions is systematically studied and compared, and the limit burning velocity is estimated.
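
For readers more used to equivalence ratios than fuel volume percentages, the quoted limits convert with elementary stoichiometry. A small sketch (assuming standard propane/air stoichiometry and 20.9% O2 in air; this conversion is not taken from the paper):

```python
# Hypothetical helper: convert fuel volume percent to equivalence ratio phi,
# assuming the stoichiometry C3H8 + 5 O2 -> 3 CO2 + 4 H2O.
def equivalence_ratio(fuel_pct, o2_per_fuel=5.0, o2_in_air=0.209):
    x = fuel_pct / 100.0
    fuel_air = x / (1.0 - x)                              # fuel/air mole ratio
    stoich_x = 1.0 / (1.0 + o2_per_fuel / o2_in_air)      # stoich. fuel fraction
    stoich_fuel_air = stoich_x / (1.0 - stoich_x)
    return fuel_air / stoich_fuel_air

for pct in (6.3, 9.2):
    print(f"{pct}% C3H8  ->  phi = {equivalence_ratio(pct):.2f}")
# The downward limit (~phi = 1.6) is thus considerably leaner than the
# upward limit (~phi = 2.4) on the rich side.
```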

Relevance:

20.00%

Publisher:

Abstract:

The calculation of the settling speed of coarse particles is first addressed using accelerated Stokesian dynamics without adjustable parameters, in which the far-field force acting on each particle, rather than the particle velocity, is chosen as the dependent variable to account for inter-particle hydrodynamic interactions. The sedimentation of a simple cubic array of spherical particles is simulated and compared with available results to verify and validate the numerical code and computational scheme. The improved method keeps the same O(N log N) computational cost as the usual accelerated Stokesian dynamics. More realistic random suspension sedimentation is then investigated with the help of the Monte Carlo method, and the computational results agree well with experimental fits. Finally, the sedimentation of finer cohesive particles, often observed in estuarine environments, is presented as a further application in coastal engineering.
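
For orientation (this is not the paper's accelerated Stokesian dynamics code), sedimentation results of this kind are conventionally normalized by the isolated-sphere Stokes settling velocity, which a few lines of Python suffice to evaluate:

```python
def stokes_settling_velocity(radius, rho_p, rho_f, mu, g=9.81):
    """Terminal speed of an isolated sphere in Stokes flow:
    U0 = 2 (rho_p - rho_f) g a^2 / (9 mu)."""
    return 2.0 * (rho_p - rho_f) * g * radius**2 / (9.0 * mu)

# Illustrative example: a 100-micron quartz grain settling in water.
U0 = stokes_settling_velocity(radius=50e-6, rho_p=2650.0, rho_f=1000.0, mu=1.0e-3)
print(f"U0 = {U0 * 1000:.2f} mm/s")
```

Many-body simulations like those in the paper then report how hydrodynamic interactions reduce the array's mean settling speed relative to this U0.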

Relevance:

20.00%

Publisher:

Abstract:

The effects of the constitution of the precursor mixed powders and of the scan speed on microstructure and wear properties were investigated for laser-clad gamma/Cr7C3/TiC composite coatings produced on gamma-TiAl intermetallic alloy substrates with NiCr-Cr3C2 precursor mixed powders. The results indicate that both the constitution of the precursor mixed powders and the beam scan rate have a remarkable influence on the microstructure and the attendant hardness and wear resistance of the formed composite coatings. The wear mechanisms of the original TiAl alloy and the laser-clad composite coatings were investigated. The coating with an optimum constitution of NiCr-Cr3C2 precursor mixed powders, processed at a moderate scan speed, exhibits the best wear resistance under dry sliding wear test conditions. (C) 2008 Elsevier Ltd. All rights reserved.

Relevance:

20.00%

Publisher:

Abstract:

A method is developed to calculate the settling speed of dilute arrays of spheres for three cases: I, a random array of freely moving particles; II, a random array of rigidly held particles; and III, a cubic array of particles. The basic idea of the technique is to write a formal representation of the solution and then manipulate this representation in a straightforward manner to obtain the result. For infinite arrays of spheres, our results agree with those previously found by other authors, and the present analysis appears to be simpler. The method yields more terms in the answer than was possible with Saffman's unified treatment for point particles. Some results for arbitrary two-sphere distributions are presented, and an analysis of the wall effect for particles settling in a tube is given. It is expected that the method presented here can be generalized to solve other types of problems.
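
For context, the dilute-limit corrections for two of these cases are classical results. A short sketch (coefficients assumed from Batchelor 1972 for the freely moving random array and Hasimoto 1959 for the simple cubic array, not quoted from this paper) shows how strongly the cases differ:

```python
def random_free_array(c):
    """Batchelor (1972): freely moving random array, U/U0 ~ 1 - 6.55 c."""
    return 1.0 - 6.55 * c

def simple_cubic_array(c):
    """Hasimoto (1959): simple cubic array, U/U0 ~ 1 - 1.7601 c**(1/3)."""
    return 1.0 - 1.7601 * c ** (1.0 / 3.0)

# c is the solid volume fraction; both formulas hold only for c << 1.
for c in (1e-4, 1e-3, 1e-2):
    print(f"c = {c:.0e}:  random {random_free_array(c):.4f},  "
          f"cubic {simple_cubic_array(c):.4f}")
```

The c^(1/3) dependence for the ordered array versus the linear dependence for the random array is exactly the kind of case-by-case difference the method described above is built to quantify.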

Relevance:

20.00%

Publisher:

Abstract:

In Part I of this thesis, a new magnetic spectrometer experiment that measured the β spectrum of ^(35)S is described. New limits on heavy neutrino emission in nuclear β decay were set for a heavy neutrino mass range between 12 and 22 keV. In particular, this measurement rejects the hypothesis that a 17 keV neutrino is emitted with sin^2 θ = 0.0085 at the 6σ statistical level. In addition, an auxiliary experiment was performed in which an artificial kink was induced in the β spectrum by means of an absorber foil that masked a fraction of the source area. This measurement demonstrated the sensitivity of the magnetic spectrometer to the spectral features of heavy neutrino emission.
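
The spectral signature such searches exploit is a kink in the β spectrum. In the standard parameterization (a textbook form, assumed here rather than quoted from the thesis), emission of a heavy neutrino of mass $m_H$ with mixing $\sin^2\theta$ modifies the phase-space factor as

$$\frac{dN}{dE} \;\propto\; p\,E\,F(Z,E)\left[(1-\sin^2\theta)\,(Q-E)^2 \;+\; \sin^2\theta\,(Q-E)\sqrt{(Q-E)^2-m_H^2}\;\Theta(Q-E-m_H)\right],$$

where $E$ and $p$ are the β electron's total energy and momentum, $F(Z,E)$ is the Fermi function, $Q$ is the spectral endpoint, and the step function $\Theta$ opens the heavy-neutrino branch only for $E < Q - m_H$, producing a kink at that energy whose size is set by $\sin^2\theta$.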

In Part II, a measurement of the neutron spallation yield and multiplicity by the Cosmic-ray Underground Background Experiment is described. The production of fast neutrons by muons was investigated at an underground depth of 20 meters water equivalent with a 200-liter detector filled with 0.09% Gd-loaded liquid scintillator. We measured a neutron production yield of (3.4 ± 0.7) × 10^(-5) neutrons per muon-g/cm^2, in agreement with other experiments. A single-to-double neutron multiplicity ratio of 4:1 was observed. In addition, stopped π^+ decays to µ^+ and then to e^+ were observed, as was the associated production of pions and neutrons by muon spallation interactions. Practically all of the π^+ produced by muons were accompanied by at least one neutron. These measurements serve as the basis for neutron background estimates for the San Onofre neutrino detector.

Relevance:

20.00%

Publisher:

Abstract:

Despite the complexity of biological networks, we find that certain common architectures govern network structure. These architectures impose fundamental constraints on system performance and create tradeoffs that the system must balance in the face of environmental uncertainty. This means that while a system may be optimized for a specific function through evolution, the optimal achievable state must respect these constraints. One such constraining architecture is autocatalysis, seen in many biological networks including glycolysis and ribosomal protein synthesis. Using a minimal model, we show that ATP autocatalysis in glycolysis imposes stability and performance constraints and that the experimentally well-studied glycolytic oscillations are in fact a consequence of a tradeoff between error minimization and stability. We also show that additional complexity in the network results in increased robustness. Ribosome synthesis is also autocatalytic, since ribosomes must be used to make more ribosomal proteins; when ribosomes have higher protein content, the autocatalysis is stronger. We show that this autocatalysis destabilizes the system, slows its response, and constrains its performance. On a larger scale, transcriptional regulation of whole organisms also follows architectural constraints, as can be seen in the differences between bacterial and yeast transcription networks. We show that the degree distribution of the bacterial transcription network follows a power law, while that of the yeast network follows an exponential distribution. We then explore previously proposed evolutionary models and show that neither the preferential-linking model nor the duplication-divergence model of network evolution generates the power-law, hierarchical structure found in bacteria. However, in real biological systems new nodes arise through both duplication and horizontal gene transfer, and we show that a biologically reasonable combination of the two mechanisms generates the desired network.
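
As a toy illustration of that final point (the parameters and wiring rules here are illustrative assumptions, not the thesis' actual model), a growth process mixing gene duplication with random attachment, the latter a crude stand-in for horizontal gene transfer, can be sketched in a few lines of Python:

```python
import random

def grow_network(n_nodes, p_dup=0.5, keep=0.6, hgt_links=2, seed=0):
    """Grow a directed network: with probability p_dup a new node copies a
    random node's outgoing (regulatory) edges, each kept with probability
    `keep` (duplication-divergence); otherwise it wires to `hgt_links`
    random existing nodes (stand-in for horizontal gene transfer)."""
    rng = random.Random(seed)
    adj = {0: set(), 1: {0}}          # tiny seed graph
    for new in range(2, n_nodes):
        if rng.random() < p_dup:
            template = rng.randrange(new)
            edges = {t for t in adj[template] if rng.random() < keep}
        else:
            k = min(hgt_links, new)
            edges = set(rng.sample(range(new), k))
        adj[new] = edges
    return adj

net = grow_network(1000)
degrees = [len(v) for v in net.values()]
print("mean out-degree:", sum(degrees) / len(degrees))
```

Sweeping p_dup and examining the resulting degree distributions gives the flavor of the power-law-versus-exponential comparison described above, though the actual analysis in the thesis is more detailed.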

Relevance:

20.00%

Publisher:

Abstract:

In the quest to develop viable designs for third-generation optical interferometric gravitational-wave detectors, one strategy is to monitor the relative momentum or speed of the test-mass mirrors rather than their relative position. The most straightforward design for a speed-meter interferometer that accomplishes this is described and analyzed in Chapter 2. This design (due to Braginsky, Gorodetsky, Khalili, and Thorne) is analogous to a microwave-cavity speed meter conceived by Braginsky and Khalili. A mathematical mapping between the microwave speed meter and the optical interferometric speed meter is developed and used to show (in accord with the speed being a quantum nondemolition observable) that in principle the interferometric speed meter can beat the gravitational-wave standard quantum limit (SQL) by an arbitrarily large amount, over an arbitrarily wide range of frequencies. However, in practice, to reach or beat the SQL, this specific speed meter requires exorbitantly high input light power. The physical reason for this is explored, along with other issues such as constraints on performance due to optical dissipation.

Chapter 3 proposes a more sophisticated version of a speed meter. This new design requires only a modest input power and appears to be a fully practical candidate for third-generation LIGO. It can beat the SQL (the approximate sensitivity of second-generation LIGO interferometers) over a broad range of frequencies (~10 to 100 Hz in practice) by a factor $h/h_{\rm SQL} \sim \sqrt{W^{\rm SQL}_{\rm circ}/W_{\rm circ}}$. Here $W_{\rm circ}$ is the light power circulating in the interferometer arms and $W^{\rm SQL}_{\rm circ} \simeq 800$ kW is the circulating power required to beat the SQL at 100 Hz (the LIGO-II power). If squeezed vacuum (with a power squeeze factor $e^{-2R}$) is injected into the interferometer's output port, the SQL can be beat with a much reduced laser power: $h/h_{\rm SQL} \sim \sqrt{e^{-2R}\,W^{\rm SQL}_{\rm circ}/W_{\rm circ}}$. For realistic parameters ($e^{-2R} \simeq 0.1$ and $W_{\rm circ} \simeq 800$ to 2000 kW), the SQL can be beat by a factor of ~3 to 4 from 10 to 100 Hz. [However, as the power increases in these expressions, the speed meter becomes more narrowband; additional power and re-optimization of some parameters are required to maintain the wide band.] By performing frequency-dependent homodyne detection on the output (with the aid of two kilometer-scale filter cavities), one can markedly improve the interferometer's sensitivity at frequencies above 100 Hz.
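
Plugging the quoted parameters into the squeezed-input expression recovers the stated improvement directly; a short numerical check using only numbers from the text:

```python
import math

# Evaluate h/h_SQL ~ sqrt(e^{-2R} * W_SQL / W_circ) with the parameters
# quoted above: power squeeze factor e^{-2R} ~= 0.1 and W_SQL ~= 800 kW.
def beat_factor(W_circ_kW, W_SQL_kW=800.0, power_squeeze=0.1):
    return 1.0 / math.sqrt(power_squeeze * W_SQL_kW / W_circ_kW)

for W in (800.0, 2000.0):
    print(f"W_circ = {W:4.0f} kW  ->  SQL beaten by ~{beat_factor(W):.1f}x")
# Prints ~3.2x and ~5.0x; the bracketed caveat above (narrowing bandwidth,
# re-optimization of parameters) is why the text quotes ~3 to 4 in practice.
```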

Chapters 2 and 3 are part of an ongoing effort to develop a practical variant of an interferometric speed meter and to combine the speed-meter concept with other ideas to yield a promising third-generation interferometric gravitational-wave detector that entails low laser power.

Chapter 4 is a contribution to the foundations for analyzing sources of gravitational waves for LIGO. Specifically, it presents an analysis of the tidal work done on a self-gravitating body (e.g., a neutron star or black hole) in an external tidal field (e.g., that of a binary companion). The change in the mass-energy of the body as a result of the tidal work, or "tidal heating," is analyzed using the Landau-Lifshitz pseudotensor and the local asymptotic rest frame of the body. It is shown that the work done on the body is gauge invariant, while the body-tidal-field interaction energy contained within the body's local asymptotic rest frame is gauge dependent. This is analogous to Newtonian theory, where the interaction energy is shown to depend on how one localizes gravitational energy, but the work done on the body is independent of that localization. These conclusions play a role in analyses, by others, of the dynamics and stability of the inspiraling neutron-star binaries whose gravitational waves are likely to be seen and studied by LIGO.

Relevance:

20.00%

Publisher:

Abstract:

Optical microscopy has become an indispensable tool for biological research since its invention, owing mostly to its sub-cellular spatial resolution, non-invasiveness, instrumental simplicity, and the intuitive observations it provides. Nonetheless, obtaining reliable, quantitative spatial information from conventional wide-field optical microscopy is not as straightforward as it appears. This is because in the acquired images the information about out-of-focus regions is spatially blurred and mixed with in-focus information. In other words, conventional wide-field optical microscopy transforms the three-dimensional (volumetric) spatial information about the object into a two-dimensional form in each acquired image, and therefore distorts that spatial information. Several fluorescence holography-based methods have demonstrated the ability to obtain three-dimensional information about objects, but these methods generally rely on decomposing stereoscopic visualizations to extract volumetric information and are unable to resolve complex three-dimensional structures such as a multi-layer sphere.

The concept of optical-sectioning techniques, on the other hand, is to detect only two-dimensional information about an object at each acquisition. Specifically, each image obtained by optical-sectioning techniques contains mainly the information about an optically thin layer inside the object, as if only a thin histological section is being observed at a time. Using such a methodology, obtaining undistorted volumetric information about the object simply requires taking images of the object at sequential depths.

Among existing methods of obtaining volumetric information, its practicability has made optical sectioning the most commonly used and most powerful one in biological science. However, when applied to imaging living biological systems, conventional single-point-scanning optical-sectioning techniques often cause a degree of photo-damage because of the high focal intensity at the scanning point. To overcome this issue, several wide-field optical-sectioning techniques have been proposed and demonstrated, although not without introducing new limitations and compromises such as low signal-to-background ratios and reduced axial resolution. As a result, single-point-scanning optical-sectioning techniques remain the most widely used instruments for volumetric imaging of living biological systems to date.

In order to develop wide-field optical-sectioning techniques with optical performance equivalent to that of single-point-scanning ones, this thesis first introduces the mechanisms and limitations of existing wide-field optical-sectioning techniques, and then presents our innovations aimed at overcoming these limitations. We demonstrate, theoretically and experimentally, that our proposed wide-field optical-sectioning techniques can achieve diffraction-limited optical sectioning, low out-of-focus excitation, and high-frame-rate imaging in living biological systems. Beyond these imaging capabilities, our proposed techniques are instrumentally simple and economical, and are straightforward to implement on conventional wide-field microscopes. Together, these advantages show the potential of our innovations to be widely used for high-speed volumetric fluorescence imaging of living biological systems.