881 results for Low Speed Switched Reluctance Machine
Abstract:
An investigation has been made into the effect of microstructural parameters on the propensity for shear localization during high-speed torsional testing in a split Hopkinson bar, at average strain rates of 610, 650 and 1500 s−1, in low carbon steels. The steels were given quenched, quenched-and-tempered, and normalized treatments, providing a wide range of microstructures and mechanical properties. The results indicate that the occurrence of shear localization is sensitive to the strength of the steels: the tendency of the quenched steel to form a shear band is higher than that of the other two. It is also found that there is a critical strain at which shear localization occurs, and this critical strain depends strongly on the strength of the steel. Before reaching this point the material undergoes slow work-hardening; beyond it, the material work-softens, a process during which deformation is gradually localized and eventually becomes spatially correlated to form a macroscopic shear band. SEM examination reveals that shear localization within the band involves a series of sequential crystallographic and non-crystallographic events, including changes in crystal orientation and misorientation and even damage to the microstructure through the initiation, growth and coalescence of microcracks. The sharp drop in load-carrying capacity is thought to be associated with the growth and coalescence of the microcracks rather than with the onset of shear localization itself, although shear localization is seen to accelerate that growth and coalescence. Thin-foil TEM observations reveal that the dislocation density in the band is extremely high and that the tangled dislocation arrangements and cell structures tend to align along the shear direction.
The multiplication and interaction of dislocations appears to be responsible for the work-hardening of the steels, while the avalanche of dislocation cells corresponds to the sharp drop in shear stress at which the deformed specimen breaks. Double shear bands and kink bands are also observed in the present study; the principal band develops first, and its width is narrower than that of the secondary band.
Abstract:
A study has been made of the microstructure of the thermally assisted shear band in a low carbon ferrite-pearlite steel, resulting from high-speed torsional testing at an average strain rate of about 1500 s−1. Metallographic examination showed several fine shear bands distributed over the deformed region (the gauge length of the specimen). The width of these bands is estimated to be on the order of 50 μm, and the spacing between them is roughly 100 μm. Detailed scanning electron microscopy indicates that damage to the microstructure within the band is very apparent, as evidenced by microcrack initiation and coalescence along the shear deformation band. However, there is no evidence that the material in the band had become microcrystalline or non-crystalline.
Abstract:
In this paper, the general Mach number equation is derived, and the influence of typical energy forms in the solar wind is analysed in detail. It is shown that the acceleration of the solar wind depends critically on the form of heating in the corona, and that the transonic mechanism results mainly from the combined adjustment of the varying cross-section of the flow tubes and the heat-source term. The accelerating mechanism is similar for the high-speed stream from a coronal hole and for the normal solar wind. However, the temperature in the lower levels of a coronal hole is low, so more heat must be supplied farther out, which yields a high wind speed; in the ordinary coronal region the situation is the opposite, and the solar wind velocity is therefore lower. The accelerating process is calculated for various typical parameters, and it is found that the high-speed stream may reach 800 km/s.
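For reference (a sketch, not the paper's full derivation): in the isothermal limit, the Mach number equation for M = v/a, with sound speed a and flow-tube cross-section A(r), reduces to the classical Parker form,

```latex
\frac{M^2 - 1}{M}\,\frac{dM}{dr}
  = \frac{1}{A}\frac{dA}{dr} - \frac{G M_\odot}{a^2 r^2}
```

where the first right-hand term expresses the adjustment by the varying cross-section (radial spherical expansion, A ∝ r², gives 2/r); in the general case analysed in the paper, heat-source terms enter the right-hand side as well.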
Abstract:
EFTA 2009
Abstract:
179 p.
Abstract:
Singular Value Decomposition (SVD) is a key linear algebraic operation in many scientific and engineering applications. In particular, many computational intelligence systems rely on machine learning methods involving high-dimensionality datasets that must be processed rapidly for real-time adaptability. In this paper we describe a practical FPGA (Field Programmable Gate Array) implementation of an SVD processor for accelerating the solution of large LSE problems. The design approach has been comprehensive, from algorithmic refinement through numerical analysis to customization for an efficient hardware realization. The processing scheme rests on an adaptive vector-rotation evaluator for error regularization that enhances convergence speed with no penalty on solution accuracy. The proposed architecture, which follows a data-transfer scheme, is scalable and based on the interconnection of simple rotation units, allowing a trade-off between occupied area and processing acceleration in the final implementation. This permits the SVD processor to be implemented on both low-cost and high-end FPGAs, according to the final application requirements.
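The abstract does not spell out the rotation-based processing scheme; as an illustration of the family of algorithms such rotation-unit architectures typically implement, here is a minimal one-sided Jacobi SVD sketch (orthogonalizing column pairs by plane rotations), which is an assumption about the general technique, not the paper's actual design:

```python
import numpy as np

def jacobi_svd(A, sweeps=30, tol=1e-12):
    """One-sided Jacobi SVD: repeatedly rotate column pairs of A until all
    columns are mutually orthogonal; their norms are the singular values."""
    U = A.astype(float).copy()
    n = U.shape[1]
    V = np.eye(n)
    for _ in range(sweeps):
        off = 0.0  # largest remaining normalized column correlation
        for p in range(n - 1):
            for q in range(p + 1, n):
                alpha = U[:, p] @ U[:, p]
                beta = U[:, q] @ U[:, q]
                gamma = U[:, p] @ U[:, q]
                off = max(off, abs(gamma) / np.sqrt(alpha * beta))
                if abs(gamma) < tol:
                    continue
                # plane-rotation angle that zeroes the (p, q) inner product
                zeta = (beta - alpha) / (2.0 * gamma)
                t = np.sign(zeta) / (abs(zeta) + np.sqrt(1.0 + zeta * zeta))
                c = 1.0 / np.sqrt(1.0 + t * t)
                s = c * t
                Up, Uq = U[:, p].copy(), U[:, q].copy()
                U[:, p], U[:, q] = c * Up - s * Uq, s * Up + c * Uq
                Vp, Vq = V[:, p].copy(), V[:, q].copy()
                V[:, p], V[:, q] = c * Vp - s * Vq, s * Vp + c * Vq
        if off < tol:
            break
    sigma = np.linalg.norm(U, axis=0)
    U = U / sigma  # normalize columns into left singular vectors
    order = np.argsort(sigma)[::-1]
    return U[:, order], sigma[order], V[:, order]

A = np.array([[3.0, 2.0], [2.0, 3.0], [2.0, -2.0]])
U, S, V = jacobi_svd(A)
print(np.round(S, 6))  # → [5. 3.]
```

The inner rotations are mutually independent across disjoint column pairs, which is why this scheme maps naturally onto an interconnected array of simple rotation units in hardware.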
Abstract:
In this paper, glass formation theory is applied to study the formation mechanism of the low-leaching glassy slag produced during plasma waste treatment. The research shows that SiO2 acts as a network former, building a three-dimensional Si-O tetrahedral network in which heavy metals are bonded or encapsulated; these Si-O tetrahedra thus protect the heavy metals against leaching from the vitrified slag and against acid corrosion. For given chemical compositions of waste, the glass-forming ability of the vitrified slag can be represented by the ratio of total oxygen ions to total network-former ions in the glass (O/Si), which is favourable in the range of 2–3. A plasma arc reactor is used to conduct vitrification experiments on two kinds of fly ash with additives, in which the effects of parameters including arc power, cooling rate and treatment temperature are studied. The chemical compositions of the fly ashes are analyzed by X-ray fluorescence (XRF) spectrometry. The experimental results show that both the cooling rate and O/Si have an important influence on the formation of the vitrified slag, qualitatively in accordance with the predictions of glass formation theory.
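The O/Si criterion is a simple mole-ratio calculation; a minimal sketch, using standard molar masses and a hypothetical slag composition (the oxide list and wt% values are illustrative, not from the paper):

```python
# Sketch of the O/Si criterion: ratio of total oxygen ions to network-former
# ions (Si here), computed from an oxide composition given in wt%.
OXIDES = {
    # oxide: (molar mass g/mol, cations per formula, oxygens per formula, network former?)
    "SiO2":  (60.08,  1, 2, True),
    "Al2O3": (101.96, 2, 3, False),
    "CaO":   (56.08,  1, 1, False),
    "Na2O":  (61.98,  2, 1, False),
}

def o_to_si_ratio(wt_percent):
    """Moles of O divided by moles of network-former cations, from oxide wt%."""
    o_moles, former_moles = 0.0, 0.0
    for oxide, wt in wt_percent.items():
        mm, n_cat, n_o, is_former = OXIDES[oxide]
        moles = wt / mm
        o_moles += moles * n_o
        if is_former:
            former_moles += moles * n_cat
    return o_moles / former_moles

comp = {"SiO2": 60.0, "Al2O3": 10.0, "CaO": 25.0, "Na2O": 5.0}  # hypothetical wt%
print(round(o_to_si_ratio(comp), 2))  # → 2.82, inside the favourable 2-3 window
```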
Abstract:
In the quest to develop viable designs for third-generation optical interferometric gravitational-wave detectors, one strategy is to monitor the relative momentum or speed of the test-mass mirrors, rather than monitoring their relative position. The most straightforward design for a speed-meter interferometer that accomplishes this is described and analyzed in Chapter 2. This design (due to Braginsky, Gorodetsky, Khalili, and Thorne) is analogous to a microwave-cavity speed meter conceived by Braginsky and Khalili. A mathematical mapping between the microwave speed meter and the optical interferometric speed meter is developed and used to show (in accord with the speed being a quantum nondemolition observable) that in principle the interferometric speed meter can beat the gravitational-wave standard quantum limit (SQL) by an arbitrarily large amount, over an arbitrarily wide range of frequencies. However, in practice, to reach or beat the SQL, this specific speed meter requires exorbitantly high input light power. The physical reason for this is explored, along with other issues such as constraints on performance due to optical dissipation.
Chapter 3 proposes a more sophisticated version of a speed meter. This new design requires only a modest input power and appears to be a fully practical candidate for third-generation LIGO. It can beat the SQL (the approximate sensitivity of second-generation LIGO interferometers) over a broad range of frequencies (~10 to 100 Hz in practice) by a factor h/h_SQL ~ sqrt(W_circ^SQL / W_circ). Here W_circ is the light power circulating in the interferometer arms and W_circ^SQL ≃ 800 kW is the circulating power required to beat the SQL at 100 Hz (the LIGO-II power). If squeezed vacuum (with a power squeeze factor e^(−2R)) is injected into the interferometer's output port, the SQL can be beaten with much reduced laser power: h/h_SQL ~ sqrt(W_circ^SQL e^(−2R) / W_circ). For realistic parameters (e^(−2R) ≃ 0.1 and W_circ ≃ 800 to 2000 kW), the SQL can be beaten by a factor of ~3 to 4 from 10 to 100 Hz. [However, as the power increases in these expressions, the speed meter becomes more narrowband; additional power and re-optimization of some parameters are required to maintain the wide band.] By performing frequency-dependent homodyne detection on the output (with the aid of two kilometer-scale filter cavities), one can markedly improve the interferometer's sensitivity at frequencies above 100 Hz.
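The quoted numbers can be sanity-checked with a plain evaluation of the scaling h/h_SQL ~ sqrt(W_circ^SQL e^(−2R) / W_circ); this is bare arithmetic, not a noise model, and the ~3 to 4 range quoted in the abstract also folds in the narrowbanding caveat:

```python
import math

def sql_beat_factor(w_circ_kw, w_sql_kw=800.0, power_squeeze=0.1):
    """Factor by which the SQL is beaten: the inverse of
    h/h_SQL ~ sqrt(W_SQL * e^{-2R} / W_circ), powers in kW."""
    return math.sqrt(w_circ_kw / (w_sql_kw * power_squeeze))

print(round(sql_beat_factor(800.0), 2))   # → 3.16 at the LIGO-II power
print(round(sql_beat_factor(2000.0), 2))  # → 5.0 before re-optimizing for bandwidth
```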
Chapters 2 and 3 are part of an ongoing effort to develop a practical variant of an interferometric speed meter and to combine the speed-meter concept with other ideas to yield a promising third-generation interferometric gravitational-wave detector that entails low laser power.
Chapter 4 is a contribution to the foundations for analyzing sources of gravitational waves for LIGO. Specifically, it presents an analysis of the tidal work done on a self-gravitating body (e.g., a neutron star or black hole) in an external tidal field (e.g., that of a binary companion). The change in the mass-energy of the body as a result of the tidal work, or "tidal heating," is analyzed using the Landau-Lifshitz pseudotensor and the local asymptotic rest frame of the body. It is shown that the work done on the body is gauge invariant, while the body-tidal-field interaction energy contained within the body's local asymptotic rest frame is gauge dependent. This is analogous to Newtonian theory, where the interaction energy is shown to depend on how one localizes gravitational energy, but the work done on the body is independent of that localization. These conclusions play a role in analyses, by others, of the dynamics and stability of the inspiraling neutron-star binaries whose gravitational waves are likely to be seen and studied by LIGO.
Abstract:
Optical microscopy has become an indispensable tool for biological research since its invention, owing mostly to its sub-cellular spatial resolution, non-invasiveness, instrumental simplicity, and the intuitive observations it provides. Nonetheless, obtaining reliable, quantitative spatial information from conventional wide-field optical microscopy is not as intuitive as it appears. This is because, in the acquired images, information about out-of-focus regions is spatially blurred and mixed with in-focus information. In other words, conventional wide-field optical microscopy transforms the three-dimensional, volumetric information about an object into a two-dimensional form in each acquired image, and thereby distorts the spatial information about the object. Several fluorescence holography-based methods have demonstrated the ability to obtain three-dimensional information about objects, but these methods generally rely on decomposing stereoscopic visualizations to extract volumetric information and are unable to resolve complex three-dimensional structures such as a multi-layer sphere.
The concept of optical-sectioning techniques, on the other hand, is to detect only two-dimensional information about an object at each acquisition. Specifically, each image obtained by optical-sectioning techniques contains mainly the information about an optically thin layer inside the object, as if only a thin histological section is being observed at a time. Using such a methodology, obtaining undistorted volumetric information about the object simply requires taking images of the object at sequential depths.
Among existing methods of obtaining volumetric information, its practicability has made optical sectioning the most commonly used and most powerful in biological science. However, when applied to imaging living biological systems, conventional single-point-scanning optical-sectioning techniques often cause some degree of photo-damage because of the high focal intensity at the scanning point. To overcome this issue, several wide-field optical-sectioning techniques have been proposed and demonstrated, although not without introducing new limitations and compromises such as low signal-to-background ratios and reduced axial resolution. As a result, single-point-scanning optical-sectioning techniques remain the most widely used instruments for volumetric imaging of living biological systems to date.
In order to develop wide-field optical-sectioning techniques whose optical performance is equivalent to that of single-point-scanning ones, this thesis first introduces the mechanisms and limitations of existing wide-field optical-sectioning techniques, and then presents our innovations aimed at overcoming these limitations. We demonstrate, theoretically and experimentally, that the proposed wide-field optical-sectioning techniques can achieve diffraction-limited optical sectioning, low out-of-focus excitation and high-frame-rate imaging in living biological systems. In addition to these imaging capabilities, the proposed techniques can be instrumentally simple and economical, and are straightforward to implement on conventional wide-field microscopes. Together, these advantages show the potential of our innovations to be widely used for high-speed, volumetric fluorescence imaging of living biological systems.
Abstract:
[ES] The objectives of this work are to analyze and optimize the hard-turning process of ASP-23 steel, looking in particular at different solutions for broaches. The project arises from the importance of reducing both the economic and the time costs of manufacturing ASP-23 steel components by hard turning, a machining process of growing importance in industries such as automotive and aeronautics. It stems from the need of EKIN S. Coop, one of the leaders in high-precision machine-tool processes for broaching, to develop a more efficient machining process for the broaches it produces. Thus, in the machine-tool laboratory (ETSIB), we have sought to demonstrate the benefits of hard turning in the machining of ASP-23. Nowadays, with the rapid development of new materials, manufacturing processes are becoming ever more complex: because of the wide variety of machines on which the processes are carried out, the variety of tool geometries and materials, the properties of the workpiece material, the wide range of cutting parameters with which the process can be implemented (depth of cut, speed, feed...), and the diversity of clamping elements used. We must also be aware that such variety implies large strains, strain rates and temperatures. Herein lie the justification for, and the great interest of, this project. We therefore attempt to take a small step forward in the understanding of hard turning of steels with poor machinability, conscious of the breadth and difficulty of progress in manufacturing engineering and of the work that remains to be done.
Abstract:
There is a growing interest in taking advantage of possible patterns and structures in data so as to extract the desired information and overcome the curse of dimensionality. In a wide range of applications, including computer vision, machine learning, medical imaging, and social networks, the signal that gives rise to the observations can be modeled to be approximately sparse and exploiting this fact can be very beneficial. This has led to an immense interest in the problem of efficiently reconstructing a sparse signal from limited linear observations. More recently, low-rank approximation techniques have become prominent tools to approach problems arising in machine learning, system identification and quantum tomography.
In sparse and low-rank estimation problems, the challenge is the inherent intractability of the objective function, and one needs efficient methods to capture the low-dimensionality of these models. Convex optimization is often a promising tool to attack such problems. An intractable problem with a combinatorial objective can often be "relaxed" to obtain a tractable but almost as powerful convex optimization problem. This dissertation studies convex optimization techniques that can take advantage of low-dimensional representations of the underlying high-dimensional data. We provide provable guarantees that ensure that the proposed algorithms will succeed under reasonable conditions, and answer questions of the following flavor:
- For a given number of measurements, can we reliably estimate the true signal?
- If so, how good is the reconstruction as a function of the model parameters?
More specifically, i) Focusing on linear inverse problems, we generalize the classical error bounds known for the least-squares technique to the lasso formulation, which incorporates the signal model. ii) We show that intuitive convex approaches do not perform as well as expected when it comes to signals that have multiple low-dimensional structures simultaneously. iii) Finally, we propose convex relaxations for the graph clustering problem and give sharp performance guarantees for a family of graphs arising from the so-called stochastic block model. We pay particular attention to the following aspects. For i) and ii), we aim to provide a general geometric framework, in which the results on sparse and low-rank estimation can be obtained as special cases. For i) and iii), we investigate the precise performance characterization, which yields the right constants in our bounds and the true dependence between the problem parameters.
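The lasso formulation discussed in (i) can be illustrated with a minimal proximal-gradient (ISTA) solver recovering a synthetic sparse signal from a limited set of random linear measurements; all problem sizes and parameter choices here are illustrative assumptions, not values from the dissertation:

```python
import numpy as np

def ista_lasso(A, y, lam, iters=2000):
    """Minimize 0.5*||Ax - y||^2 + lam*||x||_1 by proximal gradient (ISTA)."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x - step * (A.T @ (A @ x - y))                        # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)  # soft-threshold
    return x

# Synthetic sparse recovery: k-sparse signal, m < n random measurements.
rng = np.random.default_rng(1)
m, n, k = 80, 200, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x_true[support] = rng.choice([-1.0, 1.0], size=k) * (1.0 + rng.random(k))
y = A @ x_true
x_hat = ista_lasso(A, y, lam=0.01)
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))  # small relative error
```

With noiseless measurements and well-separated nonzeros, the lasso solution identifies the true support even though m is well below n, which is the "for a given number of measurements, can we reliably estimate the true signal?" question in miniature.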
Abstract:
Mean velocity profiles were measured in the 5” x 60” wind channel of the turbulence laboratory at GALCIT, using a hot-wire anemometer. The repeatability of the results was established and the accuracy of the instrumentation estimated. Scatter of the experimental results is little, if any, beyond this limit, although some effects might be expected to arise from variations in atmospheric humidity, no account of this factor having been taken in the present work. Slight unsteadiness in the flow conditions will also be responsible for some scatter.
Irregular behavior of a hot-wire in close proximity to a solid boundary at low speeds was observed, as has already been reported by others.
It was checked that Kármán’s logarithmic law holds reasonably well over the main part of a fully developed turbulent flow, the equation u/u_τ = 6.0 + 6.25 log₁₀(y u_τ/ν) being obtained; as has previously been the case, the experimental points do not quite form one straight line in the region where viscosity effects are small. The values of the constants giving the best over-all agreement were determined and compared with those obtained by others.
The range of Reynolds numbers used (based on half-width of channel) was from 20,000 to 60,000.
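The fitted law above can be evaluated directly; a small sketch (symbols as in the abstract: u_τ the friction velocity, ν the kinematic viscosity, y⁺ = y u_τ/ν):

```python
import math

def u_over_u_tau(y_plus, a=6.0, b=6.25):
    """Fitted log law u/u_tau = 6.0 + 6.25 * log10(y * u_tau / nu)."""
    return a + b * math.log10(y_plus)

print(u_over_u_tau(100.0))  # → 18.5
```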
Abstract:
The behavior of spheres in non-steady translational flow has been studied experimentally for values of Reynolds number from 0.2 to 3000. The aim of the work was to improve our qualitative understanding of particle transport in turbulent gaseous media, a process of extreme importance in power plants and energy transfer mechanisms.
Particles subjected to sinusoidal oscillations parallel to the direction of steady translation were found to exhibit changes in average drag coefficient that depend upon their translational Reynolds number, the density ratio, and the dimensionless frequency and amplitude of the oscillations. When the Reynolds number based on sphere diameter was less than 200, the oscillation had a negligible effect on the average particle drag.
For Reynolds numbers exceeding 300, the mean drag coefficient was increased significantly in a particular frequency range. For example, at a Reynolds number of 3000, a 25 per cent increase in drag coefficient can be produced with an oscillation amplitude of only 2 per cent of the sphere diameter, provided the frequency is near the frequency at which vortices would be shed in a steady flow at the mean speed. Flow visualization shows that over a wide range of frequencies, the vortex-shedding frequency locks in to the oscillation frequency. The maximum effect at the natural frequency, together with lock-in, shows that a non-linear interaction between wake vortex shedding and the oscillation is responsible for the increase in drag.
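The lock-in observation can be made concrete with a back-of-envelope estimate of the steady-flow shedding frequency from a Strouhal relation f = St·U/d; the Strouhal number used here (≈0.2, typical of bluff bodies at these Reynolds numbers) and the flow numbers are assumptions for illustration, not values from the study:

```python
def shedding_frequency_hz(speed_m_s, diameter_m, strouhal=0.2):
    """Steady-flow vortex-shedding frequency estimate f = St * U / d."""
    return strouhal * speed_m_s / diameter_m

# Hypothetical numbers: a 10 mm sphere in a 4.5 m/s stream.
print(round(shedding_frequency_hz(4.5, 0.010), 6))  # → 90.0
```

Forcing the sphere near this frequency is the regime in which the drag increase and lock-in were observed.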