928 results for scaling laws
Abstract:
The past few years have seen remarkable progress in the development of laser-based particle accelerators. The ability to produce ultrabright beams of multi-megaelectronvolt protons routinely has many potential uses from engineering to medicine, but for this potential to be realized substantial improvements in the performance of these devices must be made. Here we show that in the laser-driven accelerator that has been demonstrated experimentally to produce the highest energy protons, scaling laws derived from fluid models and supported by numerical simulations can be used to accurately describe the acceleration of proton beams for a large range of laser and target parameters. This enables us to evaluate the laser parameters needed to produce high-energy and high-quality proton beams of interest for radiography of dense objects or proton therapy of deep-seated tumours.
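The fluid-model scaling referred to above is of the isothermal plasma-expansion type. As an illustration only, a minimal Python sketch of a Mora-type estimate is given below; the ponderomotive hot-electron temperature, the assumed hot-electron density and the acceleration-time factor are generic assumptions, not the fitted values of the paper.

import numpy as np

def hot_electron_temperature_MeV(intensity_Wcm2, wavelength_um=1.0):
    # Ponderomotive (Wilks) hot-electron temperature, in MeV
    a0 = 0.85 * np.sqrt(intensity_Wcm2 / 1e18) * wavelength_um
    return 0.511 * (np.sqrt(1.0 + a0**2 / 2.0) - 1.0)

def max_proton_energy_MeV(intensity_Wcm2, pulse_fs, n_e0_cm3=5e19,
                          wavelength_um=1.0, t_acc_factor=1.3):
    # Mora (2003) isothermal expansion: E_max = 2*T_hot*[ln(tau + sqrt(tau^2+1))]^2,
    # with tau = omega_pi * t_acc / sqrt(2e); n_e0_cm3 and t_acc_factor are assumptions.
    T_hot = hot_electron_temperature_MeV(intensity_Wcm2, wavelength_um)
    e, eps0, m_p = 1.602e-19, 8.854e-12, 1.673e-27
    omega_pi = np.sqrt(n_e0_cm3 * 1e6 * e**2 / (m_p * eps0))   # proton plasma frequency (SI)
    tau = omega_pi * (t_acc_factor * pulse_fs * 1e-15) / np.sqrt(2.0 * np.e)
    return 2.0 * T_hot * np.arcsinh(tau) ** 2                  # arcsinh(tau) = ln(tau+sqrt(tau^2+1))

print(max_proton_energy_MeV(intensity_Wcm2=6.0e19, pulse_fs=320.0))   # illustrative estimate, MeV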
Abstract:
We present a numerical and theoretical study of intense-field single-electron ionization of helium at 390 nm and 780 nm. Accurate ionization rates (over an intensity range of (0.175-34) × 10^14 W/cm^2 at 390 nm, and (0.275-14.4) × 10^14 W/cm^2 at 780 nm) are obtained from full-dimensionality integrations of the time-dependent helium-laser Schrödinger equation. We show that the power law of lowest order perturbation theory, modified with a ponderomotive-shifted ionization potential, is capable of modelling the ionization rates over an intensity range that extends up to two orders of magnitude higher than that applicable to perturbation theory alone. Writing the modified perturbation theory in terms of scaled wavelength and intensity variables, we obtain to first approximation a single ionization law for both the 390 nm and 780 nm cases. To model the data in the high intensity limit as well as in the low, a new function is introduced for the rate. This function has, in part, a resemblance to that derived from tunnelling theory but, importantly, retains the correct frequency dependence and scaling behaviour derived from the perturbative-like models at lower intensities. Comparison with the predictions of classical ADK tunnelling theory confirms that ADK performs poorly in the frequency and intensity domain treated here.
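As a rough illustration of the modified lowest-order perturbation theory (LOPT) scaling described above, the sketch below sets the effective photon order from the ponderomotive-shifted ionization potential; the rate prefactor, the reference intensity and the handling of channel closings are placeholders, not the paper's fitted rate function or its scaled-variable form.

import math

IP_HELIUM_EV = 24.59          # field-free ionization potential of helium

def photon_energy_eV(wavelength_nm):
    return 1239.84 / wavelength_nm

def ponderomotive_shift_eV(intensity_Wcm2, wavelength_nm):
    # Up(eV) ~ 9.33e-14 * I(W/cm^2) * lambda(um)^2
    return 9.33e-14 * intensity_Wcm2 * (wavelength_nm / 1000.0) ** 2

def modified_lopt_rate(intensity_Wcm2, wavelength_nm, ref_intensity=1e14):
    # Rate ~ (I/I_ref)^N in arbitrary units, with the photon order N set by
    # the ponderomotive-shifted ionization potential Ip + Up(I).
    n_photons = math.ceil(
        (IP_HELIUM_EV + ponderomotive_shift_eV(intensity_Wcm2, wavelength_nm))
        / photon_energy_eV(wavelength_nm))
    return (intensity_Wcm2 / ref_intensity) ** n_photons, n_photons

for I in (1e14, 4e14, 1e15):
    rate, N = modified_lopt_rate(I, 780.0)
    print(f"I = {I:.1e} W/cm^2 at 780 nm: effective photon order N = {N}")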
Abstract:
We present high-accuracy calculations of ionization rates of helium at UV (195 nm) wavelengths. The data are obtained from full-dimensionality integrations of the helium-laser time-dependent Schrödinger equation. Comparison is made with our previously obtained data at 390 nm and 780 nm. We show that scaling laws introduced by Parker et al extend unmodified from the near-infrared limit into the UV limit. Static-field ionization rates of helium are also obtained, again from time-dependent full-dimensionality integrations of the helium Schrödinger equation. We compare the static-field ionization results with those of Scrinzi et al and Themelis et al, who also treat the full-dimensional helium atom, but with time-independent methods. Good agreement is obtained.
Abstract:
A significant amount of experimental work has been devoted over the last decade to the development and optimization of proton acceleration based on the so-called Target Normal Sheath acceleration mechanism. Several studies have been dedicated to the determination of scaling laws for the maximum energy of the protons as a function of the parameters of the irradiating pulses, studies based on experimental results and on models of the acceleration process. We briefly summarize the state of the art in this area, and review some of the scaling studies presented in the literature. We also discuss some recent results, and projected scalings, related to a different acceleration mechanism for ions, based on the Radiation Pressure of an ultraintense laser pulse.
Abstract:
Massive multiple-input multiple-output (MIMO) systems are cellular networks where the base stations (BSs) are equipped with unconventionally many antennas, deployed on colocated or distributed arrays. Huge spatial degrees of freedom are achieved by coherent processing over these massive arrays, which provide strong signal gains, resilience to imperfect channel knowledge, and low interference. This comes at the price of more infrastructure; the hardware cost and circuit power consumption scale linearly/affinely with the number of BS antennas N. Hence, the key to cost-efficient deployment of large arrays is low-cost antenna branches with low circuit power, in contrast to today’s conventional expensive and power-hungry BS antenna branches. Such low-cost transceivers are prone to hardware imperfections, but it has been conjectured that the huge degrees of freedom would bring robustness to such imperfections. We prove this claim for a generalized uplink system with multiplicative phase drifts, additive distortion noise, and noise amplification. Specifically, we derive closed-form expressions for the user rates and a scaling law that shows how fast the hardware imperfections can increase with N while maintaining high rates. The connection between this scaling law and the power consumption of different transceiver circuits is rigorously exemplified. This reveals that one can make the circuit power increase as √N, instead of linearly, by careful circuit-aware system design.
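A toy Monte Carlo sketch (not the paper's exact system model or closed-form expressions) of the robustness claim: per-antenna phase drifts, additive distortion noise and amplified receiver noise are applied to a single-user uplink with maximum-ratio combining, and the average rate still grows with N. All parameter values below are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

def avg_uplink_rate(N, p=1.0, kappa=0.1, xi=1.5, sigma2=1.0,
                    phase_std=0.1, trials=2000):
    rates = []
    for _ in range(trials):
        h = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
        theta = phase_std * rng.standard_normal(N)                 # per-antenna phase drifts
        g = np.abs(h) ** 2
        signal = p * np.abs(np.sum(g * np.exp(1j * theta))) ** 2   # MRC with weights w_n = h_n
        distortion = np.sum(g * (kappa ** 2) * p * g)              # additive distortion noise
        noise = xi * sigma2 * np.sum(g)                            # amplified thermal noise
        rates.append(np.log2(1.0 + signal / (distortion + noise)))
    return np.mean(rates)

for N in (1, 10, 100, 1000):
    print(N, round(avg_uplink_rate(N), 2))   # rate keeps increasing despite fixed-quality hardware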
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
The current space environment, consisting of man-made debris and tiny meteoroids, poses a risk to safe operations in space, and the situation is continuously deteriorating due to in-orbit debris collisions and new satellite launches. A significant portion of this debris consists of dead satellites and satellite fragments produced by explosions and in-orbit collisions. Mitigation of space debris has become a matter of primary concern for all the institutions involved in space operations. Among the existing solutions, bare electrodynamic tethers (EDT) can provide an efficient mechanism for the rapid de-orbiting of defunct satellites from low Earth orbit (LEO) at end of life. Research on EDTs has been a fruitful field since the 1970s; thanks to both theoretical studies and in-orbit demonstration missions, the technology has developed rapidly in recent decades, and several technical issues have been identified and overcome along the way. The core functionality of an EDT system depends greatly on its survivability against micrometeoroids and orbital debris: a tether can be completely severed by a particle above some minimal diameter, and a cut tether can itself become a hazard for other operating satellites. Despite a number of in-orbit demonstrations, the severity of this problem remains inconclusive. This thesis presents a theoretical analysis of the survivability of tethers in space and demonstrates the advantages of tape tethers over conventional round wires, particularly regarding survivability during the mission.
Because of its particular geometry (length very much larger than cross-sectional dimensions), a tether may have a relatively high risk of being severed by a single impact of small debris. As a first approach to the problem, the survival probability has been compared for a round and a tape tether of equal mass and length. The fatal impact rates of orbital debris on round and tape tethers, evaluated with an analytical approximation to the debris flux modeled by NASA's ORDEM2000, show a much higher survival probability for tapes. A comparative numerical analysis using the ORDEM2000 and ESA's MASTER2005 flux models agrees well with the analytical result and shows that, for a given time in orbit, a tape has a survival probability about one and a half orders of magnitude higher than a round tether of equal mass and length. Moreover, de-orbiting from a given altitude is much faster for the tape because its larger perimeter captures more current, which further reduces its exposure to debris impacts; in a practical sense, its probability of survival is therefore quite high. As the next step, an analytical model derived in this work allows the fatal impact rate per year for a tape tether to be calculated accurately over a range of altitudes and inclinations rather than for particular conditions. The model uses power laws for the debris-size ranges of both the ORDEM2000 and MASTER2009 flux models to calculate tape-tether survivability at different LEO altitudes. The analytical model, which depends on the tape dimensions (width, thickness) and the orbital parameters (inclination, altitude), is then compared with fully numerical results for different orbit inclinations, altitudes and tape widths for both ORDEM2000 and MASTER2009 flux data. This scalable model not only estimates the fatal impact count but has also proved essential in optimizing tether design for satellite de-orbit missions across a range of satellite masses and initial orbital altitudes and inclinations. The survivability model is used to build an objective function for tether design optimization: the product of the tether-to-satellite mass ratio and the fatal impact count, written as a function of tether geometry and orbital parameters. Combining the survivability model with the tether dynamic equation involving the Lorentz drag eliminates time from the expression, so this product is independent of the de-orbit history and depends only on mission constraints and the tether length, width and thickness. This optimization model led to the development of a software tool named BETsMA, currently in the process of registration by UPM. As a final step, the fatal impact rate on a tape tether is estimated using, for the first time, an experimental ballistic limit equation derived for tapes that accounts for the effects of both impact velocity and impact angle. The results show that tape tethers are highly resistant to space debris impacts: for a tape tether with a given cross section, the number of critical events due to impacts with non-trackable debris is always significantly low.
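A toy estimate of the underlying survivability argument, with assumed flux and lethality parameters rather than the ORDEM2000/MASTER models or the ballistic limit equation of the thesis: for a steep power-law debris flux, the tape's much larger lethal particle size outweighs its larger exposed area.

import math

F0, D0, ALPHA = 1e-4, 1e-3, 2.5   # cumulative flux above d0 = 1 mm, impacts/m^2/yr, and slope (assumed)
KILL_FRACTION = 0.3               # lethal particle diameter / tether transverse size (assumed)

def cumulative_flux(d_m):
    # Assumed cumulative debris flux above diameter d (impacts per m^2 per year).
    return F0 * (d_m / D0) ** (-ALPHA)

def fatal_rate_per_year(length_m, transverse_m):
    d_lethal = KILL_FRACTION * transverse_m
    exposed_area = length_m * (transverse_m + d_lethal)   # simple swept-area estimate
    return cumulative_flux(d_lethal) * exposed_area

L_tether = 5000.0                               # tether length, m (assumed)
d_wire = 0.8e-3                                 # round-wire diameter, m (assumed)
w_tape = 2.0e-2                                 # tape width, m (assumed)
h_tape = math.pi * d_wire**2 / (4 * w_tape)     # tape thickness for equal mass (equal cross-section)

print(f"equal-mass tape thickness: {h_tape*1e6:.0f} um")
print(f"round wire: {fatal_rate_per_year(L_tether, d_wire):.2e} fatal impacts/yr")
print(f"tape      : {fatal_rate_per_year(L_tether, w_tape):.2e} fatal impacts/yr")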
Abstract:
Despite decades of experimental and theoretical investigation on thin films, considerable uncertainty exists in the prediction of their critical rupture thickness. According to the spontaneous rupture mechanism, common thin films become unstable when capillary waves at the interfaces begin to grow. In a horizontal film with symmetry at the midplane, unstable waves from adjacent interfaces grow towards the center of the film. As the film drains and becomes thinner, unstable waves osculate and cause the film to rupture. Uncertainty stems from a number of sources including the theories used to predict film drainage and corrugation growth dynamics. In the early studies, the linear stability of small amplitude waves was investigated in the context of the quasi-static approximation in which the dynamics of wave growth and film thinning are separated. The zeroth order wave growth equation of Vrij predicts faster wave growth rates than the first order equation derived by Sharma and Ruckenstein. It has been demonstrated in an accompanying paper that film drainage rates and times measured by numerous investigations are bounded by the predictions of the Reynolds equation and the more recent theory of Manev, Tsekov, and Radoev. Solutions to combinations of these equations yield simple scaling laws which should bound the critical rupture thickness of foam and emulsion films. In this paper, critical thickness measurements reported in the literature are compared to predictions from the bounding scaling equations and it is shown that the retarded Hamaker constants derived from approximate Lifshitz theory underestimate the critical thickness of foam and emulsion films. The non-retarded Hamaker constant more adequately bounds the critical thickness measurements over the entire range of film radii reported in the literature. This result reinforces observations made by other independent researchers that interfacial interactions in flexible liquid films are not adequately represented by the retarded Hamaker constant obtained from Lifshitz theory and that the interactions become significant at much greater separations than previously thought. (c) 2005 Elsevier B.V. All rights reserved.
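For reference, the capillary-wave instability invoked above can be made concrete with the linearized thin-film (lubrication) equation and a non-retarded van der Waals disjoining pressure; the sketch below computes the fastest-growing wavelength, which is independent of the mobility prefactor and hence of the drainage theory chosen. The Hamaker constant, surface tension and thicknesses are illustrative values, not the paper's data.

import math

def fastest_growing_wavelength(h0, hamaker_A, sigma):
    # Perturbations h0 + eps*exp(ikx + omega*t) grow when k^2 < (dPi/dh)/sigma,
    # with Pi(h) = -A/(6*pi*h^3) so dPi/dh = A/(2*pi*h^4); the growth rate is
    # maximized at k_m^2 = (dPi/dh)/(2*sigma).
    dPi_dh = hamaker_A / (2.0 * math.pi * h0 ** 4)
    k_m = math.sqrt(dPi_dh / (2.0 * sigma))
    return 2.0 * math.pi / k_m

A_nonretarded = 4e-20      # J, typical order of magnitude for aqueous films (assumed)
sigma = 0.035              # N/m, surfactant solution (assumed)
for h0_nm in (20, 30, 50):
    lam = fastest_growing_wavelength(h0_nm * 1e-9, A_nonretarded, sigma)
    print(f"h0 = {h0_nm} nm -> fastest-growing wavelength ~ {lam*1e6:.1f} um")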
Abstract:
We provide here a detailed theoretical explanation of the floating molecule or levitation effect, for molecules diffusing through nanopores, using the oscillator model theory (Phys. Rev. Lett. 2003, 91, 126102) recently developed in this laboratory. It is shown that, as the pore size is reduced, the effect arises at a critical pore size from a decrease in the frequency of wall collisions of the diffusing particles. This effect is, however, absent at high temperatures, where the ratio of kinetic energy to the solid-fluid interaction strength is sufficiently large. It is shown that the transport diffusivities scale with this ratio. Scaling of transport diffusivities with respect to mass is also observed, even in the presence of interactions.
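For orientation only, the sketch below uses the simplest wall-collision-limited model (Knudsen diffusion in a cylindrical pore) to show where a sqrt(kT/m) scaling of transport diffusivities comes from; the oscillator model of the abstract refines this by including the solid-fluid potential, which is what produces the levitation effect at a critical pore size. The pore size, temperature and example gases are assumptions.

import math

K_B = 1.380649e-23  # J/K

def knudsen_diffusivity(pore_radius_m, temperature_K, mass_kg):
    # D_K = (2/3) * r * mean thermal speed, with mean speed = sqrt(8 kT / (pi m)).
    v_mean = math.sqrt(8.0 * K_B * temperature_K / (math.pi * mass_kg))
    return (2.0 / 3.0) * pore_radius_m * v_mean

amu = 1.66054e-27
for name, m in (("CH4", 16 * amu), ("CF4", 88 * amu)):
    D = knudsen_diffusivity(0.5e-9, 300.0, m)    # 1 nm pore, 300 K (assumed)
    print(f"{name}: D_K ~ {D:.2e} m^2/s")        # scales as 1/sqrt(mass) and sqrt(T)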
Abstract:
When blood flows through small vessels, the two-phase nature of blood as a suspension of red cells (erythrocytes) in plasma cannot be neglected, and with decreasing vessel size, a homogeneous continuum model becomes less adequate in describing blood flow. Following Haynes' marginal zone theory, and viewing the flow as the result of concentric laminae of fluid moving axially, the present work provides models for fluid flow in dichotomous branching composed of larger and smaller vessels, respectively. Expressions for the branching sizes of parent and daughter vessels that provide easier flow access are obtained by means of a constrained optimization approach using Lagrange multipliers. This study shows that when blood behaves as a Newtonian fluid, the Hess-Murray law, which states that the daughter-to-parent diameter ratio must equal 2^(-1/3), is valid. However, when the nature of blood as a suspension becomes important, the expression for the optimum branching diameters of vessels depends on the phase separation lengths. It is also shown that the same effect occurs for the relative lengths of daughter and parent vessels. For smaller vessels (e.g., arterioles and capillaries), it is found that the daughter-to-parent diameter ratio may vary from 0.741 to 0.849, and the daughter-to-parent length ratio varies from 0.260 to 2.42. For larger vessels (e.g., arteries), the daughter-to-parent diameter ratio and the daughter-to-parent length ratio range from 0.458 to 0.819, and from 0.100 to 6.27, respectively. In this paper, it is also demonstrated that the entropy generated when blood behaves as a single-phase fluid (i.e., a continuum viscous fluid) is greater than the entropy generated when the nature of blood as a suspension becomes important. Another important finding is that the manifestation of the particulate nature of blood in small vessels reduces entropy generation due to fluid friction, thereby maintaining the flow through dichotomous branching vessels at a relatively lower cost.
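The Newtonian limit quoted above (daughter-to-parent diameter ratio 2^(-1/3)) can be checked numerically with a minimal sketch: minimizing Poiseuille pumping power plus a volume-proportional cost at fixed flow gives Murray's law Q ∝ r^3, and flow conservation at a symmetric bifurcation then fixes the ratio. The cost coefficients below are arbitrary assumptions, and the suspension (marginal-zone) corrections of the paper are not modelled.

import numpy as np

MU, B_METABOLIC = 3.5e-3, 1.0e3    # blood viscosity (Pa*s) and volume-cost weight (assumed)

def optimal_radius(Q):
    # Radius minimizing per-unit-length cost: 8*mu*Q^2/(pi*r^4) + b*pi*r^2
    r = np.linspace(1e-5, 1e-2, 200_000)
    cost = 8.0 * MU * Q ** 2 / (np.pi * r ** 4) + B_METABOLIC * np.pi * r ** 2
    return r[np.argmin(cost)]

Q_parent = 1.0e-6                              # parent flow rate, m^3/s (assumed)
r_parent = optimal_radius(Q_parent)
r_daughter = optimal_radius(Q_parent / 2.0)    # symmetric bifurcation: each daughter carries Q/2
print("daughter/parent radius ratio:", r_daughter / r_parent)   # ~ 2^(-1/3) = 0.794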
Abstract:
A new scaling analysis has been performed for the unsteady natural convection boundary layer under a downward facing inclined plate with uniform heat flux. The development of the thermal or viscous boundary layers may be classified into three distinct stages including a start-up stage, a transitional stage and a steady stage, which can be clearly identified in the analytical as well as numerical results. Earlier scaling shows that the existing scaling laws of the boundary layer thickness, velocity and steady state time scale for the natural convection flow on a heated plate of uniform heat flux provide a very poor prediction of the Prandtl number dependency of the flow. However, those scalings performed very well with Rayleigh number and aspect ratio dependency. In this study, a new Prandtl number scaling has been developed using a triple-layer integral approach for Pr > 1. It is seen that in comparison to the direct numerical simulations, the new scaling performs considerably better than the previous scaling.
Abstract:
An improved scaling analysis and direct numerical simulations are performed for the unsteady natural convection boundary layer adjacent to a downward facing inclined plate with uniform heat flux. The development of the thermal or viscous boundary layers may be classified into three distinct stages: a start-up stage, a transitional stage and a steady stage, which can be clearly identified in the analytical as well as the numerical results. Previous scaling shows that the existing scaling laws of the boundary layer thickness, velocity and steady state time scale for the natural convection flow on a heated plate of uniform heat flux provide a very poor prediction of the Prandtl number dependency of the flow. However, those scalings perform very well with Rayleigh number and aspect ratio dependency. In this study, a modified Prandtl number scaling is developed using a triple layer integral approach for Pr > 1. It is seen that in comparison to the direct numerical simulations, the modified scaling performs considerably better than the previous scaling.
Abstract:
A new scaling analysis has been performed for the unsteady natural convection boundary layer under a downward facing inclined plate with uniform heat flux. The development of the thermal or viscous boundary layers may be classified into three distinct stages including an early stage, a transitional stage and a steady stage, which can be clearly identified in the analytical as well as numerical results. Earlier scaling shows that the existing scaling laws of the boundary layer thickness, velocity and steady state time scales for the natural convection flow on a heated plate of uniform heat flux provide a very poor prediction of the Prandtl number dependency. However, those scalings performed very well with Rayleigh number and aspect ratio dependency. In this study, a modified Prandtl number scaling has been developed using a triple-layer integral approach for Pr > 1. It is seen that in comparison to the direct numerical simulations, the new scaling performs considerably better than the previous scaling.
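The three abstracts above rest on the same generic scaling argument, sketched below under assumed water-like properties: the thermal boundary layer grows by conduction as sqrt(kappa*t) during the start-up stage and reaches a steady laminar uniform-flux scale of order x*Ra*^(-1/5), which yields a steady-state time estimate. The refined Prandtl-number dependence obtained with the triple-layer integral approach is deliberately not reproduced here.

import math

def steady_state_scales(x, q_flux, g=9.81, beta=2.1e-4, k=0.6, nu=1e-6, kappa=1.4e-7):
    # Water-like properties (assumed).  Modified Rayleigh number for uniform flux:
    # Ra*_x = g*beta*q''*x^4 / (k*nu*kappa)
    Ra_star = g * beta * q_flux * x ** 4 / (k * nu * kappa)
    delta_steady = x * Ra_star ** (-0.2)      # steady thermal boundary-layer thickness scale
    t_steady = delta_steady ** 2 / kappa      # start-up growth sqrt(kappa*t_s) ~ delta_steady
    return Ra_star, delta_steady, t_steady

Ra, d, t = steady_state_scales(x=0.2, q_flux=100.0)   # plate length and heat flux (assumed)
print(f"Ra* ~ {Ra:.2e}, delta_T ~ {d*1000:.2f} mm, t_s ~ {t:.1f} s")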
Abstract:
In this paper we use a simple normal-form approach to scale-invariant fields to investigate scaling laws of passive scalars in turbulence. The coupling equations for the velocity and passive scalar moments are scale covariant. Their solution shows that, because of coupling effects, passive scalars in turbulence do not generically follow the general scaling observed for the velocity field.
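For reference (standard Kolmogorov-Obukhov-Corrsin dimensional analysis, not the normal-form result of the paper), the "general scaling" a passive scalar would inherit from the velocity field is

\[
S_p^{\theta}(r) = \langle |\theta(\mathbf{x}+\mathbf{r})-\theta(\mathbf{x})|^{p}\rangle
\sim \left(\varepsilon_\theta\,\varepsilon^{-1/3}\right)^{p/2} r^{p/3},
\qquad
S_p^{u}(r) = \langle |\delta_r u|^{p}\rangle \sim (\varepsilon r)^{p/3},
\]

and the coupling effects described above are what generically drive the passive-scalar exponents away from the p/3 prediction.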