22 results for Few-body problem
Abstract:
An analytical solution of the two-body problem perturbed by a constant tangential acceleration is derived with the aid of perturbation theory. The solution, which is valid for circular and elliptic orbits with generic eccentricity, describes the instantaneous time variation of all orbital elements. A comparison with high-accuracy numerical results shows that the analytical method can be effectively applied to multiple-revolution low-thrust orbit transfers around planets and in interplanetary space with negligible error.
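As a point of reference for the kind of result this abstract describes, the classical near-circular approximation of the Gauss planetary equations gives a closed form for the semi-major axis under a constant tangential acceleration. The sketch below illustrates that textbook approximation (not the paper's full generic-eccentricity solution); the orbit, thrust level, and step count are arbitrary example values.

```python
import math

MU_EARTH = 3.986004418e14  # Earth's gravitational parameter, m^3/s^2

def spiral_sma(a0, f_t, t, mu=MU_EARTH):
    """Closed-form semi-major axis under a small constant tangential
    acceleration f_t (m/s^2), near-circular approximation: the circular
    speed sqrt(mu/a) decreases linearly with time as the orbit raises."""
    v0 = math.sqrt(mu / a0)
    return mu / (v0 - f_t * t) ** 2

def spiral_sma_numeric(a0, f_t, t, mu=MU_EARTH, n=100_000):
    """Forward-Euler integration of the Gauss equation for a circular
    orbit, da/dt = 2 * f_t * a**1.5 / sqrt(mu)."""
    a, dt = a0, t / n
    for _ in range(n):
        a += 2.0 * f_t * a ** 1.5 / math.sqrt(mu) * dt
    return a
```

Both functions solve the same averaged equation, so they agree closely; the paper's analytical solution additionally captures the per-revolution oscillations of the other elements.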
Abstract:
A formulation of the perturbed two-body problem that relies on a new set of orbital elements is presented. The proposed method represents a generalization of the special perturbation method published by Peláez et al. (Celest Mech Dyn Astron 97(2):131–150, 2007) for the case of a perturbing force that is partially or totally derivable from a potential. We accomplish this result by employing a generalized Sundman time transformation in the framework of the projective decomposition, which is a known approach for transforming the two-body problem into a set of linear and regular differential equations of motion. Numerical tests, carried out with examples extensively used in the literature, show the remarkable improvement in the performance of the new method for different kinds of perturbations and eccentricities. In particular, one notable result is that the quadratic dependence of the position error on the time-like argument exhibited by Peláez's method for near-circular motion under the J2 perturbation becomes linear. Moreover, the method proves to be competitive with two very popular element methods derived from the Kustaanheimo–Stiefel and Sperling–Burdet regularizations.
Abstract:
EDROMO is a special perturbation method for the propagation of elliptical orbits in the perturbed two-body problem. The state vector consists of a time element and seven spatial elements, and the independent variable is a generalized eccentric anomaly introduced through a Sundman time transformation. The key role in the derivation of the method is played by an intermediate reference frame, which enjoys the property of remaining fixed in space as long as perturbations are absent. Three elements of EDROMO characterize the dynamics in the orbital frame and its orientation with respect to the intermediate frame, and the Euler parameters associated with the intermediate frame represent the other four spatial elements. The performance of EDROMO has been analyzed by considering some typical problems in astrodynamics. In almost all our tests the method outperforms other popular element-based formulations.
Abstract:
In this article, an approximate analytical solution for the two-body problem perturbed by a low radial acceleration is obtained, using a regularized formulation of the orbital motion and the method of multiple scales. The results reveal that the physics of the problem evolves on two fundamental scales of the true anomaly. The first one drives the oscillations of the orbital parameters along each orbit. The second one is responsible for the long-term variations in the amplitude and mean values of these oscillations. Good agreement is found with high-precision numerical solutions.
Abstract:
The two-body problem subject to a constant radial thrust is analyzed as a planar motion. The description of the problem is performed in terms of three perturbation methods: DROMO and two others due to Deprit. All of them rely on Hansen's ideal frame concept. An explicit, analytic, closed-form solution is obtained for this problem when the initial orbit is circular (Tsien problem), based on the DROMO special perturbation method, and expressed in terms of elliptic integral functions. The analytical solution to the Tsien problem is later used as a reference to test the numerical performance of various orbit propagation methods, including DROMO and Deprit methods, as well as Cowell and Kustaanheimo–Stiefel methods.
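For context, the Tsien dynamics are easy to integrate directly, and a purely radial thrust conserves the angular momentum of the planar motion, which gives a quick sanity check on any propagator used as a comparison baseline. A minimal sketch in normalized units (semi-implicit Euler with an arbitrary thrust level; this is a brute-force reference, not the paper's closed-form elliptic-integral solution):

```python
import math

def propagate_radial_thrust(r0, v0, f, mu=1.0, dt=1e-4, steps=100_000):
    """Planar two-body motion plus a constant outward radial thrust f
    (Tsien problem), normalized units. Semi-implicit Euler: velocity is
    updated first, then position with the new velocity, a scheme that
    conserves angular momentum exactly for central forces."""
    x, y, vx, vy = r0, 0.0, 0.0, v0  # start on the x-axis
    for _ in range(steps):
        r = math.hypot(x, y)
        g = -mu / r**3 + f / r  # total central acceleration per unit position
        vx += g * x * dt
        vy += g * y * dt
        x += vx * dt
        y += vy * dt
    return x, y, vx, vy
```

Checking that x*vy - y*vx stays equal to its initial value r0*v0 verifies both the radial nature of the thrust model and the integrator.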
Abstract:
The relativistic distorted-wave impulse approximation is used to describe the 3He(e, e′ p)2H process. We describe the 3He nucleus within the adiabatic hyperspherical expansion method with realistic nucleon-nucleon interactions. The overlap between the 3He and the deuteron wave functions can be accurately computed from a three-body calculation. The nucleons are described by solutions of the Dirac equation with scalar and vector (S–V) potentials. The wave function of the outgoing proton is obtained by solving the Dirac equation with a S–V optical potential fitted to elastic proton scattering data on the residual nucleus. Within this theoretical framework, we compute the cross section of the reaction and other observables like the transverse-longitudinal asymmetry, and compare them with the available experimental data measured at JLab.
Abstract:
A new method to study large-scale neural networks is presented in this paper. The basis is the use of Feynman-like diagrams. These diagrams allow the analysis of collective and cooperative phenomena with a methodology similar to that employed in the many-body problem. The proposed method is applied to a very simple structure composed of a string of neurons with interactions among them. It is shown that a new behavior appears at the end of the row, different from the initial dynamics of a single cell. When feedback is present, as in the case of the hippocampus, the situation becomes more complex, with a whole set of new frequencies different from the proper frequencies of the individual neurons. An application to an optical neural network is reported.
Abstract:
This paper addresses the problem of optimal constant continuous low-thrust transfer in the context of the restricted two-body problem (R2BP). Using Pontryagin's principle, the problem is formulated as a two-point boundary value problem (TPBVP) for a Hamiltonian system. Lie transforms obtained through the Deprit method allow us to obtain the canonical mapping of the phase flow as a series in terms of the order of magnitude of the thrust applied. The reachable set of states starting from a given initial condition under the optimal control policy is obtained analytically. In addition, a particular optimal transfer can be computed as the solution of a non-linear algebraic equation. The use of Lie series and Lie transforms in trajectory optimization problems for satellites propelled by low-thrust engines is investigated.
Abstract:
There is interest in performing pin-by-pin calculations coupled with thermal hydraulics so as to improve the accuracy of nuclear reactor analysis. In the framework of the EU NURISP project, INRNE and UPM have generated an experimental version of a few-group diffusion cross-section library with discontinuity factors, intended for VVER analysis at the pin level with the COBAYA3 code. The transport code APOLLO2 was used to perform the branching calculations. As a first proof of principle, the library was created for fresh fuel and covers almost the full parameter space of steady-state and transient conditions. The main objective is to test the calculation schemes and post-processing procedures, including multi-pin branching calculations. Two library options are being studied: one based on linear table interpolation and another using a functional fitting of the cross sections. The libraries generated with APOLLO2 have been tested with the pin-by-pin diffusion model in COBAYA3 including discontinuity factors, first comparing 2D results against the APOLLO2 reference solutions and afterwards using the libraries to compute a 3D assembly problem coupled with a simplified thermal-hydraulic model.
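The first of the two library options, linear table interpolation, can be illustrated with a one-dimensional sketch. The grid and cross-section values below are hypothetical; the actual COBAYA3/APOLLO2 libraries interpolate over several state parameters at once.

```python
import bisect

def interp_xs(grid, xs, x):
    """Linear table interpolation of a few-group cross section along a
    single state parameter (e.g. fuel temperature), clamping outside
    the table. grid must be strictly ascending, len(grid) == len(xs)."""
    if x <= grid[0]:
        return xs[0]
    if x >= grid[-1]:
        return xs[-1]
    j = bisect.bisect_right(grid, x)  # grid[j-1] <= x < grid[j]
    t = (x - grid[j - 1]) / (grid[j] - grid[j - 1])
    return (1.0 - t) * xs[j - 1] + t * xs[j]
```

The functional-fitting option would replace the table lookup with an evaluation of fitted polynomials in the same state parameters.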
Abstract:
Introduction. Most studies have described weight loss when different treatments are compared (1-3), while others have compared weight loss by sex (4) or have taken into account psychosocial (5) and lifestyle (6, 7) variables. However, no studies have examined the interaction of different variables and their relative importance in weight loss.
Objective. To create a model that discriminates the range of weight loss, determining the importance of each variable.
Methods. 89 overweight people (BMI: 25-29.9 kg·m⁻²), aged 18 to 50 years, participated in the study. Four types of treatment were randomly assigned: strength training (S), endurance training (E), combined strength and endurance training (SE), and control group (C). All participants followed a 25% calorie restriction diet. Two multivariate discriminant models including the variables age, sex, height, daily energy expenditure (EE), type of treatment (T), caloric restriction (CR), initial body weight (BW), initial fat mass (FM), initial muscle mass (MM) and initial bone mineral density (BMD) were built, taking into account two groupings: the first and fourth quartiles of the % of weight loss in the first model, and the groups above and below the mean % of weight loss in the second model. The discriminant models were built using the inclusion method in SPSS, allowing us to find a function that could predict the body weight loss range that an overweight person could achieve in a 6-month weight loss intervention.
Results. The first discriminant analysis predicted that a combination of the studied variables would discriminate between the two ranges of body weight loss with 81.4% correct classification. The discriminant function obtained was (Wilks' Lambda = 0.475, p = 0.003):
Discriminant score = -18.266 - (0.060 × age) - (1.282 × sex[0=female; 1=male]) + (14.701 × height) + (0.002 × EE) - (0.006 × T[1=S; 2=E; 3=SE; 4=C]) - (0.047 × CR) - (0.558 × BW) + (0.475 × FM) + (0.398 × MM) + (3.499 × BMD)
The second discriminant model would discriminate between the two groups of body weight loss with 74.4% correct classification. The discriminant function obtained was (Wilks' Lambda = 0.725, p = 0.005):
Discriminant score = -5.021 - (0.052 × age) - (0.543 × sex[0=female; 1=male]) + (3.530 × height) + (0.001 × EE) - (0.493 × T[1=S; 2=E; 3=SE; 4=C]) + (0.003 × CR) - (0.365 × BW) + (0.368 × FM) + (0.296 × MM) + (4.034 × BMD)
Conclusion. The first model could predict the percentage of weight loss in the following way: if the discriminant score is close to 1.051, the range of weight loss will be from -7.44 to -4.64%, and if it is close to -1.003, the range will be from -11.03 to -25.00% of the initial body weight. With the second model, if the discriminant score is close to 0.623 the body weight loss will be above -7.93%, and if it is close to -0.595 it will be below -7.93% of the initial body weight.
References. 1. Brochu M, et al. Resistance training does not contribute to improving the metabolic profile after a 6-month weight loss program in overweight and obese postmenopausal women. J Clin Endocrinol Metab. 2009 Sep;94(9):3226-33. 2. Del Corral P, et al. Effect of dietary adherence with or without exercise on weight loss: a mechanistic approach to a global problem. J Clin Endocrinol Metab. 2009 May;94(5):1602-7. 3. Larson-Meyer DE, et al. Caloric restriction with or without exercise: the fitness vs. fatness debate. Med Sci Sports Exerc. 2010;42(1):152-9. 4. Hagan RD, et al. The effects of aerobic conditioning and/or caloric restriction in overweight men and women. Med Sci Sports Exerc. 1986;18(1):87-94. 5. Teixeira PJ, et al. Mediators of weight loss and weight loss maintenance in middle-aged women. Obesity (Silver Spring). 2010 Apr;18(4):725-35. 6. Bautista-Castano I, et al. Variables predictive of adherence to diet and physical activity recommendations in the treatment of obesity and overweight, in a group of Spanish subjects. Int J Obes Relat Metab Disord. 2004 May;28(5):697-705.
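The first discriminant function reported in the abstract above can be transcribed directly into code. The coefficients are taken verbatim from the abstract; the variable units (height in m, EE in kcal/day, BW/FM/MM in kg) are assumptions, since the abstract does not state them.

```python
def discriminant_score_q1q4(age, sex, height, ee, treatment, cr, bw, fm, mm, bmd):
    """First discriminant function (quartile model) with the coefficients
    quoted in the abstract. sex: 0=female, 1=male; treatment: 1=S, 2=E,
    3=SE, 4=C. Units are assumed (height m, EE kcal/day, masses kg)."""
    return (-18.266
            - 0.060 * age
            - 1.282 * sex
            + 14.701 * height
            + 0.002 * ee
            - 0.006 * treatment
            - 0.047 * cr
            - 0.558 * bw
            + 0.475 * fm
            + 0.398 * mm
            + 3.499 * bmd)
```

Per the conclusion, a score near 1.051 places a subject in the smaller weight-loss quartile and a score near -1.003 in the larger one.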
Abstract:
The developments in materials over the last decade have been considerable within the automotive industry, one of the leaders in innovative product applications. Sustainable product development of an automotive structure requires a balanced approach towards technological, economic and ecological aspects. The introduction of new materials and processes depends on satisfying different factors. Competitive and legislative pressures, creating the need for change, affect these factors considerably. The process, direction and speed of change are often reactive. The present paper shows the application of aluminium alloys in the bottom structure of a car to address the problem of minimizing the weight of the entire bottom structure under static load conditions, including stiffness, strength and buckling constraints. In addition to minimized mass and material price, the assessment of the environmental impact of candidate materials during the entire life cycle of the structure is considered.
Abstract:
The localization of persons in indoor environments is nowadays an open problem. There are partial solutions based on the deployment of a network of sensors (Local Positioning Systems, or LPS). Other solutions only require the installation of an inertial sensor on the person's body (Pedestrian Dead-Reckoning, or PDR). PDR solutions integrate the signals coming from an Inertial Measurement Unit (IMU), which usually contains 3 accelerometers and 3 gyroscopes. The main problem of PDR is the accumulation of positioning errors due to the drift caused by sensor noise. This paper presents a PDR solution that incorporates a drift correction method based on detecting the access ramps usually found in buildings. The ramp correction method is implemented over a PDR framework that uses an Inertial Navigation System (INS) algorithm and an IMU attached to the person's foot. Unlike other approaches that use external sensors to correct the drift error, we only use one IMU on the foot. To detect a ramp, the slope of the terrain on which the user is walking, and the change in height sensed when moving forward, are estimated from the IMU. After detection, the ramp is checked for association with one of the ramps stored in a database. For each associated ramp, a position correction is fed into the Kalman filter in order to refine the INS-PDR solution. Drift-free localization is achieved, with positioning errors below 2 meters for 1,000-meter-long routes in a building with a few ramps.
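A minimal sketch of the slope test underlying ramp detection as the abstract describes it: accumulate the forward displacement and height change estimated by the INS over a walked segment and check whether the overall gradient lies in the range typical of building access ramps. The thresholds and the per-stride representation are illustrative assumptions, not the paper's actual detector.

```python
def detect_ramp(strides, slope_min=0.03, slope_max=0.12):
    """Flag a walked segment as a candidate ramp when the height gained
    per metre advanced lies within typical building-ramp slopes
    (thresholds are illustrative). strides: list of
    (forward_displacement_m, height_change_m) per stride."""
    dx = sum(s[0] for s in strides)   # total forward displacement
    dz = sum(s[1] for s in strides)   # total height change
    if dx <= 0.0:
        return False
    slope = abs(dz) / dx
    return slope_min <= slope <= slope_max
```

Flat floors fall below the lower threshold and staircases above the upper one, so only ramp-like segments are passed on to the database-association step.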
Abstract:
The new-user cold start issue represents a serious problem in recommender systems, as it can lead to the loss of new users who decide to stop using the system due to the lack of accuracy in the recommendations received in that first stage, in which they have not yet cast a significant number of votes with which to feed the recommender system's collaborative filtering core. For this reason it is particularly important to design new similarity metrics which provide greater precision in the results offered to users who have cast few votes. This paper presents a new similarity measure perfected using optimization based on neural learning, which exceeds the best results obtained with current metrics. The metric has been tested on the Netflix and Movielens databases, obtaining important improvements in the measures of accuracy, precision and recall when applied to new-user cold start situations. The paper includes the mathematical formalization describing how to obtain the main quality measures of a recommender system using leave-one-out cross validation.
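Leave-one-out validation of a recommender, as mentioned in the last sentence, hides each known rating in turn and predicts it from the remaining data. The toy sketch below uses a plain item-mean predictor as an illustrative baseline, not the paper's neural-optimized similarity metric.

```python
def loo_mae(ratings):
    """Leave-one-out mean absolute error: each known rating is hidden
    in turn and predicted from the item's mean over the other users.
    ratings: dict {(user, item): rating}. Returns None if no rating
    has a co-rated neighbour to predict from."""
    errors = []
    for (u, i), r in ratings.items():
        others = [v for (u2, i2), v in ratings.items() if i2 == i and u2 != u]
        if others:  # skip ratings with no other vote on the same item
            errors.append(abs(r - sum(others) / len(others)))
    return sum(errors) / len(errors) if errors else None
```

Precision and recall are obtained the same way, by thresholding the leave-one-out predictions into recommended/not-recommended sets.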
Abstract:
Computational fluid dynamics (CFD) methods are used in this paper to predict the power production of entire wind farms in complex terrain and to shed some light on the wake flow patterns. Two full three-dimensional Navier–Stokes solvers for incompressible fluid flow, employing k − ϵ and k − ω turbulence closures, are used. The wind turbines are modeled as momentum absorbers by means of their thrust coefficient through the actuator disk approach. Alternative methods for estimating the reference wind speed in the calculation of the thrust are tested. The work presented in this paper is part of the work being undertaken within the UpWind Integrated Project, which aims to develop the design tools for the next generation of large wind turbines. In this part of UpWind, the performance of wind farm and wake models is being examined in complex terrain, where there are few pre-existing relevant measurements. The focus of the work is to evaluate the performance of CFD models in large wind farm applications in complex terrain and to examine the development of the wakes in a complex terrain environment.
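The actuator disk representation mentioned above reduces each turbine to a momentum sink whose thrust follows the standard relation T = ½ ρ A C_T U². A minimal sketch with illustrative numbers (in a real solver C_T is read from the turbine's thrust curve at the reference wind speed, whose estimation is exactly the problem the abstract discusses):

```python
import math

def actuator_disk_thrust(u_ref, rotor_diameter, ct, rho=1.225):
    """Thrust (N) of a turbine modeled as an actuator disk:
    T = 0.5 * rho * A * C_T * U_ref**2, with rotor area A from the
    diameter (m), air density rho (kg/m^3) and thrust coefficient C_T
    evaluated at the reference wind speed U_ref (m/s)."""
    area = math.pi * (rotor_diameter / 2.0) ** 2
    return 0.5 * rho * area * ct * u_ref ** 2
```

In the CFD solvers this thrust is distributed over the cells covered by the disk as an axial momentum sink.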
Abstract:
There is general agreement within the scientific community in considering biology the science with the most potential for development in the 21st century. This is due to several reasons, but probably the most important one is the state of development of the other experimental and technological sciences. In this context, there is a very rich variety of mathematical tools, physical techniques and computer resources that make it possible to carry out biological experiments that were unthinkable only a few years ago. Biology is nowadays taking advantage of all these newly developed technologies, which are being applied to the life sciences, opening new research fields and helping to give new insights into many biological problems. Consequently, biologists have greatly improved their knowledge in many key areas such as human function and human disease. However, there is one human organ that is still barely understood compared with the rest: the human brain. The understanding of the human brain is one of the main challenges of the 21st century. In this regard, it is considered a strategic research field for the European Union and the USA. Thus, there is great interest in applying new experimental techniques to the study of brain function. Magnetoencephalography (MEG) is one of these novel techniques currently applied for mapping brain activity1. This technique has important advantages compared to metabolic-based brain imaging techniques like functional Magnetic Resonance Imaging2 (fMRI). The main advantage is that MEG has a higher time resolution than fMRI. Another benefit of MEG is that it is a patient-friendly clinical technique: the measurement is performed with a wireless set-up and the patient is not exposed to any radiation. Although MEG is widely applied in clinical studies, there are still open issues regarding data analysis. The present work deals with the solution of the inverse problem in MEG, which is the most controversial and uncertain part of the analysis process3. This question is addressed using several variations of a new solving algorithm based on a heuristic method. The performance of these methods is analyzed by applying them to several test cases with known solutions and comparing those solutions with the ones provided by our methods.