871 results for Anisotropic Analytical Algorithm
Abstract:
Electric probes are objects immersed in the plasma with sharp boundaries which collect or emit charged particles. Consequently, the nearby plasma evolves under abruptly imposed and/or naturally emerging conditions. There can be localized currents, different time scales for the evolution of each plasma species, charge separation, and absorbing or emitting walls. Traditional numerical schemes based on finite differences often transform these disparate boundary conditions into computational singularities. This is the case for models using advection-diffusion differential equations with source-sink terms (also called Fokker-Planck equations). These equations are used, in both fluid and kinetic descriptions, to obtain the distribution functions or the density of each plasma species close to the boundaries. We present a resolution method based on an integral time-advancing scheme that uses approximate Green's functions, also called short-time propagators. All the integrals, as a path-integration process, are calculated numerically, which yields a robust, grid-free computational integral method that is unconditionally stable for any time step. Hence, sharp boundary conditions, such as current emission from a wall, can be treated during the short-time regime, providing solutions that behave as if they were known analytically at each time step. The form of the propagator (typically a multivariate Gaussian) is not unique, and it can be adjusted during the advancing scheme to preserve the conserved quantities of the problem. The effects of electric or magnetic fields can be incorporated into the iterative algorithm. The method allows smooth transitions of the evolving solutions even when abrupt discontinuities are present. In this work we propose a procedure to incorporate, for the first time, the boundary conditions into the numerical integral scheme. The scheme is applied to model the interaction of the plasma bulk with a charge-emitting electrode, solving fluid diffusion equations coupled self-consistently with the Poisson equation. The stability of the computational method has been verified for any number of iterations, even when advancing electrons and ions with different time scales. This work establishes the basis for dealing, in future work, with problems related to plasma thrusters or emissive probes in electromagnetic fields.
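To make the advancing scheme concrete, the following is a minimal 1D sketch (not the paper's solver): the density is advanced by numerical convolution with the Gaussian short-time propagator of a constant-coefficient advection-diffusion equation, with an absorbing wall at x = 0 built into the propagator through the method of images. All parameter values are arbitrary placeholders.

```python
import numpy as np

# Minimal 1D sketch of the integral advancing scheme: advance n(x,t) for
#   dn/dt = D d2n/dx2 - v dn/dx   on x > 0, with an absorbing wall n(0,t) = 0,
# by numerical convolution with the Gaussian short-time propagator; the wall
# is handled with the method of images (drift-weighted image term).
D, v, dt = 1.0, 0.5, 1e-3           # diffusivity, drift, time step (arbitrary)
x = np.linspace(0.0, 10.0, 801)     # quadrature nodes
dx = x[1] - x[0]
n = np.exp(-(x - 3.0) ** 2)         # initial density

def gauss(u):
    return np.exp(-u ** 2 / (4 * D * dt)) / np.sqrt(4 * np.pi * D * dt)

def step(n):
    xf, xi = x[:, None], x[None, :]
    # free propagator minus its drift-weighted image across the wall
    K = gauss(xf - xi - v * dt) - np.exp(-v * xi / D) * gauss(xf + xi - v * dt)
    return (K * n[None, :]).sum(axis=1) * dx   # path-integration step

for _ in range(1000):
    n = step(n)                     # stable for any choice of dt
```

Because each step is an integral against a positive kernel rather than a difference stencil, no CFL-type restriction on dt arises, which is the stability property the abstract emphasizes.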
Abstract:
The work carried out in the present doctoral dissertation should be considered part of the UPMSat-2 project, falling within the scope of aerospace technology. The UPMSat-2 is a microsatellite (low cost and small size) designed, constructed, tested, and integrated by the Universidad Politécnica de Madrid (Spain) for educational and technology demonstration purposes. The aim of the present doctoral dissertation is to present new analytical models to study the energy interdependence between the power and attitude control subsystems of a satellite. First, the simulation of the power subsystem of a microsatellite is studied, paying particular attention to the simulation of the power supply, i.e. the solar panels. Simple but accurate methods to simulate the power production under variable ambient conditions through the panels' equivalent circuit are presented. The proposed methods for calculating the equivalent circuit parameters are explicit (or at least have decoupled variables), non-iterative, and straightforward; no iterations or initial values for the parameters are needed. The accuracy of these methods is tested and compared with similar methods from the available literature, demonstrating similar precision with higher simplicity. Second, the simulation of the attitude control subsystem of a microsatellite is presented, paying particular attention to the new control law proposed. A new type of magnetic control applicable to Low Earth Orbit (LEO) satellites is presented. The proposed control law is able to set the satellite rotation speed around its maximum or minimum inertia principal axis. Besides, for high-inclination orbits, the control law favors the alignment of this axis with the direction normal to the orbital plane. The proposed control algorithm is simple: only magnetorquers are required as actuators; only magnetometers are required as sensors; no estimation of the angular velocity is needed; it does not include an in-orbit Earth magnetic field model; it does not need to be externally activated with information about the orbital characteristics; and it allows automatic reset after a total shutdown of the attitude control subsystem. The theoretical viability of the control law is demonstrated through Monte Carlo analysis. Finally, in terms of power production, it is demonstrated that the proposed attitude (one principal axis perpendicular to the orbit plane, with the satellite rotating around it at a controlled rate) is quite suitable for the UPMSat-2 mission, as it allows a larger panel area to point towards the sun when compared with the other attitudes studied. Compared with the attitude control previously proposed for the UPMSat-2, it results in a 25% increase in available power. Besides, the proposed attitude showed significant improvements over the others in terms of thermal control, as the satellite angular rotation rate can be selected to achieve a higher temperature homogenization of the satellite and its antenna.
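As an illustration of the kind of explicit, non-iterative evaluation favored here, the sketch below computes a solar panel I-V curve from the single-diode equivalent circuit using the closed-form Lambert-W solution; every parameter value is a made-up placeholder, not UPMSat-2 data, and this is not the thesis's own parameter-extraction method.

```python
import numpy as np
from scipy.special import lambertw

# Single-diode equivalent circuit, solved explicitly (no iteration):
#   I = Iph - I0*(exp((V + I*Rs)/(n*Vt)) - 1) - (V + I*Rs)/Rsh
# has the closed-form Lambert-W solution implemented below.
Iph, I0 = 5.0, 1e-9        # photocurrent, diode saturation current [A] (placeholders)
Rs, Rsh = 0.02, 200.0      # series / shunt resistance [ohm] (placeholders)
n, Vt = 1.3, 0.0257        # ideality factor, thermal voltage [V]

def current(V):
    a = n * Vt
    theta = (Rs * I0 * Rsh / (a * (Rs + Rsh))
             * np.exp(Rsh * (V + Rs * (Iph + I0)) / (a * (Rs + Rsh))))
    return ((Rsh * (Iph + I0) - V) / (Rs + Rsh)
            - (a / Rs) * lambertw(theta).real)

V = np.linspace(0.0, 0.7, 200)
P = V * current(V)                 # power curve; its maximum is the MPP
print("P_mpp ~ %.2f W at V = %.3f V" % (P.max(), V[P.argmax()]))
```

The point of such closed-form evaluations is that the power production can be swept over temperature and irradiance (which enter through Iph, I0, and Vt) without any inner Newton iteration.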
Abstract:
We study the effects of finite temperature on the dynamics of non-planar vortices in the classical, two-dimensional anisotropic Heisenberg model with XY- or easy-plane symmetry. To this end, we analyze a generalized Landau-Lifshitz equation including additive white noise and Gilbert damping. Using a collective variable theory with no adjustable parameters, we derive an equation of motion for the vortices with stochastic forces, which are shown to represent white noise with an effective diffusion constant linearly dependent on temperature. We solve these stochastic equations of motion by means of a Green's function formalism and obtain the mean vortex trajectory and its variance. We find a non-standard time dependence for the variance of the components perpendicular to the driving force. We compare the analytical results with Langevin dynamics simulations and find good agreement up to temperatures of the order of 25% of the Kosterlitz-Thouless transition temperature. Finally, we discuss the reasons why our approach is not appropriate for higher temperatures, as well as the discreteness effects observed in the numerical simulations.
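A minimal numerical counterpart of such stochastic vortex equations of motion, assuming a massless Thiele-type form with gyrocoupling G, damping D, and thermal white noise (all parameter values arbitrary, not those of the paper), integrated with the Euler-Maruyama method over an ensemble:

```python
import numpy as np

# Euler-Maruyama integration of a Thiele-type stochastic equation of motion
#   G z_hat x X' + D X' = F + eta(t),  <eta_i(t) eta_j(t')> = 2 Deff d_ij d(t-t'),
# for an ensemble of vortices; Deff is taken linear in temperature.
G, D, Deff = 2 * np.pi, 0.1, 1e-3     # gyrocoupling, damping, noise strength (arbitrary)
F = np.array([1e-2, 0.0])             # constant driving force
Ainv = np.linalg.inv([[D, -G], [G, D]])   # solves A X' = F + eta for X'

dt, nsteps, nens = 0.01, 5000, 2000
X = np.zeros((nens, 2))               # ensemble of vortex positions
for _ in range(nsteps):
    xi = np.random.standard_normal((nens, 2))
    X += (F * dt + np.sqrt(2 * Deff * dt) * xi) @ Ainv.T

mean, var = X.mean(axis=0), X.var(axis=0)   # mean trajectory endpoint and variance
print(mean, var)
```

Running the loop while recording var over time is exactly the kind of ensemble statistic the paper compares against its Green's function predictions.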
Abstract:
Numerical modelling methodologies are important because of their application to engineering and scientific problems for which analytical mathematical expressions cannot be obtained. When the only available information is a set of experimental values for the variables that determine the state of the system, the modelling problem is equivalent to determining the hyper-surface that best fits the data. This paper presents a methodology based on the Galerkin formulation of the finite element method to obtain representations of relationships, defined a priori, between a set of variables: y = z(x1, x2, ..., xd). These representations are generated from the values of the variables in the experimental data. The piecewise approximation is an element of a Sobolev space and has derivatives defined in a generalized sense within this space. This approach requires the inversion of a linear system whose structure allows a fast solver algorithm, and the resulting method is a multidisciplinary tool applicable to a variety of fields. The validity of the methodology is studied through two real applications: a problem in hydrodynamics and an engineering problem involving fluids, heat, and transport in an energy generation plant. The predictive capacity of the methodology is also tested using cross-validation.
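The flavor of the approach in one dimension, as a hedged sketch: scattered data (x_i, y_i) are fitted by a piecewise-linear finite element expansion, and the Galerkin normal equations are assembled from hat-function values. The mesh, data, and the small smoothing term are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

# Fit scattered data y_i ~ z(x_i) with a piecewise-linear FE expansion
# z(x) = sum_k c_k phi_k(x) (hat functions). Least-squares Galerkin system:
# (Phi^T Phi + lam*R) c = Phi^T y, with a small smoothing term R so that
# elements containing no data stay well-posed.
rng = np.random.default_rng(0)
xd = rng.uniform(0, 1, 400)
yd = np.sin(2 * np.pi * xd) + 0.1 * rng.standard_normal(400)  # noisy data

nodes = np.linspace(0, 1, 21)                 # FE mesh
h = nodes[1] - nodes[0]

def hats(x):                                  # phi_k(x_i) for all nodes k
    return np.clip(1 - np.abs(x[:, None] - nodes[None, :]) / h, 0, None)

Phi = hats(xd)
m = len(nodes)
R = 2 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)   # discrete smoother
c = np.linalg.solve(Phi.T @ Phi + 1e-3 * R, Phi.T @ yd)

zhat = hats(np.array([0.25])) @ c             # evaluate the fit at x = 0.25
```

The system matrix is banded because hat functions overlap only with their neighbors, which is the structural property that enables the fast solver mentioned above.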
Abstract:
In this paper, a Markov chain based analytical model is proposed to evaluate the slotted CSMA/CA algorithm specified in the MAC layer of the IEEE 802.15.4 standard. The analytical model consists of two two-dimensional Markov chains, used to model the state transitions of an 802.15.4 device during a transmission and between two consecutive frame transmissions, respectively. By introducing the two Markov chains, only a small number of Markov states is required and the scalability of the analytical model is improved. The model is used to investigate the impact of the CSMA/CA parameters, the number of contending devices, and the data frame size on the network performance in terms of throughput and energy efficiency. It is shown by simulations that the proposed analytical model can accurately predict the performance of the slotted CSMA/CA algorithm for uplink, downlink, and bidirectional traffic, in both acknowledgement and non-acknowledgement modes.
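The core computation behind such Markov-chain models is the stationary distribution of the chain, from which throughput-style metrics follow. The sketch below solves pi P = pi for a small, made-up three-state matrix (a toy idle/backoff/transmit chain, not the paper's two-dimensional 802.15.4 chains):

```python
import numpy as np

# Stationary distribution of a discrete-time Markov chain: solve pi P = pi
# with sum(pi) = 1. The 3-state P below is a toy placeholder.
P = np.array([[0.6, 0.4, 0.0],    # idle    -> idle/backoff
              [0.2, 0.5, 0.3],    # backoff -> idle/backoff/transmit
              [0.7, 0.3, 0.0]])   # transmit-> idle/backoff

A = np.vstack([P.T - np.eye(3), np.ones(3)])   # (P^T - I) pi = 0 plus normalization
b = np.array([0.0, 0.0, 0.0, 1.0])
pi = np.linalg.lstsq(A, b, rcond=None)[0]
print("fraction of slots spent transmitting:", pi[2])
```

In a full model, the fraction of slots spent transmitting, combined with collision and frame-size terms, yields the throughput and energy-efficiency figures reported in the paper.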
Abstract:
In this paper, a new double-wavelet neuron architecture, obtained by modifying the standard wavelet neuron, and its learning algorithm are proposed. The proposed architecture improves the approximation properties of the wavelet neuron. The double-wavelet neuron and its learning algorithm are examined for forecasting non-stationary chaotic time series.
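For orientation, here is a minimal sketch of the baseline architecture the paper modifies: a single wavelet neuron with Mexican-hat activations, trained by plain gradient descent for one-step-ahead forecasting. The series, unit count, and learning rate are invented; the paper's double-wavelet structure and learning algorithm are not reproduced here.

```python
import numpy as np

# Wavelet neuron: y = sum_j w_j * psi((x - b_j) / a), psi = Mexican hat.
# Only the output weights w_j are trained here (translations/dilation fixed
# for brevity); task: one-step-ahead forecasting of a chaotic series.
def psi(t):
    return (1 - t ** 2) * np.exp(-t ** 2 / 2)

s = np.empty(500); s[0] = 0.4        # logistic map as a toy chaotic series
for k in range(499):
    s[k + 1] = 3.9 * s[k] * (1 - s[k])
X, y = s[:-1, None], s[1:]           # predict s[k+1] from s[k]

m = 12                               # number of wavelet units
b, a = np.linspace(0, 1, m), 0.15    # fixed translations / dilation
w = np.zeros(m)
for _ in range(3000):                # gradient descent on squared error
    H = psi((X - b) / a)             # (N, m) hidden activations
    err = H @ w - y
    w -= 0.01 * H.T @ err / len(y)

print("train RMSE:", np.sqrt(np.mean((H @ w - y) ** 2)))
```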
Abstract:
Many practical routing algorithms are heuristic, ad hoc, and centralized, rendering generic and optimal path configurations difficult to obtain. Here we study a scenario whereby selected nodes in a given network communicate with fixed routers, and we employ statistical physics methods to obtain optimal routing solutions subject to a generic cost. A distributive message-passing algorithm capable of optimizing the path configuration in real instances is devised, based on the analytical derivation, and is greatly simplified by expanding the cost function around the optimized flow. Good algorithmic convergence is observed in most of the parameter regimes. By applying the algorithm, we study and compare the pros and cons of balanced traffic configurations against consolidated traffic, which has important implications for practical communication and transportation networks. Interesting macroscopic phenomena are observed in the optimized states as an interplay between the communication density and the cost functions used. © 2013 IEEE.
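The simplest concrete instance of distributive message passing for routing is min-sum relaxation, i.e. the distance-vector (Bellman-Ford) update, sketched below on a toy graph. The paper's algorithm optimizes a generic, possibly nonlinear cost over interacting flows, which this special case does not attempt.

```python
import math

# Min-sum message passing for shortest paths to a fixed router (node 0):
# each node repeatedly updates its cost from its neighbours' messages,
#   cost[u] = min over edges (u,v) of  w(u,v) + cost[v].
edges = {(0, 1): 1.0, (1, 2): 2.0, (0, 2): 4.0, (2, 3): 1.0, (1, 3): 5.0}
adj = {}
for (u, v), w in edges.items():
    adj.setdefault(u, []).append((v, w))
    adj.setdefault(v, []).append((u, w))

cost = {u: math.inf for u in adj}
cost[0] = 0.0
for _ in range(len(adj) - 1):            # at most |V|-1 sweeps to converge
    for u in adj:
        if u != 0:
            cost[u] = min(w + cost[v] for v, w in adj[u])

print(cost)   # {0: 0.0, 1: 1.0, 2: 3.0, 3: 4.0}
```

Each update uses only information available at a node's neighbors, which is what makes such schemes distributive and suitable for real network instances.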
Abstract:
The major barrier to practical optimization of pavement preservation programming has always been that, for formulations where the identity of individual projects is preserved, the solution space grows exponentially with the problem size, to an extent where it can become unmanageable by traditional analytical optimization techniques within reasonable time limits. This has been attributed to the problem of combinatorial explosion, that is, the exponential growth of the number of combinations. The relatively large number of constraints often present in real-life pavement preservation programming problems, and the trade-off considerations required between preventive maintenance, rehabilitation, and reconstruction, are further factors contributing to the solution complexity. In this research study, a new integrated multi-year optimization procedure was developed to solve network-level pavement preservation programming problems through cost-effectiveness based evolutionary programming analysis, using the Shuffled Complex Evolution (SCE) algorithm. A case study problem was analyzed to illustrate the robustness and consistency of the SCE technique in solving network-level pavement preservation problems. The output of this program is a list of maintenance and rehabilitation (M&R) treatment strategies for each identified segment of the network in each programming year, and the impact on the overall performance of the network in terms of the performance levels of the recommended optimal M&R strategy. The results show that the SCE is very efficient and consistent in the simultaneous consideration of the trade-offs between various pavement preservation strategies, while preserving the identity of the individual network segments. The flexibility of the technique is also demonstrated: by suitably coding the problem parameters, it can be used to solve several forms of pavement management programming problems. It is recommended that, for large networks, some form of decomposition technique be applied to aggregate sections which exhibit similar performance characteristics into links, such that whatever M&R alternative is recommended for a link can be applied to all the sections belonging to it. In this way the problem size, and hence the solution time, can be greatly reduced to a more manageable solution space. The study concludes that the robust search characteristics of SCE are well suited to the combinatorial problems of long-term network-level pavement M&R programming, and that this provides a rich area for future research.
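A bare-bones sketch of the SCE idea, under stated simplifications: the population is ranked, dealt into complexes, each complex is evolved by a simplex-style reflection/contraction of its worst point, and the complexes are shuffled back together. The toy quadratic cost stands in for the network-level M&R objective; complex counts and budgets are placeholders, and the full CCE sub-complex step is omitted.

```python
import numpy as np

# Simplified Shuffled Complex Evolution (SCE-UA style) for minimizing f.
rng = np.random.default_rng(2)

def f(x):                                    # toy cost (stand-in for the
    return np.sum(x ** 2)                    # pavement programming objective)

d, ncpx, npts = 4, 3, 7                      # dim, complexes, points/complex
pop = rng.uniform(-5, 5, (ncpx * npts, d))

for it in range(200):
    cost = np.apply_along_axis(f, 1, pop)
    pop = pop[np.argsort(cost)]              # rank best-to-worst
    for k in range(ncpx):                    # deal points into complexes
        cpx = pop[k::ncpx].copy()
        worst = npts - 1
        centroid = cpx[:worst].mean(axis=0)  # centroid of the better points
        trial = 2 * centroid - cpx[worst]    # reflect the worst point
        if f(trial) < f(cpx[worst]):
            cpx[worst] = trial
        else:                                # contraction fallback
            cpx[worst] = (centroid + cpx[worst]) / 2
        pop[k::ncpx] = cpx                   # shuffle back into population

print("best cost:", np.apply_along_axis(f, 1, pop).min())
```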
Abstract:
The ultrasonic non-destructive testing of components may encounter considerable difficulties in the interpretation of some inspection results, mainly in anisotropic crystalline structures. A numerical method for the simulation of elastic wave propagation in homogeneous, elastically anisotropic media, based on the general finite element approach, is used to aid this interpretation. The successful modeling of the elastic field associated with NDE rests on the generation of a realistic pulsed ultrasonic wave, launched from a piezoelectric transducer into the material under inspection. The values of the elastic constants are information of great interest: they enable the application of analytical models to problems of small and medium complexity, as well as the use of numerical analysis programs such as finite element and/or boundary element codes. The aim of this work is to compare the numerical solution for an ultrasonic wave, obtained from a transient excitation pulse that can be specified by either a force or a displacement variation across the aperture of the transducer, with the results of an experiment performed on an aluminum block in the IEN Ultrasonic Laboratory. The wave propagation can be simulated using all the characteristics of the material employed in the experimental evaluation, together with the boundary conditions, and from these results the comparison can be made.
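A small worked example of why the elastic constants matter in anisotropic media: the Christoffel equation gives the direction-dependent phase velocities directly from the stiffness tensor. The cubic constants below are illustrative values of roughly aluminum magnitude, not the experiment's measured data.

```python
import numpy as np

# Phase velocities from the Christoffel equation: Gamma_ik = C_ijkl n_j n_l;
# the eigenvalues of Gamma / rho are the squared phase velocities v^2.
C11, C12, C44 = 108e9, 61e9, 29e9           # cubic stiffnesses [Pa] (placeholders)
rho = 2700.0                                # density [kg/m3]

C = np.zeros((3, 3, 3, 3))                  # assemble the cubic stiffness tensor
for i in range(3):
    for j in range(3):
        C[i, i, j, j] += C12 if i != j else C11
        if i != j:
            C[i, j, i, j] += C44
            C[i, j, j, i] += C44

n = np.array([1.0, 1.0, 0.0]) / np.sqrt(2)  # propagation direction [110]
Gamma = np.einsum("ijkl,j,l->ik", C, n, n)
v = np.sqrt(np.linalg.eigvalsh(Gamma) / rho)
print("phase velocities [m/s]:", v)         # two quasi-shear, one quasi-longitudinal
```

Sweeping n over directions shows how pulse arrival times vary with crystal orientation, which is precisely the anisotropic effect that complicates the interpretation of inspection results.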
Abstract:
We consider the a posteriori error analysis and hp-adaptation strategies for hp-version interior penalty discontinuous Galerkin methods for second-order partial differential equations with nonnegative characteristic form on anisotropically refined computational meshes with anisotropically enriched elemental polynomial degrees. In particular, we exploit duality-based hp-error estimates for linear target functionals of the solution, and we design and implement the corresponding adaptive algorithms to ensure reliable and efficient control of the error in the prescribed functional to within a given tolerance. This involves exploiting both local isotropic and anisotropic mesh refinement and isotropic and anisotropic polynomial degree enrichment. The superiority of the proposed algorithm over standard hp-isotropic mesh refinement algorithms and an h-anisotropic/p-isotropic adaptive procedure is illustrated by a series of numerical experiments.
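One building block of such adaptive algorithms is the marking step: given elementwise indicators from the dual-weighted error estimate, select a minimal set of elements carrying a fixed fraction of the total error (Dörfler bulk marking). A minimal sketch with made-up indicators, omitting the h-versus-p and isotropic-versus-anisotropic decision logic:

```python
import numpy as np

# Doerfler (bulk) marking: pick the smallest set of elements whose error
# indicators sum to a fraction theta of the total; each marked element
# would then be refined in h or enriched in p, isotropically or not.
def mark(eta, theta=0.5):
    order = np.argsort(eta)[::-1]            # largest indicators first
    csum = np.cumsum(eta[order])
    k = np.searchsorted(csum, theta * eta.sum()) + 1
    return order[:k]                         # indices of marked elements

eta = np.random.default_rng(3).exponential(size=40)   # fake dual-weighted indicators
print("marked elements:", mark(eta))
```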
Abstract:
Ground deformation provides valuable insights into subsurface processes, with patterns reflecting the characteristics of the source at depth. At active volcanic sites, displacements can be observed during unrest phases; a correct interpretation is therefore essential to assess the hazard potential. Inverse modeling is employed to obtain quantitative estimates of the parameters describing the source. However, despite the robustness of the available approaches, realistic imaging of these reservoirs is still challenging. While analytical models return quick but simplistic results, assuming an isotropic and elastic crust, more sophisticated numerical models, accounting for the effects of topographic loads, crust inelasticity, and structural discontinuities, require much higher computational effort, and the information about crust rheology they need may be difficult to infer. All these approaches rely on a-priori constraints on the source shape, which influence the reliability of the solution. In this thesis, we present a new approach aimed at overcoming the aforementioned limitations, modeling sources free of a-priori shape constraints with the advantages of FEM simulations but at a lower computational cost. The source is represented as an assembly of elementary units, consisting of cubic elements of a regular FE mesh loaded with unitary stress tensors. The surface response due to each of the six stress tensor components is computed and linearly combined to obtain the total displacement field. In this way, the source can assume potentially any shape. Our tests prove the equivalence between the deformation field due to our assembly and that of a corresponding cavity with uniform boundary pressure. The ability to simulate pressurized cavities in a continuum domain permits the pre-computation of surface responses, avoiding remeshing. A Bayesian trans-dimensional inversion algorithm implementing this strategy is developed; 3D Voronoi cells are used to sample the model domain, selecting the elementary units contributing to the source solution and those remaining inactive as part of the crust.
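The superposition step is simple to express: once the surface response to each of the six unit stress components of every elementary cube is pre-computed, any candidate source is a linear combination. In the sketch below the pre-computed responses are random placeholders standing in for actual FEM results, and the array shapes are assumptions.

```python
import numpy as np

# Superposition of pre-computed elementary-unit responses: G[k, c, o, i] is
# the i-th displacement component at surface point o due to unit stress
# component c of cubic element k (random placeholders here, not FEM output).
rng = np.random.default_rng(4)
nunits, nobs = 500, 300
G = rng.standard_normal((nunits, 6, nobs, 3))

active = rng.choice(nunits, 40, replace=False)  # elements forming the source
m = np.zeros((nunits, 6))
m[active, :3] = 1e6                 # isotropic pressurization: equal normal stresses [Pa]

u = np.einsum("kc,kcoi->oi", m, G)  # total surface displacement field (nobs, 3)
```

Because G never changes during the inversion, each trial source costs only this tensor contraction, which is what makes the trans-dimensional sampling affordable.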
Abstract:
Established isotropic tomographic models show the features of subduction zones in terms of seismic velocity anomalies, but they are generally subject to artifacts caused by the lack of anisotropy in the forward modelling. There is evidence for a significant influence of seismic anisotropy in the mid-upper mantle, especially for boundary layers such as subducting slabs. As a consequence, artifacts in isotropic models may be misinterpreted as compositional or thermal heterogeneities. In this thesis project, the application of a trans-dimensional Metropolis-Hastings method is investigated in the context of anisotropic seismic tomography. This choice responds to the important limitations of traditional inversion methods, which rely on iterative optimization of the objective function of the inversion. Starting from a first implementation of the Bayesian sampling algorithm, the code is tested on Cartesian two-dimensional models and then extended to polar coordinates and to dimensions typical of subduction zones, the main target proposed for this method. Synthetic experiments of increasing complexity are performed to test the performance of the method and the precautions required in different contexts, also taking into account the possibility of applying seismic ray-tracing iteratively. The code developed is tested mainly for 2D inversions; future extensions will allow the anisotropic inversion of seismological data to provide more realistic imaging of real subduction zones, less subject to the generation of artifacts.
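To illustrate the trans-dimensional Metropolis-Hastings machinery in miniature: the sketch below fits noisy 1D data with a variable number of nearest-node cells (a 1D stand-in for Voronoi parameterizations). New cell values are drawn from the prior, in which case the birth/death acceptance reduces to the likelihood ratio (a standard result for birth-from-prior proposals); priors, noise level, and move mix are all toy assumptions.

```python
import numpy as np

# Toy trans-dimensional Metropolis-Hastings with birth/death moves.
rng = np.random.default_rng(5)
xo = np.linspace(0, 1, 120)
yo = np.where(xo < 0.5, 1.0, -1.0) + 0.2 * rng.standard_normal(xo.size)

def loglike(pos, val):                    # nearest-node ("1D Voronoi") predictor
    pred = val[np.abs(xo[:, None] - pos[None, :]).argmin(axis=1)]
    return -np.sum((yo - pred) ** 2) / (2 * 0.2 ** 2)

pos, val = rng.uniform(0, 1, 2), rng.uniform(-2, 2, 2)   # initial 2-cell model
ll = loglike(pos, val)
for _ in range(20000):
    move = rng.choice(["perturb", "birth", "death"])
    if move == "perturb":
        p, v = pos.copy(), val.copy()
        v[rng.integers(v.size)] += 0.1 * rng.standard_normal()
    elif move == "birth":
        p = np.append(pos, rng.uniform(0, 1))
        v = np.append(val, rng.uniform(-2, 2))           # new value from the prior
    elif move == "death" and pos.size > 1:
        k = rng.integers(pos.size)
        p, v = np.delete(pos, k), np.delete(val, k)
    else:
        continue
    llp = loglike(p, v)
    if np.log(rng.uniform()) < llp - ll:                 # accept / reject
        pos, val, ll = p, v, llp

print("cells in final model:", pos.size)
```

The number of cells is itself sampled, so model complexity is determined by the data rather than fixed in advance, which is the main appeal of the method for tomography.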
Abstract:
Lipidic mixtures present a particular phase-change profile highly affected by their unique crystalline structure. However, classical solid-liquid equilibrium (SLE) thermodynamic modeling approaches, which assume the solid phase to be a pure component, sometimes fail to describe the phase behavior correctly, and this inability increases with the complexity of the system. To overcome some of these problems, this study describes a new procedure, the Crystal-T algorithm, to depict the SLE of fatty binary mixtures presenting solid solutions. Considering the non-ideality of both the liquid and solid phases, the algorithm determines the temperatures at which the first and the last crystals of the mixture melt. The evaluation focuses on experimental data measured and reported in this work for systems composed of triacylglycerols and fatty alcohols. The liquidus and solidus lines of the SLE phase diagrams were described using excess Gibbs energy based equations, with the group-contribution UNIFAC model for the calculation of the activity coefficients of both the liquid and solid phases. The very low deviations between theoretical and experimental data evidence the strength of the algorithm, contributing to the enlargement of the scope of SLE modeling.
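For context, the classical simplification the Crystal-T algorithm improves upon can be written in a few lines: assuming pure solid phases and an ideal liquid (activity coefficients of 1, instead of UNIFAC and non-ideal solid solutions), the melting temperature of the last crystal follows explicitly from the standard SLE relation. Enthalpies and melting points below are placeholders.

```python
import numpy as np

# Simplified binary liquidus sketch: with pure solids and an ideal liquid,
#   ln(x_i) = (dH_i / R) * (1/T_m,i - 1/T)
# gives each component's liquidus branch in closed form; the stable branch
# (the higher temperature) is where the last crystal melts.
R = 8.314
Hm = np.array([55e3, 45e3])        # fusion enthalpies [J/mol] (placeholders)
Tm = np.array([345.0, 320.0])      # pure-component melting points [K] (placeholders)

def liquidus(x1):
    x = np.array([x1, 1.0 - x1])
    T = 1.0 / (1.0 / Tm - R * np.log(x) / Hm)   # per-component branch
    return T.max()

for x1 in np.linspace(0.1, 0.9, 5):
    print(f"x1 = {x1:.1f}  T_liq = {liquidus(x1):.1f} K")
```

Replacing ln(x_i) by ln(x_i * gamma_i) with UNIFAC activity coefficients, and treating the solid phase as a non-ideal solution, turns this closed form into the root-finding problem the Crystal-T algorithm addresses.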
Abstract:
Raman imaging spectroscopy is a highly useful analytical tool that provides spatial and spectral information on a sample. However, the CCD detectors used in dispersive instruments have the drawback of being sensitive to cosmic rays, which give rise to spikes in the Raman spectra. Spikes distort variance structures and must be removed before multivariate techniques are applied. A new algorithm for the correction of spikes in Raman imaging was developed, based on the comparison of nearest-neighbour pixels. The algorithm is simple, fast, and selective, and delivers high-quality spike removal from hyperspectral images.
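A minimal sketch of a neighbour-comparison despiking filter in the same spirit (not the paper's exact algorithm): each spectral point is flagged when it deviates from the median of the 8 surrounding pixels by more than a robust threshold, and flagged points are replaced by that neighbour median. The threshold and the toy cube are assumptions.

```python
import numpy as np

# Nearest-neighbour spike filter for a hyperspectral cube (ny, nx, nchan).
def despike(cube, k=8.0):
    ny, nx, nc = cube.shape
    pad = np.pad(cube, ((1, 1), (1, 1), (0, 0)), mode="edge")
    # stack the 8 spatial neighbours of every pixel
    nb = np.stack([pad[1 + dy:ny + 1 + dy, 1 + dx:nx + 1 + dx]
                   for dy in (-1, 0, 1) for dx in (-1, 0, 1) if (dy, dx) != (0, 0)])
    med = np.median(nb, axis=0)
    mad = np.median(np.abs(nb - med), axis=0) + 1e-12     # robust scale
    spikes = np.abs(cube - med) > k * 1.4826 * mad
    out = cube.copy()
    out[spikes] = med[spikes]                             # replace flagged points
    return out

cube = np.random.default_rng(6).normal(100, 5, (32, 32, 256))
cube[10, 10, 128] += 500.0                                # synthetic cosmic-ray spike
clean = despike(cube)
print("spike removed:", abs(clean[10, 10, 128] - 100) < 20)
```

Comparing across neighbouring pixels rather than along the spectral axis is what makes such filters selective: a genuine Raman band appears in many adjacent pixels, while a cosmic-ray spike hits a single detector element.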
Abstract:
PURPOSE: To compare the Full Threshold (FT) and SITA Standard (SS) strategies in glaucomatous patients undergoing automated perimetry for the first time. METHODS: Thirty-one glaucomatous patients who had never undergone perimetry underwent automated perimetry (Humphrey, program 30-2) with both FT and SS on the same day, with an interval of at least 15 minutes. The order of the examinations was randomized, and only one eye per patient was analyzed. Three analyses were performed: a) all examinations, regardless of the order of application; b) only the first examinations; c) only the second examinations. To calculate the sensitivity of both strategies, the following criteria were used to define abnormality: glaucoma hemifield test (GHT) outside normal limits, pattern standard deviation (PSD) <5%, or a cluster of 3 adjacent points with p<5% in the pattern deviation probability plot. RESULTS: When the results of all examinations were analyzed regardless of the order in which they were performed, the number of depressed points with p<0.5% in the pattern deviation probability map was significantly greater with SS (p=0.037), and the sensitivities were 87.1% for SS and 77.4% for FT (p=0.506). When only the first examinations were compared, there were no statistically significant differences in the number of depressed points, but the sensitivity of SS (100%) was significantly greater than that of FT (70.6%) (p=0.048). When only the second examinations were compared, there were no statistically significant differences in either the number of depressed points or the sensitivities of SS (76.5%) and FT (85.7%) (p=0.664). CONCLUSION: SS may have a higher sensitivity than FT in glaucomatous patients undergoing automated perimetry for the first time. However, this difference tends to disappear in subsequent examinations.