963 results for numerical integration methods
Abstract:
Generalized linear mixed models (GLMM) are generalized linear models with normally distributed random effects in the linear predictor. Penalized quasi-likelihood (PQL), an approximate method of inference in GLMMs, involves repeated fitting of linear mixed models with “working” dependent variables and iterative weights that depend on parameter estimates from the previous cycle of iteration. The generality of PQL, and its implementation in commercially available software, have encouraged the application of GLMMs in many scientific fields. Caution is needed, however, since PQL may sometimes yield badly biased estimates of variance components, especially with binary outcomes. Recent developments in numerical integration, including adaptive Gaussian quadrature, higher-order Laplace expansions, stochastic integration and Markov chain Monte Carlo (MCMC) algorithms, provide attractive alternatives to PQL for approximate likelihood inference in GLMMs. Analyses of some well-known datasets, and simulations based on these analyses, suggest that PQL still performs remarkably well in comparison with more elaborate procedures in many practical situations. Adaptive Gaussian quadrature is a viable alternative for nested designs where the numerical integration is limited to a small number of dimensions. Higher-order Laplace approximations hold the promise of accurate inference more generally. MCMC is likely the method of choice for the most complex problems that involve high-dimensional integrals.
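To make the quadrature alternative concrete, here is a minimal Python sketch of Gauss-Hermite quadrature for the per-cluster marginal likelihood of a random-intercept logistic GLMM; the data and parameter values are illustrative, and the adaptive variant described in the abstract additionally recenters and rescales the nodes at the conditional mode of the random effect.

```python
import numpy as np
from scipy.special import roots_hermite

# Minimal sketch: Gauss-Hermite quadrature for the marginal likelihood of
# one cluster in a random-intercept logistic GLMM. Data and parameter
# values are illustrative, not from the paper.
y = np.array([1, 0, 1, 1])      # binary responses in one cluster
beta0, sigma = -0.5, 1.2        # fixed intercept and random-effect SD, assumed

def cluster_loglik(b):
    """log p(y | b) for a given random intercept b."""
    eta = beta0 + b
    p = 1.0 / (1.0 + np.exp(-eta))
    return np.sum(y * np.log(p) + (1 - y) * np.log1p(-p))

def marginal_lik(nodes=15):
    """Approximate the integral of p(y|b) N(b; 0, sigma^2) db."""
    x, w = roots_hermite(nodes)          # nodes/weights for weight exp(-x^2)
    b = np.sqrt(2.0) * sigma * x         # change of variables to N(0, sigma^2)
    vals = np.exp([cluster_loglik(bi) for bi in b])
    return np.sum(w * vals) / np.sqrt(np.pi)

print("marginal likelihood:", marginal_lik())
```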
Abstract:
Power transformers are key components of the power grid and are also among the most exposed to a variety of power system transients. The failure of a large transformer can cause severe monetary losses to a utility, so adequate protection schemes are of great importance to avoid transformer damage and maximize the continuity of service. Computer modeling can be used as an efficient tool to improve the reliability of a transformer protective relay application. Unfortunately, transformer models presently available in commercial software lack completeness in the representation of several aspects, such as internal winding faults, which are a common cause of transformer failure. It is also important to adequately represent the transformer at frequencies higher than the power frequency for a more accurate simulation of switching transients, since these are a well-known cause of the unwanted tripping of protective relays. This work develops new capabilities for the Hybrid Transformer Model (XFMR) implemented in ATPDraw to allow the representation of internal winding faults and slow-front transients up to 10 kHz. The new model can be built from either of two sources of information: 1) test report data and 2) design data. When only test-report data are available, a higher-order leakage inductance matrix is created from standard measurements. If design information is available, a Finite Element Model is created to calculate the leakage parameters for the higher-order model. An analytical model is also implemented as an alternative to FEM modeling. Measurements on 15-kVA 240Δ/208Y V and 500-kVA 11430Y/235Y V distribution transformers were performed to validate the model. A transformer model that is valid for simulations at frequencies above the power frequency was developed by further dividing the windings into multiple sections and including a higher-order capacitance matrix. Frequency-scan laboratory measurements were used to benchmark the simulations. Finally, a stability analysis of the higher-order model was performed by analyzing the trapezoidal rule for numerical integration as used in ATP. Numerical damping was also added to suppress oscillations locally when discontinuities occurred in the solution. A maximum error magnitude of 7.84% was encountered in the simulated currents for different turn-to-ground and turn-to-turn faults. The FEM approach provided the most accurate means of determining the leakage parameters for the ATP model. The higher-order model was found to reproduce the short-circuit impedance acceptably up to about 10 kHz, and its behavior at the first anti-resonant frequency matched the measurements more closely.
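The stability issue with the trapezoidal rule can be seen in a few lines; a minimal sketch, assuming a simple series RL branch driven by a voltage step rather than the actual ATP network. Here theta = 0.5 gives the trapezoidal rule and theta = 1 gives backward Euler, the kind of locally damped step that critical-damping schemes switch to at discontinuities; this is an illustration of the phenomenon, not the ATP implementation.

```python
import numpy as np

# Test system: di/dt = (v(t) - R*i)/L with a voltage step at t = 0.
# The step size h is chosen much larger than the time constant L/R
# on purpose, to provoke the trapezoidal rule's numerical oscillation.
R, L, h = 1.0, 1e-3, 5e-2

def simulate(theta, steps=8):
    i, v, out = 0.0, 1.0, []
    for _ in range(steps):
        # theta-method update for di/dt = (v - R*i)/L, solved for i_{n+1}
        i = (i + (h / L) * (v - (1 - theta) * R * i)) / (1 + theta * R * h / L)
        out.append(i)
    return np.array(out)

print("trapezoidal (theta=0.5):", np.round(simulate(0.5), 3))  # rings about 1/R
print("backward Euler (theta=1):", np.round(simulate(1.0), 3)) # damped, smooth
```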
Abstract:
Several strategies relying on kriging have recently been proposed for adaptively estimating contour lines and excursion sets of functions under a severely limited evaluation budget. The recently released R package KrigInv 3 is presented; it offers a sound implementation of various sampling criteria for these kinds of inverse problems. KrigInv is based on the DiceKriging package, and thus benefits from a number of options concerning the underlying kriging models. Six implemented sampling criteria are detailed in a tutorial and illustrated with graphical examples, and the different functionalities of KrigInv are explained step by step. Additionally, two recently proposed criteria for batch-sequential inversion are presented, enabling advanced users to distribute function evaluations in parallel on clusters or clouds of machines. Finally, auxiliary problems are discussed, including the fine-tuning of the numerical integration and optimization procedures used within the computation and optimization of the considered criteria.
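KrigInv itself is an R package; as a language-neutral illustration of the underlying idea, the sketch below fits a kriging (Gaussian process) model with scikit-learn, computes the pointwise probability of exceeding a threshold, and samples next where that probability is most ambiguous. This "closest to 1/2" rule is a deliberate simplification of the more elaborate criteria the package implements, and all names and values are illustrative.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

f = lambda x: np.sin(3 * x) + 0.5 * x          # stand-in for the expensive function
T = 0.8                                        # excursion threshold

X = np.array([[0.1], [0.7], [1.6], [2.4]])     # initial design
y = f(X).ravel()

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5)).fit(X, y)
xs = np.linspace(0.0, 2.5, 400).reshape(-1, 1)
m, s = gp.predict(xs, return_std=True)

p_exc = norm.cdf((m - T) / np.maximum(s, 1e-12))   # P(f(x) > T) under the model
x_next = xs[np.argmin(np.abs(p_exc - 0.5))]        # most ambiguous point
print("next evaluation at x =", x_next)
```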
Abstract:
The population of space debris has increased drastically in recent years. Collisions involving massive objects may produce large numbers of fragments, leading to significant growth of the space debris population. An effective remediation measure to stabilize the population in LEO is therefore the removal of large, massive space debris. To remove these objects, not only precise orbits but also more detailed information about their attitude states will be required. One important property of an object targeted for removal is its spin period and spin axis orientation. If we observe a rotating object, the observer sees different surface areas of the object, which leads to changes in the measured intensity. Rotating objects will produce periodic brightness variations with frequencies that are related to the spin periods. Photometric monitoring is thus a real tool for remote diagnostics of a satellite's rotation around its center of mass. This information is also useful, for example, in case of contingency. Moreover, it is important to take the orientation of a non-spherical body (e.g. space debris) into account in the numerical integration of its motion when a close approach with another spacecraft is predicted. We introduce two databases of light curves: the AIUB database, which contains about a thousand light curves of LEO, MEO and high-altitude debris objects (including a few functional objects) obtained over more than seven years, and the database of the Astronomical Observatory of Odessa University (Ukraine), which contains the results of more than 10 years of photometric monitoring of functioning satellites and large space debris objects in low Earth orbit. AIUB used its 1 m ZIMLAT telescope for all light curves. For tracking low-orbit satellites, the Astronomical Observatory of Odessa used the KT-50 telescope, which has an alt-azimuth mount and allows tracking of objects moving at a high angular velocity. The diameter of the KT-50 main mirror is 0.5 m, and the focal length is 3 m. Odessa's Atlas of light curves includes almost 5,500 light curves for ~500 correlated objects from the period 2005-2014. The processing of light curves and the determination of the rotation period in the inertial frame is challenging. Extracted frequencies and reconstructed phases for some interesting targets, e.g. GLONASS satellites, for which SLR data were also available for confirmation, will be presented. The rotation of the Envisat satellite after its sudden failure will be analyzed. The deceleration of its rotation rate over 3 years is studied, together with an attempt to determine the orientation of the rotation axis.
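The period-extraction step described above is commonly done with a Lomb-Scargle periodogram, which handles the uneven sampling of optical passes; a minimal sketch on synthetic data follows. Note that the apparent period recovered this way is synodic and, as the abstract points out, still has to be reduced to the inertial frame.

```python
import numpy as np
from astropy.timeseries import LombScargle

# Synthetic, unevenly sampled light curve standing in for a real
# photometric series of a rotating debris object.
rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0.0, 600.0, 400))          # observation times, s
P_true = 23.0                                      # apparent spin period, s
mag = 0.4 * np.sin(2 * np.pi * t / P_true) + 0.05 * rng.standard_normal(t.size)

freq, power = LombScargle(t, mag).autopower(minimum_frequency=1 / 300.0,
                                            maximum_frequency=1.0)
print("best period: %.2f s" % (1.0 / freq[np.argmax(power)]))
```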
Abstract:
Unlike other parameters, the effect of openings on the onset and development of cracking in masonry wall panels has not been considered by the various codes currently in force. Nowadays a wide variety of enclosure element types is used for the partitions in building works, each with different mechanical characteristics and a different execution methodology, yet the same rules for the calculation and control of deformations apply to all of them. As discussed in Chapter 1, which reviews the State of the Art, current codes analytically determine the deflection likely to be reached by structural supporting elements under different service conditions. The existing proposals for limiting the active deflection, once the deformations have been calculated either by Branson's method or by curvature-integration methods, do not consider the existence and typology of openings in a masonry panel supported by the structure as a parameter in that limitation. It can nevertheless be asserted that a discontinuity in any element under stress influences its stress state. If we consider that, in general, cracking occurs when the tensile strength of the material of the supported masonry is exceeded, it is clear that the stress variation induced by the existence of openings must have some influence on the appearance and development of cracking in the partition and enclosure elements of building works. Chapters 2 and 3, after justifying the need for an investigation to confirm the relationship between the existence of openings in a masonry panel and the development of cracking in it, set this as the main Objective and lay out the Methodology for its analysis. Chapter 4 defines and justifies the calculation model used to determine the deformations and cracking processes in the cases analyzed, which consider as variables: the span of the model, the cracking state of the bearing elements, creep effects, and the percentage of load transmitted from the upper floor slab to the masonry panel under study. In addition, two values of the masonry tensile strength, 0.75 MPa and 1.00 MPa, are adopted. The ability to represent cracking, together with robustness and reliability, determined and justified the choice of the finite element program used for the calculations. Taking advantage of the possibility of accurately reproducing the characteristics introduced for each parameter, a parametric analysis comprising 360 iterative calculations was designed and carried out, presented in Chapter 5, to obtain a representative set of results on which the subsequent analysis is based.
Chapter 6 analyzes the results: the deformation values and cracking states obtained for the cases studied, and the influence of the presence of openings on the onset of cracking and on the deformations produced in the different structural configurations. The conclusions drawn from the results, included in Chapter 7, leave no room for doubt: the presence, position and typology of openings in masonry elements supported on deformable structures are determining factors for cracking and can influence the deformations that make up the active deflection of the element, which calls for a series of recommendations for design practice and for technical regulation. The research undertaken for this Doctoral Thesis and the methodology applied in its development open new lines of study, outlined in Chapter 8, for the analysis of aspects not covered by this research, with the aim of improving the limits that should be established for the Serviceability Limit State of Deformations in building structures.
Abstract:
The stability analysis of open cavity flows is a problem of great interest in the aeronautical industry. This type of flow can appear, for example, in landing gear or auxiliary power unit configurations. Open cavity flow is very sensitive to any change in the configuration, either physical (incoming boundary layer, Reynolds or Mach numbers) or geometrical (length-to-depth and length-to-width ratios). In this work, we have focused on the effect of geometry and of the Reynolds number on the stability properties of a three-dimensional spanwise-periodic cavity flow in the incompressible limit. To that end, BiGlobal analysis is used to investigate the instabilities in this configuration. The basic flow is obtained by numerical integration of the Navier-Stokes equations with laminar boundary layers imposed upstream. The 3D perturbation, assumed to be periodic in the spanwise direction, is obtained as the solution of a global eigenvalue problem. A parametric study has been performed, analyzing the stability of the flow under variation of the Reynolds number, the L/D ratio of the cavity, and the spanwise wavenumber β. For consistency, multidomain high-order numerical schemes have been used in all the computations, for both the basic flow and the eigenvalue problems. The results allow the neutral curves to be defined in the range L/D = 1 to L/D = 3. A scaling relating the frequency of the eigenmodes to the length-to-depth ratio is provided, based on the analysis results.
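The final step of a BiGlobal analysis is the solution of a large sparse generalized eigenvalue problem for the least-stable modes; a minimal sketch using shift-invert Arnoldi iteration follows, with a simple 1D convection-diffusion operator standing in for the linearized Navier-Stokes operator.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigs

# After discretization, the perturbation equations reduce to A q = lambda B q.
# The operator below is a stand-in, not the actual BiGlobal operator.
N, Re = 200, 100.0
h = 1.0 / (N + 1)
main = -2.0 * np.ones(N)
off = np.ones(N - 1)
D2 = sp.diags([off, main, off], [-1, 0, 1]) / h**2   # diffusion (Dirichlet BCs)
D1 = sp.diags([-off, off], [-1, 1]) / (2 * h)        # convection
A = (D2 / Re - D1).tocsc()
B = sp.identity(N, format="csc")                     # mass matrix

# Shift-invert about sigma = 0 finds the eigenvalues nearest the origin,
# i.e. the least damped modes; a positive real part would mean instability.
vals, vecs = eigs(A, k=4, M=B, sigma=0.0, which="LM")
print("eigenvalues nearest the origin:", vals[np.argsort(vals.real)])
```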
Abstract:
A linear method is developed for solving the nonlinear differential equations of a lumped-parameter thermal model of a spacecraft moving in a closed orbit. This method, based on perturbation theory, is compared with heuristic linearizations of the same equations. The essential feature of the linear approach is that it provides a decomposition into thermal modes, analogous to the decomposition of mechanical vibrations into normal modes. The stationary periodic solution of the linear equations can alternatively be expressed as an explicit integral or as a Fourier series. The method is applied to a minimal thermal model of a satellite with ten isothermal parts (nodes), and it is compared with direct numerical integration of the nonlinear equations. The computational complexity of this method is briefly studied for general thermal models of orbiting spacecraft; it is concluded that the method is certainly useful for reduced models and conceptual design, and that it can also be more efficient than direct integration of the equations for large models. The results of the Fourier series computations for the ten-node satellite model show that the periodic solution at the second perturbative order is sufficiently accurate.
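For readers unfamiliar with lumped-parameter thermal models, the sketch below directly integrates a two-node nonlinear model of this kind; the T^4 radiative term is the nonlinearity that the paper's perturbation method linearizes. All values are illustrative, not the ten-node satellite model of the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

C = np.array([500.0, 800.0])          # node heat capacities, J/K
k12 = 0.8                             # conductive coupling, W/K
eps_sigma_A = np.array([2e-9, 3e-9])  # emissivity * sigma * area, W/K^4
P_orb = 5900.0                        # orbital period, s

def heat_input(t):
    # Eclipse-modulated solar input on node 1, small constant load on node 2.
    return np.array([60.0 * max(np.sin(2 * np.pi * t / P_orb), 0.0), 5.0])

def rhs(t, T):
    Q = heat_input(t)
    cond = k12 * (T[::-1] - T)        # conduction between the two nodes
    rad = eps_sigma_A * T**4          # radiative loss to deep space
    return (Q + cond - rad) / C

sol = solve_ivp(rhs, (0.0, 10 * P_orb), [290.0, 285.0], method="LSODA",
                max_step=P_orb / 200)
print("node temperatures after 10 orbits:", sol.y[:, -1])
```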
Abstract:
Modelling of entire wind farms in flat and complex terrain using a full 3D Navier–Stokes solver for incompressible flow is presented in this paper. Numerical integration of the governing equations is performed using an implicit pressure correction scheme, in which the wind turbines (W/Ts) are modelled as momentum absorbers through their thrust coefficient. The k–ω turbulence model, suitably modified for atmospheric flows, is employed for closure. A correction is introduced to account for the underestimation of the near-wake deficit, in which the turbulence time scale is bounded using a general “realizability” constraint for the fluctuating velocities. The second modelling issue discussed in this paper is the determination of the reference wind speed for the thrust calculation of the machines. For large wind farms and wind farms in complex terrain, determining the reference wind speed is not obvious when a W/T operates in the wake of another W/T and/or in complex terrain. Two alternatives are compared: using the wind speed value at hub height one diameter upstream of the W/T, and adopting an induction-factor-based concept that avoids the use of a wind speed at a certain distance upwind of the rotor, as illustrated in the sketch below. The method is applied to two wind farms: a five-machine farm located in flat terrain and a 43-machine farm located in complex terrain.
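The induction-factor concept can be made explicit with one-dimensional actuator-disc (momentum) theory; a minimal sketch, assuming the textbook relations rather than the paper's exact implementation:

```python
import numpy as np

# Momentum theory: C_T = 4 a (1 - a) links the thrust coefficient to the
# axial induction factor a, and U_disc = (1 - a) U_ref lets a reference
# speed be recovered from the locally computed disc-plane velocity.
def induction_from_ct(ct):
    return 0.5 * (1.0 - np.sqrt(1.0 - ct))       # physical root, a <= 1/2

def u_ref_from_disc(u_disc, ct):
    return u_disc / (1.0 - induction_from_ct(ct))

ct, u_disc = 0.8, 7.0                             # illustrative values
u_ref = u_ref_from_disc(u_disc, ct)
thrust_per_area = 0.5 * 1.225 * ct * u_ref**2     # N/m^2, rho = 1.225 kg/m^3
print(f"a = {induction_from_ct(ct):.3f}, U_ref = {u_ref:.2f} m/s, "
      f"thrust = {thrust_per_area:.1f} N/m^2")
```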
Abstract:
In engineering design and development, before the construction and implementation of a project's objectives begin, a series of preliminary analyses and simulations is needed to corroborate the expectations of the initial hypothesis and to obtain an empirical reference that satisfies the working or operating conditions of those objectives. Often, results satisfying the desired characteristics are obtained by iterative trial-and-error methods. These methods generally apply the same analysis procedure while varying a set of parameters that adapt a technology to the desired purpose. Today, powerful computers and mathematical solution algorithms are available that can solve different kinds of calculation problems quickly and efficiently. The development of applications that solve such problems rapidly and precisely is of interest for the analysis and synthesis of engineering solutions, especially when similar expressions with varying constants are handled, since solution routines can be written that accept the parameters defining each problem. Moreover, by implementing code according to the theoretical basis of a technology, a single program can serve for the study of any problem related to that technology. This project implements the first phase of the optical device simulator Slabsim, which can represent the energy distribution of an electromagnetic wave at optical frequencies guided through a planar dielectric waveguide, also known as a slab. The simulator is built on a graphical interface created with Matlab GUIDE, the graphical user interface development environment from Mathworks©, so that it is simple and intuitive to run simulations even with little knowledge of the theoretical basis of these structures on the part of the user. In this way an engineer needs less time to find a solution that satisfies the requirements of a project involving planar dielectric waveguides, and the tool can even be used for a wide variety of purposes based on this technology. One of the main objectives of this project is the solution of the theoretical equations of slab waveguides by computational numerical methods, whose procedures can be extrapolated to other mathematical problems and give the author a solid conceptual grounding in them. For this reason, the differential and characteristic equations that constitute the problems of this type of structure are solved by these computational means in the core of the application, since in some cases no useful analytical expressions are available.
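To illustrate the kind of characteristic equation such a simulator solves numerically, the sketch below finds the fundamental even TE mode of a symmetric slab by bracketing the root of the normalized dispersion relation. The waveguide parameters are illustrative, and since Slabsim itself is written in Matlab, this Python version is only a sketch of the method.

```python
import numpy as np
from scipy.optimize import brentq

# Symmetric slab waveguide (illustrative values, not from the project).
wavelength = 1.55e-6      # m
d = 2.0e-6                # core thickness, m
n1, n2 = 1.50, 1.45       # core and cladding refractive indices

k0 = 2 * np.pi / wavelength
V = 0.5 * k0 * d * np.sqrt(n1**2 - n2**2)     # normalized frequency

# Even TE modes satisfy u*tan(u) = sqrt(V^2 - u^2), with u = kappa*d/2.
# The fundamental root lies in (0, min(V, pi/2)), so it can be bracketed.
f = lambda u: u * np.tan(u) - np.sqrt(V**2 - u**2)
u0 = brentq(f, 1e-9, min(V, np.pi / 2) - 1e-9)

kappa = 2 * u0 / d                            # transverse wavenumber in the core
beta = np.sqrt(n1**2 * k0**2 - kappa**2)      # propagation constant
print("effective index n_eff =", beta / k0)
```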
Abstract:
Correct modeling of the equivalent circuits of solar cells and panels is today an essential tool for power optimization. However, the parameter extraction of those circuits is still a quite difficult task that normally requires both experimental data and calculation procedures generally not available to the normal user. This paper presents a new analytical method that easily calculates the equivalent circuit parameters from the data that manufacturers usually provide. The analytical approximation is based on a new methodology, since the methods developed until now to obtain these equivalent circuit parameters from manufacturer's data have always been numerical or heuristic. Results from the present method are as accurate as those obtained from other existing methods that are more complex (numerical) in terms of calculation process and resources.
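Although the paper's analytical extraction procedure is not reproduced here, the single-diode model that the extracted parameters feed into can be evaluated in closed form; a minimal sketch using the well-known Lambert-W solution of the single-diode equation (Jain and Kapoor, 2004), with illustrative parameter values:

```python
import numpy as np
from scipy.special import lambertw

# Single-diode model: I = Iph - I0*(exp((V+I*Rs)/(n*Vt)) - 1) - (V+I*Rs)/Rsh.
# Parameter values below are assumed for a single cell, not from the paper.
Iph, I0, n, Rs, Rsh = 8.0, 1e-9, 1.2, 0.005, 50.0
Vt = 0.0257                                  # thermal voltage at ~25 C, V

def current(V):
    """Closed-form I(V) via the Lambert W function."""
    a = n * Vt * (Rs + Rsh)
    arg = (Rs * I0 * Rsh / a) * np.exp(Rsh * (Rs * Iph + Rs * I0 + V) / a)
    return (Rsh * (Iph + I0) - V) / (Rs + Rsh) - (n * Vt / Rs) * lambertw(arg).real

V = np.linspace(0.0, 0.7, 8)
I = np.array([current(v) for v in V])
print(np.column_stack([V, I, V * I]))        # voltage, current, power
```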
Abstract:
Over the last few years there has been a huge growth in biomedical data sources. The emergence of new techniques for extracting genomic data and generating databases that contain this information has created the need to store it so that it can be accessed and worked with. The information produced by research in the biomedical field is stored in databases, since databases allow data to be stored and managed in a simple and fast way. Databases come in a wide variety of formats, such as Excel, CSV or RDF, among others. Currently this research is based on data analysis, in order to find correlations that allow inferring, for example, new treatments or more effective therapies for a given disease or ailment. The volume of data handled is very large and disparate, which makes it necessary to develop automatic methods for integrating and homogenizing the heterogeneous data. The European project p-medicine (FP7-ICT-2009-270089) aims to assist medical researchers, in this case in cancer-related research, by providing them with new tools for managing data and generating new knowledge from the analysis of the managed data. The ingestion of data into the p-medicine platform, and its processing with the methods provided, seek to generate new models for clinical decision support. Within this project there are various tools for the integration of heterogeneous data, the design and management of clinical trials, the simulation and visualization of tumors, and statistical data analysis. Precisely in the field of heterogeneous data integration there arises the need to add external information from public databases to the system, and to relate it to the existing information through semantic integration techniques. To meet this need a tool called Term Searcher has been created, which performs this process in a semiautomatic way. The work presented here describes its development and the algorithms created for its correct operation. This tool offers new functionality, previously unavailable within the project, for adding new data from public sources and semantically integrating it with private data.
Abstract:
In order to build dynamic models for the prediction and management of degraded Mediterranean forest areas, it was necessary to build the MARIOLA model, a calculation computer program. This model includes the following subprograms: 1) bioshrub, which calculates total, green and woody shrub biomass and establishes the time differences needed to calculate growth; 2) selego, which builds the flow equations from the experimental data, based on advanced statistical multiple-regression procedures; 3) VEGETATION, which solves the state equations with Euler or Runge-Kutta integration methods, as sketched below. Each of these subprograms can act independently or as a linked program.
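A minimal sketch of the two integration schemes named above, applied to an illustrative logistic growth equation rather than the actual MARIOLA flow equations:

```python
import numpy as np

# Illustrative state equation (logistic biomass growth); the actual
# MARIOLA flow equations are fitted from field data and differ.
def f(t, x, r=0.8, K=100.0):
    return r * x * (1.0 - x / K)

def euler_step(f, t, x, h):
    return x + h * f(t, x)

def rk4_step(f, t, x, h):
    k1 = f(t, x)
    k2 = f(t + h / 2, x + h * k1 / 2)
    k3 = f(t + h / 2, x + h * k2 / 2)
    k4 = f(t + h, x + h * k3)
    return x + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6

t, x_e, x_rk, h = 0.0, 5.0, 5.0, 0.1
for _ in range(100):                     # integrate to t = 10
    x_e = euler_step(f, t, x_e, h)
    x_rk = rk4_step(f, t, x_rk, h)
    t += h
print(f"Euler: {x_e:.4f}  RK4: {x_rk:.4f}")   # RK4 is far more accurate per step
```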
Abstract:
Thesis (Master's)--University of Washington, 2016-06
Abstract:
The multibody dynamics of a satellite in circular orbit, modeled as a central body with two hinge-connected deployable solar panel arrays, is investigated. Typically, the solar panel arrays are deployed in orbit using preloaded torsional springs at the hinges in a near symmetrical accordion manner, to minimize the shock loads at the hinges. There are five degrees of freedom of the interconnected rigid bodies, composed of coupled attitude motions (pitch, yaw and roll) of the central body plus relative rotations of the solar panel arrays. The dynamical equations of motion of the satellite system are derived using Kane's equations. These are then used to investigate the dynamic behavior of the system during solar panel deployment via the 7-8th-order Runge-Kutta integration algorithms and results are compared with approximate analytical solutions. Chaotic attitude motions of the completely deployed satellite in circular orbit under the influence of the gravity-gradient torques are subsequently investigated analytically using Melnikov's method and confirmed via numerical integration. The Hamiltonian equations in terms of Deprit's variables are used to facilitate the analysis. (C) 2003 Published by Elsevier Ltd.
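A reduced illustration of the final integration step: the planar pitch-libration equation of a gravity-gradient satellite integrated with a high-order Runge-Kutta method, with scipy's 8th-order DOP853 standing in for the 7-8th-order scheme of the paper. The inertia ratio and orbit period are assumed values, and the full five-degree-of-freedom Kane model is far richer than this single-axis sketch.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Planar pitch libration under gravity-gradient torque in circular orbit:
#   theta'' + 3 n^2 sigma sin(theta) cos(theta) = 0
n = 2 * np.pi / 5900.0          # orbital rate, rad/s (illustrative LEO period)
sigma = 0.4                     # inertia ratio (I_roll - I_yaw) / I_pitch, assumed

def rhs(t, y):
    theta, omega = y
    return [omega, -3.0 * n**2 * sigma * np.sin(theta) * np.cos(theta)]

sol = solve_ivp(rhs, (0.0, 10 * 5900.0), [0.3, 0.0], method="DOP853",
                rtol=1e-10, atol=1e-12)
print("pitch angle after 10 orbits:", sol.y[0, -1])
```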
Abstract:
The ab initio/Rice-Ramsperger-Kassel-Marcus (RRKM) approach has been applied to investigate the photodissociation mechanism of benzene at various wavelengths upon absorption of one or two UV photons followed by internal conversion into the ground electronic state. Reaction pathways leading to various decomposition products have been mapped out at the G2M level, and then the RRKM and microcanonical variational transition state theories have been applied to compute rate constants for individual reaction steps. Relative product yields (branching ratios) for C6H5+H, C6H4+H-2, C4H4+C2H2, C4H2+C2H4, C3H3+C3H3, C5H3+CH3, and C4H3+C2H3 have subsequently been calculated using both numerical integration of kinetic master equations and the steady-state approach. The results show that upon absorption of a 248 nm photon, dissociation is too slow to be observable in molecular beam experiments. In photodissociation at 193 nm, the dominant dissociation channel is H atom elimination (99.6%) and the minor reaction channel is H-2 elimination, with a branching ratio of only 0.4%. The calculated lifetime of benzene at 193 nm is about 11 μs, in excellent agreement with the experimental value of 10 μs. At 157 nm, H loss remains the dominant channel but its branching ratio decreases to 97.5%, while that for H-2 elimination increases to 2.1%. The other channels, leading to C3H3+C3H3, C5H3+CH3, C4H4+C2H2, and C4H3+C2H3, play an insignificant role but might be observed. For photodissociation upon absorption of two UV photons occurring through the neutral hot benzene mechanism, excluding dissociative ionization, we predict that the C6H5+H channel should be less dominant, while the contributions of C6H4+H-2 and of the C3H3+C3H3, CH3+C5H3, and C4H3+C2H3 radical channels should significantly increase. (C) 2004 American Institute of Physics.
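The two ways of obtaining branching ratios mentioned above, integrating the kinetic master equation versus the steady-state approach, can be contrasted on a toy scheme. The two illustrative rate constants below are chosen so that the lifetime and branching reproduce the 193 nm values quoted in the abstract; they are not the computed RRKM constants.

```python
import numpy as np
from scipy.linalg import expm

# Toy scheme: one energized intermediate A* decaying into two competing
# product channels P1 and P2 with fixed rate constants (s^-1).
k1, k2 = 9.0e4, 4.0e2           # e.g. H loss vs H2 loss at some fixed energy

# Linear master equation dp/dt = M p for populations [A*, P1, P2].
M = np.array([[-(k1 + k2), 0.0, 0.0],
              [k1,         0.0, 0.0],
              [k2,         0.0, 0.0]])
p0 = np.array([1.0, 0.0, 0.0])
p = expm(M * 1e-3) @ p0         # propagate 1 ms, long after A* has decayed

print("lifetime of A* (us):", 1e6 / (k1 + k2))          # ~11 us
print("branching P1, P2:", p[1], p[2])                  # ~0.996, ~0.004
# Steady-state result for comparison: k_i / (k1 + k2)
print("steady-state ratios:", k1 / (k1 + k2), k2 / (k1 + k2))
```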