938 results for Second order moment functions
Abstract:
Spacecraft move at high speeds and undergo abrupt changes in acceleration, so an onboard GPS receiver can only compute navigation solutions if the Doppler effect is taken into account during satellite signal acquisition and tracking. Thus, for a receiver subject to such dynamics to cope with the frequency shifts resulting from this effect, it is imperative to widen its acquisition bandwidth and raise its tracking loop to a higher order. This paper presents the changes made to the software of the GPS Orion, an open-architecture receiver produced by GEC Plessey Semiconductors (now Zarlink), to make it able to generate navigation fixes for vehicles under high dynamics, especially Low Earth Orbit satellites. The GPS Architect development system, sold by the same company, supported the modifications. The paper also presents the characteristics of GPS Monitor Aerospace, a computational tool developed for monitoring, through graphics, the navigation fixes calculated by the GPS receiver. Although it was not possible to simulate the modified receiver software under high dynamics, the receiver worked in stationary tests, which was also verified in the new interface. This work also presents the results of the GPS Receiver for Aerospace Applications experiment, obtained through the receiver's participation in a suborbital mission, Operation Maracati 2, in December 2010, using a digital second-order carrier tracking loop. Although an incident moments before launch hindered the receiver's effective navigation, the experiment worked properly, acquiring new satellites and tracking them during the VSB-30 rocket flight.
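The role of the loop order mentioned above can be illustrated with a small sketch. This is not the Orion firmware; it is a hedged toy model of a digital second-order carrier tracking loop, whose integral branch is what lets it follow a Doppler frequency ramp (all gains and rates below are illustrative assumptions):

```python
import math

def track_ramp(domega=1e-6, kp=0.1, ki=1e-3, n=20000):
    """Digital second-order carrier tracking loop following a Doppler
    frequency ramp (frequencies in rad/sample; gains are assumptions)."""
    phi_in = phi_nco = 0.0      # incoming and local (NCO) carrier phases
    omega = omega_nco = 0.0     # incoming and NCO frequencies
    for _ in range(n):
        omega += domega                          # Doppler ramp on the carrier
        phi_in += omega
        e = math.atan2(math.sin(phi_in - phi_nco),
                       math.cos(phi_in - phi_nco))   # wrapped phase error
        omega_nco += ki * e                      # integral branch -> 2nd order
        phi_nco += omega_nco + kp * e            # proportional branch + NCO
    return omega, omega_nco

w, w_hat = track_ramp()
```

A first-order loop (ki = 0) would accumulate a growing phase error under the same frequency ramp, which is why the tracking loop of a receiver under high dynamics must be raised to a higher order.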
Abstract:
This work presents a modelling and identification method for a wheeled mobile robot, including the actuator dynamics. Instead of the classic modelling approach, in which the robot position coordinates (x, y) are used as state variables (resulting in a nonlinear model), the proposed discrete model is based on the travelled distance increment Delta_l. The resulting model is thus linear and time-invariant and can be identified through classical methods such as Recursive Least Squares. This approach has one difficulty: Delta_l cannot be measured directly. In this paper, the problem is solved using an estimate of Delta_l based on a second-order polynomial approximation. Experimental data were collected and the proposed method was used to identify the model of a real robot.
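The identification step can be sketched as follows. This is not the thesis implementation; it is a minimal Recursive Least Squares loop on an assumed second-order linear discrete model, showing how the parameters of a linear time-invariant model are recovered from input/output data:

```python
import numpy as np

# Hedged sketch: RLS identification of an assumed discrete LTI plant
#   y[k] = a1*y[k-1] + a2*y[k-2] + b1*u[k-1]
# (plant parameters, input signal and initial covariance are assumptions).
def rls_identify(n=2000, lam=1.0):
    rng = np.random.default_rng(0)
    a1, a2, b1 = 1.2, -0.36, 0.5               # assumed "true" plant
    theta = np.zeros(3)                        # estimate of [a1, a2, b1]
    P = 1e4 * np.eye(3)                        # estimate covariance
    y = np.zeros(n)
    u = rng.uniform(-1, 1, n)                  # persistently exciting input
    for k in range(2, n):
        y[k] = a1*y[k-1] + a2*y[k-2] + b1*u[k-1]   # plant output
        phi = np.array([y[k-1], y[k-2], u[k-1]])   # regressor
        K = P @ phi / (lam + phi @ P @ phi)        # RLS gain
        theta = theta + K * (y[k] - phi @ theta)   # parameter update
        P = (P - np.outer(K, phi @ P)) / lam       # covariance update
    return theta

theta = rls_identify()   # converges to the true parameters (noise-free data)
```

With noise-free data the estimate converges essentially exactly; with measurement noise a forgetting factor lam < 1 trades tracking speed against variance.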
Abstract:
This work proposes a computer simulator for sucker-rod pumped vertical wells. The simulator represents the dynamic behavior of the system and computes several important parameters, allowing easy visualization of the relevant phenomena. Using the simulator, many tests can be run at lower cost and in less time than experiments on real wells. The simulation uses a model based on the dynamic behavior of the rod string, represented by a second-order partial differential equation. Through this model, several common field situations can be verified. Moreover, the simulation includes 3D animations, facilitating physical understanding of the process through better visual interpretation of the phenomena. Another important characteristic is the emulation of the main sensors used in sucker-rod pumping automation. The sensor emulation is implemented through a microcontrolled interface between the simulator and industrial controllers; by means of this interface, the controllers see the simulator as a real well. A "fault module" was included in the simulator, incorporating the six most important faults found in sucker-rod pumping. The analysis and verification of these problems through the simulator therefore allows the user to identify situations that could otherwise be observed only in the field. The simulation of these faults receives a different treatment, owing to the different boundary conditions imposed on the numerical solution of the problem. Possible applications of the simulator include the design and analysis of wells, training of technicians and engineers, testing of controllers and supervisory systems, and validation of control algorithms.
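The rod-string model mentioned above is a second-order PDE of damped-wave type, u_tt = v² u_xx − c u_t. The sketch below is not the simulator itself; it solves that equation with an explicit finite-difference scheme, with wave speed, damping, boundary conditions and initial shape chosen purely for illustration:

```python
import math

def rod_string_step(nx=50, nt=400, v=1.0, c=0.5, L=1.0, T=2.0):
    """Explicit finite differences for u_tt = v^2 u_xx - c u_t,
    fixed ends, starting from rest (all parameters are assumptions)."""
    dx = L / (nx - 1)
    dt = T / nt
    assert v * dt / dx <= 1.0                  # CFL stability condition
    u_prev = [math.sin(math.pi * i * dx / L) for i in range(nx)]
    u = list(u_prev)                           # zero initial velocity
    for _ in range(nt):
        u_next = [0.0] * nx                    # fixed ends (u = 0)
        for i in range(1, nx - 1):
            u_next[i] = (2*u[i] - u_prev[i]
                         + (v*dt/dx)**2 * (u[i+1] - 2*u[i] + u[i-1])
                         - c*dt*(u[i] - u_prev[i]))
        u_prev, u = u, u_next
    return u

u = rod_string_step()   # displacement profile after one fundamental period
```

The damping term makes the oscillation decay, which is the qualitative behavior a rod string exhibits; the real simulator additionally imposes the surface and downhole boundary conditions of the pumping unit.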
Abstract:
The study of aerodynamic loading variations has many engineering applications, including helicopter rotor blades, wind turbines and turbomachinery. This work uses a Vortex Method to give a Lagrangian description of the two-dimensional interaction between an airfoil and an incident wake vortex. The flow is incompressible, Newtonian and homogeneous, and the Reynolds number is 5×10^5. The airfoil is a NACA 0018 at angles of attack of 0° and 5°, simulated with the Panel Method using constant-density vorticity panels, with a generation point near each panel. A protective layer is created that does not allow vortices inside the body. The convection of the Lamb vortices is computed with the Euler method (first order) and the Adams-Bashforth method (second order), and the Random Walk Method is used to simulate diffusion. The circular wake contains 366 vortices, each with positive or negative vorticity, located at different heights with respect to the airfoil chord. The lift was calculated with the algorithm developed by Ricci (2002). The simulation uses an existing algorithm validated for a single body without an incident wake. The results are compared with an experimental work, and the comparison shows good agreement between the experimental results and this paper.
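The difference between the two convection schemes named above can be illustrated on a problem with a known answer. The sketch below is an assumption for illustration, not the thesis code: it advects a particle in the rotating field u = (−y, x), whose exact trajectory is a circle of radius 1, and shows that second-order Adams-Bashforth drifts far less than first-order Euler:

```python
import math

def advect(scheme, dt=0.01, steps=628):
    """Advect a particle in the rotating field u = (-y, x)."""
    vel = lambda x, y: (-y, x)
    x, y = 1.0, 0.0
    fx_prev, fy_prev = vel(x, y)        # startup: first AB2 step is Euler
    for _ in range(steps):
        fx, fy = vel(x, y)
        if scheme == "euler":           # first order
            x, y = x + dt * fx, y + dt * fy
        else:                           # Adams-Bashforth: 3/2 f_n - 1/2 f_{n-1}
            x, y = (x + dt * (1.5 * fx - 0.5 * fx_prev),
                    y + dt * (1.5 * fy - 0.5 * fy_prev))
        fx_prev, fy_prev = fx, fy
    return math.hypot(x, y)             # exact trajectory keeps radius 1

r_euler = advect("euler")
r_ab2 = advect("ab2")
```

After one revolution the Euler particle has spiralled visibly outward while the Adams-Bashforth particle stays on the unit circle to within a small fraction of a percent, which is why the second-order scheme is preferred for vortex convection.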
Abstract:
Research has been carried out on new materials and methodologies that aim to minimize the environmental problems caused by discharges of industrial effluents contaminated with heavy metals. Adsorption has been used as an effective, economically viable and potentially important technology for the removal of metals, especially when natural adsorbents such as certain types of clay are used. Chitosan, a polymer of natural origin present in the shells of crustaceans and insects, has also been used for this purpose. Among the clays, vermiculite is distinguished by its good ion-exchange capacity; in its expanded form its properties are enhanced by a large increase in specific surface. This study aimed to evaluate the functionality of the hybrid material obtained by modifying expanded vermiculite with chitosan for the removal of lead(II) ions from aqueous solution. The material was characterized by infrared spectroscopy (IR) in order to evaluate the efficiency of the modification of the matrix (vermiculite) by the organic material (chitosan). The thermal stability of the material and the clay/polymer ratio were evaluated by thermogravimetry. The surface of the material was examined by scanning electron microscopy (SEM) and BET analysis. The BET analysis revealed a significant increase in the surface area of the vermiculite after interaction with chitosan, reaching a value of 21.6156 m²/g. Adsorption tests were performed as a function of particle size, concentration and time. The results show that the removal capacity of the vermiculite averaged 88.4% for lead at concentrations ranging from 20 to 200 mg/L and 64.2% at a concentration of 1000 mg/L. Regarding particle size, adsorption increased with decreasing particle size. As a function of contact time, adsorption equilibrium was observed at 60 minutes.
The isotherm data were fitted to the Freundlich equation. The kinetic study showed that the pseudo-second-order model best describes the adsorption, with K2 = 0.024 g·mg⁻¹·min⁻¹ and Qmax = 25.75 mg/g, very close to the calculated Qe = 26.31 mg/g. From these results we can conclude that the material can be used in wastewater treatment systems as a metal-ion adsorbent, owing to its high adsorption capacity.
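The pseudo-second-order model used in such kinetic studies has the closed form q_t = k2·qe²·t / (1 + k2·qe·t). The sketch below simply evaluates it with the constants reported in the text (k2 = 0.024 g·mg⁻¹·min⁻¹, qe = 26.31 mg/g) to check that uptake is essentially at equilibrium by the reported 60 minutes:

```python
# Pseudo-second-order kinetic model, q_t = k2*qe^2*t / (1 + k2*qe*t),
# evaluated with the constants reported in the abstract.
def q_t(t, k2=0.024, qe=26.31):
    return k2 * qe**2 * t / (1.0 + k2 * qe * t)

q60 = q_t(60.0)        # uptake after 60 min, in mg/g
frac = q60 / 26.31     # fraction of the equilibrium capacity reached
```

With these constants roughly 97% of the equilibrium capacity is reached by 60 min, consistent with the equilibrium time observed experimentally.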
Stochastic stability for Markovian jump linear systems associated with a finite number of jump times
Abstract:
This paper deals with a stochastic stability concept for discrete-time Markovian jump linear systems. The random jump parameter is associated with changes between the system operation modes due to failures or repairs, which can be well described by an underlying finite-state Markov chain. In the model studied, a fixed number of failures or repairs is allowed, after which the system is brought to a halt for maintenance or replacement. The usual concepts of stochastic stability are related to pure infinite-horizon problems and are not appropriate in this scenario. A new stability concept, named stochastic tau-stability, is introduced, tailored to the present setting. Necessary and sufficient conditions to ensure stochastic tau-stability are provided, and the almost sure stability concept associated with this class of processes is also addressed. The paper also develops equivalences among second order concepts that parallel the results for infinite-horizon problems. (C) 2003 Elsevier B.V. All rights reserved.
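The second-order (mean-square) notions mentioned in the abstract can be made concrete with a small example. The sketch below is not from the paper; it checks mean-square stability of an assumed scalar two-mode Markovian jump linear system by computing the spectral radius of the operator that propagates the mode-conditioned second moments:

```python
import numpy as np

# Assumed two-mode scalar MJLS: x_{k+1} = a[theta_k] * x_k
a = np.array([0.5, 1.1])          # mode 0 stable, mode 1 unstable alone
P = np.array([[0.9, 0.1],         # P[i, j] = Prob(theta_{k+1}=j | theta_k=i)
              [0.3, 0.7]])

# Second moments q_j(k) = E[x_k^2 ; theta_k = j] satisfy q(k+1) = L q(k),
# with L[j, i] = P[i, j] * a[i]**2; mean-square stability <=> rho(L) < 1.
L = np.array([[P[i, j] * a[i] ** 2 for i in range(2)] for j in range(2)])
rho = float(max(abs(np.linalg.eigvals(L))))
```

Even though mode 1 alone is unstable, the chain spends enough time in mode 0 for the second moment to decay; raising a[1] to 1.3 in this example pushes the spectral radius above 1. The paper's tau-stability restricts such questions to the horizon ending after a fixed number of jumps.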
Abstract:
The extracellular glycerol kinase gene from Saccharomyces cerevisiae (GUT1) was cloned into the expression vector pPICZ alpha A and integrated into the genome of the methylotrophic yeast Pichia pastoris X-33. The presence of the GUT1 insert was confirmed by PCR analysis. Four clones were selected and the functionality of the recombinant enzyme was assayed. Among the tested clones, one exhibited glycerol kinase activity of 0.32 U/mL, with specific activity of 0.025 U/mg of protein. A medium optimized for maximum biomass production by recombinant Pichia pastoris in shaker cultures was initially explored, using 2.31 % (by volume) glycerol as the carbon source. Optimization was carried out by response surface methodology (RSM). In preliminary experiments, following a Plackett-Burman design, glycerol volume fraction (phi(Gly)) and growth time (t) were selected as the most important factors in biomass production. Subsequent experiments, carried out to optimize biomass production, therefore followed a central composite rotatable design as a function of phi(Gly) and time. Glycerol volume fraction proved to have a significant positive linear effect on biomass production, and time was also a significant factor (at the linear positive and quadratic levels). The experimental data were well fitted by a convex surface representing a second-order polynomial model, in which biomass is a function of both factors (R² = 0.946). Yield and specific activity of glycerol kinase were mainly affected by the additions of glycerol and methanol to the medium. The optimized medium composition for enzyme production was: 1 % yeast extract, 1 % peptone, 100 mM potassium phosphate buffer, pH 6.0, 1.34 % yeast nitrogen base (YNB), 4×10⁻⁵ % biotin, 1 % methanol and 1 % glycerol, reaching 0.89 U/mL of glycerol kinase activity and 14.55 g/L of total protein in the medium after 48 h of growth.
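The response-surface step can be sketched as follows. This is not the thesis data or code; it fits the same kind of second-order polynomial model, y = b0 + b1·x1 + b2·x2 + b11·x1² + b22·x2² + b12·x1·x2, by ordinary least squares on synthetic (glycerol fraction, time) → biomass points with assumed coefficients:

```python
import numpy as np

def fit_quadratic_surface(X, y):
    """Least-squares fit of a full second-order polynomial in two factors."""
    x1, x2 = X[:, 0], X[:, 1]
    M = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
    b, *_ = np.linalg.lstsq(M, y, rcond=None)
    return b

rng = np.random.default_rng(1)
X = rng.uniform(-1.0, 1.0, (30, 2))                    # coded factor levels
true_b = np.array([5.0, 1.2, 0.8, -0.9, -0.4, 0.3])    # assumed coefficients
x1, x2 = X[:, 0], X[:, 1]
y = (true_b[0] + true_b[1]*x1 + true_b[2]*x2
     + true_b[3]*x1**2 + true_b[4]*x2**2 + true_b[5]*x1*x2)
b = fit_quadratic_surface(X, y)        # recovers true_b (noise-free data)
```

Negative quadratic coefficients give a dome-shaped surface with an interior maximum, which is what the convex fitted surface of a central composite design is used to locate.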
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
In Einstein's theory of General Relativity, the field equations relate the geometry of space-time to the content of matter and energy, the sources of the gravitational field. This content is described by a second-order tensor, known as the energy-momentum tensor. On the other hand, the energy-momentum tensors that have physical meaning are not specified by the theory. In the 1970s, Hawking and Ellis established a set of conditions, considered reasonable from a physical point of view, in order to limit the arbitrariness of these tensors. These conditions, which became known as the Hawking-Ellis energy conditions, play important roles in gravitation. They are widely used as powerful analysis tools: from the demonstration of important theorems concerning the behavior of gravitational fields and their associated geometries, and the quantum behavior of gravity, to the analysis of cosmological models. In this dissertation we present a rigorous deduction of the several energy conditions currently in vogue in the scientific literature: the Null Energy Condition (NEC), the Weak Energy Condition (WEC), the Strong Energy Condition (SEC), the Dominant Energy Condition (DEC) and the Null Dominant Energy Condition (NDEC). Bearing in mind the most common applications in Cosmology and Gravitation, the deductions were first made for the energy-momentum tensor of a generalized perfect fluid and then extended to scalar fields with minimal and non-minimal coupling to the gravitational field. We also present a study of the possible violations of some of these energy conditions. Aiming at the study of the singular nature of some exact solutions of Einstein's General Relativity, in 1955 the Indian physicist Raychaudhuri derived an equation that is today considered fundamental to the study of the gravitational attraction of matter, known as the Raychaudhuri equation. This famous equation is fundamental to the understanding of gravitational attraction in Astrophysics and Cosmology and to the comprehension of the singularity theorems, such as the Hawking-Penrose theorem on the singularity of gravitational collapse. In this dissertation we derive the Raychaudhuri equation, the Frobenius theorem and the Focusing theorem for time-like and null congruences of a pseudo-Riemannian manifold, and we discuss the geometric and physical meaning of this equation, its connections with the energy conditions, and some of its many applications.
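For reference, the standard statements of these conditions (here in tensor form, with k^μ null and V^μ timelike, and for a perfect fluid with energy density ρ and pressure p) and of the Raychaudhuri equation for a timelike geodesic congruence are:

```latex
\begin{aligned}
\text{NEC:}&\quad T_{\mu\nu}k^{\mu}k^{\nu}\ge 0
  &&\Longleftrightarrow\quad \rho + p \ge 0,\\
\text{WEC:}&\quad T_{\mu\nu}V^{\mu}V^{\nu}\ge 0
  &&\Longleftrightarrow\quad \rho \ge 0,\ \rho + p \ge 0,\\
\text{SEC:}&\quad \Big(T_{\mu\nu}-\tfrac{T}{2}g_{\mu\nu}\Big)V^{\mu}V^{\nu}\ge 0
  &&\Longleftrightarrow\quad \rho + 3p \ge 0,\ \rho + p \ge 0,\\
\text{DEC:}&\quad \text{WEC and } -T^{\mu}{}_{\nu}V^{\nu}\ \text{causal}
  &&\Longleftrightarrow\quad \rho \ge |p|,
\end{aligned}
\qquad
\frac{d\theta}{d\tau}
  = -\frac{\theta^{2}}{3}
    - \sigma_{\mu\nu}\sigma^{\mu\nu}
    + \omega_{\mu\nu}\omega^{\mu\nu}
    - R_{\mu\nu}u^{\mu}u^{\nu}.
```

Combined with the SEC (via the field equations, R_{μν}u^μu^ν ≥ 0), the last equation yields the focusing theorem: for a hypersurface-orthogonal (ω = 0) congruence the expansion θ can only decrease.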
Abstract:
For vertebrates, maintaining body balance against the gravitational field and being able to orient themselves in the environment are fundamental for survival, and the participation of the vestibular system in these tasks is essential. As part of this system, the vestibular nuclear complex is the first central station which, by integrating vestibular information with many other inputs (visual, proprioceptive), assumes the leading role in maintaining balance. In this study, the vestibular nuclear complex was evaluated with respect to its cytoarchitecture and the neurochemical content of its cells and axon terminals, through Nissl staining and immunohistochemistry for neuronal specific nuclear protein (NeuN), glutamate (Glu), substance P (SP), choline acetyltransferase (ChAT, the enzyme that synthesizes acetylcholine, ACh) and glutamic acid decarboxylase (GAD, the enzyme that synthesizes gamma-aminobutyric acid, GABA). The common marmoset (Callithrix jacchus), a small primate native to the Atlantic Forest of the Brazilian Northeast, was used as the experimental animal. The Nissl technique, complemented by NeuN immunohistochemistry, allowed the delineation of the superior, lateral, medial and inferior (or descending) vestibular nuclei in the marmoset brain. Neurons and terminals immunoreactive to Glu and ChAT, and terminals immunoreactive to SP and GAD, were seen in all nuclei, although in varying density. This study confirms the presence, in the vestibular nuclei of the common marmoset, of Glu and SP in terminals probably arising from first-order neurons of the vestibular ganglion, and of GABA in terminals presumably arising from Purkinje cells of the cerebellum. Second-order neurons of the vestibular nuclei seem to use Glu and ACh as neurotransmitters, judging by their marked presence in the cell bodies of these nuclei, as reported in other species.
Abstract:
The scheme is based on Ami Harten's ideas (Harten, 1994), with the main tools coming from wavelet theory, in the framework of multiresolution analysis for cell averages. But instead of evolving cell averages on the finest uniform level, we propose to evolve just the cell averages on the grid determined by the significant wavelet coefficients. Typically, there are few cells in each time step: big cells in smooth regions and smaller ones close to irregularities of the solution. For the numerical flux we use a simple uniform central finite-difference scheme, adapted to the size of each cell. If any of the required neighboring cell averages is not present, it is interpolated from coarser scales; but we switch to an ENO scheme in the finest part of the grids. To show the feasibility and efficiency of the method, it is applied to a system arising in polymer flooding of an oil reservoir. In terms of CPU time and memory requirements, it outperforms Harten's multiresolution algorithm.

The proposed method applies to systems of conservation laws in 1D,

    ∂_t u(x, t) + ∂_x f(u(x, t)) = 0,   u(x, t) ∈ R^m.   (1)

In the spirit of finite volume methods, we consider the explicit scheme

    v_μ^{n+1} = v_μ^n − (Δt / h_μ) (f̄_μ − f̄_{μ−}) = [D v^n]_μ,   (2)

where μ is a point of an irregular grid Γ, μ− is the left neighbor of μ in Γ, v_μ^n ≈ (1/(μ − μ−)) ∫_{μ−}^{μ} u(x, t_n) dx are approximated cell averages of the solution, f̄_μ = f̄_μ(v^n) are the numerical fluxes, and D is the numerical evolution operator of the scheme.

Depending on the definition of f̄_μ, several schemes of this type have been proposed and successfully applied (LeVeque, 1990); Godunov, Lax-Wendroff and ENO are some of the popular names. The Godunov scheme resolves shocks well, but its (first-order) accuracy is poor in smooth regions. Lax-Wendroff is of second order, but produces dangerous oscillations close to shocks. ENO schemes are good alternatives, with high order and without serious oscillations, but the price is a high computational cost.

Ami Harten proposed in (Harten, 1994) a simple strategy to save expensive ENO flux calculations. The basic tools come from multiresolution analysis for cell averages on uniform grids, and the principle is that wavelet coefficients can be used for the characterization of local smoothness. Typically, only a few wavelet coefficients are significant. At the finest level, they indicate discontinuity points, where ENO numerical fluxes are computed exactly; elsewhere, cheaper fluxes can be safely used, or just interpolated from coarser scales. Different applications of this principle have been explored by several authors, see for example (G-Muller and Muller, 1998).

Our scheme also uses Ami Harten's ideas. But instead of evolving the cell averages on the finest uniform level, we propose to evolve the cell averages on sparse grids associated with the significant wavelet coefficients. This means that the total number of cells is small, with big cells in smooth regions and smaller ones close to irregularities. This task requires improved new tools, which are described next.
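The grid-selection principle can be sketched in a few lines. This is a hedged toy version of cell-average multiresolution, with the simplest possible prediction operator (copying the coarse average to both children; Harten's framework uses higher-order interpolation): the detail coefficients vanish where the data are locally constant and are large only at the irregularity, which is exactly what drives the sparse grid:

```python
# Toy cell-average multiresolution (illustrative assumption, not the
# paper's code): coarsen by pairwise averaging, and record one detail
# (wavelet) coefficient per coarse cell = the prediction error of the
# zeroth-order coarse-to-fine reconstruction.
def encode_level(fine):
    coarse = [(fine[2*i] + fine[2*i + 1]) / 2.0 for i in range(len(fine) // 2)]
    detail = [fine[2*i] - coarse[i] for i in range(len(coarse))]
    return coarse, detail

n = 64
step = [0.0 if i < 31 else 1.0 for i in range(n)]   # jump falls inside pair 15
coarse, detail = encode_level(step)
significant = [i for i, d in enumerate(detail) if abs(d) > 1e-12]
```

Only one of the 32 coarse cells is flagged for refinement; the adaptive scheme evolves the few cells selected this way and interpolates everything else from coarser scales.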
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)