960 resultados para Euler, Teorema de
Resumo:
Based on the characteristics of a portable mobile robot, this paper uses the quaternion method to compute the attitude of the robot's navigation system, avoiding the singularity problem that arises when the robot's motion angles are large. An improved fourth-order Runge-Kutta algorithm is applied to solve the quaternion differential equation; simulation experiments show that the accuracy fully meets the requirements. Suitable transformation formulas are given so that, within the robot's range of motion, the conversion between quaternions and Euler angles is one-to-one.
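As an illustration of the approach described above (a minimal sketch, not code from the paper), the snippet below integrates the quaternion kinematic equation dq/dt = 0.5 * q ⊗ (0, ω) with a classical fourth-order Runge-Kutta step and converts the result to Euler angles; the paper uses an improved Runge-Kutta variant whose details are not given in the abstract, and the ZYX (roll-pitch-yaw) convention, the body rate `omega`, and the step size `dt` are assumptions of this sketch.

```python
import numpy as np

def quat_mul(p, q):
    # Hamilton product of quaternions stored as (w, x, y, z).
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return np.array([
        pw*qw - px*qx - py*qy - pz*qz,
        pw*qx + px*qw + py*qz - pz*qy,
        pw*qy - px*qz + py*qw + pz*qx,
        pw*qz + px*qy - py*qx + pz*qw,
    ])

def quat_dot(q, omega):
    # Quaternion kinematics: dq/dt = 0.5 * q ⊗ (0, omega), omega in the body frame.
    return 0.5 * quat_mul(q, np.concatenate(([0.0], omega)))

def rk4_step(q, omega, dt):
    # Classical fourth-order Runge-Kutta step; omega assumed constant over dt.
    k1 = quat_dot(q, omega)
    k2 = quat_dot(q + 0.5*dt*k1, omega)
    k3 = quat_dot(q + 0.5*dt*k2, omega)
    k4 = quat_dot(q + dt*k3, omega)
    q_new = q + dt/6.0 * (k1 + 2*k2 + 2*k3 + k4)
    return q_new / np.linalg.norm(q_new)   # renormalize to unit length

def quat_to_euler_zyx(q):
    # Unit quaternion to (roll, pitch, yaw) in radians, ZYX convention.
    w, x, y, z = q
    roll  = np.arctan2(2*(w*x + y*z), 1 - 2*(x*x + y*y))
    pitch = np.arcsin(np.clip(2*(w*y - z*x), -1.0, 1.0))
    yaw   = np.arctan2(2*(w*z + x*y), 1 - 2*(y*y + z*z))
    return roll, pitch, yaw

q = np.array([1.0, 0.0, 0.0, 0.0])          # initial attitude
omega = np.array([0.0, 0.0, 0.5])           # example body rate, rad/s
for _ in range(100):
    q = rk4_step(q, omega, dt=0.01)
print(quat_to_euler_zyx(q))                 # ~ (0, 0, 0.5) rad after 1 s
```

The renormalization after each step keeps q on the unit sphere, and the clamp inside arcsin guards against round-off just outside [-1, 1].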
Resumo:
Using the principle of serial constraint / parallel actuation, a new type of cable-driven parallel robot is designed by adding a constraint mechanism. The introduction of the constraint mechanism, however, makes the dynamic analysis of the robot more complex. Building on a kinematic analysis of the robot, the dynamic equations are established with the Newton-Euler method. Simulation results demonstrate the effectiveness of the method.
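For reference, a standard statement (not taken from the paper) of the Newton-Euler equations that the method writes for each rigid link i, with mass m_i, inertia tensor I_i about the centre of mass, centre-of-mass acceleration a_{c,i}, and angular velocity ω_i, is

\[ F_i = m_i\, a_{c,i}, \qquad N_i = I_i\,\dot{\omega}_i + \omega_i \times \left( I_i\,\omega_i \right), \]

where F_i and N_i are the net force and net moment acting on link i. For the cable-driven robot, the cable tensions and the reactions of the constraint mechanism would enter F_i and N_i as applied loads; how the paper assembles and reduces these equations is not specified in the abstract.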
Resumo:
There has been growing concern about the use of fossil fuels and their adverse effects on the atmospheric greenhouse and the ecological environment. Reducing the rate at which CO2 is released into the atmosphere poses a major challenge to the land ecology of China. The most promising way of achieving CO2 reduction is to dispose of CO2 in deep saline aquifers, which have a large potential for geological CO2 sequestration in terms of both volume and duration. Through numerical simulation of multiphase flow in porous media, the transformation and motion of CO2 in saline aquifers has been computed under various temperature and hydrostatic pressure conditions, which plays an important role in assessing the reliability and safety of CO2 geological storage. The calculated results can provide meaningful scientific information for management purposes. The key problems in the numerical simulation of multiphase flow in porous media are to capture the mass interface accurately and to deal with geological heterogeneity. In this study, an updated CE/SE (space-time Conservation Element and Solution Element) method has been proposed, and the Hybrid Particle Level Set (HPLS) method has been extended to multiphase flow in porous media so that the evolution of the mass interface can be traced accurately. Benchmark problems have been used to evaluate and validate the proposed method. On this basis, the reliability of CO2 storage in the saline aquifers of the Daqingzi oil field in the Sunlong basin has been discussed. The simulation code developed in this study employs an equation of state for CO2 covering the range from the triple-point temperature and pressure to the supercritical region. Geological heterogeneity has been represented with the well-known geostatistical package GSLIB, conditioned on the available hard data. 2D and 3D models have been set up to simulate CO2 multiphase flow in porous saline aquifers using the CE/SE and HPLS methods. The main contents and results are summarized as follows. (1) The 2D CE/SE method with first- and second-order accuracy has been extended to simulate multiphase flow in porous media, taking into account source and sink contributions in the momentum equation, and the 3D CE/SE method with first-order accuracy has been derived. The accuracy and efficiency of the proposed CE/SE method have been investigated using benchmark problems. (2) The Hybrid Particle Level Set method has been adapted and extended to capture the mass interface of multiphase flow in porous media, and a numerical method for computing the level set function has been formulated. (3) A closed system of equations for multiphase flow in porous media has been developed that applies to both Darcy and non-Darcy flow, removing the Reynolds-number limitation on the calculation; the Darcy number is found to have a decisive influence on both the pressure and the velocity fields. (4) A new Eulerian scheme for the numerical simulation of multiphase flow in porous media has been proposed, which is efficient and captures the mass interface accurately. The artificial compressibility method is used to couple the velocity and pressure fields. The Darcy number is found to have a decisive effect on numerical convergence and stability, and suitable values of the artificial compressibility coefficient and the time step have been obtained for different Darcy numbers.
(5) The time scale of the critical instability for critical CO2 in the saline aquifer has been determined and found to be comparable with that of a saline aquifer in which the CO2 is completely dissolved. (6) A conceptual model for CO2 multiphase flow in the saline aquifer has been configured on the basis of the temperature, pressure, porosity, and permeability of the field site. Numerical simulation of CO2 hydrodynamic trapping in saline aquifers has been performed with the proposed CE/SE method. The equation of state for CO2 has been employed to account for realistic reservoir conditions for CO2 geological sequestration, and geological heterogeneity has been fully treated using the geostatistical model. (7) The Rayleigh-Taylor instability, associated with the penetration of saline fluid into the CO2 fluid in the direction of gravity, has been observed in CO2 multiphase flow in the saline aquifer. The development of a mushroom-type spike is a strong indication of a Kelvin-Helmholtz instability driven by short-wavelength perturbations that develop along the interface and parallel to the bulk flow. Additional key findings are that geological heterogeneity can distort the flow convection, that the ascent of CO2 can induce persistent flow-cycling effects, and that the boundary conditions of the field site have a decisive effect on the transformation and motion of CO2 in saline aquifers. It is confirmed that the proposed method and numerical model can reliably simulate the process of hydrodynamic trapping, which is the controlling mechanism during the initial period of CO2 storage on a time scale of 100 years.
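As a toy illustration of the interface-capturing idea behind the level set component (a minimal sketch, not the CE/SE scheme or the Hybrid Particle Level Set implementation used in the study; the grid, velocity field, and time step below are assumptions chosen for the example), the interface is tracked as the zero contour of a signed-distance function φ transported with the flow, ∂φ/∂t + u·∇φ = 0:

```python
import numpy as np

# Toy 2D level set advection: the mass interface is the zero contour of phi.
nx = ny = 100
dx = 1.0 / nx
x, y = np.meshgrid(np.linspace(0, 1, nx), np.linspace(0, 1, ny), indexing="ij")

# Initial interface: a circle (signed distance, negative inside).
phi = np.sqrt((x - 0.3)**2 + (y - 0.5)**2) - 0.15

# Prescribed uniform velocity (placeholder for a computed Darcy velocity field).
u = np.full((nx, ny), 0.5)
v = np.zeros((nx, ny))

dt = 0.5 * dx / np.abs(u).max()            # CFL-limited time step

def upwind_gradient(phi, vel, axis):
    # First-order upwind one-sided difference, chosen by the sign of the velocity.
    # np.roll implies periodic boundaries, which is adequate for this toy case.
    fwd = (np.roll(phi, -1, axis) - phi) / dx
    bwd = (phi - np.roll(phi, 1, axis)) / dx
    return np.where(vel > 0, bwd, fwd)

for _ in range(100):
    phi_x = upwind_gradient(phi, u, axis=0)
    phi_y = upwind_gradient(phi, v, axis=1)
    phi = phi - dt * (u * phi_x + v * phi_y)   # phi_t + u.grad(phi) = 0

# After 100 steps the circle has moved downstream by u * 100 * dt = 0.5.
print("interface cells:", int(np.sum(np.abs(phi) < dx)))
```

In the hybrid particle level set approach, the same advected level set is additionally corrected with Lagrangian marker particles to limit the mass loss that a purely grid-based scheme suffers near under-resolved interface features.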
Resumo:
This paper addresses the problem of efficiently computing the motor torques required to drive a lower-pair kinematic chain (e.g., a typical manipulator arm in free motion, or a mechanical leg in the swing phase) given the desired trajectory; i.e., the Inverse Dynamics problem. It investigates the high degree of parallelism inherent in the computations, and presents two "mathematically exact" formulations especially suited to high-speed, highly parallel implementations using special-purpose hardware or VLSI devices. In principle, the formulations should permit the calculations to run at a speed bounded only by I/O. The first is a parallel version of the recent linear Newton-Euler recursive algorithm. Its time cost is also linear in the number of joints, but the real-time coefficients are reduced by almost two orders of magnitude. The second formulation presents a new parallel algorithm which shows that it is possible to improve upon the linear time dependency: the real time required to perform the calculations increases only as log2 of the number of joints. Either formulation is amenable to a systolic pipelined architecture in which complete sets of joint torques emerge at successive intervals of four floating-point operations. Hardware requirements necessary to support the algorithm are considered and found not to be excessive, and a VLSI implementation architecture is suggested. We indicate possible applications of incorporating dynamical considerations into trajectory planning; e.g., it may be possible to build an on-line trajectory optimizer.
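The serial recursion that both formulations parallelize can be made concrete with a minimal planar sketch (a generic textbook-style recursive Newton-Euler implementation for an n-link planar arm, not the paper's parallel or VLSI formulation; the link lengths, masses, and gravity vector in the example are assumptions):

```python
import numpy as np

def planar_rne(q, qd, qdd, L, m, c, I, g=np.array([0.0, -9.81])):
    """Joint torques for an n-link planar revolute arm (recursive Newton-Euler).

    q, qd, qdd : joint angles, velocities, accelerations (length n)
    L, m, c, I : link lengths, masses, centre-of-mass offsets, inertias about COM
    """
    n = len(q)
    perp = lambda r: np.array([-r[1], r[0]])          # 90-degree rotation
    cross = lambda a, b: a[0]*b[1] - a[1]*b[0]        # planar cross product

    # Forward pass: absolute angles, angular rates, joint and COM accelerations.
    th = np.cumsum(q); w = np.cumsum(qd); al = np.cumsum(qdd)
    p = np.zeros((n + 1, 2)); a = np.zeros((n + 1, 2))   # joint positions/accels
    pc = np.zeros((n, 2)); ac = np.zeros((n, 2))         # COM positions/accels
    for i in range(n):
        ui = np.array([np.cos(th[i]), np.sin(th[i])])    # unit vector along link i
        r, rc = L[i] * ui, c[i] * ui
        pc[i] = p[i] + rc
        ac[i] = a[i] + al[i] * perp(rc) - w[i]**2 * rc
        p[i + 1] = p[i] + r
        a[i + 1] = a[i] + al[i] * perp(r) - w[i]**2 * r

    # Backward pass: propagate forces and moments from the free tip to the base.
    tau = np.zeros(n)
    f_next = np.zeros(2); n_next = 0.0                   # no load at the tip
    for i in reversed(range(n)):
        f_i = f_next + m[i] * (ac[i] - g)                # Newton
        n_i = (I[i] * al[i] + n_next                     # Euler about the COM
               - cross(p[i] - pc[i], f_i)
               + cross(p[i + 1] - pc[i], f_next))
        tau[i] = n_i
        f_next, n_next = f_i, n_i
    return tau

# Example: a two-link arm held static; the torques returned balance gravity only.
print(planar_rne(q=[0.3, 0.2], qd=[0, 0], qdd=[0, 0],
                 L=[1.0, 0.8], m=[2.0, 1.0], c=[0.5, 0.4], I=[0.1, 0.05]))
```

The O(log2 n) formulation rests on the observation that these sweeps are linear recurrences in the propagated quantities, so they can be evaluated by recursive doubling (a parallel prefix computation) rather than strictly sequentially.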
Resumo:
The class of all Exponential-Polynomial-Trigonometric (EPT) functions is classical and equal to the Euler-d'Alembert class of solutions of linear differential equations with constant coefficients. The class of non-negative EPT functions defined on [0, ∞) was discussed in Hanzon and Holland (2010), of which EPT probability density functions are an important subclass. EPT functions can be represented as c e^{Ax} b, where A is a square matrix, b a column vector and c a row vector; the triple (A, b, c) is a minimal realization of the EPT function and is unique only up to a basis transformation. Here the class of 2-EPT probability density functions on R is defined and shown to be closed under a variety of operations. The class is also generalised to include mixtures with a point mass at zero. This class coincides with the class of probability density functions with rational characteristic functions. It is illustrated that the Variance Gamma density is a 2-EPT density under a parameter restriction. A discrete 2-EPT process is a process which has stochastically independent 2-EPT random variables as increments. It is shown that the distribution of the minimum and maximum of such a process is an EPT density mixed with a point mass at zero. The Laplace transforms of these distributions correspond to the discrete-time Wiener-Hopf factors of the discrete-time 2-EPT process. A distribution of daily log-returns, observed over the period 1931-2011 from a prominent US index, is approximated with a 2-EPT density function. Without the non-negativity condition, it is illustrated how this problem is transformed into a discrete-time rational approximation problem. The rational approximation software RARL2 is used to carry out this approximation. The non-negativity constraint is then imposed via a convex optimisation procedure after the unconstrained approximation. Sufficient and necessary conditions are derived to characterise infinitely divisible EPT and 2-EPT functions. Infinitely divisible 2-EPT density functions generate 2-EPT Lévy processes. An asset's log returns can be modelled as a 2-EPT Lévy process. Closed-form pricing formulae are then derived for European options with specific times to maturity. Formulae for discretely monitored Lookback options and 2-period Bermudan options are also provided. Certain Greeks, including Delta and Gamma, of these options are also computed analytically. MATLAB scripts are provided for calculations involving 2-EPT functions. Numerical option pricing examples illustrate the effectiveness of the 2-EPT approach to financial modelling.
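To make the (A, b, c) representation concrete (a minimal sketch using a standard textbook example, not code from the thesis and not the RARL2 or MATLAB material it mentions), an EPT density f(x) = c e^{Ax} b can be evaluated directly from a minimal realization; the Erlang(2, λ) density shown here is an illustrative choice:

```python
import numpy as np
from scipy.linalg import expm

# Minimal realization (A, b, c) of an EPT density f(x) = c @ expm(A x) @ b.
# Example: Erlang(2, lam) density lam^2 * x * exp(-lam*x) on [0, inf).
lam = 1.5
A = np.array([[-lam, 1.0],
              [0.0, -lam]])
b = np.array([0.0, 1.0])
c = np.array([lam**2, 0.0])

def ept(x):
    return c @ expm(A * x) @ b

xs = np.linspace(0.0, 10.0, 2001)
vals = np.array([ept(x) for x in xs])
print("f(1.0) =", ept(1.0))                 # lam**2 * exp(-lam)
print("integral ~=", np.trapz(vals, xs))    # ~ 1, consistent with a density
```

This (A, b, c) is only one convenient realization; any basis change (TAT^{-1}, Tb, cT^{-1}) represents the same function, in line with the non-uniqueness noted above.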
Resumo:
Lovelock terms are polynomial scalar densities in the Riemann curvature tensor that have the remarkable property that their Euler-Lagrange derivatives contain derivatives of the metric of an order not higher than 2 (while generic polynomial scalar densities lead to Euler-Lagrange derivatives with derivatives of the metric of order 4). A characteristic feature of Lovelock terms is that their first nonvanishing term in the expansion g_{λμ} = η_{λμ} + h_{λμ} of the metric around flat space is a total derivative. In this paper, we investigate generalized Lovelock terms defined as polynomial scalar densities in the Riemann curvature tensor and its covariant derivatives (of arbitrarily high but finite order) such that their first nonvanishing term in the expansion of the metric around flat space is a total derivative. This is done by reformulating the problem as a BRST cohomological one and by using cohomological tools. We determine all the generalized Lovelock terms. We find, in fact, that the class of nontrivial generalized Lovelock terms contains only the usual ones. Allowing covariant derivatives of the Riemann tensor does not lead to a new structure. Our work provides a novel algebraic understanding of the Lovelock terms in the context of BRST cohomology. © 2005 IOP Publishing Ltd.
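For orientation (a standard fact, not a result specific to this paper), the first two nontrivial Lovelock terms are the Einstein-Hilbert density and the Gauss-Bonnet density,

\[ \mathcal{L}_1 = \sqrt{-g}\,R, \qquad \mathcal{L}_2 = \sqrt{-g}\left( R^2 - 4R_{\mu\nu}R^{\mu\nu} + R_{\mu\nu\rho\sigma}R^{\mu\nu\rho\sigma} \right), \]

and the Gauss-Bonnet term illustrates the defining property discussed above: its first nonvanishing (quadratic) term in the expansion g_{λμ} = η_{λμ} + h_{λμ} is a total derivative, and its Euler-Lagrange equations contain at most second derivatives of the metric.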
Resumo:
The humoral immune system plays a critical role in the clearance of numerous pathogens. In the setting of HIV-1 infection, the virus infects, integrates its genome into the host's cells, replicates, and establishes a reservoir of virus-infected cells. The initial antibody response to HIV-1 infection is targeted to non-neutralizing epitopes on HIV-1 Env gp41, and when a neutralizing response does develop months after transmission, it is specific for the autologous founder virus and the virus escapes rapidly. After continuous waves of antibody mediated neutralization and viral escape, a small subset of infected individuals eventually develop broad and potent heterologous neutralizing antibodies years after infection. In this dissertation, I have studied the ontogeny of mucosal and systemic antibody responses to HIV-1 infection by means of three distinct aims: 1. Determine the origin of the initial antibody response to HIV-1 infection. 2. Characterize the role of restricted VH and VL gene segment usage in shaping the antibody response to HIV-1 infection. 3. Determine the role of persistence of B cell clonal lineages in shaping the mutation frequencies of HIV-1 reactive antibodies.
After the introduction (Chapter 1) and methods (Chapter 2), Chapter 3 of this dissertation describes a study of the antibody response of terminal ileum B cells to HIV-1 envelope (Env) in early and chronic HIV-1 infection and provides evidence for the role of environmental antigens in shaping the repertoire of B cells that respond to HIV-1 infection. Previous work by Liao et al. demonstrated that the initial plasma cell response in the blood to acute HIV-1 infection is to gp41 and is derived from a polyreactive memory B cell pool, and that many of these antibodies cross-reacted with commensal bacteria. Therefore, in Chapter 3, the relationship of intestinal B cell reactivity with commensal bacteria to the HIV-1 infection-induced antibody response was probed using single B cell sorting, reverse transcription and nested polymerase chain reaction (RT-PCR) methods, and recombinant antibody technology. The dominant B cell response in the terminal ileum was to HIV-1 envelope (Env) gp41, and 82% of gp41-reactive antibodies cross-reacted with commensal bacteria whole-cell lysates. Pyrosequencing of blood B cells revealed HIV-1 antibody clonal lineages shared between ileum and blood. Mutated IgG antibodies cross-reactive with both Env gp41 and commensal bacteria could also be isolated from the terminal ileum of HIV-1 uninfected individuals. Thus, the antibody response to HIV-1 can be shaped by intestinal B cells stimulated by commensal bacteria prior to HIV-1 infection to develop a pre-infection pool of memory B cells cross-reactive with HIV-1 gp41.
Chapter 4 details the study of restricted VH and VL gene segment usage for gp41 and gp120 antibody induction following acute HIV-1 infection, and of mutations in gp41 that lead to enhanced virus neutralization sensitivity. The B cell repertoire of antibodies induced in an HIV-1 infected African individual, CAP206, who developed broadly neutralizing antibodies (bnAbs) directed to the HIV-1 envelope gp41 membrane proximal external region (MPER), is characterized. Understanding the selection of virus mutants by neutralizing antibodies is critical to understanding the role of antibodies in control of HIV-1 replication and prevention of HIV-1 infection. Previously, an MPER neutralizing antibody, CAP206-CH12, with a binding footprint identical to that of the MPER broadly neutralizing antibody 4E10, and which like 4E10 utilized the VH1-69 and VK3-20 variable gene segments, was isolated from this individual (Morris et al., 2011). Using single B cell sorting, RT-PCR methods, and recombinant antibody technology, Chapter 4 describes the isolation of a VH1-69, Vk3-20 glycan-dependent clonal lineage from CAP206, targeted to gp120, that has the property of neutralizing a neutralization-sensitive CAP206 transmitted/founder (T/F) virus and heterologous viruses with mutations at amino acids 680 or 681 in the MPER 4E10/CH12 binding site. These data demonstrate sites within the MPER bnAb epitope (aa 680-681) at which mutations can be selected that lead to viruses with enhanced sensitivity to autologous and heterologous neutralizing antibodies.
In Chapter 5, I have completed a comparison of the evolution of B cell clonal lineages in two HIV-1 infected individuals who have a predominant VH1-69 response to HIV-1 infection: one who produces broadly neutralizing MPER-reactive mAbs and one who does not. Autologous neutralization in the plasma takes ~12 weeks to develop (Gray et al., 2007; Tomaras et al., 2008b). Only a small subset of HIV-1 infected individuals develops high plasma levels of broad and potent heterologous neutralization, and when it does occur, it typically takes 3-4 years to develop (Euler et al., 2010; Gray et al., 2007; 2011; Tomaras et al., 2011). The HIV-1 bnAbs that have been isolated to date have a number of unusual characteristics, including autoreactivity and high levels of somatic hypermutation, which are typically tightly regulated by immune control mechanisms (Haynes et al., 2005; 2012b; Kwong and Mascola, 2012; Scheid et al., 2009a). The VH mutation frequencies of bnAbs average ~15% but have been shown to be as high as 32% (reviewed in Mascola and Haynes, 2013; Kwong and Mascola, 2012). The high frequency of somatic hypermutation suggests that the B cell clonal lineages that eventually produce bnAbs undergo high levels of affinity maturation, implying prolonged germinal center (GC) reactions and high levels of T cell help. To study the duration of HIV-1-reactive B cell clonal persistence, HIV-1 reactive and non-HIV-1-reactive B cell clonal lineages were isolated from an HIV-1 infected individual who produces bnAbs, CAP206, and an HIV-1 infected individual who does not produce bnAbs, 004-0. Single B cell sorting, RT-PCR, and recombinant antibody technology were used to isolate and produce monoclonal antibodies from multiple time points from each individual. B cell sequences clonally related to mAbs isolated by single-cell PCR were identified within pyrosequences of longitudinal samples from these two individuals. Both individuals produced long-lived B cell clones, which persisted from 0-232 weeks in CAP206 and 0-238 weeks in 004-0. The average length of persistence of clones containing members isolated from two separate time points was 91.5 weeks in both individuals. Examples of the continued evolution of clonal lineages were observed in both the bnAb and the non-bnAb individual. These data indicate that the ability to generate persistent and evolving B cell clonal lineages occurs in both bnAb and non-bnAb individuals, suggesting that some alternative host or viral factor is critical for the generation of highly mutated broadly neutralizing antibodies.
Together, the studies described in Chapters 3-5 show that multiple factors influence the antibody response to HIV-1 infection. The initial antibody response to HIV-1 Env gp41 can be shaped by a B cell response to intestinal commensal bacteria prior to HIV-1 infection. VH and VL gene segment restriction can impact the B cell response to multiple HIV-1 antigens, and virus escape mutations in the MPER can confer enhanced neutralization sensitivity to autologous and heterologous antibodies. Finally, the ability to generate long-lived HIV-1 clonal lineages in and of itself does not confer on the host the ability to produce bnAbs.
Resumo:
p.173-180
Resumo:
Toward the end of the eighteenth century, scientific knowledge in Europe had developed extraordinarily. Names such as Lavoisier, Richter, Coulomb, and Celsius, among many others, came to the fore. Laws were formulated in chemistry and physics; alongside them, mathematics also flourished at the hands of Euler, Lagrange, d'Alembert, and Monge, to cite only a few. Meanwhile, the backwardness of Spanish mathematics was due, among other causes, to the poor state of the universities, which were still medieval in type and ecclesiastical in character. This was pointed out by Fray Benito Jerónimo Feijoo in the letter entitled Causas del atraso que se padece en España en orden a las ciencias naturales, and by the Marqués de la Ensenada, who expressed it to King Fernando VI in 1748. The universities' deficiencies had to do with rote teaching, outdated texts, and a primary interest in disciplines such as law, theology, and philosophy to the detriment of mathematics and the sciences.
Resumo:
The present work arises from the teaching practice being carried out at the Centro Educativo Femenino de Antioquia (CEFA) in the city of Medellín with tenth-grade students. Its primary aim is to bring geometry back into the classroom as a tool that facilitates the interpretation of mathematical and physical ideas, using the classroom-workshop (aula-taller) methodology as the foundation for achieving that goal. So far it has succeeded in awakening significant interest in the use of geometric language and a better interpretation of concepts such as the Pythagorean theorem and the number Pi, through the use of concrete materials that help students achieve a better appropriation of these concepts.
Resumo:
This research project has focused on generating didactic designs that address Calculus and Precalculus topics in the current curriculum, whose theoretical foundations are based on socio-epistemological research and which favor the intelligent use of technology in the mathematics classroom. These designs take up aspects that help reconstruct the meaning of mathematical topics such as Thales' theorem, the use of the subtangent to characterize a curve (maxima, minima, and inflection points), and the notion of accumulation to address the area under a curve.
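For readers unfamiliar with the notion used in the second design (a classical definition added here for context, not detail taken from the abstract): the subtangent of a differentiable curve y = f(x) at a point with f'(x) ≠ 0 is

\[ \sigma(x) = \frac{f(x)}{f'(x)}, \]

the signed length cut off on the x-axis between the foot of the ordinate and the point where the tangent meets the axis; the design uses the behaviour of this quantity along the curve to characterize maxima, minima, and inflection points.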
Resumo:
A geometric model is presented for the construction of a segment called the Escintor, which divides a triangle into two polygonal paths of equal perimeter. The existence of other notable lines in a triangle, called the Mescintriz and the Vescintriz, with properties similar to the other already known lines, is also proved; likewise it is shown that the Mescincentro and the Vescincentro, the points where the Mescintrices and the Vescintrices respectively intersect, are aligned with the centroid and the incenter on a line that bears a strong resemblance to the Euler line.
Resumo:
The role of meaningful learning through the use of new teaching strategies. This kind of learning involves a process in which what we learn is the product of new information interpreted in the light of what we already know. For meaningful learning to occur, students must be able to relate the learning material to the structure of knowledge they already possess. Thus, together with favorable motivation for understanding and the effort it requires, an essential condition for learning concepts is that they be related to the students' prior knowledge. The new knowledge we want the student to learn on this occasion will arise from an adequate development of deductive reasoning and command of prior knowledge. By deductive reasoning we mean the reasoning process in which, to obtain a logically necessary conclusion from certain premises, the steps are chained according to certain logical rules and are rigorously justified. The justifications are based on the axioms and definitions of the respective theory, on previously proved theorems, and on the premises or hypotheses of the problem or theorem. The teacher must help students develop and use the power of deductive reasoning by continually engaging them in thinking, analyzing, and deriving conjectures in class, and must also create and select appropriate tasks that may involve generalization and the organization of data to validate or refute a conjecture. A final-year secondary school group worked on the proof of a theorem on the convergence of series, with 46% completing it successfully versus 36% who did not. The students who managed to carry out the proof were not the most studious, but they had good reasoning ability. In contrast, those who usually prepare for assessments relying heavily on memory did not perform well.
Resumo:
The need is highlighted for the teacher to manage the construction of meaning in the classroom and to do so on the basis of the interpretations that can be inferred from students' verbal contributions during the process. It is shown that constructing the meaning of a definition that a teacher might dispatch very quickly (pointing out an error, repeating the definition, and asking the students to look at it carefully in order to reformulate the representation of the situation in which the defined object comes into play) is far from a trivial matter. In the second example presented, it can be seen how the teacher's management of the construction of meaning of a geometric object (in this case, the statement of the Point Localization Theorem, Teorema Localización de Puntos) is not exhausted at the moment the theorem is stated and proved, but is also required when the theorem is used in the course of solving a new problem.
Resumo:
In the mid-eighteenth century the prolific and brilliant Swiss mathematician Leonhard Euler analyzed and solved a card game of chance called Rencontre. As with other probability problems, the statement is easy to understand, the analysis is not elementary, and the result seems counterintuitive or, at the very least, surprising. To solve the problem, Euler used combinatorics and the summation of certain series. This article aims to reach the same conclusion using mathematics closer to that of a secondary school (bachillerato) student.
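As context (the classical result of the rencontre, or derangement, problem that the article re-derives; the formula is not quoted from the abstract): the probability that none of the n cards produces a coincidence is

\[ p_n = \sum_{k=0}^{n} \frac{(-1)^k}{k!} \;\longrightarrow\; \frac{1}{e} \approx 0.368 \quad (n \to \infty), \]

so the probability of at least one coincidence is roughly 0.632 almost independently of the number of cards, which is the counterintuitive outcome the abstract alludes to.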