987 results for Planar Point Set
Abstract:
In this study we investigated the hypothesis that the simple set of rules used to explain the modulation of muscle activities during single-joint movements could also be applied to reversal movements of the shoulder and elbow joints. The muscle torques of both joints were characterized by a triphasic impulse. The first impulse of each joint accelerated the limb toward the target and was generated by an initial burst of the muscles activated first (primary movers). The second impulse decelerated the limb at the target, reversed movement direction and accelerated the limb back to the initial position; it was generated by an initial burst of the muscles activated second (secondary movers). A third impulse in each joint decelerated the limb at the initial position, due to a second burst of the primary movers. The first burst of the primary movers decreased abruptly, and the latency between the activation of the primary and secondary movers varied in proportion to target distance for the elbow, but not for the shoulder muscles. All impulses and bursts increased with target distance and were well coupled. Therefore, as predicted, the bursts of muscle activity were modulated to generate the appropriate level of muscle torque. (C) 2005 Elsevier Ltd. All rights reserved.
Abstract:
The planar, circular, restricted three-body problem predicts the existence of periodic orbits around the Lagrangian equilibrium point L1. Considering the Earth-Moon-probe system, some of these orbits pass very close to the surfaces of the Earth and the Moon. These characteristics make it possible for these orbits, in spite of their instability, to be used in transfer maneuvers between Earth and lunar parking orbits. The main goal of this paper is to explore this scenario, adopting a more complex and realistic dynamical system: the four-body problem Sun-Earth-Moon-probe. We defined and investigated a set of paths, derived from the orbits around L1, which are capable of achieving transfers between low-altitude Earth orbits (LEO) and lunar orbits, including high-inclination lunar orbits, at a low cost and with flight times between 13 and 15 days.
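The position of L1, the starting point of the orbit family discussed above, can be recovered numerically from the classical rotating-frame equilibrium condition of the restricted three-body problem. A minimal sketch, not the authors' code, in normalized units with the usual Earth-Moon mass parameter:

```python
# Collinear equilibrium between the primaries in the planar circular
# restricted three-body problem (rotating frame, normalized units).
# The primaries sit at x = -mu (Earth) and x = 1 - mu (Moon).

def accel_x(x, mu):
    """Net x-acceleration of a probe at (x, 0) in the rotating frame."""
    r1 = abs(x + mu)        # distance to the larger primary
    r2 = abs(x - 1 + mu)    # distance to the smaller primary
    return x - (1 - mu) * (x + mu) / r1**3 - mu * (x - 1 + mu) / r2**3

def find_L1(mu, lo=0.5, hi=0.98, tol=1e-12):
    """Bisection for the root of accel_x between the primaries (L1)."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if accel_x(lo, mu) * accel_x(mid, mu) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

mu_earth_moon = 0.012150585  # Moon/(Earth+Moon) mass ratio
x_L1 = find_L1(mu_earth_moon)
print(x_L1)  # roughly 0.84 Earth-Moon distances from the barycenter
```

The same root-finder applied on the far side of the Moon gives L2; the paper's transfer paths start from periodic orbits around the point computed here.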
Abstract:
The CRS seismic stacking method simulates ZO (zero-offset) sections from multi-coverage data, independently of the macro velocity model. For 2-D media, the stacking traveltime function depends on three parameters, namely: the emergence angle of the normal reflection ray (measured from the surface normal) and the wavefront curvatures of the two hypothetical waves called NIP and Normal. CRS stacking consists of summing the amplitudes of the seismic traces in multi-coverage data along the surface defined by the CRS stacking traveltime function that best fits the data. The CRS stacking result is assigned to the points of a predefined grid in the ZO section, yielding a simulated ZO seismic section. This means that for each point of the ZO section, the optimal parameter triplet producing the maximum coherence among the seismic reflection events must be estimated. This thesis presents formulas for the 2-D CRS method and for the NMO velocity that take the topography of the measurement surface into account. The algorithm is based on optimizing the parameters of the CRS formula in a three-step process: 1) search for two parameters, the emergence angle and the NIP-wave curvature, using a global optimization; 2) search for one parameter, the N-wave curvature, using a global optimization; and 3) search for all three parameters using a local optimization, to refine the parameters estimated in the previous steps. The Simulated Annealing (SA) algorithm is used in the first and second steps, and the Variable Metric (VM) algorithm in the third. For the case of a measurement surface with smooth topographic variations, the curvature of this surface was included in the 2-D CRS stacking algorithm, with application to synthetic data.
The result was a simulated ZO section of high quality when compared with the ZO section obtained by direct modeling, with a high signal-to-noise ratio, along with the estimate of the traveltime-function parameter triplet. A sensitivity analysis of the new CRS traveltime function with respect to the curvature of the measurement surface was carried out. The results showed that the CRS traveltime function is most sensitive at midpoints far from the central point and at large offsets. The NMO velocity expressions presented were applied to estimate the velocities and depths of the reflectors for a 2-D model with smooth topography. A Dix-type inversion algorithm was used to invert these reflector velocities and depths. The NMO velocity for a curved measurement surface allows these reflector velocities and depths to be estimated much better than NMO velocities referred to planar surfaces. An approach to CRS stacking in the 3-D case is also presented; here the traveltime function depends on eight parameters. Five strategies for searching for these parameters are discussed. The combination of two of these strategies (the three-traveltime-approximations strategy and the arbitrary-configurations-and-curvatures strategy) was successfully applied to 3-D CRS stacking of synthetic and real data.
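The three-step parameter search (global SA over two parameters, global SA over one, then local refinement of all three) can be sketched on a toy objective. This is a schematic stand-in, not the thesis code: the Gaussian "coherence" function and the coordinate-descent refinement (replacing the Variable Metric step) are illustrative assumptions.

```python
import math, random

random.seed(1)

# Toy stand-in for the CRS coherence (semblance) measure: peaked at a "true"
# parameter triplet (emergence angle, NIP-wave curvature, N-wave curvature).
# Purely illustrative; the real objective is computed from seismic traces.
TRUE = (0.3, 1.2, -0.5)

def coherence(a, k_nip, k_n):
    return math.exp(-((a - TRUE[0])**2 + (k_nip - TRUE[1])**2 + (k_n - TRUE[2])**2))

def anneal(f, x0, steps=3000, t0=1.0):
    """Generic simulated annealing, maximizing f over a parameter vector."""
    cur, fc = list(x0), f(*x0)
    best, fb = list(x0), fc
    for i in range(steps):
        t = t0 * (1 - i / steps) + 1e-3          # cooling schedule
        cand = [v + random.gauss(0, t) for v in cur]
        fcand = f(*cand)
        if fcand > fc or random.random() < math.exp((fcand - fc) / t):
            cur, fc = cand, fcand
            if fc > fb:
                best, fb = list(cand), fcand
    return best

# Step 1: global search for the emergence angle and the NIP-wave curvature.
a, k_nip = anneal(lambda a, k: coherence(a, k, 0.0), [0.0, 0.0])
# Step 2: global search for the N-wave curvature, step-1 values held fixed.
(k_n,) = anneal(lambda k: coherence(a, k_nip, k), [0.0])
# Step 3: local refinement of all three parameters (simple coordinate
# descent here, standing in for the Variable Metric algorithm).
step = 0.1
for _ in range(200):
    for idx in range(3):
        for s in (step, -step):
            p = [a, k_nip, k_n]
            q = list(p)
            q[idx] += s
            if coherence(*q) > coherence(*p):
                a, k_nip, k_n = q
    step *= 0.95

print(round(a, 2), round(k_nip, 2), round(k_n, 2))
```

The point of the staged design is the same as in the thesis: the cheap global searches of steps 1 and 2 only need to land in the basin of the optimum; the local step 3 then supplies the accuracy.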
Dry and wet seasons set the phytochemical profile of the Copaifera langsdorffii Desf. essential oils
Abstract:
Different mathematical methods have been applied to obtain the analytic result for the massless triangle Feynman diagram, yielding a sum of four linearly independent (LI) hypergeometric functions of two variables, F4. This result is not physically acceptable when it is embedded in higher loops, because all four hypergeometric functions in the triangle result have the same region of convergence, and further integration means going outside those regions of convergence. We could go outside those regions by using the well-known analytic continuation formulas obeyed by F4, but there are at least two ways of doing this. Which is the correct one? Whichever continuation one uses, it reduces the number of F4 functions from four to three. This reduction can be understood by taking into account the fundamental physical constraint imposed by the conservation of the momenta flowing along the three legs of the diagram: the number of overall LI functions that enter the most general solution must reduce accordingly. It remains to determine which set of three LI solutions is to be taken. To determine the exact structure and content of the analytic solution for the three-point function that can be embedded in higher loops, we use the analogy that exists between Feynman diagrams and electric circuit networks, in which the electric current flowing in the network plays the role of the momentum flowing in the lines of a Feynman diagram. This analogy is employed to define exactly which three of the four hypergeometric functions are relevant to the analytic solution for the Feynman diagram. The analogy is built on the equivalence between electric resistance networks of Y and Δ type in which a conserved current flows; the equivalence is established via the theorem of minimum energy dissipation in circuits having these structures.
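The Y-Δ equivalence invoked above is elementary to verify numerically. A minimal sketch (illustrative resistor values, not tied to the paper's derivation) converts a Y of resistors into the equivalent Δ and checks that the resistance seen between each pair of terminals, with the third left open, agrees:

```python
def y_to_delta(ra, rb, rc):
    """Convert Y resistors (terminals a, b, c to the center node) into the
    equivalent Delta resistors (R_ab, R_bc, R_ca)."""
    s = ra * rb + rb * rc + rc * ra
    return s / rc, s / ra, s / rb  # R_ab, R_bc, R_ca

def parallel(r1, r2):
    """Resistance of two resistors in parallel."""
    return r1 * r2 / (r1 + r2)

ra, rb, rc = 10.0, 22.0, 47.0
rab, rbc, rca = y_to_delta(ra, rb, rc)

# Terminal-pair resistances must match (third terminal open):
# in the Y, a-b sees ra + rb; in the Delta, R_ab parallel (R_bc + R_ca).
for y_pair, d_pair in [(ra + rb, parallel(rab, rbc + rca)),
                       (rb + rc, parallel(rbc, rca + rab)),
                       (rc + ra, parallel(rca, rab + rbc))]:
    assert abs(y_pair - d_pair) < 1e-9
print("Y and Delta networks agree at all terminal pairs")
```

The same indistinguishability from the terminals is what lets the paper trade the triangle (Δ) momentum routing for the star (Y) one when selecting the physical set of three solutions.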
Abstract:
This paper proposes a technique for solving the multiobjective environmental/economic dispatch problem using the weighted-sum and ε-constraint strategies, which transform the problem into a set of single-objective problems. In the first strategy, the objective function is a weighted sum of the environmental and economic objective functions. The second strategy treats one of the objective functions (in this case, the environmental function) as a problem constraint, bounded above by a constant. A specific predictor-corrector primal-dual interior point method using the modified log barrier is proposed for solving the set of single-objective problems generated by these strategies. The purpose of the modified barrier approach is to solve the problem with a relaxation of its original feasible region, enabling the method to be initialized with infeasible points. Tests of the proposed solution technique indicate (i) the efficiency of the proposed method with respect to initialization with infeasible points, and (ii) its ability to find a set of efficient solutions for the multiobjective environmental/economic dispatch problem.
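The two scalarization strategies can be sketched on a toy biobjective problem. This is a generic illustration with a made-up one-variable problem, not the paper's dispatch model or its interior-point solver; a grid search stands in for the optimizer, with f1 playing the role of cost and f2 of emission.

```python
# Weighted-sum and epsilon-constraint scalarizations of a toy biobjective
# problem: minimize f1(x) = x^2 (cost) and f2(x) = (x - 2)^2 (emission).

def f1(x): return x * x
def f2(x): return (x - 2.0) ** 2

GRID = [i / 10000.0 for i in range(0, 20001)]  # x in [0, 2]

def weighted_sum(w1, w2):
    """Minimize w1*f1 + w2*f2; each weight choice yields one efficient point."""
    return min(GRID, key=lambda x: w1 * f1(x) + w2 * f2(x))

def eps_constraint(eps):
    """Minimize f1 subject to f2(x) <= eps (f2 moved into the constraints)."""
    feasible = [x for x in GRID if f2(x) <= eps]
    return min(feasible, key=f1)

# Sweeping the weight (or the bound) traces out the efficient frontier.
for w1 in (0.25, 0.5, 0.75):
    x = weighted_sum(w1, 1.0 - w1)
    print(f"w1={w1}: x*={x:.3f}  f1={f1(x):.3f}  f2={f2(x):.3f}")
x = eps_constraint(1.0)  # emission capped at 1.0
print(f"eps=1.0: x*={x:.3f}")
```

Each call produces one single-objective problem, which is exactly the set of problems the paper hands to its modified-log-barrier interior point method.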
Abstract:
procera (pro) is a tall tomato (Solanum lycopersicum) mutant carrying a point mutation in the GRAS region of the gene encoding SlDELLA, a repressor in the gibberellin (GA) signaling pathway. Consistent with the SlDELLA loss of function, pro plants display a GA-constitutive response phenotype, mimicking wild-type plants treated with GA3. The ovaries from both nonemasculated and emasculated pro flowers had very strong parthenocarpic capacity, associated with enhanced growth of preanthesis ovaries due to more and larger cells. pro parthenocarpy is facultative, because seeded fruits were obtained by manual pollination. Most pro pistils had exserted stigmas, thus preventing self-pollination, similar to wild-type pistils treated with GA3 or auxins. However, Style2.1, a gene responsible for long styles in noncultivated tomato, may not control the enhanced style elongation of pro pistils, because its expression was not higher in pro styles and did not increase upon GA3 application. Interestingly, a high percentage of pro flowers had meristic alterations, with one additional petal, sepal, stamen, and carpel in each of the four whorls, respectively, thus unveiling a role of SlDELLA in flower organ development. Microarray analysis showed significant changes in the transcriptome of preanthesis pro ovaries compared with the wild type, indicating that the molecular mechanism underlying the parthenocarpic capacity of pro is complex and mainly associated with changes in the expression of genes involved in GA and auxin pathways. Interestingly, GA activity was found to modulate the expression of cell division and expansion genes and of an auxin signaling gene (tomato AUXIN RESPONSE FACTOR7) during fruit set.
Abstract:
Let φ: ℝ² → ℝ² be an orientation-preserving C¹ involution such that φ(0) = 0. Let Spc(φ) = {eigenvalues of Dφ(p) | p ∈ ℝ²}. We prove that if Spc(φ) ⊂ ℝ or Spc(φ) ∩ [1, 1 + ε) = ∅ for some ε > 0, then φ is globally C¹ conjugate to the linear involution Dφ(0) via the conjugacy h = (I + Dφ(0)φ)/2, where I: ℝ² → ℝ² is the identity map. Similarly, we prove that if φ is an orientation-reversing C¹ involution such that φ(0) = 0 and Trace(Dφ(0)Dφ(p)) > −1 for all p ∈ ℝ², then φ is globally C¹ conjugate to the linear involution Dφ(0) via the conjugacy h. Finally, we show that h may fail to be a global linearization of φ if the above conditions are not fulfilled.
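The conjugacy h can be checked numerically on a concrete example. The sketch below uses the orientation-reversing involution φ(x, y) = (−x, y + x³), chosen here purely for illustration (it is not from the paper): it satisfies φ∘φ = id and Trace(Dφ(0)Dφ(p)) = 2 > −1 everywhere, so h = (I + Dφ(0)φ)/2 should satisfy h∘φ = Dφ(0)∘h.

```python
import random

random.seed(0)

def phi(p):
    """An orientation-reversing C^1 involution fixing the origin."""
    x, y = p
    return (-x, y + x**3)

def dphi0(p):
    """The linear involution Dphi(0) = diag(-1, 1)."""
    x, y = p
    return (-x, y)

def h(p):
    """Candidate conjugacy h = (I + Dphi(0) phi)/2."""
    x, y = p
    qx, qy = dphi0(phi(p))
    return ((x + qx) / 2, (y + qy) / 2)

for _ in range(1000):
    p = (random.uniform(-5, 5), random.uniform(-5, 5))
    # phi is an involution ...
    assert all(abs(u - v) < 1e-9 for u, v in zip(phi(phi(p)), p))
    # ... and h conjugates phi to its linear part: h(phi(p)) = Dphi(0)(h(p)).
    lhs, rhs = h(phi(p)), dphi0(h(p))
    assert all(abs(u - v) < 1e-9 for u, v in zip(lhs, rhs))
print("h conjugates phi to Dphi(0) at all sampled points")
```

Here h works out to (x, y) ↦ (x, y + x³/2), so the check can also be done by hand; the sampled test is just the numerical analogue.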
Abstract:
Masonry spandrels, together with shear walls, are structural components of a masonry building subjected to lateral loads. Shear walls are the main components of this structural system, even if masonry spandrels are the elements that ensure the connection of shear wall panels and the distribution of stresses through the masonry piers. The use of prefabricated truss-type bars in the transversal and longitudinal directions is usually considered a challenge, even if the simplicity of the applications suggested here alleviates some of the possible difficulties. This paper focuses on the experimental behavior of masonry spandrels reinforced with prefabricated trusses, considering different possibilities for the arrangement of reinforcement and blocks. Reinforced spandrels with three and two hollow-cell concrete blocks and with different reinforcement ratios were built and tested using four- and three-point loading test configurations. Horizontal bed joint reinforcement increased the deformation capacity as well as the ultimate load, leading to ductile responses. Vertical reinforcement increased the shear strength of the masonry spandrels, and its distribution plays a central role in the shear behavior. (C) 2011 Elsevier Ltd. All rights reserved.
Abstract:
In this paper, we have used Geographical Information Systems (GIS) to solve the planar Huff problem considering different demand distributions and forbidden regions. Most papers connected with competitive location problems consider that the demand is aggregated at a finite set of points. In a few other cases, the models suppose that the demand is distributed over the feasible region according to a functional form, mainly a uniform distribution. In this case, in addition to the discrete and uniform demand distributions, we have considered that the demand is represented by a population surface model, that is, a raster map where each pixel has an associated value corresponding to the population living in the area that it covers...
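The raster variant of the Huff model is easy to prototype. The sketch below is a generic illustration, not the paper's GIS workflow: the grid, the facility locations, the attractiveness values, and the distance-decay exponent are all made-up toy inputs. For each pixel it computes the probability of patronizing each facility and accumulates the captured demand:

```python
import math

# Toy 4x4 population raster (people per pixel) and two candidate facilities.
pop = [[12,  8,  5,  0],
       [20, 15,  7,  3],
       [30, 25, 10,  4],
       [18, 22,  9,  2]]
facilities = [((0.5, 0.5), 3.0),   # (pixel coordinates, attractiveness)
              ((3.0, 2.0), 1.5)]
LAMBDA = 2.0  # distance-decay exponent

def huff_shares(i, j):
    """Huff probabilities for pixel (i, j): attractiveness / distance^lambda,
    normalized over all facilities."""
    weights = []
    for (fx, fy), attr in facilities:
        d = math.hypot(i - fx, j - fy) + 1e-6  # avoid division by zero
        weights.append(attr / d**LAMBDA)
    total = sum(weights)
    return [w / total for w in weights]

captured = [0.0] * len(facilities)
for i in range(4):
    for j in range(4):
        for k, share in enumerate(huff_shares(i, j)):
            captured[k] += share * pop[i][j]

print([round(c, 1) for c in captured])  # demand captured by each facility
```

In a real GIS setting the raster would come from a population surface layer, and pixels inside forbidden regions would simply be excluded from the candidate locations, not from the demand sum.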
Abstract:
In the post-genomic era, with the massive production of biological data, understanding the factors affecting protein stability is one of the most important and challenging tasks for highlighting the role of mutations in human disease. The problem lies at the basis of what is referred to as molecular medicine, with the underlying idea that pathologies can be described at a molecular level. To this purpose, scientific efforts focus on characterising mutations that hamper protein function and thereby affect the biological processes underlying cell physiology. New techniques have been developed to detect single nucleotide polymorphisms (SNPs) at large across all the human chromosomes, and with this the information in dedicated databases is increasing exponentially. Mutations found at the DNA level, when they occur in transcribed regions, may lead to mutated proteins, and this can be a serious medical problem, largely affecting the phenotype. Bioinformatics tools are urgently needed to cope with the flood of genomic data stored in databases and to analyse the role of SNPs at the protein level. Several experimental and theoretical observations suggest that protein stability in the solvent-protein space is responsible for correct protein functioning. Mutations found to be disease related in DNA analyses are therefore often assumed to perturb protein stability as well. However, so far no extensive analysis at the proteome level has investigated whether this is the case. Computational methods have also been developed to infer whether a mutation is disease related and, independently, whether it affects protein stability. Whether the perturbation of protein stability is related to what is routinely referred to as a disease therefore remains a big question mark.
In this work we explore, for the first time, the relation between mutations at the protein level and their relevance to disease, with a large-scale computational study of data from different databases. To this aim, in the first part of the thesis we derived, for each mutation type (141 out of 150 possible SNPs), two probabilistic indices: the perturbing index (Pp), which indicates the probability that a given mutation affects protein stability, considering all the available "in vitro" thermodynamic data, and the disease index (Pd), which indicates the probability of a mutation being disease related, given all the mutations that have been clinically associated so far. We find, with robust statistics, that the two indices correlate, with the exception of mutations related to somatic cancer. Each of the 150 mutation types can thus be coded by two values that allow a direct comparison with database information. Furthermore, we implement a computational method that, starting from the protein structure, predicts the effect of a mutation on protein stability, and we find that it outperforms a set of other predictors performing the same task. The predictor is based on support vector machines and takes protein tertiary structures as input. We show that the predicted data correlate well with the data from the databases. All our efforts therefore add to the SNP annotation process and, more importantly, establish the relationship between protein stability perturbation and the human variome, leading towards the diseasome.
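The two indices can be illustrated as simple empirical frequencies. The sketch below is schematic, with synthetic toy annotations, not the thesis data: Pp is estimated as the fraction of thermodynamic records labeled destabilizing for a mutation type, Pd as the fraction of clinical records labeled disease related, and the two are then correlated:

```python
# Toy annotation tables: mutation type -> list of boolean labels.
# (Synthetic illustration; the thesis derives Pp/Pd from thermodynamic and
# clinical databases for 141 of the 150 possible amino acid substitutions.)
thermo = {  # True = mutation measured as destabilizing in vitro
    "A->V": [True, True, False, True],
    "G->R": [True, True, True, True],
    "L->I": [False, False, True, False],
    "D->N": [True, False, True, True],
}
clinical = {  # True = mutation clinically associated with disease
    "A->V": [True, False, True],
    "G->R": [True, True, True, True, True],
    "L->I": [False, False, False, True],
    "D->N": [True, True, False],
}

def index(labels):
    """Empirical probability: fraction of positive annotations."""
    return sum(labels) / len(labels)

types = sorted(thermo)
pp = [index(thermo[t]) for t in types]    # perturbing index per type
pd = [index(clinical[t]) for t in types]  # disease index per type

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

print(f"Pearson r(Pp, Pd) = {pearson(pp, pd):.2f}")
```

With real annotations, the interesting structure is in the exceptions to the correlation (the somatic-cancer-related mutations noted above), which this toy example cannot reproduce.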
Abstract:
The main part of this thesis describes a method of calculating the massless two-loop two-point function which allows expanding the integral up to an arbitrary order in the dimensional regularization parameter epsilon by rewriting it as a double Mellin-Barnes integral. Closing the contour and collecting the residues then transforms this integral into a form that enables us to utilize S. Weinzierl's computer library nestedsums. We were able to show that multiple zeta values and rational numbers are sufficient for expanding the massless two-loop two-point function to all orders in epsilon. We then use the Hopf algebra of Feynman diagrams and its antipode to investigate the appearance of Riemann's zeta function in counterterms of Feynman diagrams in massless Yukawa theory and massless QED. The class of Feynman diagrams we consider consists of graphs built from primitive one-loop diagrams and the non-planar vertex correction, where the vertex corrections only depend on one external momentum. We showed the absence of powers of pi in the counterterms of the non-planar vertex correction and of diagrams built by shuffling it with the one-loop vertex correction. We also found the invariance of some coefficients of zeta functions under a change of momentum flow through these vertex corrections.
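The multiple zeta values mentioned above are nested sums, and the simplest nontrivial identity among them, Euler's ζ(2,1) = ζ(3), can be checked numerically. This is a standard fact used here only to illustrate the kind of nested sums the nestedsums library manipulates:

```python
# Numerically check Euler's identity zeta(2,1) = zeta(3), where
# zeta(2,1) = sum over m > n >= 1 of 1/(m^2 * n)  (a depth-2 nested sum).
N = 20000

zeta3 = sum(1.0 / m**3 for m in range(1, N + 1))

zeta21 = 0.0
harmonic = 0.0  # running sum of 1/n over n < m
for m in range(2, N + 1):
    harmonic += 1.0 / (m - 1)
    zeta21 += harmonic / m**2

print(zeta21, zeta3)  # both approach zeta(3) = 1.2020569...
```

The truncated double sum converges like (ln N)/N, so the two partial sums agree to a few parts in ten thousand at this cutoff; nestedsums performs such manipulations symbolically and exactly.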
Abstract:
In this dissertation, a new method for calculating general massive two-loop three-point tensor integrals with planar and rotated reduced planar topology was developed, based on the parallel/orthogonal space method. A tensor reduction for integrals that may carry a general tensor structure in Minkowski space was worked out and implemented. The development and implementation of an algorithm for the semi-analytical calculation of the most difficult integrals remaining after tensor reduction was completed. (For the other basis integrals, well-known methods can be used.) The implementation is complete for the UV-finite parts of the master integrals that still possess the aforementioned topologies after tensor reduction. The numerical integrations have proven to be stable. For the remaining parts of the project, well-known methods can be used; in large part, only links to existing programs still need to be written. For the few remaining special topologies still to be implemented, (well-known) methods are to be implemented. The computer programs created within this project will also flow into the xloops project for more general processes; they were therefore developed and implemented for general processes as far as possible. The algorithm mentioned above was developed in particular for the evaluation of the fermionic NNLO corrections to the leptonic weak mixing angle and to similar processes. Within this dissertation, a large part of the work required for the fermionic NNLO corrections to the effective coupling constants of Z decay (and thus for the weak mixing angle) was carried out.