947 results for Nonnegative sine polynomial


Relevance: 10.00%

Publisher:

Abstract:

The TE wave in a cylindrical waveguide filled with an inhomogeneous plasma, immersed in an external uniform longitudinal magnetic field, is investigated. An analytic solution is obtained in polynomial form by truncating the confluent hypergeometric function, and from it the eigenfrequency of the TE wave is derived.
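The truncation mechanism behind such polynomial solutions is that Kummer's confluent hypergeometric series M(a, b, z) terminates on its own when its first parameter is a non-positive integer. A minimal sketch of that termination (parameter values are illustrative, not taken from the paper):

```python
from math import isclose

def kummer(a, b, z, nmax=30):
    """Partial sum of Kummer's confluent hypergeometric series M(a, b, z).

    When a is a non-positive integer, the rising factorial (a)_k vanishes
    beyond k = -a, so the series terminates and leaves a polynomial in z.
    """
    total, term = 1.0, 1.0
    for k in range(nmax):
        term *= (a + k) * z / ((b + k) * (k + 1))
        total += term
    return total

# For a = -2 the series terminates after the z^2 term:
# M(-2, b, z) = 1 - 2z/b + z^2 / (b(b+1)).
a, b, z = -2, 1.5, 0.7
poly = 1 - 2 * z / b + z ** 2 / (b * (b + 1))
assert isclose(kummer(a, b, z), poly, rel_tol=1e-9)
```

In the waveguide problem this termination condition is what quantizes the parameters and hence yields the eigenfrequencies.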


The draft of the new Constitution is ready. Bernardo Cabral (PMDB-AM) reports that the draft reflects exactly what was decided in the thematic committees. On agrarian reform, Fernando Henrique Cardoso (PMDB-SP) reports that the text allows amendments to preserve production and access to land. Individual rights were also modified, and Nelson Jobim (PMDB-RS) reports that the concepts were well systematized and well grounded. The draft Constitution was delivered to the president of the National Constituent Assembly (ANC), Ulysses Guimarães (PMDB-SP), and will be re-examined by all members of the ANC's Systematization Committee. In the "O Povo Pergunta" (The People Ask) segment, a citizen asks what will become of employment. Domingos Leonelli (PMDB-BA) replies that there was considerable progress with the reduction of the working week to 40 hours. José Walter Filho, coordinator of Sine-DF, says that unemployment is structural and that the state employment system itself does not reach even 2% of the unemployed nationwide. Aloísio Vasconcelos (PMDB-MG) believes that unemployment is tied to national policies and that an employment policy is needed, rather than economic policies that generate employment or unemployment depending on the occasion. Virgílio Galassi (PDS-MG) holds that unemployment also depends on the employer. Celso Dourado (PMDB-BA) says that monetarist policy, which values capital rather than labor, should be avoided. Olívio Dutra (PT-RS) states that within Brazilian capitalism there are ways to minimize unemployment.


This paper deals with turbulence behavior in benthal boundary layers by means of large eddy simulation (LES). The flow is modeled by moving an infinite plate in otherwise quiescent water with an oscillatory and a steady velocity component, the oscillatory one aiming to simulate the wave effect on the flow. A number of large-scale turbulence databases have been established, from which we have obtained turbulence statistics of the boundary layers, such as the Reynolds stress, turbulence intensity, skewness and flatness of the turbulence, and the temporal and spatial scales of turbulent bursts. Particular attention is paid to the dependence of these statistics on two nondimensional parameters, namely the Reynolds number and the current-wave velocity ratio, defined as the steady current velocity over the oscillatory velocity amplitude. It is found that the Reynolds stress and turbulence intensity profiles differ from phase to phase and exhibit two types of distributions in an oscillatory cycle: one monotonic, occurring while the current and wave-induced components are in the same direction, and the other inflectional, occurring while they are in opposite directions. The current component makes the time series of the Reynolds stress, as well as the turbulence intensity, asymmetrical, although the mean velocity series is symmetrical as a sine/cosine function. The skewness and flatness variations suggest that the turbulence distribution is not normal, but approaches a normal one with increasing Reynolds number and current-wave velocity ratio. As for turbulent bursting, the dimensionless period and the mean area of all bursts per unit bed area tend to increase with the Reynolds number and the current-wave velocity ratio, rather than being constant as in steady channel flows.
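The skewness and flatness referred to above are the normalized third and fourth central moments of the velocity fluctuations; a Gaussian signal has skewness 0 and flatness 3, which is the baseline the abstract compares against. A minimal sketch of how such statistics are computed from a velocity record (synthetic Gaussian data here, not the LES database):

```python
import random

def skewness_flatness(u):
    """Normalized third and fourth central moments of a velocity record."""
    n = len(u)
    mean = sum(u) / n
    m2 = sum((x - mean) ** 2 for x in u) / n
    m3 = sum((x - mean) ** 3 for x in u) / n
    m4 = sum((x - mean) ** 4 for x in u) / n
    return m3 / m2 ** 1.5, m4 / m2 ** 2

# A Gaussian record gives skewness ~ 0 and flatness ~ 3; departures from
# these values are what signal non-Gaussian turbulence in the abstract.
random.seed(0)
u = [random.gauss(0.0, 1.0) for _ in range(200_000)]
skew, flat = skewness_flatness(u)
assert abs(skew) < 0.05
assert abs(flat - 3.0) < 0.1
```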


A new method is presented to analyse the Peierls-Nabarro model of an edge dislocation in a rectangular plate. The analysis is based on a superposition scheme and series expansions of complex potentials. The stress field and the dislocation density field on the slip plane can be expressed as series of Chebyshev polynomials of the first and second kind, respectively. Two sets of governing equations are obtained, on the slip plane and on the outer boundary of the rectangular plate respectively, and three numerical methods are used to solve them.
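Both families of Chebyshev polynomials used in such expansions satisfy the same three-term recurrence and have closed trigonometric forms, which is what makes them convenient on a slip plane of finite length. A small sketch verifying the defining identities (values are illustrative):

```python
from math import cos, sin, isclose

def cheb_T(n, x):
    """Chebyshev polynomial of the first kind via the three-term recurrence."""
    t0, t1 = 1.0, x
    for _ in range(n):
        t0, t1 = t1, 2 * x * t1 - t0
    return t0

def cheb_U(n, x):
    """Chebyshev polynomial of the second kind (same recurrence, U_1 = 2x)."""
    u0, u1 = 1.0, 2 * x
    for _ in range(n):
        u0, u1 = u1, 2 * x * u1 - u0
    return u0

# Defining identities: T_n(cos t) = cos(n t), U_n(cos t) = sin((n+1)t)/sin(t).
t = 0.7
assert isclose(cheb_T(5, cos(t)), cos(5 * t), rel_tol=1e-9)
assert isclose(cheb_U(5, cos(t)), sin(6 * t) / sin(t), rel_tol=1e-9)
```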


This work addresses the key problems faced in developing composite track shoes for a new generation of infantry fighting vehicles. In view of the actual loading conditions, an impact fatigue loading apparatus was designed and built, and a detailed experimental and mechanistic study was carried out to reveal the microscopic process and mechanism of impact fatigue failure in the Al_2O_3/LC_4 composite. First, the microstructures of SiC_P/LC_4, Al_2O_(3P)/LC_4 and the LC_4 matrix were observed and quantitatively analyzed, and their tensile and three-point-bending failure processes were observed in situ. Together with observation and analysis of the fracture morphology, this revealed that the root cause of fracture in these particle-reinforced aluminum-matrix composites is particle clustering and severe segregation of brittle phases at grain boundaries, and process improvements were suggested to the material manufacturer accordingly. Conventional mechanical tests on the composite prepared with the improved process showed tensile properties clearly superior to those of the corresponding material prepared before the improvement. For the impact fatigue study, a loading apparatus was designed and built in-house on the basis of the actual loading of the track shoes. It consists of a main frame and a measurement system: the former, combined with a small vibration system, performs repeated-impact tests at an impact energy of 0.3 J, an impact frequency of 1 Hz and an impact velocity of 0.6 m/s, while the latter accurately records the impact load waveform at any moment and the number of impact fatigue load cycles. To examine the influence of the particles and of the loading rate on the fatigue mechanism, crack propagation paths and growth rates in the Al_2O_3/LC_4 composite and in the pure LC_4 matrix were studied under both impact fatigue and conventional fatigue. The combined results show that fatigue cracks grow faster in the Al_2O_3/LC_4 composite than in the pure LC_4 matrix. In the composite, the added particles cause severe crack deflection under both fatigue modes; a crack meeting a particle usually passes around it and only rarely cuts through it, and the impact fatigue crack growth rate is clearly higher than the conventional one. In the pure matrix, cracks propagate essentially transgranularly under both loading modes, often in a finely serrated path; the plastic deformation at the impact fatigue crack tip is larger than in conventional fatigue, impact fatigue cracks are more tortuous, showing multi-scale zig-zag features, and the impact fatigue crack growth rate is again higher than the conventional one. Building on these experiments, the fracture surfaces and crack paths were observed microscopically and analyzed quantitatively, and the impact fatigue mechanism of particle-reinforced aluminum-matrix composites is discussed in the light of all the experimental and statistical results. The increased fatigue crack growth rate of the composite is mainly related to crack deflection, the cracks tending to grow along the particle/matrix interface. For both materials the fatigue crack growth rate increases with the loading rate. A change of loading mode acts in two competing ways: the shorter load duration under impact tends to lower the crack growth rate, while the higher loading rate lowers the fracture toughness, embrittles the material and raises the crack growth rate; these two effects compete to determine the actual growth rate. In both materials the microscopic mechanisms of fatigue crack growth at different loading rates are essentially the same, with no fundamental difference.


The capacity degradation of a bucket foundation in a liquefied sand layer under cyclic loads, such as equivalent dynamic ice-induced loads, is studied. A simplified numerical model of the liquefied sand layer is presented based on dynamic centrifuge experiment results. The ice-induced dynamic loads are modeled as equivalent sine cyclic loads, and the degree of liquefaction at different positions in the sand layer, together with the effects of the main factors, is investigated. Subsequently, the sand resistance is represented by uncoupled, non-linear sand springs which describe the sub-failure behavior of the local sand resistance as well as the peak capacity of the bucket foundation under a given failure criterion. The capacity of the bucket foundation in the liquefied sand layer is determined and the pattern of capacity degradation is analyzed by comparison with the non-liquefied sand layer. The results show that the liquefaction degree is 0.9 at the top and only 0.06 at the bottom of the liquefied sand layer. The numerical results agree well with the centrifuge experiments: the degradation of bucket capacity is 12% in the numerical simulation versus 17% in the centrifuge experiments.
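The uncoupled nonlinear sand springs mentioned above map local displacement to local resistance up to a peak capacity. The abstract does not give the spring law; the hyperbolic form below is a common modeling choice and is purely illustrative:

```python
def sand_spring(y, k0, p_ult):
    """Hyperbolic sand spring: initial stiffness k0, resistance saturating
    at the peak capacity p_ult.

    The hyperbolic form is an assumption for illustration; the abstract only
    states that the springs are uncoupled, nonlinear, and capped by a peak
    capacity under the chosen failure criterion.
    """
    return y / (1.0 / k0 + y / p_ult)

k0, p_ult = 50.0, 100.0            # illustrative stiffness and peak resistance
ys = [0.1 * i for i in range(1, 200)]
ps = [sand_spring(y, k0, p_ult) for y in ys]

# Resistance grows monotonically and never exceeds the peak capacity.
assert all(b > a for a, b in zip(ps, ps[1:]))
assert all(p < p_ult for p in ps)
# Near the origin the response is roughly linear with stiffness k0.
assert abs(sand_spring(1e-6, k0, p_ult) / 1e-6 - k0) < 1e-2
```

Degradation in the liquefied layer can then be modeled by reducing p_ult with the local liquefaction degree.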


In Part I, a method for finding solutions of certain diffusive dispersive nonlinear evolution equations is introduced. The method consists of a straightforward iteration procedure, applied to the equation as it stands (in most cases), which can be carried out to all terms, followed by a summation of the resulting infinite series, sometimes directly and other times in terms of traces of inverses of operators in an appropriate space.

We first illustrate our method with Burgers' and Thomas' equations, and show how it quickly leads to the Cole-Hopf transformation, which is known to linearize these equations.

We also apply this method to the Korteweg-de Vries, nonlinear (cubic) Schrödinger, Sine-Gordon, modified KdV and Boussinesq equations. In all these cases the multisoliton solutions are easily obtained and new expressions for some of them follow. More generally we show that the Marchenko integral equations, together with the inverse problem that originates them, follow naturally from our expressions.
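As a concrete anchor for the soliton formulas, the one-soliton solution of the KdV equation can be verified numerically. The scaling convention u_t + 6 u u_x + u_xxx = 0 is the standard one and is assumed here, since the abstract fixes no normalization:

```python
from math import cosh, sqrt

def soliton(x, t, c=1.0):
    """One-soliton solution of u_t + 6 u u_x + u_xxx = 0 (standard scaling):
    a sech^2 pulse of speed c and amplitude c/2."""
    s = 0.5 * sqrt(c) * (x - c * t)
    return 0.5 * c / cosh(s) ** 2

def kdv_residual(x, t, c=1.0, h=5e-3):
    """Central finite-difference residual of the KdV equation at (x, t)."""
    u = soliton(x, t, c)
    u_t = (soliton(x, t + h, c) - soliton(x, t - h, c)) / (2 * h)
    u_x = (soliton(x + h, t, c) - soliton(x - h, t, c)) / (2 * h)
    u_xxx = (-soliton(x - 2 * h, t, c) + 2 * soliton(x - h, t, c)
             - 2 * soliton(x + h, t, c) + soliton(x + 2 * h, t, c)) / (2 * h ** 3)
    return u_t + 6 * u * u_x + u_xxx

# The residual vanishes up to O(h^2) discretization error at any sample point.
assert abs(kdv_residual(0.3, 0.2)) < 1e-3
assert abs(kdv_residual(-1.0, 0.5, c=2.0)) < 1e-3
```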

Only solutions that are small in some sense (i.e., they tend to zero as the independent variable goes to ∞) are covered by our methods. However, by studying the effect of writing the initial iterate $u_1 = u_1(x,t)$ as a sum $u_1 = \tilde{u}_1 + \tilde{\tilde{u}}_1$ when we know the solution which results if $u_1 = \tilde{u}_1$, we are led to expressions that describe the interaction of two arbitrary solutions, only one of which is small. This should not be confused with Bäcklund transformations and is more in the direction of performing the inverse scattering over an arbitrary “base” solution. Thus we are able to write expressions for the interaction of a cnoidal wave with a multisoliton in the case of the KdV equation; these expressions are somewhat different from the ones obtained by Wahlquist (1976). Similarly, we find multi-dark-pulse solutions and solutions describing the interaction of envelope solitons with a uniform wave train in the case of the Schrödinger equation.

Other equations tractable by our method are presented. These include the following equations: self-induced transparency, reduced Maxwell-Bloch, and a two-dimensional nonlinear Schrödinger. Higher order and matrix-valued equations with nonscalar dispersion functions are also presented.

In Part II, the second Painlevé transcendent is treated in conjunction with the similarity solutions of the Korteweg-de Vries equation and the modified Korteweg-de Vries equation.


Using an unperturbed scattering theory, the characteristics of H-atom photoionization by a linearly and by a circularly polarized one-cycle laser pulse sequence are studied. The asymmetry of the photoelectron yield in two opposite directions is investigated. The degree of asymmetry is found to vary with the carrier-envelope (CE) phase, the laser intensity, and the kinetic energy of the photoelectrons. For linear polarization, the maximal ionization rate varies with the CE phase, and the asymmetry degree varies with the CE phase in a sine-like pattern. For circular polarization, the maximal ionization rate remains constant for all CE phases, but the asymmetry degree still varies in a sine-like pattern.
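The sine-like dependence on the CE phase can be seen already at the level of the pulse itself: for a one-cycle pulse, the time-integrated field (a crude proxy for the net momentum imparted to the photoelectron, and hence for the directional asymmetry) varies sinusoidally with the CE phase. The sin²-envelope pulse below is an assumed shape for illustration, not the pulse used in the paper:

```python
from math import sin, cos, pi, isclose

def net_field_integral(phi, n=4000):
    """Trapezoidal integral of E(t) = sin^2(pi t / T) cos(2 pi t / T + phi)
    over one cycle 0 <= t <= T, with E0 = T = 1 (illustrative units)."""
    def E(t):
        return sin(pi * t) ** 2 * cos(2 * pi * t + phi)
    h = 1.0 / n
    return h * (0.5 * E(0.0) + sum(E(k * h) for k in range(1, n)) + 0.5 * E(1.0))

# Analytically the integral equals -cos(phi)/4, so the asymmetry-driving
# quantity follows the CE phase sinusoidally, as the abstract reports.
assert isclose(net_field_integral(0.0), -0.25, abs_tol=1e-6)
assert abs(net_field_integral(pi / 2)) < 1e-6
assert isclose(net_field_integral(pi), 0.25, abs_tol=1e-6)
```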


The dissertation is concerned with the mathematical study of various network problems. First, three real-world networks are considered: (i) the human brain network, (ii) communication networks, (iii) electric power networks. Although these networks perform very different tasks, they share similar mathematical foundations. The high-level goal is to analyze and/or synthesize each of these systems from a “control and optimization” point of view. After studying these three real-world networks, two abstract network problems, both motivated by power systems, are also explored. The first one is “flow optimization over a flow network” and the second one is “nonlinear optimization over a generalized weighted graph”. The results derived in this dissertation are summarized below.

Brain Networks: Neuroimaging data reveals the coordinated activity of spatially distinct brain regions, which may be represented mathematically as a network of nodes (brain regions) and links (interdependencies). To obtain the brain connectivity network, the graphs associated with the correlation matrix and the inverse covariance matrix—describing marginal and conditional dependencies between brain regions—have been proposed in the literature. A question arises as to whether any of these graphs provides useful information about brain connectivity. Due to the electrical properties of the brain, this problem is investigated in the context of electrical circuits. First, we consider an electric circuit model and show that the inverse covariance matrix of the node voltages reveals the topology of the circuit. Second, we study the problem of finding the topology of the circuit from measurements alone. In this case, by assuming that the circuit is hidden inside a black box and only the nodal signals are available for measurement, the aim is to find the topology of the circuit when a limited number of samples are available. For this purpose, we deploy the graphical lasso technique to estimate a sparse inverse covariance matrix. It is shown that the graphical lasso may find most of the circuit topology if the exact covariance matrix is well-conditioned, but it may fail to work well when this matrix is ill-conditioned. To deal with ill-conditioned matrices, we propose a small modification to the graphical lasso algorithm and demonstrate its performance. Finally, the technique developed in this work is applied to the resting-state fMRI data of a number of healthy subjects.
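The first claim above can be reproduced in a few lines: if the node voltages of a resistive circuit are Gaussian with covariance K⁻¹ (K the grounded conductance matrix, as for thermal noise), then the inverse of the sample covariance recovers the circuit topology. The 4-node circuit below and the plain inversion of the empirical covariance (instead of the graphical lasso) are illustrative simplifications:

```python
import random

def mat_inv(a):
    """Gauss-Jordan inverse of a small dense matrix (list of lists)."""
    n = len(a)
    aug = [row[:] + [float(i == j) for j in range(n)] for i, row in enumerate(a)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(aug[r][col]))
        aug[col], aug[piv] = aug[piv], aug[col]
        p = aug[col][col]
        aug[col] = [v / p for v in aug[col]]
        for r in range(n):
            if r != col:
                f = aug[r][col]
                aug[r] = [v - f * w for v, w in zip(aug[r], aug[col])]
    return [row[n:] for row in aug]

def cholesky(a):
    """Lower-triangular L with L L^T = a (a symmetric positive definite)."""
    n = len(a)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = a[i][j] - sum(L[i][k] * L[j][k] for k in range(j))
            L[i][j] = s ** 0.5 if i == j else s / L[j][j]
    return L

# Grounded conductance matrix of a 4-node path circuit with edges
# (0,1), (1,2), (2,3) and unit conductance to ground at every node.
n, edges = 4, [(0, 1), (1, 2), (2, 3)]
K = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
for i, j in edges:
    K[i][i] += 1.0; K[j][j] += 1.0
    K[i][j] -= 1.0; K[j][i] -= 1.0

# Sample voltages v ~ N(0, K^{-1}) via v = L^{-T} z, then estimate covariance.
L = cholesky(K)

def sample_v():
    z = [random.gauss(0.0, 1.0) for _ in range(n)]
    v = [0.0] * n
    for i in range(n - 1, -1, -1):       # back-substitute L^T v = z
        v[i] = (z[i] - sum(L[k][i] * v[k] for k in range(i + 1, n))) / L[i][i]
    return v

random.seed(1)
m = 100_000
cov = [[0.0] * n for _ in range(n)]
for _ in range(m):
    v = sample_v()
    for i in range(n):
        for j in range(n):
            cov[i][j] += v[i] * v[j] / m

prec = mat_inv(cov)   # empirical inverse covariance, approximately K

# Non-edges show up as near-zero entries, circuit edges as clearly nonzero.
assert abs(prec[0][2]) < 0.2 and abs(prec[0][3]) < 0.2
assert abs(prec[0][1]) > 0.5 and abs(prec[2][3]) > 0.5
```

The graphical lasso replaces the plain inversion with a sparsity-penalized estimate, which is what makes the recovery work at far smaller sample sizes.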

Communication Networks: Congestion control techniques aim to adjust the transmission rates of competing users in the Internet in such a way that the network resources are shared efficiently. Despite the progress in the analysis and synthesis of Internet congestion control, almost all existing fluid models of congestion control assume that every link in the path of a flow observes the original source rate. To address this issue, a more accurate model is derived in this work for the behavior of the network under an arbitrary congestion controller, which takes into account the effect of buffering (queueing) on data flows. Using this model, it is proved that the well-known Internet congestion control algorithms may no longer be stable for the common pricing schemes, unless a sufficient condition is satisfied. It is also shown that these algorithms are guaranteed to be stable if a new pricing mechanism is used.

Electrical Power Networks: Optimal power flow (OPF) has been one of the most studied problems for power systems since its introduction by Carpentier in 1962. This problem is concerned with finding an optimal operating point of a power network minimizing the total power generation cost subject to network and physical constraints. It is well known that OPF is computationally hard to solve due to the nonlinear interrelation among the optimization variables. The objective is to identify a large class of networks over which every OPF problem can be solved in polynomial time. To this end, a convex relaxation is proposed, which solves the OPF problem exactly for every radial network and every meshed network with a sufficient number of phase shifters, provided power over-delivery is allowed. The concept of “power over-delivery” is equivalent to relaxing the power balance equations to inequality constraints.

Flow Networks: In this part of the dissertation, the minimum-cost flow problem over an arbitrary flow network is considered. In this problem, each node is associated with some possibly unknown injection, each line has two unknown flows at its ends related to each other via a nonlinear function, and all injections and flows need to satisfy certain box constraints. This problem, named generalized network flow (GNF), is highly non-convex due to its nonlinear equality constraints. Under the assumption of monotonicity and convexity of the flow and cost functions, a convex relaxation is proposed, which always finds the optimal injections. A primary application of this work is in the OPF problem. The results of this work on GNF prove that the relaxation on power balance equations (i.e., load over-delivery) is not needed in practice under a very mild angle assumption.

Generalized Weighted Graphs: Motivated by power optimizations, this part aims to find a global optimization technique for a nonlinear optimization defined over a generalized weighted graph. Every edge of this type of graph is associated with a weight set corresponding to the known parameters of the optimization (e.g., the coefficients). The motivation behind this problem is to investigate how the (hidden) structure of a given real/complex valued optimization makes the problem easy to solve, and indeed the generalized weighted graph is introduced to capture the structure of an optimization. Various sufficient conditions are derived, which relate the polynomial-time solvability of different classes of optimization problems to weak properties of the generalized weighted graph such as its topology and the sign definiteness of its weight sets. As an application, it is proved that a broad class of real and complex optimizations over power networks are polynomial-time solvable due to the passivity of transmission lines and transformers.


This thesis addresses whether it is possible to build a robust memory device for quantum information. Many schemes for fault-tolerant quantum information processing have been developed so far, one of which, called topological quantum computation, makes use of degrees of freedom that are inherently insensitive to local errors. However, this scheme is not so reliable against thermal errors. Other fault-tolerant schemes achieve better reliability through active error correction, but incur a substantial overhead cost. Thus, it is of practical importance and theoretical interest to design and assess fault-tolerant schemes that work well at finite temperature without active error correction.

In this thesis, a three-dimensional gapped lattice spin model is found which demonstrates for the first time that a reliable quantum memory at finite temperature is possible, at least to some extent. When quantum information is encoded into a highly entangled ground state of this model and subjected to thermal errors, the errors remain easily correctable for a long time without any active intervention, because a macroscopic energy barrier keeps the errors well localized. As a result, stored quantum information can be retrieved faithfully for a memory time which grows exponentially with the square of the inverse temperature. In contrast, for previously known types of topological quantum storage in three or fewer spatial dimensions the memory time scales exponentially with the inverse temperature, rather than its square.
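The gap between the two scalings is worth making explicit: with memory time exp(cβ²), the gain from lowering the temperature compounds, whereas exp(cβ) only gains a fixed factor per increment of β. A toy comparison with c = 1 in illustrative units (the constant is not from the thesis):

```python
from math import exp, isclose

def t_square(beta, c=1.0):
    """Memory time growing exponentially in the *square* of the inverse
    temperature, as for the model in the thesis."""
    return exp(c * beta ** 2)

def t_linear(beta, c=1.0):
    """Memory time growing exponentially in the inverse temperature, as for
    previously known 3D topological storage."""
    return exp(c * beta)

beta = 3.0
# Doubling beta (halving the temperature) squares the linear-scaling memory
# time but raises the square-scaling memory time to the fourth power.
assert isclose(t_linear(2 * beta), t_linear(beta) ** 2, rel_tol=1e-9)
assert isclose(t_square(2 * beta), t_square(beta) ** 4, rel_tol=1e-9)
assert t_square(beta) / t_linear(beta) > 100.0   # exp(9)/exp(3) = exp(6)
```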

This spin model exhibits a previously unexpected topological quantum order, in which ground states are locally indistinguishable, pointlike excitations are immobile, and the immobility is not affected by small perturbations of the Hamiltonian. The degeneracy of the ground state, though also insensitive to perturbations, is a complicated number-theoretic function of the system size, and the system bifurcates into multiple noninteracting copies of itself under real-space renormalization group transformations. The degeneracy, the excitations, and the renormalization group flow can be analyzed using a framework that exploits the spin model's symmetry and some associated free resolutions of modules over polynomial algebras.


A classical question in combinatorics is the following: given a partial Latin square $P$, when can we complete $P$ to a Latin square $L$? In this paper, we investigate the class of $\epsilon$-dense partial Latin squares: partial Latin squares in which each symbol, row, and column contains no more than $\epsilon n$-many nonblank cells. Based on a conjecture of Nash-Williams, Daykin and Häggkvist conjectured that all $\frac{1}{4}$-dense partial Latin squares are completable. In this paper, we will discuss the proof methods and results used in previous attempts to resolve this conjecture, introduce a novel technique derived from a paper by Jacobson and Matthews on generating random Latin squares, and use this novel technique to study $\epsilon$-dense partial Latin squares that contain no more than $\delta n^2$ filled cells in total.
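A small utility makes the definition concrete: a partial Latin square is ε-dense when every row, column, and symbol is used at most εn times. The checker below, with a toy 4×4 example, is illustrative:

```python
def is_partial_latin(P):
    """P is an n x n grid with entries in 1..n, or 0 for blank; check that
    no symbol repeats within any row or column."""
    for line in list(P) + [list(col) for col in zip(*P)]:
        filled = [v for v in line if v != 0]
        if len(filled) != len(set(filled)):
            return False
    return True

def is_eps_dense(P, eps):
    """Each row, column, and symbol occupies at most eps * n cells."""
    n = len(P)
    rows = [sum(v != 0 for v in row) for row in P]
    cols = [sum(v != 0 for v in col) for col in zip(*P)]
    syms = [sum(row.count(s) for row in P) for s in range(1, n + 1)]
    return max(rows + cols + syms) <= eps * n

P = [
    [1, 0, 0, 0],
    [0, 2, 0, 0],
    [0, 0, 0, 0],
    [0, 0, 0, 3],
]
assert is_partial_latin(P)
assert is_eps_dense(P, 0.25)       # every row/column/symbol used <= 1 = n/4
assert not is_eps_dense(P, 0.2)
```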

In Chapter 2, we construct completions for all $\epsilon$-dense partial Latin squares containing no more than $\delta n^2$ filled cells in total, given that $\epsilon < \frac{1}{12}$ and $\delta < \frac{(1-12\epsilon)^{2}}{10409}$. In particular, we show that all $9.8 \cdot 10^{-5}$-dense partial Latin squares are completable. In Chapter 4, we augment these results by roughly a factor of two using some probabilistic techniques. These results improve prior work by Gustavsson, which required $\epsilon = \delta \leq 10^{-7}$, as well as Chetwynd and Häggkvist, which required $\epsilon = \delta = 10^{-5}$, with $n$ even and greater than $10^7$.

If we omit the probabilistic techniques noted above, we further show that such completions can always be found in polynomial time. This contrasts with a result of Colbourn, which states that completing arbitrary partial Latin squares is an NP-complete task. In Chapter 3, we strengthen Colbourn's result to the claim that completing an arbitrary $(\frac{1}{2} + \epsilon)$-dense partial Latin square is NP-complete, for any $\epsilon > 0$.

Colbourn's result hinges heavily on a connection between triangulations of tripartite graphs and Latin squares. Motivated by this, we use our results on Latin squares to prove that any tripartite graph $G = (V_1, V_2, V_3)$ such that (i) $|V_1| = |V_2| = |V_3| = n$, (ii) for every vertex $v \in V_i$, $\deg_+(v) = \deg_-(v) \geq (1-\epsilon)n$, and (iii) $|E(G)| > (1 - \delta)\cdot 3n^2$, admits a triangulation if $\epsilon < \frac{1}{132}$ and $\delta < \frac{(1-132\epsilon)^2}{83272}$. In particular, this holds when $\epsilon = \delta = 1.197 \cdot 10^{-5}$.

This strengthens results of Gustavsson, which require $\epsilon = \delta = 10^{-7}$.

In an unrelated vein, Chapter 6 explores the class of quasirandom graphs, a notion first introduced by Chung, Graham and Wilson in 1989. Roughly speaking, a sequence of graphs is called "quasirandom" if it has a number of properties possessed by the random graph, all of which turn out to be equivalent. In this chapter, we study possible extensions of these results to random $k$-edge colorings, and create an analogue of Chung, Graham and Wilson's result for such colorings.


The dietary carbohydrate requirement of Heterobranchus longifilis was evaluated in two separate experiments. In the first experiment, varying levels of carbohydrate ranging from 28.24 to 58.72% were fed to fish of mean weight 1.83 ± 0.02 g. Results revealed that the polynomial regression curve of mean weight gain against carbohydrate level did not present a point where Y-max coincides with X-max, and so the requirement was not obtained. The second experiment was therefore conducted with lower levels of carbohydrate, ranging from 17.00 to 20.86%, fed to fish of mean weight 0.49 ± 0.02 g. Based on growth and feed efficiency data, the carbohydrate requirement was determined to be 19.5%.
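The requirement estimation described above amounts to fitting a quadratic to weight gain versus carbohydrate level and reading off the level at which the fitted curve peaks. A sketch with synthetic data (the numbers are invented for illustration, not the study's measurements):

```python
def quadratic_fit(xs, ys):
    """Least-squares fit of y = a + b x + c x^2 via the 3x3 normal equations,
    solved by Gaussian elimination with partial pivoting."""
    S = [sum(x ** k for x in xs) for k in range(5)]     # power sums
    T = [sum(y * x ** k for x, y in zip(xs, ys)) for k in range(3)]
    A = [[S[0], S[1], S[2], T[0]],
         [S[1], S[2], S[3], T[1]],
         [S[2], S[3], S[4], T[2]]]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        A[col] = [v / A[col][col] for v in A[col]]
        for r in range(3):
            if r != col:
                A[r] = [v - A[r][col] * w for v, w in zip(A[r], A[col])]
    return A[0][3], A[1][3], A[2][3]

# Synthetic growth data peaking at a 19.5% carbohydrate level.
levels = [17.0, 18.0, 19.0, 20.0, 20.86]
gains = [0.30 - 0.02 * (x - 19.5) ** 2 for x in levels]

# Center the predictor to keep the normal equations well-conditioned.
u = [x - 19.0 for x in levels]
a, b, c = quadratic_fit(u, gains)
optimum = 19.0 - b / (2 * c)        # vertex of the fitted parabola
assert abs(optimum - 19.5) < 1e-6
```

A requirement is only obtainable this way when the vertex falls inside the tested range, which is why the first experiment's range had to be narrowed.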


Many engineering applications face the problem of bounding the expected value of a quantity of interest (performance, risk, cost, etc.) that depends on stochastic uncertainties whose probability distribution is not known exactly. Optimal uncertainty quantification (OUQ) is a framework that aims at obtaining the best bound in these situations by explicitly incorporating available information about the distribution. Unfortunately, this often leads to non-convex optimization problems that are numerically expensive to solve.

This thesis emphasizes efficient numerical algorithms for OUQ problems. It begins by investigating several classes of OUQ problems that can be reformulated as convex optimization problems. Conditions on the objective function and information constraints under which a convex formulation exists are presented. Since the size of the optimization problem can become quite large, solutions for scaling up are also discussed. Finally, the capability of analyzing a practical system through such convex formulations is demonstrated by a numerical example of energy storage placement in power grids.

When an equivalent convex formulation is unavailable, it is possible to find a convex problem that provides a meaningful bound for the original problem, also known as a convex relaxation. As an example, the thesis investigates the setting used in Hoeffding's inequality. The naive formulation requires solving a collection of non-convex polynomial optimization problems whose number grows doubly exponentially. After structures such as symmetry are exploited, it is shown that both the number and the size of the polynomial optimization problems can be reduced significantly. Each polynomial optimization problem is then bounded by its convex relaxation using sums-of-squares. These bounds are found to be tight in all the numerical examples tested in the thesis and are significantly better than Hoeffding's bounds.
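For reference, the classical Hoeffding bound that the relaxations above are measured against: for independent X_i ∈ [a_i, b_i], P(S − E[S] ≥ t) ≤ exp(−2t² / Σ(b_i − a_i)²). A direct evaluation (the numbers are illustrative):

```python
from math import exp, isclose

def hoeffding_bound(t, ranges):
    """Hoeffding's upper bound on P(S - E[S] >= t) for a sum S of independent
    variables with X_i in [a_i, b_i]."""
    denom = sum((b - a) ** 2 for a, b in ranges)
    return min(1.0, exp(-2.0 * t ** 2 / denom))

# Ten independent variables in [0, 1]: P(S - E[S] >= 2) <= exp(-0.8).
bound = hoeffding_bound(2.0, [(0.0, 1.0)] * 10)
assert isclose(bound, exp(-0.8), rel_tol=1e-12)
assert hoeffding_bound(0.0, [(0.0, 1.0)] * 10) == 1.0
```

The OUQ bounds discussed in the text can be tighter because they incorporate distributional information beyond the ranges alone.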


The energy loss of protons and deuterons in D_2O ice has been measured over the energy range E_p = 18-541 keV. The double focusing magnetic spectrometer was used to measure the energy of the particles after they had traversed a known thickness of the ice target. One method of measurement is used to determine relative values of the stopping cross section as a function of energy; another method measures absolute values. The results are in very good agreement with the values calculated from Bethe’s semi-empirical formula. Possible sources of error are considered and the accuracy of the measurements is estimated to be ± 4%.

The D(d,p)H^3 cross section has been measured by two methods. For E_D = 200-500 keV the spectrometer was used to obtain the momentum spectrum of the protons and tritons. From the yield and stopping cross section, the reaction cross section at 90° has been obtained.

For E_D = 35-550 keV the proton yield from a thick target was differentiated to obtain the cross section. Both thin and thick target methods were used to measure the yield at each of ten angles. The angular distribution is expressed in terms of a Legendre polynomial expansion. The various sources of experimental error are considered in detail, and the probable error of the cross section measurements is estimated to be ± 5%.
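The Legendre expansion used for the angular distribution can be inverted through orthogonality: a_l = (2l+1)/2 ∫ f(x) P_l(x) dx with x = cos θ. A sketch recovering known coefficients (the coefficient values are illustrative, not the measured ones):

```python
def legendre(l, x):
    """Legendre polynomial P_l(x) via Bonnet's recurrence."""
    p0, p1 = 1.0, x
    if l == 0:
        return p0
    for k in range(1, l):
        p0, p1 = p1, ((2 * k + 1) * x * p1 - k * p0) / (k + 1)
    return p1

def legendre_coeff(f, l, n=2000):
    """a_l = (2l + 1)/2 * integral of f(x) P_l(x) over [-1, 1] (trapezoid)."""
    h = 2.0 / n
    total = 0.5 * (f(-1.0) * legendre(l, -1.0) + f(1.0) * legendre(l, 1.0))
    for k in range(1, n):
        x = -1.0 + k * h
        total += f(x) * legendre(l, x)
    return (2 * l + 1) / 2.0 * total * h

# An angular distribution with a known expansion: f = 1 + 0.5 P_2 + 0.1 P_4.
f = lambda x: 1.0 + 0.5 * legendre(2, x) + 0.1 * legendre(4, x)
assert abs(legendre_coeff(f, 0) - 1.0) < 1e-4
assert abs(legendre_coeff(f, 2) - 0.5) < 1e-4
assert abs(legendre_coeff(f, 3)) < 1e-4      # absent terms come out as zero
```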


Moving mesh methods (also called r-adaptive methods) are space-adaptive strategies used for the numerical simulation of time-dependent partial differential equations. These methods keep the total number of mesh points fixed during the simulation, but redistribute them over time to follow the areas where a higher mesh point density is required. There are a very limited number of moving mesh methods designed for solving field-theoretic partial differential equations, and the numerical analysis of the resulting schemes is challenging. In this thesis we present two ways to construct r-adaptive variational and multisymplectic integrators for (1+1)-dimensional Lagrangian field theories. The first method uses a variational discretization of the physical equations and the mesh equations are then coupled in a way typical of the existing r-adaptive schemes. The second method treats the mesh points as pseudo-particles and incorporates their dynamics directly into the variational principle. A user-specified adaptation strategy is then enforced through Lagrange multipliers as a constraint on the dynamics of both the physical field and the mesh points. We discuss the advantages and limitations of our methods. The proposed methods are readily applicable to (weakly) non-degenerate field theories---numerical results for the Sine-Gordon equation are presented.
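As a baseline for the numerical results mentioned above, the Sine-Gordon equation u_tt = u_xx − sin u can be integrated on a fixed uniform mesh with a symplectic velocity-Verlet scheme; the moving-mesh machinery of the thesis is beyond this sketch, and the grid, kink speed, and step sizes below are illustrative choices. A kink initial condition is used, and the discrete energy is checked to stay nearly constant:

```python
from math import atan, exp, sin, cos, sqrt

# Kink solution u(x, t) = 4 arctan(exp(gamma (x - v t))) as initial data.
v, dx, dt, N, steps = 0.2, 0.1, 0.05, 400, 200
gamma = 1.0 / sqrt(1.0 - v * v)
xs = [dx * (i - N // 2) for i in range(N)]

u = [4.0 * atan(exp(gamma * x)) for x in xs]
ut = [-4.0 * v * gamma * exp(gamma * x) / (1.0 + exp(2 * gamma * x)) for x in xs]

def accel(u):
    """u_xx - sin(u) with the ends held fixed (the kink is flat there)."""
    a = [0.0] * N
    for i in range(1, N - 1):
        a[i] = (u[i - 1] - 2 * u[i] + u[i + 1]) / dx ** 2 - sin(u[i])
    return a

def energy(u, ut):
    """Discrete energy: kinetic + gradient + potential (1 - cos u) densities."""
    e = 0.0
    for i in range(N - 1):
        ux = (u[i + 1] - u[i]) / dx
        e += (0.5 * ut[i] ** 2 + 0.5 * ux ** 2 + (1.0 - cos(u[i]))) * dx
    return e

e0 = energy(u, ut)
for _ in range(steps):                 # velocity-Verlet (kick-drift-kick)
    a = accel(u)
    ut = [w + 0.5 * dt * q for w, q in zip(ut, a)]
    u = [p + dt * w for p, w in zip(u, ut)]
    a = accel(u)
    ut = [w + 0.5 * dt * q for w, q in zip(ut, a)]
e1 = energy(u, ut)

# A symplectic integrator keeps the energy error small and bounded; the
# continuum kink energy is 8 * gamma ~ 8.165 for v = 0.2.
assert abs(e1 - e0) / e0 < 1e-2
assert 8.0 < e0 < 8.5
```

The r-adaptive schemes of the thesis would additionally concentrate mesh points near the kink front instead of keeping the uniform grid used here.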

In an attempt to extend our approach to degenerate field theories, in the last part of this thesis we construct higher-order variational integrators for a class of degenerate systems described by Lagrangians that are linear in velocities. We analyze the geometry underlying such systems and develop the appropriate theory for variational integration. Our main observation is that the evolution takes place on the primary constraint and the 'Hamiltonian' equations of motion can be formulated as an index 1 differential-algebraic system. We then proceed to construct variational Runge-Kutta methods and analyze their properties. The general properties of Runge-Kutta methods depend on the 'velocity' part of the Lagrangian. If the 'velocity' part is also linear in the position coordinate, then we show that non-partitioned variational Runge-Kutta methods are equivalent to integration of the corresponding first-order Euler-Lagrange equations, which have the form of a Poisson system with a constant structure matrix, and the classical properties of the Runge-Kutta method are retained. If the 'velocity' part is nonlinear in the position coordinate, we observe a reduction of the order of convergence, which is typical of numerical integration of DAEs. We also apply our methods to several models and present the results of our numerical experiments.