955 results for theorem
Abstract:
In an estuary, mixing and dispersion resulting from turbulence and small-scale fluctuations have strong spatio-temporal variability which cannot be resolved in conventional hydrodynamic models, while some models employ parameterizations developed for large water bodies. This paper presents small-scale diffusivity estimates from high-resolution drifters sampled at 10 Hz for periods of about 4 hours to resolve turbulence and shear diffusivity within a tidal shallow estuary (depth < 3 m). Taylor's diffusion theorem forms the basis of a first-order estimate for the diffusivity scale. Diffusivity varied between 0.001 and 0.02 m²/s during the flood tide experiment. The diffusivity showed a strong dependence (R² > 0.9) on the horizontal mean velocity within the channel. Enhanced diffusivity caused by shear dispersion resulting from the interaction of the large-scale flow with the boundary geometries was observed. Turbulence within the shallow channel showed some similarities with boundary layer flow, including consistency with the 5/3 slope predicted by Kolmogorov's similarity hypothesis within the inertial subrange. The diffusivities scale locally as a 4/3 power law of the length scale, following Okubo's scaling, and the length scale grows as a 3/2 power law of the time scale. The diffusivity scaling herein suggests that the modelling of small-scale mixing within tidal shallow estuaries can be approached from classical turbulence scaling once the pertinent parameters are identified.
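The scaling relations invoked in this abstract can be summarized in a brief sketch; the notation (K for diffusivity, l for length scale, x' for drifter displacement about the mean) is assumed here rather than taken from the paper.

```latex
% Taylor's theorem: first-order diffusivity estimate from the drifter displacement variance
K(t) \;\simeq\; \tfrac{1}{2}\,\frac{d}{dt}\big\langle x'^{2}(t) \big\rangle ,
\qquad x' = x - \langle x \rangle ,
% Okubo-type local scaling with length scale l and time scale t, as reported above
K \;\propto\; l^{4/3} , \qquad l \;\propto\; t^{3/2} .
```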
Abstract:
We prove two-sided and one-sided analogues of the Wiener-Tauberian theorem for the Euclidean motion group, M(2).
Abstract:
This paper considers the problem of the design of the quadratic weir notch, which finds application in the proportionate method of flow measurement in a by-pass, such that the discharge through it is proportional to the square root of the head measured above a certain datum. The weir notch consists of a bottom in the form of a rectangular weir of width 2W and depth a, over which a designed curve is fitted. A theorem concerning the flow through compound weirs, called the “slope discharge continuity theorem”, is discussed and proved. Using this, the problem is reduced to the determination of an exact solution to Volterra's integral equation in Abel's form. It is shown that in the case of a quadratic weir notch, the discharge is proportional to the square root of the head measured above a datum located above the crest of the weir. Further, it is observed that the function defining the shape of the weir is rapidly convergent and its value is very nearly zero at distances of 3a and above from the crest of the weir. This interesting and significant behaviour of the function incidentally provides a very good approximate solution to a particular Fredholm integral equation of the first kind, transforming the notch into a device called a “proportional orifice”. A new concept of a “notch-orifice”, capable of passing a discharge proportional to the square root of the head (above a particular datum) while acting both as a notch and as an orifice, is given. A typical experiment with one such notch-orifice, having a = 4 in. and W = 6 in., shows remarkable agreement with the theory and is found to have a constant coefficient of discharge of 0.61 in the ranges of both notch and orifice.
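A minimal sketch of the underlying weir relation may help; the notation w(y) for the notch width at height y above the crest, the head H, and the usual C_d and g are assumptions, not the paper's symbols.

```latex
% Elemental sharp-crested weir relation integrated over the notch profile (standard form):
Q(H) \;=\; C_d \sqrt{2g} \int_{0}^{H} w(y)\,\sqrt{H - y}\; dy .
% Prescribing Q(H) proportional to the square root of the head above a chosen datum turns
% this into a Volterra integral equation of Abel's form for the unknown profile w(y).
```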
Abstract:
At the Tevatron, the total p_bar-p cross-section has been measured by CDF at 546 GeV and 1.8 TeV, and by E710/E811 at 1.8 TeV. The two results at 1.8 TeV disagree by 2.6 standard deviations, introducing large uncertainties into extrapolations to higher energies. At the LHC, the TOTEM collaboration is preparing to resolve the ambiguity by measuring the total p-p cross-section with a precision of about 1%. As at the Tevatron experiments, the luminosity-independent method based on the Optical Theorem will be used. The Tevatron experiments have also performed a vast range of studies of soft and hard diffractive events, partly with antiproton tagging by Roman Pots and partly with rapidity gap tagging. At the LHC, the combined CMS/TOTEM experiments will carry out their diffractive programme with an unprecedented rapidity coverage and Roman Pot spectrometers on both sides of the interaction point. The physics menu comprises detailed studies of soft diffractive differential cross-sections, diffractive structure functions, rapidity gap survival and exclusive central production by Double Pomeron Exchange.
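For reference, the luminosity-independent method mentioned above is conventionally written as follows (a standard form, not quoted from the abstract), with ρ the ratio of the real to the imaginary part of the forward elastic amplitude:

```latex
\sigma_{\mathrm{tot}} \;=\; \frac{16\pi}{1+\rho^{2}}\;
\frac{\left. dN_{\mathrm{el}}/dt \right|_{t=0}}{N_{\mathrm{el}} + N_{\mathrm{inel}}} ,
% so that the luminosity cancels between the elastic and inelastic rates.
```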
Abstract:
This paper is concerned with a study of some of the properties of locally product and almost locally product structures on a differentiable manifold X_n of class C^k. Every locally product space has certain almost locally product structures which transform the local tangent space to X_n at an arbitrary point P in a set fashion: this is studied in Theorem (2.2). Theorem (2.3) considers the nature of the transformations that exist between two co-ordinate systems at a point whenever an almost locally product structure has the same local representation in each of these co-ordinate systems. A necessary and sufficient condition for X_n to be a locally product manifold is obtained in terms of the pseudo-group of co-ordinate transformations on X_n and the subpseudo-groups [cf. Theorem (2.1)]. Section 3 is entirely devoted to the study of integrable almost locally product structures.
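By way of orientation, the definition usually assumed in such studies (not quoted from the paper) is that an almost product structure on X_n is a (1,1) tensor field F with

```latex
F^{2} = I , \qquad F \neq \pm I ,
% and X_n carries a locally product structure precisely when F is integrable,
% i.e. when its Nijenhuis tensor N_F vanishes.
```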
Abstract:
Krishnan's reciprocity theorem in colloid optics, ρ_u = (1 + 1/ρ_h)/(1 + 1/ρ_v), is generalised to the case when the scattering medium is subjected to an external orienting field. It is shown theoretically that a general relation of the type I_BA = I′_AB results in this case, where I_BA is the intensity of the component of the scattered light having its electric vector inclined at an angle B to the vertical, with the incident light polarised at an angle A to the vertical and the external field direction parallel to the incident beam; I′_AB is the corresponding intensity with the magnetic field parallel to the scattered ray. Experimental verification of the above generalisation is also given.
Abstract:
Nucleation is the first step in a phase transition, where small nuclei of the new phase start appearing in the metastable old phase, such as the appearance of small liquid clusters in a supersaturated vapor. Nucleation is important in various industrial and natural processes, including atmospheric new particle formation: between 20% and 80% of the atmospheric particle concentration is due to nucleation. These atmospheric aerosol particles have a significant effect on both climate and human health. Different simulation methods are often applied when studying phenomena that are difficult or even impossible to measure, or when trying to distinguish between the merits of various theoretical approaches. Such simulation methods include, among others, molecular dynamics and Monte Carlo simulations. In this work, molecular dynamics simulations of the homogeneous nucleation of Lennard-Jones argon have been performed. Homogeneous means that the nucleation does not occur on a pre-existing surface. The simulations include runs where the starting configuration is a supersaturated vapor and the nucleation event is observed during the simulation (direct simulations), as well as simulations of a cluster in equilibrium with a surrounding vapor (indirect simulations). The latter type is a necessity when the conditions prevent the occurrence of a nucleation event within a reasonable timeframe in the direct simulations. The effect of various temperature control schemes on the nucleation rate (the rate of appearance of clusters that are equally likely to grow to macroscopic size or to evaporate) was studied and found to be relatively small. The method used to extract the nucleation rate was also found to be of minor importance. The cluster sizes from direct and indirect simulations were used in conjunction with the nucleation theorem to calculate formation free energies for the clusters in the indirect simulations. The results agreed with density functional theory, but were higher than values from Monte Carlo simulations. The formation energies were also used to calculate the surface tension of the clusters. The sizes of the clusters in the direct and indirect simulations were compared, showing that the direct simulation clusters have more atoms between the liquid-like core of the cluster and the surrounding vapor. Finally, the performance of various nucleation theories in predicting simulated nucleation rates was investigated, and the results, among other things, once again highlighted the inadequacy of the classical nucleation theory that is commonly employed in nucleation studies.
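One common statement of the nucleation theorem referred to above (a standard form, with notation assumed here rather than taken from the thesis) relates the nucleation rate J, the supersaturation S and the critical cluster size n*:

```latex
\left( \frac{\partial \ln J}{\partial \ln S} \right)_{T} \;\approx\; n^{*} + 1 ,
\qquad\text{equivalently}\qquad
\left( \frac{\partial \Delta G^{*}}{\partial \Delta\mu} \right)_{T} \;=\; -\,n^{*} ,
% linking measured or simulated rates to the formation free energy \Delta G^{*}
% of the critical cluster.
```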
Abstract:
In this study, we derive a fast, novel time-domain algorithm to compute the nth-order moment of the power spectral density of the photoelectric current as measured in laser-Doppler flowmetry (LDF). It is well established in the LDF literature that these moments are closely related to fundamental physiological parameters, i.e. the concentration of moving erythrocytes and blood flow. In particular, we take advantage of the link between moments in the Fourier domain and fractional derivatives in the temporal domain. Using Parseval's theorem, we establish an exact analytical equivalence between the time-domain expression and the conventional frequency-domain counterpart. Moreover, we demonstrate the appropriateness of estimating the zeroth-, first- and second-order moments using Monte Carlo simulations. Finally, we briefly discuss the feasibility of implementing the proposed algorithm in hardware.
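The conventional frequency-domain counterpart mentioned above can be illustrated with a small sketch; the function name, the periodogram normalization and the test signal are assumptions for illustration, not the authors' time-domain algorithm.

```python
# Hypothetical sketch: frequency-domain computation of the zeroth-, first- and
# second-order moments of a photocurrent power spectral density, against which a
# time-domain implementation could be checked via Parseval's theorem.
import numpy as np

def psd_moments(i_t, fs, orders=(0, 1, 2)):
    """Return the requested moments of the one-sided PSD of the signal i_t sampled at fs Hz."""
    n = len(i_t)
    spectrum = np.fft.rfft(i_t - np.mean(i_t))       # remove DC, one-sided spectrum
    psd = (np.abs(spectrum) ** 2) / (fs * n)         # periodogram estimate of the PSD
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)           # corresponding frequencies in Hz
    return {k: np.trapz(freqs ** k * psd, freqs) for k in orders}

# Example: moments of a noisy test signal
rng = np.random.default_rng(0)
signal = rng.standard_normal(4096)
print(psd_moments(signal, fs=20_000.0))
```

With uniform sampling, the same moments could equally be accumulated in the time domain via fractional derivatives, which is the equivalence the paper exploits.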
Abstract:
A growing body of empirical research examines the structure and effectiveness of corporate governance systems around the world. An important insight from this literature is that corporate governance mechanisms address the excessive use of managerial discretionary powers to extract private benefits by expropriating shareholder value. One possible means of expropriation is to reduce the quality of disclosed earnings by manipulating the financial statements. This lower quality of earnings should then be reflected in the firm's stock price according to the value relevance theorem. Hence, instead of testing the direct effect of corporate governance on the firm's market value, it is important to understand the causes of the lower quality of accounting earnings. This thesis contributes to the literature by increasing knowledge about the extent of earnings management - measured as the extent of discretionary accruals in total disclosed earnings - and its determinants across the transitional European countries. The thesis comprises three essays of empirical analysis, of which the first two utilize data on Russian listed firms whereas the third essay uses data from 10 European economies. More specifically, the first essay adds to existing research connecting earnings management to corporate governance. It tests the impact of the Russian corporate governance reforms of 2002 on the quality of disclosed earnings in all publicly listed firms. This essay provides empirical evidence that the desired impact of the reforms is not fully realized in Russia without proper enforcement. Instead, firm-level factors such as long-term capital investments and compliance with International Financial Reporting Standards (IFRS) determine the quality of the earnings. The results presented in the essay support the notion proposed by Leuz et al. (2003) that reforms aimed at bringing transparency do not produce the desired results in economies where investor protection is weaker and legal enforcement is weak. The second essay focuses on the relationship between internal control mechanisms, such as the types and levels of ownership, and the quality of disclosed earnings in Russia. The empirical analysis shows that controlling shareholders in Russia use their powers to manipulate reported performance in order to obtain private benefits of control. Comparatively, firms owned by the State have significantly better quality of disclosed earnings than firms held by other controllers such as oligarchs and foreign corporations. Interestingly, the market performance of firms controlled by either the State or oligarchs is better than that of widely held firms. The third essay provides useful evidence that both ownership structures and economic characteristics are important factors in determining the quality of disclosed earnings in three groups of countries in Europe. The evidence suggests that ownership structure is a more important determinant in developed and transparent countries, while economic characteristics matter more in developing and transitional countries.
Abstract:
The collisionless Boltzmann equation governing self-gravitating systems such as galaxies has recently been shown to admit exact oscillating solutions with planar and spherical symmetry. The relation of the spherically symmetric solutions to the Virial theorem, as well as generalizations to non-uniform spheres, uniform spheroids and discs, forms the subject of this paper. These models generalize known families of static solutions. The case of the spheroid is worked out in some detail. Quasiperiodic as well as chaotic time variation of the two axes is demonstrated by studying the surface of section for the associated Hamiltonian system with two degrees of freedom. The relation to earlier work and possible implications for the general problem of collisionless relaxation in self-gravitating systems are also discussed.
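For context, the equation named in the opening sentence is the standard collisionless Boltzmann (Vlasov) equation for the phase-space density f(x, v, t) in the self-consistent gravitational potential Φ (a textbook form, assumed here):

```latex
\frac{\partial f}{\partial t} + \mathbf{v}\cdot\nabla_{\mathbf{x}} f
 - \nabla_{\mathbf{x}}\Phi \cdot \nabla_{\mathbf{v}} f = 0 ,
\qquad
\nabla^{2}\Phi = 4\pi G \int f \, d^{3}v .
```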
Abstract:
This paper analyzes the L2 stability of solutions of systems with time-varying coefficients of the form [A + C(t)]x′ = [B + D(t)]x + u, where A, B, C, D are matrices. Following the proof of a lemma, the main result is derived, according to which the system is L2 stable if the eigenvalues of the coefficient matrices are related in a simple way. A corollary of the theorem, dealing with small periodic perturbations of constant-coefficient systems, is then proved. The paper concludes with two illustrative examples, both of which deal with the attitude dynamics of a rigid, axisymmetric, spinning satellite in an eccentric orbit, subject to gravity-gradient torques.
Abstract:
The present study of the stability of systems governed by a linear multidimensional time-varying equation, which are encountered in spacecraft dynamics, economics, demographics, and biological systems, gives attention to a lemma dealing with the L(inf) stability of an integral equation that results from the differential equation of the system under consideration. Using the proof of this lemma, the main result on L(inf) stability is derived accordingly; a corollary of the theorem deals with constant-coefficient systems perturbed by small periodic terms. (O.C.)
Abstract:
A simplified perturbational analysis is employed, together with an application of Green's theorem, to determine the first-order corrections to the reflection and transmission coefficients in the problem of diffraction of surface water waves by a nearly vertical barrier in two important cases: (i) when the barrier is partially immersed, and (ii) when the barrier is completely submerged. The present analysis produces the desired results fairly easily and relatively quickly as compared with the known integral-equation approach to this class of diffraction problems.
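The identity usually invoked in such perturbation arguments is Green's second identity for two sufficiently smooth functions φ and ψ on a fluid region R with boundary ∂R (a standard form, assumed here):

```latex
\int_{R} \left( \phi\,\nabla^{2}\psi - \psi\,\nabla^{2}\phi \right) dA
 \;=\; \oint_{\partial R} \left( \phi\,\frac{\partial \psi}{\partial n}
 - \psi\,\frac{\partial \phi}{\partial n} \right) ds ,
% in such problems this is typically applied to the perturbation potential and a suitable
% auxiliary wave solution to extract the first-order coefficient corrections.
```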
Abstract:
The problem of time-variant reliability analysis of existing structures subjected to stationary random dynamic excitations is considered. The study assumes that samples of the dynamic response of the structure, under the action of external excitations, have been measured at a set of sparse points on the structure. The utilization of these measurements in updating reliability models, postulated prior to making any measurements, is considered. This is achieved by using dynamic state estimation methods which combine results from Markov process theory and Bayes' theorem. The uncertainties present in the measurements as well as in the postulated model for the structural behaviour are accounted for. The samples of external excitations are taken to emanate from known stochastic models, and allowance is made for the ability (or lack of it) to measure the applied excitations. The future reliability of the structure is modeled using the expected structural response conditioned on all the measurements made. This expected response is shown to have a time-varying mean and a random component that can be treated as being weakly stationary. For linear systems, an approximate analytical solution for the problem of reliability model updating is obtained by combining the theories of the discrete Kalman filter and level crossing statistics. For the case of nonlinear systems, the problem is tackled by combining particle filtering strategies with data-based extreme value analysis. In all these studies, the governing stochastic differential equations are discretized using the strong forms of the Ito-Taylor discretization schemes. The possibility of using conditional simulation strategies, when the applied external actions are measured, is also considered. The proposed procedures are exemplified by considering the reliability analysis of a few low-dimensional dynamical systems based on synthetically generated measurement data. The performance of the procedures developed is also assessed based on a limited amount of pertinent Monte Carlo simulations. (C) 2010 Elsevier Ltd. All rights reserved.
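The discrete Kalman filter ingredient mentioned above can be sketched generically; the linear Gaussian model x_k = F x_{k-1} + w_k, y_k = H x_k + v_k, the matrix names and the function below are illustrative assumptions, not the authors' implementation.

```python
# A generic textbook discrete Kalman filter step (an illustrative sketch, not the
# authors' code), with process noise covariance Q and measurement noise covariance R.
import numpy as np

def kalman_step(x, P, y, F, H, Q, R):
    """One predict/update cycle; returns the posterior state mean and covariance."""
    x_pred = F @ x                                  # predicted state mean
    P_pred = F @ P @ F.T + Q                        # predicted state covariance
    S = H @ P_pred @ H.T + R                        # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)             # Kalman gain
    x_post = x_pred + K @ (y - H @ x_pred)          # measurement update of the mean
    P_post = (np.eye(len(x)) - K @ H) @ P_pred      # measurement update of the covariance
    return x_post, P_post

# Example: scalar random walk observed with noise
x, P = np.array([0.0]), np.array([[1.0]])
for y in (0.4, 0.1, -0.2):
    x, P = kalman_step(x, P, np.array([y]),
                       F=np.eye(1), H=np.eye(1), Q=0.01 * np.eye(1), R=0.1 * np.eye(1))
print(x, P)
```

In the spirit of the abstract, the filtered response mean and covariance would then feed the level-crossing calculation that updates the reliability estimate.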
Abstract:
After Gödel's incompleteness theorems and the collapse of Hilbert's programme Gerhard Gentzen continued the quest for consistency proofs of Peano arithmetic. He considered a finitistic or constructive proof still possible and necessary for the foundations of mathematics. For a proof to be meaningful, the principles relied on should be considered more reliable than the doubtful elements of the theory concerned. He worked out a total of four proofs between 1934 and 1939. This thesis examines the consistency proofs for arithmetic by Gentzen from different angles. The consistency of Heyting arithmetic is shown both in a sequent calculus notation and in natural deduction. The former proof includes a cut elimination theorem for the calculus and a syntactical study of the purely arithmetical part of the system. The latter consistency proof in standard natural deduction has been an open problem since the publication of Gentzen's proofs. The solution to this problem for an intuitionistic calculus is based on a normalization proof by Howard. The proof is performed in the manner of Gentzen, by giving a reduction procedure for derivations of falsity. In contrast to Gentzen's proof, the procedure contains a vector assignment. The reduction reduces the first component of the vector and this component can be interpreted as an ordinal less than epsilon_0, thus ordering the derivations by complexity and proving termination of the process.
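For reference, the ordinal bound mentioned at the end is the standard one (not spelled out in the abstract):

```latex
\varepsilon_{0} \;=\; \sup \{\, \omega,\ \omega^{\omega},\ \omega^{\omega^{\omega}},\ \dots \,\} ,
% the least ordinal \alpha satisfying \omega^{\alpha} = \alpha; assigning such ordinals to
% derivations orders them by complexity and yields termination of the reduction procedure.
```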