960 results for Order-preserving Functions
Abstract:
An expression for the probability density function of the second-order response of a general FPSO in spreading seas is derived by using the Kac-Siegert approach. Various approximations of the second-order force transfer functions are investigated for a ship-shaped FPSO. It is found that, when expressed in non-dimensional form, the probability density function of the response is not particularly sensitive to wave spreading, although the mean squared response and the resulting dimensional extreme values can be sensitive. The analysis is then applied to a Sevan FPSO, which is a large cylindrical buoy-like structure. The second-order force transfer functions are derived by using an efficient semi-analytical hydrodynamic approach, and these are then employed to yield the extreme response. However, a significant effect of wave spreading on the statistics for a Sevan FPSO is found even in non-dimensional form. This implies that the exact statistics of a general ship-shaped FPSO may be sensitive to the wave direction, which needs to be verified in future work. It is also pointed out that Newman's approximation regarding the frequency dependency of the force transfer function is acceptable even for spreading seas. An improvement in the results may be attained by considering the angular dependency exactly. Copyright © 2009 by ASME.
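As an illustrative sketch (ours, not taken from the paper): Newman's approximation reconstructs the full difference-frequency quadratic transfer function (QTF) matrix from its diagonal alone, here by taking the mean of the two diagonal values. The function name and the diagonal force values are hypothetical.

```python
import numpy as np

def newman_qtf(diagonal):
    """Approximate the full difference-frequency QTF matrix from its
    diagonal: T(wi, wj) ~ (T(wi, wi) + T(wj, wj)) / 2 (Newman's rule)."""
    d = np.asarray(diagonal, dtype=float)
    return 0.5 * (d[:, None] + d[None, :])

# Hypothetical diagonal drift-force values at four wave frequencies.
diag = np.array([1.0, 0.8, 0.5, 0.2])
T = newman_qtf(diag)
```

By construction the approximation is exact on the diagonal and produces a symmetric matrix, which is why it is attractive for slowly varying drift forces.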
Abstract:
Energy functions (or characteristic functions) and basic equations for ferroelectrics in use today are given by those for ordinary dielectrics in the physics and mechanics communities. Based on these basic equations and energy functions, finite element computations of the nonlinear behavior of ferroelectrics have been carried out by several research groups. However, it is difficult to carry the finite element computation further after domain switching, and the computed results deviate markedly from the experimental results. For the crack problem, the iterative solution of the finite element calculation does not converge, and the solutions for fields near the crack tip oscillate. To complete the calculation smoothly, the finite element formulation must be modified to neglect the equivalent nodal load produced by the spontaneous polarization gradient. Meanwhile, certain energy functions for ferroelectrics in use today are not compatible with the constitutive equations of ferroelectrics and need to be modified. This paper proposes a set of new formulae for the energy functions of ferroelectrics. From these new formulae, new basic equations for ferroelectrics are derived that can reasonably explain the difficulties in current finite element analyses of ferroelectrics.
Abstract:
The effects of complex boundary conditions on flows are represented by a volume force in immersed boundary methods. The problem with this representation is that the volume force exhibits non-physical oscillations in moving-boundary simulations. A smoothing technique for discrete delta functions is developed in this paper to suppress the non-physical oscillations in the volume forces. We have found that the non-physical oscillations arise mainly because the derivatives of the regular discrete delta functions do not satisfy certain moment conditions. It is shown that the smoothed discrete delta functions constructed in this paper are one order more differentiable than the regular ones. Moreover, not only do the smoothed discrete delta functions satisfy the first two discrete moment conditions, but their derivatives also satisfy a moment condition one order higher than those of the regular ones. The smoothed discrete delta functions are tested on three cases: a one-dimensional heat equation with a moving singular force, two-dimensional flow past an oscillating cylinder, and the vortex-induced vibration of a cylinder. The numerical examples demonstrate that the smoothed discrete delta functions can effectively suppress the non-physical oscillations in the volume forces and improve the accuracy of the immersed boundary method with direct forcing in moving-boundary simulations.
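The discrete moment conditions mentioned above can be checked numerically for a standard regularized delta function. The sketch below uses Peskin's classical 4-point delta (not the smoothed functions constructed in the paper) and verifies its zeroth and first discrete moment conditions at an arbitrary off-grid point; the function names are our own.

```python
import numpy as np

def peskin_delta(r):
    """Peskin's regularized 4-point discrete delta function (grid units)."""
    r = np.abs(np.asarray(r, dtype=float))
    phi = np.zeros_like(r)
    inner = r <= 1.0
    outer = (r > 1.0) & (r < 2.0)
    phi[inner] = (3 - 2 * r[inner] + np.sqrt(1 + 4 * r[inner] - 4 * r[inner] ** 2)) / 8
    phi[outer] = (5 - 2 * r[outer] - np.sqrt(-7 + 12 * r[outer] - 4 * r[outer] ** 2)) / 8
    return phi

def discrete_moment(x, order):
    """Discrete moment sum over the integer grid points around a point x."""
    k = np.arange(np.floor(x) - 3, np.ceil(x) + 4)  # covers the support [x-2, x+2]
    w = peskin_delta(x - k)
    return np.sum((x - k) ** order * w)
```

For this delta, the zeroth moment sums to 1 and the first moment to 0 at every shift, which is exactly the kind of condition whose higher-order analogue the smoothed functions impose on the derivatives.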
Abstract:
Data were taken in 1979-80 by the CCFRR high energy neutrino experiment at Fermilab. A total of 150,000 neutrino and 23,000 antineutrino charged-current events in the approximate energy range 25 < E_ν < 250 GeV are measured and analyzed. The structure functions F_2 and xF_3 are extracted for three assumptions about σ_L/σ_T: R = 0, R = 0.1, and R given by a QCD-based expression. Systematic errors are estimated and their significance is discussed. Comparisons of the x and Q^2 behaviour of the structure functions with results from other experiments are made.
We find that statistical errors currently dominate our knowledge of the valence quark distribution, which is studied in this thesis. xF_3 from different experiments has, within errors and apart from level differences, the same dependence on x and Q^2, except for the HPWF results. The CDHS F_2 shows a clear fall-off at low x relative to the CCFRR and EMC results, again apart from level differences, which are calculable from cross-sections.
The result for the GLS sum rule is found to be 2.83 ± 0.15 ± 0.09 ± 0.10, where the first error is statistical, the second is an overall level error, and the third covers the rest of the systematic errors. QCD studies of xF_3 to leading and second order have been done. The QCD evolution of xF_3, which is independent of R and the strange sea, does not depend on the gluon distribution, and fits yield
Λ_LO = 88 (+163/−78 stat.) (+113/−70 syst.) MeV
The systematic errors are smaller than the statistical errors. Second-order fits give somewhat different values of Λ, although α_s (at Q^2_0 = 12.6 GeV^2) is not so different.
A fit using the better-determined F_2 in place of xF_3 for x > 0.4, i.e., assuming q = 0 in that region, gives
Λ_LO = 266 (+114/−104 stat.) (+85/−79 syst.) MeV
Again, the statistical errors are larger than the systematic errors. An attempt to measure R was made, and the measurements are described. Utilizing the inequality q(x) ≥ 0, we find that in the region x > 0.4, R is less than 0.55 at the 90% confidence level.
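For orientation (a sketch of ours, not part of the thesis): at leading order in QCD the Gross-Llewellyn Smith sum rule predicts ∫ F_3 dx = 3 (1 − α_s/π), which can be compared with the measured 2.83 above. The α_s value used here is purely illustrative.

```python
import math

def gls_lo(alpha_s):
    """Leading-order QCD value of the Gross-Llewellyn Smith sum rule:
    integral of F3 over x = 3 * (1 - alpha_s / pi)."""
    return 3.0 * (1.0 - alpha_s / math.pi)

# With an illustrative alpha_s ~ 0.25 at a few GeV^2 the sum rule
# is pulled below the naive parton-model value of 3.
value = gls_lo(0.25)
```

The leading-order correction brings the prediction from 3 down toward the measured central value, consistent with the quoted result within its errors.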
Abstract:
The connections between convexity and submodularity are explored, for purposes of minimizing and learning submodular set functions.
First, we develop a novel method for minimizing a particular class of submodular functions, which can be expressed as a sum of concave functions composed with modular functions. The basic algorithm uses an accelerated first order method applied to a smoothed version of its convex extension. The smoothing algorithm is particularly novel as it allows us to treat general concave potentials without needing to construct a piecewise linear approximation as with graph-based techniques.
Second, we derive the general conditions under which it is possible to find a minimizer of a submodular function via a convex problem. This provides a framework for developing submodular minimization algorithms. The framework is then used to develop several algorithms that can be run in a distributed fashion. This is particularly useful for applications where the submodular objective function consists of a sum of many terms, each term dependent on a small part of a large data set.
Lastly, we approach the problem of learning set functions from an unorthodox perspective---sparse reconstruction. We demonstrate an explicit connection between the problem of learning set functions from random evaluations and that of recovering sparse signals. Based on the observation that the Fourier transform for set functions satisfies exactly the conditions needed for sparse reconstruction algorithms to work, we examine several function classes under which uniform reconstruction is possible.
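The convex extension underlying the minimization approach described above is the Lovász extension, which can be evaluated by Edmonds' greedy algorithm. The sketch below is a generic textbook implementation, not the thesis code; the graph-cut example is our own.

```python
import numpy as np

def lovasz_extension(F, x):
    """Evaluate the Lovász (convex) extension of a set function F at x via
    Edmonds' greedy algorithm: sort coordinates in decreasing order and
    accumulate marginal gains of the growing prefix sets. F maps a set of
    indices to a real value, with F(empty set) = 0."""
    order = np.argsort(-np.asarray(x, dtype=float))
    total, prev, S = 0.0, 0.0, set()
    for i in order:
        S.add(int(i))
        cur = F(S)
        total += x[i] * (cur - prev)
        prev = cur
    return total

# Example: cut function of the path graph 0-1-2, a classic submodular function.
edges = [(0, 1), (1, 2)]
cut = lambda S: sum((a in S) != (b in S) for a, b in edges)
```

On indicator vectors the extension agrees with the set function itself, which is the basic sanity check for any implementation.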
Abstract:
This thesis presents a new approach for the numerical solution of three-dimensional problems in elastodynamics. The new methodology, based on a recently introduced Fourier continuation (FC) algorithm for the solution of partial differential equations by means of accurate Fourier expansions of possibly non-periodic functions, enables fast, high-order solutions of the time-dependent elastic wave equation in a nearly dispersionless manner, subject to CFL constraints that scale only linearly with spatial discretizations. A new FC operator is introduced to treat Neumann and traction boundary conditions, and a block-decomposed (sub-patch) overset strategy is presented for the implementation of general, complex geometries in distributed-memory parallel computing environments. Our treatment of the elastic wave equation, which is formulated as a complex system of variable-coefficient PDEs that includes possibly heterogeneous and spatially varying material constants, represents the first fully realized three-dimensional extension of FC-based solvers to date. Challenges for three-dimensional elastodynamics simulations, such as the treatment of corners and edges in three-dimensional geometries, variable coefficients arising from physical configurations and/or curvilinear coordinate systems, and the treatment of boundary conditions, are all addressed. The broad applicability of our new FC elasticity solver is demonstrated through application to realistic problems concerning seismic wave motion on three-dimensional topographies, as well as applications to non-destructive evaluation where, for the first time, we present three-dimensional simulations for comparison with experimental studies of guided-wave scattering by through-thickness holes in thin plates.
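As background to the FC idea (a sketch of ours, not the thesis algorithm): plain FFT-based spectral differentiation is highly accurate for periodic data but degrades badly for non-periodic data, which is precisely the defect Fourier continuation is designed to remedy by constructing an accurate periodic extension.

```python
import numpy as np

def fft_derivative(f, L):
    """Spectral derivative of samples f on a uniform grid of period L,
    assuming the underlying function is L-periodic."""
    n = f.size
    k = 2j * np.pi * np.fft.fftfreq(n, d=L / n)  # i * wavenumbers
    return np.real(np.fft.ifft(k * np.fft.fft(f)))

n, L = 64, 2 * np.pi
x = np.arange(n) * L / n

# Periodic data: spectral accuracy (error near machine precision).
periodic_err = np.max(np.abs(fft_derivative(np.sin(x), L) - np.cos(x)))

# Non-periodic data f(x) = x: the implicit sawtooth jump wrecks the derivative.
nonperiodic_err = np.max(np.abs(fft_derivative(x, L) - 1.0))
```

The contrast between the two error levels is the motivation for FC-type continuation of non-periodic data before applying Fourier methods.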
Abstract:
Part I
Numerical solutions to the S-limit equations for the helium ground state and excited triplet state and the hydride ion ground state are obtained with the second and fourth difference approximations. The results for the ground states are superior to previously reported values. The coupled equations resulting from the partial wave expansion of the exact helium atom wavefunction were solved, giving accurate S-, P-, D-, F-, and G-limits. The G-limit is -2.90351 a.u., compared to the exact energy of -2.90372 a.u.
Part II
The pair functions which determine the exact first-order wavefunction for the ground state of the three-electron atom are found with the matrix finite difference method. The second- and third-order energies for the (1s1s)¹S, (1s2s)³S, and (1s2s)¹S states of the two-electron atom are presented along with contour and perspective plots of the pair functions. The total energy for the three-electron atom with nuclear charge Z is found to be E(Z) = −1.125 Z² + 1.022805 Z − 0.408138 − 0.025515 (1/Z) + O(1/Z²) a.u.
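The quoted 1/Z expansion can be evaluated directly; for neutral lithium (Z = 3) the truncated series gives about −7.4732 a.u., close to the accepted nonrelativistic value near −7.478 a.u. A minimal sketch (the function name is ours):

```python
def three_electron_energy(Z):
    """1/Z-expansion total energy (a.u.) quoted above, truncated at 1/Z."""
    return -1.125 * Z**2 + 1.022805 * Z - 0.408138 - 0.025515 / Z

energy_li = three_electron_energy(3)  # neutral lithium, Z = 3
```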
Abstract:
The Fokker-Planck (FP) equation is used to develop a general method for finding the spectral density for a class of randomly excited first-order systems. This class consists of systems satisfying stochastic differential equations of the form ẋ + f(x) = Σ_{j=1}^{m} h_j(x) n_j(t), where f and the h_j are piecewise linear functions (not necessarily continuous), and the n_j are stationary Gaussian white noises. For such systems, it is shown how the Laplace-transformed FP equation can be solved for the transformed transition probability density. By manipulation of the FP equation and its adjoint, a formula is derived for the transformed autocorrelation function in terms of the transformed transition density. From this, the spectral density is readily obtained. The method generalizes that of Caughey and Dienes, J. Appl. Phys., 32.11.
This method is applied to 4 subclasses: (1) m = 1, h1 = const. (forcing function excitation); (2) m = 1, h1 = f (parametric excitation); (3) m = 2, h1 = const., h2 = f, n1 and n2 correlated; (4) the same, uncorrelated. Many special cases, especially in subclass (1), are worked through to obtain explicit formulas for the spectral density, most of which have not been obtained before. Some results are graphed.
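For the simplest member of subclass (1), with linear f(x) = a·x (the Ornstein-Uhlenbeck case), the spectral density is a Lorentzian, and its total power can be checked against the stationary variance D/a. A sketch under that assumption (parameter values illustrative, not from the thesis):

```python
import numpy as np

a, D = 2.0, 1.5  # illustrative drift rate and noise intensity

def spectral_density(w):
    """Lorentzian spectrum of x' + a*x = n(t) with <n(t)n(t')> = 2D delta(t-t')."""
    return 2 * D / (a**2 + w**2)

# Consistency check: integrating S over all frequencies (divided by 2*pi)
# must recover the stationary variance D/a.
w = np.linspace(-2000.0, 2000.0, 400_001)
power = np.sum(spectral_density(w)) * (w[1] - w[0]) / (2 * np.pi)
```

This closed-form special case is a convenient sanity check for the general piecewise-linear machinery described in the abstract.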
Dealing with parametrically excited first order systems leads to two complications. There is some controversy concerning the form of the FP equation involved (see Gray and Caughey, J. Math. Phys., 44.3); and the conditions which apply at irregular points, where the second order coefficient of the FP equation vanishes, are not obvious but require use of the mathematical theory of diffusion processes developed by Feller and others. These points are discussed in the first chapter, relevant results from various sources being summarized and applied. Also discussed is the steady-state density (the limit of the transition density as t → ∞).
Abstract:
Let {Z_n}_{n=−∞}^{∞} be a stochastic process with state space S_1 = {0, 1, …, D − 1}. Such a process is called a chain of infinite order. The transitions of the chain are described by the functions
Q_i(i^(0)) = P(Z_n = i | Z_{n−1} = i^(0)_1, Z_{n−2} = i^(0)_2, …)   (i ∈ S_1),
where i^(0) = (i^(0)_1, i^(0)_2, …) ranges over infinite sequences from S_1. If i^(n) = (i^(n)_1, i^(n)_2, …) for n = 1, 2, …, then i^(n) → i^(0) means that for each k, i^(n)_k = i^(0)_k for all n sufficiently large.
Given functions Q_i(i^(0)) such that
(i) 0 ≤ Q_i(i^(0)) ≤ ξ < 1,
(ii) Σ_{i=0}^{D−1} Q_i(i^(0)) ≡ 1,
(iii) Q_i(i^(n)) → Q_i(i^(0)) whenever i^(n) → i^(0),
we prove the existence of a stationary chain of infinite order {Z_n} whose transitions are given by
P(Z_n = i | Z_{n−1}, Z_{n−2}, …) = Q_i(Z_{n−1}, Z_{n−2}, …)
with probability 1. The method also yields stationary chains {Z_n} for which (iii) does not hold but whose transition probabilities are, in a sense, "locally Markovian." These and similar results extend a paper by T. E. Harris [Pac. J. Math., 5 (1955), 707-724].
Included is a new proof of the existence and uniqueness of a stationary absolute distribution for an Nth order Markov chain in which all transitions are possible. This proof allows us to achieve our main results without the use of limit theorem techniques.
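For the finite-order Markov case mentioned above, the stationary distribution of a chain in which all transitions are possible can be computed numerically; the sketch below (an illustration of ours, not the paper's proof technique) treats a first-order chain by power iteration.

```python
import numpy as np

def stationary(P, iters=10_000):
    """Stationary row vector of a transition matrix P with all entries
    positive (every transition possible), found by power iteration."""
    pi = np.full(P.shape[0], 1.0 / P.shape[0])
    for _ in range(iters):
        pi = pi @ P
    return pi

# Hypothetical 3-state chain with all transitions possible.
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.3, 0.3, 0.4]])
pi = stationary(P)
```

Positivity of all transition probabilities guarantees that this fixed point exists, is unique, and attracts any starting distribution, which is the content of the classical result the paper re-proves.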
Abstract:
This report is a description of the organization, functions, and achievements of the IATTC. It has been prepared to provide, in a convenient format, answers to requests for information concerning the IATTC. It replaces similar, earlier reports (Carroz, 1965; IATTC Spec. Rep., 1 and 5), which are now largely outdated. In order to make each section of the report independent of the others, some aspects of the IATTC are described in more than one section. For example, work on the early life history of tunas financed by the Overseas Fishery Cooperation Foundation of Japan is mentioned in the subsection entitled Finance, the subsection entitled Biology of tunas and billfishes, and the section entitled RELATIONS WITH OTHER ORGANIZATIONS. Due to space constraints, however, it is not possible to describe the IATTC's activities in detail in this report. Additional information is available in publications of the IATTC, listed in Appendix 6, and on its website, www.iattc.org. Many abbreviations are used in this report. The names of the organizations or the terms are written out the first time they are used and, for convenience, are also listed in the Glossary.
Abstract:
EXTRACT (SEE PDF FOR FULL ABSTRACT): High-resolution proxy records of climate, such as varves, ice cores, and tree-rings, provide the opportunity for reconstructing climate on a year-by-year basis. In order to do so it is necessary to approximate the complex nonlinear response function of the natural recording system using linear statistical models. Three problems with this approach were discussed, and possible solutions were suggested. Examples were given from a reconstruction of Santa Barbara precipitation based on tree-ring records from Santa Barbara County.
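The linear statistical modelling step described above can be sketched as an ordinary least-squares calibration of a proxy against an instrumental record; the data below are synthetic and the linear response is assumed, purely for illustration.

```python
import numpy as np

# Synthetic calibration-period data: tree-ring index vs. precipitation (mm).
rng = np.random.default_rng(0)
ring = rng.uniform(0.5, 1.5, 50)                     # hypothetical ring-width index
precip = 400 + 300 * ring + rng.normal(0, 20, 50)    # assumed linear response + noise

# Fit the linear transfer function: precip ~ b0 + b1 * ring.
A = np.vstack([np.ones_like(ring), ring]).T
b0, b1 = np.linalg.lstsq(A, precip, rcond=None)[0]

# Apply the fitted transfer function to reconstruct precipitation.
reconstructed = b0 + b1 * ring
```

Real reconstructions face exactly the problems the abstract alludes to: the true response is nonlinear, the noise is not white, and the calibration period may not represent the full range of past climate.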
Abstract:
The growth of red sea urchins (Strongylocentrotus franciscanus) was modeled by using tag-recapture data from northern California. Red sea urchins (n=211) ranging in test diameter from 7 to 131 mm were examined for changes in size over one year. We used the function J_(t+1) = J_t + f(J_t) to model growth, in which J_t is the jaw size (mm) at tagging, and J_(t+1) is the jaw size one year later. The function f(J_t) represents one of six deterministic models: logistic dose response, Gaussian, Tanaka, Ricker, Richards, and von Bertalanffy, with 3, 3, 3, 2, 3, and 2 minimization parameters, respectively. We found that three measures of goodness of fit ranked the models similarly, in the order given. The results from these six models indicate that red sea urchins are slow-growing animals (mean of 7.2 ± 1.3 years to enter the fishery). We show that poor model selection or data from a limited range of urchin sizes (or both) produces erroneous growth parameter estimates and years-to-fishery estimates. Individual variation in growth dominated spatial variation at shallow and deep sites (F=0.246, n=199, P=0.62). We summarize the six models using a composite growth curve of jaw size, J, as a function of time, t: J = A(B − e^(−Ct)) + Dt, in which each model is distinguished by the constants A, B, C, and D. We suggest that this composite model has the flexibility of the other six models and could be broadly applied. Given the robustness of our results regarding the number of years to enter the fishery, this information could be incorporated into future fishery management plans for red sea urchins in northern California.
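A hedged sketch of the composite growth curve quoted above, with a simple bisection estimate of years-to-size; the parameter values and helper names are our own, and with D = 0, B = 1 the curve reduces to the von Bertalanffy form.

```python
import numpy as np

def composite_growth(t, A, B, C, D):
    """Composite jaw-size curve J(t) = A*(B - exp(-C*t)) + D*t."""
    return A * (B - np.exp(-C * t)) + D * t

def years_to_size(target, A, B, C, D, t_max=50.0):
    """Smallest t with J(t) >= target, by bisection; assumes J is increasing."""
    lo, hi = 0.0, t_max
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if composite_growth(mid, A, B, C, D) < target else (lo, mid)
    return 0.5 * (lo + hi)
```

With illustrative von Bertalanffy parameters (A = 20 mm asymptote, B = 1, C = 0.3/yr, D = 0), the time to reach half the asymptotic jaw size is ln(2)/C.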
Abstract:
Research has begun on Microbial Carbonate Precipitation (MCP), which shows promise as a soil improvement method because of its low carbon dioxide emissions compared to cement-based stabilizers. MCP produces calcium carbonate from carbonates and calcium in soil voids through ureolysis by Bacillus pasteurii. This study focuses on how the amount of calcium carbonate precipitation is affected by the injection conditions of the microorganism and nutrient salt, such as the number of injections, and by the soil type. Experiments were conducted to simulate soil improvement by bio-grouting soil in a syringe. The results indicate that the amount of precipitation is affected by injection conditions and soil type, suggesting that, for soil improvement by MCP to be effective, it is necessary to set injection conditions in accordance with the soil conditions. © 2011 ASCE.
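The chemistry underlying MCP can be summarized by the ureolysis stoichiometry; the sketch below (our own, illustrative, not from the study) bounds the CaCO3 yield by the limiting reagent.

```python
# Stoichiometry of MCP via ureolysis:
#   CO(NH2)2 + 2 H2O -> 2 NH4+ + CO3^2-   (urease-catalysed hydrolysis)
#   Ca^2+ + CO3^2-   -> CaCO3 (precipitate)
# So one mole of hydrolysed urea can precipitate at most one mole of CaCO3.

M_CACO3 = 40.078 + 12.011 + 3 * 15.999  # molar mass of CaCO3, g/mol

def max_caco3_mass(moles_urea, moles_ca):
    """Upper bound on precipitated CaCO3 (grams); the limiting reagent governs."""
    return min(moles_urea, moles_ca) * M_CACO3
```

Actual yields in soil are lower, since (as the study shows) transport and injection conditions control how much of the reagents reach and react in the pore space.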
Abstract:
Kolmogorov's two-thirds law, ⟨(Δv)²⟩ ∼ ε^(2/3) r^(2/3), and five-thirds law, E ∼ ε^(2/3) k^(−5/3), are formally equivalent in the limit of vanishing viscosity, ν → 0. However, for most Reynolds numbers encountered in laboratory-scale experiments or numerical simulations, it is invariably easier to observe the five-thirds law. By creating artificial fields of isotropic turbulence composed of a random sea of Gaussian eddies whose size and energy distribution can be controlled, we show why this is the case. The energy of eddies of scale s is shown to vary as s^(2/3), in accordance with Kolmogorov's 1941 law, and we vary the range of scales, γ = s_max/s_min, in any one realisation from γ = 25 to γ = 800. This is equivalent to varying the Reynolds number in an experiment from R_λ = 60 to R_λ = 600. While there is some evidence of a five-thirds law for γ > 50 (R_λ > 100), the two-thirds law only starts to become apparent when γ approaches 200 (R_λ ∼ 240). The reason for this discrepancy is that the second-order structure function is a poor filter, mixing information about energy and enstrophy, and from scales larger and smaller than r. In particular, in the inertial range, ⟨(Δv)²⟩ takes the form of a mixed power law, a₁ + a₂r² + a₃r^(2/3), where a₂r² tracks the variation in enstrophy and a₃r^(2/3) the variation in energy. These findings are shown to be consistent with experimental data, where the pollution of the r^(2/3) law by the enstrophy contribution, a₂r², is clearly evident. We show that higher-order structure functions (of even order) suffer from a similar deficiency.
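The mixed power-law form quoted above is linear in its coefficients, so a least-squares fit in the basis {1, r², r^(2/3)} separates the enstrophy-like and energy-like contributions exactly on noise-free synthetic data; the coefficients below are invented for illustration.

```python
import numpy as np

# Synthetic inertial-range structure function with a known mixture of a
# constant, an enstrophy-like r^2 term, and a Kolmogorov r^(2/3) term.
r = np.linspace(0.1, 1.0, 200)
a1, a2, a3 = 0.05, 0.4, 1.2
S2 = a1 + a2 * r**2 + a3 * r**(2 / 3)

# Linear least squares in the basis {1, r^2, r^(2/3)} recovers the mixture.
A = np.vstack([np.ones_like(r), r**2, r**(2 / 3)]).T
coef = np.linalg.lstsq(A, S2, rcond=None)[0]
```

With noisy experimental data the same decomposition quantifies how much the a₂r² term pollutes an apparent r^(2/3) range.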
Abstract:
We investigate the performance of different variants of a suitably tailored Tabu Search optimisation algorithm on a higher-order design problem. We consider four objective functions to describe the performance of a compressor stator row, subject to a number of equality and inequality constraints. The same design problem has been previously investigated through single-, bi- and three-objective optimisation studies. However, in this study we explore the capabilities of enhanced variants of our Multi-objective Tabu Search (MOTS) optimisation algorithm in the context of detailed 3D aerodynamic shape design. It is shown that with these enhancements to the local search of the MOTS algorithm we can achieve a rapid exploration of complicated design spaces, but there is a trade-off between speed and the quality of the trade-off surface found. Rapidly explored design spaces reveal the extremes of the objective functions, but the compromise optimum areas are not very well explored. However, there are ways to adapt the behaviour of the optimiser and maintain both a very efficient rate of progress towards the global optimum Pareto front and a healthy number of design configurations lying on the trade-off surface and exploring the compromise optimum regions. These compromise solutions almost always represent the best qualitative balance between the objectives under consideration. Such enhancements to the effectiveness of design space exploration make engineering design optimisation with multiple objectives and robustness criteria ever more practicable and attractive for modern advanced engineering design. Finally, new research questions are addressed that highlight the trade-offs between intelligence in optimisation algorithms and the acquisition of qualitative information through computational engineering design processes that reveal patterns and relations between design parameters and objective functions, but also speed versus optimum quality. © 2012 AIAA.
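The trade-off surface discussed above consists of the non-dominated designs. A minimal Pareto-front filter (a generic sketch, not the MOTS code; the objective values are hypothetical):

```python
def pareto_front(points):
    """Indices of non-dominated points, minimising every objective.
    A point is dominated if some distinct point is no worse in all objectives."""
    front = []
    for i, p in enumerate(points):
        dominated = any(all(q[k] <= p[k] for k in range(len(p))) and q != p
                        for q in points)
        if not dominated:
            front.append(i)
    return front

# Hypothetical (loss1, loss2) values for four compressor-stator designs.
designs = [(1.0, 5.0), (2.0, 3.0), (3.0, 4.0), (4.0, 1.0)]
```

Here design (3.0, 4.0) is dominated by (2.0, 3.0), so the trade-off surface comprises the other three; the "compromise optimum" designs of the abstract are the interior points of such a front.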