878 results for Degrees of Freedom
Abstract:
This study is concerned with how the attractor dimension of the two-dimensional Navier–Stokes equations depends on characteristic length scales, including the system integral length scale, the forcing length scale, and the dissipation length scale. Upper bounds on the attractor dimension derived by Constantin, Foias and Temam are analysed. It is shown that the optimal attractor-dimension estimate grows linearly with the domain area (suggestive of extensive chaos), for a sufficiently large domain, if the kinematic viscosity and the amplitude and length scale of the forcing are held fixed. For sufficiently small domain area, a slightly “super-extensive” estimate becomes optimal. In the extensive regime, the attractor-dimension estimate is given by the ratio of the domain area to the square of the dissipation length scale defined, on physical grounds, in terms of the average rate of shear. This dissipation length scale (which is not necessarily the scale at which the energy or enstrophy dissipation takes place) can be identified with the dimension correlation length scale, the square of which is interpreted, according to the concept of extensive chaos, as the area of a subsystem with one degree of freedom. Furthermore, these length scales can be identified with a “minimum length scale” of the flow, which is rigorously deduced from the concept of determining nodes.
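As a compact restatement of the extensive estimate (a sketch in assumed notation; these symbols are illustrative, not the paper's own):

```latex
% A: domain area; \ell_d: dissipation length scale defined via the average
% rate of shear (assumed symbols, for illustration only).
d_{\mathrm{attr}} \;\sim\; \frac{A}{\ell_d^{\,2}}
```

Read this way, $\ell_d^{\,2}$ is the area of a subsystem carrying one degree of freedom, so at fixed forcing and viscosity the estimate grows linearly with $A$.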
Abstract:
We consider the numerical treatment of second kind integral equations on the real line of the form $\phi(s) = \psi(s) + \int_{-\infty}^{+\infty} \kappa(s-t)\, z(t)\, \phi(t)\, dt$, $s \in \mathbb{R}$ (abbreviated $\phi = \psi + K_z \phi$), in which $\kappa \in L^1(\mathbb{R})$, $z \in L^\infty(\mathbb{R})$ and $\psi \in BC(\mathbb{R})$, the space of bounded continuous functions on $\mathbb{R}$, are assumed known and $\phi \in BC(\mathbb{R})$ is to be determined. We first derive sharp error estimates for the finite section approximation (reducing the range of integration to $[-A, A]$) via bounds on $(1 - K_z)^{-1}$ as an operator on spaces of weighted continuous functions. Numerical solution by a simple discrete collocation method on a uniform grid on $\mathbb{R}$ is then analysed: in the case when $z$ is compactly supported this leads to a coefficient matrix which allows a rapid matrix-vector multiply via the FFT. To utilise this possibility we propose a modified two-grid iteration, a feature of which is that the coarse-grid matrix is approximated by a banded matrix, and analyse convergence and computational cost. In cases where $z$ is not compactly supported, a combined finite section and two-grid algorithm can be applied, and we extend the analysis to this case. As an application we consider acoustic scattering in the half-plane with a Robin or impedance boundary condition, which we formulate as a boundary integral equation of the class studied. Our final result is that if $z$ (related to the boundary impedance in the application) takes values in an appropriate compact subset $Q$ of the complex plane, then the difference between $\phi(s)$ and its finite section approximation computed numerically using the iterative scheme proposed is at most $C_1 [\, kh \log(1/(kh)) + (1-\Theta)^{-1/2} (kA)^{-1/2} ]$ on the interval $[-\Theta A, \Theta A]$ ($\Theta < 1$) for $kh$ sufficiently small, where $k$ is the wavenumber and $h$ the grid spacing. Moreover, this numerical approximation can be computed in at most $C_2 N \log N$ operations, where $N = 2A/h$ is the number of degrees of freedom. The values of the constants $C_1$ and $C_2$ depend only on the set $Q$ and not on the wavenumber $k$ or the support of $z$.
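A minimal sketch of the discrete collocation scheme with the FFT-based matrix-vector multiply, assuming toy choices of $\kappa$, $z$ and $\psi$ (none taken from the paper) and plain GMRES in place of the two-grid iteration analysed above:

```python
# Hedged sketch (illustrative, not the paper's code): collocation on a
# uniform grid for phi = psi + K_z phi on the finite section [-A, A],
# with the convolution matvec evaluated by FFT.
import numpy as np
from scipy.signal import fftconvolve
from scipy.sparse.linalg import LinearOperator, gmres

A, h = 50.0, 0.05
s = np.arange(-A, A, h)                   # collocation points, N = 2A/h
N = s.size

kappa = lambda r: 0.5 * np.exp(-np.abs(r))           # toy L^1 kernel
z = lambda t: 0.4 * np.exp(-t**2) * (np.abs(t) < 5)  # toy compactly supported z
psi = lambda t: 1.0 / (1.0 + t**2)                   # toy right-hand side

kern = kappa(np.arange(-(N - 1), N) * h)  # kappa((j - k) h) for all offsets

def apply_I_minus_Kz(phi):
    # (I - K_z) phi on the grid: phi_j - h * sum_k kappa((j-k)h) z(s_k) phi_k,
    # computed as a single FFT convolution (O(N log N) per matvec).
    w = z(s) * phi
    conv = fftconvolve(w, kern, mode='full')[N - 1:2 * N - 1]
    return phi - h * conv

op = LinearOperator((N, N), matvec=apply_I_minus_Kz, dtype=float)
phi, info = gmres(op, psi(s))
print(info, phi[N // 2])  # info == 0 signals convergence
```

In this toy setup $\|K_z\| \le \|\kappa\|_{L^1}\|z\|_\infty = 0.4 < 1$, so $1 - K_z$ is invertible and the iteration converges quickly; the banded coarse-grid approximation of the paper's two-grid method is omitted.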
Abstract:
We propose, first, a simple task for eliciting attitudes toward risky choice, the SGG lottery-panel task, which consists of a series of lotteries constructed to compensate riskier options with higher risk-return trade-offs. Using principal component analysis, we show that the SGG lottery-panel task is capable of capturing two dimensions of individual risky decision making, i.e. subjects' average risk taking and their sensitivity towards variations in risk-return. From the results of a large experimental dataset, we confirm that the task systematically captures a number of regularities, such as: a tendency towards risk-averse behavior (only around 10% of choices are compatible with risk neutrality); an attraction to certain payoffs compared to low-risk lotteries, compatible with the over- (under-) weighting of small (large) probabilities predicted by prospect theory (PT); and gender differences, i.e. males being consistently less risk averse than females, but both genders being similarly responsive to increases in the risk premium. Another interesting result is that in hypothetical choices most individuals increase their risk taking in response to an increase in the return to risk, as predicted by PT, while across panels with real rewards we see even more changes, but in the direction opposite to the expected pattern of riskier choices for higher risk-returns. We therefore conclude from our data that an "economic anomaly" emerges in the real-reward choices, opposite to the hypothetical choices. These findings are in line with Camerer's (1995) view that "although in many domains, paid subjects probably do exert extra mental effort which improves their performance, choice over money gambles is not likely to be a domain in which effort will improve adherence to rational axioms" (p. 635). Finally, we demonstrate that both dimensions of risk attitudes, average risk taking and sensitivity towards variations in the return to risk, are desirable not only to describe behavior under risk but also to explain behavior in other contexts, as illustrated by an example.

In the second study, we propose three additional treatments intended to elicit risk attitudes under high stakes and mixed-outcome (gains and losses) lotteries. Using a dataset obtained from a hypothetical implementation of the tasks, we show that the new treatments are able to capture both dimensions of risk attitudes. This new dataset allows us to describe several regularities, both at the aggregate and the within-subjects level. We find that in every treatment over 70% of choices show some degree of risk aversion and only between 0.6% and 15.3% of individuals are consistently risk neutral within the same treatment. We also confirm the existence of gender differences in the degree of risk taking: in all treatments, females prefer safer lotteries compared to males. Regarding our second dimension of risk attitudes, we observe, in all treatments, an increase in risk taking in response to risk-premium increases. Treatment comparisons reveal other regularities, such as a lower degree of risk taking in large-stake treatments compared to low-stake treatments, and a lower degree of risk taking when losses are incorporated into the large-stake lotteries. These results are compatible with previous findings in the literature on stake-size effects (e.g., Binswanger, 1980; Bosch-Domènech & Silvestre, 1999; Hogarth & Einhorn, 1990; Holt & Laury, 2002; Kachelmeier & Shehata, 1992; Kühberger et al., 1999; Weber & Chapman, 2005; Wik et al., 2007) and domain effects (e.g., Brooks & Zank, 2005; Schoemaker, 1990; Wik et al., 2007). For small-stake treatments, by contrast, we find that the effect of incorporating losses into the outcomes is less clear: at the aggregate level an increase in risk taking is observed, but also more dispersion in the choices, whilst at the within-subjects level the effect weakens. Finally, regarding responses to the risk premium, we find that sensitivity is lower in the mixed-lottery treatments (SL and LL) than in the gains-only treatments. In general, sensitivity to risk-return is more affected by the domain than by the stake size. Having described the properties of risk attitudes as captured by the SGG risk-elicitation task and its three new versions, it is important to recall that the danger of using unidimensional descriptions of risk attitudes goes beyond their incompatibility with modern economic theories such as PT and CPT, all of which call for tests with multiple degrees of freedom. Faithful to this recommendation, the contribution of this essay is an empirically and endogenously determined bi-dimensional specification of risk attitudes, useful for describing behavior under uncertainty and for explaining behavior in other contexts. Hopefully, this will contribute to the creation of large datasets containing a multidimensional description of individual risk attitudes, while at the same time allowing for a robust context, compatible with present and even future, more complex descriptions of human attitudes towards risk.
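A minimal sketch of the PCA step on hypothetical panel data (the generative model, names and parameters below are illustrative assumptions, not the SGG dataset):

```python
# Hedged sketch: extracting two dimensions of risk attitudes from
# lottery-panel choices with principal component analysis.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(42)
n_subjects, n_panels = 200, 6

# choices[i, p]: riskiness of the option subject i picks in panel p, built
# from an individual "average risk taking" level plus an individual slope
# on the panel's risk premium (toy model, assumed for illustration).
risk_premium = np.linspace(0.0, 1.0, n_panels)
level = rng.normal(0.0, 1.0, size=(n_subjects, 1))
slope = rng.normal(0.5, 0.3, size=(n_subjects, 1))
choices = level + slope * risk_premium + rng.normal(0.0, 0.2, (n_subjects, n_panels))

pca = PCA(n_components=2)
scores = pca.fit_transform(choices)
# With data of this structure, PC1 loads roughly equally on all panels
# (average risk taking) and PC2 follows the risk-premium gradient
# (sensitivity to variations in risk-return).
print(scores.shape, pca.explained_variance_ratio_.round(3))
print(pca.components_.round(2))
```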
Abstract:
We study the approximation of harmonic functions by means of harmonic polynomials in two-dimensional, bounded, star-shaped domains. Assuming that the functions possess analytic extensions to a $\delta$-neighbourhood of the domain, we prove exponential convergence of the approximation error with respect to the degree of the approximating harmonic polynomial. All the constants appearing in the bounds are explicit and depend only on the shape-regularity of the domain and on $\delta$. We apply the obtained estimates to show exponential convergence with rate $O(\exp(-b\sqrt{N}))$, where $N$ is the number of degrees of freedom and $b > 0$, of an hp-dGFEM discretisation of the Laplace equation based on piecewise harmonic polynomials. This result improves on the classical rate $O(\exp(-b\sqrt[3]{N}))$, and is due to the use of harmonic polynomial spaces, as opposed to complete polynomial spaces.
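To make the gap between the two rates concrete (taking $b = 1$ purely for illustration; no constant from the paper is implied):

```latex
% For N = 10^4 degrees of freedom and the arbitrary choice b = 1:
\exp\bigl(-\sqrt{N}\bigr) = e^{-100},
\qquad
\exp\bigl(-N^{1/3}\bigr) \approx e^{-21.5}
```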
Abstract:
Visual motion cues play an important role in animal and human locomotion, without the need to extract actual ego-motion information. This paper demonstrates a method for estimating the visual motion parameters, namely the Time-To-Contact (TTC), the Focus of Expansion (FOE), and the image angular velocities, from a sparse optical-flow estimate registered by a downward-looking camera. The presented method is capable of estimating the visual motion parameters during complicated six-degree-of-freedom motion, in real time, and with accuracy suitable for mobile-robot visual navigation.
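A hedged sketch of one standard way to recover the FOE and TTC from sparse flow under dominantly translational motion (an illustrative estimator with assumed names, not necessarily the paper's method):

```python
# For translation towards the scene, each flow vector points radially away
# from the FOE, giving a linear least-squares problem for the FOE; the TTC
# then follows from the ratio of radial distance to flow speed.
import numpy as np

def estimate_foe_ttc(pts, flow):
    """pts: (N, 2) image points; flow: (N, 2) optical-flow vectors."""
    x, y = pts[:, 0], pts[:, 1]
    u, v = flow[:, 0], flow[:, 1]
    # (p - foe) parallel to flow(p):  (x - fx) v - (y - fy) u = 0
    #   =>  v * fx - u * fy = v * x - u * y   (linear in fx, fy)
    A = np.column_stack([v, -u])
    b = v * x - u * y
    foe, *_ = np.linalg.lstsq(A, b, rcond=None)
    # For pure translation, |p - foe| / |flow(p)| approximates the TTC.
    r = np.linalg.norm(pts - foe, axis=1)
    speed = np.linalg.norm(flow, axis=1)
    return foe, np.median(r / np.maximum(speed, 1e-9))

# Synthetic check: expanding flow about foe = (40, -25) with TTC = 2 s.
rng = np.random.default_rng(0)
pts = rng.uniform(-200, 200, size=(100, 2))
flow = (pts - np.array([40.0, -25.0])) / 2.0
print(estimate_foe_ttc(pts, flow))  # ~ (array([40., -25.]), 2.0)
```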
Abstract:
The Ultra Weak Variational Formulation (UWVF) is a powerful numerical method for the approximation of acoustic, elastic and electromagnetic waves in the time-harmonic regime. The use of Trefftz-type basis functions incorporates the known wave-like behaviour of the solution into the discrete space, allowing large reductions in the required number of degrees of freedom for a given accuracy when compared to standard finite element methods. However, the UWVF is not well suited to the accurate approximation of singular sources in the interior of the computational domain. We propose an adjustment to the UWVF for seismic imaging applications, which we call the Source Extraction UWVF. Different fields are solved for in subdomains around the source and matched on the inter-domain boundaries. Numerical results are presented for a domain of constant wavenumber and for a domain of varying sound speed in a model used for seismic imaging.
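A minimal sketch of the Trefftz idea behind the UWVF basis (a generic illustration with assumed names, not the paper's implementation): on each element the discrete space is spanned by plane waves, each of which satisfies the homogeneous Helmholtz equation exactly.

```python
import numpy as np

def plane_wave_basis(points, k, n_dirs):
    """Evaluate n_dirs equispaced plane waves exp(i k d_j . x) at points (N, 2)."""
    angles = 2.0 * np.pi * np.arange(n_dirs) / n_dirs
    dirs = np.column_stack([np.cos(angles), np.sin(angles)])  # unit directions
    return np.exp(1j * k * (points @ dirs.T))                 # (N, n_dirs)

B = plane_wave_basis(np.random.rand(5, 2), k=20.0, n_dirs=12)
# Analytically, Laplacian(exp(i k d.x)) = -k^2 exp(i k d.x) for |d| = 1, so
# every basis function already carries the wave-like behaviour of the
# solution; this is what reduces the degrees of freedom needed per accuracy.
print(B.shape)  # (5, 12)
```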
Abstract:
In this paper we propose and analyse a hybrid numerical-asymptotic boundary element method for the solution of problems of high frequency acoustic scattering by a class of sound-soft nonconvex polygons. The approximation space is enriched with carefully chosen oscillatory basis functions; these are selected via a study of the high frequency asymptotic behaviour of the solution. We demonstrate via a rigorous error analysis, supported by numerical examples, that to achieve any desired accuracy it is sufficient for the number of degrees of freedom to grow only in proportion to the logarithm of the frequency as the frequency increases, in contrast to the at least linear growth required by conventional methods. This appears to be the first such numerical analysis result for any problem of scattering by a nonconvex obstacle. Our analysis is based on new frequency-explicit bounds on the normal derivative of the solution on the boundary and on its analytic continuation into the complex plane.
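As an illustration of what the oscillatory enrichment typically looks like in hybrid numerical-asymptotic methods (a generic ansatz with assumed notation, not the paper's specific basis):

```latex
% v_m: slowly varying, frequency-independent piecewise polynomials on the
% boundary Gamma; the oscillations e^{i k psi_m} carry the known high
% frequency asymptotic behaviour (assumed notation).
\frac{\partial u}{\partial n}(x) \;\approx\; \sum_{m} v_m(x)\, e^{\mathrm{i} k \psi_m(x)},
\qquad x \in \Gamma
```

Because the amplitudes $v_m$ are non-oscillatory, the number of degrees of freedom needed for fixed accuracy can grow like $\log k$ rather than at least linearly in $k$.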
Abstract:
We investigate in detail the initial susceptibility, magnetization curves, and microstructure of ferrofluids in various concentration and particle dipole moment ranges by means of molecular dynamics simulations. We use Ewald summation for the long-range dipolar interactions and explicitly take into account the translational and rotational degrees of freedom, both coupled to a Langevin thermostat. When the dipolar interaction energy is comparable with the thermal energy, the simulation results on the magnetization properties agree very well with the theoretical predictions. For stronger dipolar couplings, however, we find systematic deviations from the theoretical curves. We analyze in detail the observed microstructure of the fluids under different conditions. The formation of clusters is found to enhance the magnetization at weak fields and thus leads to a larger initial susceptibility. The influence of particle aggregation is isolated by studying ferrosolids, which consist of magnetic dipoles frozen in at random locations but free to rotate. Due to the artificial suppression of clusters in ferrosolids, the observed susceptibility is considerably lowered compared to ferrofluids.
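A minimal sketch of a Langevin-thermostat update for the translational degrees of freedom (an assumed Euler-Maruyama scheme for illustration, not the paper's integrator; the rotational update for the dipole axes would follow the same pattern):

```python
import numpy as np

rng = np.random.default_rng(1)

def langevin_step(x, v, force, m=1.0, gamma=1.0, kT=1.0, dt=1e-3):
    """One Euler-Maruyama step of dv = (F/m - gamma v) dt + sqrt(2 gamma kT/m) dW."""
    noise = np.sqrt(2.0 * gamma * kT * dt / m) * rng.standard_normal(v.shape)
    v = v + dt * (force(x) / m - gamma * v) + noise
    x = x + dt * v
    return x, v

# Usage with zero forces: velocities thermalise so that <v^2> -> 3 kT / m.
x, v = np.zeros((100, 3)), np.zeros((100, 3))
for _ in range(20000):
    x, v = langevin_step(x, v, force=np.zeros_like)
print((v**2).sum(axis=1).mean())  # ~ 3.0 for kT = m = 1
```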
Abstract:
The state-resolved reaction probability of CH4 on Pt(110)-(1×2) was measured as a function of CH4 translational energy for four vibrational eigenstates comprising different amounts of C-H stretch and bend excitation. Mode-specific reactivity is observed both between states from different polyads and between isoenergetic states belonging to the same polyad of CH4. For the stretch/bend combination states, the vibrational efficacy of reaction activation is observed to be higher than for either pure C-H stretching or pure bending states, demonstrating a concerted role of stretch and bend excitation in C-H bond scission. This concerted role, reflected by the nonadditivity of the vibrational efficacies, is consistent with transition-state structures found by ab initio calculations and indicates that current dynamical models of CH4 chemisorption neglect an important degree of freedom by including only C-H stretching motion.
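One common way to quantify vibrational efficacy, hedged here since conventions vary and this need not be the paper's exact definition: compare, at matched reaction probability $S_0$, the translational energy saved by vibrational excitation to the vibrational energy deposited,

```latex
% E_t(S_0; state): translational energy required to reach reaction
% probability S_0 in the given vibrational state; E_vib(nu): energy of the
% excited state (assumed notation).
\eta_\nu \;=\; \frac{E_t(S_0;\, v=0) \;-\; E_t(S_0;\, \nu)}{E_{\mathrm{vib}}(\nu)}
```

Nonadditivity then means the efficacy measured for a stretch/bend combination state exceeds the value predicted by combining the pure-stretch and pure-bend efficacies.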
Abstract:
4-Dimensional Variational Data Assimilation (4DVAR) assimilates observations through the minimisation of a least-squares objective function, which is constrained by the model flow. We refer to 4DVAR as strong-constraint 4DVAR (sc4DVAR) in this thesis, as it assumes the model is perfect. Relaxing this assumption gives rise to weak-constraint 4DVAR (wc4DVAR), leading to a different minimisation problem with more degrees of freedom. We consider two wc4DVAR formulations in this thesis, the model-error formulation and the state-estimation formulation. The 4DVAR objective function is traditionally solved using gradient-based iterative methods. The principal method used in Numerical Weather Prediction today is the Gauss-Newton approach. This method introduces a linearised `inner-loop' objective function which, upon convergence, updates the solution of the non-linear `outer-loop' objective function. This requires many evaluations of the objective function and its gradient, which emphasises the importance of the Hessian. The eigenvalues and eigenvectors of the Hessian provide insight into the degree of convexity of the objective function, while also indicating the difficulty one may encounter in iteratively solving 4DVAR. The condition number of the Hessian is an appropriate measure of the sensitivity of the problem to input data, and can also indicate the rate of convergence and solution accuracy of the minimisation algorithm. This thesis investigates the sensitivity of the solution process minimising both wc4DVAR objective functions to the internal assimilation parameters composing the problem. We gain insight into these sensitivities by bounding the condition number of the Hessians of both objective functions. We also precondition the model-error objective function and show improved convergence. Using the bounds, we show that both formulations' sensitivities are related to the error variance balance, the assimilation window length, and the correlation length-scales. We further demonstrate this through numerical experiments on the condition number and data assimilation experiments using linear and non-linear chaotic toy models.
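For orientation, the textbook strong-constraint objective function that the thesis builds on (standard notation assumed here, not necessarily the thesis's own):

```latex
% x_0: initial state (the control variable), x_b: background state,
% B: background-error covariance, y_i: observations, H_i: observation
% operators, R_i: observation-error covariances, M_{0->i}: model flow.
J(x_0) = \tfrac{1}{2}\,(x_0 - x_b)^{\mathsf T} B^{-1} (x_0 - x_b)
       + \tfrac{1}{2}\sum_{i=0}^{n} \bigl(y_i - H_i(x_i)\bigr)^{\mathsf T}
         R_i^{-1} \bigl(y_i - H_i(x_i)\bigr),
\qquad x_i = M_{0\to i}(x_0)
```

wc4DVAR relaxes the constraint $x_i = M_{0\to i}(x_0)$ by admitting model-error terms penalised by a model-error covariance, which is the source of the additional degrees of freedom mentioned above.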
Abstract:
Data from 58 strong-lensing events surveyed by the Sloan Lens ACS Survey are used to estimate the projected galaxy mass inside their Einstein radii by two independent methods: stellar dynamics and strong gravitational lensing. We perform a joint analysis of these two estimates inside models with up to three degrees of freedom with respect to the lens density profile, stellar velocity anisotropy, and line-of-sight (LOS) external convergence, which incorporates the effect of the large-scale structure on strong lensing. A Bayesian analysis is employed to estimate the model parameters, evaluate their significance, and compare models. We find that the data favor Jaffe's light profile over Hernquist's, but that any particular choice between these two does not change the qualitative conclusions with respect to the features of the system that we investigate. The density profile is compatible with an isothermal one, being slightly steeper and having an uncertainty in the logarithmic slope of the order of 5% in models that take into account a prior ignorance on anisotropy and external convergence. We identify a considerable degeneracy between the density-profile slope and the anisotropy parameter, which largely increases the uncertainties in the estimates of these parameters, but we encounter no evidence in favor of an anisotropic velocity distribution on average for the whole sample. An LOS external convergence following a prior probability distribution given by cosmology has a small effect on the estimation of the lens density profile, but can increase the dispersion of its value by nearly 40%.
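For context, the standard relation behind the lensing mass estimate inside the Einstein radius (a textbook expression with assumed notation, not the paper's):

```latex
% theta_E: Einstein radius; D_l, D_s, D_ls: angular-diameter distances to
% the lens, to the source, and from lens to source; Sigma_cr: critical
% surface density.
M_E = \Sigma_{\mathrm{cr}}\, \pi\, (\theta_E D_l)^2,
\qquad
\Sigma_{\mathrm{cr}} = \frac{c^2}{4\pi G}\,\frac{D_s}{D_l\, D_{ls}}
```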
Abstract:
Statistical properties of a two-dimensional ideal dispersion of polydisperse micelles are derived by analyzing the convergence properties of a sum rule set by mass conservation. Internal micellar degrees of freedom are accounted for by a microscopic model describing small displacements of the constituent amphiphiles with respect to their equilibrium positions. The transfer matrix (TM) method is employed to compute the internal micelle partition function. We show that the conditions under which the sum rule is saturated by the largest eigenvalue of the TM determine the value of amphiphile concentration above which the dispersion becomes highly polydisperse and micelle sizes approach a Schultz distribution.
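A minimal sketch of the transfer-matrix idea (a generic two-state toy with assumed energies; the paper's micelle model is not reproduced): the partition function of an $N$-unit chain is dominated by the largest TM eigenvalue for large $N$.

```python
import numpy as np

beta = 1.0
E = np.array([[0.0, 1.0],   # toy nearest-neighbour pair energies (assumed)
              [1.0, 0.5]])
T = np.exp(-beta * E)       # transfer matrix T[a, b] = exp(-beta E(a, b))

N = 50
Z_N = np.ones(2) @ np.linalg.matrix_power(T, N - 1) @ np.ones(2)
lam_max = np.linalg.eigvalsh(T).max()
# log Z_N / (N - 1) -> log(lam_max): the largest eigenvalue sets the free
# energy per unit, which is what saturates the mass-conservation sum rule.
print(np.log(Z_N) / (N - 1), np.log(lam_max))
```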
Abstract:
Relativistic heavy-ion collisions are the ideal experimental tool to explore the QCD phase diagram. Several results show that a very hot medium with a high energy density and partonic degrees of freedom is formed in these collisions, creating a new state of matter. Measurements of strange hadrons can bring important information about the bulk properties of such matter. The elliptic flow of strange hadrons such as $\phi$, $K_S^0$, $\Lambda$, $\Xi$ and $\Omega$ shows that collectivity is developed at the partonic level and that at intermediate $p_T$ quark coalescence is the dominant mechanism of hadronization. The nuclear modification factor is another indicator of the presence of a very dense medium. The comparison between measurements of Au+Au and d+Au collisions, where only cold nuclear matter effects are expected, can shed more light on the bulk properties. In these proceedings, recent results from the STAR experiment on bulk matter properties are presented.
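For context, the standard definition of the nuclear modification factor referred to above (textbook form; notation assumed):

```latex
% <N_coll>: average number of binary nucleon-nucleon collisions (e.g. from a
% Glauber model); suppression (R_AA << 1) at high p_T signals a dense medium.
R_{AA}(p_T) = \frac{\mathrm{d}N_{AA}/\mathrm{d}p_T}
                   {\langle N_{\mathrm{coll}} \rangle\, \mathrm{d}N_{pp}/\mathrm{d}p_T}
```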
Abstract:
We study the photoassociation of Bose-Einstein condensed atoms into molecules using an optical cavity field. The driven cavity field introduces a dynamical degree of freedom into the photoassociation process, whose role in determining the stationary behavior has not previously been considered. The semiclassical stationary solutions for the atoms and molecules, as well as the intracavity field, are found, and their stability and scaling properties are determined in terms of experimentally controllable parameters, including the driving amplitude of the cavity and the nonlinear interactions between atoms and molecules. For weak cavity driving, we find that a bifurcation in the atom and molecule numbers occurs, signalling a transition from a stable steady state to nonlinear Rabi oscillations. For a strongly driven cavity, there exists bistability in the atom and molecule numbers.
Abstract:
I am honored to respond to Paul Guyer’s elaboration on the role of examples of perfectionism in Cavell’s and Kant’s philosophies. Guyer’s appeal to Kant’s notion of freedom opens the way for suggestive readings of Cavell’s work on moral perfectionism but also, as I will show, for controversy. There are salient aspects of both Kant’s and Cavell’s philosophy that are crucial to understanding perfectionism and, let me call it, perfectionist education, that I wish to emphasize in response to Guyer. In responding to Guyer’s text, I shall do three things. First, I shall explain why I think it is misleading to speak of Cavell’s view that moral perfectionism is involved in a struggle to make oneself intelligible to oneself and others in terms of necessary and sufficient conditions for moral perfection. Rather, I will suggest that the constant work on oneself that is at the core of Cavell’s moral perfectionism is a constant work for intelligibility. Second, I shall recall a feature of Cavell’s perfectionism that Guyer does not explicitly speak of: the idea that perfectionism is a theme, “outlook or dimension of thought embodied and developed in a set of texts.” Or, as Cavell goes on to say, “there is a place in mind where good books are in conversation. … [W]hat they often talk about … is how they can be, or sound, so much better than the people that compose them.” This involves what I would call a perfectionist conception of the history of philosophy and the kinds of texts we take to belong to such history. Third, I shall sketch out how the struggle for intelligibility and a perfectionist view of engagement with texts and philosophy can lead to a view of philosophy as a form of education in itself. In concluding these three “criticisms,” I reach a position that I think is quite close to Guyer’s, but with a slightly shifted emphasis on what it means to read Kant and Cavell from a perfectionist point of view.