91 results for second-order models


Relevance: 80.00%

Abstract:

Disturbances of arbitrary amplitude are superposed on a basic flow which is assumed to be steady and either (a) two-dimensional, homogeneous, and incompressible (rotating or non-rotating) or (b) stably stratified and quasi-geostrophic. Flow over shallow topography is allowed in either case. The basic flow, as well as the disturbance, is assumed to be subject neither to external forcing nor to dissipative processes like viscosity. An exact, local ‘wave-activity conservation theorem’ is derived in which the density A and flux F are second-order ‘wave properties’ or ‘disturbance properties’, meaning that they are $O(a^2)$ in magnitude as disturbance amplitude $a \rightarrow 0$, and that they are evaluable correct to $O(a^2)$ from linear theory, to $O(a^3)$ from second-order theory, and so on to higher orders in a. For a disturbance in the form of a single, slowly varying, non-stationary Rossby wavetrain, $\overline{F}/\overline{A}$ reduces approximately to the Rossby-wave group velocity, where $\overline{(\cdot)}$ is an appropriate averaging operator. F and A have the formal appearance of Eulerian quantities, but generally involve a multivalued function the correct branch of which requires a certain amount of Lagrangian information for its determination. It is shown that, in a certain sense, the construction of conservable, quasi-Eulerian wave properties like A is unique and that the multivaluedness is inescapable in general. The connection with the concepts of pseudoenergy (quasi-energy), pseudomomentum (quasi-momentum), and ‘Eliassen-Palm wave activity’ is noted. The relationship of this and similar conservation theorems to dynamical fundamentals and to Arnol'd's nonlinear stability theorems is discussed in the light of recent advances in Hamiltonian dynamics. These show where such conservation theorems come from and how to construct them in other cases. An elementary proof of the Hamiltonian structure of two-dimensional Eulerian vortex dynamics is put on record, with explicit attention to the boundary conditions. The connection between Arnol'd's second stability theorem and the suppression of shear and self-tuning resonant instabilities by boundary constraints is discussed, and a finite-amplitude counterpart to Rayleigh's inflection-point theorem is noted.
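
For orientation, a local wave-activity conservation law of the kind described above has the generic form (an illustrative sketch of the structure, not a quotation from the paper):

\[
\frac{\partial A}{\partial t} + \nabla \cdot \mathbf{F} = 0,
\qquad A = O(a^2), \quad \mathbf{F} = O(a^2) \ \text{as } a \rightarrow 0,
\]

so that for a single slowly varying wavetrain $\overline{\mathbf{F}} \approx \mathbf{c}_g \overline{A}$, with $\mathbf{c}_g$ the Rossby-wave group velocity.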

Relevance: 80.00%

Abstract:

The concept of slow vortical dynamics and its role in theoretical understanding is central to geophysical fluid dynamics. It leads, for example, to “potential vorticity thinking” (Hoskins et al. 1985). Mathematically, one imagines an invariant manifold within the phase space of solutions, called the slow manifold (Leith 1980; Lorenz 1980), to which the dynamics are constrained. Whether this slow manifold truly exists has been a major subject of inquiry over the past 20 years. It has become clear that an exact slow manifold is an exceptional case, restricted to steady or perhaps temporally periodic flows (Warn 1997). Thus the concept of a “fuzzy slow manifold” (Warn and Ménard 1986) has been suggested. The idea is that nearly slow dynamics will occur in a stochastic layer about the putative slow manifold. The natural question then is, how thick is this layer? In a recent paper, Ford et al. (2000) argue that Lighthill emission—the spontaneous emission of freely propagating acoustic waves by unsteady vortical flows—is applicable to the problem of balance, with the Mach number Ma replaced by the Froude number F, and that it is a fundamental mechanism for this fuzziness. They consider the rotating shallow-water equations and find emission of inertia–gravity waves at $O(F^2)$. This is rather surprising at first sight, because several studies of balanced dynamics with the rotating shallow-water equations have gone beyond second order in F, and found only an exponentially small unbalanced component (Warn and Ménard 1986; Lorenz and Krishnamurthy 1987; Bokhove and Shepherd 1996; Wirosoetisno and Shepherd 2000). We have no technical objection to the analysis of Ford et al. (2000), but wish to point out that it depends crucially on $R \gtrsim 1$, where R is the Rossby number. This condition requires the ratio of the characteristic length scale of the flow L to the Rossby deformation radius $L_R$ to go to zero in the limit F → 0. This is the low Froude number scaling of Charney (1963), which, while originally designed for the Tropics, has been argued to be also relevant to mesoscale dynamics (Riley et al. 1981). If $L/L_R$ is fixed, however, then F → 0 implies R → 0, which is the standard quasigeostrophic scaling of Charney (1948; see, e.g., Pedlosky 1987). In this limit there is reason to expect the fuzziness of the slow manifold to be “exponentially thin,” and balance to be much more accurate than is consistent with (algebraic) Lighthill emission.
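
A quick sketch of the scaling relation behind the last point, using standard shallow-water definitions (flow speed U, Coriolis parameter f, gravity-wave speed $\sqrt{gH}$; these definitions are assumed here rather than taken from the text):

\[
F = \frac{U}{\sqrt{gH}}, \qquad R = \frac{U}{fL}, \qquad L_R = \frac{\sqrt{gH}}{f}
\quad\Longrightarrow\quad \frac{F}{R} = \frac{L}{L_R}.
\]

Holding R fixed while $F \rightarrow 0$ therefore forces $L/L_R \rightarrow 0$ (the low Froude number scaling), whereas holding $L/L_R$ fixed forces $R \rightarrow 0$ along with F (the quasigeostrophic scaling).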

Relevance: 80.00%

Abstract:

Geophysical fluid models often support both fast and slow motions. As the dynamics are often dominated by the slow motions, it is desirable to filter out the fast motions by constructing balance models. An example is the quasi-geostrophic (QG) model, which is used widely in meteorology and oceanography for theoretical studies, in addition to practical applications such as model initialization and data assimilation. Although the QG model works quite well in the mid-latitudes, its usefulness diminishes as one approaches the equator. Thus far, attempts to derive similar balance models for the tropics have not been entirely successful, as the models generally filter out Kelvin waves, which contribute significantly to tropical low-frequency variability. There is much theoretical interest in the dynamics of planetary-scale Kelvin waves, especially for atmospheric and oceanic data assimilation where observations are generally only of the mass field and thus do not constrain the wind field without some kind of diagnostic balance relation. As a result, estimates of Kelvin wave amplitudes can be poor. Our goal is to find a balance model that includes Kelvin waves for planetary-scale motions. Using asymptotic methods, we derive a balance model for the weakly nonlinear equatorial shallow-water equations. Specifically, we adopt the ‘slaving’ method proposed by Warn et al. (Q. J. R. Meteorol. Soc., vol. 121, 1995, pp. 723–739), which avoids secular terms in the expansion and thus can in principle be carried out to any order. Unlike previous approaches, our expansion is based on a long-wave scaling and the slow dynamics is described using the height field instead of potential vorticity. The leading-order model is equivalent to the truncated long-wave model considered previously (e.g. Heckley & Gill, Q. J. R. Meteorol. Soc., vol. 110, 1984, pp. 203–217), which retains Kelvin waves in addition to equatorial Rossby waves. Our method allows for the derivation of higher-order models which significantly improve the representation of Rossby waves in the isotropic limit. In addition, the ‘slaving’ method is applicable even when the weakly nonlinear assumption is relaxed, and the resulting nonlinear model encompasses the weakly nonlinear model. We also demonstrate that the method can be applied to more realistic stratified models, such as the Boussinesq model.
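
A minimal sketch of the slaving ansatz in the sense of Warn et al. (1995), with generic symbols assumed here for illustration: writing the state as slow variables s and fast variables f, the fast variables are sought as a functional of the slow ones,

\[
f = F(s;\varepsilon) = F_0(s) + \varepsilon\, F_1(s) + \varepsilon^2 F_2(s) + \cdots,
\]

and the balance model follows by substituting this expansion into the equations of motion order by order. Because the expansion is of the slaving relation rather than of the time-dependent solution itself, secular terms do not arise and the procedure can in principle be carried to any order.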

Relevance: 80.00%

Abstract:

We present a dynamic causal model that can explain context-dependent changes in neural responses, in the rat barrel cortex, to electrical whisker stimulation at different frequencies. Neural responses were measured in terms of local field potentials. These were converted into current source density (CSD) data, and the time series of the CSD sink was extracted to provide a time series response train. The model structure consists of three layers (approximating the responses from the brain stem to the thalamus and then the barrel cortex), and the latter two layers contain nonlinearly coupled modules of linear second-order dynamic systems. The interaction of these modules forms a nonlinear regulatory system that determines the temporal structure of the neural response amplitude for the thalamic and cortical layers. The model is based on the measured population dynamics of neurons rather than the dynamics of a single neuron and was evaluated against CSD data from experiments with varying stimulation frequency (1–40 Hz), random pulse trains, and awake and anesthetized animals. The model parameters obtained by optimization for the different physiological conditions (anesthetized or awake) were significantly different. Following Friston, Mechelli, Turner, and Price (2000), this work is part of a formal mathematical system currently being developed (Zheng et al., 2005) that links stimulation to the blood oxygen level dependent (BOLD) functional magnetic resonance imaging (fMRI) signal through neural activity and hemodynamic variables. The importance of the model described here is that it can be used to invert the hemodynamic measurements of changes in blood flow to estimate the underlying neural activity.
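
A toy sketch of the building block described above, nonlinearly coupled second-order linear modules, is given below. All function names, parameter values, and the coupling form are assumptions made for illustration; this is not the fitted model of the paper.

    import numpy as np

    def second_order_module(u, dt, wn, zeta, gain):
        """Linear second-order system x'' + 2*zeta*wn*x' + wn**2*x = gain*wn**2*u(t),
        integrated with a simple semi-implicit Euler step."""
        x, v = 0.0, 0.0
        out = np.empty(len(u))
        for k, uk in enumerate(u):
            a = gain * wn**2 * uk - 2.0 * zeta * wn * v - wn**2 * x
            v += dt * a
            x += dt * v
            out[k] = x
        return out

    # Toy 10 Hz pulse-train stimulus, 1 s long (values assumed).
    dt = 1e-3
    t = np.arange(0.0, 1.0, dt)
    stim = (np.mod(t, 0.1) < dt).astype(float)

    # "Thalamic" module driven by the stimulus.
    thal = second_order_module(stim, dt, wn=2 * np.pi * 8, zeta=0.4, gain=1.0)

    # "Cortical" module: its input is scaled by a saturating function of the
    # thalamic output, a crude stand-in for the nonlinear regulation of
    # response amplitude described in the abstract.
    cort = second_order_module(thal / (1.0 + np.abs(thal)), dt,
                               wn=2 * np.pi * 12, zeta=0.3, gain=2.0)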

Relevance: 80.00%

Abstract:

We present a Galerkin method with piecewise polynomial continuous elements for fully nonlinear elliptic equations. A key tool is the discretization proposed in Lakkis and Pryer (2011), allowing us to work directly on the strong form of a linear PDE. An added benefit of this discretization method is that a recovered (finite element) Hessian is a byproduct of the solution process. We build on the linear method and ultimately construct two different methodologies for the solution of second-order fully nonlinear PDEs. Benchmark numerical results illustrate the convergence properties of the scheme for some test problems, as well as the Monge–Ampère equation and the Pucci equation.
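
For concreteness, the two benchmark problems mentioned can be written in their standard forms (stated here for orientation; the precise formulations and boundary conditions used in the paper may differ). Both are fully nonlinear in the Hessian $D^2 u$:

\[
\det\!\big(D^2 u\big) = f \ \ (u \ \text{convex}),
\qquad
\mathcal{M}^{+}_{\lambda,\Lambda}\big(D^2 u\big)
= \Lambda \sum_{e_i > 0} e_i + \lambda \sum_{e_i < 0} e_i = f,
\]

where the first is the Monge–Ampère equation, the second is a Pucci-type extremal equation, and the $e_i$ denote the eigenvalues of $D^2 u$ with ellipticity constants $0 < \lambda \le \Lambda$.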

Relevance: 80.00%

Abstract:

Global wetlands are believed to be climate sensitive, and are the largest natural emitters of methane (CH4). Increased wetland CH4 emissions could act as a positive feedback to future warming. The Wetland and Wetland CH4 Inter-comparison of Models Project (WETCHIMP) investigated our present ability to simulate large-scale wetland characteristics and corresponding CH4 emissions. To ensure inter-comparability, we used a common experimental protocol driving all models with the same climate and carbon dioxide (CO2) forcing datasets. The WETCHIMP experiments were conducted for model equilibrium states as well as transient simulations covering the last century. Sensitivity experiments investigated model response to changes in selected forcing inputs (precipitation, temperature, and atmospheric CO2 concentration). Ten models participated, covering the spectrum from simple to relatively complex, including models tailored for either regional or global simulations. The models also varied in their methods of calculating wetland size and location, with some models simulating wetland area prognostically, while other models relied on remotely sensed inundation datasets, or an approach intermediate between the two. Four major conclusions emerged from the project. First, the models disagree extensively in their simulations of wetland areal extent and CH4 emissions, in both space and time. Simple metrics of wetland area, such as the latitudinal gradient, show large variability, principally between models that use inundation dataset information and those that independently determine wetland area. Agreement between the models improves for zonally summed CH4 emissions, but large variation between the models remains. For annual global CH4 emissions, the models vary by ±40% of the all-model mean (190 Tg CH4 yr−1). Second, all models show a strong positive response to increased atmospheric CO2 concentrations (857 ppm) in both CH4 emissions and wetland area. In response to increasing global temperatures (+3.4 °C, globally spatially uniform), on average, the models decreased wetland area and CH4 fluxes, primarily in the tropics, but the magnitude and sign of the response varied greatly. Models were least sensitive to increased global precipitation (+3.9%, globally spatially uniform), with a consistent small positive response in CH4 fluxes and wetland area. Results from the 20th century transient simulation show that interactions between climate forcings could have strong non-linear effects. Third, we presently do not have wetland methane observation datasets adequate to evaluate model fluxes at a spatial scale comparable to model grid cells (commonly 0.5°). This limitation severely restricts our ability to model global wetland CH4 emissions with confidence. Our simulated wetland extents are also difficult to evaluate due to extensive disagreements between wetland mapping and remotely sensed inundation datasets. Fourth, the large range in predicted CH4 emission rates leads to the conclusion that there is both substantial parameter and structural uncertainty in large-scale CH4 emission models, even after uncertainties in wetland areas are accounted for.

Relevance: 80.00%

Abstract:

Stochastic methods are a crucial area in contemporary climate research and are increasingly being used in comprehensive weather and climate prediction models as well as reduced-order climate models. Stochastic methods are used as subgrid-scale parameterizations (SSPs) as well as for model error representation, uncertainty quantification, data assimilation, and ensemble prediction. The need to use stochastic approaches in weather and climate models arises because we still cannot resolve all necessary processes and scales in comprehensive numerical weather and climate prediction models. In many practical applications one is mainly interested in the largest and potentially predictable scales and not necessarily in the small and fast scales. For instance, reduced-order models can simulate and predict large-scale modes. Statistical mechanics and dynamical systems theory suggest that in reduced-order models the impact of unresolved degrees of freedom can be represented by suitable combinations of deterministic and stochastic components and non-Markovian (memory) terms. Stochastic approaches in numerical weather and climate prediction models also lead to the reduction of model biases. Hence, there is a clear need for systematic stochastic approaches in weather and climate modeling. In this review, we present evidence for stochastic effects in laboratory experiments. Then we provide an overview of stochastic climate theory from an applied mathematics perspective. We also survey the current use of stochastic methods in comprehensive weather and climate prediction models and show that stochastic parameterizations have the potential to remedy many of the current biases in these comprehensive models.
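
As a minimal, generic illustration of the kind of stochastic representation discussed above (a conceptual sketch only, not a scheme taken from any particular model; all parameter values are assumed), a resolved slow variable can be driven by a red-noise (Ornstein-Uhlenbeck) term standing in for unresolved scales:

    import numpy as np

    def integrate(T=1000.0, dt=0.01, tau=0.5, sigma=1.5, seed=0):
        """Toy reduced-order model: double-well slow variable x forced by an
        Ornstein-Uhlenbeck process eta representing unresolved fast scales."""
        rng = np.random.default_rng(seed)
        n = int(T / dt)
        x = np.zeros(n)
        eta = np.zeros(n)
        for k in range(n - 1):
            drift = x[k] - x[k] ** 3          # deterministic double-well drift
            x[k + 1] = x[k] + dt * (drift + eta[k])
            # Euler-Maruyama update of the red-noise subgrid term
            eta[k + 1] = (eta[k] - dt * eta[k] / tau
                          + sigma * np.sqrt(dt) * rng.standard_normal())
        return x, eta

    x, eta = integrate()

With these (assumed) parameter values the noise occasionally drives transitions between the two deterministic regimes, a simple example of how unresolved variability can alter the statistics of the resolved dynamics.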

Relevance: 80.00%

Abstract:

Despite the generally positive contribution of supply management capabilities to firm performance, their respective routines require more depth of assessment. Using the resource-based view, we examine four routine bundles comprising ostensive and performative aspects of supply management capability – supply management integration, coordinated sourcing, collaboration management and performance assessment. Using structural equation modelling, we measure supply management capability empirically as a second-order latent variable and estimate its effect on a series of financial and operational performance measures. The routines-based approach allows us to demonstrate a different, more fine-grained approach for assessing consistent bundles of homogeneous patterns of activity across firms. The results suggest supply management capability is formed of internally consistent routine bundles, which are significantly related to financial performance, mediated by operational performance. Our results confirm an indirect effect on firm performance for ‘core’ routines forming the architecture of a supply management capability. Supply management capability primarily improves the operational performance of the business, which is subsequently translated into improved financial performance. The study is significant for practice as it offers a different view about the face-valid rationale of supply management directly influencing firm financial performance. We confound this assumption, prompting caution when placing too much importance on directly assessing supply management capability using the financial performance of the business.

Relevance: 80.00%

Abstract:

A new sparse kernel density estimator is introduced based on the minimum integrated square error criterion for the finite mixture model. Since the constraint on the mixing coefficients of the finite mixture model is on the multinomial manifold, we use the well-known Riemannian trust-region (RTR) algorithm for solving this problem. The first- and second-order Riemannian geometry of the multinomial manifold is derived and utilized in the RTR algorithm. Numerical examples are employed to demonstrate that the proposed approach is effective in constructing sparse kernel density estimators with an accuracy competitive with that of existing kernel density estimators.
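
In outline (standard notation assumed here, not quoted from the paper), the estimator has the finite-mixture form with mixing weights constrained to the probability simplex, i.e. the multinomial manifold:

\[
\hat p(\mathbf{x}) = \sum_{k=1}^{N} \beta_k\, K_\sigma(\mathbf{x}, \mathbf{x}_k),
\qquad \beta_k \ge 0, \qquad \sum_{k=1}^{N} \beta_k = 1,
\]

where the $\mathbf{x}_k$ are the training samples and $K_\sigma$ is a kernel of width $\sigma$. Sparsity arises when minimizing the integrated square error drives most of the $\beta_k$ to zero, and the Riemannian trust-region iteration optimizes the weights directly on this constraint set.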

Relevance: 80.00%

Abstract:

Dynamic soundtracking presents various practical and aesthetic challenges to composers working with games. This paper presents an implementation of a system addressing some of these challenges with an affectively driven music generation algorithm based on a second-order Markov model. The system can respond in real time to emotional trajectories derived from two dimensions of affect on the circumplex model (arousal and valence), which are mapped to five musical parameters. A transition matrix is employed to vary the generated output in continuous response to the affective state intended by the gameplay.
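
A minimal sketch of second-order Markov generation over pitches (illustrative only; the training corpus, the mapping to five musical parameters, and the affect-driven variation of the transition matrix in the actual system are all assumed or omitted here):

    import random
    from collections import defaultdict

    # Build a second-order transition table from an example pitch sequence
    # (MIDI note numbers; the training data are assumed for illustration).
    corpus = [60, 62, 64, 65, 67, 65, 64, 62, 60, 64, 67, 72, 67, 64, 60]
    table = defaultdict(list)
    for a, b, c in zip(corpus, corpus[1:], corpus[2:]):
        table[(a, b)].append(c)   # next note conditioned on the two preceding notes

    def generate(n, seed=(60, 62)):
        """Generate n notes; fall back to a random known context if a pair is unseen."""
        out = list(seed)
        for _ in range(n):
            ctx = tuple(out[-2:])
            choices = table.get(ctx) or table[random.choice(list(table))]
            out.append(random.choice(choices))
        return out

    print(generate(16))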

Relevance: 80.00%

Abstract:

Cerrado savannas have the greatest fire activity of all major global land-cover types and play a significant role in the global carbon cycle. During the 21st century, temperatures are projected to increase by ~3 °C, coupled with a precipitation decrease of ~20%. Although these conditions could potentially intensify drought stress, it is unknown how that might alter vegetation composition and fire regimes. To assess how Neotropical savannas responded to past climate changes, a 14 500-year, high-resolution, sedimentary record from Huanchaca Mesetta, a palm swamp located in the cerrado savanna in northeastern Bolivia, was analyzed with phytoliths, stable isotopes, and charcoal. A nonanalogue, cold-adapted vegetation community dominated the Lateglacial–early Holocene period (14 500–9000 cal yr BP), which included trees and C3 Pooideae and C4 Panicoideae grasses. The Lateglacial vegetation was fire-sensitive and fire activity during this period was low, likely responding to fuel availability and limitation. Although similar vegetation characterized the early Holocene, the warming conditions associated with the onset of the Holocene led to an initial increase in fire activity. Huanchaca Mesetta became increasingly fire-dependent during the middle Holocene with the expansion of C4 fire-adapted grasses. However, as warm, dry conditions, characterized by increased length and severity of the dry season, continued, fuel availability decreased. The establishment of the modern palm swamp vegetation occurred at 5000 cal yr BP. Edaphic factors are the first-order control on vegetation on the rocky quartzite mesetta. Where soils are sufficiently thick, climate is the second-order control of vegetation on the mesetta. The presence of the modern palm swamp is attributed to two factors: (1) increased precipitation that raised water table levels and (2) decreased frequency and duration of surazos (cold wind incursions from Patagonia), leading to increased temperature minima. Natural (soil, climate, fire) drivers rather than anthropogenic drivers control the vegetation and fire activity at Huanchaca Mesetta. Thus the cerrado savanna ecosystem of the Huanchaca Plateau has exhibited ecosystem resilience to major climatic changes in both temperature and precipitation since the Lateglacial period.

Relevance: 80.00%

Abstract:

In order to move the nodes in a moving mesh method, a time-stepping scheme is required which is ideally explicit and non-tangling (non-overtaking in one dimension (1-D)). Such a scheme is discussed in this paper, together with its drawbacks, and illustrated in 1-D in the context of a velocity-based Lagrangian conservation method applied to first-order and second-order examples which exhibit a regime change after node compression. An implementation in multiple dimensions is also described in some detail.
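
To make the non-tangling requirement concrete in 1-D (a standard observation stated here for orientation, not quoted from the paper): if node positions are advanced explicitly by $x_i^{n+1} = x_i^n + \Delta t\, v_i^n$, then neighbouring nodes do not overtake provided

\[
x_{i+1}^{n+1} - x_i^{n+1}
= \big(x_{i+1}^{n} - x_i^{n}\big) + \Delta t\,\big(v_{i+1}^{n} - v_i^{n}\big) > 0,
\qquad\text{i.e.}\qquad
\Delta t < \frac{x_{i+1}^{n} - x_i^{n}}{v_i^{n} - v_{i+1}^{n}} \ \ \text{wherever } v_i^{n} > v_{i+1}^{n},
\]

so a purely explicit update is non-tangling only under a time-step restriction of this kind.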

Relevance: 80.00%

Abstract:

Background: Access to, and the use of, information and communication technology (ICT) is increasingly becoming a vital component of mainstream life. First-order factors (e.g. time and money) and second-order factors (e.g. beliefs of staff members) affect the use of ICT in different contexts. It is timely to investigate what these factors may be in the context of service provision for adults with intellectual disabilities, given the role ICT could play in facilitating communication and access to information and opportunities as suggested in Valuing People.

Method: Taking a qualitative approach, nine day service sites within one organization were visited over a period of 6 months to observe ICT-related practice and seek the views of staff members working with adults with intellectual disabilities. All day services were equipped with modern ICT equipment including computers, digital cameras, Internet connections and related peripherals.

Results: Staff members reported time, training and budget as significant first-order factors. Organizational culture and beliefs about the suitability of technology for older or less able service users were the striking second-order factors mentioned. Despite similar levels of equipment, support and training, ICT use had developed in very different ways across sites.

Conclusion: The provision of ICT equipment and training is not sufficient to ensure their use; the beliefs of staff members and the organizational culture of sites play a substantial role in how ICT is used with and by service users. Activity theory provides a useful framework for considering how first- and second-order factors are related. Staff members need to be given clear information about the broader purpose of activities in day services, especially in relation to the lifelong learning agenda, in order to see the relevance and usefulness of ICT resources for all service users.

Relevance: 80.00%

Abstract:

Deuterium (δD) and oxygen (δ18O) isotopes are powerful tracers of the hydrological cycle and have been extensively used for paleoclimate reconstructions as they can provide information on past precipitation, temperature and atmospheric circulation. More recently, the use of 17Oexcess, derived from precise measurement of δ17O and δ18O, gives new and additional insights in tracing the hydrological cycle, although uncertainties still surround this proxy. However, 17Oexcess could provide additional information on the atmospheric conditions at the moisture source as well as about fractionations associated with transport and site processes. In this paper we trace water stable isotopes (δD, δ17O and δ18O) along their path from precipitation to cave drip water and finally to speleothem fluid inclusions for Milandre cave in northwestern Switzerland. A two-year-long, daily resolved precipitation isotope record close to the cave site is compared to collected cave drip water (3-month average resolution) and fluid inclusions of modern and Holocene stalagmites. Amount-weighted mean δD, δ18O and δ17O are −71.0‰, −9.9‰ and −5.2‰ for precipitation, −60.3‰, −8.7‰ and −4.6‰ for cave drip water, and −61.3‰, −8.3‰ and −4.7‰ for recent fluid inclusions, respectively. Second-order parameters have also been derived in precipitation and drip water and present similar values, with 18 per meg for 17Oexcess, whereas d-excess is 1.5‰ more negative in drip water. Furthermore, the atmospheric signal is shifted towards enriched values in the drip water and fluid inclusions (a shift of ~ +10‰ for δD). The isotopic composition of cave drip water exhibits a weak seasonal signal which is shifted by around 8–10 months (groundwater residence time) when compared to the precipitation. Moreover, we carried out the first δ17O measurement in speleothem fluid inclusions, as well as the first comparison of the δ17O behaviour from the meteoric water to the fluid inclusion entrapment in speleothems. This study on precipitation, drip water and fluid inclusions will be used as a speleothem proxy calibration for Milandre cave in order to reconstruct paleotemperatures and moisture source variations for Western Central Europe.

Relevance: 80.00%

Abstract:

Tensor clustering is an important tool that exploits intrinsically rich structures in real-world multiarray or tensor datasets. Often in dealing with those datasets, standard practice is to use subspace clustering based on vectorizing the multiarray data. However, vectorization of tensorial data does not exploit the complete structural information. In this paper, we propose a subspace clustering algorithm without adopting any vectorization process. Our approach is based on a novel heterogeneous Tucker decomposition model that takes cluster membership information into account. We propose a new clustering algorithm that alternates between the different modes of the proposed heterogeneous tensor model. All but the last mode have closed-form updates. Updating the last mode reduces to optimizing over the multinomial manifold, for which we investigate the second-order Riemannian geometry and propose a trust-region algorithm. Numerical experiments show that our proposed algorithm competes effectively with state-of-the-art clustering algorithms that are based on tensor factorization.
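
For orientation, a bare-bones (unconstrained) Tucker decomposition via truncated higher-order SVD is sketched below in plain NumPy. It illustrates the core-plus-factor-matrices structure that the heterogeneous model builds on, but includes none of the clustering-specific constraints or the manifold trust-region step described above; all names and the toy sizes are assumptions for illustration.

    import numpy as np

    def unfold(X, mode):
        """Mode-n unfolding: rearrange X into a matrix with the `mode` dimension as rows."""
        return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

    def hosvd(X, ranks):
        """Truncated higher-order SVD: one factor matrix per mode, then the core tensor."""
        factors = []
        for mode, r in enumerate(ranks):
            U, _, _ = np.linalg.svd(unfold(X, mode), full_matrices=False)
            factors.append(U[:, :r])
        core = X
        for mode, U in enumerate(factors):
            # Mode-n product of the (partially compressed) core with U^T.
            core = np.moveaxis(np.tensordot(U.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
        return core, factors

    # Toy example: compress a random 3-way array to a (2, 3, 2) core.
    X = np.random.default_rng(0).standard_normal((6, 8, 5))
    core, factors = hosvd(X, ranks=(2, 3, 2))
    print(core.shape, [U.shape for U in factors])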