933 results for 3-DIMENSIONAL POLYMERS
Abstract:
With the introduction of new observing systems based on asynoptic observations, the analysis problem has changed in character. In the near future we may expect that a considerable part of meteorological observations will be unevenly distributed in four dimensions, i.e. three dimensions in space and one in time. The term analysis, or objective analysis in meteorology, means the process of interpolating meteorological observations from unevenly distributed locations to a network of regularly spaced grid points. Because numerical weather prediction models must solve the governing finite-difference equations on such a grid lattice, objective analysis is a three-dimensional (or mostly two-dimensional) interpolation technique. As a consequence of the structure of the conventional synoptic network, with separated data-sparse and data-dense areas, four-dimensional analysis has in fact been in intensive use for many years. Weather services have thus based their analyses not only on synoptic data at the time of the analysis and on climatology, but also on the fields predicted from the previous observation hour and valid at the time of the analysis. The inclusion of the time dimension in objective analysis will be called four-dimensional data assimilation. From one point of view it seems possible to apply the conventional technique to the new data sources by simply reducing the time interval in the analysis-forecasting cycle. This could in fact be justified for the conventional observations as well. We have fairly good coverage of surface observations 8 times a day, and several upper-air stations make radiosonde and radiowind observations 4 times a day. If we use a 3-hour step in the analysis-forecasting cycle instead of the more usual 12 hours, we may without any difficulty treat all observations as synoptic.
No observation would thus be more than 90 minutes off time and the observations even during strong transient motion would fall within a horizontal mesh of 500 km * 500 km.
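The timing claim can be checked directly. A minimal sketch (illustrative arithmetic, not an operational assimilation scheme): assign each observation to the nearest analysis time in a 3-hour cycle and confirm that no observation is more than 90 minutes off time.

```python
# Sketch (not from the paper): with an H-hour analysis-forecasting cycle,
# an observation is at most H/2 hours from the nearest analysis time.
# For a 3-hour (180 min) cycle the maximum offset is 90 minutes.

def offset_minutes(obs_minute, cycle_minutes=180):
    """Minutes between an observation time and its nearest analysis time."""
    r = obs_minute % cycle_minutes
    return min(r, cycle_minutes - r)

# Check every minute of a day against a 3-hour cycle.
max_offset = max(offset_minutes(m) for m in range(24 * 60))
print(max_offset)  # 90
```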
Abstract:
This study is concerned with how the attractor dimension of the two-dimensional Navier–Stokes equations depends on characteristic length scales, including the system integral length scale, the forcing length scale, and the dissipation length scale. Upper bounds on the attractor dimension derived by Constantin, Foias and Temam are analysed. It is shown that the optimal attractor-dimension estimate grows linearly with the domain area (suggestive of extensive chaos), for a sufficiently large domain, if the kinematic viscosity and the amplitude and length scale of the forcing are held fixed. For sufficiently small domain area, a slightly “super-extensive” estimate becomes optimal. In the extensive regime, the attractor-dimension estimate is given by the ratio of the domain area to the square of the dissipation length scale defined, on physical grounds, in terms of the average rate of shear. This dissipation length scale (which is not necessarily the scale at which the energy or enstrophy dissipation takes place) can be identified with the dimension correlation length scale, the square of which is interpreted, according to the concept of extensive chaos, as the area of a subsystem with one degree of freedom. Furthermore, these length scales can be identified with a “minimum length scale” of the flow, which is rigorously deduced from the concept of determining nodes.
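The extensive-regime estimate above can be summarized compactly. The following is a sketch in assumed notation (not taken from the paper): A is the domain area, c an undetermined constant, and the dissipation length ℓ_d is written via one natural dimensional choice in terms of the kinematic viscosity ν and the average rate of shear ⟨S⟩.

```latex
% Sketch of the extensive-regime attractor-dimension estimate.
% The definition of \ell_d is one dimensionally consistent choice,
% assumed here for illustration.
\[
  \dim \mathcal{A} \;\sim\; c\,\frac{A}{\ell_d^{2}},
  \qquad
  \ell_d \;=\; \left(\frac{\nu}{\langle S\rangle}\right)^{1/2}.
\]
```

Under the extensive-chaos interpretation, ℓ_d² then plays the role of the area of a subsystem carrying one degree of freedom.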
Abstract:
We study two-dimensional (2D) turbulence in a doubly periodic domain driven by a monoscale-like forcing and damped by various dissipation mechanisms of the form ν_μ(−Δ)^μ. By “monoscale-like” we mean that the forcing is applied over a finite range of wavenumbers kmin ≤ k ≤ kmax, and that the ratio of enstrophy injection η ≥ 0 to energy injection ε ≥ 0 is bounded by kmin²ε ≤ η ≤ kmax²ε. Such a forcing is frequently considered in theoretical and numerical studies of 2D turbulence. It is shown that for μ ≥ 0 the asymptotic behaviour satisfies ∥u∥₁² ≤ kmax²∥u∥², where ∥u∥² and ∥u∥₁² are the energy and enstrophy, respectively. If the condition of monoscale-like forcing holds only in a time-mean sense, then the inequality holds in the time mean. It is also shown that for Navier–Stokes turbulence (μ = 1), the time-mean enstrophy dissipation rate is bounded from above by 2ν₁kmax². These results place strong constraints on the spectral distribution of energy and enstrophy and of their dissipation, and thereby on the existence of energy and enstrophy cascades, in such systems. In particular, the classical dual cascade picture is shown to be invalid for forced 2D Navier–Stokes turbulence (μ = 1) when it is forced in this manner. Inclusion of Ekman drag (μ = 0) along with molecular viscosity permits a dual cascade, but is incompatible with the log-modified −3 power law for the energy spectrum in the enstrophy-cascading inertial range. In order to achieve the latter, it is necessary to invoke an inverse viscosity (μ < 0). These constraints on permissible power laws apply for any spectrally localized forcing, not just for monoscale-like forcing.
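The monoscale-like condition can be illustrated numerically. A minimal sketch (the band-limited forcing spectrum below is arbitrary and assumed for illustration): for any forcing confined to kmin ≤ k ≤ kmax, the enstrophy injection is automatically pinched between kmin² and kmax² times the energy injection.

```python
# Illustration (assumed spectrum, not from the paper): for forcing F(k)
# supported on kmin <= k <= kmax, the enstrophy injection
# eta = sum k^2 F(k) satisfies kmin^2*eps <= eta <= kmax^2*eps,
# where eps = sum F(k) is the energy injection.

import random

kmin, kmax = 10, 14
ks = list(range(kmin, kmax + 1))
F = {k: random.uniform(0.0, 1.0) for k in ks}  # arbitrary band-limited forcing

eps = sum(F[k] for k in ks)         # energy injection rate
eta = sum(k**2 * F[k] for k in ks)  # enstrophy injection rate

assert kmin**2 * eps <= eta <= kmax**2 * eps
print("monoscale-like bound holds")
```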
Abstract:
The Kagome lattice, comprising a two-dimensional array of corner-sharing equilateral triangles, is central to the exploration of magnetic frustration. In such a lattice, antiferromagnetic coupling between ions in triangular plaquettes prevents all of the exchange interactions from being simultaneously satisfied, and a variety of novel magnetic ground states may result at low temperature. Experimental realization of a Kagome lattice remains difficult. The jarosite family of materials, of nominal composition AM3(SO4)2(OH)6 (A = monovalent cation; M = Fe3+, Cr3+), offers perhaps one of the most promising manifestations of the phenomenon of magnetic frustration in two dimensions. The magnetic properties of jarosites are, however, extremely sensitive to the degree of coverage of the magnetic sites. Consequently, there is considerable interest in the use of soft chemical techniques for the design and synthesis of novel materials in which to explore the effects of spin, degree of site coverage and connectivity on magnetic frustration.
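The frustration of a single triangular plaquette can be made concrete with a toy model. This is a sketch using Ising spins for simplicity (an illustration only, not a model of the jarosites, whose moments are not Ising-like): with antiferromagnetic coupling, at most two of the three bonds on a triangle can be satisfied at once.

```python
# Toy illustration (not from the paper): enumerate all Ising spin
# configurations on one antiferromagnetic triangle. A bond (i, j) is
# "satisfied" when s_i * s_j = -1; no configuration satisfies all three.

from itertools import product

best_satisfied = 0
for s1, s2, s3 in product((+1, -1), repeat=3):
    bonds = [s1 * s2, s1 * s3, s2 * s3]
    satisfied = sum(1 for b in bonds if b == -1)
    best_satisfied = max(best_satisfied, satisfied)

print(best_satisfied)  # 2 -- one bond is always frustrated
```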
Abstract:
The energy-Casimir stability method, also known as the Arnold stability method, has been widely used in fluid dynamical applications to derive sufficient conditions for nonlinear stability. The most commonly studied system is two-dimensional Euler flow. It is shown that the set of two-dimensional Euler flows satisfying the energy-Casimir stability criteria is empty for two important cases: (i) domains having the topology of the sphere, and (ii) simply-connected bounded domains with zero net vorticity. The results apply to both the first and the second of Arnold’s stability theorems. In the spirit of Andrews’ theorem, this puts a further limitation on the applicability of the method. © 2000 American Institute of Physics.
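For context, the criteria referred to above can be sketched in one standard textbook formulation (notation assumed here, not taken from the paper): a steady Euler flow whose vorticity q and streamfunction ψ satisfy a functional relation q = F(ψ) is nonlinearly stable if either of Arnold's two conditions holds.

```latex
% Sketch of the standard Arnold stability conditions (assumed notation):
% \lambda_1 is the smallest eigenvalue of -\Delta on the domain.
\[
  \text{(Arnold I)}\quad 0 < c_1 \le \frac{d\psi}{dq} \le c_2 < \infty,
  \qquad
  \text{(Arnold II)}\quad 0 < c_1 \le -\frac{d\psi}{dq} \le c_2 < \lambda_1^{-1}.
\]
```

The result stated in the abstract is that, for the two classes of domains listed, no Euler flow satisfies either set of conditions.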
Abstract:
The non-quadratic conservation laws of the two-dimensional Euler equations are used to show that the gravest modes in a doubly-periodic domain with aspect ratio L = 1 are stable up to translations (or structurally stable) for finite-amplitude disturbances. This extends a previous result based on conservation of energy and enstrophy alone. When L < 1, a saturation bound is established for the mode with wavenumber |k| = L⁻¹ (the next-gravest mode), which is linearly unstable. The method is applied to prove nonlinear structural stability of planetary wave two on a rotating sphere.
Abstract:
The quantitative effects of uniform strain and background rotation on the stability of a strip of constant vorticity (a simple shear layer) are examined. The thickness of the strip decreases in time under the strain, so it is necessary to formulate the linear stability analysis for a time-dependent basic flow. The results show that even a strain rate γ (scaled with the vorticity of the strip) as small as 0.25 suppresses the conventional Rayleigh shear instability mechanism, in the sense that the r.m.s. wave steepness cannot amplify by more than a certain factor, and must eventually decay. For γ < 0.25 the amplification factor increases as γ decreases; however, it is only 3 when γ = 0.065. Numerical simulations confirm the predictions of linear theory at small steepness and predict a threshold value necessary for the formation of coherent vortices. The results help to explain the impression, from numerous simulations of two-dimensional turbulence reported in the literature, that filaments of vorticity only infrequently roll up into vortices. The stabilization effect may be expected to extend to two- and three-dimensional quasi-geostrophic flows.
Abstract:
Faced with the strongly nonlinear and apparently random behaviour of the energy-containing scales in the atmosphere, geophysical fluid dynamicists have attempted to understand the synoptic-scale atmospheric flow within the context of two-dimensional homogeneous turbulence theory (e.g. FJØRTOFT [1]; LEITH [2]). However atmospheric observations (BOER and SHEPHERD [3] and Fig.1) show that the synoptic-scale transient flow evolves in the presence of a planetary-scale, quasi-stationary background flow which is approximately zonal (east-west). Classical homogeneous 2-D turbulence theory is therefore not strictly applicable to the transient flow. One is led instead to study 2-D turbulence in the presence of a large-scale (barotropically stable) zonal jet inhomogeneity.
Abstract:
We study the degree to which Kraichnan–Leith–Batchelor (KLB) phenomenology describes two-dimensional energy cascades in α turbulence, governed by ∂θ/∂t + J(ψ,θ) = ν∇²θ + f, where θ = (−Δ)^(α/2)ψ is generalized vorticity, and ψ̂(k) = k^(−α)θ̂(k) in Fourier space. These models differ in spectral non-locality, and include surface quasigeostrophic flow (α = 1), regular two-dimensional flow (α = 2) and rotating shallow flow (α = 3), which is the isotropic limit of a mantle convection model. We re-examine arguments for dual inverse energy and direct enstrophy cascades, including Fjørtoft analysis, which we extend to general α, and point out their limitations. Using an α-dependent eddy-damped quasinormal Markovian (EDQNM) closure, we seek self-similar inertial-range solutions and study their characteristics. Our present focus is not on coherent structures, which the EDQNM filters out, but on any self-similar and approximately Gaussian turbulent component that may exist in the flow and be described by KLB phenomenology. For this, the EDQNM is an appropriate tool. Non-local triads contribute increasingly to the energy flux as α increases. More importantly, the energy cascade is downscale in the self-similar inertial range for 2.5 < α < 10. At α = 2.5 and α = 10, the KLB spectra correspond, respectively, to enstrophy and energy equipartition, and the triad energy transfers and flux vanish identically. Eddy-turnover-time and strain-rate arguments suggest the inverse energy cascade should obey KLB phenomenology and be self-similar for α < 4. However, the downscale energy flux in the EDQNM self-similar inertial range for α > 2.5 leads us to predict that any inverse cascade for α ≥ 2.5 will not exhibit KLB phenomenology, and specifically the KLB energy spectrum. Numerical simulations confirm this: the inverse-cascade energy spectrum for α ≥ 2.5 is significantly steeper than the KLB prediction, while for α < 2.5 we obtain the KLB spectrum.
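The Fjørtoft analysis mentioned above reduces, in the classical case α = 2, to a textbook triad calculation. A minimal sketch (standard argument; the wavenumbers are chosen for illustration): if energy dE leaves the middle wavenumber of a triad k1 < k2 < k3, conservation of energy and enstrophy (Z = k²E) fixes how it splits, sending most of the energy upscale and most of the enstrophy downscale.

```python
# Classic Fjortoft triad argument for alpha = 2 (standard textbook
# calculation, not numbers from the paper). Solving
#   dE1 + dE3 = dE  and  k1^2*dE1 + k3^2*dE3 = k2^2*dE
# gives the split between the upscale (k1) and downscale (k3) members.

def fjortoft_split(k1, k2, k3, dE=1.0):
    """Energy sent to k1 (upscale) and k3 (downscale) from k2."""
    dE1 = dE * (k3**2 - k2**2) / (k3**2 - k1**2)
    dE3 = dE * (k2**2 - k1**2) / (k3**2 - k1**2)
    return dE1, dE3

dE1, dE3 = fjortoft_split(1.0, 2.0, 3.0)
print(dE1, dE3)                      # 0.625 0.375: most energy goes upscale
print(1.0**2 * dE1, 3.0**2 * dE3)    # 0.625 3.375: most enstrophy goes downscale
```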
Abstract:
The low-molecular-weight (LMW) glutenin subunits are components of the highly cross-linked glutenin polymers that confer viscoelastic properties to gluten and dough. They have both quantitative and qualitative effects on dough quality that may relate to differences in their ability to form the inter-chain disulphide bonds that stabilise the polymers. In order to determine the relationship between dough quality and the amounts and properties of the LMW subunits, we have transformed the pasta wheat cultivars Svevo and Ofanto with three genes encoding proteins that differ in the number or position of their cysteine residues. The transgenes were delivered under the control of the high-molecular-weight (HMW) subunit 1Dx5 gene promoter and terminator regions, and the encoded proteins were C-terminally tagged by the introduction of the c-myc epitope. Stable transformants were obtained with both cultivars, and the use of a specific antibody to the c-myc epitope tag allowed the transgene products to be readily detected in the complex mixture of LMW subunits. A range of transgene expression levels was observed. The addition of the epitope tag did not compromise the correct folding of the transgenic subunits or their incorporation into the glutenin polymers. Our results demonstrate that the ability to specifically epitope-tag LMW glutenin transgenes can greatly assist in the elucidation of their individual contributions to the functionality of the complex gluten system.
Abstract:
An open-framework indium selenide, [C7H10N][In9Se14], has been prepared under solvothermal conditions in the presence of 3,5-dimethylpyridine, and characterized by single-crystal diffraction, thermogravimetry, elemental analysis, FTIR spectroscopy and UV-Vis diffuse reflectance. The crystal structure of [C7H10N][In9Se14] contains an unusual building unit, in which corner-linked and edge-linked InSe4^5- tetrahedra coexist. The presence of one-dimensional circular channels, of ca. 6 Å diameter, results in approximately 25% of solvent-accessible void space.
Abstract:
We first propose a simple task for eliciting attitudes toward risky choice, the SGG lottery-panel task, which consists of a series of lotteries constructed to compensate riskier options with higher risk-return trade-offs. Using principal component analysis, we show that the SGG lottery-panel task is capable of capturing two dimensions of individual risky decision making, i.e. subjects' average risk taking and their sensitivity to variations in risk-return. From the results of a large experimental dataset, we confirm that the task systematically captures a number of regularities, such as: a tendency toward risk-averse behavior (only around 10% of choices are compatible with risk neutrality); an attraction to certain payoffs compared to low-risk lotteries, compatible with the over-(under-)weighting of small (large) probabilities predicted by PT; and gender differences, i.e. males being consistently less risk averse than females, but both genders being similarly responsive to increases in the risk premium. Another interesting result is that in hypothetical choices most individuals increase their risk taking in response to an increase in the return to risk, as predicted by PT, while across panels with real rewards we see even more changes, but opposite to the expected pattern of riskier choices for higher risk-returns. We therefore conclude from our data that an "economic anomaly" emerges in the real-reward choices, opposite to the hypothetical choices. These findings are in line with Camerer's (1995, p. 635) view that although in many domains paid subjects probably do exert extra mental effort which improves their performance, choice over money gambles is not likely to be a domain in which effort will improve adherence to rational axioms.
Finally, we demonstrate that both dimensions of risk attitudes, average risk taking and sensitivity to variations in the return to risk, are desirable not only to describe behavior under risk but also to explain behavior in other contexts, as illustrated by an example. In the second study, we propose three additional treatments intended to elicit risk attitudes under high stakes and mixed-outcome (gains and losses) lotteries. Using a dataset obtained from a hypothetical implementation of the tasks, we show that the new treatments are able to capture both dimensions of risk attitudes. This new dataset allows us to describe several regularities, both at the aggregate and at the within-subjects level. We find that in every treatment over 70% of choices show some degree of risk aversion, and only between 0.6% and 15.3% of individuals are consistently risk neutral within the same treatment. We also confirm the existence of gender differences in the degree of risk taking: in all treatments females prefer safer lotteries compared to males. Regarding our second dimension of risk attitudes, we observe, in all treatments, an increase in risk taking in response to risk-premium increases. Treatment comparisons reveal other regularities, such as a lower degree of risk taking in large-stake treatments compared to low-stake treatments, and a lower degree of risk taking when losses are incorporated into the large-stake lotteries, results that are compatible with previous findings in the literature on stake-size effects (e.g., Binswanger, 1980; Bosch-Domènech & Silvestre, 1999; Hogarth & Einhorn, 1990; Holt & Laury, 2002; Kachelmeier & Shehata, 1992; Kühberger et al., 1999; Weber & Chapman, 2005; Wik et al., 2007) and domain effects (e.g., Brooks & Zank, 2005; Schoemaker, 1990; Wik et al., 2007). For small-stake treatments, by contrast, we find that the effect of incorporating losses into the outcomes is less clear.
At the aggregate level an increase in risk taking is observed, but also more dispersion in the choices, whilst at the within-subjects level the effect weakens. Finally, regarding responses to the risk premium, we find that sensitivity is lower in the mixed-lottery treatments (SL and LL) than in the gains-only treatments. In general, sensitivity to risk-return is more affected by the domain than by the stake size. Having described the properties of risk attitudes as captured by the SGG risk-elicitation task and its three new versions, it is important to recall that the danger of using unidimensional descriptions of risk attitudes goes beyond their incompatibility with modern economic theories such as PT and CPT, all of which call for tests with multiple degrees of freedom. Faithful to this recommendation, the contribution of this essay is an empirically and endogenously determined bi-dimensional specification of risk attitudes, useful for describing behavior under uncertainty and for explaining behavior in other contexts. Hopefully, this will contribute to the creation of large datasets containing a multidimensional description of individual risk attitudes, while at the same time providing a robust framework compatible with present and even more complex future descriptions of human attitudes toward risk.
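The two-dimensional structure described above can be sketched with a toy principal component analysis on synthetic choice data. Everything below is hypothetical (subject counts, panel structure and numbers are invented, not the SGG dataset): each row is a subject, each column the risk level chosen in one lottery panel, and the two leading components play the roles of "average risk taking" and "sensitivity to risk-return".

```python
# Toy PCA sketch (synthetic data; all numbers are hypothetical and not
# taken from the SGG experiments). Choices are generated from a per-subject
# overall risk-taking level plus a per-subject slope across panels of
# increasing risk premium, so two components should dominate.

import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_panels = 200, 4

level = rng.normal(0.0, 1.0, (n_subjects, 1))   # average risk taking
slope = rng.normal(0.0, 0.5, (n_subjects, 1))   # sensitivity to risk-return
trend = np.arange(n_panels) - (n_panels - 1) / 2.0
X = level + slope * trend + rng.normal(0.0, 0.1, (n_subjects, n_panels))

# PCA via eigendecomposition of the sample covariance matrix.
Xc = X - X.mean(axis=0)
cov = Xc.T @ Xc / (n_subjects - 1)
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]
explained = eigvals[order] / eigvals.sum()

print(explained[:2].sum())  # the first two components dominate
```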
Abstract:
This paper presents the development of a rapid method with ultraperformance liquid chromatography–tandem mass spectrometry (UPLC-MS/MS) for the qualitative and quantitative analyses of plant proanthocyanidins directly from crude plant extracts. The method utilizes a range of cone voltages to achieve the depolymerization step in the ion source of both smaller oligomers and larger polymers. The formed depolymerization products are further fragmented in the collision cell to enable their selective detection. This UPLC-MS/MS method is able to separately quantitate the terminal and extension units of the most common proanthocyanidin subclasses, that is, procyanidins and prodelphinidins. The resulting data enable (1) quantitation of the total proanthocyanidin content, (2) quantitation of total procyanidins and prodelphinidins including the procyanidin/prodelphinidin ratio, (3) estimation of the mean degree of polymerization for the oligomers and polymers, and (4) estimation of how the different procyanidin and prodelphinidin types are distributed along the chromatographic hump typically produced by large proanthocyanidins. All of this is achieved within the 10 min period of analysis, which makes the presented method a significant addition to the chemistry tools currently available for the qualitative and quantitative analyses of complex proanthocyanidin mixtures from plant extracts.
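Since each proanthocyanidin chain carries exactly one terminal unit, the mean degree of polymerization follows directly from the quantified terminal- and extension-unit amounts. A minimal sketch of that arithmetic (the concentrations below are hypothetical, not results from the paper):

```python
# Sketch (hypothetical numbers): the mean degree of polymerization (mDP)
# from quantified terminal and extension units, using the standard relation
#     mDP = (terminal + extension) / terminal
# which holds because every chain has exactly one terminal unit.

def mean_degree_of_polymerization(terminal, extension):
    """mDP from terminal and extension unit amounts (same units, e.g. umol/g)."""
    return (terminal + extension) / terminal

# Example: 5 umol/g terminal units and 35 umol/g extension units.
print(mean_degree_of_polymerization(5.0, 35.0))  # 8.0
```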
Abstract:
A set of high-resolution radar observations of convective storms has been collected to evaluate such storms in the UK Met Office Unified Model during the DYMECS project (Dynamical and Microphysical Evolution of Convective Storms). The 3-GHz Chilbolton Advanced Meteorological Radar was set up with a scan-scheduling algorithm to automatically track convective storms identified in real time from the operational rainfall radar network. More than 1,000 storm observations gathered over fifteen days in 2011 and 2012 are used to evaluate the model under various synoptic conditions supporting convection. In terms of the detailed three-dimensional morphology, storms in the 1500-m grid-length simulations are shown to produce horizontal structures a factor of 1.5–2 wider than those observed by radar. A set of nested model runs at grid lengths down to 100 m shows that the models converge in terms of storm width, but the storm structures in the simulations with the smallest grid lengths are too narrow and too intense compared to the radar observations. The modelled storms were surrounded by a region of drizzle without ice reflectivities above 0 dBZ aloft, which was related to the dominance of ice crystals and was improved by allowing only aggregates as an ice-particle habit. Simulations with graupel outperformed the standard configuration for heavy-rain profiles, but the storm structures were a factor of 2 too wide and the convective cores 2 km too deep.