130 results for 3-DIMENSIONAL FRAMEWORK
Abstract:
We study the degree to which Kraichnan–Leith–Batchelor (KLB) phenomenology describes two-dimensional energy cascades in α-turbulence, governed by ∂θ/∂t + J(ψ,θ) = ν∇²θ + f, where θ = (−Δ)^(α/2)ψ is the generalized vorticity and ψ̂(k) = k^(−α)θ̂(k) in Fourier space. These models differ in spectral non-locality, and include surface quasigeostrophic flow (α = 1), regular two-dimensional flow (α = 2) and rotating shallow flow (α = 3), which is the isotropic limit of a mantle convection model. We re-examine arguments for dual inverse energy and direct enstrophy cascades, including Fjørtoft analysis, which we extend to general α, and point out their limitations. Using an α-dependent eddy-damped quasinormal Markovian (EDQNM) closure, we seek self-similar inertial-range solutions and study their characteristics. Our present focus is not on coherent structures, which the EDQNM filters out, but on any self-similar and approximately Gaussian turbulent component that may exist in the flow and be described by KLB phenomenology. For this, the EDQNM is an appropriate tool. Non-local triads contribute increasingly to the energy flux as α increases. More importantly, the energy cascade is downscale in the self-similar inertial range for 2.5 < α < 10. At α = 2.5 and α = 10, the KLB spectra correspond, respectively, to enstrophy and energy equipartition, and the triad energy transfers and flux vanish identically. Eddy-turnover-time and strain-rate arguments suggest the inverse energy cascade should obey KLB phenomenology and be self-similar for α < 4. However, the downscale energy flux in the EDQNM self-similar inertial range for α > 2.5 leads us to predict that any inverse cascade for α ≥ 2.5 will not exhibit KLB phenomenology, and specifically the KLB energy spectrum. Numerical simulations confirm this: the inverse-cascade energy spectrum for α ≥ 2.5 is significantly steeper than the KLB prediction, while for α < 2.5 we obtain the KLB spectrum.
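As an illustration of the spectral relation above, the following is a minimal sketch of computing the generalized vorticity θ from the streamfunction ψ on a doubly periodic grid; the field, grid size and 2π-periodic domain are placeholder assumptions, not the paper's setup.

```python
import numpy as np

def generalized_vorticity(psi, alpha):
    """theta = (-Laplacian)^(alpha/2) psi, i.e. theta_hat(k) = |k|^alpha * psi_hat(k)."""
    n = psi.shape[0]
    dx = 2 * np.pi / n                           # assumes a 2*pi-periodic square domain
    k1d = 2 * np.pi * np.fft.fftfreq(n, d=dx)    # integer angular wavenumbers
    kx, ky = np.meshgrid(k1d, k1d, indexing="ij")
    kmag = np.sqrt(kx**2 + ky**2)
    theta_hat = kmag**alpha * np.fft.fft2(psi)   # equivalently psi_hat = kmag**(-alpha) * theta_hat
    return np.real(np.fft.ifft2(theta_hat))

# alpha = 1: surface quasigeostrophic flow; alpha = 2: theta is the ordinary vorticity -Laplacian(psi).
rng = np.random.default_rng(0)
psi = rng.standard_normal((64, 64))
theta = generalized_vorticity(psi, alpha=2)
```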
Abstract:
The scientific community is developing new global, regional, and sectoral scenarios to facilitate interdisciplinary research and assessment exploring the range of possible future climates and related physical changes that could pose risks to human and natural systems; how these changes could interact with social, economic, and environmental development pathways; the degree to which mitigation and adaptation policies can avoid and reduce risks; the costs and benefits of various policy mixes; residual impacts under alternative pathways; and the relationship of future climate change and adaptation and mitigation policy responses with sustainable development. This paper describes the background to, and the process of developing, the conceptual framework for these scenarios, which is set out in the three subsequent papers in this Special Issue (Van Vuuren et al.; O’Neill et al.; Kriegler et al.). The paper also discusses research needs to further develop and apply this framework. A key goal of the current framework design and its future development is to facilitate collaboration among climate change researchers from a broad range of perspectives and disciplines to develop policy- and decision-relevant scenarios and to explore the challenges and opportunities human and natural systems could face with additional climate change.
Abstract:
We first propose a simple task for eliciting attitudes toward risky choice, the SGG lottery-panel task, which consists of a series of lotteries constructed to compensate riskier options with higher risk-return trade-offs. Using Principal Component Analysis, we show that the SGG lottery-panel task is capable of capturing two dimensions of individual risky decision making, i.e. subjects’ average risk taking and their sensitivity towards variations in risk-return. From the results of a large experimental dataset, we confirm that the task systematically captures a number of regularities, such as: a tendency towards risk-averse behavior (only around 10% of choices are compatible with risk neutrality); an attraction to certain payoffs compared to low-risk lotteries, compatible with the over- (under-)weighting of small (large) probabilities predicted by PT; and gender differences, i.e. males being consistently less risk averse than females but both genders being similarly responsive to increases in the risk premium. Another interesting result is that in hypothetical choices most individuals increase their risk taking in response to an increase in the return to risk, as predicted by PT, while across panels with real rewards we see even more changes, but opposite to the expected pattern of riskier choices for higher risk-returns. We therefore conclude from our data that an “economic anomaly” emerges in the real-reward choices, opposite to the hypothetical choices. These findings are in line with Camerer's (1995) view that although in many domains paid subjects probably do exert extra mental effort which improves their performance, choice over money gambles is not likely to be a domain in which effort will improve adherence to rational axioms (p. 635). Finally, we demonstrate that both dimensions of risk attitudes, average risk taking and sensitivity towards variations in the return to risk, are desirable not only to describe behavior under risk but also to explain behavior in other contexts, as illustrated by an example. In the second study, we propose three additional treatments intended to elicit risk attitudes under high stakes and mixed-outcome (gains and losses) lotteries. Using a dataset obtained from a hypothetical implementation of the tasks, we show that the new treatments are able to capture both dimensions of risk attitudes. This new dataset allows us to describe several regularities, both at the aggregate and the within-subjects level. We find that in every treatment over 70% of choices show some degree of risk aversion and only between 0.6% and 15.3% of individuals are consistently risk neutral within the same treatment. We also confirm the existence of gender differences in the degree of risk taking; that is, in all treatments females prefer safer lotteries compared to males. Regarding our second dimension of risk attitudes we observe, in all treatments, an increase in risk taking in response to risk-premium increases. Treatment comparisons reveal other regularities, such as a lower degree of risk taking in large-stake treatments compared to low-stake treatments, and a lower degree of risk taking when losses are incorporated into the large-stake lotteries. These results are compatible with previous findings in the literature on stake-size effects (e.g., Binswanger, 1980; Antoni Bosch-Domènech & Silvestre, 1999; Hogarth & Einhorn, 1990; Holt & Laury, 2002; Kachelmeier & Shehata, 1992; Kühberger et al., 1999; B. J. Weber & Chapman, 2005; Wik et al., 2007) and domain effects (e.g., Brooks & Zank, 2005; Schoemaker, 1990; Wik et al., 2007). For small-stake treatments, by contrast, we find that the effect of incorporating losses into the outcomes is less clear: at the aggregate level an increase in risk taking is observed, but also more dispersion in the choices, whilst at the within-subjects level the effect weakens. Finally, regarding responses to the risk premium, we find that sensitivity is lower in the mixed-lottery treatments (SL and LL) than in the gains-only treatments. In general, sensitivity to risk-return is more affected by the domain than by the stake size. Having described the properties of risk attitudes as captured by the SGG risk elicitation task and its three new versions, it is important to recall that the danger of using unidimensional descriptions of risk attitudes goes beyond their incompatibility with modern economic theories such as PT and CPT, all of which call for tests with multiple degrees of freedom. Faithful to this recommendation, the contribution of this essay is an empirically and endogenously determined bi-dimensional specification of risk attitudes, useful for describing behavior under uncertainty and for explaining behavior in other contexts. Hopefully, this will contribute to creating large datasets containing a multidimensional description of individual risk attitudes, while at the same time allowing for a robust context, compatible with present and even future, more complex descriptions of human attitudes towards risk.
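A hedged sketch of the two-dimensional PCA decomposition described above, using synthetic choices standing in for the SGG lottery-panel responses; the real task, panel structure and coding of riskiness are not reproduced here.

```python
import numpy as np
from sklearn.decomposition import PCA

# Rows: subjects; columns: riskiness rank of the option chosen in each lottery panel (synthetic).
rng = np.random.default_rng(1)
choices = rng.integers(1, 7, size=(200, 4)).astype(float)

pca = PCA(n_components=2)
scores = pca.fit_transform(choices)        # PCA centres the data internally
# Under the abstract's interpretation, the first component tracks average risk taking and
# the second tracks sensitivity to variations in the risk-return trade-off across panels.
print(pca.explained_variance_ratio_)
print(scores[:3])
```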
Abstract:
A set of high-resolution radar observations of convective storms has been collected to evaluate such storms in the UK Met Office Unified Model during the DYMECS project (Dynamical and Microphysical Evolution of Convective Storms). The 3-GHz Chilbolton Advanced Meteorological Radar was set up with a scan-scheduling algorithm to automatically track convective storms identified in real time from the operational rainfall radar network. More than 1,000 storm observations gathered over fifteen days in 2011 and 2012 are used to evaluate the model under various synoptic conditions supporting convection. In terms of the detailed three-dimensional morphology, storms in the 1500-m grid-length simulations are shown to produce horizontal structures a factor of 1.5–2 wider than the radar observations. A set of nested model runs at grid lengths down to 100 m shows that the simulations converge in terms of storm width, but the storm structures at the smallest grid lengths are too narrow and too intense compared to the radar observations. The modelled storms were surrounded by a region of drizzle without ice reflectivities above 0 dBZ aloft, which was related to the dominance of ice crystals and was improved by allowing only aggregates as an ice-particle habit. Simulations with graupel outperformed the standard configuration for heavy-rain profiles, but the storm structures were a factor of 2 too wide and the convective cores 2 km too deep.
Abstract:
Automatic generation of classification rules has been an increasingly popular technique in commercial applications such as Big Data analytics, rule-based expert systems and decision-making systems. However, a principal problem that arises with most methods for the generation of classification rules is the overfitting of training data. When Big Data is dealt with, this may result in the generation of a large number of complex rules, which may not only increase computational cost but also lower the accuracy in predicting further unseen instances. This has led to the need to develop pruning methods for the simplification of rules. In addition, classification rules are used to make predictions after their generation is complete. Where efficiency is concerned, it is desirable to find the first rule that fires as quickly as possible by searching through a rule set; thus a suitable structure is required to represent the rule set effectively. In this chapter, the authors introduce a unified framework for the construction of rule-based classification systems consisting of three operations on Big Data: rule generation, rule simplification and rule representation. The authors also review some existing methods and techniques used for each of the three operations and highlight their limitations. They introduce some novel methods and techniques they have developed recently. These methods and techniques are also discussed in comparison with existing ones with respect to the efficient processing of Big Data.
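A minimal sketch, not the authors' methods, of the rule-representation point above: rules held in an ordered structure so that prediction can stop at the first rule that fires (the attributes and rules are illustrative).

```python
from dataclasses import dataclass

@dataclass
class Rule:
    conditions: dict   # attribute -> (operator, threshold), e.g. {"humidity": (">", 80.0)}
    label: str

_OPS = {">": lambda a, b: a > b, "<=": lambda a, b: a <= b, "==": lambda a, b: a == b}

def fires(rule, instance):
    return all(_OPS[op](instance[attr], val) for attr, (op, val) in rule.conditions.items())

def predict(rule_list, instance, default=None):
    # Linear scan of an ordered rule list: return the class of the first rule that fires.
    for rule in rule_list:
        if fires(rule, instance):
            return rule.label
    return default

rules = [Rule({"humidity": (">", 80.0)}, "rain"), Rule({"humidity": ("<=", 80.0)}, "dry")]
print(predict(rules, {"humidity": 92.0}))   # -> "rain"
```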
Abstract:
We present a novel method for retrieving high-resolution, three-dimensional (3-D) nonprecipitating cloud fields in both overcast and broken-cloud situations. The method uses scanning cloud radar and multiwavelength zenith radiances to obtain gridded 3-D liquid water content (LWC) and effective radius (r_e) and 2-D column-mean droplet number concentration (N_d). Using an adaptation of the ensemble Kalman filter, the radiances constrain the optical properties of the clouds via a forward model that employs full 3-D radiative transfer, while also providing full error statistics given the uncertainty in the observations. To evaluate the new method, we first perform retrievals using synthetic measurements from a challenging cumulus cloud field produced by a large-eddy simulation snapshot. Uncertainty due to measurement error in overhead clouds is estimated at 20% in LWC and 6% in r_e, but the true error can be greater due to uncertainties in the assumed droplet size distribution and radiative transfer. Over the entire domain, LWC and r_e are retrieved with average errors of 0.05–0.08 g m⁻³ and ~2 μm, respectively, depending on the number of radiance channels used. The method is then evaluated using real data from the Atmospheric Radiation Measurement program Mobile Facility at the Azores. Two case studies are considered, one stratocumulus and one cumulus. Where available, the liquid water path retrieved directly above the observation site was found to be in good agreement with independent values obtained from microwave radiometer measurements, with an error of 20 g m⁻².
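The abstract describes an adaptation of the ensemble Kalman filter; the sketch below shows only a generic stochastic EnKF analysis step under simplifying assumptions (small dense arrays, a user-supplied forward model standing in for the full 3-D radiative transfer), not the authors' implementation.

```python
import numpy as np

def enkf_analysis(X, y_obs, forward, R, rng):
    """X: (n_state, n_ens) ensemble of cloud states (e.g. gridded LWC);
    y_obs: observed zenith radiances; forward: state -> simulated radiances;
    R: observation-error covariance."""
    n_ens = X.shape[1]
    Y = np.column_stack([forward(X[:, i]) for i in range(n_ens)])   # simulated radiances
    Xp = X - X.mean(axis=1, keepdims=True)
    Yp = Y - Y.mean(axis=1, keepdims=True)
    Pxy = Xp @ Yp.T / (n_ens - 1)          # state-observation covariance
    Pyy = Yp @ Yp.T / (n_ens - 1) + R      # innovation covariance
    K = Pxy @ np.linalg.inv(Pyy)           # Kalman gain
    Xa = X.copy()
    for i in range(n_ens):                 # perturbed-observation update of each member
        y_pert = y_obs + rng.multivariate_normal(np.zeros(len(y_obs)), R)
        Xa[:, i] += K @ (y_pert - Y[:, i])
    return Xa                              # posterior mean and spread give the retrieval and its error statistics
```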
Abstract:
The “cotton issue” has been a topic of several academic discussions among trade policy analysts. However, the design of trade and agricultural policy in the EU and the USA has become a politically sensitive matter over the last five years. This study, utilizing the Agricultural Trade Policy Simulation Model (ATPSM), aims to gain insights into the global cotton market, to explain why domestic support for cotton has become an issue, to quantify the impact of the new EU agricultural policy on the cotton sector, and to measure the effect of eliminating support policies on production and trade. Results indicate that full trade liberalization would lead the four West African countries to better terms of trade with the EU. If tariff reduction follows the so-called Swiss formula, world prices would increase by 3.5%.
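The Swiss formula mentioned above is a standard harmonizing tariff-cut rule, t_new = (a · t_old)/(a + t_old), where the coefficient a is the negotiated ceiling on the resulting tariff. A small illustration follows; the value of a used here is arbitrary, not the one assumed in the study.

```python
def swiss_formula(t_old, a=25.0):
    """Harmonizing tariff cut: higher initial tariffs are cut proportionally more,
    and the resulting tariff can never exceed the coefficient a."""
    return a * t_old / (a + t_old)

for t in (10.0, 50.0, 100.0):
    print(f"{t:5.1f}% -> {swiss_formula(t):5.1f}%")   # 10 -> 7.1, 50 -> 16.7, 100 -> 20.0
```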
Abstract:
Land cover maps at different resolutions and mapping extents contribute to modeling and support decision-making processes. Because land cover affects and is affected by climate change, it is listed among the 13 terrestrial essential climate variables. This paper describes the generation of a land cover map for Latin America and the Caribbean (LAC) for the year 2008. It was developed in the framework of the Latin American Network for Monitoring and Studying of Natural Resources (SERENA) project, which has been developed within the GOFC-GOLD Latin American network of remote sensing and forest fires (RedLaTIF). The SERENA land cover map for LAC integrates: 1) the local expertise of SERENA network members to generate the training and validation data, 2) a methodology for land cover mapping based on decision trees using MODIS time series, and 3) class-membership estimates to account for pixel heterogeneity issues. The discrete SERENA land cover product, derived from the class memberships, yields an overall accuracy of 84% and includes an additional layer representing the estimated per-pixel confidence. The study demonstrates in detail the use of class memberships to better estimate the area of scarce classes with a scattered spatial distribution. The land cover map is already available as a printed wall map and will be released in digital format in the near future. The SERENA land cover map was produced with a legend and classification strategy similar to those used by the North American Land Change Monitoring System (NALCMS) to generate a land cover map of the North American continent, which will allow the two maps to be combined into consistent data across the Americas, facilitating continental monitoring and modeling.
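A hedged sketch of the mapping ingredients described above: a decision-tree classifier on time-series features with per-pixel class memberships feeding a discrete map, a confidence layer and membership-weighted area estimates. The feature dimensions, class count and data are placeholder assumptions, not the SERENA setup.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(2)
X_train = rng.random((500, 23))            # e.g. per-pixel MODIS time-series features (placeholder)
y_train = rng.integers(0, 8, 500)          # land cover labels from network training sites (placeholder)

tree = DecisionTreeClassifier(max_depth=10, random_state=0).fit(X_train, y_train)

X_pixels = rng.random((4, 23))
memberships = tree.predict_proba(X_pixels)   # per-pixel class-membership estimates
discrete_map = memberships.argmax(axis=1)    # discrete land cover product
confidence = memberships.max(axis=1)         # per-pixel confidence layer
area_by_class = memberships.sum(axis=0)      # membership-weighted area estimate (in pixels)
```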
Abstract:
We study the scaling properties and Kraichnan–Leith–Batchelor (KLB) theory of forced inverse cascades in generalized two-dimensional (2D) fluids (α-turbulence models) simulated at resolution 8192 × 8192. We consider α = 1 (surface quasigeostrophic flow), α = 2 (2D Euler flow) and α = 3. The forcing scale is well resolved, a direct cascade is present and there is no large-scale dissipation. Coherent vortices spanning a range of sizes, most larger than the forcing scale, are present for both α = 1 and α = 2. The active scalar field for α = 3 contains comparatively few and small vortices. The energy spectral slopes in the inverse cascade are steeper than the KLB prediction −(7−α)/3 in all three systems. Since we stop the simulations well before the cascades have reached the domain scale, vortex formation and spectral steepening are not due to condensation effects; nor are they caused by large-scale dissipation, which is absent. One- and two-point p.d.f.s, hyperflatness factors and structure functions indicate that the inverse cascades are intermittent and non-Gaussian over much of the inertial range for α = 1 and α = 2, while the α = 3 inverse cascade is much closer to Gaussian and non-intermittent. For α = 3 the steep spectrum is close to that associated with enstrophy equipartition. Continuous wavelet analysis shows approximate KLB scaling ℰ(k) ∝ k^(−2) (α = 1) and ℰ(k) ∝ k^(−5/3) (α = 2) in the interstitial regions between the coherent vortices. Our results demonstrate that coherent vortex formation (α = 1 and α = 2) and non-realizability (α = 3) cause 2D inverse cascades to deviate from the KLB predictions, but that the flow between the vortices exhibits KLB scaling and non-intermittent statistics for α = 1 and α = 2.
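A minimal sketch of the diagnostic underlying the spectral-slope comparison above: a shell-averaged energy spectrum ℰ(k) of a doubly periodic field, whose inertial-range slope can be compared with the KLB value −(7−α)/3. The field, grid size, normalization and fitting range are placeholders, not the simulation data.

```python
import numpy as np

def energy_spectrum(psi, alpha):
    """Shell-averaged generalized energy spectrum; density ~ 0.5 * k^alpha * |psi_hat|^2 (up to normalization)."""
    n = psi.shape[0]
    k1d = 2 * np.pi * np.fft.fftfreq(n, d=2 * np.pi / n)   # integer wavenumbers on a 2*pi-periodic domain
    kx, ky = np.meshgrid(k1d, k1d, indexing="ij")
    kmag = np.sqrt(kx**2 + ky**2)
    psi_hat = np.fft.fft2(psi) / n**2
    density = 0.5 * kmag**alpha * np.abs(psi_hat) ** 2
    shells = np.arange(1, n // 2)
    E = np.array([density[(kmag >= s - 0.5) & (kmag < s + 0.5)].sum() for s in shells])
    return shells, E

shells, E = energy_spectrum(np.random.default_rng(3).standard_normal((256, 256)), alpha=2)
slope = np.polyfit(np.log(shells[4:40]), np.log(E[4:40] + 1e-30), 1)[0]   # compare with -(7 - alpha)/3
```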
Abstract:
Among existing remote sensing applications, land-based X-band radar is an effective technique for monitoring wave fields, and spatial wave information can be obtained from the radar images. The two-dimensional Fourier Transform (2-D FT) is the common algorithm used to derive the spectra of radar images. However, the wave field in the nearshore area is highly non-homogeneous due to wave refraction, shoaling, and other coastal mechanisms. When applied to nearshore radar images, the 2-D FT leads to ambiguity of wave characteristics in the wavenumber domain. In this article, we introduce the two-dimensional Wavelet Transform (2-D WT) to capture the non-homogeneity of wave fields from nearshore radar images. The results show that the wavenumber spectra obtained by 2-D WT at six parallel spatial locations in the given image clearly present the shoaling of nearshore waves. The wavenumber of the peak wave energy increases along the inshore direction, and the dominant direction of the spectra changes from south-southwest (SSW) to west-southwest (WSW). To verify the results of the 2-D WT, wave shoaling in the radar images is calculated based on the linear dispersion relation. The theoretical calculation results agree with the results of the 2-D WT on the whole. The encouraging performance of the 2-D WT indicates its strong capability of revealing the non-homogeneity of wave fields in nearshore X-band radar images.
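The verification step refers to the linear dispersion relation ω² = g·k·tanh(k·h). A small sketch of how the wavenumber increases as depth decreases, the shoaling signature seen in the 2-D WT spectra; the wave period, depths and iteration scheme are illustrative assumptions, not the study's data.

```python
import numpy as np

def wavenumber(period_s, depth_m, g=9.81, n_iter=200):
    """Solve omega^2 = g * k * tanh(k * h) for k by fixed-point iteration."""
    omega = 2 * np.pi / period_s
    k = omega**2 / g                      # deep-water first guess
    for _ in range(n_iter):
        k = omega**2 / (g * np.tanh(k * depth_m))
    return k

for h in (20.0, 10.0, 5.0):               # decreasing depth toward the shore
    print(f"h = {h:4.1f} m  ->  k = {wavenumber(8.0, h):.4f} rad/m")
```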
Abstract:
Assessing the ways in which rural agrarian areas provide Cultural Ecosystem Services (CES) is proving difficult. This research has developed an innovative methodological approach named the Multi-Scale Indicator Framework (MSIF) for capturing the CES embedded in rural agrarian areas. The framework reconciles a literature review with a trans-disciplinary participatory workshop. Both of these sources reveal that societal preferences diverge on judgemental criteria, which in turn relate to different visual concepts that can be drawn from analysing attributes, elements, features and characteristics of rural areas. We contend that it is now possible to list a group of possible multi-scale indicators for stewardship, diversity and aesthetics. These results might also be of use for improving existing European indicator frameworks by also including CES. This research carries major implications for policy at different levels of governance, as it makes it possible to target and monitor policy instruments in physical rural settings so that cultural dimensions are adequately considered. There is still work to be done on region-specific values and thresholds for each criterion and its indicator set. In practical terms, by developing the conceptual design within a common framework, as described in this paper, a considerable step can be taken towards the inclusion of the cultural dimension in Europe-wide assessments.
Abstract:
The EU Water Framework Directive (WFD) requires that the ecological and chemical status of water bodies in Europe should be assessed, and action taken where possible to ensure that at least "good" quality is attained in each case by 2015. This paper is concerned with the accuracy and precision with which chemical status in rivers can be measured given certain sampling strategies, and how this can be improved. High-frequency (hourly) chemical data from four rivers in southern England were subsampled to simulate different sampling strategies for four parameters used for WFD classification: dissolved phosphorus, dissolved oxygen, pH and water temperature. These data sub-sets were then used to calculate the WFD classification for each site. Monthly sampling was less precise than weekly sampling, but the effect on WFD classification depended on the closeness of the range of concentrations to the class boundaries. In some cases, monthly sampling for a year could result in the same water body being assigned to three or four of the WFD classes with 95% confidence, due to random sampling effects, whereas with weekly sampling this was one or two classes for the same cases. In the most extreme case, the same water body could have been assigned to any of the five WFD quality classes. Weekly sampling considerably reduces the uncertainties compared to monthly sampling. The width of the weekly sampled confidence intervals was about 33% that of the monthly for P species and pH, about 50% for dissolved oxygen, and about 67% for water temperature. For water temperature, which is assessed as the 98th percentile in the UK, monthly sampling biases the mean downwards by about 1 °C compared to the true value, due to problems of assessing high percentiles with limited data. Low-frequency measurements will generally be unsuitable for assessing standards expressed as high percentiles. Confining sampling to the working week compared to all 7 days made little difference, but a modest improvement in precision could be obtained by sampling at the same time of day within a 3 h time window, and this is recommended. For parameters with a strong diel variation, such as dissolved oxygen, the value obtained, and thus possibly the WFD classification, can depend markedly on when in the cycle the sample was taken. Specifying this in the sampling regime would be a straightforward way to improve precision, but there needs to be agreement about how best to characterise risk in different types of river. These results suggest that in some cases it will be difficult to assign accurate WFD chemical classes or to detect likely trends using current sampling regimes, even for these largely groundwater-fed rivers. A more critical approach to sampling is needed to ensure that management actions are appropriate and supported by data.
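A minimal sketch of the subsampling experiment described above, with a synthetic hourly water-temperature series standing in for the real records: the high-frequency data are thinned to weekly or monthly grab samples before computing the statistic used for classification (here the 98th percentile).

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(4)
idx = pd.date_range("2010-01-01", periods=24 * 365, freq="h")
temp = 12 + 6 * np.sin(2 * np.pi * idx.dayofyear / 365) + rng.normal(0, 1.5, len(idx))
hourly = pd.Series(temp, index=idx)        # synthetic hourly record (placeholder)

weekly = hourly.resample("7D").first()     # one grab sample per week
monthly = hourly.resample("MS").first()    # one grab sample per month

for name, s in [("hourly", hourly), ("weekly", weekly), ("monthly", monthly)]:
    print(f"{name:8s} 98th percentile: {s.quantile(0.98):.2f}")
```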
Abstract:
A new iron(II) coordination polymer, [FeCl2(NC7H9)2(N2C12H12)], has been synthesized under solvothermal conditions and structurally characterized by single-crystal X-ray diffraction. This material crystallizes in the monoclinic space group C2/c, with a = 11.2850(6), b = 13.8925(7), c = 17.0988(9) Å and β = 94.300(3)° (Z = 4). The crystal structure consists of neutral zig-zag chains, in which the iron(II) ions are octahedrally coordinated. The infinite polymer chains are packed into a three-dimensional structure through C–H···Cl interactions. Magnetic susceptibility measurements reveal the existence of weak antiferromagnetic interactions between the iron(II) ions. The effective magnetic moment, μ_eff = 5.33 μ_B, is consistent with a high-spin iron(II) configuration.
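For context, the spin-only expectation for high-spin iron(II) (d⁶, n = 4 unpaired electrons) lies below the reported value; the excess is typically attributed to an unquenched orbital contribution in octahedral high-spin Fe(II).

```latex
\mu_{\text{spin-only}} = \sqrt{n(n+2)}\,\mu_{\mathrm{B}}
                       = \sqrt{4(4+2)}\,\mu_{\mathrm{B}}
                       \approx 4.90\,\mu_{\mathrm{B}}
\;<\; \mu_{\mathrm{eff}} = 5.33\,\mu_{\mathrm{B}}
```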