864 results for Sharable Content Object Reference Model (SCORM)
Abstract:
A new coupled cloud physics–radiation parameterization of the bulk optical properties of ice clouds is presented. The parameterization is consistent with assumptions in the cloud physics scheme regarding particle size distributions (PSDs) and mass–dimensional relationships. The parameterization is based on a weighted ice crystal habit mixture model, and its bulk optical properties are parameterized as simple functions of wavelength and ice water content (IWC). This approach directly couples IWC to the bulk optical properties, removing the need for diagnosed variables such as the ice crystal effective dimension. The parameterization is implemented into the Met Office Unified Model Global Atmosphere 5.0 (GA5) configuration. The GA5 configuration is used to simulate the annual shortwave (SW) and longwave (LW) fluxes at the top of the atmosphere (TOA) over 20 yr, as well as the temperature structure of the atmosphere, under various microphysical assumptions. The coupled parameterization is directly compared against the current operational radiation parameterization, while maintaining the same cloud physics assumptions. In this experiment, the impacts of the two parameterizations on the SW and LW radiative effects at TOA are also investigated and compared against observations. The 20-yr simulations are compared against the latest observations of the atmospheric temperature and radiative fluxes at TOA. The comparisons demonstrate that the choice of PSD and the assumed ice crystal shape distribution are equally important. Moreover, the consistent radiation parameterization removes a long-standing tropical troposphere cold temperature bias but slightly warms the southern midlatitudes by about 0.5 K.
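The abstract does not give the functional form of the parameterization; the following is a minimal sketch of the general idea of coupling IWC directly to a bulk optical property through wavelength-dependent coefficients. The power-law form and all numbers are illustrative assumptions, not the paper's actual fit.

```python
# Hypothetical wavelength-band -> (a, b) coefficients for a power law
# extinction = a * IWC**b; the values are placeholders, not from the paper.
FIT_COEFFS = {"0.55um": (3.0e-2, 1.00),   # visible band
              "10.8um": (2.2e-2, 0.95)}   # infrared window band

def bulk_extinction(iwc_g_m3, band):
    """Bulk volume extinction coefficient (1/m) obtained directly from ice
    water content, with no intermediate effective-dimension diagnostic."""
    a, b = FIT_COEFFS[band]
    return a * iwc_g_m3 ** b

print(bulk_extinction(0.05, "0.55um"))
```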
Abstract:
The subgrid-scale spatial variability in cloud water content can be described by a parameter f called the fractional standard deviation, equal to the standard deviation of the cloud water content divided by its mean. This parameter is an input to schemes that calculate the impact of subgrid-scale cloud inhomogeneity on gridbox-mean radiative fluxes and microphysical process rates. A new regime-dependent parametrization of the spatial variability of cloud water content is derived from CloudSat observations of ice clouds. In addition to the dependencies on horizontal and vertical resolution and cloud fraction included in previous parametrizations, the new parametrization includes an explicit dependence on cloud type. The new parametrization is then implemented in the Global Atmosphere 6 (GA6) configuration of the Met Office Unified Model and used to model the effects of subgrid variability of both ice and liquid water content on radiative fluxes and on autoconversion and accretion rates in three 20-year atmosphere-only climate simulations. These simulations show the impact of the new regime-dependent parametrization in diagnostic radiation calculations, in interactive radiation calculations, and in both interactive radiation calculations and a new warm microphysics scheme. The control simulation uses a globally constant f value of 0.75 to model the effect of cloud water content variability on radiative fluxes. Use of the new regime-dependent parametrization results in a global mean f which is higher than the control's fixed value and a global distribution of f which is closer to CloudSat observations. When the new regime-dependent parametrization is used in radiative transfer calculations only, the magnitudes of short-wave and long-wave top-of-atmosphere cloud radiative forcing are reduced, increasing the existing global mean biases in the control. When it is also applied in a new warm microphysics scheme, the short-wave global mean bias is reduced.
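As a concrete illustration of the quantity being parameterized, here is a minimal sketch of computing the fractional standard deviation f from subgrid samples of cloud water content; the sample values are made-up example data.

```python
import numpy as np

def fractional_std(cloud_water_content):
    """Fractional standard deviation f: standard deviation of the in-cloud
    water content divided by its mean."""
    cwc = np.asarray(cloud_water_content, dtype=float)
    return cwc.std() / cwc.mean()

# Made-up subgrid samples of ice water content (kg/kg) within one grid box.
samples = [2.1e-5, 3.4e-5, 1.2e-5, 4.8e-5, 2.9e-5]
print(fractional_std(samples))  # compare with the control's fixed f = 0.75
```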
Abstract:
The North Atlantic Ocean subpolar gyre (NA SPG) is an important region for initialising decadal climate forecasts. Climate model simulations and palaeoclimate reconstructions have indicated that this region could also exhibit large, internally generated variability on decadal timescales. Understanding these modes of variability, their consistency across models, and the conditions in which they exist, is clearly important for improving the skill of decadal predictions — particularly when these predictions are made with the same underlying climate models. Here we describe and analyse a mode of internal variability in the NA SPG in a state-of-the-art, high resolution, coupled climate model. This mode has a period of 17 years and explains 15–30% of the annual variance in related ocean indices. It arises due to the advection of heat content anomalies around the NA SPG. Anomalous circulation drives the variability in the southern half of the NA SPG, whilst mean circulation and anomalous temperatures are important in the northern half. A negative feedback between Labrador Sea temperatures/densities and those in the North Atlantic Current is identified, which allows for the phase reversal. The atmosphere is found to act as a positive feedback onto this mode via the North Atlantic Oscillation, which itself exhibits a spectral peak at 17 years. Decadal ocean density changes associated with this mode are driven by variations in temperature rather than salinity — a point on which models often disagree and which we suggest may affect the validity of the underlying assumptions of anomaly-assimilating decadal prediction methodologies.
Abstract:
A basic data requirement of a river flood inundation model is a Digital Terrain Model (DTM) of the reach being studied. The scale at which modeling is required determines the accuracy required of the DTM. For modeling floods in urban areas, a high resolution DTM such as that produced by airborne LiDAR (Light Detection And Ranging) is most useful, and large parts of many developed countries have now been mapped using LiDAR. In more remote areas, it is possible to model flooding on a larger scale using a lower resolution DTM, and in the near future the DTM of choice is likely to be that derived from the TanDEM-X Digital Elevation Model (DEM). A variable-resolution global DTM obtained by combining existing high and low resolution data sets would be useful for modeling flood water dynamics globally, at high resolution wherever possible and at lower resolution over larger rivers in remote areas. A further important data resource used in flood modeling is the flood extent, commonly derived from Synthetic Aperture Radar (SAR) images. Flood extents become more useful when they are intersected with the DTM, since water level observations (WLOs) at the flood boundary can then be estimated at various points along the river reach. To illustrate the utility of such a global DTM, two examples of recent research involving WLOs at opposite ends of the spatial scale are discussed. The first requires high resolution spatial data, and involves the assimilation of WLOs from a real sequence of high resolution SAR images into a flood model to update the model state with observations over time, and to estimate river discharge and model parameters, including river bathymetry and friction. The results indicate the feasibility of such an Earth Observation-based flood forecasting system. The second example is at a larger scale, and uses SAR-derived WLOs to improve the lower-resolution TanDEM-X DEM in the area covered by the flood extents. The resulting reduction in random height error is significant.
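A minimal sketch of the intersection step described above, sampling DTM heights along a SAR-derived flood boundary to estimate WLOs; the regular-grid DTM and the boundary cell indices are made-up example data.

```python
import numpy as np

def water_levels_at_boundary(dtm, boundary_cells):
    """Estimate water level observations (WLOs) by reading DTM heights at
    cells lying on the SAR-derived flood extent boundary."""
    return np.array([dtm[row, col] for row, col in boundary_cells])

# Made-up 5 x 5 DTM (metres above datum) and flood-boundary cell indices.
dtm = np.array([[12.0, 11.5, 11.2, 11.0, 10.8],
                [11.8, 11.3, 10.9, 10.7, 10.5],
                [11.5, 11.0, 10.6, 10.4, 10.2],
                [11.2, 10.8, 10.3, 10.1,  9.9],
                [11.0, 10.5, 10.1,  9.8,  9.6]])
boundary = [(0, 3), (1, 2), (2, 1), (3, 1)]
print(water_levels_at_boundary(dtm, boundary))  # candidate WLOs along the reach
```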
Abstract:
Atmospheric CO2 concentration is expected to continue rising in the coming decades, but natural or artificial processes may eventually reduce it. We show that, in the FAMOUS atmosphere-ocean general circulation model, the reduction of ocean heat content as radiative forcing decreases is greater than would be expected from a linear model simulation of the response to the applied forcings. We relate this effect to the behavior of the Atlantic meridional overturning circulation (AMOC): the ocean cools more efficiently with a strong AMOC. The AMOC weakens as CO2 rises, then strengthens as CO2 declines, but temporarily overshoots its original strength. This nonlinearity comes mainly from the accumulated advection of salt into the North Atlantic, which gives the system a longer memory. This implies that changes observed in response to different CO2 scenarios or from different initial states, such as from past changes, may not be a reliable basis for making projections.
Abstract:
We present ocean model sensitivity experiments aimed at separating the influence of the projected changes in the “thermal” (near-surface air temperature) and “wind” (near-surface winds) forcing on the patterns of sea level and ocean heat content. In the North Atlantic, the distribution of sea level change is driven mainly by the “thermal” forcing, whereas in the North Pacific it is driven mainly by the “wind” forcing; in the Southern Ocean, the “thermal” and “wind” forcing have a comparable influence. In the ocean adjacent to Antarctica the “thermal” forcing leads to an inflow of warmer waters on the continental shelves, which is somewhat attenuated by the “wind” forcing. The structure of the vertically integrated heat uptake is set by different processes at low and high latitudes: at low latitudes it is dominated by the heat transport convergence, whereas at high latitudes it represents a small residual of changes in the surface flux and advection of heat. The structure of the horizontally integrated heat content tendency is set by an increase of the downward heat flux by the mean circulation and a comparable decrease of the upward heat flux by the subgrid-scale processes; the upward eddy heat flux decreases and increases by almost the same magnitude in response to, respectively, the “thermal” and “wind” forcing. Regionally, the surface heat loss and deep convection weaken in the Labrador Sea, but intensify in the Greenland Sea in the region of sea ice retreat. The enhanced heat flux anomaly in the subpolar Atlantic is mainly caused by the “thermal” forcing.
Abstract:
Effective public policy to mitigate climate change footprints should build on data-driven analysis of firm-level strategies. This article’s conceptual approach augments the resource-based view (RBV) of the firm and identifies investments in four firm-level resource domains (Governance, Information management, Systems, and Technology [GISTe]) to develop capabilities in climate change impact mitigation. The authors denote the resulting framework as the GISTe model, which frames their analysis and public policy recommendations. This research uses the 2008 Carbon Disclosure Project (CDP) database, with high-quality information on firm-level climate change strategies for 552 companies from North America and Europe. In contrast to the widely accepted view that European firms perform better than North American ones, the authors find a more nuanced picture. Many firms, whether European or North American, do not just “talk” about climate change impact mitigation, but actually do “walk the talk.” European firms appear to be better than their North American counterparts in “walk I,” denoting attention to governance, information management, and systems. But when it comes to “walk II,” meaning actual Technology-related investments, North American firms’ performance is equal to or superior to that of the European companies. The authors formulate public policy recommendations to accelerate firm-level, sector-level, and cluster-level implementation of climate change strategies.
Abstract:
In many lower-income countries, the establishment of marine protected areas (MPAs) involves significant opportunity costs for artisanal fishers, reflected in changes in how they allocate their labor in response to the MPA. The resource economics literature rarely addresses such labor allocation decisions of artisanal fishers and how, in turn, these contribute to the impact of MPAs on fish stocks, yield, and income. This paper develops a spatial bio-economic model of a fishery adjacent to a village whose people allocate their labor between fishing and on-shore wage opportunities, establishing a spatial Nash equilibrium at a steady-state fish stock in response to various locations of no-take-zone MPAs and managed-access MPAs. Villagers’ fishing location decisions are based on distance costs, fishing returns, and wages. Here, the MPA location determines its impact on fish stocks, fish yield, and villager income due to distance costs, congestion, and fish dispersal. Incorporating wage labor opportunities into the framework allows examination of the MPA’s impact on rural incomes, with results showing that win-wins between yield and stocks occur in very different MPA locations than do win-wins between income and stocks. Similarly, villagers in a high-wage setting face a lower burden from MPAs than do those in low-wage settings. Motivated by issues of central importance in Tanzania and Costa Rica, we impose various policies on this fishery – location-specific no-take zones, increasing on-shore wages, and restricting MPA access to a subset of villagers – to analyze the impact of an MPA on fish stocks and rural incomes in such settings.
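A toy numerical sketch of the trade-off at the heart of such a model: labor is split between fishing and an on-shore wage, and the steady-state stock under logistic growth falls as fishing effort rises. The growth, catchability, price, and wage numbers are illustrative assumptions, not the paper's calibration, and the spatial and Nash-equilibrium structure is omitted.

```python
# Toy steady-state fishery: logistic growth r*S*(1 - S/K) balances harvest
# q*E*S, giving S* = K*(1 - q*E/r) for fishing effort E (all values illustrative).
r, K, q = 0.5, 1000.0, 0.01   # growth rate, carrying capacity, catchability
price, wage = 2.0, 5.0        # fish price and on-shore wage per unit labor

def steady_state_stock(effort):
    return max(K * (1.0 - q * effort / r), 0.0)

def village_outcome(effort, total_labor=40.0):
    stock = steady_state_stock(effort)
    fishing_income = price * q * effort * stock
    wage_income = wage * (total_labor - effort)
    return stock, fishing_income + wage_income

for effort in (5, 15, 30):
    stock, income = village_outcome(effort)
    print(f"effort={effort:2d}  stock={stock:6.1f}  income={income:7.1f}")
```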
Abstract:
Initializing the ocean for decadal predictability studies is a challenge, as it requires reconstructing the little observed subsurface trajectory of ocean variability. In this study we explore to what extent surface nudging using well-observed sea surface temperature (SST) can reconstruct the deeper ocean variations for the 1949–2005 period. An ensemble is made with a nudged version of the IPSLCM5A model and compared to ocean reanalyses and reconstructed datasets. The SST is restored to observations using a physically based relaxation coefficient, in contrast to earlier studies, which use a much larger value. The assessment is restricted to the regions where the ocean reanalyses agree, i.e. in the upper 500 m of the ocean, although this can be latitude and basin dependent. Significant reconstruction of the subsurface is achieved in specific regions, namely regions of subduction in the subtropical Atlantic, below the thermocline in the equatorial Pacific and, in some cases, in the North Atlantic deep convection regions. Beyond the mean correlations, ocean integrals are used to explore the time evolution of the correlation over 20-year windows. Classical fixed-depth heat content diagnostics do not exhibit any significant reconstruction between the different existing observation-based references and can therefore not be used to assess global average time-varying correlations in the nudged simulations. Using the physically based average temperature above an isotherm (14 °C) alleviates this issue in the tropics and subtropics and shows significant reconstruction of these quantities in the nudged simulations for several decades. This skill is attributed to the wind stress reconstruction in the tropics, as already demonstrated in a perfect model study using the same model. Thus, we also show here the robustness of this result in a historical and observational context.
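A minimal sketch contrasting the two diagnostics mentioned above, a fixed-depth mean temperature versus the mean temperature above the 14 °C isotherm, for a single made-up profile; the temperature values and uniform 50 m layers are illustrative assumptions.

```python
import numpy as np

# Made-up temperature profile (deg C) on uniform 50 m layers, surface downward.
temps = np.array([26.0, 24.0, 21.0, 18.0, 15.0, 13.5, 11.0, 9.0, 7.5, 6.0])
dz = np.full_like(temps, 50.0)  # layer thickness in metres

def mean_temp_fixed_depth(max_depth=300.0):
    """Classical fixed-depth diagnostic: mean temperature of the top 300 m."""
    mask = np.cumsum(dz) <= max_depth
    return np.average(temps[mask], weights=dz[mask])

def mean_temp_above_isotherm(iso=14.0):
    """Mean temperature of the water warmer than the isotherm, which follows
    thermocline displacements rather than a fixed depth."""
    mask = temps >= iso
    return np.average(temps[mask], weights=dz[mask])

print(mean_temp_fixed_depth(), mean_temp_above_isotherm())
```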
Abstract:
A data insertion method, where a dispersion model is initialized from ash properties derived from a series of satellite observations, is used to model the 8 May 2010 Eyjafjallajökull volcanic ash cloud, which extended from Iceland to northern Spain. We also briefly discuss the application of this method to the April 2010 phase of the Eyjafjallajökull eruption and the May 2011 Grímsvötn eruption. An advantage of this method is that very little knowledge about the eruption itself is required, because some of the usual eruption source parameters are not used. The method may therefore be useful for remote volcanoes where good satellite observations of the erupted material are available, but little is known about the properties of the actual eruption. It does, however, have a number of limitations related to the quality and availability of the observations. We demonstrate that, using certain configurations, the data insertion method is able to capture the structure of a thin filament of ash extending over northern Spain that is not fully captured by other modeling methods. It also verifies well against the satellite observations according to the quantitative object-based quality metric SAL (structure, amplitude, location) and the spatial coverage metric, the Figure of Merit in Space.
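As a concrete illustration of the spatial coverage metric named above, here is a minimal sketch of the Figure of Merit in Space, taken as the area of overlap between the observed and modeled ash masks divided by the area of their union; the boolean masks are made-up example data.

```python
import numpy as np

def figure_of_merit_in_space(observed, modeled):
    """Figure of Merit in Space: overlap area / union area of the two ash
    masks; 1.0 means perfect spatial agreement, 0.0 means no overlap."""
    observed, modeled = np.asarray(observed, bool), np.asarray(modeled, bool)
    union = np.logical_or(observed, modeled).sum()
    return np.logical_and(observed, modeled).sum() / union if union else np.nan

# Made-up 4 x 4 ash-presence masks (1 = ash detected / simulated).
obs = [[0, 1, 1, 0], [0, 1, 1, 1], [0, 0, 1, 1], [0, 0, 0, 0]]
mod = [[0, 0, 1, 1], [0, 1, 1, 1], [0, 0, 1, 0], [0, 0, 0, 0]]
print(figure_of_merit_in_space(obs, mod))
```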
Abstract:
The deterpenation of bergamot essential oil can be performed by liquid-liquid extraction using hydrous ethanol as the solvent. A ternary mixture composed of 1-methyl-4-prop-1-en-2-yl-cyclohexene (limonene), 3,7-dimethylocta-1,6-dien-3-yl acetate (linalyl acetate), and 3,7-dimethylocta-1,6-dien-3-ol (linalool), three major compounds commonly found in bergamot oil, was used to simulate this essential oil. Liquid-liquid equilibrium data were experimentally determined for systems containing essential oil compounds, ethanol, and water at 298.2 K and are reported in this paper. The experimental data were correlated using the NRTL and UNIQUAC models, and the mean deviations between calculated and experimental data were lower than 0.0062 in all systems, indicating the good descriptive quality of the molecular models. To verify the effect of the water mass fraction in the solvent and the linalool mass fraction in the terpene phase on the distribution coefficients of the essential oil compounds, nonlinear regression analyses were performed, obtaining mathematical models with correlation coefficient values higher than 0.99. The results show that as the water content in the solvent phase increased, the distribution coefficient values decreased, regardless of the type of compound studied. Conversely, as the linalool content increased, the distribution coefficients of the hydrocarbon terpene and the ester also increased. However, the linalool distribution coefficient values were negatively affected when the terpene alcohol content increased in the terpene phase.
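A minimal sketch of the distribution coefficient referred to above, taken here as the ratio of a compound's mass fraction in the solvent (ethanol plus water) phase to that in the terpene phase; the tie-line mass fractions below are made-up, not the measured data.

```python
def distribution_coefficient(w_solvent_phase, w_terpene_phase):
    """k = mass fraction of the compound in the solvent-rich phase divided by
    its mass fraction in the terpene-rich phase."""
    return w_solvent_phase / w_terpene_phase

# Made-up tie-line mass fractions for one hypothetical system at 298.2 K.
compounds = {"limonene": (0.020, 0.610),
             "linalyl acetate": (0.055, 0.240),
             "linalool": (0.110, 0.130)}
for name, (w_solvent, w_terpene) in compounds.items():
    print(name, round(distribution_coefficient(w_solvent, w_terpene), 3))
```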
Abstract:
Many generalist populations may actually be composed of relatively specialist individuals. This “individual specialization” may have important ecological and evolutionary implications. Although this phenomenon has been documented in more than one hundred taxa, it is still unclear how individuals within a population actually partition resources. Here we applied several methods based on network theory to investigate the intrapopulation patterns of resource use in the gracile mouse opossum Gracilinanus microtarsus. We found evidence of significant individual specialization in this species and that the diets of specialists are nested within the diets of generalists. This novel pattern is consistent with a recently proposed model of optimal foraging and implies strong asymmetry in the interactions among individuals of a population.
Abstract:
1. Much of the current understanding of ecological systems is based on theory that does not explicitly take into account individual variation within natural populations. However, individuals may show substantial variation in resource use. This variation in turn may be translated into topological properties of networks that depict interactions among individuals and the food resources they consume (individual-resource networks). 2. Different models derived from optimal diet theory (ODT) predict highly distinct patterns of trophic interactions at the individual level that should translate into distinct network topologies. As a consequence, individual-resource networks can be useful tools in revealing the incidence of different patterns of resource use by individuals and suggesting their mechanistic basis. 3. In the present study, using data from several dietary studies, we assembled individual-resource networks of 10 vertebrate species, previously reported to show interindividual diet variation, and used a network-based approach to investigate their structure. 4. We found significant nestedness, but no modularity, in all empirical networks, indicating that (i) these populations are composed of both opportunistic and selective individuals and (ii) the diets of the latter are ordered as predictable subsets of the diets of the more opportunistic individuals. 5. Nested patterns are a common feature of species networks, and our results extend their generality to trophic interactions at the individual level. This pattern is consistent with a recently proposed ODT model, in which individuals show similar rank preferences but differ in their acceptance rate for alternative resources. Our findings therefore suggest a common mechanism underlying interindividual variation in resource use in disparate taxa.
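A minimal sketch of the kind of nestedness check described above for a binary individual-by-resource matrix: for each pair of individuals it asks whether the smaller diet is a subset of the larger one. This simple subset criterion is a stand-in for the formal nestedness and modularity metrics used in such studies, and the diet matrix is made up.

```python
import numpy as np

def fraction_nested_pairs(matrix):
    """Fraction of individual pairs in which the smaller diet is a subset of
    the larger one (1.0 for a perfectly nested matrix)."""
    m = np.asarray(matrix, dtype=bool)
    nested = total = 0
    for i in range(len(m)):
        for j in range(i + 1, len(m)):
            small, large = sorted((m[i], m[j]), key=lambda d: d.sum())
            total += 1
            nested += bool(np.all(~small | large))  # all of 'small' also in 'large'
    return nested / total

# Made-up diets: rows are individuals, columns are resource types.
diets = [[1, 1, 1, 1],   # opportunistic individual
         [1, 1, 1, 0],
         [1, 1, 0, 0],   # progressively more selective individuals
         [1, 0, 0, 0]]
print(fraction_nested_pairs(diets))
```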
Abstract:
Attention is a critical mechanism for visual scene analysis. By means of attention, it is possible to break down the analysis of a complex scene into the analysis of its parts through a selection process. Empirical studies demonstrate that attentional selection is conducted on visual objects as a whole. We present a neurocomputational model of object-based selection in the framework of oscillatory correlation. By segmenting an input scene and integrating the segments with their conspicuity obtained from a saliency map, the model selects salient objects rather than salient locations. The proposed system is composed of three modules: a saliency map providing saliency values of image locations, image segmentation for breaking the input scene into a set of objects, and object selection which allows one of the objects of the scene to be selected at a time. This object selection system has been applied to real gray-level and color images, and the simulation results show the effectiveness of the system.
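A minimal sketch of the selection step described above: given object labels from segmentation and a saliency map on the same grid, attention goes to the object with the highest aggregate conspicuity. The arrays are made-up, and this direct argmax stands in for the model's oscillatory-correlation dynamics.

```python
import numpy as np

def select_salient_object(labels, saliency):
    """Return the object label with the highest mean saliency, so that a whole
    object, rather than a single location, is selected."""
    scores = {int(obj): saliency[labels == obj].mean()
              for obj in np.unique(labels) if obj != 0}  # 0 = background
    return max(scores, key=scores.get), scores

# Made-up 4 x 4 scene: two objects (labels 1 and 2) on a background (0).
labels = np.array([[1, 1, 0, 2],
                   [1, 1, 0, 2],
                   [0, 0, 0, 2],
                   [0, 0, 0, 0]])
saliency = np.array([[0.2, 0.3, 0.1, 0.9],
                     [0.2, 0.4, 0.1, 0.8],
                     [0.1, 0.1, 0.1, 0.7],
                     [0.1, 0.1, 0.1, 0.1]])
print(select_salient_object(labels, saliency))  # object 2 is selected here
```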
Abstract:
This paper presents an approach for assisting low-literacy readers in accessing online Web information. The Educational FACILITA tool is a Web content adaptation tool that provides innovative features and follows more intuitive interaction models regarding accessibility concerns. In particular, we propose an interaction model and a Web application that explore the natural language processing tasks of lexical elaboration and named entity labeling to improve Web accessibility. We report on the results obtained from a pilot usability study carried out with low-literacy users. The preliminary results show that Educational FACILITA improves the comprehension of text elements, although the assistance mechanisms might also confuse users when word sense ambiguity is introduced, by gathering, for a complex word, a list of synonyms with multiple meanings. This points to a future solution in which the correct sense of a complex word in a sentence is identified, addressing this pervasive characteristic of natural languages. The pilot study also showed that experienced computer users find the tool more useful than novice computer users do.
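A minimal sketch of the lexical elaboration idea discussed above: complex words are annotated with simpler synonyms, which also makes the ambiguity problem visible when a word has several senses. The word list and synonym dictionary are made-up English stand-ins for the tool's Portuguese lexical resources.

```python
# Made-up synonym dictionary; a real tool would draw on a lexical database.
SYNONYMS = {
    "ameliorate": ["improve", "make better"],
    "bank": ["riverside", "financial institution"],  # ambiguous: needs sense disambiguation
}

def lexically_elaborate(sentence, synonyms=SYNONYMS):
    """Append simple synonyms after each known complex word; ambiguous entries
    show why word-sense disambiguation is the suggested next step."""
    annotated = []
    for word in sentence.split():
        key = word.lower().strip(".,;:")
        if key in synonyms:
            annotated.append(f"{word} ({' / '.join(synonyms[key])})")
        else:
            annotated.append(word)
    return " ".join(annotated)

print(lexically_elaborate("Policies to ameliorate erosion of the river bank."))
```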