73 results for Models and modeling


Relevance:

100.00%

Publisher:

Abstract:

In mid-March 2005 the northern lower stratospheric polar vortex experienced a severe stretching episode, bringing a large polar filament far south of Alaska toward Hawaii. This meridional intrusion of rare extent, coinciding with the polar vortex final warming and breakdown, was followed by a zonal stretching in the wake of the easterly propagating subtropical main flow. This caused polar air to remain over Hawaii for several days before diluting into the subtropics. After being successfully forecast to pass over Hawaii by the high-resolution potential vorticity advection model Modèle Isentrope du transport Méso-échelle de l'Ozone Stratosphérique par Advection (MIMOSA), the filament was observed on isentropic surfaces between 415 K and 455 K (17–20 km) by the Jet Propulsion Laboratory stratospheric ozone lidar measurements at Mauna Loa Observatory, Hawaii, between 16 and 19 March 2005. It appeared as a thin layer of enhanced ozone peaking at 1.6 ppmv in a region where the climatological values usually average 1.0 ppmv. These values were compared to those obtained by the three-dimensional Chemistry-Transport Model MIMOSA-CHIM. Agreement between lidar and model was excellent, particularly in the similar appearance of the ozone peak near 435 K (18.5 km) on 16 March, and the persistence of this layer at higher isentropic levels for the following three days. Passive ozone, also modeled by MIMOSA-CHIM, was at about 3–4 ppmv inside the filament while above Hawaii. A detailed history of the modeled chemistry inside the filament suggests that the air mass was still polar ozone–depleted when passing over Hawaii. The filament quickly separated from the main vortex after its Hawaiian overpass. It never reconnected and, in less than 10 days, dispersed entirely in the subtropics.

In mid-March 2005, a rare lower stratospheric polar vortex filamentation event was observed simultaneously by the JPL lidar at Mauna Loa Observatory, Hawaii, and by the EOS MLS instrument onboard the Aura satellite. The event coincided with the beginning of the spring 2005 final warming. On 16 March, the filament was observed by lidar around 0600 UT between 415 K and 455 K, and by MLS six hours earlier. It was seen on both the lidar and MLS profiles as a layer of enhanced ozone, peaking at 1.7 ppmv in a region where the climatological values are usually around or below 1 ppmv. Ozone profiles measured by lidar and MLS were compared to profiles from the Chemical Transport Model MIMOSA-CHIM. The agreement between lidar, MLS, and the model is excellent considering the difference in the sampling techniques. MLS was also able to identify the filament at another location north of Hawaii.

An unusually strong and prolonged stratospheric sudden warming (SSW) in January 2006 was the first major SSW for which globally distributed long-lived trace gas data are available covering the upper troposphere through the lower mesosphere. We use Aura Microwave Limb Sounder (MLS), Atmospheric Chemistry Experiment-Fourier Transform Spectrometer (ACE-FTS) data, the SLIMCAT Chemistry Transport Model (CTM), and assimilated meteorological analyses to provide a comprehensive picture of transport during this event. The upper tropospheric ridge that triggered the SSW was associated with an elevated tropopause and layering in trace gas profiles in conjunction with stratospheric and tropospheric intrusions. Anomalous poleward transport (with corresponding quasi-isentropic troposphere-to-stratosphere exchange at the lowest levels studied) in the region over the ridge extended well into the lower stratosphere. In the middle and upper stratosphere, the breakdown of the polar vortex transport barrier was seen in a signature of rapid, widespread mixing in trace gases, including CO, H2O, CH4 and N2O. The vortex broke down slightly later and more slowly in the lower than in the middle stratosphere. In the middle and lower stratosphere, small remnants with trace gas values characteristic of the pre-SSW vortex lingered through the weak and slow recovery of the vortex. The upper stratospheric vortex quickly reformed, and, as enhanced diabatic descent set in, CO descended into this strong vortex, echoing the fall vortex development. Trace gas evolution in the SLIMCAT CTM agrees well with that in the satellite trace gas data from the upper troposphere through the middle stratosphere. 
In the upper stratosphere and lower mesosphere, the SLIMCAT simulation does not capture the strong descent of mesospheric CO and H2O values into the reformed vortex; this poor CTM performance in the upper stratosphere and lower mesosphere results primarily from biases in the diabatic descent in assimilated analyses.

The growing energy consumption in the residential sector represents about 30% of global demand. This calls for Demand Side Management solutions that drive changes in the behavior of end consumers, with the aim of reducing overall consumption and shifting it to periods in which demand is lower and the cost of generating energy is lower. Demand Side Management solutions require detailed knowledge of the patterns of energy consumption. The profile of electricity demand in the residential sector is highly correlated with the times at which dwellings are actively occupied; therefore, in this study the occupancy patterns of Spanish properties were determined using the 2009–2010 Time Use Survey (TUS), conducted by the National Statistical Institute of Spain. The survey identifies three peaks in active occupancy, which coincide with morning, noon and evening. This information was used as input to a stochastic model that generates active-occupancy profiles of dwellings, with the aim of simulating domestic electricity consumption. TUS data were also used to identify which appliance-related activities could be considered for Demand Side Management solutions during the three occupancy peaks.
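A stochastic occupancy model of this kind is often built as a first-order Markov chain over active/inactive states. The sketch below is a hypothetical illustration, not the calibrated Spanish TUS model: the transition probabilities and peak hours are invented, whereas the real model would derive them from the survey's time-use diaries.

```python
import random

# Assumed peak hours (24 h clock) standing in for the morning, noon and
# evening occupancy peaks identified by the TUS; purely illustrative.
PEAK_HOURS = {7, 8, 14, 15, 21, 22}

def p_become_active(hour):
    """Probability of an inactive -> active transition in a 1 h step."""
    return 0.6 if hour in PEAK_HOURS else 0.1

def p_stay_active(hour):
    """Probability of remaining active in a 1 h step."""
    return 0.8 if hour in PEAK_HOURS else 0.5

def simulate_day(seed=0):
    """Return a 24-element list of 0/1 active-occupancy states."""
    rng = random.Random(seed)
    state, profile = 0, []
    for hour in range(24):
        p = p_stay_active(hour) if state else p_become_active(hour)
        state = 1 if rng.random() < p else 0
        profile.append(state)
    return profile

profile = simulate_day()
```

Averaging many such simulated days reproduces a mean occupancy profile with elevated activity around the assumed peaks, which can then drive appliance-level electricity demand models.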

In 2007, the world reached the unprecedented milestone of half of its people living in cities, and that proportion is projected to reach 60% by 2030. The combined effect of global climate change and rapid urban growth, accompanied by economic and industrial development, will likely make city residents more vulnerable to a number of urban environmental problems, including extreme weather and climate conditions, sea-level rise, poor public health and air quality, atmospheric transport of accidental or intentional releases of toxic material, and limited water resources. One fundamental aspect of predicting future risks and defining mitigation strategies is to understand the weather and regional climate affected by cities. For this reason, dozens of researchers from many disciplines and nations attended the Urban Weather and Climate Workshop. Twenty-five students from Chinese universities and institutes also took part. The presentations by the workshop's participants spanned a wide range of topics, from the interaction between the urban climate and energy consumption under climate change to the impact of urban areas on storms and local circulations, and from the impact of urbanization on the hydrological cycle to air quality and weather prediction.

This paper presents single-column model (SCM) simulations of a tropical squall-line case observed during the Coupled Ocean-Atmosphere Response Experiment of the Tropical Ocean/Global Atmosphere Programme. This case-study was part of an international model intercomparison project organized by Working Group 4 'Precipitating Convective Cloud Systems' of the GEWEX (Global Energy and Water-cycle Experiment) Cloud System Study. Eight SCM groups using different deep-convection parametrizations participated in this project. The SCMs were forced by temperature and moisture tendencies that had been computed from a reference cloud-resolving model (CRM) simulation using open boundary conditions. The comparison of the SCM results with the reference CRM simulation provided insight into the ability of current convection and cloud schemes to represent organized convection. The CRM results enabled a detailed evaluation of the SCMs in terms of the thermodynamic structure and the convective mass flux of the system, the latter being closely related to the surface convective precipitation. It is shown that the SCMs could reproduce reasonably well the time evolution of the surface convective and stratiform precipitation, the convective mass flux, and the thermodynamic structure of the squall-line system. The thermodynamic structure simulated by the SCMs depended on how the models partitioned the precipitation between convective and stratiform. However, structural differences persisted in the thermodynamic profiles simulated by the SCMs and the CRM. These differences could be attributed to the fact that the total mass flux used to compute the SCM forcing differed from the convective mass flux. The SCMs could not adequately represent the organized mesoscale circulations and the microphysical/radiative forcing associated with the stratiform region. This issue is generally known as the 'scale-interaction' problem that can only be properly addressed in fully three-dimensional simulations.
Sensitivity simulations run by several groups showed that the time evolution of the surface convective precipitation was considerably smoothed when the convective closure was based on convective available potential energy instead of moisture convergence. Finally, additional SCM simulations without using a convection parametrization indicated that the impact of a convection parametrization in forced SCM runs was more visible in the moisture profiles than in the temperature profiles because convective transport was particularly important in the moisture budget.
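The forcing method described above, prescribing large-scale temperature and moisture tendencies diagnosed from the CRM, can be sketched as a simple column update loop. This is an illustrative toy, not the GCSS intercomparison protocol: the level count, forcing values, and placeholder convection scheme are all invented.

```python
# Toy sketch of forcing an SCM column with prescribed large-scale
# tendencies: the CRM-derived forcing is applied first, then a
# (placeholder) convection scheme adjusts the profiles.
def step_column(T, q, dT_forcing, dq_forcing, dt, convection):
    """Advance temperature T (K) and moisture q (kg/kg) one time step."""
    T = [t + f * dt for t, f in zip(T, dT_forcing)]
    q = [max(0.0, x + f * dt) for x, f in zip(q, dq_forcing)]
    return convection(T, q)  # parametrized convective adjustment

def no_convection(T, q):
    """Trivial scheme: no adjustment, as in the unparametrized runs."""
    return T, q

# One 30-minute forced step on a 3-level column with uniform
# cooling (K/s) and moistening (kg/kg/s) forcings (invented values).
T0 = [300.0, 280.0, 260.0]
q0 = [0.015, 0.008, 0.002]
T1, q1 = step_column(T0, q0, [-1e-4] * 3, [1e-7] * 3, 1800.0, no_convection)
```

Swapping `no_convection` for a real mass-flux scheme is where the intercompared parametrizations would differ, and, per the abstract, where the moisture profiles are most sensitive.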

The term neural population models (NPMs) is used here as a catchall for a wide range of approaches that have been variously called neural mass models, mean field models, neural field models, bulk models, and so forth. All NPMs attempt to describe the collective action of neural assemblies directly. Some NPMs treat the densely populated tissue of cortex as an excitable medium, leading to spatially continuous cortical field theories (CFTs). An indirect approach would start by modelling individual cells and then would explain the collective action of a group of cells by coupling many individual models together. In contrast, NPMs employ collective state variables, typically defined as averages over the group of cells, in order to describe the population activity directly in a single model. The strength and the weakness of this approach are hence one and the same: simplification by bulk. Is this justified and indeed useful, or does it lead to oversimplification which fails to capture the pheno ...
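The classic concrete instance of an NPM is a Wilson-Cowan-type model: two collective state variables (the mean activities of an excitatory and an inhibitory population) replace thousands of coupled single-cell models. The sketch below uses illustrative coupling weights and inputs, not parameters from any particular study.

```python
import math

def sigmoid(x):
    """Population firing-rate response function."""
    return 1.0 / (1.0 + math.exp(-x))

def wilson_cowan(E, I, dt=0.01, wEE=12.0, wEI=10.0, wIE=9.0, wII=3.0,
                 P=1.0, Q=0.0, tau=1.0):
    """One Euler step of Wilson-Cowan-type population dynamics.

    E, I are collective state variables: mean activity of the
    excitatory and inhibitory populations (illustrative parameters).
    """
    dE = (-E + sigmoid(wEE * E - wEI * I + P)) / tau
    dI = (-I + sigmoid(wIE * E - wII * I + Q)) / tau
    return E + dt * dE, I + dt * dI

# Integrate the two population activities forward in time.
E, I = 0.1, 0.05
for _ in range(1000):
    E, I = wilson_cowan(E, I)
```

The "simplification by bulk" is visible directly: the entire network state is two numbers, which is what makes NPMs tractable, and also what risks missing structure in the underlying assembly.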

It is well known that social insects such as ants show interesting collective behaviors. How do they organize such behaviors? To expand understanding of the collective behaviors of social insects, we focused on the ant Diacamma and analyzed the behavior of a few individuals. In the experimental set-up, ants are placed in a hemisphere without a nest or food, and the trajectories of the ants are recorded. From this bottom-up approach, we found the following characteristics: (1) the activity of individuals increases and decreases periodically; (2) spontaneous meetings occur between pairs of ants, and the meeting spot of two ants is localized within the experimental field.

We argue that population modeling can add value to ecological risk assessment by reducing uncertainty when extrapolating from ecotoxicological observations to relevant ecological effects. We review other methods of extrapolation, ranging from application factors to species sensitivity distributions to suborganismal (biomarker and "-omics") responses to quantitative structure–activity relationships and model ecosystems, drawing attention to the limitations of each. We suggest a simple classification of population models and critically examine each model in an extrapolation context. We conclude that population models have the potential for adding value to ecological risk assessment by incorporating better understanding of the links between individual responses and population size and structure and by incorporating greater levels of ecological complexity. A number of issues, however, need to be addressed before such models are likely to become more widely used. In a science context, these involve challenges in parameterization, questions about appropriate levels of complexity, issues concerning how specific or general the models need to be, and the extent to which interactions through competition and trophic relationships can be easily incorporated.
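The simplest population model in such a classification is an age-structured (Leslie matrix) projection, which makes the individual-to-population extrapolation explicit: individual-level vital rates (fecundity, survival) determine the population growth rate, the dominant eigenvalue of the matrix. The vital rates and the 30% fecundity reduction below are invented purely for illustration.

```python
# Hypothetical 3-age-class Leslie matrix sketch: fecundity per age class
# and survival between classes map to the population growth rate lambda.
def leslie_lambda(fecundity, survival, n_iter=200):
    """Estimate the dominant eigenvalue of a Leslie matrix by power
    iteration on the age-class abundance vector."""
    n = [1.0] * len(fecundity)
    lam = 1.0
    for _ in range(n_iter):
        births = sum(f * x for f, x in zip(fecundity, n))
        new = [births] + [s * x for s, x in zip(survival, n[:-1])]
        lam = sum(new) / sum(n)
        n = new
    return lam

control = leslie_lambda([0.0, 2.0, 1.5], [0.5, 0.4])
# Toxicant effect at the individual level: 30% lower fecundity.
exposed = leslie_lambda([0.0, 1.4, 1.05], [0.5, 0.4])
```

With these made-up rates, the control population grows (lambda > 1) while the exposed one declines (lambda < 1): an individual-level ecotoxicological effect translated into a population-level consequence, which is precisely the extrapolation step the review argues population models can formalize.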

Seventeen simulations of the Last Glacial Maximum (LGM) climate have been performed using atmospheric general circulation models (AGCMs) in the framework of the Paleoclimate Modeling Intercomparison Project (PMIP). These simulations use the boundary conditions for CO2, insolation and ice sheets; sea surface temperatures (SSTs) are either (a) prescribed using the CLIMAP data set (eight models) or (b) computed by coupling the AGCM with a slab ocean (nine models). The present-day (PD) tropical climate is correctly depicted by all the models except the coarser-resolution ones, and the simulated geographical distribution of annual mean temperature is in good agreement with climatology. Tropical cooling at the LGM is less than at middle and high latitudes, but greatly exceeds the PD temperature variability. The LGM simulations with prescribed SSTs underestimate the observed temperature changes except over equatorial Africa, where the models produce a temperature decrease consistent with the data. Our results confirm previous analyses showing that CLIMAP (1981) SSTs only produce a weak terrestrial cooling. When SSTs are computed, the models depict a cooling over the Pacific and Indian oceans in contrast with CLIMAP, and most models produce cooler temperatures over land. Moreover, four of the nine simulations produce a cooling in good agreement with terrestrial data. Two of these model results over ocean are consistent with new SST reconstructions, whereas two models simulate a homogeneous cooling. Finally, the LGM aridity inferred for most of the tropics from the data is globally reproduced by the models, with a strong underestimation for models using computed SSTs.

This paper evaluates the current status of global modeling of the organic aerosol (OA) in the troposphere and analyzes the differences between models as well as between models and observations. Thirty-one global chemistry transport models (CTMs) and general circulation models (GCMs) have participated in this intercomparison, in the framework of AeroCom phase II. The simulation of OA varies greatly between models in terms of the magnitude of primary emissions, secondary OA (SOA) formation, the number of OA species used (2 to 62), the complexity of OA parameterizations (gas-particle partitioning, chemical aging, multiphase chemistry, aerosol microphysics), and the OA physical, chemical and optical properties. The diversity of the global OA simulation results has increased since earlier AeroCom experiments, mainly due to the increasing complexity of the SOA parameterization in models, and the implementation of new, highly uncertain, OA sources. Diversity of over one order of magnitude exists in the modeled vertical distribution of OA concentrations that deserves a dedicated future study. Furthermore, although the OA / OC ratio depends on OA sources and atmospheric processing, and is important for model evaluation against OA and OC observations, it is resolved only by a few global models. The median global primary OA (POA) source strength is 56 Tg a−1 (range 34–144 Tg a−1) and the median SOA source strength (natural and anthropogenic) is 19 Tg a−1 (range 13–121 Tg a−1). Among the models that take into account the semi-volatile SOA nature, the median source is calculated to be 51 Tg a−1 (range 16–121 Tg a−1), much larger than the median value of the models that calculate SOA in a more simplistic way (19 Tg a−1; range 13–20 Tg a−1, with one model at 37 Tg a−1). The median atmospheric burden of OA is 1.4 Tg (24 models in the range of 0.6–2.0 Tg and 4 between 2.0 and 3.8 Tg), with a median OA lifetime of 5.4 days (range 3.8–9.6 days). 
In models that reported both OA and sulfate burdens, the median value of the OA/sulfate burden ratio is calculated to be 0.77; 13 models calculate a ratio lower than 1, and 9 models higher than 1. For 26 models that reported OA deposition fluxes, the median wet removal is 70 Tg a−1 (range 28–209 Tg a−1), which is on average 85% of the total OA deposition. Fine aerosol organic carbon (OC) and OA observations from continuous monitoring networks and individual field campaigns have been used for model evaluation. At urban locations, the model–observation comparison indicates missing knowledge on anthropogenic OA sources, both strength and seasonality. The combined model–measurements analysis suggests the existence of increased OA levels during summer due to biogenic SOA formation over large areas of the USA that can be of the same order of magnitude as the POA, even at urban locations, and contribute to the measured urban seasonal pattern. Global models are able to simulate the high secondary character of OA observed in the atmosphere as a result of SOA formation and POA aging, although the amount of OA present in the atmosphere remains largely underestimated, with a mean normalized bias (MNB) equal to −0.62 (−0.51) based on the comparison against OC (OA) urban data of all models at the surface, −0.15 (+0.51) when compared with remote measurements, and −0.30 for marine locations with OC data. The mean temporal correlations across all stations are low when compared with OC (OA) measurements: 0.47 (0.52) for urban stations, 0.39 (0.37) for remote stations, and 0.25 for marine stations with OC data. The combination of high (negative) MNB and higher correlation at urban stations when compared with the low MNB and lower correlation at remote sites suggests that knowledge about the processes that govern aerosol processing, transport and removal, on top of their sources, is important at the remote stations. 
There is no clear change in model skill with increasing model complexity with regard to OC or OA mass concentration. However, the complexity is needed in models in order to distinguish between anthropogenic and natural OA as needed for climate mitigation, and to calculate the impact of OA on climate accurately.
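The mean normalized bias (MNB) used in the model-observation comparison above can be written down in a few lines. This sketch assumes the common definition, the mean over paired samples of (model − obs)/obs, so an MNB of −0.62 means the models underestimate observed OC by 62% on average in relative terms; the input values here are invented for illustration.

```python
# Mean normalized bias over paired model/observation samples.
def mean_normalized_bias(model, obs):
    """MNB = mean over pairs i of (model_i - obs_i) / obs_i.

    Pairs with non-positive observations are skipped, since the
    normalization is undefined there.
    """
    pairs = [(m - o) / o for m, o in zip(model, obs) if o > 0]
    return sum(pairs) / len(pairs)

# Each model value is half its observation, so MNB = -0.5.
mnb = mean_normalized_bias([1.0, 2.0, 1.5], [2.0, 4.0, 3.0])
```

Note that MNB is a relative, signed metric: it captures systematic under- or overestimation but says nothing about temporal co-variability, which is why the abstract reports it alongside station-mean temporal correlations.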

Applications such as neuroscience, telecommunication, online social networking, transport and retail trading give rise to connectivity patterns that change over time. In this work, we address the resulting need for network models and computational algorithms that deal with dynamic links. We introduce a new class of evolving range-dependent random graphs that gives a tractable framework for modelling and simulation. We develop a spectral algorithm for calibrating a set of edge ranges from a sequence of network snapshots and give a proof of principle illustration on some neuroscience data. We also show how the model can be used computationally and analytically to investigate the scenario where an evolutionary process, such as an epidemic, takes place on an evolving network. This allows us to study the cumulative effect of two distinct types of dynamics.
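A (static) range-dependent random graph of the kind underlying this model class can be generated very simply: nodes are ordered, and the probability of an edge between nodes i and j decays with their range |i − j|. The geometric decay law and the parameter values below are assumptions for illustration; the evolving version would re-draw or update edges between snapshots.

```python
import random

def range_dependent_graph(n, alpha=0.9, lam=0.5, seed=0):
    """Return an edge set where P(edge i~j) = alpha * lam**(|i - j| - 1).

    Nearby nodes (small range) are likely to be linked; the probability
    decays geometrically with range. Parameters are illustrative.
    """
    rng = random.Random(seed)
    edges = set()
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < alpha * lam ** (j - i - 1):
                edges.add((i, j))
    return edges

edges = range_dependent_graph(50)
```

Calibration then runs in the other direction: given snapshots of a real network, recover the edge ranges (here, a node ordering and decay parameters) that best explain the observed connectivity, which is what the spectral algorithm in the paper does.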

Reanalysis data provide an excellent test bed for impacts prediction systems, because they represent an upper limit on the skill of climate models. Indian groundnut (Arachis hypogaea L.) yields have been simulated using the General Large-Area Model (GLAM) for annual crops and the European Centre for Medium-Range Weather Forecasts (ECMWF) 40-yr reanalysis (ERA-40). The ability of ERA-40 to represent the Indian summer monsoon has been examined. The ability of GLAM, when driven with daily ERA-40 data, to model both observed yields and observed relationships between subseasonal weather and yield has been assessed. Mean yields were simulated well across much of India. Correlations between observed and modeled yields, where these are significant, are comparable to correlations between observed yields and ERA-40 rainfall. Uncertainties due to the input planting window, crop duration, and weather data have been examined. A reduction in the root-mean-square error of simulated yields was achieved by applying bias correction techniques to the precipitation. The stability of the relationship between weather and yield over time has been examined. Weather-yield correlations vary on decadal time scales, and this has direct implications for the accuracy of yield simulations. Analysis of the skewness of both detrended yields and precipitation suggests that nonclimatic factors are partly responsible for this nonstationarity. Evidence from other studies, including data on cereal and pulse yields, indicates that this result is not particular to groundnut yield. The detection and modeling of nonstationary weather-yield relationships emerges from this study as an important part of the process of understanding and predicting the impacts of climate variability and change on crop yields.
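One common form of the precipitation bias correction mentioned above is mean scaling: each daily reanalysis value is multiplied by the ratio of the observed to the reanalysis monthly-mean rainfall, so the corrected series matches the observed mean while keeping the daily sequence. This is a hedged sketch of that generic technique; the exact correction applied to ERA-40 for GLAM may differ, and the numbers below are invented.

```python
# Monthly mean-scaling bias correction for daily precipitation.
def scale_correct(daily_rain, obs_monthly_mean):
    """Rescale daily rainfall so its mean matches the observed mean.

    Dry months (zero model rain) are returned unchanged, since a
    multiplicative factor cannot create rain from nothing.
    """
    model_mean = sum(daily_rain) / len(daily_rain)
    if model_mean == 0.0:
        return list(daily_rain)
    factor = obs_monthly_mean / model_mean
    return [r * factor for r in daily_rain]

# Model mean is 3.0 mm/day vs an observed 4.5 mm/day, so factor = 1.5.
corrected = scale_correct([0.0, 2.0, 4.0, 6.0], 4.5)
```

A multiplicative correction preserves dry days and the relative day-to-day variability, which matters for a crop model like GLAM whose yields respond to subseasonal rainfall distribution, not just monthly totals.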

This is the first of two articles presenting a detailed review of the historical evolution of mathematical models applied in the development of building technology, including conventional buildings and intelligent buildings. After presenting the technical differences between conventional and intelligent buildings, this article reviews the existing mathematical models, the abstraction levels of these models, and their links to the literature on intelligent buildings. The advantages and limitations of the applied mathematical models are identified, and the models are classified in terms of their application range and goal. We then describe how the early mathematical models, mainly physical models applied to conventional buildings, have faced new challenges for the design and management of intelligent buildings and led to the use of models which offer more flexibility to better cope with various uncertainties. In contrast with the early modelling techniques, approaches based on neural networks, expert systems, fuzzy logic and genetic models provide a promising way to accommodate these complications, as intelligent buildings now need integrated technologies which involve solving complex, multi-objective and integrated decision problems.