946 results for System model
Abstract:
Atmospheric dust is an important feedback in the climate system, potentially affecting the radiative balance and chemical composition of the atmosphere and providing nutrients to terrestrial and marine ecosystems. Yet the potential impact of dust on the climate system, both in the anthropogenically disturbed future and the naturally varying past, remains to be quantified. The geologic record of dust provides the opportunity to test earth system models designed to simulate dust. Records of dust can be obtained from ice cores, marine sediments, and terrestrial (loess) deposits. Although rarely unequivocal, these records document a variety of processes (source, transport and deposition) in the dust cycle, stored in each archive as changes in clay mineralogy, isotopes, grain size, and concentration of terrigenous materials. Although the extraction of information from each type of archive is slightly different, the basic controls on these dust indicators are the same. Changes in the dust flux and particle size might be controlled by a combination of (a) source area extent, (b) dust emission efficiency (wind speed) and atmospheric transport, (c) atmospheric residence time of dust, and/or (d) relative contributions of dry settling and rainout of dust. Similarly, changes in mineralogy reflect (a) source area mineralogy and weathering and (b) shifts in atmospheric transport. The combination of these geological data with process-based, forward-modelling schemes in global earth system models provides an excellent means of achieving a comprehensive picture of the global pattern of dust accumulation rates, their controlling mechanisms, and how those mechanisms may vary regionally. The Dust Indicators and Records of Terrestrial and MArine Palaeoenvironments (DIRTMAP) data base has been established to provide a global palaeoenvironmental data set that can be used to validate earth system model simulations of the dust cycle over the past 150,000 years.
Abstract:
A statistical–dynamical downscaling (SDD) approach for the regionalization of wind energy output (Eout) over Europe, with special focus on Germany, is proposed. SDD uses an extended circulation weather type (CWT) analysis on global daily mean sea level pressure fields with the central point located over Germany. Seventy-seven weather classes based on the associated CWT and the intensity of the geostrophic flow are identified. Representatives of these classes are dynamically downscaled with the regional climate model COSMO-CLM. By using weather class frequencies from different data sets, the simulated representatives are recombined into probability density functions (PDFs) of near-surface wind speed and finally into Eout of a sample wind turbine for present and future climate. This is performed for reanalysis, decadal hindcasts and long-term future projections. For evaluation purposes, results of SDD are compared to wind observations and to simulated Eout of purely dynamical downscaling (DD) methods. For the present climate, SDD is able to simulate realistic PDFs of 10-m wind speed for most stations in Germany. The resulting spatial Eout patterns are similar to DD-simulated Eout. For decadal hindcasts, results of SDD are similar to DD-simulated Eout over Germany, Poland, the Czech Republic and Benelux, for which high correlations between annual Eout time series of SDD and DD are detected for selected hindcasts. Lower correlation is found for other European countries. It is demonstrated that SDD can be used to downscale the full ensemble of the Earth System Model of the Max Planck Institute (MPI-ESM) decadal prediction system. Long-term climate change projections under the Special Report on Emissions Scenarios with ECHAM5/MPI-OM, as obtained by SDD, agree well with the results of other studies using DD methods, with increasing Eout over northern Europe and a negative trend over southern Europe. Despite some biases, it is concluded that SDD is an adequate tool to assess regional wind energy changes in large model ensembles.
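As a minimal sketch of the recombination step described above, the snippet below weights per-class wind-speed samples by weather class frequencies and evaluates a turbine power curve; the class frequencies, Weibull wind distributions, and power curve are hypothetical stand-ins, not the study's data.

```python
import numpy as np

# Hypothetical per-class Weibull wind-speed samples standing in for the
# COSMO-CLM representatives of each circulation weather type (CWT) class.
rng = np.random.default_rng(42)
n_classes = 77
class_freq = rng.dirichlet(np.ones(n_classes))          # weather class frequencies
class_winds = [rng.weibull(2.0, 10_000) * (4 + 8 * rng.random())
               for _ in range(n_classes)]               # 10-m wind speed samples (m/s)

def power_curve(v):
    """Idealized turbine power curve (kW): cut-in 3 m/s, rated 13 m/s, cut-out 25 m/s."""
    rated = 2000.0
    return np.where(v < 3, 0.0,
           np.where(v < 13, rated * ((v - 3) / 10) ** 3,
           np.where(v < 25, rated, 0.0)))

# Recombine: weight each class's expected power output by its frequency.
eout = sum(f * power_curve(np.asarray(v)).mean()
           for f, v in zip(class_freq, class_winds))
print(f"Mean wind energy output Eout: {eout:.0f} kW")
```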
Abstract:
Though many global aerosol models prognose surface deposition, only a few models have been used to directly simulate the radiative effect from black carbon (BC) deposition to snow and sea ice. Here, we apply aerosol deposition fields from 25 models contributing to two phases of the Aerosol Comparisons between Observations and Models (AeroCom) project to simulate and evaluate within-snow BC concentrations and radiative effect in the Arctic. We accomplish this by driving the offline land and sea ice components of the Community Earth System Model with different deposition fields and meteorological conditions from 2004 to 2009, during which an extensive field campaign of BC measurements in Arctic snow occurred. We find that models generally underestimate BC concentrations in snow in northern Russia and Norway, while overestimating BC amounts elsewhere in the Arctic. Although simulated BC distributions in snow are poorly correlated with measurements, mean values are reasonable. The multi-model mean (range) bias in BC concentrations, sampled over the same grid cells, snow depths, and months of measurements, is −4.4 (−13.2 to +10.7) ng g⁻¹ for an earlier phase of AeroCom models (phase I), and +4.1 (−13.0 to +21.4) ng g⁻¹ for a more recent phase of AeroCom models (phase II), compared to the observational mean of 19.2 ng g⁻¹. Factors determining model BC concentrations in Arctic snow include Arctic BC emissions, transport of extra-Arctic aerosols, precipitation, deposition efficiency of aerosols within the Arctic, and meltwater removal of particles in snow. Sensitivity studies show that the model–measurement evaluation is only weakly affected by meltwater scavenging efficiency because most measurements were conducted in non-melting snow. The Arctic (60–90° N) atmospheric residence time for BC in phase II models ranges from 3.7 to 23.2 days, implying large inter-model variation in local BC deposition efficiency. Combined with the fact that most Arctic BC deposition originates from extra-Arctic emissions, these results suggest that aerosol removal processes are a leading source of variation in model performance. The multi-model mean (full range) of the Arctic radiative effect from BC in snow is 0.15 (0.07–0.25) W m⁻² and 0.18 (0.06–0.28) W m⁻² in phase I and phase II models, respectively. After correcting for model biases relative to observed BC concentrations in different regions of the Arctic, we obtain a multi-model mean Arctic radiative effect of 0.17 W m⁻² for the combined AeroCom ensembles. Finally, there is a high correlation between modeled BC concentrations sampled over the observational sites and the Arctic as a whole, indicating that the field campaign provided a reasonable sample of the Arctic.
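A minimal sketch of the multi-model bias evaluation implied above, assuming small hypothetical arrays of modeled and observed BC-in-snow concentrations co-sampled at the same sites and months; the values are illustrative, not AeroCom output.

```python
import numpy as np

# Hypothetical co-sampled BC-in-snow concentrations (ng/g): one row per model,
# one column per observation site/month, standing in for the AeroCom fields.
obs = np.array([12.0, 25.0, 18.0, 30.0, 11.0])          # observed concentrations
models = np.array([[10.0, 20.0, 25.0, 28.0, 15.0],      # model A
                   [ 5.0, 14.0, 12.0, 22.0,  9.0],      # model B
                   [18.0, 33.0, 22.0, 41.0, 20.0]])     # model C

per_model_bias = models.mean(axis=1) - obs.mean()       # bias of each model mean
print("Per-model bias (ng/g):", np.round(per_model_bias, 1))
print(f"Multi-model mean bias: {per_model_bias.mean():+.1f} ng/g "
      f"(range {per_model_bias.min():+.1f} to {per_model_bias.max():+.1f})")
```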
Abstract:
The WFDEI meteorological forcing data set has been generated using the same methodology as the widely used WATCH Forcing Data (WFD) by making use of the ERA-Interim reanalysis data. We discuss the specifics of how changes in the reanalysis and processing have led to improvements over the WFD. We attribute improvements in precipitation and wind speed to the latest reanalysis basis data, and improved downward shortwave fluxes to the changes in the aerosol corrections. Covering 1979–2012, the WFDEI will allow more thorough comparisons of hydrological and Earth System model outputs with hydrologically and phenologically relevant satellite products than was possible with the WFD.
Abstract:
Climate change is projected to cause substantial alterations in vegetation distribution, but these have been given little attention in comparison to land use in the Representative Concentration Pathway (RCP) scenarios. Here we assess the climate-induced land cover changes (CILCC) in the RCPs, and compare them to land-use land cover change (LULCC). To do this, we use an ensemble of simulations with and without LULCC in the earth system model HadGEM2-ES for RCP2.6, RCP4.5 and RCP8.5. We find that climate change causes a poleward expansion of vegetation that affects more land area than LULCC in all of the RCPs considered here. The terrestrial carbon changes from CILCC are also larger than those from LULCC. When considering only forest, the LULCC is larger, but the CILCC varies strongly with the overall radiative forcing of the scenario. The CILCC forest increase compensates for 90% of global anthropogenic deforestation by 2100 in RCP8.5, but just 3% in RCP2.6. Overall, larger land cover changes tend to originate from LULCC in the shorter term or lower radiative forcing scenarios, and from CILCC in the longer term and higher radiative forcing scenarios. The extent to which CILCC could compensate for LULCC raises difficult questions regarding global forest and biodiversity offsetting, especially at different timescales. This research shows the importance of considering the relative size of CILCC and LULCC, especially with regard to the ecological effects of the different RCPs.
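A minimal sketch of how CILCC and LULCC can be separated from paired simulations with and without land use, under the simplifying assumption that the difference between the two runs isolates LULCC; the gridded fields and the "affected area" threshold are synthetic placeholders.

```python
import numpy as np

# Hypothetical forest-fraction fields (lat x lon) at 2005 and 2100 from paired
# HadGEM2-ES-style runs: "noluc" has climate change only, "full" adds land use.
rng = np.random.default_rng(0)
f2005 = rng.random((45, 90))
f2100_noluc = np.clip(f2005 + rng.normal(0.05, 0.05, f2005.shape), 0, 1)
f2100_full = np.clip(f2100_noluc - rng.normal(0.03, 0.03, f2005.shape), 0, 1)

cilcc = f2100_noluc - f2005          # climate-induced land cover change
lulcc = f2100_full - f2100_noluc     # land-use land cover change (residual)

threshold = 0.1  # fraction change counted as "affected" (illustrative)
print("Fraction of cells affected by CILCC:", (np.abs(cilcc) > threshold).mean())
print("Fraction of cells affected by LULCC:", (np.abs(lulcc) > threshold).mean())
```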
Abstract:
In this paper, we investigate half-duplex two-way dual-hop channel state information (CSI)-assisted amplify-and-forward (AF) relaying in the presence of high-power amplifier (HPA) nonlinearity at the relays. The expression for the end-to-end signal-to-noise ratio (SNR) is derived for the modified system model, taking into account the interference caused by the relaying scheme and the HPA nonlinearity. The performance of the considered relaying network is evaluated in terms of the average symbol error probability (SEP) in Nakagami-m fading channels, making use of the moment-generating function (MGF) approach. Numerical results are provided and show the effects of several parameters, such as the quadrature amplitude modulation (QAM) order, the number of relays, the HPA parameters, and the Nakagami parameter, on performance.
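For context, the textbook MGF-approach machinery the abstract invokes can be sketched as follows; this is the standard Nakagami-m form, not the paper's modified end-to-end SNR expression.

```latex
% Average SEP via the MGF approach: when the conditional SEP is a finite
% sum of integrals of exponentials in the instantaneous SNR \gamma,
% averaging over fading reduces to evaluating the MGF
% M_\gamma(s) = E[e^{-s\gamma}].  For Nakagami-m fading with average SNR \bar\gamma:
\[
  M_\gamma(s) = \left(1 + \frac{s\,\bar\gamma}{m}\right)^{-m},
\]
% e.g. for M-PSK the average SEP becomes a single finite-range integral:
\[
  \bar P_s = \frac{1}{\pi} \int_0^{\frac{(M-1)\pi}{M}}
             M_\gamma\!\left(\frac{\sin^2(\pi/M)}{\sin^2\theta}\right) d\theta .
\]
```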
Abstract:
Uncertainty of Arctic seasonal to interannual predictions arising from model errors and initial state uncertainty has been widely discussed in the literature, whereas the irreducible forecast uncertainty (IFU) arising from the chaoticity of the climate system has received less attention. However, IFU provides important insights into the mechanisms through which predictability is lost, and hence can inform the prioritization of model development and the deployment of observations. Here, we characterize how internal oceanic and surface atmospheric heat fluxes contribute to the IFU of Arctic sea ice and upper ocean heat content in an Earth system model by analyzing a set of idealized ensemble prediction experiments. We find that atmospheric and oceanic heat fluxes are often equally important for driving unpredictable Arctic-wide changes in sea ice and surface water temperatures, and hence contribute equally to IFU. Atmospheric surface heat flux tends to dominate Arctic-wide changes for lead times of up to a year, whereas oceanic heat flux tends to dominate regionally and on interannual time scales. There is in general a strong negative covariance between surface heat flux and ocean vertical heat flux at depth, and anomalies of lateral ocean heat transport are wind-driven, which suggests that the unpredictable oceanic heat flux variability is mainly forced by the atmosphere. These results are qualitatively robust across different initial states, but substantial variations in the amplitude of IFU exist. We conclude that both atmospheric variability and the initial state of the upper ocean are key ingredients for predictions of Arctic surface climate on seasonal to interannual time scales.
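A minimal sketch of how IFU can be diagnosed from such idealized "perfect-model" ensembles, as the growth of ensemble spread with lead time; the AR(1) trajectories below are synthetic placeholders for the model's sea ice anomalies.

```python
import numpy as np

# Synthetic perfect-model ensemble: members share the initial state and model,
# differing only in tiny perturbations, so their spread measures IFU growth.
rng = np.random.default_rng(1)
n_members, n_months = 30, 60
noise = rng.normal(0, 0.1, (n_members, n_months))
anom = np.zeros((n_members, n_months))
for t in range(1, n_months):
    anom[:, t] = 0.9 * anom[:, t - 1] + noise[:, t]   # AR(1) persistence

ifu = anom.std(axis=0, ddof=1)   # ensemble spread at each lead time
print("IFU after 12 months:", round(float(ifu[11]), 3))
print("Asymptotic (saturated) spread:", round(float(ifu[-12:].mean()), 3))
```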
Abstract:
Decadal predictions on timescales from one year to one decade are gaining importance, since this time frame falls within the planning horizon of politics, the economy and society. The present study examines the decadal predictability of regional wind speed and wind energy potentials in three generations of the MiKlip (‘Mittelfristige Klimaprognosen’) decadal prediction system. The system is based on the global Max Planck Institute Earth System Model (MPI-ESM), and the three generations differ primarily in the ocean initialisation. Ensembles of uninitialised historical and yearly initialised hindcast experiments are used to assess the forecast skill for 10-m wind speeds and wind energy output (Eout) over Central Europe with lead times from one year to one decade. To this end, a statistical-dynamical downscaling (SDD) approach is used for the regionalisation. Its added value is evaluated by comparing skill scores for MPI-ESM large-scale wind speeds and SDD-simulated regional wind speeds. All three MPI-ESM ensemble generations show some forecast skill for annual mean wind speed and Eout over Central Europe on yearly and multi-yearly time scales. This forecast skill is mostly limited to the first years after initialisation. Differences between the three ensemble generations are generally small. The regionalisation preserves and sometimes increases the forecast skill of the global runs, but results depend on lead time and ensemble generation. Moreover, regionalisation often improves the ensemble spread. Forecast skill for seasonal Eout is generally lower than for annual means; skill scores are lowest during summer and persist longest in autumn. A large-scale westerly weather type with strong pressure gradients over Central Europe is identified as a potential source of the skill for wind energy potentials, showing similar forecast skill and a high correlation with Eout anomalies. These results are a promising step towards the establishment of a decadal prediction system for wind energy applications over Central Europe.
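A minimal sketch of one common way to quantify the hindcast skill discussed above, via a mean-squared-error skill score of initialized hindcasts against the uninitialized historical ensemble; the time series are synthetic, and the choice of metric is an assumption rather than necessarily the study's.

```python
import numpy as np

rng = np.random.default_rng(2)
years = 20
truth = rng.normal(0, 1, years)                   # "observed" annual Eout anomalies
init = truth + rng.normal(0, 0.6, years)          # initialized hindcast ensemble mean
uninit = rng.normal(0, 1, years)                  # uninitialized historical ensemble mean

def msess(forecast, reference, obs):
    """MSE skill score: 1 - MSE(forecast)/MSE(reference); > 0 means added skill."""
    mse_f = np.mean((forecast - obs) ** 2)
    mse_r = np.mean((reference - obs) ** 2)
    return 1.0 - mse_f / mse_r

print(f"MSESS of initialized vs uninitialized: {msess(init, uninit, truth):.2f}")
```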
Abstract:
Despite the importance of dust aerosol in the Earth system, state-of-the-art models show a large variety in North African dust emission. This study presents a systematic evaluation of dust-emitting winds in 30 years of the historical model simulation with the UK Met Office Earth-system model HadGEM2-ES for the Coupled Model Intercomparison Project Phase 5. Isolating the effect of winds on dust emission and using an automated detection for nocturnal low-level jets (NLLJs) allow an in-depth evaluation of the model performance for dust emission from a meteorological perspective. The findings highlight that NLLJs are a key driver of dust emission in HadGEM2-ES in terms of occurrence frequency and strength. The annually and spatially averaged occurrence frequency of NLLJs is similar in HadGEM2-ES and ERA-Interim from the European Centre for Medium-Range Weather Forecasts. Compared to ERA-Interim, a stronger pressure ridge over northern Africa in winter and a southward-displaced heat low in summer result in differences in the location and strength of NLLJs. In particular, the larger geostrophic winds associated with the stronger ridge have a strengthening effect on NLLJs over parts of West Africa in winter. Stronger NLLJs in summer may instead result from an artificially increased mixing coefficient under stable stratification, which is weaker in HadGEM2-ES. NLLJs in the Bodélé Depression are affected by stronger synoptic-scale pressure gradients in HadGEM2-ES. Wintertime geostrophic winds can even be so strong that the associated vertical wind shear prevents the formation of NLLJs. These results call for further model improvements in the synoptic-scale dynamics and the physical parametrization of the nocturnal stable boundary layer to better represent dust-emitting processes in the atmospheric model. The new approach could be used to identify systematic behavior in other models with respect to the meteorological processes for dust emission. This would help to improve dust emission simulations and contribute to decreasing the currently large uncertainty in climate change projections with respect to dust aerosol.
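A minimal sketch of an automated NLLJ detection of the kind referred to above, flagging a low-level wind maximum that exceeds the flow aloft by fixed thresholds; the profile, height limits, and thresholds are illustrative assumptions, not the study's exact criteria.

```python
import numpy as np

def detect_nllj(speed, height, max_jet_height=1500.0,
                min_falloff=2.0, min_ratio=1.3):
    """Flag a nocturnal low-level jet: a wind maximum below max_jet_height (m)
    that exceeds the weakest flow above it by min_falloff (m/s) and min_ratio."""
    low = height <= max_jet_height
    k = int(np.argmax(np.where(low, speed, -np.inf)))   # level of low-level maximum
    above_min = speed[k:][height[k:] <= 3000.0].min()   # weakest flow above the core
    falloff = speed[k] - above_min
    return falloff >= min_falloff and speed[k] / max(above_min, 0.1) >= min_ratio

# Illustrative nighttime profile with a jet core near 400 m.
height = np.array([10, 100, 250, 400, 700, 1000, 1500, 2000, 3000], float)
speed = np.array([3.0, 7.0, 11.0, 13.5, 10.0, 8.0, 7.0, 6.5, 6.0])
print("NLLJ detected:", detect_nllj(speed, height))
```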
Abstract:
Photoexpansion and photobleaching effects have been observed in amorphous GeS₂ + Ga₂O₃ (GGSO) thin films when their surfaces were exposed to UV light. The photoinduced changes on the surface of the samples indicate that the structure has changed as a result of photoexcitation. In this paper, micro-Raman spectroscopy, energy-dispersive X-ray analysis (EDX) and backscattered electron (BSE) microscopy were used to identify the origin of these effects. Raman spectra revealed that these phenomena are a consequence of the breaking of Ge-S bonds and the formation of new Ge-O bonds, with an increase of the modes associated with Ge-O-Ge bonds and mixed oxysulphide tetrahedral units (S-Ge-O). The chemical composition measured by EDX and the BSE microscopy images indicated that the irradiated area is oxygen-rich. The present paper thus provides fundamental insights into the influence of oxygen within the glass matrix on the observed photoinduced effects.
Abstract:
Many solutions to AI problems require the task to be represented in one of a multitude of rigorous mathematical formalisms. The construction of such mathematical models forms a difficult problem which is often left to the user of the problem solver. This void between problem solvers and the problems is studied by the eclectic field of automated modelling. Within this field, compositional modelling, a knowledge-based methodology for system modelling, has established itself as a leading approach. In general, a compositional modeller organises knowledge in a structure of composable fragments that relate to particular system components or processes. Its embedded inference mechanism chooses the appropriate fragments with respect to a given problem, instantiates and assembles them into a consistent system model. Many different types of compositional modeller exist, however, with significant differences in their knowledge representation and approach to inference. This paper examines compositional modelling. It presents a general framework for building and analysing compositional modellers. Based on this framework, a number of influential compositional modellers are examined and compared. The paper also identifies the strengths and weaknesses of compositional modelling and discusses some typical applications.
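A minimal sketch of the fragment-selection-and-assembly loop common to the compositional modellers discussed here; the fragment representation, matching rule, and example relations are simplified placeholders.

```python
# Each model fragment declares the system component it applies to, the
# assumptions it requires, and the equations/relations it contributes.
FRAGMENTS = [
    {"applies_to": "tank", "assumes": {"incompressible"},
     "relations": ["dV/dt = Qin - Qout"]},
    {"applies_to": "pipe", "assumes": {"laminar"},
     "relations": ["Q = (P1 - P2) / R"]},
    {"applies_to": "pipe", "assumes": {"turbulent"},
     "relations": ["Q = C * sqrt(P1 - P2)"]},
]

def compose_model(components, scenario_assumptions):
    """Pick one applicable fragment per component and assemble their relations."""
    model = []
    for comp in components:
        candidates = [f for f in FRAGMENTS
                      if f["applies_to"] == comp
                      and f["assumes"] <= scenario_assumptions]
        if not candidates:
            raise ValueError(f"no consistent fragment for component {comp!r}")
        model.extend(candidates[0]["relations"])   # naive choice; real systems search
    return model

print(compose_model(["tank", "pipe"], {"incompressible", "laminar"}))
```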
Abstract:
Formal methods and software testing are tools for obtaining and controlling software quality. When used together, they provide mechanisms for software specification, verification and error detection. Even though formal methods allow software to be mathematically verified, they are not enough to assure that a system is free of faults; thus, software testing techniques are necessary to complement the process of verification and validation of a system. Model-based testing techniques allow tests to be generated from other software artifacts such as specifications and abstract models. Using formal specifications as the basis for test creation, we can generate better-quality tests, because these specifications are usually precise and free of ambiguity. Fernanda Souza (2009) proposed a method to define test cases from B Method specifications. This method used information from the machine's invariant and the operation's precondition to define positive and negative test cases for an operation, using techniques based on equivalence class partitioning and boundary value analysis. However, the method proposed in 2009 was not automated and had conceptual deficiencies; for instance, it did not fit into a well-defined coverage criteria classification. We started our work with a case study that applied the method to an example B specification from industry. Based on this case study, we obtained input to improve the method. In this work we evolved the proposed method, rewriting it and adding characteristics to make it compatible with a test classification used by the community. We also improved the method to support specifications structured in different components, to use information from the operation's behavior in the test case generation process, and to use new coverage criteria. In addition, we implemented a tool to automate the method and applied it to more complex case studies.
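A minimal sketch of the boundary-value idea underlying the test generation described above; the interval precondition and the withdraw operation are invented for illustration and are not from the B specification studied.

```python
def boundary_values(lo, hi):
    """Boundary value analysis for an integer interval precondition lo <= x <= hi:
    values just outside each boundary (negative tests) and on/just inside
    each boundary (positive tests)."""
    return {"negative": [lo - 1, hi + 1],
            "positive": [lo, lo + 1, hi - 1, hi]}

# Hypothetical operation precondition: 0 <= amount <= 100, e.g. a B machine's
# withdraw(amount) operation with invariant balance >= 0.
cases = boundary_values(0, 100)
for kind, values in cases.items():
    for v in values:
        print(f"{kind} test: withdraw(amount={v})")
```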
Abstract:
OBJECTIVE: To evaluate the sensitivity of the photoscreener device in detecting ocular alterations in verbal children, comparing the data with the visual acuity obtained using the Snellen E chart. METHODS: 500 children aged 5 to 12 years, from a school in the municipality of Botucatu, São Paulo state, were evaluated. The children underwent visual acuity testing with the Snellen E chart and were photographed using the photoscreener™ system model MTI-PS100, followed by analysis of the photographs. RESULTS: There was negative concordance (children with good visual acuity and a negative photoscreener test) in 81.0% of cases, positive concordance (altered visual acuity and a positive test) in 7.6%, and no concordance between the results in 11.0% of cases. CONCLUSION: The comparative evaluation between the Snellen E chart visual acuity method and the photoscreener for detecting visual problems showed high concordance. The authors nevertheless suggest screening with visual acuity charts for verbal children, given the cost of the device.
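As a quick arithmetic check on the reported figures, the overall agreement between the two methods is the sum of the negative and positive concordance rates:

```latex
\[
  \text{overall agreement} = 81.0\% + 7.6\% = 88.6\% ,
\]
% with the reported 11.0% of cases showing no concordance between the methods.
```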