975 results for Global Processing Speed
Abstract:
Northern Hemisphere tropical cyclone (TC) activity is investigated in multiyear global climate simulations with the ECMWF Integrated Forecast System (IFS) at 10-km resolution forced by the observed records of sea surface temperature and sea ice. The results are compared to analogous simulations with the 16-, 39-, and 125-km versions of the model as well as observations. In the North Atlantic, mean TC frequency in the 10-km model is comparable to the observed frequency, whereas it is too low in the other versions. While spatial distributions of the genesis and track densities improve systematically with increasing resolution, the 10-km model displays a qualitatively more realistic simulation of the track density in the western subtropical North Atlantic. In the North Pacific, the TC count tends to be too high in the west and too low in the east for all resolutions. These model errors appear to be associated with the errors in the large-scale environmental conditions that are fairly similar in this region for all model versions. The largest benefits of the 10-km simulation are the dramatically more accurate representation of the TC intensity distribution and the structure of the most intense storms. The model can generate a supertyphoon with a maximum surface wind speed of 68.4 m s−1. The life cycle of an intense TC comprises intensity fluctuations that occur in apparent connection with the variations of the eyewall/rainband structure. These findings suggest that a hydrostatic model with cumulus parameterization and of high enough resolution could be efficiently used to simulate the TC intensity response (and the associated structural changes) to future climate change.
Abstract:
Global flood hazard maps can be used in the assessment of flood risk in a number of different applications, including (re)insurance and large-scale flood preparedness. Such global hazard maps can be generated using large-scale physically based models of rainfall-runoff and river routing, when used in conjunction with a number of post-processing methods. In this study, the European Centre for Medium-Range Weather Forecasts (ECMWF) land surface model is coupled to ERA-Interim reanalysis meteorological forcing data, and the resultant runoff is passed to a river routing algorithm which simulates floodplains and flood flow across the global land area. The global hazard map is based on a 30 yr (1979–2010) simulation period. A Gumbel distribution is fitted to the annual maxima flows to derive a number of flood return periods. The return periods are calculated initially for a 25×25 km grid, which is then reprojected onto a 1×1 km grid to derive maps of higher resolution and to estimate the flooded fractional area of the individual 25×25 km cells. Several global and regional maps of flood return periods ranging from 2 to 500 yr are presented. The results compare reasonably well to a benchmark data set of global flood hazard. The developed methodology can be applied to other datasets on a global or regional scale.
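The Gumbel post-processing step described above can be sketched in Python (the language is our choice; the annual-maximum flows below are synthetic placeholders, not output of the ECMWF model chain):

```python
# Sketch: fit a Gumbel (extreme value type I) distribution to annual
# maximum flows and derive return-period flows for one grid cell.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical 30 years of annual maximum discharge (m^3/s)
annual_maxima = rng.gumbel(loc=1200.0, scale=300.0, size=30)

# Fit the Gumbel distribution to the annual maxima
loc, scale = stats.gumbel_r.fit(annual_maxima)

# A T-year return level is the flow exceeded with probability 1/T per year
for T in (2, 10, 50, 100, 500):
    q_T = stats.gumbel_r.ppf(1.0 - 1.0 / T, loc=loc, scale=scale)
    print(f"{T:>3}-yr return flow: {q_T:8.1f} m^3/s")
```

The same fit would be repeated independently for each 25×25 km cell before reprojection to the 1×1 km grid.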
Abstract:
Recent research indicates gender differences in the impact of stress on decision behavior, but little is known about the brain mechanisms involved in these gender-specific stress effects. The current study used functional magnetic resonance imaging (fMRI) to determine whether induced stress resulted in gender-specific patterns of brain activation during a decision task involving monetary reward. Specifically, we manipulated physiological stress levels using a cold pressor task, prior to a risky decision making task. Healthy men (n = 24, 12 stressed) and women (n = 23, 11 stressed) completed the decision task after either cold pressor stress or a control task during the period of cortisol response to the cold pressor. Gender differences in behavior were present in stressed participants but not controls, such that stress led to greater reward collection and faster decision speed in males but less reward collection and slower decision speed in females. A gender-by-stress interaction was observed for the dorsal striatum and anterior insula. With cold stress, activation in these regions was increased in males but decreased in females. The findings of this study indicate that the impact of stress on reward-related decision processing differs depending on gender.
Abstract:
Background: The interpretation of ambiguous subject pronouns in a null subject language, like Greek, requires that one possesses grammatical knowledge of the two subject pronominal forms, i.e., null and overt, and that discourse constraints regulating the distribution of the two pronouns in context are respected. Aims: We investigated whether the topic-shift feature encoded in overt subject pronouns would exert similar interpretive effects in a group of seven participants with Broca’s aphasia and a group of language-unimpaired adults during online processing of null and overt subject pronouns in referentially ambiguous contexts. Method & Procedures: An offline picture–sentence matching task was initially administered to investigate whether the participants with Broca’s aphasia had access to the gender and number features of clitic pronouns. An online self-paced listening picture-verification task was subsequently administered to examine how the aphasic individuals resolve pronoun ambiguities in contexts with either null or overt subject pronouns and how their performance compares to that of language-unimpaired adults. Outcomes & Results: Results demonstrate that the Broca group, along with controls, had intact access to the morphosyntactic features of clitic pronouns. However, the aphasic individuals showed decreased preference for non-salient antecedents in object position during the online resolution of ambiguous overt subject pronouns and preferred to pick the subject antecedent instead. Conclusions: Broca’s aphasic participants’ parsing decisions in the online task reflect their difficulty with establishing topic-shifted interpretations of the ambiguous overt subject pronouns. The presence of a local topic-shift effect in the immediate temporal vicinity of the overt pronoun suggests that sensitivity to the marked informational status of overt pronouns is preserved in the aphasic individuals, yet it is blocked under conditions of global sentential processing.
Abstract:
The Water and Global Change (WATCH) project evaluation of the terrestrial water cycle involves using land surface models and general hydrological models to assess hydrologically important variables including evaporation, soil moisture, and runoff. Such models require meteorological forcing data, and this paper describes the creation of the WATCH Forcing Data for 1958–2001 based on the 40-yr ECMWF Re-Analysis (ERA-40) and for 1901–57 based on reordered reanalysis data. It also discusses and analyses model-independent estimates of reference crop evaporation. Global average annual cumulative reference crop evaporation was selected as a widely adopted measure of potential evapotranspiration. It exhibits no significant trend from 1979 to 2001 although there are significant long-term increases in global average vapor pressure deficit and concurrent significant decreases in global average net radiation and wind speed. The near-constant global average of annual reference crop evaporation in the late twentieth century masks significant decreases in some regions (e.g., the Murray–Darling basin) with significant increases in others.
Abstract:
High-resolution surface wind fields covering the global ocean, estimated from remotely sensed wind data and ECMWF wind analyses, have been available since 2005 with a spatial resolution of 0.25 degrees in longitude and latitude, and a temporal resolution of 6 h. Their quality is investigated through various comparisons with surface wind vectors from 190 buoys moored in various oceanic basins, from research vessels, and from QuikSCAT scatterometer data taken during 2005–2006. The NCEP/NCAR and NCDC blended wind products are also considered. The comparisons performed during January–December 2005 show that speeds and directions compare well to in-situ observations, including those from moored buoys and ships, as well as to the remotely sensed data. The root-mean-squared differences of the wind speed and direction for the new blended wind data are lower than 2 m/s and 30 degrees, respectively. These values are similar to those estimated in the comparisons of hourly buoy measurements and QuikSCAT near-real-time retrievals. At the global scale, it is found that the new products compare well with the wind speed and wind vector components observed by QuikSCAT. No significant dependencies on the QuikSCAT wind speed or on the oceanic region considered are evident.
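The root-mean-squared differences quoted above can be computed as sketched below; note that directional differences must be wrapped into [-180, 180) degrees so that, e.g., 350° vs 10° counts as 20°, not 340°. The wind values are hypothetical, not taken from the blended product or the buoy network.

```python
# RMS difference for wind speed and for wind direction (with angular wrap).
import numpy as np

def rmsd(a, b):
    """Root-mean-squared difference of two speed series."""
    return float(np.sqrt(np.mean((np.asarray(a) - np.asarray(b)) ** 2)))

def rmsd_direction(deg_a, deg_b):
    """RMS directional difference with wrapping into [-180, 180) degrees."""
    d = (np.asarray(deg_a) - np.asarray(deg_b) + 180.0) % 360.0 - 180.0
    return float(np.sqrt(np.mean(d ** 2)))

blended = [5.2, 7.8, 10.1]   # hypothetical blended-product wind speeds (m/s)
buoy = [4.9, 8.4, 9.5]       # hypothetical collocated buoy speeds (m/s)
print(rmsd(blended, buoy))   # well below the reported 2 m/s threshold

print(rmsd_direction([350.0, 10.0], [10.0, 350.0]))  # 20.0
```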
Abstract:
The UPSCALE (UK on PRACE: weather-resolving Simulations of Climate for globAL Environmental risk) project, using PRACE (Partnership for Advanced Computing in Europe) resources, constructed and ran an ensemble of atmosphere-only global climate model simulations, using the Met Office Unified Model GA3 configuration. Each simulation is 27 years in length for both the present climate and an end-of-century future climate, at resolutions of N96 (130 km), N216 (60 km) and N512 (25 km), in order to study the impact of model resolution on high-impact climate features such as tropical cyclones. Increased model resolution is found to improve the simulated frequency of explicitly tracked tropical cyclones, and correlations of interannual variability in the North Atlantic and North West Pacific lie between 0.6 and 0.75. Improvements in the deficit of genesis in the eastern North Atlantic as resolution increases appear to be related to the representation of African Easterly Waves and the African Easterly Jet. However, the intensity of the modelled tropical cyclones as measured by 10 m wind speed remains weak, and there is no indication of convergence over this range of resolutions. In the future climate ensemble, there is a reduction of 50% in the frequency of Southern Hemisphere tropical cyclones, while in the Northern Hemisphere there is a reduction in the North Atlantic, and a shift in the Pacific with peak intensities becoming more common in the Central Pacific. There is also a change in tropical cyclone intensities, with the future climate having fewer weak storms and proportionally more strong storms.
Abstract:
This paper evaluates the current status of global modeling of the organic aerosol (OA) in the troposphere and analyzes the differences between models as well as between models and observations. Thirty-one global chemistry transport models (CTMs) and general circulation models (GCMs) have participated in this intercomparison, in the framework of AeroCom phase II. The simulation of OA varies greatly between models in terms of the magnitude of primary emissions, secondary OA (SOA) formation, the number of OA species used (2 to 62), the complexity of OA parameterizations (gas-particle partitioning, chemical aging, multiphase chemistry, aerosol microphysics), and the OA physical, chemical and optical properties. The diversity of the global OA simulation results has increased since earlier AeroCom experiments, mainly due to the increasing complexity of the SOA parameterization in models, and the implementation of new, highly uncertain, OA sources. Diversity of over one order of magnitude exists in the modeled vertical distribution of OA concentrations that deserves a dedicated future study. Furthermore, although the OA / OC ratio depends on OA sources and atmospheric processing, and is important for model evaluation against OA and OC observations, it is resolved only by a few global models. The median global primary OA (POA) source strength is 56 Tg a−1 (range 34–144 Tg a−1) and the median SOA source strength (natural and anthropogenic) is 19 Tg a−1 (range 13–121 Tg a−1). Among the models that take into account the semi-volatile SOA nature, the median source is calculated to be 51 Tg a−1 (range 16–121 Tg a−1), much larger than the median value of the models that calculate SOA in a more simplistic way (19 Tg a−1; range 13–20 Tg a−1, with one model at 37 Tg a−1). The median atmospheric burden of OA is 1.4 Tg (24 models in the range of 0.6–2.0 Tg and 4 between 2.0 and 3.8 Tg), with a median OA lifetime of 5.4 days (range 3.8–9.6 days). 
In models that reported both OA and sulfate burdens, the median value of the OA/sulfate burden ratio is calculated to be 0.77; 13 models calculate a ratio lower than 1, and 9 models higher than 1. For 26 models that reported OA deposition fluxes, the median wet removal is 70 Tg a−1 (range 28–209 Tg a−1), which is on average 85% of the total OA deposition. Fine aerosol organic carbon (OC) and OA observations from continuous monitoring networks and individual field campaigns have been used for model evaluation. At urban locations, the model–observation comparison indicates missing knowledge on anthropogenic OA sources, both strength and seasonality. The combined model–measurements analysis suggests the existence of increased OA levels during summer due to biogenic SOA formation over large areas of the USA that can be of the same order of magnitude as the POA, even at urban locations, and contribute to the measured urban seasonal pattern. Global models are able to simulate the high secondary character of OA observed in the atmosphere as a result of SOA formation and POA aging, although the amount of OA present in the atmosphere remains largely underestimated, with a mean normalized bias (MNB) equal to −0.62 (−0.51) based on the comparison against OC (OA) urban data of all models at the surface, −0.15 (+0.51) when compared with remote measurements, and −0.30 for marine locations with OC data. The mean temporal correlations across all stations are low when compared with OC (OA) measurements: 0.47 (0.52) for urban stations, 0.39 (0.37) for remote stations, and 0.25 for marine stations with OC data. The combination of high (negative) MNB and higher correlation at urban stations when compared with the low MNB and lower correlation at remote sites suggests that knowledge about the processes that govern aerosol processing, transport and removal, on top of their sources, is important at the remote stations. 
There is no clear change in model skill with increasing model complexity with regard to OC or OA mass concentration. However, the complexity is needed in models in order to distinguish between anthropogenic and natural OA as needed for climate mitigation, and to calculate the impact of OA on climate accurately.
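The mean normalized bias (MNB) used in the model evaluation above can be illustrated in Python. The definition below, MNB = (1/N) Σ (M_i − O_i)/O_i, is the common convention and is an assumption on our part; under it, an MNB of −0.62 means the models underestimate urban OC by roughly 62% on average relative to each observation.

```python
# Mean normalized bias of modelled vs. observed concentrations
# (assumed definition: average of per-point normalized errors).
import numpy as np

def mean_normalized_bias(model: np.ndarray, obs: np.ndarray) -> float:
    """MNB of model values against strictly positive observations."""
    return float(np.mean((model - obs) / obs))

# Toy example: a model at half the observed level gives MNB = -0.5
obs = np.array([2.0, 4.0, 8.0])
model = 0.5 * obs
print(mean_normalized_bias(model, obs))  # -0.5
```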
Abstract:
Decadal predictions on timescales from one year to one decade are gaining importance since this time frame falls within the planning horizon of politics, economy and society. The present study examines the decadal predictability of regional wind speed and wind energy potentials in three generations of the MiKlip (‘Mittelfristige Klimaprognosen’) decadal prediction system. The system is based on the global Max-Planck-Institute Earth System Model (MPI-ESM), and the three generations differ primarily in the ocean initialisation. Ensembles of uninitialised historical and yearly initialised hindcast experiments are used to assess the forecast skill for 10 m wind speeds and wind energy output (Eout) over Central Europe with lead times from one year to one decade. With this aim, a statistical-dynamical downscaling (SDD) approach is used for the regionalisation. Its added value is evaluated by comparison of skill scores for MPI-ESM large-scale wind speeds and SDD-simulated regional wind speeds. All three MPI-ESM ensemble generations show some forecast skill for annual mean wind speed and Eout over Central Europe on yearly and multi-yearly time scales. This forecast skill is mostly limited to the first years after initialisation. Differences between the three ensemble generations are generally small. The regionalisation preserves and sometimes increases the forecast skills of the global runs but results depend on lead time and ensemble generation. Moreover, regionalisation often improves the ensemble spread. Seasonal Eout skills are generally lower than for annual means. Skill scores are lowest during summer and persist longest in autumn. A large-scale westerly weather type with strong pressure gradients over Central Europe is identified as potential source of the skill for wind energy potentials, showing a similar forecast skill and a high correlation with Eout anomalies. 
These results are promising for the establishment of a decadal prediction system for wind energy applications over Central Europe.
Abstract:
We present a new technique for obtaining model fittings to very long baseline interferometric images of astrophysical jets. The method minimizes a performance function proportional to the sum of the squared difference between the model and observed images. The model image is constructed by summing N_s elliptical Gaussian sources characterized by six parameters: two-dimensional peak position, peak intensity, eccentricity, amplitude, and orientation angle of the major axis. We present results for the fitting of two main benchmark jets: the first constructed from three individual Gaussian sources, the second formed by five Gaussian sources. Both jets were analyzed by our cross-entropy technique in finite and infinite signal-to-noise regimes, the background noise chosen to mimic that found in interferometric radio maps. Those images were constructed to simulate most of the conditions encountered in interferometric images of active galactic nuclei. We show that the cross-entropy technique is capable of recovering the parameters of the sources with a similar accuracy to that obtained from the very traditional Astronomical Image Processing System Package task IMFIT when the image is relatively simple (e.g., few components). For more complex interferometric maps, our method displays superior performance in recovering the parameters of the jet components. Our methodology is also able to show quantitatively the number of individual components present in an image. An additional application of the cross-entropy technique to a real image of a BL Lac object is shown and discussed. Our results indicate that our cross-entropy model-fitting technique must be used in situations involving the analysis of complex emission regions having more than three sources, even though it is substantially slower than current model-fitting tasks (at least 10,000 times slower for a single processor, depending on the number of sources to be optimized).
As in the case of any model fitting performed in the image plane, caution is required in analyzing images constructed from a poorly sampled (u, v) plane.
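The objective the cross-entropy search minimizes can be sketched as follows: a model image is built by summing elliptical Gaussian sources, and the performance function is the sum of squared pixel differences against the observed image. The six-parameter tuple used here (x0, y0, peak, major-axis width, eccentricity, position angle) is our reading of the parameterization, not the authors' exact code.

```python
# Build a model image from elliptical Gaussians and evaluate the
# squared-difference performance function (the quantity minimized).
import numpy as np

def elliptical_gaussian(shape, x0, y0, peak, a, ecc, theta):
    """One elliptical Gaussian source on a pixel grid."""
    y, x = np.mgrid[0:shape[0], 0:shape[1]]
    b = a * np.sqrt(1.0 - ecc**2)   # minor-axis width from eccentricity
    xr = (x - x0) * np.cos(theta) + (y - y0) * np.sin(theta)
    yr = -(x - x0) * np.sin(theta) + (y - y0) * np.cos(theta)
    return peak * np.exp(-0.5 * ((xr / a) ** 2 + (yr / b) ** 2))

def model_image(shape, sources):
    """Sum of N_s Gaussian sources, each a 6-parameter tuple."""
    return sum(elliptical_gaussian(shape, *s) for s in sources)

def performance(observed, sources):
    """Objective minimized by the cross-entropy search."""
    return float(np.sum((model_image(observed.shape, sources) - observed) ** 2))

# Toy check: the true parameters give zero residual
truth = [(16.0, 12.0, 1.0, 4.0, 0.6, 0.3), (40.0, 30.0, 0.5, 6.0, 0.8, 1.1)]
obs = model_image((48, 64), truth)
print(performance(obs, truth))  # 0.0
```

A cross-entropy optimizer would repeatedly sample parameter sets, score them with `performance`, and refit its sampling distribution to the best scorers.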
Abstract:
Applying optimization algorithms to PDE-based models of groundwater remediation can greatly reduce remediation cost. However, groundwater remediation analysis requires computationally expensive simulations, so effective parallel optimization could greatly reduce the computational expense. The optimization algorithm used in this research is the Parallel Stochastic Radial Basis Function (RBF) method, which is designed for global optimization of computationally expensive functions with multiple local optima and does not require derivatives. In each iteration of the algorithm, an RBF surrogate is updated using all evaluated points in order to approximate the expensive function. The new RBF surface is then used to generate the next set of points, which are distributed to multiple processors for evaluation. The next evaluation points are selected according to two criteria: the estimated function value and the distance from all previously evaluated points. Algorithms created for serial computing are not necessarily efficient in parallel, so Parallel Stochastic RBF is a different algorithm from its serial ancestor. The method is applied to two groundwater Superfund remediation sites: the Umatilla Chemical Depot and the former Blaine Naval Ammunition Depot. The formulation adopted treats pumping rates as decision variables in order to remove the plume of contaminated groundwater. Groundwater flow and contaminant transport are simulated with MODFLOW-MT3DMS. For both problems, each simulation takes a large amount of CPU time, especially for the Blaine problem, which requires nearly fifty minutes for a single set of decision variables. Thus, an efficient algorithm and powerful computing resources are essential in both cases. The results are discussed in terms of parallel computing metrics, i.e., speedup and efficiency. We find that, with up to 24 parallel processors, the parallel Stochastic RBF algorithm performs excellently, with speedup efficiencies close to or exceeding 100%.
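The candidate-selection step described above, which balances the estimated function value against the distance from all known points, can be sketched as below. This is a serial toy using SciPy's `RBFInterpolator`, not the authors' implementation; the weighting, candidate count, and bounds are illustrative assumptions, and in the actual method the selected batch would be dispatched to parallel processors.

```python
# Surrogate-assisted batch selection: fit an RBF to evaluated points,
# then score random candidates by predicted value and novelty.
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.spatial.distance import cdist

def select_next_points(X, f, n_new, bounds, weight=0.5, n_cand=500, seed=0):
    """Pick n_new points balancing surrogate value against distance."""
    rng = np.random.default_rng(seed)
    surrogate = RBFInterpolator(X, f)       # RBF fit to all evaluated points
    lo, hi = bounds
    cand = rng.uniform(lo, hi, size=(n_cand, X.shape[1]))
    pred = surrogate(cand)                  # estimated function values
    dist = cdist(cand, X).min(axis=1)       # distance to the evaluated set
    # Normalize both criteria to [0, 1]; for minimization, a low predicted
    # value and a large distance from known points are both desirable.
    v = (pred - pred.min()) / (np.ptp(pred) or 1.0)
    d = 1.0 - (dist - dist.min()) / (np.ptp(dist) or 1.0)
    score = weight * v + (1.0 - weight) * d
    return cand[np.argsort(score)[:n_new]]  # batch for the parallel workers

# Toy use on a 2-D quadratic with its minimum at the origin
X = np.array([[1.0, 1.0], [-1.0, 0.5], [0.5, -1.5], [2.0, -2.0]])
f = (X ** 2).sum(axis=1)
batch = select_next_points(X, f, n_new=4, bounds=(-3.0, 3.0))
print(batch.shape)  # (4, 2)
```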
Abstract:
The first chapter presents the research problem, embodied in the question of how the 2008 global financial crisis affected the global balance of power and what its consequences were for the international system, together with the objectives, scope, and relevance of the study. Chapter two addresses methodology, covering the methods of approach and the collection and treatment of information. The third chapter presents the complexity of the interaction between international order and global governance, terms that are difficult to define yet constant on the agendas of diplomacy and international politics. The fourth chapter introduces the concept of multilevel governance to express the interaction of multiple actors in layers below and above the state. The fifth chapter deals with the pre-2008 crises, seeking to identify possible common characteristics among them. The sixth chapter deals with the 2008 crisis and its widening and prolongation into Europe, articulating global economic and financial variables. Chapter seven seeks to relate Brazilian foreign policy to the architecture of global governance, aspiring to more active participation in international forums. The ninth chapter presents the study's conclusions in terms of behavioral and/or structural dilemmas and obstacles and the areas that should be investigated further and in greater depth.
Abstract:
This study investigates the existence of financial contagion among the G20 countries, based on an analysis of the returns of their main stock indices over the period 2000 to 2012. The approach uses multivariate volatility models of the DCC-GARCH family, in the version proposed by Engle and Sheppard (2001). The tests performed indicate structural changes in the analyzed series for practically all of the 14 countries examined, and the results provide evidence in favor of the hypothesis of financial contagion among G20 countries. It was also found that, among the various financial crises that occurred during the period analyzed, the subprime crisis stands out from the others because of its magnitude and the speed with which it spread, affecting developed and emerging countries alike.
Abstract:
Metaheuristic techniques are known for solving optimization problems classified as NP-complete and are successful in obtaining good-quality solutions. They use non-deterministic approaches to generate solutions close to the optimum, without any guarantee of finding the global optimum. Motivated by the difficulty of these problems, this work proposes parallel hybrid methods combining reinforcement learning with the GRASP metaheuristic and Genetic Algorithms. With these techniques, we aim to improve the efficiency of obtaining good solutions. Instead of using the Q-learning reinforcement learning algorithm merely to generate the initial solutions of the metaheuristics, we use it in a cooperative and competitive arrangement with the Genetic Algorithm and GRASP in a parallel implementation. The implementations in this study produced satisfactory results under both strategies, that is, cooperation and competition between the individual methods and cooperation and competition between groups. In some instances the global optimum was found; in others, the implementations came close to it. A performance analysis of the proposed approach was carried out, confirming the efficiency and speedup (the gain in speed from parallel processing) of the implementations.
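One part of the hybrid idea above, using Q-learning to guide GRASP rather than only to seed it, can be sketched on a toy problem. The sketch below is serial and uses a 4-city TSP with an illustrative distance matrix, a swap-based local search, and arbitrary learning parameters; it is not the thesis's parallel cooperative/competitive architecture.

```python
# Q-guided GRASP on a toy TSP: the Q-table replaces the usual greedy
# randomized construction, and each constructed tour is locally improved.
import itertools
import random

DIST = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 8],
        [10, 4, 8, 0]]
N = len(DIST)

def tour_length(tour):
    return sum(DIST[tour[i]][tour[(i + 1) % N]] for i in range(N))

def construct(q, eps, rng):
    """Follow the highest-valued arc with probability 1 - eps,
    otherwise explore a random unvisited city."""
    tour = [0]
    while len(tour) < N:
        choices = [c for c in range(N) if c not in tour]
        if rng.random() < eps:
            nxt = rng.choice(choices)
        else:
            nxt = max(choices, key=lambda c: q[tour[-1]][c])
        tour.append(nxt)
    return tour

def local_search(tour):
    """Simple swap-based improvement phase."""
    best = tour[:]
    improved = True
    while improved:
        improved = False
        for i, j in itertools.combinations(range(1, N), 2):
            cand = best[:]
            cand[i], cand[j] = cand[j], cand[i]
            if tour_length(cand) < tour_length(best):
                best, improved = cand, True
    return best

def grasp_q(iters=50, alpha=0.3, eps=0.2, seed=1):
    rng = random.Random(seed)
    q = [[0.0] * N for _ in range(N)]
    best = list(range(N))
    for _ in range(iters):
        tour = local_search(construct(q, eps, rng))
        reward = 1.0 / tour_length(tour)    # shorter tour -> larger reward
        for i in range(N):                  # reinforce the arcs just used
            a, b = tour[i], tour[(i + 1) % N]
            q[a][b] += alpha * (reward - q[a][b])
        if tour_length(tour) < tour_length(best):
            best = tour
    return best, tour_length(best)

print(grasp_q())  # best tour found and its length (23 on this instance)
```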