864 results for Representation of polynomials
Abstract:
The Hadley Centre Global Environmental Model (HadGEM) includes two aerosol schemes: the Coupled Large-scale Aerosol Simulator for Studies in Climate (CLASSIC), and the new Global Model of Aerosol Processes (GLOMAP-mode). GLOMAP-mode is a modal aerosol microphysics scheme that simulates not only aerosol mass but also aerosol number, represents internally-mixed particles, and includes aerosol microphysical processes such as nucleation. In this study, both schemes provide hindcast simulations of natural and anthropogenic aerosol species for the period 2000–2006. HadGEM simulations of the aerosol optical depth using GLOMAP-mode compare better than CLASSIC against a data-assimilated aerosol re-analysis and aerosol ground-based observations. Because of differences in wet deposition rates, GLOMAP-mode sulphate aerosol residence time is two days longer than that of CLASSIC sulphate aerosols, whereas black carbon residence time is much shorter. As a result, CLASSIC underestimates aerosol optical depths in continental regions of the Northern Hemisphere and likely overestimates absorption in remote regions. Aerosol direct and first indirect radiative forcings are computed from simulations of aerosols with emissions for the years 1850 and 2000. In 1850, GLOMAP-mode predicts lower aerosol optical depths and higher cloud droplet number concentrations than CLASSIC. Consequently, simulated clouds are much less susceptible to natural and anthropogenic aerosol changes when the microphysical scheme is used. In particular, the response of cloud condensation nuclei to an increase in dimethyl sulphide emissions becomes a factor of four smaller. The combined effect of different 1850 baselines, residence times, and abilities to affect cloud droplet number leads to substantial differences in the aerosol forcings simulated by the two schemes. GLOMAP-mode finds a present-day direct aerosol forcing of −0.49 W m−2 on a global average, 72% stronger than the corresponding forcing from CLASSIC.
This difference is compensated by changes in the first indirect aerosol forcing: the forcing of −1.17 W m−2 obtained with GLOMAP-mode is 20% weaker than with CLASSIC. Results suggest that mass-based schemes such as CLASSIC lack the necessary sophistication to provide realistic input to aerosol-cloud interaction schemes. Furthermore, the importance of the 1850 baseline highlights how model skill in predicting present-day aerosol does not guarantee reliable forcing estimates. These findings suggest that the more complex representation of aerosol processes in microphysical schemes improves the fidelity of simulated aerosol forcings.
Abstract:
The CWRF is developed as a climate extension of the Weather Research and Forecasting model (WRF) by incorporating numerous improvements in the representation of physical processes and integration of external (top, surface, lateral) forcings that are crucial to climate scales, including interactions between land, atmosphere, and ocean; convection and microphysics; cloud, aerosol, and radiation; and system consistency throughout all process modules. This extension inherits all WRF functionalities for numerical weather prediction while enhancing the capability for climate modeling. As such, CWRF can be applied seamlessly to weather forecasting and climate prediction. The CWRF is built with a comprehensive ensemble of alternative parameterization schemes for each of the key physical processes, including surface (land, ocean), planetary boundary layer, cumulus (deep, shallow), microphysics, cloud, aerosol, and radiation, and their interactions. This facilitates the use of an optimized physics ensemble approach to improve weather or climate prediction along with a reliable uncertainty estimate. The CWRF also emphasizes the societal service capability to provide impact-relevant information by coupling with detailed models of terrestrial hydrology, coastal ocean, crop growth, air quality, and a recently expanded interactive water quality and ecosystem model. This study provides a general CWRF description and basic skill evaluation based on a continuous integration for the period 1979–2009 as compared with that of WRF, using a 30-km grid spacing over a domain that includes the contiguous United States plus southern Canada and northern Mexico. In addition to advantages of greater application capability, CWRF improves performance in radiation and terrestrial hydrology over WRF and other regional models. Precipitation simulation, however, remains a challenge for all of the tested models.
Abstract:
The dependence of the annual mean tropical precipitation on horizontal resolution is investigated in the atmospheric version of the Hadley Centre General Environment Model (HadGEM1). Reducing the grid spacing from about 350 km to 110 km improves the precipitation distribution in most of the tropics. In particular, characteristic dry biases over South and Southeast Asia including the Maritime Continent as well as wet biases over the western tropical oceans are reduced. The annual-mean precipitation bias is reduced by about one third over the Maritime Continent and the neighbouring ocean basins associated with it via the Walker circulation. Sensitivity experiments show that much of the improvement with resolution in the Maritime Continent region is due to the specification of better resolved surface boundary conditions (land fraction, soil and vegetation parameters) at the higher resolution. It is shown that in particular the formulation of the coastal tiling scheme may cause resolution sensitivity of the mean simulated climate. The improvement in the tropical mean precipitation in this region is not primarily associated with the better representation of orography at the higher resolution, nor with changes in the eddy transport of moisture. Sizeable sensitivity to changes in the surface fields may be one of the reasons for the large variation of the mean tropical precipitation distribution seen across climate models.
Abstract:
This article discusses the aesthetic and spatial representational strategies of the popular studio-based musical television drama serials Rock Follies and Rock Follies of ’77. It analyses how the texts’ themes relating to women and the entertainment industry are mediated through their postmodern ironic mode and representation of fantastic spaces. Rock Follies’ distinctive stylised aesthetic and mode of caricature are analysed with reference to the visual intentions and ‘voice’ of the writer, Howard Schuman. Through considering the programmes’ various spatial strategies, the article draws attention to the importance of visual and performance style in their postmodern discourse on culture, fantasy, gender and subjectivity. Analysis of the spaces of musical performance, characters’ domestic environments and simulated entertainment spaces reveals how a dialectic is established between the escapist imaginative pleasures of fantasy and the manipulative and exploitative practices of the culture industry. The shift from the optimism of the first series, when the LittleLadies first form, to the darker mood of the second series, in which they are increasingly divided by industry pressures, is traced through changes in the aesthetics of space and characterisation. As a space of artifice, performance and electronic visual manipulation that facilitates the texts’ reflexive representation of culture and feminised fantasy, the studio’s unique aesthetic strengths emerge through this case study.
Abstract:
The Tropical Rainfall Measuring Mission 3B42 precipitation estimates are widely used in tropical regions for hydrometeorological research. Recently, version 7 of the product was released. Major revisions to the algorithm involve the radar reflectivity–rainfall rate relationship, surface clutter detection over high terrain, a new reference database for the passive microwave algorithm, and a higher quality gauge analysis product for monthly bias correction. To assess the impacts of the improved algorithm, we compare the version 7 and the older version 6 products with data from 263 rain gauges in and around the northern Peruvian Andes. The region covers humid tropical rainforest, tropical mountains, and arid to humid coastal plains. We find that the version 7 product has a significantly lower bias and an improved representation of the rainfall distribution. We further evaluated the performance of the version 6 and 7 products as forcing data for hydrological modelling, by comparing the simulated and observed daily streamflow in 9 nested Amazon river basins. We find that the improvement in the precipitation estimation algorithm translates to an increase in the model Nash-Sutcliffe efficiency, and a reduction in the percent bias between the observed and simulated flows by 30 to 95%.
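The two skill scores used in this evaluation have standard definitions; as a hedged sketch (not the authors' code, and note that the sign convention for percent bias differs between sources), they can be computed as:

```python
def nash_sutcliffe(obs, sim):
    """NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2).
    1.0 is a perfect fit; 0.0 means no better than the observed mean."""
    mean_obs = sum(obs) / len(obs)
    num = sum((o - s) ** 2 for o, s in zip(obs, sim))
    den = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - num / den

def percent_bias(obs, sim):
    """PBIAS = 100 * sum(sim - obs) / sum(obs); 0 means no volume bias."""
    return 100.0 * sum(s - o for o, s in zip(obs, sim)) / sum(obs)

print(nash_sutcliffe([1.0, 2.0, 3.0, 4.0], [1.0, 2.0, 3.0, 4.0]))  # 1.0
```

A simulation equal to the observed mean everywhere scores NSE = 0, the usual benchmark separating informative from uninformative models.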
Abstract:
14C-dated pollen and lake-level data from Europe are used to assess the spatial patterns of climate change between 6000 yr BP and present, as simulated by the NCAR CCM1 (National Center for Atmospheric Research, Community Climate Model, version 1) in response to the change in the Earth’s orbital parameters during this period. First, reconstructed 6000 yr BP values of bioclimate variables obtained from pollen and lake-level data with the constrained-analogue technique are compared with simulated values. Then a 6000 yr BP biome map obtained from pollen data with an objective biome reconstruction (biomization) technique is compared with BIOME model results derived from the same simulation. Data and simulations agree in some features: warmer-than-present growing seasons in N and C Europe allowed forests to extend further north and to higher elevations than today, and warmer winters in C and E Europe prevented boreal conifers from spreading west. More generally, however, the agreement is poor. Predominantly deciduous forest types in Fennoscandia imply warmer winters than the model allows. The model fails to simulate winters cold enough, or summers wet enough, to allow temperate deciduous forests their former extended distribution in S Europe, and it incorrectly simulates a much expanded area of steppe vegetation in SE Europe. Similar errors have also been noted in numerous 6000 yr BP simulations with prescribed modern sea surface temperatures. These errors are evidently not resolved by the inclusion of interactive sea-surface conditions in the CCM1. Accurate representation of mid-Holocene climates in Europe may require the inclusion of dynamical ocean–atmosphere and/or vegetation–atmosphere interactions that most palaeoclimate model simulations have so far disregarded.
Abstract:
This paper presents a software-based study of a hardware-based non-sorting median calculation method on a set of integer numbers. The method divides the binary representation of each integer element in the set into bit slices in order to find the element located in the middle position. The method exhibits a linear complexity order, and our analysis shows that the best performance in execution time is obtained when 4-bit slices are used for 8-bit and 16-bit integers, for almost any data set size. Results suggest that a software implementation of the bit-slice method for median calculation outperforms sorting-based methods, with the improvement increasing for larger data set sizes. For data set sizes of N > 5, our simulations show an improvement of at least 40%.
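The slicing procedure described above can be sketched as a radix-select: histogram the most significant slice, locate the bucket containing the median rank, then recurse on the next slice. This is an illustrative reconstruction under our own naming, not the paper's hardware design:

```python
def bitslice_median(data, bits=8, slice_width=4):
    """Lower median of unsigned integers via MSB-first bit-slice
    counting (radix-select): no sorting, one linear pass per slice.
    Assumes bits is a multiple of slice_width."""
    k = (len(data) - 1) // 2          # rank of the lower median
    mask = (1 << slice_width) - 1
    prefix, shift, candidates = 0, bits, list(data)
    while shift > 0:
        shift -= slice_width
        counts = [0] * (1 << slice_width)
        for x in candidates:          # histogram the current bit slice
            counts[(x >> shift) & mask] += 1
        for b in range(1 << slice_width):  # find bucket holding rank k
            if k < counts[b]:
                prefix = (prefix << slice_width) | b
                candidates = [x for x in candidates
                              if (x >> shift) & mask == b]
                break
            k -= counts[b]
    return prefix

print(bitslice_median([10, 200, 30, 40, 50]))  # 40
```

Each pass narrows the candidate set to one bucket, so the median's bits are fixed slice by slice without ever ordering the data.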
Abstract:
A global climatology (1979–2012) from the Modern-Era Retrospective Analysis for Research and Applications (MERRA) shows distributions and seasonal evolution of upper tropospheric jets and their relationships to the stratospheric subvortex and multiple tropopauses. The overall climatological patterns of upper tropospheric jets confirm those seen in previous studies, indicating accurate representation of jet stream dynamics in MERRA. The analysis shows a Northern Hemisphere (NH) upper tropospheric jet stretching nearly zonally from the mid-Atlantic across Africa and Asia. In winter–spring, this jet splits over the eastern Pacific, merges again over eastern North America, and then shifts poleward over the North Atlantic. The jets associated with tropical circulations are also captured, with upper tropospheric westerlies demarking cyclonic flow downstream from the Australian and Asian monsoon anticyclones and associated easterly jets. Multiple tropopauses associated with the thermal tropopause “break” commonly extend poleward from the subtropical upper tropospheric jet. In Southern Hemisphere (SH) summer, the tropopause break, along with a poleward-stretching secondary tropopause, often occurs across the tropical westerly jet downstream of the Australian monsoon region. SH high-latitude multiple tropopauses, nearly ubiquitous in June–July, are associated with the unique polar winter thermal structure. High-latitude multiple tropopauses in NH fall–winter are, however, sometimes associated with poleward-shifted upper tropospheric jets. The SH subvortex jet extends down near the level of the subtropical jet core in winter and spring. Most SH subvortex jets merge with an upper tropospheric jet between May and December; although much less persistent than in the SH, merged NH subvortex jets are common between November and April.
Abstract:
With the prospect of exascale computing, computational methods requiring only local data become especially attractive. Consequently, the typical domain decomposition of atmospheric models means horizontally-explicit vertically-implicit (HEVI) time-stepping schemes warrant further attention. In this analysis, Runge-Kutta implicit-explicit schemes from the literature are analysed for their stability and accuracy using a von Neumann stability analysis of two linear systems. Attention is paid to the numerical phase to indicate the behaviour of phase and group velocities. Where the analysis is tractable, analytically derived expressions are considered. For more complicated cases, amplification factors have been numerically generated and the associated amplitudes and phase diagnosed. Analysis of a system describing acoustic waves has necessitated attributing the three resultant eigenvalues to the three physical modes of the system. To do so, a series of algorithms has been devised to track the eigenvalues across the frequency space. The result enables analysis of whether the schemes exactly preserve the non-divergent mode; and whether there is evidence of spurious reversal in the direction of group velocities or asymmetry in the damping for the pair of acoustic modes. Frequency ranges that span next-generation high-resolution weather models to coarse-resolution climate models are considered; and a comparison is made of errors accumulated from multiple stability-constrained shorter time-steps from the HEVI scheme with a single integration from a fully implicit scheme over the same time interval. Two schemes, “Trap2(2,3,2)” and “UJ3(1,3,2)”, both already used in atmospheric models, are identified as offering consistently good stability and representation of phase across all the analyses. Furthermore, according to a simple measure of computational cost, “Trap2(2,3,2)” is the least expensive.
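As a toy illustration of the von Neumann approach (a single scalar oscillation equation dy/dt = i*omega*y, far simpler than the paper's two linear systems and its named IMEX schemes), the amplification factor of a trapezoidal implicit step can be compared with forward Euler:

```python
def amp_trapezoidal(omega_dt):
    """Amplification factor of the trapezoidal rule on dy/dt = i*w*y:
    A = (1 + z/2) / (1 - z/2) with z = i*w*dt; |A| = 1 (neutral)."""
    z = 1j * omega_dt
    return (1 + z / 2) / (1 - z / 2)

def amp_forward_euler(omega_dt):
    """Forward Euler on the same oscillator: A = 1 + z; |A| > 1 (unstable)."""
    return 1 + 1j * omega_dt

# The trapezoidal scheme preserves wave amplitude for any step size,
# while forward Euler amplifies the wave every step.
print(abs(amp_trapezoidal(0.5)), abs(amp_forward_euler(0.5)))
```

The argument of A divided by omega*dt gives the relative phase speed, the kind of phase diagnostic the analysis above applies to the full acoustic and gravity-wave systems.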
Abstract:
This study investigates the potential contribution of observed changes in lower stratospheric water vapour to stratospheric temperature variations over the past three decades using a comprehensive global climate model (GCM). Three case studies are considered. In the first, the net increase in stratospheric water vapour (SWV) from 1980–2010 (derived from the Boulder frost-point hygrometer record using the gross assumption that this is globally representative) is estimated to have cooled the lower stratosphere by up to ∼0.2 K decade−1 in the global and annual mean; this is ∼40% of the observed cooling trend over this period. In the Arctic winter stratosphere there is a dynamical response to the increase in SWV, with enhanced polar cooling of 0.6 K decade−1 at 50 hPa and warming of 0.5 K decade−1 at 1 hPa. In the second case study, the observed decrease in tropical lower stratospheric water vapour after the year 2000 (imposed in the GCM as a simplified representation of the observed changes derived from satellite data) is estimated to have caused a relative increase in tropical lower stratospheric temperatures by ∼0.3 K at 50 hPa. In the third case study, the wintertime dehydration in the Antarctic stratospheric polar vortex (again using a simplified representation of the changes seen in a satellite dataset) is estimated to cause a relative warming of the Southern Hemisphere polar stratosphere by up to 1 K at 100 hPa from July–October. This is accompanied by a weakening of the westerly winds on the poleward flank of the stratospheric jet by up to 1.5 m s−1 in the GCM. The results show that, if the measurements are representative of global variations, SWV should be considered as important a driver of transient and long-term variations in lower stratospheric temperature over the past 30 years as increases in long-lived greenhouse gases and stratospheric ozone depletion.
Abstract:
The first international urban land surface model comparison was designed to identify three aspects of the urban surface-atmosphere interactions: (1) the dominant physical processes, (2) the level of complexity required to model these, and (3) the parameter requirements for such a model. Offline simulations from 32 land surface schemes, with varying complexity, contributed to the comparison. Model results were analysed within a framework of physical classifications and over four stages. The results show that the following are important urban processes: (i) multiple reflections of shortwave radiation within street canyons, (ii) reduction in the amount of visible sky from within the canyon, which impacts on the net long-wave radiation, (iii) the contrast in surface temperatures between building roofs and street canyons, and (iv) evaporation from vegetation. Models that use an appropriate bulk albedo based on multiple solar reflections, represent building roof surfaces separately from street canyons, and include a representation of vegetation demonstrate more skill, but require parameter information on the albedo, the height of the buildings relative to the width of the streets (height-to-width ratio), the fraction of building roofs compared to street canyons from a plan view (plan area fraction), and the fraction of the surface that is vegetated. These results, whilst based on a single site and less than 18 months of data, have implications for the future design of urban land surface models, the data that need to be measured in urban observational campaigns, and what needs to be included in initiatives for regional and global parameter databases.
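The multiple-reflection idea behind the bulk albedo can be illustrated with a toy geometric series: if each reflection from a canyon facet with albedo alpha has a fraction f re-intercepted by the canyon, the flux escaping to the sky sums to alpha*(1 - f)/(1 - alpha*f). The single trapping fraction f is our simplification for illustration, not any participating scheme's parameterization:

```python
def bulk_albedo(alpha, trapped, n_reflections=50):
    """Effective canyon albedo from a truncated multiple-reflection
    series: each bounce reflects `alpha`; a fraction `trapped` is
    re-intercepted by the canyon, the rest escapes to the sky."""
    escaped, incoming = 0.0, 1.0
    for _ in range(n_reflections):
        reflected = incoming * alpha
        escaped += reflected * (1 - trapped)
        incoming = reflected * trapped
    return escaped

def bulk_albedo_closed(alpha, trapped):
    """Closed form of the infinite series: alpha*(1-f)/(1-alpha*f)."""
    return alpha * (1 - trapped) / (1 - alpha * trapped)

print(bulk_albedo(0.2, 0.4), bulk_albedo_closed(0.2, 0.4))
```

Because each bounce re-absorbs part of the flux, the effective canyon albedo is always below the facet albedo, which is why a single bulk value tuned to this series outperforms a naive flat-surface albedo.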
Abstract:
The potential risk of agricultural pesticides to mammals typically depends on internal concentrations within individuals, and these are determined by the amount ingested and by absorption, distribution, metabolism, and excretion (ADME). Pesticide residues ingested depend, amongst other things, on individual spatial choices which determine how much and when feeding sites and areas of pesticide application overlap, and can be calculated using individual-based models (IBMs). Internal concentrations can be calculated using toxicokinetic (TK) models, which are quantitative representations of ADME processes. Here we provide a population model for the wood mouse (Apodemus sylvaticus) in which TK submodels were incorporated into an IBM representation of individuals making choices about where to feed. This allows us to estimate the contribution of individual spatial choice and TK processes to risk. We compared the risk predicted by four IBMs: (i) “AllExposed-NonTK”: assuming no spatial choice so all mice have 100% exposure, no TK, (ii) “AllExposed-TK”: identical to (i) except that the TK processes are included where individuals vary because they have different temporal patterns of ingestion in the IBM, (iii) “Spatial-NonTK”: individual spatial choice, no TK, and (iv) “Spatial-TK”: individual spatial choice and with TK. The TK parameters for hypothetical pesticides used in this study were selected such that a conventional risk assessment would fail. Exposures were standardised using risk quotients (RQ; exposure divided by LD50 or LC50). We found that for the exposed sub-population including either spatial choice or TK reduced the RQ by 37–85%, and for the total population the reduction was 37–94%. However spatial choice and TK together had little further effect in reducing RQ. 
The reasons for this are that when the proportion of time spent in treated crop (PT) approaches 1, TK processes dominate and spatial choice has very little effect, and conversely if PT is small spatial choice dominates and TK makes little contribution to exposure reduction. The latter situation means that a short time spent in the pesticide-treated field mimics exposure from a small gavage dose, but TK only makes a substantial difference when the dose was consumed over a longer period. We concluded that a combined TK-IBM is most likely to bring added value to the risk assessment process when the temporal pattern of feeding, time spent in exposed area and TK parameters are at an intermediate level; for instance wood mice in foliar spray scenarios spending more time in crop fields because of better plant cover.
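The risk-quotient standardisation described above is straightforward arithmetic; a minimal sketch with hypothetical numbers (not values from the study):

```python
def risk_quotient(exposure, toxicity_endpoint):
    """RQ = exposure / endpoint (LD50 or LC50); RQ >= 1 flags an
    exposure at or above the lethal-dose benchmark."""
    return exposure / toxicity_endpoint

# Hypothetical figures: a conventional assessment fails (RQ > 1),
# then including TK or spatial choice reduces RQ by, say, 60%.
rq_all_exposed = risk_quotient(12.0, 10.0)   # 1.2 -> fails
rq_refined = rq_all_exposed * (1 - 0.60)     # refined estimate
print(rq_all_exposed, rq_refined)
```

The 37-94% reductions reported above act on RQ in exactly this multiplicative way, which is why either refinement alone can move a failing assessment below the RQ = 1 threshold.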
Abstract:
The current work discusses the compositional analysis of spectra that may be related to amorphous materials lacking discernible Lorentzian, Debye, or Drude responses. We propose to model such responses using a three-dimensional random RLC network in a descriptor formulation, which is converted into an input-output transfer function representation. A wavelet identification study of these networks is performed to infer their composition. It was concluded that wavelet filter banks enable a parsimonious representation of the dynamics in excited, randomly connected RLC networks. Furthermore, chemometric classification using the proposed technique enables the discrimination of dielectric samples with different compositions. The methodology is promising for the classification of amorphous dielectrics.
Abstract:
This paper discusses ECG signal classification after parametrizing the ECG waveforms in the wavelet domain. Signal decomposition using perfect reconstruction quadrature mirror filter banks can provide a very parsimonious representation of ECG signals. In the current work, the filter parameters are adjusted by a numerical optimization algorithm in order to minimize a cost function associated with the filter cut-off sharpness. The goal is to achieve a better compromise between frequency selectivity and time resolution at each decomposition level than standard orthogonal filter banks such as those of the Daubechies and Coiflet families. Our aim is to optimally decompose the signals in the wavelet domain so that they can subsequently be used as inputs for training a neural network classifier.
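The optimized filters themselves are not reproduced here; as a baseline sketch of a two-channel perfect-reconstruction decomposition, the fixed orthonormal Haar pair (the simplest QMF pair, which tuned filters of the kind described above would replace) can be applied level by level:

```python
import math

def haar_step(signal):
    """One two-channel analysis step with the orthonormal Haar QMF
    pair: returns (approximation, detail), each half the length.
    Assumes an even-length input."""
    s = 1 / math.sqrt(2)
    approx = [(a + b) * s for a, b in zip(signal[0::2], signal[1::2])]
    detail = [(a - b) * s for a, b in zip(signal[0::2], signal[1::2])]
    return approx, detail

def haar_decompose(signal, levels):
    """Multi-level decomposition: detail coefficients per level plus
    the final approximation, a parsimonious feature vector for
    inputs such as ECG beats. Length must be divisible by 2**levels."""
    coeffs, approx = [], list(signal)
    for _ in range(levels):
        approx, detail = haar_step(approx)
        coeffs.append(detail)
    coeffs.append(approx)
    return coeffs

print(haar_decompose([4.0, 4.0, 2.0, 2.0], 2))
```

Because the pair is orthonormal, the coefficients conserve the signal's energy; piecewise-constant stretches produce near-zero details, which is the parsimony the abstract exploits for classification.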
Abstract:
Building Information Modeling (BIM) is the process of structuring, capturing, creating, and managing a digital representation of physical and/or functional characteristics of a built space [1]. Current BIM has a limited ability to represent dynamic and social semantic information, and often fails to consider building activity, behavior, and context, thus limiting integration with intelligent, built-environment management systems. Research, such as the development of Semantic Exchange Modules and the linking of IFC with semantic web structures, demonstrates the need for building models to better support complex semantic functionality. To implement model semantics effectively, however, it is critical that model designers consider semantic information constructs. This paper discusses semantic models in relation to determining the most suitable information structure. We demonstrate how semantic rigidity can lead to significant long-term problems that can contribute to model failure. A sufficiently detailed feasibility study is advised to maximize the value from the semantic model. In addition, we propose a set of questions to be used during a model’s feasibility study, and guidelines to help assess the most suitable method for managing semantics in a built environment.