912 results for Numerical Algorithms and Problems
Abstract:
Almost all research fields in geosciences use numerical models and observations and combine these using data-assimilation techniques. With ever-increasing resolution and complexity, the numerical models tend to be highly nonlinear and also observations become more complicated and their relation to the models more nonlinear. Standard data-assimilation techniques like (ensemble) Kalman filters and variational methods like 4D-Var rely on linearizations and are likely to fail in one way or another. Nonlinear data-assimilation techniques are available, but are only efficient for small-dimensional problems, hampered by the so-called ‘curse of dimensionality’. Here we present a fully nonlinear particle filter that can be applied to higher dimensional problems by exploiting the freedom of the proposal density inherent in particle filtering. The method is illustrated for the three-dimensional Lorenz model using three particles and the much more complex 40-dimensional Lorenz model using 20 particles. By also applying the method to the 1000-dimensional Lorenz model, again using only 20 particles, we demonstrate the strong scale-invariance of the method, leading to the optimistic conjecture that the method is applicable to realistic geophysical problems. Copyright © 2010 Royal Meteorological Society
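For readers unfamiliar with the filtering setting, a minimal bootstrap particle filter on the Lorenz '63 model might look like the sketch below. This is a generic illustration, not the paper's proposal-density method; the particle count, observation interval, noise levels and Euler step size are all assumptions chosen for the example.

```python
import numpy as np

def lorenz63_step(x, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz '63 system (works on a
    single state of shape (3,) or a batch of shape (n, 3))."""
    dx = np.empty_like(x)
    dx[..., 0] = sigma * (x[..., 1] - x[..., 0])
    dx[..., 1] = x[..., 0] * (rho - x[..., 2]) - x[..., 1]
    dx[..., 2] = x[..., 0] * x[..., 1] - beta * x[..., 2]
    return x + dt * dx

rng = np.random.default_rng(0)
n_particles, obs_std = 100, 1.0  # assumed values for illustration

# Synthetic truth and an initial particle cloud around it.
truth = np.array([1.0, 1.0, 1.0])
particles = truth + rng.normal(0.0, 2.0, size=(n_particles, 3))
weights = np.full(n_particles, 1.0 / n_particles)

for step in range(200):
    truth = lorenz63_step(truth)
    particles = lorenz63_step(particles)
    if (step + 1) % 10 == 0:  # observe the x-component every 10 steps
        obs = truth[0] + rng.normal(0.0, obs_std)
        # Weight by the Gaussian observation likelihood, then resample.
        weights *= np.exp(-0.5 * ((particles[:, 0] - obs) / obs_std) ** 2)
        weights = np.maximum(weights, 1e-300)  # guard against total underflow
        weights /= weights.sum()
        idx = rng.choice(n_particles, size=n_particles, p=weights)
        particles = particles[idx]
        weights[:] = 1.0 / n_particles

estimate = particles.mean(axis=0)
```

The resample-every-observation scheme above is exactly the degeneracy-prone baseline the abstract refers to; the paper's contribution is to steer particles through the choice of proposal density so that far fewer particles suffice in high dimensions.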
Abstract:
The A-Train constellation of satellites provides a new capability to measure vertical cloud profiles that leads to more detailed information on ice-cloud microphysical properties than has been possible up to now. A variational radar–lidar ice-cloud retrieval algorithm (VarCloud) takes advantage of the complementary nature of the CloudSat radar and Cloud–Aerosol Lidar and Infrared Pathfinder Satellite Observations (CALIPSO) lidar to provide a seamless retrieval of ice water content, effective radius, and extinction coefficient from the thinnest cirrus (seen only by the lidar) to the thickest ice cloud (penetrated only by the radar). In this paper, several versions of the VarCloud retrieval are compared with the CloudSat standard ice-only retrieval of ice water content, two empirical formulas that derive ice water content from radar reflectivity and temperature, and retrievals of vertically integrated properties from the Moderate Resolution Imaging Spectroradiometer (MODIS) radiometer. The retrieved variables typically agree to within a factor of 2, on average, and most of the differences can be explained by the different microphysical assumptions. For example, the ice water content comparison illustrates the sensitivity of the retrievals to assumed ice particle shape. If ice particles are modeled as oblate spheroids rather than spheres for radar scattering, then the retrieved ice water content is reduced by 50% on average in clouds with a reflectivity factor larger than 0 dBZ. VarCloud retrieves optical depths that are on average a factor of 2 lower than those from MODIS, which can be explained by the different assumptions on particle mass and area; if VarCloud mimics the MODIS assumptions, then better agreement is found in effective radius, and optical depth is overestimated.
However, MODIS predicts the mean vertically integrated ice water content to be around a factor of 3 lower than that from VarCloud for the same retrievals, because the MODIS algorithm assumes that its retrieved effective radius (which is mostly representative of cloud top) is constant throughout the depth of the cloud. These comparisons highlight the need to refine microphysical assumptions in all retrieval algorithms, and the need for future studies to compare not only the mean values but also the full probability density function.
Abstract:
A new database of weather and circulation type catalogs is presented, comprising 17 automated classification methods and five subjective classifications. It was compiled within COST Action 733 "Harmonisation and Applications of Weather Type Classifications for European regions" in order to evaluate different methods for weather and circulation type classification. This paper gives a technical description of the included methods using a new conceptual categorization for classification methods that reflects the strategy for the definition of types. Methods using predefined types include manual and threshold-based classifications, while methods producing types derived from the input data include those based on eigenvector techniques, leader algorithms and optimization algorithms. In order to allow direct comparisons between the methods, the circulation input data and the methods' configuration were harmonized to produce a subset of standard catalogs for the automated methods. The harmonization covers the data source, the climatic parameters used, the classification period, as well as the spatial domain and the number of types. Frequency-based characteristics of the resulting catalogs are presented, including variation of class sizes, persistence, seasonal and inter-annual variability, as well as trends of the annual frequency time series. The methodological concept of the classifications is partly reflected by these properties of the resulting catalogs. It is shown that, compared to automated methods, the types of subjective classifications show higher persistence, inter-annual variation and long-term trends. Among the automated classifications, optimization methods show a tendency toward longer persistence and higher seasonal variation. However, it is also concluded that the distance metric used and the data preprocessing play at least as important a role for the properties of the resulting classification as the algorithm used for type definition and assignment.
Abstract:
Our group considered the desirability of including representations of uncertainty in the development of parameterizations. (By ‘uncertainty’ here we mean the deviation of sub-grid scale fluxes or tendencies in any given model grid box from truth.) We unanimously agreed that ECMWF should attempt to provide a more physical basis for uncertainty estimates than the very effective but ad hoc methods being used at present. Our discussions identified several issues that will arise.
Abstract:
The technique of constructing a transformation, or regrading, of a discrete data set such that the histogram of the transformed data matches a given reference histogram is commonly known as histogram modification. The technique is widely used for image enhancement and normalization. A method which has been previously derived for producing such a regrading is shown to be “best” in the sense that it minimizes the error between the cumulative histogram of the transformed data and that of the given reference function, over all single-valued, monotone, discrete transformations of the data. Techniques for smoothed regrading, which provide a means of balancing the error in matching a given reference histogram against the information lost with respect to a linear transformation, are also examined. The smoothed regradings are shown to optimize certain cost functionals. Numerical algorithms for generating the smoothed regradings, which are simple and efficient to implement, are described, and practical applications to the processing of LANDSAT image data are discussed.
Abstract:
Methods for producing nonuniform transformations, or regradings, of discrete data are discussed. The transformations are useful in image processing, principally for enhancement and normalization of scenes. Regradings which “equidistribute” the histogram of the data, that is, which transform it into a constant function, are determined. Techniques for smoothing the regrading, dependent upon a continuously variable parameter, are presented. Generalized methods for constructing regradings such that the histogram of the data is transformed into any prescribed function are also discussed. Numerical algorithms for implementing the procedures and applications to specific examples are described.
Abstract:
The task of this paper is to develop a Time-Domain Probe Method for the reconstruction of impenetrable scatterers. The basic idea of the method is to use pulses in the time domain and the time-dependent response of the scatterer to reconstruct its location and shape. The method is based on the basic causality principle of time-dependent scattering. It is independent of the boundary condition and is applicable to limited-aperture scattering data. In particular, we discuss the reconstruction of the shape of a rough surface in three dimensions from time-domain measurements of the scattered field. In practice, measurement data are collected where the incident field is given by a pulse. We formulate the time-domain field reconstruction problem equivalently via frequency-domain integral equations or via a retarded boundary integral equation based on results of Bamberger, Ha-Duong and Lubich. In contrast to pure frequency-domain methods, here we use a time-domain characterization of the unknown shape for its reconstruction. Our paper describes the Time-Domain Probe Method and relates it to previous frequency-domain approaches to sampling and probe methods by Colton, Kirsch, Ikehata, Potthast, Luke, Sylvester et al. The approach significantly extends recent work of Chandler-Wilde and Lines (2005) and Luke and Potthast (2006) on the time-domain point source method. We provide a complete convergence analysis of the method for the rough-surface scattering case and provide numerical simulations and examples.
Abstract:
In this paper, the current and near-future global market potential of solar thermal, photovoltaic (PV) and combined photovoltaic/thermal (PV/T) technologies was discussed. The concept of the PV/T and the theory behind PV/T operation were briefly introduced, and standards for evaluating the technical, economic and environmental performance of PV/T systems were addressed. A comprehensive literature review of R&D work and practical applications of PV/T technology was presented, and the review results were critically analysed in terms of PV/T type and research methodology used. The major features, current status, research focuses and existing difficulties/barriers related to the various types of PV/T were identified. The research methods applied to PV/T technology, including theoretical analysis and computer simulation, experimental and combined experimental/theoretical investigation, demonstration and feasibility study, as well as economic and environmental analyses, were individually discussed, and the achievements and remaining problems in each research-method category were described. Finally, opportunities for further work on PV/T were identified. The review indicated that air/water-based PV/T systems are the most commonly used technologies, but their heat-removal effectiveness is relatively low. Refrigerant/heat-pipe-based PV/T systems, although still at the research/laboratory stage, could achieve much higher solar conversion efficiencies than air/water-based systems. However, these systems face a number of technical challenges in practice that require further work.
The review suggested that further work could be undertaken to (1) develop new feasible, economic and energy-efficient PV/T systems; (2) optimise the structural/geometrical configurations of the existing PV/T systems; (3) study the long-term dynamic performance of PV/T systems; (4) demonstrate PV/T systems in real buildings and conduct feasibility studies; and (5) carry out advanced economic and environmental analyses. This review helps identify the questions remaining in PV/T technology and new research topics/directions to further improve PV/T performance, remove the barriers to practical PV/T application, establish standards/regulations for PV/T design and installation, and promote its market penetration throughout the world.
Abstract:
Results from aircraft and surface observations provided evidence for the existence of mesoscale circulations over the Boreal Ecosystem-Atmosphere Study (BOREAS) domain. Using an integrated approach that included the use of analytical modeling, numerical modeling, and data analysis, we have found that there are substantial contributions to the total budgets of heat over the BOREAS domain generated by mesoscale circulations. This effect is largest when the synoptic flow is relatively weak, yet it is present under less favorable conditions, as shown by the case study presented here. While further analysis is warranted to document this effect, the existence of mesoscale flow is not surprising, since it is related to the presence of landscape patches, including lakes, which are of a size on the order of the local Rossby radius and which have spatial differences in maximum sensible heat flux of about 300 W m−2. We have also analyzed the vertical temperature profile simulated in our case study as well as high-resolution soundings and we have found vertical profiles of temperature change above the boundary layer height, which we attribute in part to mesoscale contributions. Our conclusion is that in regions with organized landscapes, such as BOREAS, even with relatively strong synoptic winds, dynamical scaling criteria should be used to assess whether mesoscale effects should be parameterized or explicitly resolved in numerical models of the atmosphere.
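The "local Rossby radius" scale criterion mentioned above can be illustrated with a back-of-the-envelope calculation using the internal Rossby radius of deformation, Nh/f. The stratification, depth and latitude below are illustrative mid-latitude values assumed for the example, not numbers taken from the BOREAS analysis.

```python
import math

# Illustrative values (assumptions, not from the study):
N = 0.01      # Brunt-Väisälä frequency, s^-1
h = 1500.0    # depth of the heated layer, m
lat = 55.0    # latitude, degrees
f = 2 * 7.2921e-5 * math.sin(math.radians(lat))  # Coriolis parameter, s^-1

# Internal Rossby radius of deformation: lambda = N * h / f
rossby_radius_km = N * h / f / 1000.0
print(f"local Rossby radius ~ {rossby_radius_km:.0f} km")
```

With these numbers the radius comes out on the order of 100 km, which is why landscape patches (lakes, burn scars) of that scale are the ones most likely to drive resolvable mesoscale circulations.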
Comparing the thermal performance of horizontal slinky-loop and vertical slinky-loop heat exchangers
Abstract:
The heat pump market in the UK has grown rapidly over the last few years. Performance analyses of vertical ground-loop heat exchanger configurations have been widely carried out using both numerical modelling and experiments. However, research findings and design recommendations for horizontal slinky-loop and vertical slinky-loop heat exchangers are far fewer than those for vertical ground-loop configurations, especially where the long-term operation of the systems is concerned. This paper presents the results of a numerical simulation of the horizontal slinky-loop and vertical slinky-loop heat exchangers of a ground-source heat pump system. A three-dimensional numerical heat transfer model was developed to study the thermal performance of various heat exchanger configurations. The influence of the loop pitch (loop spacing) and of the installation depth of a vertical slinky-loop were investigated, and the thermal performance and excavation work required for the horizontal and vertical slinky-loop heat exchangers were compared. The results of this study show that the influence of the installation depth of the vertical slinky-loop heat exchanger on the thermal performance of the system is small. The maximum difference in thermal performance between vertical and horizontal slinky-loop heat exchangers with the same loop diameter and loop pitch is less than 5%.
Abstract:
The goal of the Chemistry-Climate Model Validation (CCMVal) activity is to improve understanding of chemistry-climate models (CCMs) through process-oriented evaluation and to provide reliable projections of stratospheric ozone and its impact on climate. An appreciation of the details of model formulations is essential for understanding how models respond to the changing external forcings of greenhouse gases and ozone-depleting substances, and hence for understanding the ozone and climate forecasts produced by the models participating in this activity. Here we introduce and review the models used for the second round (CCMVal-2) of this intercomparison, regarding the implementation of chemical, transport, radiative, and dynamical processes in these models. In particular, we review the advantages and problems associated with approaches used to model processes of relevance to stratospheric dynamics and chemistry. Furthermore, we state the definitions of the reference simulations performed, and describe the forcing data used in these simulations. We identify some developments in chemistry-climate modeling that make models more physically based or more comprehensive, including the introduction of an interactive ocean, online photolysis, troposphere-stratosphere chemistry, and non-orographic gravity-wave deposition as linked to tropospheric convection. The relatively new developments indicate that stratospheric CCM modeling is becoming more consistent with our physically based understanding of the atmosphere.
Cross-layer design for MIMO systems over spatially correlated and keyhole Nakagami-m fading channels
Abstract:
Cross-layer design is a generic designation for a set of efficient adaptive transmission schemes, across multiple layers of the protocol stack, that are aimed at enhancing the spectral efficiency and increasing the transmission reliability of wireless communication systems. In this paper, one such cross-layer design scheme that combines physical layer adaptive modulation and coding (AMC) with link layer truncated automatic repeat request (T-ARQ) is proposed for multiple-input multiple-output (MIMO) systems employing orthogonal space-time block coding (OSTBC). The performance of the proposed cross-layer design is evaluated in terms of achievable average spectral efficiency (ASE), average packet loss rate (PLR) and outage probability, for which analytical expressions are derived, considering transmission over two types of MIMO fading channels, namely, spatially correlated Nakagami-m fading channels and keyhole Nakagami-m fading channels. Furthermore, the effects of the maximum number of ARQ retransmissions, the numbers of transmit and receive antennas, the Nakagami fading parameter and the spatial correlation parameters are studied and discussed based on numerical results and comparisons. Copyright © 2009 John Wiley & Sons, Ltd.
Abstract:
As one of the most important geological events of the Cenozoic era, the uplift of the Tibetan Plateau (TP) has had profound influences on Asian and global climate and environmental evolution. During the past four decades, many scholars from China and abroad have studied the climatic and environmental effects of the TP uplift using a variety of geological records and paleoclimate numerical simulations. The existing research results enrich our understanding of the mechanisms of Asian monsoon changes and interior aridification, but there are still many issues that need to be considered deeply and investigated further. This paper attempts to review the research on the influence of the TP uplift on the Asian monsoon-arid environment, to summarize three types of numerical simulations, namely bulk-plateau uplift, phased uplift and sub-regional uplift, and especially to analyze regional differences in the responses of climate and environment to different forms of tectonic uplift. Previous modeling results suggest that the land-sea distribution and the Himalayan uplift may have a large effect on the establishment and development of the South Asian monsoon. However, the formation and evolution of the monsoon in northern East Asia, the intensified dryness north of the TP and the enhanced Asian dust cycle may be more closely related to the uplift of the main body, especially the northern part, of the TP. In this review, we also discuss the relative roles of the TP uplift and other impact factors, the origins of the South Asian monsoon and East Asian monsoon, and the feedback effects and nonlinear responses of climatic and environmental changes to the plateau uplift. Finally, we compare numerical simulations with geological records, discuss their uncertainties, and highlight some problems worthy of further study.
Abstract:
This article explores the translation and reception of the Memoirs and Travels (1790) of Count Mauritius Augustus Benyowsky (1746-86) in the Netherlands, and examines the complications, tensions and problems that transfer between a major and a more minor European language involves. I analyse how the Dutch translator Petrus Loosjes Adriaanszoon positioned himself as a mediator between these very different source and target cultures and ask how he dealt with the problems of plausibility and ‘credit’ which had beleaguered the reception of the Memoirs and Travels from the outset. In this article I am concerned to restore minority languages to the discussion of how travel literature circulated in Western Europe at the close of the eighteenth century and to demonstrate how major/minor language translation was central to the construction of Dutch-language culture in the Low Countries in this period.
Abstract:
Numerical climate models constitute the best available tools to tackle the problem of climate prediction. Two assumptions lie at the heart of their suitability: (1) a climate attractor exists, and (2) the numerical climate model's attractor lies on the actual climate attractor, or at least on the projection of the climate attractor on the model's phase space. In this contribution, the Lorenz '63 system is used both as a prototype system and as an imperfect model to investigate the implications of the second assumption. By comparing results drawn from the Lorenz '63 system and from numerical weather and climate models, the implications of using imperfect models for the prediction of weather and climate are discussed. It is shown that the imperfect model's orbit and the system's orbit are essentially different, purely due to model error and not to sensitivity to initial conditions. Furthermore, if a model is a perfect model, then the attractor, reconstructed by sampling a collection of initialised model orbits (forecast orbits), will be invariant to forecast lead time. This conclusion provides an alternative method for the assessment of climate models.
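The central point, that an imperfect model's orbit separates from the system's orbit through model error alone, can be reproduced in a few lines with the Lorenz '63 system: integrate twice from the identical initial condition, once with the standard parameters (the "system") and once with a slightly perturbed rho (the "imperfect model"). The perturbation size and Euler step below are assumptions for illustration.

```python
import numpy as np

def lorenz63(x0, n_steps, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Integrate Lorenz '63 with forward Euler; returns the full orbit."""
    orbit = np.empty((n_steps + 1, 3))
    orbit[0] = x0
    x, y, z = x0
    for i in range(n_steps):
        # Tuple assignment: the right-hand side uses the old (x, y, z).
        x, y, z = (x + dt * sigma * (y - x),
                   y + dt * (x * (rho - z) - y),
                   z + dt * (x * y - beta * z))
        orbit[i + 1] = (x, y, z)
    return orbit

x0 = np.array([1.0, 1.0, 1.0])
truth = lorenz63(x0, 2000)            # the "system"
model = lorenz63(x0, 2000, rho=28.5)  # the "imperfect model": perturbed rho
separation = np.linalg.norm(truth - model, axis=1)
# separation[0] is exactly zero, yet the two orbits diverge with time:
# the growth is due purely to model error, not to initial-condition error.
```

Plotting `separation` against time shows growth that saturates at roughly the attractor diameter, which is the behaviour the abstract contrasts with initial-condition sensitivity.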