466 results for Parameterized polygons
Abstract:
In this paper we address the problem of extracting representative point samples from polygonal models. The goal of such a sampling algorithm is to find points that are evenly distributed. We propose the star discrepancy as a measure of sampling quality and introduce new sampling methods based on global line distributions. We investigate several line generation algorithms, including an efficient hardware-based sampling method. Our method contributes to the area of point-based graphics by extracting points that are more evenly distributed than those produced by current sampling algorithms.
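The evenness criterion the abstract proposes can be made concrete with a brute-force star-discrepancy estimate. The sketch below is not the authors' method; it is a minimal lower-bound estimator over randomly drawn anchored boxes in the unit square, comparing an even grid against a deliberately clustered point set:

```python
import random

def star_discrepancy_estimate(points, trials=2000, seed=0):
    """Lower-bound estimate of the star discrepancy of 2D points in the
    unit square: the worst gap between the fraction of points falling in
    an anchored box [0, x) x [0, y) and the box's area, over randomly
    drawn corners (x, y)."""
    rng = random.Random(seed)
    n = len(points)
    worst = 0.0
    for _ in range(trials):
        x, y = rng.random(), rng.random()
        inside = sum(1 for px, py in points if px < x and py < y)
        worst = max(worst, abs(inside / n - x * y))
    return worst

# An even 16x16 grid versus a badly clustered set of the same size.
grid = [((i + 0.5) / 16, (j + 0.5) / 16) for i in range(16) for j in range(16)]
cluster = [(0.9, 0.9)] * 256
d_grid = star_discrepancy_estimate(grid)
d_cluster = star_discrepancy_estimate(cluster)
```

The true star discrepancy is a supremum over all anchored boxes, so random corner sampling only bounds it from below; the evenly spread grid scores far lower than the clustered set.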
Abstract:
New web-mapping platforms appear continually, with the drawback that each one uses its own API. Given the great heterogeneity of existing map APIs, it would be convenient to have a mapping library capable of abstracting away, for the developer, the small differences between them. This is the goal of Mapstraction, an open-source JavaScript library. This kind of API is known as a "universal, polyglot API". Thanks to Mapstraction, applications can be developed in which the user can view cartographic information from several providers, but it has the drawback of providing no creation or editing mechanisms. This document presents the main new features of the IDELab MapstractionInteractive library, an extension of Mapstraction that offers new functionality to remedy those shortcomings. The new features implemented for the providers included in the library give the user the ability to create and edit geometries on the map (points, lines and polygons). In addition, new map events are implemented within the library, so that the programmer has finer control over what the user does on the map.
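The "universal, polyglot API" idea amounts to an adapter layer: client code programs against one neutral interface, and thin per-provider backends translate the calls. A minimal Python sketch of that pattern follows (illustrative only; Mapstraction itself is JavaScript, and all class and method names here are invented):

```python
class MapProvider:
    """Neutral interface that every concrete provider backend implements."""
    def add_marker(self, lat, lon):
        raise NotImplementedError
    def add_polygon(self, vertices):
        raise NotImplementedError

class GoogleBackend(MapProvider):
    """Stand-in for a Google Maps binding; records calls instead of drawing."""
    def __init__(self):
        self.calls = []
    def add_marker(self, lat, lon):
        self.calls.append(("gmap_marker", lat, lon))
    def add_polygon(self, vertices):
        self.calls.append(("gmap_polygon", tuple(vertices)))

class OpenLayersBackend(MapProvider):
    """Stand-in for an OpenLayers binding."""
    def __init__(self):
        self.calls = []
    def add_marker(self, lat, lon):
        self.calls.append(("ol_marker", lat, lon))
    def add_polygon(self, vertices):
        self.calls.append(("ol_polygon", tuple(vertices)))

class UniversalMap:
    """Client code talks only to this class, never to a provider API."""
    def __init__(self, backend):
        self.backend = backend
    def draw_point(self, lat, lon):
        self.backend.add_marker(lat, lon)
    def draw_polygon(self, vertices):
        self.backend.add_polygon(vertices)

# The same client code runs unchanged against either provider.
backends = [GoogleBackend(), OpenLayersBackend()]
for b in backends:
    m = UniversalMap(b)
    m.draw_point(41.65, -4.72)
    m.draw_polygon([(0.0, 0.0), (0.0, 1.0), (1.0, 1.0)])
```

Extending such a library with editing support, as IDELab MapstractionInteractive does, means adding new methods to the neutral interface and implementing them once per provider backend.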
Abstract:
The transport of stratospheric air deep into the troposphere via convection is investigated numerically using the UK Met Office Unified Model. A convective system that formed on 27 June 2004 near southeast England, in the vicinity of an upper-level potential vorticity anomaly and a lowered tropopause, provides the basis for analysis. Transport is diagnosed using a stratospheric tracer that can either be passed through or withheld from the model’s convective parameterization scheme. Three simulations are performed at increasingly finer resolutions, with horizontal grid lengths of 12, 4, and 1 km. In the 12 and 4 km simulations, tracer is transported deeply into the troposphere by the parameterized convection. In the 1 km simulation, for which the convective parameterization is disengaged, deep transport is still accomplished but with a much smaller magnitude. However, the 1 km simulation resolves stirring along the tropopause that does not exist in the coarser simulations. In all three simulations, the concentration of the deeply transported tracer is small, three orders of magnitude less than that of the shallow transport near the tropopause, most likely because of the efficient dilution of parcels in the lower troposphere.
Abstract:
The influence of surface waves and an applied wind stress is studied in an ensemble of large eddy simulations to investigate the nature of deeply penetrating jets into an unstratified mixed layer. The influence of a steady monochromatic surface wave propagating parallel to the wind direction is parameterized using the wave-filtered Craik-Leibovich equations. Tracer trajectories and instantaneous downwelling velocities reveal classic counterrotating Langmuir rolls. The associated downwelling jets penetrate to depths in excess of the wave's Stokes depth scale, δs. Qualitative evidence suggests the depth of the jets is controlled by the Ekman depth scale. Analysis of turbulent kinetic energy (tke) budgets reveals a dynamical distinction between Langmuir turbulence and shear-driven turbulence. In the former, tke production is dominated by Stokes shear and a vertical flux term transports tke to a depth where it is dissipated. In the latter, tke production is from the mean shear and is locally balanced by dissipation. We define the turbulent Langmuir number La_t = (v*/U_s)^(1/2) (v* is the ocean's friction velocity and U_s is the surface Stokes drift velocity) and a turbulent anisotropy coefficient R_t = ⟨w'w'⟩/(⟨u'u'⟩ + ⟨v'v'⟩). The transition between shear-driven and Langmuir turbulence is investigated by varying the external wave parameters δs and La_t and by diagnosing R_t and the Eulerian mean and Stokes shears. When either La_t or δs is sufficiently small, the Stokes shear dominates the mean shear and the flow is preconditioned to Langmuir turbulence and the associated deeply penetrating jets.
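The turbulent Langmuir number defined in the abstract is a one-line computation. A small sketch with illustrative values follows (the velocity magnitudes below are typical open-ocean numbers chosen for illustration, not values from the paper):

```python
import math

def turbulent_langmuir_number(u_star, u_stokes):
    """La_t = sqrt(u*/U_s): ratio of the friction velocity scale to the
    surface Stokes drift velocity scale. Small La_t indicates wave
    (Stokes) forcing dominates over wind-driven shear."""
    return math.sqrt(u_star / u_stokes)

u_star = 0.01      # ocean friction velocity, m/s (illustrative)
u_stokes = 0.068   # surface Stokes drift velocity, m/s (illustrative)
la_t = turbulent_langmuir_number(u_star, u_stokes)
# Values near 0.3-0.4 are often quoted for fully developed seas; smaller
# values favour Langmuir turbulence, larger ones shear-driven turbulence.
```

Varying `u_star` and `u_stokes` here mirrors the parameter sweep the abstract describes for locating the transition between the two turbulence regimes.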
Abstract:
We report on a numerical study of the impact of short, fast inertia-gravity waves on the large-scale, slowly-evolving flow with which they co-exist. A nonlinear quasi-geostrophic numerical model of a stratified shear flow is used to simulate, at reasonably high resolution, the evolution of a large-scale mode which grows due to baroclinic instability and equilibrates at finite amplitude. Ageostrophic inertia-gravity modes are filtered out of the model by construction, but their effects on the balanced flow are incorporated using a simple stochastic parameterization of the potential vorticity anomalies which they induce. The model simulates a rotating, two-layer annulus laboratory experiment, in which we recently observed systematic inertia-gravity wave generation by an evolving, large-scale flow. We find that the impact of the small-amplitude stochastic contribution to the potential vorticity tendency, on the model balanced flow, is generally small, as expected. In certain circumstances, however, the parameterized fast waves can exert a dominant influence. In a flow which is baroclinically-unstable to a range of zonal wavenumbers, and in which there is a close match between the growth rates of the multiple modes, the stochastic waves can strongly affect wavenumber selection. This is illustrated by a flow in which the parameterized fast modes dramatically re-partition the probability-density function for equilibrated large-scale zonal wavenumber. In a second case study, the stochastic perturbations are shown to force spontaneous wavenumber transitions in the large-scale flow, which do not occur in their absence. These phenomena are due to a stochastic resonance effect. They add to the evidence that deterministic parameterizations in general circulation models, of subgrid-scale processes such as gravity wave drag, cannot always adequately capture the full details of the nonlinear interaction.
Abstract:
The shallow water equations are solved using a mesh of polygons on the sphere, which adapts infrequently to the predicted future solution. Infrequent mesh adaptation reduces the cost of adaptation and load-balancing and will thus allow for more accurate mapping on adaptation. We simulate the growth of a barotropically unstable jet adapting the mesh every 12 h. Using an adaptation criterion based largely on the gradient of the vorticity leads to a mesh with around 20 per cent of the cells of a uniform mesh that gives equivalent results. This is a similar proportion to previous studies of the same test case with mesh adaptation every 1–20 min. The prediction of the mesh density involves solving the shallow water equations on a coarse mesh in advance of the locally refined mesh in order to estimate where features requiring higher resolution will grow, decay or move to. The adaptation criterion consists of two parts: that resolved on the coarse mesh, and that which is not resolved and so is passively advected on the coarse mesh. This combination leads to a balance between resolving features controlled by the large-scale dynamics and maintaining fine-scale features.
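An adaptation criterion based on the gradient of the vorticity, as described above, can be sketched as a cell-flagging pass over a coarse grid. This is a simplified stand-in (centred/one-sided differences on a regular array with a single threshold), not the paper's actual criterion on a polygonal mesh:

```python
def refine_flags(vorticity, threshold):
    """Flag cells whose local vorticity-gradient magnitude exceeds a
    threshold. `vorticity` is a 2D list (ny rows by nx columns); the
    differences fall back to one-sided stencils at the boundaries."""
    ny, nx = len(vorticity), len(vorticity[0])
    flags = [[False] * nx for _ in range(ny)]
    for j in range(ny):
        for i in range(nx):
            dzdx = vorticity[j][min(i + 1, nx - 1)] - vorticity[j][max(i - 1, 0)]
            dzdy = vorticity[min(j + 1, ny - 1)][i] - vorticity[max(j - 1, 0)][i]
            grad = (dzdx ** 2 + dzdy ** 2) ** 0.5
            flags[j][i] = grad > threshold
    return flags

# A sharp shear line between two uniform-vorticity regions.
step = [[1.0] * 8 for _ in range(4)] + [[0.0] * 8 for _ in range(4)]
flags = refine_flags(step, threshold=0.5)
```

Only the cells straddling the shear line are flagged, while the smooth interior stays coarse; this is how a gradient criterion concentrates resolution near an unstable jet, and running it on a coarse forecast (as the paper does) predicts where refinement will be needed rather than where it is needed now.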
Abstract:
An algorithm is presented for the generation of molecular models of defective graphene fragments, containing a majority of 6-membered rings with a small number of 5- and 7-membered rings as defects. The structures are generated from an initial random array of points in 2D space, which are then subjected to Delaunay triangulation. The dual of the triangulation forms a Voronoi tessellation of polygons with a range of ring sizes. An iterative cycle of refinement, involving deletion and addition of points followed by further triangulation, is performed until the user-defined criteria for the number of defects are met. The array of points and connectivities is then converted to a molecular structure and subjected to geometry optimization using a standard molecular modeling package to generate final atomic coordinates. On the basis of molecular mechanics with minimization, this automated method can generate structures that conform to user-supplied criteria and avoid the potential bias associated with the manual building of structures. One application of the algorithm is the generation of structures for the evaluation of the reactivity of different defect sites. Ab initio electronic structure calculations on a representative structure indicate preferential fluorination close to 5-ring defects.
Abstract:
This article describes the development and evaluation of the U.K.’s new High-Resolution Global Environmental Model (HiGEM), which is based on the latest climate configuration of the Met Office Unified Model, known as the Hadley Centre Global Environmental Model, version 1 (HadGEM1). In HiGEM, the horizontal resolution has been increased to 0.83° latitude × 1.25° longitude for the atmosphere, and 1/3° × 1/3° globally for the ocean. Multidecadal integrations of HiGEM, and the lower-resolution HadGEM, are used to explore the impact of resolution on the fidelity of climate simulations. Generally, SST errors are reduced in HiGEM. Cold SST errors associated with the path of the North Atlantic drift improve, and warm SST errors are reduced in upwelling stratocumulus regions where the simulation of low-level cloud is better at higher resolution. The ocean model in HiGEM allows ocean eddies to be partially resolved, which dramatically improves the representation of sea surface height variability. In the Southern Ocean, most of the heat transport in HiGEM is achieved by resolved eddy motions, which replace the parameterized eddy heat transport in the lower-resolution model. HiGEM is also able to more realistically simulate small-scale features in the wind stress curl around islands and oceanic SST fronts, which may have implications for oceanic upwelling and ocean biology. Higher resolution in both the atmosphere and the ocean allows coupling to occur on small spatial scales. In particular, the small-scale interaction recently seen in satellite imagery between the atmosphere and tropical instability waves in the tropical Pacific Ocean is realistically captured in HiGEM. Tropical instability waves play a role in improving the simulation of the mean state of the tropical Pacific, which has important implications for climate variability.
In particular, all aspects of the simulation of ENSO (spatial patterns, the time scales at which ENSO occurs, and global teleconnections) are much improved in HiGEM.
Abstract:
Estimates of soil organic carbon (SOC) stocks and changes under different land use systems can help determine vulnerability to land degradation. Such information is important for countries in arid areas with high susceptibility to desertification. SOC stocks, and predicted changes between 2000 and 2030, were determined at the national scale for Jordan using the Global Environment Facility Soil Organic Carbon (GEFSOC) Modelling System. For the purpose of this study, Jordan was divided into three natural regions (the Jordan Valley, the Uplands and the Badia) and three developmental regions (North, Middle and South). Based on this division, Jordan was partitioned into five zones according to the dominant land use: the Jordan Valley, the North Uplands, the Middle Uplands, the South Uplands and the Badia. This information was merged using GIS, along with a map of rainfall isohyets, to produce a map with 498 polygons. Each of these was given a unique ID and a land management unit identifier, and was characterized in terms of its dominant soil type. Historical land use data, current land use and future land use change scenarios were also assembled, forming major inputs of the modelling system. The GEFSOC Modelling System was then run to produce C stocks in Jordan for the years 1990, 2000 and 2030. The results were compared with conventional methods of estimating carbon stocks, such as the mapping-based SOTER method. These comparisons showed that the model runs are acceptable, taking into consideration the limited availability of long-term experimental soil data that can be used to validate them. The main findings of this research show that between 2000 and 2030, SOC may increase in heavily used areas under irrigation and will likely decrease in the grazed rangelands that cover most of Jordan, giving an overall decrease in total SOC over time if the land is indeed used under the estimated forms of land use. (C) 2007 Elsevier B.V. All rights reserved.
Abstract:
1. The UK Biodiversity Action Plan (UKBAP) identifies invertebrate species in danger of national extinction. For many of these species, targets for recovery specify the number of populations that should exist by a specific future date but offer no procedure to plan strategically to achieve the target for any species. 2. Here we describe techniques based upon geographic information systems (GIS) that produce conservation strategy maps (CSM) to assist with achieving recovery targets based on all available and relevant information. 3. The heath fritillary Mellicta athalia is a UKBAP species used here to illustrate the use of CSM. A phase 1 habitat survey was used to identify habitat polygons across the county of Kent, UK. These were systematically filtered using relevant habitat, botanical and autecological data to identify seven types of polygon, including those with extant colonies or in the vicinity of extant colonies, areas managed for conservation but without colonies, and polygons that had the appropriate habitat structure and may therefore be suitable for reintroduction. 4. Five clusters of polygons of interest were found across the study area. The CSM of two of them are illustrated here: the Blean Wood complex, which contains the existing colonies of heath fritillary in Kent, and the Orlestone Forest complex, which offers opportunities for reintroduction. 5. Synthesis and applications. Although the CSM concept is illustrated here for the UK, we suggest that CSM could be part of species conservation programmes throughout the world. CSM are dynamic and should be stored in electronic format, preferably on the world-wide web, so that they can be easily viewed and updated. CSM can be used to illustrate opportunities and to develop strategies with scientists and non-scientists, enabling the engagement of all communities in a conservation programme. 
CSM for different years can be presented to illustrate the progress of a plan or to provide continuous feedback on how a field scenario develops.
Abstract:
A new model, RothPC-1, is described for the turnover of organic C in the top metre of soil. RothPC-1 is a version of RothC-26.3, an earlier model for the turnover of C in topsoils. In RothPC-1 two extra parameters are used to model turnover in the top metre of soil: one, p, which moves organic C down the profile by an advective process, and the other, s, which slows decomposition with depth. RothPC-1 is parameterized and tested using measurements (described in Part 1, this issue) of total organic C and radiocarbon on soil profiles from the Rothamsted long-term field experiments, collected over a period of more than 100 years. RothPC-1 gives fits to measurements of organic C and radiocarbon in the 0-23, 23-46, 46-69 and 69-92 cm layers of soil that are almost all within (or close to) measurement error in two areas of regenerating woodland (Geescroft and Broadbalk Wildernesses) and an area of cultivated land from the Broadbalk Continuous Wheat Experiment. The fits to old grassland (the Park Grass Experiment) are less close. Two other sites that provide the requisite pre- and post-bomb data are also fitted; a prairie Chernozem from Russia and an annual grassland from California. RothPC-1 gives a close fit to measurements of organic C and radiocarbon down the Chernozem profile, provided that allowance is made for soil age; with the annual grassland the fit is acceptable in the upper part of the profile, but not in the clay-rich Bt horizon below. Calculations suggest that treating the top metre of soil as a homogeneous unit will greatly overestimate the effects of global warming in accelerating the decomposition of soil C and hence on the enhanced release of CO2 from soil organic matter; more realistic estimates will be obtained from multi-layer models such as RothPC-1.
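The role of the two extra parameters p and s can be illustrated with a schematic multi-layer box model: decomposition slows by a factor s per layer of depth, and a fraction p of each layer's carbon is advected to the layer below each year. This is only a sketch in the spirit of RothPC-1, with made-up rates and a single carbon pool per layer, not the published parameterization:

```python
def step_profile(carbon, k, p, s, litter_input, dt=1.0):
    """Advance a schematic multi-layer soil-C profile by one time step.

    carbon       -- list of C stocks per layer, top first (t C/ha)
    k            -- decomposition rate of the top layer (1/yr)
    p            -- fraction of each layer's C advected downward per yr
    s            -- factor (< 1) by which decomposition slows per layer
    litter_input -- C entering the top layer (t C/ha/yr)
    """
    n = len(carbon)
    new = carbon[:]
    for i in range(n):
        decomp = k * (s ** i) * carbon[i] * dt   # slower at depth
        moved = p * carbon[i] * dt               # advection downward
        new[i] -= decomp + moved
        if i + 1 < n:
            new[i + 1] += moved                  # received from above
    new[0] += litter_input * dt
    return new

# Four layers standing in for the 0-23, 23-46, 46-69 and 69-92 cm depths.
profile = [30.0, 15.0, 8.0, 4.0]
for _ in range(100):
    profile = step_profile(profile, k=0.05, p=0.01, s=0.5, litter_input=1.5)
```

With s < 1 the deep layers turn over far more slowly than the topsoil, which is the paper's point about why a homogeneous top-metre treatment overestimates warming-driven CO2 release.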
Abstract:
Models of the dynamics of nitrogen in soil (soil-N) can be used to aid the fertilizer management of a crop. The predictions of soil-N models can be validated by comparison with observed data. Validation generally involves calculating non-spatial statistics of the observations and predictions, such as their means, their mean squared difference, and their correlation. However, when the model predictions are spatially distributed across a landscape the model requires validation with spatial statistics. There are three reasons for this: (i) the model may be more or less successful at reproducing the variance of the observations at different spatial scales; (ii) the correlation of the predictions with the observations may be different at different spatial scales; (iii) the spatial pattern of model error may be informative. In this study we used a model, parameterized with spatially variable input information about the soil, to predict the mineral-N content of soil in an arable field, and compared the results with observed data. We validated the performance of the N model spatially with a linear mixed model of the observations and model predictions, estimated by residual maximum likelihood. This novel approach allowed us to describe the joint variation of the observations and predictions as: (i) independent random variation that occurred at a fine spatial scale; (ii) correlated random variation that occurred at a coarse spatial scale; (iii) systematic variation associated with a spatial trend. The linear mixed model revealed that, in general, the performance of the N model changed depending on the spatial scale of interest. At the scales associated with random variation, the N model underestimated the variance of the observations, and the predictions were correlated poorly with the observations. At the scale of the trend, the predictions and observations shared a common surface.
The spatial pattern of the error of the N model suggested that the observations were affected by the local soil condition, but this was not accounted for by the N model. In summary, the N model would be well suited to field-scale management of soil nitrogen, but poorly suited to management at finer spatial scales. This information was not apparent with a non-spatial validation. (C) 2007 Elsevier B.V. All rights reserved.
Abstract:
An efficient method is described for the approximate calculation of the intensity of multiply scattered lidar returns. It divides the outgoing photons into three populations, representing those that have experienced zero, one, and more than one forward-scattering event. Each population is parameterized at each range gate by its total energy, its spatial variance, the variance of photon direction, and the covariance of photon direction and position. The result is that for an N-point profile the calculation is O(N^2) efficient and implicitly includes up to Nth-order scattering, making it ideal for use in iterative retrieval algorithms for which speed is crucial. In contrast, models that explicitly consider each scattering order separately are at best O(N^m/m!) efficient for mth-order scattering and often cannot be performed to more than the third or fourth order in retrieval algorithms. For typical cloud profiles and a wide range of lidar fields of view, the new algorithm is as accurate as an explicit calculation truncated at the fifth or sixth order but faster by several orders of magnitude. (C) 2006 Optical Society of America.
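The O(N^2) cost arises because each range gate sums one parameterized contribution from every earlier gate, rather than enumerating scattering orders explicitly. The toy below reproduces only that loop structure; the paper's method tracks per-gate energy, spatial variance, direction variance and their covariance, which this sketch (with an invented scattering weight) does not attempt:

```python
import math

def toy_multiscatter_return(extinction, dr=1.0):
    """Toy O(N^2) range-gate accumulation for a lidar return profile.

    The return at gate j is a single-scattering term plus a crude
    forward-scattering contribution from every earlier gate i < j.
    Schematic only: illustrates the cost structure, not the physics."""
    n = len(extinction)
    tau, trans = 0.0, []
    for ext in extinction:               # two-way transmission per gate
        tau += ext * dr
        trans.append(math.exp(-2.0 * tau))
    signal = []
    for j in range(n):                   # O(N) receiver gates ...
        s = extinction[j] * trans[j]     # single-scattering return
        for i in range(j):               # ... each summing O(N) earlier gates
            s += 0.5 * extinction[i] * extinction[j] * trans[j] * dr
        signal.append(s)
    return signal

cloud = [0.0] * 3 + [0.1] * 5            # clear air, then a uniform cloud
sig = toy_multiscatter_return(cloud)
```

An explicit mth-order model would instead loop over ordered m-tuples of gates, which is why truncation at low order becomes unavoidable there.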
Abstract:
The problem of modeling solar energetic particle (SEP) events is important to both space weather research and forecasting, and yet it has seen relatively little progress. Most important SEP events are associated with coronal mass ejections (CMEs) that drive coronal and interplanetary shocks. These shocks can continuously produce accelerated particles from the ambient medium to well beyond 1 AU. This paper describes an effort to model real SEP events using a Center for Integrated Space Weather Modeling (CISM) MHD solar wind simulation including a cone model of CMEs to initiate the related shocks. In addition to providing observation-inspired shock geometry and characteristics, this MHD simulation describes the time-dependent observer field line connections to the shock source. As a first approximation, we assume a shock jump-parameterized source strength and spectrum, and that scatter-free transport occurs outside of the shock source, thus emphasizing the role the shock evolution plays in determining the modeled SEP event profile. Three halo CME events, on May 12, 1997, November 4, 1997 and December 13, 2006, are used to test the modeling approach. While challenges arise in the identification and characterization of the shocks in the MHD model results, this approach illustrates the importance to SEP event modeling of globally simulating the underlying heliospheric event. The results also suggest the potential utility of such a model for forecasting and for the interpretation of separated multipoint measurements such as those expected from the STEREO mission.
Abstract:
The objective of this work was to construct a dynamic model of hepatic amino acid metabolism in the lactating dairy cow that could be parameterized using net flow data from in vivo experiments. The model considers 22 amino acids, ammonia, urea, and 13 energetic metabolites, and was parameterized using a steady-state balance model and two in vivo net flow experiments conducted with mid-lactation dairy cows. Extracellular flows were derived directly from the observed data. An optimization routine was used to derive nine intracellular flows. The resulting dynamic model was found to be stable across a range of inputs, suggesting that it can be perturbed and applied to other physiological states. Although nitrogen was generally in balance, leucine was in slight deficit compared to predicted needs for export protein synthesis, suggesting that an alternative source of leucine (e.g. peptides) was utilized. Simulations of varying glucagon concentrations indicated that an additional 5 mol/d of glucose could be synthesized at the reference substrate concentrations and blood flows. The increased glucose production was supported by increased removal from blood of lactate, glutamate, aspartate, alanine, asparagine, and glutamine. As glucose output increased, ketone body and acetate release increased while CO2 release declined. The pattern of amino acids appearing in hepatic vein blood was affected by changes in amino acid concentration in portal vein blood, portal blood flow rate and glucagon concentration, with methionine and phenylalanine being the most affected of the essential amino acids. Experimental evidence is insufficient to determine whether essential amino acids are affected by varying gluconeogenic demands. (C) 2004 Published by Elsevier Ltd.