709 results for Scaled semivariogram
Abstract:
The authors present a systolic design for a simple GA mechanism that provides high throughput and unidirectional pipelining by exploiting the inherent parallelism in the genetic operators. The design computes in O(N+G) time steps using O(N²) cells, where N is the population size and G is the chromosome length. The area of the device is independent of the chromosome length and so can be easily scaled by replicating the arrays or by employing fine-grain migration. The array is generic in the sense that it does not rely on the fitness function and can be used as an accelerator for any GA application using uniform crossover between pairs of chromosomes. The design can also be used in hybrid systems as an add-on to complement existing designs and methods for fitness function acceleration and island-style population management.
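Since the array is generic over fitness functions but specific to uniform crossover, it may help to recall what that operator does. A minimal software sketch (illustrative names and parameters, not the systolic implementation):

```python
import random

def uniform_crossover(parent_a, parent_b, swap_prob=0.5, rng=None):
    """Uniform crossover: each gene position is exchanged between the two
    parents independently with probability swap_prob."""
    rng = rng or random.Random()
    child_a, child_b = list(parent_a), list(parent_b)
    for i in range(len(child_a)):
        if rng.random() < swap_prob:
            child_a[i], child_b[i] = child_b[i], child_a[i]
    return child_a, child_b

# Example: cross two 8-gene chromosomes
offspring = uniform_crossover([0] * 8, [1] * 8, rng=random.Random(42))
```

The per-gene independence of the operator is presumably what lets chromosome pairs stream through the array in a unidirectional pipeline.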
Abstract:
Chemical and meteorological parameters measured on board the Facility for Airborne Atmospheric Measurements (FAAM) BAe 146 Atmospheric Research Aircraft during the African Monsoon Multidisciplinary Analysis (AMMA) campaign are presented to show the impact of NOx emissions from recently wetted soils in West Africa. NO emissions from soils have previously been observed in many geographical areas with different types of soil/vegetation cover during small-scale studies, and have been inferred at large scales from satellite measurements of NOx. This study is the first dedicated to showing the emissions of NOx at an intermediate scale between local surface sites and continental satellite measurements. The measurements reveal pronounced mesoscale variations in NOx concentrations closely linked to spatial patterns of antecedent rainfall. Fluxes required to maintain the NOx concentrations observed by the BAe-146 in a number of case studies, and for a range of assumed OH concentrations (1×10⁶ to 1×10⁷ molecules cm⁻³), are calculated to be in the range 8.4 to 36.1 ng N m⁻² s⁻¹. These values are comparable to the range of fluxes from 0.5 to 28 ng N m⁻² s⁻¹ reported from small-scale field studies in a variety of non-nutrient-rich tropical and subtropical locations in the review of Davidson and Kingerlee (1997). The fluxes calculated in the present study have been scaled up to cover the area of the Sahel bounded by 10° to 20° N and 10° E to 20° W, giving an estimated emission of 0.03 to 0.30 Tg N from this area for July and August 2006. The observed chemical data also suggest that the NOx emitted from soils is taking part in ozone formation, as ozone concentrations exhibit similar fine-scale structure to the NOx, with enhancements over the wet soils. Such variability cannot be explained on the basis of transport from other areas. Delon et al. (2008) is a companion paper to this one which models the impact of soil NOx emissions on the NOx and ozone concentrations over West Africa during AMMA. It employs an artificial neural network to define the emissions of NOx from soils, integrated into a coupled chemistry-dynamics model. The results are compared to the observed data presented in this paper. Here we compare fluxes deduced from the observed data with the model-derived values from Delon et al. (2008).
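The scaling step described above is, at heart, a unit conversion from a per-area flux to a regional total. A minimal sketch with purely illustrative numbers (the paper's estimate also depends on which soils are wet, so a hypothetical `emitting_fraction` placeholder is included):

```python
def regional_emission_TgN(flux_ng_m2_s, area_m2, seconds, emitting_fraction=1.0):
    """Scale a per-area soil NO flux (ng N m^-2 s^-1) to a regional total
    in Tg N. Since 1 ng = 1e-12 kg and 1 Tg = 1e9 kg, the combined
    conversion factor is 1e-21. emitting_fraction stands in for the
    fraction of the region (e.g. recently wetted soil) actually emitting."""
    return flux_ng_m2_s * area_m2 * seconds * emitting_fraction * 1e-21

# Illustrative numbers only (not the paper's area or wet-soil fraction):
total_TgN = regional_emission_TgN(10.0, 1e12, 1e6)
```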
Abstract:
We advocate the use of systolic design techniques to create custom hardware for Custom Computing Machines. We have developed a hardware genetic algorithm based on systolic arrays to illustrate the feasibility of the approach. The architecture is independent of the lengths of chromosomes used and can be scaled in size to accommodate different population sizes. An FPGA prototype design can process 16 million genes per second.
Abstract:
We report the results of variational calculations of the rovibrational energy levels of HCN for J = 0, 1 and 2, where we reproduce all the ca. 100 observed vibrational states for all observed isotopic species, with energies up to 18 000 cm$^{-1}$, to about $\pm $1 cm$^{-1}$, and the corresponding rotational constants to about $\pm $0.001 cm$^{-1}$. We use a Hamiltonian expressed in internal coordinates r$_{1}$, r$_{2}$ and $\theta $, using the exact expression for the kinetic energy operator T obtained by direct transformation from the cartesian representation. The potential energy V is expressed as a polynomial expansion in the Morse coordinates y$_{i}$ for the bond stretches and the interbond angle $\theta $. The basis functions are built as products of appropriately scaled Morse functions in the bond stretches and Legendre or associated Legendre polynomials of cos $\theta $ in the angle bend, and we evaluate matrix elements by Gauss quadrature. The Hamiltonian matrix is factorized using the full rovibrational symmetry, and the basis is contracted to an optimized form; the dimensions of the final Hamiltonian matrix vary from 240 $\times $ 240 to 1000 $\times $ 1000. We believe that our calculation is converged to better than 1 cm$^{-1}$ at 18 000 cm$^{-1}$. Our potential surface is expressed in terms of 31 parameters, about half of which have been refined by least squares to optimize the fit to the experimental data. The advantages and disadvantages and the future potential of calculations of this type are discussed.
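For reference, the Morse coordinates mentioned above take the standard form below; the range parameters $a_i$ and the equilibrium geometry ($r_i^{e}$, $\theta_e$) are generic placeholders, and the exact angular factor in the authors' expansion is not specified in the abstract:

```latex
y_i = 1 - \exp\!\left[-a_i \left(r_i - r_i^{e}\right)\right], \qquad i = 1, 2,
\qquad
V(r_1, r_2, \theta) \approx \sum_{j,k,l} c_{jkl}\, y_1^{\,j}\, y_2^{\,k}\, (\theta - \theta_e)^{l}
```

Unlike a power-series expansion in the bond displacements, the $y_i$ remain bounded as $r_i \to \infty$, which is what makes them suitable for fitting a surface up to high stretching energies.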
Abstract:
A simple theoretical model for the intensification of tropical cyclones and polar lows is developed using a minimal set of physical assumptions. These disturbances are assumed to be balanced systems intensifying through the WISHE (Wind-Induced Surface Heat Exchange) intensification mechanism, driven by surface fluxes of heat and moisture into an atmosphere which is neutral to moist convection. The equation set is linearized about a resting basic state and solved as an initial-value problem. A system is predicted to intensify with an exponential perturbation growth rate scaled by the radial gradient of an efficiency parameter which crudely represents the effects of unsaturated processes. The form of this efficiency parameter is assumed to be defined by initial conditions, dependent on the nature of a pre-existing vortex required to precondition the atmosphere to a state in which the vortex can intensify. Evaluation of the simple model using a primitive-equation, nonlinear numerical model provides support for the prediction of exponential perturbation growth. Good agreement is found between the simple and numerical models for the sensitivities of the measured growth rate to various parameters, including surface roughness, the rate of transfer of heat and moisture from the ocean surface, and the scale for the growing vortex.
Abstract:
High-resolution descriptions of plant distribution have utility for many ecological applications but are especially useful for predictive modeling of gene flow from transgenic crops. Difficulty lies in the extrapolation errors that occur when limited ground survey data are scaled up to the landscape or national level. This problem is epitomized by the wide confidence limits generated in a previous attempt to describe the national abundance of riverside Brassica rapa (a wild relative of cultivated rapeseed) across the United Kingdom. Here, we assess the value of airborne remote sensing to locate B. rapa over large areas and so reduce the need for extrapolation. We describe results from flights over the river Nene in England acquired using Airborne Thematic Mapper (ATM) and Compact Airborne Spectrographic Imager (CASI) imagery, together with ground truth data. It proved possible to detect 97% of flowering B. rapa on the basis of spectral profiles. This included all stands of plants that occupied >2 m² (>5 plants), which were detected using single-pixel classification. It also included very small populations (<5 flowering plants, 1–2 m²) that generated mixed pixels, which were detected using spectral unmixing. The high detection accuracy for flowering B. rapa was coupled with a rather large false-positive rate (43%). The latter could be reduced by using the image detections to target fieldwork to confirm species identity, or by acquiring additional remote sensing data such as laser altimetry or multitemporal imagery.
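Linear spectral unmixing of the kind used for the mixed pixels can be sketched as a least-squares problem: a pixel's spectrum is modelled as a linear combination of endmember spectra, and the recovered coefficients estimate sub-pixel cover fractions. The endmember spectra below are invented for illustration, not ATM/CASI values:

```python
import numpy as np

# Hypothetical endmember spectra over four wavebands (columns: flowering
# B. rapa, background vegetation, water); real work would use measured
# ATM/CASI endmembers.
E = np.array([[0.90, 0.20, 0.05],
              [0.70, 0.30, 0.05],
              [0.20, 0.60, 0.10],
              [0.10, 0.50, 0.05]])

true_fractions = np.array([0.3, 0.6, 0.1])
pixel = E @ true_fractions                 # simulated mixed-pixel spectrum

# Least-squares estimate of sub-pixel cover fractions
fractions, *_ = np.linalg.lstsq(E, pixel, rcond=None)
```

Operational unmixing usually also constrains the fractions to be non-negative and to sum to one; plain least squares is the simplest version of the idea.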
Abstract:
Stephens and Donnelly have introduced a simple yet powerful importance sampling scheme for computing the likelihood in population genetic models. Fundamental to the method is an approximation to the conditional probability of the allelic type of an additional gene, given those currently in the sample. As noted by Li and Stephens, the product of these conditional probabilities for a sequence of draws that gives the frequency of allelic types in a sample is an approximation to the likelihood, and can be used directly in inference. The aim of this note is to demonstrate the high level of accuracy of the "product of approximate conditionals" (PAC) likelihood when used with microsatellite data. Results obtained on simulated microsatellite data show that this strategy leads to a negligible bias over a wide range of the scaled mutation parameter theta. Furthermore, both the sampling variance of the likelihood estimates and the computation time are lower than with importance sampling over the whole range of theta. It follows that this approach represents an efficient substitute for IS algorithms in computer-intensive (e.g. MCMC) inference methods in population genetics.
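The PAC construction multiplies approximate conditionals along one ordering of the sample. The sketch below uses the simple infinite-alleles (Ewens) conditional rather than Stephens and Donnelly's microsatellite-aware proposal, purely to show the structure of the product:

```python
def pac_likelihood(sample, theta):
    """Product of approximate conditionals (PAC) along one ordering of the
    sample. The conditional used here is the infinite-alleles (Ewens) one:
    the (n+1)-th gene matches an existing allele a with probability
    n_a / (n + theta), or is a novel type with probability theta / (n + theta)."""
    counts = {}
    likelihood = 1.0
    for n, allele in enumerate(sample):
        if allele in counts:
            likelihood *= counts[allele] / (n + theta)
        else:
            likelihood *= theta / (n + theta)
        counts[allele] = counts.get(allele, 0) + 1
    return likelihood
```

With this particular conditional the product is the same for every ordering of the sample; for the approximate conditionals used in practice it is not, which is why Li and Stephens average over random orderings.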
Abstract:
Background: We report an analysis of a protein network of functionally linked proteins, identified from a phylogenetic statistical analysis of complete eukaryotic genomes. Phylogenetic methods identify pairs of proteins that co-evolve on a phylogenetic tree, and have been shown to have a high probability of correctly identifying known functional links. Results: The eukaryotic correlated evolution network we derive displays the familiar power law scaling of connectivity. We introduce the use of explicit phylogenetic methods to reconstruct the ancestral presence or absence of proteins at the interior nodes of a phylogeny of eukaryote species. We find that the connectivity distribution of proteins at the point they arise on the tree and join the network follows a power law, as does the connectivity distribution of proteins at the time they are lost from the network. Proteins resident in the network acquire connections over time, but we find no evidence that 'preferential attachment' - the phenomenon of newly acquired connections in the network being more likely to be made to proteins with large numbers of connections - influences the network structure. We derive a 'variable rate of attachment' model in which proteins vary in their propensity to form network interactions independently of how many connections they have or of the total number of connections in the network, and show how this model can produce apparent power-law scaling without preferential attachment. Conclusion: A few simple rules can explain the topological structure and evolutionary changes to protein-interaction networks: most change is concentrated in satellite proteins of low connectivity and small phenotypic effect, and proteins differ in their propensity to form attachments. 
Given these rules of assembly, power-law scaled networks naturally emerge from simple principles of selection, yielding protein interaction networks that retain a high degree of robustness on short time scales and evolvability on longer evolutionary time scales.
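A toy version of the 'variable rate of attachment' idea can be simulated directly: each joining node draws an intrinsic, heavy-tailed propensity, and every new edge picks its endpoint in proportion to propensity, never to current degree. The distribution and parameters below are illustrative assumptions, not the authors' fitted model:

```python
import random

def grow_network(n_nodes, edges_per_step, seed=0):
    """Grow a network with 'variable rate of attachment': edge targets are
    chosen in proportion to each node's intrinsic propensity, independently
    of how many connections it already has (no preferential attachment)."""
    rng = random.Random(seed)
    propensity, degree = [], []
    for new in range(n_nodes):
        propensity.append(rng.paretovariate(1.5))   # heavy-tailed, illustrative
        degree.append(0)
        if new == 0:
            continue
        total = sum(propensity[:new])               # existing nodes only
        for _ in range(edges_per_step):
            r = rng.uniform(0.0, total)
            acc, target = 0.0, 0
            for j in range(new):
                acc += propensity[j]
                if acc >= r:
                    target = j
                    break
            degree[target] += 1
            degree[new] += 1
    return degree

degrees = grow_network(300, 2, seed=1)
```

Because propensities are heavy-tailed, a few nodes accumulate many connections even though degree itself never enters the attachment rule, which is the sense in which apparent power-law scaling can arise without preferential attachment.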
Abstract:
In the forecasting of binary events, verification measures that are “equitable” were defined by Gandin and Murphy to satisfy two requirements: 1) they award all random forecasting systems, including those that always issue the same forecast, the same expected score (typically zero), and 2) they are expressible as the linear weighted sum of the elements of the contingency table, where the weights are independent of the entries in the table, apart from the base rate. The authors demonstrate that the widely used “equitable threat score” (ETS), as well as numerous others, satisfies neither of these requirements and only satisfies the first requirement in the limit of an infinite sample size. Such measures are referred to as “asymptotically equitable.” In the case of ETS, the expected score of a random forecasting system is always positive and only falls below 0.01 when the number of samples is greater than around 30. Two other asymptotically equitable measures are the odds ratio skill score and the symmetric extreme dependency score, which are more strongly inequitable than ETS, particularly for rare events; for example, when the base rate is 2% and the sample size is 1000, random but unbiased forecasting systems yield an expected score of around −0.5, reducing in magnitude to −0.01 or smaller only for sample sizes exceeding 25 000. This presents a problem since these nonlinear measures have other desirable properties, in particular being reliable indicators of skill for rare events (provided that the sample size is large enough). A potential way to reconcile these properties with equitability is to recognize that Gandin and Murphy’s two requirements are independent, and the second can be safely discarded without losing the key advantages of equitability that are embodied in the first. 
This enables inequitable and asymptotically equitable measures to be scaled to make them equitable, while retaining their nonlinearity and other properties such as being reliable indicators of skill for rare events. It also opens up the possibility of designing new equitable verification measures.
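For reference, the ETS discussed above is computed from the 2×2 contingency table as follows (a sketch; the variable names are ours). The chance-corrected hit count makes the score a nonlinear function of the table entries, which is why it can only be asymptotically equitable:

```python
def equitable_threat_score(a, b, c, d):
    """ETS (Gilbert skill score) from a 2x2 contingency table:
    a = hits, b = false alarms, c = misses, d = correct negatives.
    a_random is the number of hits expected by chance given the marginals."""
    n = a + b + c + d
    a_random = (a + b) * (a + c) / n
    return (a - a_random) / (a + b + c - a_random)

score = equitable_threat_score(50, 10, 20, 20)
```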
Abstract:
Under multipath conditions, video intermediate frequency (VIF) detectors may generate a local oscillator phase error and consequently produce a dispersed detected signal due to the presence of additional IF carriers. A new detector is presented which, by processing the detected phase quadrature information, derives the correct phase for synchronous detection in the presence of multipath effects. This minimises dispersion and produces a detected video signal with the linear addition of amplitude-scaled ghosts.
Abstract:
The intensity and distribution of daily precipitation is predicted to change under scenarios of increased greenhouse gases (GHGs). In this paper, we analyse the ability of HadCM2, a general circulation model (GCM), and a high-resolution regional climate model (RCM), both developed at the Met Office's Hadley Centre, to simulate extreme daily precipitation by reference to observations. A detailed analysis of daily precipitation is made at two UK grid boxes, where probabilities of reaching daily thresholds in the GCM and RCM are compared with observations. We find that the RCM generally overpredicts probabilities of extreme daily precipitation but that, when the GCM and RCM simulated values are scaled to have the same mean as the observations, the RCM captures the upper-tail distribution more realistically. To compare regional changes in daily precipitation in the GHG-forced period 2080-2100 in the GCM and the RCM, we develop two methods. The first considers the fractional changes in probability of local daily precipitation reaching or exceeding a fixed 15 mm threshold in the anomaly climate compared with the control. The second method uses the upper one-percentile of the control at each point as the threshold. Agreement between the models is better in both seasons with the latter method, which we suggest may be more useful when considering larger scale spatial changes. On average, the probability of precipitation exceeding the 1% threshold increases by a factor of 2.5 (GCM and RCM) in winter and by 1.7 (GCM) or 1.3 (RCM) in summer.
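The two threshold methods described above can be sketched as follows; the function and argument names are ours, and the 15 mm and upper-one-percentile choices follow the text:

```python
import numpy as np

def exceedance_change(control, scenario, fixed_threshold=15.0, pct=99.0):
    """Two ways of quantifying a change in extreme daily precipitation:
    (1) the fractional change in the probability of exceeding a fixed
        threshold (here 15 mm);
    (2) the same ratio, with the threshold instead set to the control
        run's upper percentile at this grid point."""
    ratio_fixed = ((scenario >= fixed_threshold).mean()
                   / max((control >= fixed_threshold).mean(), 1e-12))
    thr = np.percentile(control, pct)
    ratio_pct = ((scenario >= thr).mean()
                 / max((control >= thr).mean(), 1e-12))
    return ratio_fixed, ratio_pct

# Synthetic illustration: a uniform +10 mm shift in daily totals
control = np.arange(100, dtype=float)
ratio_fixed, ratio_pct = exceedance_change(control, control + 10.0)
```

The percentile-based threshold adapts to the local climate, which is one reason it can give more consistent results across regions than a single fixed value.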
Abstract:
Urban boundary layers (UBLs) can be highly complex due to the heterogeneous roughness and heating of the surface, particularly at night. Due to a general lack of observations, it is not clear whether canonical models of boundary layer mixing are appropriate in modelling air quality in urban areas. This paper reports Doppler lidar observations of turbulence profiles in the centre of London, UK, as part of the second REPARTEE campaign in autumn 2007. Lidar-measured standard deviation of vertical velocity averaged over 30 min intervals generally compared well with in situ sonic anemometer measurements at 190 m on the BT telecommunications Tower. During calm, nocturnal periods, the lidar underestimated turbulent mixing due mainly to limited sampling rate. Mixing height derived from the turbulence, and aerosol layer height from the backscatter profiles, showed similar diurnal cycles ranging from c. 300 to 800 m, increasing to c. 200 to 850 m under clear skies. The aerosol layer height was sometimes significantly different to the mixing height, particularly at night under clear skies. For convective and neutral cases, the scaled turbulence profiles resembled canonical results; this was less clear for the stable case. Lidar observations clearly showed enhanced mixing beneath stratocumulus clouds reaching down on occasion to approximately half daytime boundary layer depth. On one occasion the nocturnal turbulent structure was consistent with a nocturnal jet, suggesting a stable layer. Given the general agreement between observations and canonical turbulence profiles, mixing timescales were calculated for passive scalars released at street level to reach the BT Tower using existing models of turbulent mixing. It was estimated to take c. 10 min to diffuse up to 190 m, rising to between 20 and 50 min at night, depending on stability. 
Determination of mixing timescales is important when comparing to physico-chemical processes acting on pollutant species measured simultaneously at both the ground and the BT Tower during the campaign. From the three-week autumnal dataset there is evidence for occasional stable layers in central London, effectively decoupling surface emissions from air aloft.
Abstract:
Road transport and shipping are copious sources of aerosols, which exert a significant radiative forcing compared to, for example, the CO2 emitted by these sectors. An advanced atmospheric general circulation model, coupled to a mixed-layer ocean, is used to calculate the climate response to the direct radiative forcing from such aerosols. The cases considered include imposed distributions of black carbon and sulphate aerosols from road transport, and sulphate aerosols from shipping; these are compared to the climate response due to CO2 increases. The difficulties in calculating the climate response due to small forcings are discussed, as the actual forcings have to be scaled by large amounts to enable a climate response to be easily detected. Despite the much greater geographical inhomogeneity in the sulphate forcing, the patterns of zonal and annual-mean surface temperature response (although opposite in sign) closely resemble that resulting from homogeneous changes in CO2. The surface temperature response to black carbon aerosols from road transport is shown to be notably non-linear in the scaling applied, probably due to the semi-direct response of clouds to these aerosols. For the aerosol forcings considered here, the most widespread method of calculating radiative forcing significantly overestimates their effect, relative to CO2, compared to surface temperature changes calculated using the climate model.
Abstract:
We investigate the spatial characteristics of urban-like canopy flow by applying particle image velocimetry (PIV) to atmospheric turbulence. The study site was a Comprehensive Outdoor Scale MOdel (COSMO) experiment for urban climate in Japan. The PIV system captured the two-dimensional flow field within the canopy layer continuously for an hour with a sampling frequency of 30 Hz, thereby providing reliable outdoor turbulence statistics. PIV measurements in a wind-tunnel facility using similar roughness geometry, but with a lower sampling frequency of 4 Hz, were also performed for comparison. The turbulent momentum flux from COSMO and the wind tunnel showed similar values and distributions when scaled using friction velocity. Some differing characteristics between the outdoor and indoor flow fields were mainly caused by the larger fluctuations in wind direction in the atmospheric turbulence. The focus of the analysis is on a variety of instantaneous turbulent flow structures. One remarkable flow structure is termed 'flushing', that is, a large-scale upward motion prevailing across the whole vertical cross-section of a building gap. This is observed intermittently, whereby tracer particles are flushed vertically out of the canopy layer. Flushing phenomena are also observed in the wind tunnel, where there is neither thermal stratification nor outer-layer turbulence. It is suggested that flushing phenomena are correlated with the passing of large-scale low-momentum regions above the canopy.
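Scaling turbulence statistics by friction velocity, as above, uses the standard definition u* = sqrt(−⟨u′w′⟩) computed from the measured momentum flux. A minimal sketch with synthetic time series:

```python
import numpy as np

def friction_velocity(u, w):
    """u* = sqrt(-<u'w'>) from streamwise (u) and vertical (w) velocity
    time series; primes denote departures from the record mean."""
    up, wp = u - u.mean(), w - w.mean()
    return np.sqrt(max(-(up * wp).mean(), 0.0))

# Statistics such as sigma_w are then normalised by u* before comparison
u = np.array([1.0, 2.0, 1.0, 2.0])
w = np.array([2.0, 1.0, 2.0, 1.0])
ustar = friction_velocity(u, w)
```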
Abstract:
Property portfolio diversification takes many forms, most of which can be associated with asset size. In other words, larger property portfolios are assumed to have greater diversification potential than small portfolios. In addition, since greater diversification is generally associated with lower risk, it is assumed that larger property portfolios will also have reduced return variability compared with smaller portfolios. If large property portfolios can simply be regarded as scaled-up, better-diversified versions of small property portfolios, then the greater a portfolio's asset size, the lower its risk. This suggests a negative relationship between asset size and risk. However, if large property portfolios are not simply scaled-up versions of small portfolios, the relationship between asset size and risk may be unclear. For instance, if large portfolios hold riskier assets or pursue more volatile investment strategies, a positive relationship between asset size and risk could be observed, even if large property portfolios are more diversified. This paper tests the empirical relationship between property portfolio size, diversification and risk in institutional portfolios in the UK during the period from 1989 to 1999, to determine which of these two characterisations is more appropriate.
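The size–risk question above can be posed as a simple correlation between (log) portfolio size and return volatility: a negative coefficient matches the 'scaled-up, better-diversified' characterisation, a positive one the riskier-strategies alternative. A minimal sketch with invented data (not the UK institutional dataset):

```python
import numpy as np

def size_risk_correlation(sizes, returns_by_portfolio):
    """Correlation between log asset size and return volatility across
    portfolios: negative values are consistent with larger portfolios
    being scaled-up, better-diversified versions of small ones."""
    risks = np.array([np.std(r) for r in returns_by_portfolio])
    return np.corrcoef(np.log(np.asarray(sizes, dtype=float)), risks)[0, 1]

# Invented data: three portfolios of increasing size, decreasing volatility
corr = size_risk_correlation([1.0, 10.0, 100.0],
                             [[0, 4, 0, 4], [0, 2, 0, 2], [0, 1, 0, 1]])
```

An empirical test like the paper's would of course control for sector mix, holding period and strategy rather than relying on a raw correlation.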