78 results for data types and operators


Relevance: 100.00%

Publisher:

Abstract:

In this paper we develop and apply methods for the spectral analysis of non-selfadjoint tridiagonal infinite and finite random matrices, and for the spectral analysis of analogous deterministic matrices which are pseudo-ergodic in the sense of E. B. Davies (Commun. Math. Phys. 216 (2001), 687–704). As a major application to illustrate our methods we focus on the “hopping sign model” introduced by J. Feinberg and A. Zee (Phys. Rev. E 59 (1999), 6433–6443), in which the main objects of study are random tridiagonal matrices which have zeros on the main diagonal and random ±1’s as the other entries. We explore the relationship between spectral sets in the finite and infinite matrix cases, and between the semi-infinite and bi-infinite matrix cases, for example showing that the numerical range and p-norm ε-pseudospectra (ε > 0, p ∈ [1, ∞]) of the random finite matrices converge almost surely to their infinite matrix counterparts, and that the finite matrix spectra are contained in the infinite matrix spectrum Σ. We also propose a sequence of inclusion sets for Σ which we show is convergent to Σ, with the nth element of the sequence computable by calculating smallest singular values of (large numbers of) n×n matrices. We propose similar convergent approximations for the 2-norm ε-pseudospectra of the infinite random matrices, these approximations sandwiching the infinite matrix pseudospectra from above and below.
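
As a minimal illustration of the computational idea, and not the paper's inclusion-set construction, the sketch below uses the standard characterisation of the 2-norm ε-pseudospectrum: a point z belongs to it exactly when the smallest singular value of A − zI is below ε. Matrix size, grid bounds and ε are arbitrary choices for illustration.

import numpy as np

rng = np.random.default_rng(0)

def hopping_sign_matrix(n, rng):
    # n x n tridiagonal matrix: zero main diagonal, random +/-1 off-diagonal entries
    upper = rng.choice([-1.0, 1.0], size=n - 1)
    lower = rng.choice([-1.0, 1.0], size=n - 1)
    return np.diag(upper, 1) + np.diag(lower, -1)

def smallest_singular_value(A, z):
    # sigma_min(A - z I); z lies in the 2-norm eps-pseudospectrum exactly when this is < eps
    return np.linalg.svd(A - z * np.eye(A.shape[0]), compute_uv=False)[-1]

n = 60
A = hopping_sign_matrix(n, rng)
axis = np.linspace(-2.5, 2.5, 41)
grid = [x + 1j * y for x in axis for y in axis]
smin = np.array([smallest_singular_value(A, z) for z in grid])

eps = 0.1
print("grid points inside the 0.1-pseudospectrum:", int(np.sum(smin < eps)))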

Relevance: 100.00%

Publisher:

Abstract:

Purpose: To investigate the relationship between research data management (RDM) and data sharing in the formulation of RDM policies and development of practices in higher education institutions (HEIs). Design/methodology/approach: Two strands of work were undertaken sequentially: firstly, content analysis of 37 RDM policies from UK HEIs; secondly, two detailed case studies of institutions with different approaches to RDM based on semi-structured interviews with staff involved in the development of RDM policy and services. The data are interpreted using insights from Actor Network Theory. Findings: RDM policy formation and service development has created a complex set of networks within and beyond institutions involving different professional groups with widely varying priorities shaping activities. Data sharing is considered an important activity in the policies and services of HEIs studied, but its prominence can in most cases be attributed to the positions adopted by large research funders. Research limitations/implications: The case studies, as research based on qualitative data, cannot be assumed to be universally applicable but do illustrate a variety of issues and challenges experienced more generally, particularly in the UK. Practical implications: The research may help to inform development of policy and practice in RDM in HEIs and funder organisations. Originality/value: This paper makes an early contribution to the RDM literature on the specific topic of the relationship between RDM policy and services, and openness – a topic which to date has received limited attention.

Relevance: 100.00%

Publisher:

Abstract:

The weak-constraint inverse for nonlinear dynamical models is discussed and derived in terms of a probabilistic formulation. The well-known result that for Gaussian error statistics the minimum of the weak-constraint inverse is equal to the maximum-likelihood estimate is rederived. Then several methods based on ensemble statistics that can be used to find the smoother (as opposed to the filter) solution are introduced and compared to traditional methods. A strong point of the new methods is that they avoid the integration of adjoint equations, which is a complex task for real oceanographic or atmospheric applications. They also avoid iterative searches in a Hilbert space, and error estimates can be obtained without much additional computational effort. The feasibility of the new methods is illustrated in a two-layer quasigeostrophic model.
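
As a brief illustration of the rederived result (the notation below is generic and assumed, not taken from the paper): with Gaussian initial, model and observation errors, the weak-constraint penalty function is, up to a constant, minus twice the log of the posterior density, so its minimiser coincides with the maximum-likelihood estimate.

J(\mathbf{x}_{0:K}) =
  (\mathbf{x}_0-\mathbf{x}_b)^{\mathsf T}\mathbf{B}^{-1}(\mathbf{x}_0-\mathbf{x}_b)
  + \sum_{k=1}^{K}\big(\mathbf{x}_k-\mathcal{M}_k(\mathbf{x}_{k-1})\big)^{\mathsf T}\mathbf{Q}_k^{-1}\big(\mathbf{x}_k-\mathcal{M}_k(\mathbf{x}_{k-1})\big)
  + \sum_{k=1}^{K}\big(\mathbf{y}_k-\mathcal{H}_k(\mathbf{x}_k)\big)^{\mathsf T}\mathbf{R}_k^{-1}\big(\mathbf{y}_k-\mathcal{H}_k(\mathbf{x}_k)\big),
\qquad
p(\mathbf{x}_{0:K}\mid\mathbf{y}_{1:K}) \propto \exp\!\big(-\tfrac12 J(\mathbf{x}_{0:K})\big).

The smoother solution estimates every state x_k using all observations k = 1, ..., K, which is what the ensemble-based methods approximate without integrating adjoint equations.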

Relevance: 100.00%

Publisher:

Abstract:

This paper discusses the requirements on the numerical precision for a practical Multiband Ultra-Wideband (UWB) consumer electronic solution. To this end, we first present the possibilities that UWB has to offer to the consumer electronics market and the possible range of devices. We then show the performance of a model of the UWB baseband system implemented using floating point precision. Then, by simulation, we find the minimal numerical precision required to maintain floating-point performance for each of the specific data types and signals present in the UWB baseband. Finally, we present a full description of the numerical requirements for both the transmit and receive components of the UWB baseband. The numerical precision results obtained in this paper can then be used by baseband designers to implement cost-effective UWB systems using System-on-Chip (SoC), FPGA and ASIC technology solutions biased toward the competitive consumer electronics market.
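
As a toy illustration of the precision search described above (the signal, word lengths and SNR threshold here are assumed for illustration and are not the paper's UWB baseband model), one can quantise a floating-point reference signal at increasing fixed-point word lengths until the quantisation noise no longer degrades performance:

import numpy as np

def quantize(x, frac_bits):
    # Round to a signed fixed-point grid with the given number of fractional bits.
    scale = 2.0 ** frac_bits
    return np.round(x * scale) / scale

rng = np.random.default_rng(1)
reference = rng.standard_normal(10_000)   # stand-in for a floating-point baseband signal
target_snr_db = 40.0                      # assumed performance threshold

for frac_bits in range(4, 17):
    noise = reference - quantize(reference, frac_bits)
    snr_db = 10 * np.log10(np.sum(reference ** 2) / np.sum(noise ** 2))
    if snr_db >= target_snr_db:
        print(f"{frac_bits} fractional bits keep quantisation SNR at {snr_db:.1f} dB")
        break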

Relevance: 100.00%

Publisher:

Abstract:

The term 'big data' has recently emerged to describe a range of technological and commercial trends enabling the storage and analysis of huge amounts of customer data, such as that generated by social networks and mobile devices. Much of the commercial promise of big data is in the ability to generate valuable insights from collecting new types and volumes of data in ways that were not previously economically viable. At the same time a number of questions have been raised about the implications for individual privacy. This paper explores key perspectives underlying the emergence of big data, and considers both the opportunities and ethical challenges raised for market research.

Relevance: 100.00%

Publisher:

Abstract:

The Environmental Data Abstraction Library provides a modular data management library for bringing new and diverse datatypes together for visualisation within numerous software packages, including the ncWMS viewing service, which already has very wide international uptake. The structure of EDAL is presented along with examples of its use to compare satellite, model and in situ data types within the same visualisation framework. We emphasize the value of this capability for cross calibration of datasets and evaluation of model products against observations, including preparation for data assimilation.
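
As a purely hypothetical sketch of the underlying idea (the class and method names below are illustrative only and are not EDAL's actual API), the value of such an abstraction layer is that gridded model or satellite fields and in situ point data can be queried through one common interface by a viewer such as ncWMS:

from dataclasses import dataclass
from typing import Optional, Protocol

class EnvironmentalDataSource(Protocol):
    def value_at(self, lon: float, lat: float, time: str) -> Optional[float]:
        """Return the variable's value at a point, or None where there is no data."""
        ...

@dataclass
class GriddedField:
    """Stand-in for a gridded model or satellite field (here a single constant value)."""
    value: float
    def value_at(self, lon: float, lat: float, time: str) -> Optional[float]:
        return self.value

@dataclass
class InSituStation:
    """Stand-in for an in situ observation at one fixed location."""
    lon: float
    lat: float
    value: float
    def value_at(self, lon: float, lat: float, time: str) -> Optional[float]:
        near = abs(lon - self.lon) < 0.1 and abs(lat - self.lat) < 0.1
        return self.value if near else None

sources = [GriddedField(14.8), InSituStation(-2.0, 51.5, 15.2)]
for s in sources:
    print(s.value_at(-2.0, 51.5, "2010-06-01T12:00Z"))   # same call for both data types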

Relevance: 100.00%

Publisher:

Abstract:

Virtual globe technology holds many exciting possibilities for environmental science. These easy-to-use, intuitive systems provide means for simultaneously visualizing four-dimensional environmental data from many different sources, enabling the generation of new hypotheses and driving greater understanding of the Earth system. Through the use of simple markup languages, scientists can publish and consume data in interoperable formats without the need for technical assistance. In this paper we give, with examples from our own work, a number of scientific uses for virtual globes, demonstrating their particular advantages. We explain how we have used Web Services to connect virtual globes with diverse data sources and enable more sophisticated usage such as data analysis and collaborative visualization. We also discuss the current limitations of the technology, with particular regard to the visualization of subsurface data and vertical sections.
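
As a minimal example of the "simple markup language" route mentioned above (the coordinates and variable are invented for illustration), a single observation can be published as a KML placemark that any virtual globe will display:

# Write one observation as a KML placemark (lon,lat,alt order in <coordinates>).
placemark = """<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Placemark>
    <name>Sea surface temperature: 14.8 C</name>
    <Point>
      <coordinates>-2.0,51.5,0</coordinates>
    </Point>
  </Placemark>
</kml>
"""
with open("observation.kml", "w") as f:
    f.write(placemark)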

Relevance: 100.00%

Publisher:

Abstract:

Severe wind storms are one of the major natural hazards in the extratropics and inflict substantial economic damages and even casualties. Insured storm-related losses depend on (i) the frequency, nature and dynamics of storms, (ii) the vulnerability of the values at risk, (iii) the geographical distribution of these values, and (iv) the particular conditions of the risk transfer. It is thus of great importance to assess the impact of climate change on future storm losses. To this end, the current study employs—to our knowledge for the first time—a coupled approach, using output from high-resolution regional climate model scenarios for the European sector to drive an operational insurance loss model. An ensemble of coupled climate-damage scenarios is used to provide an estimate of the inherent uncertainties. Output of two state-of-the-art global climate models (HadAM3, ECHAM5) is used for present (1961–1990) and future climates (2071–2100, SRES A2 scenario). These serve as boundary data for two nested regional climate models with sophisticated gust parametrizations (CLM, CHRM). For validation and calibration purposes, an additional simulation is undertaken with the CHRM driven by the ERA40 reanalysis. The operational insurance model (Swiss Re) uses a European-wide damage function, an average vulnerability curve for all risk types, and contains the actual value distribution of a complete European market portfolio. The coupling between climate and damage models is based on daily maxima of 10 m gust winds, and the strategy adopted consists of three main steps: (i) development and application of a pragmatic selection criterion to retrieve significant storm events, (ii) generation of a probabilistic event set using a Monte-Carlo approach in the hazard module of the insurance model, and (iii) calibration of the simulated annual expected losses with a historic loss database. The climate models considered agree regarding an increase in the intensity of extreme storms in a band across central Europe (stretching from southern UK and northern France to Denmark, northern Germany into eastern Europe). This effect increases with event strength, and rare storms show the largest climate change sensitivity, but are also beset with the largest uncertainties. Wind gusts decrease over northern Scandinavia and Southern Europe. Highest intra-ensemble variability is simulated for Ireland, the UK, the Mediterranean, and parts of Eastern Europe. The resulting changes in European-wide losses over the 110-year period are positive for all layers and all model runs considered and amount to 44% (annual expected loss), 23% (10 years loss), 50% (30 years loss), and 104% (100 years loss). There is a disproportionate increase in losses for rare high-impact events. The changes result from increases in both severity and frequency of wind gusts. Considerable geographical variability of the expected losses exists, with Denmark and Germany experiencing the largest loss increases (116% and 114%, respectively). All countries considered except for Ireland (−22%) experience some loss increases. Some ramifications of these results for the socio-economic sector are discussed, and future avenues for research are highlighted. The technique introduced in this study and its application to realistic market portfolios offer exciting prospects for future research on the impact of climate change that is relevant for policy makers, scientists and economists.
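
The sketch below illustrates only step (i) of the coupling strategy in a toy form; the gust statistics, percentile and exceedance thresholds are assumed values for illustration, not the operational selection criterion used in the study:

import numpy as np

rng = np.random.default_rng(2)
n_days, n_cells = 3600, 500                      # toy multi-winter record of daily max 10 m gusts
gusts = rng.gumbel(loc=15.0, scale=4.0, size=(n_days, n_cells))   # m/s, one column per grid cell

local_p98 = np.percentile(gusts, 98, axis=0)     # per-cell climatological gust threshold
exceed_frac = (gusts > local_p98).mean(axis=1)   # fraction of the grid exceeding it each day

storm_days = np.flatnonzero(exceed_frac > 0.25)  # candidate events: widespread exceedance
print(f"{storm_days.size} candidate storm days selected out of {n_days}")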

Relevance: 100.00%

Publisher:

Abstract:

Site-specific management requires accurate knowledge of the spatial variation in a range of soil properties within fields. This involves considerable sampling effort, which is costly. Ancillary data, such as crop yield, elevation and apparent electrical conductivity (ECa) of the soil, can provide insight into the spatial variation of some soil properties. A multivariate classification with spatial constraint imposed by the variogram was used to classify data from two arable crop fields. The yield data comprised 5 years of crop yield, and the ancillary data 3 years of yield data, elevation and ECa. Information on soil chemical and physical properties was provided by intensive surveys of the soil. Multivariate variograms computed from these data were used to constrain sites spatially within classes to increase their contiguity. The constrained classifications resulted in coherent classes, and those based on the ancillary data were similar to those from the soil properties. The ancillary data seemed to identify areas in the field where the soil is reasonably homogeneous. The results of targeted sampling showed that these classes could be used as a basis for management and to guide future sampling of the soil.
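
A simplified stand-in for the classification idea (this is ordinary k-means on standardised ancillary variables with scaled coordinates appended, not the variogram-constrained algorithm used in the paper; all values are synthetic):

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
n = 400
coords = rng.uniform(0, 500, size=(n, 2))        # sample locations, m
ancillary = np.column_stack([
    rng.normal(8.0, 1.5, n),                     # crop yield, t/ha
    rng.normal(60.0, 5.0, n),                    # elevation, m
    rng.normal(30.0, 8.0, n),                    # ECa, mS/m
])

def standardize(x):
    return (x - x.mean(axis=0)) / x.std(axis=0)

spatial_weight = 0.5                             # assumed trade-off: larger -> more contiguous classes
features = np.hstack([standardize(ancillary), spatial_weight * standardize(coords)])
classes = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(features)
print(np.bincount(classes))                      # number of sites per management class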

Relevance: 100.00%

Publisher:

Abstract:

The Holocene vegetation history of the Arabian Peninsula is poorly understood, with few palaeobotanical studies to date. At Awafi, Ras al-Khaimah, UAE, a 3.3 m lake sediment sequence records the vegetation development for the period 8500 cal. yr BP to ~3000 cal. yr BP. δ13C isotope, pollen and phytolith analyses indicate that C3 Pooid grassland with a strong woody element existed during the early Holocene (between 8500 and 6000 cal. yr BP) and became replaced by mixed C3 and C4 grasses with a strong C4 Panicoid tall grass element between 5900 and 5400 cal. yr BP. An intense, arid event occurred at 4100 cal. yr BP when the lake desiccated and was infilled by aeolian sand. From 4100 cal. yr BP the vegetation was dominated by C4 Chloridoid types and Cyperaceae, suggesting an incomplete vegetation cover and aeolian dune reactivation owing to increased regional aridity. These data outline the ecosystem dynamics and carbon cycling in response to palaeomonsoon and north-westerly variability during the Holocene. Copyright (C) 2004 John Wiley & Sons, Ltd.

Relevance: 100.00%

Publisher:

Abstract:

A regional overview of the water quality and ecology of the River Lee catchment is presented. Specifically, data describing the chemical, microbiological and macrobiological water quality and fisheries communities have been analysed, based on a division into river, sewage treatment works, fish-farm, lake and industrial samples. Nutrient enrichment and the highest concentrations of metals and micro-organics were found in the urbanised, lower reaches of the Lee and in the Lee Navigation. Average annual concentrations of metals were generally within environmental quality standards although, on many occasions, concentrations of cadmium, copper, lead, mercury and zinc were in excess of the standards. Various organic substances (used as herbicides, fungicides, insecticides, chlorination by-products and industrial solvents) were widely detected in the Lee system. Concentrations of ten micro-organic substances were observed in excess of their environmental quality standards, though not in terms of annual averages. Sewage treatment works were the principal point source input of nutrients, metals and micro-organic determinands to the catchment. Diffuse nitrogen sources contributed approximately 60% and 27% of the in-stream load in the upper and lower Lee respectively, whereas approximately 60% and 20% of the in-stream phosphorus load was derived from diffuse sources in the upper and lower Lee. For metals, the most significant source was the urban runoff from North London. In reaches less affected by effluent discharges, diffuse runoff from urban and agricultural areas dominated trends. High microbiological content, observed in the River Lee particularly in urbanised reaches, was far in excess of the EC Bathing Water Directive standards. Water quality issues and degraded habitat in the lower reaches of the Lee have led to impoverished aquatic fauna but, within the mid-catchment reaches and upper agricultural tributaries, less nutrient enrichment and channel alteration has permitted more diverse aquatic fauna.

Relevance: 100.00%

Publisher:

Abstract:

Airborne scanning laser altimetry (LiDAR) is an important new data source for river flood modelling. LiDAR can give dense and accurate DTMs of floodplains for use as model bathymetry. Spatial resolutions of 0.5m or less are possible, with a height accuracy of 0.15m. LiDAR gives a Digital Surface Model (DSM), so vegetation removal software (e.g. TERRASCAN) must be used to obtain a DTM. An example used to illustrate the current state of the art will be the LiDAR data provided by the EA, which has been processed by their in-house software to convert the raw data to a ground DTM and separate vegetation height map. Their method distinguishes trees from buildings on the basis of object size. EA data products include the DTM with or without buildings removed, a vegetation height map, a DTM with bridges removed, etc.

Most vegetation removal software ignores short vegetation less than, say, 1m high. We have attempted to extend vegetation height measurement to short vegetation using local height texture. Typically most of a floodplain may be covered in such vegetation. The idea is to assign friction coefficients depending on local vegetation height, so that friction is spatially varying. This obviates the need to calibrate a global floodplain friction coefficient. It’s not clear at present if the method is useful, but it’s worth testing further.

The LiDAR DTM is usually determined by looking for local minima in the raw data, then interpolating between these to form a space-filling height surface. This is a low-pass filtering operation, in which objects of high spatial frequency such as buildings, river embankments and walls may be incorrectly classed as vegetation. The problem is particularly acute in urban areas. A solution may be to apply pattern recognition techniques to LiDAR height data fused with other data types such as LiDAR intensity or multispectral CASI data. We are attempting to use digital map data (Mastermap structured topography data) to help to distinguish buildings from trees, and roads from areas of short vegetation. The problems involved in doing this will be discussed. A related problem of how best to merge historic river cross-section data with a LiDAR DTM will also be considered.

LiDAR data may also be used to help generate a finite element mesh. In rural areas we have decomposed a floodplain mesh according to taller vegetation features such as hedges and trees, so that e.g. hedge elements can be assigned higher friction coefficients than those in adjacent fields. We are attempting to extend this approach to urban areas, so that the mesh is decomposed in the vicinity of buildings, roads, etc. as well as trees and hedges. A dominant points algorithm is used to identify points of high curvature on a building or road, which act as initial nodes in the meshing process. A difficulty is that the resulting mesh may contain a very large number of nodes. However, the mesh generated may be useful to allow a high-resolution FE model to act as a benchmark for a more practical lower-resolution model.

A further problem discussed will be how best to exploit data redundancy due to the high resolution of the LiDAR compared to that of a typical flood model. Problems occur if features have dimensions smaller than the model cell size, e.g. for a 5m-wide embankment within a raster grid model with 15m cell size, the maximum height of the embankment locally could be assigned to each cell covering the embankment. But how could a 5m-wide ditch be represented? Again, this redundancy has been exploited to improve wetting/drying algorithms using the sub-grid-scale LiDAR heights within finite elements at the waterline.
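
The sketch below illustrates the short-vegetation idea in a toy form: local height texture (a moving-window standard deviation of the LiDAR surface) is used as a proxy for vegetation height and mapped to a spatially varying friction coefficient. The mapping and all parameter values are assumptions for illustration, not the calibrated values of the method described above.

import numpy as np
from scipy.ndimage import uniform_filter

def local_std(dsm, window=5):
    # Moving-window standard deviation of the LiDAR surface heights (height texture).
    mean = uniform_filter(dsm, size=window)
    mean_sq = uniform_filter(dsm * dsm, size=window)
    return np.sqrt(np.maximum(mean_sq - mean * mean, 0.0))

def manning_n(veg_height, n_bare=0.03, n_scale=0.05):
    # Assumed linear mapping from vegetation height (m) to Manning's n.
    return n_bare + n_scale * veg_height

rng = np.random.default_rng(4)
ground = np.linspace(10.0, 10.5, 200)[None, :] * np.ones((200, 1))   # gently sloping floodplain
vegetation = rng.gamma(shape=2.0, scale=0.15, size=(200, 200))       # short vegetation, roughly 0-1 m
dsm = ground + vegetation                                            # synthetic LiDAR surface model

veg_height_proxy = local_std(dsm, window=5)
n_grid = manning_n(veg_height_proxy)
print(f"Manning's n range: {n_grid.min():.3f} to {n_grid.max():.3f}")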

Relevance: 100.00%

Publisher:

Abstract:

A new snow-soil-vegetation-atmosphere transfer (Snow-SVAT) scheme, which simulates the accumulation and ablation of the snow cover beneath a forest canopy, is presented. The model was formulated by coupling a canopy optical and thermal radiation model to a physically-based multi-layer snow model. This canopy radiation model is physically-based yet requires few parameters, so can be used when extensive in-situ field measurements are not available. Other forest effects such as the reduction of wind speed, interception of snow on the canopy and the deposition of litter were incorporated within this combined model, SNOWCAN, which was tested with data taken as part of the Boreal Ecosystem-Atmosphere Study (BOREAS) international collaborative experiment. Snow depths beneath four different canopy types and at an open site were simulated. Agreement between observed and simulated snow depths was generally good, with correlation coefficients ranging between r^2=0.94 and r^2=0.98 for all sites where automatic measurements were available. However, the simulated date of total snowpack ablation generally occurred later than the observed date. A comparison between simulated solar radiation and limited measurements of sub-canopy radiation at one site indicates that the model simulates the sub-canopy downwelling solar radiation early in the season to within measurement uncertainty.
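
As a simplified stand-in for the canopy radiation component (this is a plain Beer's-law attenuation with illustrative leaf area index values, not SNOWCAN's physically-based canopy scheme), sub-canopy downwelling solar radiation can be estimated as:

import numpy as np

def subcanopy_solar(above_canopy, lai, extinction=0.5):
    # Downwelling solar flux below a canopy of given leaf area index (Beer's law).
    return above_canopy * np.exp(-extinction * lai)

above = 450.0                                   # W m-2, assumed open-site downwelling solar
for name, lai in [("open site", 0.0), ("sparse canopy", 2.0), ("dense canopy", 4.0)]:
    flux = subcanopy_solar(above, lai)
    print(f"{name:14s} LAI={lai:.1f}  sub-canopy flux = {flux:6.1f} W m-2")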

Relevance: 100.00%

Publisher:

Abstract:

The influence matrix is used in ordinary least-squares applications for monitoring statistical multiple-regression analyses. Concepts related to the influence matrix provide diagnostics on the influence of individual data on the analysis - the analysis change that would occur by leaving one observation out, and the effective information content (degrees of freedom for signal) in any sub-set of the analysed data. In this paper, the corresponding concepts have been derived in the context of linear statistical data assimilation in numerical weather prediction. An approximate method to compute the diagonal elements of the influence matrix (the self-sensitivities) has been developed for a large-dimension variational data assimilation system (the four-dimensional variational system of the European Centre for Medium-Range Weather Forecasts). Results show that, in the boreal spring 2003 operational system, 15% of the global influence is due to the assimilated observations in any one analysis, and the complementary 85% is the influence of the prior (background) information, a short-range forecast containing information from earlier assimilated observations. About 25% of the observational information is currently provided by surface-based observing systems, and 75% by satellite systems. Low-influence data points usually occur in data-rich areas, while high-influence data points are in data-sparse areas or in dynamically active regions. Background-error correlations also play an important role: high correlation diminishes the observation influence and amplifies the importance of the surrounding real and pseudo observations (prior information in observation space). Incorrect specifications of background and observation-error covariance matrices can be identified, interpreted and better understood by the use of influence-matrix diagnostics for the variety of observation types and observed variables used in the data assimilation system. Copyright © 2004 Royal Meteorological Society
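
For the ordinary least-squares case referred to above, the influence matrix is the hat matrix: its diagonal entries are the self-sensitivities and its trace gives the degrees of freedom for signal. A minimal sketch with synthetic data (the dimensions are arbitrary):

import numpy as np

rng = np.random.default_rng(5)
n_obs, n_params = 50, 4
X = rng.standard_normal((n_obs, n_params))   # regression design matrix
H = X @ np.linalg.inv(X.T @ X) @ X.T         # influence (hat) matrix

self_sensitivities = np.diag(H)              # influence of each observation on its own fitted value
dfs = np.trace(H)                            # degrees of freedom for signal (= n_params here)
print(f"mean self-sensitivity = {self_sensitivities.mean():.3f}, DFS = {dfs:.1f}")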

Relevance: 100.00%

Publisher:

Abstract:

Purpose. Accommodation can mask hyperopia and reduce the accuracy of non-cycloplegic refraction. It is, therefore, important to minimize accommodation to obtain a measure of hyperopia as accurate as possible. To characterize the parameters required to measure the maximally hyperopic error using photorefraction, we used different target types and distances to determine which target was most likely to maximally relax accommodation and thus more accurately detect hyperopia in an individual. Methods. A PlusoptiX SO4 infra-red photorefractor was mounted in a remote haploscope which presented the targets. All participants were tested with targets at four fixation distances between 0.3 and 2 m containing all combinations of blur, disparity, and proximity/looming cues. Thirty-eight infants (6 to 44 weeks) were studied longitudinally, and 104 children [4 to 15 years (mean 6.4)] and 85 adults, with a range of refractive errors and binocular vision status, were tested once. Cycloplegic refraction data were available for a sub-set of 59 participants spread across the age range. Results. The maximally hyperopic refraction (MHR) found at any time in the session was most frequently found when fixating the most distant targets and those containing disparity and dynamic proximity/looming cues. Presence or absence of blur was less significant, and targets in which only single cues to depth were present were also less likely to produce MHR. MHR correlated closely with cycloplegic refraction (r = 0.93, mean difference 0.07 D, p = n.s., 95% confidence interval +/-<0.25 D) after correction by a calibration factor. Conclusions. Maximum relaxation of accommodation occurred for binocular targets receding into the distance. Proximal and disparity cues aid relaxation of accommodation to a greater extent than blur, and thus non-cycloplegic refraction targets should incorporate these cues. This is especially important in screening contexts with a brief opportunity to test for significant hyperopia. MHR in our laboratory was found to be a reliable estimation of cycloplegic refraction. (Optom Vis Sci 2009;86:1276-1286)