115 results for in-domain data requirement


Relevance: 100.00%

Abstract:

A basic data requirement of a river flood inundation model is a Digital Terrain Model (DTM) of the reach being studied. The scale at which modeling is required determines the accuracy required of the DTM. For modeling floods in urban areas, a high resolution DTM such as that produced by airborne LiDAR (Light Detection And Ranging) is most useful, and large parts of many developed countries have now been mapped using LiDAR. In remoter areas, it is possible to model flooding on a larger scale using a lower resolution DTM, and in the near future the DTM of choice is likely to be that derived from the TanDEM-X Digital Elevation Model (DEM). A variable-resolution global DTM obtained by combining existing high and low resolution data sets would be useful for modeling flood water dynamics globally, at high resolution wherever possible and at lower resolution over larger rivers in remote areas. A further important data resource used in flood modeling is the flood extent, commonly derived from Synthetic Aperture Radar (SAR) images. Flood extents become more useful if they are intersected with the DTM, when water level observations (WLOs) at the flood boundary can be estimated at various points along the river reach. To illustrate the utility of such a global DTM, two examples of recent research involving WLOs at opposite ends of the spatial scale are discussed. The first requires high resolution spatial data, and involves the assimilation of WLOs from a real sequence of high resolution SAR images into a flood model to update the model state with observations over time, and to estimate river discharge and model parameters, including river bathymetry and friction. The results indicate the feasibility of such an Earth Observation-based flood forecasting system. The second example is at a larger scale, and uses SAR-derived WLOs to improve the lower-resolution TanDEM-X DEM in the area covered by the flood extents. The resulting reduction in random height error is significant.
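
As a concrete illustration of the intersection step described above (estimating water level observations where a SAR-derived flood extent meets the DTM), the sketch below samples DTM heights along a flood-extent boundary. It is a minimal sketch, not the authors' implementation: the file names, the 50 m sampling interval and the assumption of single-part polygons are all illustrative.

```python
# Minimal sketch, assuming hypothetical input files: sample DTM heights along a
# SAR-derived flood-extent boundary to approximate water level observations (WLOs).
import geopandas as gpd
import numpy as np
import rasterio

flood = gpd.read_file("flood_extent.shp")        # hypothetical SAR-derived polygons
with rasterio.open("lidar_dtm.tif") as dtm:      # hypothetical LiDAR DTM
    flood = flood.to_crs(dtm.crs)                # work in the DTM's coordinate system
    wlos = []
    for poly in flood.geometry:                  # assumes single-part polygons
        boundary = poly.exterior
        # sample a point every 50 m along the flood boundary (illustrative spacing)
        for d in np.arange(0.0, boundary.length, 50.0):
            pt = boundary.interpolate(d)
            height = next(dtm.sample([(pt.x, pt.y)]))[0]
            if dtm.nodata is None or height != dtm.nodata:
                wlos.append((pt.x, pt.y, float(height)))

print(f"{len(wlos)} candidate WLOs extracted along the flood boundary")
```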

Relevance: 100.00%

Abstract:

Aims. Protein kinases are potential therapeutic targets for heart failure, but most studies of cardiac protein kinases derive from other systems, an approach that fails to account for specific kinases expressed in the heart and the contractile cardiomyocytes. We aimed to define the cardiomyocyte kinome (i.e. the protein kinases expressed in cardiomyocytes) and identify kinases with altered expression in human failing hearts. Methods and Results. Expression profiling (Affymetrix microarrays) detected >400 protein kinase mRNAs in rat neonatal ventricular myocytes (NVMs) and/or adult ventricular myocytes (AVMs), 32 and 93 of which were significantly upregulated or downregulated (>2-fold), respectively, in AVMs. Data for AGC family members were validated by qPCR. Proteomics analysis identified >180 cardiomyocyte protein kinases, with high relative expression of mitogen-activated protein kinase cascades and other known cardiomyocyte kinases (e.g. CAMKs, cAMP-dependent protein kinase). Other kinases are poorly investigated (e.g. Slk, Stk24, Oxsr1). Expression of Akt1/2/3, BRaf, ERK1/2, Map2k1, Map3k8, Map4k4, MST1/3, p38-MAPK, PKCδ, Pkn2, Ripk1/2, Tnni3k and Zak was confirmed by immunoblotting. Relative to total protein, Map3k8 and Tnni3k were upregulated in AVMs vs NVMs. Microarray data for human hearts demonstrated variation in kinome expression that may influence responses to kinase inhibitor therapies. Furthermore, some kinases were upregulated (e.g. NRK, JAK2, STK38L) or downregulated (e.g. MAP2K1, IRAK1, STK40) in human failing hearts. Conclusions. This characterization of the spectrum of kinases expressed in cardiomyocytes and the heart (cardiomyocyte and cardiac kinomes) identified novel kinases, some of which are differentially expressed in failing human hearts and could serve as potential therapeutic targets.
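
As an illustration of the fold-change criterion mentioned above, the sketch below flags kinases whose mean expression differs by more than 2-fold between NVMs and AVMs. It is a toy example rather than the study's pipeline: the input file and column names are assumptions, and the significance testing used in the study is omitted.

```python
# Toy example, not the study's pipeline: flag kinases with >2-fold expression
# differences between neonatal (NVM) and adult (AVM) ventricular myocytes.
# The file name and column names are assumptions; significance testing is omitted.
import numpy as np
import pandas as pd

expr = pd.read_csv("kinase_expression.csv")      # columns: kinase, nvm_mean, avm_mean
expr["log2_fc"] = np.log2(expr["avm_mean"] / expr["nvm_mean"])

upregulated = expr[expr["log2_fc"] >= 1.0]       # >2-fold higher in AVMs
downregulated = expr[expr["log2_fc"] <= -1.0]    # >2-fold lower in AVMs

print(f"{len(upregulated)} kinases up and {len(downregulated)} down (>2-fold) in AVMs")
```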

Relevance: 100.00%

Abstract:

Widespread commercial use of the internet has significantly increased the volume and scope of data being collected by organisations. ‘Big data’ has emerged as a term to encapsulate both the technical and commercial aspects of this growing data collection activity. To date, much of the discussion of big data has centred upon its transformational potential for innovation and efficiency, yet there has been less reflection on its wider implications beyond commercial value creation. This paper builds upon normal accident theory (NAT) to analyse the broader ethical implications of big data. It argues that the strategies behind big data require organisational systems that leave them vulnerable to normal accidents, that is to say some form of accident or disaster that is both unanticipated and inevitable. Whilst NAT has previously focused on the consequences of physical accidents, this paper suggests a new form of system accident that we label data accidents. These have distinct, less tangible and more complex characteristics and raise significant questions over the role of individual privacy in a ‘data society’. The paper concludes by considering the ways in which the risks of such data accidents might be managed or mitigated.

Relevance: 100.00%

Abstract:

The Arctic is an important region in the study of climate change, but monitoring surface temperatures in this region is challenging, particularly in areas covered by sea ice. Here in situ, satellite and reanalysis data were utilised to investigate whether global warming over recent decades could be better estimated by changing the way the Arctic is treated in calculating global mean temperature. The degree of difference arising from using five different techniques, based on those used in existing temperature anomaly datasets, to estimate Arctic surface air temperature (SAT) anomalies over land and sea ice was investigated using reanalysis data as a testbed. Techniques which interpolated anomalies were found to result in smaller errors than non-interpolating techniques, and kriging techniques provided the smallest errors in anomaly estimates. Similar accuracies were found for anomalies estimated from in situ meteorological station SAT records using a kriging technique. Whether additional data sources, which are not currently utilised in temperature anomaly datasets, would improve estimates of Arctic SAT anomalies was investigated within the reanalysis testbed and using in situ data. For the reanalysis study, the additional input anomalies were reanalysis data sampled at certain supplementary data source locations over Arctic land and sea ice areas. For the in situ data study, the additional input anomalies over sea ice were surface temperature anomalies derived from the Advanced Very High Resolution Radiometer satellite instruments. The use of additional data sources, particularly those located in the Arctic Ocean over sea ice or on islands in sparsely observed regions, can lead to substantial improvements in the accuracy of estimated anomalies. Decreases in root mean square error can be up to 0.2 K for Arctic-average anomalies and more than 1 K for spatially resolved anomalies. Further improvements in accuracy may be accomplished through the use of other data sources.
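
The sketch below gives a toy version of the testbed comparison described above: an interpolating estimate of an area-average anomaly (Gaussian-process regression as a stand-in for kriging) versus a non-interpolating estimate (a plain average of the sampled points), each scored against a known synthetic field. The field, the 40 pseudo-station locations and the kernel settings are assumptions, not the paper's configuration.

```python
# Toy version of the reanalysis testbed comparison: interpolating vs non-interpolating
# estimates of an area-average anomaly, scored against a known synthetic "truth".
# GaussianProcessRegressor stands in for kriging; all settings here are assumptions.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

# "true" anomaly field on a grid (stand-in for the reanalysis testbed)
x, y = np.meshgrid(np.linspace(0, 1, 40), np.linspace(0, 1, 40))
truth = 1.5 * np.sin(3 * x) * np.cos(2 * y)
grid = np.column_stack([x.ravel(), y.ravel()])

# sparse pseudo-stations sampling the true field with observation noise
idx = rng.choice(grid.shape[0], size=40, replace=False)
obs = truth.ravel()[idx] + 0.1 * rng.standard_normal(40)

# non-interpolating estimate: average the available observations directly
simple_mean = obs.mean()

# interpolating estimate: fit a GP and average the interpolated field
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), alpha=0.1 ** 2)
gp.fit(grid[idx], obs)
kriged_mean = gp.predict(grid).mean()

true_mean = truth.mean()
print(f"error of simple mean      : {abs(simple_mean - true_mean):.3f}")
print(f"error of interpolated mean: {abs(kriged_mean - true_mean):.3f}")
```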

Relevance: 100.00%

Abstract:

Geospatial information of many kinds, from topographic maps to scientific data, is increasingly being made available through web mapping services. These allow georeferenced map images to be served from data stores and displayed in websites and geographic information systems, where they can be integrated with other geographic information. The Open Geospatial Consortium’s Web Map Service (WMS) standard has been widely adopted in diverse communities for sharing data in this way. However, current services typically provide little or no information about the quality or accuracy of the data they serve. In this paper we describe the design and implementation of a new “quality-enabled” profile of WMS, which we call “WMS-Q”. This profile describes how information about data quality can be transmitted to the user through WMS. Such information can exist at many levels, from entire datasets to individual measurements, and includes the many different ways in which data uncertainty can be expressed. We also describe proposed extensions to the Symbology Encoding specification, which include provision for visualizing uncertainty in raster data in a number of different ways, including contours, shading and bivariate colour maps. Finally, we describe new open-source implementations of the new specifications, which include both clients and servers.
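
For orientation, the sketch below builds a standard OGC WMS 1.3.0 GetMap request, the kind of request the WMS-Q profile extends. The server URL and layer name are placeholders, and no WMS-Q-specific parameters are shown, since the profile's details are not reproduced here.

```python
# Orientation only: a standard OGC WMS 1.3.0 GetMap request. The endpoint and layer
# are placeholders; WMS-Q-specific parameters are not shown here.
from urllib.parse import urlencode

base_url = "https://example.org/wms"             # hypothetical WMS endpoint
params = {
    "SERVICE": "WMS",
    "VERSION": "1.3.0",
    "REQUEST": "GetMap",
    "LAYERS": "sea_surface_temperature",         # hypothetical layer name
    "STYLES": "",
    "CRS": "CRS:84",                             # lon/lat axis order
    "BBOX": "-180,-90,180,90",
    "WIDTH": "1024",
    "HEIGHT": "512",
    "FORMAT": "image/png",
    "TRANSPARENT": "TRUE",
}
print(f"{base_url}?{urlencode(params)}")
```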

Relevance: 100.00%

Abstract:

A strong correlation between the speed of the eddy-driven jet and the width of the Hadley cell is found to exist in the Southern Hemisphere, both in reanalysis data and in twenty-first-century integrations from the Intergovernmental Panel on Climate Change (IPCC) Fifth Assessment Report multimodel archive. Analysis of the space–time spectra of eddy momentum flux reveals that variations in eddy-driven jet speed are related to changes in the mean phase speed of midlatitude eddies. An increase in eddy phase speeds induces a poleward shift of the critical latitudes and a poleward expansion of the region of subtropical wave breaking. The associated changes in eddy momentum flux convergence are balanced by anomalous meridional winds consistent with a wider Hadley cell. At the same time, faster eddies are also associated with a strengthened poleward eddy momentum flux, sustaining a stronger westerly jet in midlatitudes. The proposed mechanism is consistent with the seasonal dependence of the interannual variability of the Hadley cell width and appears to explain at least part of the projected twenty-first-century trends.
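
The sketch below shows the basic diagnostic underlying this analysis: the zonal-mean eddy momentum flux and its meridional convergence computed from gridded winds. It is not the paper's space-time spectral decomposition, and the wind fields here are synthetic stand-ins.

```python
# Basic diagnostic sketch with synthetic winds: zonal-mean eddy momentum flux [u'v']
# and its meridional convergence, -1/(a cos^2 phi) d([u'v'] cos^2 phi)/dphi.
import numpy as np

a = 6.371e6                                       # Earth radius (m)
lat = np.linspace(-87.5, 87.5, 71)
lon = np.linspace(0.0, 357.5, 144)
phi = np.deg2rad(lat)

rng = np.random.default_rng(1)
u = 20.0 * np.cos(phi)[:, None] + rng.standard_normal((lat.size, lon.size))
v = rng.standard_normal((lat.size, lon.size))

# eddies as departures from the zonal mean
u_eddy = u - u.mean(axis=1, keepdims=True)
v_eddy = v - v.mean(axis=1, keepdims=True)

flux = (u_eddy * v_eddy).mean(axis=1)                                  # [u'v'](phi)
conv = -np.gradient(flux * np.cos(phi) ** 2, phi) / (a * np.cos(phi) ** 2)

print(f"peak eddy momentum flux magnitude: {np.abs(flux).max():.2f} m2 s-2")
```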

Relevance: 100.00%

Abstract:

This study addresses the question of whether using unbalanced updates in ocean-data assimilation schemes for seasonal forecasting systems can result in a relatively poor simulation of zonal currents. An assimilation scheme in which temperature observations are used to update only the density field is compared to a scheme in which updates of the density field and zonal velocities are related by geostrophic balance. This is done for an equatorial linear shallow-water model. It is found that equatorial zonal velocities can be degraded if velocity is not updated in the assimilation procedure. Adding balanced updates to the zonal velocity is shown to be a simple remedy for the shallow-water model. Next, optimal interpolation (OI) schemes with balanced updates of the zonal velocity are implemented in two ocean general circulation models. First tests indicate a beneficial impact on equatorial upper-ocean zonal currents.
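
To illustrate the idea of a balanced update, the sketch below derives a zonal-velocity increment from a density increment via hydrostatic and geostrophic balance. This is the generic off-equatorial relation, not the paper's equatorial shallow-water scheme (which requires special treatment where the Coriolis parameter vanishes); the grid, latitude band and density increment are synthetic assumptions.

```python
# Schematic off-equatorial illustration (not the paper's equatorial scheme): derive a
# balanced zonal-velocity increment from a density increment via hydrostatic and
# geostrophic balance. Grid, latitude band and density increment are synthetic.
import numpy as np

g, rho0, omega, a = 9.81, 1025.0, 7.292e-5, 6.371e6
lat = np.linspace(5.0, 25.0, 41)                  # off-equatorial band (degrees)
z = np.linspace(0.0, -500.0, 51)                  # depth levels (m), surface first
f = 2.0 * omega * np.sin(np.deg2rad(lat))         # Coriolis parameter
y = a * np.deg2rad(lat)                           # meridional distance (m)

rng = np.random.default_rng(2)
drho = 0.02 * rng.standard_normal((z.size, lat.size))   # density increment (kg m-3)

# hydrostatic pressure increment: integrate g * drho downward from the surface
dz = float(np.abs(np.diff(z)).mean())
dp = np.cumsum(g * drho, axis=0) * dz

# geostrophic balance: f * du = -(1/rho0) * d(dp)/dy
du = -np.gradient(dp, y, axis=1) / (rho0 * f)

print(du.shape)                                   # balanced zonal-velocity increment
```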

Relevance: 100.00%

Abstract:

Ocean–sea ice reanalyses are crucial for assessing the variability and recent trends in the Arctic sea ice cover. This is especially true for sea ice volume, as long-term and large-scale sea ice thickness observations do not exist. Results from the Ocean ReAnalyses Intercomparison Project (ORA-IP) are presented, with a focus on Arctic sea ice fields reconstructed by state-of-the-art global ocean reanalyses. Differences between the various reanalyses are explored in terms of the effects of data assimilation, model physics and atmospheric forcing on properties of the sea ice cover, including concentration, thickness, velocity and snow. Amongst the 14 reanalyses studied here, 9 assimilate sea ice concentration, and none assimilate sea ice thickness data. The comparison reveals an overall agreement in the reconstructed concentration fields, mainly because of the constraints in surface temperature imposed by direct assimilation of ocean observations, prescribed or assimilated atmospheric forcing and assimilation of sea ice concentration. However, some spread still exists amongst the reanalyses, due to a variety of factors. In particular, a large spread in sea ice thickness is found within the ensemble of reanalyses, partially caused by the biases inherited from their sea ice model components. Biases are also affected by the assimilation of sea ice concentration and the treatment of sea ice thickness in the data assimilation process. An important outcome of this study is that the spatial distribution of ice volume varies widely between products, with no reanalysis standing out as clearly superior when compared to altimetry estimates. The ice thickness from systems without assimilation of sea ice concentration is not worse than that from systems constrained with sea ice observations. An evaluation of the sea ice velocity fields reveals that ice drifts too fast in most systems. As an ensemble, the ORA-IP reanalyses capture trends in Arctic sea ice area and extent relatively well. However, the ensemble cannot be used to obtain a robust estimate of recent trends in the Arctic sea ice volume. Biases in the reanalyses certainly impact the simulated air–sea fluxes in the polar regions and call into question the suitability of current sea ice reanalyses for initializing seasonal forecasts.
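
As a small aside on the area and extent metrics used above, the sketch below computes both from a sea ice concentration field using the common 15% extent threshold. The concentration array and the uniform 25 km grid cells are synthetic assumptions, not ORA-IP data.

```python
# Illustrative helper with synthetic data (not ORA-IP output): sea ice area and
# extent from a concentration field, using the common 15% extent threshold.
import numpy as np

rng = np.random.default_rng(3)
concentration = np.clip(rng.normal(0.6, 0.3, size=(200, 200)), 0.0, 1.0)
cell_area_km2 = np.full_like(concentration, 25.0 * 25.0)   # uniform 25 km cells

ice_mask = concentration >= 0.15                            # conventional extent threshold
extent_km2 = cell_area_km2[ice_mask].sum()                  # total area of "ice" cells
area_km2 = (concentration * cell_area_km2).sum()            # concentration-weighted area

print(f"extent: {extent_km2:.3e} km^2, area: {area_km2:.3e} km^2")
```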

Relevance: 100.00%

Abstract:

Land cover data derived from satellites are commonly used to prescribe inputs to models of the land surface. Since such data inevitably contain errors, quantifying how uncertainties in the data affect a model’s output is important. To do so, a spatial distribution of possible land cover values is required to propagate through the model’s simulation. However, at large scales, such as those required for climate models, such spatial modelling can be difficult. Also, computer models often require land cover proportions at sites larger than the original map scale as inputs, and it is the uncertainty in these proportions that this article discusses. This paper describes a Monte Carlo sampling scheme that generates realisations of land cover proportions from the posterior distribution as implied by a Bayesian analysis that combines spatial information in the land cover map and its associated confusion matrix. The technique is computationally simple and has been applied previously to the Land Cover Map 2000 for the region of England and Wales. This article demonstrates the ability of the technique to scale up to large (global) satellite-derived land cover maps and reports its application to the GlobCover 2009 data product. The results show that, in general, the GlobCover data possesses only small biases, with the largest belonging to non-vegetated surfaces. In vegetated surfaces, the most prominent area of uncertainty is Southern Africa, which represents a complex heterogeneous landscape. It is also clear from this study that greater resources need to be devoted to the construction of comprehensive confusion matrices.
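
The sketch below gives a deliberately simplified version of the general idea: Monte Carlo realisations of land cover proportions for one coarse site, drawn from per-pixel class probabilities implied by a confusion matrix. The article's scheme is a fuller Bayesian analysis that also incorporates spatial information; the 3-class matrix and mapped pixels here are toy assumptions.

```python
# Simplified sketch of the Monte Carlo idea (the article's Bayesian scheme also uses
# spatial information): sample realisations of land cover proportions for one coarse
# site from per-pixel class probabilities implied by a toy confusion matrix.
import numpy as np

rng = np.random.default_rng(4)

# confusion[i, j]: validation pixels of true class i that the map labels as class j
confusion = np.array([[80.0,  5.0,  2.0],
                      [10.0, 70.0,  8.0],
                      [ 5.0, 15.0, 90.0]])

# P(true class | mapped class): normalise each mapped-class column
p_true_given_mapped = confusion / confusion.sum(axis=0, keepdims=True)

# mapped classes of the fine-resolution pixels inside one coarse model grid cell
mapped = rng.integers(0, 3, size=1000)

n_classes, n_real = 3, 100
proportions = np.empty((n_real, n_classes))
for r in range(n_real):
    # sample a plausible "true" class for every pixel, then aggregate to proportions
    true = np.array([rng.choice(n_classes, p=p_true_given_mapped[:, j]) for j in mapped])
    proportions[r] = np.bincount(true, minlength=n_classes) / mapped.size

print("mean proportions:", proportions.mean(axis=0).round(3))
print("spread (std):    ", proportions.std(axis=0).round(3))
```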

Relevance: 100.00%

Abstract:

Charities need to understand why volunteers choose one brand rather than another in order to attract more volunteers to their organisation. There has been considerable academic interest in understanding why people volunteer generally. However, this research explores the more specific question of why a volunteer chooses one charity brand rather than another. It builds on previous conceptualisations of volunteering as a consumption decision. Seen through the lens of the individual volunteer, it considers the under-researched area of the decision-making process. The research adopts an interpretivist epistemology and subjectivist ontology. Qualitative data was collected through depth interviews and analysed using both Means-End Chain (MEC) and Framework Analysis methodology. The primary contribution of the research is to theory: understanding the role of brand in the volunteer decision-making process. It identifies two roles for brand. The first is as a specific reason for choice, an ‘attribute’ of the decision. Through MEC, volunteering for a well-known brand connects directly through to a sense of self, both self-respect and social recognition by others. All four components of the symbolic consumption construct are found in the data: volunteers choose a well-known brand to say something about themselves. The brand brings credibility and reassurance; it reduces the risk and enables the volunteer to meet their need to make a difference and achieve a sense of accomplishment. The second, closely related role for brand is within the process of making the volunteering decision. Volunteers built up knowledge about the charity brands from a variety of brand touchpoints, over time. At the point of decision-making, that brand knowledge and engagement become relevant, enabling some to make an automatic choice despite the significant level of commitment being made. The research identifies four types of decision-making behaviour. The research also makes secondary contributions to MEC methodology and to the non-profit context. It concludes with practical implications for management practice and a rich agenda for future research.