214 results for flood extent mapping
Abstract:
An aim of government and the international community is to respond to global processes and crises through a range of policy and practical approaches that help limit damage from shocks and stresses. Three approaches to vulnerability reduction that have become particularly prominent in recent years are social protection (SP), disaster risk reduction (DRR) and climate change adaptation (CCA). Although these approaches have much in common, they have developed separately over the last two decades. However, given the increasingly complex and interlinked array of risks that poor and vulnerable people face, it is likely that they will not be sufficient in the long run if they continue to be applied in isolation from one another. In recognition of this challenge, the concept of Adaptive Social Protection (ASP) has been developed. ASP refers to a set of measures that aim to build the resilience of the poorest and most vulnerable people to climate change by combining elements of SP, DRR and CCA in programmes and projects. The aim of this paper is to provide an initial assessment of the ways in which these elements are being brought together in development policy and practice. It does this by conducting a meta-analysis of 124 agricultural programmes implemented in five countries in South Asia: Afghanistan, Bangladesh, India, Nepal and Pakistan. The findings show that full integration of SP, DRR and CCA is relatively limited in South Asia, although there has been significant progress in combining SP and DRR in the last ten years. Projects that combine elements of SP, DRR and CCA tend to emphasise broad poverty and vulnerability reduction goals relative to those that do not. Such approaches can provide valuable lessons and insights for policymakers and practitioners in promoting climate-resilient livelihoods.
Abstract:
5-Hydroxymethylcytosine (5hmC), a modified form of cytosine that is considered the sixth nucleobase in DNA, has been detected in mammals and is believed to play an important role in gene regulation. In this study, 5hmC modification was detected in rice by employing a dot-blot assay, and its levels were further quantified in DNA from different rice tissues using liquid chromatography-multistage mass spectrometry (LC-MS/MS/MS). The results showed large inter-tissue variation in 5hmC levels. The genome-wide profiles of 5hmC modification in three different rice cultivars were also obtained using sensitive chemical labelling followed by next-generation sequencing. Thousands of 5hmC peaks were identified, and a comparison of the distributions of 5hmC among different rice cultivars revealed the specificity and conservation of 5hmC modification. The identified 5hmC peaks were significantly enriched in heterochromatin regions, and mainly located in transposable element (TE) genes, especially around retrotransposons. Correlation analysis of 5hmC and gene expression data revealed a close association between 5hmC and silent TEs. These findings provide a resource for plant DNA 5hmC epigenetic studies and expand our knowledge of 5hmC modification.
Abstract:
BACKGROUND: Social networks are common in digital health. A new stream of research is beginning to investigate the mechanisms of digital health social networks (DHSNs): how they are structured, how they function, and how their growth can be nurtured and managed. DHSNs increase in value as additional content is added, and the structure of networks may exhibit the characteristics of power laws. Power laws differ from traditional Gaussian averages in that they describe correlated phenomena. OBJECTIVES: The objective of this study is to investigate whether the distribution frequency in four DHSNs can be characterized as following a power law. A second objective is to describe the method used to make this comparison. METHODS: Data from four DHSNs—Alcohol Help Center (AHC), Depression Center (DC), Panic Center (PC), and Stop Smoking Center (SSC)—were compared to power law distributions. To assist future researchers and managers, the 5-step methodology used to analyze and compare the datasets is described. RESULTS: All four DHSNs were found to have right-skewed distributions, indicating the data were not normally distributed. When power trend lines were added to each frequency distribution, R² values indicated that, to a very high degree, the variance in post frequencies can be explained by actor rank (AHC .962, DC .975, PC .969, SSC .95). Spearman correlations provided further indication of the strength and statistical significance of the relationship (AHC .987, DC .967, PC .983, SSC .993; P<.001). CONCLUSIONS: This is the first study to investigate power distributions across multiple DHSNs, each addressing a unique condition. Results indicate that despite vast differences in theme, content, and length of existence, DHSNs follow properties of power laws. The structure of DHSNs is important as it gives researchers and managers insight into the nature and mechanisms of network functionality. The 5-step process undertaken to compare actor contribution patterns can be replicated in networks managed by other organizations, and we conjecture that the patterns observed in this study could be found in other DHSNs. Future research should analyze network growth over time and examine the characteristics and survival rates of superusers.
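As an illustration of the kind of comparison described above (a minimal sketch, not the study's actual 5-step pipeline), the following Python snippet fits a power trend line to a rank-ordered posting-frequency distribution on log-log axes and reports the R² and Spearman correlation; the Zipf-sampled data are purely synthetic.

```python
import numpy as np
from scipy import stats

def power_law_check(post_counts):
    """Rank actors by contribution and test a power-law-style fit.

    post_counts: iterable of posts-per-actor for one network.
    Returns (R2 of the log-log trend line, Spearman rho, p-value).
    """
    freq = np.sort(np.asarray(post_counts))[::-1]   # descending contribution
    rank = np.arange(1, len(freq) + 1)

    # Power trend line: freq ~ c * rank^b  <=>  log(freq) = log(c) + b*log(rank)
    slope, intercept, r, p, se = stats.linregress(np.log(rank), np.log(freq))

    # Spearman correlation between rank and frequency; its magnitude measures
    # the monotonic strength (the sign is negative, since frequency falls
    # with increasing rank)
    rho, p_rho = stats.spearmanr(rank, freq)
    return r ** 2, rho, p_rho

# Example with a synthetic right-skewed network (values are illustrative only)
counts = np.random.zipf(a=2.0, size=500)
print(power_law_check(counts))
```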
Abstract:
In this invited article the authors present an evaluative report on the development of the MESHGuides project (http://www.meshguides.org/). MESHGuides' objective is to provide the education community with an international knowledge management system. MESHGuides were conceived as research summaries to support teachers in developing evidence-based practice. Their aim is to enhance teachers' capacity to engage actively with research in their own classrooms. The original thinking for MESH arose from the work of UK-based academics Professor Marilyn Leask and Dr Sarah Younie in response to a desire, which has recently gathered momentum in the UK, for the development of a more research-informed teaching profession and for the establishment of an online platform to support evidence-based practice (DfE, 2015; Leask and Younie, 2001; OECD, 2009). The focus of this article is on how the MESHGuides project was conceived and structured, the technical systems supporting it, and the practical reality for academics and teachers of composing and using MESHGuides. The project and the guides are in the early stages of development, and the discussion indicates future possibilities for more global engagement with this knowledge management system.
Abstract:
An ability to quantify the reliability of probabilistic flood inundation predictions is a requirement not only for guiding model development but also for their successful application. Probabilistic flood inundation predictions are usually produced by choosing a method of weighting the model parameter space, but previous studies suggest that this choice leads to clear differences in inundation probabilities. This study aims to address the evaluation of the reliability of these probabilistic predictions. However, the lack of an adequate number of observations of flood inundation for a catchment limits the application of conventional methods of evaluating predictive reliability. Consequently, attempts have been made to assess the reliability of probabilistic predictions using multiple observations from a single flood event. Here, a LISFLOOD-FP hydraulic model of an extreme (>1 in 1000 years) flood event in Cockermouth, UK, is constructed and calibrated using multiple performance measures derived from both peak-flood wrack-mark data and aerial photography captured post-peak. These measures are used to weight the parameter space and produce multiple probabilistic predictions for the event. Two methods of assessing the reliability of these probabilistic predictions using limited observations are utilized: an existing method that assesses the binary pattern of flooding, and a method developed in this paper that assesses predictions of water surface elevation. This study finds that the water surface elevation method has both better diagnostic and discriminatory ability, but this result is likely to be sensitive to the unknown uncertainties in the upstream boundary condition.
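One common way to weight a parameter space against binary flood-extent observations, and a plausible reading of the "binary pattern" approach mentioned above (though not necessarily the exact measure used in the paper), is the fit statistic F = A/(A+B+C) combined with GLUE-style likelihood weights. The sketch below assumes boolean wet/dry rasters; the behavioural threshold is an illustrative assumption.

```python
import numpy as np

def flood_fit_F(model_wet, obs_wet):
    """Binary flood-extent performance measure F = A / (A + B + C), where
    A = cells wet in both model and observation,
    B = cells wet in the model only,
    C = cells wet in the observation only.
    F = 1 is a perfect match; F = 0 is no overlap."""
    model_wet = np.asarray(model_wet, dtype=bool)
    obs_wet = np.asarray(obs_wet, dtype=bool)
    A = np.sum(model_wet & obs_wet)
    B = np.sum(model_wet & ~obs_wet)
    C = np.sum(~model_wet & obs_wet)
    return A / float(A + B + C)

def glue_weights(scores, behavioural_threshold=0.5):
    """Rescale performance scores into GLUE-style likelihood weights:
    non-behavioural runs get zero weight, the rest sum to one."""
    s = np.array(scores, dtype=float)   # copy, to avoid mutating the input
    s[s < behavioural_threshold] = 0.0
    return s / s.sum()

# One score per parameter set; the weights then define the inundation
# probability of each cell as the weighted fraction of simulations
# predicting that cell wet.
```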
Abstract:
We have performed systematic Monte Carlo studies of how shifting the walls of slit-like systems constructed from folded graphene sheets influences their adsorption properties. Specifically, we have analysed the effect on the mechanism of argon adsorption (T = 87 K) and on the adsorption and separation of three binary gas mixtures: CO2/N2, CO2/CH4 and CH4/N2 (T = 298 K). The effects of changes in interlayer distance were also determined. We show that folding of the walls significantly improves the adsorption and separation properties in comparison to ideal slit-like systems. Moreover, we demonstrate that a mutual shift of the sheets (for small interlayer distances) causes small pores to appear between opposite bulges, which increases vapour adsorption at low pressures. Because overlapping interactions with the opposite walls increase the adsorption energy, the mutual shift of the sheets also raises the efficiency of mixture separation. The effects connected with sheet orientation vanish as the interlayer distance increases.
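The overlap of wall interactions in narrow pores can be illustrated with the standard Steele 10-4-3 potential for an idealized graphitic slit pore (the paper's folded-sheet systems are more complex; the argon-carbon parameter values below are common literature estimates used purely for illustration):

```python
import numpy as np

# Steele 10-4-3 parameters for argon on a graphitic wall (illustrative values)
EPS_SF = 57.9     # solid-fluid well depth / k_B [K]
SIG_SF = 0.3405   # solid-fluid diameter [nm]
RHO_S = 114.0     # carbon number density [1/nm^3]
DELTA = 0.335     # graphene interlayer spacing [nm]

def steele_wall(z):
    """Fluid-wall potential (in K) at distance z [nm] from one wall."""
    s = SIG_SF / z
    return (2.0 * np.pi * EPS_SF * RHO_S * SIG_SF**2 * DELTA *
            (0.4 * s**10 - s**4
             - SIG_SF**4 / (3.0 * DELTA * (z + 0.61 * DELTA)**3)))

def slit_potential(z, width):
    """Total potential in a slit pore: both opposite walls contribute."""
    return steele_wall(z) + steele_wall(width - z)

# In a narrow pore the two wall potentials overlap and deepen the minimum,
# increasing the adsorption energy:
for H in (0.7, 1.0, 2.0):                 # pore widths in nm
    z = np.linspace(0.25, H - 0.25, 500)
    print(H, slit_potential(z, H).min())  # well depth grows as H shrinks
```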
Abstract:
The emergence and development of digital imaging technologies and their impact on mainstream filmmaking is perhaps the most familiar special effects narrative associated with the years 1981-1999. This is in part because some of the questions raised by the rise of the digital still concern us now, but also because key milestone films showcasing advancements in digital imaging technologies appear in this period, including Tron (1982) and its computer generated image elements, the digital morphing in The Abyss (1989) and Terminator 2: Judgment Day (1991), computer animation in Jurassic Park (1993) and Toy Story (1995), digital extras in Titanic (1997), and ‘bullet time’ in The Matrix (1999). As a result, it is tempting to characterize 1981-1999 as a ‘transitional period’ in which digital imaging processes grow in prominence and technical sophistication, and what we might call ‘analogue’ special effects processes correspondingly become less common. But such a narrative risks eliding the other practices that also shape effects sequences in this period. Indeed, the 1980s and 1990s are striking for the diverse range of effects practices in evidence in both big budget films and lower budget productions, and for the extent to which analogue practices persist independently of or alongside digital effects work in a range of production and genre contexts. The chapter seeks to document and celebrate this diversity and plurality, this sustaining of earlier traditions of effects practice alongside newer processes, this experimentation with materials and technologies old and new in the service of aesthetic aspirations alongside budgetary and technical constraints. The common characterization of the period as a series of rapid transformations in production workflows, practices and technologies will be interrogated in relation to the persistence of certain key figures such as Douglas Trumbull, John Dykstra, and James Cameron, but also through a consideration of the contexts for and influences on creative decision-making. Comparative analyses of the processes used to articulate bodies, space and scale in effects sequences drawn from different generic sites of special effects work, including science fiction, fantasy, and horror, will provide a further frame for the chapter’s mapping of the commonalities and specificities, continuities and variations in effects practices across the period. In the process, the chapter seeks to reclaim analogue processes’ contribution both to moments of explicit spectacle, and to diegetic verisimilitude, in the decades most often associated with the digital’s ‘arrival’.
Abstract:
Heterosis refers to the phenomenon in which an F1 hybrid exhibits enhanced growth or agronomic performance. However, previous theoretical studies of heterosis have been based on bi-parental segregating populations rather than F1 hybrids. To understand the genetic basis of heterosis, here we used a subset of F1 hybrids, named a partial North Carolina II design, to perform association mapping for four dependent variables: the original trait value, general combining ability (GCA), specific combining ability (SCA) and mid-parent heterosis (MPH). Our models jointly fitted all additive, dominance and epistatic effects. The analyses resulted in several important findings: (1) the main components are additive and additive-by-additive effects for GCA, and dominance-related effects for SCA and MPH, while the additive-by-dominance effect for MPH was partly identified as an additive effect; (2) the ranking of factors affecting heterosis was dominance > dominance-by-dominance > over-dominance > complete dominance; and (3) increasing the proportion of F1 hybrids in the population could significantly increase the power to detect dominance-related effects, while slightly reducing the power to detect additive and additive-by-additive effects. Analyses of cotton and rapeseed datasets showed that more additive-by-additive QTL were detected from GCA than from the trait phenotype, and fewer QTL were detected from MPH than from the other dependent variables.
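For readers unfamiliar with the dependent variables, the sketch below computes GCA, SCA and MPH from a hypothetical balanced North Carolina II cross table using their textbook definitions; the trait values are invented, and the paper's association-mapping models are of course far richer.

```python
import numpy as np

# Hypothetical NC II cross table: rows = maternal lines, cols = paternal
# lines; entries are F1 hybrid trait values. Parent values are needed
# only for mid-parent heterosis.
y = np.array([[10.2, 11.5, 12.1],
              [ 9.8, 12.4, 11.0],
              [11.1, 10.9, 13.2]])
p_female = np.array([ 9.0, 10.0, 11.0])   # maternal parent trait values
p_male   = np.array([ 9.5, 10.5, 11.5])   # paternal parent trait values

mu = y.mean()                              # grand mean of all hybrids
gca_f = y.mean(axis=1) - mu                # GCA of each maternal line
gca_m = y.mean(axis=0) - mu                # GCA of each paternal line
# SCA: what remains after removing the mean and both parents' GCA
sca = y - mu - gca_f[:, None] - gca_m[None, :]
# MPH: hybrid value minus the mid-parent value
mph = y - (p_female[:, None] + p_male[None, :]) / 2.0

print(gca_f, gca_m, sca, mph, sep="\n")
```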
Abstract:
This study investigates flash flood forecast and warning communication, interpretation, and decision making, using data from a survey of 418 members of the public in Boulder, Colorado, USA. Respondents to the public survey varied in their perceptions and understandings of flash flood risks in Boulder, and some had misconceptions about flash flood risks, such as the safety of crossing fast-flowing water. About 6% of respondents consistently reversed the meanings of the US watch and warning alert terms. However, more in-depth analysis illustrates the multi-dimensional, situationally dependent meanings of flash flood alerts, as well as the importance of evaluating the interpretation and use of warning information along with alert terminology. Some public respondents estimated low likelihoods of flash flooding given a flash flood warning; these estimates were associated with a lower anticipated likelihood of taking protective action given a warning. Protective action intentions were also lower among respondents who had less trust in flash flood warnings, those who had not made prior preparations for flash flooding, and those who believed themselves to be safer from flash flooding. Additional analysis, using open-ended survey questions about responses to warnings, elucidates the complex, contextual nature of protective decision making during flash flood threats. These findings suggest that warnings can play an important role not only by notifying people of a threat and helping motivate them to take protective action, but also by helping people evaluate what actions to take given their situation.
Abstract:
Floods are the most frequent of natural disasters, affecting millions of people across the globe every year. The anticipation and forecasting of floods at the global scale is crucial for preparing for severe events and providing early awareness where local flood models and warning services may not exist. As numerical weather prediction models continue to improve, operational centres are increasingly using their meteorological output to drive hydrological models, creating hydrometeorological systems capable of forecasting river flow and flood events at much longer lead times than was previously possible. Furthermore, recent developments in modelling capabilities, data and computational resources have made it possible to produce global-scale flood forecasting systems. In this paper, the current state of operational large-scale flood forecasting is discussed, including probabilistic forecasting of floods using ensemble prediction systems. Six state-of-the-art operational large-scale flood forecasting systems are reviewed, describing similarities and differences in their approaches to forecasting floods at the global and continental scale. Currently, operational systems have the capability to produce coarse-scale discharge forecasts in the medium range and to disseminate forecasts and, in some cases, early warning products in real time across the globe, in support of national forecasting capabilities. With improvements in seasonal weather forecasting, future advances may include more seamless hydrological forecasting at the global scale, alongside a move towards multi-model forecasts and grand ensemble techniques, responding to the requirement for multi-hazard early warning systems for disaster risk reduction.
Abstract:
Heavy precipitation affected Central Europe in May/June 2013, triggering damaging floods on both the Danube and the Elbe rivers. Based on a modelling approach with COSMO-CLM, moisture fluxes, backward trajectories, cyclone tracks and precipitation fields are evaluated for the relevant period of 30 May–2 June 2013. We identify potential moisture sources and quantify their contribution to the flood event, focusing on the Danube basin, through sensitivity experiments: control simulations are performed with undisturbed ERA-Interim boundary conditions, while multiple sensitivity experiments are driven with modified evaporation characteristics over selected marine and land areas. Two relevant cyclones are identified both in the reanalysis and in our simulations, which moved counter-clockwise along a retrograde path from Southeastern Europe over Eastern Europe towards the northern slopes of the Alps. The control simulations represent the synoptic evolution of the event reasonably well, although the simulated precipitation event shows some differences in its spatial and temporal characteristics compared to observations. The main precipitation event can be separated into two phases with respect to the moisture sources. Our modelling results provide evidence that the two main sources contributing to the event were continental evapotranspiration (moisture recycling; both phases) and the North Atlantic Ocean (first phase only). The Mediterranean Sea played only a minor role as a moisture source. This study confirms the importance of continental moisture recycling for heavy precipitation events over Central Europe during the summer half-year.
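The logic of such sensitivity experiments can be summarized in a few lines: the contribution of a source region is estimated from the precipitation lost when evaporation over that region is suppressed. The sketch below is a schematic of that reasoning only; the basin-mean values are hypothetical, not the paper's results.

```python
import numpy as np

def source_contribution(p_control, p_sensitivity):
    """Fraction of event precipitation attributable to one moisture source,
    estimated as the relative precipitation loss when evaporation over that
    source region is suppressed in the sensitivity experiment.

    p_control, p_sensitivity: event-accumulated precipitation [mm],
    averaged over the target basin (e.g. the Danube)."""
    pc = np.asarray(p_control, dtype=float)
    ps = np.asarray(p_sensitivity, dtype=float)
    return (pc.mean() - ps.mean()) / pc.mean()

# Hypothetical basin-mean accumulations for 30 May-2 June: 80 mm in the
# control run, 55 mm with continental evapotranspiration switched off
print(source_contribution([80.0], [55.0]))   # -> ~0.31, i.e. ~31% recycling
```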
Abstract:
Searching for and mapping the physical extent of unmarked graves using geophysical techniques has proven difficult in many cases. The success of individual geophysical techniques for detecting graves varies on a site-by-site basis. Significantly, the detection of graves often results from measured contrasts that are linked to the background soils rather than to the type of archaeological feature associated with the grave. It is evident that the investigation of buried remains should be considered within a 3D space, as the burial environment can vary considerably throughout the grave. In this paper, we demonstrate the need for a multi-method survey strategy to investigate unmarked graves, as applied at a “planned” but unmarked pauper’s cemetery. The outcome of this case study provides new insights into the strategy required at such sites. Perhaps the most significant conclusion is that unmarked graves are best understood in terms of characterization rather than identification. We argue for a methodological approach that, while following the current trend of using multiple techniques, is fundamentally dependent on a structured approach to the analysis of the data. The ramifications of this case study illustrate the necessity of an integrated strategy to provide a more holistic understanding of unmarked graves, which may help in the management of these unseen but important aspects of our heritage. It is concluded that the search for graves remains a matter of current debate, and one that will be resolved by methodological rather than technique-based arguments.
Abstract:
Rootstock-induced dwarfing of apple scions revolutionized global apple production during the twentieth century, leading to the development of modern intensive orchards. A high root bark percentage (the percentage of the whole root area constituted by root cortex) has previously been associated with rootstock-induced dwarfing in apple. In this study, root bark percentage was measured in a full-sib family of ungrafted apple rootstocks and found to be under the control of three loci. Two QTL for root bark percentage were found to co-localise with the genomic regions on chromosomes 5 and 11 previously identified as controlling dwarfing, Dw1 and Dw2, respectively. A third QTL was identified on chromosome 13 in a region that has not previously been associated with dwarfing. The development of three closely linked sequence-tagged site (STS) markers improved the resolution of allelic classes, thereby allowing the detection of dominance and epistatic interactions between loci, with a high root bark percentage occurring only in specific allelic combinations. In addition, we report a significant negative correlation between root bark percentage and stem diameter (an indicator of tree vigour), measured on a clonally propagated grafted subset of the mapping population. The demonstrated link between root bark percentage and rootstock-induced dwarfing of the scion leads us to propose a three-locus model that can explain levels of dwarfing from the dwarf ‘M.27’ to the semi-invigorating rootstock ‘M.116’. Moreover, we suggest that the QTL on chromosome 13 (Rb3) might be analogous to a third dwarfing QTL, Dw3, which has not previously been identified.
Abstract:
This paper describes new advances in the exploitation of oxygen A-band measurements from the POLDER3 sensor onboard PARASOL, a satellite platform within the A-Train. These developments result not only from accounting for the dependence of the POLDER oxygen parameters on cloud optical thickness τ and on the scene's geometrical conditions but also, and more importantly, from a finer understanding of the sensitivity of these parameters to cloud vertical extent. This sensitivity can be exploited thanks to the multidirectional character of POLDER measurements. For monolayer clouds, which represent most cloudy conditions, new oxygen parameters are obtained and calibrated from POLDER3 data collocated with measurements from the two active sensors of the A-Train: CALIOP/CALIPSO and CPR/CloudSat. From a parameterization that depends on (μs, τ), with μs the cosine of the solar zenith angle, a cloud top oxygen pressure (CTOP) and a cloud middle oxygen pressure (CMOP) are obtained, which are estimates of the actual cloud top and middle pressures (CTP and CMP). The performance of CTOP and CMOP is presented by cloud class following the ISCCP classification. In 2008, the correlation coefficient between CMOP and CMP was 0.81 for cirrostratus, 0.79 for stratocumulus and 0.75 for deep convective clouds; the correlation coefficient between CTOP and CTP was 0.75, 0.73 and 0.79 for the same cloud types. The score obtained by CTOP, defined as the confidence in the retrieval for a particular range of inferred values and for a given error, is higher than that of the MODIS CTP estimate, and CTOP scores are highest for the most populated CTP bins. For liquid (ice) clouds and an error of 30 hPa (50 hPa), the score of CTOP reaches 50% (70%). From the difference between CTOP and CMOP, a first estimate of the cloud vertical extent h is possible. A second estimate of h comes from the correlation between the angular standard deviation of the POLDER oxygen pressure, σPO2, and the cloud vertical extent. This correlation is studied in detail for liquid clouds and is shown to be spatially and temporally robust, except for clouds above land during winter months. Analysis of the correlation's dependence on scene characteristics leads to a parameterization providing h from σPO2. For liquid water clouds above ocean in 2008, the mean difference between the actual cloud vertical extent and the one retrieved from σPO2 (from the pressure difference) is 5 m (−12 m), with a standard deviation close to 1000 m for both methods. POLDER estimates of the cloud geometrical thickness obtain a global score of 50% confidence for a relative error of 20% (40%) for ice (liquid) clouds over ocean. These results still need to be validated outside the CALIPSO/CloudSat track.
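A back-of-the-envelope version of the first h estimate (the CTOP-CMOP pressure difference converted to a geometric thickness) can be sketched with an isothermal barometric formula; the scale height and the pressure values below are illustrative assumptions, not the parameterization actually used in the paper.

```python
import math

H_S = 8.0e3    # atmospheric scale height [m]; isothermal approximation
P0 = 1013.25   # reference surface pressure [hPa]

def pressure_to_height(p_hpa):
    """Altitude [m] from pressure via the isothermal barometric formula."""
    return H_S * math.log(P0 / p_hpa)

def cloud_extent(ctop_hpa, cmop_hpa):
    """Cloud vertical extent from top (CTOP) and middle (CMOP) oxygen
    pressures: the top-to-middle geometric thickness, doubled."""
    z_top = pressure_to_height(ctop_hpa)   # cloud top altitude
    z_mid = pressure_to_height(cmop_hpa)   # cloud middle altitude
    return 2.0 * (z_top - z_mid)

# Hypothetical stratocumulus: top pressure 850 hPa, middle pressure 880 hPa
print(cloud_extent(850.0, 880.0))   # -> roughly 550 m
```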