5 results for WIDE-RANGE CURRENT MEASUREMENT

at Duke University


Relevance:

100.00%

Publisher:

Abstract:

Habitat loss, fragmentation, and degradation threaten the World’s ecosystems and species. These, and other threats, will likely be exacerbated by climate change. Due to a limited budget for conservation, we are forced to prioritize a few areas over others. These places are selected based on their uniqueness and vulnerability. One of the most famous examples is the biodiversity hotspots: areas where large quantities of endemic species meet alarming rates of habitat loss. Most of these places are in the tropics, where species have smaller ranges, diversity is higher, and ecosystems are most threatened.

Species distributions are useful to understand ecological theory and evaluate extinction risk. Small-ranged species, or those endemic to one place, are more vulnerable to extinction than widely distributed species. However, current range maps often overestimate the distribution of species, including areas that are not within the suitable elevation or habitat for a species. Consequently, assessment of extinction risk using these maps could underestimate vulnerability.

In order to be effective in our quest to conserve the World’s most important places we must: 1) Translate global and national priorities into practical local actions, 2) Find synergies between biodiversity conservation and human welfare, 3) Evaluate the different dimensions of threats, in order to design effective conservation measures and prepare for future threats, and 4) Improve the methods used to evaluate species’ extinction risk and prioritize areas for conservation. The purpose of this dissertation is to address these points in Colombia and other global biodiversity hotspots.

In Chapter 2, I identified the global, strategic conservation priorities and then downscaled to practical local actions within the selected priorities in Colombia. I used existing range maps of 171 bird species to identify priority conservation areas that would protect the greatest number of species at risk in Colombia (endemic and small-ranged species). The Western Andes had the highest concentrations of such species (100 in total) but the lowest densities of national parks. I then adjusted the priorities for this region by refining these species' ranges to include only areas of suitable elevation and remaining habitat. The estimated ranges of these species shrank by 18–100% after accounting for habitat and suitable elevation. Setting conservation priorities on the basis of currently available range maps excluded priority areas in the Western Andes and, by extension, likely elsewhere and for other taxa. By incorporating detailed maps of remaining natural habitats, I made practical recommendations for conservation actions. One recommendation was to restore forest connections to a patch of cloud forest about to become isolated from the main Andes.

For Chapter 3, I identified areas where bird conservation met ecosystem service protection in the Central Andes of Colombia. Prompted by the November 11th, 2011 landslide near Manizales, and by the poor results to date of Article 111 of Colombia's Law 99 of 1993 as a conservation measure, I set out to prioritize conservation and restoration areas where landslide prevention would complement bird conservation in the Central Andes. This area is one of the most biodiverse places on Earth, but also one of the most threatened. Using the case of the Rio Blanco Reserve, near Manizales, I identified areas for conservation where endemic and small-ranged bird diversity was high and where landslide risk was also high. I further prioritized restoration areas by overlapping these conservation priorities with a forest cover map: restoring forests in bare areas of high landslide risk and important bird diversity yields benefits for both biodiversity and people. I developed a simple landslide susceptibility model using slope, forest cover, aspect, and stream proximity. Using publicly available bird range maps, refined by elevation, I mapped concentrations of endemic and small-ranged bird species. I identified 1.54 km2 of potential restoration areas in the Rio Blanco Reserve, and 886 km2 in the Central Andes region. By prioritizing these areas, I facilitate the application of Article 111, which requires local and regional governments to invest in land purchases for the conservation of watersheds.
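As an illustration of what such a model can look like, here is a minimal sketch in Python of an additive susceptibility index built from the four factors named above. The weights and thresholds are hypothetical, not taken from the dissertation:

```python
# Minimal sketch of an additive landslide-susceptibility index from the four
# factors named in the abstract. Weights and thresholds are hypothetical.

def landslide_susceptibility(slope_deg, forested, aspect_deg, stream_dist_m):
    """Return a 0-1 susceptibility score for one grid cell."""
    score = 0.0
    score += 0.4 * min(slope_deg / 45.0, 1.0)        # steeper slopes are riskier
    score += 0.3 * (0.0 if forested else 1.0)        # bare ground is riskier
    score += 0.1 * (1.0 if 90 <= aspect_deg <= 270 else 0.0)  # exposed aspects
    score += 0.2 * max(0.0, 1.0 - stream_dist_m / 500.0)      # near streams
    return score

# A steep, deforested cell close to a stream scores high:
print(landslide_susceptibility(40, False, 180, 50))
```

Real susceptibility models are fit to landslide inventories; the point here is only that a per-cell score combining these layers can then be overlaid with bird-diversity and forest-cover maps to rank restoration areas.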

Chapter 4 dealt with elevational ranges of montane birds and the impact of lowland deforestation on their ranges in the Western Andes of Colombia, an important biodiversity hotspot. Using point counts and mist-nets, I surveyed six altitudinal transects spanning 2200 to 2800m. Three transects were forested from 2200 to 2800m, and three were partially deforested with forest cover only above 2400m. I compared abundance-weighted mean elevation, minimum elevation, and elevational range width. In addition to analyzing the effect of deforestation on 134 species, I tested its impact within trophic guilds and habitat preference groups. Abundance-weighted mean and minimum elevations were not significantly different between forested and partially deforested transects. Range width was marginally different: as expected, ranges were larger in forested transects. Species in different trophic guilds and habitat preference categories showed different trends. These results suggest that deforestation may affect species’ elevational ranges, even within the forest that remains. Climate change will likely exacerbate harmful impacts of deforestation on species’ elevational distributions. Future conservation strategies need to account for this by protecting connected forest tracts across a wide range of elevations.
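The abundance-weighted mean elevation compared above is a plain weighted average over point-count records. A minimal sketch with invented survey data:

```python
# Abundance-weighted mean elevation for one species on one transect:
# each (elevation, abundance) pair is a point-count record. Data are invented
# for illustration.

def weighted_mean_elevation(records):
    """records: list of (elevation_m, abundance) tuples."""
    total = sum(abundance for _, abundance in records)
    return sum(elev * abundance for elev, abundance in records) / total

counts = [(2200, 5), (2400, 12), (2600, 8), (2800, 2)]
print(weighted_mean_elevation(counts))  # ≈ 2452 m, pulled toward the abundance peak
```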

In Chapter 5, I refine the ranges of 726 species from six biodiversity hotspots by suitable elevation and habitat. This set of 172 bird species for the Atlantic Forest, 138 for Central America, 100 for the Western Andes of Colombia, 57 for Madagascar, 102 for Sumatra, and 157 for Southeast Asia met the criteria for range size, endemism, threat, and forest use. Of these species, the Red List deems 108 to be threatened: 15 critically endangered, 29 endangered, and 64 vulnerable. When ranges are refined by elevational limits and remaining forest cover, 10 of those critically endangered species have ranges < 100 km2, but then so do 2 endangered, 7 vulnerable, and 8 non-threatened species. Similarly, 4 critically endangered, 20 endangered, and 12 vulnerable species have refined ranges < 5000 km2, but so do 66 non-threatened species. A striking 89% of the species I classify into higher threat categories have < 50% of their refined ranges inside protected areas. For 43% of the species I assessed, refined range sizes fall within thresholds that typically correspond to higher threat categories than their current assignments; I recommend these species for closer inspection by those who assess extinction risk. These assessments are not only important on a species-by-species basis: by combining the distributions of threatened species, I create maps of conservation priorities that differ significantly from those created from unrefined ranges.
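Conceptually, refining a range by suitable elevation and remaining forest is an intersection of spatial masks. A toy Python sketch, with a gridded range and made-up elevation and forest masks:

```python
# Toy range refinement: intersect a coarse range map with cells of suitable
# elevation and remaining forest. Grid, thresholds, and masks are invented.

range_cells  = {(r, c) for r in range(10) for c in range(10)}   # coarse range map
elevation_ok = {(r, c) for r in range(10) for c in range(10) if 3 <= r <= 7}
forest_ok    = {(r, c) for r in range(10) for c in range(10) if c % 2 == 0}

refined = range_cells & elevation_ok & forest_ok
shrinkage = 100 * (1 - len(refined) / len(range_cells))
print(f"refined range: {len(refined)} cells, {shrinkage:.0f}% smaller")
```

The refined area, not the coarse polygon, is then compared against the Red List's range-size thresholds and against protected-area boundaries.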

Relevance:

100.00%

Publisher:

Abstract:

Aims: Measurement of glycated hemoglobin (HbA1c) is an important indicator of glucose control over time. Point-of-care (POC) devices allow for rapid and convenient measurement of HbA1c, greatly facilitating diabetes care. We assessed two POC analyzers in the Peruvian Amazon where laboratory-based HbA1c testing is not available.

Methods: Venous blood samples were collected from 203 individuals from six different Amazonian communities with a wide range of HbA1c, 4.4-9.0% (25-75 mmol/mol). The results of the Afinion AS100 and the DCA Vantage POC analyzers were compared to a central laboratory using the Premier Hb9210 high-performance liquid chromatography (HPLC) method. Imprecision was assessed by performing 14 successive tests of a single blood sample.

Results: The correlation coefficient r between POC and HPLC results was 0.92 for the Afinion and 0.93 for the DCA Vantage. The Afinion generated higher HbA1c results than the HPLC (mean difference = +0.56% [+6 mmol/mol]; p < 0.001), as did the DCA Vantage (mean difference = +0.32% [+4 mmol/mol]). The bias observed between POC and HPLC did not vary by HbA1c level for the DCA Vantage (p = 0.190), but it did for the Afinion (p < 0.001). Imprecision (CV) was 1.75% for the Afinion and 4.01% for the DCA Vantage. Sensitivity was 100% for both devices; specificity was 48.3% for the Afinion and 85.1% for the DCA Vantage; positive predictive value (PPV) was 14.4% for the Afinion and 34.9% for the DCA Vantage; and negative predictive value (NPV) for both devices was 100%. The area under the receiver operating characteristic (ROC) curve was 0.966 for the Afinion and 0.982 for the DCA Vantage. Agreement with HPLC in classifying diabetes and prediabetes status was slight for the Afinion (kappa = 0.12) and moderate for the DCA Vantage (kappa = 0.45); for both devices, classifications differed significantly from HPLC (McNemar’s statistic = 89 and 28, respectively; both p < 0.001).
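Two of the headline statistics are simple to reproduce. Below is a minimal Python sketch, with invented numbers (not the study data), of how a mean bias between paired POC and HPLC readings and an imprecision CV from replicate tests of one sample are computed:

```python
# Mean bias (POC minus HPLC) over paired readings, and imprecision CV from
# repeated tests of a single blood sample. All values are invented.
from statistics import mean, stdev

poc  = [5.8, 6.4, 7.1, 8.0, 5.2]   # POC HbA1c (%), hypothetical
hplc = [5.3, 5.9, 6.6, 7.5, 4.8]   # paired HPLC HbA1c (%)

bias = mean(p - h for p, h in zip(poc, hplc))
print(f"mean bias: {bias:+.2f}%")   # positive bias means the POC device reads high

replicates = [6.0, 6.1, 5.9, 6.0, 6.1, 6.0, 5.9]  # repeats of one sample
cv = 100 * stdev(replicates) / mean(replicates)
print(f"imprecision CV: {cv:.2f}%")
```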

Conclusions: Despite significant variation of HbA1c results from the Afinion and DCA Vantage analyzers compared to HPLC, we conclude that both analyzers can be considered for guiding therapeutic adjustments in health clinics in the Peruvian Amazon, provided healthcare workers are aware of the differences relative to testing in a clinical laboratory. However, imprecision and bias were not low enough to recommend either device for screening purposes, and the local prevalence of anemia and malaria may interfere with diagnostic determinations for a substantial portion of the population.

Relevance:

100.00%

Publisher:

Abstract:

Surveys can collect important data that inform policy decisions and drive social science research. Large government surveys collect information from the U.S. population on a wide range of topics, including demographics, education, employment, and lifestyle. Analysis of survey data presents unique challenges. In particular, one needs to account for missing data, for complex sampling designs, and for measurement error. Conceptually, a survey organization could spend substantial resources obtaining high-quality responses from a simple random sample, resulting in survey data that are easy to analyze. However, this scenario often is not realistic. To address these practical issues, survey organizations can leverage information available from other sources of data. For example, in longitudinal studies that suffer from attrition, they can use information from refreshment samples to correct for potential attrition bias. They can use information from known marginal distributions or the survey design to improve inferences. They can use information from gold standard sources to correct for measurement error.

This thesis presents novel approaches to combining information from multiple sources that address the three problems described above.

The first method addresses nonignorable unit nonresponse and attrition in a panel survey with a refreshment sample. Panel surveys typically suffer from attrition, which can lead to biased inference when analysis is based only on cases that complete all waves of the panel. Unfortunately, the panel data alone cannot inform the extent of the bias due to attrition, so analysts must make strong and untestable assumptions about the missing data mechanism. Many panel studies also include refreshment samples, which are data collected from a random sample of new individuals during some later wave of the panel. Refreshment samples offer information that can be utilized to correct for biases induced by nonignorable attrition while reducing reliance on strong assumptions about the attrition process. To date, these bias correction methods have not dealt with two key practical issues in panel studies: unit nonresponse in the initial wave of the panel and in the refreshment sample itself. As we illustrate, nonignorable unit nonresponse can significantly compromise the analyst's ability to use the refreshment samples for attrition bias correction. Thus, it is crucial for analysts to assess how sensitive their inferences, corrected for panel attrition, are to different assumptions about the nature of the unit nonresponse. We present an approach that facilitates such sensitivity analyses, both for suspected nonignorable unit nonresponse in the initial wave and in the refreshment sample. We illustrate the approach using simulation studies and an analysis of data from the 2007-2008 Associated Press/Yahoo News election panel study.

The second method incorporates informative prior beliefs about marginal probabilities into Bayesian latent class models for categorical data. The basic idea is to append synthetic observations to the original data such that (i) the empirical distributions of the desired margins match those of the prior beliefs, and (ii) the values of the remaining variables are left missing. The degree of prior uncertainty is controlled by the number of augmented records. Posterior inferences can be obtained via typical MCMC algorithms for latent class models, tailored to deal efficiently with the missing values in the concatenated data.

We illustrate the approach using a variety of simulations based on data from the American Community Survey, including an example of how augmented records can be used to fit latent class models to data from stratified samples.
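The augmented-records construction can be sketched in a few lines. In this toy Python example the variable names, the margin, and the 100-record prior sample size are all illustrative:

```python
# Sketch of the augmented-records idea: encode a prior belief about one margin
# by appending synthetic rows that match it, leaving other variables missing.
import random

random.seed(1)
data = [{"educ": random.choice(["HS", "BA"]), "employed": random.choice([0, 1])}
        for _ in range(500)]                      # original survey records

prior_margin = {"HS": 0.6, "BA": 0.4}             # prior belief about P(educ)
n_aug = 100                                       # more rows = stronger prior

augmented = [{"educ": level, "employed": None}    # remaining variables missing
             for level, p in prior_margin.items()
             for _ in range(int(n_aug * p))]

combined = data + augmented                       # what the MCMC would see
print(len(combined))
```

The MCMC then treats the `None` entries as ordinary missing data, which is what lets standard latent class samplers handle the concatenated file.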

The third method leverages the information from a gold standard survey to model reporting error. Survey data are subject to reporting error when respondents misunderstand the question or accidentally select the wrong response. Sometimes survey respondents knowingly select the wrong response, for example, by reporting a higher level of education than they actually have attained. We present an approach that allows an analyst to model reporting error by incorporating information from a gold standard survey. The analyst can specify various reporting error models and assess how sensitive their conclusions are to different assumptions about the reporting error process. We illustrate the approach using simulations based on data from the 1993 National Survey of College Graduates. We use the method to impute error-corrected educational attainments in the 2010 American Community Survey using the 2010 National Survey of College Graduates as the gold standard survey.

Relevance:

100.00%

Publisher:

Abstract:

Terrestrial ecosystems, occupying more than 25% of the Earth's surface, can serve as 'biological valves' in regulating the anthropogenic emissions of atmospheric aerosol particles and greenhouse gases (GHGs) as responses to their surrounding environments. While the significance of quantifying the exchange rates of GHGs and atmospheric aerosol particles between the terrestrial biosphere and the atmosphere is hardly questioned in many scientific fields, the progress in improving model predictability, data interpretation, or the combination of the two remains impeded by the lack of a precise framework elucidating their dynamic transport processes over a wide range of spatiotemporal scales. The difficulty in developing prognostic modeling tools to quantify the source or sink strength of these atmospheric substances can be further magnified by the fact that the climate system is also sensitive to the feedback from terrestrial ecosystems, forming the so-called 'feedback cycle'. Hence, the emergent need is to reduce uncertainties when assessing this complex and dynamic feedback cycle, which is necessary to support the decisions of mitigation and adaptation policies associated with human activities (e.g., anthropogenic emission controls and land use management) under current and future climate regimes.

With the goal to improve the predictions for the biosphere-atmosphere exchange of biologically active gases and atmospheric aerosol particles, the main focus of this dissertation is on revising and up-scaling the biotic and abiotic transport processes from leaf to canopy scales. The validity of previous modeling studies in determining the exchange rate of gases and particles is evaluated with detailed descriptions of their limitations. Mechanistic modeling approaches along with empirical studies across different scales are employed to refine the mathematical descriptions of surface conductance responsible for gas and particle exchanges as commonly adopted by all operational models. Specifically, how variation in horizontal leaf area density within the vegetated medium, leaf size, and leaf microroughness impacts the aerodynamic attributes and thereby the ultrafine particle collection efficiency at the leaf/branch scale is explored using wind tunnel experiments, with interpretations by a porous media model and a scaling analysis. A multi-layered and size-resolved second-order closure model, combined with particle flux and concentration measurements within and above a forest, is used to explore the particle transport processes within the canopy sub-layer and the partitioning of particle deposition onto the canopy medium and forest floor. For gases, a modeling framework accounting for the leaf-level boundary layer effects on the stomatal pathway for gas exchange is proposed and combined with sap flux measurements in a wind tunnel to assess how leaf-level transpiration varies with increasing wind speed. How exogenous environmental conditions and endogenous soil-root-stem-leaf hydraulic and eco-physiological properties impact the above- and below-ground water dynamics in the soil-plant system and shape plant responses to droughts is assessed by a porous media model that accommodates the transient water flow within the plant vascular system and is coupled with the aforementioned leaf-level gas exchange model and soil-root interaction model. It should be noted that tackling all aspects of potential issues causing uncertainties in forecasting the feedback cycle between terrestrial ecosystems and the climate is unrealistic in a single dissertation, but further research questions and opportunities based on the foundation derived from this dissertation are also briefly discussed.

Relevance:

100.00%

Publisher:

Abstract:

The use of DNA as a polymeric building material transcends its function in biology and is exciting in bionanotechnology for applications ranging from biosensing, to diagnostics, and to targeted drug delivery. These applications are enabled by DNA’s unique structural and chemical properties, embodied as a directional polyanion that exhibits molecular recognition capabilities. Hence, the efficient and precise synthesis of high molecular weight DNA materials has become key to advance DNA bionanotechnology. Current synthesis methods largely rely on either solid phase chemical synthesis or template-dependent polymerase amplification. The inherent step-by-step fashion of solid phase synthesis limits the length of the resulting DNA to typically less than 150 nucleotides. In contrast, polymerase based enzymatic synthesis methods (e.g., polymerase chain reaction) are not limited by product length, but require a DNA template to guide the synthesis. Furthermore, advanced DNA bionanotechnology requires tailorable structural and self-assembly properties. Current synthesis methods, however, often involve multiple conjugating reactions and extensive purification steps.

The research described in this dissertation aims to develop a facile method to synthesize high molecular weight, single-stranded DNA (or polynucleotide) with versatile functionalities. We exploit the ability of a template-independent DNA polymerase, terminal deoxynucleotidyl transferase (TdT), to catalyze the polymerization of 2’-deoxyribonucleoside 5’-triphosphates (dNTP, monomer) from the 3’-hydroxyl group of an oligodeoxyribonucleotide (initiator). We termed this enzymatic synthesis method TdT-catalyzed enzymatic polymerization, or TcEP.

Specifically, this dissertation is structured to address three specific research aims. With the objective to generate high molecular weight polynucleotides, Specific Aim 1 studies the reaction kinetics of TcEP by investigating the polymerization of 2’-deoxythymidine 5’-triphosphates (monomer) from the 3’-hydroxyl group of oligodeoxyribothymidine (initiator) using in situ 1H NMR and fluorescent gel electrophoresis. We found that TcEP kinetics follows the “living” chain-growth polycondensation mechanism, and like in “living” polymerizations, the molecular weight of the final product is determined by the starting molar ratio of monomer to initiator. The distribution of the molecular weight is crucially influenced by the molar ratio of initiator to TdT. We developed a reaction kinetics model that allows us to quantitatively describe the reaction and predict the molecular weight of the reaction products.
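The "living" polymerization relationships mentioned above can be written down directly: the number-average degree of polymerization is set by the monomer-to-initiator ratio, and an ideal Poisson chain-length distribution implies a dispersity close to 1. A sketch with illustrative numbers, not the dissertation's data:

```python
# Textbook "living" chain-growth relationships: DPn from the monomer-to-
# initiator ratio, and the dispersity of an ideal Poisson distribution.
# The 1000:1 ratio below is illustrative only.

def living_polymerization(monomer_conc, initiator_conc, conversion=1.0):
    dp_n = conversion * monomer_conc / initiator_conc  # mean chain length
    dispersity = 1 + 1 / dp_n                          # Poisson: Mw/Mn ≈ 1 + 1/DPn
    return dp_n, dispersity

dp_n, pdi = living_polymerization(monomer_conc=1000, initiator_conc=1)
print(f"DPn = {dp_n:.0f} nucleotides, dispersity ≈ {pdi:.3f}")
```

This is why, as the abstract notes, the final molecular weight is set by the monomer:initiator ratio while the initiator:TdT ratio shapes the width of the distribution in practice.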

Specific Aim 2 further explores TcEP’s ability to transcend homo-polynucleotide synthesis by varying the choice of initiators and monomers. We investigated the effects of initiator length and sequence on TcEP, and found that an effective initiator should be at least 10 nucleotides long and that the formation of secondary structures close to the 3’-hydroxyl group can impede the polymerization reaction. We also demonstrated TcEP’s capacity to incorporate a wide range of unnatural dNTPs into the growing chain, such as hydrophobic fluorescent dNTPs and fluoro-modified dNTPs. By harnessing the encoded nucleotide sequence of an initiator and the chemical diversity of monomers, TcEP enables us to introduce molecular recognition capabilities and chemical functionalities at the 5’-terminus and 3’-terminus, respectively.

Building on TcEP’s synthesis capacities, in Specific Aim 3 we invented a two-step strategy to synthesize diblock amphiphilic polynucleotides, in which the first, hydrophilic block serves as a macro-initiator for the growth of the second block, comprised of natural and/or unnatural nucleotides. By tuning the hydrophilic length, we synthesized the amphiphilic diblock polynucleotides that can self-assemble into micellar structures ranging from star-like to crew-cut morphologies. The observed self-assembly behaviors agree with predictions from dissipative particle dynamics simulations as well as scaling law for polyelectrolyte block copolymers.

In summary, we developed an enzymatic synthesis method (i.e., TcEP) that enables the facile synthesis of high molecular weight polynucleotides with low polydispersity. Although we can control the nucleotide sequence only to a limited extent, TcEP offers a way to integrate an oligodeoxyribonucleotide with a specific sequence at the 5’-terminus while simultaneously incorporating functional groups along the growing chain. Additionally, we used TcEP to synthesize amphiphilic polynucleotides that display self-assembly behavior. We anticipate that our facile synthesis method will not only advance molecular biology, but also invigorate materials science and bionanotechnology.