9 results for Presence-absence Data

at Duke University


Relevance: 40.00%

Abstract:

Human use of the oceans is increasingly in conflict with conservation of endangered species. Methods for managing the spatial and temporal placement of industries such as military, fishing, transportation and offshore energy have historically been post hoc; i.e., the time and place of human activity is often already determined before assessment of environmental impacts. In this dissertation, I build robust species distribution models in two case study areas, the U.S. Atlantic (Best et al. 2012) and British Columbia (Best et al. 2015), predicting presence and abundance, respectively, from scientific surveys. These models are then applied to novel decision frameworks for preemptively suggesting optimal placement of human activities in space and time to minimize ecological impacts: siting offshore wind energy development, and routing ships to minimize the risk of striking whales. Both decision frameworks relate the tradeoff between conservation risk and industry profit with synchronized variable and map views as online spatial decision support systems.

For siting offshore wind energy development (OWED) in the U.S. Atlantic (chapter 4), bird density maps are combined across species with weights for OWED sensitivity to collision and displacement, and 10 km² sites are compared against OWED profitability based on average annual wind speed at 90 m hub height and distance to the transmission grid. A spatial decision support system enables toggling between the map and tradeoff plot views by site. A selected site can be inspected for sensitivity to cetaceans throughout the year, so as to identify months of the year which minimize episodic impacts of pre-operational activities such as seismic airgun surveying and pile driving.
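
A minimal sketch of this siting tradeoff, under invented numbers: per-site conservation risk as a sensitivity-weighted sum of species densities, a toy profitability index from wind speed and grid distance, and a Pareto (non-dominated) screen of sites. All array names, weights, and the profit formula are illustrative assumptions, not values from the dissertation.

```python
import numpy as np

n_sites = 1000
rng = np.random.default_rng(0)

# density[i, j]: mean density of species j at site i (animals / km^2), simulated
density = rng.lognormal(mean=0.0, sigma=1.0, size=(n_sites, 5))
# collision/displacement sensitivity weight per species (0..1), assumed
sensitivity = np.array([0.9, 0.7, 0.4, 0.2, 0.1])

wind_speed = rng.normal(8.5, 1.0, n_sites)    # m/s at 90 m hub height
dist_to_grid = rng.uniform(5, 150, n_sites)   # km to transmission grid

# Conservation risk per site: sensitivity-weighted density summed over species
risk = density @ sensitivity

# Toy profitability index: rises with wind speed, falls with cable distance
profit = wind_speed**3 - 0.5 * dist_to_grid

# Sites on the efficient frontier: no other site has both higher profit and lower risk
order = np.argsort(-profit)
best_risk = np.minimum.accumulate(risk[order])
frontier = order[risk[order] <= best_risk]
print(f"{frontier.size} of {n_sites} sites are non-dominated")
```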

Routing ships to avoid whale strikes (chapter 5) can be similarly viewed as a tradeoff, but is a different problem spatially. A cumulative cost surface is generated from density surface maps and the conservation status of cetaceans, then applied as a resistance surface to calculate least-cost routes between start and end locations, i.e. ports and entrance locations to study areas. Varying a multiplier on the cost surface enables calculation of multiple routes with different costs to conservation of cetaceans versus cost to the transportation industry, measured as distance. Similar to the siting chapter, a spatial decision support system enables toggling between the map and tradeoff plot views of proposed routes. The user can also input arbitrary start and end locations to calculate the tradeoff on the fly.
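
The routing step can be sketched as follows, assuming scikit-image's route_through_array is available for the least-cost path and using simulated grids in place of the actual cetacean cost surface; the multiplier sweep mirrors the tradeoff described above.

```python
import numpy as np
from skimage.graph import route_through_array  # assumed available

rng = np.random.default_rng(1)
whale_cost = rng.gamma(shape=2.0, scale=1.0, size=(100, 100))  # stand-in for density x status
distance_cost = np.ones_like(whale_cost)                       # uniform per-cell distance cost

start, end = (5, 5), (95, 95)  # e.g. port and study-area entrance (grid indices)

for multiplier in (0.0, 1.0, 10.0):
    # Resistance surface: distance cost plus scaled conservation cost
    resistance = distance_cost + multiplier * whale_cost
    path, total_cost = route_through_array(
        resistance, start, end, fully_connected=True, geometric=True
    )
    path = np.asarray(path)
    conservation = whale_cost[path[:, 0], path[:, 1]].sum()
    print(f"multiplier={multiplier:5.1f}  route length={len(path):4d} cells  "
          f"conservation cost={conservation:8.1f}")
```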

Essential inputs to these decision frameworks are the distributions of the species. The two preceding chapters comprise species distribution models from the two case study areas, U.S. Atlantic (chapter 2) and British Columbia (chapter 3), predicting presence and density, respectively. Although density is preferred for estimating potential biological removal, per Marine Mammal Protection Act requirements in the U.S., the necessary parameters, especially distance and angle of observation, are less readily available across publicly mined datasets.

In the case of predicting cetacean presence in the U.S. Atlantic (chapter 2), I extracted datasets from the online OBIS-SEAMAP geo-database, and integrated scientific surveys conducted by ship (n=36) and aircraft (n=16), weighting a Generalized Additive Model by minutes surveyed within space-time grid cells to harmonize effort between the two survey platforms. For each of 16 cetacean species guilds, I predicted the probability of occurrence from static environmental variables (water depth, distance to shore, distance to continental shelf break) and time-varying conditions (monthly sea-surface temperature). To generate maps of presence vs. absence, Receiver Operating Characteristic (ROC) curves were used to define the optimal threshold that minimizes false positive and false negative error rates. I integrated model outputs, including tables (species in guilds, input surveys) and plots (fit of environmental variables, ROC curve), into an online spatial decision support system, allowing for easy navigation of models by taxon, region, season, and data provider.
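
A hedged sketch of the thresholding step only: given predicted occurrence probabilities and observed presences, pick the ROC threshold that minimizes the sum of false positive and false negative rates (Youden's J). The GAM fitting itself is not reproduced; the scores below are simulated stand-ins.

```python
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(2)
observed = rng.integers(0, 2, 500)                                    # 1 = species observed in cell
predicted = np.clip(observed * 0.4 + rng.uniform(0, 0.6, 500), 0, 1)  # GAM-like occurrence scores

fpr, tpr, thresholds = roc_curve(observed, predicted)
j = tpr - fpr                                   # Youden's J = sensitivity + specificity - 1
best = thresholds[np.argmax(j)]
presence_map = predicted >= best                # binary presence/absence surface
print(f"optimal threshold = {best:.3f}, "
      f"sensitivity = {tpr[np.argmax(j)]:.2f}, specificity = {1 - fpr[np.argmax(j)]:.2f}")
```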

For predicting cetacean density within the inner waters of British Columbia (chapter 3), I calculated density from systematic, line-transect marine mammal surveys conducted by Raincoast Conservation Foundation over multiple years and seasons (summer 2004, 2005, 2008, and spring/autumn 2007). Abundance estimates were calculated using two different methods: Conventional Distance Sampling (CDS) and Density Surface Modelling (DSM). CDS generates a single density estimate for each stratum, whereas DSM explicitly models spatial variation and offers potential for greater precision by incorporating environmental predictors. Although DSM yields a more relevant product for the purposes of marine spatial planning, CDS has proven useful in cases where fewer observations are available for seasonal and inter-annual comparison, particularly for the scarcely observed elephant seal. Abundance estimates are provided on a stratum-specific basis. Steller sea lions and harbour seals are further differentiated by ‘hauled out’ and ‘in water’. This analysis updates previous estimates (Williams & Thomas 2007) by including additional years of effort, providing greater spatial precision with the DSM method than with CDS, reporting on spring and autumn seasons (rather than summer alone), and providing new abundance estimates for Steller sea lion and northern elephant seal. In addition to providing a baseline of marine mammal abundance and distribution against which future changes can be compared, this information offers the opportunity to assess the risks posed to marine mammals by existing and emerging threats, such as fisheries bycatch, ship strikes, and the increased risk of oil spills and ocean noise associated with growing container ship and oil tanker traffic in British Columbia’s continental shelf waters.
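
An illustrative sketch of the CDS step: fit a half-normal detection function to perpendicular sighting distances and convert sightings per unit transect length into a stratum density estimate. Distances, transect length, and group sizes below are simulated, not the Raincoast survey data, and the estimator ignores size bias and variance estimation for brevity.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

rng = np.random.default_rng(3)
w = 2.0                                   # truncation distance (km), assumed
distances = np.abs(rng.normal(0, 0.6, 80))
distances = distances[distances <= w]     # perpendicular distances to detected groups
L = 500.0                                 # total transect length in the stratum (km)
mean_group = 1.8                          # mean group size

def neg_log_lik(sigma):
    # Truncated half-normal likelihood; mu is the effective strip half-width
    mu = sigma * np.sqrt(2 * np.pi) * (norm.cdf(w / sigma) - 0.5)
    return -np.sum(-distances**2 / (2 * sigma**2) - np.log(mu))

sigma = minimize_scalar(neg_log_lik, bounds=(0.05, 5.0), method="bounded").x
mu = sigma * np.sqrt(2 * np.pi) * (norm.cdf(w / sigma) - 0.5)
density = len(distances) * mean_group / (2 * mu * L)   # animals per km^2
print(f"sigma = {sigma:.2f} km, effective half-width = {mu:.2f} km, "
      f"density = {density:.3f} animals/km^2")
```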

Starting with marine animal observations at specific coordinates and times, I combine these data with environmental data, often satellite derived, to produce seascape predictions generalizable in space and time. These habitat-based models enable prediction of encounter rates and, in the case of density surface models, abundance, which can then be applied to management scenarios. Specific human activities, OWED and shipping, are then compared within a tradeoff decision support framework, enabling interchangeable map and tradeoff plot views. These products make complex processes transparent, allowing conservation, industry and other stakeholders to game scenarios towards optimal marine spatial management, fundamental to the tenets of marine spatial planning, ecosystem-based management and dynamic ocean management.

Relevance: 40.00%

Abstract:

Periods of drought and low streamflow can have profound impacts on both human and natural systems. People depend on a reliable source of water for numerous reasons, including potable water supply and producing economic value through agriculture or energy production. Aquatic ecosystems also depend on water, in addition to the economic benefits they provide to society through ecosystem services. Given that periods of low streamflow may become more extreme and frequent in the future, it is important to study the factors that control water availability during these times. In the absence of precipitation, the slower hydrological response of groundwater systems will play an amplified role in water supply. Understanding the variability of the fraction of streamflow contributed by baseflow, or groundwater, during periods of drought provides insight into what future water availability may look like and how it can best be managed. The Mills River Basin in North Carolina is chosen as a case study to test this understanding. First, a physically meaningful estimate of baseflow is obtained from USGS streamflow data via computerized hydrograph analysis techniques. Time series analysis, including wavelet analysis, is then applied to highlight signals of non-stationarity and evaluate changes in variance, in order to better understand the natural variability of baseflow and low flows. In addition to natural variability, human influence must be taken into account in order to accurately assess how the combined system reacts to periods of low flow. Defining a combined demand that consists of both natural and human demand allows a more rigorous assessment of the level of sustainable use of a shared resource, in this case water. The analysis of baseflow variability can differ based on regional location and local hydrogeology, but it was found that baseflow varies from multiyear scales, such as those associated with ENSO (3.5, 7 years), up to multidecadal time scales, with most of the contributing variance coming from decadal or multiyear scales. It was also found that the behavior of baseflow, and subsequently water availability, depends a great deal on overall precipitation, the tracks of hurricanes or tropical storms and associated climate indices, as well as physiography and hydrogeology. Using the Duke Combined Hydrology Model (DCHM), reasonably accurate estimates of streamflow during periods of low flow were obtained, due in part to the model’s ability to capture subsurface processes. Being able to accurately simulate streamflow levels and subsurface interactions during periods of drought can be very valuable to water suppliers and decision makers, and ultimately impacts citizens. Knowledge of future droughts and periods of low flow, together with tracking of customer demand, will allow water suppliers to adopt better management practices, such as knowing when to withdraw more water during a surplus so that stress on the system is minimized when supply is not ample.
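
A minimal sketch of one common automated hydrograph-separation technique (the Lyne-Hollick recursive digital filter), shown only to illustrate estimating the baseflow fraction of daily streamflow; the filter parameter and the synthetic discharge series are assumptions, not the study's USGS data or its particular separation method.

```python
import numpy as np

def lyne_hollick_baseflow(q, alpha=0.925):
    """Single forward pass of the Lyne-Hollick filter on daily discharge q."""
    quick = np.zeros_like(q, dtype=float)
    for t in range(1, len(q)):
        quick[t] = alpha * quick[t - 1] + 0.5 * (1 + alpha) * (q[t] - q[t - 1])
        quick[t] = max(quick[t], 0.0)          # quickflow cannot be negative
    return np.maximum(q - quick, 0.0)          # baseflow is the remainder, floored at zero

# Synthetic daily discharge: seasonal signal plus storm pulses (arbitrary units)
days = np.arange(3650)
rng = np.random.default_rng(4)
q = 50 + 30 * np.sin(2 * np.pi * days / 365.25) + rng.gamma(1.0, 5.0, days.size)

bf = lyne_hollick_baseflow(q)
print(f"long-term baseflow index = {bf.sum() / q.sum():.2f}")
```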

Relevance: 30.00%

Abstract:

This paper describes a methodology for detecting anomalies from sequentially observed and potentially noisy data. The proposed approach consists of two main elements: 1) filtering, or assigning a belief or likelihood to each successive measurement based upon our ability to predict it from previous noisy observations and 2) hedging, or flagging potential anomalies by comparing the current belief against a time-varying and data-adaptive threshold. The threshold is adjusted based on the available feedback from an end user. Our algorithms, which combine universal prediction with recent work on online convex programming, do not require computing posterior distributions given all current observations and involve simple primal-dual parameter updates. At the heart of the proposed approach lie exponential-family models which can be used in a wide variety of contexts and applications, and which yield methods that achieve sublinear per-round regret against both static and slowly varying product distributions with marginals drawn from the same exponential family. Moreover, the regret against static distributions coincides with the minimax value of the corresponding online strongly convex game. We also prove bounds on the number of mistakes made during the hedging step relative to the best offline choice of the threshold with access to all estimated beliefs and feedback signals. We validate the theory on synthetic data drawn from a time-varying distribution over binary vectors of high dimensionality, as well as on the Enron email dataset.
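
A hedged sketch of the two-stage scheme described above: (1) assign each new binary vector a likelihood under a running product-Bernoulli model, and (2) flag it as anomalous when the negative log-likelihood exceeds a threshold nudged up or down by user feedback. The update rule, constants, and simulated feedback are illustrative stand-ins, not the paper's exact primal-dual algorithm.

```python
import numpy as np

rng = np.random.default_rng(5)
d, T = 50, 2000
theta = np.full(d, 0.5)       # running Bernoulli means (the "filtering" model)
tau, eta = 40.0, 0.5          # anomaly threshold and its learning rate

for t in range(1, T + 1):
    anomalous = rng.random() < 0.02
    p_true = (0.8 if anomalous else 0.1) * np.ones(d)
    x = (rng.random(d) < p_true).astype(float)

    # Filtering: negative log-likelihood of x under the current model
    nll = -np.sum(x * np.log(theta) + (1 - x) * np.log(1 - theta))
    flag = nll > tau

    # Hedging: simulated end-user feedback adjusts the threshold when we err
    if flag and not anomalous:
        tau += eta            # false alarm -> raise threshold
    elif not flag and anomalous:
        tau -= eta            # miss -> lower threshold

    # Model update with smoothing (keeps probabilities away from 0 and 1)
    theta = np.clip(theta + (x - theta) / (t + 10), 0.01, 0.99)

print(f"final threshold = {tau:.1f}")
```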

Relevance: 30.00%

Abstract:

Described here is a mass spectrometry-based screening assay for the detection of protein-ligand binding interactions in multicomponent protein mixtures. The assay utilizes an oxidation labeling protocol that involves using hydrogen peroxide to selectively oxidize methionine residues in proteins in order to probe the solvent accessibility of these residues as a function of temperature. The extent to which methionine residues in a protein are oxidized after specified reaction times at a range of temperatures is determined in a MALDI analysis of the intact proteins and/or an LC-MS analysis of tryptic peptide fragments generated after the oxidation reaction is quenched. Ultimately, the mass spectral data is used to construct thermal denaturation curves for the detected proteins. In this proof-of-principle work, the protocol is applied to a four-protein model mixture comprised of ubiquitin, ribonuclease A (RNaseA), cyclophilin A (CypA), and bovine carbonic anhydrase II (BCAII). The new protocol's ability to detect protein-ligand binding interactions by comparing thermal denaturation data obtained in the absence and in the presence of ligand is demonstrated using cyclosporin A (CsA) as a test ligand. The known binding interaction between CsA and CypA was detected using both the MALDI- and LC-MS-based readouts described here.
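
An illustrative sketch of the final data-analysis step implied above: fit a two-state sigmoid to the fraction of oxidized methionine measured at each temperature and read off the transition midpoint with and without ligand. The data points are simulated, with the midpoint shift mimicking a stabilizing binding interaction; none of the values come from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(temp, t_mid, slope):
    # Two-state thermal transition: fraction oxidized rises around t_mid
    return 1.0 / (1.0 + np.exp(-(temp - t_mid) / slope))

temps = np.arange(30, 71, 5, dtype=float)          # degrees C
rng = np.random.default_rng(6)
apo = sigmoid(temps, 48.0, 3.0) + rng.normal(0, 0.03, temps.size)     # no ligand
bound = sigmoid(temps, 54.0, 3.0) + rng.normal(0, 0.03, temps.size)   # with ligand

(t_apo, _), _ = curve_fit(sigmoid, temps, apo, p0=(50, 3))
(t_bound, _), _ = curve_fit(sigmoid, temps, bound, p0=(50, 3))
print(f"midpoint shift on binding: {t_bound - t_apo:+.1f} degrees C")
```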

Relevance: 30.00%

Abstract:

Therapeutic anticancer vaccines are designed to boost patients' immune responses to tumors. One approach is to use a viral vector to deliver antigen to in situ DCs, which then activate tumor-specific T cell and antibody responses. However, vector-specific neutralizing antibodies and suppressive cell populations such as Tregs remain great challenges to the efficacy of this approach. We report here that an alphavirus vector, packaged in virus-like replicon particles (VRP) and capable of efficiently infecting DCs, could be repeatedly administered to patients with metastatic cancer expressing the tumor antigen carcinoembryonic antigen (CEA) and that it overcame high titers of neutralizing antibodies and elevated Treg levels to induce clinically relevant CEA-specific T cell and antibody responses. The CEA-specific antibodies mediated antibody-dependent cellular cytotoxicity against tumor cells from human colorectal cancer metastases. In addition, patients with CEA-specific T cell responses exhibited longer overall survival. These data suggest that VRP-based vectors can overcome the presence of neutralizing antibodies to break tolerance to self antigen and may be clinically useful for immunotherapy in the setting of tumor-induced immunosuppression.

Relevance: 30.00%

Abstract:

BACKGROUND: Biological processes occur on a vast range of time scales, and many of them occur concurrently. As a result, system-wide measurements of gene expression have the potential to capture many of these processes simultaneously. The challenge, however, is to separate these processes and time scales in the data. In many cases, the number of processes and their time scales are unknown. This issue is particularly relevant to developmental biologists, who are interested in processes such as growth, segmentation and differentiation, which can all take place simultaneously, but on different time scales. RESULTS: We introduce a flexible and statistically rigorous method for detecting different time scales in time-series gene expression data, by identifying expression patterns that are temporally shifted between replicate datasets. We apply our approach to a Saccharomyces cerevisiae cell-cycle dataset and an Arabidopsis thaliana root developmental dataset. In both datasets our method successfully detects processes operating on several different time scales. Furthermore we show that many of these time scales can be associated with particular biological functions. CONCLUSIONS: The spatiotemporal modules identified by our method suggest the presence of multiple biological processes, acting at distinct time scales in both the Arabidopsis root and yeast. Using similar large-scale expression datasets, the identification of biological processes acting at multiple time scales in many organisms is now possible.
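
A minimal sketch of the core idea only (illustrative, not the paper's statistical method): estimate the temporal offset of one gene's expression profile between two replicate time series by locating the peak of their cross-correlation. The profiles and sampling interval below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(7)
t = np.arange(0, 48, 2.0)                            # hours, sampled every 2 h
profile = np.sin(2 * np.pi * t / 24)                 # 24 h process in replicate 1
shifted = np.sin(2 * np.pi * (t - 6) / 24)           # same process, 6 h later in replicate 2
rep1 = profile + rng.normal(0, 0.1, t.size)
rep2 = shifted + rng.normal(0, 0.1, t.size)

a = (rep1 - rep1.mean()) / rep1.std()
b = (rep2 - rep2.mean()) / rep2.std()
xcorr = np.correlate(b, a, mode="full")              # correlation of rep2 against rep1 at all lags
lags = np.arange(-t.size + 1, t.size) * 2.0          # convert sample lags to hours
print(f"estimated shift = {lags[np.argmax(xcorr)]:.0f} h (true shift = 6 h)")
```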

Relevance: 30.00%

Abstract:

BACKGROUND: Historically, only partial assessments of data quality have been performed in clinical trials, for which the most common method of measuring database error rates has been to compare the case report form (CRF) to database entries and count discrepancies. Importantly, errors arising from medical record abstraction and transcription are rarely evaluated as part of such quality assessments. Electronic Data Capture (EDC) technology has had a further impact, as paper CRFs typically leveraged for quality measurement are not used in EDC processes. METHODS AND PRINCIPAL FINDINGS: The National Institute on Drug Abuse Treatment Clinical Trials Network has developed, implemented, and evaluated methodology for holistically assessing data quality on EDC trials. We characterize the average source-to-database error rate (14.3 errors per 10,000 fields) for the first year of use of the new evaluation method. This error rate was significantly lower than the average of published error rates for source-to-database audits, and was similar to CRF-to-database error rates reported in the published literature. We attribute this largely to an absence of medical record abstraction on the trials we examined, and to an outpatient setting characterized by less acute patient conditions. CONCLUSIONS: Historically, medical record abstraction is the most significant source of error by an order of magnitude, and should be measured and managed during the course of clinical trials. Source-to-database error rates are highly dependent on the amount of structured data collection in the clinical setting and on the complexity of the medical record, dependencies that should be considered when developing data quality benchmarks.
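
For concreteness, the error-rate bookkeeping implied above is simply errors found in a source-to-database audit expressed per 10,000 fields inspected; the counts below are invented and chosen only so the result matches the reported average.

```python
# Hypothetical audit counts; only the per-10,000-fields convention comes from the abstract.
errors_found = 143
fields_audited = 100_000
rate_per_10k = errors_found / fields_audited * 10_000
print(f"{rate_per_10k:.1f} errors per 10,000 fields")
```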

Relevance: 30.00%

Abstract:

CD8+ T cells are associated with long term control of virus replication to low or undetectable levels in a population of HIV+ therapy-naïve individuals known as virus controllers (VCs; <5000 RNA copies/ml and CD4+ lymphocyte counts >400 cells/µl). These subjects' ability to control viremia in the absence of therapy makes them the gold standard for the type of CD8+ T-cell response that should be induced with a vaccine. Studying the regulation of CD8+ T cell responses in these VCs provides the opportunity to discover mechanisms of durable control of HIV-1. Previous research has shown that the CD8+ T cell population in VCs is heterogeneous in its ability to inhibit virus replication and that distinct T cells are responsible for virus inhibition. Further defining both the functional properties and the regulation of the specific features of the select CD8+ T cells responsible for potent control of viremia in VCs would enable better evaluation of T cell-directed vaccine strategies and may inform the design of new therapies.

Here we discuss the progress made in elucidating the features and regulation of the CD8+ T cell response in virus controllers. We first detail the development of assays to quantify CD8+ T cells' ability to inhibit virus replication. This includes the use of a multi-clade HIV-1 panel, which can subsequently be used as a tool for evaluation of T cell-directed vaccines. We used these assays to evaluate the CD8+ response among cohorts of HIV-1 seronegative, HIV-1 acutely infected, and HIV-1 chronically infected (both VC and chronic viremic) patients. Contact and soluble CD8+ T cell virus inhibition assays (VIAs) are able to distinguish these patient groups based on the presence and magnitude of the responses. When employed in conjunction with peptide stimulation, the soluble assay reveals that peptide stimulation induces CD8+ T cell responses with a prevalence of Gag p24 and Nef specificity among the virus controllers tested. Given this prevalence, we aimed to determine the gene expression profile of Gag p24-, Nef-, and unstimulated CD8+ T cells. RNA was isolated from CD8+ T cells from two virus controllers with strong virus inhibition and one seronegative donor after a 5.5-hour stimulation period, then analyzed using the Illumina Human BeadChip platform (Duke Center for Human Genome Variation). Analysis revealed that 565 (242 Nef and 323 Gag) genes were differentially expressed in CD8+ T cells that were able to inhibit virus replication compared to those that could not. We compared the differentially expressed genes to published data sets from other CD8+ T cell effector function experiments, focusing our analysis on the most recurring genes with immunological, gene regulatory, apoptotic or unknown functions. The most commonly identified gene in these studies was TNFRSF9. Using PCR in a larger cohort of virus controllers, we confirmed the up-regulation of TNFRSF9 in Gag p24- and Nef-specific CD8+ T cell mediated virus inhibition. We also observed an increase in the mRNAs encoding the antiviral cytokines macrophage inflammatory proteins (MIP-1α, MIP-1αP, MIP-1β), interferon gamma (IFN-γ), granulocyte-macrophage colony-stimulating factor (GM-CSF), and the recently identified lymphotactin (XCL1).

Our previous work suggests the CD8+ T-cell response to HIV-1 can be regulated at the level of gene regulation. Because RNA abundance is modulated by transcription of new mRNAs and decay of new and existing RNA, we aimed to evaluate the net rate of transcription and mRNA decay for the cytokines we identified as differentially regulated. To estimate rates of mRNA synthesis and decay, we stimulated isolated CD8+ T cells with Gag p24 and Nef peptides, adding 4-thiouridine (4SU) during the final hour of stimulation, allowing for separation of RNA made during that hour. Subsequent PCR of RNA isolated from these cells allowed us to determine how much mRNA was made for our genes of interest during the final hour, which we used to calculate the rate of transcription. To assess whether stimulation caused a change in RNA stability, we calculated the decay rates of these mRNAs over time. In Gag p24- and Nef-stimulated T cells, the abundance of the mRNA of many of the cytokines examined was dependent on changes in both transcription and mRNA decay, with evidence for potential differences in the regulation of mRNA between Nef- and Gag-specific CD8+ T cells. The results were highly reproducible: for one subject measured in three independent experiments, the results were concordant.
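
A hedged sketch of the decay-rate calculation only: fit a first-order exponential decay to mRNA abundance measured at several times after stimulation and report the decay constant and half-life. The time points and abundances are simulated, not the study's PCR data, and the true analysis would be done per cytokine and per stimulation condition.

```python
import numpy as np
from scipy.optimize import curve_fit

def exp_decay(t, a0, k):
    # First-order decay: abundance(t) = a0 * exp(-k t)
    return a0 * np.exp(-k * t)

time_h = np.array([0.0, 0.5, 1.0, 2.0, 4.0])        # hours after stimulation (assumed)
rng = np.random.default_rng(8)
abundance = exp_decay(time_h, 100.0, 0.35) * rng.normal(1, 0.05, time_h.size)  # relative mRNA

popt, _ = curve_fit(exp_decay, time_h, abundance, p0=(100, 0.3))
a0, k = popt
print(f"decay constant k = {k:.2f} /h, half-life = {np.log(2) / k:.1f} h")
```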

These data suggest that mRNA stability, in addition to transcription, is key in regulating the direct anti-HIV-1 function of antigen-specific memory CD8+ T cells by enabling rapid recall of anti-HIV-1 effector functions, namely the production and increased stability of antiviral cytokines. We have started to uncover the mechanisms employed by CD8+ T cell subsets with antigen-specific anti-HIV-1 activity, which in turn can enhance our ability to inhibit virus replication by informing both cure strategies and HIV-1 vaccine designs that aim to reduce transmission and aid in blocking HIV-1 acquisition.

Relevance: 30.00%

Abstract:

BACKGROUND: Ritonavir inhibition of cytochrome P450 3A4 decreases the elimination clearance of fentanyl by 67%. We used a pharmacokinetic model developed from published data to simulate the effect of sample patient-controlled epidural labor analgesic regimens on plasma fentanyl concentrations in the absence and presence of ritonavir-induced cytochrome P450 3A4 inhibition. METHODS: Fentanyl absorption from the epidural space was modeled using tanks-in-series delay elements. Systemic fentanyl disposition was described using a three-compartment pharmacokinetic model. Parameters for epidural drug absorption were estimated by fitting the model to reported plasma fentanyl concentrations measured after epidural administration. The validity of the model was assessed by comparing predicted plasma concentrations after epidural administration to published data. The effect of ritonavir was modeled as a 67% decrease in fentanyl elimination clearance. Plasma fentanyl concentrations were simulated for six sample patient-controlled epidural labor analgesic regimens over 24 h using the ritonavir and control models. Simulated data were analyzed to determine whether plasma fentanyl concentrations producing a 50% decrease in minute ventilation (6.1 ng/mL) were achieved. RESULTS: Simulated plasma fentanyl concentrations in the ritonavir group were higher than those in the control group for all sample labor analgesic regimens. Maximum plasma fentanyl concentrations were 1.8 ng/mL and 3.4 ng/mL for the normal and ritonavir simulations, respectively, and did not reach concentrations associated with a 50% decrease in minute ventilation. CONCLUSION: Our model predicts that even with maximal clinical dosing regimens of epidural fentanyl over 24 h, ritonavir-induced cytochrome P450 3A4 inhibition is unlikely to produce plasma fentanyl concentrations associated with a decrease in minute ventilation.
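
A hedged sketch of the simulation structure described above: a single epidural dose absorbed through a short chain of "tanks-in-series" delay compartments into a three-compartment disposition model, run with normal clearance and with clearance reduced by 67% to mimic ritonavir. All parameter values, the dose, and the two-tank chain length are invented placeholders, not the fitted values or regimens from the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

ka = 2.0                        # transfer rate between delay tanks (1/h), assumed
v1, v2, v3 = 13.0, 50.0, 300.0  # compartment volumes (L), assumed
q2, q3 = 30.0, 15.0             # inter-compartmental clearances (L/h), assumed

def rhs(t, y, cl):
    d1, d2, c1, c2, c3 = y      # two delay tanks + three compartments (amounts, micrograms)
    dd1 = -ka * d1
    dd2 = ka * d1 - ka * d2
    dc1 = (ka * d2 - cl * c1 / v1
           - q2 * (c1 / v1 - c2 / v2) - q3 * (c1 / v1 - c3 / v3))
    dc2 = q2 * (c1 / v1 - c2 / v2)
    dc3 = q3 * (c1 / v1 - c3 / v3)
    return [dd1, dd2, dc1, dc2, dc3]

dose = 100.0                    # micrograms, single epidural bolus (placeholder regimen)
t_eval = np.linspace(0, 24, 241)
for label, cl in (("control", 50.0), ("ritonavir", 50.0 * 0.33)):   # 67% clearance reduction
    sol = solve_ivp(rhs, (0, 24), [dose, 0, 0, 0, 0], args=(cl,), t_eval=t_eval)
    conc = sol.y[2] / v1        # central-compartment concentration, microgram/L == ng/mL
    print(f"{label:9s}: peak {conc.max():.2f} ng/mL "
          f"(50% minute-ventilation threshold: 6.1 ng/mL)")
```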