951 results for WIDE-RANGE CURRENT MEASUREMENT
Abstract:
Habitat loss, fragmentation, and degradation threaten the World’s ecosystems and species. These, and other threats, will likely be exacerbated by climate change. Due to a limited budget for conservation, we are forced to prioritize a few areas over others. These places are selected based on their uniqueness and vulnerability. One of the most famous examples is the biodiversity hotspots: areas where large quantities of endemic species meet alarming rates of habitat loss. Most of these places are in the tropics, where species have smaller ranges, diversity is higher, and ecosystems are most threatened.
Species distributions are useful to understand ecological theory and evaluate extinction risk. Small-ranged species, or those endemic to one place, are more vulnerable to extinction than widely distributed species. However, current range maps often overestimate the distribution of species, including areas that are not within the suitable elevation or habitat for a species. Consequently, assessment of extinction risk using these maps could underestimate vulnerability.
In order to be effective in our quest to conserve the World’s most important places we must: 1) Translate global and national priorities into practical local actions, 2) Find synergies between biodiversity conservation and human welfare, 3) Evaluate the different dimensions of threats, in order to design effective conservation measures and prepare for future threats, and 4) Improve the methods used to evaluate species’ extinction risk and prioritize areas for conservation. The purpose of this dissertation is to address these points in Colombia and other global biodiversity hotspots.
In Chapter 2, I identified global, strategic conservation priorities and then downscaled to practical local actions within the selected priorities in Colombia. I used existing range maps of 171 bird species to identify priority conservation areas that would protect the greatest number of at-risk (endemic and small-ranged) species in Colombia. The Western Andes had the highest concentration of such species (100 in total) but the lowest density of national parks. I then adjusted the priorities for this region by refining these species' ranges, selecting only areas of suitable elevation and remaining habitat. The estimated ranges of these species shrank by 18-100% after accounting for habitat and suitable elevation. Setting conservation priorities on the basis of currently available range maps excluded priority areas in the Western Andes and, by extension, likely does so elsewhere and for other taxa. By incorporating detailed maps of remaining natural habitats, I made practical recommendations for conservation actions. One recommendation was to restore forest connections to a patch of cloud forest about to become isolated from the main Andes.
For Chapter 3, I identified areas where bird conservation met ecosystem service protection in the Central Andes of Colombia. Prompted by the 11 November 2011 landslide near Manizales and the poor results to date of Article 111 of Colombia's Law 99 of 1993 as a conservation measure, I set out to prioritize conservation and restoration areas where landslide prevention would complement bird conservation in the Central Andes. This area is one of the most biodiverse places on Earth, but also one of the most threatened. Using the case of the Rio Blanco Reserve, near Manizales, I identified areas for conservation where endemic and small-range bird diversity was high and where landslide risk was also high. I further prioritized restoration areas by overlaying these conservation priorities with a forest cover map: restoring forests in bare areas of high landslide risk and important bird diversity yields benefits for both biodiversity and people. I developed a simple landslide susceptibility model using slope, forest cover, aspect, and stream proximity. Using publicly available bird range maps, refined by elevation, I mapped concentrations of endemic and small-range bird species. I identified 1.54 km2 of potential restoration areas in the Rio Blanco Reserve, and 886 km2 in the Central Andes region. By prioritizing these areas, I facilitate the application of Article 111, which requires local and regional governments to invest in land purchases for the conservation of watersheds.
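The chapter's susceptibility model combines slope, forest cover, aspect, and stream proximity; a minimal sketch of one way such a weighted-overlay index can be computed per map cell follows. The weights, the 60-degree slope cap, the south-facing aspect term, and the 500 m stream-decay distance are all illustrative assumptions, not values from the dissertation.

```python
import math

# Hypothetical weighted-overlay landslide susceptibility index
# (0 = stable, 1 = most prone). All weights and thresholds are illustrative.
def susceptibility(slope_deg, forest_cover, aspect_deg, stream_dist_m):
    s = min(slope_deg, 60.0) / 60.0               # steeper is more prone
    f = 1.0 - forest_cover                        # bare ground is more prone
    # South-facing slopes (aspect near 180 deg) assumed slightly more prone.
    a = 0.5 * (1.0 + math.cos(math.radians(aspect_deg - 180.0)))
    d = max(0.0, 1.0 - stream_dist_m / 500.0)     # risk decays away from streams
    return 0.4 * s + 0.3 * f + 0.1 * a + 0.2 * d  # weights sum to 1
```

A steep, bare, stream-side cell then scores far higher than a gentle forested one, which is the ordering the restoration prioritization relies on.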
Chapter 4 dealt with elevational ranges of montane birds and the impact of lowland deforestation on their ranges in the Western Andes of Colombia, an important biodiversity hotspot. Using point counts and mist-nets, I surveyed six altitudinal transects spanning 2200 to 2800m. Three transects were forested from 2200 to 2800m, and three were partially deforested with forest cover only above 2400m. I compared abundance-weighted mean elevation, minimum elevation, and elevational range width. In addition to analyzing the effect of deforestation on 134 species, I tested its impact within trophic guilds and habitat preference groups. Abundance-weighted mean and minimum elevations were not significantly different between forested and partially deforested transects. Range width was marginally different: as expected, ranges were larger in forested transects. Species in different trophic guilds and habitat preference categories showed different trends. These results suggest that deforestation may affect species’ elevational ranges, even within the forest that remains. Climate change will likely exacerbate harmful impacts of deforestation on species’ elevational distributions. Future conservation strategies need to account for this by protecting connected forest tracts across a wide range of elevations.
In Chapter 5, I refine the ranges of 726 species from six biodiversity hotspots by suitable elevation and habitat. This set of 172 bird species for the Atlantic Forest, 138 for Central America, 100 for the Western Andes of Colombia, 57 for Madagascar, 102 for Sumatra, and 157 for Southeast Asia met the criteria for range size, endemism, threat, and forest use. Of these 726 species, the Red List deems 108 to be threatened: 15 critically endangered, 29 endangered, and 64 vulnerable. When ranges are refined by elevational limits and remaining forest cover, 10 of the critically endangered species have ranges < 100 km2, but so do 2 endangered, 7 vulnerable, and 8 non-threatened species. Similarly, 4 critically endangered, 20 endangered, and 12 vulnerable species have refined ranges < 5000 km2, but so do 66 non-threatened species. A striking 89% of the species I classify into higher threat categories have < 50% of their refined ranges inside protected areas. I find that for 43% of the species I assessed, refined range sizes fall within thresholds that typically correspond to higher threat categories than their current assignments, and I recommend these species for closer inspection by those who assess risk. These assessments are not only important on a species-by-species basis: by combining the distributions of threatened species, I create maps of conservation priorities that differ significantly from those created from unrefined ranges.
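The reassessment logic can be summarized as a comparison of refined range area against the IUCN criterion B1 extent-of-occurrence thresholds (CR < 100 km2, EN < 5,000 km2, VU < 20,000 km2). A sketch follows; the species tuples are invented examples, and range size alone does not determine a Red List category (other subcriteria apply), so the sketch only flags candidates for the "closer inspection" the chapter recommends.

```python
# IUCN criterion B1 extent thresholds (km2) and the category they suggest.
THRESHOLDS = [(100.0, "CR"), (5000.0, "EN"), (20000.0, "VU")]

def threshold_category(range_km2):
    """Category suggested by range size alone (other criteria ignored)."""
    for limit, category in THRESHOLDS:
        if range_km2 < limit:
            return category
    return "LC"

def flag_for_reassessment(species):
    """species: (name, current_category, refined_range_km2) tuples.

    Returns the species whose refined range suggests a higher category
    than their current assignment."""
    severity = {"LC": 0, "VU": 1, "EN": 2, "CR": 3}
    flagged = []
    for name, current, refined_km2 in species:
        suggested = threshold_category(refined_km2)
        if severity[suggested] > severity[current]:
            flagged.append((name, current, suggested))
    return flagged
```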
Abstract:
Aims: Measurement of glycated hemoglobin (HbA1c) is an important indicator of glucose control over time. Point-of-care (POC) devices allow for rapid and convenient measurement of HbA1c, greatly facilitating diabetes care. We assessed two POC analyzers in the Peruvian Amazon where laboratory-based HbA1c testing is not available.
Methods: Venous blood samples were collected from 203 individuals from six different Amazonian communities with a wide range of HbA1c, 4.4-9.0% (25-75 mmol/mol). The results of the Afinion AS100 and the DCA Vantage POC analyzers were compared to a central laboratory using the Premier Hb9210 high-performance liquid chromatography (HPLC) method. Imprecision was assessed by performing 14 successive tests of a single blood sample.
Results: The correlation coefficient r for POC and HPLC results was 0.92 for the Afinion and 0.93 for the DCA Vantage. The Afinion generated higher HbA1c results than the HPLC (mean difference = +0.56% [+6 mmol/mol]; p < 0.001), as did the DCA Vantage (mean difference = +0.32% [+4 mmol/mol]). The bias observed between POC and HPLC did not vary by HbA1c level for the DCA Vantage (p = 0.190), but it did for the Afinion (p < 0.001). Imprecision results were CV = 1.75% for the Afinion and CV = 4.01% for the DCA Vantage. Sensitivity was 100% for both devices; specificity was 48.3% for the Afinion and 85.1% for the DCA Vantage; positive predictive value (PPV) was 14.4% for the Afinion and 34.9% for the DCA Vantage; and negative predictive value (NPV) for both devices was 100%. The area under the receiver operating characteristic (ROC) curve was 0.966 for the Afinion and 0.982 for the DCA Vantage. Agreement between HPLC and POC in classifying diabetes and prediabetes status was slight for the Afinion (kappa = 0.12) and moderate for the DCA Vantage (kappa = 0.45); classifications differed significantly from HPLC for both devices (McNemar's statistic = 89 and 28, respectively; both p < 0.001).
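The headline numbers above (systematic bias, imprecision as CV, and chance-corrected agreement) come from standard formulas; a toy sketch with made-up values, not the study's data:

```python
from statistics import mean, stdev

def mean_bias(poc, hplc):
    """Average POC-minus-HPLC difference (the systematic bias)."""
    return mean(p - h for p, h in zip(poc, hplc))

def cv_percent(replicates):
    """Imprecision of repeated tests as a coefficient of variation, in %."""
    return 100.0 * stdev(replicates) / mean(replicates)

def cohens_kappa(a, b, labels):
    """Agreement between two classifiers corrected for chance agreement."""
    n = len(a)
    p_obs = sum(x == y for x, y in zip(a, b)) / n
    p_exp = sum((a.count(l) / n) * (b.count(l) / n) for l in labels)
    return (p_obs - p_exp) / (1.0 - p_exp)
```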
Conclusions: Despite significant variation of HbA1c results between the Afinion and DCA Vantage analyzers compared to HPLC, we conclude that both analyzers should be considered in health clinics in the Peruvian Amazon for therapeutic adjustments if healthcare workers are aware of the differences relative to testing in a clinical laboratory. However, imprecision and bias were not low enough to recommend either device for screening purposes, and the local prevalence of anemia and malaria may interfere with diagnostic determinations for a substantial portion of the population.
Abstract:
Sudden changes in the stiffness of a structure are often indicators of structural damage. Detection of such sudden stiffness change from the vibrations of structures is important for Structural Health Monitoring (SHM) and damage detection. Non-contact measurement of these vibrations is a quick and efficient way for successful detection of sudden stiffness change of a structure. In this paper, we demonstrate the capability of Laser Doppler Vibrometry to detect sudden stiffness change in a Single Degree Of Freedom (SDOF) oscillator within a laboratory environment. The dynamic response of the SDOF system was measured using a Polytec RSV-150 Remote Sensing Vibrometer. This instrument employs Laser Doppler Vibrometry for measuring dynamic response. Additionally, the vibration response of the SDOF system was measured through a MicroStrain G-Link Wireless Accelerometer mounted on the SDOF system. The stiffness of the SDOF system was experimentally determined through calibrated linear springs. The sudden change of stiffness was simulated by introducing the failure of a spring at a certain instant in time during a given period of forced vibration. The forced vibration on the SDOF system was in the form of a white noise input. The sudden change in stiffness was successfully detected through the measurements using Laser Doppler Vibrometry. This detection from optically obtained data was compared with a detection using data obtained from the wireless accelerometer. The potential of this technique is deemed important for a wide range of applications. The method is observed to be particularly suitable for rapid damage detection and health monitoring of structures under a model-free condition or where information related to the structure is not sufficient.
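The physical idea behind the detection is that a spring failure lowers the natural frequency f = sqrt(k/m)/(2*pi), so tracking the dominant frequency of the measured response in short windows localizes the stiffness change. A synthetic-signal sketch follows; the study used RSV-150 and accelerometer records instead, and the mass, stiffness values, and window length here are illustrative.

```python
import numpy as np

def dominant_freq(x, fs):
    """Frequency (Hz) of the peak of one window's amplitude spectrum."""
    spec = np.abs(np.fft.rfft(x * np.hanning(len(x))))
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    return freqs[np.argmax(spec[1:]) + 1]  # skip the DC bin

fs = 1000.0                                   # sampling rate, Hz
t = np.arange(0.0, 4.0, 1.0 / fs)
m, k1, k2 = 1.0, 400.0, 100.0                 # spring failure drops k at t = 2 s
f_inst = np.where(t < 2.0, np.sqrt(k1 / m), np.sqrt(k2 / m)) / (2.0 * np.pi)
x = np.sin(2.0 * np.pi * np.cumsum(f_inst) / fs)  # response with frequency shift

win = 1024
f_early = dominant_freq(x[:win], fs)          # before the simulated failure
f_late = dominant_freq(x[-win:], fs)          # after it: clearly lower
```

With real LDV or accelerometer data the same windowed tracking applies; a sustained drop in the dominant frequency marks the instant of stiffness loss.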
Abstract:
Surveys can collect important data that inform policy decisions and drive social science research. Large government surveys collect information from the U.S. population on a wide range of topics, including demographics, education, employment, and lifestyle. Analysis of survey data presents unique challenges. In particular, one needs to account for missing data, for complex sampling designs, and for measurement error. Conceptually, a survey organization could spend lots of resources getting high-quality responses from a simple random sample, resulting in survey data that are easy to analyze. However, this scenario often is not realistic. To address these practical issues, survey organizations can leverage the information available from other sources of data. For example, in longitudinal studies that suffer from attrition, they can use the information from refreshment samples to correct for potential attrition bias. They can use information from known marginal distributions or survey design to improve inferences. They can use information from gold standard sources to correct for measurement error.
This thesis presents novel approaches to combining information from multiple sources that address the three problems described above.
The first method addresses nonignorable unit nonresponse and attrition in a panel survey with a refreshment sample. Panel surveys typically suffer from attrition, which can lead to biased inference when basing analysis only on cases that complete all waves of the panel. Unfortunately, the panel data alone cannot inform the extent of the bias due to attrition, so analysts must make strong and untestable assumptions about the missing data mechanism. Many panel studies also include refreshment samples, which are data collected from a random sample of new individuals during some later wave of the panel. Refreshment samples offer information that can be utilized to correct for biases induced by nonignorable attrition while reducing reliance on strong assumptions about the attrition process. To date, these bias correction methods have not dealt with two key practical issues in panel studies: unit nonresponse in the initial wave of the panel and in the refreshment sample itself. As we illustrate, nonignorable unit nonresponse can significantly compromise the analyst's ability to use the refreshment samples for attrition bias correction. Thus, it is crucial for analysts to assess how sensitive their inferences, corrected for panel attrition, are to different assumptions about the nature of the unit nonresponse. We present an approach that facilitates such sensitivity analyses, both for suspected nonignorable unit nonresponse in the initial wave and in the refreshment sample. We illustrate the approach using simulation studies and an analysis of data from the 2007-2008 Associated Press/Yahoo News election panel study.
The second method incorporates informative prior beliefs about marginal probabilities into Bayesian latent class models for categorical data. The basic idea is to append synthetic observations to the original data such that (i) the empirical distributions of the desired margins match those of the prior beliefs, and (ii) the values of the remaining variables are left missing. The degree of prior uncertainty is controlled by the number of augmented records. Posterior inferences can be obtained via typical MCMC algorithms for latent class models, tailored to deal efficiently with the missing values in the concatenated data. We illustrate the approach using a variety of simulations based on data from the American Community Survey, including an example of how augmented records can be used to fit latent class models to data from stratified samples.
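The augmentation step in (i)-(ii) is mechanically simple; a sketch under assumed variable names follows. The synthetic records carry only the prior margin and leave every other field missing, and their count n_aug sets the prior's weight.

```python
import random

def augment(data, margin_var, margin_probs, n_aug, rng=None):
    """Append n_aug synthetic records to data (a list of dicts).

    Each synthetic record draws margin_var from the prior margin_probs
    ({level: probability}) and leaves all other variables missing (None),
    so a latent class MCMC treats them as partially observed cases."""
    rng = rng or random.Random(0)
    levels = list(margin_probs)
    weights = [margin_probs[level] for level in levels]
    other_vars = [v for v in data[0] if v != margin_var]
    synthetic = []
    for _ in range(n_aug):
        record = {v: None for v in other_vars}   # left missing by design
        record[margin_var] = rng.choices(levels, weights=weights)[0]
        synthetic.append(record)
    return data + synthetic
```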
The third method leverages the information from a gold standard survey to model reporting error. Survey data are subject to reporting error when respondents misunderstand the question or accidentally select the wrong response. Sometimes survey respondents knowingly select the wrong response, for example, by reporting a higher level of education than they actually have attained. We present an approach that allows an analyst to model reporting error by incorporating information from a gold standard survey. The analyst can specify various reporting error models and assess how sensitive their conclusions are to different assumptions about the reporting error process. We illustrate the approach using simulations based on data from the 1993 National Survey of College Graduates. We use the method to impute error-corrected educational attainments in the 2010 American Community Survey using the 2010 National Survey of College Graduates as the gold standard survey.
Abstract:
Terrestrial ecosystems, occupying more than 25% of the Earth's surface, can serve as 'biological valves' in regulating the anthropogenic emissions of atmospheric aerosol particles and greenhouse gases (GHGs) as responses to their surrounding environments. While the significance of quantifying the exchange rates of GHGs and atmospheric aerosol particles between the terrestrial biosphere and the atmosphere is hardly questioned in many scientific fields, the progress in improving model predictability, data interpretation, or the combination of the two remains impeded by the lack of a precise framework elucidating their dynamic transport processes over a wide range of spatiotemporal scales. The difficulty in developing prognostic modeling tools to quantify the source or sink strength of these atmospheric substances is further magnified by the fact that the climate system is also sensitive to the feedback from terrestrial ecosystems, forming the so-called 'feedback cycle'. Hence, the emergent need is to reduce uncertainties when assessing this complex and dynamic feedback cycle, which is necessary to support the decisions of mitigation and adaptation policies associated with human activities (e.g., anthropogenic emission controls and land use management) under current and future climate regimes.
With the goal of improving the predictions for the biosphere-atmosphere exchange of biologically active gases and atmospheric aerosol particles, the main focus of this dissertation is on revising and up-scaling the biotic and abiotic transport processes from leaf to canopy scales. The validity of previous modeling studies in determining the exchange rate of gases and particles is evaluated with detailed descriptions of their limitations. Mechanistic modeling approaches along with empirical studies across different scales are employed to refine the mathematical descriptions of surface conductance responsible for gas and particle exchanges as commonly adopted by all operational models. Specifically, how variation in horizontal leaf area density within the vegetated medium, leaf size, and leaf microroughness impact the aerodynamic attributes and thereby the ultrafine particle collection efficiency at the leaf/branch scale is explored using wind tunnel experiments with interpretations by a porous media model and a scaling analysis. A multi-layered and size-resolved second-order closure model combined with particle fluxes and concentration measurements within and above a forest is used to explore the particle transport processes within the canopy sub-layer and the partitioning of particle deposition onto the canopy medium and the forest floor. For gases, a modeling framework accounting for the leaf-level boundary layer effects on the stomatal pathway for gas exchange is proposed and combined with sap flux measurements in a wind tunnel to assess how leaf-level transpiration varies with increasing wind speed. How exogenous environmental conditions and endogenous soil-root-stem-leaf hydraulic and eco-physiological properties impact the above- and below-ground water dynamics in the soil-plant system and shape plant responses to droughts is assessed by a porous media model that accommodates the transient water flow within the plant vascular system and is coupled with the aforementioned leaf-level gas exchange model and a soil-root interaction model. It should be noted that tackling all aspects of potential issues causing uncertainties in forecasting the feedback cycle between terrestrial ecosystems and the climate is unrealistic in a single dissertation, but further research questions and opportunities based on the foundation derived from this dissertation are also briefly discussed.
Abstract:
The use of DNA as a polymeric building material transcends its function in biology and is exciting for bionanotechnology applications ranging from biosensing and diagnostics to targeted drug delivery. These applications are enabled by DNA's unique structural and chemical properties, embodied as a directional polyanion that exhibits molecular recognition capabilities. Hence, the efficient and precise synthesis of high molecular weight DNA materials has become key to advancing DNA bionanotechnology. Current synthesis methods largely rely on either solid phase chemical synthesis or template-dependent polymerase amplification. The inherent step-by-step fashion of solid phase synthesis limits the length of the resulting DNA to typically fewer than 150 nucleotides. In contrast, polymerase-based enzymatic synthesis methods (e.g., the polymerase chain reaction) are not limited by product length, but require a DNA template to guide the synthesis. Furthermore, advanced DNA bionanotechnology requires tailorable structural and self-assembly properties; current synthesis methods, however, often involve multiple conjugation reactions and extensive purification steps.
The research described in this dissertation aims to develop a facile method to synthesize high molecular weight, single-stranded DNA (or polynucleotide) with versatile functionalities. We exploit the ability of a template-independent DNA polymerase, terminal deoxynucleotidyl transferase (TdT), to catalyze the polymerization of 2'-deoxyribonucleoside 5'-triphosphates (dNTP, monomer) from the 3'-hydroxyl group of an oligodeoxyribonucleotide (initiator). We termed this enzymatic synthesis method TdT-catalyzed enzymatic polymerization, or TcEP.
Specifically, this dissertation is structured to address three specific research aims. With the objective to generate high molecular weight polynucleotides, Specific Aim 1 studies the reaction kinetics of TcEP by investigating the polymerization of 2’-deoxythymidine 5’-triphosphates (monomer) from the 3’-hydroxyl group of oligodeoxyribothymidine (initiator) using in situ 1H NMR and fluorescent gel electrophoresis. We found that TcEP kinetics follows the “living” chain-growth polycondensation mechanism, and like in “living” polymerizations, the molecular weight of the final product is determined by the starting molar ratio of monomer to initiator. The distribution of the molecular weight is crucially influenced by the molar ratio of initiator to TdT. We developed a reaction kinetics model that allows us to quantitatively describe the reaction and predict the molecular weight of the reaction products.
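One practical consequence of the "living" mechanism described above is worth spelling out: at full conversion, the average number of nucleotides added per chain is simply the starting monomer-to-initiator molar ratio. A back-of-the-envelope sketch, in which the 304.2 g/mol residue mass approximates a dT unit in the chain and the initiator mass is illustrative:

```python
def degree_of_polymerization(monomer_conc, initiator_conc, conversion=1.0):
    """Average nucleotides added per chain at the given fractional conversion."""
    return conversion * monomer_conc / initiator_conc

def number_average_mw(dp, initiator_mw, residue_mw=304.2):
    """Approximate number-average molecular weight of the product (g/mol)."""
    return initiator_mw + dp * residue_mw

# e.g. 1 mM dTTP polymerized from a 10 uM initiator:
dp = degree_of_polymerization(monomer_conc=1.0e-3, initiator_conc=10.0e-6)
# dp is ~100 nucleotides added per chain, on average
```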
Specific Aim 2 further explores TcEP’s ability to transcend homo-polynucleotide synthesis by varying the choices of initiators and monomers. We investigated the effects of initiator length and sequence on TcEP, and found that the minimum length of an effective initiator should be 10 nucleotides and that the formation of secondary structures close to the 3’-hydroxyl group can impede the polymerization reaction. We also demonstrated TcEP’s capacity to incorporate a wide range of unnatural dNTPs into the growing chain, such as, hydrophobic fluorescent dNTP and fluoro modified dNTP. By harnessing the encoded nucleotide sequence of an initiator and the chemical diversity of monomers, TcEP enables us to introduce molecular recognition capabilities and chemical functionalities on the 5’-terminus and 3’-terminus, respectively.
Building on TcEP’s synthesis capacities, in Specific Aim 3 we invented a two-step strategy to synthesize diblock amphiphilic polynucleotides, in which the first, hydrophilic block serves as a macro-initiator for the growth of the second block, comprised of natural and/or unnatural nucleotides. By tuning the hydrophilic length, we synthesized the amphiphilic diblock polynucleotides that can self-assemble into micellar structures ranging from star-like to crew-cut morphologies. The observed self-assembly behaviors agree with predictions from dissipative particle dynamics simulations as well as scaling law for polyelectrolyte block copolymers.
In summary, we developed an enzymatic synthesis method (i.e., TcEP) that enables the facile synthesis of high molecular weight polynucleotides with low polydispersity. Although we can control the nucleotide sequence only to a limited extent, TcEP offers a way to integrate an oligodeoxyribonucleotide with a specific sequence at the 5'-terminus while simultaneously incorporating functional groups along the growing chain. Additionally, we used TcEP to synthesize amphiphilic polynucleotides that display self-assembly ability. We anticipate that our facile synthesis method will not only advance molecular biology, but also invigorate materials science and bionanotechnology.
Abstract:
The GloboLakes project, a global observatory of lake responses to environmental change, aims to exploit current satellite missions and long remote-sensing archives to synoptically study multiple lake ecosystems, assess their current condition, reconstruct past trends in system trajectories, and assess lake sensitivity to multiple drivers of change. Here we describe the protocol for selecting lakes for the global observatory based upon remote-sensing techniques, starting from an initial pool of the world's 3721 largest lakes and reservoirs as listed in the Global Lakes and Wetlands Database. An 18-year-long archive of satellite data was used to create spatial and temporal filters for the identification of waterbodies that are appropriate for remote-sensing methods. Further criteria were applied and tested to ensure the candidate sites span a wide range of ecological settings and characteristics; a total of 960 lakes, lagoons, and reservoirs were selected. The methodology proposed here is applicable to new-generation satellites, such as the European Space Agency Sentinel series.
Abstract:
In contrast to the wide range of studies carried out in temperate and high-latitude oceanic regions, only a few studies have focused on recent and Holocene organic-walled dinoflagellate cyst assemblages from the tropics. This information is, however, essential for fully understanding the ability of species to adapt to different oceanographic regimes, and ultimately their potential application to local and regional palaeoenvironmental and palaeoceanographic reconstructions. Surface sediment samples of the western equatorial Atlantic Ocean north of Brazil, an area greatly influenced by Amazon River discharge waters, were therefore analysed in detail for their organic-walled dinoflagellate cyst content. A diverse association of 43 taxa was identified, and large differences in cyst distribution were observed. The cyst thanatocoenosis in bottom sediments reflects the seasonal advection of Amazon River discharge water through the Guyana Current and the North Equatorial Countercurrent well into the North Atlantic. To establish potential links between cyst distribution and the environmental conditions of the upper water column, distribution patterns were compared with mean temperature, salinity, density and stratification gradients within the upper water column (0-100 m) over different times of the year, using correspondence analysis and canonical correspondence analysis. The analyses show that differences in these parameters only play a subordinate role in determining species distribution. Instead, nutrient availability, or related factors, dominates the distribution pattern. The only possible indicators of slightly reduced salinities are Trinovantedinium applanatum and Lingulodinium machaerophorum. Four assemblage groups of cyst taxa with similar environmental affinities related to specific water masses/currents can be distinguished and have potential for palaeoenvironmental reconstruction.
Abstract:
Bidirectional DC-DC converters are widely used in applications such as energy storage systems, Electric Vehicles (EVs), and UPS. In particular, future EVs require bidirectional power flow in order to integrate energy storage units into smart grids. These bidirectional power converters provide Grid-to-Vehicle (G2V)/Vehicle-to-Grid (V2G) power flow capability for future EVs. Generally, two control loops are used for bidirectional DC-DC converters: the inner current loop and the outer loop. The control of Dual Active Bridge (DAB) converters used in EVs has proven challenging due to the wide range of operating conditions and the non-linear behavior of the converter. In this thesis, a precise mathematical model of the converter is derived, and non-linear control schemes are proposed for the control system of bidirectional DC-DC converters based on the derived model. The proposed inner current control technique is developed based on a novel Geometric-Sequence Control (GSC) approach. It offers significantly improved performance compared to conventional control approaches, and it utilizes a simple control algorithm that saves computational resources; it therefore has higher reliability, which is essential in this application. Although the proposed control technique is based on the mathematical model of the converter, its robustness against parameter uncertainties is proven. Three control modes for charging the traction batteries in EVs are investigated in this thesis: voltage mode control, current mode control, and power mode control. The outer loop is determined by each of the three control modes, and its structure provides the current reference for the inner current loop. Comprehensive computer simulations have been conducted to evaluate the performance of the proposed control methods. In addition, the proposed controls have been verified on a 3.3 kW experimental prototype. Simulation and experimental results show the superior performance of the proposed control techniques over conventional ones.
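The two-loop structure described above can be made concrete. The sketch below uses plain discrete PI regulators as stand-ins, since the thesis's Geometric-Sequence Control law is not reproduced here; the gains, time step, and clamping range are illustrative assumptions.

```python
class PI:
    """Discrete PI regulator with a fixed sampling time step."""
    def __init__(self, kp, ki, dt):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.integral = 0.0

    def step(self, error):
        self.integral += error * self.dt
        return self.kp * error + self.ki * self.integral

outer = PI(kp=2.0, ki=50.0, dt=1e-4)   # voltage-mode outer loop (illustrative gains)
inner = PI(kp=0.5, ki=200.0, dt=1e-4)  # inner current loop

def control_step(v_ref, v_meas, i_meas):
    """One sampling instant: the outer loop sets the current reference,
    and the inner loop turns the current error into a bounded command."""
    i_ref = outer.step(v_ref - v_meas)
    command = inner.step(i_ref - i_meas)
    return max(0.0, min(1.0, command))  # clamp to a valid duty/phase range
```

Swapping the outer regulator's error signal (voltage, current, or power) reproduces the three charging modes investigated, with the inner current loop unchanged.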
Abstract:
Canalization is a result of intrinsic developmental buffering that ensures phenotypic robustness under genetic variation and environmental perturbation. As a consequence, animal phenotypes are remarkably consistent within a species under a wide range of conditions, a property that seems contradictory to evolutionary change. Study of laboratory model species has uncovered several possible canalization mechanisms, however, we still do not understand how the level of buffering is controlled in natural populations. We exploit wild populations of the marine chordate Ciona intestinalis to show that levels of buffering are maternally inherited. Comparative transcriptomics show expression levels of genes encoding canonical chaperones such as Hsp70 and Hsp90 do not correlate with buffering. However the expression of genes encoding endoplasmic reticulum (ER) chaperones does correlate. We also show that ER chaperone genes are widely conserved amongst animals. Contrary to previous beliefs that expression level of Heat Shock Proteins (HSPs) can be used as a measurement of buffering levels, we propose that ER associated chaperones comprise a cellular basis for canalization. ER chaperones have been neglected by the fields of development, evolution and ecology, but their study will enhance understanding of both our evolutionary past and the impact of global environmental change.
Resumo:
The past decade has seen a dramatic increase in interest in the use of gold nanoparticles (GNPs) as radiation sensitizers for radiation therapy. This interest was initially driven by their strong absorption of ionizing radiation and the resulting ability to increase the dose deposited within target volumes even at relatively low concentrations. These early observations are supported by extensive experimental validation showing that GNPs sensitize tumors, both in vitro and in vivo, to a range of ionizing radiation types, including kilovoltage and megavoltage X rays as well as charged particles. Despite this experimental validation, there has been limited translation of GNP-mediated radiation sensitization to a clinical setting. One of the key challenges in this area is the wide range of experimental systems that have been investigated, spanning a range of particle sizes, shapes, and preparations. As a result, mechanisms of uptake and radiation sensitization have remained difficult to clearly identify. This has proven a significant impediment to identifying optimal GNP formulations that strike a balance among radiation-sensitizing properties, tumor specificity, biocompatibility, and in vivo imageability. This white paper reviews the current state of knowledge in each of the areas concerning the use of GNPs as radiosensitizers, and outlines the steps required to advance GNP-enhanced radiation therapy from its current preclinical setting to clinical trials and eventual routine use.
Resumo:
Current trends in the automotive industry have placed increased importance on engine downsizing for passenger vehicles. Engine downsizing often results in reduced power output, and turbochargers have been relied upon to restore the power output and maintain drivability. As improved power output is required across a wide range of engine operating conditions, it is necessary for the turbocharger to operate effectively at both design and off-design conditions. One off-design condition of considerable importance for turbocharger turbines is low velocity ratio operation, which refers to the combination of high exhaust gas velocity and low turbine rotational speed. Conventional radial flow turbines are constrained to achieve peak efficiency at the relatively high velocity ratio of 0.7, due to the requirement to maintain a zero inlet blade angle for structural reasons. Several methods exist to potentially shift turbine peak efficiency to lower velocity ratios. One method is to utilize a mixed flow turbine as an alternative to a radial flow turbine. In addition to radial and circumferential components, the flow entering a mixed flow turbine also has an axial component. This allows the flow to experience a non-zero inlet blade angle, potentially shifting peak efficiency to a lower velocity ratio when compared to an equivalent radial flow turbine.
This study examined the effects of varying the flow conditions at the inlet to a mixed flow turbine and evaluated the subsequent impact on performance. The primary parameters examined were the average inlet flow angle, the spanwise distribution of flow angle across the inlet, and the inlet flow cone angle. The rotor studied was a custom in-house design based on a state-of-the-art radial flow turbine design. A numerical approach formed the basis of the investigation, and the numerical model was validated against experimental data obtained from the cold flow turbine test rig at Queen's University Belfast. The results indicated that the inlet flow angle significantly influenced the degree of reaction across the rotor and the turbine efficiency, and provide useful insight into how the flow conditions at rotor inlet influence the performance of a mixed flow turbine.
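The velocity ratio discussed above is conventionally defined as the rotor blade tip speed U divided by the isentropic "spouting" velocity C_s obtained from the ideal enthalpy drop across the stage. A minimal sketch of that calculation follows; the numerical values are purely illustrative and not taken from this study:

```python
import math

def spouting_velocity(delta_h_s):
    """Isentropic spouting velocity C_s (m/s) from the ideal specific
    enthalpy drop across the stage, delta_h_s in J/kg."""
    return math.sqrt(2.0 * delta_h_s)

def velocity_ratio(tip_speed, delta_h_s):
    """U/C_s: rotor blade tip speed over spouting velocity."""
    return tip_speed / spouting_velocity(delta_h_s)

# Illustrative operating point: a 350 m/s tip speed with a 125 kJ/kg
# ideal enthalpy drop gives the ~0.7 ratio at which conventional
# radial flow turbines reach peak efficiency.
print(round(velocity_ratio(350.0, 125e3), 2))  # -> 0.7
```

Low velocity ratio operation corresponds to a large enthalpy drop (high exhaust gas velocity) combined with low rotational speed, which drives U/C_s well below 0.7.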
Resumo:
This thesis presents the design, control, and experimental validation of a haptic compass intended to guide users with visual impairments in any environment. The literature review describes the need for haptic guidance and situates this technology within the current market. The proposed compass relies on the principle of asymmetric torques. Its design is based on a direct-drive motor architecture and a pre-calibrated open-loop controller, which allows the haptic feedback to span a wide range of frequencies. The mechanical properties of the assembly are evaluated, and the torque calibration ensures that the open-loop controller produces torques with sufficient accuracy. A first user test identified that feedback frequencies between 5 and 15 Hz, combined with torques above 40 mNm, achieve good effectiveness for the task. A subsequent experiment demonstrates that making the haptic feedback proportional to the orientation error significantly improves performance. The concept is then tested with nineteen subjects who had to navigate a route using only the haptic compass. The results show that all subjects reached every waypoint on the route while maintaining relatively small lateral deviations (0.39 m on average). The performance obtained and the users' impressions are promising and argue in favor of this device. Finally, a simplified model of an individual's behavior in the orientation task is developed and demonstrates the importance of personalizing the device. This model is then used to highlight a receding-horizon strategy for placing the current intermediate target along a long-distance route.
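The proportional feedback scheme described above maps the user's heading error to a torque command. A minimal sketch, with hypothetical gain and ceiling values (only the 40 mNm effectiveness floor comes from the user study reported in the abstract):

```python
def feedback_torque(heading_error_rad, gain=0.08, t_min=0.040, t_max=0.120):
    """Signed torque command (N*m) proportional to the heading error,
    saturated to a useful range. The 0.040 N*m floor reflects the
    >40 mNm effectiveness threshold from the user study; the gain and
    the 0.120 N*m ceiling are hypothetical illustration values."""
    magnitude = min(t_max, max(t_min, gain * abs(heading_error_rad)))
    return magnitude if heading_error_rad >= 0 else -magnitude

# Small errors still receive the minimum perceivable torque; large
# errors saturate at the ceiling.
print(feedback_torque(0.1))   # -> 0.04
print(feedback_torque(-3.0))  # -> -0.12
```

Larger heading errors thus produce stronger asymmetric-torque cues, which is the property the follow-up experiment found to significantly improve performance over constant-magnitude feedback.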
Resumo:
Thesis (Ph.D.)--University of Washington, 2016-06