Abstract:
Complex human diseases are a major challenge for biological research. The goal of my research is to develop effective biostatistical methods that create more opportunities for the prevention and cure of human diseases. This dissertation proposes statistical methods that can be adapted to sequencing data in family-based designs and that account for joint effects as well as gene-gene and gene-environment interactions in genome-wide association (GWA) studies. The framework includes statistical methods for both rare and common variant association studies. Although next-generation DNA sequencing technologies have made rare variant association studies feasible, the development of powerful statistical methods for such studies is still underway. Chapter 2 presents two adaptive weighting methods for rare variant association studies of quantitative traits based on family data. The results show that both proposed methods are robust to population stratification, robust to the direction and magnitude of the effects of causal variants, and more powerful than methods using the weights suggested by Madsen and Browning [2009]. In Chapter 3, I extend the previously proposed test for Testing the effect of an Optimally Weighted combination of variants (TOW) [Sha et al., 2012] for unrelated individuals to TOW-F, TOW for family-based designs. Simulation results show that TOW-F can control for population stratification across a wide range of population structures, including spatially structured populations, is robust to the direction of effect of causal variants, and is relatively robust to the percentage of neutral variants. For GWA studies, this dissertation presents a two-locus joint effect analysis and a two-stage approach that accounts for gene-gene and gene-environment interactions. Chapter 4 proposes a novel two-stage approach that is promising for identifying joint effects, especially under monotonic models. The proposed approach outperforms a single-marker method and a regular two-stage analysis based on the two-locus genotypic test. In Chapter 5, I propose a gene-based two-stage approach to identify gene-gene and gene-environment interactions in GWA studies that can include rare variants. The two-stage approach is applied to the GAW 17 dataset to identify the interaction between the KDR gene and smoking status.
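As a rough illustration of the kind of frequency-weighted rare-variant statistic this framework builds on, the sketch below computes a Madsen-Browning style weighted burden score on simulated genotypes and tests its association with a quantitative trait. The genotype matrix, trait model, and simple correlation test are illustrative assumptions, not the family-based adaptive weighting or TOW-F procedures themselves.

```python
# Minimal sketch: Madsen-Browning style weighted burden score for rare variants.
# All data below are simulated placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n, m = 500, 20                            # individuals, rare variants
maf = rng.uniform(0.005, 0.05, m)         # rare minor-allele frequencies
G = rng.binomial(2, maf, size=(n, m))     # genotype matrix (0/1/2 minor-allele copies)
beta = np.zeros(m); beta[:5] = 0.8        # first 5 variants are causal in this toy model
y = G @ beta + rng.normal(size=n)         # quantitative trait

# Madsen-Browning weights: up-weight the rarest variants.
q = (G.sum(axis=0) + 1) / (2 * n + 2)
w = 1.0 / np.sqrt(n * q * (1 - q))

score = G @ w                             # weighted burden score per individual
r, p = stats.pearsonr(score, y)           # simple association test of score vs. trait
print(f"burden-trait correlation r={r:.3f}, p={p:.2e}")
```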
Abstract:
A basic approach to studying an NVH problem is to break the system down into three basic elements: source, path, and receiver. While the receiver (response) and the transfer path can be measured, it is difficult to measure the source (forces) acting on the system. It becomes necessary to predict these forces to know how they influence the responses, which requires inverting the transfer path. The Singular Value Decomposition (SVD) method is used to decompose the transfer path matrix into its principal components, which is required for the inversion. The usual approach to force prediction rejects the small singular values obtained during SVD by setting a threshold, as these small values dominate the inverse matrix. This choice of threshold risks rejecting important singular values, severely affecting force prediction. The new approach discussed in this report looks at the column space of the transfer path matrix, which is the basis for the predicted response. The response participation is an indication of how the small singular values influence the force participation. The ability to accurately reconstruct the response vector is important to establish confidence in the force vector prediction. The goal of this report is to suggest, through examples, a solution that is mathematically feasible, physically meaningful, and numerically more efficient. This understanding adds new insight into the effects of the current code and how to apply these algorithms and insights to new codes.
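A minimal sketch of the truncated-SVD inversion described above, assuming a small synthetic transfer path matrix and noisy responses; the relative threshold, matrix sizes, and noise level are placeholders. The column-space check at the end mirrors the idea of judging confidence in the force estimate by how well the retained components reconstruct the measured response.

```python
# Force reconstruction by truncated-SVD inversion of a transfer path (FRF) matrix.
import numpy as np

rng = np.random.default_rng(0)
H = rng.standard_normal((8, 4))                  # transfer path matrix (responses x forces)
f_true = np.array([1.0, -0.5, 2.0, 0.0])         # "true" operating forces for the demo
y = H @ f_true + 0.01 * rng.standard_normal(8)   # measured responses with noise

U, s, Vt = np.linalg.svd(H, full_matrices=False)

def truncated_inverse(U, s, Vt, rel_tol):
    """Invert H keeping only singular values above rel_tol * s_max."""
    keep = s > rel_tol * s[0]
    s_inv = np.where(keep, 1.0 / s, 0.0)
    return Vt.T @ np.diag(s_inv) @ U.T

f_est = truncated_inverse(U, s, Vt, rel_tol=1e-2) @ y

# Column-space check: how well the retained components reconstruct the response.
y_rec = H @ f_est
print("force estimate:", f_est)
print("relative response reconstruction error:",
      np.linalg.norm(y - y_rec) / np.linalg.norm(y))
```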
Abstract:
As foundational species, oaks (Quercus: Fagaceae) support the activities of both humans and wildlife. However, many oaks in North America are declining, a crisis exacerbated by the previous disappearance of other hard mast-producing trees. In addition, the economic demands placed on this drought-tolerant group may intensify if climate change extirpates other, relatively mesophytic species. Genetic tools can help address these management challenges. To this end, we developed a suite of 27 microsatellite markers, of which 22 are derived from expressed sequence tags (ESTs). Many of these markers bear significant homology to known genes and may be able to directly assay functional genetic variation. Markers obtained from enriched microsatellite libraries, on the other hand, are typically located in heterochromatic regions and should reflect demographic processes. Considered jointly, genic and genomic microsatellites can elucidate patterns of gene flow and natural selection, which are fundamental to both an organism's evolutionary ecology and its conservation biology. We then employed the developed markers in an FST-based genome scan to detect the signature of divergent selection among the red oaks (Quercus section Lobatae). Three candidate genes with putative roles in stress responses demonstrated patterns of diversity consistent with adaptation to heterogeneous selective pressures. These genes may be important in both local genetic adaptation within species and divergence among them. Next, we used an isolation-with-migration model to quantify levels of gene flow among four red oak species during speciation. Both speciation in allopatry and speciation with gene flow were found to be major drivers of red oak biodiversity. Loci playing a key role in speciation are also likely to be ecologically important within species.
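As an illustration of the FST-based genome scan logic, the sketch below computes a simple per-locus FST (a Nei-style GST) from simulated allele frequencies in several populations and flags loci in the upper tail as candidates for divergent selection. The populations, frequencies, and 95% cutoff are assumptions, not the red oak data or the outlier test actually used.

```python
# Per-locus FST from allele frequencies; high-FST loci flagged as outliers.
import numpy as np

rng = np.random.default_rng(8)
n_pops, n_loci = 4, 27
p = rng.uniform(0.1, 0.9, size=(n_pops, n_loci))   # allele frequency per population/locus
p[:, 0] = [0.05, 0.2, 0.8, 0.95]                   # one artificially divergent locus

h_s = (2 * p * (1 - p)).mean(axis=0)               # mean within-population heterozygosity
p_bar = p.mean(axis=0)
h_t = 2 * p_bar * (1 - p_bar)                      # total heterozygosity
fst = (h_t - h_s) / h_t

cutoff = np.quantile(fst, 0.95)                    # arbitrary upper-tail cutoff
outliers = np.where(fst > cutoff)[0]
print("outlier loci (candidates for divergent selection):", outliers)
```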
Abstract:
This thesis focuses on the impact of the American shale gas boom on the European natural gas market. The study presents several tests to analyze the dynamics of natural gas prices in the U.S., U.K., and German natural gas markets. The question of cointegration between these markets is analyzed using several tests. More specifically, the augmented Dickey-Fuller (ADF) test is used to test for the presence of a unit root, and the error correction model test and the Johansen cointegration procedure are applied in order to accept or reject the hypothesis of an integrated market. The results suggest no evidence of cointegration between these markets: there is currently no evidence of an impact of the U.S. shale gas boom on the European market.
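A hedged sketch of the testing sequence described above: ADF unit-root tests on each price series, a pairwise Engle-Granger style cointegration test, and the Johansen procedure, all via statsmodels. The random-walk series stand in for the U.S., U.K., and German hub prices and are not the thesis data.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import adfuller, coint
from statsmodels.tsa.vector_ar.vecm import coint_johansen

# Placeholder random-walk series standing in for the three hub prices.
rng = np.random.default_rng(1)
n = 500
us = np.cumsum(rng.normal(size=n))
uk = us + rng.normal(scale=2.0, size=n)            # loosely tied to the US series
de = np.cumsum(rng.normal(size=n))                 # independent walk
prices = pd.DataFrame({"henry_hub": us, "nbp": uk, "german_border": de})

# 1. ADF unit-root test on each series.
for name in prices.columns:
    stat, pvalue, *_ = adfuller(prices[name])
    print(f"ADF {name}: stat={stat:.2f}, p={pvalue:.3f}")

# 2. Pairwise Engle-Granger cointegration test (residual-based).
stat, pvalue, _ = coint(prices["henry_hub"], prices["nbp"])
print(f"Engle-Granger US vs UK: stat={stat:.2f}, p={pvalue:.3f}")

# 3. Johansen procedure on the three-market system.
res = coint_johansen(prices.values, det_order=0, k_ar_diff=1)
print("Johansen trace statistics:", res.lr1)
print("95% critical values:      ", res.cvt[:, 1])
```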
Abstract:
Individual life history theory is largely focused on understanding the extent to which various phenotypes of an organism are adaptive and whether they represent life history trade-offs. Compensatory growth (CG) is increasingly appreciated as a phenotype of interest to evolutionary ecologists. CG, or catch-up growth, is the ability of an organism to grow at a faster-than-normal rate following periods of under-nutrition once conditions subsequently improve. Here, I examine CG in a population of moose (Alces alces) living on Isle Royale, a remote island in Lake Superior, North America. I gained insights about CG from measurements of the skeletal remains of 841 moose born throughout a 52-year period. In particular, I compared the length of the metatarsal bone (ML) with several skull measurements. Whereas ML is an index of growth in utero and during the first year or two of life, a moose skull continues to grow until a moose is approximately 5 years of age. Because of these differences, the strength of the correlation between ML and skull measurements for a group of moose (say, female moose) is an indication of that group's capacity for CG. Using this logic, I conducted analyses whose results suggest that the capacity for CG did not differ between sexes, between individuals born during periods of high and low population densities, or between individuals exhibiting signs of senescence and those not exhibiting such signs. The analyses did, however, suggest that long-lived individuals had a greater capacity for CG than short-lived individuals. These results suggest that CG in moose is an adaptive trait and might not be associated with life history trade-offs.
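The correlation-comparison logic can be sketched as follows: the ML-skull correlation within a group indexes that group's capacity for CG, and two groups are compared with a Fisher z-test on their correlations. The simulated groups, sample sizes, and correlation values are placeholders, not the Isle Royale measurements.

```python
# Compare ML-skull correlation strength between two groups via a Fisher z-test.
import numpy as np
from scipy import stats

def fisher_z_compare(r1, n1, r2, n2):
    """Two-sample test for a difference between independent correlations."""
    z1, z2 = np.arctanh(r1), np.arctanh(r2)
    se = np.sqrt(1 / (n1 - 3) + 1 / (n2 - 3))
    z = (z1 - z2) / se
    return z, 2 * stats.norm.sf(abs(z))

rng = np.random.default_rng(7)

def simulate_group(n, rho):
    """Simulate ML and a skull measurement with a target correlation rho."""
    ml = rng.normal(size=n)
    skull = rho * ml + np.sqrt(1 - rho**2) * rng.normal(size=n)
    return np.corrcoef(ml, skull)[0, 1], n

r_a, n_a = simulate_group(200, 0.55)    # e.g. one group of moose
r_b, n_b = simulate_group(200, 0.80)    # e.g. the comparison group

z, p = fisher_z_compare(r_a, n_a, r_b, n_b)
print(f"r(group A)={r_a:.2f}, r(group B)={r_b:.2f}, z={z:.2f}, p={p:.3f}")
```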
Abstract:
The developmental processes and functions of an organism are controlled by its genes and the proteins derived from these genes. The identification of key genes and the reconstruction of gene networks can provide a model to help us understand the regulatory mechanisms behind the initiation and progression of biological processes or functional abnormalities (e.g., diseases) in living organisms. In this dissertation, I developed statistical methods to identify the genes and transcription factors (TFs) involved in biological processes, constructed their regulatory networks, and evaluated some existing association methods to find robust methods for coexpression analyses. Two kinds of data sets were used for this work: genotype data and gene expression microarray data. On the basis of these data sets, this dissertation has two major parts, together forming six chapters. The first part deals with developing association methods for rare variants using genotype data (chapters 4 and 5). The second part deals with developing and/or evaluating statistical methods to identify genes and TFs involved in biological processes and constructing their regulatory networks using gene expression data (chapters 2, 3, and 6). For the first part, I developed two methods to find the groupwise association of rare variants with given diseases or traits. The first method is based on kernel machine learning and can be applied to both quantitative and qualitative traits. Simulation results showed that the proposed method has improved power over the existing weighted sum method (WS) in most settings. The second method uses multiple phenotypes to select a few top significant genes. It then finds the association of each gene with each phenotype while controlling for population stratification by adjusting the data for ancestry using principal components. This method was applied to the GAW 17 data and was able to find several disease risk genes. For the second part, I worked on three problems. The first problem involved the evaluation of eight gene association methods. A comprehensive comparison of these methods, with further analysis, clearly demonstrates the distinct and shared performance characteristics of these eight methods. For the second problem, an algorithm named the bottom-up graphical Gaussian model was developed to identify the TFs that regulate pathway genes and to reconstruct their hierarchical regulatory networks. This algorithm has produced highly significant results, and this is the first report to produce such hierarchical networks for these pathways. The third problem involved developing another algorithm, called the top-down graphical Gaussian model, that identifies the network governed by a specific TF. The networks produced by this algorithm are shown to be highly accurate.
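As a toy example of the graphical Gaussian model idea underlying the network chapters, the sketch below estimates partial correlations from the inverse covariance of simulated expression profiles and reports gene pairs whose partial correlation exceeds an arbitrary cutoff. The gene names, simulated data, and threshold are assumptions, not the bottom-up or top-down algorithms themselves.

```python
# Graphical Gaussian model sketch: edges from partial correlations
# (off-diagonal entries of the scaled inverse covariance matrix).
import numpy as np

rng = np.random.default_rng(3)
genes = ["TF1", "geneA", "geneB", "geneC"]
n = 200

# Simulate expression where TF1 drives geneA and geneB; geneC is independent.
tf1 = rng.normal(size=n)
X = np.column_stack([
    tf1,
    0.8 * tf1 + 0.4 * rng.normal(size=n),
    -0.7 * tf1 + 0.4 * rng.normal(size=n),
    rng.normal(size=n),
])

precision = np.linalg.inv(np.cov(X, rowvar=False))
d = np.sqrt(np.diag(precision))
partial_corr = -precision / np.outer(d, d)
np.fill_diagonal(partial_corr, 1.0)

# Report gene pairs whose partial correlation exceeds a (hypothetical) cutoff.
for i in range(len(genes)):
    for j in range(i + 1, len(genes)):
        if abs(partial_corr[i, j]) > 0.3:
            print(f"edge: {genes[i]} -- {genes[j]} (pcor={partial_corr[i, j]:.2f})")
```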
Abstract:
The development of embedded control systems for a Hybrid Electric Vehicle (HEV) is a challenging task due to the multidisciplinary nature of the HEV powertrain and its complex structure. Hardware-In-the-Loop (HIL) simulation provides an open and convenient environment for modeling, prototyping, testing, and analyzing HEV control systems. This thesis focuses on the development of such a HIL system for hybrid electric vehicle studies. The hardware architecture of the HIL system, including the dSPACE eDrive HIL simulator, MicroAutoBox II, and the MotoTron Engine Control Module (ECM), is introduced. Software used in the system includes the dSPACE Real-Time Interface (RTI) blockset, Automotive Simulation Models (ASM), Matlab/Simulink/Stateflow, Real-Time Workshop, ControlDesk Next Generation, ModelDesk, and MotoHawk/MotoTune. A case study of the development of control systems for a single-shaft parallel hybrid electric vehicle is presented to summarize the functionality of this HIL system.
Abstract:
Gas sensors are widely used in many important areas, including industrial control, environmental monitoring, counter-terrorism, and chemical production. Micro-fabrication offers a promising way to achieve sensitive and inexpensive gas sensors. Over the years, various MEMS gas sensors have been investigated and fabricated. One significant type of MEMS gas sensor is based on mass change detection and integration with specific polymers. This dissertation aims to contribute to the design and fabrication of MEMS resonant mass sensors with capacitive actuation and sensing that lead to improved sensitivity. To accomplish this goal, the research has several objectives: (1) define an effective measure for evaluating the sensitivity of resonant mass devices; (2) model the effects of air damping on microcantilevers and validate the models using a laser measurement system; (3) develop design guidelines for improving sensitivity in the presence of air damping; and (4) characterize the degree of uncertainty in performance arising from fabrication variation for one or more process sequences, and establish design guidelines for improved robustness. Work has been completed toward these objectives. An evaluation measure has been developed and compared to an RMS-based measure. Analytic models of air damping for parallel plates with holes are compared with a COMSOL model. The models have been used to identify cantilever design parameters that maximize sensitivity. Additional designs have been modeled with COMSOL, and an analytical model for fixed-free cantilever geometries with holes has been developed. Two process flows have been implemented and compared. A number of cantilever designs have been fabricated, and the uncertainty in the process has been investigated. Variability from processing has been evaluated and characterized.
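One simple sensitivity measure for a resonant mass sensor is the frequency shift per unit added mass, roughly delta_f = -(f0 / 2 m_eff) * delta_m. The sketch below evaluates this for a generic silicon fixed-free cantilever; the geometry, material constants, and 0.24 effective-mass factor are textbook-style assumptions, not the fabricated designs or the evaluation measure developed in the dissertation.

```python
# Frequency-shift mass sensitivity of a generic fixed-free silicon cantilever.
import numpy as np

# Assumed cantilever geometry (metres) and silicon properties.
L, w, t = 200e-6, 40e-6, 2e-6        # length, width, thickness
E, rho = 169e9, 2330.0               # Young's modulus (Pa), density (kg/m^3)

k = E * w * t**3 / (4 * L**3)        # static tip stiffness of a fixed-free beam
m_eff = 0.24 * rho * L * w * t       # effective mass (~0.24 of the beam mass)
f0 = np.sqrt(k / m_eff) / (2 * np.pi)

delta_m = 1e-15                      # assumed added mass from absorbed analyte (kg)
delta_f = -(f0 / (2 * m_eff)) * delta_m

print(f"f0 = {f0 / 1e3:.1f} kHz, mass sensitivity = {f0 / (2 * m_eff):.3e} Hz/kg")
print(f"frequency shift for 1 fg of added mass: {delta_f:.3f} Hz")
```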
Abstract:
The proposed work aims to facilitate the development of a microfluidic platform for the production of advanced microcapsules containing active agents that can serve as the functional constituents of self-healing composites. The creation of such microcapsules is enabled by the unique flow characteristics within microchannels, including precise control over the shear and interfacial forces for droplet creation and manipulation, as well as the ability to form a solid shell either chemically or via the addition of thermal or irradiative energy. Microchannel design and a study of the fluid dynamics and mechanisms for shell creation are undertaken in order to establish a fabrication approach capable of producing healing-agent-containing microcapsules. An in-depth study of the process parameters has been undertaken in order to elucidate the advantages of this production technique, including precise control of the size (i.e., monodispersity) and surface morphology of the microcapsules. This project also aims to aid the optimization of the mechanical properties and healing performance of self-healing composites by studying the effects of the advantageous properties of the as-produced microcapsules. Scale-up of the microfluidic fabrication using parallel devices on a single chip, as well as on-chip microcapsule production and shape control, will also be investigated. It will be demonstrated that microfluidic fabrication is a versatile approach for the efficient creation of functional microcapsules, allowing for superior design of self-healing composites.
Abstract:
Emerging nanogenerators have attracted the attention of the research community, focusing on energy generation using piezoelectric nanomaterials. Nanogenerators can be utilized for powering NEMS/MEMS devices. Understanding the piezoelectric properties of one-dimensional ZnO materials such as ZnO nanobelts (NBs) and nanowires (NWs) can have a significant impact on the design of new devices. The goal of this dissertation is to study the piezoelectric properties of one-dimensional ZnO nanostructures both experimentally and theoretically. First, the experimental procedure for producing the ZnO nanostructures is discussed. The produced ZnO nanostructures were characterized using an in-situ atomic force microscope (AFM) and a piezoelectric force microscope (PFM). It is shown that the electrical conductivity of ZnO NBs is a function of the applied mechanical force and the crystalline structure. This phenomenon is described in terms of the formation of an electric field due to the piezoelectric property of ZnO NBs. In the PFM studies, it was shown that the piezoelectric response of ZnO NBs depends on their production method and the presence of defects in the NB. Second, a model was proposed for making nanocomposite electrical generators based on ZnO nanowires. The proposed model has advantages over the original configuration of nanogenerators, which uses an AFM tip to bend the ZnO NWs: higher stability of the electric source, the capability to produce larger electric fields, and lower production costs. Finally, the piezoelectric properties of ZnO NBs were simulated using the molecular dynamics (MD) technique. The size-scale effect on the piezoelectric properties of ZnO NBs was captured, and it is shown that the piezoelectric coefficient of ZnO NBs decreases with increasing lateral dimensions. This phenomenon is attributed to surface charge redistribution and compression of the unit cells located on the outer shell of the ZnO NBs.
Abstract:
Manual drilling is a popular solution for programs seeking to increase drinking water supply in rural Madagascar. Lightweight, affordable, and locally produced drilling equipment allows rapid implementation where access is problematic and funds are limited. This report looks at the practical implications of using manual drilling as a one-step solution to potable water in rural development. The main benefits of these techniques are time and cost savings. The author uses his experience managing a drilling campaign in northeastern Madagascar to explore the benefits and limitations of one particular drilling methodology, BushProof's Madrill technique. Just under 200 wells were drilled using this method in the course of one fiscal year (September 2011 to September 2012). The paper explores what compromises must be considered in the quest for cost-effective boreholes and whether everybody, from implementers to project managers to clients and lawmakers, is in agreement about the consequences of such compromises. The paper also discusses water quality issues encountered when drilling in shallow aquifers.
Abstract:
Our research explored the influence of deer and gap size on nitrogen cycling, soil compaction, and vegetation trajectories in twelve canopy gaps of varying sizes in a hemlock-northern hardwood forest. Each gap contained two fenced and two unfenced plots. Gap size, soil compaction, winter deer use, and available nitrogen were measured in 2011. Vegetation was assessed in 2007 and 2011, and non-metric multidimensional scaling was used to determine vegetative change. Results show that winter deer use was greater in smaller gaps. Deer accessibility did not influence compaction but did significantly increase total available nitrogen in April. April ammonium, April nitrate, and May nitrate were positively related to gap size. The relationship between gap size and vegetative community change was positive for fenced plots, but there was no relationship for unfenced plots. In conclusion, deer are positively contributing to nitrogen dynamics and altering the relationship between canopy gap size and vegetative community change.
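A small sketch of the ordination step, assuming a plot-by-species abundance matrix: non-metric multidimensional scaling (NMDS) is run on a Bray-Curtis dissimilarity matrix via scikit-learn. The community matrix, dimensions, and distance choice are placeholders, not the 2007/2011 survey data or the software actually used in the study.

```python
# NMDS ordination of a simulated plot-by-species community matrix.
import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.manifold import MDS

rng = np.random.default_rng(11)
plots, species = 48, 25                                  # e.g. 12 gaps x 4 plots
community = rng.poisson(3, size=(plots, species)).astype(float)

dissim = squareform(pdist(community, metric="braycurtis"))

nmds = MDS(n_components=2, metric=False, dissimilarity="precomputed",
           n_init=10, random_state=0)
scores = nmds.fit_transform(dissim)                      # 2-D ordination scores per plot
print("stress:", nmds.stress_)
print("first plot ordination scores:", scores[0])
```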
Abstract:
A subset of forest management techniques, termed ecological forestry, has been developed to produce timber while maintaining the ecological integrity of forest communities through practices that more closely mirror natural disturbance regimes. Even though alternative methods have been described and tested, these approaches still need to be established and analyzed in a variety of geographic regions in order to calibrate and measure effectiveness across different forest types. The primary objective of this research project was to assess whether group selection combined with legacy-tree retention could enhance mid-tolerant tree recruitment in a late-successional northern hardwood forest. In order to evaluate a novel alternative regeneration technique, 49 group-selection openings in three size classes were created in 2003, with a biological legacy tree retained in the center of each opening. Twenty reference sites, managed using single-tree selection, were also analyzed for comparison. The specific goals of the project were to: 1) determine the fate and persistence of the openings and legacy trees, 2) assess the understory response of the group-selection openings versus the single-tree selection reference sites, and 3) evaluate the spatial patterns of yellow birch (Betula alleghaniensis Britt.) and eastern hemlock (Tsuga canadensis (L.) Carr.) in the group-selection openings. The results from 8-9 years post-implementation and the changes that occurred between 2004/5 and 2011/12 are discussed. The alternative regeneration technique developed and assessed in this study has the potential to enrich biodiversity in a range of forest types. Projected group-selection opening persistence rates ranged from 41 to 91 years. Openings of 500-1500 m2 are predicted to persist long enough for mid-tolerant tree recruitment. The legacy trees responded well to release and experienced a low mortality rate. Yellow birch (the primary mid-tolerant tree in the study area) densities increased with opening size. Maples surpassed all other species in abundance. In the sapling layer, sugar maple (Acer saccharum Marsh.) was 2 to over 300 times more abundant in the group-selection openings, and 2 to 3 times more abundant in the reference sites, than all other species present. Red maple (Acer rubrum L.) was the second most abundant species in the openings and reference sites. Spatial patterns of yellow birch and eastern hemlock in the openings were mostly aggregated. The southern edges of the largest openings contained the highest densities of yellow birch and eastern hemlock per unit area. Continued monitoring and additional treatments will likely be necessary to ensure that underrepresented species successfully reach maturity.
Abstract:
In this report we investigate the effect of negative energy density in a classic Friedmann cosmology. Although negative energy density has never been measured and may be unphysical, we explore the evolution of a Universe containing a significant cosmological abundance of any of a number of hypothetical stable negative energy components. These negative energy (Ω < 0) forms include negative phantom energy (w < -1), a negative cosmological constant (w = -1), negative domain walls (w = -2/3), negative cosmic strings (w = -1/3), negative mass (w = 0), negative radiation (w = 1/3), and negative ultra-light components (w > 1/3). Assuming that such components generate pressure as perfect fluids, the attractive or repulsive nature of each negative energy component is reviewed. The Friedmann equations can only be balanced when negative energies are coupled to a greater magnitude of positive energy or positive curvature, and minimal cases of both are reviewed. The future and fate of such universes are reviewed in terms of curvature, temperature, acceleration, and energy density, with endings categorized as a Big Crunch, Big Void, or Big Rip and further qualified as "Warped", "Curved", or "Flat"; "Hot" versus "Cold"; and "Accelerating" versus "Decelerating" versus "Coasting". A universe that ends by contracting to zero energy density is termed a Big Poof. Which contracting universes "bounce" into expansion and which expanding universes "turn over" into contraction are also reviewed. The names given to these endings of the Universe are our own nomenclature.
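The balancing act described above can be made concrete with the Friedmann equation written as H^2/H0^2 = Σ_i Ω_i a^(-3(1+w_i)) + Ω_k a^(-2), where each component dilutes according to its equation of state w. The sketch below scans the scale factor for the point where H^2 would go negative, i.e., where an expanding model must turn over into contraction; the two-component example (positive matter plus negative domain walls) is an arbitrary illustration, not a case analyzed in the report.

```python
# Multi-component Friedmann budget: find where the expansion turns over.
import numpy as np

components = [(1.3, 0.0),      # positive matter (w = 0)
              (-0.3, -2/3)]    # negative domain walls (w = -2/3)
omega_k = 1.0 - sum(om for om, _ in components)   # curvature closes the budget today

def E2(a):
    """(H/H0)^2 from the Friedmann equation, summed over components plus curvature."""
    return sum(om * a ** (-3 * (1 + w)) for om, w in components) + omega_k * a ** (-2)

a_grid = np.logspace(0, 2, 2000)             # scale factors from today (a = 1) outward
e2 = np.array([E2(a) for a in a_grid])

if np.any(e2 <= 0):
    a_turn = a_grid[np.argmax(e2 <= 0)]      # first a where H^2 would go negative
    print(f"expansion turns over near a = {a_turn:.2f}")
else:
    print("no turnover: this model expands forever")
```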
Abstract:
In a statistical inference scenario, the estimation of a target signal or its parameters is done by processing data from informative measurements. The estimation performance can be enhanced if we choose the measurements based on criteria that direct our sensing resources so that the measurements are more informative about the parameter we intend to estimate. When taking multiple measurements, the measurements can be chosen online so that more information is extracted from the data in each measurement process. This approach fits well within the Bayesian inference model, which is often used to produce successive posterior distributions of the associated parameter. We explore the sensor array processing scenario for adaptive sensing of a target parameter. The measurement choice is described by a measurement matrix that multiplies the data vector normally associated with array signal processing. The adaptive sensing of both static and dynamic system models is done by the online selection of a proper measurement matrix over time. For the dynamic system model, the target is assumed to move according to some distribution, and the prior distribution is updated at each time step. The information gained through adaptive sensing of the moving target is lost due to the relative shift of the target. The adaptive sensing paradigm has many similarities with compressive sensing. We have attempted to reconcile the two approaches by modifying the observation model of adaptive sensing to match the compressive sensing model for the estimation of a sparse vector.
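A toy sketch of the adaptive-sensing loop for the static linear-Gaussian case: each measurement row is chosen along the direction of largest remaining posterior uncertainty (the leading eigenvector of the posterior covariance), and the Gaussian posterior is then updated with a rank-one Kalman-style step. The dimensions, noise level, and selection rule are illustrative assumptions, not the specific sensor-array or compressive-sensing models of the report.

```python
# Adaptive measurement selection with sequential Gaussian posterior updates.
import numpy as np

rng = np.random.default_rng(5)
d, sigma = 6, 0.1
x_true = rng.standard_normal(d)            # unknown target parameter vector

mu = np.zeros(d)                           # Gaussian prior mean
Sigma = np.eye(d)                          # Gaussian prior covariance

for t in range(12):
    # Choose the measurement vector along the direction of largest posterior variance.
    eigvals, eigvecs = np.linalg.eigh(Sigma)
    a = eigvecs[:, -1]

    y = a @ x_true + sigma * rng.standard_normal()    # take the scalar measurement

    # Conjugate Gaussian posterior update for a rank-one observation y = a^T x + noise.
    Sa = Sigma @ a
    gain = Sa / (a @ Sa + sigma**2)
    mu = mu + gain * (y - a @ mu)
    Sigma = Sigma - np.outer(gain, Sa)

print("estimation error:", np.linalg.norm(mu - x_true))
print("posterior covariance trace:", np.trace(Sigma))
```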