922 results for series-parallel model
Abstract:
Multi-site time series studies of air pollution and mortality and morbidity have figured prominently in the literature as comprehensive approaches for estimating acute effects of air pollution on health. Hierarchical models are generally used to combine site-specific information and estimate pooled air pollution effects, taking into account both within-site statistical uncertainty and across-site heterogeneity. Within a site, characteristics of time series data of air pollution and health (small pollution effects, missing data, highly correlated predictors, nonlinear confounding, etc.) make modelling all sources of uncertainty challenging. One potential consequence is underestimation of the statistical variance of the site-specific effects to be combined. In this paper we investigate the impact of variance underestimation on the pooled relative rate estimate. We focus on two-stage normal-normal hierarchical models and on underestimation of the statistical variance at the first stage. By mathematical considerations and simulation studies, we found that variance underestimation does not affect the pooled estimate substantially. However, some sensitivity of the pooled estimate to variance underestimation is observed when the number of sites is small and the underestimation is severe. These simulation results are applicable to any two-stage normal-normal hierarchical model for combining site-specific results, and they can be easily extended to more general hierarchical formulations. We also examined the impact of variance underestimation on the national average relative rate estimate from the National Morbidity Mortality Air Pollution Study and found that variance underestimation of as much as 40% has little effect on the national average.
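The two-stage normal-normal pooling described above can be sketched with a standard random-effects estimator. The DerSimonian-Laird moment estimator below is an assumption for illustration; the paper does not fix a specific between-site variance estimator. Scaling the first-stage variances down mimics the variance underestimation the paper studies:

```python
def dl_pool(betas, variances):
    """Two-stage normal-normal pooled estimate: inverse-variance weights
    with a DerSimonian-Laird moment estimate of the between-site
    variance tau^2 (illustrative choice, not the paper's exact method)."""
    k = len(betas)
    w = [1.0 / v for v in variances]                  # first-stage precisions
    sw = sum(w)
    b_fe = sum(wi * bi for wi, bi in zip(w, betas)) / sw
    q = sum(wi * (bi - b_fe) ** 2 for wi, bi in zip(w, betas))
    tau2 = max(0.0, (q - (k - 1)) / (sw - sum(wi * wi for wi in w) / sw))
    w_re = [1.0 / (v + tau2) for v in variances]      # random-effects weights
    return sum(wi * bi for wi, bi in zip(w_re, betas)) / sum(w_re)
```

Calling `dl_pool` once with the reported variances and once with the variances scaled by 0.6 (i.e., 40% underestimation) probes the sensitivity of the pooled estimate; with symmetric site estimates and equal variances the pooled value is unchanged, consistent with the paper's finding.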
Abstract:
While many time-series studies of ozone and daily mortality identified positive associations, others yielded null or inconclusive results. We performed a meta-analysis of 144 effect estimates from 39 time-series studies, and estimated pooled effects by lags, age groups, cause-specific mortality, and concentration metrics. We compared results to estimates from the National Morbidity, Mortality, and Air Pollution Study (NMMAPS), a time-series study of 95 large U.S. cities from 1987 to 2000. Both the meta-analysis and NMMAPS results provided strong evidence of a short-term association between ozone and mortality, with larger effects for cardiovascular and respiratory mortality, the elderly, and current-day ozone exposure as compared to other single-day lags. In both analyses, results were not sensitive to adjustment for particulate matter and model specifications. In the meta-analysis we found that a 10 ppb increase in daily ozone is associated with a 0.83% (95% confidence interval: 0.53%, 1.12%) increase in total mortality, whereas the corresponding NMMAPS estimate is 0.25% (0.12%, 0.39%). Meta-analysis results were consistently larger than those from NMMAPS, indicating publication bias. Additional evidence of publication bias appears in the choice of lags in time-series studies and in the larger heterogeneity of posterior city-specific estimates in the meta-analysis as compared with NMMAPS.
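The effect sizes above are percent increases per 10 ppb; on the log-relative-rate scale used in such time-series models, the conversion is pct = 100·(exp(β·Δ) − 1). A minimal sketch (the function names are illustrative, not from the paper):

```python
import math

def pct_increase(beta, delta):
    # percent increase in mortality for a `delta`-unit rise in ozone,
    # given log-relative-rate coefficient `beta` per unit
    return 100.0 * (math.exp(beta * delta) - 1.0)

def beta_from_pct(pct, delta):
    # inverse: recover the per-unit log-relative-rate from a reported percent
    return math.log(1.0 + pct / 100.0) / delta
```

For example, `beta_from_pct(0.83, 10)` recovers the log-relative-rate behind the pooled 0.83% estimate.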
Abstract:
Numerous time series studies have provided strong evidence of an association between increased levels of ambient air pollution and increased levels of hospital admissions, typically at 0, 1, or 2 days after an air pollution episode. An important research aim is to extend existing statistical models so that a more detailed understanding of the time course of hospitalization after exposure to air pollution can be obtained. Information about this time course, combined with prior knowledge about biological mechanisms, could provide the basis for hypotheses concerning the mechanism by which air pollution causes disease. Previous studies have identified two important methodological questions: (1) How can we estimate the shape of the distributed lag between increased air pollution exposure and increased mortality or morbidity? and (2) How should we estimate the cumulative population health risk from short-term exposure to air pollution? Distributed lag models are appropriate tools for estimating air pollution health effects that may be spread over several days. However, estimation for distributed lag models in air pollution and health applications is hampered by the substantial noise in the data and the inherently weak signal that is the target of investigation. We introduce a hierarchical Bayesian distributed lag model that incorporates prior information about the time course of pollution effects and combines information across multiple locations. The model has a connection to penalized spline smoothing using a special type of penalty matrix. We apply the model to estimating the distributed lag between exposure to particulate matter air pollution and hospitalization for cardiovascular and respiratory disease using data from a large United States air pollution and hospitalization database of Medicare enrollees in 94 counties covering the years 1999-2002.
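The distributed lag structure can be made concrete with a design matrix whose columns are the current and previous days' exposures; the cumulative effect of a sustained increase is the sum of the lag coefficients. A minimal sketch (pure Python; this is the data layout only, not the paper's hierarchical Bayesian estimator):

```python
def lag_matrix(x, max_lag):
    """Design matrix for a distributed lag model: row t holds the
    exposure on day t and on each of the previous max_lag days."""
    n = len(x)
    return [[x[t - l] for l in range(max_lag + 1)] for t in range(max_lag, n)]

def cumulative_effect(lag_coefs):
    # total effect of a sustained unit increase in exposure
    return sum(lag_coefs)
```

Fitting a regression of daily admissions on these columns (with the paper's prior smoothing the coefficients across lags) yields the lag-specific effects.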
Abstract:
A time series is a sequence of observations made over time. Examples in public health include daily ozone concentrations, weekly admissions to an emergency department or annual expenditures on health care in the United States. Time series models are used to describe the dependence of the response at each time on predictor variables including covariates and possibly previous values in the series. Time series methods are necessary to account for the correlation among repeated responses over time. This paper gives an overview of time series ideas and methods used in public health research.
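As a concrete example of the serial dependence these methods account for, the lag-1 autocorrelation of a series can be computed directly (a minimal sketch; real analyses would use a statistics package):

```python
def lag1_autocorr(x):
    """Sample lag-1 autocorrelation: correlation of the series with
    itself shifted by one time step."""
    n = len(x)
    m = sum(x) / n
    num = sum((x[t] - m) * (x[t - 1] - m) for t in range(1, n))
    den = sum((xi - m) ** 2 for xi in x)
    return num / den
```

A value near zero suggests little day-to-day dependence; values far from zero signal the correlation that time series models must handle.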
Abstract:
The purpose of this study is to develop statistical methodology to facilitate indirect estimation of the concentration of antiretroviral drugs and viral loads in the prostate gland and the seminal vesicle. The differences in antiretroviral drug concentrations in these organs may lead to suboptimal concentrations in one gland compared to the other. Suboptimal levels of the antiretroviral drugs will not fully suppress the virus in that gland, leading to a source of sexually transmissible virus and increasing the chance of selecting for drug-resistant virus. This information may be useful in selecting an antiretroviral drug regimen that will achieve optimal concentrations in most of the male genital tract glands. Using fractionally collected semen ejaculates, Lundquist (1949) measured levels of surrogate markers in each fraction that are uniquely produced by specific male accessory glands. To determine the original glandular concentrations of the surrogate markers, Lundquist solved a series of simultaneous linear equations. This method has several limitations. In particular, it does not yield a unique solution, it does not address measurement error, and it disregards inter-subject variability in the parameters. To cope with these limitations, we developed a mechanistic latent variable model based on the physiology of the male genital tract and surrogate markers. We employ a Bayesian approach and perform a sensitivity analysis with regard to the distributional assumptions on the random effects and priors. The model and Bayesian approach are validated on experimental data where the concentration of a drug should be (biologically) differentially distributed between the two glands. In this example, the Bayesian model-based conclusions are found to be robust to model specification, and this hierarchical approach leads to more scientifically valid conclusions than the original methodology. In particular, unlike existing methods, the proposed model-based approach was not affected by a common form of outliers.
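Lundquist's linear-equation step can be illustrated for two glands and two fractions: each fraction's marker level is a volume-weighted mix of the two glandular concentrations, giving a 2x2 linear system. A minimal sketch with hypothetical numbers (this is the classical step that the paper's Bayesian latent variable model replaces):

```python
def glandular_concentrations(shares, marker):
    """Recover two glandular marker concentrations from two semen
    fractions.  shares[i] = (gland-1, gland-2) volume shares of
    fraction i; marker[i] = observed marker level in fraction i.
    Solves the 2x2 system shares @ c = marker by Cramer's rule."""
    det = shares[0][0] * shares[1][1] - shares[0][1] * shares[1][0]
    c1 = (marker[0] * shares[1][1] - marker[1] * shares[0][1]) / det
    c2 = (shares[0][0] * marker[1] - shares[1][0] * marker[0]) / det
    return c1, c2
```

As the abstract notes, this deterministic inversion ignores measurement error and inter-subject variability, which motivates the hierarchical Bayesian treatment.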
Abstract:
Objectives: The goal of the present study was to elucidate the contribution of the newly recognized virulence factor choline to the pathogenesis of Streptococcus pneumoniae in an animal model of meningitis. Results: The choline-containing strain D39Cho(-) and its isogenic choline-free derivative D39Cho(-)licA64 - each expressing capsule polysaccharide 2 - were introduced intracisternally at an inoculum size of 10^3 CFU into 11-day-old Wistar rats. During the first 8 h post-infection both strains multiplied and stimulated a similar immune response that involved expression of high levels of proinflammatory cytokines, the matrix metalloproteinase 9 (MMP-9), IL-10, and the influx of white blood cells into the CSF. A virtually identical immune response was also elicited by intracisternal inoculation of 10^7 CFU equivalents of either choline-containing or choline-free cell walls. At sampling times past 8 h, strain D39Cho(-) continued to replicate, accompanied by an intense inflammatory response and strong granulocytic pleocytosis. Animals infected with D39Cho(-) died within 20 h, and histopathology revealed brain damage in the cerebral cortex and hippocampus. In contrast, the initial immune response generated by the choline-free strain D39Cho(-)licA64 began to decline after the first 8 h, accompanied by elimination of the bacteria from the CSF in parallel with a strong WBC response peaking at 8 h after infection. All animals survived, and there was no evidence of brain damage. Conclusion: Choline in the cell wall is essential for pneumococci to remain highly virulent, survive within the host, and establish pneumococcal meningitis.
Abstract:
Linear programs, or LPs, are often used in optimization problems, such as improving manufacturing efficiency or maximizing the yield from limited resources. The most common method for solving LPs is the simplex method, which will yield a solution, if one exists, over the real numbers. From a purely numerical standpoint it will be an optimal solution, but quite often we desire an optimal integer solution. A linear program in which the variables are also constrained to be integers is called an integer linear program, or ILP. The focus of this report is to present a parallel algorithm for solving ILPs. We discuss a serial algorithm using a breadth-first branch-and-bound search to check the feasible solution space, and then extend it into a parallel algorithm using a client-server model. In the parallel algorithm, the search may not be truly breadth-first, depending on the solution time for each node in the solution tree. Our search takes advantage of pruning, often resulting in super-linear improvements in solution time. Finally, we present results from sample ILPs, describe a few modifications to enhance the algorithm and improve solution time, and offer suggestions for future work.
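The serial breadth-first branch-and-bound idea can be sketched for a restricted class of ILPs (nonnegative objective coefficients, positive weights, a single ≤ constraint — a simplification; the report's algorithm and its bounds are more general). Nodes fix the variables left to right, and a simple per-variable optimistic bound drives the pruning:

```python
from collections import deque

def solve_ilp(c, a, b, ub):
    """Maximize sum(c[i]*x[i]) subject to sum(a[i]*x[i]) <= b,
    0 <= x[i] <= ub[i], x[i] integer.  Assumes c[i] >= 0, a[i] > 0."""
    n = len(c)
    best_val, best_x = -1, None
    queue = deque([([], 0, b)])          # (fixed prefix, value so far, capacity left)
    while queue:
        x, val, cap = queue.popleft()    # FIFO order -> breadth-first search
        i = len(x)
        if i == n:                       # leaf: all variables fixed
            if val > best_val:
                best_val, best_x = val, x
            continue
        # optimistic bound: each free variable maxed out independently
        bound = val + sum(c[j] * min(ub[j], cap / a[j]) for j in range(i, n))
        if bound <= best_val:
            continue                     # prune: subtree cannot beat incumbent
        for v in range(min(ub[i], cap // a[i]) + 1):
            queue.append((x + [v], val + c[i] * v, cap - a[i] * v))
    return best_val, best_x
```

The client-server parallelization described in the report would hand queued subtrees to worker processes, which is why the order may drift from strictly breadth-first.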
Abstract:
Intense liver regeneration and almost 100% survival follow partial hepatectomy of up to 70% of liver mass in rodents. More extensive resections of 70 to 80% have an increased mortality, and partial hepatectomies of >80% constantly lead to acute hepatic failure and death in mice. The aim of the study was to determine the effect of systemically administered granulocyte colony stimulating factor (G-CSF) on animal survival and liver regeneration in a small-for-size liver remnant mouse model after 83% partial hepatectomy (liver weight <0.8% of mouse body weight). Methods: Male Balb C mice (n=80, 20-24 g) were preconditioned daily for five days with 5 μg G-CSF subcutaneously or sham injected (aqua ad inj.). Subsequently, 83% hepatic resection was performed and daily sham or G-CSF injection continued. Survival was determined in both groups (G-CSF: n=35; sham: n=33). In a second series, BrdU was injected (50 mg/kg body weight) two hours prior to tissue harvest and animals were euthanized 36 and 48 hours after 83% liver resection (n=3 each group). To measure hepatic regeneration, the BrdU labeling index and Ki67 expression were determined by immunohistochemistry by two independent observers. Harvested liver tissue was dried to constant weight at 65 °C for 48 hours. Results: Survival was 0% in the sham group on day 3 postoperatively and significantly better (26.2% on day 7 and thereafter) in the G-CSF group (log rank test: p<0.0001). Dry liver weight was increased in the G-CSF group (t-test: p<0.05) 36 hours after 83% partial hepatectomy. Ki67 expression was elevated in the G-CSF group at 36 hours (2.8±2.6% (standard deviation) vs 0.03±0.2%; rank sum test: p<0.0001) and at 48 hours (45.1±34.6% vs 0.7±1.0%; rank sum test: p<0.0001) after 83% liver resection. BrdU labeling at 48 hours was 0.1±0.3% in the sham and 35.2±34.2% in the G-CSF group (rank sum test: p<0.0001). Conclusions: The surgical 83% resection mouse model is suitable to test hepatic supportive regimens in the setting of small-for-size liver remnants. Administration of G-CSF supports hepatic regeneration after microsurgical 83% partial hepatectomy and leads to improved long-term survival in the mouse. G-CSF might prove to be a clinically valuable supportive substance in small-for-size liver remnants in humans after major hepatic resections due to primary or secondary liver tumors or in the setting of living related liver donation.
Abstract:
To mitigate greenhouse gas (GHG) emissions and reduce U.S. dependence on imported oil, the United States (U.S.) is pursuing several options to create biofuels from renewable woody biomass (hereafter referred to as “biomass”). Because of the distributed nature of biomass feedstock, the cost and complexity of biomass recovery operations pose significant challenges that hinder increased biomass utilization for energy production. To facilitate the exploration of a wide variety of conditions that promise profitable biomass utilization and to tap unused forest residues, it is proposed to develop biofuel supply chain models based on optimization and simulation approaches. The biofuel supply chain is structured around four components: biofuel facility locations and sizes, biomass harvesting/forwarding, transportation, and storage. A Geographic Information System (GIS) based approach is proposed as a first step for selecting potential facility locations for biofuel production from forest biomass based on a set of evaluation criteria, such as accessibility to biomass, railway/road transportation network, water bodies, and workforce. The development of optimization and simulation models is also proposed. The results of the models will be used to determine (1) the number, location, and size of the biofuel facilities, and (2) the amounts of biomass to be transported between the harvesting areas and the biofuel facilities over a 20-year timeframe. The multi-criteria objective is to minimize the weighted sum of the delivered feedstock cost, energy consumption, and GHG emissions simultaneously. Finally, a series of sensitivity analyses will be conducted to identify the sensitivity of the decisions, such as the optimal site selected for the biofuel facility, to changes in influential parameters, such as biomass availability and transportation fuel price.
Intellectual Merit: The proposed research will facilitate the exploration of a wide variety of conditions that promise profitable biomass utilization in the renewable biofuel industry. The GIS-based facility location analysis considers a series of factors which have not been considered simultaneously in previous research. Location analysis is critical to the financial success of producing biofuel. The modeling of woody biomass supply chains using both optimization and simulation, combined with the GIS-based approach as a precursor, has not been done to date. The optimization and simulation models can help to ensure the economic and environmental viability and sustainability of the entire biofuel supply chain at both the strategic design level and the operational planning level. Broader Impacts: The proposed models for biorefineries can be applied to other types of manufacturing or processing operations using biomass. This is because the biomass feedstock supply chain is similar, if not the same, for biorefineries, biomass-fired or co-fired power plants, or torrefaction/pelletization operations. Additionally, the results of this research will continue to be disseminated internationally through publications in journals, such as Biomass and Bioenergy and Renewable Energy, and presentations at conferences, such as the 2011 Industrial Engineering Research Conference. For example, part of the research work related to biofuel facility identification has been published: Zhang, Johnson and Sutherland [2011] (see Appendix A). There will also be opportunities for the Michigan Tech campus community to learn about the research through the Sustainable Future Institute.
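The weighted-sum multi-criteria objective described above (delivered feedstock cost, energy consumption, GHG emissions) can be sketched as a scoring function over candidate sites; the weights and metric values below are illustrative only, not from the proposal:

```python
def weighted_score(metrics, weights):
    """Weighted sum of delivered cost, energy use, and GHG emissions
    (all assumed already expressed in comparable monetary units)."""
    return sum(weights[k] * metrics[k] for k in weights)

def best_site(candidates, weights):
    """candidates: {site_name: {"cost": ..., "energy": ..., "ghg": ...}};
    returns the site with the lowest weighted score."""
    return min(candidates, key=lambda s: weighted_score(candidates[s], weights))
```

Varying the weights is one simple way to run the kind of sensitivity analysis the proposal describes.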
Abstract:
Ultra-high performance fiber reinforced concrete (UHPFRC) has arisen from the implementation of a variety of concrete engineering and materials science concepts developed over the last century. This material offers superior strength, serviceability, and durability over its conventional counterparts. One of the most important differences of UHPFRC over other concrete materials is its ability to resist fracture through the use of randomly dispersed discontinuous fibers and improvements to the fiber-matrix bond. Of particular interest is the material's ability to achieve higher loads after first crack, as well as its high fracture toughness. In this research, a study of the fracture behavior of UHPFRC with steel fibers was conducted to examine the effect of several parameters related to the fracture behavior and to develop a fracture model based on a non-linear curve fit of the data. To determine this, a series of three-point bending tests were performed on various single edge notched prisms (SENPs). Compression tests were also performed for quality assurance. Testing was conducted on specimens of different cross-sections, span/depth (S/D) ratios, curing regimes, ages, and fiber contents. By comparing the results from prisms of different sizes, this study examines the weakening mechanism due to the size effect. Furthermore, by employing the concept of fracture energy it was possible to obtain a comparison of the fracture toughness and ductility. The model was determined based on a fit to P-w fracture curves, which was cross-referenced for comparability to the results. Once obtained, the model was compared to the models proposed by the AFGC in 2003 and to the ACI 544 model for conventional fiber reinforced concretes.
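The fracture-energy comparison mentioned above reduces to the work of fracture: the area under the measured P-w (load versus crack-opening) curve divided by the ligament area. A minimal sketch using trapezoidal integration (the thesis's non-linear model fit is not reproduced here):

```python
def fracture_energy(w, p, ligament_area):
    """Work of fracture: trapezoidal area under the load-displacement
    (P-w) curve, normalized by the ligament (fracture) area."""
    work = sum(0.5 * (p[i] + p[i + 1]) * (w[i + 1] - w[i])
               for i in range(len(p) - 1))
    return work / ligament_area
```

Computing this quantity for prisms of different sizes is what allows the toughness and size-effect comparisons described in the abstract.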
Abstract:
As awareness of potential human and environmental impacts from toxins has increased, so has the development of innovative sensors. Bacteriorhodopsin (bR) is a light-activated proton pump contained in the purple membrane (PM) of the bacterium Halobacterium salinarum. Bacteriorhodopsin is a robust protein which can function in both wet and dry states and can withstand extreme environmental conditions. A single electron transistor (SET) is a nano-scale device that exploits the quantum mechanical properties of electrons to switch on and off. SETs have tremendous potential in practical applications due to their size, ultra-low power requirements, and electrometer-like sensitivity. The main goal of this research was to create a bionanohybrid device by integrating bR with a SET device. This was achieved by a multidisciplinary approach. The SET devices were created by a combination of sputtering, photolithography, and focused ion beam machining. The bionanomaterial bacteriorhodopsin was created through oxidative fermentation and a series of transmembrane purification processes. The bR was then integrated with the SET by electrophoretic deposition, creating a bionanohybrid device. The bionanohybrid device was then characterized using a semiconductor parametric analyzer. Characterization demonstrated that the bR modulated the operational characteristics of the SET when activated with light within its absorbance spectrum. To effectively integrate bacteriorhodopsin with microelectromechanical systems (MEMS) and nanoelectromechanical systems (NEMS), it is critical to know the electrical properties of the material and to understand how it will affect the functionality of the device. Tests were performed on dried films of bR to determine whether there is a relationship between inductance, capacitance, and resistance (LCR) measurements and orientation, light on/off, frequency, and time.
The results indicated that the LCR measurements of the bR depended on the thickness and area of the film, but not on the orientation, as with other biological materials such as muscle. However, there was a transient LCR response for both oriented and unoriented bR which depended on light intensity. From the impedance measurements an empirical model was suggested for the bionanohybrid device. The empirical model is based on the dominant electrical characteristics of the bR which were the parallel capacitance and resistance. The empirical model suggests that it is possible to integrate bR with a SET without influencing its functional characteristics.
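The empirical model's dominant elements, a parallel capacitance and resistance, have the textbook impedance Z(ω) = R / (1 + jωRC). A minimal sketch (the component values in the test are illustrative, not the measured bR parameters):

```python
import math

def parallel_rc_impedance(r, c, freq_hz):
    """Complex impedance of a resistor R and capacitor C in parallel:
    Z = R / (1 + j*omega*R*C), with omega = 2*pi*f."""
    omega = 2.0 * math.pi * freq_hz
    return r / (1.0 + 1j * omega * r * c)
```

At DC the capacitor is open and Z equals R; at the corner frequency ω = 1/(RC) the magnitude falls to R/√2, the frequency dependence the LCR measurements probe.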
Abstract:
A range of societal issues has been caused by fossil fuel consumption in the transportation sector in the United States (U.S.), including health-related air pollution, climate change, dependence on imported oil, and other oil-related national security concerns. Biofuels produced from various lignocellulosic biomass types such as wood, forest residues, and agricultural residues have the potential to replace a substantial portion of the total fossil fuel consumption. This research focuses on locating biofuel facilities and designing the biofuel supply chain to minimize the overall cost. For this purpose an integrated methodology was proposed by combining GIS technology with simulation and optimization modeling methods. As a precursor to simulation and optimization modeling, the GIS-based methodology was used to preselect potential biofuel facility locations for biofuel production from forest biomass by employing a series of decision factors. Candidate locations were selected based on a set of evaluation criteria, including: county boundaries, a railroad transportation network, a state/federal road transportation network, water body (rivers, lakes, etc.) dispersion, city and village dispersion, a population census, biomass production, and no co-location with co-fired power plants. The resulting candidate sites for biofuel production served as inputs for simulation and optimization modeling. The simulation and optimization models were built around key supply activities including biomass harvesting/forwarding, transportation, and storage. The built onsite storage served the spring breakup period, when road restrictions were in place and truck transportation on certain roads was limited.
Both models were evaluated using multiple performance indicators, including cost (consisting of the delivered feedstock cost and inventory holding cost), energy consumption, and GHG emissions. The impacts of energy consumption and GHG emissions were expressed in monetary terms to keep them consistent with cost. Compared with the optimization model, the simulation model represents a more dynamic look at a 20-year operation by considering the impacts associated with building inventory at the biorefinery to address the limited availability of biomass feedstock during the spring breakup period. The number of trucks required per day was estimated and the year-round inventory level was tracked. Through the exchange of information across different procedures (harvesting, transportation, and biomass feedstock processing), a smooth flow of biomass from harvesting areas to a biofuel facility was implemented. The optimization model was developed to address issues related to locating multiple biofuel facilities simultaneously. The size of a potential biofuel facility is set with an upper bound of 50 MGY and a lower bound of 30 MGY. The optimization model is a static, Mathematical Programming Language (MPL)-based application which allows for sensitivity analysis by changing inputs to evaluate different scenarios. It was found that annual biofuel demand and biomass availability impact the optimal biofuel facility locations and sizes.
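The simulation model's inventory tracking through the spring breakup period can be sketched as a simple daily balance; the delivery and demand figures in the test are illustrative, not the study's data:

```python
def simulate_inventory(deliveries, demand, start=0):
    """End-of-day feedstock inventory at the biorefinery, given daily
    deliveries and daily processing demand.  A negative entry flags a
    shortfall (e.g. when road restrictions halt deliveries)."""
    inv, trace = start, []
    for dlv, dem in zip(deliveries, demand):
        inv += dlv - dem
        trace.append(inv)
    return trace
```

Building inventory ahead of the restricted season, as the simulation model does, keeps every entry of the trace nonnegative.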