923 results for series-parallel model
Abstract:
The purpose of this study is to develop statistical methodology to facilitate indirect estimation of the concentration of antiretroviral drugs and viral loads in the prostate gland and the seminal vesicle. Differences in antiretroviral drug concentrations between these organs may lead to suboptimal concentrations in one gland compared to the other. Suboptimal levels of the antiretroviral drugs will fail to fully suppress the virus in that gland, creating a source of sexually transmissible virus and increasing the chance of selecting for drug-resistant virus. This information may be useful in selecting an antiretroviral drug regimen that achieves optimal concentrations in most glands of the male genital tract. Using fractionally collected semen ejaculates, Lundquist (1949) measured levels of surrogate markers in each fraction that are uniquely produced by specific male accessory glands. To determine the original glandular concentrations of the surrogate markers, Lundquist solved a system of simultaneous linear equations. This method has several limitations. In particular, it does not yield a unique solution, it does not address measurement error, and it disregards inter-subject variability in the parameters. To cope with these limitations, we developed a mechanistic latent variable model based on the physiology of the male genital tract and the surrogate markers. We employ a Bayesian approach and perform a sensitivity analysis with regard to the distributional assumptions on the random effects and priors. The model and Bayesian approach are validated on experimental data in which the concentration of a drug should be (biologically) differentially distributed between the two glands. In this example, the Bayesian model-based conclusions are found to be robust to model specification, and this hierarchical approach leads to more scientifically valid conclusions than the original methodology. In particular, unlike existing methods, the proposed model-based approach was not affected by a common form of outliers.
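A minimal sketch of the Lundquist-style linear-system step described above, assuming the mixing proportions of gland-specific fluid in each fraction are known from the surrogate markers. All numbers are invented placeholders; the paper's contribution is precisely to replace this deterministic solve with a Bayesian latent-variable model that handles measurement error, non-uniqueness, and inter-subject variability.

# Hypothetical example: 4 semen fractions, 2 glands (prostate, seminal vesicle).
# The measured drug concentration in fraction f is modeled as sum_g P[f, g] * theta[g].
import numpy as np

P = np.array([[0.70, 0.10],      # mixing proportions of (prostate, seminal vesicle)
              [0.45, 0.35],      # fluid in each collected fraction (placeholders)
              [0.20, 0.60],
              [0.10, 0.75]])
theta_true = np.array([8.0, 2.0])    # glandular drug concentrations (placeholder units)
measured = P @ theta_true + np.random.default_rng(0).normal(0.0, 0.3, 4)  # add noise

# Least-squares solution of the over-determined system P theta = measured.
theta_hat, *_ = np.linalg.lstsq(P, measured, rcond=None)
print("recovered glandular concentrations:", theta_hat.round(2))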
Abstract:
Objectives: The goal of the present study was to elucidate the contribution of the newly recognized virulence factor choline to the pathogenesis of Streptococcus pneumoniae in an animal model of meningitis. Results: The choline-containing strain D39Cho(-) and its isogenic choline-free derivative D39Cho(-)licA64, each expressing capsular polysaccharide type 2, were introduced intracisternally at an inoculum size of 10^3 CFU into 11-day-old Wistar rats. During the first 8 h post infection both strains multiplied and stimulated a similar immune response that involved expression of high levels of proinflammatory cytokines, matrix metalloproteinase 9 (MMP-9), and IL-10, and the influx of white blood cells into the CSF. A virtually identical immune response was also elicited by intracisternal inoculation of 10^7 CFU equivalents of either choline-containing or choline-free cell walls. At sampling times past 8 h, strain D39Cho(-) continued to replicate, accompanied by an intense inflammatory response and strong granulocytic pleocytosis. Animals infected with D39Cho(-) died within 20 h, and histopathology revealed brain damage in the cerebral cortex and hippocampus. In contrast, the initial immune response generated by the choline-free strain D39Cho(-)licA64 began to decline after the first 8 h, accompanied by elimination of the bacteria from the CSF in parallel with a strong WBC response peaking at 8 h after infection. All animals survived, and there was no evidence of brain damage. Conclusion: Choline in the cell wall is essential for pneumococci to remain highly virulent, survive within the host, and establish pneumococcal meningitis.
Abstract:
Linear programs, or LPs, are often used in optimization problems, such as improving manufacturing efficiency or maximizing the yield from limited resources. The most common method for solving LPs is the Simplex Method, which will yield a solution, if one exists, over the real numbers. From a purely numerical standpoint this solution is optimal, but quite often we desire an optimal integer solution. A linear program in which the variables are also constrained to be integers is called an integer linear program, or ILP. The focus of this report is to present a parallel algorithm for solving ILPs. We discuss a serial algorithm using a breadth-first branch-and-bound search to check the feasible solution space, and then extend it into a parallel algorithm using a client-server model. In the parallel mode, the search may not be truly breadth-first, depending on the solution time for each node in the solution tree. Our search takes advantage of pruning, often resulting in super-linear improvements in solution time. Finally, we present results from sample ILPs, describe a few modifications to enhance the algorithm and improve solution time, and offer suggestions for future work.
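The serial idea summarized above can be sketched as follows. This is a minimal illustration, not the report's implementation: it uses scipy's LP solver in place of a hand-written Simplex routine, and the two-variable maximization ILP at the end is an invented example.

# Breadth-first branch-and-bound for a small maximization ILP:
#   max c.x  s.t.  A x <= b,  x >= 0,  x integer  (bounded feasible region assumed).
# Each node is a list of per-variable bounds; the LP relaxation is solved at every
# node, and nodes whose relaxed objective cannot beat the incumbent are pruned.
from collections import deque
import math
import numpy as np
from scipy.optimize import linprog

def solve_ilp(c, A, b):
    n = len(c)
    best_val, best_x = -math.inf, None
    queue = deque([[(0, None)] * n])          # root node: x >= 0, no upper bounds
    while queue:
        bounds = queue.popleft()              # FIFO queue -> breadth-first search
        res = linprog(-np.array(c), A_ub=A, b_ub=b, bounds=bounds, method="highs")
        if not res.success or -res.fun <= best_val:
            continue                          # infeasible node, or pruned by the LP bound
        x = res.x
        frac = [i for i, v in enumerate(x) if abs(v - round(v)) > 1e-6]
        if not frac:                          # all-integer LP optimum: new incumbent
            best_val, best_x = -res.fun, np.round(x)
            continue
        i = frac[0]                           # branch on the first fractional variable
        lo, hi = bounds[i]
        down, up = list(bounds), list(bounds)
        down[i] = (lo, math.floor(x[i]))      # child with x_i <= floor(x_i*)
        up[i] = (math.ceil(x[i]), hi)         # child with x_i >= ceil(x_i*)
        queue.append(down)
        queue.append(up)
    return best_val, best_x

# Example: max 5x0 + 4x1  s.t.  6x0 + 4x1 <= 24,  x0 + 2x1 <= 6  (optimum 20 at x = (4, 0)).
print(solve_ilp([5, 4], [[6, 4], [1, 2]], [24, 6]))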
Abstract:
Intense liver regeneration and almost 100% survival follow partial hepatectomy of up to 70% of liver mass in rodents. More extensive resections of 70 to 80% have an increased mortality, and partial hepatectomies of >80% consistently lead to acute hepatic failure and death in mice. The aim of the study was to determine the effect of systemically administered granulocyte colony stimulating factor (G-CSF) on animal survival and liver regeneration in a small-for-size liver remnant mouse model after 83% partial hepatectomy (liver weight <0.8% of mouse body weight). Methods: Male Balb/c mice (n=80, 20-24g) were preconditioned daily for five days with 5μg G-CSF subcutaneously or sham injected (aqua ad inj.). Subsequently, 83% hepatic resection was performed and daily sham or G-CSF injection continued. Survival was determined in both groups (G-CSF: n=35; sham: n=33). In a second series, BrdU was injected (50mg/kg body weight) two hours prior to tissue harvest and animals were euthanized 36 and 48 hours after 83% liver resection (n=3 each group). To measure hepatic regeneration, the BrdU labeling index and Ki67 expression were determined by immunohistochemistry by two independent observers. Harvested liver tissue was dried to constant weight at 65°C for 48 hours. Results: Survival was 0% in the sham group on day 3 postoperatively and significantly better (26.2% on day 7 and thereafter) in the G-CSF group (log rank test: p<0.0001). Dry liver weight was increased in the G-CSF group (t-test: p<0.05) 36 hours after 83% partial hepatectomy. Ki67 expression was elevated in the G-CSF group at 36 hours (2.8±2.6% (standard deviation) vs 0.03±0.2%; rank sum test: p<0.0001) and at 48 hours (45.1±34.6% vs 0.7±1.0%; rank sum test: p<0.0001) after 83% liver resection. BrdU labeling at 48 hours was 0.1±0.3% in the sham and 35.2±34.2% in the G-CSF group (rank sum test: p<0.0001). Conclusions: The surgical 83% resection mouse model is suitable for testing hepatic supportive regimens in the setting of small-for-size liver remnants. Administration of G-CSF supports hepatic regeneration after microsurgical 83% partial hepatectomy and leads to improved long-term survival in the mouse. G-CSF might prove to be a clinically valuable supportive substance for small-for-size liver remnants in humans after major hepatic resections due to primary or secondary liver tumors or in the setting of living related liver donation.
Abstract:
To mitigate greenhouse gas (GHG) emissions and reduce U.S. dependence on imported oil, the United States (U.S.) is pursuing several options to create biofuels from renewable woody biomass (hereafter referred to as “biomass”). Because of the distributed nature of biomass feedstock, the cost and complexity of biomass recovery operations pose significant challenges that hinder increased biomass utilization for energy production. To facilitate the exploration of a wide variety of conditions that promise profitable biomass utilization and tap unused forest residues, it is proposed to develop biofuel supply chain models based on optimization and simulation approaches. The biofuel supply chain is structured around four components: biofuel facility locations and sizes, biomass harvesting/forwarding, transportation, and storage. A Geographic Information System (GIS) based approach is proposed as a first step for selecting potential facility locations for biofuel production from forest biomass based on a set of evaluation criteria, such as accessibility to biomass, the railway/road transportation network, water bodies, and workforce. The development of optimization and simulation models is also proposed. The results of the models will be used to determine (1) the number, location, and size of the biofuel facilities, and (2) the amounts of biomass to be transported between the harvesting areas and the biofuel facilities over a 20-year timeframe. The multi-criteria objective is to minimize the weighted sum of the delivered feedstock cost, energy consumption, and GHG emissions simultaneously (a weighted-sum objective of this form is sketched after this abstract). Finally, a series of sensitivity analyses will be conducted to identify the sensitivity of the decisions, such as the optimal site selected for the biofuel facility, to changes in influential parameters, such as biomass availability and transportation fuel price. Intellectual Merit: The proposed research will facilitate the exploration of a wide variety of conditions that promise profitable biomass utilization in the renewable biofuel industry. The GIS-based facility location analysis considers a series of factors which have not been considered simultaneously in previous research. Location analysis is critical to the financial success of producing biofuel. The modeling of woody biomass supply chains using both optimization and simulation, combined with the GIS-based approach as a precursor, has not been done to date. The optimization and simulation models can help to ensure the economic and environmental viability and sustainability of the entire biofuel supply chain at both the strategic design level and the operational planning level. Broader Impacts: The proposed models for biorefineries can be applied to other types of manufacturing or processing operations using biomass. This is because the biomass feedstock supply chain is similar, if not the same, for biorefineries, biomass-fired or co-fired power plants, or torrefaction/pelletization operations. Additionally, the results of this research will continue to be disseminated internationally through publications in journals, such as Biomass and Bioenergy and Renewable Energy, and presentations at conferences, such as the 2011 Industrial Engineering Research Conference. For example, part of the research work related to biofuel facility identification has been published: Zhang, Johnson and Sutherland [2011] (see Appendix A).
There will also be opportunities for the Michigan Tech campus community to learn about the research through the Sustainable Future Institute.
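A weighted-sum objective of the kind referred to above can be written, in sketch form, as

\[
\min_{x}\; w_1\, C_{\text{feedstock}}(x) + w_2\, E_{\text{energy}}(x) + w_3\, G_{\text{GHG}}(x),
\qquad w_1 + w_2 + w_3 = 1,\; w_i \ge 0,
\]

where x collects the facility-location, sizing, and biomass-flow decisions. The weights w_i and the cost, energy, and emission terms are illustrative placeholders rather than the proposal's actual formulation.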
Abstract:
Ultra-high performance fiber reinforced concrete (UHPFRC) has arisen from the implementation of a variety of concrete engineering and materials science concepts developed over the last century. This material offers superior strength, serviceability, and durability over its conventional counterparts. One of the most important advantages of UHPFRC over other concrete materials is its ability to resist fracture through the use of randomly dispersed discontinuous fibers and improvements to the fiber-matrix bond. Of particular interest are the material's ability to achieve higher loads after first crack and its high fracture toughness. In this research, a study of the fracture behavior of UHPFRC with steel fibers was conducted to examine the effect of several parameters on the fracture behavior and to develop a fracture model based on a non-linear curve fit of the data. To this end, a series of three-point bending tests were performed on various single edge notched prisms (SENPs). Compression tests were also performed for quality assurance. Testing was conducted on specimens of different cross-sections, span/depth (S/D) ratios, curing regimes, ages, and fiber contents. By comparing the results from prisms of different sizes, this study examines the weakening mechanism due to the size effect. Furthermore, by employing the concept of fracture energy, it was possible to compare fracture toughness and ductility. The model was determined based on a fit to P-w fracture curves, which was cross-referenced against the test results for comparability. Once obtained, the model was compared to the model proposed by the AFGC in 2003 and to the ACI 544 model for conventional fiber reinforced concretes.
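For notched prisms of this kind, fracture energy is commonly estimated from the P-w (load versus crack opening) curve by the work-of-fracture relation below; this is the standard textbook form, not necessarily the exact definition used in the thesis, and it omits self-weight corrections.

\[
G_F = \frac{W_0}{A_{\mathrm{lig}}} = \frac{\int_0^{w_{\max}} P(w)\,\mathrm{d}w}{b\,(d - a_0)},
\]

where W_0 is the area under the load-opening curve, b is the specimen width, d its depth, and a_0 the notch depth, so that b(d - a_0) is the ligament area.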
Abstract:
As awareness of potential human and environmental impacts from toxins has increased, so has the development of innovative sensors. Bacteriorhodopsin (bR) is a light-activated proton pump contained in the purple membrane (PM) of the bacterium Halobacterium salinarum. Bacteriorhodopsin is a robust protein which can function in both wet and dry states and can withstand extreme environmental conditions. A single-electron transistor (SET) is a nano-scale device that exploits the quantum mechanical properties of electrons to switch on and off. SETs have tremendous potential in practical applications due to their size, ultra-low power requirements, and electrometer-like sensitivity. The main goal of this research was to create a bionanohybrid device by integrating bR with a SET device. This was achieved by a multidisciplinary approach. The SET devices were created by a combination of sputtering, photolithography, and focused ion beam machining. The bionanomaterial bacteriorhodopsin was created through oxidative fermentation and a series of transmembrane purification processes. The bR was then integrated with the SET by electrophoretic deposition, creating a bionanohybrid device. The bionanohybrid device was then characterized using a semiconductor parametric analyzer. Characterization demonstrated that the bR modulated the operational characteristics of the SET when the bR was activated with light within its absorbance spectrum. To effectively integrate bacteriorhodopsin with microelectromechanical systems (MEMS) and nanoelectromechanical systems (NEMS), it is critical to know the electrical properties of the material and to understand how it will affect the functionality of the device. Tests were performed on dried films of bR to determine whether there is a relationship between inductance, capacitance, and resistance (LCR) measurements and orientation, light on/off state, frequency, and time. The results indicated that the LCR measurements of the bR depended on the thickness and area of the film, but not on the orientation, as with other biological materials such as muscle. However, there was a transient LCR response for both oriented and unoriented bR which depended on light intensity. From the impedance measurements, an empirical model was suggested for the bionanohybrid device. The empirical model is based on the dominant electrical characteristics of the bR, which were the parallel capacitance and resistance. The empirical model suggests that it is possible to integrate bR with a SET without influencing its functional characteristics.
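For reference, a parallel resistance-capacitance element of the kind identified above as dominant has the textbook impedance

\[
Z(\omega) = \frac{R}{1 + j\omega R C},
\qquad |Z(\omega)| = \frac{R}{\sqrt{1 + (\omega R C)^2}},
\]

so the measured impedance rolls off above the characteristic frequency \(\omega_c = 1/(RC)\). This relation is a generic sketch of the model structure, not the thesis's fitted parameter values.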
Abstract:
Fossil fuel consumption in the transportation sector in the United States (U.S.) has caused a range of societal issues, including health-related air pollution, climate change, dependence on imported oil, and other oil-related national security concerns. Biofuels produced from various lignocellulosic biomass types, such as wood, forest residues, and agricultural residues, have the potential to replace a substantial portion of total fossil fuel consumption. This research focuses on locating biofuel facilities and designing the biofuel supply chain to minimize the overall cost. For this purpose, an integrated methodology was proposed that combines GIS technology with simulation and optimization modeling methods. As a precursor to simulation and optimization modeling, the GIS-based methodology was used to preselect potential facility locations for biofuel production from forest biomass by employing a series of decision factors; the resulting candidate sites served as inputs for the simulation and optimization models. Candidate locations were selected based on a set of evaluation criteria, including: county boundaries, a railroad transportation network, a state/federal road transportation network, water body (rivers, lakes, etc.) dispersion, city and village dispersion, a population census, biomass production, and no co-location with co-fired power plants. The simulation and optimization models were built around key supply activities, including biomass harvesting/forwarding, transportation, and storage. The built onsite storage served the spring breakup period, when road restrictions were in place and truck transportation on certain roads was limited. Both models were evaluated using multiple performance indicators, including cost (consisting of the delivered feedstock cost and inventory holding cost), energy consumption, and GHG emissions. The impacts of energy consumption and GHG emissions were expressed in monetary terms to remain consistent with cost. Compared with the optimization model, the simulation model provides a more dynamic look at a 20-year operation by considering the impacts associated with building inventory at the biorefinery to address the limited availability of biomass feedstock during the spring breakup period. The number of trucks required per day was estimated, and the inventory level was tracked year-round. Through the exchange of information across different procedures (harvesting, transportation, and biomass feedstock processing), a smooth flow of biomass from harvesting areas to a biofuel facility was implemented. The optimization model was developed to address issues related to locating multiple biofuel facilities simultaneously. The size of each potential biofuel facility was set up with an upper bound of 50 MGY and a lower bound of 30 MGY. The optimization model is a static, Mathematical Programming Language (MPL)-based application which allows for sensitivity analysis by changing inputs to evaluate different scenarios. It was found that annual biofuel demand and biomass availability impact the optimal biofuel facility locations and sizes.
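A minimal sketch of a capacitated facility-location formulation of the kind described above, written with the open-source PuLP modeler rather than MPL. The sites, supplies, demand, facility bounds usage, and cost figures below are invented placeholders, not values from this research.

# Capacitated facility location: choose which candidate sites to open (binary) and
# how much biomass to ship from each harvesting area, minimizing fixed plus hauling
# costs, with each opened facility sized between 30 and 50 MGY as in the abstract.
import pulp

areas = ["A1", "A2", "A3"]                     # harvesting areas (hypothetical)
sites = ["S1", "S2"]                           # candidate sites from the GIS step (hypothetical)
supply = {"A1": 40, "A2": 35, "A3": 25}        # available biomass per area (placeholder)
haul_cost = {("A1", "S1"): 3.0, ("A1", "S2"): 5.0,
             ("A2", "S1"): 4.0, ("A2", "S2"): 2.5,
             ("A3", "S1"): 6.0, ("A3", "S2"): 3.5}   # delivered cost per unit (placeholder)
fixed_cost = {"S1": 120.0, "S2": 150.0}        # annualized facility cost (placeholder)
lo, hi = 30, 50                                # facility size bounds, MGY
demand = 60                                    # total feedstock required (placeholder)

m = pulp.LpProblem("biofuel_facility_location", pulp.LpMinimize)
open_site = pulp.LpVariable.dicts("open", sites, cat="Binary")
flow = pulp.LpVariable.dicts("flow", list(haul_cost), lowBound=0)

# Objective: fixed facility costs plus delivered-feedstock (hauling) costs.
m += (pulp.lpSum(fixed_cost[j] * open_site[j] for j in sites)
      + pulp.lpSum(haul_cost[a, j] * flow[a, j] for (a, j) in haul_cost))

m += pulp.lpSum(flow[a, j] for (a, j) in haul_cost) >= demand   # meet total demand
for a in areas:                                 # cannot ship more than each area supplies
    m += pulp.lpSum(flow[a, j] for j in sites) <= supply[a]
for j in sites:                                 # a site operates within [lo, hi] only if opened
    inflow = pulp.lpSum(flow[a, j] for a in areas)
    m += inflow <= hi * open_site[j]
    m += inflow >= lo * open_site[j]

m.solve()
print(pulp.LpStatus[m.status], {j: open_site[j].value() for j in sites})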
Abstract:
BACKGROUND: Wheezing disorders in childhood vary widely in clinical presentation and disease course. In recent years, several ways to classify wheezing children into different disease phenotypes have been proposed and are increasingly used for clinical guidance, but validation of these hypothetical entities is difficult. METHODOLOGY/PRINCIPAL FINDINGS: The aim of this study was to develop a testable disease model which reflects the full spectrum of wheezing illness in preschool children. We performed a qualitative study among a panel of 7 experienced clinicians from 4 European countries working in primary, secondary and tertiary paediatric care. In a series of questionnaire surveys and structured discussions, we found a general consensus that preschool wheezing disorders consist of several phenotypes, with great heterogeneity of specific disease concepts between clinicians. Initially, 24 disease entities were described among the 7 physicians. In structured discussions, these could be narrowed down to three entities which were linked to proposed mechanisms: a) allergic wheeze, b) non-allergic wheeze due to structural airway narrowing, and c) non-allergic wheeze due to increased immune response to viral infections. This disease model will serve to create an artificial dataset that allows the validation of data-driven multidimensional methods, such as cluster analysis, which have been proposed for the identification of wheezing phenotypes in children. CONCLUSIONS/SIGNIFICANCE: While there appears to be wide agreement among clinicians that wheezing disorders consist of several diseases, there is less agreement regarding their number and nature. A great diversity of disease concepts exists, but a unified phenotype classification reflecting underlying disease mechanisms is lacking. We propose a disease model which may help guide future research so that proposed mechanisms are measured at the right time and their role in disease heterogeneity can be studied.
Abstract:
The three-step test is central to the regulation of copyright limitations at the international level. Delineating the room for exemptions with abstract criteria, the three-step test is by far the most important and comprehensive basis for the introduction of national use privileges. It is an essential, flexible element in the international limitation infrastructure that allows national law makers to satisfy domestic social, cultural, and economic needs. Given the universal field of application that follows from the test’s open-ended wording, the provision creates much more breathing space than the more specific exceptions recognized in international copyright law. EC copyright legislation, however, fails to take advantage of the flexibility inherent in the three-step test. Instead of using the international provision as a means to open up the closed EC catalogue of permissible exceptions, offer sufficient breathing space for social, cultural, and economic needs, and enable EC copyright law to keep pace with the rapid development of the Internet, the Copyright Directive 2001/29/EC encourages the application of the three-step test to further restrict statutory exceptions that are often defined narrowly in national legislation anyway. In the current online environment, however, enhanced flexibility in the field of copyright limitations is indispensable. From a social and cultural perspective, the web 2.0 promotes and enhances freedom of expression and information with its advanced search engine services, interactive platforms, and various forms of user-generated content. From an economic perspective, it creates a parallel universe of traditional content providers relying on copyright protection, and emerging Internet industries whose further development depends on robust copyright limitations. In particular, the newcomers in the online market – social networking sites, video forums, and virtual worlds – promise a remarkable potential for economic growth that has already attracted the attention of the OECD. Against this background, the time is ripe to debate the introduction of an EC fair use doctrine on the basis of the three-step test. Otherwise, EC copyright law is likely to frustrate important opportunities for cultural, social, and economic development. To lay the groundwork for the debate, the differences between the continental European and the Anglo-American approach to copyright limitations (section 1), and the specific merits of these two distinct approaches (section 2), will be discussed first. An analysis of current problems that have arisen under the present dysfunctional EC system (section 3) will then serve as a starting point for proposing an EC fair use doctrine based on the three-step test (section 4). Drawing conclusions, the international dimension of this fair use proposal will be considered (section 5).
Abstract:
Fossil pollen data from stratigraphic cores are irregularly spaced in time due to non-linear age-depth relations. Moreover, their marginal distributions may vary over time. We address these features in a nonparametric regression model with errors that are monotone transformations of a latent continuous-time Gaussian process Z(T). Although Z(T) is unobserved, due to monotonicity and under suitable regularity conditions, it can be recovered, facilitating further computations such as estimation of the long-memory parameter and the Hermite coefficients. The estimation of Z(T) itself involves estimation of the marginal distribution function of the regression errors. These issues are considered in proposing a plug-in algorithm for optimal bandwidth selection and construction of confidence bands for the trend function. Some high-resolution time series of pollen records from Lago di Origlio in Switzerland, which go back ca. 20,000 years, are used to illustrate the methods.
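A minimal sketch of the recovery step described above, under the stated assumption that the regression errors are a monotone transform of a latent Gaussian process: the trend is removed with a kernel smoother, the errors' marginal distribution is estimated empirically, and Z(T) is recovered through a probit transform. The synthetic data, Gaussian kernel, and fixed bandwidth are illustrative placeholders; the paper proposes a plug-in bandwidth rather than the fixed value used here.

# Irregularly spaced y(t) = trend(t) + G(Z(t)), with G monotone (here a cubing map).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 10, 400))                      # irregular sampling times
z_true = np.convolve(rng.standard_normal(500),
                     np.ones(80) / np.sqrt(80), "same")[:400]
z_true = (z_true - z_true.mean()) / z_true.std()          # correlated Gaussian stand-in for Z(t)
y = np.sin(t) + 0.3 * z_true**3

def kernel_smooth(t, y, bandwidth=0.5):
    """Nadaraya-Watson estimate of the trend at the observation times."""
    d = (t[:, None] - t[None, :]) / bandwidth
    w = np.exp(-0.5 * d**2)
    return (w * y[None, :]).sum(axis=1) / w.sum(axis=1)

resid = y - kernel_smooth(t, y)

# Empirical marginal distribution of the errors, then probit transform:
# Z_hat = Phi^{-1}(F_hat(resid)); the n+1 denominator keeps the transform finite.
ranks = resid.argsort().argsort() + 1
z_hat = norm.ppf(ranks / (len(resid) + 1))

print("corr(z_hat, z_true) ≈", np.corrcoef(z_hat, z_true)[0, 1].round(2))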
Abstract:
The Earth's bow shock is very efficient in accelerating ions out of the incident solar wind distribution to high energies (≈ 200 keV/e). Fluxes of energetic ions accelerated at the quasi-parallel bow shock, also known as diffuse ions, are best represented by exponential spectra in energy/charge, which require additional assumptions to be incorporated into these model spectra. One of these assumptions is a so-called "free escape boundary" along the interplanetary magnetic field in the upstream direction. Locations along the IBEX orbit are ideally suited for in situ measurements to investigate the existence of an upstream free escape boundary for bow shock accelerated ions. In this study we use 2 years of ion measurements from the Background Monitor on the IBEX spacecraft, supported by ACE solar wind observations. The IBEX Background Monitor is sensitive to protons > 14 keV, which includes the energy of the maximum flux for diffuse ions. With increasing distance from the bow shock along the interplanetary magnetic field, the count rates stay constant for diffuse ions streaming away from the bow shock, while count rates for diffuse ions streaming toward the shock gradually decrease to ~1/e of their maximum value at distances of about 10 RE to 14 RE. These observations of a gradual decrease support the transition to a free escape continuum for ions of energy >14 keV at distances of 10 RE to 14 RE from the bow shock.
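The distance profile described above can be summarized by an e-folding distance, i.e. the distance at which the sunward-streaming count rate falls to 1/e of its maximum. The sketch below fits an exponential decay to such a profile; the data points are invented placeholders, not IBEX Background Monitor measurements, and the fitting step is illustrative rather than the study's analysis.

# Estimate an e-folding distance from count rate versus distance along the field line.
import numpy as np
from scipy.optimize import curve_fit

distance_re = np.array([2, 4, 6, 8, 10, 12, 14, 16])              # distance from shock [R_E]
count_rate = np.array([9.8, 8.1, 6.6, 5.3, 4.2, 3.4, 2.9, 2.5])   # counts/s (placeholder)

def expo(d, rate0, d_efold):
    """Exponential decay: rate0 * exp(-d / d_efold)."""
    return rate0 * np.exp(-d / d_efold)

(rate0, d_efold), _ = curve_fit(expo, distance_re, count_rate, p0=(10.0, 10.0))
print(f"e-folding distance ≈ {d_efold:.1f} R_E")   # rate drops to rate0/e at this distance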
Abstract:
The accuracy of Global Positioning System (GPS) time series is degraded by the presence of offsets. To assess the effectiveness of methods that detect and remove these offsets, we designed and managed the Detection of Offsets in GPS Experiment. We simulated time series that mimicked realistic GPS data consisting of a velocity component, offsets, and white and flicker noise (noise with a 1/f spectrum) combined in an additive model. The data set was made available to the GPS analysis community without revealing the offsets, and several groups conducted blind tests with a range of detection approaches. The results show that, at present, manual methods (where offsets are hand-picked) almost always give better results than automated or semi-automated methods (two automated methods give velocity biases quite similar to the best manual solutions). For instance, the 5th to 95th percentile range in velocity bias for automated approaches is equal to 4.2 mm/yr (most commonly ±0.4 mm/yr from the truth), whereas it is equal to 1.8 mm/yr for the manual solutions (most commonly 0.2 mm/yr from the truth). The magnitude of offsets detectable by manual solutions is smaller than for automated solutions, with the smallest detectable offset for the best manual and automatic solutions equal to 5 mm and 8 mm, respectively. Assuming the simulated time series noise levels are representative of real GPS time series, geophysical interpretation of individual site velocities lower than 0.2–0.4 mm/yr is therefore certainly not robust, although a limit nearer to 1 mm/yr would be a more conservative choice. Further work to improve offset detection in GPS coordinate time series is required before we can routinely interpret sub-mm/yr velocities for single GPS stations.
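A minimal sketch of the additive simulation model described above: a linear velocity term, step offsets, white noise, and flicker (1/f power spectrum) noise approximated by frequency-domain shaping. The amplitudes, offset epochs, and sizes are illustrative placeholders, not the experiment's settings; the naive fit at the end only illustrates the velocity bias that offset detection is meant to remove.

import numpy as np

rng = np.random.default_rng(1)
n_days = 3650                               # ten years of daily positions
t_yr = np.arange(n_days) / 365.25

def flicker_noise(n, amplitude, rng):
    """Approximate 1/f noise by shaping white noise in the frequency domain."""
    freqs = np.fft.rfftfreq(n, d=1.0)
    spectrum = rng.standard_normal(freqs.size) + 1j * rng.standard_normal(freqs.size)
    spectrum[1:] /= np.sqrt(freqs[1:])      # power ~ 1/f  =>  amplitude ~ 1/sqrt(f)
    spectrum[0] = 0.0
    noise = np.fft.irfft(spectrum, n)
    return amplitude * noise / noise.std()

velocity = 3.0                              # mm/yr (placeholder)
offsets = {1200: 6.0, 2500: -4.0}           # day index -> offset size in mm (placeholder)

series = velocity * t_yr
for day, size in offsets.items():
    series[day:] += size                    # step change from that epoch onward
series += rng.normal(0.0, 1.0, n_days)      # white noise, ~1 mm
series += flicker_noise(n_days, 2.0, rng)   # flicker noise, ~2 mm

# A naive velocity estimate that ignores the offsets shows the resulting bias.
vel_hat = np.polyfit(t_yr, series, 1)[0]
print(f"true velocity 3.0 mm/yr, naive estimate {vel_hat:.2f} mm/yr")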