847 results for Discrete Regression and Qualitative Choice Models
Abstract:
Alzheimer's disease (AD), the most prevalent form of age-related dementia, is a multifactorial and heterogeneous neurodegenerative disease. The molecular mechanisms underlying the pathogenesis of AD are still largely unknown; however, its etiopathogenesis likely resides in the interaction between genetic and environmental risk factors. Among the factors that contribute to the pathogenesis of AD, amyloid-beta peptides and the genetic risk factor apoE4 are prominent on the basis of genetic evidence and experimental data. ApoE4 transgenic mice show deficits in spatial learning and memory associated with inflammation and brain atrophy. Evidence suggests that apoE4 is implicated in amyloid-beta accumulation, in the imbalance of the cellular antioxidant system, and in apoptotic phenomena. The mechanisms by which apoE4 interacts with other AD risk factors to increase susceptibility to dementia are still unknown. The aim of this research was to provide new insights into the molecular mechanisms of AD neurodegeneration by investigating the effect of amyloid-beta peptides and apoE4 genotype on the modulation of genes and proteins involved in different ways in cellular processes related to aging and oxidative balance, namely PIN1, SIRT1, PSEN1, BDNF, TRX1 and GRX1. In particular, we used human neuroblastoma cells exposed to amyloid-beta or to apoE3 and apoE4 proteins at different time points, and selected brain regions of human apoE3 and apoE4 targeted replacement mice, as in vitro and in vivo models, respectively. All the genes and proteins studied in the present investigation are modulated by amyloid-beta and apoE4 in different ways, suggesting their involvement in the neurodegenerative mechanisms underlying AD. Finally, these proteins might represent novel diagnostic and therapeutic targets in AD.
Abstract:
The present work is a collection of three essays devoted to understanding the determinants and implications of firms' adoption of environmental innovations (EI), approached from different but closely related Schumpeterian perspectives. Each essay is an empirical analysis that investigates one original research question, formulated to fill gaps that emerged in the previous literature, as the broad introduction of this thesis outlines. The first chapter is devoted to the determinants of EI, focusing on the role that knowledge sources external to the boundaries of the firm, such as those coming from suppliers, customers, or research organizations, play in spurring their adoption. The second chapter asks what induces climate change technologies, adopting a regional and sectoral lens, and explores the relation among green knowledge generation, inducement of climate change technologies, and environmental performance. Chapter 3 analyzes the economic implications of the adoption of EI for firms, and proposes to disentangle EI into different typologies of innovation, such as externality-reducing innovations and energy- and resource-efficient innovations. Each chapter exploits a different dataset and different econometric models, which broadens the applicability of the results and mitigates the limits that the choice of any single dataset engenders. The first and third chapters are based on empirical investigations of microdata, i.e., firm-level data extracted from innovation surveys. The second chapter is based on the analysis of patent data on green technologies extracted from the PATSTAT and REGPAT databases. A general concluding chapter follows the three essays and outlines how each chapter filled the research gaps that emerged, how its results can be interpreted, which policy implications can be derived, and which lines of future research are possible in the field.
Abstract:
Model-based calibration has gained popularity in recent years as a method to optimize increasingly complex engine systems. However, virtually all model-based techniques are applied to steady-state calibration; transient calibration is by and large an emerging technology. An important piece of any transient calibration process is the ability to constrain the optimizer to treat the problem as a dynamic one and not as a quasi-static process. The optimized air-handling parameters corresponding to any instant of time must be achievable in a transient sense, which in turn depends on the trajectory of the same parameters over previous time instances. In this work, dynamic constraint models have been proposed to translate commanded air-handling parameters into those actually achieved. These models enable the optimization to be realistic in a transient sense. The air-handling system has been treated as a linear second-order system with PD control, with parameters for this second-order system extracted from real transient data. The model has been shown to be the best choice relative to a list of appropriate candidates such as neural networks and first-order models. The selected second-order model was used in conjunction with transient emission models to predict emissions over the FTP cycle. It has been shown that emission predictions based on air-handling parameters predicted by the dynamic constraint model do not differ significantly from corresponding emissions based on measured air-handling parameters.
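To make the dynamic constraint idea concrete, here is a minimal sketch that passes a commanded air-handling parameter (e.g., a boost-pressure target) through a linear second-order plant under PD control; the gains, natural frequency, and damping below are hypothetical placeholders, not the values identified from the transient data in this work.

```python
import numpy as np

def achieved_trajectory(command, dt=0.01, wn=2.0, zeta=0.7, kp=4.0, kd=0.5):
    """Translate a commanded air-handling trajectory into an 'achieved'
    one via a second-order plant (natural frequency wn, damping zeta)
    with PD control. All parameters are illustrative placeholders."""
    y = np.zeros_like(command)          # achieved parameter value
    ydot = 0.0                          # achieved rate of change
    prev_err = command[0] - y[0]
    for k in range(1, len(command)):
        err = command[k] - y[k - 1]
        u = kp * err + kd * (err - prev_err) / dt    # PD control effort
        # Plant dynamics: y'' = wn^2 (u - y) - 2 zeta wn y'
        yddot = wn**2 * (u - y[k - 1]) - 2.0 * zeta * wn * ydot
        ydot += yddot * dt
        y[k] = y[k - 1] + ydot * dt
        prev_err = err
    return y

# A step in the commanded value: the achieved value lags and settles,
# which is exactly why a quasi-static optimum may be unreachable at any
# given instant of a transient.
t = np.arange(0.0, 5.0, 0.01)
cmd = np.where(t < 1.0, 1.0, 1.6)
achieved = achieved_trajectory(cmd)
```

A transient optimizer constrained by such a model can only request trajectories the hardware could plausibly follow.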
Abstract:
Vascular surgical training currently has to cope with various challenges, including restrictions on work hours, a significant reduction of open surgical training cases in many countries, an increasing diversity of open and endovascular procedures, and distinct expectations by trainees. Even more importantly, patients and the public no longer accept a "learning by doing" training philosophy that leaves the learning curve on the patient's side. The Vascular International (VI) Foundation and School aims to overcome these obstacles by training conventional vascular and endovascular techniques before they are applied to patients. To achieve largely realistic training conditions, lifelike pulsatile models with exchangeable synthetic arterial inlays were created to practice carotid endarterectomy and patch plasty, open abdominal aortic aneurysm surgery, and peripheral bypass surgery, as well as endovascular procedures, including endovascular aneurysm repair, thoracic endovascular aortic repair, peripheral balloon dilatation, and stenting. All models are equipped with a small internal pressure pump to create pulsatile flow conditions with variable peak pressures of ~90 mm Hg. The VI course schedule consists of a series of 2-hour modules teaching different open or endovascular procedures step-by-step in a standardized fashion. Trainees practice in pairs with continuous supervision and intensive advice provided by highly experienced vascular surgical trainers (trainer-to-trainee ratio 1:4). Several evaluations of these courses show that tutor-assisted training on lifelike models in an education-centered and motivating environment is associated with a significant increase in general and specific vascular surgical technical competence within a short period of time. Future studies should evaluate whether these benefits positively influence the future learning curve of vascular surgical trainees and clarify to what extent sophisticated models are useful for assessing the technical skills of vascular surgical residents at national or international board examinations. This article gives an overview of our experiences from >20 years of practical training of beginners and advanced vascular surgeons using lifelike pulsatile vascular surgical training models.
Abstract:
The aim of this project was to evaluate the present state and possible changes of water resources in Lake Ladoga and its drainage basin for the purposes of the sustainable development of North-Western Russia and Finland. The group assessed the state of the water resources in quantitative and qualitative terms, taking the system of sustainable development indicators suggested by the International Commission on Sustainable Development as a basis for assessment. These include pressure indicators (annual withdrawals of ground and surface water, domestic consumption of water per capita), state indicators (ground water reserves, concentration of faecal coliforms in fresh water, biochemical oxygen demand), and response indicators (waste-water treatment coverage, density of hydrological networks). The group proposed the following additional indicators and indices for the complex evaluation of the qualitative and quantitative state of the region's water resources:
* Pressure indicators (external load, coefficient of anthropogenic pressure)
* State indicators and indices (concentrations of chemicals in water, concentrations of chemicals in sediments, index of water pollution, critical load, critical limit, internal load, load/critical load, concentration/critical limit, internal load/external load, trophic state, biotic indicators and indices)
* Response indicators (discharges of pure water, polluted water, partly treated water and the ratio between these, trans-boundary fluxes of pollutants, state expenditure on environmental protection, human life span)
The assessment considered both temporal and spatial aspects and produced a regional classification of the area according to the index of water pollution. Mathematical models were developed to describe and forecast the processes under way in the lake; they can be used to estimate the influence of climatic changes on the hydrological regime and of anthropogenic load on the trophic state of Lake Ladoga, and to assess the consequences of accidental discharges of polluting admixtures of different kinds into the lake. The results of this mathematical modelling may be of use to decision-makers responsible for the management of water resources.
Abstract:
BACKGROUND: Many studies showing effects of traffic-related air pollution on health rely on self-reported exposure, which may be inaccurate. We estimated the association between self-reported exposure to road traffic and respiratory symptoms in preschool children, and investigated whether the effect could have been caused by reporting bias. METHODS: In a random sample of 8700 preschool children in Leicestershire, UK, exposure to road traffic and respiratory symptoms were assessed by a postal questionnaire (response rate 80%). The association between traffic exposure and respiratory outcomes was assessed using unconditional logistic regression and conditional regression models (matching by postcode). RESULTS: Prevalence odds ratios (95% confidence intervals) for self-reported road traffic exposure, comparing the categories 'moderate' and 'dense', respectively, with 'little or no', were for current wheezing: 1.26 (1.13-1.42) and 1.30 (1.09-1.55); chronic rhinitis: 1.18 (1.05-1.31) and 1.31 (1.11-1.56); night cough: 1.17 (1.04-1.32) and 1.36 (1.14-1.62); and bronchodilator use: 1.20 (1.04-1.38) and 1.18 (0.95-1.46). A matched analysis comparing only symptomatic and asymptomatic children living at the same postcode (and thus exposed to similar road traffic) showed similar ORs, suggesting that parents of children with respiratory symptoms reported more road traffic than parents of asymptomatic children. CONCLUSIONS: Our study suggests that reporting bias could explain some or even all of the association between reported exposure to road traffic and disease. Over-reporting of exposure by only 10% of parents of symptomatic children would be sufficient to produce the effect sizes shown in this study. Future research should be based only on objective measurements of traffic exposure.
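The closing claim about reporting bias is easy to sanity-check by simulation. The sketch below uses made-up prevalence and exposure figures (not the Leicestershire data) to show that over-reporting of exposure by 10% of parents of symptomatic children inflates the odds ratio to roughly the magnitudes reported, even when the true effect is null.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# True exposure and symptoms are independent: the null hypothesis.
exposed_true = rng.random(n) < 0.30     # assumed 30% truly exposed
symptomatic = rng.random(n) < 0.15      # assumed 15% symptom prevalence

# Reporting bias: 10% of parents of symptomatic, truly unexposed
# children over-report exposure; all other reports are truthful.
over_report = symptomatic & ~exposed_true & (rng.random(n) < 0.10)
exposed_reported = exposed_true | over_report

def odds_ratio(exposure, outcome):
    a = np.sum(exposure & outcome)       # exposed cases
    b = np.sum(exposure & ~outcome)      # exposed non-cases
    c = np.sum(~exposure & outcome)      # unexposed cases
    d = np.sum(~exposure & ~outcome)     # unexposed non-cases
    return (a * d) / (b * c)

print(odds_ratio(exposed_true, symptomatic))      # ~1.0, no true effect
print(odds_ratio(exposed_reported, symptomatic))  # ~1.3, bias alone
```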
Abstract:
In epidemiological work, outcomes are frequently non-normal, sample sizes may be large, and effects are often small. To relate health outcomes to geographic risk factors, fast and powerful methods for fitting spatial models, particularly for non-normal data, are required. We focus on binary outcomes, with the risk surface a smooth function of space. We compare penalized likelihood models, including the penalized quasi-likelihood (PQL) approach, and Bayesian models based on fit, speed, and ease of implementation. A Bayesian model using a spectral basis representation of the spatial surface provides the best tradeoff of sensitivity and specificity in simulations, detecting real spatial features while limiting overfitting, and is computationally more efficient than other Bayesian approaches. One of the contributions of this work is further development of this underused representation. The spectral basis model outperforms the penalized likelihood methods, which are prone to overfitting, but is slower to fit and not as easily implemented. Conclusions based on a real dataset of cancer cases in Taiwan are similar, albeit less conclusive, with respect to comparing the approaches. The success of the spectral basis with binary data, and similar results with count data, suggest that it may be generally useful in spatial models and more complicated hierarchical models.
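For readers unfamiliar with the spectral basis idea, here is a minimal non-Bayesian sketch: the spatial logit surface is expanded in a handful of low-frequency Fourier terms and fitted by ridge-penalized IRLS, with the penalty standing in for the shrinkage prior of the paper's Bayesian model. Basis size, penalty, and data are placeholders.

```python
import numpy as np

def fourier_basis(coords, k=4):
    """Low-frequency 2-D Fourier (spectral) basis at coords scaled to
    [0, 1]^2; returns an (n, 2*(k*k - 1) + 1) design matrix."""
    s = (coords - coords.min(0)) / (coords.max(0) - coords.min(0))
    cols = [np.ones(len(s))]
    for fx in range(k):
        for fy in range(k):
            if fx == fy == 0:
                continue
            phase = 2 * np.pi * (fx * s[:, 0] + fy * s[:, 1])
            cols += [np.cos(phase), np.sin(phase)]
    return np.column_stack(cols)

def penalized_logistic_irls(X, y, lam=1.0, iters=25):
    """Ridge-penalized logistic regression via IRLS; the L2 penalty
    limits overfitting much as the Bayesian prior would."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        w = p * (1.0 - p)
        H = X.T @ (w[:, None] * X) + lam * np.eye(X.shape[1])
        g = X.T @ (y - p) - lam * beta
        beta += np.linalg.solve(H, g)
    return beta

# Toy data: a smooth bump of elevated risk near the centre of the map.
rng = np.random.default_rng(1)
xy = rng.random((2000, 2))
logit = -1.0 + 2.5 * np.exp(-20 * ((xy[:, 0] - 0.5)**2 + (xy[:, 1] - 0.5)**2))
y = (rng.random(2000) < 1.0 / (1.0 + np.exp(-logit))).astype(float)
beta = penalized_logistic_irls(fourier_basis(xy), y)
```

Because the basis has far fewer columns than there are observations, the fit scales to large epidemiological data sets, which is the computational advantage the paper exploits.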
Abstract:
Generalized linear mixed models (GLMMs) provide an elegant framework for the analysis of correlated data. Due to the non-closed form of the likelihood, GLMMs are often fit by computational procedures such as penalized quasi-likelihood (PQL). Special cases of these models are generalized linear models (GLMs), which are often fit using algorithms like iteratively weighted least squares (IWLS). High computational costs and memory space constraints often make it difficult to apply these iterative procedures to data sets with a very large number of cases. This paper proposes a computationally efficient strategy based on the Gauss-Seidel algorithm that iteratively fits sub-models of the GLMM to subsetted versions of the data. Additional gains in efficiency are achieved for Poisson models, commonly used in disease mapping problems, because of their special collapsibility property, which allows data reduction through summaries. Convergence of the proposed iterative procedure is guaranteed for canonical link functions. The strategy is applied to investigate the relationship between ischemic heart disease, socioeconomic status and age/gender category in New South Wales, Australia, based on outcome data consisting of approximately 33 million records. A simulation study demonstrates the algorithm's reliability in analyzing a data set with 12 million records for a (non-collapsible) logistic regression model.
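A schematic of the blockwise strategy, under stated assumptions: the coefficient vector is partitioned into blocks, and each block is refitted in turn by IWLS with the other blocks' contributions held fixed in an offset, Gauss-Seidel style. The collapsing comment and all names are illustrative, not the paper's implementation.

```python
import numpy as np

def poisson_irls_block(Xb, y, offset, iters=10):
    """Fit one coefficient block of a Poisson GLM (log link), holding
    the rest of the linear predictor fixed inside `offset`."""
    beta = np.zeros(Xb.shape[1])
    for _ in range(iters):
        mu = np.exp(offset + Xb @ beta)
        H = Xb.T @ (mu[:, None] * Xb)
        g = Xb.T @ (y - mu)
        beta += np.linalg.solve(H, g)
    return beta

def gauss_seidel_glm(blocks, y, exposure, sweeps=20):
    """Cycle through design-matrix blocks Gauss-Seidel style.
    Collapsibility means y and exposure can first be summed within
    unique covariate patterns, shrinking millions of records to a
    manageable number of rows without changing the likelihood."""
    betas = [np.zeros(Xb.shape[1]) for Xb in blocks]
    for _ in range(sweeps):
        for j, Xb in enumerate(blocks):
            off = np.log(exposure) + sum(
                blocks[m] @ betas[m]
                for m in range(len(blocks)) if m != j)
            betas[j] = poisson_irls_block(Xb, y, off)
    return betas
```

Each sweep touches only one block's columns at a time, which is what keeps the memory footprint small on data sets of tens of millions of records.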
Abstract:
This paper proposes Poisson log-linear multilevel models to investigate population variability in sleep state transition rates. We specifically propose a Bayesian Poisson regression model that is more flexible, more scalable to larger studies, and more easily fitted than other models in the literature. We further use hierarchical random effects to account for pairings of individuals and repeated measures within those individuals, as comparing diseased to non-diseased subjects while minimizing bias is of epidemiologic importance. We estimate essentially non-parametric piecewise constant hazards and smooth them, and allow for time-varying covariates and segment-of-the-night comparisons. The Bayesian Poisson regression is justified through a re-derivation of a classical algebraic likelihood equivalence between Poisson regression with a log(time) offset and survival regression assuming piecewise constant hazards. This relationship allows us to synthesize two methods currently used to analyze sleep transition phenomena: stratified multi-state proportional hazards models and log-linear models with GEE for transition counts. An example data set from the Sleep Heart Health Study is analyzed.
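The likelihood equivalence at the heart of the paper can be stated compactly; this is a sketch in our own notation, assuming a piecewise constant hazard \(\lambda_j = \exp(x_{ij}^\top \beta)\) on interval \(j\), event indicator \(d_{ij} \in \{0, 1\}\), and time at risk \(t_{ij}\):

\[
\ell_{ij}
= d_{ij}\log\lambda_j - t_{ij}\lambda_j
= \big(d_{ij}\log\mu_{ij} - \mu_{ij}\big) - d_{ij}\log t_{ij},
\qquad \mu_{ij} = t_{ij}\lambda_j .
\]

The term \(d_{ij}\log t_{ij}\) does not involve \(\beta\), so maximizing the piecewise exponential survival likelihood is the same as fitting a Poisson regression to the event indicators with \(\log\mu_{ij} = \log t_{ij} + x_{ij}^\top\beta\), i.e., with a log(time) offset.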
Abstract:
Reproductive skew theory seeks to integrate social and ecological factors thought to influence the division of reproduction among group-living animals. However, most reproductive skew models only examine interactions between individuals of the same sex. Here, we suggest that females can influence group stability and conflict among males by modifying their clutch size and may do so if they benefit from the presence of subordinate male helpers or from reduced conflict. We develop 3 models, based on concessions-based, restraint, and tug-of-war models, in which female clutch size is variable and ask when females will increase their clutch size above that which would be optimal in the absence of male-male conflict. In concessions-based and restraint models, females should increase clutch size above their optima if the benefits of staying for subordinate males are relatively low. Relatedness between males has no effect on clutch size. When females do increase clutch size, the division of reproduction between males is not influenced by relatedness and does not differ between restraint and concessions-based models. Both of these predictions are in sharp contrast to previous models. In tug-of-war models, clutch size is strongly influenced by relatedness between males, with the largest clutches, but the fewest surviving offspring, produced when males are unrelated. These 3 models demonstrate the importance of considering third-party interests in the decisions of group-living organisms.
Abstract:
The emissions, filtration and oxidation characteristics of a diesel oxidation catalyst (DOC) and a catalyzed particulate filter (CPF) in a Johnson Matthey catalyzed continuously regenerating trap (CCRT®) were studied using computational models. Experimental data needed to calibrate the models were obtained through characterization experiments with raw exhaust sampling from a Cummins ISM 2002 engine with variable geometry turbocharging (VGT) and programmed exhaust gas recirculation (EGR). The experiments were performed at 20, 40, 60 and 75% of full load (1120 Nm) at rated speed (2100 rpm), with and without the DOC upstream of the CPF, in order to study the effect of temperature and CPF-inlet NO2 concentrations on particulate matter oxidation in the CCRT®. A previously developed computational model was used to determine the kinetic parameters describing the oxidation characteristics of HCs, CO and NO in the DOC and the pressure drop across it. The model was calibrated at five temperatures in the range of 280-465 °C and exhaust volumetric flow rates of 0.447-0.843 act-m3/sec. The downstream HC, CO and NO concentrations were predicted by the DOC model to within ±3 ppm. The HC and CO oxidation kinetics in this temperature and flow rate range can be represented by one 'apparent' activation energy and pre-exponential factor; the NO oxidation kinetics in the same range require 'apparent' activation energies and pre-exponential factors in two regimes. The DOC pressure drop was always predicted by the model to within 0.5 kPa. The MTU 1-D 2-layer CPF model was enhanced in several ways to better model the performance of the CCRT®. A model was developed to simulate the oxidation of particulate inside the filter wall. A particulate cake layer filtration model, which describes particle filtration in terms of more fundamental parameters, was developed and coupled to the wall oxidation model. To better model the particulate oxidation kinetics, a model was developed to account for the NO2 produced in the washcoat of the CPF. The overall 1-D 2-layer model can be used to predict the pressure drop of the exhaust gas across the filter, the evolution of particulate mass inside the filter, the particulate mass oxidized, the filtration efficiency, and the particle number distribution downstream of the CPF. The model was used to better understand the internal performance of the CCRT® by determining the components of the total pressure drop across the filter, by classifying the total particulate matter into layer I, layer II and the filter wall, and by the means of oxidation, i.e., by O2, by NO2 entering the filter and by NO2 produced in the filter. The CPF model was calibrated at four temperatures in the range of 280-465 °C and exhaust volumetric flow rates of 0.447-0.843 act-m3/sec, in CPF-only and CCRT® (DOC+CPF) configurations. The clean filter wall permeability was determined to be 2.00E-13 m2, in agreement with values in the literature for cordierite filters. The particulate packing density in the filter wall had values between 2.92 and 3.95 kg/m3 for all loads. The mean pore size of the catalyst-loaded filter wall was found to be 11.0 µm. The particulate cake packing densities and permeabilities ranged from 131 to 134 kg/m3 and from 0.42E-14 to 2.00E-14 m2, respectively, in agreement with the Peclet number correlations in the literature.
Particulate cake layer porosities determined from the particulate cake layer filtration model ranged from 0.841 to 0.814, decreasing with load, which is about 0.1 lower than experimental values and more complex discrete particle simulations in the literature. The thickness of layer I was kept constant at 20 µm. The model kinetics in the CPF-only and CCRT® configurations showed that no 'catalyst effect' with O2 was present. The kinetic parameters for the NO2-assisted oxidation of particulate in the CPF were determined from the simulation of transient temperature-programmed oxidation data in the literature. The thermal and NO2 kinetic parameters were found not to change with temperature, exhaust flow rate or NO2 concentration; however, different kinetic parameters are used for particulate oxidation in the wall and on the wall. Model results showed that oxidation of particulate in the pores of the filter wall can cause disproportionate decreases in the filter pressure drop with respect to particulate mass. The wall oxidation model, along with the particulate cake filtration model, was developed to model the sudden and rapid decreases in pressure drop across the CPF. The combined particulate cake and wall filtration models result in higher particulate filtration efficiencies than the wall filtration model alone, with overall filtration efficiencies of 98-99% predicted by the model. The pre-exponential factors for oxidation by NO2 did not change with temperature or NO2 concentration because of the NO2 wall production model. In both CPF-only and CCRT® configurations, the model showed NO2 and layer I to be, respectively, the dominant means and the dominant physical location of particulate oxidation. However, at temperatures around 280 °C, NO2 is not a significant oxidizer of particulate matter, in agreement with studies in the literature. The model showed that 8.6 and 81.6% of the CPF-inlet particulate matter was oxidized after 5 hours at 20 and 75% load, respectively, in the CCRT® configuration; in the CPF-only configuration at the same loads, 4.4 and 64.8% of the inlet particulate matter was oxidized after 5 hours. The increase in NO2 concentration across the DOC contributes significantly to the oxidation of particulate in the CPF and is supplemented by the oxidation of NO to NO2 by the catalyst in the CPF, which increases the particulate oxidation rates. From the model, it was determined that the catalyst in the CPF modestly increases the particulate oxidation rates, by 4.5-8.3% in the CCRT® configuration. Hence, the catalyst loading in the CPF of the CCRT® could possibly be reduced without significantly decreasing particulate oxidation rates, leading to catalyst cost savings and better engine performance due to lower exhaust backpressure.
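For orientation, the 'apparent' kinetics referred to above take the Arrhenius form; the sketch below evaluates a hypothetical NO2-assisted particulate oxidation rate constant at the two extreme calibration temperatures, using placeholder values for the pre-exponential factor and activation energy rather than the calibrated CCRT® values.

```python
import numpy as np

R = 8.314  # universal gas constant, J/(mol*K)

def arrhenius_rate(A, Ea, T_celsius):
    """Apparent rate constant k = A * exp(-Ea / (R*T))."""
    T = T_celsius + 273.15
    return A * np.exp(-Ea / (R * T))

# Hypothetical pre-exponential factor and activation energy (J/mol),
# not the calibrated values from this work.
A_no2, Ea_no2 = 1.0e6, 8.0e4

# The rate rises steeply with temperature, consistent with NO2 being an
# insignificant particulate oxidizer near 280 C but effective at 465 C.
for T in (280.0, 465.0):
    print(T, arrhenius_rate(A_no2, Ea_no2, T))
```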
Abstract:
Background mortality is an essential component of any forest growth and yield model. Forecasts of mortality contribute largely to the variability and accuracy of model predictions at the tree, stand and forest level. In the present study, I implement and evaluate state-of-the-art techniques to increase the accuracy of individual tree mortality models, similar to those used in many of the current variants of the Forest Vegetation Simulator, using data from North Idaho and Montana. The first technique addresses methods to correct for bias induced by the measurement error typically present in competition variables. The second implements survival regression and evaluates its performance against the traditional logistic regression approach. I selected the regression calibration (RC) algorithm as a good candidate for addressing the measurement error problem. Two logistic regression models were fitted for each species: one ignoring the measurement error (the "naïve" approach) and the other applying RC. The models fitted with RC outperformed the naïve models in terms of discrimination when the competition variable was found to be statistically significant. The effect of RC was more obvious where the measurement error variance was large and for more shade-intolerant species. The process of model fitting and variable selection revealed that past emphasis on DBH as a predictor variable for mortality, while producing models with strong metrics of fit, may make models less generalizable. The evaluation of the error variance estimator developed by Stage and Wykoff (1998), which is core to the implementation of RC, under different spatial patterns and diameter distributions revealed that the Stage and Wykoff estimate notably overestimated the true variance in all simulated stands except those that are clustered. Results show a systematic bias even when all the assumptions made by the authors hold. I argue that this is the result of the Poisson-based estimate ignoring the overlapping area of potential plots around a tree. The effects of the variance estimate, especially in the application phase, justify future efforts to improve its accuracy. The second technique implemented and evaluated is a survival regression model that accounts for the time-dependent nature of variables, such as diameter and competition variables, and the interval-censored nature of data collected from remeasured plots. The performance of the model is compared with the traditional logistic regression model as a tool to predict individual tree mortality. Validation of both approaches shows that the survival regression approach discriminates better between dead and alive trees for all species. In conclusion, I showed that the proposed techniques do increase the accuracy of individual tree mortality models and are a promising first step towards the next generation of background mortality models. I have also identified the next steps to undertake in order to advance mortality models further.
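A minimal sketch of the regression calibration idea, assuming classical additive measurement error with known error variance: the error-prone competition variable W is replaced by the best linear prediction of the true value X before the logistic mortality model is fitted. The data, variances and two-parameter model below are simplified placeholders, not the FVS variables.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 5000

# True competition variable X and error-prone measurement W = X + U.
x = rng.normal(0.0, 1.0, n)
sigma_u2 = 0.5                                 # assumed known error variance
w = x + rng.normal(0.0, np.sqrt(sigma_u2), n)

# Mortality generated from the true variable on the logit scale.
p = 1.0 / (1.0 + np.exp(-(-2.0 + 1.0 * x)))
dead = (rng.random(n) < p).astype(float)

# Regression calibration: replace W with E[X | W] = mu + lam * (W - mu),
# where lam = var(X) / var(W) is estimated from var(W) - sigma_u2.
lam = (w.var() - sigma_u2) / w.var()
x_rc = w.mean() + lam * (w - w.mean())

def logistic_fit(z, y, iters=25):
    """Two-parameter logistic regression via Newton-Raphson."""
    X = np.column_stack([np.ones_like(z), z])
    beta = np.zeros(2)
    for _ in range(iters):
        pr = 1.0 / (1.0 + np.exp(-X @ beta))
        wgt = pr * (1.0 - pr)
        beta += np.linalg.solve(X.T @ (wgt[:, None] * X), X.T @ (y - pr))
    return beta

print(logistic_fit(w, dead))     # naive: slope attenuated toward zero
print(logistic_fit(x_rc, dead))  # RC: slope close to the true value 1.0
```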
Analysis of spring break-up and its effects on a biomass feedstock supply chain in northern Michigan
Abstract:
Demand for bio-fuels is expected to increase, due to rising prices of fossil fuels and concerns over greenhouse gas emissions and energy security. The overall cost of biomass energy generation is primarily related to biomass harvesting activity, transportation, and storage. With a commercial-scale cellulosic ethanol processing facility about to be built in Kinross Township of Chippewa County, Michigan, a simulation model and an optimization model have been developed to provide decision support for the facility. Both models track cost, emissions and energy consumption. While the optimization model provides guidance for a long-term strategic plan, the simulation model aims to present detailed output for specified operational scenarios over an annual period. Most importantly, the simulation model considers the uncertainty of spring break-up timing, i.e., seasonal road restrictions. Spring break-up timing is important because it affects the feasibility of harvesting activity and the duration of transportation restrictions, which significantly changes the availability of feedstock for the processing facility. This thesis focuses on the statistical model of spring break-up used in the simulation model. Spring break-up timing depends on various factors, including temperature, road conditions and soil type, as well as individual decision-making processes at the county level. The spring break-up model, based on historical spring break-up data from 27 counties over the period 2002-2010, starts by specifying the probability distribution of a particular county's spring break-up start day and end day, and then relates the spring break-up timing of the other counties in the harvesting zone to the first county. In order to estimate the dependence relationship between counties, regression analyses, including standard linear regression and reduced major axis regression, are conducted. Using realizations (scenarios) of spring break-up generated by the statistical spring break-up model, the simulation model is able to probabilistically evaluate different harvesting and transportation plans to help the bio-fuel facility select the most effective strategy. For an early spring break-up, which usually indicates a longer than average break-up period, more log storage is required, total cost increases, and the probability of plant closure increases. The risk of plant closure may be partially offset through increased use of rail transportation, which is not subject to spring break-up restrictions. However, rail availability and rail yard storage may then become limiting factors in the supply chain. Rail use will impact total cost, energy consumption, system-wide CO2 emissions, and the reliability of providing feedstock to the bio-fuel processing facility.
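The reduced major axis (RMA) fit used alongside ordinary least squares differs from it only in the slope estimate; a short sketch, with hypothetical break-up start days for a reference county and one other county standing in for the 2002-2010 records:

```python
import numpy as np

def ols_slope(x, y):
    """Ordinary least squares slope of y on x."""
    return np.cov(x, y)[0, 1] / np.var(x, ddof=1)

def rma_slope(x, y):
    """Reduced major axis slope: the sign of the correlation times the
    ratio of standard deviations. Symmetric in x and y, which suits the
    case where both counties' break-up days are observed with error."""
    r = np.corrcoef(x, y)[0, 1]
    return np.sign(r) * np.std(y, ddof=1) / np.std(x, ddof=1)

# Hypothetical break-up start days (day of year) over nine springs.
x = np.array([67, 72, 78, 70, 75, 81, 69, 74, 79], dtype=float)  # reference county
y = np.array([70, 76, 83, 72, 80, 86, 71, 77, 84], dtype=float)  # other county

for slope_fn in (ols_slope, rma_slope):
    b = slope_fn(x, y)
    a = y.mean() - b * x.mean()          # intercept through the means
    print(slope_fn.__name__, a, b)
```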
Abstract:
The primary challenge in groundwater and contaminant transport modeling is obtaining the data needed for constructing, calibrating and testing the models. Large amounts of data are necessary for describing the hydrostratigraphy in areas with complex geology. Increasingly, states are making spatial data available that can be used as input to groundwater flow models. The appropriateness of these data for large-scale flow systems has not been tested. This study focuses on modeling a plume of 1,4-dioxane in a heterogeneous aquifer system in Scio Township, Washtenaw County, Michigan. The analysis consisted of: (1) characterization of the hydrogeology of the area and construction of a conceptual model based on publicly available spatial data, (2) development and calibration of a regional flow model for the site, (3) conversion of the regional model to a more highly resolved local model, (4) simulation of the dioxane plume, and (5) evaluation of the model's ability to simulate field data and estimation of the possible dioxane sources and subsequent migration until maximum concentrations are at or below the Michigan Department of Environmental Quality's residential cleanup standard for groundwater (85 ppb). The MODFLOW-2000 and MT3D programs were used to simulate the groundwater flow and the development and movement of the 1,4-dioxane plume, respectively. MODFLOW simulates transient groundwater flow in a quasi-3-dimensional sense, subject to a variety of boundary conditions that can represent recharge, pumping, and surface-water/groundwater interactions. MT3D simulates solute advection with groundwater flow (using the flow solution from MODFLOW), dispersion, source/sink mixing, and chemical reaction of contaminants. This modeling approach was successful at simulating the groundwater flows by calibrating recharge and hydraulic conductivities. The plume transport was adequately simulated using literature dispersivity and sorption coefficients, although the plume geometries were not well constrained.
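As a toy stand-in for the transport step (not the actual MODFLOW/MT3D configuration used in this study), the sketch below advects and disperses a one-dimensional dioxane-like pulse with linear sorption expressed as a retardation factor; grid, velocity, dispersivity and sorption values are illustrative only.

```python
import numpy as np

# 1-D advection-dispersion with linear-sorption retardation, explicit FD.
nx, dx = 200, 5.0            # number of cells, cell size (m)
v = 0.2                      # seepage velocity (m/day), illustrative
alpha_L = 10.0               # longitudinal dispersivity (m), literature-style
D = alpha_L * v              # dispersion coefficient (m^2/day)
Rf = 1.5                     # retardation factor from a sorption coefficient
dt = min(0.4 * dx / v, 0.25 * dx * dx / D)   # stable explicit time step

c = np.zeros(nx)
c[5] = 100.0                 # initial dioxane slug (ppb) near the source

def step(c):
    """One upwind-advection plus central-dispersion step, with the
    whole right-hand side slowed by the retardation factor Rf."""
    adv = -v * (c - np.roll(c, 1)) / dx
    disp = D * (np.roll(c, -1) - 2.0 * c + np.roll(c, 1)) / dx**2
    cn = c + (dt / Rf) * (adv + disp)
    cn[0] = cn[-1] = 0.0     # crude open boundaries
    return cn

for _ in range(2000):        # roughly 17 years of simulated transport
    c = step(c)

# The peak falls as the plume spreads and lags behind the groundwater
# velocity by the factor Rf; compare against the 85 ppb cleanup standard.
print(c.max(), c.argmax() * dx)
```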