983 results for mixed-integer linear programming
Abstract:
Background To examine the association of education with body mass index (BMI) and waist circumference (WC) in the European Prospective Investigation into Cancer and Nutrition (EPIC). Method This study included 141,230 male and 336,637 female EPIC participants, who were recruited between 1992 and 2000. Education, assessed by questionnaire, was classified into four categories; BMI and WC, measured by trained personnel in most participating centers, were modeled as continuous dependent variables. Associations were estimated using multilevel mixed-effects linear regression models. Results Compared with the lowest education level, BMI and WC were significantly lower for all three higher education categories, consistently across all countries. Women with a university degree had a 2.1 kg/m2 lower BMI than women with the lowest education level. For men, a statistically significant but less pronounced difference was observed (1.3 kg/m2). The association between WC and education level was also of greater magnitude for women: the average WC of women in the highest education category was 5.2 cm lower than in the lowest category. For men the difference was 2.9 cm. Conclusion In this European cohort, BMI and WC were inversely associated with education level. Public health programs that aim to reduce overweight and obesity should focus primarily on the less-educated population.
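For illustration, a multilevel mixed-effects linear regression of this kind can be fitted in Python with statsmodels; the simulated data and the column names (bmi, education, center) below are hypothetical placeholders, not the EPIC variables.

```python
# Minimal sketch of a multilevel mixed-effects linear regression:
# BMI regressed on education with a random intercept per recruitment center.
# All data are simulated for illustration only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400
education = rng.choice(["none", "primary", "secondary", "university"], size=n)
center = rng.choice(["A", "B", "C", "D"], size=n)
edu_effect = {"none": 0.0, "primary": -0.5, "secondary": -1.0, "university": -2.0}
center_effect = {"A": 0.3, "B": -0.2, "C": 0.1, "D": -0.1}
bmi = (26 + np.array([edu_effect[e] for e in education])
       + np.array([center_effect[c] for c in center]) + rng.normal(0, 2, n))
df = pd.DataFrame({"bmi": bmi, "education": education, "center": center})

# Random intercept per center; education enters as a fixed effect (reference = "none").
model = smf.mixedlm("bmi ~ C(education, Treatment('none'))", df, groups=df["center"])
print(model.fit().summary())
```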
Abstract:
OBJECTIVE We investigated the association between the proportion of long-chain n-3 polyunsaturated fatty acids (PUFA) in plasma phospholipids from blood samples drawn at enrollment and subsequent change in body weight. Sex, age, and BMI were considered as potential effect modifiers. METHOD A total of 1,998 women and men participating in the European Prospective Investigation into Cancer and Nutrition (EPIC) were followed for a median of 4.9 years. The associations between the proportion of plasma phospholipid long-chain n-3 PUFA and change in weight were investigated using mixed-effects linear regression. RESULTS The proportion of long-chain n-3 PUFA was not associated with change in weight. Among all participants, the 1-year weight change was -0.7 g per 1 percentage point higher long-chain n-3 PUFA level (95% confidence interval: -20.7 to 19.3). The results stratified by sex, age, or BMI groups were not systematically different. CONCLUSION The results of this study suggest that the proportion of long-chain n-3 PUFA in plasma phospholipids is not associated with subsequent change in body weight within the range of exposure in the general population.
Abstract:
Purpose: Cardiac 18F-FDG PET is considered the gold standard for assessing myocardial metabolism and infarct size. The myocardial demand for glucose can be influenced by fasting and/or pharmacological preparation. In the rat, it has previously been shown that fasting combined with preconditioning with acipimox, a nicotinic acid derivative and lipid-lowering agent, dramatically increased 18F-FDG uptake in the myocardium. Strategies aimed at reducing infarct scar are evaluated in a variety of mouse models, and PET would be particularly useful for assessing cardiac viability in the mouse. However, prior knowledge of the best preparation protocol is a prerequisite for accurate measurement of glucose uptake in mice. Therefore, we studied the effect of different protocols on 18F-FDG uptake in the mouse heart. Methods: Mice (n = 15) were separated into three treatment groups according to preconditioning and underwent an 18F-FDG PET scan. Group 1: no preconditioning (n = 3); Group 2: overnight fasting (n = 8); and Group 3: overnight fasting and acipimox (25 mg/kg SC) (n = 4). MicroPET images were processed with PMOD to determine the 18F-FDG mean standard uptake value (SUV) at 30 min for the whole left ventricle (LV) and for each region of the 17-segment AHA model. For comparisons, we used the Mann-Whitney test and multilevel mixed-effects linear regression (Stata 11.0). Results: In total, 27 microPET scans were performed successfully in 15 animals. Overnight fasting led to a dramatic increase in LV-SUV compared to mice without preconditioning (8.6±0.7 g/mL vs. 3.7±1.1 g/mL, P<0.001). In addition, LV-SUV was slightly but not significantly higher in animals treated with acipimox than in animals with overnight fasting alone (10.2±0.5 g/mL, P = 0.06). Fasting increased segmental SUV by 5.1±0.5 g/mL compared to free-feeding mice (from 3.7±0.8 g/mL to 8.8±0.4 g/mL, P<0.001); segmental SUV also increased significantly after administration of acipimox (from 8.8±0.4 g/mL to 10.1±0.4 g/mL, P<0.001). Conclusion: Overnight fasting leads to myocardial glucose deprivation and increases 18F-FDG myocardial uptake. Additional administration of acipimox enhances myocardial 18F-FDG uptake, at least at the segmental level. Thus, preconditioning with acipimox may provide better image quality and thereby help in assessing segmental myocardial metabolism.
Abstract:
AIMS: In patients with alcohol dependence, health-related quality of life (QOL) is reduced compared with that of a normal healthy population. The objective of the current analysis was to describe the evolution of health-related QOL in adults with alcohol dependence during a 24-month period after initial assessment for alcohol-related treatment in a routine practice setting, and to relate it to drinking pattern, evaluated across clusters based on the predominant pattern of alcohol use and set against the influence of baseline variables. METHODS: The Medical Outcomes Study 36-Item Short-Form Survey (MOS-SF-36) was used to measure QOL at baseline and quarterly for 2 years among participants in CONTROL, a prospective observational study of patients initiating treatment for alcohol dependence. The sample consisted of 160 adults with alcohol dependence (65.6% males) with a mean (SD) age of 45.6 (12.0) years. Alcohol use data were collected using TimeLine Follow-Back. Based on the participants' reported alcohol use, three clusters were identified: 52 (32.5%) mostly abstainers, 64 (40.0%) mostly moderate drinkers and 44 (27.5%) mostly heavy drinkers. Mixed-effects linear regression analysis was used to identify factors potentially associated with the mental and physical summary MOS-SF-36 scores at each time point. RESULTS: The mean (SD) MOS-SF-36 mental component summary score (range 0-100, norm 50) was 35.7 (13.6) at baseline [mostly abstainers: 40.4 (14.6); mostly moderate drinkers: 35.6 (12.4); mostly heavy drinkers: 30.1 (12.1)]. The score improved to 43.1 (13.4) at 3 months [mostly abstainers: 47.4 (12.3); mostly moderate drinkers: 44.2 (12.7); mostly heavy drinkers: 35.1 (12.9)], to 47.3 (11.4) at 12 months [mostly abstainers: 51.7 (9.7); mostly moderate drinkers: 44.8 (11.9); mostly heavy drinkers: 44.1 (11.3)], and to 46.6 (11.1) at 24 months [mostly abstainers: 49.2 (11.6); mostly moderate drinkers: 45.7 (11.9); mostly heavy drinkers: 43.7 (8.8)]. Mixed-effects linear regression multivariate analyses indicated a significant association between a lower 2-year follow-up MOS-SF-36 mental score and being a mostly heavy drinker (-6.97, P < 0.001) or mostly moderate drinker (-3.34 points, P = 0.018) [compared to mostly abstainers], being female (-3.73, P = 0.004), and having a Beck Inventory scale score ≥8 at baseline (-6.54, P < 0.001). The mean (SD) MOS-SF-36 physical component summary score was 48.8 (10.6) at baseline, remained stable over the follow-up and did not differ across the three clusters. Mixed-effects linear regression univariate analyses found that the average 2-year follow-up MOS-SF-36 physical score was increased (compared with mostly abstainers) in mostly heavy drinkers (+4.44, P = 0.007); no other variables tested influenced the MOS-SF-36 physical score. CONCLUSION: Among individuals with alcohol dependence, a rapid improvement was seen in the mental dimension of QOL following treatment initiation, which was maintained over 24 months. Improvement was associated with the pattern of alcohol use, becoming close to the general population norm in patients classified as mostly abstainers, improving substantially in mostly moderate drinkers and improving only slightly in mostly heavy drinkers. The physical dimension of QOL was generally in the normal range and was not associated with drinking patterns.
4B.05: Plasma copeptin is associated with insulin resistance in a Swiss population-based study
Abstract:
OBJECTIVE: Previous studies suggest that arginine vasopressin may have a role in metabolic syndrome (MetS) and diabetes by altering liver glycogenolysis, insulin and glucagon secretion, and pituitary ACTH release. We tested whether plasma copeptin, the stable C-terminal fragment of the arginine vasopressin prohormone, was associated with insulin resistance and MetS in a Swiss population-based study. DESIGN AND METHOD: We analyzed data from the population-based Swiss Kidney Project on Genes in Hypertension. Copeptin was assessed by an immunoluminometric assay. Insulin resistance was derived from the HOMA model and calculated as (FPI x FPG)/22.5, where FPI is the fasting plasma insulin concentration (mU/L) and FPG the fasting plasma glucose concentration (mmol/L). Subjects were classified as having MetS according to the National Cholesterol Education Program Adult Treatment Panel III criteria. Mixed multivariate linear regression models were built to explore the association of insulin resistance with copeptin, and multivariate logistic regression models were built to explore the association between MetS and copeptin. In both analyses, adjustment was made for age, gender, center, tobacco and alcohol consumption, socioeconomic status, physical activity, intake of fruits and vegetables, and 24-h urine flow rate. Copeptin was log-transformed for the analyses. RESULTS: Among the 1,089 subjects included in this analysis, 47% were male. Mean (SD) age and body mass index were 47.4 (17.6) years and 25.0 (4.5) kg/m2. The prevalence of MetS was 10.5%. HOMA-IR was higher in men (median 1.3, IQR 0.7-2.1) than in women (median 1.0, IQR 0.5-1.6, P < 0.0001). Plasma copeptin was higher in men (median 5.2, IQR 3.7-7.8 pmol/L) than in women (median 3.0, IQR 2.2-4.3 pmol/L), P < 0.0001. HOMA-IR was positively associated with log-copeptin after full adjustment (β (95% CI) 0.19 (0.09-0.29), P < 0.001). MetS was not associated with copeptin after full adjustment (P = 0.92). CONCLUSIONS: Insulin resistance, but not MetS, was associated with higher copeptin levels. Further studies should examine whether pharmacologically modifying the arginine vasopressin system might improve insulin resistance, thereby providing insight into the causal nature of this association.
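As a worked example of the HOMA-IR formula quoted above, here is a small Python helper; the function name and the sample values are illustrative only.

```python
def homa_ir(fasting_insulin_mU_per_L: float, fasting_glucose_mmol_per_L: float) -> float:
    """HOMA insulin-resistance index: (FPI x FPG) / 22.5,
    with FPI in mU/L and FPG in mmol/L, as stated in the abstract."""
    return fasting_insulin_mU_per_L * fasting_glucose_mmol_per_L / 22.5

# Example: FPI = 10 mU/L, FPG = 5.5 mmol/L  ->  HOMA-IR ~= 2.44
print(round(homa_ir(10.0, 5.5), 2))
```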
Abstract:
Wavelength division multiplexing (WDM) networks have been adopted as a near-future solution for the broadband Internet. In previous work we proposed a new architecture, named enhanced grooming (G+), that extends the capabilities of traditional optical routes (lightpaths). In this paper, we compare the operational expenditures incurred by routing a set of demands using lightpaths with those incurred using lighttours. The comparison is done by solving an integer linear programming (ILP) problem based on a path formulation. Results show that, under the assumption of single-hop routing, almost 15% of the operational cost can be saved with our architecture. In multi-hop routing the operational cost is reduced by 7.1%, and at the same time the ratio of operational cost to the number of optical-electrical-optical (OEO) conversions is reduced for our architecture. This means that ISPs could provide the same satisfaction in terms of delay to the end user with a lower investment in the network architecture.
Abstract:
In this article, a new technique for grooming low-speed traffic demands into high-speed optical routes is proposed. This enhancement allows a transparent wavelength-routing switch (WRS) to aggregate traffic en route over existing optical routes without incurring expensive optical-electrical-optical (OEO) conversions. This implies that: a) an optical route may be considered as having more than one ingress node (all inline) and b) traffic demands can partially use optical routes to reach their destination. The proposed optical routes are named "lighttours", since the traffic originating from different sources can be forwarded together in a single optical route, i.e., taking a "tour" over different sources towards the same destination. The possibility of creating lighttours is the consequence of a novel WRS architecture proposed in this article, named "enhanced grooming" (G+). The ability to groom more traffic in the middle of a lighttour is achieved with the support of a simple optical device named the lambda-monitor (previously introduced in the RingO project). In this article, we present the new WRS architecture and its advantages. To compare the advantages of lighttours with respect to classical lightpaths, an integer linear programming (ILP) model is proposed for the well-known multilayer problem: traffic grooming, routing and wavelength assignment. The ILP model may be used for several objectives; however, this article focuses on two: maximizing the network throughput, and minimizing the number of OEO conversions used. Experiments show that G+ can route all the traffic using only half of the total OEO conversions needed by classical grooming. A heuristic is also proposed, aiming at achieving near-optimal results in polynomial time.
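As a rough illustration of a path-based ILP of this general kind (not the authors' exact formulation), the PuLP sketch below assigns each demand to one candidate route under a per-link wavelength limit and minimizes OEO conversions; the topology, candidate routes and counts are invented.

```python
# Illustrative path-formulation ILP: assign each traffic demand to one
# candidate route, respect a per-link wavelength limit, and minimize the
# total number of OEO conversions. All data are toy values.
from pulp import LpProblem, LpVariable, LpMinimize, lpSum, LpBinary, PULP_CBC_CMD

demands = ["d1", "d2", "d3"]
# candidate routes per demand: (links used, OEO conversions on that route)
routes = {
    "d1": [((("A", "B"), ("B", "C")), 1), ((("A", "C"),), 0)],
    "d2": [((("A", "B"),), 0), ((("A", "C"), ("C", "B")), 1)],
    "d3": [((("B", "C"),), 0)],
}
wavelengths_per_link = 2

prob = LpProblem("path_formulation", LpMinimize)
x = {(d, i): LpVariable(f"x_{d}_{i}", cat=LpBinary)
     for d in demands for i in range(len(routes[d]))}

# objective: minimize OEO conversions over the chosen routes
prob += lpSum(routes[d][i][1] * x[d, i] for d in demands for i in range(len(routes[d])))

# each demand uses exactly one route
for d in demands:
    prob += lpSum(x[d, i] for i in range(len(routes[d]))) == 1

# per-link wavelength capacity
links = {l for d in demands for r, _ in routes[d] for l in r}
for l in links:
    prob += lpSum(x[d, i] for d in demands
                  for i, (r, _) in enumerate(routes[d]) if l in r) <= wavelengths_per_link

prob.solve(PULP_CBC_CMD(msg=False))
for (d, i), var in x.items():
    if var.value() > 0.5:
        print(d, "->", routes[d][i][0])
```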
Abstract:
Environmental issues, including global warming, have become serious challenges worldwide, and they have become particularly important for iron and steel manufacturers during the last decades. Many sites have been shut down in developed countries due to environmental regulation and pollution prevention, while a large number of production plants have been established in developing countries, which has changed the economy of this business. Sustainable development is a concept that today shapes economic growth, environmental protection and social progress in setting up the basis for future ecosystems. A sustainable approach attempts to preserve natural resources, recycle and reuse materials, prevent pollution, enhance yield and increase profitability. To achieve these objectives, numerous alternatives should be examined in sustainable process design. Conventional engineering work cannot address all of these alternatives effectively and efficiently to find an optimal processing route. A systematic framework is needed as a tool to guide designers in making decisions based on an overall view of the system, identifying the key bottlenecks and opportunities that lead to an optimal design and operation of the system. Since the 1980s, researchers have made great efforts to develop tools for what is today referred to as Process Integration. Advanced mathematics has been used in simulation models to evaluate the available alternatives under physical, economic and environmental constraints. Improvements in feed materials and operation, a competitive energy market, environmental restrictions and the role of Nordic steelworks as energy suppliers (electricity and district heat) provide strong motivation for integration among industries toward more sustainable operation, which could increase overall energy efficiency and decrease environmental impacts. In this study, a model is developed in several steps for primary steelmaking, with the Finnish steel sector as a reference, to evaluate future operating concepts of a steelmaking site with regard to sustainability. The research started with a study of the potential for increasing energy efficiency and reducing carbon dioxide emissions through integration of steelworks with chemical plants, so that the off-gases available in the system can be utilized as chemical products. These off-gases from the blast furnace, basic oxygen furnace and coke oven mainly consist of carbon monoxide, carbon dioxide, hydrogen, nitrogen and partially methane (in coke oven gas); they have a comparatively low heating value and are currently used as fuel within these industries. A nonlinear optimization technique is used to assess integration with a methanol plant under novel blast furnace technologies and the (partial) substitution of coal with other reducing agents and fuels such as heavy oil, natural gas and biomass. The technical aspects of integration and their effect on blast furnace operation, leaving aside the capital expenditure of new operational units, are studied to evaluate the feasibility of the idea behind the research. Later, the concept of a polygeneration system was added and a superstructure was generated with alternative routes for off-gas pretreatment and further utilization in a polygeneration system producing electricity, district heat and methanol.
(Vacuum) pressure swing adsorption, membrane technology and chemical absorption for gas separation; partial oxidation, carbon dioxide and steam methane reforming for methane gasification; and gas- and liquid-phase methanol synthesis are the main alternative process units considered in the superstructure. Because of the high degree of integration in process synthesis and optimization, equation-oriented modeling is chosen over the sequential modeling used previously for process analysis to investigate the suggested superstructure. A mixed-integer nonlinear programming (MINLP) model is developed to study the behavior of the integrated system under different economic and environmental scenarios. Net present value and specific carbon dioxide emissions are used to compare the economic and environmental aspects of the integrated system, respectively, for different fuel systems, alternative blast furnace reductants, implementation of new blast furnace technologies, and carbon dioxide emission penalties. Sensitivity analyses, carbon distribution and the effect of external seasonal energy demand are investigated with different optimization techniques. The tool can provide useful techno-environmental and economic information for decision-making and can estimate optimal operating conditions of current and future primary steelmaking under alternative scenarios. The results of the work demonstrate that it is possible to develop steelmaking towards more sustainable operation in the future.
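To give a flavour of the kind of mixed-integer nonlinear formulation mentioned above, and only as a toy unrelated to the thesis's actual superstructure model, a Pyomo sketch with one binary technology choice, one continuous production level and a nonlinear revenue term might look as follows; all coefficients and the suggested solvers are assumptions.

```python
# Toy MINLP sketch: pick a reducing-agent option (binary) and a production
# level (continuous) to maximize a crude "NPV" term minus a CO2 penalty.
# All numbers are invented for illustration.
from pyomo.environ import (ConcreteModel, Var, Objective, Constraint,
                           Binary, NonNegativeReals, maximize)

m = ConcreteModel()
m.use_gas = Var(domain=Binary)                              # 1 = natural gas injection
m.prod = Var(domain=NonNegativeReals, bounds=(0, 100))      # hot metal, t/h

co2_penalty = 30.0   # currency units per t CO2 (scenario parameter)
# CO2 intensity drops if gas is injected; revenue has diminishing returns (nonlinear).
m.obj = Objective(
    expr=50 * m.prod**0.9
         - co2_penalty * (1.6 - 0.3 * m.use_gas) * m.prod
         - 200 * m.use_gas,                                 # fixed cost of gas injection
    sense=maximize)
m.capacity = Constraint(expr=m.prod <= 100)

# Solving requires a MINLP solver (e.g. Bonmin or Couenne), if one is installed:
# from pyomo.environ import SolverFactory
# SolverFactory("bonmin").solve(m)
m.pprint()
```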
Influence of intrauterine and extrauterine growth on neurodevelopmental outcome of monozygotic twins
Abstract:
There have been indications that intrauterine and early extrauterine growth can influence childhood mental and motor function. The objective of the present study was to evaluate the influence of intrauterine growth restriction and early extrauterine head growth on the neurodevelopmental outcome of monozygotic twins. Thirty-six monozygotic twin pairs were evaluated at a corrected age of 12 to 42 months. Intrauterine growth restriction was quantified using the fetal growth ratio. The effects of birth weight ratio, head circumference at birth and current head circumference on mental and motor outcomes were estimated using mixed-effects linear regression models. Separate estimates of the between-pair (interpair) and within-pair (intrapair) effects of each measure on development were thus obtained. Neurodevelopment was assessed with the Bayley Scales of Infant Development, 2nd edition, by a psychologist blind to the exposures. A standardized neurological examination was performed by a neuropediatrician who was unaware of the exposures under investigation. After adjustment, birth weight ratio and head circumference at birth were not associated with motor or mental outcomes. Current head circumference was associated with mental but not with motor outcomes, and only the intrapair effect was significant: a 1-cm increase in the current head circumference of one twin relative to the other was associated with a 3.2-point higher Mental Developmental Index (95%CI = 1.06-5.32; P < 0.03). Thus, no effect of intrauterine growth on cognition was found, and only postnatal head growth was associated with cognition; this effect was not shared by the co-twin.
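The between-pair/within-pair decomposition described above can be illustrated by splitting a predictor into the pair mean and each twin's deviation from it before fitting a random-intercept model; the simulated data, column names and effect sizes below are invented.

```python
# Sketch of a between/within (interpair/intrapair) decomposition:
# split head circumference into the twin-pair mean and each twin's deviation
# from it, then give both terms to a random-intercept mixed model.
# Data are simulated for illustration only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_pairs = 36
pair = np.repeat(np.arange(n_pairs), 2)
hc = rng.normal(48, 1.5, size=2 * n_pairs)                  # current head circumference, cm
pair_mean = pd.Series(hc).groupby(pair).transform("mean").to_numpy()
mdi = 100 + 3.0 * (hc - pair_mean) + rng.normal(0, 5, size=2 * n_pairs)

df = pd.DataFrame({"mdi": mdi, "pair": pair,
                   "hc_between": pair_mean, "hc_within": hc - pair_mean})
model = smf.mixedlm("mdi ~ hc_between + hc_within", df, groups=df["pair"])
print(model.fit().summary())
```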
Abstract:
The objective of the present study was to determine to what extent, if any, swimming training applied before immobilization in a cast interferes with the rehabilitation process in rat muscles. Female Wistar rats, mean weight 260.52 ± 16.26 g, were divided into 4 groups of 6 rats each: control, 6 weeks under baseline conditions; trained, swimming training for 6 weeks; trained-immobilized, swimming training for 6 weeks followed by immobilization for 1 week; and trained-immobilized-rehabilitated, swimming training for 6 weeks, immobilization for 1 week and then remobilization with swimming for 2 weeks. The animals were then sacrificed and the soleus and tibialis anterior muscles were dissected, frozen in liquid nitrogen and processed histochemically (H&E and mATPase). Data were analyzed statistically with a mixed-effects linear model (P < 0.05). Cytoarchitectural changes, such as degenerative characteristics in the immobilized group and regenerative characteristics (centralized nuclei, fiber size variation and cell fragmentation) in the groups submitted to swimming, were more pronounced in the soleus muscle. The lesser diameters of soleus type 1 and type 2A fibers were significantly reduced in the trained-immobilized group compared to the trained group (P < 0.001). In the tibialis anterior, there was an increase in the number of type 2B fibers and a reduction in type 2A fibers when trained-immobilized rats were compared to trained rats (P < 0.001). In trained-immobilized-rehabilitated rats, there was a reduction in type 2B fibers and an increase in type 2A fibers compared to trained-immobilized rats (P < 0.009). We conclude that swimming training did not minimize the deleterious effects of immobilization on the muscles studied and that remobilization did not favor tissue re-adaptation.
Abstract:
The forest industry is a sector that, even though it is in decline, lies at the heart of the debate on globalization and sustainable development. For many countries such as Canada, Sweden and Chile, the objective is to maintain a thriving sector without harming the environment, while acknowledging that the resources are finite. It is therefore important to be competitive and to use forest territories efficiently, from harvesting to the manufacture of products at the mills, including transportation, whose costs are rising quickly. The objective of this thesis is to develop a tactical/operational planning model that schedules the activities of a harvest year so as to satisfy mill demands, while keeping track of the transport of the harvested volumes and of mill inventories. The year is divided into 26 two-week periods. We seek the schedules and the assignment of harvest teams to cut blocks for one year. The mathematical model developed is a mixed-integer linear program whose structure follows each stage of the forest supply chain. We choose to solve it with an exact method, branch-and-bound. We observed how difficult it is to solve the planning problem directly for instances with a large number of periods; however, a rolling-horizon approach proved fruitful: with it, the harvesting activities of the blocks can be planned for the entire year (26 periods) within one day of computation.
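A rolling-horizon scheme of the sort used here can be sketched with PuLP as below, where each window's MILP is solved and only the first period's decisions are frozen before sliding the window forward; the blocks, demands and costs are invented and the model is far simpler than the thesis's.

```python
# Rolling-horizon sketch for a toy multi-period harvest-planning MILP:
# in each window, choose which blocks to harvest (binary) to cover the mill
# demand of the window's periods, then freeze the first period and slide on.
from pulp import LpProblem, LpVariable, LpMinimize, lpSum, LpBinary, PULP_CBC_CMD

n_periods, window = 4, 2
demand = [40, 30, 50, 20]                                   # m3 per period (toy numbers)
blocks = {"b1": 60, "b2": 45, "b3": 55, "b4": 40, "b5": 70, "b6": 50}  # volume per block
cost = {"b1": 5, "b2": 4, "b3": 6, "b4": 3, "b5": 7, "b6": 4}          # cost per block

harvested = {}                                              # frozen decisions: (block, period) -> 1
for start in range(n_periods):
    periods = range(start, min(start + window, n_periods))
    prob = LpProblem(f"window_{start}", LpMinimize)
    x = {(b, t): LpVariable(f"x_{b}_{t}", cat=LpBinary) for b in blocks for t in periods}
    prob += lpSum(cost[b] * x[b, t] for b in blocks for t in periods)
    for t in periods:   # cover demand in each period of the window
        prob += lpSum(blocks[b] * x[b, t] for b in blocks) >= demand[t]
    for b in blocks:    # a block is harvested at most once overall
        prob += lpSum(x[b, t] for t in periods) + sum(
            v for (bb, _), v in harvested.items() if bb == b) <= 1
    prob.solve(PULP_CBC_CMD(msg=False))
    for b in blocks:    # freeze only the first period of the window
        if x[b, start].value() > 0.5:
            harvested[b, start] = 1

print(sorted(harvested))
```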
Abstract:
This thesis studies an approach that integrates schedule management and service network design for rail freight transportation. Rail transportation is organized around a two-level consolidation structure in which the assignment of cars to blocks and of blocks to services are decisions that greatly complicate operations management. In this thesis, the two consolidation processes and the operating schedule are studied simultaneously. Solving this problem yields a profitable operating plan comprising the blocking policies, train routing and scheduling, as well as train make-up and traffic assignment. To describe the various rail activities at the tactical level, we extend the physical network and build a three-layer time-space network structure in which the time dimension captures the temporal impacts on operations and the train, block and car operations are described in separate layers. Based on this network structure, we model the rail planning problem as a service network design problem. The proposed model is formulated as a mixed-integer mathematical program. It is very difficult to solve because of the large size of the instances treated and its intrinsic complexity. Three versions are studied: a simplified model (with direct services only), a complete model (with direct and multi-stop services), and a very large-scale complete model. Several heuristics are developed to obtain good solutions within reasonable computing times. First, a special case with direct services is analyzed. Exploiting a specific characteristic of the direct service network design problem, we develop a new tabu search algorithm based on a cycle neighborhood, built from the distribution of the flow circulating on the blocks along cycles of the residual network. A slope-scaling algorithm is developed for the complete model, and we propose a new method, called ellipsoidal search, to further improve solution quality. Ellipsoidal search combines the good feasible solutions generated by the slope-scaling algorithm and gathers the features of those good solutions to create an elite problem that is solved exactly with commercial software. The heuristic thus benefits from the convergence speed of the slope-scaling algorithm and from the solution quality of the ellipsoidal search. Numerical tests illustrate the efficiency of the proposed heuristic; moreover, the algorithm is an interesting alternative for solving the simplified model. Finally, we study the very large-scale complete model. A hybrid heuristic is developed by combining the ideas of the previous algorithm with column generation. We propose a new slope-scaling procedure in which, compared with the previous one, only the approximation of the service-related costs is considered. The new slope-scaling approach thus separates the decisions associated with blocks and services, providing a natural decomposition of the problem.
The numerical results obtained show that the algorithm can identify good-quality solutions in a setting aimed at solving real-life instances.
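As a generic illustration of the slope-scaling idea mentioned above (not the thesis's algorithm), the sketch below repeatedly solves a linear program in which each arc's fixed cost is spread over the flow observed in the previous iteration; the single-commodity network data are invented.

```python
# Generic slope-scaling sketch for a fixed-charge network flow problem:
# iterate LPs where the fixed cost f_a of an arc is linearized as f_a / x_a
# using the flow x_a from the previous iteration. Toy single-commodity data.
from pulp import LpProblem, LpVariable, LpMinimize, lpSum, PULP_CBC_CMD

nodes = ["s", "a", "b", "t"]
# arc: (capacity, variable cost c, fixed cost f)
arcs = {("s", "a"): (15, 1, 20), ("s", "b"): (15, 2, 5),
        ("a", "t"): (15, 1, 20), ("b", "t"): (15, 2, 5), ("a", "b"): (10, 1, 1)}
demand = 10                                                  # from s to t
rho = {a: c + f / cap for a, (cap, c, f) in arcs.items()}    # initial slopes

for it in range(5):
    lp = LpProblem(f"slope_scaling_{it}", LpMinimize)
    x = {a: LpVariable(f"x_{a[0]}_{a[1]}", lowBound=0, upBound=arcs[a][0]) for a in arcs}
    lp += lpSum(rho[a] * x[a] for a in arcs)
    for n in nodes:                                          # flow conservation
        balance = demand if n == "s" else -demand if n == "t" else 0
        lp += lpSum(x[a] for a in arcs if a[0] == n) \
              - lpSum(x[a] for a in arcs if a[1] == n) == balance
    lp.solve(PULP_CBC_CMD(msg=False))
    flows = {a: (x[a].value() or 0.0) for a in arcs}
    # update slopes: spread the fixed cost over the flow just observed
    rho = {a: arcs[a][1] + (arcs[a][2] / flows[a] if flows[a] > 1e-6
                            else arcs[a][2] / arcs[a][0]) for a in arcs}

open_arcs = [a for a, v in flows.items() if v > 1e-6]
total = sum(arcs[a][1] * flows[a] for a in open_arcs) + sum(arcs[a][2] for a in open_arcs)
print("open arcs:", open_arcs, "fixed-charge cost:", total)
```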
Abstract:
Thesis completed under joint supervision (cotutelle) with the Université d'Avignon.
Abstract:
Many problems in transportation and logistics can be formulated as network design models. They generally require transporting commodities, passengers or data through a network in order to satisfy a given demand while minimizing costs. In this thesis, we focus on the fixed-charge capacitated network design problem. This problem consists of opening a subset of the links in a network in order to satisfy the demand, while respecting the capacity constraints on the links. The objective is to minimize the fixed costs associated with opening the links and the costs of transporting the commodities. We present an exact method for solving this problem based on techniques used in integer linear programming. Our method is a variant of the branch-and-bound algorithm, called branch-and-price-and-cut, in which we exploit both column generation and cutting planes to solve large-scale instances, in particular those with a large number of commodities. Compared with CPLEX, currently one of the best mathematical optimization solvers, our method is competitive on medium-size instances and superior on large-scale instances with a large number of commodities, even though it uses only one family of valid inequalities.
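For reference, the fixed-charge capacitated multicommodity network design problem described above can be stated compactly as a MILP; the PuLP sketch below solves a tiny invented instance directly with CBC's branch-and-bound rather than with the branch-and-price-and-cut method of the thesis.

```python
# Compact arc-based MILP for fixed-charge capacitated multicommodity network
# design: binary y_a opens arc a, continuous x_{k,a} carries commodity k on a.
# Solved directly with CBC here (not branch-and-price-and-cut); toy data.
from pulp import LpProblem, LpVariable, LpMinimize, lpSum, LpBinary, PULP_CBC_CMD

nodes = ["1", "2", "3", "4"]
# arc: (capacity, fixed cost, per-unit routing cost)
arcs = {("1", "2"): (12, 30, 2), ("1", "3"): (12, 25, 3),
        ("2", "4"): (12, 30, 2), ("3", "4"): (12, 25, 3), ("2", "3"): (8, 10, 1)}
# commodity: (origin, destination, volume)
commodities = {"k1": ("1", "4", 8), "k2": ("1", "3", 5)}

prob = LpProblem("fixed_charge_network_design", LpMinimize)
y = {a: LpVariable(f"y_{a[0]}{a[1]}", cat=LpBinary) for a in arcs}
x = {(k, a): LpVariable(f"x_{k}_{a[0]}{a[1]}", lowBound=0)
     for k in commodities for a in arcs}

# objective: fixed opening costs plus routing costs
prob += lpSum(arcs[a][1] * y[a] for a in arcs) + \
        lpSum(arcs[a][2] * x[k, a] for k in commodities for a in arcs)

for k, (o, d, vol) in commodities.items():      # flow conservation per commodity
    for n in nodes:
        rhs = vol if n == o else -vol if n == d else 0
        prob += lpSum(x[k, a] for a in arcs if a[0] == n) \
                - lpSum(x[k, a] for a in arcs if a[1] == n) == rhs

for a, (cap, _, _) in arcs.items():             # capacity only on opened arcs
    prob += lpSum(x[k, a] for k in commodities) <= cap * y[a]

prob.solve(PULP_CBC_CMD(msg=False))
print("opened arcs:", [a for a in arcs if y[a].value() > 0.5])
```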