947 results for 005 Computer programming, programs


Relevance:

30.00%

Publisher:

Abstract:

This paper proposes a Fuzzy Goal Programming (FGP) model for a real aggregate production-planning problem. To do so, an application was made in a Brazilian sugar and ethanol milling company. The FGP model depicts the comprehensive production process of sugar, ethanol, molasses and derivatives, and considers the uncertainties involved in ethanol and sugar production. Decisions related to the agricultural and logistics phases were considered on a weekly planning horizon covering the whole harvesting season and the periods between harvests. The research has provided interesting results about decisions in the agricultural stages of cutting, loading and transportation for sugarcane suppliers and, especially, about milling decisions, where the choice of production process includes storage and distribution logistics. (C) 2014 Elsevier B.V. All rights reserved.
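
The abstract does not reproduce the model itself; as a hedged illustration of the usual max-min fuzzy-goal mechanism (a generic Zimmermann-style formulation, not necessarily the authors' actual FGP model; the symbols g_k, t_k, Delta_k and X are introduced here purely for illustration):

```latex
% Generic max-min fuzzy goal model (illustrative only):
% lambda is the smallest attainment degree over the K fuzzy goals,
% t_k is the aspiration level of goal k and Delta_k its admissible tolerance.
\begin{align*}
\max\ & \lambda\\
\text{s.t. }\ & g_k(x) \;\ge\; t_k - (1 - \lambda)\,\Delta_k, && k = 1,\dots,K,\\
              & x \in X, \qquad 0 \le \lambda \le 1 .
\end{align*}
```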

Relevance:

30.00%

Publisher:

Abstract:

Research on the micro-structural characterization of metal-matrix composites uses X-ray computed tomography to collect information about the interior features of the samples, in order to elucidate their exhibited properties. The raw tomographic data needs several steps of computational processing in order to eliminate noise and interference. Our experience with a program (Tritom) that handles these questions has shown that in some cases the processing steps take a very long time, and that it is not easy for a Materials Science specialist to interact with Tritom in order to define the most appropriate parameter values and the proper sequence of the available processing steps. To ease the use of Tritom, a system was built that addresses the aspects described above and is based on the OpenDX visualization system. OpenDX's visualization facilities are a great benefit to Tritom. The visual programming environment of OpenDX allows an easy definition of a sequence of processing steps, thus fulfilling the requirement of easy use by non-specialists in Computer Science. In addition, the possibility of incorporating external modules into a visual OpenDX program allows researchers to tackle the long execution times of some processing steps. The more time-consuming processing steps of Tritom have been parallelized for two different types of hardware architecture (message-passing and shared-memory); the corresponding parallel programs can easily be incorporated into a sequence of processing steps defined in an OpenDX program. The benefits of our system are illustrated through an example in which the tool is applied to the study of the crushing sensitivity, and its implications, of the reinforcements used in a functionally graded syntactic metallic foam.
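
The abstract does not include the parallel code; the following is a loose Python sketch of the general idea of farming a per-slice processing step out to worker processes. It is not Tritom's implementation: the median-filter step, the slice layout and the synthetic volume are assumptions made only for illustration.

```python
# Hedged sketch: parallelizing one hypothetical processing step (slice-wise
# denoising of a CT volume) over a pool of worker processes.
from multiprocessing import Pool

import numpy as np
from scipy.ndimage import median_filter


def median_denoise(slice_2d: np.ndarray) -> np.ndarray:
    """Denoise a single tomographic slice (placeholder for a real step)."""
    return median_filter(slice_2d, size=3)


def denoise_volume(volume: np.ndarray, workers: int = 4) -> np.ndarray:
    """Apply the per-slice step in parallel and reassemble the volume."""
    with Pool(processes=workers) as pool:
        slices = pool.map(median_denoise, list(volume))  # one task per slice
    return np.stack(slices)


if __name__ == "__main__":
    vol = np.random.rand(64, 256, 256)  # synthetic stand-in for raw CT data
    print(denoise_volume(vol).shape)
```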

Relevance:

30.00%

Publisher:

Abstract:

The aims of this study were to investigate working conditions, estimate the prevalence, and describe risk factors associated with Computer Vision Syndrome among operators at two call centers in São Paulo (n = 476). The methods included a quantitative cross-sectional observational study and an ergonomic work analysis, using work observation, interviews and questionnaires. The case definition was the presence of one or more specific ocular symptoms reported as occurring always, often or sometimes. The multiple logistic regression model was built using the stepwise forward likelihood method, retaining the variables with significance levels below 5% (p < 0.05). The operators were mainly female and young (15 to 24 years old). The call centers operated 24 hours a day, and the operators worked 36 hours a week with daily break times of 21 to 35 minutes. The symptoms reported were eye fatigue (73.9%), "weight" in the eyes (68.2%), "burning" eyes (54.6%), tearing (43.9%) and weakening of vision (43.5%). The prevalence of Computer Vision Syndrome was 54.6%. The associations found were: being female (OR 2.6, 95% CI 1.6 to 4.1), lack of recognition at work (OR 1.4, 95% CI 1.1 to 1.8), the organization of work in the call center (OR 1.4, 95% CI 1.1 to 1.7) and high demand at work (OR 1.1, 95% CI 1.0 to 1.3). Organizational and psychosocial factors at work should be included in programs for the prevention of Computer Vision Syndrome among call center operators.
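
The analysis itself is not shown in the abstract; as a hedged sketch of how adjusted odds ratios and 95% confidence intervals of this kind are typically obtained (not the authors' code; the stepwise selection step is omitted, and all variable names and data below are invented):

```python
# Hedged sketch: fitting a logistic regression and reporting odds ratios
# with 95% confidence intervals. Data and variable names are invented.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "cvs_case": rng.integers(0, 2, 476),        # 1 = meets the case definition
    "female": rng.integers(0, 2, 476),
    "lack_of_recognition": rng.integers(0, 2, 476),
    "high_demand": rng.integers(0, 2, 476),
})

X = sm.add_constant(df[["female", "lack_of_recognition", "high_demand"]])
fit = sm.Logit(df["cvs_case"], X).fit(disp=False)

odds_ratios = np.exp(fit.params)        # OR = exp(beta)
conf_int = np.exp(fit.conf_int())       # 95% CI on the OR scale
print(pd.concat([odds_ratios.rename("OR"), conf_int], axis=1))
```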

Relevance:

30.00%

Publisher:

Abstract:

Background: Shift work was recently described as a factor that increases the risk of type 2 diabetes mellitus. In addition, rats born to mothers subjected to a phase shift throughout pregnancy are glucose intolerant. However, the mechanism by which a phase shift transmits metabolic information to the offspring has not been determined. Among several endocrine secretions, phase shifts in the light/dark cycle have been described as altering the circadian profile of melatonin production by the pineal gland. The present study addresses the importance of maternal melatonin for the metabolic programming of the offspring. Methodology/Principal Findings: Female Wistar rats underwent SHAM surgery or pinealectomy (PINX). The PINX rats were divided into two groups and received either melatonin (PM) or vehicle. The SHAM, PINX-vehicle and PM females were housed with male Wistar rats and allowed to mate; after weaning, the male and female offspring were subjected to a glucose tolerance test (GTT), a pyruvate tolerance test (PTT) and an insulin tolerance test (ITT). Pancreatic islets were isolated for insulin secretion assays, and insulin signaling was assessed in the liver and in the skeletal muscle by western blots. We found that male and female rats born to PINX mothers display glucose intolerance at the end of the light phase of the light/dark cycle, but not at the beginning. We further demonstrate that impaired glucose-stimulated insulin secretion and hepatic insulin resistance are mechanisms that may contribute to glucose intolerance in the offspring of PINX mothers. The metabolic programming described here occurs due to an absence of maternal melatonin, because the offspring born to PINX mothers treated with melatonin were not glucose intolerant. Conclusions/Significance: The present results support the novel concept that maternal melatonin is responsible for programming the daily pattern of energy metabolism in the offspring.

Relevance:

30.00%

Publisher:

Abstract:

Osmoregulatory mechanisms can be vulnerable to electrolyte and/or endocrine environmental changes during the perinatal period, differentially programming the developing offspring and affecting them even in adulthood. The aim of this study was to evaluate whether the availability of a hypertonic sodium solution during the perinatal period may induce differential programming of osmoregulatory mechanisms in the adult offspring. To this end, we studied water and sodium intake after furosemide-induced sodium depletion in adult offspring exposed to hypertonic sodium solution from one week before mating until postnatal day 28, used as a perinatal manipulation model [PM-Na group]. In these animals, we also identified the cell populations in brain nuclei activated by the sodium-depletion treatment, analyzing the spatial patterns of Fos and Fos-vasopressin immunoreactivity. In sodium-depleted rats, sodium and water intake were significantly lower in the PM-Na group than in animals without access to the hypertonic sodium solution [PM-Ctrol group]. Interestingly, when comparing the volumes of the two solutions consumed in each PM group, our data show the expected significant difference between the two solutions ingested in the PM-Ctrol group, which amounts to an isotonic cocktail; however, in the PM-Na group there was no significant difference in the volumes of the two solutions consumed after sodium depletion, and therefore the sodium concentration of the total fluid ingested by this group was significantly higher than that in the PM-Ctrol group. With regard to brain Fos immunoreactivity, we observed that sodium depletion in the PM-Na group induced a higher number of activated cells in the subfornical organ, the ventral subdivision of the paraventricular nucleus and the vasopressinergic neurons of the supraoptic nucleus than in the PM-Ctrol animals. Moreover, along the brainstem, we found a decreased number of sodium depletion-activated cells within the nucleus of the solitary tract of the PM-Na group. Our data indicate that early sodium availability induces a long-term effect on fluid drinking and on the activity of cells in brain nuclei involved in the control of hydromineral balance. These results also suggest that the availability of a rich source of sodium during the perinatal period may provoke a larger anticipatory response in the offspring, activating the vasopressinergic system and reducing thirst after water and sodium depletion, as a result of alterations in central osmosensitive mechanisms. (C) 2011 Elsevier Inc. All rights reserved.

Relevance:

30.00%

Publisher:

Abstract:

Field-Programmable Gate Arrays (FPGAs) are becoming increasingly important in embedded and high-performance computing systems. They allow performance levels close to those obtained with Application-Specific Integrated Circuits, while still keeping design and implementation flexibility. However, to program FPGAs efficiently, one needs the expertise of hardware developers in order to master hardware description languages (HDLs) such as VHDL or Verilog. Attempts to furnish a high-level compilation flow (e.g., from C programs) still have to address open issues before broadly efficient results can be obtained. Bearing in mind the resources available on an FPGA, we have developed LALP (Language for Aggressive Loop Pipelining), a novel language for programming FPGA-based accelerators, together with its compilation framework, including mapping capabilities. The main ideas behind LALP are to provide a higher abstraction level than HDLs, to exploit the intrinsic parallelism of hardware resources, and to allow the programmer to control execution stages whenever the compiler techniques are unable to generate efficient implementations. These features are particularly useful for implementing loop pipelining, a well-regarded technique used to accelerate computations in several application domains. This paper describes LALP and shows how it can be used to achieve high-performance computing solutions.
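
LALP source code is not shown in the abstract; as a rough, language-agnostic illustration of why loop pipelining accelerates computations (this is not LALP code, and the stage count and initiation interval below are invented), compare the cycle counts of a sequential and a pipelined loop:

```python
# Hedged sketch: back-of-the-envelope cycle counts for loop pipelining.
# Not LALP code; stage count and initiation interval are illustrative.

def sequential_cycles(iterations: int, stages: int) -> int:
    """Each iteration runs all stages before the next one starts."""
    return iterations * stages

def pipelined_cycles(iterations: int, stages: int, ii: int) -> int:
    """Iterations overlap: a new one starts every `ii` cycles."""
    return stages + (iterations - 1) * ii

if __name__ == "__main__":
    n, s, ii = 1000, 5, 1
    print("sequential:", sequential_cycles(n, s))      # 5000 cycles
    print("pipelined: ", pipelined_cycles(n, s, ii))   # 1004 cycles
```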

Relevance:

30.00%

Publisher:

Abstract:

Mixed integer programming is today one of the most widely used techniques for dealing with hard optimization problems. On the one hand, many practical optimization problems arising from real-world applications (such as scheduling, project planning, transportation, telecommunications, economics and finance, timetabling, etc.) can be easily and effectively formulated as Mixed Integer linear Programs (MIPs). On the other hand, more than 50 years of intensive research have dramatically improved the capability of the current generation of MIP solvers to tackle hard problems in practice. However, many questions are still open and not fully understood, and the mixed integer programming community is more than active in trying to answer some of them. As a consequence, a huge number of papers are continuously being produced and new intriguing questions arise every year. When dealing with MIPs, we have to distinguish between two different scenarios. The first arises when we are asked to handle a general MIP and cannot assume any special structure for the given problem. In this case, a Linear Programming (LP) relaxation and some integrality requirements are all we have for tackling the problem, and we are "forced" to use general-purpose techniques. The second arises when mixed integer programming is used to address a somewhat structured problem. In this context, polyhedral analysis and other theoretical and practical considerations are typically exploited to devise special-purpose techniques. This thesis tries to give some insights into both of the situations mentioned above. The first part of the work is focused on general-purpose cutting planes, which are probably the key ingredient behind the success of the current generation of MIP solvers. Chapter 1 presents a quick overview of the main ingredients of a branch-and-cut algorithm, while Chapter 2 recalls some results from the literature in the context of disjunctive cuts and their connections with Gomory mixed integer cuts. Chapter 3 presents a theoretical and computational investigation of disjunctive cuts. In particular, we analyze the connections between different normalization conditions (i.e., conditions to truncate the cone associated with disjunctive cutting planes) and other crucial aspects such as cut rank, cut density and cut strength. We give a theoretical characterization of weak rays of the disjunctive cone that lead to dominated cuts, and propose a practical method to strengthen the cuts arising from such weak extremal solutions. Further, we point out how redundant constraints can affect the quality of the generated disjunctive cuts, and discuss possible ways to cope with them. Finally, Chapter 4 presents some preliminary ideas in the context of multiple-row cuts. Very recently, a series of papers have drawn attention to the possibility of generating cuts using more than one row of the simplex tableau at a time. Several interesting theoretical results have been presented in this direction, often revisiting and recalling other important results discovered more than 40 years ago. However, it is not at all clear how these results can be exploited in practice. As stated, the chapter is still a work in progress and simply presents a possible way of generating two-row cuts, from simplex tableaux, based on lattice-free triangles, together with some preliminary computational results.
The second part of the thesis is instead focused on the heuristic and exact exploitation of integer programming techniques for hard combinatorial optimization problems in the context of routing applications. Chapters 5 and 6 present an integer linear programming local search algorithm for Vehicle Routing Problems (VRPs). The overall procedure follows a general destroy-and-repair paradigm (i.e., the current solution is first randomly destroyed and then repaired in the attempt to find a new improved solution) in which a class of exponential neighborhoods is iteratively explored by heuristically solving an integer programming formulation through a general-purpose MIP solver. Chapters 7 and 8 deal with exact branch-and-cut methods. Chapter 7 presents an extended formulation for the Traveling Salesman Problem with Time Windows (TSPTW), a generalization of the well-known TSP in which each node must be visited within a given time window. The polyhedral approaches proposed for this problem in the literature typically follow the one that has proven extremely effective in the classical TSP context. Here we present a (quite) general idea based on a relaxed discretization of the time windows. This idea leads to a stronger formulation and to stronger valid inequalities, which are then separated within the classical branch-and-cut framework. Finally, Chapter 8 addresses branch-and-cut in the context of Generalized Minimum Spanning Tree Problems (GMSTPs), a class of NP-hard generalizations of the classical minimum spanning tree problem. In this chapter, we show how some basic ideas (and, in particular, the use of general-purpose cutting planes) can be useful to improve on branch-and-cut methods proposed in the literature.
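
To make the "LP relaxation plus integrality requirements" setting concrete, here is a hedged, generic sketch of a tiny MIP handed to a general-purpose solver; it is not an example from the thesis, and the model data are invented:

```python
# Hedged sketch: a tiny generic MIP solved by a general-purpose solver.
# Problem data are invented; this is not an example from the thesis.
from pulp import LpMaximize, LpProblem, LpStatus, LpVariable, value

prob = LpProblem("tiny_mip", LpMaximize)

# Integer decision variables (the "integrality requirements").
x = LpVariable("x", lowBound=0, cat="Integer")
y = LpVariable("y", lowBound=0, cat="Integer")

prob += 5 * x + 4 * y, "profit"             # objective
prob += 6 * x + 4 * y <= 24, "resource_1"   # linear constraints (the LP part)
prob += x + 2 * y <= 6, "resource_2"

prob.solve()  # CBC by default: branch-and-cut on top of the LP relaxation
print(LpStatus[prob.status], value(x), value(y), value(prob.objective))
```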

Relevance:

30.00%

Publisher:

Abstract:

The increasing precision of current and future experiments in high-energy physics requires a corresponding increase in the accuracy of the calculation of theoretical predictions, in order to find evidence for possible deviations from the generally accepted Standard Model of elementary particles and interactions. Calculating the experimentally measurable cross sections of scattering and decay processes to a higher accuracy translates directly into including higher-order radiative corrections in the calculation. The large number of particles and interactions in the full Standard Model results in an exponentially growing number of Feynman diagrams contributing to any given process at higher orders. Additionally, the appearance of multiple independent mass scales makes even the calculation of single diagrams non-trivial. For over two decades now, the only way to cope with these issues has been to rely on the assistance of computers. The aim of the xloops project is to provide the necessary tools to automate the calculation procedures as far as possible, including the generation of the contributing diagrams and the evaluation of the resulting Feynman integrals. The latter is based on the techniques developed in Mainz for solving one- and two-loop diagrams in a general and systematic way using parallel/orthogonal space methods. These techniques involve a considerable amount of symbolic computation. During the development of xloops it was found that conventional computer algebra systems were not a suitable implementation environment. For this reason, a new system called GiNaC has been created, which allows the development of large-scale symbolic applications in an object-oriented fashion within the C++ programming language. This system, which is now also in use for other projects besides xloops, is the main focus of this thesis. The implementation of GiNaC as a C++ library sets it apart from other algebraic systems. Our results prove that a highly efficient symbolic manipulator can be designed in an object-oriented way, and that having a very fine granularity of objects is also feasible. The xloops-related parts of this work consist of a new implementation, based on GiNaC, of the functions for calculating one-loop Feynman integrals that already existed in the original xloops program, as well as the addition of supplementary modules belonging to the interface between the library of integral functions and the diagram generator.
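
GiNaC itself is a C++ library and its API is not reproduced here; purely as a hedged analogue of the kind of symbolic manipulation such calculations require (expansion around a small regulator-like parameter, differentiation, substitution), the sketch below uses Python's sympy rather than GiNaC, and the toy expression is invented:

```python
# Hedged analogue only: sympy, not GiNaC, illustrating the style of symbolic
# work involved (Laurent expansion, differentiation, substitution).
import sympy as sp

eps, m, q = sp.symbols("eps m q", positive=True)

# Toy expression with a 1/eps pole, loosely in the spirit of a regularized
# loop factor; it is not an actual xloops/GiNaC result.
expr = (m**2 / q**2) ** (-eps) / eps

print(sp.series(expr, eps, 0, 2))   # expansion: pole plus finite part
print(sp.diff(expr, m))             # symbolic differentiation
print(expr.subs(q, m))              # substitution
```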

Relevance:

30.00%

Publisher:

Abstract:

Interactive theorem provers are tools designed for the certification of formal proofs developed by means of man-machine collaboration. Formal proofs obtained in this way cover a large variety of logical theories, ranging from the branches of mainstream mathematics to the field of software verification. The border between these two worlds is marked by results in theoretical computer science and by proofs related to the metatheory of programming languages. This last field, an obvious application of interactive theorem proving, nonetheless poses a serious challenge to the users of such tools, due both to the particularly structured way in which these proofs are constructed and to difficulties related to the management of notions typical of programming languages, such as variable binding. This thesis is composed of two parts, discussing our experience in the development of the Matita interactive theorem prover and its use in the mechanization of the metatheory of programming languages. More specifically, Part I covers: the results of our effort to provide a better framework for the development of tactics for Matita, in order to make their implementation and debugging easier, also resulting in much clearer code; and a discussion of the implementation of two tactics providing infrastructure for the unification of constructor forms and the inversion of inductive predicates, where we point out interactions between induction and inversion and provide an advancement over the state of the art. In the second part of the thesis, we focus on aspects related to the formalization of programming languages. We describe two works of ours: a discussion of basic issues we encountered in our formalization of Part 1A of the POPLmark challenge, where we apply the extended inversion principles we implemented for Matita; and a formalization of an algebraic logical framework, posing more complex challenges, including multiple binding and a form of hereditary substitution; for the encoding of binding, this work adopts an extension of Masahiko Sato's canonical locally named representation that we designed during our visit to the Laboratory for Foundations of Computer Science at the University of Edinburgh, under the supervision of Randy Pollack.
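
Matita syntax is not reproduced here; as a hedged illustration of what inversion of an inductive predicate accomplishes, the same idea is shown below in Lean 4 (not Matita), with an invented predicate:

```lean
-- Hedged illustration in Lean 4 (not Matita): inversion of an inductive predicate.
inductive Ev : Nat → Prop
  | zero : Ev 0
  | step (n : Nat) : Ev n → Ev (n + 2)

-- Inverting the hypothesis `Ev 1`: no constructor can have produced it,
-- so case analysis on the derivation leaves no goals.
example (h : Ev 1) : False := by
  cases h
```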

Relevance:

30.00%

Publisher:

Abstract:

The growing availability of mechanical and, above all, electronic devices whose performance increases while their cost decreases has allowed the field of robotics to make remarkable progress. Such progress has occurred not only in industrial robotics, for example on assembly lines, but also in the branch of robotics that includes autonomous domestic robots. For these reasons, such autonomous systems are becoming increasingly pervasive: they are immersed in the same environment in which human beings live, and they interact with them proactively. They are thus following the same path that personal computers travelled roughly 30 years ago, going from expensive and bulky mainframes available only to research institutes and universities to being present in every home, used not only professionally but also to assist with everyday activities or for entertainment. For these reasons robotics is a field of Information Technology of interest to more and more software programmers of every kind. This thesis first analyses the main aspects of programming controllers for autonomous robots (i.e. robots not guided by a user), and then shows how an agent-based approach is appropriate for programming such systems. In particular, it is shown that an agent-oriented approach, using the Jason programming language and therefore the BDI architecture, is a significant choice, since the model underlying this kind of language is based on human practical reasoning and is therefore well suited to the implementation of systems that act autonomously. Since the opportunities to use a real autonomous system for testing the controllers are limited, for practical, economic and time reasons, we show how easy and effective it is to quickly reach a first prototype of the robot using the commercial simulator Webots. The contribution of this thesis includes the possibility of programming a robot in a modular and rapid way with only a few lines of code, so that adding functionality to it does not become a bottleneck, as happens when these systems are programmed with classical imperative programming languages. The thesis is organised as follows: a background chapter covering the basics of robotics, its programming and the tools used for this purpose; a chapter presenting the notions of agent-oriented programming with the Jason language, and therefore the BDI architecture, and explaining why this approach is suitable for programming robotic control systems. Next, the complete structure of our software working environment, comprising the agent environment and the simulator, is presented; the following chapter then reports the explorations carried out using Jason and a classical approach (with classical languages), through several case studies of increasing complexity. After that, the two approaches are compared, analysing the problems and advantages each entails. Finally, the thesis ends with a chapter of conclusions and reflections on possible extensions and future work.

Relevance:

30.00%

Publisher:

Abstract:

In the domain of safety-critical embedded systems, the design process for applications is highly complex. For a given hardware architecture, electronic control units can be upgraded so that all existing processes and signals are executed on time. The timing requirements are strict and must be met in every periodic recurrence of the processes, since guaranteeing parallel execution is of utmost importance. Existing approaches can quickly compute design alternatives, but they do not guarantee that the cost of the necessary hardware changes is minimal. We present an approach that computes cost-minimal solutions to this problem which satisfy all timing constraints. Our algorithm uses linear programming with column generation, embedded in a tree structure, to provide lower and upper bounds during the optimization process. The complex constraints that guarantee periodic execution are handled by decomposing the main problem into independent subproblems, which are formulated as integer linear programs. Both the analyses of process execution and the methods for signal transmission are examined, and linearized representations are given. Furthermore, we present a new formulation for fixed-priority execution that additionally computes worst-case process response times, which are needed in scenarios where timing constraints are imposed on subsets of processes and signals. We demonstrate the applicability of our methods by analysing instances containing process structures from real applications. Our results show that lower bounds can be computed quickly in order to prove the optimality of heuristic solutions. When we provide optimal solutions with response times, our new formulation compares favourably with other approaches in terms of running time. The best results are obtained with a hybrid approach that combines heuristic initial solutions, preprocessing, and a heuristic followed by a short exact computation phase.
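
As a hedged, generic sketch of the column-generation building block mentioned above (not the thesis's actual formulation; the symbols are introduced here purely for illustration), a restricted master problem over a column subset C' alternates with a pricing step that searches for a column of negative reduced cost under the current duals:

```latex
% Generic column-generation skeleton (illustrative only, not the thesis's model):
% k_c is the cost of column c, a_c its constraint column, b the requirements,
% and pi the duals of the restricted master problem (RMP).
\begin{align*}
\text{(RMP)}\quad & \min \sum_{c \in \mathcal{C}'} k_c \lambda_c
  \quad \text{s.t.} \quad \sum_{c \in \mathcal{C}'} a_c \lambda_c \ge b, \quad \lambda \ge 0,\\
\text{(Pricing)}\quad & \bar{k}_c = k_c - \pi^{\top} a_c < 0
  \;\Longrightarrow\; \text{add } c \text{ to } \mathcal{C}' \text{ and re-solve; stop when no such } c \text{ exists.}
\end{align*}
```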

Relevance:

30.00%

Publisher:

Abstract:

The aim of this work is to develop a prototype of an e-learning environment that can foster Content and Language Integrated Learning (CLIL) for students enrolled in an aircraft maintenance training program, which allows them to obtain a license valid in all EU member states. Background research is conducted to retrace the evolution of the field of educational technology, analyzing different learning theories, namely behaviorism, cognitivism, and (socio-)constructivism, and reflecting on how technology and its use in educational contexts have changed over time. Particular attention is given to technologies that have been used and proved effective in Computer Assisted Language Learning (CALL). Based on the background research and on the students' learning objectives, i.e. learning highly specialized content and aeronautical technical English, a bilingual approach is chosen, three main tools are identified (a hypertextbook, an exercise creation activity, and a discussion forum), and the learning management system Moodle is chosen as the delivery medium. The hypertextbook is based on the English-language technical textbook the students already use. In order to foster text comprehension, the hypertextbook is enriched with hyperlinks and tooltips. Hyperlinks redirect students to webpages containing additional information in both English and Italian, while tooltips show the Italian equivalents of English technical terms. The exercise creation activity and the discussion forum foster interaction and collaboration among students, in accordance with socio-constructivist principles. In the exercise creation activity, students collaboratively create a workbook, which allows them to analyze in depth and master the contents of the hypertextbook and, at the same time, to build a learning tool that can help them, as well as future students, to enhance learning. In the discussion forum, students can discuss their individual issues, whether content-related, English-related or related to the e-learning environment, helping one another and offering instructors suggestions on how to improve both the hypertextbook and the workbook based on their needs.

Relevance:

30.00%

Publisher:

Abstract:

Almost 10 years after the article "The Free Lunch Is Over", in which the need to parallelize programs started to become a real and mainstream issue, a lot has happened: processor manufacturers are reaching the physical limits of most of their approaches to boosting CPU performance and are instead turning to hyperthreading and multicore architectures; applications increasingly need to support concurrency; and programming languages and systems are increasingly forced to deal well with concurrency. This thesis proposes an overview of a paradigm that aims to properly abstract the problem of propagating data changes: Reactive Programming (RP). This paradigm proposes an asynchronous, non-blocking approach to concurrency and computation, abstracting away from low-level concurrency mechanisms.
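
As a minimal, hand-rolled sketch of the change-propagation idea behind RP (not taken from the thesis, and far simpler than a real RP runtime, which would add asynchrony and non-blocking scheduling), a "signal" can push recomputation to its dependents whenever a source value changes:

```python
# Hedged sketch: a tiny push-based "signal" illustrating reactive change
# propagation. Real RP frameworks add asynchrony, schedulers, operators, etc.
class Signal:
    def __init__(self, value):
        self._value = value
        self._subscribers = []          # dependents to notify on change

    @property
    def value(self):
        return self._value

    def set(self, new_value):
        self._value = new_value
        for callback in self._subscribers:
            callback(new_value)         # push the change downstream

    def subscribe(self, callback):
        self._subscribers.append(callback)


def derived(f, *sources):
    """Create a signal whose value is recomputed when any source changes."""
    out = Signal(f(*(s.value for s in sources)))
    for s in sources:
        s.subscribe(lambda _v: out.set(f(*(src.value for src in sources))))
    return out


a, b = Signal(1), Signal(2)
total = derived(lambda x, y: x + y, a, b)
total.subscribe(lambda v: print("total is now", v))
a.set(10)   # prints: total is now 12
```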

Relevance:

30.00%

Publisher:

Abstract:

When healthy observers make a saccade that is erroneously directed toward a distracter stimulus, they often produce a corrective saccade within 100 ms after the end of the primary saccade. Such short inter-saccadic intervals indicate that programming of the secondary saccade was initiated prior to the execution of the primary saccade, and hence that the two saccades were programmed concurrently. Here we show that concurrent saccade programming is bilaterally impaired in left spatial neglect, a strongly lateralized disorder of visual attention resulting from extensive right cerebral damage. Neglect patients were asked to make saccades to targets presented to the left or right of fixation while disregarding a distracter presented in the opposite hemifield. We examined those experimental trials on which participants first made a saccade to the distracter, followed by a secondary (corrective) saccade to the target. Compared to healthy and right-hemisphere-damaged control participants, the proportion of secondary saccades directing gaze to the target, instead of bringing it even closer to the distracter, was bilaterally reduced in neglect patients. In addition, the characteristic reduction of secondary saccade latency observed in both control groups was absent in neglect patients, whether the secondary saccade was directed to the left or the right hemifield. This pattern is consistent with a severe, bilateral impairment of concurrent saccade programming in left spatial neglect.

Relevance:

30.00%

Publisher:

Abstract:

Alcohol use disorder (AUD) and depressive disorders often co-occur. Findings on the effects of major depressive disorder (MDD) or depressive symptoms on post-treatment alcohol relapse are controversial. The aim of this study was to examine the association of MDD and depressive symptoms with treatment outcomes after residential AUD programs. In a naturalistic, prospective, multisite study of 12 residential AUD treatment programs in the German-speaking part of Switzerland, 64 patients with AUD and MDD, 283 patients with AUD and clinically significant depressive symptoms at admission, and 81 patients with AUD and such symptoms at discharge were compared with patients with AUD only with respect to alcohol use, depressive symptoms, and treatment service utilization. MDD was provisionally identified at admission and definitively determined at discharge. Whereas patients with MDD did not differ from patients with AUD only at the 1-year follow-up, patients with AUD and clinically significant depressive symptoms had a significantly shorter time to first drink and a lower abstinence rate. These patients also had elevated AUD indices and higher treatment service utilization for psychiatric disorders. Our results suggest that clinically significant depressive symptoms are a substantial risk factor for relapse, so it may be important to treat them during and after residential AUD treatment programs.