927 results for Analysis of Algorithms and Problem Complexity


Relevance: 100.00%

Abstract:

The amount of installed wind power has grown exponentially during the past ten years. As wind turbines have become a significant source of electrical energy, the interactions between the turbines and the electric power network need to be studied more thoroughly than before. In particular, the behavior of the turbines in fault situations is of prime importance; simply disconnecting all wind turbines from the network during a voltage drop is no longer acceptable, since this would contribute to a total network collapse. These requirements have contributed to the increased role of simulations in the study and design of the electric drive train of a wind turbine. When planning a wind power investment, the selection of the site and the turbine is crucial for the economic feasibility of the installation. Economic feasibility, in turn, is the factor that determines whether or not investment in wind power will continue, contributing to green electricity production and the reduction of emissions. In the selection of the installation site and the turbine (siting and site matching), the properties of the electric drive train of the planned turbine have so far generally not been taken into account. Additionally, although the loss minimization of some of the individual components of the drive train has been studied, the drive train as a whole has received less attention. Furthermore, as a wind turbine will typically operate at a power level lower than the nominal most of the time, efficiency analysis at the nominal operating point is not sufficient. This doctoral dissertation attempts to combine the two aforementioned areas of interest by studying the applicability of time domain simulations in the analysis of the economic feasibility of a wind turbine.
The utilization of a general-purpose time domain simulator, otherwise applied to the study of network interactions and control systems, in the economic analysis of the wind energy conversion system is studied. The main benefits of the simulation-based method over traditional methods based on analytic calculation of losses include the ability to reuse and recombine existing models; the ability to analyze interactions between the components and subsystems in the electric drive train (something which is impossible when considering different subsystems as independent blocks, as is commonly done in the analytical calculation of efficiencies); the ability to analyze in a rather straightforward manner the effect of selections other than physical components, for example control algorithms; and the ability to verify assumptions about the effects of a particular design change on the efficiency of the whole system. Based on the work, it can be concluded that differences between two configurations can be seen in the economic performance with only minor modifications to the simulation models used in the network interaction and control method study. This eliminates the need to develop analytic expressions for losses and enables the study of the system as a whole instead of modeling it as a series connection of independent blocks with no loss interdependencies. Three example cases (site matching, component selection, control principle selection) are provided to illustrate the usage of the approach and analyze its performance.
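Because a turbine spends most of its life below nominal power, partial-load behavior dominates the energy yield. As an illustration of why nominal-point analysis alone is insufficient, the sketch below integrates a simplified power curve over a Weibull wind-speed distribution; the curve shape, Weibull parameters and ratings are all illustrative assumptions, not values from the dissertation.

```python
import math

def weibull_pdf(v, k=2.0, c=8.0):
    """Weibull probability density for wind speed v (m/s); k, c assumed."""
    return (k / c) * (v / c) ** (k - 1) * math.exp(-(v / c) ** k)

def power_kw(v, rated_kw=2000.0, cut_in=3.0, rated_v=12.0, cut_out=25.0):
    """Simplified turbine power curve: cubic ramp between cut-in and rated speed."""
    if v < cut_in or v > cut_out:
        return 0.0
    if v >= rated_v:
        return rated_kw
    return rated_kw * ((v ** 3 - cut_in ** 3) / (rated_v ** 3 - cut_in ** 3))

def annual_energy_mwh(dv=0.1):
    """Integrate the power curve against the wind-speed distribution over one year."""
    hours = 8760.0
    e, v = 0.0, 0.0
    while v <= 30.0:
        e += power_kw(v) * weibull_pdf(v) * dv
        v += dv
    return e * hours / 1000.0
```

The same integration, with component loss models substituted for the idealised power curve, is where the drive-train efficiency at partial load enters the economic analysis.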

Relevance: 100.00%

Abstract:

The theoretical part of the study concentrated on finding theoretical frameworks for optimizing the number of stock keeping units (SKUs) needed in the manufacturing industry. The goal was to find ways for a company to acquire an optimal collection of stock keeping units needed for manufacturing the required amount of end products. The research follows a constructive research approach oriented toward practical problem solving. In the empirical part of this study, a recipe search tool was developed for an existing database used in the target company. The purpose of the tool was to find all the recipes meeting the EUPS performance standard and to put the recipes in a ranking order using the data available in the database. The ranking of the recipes was formed from a combination of the performance measures and the price of the recipes. In addition, the tool determined what kind of paper SKUs were needed to manufacture the best-performing recipes. The tool developed during this process meets the requirements: it makes searching for all the recipes meeting the EUPS standard much easier and faster. Furthermore, many future development possibilities for the tool were discovered while writing the thesis.
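The ranking step described above, combining performance measures with price, can be sketched as a simple weighted score. The field names, weights and normalisation below are illustrative assumptions; the actual tool works against the company's own database schema.

```python
def rank_recipes(recipes, perf_weight=0.7, price_weight=0.3):
    """Rank recipes meeting the performance standard by a weighted score.

    `recipes` is a list of dicts with hypothetical keys 'name',
    'meets_standard', 'performance' (higher is better) and 'price'
    (lower is better). Weights and keys are assumptions for illustration.
    """
    eligible = [r for r in recipes if r["meets_standard"]]
    if not eligible:
        return []
    max_perf = max(r["performance"] for r in eligible)
    max_price = max(r["price"] for r in eligible)

    def score(r):
        # normalised performance rewarded, normalised price penalised
        return (perf_weight * r["performance"] / max_perf
                - price_weight * r["price"] / max_price)

    return sorted(eligible, key=score, reverse=True)
```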

Relevance: 100.00%

Abstract:

This thesis studies the mechanical design and analysis of a modular test device supported by active magnetic bearings. The theory of high-speed rotor design is presented, along with several analytical modeling methods for the mechanical loads. Since the device is a high-speed electrical machine, rotor dynamics and its role in the design process are introduced. The structure and operation of magnetic bearings are reviewed as part of this work, together with a literature survey of existing test devices used, for example, for identifying component characteristics and for rotor dynamics research. The scope of the work is the concept design of a reconfigurable active magnetic bearing (AMB) test device and the documentation of the design process. Reconfigurability was chosen because it enables testing different component layouts for various magnetic bearing assemblies and rotors. The main emphasis of this work is on the design and modeling of the rotor of a high-speed induction machine. The structure of the modular actuators, such as the magnetic bearings and the induction motor, is presented, and the benefits of a modular structure in test-device use are documented. Analytical and finite-element methods are used to study the designed high-speed rotor. The results of the design and analysis are presented and the different modeling methods are compared. In addition, conclusions are documented on the complexity of integrating the electromagnetic parts into the rotor and the actuators, and on the requirements for optimizing both the mechanical and the electromagnetic properties.
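As a pointer to the rotor-dynamics theory the thesis draws on, the first bending critical speed of an idealised uniform shaft on rigid supports can be estimated from Euler-Bernoulli beam theory. The dimensions and material values below are illustrative steel defaults, not the designed rotor's parameters.

```python
import math

def critical_speed_rpm(L, d, E=210e9, rho=7850.0, n=1):
    """n-th bending critical speed (rpm) of a uniform, simply supported
    solid shaft: w_n = (n*pi/L)^2 * sqrt(E*I / (rho*A)).

    L: length (m), d: diameter (m), E: Young's modulus (Pa),
    rho: density (kg/m^3). Values are generic steel assumptions.
    """
    I = math.pi * d ** 4 / 64.0          # second moment of area
    A = math.pi * d ** 2 / 4.0           # cross-section area
    w = (n * math.pi / L) ** 2 * math.sqrt(E * I / (rho * A))
    return w * 60.0 / (2.0 * math.pi)    # rad/s -> rpm
```

A real AMB rotor design must of course account for the added mass and stiffness of the electromagnetic parts, which is exactly the integration complexity the thesis documents.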

Relevance: 100.00%

Abstract:

Measles virus is a highly contagious agent that causes a major health problem in developing countries. The viral genomic RNA is single-stranded, nonsegmented and of negative polarity. Many live attenuated measles vaccines have been developed using either the prototype Edmonston strain or other locally isolated measles strains. Despite the diverse geographic origins of the vaccine viruses and the different attenuation methods used, there is remarkable sequence similarity of the H, F and N genes among all vaccine strains. CAM-70 is a Japanese attenuated measles vaccine strain widely used in Brazilian children and produced by Bio-Manguinhos since 1982. Previous studies have characterized this vaccine biologically and genomically; nevertheless, only the F, H and N genes had been sequenced. In the present study we sequenced the remaining P, M and L genes (approximately 1.6, 1.4 and 6.5 kb, respectively) to complete the genomic characterization of CAM-70 and to assess the extent of the genetic relationship between CAM-70 and other current vaccines. These genes were amplified using long-range or standard RT-PCR techniques, and the cDNA was cloned and automatically sequenced using the dideoxy chain-termination method. Sequence analysis comparing previously sequenced genotype A strains with the CAM-70 Bio-Manguinhos strain showed low divergence among them. However, the CAM-70 strains (CAM-70 Bio-Manguinhos and a recently sequenced CAM-70 submaster seed strain) were assigned to a specific group by phylogenetic analysis using the neighbor-joining method. Information about our product at the genomic level is important for monitoring vaccination campaigns and for future studies of measles virus attenuation.
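The divergence comparison among genotype A strains reduces to pairwise distances between aligned sequences. A minimal sketch of the p-distance (proportion of differing sites) that feeds distance-based methods such as neighbor-joining is shown below; the sequences in the test are invented placeholders, not CAM-70 data.

```python
def p_distance(seq1, seq2):
    """Proportion of differing sites between two aligned sequences,
    ignoring positions where either sequence has a gap ('-')."""
    assert len(seq1) == len(seq2), "sequences must be aligned to equal length"
    compared = diffs = 0
    for a, b in zip(seq1, seq2):
        if a == "-" or b == "-":
            continue  # skip gapped positions
        compared += 1
        if a != b:
            diffs += 1
    return diffs / compared if compared else 0.0
```

Neighbor-joining then builds the tree from the full matrix of such pairwise distances.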

Relevance: 100.00%

Abstract:

Laser additive manufacturing (LAM), also known as 3D printing, is a powder bed fusion (PBF) type of additive manufacturing (AM) technology used to manufacture metal parts layer by layer with the aid of a laser beam. The development of the technology from building mere prototype parts to functional parts is due to its design flexibility, as well as the possibility to manufacture components that are tailored and optimized in terms of performance and strength-to-weight ratio. The study of energy and raw material consumption in LAM is essential, as it may facilitate the adoption and usage of the technique in manufacturing industries. The objective of this thesis was to determine the environmental and economic impact of LAM and to conduct a life cycle inventory (LCI) of CNC machining and LAM in terms of energy and raw material consumption in the production phase. The literature overview in this thesis covers sustainability issues in manufacturing industries with a focus on environmental and economic aspects; life cycle assessment and its applicability in the manufacturing industry were also studied. The UPLCI-CO2PE! initiative was identified as the most widely applied existing methodology for conducting LCI analysis of a discrete manufacturing process such as LAM. Much of the reviewed literature focused on PBF of polymeric materials, and only a few studies considered metallic materials. The studies that included metallic materials only measured input and output energy or materials of the process and compared different AM systems, without comparing them to any competitive process. Nor did any include the effect of process variation when building metallic parts with LAM. In the experimental part of this thesis, dissimilar samples were made with CNC machining and LAM. The test samples were designed to include part complexity and weight reduction. A PUMA 2500Y lathe was used for the CNC machining, whereas a modified research machine representing the EOSINT M series was used for the LAM.
The raw materials used for making the test pieces were stainless steel 316L bar (CNC machined parts) and stainless steel 316L powder (LAM built parts). An analysis of the power, time, and energy consumed in each of the manufacturing processes in the production phase showed that LAM uses more energy than CNC machining. The high energy consumption was a result of the long production time. The energy consumption profiles in CNC machining showed fluctuations with high and low power ranges, whereas LAM energy usage within a specific mode (standby, heating, process, sawing) remained relatively constant throughout production. CNC machining was limited in terms of manufacturing freedom: it was not possible to manufacture all the designed samples by machining, and the sample that could be machined required removing a large amount of material as waste. The planning phase in LAM was shorter than in CNC machining, as the latter required many preparation steps. Specific energy consumption (SEC) in LAM was estimated based on the practical results and an assumed platform utilisation. The estimates showed that SEC could be reduced by placing more parts in one build than in the empirical runs of this thesis (six parts).
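The platform-utilisation effect on specific energy consumption can be illustrated with a simple amortisation model: the per-build overhead (heating, standby) is fixed, so spreading it over more parts lowers the energy per kilogram. All numbers below are assumed for illustration, not measured values from the thesis.

```python
def specific_energy(parts_per_build,
                    overhead_kwh=10.0,     # heating + standby per build (assumed)
                    melt_kwh_per_kg=25.0,  # process energy per kg of part (assumed)
                    part_mass_kg=0.2):
    """Specific energy consumption (kWh/kg) for one LAM build.

    The fixed per-build overhead is shared by all parts on the platform,
    so SEC falls as platform utilisation rises.
    """
    total_kwh = overhead_kwh + melt_kwh_per_kg * part_mass_kg * parts_per_build
    total_mass = part_mass_kg * parts_per_build
    return total_kwh / total_mass
```

With these assumed figures, doubling the number of parts on the platform from six to twelve lowers the SEC, which is the trend the thesis's estimates describe.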

Relevance: 100.00%

Abstract:

We compared the cost-benefit of two algorithms, recently proposed by the Centers for Disease Control and Prevention, USA, with the conventional one, the most appropriate for the diagnosis of hepatitis C virus (HCV) infection in the Brazilian population. Serum samples were obtained from 517 ELISA-positive or -inconclusive blood donors who had returned to Fundação Pró-Sangue/Hemocentro de São Paulo to confirm previous results. Algorithm A was based on the signal-to-cut-off (s/co) ratio of anti-HCV ELISA samples, using an s/co ratio that shows ≥95% concordance with immunoblot (IB) positivity. For algorithm B, reflex nucleic acid amplification testing by PCR was required for ELISA-positive or -inconclusive samples, and IB for PCR-negative samples. For algorithm C, all positive or inconclusive ELISA samples were submitted to IB. We observed a similar rate of positive results with the three algorithms: 287, 287, and 285 for A, B, and C, respectively, and 283 were concordant with one another. Indeterminate results from algorithms A and C were elucidated by PCR (expanded algorithm), which detected two more positive samples. The estimated cost of algorithms A and B was US$21,299.39 and US$32,397.40, respectively, which were 43.5 and 14.0% more economical than C (US$37,673.79). The cost can vary according to the technique used. We conclude that both algorithms A and B are suitable for diagnosing HCV infection in the Brazilian population. Furthermore, algorithm A is the more practical and economical one, since it requires supplemental tests for only 54% of the samples, while algorithm B provides early information about the presence of viremia.
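The cost difference between the three algorithms comes from how many samples each routes to the expensive supplemental tests. A hedged sketch of that routing logic follows; the test prices and sample fields are invented for illustration and do not reproduce the study's actual cost data.

```python
def algorithm_cost(samples, cost_ib=60.0, cost_pcr=40.0, algorithm="C"):
    """Supplemental-testing cost for a set of ELISA-reactive samples.

    `samples` is a list of dicts with hypothetical keys 'high_sco'
    (s/co ratio in the high-concordance range) and 'pcr_positive'.
    Per-test costs are illustrative placeholders.
    """
    total = 0.0
    for s in samples:
        if algorithm == "A":      # IB only when the s/co ratio is not conclusive
            if not s["high_sco"]:
                total += cost_ib
        elif algorithm == "B":    # reflex PCR for all, IB for PCR-negatives
            total += cost_pcr
            if not s["pcr_positive"]:
                total += cost_ib
        else:                     # C: IB for every reactive sample
            total += cost_ib
    return total
```

With a high share of high-s/co samples, algorithm A needs supplemental tests for only a fraction of the set, which mirrors the 54% figure reported above.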

Relevance: 100.00%

Abstract:

The credibility of rules and the elements of power are fundamental keys in the analysis of political institutions. This paper opens the "black box" of the European Union institutions and analyses the problem of credibility of the commitment to the Stability and Growth Pact (SGP). The Pact constituted a formal rule that tried to enforce budgetary discipline on the European states. Compliance with this contract could be ensured by the existence of third-party enforcement or by the coincidence of the ex-ante and ex-post interests of the states (reputational capital). In fact, states such as France and Germany failed to comply with the rule and managed to avoid the application of sanctions. This article studies the transactions and the hierarchy of power that exist in the European institutions, and analyses the institutional framework included in the new European Constitution.

Relevance: 100.00%

Abstract:

This thesis provides an analysis of how the nexus between climate change and human rights shapes public policy agendas and alternatives. It draws upon seminal work conducted by John Kingdon, whose landmark publication "Agendas, Alternatives, and Public Policies" described how separate streams of problems, solutions, and politics converge to move an issue onto the public policy agenda toward potential government action. Building on Kingdon's framework, this research explores how human rights contribute to surfacing the problem of climate change, developing alternative approaches to tackling climate change, and improving the political environment necessary for addressing climate change with sufficient ambition. The study reveals that climate change undermines the realization of human rights and that human rights can be effective tools in building climate resilience. This analysis was developed using a mixed methods approach, drawing upon a substantial literature review, the researcher's own participation in international climate policy design, elite interviews with thought leaders dealing with climate change and human rights, and regular input from focus groups comprised of practitioners drawn from the fields of climate change, development and human rights. This is a journal-based thesis with a total of six articles submitted for evaluation, published in peer-reviewed publications over a five-year period.

Relevance: 100.00%

Abstract:

This research addressed the question of the role of explicit algorithms and episodic contexts in the acquisition of computational procedures for regrouping in subtraction. Three groups of students having difficulty learning to subtract with regrouping were taught procedures for doing so through an explicit algorithm, an episodic context, or an examples approach. It was hypothesized that the use of an explicit algorithm represented in a flow chart format would facilitate the acquisition and retention of specific procedural steps relative to the other two conditions. On the other hand, the use of paragraph stories to create episodic context was expected to facilitate the retrieval of algorithms, particularly in a mixed presentation format. The subjects were tested on similar, near, and far transfer questions over a four-day period. Near and far transfer algorithms were also introduced on Day Two. The results suggested that both explicit algorithms and episodic context facilitate performance on questions requiring subtraction with regrouping. However, the differential effects of these two approaches on near and far transfer questions were not as easy to identify. Explicit algorithms may facilitate the acquisition of specific procedural steps while at the same time inhibiting the application of such steps to transfer questions. Similarly, the value of episodic context in cuing the retrieval of an algorithm may be limited by the ability of a subject to identify and classify a new question as an exemplar of a particular episodically defined problem type or category. The implications of these findings for the procedures employed in teaching mathematics to students with learning problems are discussed in detail.
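The explicit-algorithm condition presents regrouping as a fixed sequence of procedural steps. A minimal sketch of those steps, the standard column-by-column borrowing procedure rather than the study's actual flow chart, is:

```python
def subtract_with_regrouping(minuend, subtrahend):
    """Digit-by-digit subtraction with regrouping (borrowing), mirroring the
    explicit, stepwise algorithm a flow chart would present to students."""
    assert minuend >= subtrahend >= 0
    top = [int(d) for d in str(minuend)][::-1]       # least-significant digit first
    bottom = [int(d) for d in str(subtrahend)][::-1]
    bottom += [0] * (len(top) - len(bottom))         # pad the shorter number
    result, borrow = [], 0
    for t, b in zip(top, bottom):
        t -= borrow
        if t < b:                 # regroup: borrow ten from the next column
            t += 10
            borrow = 1
        else:
            borrow = 0
        result.append(t - b)
    digits = "".join(str(d) for d in result[::-1]).lstrip("0")
    return int(digits or "0")
```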

Relevance: 100.00%

Abstract:

The present study examined the bullying experiences of a group of students, aged 10-14 years, identified as having behaviour problems. A total of ten students participated in a series of mixed-methodology activities, including self-report questionnaires, storytelling exercises, and interview-style journaling. The main research questions concerned the prevalence of bully/victims and the type of bullying experiences in this population. Questionnaires gathered information about their involvement in bullying, as well as about psychological risk factors including normative beliefs about antisocial acts, impulsivity, problem solving, and coping strategies. Journal questions expanded on these themes and allowed students to explain their personal experiences as bullies and victims as well as provide suggestions for intervention. The overall results indicated that all ten students in this sample had participated in bullying as both a bully and a victim. This high prevalence of bully/victim involvement among students from behavioural classrooms is in sharp contrast with the general population, where the prevalence is about 33%. In addition, a common thread indicated that the students who participated in this study demonstrate characteristics of emotionally dysregulated reactive bullies. Theoretical implications and educational practices are discussed.

Relevance: 100.00%

Abstract:

In Canada, freedom of information must be viewed in the context of governing: how do you deal with an abundance of information while balancing a diversity of competing interests? How can you ensure people are informed enough to participate in crucial decision-making, yet willing enough to let some administrative matters be dealt with in camera, without their involvement in every detail? In an age when taxpayers' coalition groups are on the rise, and the government is encouraging the establishment of Parent Council groups for schools, the issues and challenges presented by access-to-information and protection-of-privacy legislation are real ones. The province of Ontario's decision to extend freedom of information legislation to local governments does not ensure, or equate to, full public disclosure of all facts, nor does it necessarily guarantee complete public comprehension of an issue. The mere fact that local governments, like school boards, decide to collect, assemble or record some information and not other information implies that a prior decision was made by "someone" on what was important to record or keep. That in itself means that not all the facts are going to be disclosed, regardless of the presence of legislation. The resulting lack of information can lead to public mistrust and lack of confidence in those who govern. This is completely contrary to the spirit of the legislation, which was to provide interested members of the community with facts so that values like political accountability and trust could be ensured and meaningful criticism and input obtained on matters affecting the whole community. This thesis first reviews the historical reasons for adopting freedom of information legislation, reasons which are rooted in our parliamentary system of government.
However, the same reasoning for enacting such legislation cannot be applied carte blanche to the municipal level of government in Ontario, or more specifically to the programs, policies or operations of a school board. The purpose of this thesis is to examine whether the Municipal Freedom of Information and Protection of Privacy Act, 1989 (MFIPPA) was a necessary step to ensure greater openness from school boards. Based on a review of the Orders made by the Office of the Information and Privacy Commissioner/Ontario, it also assesses how successfully freedom of information legislation has been implemented at the municipal level of government. The Orders provide an opportunity to review what problems school boards have encountered and what guidance the Commissioner has offered. Reference is made to a value framework as an administrative tool for critically analyzing the suitability of MFIPPA to school boards. The conclusion is drawn that MFIPPA appears to have inhibited rather than facilitated openness in local government. This may be attributed to several factors, including the general uncertainty, confusion and discretion in interpreting various provisions and exemptions in the Act. Some of the uncertainty is due to the fact that an insufficient number of school board staff are familiar with the Act. The complexity of the Act and its legalistic procedures have over-formalized the processes of exchanging information. In addition, there appears to be a concern among municipal officials that granting any access to information may violate the personal privacy rights of others. These concerns translate into indecision and extreme caution in responding to inquiries. The result is delay in responding to information requests and a lack of uniformity in the responses given. However, the mandatory review of the legislation does afford an opportunity to address some of these problems and to make this complex Act more suitable for application to school boards.
For the Act to function more efficiently and effectively, legislative changes must be made to MFIPPA. It is important that the recommendations for improving the Act be adopted before the government extends this legislation to any other public entities.

Relevance: 100.00%

Abstract:

As the complexity of evolutionary design problems grows, so too must the quality of solutions scale to that complexity. In this research, we develop a genetic programming system with individuals encoded as tree-based generative representations to address scalability. The system is capable of multi-objective evaluation using a ranked-sum scoring strategy. We examine Hornby's features and measures of modularity, reuse and hierarchy in evolutionary design problems. Experiments were carried out using the system to generate three-dimensional forms, and the feature characteristics of modularity, reuse and hierarchy were analyzed. This work expands on that of Hornby by examining a new and more difficult problem domain. The results from these experiments show that individuals encoded with all three features performed best overall, and the measures of complexity conform to Hornby's results. Moving forward with only this best-performing encoding, the system was applied to the generation of three-dimensional external building architecture. One objective considered was passive solar performance, in which the system was challenged to generate forms that optimize exposure to the Sun. The results from these and other experiments satisfied the requirements, and the system was shown to scale well to the architectural problems studied.
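Ranked-sum scoring sidesteps the need to weight incommensurable objectives: each candidate is ranked separately per objective and the ranks are summed, with a lower total being better. The sketch below is a common formulation of the idea, assumed for illustration rather than taken from the thesis implementation.

```python
def ranked_sum(population, objectives):
    """Order a population by summed per-objective ranks (lower is better).

    `population` is a list of candidates; `objectives` a list of functions,
    each returning a value to minimise for a candidate.
    """
    totals = {id(ind): 0 for ind in population}
    for obj in objectives:
        # rank the population under this single objective
        ordered = sorted(population, key=obj)
        for rank, ind in enumerate(ordered):
            totals[id(ind)] += rank
    return sorted(population, key=lambda ind: totals[id(ind)])
```

Because only orderings matter, objectives measured on wildly different scales (e.g. a fitness score and a solar-exposure metric) can be combined without normalisation.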

Relevance: 100.00%

Abstract:

In a recent paper, Bai and Perron (1998) considered theoretical issues related to the limiting distribution of estimators and test statistics in the linear model with multiple structural changes. In this companion paper, we consider practical issues for the empirical application of the procedures. We first address the problem of estimating the break dates and present an efficient algorithm to obtain global minimizers of the sum of squared residuals. This algorithm is based on the principle of dynamic programming and requires at most least-squares operations of order O(T²) for any number of breaks. Our method can be applied to both pure and partial structural-change models. Second, we consider the problem of forming confidence intervals for the break dates under various hypotheses about the structure of the data and the errors across segments. Third, we address the issue of testing for structural changes under very general conditions on the data and the errors. Fourth, we address the issue of estimating the number of breaks. We present simulation results pertaining to the behavior of the estimators and tests in finite samples. Finally, a few empirical applications are presented to illustrate the usefulness of the procedures. All methods discussed are implemented in a GAUSS program available upon request for non-profit academic use.
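The dynamic-programming idea is that the best partition of y[0..j] with b breaks reuses the best partition with b-1 breaks ending at an earlier point, so only the segment sums of squared residuals need precomputing. The sketch below applies this to the simplest structural-change model (a segment-specific mean); the paper's algorithm handles general least-squares regressions per segment in the same way.

```python
def optimal_breaks(y, m):
    """Global minimisation of the sum of squared residuals with m breaks,
    via dynamic programming in the spirit of Bai and Perron's algorithm.
    Each segment is fitted with its own mean."""
    n = len(y)
    # ssr[i][j]: SSR of segment y[i..j] around its own mean (O(T^2) table)
    ssr = [[0.0] * n for _ in range(n)]
    for i in range(n):
        s = s2 = 0.0
        for j in range(i, n):
            s += y[j]; s2 += y[j] ** 2
            ssr[i][j] = s2 - s * s / (j - i + 1)
    INF = float("inf")
    # cost[b][j]: best SSR for y[0..j] using exactly b breaks
    cost = [[INF] * n for _ in range(m + 1)]
    back = [[-1] * n for _ in range(m + 1)]
    cost[0] = [ssr[0][j] for j in range(n)]
    for b in range(1, m + 1):
        for j in range(b, n):
            for t in range(b - 1, j):          # last break placed after index t
                c = cost[b - 1][t] + ssr[t + 1][j]
                if c < cost[b][j]:
                    cost[b][j] = c
                    back[b][j] = t
    breaks, j = [], n - 1                      # recover break indices
    for b in range(m, 0, -1):
        j = back[b][j]
        breaks.append(j)
    return sorted(breaks), cost[m][n - 1]
```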

Relevance: 100.00%

Abstract:

Naively perceived, the process of evolution is a succession of duplication events and gradual mutations in the genome that lead to changes in the functions and interactions of the proteome. The family of Ras-like guanosine triphosphate hydrolases (GTPases) constitutes a good working model for understanding this fundamental phenomenon, because this protein family contains a limited number of members that differ in functionality and interactions. Overall, we wish to understand how single mutations in GTPases affect cell morphology, and the extent of their impact on asynchronous populations. My master's work aims to meaningfully classify different phenotypes of the yeast Saccharomyces cerevisiae through the analysis of several morphological criteria of strains expressing mutated and native GTPases. Our approach, based on microscopy and bioinformatic analysis of DIC (differential interference contrast) images, makes it possible to distinguish the phenotypes of native cells from those of mutants. This method enabled the automated detection and characterization of the mutant phenotypes associated with the overexpression of constitutively active GTPases. The constitutively active GTPase mutants Cdc42 Q61L, Rho5 Q91H, Ras1 Q68L and Rsr1 G12V were successfully analyzed. Indeed, the implementation of different clustering algorithms makes it possible to analyze data combining morphological measurements of native and mutant populations. Our results show that the Fuzzy C-Means algorithm performs an effective clustering of native and mutant cells, in which the different cell types are classified according to several cell shape factors obtained from the DIC images. 
This analysis shows that the Cdc42 Q61L, Rho5 Q91H, Ras1 Q68L and Rsr1 G12V mutations induce, respectively, amorphous, elongated, round and large phenotypes that are represented by distinct shape-factor vectors. These distinctions are observed in different proportions (mutant morphology / native morphology) within the mutant populations. The development of new automated methods for the morphological analysis of native and mutant cells proves extremely useful for the study of the GTPase family and of the specific residues that dictate their functions and interaction networks. We can now envisage producing GTPase mutants that invert their function by targeting divergent residues; the functional substitution is then detected at the morphological level thanks to our new quantitative strategy. This type of analysis can also be transposed to other protein families and contribute significantly to the field of evolutionary biology.
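Fuzzy C-Means assigns each cell a graded membership in every cluster rather than a hard label, which suits populations that mix native-like and mutant morphologies. A generic pure-Python sketch of the algorithm, operating on shape-factor vectors, is given below; it illustrates the method and is not the pipeline used in the thesis.

```python
def fuzzy_c_means(points, c=2, m=2.0, iters=50):
    """Minimal fuzzy C-means on feature vectors (e.g. cell shape factors).
    Returns the cluster centres and the membership matrix u[i][j]."""
    n, d = len(points), len(points[0])
    centres = [list(points[j]) for j in range(c)]   # init from first c points
    u = [[0.0] * c for _ in range(n)]
    for _ in range(iters):
        # membership update: inverse-distance weighting with fuzzifier m
        for i, p in enumerate(points):
            dists = [max(1e-12, sum((p[k] - ctr[k]) ** 2 for k in range(d)) ** 0.5)
                     for ctr in centres]
            for j in range(c):
                u[i][j] = 1.0 / sum((dists[j] / dk) ** (2.0 / (m - 1.0))
                                    for dk in dists)
        # centre update: membership-weighted means
        for j in range(c):
            w = [u[i][j] ** m for i in range(n)]
            tot = sum(w)
            centres[j] = [sum(w[i] * points[i][k] for i in range(n)) / tot
                          for k in range(d)]
    return centres, u
```

The fuzzifier m controls how soft the partition is; the membership rows sum to one, so the proportions of mutant versus native morphology in a population fall out of the membership matrix directly.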

Relevance: 100.00%

Abstract:

Location decisions are often subject to dynamic aspects such as changes in customer demand. These can be addressed by allowing flexibility in the location and capacity of the facilities. Even when demand is predictable, finding the optimal schedule for deploying and dynamically adjusting capacities remains a challenge. In this thesis, we focus on multi-period facility location problems that allow dynamic capacity adjustment, in particular those with complex cost structures. We study these problems from different operations research perspectives, presenting and comparing several mixed-integer programming (MIP) models, evaluating their use in practice, and developing efficient solution algorithms. The thesis is divided into four parts. First, we present the industrial context that motivated our work: a forestry company that needs to locate camps to accommodate forest workers. We present a MIP model that allows the construction of new camps and the expansion, relocation and temporary partial closure of existing camps. The model uses particular capacity constraints as well as a multi-level economy-of-scale cost structure. The usefulness of the model is assessed through two case studies. The second part introduces the dynamic facility location problem with generalized modular capacities. The model generalizes several dynamic facility location problems and provides tighter linear relaxation bounds than their specialized formulations. 
The model can solve location problems in which the costs of capacity changes are defined for all pairs of capacity levels, as is the case in the industrial problem mentioned above. It is applied to three special cases: capacity expansion and reduction, temporary facility closure, and the combination of the two. We prove dominance relations between our formulation and existing models for the special cases. Computational experiments on a large number of randomly generated instances with up to 100 facilities and 1000 customers show that our model can obtain optimal solutions faster than the existing specialized formulations. Given the complexity of the preceding models for large instances, the third part of the thesis proposes Lagrangian heuristics. Based on subgradient and bundle methods, they find good-quality solutions even for large instances with up to 250 facilities and 1000 customers. We then improve the quality of the obtained solution by solving a restricted MIP model that exploits the information collected while solving the Lagrangian dual. The computational results show that the heuristics quickly provide good-quality solutions, even for instances where generic solvers find no feasible solution. Finally, we adapt the heuristics to solve the industrial problem. Two different relaxations are proposed and compared, and extensions of the preceding concepts are presented to ensure reliable solution within a reasonable time.
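At the core of all the multi-period models above sits the classic trade-off between fixed opening costs and assignment costs. The toy sketch below solves a static uncapacitated facility location instance by enumeration, purely to illustrate that trade-off; the thesis's MIP models and Lagrangian heuristics address far richer dynamic, modular-capacity versions that enumeration cannot handle.

```python
from itertools import combinations

def solve_ufl(fixed_cost, assign_cost):
    """Brute-force uncapacitated facility location.

    fixed_cost[i]: cost of opening facility i.
    assign_cost[c][i]: cost of serving client c from facility i.
    Returns (best total cost, tuple of open facilities).
    """
    m = len(fixed_cost)
    best = (float("inf"), None)
    for r in range(1, m + 1):
        for subset in combinations(range(m), r):
            cost = sum(fixed_cost[i] for i in subset)
            # each client is served by its cheapest open facility
            for client_costs in assign_cost:
                cost += min(client_costs[i] for i in subset)
            if cost < best[0]:
                best = (cost, subset)
    return best
```

Enumeration is exponential in the number of candidate facilities, which is precisely why the thesis turns to MIP formulations and Lagrangian relaxation for instances with hundreds of facilities.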