866 results for Core Sets
Abstract:
Core-shell macromolecules with dendritic polyphenylene core and polymer shell. Core-shell macromolecular structures have become of great interest in materials science because they offer an opportunity to combine a large variety of chemical and physical properties in a single molecule by combining different (in terms of chemistry and physics) cores and shells. The interest in such complex structures was provoked by their potential applications in the coating and painting industry (latexes), as supports for catalysts in the polymer industry, or as nano-containers and transporters for gene or drug delivery. The aim of this study was the synthesis, characterization and further application of core-shell macromolecules possessing a hydrophobic stiff core (polyphenylene dendrimers) surrounded by a hydrophilic, soft, covalently bonded polymer shell (poly(ethylene oxide) and its copolymers). The requirements for such complex substances were that they should be well defined in terms of both molecular weight (narrow molecular weight distribution) and molecular structure. The preparation of core-shell molecules containing a dendrimer as the core was possible via two synthetic routes: “grafting-onto” and “grafting-from”. The resulting core-shell macromolecules possessed narrow polydispersity, as guaranteed by the excellent structural and functional definition of the dendrimer and the narrow polydispersity of the PEO, PS-b-PEO and PI-b-PEO attached to the dendrimer surface. Additional investigation of the particle size indicated a relation between both the length and the number of the polymer chains and the hydrodynamic radius determined by Dynamic Light Scattering and Fluorescence Correlation Spectroscopy. The core-shell nanoparticles were applied as metallocene supports in heterogeneous olefin polymerizations. Our results indicate that such catalyst systems, which are at least one order of magnitude smaller than the organic supports used to date, could be very useful as model compounds for investigations of catalyst fragmentation and its influence on the product parameters.
Abstract:
Coupled-cluster (CC) theory is one of the most successful methods in present-day quantum chemistry for the accurate description of molecules. The results presented in this work show that, beyond energies, a range of properties such as structural parameters, vibrational frequencies and rotational-vibrational parameters of small and medium-sized molecules can be predicted reliably and precisely. In the first part of the work, the spin-adapted coupled-cluster ansatz (SA-CC) is introduced as a new route to an improved description of open-shell systems. Here, the CC spin equations are solved in addition in order to determine the unknown wave-function parameters, which guarantees that the resulting wave function is a spin eigenfunction. The implementation of the spin-adapted CC ansatz including single and double excitations (CCSD) for high-spin triplet systems is described in detail. In the second part, CC additivity schemes are presented that rest on the assumption that electron-correlation and basis-set effects are additive. The established practice of computing the various contributions to the energy separately, each with a basis set matched to its computational cost, and summing them up is transferred here to gradients and force constants. To describe bond lengths and harmonic vibrational frequencies with experimental accuracy, inner-shell correlation effects as well as triple and quadruple excitations in the cluster operator of the wave function have to be taken into account. The basis-set convergence is additionally accelerated with extrapolation methods. The quantitative prediction of the bond lengths of 17 small molecules built from atoms of the first long period is thus possible with an accuracy of a few hundredths of a picometer. For the vibrational frequencies of these molecules, the CC additivity scheme based on the computed force constants shows a mean absolute error of 3.5 cm-1 and a standard deviation of 2.2 cm-1 with respect to experimental results. In addition, computed spectroscopic data for several larger molecules are provided in support of experimental investigations. The studies presented in this work on the isomerization of the dihalogen sulfanes XSSX (X = F, Cl), and the calculation of structural and rotational-vibrational parameters for the molecules CHCl2F and CHClF2, show that the perturbative CCSD(T) approximation already yields qualitatively good predictions of experimental results. Furthermore, discrepancies between experimental and computed bond distances for the boron hydride and carbenylium molecules are resolved by taking the electronic contribution to the moment of inertia into account.
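As an illustration of such an additivity scheme (a generic composite-scheme sketch, not a formula quoted from this work), the individual contributions are computed separately with basis sets matched to their cost and then summed, e.g.

\[
E \;\approx\; E_{\text{CCSD(T)}}^{\text{CBS}} \;+\; \Delta E_{\text{core}} \;+\; \Delta E_{\text{T}} \;+\; \Delta E_{\text{Q}},
\]

where $E_{\text{CCSD(T)}}^{\text{CBS}}$ is the frozen-core CCSD(T) energy extrapolated to the complete-basis-set limit, $\Delta E_{\text{core}}$ accounts for inner-shell correlation, and $\Delta E_{\text{T}}$ and $\Delta E_{\text{Q}}$ are full triple- and quadruple-excitation corrections evaluated in smaller basis sets; the same additivity assumption is applied to gradients and force constants.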
Abstract:
The aim of this work was to shed light on the egress mechanism of the hepatitis B virus. It is still unknown how the viral nucleocapsid is enveloped and how the mature virion is released from the liver cell. In several RNA viruses, for example HIV-1, Ebola or RSV, so-called late domains in the viral capsid or matrix protein mediate budding of the viruses at intracellular membranes or at the plasma membrane. Since the HBV core protein carries similar sequences, the present work examined what role these play in the viral replication cycle. My results show that the two proline-rich sequences PPAY (129-132) and PPNAP (134-138), which resemble retroviral late domains, are essential for HBV morphogenesis. Mutations of single amino acids within these motifs lead to phenotypes with altered capsid, nucleocapsid and virus formation capacity. In particular, the amino acid tyrosine 132 of the PPAY motif and the proline residues 134 and 135 of the PPNAP motif are required, since these are already indispensable for capsid formation. Also characteristic of both motifs are the interactions shown here with specific host-cell factors whose physiological function is to channel cellular proteins into the endosomal sorting process. Central here are the E3 ubiquitin ligase Nedd4, which conjugates proteins with ubiquitin (Ub) and thereby marks them for sorting into the endosomal system, and Tsg101, which, as a central component of the ESCRT-I complex, is responsible for recognizing ubiquitinated proteins and thereby introduces them into the ESCRT cascade of the multivesicular endosome. For these interactions, the PPAY motif, and again specifically tyrosine 132 of the HBV core protein, is required for the interaction with Nedd4. In contrast, the L-domain-like sequence PPNAP mediates the association of core with Tsg101, with the two proline residues 134 and 135 as well as asparagine 136 being essential for this interaction. Both Nedd4 and Tsg101 act in conjunction with ubiquitin, which is why ubiquitination of core is likely despite so far negative detection attempts. This assumption is also supported by my finding that the lysine residue at position 96 of the core protein, as a potential Ub acceptor, plays an essential role precisely in late steps. It also remains to be clarified whether core interacts directly with Tsg101 and Nedd4 or whether other factors are interposed. The physiological relevance of the Tsg101/core and Nedd4/core interactions could also be investigated further with the help of siRNA-mediated depletion experiments. In addition, my work shows that core associates with intracellular membranes, so it would be interesting to investigate whether these are membranes of the endosomal system, at which the final steps of virus morphogenesis could take place.
Abstract:
The catalytic core of DNA polymerase III, composed of the three subunits α, ε and θ, is the minimal complex responsible for the replication of chromosomal DNA in Escherichia coli. In the holoenzyme, α and ε possess a 5'-3' polymerase activity and a 3'-5' exonuclease activity, respectively, while θ has no enzymatic function. The present study focused on the regions of the core that interact directly with ε, namely θ (which interacts with the N-terminal end of ε) and the PHP domain of α (which interacts with the C-terminal end of ε), whose roles have not yet been identified. In order to assign them a function, three parallel lines of research were pursued. First, the role of θ was studied using ex vivo and in vivo approaches. The results presented in this study show that θ significantly increases the stability of the intrinsically labile ε subunit. During the experiments, a new dimeric form of ε was also identified. Although the function of the dimer is not defined, it was shown to be actively dissociated by θ, which could therefore act as its regulator. Moreover, the first growth-associated phenotype of θ was found and characterized. As for the PHP domain, it was shown to possess a pyrophosphatase activity, using a new assay designed to follow the kinetics of reactions catalyzed by enzymes that release phosphate or pyrophosphate. The hydrolysis of pyrophosphate catalyzed by the PHP domain was shown to sustain the polymerase activity of α in vitro, which suggests a possible role in vivo during DNA replication. Finally, a new procedure for the co-expression and purification of the α-ε-θ complex was developed.
Abstract:
The aim of this thesis was to design, synthesize and develop a nanoparticle-based system to be used as a chemosensor or as a label in bioanalytical applications. A versatile, fluorescent, functionalizable nanoarchitecture has been effectively produced, based on the hydrolysis and condensation of TEOS in direct micelles of Pluronic® F 127, obtaining highly monodisperse silica-core/PEG-shell nanoparticles with a diameter of about 20 nm. Surface-functionalized nanoparticles have been obtained in a one-pot procedure by chemical modification of the hydroxyl terminal groups of the surfactant. To make them fluorescent, a whole library of triethoxysilane fluorophores (mainly BODIPY based), encompassing the whole visible spectrum, has been synthesized: this derivatization allows a high degree of doping, but the close proximity of the molecules inside the silica matrix leads to self-quenching processes at high doping levels, with a concomitant drop in the fluorescence signal intensity. In order to bypass this parasitic phenomenon, multichromophoric systems have been prepared in which highly efficient FRET processes occur, showing that this energy pathway is faster than self-quenching and recovers the fluorescence signal. The FRET efficiency remains very high even in four-dye nanoparticles, increasing the pseudo-Stokes shift of the system, an attractive feature for multiplexing analysis. These optimized nanoparticles have been successfully exploited in molecular imaging applications such as in vitro, in vivo and ex vivo imaging, proving themselves superior to conventional molecular fluorophores as signaling units.
Abstract:
This study focuses on the processes of change that firms undertake to overcome conditions of organizational rigidity and develop new dynamic capabilities, thanks to the contribution of external knowledge. When external contingencies highlight firms’ core rigidities, external actors can intervene in change projects, providing new competences to firms’ managers. Knowledge transfer and organizational learning processes can then lead to the development of new dynamic capabilities. The existing literature does not completely explain how these processes develop and how external knowledge providers, such as management consultants, influence them. The dynamic capabilities literature has grown very rich in recent years; however, the models that explain how dynamic capabilities evolve remain under-investigated. Adopting a qualitative approach, this research proposes four relevant case studies in which external actors introduce new knowledge within organizations, activating processes of change. Each case study consists of a management consulting project. Data were collected through in-depth interviews with consultants and managers, and a large body of documents supports the evidence from the interviews. A narrative approach is adopted to account for the change processes, and a synthetic approach is proposed to compare the case studies along relevant dimensions. This study presents a model of capabilities evolution, supported by empirical evidence, to explain how external knowledge intervenes in capabilities evolution processes: first, external actors close gaps between environmental demands and firms’ capabilities by changing organizational structures and routines; second, knowledge transfer between consultants and managers leads to the creation of new ordinary capabilities; third, managers can develop new dynamic capabilities through a deliberate learning process that internalizes new tacit knowledge from the consultants. After the end of the consulting project, two elements can influence the deliberate learning process: new external contingencies and changes in the perceptions about the external actors.
Abstract:
Spinal cord injury (SCI) results not only in paralysis but is also associated with a range of autonomic dysregulation that can interfere with cardiovascular, bladder, bowel, temperature, and sexual function. The extent of the autonomic dysfunction is related to the level and severity of injury to descending autonomic (sympathetic) pathways. For many years there was limited awareness of these issues, and the attention given to them by the scientific and medical community was scarce. Although a new system to document the impact of SCI on autonomic function has recently been proposed, the current standard of assessment of SCI (the American Spinal Injury Association (ASIA) examination) evaluates motor and sensory pathways but not the severity of injury to autonomic pathways. Besides the severe impact on quality of life, autonomic dysfunction in persons with SCI is associated with increased risk of cardiovascular disease and mortality. Therefore, obtaining information regarding autonomic function in persons with SCI is pivotal, and clinical examinations and laboratory evaluations to detect the presence of autonomic dysfunction and quantify its severity are mandatory. Furthermore, previous studies demonstrated that there is an intimate relationship between the autonomic nervous system and sleep from anatomical, physiological, and neurochemical points of view. Moreover, although previous epidemiological studies demonstrated that sleep problems are common in SCI, only limited polysomnographic (PSG) data are available so far. Finally, until now, circadian and state-dependent autonomic regulation of blood pressure (BP), heart rate (HR) and body core temperature (BcT) had never been assessed in SCI patients. The aim of the current study was to establish the association between the autonomic control of cardiovascular function and thermoregulation, sleep parameters and increased cardiovascular risk in SCI patients.
Abstract:
Although functional magnetic resonance imaging (fMRI) of interictal spikes with simultaneous EEG recording has been studied for several years as a means of localizing the brain structures involved in patients with focal seizure disorders, it remains an experimental method. To obtain reliable results, improving the signal-to-noise ratio in the statistical analysis of the image data is of particular importance. Earlier studies on so-called event-related fMRI point to a relationship between the frequency of single stimuli and the subsequent hemodynamic signal response in fMRI. To demonstrate a possible influence of the frequency of interictal spikes on the signal response, 20 children with focal epilepsy were examined with EEG-fMRI. The data of 11 of these patients could be evaluated. In a two-fold analysis with the software package SPM99, the image data were first assigned to the “stimulus” or “rest” condition solely according to the occurrence of interictal spikes, independent of the number of spikes per measurement time point (on/off analysis). In a second step, the “stimulus” conditions were additionally analyzed differentiated by the number of individual spikes (frequency-correlated analysis). The results of these analyses showed, in 5 of the 11 patients, an increase in the sensitivity and significance of the activations demonstrated in the fMRI. A higher specificity, however, could not be shown. These results point to a positive correlation between stimulus frequency and the subsequent hemodynamic response also for interictal spikes, which can be exploited for EEG-fMRI. In 6 patients no fMRI activation could be demonstrated. Possible technical and physiological causes for this are discussed.
Abstract:
The efficient emulation of a many-core architecture is a challenging task: each core could be emulated through a dedicated thread, and such threads would be interleaved on either a single-core or a multi-core processor, but the high number of context switches would result in unacceptable performance. To support this kind of application, the GPU's computational power is exploited in order to schedule the emulation threads on the GPU cores. This presents a non-trivial divergence issue, since GPU computational power is offered through SIMD processing elements that are forced to synchronously execute the same instruction on different memory portions. Thus, a new emulation technique is introduced to overcome this limitation: instead of providing a routine for each ISA opcode, the emulator mimics the behavior of the microarchitecture level, where instructions are data that a single routine takes as input. Our new technique has been implemented and compared with the classic emulation approach, in order to investigate the feasibility of a hybrid solution.
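A minimal sketch in C of the "instructions as data" organization (illustrative only; the opcode set, data layout and names are assumptions, not the emulator's actual code). It shows a single generic routine consuming instructions as plain data, rather than one routine per ISA opcode; the thesis applies this idea at the microarchitecture level to reduce SIMD divergence on the GPU.

```c
/* Minimal sketch (not the thesis code): one generic fetch-decode-execute
 * routine that treats instructions as data. All names are illustrative. */
#include <stdint.h>
#include <stdio.h>

enum { OP_ADD, OP_SUB, OP_LOAD_IMM, OP_HALT };

typedef struct { uint8_t op, dst, src; int32_t imm; } Insn;
typedef struct { int32_t reg[8]; uint32_t pc; int halted; } CoreState;

/* One generic step routine shared by every emulated core: the instruction
 * is just data, decoded and executed by a single switch. */
static void step(CoreState *c, const Insn *prog) {
    const Insn *i = &prog[c->pc++];
    int32_t a = c->reg[i->dst], b = c->reg[i->src];
    switch (i->op) {
        case OP_ADD:      c->reg[i->dst] = a + b;  break;
        case OP_SUB:      c->reg[i->dst] = a - b;  break;
        case OP_LOAD_IMM: c->reg[i->dst] = i->imm; break;
        case OP_HALT:     c->halted = 1;           break;
    }
}

int main(void) {
    Insn prog[] = {
        { OP_LOAD_IMM, 0, 0, 40 },
        { OP_LOAD_IMM, 1, 0,  2 },
        { OP_ADD,      0, 1,  0 },
        { OP_HALT,     0, 0,  0 },
    };
    CoreState core = {0};
    while (!core.halted) step(&core, prog);
    printf("r0 = %d\n", core.reg[0]); /* prints 42 */
    return 0;
}
```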
Abstract:
During the last years great effort has been devoted to the fabrication of superhydrophobic surfaces because of their self-cleaning properties. A water drop on a superhydrophobic surface rolls off even at inclinations of only a few degrees while taking up contaminants encountered on its way. Superhydrophobic, self-cleaning coatings are desirable for convenient and cost-effective maintenance of a variety of surfaces. Ideally, such coatings should be easy to make and apply, mechanically resistant, and long-term stable. None of the existing methods has yet mastered the challenge of meeting all of these criteria. Superhydrophobicity is associated with surface roughness. The lotus leaf, with its dual-scale roughness, is one of the most efficient examples of a superhydrophobic surface. This thesis proposes a novel technique to prepare superhydrophobic surfaces that introduces the two length scales of roughness by growing silica particles (~100 nm in diameter) onto micrometer-sized polystyrene particles using the well-established Stöber synthesis. Mechanical resistance is conferred to the resulting “raspberries” by the synthesis of a thin silica shell on their surface. Besides being easy to make and handle, these particles offer possibilities for improving suitability for technical applications: since they disperse in water, multilayers can be prepared on substrates by simple drop casting, even on surfaces with grooves and slots. The solution to the main problem – stabilizing the multilayer – also lies in the design of the particles: the shells, although mechanically stable, are porous enough to allow leakage of polystyrene from the core. Under tetrahydrofuran vapor, polystyrene bridges form between the particles that render the multilayer film stable. Multilayers are good candidates for designing surfaces whose roughness is preserved after a scratch: if the top-most layer is removed, the roughness can still be ensured by the underlying layer. After hydrophobization by chemical vapor deposition (CVD) of a semi-fluorinated silane, the surfaces are superhydrophobic with a tilting angle of a few degrees.
Abstract:
Modern embedded systems embrace many-core shared-memory designs. Due to constrained power and area budgets, most of them feature software-managed scratchpad memories instead of data caches to increase data locality. It is therefore the programmer's responsibility to explicitly manage the memory transfers, and this makes programming these platforms cumbersome. Moreover, complex modern applications must be adequately parallelized before they can turn the parallel potential of the platform into actual performance. To support this, programming languages were proposed that work at a high level of abstraction and rely on a runtime whose cost hinders performance, especially in embedded systems, where resources and power budgets are constrained. This dissertation explores the applicability of the shared-memory paradigm on modern many-core systems, focusing on ease of programming. It focuses on OpenMP, the de-facto standard for shared-memory programming. In the first part, the cost of algorithms for synchronization and data partitioning is analyzed, and the algorithms are adapted to modern embedded many-cores. Then, the original design of an OpenMP runtime library is presented, which supports complex forms of parallelism such as multi-level and irregular parallelism. In the second part of the thesis, the focus is on heterogeneous systems, where hardware accelerators are coupled to (many-)cores to implement key functional kernels with orders of magnitude of speedup and energy efficiency compared to the “pure software” version. However, three main issues arise, namely i) platform design complexity, ii) architectural scalability and iii) programmability. To tackle them, a template for a generic hardware processing unit (HWPU) is proposed, which shares the memory banks with the cores, and a template for a scalable architecture is shown, which integrates the HWPUs through the shared-memory system. Then, a full software stack and toolchain are developed to support platform design and to let programmers exploit the accelerators of the platform. The OpenMP frontend is extended to interact with them.
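As a point of reference for the shared-memory programming model discussed above (an illustrative sketch, not code from this dissertation), an OpenMP work-sharing loop lets the runtime perform the data partitioning and synchronization that the programmer would otherwise manage by hand:

```c
/* Illustrative OpenMP sketch: the runtime partitions the iteration space
 * across the cores and combines the partial sums via a reduction, which is
 * the kind of support an embedded OpenMP runtime must provide cheaply.
 * Compile with: cc -fopenmp example.c */
#include <omp.h>
#include <stdio.h>

#define N 1024

int main(void) {
    static float a[N], b[N];
    float dot = 0.0f;

    for (int i = 0; i < N; i++) { a[i] = 1.0f; b[i] = 2.0f; }

    /* Work-sharing loop with a reduction: chunks of the arrays are assigned
     * to the threads of the parallel team, partial sums are then combined. */
    #pragma omp parallel for reduction(+:dot)
    for (int i = 0; i < N; i++)
        dot += a[i] * b[i];

    printf("dot = %f (threads available: %d)\n", dot, omp_get_max_threads());
    return 0;
}
```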
Abstract:
The aim of this work was to study the dense cloud structures and to obtain the mass distribution of the dense cores (the core mass function, CMF) within the NGC6357 complex, from observations of the dust continuum at 450 and 850~$\mu$m of a 30 $\times$ 30 arcmin$^2$ region containing the H\textsc{ii} regions G353.2+0.9 and G353.1+0.6.
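For context, the core mass function is conventionally parameterized as a power law in mass (a standard definition, not a result quoted from this abstract):

\[
\frac{dN}{dM} \;\propto\; M^{-\alpha},
\]

where $dN$ is the number of cores with masses in the interval $[M, M+dM]$ and the slope $\alpha$ is the quantity typically compared with the stellar initial mass function.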
Abstract:
Despite the several issues faced in the past, the evolutionary trend of silicon has kept its constant pace. Today an ever increasing number of cores is integrated onto the same die. Unfortunately, the extraordinary performance achievable by the many-core paradigm is limited by several factors. Memory bandwidth limitations, combined with inefficient synchronization mechanisms, can severely curtail the potential computation capabilities. Moreover, the huge HW/SW design space requires accurate and flexible tools to perform architectural explorations and validation of design choices. In this thesis we focus on the aforementioned aspects: a flexible and accurate Virtual Platform has been developed, targeting a reference many-core architecture. This tool has been used to perform architectural explorations, focusing on the instruction caching architecture and on hybrid HW/SW synchronization mechanisms. Besides architectural implications, another issue of embedded systems is considered: energy efficiency. Near-Threshold Computing (NTC) is a key research area in the Ultra-Low-Power domain, as it promises a tenfold improvement in energy efficiency compared to super-threshold operation and it mitigates thermal bottlenecks. The physical implications of modern deep sub-micron technology severely limit the performance and reliability of modern designs. Reliability becomes a major obstacle when operating in NTC; in particular, memory operation becomes unreliable and can compromise system correctness. In the present work a novel hybrid memory architecture is devised to overcome these reliability issues and, at the same time, improve energy efficiency by means of aggressive voltage scaling when allowed by workload requirements. Variability is another great drawback of near-threshold operation. The greatly increased sensitivity to threshold voltage variations is today a major concern for electronic devices. We introduce a variation-tolerant extension of the baseline many-core architecture. By means of micro-architectural knobs and a lightweight runtime control unit, the baseline architecture becomes dynamically tolerant to variations.
Abstract:
This thesis deals with heterogeneous architectures in standard workstations. Heterogeneous architectures represent an appealing alternative to traditional supercomputers because they are based on commodity components fabricated in large quantities; hence their price-performance ratio is unparalleled in the world of high performance computing (HPC). In particular, different aspects related to the performance and power consumption of heterogeneous architectures have been explored. The thesis initially focuses on an efficient implementation of a parallel application whose execution time is dominated by a high number of floating point instructions. Then the thesis touches the central problem of efficient management of power peaks in heterogeneous computing systems. Finally it discusses a memory-bound problem, where the execution time is dominated by memory latency. Specifically, the following main contributions have been carried out. First, a novel framework for the design and analysis of solar fields for Central Receiver Systems (CRS) has been developed; the implementation, based on a desktop workstation equipped with multiple Graphics Processing Units (GPUs), is motivated by the need for an accurate and fast simulation environment for studying mirror imperfections and non-planar geometries. Secondly, a power-aware scheduling algorithm on heterogeneous CPU-GPU architectures, based on an efficient distribution of the computing workload to the resources, has been realized. The scheduler manages the resources of several computing nodes with a view to reducing the peak power. This work makes two main contributions: the approach reduces the supply cost due to high peak power while having negligible impact on the parallelism of the computational nodes; from another point of view, the developed model allows designers to increase the number of cores without increasing the capacity of the power supply unit. Finally, an implementation for efficient graph exploration on reconfigurable architectures is presented. The purpose is to accelerate graph exploration, reducing the number of random memory accesses.
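A hypothetical sketch of the kind of peak-power-aware workload distribution described above (the power model, the greedy policy and all names are illustrative assumptions, not the thesis scheduler):

```c
/* Toy heuristic: assign each job to the node with the lowest accumulated
 * power draw, which tends to even out per-node power and thus the peak
 * drawn from the supply. Numbers are made up for illustration. */
#include <stdio.h>

#define NODES 4
#define JOBS  10

int main(void) {
    double node_power[NODES] = {0};
    double job_power[JOBS] = {90, 75, 120, 60, 110, 80, 95, 70, 105, 65};

    for (int j = 0; j < JOBS; j++) {
        int best = 0;
        for (int n = 1; n < NODES; n++)   /* pick the least-loaded node */
            if (node_power[n] < node_power[best]) best = n;
        node_power[best] += job_power[j];
        printf("job %d -> node %d\n", j, best);
    }

    double peak = 0;
    for (int n = 0; n < NODES; n++)
        if (node_power[n] > peak) peak = node_power[n];
    printf("resulting peak per-node power: %.1f W\n", peak);
    return 0;
}
```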
Abstract:
Decomposition-based approaches are recalled from both the primal and the dual point of view. The possibility of building partially disaggregated reduced master problems is investigated. This extends the idea of aggregated-versus-disaggregated formulations to a gradual choice among alternative levels of aggregation. Partial aggregation is applied to the linear multicommodity minimum cost flow problem. The possibility of having only partially aggregated bundles opens a wide range of alternatives with different trade-offs between the number of iterations and the computation required to solve it. This trade-off is explored for several sets of instances, and the results are compared with those obtained by directly solving the natural node-arc formulation. An iterative solution process for the route assignment problem is proposed, based on the well-known Frank-Wolfe algorithm. In order to provide a first feasible solution to the Frank-Wolfe algorithm, a linear multicommodity min-cost flow problem is solved to optimality by using the decomposition techniques mentioned above. Solutions of this problem are useful for network orientation and design, especially in relation to public transportation systems such as Personal Rapid Transit. A single-commodity robust network design problem is then addressed: an undirected graph with edge costs is given together with a discrete set of balance matrices representing different supply/demand scenarios, and the goal is to determine the minimum-cost installation of capacities on the edges such that the flow exchange is feasible for every scenario. A set of new instances that are computationally hard for the natural flow formulation is solved by means of a new heuristic algorithm. Finally, an efficient decomposition-based heuristic approach for a large-scale stochastic unit commitment problem is presented. The addressed real-world stochastic problem employs at its core a deterministic unit commitment planning model developed by the California Independent System Operator (ISO).
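For reference, the node-arc formulation of the linear multicommodity minimum cost flow problem mentioned above can be written in the standard textbook form (notation assumed here, not taken from the thesis):

\[
\min \sum_{k \in K}\sum_{(i,j) \in A} c_{ij}\, x^{k}_{ij}
\quad \text{s.t.}\quad
\sum_{(i,j)\in A} x^{k}_{ij} - \sum_{(j,i)\in A} x^{k}_{ji} = b^{k}_{i} \;\; \forall i \in N,\; k \in K,
\qquad
\sum_{k \in K} x^{k}_{ij} \le u_{ij} \;\; \forall (i,j) \in A,
\qquad x \ge 0,
\]

where $x^{k}_{ij}$ is the flow of commodity $k$ on arc $(i,j)$, $b^{k}_{i}$ are the node balances of each commodity, and $u_{ij}$ are the mutual arc capacities. It is the mutual capacity constraints that couple the commodities, and they are the constraints on which decomposition and partial aggregation act.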