968 results for Multi-stage programming
Abstract:
Most problems arising in real life involve uncertainty. The first part of the dissertation introduces the basic concepts and properties of Stochastic Programming, also known as Optimization under Uncertainty. Since stochastic programs are computationally demanding, we also present simpler related models: the wait-and-see model, the expected value model, and the expected result of using the expected value solution. Two measures, the expected value of perfect information (EVPI) and the value of the stochastic solution (VSS), quantify the benefit of Stochastic Programming with respect to these simpler models. In the second part, an application that optimizes the distribution of non-perishable products while guaranteeing certain nutritional requirements at minimum cost has been designed and implemented with the modelling system GAMS and the solver CPLEX. It was developed within the Hazia project, managed by the Sortarazi association in collaboration with the Food Bank of Biscay and the Basic Social Services of several districts of Biscay.
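For reference, the two measures mentioned above have compact standard definitions (in the notation of Birge and Louveaux, with WS the wait-and-see value, RP the optimal value of the recourse problem, and EEV the expected result of using the expected value solution, for a minimization problem):

```latex
% Standard relations for a minimization problem:
% WS  = expected objective under perfect foresight (wait-and-see),
% RP  = optimal objective of the stochastic (recourse) program,
% EEV = expected result of using the expected-value solution.
\[
\mathrm{EVPI} = \mathrm{RP} - \mathrm{WS} \ge 0,
\qquad
\mathrm{VSS} = \mathrm{EEV} - \mathrm{RP} \ge 0,
\qquad
\mathrm{WS} \le \mathrm{RP} \le \mathrm{EEV}.
\]
```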
Abstract:
Macroeconomic policy makers are typically concerned with several indicators of economic performance. We thus propose to tackle the design of macroeconomic policy using Multicriteria Decision Making (MCDM) techniques. More specifically, we employ Multiobjective Programming (MP) to seek so-called efficient policies. The MP approach is combined with a computable general equilibrium (CGE) model, chosen because such models have the dual advantage of being consistent with standard economic theory while allowing one to measure the effects of a specific policy with real data. Applying the proposed methodology to Spain (via the 1995 Social Accounting Matrix), we first quantified the trade-offs between two specific policy objectives, growth and inflation, when designing fiscal policy. We then constructed a frontier of efficient policies involving real growth and inflation. In doing so, we found that Spanish policy in 1995 displayed some degree of inefficiency with respect to these two objectives. We then offer two sets of policy recommendations that, ostensibly, could have helped Spain at the time. The first deals with efficiency regardless of the relative importance policy makers attach to growth and inflation (we label these general policy recommendations). The second depends on which objective policy makers deem more important: increasing growth or controlling inflation (we label these objective-specific recommendations).
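As a sketch of what "efficient" means here (a standard Pareto definition, not notation from the paper): with growth g(x) to be maximized and inflation π(x) to be minimized over feasible policies x in X,

```latex
% Pareto efficiency for the two-objective policy problem:
% maximize growth g(x), minimize inflation \pi(x), over feasible policies X.
\[
x^{*}\in X \text{ is efficient} \iff
\nexists\, x\in X:\;
g(x)\ge g(x^{*}),\;\;
\pi(x)\le \pi(x^{*}),\;
\text{with at least one inequality strict.}
\]
```

The efficient frontier reported in the abstract is the image of all such x* in the (growth, inflation) plane.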
Abstract:
We report a two-stage diode-pumped Er-doped fiber amplifier operating at a wavelength of 1550 nm and repetition rates of 10-100 kHz with an average output power of up to 10 W. The first stage, comprising Er-doped fiber, was core-pumped at 1480 nm, whereas the second stage, comprising double-clad Er/Yb-doped fiber, was clad-pumped at 975 nm. The estimated peak power for the 0.4-nm full-width-at-half-maximum laser emission at 1550 nm exceeded the 4-kW level. The initial 100-ns seed diode laser pulse was compressed to 3.5 ns as a result of the 34-dB total amplification. The observed 30-fold pulse compression reveals a promising new nonlinear optical technique for the generation of high-power short pulses for applications in eye-safe ranging and micromachining.
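As a rough plausibility check (not from the paper), the pulse energy and a naive peak-power upper bound follow from the quoted average power, repetition rate, and compressed pulse width; the paper's more conservative 4-kW figure refers only to the narrowband 0.4-nm emission, which carries a fraction of the total energy:

```python
# Back-of-the-envelope check of the amplifier figures (illustrative only).
P_avg = 10.0       # average output power, W
f_rep = 100e3      # repetition rate, Hz (upper end of the 10-100 kHz range)
tau   = 3.5e-9     # compressed pulse duration, s

E_pulse = P_avg / f_rep          # energy per pulse, J
P_peak_naive = E_pulse / tau     # naive peak power if all energy were in the pulse

print(f"pulse energy : {E_pulse*1e6:.0f} uJ")        # 100 uJ
print(f"naive peak   : {P_peak_naive/1e3:.1f} kW")   # ~28.6 kW upper bound
# The quoted >4 kW applies to the 0.4-nm-wide laser line alone,
# i.e. only part of the pulse energy sits in that spectral band.
```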
Abstract:
Process systems design, operation and synthesis problems under uncertainty can readily be formulated as two-stage stochastic mixed-integer linear and nonlinear (nonconvex) programming (MILP and MINLP) problems. With a scenario-based formulation, these problems lead to large-scale MILPs/MINLPs that are well structured. The first part of the thesis proposes a new finitely convergent cross decomposition method (CD), in which Benders decomposition (BD) and Dantzig-Wolfe decomposition (DWD) are combined in a unified framework to improve the solution of scenario-based two-stage stochastic MILPs. The method alternates between DWD iterations and BD iterations, where DWD restricted master problems and BD primal problems yield a sequence of upper bounds, and BD relaxed master problems yield a sequence of lower bounds. A variant of CD, called multicolumn-multicut CD, which adds multiple columns per iteration of the DWD restricted master problem and multiple cuts per iteration of the BD relaxed master problem, is then developed to improve solution times. Finally, an extended cross decomposition method (ECD) for solving two-stage stochastic programs with risk constraints is proposed. In this approach, CD at the first level and DWD at the second level are used to solve the original problem to optimality. ECD has a computational advantage over a bilevel decomposition strategy or over solving the monolithic problem with an MILP solver. The second part of the thesis develops a joint decomposition approach combining Lagrangian decomposition (LD) and generalized Benders decomposition (GBD) to efficiently solve stochastic mixed-integer nonlinear nonconvex programming problems to global optimality without explicit branch-and-bound search. In this approach, LD subproblems and GBD subproblems are systematically solved in a single framework. The relaxed master problem, obtained from a reformulation of the original problem, is solved only when necessary. A convexification of the relaxed master problem and a domain-reduction procedure are integrated into the decomposition framework to improve solution efficiency. Case studies taken from renewable-resource and fossil-fuel-based applications in process systems engineering show that these novel decomposition approaches offer significant benefits over classical decomposition methods and state-of-the-art MILP/MINLP global optimization solvers.
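The BD half of the alternation can be illustrated on a toy two-stage stochastic LP (a minimal sketch of the classical single-cut L-shaped method, not the thesis's CD implementation): first-stage cost x, two demand scenarios, and recourse cost 2·max(0, d_s − x).

```python
# Minimal single-cut Benders (L-shaped) sketch on a toy two-stage stochastic LP:
#   min  x + E_s[ 2*max(0, d_s - x) ],   0 <= x <= 10,
# with scenarios d = 2 and d = 4, each with probability 0.5.
from scipy.optimize import linprog

scenarios = [(0.5, 2.0), (0.5, 4.0)]   # (probability, demand)
cuts = []                              # each cut: theta >= a - b*x
UB, LB = float("inf"), -float("inf")

while UB - LB > 1e-6:
    # Master over (x, theta): min x + theta, s.t. accumulated optimality cuts.
    A = [[-b, -1.0] for (a, b) in cuts]          # encodes b*x + theta >= a
    rhs = [-a for (a, b) in cuts]
    m = linprog([1.0, 1.0], A_ub=A or None, b_ub=rhs or None,
                bounds=[(0, 10), (0, None)], method="highs")
    x, LB = m.x[0], m.fun                        # master value is a lower bound

    # Subproblems have closed form: value 2*max(0, d-x), dual lam in {0, 2}.
    Q = sum(p * 2 * max(0.0, d - x) for p, d in scenarios)
    UB = min(UB, x + Q)                          # feasible solution: upper bound
    a = sum(p * (2.0 if d > x else 0.0) * d for p, d in scenarios)
    b = sum(p * (2.0 if d > x else 0.0) for p, d in scenarios)
    cuts.append((a, b))                          # new optimality cut

print(f"x* = {x:.2f}, objective = {UB:.2f}")     # converges to value 4.0
```

In the full CD scheme described above, these BD iterations would be interleaved with DWD restricted-master iterations supplying additional upper bounds.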
Abstract:
This paper proposes a method to indicate potential problems when planning dye-penetrant and X-ray inspection of welded components. Inspection has been found to be an important part of the manufacturability evaluation made in a large CAD-based parametric environment for multidisciplinary design simulations in the early stages of design at an aircraft component manufacturer. The paper explains how the proposed method is to be included in the company's design platform. The method predicts the expected probability of detection (POD) of cracks in situations where the geometry of the parts is unfavourable for inspection, so that potential problems can be discovered and solved early. It is based on automatically extracting information from CAD models and performing a rule-based evaluation, and it also provides a scale for how favourable the geometry is for inspection. The paper further shows that the manufacturability evaluation needs to take the expected stresses in the structures into consideration, highlighting the importance of multidisciplinary simulations.
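A purely hypothetical illustration of the rule-based-evaluation idea (the attributes, thresholds and scoring below are invented for illustration and are not the paper's rules): geometric attributes extracted from the CAD model are scored against inspection rules and mapped to a favourability scale.

```python
# Hypothetical sketch of a rule-based inspectability check; attribute names,
# thresholds and weights are invented for illustration, NOT from the paper.
from dataclasses import dataclass

@dataclass
class WeldFeature:
    surface_angle_deg: float    # angle between weld surface and line of sight
    access_clearance_mm: float  # free space around the weld for probe/detector
    surface_roughness_um: float

def inspectability(f: WeldFeature) -> tuple[float, list[str]]:
    """Return a 0-1 favourability score and the rules that fired."""
    score, issues = 1.0, []
    if f.surface_angle_deg > 30:          # unfavourable viewing/beam angle
        score -= 0.4
        issues.append("viewing angle exceeds 30 deg")
    if f.access_clearance_mm < 50:        # hard to position film/detector
        score -= 0.3
        issues.append("access clearance below 50 mm")
    if f.surface_roughness_um > 25:       # rough surface masks indications
        score -= 0.2
        issues.append("surface too rough for reliable indications")
    return max(score, 0.0), issues

score, issues = inspectability(WeldFeature(35.0, 40.0, 10.0))
print(score, issues)   # ~0.3, two rules fired
```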
Abstract:
High energy efficiency and high performance are the key requirements for Internet of Things (IoT) end-nodes. Exploiting clusters of multiple programmable processors has recently emerged as a suitable solution to address this challenge. However, one of the main bottlenecks for multi-core architectures is the instruction cache. While private caches suffer from data replication and waste area, fully shared caches lack scalability and limit the operating frequency. Hence, we propose a hybrid solution in which a larger shared cache (L1.5) serves multiple cores connected through a low-latency interconnect to small private caches (L1). This design is still limited by capacity misses when the L1 is small, so we propose a sequential prefetch from L1 to L1.5 to improve performance with little area overhead. Moreover, to cut the critical path for better timing, we optimized the core instruction fetch stage with non-blocking transfers by adopting a 4 x 32-bit ring-buffer FIFO and adding a pipeline stage for conditional branches. We present a detailed comparison of the performance and energy efficiency of different instruction cache architectures recently proposed for Parallel Ultra-Low-Power clusters. On average, when executing a set of real-life IoT applications, our two-level cache improves performance by up to 20% while losing 7% energy efficiency with respect to the private cache. Compared to a shared cache system, it improves performance by up to 17% while keeping the same energy efficiency. Finally, a timing (maximum frequency) improvement of up to 20% and software control enable the two-level instruction cache with prefetch to adapt to various battery-powered use cases, balancing high performance and energy efficiency.
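A toy software model of the general sequential (next-line) prefetch idea, not the paper's L1/L1.5 hardware design: on a miss in a small instruction cache, the following cache line is fetched speculatively, so straight-line code finds it already resident.

```python
# Toy model of sequential (next-line) instruction prefetch into a small
# LRU cache; a software illustration only, not the paper's hardware.
from collections import OrderedDict

LINE = 16          # bytes per cache line
CAPACITY = 32      # cache capacity, in lines

def run(trace, prefetch: bool) -> float:
    cache = OrderedDict()            # line address -> True, kept in LRU order
    hits = misses = 0
    def insert(line):
        cache[line] = True
        cache.move_to_end(line)
        if len(cache) > CAPACITY:
            cache.popitem(last=False)        # evict least-recently-used line
    for pc in trace:
        line = pc // LINE
        if line in cache:
            hits += 1
            cache.move_to_end(line)
        else:
            misses += 1
            insert(line)
            if prefetch:
                insert(line + 1)             # sequential prefetch of next line
    return hits / (hits + misses)

# Straight-line fetch stream: next-line prefetch roughly halves the misses.
trace = list(range(0, 4096, 4))              # 4-byte instructions
print(f"no prefetch : {run(trace, False):.3f}")   # 0.750 hit rate
print(f"prefetch    : {run(trace, True):.3f}")    # 0.875 hit rate
```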
Abstract:
The topic of this thesis is the design and implementation of mathematical models and control system algorithms for rotary-wing unmanned aerial vehicles to be used in cooperative scenarios. The use of rotorcraft has many attractive advantages, since these vehicles can take off and land vertically, hover, and move backward and laterally. Rotary-wing aircraft missions require precise control characteristics due to the vehicles' instability and heavy cross-coupling. Flight testing is the most accurate way to evaluate flying qualities and to test control systems; however, it may be very expensive and/or not feasible at the early design and prototyping stage. A good compromise is a preliminary assessment performed by means of simulations, followed by a reduced flight-testing campaign. Consequently, an analytical framework represents an important asset for simulations and control algorithm design. In this work, mathematical models for various helicopter configurations are implemented. Different flight control techniques for helicopters are presented with their theoretical background and tested via simulations and experimental flight tests on a small-scale unmanned helicopter. The same platform is also used in a cooperative scenario with a rover. Control strategies, algorithms and their implementation to perform missions are presented for two main scenarios. One of the main contributions of this thesis is a suitable control system consisting of a classical PID baseline controller augmented with an L1 adaptive contribution. In addition, a complete analytical framework and a study of the dynamics and stability of a synch-rotor are provided. Finally, cooperative control strategies are implemented for two main scenarios involving a small-scale unmanned helicopter and a rover.
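A minimal discrete-time sketch of the kind of PID baseline controller described (the gains and the single-axis toy plant are illustrative assumptions, not the thesis's tuned values); the L1 adaptive augmentation would add its contribution on top of this baseline command.

```python
# Minimal discrete PID controller on a toy single-axis model; gains and
# plant are illustrative assumptions, not the thesis's tuned values.
def make_pid(kp, ki, kd, dt):
    state = {"i": 0.0, "prev_e": 0.0}
    def pid(setpoint, measurement):
        e = setpoint - measurement
        state["i"] += e * dt                  # integral term
        d = (e - state["prev_e"]) / dt        # derivative term
        state["prev_e"] = e
        return kp * e + ki * state["i"] + kd * d
    return pid

# Toy attitude-like plant: damped rate dynamics driven by the control input.
dt, x, v = 0.01, 0.0, 0.0
pid = make_pid(kp=4.0, ki=1.0, kd=2.0, dt=dt)
for step in range(1000):                      # 10 s of simulated flight
    u = pid(setpoint=1.0, measurement=x)
    v += dt * (u - 0.5 * v)                   # damped actuator/rotor dynamics
    x += dt * v
print(f"x after 10 s: {x:.3f}")               # converges toward the 1.0 setpoint
```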
Abstract:
This thesis investigates the mechanisms and boundary conditions that steer the early localisation of deformation and strain in carbonate multilayers involved in thrust systems under shallow and mid-crustal conditions. Much is already understood about deformation localisation, but some key points remain loosely constrained. They encompass i) understanding which structural domains can preserve evidence of the early stages of tectonic shortening, ii) recognising which mechanisms assist deformation during these stages, and iii) identifying the parameters that actually steer the onset of localisation. To clarify these points, the thesis presents the results of an integrated, multiscale and multi-technique structural study that relied on field and laboratory data to analyse the structural, architectural, mineralogical and geochemical features that govern deformation during compressional tectonics. Focusing on two case studies, the Eastern Southern Alps (northern Italy), where deformation is mainly brittle, and the Oman Mountains (northeastern Oman), where ductile deformation dominates, the thesis shows that deformation localisation is steered by several mechanisms that mutually interact at different stages during compression. At shallow crustal conditions, the derived conceptual and numerical models show that both inherited (e.g., stratigraphic) and acquired (e.g., structural) features play a key role in steering deformation and differentiating the seismic behaviour of the multilayer succession. At deeper crustal conditions, strain localises in narrow domains in which fluids, temperature, shear strain and pressure act together during the development of the internal fabric and chemical composition of mylonitic shear zones, where localisation took place under high-pressure (HP), low-temperature (LT) conditions. In particular, the results indicate that those shear zones acted as “sheltering structural capsules” in which peculiar processes can occur and where the results of these processes can be subsequently preserved even over hundreds of millions of years.
Abstract:
Machine (and deep) learning technologies are more and more present in several fields. It is undeniable that many aspects of our society are empowered by such technologies: web searches, content filtering on social networks, recommendations on e-commerce websites, mobile applications, etc., in addition to academic research. Moreover, mobile devices and internet sites, e.g., social networks, support the collection and sharing of information in real time. The pervasive deployment of these technological instruments, both hardware and software, has led to the production of huge amounts of data. Such data have become increasingly unmanageable, posing challenges to conventional computing platforms and paving the way for the development and widespread use of machine and deep learning. Nevertheless, machine learning is not only a technology. Given a task, machine learning is a way of proceeding (a way of thinking), and as such it can be approached from different perspectives (points of view). This, in particular, is the focus of this research. The entire work concentrates on machine learning, starting from different sources of data, e.g., signals and images, applied to different domains, e.g., Sport Science and Social History, and analyzed from different perspectives: from a non-data-scientist point of view through tools and platforms; setting up a problem from scratch; implementing an effective application for classification tasks; and improving the user interface experience through Data Visualization and eXtended Reality. In essence, not only in quantitative tasks, not only in scientific environments, and not only from a data scientist's perspective, machine (and deep) learning can make the difference.
Abstract:
In medicine, innovation depends on a better knowledge of the human body, which is a complex system of multi-scale constituents. Unraveling the complexity underlying diseases proves challenging, and a deep understanding of their inner workings requires dealing with much heterogeneous information. Exploring the molecular status and the organization of genes, proteins and metabolites provides insights into what drives a disease, from aggressiveness to curability. Molecular constituents, however, are only the building blocks of the human body and cannot currently tell the whole story of diseases. This is why attention is now growing towards the simultaneous exploitation of multi-scale information, and holistic methods are drawing interest to address the problem of integrating heterogeneous data. The heterogeneity may derive from the diversity across data types and from the diversity within diseases. Here, four studies conducted data integration using custom-designed workflows that implement novel methods and views to tackle the heterogeneous characterization of diseases. The first study was devoted to determining shared gene regulatory signatures in onco-hematology and showed partial co-regulation across blood-related diseases. The second study focused on Acute Myeloid Leukemia and refined the unsupervised integration of genomic alterations, which turned out to better resemble clinical practice. In the third study, network integration for atherosclerosis demonstrated, as a proof of concept, the impact of network intelligibility when modelling heterogeneous data, and was shown to accelerate the identification of new potential pharmaceutical targets. Lastly, the fourth study introduced a new method to integrate multiple data types into a unique latent heterogeneous representation that facilitated the selection of the data types most important for predicting the tumour stage of invasive ductal carcinoma. The results of these four studies lay the groundwork for easing the detection of new biomarkers, ultimately benefiting medical practice and the ever-growing field of Personalized Medicine.
Abstract:
Latency can be defined as the sum of the arrival times at the customers. Minimum latency problems are especially relevant in applications related to humanitarian logistics. This thesis presents algorithms for solving a family of vehicle routing problems with minimum latency. First, the latency location routing problem (LLRP) is considered. It consists of determining the subset of depots to be opened and the routes that a set of homogeneous capacitated vehicles must perform in order to visit a set of customers, such that the sum of the demands of the customers assigned to each vehicle does not exceed the vehicle's capacity. For solving this problem, three metaheuristic algorithms combining simulated annealing and variable neighborhood descent, as well as an iterated local search (ILS) algorithm, are proposed. Furthermore, the multi-depot cumulative capacitated vehicle routing problem (MDCCVRP) and the multi-depot k-traveling repairman problem (MDk-TRP) are solved with the proposed ILS algorithm. The MDCCVRP is a special case of the LLRP in which all the depots can be opened, and the MDk-TRP is a special case of the MDCCVRP in which the capacity constraints are relaxed. Finally, an LLRP with stochastic travel times is studied, and a two-stage stochastic programming model and a variable neighborhood search algorithm are proposed for solving it. A sampling method is also developed for tackling instances with an infinite number of scenarios. Extensive computational experiments show that the proposed methods are effective for solving the problems under study.
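To make the objective concrete (a sketch, not the thesis's code): with travel times t(i, j), the latency of a single route from a depot is the sum of arrival times at the customers, so customers served early contribute less, and latency-optimal routes differ from distance-optimal ones.

```python
# Latency (sum of customer arrival times) of one route from a depot;
# an illustrative sketch, not the thesis's implementation.
def latency(depot, route, t):
    """t[(i, j)] = travel time; returns the sum of arrival times."""
    total, clock, prev = 0.0, 0.0, depot
    for c in route:
        clock += t[(prev, c)]   # arrival time at customer c
        total += clock
        prev = c
    return total

# Symmetric toy instance: depot 0, customers 1 (near) and 2 (far).
t = {(0, 1): 1.0, (1, 0): 1.0, (0, 2): 5.0, (2, 0): 5.0,
     (1, 2): 5.0, (2, 1): 5.0}
print(latency(0, [1, 2], t))   # 1 + (1+5) = 7   -> near customer first
print(latency(0, [2, 1], t))   # 5 + (5+5) = 15
```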
Abstract:
In Brazil, the consumption of extra-virgin olive oil (EVOO) is increasing annually, but there are no experimental studies concerning the phenolic compound contents of commercial EVOO. The aim of this work was to optimise the separation of 17 phenolic compounds already detected in EVOO. A Doehlert matrix experimental design was used, evaluating the effects of pH and electrolyte concentration. Resolution, runtime and migration-time relative standard deviation values were evaluated. Derringer's desirability function was used to simultaneously optimise all 37 responses. The 17 peaks were separated in 19 min using a fused-silica capillary (50 μm internal diameter, 72 cm effective length) with an extended light path and 101.3 mmol L(-1) boric acid electrolyte (pH 9.15, 30 kV). The method was validated and applied to 15 EVOO samples found in Brazilian supermarkets.
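For reference, Derringer's overall desirability combines the individual desirabilities d_i of the n responses (here n = 37) through their geometric mean, so a single completely undesirable response zeroes the overall score:

```latex
% Derringer's overall desirability: geometric mean of the n individual
% desirabilities d_i \in [0,1], one per response (here n = 37).
\[
D = \left( \prod_{i=1}^{n} d_i \right)^{1/n},
\qquad 0 \le D \le 1 .
\]
```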
Abstract:
Assessment of central blood pressure (BP) has grown substantially over recent years because evidence has shown that central BP is more relevant to cardiovascular outcomes than peripheral BP. Moreover, different classes of antihypertensive drugs have different effects on central BP despite producing similar reductions in brachial BP. The aim of this study was to investigate the effect of nebivolol, a β-blocker with vasodilator properties, on the biochemical and hemodynamic parameters of hypertensive patients. This was an experimental single-cohort study conducted in the outpatient clinic of a university hospital. Twenty-six patients were recruited. All of them underwent biochemical and hemodynamic evaluation (BP, heart rate (HR), central BP and augmentation index) before and after 3 months of nebivolol use. Of the patients, 88.5% were male; their mean age was 49.7 ± 9.3 years, and most of them were overweight (29.6 ± 3.1 kg/m2) with a large abdominal waist (102.1 ± 7.2 cm). There were significant decreases in peripheral systolic BP (P = 0.0020), diastolic BP (P = 0.0049), HR (P < 0.0001) and central BP (129.9 ± 12.3 versus 122.3 ± 10.3 mmHg; P = 0.0083) after treatment, in comparison with the baseline values. There was no statistically significant difference in the augmentation index or in the biochemical parameters between before and after treatment. Nebivolol use seems to be associated with a significant reduction of central BP in stage I hypertensive patients, in addition to reductions in brachial systolic and diastolic BP.
Abstract:
Genipap fruits, native to the Amazon region, were classified in relation to their stage of ripeness according to firmness and peel color. The influence of the part of the genipap fruit and ripeness stage on the iridoid and phenolic compound profiles was evaluated by HPLC-DAD-MS(n), and a total of 17 compounds were identified. Geniposide was the major compound in both parts of the unripe genipap fruits, representing >70% of the total iridoids, whereas 5-caffeoylquinic acid was the major phenolic compound. In ripe fruits, genipin gentiobioside was the major compound in the endocarp (38%) and no phenolic compounds were detected. During ripening, the total iridoid content decreased by >90%, which could explain the absence of blue pigment formation in the ripe fruits after their injury. This is the first time that the phenolic compound composition and iridoid contents of genipap fruits have been reported in the literature.
Abstract:
The formation of mono-species biofilms (Listeria monocytogenes) and multi-species biofilms (Enterococcus faecium, Enterococcus faecalis, and L. monocytogenes) was evaluated, as was the effectiveness of sanitation procedures for the control of the multi-species biofilms. The biofilms were grown on stainless steel coupons at various incubation temperatures (7, 25 and 39°C) and contact times (0, 1, 2, 4, 6 and 8 days). In all tests at 7°C, the microbial counts were below 0.4 log CFU/cm(2) and not characteristic of biofilms. In mono-species biofilms, the counts of L. monocytogenes after 8 days of contact were 4.1 and 2.8 log CFU/cm(2) at 25 and 39°C, respectively. In the multi-species biofilms, Enterococcus spp. were present at counts of 8 log CFU/cm(2) at 25 and 39°C after 8 days of contact. However, L. monocytogenes in the multi-species biofilms was significantly affected by the presence of Enterococcus spp. and by temperature. At 25°C, the growth of L. monocytogenes biofilms was favored in multi-species cultures, with counts above 6 log CFU/cm(2) after 8 days of contact. In contrast, at 39°C, a negative effect was observed on L. monocytogenes biofilm growth in mixed cultures, with a significant reduction in counts over time and values below 0.4 log CFU/cm(2) from day 4 onwards. Anionic tensioactive cleaning complemented with another procedure (acid cleaning, disinfection, or acid cleaning + disinfection) eliminated the multi-species biofilms under all conditions tested (counts of all microorganisms <0.4 log CFU/cm(2)). Peracetic acid was the most effective disinfectant, eliminating the multi-species biofilms under all tested conditions, whereas biguanide was the least effective, failing to eliminate the biofilms under all test conditions.