957 results for Optimal reactive dispatch problem
Abstract:
Pancreatic cancer (PC) is one of the most lethal human malignancies and a major health problem. Patients diagnosed with PC and treated with conventional approaches have an overall 5-year survival rate of less than 5%. Novel strategies are needed to treat this disease. Herein, we propose a combinatorial strategy that targets two unrelated metabolic enzymes overexpressed in PC cells, NAD(P)H:quinone oxidoreductase-1 (NQO1) and nicotinamide phosphoribosyltransferase (NAMPT), using β-lapachone (BL) and APO866, respectively. We show that BL greatly enhances the antitumor activity of APO866 on various PC cell lines without affecting normal cells, in a PARP-1-dependent manner. The chemopotentiation of APO866 with BL was characterized by the following: (i) nicotinamide adenine dinucleotide (NAD) depletion; (ii) catalase (CAT) degradation; (iii) excessive H2O2 production; (iv) a dramatic drop in mitochondrial membrane potential (MMP); and finally (v) autophagy-associated cell death. H2O2 production, loss of MMP and cell death (but not NAD depletion) were abrogated by exogenous supplementation with CAT or by pharmacological or genetic inhibition of PARP-1. Our data demonstrate that the combination of a non-lethal dose of BL and a low dose of APO866 enhances cell death in various PC lines significantly over either compound given separately, and opens a new and promising combination for PC therapy.
Abstract:
An option is a financial contract that gives its holder the right (but not the obligation) to sell or buy something (for example a share) to or from the seller of the option at a certain price at a specified future time. The seller of the option commits to this future transaction should the option holder later decide to exercise the option. The seller thus takes on the risk that the future transaction the option holder can force upon him turns out to be unfavourable for him. The question of how the seller can protect himself against this risk leads to interesting optimization problems, where the goal is to find an optimal hedging strategy under given conditions. Such optimization problems have been studied extensively in financial mathematics. The thesis "The knapsack problem approach in solving partial hedging problems of options" adds a further perspective to this discussion: in a relatively simple (finite and complete) market model, certain partial hedging problems can be formulated as so-called knapsack problems. The latter are well known within the branch of mathematics called operations research. The thesis shows how hedging problems previously solved by other means can alternatively be solved with methods developed for knapsack problems. The approach is also applied to entirely new hedging problems connected with so-called American options.
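The knapsack connection can be made concrete with the textbook 0/1 knapsack dynamic program from operations research; this is a generic sketch of the solution technique the thesis borrows, not the hedging formulation itself (the values, weights and capacity below are arbitrary illustrations):

```python
def knapsack(values, weights, capacity):
    """Maximum total value of a subset of items fitting within
    `capacity`, by dynamic programming over residual capacities."""
    best = [0] * (capacity + 1)
    for v, w in zip(values, weights):
        # iterate capacities downwards so each item is used at most once
        for c in range(capacity, w - 1, -1):
            best[c] = max(best[c], best[c - w] + v)
    return best[capacity]
```

In the partial-hedging analogue, the "items" would correspond to market scenarios and the "capacity" to the hedging budget, but that mapping is specific to the thesis's finite, complete market model.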
Abstract:
Sequestration of carbon dioxide in mineral rocks, also known as CO2 Capture and Mineralization (CCM), is considered to have a huge potential for stabilizing anthropogenic CO2 emissions. One of the CCM routes is the ex situ indirect gas/solid carbonation of reactive materials, such as Mg(OH)2, produced from abundantly available Mg-silicate rocks. The gas/solid carbonation method is intensively researched at Åbo Akademi University (ÅAU), Finland, because it is energetically attractive and utilizes the exothermic chemistry of Mg(OH)2 carbonation. In this thesis, a method for producing Mg(OH)2 from Mg-silicate rocks for CCM was investigated, and the process efficiency, energy and environmental impact assessed. The Mg(OH)2 process studied here was first proposed in 2008 in a Master's Thesis by the author. At that time the process was applied to only one Mg-silicate rock (Finnish serpentinite from the Hitura nickel mine site of Finn Nickel) and the optimum process conversions, energy and environmental performance were not known. Producing Mg(OH)2 from Mg-silicate rocks involves a two-stage process of Mg extraction and Mg(OH)2 precipitation. The first stage extracts Mg and other cations by reacting pulverized serpentinite or olivine rocks with ammonium sulfate (AS) salt at 400-550 °C (preferably < 450 °C). In the second stage, ammonia solution reacts with the cations (extracted from the first stage after they are leached in water) to form mainly FeOOH, high-purity Mg(OH)2 and aqueous (dissolved) AS. The Mg(OH)2 process described here is closed-loop in nature: gaseous ammonia and water vapour are produced from the extraction stage, recovered and used as reagents for the precipitation stage, and the AS reagent is recovered after the precipitation stage. The Mg extraction stage, being the conversion-determining and most energy-intensive step of the entire CCM process chain, received prominent attention in this study.
The extraction behavior and reactivity of different rock types (serpentinite and olivine rocks) from different locations worldwide (Australia, Finland, Lithuania, Norway and Portugal) were tested. Also, a parametric evaluation was carried out to determine the optimal reaction temperature, time and chemical reagent (AS). Effects of reactor type and configuration, mixing and scale-up possibilities were also studied. The Mg(OH)2 produced can be used to convert CO2 to thermodynamically stable and environmentally benign magnesium carbonate. Therefore, the process energy and life-cycle environmental performance of the ÅAU CCM technique, which first produces Mg(OH)2 and then carbonates it in a pressurized fluidized bed (FB), were assessed. The life-cycle energy and environmental assessment approach applied in this thesis is motivated by the fact that the CCM technology should in itself offer a solution to what is both an energy and an environmental problem. Results obtained in this study show that different Mg-silicate rocks react differently, olivine rocks being far less reactive than serpentinite rocks. In summary, the reactivity of Mg-silicate rocks is a function of both the chemical and physical properties of the rocks. Reaction temperature and time remain important parameters to consider in process design and operation. Heat transfer properties of the reactor determine the temperature at which maximum Mg extraction is obtained. Also, an increase in reaction temperature leads to an increase in the extent of extraction, reaching a maximum yield at different temperatures depending on the reaction time. The process energy requirement for producing Mg(OH)2 from a hypothetical case of an iron-free serpentine rock is 3.62 GJ/t-CO2. This value can increase by 16-68% depending on the type of iron compound (FeO, Fe2O3 or Fe3O4) in the mineral.
This suggests that the benefit of the potential use of FeOOH as an iron ore feedstock in iron and steelmaking should be determined by considering the energy, cost and emissions associated with the FeOOH by-product. AS recovery through crystallization is the second most energy-intensive unit operation after the extraction reaction. However, choosing mechanical vapor recompression (MVR) over the "simple evaporation" crystallization method offers a potential energy saving of 15.2 GJ/t-CO2 (an 84% saving). Integrating the Mg(OH)2 production method and the gas/solid carbonation process could offset up to 25% of the CCM process energy requirements. Life cycle inventory assessment (LCIA) results show that for every ton of CO2 mineralized, the ÅAU CCM process avoids 430-480 kg CO2. The Mg(OH)2 process studied in this thesis has many promising features. Even at the current high energy and environmental burden, producing Mg(OH)2 from Mg-silicates can play a significant role in advancing CCM processes. Moreover, dedicated future research and development (R&D) has the potential to significantly improve the performance of the Mg(OH)2 process.
Abstract:
In recent years the analysis and synthesis of (mechanical) control systems in descriptor form have become well established. This general description of dynamical systems is important for many applications in mechanics and mechatronics, in electrical and electronic engineering, and in chemical engineering as well. This contribution deals with linear mechanical descriptor systems and their control design with respect to a quadratic performance criterion. Here, the notion of properness plays an important role in determining whether the standard Riccati approach can be applied as usual. Properness and non-properness distinguish whether the descriptor system is governed exclusively by the control input or additionally by its higher-order time derivatives. In the unusual case of non-proper systems a quite different optimal control design problem has to be considered. Both cases are solved completely.
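In the proper case, the standard Riccati approach referred to above can be illustrated in the simplest possible (scalar, non-descriptor) setting; the system and cost coefficients below are hypothetical, chosen only to show how the stabilizing Riccati solution yields the optimal state feedback:

```python
import math

def scalar_lqr(a, b, q, r):
    """For x' = a*x + b*u with cost integral of q*x**2 + r*u**2,
    solve the scalar algebraic Riccati equation
        2*a*p - (b**2 / r) * p**2 + q = 0
    for the positive (stabilizing) root p, and return (p, k)
    where the optimal feedback is u = -k*x with k = b*p/r."""
    p = r * (a + math.sqrt(a * a + q * b * b / r)) / (b * b)
    return p, b * p / r

p, k = scalar_lqr(a=1.0, b=1.0, q=1.0, r=1.0)
residual = 2.0 * p - p * p + 1.0   # Riccati equation residual, ~0
closed_loop = 1.0 - k              # a - b*k, negative => stable
```

For non-proper descriptor systems, where higher-order derivatives of the input enter the dynamics, this standard machinery no longer applies directly, which is the distinction the contribution addresses.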
Abstract:
In this paper, the optimum design of 3R manipulators is formulated and solved using an algebraic formulation of the workspace boundary. Manipulator design can be approached as an optimization problem in which the objective functions are the size of the manipulator and the workspace volume, and the constraints can be given as a prescribed workspace volume. The numerical solution of the optimization problem is investigated using two different numerical techniques, namely sequential quadratic programming and simulated annealing. Numerical examples illustrate the design procedure and show the efficiency of the proposed algorithms.
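One of the two techniques mentioned, simulated annealing, can be sketched on a toy version of the sizing problem: minimize the total link length subject to a prescribed (here spherical) workspace volume enforced by a penalty term. The objective, constraint and all parameters are illustrative stand-ins, not the paper's actual formulation:

```python
import math
import random

random.seed(42)

def anneal(cost, x0, step=0.05, t0=1.0, cooling=0.999, iters=8000):
    """Generic simulated annealing minimizer with Metropolis acceptance."""
    x, fx = list(x0), cost(x0)
    best, fbest = list(x), fx
    t = t0
    for _ in range(iters):
        y = list(x)
        i = random.randrange(len(y))
        y[i] = max(1e-6, y[i] + random.gauss(0.0, step))  # keep lengths positive
        fy = cost(y)
        if fy < fx or random.random() < math.exp((fx - fy) / t):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = list(x), fx
        t *= cooling
    return best, fbest

# toy 3R sizing: minimize the sum of the three link lengths while the
# reachable (spherical) workspace volume stays above a prescribed value
V_REQ = 100.0
R_MIN = (3.0 * V_REQ / (4.0 * math.pi)) ** (1.0 / 3.0)

def cost(lengths):
    reach = sum(lengths)
    return reach + 1e3 * max(0.0, R_MIN - reach) ** 2  # penalty if workspace too small

best, fbest = anneal(cost, [2.0, 2.0, 2.0])
```

The run settles near the smallest total length whose workspace meets the volume constraint; sequential quadratic programming would instead exploit gradients of the same penalized objective.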
Abstract:
The main topic of the thesis is optimal stopping. This is treated in two research articles. In the first article we introduce a new approach to optimal stopping of general strong Markov processes. The approach is based on the representation of excessive functions as expected suprema. We present a variety of examples, in particular, the Novikov-Shiryaev problem for Lévy processes. In the second article on optimal stopping we focus on differentiability of excessive functions of diffusions and apply these results to study the validity of the principle of smooth fit. As an example we discuss optimal stopping of sticky Brownian motion. The third research article offers a survey-like discussion of Appell polynomials. The crucial role of Appell polynomials in optimal stopping of Lévy processes was noticed by Novikov and Shiryaev, who described the optimal rule in a large class of problems via these polynomials. We exploit the probabilistic approach to Appell polynomials and show that many classical results are obtained with ease in this framework. In the fourth article we derive a new relationship between the generalized Bernoulli polynomials and the generalized Euler polynomials.
Abstract:
Malaria continues to infect millions and kill hundreds of thousands of people worldwide each year, despite over a century of research and attempts to control and eliminate this infectious disease. Challenges such as the development and spread of drug-resistant malaria parasites, insecticide resistance in mosquitoes, climate change, the presence of individuals with subpatent malaria infections (which are normally asymptomatic) and behavioral plasticity in the mosquito hinder the prospects of malaria control and elimination. In this thesis, mathematical models of malaria transmission and control that address the role of drug resistance, immunity, iron supplementation and anemia, immigration and visitation, and the presence of asymptomatic carriers in malaria transmission are developed. A within-host mathematical model of severe Plasmodium falciparum malaria is also developed. First, a deterministic mathematical model for transmission of antimalarial drug-resistant parasites with superinfection is developed and analyzed. The possibility of an increased risk of superinfection due to iron supplementation and fortification in malaria-endemic areas is discussed. The model results call upon stakeholders to weigh the pros and cons of iron supplementation for individuals living in malaria-endemic regions. Second, a deterministic model of transmission of drug-resistant malaria parasites, including the inflow of infective immigrants, is presented and analyzed. Optimal control theory is applied to this model to study the impact of various malaria and vector control strategies, such as screening of immigrants, treatment of drug-sensitive infections, treatment of drug-resistant infections, and the use of insecticide-treated bed nets and indoor spraying of mosquitoes. The results of the model emphasize the importance of using a combination of all four control tools for effective malaria intervention.
Next, a two-age-class mathematical model for malaria transmission with asymptomatic carriers is developed and analyzed. In the development of this model, four possible control measures are analyzed: the use of long-lasting treated mosquito nets, indoor residual spraying, screening and treatment of symptomatic individuals, and screening and treatment of asymptomatic individuals. The numerical results show that a disease-free equilibrium can be attained if all four control measures are used. A common pitfall for most epidemiological models is the absence of real data; model-based conclusions have to be drawn from uncertain parameter values. In this thesis, an approach to studying the robustness of optimal control solutions under such parameter uncertainty is presented. Numerical analysis of the optimal control problem in the presence of parameter uncertainty demonstrates the robustness of the optimal control approach: when a comprehensive control strategy is used, the main conclusions of the optimal control remain unchanged, even if inevitable variability remains in the control profiles. The results provide a promising framework for the design of cost-effective strategies for disease control with multiple interventions, even under considerable uncertainty in model parameters. Finally, a separate work modeling the within-host Plasmodium falciparum infection in humans is presented. The developed model allows re-infection of already-infected red blood cells. The model hypothesizes that in severe malaria, owing to the parasite's quest for survival and rapid multiplication, Plasmodium falciparum can be absorbed into already-infected red blood cells, which accelerates the rupture rate and consequently causes anemia. Analysis of the model and of parameter identifiability using Markov chain Monte Carlo methods is presented.
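The transmission models discussed above build on the classic Ross-Macdonald host-vector structure; a minimal forward-Euler integration of that textbook model (with entirely hypothetical parameter values, not the thesis's fitted ones) shows how an endemic equilibrium emerges when the basic reproduction number exceeds one:

```python
def simulate(ih0=0.01, iv0=0.0, t_end=200.0, dt=0.1,
             a=0.3, b=0.5, c=0.5, m=10.0, r=0.05, g=0.1):
    """Ross-Macdonald model, integrated by forward Euler:
      ih' = m*a*b*iv*(1 - ih) - r*ih   (infected human fraction)
      iv' = a*c*ih*(1 - iv) - g*iv     (infected mosquito fraction)
    a: biting rate, b/c: transmission probabilities, m: mosquitoes
    per human, r: human recovery rate, g: mosquito death rate."""
    ih, iv = ih0, iv0
    for _ in range(int(t_end / dt)):
        dih = m * a * b * iv * (1.0 - ih) - r * ih
        div = a * c * ih * (1.0 - iv) - g * iv
        ih += dt * dih
        iv += dt * div
    return ih, iv

ih, iv = simulate()  # with these rates R0 = (a*a*b*c*m)/(r*g) = 45 > 1
```

The thesis's age-structured, asymptomatic-carrier and control-augmented models extend this same skeleton with additional compartments and time-varying control terms.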
Abstract:
Parents of children with autism spectrum disorders (ASD) and developmental delays (DD) may experience more child problem behaviours, report lower parenting self-efficacy (PSE), and be more reactive than proactive in their parenting strategies than those who have children with typical development (TD). Differences in PSE and parenting strategies may also influence the extent to which child problem behaviours are experienced by parents of children with ASD and DD, compared to parents of children with TD. Using a convenience sample of parents of children with ASD (n = 48), DD (n = 51), and TD (n = 72), this study examined group differences on three key variables: PSE, parenting strategies, and child problem behaviour. Results indicated that those in the DD group scored lower than the ASD group on PSE in preventing child problem behaviour. The TD group used fewer reactive strategies than the DD group, and fewer proactive strategies than both the ASD and DD groups. For the overall sample, greater use of reactive strategies was found to predict higher ratings of child problem behaviour, while a greater proportion of proactive to reactive strategies predicted lower ratings of child problem behaviour. PSE was found to moderate the relationship between DD diagnosis and child problem behaviour. Implications for behavioural (i.e., parenting strategies) and cognitive (i.e., PSE) approaches to parenting are discussed.
Abstract:
DNA assembly is among the most fundamental and difficult problems in bioinformatics. Near-optimal assembly solutions are available for bacterial and small genomes; however, assembling large and complex genomes, especially the human genome, using Next-Generation Sequencing (NGS) technologies has proven very difficult because of the highly repetitive and complex nature of the human genome, short read lengths, uneven data coverage and tools that are not specifically built for human genomes. Moreover, many algorithms do not even scale to human genome datasets containing hundreds of millions of short reads. The DNA assembly problem is usually divided into several subproblems, including DNA data error detection and correction, contig creation, scaffolding and contig orientation, each of which can be seen as a distinct research area. This thesis specifically focuses on creating contigs from the short reads and combining them with the outputs of other tools in order to obtain better results. Three different assemblers, SOAPdenovo [Li09], Velvet [ZB08] and Meraculous [CHS+11], are selected for comparison in this thesis. The results show that the work in this thesis produces contigs comparable to those of other assemblers, and that combining our contigs with the outputs of other tools produces the best results, outperforming all other investigated assemblers.
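Contig creation from short reads, the sub-problem this thesis focuses on, is commonly done by walking the unbranched paths of a de Bruijn graph of k-mers; this is a generic sketch of that standard technique (ignoring error correction, reverse complements and isolated cycles), not the thesis's actual pipeline:

```python
from collections import defaultdict

def contigs(reads, k):
    """Build a de Bruijn graph (nodes: (k-1)-mers, edges: k-mers)
    and return the maximal non-branching paths as contigs."""
    succ, pred = defaultdict(set), defaultdict(set)
    for read in reads:
        for i in range(len(read) - k + 1):
            kmer = read[i:i + k]
            succ[kmer[:-1]].add(kmer[1:])
            pred[kmer[1:]].add(kmer[:-1])

    def simple(n):  # exactly one way in and one way out
        return len(pred[n]) == 1 and len(succ[n]) == 1

    out = []
    for n in set(succ) | set(pred):
        if simple(n):
            continue  # contigs start only at branching or terminal nodes
        for nxt in succ[n]:
            path = [n, nxt]
            while simple(path[-1]):
                path.append(next(iter(succ[path[-1]])))
            out.append(path[0] + "".join(p[-1] for p in path[1:]))
    return sorted(out)
```

Real assemblers such as SOAPdenovo, Velvet and Meraculous add error filtering, paired-end information and heavy engineering for scale on top of this core idea.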
Abstract:
In this article we study the effect of uncertainty on an entrepreneur who must choose the capacity of his business before knowing the demand for his product. The unit profit of operation is known with certainty, but there is no flexibility in our one-period framework. We show how the introduction of global uncertainty reduces the investment of the risk-neutral entrepreneur and, even more, that of the risk-averse one. We also show how marginal increases in risk reduce the optimal capacity of both the risk-neutral and the risk-averse entrepreneur, without any restriction on the concave utility function and with limited restrictions on the definition of a mean-preserving spread. These general results are explained by the fact that the newsboy has a piecewise-linear, concave monetary payoff with a kink endogenously determined at the level of optimal capacity. Our results are compared with those in the two literatures on price uncertainty and demand uncertainty, and particularly with the recent contributions of Eeckhoudt, Gollier and Schlesinger (1991, 1995).
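The kinked newsboy payoff mentioned here produces, in the risk-neutral case, the classic critical-fractile capacity rule; the numbers below are hypothetical and just illustrate the kink's consequence, including how a wider (mean-preserving) demand spread lowers capacity whenever the critical fractile is below one half:

```python
def critical_fractile(cu, co):
    """Optimal service level for a risk-neutral newsvendor:
    cu = unit underage cost (margin lost on unmet demand),
    co = unit overage cost (cost of idle capacity)."""
    return cu / (cu + co)

def optimal_capacity_uniform(cu, co, lo, hi):
    # inverse CDF of Uniform(lo, hi) evaluated at the critical fractile
    return lo + (hi - lo) * critical_fractile(cu, co)

# same mean demand (50), wider spread -> lower optimal capacity
tight = optimal_capacity_uniform(2.0, 8.0, 40.0, 60.0)
wide = optimal_capacity_uniform(2.0, 8.0, 0.0, 100.0)
```

The article's contribution is the risk-averse counterpart of such comparative statics, with minimal assumptions on the utility function and on the mean-preserving spread.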
Abstract:
In this thesis, we study several fundamental problems in financial and actuarial mathematics, together with their applications. The thesis consists of three contributions, dealing mainly with the theory of risk measures, the capital allocation problem and fluctuation theory. In Chapter 2, we construct new coherent risk measures and study capital allocation in the framework of collective risk theory. To this end, we introduce the family of Cumulative Entropic Risk Measures. Chapter 3 studies the optimal portfolio problem for the Entropic Value at Risk when returns are modeled by a jump-diffusion process. In Chapter 4, we generalize the notion of natural risk statistics to the multivariate setting. This non-trivial extension produces multivariate risk measures built from financial and insurance data. Chapter 5 introduces the concepts of drawdown and speed of depletion in ruin theory. We study these concepts for risk models described by a family of spectrally negative Lévy processes.
Abstract:
In most classical frameworks for learning from examples, it is assumed that examples are randomly drawn and presented to the learner. In this paper, we consider the possibility of a more active learner who is allowed to choose his/her own examples. Our investigations are carried out in a function approximation setting. In particular, using arguments from optimal recovery (Micchelli and Rivlin, 1976), we develop an adaptive sampling strategy (equivalent to adaptive approximation) for arbitrary approximation schemes. We provide a general formulation of the problem and show how it can be regarded as sequential optimal recovery. We demonstrate the application of this general formulation to two special cases of functions on the real line: 1) monotonically increasing functions and 2) functions with bounded derivative. An extensive investigation of the sample complexity of approximating these functions is conducted, yielding both theoretical and empirical results on test functions. Our theoretical results (stated in PAC-style), along with the simulations, demonstrate the superiority of our active scheme over both passive learning and classical optimal recovery. The analysis of active function approximation is conducted in a worst-case setting, in contrast with other Bayesian paradigms obtained from optimal design (MacKay, 1992).
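For the monotone case, the sequential optimal-recovery idea can be sketched as a simple adaptive bisection: for a monotone increasing function, the worst-case uncertainty over an interval is the area of the box width times rise, so the active learner always samples the midpoint of the worst box. This is an illustrative reconstruction of the strategy's flavor, not the paper's exact algorithm:

```python
def active_samples(f, a, b, n):
    """Adaptively sample a monotone increasing f on [a, b]: repeatedly
    bisect the interval with the largest width * rise, which bounds the
    worst-case error for a monotone function known only at the samples."""
    xs, ys = [a, b], [f(a), f(b)]
    for _ in range(n):
        # pick the interval whose uncertainty box has the largest area
        i = max(range(len(xs) - 1),
                key=lambda j: (xs[j + 1] - xs[j]) * (ys[j + 1] - ys[j]))
        mid = 0.5 * (xs[i] + xs[i + 1])
        xs.insert(i + 1, mid)
        ys.insert(i + 1, f(mid))
    return xs, ys

xs, ys = active_samples(lambda x: x ** 3, 0.0, 1.0, 20)
```

Unlike passive uniform sampling, the samples concentrate where the function actually varies, which is the source of the sample-complexity gains the paper reports.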
Optimal Methodology for Synchronized Scheduling of Parallel Station Assembly with Air Transportation
Abstract:
We present an optimal methodology for synchronized scheduling of production assembly with air transportation to achieve accurate delivery at minimized cost in a consumer electronics supply chain (CESC). This problem was motivated by a major PC manufacturer in the consumer electronics industry, which must schedule deliveries to meet customer needs in different parts of South East Asia. The overall problem is decomposed into two sub-problems: an air transportation allocation problem and an assembly scheduling problem. The air transportation allocation problem is formulated as a linear programming problem with earliness and tardiness penalties for job orders. In the assembly scheduling problem, the job orders must be sequenced on the assembly stations to minimize their waiting times before they are shipped by flights to their destinations. Hence the second sub-problem is modelled as a scheduling problem with earliness penalties, where the earliness penalties are assumed to be independent of the job orders.
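The first sub-problem can be made concrete on a tiny instance: assign finished job orders to flights departing no earlier than their completion times, at minimum total earliness (waiting). A real instance would be solved as the linear program described above; this brute-force sketch over hypothetical data just makes the objective explicit:

```python
from itertools import product

def allocate(jobs, flights, capacity):
    """jobs: completion times; flights: departure times; capacity:
    max job orders per flight. Minimize total earliness (waiting
    between completion and departure) by exhaustive search."""
    best, best_cost = None, float("inf")
    for assign in product(range(len(flights)), repeat=len(jobs)):
        if any(assign.count(f) > capacity for f in range(len(flights))):
            continue  # flight overbooked
        if any(flights[f] < jobs[j] for j, f in enumerate(assign)):
            continue  # flight departs before the job order is assembled
        cost = sum(flights[f] - jobs[j] for j, f in enumerate(assign))
        if cost < best_cost:
            best, best_cost = assign, cost
    return best, best_cost
```

The second sub-problem then sequences the job orders on the assembly stations so that exactly these waiting times (the earliness penalties) are kept small.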
Abstract:
Most network operators have considered reducing Label Switched Router (LSR) label spaces (i.e. the number of labels that can be used) as a means of simplifying management of underlying Virtual Private Networks (VPNs) and, hence, reducing operational expenditure (OPEX). This letter discusses the problem of reducing the label spaces in Multiprotocol Label Switched (MPLS) networks using label merging, better known as MultiPoint-to-Point (MP2P) connections. Because of its origins in IP, MP2P connections have been considered to have tree shapes with Label Switched Paths (LSPs) as branches. For this reason, previous works by many authors affirm that the problem of minimizing the label space using MP2P in MPLS, the Merging Problem, cannot be solved optimally with a polynomial algorithm (it is NP-complete), since it involves a hard decision problem. However, in this letter the Merging Problem is analyzed from the perspective of MPLS, and it is deduced that tree shapes in MP2P connections are irrelevant. By discarding this tree-shape consideration, it is possible to perform label merging in polynomial time. Based on how MPLS signaling works, this letter proposes an algorithm to compute the minimum number of labels using label merging: the Full Label Merging algorithm. In conclusion, we reclassify the Merging Problem as polynomial-solvable instead of NP-complete. In addition, simulation experiments confirm that without the tree-branch selection problem, the label space can be reduced further.
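The effect of label merging can be shown with a small count over explicit paths: without merging, each LSP consumes its own label on every link it traverses; with MP2P merging, all LSPs that share a link and an egress share a single label. The topology below is a made-up illustration, not one of the letter's simulation scenarios:

```python
def label_counts(lsps):
    """For LSPs given as node lists ending at their egress, return
    (labels without merging, labels with MP2P merging), counting one
    label per (link, LSP) vs. one label per (link, egress)."""
    no_merge, merged = set(), set()
    for lsp in lsps:
        egress = lsp[-1]
        for u, v in zip(lsp, lsp[1:]):
            no_merge.add((u, v, tuple(lsp)))
            merged.add((u, v, egress))
    return len(no_merge), len(merged)

# two LSPs converging at C towards the same egress D share the C->D label
counts = label_counts([["A", "C", "D"], ["B", "C", "D"]])
```

The Full Label Merging algorithm proposed in the letter computes the minimum label space that this kind of sharing permits, using only information already carried by MPLS signaling.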