949 results for: optimization, heuristic, solver, operations, research
Abstract:
We analyse the relationship between the privatization of a public firm and government preferences for environmental tax revenue. The model that we consider is more general than the one considered in Wang and Wang (2009), in the sense that we put a larger weight on environmental tax revenue than on the other terms of the government's objective function. The model has two stages. In the first stage, the government sets the environmental tax. Then, the firms engage in Cournot competition, choosing output and pollution abatement levels.
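The two-stage structure can be sketched by backward induction: solve the Cournot stage for a given tax, then let the government search over taxes. The functional forms below (linear demand, quadratic abatement cost) and all parameter values are illustrative assumptions, not the specification in Wang and Wang (2009).

```python
# Illustrative two-stage game: the government sets an emission tax t, then two
# symmetric firms play Cournot in output and abatement. Linear demand and
# quadratic abatement cost are assumptions for illustration only.
A, c, k, d, lam = 10.0, 2.0, 1.0, 1.0, 1.5   # demand, cost, abatement, damage, tax-revenue weight (lam > 1)

def stage2(t):
    """Cournot equilibrium given tax t (closed form under these assumptions)."""
    q = max((A - c - t) / 3.0, 0.0)          # symmetric output from the FOC
    a = min(t / (2.0 * k), q)                # abatement FOC: t = 2*k*a
    return q, a

def welfare(t):
    q, a = stage2(t)
    Q, e = 2 * q, 2 * (q - a)                # total output and emissions
    cs = Q * Q / 2.0                         # consumer surplus
    profit = 2 * ((A - Q) * q - c * q - t * (q - a) - k * a * a)
    return cs + profit + lam * t * e - d * e # extra weight lam on tax revenue

# Stage 1: the government grid-searches the welfare-maximising tax.
best_t = max((i / 100.0 for i in range(0, 801)), key=welfare)
```

With `lam > 1` the objective overweights environmental tax revenue relative to the other terms, mirroring the generalization described in the abstract.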
Abstract:
Competition between public and private firms exists in a range of industries such as telecommunications, electricity, natural gas and airlines, as well as services including hospitals, banking and education. Some authors have studied mixed oligopolies under Cournot competition (firms move simultaneously) and others have considered Stackelberg models (firms move sequentially). Tomaru [1] analyzed, in a Cournot model, how decision-making on cost-reducing R&D investment by a domestic public firm is affected by privatization when it competes in the domestic market with a foreign firm. He shows that privatization of the domestic public firm lowers productive efficiency and deteriorates domestic social welfare. In this paper, we examine the same question but in a Stackelberg formulation instead of Cournot. The model is a three-stage game. In the first stage, the domestic firm chooses the amount of cost-reducing R&D investment. Then, the firms compete à la Stackelberg. Two cases are considered: (i) the domestic firm is the leader; (ii) the foreign firm is the leader. We show that the results obtained in [1] for Cournot competition are robust in the sense that they also hold when firms move sequentially.
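The game's structure (R&D choice followed by sequential quantity competition) can be sketched by backward induction. For simplicity the leader here maximises profit, whereas in the paper the public firm maximises domestic welfare; demand, costs and all parameter values are illustrative assumptions, not Tomaru's model.

```python
# Minimal backward-induction sketch: choose cost-reducing R&D x in stage one,
# then play Stackelberg in quantities. Linear demand and a profit-maximising
# leader are simplifying assumptions made only to illustrate the method.
A, c_d, c_f, gamma = 12.0, 4.0, 4.0, 3.0     # demand intercept, costs, R&D cost curvature

def follower(q_l):
    """Foreign follower's best response to the leader's quantity."""
    return max((A - q_l - c_f) / 2.0, 0.0)

def leader_profit(x, q_l):
    """Leader's profit given R&D x (which lowers its cost) and quantity q_l."""
    p = A - q_l - follower(q_l)
    return (p - (c_d - x)) * q_l - gamma * x * x / 2.0

grid = [i / 20.0 for i in range(0, 161)]     # search over both x and q_l
x_star, q_star = max(((x, q) for x in grid for q in grid),
                     key=lambda xq: leader_profit(*xq))
```

Under these assumptions the optimum is interior: the leader invests a positive amount of R&D and the follower stays active in the market.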
Abstract:
Sectorization means dividing a whole into parts (sectors), a procedure that occurs in many contexts and applications, usually to achieve some goal or to facilitate an activity. The objective may be better organization, or the simplification of a large problem into smaller sub-problems. Examples of applications are political districting and sales territory division. When designing or comparing sectors, characteristics such as contiguity, equilibrium and compactness are usually considered. This paper presents and describes new generic measures and proposes a new measure, desirability, connected with the idea of preference.
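Two of the classical criteria mentioned above can be made concrete with simple formulas: equilibrium as one minus the coefficient of variation of sector "weights", and compactness as the mean distance of a sector's points to its centroid. These are common textbook choices, not the specific measures proposed in the paper.

```python
# Simple sketches of two sectorization criteria: equilibrium (balance across
# sectors) and compactness (tightness around the centroid). Illustrative only.
import math

def equilibrium(weights):
    """1 - coefficient of variation of sector weights (1.0 = perfectly balanced)."""
    m = sum(weights) / len(weights)
    sd = math.sqrt(sum((w - m) ** 2 for w in weights) / len(weights))
    return 1.0 - sd / m

def compactness(sector):
    """Mean distance of a sector's points to its centroid (lower = more compact)."""
    cx = sum(x for x, _ in sector) / len(sector)
    cy = sum(y for _, y in sector) / len(sector)
    return sum(math.hypot(x - cx, y - cy) for x, y in sector) / len(sector)

sectors = [[(0, 0), (1, 0), (0, 1)], [(5, 5), (6, 5), (5, 6)]]
balance = equilibrium([len(s) for s in sectors])   # equal-sized sectors
```

The same `equilibrium` function works for any weight attached to a sector (number of clients, demand, workload), which is how balance criteria are usually parametrized.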
Abstract:
Due to progress in embedded technologies, manufacturers are able to pack their shop-floor manufacturing resources with ever more complex functionalities. This technological progression is radically changing the way production systems are designed and deployed, as well as monitored and controlled. The dissemination of smart devices inside production processes confers new visibility on the production system while enabling more efficient and effective management of the operations. By turning current manufacturing resource functionalities into services based on a Service Oriented Architecture (SOA), so as to expose them as services to the user, the manufacturing resource/service pairing will push the visibility of the entire manufacturing enterprise to another level, enable the global optimization of the operations and processes of a production system, and, at the same time, support its accommodation of operational spikes easily and with reduced impact on production. The present work implements a Cloud Manufacturing infrastructure for achieving resource/service added value, i.e. to facilitate the creation of services that are compositions of the currently available atomic services. In this context, manufacturing resource virtualization (i.e. the formalization of resource capabilities into services accessible inside and outside the enterprise) and semantic representation/description are the pillars for achieving resource service composition. In conclusion, the present work acts on the manufacturing resource layer, where physical resources and shop-floor capabilities are provided to the user as SaaS (Software as a Service) and/or IaaS (Infrastructure as a Service).
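The atomic-service composition idea can be sketched in a few lines: virtualized resource capabilities are registered as named callable services, and a composite service is an ordered pipeline of atomic ones. The registry structure and service names are hypothetical, not part of the paper's infrastructure.

```python
# Toy sketch of atomic-service registration and composition (SOA style).
registry = {}

def service(name):
    """Register a resource capability as a named atomic service."""
    def wrap(fn):
        registry[name] = fn
        return fn
    return wrap

@service("drill")
def drill(part):
    return part + "+drilled"

@service("paint")
def paint(part):
    return part + "+painted"

def compose(*names):
    """Build a composite service by chaining registered atomic services."""
    def composite(part):
        for n in names:
            part = registry[n](part)
        return part
    return composite

drill_then_paint = compose("drill", "paint")
result = drill_then_paint("blank")           # -> "blank+drilled+painted"
```

In a real Cloud Manufacturing setting the registry would be a discoverable service catalogue with semantic descriptions, which is what enables automatic composition rather than the hand-written pipeline shown here.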
Abstract:
This paper compares the forecasting performance of different models which have been proposed for forecasting in the presence of structural breaks. These models differ in their treatment of the break process, the parameters defining the model which applies in each regime and the out-of-sample probability of a break occurring. In an extensive empirical evaluation involving many important macroeconomic time series, we demonstrate the presence of structural breaks and their importance for forecasting in the vast majority of cases. However, we find no single forecasting model consistently works best in the presence of structural breaks. In many cases, the formal modeling of the break process is important in achieving good forecast performance. However, there are also many cases where simple, rolling OLS forecasts perform well.
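The "simple, rolling OLS" benchmark mentioned above can be sketched as a rolling-window AR(1) regression: at each step the model is refit on only the most recent `window` observations, so older (possibly pre-break) data drop out of the fit. Purely illustrative.

```python
# Rolling-window AR(1) one-step-ahead forecasts, refit at every step.
def ols(xs, ys):
    """Simple-regression intercept and slope (assumes xs is not constant)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

def rolling_ar1_forecast(series, window):
    """Forecast y_t from y_{t-1} using only the last `window` observations."""
    preds = []
    for t in range(window, len(series)):
        xs = series[t - window:t - 1]        # lagged values in the window
        ys = series[t - window + 1:t]        # their successors
        a, b = ols(xs, ys)
        preds.append(a + b * series[t - 1])
    return preds

series = [1.0, 1.2, 1.1, 1.3, 5.0, 5.2, 5.1, 5.3, 5.2]  # level shift (break) at t=4
preds = rolling_ar1_forecast(series, window=4)
```

The short window is exactly the robustness device the paper discusses: after the break, the pre-break observations leave the estimation sample quickly.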
Abstract:
This paper describes a new reliable method, based on modal interval analysis (MIA) and set inversion (SI) techniques, for the characterization of solution sets defined by quantified constraint satisfaction problems (QCSP) over continuous domains. The presented methodology, called quantified set inversion (QSI), can be used over a wide range of engineering problems involving uncertain nonlinear models. Finally, an application to parameter identification is presented.
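The plain set-inversion scheme on which QSI builds (SIVIA-style interval bisection; the quantified/modal extensions of the paper are not reproduced here) can be sketched on a one-dimensional example: characterise {x : x² ∈ [1, 4]} by recursively classifying interval boxes as inside, outside, or undetermined.

```python
# Set inversion by interval bisection: inner boxes, discarded boxes, and
# small undetermined boundary boxes. Illustrative 1-D example.
def sq_range(lo, hi):
    """Exact image of [lo, hi] under x -> x^2 (a natural interval extension)."""
    cands = [lo * lo, hi * hi]
    return (0.0 if lo <= 0.0 <= hi else min(cands)), max(cands)

def sivia(lo, hi, y_lo, y_hi, eps, inside, boundary):
    f_lo, f_hi = sq_range(lo, hi)
    if y_lo <= f_lo and f_hi <= y_hi:        # image entirely inside target
        inside.append((lo, hi))
    elif f_hi < y_lo or f_lo > y_hi:         # image misses target: discard
        pass
    elif hi - lo < eps:                      # too small to split further
        boundary.append((lo, hi))
    else:                                    # undetermined: bisect
        mid = (lo + hi) / 2.0
        sivia(lo, mid, y_lo, y_hi, eps, inside, boundary)
        sivia(mid, hi, y_lo, y_hi, eps, inside, boundary)

inside, boundary = [], []
sivia(-3.0, 3.0, 1.0, 4.0, 0.01, inside, boundary)
area = sum(hi - lo for lo, hi in inside)     # approximates |[-2,-1] U [1,2]| = 2
```

The inner boxes give a guaranteed subset of the solution set and the boundary boxes bound the approximation error, which is the "reliable" character the paper refers to.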
Abstract:
Teicoplanin is frequently administered to treat Gram-positive infections in pediatric patients. However, not enough is known about the pharmacokinetics (PK) of teicoplanin in children to justify the optimal dosing regimen. The aim of this study was to determine the population PK of teicoplanin in children and evaluate the current dosage regimens. A PK hospital-based study was conducted. Current dosage recommendations were used for children up to 16 years of age. Thirty-nine children were recruited. Serum samples were collected at the first dose interval (1, 3, 6, and 24 h) and at steady state. A standard 2-compartment PK model was developed, followed by structural models that incorporated weight. Weight was allowed to affect clearance (CL) using linear and allometric scaling terms. The linear model best accounted for the observed data and was subsequently chosen for Monte Carlo simulations. The PK parameter medians/means (standard deviation [SD]) were as follows: CL, [0.019/0.023 (0.01)] × weight liters/h/kg of body weight; volume, 2.282/4.138 liters (4.14 liters); first-order rate constant from the central to peripheral compartment (Kcp), 0.474/3.876 h(-1) (8.16 h(-1)); and first-order rate constant from peripheral to central compartment (Kpc), 0.292/3.994 h(-1) (8.93 h(-1)). The percentage of patients with a minimum concentration of drug in serum (Cmin) of <10 mg/liter was 53.85%. The median/mean (SD) total population area under the concentration-time curve (AUC) was 619/527.05 mg · h/liter (166.03 mg · h/liter). Based on Monte Carlo simulations, only 30.04% (median AUC, 507.04 mg · h/liter), 44.88% (494.1 mg · h/liter), and 60.54% (452.03 mg · h/liter) of patients weighing 50, 25, and 10 kg, respectively, attained trough concentrations of >10 mg/liter by day 4 of treatment. The teicoplanin population PK is highly variable in children, with a wider AUC distribution spread than for adults. 
Therapeutic drug monitoring should be a routine requirement to minimize suboptimal concentrations. (This trial has been registered in the European Clinical Trials Database Registry [EudraCT] under registration number 2012-005738-12.).
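A target-attainment simulation of the kind described above can be sketched as follows. The mean clearance (0.023 × weight L/h) is taken from the abstract; the dosing rate, the between-subject variability (40% CV, lognormal) and the AUC target of 400 mg·h/L are illustrative assumptions, not the study's simulation settings.

```python
# Minimal Monte Carlo sketch: sample clearance with between-subject
# variability and compute steady-state AUC = daily dose / CL.
import math, random

random.seed(1)

def attainment(weight_kg, daily_dose_mg_per_kg=10.0, n=10_000):
    cl_typ = 0.023 * weight_kg                   # typical clearance, L/h (from the abstract)
    sigma = math.sqrt(math.log(1 + 0.4 ** 2))    # lognormal sd giving ~40% CV (assumed)
    hits = 0
    for _ in range(n):
        cl = cl_typ * math.exp(random.gauss(0.0, sigma))
        auc = daily_dose_mg_per_kg * weight_kg / cl   # mg*h/L at steady state
        hits += auc >= 400.0                          # assumed AUC target
    return hits / n

attain_25kg = attainment(25.0)
```

Note that with per-kg dosing and clearance linear in weight, weight cancels out of this toy AUC, so attainment differences across weights (as reported in the abstract) must come from the richer two-compartment model and covariate structure, not from this simplification.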
Abstract:
Minimax lower bounds for concept learning state, for example, that for each sample size $n$ and learning rule $g_n$, there exists a distribution of the observation $X$ and a concept $C$ to be learnt such that the expected error of $g_n$ is at least a constant times $V/n$, where $V$ is the VC dimension of the concept class. However, these bounds do not tell anything about the rate of decrease of the error for a fixed distribution-concept pair. In this paper we investigate minimax lower bounds in such a stronger sense. We show that for several natural $k$-parameter concept classes, including the class of linear halfspaces, the class of balls, the class of polyhedra with a certain number of faces, and a class of neural networks, for any sequence of learning rules $\{g_n\}$, there exists a fixed distribution of $X$ and a fixed concept $C$ such that the expected error is larger than a constant times $k/n$ for infinitely many $n$. We also obtain such strong minimax lower bounds for the tail distribution of the probability of error, which extend the corresponding minimax lower bounds.
Abstract:
The Drivers Scheduling Problem (DSP) consists of selecting a set of duties for vehicle drivers (for example bus, train, plane or boat drivers or pilots) for the transportation of passengers or goods. This is a complex problem because it involves several constraints related to labour and company rules and can also present different evaluation criteria and objectives. Developing an adequate model that represents the real problem as closely as possible is an important research area. The main objective of this research work is to present new mathematical models for the DSP that capture all the complexity of the drivers scheduling problem, and also to demonstrate that the solutions of these models can be easily implemented in real situations. This issue has been recognized by several authors as an important problem in public transportation. The most well-known and general formulation for the DSP is a Set Partitioning/Set Covering model (SPP/SCP). However, to a large extent these models simplify some of the specific business aspects and issues of real problems. This makes it difficult to use them in automatic planning systems, because the schedules obtained must be modified manually before being implemented in real situations. Based on extensive passenger transportation experience with bus companies in Portugal, we propose new alternative models for the DSP. These models are also based on Set Partitioning/Covering models; however, they take into account the bus operator's issues and the user's perspective and environment. We follow the steps of the Operations Research methodology: identify the problem; understand the system; formulate a mathematical model; verify the model; select the best alternative; present the results of the analysis; and implement and evaluate. All the processes are carried out with the close participation and involvement of the final users from different transportation companies.
The planners' opinions and main criticisms are used to improve the proposed model in a continuous enrichment process. The final objective is a model that can be incorporated into an information system and used as an automatic tool to produce driver schedules. Therefore, the criterion for evaluating the models is the capacity to generate real and useful schedules that can be implemented without many manual adjustments or modifications. We have considered the following measures of model quality: simplicity, solution quality and applicability. We tested the alternative models with a set of real data obtained from several different transportation companies and analyzed the optimal schedules obtained with respect to the applicability of the solution to the real situation. To do this, the schedules were analyzed by the planners to determine their quality and applicability. The main result of this work is the proposal of new mathematical models for the DSP that better represent the realities of passenger transportation operators and lead to better schedules that can be implemented directly in real situations.
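The Set Partitioning (SPP) base formulation referred to above can be stated in its standard textbook form (the paper's extended, operator-aware models build on this): let $I$ be the set of pieces of work, $J$ the set of feasible duties, $c_j$ the cost of duty $j$, and $a_{ij}=1$ when duty $j$ covers piece $i$:

```latex
\min \sum_{j \in J} c_j x_j
\qquad \text{s.t.} \quad \sum_{j \in J} a_{ij} x_j = 1 \quad (i \in I),
\qquad x_j \in \{0,1\} \quad (j \in J).
```

Relaxing the equality to $\geq$ gives the Set Covering (SCP) variant, which tolerates over-coverage (e.g. a driver travelling as a passenger).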
Abstract:
We obtain minimax lower and upper bounds for the expected distortion redundancy of empirically designed vector quantizers. We show that the mean squared distortion of a vector quantizer designed from $n$ i.i.d. data points using any design algorithm is at least $\Omega(n^{-1/2})$ away from the optimal distortion for some distribution on a bounded subset of ${\cal R}^d$. Together with existing upper bounds, this result shows that the minimax distortion redundancy for empirical quantizer design, as a function of the size of the training data, is asymptotically on the order of $n^{-1/2}$. We also derive a new upper bound for the performance of the empirically optimal quantizer.
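The "empirically optimal quantizer" is the codebook minimising empirical mean-squared distortion on the $n$ training points. A standard way to approximate it is Lloyd's algorithm (k-means), sketched here in one dimension; this illustrates the object being analysed, not the bound-achieving construction of the paper.

```python
# Lloyd's algorithm for a 1-D codebook, plus the empirical distortion it minimises.
def lloyd(points, k, iters=50):
    codebook = sorted(points)[:: max(len(points) // k, 1)][:k]   # spread-out init
    for _ in range(iters):
        cells = [[] for _ in codebook]
        for p in points:                       # nearest-codepoint partition
            i = min(range(len(codebook)), key=lambda j: (p - codebook[j]) ** 2)
            cells[i].append(p)
        codebook = [sum(c) / len(c) if c else codebook[i]        # centroid update
                    for i, c in enumerate(cells)]
    return codebook

def distortion(points, codebook):
    """Empirical mean-squared distortion of a codebook on the sample."""
    return sum(min((p - c) ** 2 for c in codebook) for p in points) / len(points)

data = [0.1, 0.2, 0.15, 0.9, 1.0, 0.95]
cb = lloyd(data, k=2)
```

Lloyd's iterations only converge to a local optimum in general, which is one reason the theoretical analysis is stated for the (possibly intractable) exact empirical minimiser.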
Abstract:
The classical binary classification problem is investigated when it is known in advance that the posterior probability function (or regression function) belongs to some class of functions. We introduce and analyze a method which effectively exploits this knowledge. The method is based on minimizing the empirical risk over a carefully selected "skeleton" of the class of regression functions. The skeleton is a covering of the class based on a data-dependent metric, especially fitted for classification. A new scale-sensitive dimension is introduced which is more useful for the studied classification problem than other, previously defined, dimension measures. This fact is demonstrated by performance bounds for the skeleton estimate in terms of the new dimension.
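The core estimation step, empirical risk minimisation over a finite "skeleton", can be sketched with a toy class of one-dimensional threshold classifiers. The data-dependent covering construction of the paper is not reproduced; the finite candidate set below simply stands in for a skeleton.

```python
# ERM over a finite set of threshold classifiers x -> 1{x > thr}.
def empirical_risk(thr, sample):
    """Fraction of (x, y) pairs misclassified by the threshold rule."""
    return sum((x > thr) != y for x, y in sample) / len(sample)

def erm(skeleton, sample):
    """Return the skeleton classifier minimising empirical risk."""
    return min(skeleton, key=lambda thr: empirical_risk(thr, sample))

sample = [(0.1, 0), (0.3, 0), (0.4, 0), (0.6, 1), (0.8, 1)]
skeleton = [0.2, 0.5, 0.7]      # a finite stand-in for a cover of the class
best = erm(skeleton, sample)
```

Because the minimisation runs over finitely many candidates, uniform deviation bounds over the skeleton translate directly into performance bounds for the selected rule, which is the mechanism the abstract alludes to.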
Abstract:
In liberalized electricity markets, generation companies must build an hourly bid that is sent to the market operator. The price at which the energy will be paid is unknown during the bidding process and has to be forecast. In this work we apply forecasting factor models to this framework and study their suitability.
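A toy diffusion-index style forecast in the spirit of the factor models mentioned above: extract a single common factor from a panel of related series (here simply the cross-sectional mean; in practice the first principal component is used) and forecast the price by regressing it on the lagged factor. The panel values and series names are invented for illustration.

```python
# One-factor forecast: factor = cross-sectional mean, then lagged regression.
def ols(xs, ys):
    """Simple-regression intercept and slope (assumes xs is not constant)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

panel = [  # rows = time, columns = related hourly series (hypothetical)
    [30.0, 32.0, 29.0],
    [35.0, 36.0, 34.0],
    [40.0, 41.0, 39.0],
    [45.0, 46.0, 44.0],
]
price = [31.0, 36.0, 41.0, 46.0]         # target spot price (hypothetical)

factor = [sum(row) / len(row) for row in panel]
a, b = ols(factor[:-1], price[1:])       # price_{t+1} = a + b * factor_t
forecast = a + b * factor[-1]            # one-step-ahead forecast
```

Replacing the cross-sectional mean with principal components and adding several factors gives the standard factor-model forecasting setup the paper studies.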
Abstract:
In the context of two-sided markets, we first consider that players can choose where to carry out their transactions. We show that the game corresponding to this situation, which is represented by the maximum of a finite set of assignment games, may fail to be balanced. We then provide conditions for balancedness of the game and, in that case, analyze some properties of its core. Secondly, we consider that players can transact in several markets simultaneously and then add up the gains obtained. The corresponding game, represented by the sum of a finite set of assignment games, is balanced. Moreover, under certain conditions, the sum of the cores of the two assignment games coincides with the core of the sum game.
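A small brute-force illustration of assignment games: the worth of the grand coalition is the value of an optimal matching between the two market sides, the "max" game picks the best single market, and the "sum" game adds the optima of the separate markets. The profit matrices are toy examples.

```python
# Worth of assignment-game coalitions via exhaustive matching search.
from itertools import permutations

def assignment_value(matrix):
    """Optimal matching value for a square buyer x seller profit matrix."""
    n = len(matrix)
    return max(sum(matrix[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

market1 = [[5, 8], [7, 6]]
market2 = [[2, 3], [4, 1]]
v_max = max(assignment_value(market1), assignment_value(market2))   # trade in one market
v_sum = assignment_value(market1) + assignment_value(market2)       # trade in both
```

The exhaustive search is exponential in the number of players; for real instances the optimal matching is computed by linear programming or the Hungarian algorithm, but the small example suffices to evaluate the max and sum games above.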