962 results for OPERATIONS RESEARCH
Abstract:
Master's dissertation in Systems Engineering
Abstract:
This paper compares the forecasting performance of different models which have been proposed for forecasting in the presence of structural breaks. These models differ in their treatment of the break process, the parameters defining the model which applies in each regime and the out-of-sample probability of a break occurring. In an extensive empirical evaluation involving many important macroeconomic time series, we demonstrate the presence of structural breaks and their importance for forecasting in the vast majority of cases. However, we find no single forecasting model consistently works best in the presence of structural breaks. In many cases, the formal modeling of the break process is important in achieving good forecast performance. However, there are also many cases where simple, rolling OLS forecasts perform well.
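A minimal sketch (hypothetical, not from the paper) of the "simple, rolling OLS" benchmark: an AR(1) model refit by least squares on a rolling window, so that observations from before a structural break eventually drop out of the estimation sample.

```python
def ols_ar1(window):
    """Fit y_t = a + b * y_{t-1} by ordinary least squares on one window."""
    x, y = window[:-1], window[1:]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sxx
    a = my - b * mx
    return a, b

def rolling_ols_forecast(series, window=20):
    """One-step-ahead forecasts using only the last `window` observations."""
    forecasts = []
    for t in range(window, len(series)):
        a, b = ols_ar1(series[t - window:t])
        forecasts.append(a + b * series[t - 1])
    return forecasts

# Toy series with a structural break (a level shift) at t = 60.
series = [1.0 + 0.02 * t for t in range(60)] + [5.0 + 0.02 * t for t in range(60)]
fc = rolling_ols_forecast(series, window=20)
```

Once the window has moved past the break, the fit again uses only post-break data, which is why this naive scheme can compete with formal break models.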
Abstract:
This paper describes a new reliable method, based on modal interval analysis (MIA) and set inversion (SI) techniques, for the characterization of solution sets defined by quantified constraint satisfaction problems (QCSPs) over continuous domains. The presented methodology, called quantified set inversion (QSI), can be applied to a wide range of engineering problems involving uncertain nonlinear models. Finally, an application to parameter identification is presented.
Abstract:
Teicoplanin is frequently administered to treat Gram-positive infections in pediatric patients. However, not enough is known about the pharmacokinetics (PK) of teicoplanin in children to support an optimal dosing regimen. The aim of this study was to determine the population PK of teicoplanin in children and evaluate the current dosage regimens. A PK hospital-based study was conducted. Current dosage recommendations were used for children up to 16 years of age. Thirty-nine children were recruited. Serum samples were collected at the first dose interval (1, 3, 6, and 24 h) and at steady state. A standard 2-compartment PK model was developed, followed by structural models that incorporated weight. Weight was allowed to affect clearance (CL) using linear and allometric scaling terms. The linear model best accounted for the observed data and was subsequently chosen for Monte Carlo simulations. The PK parameter medians/means (standard deviation [SD]) were as follows: CL, [0.019/0.023 (0.01)] × weight liters/h/kg of body weight; volume, 2.282/4.138 liters (4.14 liters); first-order rate constant from the central to peripheral compartment (Kcp), 0.474/3.876 h⁻¹ (8.16 h⁻¹); and first-order rate constant from the peripheral to central compartment (Kpc), 0.292/3.994 h⁻¹ (8.93 h⁻¹). The percentage of patients with a minimum concentration of drug in serum (Cmin) of <10 mg/liter was 53.85%. The median/mean (SD) total population area under the concentration-time curve (AUC) was 619/527.05 mg · h/liter (166.03 mg · h/liter). Based on Monte Carlo simulations, only 30.04% (median AUC, 507.04 mg · h/liter), 44.88% (494.1 mg · h/liter), and 60.54% (452.03 mg · h/liter) of patients weighing 50, 25, and 10 kg, respectively, attained trough concentrations of >10 mg/liter by day 4 of treatment. The teicoplanin population PK is highly variable in children, with a wider AUC distribution spread than for adults.
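A minimal Monte Carlo sketch in the spirit of the simulations above. Clearance is drawn from the reported linear model, CL = θ × weight, with θ ~ Normal(0.023, 0.01) truncated at zero (mean and SD taken from the abstract). The daily dose of 10 mg/kg and the 400 mg · h/liter threshold are purely illustrative assumptions, not the study's dosing recommendations or targets.

```python
import random

def simulate_auc(weight_kg, dose_mg_per_kg=10.0, n=10_000, seed=1):
    """Sample steady-state daily AUC (mg·h/L) as daily dose / CL."""
    rng = random.Random(seed)
    daily_dose = dose_mg_per_kg * weight_kg       # mg/day (illustrative dose)
    aucs = []
    while len(aucs) < n:
        theta = rng.gauss(0.023, 0.01)            # CL slope, L/h/kg (from abstract)
        if theta <= 0:
            continue                              # truncate the normal at zero
        cl = theta * weight_kg                    # clearance, L/h
        aucs.append(daily_dose / cl)              # AUC over one day = dose / CL
    return aucs

aucs = simulate_auc(weight_kg=25)
frac_below_400 = sum(a < 400 for a in aucs) / len(aucs)
```

Because the variance of θ is large relative to its mean, the simulated AUC distribution is highly skewed, which is the mechanism behind the wide spread the authors report.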
Therapeutic drug monitoring should be a routine requirement to minimize suboptimal concentrations. (This trial has been registered in the European Clinical Trials Database Registry [EudraCT] under registration number 2012-005738-12.)
Abstract:
Minimax lower bounds for concept learning state, for example, that for each sample size $n$ and learning rule $g_n$, there exists a distribution of the observation $X$ and a concept $C$ to be learnt such that the expected error of $g_n$ is at least a constant times $V/n$, where $V$ is the VC dimension of the concept class. However, these bounds do not tell anything about the rate of decrease of the error for a fixed distribution-concept pair. In this paper we investigate minimax lower bounds in this stronger sense. We show that for several natural $k$-parameter concept classes, including the class of linear halfspaces, the class of balls, the class of polyhedra with a certain number of faces, and a class of neural networks, for any sequence of learning rules $\{g_n\}$, there exists a fixed distribution of $X$ and a fixed concept $C$ such that the expected error is larger than a constant times $k/n$ for infinitely many $n$. We also obtain such strong minimax lower bounds for the tail distribution of the probability of error, which extend the corresponding minimax lower bounds.
Abstract:
The Drivers Scheduling Problem (DSP) consists of selecting a set of duties for the drivers of vehicles such as buses, trains, planes or boats used to transport passengers or goods. It is a complex problem because it involves several constraints related to labour and company rules and can also present different evaluation criteria and objectives. Developing a model for this problem that represents the real problem as closely as possible is an important research area. The main objective of this research work is to present new mathematical models for the DSP that capture the full complexity of the drivers scheduling problem, and to demonstrate that the solutions of these models can be easily implemented in real situations. This has been recognized by several authors as an important problem in public transportation. The most well-known and general formulation for the DSP is a Set Partitioning/Set Covering model (SPP/SCP). However, to a large extent these models simplify some of the specific business aspects and issues of real problems. This makes them difficult to use in automatic planning systems, because the schedules obtained must be modified manually before they can be implemented in real situations. Based on extensive experience with passenger transportation in bus companies in Portugal, we propose new alternative models for the DSP. These models are also based on set partitioning/covering formulations; however, they take into account the bus operators' issues and the users' perspectives, opinions and environment. We follow the steps of the Operations Research methodology, which consist of: identify the problem; understand the system; formulate a mathematical model; verify the model; select the best alternative; present the results of the analysis; and implement and evaluate. All the processes are carried out with the close participation and involvement of the final users from different transportation companies.
The planners' opinions and main criticisms are used to improve the proposed model in a continuous enrichment process. The final objective is a model that can be incorporated into an information system and used as an automatic tool to produce driver schedules. Therefore, the criterion for evaluating the models is their capacity to generate real, useful schedules that can be implemented without many manual adjustments or modifications. We considered the following measures of model quality: simplicity, solution quality and applicability. We tested the alternative models on a set of real data obtained from several different transportation companies and analyzed the optimal schedules obtained with respect to the applicability of the solution to the real situation. To do this, the schedules were analyzed by the planners to determine their quality and applicability. The main result of this work is the proposal of new mathematical models for the DSP that better represent the realities of passenger transportation operators and lead to better schedules that can be implemented directly in real situations.
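A toy illustration of the set-partitioning formulation that underlies these models: each candidate duty covers a subset of trips at a given cost, and duties are chosen so that every trip is covered exactly once at minimum cost. All duties, trips and costs here are hypothetical, and exhaustive search stands in for the MILP solver used on real instances.

```python
from itertools import combinations

trips = {1, 2, 3, 4}
# (set of trips covered, cost) — hypothetical candidate duties.
duties = [({1, 2}, 8), ({3, 4}, 9), ({1, 3}, 7), ({2, 4}, 7), ({1, 2, 3, 4}, 18)]

def best_partition(trips, duties):
    """Cheapest subset of duties covering every trip exactly once."""
    best_cost, best_combo = float("inf"), None
    for r in range(1, len(duties) + 1):
        for combo in combinations(duties, r):
            covered = [t for d, _ in combo for t in d]
            if sorted(covered) == sorted(trips):   # exact cover: a partition
                cost = sum(c for _, c in combo)
                if cost < best_cost:
                    best_cost, best_combo = cost, combo
    return best_cost, best_combo

cost, schedule = best_partition(trips, duties)
```

In the set-covering relaxation, the exact-cover test would become "every trip covered at least once", which is how over-covering (e.g. deadheading drivers) is usually allowed.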
Abstract:
We obtain minimax lower and upper bounds for the expected distortion redundancy of empirically designed vector quantizers. We show that the mean squared distortion of a vector quantizer designed from $n$ i.i.d. data points using any design algorithm is at least $\Omega(n^{-1/2})$ away from the optimal distortion for some distribution on a bounded subset of ${\cal R}^d$. Together with existing upper bounds, this result shows that the minimax distortion redundancy for empirical quantizer design, as a function of the size of the training data, is asymptotically on the order of $n^{-1/2}$. We also derive a new upper bound for the performance of the empirically optimal quantizer.
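A sketch of empirical quantizer design: Lloyd's algorithm fits $k$ codepoints to $n$ training samples by alternating nearest-codepoint assignment and centroid updates, reducing the empirical mean squared distortion. This is one standard design procedure (the bounds above apply to any empirically designed quantizer); one-dimensional data is used for simplicity.

```python
def lloyd(data, codebook, iters=50):
    """Lloyd's algorithm: alternate nearest-neighbor assignment and centroids."""
    for _ in range(iters):
        cells = [[] for _ in codebook]
        for x in data:
            j = min(range(len(codebook)), key=lambda i: (x - codebook[i]) ** 2)
            cells[j].append(x)
        # Move each codepoint to the mean of its cell (keep it if the cell is empty).
        codebook = [sum(c) / len(c) if c else codebook[i]
                    for i, c in enumerate(cells)]
    return codebook

def distortion(data, codebook):
    """Empirical mean squared distortion of the quantizer."""
    return sum(min((x - q) ** 2 for q in codebook) for x in data) / len(data)

# Two well-separated clusters; a 2-point quantizer should land near 0 and 10.
data = [0.1, -0.2, 0.0, 0.3, 9.8, 10.1, 10.0, 9.9]
codebook = lloyd(data, [0.0, 1.0])
```

The gap between `distortion(data, codebook)` on fresh samples and the optimal distortion for the source is exactly the distortion redundancy the bounds above quantify.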
Abstract:
The classical binary classification problem is investigated when it is known in advance that the posterior probability function (or regression function) belongs to some class of functions. We introduce and analyze a method which effectively exploits this knowledge. The method is based on minimizing the empirical risk over a carefully selected "skeleton" of the class of regression functions. The skeleton is a covering of the class based on a data-dependent metric, especially fitted for classification. A new scale-sensitive dimension is introduced which is more useful for the studied classification problem than other, previously defined, dimension measures. This fact is demonstrated by performance bounds for the skeleton estimate in terms of the new dimension.
Abstract:
In liberalized electricity markets, generation companies must build an hourly bid that is sent to the market operator. The price at which the energy will be paid is unknown during the bidding process and has to be forecast. In this work we apply factor forecasting models in this framework and study their suitability.
Abstract:
Development of the mathematical models needed for the optimal control of the microgrid installed in the laboratories of the Institut de Recerca en Energia de Catalunya. The algorithms will first be implemented to simulate the microgrid's behaviour and will then be programmed directly on the microgrid's components to verify their correct operation.
Abstract:
In the context of two-sided markets, we first consider that players can choose where to carry out their transactions. We show that the game corresponding to this situation, which is represented by the maximum of a finite set of assignment games, may fail to be balanced. We then provide conditions for the balancedness of the game and, in that case, analyse some properties of its core. Secondly, we consider that players can carry out transactions in several markets simultaneously and then add up the profits obtained. The corresponding game, represented by the sum of a finite set of assignment games, is balanced. Moreover, under certain conditions, the sum of the cores of the two assignment games coincides with the core of the sum game.
Abstract:
This talk aims to present briefly the range of available tools, the terminology used and, in general, the methodological framework of exploratory statistics and data analysis, the paradigm of the discipline. Over recent years the discipline has not been turned upside down, but permanent updating is nevertheless necessary. Some tools that had barely been sketched have since been forged and tested, and new application domains have appeared. The relationship with dynamic neighbouring competitors (artificial intelligence, neural networks, data mining) needs to be made precise. The perspective on data analysis methods that I present obviously stems from a particular point of view; other points of view may be equally valid.
Abstract:
Annualising working hours (AH) is a means of achieving flexibility in the use of human resources in order to face the seasonal nature of demand. In Corominas et al. (1), two MILP models are used to solve the problem of planning staff working hours over an annual horizon. The costs due to overtime and to the employment of temporary workers are minimised, and the distribution of working time over the course of the year for each worker, as well as the distribution of working time provided by temporary workers, is regularised. In the aforementioned paper, the following is assumed: (i) the holiday weeks are fixed a priori, and (ii) workers from different categories who are able to perform a specific type of task have the same efficiency; moreover, the values of the binary variables (and others) in the second model are fixed to those obtained in the first model (thus, in the second model these intervene as constants and not as variables, resulting in an LP model). In the present paper, these assumptions are relaxed and a more general problem is solved. The computational experiments lead to the conclusion that MILP is a technique suited to dealing with the problem.
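A toy stand-in for the annualised-hours trade-off (not the paper's MILP; all demands, caps and costs here are hypothetical): each week a regular worker supplies 30, 40 or 50 hours subject to an annual cap, hours above 40 pay an overtime premium, and any shortfall against demand is covered by temporary workers. Exhaustive search replaces the MILP solver, which only works on an example this small.

```python
from itertools import product

demand = [45, 35, 50, 40]            # hours required per week (seasonal)
annual_cap = 160                     # regular hours allowed over the horizon
overtime_cost, temp_cost = 2.0, 3.0  # cost per hour above 40 / per temp hour

def plan_cost(hours):
    """Total cost of one weekly-hours plan; infeasible plans cost infinity."""
    if sum(hours) > annual_cap:
        return float("inf")
    cost = 0.0
    for h, d in zip(hours, demand):
        cost += overtime_cost * max(0, h - 40)   # overtime premium
        cost += temp_cost * max(0, d - h)        # temps fill the shortfall
    return cost

# Enumerate every assignment of {30, 40, 50} hours to the four weeks.
best = min(product([30, 40, 50], repeat=4), key=plan_cost)
```

Even in this tiny case the annual cap forces a compromise: working 50 hours in the peak week would be cheapest week by week, but it is infeasible over the horizon, so some demand is left to temporary workers instead.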