935 results for "Objective function values"


Relevance:

90.00%

Publisher:

Abstract:

Objective: In the presence of decarboxylase inhibitors, levodopa follows two-compartment kinetics and its effect is typically modelled using sigmoid Emax models. Pharmacokinetic modelling of the absorption phase of oral administration is problematic because of irregular gastric emptying. The purpose of this work was to identify and estimate a population pharmacokinetic-pharmacodynamic model for duodenal infusion of levodopa/carbidopa (Duodopa®) that can be used for in numero simulation of treatment strategies.

Methods: The modelling involved pooling data from two studies and fixing some parameters to values found in the literature (Chan et al., J Pharmacokinet Pharmacodyn. 2005 Aug;32(3-4):307-31). The first study involved 12 patients on 3 occasions and is described in Nyholm et al., Clinical Neuropharmacology 2003;26:156-63. The second study, PEDAL, involved 3 patients on 2 occasions. A bolus dose (normal morning dose plus 50%) was given after an overnight washout. Plasma samples and motor ratings (clinical assessment of motor function from video recordings on a treatment response scale between -3 and 3, where -3 represents severe parkinsonism and 3 represents severe dyskinesia) were collected repeatedly until the clinical effect had returned to baseline. At this point, the usual infusion rate was started and sampling continued for another two hours. Different structural absorption models and effect models were evaluated using the value of the objective function in the NONMEM package. Population mean parameter values, standard errors of the estimates (SE) and, where possible, interindividual/interoccasion variability (IIV/IOV) were estimated.

Results: Our results indicate that Duodopa absorption can be modelled with an absorption compartment with an added bioavailability fraction and a lag time. The most successful effect model was of sigmoid Emax type with a steep Hill coefficient and an effect compartment delay. Estimated parameter values are presented in the table.

Conclusions: The absorption and effect models were reasonably successful in fitting observed data and can be used in simulation experiments.
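The effect model the abstract settles on, a sigmoid Emax model with an effect-compartment delay, has a standard general form. The sketch below illustrates that structure only; the parameter values (ke0, EC50, the Hill coefficient gamma) and the toy plasma profile are placeholders, not estimates from the fitted NONMEM model.

```python
import numpy as np

def effect_site_concentration(t, cp, ke0):
    """First-order effect-compartment delay: dCe/dt = ke0 * (Cp - Ce).
    Integrated here with a simple Euler scheme over the sampled Cp(t)."""
    ce = np.zeros_like(cp)
    for i in range(1, len(t)):
        dt = t[i] - t[i - 1]
        ce[i] = ce[i - 1] + ke0 * (cp[i - 1] - ce[i - 1]) * dt
    return ce

def sigmoid_emax(ce, e0, emax, ec50, gamma):
    """Sigmoid Emax (Hill) model: E = E0 + Emax * Ce^gamma / (EC50^gamma + Ce^gamma)."""
    return e0 + emax * ce**gamma / (ec50**gamma + ce**gamma)

# Placeholder values for illustration only
t = np.linspace(0, 240, 241)                             # minutes
cp = 2.0 * np.exp(-t / 90.0) * (1 - np.exp(-t / 15.0))   # toy plasma profile
ce = effect_site_concentration(t, cp, ke0=0.05)
effect = sigmoid_emax(ce, e0=-3.0, emax=6.0, ec50=1.0, gamma=8.0)  # steep Hill
```

Here e0 = -3 and emax = 6 map the effect onto the -3..3 treatment response scale mentioned in the abstract; a steep gamma reproduces the near-switch-like clinical response.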

Relevance:

90.00%

Publisher:

Abstract:

We examine an efficient computer implementation of one method of deterministic global optimisation, the cutting angle method. In this method the objective function is approximated from below by a piecewise linear auxiliary function. The global minimum of the objective function is approximated by the sequence of minima of this auxiliary function. Computing the minima of the auxiliary function is a combinatorial problem, and we show that it can be effectively parallelised. We discuss the improvements made to the serial implementation of the cutting angle method, and ways of distributing computations across multiple processors on parallel and cluster computers.
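The cutting angle method builds a piecewise linear underestimator from sampled function values and repeatedly minimises it. As a hedged illustration of that idea, the sketch below implements the univariate sawtooth analogue (Piyavskii–Shubert type) for a Lipschitz function with known constant; the multivariate combinatorial subproblem and its parallelisation, which are the paper's subject, are not shown.

```python
def sawtooth_global_min(f, a, b, lipschitz, tol=1e-4, max_iter=200):
    """Minimise f on [a, b] via a piecewise linear underestimator.

    Each sample (x_i, f(x_i)) contributes a V-shaped lower bound
    f(x_i) - L*|x - x_i|; the pointwise max of these teeth is the
    auxiliary function, whose minima lie between adjacent samples.
    """
    xs, fs = [a, b], [f(a), f(b)]
    best_x, best_f = (a, fs[0]) if fs[0] < fs[1] else (b, fs[1])
    for _ in range(max_iter):
        pts = sorted(zip(xs, fs))
        cand = []
        for (x1, f1), (x2, f2) in zip(pts, pts[1:]):
            # intersection of the two teeth between x1 and x2
            x_new = 0.5 * (x1 + x2) + (f1 - f2) / (2 * lipschitz)
            lb = 0.5 * (f1 + f2) - 0.5 * lipschitz * (x2 - x1)
            cand.append((lb, x_new))
        lb, x_new = min(cand)          # global min of the underestimator
        if best_f - lb < tol:          # the bound proves near-optimality
            break
        fx = f(x_new)
        xs.append(x_new); fs.append(fx)
        if fx < best_f:
            best_x, best_f = x_new, fx
    return best_x, best_f
```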

Relevance:

90.00%

Publisher:

Abstract:

The molecular geometry, the three-dimensional arrangement of atoms in space, is a major factor determining the properties and reactivity of molecules, biomolecules and macromolecules. Stable molecular conformations can be computed by locating minima on the potential energy surface (PES). This is a very challenging global optimization problem because of the extremely large number of shallow local minima and the complicated landscape of the PES. This paper illustrates the mathematical and computational challenges of one important instance of the problem, the computation of the molecular geometry of oligopeptides, and proposes the use of the Extended Cutting Angle Method (ECAM) to solve it.

ECAM is a deterministic global optimization technique which computes tight lower bounds on the values of the objective function and fathoms those parts of the domain where the global minimum cannot reside. As with any domain partitioning scheme, its challenge is the extremely large partition of the domain required for accurate lower bounds. We address this challenge by providing an efficient combinatorial algorithm for calculating the lower bounds, and by combining ECAM with a local optimization method while preserving its deterministic character.
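The fathoming step follows the usual deterministic bound-and-fathom pattern: a sub-region is discarded once its lower bound exceeds the best value found so far, and a local optimizer may sharpen the incumbent without invalidating the bounds. The sketch below is a generic loop under assumed interfaces (region objects with a centre attribute, user-supplied lower_bound and split routines); it is not the ECAM combinatorial bound itself.

```python
import heapq
import itertools

def fathoming_search(f, lower_bound, split, root, local_opt=None, tol=1e-6):
    """Generic deterministic bound-and-fathom loop.

    f           -- objective, evaluated at a region's centre point
    lower_bound -- returns a valid lower bound of f on a region
    split       -- partitions a region into sub-regions
    local_opt   -- optional local refinement of an incumbent point
    Regions are assumed to expose a `.centre` attribute.
    """
    best = f(root.centre)
    if local_opt:
        best = min(best, local_opt(root.centre))
    tie = itertools.count()                      # heap tiebreaker
    heap = [(lower_bound(root), next(tie), root)]
    while heap:
        lb, _, region = heapq.heappop(heap)
        if lb >= best - tol:                     # fathom: minimum not here
            continue
        for sub in split(region):
            val = f(sub.centre)
            if local_opt:
                # local refinement only improves the incumbent,
                # so the deterministic bounds stay valid
                val = min(val, local_opt(sub.centre))
            best = min(best, val)
            sub_lb = lower_bound(sub)
            if sub_lb < best - tol:
                heapq.heappush(heap, (sub_lb, next(tie), sub))
    return best
```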


Relevance:

90.00%

Publisher:

Abstract:

Bounded uncertainty is a major challenge in real-life scheduling, as it increases risk and cost depending on the objective function. Bounded uncertainty provides only limited information about its nature: it gives the upper and lower bounds, with no information in between, in contrast to probability distributions and fuzzy membership functions. Bratley's algorithm is usually used for scheduling under earliest-start and due-date constraints. The proposed research uses interval computation to minimize the impact of bounded uncertainty in processing times on Bratley's algorithm, reducing the uncertainty of the estimate of the objective function. The proposed concept is to carry out the calculations on the interval values and approximate the end result, instead of approximating each interval first and then doing numerical calculations. This methodology gives a more certain estimate of the objective function.
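A minimal sketch of the idea, assuming a simple interval type and a completion-time recursion as the objective (the class and the example schedule are illustrative, not the authors' formulation): intervals are propagated through the whole calculation and a point estimate is taken only from the final interval.

```python
from dataclasses import dataclass

@dataclass
class Interval:
    lo: float
    hi: float
    def __add__(self, other):
        # interval addition: [a,b] + [c,d] = [a+c, b+d]
        return Interval(self.lo + other.lo, self.hi + other.hi)
    def midpoint(self):
        return 0.5 * (self.lo + self.hi)

def makespan_interval(processing_times, release_times):
    """Completion-time recursion C_j = max(C_{j-1}, r_j) + p_j with
    interval p_j: keep intervals throughout, approximate only at the end."""
    completion = Interval(0.0, 0.0)
    for p, r in zip(processing_times, release_times):
        start = Interval(max(completion.lo, r), max(completion.hi, r))
        completion = start + p
    return completion

jobs = [Interval(2.0, 3.0), Interval(4.0, 5.5), Interval(1.0, 2.0)]
releases = [0.0, 1.0, 6.0]
c_max = makespan_interval(jobs, releases)
print(c_max, "-> point estimate:", c_max.midpoint())  # round once, at the end
```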

Relevance:

90.00%

Publisher:

Abstract:

We analyze a discrete-type version of the common agency model with informed principals of Martimort and Moreira (2005) in the context of lobbying games. We begin by discussing issues related to the common-values nature of the model, i.e., the agent cares directly about the principal's utility function. With this feature the equilibrium of Martimort and Moreira (2005) is not valid. We argue in favour of one solution, although we are not able to fully characterize the equilibrium in this context. We then turn to an application: a modification of the Grossman and Helpman (1994) model of lobbying for tariff protection to incorporate asymmetric information (while disregarding the problem of common values) into the lobbies' objective function. We show that the main results of the original model do not hold and that lobbies may behave less aggressively towards the policy maker when there is private information in the lobbies' valuations of the tariffs.

Relevance:

90.00%

Publisher:

Abstract:

The usual programs for load flow calculation were, in general, developed with the simulation of electric energy transmission, subtransmission and distribution systems in mind. However, the mathematical methods and algorithms used in their formulations were mostly based on the characteristics of transmission systems alone, which were the main focus of engineers and researchers, even though the physical characteristics of these systems are quite different from those of distribution systems. In transmission systems the voltage levels are high and the lines are generally very long, so the capacitive and inductive effects that appear in the system have a considerable influence on the quantities of interest and should be taken into consideration. Moreover, the loads in transmission systems have a macro nature, representing, for example, cities, neighbourhoods or big industries; these loads are generally close to balanced, which reduces the need for a three-phase load flow methodology.

Distribution systems, on the other hand, present different characteristics: the voltage levels are low compared with transmission, which nearly cancels the capacitive effects of the lines. The loads are, in this case, transformers, to whose secondaries small consumers are connected, often single-phase ones, so the probability of finding an unbalanced circuit is high. The use of three-phase methodologies therefore becomes important. Besides, equipment like voltage regulators, which simultaneously uses the concepts of phase and line voltage in its operation, needs a three-phase methodology in order to allow the simulation of its real behaviour.

For these reasons, a method for three-phase load flow calculation was first developed in this work in order to simulate the steady-state behaviour of distribution systems. To this end, the Power Summation Algorithm was used as the basis for developing the three-phase method; this algorithm has already been widely tested and approved by researchers and engineers for the simulation of radial electric energy distribution systems, mainly in single-phase representation. In our formulation, lines are modelled as three-phase circuits, considering the magnetic coupling between the phases, while the earth effect is accounted for through the Carson reduction. It is important to point out that, although loads are normally connected to the transformers' secondaries, the hypothesis of star- or delta-connected loads on the primary circuit was also considered. To simulate voltage regulators, a new model was used that allows various types of configurations to be simulated according to their real operation. Finally, the possibility of representing switches with current measurement at various points of the feeder was considered: the loads are adjusted during the iterative process so that the current in each switch converges to the measured value specified in the input data.

In a second stage of the work, sensitivity parameters were derived from the described load flow with the objective of supporting subsequent optimization processes. These parameters are found by calculating the partial derivatives of one variable with respect to another, in general voltages, losses and reactive powers. After describing the calculation of the sensitivity parameters, the Gradient Method is presented, using these parameters to optimize an objective function defined for each type of study. The first study concerns the reduction of technical losses in a medium-voltage feeder through the installation of capacitor banks; the second concerns the correction of the voltage profile through the installation of capacitor banks or voltage regulators. For loss reduction, the objective function is the sum of the losses in all parts of the system; for voltage profile correction, it is the sum of the squared voltage deviations at each node with respect to the rated voltage. At the end of the work, results of applying the described methods to some feeders are presented in order to give insight into their performance and accuracy.
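As a rough illustration of the gradient step described above, the sketch below uses sensitivity coefficients (partial derivatives of node voltages with respect to reactive injections) to size capacitor banks against the squared-voltage-deviation objective; the linearised sensitivity matrix and node data are placeholders, not the formulation of this work.

```python
import numpy as np

def voltage_profile_objective(v, v_rated=1.0):
    """Sum of squared voltage deviations at each node (p.u.)."""
    return np.sum((v - v_rated) ** 2)

def gradient_step_qc(v, dv_dq, qc, step=0.5, v_rated=1.0):
    """One gradient step on the capacitive injections qc.

    dv_dq[i, k] ~ partial derivative of node voltage i w.r.t. reactive
    injection at candidate bus k (the sensitivity parameters), so
    dJ/dqc_k = sum_i 2 * (v_i - v_rated) * dv_dq[i, k].
    """
    grad = 2.0 * dv_dq.T @ (v - v_rated)
    return np.maximum(qc - step * grad, 0.0)   # capacitor sizes stay >= 0

# toy 4-node feeder with sagging voltages and two candidate capacitor buses
v = np.array([0.98, 0.96, 0.94, 0.93])
dv_dq = np.array([[0.01, 0.005],
                  [0.02, 0.010],
                  [0.02, 0.020],
                  [0.02, 0.030]])   # placeholder sensitivities
qc = np.zeros(2)
for _ in range(50):
    qc = gradient_step_qc(v + dv_dq @ qc, dv_dq, qc)
print(qc, voltage_profile_objective(v + dv_dq @ qc))
```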


Relevance:

90.00%

Publisher:

Abstract:

This work presents the application of a multiobjective evolutionary algorithm (MOEA) to the solution of the optimal power flow (OPF) problem. The OPF is modelled as a constrained nonlinear optimization problem, non-convex and large-scale, with continuous and discrete variables. The violated inequality constraints are treated as objectives of the problem. This strategy allows the physical and operational restrictions to be met without compromising the quality of the solutions found. The developed MOEA is based on Pareto optimality theory and employs a diversity-preserving mechanism to overcome premature convergence of the algorithm to locally optimal solutions. Fuzzy set theory is employed to extract the best compromises from the Pareto set. Results for the IEEE-30, RTS-96 and IEEE-354 test systems are presented to validate the efficiency of the proposed model and solution technique.
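Two ingredients named in the abstract, Pareto dominance and a fuzzy choice of the best compromise, have standard textbook forms; the sketch below shows those generic forms (the linear fuzzy membership is a common choice and is assumed here, not taken from the paper).

```python
import numpy as np

def dominates(f1, f2):
    """True if objective vector f1 Pareto-dominates f2 (minimisation)."""
    return np.all(f1 <= f2) and np.any(f1 < f2)

def fuzzy_best_compromise(front):
    """Pick the best compromise from a Pareto front (rows = solutions).

    Linear fuzzy membership per objective k:
        mu_ik = (f_k_max - f_ik) / (f_k_max - f_k_min)
    The solution with the highest normalised total membership wins.
    """
    front = np.asarray(front, dtype=float)
    f_min, f_max = front.min(axis=0), front.max(axis=0)
    mu = (f_max - front) / np.where(f_max > f_min, f_max - f_min, 1.0)
    score = mu.sum(axis=1) / mu.sum()
    return int(np.argmax(score)), score

# toy front with two objectives (e.g. losses, constraint violation)
front = [[0.10, 0.9], [0.20, 0.4], [0.35, 0.1]]
# keep only non-dominated solutions before choosing the compromise
nd = [f for f in front if not any(dominates(np.asarray(g), np.asarray(f))
                                  for g in front if g is not f)]
idx, scores = fuzzy_best_compromise(nd)
```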

Relevance:

90.00%

Publisher:

Abstract:

This work aims at modelling the crust, through the inversion of deep seismic refraction data, in terms of laterally homogeneous horizontal plane layers over a half-space. The forward model is given by the analytical expression of the travel-time curve as a function of the source-station distance and of the parameter vector of velocities and thicknesses of each layer, calculated along the seismic ray paths governed by Snell's law. Computing arrival times by this procedure requires a model in which velocities increase with depth, so the occurrence of low-velocity layers (LVL) is handled by reparameterizing the model, taking into account the fact that the top of the LVL acts only as a reflector of the seismic ray, not as a refractor. The inversion methodology aims not only at determining the possible solutions, but also at analysing the causes responsible for the ambiguity of the problem. The search region of the candidate solutions is constrained by upper and lower limits for each sought parameter, and by setting upper limits on the values of the critical distances calculated from the parameter vector. The inversion is performed using a curve-fitting optimization technique based on direct search in parameter space, called COMPLEX. This technique has the advantage that it can be used with any objective function, and it is quite practical for obtaining multiple solutions of the problem. Since the travel-time curve corresponds to a multi-function, the algorithm was adapted so as to simultaneously minimize several objective functions, with constraints on the parameters. The inversion is carried out so as to obtain a set of solutions representative of the existing universe. The ambiguity analysis, in turn, is performed by Q-mode factor analysis, through which it is possible to characterize the common properties present in the set of analysed solutions. Tests with synthetic and real data were performed using, as the initial approximation for the inversion process, the velocity and thickness values calculated directly from visual interpretation of the seismogram. For the former, seismograms computed by the reflectivity method for different models were used. The tests with real data, in turn, used data extracted from one of the seismograms collected by the Lithospheric Seismic Profile in Britain (LISPB) project, in the northern region of Great Britain. In all tests it was verified that the geometry of the model carries the greater weight in the ambiguity of the problem, while the physical parameters show only slight variations across the set of solutions obtained.
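For reference, the analytical travel-time curve for a head wave refracted along the top of layer n in a stack of horizontal homogeneous layers, consistent with the Snell's-law ray tracing described above, has the classical form

$$ t_n(x) = \frac{x}{v_n} + \sum_{i=1}^{n-1} \frac{2 h_i \cos\theta_i}{v_i}, \qquad \sin\theta_i = \frac{v_i}{v_n}, $$

where x is the source-station distance, v_i and h_i are the velocity and thickness of layer i, and θ_i is the incidence angle in layer i given by Snell's law.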

Relevance:

90.00%

Publisher:

Abstract:

The Common Reflection Surface (CRS) stacking method produces simulated zero-offset (ZO) sections by summing the seismic events of multi-coverage data contained in stacking surfaces. The method does not depend on a velocity model of the medium; it only requires a priori knowledge of the near-surface velocity. The simulation of ZO sections by this stacking method uses a second-order hyperbolic approximation of the paraxial-ray traveltime to define the stacking surface, or CRS stacking operator. For 2D media this operator depends on three kinematic attributes of two hypothetical waves (the NIP and N waves), observed at the emergence point of the normal-incidence central ray: the emergence angle of the zero-offset central ray (β0), the radius of curvature of the normal-incidence-point wave (RNIP) and the radius of curvature of the normal wave (RN). The optimization problem in the CRS method therefore consists in determining, from the seismic data, the three optimal parameters (β0, RNIP, RN) associated with each sampling point of the ZO section to be simulated. The simultaneous determination of these parameters can be accomplished by multidimensional global search (global optimization) processes, using some coherence criterion as the objective function. The optimization problem in the CRS method is very important for good performance regarding both the quality of the results and, especially, the computational cost, compared with the methods traditionally used in the seismic industry. There are several search strategies for determining these parameters, based on systematic searches or on optimization algorithms, estimating one parameter at a time, two, or all three parameters simultaneously. Within the global-optimization search strategy, these three parameters can be estimated by two procedures: in the first, the three parameters are estimated simultaneously; in the second, two parameters (β0, RNIP) are first determined simultaneously and the third parameter (RN) is then found using the values of the two already known. This work presents the application and comparison of four global optimization algorithms for finding the optimal CRS parameters: Simulated Annealing (SA), Very Fast Simulated Annealing (VFSA), Differential Evolution (DE) and Controlled Random Search - 2 (CRS2). As important results, the application of each optimization method is presented, together with a comparison of the methods regarding effectiveness, efficiency and reliability in determining the best CRS parameters. Subsequently, applying the global search strategies for the determination of these parameters by means of the VFSA method, which showed the best performance, CRS stacking was carried out on the Marmousi data: one CRS stack using two parameters (β0, RNIP) estimated by global search, and another CRS stack using all three parameters (β0, RNIP, RN), also estimated by global search.
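For reference, the second-order hyperbolic CRS traveltime approximation for 2D media, which defines the stacking operator in terms of the three attributes (β0, RNIP, RN), has the standard form

$$ t^2(x_m, h) = \left[ t_0 + \frac{2\sin\beta_0}{v_0}\,(x_m - x_0) \right]^2 + \frac{2\,t_0\cos^2\beta_0}{v_0} \left[ \frac{(x_m - x_0)^2}{R_N} + \frac{h^2}{R_{NIP}} \right], $$

where v_0 is the near-surface velocity, x_m the midpoint coordinate, h the half-offset, x_0 the emergence point of the central ray and t_0 the zero-offset traveltime; a coherence measure (typically semblance) of the data along this surface serves as the objective function of the global search.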

Relevance:

90.00%

Publisher:

Abstract:

Objective: To determine the prevalence of exercise-induced bronchoconstriction among elite long-distance runners in Brazil and whether training loads differ between athletes with and without exercise-induced bronchoconstriction. Methods: This was a cross-sectional study involving elite long-distance runners with neither current asthma symptoms nor a previous diagnosis of exercise-induced bronchoconstriction. All of the participants underwent eucapnic voluntary hyperpnea challenge and maximal cardiopulmonary exercise tests, as well as completing questionnaires regarding asthma symptoms and physical activity, in order to monitor their weekly training load. Results: Of the 86 male athletes recruited, 20 agreed to participate, of whom 5 (25%) were subsequently diagnosed with exercise-induced bronchoconstriction. There were no differences between the athletes with and without exercise-induced bronchoconstriction regarding anthropometric characteristics, peak oxygen consumption, baseline pulmonary function values, or reported asthma symptoms. The weekly training load was significantly lower among those with exercise-induced bronchoconstriction than among those without. Conclusions: In this sample of long-distance runners in Brazil, the prevalence of exercise-induced bronchoconstriction was high.

Relevance:

90.00%

Publisher:

Abstract:

The aim of this doctoral thesis is to develop a genetic-algorithm-based optimization method to find the best conceptual design architecture of an aero piston engine for given design specifications. Nowadays, the conceptual design of turbine airplanes starts with the aircraft specifications, after which the turbofan or turboprop best suited to the specific application is chosen. In the field of aeronautical piston engines, which has been dormant for several decades as interest shifted towards turbine aircraft, new materials with increased performance and properties have opened new possibilities for development. Moreover, the engine's modularity, given by the cylinder unit, makes it possible to design a specific engine for a given application. In many real engineering problems the number of design variables can be very high, with several non-linearities needed to describe the behaviour of the phenomena. In this case the objective function has many local extrema, but the designer is usually interested in the global one. Stochastic and evolutionary optimization techniques, such as genetic algorithms, may offer reliable solutions to such design problems within acceptable computational time.

The optimization algorithm developed here can be employed in the first phase of the preliminary design of an aeronautical piston engine. It is a single-objective genetic algorithm which, starting from the given design specifications, finds the engine propulsive system configuration of minimum mass that satisfies the geometrical, structural and performance constraints. The algorithm reads the project specifications as input data, namely the maximum crankshaft and propeller shaft speeds and the maximum pressure in the combustion chamber. The bounds of the design variables, which describe the solution domain from the geometrical point of view, are introduced as well. In the Matlab® Optimization environment the objective function to be minimized is defined as the sum of the masses of the engine propulsive components. Each individual generated by the genetic algorithm is the assembly of the flywheel, the vibration damper and as many pistons, connecting rods and cranks as there are cylinders. The fitness is evaluated for each individual of the population, and then the genetic operators are applied: reproduction, mutation, selection and crossover. In the reproduction step the elitist method is applied in order to protect the fittest individuals from disruption by mutation and recombination, letting them survive undamaged into the next generation. Finally, once the best individual is found, the optimal dimensions of the components are saved to an Excel® file in order to build an automatic 3D CAD model of each component of the propulsive system, giving a direct pre-visualization of the final product while still in the engine's preliminary design phase.

To demonstrate the performance of the algorithm and validate this optimization method, an actual engine is taken as a case study: the 1900 JTD Fiat Avio, 4 cylinders, 4-stroke, Diesel. Many verifications are made on the mechanical components of the engine in order to test their feasibility and decide their survival through the generations. A system of inequalities is used to describe the non-linear relations between the design variables and to check the components under static and dynamic load configurations. The geometrical boundaries of the design variables are taken from actual engine data and similar design cases. Among the many simulations run for algorithm testing, twelve have been chosen as representative of the distribution of the individuals. Then, as an example, the corresponding 3D models of the crankshaft and the connecting rod have been automatically built for each simulation. In spite of morphological differences among the components, the mass is almost the same. The results show a significant mass reduction (almost 20% for the crankshaft) in comparison with the original configuration, and an acceptable robustness of the method. The algorithm developed here is thus shown to be a valid method for preliminary design optimization of aeronautical piston engines. In particular, the procedure is able to analyze quite a wide range of design solutions, rejecting those that cannot fulfil the feasibility design specifications. This optimization algorithm could boost aeronautical piston engine development, speeding up the production rate and joining modern computational performance and technological awareness to long-standing traditional design experience.
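A single-objective genetic algorithm with elitism and penalised constraints, as outlined above, can be sketched generically as follows; the mass and constraint callables and all numeric settings are placeholders, not the thesis' Matlab® implementation.

```python
import random

def genetic_minimise(mass, constraints, bounds, pop_size=60, gens=200,
                     elite=2, pc=0.8, pm=0.1, penalty=1e6):
    """Minimise mass(x) subject to g(x) <= 0 for each g in constraints.

    Infeasible designs are penalised in the fitness rather than discarded,
    and the `elite` best individuals pass unchanged to the next generation.
    """
    def fitness(x):
        violation = sum(max(0.0, g(x)) for g in constraints)
        return mass(x) + penalty * violation

    def random_individual():
        return [random.uniform(lo, hi) for lo, hi in bounds]

    pop = [random_individual() for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        nxt = pop[:elite]                                  # elitism
        while len(nxt) < pop_size:
            p1 = min(random.sample(pop, 3), key=fitness)   # tournament
            p2 = min(random.sample(pop, 3), key=fitness)
            if random.random() < pc:                       # uniform crossover
                child = [a if random.random() < 0.5 else b
                         for a, b in zip(p1, p2)]
            else:
                child = p1[:]
            child = [min(hi, max(lo, x + random.gauss(0.0, 0.1 * (hi - lo))))
                     if random.random() < pm else x
                     for x, (lo, hi) in zip(child, bounds)]  # bounded mutation
            nxt.append(child)
        pop = nxt
    return min(pop, key=fitness)
```

The penalty term plays the role of the feasibility checks described above: individuals violating the system of inequalities receive a fitness so poor that they rarely survive the selection step.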

Relevance:

90.00%

Publisher:

Abstract:

OBJECTIVES: The importance of phrenic nerve preservation during pneumonectomy remains controversial. We previously demonstrated that preservation of the phrenic nerve in the immediate postoperative period preserved lung function by 3-5%, but little is known about its long-term effects. We therefore decided to investigate the effect of temporary ipsilateral cervical phrenic nerve block on dynamic lung volumes in mid- to long-term pneumonectomy patients. METHODS: We investigated 14 patients a median of 9 years after pneumonectomy (range: 1-15 years). Lung function testing (spirometry) and fluoroscopic and/or sonographic assessment of diaphragmatic motion on the pneumonectomy side were performed before and after ultrasound-guided ipsilateral cervical phrenic nerve block by infiltration with lidocaine. RESULTS: Ipsilateral phrenic nerve block was successfully achieved in 12 patients (86%). In the remaining 2 patients, diaphragmatic motion was already paradoxical before the nerve block. We found no significant difference in dynamic lung function values (FEV1 'before' 1.39 ± 0.44 vs FEV1 'after' 1.38 ± 0.40; P = 0.81). CONCLUSIONS: Induction of a temporary diaphragmatic palsy did not significantly influence dynamic lung volumes in mid- to long-term pneumonectomy patients, suggesting that preservation of the phrenic nerve is of greater importance in the immediate postoperative period after pneumonectomy.

Relevance:

90.00%

Publisher:

Abstract:

In general, the distribution of a fleet of vehicles that travel fixed routes is not implemented entirely on the basis of objective criteria, with other, less quantifiable aspects taking priority. A proper analysis should take into consideration the variability existing among the different routes within a city in order to determine which technology best adapts to the characteristics of each itinerary. This work presents a methodology to optimize the allocation of a fleet of vehicles to its routes so as to reduce fuel consumption and pollutant emissions. The proposed method is organized according to the following procedure (a sketch of the final reassignment step follows the list):

- Recording of the kinematic characteristics of the vehicles that travel a representative set of routes.
- Grouping of the lines into clusters of similar routes using a hierarchical algorithm that optimizes a similarity index between routes, obtained by hypothesis testing on the representative variables.
- Generation of a specific kinematic cycle for each cluster.
- Definition of macroscopic variables that allow the classification of the remaining lines using a neural network trained with the information gathered on the measured routes.
- Knowledge of the characteristics of the available fleet.
- Availability of a model that estimates, according to the vehicle technology, the fuel consumption and emissions associated with the kinematic variables of the cycles.
- Development of a vehicle reassignment algorithm that optimizes an objective function dependent on the emissions.

In the fleet optimization process, two scenarios of great relevance in environmental assessment are considered: minimizing the emission of carbon dioxide, given its impact as a greenhouse gas (GHG), and, alternatively, the production of nitrogen oxides, given their influence on acid rain and on the formation of tropospheric ozone in urban areas. In both cases, additional constraints are introduced into the problem to prevent the emissions of the remaining substances from exceeding the values obtained under the fleet organization currently implemented by the operator. The methodology was applied to 160 bus lines of the EMT of Madrid, with kinematic data known for 25 routes. The results indicate that, in both scenarios, it is feasible to obtain a redistribution of the fleet that significantly reduces most pollutant substances without, in return, increasing the emission of any other pollutant.
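As noted in the procedure above, the final step reassigns vehicles to route clusters under an emissions objective. The sketch below solves the unconstrained core of that step with the Hungarian algorithm from SciPy, minimising a CO2 cost matrix; the cost figures are placeholders, and the side constraints on the other pollutants would require an extra feasibility check or an integer-programming formulation.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# co2[v, r]: estimated CO2 per day if vehicle v is assigned to route cluster r
# (placeholder figures; in practice they come from the emission model
# evaluated on each cluster's kinematic cycle)
co2 = np.array([[120.0, 150.0, 180.0],
                [130.0, 140.0, 170.0],
                [125.0, 160.0, 165.0]])

rows, cols = linear_sum_assignment(co2)    # minimise total CO2
assignment = {int(v): int(r) for v, r in zip(rows, cols)}
total = co2[rows, cols].sum()
print(assignment, total)   # {0: 0, 1: 1, 2: 2} -> 425.0 for these figures
```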