956 results for Cost function
Abstract:
The procedure for online process control by attributes consists of inspecting a single item at every m produced items. Based on the inspection result, it is decided whether the process is in control (the conforming fraction is stable) or out of control (the conforming fraction has decreased, for example). Most articles on online process control consider stopping the production process for an adjustment whenever the inspected item is non-conforming (production is then restarted in control; this is denominated here a corrective adjustment). Moreover, the articles related to this subject do not present semi-economical designs (which may yield high quantities of non-conforming items), as they do not include a policy of preventive adjustments (in which case no item is inspected), which can be more economical, mainly if the inspected item can be misclassified. In this article, the choice between a preventive and a corrective adjustment of the process is made at every m produced items. If a preventive adjustment is decided upon, no item is inspected; otherwise, the m-th item is inspected: if it conforms, production goes on, and if not, an adjustment takes place and the process restarts in control. This approach is economically feasible for some practical situations, and the parameters of the proposed procedure are determined by minimizing an average cost function subject to some statistical restrictions (for example, to assure a minimal level, fixed in advance, of conforming items in the production process). Numerical examples illustrate the proposal.
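As an illustration of the kind of cost trade-off described above, the sketch below simulates a policy that alternates preventive adjustments (no inspection) with inspections of the m-th item, and searches over two design parameters. All cost figures, misclassification probabilities and shift rates are hypothetical placeholders, not values from the paper, and the search is a crude grid rather than the paper's optimization.

```python
import random

# Hypothetical parameters (not from the paper): inspection cost, adjustment cost,
# penalty per non-conforming item, probability of a shift to the out-of-control
# state per produced item, conforming fractions and misclassification probabilities.
C_INSPECT, C_ADJUST, C_NONCONF = 1.0, 50.0, 10.0
P_SHIFT = 0.002
P_CONF_IN, P_CONF_OUT = 0.99, 0.90
ALPHA, BETA = 0.05, 0.10   # P(declare non-conforming | conforming), P(declare conforming | non-conforming)

def average_cost(m, preventive_every, horizon=50_000):
    """Average cost per item when one decision is taken every m items and every
    `preventive_every`-th decision is a preventive adjustment (no inspection)."""
    random.seed(0)
    in_control, cost, cycle = True, 0.0, 0
    for i in range(horizon):
        if in_control and random.random() < P_SHIFT:
            in_control = False
        conforming = random.random() < (P_CONF_IN if in_control else P_CONF_OUT)
        if not conforming:
            cost += C_NONCONF
        if (i + 1) % m == 0:                     # decision point
            cycle += 1
            if cycle % preventive_every == 0:    # preventive adjustment, no inspection
                cost += C_ADJUST
                in_control = True
            else:                                # inspect the m-th item (may be misclassified)
                cost += C_INSPECT
                misclassified = random.random() < (ALPHA if conforming else BETA)
                declared_conforming = conforming != misclassified
                if not declared_conforming:      # corrective adjustment
                    cost += C_ADJUST
                    in_control = True
    return cost / horizon

# Crude grid search over the two design parameters.
best = min(((m, k, average_cost(m, k)) for m in (5, 10, 20) for k in (2, 5, 10)),
           key=lambda t: t[2])
print("m =", best[0], "preventive every", best[1], "cycles, cost/item =", round(best[2], 3))
```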
Distributed Estimation Over an Adaptive Incremental Network Based on the Affine Projection Algorithm
Abstract:
We study the problem of distributed estimation based on the affine projection algorithm (APA), which is developed from Newton's method for minimizing a cost function. The proposed solution is formulated to ameliorate the limited convergence properties of least-mean-square (LMS) type distributed adaptive filters with colored inputs. The analysis of transient and steady-state performances at each individual node within the network is developed by using a weighted spatial-temporal energy conservation relation and confirmed by computer simulations. The simulation results also verify that the proposed algorithm provides not only a faster convergence rate but also an improved steady-state performance as compared to an LMS-based scheme. In addition, the new approach attains an acceptable misadjustment performance with lower computational and memory cost, provided the number of regressor vectors and filter length parameters are appropriately chosen, as compared to a distributed recursive-least-squares (RLS) based method.
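For context, a minimal single-node sketch of the affine projection update is given below (the distributed/incremental aspects analysed in the paper are omitted); the step size mu, regularization eps and projection order K are illustrative choices, not the paper's settings.

```python
import numpy as np

def apa_update(w, X, d, mu=0.5, eps=1e-4):
    """One affine projection step.
    w : current weight vector, shape (M,)
    X : matrix of the K most recent regressor vectors, shape (K, M)
    d : corresponding K desired samples, shape (K,)
    """
    e = d - X @ w                              # a-priori errors on the last K samples
    # Solve the small K x K system instead of forming an explicit inverse.
    g = np.linalg.solve(X @ X.T + eps * np.eye(X.shape[0]), e)
    return w + mu * X.T @ g                    # project the error back onto the weights

# Toy usage: identify a short FIR system from coloured input.
rng = np.random.default_rng(0)
M, K, N = 8, 4, 2000
h = rng.standard_normal(M)                                      # unknown system
u = np.convolve(rng.standard_normal(N), [1.0, 0.8, 0.4])[:N]    # coloured input
w = np.zeros(M)
for n in range(M + K, N):
    X = np.array([u[n - k - M + 1 : n - k + 1][::-1] for k in range(K)])
    d = X @ h + 0.01 * rng.standard_normal(K)
    w = apa_update(w, X, d)
print(np.round(w - h, 3))                                       # should be close to zero
```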
Abstract:
We derive an easy-to-compute approximate bound for the range of step-sizes for which the constant-modulus algorithm (CMA) will remain stable if initialized close to a minimum of the CM cost function. Our model highlights the influence of the signal constellation used in the transmission system: for smaller variation in the modulus of the transmitted symbols, the algorithm will be more robust and the steady-state misadjustment will be smaller. The theoretical results are validated through several simulations, for long and short filters and channels.
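A minimal sketch of the constant-modulus stochastic gradient update is shown below, so the role of the step size in the discussion above is concrete; the channel, constellation, equalizer length and step size are illustrative assumptions, not the paper's simulation setup.

```python
import numpy as np

rng = np.random.default_rng(1)

# QPSK symbols through a short complex channel (illustrative setup only).
N = 5000
a = (rng.choice([-1, 1], N) + 1j * rng.choice([-1, 1], N)) / np.sqrt(2)
h = np.array([1.0, 0.4 + 0.3j, 0.2])
x = np.convolve(a, h)[:N] + 0.01 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))

L = 11                                   # equalizer length
R2 = np.mean(np.abs(a) ** 4) / np.mean(np.abs(a) ** 2)   # constant-modulus target
w = np.zeros(L, complex)
w[L // 2] = 1.0                          # centre-spike initialization, i.e. close to a CM minimum
mu = 5e-3                                # step size: must stay below the stability bound

for n in range(L, N):
    u = x[n - L + 1 : n + 1][::-1]       # regressor (most recent sample first)
    y = np.vdot(w, u)                    # equalizer output  w^H u
    e = y * (np.abs(y) ** 2 - R2)        # CM error term
    w = w - mu * e.conjugate() * u       # stochastic gradient step on the CM cost

# Combined channel-equalizer response: ideally close to a delayed spike.
print(np.round(np.abs(np.convolve(w.conj(), h)), 2))
```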
Abstract:
In this work, a broad analysis of local search multiuser detection (LS-MUD) for direct sequence/code division multiple access (DS/CDMA) systems under multipath channels is carried out, considering the performance-complexity trade-off. The robustness of the LS-MUD to variations in loading, Eb/N0, near-far effect, number of fingers of the Rake receiver, and errors in the channel coefficient estimates is verified. A comparative analysis of the bit error rate (BER) and complexity trade-off is carried out among LS, genetic algorithm (GA) and particle swarm optimization (PSO). Based on the deterministic behavior of the LS algorithm, simplifications of the cost function calculation are also proposed, yielding more efficient algorithms (simplified and combined LS-MUD versions) and creating new perspectives for MUD implementation. The computational complexity is expressed in terms of the number of operations needed to converge. Our conclusions point out that the simplified LS (s-LS) method is always more efficient, independent of the system conditions, achieving better performance with lower complexity than the other heuristic detectors. In addition, the deterministic strategy and the absence of input parameters make the s-LS algorithm the most appropriate for the MUD problem. (C) 2008 Elsevier GmbH. All rights reserved.
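To make the local-search idea concrete, here is a generic 1-opt (bit-flip) local search over the standard BPSK maximum-likelihood metric; it is a sketch only and does not include the cost-calculation simplifications of the s-LS/combined variants proposed in the paper.

```python
import numpy as np

def local_search_mud(y, R, b0):
    """1-opt local search for BPSK multiuser detection.
    Maximizes the standard ML metric  f(b) = 2 b^T y - b^T R b,
    flipping one bit at a time until no flip improves the metric.
    (Generic illustration; the simplified variants reuse partial
    computations instead of re-evaluating f from scratch.)
    """
    b = b0.copy()
    f = 2 * b @ y - b @ R @ b
    improved = True
    while improved:
        improved = False
        for k in range(len(b)):
            b[k] = -b[k]                      # tentative flip of user k
            f_new = 2 * b @ y - b @ R @ b
            if f_new > f:
                f, improved = f_new, True     # keep the flip
            else:
                b[k] = -b[k]                  # undo
    return b

# Toy system: K users, random signatures, conventional detector as starting point.
rng = np.random.default_rng(2)
K, G = 8, 31
S = rng.choice([-1.0, 1.0], (G, K)) / np.sqrt(G)    # spreading codes
R = S.T @ S                                         # correlation matrix
b_true = rng.choice([-1.0, 1.0], K)
y = R @ b_true + 0.05 * rng.standard_normal(K)      # matched-filter bank output
b_hat = local_search_mud(y, R, np.sign(y))
print(int(np.sum(b_hat != b_true)), "bit errors")
```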
Abstract:
The aim of this paper is to present an economical design of an X chart for a short-run production. The process mean starts equal to mu(0) (in control, State I) and at a random time shifts to mu(1) > mu(0) (out of control, State II). The monitoring procedure consists of inspecting a single item at every m produced items. If the measurement of the quality characteristic does not meet the control limits, the process is stopped, adjusted, and an additional (r - 1) items are inspected retrospectively. The probabilistic model was developed considering only shifts in the process mean. A direct search technique is applied to find the optimum parameters that minimize the expected cost function. Numerical examples illustrate the proposed procedure. (C) 2009 Elsevier B.V. All rights reserved.
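A rough sketch of the direct-search idea follows: a Monte-Carlo estimate of the cost per item over a grid of (m, k) designs, with hypothetical cost and shift parameters and without the retrospective inspection of (r - 1) items described in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical cost and process parameters (placeholders, not the paper's values).
C_INSP, C_ALARM, C_OOC = 1.0, 30.0, 5.0   # inspection, stop/adjust, per-item out-of-control penalty
MU0, MU1, SIGMA = 0.0, 1.0, 1.0           # in-control mean, shifted mean, standard deviation
P_SHIFT, RUN, REPS = 0.005, 2_000, 50     # per-item shift probability, run length, replications

def expected_cost(m, k):
    """Monte-Carlo estimate of the cost per item when a single item is measured
    every m items against control limits MU0 +/- k*SIGMA."""
    total = 0.0
    for _ in range(REPS):
        mu, cost = MU0, 0.0
        for i in range(RUN):
            if mu == MU0 and rng.random() < P_SHIFT:
                mu = MU1
            if mu == MU1:
                cost += C_OOC
            if (i + 1) % m == 0:
                cost += C_INSP
                x = rng.normal(mu, SIGMA)
                if abs(x - MU0) > k * SIGMA:   # out of limits: stop and adjust
                    cost += C_ALARM
                    mu = MU0
        total += cost / RUN
    return total / REPS

# Direct (grid) search over the sampling interval and control-limit width.
grid = [(m, k, expected_cost(m, k)) for m in (5, 10, 25) for k in (2.0, 2.5, 3.0)]
print(min(grid, key=lambda t: t[2]))
```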
Abstract:
Two basic representations of principal-agent relationships, the 'state-space' and 'parameterized distribution' formulations, have emerged. Although the state-space formulation appears more natural, analytical studies using this formulation have had limited success. This paper develops a state-space formulation of the moral-hazard problem using a general representation of production under uncertainty. A closed-form solution for the agency-cost problem is derived. Comparative-static results are deduced. Next we solve the principal's problem of selecting the optimal output given the agency-cost function. The analysis is applied to the problem of point-source pollution control. (C) 1998 Published by Elsevier Science S.A. All rights reserved.
Abstract:
In general, modern networks are analysed by taking several Key Performance Indicators (KPIs) into account, and their proper balance is required in order to guarantee a desired Quality of Service (QoS), particularly in cellular wireless heterogeneous networks. A model to integrate a set of KPIs into a single one is presented, using a Cost Function that includes these KPIs and provides a single evaluation parameter as output for each network node, reflecting network conditions and the performance of common radio resource management strategies. The proposed model enables the implementation of different network management policies, by manipulating KPIs according to users' or operators' perspectives, allowing for a better QoS. Results show that different policies can in fact be established, with a different impact on the network, e.g., with median values ranging by a factor higher than two.
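A minimal sketch of such a KPI-aggregating cost function is shown below; the KPI names, normalisation ranges and policy weights are assumptions for illustration, not the parameters of the proposed model.

```python
from typing import Dict

# Illustrative placeholders: KPI names, value ranges and orientation are assumptions.
KPI_RANGES = {"blocking": (0.0, 0.05), "delay_ms": (10.0, 200.0), "throughput_mbps": (1.0, 50.0)}
HIGHER_IS_BETTER = {"blocking": False, "delay_ms": False, "throughput_mbps": True}

def node_cost(kpis: Dict[str, float], weights: Dict[str, float]) -> float:
    """Collapse a node's KPIs into a single cost in [0, 1]: each KPI is normalised
    to [0, 1] within its range, oriented so that higher always means worse, and the
    weighted sum reflects the chosen management policy (weights sum to 1)."""
    cost = 0.0
    for name, w in weights.items():
        lo, hi = KPI_RANGES[name]
        x = min(max((kpis[name] - lo) / (hi - lo), 0.0), 1.0)
        if HIGHER_IS_BETTER[name]:
            x = 1.0 - x
        cost += w * x
    return cost

# An operator-oriented policy might weight blocking more heavily than delay, for example.
kpis = {"blocking": 0.02, "delay_ms": 80.0, "throughput_mbps": 20.0}
print(node_cost(kpis, {"blocking": 0.5, "delay_ms": 0.2, "throughput_mbps": 0.3}))
```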
Abstract:
This paper presents a complete quadratic programming formulation of the standard thermal unit commitment problem in power generation planning, together with a novel iterative optimisation algorithm for its solution. The algorithm, based on a mixed-integer formulation of the problem, considers piecewise linear approximations of the quadratic fuel cost function that are dynamically updated in an iterative way, converging to the optimum; this avoids resorting to quadratic programming and makes the solution process much quicker. Extensive computational tests on a broad set of benchmark instances of this problem showed the algorithm to be flexible and capable of easily incorporating different problem constraints. Indeed, it is able to tackle ramp constraints, which, although very important in practice, were rarely considered in previous publications. Most importantly, optimal solutions were obtained for several well-known benchmark instances, including instances of practical relevance, that are not known to have been solved to optimality before. Computational experiments and their results showed that the proposed method is both simple and extremely effective.
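The dynamic-update idea can be sketched outside the mixed-integer model as follows: breakpoints of a piecewise linear (secant) approximation of a quadratic fuel cost are added where the approximation error is largest. The cost coefficients and tolerance are illustrative, and the paper embeds this refinement inside the unit-commitment optimisation rather than running it stand-alone.

```python
# Quadratic fuel cost of one unit (illustrative coefficients).
A, B, C = 100.0, 20.0, 0.05          # cost(p) = A + B*p + C*p^2
P_MIN, P_MAX = 50.0, 400.0

def cost(p):
    return A + B * p + C * p ** 2

def refine_breakpoints(breakpoints, tol=1.0):
    """Iteratively add breakpoints where the secant (piecewise linear) approximation
    of the quadratic cost is worst, until the gap is below `tol`."""
    bp = sorted(breakpoints)
    while True:
        worst_gap, worst_p = 0.0, None
        for lo, hi in zip(bp, bp[1:]):
            mid = 0.5 * (lo + hi)        # for a quadratic, the secant gap peaks at the midpoint
            secant = cost(lo) + (cost(hi) - cost(lo)) * (mid - lo) / (hi - lo)
            gap = secant - cost(mid)
            if gap > worst_gap:
                worst_gap, worst_p = gap, mid
        if worst_gap <= tol:
            return bp
        bp = sorted(bp + [worst_p])      # dynamic update: refine where the error is largest

bp = refine_breakpoints([P_MIN, P_MAX])
print(len(bp), "breakpoints needed for a 1.0 cost-unit tolerance")
```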
Abstract:
This paper analyses the provision of auxiliary clinical services that are typically carried out within the hospital. We estimate a flexible cost function for the three most important (cost-wise) diagnostic techniques and therapeutic services in Portuguese hospitals: Clinical Pathology, Medical Imaging, and Physical Medicine and Rehabilitation. Our objective in carrying out this estimation is the evaluation of economies of scale and scope in the provision of these services. For all services, we find evidence of ray economies of scale and some evidence of economies of scope. These results have important policy implications and can be related to the ongoing discussion of where and how hospitals should provide these services.
Abstract:
Fluctuations and instability in oil prices, together with European policies adopting strategies for sustainable development, have led to a growing search for new technologies and alternative energy sources. In this context, energy policies have stimulated the increase in the production and use of natural gas, since it is considered a clean energy source. The growth of the natural gas market requires a significant reinforcement of the transport networks for this fuel, both in terms of storage and supply and in terms of the pipelines and their management. Transmission pipelines require large investments, which might not be remunerated as expected, this being one of the reasons why five districts in Portugal remain deprived of this type of infrastructure. Natural gas transport entails high costs for consumers, which increase with the quantities of gas traded and with the distance travelled by the natural gas. The dispatch of natural gas therefore becomes particularly important: which loads will each gas supply unit (UFG) feed, how much natural gas should each UFG inject into the network, what is the shortest possible route to do so, and which type of transport should be used? These questions are addressed in this dissertation, with the aim of minimizing the transport cost function, thereby reducing losses in the high-pressure network and the transport costs borne by consumers. The test network adopted was the national transmission network, consisting of 18 consumption nodes, and the types of transport considered were transport through physical pipelines and transport through a virtual pipeline, i.e., road transport routes for liquefied natural gas. Several scenarios were created, based on winter and summer periods; the scenarios covered the variables in different ways in order to analyse the impact these variables would have on the cost of natural gas transport. To support the economic dispatch model, a computational application, Despacho_GN, was developed with the objective of dispatching the quantities of natural gas that each UFG should inject into the network, as well as presenting the accumulated transport costs. With the support of this application several scenarios were tested, and the respective results are presented. The methodology developed for producing a dispatch with the "Despacho_GN" application proved to be efficient in obtaining solutions, and fast enough to perform the simulations in a few seconds. The dissertation contributes to the study of problems related to natural gas dispatch and suggests future research and development directions.
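As a toy illustration of the dispatch problem described above, the following linear-programming sketch allocates gas from supply units (UFGs) to consumption nodes at minimum transport cost; the two units, three nodes, capacities, demands and unit costs are invented placeholders, not data from the national network or the Despacho_GN application.

```python
import numpy as np
from scipy.optimize import linprog

cost = np.array([[1.0, 2.5, 4.0],      # unit transport cost from UFG 0 to nodes 0..2
                 [3.0, 1.5, 2.0]])     # unit transport cost from UFG 1 to nodes 0..2
supply = np.array([120.0, 100.0])      # maximum injection of each UFG
demand = np.array([60.0, 70.0, 50.0])  # consumption at each node

n_s, n_d = cost.shape
c = cost.ravel()                       # decision variables: flow[s, d], flattened row-wise

# Each node's demand must be met exactly.
A_eq = np.zeros((n_d, n_s * n_d))
for d in range(n_d):
    A_eq[d, d::n_d] = 1.0
b_eq = demand

# Each UFG cannot inject more than its capacity.
A_ub = np.zeros((n_s, n_s * n_d))
for s in range(n_s):
    A_ub[s, s * n_d:(s + 1) * n_d] = 1.0
b_ub = supply

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
print(res.x.reshape(n_s, n_d), "total transport cost:", res.fun)
```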
Abstract:
ABSTRACT - Given the current context of cost containment in the health sector and the consequent concern with the efficiency of the system, several changes have been made to the management and organizational model of the health system. Of particular note is the change in the hospital structure, aimed at rationalizing internal resources, in which hospital mergers have played a decisive role. In Portugal, over the last 10 years, there has been a significant reduction in the number of hospitals (from roughly 90 to 50 units), exclusively through mergers and without any change in the number of existing physical facilities. Notwithstanding the arguments put forward to justify this reform, the evaluation of its implicit objectives is insufficient. In this context, this study aims to contribute to the analysis of the impact of the creation of hospital centres on cost reduction, that is, to verify whether the consolidation and consequent re-engineering of production processes produced economies of scale. For this analysis a panel database was used, covering 75 hospitals over 7 years (2003-2009), a number that decreased over the period under analysis due to the numerous mergers already mentioned. To evaluate the gains from hospital mergers in terms of technical efficiency and economies of scale, a stochastic frontier specified as a translog cost function was used. Once the frontier was estimated, it was possible to analyse three specific hospital centres, comparing the pre-merger period (2005-2006) with the post-merger period (2008-2009). As explanatory variables for hospital production, the number of cases treated and the number of inpatient days (Vita, 1990; Schuffham et al., 1996), the number of outpatient consultations and the number of emergency episodes were considered, these being the most common variables in the literature (Vita, 1990; Fournier and Mitchell, 1992; Carreira, 1999). The dependent variable was total variable cost, comprising the hospitals' total annual costs except for fixed assets. The main conclusions of the research, regarding the creation of hospital centres, are the scale gains obtained in the merger of small hospitals with more complementary services. -------- ABSTRACT - Driven by the current pressure on resources induced by budgetary cuts, the Portuguese Ministry of Health is imposing changes in the management model and organization of NHS hospitals. The most recent change is based on the creation of Hospital Centres, which result from administrative mergers of existing hospitals. In less than 10 years the number of hospitals went from around 90 to around 50, solely due to the mergers and without any change in the existing number of physical institutions. According to the political discourse, one of the main goals expected from this measure is the creation of synergies and more efficiency in the use of available resources. However, the merger of the hospitals has been a political decision without support from, or evaluation of, the first experiments. The aim of this study is to measure the results of this policy by looking at economies of scale, namely through reductions in expenditure, as expected and sought by the MoH. Data used cover 7 years (2003-2009) and 75 hospitals, a number that was reduced by the numerous mergers during the last decade.
This work uses a stochastic frontier analysis through the translog cost function to examine the gains from mergers, which are decomposed into technical efficiency and economies of scale. These effects are analysed for the creation of three specific hospital centres, using a longitudinal approach to compare the pre-merger period (2003-2006) with the post-merger period (2007-09). To measure changes in inpatient hospital production, volume and length of stay are considered, as done by Vita (1990) and Schuffham et al. (1996). For outpatient services, the number of consultations and emergencies are considered (Vita, 1990; Fournier and Mitchell, 1992; Carreira, 1999). Total variable cost is the dependent variable, explained by the aforementioned ones. After a review of the literature, the expected results point to benefits from the mergers, namely a reduction in total expenditure and in the number of duplicated services. Results extracted from our data point in the same direction, and thus to the existence of some economies of scale, but only for small hospitals.
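For reference, the translog cost frontier mentioned above has the standard general form below; the symbols are generic (outputs y_j, noise v and inefficiency u), not the exact variable set used in the study.

```latex
\ln C_{it} \;=\; \beta_0 \;+\; \sum_{j}\beta_j \ln y_{jit}
\;+\; \tfrac{1}{2}\sum_{j}\sum_{k}\beta_{jk}\,\ln y_{jit}\,\ln y_{kit}
\;+\; v_{it} \;+\; u_{it}, \qquad u_{it}\ge 0 .
```

Here C is total variable cost, the y_j are hospital outputs (such as cases treated, inpatient days, consultations and emergencies), v is statistical noise and u is the non-negative inefficiency term of the stochastic frontier; ray economies of scale are then assessed from the sum of the output cost elasticities, sum_j dlnC/dln y_j.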
Abstract:
4th International Conference on Climbing and Walking Robots - From Biology to Industrial Applications
Abstract:
Conventional thermal power plants convert only part of the fuel consumed into electricity, with another part being lost in the form of heat. This gave rise to cogeneration, or Combined Heat and Power (CHP), units, which make it possible to recover the energy dissipated as heat and make it available, together with the electricity generated, for domestic or industrial consumption, making them more efficient than conventional units. The electricity and heat production costs of CHP units are represented by a non-linear function, and their feasible operating region may be convex or non-convex, depending on the characteristics of each unit. For these reasons, the modelling of CHP units within the Unit Commitment Problem (UCP) is particularly relevant for companies that also own this type of unit. These companies aim to define, among the CHP units and the units that generate only electricity or only heat, which should be committed and their respective production levels, so as to meet the demand for electricity and heat at minimum cost. In this document, two mixed-integer programming models are proposed for the UCP with cogeneration units: a non-linear model that includes the actual production cost function of the CHP units, and a model that linearizes that function based on the convex combination of a pre-defined number of extreme points. In both models, the non-convex feasible operating region is modelled by dividing this area into two distinct convex areas. Computational tests carried out with both models on several instances showed the efficiency of the proposed linear model, which obtained the optimal solutions of the non-linear model with significantly lower computational times. In addition, both models were tested with and without load pick-up and load-shedding constraints, leading to the conclusion that this type of constraint increases the complexity of the problem, with the computational time required to solve it growing significantly.
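To illustrate the convex-combination (lambda) linearization mentioned above for a single committed CHP unit and one convex operating area, the sketch below picks weights over the extreme points so that power and heat demand are met at minimum linearized cost; the corner points, costs and demands are invented placeholders, and the full thesis additionally models commitment decisions and the split of a non-convex region into two convex areas with binary variables.

```python
import numpy as np
from scipy.optimize import linprog

# Extreme points of one convex operating area as (power MW, heat MW), and the
# production cost evaluated at each corner (all values are illustrative).
corners = np.array([[50.0, 0.0], [200.0, 0.0], [160.0, 120.0], [60.0, 90.0]])
cost_at_corner = np.array([600.0, 2500.0, 2400.0, 900.0])

p_demand, q_demand = 140.0, 60.0       # electricity and heat to be supplied by this unit

K = len(corners)
# Variables: lambda_1..lambda_K; the cost becomes linear in lambda (the linearization).
A_eq = np.vstack([corners[:, 0],       # sum lambda_k * p_k = p_demand
                  corners[:, 1],       # sum lambda_k * q_k = q_demand
                  np.ones(K)])         # sum lambda_k = 1   (convex combination)
b_eq = np.array([p_demand, q_demand, 1.0])

res = linprog(cost_at_corner, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
print("lambdas:", np.round(res.x, 3), "approx. cost:", round(res.fun, 1))
```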
Abstract:
A Work Project, presented as part of the requirements for the Award of a Masters Degree in Economics from the NOVA – School of Business and Economics
Abstract:
A Masters Thesis, presented as part of the requirements for the award of a Research Masters Degree in Economics from NOVA – School of Business and Economics