863 results for multi-stage fixed costs
Abstract:
Electrical switching studies on amorphous Si15Te75Ge10 thin-film devices reveal the existence of two distinct, stable low-resistance SET states, achieved by varying the electrical input to the device. The multiple resistance levels can be attributed to multi-stage crystallization, as observed from temperature-dependent resistance studies. The devices are tested for their ability to be RESET with minimal resistance degradation; furthermore, they exhibit minimal drift in the SET resistance value even after several months of switching. (c) 2013 Elsevier B.V. All rights reserved.
Abstract:
Desalination is one of the most traditional processes for generating potable water. With the rise in demand for potable water and the paucity of fresh water resources, this process has gained special importance. Conventional thermal desalination processes involve evaporative methods such as multi-stage flash and solar stills, which are energy intensive, whereas reverse osmosis based systems have high operating and maintenance costs. The present work describes the Adsorption Desalination (AD) system, an emerging thermal desalination-cum-refrigeration process capable of utilizing low-grade heat readily obtainable even from non-concentrating solar collectors. The system employs a combination of flash evaporation and thermal compression to generate cooling and desalinated water. The current study analyses the system dynamics of a 4-bed, single-stage, silica gel + water AD system. A lumped model is developed using conservation of energy and mass coupled with the kinetics of the adsorption/desorption process. The constitutive equations for the system components, viz. the evaporator, adsorber, and condenser, are solved, and the performance of the system is evaluated for a single-stage AD system at various condenser temperatures and cycle times to determine the optimum operating conditions required for desalination and cooling. (C) 2013 P. Dutta. Published by Elsevier Ltd.
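A minimal sketch of such a lumped adsorber-bed model is given below. It is not the authors' model: the function names (q_eq, rhs), the Henry-type isotherm, and all parameter values are illustrative assumptions based on typical silica gel/water figures, and only one adsorption half-cycle of one bed is integrated.

import numpy as np
from scipy.integrate import solve_ivp

# Illustrative placeholder parameters, NOT taken from the paper
R_v   = 461.5      # J/(kg K), specific gas constant of water vapour
Q_st  = 2.51e6     # J/kg, isosteric heat of adsorption (typical silica gel/water)
K0    = 4.684e-12  # kg/(kg Pa), pre-exponential of a Henry-type isotherm
k_ldf = 2.0e-3     # 1/s, lumped linear-driving-force (LDF) coefficient
M_sg  = 50.0       # kg of silica gel in the bed
M_cp  = 8.0e4      # J/K, lumped thermal mass of bed plus heat exchanger
UA    = 4.0e3      # W/K, cooling-water heat-exchanger conductance
T_cw  = 303.15     # K, cooling-water temperature during adsorption
P_ev  = 1.2e3      # Pa, evaporator pressure (about 10 degC saturation)

def q_eq(P, T):
    """Henry-type equilibrium uptake (kg water / kg gel); a crude stand-in isotherm."""
    return min(K0 * np.exp(Q_st / (R_v * T)) * P, 0.4)

def rhs(t, y):
    q, T = y
    dq = k_ldf * (q_eq(P_ev, T) - q)                   # LDF adsorption kinetics (mass balance)
    dT = (M_sg * Q_st * dq - UA * (T - T_cw)) / M_cp   # lumped energy balance on the bed
    return [dq, dT]

# One adsorption half-cycle: the bed starts warm and nearly dry
sol = solve_ivp(rhs, (0.0, 600.0), [0.05, 313.15], max_step=1.0)
print(f"uptake after 600 s: {sol.y[0, -1]:.3f} kg/kg, bed temperature: {sol.y[1, -1] - 273.15:.1f} degC")

In a full cycle-simulation the same balances would be written for each bed, the condenser, and the evaporator, and switched between adsorption/desorption modes at the chosen cycle time.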
Abstract:
Forensic genetics plays a major role in generating evidence in cases of sexual violence, criminal paternity, identification of bodies, and investigation of crime-scene evidence. STR analysis offers great discriminating power, but it is a multi-step, labor-intensive, and expensive methodology, and in many cases the genetic analysis is compromised by the low quantity and quality of the collected evidence. The aim of this study was to develop and characterize a methodology for screening forensic samples through high-resolution melting (HRM) profile analysis of mitochondrial DNA regions, which are present in higher copy numbers and are more resistant to degradation. To that end, DNA was extracted from 68 donors. These samples were sequenced and analyzed by HRM for seven mitochondrial DNA targets. Assays were also carried out to determine the influence of the extraction method and of the DNA concentration and degradation level on the HRM profile obtained for a sample. The results demonstrated the technique's ability to exclude individuals whose sequences differ from the comparative reference in five amplified regions. DNA samples with concentrations varying by up to about 100-fold and extracted by different methods can be analyzed together. Degradation of the genetic material did not prevent obtaining high-resolution melting profiles. The sensitivity of the technique was improved by analyzing amplification products of reduced size. To optimize the assay, HRM analysis in duplex PCR reactions was tested. One of the amplification pairs provided HRM profiles compatible with results obtained from reactions amplifying only one of the targets. Through the combined analysis of the five regions, this methodology aims to identify individuals unrelated to the comparative references, decreasing the number of samples to be analyzed by STRs, reducing costs, and increasing the efficiency of routine work in forensic genetics laboratories.
Abstract:
This paper focuses on the analysis of the relationship between maritime trade and transport costs in Latin America. The analysis is based on disaggregated (SITC 5-digit level) trade data for intra-Latin American maritime trade routes over the period 1999-2004. The research contributes to the literature by disentangling the effects of transport costs on the range of traded goods (the extensive margin) and the traded volumes of goods (the intensive margin) of international trade, in order to test some of the predictions of trade theories that introduce firm heterogeneity in productivity as well as fixed costs of exporting. Recent investigations show that spatial frictions (distance) reduce trade mainly by trimming the number of shipments, and that most firms ship only to geographically proximate customers instead of shipping to many destinations in quantities that decrease with distance. Our analyses confirm these findings and show that the opposite pattern holds for ad-valorem freight rates, which reduce aggregate trade values mainly by reducing the volume of imported goods (the intensive margin).
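For orientation, a standard way to formalize these two margins (a generic decomposition, not necessarily the authors' exact specification; the symbols are illustrative) writes bilateral trade between origin $o$ and destination $d$ as

$$X_{od} = N_{od}\,\bar{x}_{od} \quad\Longrightarrow\quad \ln X_{od} = \ln N_{od} + \ln \bar{x}_{od},$$

where $N_{od}$ is the number of traded products (extensive margin) and $\bar{x}_{od}$ the average value per product (intensive margin). Regressing each log margin separately on distance and on the ad-valorem freight rate splits the total trade-cost elasticity into an extensive and an intensive component; the finding above is that distance operates mainly through $N_{od}$, whereas freight rates operate mainly through $\bar{x}_{od}$.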
Abstract:
Final Master's project submitted to obtain the degree of Master in Mechanical Engineering
Abstract:
In my doctoral thesis, I study three important factors that characterize international trade: technological differences between countries, barriers to entry in the form of fixed costs, and international migration. The first chapter analyzes whether technological differences between countries can explain specialization in international trade. To measure the level of specialization, I compute concentration indices for the value of imports and exports and decompose total concentration into the extensive product margin (number of products traded) and the intensive product margin (volume of products traded). Using detailed product-level trade data for 160 countries, my results show that exports are more concentrated than imports, that specialization occurs mainly at the intensive product margin, and that larger economies have more diversified imports and exports because they trade more products. Given these facts, I assess the ability of the Eaton-Kortum model, the leading model of Ricardian trade theory (its trade-share equation is recalled below), to account for the empirical evidence. The results show that specialization through comparative advantage induced by technology differences can explain the qualitative and quantitative facts. In addition, I evaluate the role of the key determinants of specialization: the degree of comparative advantage, the elasticity of substitution, and geography. One implication of these results is that it is important to assess the extent to which output volatility, measured by GDP volatility, is driven by the specialization of exports and imports. Given the trade-off between trade openness and output volatility, the gains from trade may turn out to be smaller than previously estimated. Consequently, alternative trade policies, such as gradual trade opening combined with production diversification to reduce export concentration, may prove to be a better strategy than a laissez-faire approach. Using the relationship between market size and the entry of firms and products, the second chapter assesses whether barriers to entry in the form of fixed export costs operate at the firm level or at the product level. If fixed costs are at the firm level, a multi-product firm has a production cost advantage over other firms because it can spread the fixed costs over several products. In that case, international trade will be characterized by few firms exporting many products. If fixed costs are at the product level, the entry of a product is associated with the entry of several firms. The reason is that once the first firm enters and pays the product's fixed costs, it creates a spillover that lowers the fixed costs for rival firms. In that case, international trade will be characterized by many firms selling different varieties of the same product. Using detailed data on 40 exporting countries across 180 destination markets, my results show that barriers to entry are found mainly at the product level.
A larger market favors the expansion of a greater number of firms within a product category rather than allowing multi-product firms to grow across a range of products. Looking at the difference in the number of exporters within a product category across given destinations, I find that the firm entry rate increases significantly after a product first enters a market. I therefore infer that the first entrant lowers the fixed costs for subsequent firms. My research also shows that despite greater competition in the product market, firms earn higher export revenues and are more likely to remain in international markets. These results are consistent with the hypothesis that the spillover effect induces the entry of rival firms and allows firms to produce at a larger scale. This research yields a number of important conclusions. First, trade policies that encourage the entry of new products, for example by promoting products in destination markets, generate spillovers that translate into higher firm participation rates and export growth. Second, consumers in the importing country can benefit from lower product prices when technical barriers to trade are reduced. Third, when conducting policy experiments in the form of trade cost reductions, it is customary to consider only a decrease in marginal costs and to evaluate the impact on consumer welfare. However, an important element of trade agreements is the reduction of technical barriers to trade through the negotiation of common product standards. Neglecting the existence of entry barriers and the consequences of industry reallocations weakens the impact of trade reforms. The third chapter considers the role of information in facilitating international trade. Immigrants reduce transaction costs in international trade by providing information about trading opportunities with their country of origin. Using detailed geographic data on immigration and imports into the United States between 1970 and 2005, I quantify the effect of new immigrants on the demand for imports of intermediate goods in the United States. To establish the causal link between trade and migration, I exploit the large inflow of Central American immigrants after Hurricane Mitch. The results show that a ten percent increase in immigrants raised the demand for imports of intermediate goods by 1.5 percent. My results are robust to reverse-causality concerns whereby the decision to emigrate is driven by trading opportunities.
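For reference, the trade-share equation at the core of the Eaton-Kortum model (the standard form from Eaton and Kortum, 2002, recalled here only to fix notation, not reproduced from the thesis) is

$$\pi_{ni} \;=\; \frac{T_i\,(c_i\,d_{ni})^{-\theta}}{\sum_{k} T_k\,(c_k\,d_{nk})^{-\theta}},$$

where $\pi_{ni}$ is the share of destination $n$'s spending on goods from country $i$, $T_i$ captures country $i$'s technology level, $c_i$ its input cost, $d_{ni}\ge 1$ the iceberg trade cost, and $\theta$ governs the dispersion of productivity draws and hence the strength of comparative advantage. The specialization exercises described in the first chapter hinge on $\theta$, the elasticity of substitution, and the geography embedded in $d_{ni}$.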
Abstract:
Predictors of random effects are usually based on the popular mixed effects (ME) model developed under the assumption that the sample is obtained from a conceptually infinite population; such predictors are employed even when the actual population is finite. Two alternatives that incorporate the finite nature of the population are obtained from the superpopulation model proposed by Scott and Smith (1969. Estimation in multi-stage surveys. J. Amer. Statist. Assoc. 64, 830-840) or from the finite population mixed model recently proposed by Stanek and Singer (2004. Predicting random effects from finite population clustered samples with response error. J. Amer. Statist. Assoc. 99, 1119-1130). Predictors derived under the latter model, with the additional assumptions that all variance components are known and that within-cluster variances are equal, have smaller mean squared error (MSE) than the competitors based on either the ME or Scott and Smith's models. As population variances are rarely known, we propose method-of-moments estimators to obtain empirical predictors and conduct a simulation study to evaluate their performance. The results suggest that the finite population mixed model empirical predictor is more stable than its competitors since, in terms of MSE, it is either the best or the second best, and when second best, its performance lies within acceptable limits. When both cluster and unit intra-class correlation coefficients are very high (e.g., 0.95 or more), the performance of the empirical predictors derived under the three models is similar. (c) 2007 Elsevier B.V. All rights reserved.
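To fix ideas, under a balanced two-stage design the familiar ME-model predictor of a cluster effect takes the shrinkage (BLUP) form (a textbook expression shown only for orientation; it is not the exact set of predictors compared in the paper)

$$\hat{b}_i \;=\; \frac{\sigma_b^{2}}{\sigma_b^{2} + \sigma_e^{2}/m}\,\bigl(\bar{Y}_i - \hat{\mu}\bigr),$$

where $\sigma_b^{2}$ is the between-cluster variance, $\sigma_e^{2}$ the within-cluster variance, $m$ the cluster sample size, and $\bar{Y}_i$ the cluster sample mean. Roughly speaking, the superpopulation and finite population predictors adjust this shrinkage weight to reflect the finite numbers of clusters and units actually sampled, and the empirical versions replace $\sigma_b^{2}$ and $\sigma_e^{2}$ with method-of-moments estimates.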
Abstract:
Reliability is a key aspect of power system design, planning, and operation. Under the new competitive framework of the electricity sector, power systems are increasingly forced to operate near their limits. In this scenario, it is crucial for the system operator to use tools that facilitate an energy dispatch that minimizes possible power cuts. This paper presents a mathematical model to calculate an energy dispatch that considers security constraints (single contingencies in transmission lines and transformers). The model involves pool markets and fixed bilateral contracts. Traditional methodologies that include security constraints are usually based on multi-stage dispatch processes. Here, we propose a single-stage model that avoids the economic inefficiencies that result when conventional multi-stage dispatch approaches are applied. The proposed model includes an AC representation of the transmission system and allows the cost overruns incurred due to reliability constraints to be calculated. We find that complying with fixed bilateral contracts, when they exceed certain levels, might lead to congestion problems in transmission lines.
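A generic single-stage, security-constrained dispatch of this kind (a schematic skeleton, not the paper's exact formulation; the symbols are illustrative) can be written as

$$\min_{P_g}\;\sum_{g} c_g(P_g)$$

subject to the AC power-flow equations for the base case and for each single contingency $k$ (outage of one line or transformer), apparent-power limits $|S_\ell^{(k)}| \le S_\ell^{\max}$ on every branch $\ell$ in every state $k$, generator limits $P_g^{\min} \le P_g \le P_g^{\max}$, and injections corresponding to the fixed bilateral contracts held at their contracted values. The cost overrun attributable to reliability can then be read as the gap between this objective and the base-case-only dispatch cost.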
Abstract:
Graduate Program in Electrical Engineering - FEIS
Abstract:
Graduate Program in Electrical Engineering - FEIS
Abstract:
A central design challenge facing network planners is how to select a cost-effective network configuration that can provide uninterrupted service despite edge failures. In this paper, we study the Survivable Network Design (SND) problem, a core model underlying the design of such resilient networks that incorporates complex cost and connectivity trade-offs. Given an undirected graph with specified edge costs and (integer) connectivity requirements between pairs of nodes, the SND problem seeks the minimum cost set of edges that interconnects each node pair with at least as many edge-disjoint paths as the connectivity requirement of the nodes. We develop a hierarchical approach for solving the problem that integrates ideas from decomposition, tabu search, randomization, and optimization. The approach decomposes the SND problem into two subproblems, Backbone design and Access design, and uses an iterative multi-stage method for solving the SND problem in a hierarchical fashion. Since both subproblems are NP-hard, we develop effective optimization-based tabu search strategies that balance intensification and diversification to identify near-optimal solutions. To initiate this method, we develop two heuristic procedures that can yield good starting points. We test the combined approach on large-scale SND instances, and empirically assess the quality of the solutions vis-à-vis optimal values or lower bounds. On average, our hierarchical solution approach generates solutions within 2.7% of optimality even for very large problems (that cannot be solved using exact methods), and our results demonstrate that the performance of the method is robust for a variety of problems with different size and connectivity characteristics.
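For reference, the standard cut-based integer-programming statement of SND (included for orientation; the hierarchical heuristic above does not solve this formulation directly) is

$$\min \sum_{e \in E} c_e\,x_e \quad \text{s.t.} \quad \sum_{e \in \delta(S)} x_e \;\ge\; \max_{i \in S,\; j \notin S} r_{ij} \quad \forall\, S \subset V,\qquad x_e \in \{0,1\},$$

where $\delta(S)$ is the set of edges crossing the cut defined by $S$ and $r_{ij}$ is the connectivity requirement of node pair $(i,j)$; by Menger's theorem, satisfying every such cut constraint is equivalent to providing at least $r_{ij}$ edge-disjoint paths for every node pair.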
Abstract:
In this study, we measure the utilization costs of free trade agreement (FTA) tariff schemes. To do so, we use shipment-level customs data on Thai imports, which identify not only the firm, source country, and commodity but also the tariff scheme. We propose several measures as proxies for FTA utilization costs. One example is the minimum amount of firm-level savings on tariff payments across all transactions, i.e., the trade value under FTA schemes multiplied by the tariff margin. The median costs of FTA utilization in 2008, for example, are estimated to be approximately US$2,000 for exports from China, US$300 for exports from Australia, and US$1,000 for exports from Japan. We also find that FTA utilization costs differ by rule of origin and industry.
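The logic behind this kind of proxy can be written as a revealed-preference bound (an illustration with hypothetical numbers, not figures taken from the data): a firm $f$ uses an FTA scheme only if its tariff saving covers its utilization cost $C_f$, i.e.

$$C_f \;\le\; \sum_{i \in \text{FTA transactions of } f} v_i\,\bigl(t_i^{\mathrm{MFN}} - t_i^{\mathrm{FTA}}\bigr),$$

where $v_i$ is the import value of transaction $i$ and the bracketed term is the tariff margin. For instance, a firm whose only preferential shipment is worth US$40,000 under a 5% tariff margin saves US$2,000, so its utilization cost can be at most US$2,000; the medians reported above summarize such firm-level figures.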
Abstract:
This paper presents a multi-stage algorithm for the dynamic condition monitoring of a gear. The algorithm reports the gear status (faulty or normal) and estimates the mesh stiffness per shaft revolution whenever an abnormality is detected. In the first stage, the analysis of coefficients generated through the discrete wavelet transform (DWT) is proposed as a fault detection and localization tool. The second stage establishes the mesh stiffness reduction associated with local faults by applying a supervised learning scheme coupled with analytical models. To do this, a multi-layer perceptron neural network is configured whose input features are statistical parameters derived from wavelet transforms of the response signal and sensitive to the decrease in torsional stiffness. The proposed method is applied to gear condition monitoring, and the results show that it can update the dynamic mesh properties of the gear online.
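A minimal sketch of such a two-stage pipeline is given below, using PyWavelets for the DWT features and a scikit-learn multi-layer perceptron for the stiffness-reduction estimate. It is not the authors' code: the toy signal model (simulate_mesh_signal), the feature set, and all parameter values are illustrative assumptions.

import numpy as np
import pywt
from scipy.stats import kurtosis, skew
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def wavelet_features(signal, wavelet="db4", level=4):
    """Stage 1: DWT decomposition; statistical features of the detail coefficients."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    feats = []
    for d in coeffs[1:]:                                 # detail coefficients cD4..cD1
        feats += [np.std(d), kurtosis(d), skew(d),
                  np.max(np.abs(d)) / (np.sqrt(np.mean(d**2)) + 1e-12)]  # crest factor
    return np.array(feats)

def simulate_mesh_signal(stiffness_drop, n=2048, fs=10_000, f_mesh=500.0):
    """Toy gear-mesh vibration: a local fault amplifies a short window once per shaft turn."""
    t = np.arange(n) / fs
    x = np.sin(2 * np.pi * f_mesh * t)
    fault = 1.0 + 4.0 * stiffness_drop * (np.sin(2 * np.pi * 25.0 * t) > 0.95)
    return fault * x + 0.1 * rng.standard_normal(n)

# Stage 2: supervised learning of the stiffness reduction from the wavelet features
drops = rng.uniform(0.0, 0.5, size=200)                  # simulated stiffness-reduction labels
X = np.vstack([wavelet_features(simulate_mesh_signal(d)) for d in drops])
mlp = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
mlp.fit(X[:150], drops[:150])
print("held-out MAE:", np.mean(np.abs(mlp.predict(X[150:]) - drops[150:])))

In the paper's setting the labels would instead come from an analytical gear-mesh model, and a detection threshold on the stage-1 features would decide whether the regressor is invoked at all.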
Abstract:
The optimal integration of work and its interaction with heat can yield large energy savings in industrial plants. This paper introduces a new optimization model for the simultaneous synthesis of work exchange networks (WENs) with heat integration for the optimal pressure recovery of gaseous process streams. The proposed approach to WEN synthesis is analogous to the well-known problem of heat exchanger network (HEN) synthesis. Thus, work is exchanged between high-pressure (HP) and low-pressure (LP) streams through pressure-manipulation equipment running on common shafts. The model allows the use of several single-shaft turbine-compressor (SSTC) units, as well as stand-alone compressors, turbines, and valves. Helper motors and generators are used to meet any energy deficit or absorb any excess. Moreover, between WEN stages the streams are sent to the HEN to promote thermal recovery, aiming to enhance work integration. A multi-stage superstructure is proposed to represent the process. The WEN superstructure is optimized via a mixed-integer nonlinear programming (MINLP) formulation and solved with the GAMS software, with the goal of minimizing the total annualized cost. Three examples are presented to verify the accuracy of the proposed method. In all case studies, the heat integration between WEN stages is essential to improve pressure recovery and to reduce the total cost of the process.
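In broad strokes, the objective of such a superstructure model has the shape (a generic sketch of a total annualized cost, not the paper's exact MINLP; the symbols are illustrative)

$$\min\;\mathrm{TAC} \;=\; f_{\mathrm{an}}\sum_{u}C_u^{\mathrm{cap}} \;+\; c_{\mathrm{el}}\,t_{\mathrm{op}}\Bigl(\sum_{m}W_m^{\mathrm{mot}}-\sum_{g}W_g^{\mathrm{gen}}\Bigr) \;+\; \sum_{j}c_j\,Q_j^{\mathrm{util}},$$

where $f_{\mathrm{an}}$ annualizes the capital costs $C_u^{\mathrm{cap}}$ of the compressors, turbines, SSTC units, and heat exchangers, $W_m^{\mathrm{mot}}$ and $W_g^{\mathrm{gen}}$ are helper-motor and generator powers, and $Q_j^{\mathrm{util}}$ are the residual hot and cold utility duties of the HEN; binary variables select which units exist in the superstructure, which is what makes the problem an MINLP.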