930 results for Input-output data


Relevance:

90.00%

Publisher:

Abstract:

We present a mechanistic modeling methodology to predict both the percolation threshold and the effective conductivity of infiltrated Solid Oxide Fuel Cell (SOFC) electrodes. The model has been developed to mirror each step of the experimental fabrication process. The primary model output is the infiltrated electrode's effective conductivity, which provides results over a range of infiltrate loadings that are independent of the chosen electronically conducting material. The percolation threshold is used as a valuable output data point, directly related to the effective conductivity, for comparing a wide range of input value choices. The predictive capability of the model is demonstrated by favorable comparison to two separate published experimental studies, one using strontium molybdate and one using La0.8Sr0.2FeO3-δ as the infiltrate material. Effective conductivities and percolation thresholds are shown for varied infiltrate particle size, pore size, and porosity, with the infiltrate particle size having the largest impact on the results.
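Percolation thresholds of this kind are typically estimated numerically. As a rough, self-contained illustration (not the authors' fabrication-mirroring model), the Python sketch below estimates the classic 2D site-percolation threshold by bisecting on the occupation probability at which a spanning cluster first appears; the lattice abstraction, sizes, and trial counts are assumptions for illustration only.

```python
import numpy as np
from scipy.ndimage import label

def spans(grid):
    # Label connected clusters of occupied sites (4-connectivity).
    labeled, _ = label(grid)
    # A cluster percolates if it touches both the top and bottom rows.
    top = set(labeled[0, :][labeled[0, :] > 0])
    bottom = set(labeled[-1, :][labeled[-1, :] > 0])
    return bool(top & bottom)

def percolation_threshold(n=100, trials=20, seed=0):
    # Bisect on occupation probability p for the spanning transition.
    rng = np.random.default_rng(seed)
    lo, hi = 0.0, 1.0
    for _ in range(25):
        p = 0.5 * (lo + hi)
        hits = sum(spans(rng.random((n, n)) < p) for _ in range(trials))
        if hits >= trials / 2:
            hi = p   # spanning is common: threshold lies below p
        else:
            lo = p   # spanning is rare: threshold lies above p
    return 0.5 * (lo + hi)

# Converges near the known 2D site-percolation value of about 0.593.
print(f"Estimated threshold: {percolation_threshold():.3f}")
```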

Relevance:

90.00%

Publisher:

Abstract:

Civil infrastructure provides essential services for the development of both society and the economy. Managing these systems efficiently to ensure sound performance is very important, yet extracting information from the available data is challenging, which necessitates methodologies and frameworks to assist stakeholders in the decision-making process. This research proposes methodologies to evaluate system performance by maximizing the use of available information, in an effort to build and maintain sustainable systems. Under the guidance of the holistic problem formulation proposed by Mukherjee and Muga, this research specifically investigates problem-solving methods that measure and analyze metrics to support decision making.

Failures are inevitable in system management. A methodology is developed to describe the arrival pattern of failures in order to assist engineers in failure rescue and budget prioritization, especially when funding is limited. It reveals that blockage arrivals are not totally random; smaller, meaningful subsets show good random behavior. The failure rate over time is further analyzed by applying existing reliability models and non-parametric approaches, and a scheme is proposed to depict failure rates over the lifetime of a given facility system. Further analysis of sub-data sets is also performed, with a discussion of context reduction.

Infrastructure condition is another important indicator of system performance. The challenges in predicting facility condition lie in estimating transition probabilities and in model sensitivity analysis. Methods are proposed to estimate transition probabilities by investigating the long-term behavior of the model and the relationship between transition rates and probabilities. To integrate heterogeneities, a sensitivity analysis is performed for the application of a non-homogeneous Markov chain model. Scenarios are investigated by assuming that transition probabilities follow a Weibull-regressed function or fall within an interval estimate. For each scenario, multiple cases are simulated using Monte Carlo simulation. Results show that variations in the outputs are sensitive to the probability regression, while for the interval estimate the outputs show variations similar to the inputs.

Life cycle cost analysis and life cycle assessment of a sewer system are performed comparing three pipe types: reinforced concrete pipe (RCP), non-reinforced concrete pipe (NRCP), and vitrified clay pipe (VCP). Life cycle cost analysis covers the material extraction, construction, and rehabilitation phases; in the rehabilitation phase, a Markov chain model supports the rehabilitation strategy. In the life cycle assessment, the Economic Input-Output Life Cycle Assessment (EIO-LCA) tools are used to estimate environmental emissions for all three phases. Emissions are then compared quantitatively among the alternatives to support decision making.
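A minimal sketch of the kind of non-homogeneous Markov chain condition model described above: discrete condition states, a Weibull-shaped transition probability that grows with facility age, and Monte Carlo simulation over many runs. All states and parameter values are invented for illustration, not taken from the dissertation.

```python
import numpy as np

def weibull_transition(t, scale=30.0, shape=2.5):
    # Hypothetical probability of dropping one condition state in year t,
    # derived from a Weibull hazard (increasing with age for shape > 1).
    h = (shape / scale) * (t / scale) ** (shape - 1)
    return 1.0 - np.exp(-h)

def simulate_condition(years=50, states=5, runs=10_000, seed=0):
    # Monte Carlo over a non-homogeneous chain: state 5 = best, 1 = worst.
    rng = np.random.default_rng(seed)
    state = np.full(runs, states)
    mean_state = []
    for t in range(1, years + 1):
        p = weibull_transition(t)
        drop = (rng.random(runs) < p) & (state > 1)
        state = state - drop          # booleans subtract as 0/1
        mean_state.append(state.mean())
    return np.array(mean_state)

traj = simulate_condition()
print("Mean condition at years 10/25/50:",
      traj[9].round(2), traj[24].round(2), traj[49].round(2))
```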

Relevance:

90.00%

Publisher:

Abstract:

This paper describes the open source framework MARVIN for rapid application development in the field of biomedical and clinical research. MARVIN applications consist of modules that can be plugged together in order to provide the functionality required for a specific experimental scenario. Application modules work on a common patient database that is used to store and organize medical data as well as derived data. MARVIN provides a flexible input/output system with support for many file formats including DICOM, various 2D image formats and surface mesh data. Furthermore, it implements an advanced visualization system and interfaces to a wide range of 3D tracking hardware. Since it uses only highly portable libraries, MARVIN applications run on Unix/Linux, Mac OS X and Microsoft Windows.
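The module/plugin architecture described here can be sketched generically. The classes and method names below are hypothetical stand-ins, not MARVIN's actual API; they only illustrate the pattern of modules plugged together over a common patient database.

```python
from dataclasses import dataclass, field

@dataclass
class PatientDatabase:
    # Stand-in for the shared database that modules read and write.
    records: dict = field(default_factory=dict)

class Module:
    # Hypothetical base class: each module transforms the shared data.
    def run(self, db: PatientDatabase) -> None:
        raise NotImplementedError

class DicomImporter(Module):
    def run(self, db: PatientDatabase) -> None:
        db.records["ct_volume"] = "loaded DICOM series"   # placeholder

class SurfaceExtractor(Module):
    def run(self, db: PatientDatabase) -> None:
        ct = db.records["ct_volume"]
        db.records["mesh"] = f"mesh derived from {ct}"    # placeholder

# An "application" is then just a pipeline of modules over the database.
db = PatientDatabase()
for module in (DicomImporter(), SurfaceExtractor()):
    module.run(db)
print(db.records["mesh"])
```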

Relevance:

90.00%

Publisher:

Abstract:

Determining the profit maximizing input-output bundle of a firm requires data on prices. This paper shows how endogenously determined shadow prices can be used in place of actual prices to obtain the optimal input-output bundle where the firm's shadow profit is maximized. This approach amounts to an application of the Weak Axiom of Profit Maximization (WAPM) formulated by Varian (1984) based on shadow prices rather than actual prices. At these prices, the shadow profit of a firm is zero. Thus, the maximum profit that could have been attained at some other input-output bundle is a measure of the inefficiency of the firm. Because the benchmark input-output bundle is always an observed bundle from the data, it can be determined without having to solve any elaborate programming problem. An empirical application to U.S. airlines data illustrates the proposed methodology.
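A stylized sketch of the inefficiency measure described: evaluate every observed input-output bundle at a firm's shadow prices, normalized so the firm's own shadow profit is zero, and take the highest attainable profit as the inefficiency. The bundles and prices below are invented, and the shadow prices are taken as given, whereas the paper derives them endogenously.

```python
import numpy as np

# Observed bundles for 4 firms (rows): two inputs X, one output Y.
X = np.array([[4.0, 2.0], [3.0, 3.0], [5.0, 1.0], [2.0, 4.0]])
Y = np.array([[6.0], [7.0], [5.0], [6.0]])

def wapm_inefficiency(w, p, firm):
    # Shadow prices are assumed pre-scaled so the firm's own profit is 0,
    # mirroring the paper's construction (illustrative, not the derivation).
    own_profit = Y[firm] @ p - X[firm] @ w
    # Profit each observed bundle would earn at these shadow prices.
    profits = Y @ p - X @ w
    # Inefficiency: best attainable profit minus the firm's own (zero).
    return profits.max() - own_profit

# Hypothetical shadow prices already normalized for firm 2.
w = np.array([0.5, 0.5]); p = np.array([0.6])
print("Firm 2 shadow-profit inefficiency:",
      float(wapm_inefficiency(w, p, 2)))
```

Because the benchmark is simply the best observed bundle at those prices, no optimization problem needs to be solved, which is the computational point the abstract emphasizes.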

Relevance:

90.00%

Publisher:

Abstract:

This monograph deals with the management of water resources in large irrigation networks. It describes the case of the Mendoza River, in the province of the same name, which was regulated in 2002. The river rises in the Andes and carries a heavy load of suspended solids, now largely retained by the Potrerillos reservoir. The "clear waters" released from the reservoir cause erosion, which in turn appears to increase seepage in the canals, raising aquifer recharge in some zones and causing water-table rise problems in others. Processes observed in other irrigation districts after river regulation are cited to conclude that the Mendoza River is susceptible to certain damages, already identified in the General Environmental Impact Statement for the Potrerillos reservoir, which are now appearing in the irrigation network. Building on sediment studies of the Mendoza River, a technical analysis is made of the phenomena associated with the change in the physical characteristics of the water, and erosion processes are then described according to classical hydraulics. Conveyance efficiency (Ec) and canal seepage are defined, their importance in different irrigation districts is discussed, and the studies carried out in the Mendoza River area are reviewed. The spatial development of the oasis is analyzed, along with the limited planning behind its canal layout and the age of that layout. The description of the soils supports conclusions about the importance of soil structure and the role the fine fractions play, even when in the minority, within the different textural classes with respect to Ec. The criteria used to distribute water in Mendoza are described, and currently distributed flows are analyzed in relation to water-table levels; actions undertaken by the province to mitigate the effects of clear waters are also mentioned. A review of the methods used to measure Ec shows the state of the art. An analysis of the advantages and disadvantages of the various methods, and of the results they yield, leads to the conclusion that the inflow-outflow method is the best suited to Mendoza; methodological aspects of the measurement are also covered. It is further concluded that Ec is insufficiently evaluated; that the fine soil fractions often matter more for Ec than texture; that studying Ec in the different management areas is therefore necessary to understand waterlogging and aquifer recharge processes; and that administrative losses may matter more than Ec. It is recommended to continue the Ec evaluation work, as it is needed for all activities in the basin; fitting Ec prediction models to this river is not advised; and the characteristics of the soils require careful interpretation and application of the international literature, so that even then no generalizations can be made about Ec in Mendoza.
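The inflow-outflow method recommended above reduces to gauging discharge at the head and tail of a canal reach. A minimal sketch with invented discharges shows the computation of Ec and a per-kilometre loss rate:

```python
def conveyance_efficiency(q_in, q_out, length_km):
    """Inflow-outflow method: Ec and seepage loss rate for a canal reach.

    q_in, q_out: gauged discharges (m3/s) at the head and tail of the reach.
    """
    ec = q_out / q_in                       # conveyance efficiency (0..1)
    loss_rate = (q_in - q_out) / length_km  # m3/s lost per kilometre
    return ec, loss_rate

# Hypothetical reach: 12.0 m3/s in, 10.8 m3/s out over 6 km.
ec, loss = conveyance_efficiency(12.0, 10.8, 6.0)
print(f"Ec = {ec:.1%}, losses = {loss:.2f} m3/s per km")
```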

Relevance:

90.00%

Publisher:

Abstract:

Based on analyses of actual data, we reveal that many Asian developing economies share the structural features of a "non-mono-cultural economy" and a "large primary-goods sector", which have not been discussed in the RBC literature on developing economies. We also examine input-output tables to develop a model reflecting actual developing economies' structures. Building on these analyses, we construct RBC models of ASEAN countries. Based on the models, we find that approximately half of GDP volatility is attributable to domestic productivity shocks, and the remaining half is attributable to price shocks.
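The attribution of GDP volatility to shocks can be illustrated with a toy variance decomposition: simulate independent AR(1) productivity and price shocks, map them into GDP with invented loadings, and compute variance shares. The persistence and shock sizes below are not the paper's calibration.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 200_000

def ar1(rho, sigma):
    # Simulate a zero-mean AR(1) shock process of length T.
    e = rng.normal(0, sigma, T)
    x = np.zeros(T)
    for t in range(1, T):
        x[t] = rho * x[t - 1] + e[t]
    return x

a = ar1(0.9, 0.010)   # domestic productivity shock (toy)
q = ar1(0.8, 0.015)   # price / terms-of-trade shock (toy)
gdp = 1.0 * a + 0.9 * q   # invented log-linear GDP response

# With independent shocks, variance shares decompose additively.
share_a = np.var(1.0 * a) / np.var(gdp)
print(f"GDP variance share from productivity shocks: {share_a:.0%}")
```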

Relevance:

90.00%

Publisher:

Abstract:

With regression formulas replaced by equilibrium conditions, a spatial CGE model can substantially reduce data requirements. Detailed regional analyses are thus possible in countries where only limited regional statistics are available. While regional price differentials play important roles in multi-regional settings, transport does not receive much attention in existing models. This paper formulates a spatial CGE model that explicitly considers the transport sector and FOB/CIF prices. After describing the model, its performance is evaluated by comparing the benchmark equilibrium for China with survey-based regional I-O and interregional I-O tables for 1987. The structure of the Chinese economy is summarized using information obtained from the benchmark equilibrium computation, including regional and sectoral production distributions and price differentials. The equilibrium for 1997 facilitates discussion of the changes in regional economic structure that China experienced over the decade.
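The FOB/CIF distinction made explicit in the model is, at bottom, an accounting relation: the CIF price at a destination adds transport and insurance margins to the FOB price at the origin. A toy conversion with invented prices and margins:

```python
import numpy as np

fob_price = np.array([1.00, 0.95, 1.10])   # producer (FOB) prices by origin
transport_rate = np.array([                # ad valorem transport margin
    [0.00, 0.06, 0.10],                    # from origin r (row)
    [0.06, 0.00, 0.07],                    # to destination s (column)
    [0.10, 0.07, 0.00],
])

# CIF price faced in destination s for goods shipped from origin r.
cif_price = fob_price[:, None] * (1.0 + transport_rate)
print(np.round(cif_price, 3))
```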

Relevance:

90.00%

Publisher:

Abstract:

This paper estimates the elasticity of labor productivity with respect to employment density, a widely used measure of the agglomeration effect, in the Yangtze River Delta, China. A spatial Durbin model is presented that makes explicit the influences of spatial dependence and endogeneity bias in a very simple way. Results of Bayesian estimation using data for 2009 indicate that productivity is influenced by factors correlated with density rather than by density itself, and that spatial spillovers of these agglomeration factors play a significant role. The results are consistent with the findings of Ke (2010) and Artis et al. (2011), which suggest the importance of taking into account spatial dependence and hitherto omitted variables.
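The structure of a spatial Durbin model can be made concrete with a small simulation: y = ρWy + xβ + Wxθ + ε, solved through its reduced form. The weight matrix, coefficients, and spillover calculation below are illustrative assumptions, not the paper's Bayesian estimator.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
# Row-standardized contiguity matrix W for a ring of 6 regions (toy).
W = np.zeros((n, n))
for i in range(n):
    W[i, (i - 1) % n] = W[i, (i + 1) % n] = 0.5

rho, beta, theta = 0.4, 0.8, 0.3           # invented coefficients
x = rng.normal(size=n)                      # e.g. log employment density
eps = rng.normal(scale=0.05, size=n)

# Spatial Durbin model: y = rho*W y + x*beta + W x*theta + eps,
# via the reduced form y = (I - rho W)^{-1} (x beta + W x theta + eps).
A = np.linalg.inv(np.eye(n) - rho * W)
y = A @ (x * beta + W @ x * theta + eps)

# A density change in region 0 spills over to its neighbours:
dy = A @ (np.eye(n)[0] * beta + W[:, 0] * theta)
print("effect of a unit shock to x_0 on each region:", dy.round(3))
```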

Relevance:

90.00%

Publisher:

Abstract:

This paper estimates the impact of industrial agglomeration on firm-level productivity in Chinese manufacturing sectors. To account for spatial autocorrelation across regions, we formulate a hierarchical spatial model at the firm level and develop a Bayesian estimation algorithm. A Bayesian instrumental-variables approach is used to address the endogeneity bias of agglomeration. Robust to these potential biases, we find that agglomeration of the same industry (i.e. localization) has a productivity-boosting effect, but agglomeration of urban population (i.e. urbanization) has no such effect. Additionally, the localization effects increase with the educational level of employees and the share of intermediate inputs in gross output. These results may suggest that agglomeration externalities occur through knowledge spillovers and input sharing among firms producing similar manufactures.
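The endogeneity bias addressed here arises because productive firms may sort into agglomerated areas. The paper's remedy is a Bayesian instrumental-variables approach; the frequentist 2SLS sketch below, on simulated data, conveys the same idea (the instrument, coefficients, and noise are all invented).

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5000
z = rng.normal(size=n)                  # instrument (e.g. historical density)
u = rng.normal(size=n)                  # unobserved productivity factor
agglom = 0.7 * z + 0.5 * u + rng.normal(size=n)   # endogenous regressor
y = 0.2 * agglom + 1.0 * u + rng.normal(size=n)   # firm log productivity

def two_sls(y, x, z):
    # Stage 1: project the endogenous regressor on the instrument.
    Z = np.column_stack([np.ones(n), z])
    x_hat = Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    # Stage 2: regress the outcome on the fitted values.
    Xh = np.column_stack([np.ones(n), x_hat])
    return np.linalg.lstsq(Xh, y, rcond=None)[0][1]

ols = np.polyfit(agglom, y, 1)[0]   # biased upward by sorting on u
print(f"OLS: {ols:.2f}, 2SLS: {two_sls(y, agglom, z):.2f}, true: 0.20")
```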

Relevance:

90.00%

Publisher:

Abstract:

This paper integrates two lines of research into a unified conceptual framework: trade in global value chains and embodied emissions. This allows both value added and emissions to be systematically traced at the country, sector, and bilateral levels through various production network routes. By combining value-added and emissions accounting in a consistent way, the potential environmental cost (amount of emissions per unit of value added) along global value chains can be estimated. Using this unified accounting method, we trace CO2 emissions in the global production and trade network among 41 economies in 35 sectors from 1995 to 2009, basing our calculations on the World Input–Output Database, and show how they help us to better understand the impact of cross-country production sharing on the environment.
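The accounting behind such estimates rests on the Leontief model: with direct emission intensities f and input coefficient matrix A, emissions embodied in a final-demand vector y equal f(I - A)^{-1}y. A two-country, two-sector toy version with invented coefficients:

```python
import numpy as np

# Toy 2-country x 2-sector inter-country input coefficient matrix A.
A = np.array([
    [0.20, 0.05, 0.03, 0.01],
    [0.10, 0.15, 0.02, 0.04],
    [0.02, 0.03, 0.25, 0.08],
    [0.01, 0.06, 0.12, 0.18],
])
f = np.array([0.9, 0.3, 1.4, 0.5])   # direct CO2 per unit of gross output

L = np.linalg.inv(np.eye(4) - A)     # Leontief inverse

# Emissions embodied in one unit of final demand for each product,
# counting all upstream production through the network.
embodied = f @ L
print("embodied CO2 per unit of final demand:", embodied.round(2))

# Emissions embodied in a specific final-demand bundle y.
y = np.array([1.0, 0.5, 0.0, 0.2])
print("total embodied in bundle:", round(float(f @ L @ y), 2))
```

Replacing f with value-added coefficients yields the value-added tracing; dividing the two gives the emissions-per-unit-of-value-added measure the paper calls the potential environmental cost.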

Relevance:

90.00%

Publisher:

Abstract:

The concept and logic of the "smile curve" in the context of global value chains have been widely used and discussed at the individual firm level, but rarely identified and investigated at the country and industry levels using real data. This paper proposes an idea, based on an inter-country input-output model, to consistently measure both the strength and length of linkages between producers and consumers along global value chains. This idea allows for better identification and mapping of smile curves for countries and industries according to their positions and degrees of participation in a given conceptual value chain. Using the 1995-2011 World Input-Output Tables, several conceptual value chains are investigated, including exports of electrical and optical equipment from China and Mexico and exports of automobiles from Japan and Germany. The identified smile curves provide a very intuitive and visual image, which can significantly improve our understanding of the roles played by different countries and industries in global value chains. Further, the smile curves help identify the benefits gained by these countries and industries through their participation in global trade.
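One standard way to place sectors along a value chain is "upstreamness": the expected number of stages a sector's output passes through before reaching final demand, computed from the output-allocation matrix. The paper's strength-and-length measures are more refined, but the toy calculation below (with an invented transactions table) shows the principle.

```python
import numpy as np

# Toy transactions matrix Z (sales from row sector to column sector)
# and gross outputs X for three stages: materials, parts, assembly.
Z = np.array([
    [10.0, 25.0,  5.0],
    [ 0.0,  5.0, 30.0],
    [ 2.0,  0.0,  5.0],
])
X = np.array([60.0, 50.0, 80.0])

# Allocation coefficients: share of sector i's output used by sector j.
D = Z / X[:, None]

# Upstreamness U solves U = 1 + D U, i.e. U = (I - D)^{-1} 1:
# the average number of stages before output meets final demand.
U = np.linalg.solve(np.eye(3) - D, np.ones(3))
for name, u in zip(["materials", "parts", "assembly"], U):
    print(f"{name:9s} upstreamness = {u:.2f}")
```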

Relevance:

90.00%

Publisher:

Abstract:

Using an augmented Chinese input-output table in which information about firm ownership and the type of traded goods is explicitly reported, we show that ignoring firm heterogeneity causes embodied CO2 emissions in Chinese exports to be overestimated by 20% at the national level, with huge differences at the sector level, for 2007. This is because different types of firm that are allocated to the same sector of the conventional Chinese input-output table vary greatly in terms of market share, production technology and carbon intensity. This overestimation of export-related carbon emissions would be even higher if it were not for the fact that 80% of CO2 emissions embodied in exports of foreign-owned firms are, in fact, emitted by Chinese-owned firms upstream of the supply chain. The main reason is that the largest CO2 emitter, the electricity sector located upstream in Chinese domestic supply chains, is strongly dominated by Chinese-owned firms with very high carbon intensity.
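The overestimation mechanism, averaging carbon intensities over heterogeneous firms within one sector, can be reproduced in a two-firm toy example: if exporting firms are cleaner than the sector average, a single aggregated coefficient inflates export-embodied emissions. The real calculation runs through the full input-output structure; the direct-emissions sketch below, with invented numbers, isolates the bias.

```python
import numpy as np

# One sector, two firm types with different carbon intensity (t CO2 / $).
output = np.array([70.0, 30.0])     # domestic-oriented vs export-oriented
intensity = np.array([1.2, 0.4])    # exporters are markedly cleaner
exports = np.array([5.0, 25.0])     # exports come mostly from clean firms

# True embodied emissions in exports, using firm-level coefficients.
true_emissions = (intensity * exports).sum()

# Aggregated-table estimate: sector-average intensity x total exports.
avg_intensity = (intensity * output).sum() / output.sum()
agg_emissions = avg_intensity * exports.sum()

print(f"true: {true_emissions:.1f} t, aggregated: {agg_emissions:.1f} t, "
      f"overestimate: {agg_emissions / true_emissions - 1:.0%}")
```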

Relevance:

90.00%

Publisher:

Abstract:

To tackle global climate change, it is desirable to reduce CO2 emissions associated with household consumption, particularly in developed countries, which tend to have much higher per capita household carbon footprints than less developed countries. Our results show that the carbon intensity of different consumption categories in the U.S. varies significantly. The carbon footprint tends to increase with income, but at a decreasing rate, because additional income is spent on less carbon-intensive consumption items. This general tendency is frequently offset by more frequent international trips and higher housing-related carbon emissions (larger houses and more space for consumption items). Our results also show that more than 30% of CO2 emissions associated with household consumption in the U.S. occur outside of the U.S. Given these facts, the design of carbon mitigation policies should take changing household consumption patterns and international trade into account.
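The footprint arithmetic is spending times carbon intensity, summed over consumption categories. The intensities and budgets below are invented, but they show why footprints tend to grow sublinearly with income as spending shifts toward less carbon-intensive services:

```python
# Toy household footprints: spending ($/yr) x carbon intensity (kg CO2/$).
intensity = {"energy": 2.5, "food": 0.8, "transport": 1.5, "services": 0.25}

low_income  = {"energy": 1500, "food": 4000, "transport": 2000,
               "services": 3000}
high_income = {"energy": 2500, "food": 7000, "transport": 5000,
               "services": 25000}

def footprint(budget):
    # Total kg CO2: spending in each category times its intensity.
    return sum(spend * intensity[cat] for cat, spend in budget.items())

for name, budget in [("low income", low_income),
                     ("high income", high_income)]:
    total = sum(budget.values())
    fp = footprint(budget)
    print(f"{name}: spend ${total:,}, footprint {fp/1000:.1f} t CO2, "
          f"{fp/total:.2f} kg CO2 per $")
```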

Relevance:

90.00%

Publisher:

Abstract:

This PhD thesis addresses the design and implementation of signal processing applications on reconfigurable FPGA platforms. This kind of platform offers high logic capacity, incorporates dedicated signal processing elements, and provides a low-cost solution, which makes it ideal for developing signal processing applications where intensive data processing is required to achieve high performance. However, the cost associated with hardware development on these platforms is high. While the increase in the logic capacity of FPGA devices allows the development of complete systems, high-performance constraints require operators to be optimized at very low level. In addition to the timing constraints imposed by these applications, area constraints tied to the particular device also apply, forcing a design to be evaluated and verified against different implementation alternatives. The design and implementation cycle for these applications can become so long that new FPGA models with greater capacity and higher speed often appear before the system is complete, rendering the constraints that originally guided the design useless.

Different methods can be used to improve productivity when developing these applications and thereby shorten their design cycle. This thesis focuses on the reuse of hardware components previously designed and verified. Although conventional HDLs allow the reuse of already defined components, their specification can be improved to simplify the process of incorporating components into new designs. Thus, the first part of the thesis focuses on the specification of designs based on predefined components. This specification not only improves and simplifies the process of adding components to a description, but also seeks to improve the quality of the specified design by offering more configuration options and even the ability to report characteristics of the description itself.

Reusing an already described component depends largely on the information provided for its integration into a system. Conventional HDLs only provide, together with the component description, the input/output interface and a set of configuration parameters, while the remaining required information is usually supplied as external documentation. The second part of the thesis proposes a set of encapsulations whose purpose is to bundle, along with the component description itself, information useful for its integration into other designs: implementation details, help with configuring the component, and even information on how to configure and connect the component to carry out a given function.

Finally, a classic signal processing application, the fast Fourier transform (FFT), is chosen as a case study for both the proposed specification facilities and the described encapsulations. The resulting design is intended not only to exemplify the proposed specification, but also to achieve an implementation of a quality comparable with results from the literature. To that end, the design targets FPGA implementation, exploiting both general-purpose logic elements and the low-level device-specific elements available. Last, the specification of the resulting FFT is used to show how to incorporate into its interface information that assists in its selection and configuration from early phases of the design cycle.
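The FFT chosen as the case study is easy to sketch in software. Below is a recursive radix-2 decimation-in-time version, illustrating the butterfly structure that hardware implementations parallelize; this is the textbook algorithm, not the thesis's hardware architecture.

```python
import cmath

def fft(x):
    """Recursive radix-2 DIT FFT; len(x) must be a power of 2."""
    n = len(x)
    if n == 1:
        return list(x)
    even = fft(x[0::2])
    odd = fft(x[1::2])
    out = [0j] * n
    for k in range(n // 2):
        twiddle = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + twiddle            # butterfly: upper half
        out[k + n // 2] = even[k] - twiddle   # butterfly: lower half
    return out

# Quick check on an 8-point ramp input.
x = [complex(i) for i in range(8)]
print([round(abs(v), 3) for v in fft(x)])
```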