928 results for E16 - Aggregate Input-Output Analysis
Abstract:
The introduction of a homogeneous road charging system for the use of roads, according to Directive 2011/76/EU, is still under development in most European Union (EU) member states. Spain, like other EU members, has been encouraged to introduce a charging system for Heavy Goods Vehicles (HGVs) throughout the country. This nationwide charge has been postponed because there are serious concerns about its advantages from an economic point of view. Within this context, this paper applies an integrated modeling approach to shape elastic trade coefficients among regions by using a random-utility-based multiregional Input-Output (RUBMRIO) approach and a road transport network model, in order to determine regional distributive and substitutive economic effects by simulating the introduction of a distance-based charge (€/km) on 7,053.8 kilometers of free highways linking the capitals of the Spanish regions. In addition, an in-depth analysis of interregional trade changes is developed to evaluate and characterize the role of the road charging approach in trade relations among regions and across freight-intensive economic sectors. For this purpose, differences in trade relations are described and assessed between a base-case or "do nothing" scenario and a road fee-charge setting scenario. The results show that the specific amount of the charge set for HGVs affects each region differently and to a different extent, because in some regions the increase in commodity prices and in the Generalized Transport Cost reduces their competitiveness within the country.
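A minimal sketch (not the authors' code; all numbers, names and the dispersion parameter are illustrative assumptions) of the mechanism the abstract describes: a distance-based charge is folded into the generalized transport cost, and a logit (random-utility) formulation then yields elastic trade coefficients among regions.

```python
import numpy as np

# Hypothetical inputs: interregional road distances (km) and base transport costs (EUR/tonne)
distance_km = np.array([[0.0, 400.0, 620.0],
                        [400.0, 0.0, 310.0],
                        [620.0, 310.0, 0.0]])
base_cost = np.array([[0.0, 52.0, 78.0],
                      [52.0, 0.0, 41.0],
                      [78.0, 41.0, 0.0]])

charge_per_km = 0.10      # assumed distance-based charge in EUR/km
tonnes_per_truck = 15.0   # assumed average payload, converts EUR/veh-km into EUR/tonne

# Generalized transport cost after the charge is applied on the network
gtc = base_cost + charge_per_km * distance_km / tonnes_per_truck

def trade_coefficients(gtc, attractiveness, dispersion=0.05):
    """Logit-style trade coefficients: share of region i's purchases sourced from region j.

    Utility falls with generalized transport cost; `dispersion` plays the role of the
    random-utility scale parameter (an assumed value that would have to be calibrated)."""
    utility = attractiveness[None, :] - dispersion * gtc
    expu = np.exp(utility - utility.max(axis=1, keepdims=True))
    return expu / expu.sum(axis=1, keepdims=True)

attractiveness = np.log(np.array([120.0, 80.0, 60.0]))  # e.g. log of sectoral output by region
print(trade_coefficients(gtc, attractiveness))
```

Raising the charge increases gtc and shifts the rows of the coefficient matrix toward nearer (or more attractive) suppliers, which is the substitutive effect the paper measures.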
Abstract:
This paper assesses the effect of introducing Longer and Heavier Vehicles (LHVs) on road freight transport demand by applying an integrated modeling approach composed of a Random Utility-Based Multiregional Input-Output model (RUBMRIO) and a road transport network model. The approach strongly supports the view that changes in transport costs derived from the LHV allowance, as well as the economic structure of regions, have both direct and indirect effects on the road freight transport system. In addition, we estimate the magnitude and extent of demand changes in the road freight transportation system by using the commodity-based structure of the approach to identify the effect on traffic flows and on pollutant emissions over the whole network of Spain, considering a sensitivity analysis of the main parameters which determine the share of Heavy Goods Vehicles (HGVs) and LHVs. The results show that the introduction of LHVs will strengthen the competitiveness of the road haulage sector by reducing costs, emissions, and the total number of freight vehicles required.
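A hedged sketch of the kind of sensitivity analysis described (payloads, costs, emission factors and the share parameter are made-up values, not those of the study): the HGV/LHV split is modeled as a logit on cost per tonne-km, and fleet activity and emissions follow from the resulting share.

```python
import numpy as np

def lhv_share(cost_hgv, cost_lhv, scale):
    """Logit share of tonne-km moved by LHVs; `scale` is an assumed sensitivity parameter."""
    return 1.0 / (1.0 + np.exp(-scale * (cost_hgv - cost_lhv)))

tonne_km = 1.0e9                             # assumed freight demand on a corridor
payload_hgv, payload_lhv = 15.0, 24.0        # assumed average payloads (tonnes)
cost_hgv_km, cost_lhv_km = 1.00, 1.20        # assumed operating cost per vehicle-km (EUR)
co2_hgv_km, co2_lhv_km = 0.85, 1.05          # assumed emissions per vehicle-km (kg CO2)

cost_hgv = cost_hgv_km / payload_hgv         # EUR per tonne-km
cost_lhv = cost_lhv_km / payload_lhv

for scale in (5.0, 20.0, 80.0):              # sensitivity to the share parameter
    s = lhv_share(cost_hgv, cost_lhv, scale)
    veh_km = tonne_km * (s / payload_lhv + (1 - s) / payload_hgv)
    co2_t = tonne_km * (s * co2_lhv_km / payload_lhv + (1 - s) * co2_hgv_km / payload_hgv) / 1000.0
    print(f"scale={scale:5.1f}  LHV share={s:.2f}  vehicle-km={veh_km:,.0f}  CO2 (t)={co2_t:,.0f}")
```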
Abstract:
Deterministic safety analysis (DSA) is the procedure used to design safety-related systems, structures and components in nuclear power plants. DSA is based on computational simulations of a set of hypothetical accidents representative of the installation, called design basis scenarios (DBS). Regulatory bodies specify a set of safety magnitudes that must be calculated in the simulations and establish regulatory acceptance criteria (RAC), which are restrictions that the values of those magnitudes must satisfy. Methodologies for performing DSA can be of two types: conservative or realistic. Conservative methodologies use markedly pessimistic predictive models and assumptions and are therefore relatively simple; they do not need to include an uncertainty analysis of their results. Realistic methodologies are based on realistic, generally mechanistic, predictive models and assumptions and are supplemented with an uncertainty analysis of their main results. They are also called BEPU ("Best Estimate Plus Uncertainty") methodologies, and in them uncertainty is represented essentially in probabilistic terms. For conservative methodologies, the RAC are simply restrictions on calculated values of the safety magnitudes, which must remain confined within an "acceptance region" of their range. For BEPU methodologies, the RAC cannot be so simple, because the safety magnitudes are now uncertain variables. The thesis develops the way in which uncertainty is introduced into the RAC. Basically, confinement to the same acceptance region established by the regulator is maintained, but strict fulfillment is not required; rather, a high level of certainty is. In the adopted formalism this is understood as a "high level of probability", where the probability corresponds to the calculation uncertainty of the safety magnitudes. Such uncertainty can be regarded as originating in the inputs to the computational model and being propagated through it. The uncertain inputs include the initial and boundary conditions of the calculation and the empirical model parameters, which are used to incorporate the uncertainty due to model imperfection. Fulfillment of the RAC is therefore required with a probability not lower than a value P0 close to 1 and defined by the regulator (probability or coverage level). However, the calculation uncertainty of the magnitude is not the only uncertainty involved. Even if a model (its basic equations) is perfectly known, the input-output mapping it produces is known only imperfectly (unless the model is very simple). The uncertainty due to this ignorance about the action of the model is called epistemic; it can also be described as uncertainty about the propagation. As a consequence, the probability of fulfilling the RAC cannot be known exactly; it is itself an uncertain quantity. This justifies another term used here for this epistemic uncertainty: metauncertainty. The RAC must incorporate both types of uncertainty: that of the calculation of the safety magnitude (here called aleatory) and that of the calculation of the probability (called epistemic or metauncertainty). The two uncertainties can be introduced in two ways, separately or combined; in either case the RAC becomes a probabilistic criterion.
If the uncertainties are separated, a second-order probability is used; if they are combined, a single probability is used. If a second-order probability is used, the regulator must impose a second level of fulfillment referring to the epistemic uncertainty, called the regulatory confidence level, which must be a number close to 1. The pair formed by the two regulatory levels (probability and confidence) is called the regulatory tolerance level. The thesis argues that the best way to construct the BEPU RAC is by separating the uncertainties, for two reasons. First, experts advocate the separate treatment of aleatory and epistemic uncertainty. Second, the separated RAC is (except in exceptional cases) more conservative than the combined RAC. The BEPU RAC is nothing other than a hypothesis about a probability distribution, and it is verified statistically. The thesis classifies the statistical methods for verifying the BEPU RAC into three categories, according to whether they are based on the construction of tolerance regions, on quantile estimates, or on probability estimates (whether of fulfillment or of exceedance of regulatory limits). Following a recently proposed terminology, the first two categories correspond to Q methods and the third to P methods. The purpose of the classification is not to make an inventory of the many and varied methods in each category, but to relate the categories to one another and to cite the most widely used methods and those best regarded from the regulatory standpoint. Special mention is made of the most used method to date: the nonparametric method of Wilks, together with its extension to the multidimensional case due to Wald. Its P-method counterpart, the Clopper-Pearson interval, typically ignored in the BEPU field, is also described. In this context, the problem of the computational cost of uncertainty analysis is addressed. The Wilks, Wald and Clopper-Pearson methods require the random sample used to have a minimum size, which grows with the required tolerance level. The sample size is an indicator of the computational cost, because each sample element is a value of the safety magnitude that requires a calculation with the predictive models. Special emphasis is placed on the computational cost when the safety magnitude is multidimensional, that is, when the RAC is a multiple criterion. It is shown that when the different components of the magnitude are obtained from the same calculation, the multidimensional character introduces no additional computational cost. This disproves a common belief in the BEPU field: that the multidimensional problem can only be tackled with the Wald extension, whose computational cost grows with the dimension of the problem. In the case (which sometimes occurs) where each component of the magnitude is calculated independently of the others, the influence of the dimension on the cost cannot be avoided. The first BEPU methodologies propagated uncertainties through a surrogate model (metamodel or emulator) of the predictive model or code. The goal of the metamodel is not its predictive capability, which is far inferior to that of the original model, but to replace it exclusively in the propagation of uncertainties.
To that end, the metamodel must be built with the input parameters that contribute most to the uncertainty of the result, which requires a prior importance or sensitivity analysis. Because of its simplicity, the surrogate model involves hardly any computational cost and can be studied exhaustively, for example by means of random samples. As a consequence, the epistemic uncertainty or metauncertainty disappears, and the BEPU criterion for metamodels becomes a simple probability. In short, the regulator will more readily accept the statistical methods that need the fewest assumptions: exact rather than approximate, nonparametric rather than parametric, and frequentist rather than Bayesian. The BEPU criterion is based on a second-order probability. The probability that the safety magnitudes lie in the acceptance region can be understood not only as a success probability or a degree of fulfillment of the RAC; it also has a metric interpretation: it represents a distance (within the range of the magnitudes) from the calculated magnitude to the regulatory acceptance limits. This interpretation gives rise to a definition proposed in this thesis: the probabilistic safety margin. Given a scalar safety magnitude with an upper acceptance limit, the safety margin (SM) between two values A and B of that magnitude is defined as the probability that A is less than B, obtained from the uncertainties of A and B. The probabilistic definition of SM has several advantages: it is dimensionless, it can be combined according to the laws of probability, and it is easily generalized to several dimensions. Moreover, it is not symmetric. The term safety margin can be applied to different situations: the distance from a calculated magnitude to a regulatory limit (licensing margin); the distance from the real value of the magnitude to its calculated value (analytical margin); and the distance from a regulatory limit to the damage threshold of a barrier (barrier margin). This idea of representing distances (in the range of the safety magnitudes) by probabilities can be applied to the study of conservativeness. The analytical margin can be interpreted as the degree of conservativeness (DC) of the calculation methodology. Using probability, the conservativeness of tolerance limits of a magnitude can be quantified, and conservativeness indicators can be established to compare different methods of constructing tolerance limits and regions. A topic that has never been rigorously addressed is the validation of BEPU methodologies. Like any other calculation tool, a methodology must be validated before it can be applied to licensing analyses, by comparing its predictions with real values of the safety magnitudes. Such a comparison can only be made for accident scenarios for which measured values of the safety magnitudes exist, which is basically the case in experimental facilities. The ultimate goal of establishing the RAC is to verify that they are fulfilled by the real values of the safety magnitudes, and not only by their calculated values. The thesis proves that a sufficient condition for this ultimate goal is the joint fulfillment of two criteria: the licensing BEPU RAC and an analogous criterion applied to validation.
The validation criterion must be demonstrated in experimental scenarios and extrapolated to nuclear power plants. The licensing criterion requires a minimum value (P0) of the probabilistic licensing margin; the validation criterion requires a minimum value of the analytical margin (the DC). These minimum levels are essentially complementary: the higher one is, the lower the other. Current regulatory practice imposes a high value on the licensing margin, which means that the required DC is small. Adopting lower values of P0 implies a weaker requirement on the fulfillment of the RAC and, in turn, a stronger requirement on the DC of the methodology. It is important to note that the higher the minimum value of the margin (licensing or analytical), the higher the computational cost of demonstrating it. Thus the computational efforts are also complementary: if one of the levels is high (which increases the demand on fulfilling the corresponding criterion), the computational cost increases. If an intermediate value of P0 is adopted, the required DC is also intermediate, so the methodology does not have to be very conservative, and the total computational cost (licensing plus validation) can be optimized.

ABSTRACT Deterministic Safety Analysis (DSA) is the procedure used in the design of safety-related systems, structures and components of nuclear power plants (NPPs). DSA is based on computational simulations of a set of hypothetical accidents of the plant, named Design Basis Scenarios (DBS). Nuclear regulatory authorities require the calculation of a set of safety magnitudes, and define the regulatory acceptance criteria (RAC) that must be fulfilled by them. Methodologies for performing DSA can be categorized as conservative or realistic. Conservative methodologies make use of pessimistic models and assumptions, and are relatively simple. They do not need an uncertainty analysis of their results. Realistic methodologies are based on realistic (usually mechanistic) predictive models and assumptions, and need to be supplemented with uncertainty analyses of their results. They are also termed BEPU ("Best Estimate Plus Uncertainty") methodologies, and are typically based on a probabilistic representation of the uncertainty. For conservative methodologies, the RAC are simply the restriction of calculated values of safety magnitudes to "acceptance regions" defined on their range. For BEPU methodologies, the RAC cannot be so simple, because the safety magnitudes are now uncertain. In the present Thesis, the inclusion of uncertainty in the RAC is studied. Basically, the restriction to the acceptance region must be fulfilled "with a high certainty level". Specifically, a high probability of fulfillment is required. The calculation uncertainty of the magnitudes is considered as propagated from inputs through the predictive model. Uncertain inputs include model empirical parameters, which store the uncertainty due to the model imperfection. The fulfillment of the RAC is required with a probability not less than a value P0 close to 1 and defined by the regulator (probability or coverage level). Calculation uncertainty is not the only one involved. Even if a model (i.e. the basic equations) is perfectly known, the input-output mapping produced by the model is imperfectly known (unless the model is very simple). This ignorance is called epistemic uncertainty, and it is associated with the process of propagation. In fact, it is propagated to the probability of fulfilling the RAC. Another term used in the Thesis for this epistemic uncertainty is metauncertainty.
The RAC must include the two types of uncertainty: one for the calculation of the magnitude (aleatory uncertainty); the other one, for the calculation of the probability (epistemic uncertainty). The two uncertainties can be taken into account in a separate fashion, or can be combined. In any case the RAC becomes a probabilistic criterion. If uncertainties are separated, a second-order probability is used; if both are combined, a single probability is used. In the first case, the regulator must define a level of fulfillment for the epistemic uncertainty, termed the regulatory confidence level, as a value close to 1. The pair of regulatory levels (probability and confidence) is termed the regulatory tolerance level. The Thesis concludes that the adequate way of setting the BEPU RAC is by separating the uncertainties. There are two reasons to do so: experts recommend the separation of aleatory and epistemic uncertainty; and the separated RAC is in general more conservative than the joint RAC. The BEPU RAC is a hypothesis on a probability distribution, and must be statistically tested. The Thesis classifies the statistical methods to verify RAC fulfillment into 3 categories: methods based on tolerance regions, on quantile estimators, and on probability (of success or failure) estimators. The first two have been termed Q-methods, whereas those in the third category are termed P-methods. The purpose of our categorization is not to make an exhaustive survey of the very numerous existing methods. Rather, the goal is to relate the three categories and examine the most used methods from a regulatory standpoint. The most used method, due to Wilks, and its extension to multidimensional variables (due to Wald) deserve special mention. The P-method counterpart of Wilks' method is the Clopper-Pearson interval, typically ignored in the BEPU realm. The problem of the computational cost of an uncertainty analysis is tackled. The Wilks, Wald and Clopper-Pearson methods require a minimum sample size, which is a growing function of the tolerance level. The sample size is an indicator of the computational cost, because each element of the sample must be calculated with the predictive models (codes). When the RAC is a multiple criterion, the safety magnitude becomes multidimensional. When all its components are outputs of the same calculation, the multidimensional character does not introduce additional computational cost. In this way, a widespread belief in the BEPU realm, stating that the multi-D problem can only be tackled with the Wald extension, is proven to be false. When the components of the magnitude are independently calculated, the influence of the problem dimension on the cost cannot be avoided. The first BEPU methodologies performed the uncertainty propagation through a surrogate model of the code, also termed emulator or metamodel. The goal of a metamodel is not predictive capability, which is clearly inferior to that of the original code, but the capacity to propagate uncertainties with a lower computational cost. The emulator must contain the input parameters contributing the most to the output uncertainty, and this requires a prior importance analysis. The surrogate model is practically inexpensive to run, so that it can be exhaustively analyzed through Monte Carlo. Therefore, the epistemic uncertainty due to sampling is reduced to almost zero, and the BEPU RAC for metamodels involves a simple probability.
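As a concrete illustration of the sample-size point (a hedged sketch; the 95/95 tolerance level shown is the usual regulatory choice, not a value quoted from the thesis): Wilks' formula gives the minimum number of code runs for a one-sided tolerance limit, and the Clopper-Pearson lower bound on the fulfillment probability recovers the same requirement when all runs succeed.

```python
import math
from scipy.stats import beta

def wilks_min_sample(coverage, confidence):
    """Smallest N such that the largest of N runs is a one-sided (coverage, confidence) tolerance limit."""
    return math.ceil(math.log(1.0 - confidence) / math.log(coverage))

def clopper_pearson_lower(successes, n, confidence):
    """One-sided lower confidence bound on the success (RAC fulfillment) probability."""
    if successes == 0:
        return 0.0
    return beta.ppf(1.0 - confidence, successes, n - successes + 1)

P0, conf = 0.95, 0.95                                     # regulatory tolerance level (probability, confidence)
n = wilks_min_sample(P0, conf)
print("Wilks minimum sample size for 95/95:", n)          # 59 code runs
print("Clopper-Pearson lower bound, all runs pass:", round(clopper_pearson_lower(n, n, conf), 4))
# With N = 59 and no failures the bound just clears P0 = 0.95: the P-method view of the same requirement.
```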
The regulatory authority will tend to accept the use of statistical methods which need a minimum of assumptions: exact, nonparametric and frequentist methods rather than approximate, parametric and Bayesian methods, respectively. The BEPU RAC is based on a second-order probability. The probability of the safety magnitudes being inside the acceptance region is a success probability and can be interpreted as a fulfillment degree of the RAC. Furthermore, it has a metric interpretation, as a distance (in the range of magnitudes) from calculated values of the magnitudes to acceptance regulatory limits. A probabilistic definition of safety margin (SM) is proposed in the thesis. The SM from a value A to another value B of a safety magnitude is defined as the probability that A is less severe than B, obtained from the uncertainties of A and B. The probabilistic definition of SM has several advantages: it is nondimensional, ranges in the interval (0,1) and can be easily generalized to multiple dimensions. Furthermore, probabilistic SMs combine according to the laws of probability. A basic property is that probabilistic SMs are not symmetric. There are several types of SM: the distance from a calculated value to a regulatory limit (licensing margin); from the real value to the calculated value of a magnitude (analytical margin); or from the regulatory limit to the damage threshold (barrier margin). These representations of distances (in the magnitudes' range) as probabilities can be applied to the quantification of conservativeness. Analytical margins can be interpreted as the degree of conservativeness (DC) of the computational methodology. Conservativeness indicators are established in the Thesis, useful in the comparison of different methods of constructing tolerance limits and regions. There is a topic which has not been rigorously tackled to date: the validation of BEPU methodologies. Before being applied in licensing, methodologies must be validated, on the basis of comparisons of their predictions with real values of the safety magnitudes. Real data are obtained, basically, in experimental facilities. The ultimate goal of establishing the RAC is to verify that real values (not only calculated values) fulfill them. In the Thesis it is proved that a sufficient condition for this goal is the conjunction of 2 criteria: the BEPU RAC and an analogous criterion for validation. And this last criterion must be proved in experimental scenarios and extrapolated to NPPs. The licensing RAC requires a minimum value (P0) of the probabilistic licensing margin; the validation criterion requires a minimum value of the analytical margin (i.e., of the DC). These minimum values are basically complementary; the higher one of them, the lower the other one. Regulatory practice sets a high value on the licensing margin, so that the required DC is low. The possible adoption of lower values for P0 would imply a weaker requirement on RAC fulfillment and, on the other hand, a stronger requirement on the conservativeness of the methodology. It is important to highlight that a higher minimum value of the licensing or analytical margin requires a higher computational cost. Therefore, the computational efforts are also complementary. If intermediate levels are adopted, the required DC is also intermediate, and the methodology does not need to be very conservative. The total computational effort (licensing plus validation) can then be optimized.
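A hedged numerical sketch of the proposed probabilistic safety margin (the distributions, the peak-cladding-temperature example and its spreads are illustrative assumptions, not results from the thesis): SM(A, B) is estimated as the Monte Carlo frequency with which A falls below B, and the lack of symmetry is visible directly.

```python
import numpy as np

rng = np.random.default_rng(1)

def safety_margin(a_samples, b_samples):
    """Probabilistic safety margin SM(A, B) = P(A < B), estimated from samples of A and B."""
    return float(np.mean(a_samples[:, None] < b_samples[None, :]))

n = 2000
calculated_pct = rng.normal(1350.0, 80.0, n)    # uncertain calculated peak cladding temperature (K)
regulatory_limit = np.full(n, 1477.0)           # a fixed acceptance limit carries no uncertainty
damage_threshold = rng.normal(1600.0, 80.0, n)  # uncertain barrier damage threshold (K)

print("licensing margin SM(calc, limit):", safety_margin(calculated_pct, regulatory_limit))
print("barrier margin SM(limit, damage):", safety_margin(regulatory_limit, damage_threshold))
print("SM(limit, calc):", safety_margin(regulatory_limit, calculated_pct))  # not equal to SM(calc, limit): SM is not symmetric
```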
Abstract:
The current economic model, based on consumption and the permanent pursuit of a higher quality of life, together with a growing world population, contributes to increasing the demand for energy services to cover the energy needs of people and industries. Since the late nineteenth century, energy has been generated mainly from fossil fuels (coal, oil and gas), which have become the predominant energy supply worldwide. The greenhouse gas emissions generated by the provision of energy services have contributed considerably to the historical increase in the atmospheric concentrations of these gases, to the point that fossil fuel consumption is responsible for most anthropogenic emissions (IPCC, 2012). There are several options for reducing greenhouse gas emissions from the energy sector and thereby contributing to climate change mitigation; among others, it would be feasible to increase energy efficiency and to replace fossil fuels with renewable fuels, while guaranteeing a sustainable, competitive and secure energy supply. Of all the renewable energies that could form part of a portfolio of mitigation options, this thesis focuses on bioenergy generated from the energy valorization of agricultural, forestry, livestock and other types of biomass for electricity and heat. In order to show its capacity to contribute to mitigating climate change and its potential contribution to socio-economic development, to distributed energy generation and to reducing certain negative effects on the environment, the Spanish biomass sector as a whole has been analyzed in detail: from the biomass resources existing in Spain, the ways of extracting and processing them and the energy valorization technologies, to their main energy uses and the implementation capacity of the sector in Spain. The international, European and national energy contexts have also been examined, and the support instruments that have contributed directly and indirectly to the development of the sector in Spain have been analyzed in detail. In addition, the thesis integrates the analysis of results obtained with two different methodologies serving different purposes. On the one hand, the environmental and socio-economic results of input-output life cycle assessments of the five biomass technologies most widely used in Spain have been obtained. On the other hand, based on the established energy and environmental objectives, different projections of the medium-term implementation of the sector have been obtained, in the form of energy scenarios with a 2035 horizon, using the TIMES-Spain model. The thesis also offers a series of conclusions and recommendations that could be relevant to the agents that make up the sector's value chain and to other stakeholders, as well as to the formulation of policies and support mechanisms by decision-makers at the central, autonomous and regional government levels, regarding the characteristics and advantages of certain forms of valorization, the social and environmental effects that their use induces, and the capacity of the sector to contribute to policies beyond purely energy-related ones.
In any case, this doctoral thesis aims to help both sector agents and public officials make sound decisions, with the aim of adopting measures oriented to fostering changes in the energy system that increase the proportion of renewable energy and thus contribute to mitigating the threat posed by climate change, not only today but especially in the coming years, for the generations to come.

ABSTRACT The current economic model, based on consumption and the constant search for a greater quality of life, coupled with a growing world population, contributes to increasing the demand for energy services in order to meet the energy needs of people and industries. Since the late nineteenth century, energy has been generated basically from fossil fuels (coal, oil and gas), which have become the predominant world energy supply source. Emissions of greenhouse gases generated by the provision of energy services have contributed significantly to the historical increase in the concentrations of these gases in the atmosphere, to the extent that the consumption of fossil fuels is responsible for most anthropogenic emissions (IPCC, 2012). There are several options to reduce emissions of greenhouse gases in the energy sector and, thereby, to contribute to mitigating climate change. Among others, it would be feasible to increase energy efficiency and to progressively replace fossil fuels with renewable fuels, which are able to ensure a sustainable, competitive and secure energy supply. Of all the renewable energies likely to form part of a portfolio of mitigation options, this thesis focuses on bioenergy generated from agricultural, forestry, farming or other kinds of biomass, for electrical and thermal purposes. In order to show its ability to contribute to mitigating climate change and its potential contribution to socio-economic development, to distributed energy generation and to reducing certain negative effects on the environment, the Spanish biomass sector as a whole has been examined in detail: from the types of biomass resources that exist in Spain and the ways of extracting and processing them, to the energy production technologies, their main energy uses and the implementation capacity of the sector in Spain. The thesis also examines the international, European and national energy context, and thoroughly analyzes the support instruments that have so far contributed directly and indirectly to the development of the sector in Spain. Furthermore, the thesis integrates the analysis of results obtained using two different methodologies with different purposes. On the one hand, the environmental and socio-economic results of input-output life cycle analyses of the five biomass technologies most widely used in Spain have been obtained. On the other hand, different projections of the implementation of the sector in the medium term, as energy scenarios with a 2035 horizon, have been obtained with the TIMES-Spain model, based on the established energy and environmental objectives. The thesis also offers a series of conclusions and recommendations that could be relevant to the agents that constitute the value chain of the biomass sector itself and to other stakeholders.
These conclusions and recommendations may also inform policies and support mechanisms for decision-makers in both the central and the regional governments, regarding the characteristics and advantages of certain forms of energy valorization, the social and environmental effects that their use induces, and the ability of the biomass sector to contribute to policies beyond purely energy-related ones. In any case, this thesis aims to support sound decision making by both industry players and public officials, in order to adopt measures that promote changes in the energy system which increase the proportion of renewable energy and, consequently, contribute to mitigating the threat of climate change, not only today but especially in the coming years, for future generations.
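A hedged sketch of the input-output life cycle calculation underlying the first methodology (a generic environmentally extended Leontief model with made-up sectors and coefficients, not the study's data): total emissions and employment triggered by a final demand for biomass energy follow from the Leontief inverse and sectoral intensity vectors.

```python
import numpy as np

# Toy 3-sector economy: biomass power, agriculture/forestry (feedstock), rest of the economy
A = np.array([[0.05, 0.00, 0.01],    # technical coefficients (inputs per unit of output)
              [0.30, 0.10, 0.02],
              [0.25, 0.20, 0.15]])
co2_intensity = np.array([0.10, 0.30, 0.25])    # kg CO2e per EUR of output (assumed)
jobs_intensity = np.array([8.0, 15.0, 6.0])     # jobs per million EUR of output (assumed)

y = np.array([1.0e6, 0.0, 0.0])                 # final demand: 1 MEUR of biomass electricity

L = np.linalg.inv(np.eye(3) - A)                # Leontief inverse
x = L @ y                                       # total (direct + indirect) output by sector

print("total output by sector (EUR):", x.round(0))
print("life-cycle CO2e (kg):", float(co2_intensity @ x))
print("employment supported (jobs):", float(jobs_intensity @ x / 1.0e6))
```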
Abstract:
Any active motor task is produced by the activation of a population of motor units. However, due to several difficulties, both technical and ethical, it is not possible to measure the synaptic input to motoneurons in humans. For these reasons, realistic computational models of a motoneuron pool and its corresponding muscle fibers play an important role in the study of human muscle control. However, such models are complex and their mathematical analysis is difficult. This text presents a system identification approach applied to a realistic model of a motor unit pool, with the goal of obtaining a simpler model capable of representing the transduction of the pool's inputs into the force of the muscle associated with it. The system identification was based on an orthogonal least squares algorithm to find a NARMAX model, with the total dendritic excitatory synaptic conductance of the motoneurons as input and the muscle force produced by the motor unit pool as output. The identified model reproduced the average behavior of the output of the realistic computational model, even for input-output signal pairs not used during the identification process, such as sinusoidally modulated muscle force signals. Generalized frequency response functions of the motoneuron pool were obtained from the NARMAX model and led to the inference that cortical oscillations in the beta band (20 Hz) can influence the control of force generation by the spinal cord, a behavior of the motoneuron pool that was previously unknown.
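A minimal sketch of the kind of identification described (a polynomial NARX model fitted by ordinary least squares on synthetic data; the study's full NARMAX procedure with orthogonal least squares term selection and noise terms is not reproduced here, and the data-generating system below is an assumption used only for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic input-output data standing in for total synaptic conductance (u) and muscle force (y)
N = 2000
u = 0.5 + 0.1 * rng.standard_normal(N)
y = np.zeros(N)
for k in range(2, N):
    # toy nonlinear system used only to generate data
    y[k] = 0.6 * y[k-1] - 0.1 * y[k-2] + 0.8 * u[k-1] + 0.3 * u[k-1]**2 + 0.01 * rng.standard_normal()

def narx_regressors(y, u, k):
    """Polynomial NARX terms up to degree 2 with lags 1..2 (an assumed, fixed model structure)."""
    lags = [y[k-1], y[k-2], u[k-1], u[k-2]]
    quad = [a * b for i, a in enumerate(lags) for b in lags[i:]]
    return np.array([1.0] + lags + quad)

Phi = np.vstack([narx_regressors(y, u, k) for k in range(2, N)])
theta, *_ = np.linalg.lstsq(Phi, y[2:], rcond=None)

# One-step-ahead prediction error as a quick check of the identified model
pred = Phi @ theta
print("RMS one-step prediction error:", np.sqrt(np.mean((pred - y[2:]) ** 2)))
```

In the orthogonal least squares (NARMAX) version, candidate regressors like those above are ranked by their error-reduction ratio and only the most significant terms are kept, which is what keeps the identified model simple.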
Abstract:
The economic literature has focused attention on services offshoring and its effect on national employment, accompanied by strong criticism of the negative impact this strategy has in terms of the destruction of domestic jobs. This paper analyzes the relevance of services offshoring in the Spanish economy, and in the service branches in particular, and studies its effect on employment in this sector. The empirical analysis is carried out by estimating a constant elasticity of substitution (CES) labor demand function that includes the offshoring effect. The study covers the pre-crisis period 2000-2007, using the data contained in the Input-Output Tables of the INE National Accounts.
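A hedged sketch of the type of specification involved (the exact functional form, controls and offshoring measure used in the paper are not given in the abstract): a log-linear labor demand equation derived from a CES technology, augmented with an offshoring intensity term,

$$\ln L_{it} = \alpha_i + \beta \,\ln\!\left(\frac{w_{it}}{p_{it}}\right) + \gamma \,\ln Y_{it} + \delta \,\mathrm{OFF}_{it} + \varepsilon_{it},$$

where $L_{it}$ is employment in service branch $i$ in year $t$, $w_{it}/p_{it}$ the real wage, $Y_{it}$ output, and $\mathrm{OFF}_{it}$ an offshoring intensity indicator built from the Input-Output Tables (for example, imported intermediate services over total intermediate inputs); $\beta$ is tied to the elasticity of substitution, and $\delta$ captures the employment effect of offshoring.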
Abstract:
"Supported in part by the Office of Naval Research. Contract no.N00014-67-A-0305-0007."
Abstract:
The Raf-MEK-ERK MAP kinase cascade transmits signals from activated receptors into the cell to regulate proliferation and differentiation. The cascade is controlled by the Ras GTPase, which recruits Raf from the cytosol to the plasma membrane for activation. In turn, MEK, ERK, and scaffold proteins translocate to the plasma membrane for activation. Here, we examine the input-output properties of the Raf-MEK-ERK MAP kinase module in mammalian cells activated in different cellular contexts. We show that the MAP kinase module operates as a molecular switch in vivo but that the input sensitivity of the module is determined by subcellular location. Signal output from the module is sensitive to low-level input only when it is activated at the plasma membrane. This is because the threshold for activation is low at the plasma membrane, whereas the threshold for activation is high in the cytosol. Thus, the circuit configuration of the module at the plasma membrane generates maximal outputs from low-level analog inputs, allowing cells to process and respond appropriately to physiological stimuli. These results reveal the engineering logic behind the recruitment of elements of the module from the cytosol to the membrane for activation.
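An illustrative sketch (not from the paper; the Hill form and all parameter values are assumptions) of the behavior described above: if the module's input-output relation is represented as a steep, switch-like function whose activation threshold depends on subcellular location, a low-level analog input produces near-maximal output only at the plasma membrane.

```python
import numpy as np

def module_output(signal_input, threshold, hill_coeff=4.0):
    """Switch-like input-output relation of a kinase module, modeled as a Hill equation."""
    return signal_input**hill_coeff / (threshold**hill_coeff + signal_input**hill_coeff)

low_input = 0.2  # arbitrary units of upstream (Ras) activation
print("plasma membrane (low threshold): ", module_output(low_input, threshold=0.1))  # near-maximal output
print("cytosol (high threshold):        ", module_output(low_input, threshold=1.0))  # output stays near zero
```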
Abstract:
Electronic communications devices intended for government or military applications must be rigorously evaluated to ensure that they maintain data confidentiality. High-grade information security evaluations require a detailed analysis of the device's design, to determine how it achieves necessary security functions. In practice, such evaluations are labour-intensive and costly, so there is a strong incentive to find ways to make the process more efficient. In this paper we show how well-known concepts from graph theory can be applied to a device's design to optimise information security evaluations. In particular, we use end-to-end graph traversals to eliminate components that do not need to be evaluated at all, and minimal cutsets to identify the smallest group of components that needs to be evaluated in depth.
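A hedged sketch of the two graph-theoretic steps described, using networkx on a toy device graph (the component names and topology are assumptions): components lying on no path from an input port to an output port are eliminated from the evaluation, and a minimum node cut identifies the smallest group of components to examine in depth.

```python
import networkx as nx

# Toy data-flow graph of a device: edges are possible information flows between components
G = nx.DiGraph([
    ("plaintext_in", "crypto"), ("crypto", "framer"), ("framer", "ciphertext_out"),
    ("key_loader", "crypto"),
    ("status_led_driver", "led"),       # this branch never reaches an output port
    ("plaintext_in", "audit_log"),      # dead end with respect to the external interfaces
])
inputs, outputs = {"plaintext_in", "key_loader"}, {"ciphertext_out"}

# Step 1: keep only components reachable from an input AND able to reach an output
reach_from_in = set().union(*(nx.descendants(G, s) | {s} for s in inputs))
reach_to_out = set().union(*(nx.ancestors(G, t) | {t} for t in outputs))
relevant = reach_from_in & reach_to_out
print("components needing evaluation:", relevant)
print("eliminated:", set(G) - relevant)

# Step 2: smallest group of components whose behaviour controls all input-to-output flow,
# found as a minimum s-t node cut over the relevant subgraph
H = G.subgraph(relevant).copy()
H.add_node("S"); H.add_node("T")
H.add_edges_from(("S", s) for s in inputs if s in H)
H.add_edges_from((t, "T") for t in outputs if t in H)
print("minimal cutset to evaluate in depth:", nx.minimum_node_cut(H, "S", "T"))
```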
Abstract:
Drawing on extensive academic research and theory on clusters and their analysis, the methodology employed in this pilot study (sponsored by the Welsh Assembly Government's Economic Research Grants Assessment Board) seeks to create a framework for reviewing and monitoring clusters in Wales on an ongoing basis, and to generate the information necessary for successful cluster development policy to occur. The multi-method framework developed and tested in the pilot study is designed to map existing Welsh sectors with cluster characteristics, uncover existing linkages, and better understand areas of strength and weakness. The approach adopted relies on synthesising both quantitative and qualitative evidence. Statistical measures, including the size of potential clusters, are combined with other evidence on input-output derived inter-linkages within clusters and to other sectors in Wales and the UK, as well as the export and import intensity of the cluster. Multi Sector Qualitative Analysis is then designed to assess competencies/capacity, risk factors, markets, types and, crucially, the perceived strengths of cluster structures and relationships. The approach outlined above can, with the refinements recommended through the review process, provide policy-makers with a valuable tool for reviewing and monitoring individual sectors and ameliorating problems in sectors likely to decline further.
Abstract:
As we enter the 21st Century, technologies originally developed for defense purposes, such as computers and satellite communications, appear to have become a driving force behind economic growth in the United States. Paradoxically, almost all previous econometric models suggest that the largely defense-oriented federal industrial R&D funding that helped create these technologies had no discernible effect on U.S. industrial productivity growth. This paper addresses this paradox by stressing that defense procurement, as well as federal R&D expenditures, was targeted to a few narrowly defined manufacturing sub-sectors that produced high-tech weaponry. Analysis employing data from the NBER Manufacturing Productivity Database and the BEA's Input-Output tables then demonstrates that defense procurement policies did have significant effects on the productivity performance of disaggregated manufacturing industries because of a process of procurement-driven technological change.
Abstract:
This thesis is concerned with the study of a non-sequential identification technique, so that it may be applied to the identification of process plant mathematical models from process measurements with the greatest degree of accuracy and reliability. In order to study the accuracy of the technique under differing conditions, simple mathematical models were set up on a parallel hybrid computer and these models were identified from input/output measurements by a small on-line digital computer. Initially, the simulated models were identified on-line. However, this method of operation was found to be unsuitable for a thorough study of the technique due to equipment limitations. Further analysis was carried out in a large off-line computer using data generated by the small on-line computer; hence identification was not strictly on-line. Results of the work have shown that the identification technique may be successfully applied in practice. An optimum sampling period is suggested, together with noise level limitations for maximum accuracy. A description of a double-effect evaporator is included in this thesis. It is proposed that the next stage in the work will be the identification of a mathematical model of this evaporator using the technique described.
Abstract:
This paper re-assesses three independently developed approaches that are aimed at solving the problem of zero-weights or non-zero slacks in Data Envelopment Analysis (DEA). The methods are weights restricted, non-radial and extended facet DEA models. Weights restricted DEA models are dual to envelopment DEA models with restrictions on the dual variables (DEA weights) aimed at avoiding zero values for those weights; non-radial DEA models are envelopment models which avoid non-zero slacks in the input-output constraints. Finally, extended facet DEA models recognize that only projections on facets of full dimension correspond to well defined rates of substitution/transformation between all inputs/outputs which in turn correspond to non-zero weights in the multiplier version of the DEA model. We demonstrate how these methods are equivalent, not only in their aim but also in the solutions they yield. In addition, we show that the aforementioned methods modify the production frontier by extending existing facets or creating unobserved facets. Further we propose a new approach that uses weight restrictions to extend existing facets. This approach has some advantages in computational terms, because extended facet models normally make use of mixed integer programming models, which are computationally demanding.
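A hedged sketch (a plain input-oriented CCR envelopment model with toy data, not one of the three extended formulations compared in the paper) of how a DEA efficiency score is obtained as a linear program; the zero-weight and non-zero-slack issues the paper addresses arise when the optimal projection lies on a facet that is not of full dimension.

```python
import numpy as np
from scipy.optimize import linprog

# Toy data: 4 DMUs, 2 inputs, 1 output (columns are DMUs)
X = np.array([[2.0, 4.0, 3.0, 5.0],
              [3.0, 1.0, 2.0, 4.0]])
Y = np.array([[1.0, 1.0, 1.0, 1.0]])

def ccr_input_efficiency(X, Y, j):
    """Input-oriented CCR envelopment LP for DMU j: min theta s.t. X@lam <= theta*x_j, Y@lam >= y_j, lam >= 0."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]                       # variables: [theta, lambda_1..lambda_n]
    A_ub = np.vstack([np.c_[-X[:, j], X],             # X @ lam - theta * x_j <= 0
                      np.c_[np.zeros(s), -Y]])        # -Y @ lam <= -y_j
    b_ub = np.r_[np.zeros(m), -Y[:, j]]
    bounds = [(None, None)] + [(0.0, None)] * n
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.fun

for j in range(X.shape[1]):
    print(f"DMU {j}: efficiency = {ccr_input_efficiency(X, Y, j):.3f}")
```

In this radial model a DMU can score 1 while still having positive slacks; the weights-restricted, non-radial and extended-facet variants discussed in the paper are different ways of removing exactly that weakness.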
Abstract:
Liquid-liquid extraction has long been known as a unit operation that plays an important role in industry. This process is well known for its complexity and sensitivity to operating conditions. This thesis presents an attempt to explore the dynamics and control of this process using a systematic approach and state-of-the-art control system design techniques. The process was first studied experimentally under carefully selected operating conditions, which resemble the ranges employed in practice under stable and efficient operation. Data were collected at steady-state conditions using adequate sampling techniques for the dispersed and continuous phases, as well as during the transients of the column, with the aid of a computer-based online data logging system and online concentration analysis. A stagewise single-stage backflow model was improved to mimic the dynamic operation of the column. The developed model accounts for the variation in hydrodynamics, mass transfer, and physical properties throughout the length of the column. End effects were treated by the addition of stages at the column entrances. Two parameters were incorporated in the model, namely a mass transfer weight factor, to correct for the assumption of no mass transfer in the settling zones at each stage, and backmixing coefficients, to handle the axial dispersion phenomena encountered in the course of column operation. The parameters were estimated by minimizing the differences between the experimental and the model-predicted concentration profiles at steady-state conditions using a non-linear optimisation technique. The estimated values were then correlated as functions of the operating parameters and were incorporated in the model equations. The model equations comprise a stiff differential-algebraic system, which was solved using the GEAR ODE solver. The calculated concentration profiles were compared to those experimentally measured, and a very good agreement between the two profiles was achieved, within a relative error of ±2.5%. The developed rigorous dynamic model of the extraction column was used to derive linear time-invariant reduced-order models that relate the input variables (agitator speed, solvent feed flowrate and concentration, feed concentration and flowrate) to the output variables (raffinate concentration and extract concentration) using the asymptotic method of system identification. The reduced-order models were shown to be accurate in capturing the dynamic behaviour of the process, with a maximum modelling prediction error of 1%. The simplicity and accuracy of the derived reduced-order models allow for control system design and analysis of such complicated processes. The extraction column is a typical multivariable process, with agitator speed and solvent feed flowrate considered as manipulated variables, raffinate concentration and extract concentration as controlled variables, and the feed concentration and feed flowrate as disturbance variables. The control system design of the extraction process was tackled both as a multi-loop decentralised SISO (Single Input Single Output) system and as a centralised MIMO (Multi-Input Multi-Output) system, using both conventional and model-based control techniques such as IMC (Internal Model Control) and MPC (Model Predictive Control). The control performance of each control scheme was studied in terms of stability, speed of response, sensitivity to modelling errors (robustness), setpoint tracking capabilities and load rejection.
For decentralised control, multiple loops were assigned to pair each manipulated variable with a controlled variable according to the interaction analysis and other pairing criteria such as the relative gain array (RGA) and singular value decomposition (SVD). The loops rotor speed-raffinate concentration and solvent flowrate-extract concentration showed weak interaction. Multivariable MPC showed more effective performance than the conventional techniques, since it accounts for loop interactions, time delays, and constraints on the input and output variables.
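A hedged sketch of the RGA-based pairing step mentioned above (the 2x2 steady-state gain matrix is an assumed toy example, not the column's identified gains): diagonal RGA elements close to 1 support the chosen pairing, while off-diagonal values near 0 indicate weak interaction.

```python
import numpy as np

def relative_gain_array(G):
    """RGA of a square steady-state gain matrix: the elementwise product G * (G^{-1})^T."""
    return G * np.linalg.inv(G).T

# Assumed gains; rows: raffinate conc., extract conc.; cols: rotor speed, solvent flowrate
G = np.array([[-1.8, 0.4],
              [0.3, 2.1]])

Lambda = relative_gain_array(G)
print(Lambda)               # near-identity RGA -> pair rotor speed with raffinate conc., solvent flowrate with extract conc.
print(Lambda.sum(axis=1))   # each row of an RGA sums to 1 (a quick sanity check)
```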
Abstract:
DEA literature continues apace but software has lagged behind. This session uses suitably selected data to present newly developed software which includes many of the most recent DEA models. The software enables the user to address a variety of issues not frequently found in existing DEA software, such as:
- Assessments under a variety of possible assumptions of returns to scale, including NIRS and NDRS;
- Scale elasticity computations;
- Numerous input/output variables and a truly unlimited number of assessment units (DMUs);
- Panel data analysis;
- Analysis of categorical data (multiple categories);
- The Malmquist Index and its decompositions;
- Computation of super-efficiency;
- Automated removal of super-efficient outliers under user-specified criteria;
- Graphical presentation of results;
- Integrated statistical tests.