18 results for Buy and hold -menetelmä
at Universidad Politécnica de Madrid
Abstract:
The author participated in the 6th EU Framework Project "Q-PorkChains" (FP6-036245-2) from 2007 to 2009. A review of the project's work reports from China and other countries showed that, compared with other countries, China has serious problems in pork quality and safety. A comparison of pork chain management between China and Spain showed that governance structure is one of the main differences between the two countries. In China, spot-market relationships still dominate the governance structure of the pork chain, especially between the numerous household pig holders and the large number of small slaughterhouses, whereas in Spain chain agents commonly cooperate through cooperatives or integration. Recent studies on quality management at the chain level have also shown that supply chain integration has a direct effect on quality management practices (Han, 2010). The author therefore set out to investigate governance structure choices in supply chain management. This became the first research objective: to explain the process of governance structure choice and its influencing factors in supply chain management, analyzing pork chain cases in Spain and China. During further investigation, the author noticed that international pork trade between Spain and China has not run smoothly since the signature of the bilateral agreement on pork trade in 2007. The second objective of the research is therefore to identify and solve the problems that exist in the international pork chain between Spain and China. For the first objective, explaining governance structure choices in supply chain management, the thesis conducts research in three main sections. First, chapter two gives a literature overview of Supply Chain Management (SCM), agri-food chain management and pork chain management. It concludes that SCM is a systems approach that views supply chains as a whole and manages the total flow of goods from the supplier to the ultimate customer. It includes the bi-directional flow of products (materials and services) and information, together with the associated managerial and operational activities. It is also a customer focus that creates a unique and individual source of customer value with an appropriate use of resources, leading to customer satisfaction and building competitive chain advantages. Agri-food chain management and pork chain management are applications of SCM in the agri-food and pork sectors respectively. Chapter three then presents a comparative study of the pork chain and pork chain management in Spain and China. Many differences are found, the main one being governance structure in pork chain management. Chapter five presents an empirical study on governance structure choice. It concludes that the governance structure of a supply chain consists of a collection of rules, institutions and constraints structuring the transactions between the various stakeholders.
Based on an overview of the literature closely related to governance structure, such as transaction cost economics, transaction value analysis and resource-based view theories, seven hypotheses are proposed. Hypothesis 1: transaction cost has a positive relationship with governance structure choice. Hypothesis 2: uncertainty has a positive relationship with transaction cost; higher uncertainty implies higher transaction cost. Hypothesis 3: the relationship between asset specificity and transaction cost is positive. Hypothesis 4: collaboration advantages and governance structure choice have a positive relationship. Hypothesis 5: willingness to collaborate has a positive relationship with collaboration advantages. Hypothesis 6: capability to collaborate has a positive relationship with collaboration advantages. Hypothesis 7: uncertainty has a negative effect on collaboration advantages. Note that, as the transaction cost value is negative, the transaction cost mentioned in the hypotheses refers to its absolute value. To test the seven hypotheses, a Structural Equation Model (SEM) is applied to data collected from 350 pork slaughtering and processing companies in Jiangsu, Shandong and Henan Provinces in China. Based on the empirical SEM model and its results, the seven hypotheses are supported, and the author draws several conclusions. The governance structure choice of the chain depends not only on transaction cost but also on collaboration advantages. Exchange partners establish more stable and more intense relationships to reduce transaction cost and to maximize collaboration advantages. "Collaboration advantages" in this thesis is defined as the joint value achieved through the transactions (mutual activities) of agents in supply chains. This value takes the form of improvements, mainly in mutual logistics systems, cash response, information exchange, technological and innovative improvements, and quality management, among others. Governance structure choice is jointly decided by transaction cost and collaboration advantages: chain agents adopt different governance structures to coordinate, in order to decrease their transaction cost and to increase their collaboration advantages. In China's pork chain, the spot-market relationship dominates the governance structure among the numerous backyard pig farmers and small family slaughterhouses, as they are connected by acquaintance relationships and their transaction cost is therefore low. Their relationship is reliable because they know each other in the neighborhood; as a result, the spot-market relationship suits their exchange. However, transactions between large-scale slaughtering and processing industries and small-scale pig producers are becoming difficult. The information-withholding and hold-up behavior of small-scale pig producers increases the transaction cost between them and the large-scale slaughtering and processing industries. Thus, through a more intense and stable relationship with pig producers, processing industries reduce transaction cost and improve collaboration advantages with their chain partners; in particular, quality and safety collaboration advantages increase, meaning that processing industries are able to provide consumers with products of better quality and higher safety. It is also concluded that transaction cost is influenced mainly by uncertainty and asset specificity, in line with the new institutional economics theories developed by Williamson O. E.
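As an illustration of how the structural part of such a model can be written down, the sketch below specifies the seven hypothesized paths in lavaan-style syntax as accepted by the Python package semopy; the item names (x1–x12), the observed governance score and the survey file are placeholders, not the actual questionnaire of the thesis.

import pandas as pd
import semopy

# Hypothetical model specification for the seven hypotheses (placeholder item names).
model_desc = """
# measurement part: each latent construct measured by assumed survey items
Uncertainty      =~ x1 + x2
AssetSpecificity =~ x3 + x4
Willingness      =~ x5 + x6
Capability       =~ x7 + x8
TransactionCost  =~ x9 + x10
CollabAdvantage  =~ x11 + x12
# structural part: H2, H3 (transaction cost), H5, H6, H7 (collaboration advantages),
# H1, H4 (governance structure choice, here an observed intensity score)
TransactionCost ~ Uncertainty + AssetSpecificity
CollabAdvantage ~ Willingness + Capability + Uncertainty
Governance      ~ TransactionCost + CollabAdvantage
"""

data = pd.read_csv("slaughterhouse_survey.csv")   # hypothetical file with the 350 responses
model = semopy.Model(model_desc)
model.fit(data)
print(model.inspect())                            # path coefficients and significance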
In China's pork chain, behavioral uncertainty is created by the hold-up behavior of the great number of small pig producers, while the big slaughtering and processing industries have strong asset specificity. On the other hand, collaboration advantages are influenced by chain agents' willingness to collaborate and their capability to cooperate. With the fast growth of large-scale slaughtering and processing industries, these firms are more willing to get to know and make the effort to cooperate with their chain members, and they are more capable of creating joint value together with other chain agents. They are therefore now the main chain agents driving more intense and stable governance structures in China's pork chain. For the second objective, to find and solve the problems in the international pork chain between Spain and China, chapter four analyzes the international pork chain. This study explains, from the chain perspective, why the international trade of pork between Spain and China is insufficient. The first obstacle is the high quality and safety requirements set by the Chinese government, which make it difficult for Spanish companies to obtain authorization to export. Other important reasons why Spain does not export large quantities of pork products to China are that Spanish pork is not competitive in price compared with other countries such as Denmark, the United States and Canada, and that Chinese consumers do not have sufficient information on Spanish pork products. It is concluded that the Chinese government's strong concern about the quality and safety of Spanish pork products makes trade difficult to complete. The two countries need to establish a more stable and intense trade relationship; they should also make information exchange sufficient and efficient and try to break down trade barriers. Spanish companies should consider appropriate pricing strategies to win the Chinese pork market.
Abstract:
Las Tecnologías de la Información y la Comunicación en general e Internet en particular han supuesto una revolución en nuestra forma de comunicarnos, relacionarnos, producir, comprar y vender acortando tiempo y distancias entre proveedores y consumidores. A la paulatina penetración del ordenador, los teléfonos inteligentes y la banda ancha fija y/o móvil ha seguido un mayor uso de estas tecnologías entre ciudadanos y empresas. El comercio electrónico empresa–consumidor (B2C) alcanzó en 2010 en España un volumen de 9.114 millones de euros, con un incremento del 17,4% respecto al dato registrado en 2009. Este crecimiento se ha producido por distintos hechos: un incremento en el porcentaje de internautas hasta el 65,1% en 2010 de los cuales han adquirido productos o servicios a través de la Red un 43,1% –1,6 puntos porcentuales más respecto a 2010–. Por otra parte, el gasto medio por comprador ha ascendido a 831€ en 2010, lo que supone un incremento del 10,9% respecto al año anterior. Si segmentamos a los compradores según por su experiencia anterior de compra podemos encontrar dos categorías: el comprador novel –que adquirió por primera vez productos o servicios en 2010– y el comprador constante –aquel que había adquirido productos o servicios en 2010 y al menos una vez en años anteriores–. El 85,8% de los compradores se pueden considerar como compradores constantes: habían comprado en la Red en 2010, pero también lo habían hecho anteriormente. El comprador novel tiene un perfil sociodemográfico de persona joven de entre 15–24 años, con estudios secundarios, de clase social media y media–baja, estudiante no universitario, residente en poblaciones pequeñas y sigue utilizando fórmulas de pago como el contra–reembolso (23,9%). Su gasto medio anual ascendió en 2010 a 449€. El comprador constante, o comprador que ya había comprado en Internet anteriormente, tiene un perfil demográfico distinto: estudios superiores, clase alta, trabajador y residente en grandes ciudades, con un comportamiento maduro en la compra electrónica dada su mayor experiencia –utiliza con mayor intensidad canales exclusivos en Internet que no disponen de tienda presencial–. Su gasto medio duplica al observado en compradores noveles (con una media de 930€ anuales). Por tanto, los compradores constantes suponen una mayoría de los compradores con un gasto medio que dobla al comprador que ha adoptado el medio recientemente. Por consiguiente es de interés estudiar los factores que predicen que un internauta vuelva a adquirir un producto o servicio en la Red. La respuesta a esta pregunta no se ha revelado sencilla. En España, la mayoría de productos y servicios aún se adquieren de manera presencial, con una baja incidencia de las ventas a distancia como la teletienda, la venta por catálogo o la venta a través de Internet. Para dar respuesta a las preguntas planteadas se ha investigado desde distintos puntos de vista: se comenzará con un estudio descriptivo desde el punto de vista de la demanda que trata de caracterizar la situación del comercio electrónico B2C en España, poniendo el foco en las diferencias entre los compradores constantes y los nuevos compradores. Posteriormente, la investigación de modelos de adopción y continuidad en el uso de las tecnologías y de los factores que inciden en dicha continuidad –con especial interés en el comercio electrónico B2C–, permiten afrontar el problema desde la perspectiva de las ecuaciones estructurales pudiendo también extraer conclusiones de tipo práctico. 
Este trabajo sigue una estructura clásica de investigación científica: en el capítulo 1 se introduce el tema de investigación, continuando con una descripción del estado de situación del comercio electrónico B2C en España utilizando fuentes oficiales (capítulo 2). Posteriormente se desarrolla el marco teórico y el estado del arte de modelos de adopción y de utilización de las tecnologías (capítulo 3) y de los factores principales que inciden en la adopción y continuidad en el uso de las tecnologías (capítulo 4). El capítulo 5 desarrolla las hipótesis de la investigación y plantea los modelos teóricos. Las técnicas estadísticas a utilizar se describen en el capítulo 6, donde también se analizan los resultados empíricos sobre los modelos desarrollados en el capítulo 5. El capítulo 7 expone las principales conclusiones de la investigación, sus limitaciones y propone nuevas líneas de investigación. La primera parte corresponde al capítulo 1, que introduce la investigación justificándola desde un punto de vista teórico y práctico. También se realiza una breve introducción a la teoría del comportamiento del consumidor desde una perspectiva clásica. Se presentan los principales modelos de adopción y se introducen los modelos de continuidad de utilización que se estudiarán más detalladamente en el capítulo 3. En este capítulo se desarrollan los objetivos principales y los objetivos secundarios, se propone el mapa mental de la investigación y se planifican en un cronograma los principales hitos del trabajo. La segunda parte corresponde a los capítulos dos, tres y cuatro. En el capítulo 2 se describe el comercio electrónico B2C en España utilizando fuentes secundarias. Se aborda un diagnóstico del sector de comercio electrónico y su estado de madurez en España. Posteriormente, se analizan las diferencias entre los compradores constantes, principal interés de este trabajo, frente a los compradores noveles, destacando las diferencias de perfiles y usos. Para los dos segmentos se estudian aspectos como el lugar de acceso a la compra, la frecuencia de compra, los medios de pago utilizados o las actitudes hacia la compra. El capítulo 3 comienza desarrollando los principales conceptos sobre la teoría del comportamiento del consumidor, para continuar estudiando los principales modelos de adopción de tecnología existentes, analizando con especial atención su aplicación en comercio electrónico. Posteriormente se analizan los modelos de continuidad en el uso de tecnologías (Teoría de la Confirmación de Expectativas; Teoría de la Justicia), con especial atención de nuevo a su aplicación en el comercio electrónico. Una vez estudiados los principales modelos de adopción y continuidad en el uso de tecnologías, el capítulo 4 analiza los principales factores que se utilizan en los modelos: calidad, valor, factores basados en la confirmación de expectativas –satisfacción, utilidad percibida– y factores específicos en situaciones especiales –por ejemplo, tras una queja– como pueden ser la justicia, las emociones o la confianza. La tercera parte –que corresponde al capítulo 5– desarrolla el diseño de la investigación y la selección muestral de los modelos. En la primera parte del capítulo se enuncian las hipótesis –que van desde lo general a lo particular, utilizando los factores específicos analizados en el capítulo 4– para su posterior estudio y validación en el capítulo 6 utilizando las técnicas estadísticas apropiadas. 
A partir de las hipótesis, y de los modelos y factores estudiados en los capítulos 3 y 4, se definen y vertebran dos modelos teóricos originales que den respuesta a los retos de investigación planteados en el capítulo 1. En la segunda parte del capítulo se diseña el trabajo empírico de investigación definiendo los siguientes aspectos: alcance geográfico–temporal, tipología de la investigación, carácter y ambiente de la investigación, fuentes primarias y secundarias utilizadas, técnicas de recolección de datos, instrumentos de medida utilizados y características de la muestra utilizada. Los resultados del trabajo de investigación constituyen la cuarta parte de la investigación y se desarrollan en el capítulo 6, que comienza analizando las técnicas estadísticas basadas en Modelos de Ecuaciones Estructurales. Se plantean dos alternativas, modelos confirmatorios correspondientes a Métodos Basados en Covarianzas (MBC) y modelos predictivos. De forma razonada se eligen las técnicas predictivas dada la naturaleza exploratoria de la investigación planteada. La segunda parte del capítulo 6 desarrolla el análisis de los resultados de los modelos de medida y modelos estructurales construidos con indicadores formativos y reflectivos y definidos en el capítulo 4. Para ello se validan, sucesivamente, los modelos de medida y los modelos estructurales teniendo en cuenta los valores umbrales de los parámetros estadísticos necesarios para la validación. La quinta parte corresponde al capítulo 7, que desarrolla las conclusiones basándose en los resultados del capítulo 6, analizando los resultados desde el punto de vista de las aportaciones teóricas y prácticas, obteniendo conclusiones para la gestión de las empresas. A continuación, se describen las limitaciones de la investigación y se proponen nuevas líneas de estudio sobre distintos temas que han ido surgiendo a lo largo del trabajo. Finalmente, la bibliografía recoge todas las referencias utilizadas a lo largo de este trabajo. Palabras clave: comprador constante, modelos de continuidad de uso, continuidad en el uso de tecnologías, comercio electrónico, B2C, adopción de tecnologías, modelos de adopción tecnológica, TAM, TPB, IDT, UTAUT, ECT, intención de continuidad, satisfacción, confianza percibida, justicia, emociones, confirmación de expectativas, calidad, valor, PLS. ABSTRACT Information and Communication Technologies in general, but more specifically those related to the Internet in particular, have changed the way in which we communicate, relate to one another, produce, and buy and sell products, reducing the time and shortening the distance between suppliers and consumers. The steady breakthrough of computers, Smartphones and landline and/or wireless broadband has been greatly reflected in its large scale use by both individuals and businesses. Business–to–consumer (B2C) e–commerce reached a volume of 9,114 million Euros in Spain in 2010, representing a 17.4% increase with respect to the figure in 2009. This growth is due in part to two different facts: an increase in the percentage of web users to 65.1% en 2010, 43.1% of whom have acquired products or services through the Internet– which constitutes 1.6 percentage points higher than 2010. On the other hand, the average spending by individual buyers rose to 831€ en 2010, constituting a 10.9% increase with respect to the previous year. 
If we segment buyers according to their previous purchasing experience, we can divide them into two categories: the novice buyer – who made online purchases for the first time in 2010 – and the experienced buyer – who made purchases in 2010 but had also done so in previous years. The socio–demographic profile of the novice buyer is that of a young person between 15–24 years of age, with secondary studies, middle to lower–middle class, a non–university student, resident in smaller towns, who still uses payment methods such as cash on delivery (23.9%). In 2010 their average annual spending amounted to 449€. The experienced buyer, someone who had already made purchases online, has a different demographic profile: highly educated, upper class, employed and resident in larger cities, with a mature behavior when buying online due to greater experience – this type of buyer makes more intensive use of Internet-only channels that do not have a physical store. His or her average spending doubles that of the novice buyer (an average of 930€ per year). Experienced buyers therefore constitute the majority of buyers, with an average spending that doubles that of buyers who have adopted the medium recently. It is consequently of interest to study the factors that predict whether a web user will buy another product or service on the Internet. The answer to this question has proven not to be simple. In Spain, the majority of goods and services are still bought in person, with a low incidence of distance selling such as teleshopping, catalogue sales or Internet sales. To answer the questions posed, the problem has been investigated from different viewpoints: the work begins with a descriptive study, from the demand perspective, that characterizes the B2C e–commerce situation in Spain, focusing on the differences between experienced buyers and novice buyers. Subsequently, the study of technology acceptance and continuity-of-use models, and of the factors that affect that continuity – with special attention to B2C e–commerce –, allows the problem to be addressed with structural equation techniques while also drawing practical conclusions. This investigation follows the classic structure of scientific research: the subject of the investigation is introduced (Chapter 1), then the state of B2C e–commerce in Spain is described citing official sources of information (Chapter 2), the theoretical framework and state of the art of technology acceptance and continuity models are developed (Chapter 3), together with the main factors that affect acceptance and continuity (Chapter 4). Chapter 5 states the hypotheses of the investigation and poses the theoretical models that will be confirmed or rejected, partially or completely. Chapter 6 describes the statistical techniques to be used and analyzes the empirical results of the models developed in Chapter 5. Chapter 7 presents the main conclusions of the investigation, its limitations, and proposes new lines of research. The first part corresponds to Chapter 1, which introduces the investigation, justifying it from a theoretical and practical point of view. It also gives a brief introduction to consumer behavior theory from a classical perspective.
Technology acceptance models are presented and then continuity and repurchase models are introduced; both are studied in more depth in Chapter 3. In this chapter, the main and secondary objectives are also developed, a mind map of the research is proposed and a timetable highlights the main milestones of the work. The second part of the work corresponds to Chapters Two, Three and Four. Chapter 2 describes B2C e–commerce in Spain from the demand perspective, citing secondary official sources. A diagnosis of the e–commerce sector and its degree of maturity in Spain is carried out, covering also the barriers to and alternatives to e–commerce. Subsequently, the differences between experienced buyers, the main interest of this work, and novice buyers are analyzed, highlighting the differences in their profiles and main transactions. For both segments, aspects such as the place of purchase, the frequency of online purchases, the payment methods used and the attitudes towards online shopping are examined. Chapter 3 begins by developing the main concepts of consumer behavior theory and then studies the main existing technology acceptance models (among others, TPB, TAM, IDT, UTAUT and models derived from them), paying special attention to their application in e–commerce. Subsequently, the models of continuity in the use of technology are analyzed (CDT, ECT, Theory of Justice), again focusing on their application in e–commerce. Once the main acceptance and continuity models have been studied, Chapter 4 analyzes the main factors used in these models: quality, value, factors based on the confirmation of expectations – satisfaction, perceived usefulness – and factors specific to special situations – for example, after a complaint – such as justice, emotions or trust. The third part – Chapter 5 – develops the research design and the sample selection for the models. In the first section of the chapter, the hypotheses are stated – moving from the general to the particular and using the specific factors analyzed in Chapter 4 – for their later study and validation in Chapter 6 with the appropriate statistical techniques. Based on the hypotheses and on the models and factors studied in Chapters 3 and 4, two original theoretical models are defined and organized to answer the research questions posed in Chapter 1. In the second part of the chapter, the empirical research is designed, defining the following aspects: geographic–temporal scope, type of research, nature and setting of the research, primary and secondary sources used, data-gathering techniques, measurement instruments used and characteristics of the sample. The results of the work constitute the fourth part of the research and are developed in Chapter 6, which begins by reviewing the statistical techniques based on Structural Equation Models. Two alternatives are put forth: confirmatory models, corresponding to covariance-based methods (MBC), and predictive, component-based methods. In a reasoned manner, the predictive techniques are chosen given the exploratory nature of the research.
The second part of Chapter 6 presents the analysis of the results of the measurement models and structural models built with the formative and reflective indicators defined in Chapter 4. To this end, the measurement models and the structural models are validated one by one, taking into account the threshold values of the statistical parameters required for validation. The fifth part corresponds to Chapter 7, which develops the conclusions of the study based on the results of Chapter 6, analyzing them in terms of their theoretical and practical contributions and deriving conclusions for business management. The limitations of the research are then described, and new lines of research on the various topics that emerged during the work are proposed. Lastly, all the references used throughout this work are listed in the final bibliography. Key Words: experienced (constant) buyer, repurchase models, continuity of use of technology, e–commerce, B2C, technology acceptance, technology acceptance models, TAM, TPB, IDT, UTAUT, ECT, continuance intention, satisfaction, perceived trust, justice, emotions, confirmation of expectations, quality, value, PLS.
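As a hedged numerical illustration of the measurement-model checks referred to above (the threshold values used during validation), the following Python sketch computes composite reliability and average variance extracted from made-up standardised loadings of one reflective construct; the usual cut-offs of 0.7 and 0.5 are common-practice assumptions, not figures taken from the thesis.

import numpy as np

loadings = np.array([0.82, 0.79, 0.74, 0.68])      # made-up standardised loadings
errors = 1.0 - loadings ** 2                        # indicator error variances

cr = loadings.sum() ** 2 / (loadings.sum() ** 2 + errors.sum())   # composite reliability
ave = np.mean(loadings ** 2)                                       # average variance extracted

print(f"CR = {cr:.3f} (usual threshold 0.7), AVE = {ave:.3f} (usual threshold 0.5)")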
Abstract:
When I first encountered the architecture of Manuel and Francisco AIRES MATEUS, I was reminded of a Nazarite proverb that provides a lyrical recipe for construction. "To make a house," it directs, "you grab a handful of air and hold it together with a few walls." I have come to realize that this is precisely what these Portuguese brothers do in all their buildings.
Abstract:
El mercado ibérico de futuros de energía eléctrica gestionado por OMIP (“Operador do Mercado Ibérico de Energia, Pólo Português”, con sede en Lisboa), también conocido como el mercado ibérico de derivados de energía, comenzó a funcionar el 3 de julio de 2006. Se analiza la eficiencia de este mercado organizado, por lo que se estudia la precisión con la que sus precios de futuros predicen el precio de contado. En dicho mercado coexisten dos modos de negociación: el mercado continuo (modo por defecto) y la contratación mediante subasta. En la negociación en continuo, las órdenes anónimas de compra y de venta interactúan de manera inmediata e individual con órdenes contrarias, dando lugar a operaciones con un número indeterminado de precios para cada contrato. En la negociación a través de subasta, un precio único de equilibrio maximiza el volumen negociado, liquidándose todas las operaciones a ese precio. Adicionalmente, los miembros negociadores de OMIP pueden liquidar operaciones “Over-The-Counter” (OTC) a través de la cámara de compensación de OMIP (OMIClear). Las cinco mayores empresas españolas de distribución de energía eléctrica tenían la obligación de comprar electricidad hasta julio de 2009 en subastas en OMIP, para cubrir parte de sus suministros regulados. De igual manera, el suministrador de último recurso portugués mantuvo tal obligación hasta julio de 2010. Los precios de equilibrio de esas subastas no han resultado óptimos a efectos retributivos de tales suministros regulados dado que dichos precios tienden a situarse ligeramente sesgados al alza. La prima de riesgo ex-post, definida como la diferencia entre los precios a plazo y de contado en el periodo de entrega, se emplea para medir su eficiencia de precio. El mercado de contado, gestionado por OMIE (“Operador de Mercado Ibérico de la Energía”, conocido tradicionalmente como “OMEL”), tiene su sede en Madrid. Durante los dos primeros años del mercado de futuros, la prima de riesgo media tiende a resultar positiva, al igual que en otros mercados europeos de energía eléctrica y gas natural. En ese periodo, la prima de riesgo ex-post tiende a ser negativa en los mercados de petróleo y carbón. Los mercados de energía tienden a mostrar niveles limitados de eficiencia de mercado. La eficiencia de precio del mercado de futuros aumenta con el desarrollo de otros mecanismos coexistentes dentro del mercado ibérico de electricidad (conocido como “MIBEL”) –es decir, el mercado dominante OTC, las subastas de centrales virtuales de generación conocidas en España como Emisiones Primarias de Energía, y las subastas para cubrir parte de los suministros de último recurso conocidas en España como subastas CESUR– y con una mayor integración de los mercados regionales europeos de energía eléctrica. Se construye un modelo de regresión para analizar la evolución de los volúmenes negociados en el mercado continuo durante sus cuatro primeros años como una función de doce indicadores potenciales de liquidez. Los únicos indicadores significativos son los volúmenes negociados en las subastas obligatorias gestionadas por OMIP, los volúmenes negociados en el mercado OTC y los volúmenes OTC compensados por OMIClear. El número de creadores de mercado, la incorporación de agentes financieros y compañías de generación pertenecientes a grupos integrados con suministradores de último recurso, y los volúmenes OTC compensados por OMIClear muestran una fuerte correlación con los volúmenes negociados en el mercado continuo. 
La liquidez de OMIP está aún lejos de los niveles alcanzados por los mercados europeos más maduros (localizados en los países nórdicos (Nasdaq OMX Commodities) y Alemania (EEX)). El operador de mercado y su cámara de compensación podrían desarrollar acciones eficientes de marketing para atraer nuevos agentes activos en el mercado de contado (p.ej. industrias consumidoras intensivas de energía, suministradores, pequeños productores, compañías energéticas internacionales y empresas de energías renovables) y agentes financieros, captar volúmenes del opaco OTC, y mejorar el funcionamiento de los productos existentes aún no líquidos. Resultaría de gran utilidad para tales acciones un diálogo activo con todos los agentes (participantes en el mercado, operador de mercado de contado, y autoridades supervisoras). Durante sus primeros cinco años y medio, el mercado continuo presenta un crecimento de liquidez estable. Se mide el desempeño de sus funciones de cobertura mediante la ratio de posición neta obtenida al dividir la posición abierta final de un contrato de derivados mensual entre su volumen acumulado en la cámara de compensación. Los futuros carga base muestran la ratio más baja debido a su buena liquidez. Los futuros carga punta muestran una mayor ratio al producirse su menor liquidez a través de contadas subastas fijadas por regulación portuguesa. Las permutas carga base liquidadas en la cámara de compensación ubicada en Madrid –MEFF Power, activa desde el 21 de marzo de 2011– muestran inicialmente valores altos debido a bajos volúmenes registrados, dado que esta cámara se emplea principalmente para vencimientos pequeños (diario y semanal). Dicha ratio puede ser una poderosa herramienta de supervisión para los reguladores energéticos cuando accedan a todas las transacciones de derivados en virtud del Reglamento Europeo sobre Integridad y Transparencia de los Mercados de Energía (“REMIT”), en vigor desde el 28 de diciembre de 2011. La prima de riesgo ex-post tiende a ser positiva en todos los mecanismos (futuros en OMIP, mercado OTC y subastas CESUR) y disminuye debido a la curvas de aprendizaje y al efecto, desde el año 2011, del precio fijo para la retribución de la generación con carbón autóctono. Se realiza una comparativa con los costes a plazo de generación con gas natural (diferencial “clean spark spread”) obtenido como la diferencia entre el precio del futuro eléctrico y el coste a plazo de generación con ciclo combinado internalizando los costes de emisión de CO2. Los futuros eléctricos tienen una elevada correlación con los precios de gas europeos. Los diferenciales de contratos con vencimiento inmediato tienden a ser positivos. Los mayores diferenciales se dan para los contratos mensuales, seguidos de los trimestrales y anuales. Los generadores eléctricos con gas pueden maximizar beneficios con contratos de menor vencimiento. Los informes de monitorización por el operador de mercado que proporcionan transparencia post-operacional, el acceso a datos OTC por el regulador energético, y la valoración del riesgo regulatorio pueden contribuir a ganancias de eficiencia. Estas recomendaciones son también válidas para un potencial mercado ibérico de futuros de gas, una vez que el hub ibérico de gas –actualmente en fase de diseño, con reuniones mensuales de los agentes desde enero de 2013 en el grupo de trabajo liderado por el regulador energético español– esté operativo. 
El hub ibérico de gas proporcionará transparencia al atraer más agentes y mejorar la competencia, incrementando su eficiencia, dado que en el mercado OTC actual no se revela precio alguno de gas. ABSTRACT The Iberian Power Futures Market, managed by OMIP (“Operador do Mercado Ibérico de Energia, Pólo Português”, located in Lisbon), also known as the Iberian Energy Derivatives Market, started operations on 3 July 2006. The market efficiency, regarding how well the future price predicts the spot price, is analysed for this energy derivatives exchange. There are two trading modes coexisting within OMIP: the continuous market (default mode) and the call auction. In the continuous trading, anonymous buy and sell orders interact immediately and individually with opposite side orders, generating trades with an undetermined number of prices for each contract. In the call auction trading, a single price auction maximizes the traded volume, being all trades settled at the same price (equilibrium price). Additionally, OMIP trading members may settle Over-the-Counter (OTC) trades through OMIP clearing house (OMIClear). The five largest Spanish distribution companies have been obliged to purchase in auctions managed by OMIP until July 2009, in order to partly cover their portfolios of end users’ regulated supplies. Likewise, the Portuguese last resort supplier kept that obligation until July 2010. The auction equilibrium prices are not optimal for remuneration purposes of regulated supplies as such prices seem to be slightly upward biased. The ex-post forward risk premium, defined as the difference between the forward and spot prices in the delivery period, is used to measure its price efficiency. The spot market, managed by OMIE (Market Operator of the Iberian Energy Market, Spanish Pool, known traditionally as “OMEL”), is located in Madrid. During the first two years of the futures market, the average forward risk premium tends to be positive, as it occurs with other European power and natural gas markets. In that period, the ex-post forward risk premium tends to be negative in oil and coal markets. Energy markets tend to show limited levels of market efficiency. The price efficiency of the Iberian Power Futures Market improves with the market development of all the coexistent forward contracting mechanisms within the Iberian Electricity Market (known as “MIBEL”) – namely, the dominant OTC market, the Virtual Power Plant Auctions known in Spain as Energy Primary Emissions, and the auctions catering for part of the last resort supplies known in Spain as CESUR auctions – and with further integration of European Regional Electricity Markets. A regression model tracking the evolution of the traded volumes in the continuous market during its first four years is built as a function of twelve potential liquidity drivers. The only significant drivers are the traded volumes in OMIP compulsory auctions, the traded volumes in the OTC market, and the OTC cleared volumes by OMIClear. The amount of market makers, the enrolment of financial members and generation companies belonging to the integrated group of last resort suppliers, and the OTC cleared volume by OMIClear show strong correlation with the traded volumes in the continuous market. OMIP liquidity is still far from the levels reached by the most mature European markets (located in the Nordic countries (Nasdaq OMX Commodities) and Germany (EEX)). 
The market operator and its clearing house could develop efficient marketing actions to attract new entrants active in the spot market (e.g. energy-intensive industries, suppliers, small producers, international energy companies and renewable generation companies) and financial agents, as well as volumes from the opaque OTC market, and to improve the performance of existing illiquid products. An active dialogue with all the stakeholders (market participants, spot market operator, and supervisory authorities) will help to implement such actions. During its first five and a half years, the continuous market shows steady liquidity growth. The hedging performance is measured through a net position ratio, obtained by dividing the final open interest of a monthly derivatives contract by its accumulated cleared volume. The base load futures in the Iberian energy derivatives exchange show the lowest ratios due to their good liquidity. The peak futures show bigger ratios, as their reduced liquidity comes mainly from a few auctions required by Portuguese regulation. The base load swaps settled in the clearing house located in Spain – MEFF Power, operating since 21 March 2011, with a new denomination (BME Clearing) since 9 September 2013 – initially show large values due to low registered volumes, as this clearing house is mainly used for short maturities (daily and weekly swaps). The net position ratio can be a powerful oversight tool for energy regulators once they gain access to all derivatives transactions, as envisaged by the European Regulation on Energy Market Integrity and Transparency (“REMIT”), in force since 28 December 2011. The ex-post forward risk premium tends to be positive in all existing mechanisms (OMIP futures, OTC market and CESUR auctions) and diminishes due to the learning curve and to the effect – since 2011 – of the fixed price remunerating indigenous coal-fired generation. A comparison with the forward generation costs from natural gas (the “clean spark spread”) – obtained as the difference between the power futures price and the forward generation cost of a gas-fired combined cycle plant, taking into account the CO2 emission rates – is also performed. The power futures are strongly correlated with European gas prices. The clean spark spreads built with prompt contracts tend to be positive. The biggest clean spark spreads are for the month contract, followed by the quarter contract and then by the year contract. Therefore, gas-fired generation companies can maximize profits trading with contracts of shorter maturity. Market monitoring reports by the market operator providing post-trade transparency, OTC data access by the energy regulator, and assessment of the regulatory risk can contribute to efficiency gains. The same recommendations are also valid for a potential Iberian gas futures market, once an Iberian gas hub – currently in a design phase, with monthly meetings amongst the stakeholders in a Working Group led by the Spanish energy regulatory authority since January 2013 – is operating. The Iberian gas hub would bring transparency, attracting more shippers and improving competition and thus its efficiency, as no gas price is currently disclosed in the existing OTC market.
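The three quantities used above – the ex-post forward risk premium, the net position ratio and the clean spark spread – reduce to simple arithmetic; the sketch below uses invented numbers (not OMIP data) and assumed efficiency and emission factors purely to show the calculations.

# Illustrative calculations with assumed values, not market data.
futures_price = 52.0      # EUR/MWh, month contract price before delivery
spot_avg      = 49.5      # EUR/MWh, average spot price over the delivery period
risk_premium_ex_post = futures_price - spot_avg            # > 0: buyers paid a premium

open_interest  = 1_200    # MW left open at expiry of the month contract
cleared_volume = 15_000   # MW accumulated in the clearing house for that contract
net_position_ratio = open_interest / cleared_volume        # low ratio ~ hedging in a liquid product

gas_forward   = 24.0      # EUR/MWh thermal (assumed)
efficiency    = 0.55      # combined-cycle efficiency (assumed)
co2_forward   = 8.0       # EUR/tCO2 (assumed)
emission_rate = 0.37      # tCO2 per electrical MWh for gas (assumed)
clean_spark_spread = futures_price - gas_forward / efficiency - co2_forward * emission_rate

print(risk_premium_ex_post, round(net_position_ratio, 3), round(clean_spark_spread, 2))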
Abstract:
Global linear instability theory is concerned with the temporal or spatial development of small-amplitude perturbations superposed upon laminar steady or time-periodic three-dimensional flows, which are inhomogeneous in two (and periodic in one) or all three spatial directions [1]. The theory addresses flows developing in complex geometries, in which the parallel or weakly non-parallel basic flow approximation invoked by classic linear stability theory does not hold. As such, global linear theory is called to fill the gap in research into stability and transition in flows over or through complex geometries. Historically, global linear instability has been (and still is) concerned with the solution of multi-dimensional eigenvalue problems; the maturing of non-modal linear instability ideas in simple parallel flows during the last decade of the last century [2–4] has given rise to the investigation of transient growth scenarios in an ever increasing variety of complex flows. After a brief exposition of the theory, connections are sought with established approaches for structure identification in flows, such as proper orthogonal decomposition and topology theory in the laminar regime, and the open areas for future research, mainly concerning turbulent and three-dimensional flows, are highlighted. Recent results obtained in our group are reported for both the time-stepping and the matrix-forming approaches to global linear theory. In the first context, progress has been made in implementing a Jacobian-free Newton–Krylov method into a standard finite-volume aerodynamic code, such that global linear instability results may now be obtained in compressible flows of aeronautical interest. In the second context, a new stable very high-order finite-difference method is implemented for the spatial discretization of the operators describing the spatial BiGlobal EVP, the PSE-3D and the TriGlobal EVP; combined with sparse matrix treatment, all these problems may now be solved on standard desktop computers.
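A minimal sketch of the matrix-forming route mentioned above: a toy two-dimensional diffusion operator stands in for the discretized linearized operator of a BiGlobal eigenvalue problem, and the least damped modes are recovered with shift-invert Arnoldi iteration (scipy); the problem size, shift and operator are illustrative assumptions only.

import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 40                                             # grid points per direction
h = 1.0 / (n + 1)
e = np.ones(n)
D2 = sp.diags([e, -2 * e, e], [-1, 0, 1]) / h**2   # 1D second-derivative stencil
I = sp.identity(n)
A = sp.kron(I, D2) + sp.kron(D2, I)                # 2D operator (stand-in for the BiGlobal matrix)
B = sp.identity(n * n)                             # mass matrix (identity here)

# Shift-invert Arnoldi: eigenvalues of dq/dt = A q closest to the shift,
# i.e. the least damped global modes of this toy problem.
vals, vecs = spla.eigs(A.tocsc(), k=6, M=B.tocsc(), sigma=0.0, which='LM')
print(np.sort(vals.real)[::-1])                    # leading (least negative) growth rates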
Abstract:
This paper presents some fundamental properties of independent and-parallelism and extends its applicability by enlarging the class of goals eligible for parallel execution. A simple model of (independent) and-parallel execution is proposed, and issues of correctness and efficiency are discussed in the light of this model. Two conditions, "strict" and "non-strict" independence, are defined and then proved sufficient to ensure correctness and efficiency of parallel execution: if goals which meet these conditions are executed in parallel, the solutions obtained are the same as those produced by standard sequential execution. Also, in the absence of failure, the parallel proof procedure does not generate any additional work (with respect to standard SLD-resolution), while the actual execution time is reduced. Finally, in case of failure of any of the goals, no slow-down will occur. For strict independence the results are shown to hold independently of whether the parallel goals execute in the same environment or in separate environments. In addition, a formal basis is given for the automatic compile-time generation of independent and-parallelism: compile-time conditions to efficiently check goal independence at run-time are proposed and proved sufficient. Also, rules are given for constructing simpler conditions if information regarding the binding context of the goals to be executed in parallel is available to the compiler.
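A rough Python analogue (not the paper's formalism) of the strict-independence test: two goals are treated as strictly independent under the current bindings when they reach no common unbound variable, so their bindings cannot interfere; the term encoding (tuples for structures, capitalised strings for variables) is invented for illustration.

def free_vars(term, bindings):
    """Collect unbound variable names reachable from a term."""
    if isinstance(term, str) and term[0].isupper():              # variable
        return free_vars(bindings[term], bindings) if term in bindings else {term}
    if isinstance(term, tuple) and len(term) > 1:                # compound term f(args...)
        return set().union(*(free_vars(a, bindings) for a in term[1:]))
    return set()                                                 # atom / constant

def strictly_independent(goal_a, goal_b, bindings):
    return free_vars(goal_a, bindings).isdisjoint(free_vars(goal_b, bindings))

# p(X, a) and q(Y) share no variables -> eligible for parallel execution.
print(strictly_independent(('p', 'X', 'a'), ('q', 'Y'), {}))     # True
print(strictly_independent(('p', 'X', 'a'), ('q', 'X'), {}))     # False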
Abstract:
We propose a general framework for assertion-based debugging of constraint logic programs. Assertions are linguistic constructions for expressing properties of programs. We define several assertion schemas for writing (partial) specifications for constraint logic programs using quite general properties, including user-defined programs. The framework is aimed at detecting deviations of the program behavior (symptoms) with respect to the given assertions, either at compile-time (i.e., statically) or run-time (i.e., dynamically). We provide techniques for using information from global analysis both to detect at compile-time assertions which do not hold in at least one of the possible executions (i.e., static symptoms) and assertions which hold for all possible executions (i.e., statically proved assertions). We also provide program transformations which introduce tests in the program for checking at run-time those assertions whose status cannot be determined at compile-time. Both the static and the dynamic checking are provably safe in the sense that all errors flagged are definite violations of the specifications. Finally, we report briefly on the currently implemented instances of the generic framework.
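In a Python setting (used here only as an analogue of the CLP framework), the run-time checking step could look like the decorator sketch below: a precondition/postcondition pair that static analysis could not discharge is turned into wrappers that flag a definite violation when they fire; the predicate, properties and names are invented for illustration.

from functools import wraps

def check_assertion(precond, postcond):
    """Turn an unproved assertion into run-time tests around a predicate."""
    def decorate(pred):
        @wraps(pred)
        def wrapper(*args):
            if not precond(*args):
                raise AssertionError(f"calls assertion violated for {pred.__name__}{args}")
            result = pred(*args)
            if not postcond(result):
                raise AssertionError(f"success assertion violated for {pred.__name__}{args}")
            return result
        return wrapper
    return decorate

@check_assertion(precond=lambda n: isinstance(n, int),
                 postcond=lambda r: isinstance(r, list))
def first_n_squares(n):
    return [i * i for i in range(n)]

print(first_n_squares(4))       # passes both checks
# first_n_squares("4")          # would raise: calls assertion violated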
Abstract:
This paper presents a technique for achieving a class of optimizations related to the reduction of checks within cycles. The technique uses both Program Transformation and Abstract Interpretation. After a first pass of an abstract interpreter which detects simple invariants, program transformation is used to build a hypothetical situation that simplifies some predicates that should be executed within the cycle. This transformation implements the heuristic hypothesis that once conditional tests hold they may continue doing so recursively. Specialized versions of predicates are generated to detect and exploit those cases in which the invariance may hold. Abstract interpretation is then used again to verify the truth of such hypotheses and confirm the proposed simplification. This allows optimizations that go beyond those possible with only one pass of the abstract interpreter over the original program, as is normally the case. It also allows selective program specialization using a standard abstract interpreter not specifically designed for this purpose, thus simplifying the design of this already complex module of the compiler. In the paper, a class of programs amenable to such optimization is presented, along with some examples and an evaluation of the proposed techniques in some application areas such as floundering detection and reducing run-time tests in automatic logic program parallelization. The analysis of the examples presented has been performed automatically by an implementation of the technique using existing abstract interpretation and program transformation tools.
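The core idea can be illustrated outside Prolog with a small Python sketch: a per-iteration test is replaced by a specialised, test-free continuation once the test has held, on the assumption (verified by abstract interpretation in the paper's setting) that it keeps holding recursively; the example function is invented.

def sum_with_checks(xs):
    total = 0
    for x in xs:
        if not isinstance(x, int):     # test repeated inside the cycle
            raise TypeError(x)
        total += x
    return total

def sum_specialised(xs):
    it = iter(xs)
    total = 0
    for x in it:
        if not isinstance(x, int):     # test executed until it first holds
            raise TypeError(x)
        total += x
        break
    for x in it:                       # specialised continuation: no per-iteration test,
        total += x                     # safe only because the analysis verified the invariance
    return total

print(sum_with_checks([1, 2, 3]), sum_specialised([1, 2, 3]))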
Abstract:
This paper presents and proves some fundamental results for independent and-parallelism (IAP). First, the paper treats the issues of correctness and efficiency: after defining strict and non-strict goal independence, it is proved that if strictly independent goals are executed in parallel the solutions obtained are the same as those produced by standard sequential execution. It is also shown that, in the absence of failure, the parallel proof procedure does not generate any additional work (with respect to standard SLD-resolution), while the actual execution time is reduced. The same results hold even if non-strictly independent goals are executed in parallel, provided a trivial rewriting of such goals is performed. In addition, and most importantly, the paper treats the issue of compile-time generation of IAP by proposing conditions, to be generated at compile-time, which efficiently check strict and non-strict goal independence at run-time, and by proving the sufficiency of such conditions. It is also shown how simpler conditions can be constructed if some information regarding the binding context of the goals to be executed in parallel is available to the compiler through either local or program-level analysis. These results therefore provide a formal basis for the automatic compile-time generation of IAP. As a corollary of such results, the paper also proves that negative goals are always non-strictly independent, and that goals which share a first occurrence of an existential variable are never independent.
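A small sketch of the last point (simpler conditions when binding information is available): the guard a compiler might emit for two goals that textually share a variable shrinks, or disappears altogether, when groundness analysis already proves that variable ground; the encoding is invented for illustration.

def emit_guard(shared_vars, proved_ground):
    """Run-time tests guarding parallel execution of two goals that textually
    share `shared_vars`; variables already proved ground need no test."""
    return [f"ground({v})" for v in sorted(shared_vars) if v not in proved_ground]

# p(X, Y), q(X, Z) share X.  With no analysis information the compiler emits ground(X);
# if analysis proves X ground at this point, the guard vanishes and the goals are
# parallelised unconditionally.
print(emit_guard({"X"}, proved_ground=set()))    # ['ground(X)']
print(emit_guard({"X"}, proved_ground={"X"}))    # []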
Abstract:
We present a generic preprocessor for combined static/dynamic validation and debugging of constraint logic programs. Passing programs through the preprocessor prior to execution allows detecting many bugs automatically. This is achieved by performing a repertoire of tests which range from simple syntactic checks to much more advanced checks based on static analysis of the program. Together with the program, the user may provide a series of assertions which trigger further automatic checking of the program. Such assertions are written using the assertion language presented in Chapter 2, which allows expressing a wide variety of properties. These properties extend beyond the predefined set which may be understandable by the available static analyzers and include properties defined by means of user programs. In addition to user-provided assertions, in each particular CLP system assertions may be available for predefined system predicates. Checking of both user-provided assertions and assertions for system predicates is attempted first at compile-time by comparing them with the results of static analysis. This may allow statically proving that the assertions hold (i.e., they are validated) or that they are violated (and thus bugs detected). User-provided assertions (or parts of assertions) which cannot be statically proved nor disproved are optionally translated into run-time tests. The implementation of the preprocessor is generic in that it can be easily customized to different CLP systems and dialects and in that it is designed to allow the integration of additional analyses in a simple way. We also report on two tools which are instances of the generic preprocessor: CiaoPP (for the Ciao Prolog system) and CHIPRE (for the CHIP CLP(FD) system). The currently existing analyses include types, modes, non-failure, determinacy, and computational cost, and can treat modules separately, performing incremental analysis.
Abstract:
Many diseases have a genetic origin, and a great effort is being made to detect the genes that are responsible for their onset. One of the most promising techniques is the analysis of genetic information through complex networks theory. Yet, a practical problem of this approach is its computational cost, which scales as the square of the number of features included in the initial dataset. In this paper, we propose the use of an iterative feature selection strategy to identify reduced subsets of relevant features, and show an application to the analysis of congenital Obstructive Nephropathy. Results demonstrate that, besides achieving a drastic reduction of the computational cost, the topologies of the obtained networks still hold all the relevant information, and are thus able to fully characterize the severity of the disease.
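A toy version of the pipeline described above, with random stand-in data: a simple relevance filter is applied iteratively to shrink the feature set before the correlation network is built, so the quadratic network-construction step only runs on the retained subset; sizes, thresholds and the scoring function are assumptions for illustration.

import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 500))                 # 60 samples x 500 features (toy "genetic" data)
y = rng.integers(0, 2, size=60)                # disease severity label (toy)

def top_features(X, y, k):
    """One filtering iteration: keep the k features most correlated with the label."""
    scores = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
    return np.argsort(scores)[-k:]

# Iteratively shrink the feature set, then build the network only on the survivors.
selected = np.arange(X.shape[1])
for k in (200, 50, 20):
    selected = selected[top_features(X[:, selected], y, k)]

C = np.corrcoef(X[:, selected], rowvar=False)  # 20 x 20 instead of 500 x 500
G = nx.from_numpy_array((np.abs(C) > 0.3).astype(int) - np.eye(len(selected), dtype=int))
print(len(selected), G.number_of_edges(), nx.density(G))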
Abstract:
We consider the stability of isoperimetric inequalities under quasi-isometries between Riemann surfaces. Kanai observed that quasi-isometries preserve isoperimetric inequalities on complete Riemannian manifolds with finite geometry: positive injectivity radius and Ricci curvature bounded from below (see [2]). In [1], it is shown that the linear isoperimetric inequality is a quasi-isometric invariant for planar Riemann surfaces (genus zero surfaces) with vanishing injectivity radius. Moreover, it is proved that non-linear isoperimetric inequalities can only hold for Riemann surfaces with positive injectivity radius and hence, by Kanai's observation, are preserved by quasi-isometries. In this talk we present an overview of isoperimetric inequalities and give some of the ideas of the proofs of the results cited above.
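For reference, the two notions used above can be stated in their standard form (a sketch in LaTeX notation, not the talk's exact wording):

% Linear isoperimetric inequality on a Riemann surface S: there is c > 0 such that
A(\Omega) \le c \, L(\partial\Omega) \quad \text{for every relatively compact domain } \Omega \subset S \text{ with smooth boundary.}
% Quasi-isometry f : X \to Y between metric spaces: there are constants a \ge 1, b \ge 0 with
\tfrac{1}{a}\, d_X(x,y) - b \;\le\; d_Y\big(f(x),f(y)\big) \;\le\; a\, d_X(x,y) + b,
% together with the requirement that every point of Y lies within distance b of the image of f.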
Abstract:
A series of motion compensation algorithms is run on the challenge data, including methods that optimize only a linear transformation, only a non-linear transformation, or both – first a linear and then a non-linear transformation. Methods that optimize a linear transformation run an initial segmentation of the area of interest around the left myocardium by means of an independent component analysis (ICA) (ICA-*). Methods that optimize non-linear transformations may run directly on the full images, or after linear registration. The non-linear motion compensation approaches applied include one method that only registers pairs of images in temporal succession (SERIAL), one method that registers all images to one common reference (AllToOne), one method that was designed to exploit quasi-periodicity in free-breathing acquired image data and was adapted to also be usable with image data acquired with an initial breath-hold (QUASI-P), a method that uses ICA to identify the motion and eliminate it (ICA-SP), and a method that relies on the estimation of a pseudo ground truth (PG) to guide the motion compensation.
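Two of the ingredients named above can be sketched in a few lines of Python (an illustrative toy, not the challenge pipeline): FastICA applied to the vectorised frame series, as in the ICA-* preprocessing, and a SERIAL-style pass in which each frame is registered to its temporal predecessor with a translation-only model; the data, component count and similarity metric are assumptions.

import numpy as np
from sklearn.decomposition import FastICA
from scipy.ndimage import shift as nd_shift
from scipy.optimize import minimize

frames = np.random.rand(30, 64, 64)            # toy image series (time, y, x)

ica = FastICA(n_components=5, random_state=0)
components = ica.fit_transform(frames.reshape(30, -1).T)   # spatial independent components

def register_pair(moving, fixed):
    """Translation minimising the sum of squared differences between two frames."""
    cost = lambda t: np.sum((nd_shift(moving, t, order=1) - fixed) ** 2)
    return minimize(cost, x0=[0.0, 0.0], method='Nelder-Mead').x

aligned = [frames[0]]
for t in range(1, len(frames)):                # SERIAL: register each frame to its predecessor
    dy, dx = register_pair(frames[t], aligned[-1])
    aligned.append(nd_shift(frames[t], (dy, dx), order=1))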
Design and Simulation of Deep Nanometer SRAM Cells under Energy, Mismatch, and Radiation Constraints
Abstract:
La fiabilidad está pasando a ser el principal problema de los circuitos integrados según la tecnología desciende por debajo de los 22nm. Pequeñas imperfecciones en la fabricación de los dispositivos dan lugar ahora a importantes diferencias aleatorias en sus características eléctricas, que han de ser tenidas en cuenta durante la fase de diseño. Los nuevos procesos y materiales requeridos para la fabricación de dispositivos de dimensiones tan reducidas están dando lugar a diferentes efectos que resultan finalmente en un incremento del consumo estático, o una mayor vulnerabilidad frente a radiación. Las memorias SRAM son ya la parte más vulnerable de un sistema electrónico, no solo por representar más de la mitad del área de los SoCs y microprocesadores actuales, sino también porque las variaciones de proceso les afectan de forma crítica, donde el fallo de una única célula afecta a la memoria entera. Esta tesis aborda los diferentes retos que presenta el diseño de memorias SRAM en las tecnologías más pequeñas. En un escenario de aumento de la variabilidad, se consideran problemas como el consumo de energía, el diseño teniendo en cuenta efectos de la tecnología a bajo nivel o el endurecimiento frente a radiación. En primer lugar, dado el aumento de la variabilidad de los dispositivos pertenecientes a los nodos tecnológicos más pequeños, así como a la aparición de nuevas fuentes de variabilidad por la inclusión de nuevos dispositivos y la reducción de sus dimensiones, la precisión del modelado de dicha variabilidad es crucial. Se propone en la tesis extender el método de inyectores, que modela la variabilidad a nivel de circuito, abstrayendo sus causas físicas, añadiendo dos nuevas fuentes para modelar la pendiente sub-umbral y el DIBL, de creciente importancia en la tecnología FinFET. Los dos nuevos inyectores propuestos incrementan la exactitud de figuras de mérito a diferentes niveles de abstracción del diseño electrónico: a nivel de transistor, de puerta y de circuito. El error cuadrático medio al simular métricas de estabilidad y prestaciones de células SRAM se reduce un mínimo de 1,5 veces y hasta un máximo de 7,5 a la vez que la estimación de la probabilidad de fallo se mejora en varios ordenes de magnitud. El diseño para bajo consumo es una de las principales aplicaciones actuales dada la creciente importancia de los dispositivos móviles dependientes de baterías. Es igualmente necesario debido a las importantes densidades de potencia en los sistemas actuales, con el fin de reducir su disipación térmica y sus consecuencias en cuanto al envejecimiento. El método tradicional de reducir la tensión de alimentación para reducir el consumo es problemático en el caso de las memorias SRAM dado el creciente impacto de la variabilidad a bajas tensiones. Se propone el diseño de una célula que usa valores negativos en la bit-line para reducir los fallos de escritura según se reduce la tensión de alimentación principal. A pesar de usar una segunda fuente de alimentación para la tensión negativa en la bit-line, el diseño propuesto consigue reducir el consumo hasta en un 20 % comparado con una célula convencional. Una nueva métrica, el hold trip point se ha propuesto para prevenir nuevos tipos de fallo debidos al uso de tensiones negativas, así como un método alternativo para estimar la velocidad de lectura, reduciendo el número de simulaciones necesarias. 
As the size of electronic devices continues to shrink, new mechanisms are introduced to ease the fabrication process or to reach the performance required by each new technology generation. One example is the compressive or tensile stress applied to the fins in FinFET technologies, which alters the mobility of the transistors built on those fins. The effects of these mechanisms depend strongly on the layout: the position of some transistors affects the neighboring transistors, and the effect can differ between transistor types. The thesis proposes a complementary SRAM cell that uses pMOS devices as pass-gates, thereby shortening the fins of the nMOS transistors and lengthening those of the pMOS transistors, extending them into the neighboring cells and up to the limits of the cell array. Considering the effects of STI and SiGe stressors, the proposed design improves both types of transistor, boosting the performance of the complementary SRAM cell by more than 10% for the same failure probability and the same static power consumption, without requiring any additional area.

Finally, radiation has been a recurring problem in electronics for space applications, but the reduction of the currents and voltages of current devices is making them vulnerable to radiation-induced noise even at ground level. Although technologies such as SOI or FinFET reduce the amount of energy collected by the circuit during a particle strike, the large process variations of the smallest nodes will affect their immunity to radiation. The thesis shows that radiation-induced errors can increase by up to 40% at the 7 nm node when process variations are considered, compared with the nominal case. This increase is larger than the improvement obtained by designing memory cells specifically hardened against radiation, suggesting that reducing the variability would bring a greater improvement.

ABSTRACT

Reliability is becoming the main concern of integrated circuits as technology scales below 22 nm. Small imperfections in device manufacturing now result in significant random differences in the electrical characteristics of the devices, which must be dealt with during design. New processes and materials, required to fabricate such extremely short devices, are giving rise to new effects that ultimately result in increased static power consumption or in a higher vulnerability to radiation. SRAMs have become the most vulnerable part of electronic systems: not only do they account for more than half of the chip area of today's SoCs and microprocessors, but they also become critical as soon as the different variation sources are considered, since a failure in a single cell makes the whole memory fail. This thesis addresses the challenges that SRAM design faces in the smallest technologies. In a common scenario of increasing variability, issues such as energy consumption, technology-aware design and radiation hardening are considered.

First, given the increasing magnitude of device variability in the smallest nodes, as well as the new sources of variability that appear as a consequence of new devices and shortened dimensions, accurate modeling of this variability is crucial.
We propose to extend the injectors method, which models variability at circuit level while abstracting its physical sources, to better model the sub-threshold slope and drain-induced barrier lowering (DIBL), which are gaining importance in FinFET technology. The two new proposed injectors increase the accuracy of figures of merit at different abstraction levels of electronic design: transistor, gate and circuit level. The mean square error when estimating performance and stability metrics of SRAM cells is reduced by a factor of at least 1.5 and up to 7.5, while the yield estimation is improved by orders of magnitude.

Low-power design is a major constraint given the fast-growing market of battery-powered mobile devices. It is also relevant because of the increased power densities of today's systems, in order to reduce thermal dissipation and its impact on aging. The traditional approach of reducing the supply voltage to lower the energy consumption is challenging in the case of SRAMs given the increased impact of process variations at low supply voltages. We propose a cell design that makes use of a negative bit-line write assist to overcome write failures as the main supply voltage is lowered. Despite using a second power source for the negative bit-line, the design achieves an energy reduction of up to 20% compared to a conventional cell. A new metric, the hold trip point, has been introduced to deal with the new sources of failure affecting cells that use a negative bit-line voltage, as well as an alternative method to estimate cell speed that requires fewer simulations.

With the continuous reduction of device sizes, new mechanisms need to be included to ease the fabrication process and to meet the performance targets of the successive nodes. As an example, consider the compressive or tensile strains used in FinFET technology, which alter the mobility of the transistors built on the affected fins. The effects of these mechanisms are highly layout-dependent, with transistors being affected by their neighbors and different types of transistors being affected in different ways. We propose to use complementary SRAM cells with pMOS pass-gates in order to reduce the fin length of the nMOS devices and achieve long, uncut fins for the pMOS devices when the cell is placed in its corresponding array. Once shallow trench isolation (STI) and SiGe stressors are considered, the proposed design improves both kinds of transistor, boosting the performance of complementary SRAM cells by more than 10% for the same failure probability and static power consumption, with no area overhead.

While radiation has been a traditional concern in space electronics, the small currents and voltages used in the latest nodes are making circuits more vulnerable to radiation-induced transient noise, even at ground level. Even if SOI or FinFET technologies reduce the amount of energy transferred from the striking particle to the circuit, the significant process variations expected in the smallest nodes will affect their radiation-hardening capabilities. We demonstrate that process variations can increase the radiation-induced error rate by up to 40% in the 7 nm node compared to the nominal case. This increase is larger than the improvement achieved by radiation-hardened cells, suggesting that reducing process variations would bring a greater improvement.
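As a purely illustrative companion to the last result, the following Python sketch shows the mechanism by which process variations raise the radiation-induced error rate: spreading the critical charge of the cells increases the fraction of particle strikes whose collected charge exceeds it. All quantities are assumed, arbitrary-unit values chosen for the example, not data from the thesis.

import random

QCRIT_NOMINAL = 1.0   # assumed nominal critical charge (arbitrary units)
SIGMA_QCRIT = 0.20    # assumed relative spread of Qcrit under process variation
QCOLL_MEAN = 0.6      # assumed mean collected charge per strike
QCOLL_SIGMA = 0.25    # assumed spread of the collected charge

def upset_rate(with_variation, n_strikes=200_000, seed=7):
    rng = random.Random(seed)
    upsets = 0
    for _ in range(n_strikes):
        qcrit = QCRIT_NOMINAL
        if with_variation:
            # each struck cell has its own critical charge
            qcrit *= max(0.0, rng.gauss(1.0, SIGMA_QCRIT))
        qcoll = max(0.0, rng.gauss(QCOLL_MEAN, QCOLL_SIGMA))
        if qcoll > qcrit:
            upsets += 1
    return upsets / n_strikes

nominal = upset_rate(with_variation=False)
varied = upset_rate(with_variation=True)
print("upset rate, nominal Qcrit:        ", round(nominal, 4))
print("upset rate, Qcrit under variation:", round(varied, 4))
print("relative increase: {:.0f}%".format(100.0 * (varied / nominal - 1.0)))

Because the upset condition is a threshold crossing, even a symmetric spread of the critical charge has an asymmetric effect: cells whose Qcrit falls below the collected-charge distribution contribute many additional upsets, while cells with a higher Qcrit were mostly safe already.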
Resumo:
Cognitive radio represents a promising paradigm to further increase transmission rates in wireless networks, as well as to facilitate the deployment of self-organized networks such as femtocells. Within this framework, secondary users (SUs) may exploit the channel under the premise of maintaining the quality of service (QoS) of the primary users (PUs) above a certain level. To achieve this goal, we present a noncooperative game in which SUs maximize their transmission rates and may also act as relays for the PUs in order to keep the PUs' perceived QoS above the given threshold. In the paper, we analyze the properties of the game using the theory of variational inequalities and provide an algorithm that converges to a Nash equilibrium of the game. Finally, we present some simulations and compare the algorithm with another method that does not consider SUs acting as relays.
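The following Python sketch is not the variational-inequality algorithm of the paper; it is a simplified best-response iteration for a toy version of the game, in which each SU raises its transmit power as far as the PU QoS (SINR) constraint allows and the relay behaviour is omitted. All channel gains and thresholds are assumed values chosen for the example.

import math
import random

N_SU = 4
P_MAX = 1.0           # maximum SU transmit power
NOISE = 0.1
PU_POWER = 1.0
PU_SINR_MIN = 2.0     # QoS threshold that must be guaranteed for the PU
G_PU = 1.0            # PU transmitter -> PU receiver gain

rng = random.Random(0)
g_su = [rng.uniform(0.5, 1.5) for _ in range(N_SU)]      # SU i -> own receiver
g_su_pu = [rng.uniform(0.05, 0.2) for _ in range(N_SU)]  # SU i -> PU receiver

def pu_sinr(p):
    interference = sum(g * q for g, q in zip(g_su_pu, p))
    return G_PU * PU_POWER / (NOISE + interference)

def best_response(i, p):
    # The SU rate is increasing in its own power, so the best response is to
    # use as much power as P_MAX and the PU SINR constraint allow.
    others = sum(g_su_pu[j] * p[j] for j in range(N_SU) if j != i)
    budget = G_PU * PU_POWER / PU_SINR_MIN - NOISE - others
    return max(0.0, min(P_MAX, budget / g_su_pu[i]))

p = [0.0] * N_SU
for _ in range(50):              # sequential (Gauss-Seidel) best responses
    for i in range(N_SU):
        p[i] = best_response(i, p)

rates = [math.log2(1.0 + g_su[i] * p[i] / NOISE) for i in range(N_SU)]
print("equilibrium powers:", [round(x, 3) for x in p])
print("SU rates (bit/s/Hz):", [round(r, 2) for r in rates])
print("PU SINR:", round(pu_sinr(p), 3), ">=", PU_SINR_MIN)

In this toy model the SU rates ignore SU-to-SU interference, so the coupling between players comes only through the shared interference budget at the PU receiver; the setting and algorithm of the paper are considerably richer.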