21 results for term structure strategy
at Universidad Politécnica de Madrid
Abstract:
Modern power converters must fulfill many requirements. Most applications demand smaller converters with high efficiency, improved power density and a fast dynamic response. For instance, loads such as microprocessors demand aggressive current steps with very high slew rates (100 A/µs and higher); moreover, during these load steps the supply voltage of the microprocessor must be kept within tight limits to ensure correct operation. Meeting these requirements is not an easy task; complex solutions, such as advanced topologies (e.g. multiphase converters) and advanced control strategies, are often needed. It is also necessary to operate the converter at high switching frequencies and to use capacitors with high capacitance and low ESR. Improving the dynamic response of a power converter does not rely on the control strategy alone; the power topology must also be suited to a fast dynamic response. Moreover, in recent years a fast dynamic response has come to mean not only handling fast load steps but also fast output voltage steps. At least two applications require fast voltage changes. The first is low-power microprocessors, in which the supply voltage is changed according to the workload while the operating frequency is adjusted at the same time; an important reduction in voltage-dependent losses can be achieved this way. This technique is known as Dynamic Voltage Scaling (DVS). The second is radio-frequency power amplifiers, where important energy savings can be achieved by modulating the supply voltage: RF architectures based on 'Envelope Tracking' and 'Envelope Elimination and Restoration' techniques take advantage of supply modulation to achieve important energy savings in the power amplifier.
However, to achieve these efficiency improvements, a power converter with high efficiency and sufficient bandwidth (hundreds of kHz or even tens of MHz) is needed to ensure an adequate supply voltage. The main objective of this Thesis is to improve the dynamic response of DC-DC converters from the point of view of the power topology. Here, dynamic response refers both to load steps and to voltage steps; it is also of interest to modulate the output voltage of the converter within a specific bandwidth. To accomplish this, the question of what limits the dynamic response of power converters must be answered. The analysis leads to the conclusion that the dynamic response is limited by the power topology and, specifically, by the filter inductance found in series between the input and the output of the converter. This series inductance determines the gain of the converter and provides its regulation capability. Although the energy stored in the filter inductance enables regulation and output-voltage filtering, it also imposes the limitation that concerns this Thesis: the series inductance stores energy and prevents the current from changing quickly, limiting the slew rate of the inductor current. Different solutions have been proposed in the literature to relax the limit imposed by the filter inductor. Many publications propose new topologies or improvements to known topologies, and complex control strategies have also been proposed to improve the dynamic response of power converters. In the proposed topologies the energy stored in the series inductor is reduced; examples are multiphase converters, Buck converters operating at very high frequency, and the addition of a low-impedance path in parallel with the series inductance.
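The slew-rate limit described above follows directly from the inductor law v = L·di/dt. A minimal sketch, with component values that are illustrative rather than taken from the Thesis:

```python
# Inductor law: v = L * di/dt, so the maximum current slew rate is v / L.
# The values below are hypothetical, chosen only to illustrate the limitation.

def current_slew_rate(v_inductor, inductance):
    """Maximum current slew rate (A/s) for a given voltage across the inductor."""
    return v_inductor / inductance

# A 100 nH filter inductor with 1 V available across it:
slew = current_slew_rate(1.0, 100e-9)   # 1e7 A/s, i.e. 10 A/us
```

Ten A/µs is an order of magnitude below the 100 A/µs load steps mentioned above, which is why reducing the stored energy (a smaller inductance, more phases, or no series inductor at all) is the lever the Thesis acts on.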
Control techniques proposed in the literature focus on adjusting the output voltage as fast as the power stage allows; examples are hysteresis control, V² control and minimum-time control. Some of the proposed topologies reduce the value of the series inductance and, with it, the energy stored in this magnetic element; less stored energy means a faster dynamic response. However, in some cases (as in the high-frequency Buck converter) the dynamic response is improved at the cost of efficiency. This Thesis proposes a drastic solution: to completely eliminate the series inductance of the converter. This is more radical than the solutions proposed in the literature. If the series inductance is eliminated, the regulation capability of the converter is limited, which can make the topology difficult to use in single-converter solutions; it is, however, well suited to power architectures where the energy conversion is carried out by more than one converter. When the series inductor is removed, the current slew rate is no longer limited, and the dynamic response of the converter becomes independent of the switching frequency. This is the main advantage of eliminating the series inductor. The main objective is therefore to propose an energy conversion strategy without series inductance. Without it, no energy is stored between the input and the output of the converter, and the dynamic response would be instantaneous if all devices were ideal. If the energy transfer from input to output occurred instantaneously when a load step arrives, conceptually no energy storage would be needed at the output (no output capacitor COUT) and, with an ideal input source, no input capacitor CIN would be necessary either.
This last feature (no CIN with an ideal VIN) is common to all power converters. In a real implementation, however, parasitic inductances, such as the leakage inductance of the transformer and the parasitic inductance of the PCB, cannot be avoided because they are inherent to the construction of the converter. These parasitic elements do not significantly affect the proposed concept. This Thesis proposes operating the converter without series inductance in order to improve its dynamic response; in exchange, the continuous regulation capability of the converter is lost. It is called continuous because, as explained throughout the Thesis, discrete regulation is indeed possible: a converter without filter inductance, and without energy stored in the magnetic element, can reach a limited number of output voltages, and the changes between these voltage levels are fast. The proposed energy conversion strategy is implemented with a multiphase converter in which the phases are coupled by discrete two-winding transformers instead of coupled inductors, since transformers are, ideally, non-energy-storing elements. This idea is the main contribution of this Thesis. The feasibility of the strategy is first analyzed and then verified by simulation and by experimental prototypes. Once the strategy is proven valid, different options for the magnetic structure are analyzed; three discrete transformer arrangements are studied and implemented. A converter based on this strategy is designed differently from classic converters, since an additional design degree of freedom is available: the switching frequency can be chosen according to the design specifications without penalizing the dynamic response or the efficiency.
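As a purely hypothetical illustration of discrete regulation, consider a converter whose reachable outputs are the fractions k/N of the input voltage for an N-phase arrangement; the actual level set of the prototypes depends on the transformer arrangement and is developed in the Thesis itself:

```python
# Hypothetical model of discrete (non-continuous) regulation: assume an
# N-phase inductorless converter can only produce outputs k/N * Vin,
# k = 0..N. This is an illustration of the concept, not the exact level
# set of the Thesis prototypes.

def discrete_levels(v_in, n_phases):
    return [v_in * k / n_phases for k in range(n_phases + 1)]

levels = discrete_levels(12.0, 4)   # [0.0, 3.0, 6.0, 9.0, 12.0]
```

Fast transitions between a few such levels, rather than continuous regulation, are what the strategy offers.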
Low operating frequencies can be chosen to favor efficiency; high operating frequencies (MHz) can be chosen to favor converter size. For this reason, a dedicated design procedure is proposed for the 'inductorless' conversion strategy. Finally, applications where the features of the proposed strategy (high efficiency with fast dynamic response) are advantageous are identified. One example is two-stage power architectures, where a high-efficiency converter is needed as the first stage and a second stage provides fine regulation. Another example is RF power amplifiers, where the supply voltage is modulated following an envelope reference in order to save power; this application requires a high-efficiency converter capable of fast voltage steps. The main contributions of this Thesis are: the proposal of a conversion strategy that, ideally, stores no energy in the magnetic element; the validation and implementation of the proposed strategy; the study of different magnetic structures based on discrete transformers for its implementation; the elaboration and validation of a design procedure; and the identification and validation of applications for the proposed strategy. It is important to remark that this work was done in collaboration with Intel. The particular features of the proposed strategy make it possible to solve the problems of microprocessor powering in a different way; for example, its high efficiency makes it a good candidate for power conditioning as the first stage of a two-stage architecture for powering microprocessors.
Abstract:
Wind power time series usually show complex dynamics, mainly due to non-linearities related to wind physics and the power transformation process in wind farms. This article provides an approach to incorporating observed local variables (wind speed and direction) to model some of these effects by means of statistical models. To this end, two families of varying-coefficient models (regime-switching and conditional parametric models) are benchmarked, considering the case of the offshore wind farm of Horns Rev in Denmark. The analysis focuses on one-step-ahead forecasting at a time series resolution of 10 min. It was found that the local wind direction helps model some features of the prevailing winds, such as the impact of wind direction on wind variability, whereas the non-linearities related to the power transformation process can be introduced by considering the local wind speed. In both cases, conditional parametric models performed better than the regime-switching strategy. The results reinforce the idea that each explanatory variable allows the modelling of different underlying effects in the dynamics of wind power time series.
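A conditional parametric model lets a regression coefficient vary smoothly with an explanatory variable. The sketch below (synthetic data, a hypothetical bandwidth, and a bare AR(1) structure chosen for brevity) shows the idea of a direction-dependent one-step-ahead forecast fitted by kernel-weighted least squares:

```python
import math

# Sketch of a conditional parametric one-step forecast: an AR(1) coefficient
# that varies with wind direction, fitted by kernel-weighted least squares.
# Data and bandwidth are synthetic, for illustration only.

def cp_forecast(power, direction, target_dir, bandwidth=30.0):
    """Forecast power[t+1] from power[t] with a direction-dependent slope."""
    num = den = 0.0
    for t in range(len(power) - 1):
        # circular distance between observed and target direction (degrees)
        d = abs((direction[t] - target_dir + 180.0) % 360.0 - 180.0)
        w = math.exp(-0.5 * (d / bandwidth) ** 2)   # Gaussian kernel weight
        num += w * power[t] * power[t + 1]
        den += w * power[t] ** 2
    slope = num / den                               # weighted LS, no intercept
    return slope * power[-1]

power = [0.5, 0.55, 0.6, 0.58, 0.62, 0.65]          # normalized wind power
direction = [270, 268, 272, 269, 271, 270]          # degrees
pred = cp_forecast(power, direction, target_dir=270)
```

In the article's setting the varying coefficient is conditioned on the local wind direction or speed and estimated over much longer 10-min series; this toy keeps only the kernel-weighting mechanism.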
Abstract:
Pre-slaughter handling in fish is important because it affects both physiological reactions and post mortem biochemical processes, and thus welfare and product quality. Pre-slaughter fasting is regularly carried out in aquaculture, as it empties the viscera of food and faeces, thus reducing the intestinal bacterial load and the spread of gut enzymes and potential pathogens to the flesh. However, it is unclear how long rainbow trout can be fasted before suffering unnecessary stress. In addition, very little is known about the best time of day to slaughter fish, which may in turn be dictated by diurnal rhythms in physiological stress parameters. Water temperature is also known to play a very important role in stress physiology in fish, but its combined effect with fasting is unclear. Current recommendations regarding the optimal duration of pre-slaughter fasting do not normally consider water temperature and are based only on days, not degree-days (ºC d). The effects of short-term fasting prior to slaughter (1, 2 and 3 days, between 11.1 and 68.0 ºC d) and hour of slaughter (08h00, 14h00 and 20h00) were determined in commercial-sized rainbow trout (Oncorhynchus mykiss) over four trials at different water temperatures (TRIAL 1, 11.8 ºC; TRIAL 2, 19.2 ºC; TRIAL 3, 11.1 ºC; and TRIAL 4, 22.7 ºC). We measured biometric, haematological, metabolic and product quality indicators. In each trial, fasted fish (n=90) were compared with 90 control fish kept under similar conditions but not fasted. Results show that fasting affected the biometric indicators: the condition factor of fasted trout was lower than that of controls after 2 days of food deprivation. Gut emptying occurred within the first 24 h after the cessation of feeding, with only small traces of digesta remaining after 48 h, and was faster at higher water temperatures.
Liver weight decreased in food-deprived fish, and the differences between fasted and fed trout were more evident when gut clearance was faster. The overall effect of fasting for up to three days on haematological indicators was small. Plasma cortisol levels were high in both fasted and fed fish in all trials. The plasma glucose response to fasting varied among trials, but glucose tended to be lower in fasted fish as fasting progressed. In any case, water temperature seems to have played a more important role, with higher concentrations at lower temperatures on days 2 and 3 after the cessation of feeding. Plasma lactate levels were also high, suggesting episodes of intense muscular activity, but no variation related to fasting could be found. Haematocrit showed no significant effect of fasting, while leucocytes tended to be higher when trout were less stressed and their body condition was better. Finally, the loss of liver weight was not accompanied by a decrease in liver glycogen (only measured in TRIAL 3), suggesting that a different strategy was used to maintain plasma glucose levels. Regarding the hour of slaughter, lower cortisol levels were found at 20h00, suggesting that trout were less stressed later in the day and that pre-slaughter handling may be less stressful at night. Haematocrit levels were also lower at 20h00, but only at lower temperatures, indicating that higher temperatures increase metabolism. Neither fasting nor the hour of slaughter had a significant effect on the evolution of meat quality during 3 days of storage. In contrast, storage time had a more important effect on meat quality parameters. The lowest pH was reached 24-48 h post mortem, with higher variability among fasting durations at 20h00, although no clear pattern could be discerned. Maximum stiffening from rigor mortis occurred after 24 h. The water-holding capacity was very stable throughout storage and seemed independent of pH changes.
Meat lightness (L*) slightly increased during storage and a* and b*-values were relatively stable. In conclusion, based on the haematological results, slaughtering at night may have less of a negative effect on welfare than at other times of the day. Overall, our results suggest that rainbow trout can cope well with fasting up to three days or 68 ºC d prior to slaughter and that their welfare is therefore not seriously compromised. At low water temperatures, trout could probably be fasted for longer periods without negative effects on welfare but more research is needed to determine the relationship between water temperature and days of fasting in terms of loss of live weight and the decrease in plasma glucose and other metabolic indicators.
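The degree-day unit used above is simply days of fasting multiplied by mean water temperature, which is why the same number of days represents a different physiological load at different temperatures. A small sketch using the trial temperatures:

```python
# Degree-days (C d) = days of fasting x mean water temperature (C), so one
# day at 22.7 C "costs" roughly twice as much as one day at 11.1 C.

def degree_days(days, water_temp_c):
    return days * water_temp_c

# Trial temperatures from the study: 11.8, 19.2, 11.1 and 22.7 C.
low = degree_days(1, 11.1)    # 11.1 C d (one day, coolest trial)
high = degree_days(3, 22.7)   # 68.1 C d, matching the ~68 C d upper bound
```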
Abstract:
This article presents an alternative approach to the decision-making process in transport strategy design. The study explores the possibility of integrating forecasting, assessment and optimization procedures to support a decision-making process designed to reach the best achievable scenario through mobility policies. Long-term evaluation, as required by a dynamic system such as a city, is provided by a strategic Land-Use and Transport Interaction (LUTI) model. The social welfare achieved by implementing the mobility policies in the LUTI model is measured through a cost-benefit analysis and maximized through an optimization process over the evaluation period. The method is tested by optimizing a cordon-toll pricing scheme in Madrid in a context requiring system efficiency, social equity and environmental quality. The optimized scheme yields an appreciable increase in social surplus at a relatively low toll rate compared with other similar pricing schemes. The results highlight the different considerations regarding mobility impacts on the case study area, as well as the major contributors to the social welfare surplus. This leads the authors to reconsider the cost-benefit approach, as defined in the study, as the best option for formulating sustainability measures.
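The optimization step can be pictured as a search over the toll rate for the maximum of the cost-benefit objective. The welfare curve below is a hypothetical concave stand-in for the LUTI-plus-CBA evaluation, which in the study is far more elaborate:

```python
# Sketch of the outer optimization loop: choose the cordon-toll rate that
# maximizes social welfare over the evaluation period. The welfare function
# is a toy concave model, not the study's actual LUTI + cost-benefit chain.

def social_welfare(toll):
    # toy model: benefits rise with the toll, but demand suppression
    # eventually dominates (units are arbitrary)
    return 10.0 * toll - 2.0 * toll ** 2

def best_toll(candidates):
    return max(candidates, key=social_welfare)

rates = [i * 0.25 for i in range(21)]   # candidate rates 0.00 .. 5.00
optimum = best_toll(rates)              # peak of 10t - 2t^2 sits at t = 2.5
```

A real run would evaluate the LUTI model and the cost-benefit analysis at each candidate rate; only the outer search is represented here.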
Abstract:
This paper employs a 3D hp self-adaptive grid-refinement finite element strategy for the solution of a particular electromagnetic waveguide structure known as the Magic-T. This structure is utilized as a power divider/combiner in communication systems as well as in other applications. It often incorporates dielectrics, metallic screws, round corners and similar details, which may facilitate its construction or improve its design but significantly complicate its modeling with semi-analytical techniques. The hp-adaptive finite element method enables accurate modeling of a Magic-T structure even in the presence of these inconvenient materials and geometries. Numerical results demonstrate the suitability of the hp-adaptive method for modeling a Magic-T rectangular waveguide structure, delivering errors below 0.5% with a limited number of unknowns. Solutions of waveguide problems delivered by the self-adaptive hp-FEM are comparable to those obtained with semi-analytical techniques such as the Mode Matching method, for problems where the latter can be applied; at the same time, the hp-adaptive FEM enables accurate modeling of more complex waveguide structures.
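Any self-adaptive FEM iterates a solve-estimate-mark-refine cycle until an error target (here, the 0.5% quoted above) is met. The 1D toy below keeps only the h-refinement part of that cycle; real hp-adaptivity also raises the local polynomial order p, and the function being resolved is hypothetical:

```python
# Solve-estimate-mark-refine reduced to a 1D toy: piecewise-linear
# interpolation of f, repeatedly bisecting the element with the largest
# midpoint error until the worst error drops below the tolerance.

def f(x):
    return x ** 3          # hypothetical stand-in for the field being resolved

def midpoint_error(a, b):
    mid = 0.5 * (a + b)
    interp = 0.5 * (f(a) + f(b))   # linear interpolant at the midpoint
    return abs(interp - f(mid))    # local error estimate

def adapt(a, b, tol):
    elements = [(a, b)]
    while True:
        errors = [midpoint_error(lo, hi) for lo, hi in elements]
        worst = max(range(len(elements)), key=lambda i: errors[i])
        if errors[worst] < tol:
            return elements
        lo, hi = elements.pop(worst)   # mark & refine: bisect worst element
        mid = 0.5 * (lo + hi)
        elements += [(lo, mid), (mid, hi)]

mesh = adapt(0.0, 1.0, tol=0.005)      # error below 0.5% of the unit scale
```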
Abstract:
The elaboration of a generic decision-making strategy to address the evolution of an emergency situation, from response to recovery and including a planning stage, can facilitate timely, effective and consistent decision making by the response organisations at every level within the emergency management structure and between countries, helping to ensure optimal protection of health, the environment and society. The degree of stakeholder involvement in this process is a key strategic element for strengthening local preparedness and response, and can contribute to a successful countermeasure strategy. Significant progress was made with the multi-national European project EURANOS (2004-2009), which brought together best practice, knowledge and technology to enhance Europe's preparedness to respond to any radiation emergency and long-term contamination. The subsequent establishment of a European Technology Platform and the recent launch of the research project NERIS-TP ("Towards a self sustaining European Technology Platform (NERIS-TP) on Preparedness for Nuclear and Radiological Emergency Response and Recovery") aim to continue the remaining tasks for achieving appropriate levels of emergency preparedness at the local level in most European countries. One of the objectives of the NERIS-TP project is to strengthen preparedness at the local/national level by setting up dedicated fora and by developing new tools, or adapting the tools developed within the EURANOS project (such as the governance framework for preparedness, the handbooks on countermeasures, the RODOS system, and the MOIRA DSS for long-term contamination in catchments), to meet the needs of local communities.
Within this project, CIEMAT and UPM, in close interaction with the Nuclear Safety Council, will explore the use and application in Spain of such technical tools, including other national tools and information and communication strategies, to foster cooperation between local, national and international stakeholders. The aim is to identify and involve relevant stakeholders in emergency preparedness in order to improve the development and implementation of appropriate protection strategies as part of consequence management and the transition to recovery. This paper presents an overview of the state of the art in this area in Spain, together with the methodology and work plan proposed by the Spanish group within the NERIS project to increase stakeholder involvement in preparedness for emergency response and recovery.
Abstract:
In the mid to long term after a nuclear accident, the contamination of drinking water sources, fish and other aquatic foodstuffs, irrigation supplies, and people's exposure during recreational activities may create considerable public concern, even though dose assessment may in certain situations indicate lesser importance than other sources, as clearly experienced in the aftermath of past accidents. In such circumstances there are a number of available countermeasure options, ranging from specific chemical treatment of lakes to bans on fish ingestion or on the use of water for crop irrigation. The potential actions can be broadly grouped into four main categories: chemical, biological, physical and social. In some cases a combination of actions may be the optimal strategy, and a decision support system (DSS) like MOIRA-PLUS can be of great help in optimising a decision. A further option is of course to take no remedial action, although this may also have significant socio-economic repercussions which should be adequately evaluated. MOIRA-PLUS is designed to allow a reliable assessment of the long-term evolution of the radiological situation and of feasible alternative rehabilitation strategies, including an objective evaluation of their social, economic and ecological impacts in a rational and comprehensive manner. MOIRA-PLUS also features a decision analysis methodology, based on multi-attribute analysis, which can take into account the preferences and needs of different types of stakeholders. The main functions and elements of the system are described briefly. The conclusions from end-users' experiences with the system are also discussed, including exercises involving the organizations responsible for emergency management and the affected services, as well as different local and regional stakeholders. MOIRA-PLUS has proven to be a mature system, user friendly and relatively easy to set up.
It can support better decision-making by enabling a realistic evaluation of the complete impacts of possible recovery strategies. In addition, the interaction with stakeholders has helped identify improvements to the system, which have recently been implemented.
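The multi-attribute step can be sketched as a weighted aggregation of normalized impacts per strategy. The strategy names, scores and stakeholder weights below are invented for illustration and do not come from MOIRA-PLUS itself:

```python
# Sketch of multi-attribute ranking: score each countermeasure strategy as a
# weighted sum of normalized impact attributes. All names, scores and weights
# are hypothetical examples, not MOIRA-PLUS data.

def rank_strategies(strategies, weights):
    def utility(attrs):
        return sum(weights[k] * v for k, v in attrs.items())
    return sorted(strategies, key=lambda s: utility(s[1]), reverse=True)

weights = {"dose_reduction": 0.5, "economic": 0.3, "social": 0.2}
strategies = [
    ("no action",      {"dose_reduction": 0.0, "economic": 1.0, "social": 0.4}),
    ("lake treatment", {"dose_reduction": 0.8, "economic": 0.5, "social": 0.7}),
    ("fishing ban",    {"dose_reduction": 0.9, "economic": 0.2, "social": 0.3}),
]
ranking = rank_strategies(strategies, weights)   # best strategy first
```

Different stakeholder groups would supply different weight vectors, and the ranking can change accordingly, which is the point of eliciting their preferences.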
Abstract:
Experimental research on imposed deformation is generally conducted in small-scale laboratory experiments. The attractiveness of field research lies in the possibility of comparing results obtained from full-scale structures with theoretical predictions; unfortunately, measurements obtained from real structures are rarely described in the literature. The structural response of integral buildings depends significantly on stiffness changes and constraints. The New Airport Terminal Barajas in Madrid, Spain, provides large integral modules: partially post-tensioned concrete frames, cast monolithically over three floor levels, with an overall length of approx. 80 m. The field campaign described in this article covers the instrumentation of one of these frames, focusing on the influence of imposed deformations such as creep, shrinkage and temperature. The monitoring equipment included embedded strain gages, thermocouples, DEMEC measurements and simple displacement measurements. Data were collected throughout construction and during two years of service; a complete data range of five years is presented and analysed. The results are compared with a simple approach for predicting the long-term shortening of this concrete structure, and both analytical and experimental results are discussed.
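A simple long-term shortening prediction of the kind the measurements are compared against combines elastic strain amplified by a creep coefficient with free shrinkage. The numbers below are generic textbook values, not data from the Barajas campaign:

```python
# Simple long-term shortening estimate for a concrete member:
# total strain = elastic strain * (1 + creep coefficient) + shrinkage strain.
# All input values are typical textbook magnitudes, chosen for illustration.

def long_term_shortening(stress_mpa, e_mod_mpa, creep_coeff,
                         shrinkage_strain, length_m):
    elastic = stress_mpa / e_mod_mpa
    total_strain = elastic * (1.0 + creep_coeff) + shrinkage_strain
    return total_strain * length_m * 1000.0   # shortening in mm

# 80 m module, 10 MPa mean compression, E = 30 GPa, phi = 2.0, eps_sh = 400e-6
delta = long_term_shortening(10.0, 30000.0, 2.0, 400e-6, 80.0)
```

For an 80 m module these typical values already give shortening on the order of 100 mm, which is why integral frames of this length warrant monitoring campaigns like the one described.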
Abstract:
We studied the use of the Balanced Scorecard (BSC) as an instrument of control and strategic management in Spanish public universities, and its application to the School of Mines and Energy at Universidad Politécnica de Madrid. The main advantage of the BSC is that it improves the organizational structure of the workplace and the achievement of the objectives that ensure long-term success. We first review the strategy for success used in the Spanish educational system, and specifically in Spanish public universities, and then apply the BSC to define the main strategic lines for the management of the School of Mines and Energy. These strategic lines affect all university groups, and the value of the BSC as a tool lies in increasing communication among the faculty, support staff, students and society in general that make up the university. After a SWOT analysis (DAFO in Spanish), different perspectives are proposed that focus the long-term strategic objectives. The BSC is designed around these objectives, which set the direction through indicators and initiatives so that goals are achieved on the programmed schedule. In the teaching perspective, objectives are set to update facilities and increase partnerships with other universities and businesses, encouraging ongoing staff training and improving coordination and internal communication. The internal process perspective aims at improving marketing, promoting the international dimension of the school through strategic alliances, improving mobility for students and professors, and raising the quality of teaching and research results. The customer perspective addresses the image of the school, the quality perceived by students and the loyalty of the teaching staff through talent retention. Finally, the financial perspective aims to contain costs without harming quality, improve the employability of students and achieve relevant teaching and research positions according to international measurement standards.
Abstract:
A simple and scalable chemical approach has been proposed for the generation of one-dimensional nanostructures of two important inorganic materials, zinc oxide and cadmium sulfide. By controlling the growth habit of the nanostructures through manipulation of the reaction conditions, the diameter and uniformity of the nanowires/nanorods were tailored. We extensively studied the optical behavior and structural growth of CdS nanowires and ZnO nanorods doped into the ferroelectric liquid crystal Felix-017/100. The doping changed the band gap, and several blue shifts occurred in the photoluminescence spectra because of the nanoconfinement effect and the mobility of charges.
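The blue shift attributed to the nanoconfinement effect can be estimated with the effective-mass (Brus) model, which predicts how the band gap widens as the nanostructure radius shrinks. This is a generic textbook estimate, not the analysis used in the abstract; the CdS effective masses and dielectric constant below are commonly quoted literature values and the radii are purely illustrative.

```python
import math

# Physical constants (SI)
HBAR = 1.054571817e-34   # reduced Planck constant (J s)
M0 = 9.1093837015e-31    # electron rest mass (kg)
E = 1.602176634e-19      # elementary charge (C)
EPS0 = 8.8541878128e-12  # vacuum permittivity (F/m)

def brus_shift_eV(radius_nm, me=0.19, mh=0.80, eps_r=8.9):
    """Confinement-induced band-gap widening (Brus model) for a sphere.

    me, mh: effective masses in units of m0; eps_r: relative permittivity.
    Defaults are typical CdS values from the literature (an assumption here).
    """
    R = radius_nm * 1e-9
    # Kinetic confinement term: hbar^2 pi^2 / (2 R^2) * (1/me + 1/mh)
    kinetic = (HBAR**2 * math.pi**2) / (2 * R**2) * (1/(me*M0) + 1/(mh*M0))
    # Screened electron-hole Coulomb attraction (reduces the shift)
    coulomb = 1.8 * E**2 / (4 * math.pi * eps_r * EPS0 * R)
    return (kinetic - coulomb) / E

# Illustrative radii; the CdS bulk gap is about 2.42 eV for reference.
for r in (2.0, 3.0, 5.0):
    print(f"R = {r} nm: estimated blue shift {brus_shift_eV(r):.3f} eV")
```

The shift grows roughly as 1/R², which is why narrower nanowires show stronger blue shifts in the photoluminescence spectra.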
Abstract:
This article presents an alternative approach to the decision-making process in transport strategy design. The study explores the possibility of integrating forecasting, assessment and optimization procedures in support of a decision-making process designed to reach the best achievable scenario through mobility policies. Long-term evaluation, as required by a dynamic system such as a city, is provided by a strategic Land-Use and Transport Interaction (LUTI) model. The social welfare achieved by implementing the mobility policies in the LUTI model is measured through a cost-benefit analysis and maximized through an optimization process over the evaluation period. The method is tested by optimizing a cordon-toll pricing scheme in Madrid in a context requiring system efficiency, social equity and environmental quality. The optimized scheme yields an appreciable increase in social surplus at a relatively low rate compared to other similar pricing schemes. The results highlight the different considerations regarding mobility impacts on the case study area, as well as the major contributors to the social welfare surplus. This leads the authors to consider the cost-benefit approach, as defined in the study, the best option for formulating sustainability measures.
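The optimization loop described above (evaluate the welfare of each candidate toll rate, then pick the maximizer) can be sketched in a few lines. In the study each evaluation is a full LUTI model run with a cost-benefit analysis over the horizon; here a stylized, entirely hypothetical welfare function stands in for that model, so the numbers mean nothing beyond illustrating the procedure.

```python
# Toy sketch of optimizing a cordon-toll rate by maximizing social welfare.
# A stylized quadratic welfare function replaces the LUTI + cost-benefit run.

def welfare(toll):
    """Hypothetical social surplus (arbitrary units) for a toll rate in euros.
    Revenue and decongestion benefits rise with the toll, while suppressed
    trips impose a loss that grows faster at high rates."""
    revenue_and_decongestion = 12.0 * toll
    suppressed_trip_loss = 2.0 * toll**2
    return revenue_and_decongestion - suppressed_trip_loss

# Grid search over candidate rates (one model evaluation per candidate in practice).
candidates = [i * 0.1 for i in range(0, 101)]   # 0.0 ... 10.0 euros
best = max(candidates, key=welfare)
print(f"optimal toll = {best:.1f} EUR, welfare = {welfare(best):.1f}")
```

With a real LUTI model each evaluation is expensive, so a derivative-free optimizer would typically replace the exhaustive grid search; the structure of the loop is the same.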
Abstract:
This Thesis addresses the efficiency problems of electrical grids from the consumption point of view. In particular, efficiency is improved by smoothing the aggregated consumption curve. This smoothing objective entails two major improvements in the use of electrical grids: i) in the short term, a better use of the existing infrastructure, and ii) in the long term, a reduction of the infrastructure required to supply the same energy needs. In addition, this Thesis faces a new energy paradigm, in which distributed generation, in particular photovoltaic (PV) generation, is widespread over the electrical grids. This kind of energy source affects the operation of the grid by increasing its variability, which implies that a high penetration rate of photovoltaic electricity is detrimental to grid stability. This Thesis seeks to smooth the aggregated consumption while taking this energy source into account. Therefore, not only is the efficiency of the electrical grid improved, but the penetration of photovoltaic electricity into the grid can also be increased. This proposal brings great benefits in the economic, social and environmental fields.
The actions that influence the way consumers use electricity in order to achieve energy savings or higher efficiency are called Demand-Side Management (DSM). This Thesis proposes two different DSM algorithms to meet the aggregated consumption smoothing objective. The difference between the two DSM algorithms lies in the framework in which they take place: the local framework and the grid framework. Depending on the DSM framework, the energy goal and the procedure to reach it differ. In the local framework, the DSM algorithm only uses local information. It does not take into account other consumers or the aggregated consumption of the electrical grid. Although this may differ from the general definition of DSM, it makes sense in local facilities equipped with Distributed Energy Resources (DERs). In this case, DSM is focused on maximizing the use of local energy, reducing dependence on the grid. The proposed DSM algorithm significantly improves the self-consumption of the local PV generator. Simulated and real experiments show that self-consumption serves as an important energy management strategy, reducing electricity transport and encouraging users to control their energy behavior. However, despite all the advantages of increased self-consumption, it does not contribute to the smoothing of the aggregated consumption. The effects of the local facilities on the electrical grid are studied when the DSM algorithm is focused on self-consumption maximization. This approach may have undesirable effects, increasing the variability of the aggregated consumption instead of reducing it. This effect occurs because, in the local framework, the algorithm only considers local variables. The results suggest that coordination between these facilities is required: through this coordination, consumption should be modified taking into account other elements of the grid and seeking the smoothing of the aggregated consumption.
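The local-framework objective, maximizing self-consumption of the local PV generator, can be illustrated with a greedy battery dispatch: store PV surplus locally and discharge it to cover load before importing from the grid. This is a minimal sketch of the idea, not the thesis algorithm; the profiles, battery size and the `self_consume` helper are all hypothetical.

```python
def self_consume(pv, load, capacity):
    """Greedy battery dispatch that maximizes use of local PV energy.

    pv, load: hourly energy profiles (kWh); capacity: battery size (kWh).
    Returns (total grid import, total self-consumed PV energy).
    """
    soc = 0.0            # battery state of charge (kWh)
    grid_import = 0.0
    self_consumed = 0.0
    for p, l in zip(pv, load):
        direct = min(p, l)                     # PV covering load instantly
        surplus = p - direct
        deficit = l - direct
        charge = min(surplus, capacity - soc)  # store the surplus locally
        soc += charge
        discharge = min(deficit, soc)          # cover remaining load from battery
        soc -= discharge
        grid_import += deficit - discharge
        self_consumed += direct + charge
    return grid_import, self_consumed

pv   = [0, 0, 3, 5, 5, 3, 0, 0]   # illustrative daytime PV output (kWh)
load = [1, 1, 1, 1, 1, 1, 3, 3]   # illustrative household load (kWh)
no_batt = self_consume(pv, load, capacity=0.0)
batt = self_consume(pv, load, capacity=6.0)
print("grid import without battery:", no_batt[0], "kWh; with 6 kWh battery:", batt[0], "kWh")
```

Note that this purely local policy says nothing about the aggregated curve: if every facility charges and discharges at the same solar-driven times, the grid-level variability can increase, which is exactly the effect the abstract warns about.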
In the grid framework, the DSM algorithm takes into account both local and grid information. This Thesis develops a self-organized algorithm to manage the consumption of an electrical grid in a distributed way. The goal of this algorithm is the smoothing of the aggregated consumption, as in classical DSM implementations. The distributed approach means that DSM is performed from the consumers' side without following direct commands issued by a central entity. Therefore, this Thesis proposes a parallel management structure rather than a hierarchical one as in classical electrical grids. This implies that a coordination mechanism between facilities is required, and this Thesis seeks to minimize the amount of information necessary for that coordination. To achieve this objective, two collective coordination techniques have been used: coupled oscillators and swarm intelligence. The combination of these techniques to coordinate a system with the characteristics of the electrical grid is in itself a novel approach; this coordination objective is therefore a contribution not only to the energy management field but also to the field of collective systems. Results show that the proposed DSM algorithm reduces the difference between the maxima and minima of the electrical grid consumption in proportion to the amount of energy controlled by the algorithm. Thus, the greater the amount of energy controlled by the algorithm, the greater the improvement in the efficiency of the electrical grid. In addition to the advantages resulting from the smoothing of the aggregated consumption, other advantages arise from the distributed approach followed in this Thesis. These advantages are summarized in the following features of the proposed DSM algorithm: • Robustness: in a centralized system, a failure or breakage of the central node causes a malfunction of the whole system. Managing a grid from a distributed point of view implies that there is no central control node.
A failure in any facility does not affect the overall operation of the grid. • Data privacy: the use of a distributed topology means that there is no central node holding sensitive information about all consumers. This Thesis goes a step further: the proposed DSM algorithm does not use specific information about consumer behaviors, making the coordination between facilities completely anonymous. • Scalability: the proposed DSM algorithm operates with any number of facilities, allowing the incorporation of new facilities without affecting its operation. • Low cost: the proposed DSM algorithm adapts to current grids without any topological requirements. In addition, every facility computes its own management with low computational requirements, so a central node with high computational power is not needed. • Quick deployment: the scalability and low-cost features of the proposed DSM algorithms allow a quick deployment. No complex deployment schedule is required for this system.
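The coupled-oscillator coordination idea named above can be illustrated with a Kuramoto-style model: give each facility a phase that decides when its deferrable load runs within a cycle, and couple the phases repulsively so they spread apart, flattening the aggregate. This is a minimal demonstration of the technique, not the thesis algorithm; every parameter below is arbitrary.

```python
import math, random

random.seed(1)
N, K, DT, STEPS = 20, -0.5, 0.05, 400          # K < 0 gives repulsive coupling
# Clustered initial phases model the situation where all loads run at the same
# time of day, i.e. a demand peak.
phases = [random.uniform(0.0, 0.5) for _ in range(N)]

def sync_level(ph):
    """Kuramoto order parameter |r|: 1 = all loads coincide, 0 = evenly spread."""
    re = sum(math.cos(p) for p in ph) / len(ph)
    im = sum(math.sin(p) for p in ph) / len(ph)
    return math.hypot(re, im)

before = sync_level(phases)
for _ in range(STEPS):
    # Euler step of dp_i/dt = (K/N) sum_j sin(p_j - p_i); with K < 0 each phase
    # is pushed away from the others, so the loads de-synchronize.
    phases = [p + DT * (K / N) * sum(math.sin(q - p) for q in phases)
              for p in phases]
after = sync_level(phases)
print(f"sync level: {before:.2f} -> {after:.2f} (lower = smoother aggregate)")
```

The appeal for a distributed DSM scheme is that each facility only needs the phases of the others (or an aggregate of them), not their consumption data, which matches the anonymity and low-information-exchange goals listed in the abstract.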
Abstract:
In a world of constant, accelerating change, innovation is the fuel that firms use to renew themselves and, as a consequence, to survive in the long term. Innovation is undoubtedly a key element of a firm's ability to create value over time, and companies often devote considerable effort and diverse resources to identifying innovation alternatives that fit their strategy, culture, corporate goals and ambitions. Open innovation refers to a specific approach to innovation based on collaborating with other firms or participants operating within the same business ecosystem, understood here as the set of customers, suppliers, competitors and other participants that interact in a shared environment in which leadership positions can change over time (Moore 1996). The term open innovation was pioneered by Henry Chesbrough more than a decade ago to refer to this particular mode of driving corporate innovation.
Open innovation is a new paradigm that has attracted academic and business interest for over a decade. Several cases of open innovation from different countries and different economic sectors are included and reviewed in this document. The main objective of this study is to develop and explain a relationship model between open innovation and value creation. To this end, and as secondary objectives, we have explored the elements of an Open Innovation Program, the drivers of value creation, the process of value creation and, finally, the interaction between these three elements. As a final product of the research we have developed a general theoretical framework for establishing the connection between open innovation and value creation that facilitates the explanation of the interaction between the two. From the case studies we see that open innovation can encompass all sectors of the economy, multiple geographies and businesses of varying sizes (large companies, SMEs and even start-ups), each with a different relevance within the ecosystem in which it participates. Elements of an Open Innovation Program We begin by listing and describing the items that can be found in an Open Innovation Program. Many of these items have been identified through a review of the relevant academic literature. Furthermore, in order to achieve a better understanding of Open Innovation Programs, we have classified those aspects into four categories according to the features they share. § Program Organization. An organizational structure must exist with a degree of interaction between the different members involved in the innovation process. This structure must be able to meet a number of previously established objectives. § Internal Talent. Internal talent plays a key role in the implementation and success of any Open Innovation Program. An open innovation culture and leadership skills are essential for adopting either radical or incremental innovation.
In fact, leadership is closely linked to organizational behavior and is essential to promote open innovation. § Infrastructure. This category groups the elements related to the technological infrastructure required to carry out the program, including production processes and daily management tools. § Instruments. Finally, we list the instruments or vehicles used in the corporate environment to implement open innovation. Several instruments are available, such as corporate incubators, licensing agreements or corporate venture capital. There has been a growing and renewed interest in the latter, both in academia and in business circles. The use of corporate venture capital to sustain the development of the open innovation strategy brings ability, credibility and technological support to the process. The combination of elements from these four categories, interacting in a coordinated way, makes it possible to create, enhance and develop value creation drivers that may impact the company's strategy and organization and affect its financial performance over time. The Drivers of Value Creation After identifying, describing and categorizing the different elements present in an Open Innovation Program, our research examines the drivers of value creation. These can be defined as elements that enhance or determine the ability to create value in the business environment. These drivers act as points of interaction between the elements of the program and the process of value creation. The study identifies six drivers of value creation that may be found in an Open Innovation Program. § New Products and Services. The most direct and obvious driver of value creation in any Open Innovation Program is the ability to create new products and services, which is directly related to the company's innovation process. § Access to Adjacent Markets.
The innovation process can also serve as a source of value by granting access to adjacent markets, satisfying new needs of existing customers or attracting new customers from other markets. § Availability of Technologies. The availability of technology is in itself a driver of value creation. New technologies can be complementary and/or can leverage existing technologies within the firm, partly transforming certain elements of the company's strategy. § External Talent Attraction. Incorporating an Open Innovation Program offers the opportunity to interact with other organizations operating in the same ecosystem and can therefore attract external skilled resources. Talent mobility is a unique feature of open innovation. § Becoming Part of a Virtuous Circle. The actions carried out in the environment by any of its members will also have a clear impact on value creation for the other participants. Participation in a virtuous ecosystem is thus a driver of value creation in an open innovation strategy. § Inside-out Technology. Value creation may also arise from allowing other firms in the ecosystem to incorporate internally developed, under-utilized technologies into their own innovation processes. These six drivers, present in the innovation process, can influence the strategy and the organization of the company, increasing its ability to create value. The Value Creation Process Value-based management is the management approach that requires aligning the corporate strategy and the organizational design to create value and obtain sustained financial returns (at least, higher returns than competitors). We describe how the drivers of value creation can enhance competitive advantages by aligning strategy and organization. During this study, we determined that real options can be used as management tools in open innovation environments, which, by definition, have high levels of uncertainty.
Real options provide a capability for flexible and modular decision-making in the business environment. In particular, real options were designed for uncertainty management and may therefore be widely applied in innovation environments. We analyze potential uses of real options to supplement the various instruments identified in the Open Innovation Programs. The Interaction Between Open Innovation Programs, Value Creation Drivers and the Value Creation Process As a result of this study, we have developed a general framework for value creation in Open Innovation Programs. This framework includes three key elements. We first described the elements that are present in Open Innovation Programs. Next, we showed how these programs can boost the six drivers of value creation that have been identified. Finally, we analyzed how the drivers affect the strategy and organization of the company in order to lead to the creation of sustainable value. Through an Open Innovation Program, value drivers can be developed to strengthen a company's strategic position and its ability to create value. That is what we call the framework for value creation in the Open Innovation Program. Value drivers can combine to generate an optimal strategy that helps foster superior financial performance and a sustained value creation process. In sum, we have developed a relationship model that describes the process of creating value in a firm with an Open Innovation Program. We have identified the drivers of value creation and described how the different elements of the model interact with each other.
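The flexibility that real options bring to uncertain innovation environments can be sketched numerically. The example below is a minimal, hypothetical illustration (not part of the study): a one-step binomial valuation of the option to defer an investment, using plain expected-value discounting rather than full risk-neutral pricing for simplicity. All figures are invented.

```python
# Minimal one-step binomial valuation of a deferral (real) option.
# All numbers are hypothetical illustrations, not data from the study.

def deferral_option_value(v_up, v_down, investment, p_up, discount_rate):
    """Value of the option to defer an investment for one period.

    At the end of the period the project is worth v_up or v_down;
    the firm invests only if the payoff is positive (flexibility).
    """
    payoff_up = max(v_up - investment, 0.0)
    payoff_down = max(v_down - investment, 0.0)
    expected_payoff = p_up * payoff_up + (1.0 - p_up) * payoff_down
    return expected_payoff / (1.0 + discount_rate)

# Hypothetical venture: project worth 150 or 60 next year, entry cost 100.
flexible = deferral_option_value(150.0, 60.0, 100.0, 0.5, 0.05)

# Committing today instead locks in the expected NPV without flexibility:
committed = (0.5 * 150.0 + 0.5 * 60.0) / 1.05 - 100.0

print(round(flexible, 2), round(committed, 2))  # flexibility exceeds commitment
```

The gap between the two values is the premium that managerial flexibility adds under uncertainty, which is the intuition behind pairing real options with open innovation instruments such as corporate venturing.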
Resumo:
This chapter describes the current problems surrounding the implementation of Information Technology (IT) service management in Small and Medium Enterprises (SMEs). It explains why reaching a maturity/capability level through well-known standards, or implementing good software engineering practices by means of the IT Infrastructure Library, is very difficult for SMEs to achieve, and it presents solutions to these problems. The master's thesis goals are also presented in terms of purpose, research questions, research goals, objectives, and scope. Finally, the thesis structure is described.
A simplified spectral approach for impedance-based damage identification of FRP-strengthened RC beams
Resumo:
Nowadays, the strengthening and repair of reinforced concrete structures by bonding fibre-reinforced polymer (FRP) strips is increasingly used because of its numerous advantages. However, beams strengthened with this technique can experience a brittle failure mode caused by the sudden debonding of the FRP strip starting from an intermediate crack. Despite its importance, the number of studies addressing this failure mechanism and its monitoring is very limited. Therefore, developing methodologies capable of monitoring the long-term bonding of this reinforcement to concrete structures, and of identifying when strip debonding begins, is an important challenge. The main objective of this thesis is the implementation of a reliable and effective methodology capable of detecting the debonding of an FRP strip in a reinforced concrete beam starting from an intermediate crack. To achieve this objective, a numerical calibration procedure based on experimental tests has been implemented. First, a simple and inexpensive one-dimensional numerical model representative of the behaviour of this type of FRP-strengthened concrete beam has been developed, based on a discrete crack model for the concrete and on the spectral element method. The progressive formation of flexural cracks and the consequent debonding at the concrete-FRP interface are formulated by introducing a new element capable of representing both phenomena simultaneously without affecting the numerical procedure.
In addition, with the proposed model, the high-frequency dynamic response of this type of structure can be obtained in a simple way, which makes it very useful as a tool for the diagnosis and early-stage detection of debonding by monitoring the variation of the local dynamic characteristics of the structure. A very promising non-destructive evaluation method for local structural monitoring is the impedance method using piezoelectric (PZT) sensor-actuators. The electrical impedance of the PZT sensors can be related to the mechanical impedance of the structures to which they are bonded. Since the mechanical impedance of a structure is affected by its deterioration, damage indicators can be implemented by comparing the admittance spectrum (the inverse of the impedance) across different stages of the service life of a structure. Any change in the spectrum can be interpreted as a variation in the integrity of the structure. The electrical impedance is measured at high frequencies, so this methodology should be very sensitive to incipient local damage, as desired in this work. A PZT-FRP spectral element has been implemented as an extension of the previously developed model, in order to numerically compute the electrical impedance of PZT sensors bonded to FRP strips on a reinforced concrete beam. The model, combined with experimental measurements captured by PZT sensors, is implemented within a model updating methodology to quantitatively detect debonding at the interface between an FRP strip and a concrete beam. The optimization procedure is solved using a cooperative particle swarm method with a bagging algorithm.
The results show a very good approximation in the damage estimation for the proposed problem. Additionally, an adaptive spectral element meshing method has also been developed to locate damaged zones from the experimental results, which helps to increase the robustness and effectiveness of the proposed method when identifying incipient damage at its initial appearance. Finally, a multi-objective optimization procedure has been carried out to detect initial debonding in a full-scale FRP-strengthened concrete beam from the impedances captured by a network of PZT sensors instrumented along the length of the beam. Each sensor provides the data that define one of the objective functions of the procedure. By combining the previous spectral element model with a multi-objective PSO algorithm, the resulting damage detection procedure provides satisfactory results considering the scale of the structure and all the characteristic uncertainties associated with this process. The obtained results prove the feasibility and capability of the aforementioned methods, as well as their potential in real applications.
Abstract
Nowadays, the external bonding of fibre-reinforced polymer (FRP) plates or sheets is increasingly used for the strengthening and retrofitting of reinforced concrete (RC) structures due to its numerous advantages. However, this kind of strengthening often leads to brittle failure modes, the most dominant of which is debonding induced by an intermediate crack (IC). In spite of its importance, the number of studies regarding the IC debonding mechanism and bond health monitoring is very limited. Methodologies able to monitor the long-term efficiency of bonding and successfully identify the initiation of FRP debonding constitute a challenge to be met.
The main purpose of this thesis is the implementation of a reliable and effective damage identification methodology able to detect intermediate crack debonding in FRP-strengthened RC beams. To achieve this goal, a model updating procedure based on numerical simulations and experimental tests has been implemented. To this end, first, a simple and inexpensive one-dimensional model based on the discrete crack approach for concrete and the spectral element method has been developed. The progressive formation of flexural cracks and the subsequent concrete-FRP interfacial debonding are formulated by introducing a new element able to represent both phenomena simultaneously without perturbing the numerical procedure. Furthermore, with the proposed model, the high-frequency dynamic response of these kinds of structures can also be obtained in a very simple and inexpensive way, which makes this procedure very useful as a tool for the diagnosis and detection of debonding at its initial stage by monitoring the change in local dynamic characteristics. One very promising active non-destructive evaluation method for local monitoring is impedance-based structural health monitoring (SHM) using piezoelectric ceramic (PZT) sensor-actuators. The electrical impedance of the PZT can be directly related to the mechanical impedance of the host structural component to which the PZT transducers are attached. Since the structural mechanical impedance is affected by the presence of structural damage, comparisons of admittance (the inverse of impedance) spectra at various times during the service period of the structure can be used as a damage indicator. Any change in the spectra might be an indication of a change in the structural integrity. The electrical impedance is measured at high frequencies, which makes this methodology very sensitive to incipient damage in structural systems, as desired for our application.
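The comparison of admittance spectra described above is commonly condensed into a scalar damage metric; a root-mean-square deviation (RMSD) index is one standard choice in impedance-based SHM. The sketch below is illustrative only and uses synthetic spectra (a resonance peak that shifts with damage), not measurements from the thesis.

```python
import numpy as np

def rmsd_damage_index(baseline, current):
    """RMSD damage metric between a baseline admittance spectrum
    (healthy state) and a current measurement on the same frequency
    grid; larger values suggest a greater structural change."""
    baseline = np.asarray(baseline, dtype=float)
    current = np.asarray(current, dtype=float)
    return np.sqrt(np.sum((current - baseline) ** 2) / np.sum(baseline ** 2))

# Synthetic example: a resonance peak that shifts slightly with damage.
freq = np.linspace(20e3, 100e3, 500)                 # Hz, high-frequency band
healthy = 1.0 / (1.0 + ((freq - 60e3) / 5e3) ** 2)   # peak at 60 kHz
damaged = 1.0 / (1.0 + ((freq - 58e3) / 5e3) ** 2)   # peak shifted to 58 kHz

print(rmsd_damage_index(healthy, healthy))   # identical spectra -> 0.0
print(rmsd_damage_index(healthy, damaged))   # positive: spectra differ
```

In a service-life monitoring scheme, the index would be tracked over successive measurement campaigns, with a sustained rise flagging possible debonding at the sensor location.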
A bonded-PZT-FRP spectral beam element approach, based on an extension of the previous discrete crack approach, is implemented for the calculation of the electrical impedance of the PZT transducer bonded to the FRP plates of an RC beam. This approach, in conjunction with experimental measurements from PZT actuator-sensors mounted on the structure, is used to present an updating methodology to quantitatively detect interfacial debonding between an FRP strip and the host RC structure. The updating procedure is solved by using an ensemble particle swarm optimization approach with a bagging algorithm, and the results demonstrate a substantial improvement in the performance and accuracy of the damage detection for the proposed problem. Additionally, an adaptive spectral element meshing strategy has also been developed to detect the damage location from experimental results, which shows the robustness and effectiveness of the proposed method in identifying incipient damage at an early stage. Lastly, multi-objective optimization has been carried out to detect debonding damage in a real-scale FRP-strengthened RC beam by using impedance signatures. A network of PZT sensors is distributed along the beam to construct impedance-based multiple objectives under a gradually induced damage scenario. By combining the spectral element model presented previously with an ensemble multi-objective PSO algorithm, the implemented damage detection process yields satisfactory predictions considering the scale of the structure and the uncertainties involved. The obtained results prove the feasibility and capability of the aforementioned methods and also their potential in real engineering applications.
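The updating step can be pictured as a particle swarm searching for the damage parameters that minimize the misfit between measured and simulated impedance signatures. The sketch below is a minimal single-objective PSO, not the thesis's ensemble/bagging or multi-objective variant; the toy quadratic misfit stands in for the spectral element model, and all parameter names and values are illustrative.

```python
import random

def pso_minimize(misfit, bounds, n_particles=20, n_iters=100,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimization of `misfit` over box `bounds`.

    In the thesis setting, `misfit` would compare measured PZT impedance
    signatures with those simulated by the spectral element model for a
    candidate set of damage parameters; here it is any callable.
    """
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                  # personal best positions
    pbest_val = [misfit(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # global best

    for _ in range(n_iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # inertia + cognitive pull + social pull
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]),
                                bounds[d][1])
            val = misfit(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy misfit with "true" damage parameters (location 0.4, severity 0.25).
toy = lambda x: (x[0] - 0.4) ** 2 + (x[1] - 0.25) ** 2
best, best_val = pso_minimize(toy, [(0.0, 1.0), (0.0, 1.0)])
```

A bagging ensemble, as used in the thesis, would run several such swarms on resampled data and aggregate their estimates; the multi-objective variant would replace the single misfit with one objective per PZT sensor.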