92 results for Application efficiency
Abstract:
This Thesis addresses the efficiency problems of electrical grids from the consumption point of view. In particular, such efficiency is improved by means of aggregated consumption smoothing.
This objective of consumption smoothing entails two major improvements in the use of electrical grids: i) in the short term, a better use of the existing infrastructure, and ii) in the long term, a reduction of the infrastructure required to supply the same energy needs. In addition, this Thesis faces a new energy paradigm in which distributed generation, in particular Photovoltaic (PV) generation, is widespread over the electrical grids. This kind of energy source affects the operation of the grid by increasing its variability, which implies that a high penetration rate of photovoltaic electricity is pernicious for the stability of the electrical grid. This Thesis seeks to smooth the aggregated consumption while considering this energy source. Therefore, not only is the efficiency of the electrical grid improved, but the penetration of photovoltaic electricity into the grid can also be increased. This proposal brings great benefits in the economic, social and environmental fields. The actions that influence the way consumers use electricity in order to achieve energy savings or higher efficiency in energy use are called Demand-Side Management (DSM). This Thesis proposes two different DSM algorithms to meet the aggregated consumption smoothing objective. The difference between the two DSM algorithms lies in the framework in which they take place: the local framework and the grid framework. Depending on the DSM framework, the energy goal and the procedure to reach it are different. In the local framework, the DSM algorithm uses only local information; it does not take into account other consumers or the aggregated consumption of the electrical grid. Although this may differ from the general definition of DSM, it makes sense in local facilities equipped with Distributed Energy Resources (DERs). In this case, DSM is focused on maximizing the use of local energy, reducing the dependence on the grid. The proposed DSM algorithm significantly improves the self-consumption of the local PV generator. Simulated and real experiments show that self-consumption serves as an important energy management strategy, reducing electricity transport and encouraging users to control their energy behavior. However, despite all the advantages of increased self-consumption, it does not contribute to the smoothing of the aggregated consumption. The effects of local facilities on the electrical grid are studied when the DSM algorithm is focused on self-consumption maximization. This approach may have undesirable effects, increasing the variability of the aggregated consumption instead of reducing it. This effect occurs because, in the local framework, the algorithm considers only local variables. The results suggest that coordination between these facilities is required: through this coordination, consumption should be modified by taking into account other elements of the grid and seeking to smooth the aggregated consumption. In the grid framework, the DSM algorithm takes into account both local and grid information. This Thesis develops a self-organized algorithm to manage the consumption of an electrical grid in a distributed way. The goal of this algorithm is the smoothing of the aggregated consumption, as in classical DSM implementations. The distributed approach means that the DSM is performed from the consumers' side without following direct commands issued by a central entity.
Therefore, this Thesis proposes a parallel management structure rather than a hierarchical one as in classical electrical grids. This implies that a coordination mechanism between facilities is required. This Thesis seeks to minimize the amount of information necessary for this coordination. To achieve this objective, two collective coordination techniques have been used: coupled oscillators and swarm intelligence. The combination of these techniques to coordinate a system with the characteristics of the electrical grid is itself a novel approach. Therefore, this coordination objective is a contribution not only to the energy management field, but also to the field of collective systems. Results show that the proposed DSM algorithm reduces the difference between the maxima and minima of the electrical grid consumption proportionally to the amount of energy controlled by the system: the greater the amount of energy controlled by the algorithm, the greater the improvement in the efficiency of the electrical grid. In addition to the advantages resulting from the smoothing of the aggregated consumption, other advantages arise from the distributed approach followed in this Thesis. These advantages are summarized in the following features of the proposed DSM algorithm:
• Robustness: in a centralized system, a failure or breakage of the central node causes a malfunction of the whole system. Managing a grid from a distributed point of view implies that there is no central control node, so a failure in any facility does not affect the overall operation of the grid.
• Data privacy: the use of a distributed topology means that there is no central node holding sensitive information about all consumers. This Thesis goes a step further: the proposed DSM algorithm does not use specific information about consumer behaviors, making the coordination between facilities completely anonymous.
• Scalability: the proposed DSM algorithm operates with any number of facilities, allowing the incorporation of new facilities without affecting its operation.
• Low cost: the proposed DSM algorithm adapts to current grids without any topological requirements. In addition, every facility computes its own management with low computational requirements, so no central node with high computational power is required.
• Quick deployment: the scalability and low-cost features of the proposed DSM algorithm allow a quick deployment; no complex deployment planning is required.
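The abstract describes the coordination mechanism (coupled oscillators plus swarm intelligence) only qualitatively. Below is a minimal sketch, not the thesis's actual algorithm, of how repulsively coupled Kuramoto-style oscillators can desynchronize the duty cycles of deferrable loads so that the aggregated consumption flattens; all names, power figures and parameters are illustrative assumptions.

```python
import numpy as np

# Minimal sketch (assumed model, not the thesis's algorithm): each facility is a
# phase oscillator, and repulsive Kuramoto coupling spreads the phases so that
# deferrable loads fire at different times, smoothing the aggregated consumption.
N, STEPS, DT = 50, 4000, 0.005
K = -2.0                                # negative coupling -> desynchronization
rng = np.random.default_rng(1)
omega = rng.normal(2 * np.pi, 0.1, N)   # natural frequencies (~1 cycle per time unit)
theta = rng.uniform(0, 2 * np.pi, N)    # initial phases

def aggregated_load(theta):
    # A facility draws 1 kW while its phase lies in the first quarter of its cycle.
    return np.sum((theta % (2 * np.pi)) < (np.pi / 2))

loads = []
for _ in range(STEPS):
    # Each oscillator only needs an anonymous mean-field signal, matching the
    # low-information, privacy-preserving coordination described above.
    mean_field = np.sum(np.sin(theta[None, :] - theta[:, None]), axis=1)
    theta += (omega + (K / N) * mean_field) * DT
    loads.append(aggregated_load(theta))

tail = np.array(loads[STEPS // 2:])     # discard the transient
print(f"peak-to-valley spread of aggregated load: {tail.max() - tail.min():.0f} kW")
```

The repulsive sign of K is the key design choice here: the attractive version of the same coupling would synchronize the facilities and sharpen consumption peaks instead of smoothing them.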
Abstract:
We report, for the first time, an intermediate band solar cell implemented with InAs/AlGaAs quantum dots whose photoresponse extends from 250 to ~6000 nm. To our knowledge, this is the broadest quantum efficiency reported to date for a solar cell, and it demonstrates that the intermediate band solar cell is capable of producing photocurrent when illuminated with photons whose energy equals the energy of the lowest band gap. We show experimental evidence indicating that this result is in agreement with the theory of the intermediate band solar cell, according to which the generation-recombination between the intermediate band and the valence band makes this photocurrent detectable. © 2015 American Physical Society
Abstract:
Three-dimensional kinematic analysis provides quantitative assessment of upper limb motion and is used as an outcome measure to evaluate movement disorders. The aim of the present study is to present a set of kinematic metrics for quantifying characteristics of movement performance and the functional status of the subject during the execution of the activity of daily living (ADL) of drinking from a glass, then to apply these metrics to healthy people and to a population with cervical spinal cord injury (SCI), and to analyze the metrics' ability to discriminate between healthy and pathologic people. 19 people participated in the study: 7 subjects with metameric level C6 tetraplegia, 4 subjects with metameric level C7 tetraplegia and 8 healthy subjects. The movement was recorded with a photogrammetry system. The ADL of drinking was divided into a series of clearly identifiable phases to facilitate analysis. Metrics describing the time of the reaching phase, the range of motion of the joints analyzed, and characteristics of movement performance such as the efficiency, accuracy and smoothness of the distal segment and inter-joint coordination were obtained. The performance of the drinking task was more variable in people with SCI than in the control group with respect to the measured metrics. Reaching time was longer in the SCI groups. The proposed metrics showed the capability to discriminate between healthy and pathologic people; relative deficits in efficiency were larger in SCI people than in controls. These metrics can provide useful information in a clinical setting about the quality of the movement performed by healthy and SCI people during functional activities.
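The abstract does not give formulas for its performance metrics. As an illustration, a widely used smoothness measure for the distal segment is the dimensionless normalized jerk of the end-point trajectory; the sketch below (function name and normalization chosen here, not taken from the study) shows how such a metric could be computed from photogrammetry samples.

```python
import numpy as np

def normalized_jerk(positions, dt):
    """Dimensionless jerk of an end-point trajectory (lower = smoother).

    positions: (T, 3) array of marker coordinates sampled every dt seconds.
    One common normalization (an assumption here) is
    sqrt(0.5 * D^5 / L^2 * integral(|jerk|^2 dt)), with D the movement
    duration and L the path length.
    """
    vel = np.gradient(positions, dt, axis=0)
    jerk = np.gradient(np.gradient(vel, dt, axis=0), dt, axis=0)
    duration = dt * (len(positions) - 1)
    path_len = np.sum(np.linalg.norm(np.diff(positions, axis=0), axis=1))
    jerk_sq_int = np.sum(np.sum(jerk**2, axis=1)) * dt
    return np.sqrt(0.5 * duration**5 / path_len**2 * jerk_sq_int)

# Example: a perfectly uniform 1 m reach has zero jerk and scores ~0;
# a shaky trajectory would score higher.
t = np.linspace(0, 1, 200)[:, None]
reach = np.hstack([t, np.zeros_like(t), np.zeros_like(t)])
print(normalized_jerk(reach, t[1, 0] - t[0, 0]))
```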
Abstract:
Conservation tillage and crop rotation have spread during the last decades because they promote several positive effects (increased soil organic content, reduced soil erosion, and enhanced carbon sequestration) (Six et al., 2004). However, these benefits could be partly counterbalanced by negative effects on the release of nitrous oxide (N2O) (Linn and Doran, 1984). There is a lack of data from long-term tillage system studies, particularly in Mediterranean agro-ecosystems. The aim of this study was to evaluate the effects of long-term (>17 years) tillage systems (no tillage (NT), minimum tillage (MT) and conventional tillage (CT)) and of crop rotation (wheat (W)-vetch (V)-barley (B)) versus wheat monoculture (M) on N2O emissions. Additionally, yield-scaled N2O emissions (YSNE) and N uptake efficiency (NUpE) were assessed for each treatment.
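The abstract does not define YSNE or NUpE; the definitions commonly used in the agronomy literature, which this study is assumed to follow, are:

```latex
\[
\mathrm{YSNE} \;=\; \frac{\text{cumulative } \mathrm{N_2O}\text{-N emitted } (\mathrm{g\,N\,ha^{-1}})}{\text{grain yield } (\mathrm{Mg\,ha^{-1}})},
\qquad
\mathrm{NUpE} \;=\; \frac{\text{N uptake by the crop}}{\text{N available (soil mineral N + fertilizer N)}} .
\]
```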
Abstract:
Water stress (WS) slows growth and photosynthesis (An), but most knowledge comes from short-term studies that do not account for the longer-term acclimation processes that are especially relevant in tree species. Using two Eucalyptus species that contrast in drought tolerance, we induced moderate and severe water deficits by withholding water until stomatal conductance (gsw) decreased to two pre-defined values over 24 d; WS was then maintained at the target gsw for 29 d, after which the plants were re-watered. Additionally, we developed new equations to simulate the effect on mesophyll conductance (gm) of accounting for the resistance to refixation of CO2. The diffusive limitations to CO2, dominated by the stomata, were the most important constraints on An. Full recovery of An was reached after re-watering, characterized by a quick recovery of gm and an even higher biochemical capacity, in contrast to the slower recovery of gsw. The acclimation to long-term WS led to decreased mesophyll and biochemical limitations, in contrast to studies in which stress was imposed more rapidly. Finally, we provide evidence that higher gm under WS contributes to higher intrinsic water-use efficiency (iWUE) and reduces leaf oxidative stress, highlighting the importance of gm as a target for breeding/genetic engineering.
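The refixation-corrected equations developed in the study are not reproduced in the abstract; for orientation, the standard definitions that such equations extend are (notation assumed here):

```latex
\[
g_m \;=\; \frac{A_n}{C_i - C_c},
\qquad
\mathrm{iWUE} \;=\; \frac{A_n}{g_{sw}},
\]
```

where Ci and Cc are the sub-stomatal and chloroplastic CO2 mole fractions. A higher gm lets the plant sustain An at a lower gsw, which is consistent with the reported link between gm and higher iWUE under water stress.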
Abstract:
The performance of football teams varies constantly due to the dynamic nature of this sport, while typical performance and its spread can be represented by profiles combining different performance-related variables based on data from multiple matches. The current study aims to use a profiling technique to evaluate and compare the match performance of football teams in the UEFA Champions League, incorporating three situational variables (i.e. strength of team and opponent, match outcome and match location). Match statistics of 72 teams and 496 games across four seasons (2008-09 to 2012-13) of this competition were analysed. Sixteen performance-related events were included: shots, shots on target, shots from open play, shots from set pieces, shots from counter attacks, passes, pass accuracy (%), crosses, through balls, corners, dribbles, possession, aerial success (%), fouls, tackles, and yellow cards. Teams were classified into three levels of strength by a k-cluster analysis. Profiles of overall performance and profiles incorporating the three situational variables for teams of all three levels of strength were set up by presenting the mean, standard deviation, median, and lower and upper quartiles of the counts of each event, to represent typical performances and their spreads. Means were compared using one-way ANOVA and independent-sample t tests (for home and away differences in match location), and were plotted into the same radar charts after unifying all event counts by standardised scores. The established profiles can straightforwardly present the typical performances of football teams of different levels playing in different situations, which could provide detailed references for coaches and analysts to evaluate the performances of upcoming oppositions and of their own teams.
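As a concrete illustration of the profiling technique (simulated numbers and a reduced event set, not the study's data), each event can be summarized by its mean, spread and quartiles, and a new match can then be placed on the profile through standardized scores before radar plotting:

```python
import numpy as np

# Illustrative sketch: build a performance profile for one team level by
# summarizing each match event, then unify counts as z-scores for radar charts.
rng = np.random.default_rng(0)
events = ["shots", "passes", "crosses", "tackles"]          # subset of the 16 events
matches = rng.poisson(lam=[12, 450, 18, 20], size=(40, 4))  # 40 simulated matches

profile = {
    e: {
        "mean": matches[:, i].mean(),
        "sd": matches[:, i].std(ddof=1),
        "median": np.median(matches[:, i]),
        "q1": np.percentile(matches[:, i], 25),
        "q3": np.percentile(matches[:, i], 75),
    }
    for i, e in enumerate(events)
}

# Standardized score of a new match against the profile, ready for plotting.
new_match = np.array([15, 480, 10, 25])
z = (new_match - matches.mean(axis=0)) / matches.std(axis=0, ddof=1)
print(profile["shots"], dict(zip(events, np.round(z, 2))))
```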
Abstract:
Transverse galloping is a type of aeroelastic instability characterized by large-amplitude, low-frequency oscillations normal to the wind that appear in some elastic two-dimensional bluff bodies when subjected to a fluid flow, provided that the flow velocity exceeds a threshold critical value. Such an oscillatory motion is explained by the energy transfer from the flow to the two-dimensional bluff body. The amount of energy that can be extracted depends on the cross section of the galloping prism. Assuming that the Glauert-Den Hartog quasi-static criterion for galloping instability is satisfied in a first approximation, the suitability of a given cross section for energy harvesting is evaluated by analyzing the lateral aerodynamic force coefficient, fitting a function given by a power series in tan α (α being the angle of attack) to available experimental data. In this paper, a fairly large number of simple prisms (triangle, ellipse, biconvex, and rhombus cross sections, as well as D-shaped bodies) is analyzed for suitability as energy harvesters. The influence of the fitting process on the energy harvesting efficiency evaluation is also demonstrated. The analysis shows that the most promising bodies are those with isosceles or approximately isosceles cross sections.
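In the quasi-steady framework the abstract relies on, the lateral force coefficient is typically fitted as a power series in tan α, and the Glauert-Den Hartog criterion requires a positive linear term for galloping to occur (notation assumed here):

```latex
\[
C_y(\alpha) \;\approx\; \sum_{k=1}^{n} a_k \tan^k\alpha ,
\qquad
a_1 \;=\; -\left(\left.\frac{\partial C_l}{\partial\alpha}\right|_{\alpha=0} + C_{d0}\right) \;>\; 0 ,
\]
```

so the fitted coefficients both decide whether a cross section can gallop (a1 > 0) and set how much energy can be harvested, which is why the fitting process itself influences the efficiency evaluation.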
Abstract:
The computational and cooling power demands of enterprise servers are increasing at an unsustainable rate. Understanding the relationship between computational power, temperature, leakage, and cooling power is crucial to enable energy-efficient operation at the server and data center levels. This paper develops empirical models to estimate the contributions of static and dynamic power consumption in enterprise servers for a wide range of workloads, and analyzes the interactions between temperature, leakage, and cooling power for various workload allocation policies. We propose a cooling management policy that minimizes the server energy consumption by setting the optimum fan speed during runtime. Our experimental results on a presently shipping enterprise server demonstrate that including leakage awareness in workload and cooling management provides additional energy savings without any impact on performance.
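As a rough sketch of the leakage-cooling trade-off the paper exploits (all model forms and constants below are assumptions, not the paper's fitted models): leakage power typically grows close to exponentially with temperature while fan power grows with the cube of fan speed, so an intermediate speed minimizes their sum.

```python
import numpy as np

# Minimal sketch under assumed model forms: find the fan speed that minimizes
# leakage power + cooling power for a fixed workload at runtime.
P_DYN = 120.0                                    # dynamic power of the workload (W)
T_AMB = 25.0                                     # ambient temperature (deg C)

def cpu_temp(fan_rpm):
    # Assumed thermal model: thermal resistance falls as airflow increases.
    return T_AMB + P_DYN * 0.5 / (0.3 + fan_rpm / 6000.0)

def leakage(temp_c):
    return 15.0 * np.exp(0.02 * (temp_c - 50.0))  # assumed exponential leakage fit

def fan_power(fan_rpm):
    return 30.0 * (fan_rpm / 6000.0) ** 3         # cubic fan affinity law

speeds = np.linspace(1500, 6000, 200)
total = leakage(cpu_temp(speeds)) + fan_power(speeds)
best = speeds[np.argmin(total)]
print(f"optimum fan speed ~ {best:.0f} RPM, cooling+leakage ~ {total.min():.1f} W")
```

Running the fan faster than this optimum wastes cubic fan power for diminishing leakage savings; running it slower lets leakage dominate, which is the interaction the proposed cooling management policy navigates.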
Abstract:
A study of the assessment of irrigation water use has been carried out in the Spanish irrigation district "Río Adaja", analyzing the water use efficiency and the water productivity indicators for the main crops over three years: 2010-2011, 2011-2012 and 2012-2013. A soil water balance model was applied, taking into account climatic data from the nearby weather station and soil properties. Crop water requirements were calculated by the FAO Penman-Monteith method with the application of the dual crop coefficient and by considering the readily available soil water content (RAW) concept. Likewise, productivity was measured by the following indexes: annual relative irrigation supply (ARIS), annual relative water supply (ARWS), relative rainfall supply (RRS), water productivity (WP), evapotranspiration water productivity (ETWP), and irrigation water productivity (IWP). The results show that in most crops deficit irrigation was applied (ARIS<1) in the first two years; however, the IWP improved. It was higher in 2010-2011, which corresponded to the highest effective precipitation (Pe). In general, the IWP (€·m-3) varied among crops, but crops such as onion (4.14, 1.98 and 2.77, respectively, for the three years), potato (2.79, 1.69 and 1.62), carrot (1.37, 1.70 and 1.80) and barley (1.21, 1.16 and 0.68) showed the higher values. Thus, it is highlighted that they could be included in the cropping pattern, which would maximize the farmer's gross income in the irrigation district.
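For reference, the performance indicators named above are commonly defined as the ratios below (assumed to match the study's usage; IWP is expressed in €·m-3 in the reported values):

```latex
\[
\mathrm{ARIS} = \frac{\text{irrigation water applied}}{\text{irrigation water requirement}},\qquad
\mathrm{ARWS} = \frac{\text{irrigation} + \text{effective rainfall}}{\text{crop water demand}},\qquad
\mathrm{RRS} = \frac{\text{effective rainfall}}{\text{crop water demand}},
\]
\[
\mathrm{WP} = \frac{\text{yield or gross income}}{\text{total water use}},\qquad
\mathrm{ETWP} = \frac{\text{yield or gross income}}{\text{crop evapotranspiration}},\qquad
\mathrm{IWP} = \frac{\text{gross income}}{\text{irrigation water applied }(\mathrm{m^3})} .
\]
```

Under these definitions, ARIS<1 indicates that less irrigation water was applied than the crop required, i.e. deficit irrigation, consistent with the results reported above.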
Abstract:
Melon is traditionally cultivated in fertigated farmlands in the center of Spain with high inputs of water and N fertilizer. Excess N can have a negative impact: from the economic point of view, it can diminish the production and quality of the fruit; from the environmental point of view, N is a very mobile element in the soil and can contaminate groundwater; and from the health point of view, nitrate can accumulate in the fruit pulp while groundwater is a fundamental supply source for human populations. Best management practices are particularly necessary in this region, as many zones have been declared vulnerable to NO3- pollution (Directive 91/676/CEE). During successive years, a melon crop (Cucumis melo L.) was grown under field conditions applying mineral and organic fertilizers under drip irrigation. Different doses of ammonium nitrate were used, as well as compost derived from the wine-distillery industry, which is relevant in this area. The present study reviews the most common N efficiency indexes under the different management options with a view to maximizing yield and minimizing N loss.
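The abstract does not list the specific N efficiency indexes reviewed; commonly used ones in fertigation studies (given here as assumptions, not as the study's selection) include:

```latex
\[
\mathrm{NUE} = \frac{\text{fruit yield}}{\text{N available}},\qquad
\mathrm{AE} = \frac{\text{yield}_{\mathrm{fert}} - \text{yield}_{\mathrm{control}}}{\text{N applied}},\qquad
\mathrm{ANR} = \frac{\text{N uptake}_{\mathrm{fert}} - \text{N uptake}_{\mathrm{control}}}{\text{N applied}} .
\]
```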
Abstract:
Different approaches have arisen aiming to exceed the Shockley-Queisser efficiency limit of solar cells. Particularly, stacking QD layers allows exploiting their unique properties, not only for intermediate-band solar cells or multiple exciton generation, but also for tandem cells in which the tunability of QD properties through the capping layer (CL) could be very useful.
Abstract:
Different approaches have recently arisen aiming to exceed the Shockley-Queisser efficiency limit. Particularly, the use of self-organized quantum dots (QD) has been recently proposed in order to introduce new states within the barrier material, which enhances the sub-band gap absorption, yielding a photocurrent increase. Stacking QD layers allows exploiting their unique properties for intermediate-band solar cells (SC) or tandem cells. In all these cases, tuning the QD properties by modifying the capping layer (CL) can be very useful.
Abstract:
Automated Teller Machines (ATMs) are sensitive self-service systems that require important investments in security and testing. ATM certifications are testing processes for machines that integrate software components from different vendors, and they are performed before deployment for public use. This project originated from the need to optimize the certification process in an ATM manufacturing company. The process identifies compatibility problems between software components through testing. It is composed of a huge number of manual user tasks, which makes the process very expensive and error-prone. Moreover, it is not possible to fully automate the process, as it requires human intervention for manipulating ATM peripherals. This project presented important challenges for the development team. First, this is a critical process, as all the ATM operations rely on the software under test. Second, the context of use of ATM applications is vastly different from that of ordinary software. Third, ATMs' useful lifetime is beyond 15 years, and both new and old models need to be supported. Fourth, the know-how for efficient testing depends on each specialist and is not explicitly documented. Fifth, the huge number of tests and their importance imply the need for user efficiency and accuracy. All these factors led us to conclude that, besides the technical challenges, the usability of the intended software solution was critical for the project's success. This business context is the motivation of this Master Thesis project. Our proposal focused on the development process applied. By combining user-centered design (UCD) with agile development, we ensured both the high priority of usability and the early mitigation of software development risks caused by all the technology constraints. We performed 23 development iterations and were finally able to provide a working solution on time and according to users' expectations. The evaluation of the project was carried out through usability tests, where 4 real users participated in different tests in the real context of use. The results were positive according to different metrics: error rate, efficiency, effectiveness, and user satisfaction. We discuss the problems found, the benefits and the lessons learned in the process. Finally, we measured the expected project benefits by comparing the effort required by the current and the new process (once the new software tool is adopted). The savings corresponded to 40% less effort (man-hours) per certification. Future work includes additional evaluation of product usability in a real scenario (with customers) and the measurement of benefits in terms of quality improvement.
Abstract:
Reliability analyses provide an adequate tool to consider the inherent uncertainties that exist in geotechnical parameters. This dissertation develops a simple linearization-based approach, which uses first or second order approximations, to efficiently evaluate the system reliability of geotechnical problems.
First, reliability methods are employed to analyze the reliability of two tunnel design aspects: face stability and the performance of support systems. Several reliability approaches —the first order reliability method (FORM), the second order reliability method (SORM), the response surface method (RSM) and importance sampling (IS)— are employed, with results showing that the assumed distribution types and correlation structures of the random variables have a significant effect on the reliability results. This emphasizes the importance of an adequate characterization of geotechnical uncertainties for practical applications. Results also show that both FORM and SORM can be used to estimate the reliability of tunnel-support systems, and that SORM can outperform FORM with an acceptable additional computational effort. A linearization approach is then developed to evaluate the system reliability of series geotechnical problems. The approach only needs information provided by FORM: the vector of reliability indices of the limit state functions (LSFs) composing the system, and their correlation matrix. Two common geotechnical problems —the stability of a slope in layered soil and a circular tunnel in rock— are employed to demonstrate the simplicity, accuracy and efficiency of the suggested procedure. Advantages of the linearization approach with respect to alternative computational tools are discussed. It is also found that, if necessary, SORM —which approximates the true LSF better than FORM— can be employed to compute better estimations of the system's reliability. Finally, a new approach using Genetic Algorithms (GAs) is presented to identify the fully specified representative slip surfaces (RSSs) of layered soil slopes; such RSSs are then employed to estimate the system reliability of slopes, using our proposed linearization approach. Three typical benchmark slopes with layered soils are adopted to demonstrate the efficiency, accuracy and robustness of the suggested procedure, and the advantages of the proposed method with respect to alternative methods are discussed. Results show that the proposed approach provides reliability estimates that improve previously published results, emphasizing the importance of finding good RSSs —and, especially, good (probabilistic) critical slip surfaces that might be non-circular— to obtain good estimations of the reliability of soil slope systems.
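Under the linearization described above, each limit state function behaves as a correlated standard normal variable, so for a series system the failure probability follows directly from the FORM outputs (the β vector and correlation matrix R) as P_f,sys ≈ 1 − Φ_m(β; R), with Φ_m the m-dimensional standard normal CDF. A minimal numerical sketch with illustrative values:

```python
import numpy as np
from scipy.stats import multivariate_normal

# Sketch of the linearized series-system estimate: the system survives only if
# every linearized limit state survives, hence P_f,sys ~ 1 - Phi_m(beta; R).
beta = np.array([2.5, 3.0, 2.8])          # FORM reliability indices (illustrative)
R = np.array([[1.0, 0.6, 0.4],            # correlation matrix of the LSFs
              [0.6, 1.0, 0.5],
              [0.4, 0.5, 1.0]])

pf_sys = 1.0 - multivariate_normal(mean=np.zeros(3), cov=R).cdf(beta)
print(f"system failure probability ~ {pf_sys:.2e}")
```

This is what makes the approach cheap: once FORM has been run for each LSF, the system estimate needs only one multivariate normal evaluation, with no further calls to the geotechnical model.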
Abstract:
In today's manufacturing scenario, rising energy prices, increasing ecological awareness, and changing consumer behaviors are driving decision makers to prioritize green manufacturing. The Internet of Things (IoT) paradigm promises to increase the visibility and awareness of energy consumption, thanks to smart sensors and smart meters at the machine and production-line level. Consequently, real-time energy consumption data from the manufacturing processes can be easily collected and then analyzed to improve energy-aware decision-making. This thesis aims to investigate how to utilize the adoption of the Internet of Things at the shop-floor level to increase energy awareness and the energy efficiency of discrete production processes. In order to achieve the main research goal, the research is divided into four sub-objectives and is accomplished during four main phases (i.e., studies). In the first study, by relying on a comprehensive literature review and on experts' insights, the thesis defines energy-efficient production management practices that are enhanced and enabled by IoT technology. The first study also explains the benefits that can be obtained by adopting such management practices. Furthermore, it presents a framework to support the integration of gathered energy data into a company's information technology tools and platforms, with the ultimate goal of highlighting how operational and tactical decision-making processes could leverage such data in order to improve energy efficiency.
Considering the variable energy prices within a day, along with the availability of detailed machine-status energy data, the second study proposes a mathematical model to minimize energy consumption costs for single-machine production scheduling. This model makes decisions at the machine level to determine the launch times for job processing, idle times, when the machine must be shut down, and the appropriate "turning on" and "turning off" times, enabling the operations manager to implement the least expensive production schedule for a production shift. In the third study, the research provides a methodology to help managers implement the IoT at the production system level; it includes an analysis of the current energy management and production systems at the factory, and recommends procedures for implementing the IoT to collect and analyze energy data. The methodology has been validated in a pilot study, where energy KPIs have been used to evaluate energy efficiency. In the fourth study, the goal is to introduce a way to achieve multi-level awareness of the energy consumed during production processes. The proposed method enables discrete factories to specify the energy consumption, CO2 emissions, and cost of the energy consumed at the operation, product and order levels, while considering energy sources and fluctuations in energy prices. The results show that energy-efficient production management practices and decisions can be enhanced and enabled by the IoT. With the outcomes of the thesis, energy managers can approach IoT adoption in a benefit-driven way, by addressing the energy management practices that are closest to the maturity level of the factory, its targets, its production type, etc. The thesis also shows that significant reductions in energy costs can be achieved simply by avoiding the high-energy-price periods of the day. Furthermore, the thesis identifies the level at which energy consumption is monitored (i.e., the machine level), the time interval, and the level of energy data analysis as important factors in finding opportunities to improve energy efficiency. Finally, integrating real-time energy data with production data (when there are high levels of standardization in the production processes and their data) is essential to enable factories to specify the amount and cost of energy consumed, as well as the CO2 emitted, while producing a product, providing valuable information to decision makers at the factory level as well as to consumers and regulators.
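To make the second study's idea concrete, here is a toy sketch (assumed prices, power figures and a brute-force search, far simpler than the thesis's actual model) of picking the cheapest hours of a shift in which to run a single machine:

```python
import itertools
import numpy as np

# Illustrative sketch (not the thesis's model): choose the hours of an 8-hour
# shift in which to run a single machine so that all jobs finish while the
# energy cost, under hourly varying prices, is minimized.
prices = np.array([0.30, 0.28, 0.12, 0.10, 0.11, 0.27, 0.29, 0.13])  # EUR/kWh per hour
P_RUN, P_IDLE = 10.0, 2.0     # assumed kW while processing / idling
JOBS_HOURS = 4                # total processing hours needed in the shift

best_cost, best_plan = float("inf"), None
for run_hours in itertools.combinations(range(len(prices)), JOBS_HOURS):
    plan = ["idle"] * len(prices)
    for h in run_hours:
        plan[h] = "run"
    cost = sum(prices[h] * (P_RUN if s == "run" else P_IDLE)
               for h, s in enumerate(plan))
    if cost < best_cost:
        best_cost, best_plan = cost, plan

print(best_plan, f"{best_cost:.2f} EUR")
```

The search naturally places the processing hours in the cheapest price slots, illustrating the thesis's finding that simply avoiding high-price periods of the day yields significant cost reductions.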