922 results for energy efficiency, performance assessment, retrofit


Relevance:

100.00%

Publisher:

Abstract:

In today's manufacturing scenario, rising energy prices, increasing ecological awareness, and changing consumer behaviors are driving decision makers to prioritize green manufacturing. The Internet of Things (IoT) paradigm promises to increase the visibility and awareness of energy consumption, thanks to smart sensors and smart meters at the machine and production-line level.
Consequently, real-time energy consumption data from the manufacturing processes can be easily collected and then analyzed to improve energy-aware decision-making. This thesis aims to investigate how to utilize the adoption of the Internet of Things at the shop-floor level to increase energy awareness and the energy efficiency of discrete production processes. In order to achieve the main research goal, the research is divided into four sub-objectives and is accomplished in four main phases (i.e., studies). In the first study, relying on a comprehensive literature review and on experts' insights, the thesis defines energy-efficient production management practices that are enhanced and enabled by IoT technology. The first study also explains the benefits that can be obtained by adopting such management practices. Furthermore, it presents a framework to support the integration of gathered energy data into a company's information technology tools and platforms, with the ultimate goal of highlighting how operational and tactical decision-making processes could leverage such data to improve energy efficiency. Considering the variable energy prices within a day, along with the availability of detailed machine-status energy data, the second study proposes a mathematical model to minimize energy consumption costs for single-machine production scheduling. This model makes decisions at the machine level to determine the launch times for job processing, idle times, when the machine must be shut down, and the "turning on" and "turning off" times. It thus enables the operations manager to implement the least expensive production schedule for each production shift.
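The idea behind the second study's cost-minimizing schedule can be illustrated with a toy search over job start times. This is a minimal sketch, not the thesis's mathematical model: the tariff, job durations, and power figures below are invented for illustration, and a realistic formulation would use a MILP solver rather than exhaustive search.

```python
from itertools import product

# Hypothetical intraday tariff (EUR/kWh) for an 8-hour shift, hour by hour.
prices = [0.10, 0.12, 0.30, 0.28, 0.15, 0.09, 0.25, 0.11]

# Hypothetical jobs: (name, duration in hours, machine power in kW).
jobs = [("J1", 2, 40.0), ("J2", 1, 55.0), ("J3", 2, 40.0)]

def schedule_cost(starts):
    """Energy cost of running each job at its chosen start hour."""
    return sum(prices[start + h] * power
               for (name, dur, power), start in zip(jobs, starts)
               for h in range(dur))

def feasible(starts):
    """Jobs must fit in the shift and must not overlap on the single machine."""
    busy = set()
    for (_, dur, _), start in zip(jobs, starts):
        hours = set(range(start, start + dur))
        if start + dur > len(prices) or busy & hours:
            return False
        busy |= hours
    return True

# Exhaustive search is fine at this toy scale; the cheapest schedule pushes
# the jobs into the low-price hours and stays idle during price peaks.
best = min((s for s in product(range(len(prices)), repeat=len(jobs)) if feasible(s)),
           key=schedule_cost)
print(best, round(schedule_cost(best), 2))
```

Note how the optimum leaves the machine idle during the expensive mid-shift hours, which is exactly the "avoid high-price periods" behaviour the thesis reports.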
In the third study, the research provides a methodology to help managers implement the IoT at the production-system level; it includes an analysis of the current energy management and production systems at the factory, and recommends procedures for implementing the IoT to collect and analyze energy data. The methodology has been validated in a pilot study, where energy KPIs were used to evaluate energy efficiency. In the fourth study, the goal is to introduce a way to achieve multi-level awareness of the energy consumed during production processes. The proposed method enables discrete factories to specify the energy consumption, CO2 emissions, and energy cost at the operation, product, and order levels, while considering energy sources and fluctuations in energy prices. The results show that energy-efficient production management practices and decisions can be enhanced and enabled by the IoT. With the outcomes of this thesis, energy managers can approach IoT adoption in a benefit-driven way, by addressing the energy management practices that are closest to the factory's maturity level, targets, production type, etc. The thesis also shows that significant reductions in energy costs can be achieved by avoiding the high-price periods of the day. Furthermore, the thesis identifies the level at which energy consumption is monitored (i.e., the machine level), the time interval, and the level of energy data analysis as important factors in finding opportunities to improve energy efficiency. Finally, integrating real-time energy data with production data (when production processes and their data are highly standardized) is essential to enable factories to specify the amount and cost of energy consumed, as well as the CO2 emitted, while producing a product, providing valuable information to decision makers at the factory level as well as to consumers and regulators.
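The fourth study's multi-level roll-up (operation, product, order) can be sketched as a simple aggregation over tagged meter readings. All names, readings, and the emission factor below are hypothetical, chosen only to show the mechanism.

```python
# Hypothetical meter readings: energy (kWh) per machine operation, tagged with
# the product and manufacturing order it belongs to and the tariff at that time.
readings = [
    {"order": "MO-7", "product": "P-A", "operation": "milling",  "kwh": 3.2, "eur_per_kwh": 0.12},
    {"order": "MO-7", "product": "P-A", "operation": "drilling", "kwh": 1.1, "eur_per_kwh": 0.12},
    {"order": "MO-7", "product": "P-B", "operation": "milling",  "kwh": 2.7, "eur_per_kwh": 0.29},
]

CO2_PER_KWH = 0.25  # assumed grid emission factor, kg CO2 per kWh

def rollup(level):
    """Aggregate energy, cost, and CO2 at one level: operation, product, or order."""
    totals = {}
    for r in readings:
        agg = totals.setdefault(r[level], {"kwh": 0.0, "eur": 0.0, "kg_co2": 0.0})
        agg["kwh"] += r["kwh"]
        agg["eur"] += r["kwh"] * r["eur_per_kwh"]       # price at consumption time
        agg["kg_co2"] += r["kwh"] * CO2_PER_KWH
    return totals

for level in ("operation", "product", "order"):
    print(level, rollup(level))
```

Because each reading carries the tariff in force when the energy was consumed, the cost roll-up automatically reflects intraday price fluctuations, which is the point of integrating real-time energy data with production data.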

Relevance:

100.00%

Publisher:

Abstract:

The HORIZON2020 European programme on Future Smart Cities aims to have 20% of electricity produced from renewable sources. This goal implies the need to enhance wind energy generation, both with large and small wind turbines. Wind energy drastically reduces carbon emissions and avoids the geopolitical risks associated with supply and infrastructure constraints, as well as energy dependence on other regions. Additionally, distributed energy generation (generation at the consumption site) offers significant benefits in terms of high energy efficiency and stimulation of the economy.
The buildings sector represents 40% of the European Union's total energy consumption. Reducing energy consumption in this area is therefore a priority under the "20-20-20" objectives on energy efficiency. Directive 2010/31/EU of the European Parliament and of the Council of 19 May 2010 on the energy performance of buildings considers the installation of renewable energy supply systems in newly designed buildings. Nowadays, there is a lack of knowledge about the optimum building shape for urban wind energy exploitation. The technological field of study of this Thesis is wind energy generation in urban environments; specifically, the improvement of the building-roof shape with a focus on exploiting the wind energy resource. Since the wind flow around buildings is exhaustively investigated in this Thesis using numerical simulation tools, both computational fluid dynamics (CFD) and building aerodynamics are the scientific fields of study. The main objective of this Thesis is to obtain an improved (or optimum) shape of a high-rise building for wind energy exploitation on the roof. To achieve this objective, an analysis of the influence of the building shape on the behaviour of the wind flow over the roof, from the point of view of wind energy exploitation, is carried out using numerical simulation tools (CFD). Additionally, the conventional (prismatic) building shape is analysed, and adequate positions for different kinds of wind turbines are proposed. The compatibility of photovoltaic-solar and wind energies is also analysed for this kind of building. The investigation continues with the building-roof optimization. The methodology for obtaining the optimum high-rise building-roof shape involves the following stages:
- Verification of the results of the building-roof shapes previously studied in the literature. The basic shapes compared are: flat, pitched, shed, vaulted and spherical.
- Analysis of the influence of the roof-edge shape on the wind flow. This task is carried out by comparing the results obtained for the conventional edge shape (simple corner) with a railing, a cantilever and a curved edge.
- Analysis of the roof-wall coupling by testing different variations of a spherical roof on a high-rise building: the spherical roof studied in the literature, a spherical roof geometrically integrated with the walls (square plan), and a spherical roof with a cylindrical wall. The behaviour of the flow over the roof under variations of the incident wind direction is also examined.
- Analysis of the effect of the building aspect ratio on the flow.
- Analysis of the effect of surrounding buildings on the wind flow over the target building roof.
The contributions of the present Thesis can be summarized as follows:
- It is demonstrated that RANS turbulence models obtain better results for the wind flow around buildings using the coefficients proposed by Crespo and those proposed by Bechmann and Sørensen than using the standard ones.
- It is demonstrated that RANS turbulence models can be validated for turbulent kinetic energy by focusing on building roofs.
- A new modification of the Durbin k-ε turbulence model is proposed in order to obtain better agreement of the recirculation distance between CFD simulations and experimental results.
- A linear relationship between the recirculation distance on a flat roof and the constant factor involved in the calculation of the turbulence velocity time scale is demonstrated. This discovery can be used by the research community to improve turbulence modelling in different solvers (OpenFOAM, Fluent, CFX, etc.).
- The compatibility of photovoltaic-solar and wind energies on building roofs is demonstrated. A decrease in turbulence intensity due to the presence of the solar panels is demonstrated.
- Scaling issues are demonstrated between full-scale buildings and reduced-scale wind-tunnel models.
The necessity of respecting the similitude constraints (Reynolds number) is demonstrated. Either full-scale measurements or wind-tunnel experiments using water as the working fluid are needed in order to accurately reproduce the wind flow around buildings, especially when dealing with complex shapes (such as solar panels).
- The most adequate position (roof region) for the different kinds of wind turbines is highlighted, attending to both velocity and turbulence intensity. Wind turbine positioning was investigated for the most common building-roof shapes (flat, pitched, shed, vaulted and spherical).
- The most common roof-edge shapes (simple edge, railing, cantilever and curved) were investigated, and their effect on the wind flow over a high-rise building roof was analysed from the point of view of wind energy exploitation.
- An optimum building-roof shape is proposed for urban wind energy exploitation. This optimization includes: testing the state-of-the-art roof shapes, analysis of the influence of the roof-edge shape on the wind flow, study of the roof-wall coupling, sensitivity analysis of the roof width, exploration of the aspect ratio of the building-roof shape, and investigation of the effect of neighbouring buildings (considering different surrounding heights) on the wind flow over the target building roof. The investigations comprise analysis of velocity, turbulent kinetic energy and turbulence intensity in all cases.
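The turbine-positioning criterion used above (favour high mean speed and low turbulence intensity) can be sketched from the output of a RANS simulation, using the common isotropic estimate TI = sqrt(2k/3)/U derived from turbulent kinetic energy. The probe names and values below are invented for illustration, not results from the thesis.

```python
import math

# Hypothetical CFD probe results above candidate roof positions:
# mean wind speed U (m/s) and turbulent kinetic energy k (m^2/s^2).
probes = {
    "flat roof, centre":      {"U": 7.8, "k": 0.9},
    "flat roof, upwind edge": {"U": 6.1, "k": 2.4},
    "spherical roof, apex":   {"U": 8.6, "k": 0.7},
}

def turbulence_intensity(U, k):
    """TI from TKE, assuming isotropic turbulence: TI = sqrt(2k/3) / U."""
    return math.sqrt(2.0 * k / 3.0) / U

# Rank positions: a horizontal-axis turbine favours high speed and low TI,
# while some vertical-axis designs tolerate higher TI.
for name, p in sorted(probes.items(),
                      key=lambda kv: turbulence_intensity(kv[1]["U"], kv[1]["k"])):
    ti = turbulence_intensity(p["U"], p["k"])
    print(f"{name}: U = {p['U']} m/s, TI = {ti:.2%}")
```

With these assumed numbers the apex region wins on both criteria, which matches the intuition that flow separation near edges raises TI and penalizes those positions.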

Relevance:

100.00%

Publisher:

Abstract:

This doctoral thesis falls within the fields of reconfigurable embedded systems, wireless sensor networks for high-performance applications, and distributed computing. The document focuses on the study of processing alternatives for High-Performance Autonomous Distributed Systems (HPADS), as well as their evolution towards high-resolution processing. The study has been carried out both at the platform level and at the level of the processing architectures within the platform, with the goal of optimizing aspects as relevant as the system's energy efficiency, computing capacity, and fault tolerance. HPADS are closed-loop systems, normally composed of distributed elements, networked or not, with some adaptation capability and enough intelligence to carry out prognosis and/or self-assessment tasks. This class of systems usually forms part of more complex systems called Cyber-Physical Systems (CPSs). CPSs cover an enormous spectrum of applications, ranging from medical applications to manufacturing or aerospace applications, among many others. For the design of this type of system, aspects such as dependability, the definition of computation models, or the use of methodologies and/or tools that help increase scalability and manage complexity are fundamental. The first part of this doctoral thesis focuses on the study of those platforms in the state of the art whose characteristics make them applicable to the field of CPSs, as well as on the proposal of a new high-performance platform design that better fits the new, more demanding requirements of emerging applications.
This first part includes the description, implementation, and validation of the proposed platform, as well as conclusions about its usability and limitations. The main objectives for the design of the proposed platform are the following:
• Study the feasibility of using a RAM-based FPGA as the main processor of the platform, in terms of energy consumption and computing capacity.
• Propose power-management techniques for each stage of the platform's operating profile.
• Propose the inclusion of Dynamic Partial Reconfiguration (DPR) of the FPGA, so that certain parts of the system can be changed at run time without interrupting the rest, and evaluate its applicability in the case of HPADS.
The new applications and scenarios faced by CPSs impose new requirements regarding the bandwidth needed for data processing, acquisition, and communication, together with a clear increase in the complexity of the algorithms used. To meet these new requirements, platforms are migrating from traditional 8-bit uniprocessor systems to hybrid hardware-software systems that include several processors, or several processors plus programmable logic. Among these new architectures, FPGAs and Systems on Chip (SoCs) that include embedded processors and programmable logic provide solutions with very good results in terms of energy consumption, price, computing capacity, and flexibility. These results are even better when the applications have high computing requirements and when working conditions are very likely to change at run time. The platform proposed in this doctoral thesis has been named HiReCookie.
Its architecture includes a RAM-based FPGA as the only processor, and a design compatible with the wireless sensor network platform developed at the Centro de Electrónica Industrial of the Universidad Politécnica de Madrid (CEI-UPM), known as Cookies. This FPGA, a Spartan-6 LX150, was, at the start of this work, the best option in terms of power consumption and amount of integrated resources while also supporting dynamic and partial reconfiguration. It is important to note that, although its power figures are the lowest in this device family, the instantaneous power consumed is still very high for systems that must work distributed, autonomously, and in most cases battery-powered. For this reason, energy-saving strategies must be included in the design to increase the usability and lifetime of the platform. The first strategy implemented consists of dividing the platform into different power islands, so that only the strictly necessary elements remain powered while the rest can be completely switched off. In this way, different operating modes can be combined to greatly optimize energy consumption. Powering off the FPGA to save energy during idle periods implies losing its configuration, since the configuration memory is volatile. To reduce the impact on consumption and on the time required to fully reconfigure the platform once powered on, this work includes a technique for compressing the FPGA configuration file, achieving a reduction of the configuration time and, therefore, of the energy consumed.
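The combined effect of power islands and bitstream compression on per-cycle energy can be sketched with a back-of-the-envelope model. All figures below are illustrative assumptions, not measured HiReCookie values.

```python
# Illustrative power model for one duty cycle of a battery-powered node with
# power islands (assumed numbers, not device measurements).
P_FPGA_ACTIVE = 0.90   # W, FPGA island on and processing
P_SLEEP       = 0.002  # W, only the wake-up/control island powered
T_PROCESS     = 2.0    # s of computation per cycle
T_SLEEP       = 58.0   # s with the FPGA island switched off

# Powering the FPGA off loses its volatile configuration, so every wake-up
# pays a reconfiguration cost; compressing the bitstream shortens it.
T_CONFIG_FULL     = 0.40  # s to load the uncompressed bitstream (assumed)
COMPRESSION_RATIO = 0.55  # compressed size / original size (assumed)
P_CONFIG          = 0.70  # W drawn while configuring (assumed)

def cycle_energy(t_config):
    """Energy (J) for one sleep / configure / process cycle."""
    return (P_SLEEP * T_SLEEP
            + P_CONFIG * t_config
            + P_FPGA_ACTIVE * T_PROCESS)

e_full = cycle_energy(T_CONFIG_FULL)
e_comp = cycle_energy(T_CONFIG_FULL * COMPRESSION_RATIO)
print(f"per cycle: {e_full:.3f} J -> {e_comp:.3f} J with compression")
```

Even with these toy numbers, the reconfiguration term is a visible fraction of the cycle energy, which is why shortening configuration time pays off for duty-cycled nodes that power the FPGA island off.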
Although several of the design requirements can be satisfied by the HiReCookie platform design, further optimization of parameters such as energy consumption, fault tolerance, and processing capacity is needed. This is only possible by exploiting all the possibilities offered by the processing architecture inside the FPGA. Therefore, the second part of this doctoral thesis focuses on the design of a reconfigurable architecture named ARTICo3 (in Spanish, Arquitectura Reconfigurable para el Tratamiento Inteligente de Cómputo, Confiabilidad y Consumo de energía) to improve these parameters through a dynamic use of resources. ARTICo3 is a bus-based processing architecture for RAM-based FPGAs, prepared to support the dynamic management of the FPGA's internal resources at run time thanks to the inclusion of dynamic and partial reconfiguration. Thanks to this partial-reconfiguration capability, the levels of processing capacity, energy consumption, or fault tolerance can be adapted to respond to the demands of the application, the environment, or internal device metrics by adapting the number of resources assigned to each task. This second part of the thesis details the design of the architecture, its implementation on the HiReCookie platform as well as on another FPGA family, and its validation through different tests and demonstrations. The main objectives of the architecture are the following:
• Propose a methodology based on a multi-thread approach, like those proposed by CUDA (Compute Unified Device Architecture) or OpenCL, in which different kernels, or execution units, run on a variable number of hardware accelerators without requiring changes to the application code.
• Propose a design and provide an architecture in which the working conditions change dynamically, depending either on external parameters or on parameters that indicate the state of the platform. These changes in the architecture's working point are made possible by the dynamic and partial reconfiguration of hardware accelerators at run time.
• Exploit the possibilities of concurrent processing, even in a bus-based architecture, by optimizing burst data transactions towards the accelerators.
• Take advantage of the acceleration achieved by purely hardware modules to obtain better energy efficiency.
• Be able to change hardware redundancy levels dynamically, according to the system's needs at run time and without changes to the application code.
• Propose an abstraction layer between the application code and the dynamic use of FPGA resources.
FPGA design allows the use of hardware modules specifically created for a particular application, so much higher performance can be obtained than with general-purpose architectures. Moreover, some FPGAs allow dynamic and partial reconfiguration of certain parts of their logic at run time, which gives the design great flexibility. FPGA vendors offer predefined architectures with the possibility of adding pre-designed blocks to build systems on chip in a more or less direct way. However, the way these hardware modules are organized within the internal architecture, whether statically or dynamically, and the way information is exchanged between them, greatly influence the computing capacity and energy efficiency of the system.
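The "same application code, variable number of accelerators" idea behind the proposed abstraction layer can be mimicked in software. This is a conceptual sketch only: threads stand in for reconfigurable hardware accelerators, and all function names are invented.

```python
from concurrent.futures import ThreadPoolExecutor

# Software stand-in for a hardware accelerator kernel; in an ARTICo3-style
# design this would be a module loaded into a reconfigurable region.
def kernel(block):
    return sum(x * x for x in block)

def run(data, n_accelerators):
    """Split the workload across however many accelerators are loaded.

    The caller's logic never mentions the accelerator count: changing
    n_accelerators (e.g. after reconfiguring more regions) needs no
    application-code changes, which is the scalability-transparency idea.
    """
    chunk = (len(data) + n_accelerators - 1) // n_accelerators
    blocks = [data[i:i + chunk] for i in range(0, len(data), chunk)]
    with ThreadPoolExecutor(max_workers=n_accelerators) as pool:
        return sum(pool.map(kernel, blocks))

data = list(range(1000))
# The result is identical regardless of how many accelerators are used.
assert run(data, 1) == run(data, 4) == sum(x * x for x in data)
print(run(data, 4))
```

In the real architecture the interesting constraint is the shared bus: burst transfers to the accelerators must be scheduled so that the replicas actually overlap their computation, which is what the burst-optimization objective above addresses.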
Likewise, the ability to load hardware modules on demand makes it possible to add redundant blocks that increase the fault-tolerance level of the system. However, the complexity involved in designing dedicated hardware blocks must not be underestimated. Designing a hardware block involves not only the block itself, but also the design of its interfaces and, in some cases, of the software drivers to manage it. Moreover, as more blocks are added, the design space becomes more complex and its programming more difficult. Although most manufacturers offer predefined interfaces, commercial IPs (Intellectual Property cores) and templates to assist system design, in order to exploit the real possibilities of the system it is necessary to build architectures on top of the established ones to ease the use of parallelism and redundancy and to provide an environment that supports dynamic resource management. To provide this kind of support, ARTICo3 works within a solution space formed by three fundamental axes: computation, energy consumption and dependability. Each working point is thus obtained as a trade-off solution among these three parameters. Through dynamic and partial reconfiguration, together with an improvement in data transmission between main memory and the accelerators, a variable number of resources can be devoted to each task over time, which makes the internal resources of the FPGA virtually unlimited. This variation over time in the number of resources per task can be used either to increase the level of parallelism, and hence acceleration, or to increase redundancy, and therefore the fault-tolerance level.
At the same time, using an optimal number of resources for a task improves energy consumption, since either the instantaneous power consumed or the processing time can be reduced. In order to keep complexity within reasonable limits, it is important that changes made in the hardware be fully transparent to the application code. In this respect, different levels of transparency are included:
• Scalability transparency: the resources used by a given task can be modified without any change to the application code.
• Performance transparency: the system increases its performance when the workload increases, without changes to the application code.
• Replication transparency: multiple instances of the same module can be used either to add redundancy or to increase processing capability, all without changing the application code.
• Location transparency: the physical position of the hardware modules is arbitrary with respect to their addressing from the application code.
• Failure transparency: if a hardware module fails, the application code directly receives the correct result thanks to redundancy.
• Concurrency transparency: whether a task is performed by more or fewer blocks is transparent to the code that invokes it.
This PhD thesis therefore contributes along two different lines: first, the design of the HiReCookie platform and, second, the design of the ARTICo3 architecture. The main contributions of this thesis are summarised below.
• HiReCookie architecture, including:
o Compatibility with the Cookies platform to extend its capabilities.
o Division of the architecture into different power islands.
o Implementation of the various low-power modes and node wake-up policies.
o Creation of a compressed FPGA configuration file to reduce the time and energy of the initial configuration.
• Design of the ARTICo3 reconfigurable architecture for SRAM-based FPGAs:
o A model of computation and execution modes inspired by the CUDA model but based on reconfigurable hardware, with a variable number of thread blocks per execution unit.
o A structure to optimise burst data transactions, providing cascaded or parallel data to the different modules, including a majority-voting process and reduction operations.
o An abstraction layer between the main processor, which runs the application code, and the resources assigned to the different tasks.
o An architecture for the reconfigurable hardware modules that keeps the system scalable, with an interface that adds new functionality through simple access to an internal RAM memory.
o Online characterisation of the tasks to provide information to a resource manager, improving operation in terms of energy and processing, also when switching among different fault-tolerance levels.
The document is divided into two main parts comprising a total of five chapters. First, after motivating the need for new platforms to cover new applications, the design of the HiReCookie platform is detailed: its parts, the possibilities for reducing energy consumption, use cases of the platform, and design validation tests. The second part of the document describes the reconfigurable architecture, its implementation on several FPGAs, and validation tests in terms of processing capability and energy consumption, including how these aspects are affected by the chosen fault-tolerance level.
The chapters of the document are the following: Chapter 1 analyses the main objectives, motivation and theoretical background needed to follow the rest of the document. Chapter 2 focuses on the design of the HiReCookie platform and its possibilities for reducing energy consumption. Chapter 3 describes the ARTICo3 reconfigurable architecture. Chapter 4 focuses on the validation tests of the architecture, using the HiReCookie platform for most of the tests; an application example is presented to analyse the operation of the architecture. Chapter 5 concludes this PhD thesis, discussing the conclusions obtained, the original contributions of the work, the results, and future lines of work. ABSTRACT This PhD Thesis is framed within the field of dynamically reconfigurable embedded systems, advanced sensor networks and distributed computing. The document is centred on the study of processing solutions for high-performance autonomous distributed systems (HPADS) as well as their evolution towards High-Performance Computing (HPC) systems. The approach of the study is focused on both platform and processor levels to optimise critical aspects such as computing performance, energy efficiency and fault tolerance. HPADS are considered feedback systems, normally networked and/or distributed, with real-time adaptive and predictive functionality. These systems, as part of more complex systems known as Cyber-Physical Systems (CPSs), can be applied in a wide range of fields such as military, health care, manufacturing, aerospace, etc. For the design of HPADS, high levels of dependability, the definition of suitable models of computation, and the use of methodologies and tools to support scalability and complexity management are required.
The first part of the document studies the different possibilities at platform design level in the state of the art, together with the description, development and validation tests of the platform proposed in this work to cope with the previously mentioned requirements. The main objectives targeted by this platform design are the following:
• Study the feasibility of using SRAM-based FPGAs as the main processor of the platform, in terms of energy consumption and performance, for highly demanding applications.
• Analyse and propose energy management techniques to reduce energy consumption in every stage of the working profile of the platform.
• Provide a solution with dynamic partial and wireless remote HW reconfiguration (DPR), so that certain parts of the FPGA design can be changed at run time and on demand without interrupting the rest of the system.
• Demonstrate the applicability of the platform in different test-bench applications.
In order to select the best approach for the platform design in terms of processing alternatives, a study of the evolution of state-of-the-art platforms is required to analyse how different architectures cope with new, more demanding applications and scenarios: security, mixed-criticality systems for aerospace, multimedia applications, or military environments, among others. In all these scenarios, important changes in the required processing bandwidth or in the complexity of the algorithms used are driving the migration of platforms from single-microprocessor architectures to multiprocessing and heterogeneous solutions with higher instantaneous power consumption but better energy efficiency. Among these solutions, FPGAs and systems-on-chip combining FPGA fabric with dedicated hard processors offer a good trade-off among flexibility, processing performance, energy consumption and price when used in demanding applications where working conditions are very likely to vary over time and highly complex algorithms are required.
The platform architecture proposed in this PhD Thesis is called HiReCookie. It includes an SRAM-based FPGA as the main and only processing unit. The FPGA selected, the Xilinx Spartan-6 LX150, was at the beginning of this work the best choice in terms of amount of resources and power. Although its power levels are among the lowest of this kind of device, they can still be very high for distributed systems that normally run on batteries. For that reason, it is necessary to include different energy-saving mechanisms to increase the usability of the platform. In order to reduce energy consumption, the platform architecture is divided into different power islands, so that only those parts of the system that are strictly needed are powered on, while the rest of the islands can be completely switched off. This allows different low-power modes to be combined to decrease energy consumption. In addition, one of the most important handicaps of SRAM-based FPGAs is that they are not operational at power-up. Recovering the system from a switched-off state therefore requires reloading the FPGA configuration from a non-volatile memory device. For that reason, this PhD Thesis also proposes a methodology to compress the FPGA configuration file in order to reduce the time and energy of the initial configuration process. Although some of the requirements for the design of HPADS are already covered by the design of the HiReCookie platform, it is necessary to continue improving energy efficiency, computing performance and fault tolerance. This is only possible by exploiting all the opportunities provided by the processing architectures configured inside the FPGA. Therefore, the second part of the thesis details the design of the so-called ARTICo3 FPGA architecture to enhance the already intrinsic capabilities of the FPGA. ARTICo3 is a DPR-capable, bus-based virtual architecture for multiple HW acceleration in SRAM-based FPGAs.
The architecture provides support for dynamic resource management in real time. In this way, by using DPR, it is possible to change the levels of computing performance, energy consumption and fault tolerance on demand by increasing or decreasing the amount of resources used by the different tasks. Apart from the detailed design of the architecture and its implementation in different FPGA devices, different validation tests and comparisons are also shown. The main objectives targeted by this FPGA architecture are listed as follows:
• Provide a method based on a multithread approach, such as the kernel executions offered by CUDA (Compute Unified Device Architecture) or OpenCL, where kernels are executed on a variable number of HW accelerators without requiring application code changes.
• Provide an architecture that dynamically adapts its working point, according to either self-measured or external parameters, in terms of energy consumption, fault tolerance and computing performance. Taking advantage of DPR capabilities, the architecture must support the dynamic use of resources in real time.
• Exploit concurrent processing capabilities in a standard bus-based system by optimizing data transactions to and from the HW accelerators.
• Measure the benefit of HW acceleration as a technique to improve processing times and save energy by reducing active times in distributed embedded systems.
• Dynamically change the levels of HW redundancy to adapt fault tolerance in real time.
• Provide HW abstraction from the SW application design.
FPGAs make it possible to design specific HW blocks for every required task to optimise performance, and some of them also support DPR. Apart from the possibilities provided by manufacturers, the way these HW modules are organised, addressed and multiplexed in area and time can improve computing performance and energy consumption.
At the same time, fault-tolerance and security techniques can also be included dynamically using DPR. However, the inherent complexity of designing new HW modules for every application is not negligible. It consists not only of the HW description, but also of the design of drivers and interfaces with the rest of the system, while the design space becomes wider and more complex to define and program. Even though the tools provided by most manufacturers already include predefined bus interfaces, commercial IPs, and templates to ease application prototyping, it is necessary to improve these capabilities. By adding new architectures on top of them, it is possible to take advantage of parallelization and HW redundancy while providing a framework that eases dynamic resource management. ARTICo3 works within a solution space where working points change at run time in a 3D space defined by three different axes: Computation, Consumption, and Fault Tolerance. Every working point is therefore found as a trade-off solution among these three axes. By means of DPR, different accelerators can be multiplexed so that the amount of resources available to any application is virtually unlimited. Taking advantage of DPR capabilities and a novel way of transmitting data to the reconfigurable HW accelerators, it is possible to dedicate a dynamically changing number of resources to a given task in order either to boost computing speed or to add HW redundancy and a voting process that increases fault-tolerance levels. At the same time, using an optimised amount of resources for a given task reduces energy consumption by reducing instantaneous power or computing time. In order to keep complexity within certain limits, it is important that HW changes are transparent to the application code.
Therefore, different levels of transparency are targeted by the system:
• Scalability transparency: a task must be able to expand its resources without changing the system structure or application algorithms.
• Performance transparency: the system must reconfigure itself as the load changes.
• Replication transparency: multiple instances of the same task are loaded to increase reliability and performance.
• Location transparency: resources are accessed by the application code with no knowledge of their location.
• Failure transparency: a task must be completed despite a failure in some components.
• Concurrency transparency: different tasks work concurrently in a way that is transparent to the application code.
As can be seen, the Thesis therefore contributes in two different ways: first, with the design of the HiReCookie platform and, second, with the design of the ARTICo3 architecture. The main contributions of this PhD Thesis are listed below:
• Architecture of the HiReCookie platform, including:
o Compatibility of the processing layer for high-performance applications with the Cookies Wireless Sensor Network platform for fast prototyping and implementation.
o A division of the architecture into power islands.
o All the different low-power modes.
o The creation of the partial-initial bitstream together with the wake-up policies of the node.
• The design of the reconfigurable architecture for SRAM FPGAs, ARTICo3:
o A model of computation and execution modes inspired by CUDA but based on reconfigurable HW, with a dynamic number of thread blocks per kernel.
o A structure to optimise burst data transactions, providing coalesced or parallel data to the HW accelerators, a parallel voting process and a reduction operation.
o The abstraction provided to the host processor with respect to the operation of the kernels in terms of the number of replicas, modes of operation, location in the reconfigurable area and addressing.
o The architecture of the modules representing the thread blocks, which keeps the system scalable: functional units are added simply by adding an access to a BRAM port.
o The online characterisation of the kernels, which provides a scheduler or resource manager with information on energy consumption and processing time when switching among different fault-tolerance levels, as well as on whether a kernel is expected to operate in the memory-bound or compute-bound region.
The document of the Thesis is divided into two main parts with a total of five chapters. First, after motivating the need for new platforms to cover new, more demanding applications, the design of the HiReCookie platform, its parts and several partial tests are detailed. The design of the platform alone does not cover all the needs of these applications. The second part therefore describes the architecture inside the FPGA, called ARTICo3, proposed in this PhD Thesis. The architecture and its implementation are tested in terms of energy consumption and computing performance, showing different possibilities to improve fault tolerance and how this impacts processing energy and time. Chapter 1 presents the main goals of this PhD Thesis and the technology background required to follow the rest of the document. Chapter 2 gives all the details about the design of the FPGA-based platform HiReCookie. Chapter 3 describes the ARTICo3 architecture. Chapter 4 focuses on the validation tests of the ARTICo3 architecture. A proof-of-concept application is explained in which typical kernels related to image processing and encryption algorithms are used, and further experimental analyses are performed using these kernels. Chapter 5 concludes the document with the conclusions obtained, comments on the contributions of the work, and some possible future lines of work.
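The replication-with-voting mechanism described above can be illustrated with a small software sketch. This is not ARTICo3's actual hardware voter, which operates on bus transactions; it is a minimal, hypothetical illustration of a bitwise majority vote across redundant accelerator outputs:

```python
# Illustrative software sketch of the bitwise majority vote that a
# redundant-accelerator scheme performs in hardware. All names here are
# hypothetical; the real voter operates on bus data words at run time.

def majority_vote(replica_outputs):
    """Bitwise majority over an odd number of 32-bit replica outputs."""
    n = len(replica_outputs)
    assert n % 2 == 1, "need an odd number of replicas to break ties"
    result = 0
    for bit in range(32):                      # assume 32-bit data words
        ones = sum((word >> bit) & 1 for word in replica_outputs)
        if ones > n // 2:
            result |= 1 << bit
    return result

# Three replicas; the second suffers a single-bit upset (bit 7 flipped).
golden = 0xDEADBEEF
faulty = golden ^ (1 << 7)
assert majority_vote([golden, faulty, golden]) == golden
```

Raising the replica count from one to three trades processing parallelism for fault masking, which is exactly the computation/consumption/fault-tolerance trade-off the architecture exposes.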

Resumo:

In 2009, President Obama pledged that, by 2020, the United States would achieve reductions in greenhouse gas emissions of 17% from 2005 levels. With the failure of Congress to adopt comprehensive climate legislation in 2010, the feasibility of the pledge was put in doubt. However, we find that the United States is close to reaching this goal: the country is currently on course to achieve reductions of 16.3% from 2005 levels in 2020. Three factors contribute to this outcome: greenhouse gas regulations under the Clean Air Act; secular trends, including changes in relative fuel prices and energy efficiency; and sub-national efforts. Perhaps even more surprising, domestic emissions are probably lower than would have been the case if the Waxman-Markey cap-and-trade proposal had become law in 2010. At this point, however, the United States is expected to fail to meet its financing commitments under the Copenhagen Accord for 2020.

Resumo:

This thesis presents a number of methodological developments that were raised by a real-life application to measuring the efficiency of bank branches. The advent of internet banking and phone banking is changing the role of bank branches from a predominantly transaction-based one to a sales-oriented one. This fact requires the development of new forms of assessing and comparing the branches of a bank. In addition, performance assessment models must also take into account the fact that bank branches are service and for-profit organisations for which providing adequate service quality as well as being profitable are crucial objectives. This study analyses bank branch performance in these new roles in three different areas: their effectiveness in fostering the use of new transaction channels such as the internet and the telephone (transactional efficiency); their effectiveness in increasing sales and their customer base (operational efficiency); and their effectiveness in generating profits without compromising the quality of service (profit efficiency). The chosen methodology for the overall analysis is Data Envelopment Analysis (DEA). The application attempted here required some adaptations of existing DEA models, and indeed some new models, so that certain special features of our data could be handled. These concern the development of models that can account for negative data, models to measure profit efficiency, and models that yield production units with targets nearer to their observed levels than the targets yielded by traditional DEA models. The application of the developed models to a sample of Portuguese bank branches allowed their classification according to the three performance dimensions (transactional, operational and profit efficiency). It also provided useful insights to bank managers regarding how bank branches compare among themselves in terms of performance, and how, in general, the three performance dimensions are connected with one another.
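The envelopment form of the basic (CCR) DEA model that underlies such efficiency scores can be sketched as a linear programme. This is a minimal illustration with invented data, not the thesis's adapted models, which additionally handle negative data, profit efficiency and closer targets:

```python
# Minimal input-oriented CCR DEA model solved as a linear programme.
# Data and variable names are illustrative, not the thesis's branch data.
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """Input-oriented CCR efficiency of unit o.
    X: (m inputs x n units), Y: (s outputs x n units)."""
    m, n = X.shape
    s, _ = Y.shape
    # Decision variables: [theta, lambda_1 .. lambda_n]
    c = np.r_[1.0, np.zeros(n)]                # minimise theta
    # sum_j lambda_j x_ij <= theta x_io  ->  -x_io*theta + X@lambda <= 0
    A_in = np.c_[-X[:, o], X]
    # sum_j lambda_j y_rj >= y_ro        ->  -Y@lambda <= -y_ro
    A_out = np.c_[np.zeros(s), -Y]
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(m), -Y[:, o]],
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.x[0]

# Toy example: two branches, one input (e.g. staff hours), one output.
X = np.array([[2.0, 4.0]])
Y = np.array([[2.0, 2.0]])
effs = [round(ccr_efficiency(X, Y, o), 3) for o in range(2)]
# Unit 0 is efficient; unit 1 uses twice the input for the same output.
assert effs == [1.0, 0.5]
```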

Resumo:

Faced with a future of rising energy costs, there is a need for industry to manage energy more carefully in order to meet its economic objectives. A problem besetting the growth of energy conservation in the UK is that a large proportion of energy consumption is used in a low-intensity manner in organisations where responsibility for energy efficiency is spread over a large number of personnel, each of whom sees only small energy costs. In relation to this problem in the non-energy-intensive industrial sector, an application of an energy management technique known as monitoring and targeting (M & T) has been installed at the Whetstone site of the General Electric Company Limited in an attempt to prove it as a means of motivating line management and personnel to save energy. The energy-saving objective for which the M & T was devised is very specific. During early energy conservation work at the site there had been a change from continuous to intermittent heating, but the maintenance of the strategy was receiving a poor level of commitment from line management and performance was some 5% - 10% less than expected. The M & T is therefore concerned with heat for space heating, for which a heat metering system was required. Metering of the site's high-pressure hot water system posed technical difficulties and expenditure was also limited. This led to an 'in-house' design being installed for a price less than the commercial equivalent. The timespan of the work to achieve an operational heat metering system was 3 years, which meant that energy saving results from the scheme were not observed during the study. If successful, the replication potential lies in the larger non-energy-intensive sites, from which some 30 PT savings could be expected in the UK.
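The core M & T calculation, fitting a degree-day baseline for space heating and then tracking the cumulative deviation (CUSUM) of metered consumption from target, can be sketched as follows. All figures below are invented for illustration, not the Whetstone site's data:

```python
# Simplified sketch of the monitoring-and-targeting (M & T) calculation:
# fit a degree-day baseline for space heating, then track the cumulative
# deviation (CUSUM) of actual consumption from target. Numbers invented.
import numpy as np

degree_days = np.array([200, 180, 120, 60, 40, 190, 170, 110, 65, 45])
energy_gj   = np.array([105,  95,  65, 35, 25, 115, 104,  72, 44, 33])

# Baseline from the first five (well-managed) periods: E = a + b * DD
b, a = np.polyfit(degree_days[:5], energy_gj[:5], 1)
target = a + b * degree_days
cusum = np.cumsum(energy_gj - target)

# A steadily rising CUSUM in the later periods flags slipping performance,
# e.g. an intermittent-heating strategy no longer being maintained.
assert cusum[-1] > cusum[4]
```

Presenting the CUSUM rather than raw consumption is what lets line managers with only small individual energy costs see a sustained drift from target.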

Resumo:

This thesis investigates the modelling of drying processes for the promotion of market-led Demand Side Management (DSM) as applied to the UK Public Electricity Suppliers. A review of DSM in the electricity supply industry is provided, together with a discussion of the relevant drivers supporting market-led DSM and energy services (ES). The potential opportunities for ES in a fully deregulated energy market are outlined. It is suggested that targeted industrial sector energy efficiency schemes offer significant opportunity for long term customer and supplier benefit. On a process level, industrial drying is highlighted as offering significant scope for the application of energy services. Drying is an energy-intensive process used widely throughout industry. The results of an energy survey suggest that 17.7 per cent of total UK industrial energy use derives from drying processes. Comparison with published work indicates that energy use for drying shows an increasing trend against a background of reducing overall industrial energy use. Airless drying is highlighted as offering potential energy saving and production benefits to industry. To this end, a comprehensive review of the novel airless drying technology and its background theory is made. Advantages and disadvantages of airless operation are defined and the limited market penetration of airless drying is identified, as are the key opportunities for energy saving. Limited literature has been found which details the modelling of energy use for airless drying. A review of drying theory and previous modelling work is made in an attempt to model energy consumption for drying processes. The history of drying models is presented as well as a discussion of the different approaches taken and their relative merits. The viability of deriving energy use from empirical drying data is examined. 
Adaptive neuro-fuzzy inference systems (ANFIS) are successfully applied to the modelling of drying rates for three drying technologies, namely convective air, heat pump and airless drying. The ANFIS systems are then integrated into a novel energy services model for the prediction of relative drying times, energy cost and atmospheric carbon dioxide emission levels. The author believes that this work is the first to use fuzzy systems for the modelling of drying performance as an energy services approach to DSM. To gain an insight into the 'real world' use of energy for drying, this thesis presents a unique first-order energy audit of every ceramic sanitaryware manufacturing site in the UK. Previously unknown patterns of energy use are highlighted. Supplementary comments on the timing and use of drying systems are also made. The limitations of such large-scope energy surveys are discussed.
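The kind of model ANFIS trains is a first-order Takagi-Sugeno fuzzy system. The sketch below shows only the inference step, with invented membership and consequent parameters rather than the thesis's fitted drying models:

```python
# Sketch of the first-order Takagi-Sugeno inference that an ANFIS network
# learns: Gaussian memberships weight linear rule consequents. All
# centres, widths and coefficients below are invented, not fitted to
# any real drying data.
import math

def gauss(x, centre, sigma):
    return math.exp(-((x - centre) ** 2) / (2 * sigma ** 2))

def drying_rate(temp_c):
    """Toy two-rule Sugeno model: drying rate (kg/h) vs air temperature."""
    w_low  = gauss(temp_c, 40.0, 15.0)         # rule 1: temperature is LOW
    w_high = gauss(temp_c, 90.0, 15.0)         # rule 2: temperature is HIGH
    f_low  = 0.02 * temp_c + 0.5               # linear consequent of rule 1
    f_high = 0.05 * temp_c + 1.0               # linear consequent of rule 2
    return (w_low * f_low + w_high * f_high) / (w_low + w_high)

# Hotter air dries faster in this toy model.
assert drying_rate(85.0) > drying_rate(45.0)
```

ANFIS training adjusts the membership parameters and the consequent coefficients from data; the inference structure itself stays exactly as above.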

Resumo:

Operation of reverse osmosis (RO) in cyclic batch mode can in principle provide both high energy efficiency and high recovery. However, one factor that causes the performance to be less than ideal is longitudinal dispersion in the RO module. At the end of the batch pressurisation phase it is necessary to purge and then refill the module. During the purge and refill phases, dispersion causes undesirable mixing of concentrated brine with less concentrated feed water, therefore increasing the salt concentration and energy usage in the subsequent pressurisation phase of the cycle. In this study, we quantify the significance of dispersion through theory and experiment. We provide an analysis that relates the energy efficiency of the batch operation to the amount of dispersion. With the help of a model based on the analysis by Taylor, dispersion is quantified according to flow rate. The model is confirmed by experiments with two types of proprietary spiral wound RO modules, using sodium chloride (NaCl) solutions of concentration 1000 to 20,000 ppm. In practice the typical energy usage increases by 4% to 5.5% compared to the ideal case of zero dispersion.
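As a rough illustration of how dispersion grows with flow rate, the classic Taylor-Aris result for laminar flow in a circular tube can be coded directly. The study's spiral-wound module model differs in its geometry; the radius below is an invented, channel-scale value:

```python
# Illustration of the Taylor-Aris result for longitudinal dispersion in
# laminar tube flow: K = D + a^2 u^2 / (48 D). The thesis models a
# spiral-wound RO channel, which differs; this classic tube form just
# shows how dispersion grows with mean flow velocity. Values invented.

D_NACL = 1.5e-9        # molecular diffusivity of NaCl in water, m^2/s
RADIUS = 0.5e-3        # channel-scale radius, m (illustrative)

def taylor_dispersion(u):
    """Effective axial dispersion coefficient for mean velocity u (m/s)."""
    return D_NACL + (RADIUS ** 2) * (u ** 2) / (48 * D_NACL)

coeffs = [taylor_dispersion(u) for u in (0.01, 0.05, 0.10)]
# Dispersion rises quadratically with velocity, which is why the purge
# and refill flow rates matter for the brine/feed mixing penalty.
assert coeffs[0] < coeffs[1] < coeffs[2]
```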

Resumo:

Energy consumption and energy efficiency have become very important issues in optimizing current telecommunications networks as well as in designing future ones. Energy and power metrics are being introduced in order to enable assessment and comparison of the energy consumption and power efficiency of telecommunications networks and other transmission equipment. The standardization of energy and power metrics is a significant ongoing activity aiming to define the baseline energy and power metrics for telecommunications systems. This article provides an up-to-date overview of the energy and power metrics proposed by the various standardization bodies and subsequently adopted worldwide by equipment manufacturers and network operators. © Institut Télécom and Springer-Verlag 2012.
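Two common styles of such metrics are an energy-per-throughput rating (ECR-style, W/Gbps, lower is better) and a useful-work-per-watt ratio (TEER-style, higher is better). The sketch below uses invented device figures; the metric names are only loose labels for the two styles, not exact standardised definitions:

```python
# Sketch of two metric styles discussed above: an ECR-style rating
# (watts per Gbps, lower is better) and a TEER-style ratio (Gbps per
# watt, higher is better). All device figures are invented.

def ecr(power_w, throughput_gbps):
    """ECR-style metric: watts consumed per Gbps of throughput."""
    return power_w / throughput_gbps

def teer(power_w, throughput_gbps):
    """TEER-style metric: Gbps delivered per watt."""
    return throughput_gbps / power_w

router_a = {"power_w": 1200.0, "throughput_gbps": 400.0}
router_b = {"power_w": 1500.0, "throughput_gbps": 640.0}

# Router B draws more power but moves far more traffic, so it is the
# more power-efficient box under either metric style.
assert ecr(**router_b) < ecr(**router_a)
assert teer(**router_b) > teer(**router_a)
```

The example also shows why such metrics must be measured at a defined load point: both ratios change with throughput, which is one reason the standardisation work matters.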

Resumo:

This project is focused on exchanging knowledge between ABS, UKBI and managers of business incubators in the UK. The project relates to the exploitation of the extant knowledge-base on assessing and improving business incubation management practice and performance, and builds on two earlier studies. It addresses a pressing need for assessing and benchmarking business incubation input, process and outcome performance and highlighting best practice. The overarching aim of this project was to obtain proof-of-concept for a business incubation performance assessment and benchmarking online tool, fine-tune it and put it in use by nurturing a community of business incubation management practice, aligned by the resultant tool. The purpose was to offer an appropriate set of measures, in areas identified as critical by relevant research on business incubation performance management and impact, against which:
1. The input and process performance of business incubation management practice can be assessed and benchmarked within the auspices of a community of incubator managers concerned with best practice.
2. The outcome performance and impact of business incubators can be assessed longitudinally.
As such, the developed online assessment framework is geared towards the needs of researchers, policy makers and practitioners concerned with business incubation performance, added value and impact.

Resumo:

Distributed source coding (DSC) has recently been considered as an efficient approach to data compression in wireless sensor networks (WSN). Using this coding method, multiple sensor nodes compress their correlated observations without inter-node communication, so energy and bandwidth can be efficiently saved. In this paper, we investigate a random-binning based DSC scheme for remote source estimation in WSN and its performance in terms of estimated signal-to-distortion ratio (SDR). With the introduction of a detailed power consumption model for wireless sensor communications, we quantitatively analyze the overall network energy consumption of the DSC scheme. We further propose a novel energy-aware transmission protocol for the DSC scheme, which flexibly optimizes the DSC performance in terms of either SDR or energy consumption by adapting the source coding and transmission parameters to the network conditions. Simulations validate the energy efficiency of the proposed adaptive transmission protocol. © 2007 IEEE.

Resumo:

The UK government aims to achieve an 80% reduction in CO2 emissions by 2050, which requires collective efforts across all UK industry sectors. In particular, the housing sector has a large potential to contribute to achieving this aim because the housing sector alone accounts for 27% of total UK CO2 emissions and, furthermore, 87% of the housing responsible for the current 27% of CO2 emissions will still stand in 2050. Therefore, it is essential to improve the energy efficiency of existing housing stock built to low energy-efficiency standards. To achieve this, a whole house needs to be refurbished in a sustainable way, considering the lifetime financial and environmental impacts of the refurbished house. However, the current refurbishment process makes it challenging to generate a financially and environmentally affordable refurbishment solution, due to the highly fragmented nature of refurbishment practice and a lack of knowledge and skills about whole-house refurbishment in the construction industry. In order to generate an affordable refurbishment solution, diverse information regarding the costs and environmental impacts of refurbishment measures and materials should be collected and integrated, in the right sequences, throughout the refurbishment project life cycle among key project stakeholders. Consequently, researchers increasingly study ways of utilizing Building Information Modelling (BIM) to tackle current problems in the construction industry, because BIM can support construction professionals in managing construction projects collaboratively by integrating diverse information, and in determining the best refurbishment solution among various alternatives by calculating the life cycle costs and lifetime CO2 performance of a refurbishment solution. Despite the capability of BIM, the BIM adoption rate in the housing sector is low, at 25%, and the use of BIM for housing refurbishment projects has rarely been studied.
This research therefore aims to develop a BIM framework for formulating a financially and environmentally affordable whole-house refurbishment solution based jointly on the Life Cycle Costing (LCC) and Life Cycle Assessment (LCA) methods. To achieve this aim, a BIM feasibility study was first conducted as a pilot to examine whether BIM is suitable for housing refurbishment, and a BIM framework was then developed using grounded theory, as no precedent research existed. The framework was examined through a hypothetical case study using BIM input data collected from a questionnaire survey of homeowners' preferences for housing refurbishment. Finally, the framework was validated by academics and professionals, who were provided with the framework and a refurbishment solution formulated through it on the basis of the LCC and LCA studies. As a result, BIM was identified as a suitable management tool for housing refurbishment, and development of the framework was found to be timely. The BIM framework, comprising seven project stages, was developed to formulate an affordable refurbishment solution. Through the case study, the Building Regulations were identified as the most affordable energy-efficiency standard, yielding the best LCC and LCA results when applied to a whole-house refurbishment solution. In addition, the Fabric Energy Efficiency Standard (FEES) is recommended when customers are willing to adopt a higher energy standard; up to 60% of CO2 emissions can be cut through whole-house fabric refurbishment to the FEES. The study also revealed limitations and challenges to fully utilizing the BIM framework for housing refurbishment, such as a lack of BIM objects with proper cost and environmental information, limited interoperability between different BIM software packages, and limited LCC and LCA datasets within BIM systems.
Finally, the BIM framework was validated as suitable for housing refurbishment projects; reviewers commented that it would be more practical if a dedicated BIM library for housing refurbishment, with proper LCC and LCA datasets, were developed. This research is expected to provide a systematic way of formulating a refurbishment solution using BIM, and to serve as a basis for further research on BIM in the housing sector aimed at resolving the current limitations and challenges. Future research should enhance the framework by developing a more detailed process map and BIM objects with proper LCC and LCA information.
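The abstract above compares refurbishment options by their life cycle cost. As a hedged illustration only (the thesis's actual cost model, discount rate, study period, and measure data are not given here), a minimal LCC comparison can be sketched as capital outlay plus discounted annual energy costs:

```python
def life_cycle_cost(capital, annual_energy_cost, years, discount_rate,
                    energy_escalation=0.0):
    """Net-present-value life cycle cost: capital outlay plus
    discounted annual energy bills over the study period.
    All figures and rates used below are illustrative, not the thesis's."""
    lcc = capital
    for t in range(1, years + 1):
        cost_t = annual_energy_cost * (1 + energy_escalation) ** t
        lcc += cost_t / (1 + discount_rate) ** t
    return lcc

# Two hypothetical whole-house options over 30 years at a 3.5% discount rate:
basic = life_cycle_cost(capital=8_000, annual_energy_cost=1_200,
                        years=30, discount_rate=0.035)
deep = life_cycle_cost(capital=20_000, annual_energy_cost=500,
                       years=30, discount_rate=0.035)
# With these made-up figures the deep retrofit has the lower 30-year LCC,
# despite its higher capital cost.
```

The same structure extends to LCA by replacing monetary costs with embodied and operational CO2 per year (usually undiscounted).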

Relevância:

100.00%

Publicador:

Resumo:

Data Envelopment Analysis (DEA) is a powerful analytical technique for measuring the relative efficiency of alternatives based on their inputs and outputs. The alternatives can be countries attempting to enhance their productivity and environmental efficiency concurrently. However, when desirable outputs such as productivity increase, undesirable outputs (e.g. carbon emissions) increase as well, making the performance evaluation questionable. In addition, environmental efficiency has traditionally been measured with crisp input and output data (desirable and undesirable), whereas in real-world evaluation problems such data, for example CO2 emissions, are often imprecise or ambiguous. This paper proposes a DEA-based framework in which the input and output data are characterized by symmetrical and asymmetrical fuzzy numbers, allowing the environmental evaluation to be assessed at different levels of certainty. The validity of the proposed model is tested and its usefulness illustrated with two numerical examples. An application to energy efficiency among 23 European Union (EU) member countries further demonstrates the applicability and efficacy of the proposed approach under asymmetric fuzzy numbers.
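For intuition about the relative-efficiency idea: in the special one-input, one-output crisp case, the classical CCR-DEA efficiency score reduces to each unit's output/input ratio normalized by the best ratio observed. The sketch below uses that simplification with made-up country data; the paper's actual model (multiple inputs, undesirable outputs, fuzzy data) instead requires solving a linear program per unit at each level of certainty.

```python
def dea_efficiency_1x1(units):
    """Relative efficiency for one input and one output:
    score = (output/input) / max over all units of (output/input).
    The best performer(s) score exactly 1.0. Data below are
    illustrative, not taken from the paper."""
    ratios = {name: out / inp for name, (inp, out) in units.items()}
    best = max(ratios.values())
    return {name: r / best for name, r in ratios.items()}

# input = primary energy use, output = GDP (hypothetical figures)
countries = {"A": (100.0, 50.0), "B": (80.0, 48.0), "C": (120.0, 48.0)}
scores = dea_efficiency_1x1(countries)
# Country B has the best GDP-per-unit-energy ratio, so it scores 1.0;
# A and C are scored relative to B's frontier.
```

In the fuzzy extension, each input/output becomes an interval at a given α-cut, and the score is computed at the optimistic and pessimistic ends of those intervals.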

Relevância:

100.00%

Publicador:

Resumo:

In this paper, we investigate the hop-distance optimization problem in ad hoc networks where cooperative multi-input single-output (MISO) transmission is adopted to improve the energy efficiency of the network. We first establish an energy model of multihop cooperative MISO transmission. Based on this model, the energy consumption per bit of a network with high node density is minimized numerically by finding an optimal hop distance; to reach the global minimum energy consumption, the hop distance and the number of cooperating nodes around each relay node are jointly optimized. We also compare the performance of multihop cooperative MISO transmission with single-input single-output (SISO) transmission under the same network conditions (high node density), and show that cooperative MISO transmission can become energy-inefficient compared with SISO transmission when the path-loss exponent is high. We then extend the investigation to networks with varied node densities and demonstrate the effectiveness of the joint optimization method in this scenario through simulation. The results show that the optimal settings depend on network conditions such as node density and path-loss exponent, and that the simulation results closely match those obtained from the numerical models in the high-node-density cases.
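The hop-distance trade-off described above can be illustrated with a standard first-order radio energy model: each hop pays a fixed circuit energy plus an amplifier term growing as d^α, so splitting a route of length D into hops of length d costs roughly (D/d)(a + b·d^α) per bit. Short hops waste circuit energy, long hops waste radiated energy, and the minimum sits at d* = (a / (b(α−1)))^(1/α). The constants below are assumptions for illustration, not the paper's model (which also accounts for the cooperating-node count).

```python
def energy_per_bit(d, D=1000.0, a=2e-7, b=1e-12, alpha=3):
    """Total energy per bit over a D-metre route using hops of length d:
    D/d hops, each costing fixed circuit energy a plus amplifier
    energy b * d**alpha. All constants are illustrative."""
    return (D / d) * (a + b * d ** alpha)

# Numerical minimization by grid search over candidate hop distances
candidates = [1 + 0.1 * i for i in range(2000)]   # 1.0 m .. 200.9 m
d_numeric = min(candidates, key=energy_per_bit)

# Closed-form optimum from setting d/dd [(D/d)(a + b d^alpha)] = 0
a, b, alpha = 2e-7, 1e-12, 3
d_analytic = (a / (b * (alpha - 1))) ** (1 / alpha)   # about 46.4 m here
```

Joint optimization in the paper adds a second axis (number of cooperating nodes per relay), turning this one-dimensional search into a search over (d, n) pairs.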

Relevância:

100.00%

Publicador:

Resumo:

Energy efficiency is one of the most important performance measures of a wireless sensor network. In this paper, we show that choosing a transmission scheme appropriate to the channel and network conditions can ensure high energy performance across different transmission environments. Based on the energy models we established for both cooperative and non-cooperative communications, we investigate the efficiency of different transmission schemes in terms of energy consumption per bit. Cooperative transmission schemes are shown to outperform non-cooperative schemes in energy efficiency under severe channel conditions and when the source-destination distance is in the medium or long range, whereas non-cooperative transmission is more energy efficient for short-range transmission. For cooperative schemes, the number of transmission branches and the number of relays per branch can also be selected to adapt to variations in the transmission environment, so that the total energy consumption is minimized.
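The distance-dependent crossover described above can be sketched with a toy model: the non-cooperative (SISO) scheme pays circuit energy at one transmitter and one receiver, while a cooperative scheme pays extra circuit and local data-exchange energy but needs less radiated energy thanks to an assumed diversity gain. All constants, the node count, and the gain factor below are illustrative assumptions, not the paper's model.

```python
ALPHA = 3          # path-loss exponent (assumed severe channel)
E_ELEC = 50e-9     # per-bit circuit energy per active radio, J (assumed)
EPS = 1e-12        # amplifier coefficient, J/bit/m^ALPHA (assumed)

def e_siso(d):
    """Non-cooperative: one TX radio + one RX radio plus radiated energy."""
    return 2 * E_ELEC + EPS * d ** ALPHA

def e_coop(d, n=2, gain=4.0, e_local=50e-9):
    """Cooperative: n transmitting radios plus one receiver, a local
    data-exchange overhead among the cooperating nodes, and radiated
    energy reduced by an assumed diversity gain."""
    return (n + 1) * E_ELEC + e_local + EPS * d ** ALPHA / gain

# At short range the cooperative overheads dominate, so SISO wins;
# at long range the reduced radiated energy lets cooperation win.
short, long_ = 20.0, 100.0
```

With these numbers the per-bit cost curves cross at roughly 50 m; in the paper that crossover point is what shifts with channel severity and source-destination distance.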