976 results for Dynamically changing electrode processes


Relevance:

100.00%

Publisher:

Abstract:

This PhD thesis falls within the field of reconfigurable embedded systems, wireless sensor networks for high-performance applications, and distributed computing. The document focuses on the study of processing alternatives for High-Performance Autonomous Distributed Systems (HPADS), as well as on their evolution towards high-performance computing. The study has been carried out both at the platform level and at the level of the processing architectures within the platform, with the goal of optimising aspects as relevant as the energy efficiency, computing performance, and fault tolerance of the system. HPADS are feedback systems, normally formed by distributed elements, networked or not, with a certain adaptation capability and with enough intelligence to carry out prognosis and/or self-assessment tasks. This class of systems usually forms part of more complex systems called Cyber-Physical Systems (CPSs). CPSs cover an enormous spectrum of applications, ranging from medical applications to manufacturing or aerospace applications, among many others. For the design of this type of system, aspects such as dependability, the definition of models of computation, and the use of methodologies and/or tools that help increase scalability and manage complexity are fundamental. The first part of this PhD thesis focuses on the study of the state-of-the-art platforms whose characteristics make them applicable to the field of CPSs, and on the proposal of a new high-performance platform design that better fits the new, more demanding requirements of emerging applications. This first part includes the description, implementation, and validation of the proposed platform, as well as conclusions about its usability and limitations. The main objectives of the proposed platform design are the following: • Study the feasibility of using a RAM-based FPGA as the main processor of the platform in terms of energy consumption and computing performance. • Propose power management techniques for every stage of the platform's working profile. • Propose the inclusion of Dynamic Partial Reconfiguration (DPR) of the FPGA, so that certain parts of the system can be changed at run time without interrupting the rest, and evaluate its applicability to HPADS. The new applications and scenarios that CPSs face impose new requirements on the bandwidth needed for data processing, acquisition, and communication, together with a clear increase in the complexity of the algorithms used. To meet these new requirements, platforms are migrating from traditional 8-bit single-processor systems to hybrid hardware-software systems that include several processors, or several processors plus programmable logic.
Among these new architectures, FPGAs and Systems on Chip (SoCs) that combine embedded processors and programmable logic provide solutions with very good results in terms of energy consumption, price, computing performance, and flexibility. These results are even better when the applications have high computing requirements and the working conditions are very likely to change at run time. The platform proposed in this PhD thesis is called HiReCookie. Its architecture includes a RAM-based FPGA as the only processor, together with a design compatible with the wireless sensor network platform developed at the Centro de Electrónica Industrial of the Universidad Politécnica de Madrid (CEI-UPM), known as Cookies. The selected FPGA, a Spartan-6 LX150, was, at the time this work started, the best option in terms of power consumption and amount of integrated resources while also supporting dynamic partial reconfiguration. It is important to note that although its power figures are the lowest of this family of devices, the instantaneous power consumed is still very high for systems that must operate distributed, autonomously, and in most cases powered by batteries. For this reason, energy-saving strategies must be included in the design to increase the usability and lifetime of the platform. The first strategy implemented consists of dividing the platform into different power islands, so that only those elements that are strictly necessary remain powered while the rest can be completely switched off. In this way, different operating modes can be combined to greatly optimise energy consumption. Switching off the FPGA to save energy during idle periods implies losing its configuration, since the configuration memory is volatile. To reduce the impact on energy and time of fully reconfiguring the platform after power-up, this work includes a technique for compressing the FPGA configuration file, reducing the configuration time and hence the energy consumed. Although several of the design requirements can be satisfied by the HiReCookie platform design, parameters such as energy consumption, fault tolerance, and processing capability must be optimised further. This is only possible by exploiting all the possibilities offered by the processing architecture inside the FPGA. Therefore, the second part of this PhD thesis focuses on the design of a reconfigurable architecture called ARTICo3 (Arquitectura Reconfigurable para el Tratamiento Inteligente de Cómputo, Confiabilidad y Consumo de energía; a reconfigurable architecture for the intelligent management of computation, dependability, and energy consumption) that improves these parameters through the dynamic use of resources. ARTICo3 is a bus-based processing architecture for RAM-based FPGAs, prepared to support dynamic management of the internal FPGA resources at run time thanks to the inclusion of dynamic partial reconfiguration.
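Looping back to the configuration-file compression mentioned above: the abstract does not say which algorithm the thesis uses, so the sketch below only illustrates the general idea with run-length encoding of the zero runs that typically dominate SRAM FPGA bitstreams. The encoding format and file name are assumptions for the example.

```python
# Illustrative only: run-length encode zero runs in a configuration file.
# The abstract states that the bitstream is compressed to cut configuration
# time and energy, but not which algorithm is used; this format is assumed.

def compress_bitstream(data: bytes) -> bytes:
    """Collapse runs of 0x00 bytes into (0x00 marker, run length) pairs."""
    out = bytearray()
    i = 0
    while i < len(data):
        if data[i] == 0x00:
            run = 1
            while i + run < len(data) and data[i + run] == 0x00 and run < 255:
                run += 1
            out += bytes((0x00, run))     # marker byte + run length
            i += run
        else:
            out.append(data[i])
            i += 1
    return bytes(out)

if __name__ == "__main__":
    raw = open("top.bit", "rb").read()    # hypothetical bitstream file
    packed = compress_bitstream(raw)
    print(f"{len(raw)} -> {len(packed)} bytes "
          f"({100 * len(packed) / len(raw):.1f}% of original)")
```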
Thanks to this partial reconfiguration capability, the levels of computing performance, energy consumption, or fault tolerance can be adapted to the demands of the application, the environment, or internal device metrics by adapting the number of resources assigned to each task. This second part of the thesis details the design of the architecture, its implementation on the HiReCookie platform as well as on another FPGA family, and its validation by means of different tests and demonstrations. The main objectives of the architecture are the following: • Propose a methodology based on a multithread approach, such as those proposed by CUDA (Compute Unified Device Architecture) or OpenCL, in which different kernels, or execution units, run on a variable number of hardware accelerators without requiring changes in the application code. • Propose a design and provide an architecture in which the working conditions change dynamically depending either on external parameters or on parameters that reflect the state of the platform, these changes in the working point being made possible by the dynamic partial reconfiguration of hardware accelerators at run time. • Exploit the possibilities of concurrent processing, even in a bus-based architecture, by optimising burst data transactions to the accelerators. • Take advantage of the acceleration achieved by purely hardware modules to obtain better energy efficiency. • Be able to change hardware redundancy levels dynamically according to system needs, in real time and without changes to the application code. • Propose an abstraction layer between the application code and the dynamic use of FPGA resources. FPGA design allows the use of hardware modules specifically created for a given application, making it possible to obtain much higher performance than with general-purpose architectures. In addition, some FPGAs allow dynamic partial reconfiguration of certain parts of their logic at run time, which gives the design great flexibility. FPGA vendors offer predefined architectures with the possibility of adding pre-designed blocks to build systems on chip in a more or less direct way. However, the way these hardware modules are organised within the internal architecture, whether statically or dynamically, and the way information is exchanged between them strongly influence the computing performance and energy efficiency of the system. Likewise, the ability to load hardware modules on demand makes it possible to add redundant blocks that increase the fault-tolerance level of the system. However, the complexity involved in designing dedicated hardware blocks must not be underestimated: designing a hardware block means designing not only the block itself, but also its interfaces and, in some cases, the software drivers to manage it. Moreover, as more blocks are added, the design space becomes more complex and its programming more difficult.
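To make the CUDA/OpenCL analogy in the first objective concrete, here is a hedged, software-only sketch of how such a runtime could dispatch one kernel call over a variable number of accelerator slots, either splitting the workload or replicating it behind a majority vote. The names and the plain-Python "accelerators" are illustrative assumptions, not the real ARTICo3 interface, which does this with DPR-loaded hardware modules.

```python
# Illustrative sketch (not the real ARTICo3 API): one kernel call, with a
# run-time choice of how many accelerator slots to use and whether they
# parallelise the workload or replicate it for majority voting.
from collections import Counter

def run_kernel(kernel, data, n_slots, mode="parallel"):
    if mode == "parallel":
        # Split the input across slots: more slots -> shorter processing time.
        chunk = (len(data) + n_slots - 1) // n_slots
        parts = [data[i:i + chunk] for i in range(0, len(data), chunk)]
        return [y for part in map(kernel, parts) for y in part]
    if mode == "redundant":
        # Replicate the whole workload: more slots -> higher fault tolerance,
        # with a majority vote over the replicated results.
        replicas = [tuple(kernel(data)) for _ in range(n_slots)]
        winner, _ = Counter(replicas).most_common(1)[0]
        return list(winner)
    raise ValueError(mode)

# The application code stays the same when slot count or mode changes:
double = lambda xs: [2 * x for x in xs]
print(run_kernel(double, list(range(8)), n_slots=4, mode="parallel"))
print(run_kernel(double, list(range(8)), n_slots=3, mode="redundant"))
```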
Although most vendors offer predefined interfaces, commercial IPs (Intellectual Property cores), and templates to help with system design, in order to exploit the real possibilities of the system it is necessary to build architectures on top of the established ones that ease the use of parallelism and redundancy and provide an environment supporting dynamic resource management. To provide this kind of support, ARTICo3 works within a solution space defined by three fundamental axes: computation, energy consumption, and dependability. Each working point is thus obtained as a trade-off among these three parameters. By means of dynamic partial reconfiguration and an improved data transmission scheme between main memory and the accelerators, a variable number of resources can be devoted to each task over time, which makes the internal FPGA resources virtually unlimited. This time-varying number of resources per task can be used either to increase the level of parallelism, and hence acceleration, or to increase redundancy, and therefore the fault-tolerance level. At the same time, using an optimal number of resources for a task improves energy consumption, since either the instantaneous power or the processing time can be reduced. To keep complexity within reasonable limits, it is important that changes made in the hardware are completely transparent to the application code. In this respect, different levels of transparency are included: • Scalability transparency: the resources used by a given task can be modified without any change to the application code. • Performance transparency: the system will increase its performance when the workload increases, without changes to the application code. • Replication transparency: multiple instances of the same module can be used either to add redundancy or to increase processing capability, all without changes to the application code. • Location transparency: the physical position of the hardware modules is irrelevant to their addressing from the application code. • Failure transparency: if a hardware module fails, the application code directly receives the correct result thanks to redundancy. • Concurrency transparency: whether a task is carried out by more or fewer blocks is transparent to the code that invokes it. This PhD thesis therefore contributes along two different lines: first, the design of the HiReCookie platform and, second, the design of the ARTICo3 architecture. The main contributions of this thesis are summarised below. • HiReCookie architecture, including: o Compatibility with the Cookies platform to extend its capabilities. o Division of the architecture into different power islands. o Implementation of the various low-power modes and node wake-up policies. o Creation of a compressed FPGA configuration file to reduce the time and energy of the initial configuration.
• Design of the ARTICo3 reconfigurable architecture for RAM-based FPGAs, including: o A model of computation and execution modes inspired by CUDA but based on reconfigurable hardware, with a variable number of thread blocks per execution unit. o A structure to optimise burst data transactions, providing cascaded or parallel data to the different modules and including a majority-voting process and reduction operations. o An abstraction layer between the main processor, which runs the application code, and the resources assigned to the different tasks. o An architecture for the reconfigurable hardware modules that keeps the system scalable, the interface for new functionality being a simple access to an internal RAM memory. o Online characterisation of the tasks to provide information to a resource management module, improving operation in terms of energy and processing when also switching among different fault-tolerance levels. The document is divided into two main parts comprising a total of five chapters. First, after motivating the need for new platforms to cover new applications, the design of the HiReCookie platform is detailed, including its parts, the possibilities to lower energy consumption, use cases of the platform, and design validation tests. The second part of the document describes the reconfigurable architecture, its implementation on several FPGAs, and validation tests in terms of computing performance and energy consumption, including how these aspects are affected by the selected fault-tolerance level. The chapters of the document are as follows: Chapter 1 analyses the main objectives, motivation, and theoretical background needed to follow the rest of the document. Chapter 2 focuses on the design of the HiReCookie platform and its possibilities to reduce energy consumption. Chapter 3 describes the ARTICo3 reconfigurable architecture. Chapter 4 focuses on the validation tests of the architecture, using the HiReCookie platform for most of them; an application example is shown to analyse the operation of the architecture. Chapter 5 concludes this PhD thesis with the conclusions obtained, the original contributions of the work, results, and future lines. ABSTRACT This PhD Thesis is framed within the field of dynamically reconfigurable embedded systems, advanced sensor networks and distributed computing. The document is centred on the study of processing solutions for high-performance autonomous distributed systems (HPADS) as well as their evolution towards High Performance Computing (HPC) systems. The approach of the study is focused on both platform and processor levels to optimise critical aspects such as computing performance, energy efficiency and fault tolerance. HPADS are considered feedback systems, normally networked and/or distributed, with real-time adaptive and predictive functionality. These systems, as part of more complex systems known as Cyber-Physical Systems (CPSs), can be applied in a wide range of fields such as military, health care, manufacturing, aerospace, etc. For the design of HPADS, high levels of dependability, the definition of suitable models of computation, and the use of methodologies and tools to support scalability and complexity management are required.
The first part of the document studies the different possibilities at platform design level in the state of the art, together with the description, development and validation tests of the platform proposed in this work to cope with the previously mentioned requirements. The main objectives targeted by this platform design are the following: • Study the feasibility of using SRAM-based FPGAs as the main processor of the platform in terms of energy consumption and performance for highly demanding applications. • Analyse and propose energy management techniques to reduce energy consumption in every stage of the working profile of the platform. • Provide a solution with dynamic partial and wireless remote HW reconfiguration (DPR) to be able to change certain parts of the FPGA design at run time and on demand without interrupting the rest of the system. • Demonstrate the applicability of the platform in different test-bench applications. In order to select the best approach for the platform design in terms of processing alternatives, a study of the evolution of the state-of-the-art platforms is required to analyse how different architectures cope with new, more demanding applications and scenarios: security, mixed-critical systems for aerospace, multimedia applications, or military environments, among others. In all these scenarios, important changes in the required processing bandwidth or in the complexity of the algorithms used are provoking the migration of platforms from single-microprocessor architectures to multiprocessing and heterogeneous solutions with higher instantaneous power consumption but higher energy efficiency. Within these solutions, FPGAs and Systems on Chip including FPGA fabric and dedicated hard processors offer a good trade-off among flexibility, processing performance, energy consumption and price when they are used in demanding applications where working conditions are very likely to vary over time and highly complex algorithms are required. The platform architecture proposed in this PhD Thesis is called HiReCookie. It includes an SRAM-based FPGA as the main and only processing unit. The FPGA selected, the Xilinx Spartan-6 LX150, was at the beginning of this work the best choice in terms of amount of resources and power. Although its power levels are the lowest of this kind of device, they can still be very high for distributed systems that normally work powered by batteries. For that reason, it is necessary to include different energy-saving possibilities to increase the usability of the platform. In order to reduce energy consumption, the platform architecture is divided into different power islands so that only those parts of the system that are strictly needed are powered on, while the rest of the islands can be completely switched off. This allows a combination of different low-power modes to decrease energy consumption. In addition, one of the most important handicaps of SRAM-based FPGAs is that they are not alive at power-up. Therefore, recovering the system from a switched-off state requires reloading the FPGA configuration from a non-volatile memory device. For that reason, this PhD Thesis also proposes a methodology to compress the FPGA configuration file in order to reduce time and energy during the initial configuration process. Although some of the requirements for the design of HPADS are already covered by the design of the HiReCookie platform, it is necessary to continue improving energy efficiency, computing performance and fault tolerance.
This is only possible by exploiting all the opportunities provided by the processing architectures configured inside the FPGA. Therefore, the second part of the thesis details the design of the so-called ARTICo3 FPGA architecture to enhance the already intrinsic capabilities of the FPGA. ARTICo3 is a DPR-capable, bus-based virtual architecture for multiple HW acceleration in SRAM-based FPGAs. The architecture provides support for dynamic resource management in real time. In this way, by using DPR, it is possible to change the levels of computing performance, energy consumption and fault tolerance on demand by increasing or decreasing the amount of resources used by the different tasks. Apart from the detailed design of the architecture and its implementation in different FPGA devices, different validation tests and comparisons are also shown. The main objectives targeted by this FPGA architecture are listed as follows: • Provide a method based on a multithread approach, such as the CUDA (Compute Unified Device Architecture) or OpenCL kernel execution models, where kernels are executed on a variable number of HW accelerators without requiring application code changes. • Provide an architecture to dynamically adapt working points according to either self-measured or external parameters in terms of energy consumption, fault tolerance and computing performance. Taking advantage of DPR capabilities, the architecture must provide support for a dynamic use of resources in real time. • Exploit concurrent processing capabilities in a standard bus-based system by optimising data transactions to and from HW accelerators. • Measure the advantage of HW acceleration as a technique to boost performance, improving processing times and saving energy by reducing active times in distributed embedded systems. • Dynamically change the levels of HW redundancy to adapt fault tolerance in real time. • Provide HW abstraction from SW application design. FPGAs give the possibility of designing specific HW blocks for every required task to optimise performance, while some of them also offer DPR. Apart from the possibilities provided by manufacturers, the way these HW modules are organised, addressed and multiplexed in area and time can improve computing performance and energy consumption. At the same time, fault-tolerance and security techniques can also be dynamically included using DPR. However, the inherent complexity of designing new HW modules for every application is not negligible: it consists not only of the HW description, but also of the design of drivers and interfaces with the rest of the system, while the design space becomes wider and more complex to define and program. Even though the tools provided by the majority of manufacturers already include predefined bus interfaces, commercial IPs, and templates to ease application prototyping, it is necessary to improve these capabilities. By adding new architectures on top of them, it is possible to take advantage of parallelisation and HW redundancy while providing a framework to ease the use of dynamic resource management. ARTICo3 works within a solution space where working points change at run time in a 3D space defined by three different axes: Computation, Consumption, and Fault Tolerance. Therefore, every working point is found as a trade-off solution among these three axes. By means of DPR, different accelerators can be multiplexed so that the amount of available resources for any application is virtually unlimited.
Taking advantage of DPR capabilities and a novel way of transmitting data to the reconfigurable HW accelerators, it is possible to dedicate a dynamically changing number of resources to a given task in order to either boost computing speed or add HW redundancy and a voting process to increase fault-tolerance levels. At the same time, using an optimised amount of resources for a given task reduces energy consumption by reducing instant power or computing time. In order to keep complexity levels under certain limits, it is important that HW changes are transparent to the application code. Therefore, different levels of transparency are targeted by the system: • Scalability transparency: a task must be able to expand its resources without changing the system structure or application algorithms. • Performance transparency: the system must reconfigure itself as the load changes. • Replication transparency: multiple instances of the same task are loaded to increase reliability and performance. • Location transparency: resources are accessed with no knowledge of their location by the application code. • Failure transparency: tasks must be completed despite failures in some components. • Concurrency transparency: different tasks will work concurrently in a way that is transparent to the application code. Therefore, as can be seen, this Thesis contributes in two different ways: first, with the design of the HiReCookie platform and, second, with the design of the ARTICo3 architecture. The main contributions of this PhD Thesis are listed below: • Architecture of the HiReCookie platform, including: o Compatibility of the processing layer for high-performance applications with the Cookies Wireless Sensor Network platform for fast prototyping and implementation. o A division of the architecture into power islands. o All the different low-power modes. o The creation of the partial-initial bitstream together with the wake-up policies of the node. • The design of the reconfigurable architecture for SRAM FPGAs, ARTICo3: o A model of computation and execution modes inspired by CUDA but based on reconfigurable HW with a dynamic number of thread blocks per kernel. o A structure to optimise burst data transactions, providing coalesced or parallel data to HW accelerators, a parallel voting process and a reduction operation. o The abstraction provided to the host processor with respect to the operation of the kernels in terms of number of replicas, modes of operation, location in the reconfigurable area and addressing. o The architecture of the modules representing the thread blocks, which makes the system scalable by adding functional units with only an access to a BRAM port. o The online characterisation of the kernels to provide information to a scheduler or resource manager in terms of energy consumption and processing time when changing among different fault-tolerance levels, as well as whether a kernel is expected to work in the memory-bound or compute-bound regions. The document of the Thesis is divided into two main parts with a total of five chapters. First, after motivating the need for new platforms to cover new, more demanding applications, the design of the HiReCookie platform, its parts and several partial tests are detailed. The design of the platform alone does not cover all the needs of these applications. Therefore, the second part describes the architecture inside the FPGA, called ARTICo3, proposed in this PhD Thesis.
The architecture and its implementation are tested in terms of energy consumption and computing performance, showing different possibilities to improve fault tolerance and how these impact energy and processing time. Chapter 1 shows the main goals of this PhD Thesis and the technology background required to follow the rest of the document. Chapter 2 shows all the details about the design of the FPGA-based platform HiReCookie. Chapter 3 describes the ARTICo3 architecture. Chapter 4 is focused on the validation tests of the ARTICo3 architecture. An application for proof of concept is explained where typical kernels related to image processing and encryption algorithms are used, and further experimental analyses are performed using these kernels. Chapter 5 concludes the document with the conclusions obtained, comments on the contributions of the work, and some possible future lines of work.
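As a toy illustration of the three-axis working-point idea described in this abstract, the following sketch enumerates ways to spend a fixed budget of accelerator slots on one kernel, trading parallel copies against redundant replicas. The linear speed-up model and the selection rule are my assumptions, not the thesis's.

```python
# Toy selector over the (computation, consumption, fault tolerance) space:
# spend `slots` accelerator slots on one kernel, trading parallel copies
# (speed) against replicas (voting-based fault tolerance). Linear speed-up
# and the selection rule are simplifying assumptions.

def working_points(slots):
    for replicas in (1, 2, 3):          # 3 replicas allow a majority vote
        parallel = slots // replicas    # slots left for parallelism
        if parallel >= 1:
            yield {"replicas": replicas,
                   "parallel": parallel,
                   "speedup": parallel,             # assumed linear
                   "tolerates_fault": replicas >= 3}

def pick(slots, need_fault_tolerance):
    ok = [wp for wp in working_points(slots)
          if wp["tolerates_fault"] or not need_fault_tolerance]
    return max(ok, key=lambda wp: wp["speedup"])

print(pick(8, need_fault_tolerance=False))  # all slots go to parallelism
print(pick(8, need_fault_tolerance=True))   # triplicated, 2-way parallel
```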

Relevance:

100.00%

Publisher:

Abstract:

Interactions between stimulus-induced oscillations (35-80 Hz) and stimulus-locked nonoscillatory responses were investigated in the visual cortex areas 17 and 18 of anaesthetized cats. A single square-wave luminance grating was used as a visual stimulus during simultaneous recordings from up to seven electrodes. The stimulus movement consisted of a superposition of a smooth movement with a sequence of dynamically changing accelerations. Responses of local groups of neurons at each electrode were studied on the basis of multiple unit activity and local slow field potentials (13-120 Hz). Oscillatory and stimulus-locked components were extracted from multiple unit activity and local slow field potentials and quantified by a combination of temporal and spectral correlation methods. We found fast stimulus-locked components primarily evoked by sudden stimulus accelerations, whereas oscillatory components (35-80 Hz) were induced during slow smooth movements. Oscillations were gradually reduced in amplitude and finally fully suppressed with increasing amplitudes of fast stimulus-locked components. It is argued that suppression of oscillations is necessary to prevent confusion during sequential processing of stationary and fast changing retinal images.
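The temporal and spectral correlation methods are only named in this abstract; as a generic illustration of how the induced oscillatory component in the 35-80 Hz band can be quantified from a recorded signal, the sketch below estimates band power with Welch's method. The sampling rate and the synthetic signal are assumptions.

```python
# Generic illustration (not the paper's exact method): estimate induced
# gamma-band (35-80 Hz) power in a field potential with Welch's method.
import numpy as np
from scipy.signal import welch

fs = 1000                                    # Hz, assumed sampling rate
t = np.arange(0, 2.0, 1 / fs)
lfp = np.sin(2 * np.pi * 55 * t) + 0.5 * np.random.randn(t.size)  # synthetic

f, pxx = welch(lfp, fs=fs, nperseg=512)
band = (f >= 35) & (f <= 80)
band_power = np.trapz(pxx[band], f[band])    # integrate PSD over 35-80 Hz
print(f"35-80 Hz band power: {band_power:.3f}")
```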

Relevance:

100.00%

Publisher:

Abstract:

Workflow technology has delivered effectively for a large class of business processes, providing the requisite control and monitoring functions. At the same time, this technology has been the target of much criticism due to its limited ability to cope with dynamically changing business conditions, which require business processes to be adapted frequently, and/or its limited ability to model business processes that cannot be entirely predefined. Requirements indicate the need for generic solutions where a balance between process control and flexibility may be achieved. In this paper we present a framework that allows the workflow to execute on the basis of a partially specified model, where the full specification of the model is made at runtime and may be unique to each instance. This framework is based on the notion of process constraints. Whereas process constraints may be specified for any aspect of the workflow, such as structural, temporal, etc., our focus in this paper is on a constraint which allows dynamic selection of activities for inclusion in a given instance. We call these cardinality constraints, and this paper will discuss their specification and validation requirements.
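As a minimal reading of the cardinality constraint idea (not the paper's formal notation), the sketch below checks that the activities dynamically selected for a workflow instance stay within a declared pool and within minimum and maximum counts; all names are illustrative.

```python
# Minimal sketch of a cardinality constraint: a workflow instance may
# include between min_n and max_n activities drawn from an allowed pool.
# Class and field names are illustrative, not the paper's notation.
from dataclasses import dataclass

@dataclass(frozen=True)
class CardinalityConstraint:
    pool: frozenset   # activities that may be selected for an instance
    min_n: int
    max_n: int

    def validate(self, selected: set) -> bool:
        return (selected <= self.pool
                and self.min_n <= len(selected) <= self.max_n)

review = CardinalityConstraint(
    pool=frozenset({"peer_review", "legal_check", "finance_check"}),
    min_n=1, max_n=2)

print(review.validate({"peer_review"}))                   # True
print(review.validate({"peer_review", "legal_check",
                       "finance_check"}))                 # False: too many
```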

Relevance:

100.00%

Publisher:

Abstract:

Draglines are massive machines commonly used in surface mining to strip overburden, revealing the targeted minerals for extraction. Automating some or all of the phases of operation of these machines offers the potential for significant productivity and maintenance benefits. The mining industry has a history of slow uptake of automation systems due to the challenges contained in the harsh, complex, three-dimensional (3D), dynamically changing mine operating environment. Robotics as a discipline is finally starting to gain acceptance as a technology with the potential to assist mining operations. This article examines the evolution of robotic technologies applied to draglines in the form of machine-embedded intelligent systems. Results from this work include a production trial in which 250,000 tons of material was moved autonomously, experiments demonstrating steps towards full autonomy, and tele-excavation experiments in which a dragline in Australia was tasked by an operator in the United States.

Relevance:

100.00%

Publisher:

Abstract:

Visual adaptation regulates contrast sensitivity during dynamically changing light conditions (Crawford, 1947; Hecht, Haig & Chase, 1937). These adaptation dynamics are unknown under dim (mesopic) light levels, when the rod (R) and long- (L), medium- (M) and short-wavelength (S) cone photoreceptor classes contribute to vision via interactions in shared non-opponent magnocellular (MC), chromatically opponent parvocellular (PC) and koniocellular (KC) visual pathways (Dacey, 2000). This study investigated the time course of adaptation and the post-receptoral pathways mediating receptor-specific rod and cone interactions under mesopic illumination. A four-primary photostimulator (Pokorny, Smithson & Quinlan, 2004) was used to independently control the activity of the four photoreceptor classes and their post-receptoral visual pathways in human observers. In the first experiment, the contrast sensitivity and time course of visual adaptation under mesopic illumination were measured for receptoral (L, S, R) and post-receptoral (LMS, LMSR, L-M) stimuli. An incremental (rapid-ON) sawtooth conditioning pulse biased detection to ON cells within the visual pathways, and sensitivity was assayed relative to pulse onset using a briefly presented incremental probe that did not alter adaptation. Cone-cone interactions with luminance stimuli (L cone, LMS, LMSR) reduced sensitivity by 15% and the time course of recovery was 25 ± 5 ms⁻¹ (μ ± SEM). For PC-mediated (+L-M) chromatic stimuli the sensitivity loss was smaller (8%) than for luminance stimuli and recovery was slower (μ = 2.95 ± 0.05 ms⁻¹), while KC-mediated (S cone) chromatic stimuli showed a high sensitivity loss (38%) and the slowest recovery time (1.6 ± 0.2 ms⁻¹). Rod-rod interactions increased sensitivity by 20% and the time course of recovery was 0.7 ± 0.2 ms⁻¹ (μ ± SD). Compared to these interaction types, rod-cone interactions reduced sensitivity to a lesser degree (5%) and showed the fastest recovery (μ = 43 ± 7 ms⁻¹). In the second experiment, rod contribution to the magnocellular, parvocellular and koniocellular post-receptoral pathways under mesopic illumination was determined as a function of incremental stimulus duration and waveform (rectangular; sawtooth) using a rod colour match procedure (Cao, Pokorny & Smith, 2005; Cao, Pokorny, Smith & Zele, 2008a). For a 30% rod increment, a cone match required a decrease in [L/(L+M)] and an increase in [L+M] and [S/(L+M)], giving a greenish-blue and brighter appearance for probe durations of 75 ms or longer. Probe durations of less than 75 ms showed an increase in [L+M] and no change in chromaticity [L/(L+M) or S/(L+M)], suggesting mediation by the MC pathway only for short-duration rod stimuli. We advance previous studies by determining the time course and nature of photoreceptor-specific retinal interactions in the three post-receptoral pathways under mesopic illumination. In the first experiment, the time course of adaptation for ON-cell processing was determined, revealing opponent-cell facilitation in the chromatic PC and KC pathways. The rod-rod and rod-cone data identify previously unknown interaction types that act to maintain contrast sensitivity during dynamically changing light conditions and improve the speed of light adaptation under mesopic light levels. The second experiment determined the degree of rod contribution to the inferred post-receptoral pathways as a function of the temporal properties of the rod signal.
The understanding of the mechanisms underlying interactions between photoreceptors under mesopic illumination has implications for the study of retinal disease. Visual function has been shown to be reduced in persons with age-related maculopathy (ARM) risk genotypes prior to clinical signs of the disease (Feigl, Cao, Morris & Zele, 2011), and disturbances in rod-mediated adaptation have been shown in early phases of ARM (Dimitrov, Guymer, Zele, Anderson & Vingrys, 2008; Feigl, Brown, Lovie-Kitchin & Swann, 2005). Also, the understanding of retinal networks controlling vision enables the development of international lighting standards to optimise visual performance under dim light levels (e.g. work-place environments, transportation).

Relevance:

100.00%

Publisher:

Abstract:

The ability to estimate asset reliability and the probability of failure is critical to reducing maintenance costs, operation downtime, and safety hazards. Predicting the survival time and the probability of failure in future time is an indispensable requirement in prognostics and asset health management. In traditional reliability models, the lifetime of an asset is estimated using failure event data alone; however, statistically sufficient failure event data are often difficult to attain in real-life situations due to poor data management, effective preventive maintenance, and the small population of identical assets in use. Condition indicators and operating environment indicators are two types of covariate data that are normally obtained in addition to failure event and suspended data. These data contain significant information about the state and health of an asset: condition indicators reflect the level of degradation of assets, while operating environment indicators accelerate or decelerate the lifetime of assets. When these data are available, an alternative approach to traditional reliability analysis is the modelling of condition indicators and operating environment indicators and their failure-generating mechanisms using a covariate-based hazard model. The literature review indicates that a number of covariate-based hazard models have been developed, all of them based on the principles of the Proportional Hazard Model (PHM). However, most of these models have not attracted much attention in the field of machinery prognostics. Moreover, due to the prominence of PHM, attempts at developing alternative models have, to some extent, been stifled, although a number of alternatives to PHM have been suggested. The existing covariate-based hazard models neglect to fully utilise the three types of asset health information (failure event data (i.e. observed and/or suspended), condition data, and operating environment data) in one model for more effective hazard and reliability predictions. In addition, current research shows that condition indicators and operating environment indicators have different characteristics and are non-homogeneous covariate data: condition indicators act as response variables (or dependent variables), whereas operating environment indicators act as explanatory variables (or independent variables). However, these non-homogeneous covariate data were modelled in the same way for hazard prediction in the existing covariate-based hazard models. The related and yet more imperative question is how both of these indicators should be effectively modelled and integrated into a covariate-based hazard model. This work presents a new approach for addressing these challenges. The new covariate-based hazard model, termed the Explicit Hazard Model (EHM), explicitly and effectively incorporates all three types of available asset health information into the modelling of hazard and reliability predictions, and also captures the relationship between actual asset health and condition measurements as well as operating environment measurements. The theoretical development of the model and its parameter estimation method are demonstrated in this work. EHM assumes that the baseline hazard is a function of both time and the condition indicators.
Condition indicators provide information about the health condition of an asset; they therefore update and reform the baseline hazard of EHM according to the health state of the asset at a given time t. Some examples of condition indicators are the vibration of rotating machinery, the level of metal particles in engine oil analysis, and wear in a component, to name but a few. Operating environment indicators in this model are failure accelerators and/or decelerators that are included in the covariate function of EHM and may increase or decrease the value of the hazard relative to the baseline hazard. These indicators are caused by the environment in which an asset operates and have not been explicitly captured by the condition indicators (e.g. loads, environmental stresses, and other dynamically changing environment factors). While the effects of operating environment indicators may be nil in EHM, condition indicators are always present, because they are observed and measured as long as an asset is operational. EHM has several advantages over the existing covariate-based hazard models. One is that the model utilises three different sources of asset health data (i.e. population characteristics, condition indicators, and operating environment indicators) to effectively predict hazard and reliability. Another is that EHM explicitly investigates the relationship between the condition and operating environment indicators associated with the hazard of an asset. Furthermore, the proportionality assumption, from which most covariate-based hazard models suffer, does not exist in EHM. Depending on the sample size of failure/suspension times, EHM is extended into two forms: semi-parametric and non-parametric. The semi-parametric EHM assumes a specified lifetime distribution (i.e. the Weibull distribution) in the form of the baseline hazard. However, in many industry applications, due to the sparse failure event data of assets, the analysis of such data often involves complex distributional shapes about which little is known. Therefore, to avoid the restrictive assumption of the semi-parametric EHM of a specified lifetime distribution for failure event histories, the non-parametric EHM, which is a distribution-free model, has been developed. The development of EHM into these two forms is another merit of the model. A case study was conducted using laboratory experiment data to validate the practicality of both the semi-parametric and non-parametric EHMs. The performance of the newly developed models is appraised by comparing their estimated results with those of the other existing covariate-based hazard models. The comparison demonstrated that both the semi-parametric and non-parametric EHMs outperform the existing covariate-based hazard models. Future research directions are also identified, regarding the new parameter estimation method in the case of time-dependent covariate effects and missing data, the application of EHM to both repairable and non-repairable systems using field data, and a decision support model linked to the estimated reliability results.
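The abstract describes EHM only qualitatively: the baseline hazard depends on both time and the condition indicators, while operating environment indicators enter a covariate function that accelerates or decelerates failure. One plausible algebraic reading, written here purely as an assumption consistent with that description (the thesis gives the actual form), is:

```latex
% A hedged reading of the EHM structure described above: the baseline
% hazard h_0 depends on time t and the condition indicators z_c(t),
% while operating environment indicators z_e(t) act multiplicatively
% as failure accelerators/decelerators.
h\bigl(t \mid z_c(t), z_e(t)\bigr)
    = h_0\bigl(t, z_c(t)\bigr)\,\exp\bigl(\gamma^{\top} z_e(t)\bigr)
% Semi-parametric EHM: h_0 takes a Weibull form in t.
% Non-parametric EHM: h_0 is left distribution-free.
```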

Relevance:

100.00%

Publisher:

Abstract:

This paper proposes an organisational effectiveness model that applies the theoretical frameworks of shared leadership, appreciative inquiry, and knowledge creation. Like many libraries worldwide, the Auraria Library technical services department struggled to establish an efficient and effective workflow for electronic resources management. The library purchased an Electronic Resource Management System, as the literature suggests; however, this technology-enabled system did not resolve the workflow issues. The Auraria Library case study demonstrates that a technical services division can successfully reorganize personnel, reassign responsibilities, and measure outcomes within an evidence-based shared leadership culture, which invites and enables participants to identify problems and create solutions amidst a dynamically changing electronic resources environment.

Relevance:

100.00%

Publisher:

Abstract:

There remains a lack of published empirical data on the substantive outcomes of higher learning and the establishment of quality processes for determining them. Studies that do exist are nationally focused with available rankings of institutions reflecting neither the quality of teaching and learning nor the diversity of institutions. This paper describes two studies in which Associate Deans from Australian higher education institutions and focus groups of management and academics identify current issues and practices in the design, development and implementation of processes for assuring the quality of learning and teaching. Results indicate that developing a perspective on graduate attributes and mapping assessments to measure outcomes across an entire program necessitates knowledge creation and new inclusive processes. Common elements supporting consistently superior outcomes included: inclusivity; embedded graduate attributes; consistent and appropriate assessment; digital collection mechanisms; and systematic analysis of outcomes used in program review. Quality measures for assuring learning are proliferating nationally and changing the processes, systems and culture of higher education as a result.

Relevance:

100.00%

Publisher:

Abstract:

In this paper, a wind energy conversion system interfaced to the grid using a dual inverter is proposed. One of the two inverters in the dual inverter is connected to the rectified output of the wind generator, while the other is directly connected to a battery energy storage system (BESS). This approach eliminates the need for an additional dc-dc converter and thus reduces power losses, cost, and complexity. The main issue with this scheme is uncorrelated dynamic changes in the dc-link voltages, which result in unevenly distributed space vectors. A detailed analysis of the effects of these variations is presented in this paper. Furthermore, a modified modulation technique is proposed to produce undistorted currents even in the presence of unevenly distributed and dynamically changing space vectors. An analysis of the battery charging/discharging process and of maximum power point tracking of the wind turbine generator is also presented. Simulation and experimental results are presented to verify the efficacy of the proposed modulation technique and battery charging/discharging process.
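To see why uncorrelated dc-link voltages skew the space-vector pattern, the hedged sketch below enumerates the combined vectors of a dual (open-end-winding) inverter, v = v1 − v2, where each two-level inverter contributes one of six active vectors or a zero vector scaled by its own dc-link voltage. This simplified vector model is assumed for illustration and is not the paper's derivation.

```python
# Hedged sketch: combined space vectors of a dual (open-end-winding)
# inverter, v = v1 - v2. Each two-level inverter contributes one of six
# active vectors or a zero vector, scaled by its own dc-link voltage.
import cmath

def inverter_vectors(vdc):
    return [vdc * cmath.exp(1j * cmath.pi / 3 * k) for k in range(6)] + [0j]

def combined(v1, v2):
    # Round to merge floating-point duplicates of coinciding vectors.
    return {complex(round((a - b).real, 6), round((a - b).imag, 6))
            for a in inverter_vectors(v1) for b in inverter_vectors(v2)}

# Equal dc links: many combinations coincide, giving the regular
# 19-position three-level pattern. A non-integer, drifting ratio: far
# fewer coincidences, so more, irregularly spaced vector positions.
print(len(combined(1.0, 1.0)))    # 19
print(len(combined(1.0, 0.73)))   # 49
```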

Relevance:

100.00%

Publisher:

Abstract:

Additional converters that are used to interface energy storage devices incur power losses as well as increased system cost and complexity. The need for additional converters can be eliminated if the grid-side inverter can itself be effectively used as the interface for energy storage. This paper therefore proposes a technique whereby the grid-side inverter can also be used as an interface to connect a supercapacitor energy storage for wind energy conversion systems. The proposed grid-side inverter is formed by cascading a three-level inverter and a two-level inverter through a coupling transformer. The three-level inverter is the main inverter, and it is powered by the rectified output of the wind-turbine-coupled AC generator, while the two-level auxiliary inverter is connected to the supercapacitor bank that is used to compensate for short-term power fluctuations. Novel modulation and control techniques are proposed to address the problems associated with the non-integer and dynamically changing dc-link voltage ratio, which is caused by the random nature of wind. Simulation results are presented to verify the efficacy of the proposed system in suppressing short-term wind power fluctuations.

Relevance:

100.00%

Publisher:

Abstract:

This paper presents a novel concept of an Energy Storage System (ESS) interfacing with the grid-side inverter in wind energy conversion systems. The inverter system used here is formed by cascading a two-level inverter and a three-level inverter through a coupling transformer. The constituent inverters are named the “main inverter” and the “auxiliary inverter” respectively. The main inverter is connected to the rectified output of the wind generator, while the auxiliary inverter is attached to a Battery Energy Storage System (BESS). The BESS ensures constant power dispatch to the grid irrespective of changes in wind conditions. Furthermore, this unique combination of BESS and inverter eliminates the need for additional dc-dc converters. Novel modulation and control techniques are proposed to address the problem of the non-integer, dynamically changing dc-link voltage ratio, which is due to random wind changes. Strategies used to handle auxiliary-inverter dc-link voltage imbalances and controllers used to charge the batteries at different rates are explained in detail. Simulation results are presented to verify the efficacy of the proposed modulation and control techniques in suppressing random wind power fluctuations.

Relevance:

100.00%

Publisher:

Abstract:

This paper explores a new breed of energy storage system interfacing for grid-connected photovoltaic (PV) systems. The proposed system uses the popular dual inverter topology, in which one inverter is supplied by a PV cell array and the other by a Battery Energy Storage System (BESS). The resulting conversion structure is controlled in such a way that both demand matching and maximum power point tracking of the PV cell array are performed simultaneously. This dual inverter topology can produce 2-, 3-, 4- and 5-level inverter voltage waveforms at dc-link voltage ratios of 0:1, 1:1, 2:1 and 3:2 respectively. Since the output voltages of the PV cell array and the battery are uncorrelated and change dynamically, the resulting dc-link voltage ratio can take non-integer values as well. These non-integer dc-link voltage ratios produce unevenly distributed space vectors. Therefore, the main issue with the proposed system is the generation of undistorted currents even in the presence of unevenly distributed and dynamically changing space vectors. A modified space vector modulation method is proposed in this paper to address this issue, and its efficacy is demonstrated by simulation results. The ability of the proposed system to act as an active power source is also verified.
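The quoted 2-, 3-, 4- and 5-level waveforms at ratios 0:1, 1:1, 2:1 and 3:2 can be sanity-checked with a deliberately simplified per-phase model, assumed here for illustration: each two-level bridge contributes 0 or its dc-link voltage, and the load sees the difference.

```python
# Count distinct per-phase voltage levels of the dual inverter for a given
# dc-link pair: each two-level bridge contributes 0 or Vdc and the load
# sees the difference. A deliberately simplified per-phase view.
def levels(v1, v2):
    return sorted({a - b for a in (0, v1) for b in (0, v2)})

for v1, v2 in [(0, 1), (1, 1), (2, 1), (3, 2)]:
    lv = levels(v1, v2)
    print(f"ratio {v1}:{v2} -> {len(lv)} levels: {lv}")
# -> 2, 3, 4 and 5 levels, matching the ratios quoted in the abstract
```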

Relevance:

100.00%

Publisher:

Abstract:

This paper presents a new direct integration scheme for supercapacitors that are used to mitigate short-term power fluctuations in wind power systems. The proposed scheme uses the popular dual inverter topology for grid connection as well as for interfacing a supercapacitor bank. The dual inverter system is formed by cascading two two-level inverters, named the “main inverter” and the “auxiliary inverter”. The main inverter is powered by the rectified output of a wind-turbine-coupled permanent magnet synchronous generator. The auxiliary inverter is directly connected to a supercapacitor bank. This approach eliminates the need for an interfacing dc-dc converter for the supercapacitor bank and thus improves the overall efficiency. A detailed analysis of the effects of the non-integer, dynamically changing voltage ratio is presented. The concept of an integrated boost rectifier is used to carry out Maximum Power Point Tracking (MPPT) of the wind turbine generator. Another novel feature of this paper is the power reference adjuster, which effectively manages capacitor charging and discharging at extreme conditions. Simulation results are presented to verify the efficacy of the proposed system in suppressing short-term wind power fluctuations.
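The power reference adjuster is only named in this abstract; the sketch below shows one hedged interpretation of such a block: it de-rates the power absorbed or delivered by the supercapacitor as its voltage approaches the limits of its allowed window. The thresholds, the linear de-rating, and all names are assumptions.

```python
# Hedged interpretation of a power reference adjuster: the supercapacitor
# absorbs short-term wind fluctuations (p_wind - p_grid_ref), but the
# reference is adjusted so the capacitor voltage stays inside its window.
# Thresholds and the linear de-rating are assumptions for illustration.

def adjust_grid_reference(p_wind, p_grid_ref, v_sc, v_min=200.0, v_max=400.0):
    p_sc = p_wind - p_grid_ref          # >0 charges the supercapacitor
    margin = 0.05 * (v_max - v_min)     # start de-rating near the limits
    if p_sc > 0 and v_sc > v_max - margin:       # nearly full: stop charging
        p_sc *= max(0.0, (v_max - v_sc) / margin)
    elif p_sc < 0 and v_sc < v_min + margin:     # nearly empty: stop discharging
        p_sc *= max(0.0, (v_sc - v_min) / margin)
    return p_wind - p_sc                # adjusted power sent to the grid

# Near the upper voltage limit, most of the surplus goes to the grid instead:
print(adjust_grid_reference(p_wind=500.0, p_grid_ref=450.0, v_sc=398.0))  # 490.0
```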

Relevance:

100.00%

Publisher:

Abstract:

Purpose: This paper investigates the interrelationships between knowledge integration (KI), product innovation and capability development to enhance our understanding of how firms can develop capability at the firm level, which in turn enhances their performance. One of the critical underlying mechanisms for capability building identified in the literature is knowledge integration, which operates within product innovation projects and contributes to dynamic capability development. Therefore, the main research question is: how does the integration of knowledge across product innovation projects lead to the development of capability? Design/methodology/approach: We adopted a case-based approach and investigated the case of a successful firm that was able to sustain its performance through a series of product innovation projects. In particular, this research focused on the role of KI and firm-level capability development over the course of four projects, during which the firm successfully managed the transformation of its product base and the renewal of its competitive advantage. For this purpose, an in-depth case study of capability development was undertaken at the Iran Khodro Company (IKCO), the key player in the Iranian auto industry transformation. Originality/value: This research revealed that, along with changes at each level of the product architecture, “design knowledge” and “design capability” were developed at the same level of the product architecture, leading to capability development at that level. It can be argued that, along with the step-by-step maturation of radical innovation across the four case projects, architectural knowledge and capability were developed at the case company, resulting in the gradual emergence of a modular product and capability architecture across the different levels of the product architecture. These findings add to the extensive emphasis in the literature on the interrelationship of the concept of modularity with knowledge management and capability development. Practical implications: The findings of this study indicate that firms manage their knowledge in accordance with the level of specialisation in knowledge and capability. Furthermore, firms design appropriate knowledge integration mechanisms within and among functions in order to dynamically align knowledge processes at the different levels of the product architecture. Accordingly, the outcomes of this study may guide practitioners in managing their knowledge processes by dynamically employing knowledge integration modes step by step, from the part level to the architectural level of the product architecture, across a sequence of product innovation projects, to encourage learning and radical innovation.

Relevance:

100.00%

Publisher:

Abstract:

A room-temperature cathodic electrolytic process was developed in the laboratory to recover zinc from industrial leach residues. The various parameters affecting the electroleaching process were studied using a statistically designed experiment. To understand the mechanisms behind the electrode processes, cyclic voltammetry and galvanostatic studies were carried out. The role of Einh measurements in monitoring such an electroleaching procedure is also shown. Since significant amounts of iron were also present in the leach liquor, attempts were made to purify it before zinc recovery by electrowinning. Reductive dissolution and creation of anion vacancies were found to be responsible for the dissolution of zinc ferrite present in the leach residue. A flow sheet of the process is given.