955 results for System-Level Models


Relevance:

90.00%

Publisher:

Abstract:

SRAM-based Field-Programmable Gate Arrays (FPGAs) are built on a Static RAM (SRAM) configuration memory. They present several features that make them very attractive for designing complex embedded systems. First, they have low Non-Recurring Engineering (NRE) costs, since the logic and routing elements are pre-implemented (the user design defines their interconnection). Also, unlike other FPGA technologies, they can be reconfigured (even in the field) an unlimited number of times. Moreover, Xilinx SRAM-based FPGAs support Dynamic Partial Reconfiguration (DPR), which allows the FPGA to be reconfigured without interrupting the application. Finally, they offer high logic density, high processing capability and a rich set of hard macro blocks. However, one drawback of this technology is its susceptibility to ionizing radiation, which increases with the degree of integration (smaller geometries, lower voltages and higher frequencies). This is a first-order concern for applications in highly radiative environments with high dependability requirements. This phenomenon causes long-term degradation and can also induce instantaneous faults, which may be reversible or cause irreversible damage. In SRAM-based FPGAs, radiation-induced faults can appear at two different architectural layers, which are physically overlaid on the silicon die. The Application Layer (or A-Layer) contains the user-defined hardware, and the Configuration Layer (or C-Layer) contains the configuration memory and its support circuitry. Faults in either layer can cause the system to fail, which may be more or less tolerable depending on the system's dependability requirements. In the general case, these faults must be managed in some way.
This thesis addresses system-level fault management in SRAM-based FPGAs, in the context of autonomous, dependable embedded systems operating in a radiative environment. The thesis focuses mainly on space applications, but the same principles can be applied to ground applications. The main differences between the two are the radiation level and the possibility of maintenance. The different techniques for A-Layer and C-Layer fault management are classified, and their implications for system dependability are analyzed. Several architectures are proposed, both for single-layer and for dual-layer Fault Managers. For the latter, a novel, flexible and versatile architecture is proposed. It manages the two layers concurrently in a coordinated way, and allows the redundancy level and the dependability to be balanced. In order to validate dynamic fault management techniques, two different solutions are developed. The first is a simulation framework for C-Layer Fault Managers, based on SystemC as the modeling language and event-driven simulator. This framework and its associated methodology make it possible to explore the Fault Manager design space, decoupling its design from the development of the target FPGA. The framework includes models both for the FPGA C-Layer and for the Fault Manager, which can interact at different abstraction levels (at the configuration-frame level and at the JTAG or SelectMAP physical level). The framework is configurable, scalable and versatile, and includes fault injection capabilities. Simulation results for some scenarios are presented and discussed. The second is a validation platform for Xilinx Virtex FPGA Fault Managers. The hardware platform hosts three Xilinx Virtex-4 FX12 FPGA Modules and two general-purpose 32-bit Microcontroller Unit (MCU) Modules.
The MCU Modules allow prototyping software-based C-Layer and A-Layer Fault Managers. Each FPGA Module implements an A-Layer Ethernet link (through an Ethernet switch) with one of the MCU Modules, and a C-Layer JTAG link with the other. In addition, both MCU Modules exchange commands and data over an internal UART link. As with the simulation framework, fault injection capabilities are included. Test results for some scenarios are also presented and discussed. In summary, this thesis covers the whole process from describing radiation-induced faults in SRAM-based FPGAs, through identifying and classifying fault management techniques and proposing Fault Manager architectures, to finally validating them by simulation and test. Future work is mainly related to the implementation of radiation-hardened System Fault Managers. ABSTRACT SRAM-based Field-Programmable Gate Arrays (FPGAs) are built on Static RAM (SRAM) configuration memory. They present a number of features that make them very convenient for building complex embedded systems. First of all, they benefit from low Non-Recurrent Engineering (NRE) costs, as the logic and routing elements are pre-implemented (the user design defines their connection). Also, as opposed to other FPGA technologies, they can be reconfigured (even in the field) an unlimited number of times. Moreover, Xilinx SRAM-based FPGAs feature Dynamic Partial Reconfiguration (DPR), which allows the FPGA to be partially reconfigured without disrupting the application. Finally, they feature a high logic density, high processing capability and a rich set of hard macros. However, one limitation of this technology is its susceptibility to ionizing radiation, which increases with technology scaling (smaller geometries, lower voltages and higher frequencies).
This is a first-order concern for applications in harsh radiation environments with high dependability requirements. Ionizing radiation leads to long-term degradation as well as instantaneous faults, which can in turn be reversible or produce irreversible damage. In SRAM-based FPGAs, radiation-induced faults can appear at two architectural layers, which are physically overlaid on the silicon die. The Application Layer (or A-Layer) contains the user-defined hardware, and the Configuration Layer (or C-Layer) contains the (volatile) configuration memory and its support circuitry. Faults at either layer can imply a system failure, which may be more or less tolerated depending on the dependability requirements. In the general case, such faults must be managed in some way. This thesis is about managing SRAM-based FPGA faults at system level, in the context of autonomous and dependable embedded systems operating in a radiative environment. The focus is mainly on space applications, but the same principles can be applied to ground applications. The main differences between them are the radiation level and the possibility for maintenance. The different techniques for A-Layer and C-Layer fault management are classified and their implications in system dependability are assessed. Several architectures are proposed, both for single-layer and dual-layer Fault Managers. For the latter, a novel, flexible and versatile architecture is proposed. It manages both layers concurrently in a coordinated way, and allows balancing redundancy level and dependability. For the purpose of validating dynamic fault management techniques, two different solutions are developed. The first one is a simulation framework for C-Layer Fault Managers, based on SystemC as the modeling language and event-driven simulator. This framework and its associated methodology allow exploring the Fault Manager design space, decoupling its design from the target FPGA development.
The framework includes models for both the FPGA C-Layer and the Fault Manager, which can interact at different abstraction levels (at the configuration frame level and at the JTAG or SelectMAP physical level). The framework is configurable, scalable and versatile, and includes fault injection capabilities. Simulation results for some scenarios are presented and discussed. The second one is a validation platform for Xilinx Virtex FPGA Fault Managers. The platform hosts three Xilinx Virtex-4 FX12 FPGA Modules and two general-purpose 32-bit Microcontroller Unit (MCU) Modules. The MCU Modules allow prototyping software-based C-Layer and A-Layer Fault Managers. Each FPGA Module implements one A-Layer Ethernet link (through an Ethernet switch) with one of the MCU Modules, and one C-Layer JTAG link with the other. In addition, both MCU Modules exchange commands and data over an internal UART link. Similarly to the simulation framework, fault injection capabilities are implemented. Test results for some scenarios are also presented and discussed. In summary, this thesis covers the whole process from describing the problem of radiation-induced faults in SRAM-based FPGAs, then identifying and classifying fault management techniques, then proposing Fault Manager architectures and finally validating them by simulation and test. The proposed future work is mainly related to the implementation of radiation-hardened System Fault Managers.
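The readback/compare/repair cycle at the heart of a C-Layer Fault Manager can be sketched in a few lines. The following is a minimal, illustrative Python model, not the SystemC framework described above: configuration frames are plain integers, a single-event upset is a bit flip, and a scrubber repairs the configuration against a golden bitstream. The names `ConfigLayer` and `Scrubber` are invented for this sketch.

```python
class ConfigLayer:
    """Toy model of an FPGA configuration memory as a list of frames (ints)."""

    def __init__(self, golden_frames):
        self.golden = list(golden_frames)   # reference ("golden") bitstream
        self.frames = list(golden_frames)   # live state, may get corrupted

    def inject_seu(self, frame_idx, bit):
        """Flip one configuration bit, emulating a single-event upset."""
        self.frames[frame_idx] ^= 1 << bit

    def readback(self, frame_idx):
        return self.frames[frame_idx]

    def write_frame(self, frame_idx, value):
        self.frames[frame_idx] = value


class Scrubber:
    """Readback scrubber: compare every frame with the golden copy, repair mismatches."""

    def __init__(self, layer):
        self.layer = layer
        self.repairs = 0

    def scrub_pass(self):
        for idx, golden in enumerate(self.layer.golden):
            if self.layer.readback(idx) != golden:
                self.layer.write_frame(idx, golden)  # repair while the A-Layer keeps running
                self.repairs += 1
        return self.repairs


layer = ConfigLayer(golden_frames=[0b1010, 0b1100, 0b1111, 0b0001])
layer.inject_seu(frame_idx=2, bit=0)     # radiation flips one configuration bit
scrubber = Scrubber(layer)
print(scrubber.scrub_pass())             # 1: one frame repaired
print(layer.frames == layer.golden)      # True: configuration restored
```

A real C-Layer manager would read frames over JTAG or SelectMAP and typically use per-frame ECC or CRC rather than a full golden copy; the control flow, however, follows this detect-and-repair loop.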

Relevance:

90.00%

Publisher:

Abstract:

Data centers can nowadays be found in every sector of the world economy. They are composed of thousands of servers, serving users globally, 24 hours a day, 365 days a year. In recent years, e-Science applications such as e-Health or Smart Cities have undergone very significant development. The need to handle the computing requirements of next-generation applications efficiently, together with the growing demand for resources from traditional applications, has driven the rapid growth and proliferation of data centers. The main drawback of this capacity increase has been the fast, dramatic rise in the energy consumption of these infrastructures. In 2010, the electricity bill of data centers represented 1.3% of worldwide electricity consumption. In 2012 alone, data center power consumption grew by 63%, reaching 38 GW. A further growth of 17%, up to 43 GW, was estimated for 2013. Moreover, data centers are responsible for more than 2% of total carbon dioxide emissions into the atmosphere. This doctoral thesis tackles the energy problem by proposing proactive and reactive temperature- and energy-aware techniques that contribute to more efficient data centers. This work develops energy models and uses knowledge about the energy demand of the workload to be executed and about the computing and cooling resources of the data center to optimize consumption. Furthermore, data centers are considered a crucial element within the framework of the executed application, optimizing not only the consumption of the data center but the overall energy consumption of the application.
The main components of data center consumption are the computing power drawn by the IT equipment, and the cooling needed to keep the servers within a working temperature range that ensures their correct operation. Because of the cubic relation between fan speed and fan power consumption, solutions based on over-provisioning cold air to the server generally result in energy inefficiencies. On the other hand, higher processor temperatures lead to higher leakage consumption, due to the exponential relation of leakage with temperature. Moreover, workload characteristics and resource allocation policies have an important impact on the trade-offs between leakage current and cooling consumption. The first major contribution of this work is the development of power and temperature models that describe these leakage-cooling trade-offs, together with the proposal of strategies to minimize server consumption through the joint allocation of cooling and workload from a multivariate perspective. When scaling up to the data center level, we observe a similar behavior in terms of the leakage-cooling trade-off. As the room temperature increases, cooling efficiency improves. However, this increase in room temperature causes a rise in CPU temperature and, therefore, also in leakage consumption. In addition, the room dynamics are very uneven and unbalanced, due to workload allocation and to the heterogeneity of the IT equipment. The second contribution of this thesis is the proposal of temperature- and heterogeneity-aware allocation techniques that jointly optimize the assignment of tasks and cooling to the servers.
These strategies need to be backed up by flexible models, able to work at runtime, that describe the system from a high level of abstraction. Within the scope of next-generation applications, decisions taken at the application level can have a dramatic impact on the energy consumption of lower abstraction levels, such as the data center. It is important to consider the relationships between all the computational agents involved in the problem, so that they can cooperate to achieve the common goal of reducing the overall energy cost of the system. The third contribution of this thesis is the development of energy optimizations for the overall application by evaluating the cost of executing part of the required processing at other abstraction levels, from the nodes up to the data center, by means of load balancing techniques. In summary, the work presented in this thesis contributes to leakage- and cooling-aware server modeling and optimization; to data center modeling and the development of heterogeneity-aware allocation policies; and to mechanisms for the energy optimization of next-generation applications across several abstraction levels. ABSTRACT Data centers are easily found in every sector of the worldwide economy. They consist of tens of thousands of servers, serving millions of users globally, 24-7. In recent years, e-Science applications such as e-Health or Smart Cities have experienced a significant development. The need to deal efficiently with the computational needs of next-generation applications, together with the increasing demand for higher resources in traditional applications, has facilitated the rapid proliferation and growth of data centers.
A drawback to this capacity growth has been the rapid increase of the energy consumption of these facilities. In 2010, data center electricity represented 1.3% of all the electricity use in the world. In 2012 alone, global data center power demand grew 63% to 38 GW. A further rise of 17% to 43 GW was estimated in 2013. Moreover, data centers are responsible for more than 2% of total carbon dioxide emissions. This PhD Thesis addresses the energy challenge by proposing proactive and reactive thermal- and energy-aware optimization techniques that contribute to place data centers on a more scalable curve. This work develops energy models and uses the knowledge about the energy demand of the workload to be executed and the computational and cooling resources available at the data center to optimize energy consumption. Moreover, data centers are considered as a crucial element within their application framework, optimizing not only the energy consumption of the facility, but the global energy consumption of the application. The main contributors to the energy consumption in a data center are the computing power drawn by IT equipment and the cooling power needed to keep the servers within a certain temperature range that ensures safe operation. Because of the cubic relation of fan power with fan speed, solutions based on over-provisioning cold air into the server usually lead to inefficiencies. On the other hand, higher chip temperatures lead to higher leakage power because of the exponential dependence of leakage on temperature. Moreover, workload characteristics as well as allocation policies also have an important impact on the leakage-cooling tradeoffs. The first key contribution of this work is the development of power and temperature models that accurately describe the leakage-cooling tradeoffs at the server level, and the proposal of strategies to minimize server energy via joint cooling and workload management from a multivariate perspective.
When scaling to the data center level, a similar behavior in terms of leakage-temperature tradeoffs can be observed. As room temperature rises, the efficiency of data room cooling units improves. However, as we increase room temperature, CPU temperature rises and so does leakage power. Moreover, the thermal dynamics of a data room exhibit unbalanced patterns due to both the workload allocation and the heterogeneity of computing equipment. The second main contribution is the proposal of thermal- and heterogeneity-aware workload management techniques that jointly optimize the allocation of computation and cooling to servers. These strategies need to be backed up by flexible room-level models, able to work at runtime, that describe the system from a high-level perspective. Within the framework of next-generation applications, decisions taken at this scope can have a dramatic impact on the energy consumption of lower abstraction levels, i.e. the data center facility. It is important to consider the relationships between all the computational agents involved in the problem, so that they can cooperate to achieve the common goal of reducing energy in the overall system. The third main contribution is the energy optimization of the overall application by evaluating the energy costs of performing part of the processing in any of the different abstraction layers, from the node to the data center, via workload management and off-loading techniques. In summary, the work presented in this PhD Thesis makes contributions on leakage- and cooling-aware server modeling and optimization, data center thermal modeling and heterogeneity-aware data center resource allocation, and develops mechanisms for the energy optimization of next-generation applications from a multi-layer perspective.
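The server-level leakage-cooling trade-off described above can be illustrated numerically: fan power grows with the cube of fan speed, leakage grows exponentially with temperature, and more airflow lowers the CPU temperature, so total power is minimized at an intermediate fan speed. This is a toy sketch with invented constants and a simplified thermal model, not the models developed in the thesis:

```python
import math

def fan_power(speed):
    """Fan power grows with the cube of fan speed (normalized units)."""
    return 2.0 * speed ** 3

def cpu_temp(speed):
    """More airflow lowers CPU temperature (toy thermal model, degrees C)."""
    return 25.0 + 60.0 / (0.5 + speed)

def leakage_power(temp_c):
    """Leakage grows exponentially with temperature (invented constants)."""
    return 5.0 * math.exp(0.02 * temp_c)

def total_power(speed):
    return fan_power(speed) + leakage_power(cpu_temp(speed))

# Sweep the fan speed: the minimum lies at an intermediate point, so neither
# starving the server of air nor over-provisioning cold air is optimal.
speeds = [0.2 + 0.05 * i for i in range(60)]
best = min(speeds, key=total_power)
print(round(best, 2))  # an interior speed, not an endpoint of the sweep
```

Even with these made-up numbers the shape of the problem appears: at low fan speed leakage dominates, at high fan speed the cubic fan term dominates, which is why joint cooling and workload management pays off.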

Relevance:

90.00%

Publisher:

Abstract:

Touch plays a fundamental role in everyday life, as it allows us to discriminate the physical characteristics of a specific object, to identify it, and possibly to integrate this tactile information with information coming from other sensory channels. This is the sensory-discriminative component of touch. However, touch also plays a fundamental role in our daily social interactions, both positive, as when we hug or caress a person with whom we have an affective bond, and negative, for example when we push a stranger away from our peripersonal space. This component is the so-called affective-motivational dimension, which determines the coding of the emotional valence that the interaction takes on. This component allows us to create, maintain or break social bonds depending on the meaning that touch assumes during the interaction. If, for example, we receive a caress from a family member, it will be perceived as pleasant and will take on an affiliative meaning. This type of touch is commonly referred to as Social Touch. The discriminative aspects of touch have been well characterized since, historically, the role of touch was considered to be discriminating the characteristics of what is touched, whereas the affective aspects have only recently been investigated in view of their importance in social interactions. Static touch, responsible for the discriminative aspect, activates the large myelinated fibers (Aβ) in the skin, modulating the primary and secondary sensory cortices in the central nervous system. This allows the central nervous system to encode the objective physical characteristics of the touched objects.
Studies on the characteristics of social affiliative touch have shown that this tactile stimulation 1) is a particular dynamic touch occurring on the hairy side of the skin at a velocity of 1-10 cm/sec; 2) activates the unmyelinated fibers (CT fibers, or C-LTMRs); 3) induces positive autonomic effects, for example a decrease in heart rate and an increase in heart rate variability; and 4) modulates brain regions involved in coding the affiliative meaning of the peripheral sensory stimulus, in particular the insular cortex. The sense of touch, with its two dimensions, discriminative and affiliative, is used daily not only by humans but also by non-human primates. Indeed, all non-human primates use the discriminative component of touch to identify objects and food, and the emotional aspect during social interactions, both negative, as during a fight, and positive, as during affiliative behaviors such as grooming. The coding mechanisms of the discriminative component in non-human primates are similar to the human ones. However, very little is known about the mechanisms underlying the coding of pleasant affiliative touch. Although it is well known that the unmyelinated C-LTMR mechanoreceptors are also present on the hairy side of the skin of non-human primates, there are currently no studies on the correlation between pleasant touch and their modulation, as has instead been amply demonstrated in humans. Recently, a role of the C-LTMR fibers during grooming, in particular during the so-called sweeping, has been hypothesized (Dunbar, 2010). Grooming consists of two motor actions, sweeping and picking, which are performed rhythmically.
During sweeping, the acting monkey moves the fur of the receiving monkey with an open-hand movement, in order to see the precise point of the skin where to perform the picking, that is, where to grasp the skin at the root of the hair with the nails of the index finger and thumb and pull, so as to remove parasites or parasite eggs and whatever is left entangled in the fur. Beyond its well-known hygienic role, grooming also seems to have an important affiliative social function. Like the caress in human society, grooming among non-human primates is considered an affiliative behavior. According to Dunbar's hypothesis, the activation of the C-LTMRs would occur during sweeping, which suggests that sweeping, like the human caress, constitutes an affiliative component of grooming, thereby contributing to its coding as a social behavior. Until now, however, there has been no direct evidence supporting this hypothesis. In particular: 1) is the velocity at which sweeping is performed compatible with the activation velocity of CT fibers in humans, and hence with the typical velocity of the pleasant, socially affiliative caress (1-10 cm/sec)? 2) does sweeping induce the same modulation of the autonomic nervous system towards vagal activation, as pleasant touch does in humans through the activation of CT fibers? 3) does sweeping modulate the insular cortex, in the same way that pleasant touch is coded as affiliative in humans via the projections of CT fibers to the posterior insula? The aim of the present work is to test Dunbar's hypothesis mentioned above, thus attempting to answer these questions. The answers could make it possible to hypothesize the similarity between sweeping, characteristic of the affiliative grooming behavior among non-human primates, and the caress. In particular, we carried out 4 pilot studies.
In Study 1 we assessed the velocity at which sweeping is performed among Rhesus monkeys, through a kinematic analysis of videos recorded in a group of Rhesus monkeys. In Studies 2 and 3 we assessed the effects on the autonomic nervous system of sweeping performed by the experimenter on a male Rhesus monkey in a typical experimental setting. The tactile stimulation was delivered at different velocities, in accordance with the results of Study 1 and with the human studies that identified the optimal and non-optimal velocities for C-LTMR activation. In particular, in Study 2 we measured heart rate and heart rate variability, as an index of vagal modulation, while in Study 3 we assessed the effects of sweeping on the autonomic nervous system in terms of body temperature changes, specifically at the level of the monkey's muzzle. Finally, in Study 4 we investigated the role of the secondary somatosensory and insular cortices in the coding of sweeping. To this end, we performed single-neuron recordings while the same monkey used in Studies 2 and 3 received sweeping at two velocities, one optimal for C-LTMR activation according to the human studies and the results of the three studies mentioned above, and one non-optimal. The preliminary data obtained show that 1) (Study 1) sweeping among Rhesus monkeys is performed at an average velocity of 9.31 cm/sec, within the activation range of CT fibers in humans; 2) (Study 2) sweeping performed by the experimenter on the back of a male Rhesus monkey in an experimental setting produces a decrease in heart rate and an increase in heart rate variability when delivered at velocities of 5 and 10 cm/sec.
Conversely, sweeping performed at a velocity lower than 1 cm/sec or higher than 10 cm/sec produces an increase in heart rate and a decrease in heart rate variability, and hence a decrease in parasympathetic activation; 3) (Study 3) sweeping performed by the experimenter on the back of a male Rhesus monkey in an experimental setting produces an increase in body temperature at the level of the monkey's muzzle when delivered at a velocity of 5-10 cm/sec. Conversely, sweeping performed at a velocity lower than 5 cm/sec or higher than 10 cm/sec produces a decrease in muzzle temperature; 4) (Study 4) the secondary somatosensory cortex and the posterior insular cortex contain neurons selectively modulated during sweeping performed at a velocity of 5-13 cm/sec, but no neurons selective for sweeping velocities lower than 5 cm/sec. These results support Dunbar's hypothesis on the involvement of CT fibers during sweeping. Indeed, the data show that sweeping is performed at a velocity (9.31 cm/sec) similar to the activation velocity of CT fibers in humans (1-10 cm/sec), produces the same positive physiological effects in terms of heart rate (decrease) and heart rate variability (increase), and modulates the same areas of the central nervous system (in particular the insular cortex). Moreover, we showed for the first time that this tactile stimulation produces an increase in the temperature of the monkey's muzzle. The present study represents the first indirect evidence for the hypothesis of C-LTMR fiber modulation during sweeping, and hence for the coding of pleasant affiliative tactile stimulation in the central and autonomic nervous systems of non-human primates.
The preliminary data presented here highlight the similarity between the human CT fiber system and the C-LTMR system of non-human primates with regard to Social Touch. Nevertheless, we found some discrepancies between our results and those obtained in human studies. The average sweeping velocity is 9.31 cm/sec, close to the upper limit of the velocity range that activates CT fibers in humans. Moreover, the positive autonomic effects, in terms of heart rate, heart rate variability and muzzle temperature, were observed during sweeping performed at velocities of 5 and 10 cm/sec, hence at the upper end of the optimal range that activates CT fibers in humans. Conversely, sweeping performed at velocities lower than 5 cm/sec or higher than 10 cm/sec produces negative physiological effects. Finally, the insular cortex seems to be selectively modulated by stimulation delivered at a velocity of 5-13 cm/sec, but not at 1-5 cm/sec. Thus, while human studies of the CT fiber system have shown an optimal velocity of 1-10 cm/sec, our results suggest an optimal velocity of 5-13 cm/sec. Therefore, despite the homology between the human CT fiber system responsible for coding pleasant affiliative touch and the C-LTMR system of non-human primates, further studies will be needed to define more precisely the optimal activation velocity of the C-LTMR fibers and to demonstrate their activation during sweeping directly, by measuring their modulation. Studies in this direction may confirm the homology between sweeping, as a pleasant affiliative touch among non-human primates, and the caress among humans.
Finally, the present study could be an important starting point for exploring the evolutionary mechanism behind the transformation of sweeping among non-human primates, a utilitarian action performed during grooming, into the caress, a purely affiliative gesture among humans.
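Heart rate variability, used in Study 2 as an index of vagal modulation, is conventionally computed from the series of inter-beat (RR) intervals; RMSSD is one standard time-domain index of it. The sketch below uses invented RR series purely to illustrate the direction of the reported effect (lower heart rate, higher variability during 5-10 cm/sec sweeping); it does not reproduce the study's actual analysis pipeline or data:

```python
import math

def mean_hr(rr_ms):
    """Mean heart rate (beats per minute) from RR intervals in milliseconds."""
    return 60000.0 / (sum(rr_ms) / len(rr_ms))

def rmssd(rr_ms):
    """Root mean square of successive RR differences: a common vagal-tone index."""
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# Invented RR series: slower, more variable beats indicate stronger vagal tone,
# the pattern reported during sweeping at 5-10 cm/sec.
rr_rest     = [600, 605, 598, 602, 601]    # faster, regular rhythm
rr_sweeping = [650, 670, 640, 680, 655]    # slower, more variable rhythm
print(mean_hr(rr_rest) > mean_hr(rr_sweeping))   # True: heart rate drops
print(rmssd(rr_sweeping) > rmssd(rr_rest))       # True: variability rises
```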

Relevance:

90.00%

Publisher:

Abstract:

Systems biology is based on computational modelling and simulation of large networks of interacting components. Models may be intended to capture processes, mechanisms, components and interactions at different levels of fidelity. Input data are often large and geographically dispersed, and may require the computation to be moved to the data, not vice versa. In addition, complex system-level problems require collaboration across institutions and disciplines. Grid computing can offer robust, scalable solutions for distributed data, compute and expertise. We illustrate some of the range of computational and data requirements in systems biology with three case studies: one requiring large computation but small data (orthologue mapping in comparative genomics), a second involving complex terabyte data (the Visible Cell project) and a third that is both computationally and data-intensive (simulations at multiple temporal and spatial scales). Authentication, authorisation and audit systems do not currently scale well and may present bottlenecks for distributed collaboration, particularly where outcomes may be commercialised. Challenges remain in providing lightweight standards to facilitate the penetration of robust, scalable grid-type computing into diverse user communities to meet the evolving demands of systems biology.
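Orthologue mapping of the kind in the first case study is commonly reduced to an all-against-all sequence comparison (the compute-heavy, grid-friendly part) followed by a cheap reciprocal-best-hit selection. A minimal sketch of the selection step, assuming the similarity scores have already been computed; the gene names and scores below are invented, and this is one common heuristic rather than the specific pipeline of the case study:

```python
def best_hit(scores, query):
    """Highest-scoring subject for a query, from a {(query, subject): score} table."""
    hits = {subj: s for (q, subj), s in scores.items() if q == query}
    return max(hits, key=hits.get) if hits else None

def reciprocal_best_hits(a_to_b, b_to_a, genome_a):
    """Orthologue pairs: gene a's best hit in B must point back to a."""
    pairs = []
    for gene in genome_a:
        hit = best_hit(a_to_b, gene)
        if hit is not None and best_hit(b_to_a, hit) == gene:
            pairs.append((gene, hit))
    return pairs

# Invented similarity scores between two toy genomes.
a_to_b = {("a1", "b1"): 95, ("a1", "b2"): 40, ("a2", "b2"): 88}
b_to_a = {("b1", "a1"): 93, ("b2", "a1"): 40, ("b2", "a2"): 90}
print(reciprocal_best_hits(a_to_b, b_to_a, ["a1", "a2"]))  # [('a1', 'b1'), ('a2', 'b2')]
```

The score table is tiny here, but for real genomes producing it requires millions of pairwise comparisons while the table itself stays small: exactly the "large computation, small data" profile described above.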

Relevance:

90.00%

Publisher:

Abstract:

This special issue of the Journal of the Operational Research Society is dedicated to papers on the related subjects of knowledge management and intellectual capital. These subjects continue to generate considerable interest amongst both practitioners and academics. This issue demonstrates that operational researchers have many contributions to offer to the area, especially by bringing multi-disciplinary, integrated and holistic perspectives. The papers included are both theoretical and practical, and include a number of case studies showing how knowledge management has been implemented in practice that may assist other organisations in their search for a better means of managing what is now recognised as a core organisational activity. A growing number of organisations have accepted that the careful handling of information and knowledge is a significant factor in their success, but that implementing a strategy and processes for this handling is a challenge. It is here, in the particular area of knowledge process handling, that the contributions of operational researchers can be seen most clearly, as the papers in this issue illustrate. The issue comprises nine papers, contributed by authors based in eight different countries on five continents. Lind and Seigerroth describe an approach that they call team-based reconstruction, intended to help articulate knowledge in a particular organisational context. They illustrate the use of this approach with three case studies, two in manufacturing and one in public sector health care. Different ways of carrying out reconstruction are analysed, and the benefits of team-based reconstruction are established. Edwards and Kidd, and Connell, Powell and Klein both concentrate on knowledge transfer.
Edwards and Kidd discuss the issues involved in transferring knowledge across frontiers of various kinds, from borders within organisations to those between countries. They present two examples, one in distribution and the other in manufacturing. They conclude that trust and culture both play an important part in facilitating such transfers, that IT should be kept in a supporting role in knowledge management projects, and that a staged approach to this IT support may be the most effective. Connell, Powell and Klein consider the oft-quoted distinction between explicit and tacit knowledge, and argue that such a distinction is sometimes unhelpful. They suggest that knowledge should rather be regarded as a holistic systemic property. The consequences of this for knowledge transfer are examined, with a particular emphasis on what this might mean for the practice of OR. Their view of OR in the context of knowledge management very much echoes Lind and Seigerroth's focus on knowledge for human action. This is an interesting convergence of views given that, broadly speaking, one set of authors comes from within the OR community, and the other from outside it. Hafeez and Abdelmeguid present the nearest to a 'hard' OR contribution of the papers in this special issue. In their paper they construct and use system dynamics models to investigate alternative ways in which an organisation might close a knowledge gap or skills gap. The methods they use have the potential to be generalised to other quantifiable aspects of intellectual capital. The contribution by Revilla, Sarkis and Modrego is also at the 'hard' end of the spectrum. They evaluate the performance of public–private research collaborations in Spain, using an approach based on data envelopment analysis. They found that larger organisations tended to perform relatively better than smaller ones, even though the approach used takes scale effects into account.
Perhaps more interesting was the finding that many factors that might have been thought relevant, such as the organisation's existing knowledge base or how widely applicable the results of the project would be, had no significant effect on performance. It may be that how well the partnership between the collaborators works (not a factor it was possible to take into account in this study) matters more than most other factors. Mak and Ramaprasad introduce the concept of a knowledge supply network. This builds on existing ideas of supply chain management, but also integrates the design chain and the marketing chain, to address all the intellectual property connected with the network as a whole. The authors regard the knowledge supply network as the natural focus for considering knowledge management issues. They propose seven criteria for evaluating knowledge supply network architecture, and illustrate their argument with an example from the electronics industry—integrated circuit design and fabrication. Hasan and Crawford are interested in the holistic approach to knowledge management. They demonstrate their argument—that there is no simple IT solution for organisational knowledge management efforts—through two case study investigations. These case studies, in Australian universities, are investigated through cultural historical activity theory, which focuses the study on the activities carried out by people in support of their interpretations of their role, the opportunities available and the organisation's purpose. Human activities, it is argued, are mediated by the available tools, including IT and IS, and in this particular context KMS. It is this argument that places the available technology within the knowledge activity process, and permits the future design of KMS to be improved through the lessons learnt by studying these knowledge activity systems in practice.
Wijnhoven concentrates on knowledge management at the operational level of the organisation. He is concerned with studying the transformation of certain inputs to outputs—the operations function—and the consequent realisation of organisational goals via the management of these operations. He argues that the inputs and outputs of this process in the context of knowledge management are different types of knowledge; he names the operational method knowledge logistics and calls the method of transformation learning. This theoretical paper discusses the operational management of four types of knowledge objects—explicit understanding, information, skills, and norms and values—and shows how, through the proposed framework, learning can transfer these objects to clients in a logistical process without a major transformation in content. Millie Kwan continues this theme with a paper about process-oriented knowledge management. In her case study she discusses an implementation of knowledge management where the knowledge is centred around an organisational process, and the mission, rationale and objectives of the process define the scope of the project. Her case concerns the effective use of real estate (property and buildings) within a Fortune 100 company. In order to manage the knowledge about this property, and the process by which the best 'deal' for internal customers and the overall company was reached, a KMS was devised. She argues that process knowledge is a source of core competence and thus needs to be strategically managed. Finally, you may also wish to read a related paper originally submitted for this Special Issue, 'Customer knowledge management' by Garcia-Murillo and Annabi, which was published in the August 2002 issue of the Journal of the Operational Research Society, 53(8), 875–884.

Relevância:

90.00%

Publicador:

Resumo:

The simulation of a power system such as the More Electric Aircraft is a complex problem. The requirements of the simulation conflict: for example, to reduce simulation run-times, power ratings that need to be established over long periods of the flight can be calculated using a fairly coarse model, whereas power quality is established over relatively short periods with a detailed model. An important issue is therefore to establish the requirements of the simulation work at an early stage. This paper describes the modelling and simulation strategy adopted for the UK TIMES project, which is investigating the optimisation of the More Electric Aircraft at a system level. Essentially, four main requirements of the simulation work have been identified, resulting in four different types of simulation. Each of the simulations is described along with preliminary models and results.

Relevância:

90.00%

Publicador:

Resumo:

With its low-power operation and flexible networking capabilities, IEEE 802.15.4 has been widely regarded as a strong candidate communication technology for wireless sensor networks (WSNs). With an increasing number of deployments of 802.15.4-based WSNs, multiple WSNs can be expected to coexist with full or partial overlap in residential or enterprise areas. As WSNs are usually deployed without coordination, communication can degrade significantly under the 802.15.4 channel access scheme, which has a large impact on system performance. In this thesis we investigate the effectiveness of 802.15.4 networks supporting WSN applications in various environments, especially when hidden terminals are present due to the uncoordinated coexistence problem. Both analytical models and system-level simulators are developed to analyse the performance of the random access scheme specified by the IEEE 802.15.4 medium access control (MAC) standard for several network scenarios. The first part of the thesis investigates the effectiveness of a single 802.15.4 network supporting WSN applications. A Markov chain based analytical model is applied to model the MAC behaviour of the IEEE 802.15.4 standard, and a discrete event simulator is also developed to analyse the performance and verify the proposed analytical model. It is observed that 802.15.4 networks can sufficiently support most WSN applications with their various functionalities. Following the investigation of a single network, the next part of the thesis investigates the uncoordinated coexistence problem of multiple 802.15.4 networks deployed with fully or partially overlapping communication ranges. Both non-sleep and sleep modes are investigated under different channel conditions by analytical and simulation methods to obtain a comprehensive performance evaluation.
It is found that the uncoordinated coexistence problem can significantly degrade the performance of 802.15.4 networks, which is then unlikely to satisfy the QoS requirements of many WSN applications. The proposed analytical model is validated by simulations and can be used to obtain optimal parameter settings before WSN deployment to mitigate interference risks.
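The contention effects analysed above can be illustrated with a minimal Monte Carlo sketch of slotted-CSMA/CA-style backoff. This is a deliberate simplification (one backoff draw per round, an ideal channel, no hidden terminals), not the thesis's actual simulator, and all names are illustrative:

```python
import random

# Simplified 802.15.4-style contention: each saturated node draws a
# random backoff in [0, 2**BE - 1]; the node(s) with the smallest draw
# transmit first, and a collision occurs when two or more nodes tie.
def contention_round(n_nodes, be=3):
    backoffs = [random.randrange(2 ** be) for _ in range(n_nodes)]
    winners = [b for b in backoffs if b == min(backoffs)]
    return len(winners) == 1  # True -> exactly one transmitter, success

def success_probability(n_nodes, rounds=100_000, be=3):
    wins = sum(contention_round(n_nodes, be) for _ in range(rounds))
    return wins / rounds

if __name__ == "__main__":
    for n in (2, 5, 10, 20):
        print(n, round(success_probability(n), 3))
```

Even this toy model reproduces the qualitative trend the thesis quantifies: the success probability falls steadily as more uncoordinated nodes contend for the channel.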

Relevância:

90.00%

Publicador:

Resumo:

For remote, semi-arid areas, brackish groundwater (BW) desalination powered by solar energy may serve as the most technically and economically viable means to alleviate water stress. For such systems, a high recovery ratio is desired because of the technical and economic difficulties of concentrate management. It has been demonstrated that current, conventional solar reverse osmosis (RO) desalination can be improved by 40–200 times by eliminating unnecessary energy losses. In this work, a batch-RO system that can be powered by a thermal Rankine cycle has been developed. By directly recycling the high-pressure concentrate and by using a linkage connection to provide increasing feed pressures, the batch-RO system has been shown to achieve a 70% saving in energy consumption compared to a continuous single-stage RO system. Theoretical investigations of the mass transfer phenomena, including dispersion and concentration polarization, have been carried out to complement and guide the experimental efforts. The performance evaluation of the batch-RO system, named DesaLink, is based on extensive experimental tests. Operating DesaLink with compressed air as the power supply under laboratory conditions, a freshwater production of approximately 300 litres per day was recorded at a concentration of around 350 ppm, while the feed water concentration ranged from 2500 to 4500 ppm; the corresponding linkage efficiency was around 40%. On the computational side, simulation models have been developed and validated for each of the subsystems of DesaLink, and an integrated model of the whole system has been built upon them. The models, both the subsystem ones and the integrated one, have been shown to predict the system performance accurately under specific operational conditions. A simulation case study has been performed using the developed model.
Simulation results indicate that the system can be expected to achieve a water production of 200 m3 per year using a widely available evacuated-tube solar collector with an area of only 2 m2. This freshwater production would satisfy the drinking water needs of 163 inhabitants in the Rajasthan region, the area for which the case study was performed.
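The trade-off between recovery ratio and concentrate management follows from a steady-state salt mass balance over an RO stage. The sketch below is a textbook relation for illustration only; it is not a model of DesaLink itself, and the example figures are merely in the range reported above:

```python
# Salt mass balance over an RO stage: Cf*Qf = Cp*Qp + Cc*Qc,
# with recovery r = Qp/Qf, which gives Cc = (Cf - r*Cp) / (1 - r).
def concentrate_concentration(c_feed_ppm, c_perm_ppm, recovery):
    """Concentrate concentration (ppm) for a given recovery ratio."""
    if not 0.0 <= recovery < 1.0:
        raise ValueError("recovery must be in [0, 1)")
    return (c_feed_ppm - recovery * c_perm_ppm) / (1.0 - recovery)

# Illustrative: 3500 ppm feed, 350 ppm permeate, 70% recovery.
print(round(concentrate_concentration(3500, 350, 0.70)))  # prints 10850
```

At high recovery the concentrate concentration rises steeply, which is why concentrate management dominates the design of high-recovery systems.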

Relevância:

90.00%

Publicador:

Resumo:

* This paper was prepared under Program No. 14 of fundamental scientific research of the Presidium of the Russian Academy of Sciences, project "Intellectual Systems Based on Multilevel Domain Models".

Relevância:

90.00%

Publicador:

Resumo:

The author surveys the changes that have taken place in the field of applied multi-sector modelling, from linear programming models to quantified (computable) general equilibrium models. After a brief historical retrospection he presents the common and differing characteristic features of general equilibrium models by comparing them with national-economy-level models based on the methods of linear programming. He also makes clear how general equilibrium models can be used for analysing the consistency of economic policy targets, for investigating trade-off possibilities among the targets and, in general, for sensitivity analyses of economic policy plans. The author illustrates the discussion of theoretical and methodological questions with the aid of a quantified general equilibrium model.

Relevância:

90.00%

Publicador:

Resumo:

“Availability” is the term used in asset-intensive industries such as petrochemicals and hydrocarbon processing to describe the readiness of equipment, systems or plants to perform their designed functions. It is a measure of a facility’s capability to meet targeted production in a safe working environment. Availability is also vital because it encompasses reliability and maintainability, allowing engineers to manage and operate facilities by focusing on one performance indicator. These benefits make availability a highly demanded and much sought-after area of interest and research for both industry and academia. In this dissertation, new models, approaches and algorithms are explored to estimate and manage the availability of complex hydrocarbon processing systems. The risk of equipment failure and its effect on availability is vital in the hydrocarbon industry, and is also explored in this research. The importance of availability has encouraged companies to invest in this domain, putting effort and resources into developing novel techniques for system availability enhancement. Most work in this area has focused on individual equipment rather than facility- or system-level availability assessment and management. This research focuses on developing new systematic methods to estimate system availability. The main focus areas are availability estimation and management through physical asset management, risk-based availability estimation strategies, availability and safety using a failure assessment framework, and availability enhancement using early equipment fault detection and maintenance scheduling optimization.
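The way availability “encompasses reliability and maintainability” can be made concrete with the standard steady-state formula A = MTBF / (MTBF + MTTR). The sketch below uses assumed, purely illustrative figures, not data from the dissertation:

```python
def availability(mtbf_hours, mttr_hours):
    """Steady-state availability of one repairable unit: A = MTBF/(MTBF+MTTR)."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

def series_availability(units):
    """A series system is up only when every unit is up, so availabilities multiply."""
    a = 1.0
    for mtbf, mttr in units:
        a *= availability(mtbf, mttr)
    return a

# Assumed figures for a small processing train (illustrative only):
# (MTBF hours, MTTR hours) per unit.
pump, compressor, exchanger = (2000, 24), (1500, 48), (8000, 12)
print(round(series_availability([pump, compressor, exchanger]), 4))  # prints 0.9561
```

Note that the system availability falls below that of its worst unit, which is why system-level assessment can differ markedly from equipment-level figures.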

Relevância:

90.00%

Publicador:

Resumo:

Dissertation (Master’s)—Universidade de Brasília, Instituto de Ciências Exatas, Departamento de Ciência da Computação, 2015.

Relevância:

90.00%

Publicador:

Resumo:

To tackle challenges in circuit-level and system-level VLSI and embedded system design, this dissertation proposes several novel algorithms to explore efficient solutions. At the circuit level, a new reliability-driven minimum-cost Steiner routing and layer assignment scheme is proposed, together with the first transceiver insertion algorithmic framework for optical interconnect. At the system level, a reliability-driven task scheduling scheme for multiprocessor real-time embedded systems is proposed, which optimizes system energy consumption under stochastic fault occurrences. Embedded system design is also widely used in the smart-home area for improving health, wellbeing and quality of life. The proposed scheduling scheme for multiprocessor embedded systems is therefore extended to handle energy consumption scheduling for smart homes. The extended scheme can arrange household appliances for operation so as to minimize a customer's monetary expense under a time-varying pricing model.
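The appliance-scheduling idea in the closing sentences can be sketched as a greedy placement of a deferrable load into the cheapest hours of a time-varying tariff. The function name, the tariff figures, and the single-appliance restriction are all assumptions for illustration, not the dissertation's algorithm:

```python
def cheapest_slots(run_hours, hourly_price):
    """Pick the hours in which a deferrable appliance runs most cheaply."""
    order = sorted(range(len(hourly_price)), key=hourly_price.__getitem__)
    slots = sorted(order[:run_hours])           # cheapest hours, in time order
    cost = sum(hourly_price[h] for h in slots)  # monetary expense of the plan
    return slots, cost

# Illustrative two-tier tariff: expensive 08:00-20:00, cheap otherwise.
price = [0.10] * 8 + [0.30] * 12 + [0.10] * 4
slots, cost = cheapest_slots(3, price)
print(slots, round(cost, 2))  # prints [0, 1, 2] 0.3
```

Real schemes add constraints the sketch omits, such as appliance deadlines, peak-power limits, and contiguous-run requirements.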

Relevância:

90.00%

Publicador:

Resumo:

The deployment of ultra-dense networks is one of the most promising solutions to manage the phenomenon of co-channel interference that affects the latest wireless communication systems, especially in hotspots. To meet the requirements of the use cases and the immense amount of traffic generated in these scenarios, 5G ultra-dense networks are being deployed using various technologies, such as the distributed antenna system (DAS) and the cloud radio access network (C-RAN). Through these centralized densification schemes, virtualized baseband processing units coordinate the distributed access points and manage the available network resources. In particular, link adaptation techniques are shown to be fundamental to overall system operation and performance enhancement. The core of this dissertation is the result of an analysis and comparison of dynamic and adaptive methods for modulation and coding scheme (MCS) selection applied to the latest mobile telecommunications standards. A novel algorithm based on proportional-integral-derivative (PID) controller principles and a block error rate (BLER) target has been proposed. Tests were conducted in a 4G and 5G system-level laboratory and, by means of a channel emulator, the performance was evaluated for different channel models and target BLERs. Furthermore, due to the intrinsic sectorization of the end-user distribution in the investigated scenario, a preliminary analysis of the joint application of user-grouping algorithms with multi-antenna and multi-user techniques has been performed. In conclusion, the importance and impact of other fundamental physical-layer operations, such as channel estimation and power control, on the overall end-to-end system behaviour and performance were highlighted.
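The PID/BLER-target idea can be sketched as an outer-loop link adaptation controller that steers the measured BLER toward its target by adjusting an SINR offset applied before the MCS lookup. The gains, class and attribute names, and the absence of clamping are illustrative assumptions, not the algorithm proposed in the dissertation:

```python
class PidLinkAdaptation:
    """Outer-loop link adaptation sketch: drive measured BLER toward a
    target by adjusting an SINR offset with a PID controller."""

    def __init__(self, target_bler=0.10, kp=2.0, ki=0.5, kd=0.1):
        self.target = target_bler
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0
        self.offset_db = 0.0  # added to reported SINR before the MCS lookup

    def update(self, measured_bler):
        # error > 0: link performing better than the target allows,
        # so the offset rises and a higher MCS can be selected.
        error = self.target - measured_bler
        self.integral += error
        derivative = error - self.prev_error
        self.prev_error = error
        self.offset_db = (self.kp * error + self.ki * self.integral
                          + self.kd * derivative)
        return self.offset_db
```

In a real system the offset would be clamped, the gains tuned per deployment, and the adjusted SINR mapped to an MCS via standard CQI/MCS tables.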

Relevância:

90.00%

Publicador:

Resumo:

Deep Neural Networks (DNNs) have revolutionized a wide range of applications beyond traditional machine learning and artificial intelligence fields, e.g., computer vision, healthcare, natural language processing and others. At the same time, edge devices have become central in our society, generating an unprecedented amount of data which could be used to train data-hungry models such as DNNs. However, the potentially sensitive or confidential nature of gathered data poses privacy concerns when storing and processing them in centralized locations. To this end, decentralized learning decouples model training from the need to access raw data directly, by alternating on-device training and periodic communication. The ability to distil knowledge from decentralized data, however, comes at the cost of facing more challenging learning settings, such as coping with heterogeneous hardware and network connectivity, statistical diversity of data, and ensuring verifiable privacy guarantees. This Thesis proposes an extensive overview of the decentralized learning literature, including a novel taxonomy and a detailed description of the most relevant system-level contributions for privacy, communication efficiency, data and system heterogeneity, and poisoning defense. Next, this Thesis presents the design of an original solution to tackle communication efficiency and system heterogeneity, and empirically evaluates it in federated settings. For communication efficiency, an original method, specifically designed for Convolutional Neural Networks, is also described and evaluated against the state of the art. Furthermore, this Thesis provides an in-depth review of recently proposed methods to tackle the performance degradation introduced by data heterogeneity, followed by empirical evaluations on challenging data distributions, highlighting strengths and possible weaknesses of the considered solutions.
Finally, this Thesis presents a novel perspective on the use of Knowledge Distillation as a means of optimizing decentralized learning systems in settings characterized by data or system heterogeneity. A vision of relevant future research directions closes the manuscript.
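The alternation of on-device training and periodic communication described above is easiest to see in federated averaging (FedAvg), one of the canonical decentralized schemes. The sketch below is a pure-Python toy (a scalar linear model, synthetic data, no privacy machinery), not any method proposed in this Thesis:

```python
# Minimal FedAvg sketch: clients fit y = w*x locally by gradient
# descent, and the server averages their weights each round.
def local_step(w, data, lr=0.01, epochs=5):
    # Plain gradient descent on the local mean-squared error;
    # the raw data never leaves the client.
    for _ in range(epochs):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

def fedavg(client_datasets, rounds=20):
    w = 0.0  # global model parameter
    for _ in range(rounds):
        local_models = [local_step(w, d) for d in client_datasets]
        w = sum(local_models) / len(local_models)  # server-side averaging
    return w

# Two clients whose data both follow y = 3x, over different x ranges.
clients = [[(1, 3), (2, 6)], [(3, 9), (4, 12)]]
print(round(fedavg(clients), 2))  # prints 3.0
```

Data heterogeneity shows up in exactly this loop: when clients' data follow different underlying relationships, plain averaging of the local models degrades, which motivates the methods reviewed in the Thesis.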