30 results for MCU


Relevance:

20.00%

Publisher:

Abstract:

The first topic analyzed in the thesis will be Neural Architecture Search (NAS). I will focus on two tools that I developed: one to optimize the architecture of Temporal Convolutional Networks (TCNs), a recently emerged convolutional model for time-series processing, and one to optimize the data precision of tensors inside CNNs. The first NAS explicitly targets the most distinctive architectural parameters of TCNs, namely dilation, receptive field, and the number of features in each layer; it is the first NAS to explicitly target these networks. The second NAS instead searches for the most efficient data format for a target CNN, at the granularity of individual layer filters. Applying these two NASes in sequence allows an "application designer" to minimize the structure of the neural network employed, reducing the number of operations or the memory usage of the network. The second topic described is the optimization of neural network deployment on edge devices: carefully exploiting the scarce resources of edge platforms is critical for efficient NN execution on MCUs. To this end, I will introduce DORY (Deployment Oriented to memoRY), an automatic tool to deploy CNNs on low-cost MCUs. DORY automatically manages the different memory levels inside the MCU, offloads the computational workload (i.e., the individual layers of a neural network) to dedicated hardware accelerators, and generates ANSI C code that orchestrates off- and on-chip transfers together with the computation phases. On top of this, I will introduce two optimized computation libraries that DORY can exploit to deploy TCNs and Transformers efficiently at the edge. I conclude the thesis with two applications in bio-signal analysis, i.e., heart rate tracking and sEMG-based gesture recognition.
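For intuition, the receptive field that the TCN NAS trades off against model cost follows directly from each layer's kernel size and dilation; the following is a minimal illustrative sketch, not the thesis tool itself:

```python
# Minimal illustrative sketch: receptive field of a stack of dilated 1-D
# convolutions, as used in TCNs. Each layer with kernel size k and
# dilation d widens the receptive field by (k - 1) * d input samples.
def tcn_receptive_field(layers):
    """layers: list of (kernel_size, dilation) pairs, input to output."""
    rf = 1
    for kernel_size, dilation in layers:
        rf += (kernel_size - 1) * dilation
    return rf

# Four layers, kernel 3, dilations doubling (1, 2, 4, 8)
print(tcn_receptive_field([(3, 1), (3, 2), (3, 4), (3, 8)]))  # -> 31
```

This is why dilation is such a powerful knob for the NAS: doubling the dilations makes the receptive field grow exponentially with depth at no extra cost in parameters.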

Relevance:

10.00%

Publisher:

Abstract:

Final Master's project for obtaining the degree of Master in Electronics and Telecommunications Engineering

Relevance:

10.00%

Publisher:

Abstract:

Dissertation for obtaining the degree of Master in Electrical Engineering, Automation and Industrial Electronics branch

Relevance:

10.00%

Publisher:

Abstract:

Presented at the Embed with Linux Workshop (EWiLi 2015), 4–9 October 2015, Amsterdam, Netherlands.

Relevance:

10.00%

Publisher:

Abstract:

Whether the response of the fetal heart to ischemia-reperfusion is associated with activation of the c-Jun N-terminal kinase (JNK) pathway is not known. In contrast, involvement of the sarcolemmal L-type Ca2+ channel (LCC) and the mitochondrial K(ATP) (mitoK(ATP)) channel has been established. This work aimed at investigating the profile of JNK activity during anoxia-reoxygenation and its modulation by the LCC and the mitoK(ATP) channel. Hearts isolated from 4-day-old chick embryos were submitted to anoxia (30 min) and reoxygenation (60 min). Using the kinase assay method, the profile of JNK activity in the ventricle was determined every 10 min throughout anoxia-reoxygenation. Effects on JNK activity of the LCC blocker verapamil (10 nM), the mitoK(ATP) channel opener diazoxide (50 microM) and the blocker 5-hydroxydecanoate (5-HD, 500 microM), the mitochondrial Ca2+ uniporter (MCU) inhibitor Ru360 (10 microM), and the antioxidant N-(2-mercaptopropionyl) glycine (MPG, 1 mM) were determined. In untreated hearts, JNK activity was increased by 40% during anoxia and peaked fivefold relative to basal level after 30-40 min reoxygenation. This peak value was reduced by half by diazoxide and was tripled by 5-HD. Furthermore, the 5-HD-mediated stimulation of JNK activity during reoxygenation was abolished by diazoxide, verapamil or Ru360. MPG had no effect on JNK activity, whatever the conditions. None of the tested pharmacological agents altered JNK activity under basal normoxic conditions. Thus, in the embryonic heart, JNK activity exhibits a characteristic pattern during anoxia and reoxygenation, and the respective open states of the LCC, MCU and mitoK(ATP) channels can be a major determinant of JNK activity in a ROS-independent manner.

Relevance:

10.00%

Publisher:

Abstract:

IP networks place new requirements on service providers implementing multimedia conferencing services, while at the same time enabling features that could not be realized before. The central element of IP-based multimedia conferencing is the MCU, based on the H.323 recommendation. This thesis presents and compares the techniques used in multimedia conferencing services in the past and today. Attention is also paid to conferencing culture and to the roles needed in multimedia conferences. In the practical part of the work, a prototype conferencing service was implemented that exploits the new features made possible by the IP world. UML was used for the design, specification and documentation of the implementation, and Java was used as the programming language.

Relevance:

10.00%

Publisher:

Abstract:

In the course of this research, samples from five agricultural biogas plants (BGA) fed with renewable raw materials (NawaRo) were examined by molecular methods for their biocenosis of methanogenic Archaea. Using amplified rDNA restriction analysis (ARDRA) screening of libraries based on 16S rRNA gene fragments, the presence of representatives of the genera Methanoculleus (Mcu.), Methanobacterium (Mb.), Methanosarcina (Msc.) and Methanosaeta (Mst.) was demonstrated in two exemplary plants. Using denaturing gradient gel electrophoresis (DGGE), the presence of these microorganisms was also shown in the remaining plants. In addition, Methanospirillum hungatei was detected in three plants. After genus-specific isolation strategies had been worked out, a total of ten representatives of the genus Methanobacterium (isolates Mb1 to Mb10) and one representative each of the genera Methanoculleus (isolate Mcu(1)), Methanosarcina (isolate NieKK) and Methanosaeta (isolate Mst1.3) were isolated from the biogas plant samples. By in silico alignment of the partial 16S rRNA gene sequences, these were identified as relatives of Mb. formicicum MFT, Mcu. bourgensis MS2T, Msc. mazei S-6T and Mst. concilii FE with a sequence identity > 97%. In further molecular analyses using DGGE and ARDRA, the isolates could be assigned to the reference strains, although slight deviations emerged for the genus Methanobacterium. These were confirmed by comparative genomic-fingerprint analyses using specifically amplified polymorphic DNA PCR (SAPD-PCR), which in this work was successfully applied to archaeal organisms for the first time. Here the isolates showed two main amplification patterns distinct from the fingerprints of the reference strains examined.
Given the large number of isolates and their significant presence in qPCR analyses and clone libraries, further work on these deviations focused on phylogenetic analyses of the genus Methanobacterium and on the development of detection systems. Determining a large part of the 23S rRNA gene sequences of the isolates and of selected type strains made it possible to complement the 16S rRNA analyses with additional phylogenetic studies. The isolates were each placed in a cluster of their own, apart from most reference strains of the genus Methanobacterium. Analogous to the pattern formation in the SAPD analysis, a differentiation into two branches emerged and, in agreement with the in silico sequence alignments, showed the closest relationship to Mb. formicicum MFT. The suitability of SAPD-PCR for deriving specific primer pairs was also demonstrated for methanogenic Archaea for the first time. Two primer pairs specific for the Methanobacterium isolates Mb1 to Mb10 and for the type strain Mb. formicicum MFT were derived and successfully applied in a direct-PCR assay to pure cultures and fermenter samples. Using the sequenced 23S rRNA gene fragments, oligonucleotide probes for fluorescence in situ hybridization experiments were designed. In practical tests, these probes proved specific for all tested representatives of the genus Methanobacterium as well as for Methanosphaera stadtmanae MCB-3T and Methanobrevibacter smithii PST. Thus, in the course of this work, the dominant methanogenic Archaea in NawaRo biogas plants were detected in multi-phase experiments, quantified, and narrowed down to only a few genera. Representatives of the four dominant genera were isolated, and detection systems for species of the genus Methanobacterium were established.

Relevance:

10.00%

Publisher:

Abstract:

With the development and capabilities of Smart Home systems, people today are entering an era in which household appliances are no longer just controlled by people, but also operated by a smart system, resulting in a more efficient, convenient, comfortable, and environmentally friendly living environment. A critical part of the Smart Home system is Home Automation, in which a Micro-Controller Unit (MCU) controls all the household appliances and schedules their operating times. This reduces electricity bills by shifting power consumption from on-peak hours to off-peak hours, given that each hour has a different price. In this paper, we propose an algorithm for scheduling multi-user power consumption and implement it on an FPGA board used as the MCU. This algorithm for scheduling tasks with discrete power levels is based on dynamic programming, which finds a scheduling solution close to the optimal one. We chose an FPGA as our system's controller because FPGAs have low complexity, parallel processing capability, a large number of I/O interfaces for further development, and are programmable in both software and hardware. In conclusion, the algorithm runs quickly on the FPGA board and the solution obtained is good enough for consumers.
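As an illustration of the idea, not the paper's implementation, a dynamic program over discrete energy units can decide how much power a single appliance draws each hour so that total cost under hourly prices is minimized; all names and figures below are hypothetical:

```python
# Hedged sketch (hypothetical figures): dynamic programming over discrete
# energy units. One appliance must consume `energy_needed` units before the
# deadline; each hour it runs at one of the discrete power `levels` (or
# idles), and each hour has its own price. dp[e] = minimum cost to have
# consumed e units so far; each hour relaxes dp with every allowed level.
def schedule_cost(prices, levels, energy_needed):
    INF = float("inf")
    dp = [0.0] + [INF] * energy_needed
    for price in prices:
        new_dp = dp[:]                      # idling (level 0) is always allowed
        for e in range(energy_needed + 1):
            if dp[e] == INF:
                continue
            for p in levels:
                if e + p <= energy_needed and dp[e] + p * price < new_dp[e + p]:
                    new_dp[e + p] = dp[e] + p * price
        dp = new_dp
    return dp[energy_needed]

# Off-peak hours 2-3 are cheap: the best plan runs 2 units in each of them.
print(schedule_cost([0.30, 0.10, 0.10, 0.30], [1, 2], 4))  # -> 0.4
```

The table has O(hours × energy) entries with O(levels) work per entry, which is the kind of regular, parallel-friendly structure that maps well onto FPGA fabric.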

Relevance:

10.00%

Publisher:

Abstract:

Lately, videoconference applications have experienced an evolution towards the World Wide Web: new technologies have given browsers real-time communication capabilities. In this context, WebRTC aims to provide this functionality by following and defining standards. Being a new effort, WebRTC still lacks advanced videoconferencing services such as session recording, media mixing and adaptation to varying network conditions. This paper analyzes these challenges and proposes, as a solution, an architecture based on a traditional communications entity: the Multipoint Control Unit, or MCU.

Relevance:

10.00%

Publisher:

Abstract:

Multi-party videoconference systems use MCU (Multipoint Control Unit) devices to forward media streams. In this paper we describe a mechanism that allows the mobility of such streams between MCU devices. This mobility is especially useful when streams must be redistributed to meet scalability requirements, which is mandatory in Cloud scenarios in order to adapt the number of MCUs and their capabilities to variations in user demand. Our mechanism is based on the TURN (Traversal Using Relays around NAT) standard and adapts the MICE (Mobility with ICE) specification to the requirements of this kind of scenario. We conclude that this mechanism achieves stream mobility transparently for client nodes and without interruptions for users.
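The policy that decides which streams to move is orthogonal to the TURN/MICE handover itself; a hypothetical sketch of such a selection policy (illustrative only, not taken from the paper):

```python
# Hypothetical sketch of the decision step behind stream redistribution:
# when an MCU exceeds its stream capacity, migrate streams to the
# least-loaded MCU that still has room. The actual handover would use the
# TURN/MICE mechanism; only the selection policy is sketched here.
def plan_migrations(loads, capacity):
    """loads: dict mcu_id -> list of stream ids (mutated in place).
    Returns a list of (stream, src, dst) migrations."""
    moves = []
    for src in list(loads):
        while len(loads[src]) > capacity:
            # candidate destinations: any other MCU with spare capacity
            candidates = [m for m in loads
                          if m != src and len(loads[m]) < capacity]
            if not candidates:
                break  # nowhere to move; the overload persists
            dst = min(candidates, key=lambda m: len(loads[m]))
            stream = loads[src].pop()
            loads[dst].append(stream)
            moves.append((stream, src, dst))
    return moves
```

Because the handover is transparent to clients, such a planner can run whenever MCUs are added or removed in the Cloud deployment.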

Relevance:

10.00%

Publisher:

Abstract:

This Bachelor's Final Project explains the procedure followed in studying, designing and developing Ackuaria, a portal for monitoring and analyzing statistics of real-time communications. The results obtained and the graphical interface developed for a better user experience are then presented. Ackuaria relies on Licode, an open-source project developed at the Universidad Politécnica de Madrid, specifically in the Next Generation Internet Group of the Escuela Técnica Superior de Ingenieros de Telecomunicación. Licode makes it possible to run a streaming and videoconferencing service on the user's own infrastructure. It is designed to be fully scalable and is mainly oriented to the Cloud, although it is perfectly usable on physical infrastructure. Licode is in turn based on WebRTC, a protocol developed by the W3C (World Wide Web Consortium) and the IETF (Internet Engineering Task Force) for transmitting and receiving audio, video and data streams through the browser. It requires no additional installation, so establishing a peer-to-peer videoconferencing session is very simple. Licode uses an MCU (Multipoint Control Unit) to avoid making every connection between users peer-to-peer: the MCU acts as one more WebRTC client through which all streams pass, multiplexing and redirecting them where needed. This saves bandwidth and device resources very significantly. There is a growing need among users of Licode, and of any videoconferencing service in general, to manage their infrastructure on the basis of reliable data and statistics. Their goals vary widely, from studying the behavior of WebRTC in different scenarios to monitoring usage in order to account for the time published by each user.
In every case there was a common need for a tool showing at any moment what is happening in the Licode service, and storing all the information for later analysis. To develop Ackuaria, a study of real-time communications was carried out to determine which parameters were essential and useful to monitor. Based on this study, the Licode architecture was updated so that it obtains all the necessary data and sends it in a form that Ackuaria can collect. The monitoring portal then processes that information and displays it clearly and in an organized way, in addition to providing a REST API to the user.

Relevance:

10.00%

Publisher:

Abstract:

SRAM-based Field-Programmable Gate Arrays (FPGAs) are built on Static RAM (SRAM) technology configuration memory. They present a number of features that make them very convenient for building complex embedded systems. First of all, they benefit from low Non-Recurrent Engineering (NRE) costs, as the logic and routing elements are pre-implemented (the user design defines their connections). Also, as opposed to other FPGA technologies, they can be reconfigured (even in the field) an unlimited number of times. Moreover, Xilinx SRAM-based FPGAs feature Dynamic Partial Reconfiguration (DPR), which allows partially reconfiguring the FPGA without disrupting the application. Finally, they feature a high logic density, high processing capability and a rich set of hard macros. However, one limitation of this technology is its susceptibility to ionizing radiation, which increases with technology scaling (smaller geometries, lower voltages and higher frequencies). This is a first-order concern for applications in harsh radiation environments and requiring high dependability. Ionizing radiation leads to long-term degradation as well as instantaneous faults, which can in turn be reversible or produce irreversible damage. In SRAM-based FPGAs, radiation-induced faults can appear at two architectural layers, which are physically overlaid on the silicon die. The Application Layer (or A-Layer) contains the user-defined hardware, and the Configuration Layer (or C-Layer) contains the (volatile) configuration memory and its support circuitry. Faults at either layer can imply a system failure, which may be more or less tolerable depending on the dependability requirements. In the general case, such faults must be managed in some way.
This thesis is about managing SRAM-based FPGA faults at system level, in the context of autonomous and dependable embedded systems operating in a radiative environment. The focus is mainly on space applications, but the same principles can be applied to ground applications; the main differences between them are the radiation level and the possibility for maintenance. The different techniques for A-Layer and C-Layer fault management are classified and their implications for system dependability are assessed. Several architectures are proposed, both for single-layer and dual-layer Fault Managers. For the latter, a novel, flexible and versatile architecture is proposed: it manages both layers concurrently in a coordinated way, and allows balancing redundancy level and dependability. For the purpose of validating dynamic fault management techniques, two different solutions are developed. The first one is a simulation framework for C-Layer Fault Managers, based on SystemC as modeling language and event-driven simulator. This framework and its associated methodology allow exploring the Fault Manager design space, decoupling its design from the target FPGA development. The framework includes models for both the FPGA C-Layer and the Fault Manager, which can interact at different abstraction levels (at the configuration-frame level and at the JTAG or SelectMAP physical level). The framework is configurable, scalable and versatile, and includes fault injection capabilities. Simulation results for some scenarios are presented and discussed. The second one is a validation platform for Xilinx Virtex FPGA Fault Managers. The platform hosts three Xilinx Virtex-4 FX12 FPGA Modules and two general-purpose 32-bit Microcontroller Unit (MCU) Modules. The MCU Modules allow prototyping software-based C-Layer and A-Layer Fault Managers. Each FPGA Module implements one A-Layer Ethernet link (through an Ethernet switch) with one of the MCU Modules, and one C-Layer JTAG link with the other. In addition, both MCU Modules exchange commands and data over an internal UART link. Similarly to the simulation framework, fault injection capabilities are implemented. Test results for some scenarios are also presented and discussed. In summary, this thesis covers the whole process from describing the problem of radiation-induced faults in SRAM-based FPGAs, through identifying and classifying fault management techniques and proposing Fault Manager architectures, to finally validating them by simulation and test. The proposed future work is mainly related to the implementation of radiation-hardened System Fault Managers.
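One representative C-Layer technique in the kind of classification described above is configuration-memory scrubbing; a minimal illustrative sketch follows, where `read_frame` and `write_frame` are hypothetical stand-ins for a JTAG or SelectMAP access driver, not an interface from the thesis:

```python
# Illustrative sketch only: a C-Layer scrubbing loop that reads back each
# configuration frame, compares it with a golden copy, and rewrites it on
# mismatch. `read_frame` / `write_frame` are hypothetical stand-ins for a
# JTAG or SelectMAP driver.
def scrub(golden, read_frame, write_frame):
    """golden: list of frame contents, indexed by frame address.
    Returns the addresses of the frames that were repaired."""
    repaired = []
    for addr, good in enumerate(golden):
        if read_frame(addr) != good:    # upset detected in this frame
            write_frame(addr, good)     # repair by writing back golden data
            repaired.append(addr)
    return repaired
```

A Fault Manager of the kind validated in the thesis would run such a loop periodically and coordinate it with A-Layer redundancy (e.g. TMR voting) so that repairs happen before a second upset accumulates.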

Relevance:

10.00%

Publisher:

Abstract:

Polymer/montmorillonite nanocomposites were prepared. Intercalation of 2-aminobenzene sulfonic acid with aniline monomers into cation-modified montmorillonite was followed by oxidative polymerization of the monomers in the interlayer spacing. The clay was prepared by a cation-exchange process between the sodium cation in (M–Na) and the copper cation (M–Cu). XRD analyses show that the basal spacing (d-spacing) of M–Cu changes depending on the inorganic cation and on the polymer intercalated in the M–Cu structure. TGA analyses reveal that the polymer/M–Cu composites are less thermally stable than M–Cu. The conductivity of the composites is found to be 10³ times higher than that of M–Cu. Microscopic examination, including TEM images of the nanocomposite, showed an entirely different and more compatible morphology. Remarkable differences in the properties of the polymers were also observed by UV–Vis and FTIR, suggesting that the polymer produced in the presence of aniline has a higher degree of branching. The electrochemical behavior of the polymers extracted from the nanocomposites was studied by cyclic voltammetry, which indicates that the electroactivity of the nanocomposite gradually increases with aniline content in the polymer chain.

Relevance:

10.00%

Publisher:

Abstract:

Wireless sensor networks (WSNs) have shown wide applicability to many fields, including monitoring of environmental, civil, and industrial settings. WSNs, however, are resource-constrained by many competing factors spanning their hardware, software, and networking. One of the central resource constraints is the charge consumption of WSN nodes: with finite energy supplies, low charge consumption is needed to ensure long lifetimes and the success of WSNs. This thesis details the design of a power system to support long-term operation of WSNs. The power system's development occurs in parallel with a custom WSN from the Queen's MEMS Lab (QML-WSN), with the goal of supporting a 1+ year lifetime without sacrificing functionality. The final power system design uses a TPS62740 DC-DC converter with AA alkaline batteries to supply the nodes efficiently while providing battery-monitoring functionality and an expansion slot for future development. Testing tools for measuring current draw and charge consumption were created, along with analysis and processing software. Through their use, the charge consumption of the power system was drastically lowered, and issues in QML-WSN were identified and resolved, including the proper shutdown of accelerometers and an incorrect microcontroller unit (MCU) power-pin connection. Controlled current profiling revealed unexpected node behaviour and detailed current-voltage relationships. These relationships were used with a lifetime projection model to estimate a lifetime of 521-551 days, depending on the mode of operation. The power system and QML-WSN were tested in a long-term trial lasting 272+ days in an industrial testbed monitoring an air compressor pump. Environmental factors were found to influence node behaviour, leading to increased charge consumption, while a node in an office setting was still operating at the conclusion of the trial.
This agrees with the lifetime projection and gives a strong indication that a 1+ year lifetime is achievable. Additionally, a light-weight charge consumption model was developed that allows the charge consumption of nodes in a distributed WSN to be monitored. This model was tested in a laboratory setting, demonstrating 95%+ accuracy for high packet-reception-rate WSNs across varying data rates, battery supply capacities, and runtimes up to full battery depletion.
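A lifetime projection of the kind described reduces, at its simplest, to usable battery capacity divided by average current draw; the following sketch uses placeholder figures, not the thesis values:

```python
# Sketch with placeholder figures (not the thesis model or values): project
# node lifetime as usable battery capacity over average current draw.
def projected_lifetime_days(capacity_mah, avg_current_ma, derating=0.85):
    usable_mah = capacity_mah * derating       # derate the rated capacity
    return usable_mah / avg_current_ma / 24.0  # mAh / mA = hours; -> days

# e.g. ~2500 mAh AA cells at an assumed average draw of 0.16 mA
print(round(projected_lifetime_days(2500, 0.16)))  # -> 553
```

The thesis model is richer, drawing on measured current-voltage relationships and per-mode profiles, but this ratio shows why average draws well below a milliamp are essential for a 1+ year target.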