873 results for Energy Efficient Algorithms
Abstract:
The performance, energy efficiency and cost improvements due to traditional technology scaling have begun to slow down and present diminishing returns. Underlying reasons for this trend include fundamental physical limits of transistor scaling, the growing significance of quantum effects as transistors shrink, and a growing mismatch between transistors and interconnects regarding size, speed and power. Continued Moore's Law scaling will not come from technology scaling alone, and must involve improvements to design tools and the development of new disruptive technologies such as 3D integration. 3D integration offers potential improvements to interconnect power and delay by translating the routing problem into the third dimension, and facilitates transistor density scaling independent of technology node. Furthermore, 3D IC technology opens up a new architectural design space of heterogeneously integrated high-bandwidth CPUs. Vertical integration promises to provide the CPU architectures of the future by integrating high-performance processors with on-chip high-bandwidth memory systems and highly connected network-on-chip structures. Such techniques can overcome the well-known CPU performance bottlenecks referred to as the memory and communication walls. However, the promising improvements to performance and energy efficiency offered by 3D CPUs do not come without cost, both in the financial investment required to develop the technology and in the increased complexity of design. Two main limitations of 3D IC technology have been heat removal and TSV reliability. Transistor stacking increases power density, current density and thermal resistance in air-cooled packages. Furthermore, the technology introduces vertical through-silicon vias (TSVs) that create new points of failure in the chip and require the development of new BEOL technologies. Although these issues can be controlled to some extent using thermal- and reliability-aware physical and architectural 3D design techniques, high-performance embedded cooling schemes, such as micro-fluidic (MF) cooling, are fundamentally necessary to unlock the true potential of 3D ICs. A new paradigm is being put forth which integrates the computational, electrical, physical, thermal and reliability views of a system. The unification of these diverse aspects of integrated circuits is called Co-Design. Independent design and optimization of each aspect leads to sub-optimal designs due to a lack of understanding of cross-domain interactions and their impacts on the feasibility region of the architectural design space. Co-Design enables optimization across layers with a multi-domain view and thus unlocks new high-performance and energy-efficient configurations. Although the co-design paradigm is becoming increasingly necessary in all fields of IC design, it is even more critical in 3D ICs where, as we show, the inter-layer coupling and higher degree of connectivity between components exacerbate the interdependence between architectural parameters, physical design parameters and the multitude of metrics of interest to the designer (i.e., power, performance, temperature and reliability). In this dissertation we present a framework for multi-domain co-simulation and co-optimization of 3D CPU architectures with both air and MF cooling solutions. Finally, we propose an approach for design space exploration and modeling within the new Co-Design paradigm, and discuss possible avenues for future improvement of this work.
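To make the co-design idea above concrete, the following minimal Python sketch evaluates candidate 3D CPU configurations against toy performance, power, thermal and reliability models in a single loop; every model and parameter here is a placeholder assumption, not the co-simulation framework developed in the dissertation.

```python
from itertools import product

# Hypothetical single-domain models; in a real co-design flow these would be
# coupled architectural, physical, thermal and reliability simulators.
def performance(layers, freq_ghz):            # relative throughput
    return layers ** 0.5 * freq_ghz           # stacking boosts memory bandwidth

def power(layers, freq_ghz):                  # dynamic power grows with f and stack
    return 0.8 * layers * freq_ghz ** 2

def peak_temp(p_watts, cooling):              # crude thermal-resistance model
    r_theta = {"air": 1.2, "microfluidic": 0.3}[cooling]
    return 45.0 + r_theta * p_watts

def feasible(t_celsius):                      # reliability limit on junction temperature
    return t_celsius <= 95.0

best = None
for layers, freq, cooling in product([1, 2, 4], [1.0, 2.0, 3.0], ["air", "microfluidic"]):
    p = power(layers, freq)
    t = peak_temp(p, cooling)
    if not feasible(t):
        continue                              # cross-domain constraint prunes the space
    score = performance(layers, freq) / p     # energy efficiency (performance per watt)
    if best is None or score > best[0]:
        best = (score, layers, freq, cooling, t)

print("best perf/W configuration:", best)
```

The only point of the sketch is that the thermal/reliability constraint prunes the architectural space, so optimizing each domain in isolation would miss the jointly feasible optimum.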
Abstract:
This thesis presents the theoretical, methodological and programmatic proposal for a multifamily residential building located in the urban expansion area of Parnamirim/RN, within the Minha Casa Minha Vida program and with energy efficiency level "A" according to the Regulamento Técnico de Qualidade (RTQ-R/INMETRO) for residential buildings. The design development initially consisted of procedures such as theoretical study, architectural programming and case studies. With the delimitation of a solution field, situated between the reference and the context, alternatives were studied to determine the final solution and the architectural detailing of the proposal. The architectural program was built based on the Problem Seeking method (Peña and Parshall, 2001), and the research highlighted aspects of reducing environmental impact and of the Minha Casa Minha Vida program, among others. The design process was characterized by the incorporation of the aspects reviewed and programmed, seeking to reconcile them in a building that is economically viable, of socio-spatial quality and energy efficient. The results show that it is possible to obtain a building that meets the constraints of the housing program and achieves energy efficiency level A, along with many other environmental and constructive qualities, particularly through architectural design.
Abstract:
The performance of building envelopes and roofing systems significantly depends on accurate knowledge of wind loads and the response of envelope components under realistic wind conditions. Wind tunnel testing is a well-established practice for determining wind loads on structures. For small structures, much larger model scales are needed than for large structures in order to maintain modeling accuracy and minimize Reynolds number effects. In these circumstances the ability to obtain a large enough turbulence integral scale is usually compromised by the limited dimensions of the wind tunnel, meaning that it is not possible to simulate the low-frequency end of the turbulence spectrum. Such flows are called flows with Partial Turbulence Simulation. In this dissertation, the test procedure and scaling requirements for tests with partial turbulence simulation are discussed. A theoretical method is proposed for including the effects of low-frequency turbulence in the post-test analysis. In this theory the turbulence spectrum is divided into two distinct statistical processes, one at high frequencies, which can be simulated in the wind tunnel, and one at low frequencies, which can be treated in a quasi-steady manner. The joint probability of load resulting from the two processes is derived, from which full-scale equivalent peak pressure coefficients can be obtained. The efficacy of the method is demonstrated by comparing predicted data, derived from tests on large-scale models of the Silsoe Cube and Texas Tech University buildings in the Wall of Wind facility at Florida International University, with the available full-scale data. For multi-layer building envelopes such as rain-screen walls, roof pavers, and vented energy-efficient walls, not only peak wind loads but also their spatial gradients are important. Wind-permeable roof claddings such as roof pavers are not well covered in many existing building codes and standards. Large-scale experiments were carried out to investigate the wind loading on concrete pavers, including wind blow-off tests and pressure measurements. Simplified guidelines were developed for the design of loose-laid roof pavers against wind uplift. The guidelines are formatted so that use can be made of the existing information in codes and standards, such as ASCE 7-10, on pressure coefficients for components and cladding.
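As a rough illustration of the quasi-steady combination of the two statistical processes described above, the Monte Carlo sketch below multiplies placeholder high-frequency peak pressure coefficients by the squared low-frequency velocity fluctuation; the distributions, parameters and quantile are assumptions for illustration, not the dissertation's data or exact estimator.

```python
import numpy as np

rng = np.random.default_rng(0)

# High-frequency process: peak pressure coefficients measured in the tunnel,
# referenced to the mean tunnel speed (placeholder Gumbel sample).
cp_hf_peaks = rng.gumbel(loc=1.8, scale=0.25, size=100_000)

# Low-frequency process: the missing large-scale gusts, treated quasi-steadily as
# a fluctuation u of the reference velocity, U = U_mean * (1 + u).
u_low = rng.normal(loc=0.0, scale=0.12, size=100_000)   # assumed low-frequency intensity

# Quasi-steady combination: pressure scales with the square of the instantaneous
# low-frequency velocity, so the two independent processes multiply in the load domain.
cp_equivalent = cp_hf_peaks * (1.0 + u_low) ** 2

# A simple high quantile stands in for a formal peak-value estimator.
print("full-scale equivalent peak Cp estimate:", round(float(np.quantile(cp_equivalent, 0.98)), 2))
```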
Abstract:
Two key solutions to reduce greenhouse gas emissions and increase overall energy efficiency are to maximize the utilization of renewable energy resources (RERs) to generate energy for load consumption and to shift to low- or zero-emission plug-in electric vehicles (PEVs) for transportation. The aging and overburdened U.S. power grid infrastructure is under tremendous pressure to handle the issues involved in the penetration of RERs and PEVs. The future power grid should be designed for the effective utilization of distributed RERs and distributed generation, intelligently responding to varying customer demand, including PEVs, with a high level of security, stability and reliability. This dissertation develops and verifies such a hybrid AC-DC power system. The system operates in a distributed manner, incorporating multiple components on both the AC and DC sides, and works in both grid-connected and islanded modes. The verification was performed on a laboratory-based hybrid AC-DC power system testbed serving as a hardware/software platform. In this system, RER emulators, together with their maximum power point tracking technology and power electronic converters, were designed to test different energy harvesting algorithms. Energy storage devices, including lithium-ion batteries and ultra-capacitors, were used to optimize the performance of the hybrid power system. A lithium-ion battery smart energy management system with thermal and state-of-charge self-balancing was proposed to protect the energy storage system. A grid-connected DC PEV parking-garage emulator with five lithium-ion batteries was also designed, with smart charging functions that can emulate future vehicle-to-grid (V2G), vehicle-to-vehicle (V2V) and vehicle-to-house (V2H) services, including grid voltage and frequency regulation, spinning reserve, microgrid islanding detection and energy resource support. The results show successful integration of the developed techniques for control and energy management of future hybrid AC-DC power systems with high penetration of RERs and PEVs.
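As an example of the kind of energy-harvesting algorithm such an RER-emulator testbed can exercise, here is a minimal perturb-and-observe maximum power point tracking loop; the callback names and parameters are hypothetical, not the dissertation's implementation.

```python
def perturb_and_observe(read_voltage, read_current, set_voltage,
                        v_start=30.0, step=0.5, iterations=200):
    """Minimal perturb-and-observe MPPT loop (illustrative only).

    read_voltage/read_current/set_voltage are callbacks into the source emulator
    and its converter; the tracker nudges the operating voltage and keeps moving
    in whichever direction increases the extracted power.
    """
    v_ref = v_start
    set_voltage(v_ref)
    last_p = read_voltage() * read_current()
    direction = +1
    for _ in range(iterations):
        v_ref += direction * step
        set_voltage(v_ref)
        p = read_voltage() * read_current()
        if p < last_p:            # power dropped: reverse the perturbation
            direction = -direction
        last_p = p
    return v_ref
```

In a testbed of the kind described above, the three callbacks would be bound to the emulator and converter interfaces, and alternative harvesting algorithms would be swapped in behind the same calls for comparison.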
Abstract:
This dissertation studies the manipulation of particles using acoustic stimulation for applications in microfluidics and the templating of devices. The term particle is used here to denote any solid, liquid or gaseous material that has properties distinct from those of the fluid in which it is suspended. Manipulation means taking control of the movement of the particles and positioning them at specified locations. Using devices microfabricated out of silicon, the behavior of particles under acoustic stimulation was studied with the main purpose of aligning the particles at either low-pressure zones, known as nodes, or high-pressure zones, known as antinodes. By aligning particles at the nodes in a flow system, these particles can be focused at the center or walls of a microchannel in order to ultimately separate them. Such separations are of high scientific importance, especially in the biomedical domain, since acoustophoresis provides a unique approach to separation based on density and compressibility, unparalleled by other techniques. Controlling and aligning the particles in various geometries and configurations was successfully achieved by controlling the acoustic waves. Apart from their use in flow systems, a stationary suspended-particle device was developed to provide controllable light transmittance based on acoustic stimuli. Using a glass compartment and a carbon-particle suspension in an organic solvent, the device responded to acoustic stimulation by aligning the particles. The alignment of light-absorbing carbon particles afforded an increase in visible light transmittance as high as 84.5%, and it was controlled by adjusting the frequency and amplitude of the acoustic wave. The device also demonstrated alignment memory, rendering it energy-efficient. A similar device for particles suspended in a monomer enabled the development of electrically conductive films. These films were based on networks of conductive particles. Elastomers doped with conductive metal particles were rendered surface-conductive at particle loadings as low as 1% by weight using acoustic focusing. The resulting films were flexible and had transparencies exceeding 80% in the visible spectrum (400-800 nm), and electrical bulk conductivities exceeding 50 S/cm.
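The density- and compressibility-based separation mentioned above is conventionally explained through the primary acoustic radiation force on a small particle in a one-dimensional standing wave; a standard textbook (Gor'kov-type) expression, not a result specific to this dissertation, is

\[ F_{\mathrm{rad}} = 4\pi\,\Phi(\tilde{\kappa},\tilde{\rho})\,k a^{3} E_{\mathrm{ac}} \sin(2kx), \qquad \Phi(\tilde{\kappa},\tilde{\rho}) = \frac{1}{3}\left[\frac{5\tilde{\rho}-2}{2\tilde{\rho}+1}-\tilde{\kappa}\right], \]

where a is the particle radius, k the wavenumber, E_ac the acoustic energy density, and \(\tilde{\rho}=\rho_p/\rho_f\), \(\tilde{\kappa}=\kappa_p/\kappa_f\) the particle-to-fluid density and compressibility ratios; particles with a positive contrast factor \(\Phi\) migrate to the pressure nodes and those with a negative \(\Phi\) to the antinodes.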
Abstract:
Expected damages of environmental risks depend both on their intensities and on their probabilities. There is very little control over the probabilities of climate-related disasters such as hurricanes. Therefore, social science researchers are interested in identifying preparation and mitigation measures that build human resilience to disasters and avoid serious losses. Conversely, environmental degradation, a process through which the natural environment is compromised in some way, has been accelerated by human activities. While scientists are finding effective ways to prevent and reduce pollution, society often fails to adopt these preventive methods. Research on psychological and contextual characteristics offers specific lessons for policy interventions that encourage human efforts to reduce pollution. This dissertation presents four essays on effective policy regimes that encourage pro-environmental preferences in consumption and production and promote risk-mitigation behavior in the face of natural hazards. The first essay describes how the speed of adoption of environmentally friendly technologies is driven largely by consumers' preferences and their learning dynamics rather than by producers' choices. The second essay is an empirical analysis of a choice experiment designed to understand preferences for energy-efficient investments. The empirical analysis suggests that subjects tend to increase energy-efficient investment when they pay a pollution tax proportional to their total expenditure on energy consumption. However, investments in energy efficiency seem to be crowded out when subjects have the option to buy health insurance to cover pollution-related health risks. In the context of hurricane risk mitigation, and drawing on evidence from the recently adopted My Safe Florida Home (MSFH) program of the State of Florida, the third essay shows that households with home insurance, prior experience of damage, and a higher sense of vulnerability to hurricanes are more likely to allow home inspection to seek mitigation information. The fourth essay evaluates the impact of utility disruption on household well-being based on the responses to a household-level phone survey in the wake of Hurricane Wilma. The findings highlight the need for significant investment to enhance the capacity for rapid utility restoration after a hurricane event in the context of South Florida.
Abstract:
Nanoparticles are often considered efficient drug delivery vehicles for precisely dispensing therapeutic payloads specifically to diseased sites in the patient's body, thereby minimizing the toxic side effects of the payloads on healthy tissue. However, the fundamental physics that underlies the nanoparticles' intrinsic interaction with the surrounding cells is inadequately elucidated. The ability of nanoparticles to precisely control the release of their payloads externally (on demand), without depending on the physiological conditions of the target sites, has the potential to enable patient- and disease-specific nanomedicine, also known as Personalized NanoMedicine (PNM). In this dissertation, magneto-electric nanoparticles (MENs) were utilized for the first time to enable important functions, such as (i) a field-controlled, high-efficacy, dissipation-free targeted drug delivery system with on-demand release at the sub-cellular level, (ii) non-invasive, energy-efficient stimulation of deep brain tissue at body temperature, and (iii) a high-sensitivity contrast agent to map neuronal activity in the brain non-invasively. First, this dissertation focuses on using MENs as energy-efficient, dissipation-free, field-controlled nano-vehicles for targeted delivery and on-demand release of an anti-cancer drug, Paclitaxel (Taxol), and an anti-HIV drug, AZT 5'-triphosphate (AZTTP), from 30-nm MENs (CoFe2O4-BaTiO3), by applying low-energy DC and low-frequency (below 1000 Hz) AC fields to separate the functions of delivery and release, respectively. Second, this dissertation focuses on the use of MENs, studied through numerical simulations, to non-invasively stimulate deep-brain neuronal activity via the application of a low-energy, low-frequency external magnetic field that activates intrinsic electric dipoles at the cellular level. Third, this dissertation describes the use of MENs to track neuronal activity in the brain non-invasively, using magnetic resonance imaging and magnetic nanoparticle imaging, by monitoring the changes in the magnetization of the MENs surrounding the neuronal tissue under different states. The potential therapeutic and diagnostic impact of this innovative and novel study is highly significant not only in HIV/AIDS, cancer, Parkinson's and Alzheimer's diseases but also in many CNS and other diseases, where the ability to remotely control targeted drug delivery/release and diagnostics is key.
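For background on the field control exploited above: in a linear magnetoelectric material, an applied magnetic field H induces an additional electric polarization (a generic constitutive relation, not a dissertation-specific result),

\[ \Delta P = \alpha_{\mathrm{ME}}\, H, \]

so a DC field can statically bias the particle's surface charge (useful for binding and delivery), while a low-frequency AC field modulates it periodically; this is consistent with the separation of the delivery and release functions between DC and AC fields described in the abstract.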
Abstract:
Cassava contributes significantly to biobased material development. Conventional approaches to its bio-derivative production and application generate significant waste, pose tailored material development challenges, and carry negative environmental impacts and application limitations. Transforming cassava into sustainable value-added resources requires redesigned approaches. Harnessing unexplored material sources and downstream process innovations can mitigate these challenges. The ultimate goal was to propose an integrated sustainable process system for cassava biomaterial development and potential application. An improved simultaneous release recovery cyanogenesis (SRRC) methodology, incorporating intact bitter cassava, was developed and standardized. Films were formulated and characterised, their mass transport behaviour under simulated real-distribution-chain conditions was quantified, and they were optimised for desirable properties. An integrated process design system for sustainable waste elimination and biomaterial development was developed. Films and bio-derivatives for desired MAP, fast-delivery nutraceutical excipient and antifungal active coating applications were demonstrated. SRRC-processed intact bitter cassava produced safe bio-derivatives at significantly higher yield than peeled cassava, guaranteeing 16% waste elimination. Process standardization transformed the entire root into higher-yield, clarified-colour bio-derivatives with an efficient material balance at optimal global desirability. Solvent mass transport through temperature- and humidity-stressed films induced structural changes and influenced water vapour and oxygen permeability. The seven-unit integrated process design led to cost-effective, energy-efficient and green cassava processing and biomaterials with zero environmental footprint. The optimised bio-derivatives and films demonstrated application in desirable in-package O2/CO2 levels, mould-growth inhibition, faster nutraceutical tablet excipient dissolution and release, and smooth thymol-encapsulated antifungal coatings. Novel material resources, processing without root peeling, waste elimination, and a desirable standardised methodology present promising process integration tools for sustainable cassava biobased system development. The emerging design outcomes have potential applications in mitigating cyanide challenges and provide bio-derivative development pathways. The process system leads to zero waste, with the potential to reshape current one-way processes into circular designs modelled on nature's effective approaches. The use of indigenous cassava components as natural material reinforcements and the SRRC processing approach have initiated a process with potential for wider deployment in broad product research and development. This research contributes to scientific knowledge in materials science and engineering process design.
Abstract:
Kenya lies in the equatorial tropics of East Africa and is known as a global hot spot for aflatoxin contamination, particularly in maize. These toxic and carcinogenic compounds are fungal metabolites and therefore depend strongly on water activity, which affects both the drying and the storability of foodstuffs and is thus an important factor in the development of energy-efficient, quality-oriented processing. The present work set out to investigate the change in water activity during the convective drying of maize. Using optimization software (MS Excel Solver), the gravimetric moisture loss of maize cobs at 37°C, 43°C and 53°C was predicted from sensor-recorded thermo-hygrometric data. This range represents the transition between low- and high-temperature drying. The results show clear differences in the behaviour of the kernels and the cob. Drying in the range of 35°C to 45°C combined with high airflow velocities (> 1.5 m/s) favoured drying of the kernels over the cob and can therefore be recommended for energy-efficient drying of cobs with a high initial moisture content. Further investigations addressed the behaviour of different bulk configurations in batch drying, which is common for maize. Dehusked and shelled maize increased the airflow resistance of the bulk, leading to both higher energy demand and less uniform drying, which could only be remedied with additional technical effort such as mixing devices or airflow reversal. Because of the lower effort required for aeration and monitoring, drying whole cobs in undisturbed bulks can therefore be recommended in particular for small farms in Kenya. The work also investigated dehumidification using a desiccant (silica gel) combined with a heat source and an enclosed air volume, and compared it with conventional drying. The results showed comparable dehumidification rates during the first 5 hours of drying. The air condition when using silica gel was influenced mainly by the enclosed air volume and the temperature. Granular desiccants are advantageous for maize drying from a hygienic point of view and can be regenerated, for example, in simple ovens, so that quality losses such as those occurring in high-temperature or open-air drying can be avoided. High-quality maize drying equipment is very capital-intensive, but the present work suggests that simple improvements such as sensor-controlled aeration of batch dryers, the use of desiccants and an adapted bulk depth can be practicable solutions for smallholder farmers in Kenya. Further research is needed on these topics, including the use of renewable energy sources.
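The Solver-based moisture-loss prediction mentioned above can be imitated with a simple thin-layer drying fit; the sketch below uses the Lewis (Newton) model on made-up gravimetric data and is only illustrative of the approach, not the thesis's actual model or measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical gravimetric data: drying time (h) and moisture ratio of maize cobs.
t_hours = np.array([0, 2, 4, 8, 12, 24, 36, 48], dtype=float)
mr_obs  = np.array([1.00, 0.91, 0.83, 0.70, 0.60, 0.38, 0.25, 0.17])

# Thin-layer (Lewis/Newton) model MR(t) = exp(-k t); the Solver-based approach in
# the thesis minimizes the same kind of residual, here done with least squares.
def lewis(t, k):
    return np.exp(-k * t)

(k_fit,), _ = curve_fit(lewis, t_hours, mr_obs, p0=[0.05])
print(f"drying constant k = {k_fit:.4f} 1/h")

# Predicted time to reach a target moisture ratio (e.g. MR = 0.2):
print("hours to MR = 0.2:", np.log(0.2) / -k_fit)
```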
Abstract:
Early definitions of the Smart Building focused almost entirely on the technology aspect and did not consider user interaction at all; indeed, today we would attribute them more to the concept of the automated building. In this sense, control of comfort conditions inside buildings is a well-investigated problem, since it has a direct effect on users' productivity and an indirect effect on energy saving. From the users' perspective, a typical environment can be considered comfortable if it is capable of providing adequate thermal comfort, visual comfort, indoor air quality and acoustic comfort. In recent years, the scientific community has dealt with many challenges, especially from a technological point of view. For instance, smart sensing devices, the internet and communication technologies have enabled a new paradigm called edge computing, which brings computation and data storage closer to where they are needed in order to improve response times and save bandwidth. This has allowed us to improve services, sustainability and decision making. Many solutions have been implemented, such as smart classrooms, control of the thermal conditions of the building, monitoring of HVAC data for campus energy efficiency, and so forth. Though these projects contribute to the realization of a smart campus, a framework for the smart campus is yet to be defined. These new technologies have also introduced new research challenges: within this thesis work, some of the principal open challenges are faced, proposing a new conceptual framework, technologies and tools to move the actual implementation of smart campuses forward. With this in mind, several problems known in the literature have been investigated: occupancy detection, noise monitoring for acoustic comfort, context awareness inside the building, indoor wayfinding, strategic sensor deployment for air quality, and book preservation.
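For the noise-monitoring use case listed above, an edge node typically reduces raw microphone data to an equivalent continuous sound level before transmission; a minimal sketch (assuming samples already calibrated to pascals, and omitting A-weighting) could look like this:

```python
import numpy as np

P_REF = 20e-6  # reference sound pressure in pascals

def leq_db(pressure_samples: np.ndarray) -> float:
    """Equivalent continuous sound level (Leq) over a measurement window.

    Assumes the microphone samples are already calibrated to pascals; the
    A-weighting and octave analysis needed for a real acoustic-comfort index
    are omitted in this sketch.
    """
    mean_square = np.mean(pressure_samples ** 2)
    return 10.0 * np.log10(mean_square / P_REF ** 2)

# Example: one second of a 1 kHz tone at ~0.1 Pa RMS (about 74 dB), sampled at 16 kHz.
t = np.arange(16_000) / 16_000
samples = 0.1 * np.sqrt(2) * np.sin(2 * np.pi * 1_000 * t)
print(f"Leq = {leq_db(samples):.1f} dB")
```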
Abstract:
In recent years, radars have been used in many applications, such as precision agriculture and advanced driver assistance systems. Optimal techniques for the estimation of the number of targets and of their coordinates require solving multidimensional optimization problems entailing huge computational efforts. This has motivated the development of sub-optimal estimation techniques able to achieve good accuracy at a manageable computational cost. Another technical issue in advanced driver assistance systems is the tracking of multiple targets. Even though various filtering techniques have been developed, new efficient and robust algorithms for target tracking can be devised by exploiting a probabilistic approach based on factor graphs and the sum-product algorithm. The two contributions of this dissertation are the investigation of the filtering and smoothing problems from a factor graph perspective and the development of efficient algorithms for two- and three-dimensional radar imaging. Concerning the first contribution, a new factor graph for filtering is derived and the sum-product rule is applied to this graphical model; this makes it possible to interpret known algorithms and to develop new filtering techniques. Then, a general method, based on graphical modelling, is proposed to derive filtering algorithms that involve a network of interconnected Bayesian filters. Finally, the proposed graphical approach is exploited to devise a new smoothing algorithm. Numerical results for dynamic systems show that our algorithms can achieve a better complexity-accuracy tradeoff and better tracking capability than other techniques in the literature. Regarding radar imaging, various algorithms are developed for frequency-modulated continuous-wave radars; these algorithms rely on novel and efficient methods for the detection and estimation of multiple superimposed tones in noise. The accuracy achieved in the presence of multiple closely spaced targets is assessed on the basis of both synthetically generated data and measurements acquired with two commercial multiple-input multiple-output radars.
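A concrete instance of "interpreting known algorithms" via sum-product message passing on a filtering factor graph is the Kalman filter for a linear-Gaussian state-space model; the sketch below is the generic textbook recursion, not the new graph structures or the network-of-filters methods derived in the dissertation.

```python
import numpy as np

def kalman_step(x, P, z, F, Q, H, R):
    """One filtering step for a linear-Gaussian state-space model.

    On a factor graph this corresponds to passing Gaussian sum-product messages
    through the state-transition factor (prediction) and the measurement factor
    (update).
    """
    # Prediction: message through the state-transition factor
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update: message from the measurement factor
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Nearly-constant-velocity target observed in position only (illustrative values).
dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]]); Q = 0.01 * np.eye(2)
H = np.array([[1.0, 0.0]]);            R = np.array([[0.25]])
x, P = np.zeros(2), np.eye(2)
for z in [0.1, 0.22, 0.35, 0.41]:
    x, P = kalman_step(x, P, np.array([z]), F, Q, H, R)
print("position/velocity estimate:", x)
```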
Abstract:
In the last decades, interest in autonomous robots has soared, boosted not only by academia and industry but also by the ever-increasing demand from civil users. As a matter of fact, autonomous robots are fast spreading into all aspects of human life: they clean houses, navigate through city traffic, and harvest fruits and vegetables. Almost all commercial drones already exhibit unprecedented and sophisticated skills that make them suitable for these applications, such as obstacle avoidance, simultaneous localisation and mapping, path planning, visual-inertial odometry, and object tracking. The major limitations of such robotic platforms lie in the limited payload they can carry, in their cost, and in the limited autonomy due to finite battery capacity. For this reason, researchers have started to develop new algorithms able to run even on platforms that are resource-constrained both in computational capability and in the types of onboard sensors, focusing especially on very cheap sensors and hardware. The possibility of using a limited number of sensors has made it possible to greatly scale down UAV size, while the implementation of new efficient algorithms, which perform the same tasks in less time, helps cope with the reduced autonomy. However, the robots developed so far are not mature enough to operate completely autonomously without human supervision, due to dimensions that are still too large (especially for aerial vehicles), which make these platforms unsafe around humans, and to the non-negligible probability of numerical and decision errors that robots may make. From this perspective, this thesis aims to review and improve current state-of-the-art solutions for autonomous navigation from a purely practical point of view. In particular, we focus on the problems of robot control, trajectory planning, environment exploration, and obstacle avoidance.
Abstract:
Imaging technologies are widely used in application fields such as the natural sciences, engineering, medicine, and the life sciences. A broad class of imaging problems reduces to solving ill-posed inverse problems (IPs). Traditional strategies for solving these ill-posed IPs rely on variational regularization methods, which are based on the minimization of suitable energies and make use of knowledge about the image formation model (forward operator) and prior knowledge about the solution, but fall short in incorporating knowledge directly from data. On the other hand, more recent learned approaches can easily learn the intricate statistics of images from large data sets, but do not have a systematic method for incorporating prior knowledge about the image formation model. The main purpose of this thesis is to discuss data-driven image reconstruction methods which combine the benefits of these two different reconstruction strategies for the solution of highly nonlinear ill-posed inverse problems. Mathematical formulations and numerical approaches for imaging IPs, including linear as well as strongly nonlinear problems, are described. More specifically, we address the Electrical Impedance Tomography (EIT) reconstruction problem by unrolling the regularized Gauss-Newton method and integrating a regularization learned by a data-adaptive neural network. Furthermore, we investigate the solution of nonlinear ill-posed IPs by introducing a deep-PnP framework that integrates a graph convolutional denoiser into the proximal Gauss-Newton method, with a practical application to EIT, a recently introduced and promising imaging technique. Efficient algorithms are then applied to the solution of the limited-electrode problem in EIT, combining compressive sensing techniques and deep learning strategies. Finally, a transformer-based neural network architecture is adapted to restore the noisy solution of the Computed Tomography problem recovered using the filtered back-projection method.
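A minimal sketch of the plug-and-play proximal Gauss-Newton idea referenced above is given below; the forward map, Jacobian and denoiser are toy placeholders (soft-thresholding stands in for a trained graph convolutional denoiser), not the EIT operators or networks used in the thesis.

```python
import numpy as np

def pnp_gauss_newton(x0, forward, jacobian, y, denoiser, n_iter=10, lam=1e-2):
    """Plug-and-play flavoured Gauss-Newton sketch for a nonlinear inverse problem
    y = forward(x) + noise, where a learned denoiser replaces the proximal /
    regularization step. All callables here are placeholders.
    """
    x = x0.copy()
    for _ in range(n_iter):
        J = jacobian(x)                       # linearize the forward operator at x
        r = y - forward(x)                    # data residual
        # Damped (Levenberg-Marquardt style) normal equations for the GN update
        dx = np.linalg.solve(J.T @ J + lam * np.eye(x.size), J.T @ r)
        x = denoiser(x + dx)                  # learned prior applied as a denoising step
    return x

# Toy usage: a mildly nonlinear forward map, with soft-thresholding as the "denoiser".
A = np.random.default_rng(1).normal(size=(40, 20))
forward = lambda x: A @ x + 0.1 * (A @ x) ** 2
jacobian = lambda x: A + 0.2 * np.diag(A @ x) @ A
x_true = np.zeros(20); x_true[[3, 7]] = 1.0
y = forward(x_true)
denoiser = lambda v: np.sign(v) * np.maximum(np.abs(v) - 0.01, 0.0)
print(np.round(pnp_gauss_newton(np.zeros(20), forward, jacobian, y, denoiser), 2))
```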
Abstract:
Deep Learning architectures give brilliant results in a large variety of fields, but a comprehensive theoretical description of their inner functioning is still lacking. In this work, we try to understand the behavior of neural networks by modelling them in the frameworks of Thermodynamics and Condensed Matter Physics. We approach neural networks as in a real laboratory and measure the frequency spectrum and the entropy of the weights of the trained model. The stochasticity of the training occupies a central role in the dynamics of the weights and makes it difficult to assimilate neural networks to simple physical systems. However, the analogy with Thermodynamics and the introduction of a well-defined temperature lead us to an interesting result: if we eliminate the "hottest" filters from a CNN, the performance of the model remains the same, whereas, if we eliminate the "coldest" ones, the performance gets drastically worse. This result could be exploited in the realization of a training loop which eliminates the filters that do not contribute to loss reduction. In this way, the computational cost of training would be reduced and, more importantly, this would be done by following a physical model. In any case, besides the important practical applications, our analysis shows that new and improved modelling of Deep Learning systems can pave the way to new and more efficient algorithms.
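One way the proposed temperature-guided pruning loop could look in practice is sketched below; the temperature proxy (variance of per-filter weight updates across training snapshots) is an illustrative stand-in for the spectral and entropy analysis performed in the thesis, and all shapes are made up.

```python
import numpy as np

def filter_temperatures(weight_snapshots):
    """Proxy 'temperature' per convolutional filter: the variance of the weight
    updates recorded along training (snapshots shaped [steps, filters, ...]).
    This mirrors the thermodynamic analogy only loosely.
    """
    updates = np.diff(weight_snapshots, axis=0)          # per-step weight changes
    return updates.var(axis=(0,) + tuple(range(2, updates.ndim)))

def prune_hottest(weights, temperatures, fraction=0.2):
    """Zero out the fraction of filters with the highest temperature proxy."""
    n_prune = int(len(temperatures) * fraction)
    hottest = np.argsort(temperatures)[-n_prune:]
    pruned = weights.copy()
    pruned[hottest] = 0.0
    return pruned, hottest

# Toy example: 8 training snapshots of a layer with 16 filters of shape 3x3x3.
snaps = np.random.default_rng(0).normal(size=(8, 16, 3, 3, 3))
temps = filter_temperatures(snaps)
w_pruned, removed = prune_hottest(snaps[-1], temps, fraction=0.25)
print("pruned filters:", removed)
```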
Abstract:
Photoplethysmography (PPG) sensors allow for noninvasive and comfortable heart-rate (HR) monitoring, suitable for compact wearable devices. However, PPG signals collected from such devices often suffer from corruption caused by motion artifacts. This is typically addressed by combining the PPG signal with acceleration measurements from an inertial sensor. Recently, different energy-efficient deep learning approaches for heart-rate estimation have been proposed. To test these new solutions, in this work we developed a highly wearable platform (42 mm x 48 mm x 1.2 mm) for PPG signal acquisition and processing, based on GAP9, a parallel ultra-low-power system-on-chip featuring a nine-core RISC-V compute cluster with a neural network accelerator and a one-core RISC-V controller. The hardware platform also integrates a complete commercial optical biosensing module and an ARM Cortex-M4 microcontroller unit (MCU) with Bluetooth Low Energy connectivity. To demonstrate the capabilities of the system, a deep learning-based approach for PPG-based HR estimation has been deployed. Thanks to the reduced power consumption of the digital computational platform, the total power budget is just 2.67 mW, providing up to 5 days of operation on a 105 mAh battery.
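For orientation, a deep learning approach of the kind deployed here maps short windows of PPG plus 3-axis acceleration to a heart-rate value; the minimal PyTorch sketch below is a generic stand-in (layer sizes, window length and sampling rate are assumptions), not the quantized network actually run on the GAP9 accelerator.

```python
import torch
import torch.nn as nn

class TinyPPGNet(nn.Module):
    """Minimal 1D-CNN heart-rate regressor for 8 s windows of PPG + 3-axis
    acceleration sampled at 32 Hz (4 input channels, 256 samples). Illustrative only.
    """
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(4, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(64, 1)          # output in beats per minute

    def forward(self, x):                     # x: [batch, 4, 256]
        return self.head(self.features(x).squeeze(-1))

model = TinyPPGNet()
window = torch.randn(1, 4, 256)               # one synthetic sensor window
print("predicted HR (untrained):", model(window).item())
```

In an actual on-device deployment the trained network would be quantized and compiled for the GAP9 compute cluster; the sketch only illustrates the input/output structure of the task.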