2 results for Apache Cordova at Duke University
Abstract:
The realization of an energy future based on safe, clean, sustainable, and economically viable technologies is one of the grand challenges facing modern society. Electrochemical energy technologies underpin the potential success of this effort to divert energy sources away from fossil fuels, whether one considers alternative energy conversion strategies through photoelectrochemical (PEC) production of chemical fuels or fuel cells run with sustainable hydrogen, or energy storage strategies, such as in batteries and supercapacitors. This dissertation builds on recent advances in nanomaterials design, synthesis, and characterization to develop novel electrodes that can electrochemically convert and store energy.
Chapter 2 of this dissertation focuses on refining the properties of TiO2-based PEC water-splitting photoanodes used for the direct electrochemical conversion of solar energy into hydrogen fuel. The approach utilized atomic layer deposition (ALD), a growth process uniquely suited for the conformal and uniform deposition of thin films with angstrom-level thickness precision. ALD’s thickness control enabled a better understanding of how the effects of nitrogen doping via NH3-annealing treatments, used to reduce TiO2’s bandgap, depend strongly on TiO2’s thickness and crystalline quality. In addition, it was found that some of the negative effects on PEC performance typically associated with N-doped TiO2 could be mitigated if the NH3 annealing was directly preceded by an air-annealing step, especially for ultrathin (i.e., < 10 nm) TiO2 films. ALD was also used to conformally coat an ultraporous conductive fluorine-doped tin oxide nanoparticle (nanoFTO) scaffold with an ultrathin layer of TiO2. The integration of these ultrathin films and the oxide nanoparticles resulted in a heteronanostructure design with excellent PEC water-oxidation photocurrents (0.7 mA/cm2 at 0 V vs. Ag/AgCl) and charge-transfer efficiency.
In Chapter 3, two innovative nanoarchitectures were engineered to enhance the pseudocapacitive energy storage of next-generation supercapacitor electrodes. The morphology and quantity of MnO2 electrodeposits were controlled by adjusting the density of graphene foliates on a novel graphenated carbon nanotube (g-CNT) scaffold. This control enabled the nanocomposite supercapacitor electrode to reach a capacitance of 640 F/g under MnO2 specific mass loading conditions (2.3 mg/cm2) that are higher than previously reported. In the second engineered nanoarchitecture, the electrochemical energy storage properties of a transparent electrode based on a network of solution-processed Cu/Ni core/shell nanowires (NWs) were activated by electrochemically converting the Ni metal shell into Ni(OH)2. Furthermore, adjusting the molar percentage of Ni plated onto the Cu NWs was found to result in a tradeoff between the capacitance, transmittance, and stability of the resulting nickel hydroxide-based electrode. The nominal areal capacitance and power performance results obtained for this Cu/Ni(OH)2 transparent electrode demonstrate that it has significant potential as a hybrid supercapacitor electrode for integration into cutting-edge flexible and transparent electronic devices.
Abstract:
Distributed Computing frameworks belong to a class of programming models that allow developers to
launch workloads on large clusters of machines. Due to the dramatic increase in the volume of
data gathered by ubiquitous computing devices, data analytics workloads have become a common
case among distributed computing applications, establishing Data Science as a field of
Computer Science in its own right. We argue that a data scientist's concerns lie in three main
components: a dataset, a sequence of operations they wish to apply to this dataset, and the
constraints of their work (performance, QoS, budget, etc.). However, without domain expertise,
performing data science is extremely difficult: one needs to select the right amount and type
of resources, pick a framework, and configure it. Moreover, users often run their applications
in shared environments governed by schedulers that expect them to specify their resource needs
precisely. Owing to the distributed and concurrent nature of these frameworks, monitoring and
profiling are hard, high-dimensional problems that keep users from making the right
configuration choices and from determining the amount of resources they need. Paradoxically,
the system gathers a large amount of monitoring data at runtime, which remains unused.
In the ideal abstraction we envision for data scientists, the system is adaptive: it exploits
monitoring data to learn about workloads and turns user requests into a tailored execution
context. In this work, we study techniques that have been used to take steps toward such
system awareness, and we explore a new approach: applying machine learning techniques to
recommend a specific subset of system configurations for Apache Spark applications.
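As a minimal sketch of what such a recommender could look like (not the model actually developed in this work), a configuration recommendation can be framed as a nearest-neighbor lookup over previously profiled workloads: given the feature vector of a new workload, return the configuration that worked best for the most similar past run. The features and configuration values below are invented for illustration.

```python
# Hypothetical sketch of ML-based Spark configuration recommendation:
# recommend the config of the most similar previously profiled workload.
# Features and configurations here are illustrative, not from this work.
import math

# (workload feature vector, best-known configuration) pairs from past runs.
# Assumed features: input size (GB), shuffle ratio, task count.
profiled_runs = [
    ((10.0, 0.1, 200),   {"spark.executor.cores": 2, "spark.executor.memory": "4g"}),
    ((500.0, 0.8, 4000), {"spark.executor.cores": 5, "spark.executor.memory": "16g"}),
    ((50.0, 0.5, 800),   {"spark.executor.cores": 4, "spark.executor.memory": "8g"}),
]

def recommend(features):
    """Return the configuration of the nearest profiled workload."""
    def dist(a, b):
        # Euclidean distance between two feature vectors.
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    _, config = min(profiled_runs, key=lambda run: dist(run[0], features))
    return config

# A new workload closest to the third profiled run inherits its configuration.
print(recommend((60.0, 0.4, 900)))
```

A production recommender would of course replace the raw Euclidean distance with learned models over normalized features, but the structure — map workload characteristics to a known-good configuration — is the same.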
Furthermore, we present an in-depth study of Apache Spark executor configuration, which
highlights the complexity of choosing the best one for a given workload.
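For context, the executor configuration space in question is exposed through `spark-submit` flags and `spark.*` properties; the invocation below shows the main knobs whose interplay makes the choice hard. The values and the application name are arbitrary examples, not recommendations from this work.

```shell
# Illustrative spark-submit invocation (example values, hypothetical app):
#   --num-executors     how many executor JVMs to launch
#   --executor-cores    concurrent tasks per executor
#   --executor-memory   heap available to each executor
#   spark.memory.fraction  share of the heap used for execution and storage
spark-submit \
  --master yarn \
  --num-executors 8 \
  --executor-cores 4 \
  --executor-memory 8g \
  --conf spark.memory.fraction=0.6 \
  my_application.jar
```

The same total core count can be reached with many (num-executors, executor-cores) combinations, yet they behave very differently under shuffle and memory pressure — which is precisely the complexity the study examines.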