909 results for supramolecular architectures
Abstract:
In this work, we present a thorough assessment of the performance of some representative double-hybrid density functionals (revPBE0-DH-NL and B2PLYP-NL), as well as their parent hybrid and GGA counterparts, in combination with the most modern version of the nonlocal (NL) van der Waals correction, to describe very large weakly interacting molecular systems dominated by noncovalent interactions. Prior to the assessment, an accurate and homogeneous set of reference interaction energies was computed for the supramolecular complexes constituting the L7 and S12L data sets by using the novel, precise, and efficient DLPNO-CCSD(T) method at the complete basis set limit (CBS). The correction of the basis set superposition error and the inclusion of the deformation energies (for the S12L set) were crucial for obtaining precise DLPNO-CCSD(T)/CBS interaction energies. Among the density functionals evaluated, the double hybrids revPBE0-DH-NL and B2PLYP-NL with the three-body dispersion correction provide remarkably accurate association energies, very close to chemical accuracy. Overall, the NL van der Waals approach combined with proper density functionals can be seen as an accurate and affordable computational tool for the modeling of large weakly bonded supramolecular systems.
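As a hedged illustration of the quantities mentioned above (the notation is assumed here, not taken from the paper), the counterpoise-corrected interaction energy and the association energy including monomer deformation can be written as:

```latex
% Counterpoise (CP) correction of the basis set superposition error: both monomers
% are evaluated in the full dimer (AB) basis at the geometry of the complex.
E_{\mathrm{int}}^{\mathrm{CP}} =
  E_{AB}^{AB\ \mathrm{basis}} - E_{A}^{AB\ \mathrm{basis}} - E_{B}^{AB\ \mathrm{basis}}

% Deformation (relaxation) energy: the cost of distorting each monomer from its
% isolated equilibrium geometry to the geometry it adopts in the complex.
E_{\mathrm{def}} =
  \left[ E_{A}^{\mathrm{complex\ geom.}} - E_{A}^{\mathrm{relaxed}} \right]
+ \left[ E_{B}^{\mathrm{complex\ geom.}} - E_{B}^{\mathrm{relaxed}} \right]

% Association energy including deformation (as done for the S12L set).
\Delta E_{\mathrm{assoc}} = E_{\mathrm{int}}^{\mathrm{CP}} + E_{\mathrm{def}}
```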
Abstract:
In recent years, the security of industrial control systems has been a major research focus due to potential cyber-attacks that can impact physical operations. As a result of these risks, there has been an urgent need to establish stronger protection against such threats. Conventional firewalls with stateful rules can be implemented in critical cyberinfrastructure environments but may require constant updates. Despite the ongoing effort to maintain the rules, this protection mechanism does not restrict malicious data flows, and it poses a greater risk of intrusion. The contributions of this thesis are motivated by these issues and include a systematic, reliability-oriented investigation of attack-related scenarios within a substation network. The proposed work is two-fold: (i) system architecture evaluation and (ii) construction of an attack tree for a substation network. Cyber-system reliability remains one of the important factors in determining system bottlenecks for investment planning and maintenance, as it determines how long the system can operate with or without disruption. First, a complete enumeration of existing implementations is identified, covering existing (bidirectional) communication architectures and new, strictly unidirectional ones. Detailed models of the 10 extended system architectures have been evaluated. Next, attack tree modeling for potential substation threats is formulated, quantifying the risks of possible attack scenarios originating within the network or from external networks. The analytical models proposed in this thesis can serve as a foundation for further research.
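The abstract does not spell out the attack-tree formalism; as a minimal sketch of how such trees are commonly quantified (the node structure and probabilities below are hypothetical, not taken from the thesis), leaf probabilities can be propagated upward through OR and AND gates:

```python
# Minimal attack-tree evaluation sketch: leaf probabilities are propagated upward.
# OR gate: the attack succeeds if any child succeeds; AND gate: all children must succeed.
# Node names and probabilities below are illustrative only.

def p_or(children):
    """Probability that at least one child attack succeeds (independence assumed)."""
    p = 1.0
    for c in children:
        p *= (1.0 - c)
    return 1.0 - p

def p_and(children):
    """Probability that every child attack succeeds (independence assumed)."""
    p = 1.0
    for c in children:
        p *= c
    return p

# Hypothetical substation scenario: intrusion via remote access OR local access,
# where remote access requires both a VPN compromise AND weak credentials.
p_remote = p_and([0.10, 0.30])        # VPN compromise, weak credentials
p_local = 0.05                         # physical/local access
p_root = p_or([p_remote, p_local])     # top event: unauthorized substation access

print(f"P(top event) = {p_root:.4f}")
```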
Abstract:
The thesis "COMPARATIVE ANALYSIS OF EFFICIENCY AND OPERATING CHARACTERISTICS OF AUTOMOTIVE POWERTRAIN ARCHITECTURES THROUGH CHASSIS DYNAMOMETER TESTING" was completed through a collaborative partnership between Michigan Technological University and Argonne National Laboratory under a contractual agreement titled "Advanced Vehicle Characterization at Argonne National Laboratory". The goal of this project was to investigate, understand and document the performance and operational strategy of several modern passenger vehicles of various architectures. The vehicles were chosen to represent several popular engine and transmission architectures and were instrumented to allow for data collection that facilitates comparative analysis. To ensure repeatability and reliability during testing, each vehicle was tested over a series of identical drive cycles in a controlled environment using a vehicle chassis dynamometer. Where possible, instrumentation was preserved between vehicles to ensure robust data collection. The efficiency and fuel economy performance of the vehicles was studied. In addition, powertrain utilization strategies, significant energy loss sources, tailpipe emissions, combustion characteristics, and cold start behavior were explored in detail. It was concluded that each vehicle realizes different strengths and suffers from different limitations in the course of its attempts to maximize efficiency and fuel economy. It was also observed that each vehicle, regardless of architecture, exhibits significant energy losses and difficulties in cold start operation that can be further improved with advancing technology. It is clear that advanced engine technologies and driveline technologies are complementary aspects of vehicle design that must be utilized together for the best efficiency improvements. Finally, it was concluded that advanced technology vehicles do not come without associated cost; the complexity of the powertrains and lifecycle costs must be considered to understand the full impact of advanced vehicle technology.
Abstract:
Efficient and reliable techniques for power delivery and utilization are needed to account for the increased penetration of renewable energy sources in electric power systems. Such methods are also required for current and future demands of plug-in electric vehicles and high-power electronic loads. Distributed control and optimal power network architectures will lead to viable solutions to the energy management issue with a high level of reliability and security. This dissertation is aimed at developing and verifying new techniques for distributed control by deploying DC microgrids, involving distributed renewable generation and energy storage, within the operating AC power system. To achieve the objectives of this dissertation, an energy system architecture was developed involving AC and DC networks, both with distributed generation and demands. The various components of the DC microgrid were designed and built, including DC-DC converters, voltage source inverters (VSI) and AC-DC rectifiers featuring novel designs developed by the candidate. New control techniques were developed and implemented to maximize the operating range of the power conditioning units used for integrating renewable energy into the DC bus. The control and operation of the DC microgrids in the hybrid AC/DC system involve intelligent energy management. Real-time energy management algorithms were developed and experimentally verified. These algorithms are based on intelligent decision-making elements along with an optimization process, aimed at enhancing the overall performance of the power system and mitigating the effect of heavy non-linear loads with variable intensity and duration. The developed algorithms were also used for managing the charging/discharging process of plug-in electric vehicle emulators. The protection of the proposed hybrid AC/DC power system was studied: fault analysis, protection schemes and coordination were presented, along with ideas on how to retrofit currently available AC protection concepts and devices for a DC network. A study was also conducted on how changing the distribution architecture and distributing the storage assets across the various zones of the network affect the system's dynamic security and stability. A practical shipboard power system was studied as an example of a hybrid AC/DC power system involving pulsed loads. The proposed hybrid AC/DC power system, along with most of the ideas, controls and algorithms presented in this dissertation, was experimentally verified at the Smart Grid Testbed of the Energy Systems Research Laboratory.
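The energy management algorithms themselves are not detailed in the abstract; the rule-based sketch below (function name, thresholds and power values are hypothetical) only illustrates the kind of storage dispatch decision such a real-time scheme has to make against renewable surplus or deficit:

```python
# Illustrative rule-based dispatch for a DC-microgrid storage unit (hypothetical values).
# Sign convention: positive power = injection into the DC bus, negative = absorption.

def dispatch_storage(p_renewable_kw, p_load_kw, soc,
                     soc_min=0.2, soc_max=0.9, p_rated_kw=10.0):
    """Return the storage power set-point given generation, load and state of charge."""
    surplus = p_renewable_kw - p_load_kw
    if surplus > 0 and soc < soc_max:
        # Excess renewable power: charge the storage (absorb from the bus).
        return -min(surplus, p_rated_kw)
    if surplus < 0 and soc > soc_min:
        # Generation deficit: discharge the storage to support the bus.
        return min(-surplus, p_rated_kw)
    return 0.0  # otherwise the grid-interface converter balances the bus

print(dispatch_storage(p_renewable_kw=8.0, p_load_kw=5.0, soc=0.6))  # -3.0 (charging)
print(dispatch_storage(p_renewable_kw=2.0, p_load_kw=6.0, soc=0.6))  #  4.0 (discharging)
```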
Abstract:
New powertrain design is highly influenced by the CO2 and pollutant limits defined by legislation, the demand for fuel economy in real driving conditions, high performance and acceptable cost. To meet the requirements coming from both end-users and legislation, several powertrain architectures and engine technologies are possible (e.g. SI or CI engines), with many new technologies, new fuels and different degrees of electrification. The benefits and costs of the possible architectures and technology mixes must be accurately evaluated by means of objective procedures and tools in order to choose among the best alternatives. This work presents a basic design methodology and a concept-level comparison of the main powertrain architectures and technologies currently being developed, considering their technical benefits and cost effectiveness. The analysis is carried out on the basis of studies from the technical literature, integrating missing data with evaluations performed by means of simplified powertrain-vehicle models, considering the most important powertrain architectures. Technology pathways for passenger cars up to 2025 and beyond have been defined. After that, with the support of more detailed models and experiments, the investigation has focused on the most promising technologies to improve the internal combustion engine, such as water injection, low-temperature combustion and heat recovery systems.
Abstract:
Nowadays the production of increasingly complex and electrified vehicles requires the implementation of new control and monitoring systems. This, together with the tendency to move rapidly from the test bench to the vehicle, leads to a landscape that requires the development of embedded hardware and software to address the application effectively and efficiently. The development of application-based software on real-time/FPGA hardware can be a good answer to these challenges: the FPGA provides parallel, low-level and high-speed calculation/timing, while the real-time processor can handle high-level calculation layers, logging and communication functions with determinism. Thanks to their software flexibility and small dimensions, these architectures fit well as engine RCP (Rapid Control Prototyping) units and as smart data loggers/analysers, both for test bench and on-vehicle applications. Efforts have been made to build a base architecture with common functionalities capable of easily hosting application-specific control code. Several case studies originating in this scenario are shown; dedicated solutions for prototype applications have been developed exploiting a real-time/FPGA architecture as an ECU (Engine Control Unit) and custom RCP functionalities, such as water injection and hydraulic brake control for testing.
Abstract:
Acoustic Emission (AE) monitoring can be used to detect the presence of damage as well as determine its location in Structural Health Monitoring (SHM) applications. Information on the difference in the times at which the signal generated by a damage event arrives at different sensors is essential for performing localization, which makes the time of arrival (ToA) an important piece of information to retrieve from the AE signal. Generally, the ToA is determined using statistical methods such as the Akaike Information Criterion (AIC), which is particularly prone to errors in the presence of noise. Given that the structures of interest are often surrounded by harsh environments, a way to accurately estimate the arrival time in such noisy scenarios is of particular interest. In this work, two new methods are presented to estimate the arrival times of AE signals, based on machine learning. Both are deep learning models: one based on a Convolutional Neural Network (CNN) and one on a Capsule Neural Network (CapsNet). The primary advantage of such models is that they do not require the user to pre-define selected features; they only require raw data and establish non-linear relationships between the inputs and outputs. The performance of the models is evaluated on AE signals generated by a custom ray-tracing algorithm that propagates them on an aluminium plate, and is compared to that of the AIC. On the test set, the relative estimation error was below 5% for the models compared to around 45% for the AIC. Testing was then continued by preparing an experimental setup and acquiring real AE signals. Similar performance was observed: the two models not only outperform the AIC by more than an order of magnitude in average error, but are also far more robust than the AIC, which fails in the presence of noise.
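For context on the baseline the proposed models are compared against, the AIC arrival-time picker commonly used for AE signals can be sketched as follows (a generic formulation, not necessarily the exact variant used in this work):

```python
import numpy as np

def aic_picker(x):
    """Akaike-Information-Criterion onset picker for a 1-D AE waveform.

    AIC(k) = k*log(var(x[:k])) + (N-k-1)*log(var(x[k:]))
    The arrival sample is taken as the index minimizing AIC.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    aic = np.full(n, np.inf)
    for k in range(2, n - 2):
        v1 = np.var(x[:k])   # variance of the (mostly noise) segment before k
        v2 = np.var(x[k:])   # variance of the segment after k
        if v1 > 0 and v2 > 0:
            aic[k] = k * np.log(v1) + (n - k - 1) * np.log(v2)
    return int(np.argmin(aic))

# Synthetic example: noise followed by a decaying burst starting at sample 500.
rng = np.random.default_rng(0)
signal = rng.normal(0.0, 0.05, 1000)
signal[500:] += np.sin(0.3 * np.arange(500)) * np.exp(-0.01 * np.arange(500))
print(aic_picker(signal))  # expected to be close to 500
```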
Abstract:
Biological systems are complex and highly organized architectures governed by non-covalent interactions responsible for the regulation of essential tasks in all living organisms. These systems are a constant source of inspiration for supramolecular chemists aiming to design multicomponent molecular assemblies able to perform elaborate tasks thanks to the role and action of the components that constitute them. Artificial supramolecular systems exploit non-covalent interactions to mimic naturally occurring events. In this context, stimuli-responsive supramolecular systems have attracted attention due to the possibility of controlling macroscopic effects through modifications at the nanoscale. This thesis is divided into three experimental chapters, characterized by a progressive increase in molecular complexity. Initially, the preparation and study of liposomes functionalized with a photoactive guest, such as azobenzene, in the bilayer were tackled, in order to evaluate the effect of such a photochrome on the vesicle properties. Subsequently, the synthesis and study of thread-like molecules comprising an azobenzene functionality are reported. Such molecules were conceived to be intercalated in the bilayer membrane of liposomes, with the aim of using them as components of photoresponsive transmembrane molecular pumps. Finally, a [3]rotaxane was developed and studied in solution. This system is composed of two crown ether rings interlocked with an axle containing three recognition sites for the macrocycles, i.e. two pH-switchable ammonium stations and a permanent triazolium station. Such a molecule was designed to achieve a change in the ratio between the recognition sites and the crown ethers as a consequence of acid-base inputs. This leads to the formation of rotaxanes containing a number of recognition sites respectively larger than, equal to, or lower than the number of interlocked rings, connected by a network of acid-base reactions.
Abstract:
Analog In-Memory Computing (AIMC) has been proposed in the context of beyond-von-Neumann architectures as a valid strategy to reduce the energy consumption and latency of internal data transfers and to improve compute efficiency. The aim of AIMC is to perform computations within the memory unit, typically leveraging the physical features of memory devices. Among resistive Non-Volatile Memories (NVMs), Phase-Change Memory (PCM) has become a promising technology due to its intrinsic capability to store multilevel data. Hence, PCM technology is currently investigated to enhance the possibilities and the applications of AIMC. This thesis aims at exploring the potential of new PCM-based architectures as in-memory computational accelerators. As a first step, a preliminary experimental characterization of PCM devices was carried out from an AIMC perspective. PCM cell non-idealities, such as time drift, noise and non-linearity, were studied to develop a dedicated multilevel programming algorithm. Measurement-based simulations were then employed to evaluate the feasibility of PCM-based operations in the fields of Deep Neural Networks (DNNs) and Structural Health Monitoring (SHM). Moreover, a first testchip was designed and tested to evaluate the hardware implementation of Multiply-and-Accumulate (MAC) operations employing PCM cells. This prototype experimentally demonstrates the possibility of reaching 95% MAC accuracy with circuit-level compensation of cell time drift and non-linearity. Finally, empirical circuit behavior models were included in simulations to assess the use of this technology in specific DNN applications and to enhance the potential of this innovative computation approach.
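To make the analog MAC idea concrete, the sketch below models a crossbar-style multiply-and-accumulate as a conductance-weighted current sum with simple drift and read-noise terms; all parameter values are illustrative assumptions, not measurements from the thesis testchip:

```python
import numpy as np

def analog_mac(weights_g, inputs_v, t=1.0, t0=1.0, drift_nu=0.05, noise_std=0.01):
    """Simplified PCM crossbar MAC: I = sum(G_i * V_i), with drift and read noise.

    Conductance drift follows the common power law G(t) = G0 * (t/t0)**(-nu);
    the drift exponent and noise level here are illustrative only.
    """
    g = np.asarray(weights_g, dtype=float) * (t / t0) ** (-drift_nu)   # drifted conductances
    g = g * (1.0 + np.random.normal(0.0, noise_std, size=g.shape))     # multiplicative read noise
    return float(np.dot(g, inputs_v))                                   # Kirchhoff current sum

g = [1e-6, 2e-6, 0.5e-6]   # programmed conductances (S), illustrative
v = [0.2, 0.1, 0.3]        # applied read voltages (V), illustrative
ideal = float(np.dot(g, v))
measured = analog_mac(g, v, t=1000.0)
print(f"ideal = {ideal:.3e} A, with drift/noise = {measured:.3e} A")
```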
Abstract:
This thesis explores methods based on the free energy principle and active inference for modelling cognition. Active inference is an emerging framework for designing intelligent agents in which psychological processes are cast in terms of Bayesian inference. Here, I appeal to it to test, via simulation, the design of a set of cognitive architectures. These architectures are defined in terms of generative models in which an agent executes a task under the assumption that all cognitive processes aspire to the same objective: the minimization of variational free energy. Chapter 1 introduces the free energy principle and its assumptions about self-organizing systems. Chapter 2 describes how a minimal form of cognition able to achieve autopoiesis can emerge from the mechanics of self-organization. In chapter 3 I present the method by which I formalize generative models for action and perception. The proposed architectures provide a more biologically plausible account of complex cognitive processing that entails deep temporal features. I then present three simulation studies that aim to show different aspects of cognition, their associated behavior and the underlying neural dynamics. In chapter 4, the first study proposes an architecture that represents the visuomotor system for the encoding of actions during action observation, understanding and imitation. In chapter 5, the generative model is extended and lesioned to simulate brain damage and the neuropsychological patterns observed in apraxic patients. In chapter 6, the third study proposes an architecture for cognitive control and the modulation of attention for action selection. Finally, I argue how active inference can provide a formal account of information processing in the brain and how the adaptive capabilities of the simulated agents are a mere consequence of the architecture of the generative models. Cognitive processing, then, becomes an emergent property of the minimization of variational free energy.
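For reference, the quantity minimized throughout these architectures, the variational free energy, has the standard form below (notation assumed: hidden states s, observations o, approximate posterior q(s)):

```latex
% Variational free energy F for observations o, hidden states s and approximate posterior q(s):
F = \mathbb{E}_{q(s)}\!\left[\ln q(s) - \ln p(o,s)\right]
  = \underbrace{D_{\mathrm{KL}}\!\left[q(s)\,\|\,p(s \mid o)\right]}_{\ge 0} - \ln p(o)
  = \underbrace{D_{\mathrm{KL}}\!\left[q(s)\,\|\,p(s)\right]}_{\text{complexity}}
  - \underbrace{\mathbb{E}_{q(s)}\!\left[\ln p(o \mid s)\right]}_{\text{accuracy}}
% Minimizing F therefore upper-bounds surprise (-ln p(o)) while driving q(s) toward the true posterior.
```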
Abstract:
The recent trend of moving Cloud Computing capabilities to the Edge of the network is reshaping how applications and their middleware supports are designed, deployed, and operated. This new model envisions a continuum of virtual resources between the traditional cloud and the network edge, which is potentially more suitable to meet the heterogeneous Quality of Service (QoS) requirements of diverse application domains and next-generation applications. Several classes of advanced Internet of Things (IoT) applications, e.g., in the industrial manufacturing domain, exhibit heterogeneous QoS requirements and call for QoS management systems to guarantee/control performance indicators, even in the presence of real-world factors such as limited bandwidth and concurrent virtual resource utilization. The present dissertation proposes a comprehensive QoS-aware architecture that addresses the challenges of integrating cloud infrastructure with edge nodes in IoT applications. The architecture provides end-to-end QoS support by incorporating several components for managing physical and virtual resources. The proposed architecture features: i) a multilevel middleware for resolving the convergence between Operational Technology (OT) and Information Technology (IT), ii) an end-to-end QoS management approach compliant with the Time-Sensitive Networking (TSN) standard, iii) new approaches for virtualized network environments, such as running TSN-based applications under Ultra-low Latency (ULL) constraints in virtual and 5G environments, and iv) an accelerated and deterministic container overlay network architecture. Additionally, the QoS-aware architecture includes two novel middlewares: i) a middleware that transparently integrates multiple acceleration technologies in heterogeneous Edge contexts and ii) a QoS-aware middleware for Serverless platforms that leverages the coordination of various QoS mechanisms and the virtualized Function-as-a-Service (FaaS) invocation stack to manage end-to-end QoS metrics. Finally, all architecture components were tested and evaluated on realistic testbeds, demonstrating the efficacy of the proposed solutions.
Abstract:
This thesis focuses on two main topics: photoresponsive azobenzene-based polymers and supramolecular systems generated by the self-assembly of lipophilic guanosines. The first chapters describe innovative photoresponsive devices and materials capable of performing multiple roles in the fields of soft robotics and energy conversion. Chapter 2 describes a device obtained by coupling a photoresponsive liquid-crystalline network with a piezoelectric polymer to convert visible light into electricity. Chapter 3 deals with a material that can assume different shapes when triggered by three different stimuli in different environments. Chapter 4 reports a highly performing artificial muscle that contracts when irradiated. The last two chapters report on supramolecular structures generated from functionalized guanosines dissolved in organic solvents. Chapter 6 illustrates the self-assembly into G-quadruplexes of 8- and 5'-functionalized guanosines in the absence of templating ions. Chapter 7 describes the supramolecular structure generated by the assembly of a lipophilic guanosine in the presence of silver cations. Chapter 6 is reproduced from an already published paper, while the other chapters will be submitted to different journals in the coming months.
Abstract:
The Neural Networks customized and tested in this thesis (WaldoNet, FlowNet and PatchNet) are a first exploration of and approach to the Template Matching task; the possibilities for extension are therefore many, and some are proposed below. During my thesis, I analyzed the functioning of the classical algorithms and adapted them with deep learning techniques. The features extracted from both the template and the query images resemble the keypoints of the SIFT algorithm. Then, instead of a similarity function or keypoint matching, WaldoNet and PatchNet use a convolutional layer to compare the features, while FlowNet uses a correlation layer. In addition, I identified the major challenges of the Template Matching task (affine/non-affine transformations, intensity changes, ...) and addressed them with a careful design of the dataset.
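The internals of WaldoNet and PatchNet are not given here; the sketch below only illustrates the general idea of comparing features with a convolutional layer, where the template feature map acts as the kernel (shapes, normalization and the toy tensors are assumptions for illustration):

```python
import torch
import torch.nn.functional as F

def feature_match(query_feat, template_feat):
    """Compare template features against query features with a convolution.

    query_feat:    (1, C, Hq, Wq) feature map of the query image
    template_feat: (1, C, Ht, Wt) feature map of the template
    The template feature map is used as the convolution kernel, so the output
    score map peaks where the query features best correlate with the template.
    """
    kernel = template_feat / (template_feat.norm() + 1e-8)  # (1, C, Ht, Wt), normalized
    score = F.conv2d(query_feat, kernel)                    # (1, 1, Hq-Ht+1, Wq-Wt+1)
    return score

# Toy example with random "features" standing in for CNN activations.
q = torch.randn(1, 64, 32, 32)
t = q[:, :, 10:18, 12:20].clone()      # a crop of the query used as the template
score = feature_match(q, t)
row, col = divmod(int(score.flatten().argmax()), score.shape[-1])
print(f"best match at (row={row}, col={col})")  # expected near (10, 12)
```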
Abstract:
Histological and histochemical observations support the hypothesis that collagen fibers can link to elastic fibers. However, the resulting organization of elastin-collagen complexes and the differences between these materials in terms of macromolecular orientation and the frequencies of their chemical vibrational groups have not yet been resolved. This study aimed to investigate the macromolecular organization of pure elastin, collagen type I and elastin-collagen complexes using polarized light DIC-microscopy. Additionally, differences and similarities between pure elastin and collagen bundles (CB) were investigated by Fourier transform-infrared (FT-IR) microspectroscopy. Although elastin exhibited a faint birefringence, the elastin-collagen complex aggregates formed in solution exhibited a deep birefringence and the formation of an ordered supramolecular complex typical of the chiral structure of collagen. The FT-IR study revealed elastin and CB peptide NH groups involved in different types of H-bonding. More energy is absorbed in the vibrational transitions corresponding to the CH, CH2 and CH3 groups (probably associated with the hydrophobicity demonstrated by 8-anilino-1-naphthalene sulfonic acid sodium salt [ANS] fluorescence), and to the νCN, δNH and ωCH2 groups of elastin compared to CB. It is assumed that the α-helix contribution to the pure elastin amide I profile is 46.8%, whereas that of the β-sheet is 20%, with unordered structures contributing the remaining percentage. An FT-IR profile library reveals that the elastin signature within the 1360-1189 cm⁻¹ spectral range resembles that of Conex-Toray aramid fibers.
Abstract:
The present review describes the main features of nickel hydroxide modified electrodes, covering their structural and electrochemical behavior and the newest advances promoted by nanostructured architectures. Important aspects such as synthetic procedures and characterization techniques, including X-ray diffraction, Raman and infrared spectroscopy, electron microscopy and many others, are detailed herein. The most important aspect of nickel hydroxide is its great versatility, covering different fields of electrochemistry-based devices such as batteries, electrocatalytic systems and electrochromic electrodes; the fundamental issues of these devices are also discussed. Finally, some of the newest advances achieved in each field through the incorporation of nanomaterials are shown.