935 results for Transceiver architectures
Abstract:
New powertrain design is strongly influenced by the CO2 and pollutant limits defined by legislation, the demand for fuel economy under real driving conditions, high performance, and acceptable cost. To meet the requirements coming from both end-users and legislation, several powertrain architectures and engine technologies are possible (e.g. SI or CI engines), with many new technologies, new fuels, and different degrees of electrification. The benefits and costs of the possible architectures and technology mixes must be accurately evaluated by means of objective procedures and tools in order to choose among the best alternatives. This work presents a basic design methodology and a concept-level comparison of the main powertrain architectures and technologies currently being developed, considering their technical benefits and cost effectiveness. The analysis is carried out on the basis of studies from the technical literature, integrating missing data with evaluations performed by means of simplified powertrain-vehicle models covering the most important powertrain architectures. Technology pathways for passenger cars up to 2025 and beyond have been defined. Subsequently, with the support of more detailed models and experiments, the investigation has focused on the most promising technologies for improving the internal combustion engine, such as water injection, low-temperature combustion, and heat recovery systems.
Abstract:
Nowadays the production of increasingly complex and electrified vehicles requires the implementation of new control and monitoring systems. This, together with the tendency to move rapidly from the test bench to the vehicle, creates a landscape that requires the development of embedded hardware and software to address applications effectively and efficiently. Developing application-based software on real-time/FPGA hardware is a good answer to these challenges: the FPGA provides parallel, low-level, high-speed calculation and timing, while the real-time processor handles high-level calculation layers, logging, and communication functions with determinism. Thanks to their software flexibility and small dimensions, these architectures are well suited as engine RCP (Rapid Control Prototyping) units and as smart data loggers/analysers, both for test-bench and on-vehicle applications. Effort has been devoted to building a base architecture with common functionalities capable of easily hosting application-specific control code. Several case studies originating in this scenario are shown; dedicated solutions for prototype applications have been developed by exploiting a real-time/FPGA architecture as an ECU (Engine Control Unit) with custom RCP functionalities, such as water injection and hydraulic brake control for testing.
Abstract:
Acoustic Emission (AE) monitoring can be used to detect the presence of damage as well as determine its location in Structural Health Monitoring (SHM) applications. Information on the difference in time at which the signal generated by a damage event arrives at different sensors is essential for localization, which makes the time of arrival (ToA) an important piece of information to retrieve from the AE signal. The ToA is generally determined using statistical methods such as the Akaike Information Criterion (AIC), which is particularly prone to errors in the presence of noise. Given that the structures of interest are often surrounded by harsh environments, a way to accurately estimate the arrival time in such noisy scenarios is of particular interest. In this work, two new machine-learning methods are presented to estimate the arrival times of AE signals. Both are deep learning models, one based on a Convolutional Neural Network (CNN) and the other on a Capsule Neural Network (CapsNet). The primary advantage of such models is that they do not require the user to pre-define selected features: raw data are given as input, and the models establish non-linear relationships between inputs and outputs. The performance of the models is evaluated on AE signals generated by a custom ray-tracing algorithm propagating them on an aluminium plate, and compared to AIC. The relative estimation error on the test set was below 5% for the models, compared to around 45% for AIC. Testing then continued with an experimental setup acquiring real AE signals. Similar performance was observed: the two models not only outperform AIC by more than an order of magnitude in average error, but are also far more robust than AIC, which fails in the presence of noise.
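The AIC picker used as the baseline above has a well-known closed form. The sketch below is illustrative only (the function name and the synthetic burst trace are not from the thesis): the arrival index is taken as the global minimum of the criterion curve.

```python
import numpy as np

def aic_picker(signal):
    """Akaike-Information-Criterion onset picker:
    AIC(k) = k*log(var(x[:k])) + (N-k-1)*log(var(x[k:]));
    the arrival index is the global minimum of the curve."""
    x = np.asarray(signal, dtype=float)
    n = len(x)
    aic = np.full(n, np.inf)
    for k in range(2, n - 2):
        v1, v2 = np.var(x[:k]), np.var(x[k:])
        if v1 > 0 and v2 > 0:
            aic[k] = k * np.log(v1) + (n - k - 1) * np.log(v2)
    return int(np.argmin(aic))

# Synthetic trace: quiet noise, then a burst starting at sample 500.
rng = np.random.default_rng(0)
trace = np.concatenate([0.01 * rng.standard_normal(500),
                        np.sin(0.3 * np.arange(300)) + 0.01 * rng.standard_normal(300)])
pick = aic_picker(trace)   # lands near sample 500, the true onset
```

On clean data like this the picker is accurate; the thesis's point is that its error grows sharply once realistic noise is added to the trace.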
Abstract:
Analog In-Memory Computing (AIMC) has been proposed in the context of beyond-von-Neumann architectures as a valid strategy to reduce the energy consumption and latency of internal data transfers and to improve compute efficiency. The aim of AIMC is to perform computations within the memory unit, typically leveraging the physical features of the memory devices. Among resistive Non-Volatile Memories (NVMs), Phase-Change Memory (PCM) has become a promising technology due to its intrinsic capability to store multilevel data. Hence, PCM technology is currently being investigated to enhance the possibilities and applications of AIMC. This thesis explores the potential of new PCM-based architectures as in-memory computational accelerators. As a first step, a preliminary experimental characterization of PCM devices was carried out from an AIMC perspective. PCM cell non-idealities, such as time drift, noise, and non-linearity, were studied to develop a dedicated multilevel programming algorithm. Measurement-based simulations were then employed to evaluate the feasibility of PCM-based operations in the fields of Deep Neural Networks (DNNs) and Structural Health Monitoring (SHM). Moreover, a first testchip was designed and tested to evaluate the hardware implementation of Multiply-and-Accumulate (MAC) operations employing PCM cells. This prototype experimentally demonstrates the possibility of reaching 95% MAC accuracy with circuit-level compensation of cell time drift and non-linearity. Finally, empirical circuit behavior models were included in simulations to assess the use of this technology in specific DNN applications and to enhance the potential of this innovative computation approach.
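The MAC-with-drift-compensation idea can be sketched numerically. The model below uses the standard empirical PCM drift law G(t) = G0·(t/t0)^(-ν); the exponent value, voltages, and conductances are assumed for illustration and do not come from the testchip described above.

```python
import numpy as np

NU = 0.05   # assumed drift exponent; real PCM values are device-dependent

def drifted_conductance(g0, t, t0=1.0, nu=NU):
    """Empirical PCM drift law: G(t) = G0 * (t / t0) ** (-nu)."""
    return np.asarray(g0, dtype=float) * (t / t0) ** (-nu)

def analog_mac(voltages, g0, t, compensate=True):
    """In-memory MAC as a current sum I = sum_i V_i * G_i(t),
    optionally applying a single global drift-correction factor."""
    current = np.dot(voltages, drifted_conductance(g0, t))
    if compensate:
        current *= t ** NU   # exact only if every cell drifts with the same nu
    return current

v = np.array([0.2, 0.5, 0.1])            # read voltages
g0 = np.array([10e-6, 20e-6, 5e-6])      # programmed conductances (S)
ideal = np.dot(v, g0)
raw = analog_mac(v, g0, t=1e4, compensate=False)   # drift shrinks the result
fixed = analog_mac(v, g0, t=1e4)                   # compensation restores it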
Abstract:
This thesis explores the methods based on the free energy principle and active inference for modelling cognition. Active inference is an emerging framework for designing intelligent agents in which psychological processes are cast in terms of Bayesian inference. Here, I appeal to it to test the design of a set of cognitive architectures via simulation. These architectures are defined in terms of generative models where an agent executes a task under the assumption that all cognitive processes aspire to the same objective: the minimization of variational free energy. Chapter 1 introduces the free energy principle and its assumptions about self-organizing systems. Chapter 2 describes how a minimal form of cognition able to achieve autopoiesis can emerge from the mechanics of self-organization. Chapter 3 presents the method by which I formalize generative models for action and perception. The proposed architectures provide a more biologically plausible account of complex cognitive processing that entails deep temporal features. I then present three simulation studies that address different aspects of cognition, their associated behavior, and the underlying neural dynamics. In chapter 4, the first study proposes an architecture representing the visuomotor system for the encoding of actions during action observation, understanding, and imitation. In chapter 5, the generative model is extended and lesioned to simulate brain damage and the neuropsychological patterns observed in apraxic patients. In chapter 6, the third study proposes an architecture for cognitive control and the modulation of attention for action selection. Finally, I argue that active inference can provide a formal account of information processing in the brain, and that the adaptive capabilities of the simulated agents are a direct consequence of the architecture of the generative models. Cognitive processing, then, becomes an emergent property of the minimization of variational free energy.
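The variational free energy minimized by all of these architectures has a standard form in the active-inference literature. Writing o for observations, s for hidden states, q(s) for the approximate posterior, and p(o, s) for the generative model:

```latex
F = \mathbb{E}_{q(s)}\big[\ln q(s) - \ln p(o, s)\big]
  = \underbrace{D_{\mathrm{KL}}\big[q(s)\,\big\|\,p(s \mid o)\big]}_{\ge 0} - \ln p(o)
  \;\ge\; -\ln p(o).
```

Minimizing F therefore simultaneously tightens an upper bound on surprise, -ln p(o), and drives the approximate posterior q(s) toward the true posterior p(s | o), which is the formal sense in which perception and action share a single objective.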
Abstract:
The recent trend of moving Cloud Computing capabilities to the Edge of the network is reshaping how applications and their middleware supports are designed, deployed, and operated. This new model envisions a continuum of virtual resources between the traditional cloud and the network edge, which is potentially better suited to meet the heterogeneous Quality of Service (QoS) requirements of diverse application domains and next-generation applications. Several classes of advanced Internet of Things (IoT) applications, e.g., in the industrial manufacturing domain, exhibit heterogeneous QoS requirements and call for QoS management systems that guarantee and control performance indicators, even in the presence of real-world factors such as limited bandwidth and concurrent virtual-resource utilization. The present dissertation proposes a comprehensive QoS-aware architecture that addresses the challenges of integrating cloud infrastructure with edge nodes in IoT applications. The architecture provides end-to-end QoS support by incorporating several components for managing physical and virtual resources. The proposed architecture features: i) a multilevel middleware for resolving the convergence between Operational Technology (OT) and Information Technology (IT), ii) an end-to-end QoS management approach compliant with the Time-Sensitive Networking (TSN) standard, iii) new approaches for virtualized network environments, such as running TSN-based applications under Ultra-low Latency (ULL) constraints in virtual and 5G environments, and iv) an accelerated and deterministic container overlay network architecture.
Additionally, the QoS-aware architecture includes two novel middleware components: i) one that transparently integrates multiple acceleration technologies in heterogeneous Edge contexts, and ii) a QoS-aware middleware for serverless platforms that coordinates various QoS mechanisms and a virtualized Function-as-a-Service (FaaS) invocation stack to manage end-to-end QoS metrics. Finally, all architecture components were tested and evaluated on realistic testbeds, demonstrating the efficacy of the proposed solutions.
Abstract:
The neural networks customized and tested in this thesis (WaldoNet, FlowNet and PatchNet) are a first exploration of and approach to the template matching task; the possibilities for extension are therefore many, and several are proposed in the thesis. I analyzed how the classical algorithms work and adapted them using deep learning. The features extracted from both the template and the query images resemble the keypoints of the SIFT algorithm. Then, instead of a similarity function or keypoint matching, WaldoNet and PatchNet use a convolutional layer to compare the features, while FlowNet uses a correlation layer. In addition, I identified the major challenges of the template matching task (affine/non-affine transformations, intensity changes, ...) and addressed them with a careful design of the dataset.
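The "compare features by correlation" idea underlying the correlation layer can be illustrated in miniature. The sketch below is a plain normalized cross-correlation over raw pixels, not the thesis's learned-feature version; names and data are invented for illustration.

```python
import numpy as np

def correlation_map(query, template):
    """Slide the template over the query and score each offset with a
    normalized dot product (a correlation layer in miniature)."""
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.linalg.norm(t)
    out_h = query.shape[0] - th + 1
    out_w = query.shape[1] - tw + 1
    scores = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            p = query[i:i + th, j:j + tw]
            p = p - p.mean()
            scores[i, j] = (p * t).sum() / (np.linalg.norm(p) * t_norm + 1e-12)
    return scores

rng = np.random.default_rng(1)
query = rng.standard_normal((32, 32))
template = query[10:18, 6:14].copy()    # plant the template at offset (10, 6)
scores = correlation_map(query, template)
best = np.unravel_index(np.argmax(scores), scores.shape)   # recovers (10, 6)
```

Replacing the raw pixel patches here with learned feature maps, and the explicit loop with a convolution, is essentially what moving from classical matching to the networks described above amounts to.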
Abstract:
The thesis presents the UHF-band transceiver project carried out under the lead of the Spacemind company. In particular, it reports the outcome of the first phase of the project, encompassing management tasks, requirements definition, and the first electrical design. It then describes the study of the UHF-band antenna, developed in parallel with the transceiver. The antenna and the transceiver will be sold together as a complete UHF telecommunication system for CubeSats made by Spacemind. As a main result, this work contributed to the design and manufacturing of the first transceiver prototype.
Abstract:
The present review describes the main features of nickel hydroxide modified electrodes, covering their structural and electrochemical behavior and the newest advances promoted by nanostructured architectures. Important aspects such as synthetic procedures and characterization techniques, including X-ray diffraction, Raman and infrared spectroscopy, and electron microscopy, are detailed herein. The most important aspect of nickel hydroxide is its great versatility across different fields of electrochemical devices, such as batteries, electrocatalytic systems, and electrochromic electrodes; the fundamental issues of these devices are also discussed. Finally, some of the newest advances achieved in each field through the incorporation of nanomaterials are presented.
Abstract:
Ticlopidine hydrochloride (TICLID®) is a platelet antiaggregating agent widely used as a potent antithrombotic pharmaceutical ingredient, even though this drug has not been well characterized in the solid state: only the crystal phase used for drug product manufacturing is known. Here, a new polymorph of ticlopidine hydrochloride was discovered and its structure determined. While the antecedent polymorph crystallizes in the triclinic space group P-1, the new crystal phase was solved in the monoclinic space group P21/c. Both polymorphs crystallize as racemic mixtures of enantiomeric (ticlopidine)+ cations. Detailed geometrical and packing comparisons between the crystal structures of the two polymorphs allowed us to understand how the different supramolecular architectures are assembled. The main difference between the two polymorphs is a rotation of about 120° about the bridging bond between the thienopyridine and o-chlorobenzyl moieties. The differing o-chlorobenzyl conformation is related to changed patterns of weak intermolecular contacts involving this moiety, such as edge-to-face Cl···π and C-H···π interactions in the new polymorph and face-to-face π···π contacts in the triclinic crystal phase, leading to a symmetry increase in the ticlopidine hydrochloride solid-state form described for the first time in this study. Other conformational features, such as the thienopyridine puckering and the o-chlorophenyl orientation, differ slightly between the two polymorphs and were also correlated with the crystal packing patterns.
Abstract:
The control of molecular architectures has been a key factor in the use of Langmuir-Blodgett (LB) films in biosensors, especially because biomolecules can be immobilized with preserved activity. In this paper we investigated the incorporation of tyrosinase (Tyr) into mixed Langmuir films of arachidic acid (AA) and a lutetium bisphthalocyanine (LuPc2), confirmed by a large expansion in the surface pressure isotherm. These mixed AA-LuPc2 + Tyr films could be transferred onto ITO and Pt electrodes, as indicated by FTIR and electrochemical measurements, and there was no need to crosslink the enzyme molecules to preserve their activity. Significantly, the activity of the immobilised Tyr was considerably higher than in previous work in the literature, which allowed Tyr-containing LB films to be used as highly sensitive voltammetric sensors to detect pyrogallol. Linear responses were found up to 400 µM, with a detection limit of 4.87 × 10⁻² µM (n = 4) and a sensitivity of 1.54 µA µM⁻¹ cm⁻². In addition, the Hill coefficient (h = 1.27) indicates cooperation with LuPc2, which also acts as a catalyst. The enhanced performance of the LB-based biosensor therefore resulted from the preserved activity of Tyr combined with the catalytic activity of LuPc2, in a strategy that can be extended to other enzymes and analytes by varying the LB film architecture.
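Figures of merit like these are typically derived from a linear calibration curve. The sketch below is illustrative: the calibration points and the blank standard deviation are invented, with only the slope chosen to mirror the reported sensitivity; the 3σ rule is the common IUPAC-style detection-limit criterion, not necessarily the one used in the paper.

```python
import numpy as np

# Invented calibration points; only the slope mirrors the reported sensitivity.
conc = np.array([0.0, 50.0, 100.0, 200.0, 300.0, 400.0])   # pyrogallol (µM)
current = 1.54 * conc + 0.30                                # response (µA/cm²)

slope, intercept = np.polyfit(conc, current, 1)   # sensitivity (µA µM⁻¹ cm⁻²)
sd_blank = 0.025                                  # assumed blank noise (µA/cm²)
lod = 3.0 * sd_blank / slope                      # 3σ detection limit (µM)
```

With these assumed numbers the 3σ criterion yields a detection limit of the same order as the 4.87 × 10⁻² µM reported above, which shows how the slope and blank noise jointly set the limit.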
Abstract:
Support for the interoperability and interchangeability of software components in a fieldbus automation system relies on the definition of open architectures, most of which involve proprietary technologies. Concurrently, standard, open, and non-proprietary technologies such as XML, SOAP, and Web Services have greatly evolved and become widespread in computing. This article presents a FOUNDATION Fieldbus™ device description technology named Open-EDD, based on XML and other related technologies (XSLT, DOM using the Xerces implementation, OO, XML Schema), proposing an open and non-proprietary alternative to EDD (Electronic Device Description). This initial proposal includes defining Open-EDDML as the programming language of the technology in the FOUNDATION Fieldbus™ protocol, implementing a compiler and a parser, and finally integrating and testing the new technology using field devices and a commercial fieldbus configurator. This study shows that the new technology is feasible and can be applied to other configurators or HMI applications used in fieldbus automation systems. © 2008 Elsevier B.V. All rights reserved.
Abstract:
This paper analyses an optical network architecture composed of an arrangement of nodes equipped with multi-granular optical cross-connects (MG-OXCs) in addition to the usual optical cross-connects (OXCs). Selected network nodes can then perform both waveband and traffic grooming operations, and our goal is to assess the improvement in network performance brought by these additional capabilities. Specifically, the influence of the MG-OXC multi-granularity on the blocking probability is evaluated for 16 classes of service over a network based on the NSFNet topology. A bandwidth-capacity fairness mechanism is also added to the connection admission control to manage the blocking probabilities of all kinds of bandwidth requirements. Comprehensive computational simulations are carried out to compare eight distinct node architectures, showing that an adequate combination of waveband and single-wavelength ports in the MG-OXCs and OXCs allows more efficient operation of a WDM optical network carrying multi-rate traffic.
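The paper evaluates blocking by simulation over a full topology; as generic background (not the paper's model), the classical single-link reference point is the Erlang-B formula, which can be computed with a short, numerically stable recurrence:

```python
def erlang_b(offered_erlangs, channels):
    """Erlang-B blocking probability via the stable recurrence
    B(0) = 1,  B(c) = a*B(c-1) / (c + a*B(c-1))."""
    b = 1.0
    for c in range(1, channels + 1):
        b = offered_erlangs * b / (c + offered_erlangs * b)
    return b

# 10 erlangs offered to 10 channels blocks roughly one call in five;
# doubling the channel count drives blocking down sharply.
b_tight = erlang_b(10.0, 10)
b_loose = erlang_b(10.0, 20)
```

Network-level studies like this one need simulation precisely because multi-hop routing, wavelength continuity, and multi-rate traffic break the single-link Poisson assumptions behind this closed form.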
Abstract:
The computational design of a composite whose constituent properties change gradually within a unit cell can be successfully achieved by means of a material design method that combines topology optimization with homogenization. This is an iterative numerical method, which changes the composite material unit cell until the desired properties (or performance) are obtained. Such a method has been applied to several types of materials in the last few years. In this work, the objective is to extend the material design method to obtain functionally graded material architectures, i.e. materials that are graded at the local (e.g. microstructural) level. Consistent with this goal, a continuum distribution of the design variable inside the finite element domain is considered to represent a fully continuous material variation during the design process. Thus the topology optimization naturally leads to a smoothly graded material system. To illustrate the theoretical and numerical approaches, numerical examples are provided. The homogenization method is verified by considering one-dimensional material gradation profiles for which analytical solutions for the effective elastic properties are available. The verification of the homogenization method is extended to two dimensions considering a trigonometric material gradation, and a material variation with discontinuous derivatives. These are also used as benchmark examples to verify the optimization method for functionally graded material cell design. Finally, the influence of material gradation on extreme materials is investigated, including materials with near-zero shear modulus and materials with negative Poisson's ratio.
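The kind of 1-D verification mentioned above can be reproduced in a few lines. For a bar loaded in series, the effective modulus is the harmonic mean of the gradation profile (the Reuss bound); the linear E(x) profile below is an assumed example with a closed-form answer, not one of the paper's actual benchmark profiles.

```python
import numpy as np

def effective_modulus_series(E, n=100_000):
    """Reuss (series-loading) effective modulus of a 1-D graded bar on [0, 1]:
    E_eff = 1 / integral of 1/E(x), evaluated here by the midpoint rule."""
    x = (np.arange(n) + 0.5) / n
    return 1.0 / np.mean(1.0 / E(x))

# Assumed linear gradation E(x) = E0 + (E1 - E0) * x, which has a closed form:
E0, E1 = 1.0, 5.0
numeric = effective_modulus_series(lambda x: E0 + (E1 - E0) * x)
analytic = (E1 - E0) / np.log(E1 / E0)   # harmonic mean of the linear profile
```

Agreement between the quadrature and the closed form is the 1-D sanity check; the paper's contribution is doing the analogous comparison for 2-D trigonometric and non-smooth gradations, where no such closed form exists.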
Abstract:
Polymer-clay nanocomposites are materials with many interesting structures, properties, and potential applications. Microstructural evaluation of a nanocomposite is not an easy task, as clay may form hierarchical structures that look different when observed at various magnifications under a microscope, and as the concepts of "intercalation" and "exfoliation" are not sufficient by themselves to describe its morphology. In this work, polymer-clay nanocomposites of polystyrene and two styrene-containing block copolymers (styrene-butadiene-styrene and styrene-ethylene/butylene-styrene) were prepared using three different techniques. Clay dispersion was evaluated by a recently developed microscopy image analysis procedure combining optical and transmission electron micrographs, and the characterization was complemented by X-ray diffraction and rheological measurements. The results showed better clay dispersion for both block copolymer nanocomposites, mainly due to their molecular architectures. Moreover, the techniques that gave the best results involved mixing the materials in a solvent medium. POLYM. ENG. SCI., 50:257-267, 2010. © 2009 Society of Plastics Engineers.