995 results for Software components
Abstract:
This talk explores how the runtime system and operating system can leverage metrics that express the significance and resilience of application components in order to reduce the energy footprint of parallel applications. We will explore in particular how software can tolerate and indeed exploit higher error rates in future processors and memory technologies that may operate outside their safe margins.
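As a hedged illustration of the idea only (not the speakers' runtime or OS API), the sketch below shows a toy scheduler that consults a per-task significance metric to decide whether a task must be computed exactly or may fall back to a cheaper, error-tolerant approximation under an energy budget; all names, costs and thresholds are hypothetical.

    # Toy sketch of significance-aware execution; hypothetical API, not the talk's runtime.
    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Task:
        run: Callable[[], float]          # exact computation
        approximate: Callable[[], float]  # cheaper, error-tolerant fallback
        significance: float               # 0.0 (expendable) .. 1.0 (critical)

    def execute(tasks, energy_budget, cost_exact=1.0, cost_approx=0.3, threshold=0.7):
        """Run critical tasks exactly; approximate the rest when energy is scarce."""
        results = []
        for t in sorted(tasks, key=lambda t: -t.significance):
            if t.significance >= threshold or energy_budget >= cost_exact:
                results.append(t.run())
                energy_budget -= cost_exact
            else:
                results.append(t.approximate())   # tolerate reduced precision
                energy_budget -= cost_approx
        return results

    tasks = [Task(lambda: 42.0, lambda: 40.0, significance=0.9),
             Task(lambda: 3.14, lambda: 3.0, significance=0.2)]
    print(execute(tasks, energy_budget=1.0))   # critical task runs exactly; the other is approximated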
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
Modern software application testing, such as the testing of software driven by graphical user interfaces (GUIs) or leveraging event-driven architectures in general, requires paying careful attention to context. Model-based testing (MBT) approaches first acquire a model of an application, then use the model to construct test cases covering relevant contexts. A major shortcoming of state-of-the-art automated model-based testing is that many test cases proposed by the model are not actually executable. These infeasible test cases threaten the integrity of the entire model-based suite, and any coverage of contexts the suite aims to provide. In this research, I develop and evaluate a novel approach for classifying the feasibility of test cases. I identify a set of pertinent features for the classifier, and develop novel methods for extracting these features from the outputs of MBT tools. I use a supervised logistic regression approach to obtain a model of test case feasibility from a randomly selected training suite of test cases. I evaluate this approach with a set of experiments. The outcomes of this investigation are as follows: I confirm that infeasibility is prevalent in MBT, even for test suites designed to cover a relatively small number of unique contexts. I confirm that the frequency of infeasibility varies widely across applications. I develop and train a binary classifier for feasibility with average overall error, false positive, and false negative rates under 5%. I find that unique event IDs are key features of the feasibility classifier, while model-specific event types are not. I construct three types of features from the event IDs associated with test cases, and evaluate the relative effectiveness of each within the classifier. To support this study, I also develop a number of tools and infrastructure components for scalable execution of automated jobs, which use state-of-the-art container and continuous integration technologies to enable parallel test execution and the persistence of all experimental artifacts.
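A minimal sketch of the kind of classifier described above: logistic regression over event-ID features of test cases. The bag-of-event-ID feature construction, the tiny made-up data set and the labels are illustrative stand-ins, not the dissertation's actual feature extraction or experimental pipeline.

    # Illustrative feasibility classifier: logistic regression over bag-of-event-ID features.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression

    # Made-up event-ID sequences for a handful of test cases, with feasibility labels.
    test_cases = ["e12 e47 e3", "e5 e5 e19", "e47 e3 e88", "e7 e19 e12"]
    feasible   = [1, 0, 1, 0]        # 1 = executable, 0 = infeasible

    vec = CountVectorizer(token_pattern=r"\S+")   # one feature per unique event ID
    X = vec.fit_transform(test_cases)

    clf = LogisticRegression(max_iter=1000).fit(X, feasible)
    print(clf.predict(vec.transform(["e47 e3 e5"])))   # predicted feasibility of an unseen test case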
Abstract:
This portfolio thesis describes work undertaken by the author under the Engineering Doctorate program of the Institute for System Level Integration. It was carried out in conjunction with the sponsor company Teledyne Defence Limited. Radar warning receivers are devices used to detect and identify the emissions of radars. They were originally developed during the Second World War and are found today on a variety of military platforms as part of the platform’s defensive systems. Teledyne Defence has designed and built components and electronic subsystems for the defence industry since the 1970s. This thesis documents part of the work carried out to create Phobos, Teledyne Defence’s first complete radar warning receiver. Phobos was designed to be the first low-cost radar warning receiver. This was made possible by the reuse of existing Teledyne Defence products, commercial off-the-shelf hardware and advanced UK government algorithms. The challenges of this integration are described and discussed, with detail given of the software architecture and the development of the embedded application. The performance of the embedded system as a whole is described and qualified within the context of a low-cost system.
Abstract:
Embedded software systems in vehicles are of rapidly increasing commercial importance for the automotive industry. Current systems employ a static run-time environment, owing to the difficulty and cost involved in developing dynamic systems in a high-integrity embedded control context. A system that is dynamic in its configuration would greatly increase the flexibility of the offered functionality and enable customised software configuration for individual vehicles, adding customer value through plug-and-play capability and increased quality due to its inherent ability to adjust to changes in hardware and software. We envisage an automotive system containing a variety of components, from a multitude of organizations, not necessarily known at development time. The system dynamically adapts its configuration to suit the run-time system constraints. This paper presents our vision for future automotive control systems, to be investigated in an EU research project referred to as DySCAS (Dynamically Self-Configuring Automotive Systems). We propose a self-configuring vehicular control system architecture with capabilities that include automatic discovery and inclusion of new devices, self-optimisation to make the best use of the available processing, storage and communication resources, self-diagnostics and ultimately self-healing. Such an architecture has benefits extending to reduced development and maintenance costs, improved passenger safety and comfort, and flexible owner customisation. Specifically, this paper addresses the following issues: the state of the art of embedded software systems in vehicles, emphasising the current limitations arising from fixed run-time configurations; and the benefits and challenges of dynamic configuration, which gives rise to opportunities for self-healing, self-optimisation, and the automatic inclusion of users’ consumer electronics (CE) devices. Our proposal for a dynamically reconfigurable automotive software platform is outlined, and a typical use case is presented to exemplify the benefits of the envisioned dynamic capabilities.
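To make the plug-and-play idea concrete, here is a small, purely illustrative sketch of a registry that admits a newly discovered component at run time and places it on the node with the most spare capacity; the node names, resource figures and placement rule are hypothetical, not the DySCAS architecture itself.

    # Illustrative plug-and-play registry: discover a component at run time, then place it
    # on the node with the most spare capacity. Names and numbers are hypothetical.
    nodes = {"head_unit": {"free_cpu": 40}, "body_ecu": {"free_cpu": 15}, "gateway": {"free_cpu": 25}}
    deployed = {}   # component name -> node name

    def discover(component, cpu_needed):
        """Called when a new device/component announces itself on the vehicle network."""
        candidates = [n for n, r in nodes.items() if r["free_cpu"] >= cpu_needed]
        if not candidates:
            return None                                   # reject: no feasible placement
        best = max(candidates, key=lambda n: nodes[n]["free_cpu"])
        nodes[best]["free_cpu"] -= cpu_needed
        deployed[component] = best
        return best

    print(discover("phone_media_streamer", cpu_needed=20))   # placed on 'head_unit'
    print(discover("diagnostics_logger", cpu_needed=25))     # placed on 'gateway'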
Abstract:
Call Level Interfaces (CLI) play a key role in the business tiers of relational, and some NoSQL, database applications whenever fine-grained control between application tiers and the host databases is a key requirement. Unfortunately, in spite of this significant advantage, CLI are low-level APIs and therefore do not address high-level architectural requirements. Two examples stand out: a) the need to decide whether or not to decouple the development process of business tiers from that of application tiers, and b) the need to automatically adapt business tiers to new business and/or security needs at runtime. To tackle these CLI drawbacks while keeping their advantages, this paper proposes an architecture relying on CLI from which multi-purpose business tier components are built, herein referred to as Adaptable Business Tier Components (ABTC). Beyond the reference architecture, this paper presents a proof of concept based on Java and Java Database Connectivity (an example of a CLI).
Abstract:
Call Level Interfaces (CLI) are low-level APIs that play a key role in database applications whenever fine-grained control between application tiers and the host databases is a key requirement. Unfortunately, in spite of this significant advantage, CLI were not designed to address organizational requirements or contextual runtime requirements. Among the examples we emphasize the need to decide whether or not to decouple the development process of business tiers from that of application tiers, and the need to automatically adapt to new business and/or security needs at runtime. To tackle these CLI drawbacks while keeping their advantages, this paper proposes an architecture relying on CLI from which multi-purpose business tier components are built, herein referred to as Adaptable Business Tier Components (ABTC). This paper presents the reference architecture for these components and a proof of concept based on Java and Java Database Connectivity (an example of a CLI).
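A hedged sketch of the ABTC idea common to this and the previous abstract: a business-tier component that wraps a Call Level Interface and can change its data-access policy at run time. The papers' proof of concept uses Java/JDBC; below, Python's DB-API (sqlite3) stands in as the CLI purely for illustration, and every class and method name is hypothetical.

    # Sketch of an Adaptable Business Tier Component (ABTC) over a CLI.
    # Python's DB-API (sqlite3) stands in for JDBC; names are hypothetical.
    import sqlite3

    class OrdersABTC:
        """Business-tier component whose column-exposure policy can be re-configured at run time."""
        def __init__(self, conn):
            self.conn = conn
            self.columns = ["id", "customer", "total"]      # default exposure policy

        def adapt(self, columns):
            """Runtime adaptation: e.g. hide 'total' when a stricter security policy is pushed."""
            self.columns = columns

        def list_orders(self):
            sql = f"SELECT {', '.join(self.columns)} FROM orders"
            return self.conn.execute(sql).fetchall()

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (id INTEGER, customer TEXT, total REAL)")
    conn.execute("INSERT INTO orders VALUES (1, 'acme', 99.5)")

    abtc = OrdersABTC(conn)
    print(abtc.list_orders())            # [(1, 'acme', 99.5)]
    abtc.adapt(["id", "customer"])       # new security need: stop exposing totals
    print(abtc.list_orders())            # [(1, 'acme')]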
Abstract:
Master's dissertation—Universidade de Brasília, Instituto de Ciências Exatas, Departamento de Ciência da Computação, 2015.
Abstract:
Master's dissertation—Universidade de Brasília, Instituto de Física, Programa de Pós-Graduação de Mestrado Profissional em Ensino de Física, Mestrado Nacional Profissional em Ensino de Física, 2015.
Abstract:
Stainless steel is widely used in seawater reverse osmosis (SWRO) units for its good mechanical and corrosion resistance properties. However, many corrosion failures of stainless steel in SWRO desalination units have been reported. These failures may often be attributed to unsuitable stainless steel grade selection and/or to the particularly aggressive seawater conditions in "warm" regions (high ambient temperature, severe biofouling, etc.). Cathodic protection (CP) is a well-known, efficient system for preventing corrosion of metallic materials in seawater. It is successfully used in the oil and gas industry to protect carbon steel structures exposed in the open sea. However, the specific service conditions of SWRO units may seriously affect the efficiency of such an anti-corrosion system (high flow rates, large stainless steel surfaces affected by biofouling, confinement limiting protective cathodic current flow, etc.). Hence, CP in SWRO units should be considered with special care, and modeling appears to be a useful tool for assessing an appropriate CP design. However, there is a clear lack of CP data that could be transposed to SWRO service conditions (stainless steel, effect of biofouling, high flow rate, etc.). Against this background, a Joint Industry Program was initiated, including laboratory exposures, field measurements in a full-scale SWRO desalination plant, and modeling work using the PROCOR software. The present paper reviews the main parameters affecting corrosion of stainless steel alloys in seawater reverse osmosis units. CP on specific stainless steel devices was investigated in order to assess its actual efficiency for SWRO units. Severe environmental conditions were intentionally used to promote corrosion on the tested stainless steel products in order to evaluate the efficiency of CP. The study includes modeling work aimed at predicting and designing suitable CP for the modeled stainless steel units. An excellent correlation between the modeling work and the field measurements was found.
Abstract:
The low-frequency electromagnetic compatibility (EMC) is an increasingly important aspect in the design of practical systems to ensure the functional safety and reliability of complex products. The opportunities for using numerical techniques to predict and analyze a system's EMC are therefore of considerable interest in many industries. As the first phase of the study, a proper model, including all the details of the components, was required. Therefore, advances in EMC modeling were reviewed, classifying analytical and numerical models. The selected approach was finite element (FE) modeling coupled with the distributed network method, used to generate models of the converter's components and to obtain the frequency behavioral model of the converter. The method has the ability to reveal the behavior of parasitic elements and higher resonances, which have a critical impact on EMI studies. For the EMC and signature studies of machine drives, equivalent source modeling was studied. Considering the details of the multi-machine environment, including actual models, innovations in equivalent source modeling were introduced to decrease the simulation time dramatically. Several models were designed in this study, and the voltage-current cube model and the wire model gave the best results. A GA-based PSO method is used as the optimization process. Superposition and suppression of the fields in coupling the components were also studied and verified. The simulation time of the equivalent model is 80-100 times lower than that of the detailed model. All tests were verified experimentally. As an application of the EMC and signature study, fault diagnosis and condition monitoring of an induction motor drive were developed using radiated fields. In addition to experimental tests, the 3D FE analysis was coupled with circuit-based software to implement the incipient fault cases. The identification was implemented using an ANN for seventy different fault cases. The simulation results were verified experimentally. Finally, identification of the types of power components was implemented. The results show that it is possible to identify the type of components, as well as the faulty components, by comparing the amplitudes of their stray field harmonics. The identification using the stray fields is nondestructive and can be used for setups that cannot go offline and be dismantled.
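As a hedged illustration of the optimisation step mentioned above, the following is a bare-bones particle swarm optimizer fitting equivalent-source parameters against a synthetic objective. It is plain PSO rather than the GA-based variant used in the study, and the objective function, target values and swarm constants are placeholders.

    # Generic particle swarm optimization sketch (placeholder objective, illustrative constants).
    import numpy as np

    rng = np.random.default_rng(0)
    target = np.array([1.5, -0.7, 2.0])                  # synthetic "true" equivalent-source parameters

    def field_mismatch(p):
        return np.sum((p - target) ** 2)                 # stand-in for measured-vs-modelled field error

    n_particles, dim, iters = 30, 3, 200
    w, c1, c2 = 0.7, 1.5, 1.5                            # inertia and acceleration coefficients
    pos = rng.uniform(-5, 5, (n_particles, dim))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_val = np.array([field_mismatch(p) for p in pos])
    gbest = pbest[np.argmin(pbest_val)].copy()

    for _ in range(iters):
        r1, r2 = rng.random((n_particles, dim)), rng.random((n_particles, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos += vel
        vals = np.array([field_mismatch(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()

    print(gbest)   # should approach the synthetic target parameters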
Abstract:
Through modelling activity, experimental campaigns, test bench and on-field validation, a complete powertrain for a BEV has been designed, assembled and used in a motorsport competition. The activity can be split into three main subjects, representing the three key components of a BEV. First of all, a model of the entire powertrain was developed in order to understand how the various design choices would influence the race lap time. The data obtained were then used to design, build and test a first battery pack. After bench tests and track tests, it was understood that, by using all the cell characteristics without breaking the rules' limitations, higher energy and power densities could be achieved. An updated battery pack was then designed, produced and raced at Motostudent 2018, resulting in a third place on debut. The second topic of this PhD was the design of novel inverter topologies. Three inverters have been designed, two of them using Gallium Nitride devices, a promising semiconductor technology that can achieve high switching speeds while maintaining low switching losses. High switching frequency is crucial to reduce the DC-bus capacitor and thus increase the power density of three-phase inverters. The third inverter uses classic Silicon devices but employs a ZVS (Zero Voltage Switching) topology. Despite the increased complexity of both the hardware and the control software, it can offer reduced switching losses while using conventional and established silicon MOSFET technology. Finally, the mechanical parts of a three-phase permanent magnet motor have been designed with the aim of employing it in UniBo Motorsport’s 2020 Formula Student car.
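As a hedged back-of-the-envelope illustration of the pack-level trade-off mentioned above (energy and power density following from cell characteristics and the series/parallel layout), the sketch below uses invented cell figures, not data from the Motostudent pack.

    # Back-of-the-envelope pack sizing from cell data (all numbers invented for illustration).
    cell = {"capacity_Ah": 3.0, "nominal_V": 3.6, "max_discharge_A": 20.0, "mass_kg": 0.048}
    series, parallel = 96, 4        # hypothetical pack layout

    pack_energy_Wh = series * parallel * cell["capacity_Ah"] * cell["nominal_V"]
    pack_peak_kW   = series * cell["nominal_V"] * parallel * cell["max_discharge_A"] / 1000
    pack_mass_kg   = series * parallel * cell["mass_kg"]

    print(f"energy  {pack_energy_Wh:.0f} Wh")
    print(f"peak    {pack_peak_kW:.1f} kW")
    print(f"density {pack_energy_Wh / pack_mass_kg:.0f} Wh/kg (cells only, no housing or BMS)")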
Abstract:
The objective of the thesis project, developed within the Line Control & Software Engineering team of the G.D company, is to analyze and identify an appropriate tool to automate the HW configuration process using Beckhoff technologies by importing data from an ECAD tool. This would save a great deal of time, since the I/O topology created as part of the electrical planning is presently imported manually into the related SW project of the machine. Moreover, a manual import is more error-prone, because of human mistakes, than an automatic configuration tool. First, an introduction to TwinCAT 3, EtherCAT and the Automation Interface is provided; then the official Beckhoff tool, XCAD Interface, is analyzed, together with the requirements it places on the electrical planning: the interface is realized by means of the AutomationML format. Finally, due to some limitations observed, a company-internal tool is designed and implemented. Tests and validation of the tool are performed on a sample production line of the company.
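To give a flavour of what such an import involves, here is a hedged sketch that walks an AutomationML (CAEX/XML) export and lists the named elements that might become EtherCAT couplers or terminals in the SW project. The file name and the filtering convention are assumptions, and the actual Beckhoff Automation Interface calls that would create the I/O tree are not shown.

    # Sketch: enumerate InternalElement names from an AutomationML (CAEX) export.
    # 'plant_export.aml' and the EK/EL filtering rule are assumptions for illustration only.
    import xml.etree.ElementTree as ET

    tree = ET.parse("plant_export.aml")
    root = tree.getroot()

    # CAEX elements are usually namespaced; match on the local tag name to stay generic.
    for elem in root.iter():
        if elem.tag.rsplit("}", 1)[-1] == "InternalElement":
            name = elem.get("Name", "")
            if name.startswith(("EK", "EL")):     # assumed naming convention for couplers/terminals
                print(name)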
Abstract:
This study investigates the effect of an additive process in the manufacturing of thick composites. The Airstone 780 E epoxy resin and 785H hardener system is used in the analysis since it is widely used in wind turbine blades, namely in thick components. As fiber, a fabric by SAERTEX (812 g/m2) with a 0-90 degree layup direction is used. Temperature overshoot is a major issue during the manufacturing of thick composites. A high temperature overshoot leads to an increase in residual stresses. These residual stresses cause warping, delamination, dimensional instability, and undesired distortion of composite structures. A coupled thermo-mechanical model capable of predicting cure-induced residual stresses has been built using the commercial FE software Abaqus®. The possibility of building thick composite components by adding a finite number of sub-laminates has been investigated. The results have been compared against components manufactured following a standard route. The influence of pre-curing the sub-laminates has also been addressed and the results compared with standard practice. As a result of the study, it is found that introducing an additive process can prevent temperature overshoot and is beneficial with regard to residual stress generation during the curing process. However, the required process time increases by 50%, thereby increasing manufacturing costs. An optimized cure cycle is required to minimize process time and cure-induced defects simultaneously.
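As a hedged aside, the coupling that drives the overshoot can be shown with a lumped (0-D) cure model: an Arrhenius/autocatalytic cure rate feeds reaction heat back into the laminate temperature, which in turn accelerates the cure. All parameter values below are generic illustrations, not a characterisation of the Airstone 780 E / 785H system or the Abaqus model from the study.

    # Lumped (0-D) sketch of cure-temperature coupling; generic parameters, not the study's material data.
    import numpy as np
    from scipy.integrate import solve_ivp

    A, E, R = 1.0e5, 55e3, 8.314        # pre-exponential [1/s], activation energy [J/mol], gas constant
    m, n = 0.5, 1.5                     # autocatalytic exponents
    H_r = 250e3                         # heat of reaction per kg of resin [J/kg]
    cp = 1100.0                         # specific heat [J/(kg K)]
    h_eff = 1e-3                        # crude lumped heat-loss coefficient toward the tool [1/s]
    T_mould = 70 + 273.15               # isothermal cure temperature [K]

    def rhs(t, y):
        alpha, T = y
        dalpha = A * np.exp(-E / (R * T)) * (alpha + 1e-3) ** m * (1 - alpha) ** n
        dT = (H_r / cp) * dalpha - h_eff * (T - T_mould)   # exotherm vs. heat lost to the tool
        return [dalpha, dT]

    sol = solve_ivp(rhs, (0, 6 * 3600), [0.0, T_mould], max_step=5.0)
    print(f"peak temperature: {sol.y[1].max() - 273.15:.1f} C, final degree of cure: {sol.y[0, -1]:.2f}")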
Abstract:
The BP (Bundle Protocol) version 7 has been recently standardized by IETF in RFC 9171, but it is the whole DTN (Delay-/Disruption-Tolerant Networking) architecture, of which BP is the core, that is gaining a renewed interest, thanks to its planned adoption in future space missions. This is obviously positive, but at the same time it seems to make space agencies more interested in deployment than in research, with new BP implementations that may challenge the central role played until now by the historical BP reference implementations, such as ION and DTNME. To make Unibo research on DTN independent of space agency decisions, the development of an internal BP implementation was in order. This is the goal of this thesis, which deals with the design and implementation of Unibo-BP: a novel, research-driven BP implementation, to be released as Free Software. Unibo-BP is fully compliant with RFC 9171, as demonstrated by a series of interoperability tests with ION and DTNME, and presents a few innovations, such as the ability to manage remote DTN nodes by means of the BP itself. Unibo-BP is compatible with pre-existing Unibo implementations of CGR (Contact Graph Routing) and LTP (Licklider Transmission Protocol) thanks to interfaces designed during the thesis. The thesis project also includes an implementation of TCPCLv3 (TCP Convergence Layer version 3, RFC 7242), which can be used as an alternative to LTPCL to connect with proximate nodes, especially in terrestrial networks. Summarizing, Unibo-BP is at the heart of a larger project, Unibo-DTN, which aims to implement the main components of a complete DTN stack (BP, TCPCL, LTP, CGR). Moreover, Unibo-BP is compatible with all DTNsuite applications, thanks to an extension of the Unified API library on which DTNsuite applications are based. The hope is that Unibo-BP and all the ancillary programs developed during this thesis will contribute to the growth of DTN popularity in academia and among space agencies.
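As a hedged, very partial illustration of what a BPv7 implementation must produce, the snippet below CBOR-encodes a primary-block-like array in the spirit of RFC 9171 (version 7, control flags, CRC type, destination/source/report-to EIDs, creation timestamp, lifetime). It omits CRC computation and all canonical blocks, the EID numbers are arbitrary, and the code is not taken from Unibo-BP.

    # Simplified BPv7-style primary block (CRC type 0, so no CRC field); illustrative only.
    import cbor2

    DTN_NONE = [1, 0]                   # dtn scheme (code 1), "none"
    dest     = [2, [268484800, 1]]      # ipn scheme (code 2): [node number, service number]
    source   = [2, [268484801, 1]]

    primary_block = [
        7,                  # BP version
        0,                  # bundle processing control flags
        0,                  # CRC type (0 = no CRC)
        dest,               # destination EID
        source,             # source node ID
        DTN_NONE,           # report-to EID
        [0, 0],             # creation timestamp: [DTN time in ms, sequence number]
        86400 * 1000,       # lifetime in ms
    ]

    encoded = cbor2.dumps(primary_block)
    print(encoded.hex())
    print(cbor2.loads(encoded))   # round-trip check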