810 results for algorithm design and analysis
Abstract:
This paper presents the development and evaluation of PICTOAPRENDE, an interactive software application designed to improve oral communication and to support the development of children and youth diagnosed with autism spectrum disorder (ASD) in Ecuador. To this end, the paper first analyzes the intervention area, describing the general characteristics of people with ASD and their status in Ecuador. The statistical techniques used for this evaluation form the basis of the study. A section then presents the research-based cognitive and social parameters of the intervention area. Finally, the algorithms used to obtain the measurements are described, and the experimental results are presented together with their analysis.
Abstract:
Scientific applications rely heavily on floating point data types. Floating point operations are complex and require complicated hardware that is both area and power intensive. The emergence of massively parallel architectures like Rigel creates new challenges and poses new questions with respect to floating point support. The massively parallel aspect of Rigel places great emphasis on area-efficient, low-power designs. At the same time, Rigel is a general purpose accelerator and must provide high performance for a wide class of applications. This thesis presents an analysis of various floating point unit (FPU) components with respect to Rigel, and proposes a candidate FPU design that balances performance, area, and power, and is suitable for massively parallel architectures like Rigel.
Abstract:
This thesis presents approximation algorithms for some NP-Hard combinatorial optimization problems on graphs and networks; in particular, we study problems related to Network Design. Under the widely-believed complexity-theoretic assumption that P is not equal to NP, there are no efficient (i.e., polynomial-time) algorithms that solve these problems exactly. Hence, if one desires efficient algorithms for such problems, it is necessary to consider approximate solutions: An approximation algorithm for an NP-Hard problem is a polynomial time algorithm which, for any instance of the problem, finds a solution whose value is guaranteed to be within a multiplicative factor of the value of an optimal solution to that instance. We attempt to design algorithms for which this factor, referred to as the approximation ratio of the algorithm, is as small as possible. The field of Network Design comprises a large class of problems that deal with constructing networks of low cost and/or high capacity, routing data through existing networks, and many related issues. In this thesis, we focus chiefly on designing fault-tolerant networks. Two vertices u,v in a network are said to be k-edge-connected if deleting any set of k − 1 edges leaves u and v connected; similarly, they are k-vertex connected if deleting any set of k − 1 other vertices or edges leaves u and v connected. We focus on building networks that are highly connected, meaning that even if a small number of edges and nodes fail, the remaining nodes will still be able to communicate. A brief description of some of our results is given below. We study the problem of building 2-vertex-connected networks that are large and have low cost. Given an n-node graph with costs on its edges and any integer k, we give an O(log n log k) approximation for the problem of finding a minimum-cost 2-vertex-connected subgraph containing at least k nodes. We also give an algorithm of similar approximation ratio for maximizing the number of nodes in a 2-vertex-connected subgraph subject to a budget constraint on the total cost of its edges. Our algorithms are based on a pruning process that, given a 2-vertex-connected graph, finds a 2-vertex-connected subgraph of any desired size and of density comparable to the input graph, where the density of a graph is the ratio of its cost to the number of vertices it contains. This pruning algorithm is simple and efficient, and is likely to find additional applications. Recent breakthroughs on vertex-connectivity have made use of algorithms for element-connectivity problems. We develop an algorithm that, given a graph with some vertices marked as terminals, significantly simplifies the graph while preserving the pairwise element-connectivity of all terminals; in fact, the resulting graph is bipartite. We believe that our simplification/reduction algorithm will be a useful tool in many settings. We illustrate its applicability by giving algorithms to find many trees that each span a given terminal set, while being disjoint on edges and non-terminal vertices; such problems have applications in VLSI design and other areas. We also use this reduction algorithm to analyze simple algorithms for single-sink network design problems with high vertex-connectivity requirements; we give an O(k log n)-approximation for the problem of k-connecting a given set of terminals to a common sink. 
We study similar problems in which different types of links, of varying capacities and costs, can be used to connect nodes; assuming there are economies of scale, we give algorithms to construct low-cost networks with sufficient capacity or bandwidth to simultaneously support flow from each terminal to the common sink along many vertex-disjoint paths. We further investigate capacitated network design, where edges may have arbitrary costs and capacities. Given a connectivity requirement R_uv for each pair of vertices u,v, the goal is to find a low-cost network which, for each uv, can support a flow of R_uv units of traffic between u and v. We study several special cases of this problem, giving both algorithmic and hardness results. In addition to Network Design, we consider certain Traveling Salesperson-like problems, where the goal is to find short walks that visit many distinct vertices. We give a (2 + epsilon)-approximation for Orienteering in undirected graphs, achieving the best known approximation ratio, and the first approximation algorithm for Orienteering in directed graphs. We also give improved algorithms for Orienteering with time windows, in which vertices must be visited between specified release times and deadlines, and other related problems. These problems are motivated by applications in the fields of vehicle routing, delivery and transportation of goods, and robot path planning.
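As a small illustration of the density notion at the heart of the pruning process, the following Python sketch computes the cost-per-vertex density and runs a deliberately naive pruning loop. It is a stand-in for the thesis's algorithm, with none of its guarantees; edge costs are assumed to live in a "cost" attribute of a networkx graph.

    import networkx as nx

    def density(G: nx.Graph) -> float:
        """Density as defined above: total edge cost / number of vertices."""
        return G.size(weight="cost") / G.number_of_nodes()

    def naive_prune(G: nx.Graph, target_size: int) -> nx.Graph:
        """Greedily drop vertices while staying 2-vertex-connected.

        Illustrative only: unlike the thesis's pruning process, this loop
        does not guarantee a density comparable to the input graph's.
        """
        H = G.copy()
        changed = True
        while H.number_of_nodes() > target_size and changed:
            changed = False
            # Try removing the vertices carrying the most edge cost first.
            for v in sorted(H.nodes, key=lambda u: H.degree(u, weight="cost"), reverse=True):
                if H.number_of_nodes() <= target_size:
                    break
                R = H.copy()
                R.remove_node(v)
                if R.number_of_nodes() >= 3 and nx.is_biconnected(R):
                    H = R
                    changed = True
        return H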
Abstract:
Metamaterials are 1D, 2D or 3D arrays of artificial atoms. The artificial atoms, called "meta-atoms", can be any component with tailorable electromagnetic properties, such as resonators, LC circuits, nanoparticles, and so on. By designing the properties of individual meta-atoms and the interaction created by putting them in a lattice, one can create a metamaterial with intriguing properties not found in nature. My Ph.D. work examines meta-atoms based on radio frequency superconducting quantum interference devices (rf-SQUIDs); their tunability with dc magnetic field, rf magnetic field, and temperature is studied. The rf-SQUIDs are superconducting split-ring resonators in which the usual capacitance is supplemented with a Josephson junction, which introduces strong nonlinearity in the rf properties. At relatively low rf magnetic field, a magnetic field tunability of the resonant frequency of up to 80 THz/Gauss by dc magnetic field is observed, and a total frequency tunability of 100% is achieved. The macroscopic quantum superconducting metamaterial also shows manipulable self-induced broadband transparency due to a qualitatively novel nonlinear mechanism that is different from conventional electromagnetically induced transparency (EIT) or its classical analogs. A near-complete disappearance of resonant absorption under a range of applied rf flux is observed experimentally and explained theoretically. The transparency comes from the intrinsic bi-stability and can be switched on/off easily by altering rf and dc magnetic fields, temperature and history. Hysteretic in situ 100% tunability of transparency paves the way for auto-cloaking metamaterials, intensity-dependent filters, and fast-tunable power limiters. An rf-SQUID metamaterial is shown to have qualitatively the same behavior as a single rf-SQUID with regard to dc flux, rf flux and temperature tuning. The two-tone response of self-resonant rf-SQUID meta-atoms and metamaterials is then studied via intermodulation (IM) measurements over a broad range of tone frequencies and tone powers. A sharp onset followed by a surprising, strongly suppressed IM region near the resonance is observed. This behavior can be understood using methods from nonlinear dynamics; the sharp onset and the gap in IM are due to sudden state jumps during a beat of the two-tone sum input signal. The theory predicts that the IM can be manipulated with tone power, center frequency, frequency difference between the two tones, and temperature. This quantitative understanding potentially allows for the design of rf-SQUID metamaterials with either very low or very high IM response.
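For orientation, the flux-tuning mechanism can be summarized by the standard single-rf-SQUID circuit relations (notation assumed here, not taken from the thesis): the junction contributes a phase-dependent Josephson inductance, so the LC resonance shifts as the dc flux changes,

$$ L_J(\delta)=\frac{\Phi_0}{2\pi I_c\cos\delta},\qquad f_{\mathrm{res}}=\frac{1}{2\pi\sqrt{C\,L_{\mathrm{eff}}}},\qquad \frac{1}{L_{\mathrm{eff}}}=\frac{1}{L_{\mathrm{geo}}}+\frac{1}{L_J}, $$

where $\delta$ is the gauge-invariant junction phase set by the applied flux ($\delta\approx 2\pi\Phi_{\mathrm{dc}}/\Phi_0$ for small loop inductance), $I_c$ is the junction critical current, $C$ its capacitance, and $L_{\mathrm{geo}}$ the geometric loop inductance.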
Abstract:
This document presents an Enterprise Application Integration based proposal for research outcomes and technological information management. The proposal addresses national and international science and research outcomes information management and the corresponding information systems. Information systems interoperability problems, approaches, technologies and integration tools are presented and applied to the research outcomes information management case. Both business and technological perspectives are provided, including the conceptual analysis and modelling, an integration solution based on a Domain-Specific Language (DSL), and the integration platform to execute the proposed solution. For illustrative purposes, the role and information system needs of a research unit are assumed as the representative case.
Abstract:
An overview is given of a user interaction monitoring and analysis framework called BaranC. Monitoring and analysing human-digital interaction is an essential part of developing a user model as the basis for investigating user experience. The primary human-digital interaction, such as on a laptop or smartphone, is best understood and modelled in the wider context of the user and their environment. The BaranC framework provides monitoring and analysis capabilities that not only record all user interaction with a digital device (e.g. a smartphone), but also collect all available context data (such as from sensors in the digital device itself, a fitness band, or smart appliances). The data collected by BaranC is recorded as a User Digital Imprint (UDI) which is, in effect, the user model and provides the basis for data analysis. BaranC provides functionality that is useful for user experience studies, user interface design evaluation, and providing user assistance services. An important concern for personal data is privacy, and the framework gives the user full control over the monitoring, storing and sharing of their data.
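To make the UDI idea concrete, here is a minimal Python sketch of what one imprint record might look like; the schema (field names and context keys) is hypothetical, since BaranC defines its own.

    from dataclasses import dataclass, field
    from datetime import datetime, timezone
    from typing import Any

    @dataclass
    class UDIRecord:
        """One User Digital Imprint entry: an interaction event plus the
        context data available when it occurred (hypothetical schema)."""
        timestamp: datetime
        device: str                # e.g. "smartphone"
        event: str                 # e.g. "app_opened", "key_pressed"
        context: dict[str, Any] = field(default_factory=dict)  # sensor readings etc.

    record = UDIRecord(
        timestamp=datetime.now(timezone.utc),
        device="smartphone",
        event="app_opened",
        context={"heart_rate_bpm": 72, "ambient_light_lux": 310},
    )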
Abstract:
Shearing is the process in which sheet metal is mechanically cut between two tools. Various shearing technologies are commonly used in the sheet metal industry, for example in cut-to-length lines, slitting lines, and end cropping. Shearing has speed and cost advantages over competing cutting methods like laser and plasma cutting, but involves large forces on the equipment and large strains in the sheet material. The constant development of sheet metals toward higher strength and formability leads to increased forces on the shearing equipment and tools, and shearing of new sheet materials requires new, suitable shearing parameters. Investigating shearing parameters through live tests in production is expensive, and separate experiments are time-consuming and require specialized equipment. Studies involving a large number of parameters and coupled effects are therefore preferably performed by finite element based simulations; accurate experimental data remain a prerequisite for validating such simulations, but there is a shortage of such data. In industrial shearing processes, measured forces are always larger than the actual forces acting on the sheet, due to friction losses. Shearing also generates a force that attempts to separate the two tools, which changes the shearing conditions by increasing the clearance between the tools. Tool clearance is also the most common shearing parameter to adjust, depending on material grade and sheet thickness, to moderate the required force and to control the final sheared edge geometry. In this work, an experimental procedure that provides a stable tool clearance together with accurate measurements of tool forces and tool displacements was designed, built and evaluated. Important shearing parameters and demands on the experimental set-up were identified in a sensitivity analysis performed with finite element simulations under the assumption of plane strain. With respect to tool clearance stability and accurate force measurements, a symmetric experiment with two simultaneous shears and internal balancing of the forces attempting to separate the tools was constructed. Steel sheets of different strength levels were sheared using the above-mentioned experimental set-up, with various tool clearances, sheet clamping configurations and rake angles. Results showed that tool penetration before fracture decreased with increased material strength. When one side of the sheet was left unclamped and free to move, the required shearing force decreased, but the force attempting to separate the two tools increased. Further, the maximum shearing force decreased and the rollover increased with increased tool clearance. Digital image correlation was applied to measure strains on the sheet surface. The obtained strain fields, together with a material model, were used to compute the stress state in the sheet. A comparison, up to crack initiation, of these experimental results with corresponding results from finite element simulations in three dimensions and under a plane strain approximation showed that effective strains on the surface are representative also of the bulk material. A simple model was successfully applied to calculate the tool forces in shearing with angled tools from forces measured with parallel tools. These results suggest that, with respect to tool forces, a plane strain approximation is valid also for angled tools, at least for small rake angles.
In general terms, this study provides a stable symmetric experimental set-up, with internal balancing of lateral forces, for accurate measurements of tool forces, tool displacements, and sheet deformations, to study the effects of important shearing parameters. The results give further insight into the strain and stress conditions at crack initiation during shearing, and can also be used to validate models of the shearing process.
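For reference, the "effective strain" compared between surface and bulk is commonly taken to be the von Mises equivalent strain (the abstract does not state the exact definition used); assuming incompressible plastic flow it reads

$$ \bar{\varepsilon}=\sqrt{\tfrac{2}{3}\,\varepsilon_{ij}\varepsilon_{ij}}, $$

where $\varepsilon_{ij}$ is the strain tensor; under the plane strain assumption the out-of-plane components vanish, so only the in-plane components measured by digital image correlation contribute.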
Abstract:
In 2017, chronic respiratory diseases accounted for almost four million deaths worldwide. Unfortunately, current treatments for such diseases are not definitive. This unmet medical need forces the scientific community to increase efforts in the identification of new therapeutic solutions. PI3K delta plays a key role in mechanisms that promote the airway chronic inflammation underlying asthma and COPD. The first part of this project was dedicated to the identification of novel PI3K delta inhibitors. A first SAR expansion of a hit, previously identified by an HTS campaign, was carried out. A library of 43 analogues was synthesised taking advantage of an efficient synthetic approach. This allowed the identification of an improved hit with nanomolar enzymatic potency and moderate selectivity for PI3K delta over the other PI3K isoforms. However, this compound exhibited low potency in cell-based assays, which was related to suboptimal physicochemical and ADME properties. The analysis of the X-ray crystal structure of this compound in human PI3K delta guided a second, tailored SAR expansion that led to improved cellular potency and solubility. The second part of the thesis focused on the rational design and synthesis of new macrocyclic Rho-associated protein kinase (ROCK) inhibitors. Inhibition of these kinases has been associated with vasodilating effects; therefore, ROCKs could represent attractive targets for the treatment of pulmonary arterial hypertension (PAH). Known ROCK inhibitors suffer from low selectivity across the kinome, and the design of macrocyclic inhibitors was considered a promising strategy to obtain improved selectivity. Known inhibitors from the literature were evaluated for opportunities for macrocyclization using a knowledge-based approach supported by Computer-Aided Drug Design (CADD). The identification of a macrocyclic ROCK inhibitor with enzymatic activity in the low micromolar range against ROCK II represented a promising result that validated this innovative approach to the design of new ROCK inhibitors.
Abstract:
Reinforcement Learning is an increasingly popular area of Artificial Intelligence. This learning paradigm has many applications, but its use in mobile computing is in its infancy. This study aims to provide an overview of current Reinforcement Learning applications on mobile devices, as well as to introduce a new framework for iOS devices: Swift-RL Lib. This new Swift package allows developers to easily support and integrate two of the most common RL algorithms, Q-Learning and Deep Q-Network, in a fully customizable environment. All processes are performed on the device, without any need for remote computation. The framework was tested in different settings and evaluated through several use cases. Through an in-depth performance analysis, we show that the platform provides effective and efficient support for Reinforcement Learning in mobile applications.
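For concreteness, the tabular Q-Learning update that such a framework implements can be sketched in a few lines (shown in Python for illustration; Swift-RL Lib itself is a Swift package, and the hyperparameter values below are generic, not the package defaults):

    import random
    from collections import defaultdict

    ALPHA, GAMMA, EPSILON = 0.1, 0.99, 0.1   # learning rate, discount, exploration

    Q = defaultdict(float)   # maps (state, action) -> estimated return

    def choose_action(state, actions):
        """Epsilon-greedy policy over the current Q estimates."""
        if random.random() < EPSILON:
            return random.choice(actions)
        return max(actions, key=lambda a: Q[(state, a)])

    def q_update(state, action, reward, next_state, actions):
        """One step: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
        best_next = max(Q[(next_state, a)] for a in actions)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

Deep Q-Network follows the same update rule but replaces the table with a neural-network approximator of Q.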
Abstract:
Power-to-Gas storage systems have the potential to address grid-stability issues that arise when an increasing share of power is generated from sources with a highly variable output. Although proof-of-concept studies of these systems have been promising, the behaviour of the processes in off-design conditions is not easily predictable. The primary aim of this PhD project was to evaluate the performance of an original Power-to-Gas system made up of innovative components. To achieve this, a numerical model has been developed to simulate the characteristics and behaviour of the several components when the whole system is coupled with a renewable source. The developed model has been applied to a large variety of scenarios, evaluating the performance of the considered process while exploiting only a limited amount of experimental data. The model has then been used to compare different Power-to-Gas concepts in a realistic operating scenario. Several goals have been achieved. In the concept phase, the possibility to thermally integrate the high-temperature components has been demonstrated. Then, the parameters that affect the energy performance of a Power-to-Gas system coupled with a renewable source have been identified, providing general recommendations on the design of hybrid systems; these parameters are: 1) the ratio between the storage system size and the renewable generator size; 2) the type of coupled renewable source; 3) the related production profile. Finally, the results of the comparative analysis highlight that configurations with a renewable source highly oversized with respect to the storage system show the maximum achievable profit.
Abstract:
This work deals with the development of calibration procedures and control systems to improve the performance and efficiency of modern spark ignition turbocharged engines. The algorithms developed are used to optimize and manage the spark advance and the air-to-fuel ratio, in order to control knock and the exhaust gas temperature at the turbine inlet. The described work falls within the activity that the research group started in previous years with the industrial partner Ferrari S.p.A. The first chapter deals with the development of a control-oriented engine simulator based on a neural network approach, with which the main combustion indexes can be simulated. The second chapter deals with the development of a procedure to calibrate the spark advance and the air-to-fuel ratio offline, so that the engine runs under knock-limited conditions and with the maximum admissible exhaust gas temperature at the turbine inlet. This procedure is then converted into a model-based control system and validated with a Software-in-the-Loop approach using the engine simulator developed in the first chapter. Finally, it is implemented in rapid control prototyping hardware to manage the combustion in steady-state and transient operating conditions at the test bench. The third chapter deals with the study of an innovative and cheap sensor for in-cylinder pressure measurement: a piezoelectric washer that can be installed between the spark plug and the engine head. The signal generated by this kind of sensor is studied, and a specific algorithm is developed to adjust the value of the knock index in real time. Finally, with the engine simulator developed in the first chapter, it is demonstrated that the innovative sensor can be coupled with the control system described in the second chapter and that the performance obtained could match that achievable with standard in-cylinder pressure sensors.
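The offline calibration step lends itself to a short sketch: pick, for each operating point, the most advanced spark timing and leanest mixture that still respect the knock and turbine-inlet temperature limits. The Python below is a hypothetical grid search (function and parameter names are assumptions, and the thesis's procedure is model-based rather than exhaustive):

    def calibrate_operating_point(simulate, sa_grid, afr_grid,
                                  knock_limit, t_exh_limit):
        """Return the (spark advance, air-fuel ratio) pair with the highest
        spark advance whose simulated knock index and turbine-inlet
        temperature stay within their limits; None if none qualifies.

        `simulate(sa, afr)` stands in for the neural-network engine
        simulator and returns (knock_index, turbine_inlet_temperature).
        """
        best = None
        for sa in sa_grid:
            for afr in afr_grid:
                knock, t_exh = simulate(sa, afr)
                if knock <= knock_limit and t_exh <= t_exh_limit:
                    if best is None or (sa, afr) > best:
                        best = (sa, afr)
        return best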
Abstract:
This thesis focuses on the Helicon Plasma Thruster (HPT) as a candidate for generating thrust for small satellites and CubeSats. Two main topics are addressed: the development of a Global Model (GM) and of a 3D self-consistent numerical tool. The GM is suitable for preliminary analysis of HPTs with noble gases such as argon, neon, krypton, and xenon, and with alternative propellants such as air and iodine. A lumping methodology is developed to reduce the computational cost of modelling the excited species in the plasma chemistry. The 3D self-consistent numerical tool can treat discharges with a generic 3D geometry and model the actual plasma-antenna coupling. The tool consists of two main modules, an EM module and a FLUID module, which run iteratively until a steady-state solution is reached. A third module is available for solving the plume, either with a simplified semi-analytical approach, with a PIC code, or directly by integration of the fluid equations. Results obtained from both numerical tools are benchmarked against experimental measurements of HPTs or Helicon reactors, with very good qualitative agreement with the experimental trends for the GM, and excellent agreement between the predicted physical trends and the measured data for the 3D numerical strategy.
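The EM/FLUID coupling described above is essentially a fixed-point iteration, which the following Python sketch illustrates (function names and the convergence test are assumptions, not the tool's actual interface):

    import numpy as np

    def coupled_solve(em_step, fluid_step, plasma0, tol=1e-6, max_iter=200):
        """Iterate the two modules until the plasma profile stops changing.

        `em_step` computes the power deposited by the antenna for a frozen
        plasma profile; `fluid_step` updates the profile for that power map.
        """
        plasma = np.asarray(plasma0, dtype=float)
        for _ in range(max_iter):
            power = em_step(plasma)          # EM module: plasma-antenna coupling
            new_plasma = fluid_step(power)   # FLUID module: transport update
            if np.linalg.norm(new_plasma - plasma) <= tol * (1 + np.linalg.norm(plasma)):
                return new_plasma            # steady state reached
            plasma = new_plasma
        raise RuntimeError("EM/FLUID iteration did not converge")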
Abstract:
Nowadays, technological advancements are pushing industry and research towards the automation of various processes. Automation brings a reduction in costs and an improvement in product quality; for this reason, companies are pushing research to investigate new technologies. The agriculture industry has always looked towards automating various processes, from product processing to storage. In recent years, the automation of the harvest and cultivation phases has also become attractive, pushed by the advancement of autonomous driving. Nevertheless, advanced driver-assistance systems (ADAS) are not enough: merging different technologies will be the key to total automation of agricultural processes. For example, sensors that estimate products' physical and chemical properties can be used to evaluate the maturation level of fruit. The fusion of these technologies therefore has a key role in industrial process automation. In this dissertation, both ADAS systems and sensors for precision agriculture are treated. Several measurement procedures for characterizing commercial 3D LiDARs are proposed and tested to cope with the growing need for comparison tools; axial errors and transversal errors are investigated. Moreover, a measurement method and setup for evaluating the effect of fog on 3D LiDARs is proposed. Each presented measurement procedure has been tested, and the obtained results highlight the versatility and soundness of the proposed approaches. Regarding precision agriculture sensors, a measurement approach for estimating the moisture content and density of crops directly in the field is presented. The approach employs a near-infrared (NIR) spectrometer together with partial least squares (PLS) statistical analysis. The approach and the model are described together with a first laboratory prototype used to evaluate the NIRS approach. Finally, a prototype for in-field analysis is realized and tested. The test results are promising, showing that the proposed approach is suitable for moisture content and density estimation.
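The NIR-plus-PLS idea maps directly onto standard tooling; the following Python sketch shows the regression step on synthetic data (shapes, wavelength count, and the number of latent variables are assumptions, not values from the dissertation):

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(120, 256))     # 120 samples x 256 NIR wavelengths
    y = X[:, 40:60].mean(axis=1) + 0.05 * rng.normal(size=120)  # synthetic moisture content

    pls = PLSRegression(n_components=8)  # latent variables; chosen by cross-validation in practice
    print(cross_val_score(pls, X, y, cv=5, scoring="r2").mean())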
Abstract:
One of the major issues for power converters connected to the electric grid is the measurement of three-phase Conducted Emissions (CE), which are regulated by international and regional standards. CE are composed of two components: Common Mode (CM) noise and Differential Mode (DM) noise. To achieve compliance with these regulations, the Equipment Under Test (EUT) includes filtering and other electromagnetic emission control strategies. The separation of differential-mode and common-mode noise in Electromagnetic Interference (EMI) analysis is a well-known procedure which is especially useful for optimizing the EMI filter, to improve the CM or DM attenuation depending on which component of the conducted emissions is predominant, and for analysing and understanding interference phenomena in switched-mode power converters. However, separating the two components is rarely done during measurements. Therefore, in this thesis an active device for the separation of CM and DM EMI noise in three-phase power electronic systems has been designed and experimentally analysed.
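For reference, the decomposition such a separator implements is the standard three-phase one (the abstract does not spell out the device's internal network):

$$ v_{\mathrm{CM}}=\frac{v_1+v_2+v_3}{3},\qquad v_{\mathrm{DM},i}=v_i-v_{\mathrm{CM}},\quad i=1,2,3, $$

so the CM component is the part flowing identically on all three phases and returning through ground, while the three DM components sum to zero.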
Abstract:
In this paper, a joint location-inventory model is proposed that simultaneously optimises strategic supply chain design decisions, such as facility location and customer allocation to facilities, and tactical-operational inventory management and production scheduling decisions. All this is analysed in a context of demand and supply uncertainty. While demand uncertainty stems from potential fluctuations in customer demands over time, supply-side uncertainty is associated with the risk of "disruption" to which facilities may be subject, caused by external factors such as natural disasters, strikes, changes of ownership and information technology security incidents. The proposed model is formulated as a non-linear mixed integer programming problem that minimises the expected total cost, which includes four basic cost items: the fixed cost of locating facilities at candidate sites, the cost of transport from facilities to customers, the cost of working inventory, and the cost of safety stock. Next, since the optimisation problem is very complex and only small instances can be solved exactly, a "matheuristic" solution approach is presented. This approach has a twofold objective: on the one hand, it considers a larger number of facilities and customers within the network, in order to reproduce a supply chain configuration that more closely reflects a real-world context; on the other hand, it generates a starting solution and performs a series of iterations to try to improve it. Thanks to this algorithm, it was possible to obtain a solution with a lower total system cost than the initial solution. The study concludes with some reflections and a description of possible directions for future work.
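The four cost items correspond to an objective of the following generic shape, written in the style of classical joint location-inventory models (notation assumed; the paper's exact formulation may differ):

$$ \min\;\sum_{j} f_j x_j \;+\; \sum_{i,j} c_{ij}\,y_{ij} \;+\; \sum_{j} K_j\sqrt{\sum_i \mu_i\,y_{ij}} \;+\; z_{\alpha}\sum_{j}\sqrt{L_j\sum_i \sigma_i^2\,y_{ij}}, $$

where $x_j\in\{0,1\}$ opens facility $j$, $y_{ij}\in\{0,1\}$ assigns customer $i$ to it, $f_j$ and $c_{ij}$ are the fixed location and transport costs, and the two square-root terms model working-inventory and safety-stock costs under demand means $\mu_i$ and variances $\sigma_i^2$; the square roots are what make the program non-linear.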