810 results for "algorithm design and analysis"
An Alternative ADS for the Analysis, Design and Evaluation of Information Representations in the ICU
Abstract:
A new surface analysis technique has been developed which has a number of benefits compared to conventional Low Energy Ion Scattering Spectrometry (LEISS). A major potential advantage arising from the absence of charge-exchange complications is the possibility of quantification. The instrumentation that has been developed also offers the possibility of unique studies of the interaction between low-energy ions, atoms, and solid surfaces. From these studies it may also be possible, in principle, to generate sensitivity factors to quantify LEISS data. The instrumentation, referred to as a Time-of-Flight Fast Atom Scattering Spectrometer (ToFFASS), was developed to investigate these conjectures in practice. The development involved a number of modifications to an existing instrument; it allowed samples to be bombarded with a monoenergetic pulsed beam of either atoms or ions and provided the capability to analyse the spectra of scattered atoms and ions separately. Further to this, a system was designed and constructed to allow the incident, exit, and azimuthal angles of the particle beam to be varied independently. The key development was that of a pulsed, mass-filtered atom source, which was developed by a cyclic process of design, modelling, and experimentation. Although it was possible to demonstrate the unique capabilities of the instrument, problems relating to surface contamination prevented the measurement of the neutralisation probabilities. However, these problems appear to be technical rather than scientific in nature and could be readily resolved given the appropriate resources. Experimental spectra obtained from a number of samples demonstrate some fundamental differences between the scattered ion and neutral spectra. For practical non-ordered surfaces the ToF spectra are more complex than their LEISS counterparts. This is particularly true for helium scattering, where it appears, in the absence of detailed computer simulation, that quantitative analysis is limited to ordered surfaces. Despite this limitation, the ToFFASS instrument opens the way to quantitative analysis of the 'true' surface region for a wider range of surface materials.
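As a rough illustration of the time-of-flight principle such an instrument relies on, the sketch below converts a measured flight time into particle energy; the 1 m flight path and helium-4 projectile are assumed values, not parameters from the thesis.

```python
import math

AMU = 1.66053906660e-27  # kg per atomic mass unit
EV = 1.602176634e-19     # joules per electronvolt

def tof_energy_ev(flight_time_s, path_m=1.0, mass_amu=4.0026):
    """Kinetic energy (eV) of a particle arriving after flight_time_s.

    Assumed defaults: 1 m flight path, helium-4 projectile.
    """
    v = path_m / flight_time_s              # speed in m/s
    return 0.5 * mass_amu * AMU * v ** 2 / EV

# A helium atom crossing 1 m in about 3.2 microseconds carries roughly 2 keV.
energy = tof_energy_ev(3.2e-6)
```

Slower arrivals map to lower energies, which is how a ToF spectrum encodes the energy-loss distribution of scattered particles.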
Abstract:
This thesis discusses market design and regulation in electricity systems, focusing on the information exchange between the regulated grid firm and the generation firms as well as on the regulation of the grid firm. In the first chapter, an economic framework is developed to consistently analyze different market designs and the information exchange between the grid firm and the generation firms. Perfect competition between the generation firms and perfect regulation of the grid firm are assumed. A numerical algorithm is developed and its feasibility demonstrated on a large-scale problem. The effects of different market designs for the Central Western European (CWE) region until 2030 are analyzed. In the second chapter, the consequences of restricted grid expansion under the current market design in the CWE region until 2030 are analyzed. In the third chapter, the assumption of efficient markets is relaxed. The focus of the analysis is then whether and how inefficiencies in information availability and processing affect different market designs. For different parameter settings, nodal and zonal pricing are compared with respect to welfare in the spot and forward markets. In the fourth chapter, information asymmetries between the regulator and the regulated firm are analyzed. The optimal regulatory strategy is derived for a firm providing one output with two substitutable inputs, where one input and the absolute quantity of inputs are not observable by the regulator. The result is then compared to current regulatory approaches.
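Market-design models of this kind build on merit-order dispatch; the toy clearing routine below shows the textbook mechanism that a large-scale market model generalizes. The plant offers are hypothetical, and this is not the thesis's numerical algorithm.

```python
def clearing_price(offers, demand):
    """Uniform-price merit-order clearing for a single zone.

    offers: list of (price_per_MWh, capacity_MW); demand in MW.
    Returns (clearing price, list of (offer price, dispatched MW)).
    """
    dispatched = []
    remaining = demand
    for price, cap in sorted(offers):       # cheapest plants first
        take = min(cap, remaining)
        if take > 0:
            dispatched.append((price, take))
            remaining -= take
        if remaining == 0:
            return price, dispatched        # marginal plant sets the price
    raise ValueError("demand exceeds offered capacity")

# Hypothetical plants: 50 MW at 20 EUR/MWh, 40 MW at 35, 30 MW at 60.
price, dispatch = clearing_price([(20.0, 50.0), (35.0, 40.0), (60.0, 30.0)], 70.0)
```

Under nodal pricing this clearing is performed per network node subject to line constraints, which is where the large-scale numerical machinery becomes necessary.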
Abstract:
In the aerospace, automotive, printing, and sports industries, the development of hybrid Carbon Fiber Reinforced Polymer (CFRP)-metal components is becoming increasingly important. Coupling metal with CFRP in axially symmetric components results in reduced production costs, improved bending and torsional stiffness, reduced mass, and higher damping and critical speed compared to single-material components. In this thesis, thanks to a novel methodology involving a rubbery/viscoelastic interface layer, several hybrid aluminum-CFRP prototype tubes were produced. In addition, an innovative system for curing the CFRP part was studied, analyzed, tested, and developed at the company that financed these research activities (Reglass SRL, Minerbio BO, Italy). The residual thermal stresses and strains were investigated with numerical models based on the Finite Element Method (FEM) and compared with experimental tests. Thanks to the numerical models, it was also possible to reduce residual thermal stresses by optimizing the lamination sequence of the CFRP and to determine the influence of the system parameters. A novel software tool and methodology for evaluating the mechanical and damping properties of CFRP specimens and tubes were also developed. Moreover, to increase the component's damping properties, rubber nanofibers were produced and interposed throughout the lamination of the specimens. The promising results indicate that the nanofibrous mat can improve the material damping factor by over 77% and can be adopted in CFRP components with a negligible increase in weight and no loss of mechanical properties.
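Damping properties of specimens are commonly extracted from free-vibration decay via the logarithmic decrement; the minimal sketch below uses illustrative peak amplitudes and is an assumption about the general method, not the thesis's software.

```python
import math

def damping_ratio(peak1, peak2):
    """Damping ratio from two successive free-vibration peak amplitudes.

    Uses the logarithmic decrement: delta = ln(peak1 / peak2),
    zeta = delta / sqrt(4*pi^2 + delta^2).
    """
    delta = math.log(peak1 / peak2)
    return delta / math.sqrt(4 * math.pi ** 2 + delta ** 2)

# Illustrative decay: each oscillation peak is 90% of the previous one.
base = damping_ratio(1.00, 0.90)
# A 77% improvement (as reported for the nanofibrous mat) would mean
# a proportionally larger damping ratio, i.e. faster amplitude decay.
improved = base * 1.77
```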
Abstract:
Driving simulators emulate a real vehicle drive in a virtual environment. One of the most challenging problems in this field is to create a simulated drive that feels as real as possible, deceiving the driver's senses into believing they are in a real vehicle. This thesis first provides an overview of the Stuttgart driving simulator, with a description of the overall system, followed by a theoretical presentation of the commonly used motion cueing algorithms. The second and predominant part of the work presents the implementation of the classical and optimal washout algorithms in a Simulink environment. The project aims to create a new optimal washout algorithm and compare the obtained results with those of the classical washout. The classical washout algorithm, already implemented in the Stuttgart driving simulator, is the most widely used for motion control of the simulator. It is based on a sequence of filters in which each parameter has a clear physical meaning and a unique assignment to a single degree of freedom. However, the effects on human perception are not exploited, and each parameter must be tuned online by an engineer in the control room, depending on the driver's feeling. To overcome this problem and also take the driver's sensations into account, the optimal washout motion cueing algorithm was implemented. This optimal-control-based algorithm treats motion cueing as a tracking problem, forcing the accelerations perceived in the simulator to track the accelerations that would have been perceived in a real vehicle, by minimizing the perception error within the constraints of the motion platform. The last chapter presents a comparison between the two algorithms, based on the driver's feelings after the test drive. First, an off-line test with a step signal as input acceleration was implemented to verify the behaviour of the simulator. Second, the algorithms were executed in the simulator during test drives on several tracks.
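The filter-based idea behind the classical washout can be sketched in a single channel: a high-pass filter passes transient acceleration cues to the platform and "washes out" sustained ones so the platform drifts back to neutral. This is a first-order simplification; the real algorithm uses higher-order filters per degree of freedom plus tilt coordination.

```python
def highpass(signal, dt, tau):
    """Discrete first-order high-pass filter with time constant tau (s)."""
    alpha = tau / (tau + dt)
    out = [signal[0]]
    for k in range(1, len(signal)):
        # Passes changes in the input, lets sustained values decay to zero.
        out.append(alpha * (out[-1] + signal[k] - signal[k - 1]))
    return out

dt = 0.01                              # 100 Hz sample rate (assumed)
step = [0.0] * 10 + [1.0] * 200        # sustained 1 m/s^2 acceleration step
cue = highpass(step, dt, tau=2.0)
# The platform cue jumps with the onset of the step, then decays toward
# zero, returning the platform to its neutral position.
```

This is also the behaviour the thesis's off-line step-input test would exercise.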
Abstract:
The objective of the thesis project, developed within the Line Control & Software Engineering team of the G.D company, is to analyze and identify the appropriate tool to automate the HW configuration process using Beckhoff technologies by importing data from an ECAD tool. This would save a great deal of time, since the I/O topology created as part of the electrical planning is presently imported manually into the related SW project of the machine. Moreover, a manual import is more error-prone, because of human error, than an automatic configuration tool. First, an introduction to TwinCAT 3, EtherCAT, and the Automation Interface is provided; then, the official Beckhoff tool, XCAD Interface, is analyzed, together with the requirements it places on the electrical planning: the interface is realized by means of the AutomationML format. Finally, due to some limitations observed, a company-internal tool is designed and implemented. Tests and validation of the tool are performed on a sample production line of the company.
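An AutomationML export is CAEX-based XML, so a configuration tool can walk its InternalElement hierarchy to recover the I/O topology. The element structure and device names in this sketch are simplified assumptions, not the exact schema that XCAD Interface consumes.

```python
import xml.etree.ElementTree as ET

# A stripped-down stand-in for an ECAD AutomationML export (CAEX XML).
# Real exports carry roles, interfaces, and attributes per element.
SAMPLE = """<CAEXFile>
  <InstanceHierarchy Name="Line">
    <InternalElement Name="EK1100_Coupler"/>
    <InternalElement Name="EL1008_DigitalIn"/>
  </InstanceHierarchy>
</CAEXFile>"""

root = ET.fromstring(SAMPLE)
# Collect the device names in document order, as an import tool would
# before mapping them onto the TwinCAT I/O tree.
devices = [el.get("Name") for el in root.iter("InternalElement")]
```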
Abstract:
This master's thesis investigates different aspects of the Dual-Active-Bridge (DAB) converter and extends them to Multi-Active-Bridge (MAB) converters. The thesis starts with an overview of the applications of the DAB and MAB and their importance. The analytical part of the thesis includes the derivation of the peak and RMS currents, which are required for finding the losses present in the system. The power converters considered in this thesis are the DAB, the Triple-Active Bridge (TAB), and the Quad-Active Bridge (QAB). All theoretical calculations are compared with simulation results from the PLECS software to verify the correctness of the reviewed and developed theory. A Hardware-in-the-Loop (HIL) simulation is conducted to check the control operation in real time with the help of the RT Box from Plexim. Additionally, since in real systems a digital signal processor (DSP), system-on-chip, or field-programmable gate array is employed to control power electronic systems, the control in the conducted real-time simulation (RTS) is executed by a DSP.
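Derived peak/RMS expressions are typically cross-checked numerically against simulated waveforms. The sketch below verifies a known closed-form RMS value (a sinusoid's A/√2) with the same discretized-RMS computation one would apply to PLECS current traces; the sine is only a stand-in for an actual DAB inductor current waveform.

```python
import math

def rms(samples):
    """Discrete RMS of one full period of a sampled waveform."""
    return math.sqrt(sum(x * x for x in samples) / len(samples))

N = 10_000
A = 12.0  # amps, arbitrary illustrative amplitude
wave = [A * math.sin(2 * math.pi * k / N) for k in range(N)]
# For a sinusoid the analytical RMS is A / sqrt(2); the sampled RMS
# should match it to numerical precision.
error = abs(rms(wave) - A / math.sqrt(2))
```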
Abstract:
Wireless power transfer (WPT) is becoming a crucial and demanding task in the IoT world. Despite the already known solutions exploiting a near-field powering approach, far-field WPT is definitely more challenging, and commercial applications are not available yet. This thesis proposes the recent frequency-diverse array (FDA) technology as a potential candidate for realizing smart and reconfigurable far-field WPT solutions. In the first section of this work, an analysis of some FDA systems is performed, identifying the planar array with circular geometry as the most promising layout in terms of radiation properties. Then, a novel energy-aware solution to handle the critical time variability of the FDA beam pattern is proposed. It consists of a time-control strategy based on a triangular pulse, and it allows ad-hoc, real-time WPT. Moreover, an essential frequency-domain analysis of the radiating behaviour of a pulsed FDA system is presented. This study highlights the benefits of exploiting the intrinsic pulse harmonics for powering purposes, thus minimising the power loss. Later, the electromagnetic design of a radial FDA architecture is addressed. In this context, an exhaustive investigation of miniaturization techniques is carried out; the use of multiple shorting pins together with a meandered feeding network is selected as a powerful solution to halve the original prototype dimension. Finally, accurate simulations of the designed radial FDA system are performed, and the obtained results are presented.
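The "critical time variability" of an FDA beam pattern comes from each element radiating at f0 + n·Δf, which makes the array factor magnitude periodic with period 1/Δf. The sketch below demonstrates this for a simple linear array with hypothetical parameters, not the radial prototype of the thesis.

```python
import cmath
import math

C = 3e8  # speed of light, m/s

def fda_af(t, theta, n_elem=8, f0=5.8e9, df=3e3, d=0.0259):
    """Magnitude of a linear FDA array factor at time t, angle theta.

    Hypothetical parameters: 8 elements, 5.8 GHz carrier, 3 kHz
    frequency increment, half-wavelength spacing.
    """
    s = 0j
    for n in range(n_elem):
        fn = f0 + n * df  # each element radiates at its own frequency
        phase = 2 * math.pi * (fn * t + fn * n * d * math.sin(theta) / C)
        s += cmath.exp(1j * phase)
    return abs(s)

a_now = fda_af(0.0, 0.3)
a_later = fda_af(1 / 3e3, 0.3)   # exactly one period 1/df later
# |AF| repeats with period 1/df, which is why a WPT scheme must
# time-control (e.g. pulse) the array to keep energy on target.
```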
Abstract:
This thesis deals with the sizing and analysis of the electrical power system of a petrochemical plant. The activity was carried out in the framework of an electrical engineering internship. The sizing and electrical calculations, as well as the study of the dynamic behavior of network quantities, are accomplished using the ETAP (Electrical Transient Analyzer Program) software. After determining the type and size of the loads, the calculation of power flows is carried out for all possible network layouts and different power supply configurations. The network is normally operated in a double radial configuration. However, the sizing must be carried out taking into account the most critical configuration, i.e., the temporary one of single radial operation, and also considering the most unfavorable power supply conditions. The calculation of short-circuit currents is then carried out and the appropriate circuit breakers are selected. Where necessary, capacitor banks are sized in order to keep the power factor at the point of common coupling within the preset limits. The grounding system is sized using the finite element method. For the loads with the highest level of criticality, UPS units are sized to ensure their operation even in the absence of the main power supply. The main faults that can occur in the plant are examined, determining the intervention times of the protections to ensure that, in case of failure of one component, the others can still operate properly. The report concludes with the dynamic and stability analysis of the power system during island operation, in order to ensure that the two gas turbines are able to support the load even during transient conditions.
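The breaker-selection step rests on the three-phase bolted-fault current, Isc = V / (√3·Z). The helper below is a back-of-the-envelope version of the quantity a tool like ETAP computes network-wide; the bus voltage and source impedance are illustrative, not values from the plant.

```python
import math

def sc_current_ka(v_line_kv, z_thevenin_ohm):
    """Initial symmetrical three-phase short-circuit current in kA.

    v_line_kv: line-to-line voltage at the faulted bus (kV).
    z_thevenin_ohm: Thevenin-equivalent impedance seen from the fault.
    """
    return v_line_kv / (math.sqrt(3) * z_thevenin_ohm)

# Hypothetical 6 kV bus with a 0.35 ohm source impedance.
i_sc = sc_current_ka(6.0, 0.35)
# A breaker installed at this bus must have an interrupting rating
# above i_sc (about 9.9 kA here), with margin per the applicable standard.
```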
Abstract:
Human Neks are a conserved protein kinase family related to cell cycle progression and cell division and are considered potential drug targets for the treatment of cancer and other pathologies. We screened the activation-loop mutant kinases hNek1 and hNek2, wild-type hNek7, and five hNek6 variants in different activation/phosphorylation states against 85 compounds using thermal shift denaturation. We identified three compounds with significant Tm shifts: JNK Inhibitor II for hNek1(Δ262-1258)-(T162A), Isogranulatimide for hNek6(S206A), and GSK-3 Inhibitor XIII for wild-type hNek7. Each of these compounds was also validated by reducing the respective kinase's activity by at least 25%. The binding sites for these compounds were identified by in silico docking at the ATP-binding site of the respective hNeks. Potential inhibitors were thus first screened by thermal shift assays, then had their efficiency tested by a kinase assay, and were finally analyzed by molecular docking. Our findings corroborate the idea of ATP-competitive inhibition for hNek1 and hNek6 and suggest a novel non-competitive inhibition for hNek7 with regard to GSK-3 Inhibitor XIII. Our results demonstrate that our approach is useful for finding promising general and specific candidate hNek inhibitors, which may also serve as scaffolds for the design of more potent and selective inhibitors.
Abstract:
This study evaluated the effect of specimen design and manufacturing process on microtensile bond strength, internal stress distributions (Finite Element Analysis, FEA), and specimen integrity by means of Scanning Electron Microscopy (SEM) and Laser Scanning Confocal Microscopy (LCM). Excite was applied to a flat enamel surface and resin composite build-ups were made incrementally with 1-mm increments of Tetric Ceram. Teeth were cut using a diamond disc or a diamond wire, yielding 0.8 mm² stick-shaped specimens, or were shaped with a Micro Specimen Former, yielding dumbbell-shaped specimens (n = 10). Samples were randomly selected for SEM and LCM analysis. The remaining samples underwent the microtensile test, and results were analyzed with ANOVA and the Tukey test. The FEA dumbbell-shaped model resulted in a more homogeneous stress distribution. Nonetheless, dumbbell-shaped specimens failed at lower bond strengths (21.83 ± 5.44 MPa, group c) than stick-shaped specimens (sectioned with wire: 42.93 ± 4.77 MPa, group a; sectioned with disc: 36.62 ± 3.63 MPa, group b), due to geometric irregularities related to the manufacturing process, as noted in the microscopic analyses. It could be concluded that stick-shaped, non-trimmed specimens, sectioned with diamond wire, are preferred for enamel specimens as they can be prepared in a less destructive, easier, and more precise way.
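The one-way ANOVA behind the significance groups can be reproduced from the summary statistics reported above (mean ± SD, n = 10 per group). This is a sketch of the underlying test, not the authors' exact analysis or software.

```python
def anova_f(groups):
    """One-way ANOVA F statistic from summary statistics.

    groups: list of (mean, sd, n) tuples, one per group.
    """
    n_tot = sum(n for _, _, n in groups)
    grand = sum(m * n for m, _, n in groups) / n_tot
    ss_between = sum(n * (m - grand) ** 2 for m, _, n in groups)
    ss_within = sum((n - 1) * sd ** 2 for _, sd, n in groups)
    df_between = len(groups) - 1
    df_within = n_tot - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

# The three reported groups: dumbbell, wire-sectioned stick, disc-sectioned stick.
f_stat = anova_f([(21.83, 5.44, 10), (42.93, 4.77, 10), (36.62, 3.63, 10)])
# f_stat is far above the ~3.35 critical value (alpha = 0.05, df = 2, 27),
# consistent with the distinct significance letters in the reported means.
```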
Abstract:
This paper revisits the design of L- and S-band bridged loop-gap resonators (BLGRs) for electron paramagnetic resonance applications. A novel configuration is described and extensively characterized in terms of resonance frequency and quality factor as a function of the geometrical parameters of the device. The experimental results indicate higher values of the quality factor (Q) than previously reported in the literature, and the experimental analysis data should provide useful guidelines for BLGR design.
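The quality factor reported for a resonator is typically the ratio of resonance frequency to -3 dB bandwidth; a minimal helper with illustrative S-band values, not the measured BLGR results:

```python
def q_factor(f0_hz, f_lo_hz, f_hi_hz):
    """Loaded Q from a resonance trace: Q = f0 / (-3 dB bandwidth)."""
    return f0_hz / (f_hi_hz - f_lo_hz)

# Illustrative 2.45 GHz resonance with a 1.2 MHz -3 dB bandwidth.
q = q_factor(2.45e9, 2.4494e9, 2.4506e9)
# A higher Q corresponds to a narrower resonance and, for EPR,
# a larger signal for a given sample.
```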
Abstract:
Thanks to recent advances in molecular biology, allied to an ever-increasing amount of experimental data, the functional state of thousands of genes can now be extracted simultaneously using methods such as cDNA microarrays and RNA-Seq. Particularly important related investigations are the modeling and identification of gene regulatory networks from expression data sets. Such knowledge is fundamental for many applications, such as disease treatment, therapeutic intervention strategies, and drug design, as well as for planning new high-throughput experiments. Methods have been developed for gene network modeling and identification from expression profiles. However, an important open problem is how to validate such approaches and their results. This work presents an objective approach for the validation of gene network modeling and identification which comprises three main aspects: (1) Artificial Gene Network (AGN) model generation through theoretical models of complex networks, which is used to simulate temporal expression data; (2) a computational method for gene network identification from the simulated data, founded on a feature selection approach in which a target gene is fixed and the expression profiles of all other genes are observed in order to identify a relevant subset of predictors; and (3) validation of the identified AGN-based network through comparison with the original network. The proposed framework allows several types of AGNs to be generated and used to simulate temporal expression data. The results of the network identification method can then be compared to the original network in order to estimate its properties and accuracy. Some of the most important theoretical models of complex networks have been assessed: the uniformly random Erdos-Renyi (ER) model, the small-world Watts-Strogatz (WS) model, the scale-free Barabasi-Albert (BA) model, and geographical networks (GG). The experimental results indicate that the inference method was sensitive to variation of the average degree k, its network recovery rate decreasing as k increases. The signal size was important for the inference method to achieve better accuracy in the network identification rate, with very good results even for small expression profiles. However, the adopted inference method was not able to recognize distinct structures of interaction among genes, presenting similar behavior when applied to different network topologies. In summary, the proposed framework, though simple, was adequate for the validation of the inferred networks by identifying some properties of the evaluated method, and it can be extended to other inference methods.
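Steps (1) and (3) of the framework can be shown in miniature: generate an Erdos-Renyi AGN, mimic an imperfect inference result, and score it against the ground truth. The edge-dropping "inference" stands in for a real identification method and is purely illustrative.

```python
import random

def erdos_renyi(n, p, rng):
    """Undirected ER graph as a set of edges: each pair kept with prob. p."""
    return {(i, j) for i in range(n) for j in range(i + 1, n)
            if rng.random() < p}

def recovery_rate(true_edges, inferred_edges):
    """Fraction of ground-truth edges recovered by the inference."""
    if not true_edges:
        return 0.0
    return len(true_edges & inferred_edges) / len(true_edges)

rng = random.Random(7)
truth = erdos_renyi(20, 0.2, rng)           # step (1): the AGN ground truth
# Stand-in for step (2): an "inference" that recovers ~80% of true edges.
inferred = {e for e in truth if rng.random() < 0.8}
rate = recovery_rate(truth, inferred)        # step (3): validation score
```

Increasing p raises the average degree k, which is exactly the knob under which the evaluated method's recovery rate was observed to degrade.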