915 results for Engineering design--Data processing
Abstract:
In many areas of industrial manufacturing, such as the automotive industry, digital mock-ups are used to support the development of complex machines with computer systems as effectively as possible. Motion planning algorithms play an important role here in guaranteeing that these digital prototypes can be assembled without collisions. Over the last decades, sampling-based methods have proven particularly successful for this task. They generate a large number of random poses for the object to be installed or removed and use a collision detection mechanism to check each pose for validity. Collision detection therefore plays an essential role in the design of efficient motion planning algorithms. A particular difficulty for this class of planners are so-called "narrow passages", which arise wherever the freedom of movement of the objects to be planned is severely restricted. In such regions it can be hard to find a sufficient number of collision-free samples, and more sophisticated techniques may be needed to achieve good algorithmic performance.

This thesis consists of two parts. In the first part, we investigate parallel collision detection algorithms. Since we target an application in sampling-based motion planners, we consider a problem setting in which the same two objects are repeatedly tested for collision in a large number of different poses. We implement and compare several methods that use bounding volume hierarchies (BVHs) and hierarchical grids as acceleration structures. All described methods were parallelized across multiple CPU cores. In addition, we compare several CUDA kernels for performing BVH-based collision tests on the GPU. Besides different ways of distributing the work among the parallel GPU threads, we examine the impact of different memory access patterns on the performance of the resulting algorithms. We further present a family of approximate collision tests based on the described methods; when lower test accuracy is tolerable, this yields a further performance improvement.

In the second part of the thesis, we describe a parallel, sampling-based motion planner of our own design for handling highly complex problems with multiple narrow passages. The method works in two phases. The underlying idea is to conceptually allow small errors in the first planning phase in order to increase planning efficiency, and then to repair the resulting path in a second phase. The planner used in phase I is based on so-called Expansive Space Trees. In addition, we equipped the planner with a push-out operation that resolves small collisions and thereby increases efficiency in regions with restricted freedom of movement. Optionally, our implementation allows the use of approximate collision tests; this further lowers the accuracy of the first planning phase but also yields a further performance gain. The motion paths resulting from phase I may then not be entirely collision-free. To repair these paths, we designed a novel planning algorithm that plans a new, collision-free motion path locally, restricted to a small neighborhood around the existing path.

We tested the described algorithm on a class of new, difficult metal puzzles, some of which exhibit multiple narrow passages. To our knowledge, no collection of comparably complex benchmarks is publicly available, nor did we find a description of comparably complex benchmarks in the motion planning literature.
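As a minimal illustration of the sampling-and-check pattern this abstract describes (our own sketch, not the thesis implementation; the actual planners, BVH traversals, and CUDA kernels are far more elaborate), a batch validity test over random poses can be reduced to:

```python
# Minimal sketch of the sampling-and-check pattern; the collision test
# itself (BVH traversal, hierarchical grids, CUDA kernels, ...) is
# abstracted into a user-supplied predicate.
import random

def sample_free_poses(in_collision, n_samples, bounds):
    """Draw n_samples random poses and keep the collision-free ones.

    in_collision: predicate pose -> bool (the expensive collision test)
    bounds: per-dimension (low, high) tuples, e.g. x, y, z, yaw
    """
    free = []
    for _ in range(n_samples):
        pose = tuple(random.uniform(lo, hi) for lo, hi in bounds)
        if not in_collision(pose):   # one collision query per sample
            free.append(pose)
    return free

# Example with a toy checker: poses inside the unit sphere "collide".
poses = sample_free_poses(lambda p: sum(c * c for c in p[:3]) < 1.0,
                          1000, [(-2, 2)] * 3 + [(0, 6.283)])
print(len(poses), "collision-free samples")
```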
Abstract:
Data sets describing the state of the earth's atmosphere are of great importance in the atmospheric sciences. Over the last decades, the quality and sheer amount of the available data increased significantly, resulting in a rising demand for new tools capable of handling and analysing these large, multidimensional sets of atmospheric data. The interdisciplinary work presented in this thesis covers the development and the application of practical software tools and efficient algorithms from the field of computer science, aiming at the goal of enabling atmospheric scientists to analyse and to gain new insights from these large data sets. For this purpose, our tools combine novel techniques with well-established methods from different areas such as scientific visualization and data segmentation. In this thesis, three practical tools are presented. Two of these tools are software systems (Insight and IWAL) for different types of processing and interactive visualization of data; the third tool is an efficient algorithm for data segmentation implemented as part of Insight. Insight is a toolkit for the interactive, three-dimensional visualization and processing of large sets of atmospheric data, originally developed as a testing environment for the novel segmentation algorithm. It provides a dynamic system for combining data from different sources at runtime, a variety of data processing algorithms, and several visualization techniques. Its modular architecture and flexible scripting support led to additional applications of the software, of which two examples are presented: the usage of Insight as a WMS (web map service) server, and the automatic production of image sequences for the visualization of cyclone simulations. The core application of Insight is the provision of the novel segmentation algorithm for the efficient detection and tracking of 3D features in large sets of atmospheric data, as well as for the precise localization of the occurring genesis, lysis, merging and splitting events. Data segmentation usually leads to a significant reduction of the size of the considered data. This enables a practical visualization of the data, statistical analyses of the features and their events, and the manual or automatic detection of interesting situations for subsequent detailed investigation. The concepts of the novel algorithm, its technical realization, and several extensions for avoiding under- and over-segmentation are discussed. As example applications, this thesis covers the setup and the results of the segmentation of upper-tropospheric jet streams and cyclones as full 3D objects. Finally, IWAL is presented, a web application providing easy interactive access to meteorological data visualizations, primarily aimed at students. As a web application, it avoids the need to retrieve all input data sets and to install and maintain complex visualization tools on a local machine. The main challenge in providing customizable visualizations to large numbers of simultaneous users was to find an acceptable trade-off between the available visualization options and the performance of the application. Besides the implementation details, benchmarks and the results of a user survey are presented.
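For readers unfamiliar with feature segmentation and tracking, the following is a minimal sketch of threshold-based 3D segmentation with overlap tracking between two time steps. Insight's actual algorithm is more elaborate (it includes extensions against under- and over-segmentation, and precise event localization), so treat this only as an illustration of the idea:

```python
# Illustrative sketch: label connected 3D regions above a threshold,
# then match features across time steps by voxel overlap. Splits and
# merges show up as one-to-many / many-to-one match pairs.
import numpy as np
from scipy import ndimage

def segment(field, threshold):
    """Label connected 3D regions where field exceeds threshold."""
    labels, n = ndimage.label(field > threshold)
    return labels, n

def track(labels_t0, labels_t1):
    """Match features across time steps by voxel overlap."""
    matches = set()
    overlap = (labels_t0 > 0) & (labels_t1 > 0)
    for a, b in zip(labels_t0[overlap], labels_t1[overlap]):
        matches.add((int(a), int(b)))
    return matches   # (old_id, new_id) pairs

wind = np.random.rand(20, 40, 40)                # synthetic 3D field
l0, _ = segment(wind, 0.95)
l1, _ = segment(np.roll(wind, 1, axis=1), 0.95)  # "advected" next step
print(track(l0, l1))
```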
Abstract:
In the domain of safety-critical embedded systems, the design process for applications is highly complex. For a given hardware architecture, electronic control units can be upgraded so that all existing processes and signals execute on time. The timing requirements are strict and must be met in every periodic recurrence of the processes, since guaranteeing correct parallel execution is of utmost importance. Existing approaches can compute design alternatives quickly, but they do not guarantee that the cost of the necessary hardware changes is minimal. We present an approach that computes cost-minimal solutions to this problem that satisfy all timing constraints. Our algorithm uses linear programming with column generation, embedded in a tree structure, to provide lower and upper bounds during the optimization process. By decomposing the main problem, the complex side constraints guaranteeing periodic execution are shifted into independent subproblems, which are formulated as integer linear programs. Both the process execution analyses and the signal transmission methods are examined, and linearized formulations are given. Furthermore, we present a new formulation for fixed-priority execution that additionally computes worst-case process response times, which are needed in scenarios where timing constraints are imposed on subsets of processes and signals. We demonstrate the applicability of our methods by analyzing instances containing process structures from real applications. Our results show that lower bounds can be computed quickly to prove the optimality of heuristic solutions. When delivering optimal solutions with response times, our new formulation compares favorably to other approaches in terms of runtime. The best results are obtained with a hybrid approach that combines heuristic start solutions, preprocessing, and a heuristic computation phase followed by a short exact one.
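In generic column-generation terms (our notation, not the thesis's exact formulation, which is specific to control-unit architectures), the scheme alternates between a restricted master LP over known columns and pricing subproblems that generate new ones:

```latex
% Restricted master problem over the currently known columns J':
\min \sum_{j \in J'} c_j x_j
\quad \text{s.t.} \quad \sum_{j \in J'} a_j x_j \ge b, \qquad x \ge 0
% A new column j may enter while its reduced cost is negative,
\bar{c}_j = c_j - \pi^{\top} a_j < 0,
% where \pi are the duals of the master constraints. The pricing
% subproblems (here, independent integer linear programs) search for
% such columns; the surrounding tree search supplies lower and upper
% bounds during optimization.
```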
Abstract:
Electric power grids throughout the world suffer from serious inefficiencies associated with under-utilization, due to the demand patterns, engineering design, and load-following approaches in use today. These grids consume much of the world's energy and represent a large carbon footprint. From a material-utilization perspective, significant hardware is manufactured and installed for this infrastructure, often to be used at less than 20-40% of its operational capacity for most of its lifetime. These inefficiencies lead engineers to require additional grid support and conventional generation capacity when renewable technologies (such as solar and wind) and electric vehicles are added to the utility demand/supply mix. Using actual data from PJM [PJM 2009], this work shows that consumer load management, real-time price signals, sensors, and intelligent demand/supply control offer a compelling path toward increasing the efficient utilization of the world's grids and reducing their carbon footprint. Utilization data from many distribution companies indicate that distribution feeders are often operated at only 70-80% of their peak capacity for a few hours per year, and on average are loaded to less than 30-40% of their capability. By creating strong societal connections between consumers and energy providers, technology can radically change this situation. With intelligent deployment of smart sensors, smart electric vehicles, and consumer-based load management technology, very high saturations of intermittent renewable energy supplies can be effectively controlled and dispatched to increase the levels of utilization of existing utility distribution, substation, transmission, and generation equipment. Strengthening these technology, society, and consumer relationships requires rapid dissemination of knowledge (real-time prices, cost and benefit sharing, demand response requirements) in order to incentivize behaviors that increase the effective use of the technological equipment that represents one of the largest capital assets modern society has created.
Abstract:
This is the second part of a study investigating a model-based transient calibration process for diesel engines. The first part addressed the data requirements and data processing required for empirical transient emission and torque models. The current work focuses on modelling and optimization. The unexpected result of this investigation is that when trained on transient data, simple regression models perform better than more powerful methods such as neural networks or localized regression. This result has been attributed to extrapolation over data that have estimated rather than measured transient air-handling parameters. The challenges of detecting and preventing extrapolation using statistical methods that work well with steady-state data have been explained. The concept of constraining the distribution of statistical leverage relative to the distribution of the starting solution to prevent extrapolation during the optimization process has been proposed and demonstrated. Separate from the issue of extrapolation is preventing the search from being quasi-static. Second-order linear dynamic constraint models have been proposed to prevent the search from returning solutions that are feasible if each point were run at steady state, but which are unrealistic in a transient sense. Dynamic constraint models translate commanded parameters to actually achieved parameters that then feed into the transient emission and torque models. Combined model inaccuracies have been used to adjust the optimized solutions. To frame the optimization problem within reasonable dimensionality, the coefficients of commanded surfaces that approximate engine tables are adjusted during search iterations, each of which involves simulating the entire transient cycle. The resulting strategy, different from the corresponding manual calibration strategy and resulting in lower emissions and efficiency, is intended to improve rather than replace the manual calibration process.
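As an illustration of what such a dynamic constraint can look like (our notation; the coefficients here are placeholders, not values from this study), a second-order linear dynamic model mapping a commanded parameter u to the achieved parameter y can be written as:

```latex
% Continuous form: natural frequency \omega_n, damping ratio \zeta.
\frac{Y(s)}{U(s)} = \frac{\omega_n^2}{s^2 + 2\zeta\omega_n s + \omega_n^2}
% Discretized form, evaluated at each step of the transient cycle
% simulation; the achieved y_k then feeds the emission/torque models:
y_k = a_1 y_{k-1} + a_2 y_{k-2} + b_1 u_{k-1} + b_2 u_{k-2}
```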
Abstract:
The performance of parallel vector implementations of the one- and two-dimensional orthogonal transforms is evaluated. The orthogonal transforms are computed using actual or modified fast Fourier transform (FFT) kernels. The factors considered in comparing the speed-up of these vectorized digital signal processing algorithms are discussed, and it is shown that the traditional way of comparing the execution speed of digital signal processing algorithms by the ratios of the numbers of multiplications and additions is no longer effective for vector implementation; the structure of the algorithm must also be considered as a factor when comparing the execution speed of vectorized digital signal processing algorithms. Simulation results on the Cray X-MP are presented for the following orthogonal transforms: discrete Fourier transform (DFT), discrete cosine transform (DCT), discrete sine transform (DST), discrete Hartley transform (DHT), discrete Walsh transform (DWHT), and discrete Hadamard transform (DHDT). A comparison between the DHT and the fast Hartley transform is also included.
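As a small aside, one of the "modified FFT kernel" constructions of this kind is easy to sketch: the discrete Hartley transform follows directly from an FFT. A minimal example (ours, not code from the paper):

```python
# Computing the discrete Hartley transform (DHT) from an FFT kernel:
# with the DFT X = FFT(x), the DHT is H = Re(X) - Im(X).
import numpy as np

def dht(x):
    """Discrete Hartley transform via the FFT."""
    X = np.fft.fft(x)
    return X.real - X.imag

# Round trip: the DHT is its own inverse up to a factor of N.
x = np.random.rand(8)
assert np.allclose(dht(dht(x)) / len(x), x)
```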
Abstract:
A new liquid-fuel injector was designed for use in the atmospheric-pressure, model gas turbine combustor in Bucknell University's Combustion Research Laboratory during alternative fuel testing. The current liquid-fuel injector requires a higher-than-desired pressure drop and volumetric flow rate to provide proper atomization of liquid fuels. An air-blast atomizer type of fuel injector was chosen, and an experiment utilizing water as the working fluid was performed on a variable-geometry prototype. Visualization of the spray pattern was achieved through photography, and the pressure drop was measured as a function of the required operating parameters. Experimental correlations were used to estimate droplet sizes over flow conditions similar to those that would be experienced in the actual combustor. The results of this experiment were used to select the desired geometric parameters for the proposed final injector design, and a CAD model was generated. Eventually, the new injector will be fabricated and tested to provide final validation of the design prior to use in the combustion test apparatus.
Abstract:
An atmospheric combustion apparatus was designed through several iterations for Bucknell University's combustion laboratory. The final design required extensive fine-tuning of the fuel and air systems and repeated tests to arrive at a satisfactory procedure for transferring from gaseous to liquid fuel operation. Measurements of exhaust emissions were obtained during tests of gaseous methane and liquid heptane operation in order to validate the functionality of the combustion apparatus, the fuel transition procedure, and the emissions analyzer systems. The emission concentrations of CO, CO2, NOx, O2, SO2, and unburned hydrocarbons from a multi-analyzer and an HFID analyzer were obtained for a range of equivalence ratios. The results verify the potential for future alternative fuel tests and illuminate necessary alterations for further liquid fuel studies.
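For context, the equivalence ratio sweep mentioned is defined relative to the stoichiometric fuel/air ratio; a minimal sketch with textbook stoichiometric values (not data from this work):

```python
# Equivalence ratio: phi = (fuel/air)_actual / (fuel/air)_stoichiometric.
# Stoichiometric air/fuel mass ratios below are standard textbook values.
AFR_STOICH = {"methane": 17.1, "heptane": 15.1}

def equivalence_ratio(fuel, m_dot_fuel, m_dot_air):
    """phi > 1 is rich, phi < 1 is lean; mass flows in consistent units."""
    return (m_dot_fuel / m_dot_air) * AFR_STOICH[fuel]

# e.g. 1 g/s of methane with 20 g/s of air burns lean:
print(equivalence_ratio("methane", 1.0, 20.0))   # ~0.86
```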
Abstract:
Petroleum supply and environmental pollution issues constantly increase interest in renewable, low-polluting alternative fuels. Published test results show decreased pollution with similar power output and fuel consumption from internal combustion engines (ICEs) burning alternative fuels. More specifically, diesel engines burning biodiesel derived from plant oils and animal fats not only reduce harmful exhaust emissions but are renewable and environmentally friendly. To validate these claims and assess the feasibility of alternative fuels, independent engine dynamometer and emissions testing was performed. A testing apparatus capable of making the relevant measurements was designed, built, and used to test and determine the feasibility of biodiesel. The apparatus marks the addition of a valuable testing tool to the University and provides a foundation for future experiments. This thesis discusses the background of biodiesel, testing methods, the design and function of the testing apparatus, experimental results, relevant calculations, and conclusions.
Abstract:
The Environmental Process and Simulation Center (EPSC) at Michigan Technological University has accommodated laboratories for the senior-level Environmental Engineering class CEE 4509, Environmental Process and Simulation Laboratory, since 2004. Even though the five units that exist in EPSC give students hands-on experience with a wide range of water/wastewater treatment technologies, a key module was still missing for students to experience a full treatment cycle. This project fabricated a direct-filtration pilot system in EPSC and generated a laboratory manual for educational purposes. Engineering applications such as clean bed head loss calculation, backwash flowrate determination, multimedia density calculation, and run length prediction are included in the laboratory manual. The system was tested for one semester, and modifications have been made both to the direct filtration unit and to the laboratory manual. Future work is also proposed to further refine the module.
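As an illustration of the clean bed head loss calculation mentioned (the manual's exact formulation and parameter values may differ), a Carman-Kozeny estimate for laminar flow through a granular bed might look like:

```python
# Clean-bed head loss via the Carman-Kozeny equation (laminar flow):
#   h = k * (nu/g) * (1-e)^2/e^3 * V * L / d^2
# Parameter values below are assumed for illustration only.
G = 9.81        # gravitational acceleration, m/s^2
NU = 1.004e-6   # kinematic viscosity of water at 20 C, m^2/s

def clean_bed_head_loss(depth_m, porosity, grain_d_m, filt_rate_m_s, k=180.0):
    """Head loss (m) across a clean granular filter bed."""
    e = porosity
    return (k * NU / G) * ((1 - e) ** 2 / e ** 3) \
        * filt_rate_m_s * depth_m / grain_d_m ** 2

# Example: 0.6 m sand bed, porosity 0.42, 0.5 mm grains, 5 m/h rate.
h = clean_bed_head_loss(0.6, 0.42, 0.5e-3, 5 / 3600)
print(f"clean bed head loss ~ {h:.2f} m")   # roughly 0.28 m
```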
Abstract:
Routine bridge inspections require labor-intensive and highly subjective visual interpretation to determine bridge deck surface condition. Light Detection and Ranging (LiDAR), a relatively new class of survey instrument, has become a popular and increasingly used technology for providing as-built and inventory data in civil applications. While an increasing number of private and governmental agencies possess terrestrial and mobile LiDAR systems, an understanding of the technology's capabilities and potential applications continues to evolve. LiDAR is a line-of-sight instrument, and as such, care must be taken when establishing scan locations and resolution to allow the capture of data at an adequate resolution for defining features that contribute to the analysis of bridge deck surface condition. Information such as the location, area, and volume of spalling on deck surfaces, undersides, and support columns can be derived from properly collected LiDAR point clouds. The point clouds contain information that can provide quantitative surface condition data, resulting in more accurate structural health monitoring. LiDAR scans were collected at three study bridges, each of which displayed a varying degree of degradation. A variety of commercially available analysis tools and an independently developed algorithm written in ArcGIS Python (ArcPy) were used to locate and quantify surface defects, reporting the location, volume, and area of spalls. The results were displayed visually and numerically in a user-friendly, web-based decision support tool integrating prior bridge condition metrics for comparison. LiDAR data processing procedures, along with the strengths and limitations of point clouds for defining features useful for assessing bridge deck condition, are discussed. Point cloud density and incidence angle are two attributes that must be managed carefully to ensure the data collected are of high quality and useful for bridge condition evaluation. When collected properly, LiDAR data can be analyzed to provide a useful data set from which to derive bridge deck condition information.
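As a simplified illustration of how spalls can be located in a point cloud (our sketch, not the ArcPy algorithm developed in this work), one can fit a reference plane to the deck and flag points that dip below it:

```python
# Fit a reference plane to a bridge-deck point cloud by least squares,
# then flag points more than a threshold below it as candidate spalls.
import numpy as np

def flag_spall_points(points, threshold_m=0.01):
    """points: (N, 3) array of x, y, z deck returns; returns bool mask."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    # Fit z = a*x + b*y + c to the deck surface.
    A = np.column_stack([x, y, np.ones_like(x)])
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    residuals = z - A @ coeffs
    return residuals < -threshold_m   # True where the surface dips below

pts = np.random.rand(1000, 3)
pts[:, 2] *= 0.005                    # synthetic near-planar deck
print(flag_spall_points(pts, 0.002).sum(), "candidate spall points")
```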
Abstract:
The report reviews the technology of free-space optical communication (FSO) and simulation methods for testing the performance of a diverged beam in this technology. In addition to the introduction, the theory of turbulence and its effect on laser propagation is also reviewed. In the simulation review chapter, on-off keying (OOK) and a diverged beam are assumed at the transmitter, and at the receiver an avalanche photodiode (APD) is utilized to convert the photon stream into an electron stream. Phase screens are adopted to simulate the effect of turbulence on the phase of the optical beam. Apart from this, the method of data processing is introduced and reviewed. The summary chapter gives a general comparison of different beam divergences and their performance.
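As an illustration of the phase-screen technique mentioned (grid and turbulence parameters below are assumed, not those of the report), the standard FFT method filters complex white noise with the Kolmogorov spectrum:

```python
# Generate a Kolmogorov phase screen: shape complex white noise with
# the Kolmogorov phase spectrum in the frequency domain, inverse-FFT,
# and keep the real part. Parameters here are illustrative.
import numpy as np

def kolmogorov_phase_screen(n=256, dx=0.01, r0=0.05):
    """n x n phase screen (rad); dx: grid spacing (m); r0: Fried parameter (m)."""
    df = 1.0 / (n * dx)                          # frequency grid spacing
    fx = np.fft.fftfreq(n, d=dx)
    fxx, fyy = np.meshgrid(fx, fx)
    f = np.sqrt(fxx ** 2 + fyy ** 2)
    f[0, 0] = np.inf                             # suppress undefined DC term
    psd = 0.023 * r0 ** (-5.0 / 3.0) * f ** (-11.0 / 3.0)
    noise = (np.random.randn(n, n) + 1j * np.random.randn(n, n)) / np.sqrt(2)
    screen = np.fft.ifft2(noise * np.sqrt(psd) * df) * n * n
    return screen.real

phi = kolmogorov_phase_screen()
print(phi.std(), "rad rms (depends on r0 and grid)")
```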
Abstract:
A building energy meter network based on a per-appliance monitoring system will be an important part of the Advanced Metering Infrastructure. Two key issues exist in designing such networks. One is the network structure to be used. The other is the implementation of that structure on a large number of small, low-power devices, and the maintenance of high-quality communication when the devices are electrically connected to the high-voltage AC line. Recent advances in low-power wireless communication make it the right candidate for home and building energy networks. Among the available wireless solutions, the low-speed but highly reliable 802.15.4 radio was chosen for this design. While many network-layer solutions have been built on top of 802.15.4, an IPv6-based method is used here: 6LoWPAN is the protocol that adapts IPv6 to low-power personal area network radios. In order to extend the network across the building area, a specific network-layer routing mechanism, RPL, is also included in the design. The fundamental unit of the building energy monitoring system is a smart wall plug, consisting of an electric energy meter, an RF communication module, and a low-power CPU. The real challenge in designing such a device is its network firmware. In this design, IPv6 is implemented through the Contiki operating system; custom hardware drivers and a meter application program were developed on top of the Contiki OS. Experiments were performed to demonstrate the networking capability of the system.
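To illustrate the reporting pattern such a smart plug implements (the firmware described runs in C on the Contiki OS; this Python sketch, with an assumed collector address and message format, only conveys the idea):

```python
# Periodic UDP meter reports over IPv6 to a collector node, mirroring
# the pattern a 6LoWPAN smart plug would use. Address, port, and JSON
# message layout are assumptions for illustration.
import json, socket, time

COLLECTOR = ("aaaa::1", 5678)   # assumed border-router address and port

def report_loop(read_power_w, period_s=10.0):
    sock = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
    seq = 0
    while True:
        msg = json.dumps({"seq": seq, "watts": read_power_w()}).encode()
        sock.sendto(msg, COLLECTOR)   # one datagram per reporting period
        seq += 1
        time.sleep(period_s)

# report_loop(lambda: 42.0)   # plug in a real meter-reading function
```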
Abstract:
Power transformers are key components of the power grid and are also among the components most subjected to a variety of power system transients. The failure of a large transformer can cause severe monetary losses to a utility, thus adequate protection schemes are of great importance to avoid transformer damage and maximize the continuity of service. Computer modeling can be used as an efficient tool to improve the reliability of a transformer protective relay application. Unfortunately, transformer models presently available in commercial software lack completeness in the representation of several aspects, such as internal winding faults, which are a common cause of transformer failure. It is also important to adequately represent the transformer at frequencies higher than the power frequency for a more accurate simulation of switching transients, since these are a well-known cause of unwanted tripping of protective relays. This work develops new capabilities for the Hybrid Transformer Model (XFMR) implemented in ATPDraw to allow the representation of internal winding faults and slow-front transients up to 10 kHz. The new model can be developed from either of two sources of information: 1) test report data and 2) design data. When only test-report data are available, a higher-order leakage inductance matrix is created from standard measurements. If design information is available, a finite element model (FEM) is created to calculate the leakage parameters for the higher-order model. An analytical model is also implemented as an alternative to FEM modeling. Measurements on 15-kVA 240Δ/208Y V and 500-kVA 11430Y/235Y V distribution transformers were performed to validate the model. A transformer model that is valid for simulations at frequencies above the power frequency was developed by further subdividing the windings into multiple sections and including a higher-order capacitance matrix. Frequency-scan laboratory measurements were used to benchmark the simulations. Finally, a stability analysis of the higher-order model was made by analyzing the trapezoidal rule for numerical integration as used in ATP. Numerical damping was also added to suppress oscillations locally when discontinuities occurred in the solution. A maximum error magnitude of 7.84% was encountered in the simulated currents for different turn-to-ground and turn-to-turn faults. The FEM approach provided the most accurate means to determine the leakage parameters for the ATP model. The higher-order model was found to reproduce the short-circuit impedance acceptably up to about 10 kHz, and the behavior at the first anti-resonant frequency was better matched with the measurements.
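For background on the stability issue discussed (standard EMTP material, not a result of this work), the trapezoidal rule integrates an inductor's v = L di/dt as a companion model, and a damped variant trades a little accuracy for suppression of the point-to-point oscillations that discontinuities excite:

```latex
% Trapezoidal companion model of an inductor, v = L\,di/dt:
i(t) = i(t-\Delta t) + \frac{\Delta t}{2L}\bigl( v(t) + v(t-\Delta t) \bigr)
% Damped variant: a small \alpha > 0 weights the new voltage sample
% more heavily, suppressing sustained numerical oscillations after a
% discontinuity at the cost of slight additional error:
i(t) = i(t-\Delta t) + \frac{\Delta t}{2L}
       \bigl( (1+\alpha)\, v(t) + (1-\alpha)\, v(t-\Delta t) \bigr)
```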
Abstract:
Gene-directed enzyme prodrug therapy is a form of cancer therapy in which delivery of a gene that encodes an enzyme is able to convert a prodrug, a pharmacologically inactive molecule, into a potent cytotoxin. Currently, delivery of gene and prodrug is a two-step process. Here, we propose a one-step method using polymer nanocarriers to deliver prodrug, gene, and cytotoxic drug simultaneously to malignant cells. The prodrugs acyclovir, ganciclovir, and 5-doxifluridine were used directly to initiate ring-opening polymerization of epsilon-caprolactone, forming a hydrophobic prodrug-tagged poly(epsilon-caprolactone), which was further grafted with hydrophilic polymers (methoxy poly(ethylene glycol), chitosan, or polyethylenimine) to form amphiphilic copolymers for micelle formation. Successful synthesis of copolymers and micelle formation was confirmed by standard analytical means. Conversion of prodrugs to their cytotoxic forms was analyzed by both two-step and one-step means, i.e. first by delivering the gene plasmid into the HT29 cell line and then challenging the cells with the prodrug-tagged micelle carriers, and secondly by complexing the gene plasmid onto the micelle nanocarriers and delivering gene and prodrug simultaneously to parental HT29 cells. The anticancer effectiveness of the prodrug-tagged micelles was further enhanced by encapsulating the chemotherapy drugs doxorubicin or SN-38. Viability of the colon cancer cell line HT29 was significantly reduced. Furthermore, in an effort to develop a stealth and targeted carrier, a CD47-streptavidin fusion protein was attached onto the micelle surface utilizing biotin-streptavidin affinity. CD47, a marker of self on the red blood cell surface, was used for its antiphagocytic efficacy; results showed that micelles bound with CD47 exhibited antiphagocytic efficacy when exposed to J774A.1 macrophages. Since CD47 is not only an antiphagocytic ligand but also an integrin-associated protein, it was used to target integrin alpha(v)beta(3), which is overexpressed on tumor-activated neovascular endothelial cells. Results showed that CD47-tagged micelles had enhanced uptake when applied to PC3 cells, which have high expression of alpha(v)beta(3). The multifunctional polymeric micelle carriers developed here could offer a new platform for an innovative cancer therapy regime.