915 results for Engineering design--Data processing


Relevance: 100.00%

Abstract:

This dissertation develops a new mathematical approach that overcomes the effect of a data-processing phenomenon known as "histogram binning", inherent to flow cytometry data. A real-time procedure is introduced to demonstrate the effectiveness and fast implementation of this approach on real-world data. The histogram binning effect is a dilemma posed by two seemingly antagonistic developments: (1) flow cytometry data in its histogram form is extended in dynamic range to improve its analysis and interpretation, and (2) this dynamic-range extension inevitably introduces an unwelcome side effect, the binning effect, which skews the statistics of the data and consequently undermines the accuracy of the analysis and the eventual interpretation of the data. Researchers in the field have contended with this dilemma for many years, resorting either to hardware approaches, which are costly and suffer from inherent calibration and noise effects, or to software techniques that filter the binning effect but fail to preserve the statistical content of the original data. The mathematical approach introduced in this dissertation is appealing enough that a patent application has been filed. The contribution of this dissertation is an incremental scientific innovation: a mathematical framework that allows researchers in flow cytometry to improve the interpretation of data, knowing that its statistical meaning has been faithfully preserved for optimized analysis. Furthermore, the same mathematical foundation provides a proof of the origin of this inherent artifact. These results are unique in that new mathematical derivations are established to define and solve the critical problem of the binning effect at the experimental-assessment level, providing a data platform that preserves statistical content. In addition, a novel method for accumulating the log-transformed data was developed.
This new method uses the properties of transformations of statistical distributions to accumulate the output histogram in a non-integer, multi-channel fashion. Although the mathematics of this mapping technique is intricate, the concise nature of the derivations allows for an implementation procedure that lends itself to real-time execution using lookup tables, a procedure also introduced in this dissertation.
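The non-integer, multi-channel accumulation described above can be illustrated with a small sketch. This is not the patented method: the table sizes (a 10-bit linear input mapped onto 256 log bins) and the two-bin weight split are assumptions chosen only to show how a precomputed lookup table can accumulate counts on a log axis while preserving their total.

```python
import numpy as np

# Hypothetical sizes: 10-bit linear histogram mapped onto a 256-bin log histogram.
N_LINEAR, N_LOG = 1024, 256

chan = np.arange(1, N_LINEAR)                   # channel 0 skipped (log undefined)
pos = (N_LOG - 1) * np.log10(chan) / np.log10(N_LINEAR - 1)

lo = np.floor(pos).astype(int)                  # lower target bin per channel
frac = pos - lo                                 # weight routed to the upper bin

def accumulate_log(linear_counts):
    """Map a linear histogram onto the log axis, splitting each channel's
    counts between two adjacent log bins (non-integer accumulation)."""
    out = np.zeros(N_LOG)
    np.add.at(out, lo, linear_counts[1:] * (1.0 - frac))
    np.add.at(out, np.minimum(lo + 1, N_LOG - 1), linear_counts[1:] * frac)
    return out

linear_hist = np.zeros(N_LINEAR)
linear_hist[100] = 50                           # 50 events in linear channel 100
log_hist = accumulate_log(linear_hist)
```

Because each channel's counts are split rather than rounded to a single bin, the total count (and hence the low-order statistics) of the data survives the mapping, which is the property the dissertation emphasizes.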

Relevance: 100.00%

Abstract:

Current state-of-the-art techniques for landmine detection in ground-penetrating radar (GPR) use statistical methods to identify characteristics of a landmine response. This research makes use of 2-D slices of data in which subsurface landmine responses have hyperbolic shapes. Various methods from the field of visual image processing are adapted to the 2-D GPR data, producing superior landmine detection results. The research then develops a physics-based GPR augmentation method motivated by current advances in visual object detection; this GPR-specific augmentation mitigates issues caused by insufficient training sets. This work shows that augmentation improves detection performance under training conditions that are normally very difficult. Finally, it introduces convolutional neural networks as a method to learn feature-extraction parameters; these learned convolutional features outperform hand-designed features in GPR detection tasks. The methods developed and presented here, both borrowed from and motivated by the substantial body of work in visual image processing, improve overall detection performance and introduce a way to make statistical classification more robust.
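The hyperbolic geometry that a physics-based augmentation can exploit is easy to sketch: a buried point scatterer produces arrivals along a hyperbola in a 2-D B-scan, and simple physically plausible transforms (cross-track flip, amplitude scaling) generate extra training examples. All shapes, sizes, and parameter ranges below are hypothetical; this is an illustration of the idea, not the method from the dissertation.

```python
import numpy as np

def synthetic_hyperbola(n_scans=64, n_samples=128, apex_scan=32,
                        apex_time=30.0, spread=4.0, sigma=1.5):
    """Toy B-scan with one hyperbolic point-target response.

    Arrivals follow t(x) = sqrt(apex_time**2 + ((x - apex_scan) * spread)**2),
    the classic GPR hyperbola; energy is deposited as a Gaussian pulse
    centered on each arrival time."""
    x = np.arange(n_scans)
    t_arrival = np.sqrt(apex_time**2 + ((x - apex_scan) * spread)**2)
    t = np.arange(n_samples)[:, None]
    return np.exp(-0.5 * ((t - t_arrival[None, :]) / sigma) ** 2)

def augment(bscan, rng):
    """Physics-motivated augmentations: flip in the cross-track direction
    (hyperbolas are symmetric) and rescale amplitude (soil attenuation varies)."""
    out = bscan[:, ::-1] if rng.random() < 0.5 else bscan
    return out * rng.uniform(0.7, 1.3)

rng = np.random.default_rng(0)
scan = synthetic_hyperbola()
scan_aug = augment(scan, rng)
```

Feeding such physically consistent variants to a detector is one way to stretch a small training set, which is the problem the augmentation in this work addresses.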

Relevance: 100.00%

Abstract:

Thesis (Ph.D.)--University of Washington, 2016-08

Relevance: 100.00%

Abstract:

Master's dissertation presented to obtain the degree of Master in Product Design at the Universidade de Lisboa - Faculdade de Arquitectura.

Relevance: 100.00%

Abstract:

Gasarite structures are a unique type of metallic foam containing tubular pores. The original methods for their production limited them to laboratory study despite appealing foam properties. Thermal decomposition processing of gasarites holds the potential to broaden the application of gasarite foams in engineering design by removing several barriers to their industrial-scale production. The following study characterized thermal decomposition gasarite processing both experimentally and theoretically. Significant variation was found to be inherent to this process; therefore, several modifications were necessary to produce gasarites by this method. Conventional means to increase porosity and enhance pore morphology were studied. Pore morphology proved easier to replicate when pores were stabilized by alumina additions and powders were dispersed evenly. To better characterize processing, high-temperature and high-ramp-rate thermal decomposition data were gathered. The high-ramp-rate thermal decomposition behavior of several hydrides was more rapid than hydride kinetics at low ramp rates. These data were then used to estimate the contribution of several pore-formation mechanisms to the development of the pore structure. Gas-metal eutectic growth can be a viable pore-formation mode only if non-equilibrium conditions persist. Bubble capture cannot be a dominant pore-growth mode because bubble terminal velocities are high. Direct gas evolution appears to be the most likely pore-formation mode, given the high gas-evolution rate from the decomposing particulate and the microstructural pore-growth trends. The overall process was evaluated for its economic viability: thermal decomposition has potential for industrialization, but further refinements are needed for the process to be viable.
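The bubble-capture argument rests on an order-of-magnitude estimate of how fast a small gas bubble rises in the melt. A Stokes-law sketch makes the point; every property value below is an assumed round number (roughly molten copper), not data from the study.

```python
# Stokes-law terminal velocity of a small gas bubble rising in a liquid metal.
# All property values are illustrative assumptions, not values from the study.
g = 9.81             # gravitational acceleration, m/s^2
d = 100e-6           # bubble diameter, m (100 um)
rho_liquid = 8000.0  # liquid-metal density, kg/m^3 (roughly molten Cu)
rho_gas = 0.1        # gas density at melt temperature, kg/m^3 (negligible)
mu = 4e-3            # melt dynamic viscosity, Pa*s

# v_t = g * d^2 * (rho_l - rho_g) / (18 * mu)
v_terminal = g * d**2 * (rho_liquid - rho_gas) / (18 * mu)
```

With these numbers the bubble rises on the order of a centimeter per second, far faster than typical solidification-front speeds, which is the sense in which "high bubble terminal velocities" rule out bubble capture as a dominant growth mode.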

Relevance: 100.00%

Abstract:

Hot tensile and creep tests were carried out on Kanthal A1 alloy in the temperature range from 600 to 800 °C. Each of these sets of data was analyzed separately according to its own methodology, but an attempt was made to find a correlation between them. A new criterion proposed for converting hot tensile data to creep data makes it possible to analyze the two kinds of results according to the usual creep relations, such as those of Norton, Monkman-Grant, and Larson-Miller. The remarkable compatibility verified between both sets of data by this procedure strongly suggests that hot tensile data can be converted to creep data and vice versa for Kanthal A1 alloy, as verified previously for other metallic materials.
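One of the creep relations named above, the Larson-Miller parameter, can be sketched numerically. The constant C = 20 is the common default, not a value fitted for Kanthal A1, and the temperatures and rupture time are made-up inputs chosen only to show the time-temperature trade-off.

```python
import math

# Larson-Miller parameter: LMP = T * (C + log10(t_r)),
# with T in kelvin, rupture time t_r in hours, and C a material constant.
C = 20.0  # common default; the value for Kanthal A1 is not given here

def lmp(T_kelvin, t_rupture_h):
    return T_kelvin * (C + math.log10(t_rupture_h))

def rupture_time(T_kelvin, lmp_value):
    """Invert the relation: rupture time at another temperature for the
    same stress level (i.e. the same LMP)."""
    return 10 ** (lmp_value / T_kelvin - C)

# A hypothetical 100 h rupture at 800 C maps, at constant stress, to a
# predicted rupture time at 700 C:
p = lmp(800 + 273.15, 100.0)
t_700 = rupture_time(700 + 273.15, p)
```

The same parameterization is what lets converted hot-tensile points and genuine creep points be plotted on one master curve and compared directly.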

Relevance: 100.00%

Abstract:

In this paper, results of tests on 32 concrete-filled steel tubular columns under axial load are reported. The test parameters were the concrete compressive strength, the column slenderness (L/D), and the wall thickness (t). The test results were compared with predictions from the codes NBR 8800:2008 and EN 1994-1-1:2004 (EC4). The columns had length-to-diameter ratios (L/D) of 3, 5, 7 and 10 and were tested with 30 MPa, 60 MPa, 80 MPa and 100 MPa concrete compressive strengths. The ultimate strengths predicted by the codes showed good agreement with the experimental results. The NBR 8800 predictions were the most conservative, while EC4 gave the best results on average, although it was not conservative for usual short concrete-filled columns.
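The starting point of the EC4 prediction compared above is the plastic resistance of the composite cross-section. The sketch below computes only that quantity, ignoring the confinement enhancement and buckling reduction that the full EN 1994-1-1 method applies; the dimensions and material strengths are hypothetical, chosen to illustrate the calculation rather than reproduce any tested column.

```python
import math

# Plastic resistance of a circular concrete-filled steel tube cross-section,
# in the spirit of EC4 (for concrete-filled tubes the 0.85 factor on fck may
# be omitted).  Hypothetical dimensions and strengths; partial safety factors
# and slenderness effects are deliberately left out of this sketch.
D = 114.3e-3   # outer diameter, m
t = 6.0e-3     # wall thickness, m
fy = 350e6     # steel yield strength, Pa
fck = 60e6     # concrete compressive strength, Pa (one of the tested grades)

A_steel = math.pi / 4 * (D**2 - (D - 2 * t) ** 2)
A_concrete = math.pi / 4 * (D - 2 * t) ** 2

N_pl = A_steel * fy + A_concrete * fck   # plastic resistance, N
```

Code comparisons like the one reported then divide the experimental ultimate load by predictions of this kind (with the code's adjustment factors applied) to judge conservatism.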

Relevance: 100.00%

Abstract:

For the optimal design of plate heat exchangers (PHEs), an accurate thermal-hydraulic model that takes into account the effect of the flow arrangement on the heat load and pressure drop is necessary. In the present study, the effect of the flow arrangement on the pressure drop of a PHE is investigated. Thirty-two different arrangements were experimentally tested using a laboratory-scale PHE with flat plates. The experimental data were used for (a) determination of an empirical correlation for the effect of the number of passes and the number of flow channels per pass on the pressure drop; (b) validation of a friction-factor model through parameter estimation; and (c) comparison with simulation results obtained with a CFD (computational fluid dynamics) model of the PHE. All three approaches resulted in good agreement between experimental and predicted values of pressure drop. Moreover, the CFD model is used to evaluate the flow maldistribution in a PHE with two channels per pass. (c) 2008 Elsevier Ltd. All rights reserved.
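Approach (b), estimating friction-factor parameters from pressure-drop data, can be sketched with a generic power-law model f = a·Re^(-b) fitted by least squares in log space. The model form, the parameter values, and the synthetic "measurements" below are all assumptions for illustration, not the correlation from the study.

```python
import numpy as np

# Synthetic friction-factor "data" generated from known parameters plus noise,
# then recovered by ordinary least squares on log f = log a - b * log Re.
rng = np.random.default_rng(1)
a_true, b_true = 1.5, 0.2                       # hypothetical model parameters
Re = np.logspace(2, 4, 20)                      # channel Reynolds numbers
f_meas = a_true * Re ** (-b_true) * rng.normal(1.0, 0.02, Re.size)

X = np.column_stack([np.ones_like(Re), np.log(Re)])
coef, *_ = np.linalg.lstsq(X, np.log(f_meas), rcond=None)
a_fit, b_fit = np.exp(coef[0]), -coef[1]
```

In the real study the per-channel friction factor would then feed a pressure-drop model that also accounts for the number of passes and channels per pass, which is where the flow arrangement enters.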

Relevance: 100.00%

Abstract:

OctVCE is a Cartesian-cell CFD code produced especially for numerical simulations of shock and blast wave interactions with complex geometries. Virtual Cell Embedding (VCE) was chosen as its Cartesian-cell kernel because it is simple to code and sufficient for practical engineering design problems. This also makes the code much more ‘user-friendly’ than structured-grid approaches, as the gridding process is done automatically. The CFD methodology relies on a finite-volume formulation of the unsteady Euler equations, solved using a standard explicit Godunov (MUSCL) scheme. Both octree-based adaptive mesh refinement and shared-memory parallel processing capability have also been incorporated. For further details on the theory behind the code, see the companion report 2007/12.
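The core numerical idea, an explicit Godunov-type finite-volume update of the unsteady Euler equations, can be sketched in 1-D. This is not the OctVCE code: it is first-order with a Rusanov flux rather than the second-order MUSCL reconstruction, and Sod's shock tube stands in for a blast problem.

```python
import numpy as np

GAMMA = 1.4  # ratio of specific heats

def flux(U):
    """Euler flux for conserved variables U = (rho, rho*u, E)."""
    rho, mom, E = U
    u = mom / rho
    p = (GAMMA - 1) * (E - 0.5 * rho * u**2)
    return np.array([mom, mom * u + p, (E + p) * u])

def rusanov(UL, UR):
    """Rusanov (local Lax-Friedrichs) interface flux."""
    def smax(U):
        rho, mom, E = U
        u = mom / rho
        p = (GAMMA - 1) * (E - 0.5 * rho * u**2)
        return np.abs(u) + np.sqrt(GAMMA * p / rho)
    s = np.maximum(smax(UL), smax(UR))
    return 0.5 * (flux(UL) + flux(UR)) - 0.5 * s * (UR - UL)

n, L, t_end, cfl = 200, 1.0, 0.2, 0.5
dx = L / n
x = (np.arange(n) + 0.5) * dx
rho = np.where(x < 0.5, 1.0, 0.125)            # Sod initial condition
p = np.where(x < 0.5, 1.0, 0.1)
U = np.array([rho, np.zeros(n), p / (GAMMA - 1)])

t = 0.0
while t < t_end:
    u = U[1] / U[0]
    pr = (GAMMA - 1) * (U[2] - 0.5 * U[0] * u**2)
    dt = min(cfl * dx / np.max(np.abs(u) + np.sqrt(GAMMA * pr / U[0])), t_end - t)
    Ug = np.hstack([U[:, :1], U, U[:, -1:]])   # transmissive ghost cells
    F = rusanov(Ug[:, :-1], Ug[:, 1:])
    U = U - dt / dx * (F[:, 1:] - F[:, :-1])   # conservative update
    t += dt
```

The MUSCL extension used by OctVCE would add a slope-limited reconstruction of the interface states before the flux evaluation; the conservative update itself is the same.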

Relevance: 100.00%

Abstract:

The development of large-scale solid-state fermentation (SSF) processes is hampered by the lack of simple tools for the design of SSF bioreactors. The use of semifundamental mathematical models to design and operate SSF bioreactors can be complex. In this work, dimensionless design factors are used to predict the effects of scale and of operational variables on the performance of rotating-drum bioreactors. The dimensionless design factor (DDF) is the ratio of the rate of heat generation to the rate of heat removal at the time of peak heat production. It can be used to predict the maximum temperature reached within the substrate bed for given operational variables. Alternatively, given the maximum temperature that can be tolerated during the fermentation, it can be used to explore the combinations of operating variables that prevent that temperature from being exceeded. Comparison of the predictions of the DDF approach with literature data for the operation of rotating drums suggests that the DDF is a useful tool. The DDF approach was used to explore the consequences of three scale-up strategies for the required air flow rates and the maximum temperatures reached in the substrate bed as the bioreactor size was increased on the basis of geometric similarity. The first strategy was to keep the superficial flow rate of the process air through the drum constant. The second was to keep the ratio of air volume to bioreactor volume constant. The third was to adjust the air flow rate with increasing scale so as to keep constant the maximum temperature attained in the substrate bed during the fermentation. (C) 2000 John Wiley & Sons, Inc.
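The DDF idea, peak heat generation divided by heat-removal capacity, can be sketched numerically. The removal term below counts only sensible heating of the process air, and every number is an illustrative assumption rather than a value from the study.

```python
# Sketch of the dimensionless design factor (DDF): peak heat generation in
# the substrate bed over the heat-removal capacity of the process air.
# All values are illustrative assumptions.
rho_air = 1.1       # air density, kg/m^3
cp_air = 1006.0     # specific heat of air, J/(kg K)
dT = 15.0           # allowed air temperature rise, K
q_peak = 8000.0     # peak specific heat generation, W per m^3 of bed
V_bed = 1.0         # substrate bed volume, m^3

def ddf(airflow_m3_s):
    """DDF = peak heat generation / sensible heat removal by the air."""
    generation = q_peak * V_bed                      # W
    removal = airflow_m3_s * rho_air * cp_air * dT   # W
    return generation / removal

def airflow_for_ddf(target=1.0):
    """Air flow at which removal matches generation at the target DDF."""
    return q_peak * V_bed / (target * rho_air * cp_air * dT)

F = airflow_for_ddf(1.0)
```

Scale-up strategy three corresponds to solving `airflow_for_ddf` at each bioreactor size so that the maximum bed temperature, and hence the DDF, stays constant.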

Relevance: 100.00%

Abstract:

The most widely used method for predicting the onset of continuous caving is Laubscher's caving chart. A detailed examination of this method concluded that it has limitations which may affect results, particularly when dealing with stronger rock masses outside current experience. These limitations relate to inadequate guidelines for the adjustment factors applied to the rock mass rating (RMR), concerns about the position on the chart of critical case-history data, undocumented changes to the method, and an insufficient number of data points to place confidence in the stability boundaries. A review was undertaken of the application and reliability of a numerical method for assessing cavability. The review highlighted a number of issues that, at this stage, make numerical continuum methods problematic for predicting cavability, particularly their sensitivity to input parameters that are difficult to determine accurately, and their mesh dependency. An extended version of the Mathews method for open-stope design was developed as an alternative method of predicting the onset of continuous caving. A number of caving case histories were collected and analyzed, and a caving boundary was delineated statistically on the Mathews stability graph. The definition of the caving boundary was aided by the existence of a large and wide-ranging stability database from non-caving mines. A caving-rate model was extrapolated from the extended Mathews stability graph but could only be partially validated owing to a lack of reliable data.
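The two quantities the Mathews stability graph plots are straightforward to compute, and a short sketch makes the method concrete. The factor values below are hypothetical inputs, not case-history data from the study.

```python
# The Mathews method plots the stability number N against the hydraulic
# radius of the surface being assessed.  Input values here are hypothetical.
def stability_number(q_prime, A, B, C):
    """Mathews stability number: modified Q rating (q_prime) times the
    stress (A), joint-orientation (B) and gravity (C) adjustment factors."""
    return q_prime * A * B * C

def hydraulic_radius(width, height):
    """Surface area over perimeter for a rectangular stope face, in metres."""
    return (width * height) / (2 * (width + height))

N = stability_number(q_prime=10.0, A=0.8, B=0.3, C=2.0)
HR = hydraulic_radius(width=30.0, height=40.0)
```

Plotting (HR, N) against the statistically delineated caving boundary is then what classifies a surface as stable, transitional, or expected to cave.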

Relevance: 100.00%

Abstract:

In this paper, we present a method for estimating the local thickness distribution in finite element models, applied to injection-molded and cast engineering parts. This method delivers considerably improved performance compared to two previously proposed approaches and has been validated against thickness measured by different human operators. We also demonstrate that using this method to assign a distribution of local thickness in FEM crash simulations results in a much more accurate prediction of real part performance, thus increasing the benefits of computer simulations in engineering design by enabling zero-prototyping and reducing product development costs. The simulation results have been compared to experimental tests, evidencing the advantage of the proposed method. The proposed approach to considering local thickness distribution in FEM crash simulations therefore has high potential for the product development process of complex and highly demanding injection-molded and cast parts, and it is currently being used by Ford Motor Company.

Relevance: 100.00%

Abstract:

Dissertation presented to obtain the PhD degree in Electrical and Computer Engineering - Electronics

Relevance: 100.00%

Abstract:

In the context of focal epilepsy, the simultaneous combination of electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) holds great promise as a technique by which the hemodynamic correlates of interictal spikes detected on scalp EEG can be identified. Because traditional EEG recordings have been unable to overcome the difficulty of correlating ictal clinical symptoms with onset in particular areas of the lobes, the epileptogenic cortical regions need to be mapped with more precision; fMRI, on the other hand, has suggested localizations more consistent with the ictal clinical manifestations detected. This study was developed to improve understanding of how the parameters involved in processing the physical and mathematical data produced by the EEG/fMRI technique influence the final results. Accuracy was evaluated by comparing the BOLD results with the high-resolution EEG maps, the malformative lesions detected in the T1-weighted MR images, and the anatomical localizations of the diagnosed symptomatology of each patient studied. Optimizing the set of parameters used will provide an important contribution to the diagnosis of epileptogenic foci in patients included in an epilepsy surgery evaluation program. The results obtained allowed us to conclude that: by associating the BOLD effect with interictal spikes, the epileptogenic areas are mapped to localizations different from those obtained by the EEG maps representing the electrical potential distribution across the scalp; and there is a solid link between the variation of particular parameters manipulated during the fMRI data processing and the optimization of the final results, among which smoothing, deleted volumes, the HRF used to convolve with the activation design, and the shape of the Gamma function can certainly be emphasized.
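One of the processing steps whose parameters the study varies, convolving the interictal spike timing with a Gamma-shaped HRF to build the activation regressor, can be sketched briefly. The HRF parameters, TR, and spike volumes below are generic placeholders, not the values tuned in the study.

```python
import numpy as np

TR = 2.0           # repetition time, s (assumed)
n_vols = 120       # number of fMRI volumes (assumed)

def gamma_hrf(t, peak=6.0, shape=6.0):
    """Single-Gamma hemodynamic response function with its mode at `peak` s."""
    scale = peak / (shape - 1.0)
    h = (t / scale) ** (shape - 1.0) * np.exp(-t / scale)
    return h / h.max()

t_hrf = np.arange(0, 32, TR)          # 32 s HRF kernel sampled at the TR
hrf = gamma_hrf(t_hrf)

spikes = np.zeros(n_vols)
spikes[[10, 45, 80]] = 1.0            # volumes during which spikes were marked

# Predicted BOLD regressor: spike train convolved with the HRF.
regressor = np.convolve(spikes, hrf)[:n_vols]
```

Changing the shape of the Gamma function, the smoothing, or the number of deleted volumes alters this regressor, which is exactly how those parameters propagate into the final BOLD maps.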

Relevance: 100.00%

Abstract:

Configuration of a development environment in the Eclipse IDE. Introduction to GIS: uses, applications and examples. Getting to know the gvSIG tool. Getting to know the most widespread Open Geospatial Consortium (OGC) standards, in particular the Web Processing Service. Analysis, design and development of a client capable of consuming WPS services.