961 results for Engineering design
Abstract:
In the engineering design of structural shapes, flat plate analysis results can be generalized to predict the behavior of complete structural shapes. Accordingly, the purpose of this project is to analyze a thin flat plate under conductive heat transfer and to simulate the temperature distribution, thermal stresses, total displacements, and buckling deformations. The current approach in these cases has been the Finite Element Method (FEM), which relies on the construction of a conforming mesh. In contrast, this project uses the mesh-free Scan Solve Method, which eliminates the meshing limitation by using a non-conforming mesh. I implemented this modeling process by developing numerical algorithms and software tools to model thermally induced buckling. In addition, a convergence analysis was performed, and the results were compared with FEM. In conclusion, the results demonstrate that the method gives solutions of similar quality to FEM while being computationally less time consuming.
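For orientation only, the sketch below shows the kind of conductive temperature field such an analysis starts from, using a plain finite-difference (Jacobi) iteration on a rectangular plate; the grid size and edge temperatures are assumptions for illustration, and the project itself uses the mesh-free Scan Solve Method rather than this discretization.

```python
import numpy as np

# Illustrative only: steady-state conduction on a rectangular plate,
# solved with a plain finite-difference (Jacobi) iteration. The grid
# resolution and edge temperatures below are assumptions, not values
# from the project described above.
nx, ny = 60, 40            # grid points in x and y
T = np.zeros((ny, nx))     # temperature field (deg C)
T[0, :] = 100.0            # hot bottom edge
T[-1, :] = 20.0            # cool top edge
T[:, 0] = T[:, -1] = 20.0  # cool left/right edges

for _ in range(5000):                       # fixed-point (Jacobi) sweeps
    T_new = T.copy()
    T_new[1:-1, 1:-1] = 0.25 * (T[1:-1, :-2] + T[1:-1, 2:] +
                                T[:-2, 1:-1] + T[2:, 1:-1])
    if np.max(np.abs(T_new - T)) < 1e-6:    # simple convergence check
        T = T_new
        break
    T = T_new

print("centre temperature:", T[ny // 2, nx // 2])
```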
Abstract:
Adjoint methods have proven to be an efficient way of calculating the gradient of an objective function with respect to a shape parameter for optimisation, with a computational cost nearly independent of the number of design variables [1]. The approach in this paper links the adjoint surface sensitivities (gradient of the objective function with respect to the surface movement) with the parametric design velocities (movement of the surface due to a CAD parameter perturbation) in order to compute the gradient of the objective function with respect to the CAD variables.
For a successful implementation of shape optimisation strategies in practical industrial cases, the choice of design variables or parameterisation scheme used for the model to be optimised plays a vital role. Where the goal is to base the optimisation on a CAD model, the choices are either to use a NURBS geometry generated from CAD modelling software, where the positions of the NURBS control points are the optimisation variables [2], or to use the feature-based CAD model with all of its construction history, so as to preserve the design intent [3]. The main advantage of using the feature-based model is that the optimised model produced can be used directly for downstream applications, including manufacturing and process planning.
This paper presents an approach for optimisation based on the feature-based CAD model, which uses the CAD parameters defining the features in the model geometry as the design variables. In order to capture the CAD surface movement with respect to a change in a design variable, the “Parametric Design Velocity” is calculated, which is defined as the movement of the CAD model boundary in the normal direction due to a change in the parameter value.
The approach presented here for calculating the design velocities represents an advance in capability and robustness over that described by Robinson et al. [3]. The process can be easily integrated into most industrial optimisation workflows and is immune to the topology and labelling issues highlighted by other CAD-based optimisation processes. It considers every continuous (“real value”) parameter type as an optimisation variable, and it can be adapted to work with any CAD modelling software, as long as the software has an API which provides access to the values of the parameters controlling the model shape and allows the model geometry to be exported. To calculate the movement of the boundary, the methodology employs finite differences on the shape of the 3D CAD models before and after the parameter perturbation. The implementation procedure includes calculating the geometric movement along the normal direction between two discrete representations of the original and perturbed geometries. Parametric design velocities can then be linked directly with the adjoint surface sensitivities to extract the gradients used in a gradient-based optimisation algorithm.
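As a minimal sketch of this gradient chain, the following Python fragment assumes the original and perturbed CAD surfaces have already been exported as matched point sets with facet normals and areas (the array names and the matching step are assumptions, not the paper's tooling): the design velocity is the normal component of the boundary displacement per unit parameter change, and the gradient with respect to a CAD parameter is its surface integral weighted by the adjoint surface sensitivity.

```python
import numpy as np

# Sketch of linking adjoint surface sensitivities to parametric design
# velocities, assuming matched discrete surfaces before/after a
# parameter perturbation of size dp. All names here are illustrative.
def design_velocity(x_orig, x_pert, normals, dp):
    """Normal component of boundary movement per unit parameter change."""
    return np.einsum('ij,ij->i', x_pert - x_orig, normals) / dp

def cad_gradient(adjoint_sens, x_orig, x_pert, normals, areas, dp):
    """dJ/dp approximated as sum(adjoint sensitivity * velocity * facet area)."""
    v = design_velocity(x_orig, x_pert, normals, dp)
    return np.sum(adjoint_sens * v * areas)
```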
A flow optimisation problem is presented in which the power dissipation of the flow in an automotive air duct is to be reduced by changing the parameters of the CAD geometry created in CATIA V5. The flow sensitivities are computed with the continuous adjoint method for laminar and turbulent flows [4] and are combined with the parametric design velocities to compute the cost function gradients. A line-search algorithm is then used to update the design variables and proceed further with the optimisation process.
Abstract:
Thesis (Master's)--University of Washington, 2016-08
Abstract:
Design for behaviour change aims to influence user behaviour, through design, for social or environmental benefit. Understanding and modelling human behaviour has thus come within the scope of designers’ work, as in interaction design, service design and user experience design more generally. Diverse approaches to how to model users when seeking to influence behaviour can result in many possible strategies, but a major challenge for the field is matching appropriate design strategies to particular behaviours (Zachrisson & Boks, 2012). In this paper, we introduce and explore behavioural heuristics as a way of framing problem-solution pairs (Dorst & Cross, 2001) in terms of simple rules. These act as a ‘common language’ between insights from user research and design principles and techniques, and draw on ideas from human factors, behavioural economics, and decision research. We introduce the process via a case study on interaction with office heating systems, based on interviews with 16 people. This is followed by worked examples in the ‘other direction’, based on a workshop held at the Interaction ’12 conference, extracting heuristics from existing systems designed to influence user behaviour, to illustrate both ends of a possible design process using heuristics.
Abstract:
The following article is a compilation of the methodology used in the transformation of the Acentos bookstore of Universidad EAFIT, carried out in the Final Project course of the Product Design Engineering undergraduate programme. This methodology makes it possible to follow, in a clear and orderly manner, a work scheme with clear objectives, as a good formula for a successful project.
Abstract:
This project studies and compares the Bottom Up and Top Down methodologies used in product development within a manufacturing department in a collaborative environment. A product was developed using both methodologies, and their impact on management indicators that measure an organization's performance was then analysed. The benefits of the Top Down approach in the manufacture of large assemblies are also highlighted, using a lathe as an example.
Abstract:
Contemporary integrated circuits are designed and manufactured in a globalized environment, leading to concerns of piracy, overproduction and counterfeiting. One class of techniques to combat these threats is circuit obfuscation, which seeks to modify the gate-level (or structural) description of a circuit without affecting its functionality, in order to increase the complexity and cost of reverse engineering. Most of the existing circuit obfuscation methods are based on the insertion of additional logic (called “key gates”) or on camouflaging existing gates in order to make it difficult for a malicious user to get the complete layout information without extensive computations to determine the key-gate values. However, when the netlist or the circuit layout, although camouflaged, is available to the attacker, he/she can use advanced logic analysis and circuit simulation tools and Boolean SAT solvers to reveal the unknown gate-level information without exhaustively trying all the input vectors, thus bringing down the complexity of reverse engineering. To counter this problem, some 'provably secure' logic encryption algorithms that emphasize methodical selection of camouflaged gates have been proposed previously in the literature [1,2,3]. The contribution of this paper is the creation and simulation of a new layout obfuscation method that uses don't care conditions. We also present a proof of concept of a new functional or logic obfuscation technique that not only conceals but modifies the circuit functionality in addition to the gate-level description, and can be implemented automatically during the design process. Our layout obfuscation technique utilizes don't care conditions (namely, Observability and Satisfiability Don't Cares) inherent in the circuit to camouflage selected gates and modify sub-circuit functionality while meeting the overall circuit specification. Here, camouflaging or obfuscating a gate means replacing the candidate gate by a 4x1 multiplexer which can be configured to perform all possible 2-input/1-output functions, as proposed by Bao et al. [4]. It is important to emphasize that our approach not only obfuscates but alters sub-circuit-level functionality in an attempt to make IP piracy difficult. The choice of gates to obfuscate determines the effort required to reverse engineer or brute-force the design. As such, we propose a method of camouflaged gate selection based on the intersection of output logic cones. By choosing these candidate gates methodically, the complexity of reverse engineering can be made exponential, thus making it computationally very expensive to determine the true circuit functionality. We propose several heuristic algorithms to maximize the reverse engineering complexity based on don't-care-based obfuscation and methodical gate selection. Thus, the goal of protecting the design IP from malicious end users is achieved. It also makes it significantly harder for rogue elements in the supply chain to use, copy or replicate the same design with a different logic. We analyze the reverse engineering complexity by applying our obfuscation algorithm to the ISCAS-85 benchmarks. Our experimental results indicate that significant reverse engineering complexity can be achieved at minimal design overhead (the average area overhead for the proposed layout obfuscation methods is 5.51% and the average delay overhead is about 7.732%). We discuss the strengths and limitations of our approach and suggest directions that may lead to improved logic encryption algorithms in the future. References: [1] R.
Chakraborty and S. Bhunia, “HARPOON: An Obfuscation-Based SoC Design Methodology for Hardware Protection,” IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, vol. 28, no. 10, pp. 1493–1502, 2009. [2] J. A. Roy, F. Koushanfar, and I. L. Markov, “EPIC: Ending Piracy of Integrated Circuits,” in Proc. Design, Automation and Test in Europe (DATE), 2008, pp. 1069–1074. [3] J. Rajendran, M. Sam, O. Sinanoglu, and R. Karri, “Security Analysis of Integrated Circuit Camouflaging,” in Proc. ACM Conference on Computer and Communications Security (CCS), 2013. [4] B. Liu and B. Wang, “Embedded Reconfigurable Logic for ASIC Design Obfuscation Against Supply Chain Attacks,” in Proc. Design, Automation and Test in Europe Conference and Exhibition (DATE), 2014, pp. 1–6.
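The camouflaged-gate selection above is based on the intersection of output logic cones. The sketch below illustrates that idea under two assumptions that are not taken from the paper itself: a toy netlist representation (a map from each gate to the gates that drive it) and the simple rule of preferring gates that lie in the most primary-output cones.

```python
from collections import defaultdict

# Illustrative cone-intersection candidate selection (not the authors'
# implementation). `netlist` maps each gate to the list of gates feeding it.
def fanin_cone(netlist, node, cache):
    """Transitive fan-in (set of gates) driving `node`."""
    if node in cache:
        return cache[node]
    cone = set()
    for driver in netlist.get(node, []):
        cone.add(driver)
        cone |= fanin_cone(netlist, driver, cache)
    cache[node] = cone
    return cone

def rank_candidates(netlist, primary_outputs):
    """Rank gates by how many primary-output cones contain them."""
    counts, cache = defaultdict(int), {}
    for po in primary_outputs:
        for g in fanin_cone(netlist, po, cache):
            counts[g] += 1
    return sorted(counts, key=counts.get, reverse=True)

# Toy netlist: gates shared by both output cones (g3, g1) rank ahead of g2.
netlist = {'o1': ['g3'], 'o2': ['g3', 'g2'], 'g3': ['g1']}
print(rank_candidates(netlist, ['o1', 'o2']))
```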
Abstract:
The "Sonar Hopf" cochlea is a recently much-advertised engineering design of an auditory sensor. We analyze this approach based on a recent description by its inventors Hamilton, Tapson, Rapson, Jin, and van Schaik, in which they exhibit the "Sonar Hopf" model, its analysis, and the corresponding hardware in detail. We identify problems in the theoretical formulation of the model and critically examine the claimed coherence between the described model, the measurements from the implemented hardware, and biological data.
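For context, Hopf-type cochlea designs are generally built around the forced Hopf normal form; the equation below is a generic statement of that form, not the specific "Sonar Hopf" formulation whose details the paper examines.

```latex
% Generic forced Hopf normal form commonly used in Hopf-cochlea models
% (generic form only; not the specific "Sonar Hopf" equations).
\[
  \dot{z} = (\mu + i\omega_0)\, z - |z|^{2} z + F e^{i\omega t},
  \qquad z \in \mathbb{C},
\]
% mu: bifurcation parameter, omega_0: characteristic frequency of the section,
% F and omega: amplitude and frequency of the acoustic forcing.
```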
Abstract:
This research aims to show how the redesign of a product, specifically modular redesign, can generate changes in the plant layout that increase a company's production levels. This increase was achieved by implementing a new methodology, described at length in this work, which covers the entire product redesign process, from disassembly to plant layout. Well-known tools were used in this research, such as Functional Analysis, the Interaction Matrix, Design for Assembly (DFA), the Operations Diagram and some concepts of modular product platform management. As an illustrative example, two existing products were redesigned, a black SAMURAI FACICLIC blender and a HAMILTON BEACH model 70740 food processor, which served as the basis for showing how transforming a product into a more modular one significantly improves the plant layout, increasing production while reducing assembly times and optimizing the arrangement of the workstations in the plant that assemble the final product. To carry out this research, it was essential to create a new methodological route that would allow us to redesign an initial product as a modularized product and to finish with the redesign of the plant layout for a family of modular products. This new methodological route, which is part of our research results, consists of three phases. The first phase is mainly based on modularizing each of the products to be integrated into the product family; the second phase identifies the common modules among the modularized products in order to generate the platform; and finally, the third phase generates a new plant layout for the assembly of the resulting product family. This research confirms, among other things, that modular product redesign is a fundamental tool when defining a product family in a company. Starting from products that had already gone through a previous modularization process, a metric was proposed to define the similarity between modules, considering as many of their properties as possible, together with the physical form and the function of each of them, thus complementing the methodology proposed by Katja Hölttä in her doctoral thesis. Finally, a plant layout was proposed that allows a more efficient assembly in terms of both time and the number of operators needed for production, thereby increasing the company's versatility and production rates.
Abstract:
Crystallization is employed in different industrial processes, and the method and operation can differ depending on the nature of the substances involved. The aim of this study is to examine the effect of various operating conditions on crystal properties within a chemical engineering design window, with a focus on ultrasound-assisted cooling crystallization. Batch-to-batch variation, the number of manufacturing steps and production times are the factors that continuous crystallization seeks to address. Scale-up of continuous processes is considered straightforward compared to batch processes, owing to the increase of processing time in the specific reactor. In the cooling crystallization process, ultrasound can be used to control the crystal properties. Different model compounds were used to define suitable process parameters for the modular crystallizer, using equal operating conditions in each module. A final temperature of 20 °C was employed in all experiments, while the other operating conditions differed. The studied process parameters and the configuration of the crystallizer were manipulated to achieve continuous operation without crystal clogging along the crystallization path. The results from the continuous experiments were compared with the batch crystallization results and analysed using a Malvern Morphologi G3 instrument to determine the crystal morphology and crystal size distribution (CSD). The modular crystallizer was operated successfully at three different residence times. At optimal process conditions, a longer residence time gives smaller crystals and a narrower CSD. Based on the findings, at a constant initial solution concentration, the residence time had a clear influence on the crystal properties. The equal-supersaturation criterion in each module gave better results than the other cooling profiles. The combination of continuous crystallization and ultrasound has large potential to overcome clogging, to obtain reproducible and narrow CSDs, specific crystal morphologies and uniform particle sizes, and to exclude milling stages, in comparison to batch processes.
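As a minimal illustration of the equal-supersaturation criterion mentioned above, the sketch below computes module outlet temperatures so that each module generates the same concentration drop, assuming a linear solubility curve and full desupersaturation in each module; the compound data are placeholders, not values from this study.

```python
# Illustrative only: equal supersaturation per module for a linear
# solubility curve c*(T) = a + b*T. Numbers are placeholders, not data
# from the study described above.
a, b = 0.10, 0.004          # solubility (kg solute / kg solvent) at T in deg C
T_in, T_final = 45.0, 20.0  # saturated feed temperature and final temperature
n_modules = 3

c_in = a + b * T_in                      # saturated feed concentration
c_out = a + b * T_final                  # solubility at the final temperature
dc = (c_in - c_out) / n_modules          # equal supersaturation per module

# Module outlet temperatures that each relieve `dc` of dissolved solute
T_modules = [(c_in - dc * (k + 1) - a) / b for k in range(n_modules)]
print([round(T, 2) for T in T_modules])  # -> [36.67, 28.33, 20.0]
```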
Abstract:
This degree project consisted of brewing and de-alcoholizing three craft beer recipes based on previously designed recipes. The beers were de-alcoholized by a vacuum sublimation process and then reconstituted with carbonated water. Using infrared spectrometry and gas chromatography, it was determined that more than 98% of the alcohol present in each of the original samples was successfully removed. The degree of acceptance of each of the six beer varieties was determined by a consumer panel. The panel results showed a higher degree of acceptance for the alcoholic beers than for the non-alcoholic beers and made it possible to establish that there were significant differences between the flavour of the non-alcoholic beers and that of the original beers. Finally, a conceptual design of a plant for producing beer with and without alcohol was developed, and from it an economic analysis was carried out showing that the project is not economically viable under the conditions studied, with an IRR of 7% and an NPV (COP 1,054,498,368) lower than the initial investment (COP 1,151,965,681).
Abstract:
Toppling analysis of a precariously balanced rock (PBR) can provide insights into the nature of ground motion that has not occurred at that location in the past and, by extension, realistic constraints on peak ground motions for use in engineering design. Earlier approaches have targeted simplistic 2-D models of the rock or modeled the rock-pedestal contact using spring-damper assemblies that require re-calibration for each rock. These analyses also assume that the rock does not slide on the pedestal. Here, a method to model PBRs in three dimensions is presented. The 3-D model is created from a point cloud of the rock, the pedestal, and their interface, obtained using Terrestrial Laser Scanning (TLS). The dynamic response of the model under earthquake excitation is simulated using a rigid body dynamics algorithm. The veracity of this approach is demonstrated by comparisons against data from shake table experiments. Fragility maps for toppling probability of the Echo Cliff PBR and the Pacifico PBR as a function of various ground motion parameters, rock-pedestal interface friction coefficient, and excitation direction are presented. The seismic hazard at these PBR locations is estimated using these maps. Additionally, these maps are used to assess whether the synthetic ground motions at these locations resulting from scenario earthquakes on the San Andreas Fault are realistic (toppling would indicate that the ground motions are unrealistically high).
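For contrast with the 3-D rigid-body approach described above, the simplistic 2-D treatment it improves upon reduces to a quasi-static rocking threshold; a short sketch of that classical criterion is given below (the block dimensions are illustrative, and realistic PBR fragility requires the full dynamic simulation on the TLS-derived geometry).

```python
import math

# Classical quasi-static 2-D criterion used by simplified models: a rigid
# rectangular block on a rigid base begins to rock when the horizontal
# acceleration exceeds g * tan(alpha), where alpha is the angle between the
# vertical and the line from the rocking edge to the centre of mass.
# Dimensions below are illustrative, not those of the Echo Cliff or Pacifico PBR.
def rocking_threshold(half_width_m, cg_height_m, g=9.81):
    alpha = math.atan2(half_width_m, cg_height_m)
    return g * math.tan(alpha)          # acceleration (m/s^2) to initiate rocking

print(rocking_threshold(0.4, 1.2))      # ~3.27 m/s^2, i.e. about 0.33 g
```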
Abstract:
Despite the extensive implementation of Superstreets on congested arterials, reliable methodologies for such designs remain unavailable. The purpose of this research is to fill the information gap by offering reliable tools to assist traffic professionals in the design of Superstreets with and without signal control. The tool developed in this thesis consists of three models. The first model is used to determine the minimum U-turn offset length for an unsignalized Superstreet, given the headway distribution of the arterial traffic flows and the distribution of critical gaps among drivers. The second model is designed to estimate the queue size and its variation on each critical link in a signalized Superstreet, based on the given signal plan and the range of observed volumes. Recognizing that the operational performance of a Superstreet cannot be achieved without an effective signal plan, the third model is developed to provide a signal optimization method that can generate progression offsets for heavy arterial flows moving into and out of such an intersection design.
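One ingredient that a gap-acceptance analysis of this kind relies on is the probability that an arterial headway exceeds a driver's critical gap. The sketch below illustrates that calculation for negative-exponential headways and a normally distributed critical gap; the flow rate and gap parameters are assumptions, and this is not the thesis's actual offset-length model.

```python
import numpy as np

# Illustration only: with negative-exponential headways at flow rate q (veh/s),
# P(headway > t) = exp(-q * t). Averaging over a distribution of critical gaps
# gives the share of headways an average driver would accept.
q = 900 / 3600.0                         # arterial flow, veh/s (assumed)
rng = np.random.default_rng(1)
t_crit = rng.normal(6.0, 1.0, 100_000)   # critical gaps, s (assumed mean and sd)
p_accept = np.exp(-q * np.clip(t_crit, 0, None)).mean()
print(f"share of headways acceptable to an average driver: {p_accept:.2f}")  # ~0.23
```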
Abstract:
Anthropometric measurement protocols are characterized by a profusion of discrete or localized measurements, in an attempt to characterize the subject's body shape completely. Such protocols are used intensively in fields such as sports, forensic and/or reconstructive medicine, prosthesis design, ergonomics, and the making of garments, accessories, etc. With the advance of algorithms for shape recovery from samples (digitizations), anthropometric characterization has changed significantly. This article presents the process of digital characterization of body shape, from the measurement protocols applied to the subject, through the computational environment DigitLAB (developed at the CII-CAD-CAM-CG of Universidad EAFIT) for surface reconstruction, to the final geometric models. Comparisons of the results obtained with DigitLAB and with commercial 3D shape reconstruction packages are presented. The DigitLAB results prove superior, mainly because it takes advantage of the patterns of the digitizations (planar contact scans, pixel-grid range images, etc.) and provides modules for geometric and statistical processing of the data so that the shape reconstruction algorithms can be applied effectively. A case study aimed at the garment industry is presented, along with others carried out on test datasets commonly used in the scientific community for benchmarking algorithms.
Abstract:
Gasarite structures are a unique type of metallic foam containing tubular pores. The original methods for their production limited them to laboratory study despite appealing foam properties. Thermal decomposition processing of gasarites holds the potential to increase the application of gasarite foams in engineering design by removing several barriers to their industrial-scale production. The following study characterized thermal decomposition gasarite processing both experimentally and theoretically. It was found that significant variation is inherent to this process; therefore, several modifications were necessary to produce gasarites using this method. Conventional means to increase porosity and enhance pore morphology were studied. Pore morphology was found to be more easily replicated if pores were stabilized by alumina additions and the powders were dispersed evenly. In order to better characterize processing, high-temperature and high-ramp-rate thermal decomposition data were gathered. It was found that the high-ramp-rate thermal decomposition behavior of several hydrides was more rapid than the hydride kinetics observed at low ramp rates. These data were then used to estimate the contribution of several pore formation mechanisms to the development of the pore structure. It was found that gas-metal eutectic growth can be a viable pore formation mode only if non-equilibrium conditions persist. Bubble capture cannot be a dominant pore growth mode because of the high bubble terminal velocities. Direct gas evolution appears to be the most likely pore formation mode, given the high gas evolution rate from the decomposing particulate and the microstructural pore growth trends. The overall process was evaluated for its economic viability. It was found that thermal decomposition has potential for industrialization, but further refinements are necessary for the process to be viable.
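As a rough order-of-magnitude check on the bubble-capture argument above, a Stokes-law estimate of the terminal velocity of a small gas bubble rising in a metal melt can be made as follows; the property values are generic assumptions for a liquid-metal melt, not data from this study.

```python
# Order-of-magnitude Stokes-law estimate: v_t = 2 r^2 (rho_l - rho_g) g / (9 mu).
# All property values are rough assumptions, not measurements from the study.
r = 50e-6          # bubble radius, m
rho_l = 2400.0     # liquid metal density, kg/m^3 (assumed)
rho_g = 1.0        # gas density, kg/m^3
mu = 1.3e-3        # melt dynamic viscosity, Pa.s (assumed)
g = 9.81           # gravitational acceleration, m/s^2
v_t = 2 * r**2 * (rho_l - rho_g) * g / (9 * mu)
print(f"terminal velocity ~ {v_t * 1000:.1f} mm/s")   # ~10 mm/s with these values
```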