86 results for Design variables
Abstract:
The challenges facing modern engineering are ever greater: the aim is almost always to obtain lighter structures with attractive mechanical properties and, frequently, complex geometries. Given such requirements, composite materials have seen increasing application. However, the structural analysis of these materials is considerably more complex, since they are generally formed by stacking several layers of heterogeneous material, and these layers may be arranged in different orientations. Software that can predict the mechanical properties of a composite structure through micromechanics, apply Classical Laminate Theory and evaluate a failure criterion, such as Tsai-Hill, is therefore essential to streamline the study of the structure to be manufactured. To meet this need, an application named CAFE – Composite Analysis For Engineers was developed as a MATLAB® GUI, with an appealing graphical environment, which determines all the variables relevant to the study of composite structures. The application aims to support and speed up learning in this field, and also gives the user access to the calculation code, so that the equations used can be inspected and, eventually, extended in future developments. The program was validated by comparing its results against those of another highly reliable program, and it was concluded that the CAFE software produces valid results and is ready for use.
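This abstract mentions the Tsai-Hill failure criterion applied at ply level. As an illustration only, the minimal Python sketch below evaluates the plane-stress Tsai-Hill failure index (the CAFE tool itself is MATLAB-based; the stress and strength values here are hypothetical and not taken from the work).

```python
def tsai_hill_index(sigma1, sigma2, tau12, X, Y, S):
    """Plane-stress Tsai-Hill failure index for a unidirectional ply.

    sigma1, sigma2 : normal stresses along and across the fibres (MPa)
    tau12          : in-plane shear stress (MPa)
    X, Y, S        : longitudinal, transverse and shear strengths (MPa)
    Ply failure is predicted when the index reaches 1.
    """
    return (sigma1 / X) ** 2 - (sigma1 * sigma2) / X ** 2 \
        + (sigma2 / Y) ** 2 + (tau12 / S) ** 2

# Illustrative (hypothetical) ply stresses and strengths
index = tsai_hill_index(sigma1=400.0, sigma2=20.0, tau12=30.0,
                        X=1000.0, Y=40.0, S=60.0)
print(f"Tsai-Hill index: {index:.3f} -> {'failure' if index >= 1 else 'safe'}")
```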
Abstract:
An analytical method using microwave-assisted extraction (MAE) and liquid chromatography (LC) with fluorescence detection (FD) for the determination of ochratoxin A (OTA) in bread samples is described. A 2⁴ orthogonal composite design coupled with response surface methodology was used to study the influence of the MAE parameters (extraction time, temperature, solvent volume, and stirring speed) in order to maximize OTA recovery. The optimized MAE conditions were the following: 25 mL of acetonitrile, 10 min of extraction, at 80 °C, and maximum stirring speed. Validation of the overall methodology was performed by spiking assays at five levels (0.1–3.00 ng/g). The quantification limit was 0.005 ng/g. The established method was then applied to 64 bread samples (wheat, maize, and wheat/maize bread) collected in the Oporto region (Northern Portugal). OTA was detected in 84 % of the samples, with a maximum value of 2.87 ng/g, below the European maximum limit of 3 ng/g established for OTA in cereal products.
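The optimisation step couples a composite experimental design with response surface methodology (RSM). As a rough illustration of that step only, the Python sketch below fits a second-order response surface to two of the factors by least squares; the temperature, time and recovery values are invented placeholders, not the study's data.

```python
import numpy as np

# Two of the four MAE factors (temperature in °C, extraction time in min) and
# purely illustrative OTA recoveries (%)
temp     = np.array([60, 60, 80, 80, 70, 70, 70], dtype=float)
time_min = np.array([ 5, 15,  5, 15, 10, 10, 10], dtype=float)
recovery = np.array([62, 70, 75, 83, 88, 87, 89], dtype=float)

# Second-order model: b0 + b1*T + b2*t + b3*T*t + b4*T^2 + b5*t^2
X = np.column_stack([np.ones_like(temp), temp, time_min,
                     temp * time_min, temp ** 2, time_min ** 2])
coeffs, *_ = np.linalg.lstsq(X, recovery, rcond=None)
print("fitted response-surface coefficients:", np.round(coeffs, 4))
```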
Abstract:
In this study, the effect of incorporating recycled glass fibre reinforced plastics (GFRP) waste materials, obtained by means of shredding and milling processes, on the mechanical behaviour of polyester polymer mortars (PM) was assessed. For this purpose, different contents of GFRP recyclates, between 4% and 12% by weight, were incorporated into polyester PM materials as sand aggregate and filler replacements. The effect of adding a silane coupling agent to the resin binder was also evaluated. The applied waste material came from the shredding of leftovers resulting from the cutting and assembly processes of GFRP pultrusion profiles. Currently, these leftovers, as well as non-conforming products and scrap resulting from the pultrusion manufacturing process, are landfilled, with additional costs to producers and suppliers. Hence, besides the evident environmental benefits, a viable and feasible solution for these wastes would also lead to significant economic advantages. Design of experiments and data treatment were accomplished by means of a full factorial design approach and analysis of variance (ANOVA). Experimental results were promising with regard to the recyclability of GFRP waste materials as partial replacement of aggregates and reinforcement for PM materials, with significant improvements in the mechanical properties of the resulting mortars relative to waste-free formulations.
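Since the study relies on a full factorial design analysed with ANOVA, the sketch below shows how such an analysis is typically set up in Python with statsmodels; the factor levels and strength values are invented for illustration and do not come from the paper.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical full-factorial results: GFRP waste content (% by weight) crossed with
# the presence of a silane coupling agent; flexural strength (MPa) is the response.
data = pd.DataFrame({
    "waste":    [0, 0, 4, 4, 8, 8, 12, 12] * 2,
    "silane":   ["no"] * 8 + ["yes"] * 8,
    "strength": [21.0, 20.5, 23.1, 22.8, 24.0, 24.4, 23.5, 23.2,
                 21.8, 21.4, 24.2, 24.6, 25.5, 25.9, 24.8, 24.3],
})

# Two-way ANOVA with interaction, as used to judge factor significance
model = smf.ols("strength ~ C(waste) * C(silane)", data=data).fit()
print(sm.stats.anova_lm(model, typ=2))
```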
Abstract:
The problem of uncertainty propagation in composite laminate structures is studied. An approach based on the optimal design of composite structures to achieve a target reliability level is proposed. Using the Uniform Design Method (UDM), a set of design points is generated over a design domain centred at mean values of random variables, aimed at studying the space variability. The most critical Tsai number, the structural reliability index and the sensitivities are obtained for each UDM design point, using the maximum load obtained from optimal design search. Using the UDM design points as input/output patterns, an Artificial Neural Network (ANN) is developed based on supervised evolutionary learning. Finally, using the developed ANN a Monte Carlo simulation procedure is implemented and the variability of the structural response based on global sensitivity analysis (GSA) is studied. The GSA is based on the first order Sobol indices and relative sensitivities. An appropriate GSA algorithm aiming to obtain Sobol indices is proposed. The most important sources of uncertainty are identified.
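The global sensitivity analysis step estimates first-order Sobol indices by Monte Carlo simulation over the trained ANN surrogate. Below is a minimal Python sketch of a pick-freeze first-order Sobol estimator, with a simple analytic function standing in for the ANN; the input distributions and the response function are illustrative assumptions, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

def response(x):
    # Placeholder for the trained ANN surrogate of the structural response
    return x[:, 0] + 0.5 * x[:, 1] ** 2 + 0.1 * x[:, 0] * x[:, 2]

n, d = 20000, 3
A = rng.normal(size=(n, d))      # two independent sample matrices of the random inputs
B = rng.normal(size=(n, d))

yA = response(A)
var_y = yA.var()

for i in range(d):
    Ci = B.copy()
    Ci[:, i] = A[:, i]           # Ci shares only input i with matrix A
    Si = np.cov(yA, response(Ci))[0, 1] / var_y   # first-order Sobol index estimate
    print(f"S_{i + 1} ~ {Si:.3f}")
```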
Abstract:
Our day-to-day life is dependent on several embedded devices, and in the near future many more objects will have computation and communication capabilities, enabling an Internet of Things. Correspondingly, with the increasing interaction of these devices around us, developing novel applications is set to become challenging with current software infrastructures. In this paper, we argue that a new paradigm for operating systems needs to be conceptualized to provide a conducive base for application development on cyber-physical systems. We demonstrate its need and importance using a few use-case scenarios, and provide the design principles behind, and an architecture of, a co-operating system (CoS) that can serve as an example of this new paradigm.
Abstract:
Variations of manufacturing process parameters and environmental aspects may affect the quality and performance of composite materials, which consequently affects their structural behaviour. Reliability-based design optimisation (RBDO) and robust design optimisation (RDO) search for safe structural systems with minimal variability of response when subjected to uncertainties in material design parameters. An approach that simultaneously considers reliability and robustness is proposed in this paper. Depending on a given reliability index imposed on composite structures, a trade-off is established between the performance targets and robustness. Robustness is expressed in terms of the coefficient of variation of the constrained structural response weighted by its nominal value. The Pareto normed front is built and the point nearest to the origin is taken as the best solution of the bi-objective optimisation problem.
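The compromise solution is chosen as the point of the normed Pareto front nearest to the origin. A small Python sketch of that selection step follows; the front values and the column-wise normalisation are illustrative assumptions, since the abstract does not state the exact norm used.

```python
import numpy as np

# Hypothetical non-dominated solutions of the bi-objective problem:
# column 0 -> performance objective, column 1 -> robustness measure
# (coefficient of variation of the constrained response weighted by its nominal value)
pareto = np.array([
    [12.0, 0.30],
    [13.5, 0.22],
    [15.0, 0.17],
    [17.0, 0.14],
    [20.0, 0.12],
])

normed = pareto / np.linalg.norm(pareto, axis=0)   # norm each objective column
distances = np.linalg.norm(normed, axis=1)         # distance of each normed point to the origin
best = pareto[np.argmin(distances)]
print("compromise solution nearest to the origin of the normed front:", best)
```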
Abstract:
Doctoral Thesis in Information Systems and Technologies, Area of Engineering and Management Information Systems
Abstract:
Dynamical systems theory is used in this work as a theoretical language and tool to design a distributed control architecture for a team of three robots that must transport a large object and simultaneously avoid collisions with either static or dynamic obstacles. The robots have no prior knowledge of the environment. The dynamics of behavior is defined over a state space of behavioral variables: heading direction and path velocity. Task constraints are modeled as attractors (i.e. asymptotically stable states) of the behavioral dynamics. For each robot, these attractors are combined into a vector field that governs the behavior. By design, the parameters are tuned so that the behavioral variables are always very close to the corresponding attractors. Thus the behavior of each robot is controlled by a time series of asymptotically stable states. Computer simulations support the validity of the dynamical model architecture.
Abstract:
In this paper, dynamical systems theory is used as a theoretical language and tool to design a distributed control architecture for a team of two robots that must transport a large object and simultaneously avoid collisions with obstacles (either static or dynamic). This work extends previous work with two robots (see [1] and [5]); here, however, we demonstrate that it is possible to simplify the architecture presented in [1] and [5] and reach an equally stable global behavior. The robots have no prior knowledge of the environment. The dynamics of behavior is defined over a state space of behavioral variables: heading direction and path velocity. Task constraints are modeled as attractors (i.e. asymptotically stable states) of the behavioral dynamics. For each robot, these attractors are combined into a vector field that governs the behavior. By design, the parameters are tuned so that the behavioral variables are always very close to the corresponding attractors. Thus the behavior of each robot is controlled by a time series of asymptotically stable states. Computer simulations support the validity of the dynamical model architecture.
Abstract:
Dynamical systems theory is used here as a theoretical language and tool to design a distributed control architecture for a team of two mobile robots that must transport a long object and simultaneously avoid obstacles. In this approach, modeling is carried out at the level of behaviors. A “dynamics” of behavior is defined over a state space of behavioral variables (heading direction and path velocity). The environment is also modeled in these terms, by representing task constraints as attractors (i.e. asymptotically stable states) or repellers (i.e. unstable states) of the behavioral dynamics. For each robot, attractors and repellers are combined into a vector field that governs the behavior. The resulting dynamical systems that generate the behavior of the robots may be nonlinear. By design, the systems are tuned so that the behavioral variables are always very close to one attractor. Thus the behavior of each robot is controlled by a time series of asymptotically stable states. Computer simulations support the validity of our dynamic model architectures.
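The three abstracts above share the same behavioral-dynamics scheme: target headings enter as attractors and obstacle directions as repellers of the heading-direction dynamics. As an illustration only, the Python sketch below integrates one common form of such dynamics for a single robot; the functional forms, gains and interaction ranges are generic textbook choices, not the parameters tuned in these papers.

```python
import numpy as np

def d_phi(phi, psi_target, psi_obstacle,
          lambda_tar=2.0, lambda_obs=4.0, sigma_obs=0.4):
    """Rate of change of the heading direction phi (rad)."""
    attractor = -lambda_tar * np.sin(phi - psi_target)        # pulls phi towards the target
    repeller = (lambda_obs * (phi - psi_obstacle)
                * np.exp(-((phi - psi_obstacle) ** 2) / (2.0 * sigma_obs ** 2)))  # pushes phi away
    return attractor + repeller

phi, dt = 0.0, 0.01
for _ in range(1000):                      # Euler integration of the behavioral dynamics
    phi += dt * d_phi(phi, psi_target=1.2, psi_obstacle=-0.5)
print(f"heading direction relaxes to the target attractor: {phi:.2f} rad")
```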
Abstract:
Structural health monitoring has long been identified as a prominent application of Wireless Sensor Networks (WSNs), as traditional wired solutions present some inherent limitations such as installation/maintenance cost, scalability and visual impact. Nevertheless, there is a lack of ready-to-use, off-the-shelf WSN technologies able to fulfill some of the most demanding requirements of these applications, which can span from critical physical infrastructures (e.g. bridges, tunnels, mines, the energy grid) to historical buildings or even industrial machinery and vehicles. Low-power and low-cost yet extremely sensitive and accurate accelerometers and signal acquisition hardware, and stringent time synchronization of all sensor data, are just examples of the requirements imposed by most of these applications. This paper presents a prototype system for health monitoring of civil engineering structures that has been jointly conceived by a team of civil, electrical and computer engineers. It merges the benefits of standard, commercial off-the-shelf (COTS) hardware and communication technologies with a minimum set of custom-designed signal acquisition hardware that is mandatory to fulfill all application requirements.
Abstract:
In the last twenty years, genetic algorithms (GAs) have been applied in a plethora of fields, such as control, system identification, robotics, planning and scheduling, image processing, and pattern and speech recognition (Bäck et al., 1997). In robotics, the problems of trajectory planning, collision avoidance and manipulator structure design considering a single criterion have been solved using several techniques (Alander, 2003). Most engineering applications require the optimization of several criteria simultaneously. Often the problems are complex, include discrete and continuous variables, and there is no prior knowledge about the search space. These kinds of problems are considerably more complex, since they consider multiple design criteria simultaneously within the optimization procedure. This is known as multi-criteria (or multiobjective) optimization, which has been addressed successfully through GAs (Deb, 2001). The overall aim of multi-criteria evolutionary algorithms is to achieve a set of non-dominated optimal solutions known as the Pareto front. At the end of the optimization procedure, instead of a single optimal (or near optimal) solution, the decision maker can select a solution from the Pareto front. Some of the key issues in multi-criteria GAs are: i) the number of objectives, ii) obtaining a Pareto front as wide as possible and iii) achieving a Pareto front that is uniformly spread. Indeed, multi-objective techniques using GAs have been increasing in relevance as a research area. In 1989, Goldberg suggested the use of a GA to solve multi-objective problems and since then other researchers have been developing new methods, such as the multi-objective genetic algorithm (MOGA) (Fonseca & Fleming, 1995), the non-dominated sorting genetic algorithm (NSGA) (Deb, 2001), and the niched Pareto genetic algorithm (NPGA) (Horn et al., 1994), among several other variants (Coello, 1998). In this work the trajectory planning problem considers: i) robots with 2 and 3 degrees of freedom (dof), ii) the inclusion of obstacles in the workspace and iii) up to five criteria used to qualify the evolving trajectory, namely: the joint traveling distance, joint velocity, end-effector Cartesian distance, end-effector Cartesian velocity and the energy involved. These criteria are used to minimize the joint and end-effector traveled distance, the trajectory ripple and the energy required by the manipulator to reach the destination point. Bearing these ideas in mind, this chapter addresses the planning of robot trajectories, meaning the development of an algorithm to find a continuous motion that takes the manipulator from a given starting configuration to a desired end position without colliding with any obstacle in the workspace. The chapter is organized as follows. Section 2 describes trajectory planning and several approaches proposed in the literature. Section 3 formulates the problem, namely the representation adopted to solve the trajectory planning problem and the objectives considered in the optimization. Section 4 studies the convergence of the algorithm. Section 5 studies a 2R manipulator (i.e., a robot with two rotational joints/links) when the trajectory optimization considers two and five objectives. Sections 6 and 7 present the results for the 3R redundant manipulator with five objectives and for other complementary experiments, respectively. Finally, section 8 draws the main conclusions.
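At the core of the multi-objective GAs cited above is the identification of the non-dominated solutions that form the Pareto front. The short Python sketch below extracts the non-dominated set from a population evaluated on two of the criteria; the cost values are random placeholders, not results from this chapter.

```python
import numpy as np

def pareto_front(costs):
    """Indices of non-dominated rows of `costs` (all objectives to be minimized)."""
    n = costs.shape[0]
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        # i is dominated if some other point is no worse in every objective
        # and strictly better in at least one
        dominated_by = np.all(costs <= costs[i], axis=1) & np.any(costs < costs[i], axis=1)
        if dominated_by.any():
            keep[i] = False
    return np.where(keep)[0]

# Hypothetical population evaluated on two of the five criteria
# (e.g. joint traveling distance and energy); values are random placeholders
rng = np.random.default_rng(1)
population_costs = rng.random((50, 2))
print("non-dominated solutions:", pareto_front(population_costs))
```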
Abstract:
Fractional calculus (FC) is currently being applied in many areas of science and technology. In fact, this mathematical concept helps researchers gain deeper insight into several phenomena that integer-order models overlook. Genetic algorithms (GAs) are an important tool for solving optimization problems that occur in engineering. This methodology applies the concepts that describe biological evolution to obtain optimal solutions in many different applications. In this line of thought, in this work we use the FC and GA concepts to implement the electrical fractional-order potential. The performance of the GA scheme, and the convergence of the resulting approximation, are analyzed. The results are analyzed for different numbers of charges and several fractional orders.
Abstract:
A genetic algorithm used to design radio-frequency binary-weighted differential switched capacitor arrays (RFDSCAs) is presented in this article. The algorithm provides a set of circuits all having the same maximum performance. This article also describes the design, implementation, and measurement results of a 0.25 μm BiCMOS 3-bit RFDSCA. The experimental results show that the circuit presents the expected performance up to 40 GHz. The similarity between the evolutionary solutions, circuit simulations, and measured results indicates that the genetic synthesis method is a very useful tool for designing optimum-performance RFDSCAs.
Abstract:
The increasing complexity of VLSI circuits and the reduced accessibility of modern packaging and mounting technologies restrict the usefulness of conventional in-circuit debugging tools, such as in-circuit emulators for microprocessors and microcontrollers. However, this same trend enables the development of more complex products, which in turn require more powerful debugging tools. These conflicting demands could be met if the standard scan test infrastructures now common in most complex components were able to match the debugging requirements of design verification and prototype validation. This paper analyses the main debug requirements in the design of microprocessor-based applications and the feasibility of their implementation using the mandatory, optional and additional operating modes of the standard IEEE 1149.1 test infrastructure.