974 results for Design variables
Abstract:
It is commonly known that welded structures subjected to fatigue loading fail precisely at the welded joints. Expert design of structures containing full-penetration welded joints, together with modern manufacturing methods, has nearly eliminated fatigue failures in welded structures. Improving fatigue strength through a strict full-penetration requirement is, however, an uneconomical solution. The quality requirements placed on full-penetration welded joints must define clear inspection guidelines and rejection criteria. The purpose of this Master's thesis was to study the effect of geometric variables on the fatigue strength of load-carrying welded joints. Attention was paid mainly to the design variables that influence the initiation of fatigue failures on the root side of the weld. Current codes and standards, which are based on experimental results, give rather general guidance on the fatigue design of welded joints. For this reason, entirely new parametric equations were derived for calculating the allowable nominal threshold stress range, Δσ_th, so that root-side fatigue failures of welded joints can be avoided. In addition, root-side fatigue classes (FAT) were calculated for each joint type and compared with the results obtained with existing design guidelines. Several three-dimensional (3D) analyses were performed as complementary references. Published data based on experimental results were used to help understand the fatigue behavior of welded joints and to determine the material constants. The fatigue strength of load-carrying partial-penetration welded joints was determined using the finite element method. The maximum principal stress criterion was used to predict fracture behavior. For the selected weld material and test conditions, fracture behavior was modeled with the crack growth rate da/dN and the stress intensity factor range ΔK.
The numerical integration of the Paris equation was performed with the FRANC2D/L computer program. From the results obtained, the FAT class can be calculated for the case under study. Δσ_th was calculated from the stress intensity factor range of the initial crack and the threshold stress intensity factor range, ΔK_th; at ranges below ΔK_th the crack does not grow. The analyses assumed an as-welded joint with no post-weld treatment and a pre-existing initial crack at the weld root. The results of the analyses are useful to designers making decisions about the geometric parameters that affect the fatigue strength of welded joints.
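The crack-growth calculation described here can be sketched numerically. The function below integrates the Paris law da/dN = C·ΔK^m between an initial and a final crack size; the constants C and m, the geometry factor Y, and the crack sizes are illustrative placeholders, not the values used in the thesis.

```python
import math

def paris_cycles(a0, af, C, m, delta_sigma, Y=1.0, steps=20000):
    """Integrate the Paris law da/dN = C * (dK)**m from crack size a0 to af
    (sizes in m, stresses in MPa), with dK = Y * delta_sigma * sqrt(pi * a).
    Returns the estimated number of load cycles."""
    N = 0.0
    da = (af - a0) / steps
    a = a0
    for _ in range(steps):
        # midpoint rule for the increment's stress intensity factor range
        dK = Y * delta_sigma * math.sqrt(math.pi * (a + 0.5 * da))
        N += da / (C * dK ** m)
        a += da
    return N

# illustrative constants for a structural steel, NOT the thesis's values:
cycles = paris_cycles(a0=0.2e-3, af=10e-3, C=3.0e-13, m=3.0, delta_sigma=100.0)
```

Because N scales with Δσ^(-m), doubling the stress range with m = 3 divides the predicted life by exactly eight, which is a convenient sanity check on the integration.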
Abstract:
We present a polarimetric optical encoder for image encryption and verification. A system for generating random polarized vector keys, based on a Mach-Zehnder configuration combined with translucent liquid crystal displays in each path of the interferometer, is developed. Polarization information of the encrypted signal is retrieved by taking advantage of the information provided by the Stokes parameters. Moreover, a photon-counting model is used in the encryption process, which provides data sparseness and a nonlinear transformation that enhance security. An authorized user with access to the polarization keys and the optical design variables can retrieve and validate the photon-counting plaintext. Optical experimental results demonstrate the feasibility of the encryption method.
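The photon-counting step can be illustrated with a toy model. Under the common assumption that the expected photon number at each pixel is proportional to its normalized irradiance and the counts are Poisson-distributed, a limited photon budget yields a sparse, nonlinearly transformed version of the image; the function names and the tiny test image below are hypothetical:

```python
import math, random

def poisson_sample(lam, rng):
    """Knuth's inversion method; adequate for the small means used here."""
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        k += 1
        p *= rng.random()
        if p <= L:
            return k - 1

def photon_count(intensity, n_photons, seed=0):
    """Photon-limited version of an irradiance map: the expected photons per
    pixel are proportional to the normalized irradiance; counts are Poisson."""
    rng = random.Random(seed)
    total = sum(intensity)
    return [poisson_sample(n_photons * I / total, rng) for I in intensity]

# a 5-pixel 'image' reduced to a 6-photon budget:
sparse = photon_count([0.0, 1.0, 4.0, 2.0, 1.0], n_photons=6)
```

Pixels with zero irradiance always record zero photons, and most counts are 0 or 1, which is the data sparseness the abstract refers to.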
Abstract:
The present study was carried out to establish the optimal conditions for extracting ochratoxin A (OTA) and citrinin (CIT) from rice using the QuEChERS method. Employing a factorial experimental design, the variables that significantly influenced the extraction stages were determined. The following variables were analyzed: addition of water, acidification of acetonitrile with glacial acetic acid, and the amounts of magnesium sulfate, sodium acetate, sodium citrate and diatomaceous earth. The best procedure resulted in a predictive model using more water and less diatomaceous earth. Recoveries of CIT and OTA were 78-105%.
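The factorial-screening idea can be sketched in a few lines. For a two-level full factorial design, a factor's main effect is the difference between the mean response at its high level and its low level; the run order and toy responses below are illustrative, not the study's recovery data:

```python
from itertools import product

def main_effects(n_factors, response):
    """Main effects for a two-level full factorial design, with runs ordered
    as itertools.product([-1, 1], repeat=n_factors): the effect of factor j is
    mean(response | x_j = +1) - mean(response | x_j = -1)."""
    runs = list(product([-1, 1], repeat=n_factors))
    effects = []
    for j in range(n_factors):
        hi = [y for x, y in zip(runs, response) if x[j] == 1]
        lo = [y for x, y in zip(runs, response) if x[j] == -1]
        effects.append(sum(hi) / len(hi) - sum(lo) / len(lo))
    return effects

# toy 2^2 design with responses generated from y = 10 + 2*x1 + 1*x2:
effects = main_effects(2, [7, 9, 11, 13])
```

For the toy data the estimated effects are twice the underlying regression coefficients (4 and 2), since the factors move from -1 to +1.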
Abstract:
This work proposes a computational methodology to solve structural design optimization problems. The application develops, implements and integrates methods for structural analysis, geometric modeling, design sensitivity analysis and optimization. The optimum design problem is particularized to the plane stress case, with the objective of minimizing the structural mass subject to a stress criterion. These constraints must be evaluated at a series of discrete points, whose distribution should be dense enough to minimize the chance of any significant constraint violation between the specified points. The local stress constraints are therefore transformed into a global stress measure, reducing the computational cost of deriving the optimal shape design. The problem is approximated by the Finite Element Method using six-node Lagrangian triangular elements, with automatic mesh generation governed by a geometric element-quality criterion. In the geometric modeling, the contour is defined by parametric B-spline curves, whose characteristics are well suited to the shape optimization method, which uses the key points of the curves as the design variables of the minimization problem. A reliable tool for design sensitivity analysis is a prerequisite for performing interactive structural design, synthesis and optimization; general expressions for design sensitivity analysis are derived with respect to the key points of the B-splines, using the adjoint approach and the analytical method. The formulation of the optimization problem applies the Augmented Lagrangian Method, which converts a constrained optimization problem into an unconstrained one, and the minimization of the Augmented Lagrangian function relies on the sensitivity analysis.
The optimization problem therefore reduces to the solution of a sequence of problems with bound constraints, which is solved by a memoryless quasi-Newton method. Several examples demonstrate that this new approach to analytical design sensitivity analysis, integrated with shape design optimization under a global stress criterion, is computationally efficient.
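The Augmented Lagrangian update loop can be sketched on a toy problem: an objective, one equality constraint, an inner unconstrained minimization (here plain gradient descent rather than a quasi-Newton method), and a multiplier update between outer iterations. The penalty parameter, step size and iteration counts are arbitrary choices for illustration, not the values used in the work:

```python
def augmented_lagrangian(f_grad, g, g_grad, x, mu=10.0, outer=20, inner=200, lr=0.01):
    """Augmented Lagrangian for one equality constraint g(x) = 0:
    minimize f(x) + lam*g(x) + (mu/2)*g(x)**2 in x, then update lam += mu*g(x)."""
    lam = 0.0
    for _ in range(outer):
        for _ in range(inner):                      # inner unconstrained solve
            gx = g(x)
            grad = [fg + (lam + mu * gx) * gg
                    for fg, gg in zip(f_grad(x), g_grad(x))]
            x = [xi - lr * gi for xi, gi in zip(x, grad)]
        lam += mu * g(x)                            # multiplier update
    return x

# toy problem: min x1^2 + x2^2  s.t.  x1 + x2 = 1  ->  optimum (0.5, 0.5)
x = augmented_lagrangian(
    f_grad=lambda x: [2.0 * x[0], 2.0 * x[1]],
    g=lambda x: x[0] + x[1] - 1.0,
    g_grad=lambda x: [1.0, 1.0],
    x=[0.0, 0.0])
```

Unlike a pure penalty method, the multiplier update lets the iterates satisfy the constraint without driving mu to infinity.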
Abstract:
With the growth and development of modern society arises the need to search for new raw materials and new technologies that are "clean", do not harm the environment, and can still meet the energy needs of industry and transportation. Moringa oleifera Lam, a plant originating in India and currently present in the Brazilian Northeast, is a multi-purpose plant: it can be used as a coagulant in water treatment, as a natural remedy, and as a feedstock for biodiesel production. In this work, Moringa was used as a raw material for extraction studies and, subsequently, for the synthesis of biodiesel. Several Moringa oil extraction techniques were studied (solvent, mechanical pressing and enzymatic); in particular, an experimental design was developed for the aqueous extraction with the aid of the enzyme Neutrase© 0.8 L, with the aim of analyzing the influence of the variables pH (5.5-7.5), temperature (45-55°C), time (16-24 hours) and amount of catalyst (2-5%) on the extraction yield. For the synthesis of biodiesel, a conventional transesterification (50°C, KOH as catalyst, methanol, 60 minutes of reaction) was carried out first. Next, a study was conducted using the in situ transesterification technique, with an experimental design over the variables temperature (30-60°C), catalyst amount (2-5%) and oil/ethanol molar ratio (1:420-1:600). The extraction technique that achieved the highest yield (35%) was the one using hexane as solvent. Extraction with ethanol yielded 32%, and mechanical pressing reached a 25% yield. For the enzymatic extraction, the experimental design indicated that the yield was most affected by the combined effect of temperature and time; the maximum yield obtained in this extraction was 16%.
After the oil was obtained, biodiesel was synthesized by the conventional method and by the in situ technique. The conventional transesterification gave an ester content of 100%, and the in situ technique also reached 100% at experimental point 7, with an oil/alcohol molar ratio of 1:420, a temperature of 60°C and 5 wt% KOH, with a reaction time of 1.5 h. The experimental design showed that the variable with the greatest influence on the ester content was the percentage of catalyst. Physico-chemical analysis showed that the biodiesel produced by the in situ method met the ANP specifications; the technique is therefore feasible, because it does not require the preliminary oil extraction stage and achieves high ester contents.
Abstract:
The advantages offered by the electronic component LED (Light Emitting Diode) have led to a quick and wide adoption of this device as a replacement for incandescent lights. However, in its combined application, the relationship between the design variables and the desired effect or result is very complex, and it becomes difficult to model with conventional techniques. This work develops a technique, through comparative analysis of neuro-fuzzy architectures, that makes it possible to obtain the luminous intensity values of LED brake lights from design data.
Abstract:
The advantages offered by the electronic component light emitting diode (LED) have led to a quick and wide adoption of this device as a replacement for incandescent lights. However, in its combined application, the relationship between the design variables and the desired effect or result is very complex, and it becomes difficult to model with conventional techniques. This work develops a technique, through artificial neural networks, that makes it possible to obtain the luminous intensity values of LED brake lights from design data. (C) 2005 Elsevier B.V. All rights reserved.
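As a sketch of the modeling idea (not the authors' network), a small feed-forward network with one tanh hidden layer can be trained by gradient descent on hypothetical (design variables → luminous intensity) pairs; the data, architecture and learning rate below are all illustrative:

```python
import math, random

def train_mlp(data, hidden=8, epochs=4000, lr=0.05, seed=1):
    """Train a one-hidden-layer tanh network on (inputs, target) pairs with
    plain stochastic gradient descent; returns a predict(x) closure."""
    rng = random.Random(seed)
    n_in = len(data[0][0])
    W1 = [[rng.uniform(-0.5, 0.5) for _ in range(n_in)] for _ in range(hidden)]
    b1 = [0.0] * hidden
    W2 = [rng.uniform(-0.5, 0.5) for _ in range(hidden)]
    b2 = 0.0

    def forward(x):
        h = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
             for row, b in zip(W1, b1)]
        return h, sum(w * hi for w, hi in zip(W2, h)) + b2

    for _ in range(epochs):
        for x, y in data:
            h, out = forward(x)
            err = out - y                       # gradient of 0.5 * err**2
            for j in range(hidden):
                dh = err * W2[j] * (1.0 - h[j] ** 2)
                W2[j] -= lr * err * h[j]
                for i in range(n_in):
                    W1[j][i] -= lr * dh * x[i]
                b1[j] -= lr * dh
            b2 -= lr * err
    return lambda x: forward(x)[1]

# hypothetical pairs: (two design variables) -> luminous intensity
pairs = [([0.0, 0.0], 0.0), ([0.0, 1.0], 1.0), ([1.0, 0.0], 1.0), ([1.0, 1.0], 2.0)]
predict = train_mlp(pairs)
```

Once trained, the network plays the role the abstract describes: it maps design data directly to an intensity estimate without an explicit physical model.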
Abstract:
Objective: To test the hypothesis that red blood cell (RBC) transfusions in preterm infants are associated with increased intra-hospital mortality.
Study design: Variables associated with death were studied with Cox regression analysis in a prospective cohort of preterm infants with birth weight <1500 g in the Brazilian Network on Neonatal Research. Intra-hospital death and death after 28 days of life were analyzed as dependent variables. The independent variables were infant demographic and clinical characteristics and RBC transfusions.
Results: Of 1077 infants, 574 (53.3%) received at least one RBC transfusion during the hospital stay. The mean number of transfusions per infant was 3.3 +/- 3.4, with 2.1 +/- 2.1 in the first 28 days of life. Intra-hospital death occurred in 299 neonates (27.8%), and 60 infants (5.6%) died after 28 days of life. After adjusting for confounders, the relative risk of death during the hospital stay was 1.49 in infants who received at least one RBC transfusion in the first 28 days of life, compared with infants who did not receive a transfusion. The risk of death after 28 days of life was 1.89 times higher in infants who received more than two RBC transfusions during their hospital stay, compared with infants who received one or two transfusions.
Conclusion: Transfusion was associated with increased death, and transfusion guidelines should consider the risks and benefits of transfusion. (J Pediatr 2011; 159: 371-6)
Abstract:
The advantages offered by the electronic component LED (Light Emitting Diode) have led to a quick and wide adoption of this device as a replacement for incandescent lights. However, in its combined application, the relationship between the design variables and the desired effect or result is very complex, and it becomes difficult to model with conventional techniques. This work develops a technique, through artificial neural networks, that makes it possible to obtain the luminous intensity values of LED brake lights from design data.
Abstract:
This work addresses the treatment of the lower-density regions of structures undergoing large deformations during design by the topology optimization method (TOM) based on the finite element method. During the design process, the nonlinear elastic behavior of the structure is based on exact kinematics. The material model applied in the TOM is based on the solid isotropic microstructure with penalization (SIMP) approach. No void elements are deleted, and all internal forces at the nodes surrounding the void elements are considered during the nonlinear equilibrium solution. The distribution of design variables is solved through the method of moving asymptotes, in which the sensitivity of the objective function is obtained directly. In addition, a continuation function and a nonlinear projection function are invoked to obtain a checkerboard-free and mesh-independent design. 2D examples under both the plane strain and plane stress hypotheses are presented and compared. The instability problem is overcome by adopting a polyconvex constitutive model in conjunction with a suggested relaxation function that stabilizes excessively distorted elements. The exact tangent stiffness matrix is used. The optimal topology results are compared with those obtained using the classical Saint Venant–Kirchhoff constitutive law, and strong differences are found.
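Two of the ingredients named above can be written down compactly: the SIMP stiffness interpolation and a smoothed Heaviside projection whose sharpness parameter β is raised by the continuation scheme. The functional forms below are the standard ones from the topology-optimization literature, shown as an illustration rather than as this paper's exact formulation:

```python
import math

def simp_young(rho, E0=1.0, Emin=1e-9, p=3.0):
    """SIMP interpolation E(rho) = Emin + rho**p * (E0 - Emin): the power p
    penalizes intermediate densities, pushing the design toward 0/1, while
    Emin keeps void elements (which are never deleted) numerically stable."""
    return Emin + rho ** p * (E0 - Emin)

def heaviside_projection(rho, beta, eta=0.5):
    """Smoothed Heaviside projection of a filtered density; raising beta
    during continuation sharpens the 0/1 separation of the design."""
    num = math.tanh(beta * eta) + math.tanh(beta * (rho - eta))
    den = math.tanh(beta * eta) + math.tanh(beta * (1.0 - eta))
    return num / den
```

At rho = 0.5 the SIMP stiffness is only one-eighth of E0 (for p = 3), which is why intermediate densities become uneconomical for the optimizer, and for large beta the projection maps 0.2 essentially to 0 and 0.8 essentially to 1.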
Abstract:
This paper proposes the incorporation of engineering knowledge through both (a) advanced state-of-the-art preference-handling decision-making tools integrated in multiobjective evolutionary algorithms and (b) engineering-knowledge-based variance-reduction simulation, as enhancing tools for the robust optimum design of structural frames that takes uncertainty in the design variables into consideration. The simultaneous minimization of the constrained weight (adding the structural weight and the average distribution of constraint violations) on the one hand, and of the standard deviation of the distribution of constraint violations on the other, is handled with multiobjective optimization-based evolutionary computation in two different multiobjective algorithms. The optimum design values of the deterministic structural problem in question are proposed as a reference point (the aspiration level) in reference-point-based evolutionary multiobjective algorithms (here g-dominance is used). Results including
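The g-dominance scheme referenced above can be sketched as follows, following Molina et al.'s definition as commonly stated: objective vectors that are componentwise better or componentwise worse than the reference point g get flag 1, all others get flag 0; a higher flag wins outright, and only between equal flags does ordinary Pareto dominance decide. The helper names are mine:

```python
def g_flag(f, ref):
    """Flag = 1 when the objective vector is componentwise <= or componentwise
    >= the reference point, 0 otherwise (minimization assumed)."""
    if all(fi <= ri for fi, ri in zip(f, ref)):
        return 1
    if all(fi >= ri for fi, ri in zip(f, ref)):
        return 1
    return 0

def g_dominates(f1, f2, ref):
    """f1 g-dominates f2: a higher flag wins; with equal flags, ordinary
    Pareto dominance decides."""
    fl1, fl2 = g_flag(f1, ref), g_flag(f2, ref)
    if fl1 != fl2:
        return fl1 > fl2
    return (all(a <= b for a, b in zip(f1, f2))
            and any(a < b for a, b in zip(f1, f2)))
```

The effect is that the search concentrates on the region of objective space singled out by the reference point (here, the deterministic optimum) without requiring weights or scalarization.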
Abstract:
The aim of this Doctoral Thesis is to develop genetic-algorithm-based optimization methods to find the best conceptual design architecture of an aero piston engine for given design specifications. Nowadays, the conceptual design of turbine airplanes starts with the aircraft specifications, and then the turbofan or turboprop best suited to the specific application is chosen. In the field of aeronautical piston engines, which has been dormant for several decades as interest shifted towards turbine aircraft, new materials with increased performance and properties have opened new possibilities for development. Moreover, the engine's modularity, given by the cylinder unit, makes it possible to design a specific engine for a given application. In many real engineering problems the number of design variables may be very high, with several non-linearities needed to describe the behaviour of the phenomena. In this case the objective function has many local extremes, but the designer is usually interested in the global one. Stochastic and evolutionary optimization techniques, such as genetic algorithms, can offer reliable solutions to such design problems within acceptable computational time. The optimization algorithm developed here can be employed in the first phase of the preliminary design of an aeronautical piston engine. It is a single-objective genetic algorithm which, starting from the given design specifications, finds the engine propulsive system configuration of minimum mass that satisfies the geometrical, structural and performance constraints. The algorithm reads the project specifications as input data, namely the maximum crankshaft and propeller shaft speeds and the maximum pressure in the combustion chamber. The bounds of the design variables, which describe the solution domain from the geometrical point of view, are introduced as well.
In the Matlab® Optimization environment, the objective function to be minimized is defined as the sum of the masses of the engine propulsive components. Each individual generated by the genetic algorithm is the assembly of the flywheel, the vibration damper, and as many pistons, connecting rods and cranks as there are cylinders. The fitness is evaluated for each individual of the population, and then the genetic operators are applied: reproduction, mutation, selection and crossover. In the reproduction step an elitist method is applied, in order to protect the fittest individuals from disruption by mutation and recombination, letting them survive undamaged into the next generation. Finally, once the best individual is found, the optimal dimensions of the components are saved to an Excel® file, in order to build an automatic 3D CAD model of each component of the propulsive system, giving a direct preview of the final product while still in the engine's preliminary design phase. To show the performance of the algorithm and validate the optimization method, an actual engine is taken as a case study: the 1900 JTD Fiat Avio, a 4-cylinder, 4-stroke Diesel. Many checks are made on the mechanical components of the engine, in order to test their feasibility and decide their survival through the generations. A system of inequalities describes the non-linear relations between the design variables and is used to check the components under static and dynamic load configurations. The geometrical bounds of the design variables are taken from actual engine data and similar design cases. Among the many simulations run to test the algorithm, twelve were chosen as representative of the distribution of the individuals. Then, as an example, the corresponding 3D models of the crankshaft and the connecting rod were built automatically for each simulation.
In spite of morphological differences among the components, the mass is almost the same. The results show a significant mass reduction (almost 20% for the crankshaft) in comparison with the original configuration, and an acceptable robustness of the method has been shown. The algorithm developed here proves to be a valid method for the preliminary design optimization of an aeronautical piston engine. In particular, the procedure is able to analyze quite a wide range of design solutions, rejecting those that cannot fulfill the feasibility design specifications. This optimization algorithm could boost aeronautical piston engine development, speeding up the production rate and joining modern computational performance and technological awareness to long-standing traditional design experience.
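The overall loop of such a mass-minimizing genetic algorithm can be sketched generically: tournament selection, blend crossover, Gaussian mutation, elitism, and a penalized objective. The toy "mass" function and its single geometric constraint below stand in for the thesis's component masses and inequality checks; all operators and parameters are illustrative:

```python
import random

def genetic_minimize(fitness, bounds, pop_size=40, generations=60,
                     elite=2, mut_rate=0.2, seed=3):
    """Single-objective real-coded GA: tournament selection, blend crossover,
    Gaussian mutation, and elitism (the fittest individuals pass unchanged)."""
    rng = random.Random(seed)
    dim = len(bounds)

    def clip(x):
        return [min(max(v, lo), hi) for v, (lo, hi) in zip(x, bounds)]

    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness)
        new = [ind[:] for ind in scored[:elite]]          # elitism
        while len(new) < pop_size:
            a = min(rng.sample(scored, 3), key=fitness)   # tournament selection
            b = min(rng.sample(scored, 3), key=fitness)
            alpha = rng.random()                          # blend crossover
            child = [alpha * u + (1.0 - alpha) * v for u, v in zip(a, b)]
            if rng.random() < mut_rate:                   # Gaussian mutation
                j = rng.randrange(dim)
                child[j] += rng.gauss(0.0, 0.1 * (bounds[j][1] - bounds[j][0]))
            new.append(clip(child))
        pop = new
    return min(pop, key=fitness)

def mass(x):
    """Toy objective: 'mass' plus a quadratic penalty for violating a
    hypothetical geometric constraint x0 * x1 >= 1."""
    penalty = max(0.0, 1.0 - x[0] * x[1]) ** 2
    return x[0] + 2.0 * x[1] + 1e3 * penalty

best = genetic_minimize(mass, bounds=[(0.1, 5.0), (0.1, 5.0)])
```

For this toy problem the constrained optimum is x = (sqrt(2), 1/sqrt(2)) with mass ≈ 2.83, so the GA's result can be checked against a known answer; infeasible candidates survive only transiently because the penalty dominates their fitness.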
Abstract:
The problem of optimal design of multi-gravity-assist space trajectories with a free number of deep space maneuvers (MGADSM) poses multi-modal cost functions. In the general form of the problem, the number of design variables is solution dependent. To handle global optimization problems in which the number of design variables varies from one solution to another, two novel genetic-based techniques are introduced: the hidden genes genetic algorithm (HGGA) and the dynamic-size multiple population genetic algorithm (DSMPGA). In HGGA, a fixed length for the design variables is assigned to all solutions. The independent variables of each solution are divided into effective and ineffective (hidden) genes. Hidden genes are excluded from cost function evaluations, while full-length solutions undergo standard genetic operations. In DSMPGA, sub-populations of fixed-size design spaces are randomly initialized, and standard genetic operations are carried out for a stage of generations. A new population is then created by reproduction from all members based on their relative fitness. The resulting sub-populations have sizes different from their initial sizes, and the process repeats, increasing the size of the sub-populations of fitter solutions. Both techniques are applied to several MGADSM problems. They are able to determine the number of swing-bys, the planets to swing by, the launch and arrival dates, and the number of deep space maneuvers, as well as their locations, magnitudes and directions, in an optimal sense. The results show that solutions obtained with the developed tools match known solutions for complex case studies. The HGGA is also used to obtain the asteroid sequence and the mission structure in the Global Trajectory Optimization Competition (GTOC) problem. As an application of GA optimization to Earth orbits, the problem of visiting a set of ground sites within a constrained time frame is solved.
The J2 perturbation and zonal coverage are considered to design repeated Sun-synchronous orbits. Finally, a new set of orbits, the repeated shadow track orbits (RSTO), is introduced. The orbit parameters are optimized such that the shadow of a spacecraft on the Earth visits the same locations periodically every desired number of days.
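The hidden-genes idea can be illustrated with a minimal decoder: every chromosome reserves room for the maximum number of deep-space maneuvers, one gene states how many are active, and the cost function simply ignores the rest. The encoding below is a schematic stand-in for the actual MGADSM chromosome:

```python
def decode(chromosome, max_dsm):
    """First gene = number of active DSMs; the next max_dsm genes each hold a
    schematic maneuver magnitude. Genes beyond the active count are 'hidden':
    they are carried and recombined by the GA but never evaluated."""
    n_active = chromosome[0]
    genes = chromosome[1:1 + max_dsm]
    return genes[:n_active], genes[n_active:]

def cost(chromosome, max_dsm):
    """Hidden genes are excluded from the cost function evaluation."""
    effective, _hidden = decode(chromosome, max_dsm)
    return sum(abs(g) for g in effective)

# two chromosomes that differ only in their hidden genes cost the same:
c1 = [2, 0.3, -0.5, 9.0, 9.0]
c2 = [2, 0.3, -0.5, -7.0, 1.0]
```

This is what lets a fixed-length population represent solutions with different numbers of maneuvers while still undergoing standard crossover and mutation.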
Abstract:
With the insatiable curiosity of human beings to explore the universe and our solar system, it is essential to benefit from larger propulsion capabilities to execute efficient transfers and carry more scientific equipment. In the field of space trajectory optimization, the fundamental advances in using low-thrust propulsion and exploiting multi-body dynamics have played a pivotal role in designing efficient space mission trajectories. The former provides a larger cumulative momentum change than conventional chemical propulsion, whereas the latter yields almost ballistic trajectories requiring a negligible amount of propellant. However, the problem of space trajectory design translates into an optimal control problem which is, in general, time-consuming and very difficult to solve. The goal of this thesis is therefore to address this problem by developing a methodology that simplifies and facilitates the process of finding initial low-thrust trajectories in both two-body and multi-body environments. This initial solution not only provides mission designers with a better understanding of the problem and its solution, but also serves as a good initial guess for high-fidelity optimal control solvers, increasing their convergence rate. Almost all high-fidelity solvers benefit from an initial guess that already satisfies the equations of motion and some of the most important constraints. Despite the nonlinear nature of the problem, a robust technique is sought for a wide range of typical low-thrust transfers with reduced computational intensity. Another important aspect of the developed methodology is the representation of low-thrust trajectories by Fourier series, which reduces the number of design variables significantly. Emphasis is placed on simplifying the equations of motion to the extent possible and on avoiding approximation of the controls. These facts contribute to speeding up the solution-finding procedure.
Several example applications of two- and three-dimensional two-body low-thrust transfers are considered. In addition, in multi-body dynamics, and in particular the restricted three-body problem, several Earth-to-Moon low-thrust transfers are investigated.
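The Fourier-series representation that shrinks the design vector can be sketched as follows: instead of optimizing a state history at hundreds of nodes, one optimizes the 2K + 1 coefficients of a truncated series. The scalar shape function below (period normalized to 1) is an illustrative stand-in for the thesis's actual trajectory parameterization:

```python
import math

def fourier_shape(a0, a, b, t):
    """r(t) = a0 + sum_k [a_k cos(2*pi*k*t) + b_k sin(2*pi*k*t)] with
    K = len(a) harmonics: the whole time history is controlled by only
    2K + 1 design variables."""
    w = 2.0 * math.pi
    return a0 + sum(ak * math.cos(w * (k + 1) * t) + bk * math.sin(w * (k + 1) * t)
                    for k, (ak, bk) in enumerate(zip(a, b)))

# a 200-node trajectory history generated from only 5 design variables:
history = [fourier_shape(1.0, [0.2, 0.05], [0.1, 0.0], i / 200) for i in range(200)]
```

An optimizer then searches over the coefficient vector (here of length 5) rather than over the 200 sampled values, which is the reduction in design variables the abstract describes.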
Abstract:
In the context of expensive numerical experiments, a promising way to alleviate the computational cost is to use partially converged simulations instead of exact solutions. The gain in computational time comes at the price of precision in the response. This work addresses the issue of fitting a Gaussian process model to partially converged simulation data for further use in prediction. The main challenge is the adequate approximation of the error due to partial convergence, which is correlated in both the design-variable and time directions. Here, we propose fitting a Gaussian process in the joint space of design parameters and computational time. The model is constructed by building a nonstationary covariance kernel that accurately reflects the actual structure of the error. Practical solutions are proposed for the parameter estimation issues associated with the proposed model. The method is applied to a computational fluid dynamics test case and shows significant improvement in prediction compared with a classical kriging model.
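The joint-space idea can be sketched with ordinary GP regression over (design variable, computational time) pairs. For brevity the sketch uses a stationary squared-exponential kernel, whereas the paper's contribution is precisely a nonstationary kernel shaped by the convergence error; the kernel, lengthscales and data below are all illustrative:

```python
import math

def kernel(p, q, theta_x=1.0, theta_t=1.0):
    """Separable squared-exponential covariance over the joint
    (design variable, computational time) space."""
    dx, dt = p[0] - q[0], p[1] - q[1]
    return math.exp(-0.5 * (dx / theta_x) ** 2 - 0.5 * (dt / theta_t) ** 2)

def gp_predict(X, y, x_star, noise=1e-8):
    """GP mean prediction k_*^T (K + noise*I)^{-1} y, solving the linear
    system by Gaussian elimination with partial pivoting."""
    n = len(X)
    K = [[kernel(X[i], X[j]) + (noise if i == j else 0.0) for j in range(n)]
         for i in range(n)]
    A = [row[:] + [yi] for row, yi in zip(K, y)]      # augmented system
    for i in range(n):
        piv = max(range(i, n), key=lambda r: abs(A[r][i]))
        A[i], A[piv] = A[piv], A[i]
        for r in range(i + 1, n):
            f = A[r][i] / A[i][i]
            for c in range(i, n + 1):
                A[r][c] -= f * A[i][c]
    alpha = [0.0] * n
    for i in range(n - 1, -1, -1):
        alpha[i] = (A[i][n] - sum(A[i][c] * alpha[c]
                                  for c in range(i + 1, n))) / A[i][i]
    return sum(kernel(x, x_star) * ai for x, ai in zip(X, alpha))

# three (design variable, computational time) training observations:
X = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
y = [1.0, 2.0, 3.0]
mean = gp_predict(X, y, (0.0, 0.0))
```

With near-zero noise the predictor interpolates the training data, and correlation in the time direction is what lets cheap, partially converged runs inform the prediction of the converged response.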