17 results for H150 Engineering Design
in the Biblioteca Digital da Produção Intelectual da Universidade de São Paulo
Abstract:
This paper proposes two new approaches for the sensitivity analysis of multiobjective design optimization problems whose performance functions are highly susceptible to small variations in the design variables and/or design environment parameters. In both methods, less sensitive design alternatives are preferred over others during the multiobjective optimization process. In the first approach, the designer chooses the design variables and/or parameters that cause uncertainties, associates a robustness index with each design alternative, and adds each index as an objective function in the optimization problem. In the second approach, the designer must know, a priori, the interval of variation in the design variables or in the design environment parameters, and accepts the resulting interval of variation in the objective functions. The second method does not require any probability distribution for the uncontrollable variations. Finally, the authors give two illustrative examples to highlight the contributions of the paper.
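The abstract does not give the paper's exact definition of the robustness index, but the idea of scoring a design by how much its performance changes under small perturbations can be sketched with a simple finite-difference proxy (the function `robustness_index` and the perturbation sizes are illustrative assumptions, not the authors' formulation):

```python
import numpy as np

def robustness_index(f, x, deltas):
    """Illustrative sensitivity score for design x: the largest change in
    the performance function f under small +/- perturbations of each
    design variable. Smaller values indicate a more robust design."""
    f0 = f(x)
    worst = 0.0
    for i, d in enumerate(deltas):
        xp = x.copy(); xp[i] += d
        xm = x.copy(); xm[i] -= d
        worst = max(worst, abs(f(xp) - f0), abs(f(xm) - f0))
    return worst

# Example: f(x) = x0^2 is flatter near 0 than near 2, so the design at
# x0 = 0.1 gets a smaller (better) robustness index than the one at 2.0.
f = lambda x: x[0] ** 2
i_flat = robustness_index(f, np.array([0.1]), [0.01])
i_steep = robustness_index(f, np.array([2.0]), [0.01])
assert i_flat < i_steep
```

In the paper's first approach, such an index would then be appended as an extra objective, so the multiobjective optimizer trades performance against sensitivity.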
Abstract:
Many engineering sectors are challenged by multi-objective optimization problems. Even though the idea behind these problems is simple and well established, implementing a procedure to solve them is not a trivial task. The use of evolutionary algorithms to find candidate solutions is widespread; usually, they supply a discrete picture of the non-dominated solutions, the Pareto set. Although knowing the non-dominated solutions is very useful, an additional criterion is needed to select the one solution to be deployed. To better support the design process, this paper presents a new method of solving non-linear multi-objective optimization problems by adding a control function that guides the optimization process over a Pareto set that never needs to be found explicitly. The proposed methodology differs from classical methods that combine the objective functions into a single scalar, and is based on a single run of non-linear single-objective optimizers.
Abstract:
This paper presents an analysis of the capacity of design-centric methodologies to prepare engineering students to succeed in the market. Gaps are brainstormed and analyzed with reference to their importance. Reasons that may lead newly graduated engineers not to succeed right from the beginning of their professional lives are also evaluated. A comparison between the two subjects above was prepared, reviewed, and analyzed. The influence of the multidisciplinary, multicultural, and complex environment created in the current global business era is taken into account. Industry requirements, in terms of what companies expect to 'receive' from their engineers, are evaluated and compared to the rest of the study. An innovative approach to current engineering education that utilizes traditional design-centric methodologies is then proposed, aggregating new disciplines to supplement traditional engineering education. The solution encompasses the inclusion of disciplines from the Human Sciences and Emotional Intelligence fields, intended to better prepare the engineer of tomorrow to work in a multidisciplinary, globalized, complex, and team-working environment. A pilot implementation of such an approach is reviewed, and conclusions are drawn from this educational project.
Abstract:
The design of a network is a solution to several engineering and science problems. Several network design problems are known to be NP-hard, and population-based metaheuristics such as evolutionary algorithms (EAs) have been widely investigated for such problems. These optimization methods simultaneously generate a large number of potential solutions to explore the search space in breadth and, consequently, to avoid local optima. Obtaining a potential solution usually involves the construction and maintenance of several spanning trees or, more generally, spanning forests. To efficiently explore the search space, special data structures have been developed to provide operations that manipulate a set of spanning trees (a population). For a tree with n nodes, the most efficient data structures available in the literature require O(n) time to generate a new spanning tree that modifies an existing one and to store the new solution. We propose a new data structure, called the node-depth-degree representation (NDDR), and we demonstrate that, using this encoding, generating a new spanning forest requires O(√n) time on average. Experiments with an EA based on the NDDR applied to large-scale instances of the degree-constrained minimum spanning tree problem have shown that the implementation adds only small constants and lower-order terms to the theoretical bound.
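The abstract does not detail the NDDR itself (which also tracks node degrees), but the node-depth idea it builds on can be sketched: store each tree as a preorder list of (node, depth) pairs, so that any subtree occupies a contiguous slice and cut-and-graft moves become slice operations. The tree and function names below are illustrative:

```python
# A node-depth list stores a tree as a preorder sequence of (node, depth)
# pairs; any subtree occupies a contiguous slice, so pruning and grafting
# reduce to array-slice operations.
tree = [('a', 0), ('b', 1), ('c', 2), ('d', 1)]  # a root; b, d children; c under b

def subtree_slice(nd, root_index):
    """Return the slice of the node-depth list covering the subtree
    rooted at position root_index: it extends until a node whose depth
    is not greater than the root's depth."""
    d = nd[root_index][1]
    end = root_index + 1
    while end < len(nd) and nd[end][1] > d:
        end += 1
    return nd[root_index:end]

assert subtree_slice(tree, 1) == [('b', 1), ('c', 2)]
```

The NDDR's contribution, per the abstract, is augmenting such an encoding so that the expected cost of producing a modified spanning forest drops from O(n) to O(√n).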
Abstract:
This paper presents a theoretical model for estimating the power, the optical signal to noise ratio, and the number of generated carriers in a comb generator, taking as a reference the minimum optical signal to noise ratio at the receiver input for a given fiber link. Based on the recirculating frequency shifting technique, the generator relies on coherent and orthogonal multi-carriers (Coherent-WDM) generated from a single laser source (seed) for feeding high-capacity (above 100 Gb/s) systems. The theoretical model has been validated by an experimental demonstration, in which 23 comb lines with an optical signal to noise ratio ranging from 25 to 33 dB, in a spectral window of approximately 3.5 nm, were obtained.
Abstract:
There are several variants of the widely used Fuzzy C-Means (FCM) algorithm that support clustering data distributed across different sites. These methods have been studied under different names, such as collaborative and parallel fuzzy clustering. In this study, we offer some augmentations of two FCM-based clustering algorithms used to cluster distributed data, arriving at constructive ways of determining essential parameters of the algorithms (including the number of clusters) and forming a set of systematically structured guidelines, such as the selection of the specific algorithm depending on the nature of the data environment and the assumptions made about the number of clusters. A thorough complexity analysis, covering space, time, and communication aspects, is reported. A series of detailed numeric experiments is used to illustrate the main ideas discussed in the study.
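The distributed variants studied in the paper build on the standard FCM iteration, which can be sketched in a few lines: alternately update fuzzy memberships from distances and cluster centers from membership-weighted means. This is plain single-site FCM, not the paper's collaborative/parallel extensions; the data and parameter values are illustrative:

```python
import numpy as np

def fcm(X, c, m=2.0, iters=100, seed=0):
    """Minimal Fuzzy C-Means: alternate membership and center updates.
    m > 1 is the fuzzifier; U[i, k] is the membership of point k in cluster i."""
    rng = np.random.default_rng(seed)
    U = rng.random((c, len(X)))
    U /= U.sum(axis=0)                         # memberships sum to 1 per point
    for _ in range(iters):
        Um = U ** m
        centers = (Um @ X) / Um.sum(axis=1, keepdims=True)
        d = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2) + 1e-12
        U = 1.0 / (d ** (2 / (m - 1)))         # standard FCM membership update
        U /= U.sum(axis=0)
    return centers, U

X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
centers, U = fcm(X, c=2)
# the two centers should land near (0.05, 0) and (5.05, 5)
```

In the collaborative setting described in the abstract, each site would run such an iteration locally and exchange only summary information (e.g., partition matrices or prototypes), which is where the communication-complexity analysis comes in.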
Abstract:
Linear parameter varying (LPV) control is a model-based control technique that takes into account time-varying parameters of the plant. In the case of rotating systems supported by lubricated bearings, the dynamic characteristics of the bearings change in time as a function of the rotating speed. Hence, LPV control can tackle the problem of run-up and run-down operational conditions when dynamic characteristics of the rotating system change significantly in time due to the bearings and high vibration levels occur. In this work, the LPV control design for a flexible shaft supported by plain journal bearings is presented. The model used in the LPV control design is updated from unbalance response experimental results and dynamic coefficients for the entire range of rotating speeds are obtained by numerical optimization. Experimental implementation of the designed LPV control resulted in strong reduction of vibration amplitudes when crossing the critical speed, without affecting system behavior in sub- or supercritical speeds. (C) 2012 Elsevier Ltd. All rights reserved.
Abstract:
The ALRED construction is a lightweight strategy for constructing message authentication algorithms from an underlying iterated block cipher. Even though this construction's original analyses show that it is secure against some attacks, the absence of formal security proofs in a strong security model still brings uncertainty about its robustness. In this paper, aiming to give a better understanding of the security level provided by different authentication algorithms based on this design strategy, we formally analyze two ALRED variants, the MARVIN message authentication code and the LETTERSOUP authenticated-encryption scheme, bounding their security as a function of the attacker's resources and of the underlying cipher's characteristics.
Abstract:
Objectives: To evaluate the effect of insertion torque on micromotion under a lateral force in three different implant designs. Material and methods: Thirty-six implants with identical thread design but different cutting-groove designs were divided into three groups: (1) non-fluted (no cutting groove, solid screw-form); (2) fluted (90° cut at the apex, tap design); and (3) Blossom™ (patent pending; non-fluted with an engineered trimmed thread design). The implants were screwed into polyurethane foam blocks, and the insertion torque was recorded after each 90° turn by a digital torque gauge. Controlled lateral loads of 10 N, followed by increments of 5 N up to 100 N, were sequentially applied by a digital force gauge on a titanium abutment. Statistical comparison was performed with a two-way mixed-model ANOVA that evaluated implant design group, linear effects of turns and displacement loads, and their interaction. Results: While insertion torque increased as a function of the number of turns for each design, the slope and final values increased (P<0.001) progressively from the Blossom™ to the fluted to the non-fluted design (mean ± standard deviation [SD] = 64.1 ± 26.8, 139.4 ± 17.2, and 205.23 ± 24.3 Ncm, respectively). While a linear relationship between horizontal displacement and lateral force was observed for each design, the slope and maximal displacement increased (P<0.001) progressively from the Blossom™ to the fluted to the non-fluted design (mean ± SD = 530 ± 57.7, 585.9 ± 82.4, and 782.33 ± 269.4 µm, respectively). There were negligible to moderate levels of association between insertion torque and lateral displacement in the Blossom™, fluted, and non-fluted design groups, respectively. Conclusion: Insertion torque was reduced in implant macrodesigns that incorporated cutting edges, and lower insertion torque was generally associated with decreased micromovement. However, insertion torque and micromotion were unrelated within implant designs, particularly for those designs showing the least insertion torque.
Abstract:
Piezoelectric materials can be used to convert oscillatory mechanical energy into electrical energy. Energy harvesting devices are designed to capture the ambient energy surrounding the electronics and convert it into usable electrical energy. The design of energy harvesting devices is not obvious, requiring optimization procedures. This paper investigates the influence of pattern gradation, using topology optimization, on the design of piezocomposite energy harvesting devices based on bending behavior. The objective function consists of maximizing the electric power generated in a load resistor. A projection scheme is employed to compute the element densities from the design variables and to control the length scale of the material density. Examples of two-dimensional piezocomposite energy harvesting devices are presented and discussed using the proposed method. The numerical results illustrate that pattern gradation constraints help to increase the electric power generated in a load resistor and guide the problem toward a more stable solution. (C) 2012 Elsevier Ltd. All rights reserved.
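The abstract does not specify which projection scheme is used; one widely used form in density-based topology optimization is the smoothed Heaviside (tanh) threshold projection, sketched below as an assumption, not the paper's exact formulation. It maps filtered densities toward 0/1, which is how projection controls length scale and suppresses intermediate densities:

```python
import numpy as np

def heaviside_projection(rho, beta=8.0, eta=0.5):
    """Smoothed Heaviside threshold projection: densities below the
    threshold eta are pushed toward 0 and those above it toward 1,
    more sharply as beta grows."""
    num = np.tanh(beta * eta) + np.tanh(beta * (rho - eta))
    den = np.tanh(beta * eta) + np.tanh(beta * (1.0 - eta))
    return num / den

rho = np.array([0.1, 0.5, 0.9])
print(heaviside_projection(rho))  # 0.1 pulled toward 0, 0.5 unchanged, 0.9 toward 1
```

In practice beta is increased gradually over the optimization (a continuation scheme) so the problem stays differentiable while converging to a crisp 0/1 layout.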
Abstract:
In the field of vehicle dynamics, commercial software can aid the designer during the conceptual and detailed design phases. Simulations using these tools can quickly provide specific design metrics, such as yaw and lateral velocity, for standard maneuvers. However, it remains challenging to correlate these metrics with empirical quantities that depend on many external parameters and design specifications. This is the case for tire wear, which depends on the frictional work developed at the tire-road contact. In this study, an approach is proposed to estimate the tire-road friction during steady-state longitudinal and cornering maneuvers. Using this approach, a qualitative formula for tire wear evaluation is developed, and conceptual design analyses of cornering maneuvers are performed using simplified vehicle models. The influence of design parameters such as cornering stiffness, the distance between the axles, and the steer angle ratio between the steering axles (for vehicles with two steering axles) is evaluated. The proposed methodology allows the designer to predict tire wear using simplified vehicle models during the conceptual design phase.
Thermal design of a tray-type distillation column of an ammonia/water absorption refrigeration cycle
Abstract:
The goal of this paper is to present an analysis of a segmented-weir sieve-tray distillation column for a 17.58 kW (5 TR) ammonia/water absorption refrigeration cycle. Mass and energy balances were performed based on the Ponchon-Savarit method, from which it was possible to determine the ideal number of trays. The analysis showed that four ideal trays were adequate for this small absorption refrigeration system, with the feed entering the column right above the second tray. A sensitivity analysis of the main parameters was carried out. The vapor and liquid pressure drop constraints, along with the ammonia and water mass flow ratios, defined the internal geometry of the column, such as its diameter and height, as well as other design parameters. Due to the lack of specific correlations, the present work was based on practical correlations used in the petrochemical and beverage production industries. The analysis also yielded recommended values of tray spacing for a compact column. The tray geometry turns out to be sensitive to the vapor load and, to a lesser extent, to the liquid load, while being insensitive to the diameter of the tray holes. A column efficiency of 50% was found. Finally, the paper presents some recommendations for an optimal geometry for a compact distillation column. (c) 2011 Elsevier Ltd. All rights reserved.
Abstract:
This work aimed at evaluating the spray congealing method for the production of microparticles of carbamazepine combined with a polyoxylglyceride carrier. In addition, the influence of the spray congealing conditions on the improvement of drug solubility was investigated using a three-factor, three-level Box-Behnken design. The factors studied were the cooling air flow rate, atomizing pressure, and molten dispersion feed rate. Dependent variables were the yield, solubility, encapsulation efficiency, particle size, water activity, and flow properties. Statistical analysis showed that only the yield was affected by the factors studied. The characteristics of the microparticles were evaluated using X-ray powder diffraction, scanning electron microscopy, differential scanning calorimetry, and hot-stage microscopy. The results showed a spherical morphology and changes in the crystalline state of the drug. The microparticles were obtained with good yields and encapsulation efficiencies, which ranged from 50 to 80% and from 99.5 to 112%, respectively. The average size of the microparticles ranged from 17.7 to 39.4 µm, the water activities were always below 0.5, and flowability was good to moderate. Both the solubility and dissolution rate of carbamazepine from the spray congealed microparticles were remarkably improved. The carbamazepine solubility showed a threefold increase, and the dissolution profile showed a twofold increase after 60 min compared to the raw drug. The Box-Behnken fractional factorial design proved to be a powerful tool to identify the best conditions for the manufacture of solid dispersion microparticles by spray congealing.
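A three-factor, three-level Box-Behnken design like the one used here has a fixed structure that is easy to generate: each pair of factors takes all ±1 combinations while the remaining factor sits at its center level, plus replicated center runs. The sketch below uses coded levels (-1/0/+1); mapping them to the study's actual air flow, pressure, and feed-rate values, and the choice of three center points, are assumptions for illustration:

```python
from itertools import combinations, product

def box_behnken(k, center_points=3):
    """Box-Behnken design for k factors in coded levels -1/0/+1:
    for each pair of factors, take all +/-1 combinations with the
    remaining factors held at 0, then append center runs."""
    runs = []
    for i, j in combinations(range(k), 2):
        for a, b in product((-1, 1), repeat=2):
            row = [0] * k
            row[i], row[j] = a, b
            runs.append(row)
    runs += [[0] * k for _ in range(center_points)]
    return runs

design = box_behnken(3)
print(len(design))  # 3 pairs x 4 sign combinations + 3 center runs = 15
```

Because no run sets all factors to their extremes simultaneously, the design avoids corner points of the factor cube, which is often desirable when extreme combinations (e.g., maximum pressure at maximum feed rate) are impractical.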
Abstract:
Over the past few years, the field of global optimization has been very active, producing different kinds of deterministic and stochastic algorithms for optimization in the continuous domain. These days, the use of evolutionary algorithms (EAs) to solve optimization problems is common practice due to their competitive performance on complex search spaces. EAs are well known for their ability to deal with nonlinear and complex optimization problems. Differential evolution (DE) algorithms are a family of evolutionary optimization techniques that use a rather greedy and less stochastic approach to problem solving compared to classical evolutionary algorithms. The main idea is to construct, at each generation and for each element of the population, a mutant vector through a specific mutation operation based on adding differences between randomly selected elements of the population to another element. Due to its simple implementation, minimal mathematical processing, and good optimization capability, DE has attracted attention. This paper proposes a new approach to solving electromagnetic design problems that combines the DE algorithm with a generator of chaos sequences. The approach is tested on the design of a loudspeaker model with 17 degrees of freedom to show its applicability to electromagnetic problems. The results show that the DE algorithm with chaotic sequences presents better, or at least similar, results compared to the standard DE algorithm and other evolutionary algorithms available in the literature.
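The DE mutation described above (adding a scaled difference of two random population members to a third) can be sketched concisely. One common way to hybridize DE with chaos, used here as an assumption since the abstract does not specify the paper's exact scheme, is to drive the crossover decisions with a logistic-map chaos sequence instead of a uniform PRNG:

```python
import numpy as np

def logistic_map(x):
    """Chaotic logistic map, a common generator of chaos sequences."""
    return 4.0 * x * (1.0 - x)

def de_chaotic(f, bounds, pop_size=20, gens=100, F=0.8, CR=0.9, seed=1):
    """DE/rand/1/bin minimizer where crossover draws come from a
    logistic-map chaos sequence (illustrative hybridization)."""
    rng = np.random.default_rng(seed)
    dim = len(bounds)
    lo, hi = np.array(bounds, dtype=float).T
    pop = rng.uniform(lo, hi, (pop_size, dim))
    fit = np.array([f(x) for x in pop])
    z = 0.7  # chaos state; avoid the map's fixed points (0, 0.25, 0.5, 0.75, 1)
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)
            # mutation: base vector plus scaled difference of two others
            mutant = np.clip(pop[a] + F * (pop[b] - pop[c]), lo, hi)
            trial = pop[i].copy()
            jrand = rng.integers(dim)  # ensure at least one mutated component
            for j in range(dim):
                z = logistic_map(z)
                if z < CR or j == jrand:
                    trial[j] = mutant[j]
            ft = f(trial)
            if ft <= fit[i]:           # greedy selection, as in standard DE
                pop[i], fit[i] = trial, ft
    return pop[fit.argmin()], fit.min()

best, val = de_chaotic(lambda x: np.sum(x ** 2), [(-5, 5)] * 3)
# should converge close to the minimum at the origin (val near 0)
```

The appeal of the chaotic sequence is that it is deterministic yet ergodic over (0, 1), which can improve search diversity; the loudspeaker design problem in the paper would replace the sphere function used here for illustration.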
Abstract:
Sensors and actuators based on laminated piezocomposite shells have shown increasing demand in the field of smart structures. The distribution of piezoelectric material within the material layers affects the performance of these structures; therefore, its amount, shape, size, placement, and polarization should be considered simultaneously in an optimization problem. In addition, previous works suggest that laminated piezocomposite structures that include a fiber-reinforced composite layer can increase the performance of these piezoelectric transducers; however, the design optimization of these devices has not been fully explored yet. Thus, this work aims at the development of a methodology using topology optimization techniques for the static design of laminated piezocomposite shell structures, considering the optimization of the piezoelectric material and polarization distributions together with the optimization of the fiber angles of the composite orthotropic layers, which are free to assume different values along the same composite layer. The finite element model is based on laminated piezoelectric shell theory, using the degenerate three-dimensional solid approach and first-order shell theory kinematics, which accounts for transverse shear deformation and rotary inertia effects. The topology optimization formulation is implemented by combining the piezoelectric material with penalization and polarization model and discrete material optimization, where the design variables describe the amount of piezoelectric material and the polarization sign at each finite element, together with the fiber angles. Three different objective functions are formulated for the design of actuators, sensors, and energy harvesters. Results for laminated piezocomposite shell transducers are presented to illustrate the method. Copyright (C) 2012 John Wiley & Sons, Ltd.