889 results for H150 Engineering Design


Relevance:

30.00%

Abstract:

There are several variants of the widely used Fuzzy C-Means (FCM) algorithm that support clustering of data distributed across different sites. These methods have been studied under different names, such as collaborative and parallel fuzzy clustering. In this study, we augment two FCM-based clustering algorithms used to cluster distributed data by deriving constructive ways of determining the essential parameters of the algorithms (including the number of clusters) and by forming a set of systematically structured guidelines, such as how to select the specific algorithm depending on the nature of the data environment and the assumptions made about the number of clusters. A thorough complexity analysis, covering space, time, and communication aspects, is reported. A series of detailed numeric experiments illustrates the main ideas discussed in the study.
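
For reference, the sketch below shows the standard single-site FCM iteration (membership update followed by prototype update) that the collaborative and parallel variants discussed above build upon; the synthetic data, fuzzifier m = 2, and cluster count are illustrative assumptions, not values from the study.

```python
import numpy as np

def fcm(X, c=3, m=2.0, n_iter=100, seed=0):
    """Standard (single-site) Fuzzy C-Means: returns prototypes V and memberships U."""
    rng = np.random.default_rng(seed)
    U = rng.random((c, X.shape[0]))
    U /= U.sum(axis=0)                                   # memberships sum to 1 per sample
    for _ in range(n_iter):
        Um = U ** m
        V = (Um @ X) / Um.sum(axis=1, keepdims=True)     # prototype (cluster center) update
        d = np.linalg.norm(X[None, :, :] - V[:, None, :], axis=2) + 1e-12
        U = d ** (-2.0 / (m - 1.0))                      # membership update
        U /= U.sum(axis=0)
    return V, U

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(off, 1.0, size=(50, 2)) for off in (0.0, 5.0, 10.0)])
V, U = fcm(X, c=3)
print(np.round(V, 2))
```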

Relevance:

30.00%

Abstract:

Linear parameter varying (LPV) control is a model-based control technique that takes into account time-varying parameters of the plant. In the case of rotating systems supported by lubricated bearings, the dynamic characteristics of the bearings change in time as a function of the rotating speed. Hence, LPV control can tackle run-up and run-down operating conditions, in which the dynamic characteristics of the rotating system change significantly in time because of the bearings and high vibration levels occur. In this work, the LPV control design for a flexible shaft supported by plain journal bearings is presented. The model used in the LPV control design is updated from unbalance response experimental results, and the dynamic coefficients for the entire range of rotating speeds are obtained by numerical optimization. Experimental implementation of the designed LPV control resulted in a strong reduction of vibration amplitudes when crossing the critical speed, without affecting system behavior at sub- or supercritical speeds. (C) 2012 Elsevier Ltd. All rights reserved.
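
To illustrate the parameter-varying nature of such a plant, the sketch below assembles speed-dependent state-space matrices for a simple two-degree-of-freedom rotor whose bearing stiffness and damping are interpolated over rotating speed. The numerical values and the interpolation are illustrative assumptions only; they do not reproduce the paper's model updating or LPV synthesis.

```python
import numpy as np

# Illustrative speed grid and bearing coefficients (assumed values, not from the paper)
speeds = np.array([10.0, 50.0, 100.0])             # rad/s
k_brg  = np.array([2.0e6, 1.5e6, 1.0e6])           # N/m
c_brg  = np.array([4.0e3, 3.0e3, 2.0e3])           # N.s/m

M = np.diag([10.0, 10.0])                           # kg, lumped rotor mass

def plant_matrices(omega):
    """State matrix A(omega) for x = [q, q_dot], with speed-dependent bearings."""
    k = np.interp(omega, speeds, k_brg)
    c = np.interp(omega, speeds, c_brg)
    K = np.diag([k, k])
    C = np.diag([c, c])
    Minv = np.linalg.inv(M)
    return np.block([[np.zeros((2, 2)), np.eye(2)],
                     [-Minv @ K,        -Minv @ C]])

print(np.linalg.eigvals(plant_matrices(30.0)))      # poles move with the rotating speed
```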

Relevance:

30.00%

Abstract:

The ALRED construction is a lightweight strategy for building message authentication algorithms from an underlying iterated block cipher. Even though the original analyses of this construction show that it is secure against some attacks, the absence of formal security proofs in a strong security model still leaves uncertainty about its robustness. In this paper, aiming to give a better understanding of the security level provided by different authentication algorithms based on this design strategy, we formally analyze two ALRED variants - the MARVIN message authentication code and the LETTERSOUP authenticated-encryption scheme - bounding their security as a function of the attacker's resources and of the underlying cipher's characteristics.

Relevance:

30.00%

Abstract:

Objectives: To evaluate the effect of insertion torque on micromotion under a lateral force in three different implant designs. Material and methods: Thirty-six implants with identical thread design but different cutting-groove designs were divided into three groups: (1) non-fluted (no cutting groove, solid screw form); (2) fluted (90° cut at the apex, tap design); and (3) Blossom™ (patent pending; non-fluted with an engineered trimmed thread design). The implants were screwed into polyurethane foam blocks and the insertion torque was recorded after each 90° turn by a digital torque gauge. Controlled lateral loads of 10 N, followed by increments of 5 N up to 100 N, were sequentially applied by a digital force gauge on a titanium abutment. Statistical comparison was performed with a two-way mixed-model ANOVA that evaluated implant design group, linear effects of turns and displacement loads, and their interaction. Results: While insertion torque increased as a function of the number of turns for each design, the slope and final values increased (P < 0.001) progressively from the Blossom to the fluted to the non-fluted design (mean ± standard deviation [SD] = 64.1 ± 26.8, 139.4 ± 17.2, and 205.23 ± 24.3 N cm, respectively). While a linear relationship between horizontal displacement and lateral force was observed for each design, the slope and maximal displacement increased (P < 0.001) progressively from the Blossom to the fluted to the non-fluted design (mean ± SD = 530 ± 57.7, 585.9 ± 82.4, and 782.33 ± 269.4 µm, respectively). There were negligible to moderate levels of association between insertion torque and lateral displacement in the Blossom, fluted, and non-fluted design groups, respectively. Conclusion: Insertion torque was reduced in implant macrodesigns that incorporated cutting edges, and lower insertion torque was generally associated with decreased micromovement. However, insertion torque and micromotion were unrelated within implant designs, particularly for those designs showing the least insertion torque.

Relevance:

30.00%

Abstract:

Piezoelectric materials can be used to convert oscillatory mechanical energy into electrical energy. Energy harvesting devices are designed to capture the ambient energy surrounding the electronics and convert it into usable electrical energy. The design of energy harvesting devices is not obvious and requires optimization procedures. This paper investigates the influence of pattern gradation, using topology optimization, on the design of piezocomposite energy harvesting devices based on bending behavior. The objective function consists of maximizing the electric power generated in a load resistor. A projection scheme is employed to compute the element densities from the design variables and to control the length scale of the material density. Examples of two-dimensional piezocomposite energy harvesting devices designed with the proposed method are presented and discussed. The numerical results illustrate that pattern gradation constraints help to increase the electric power generated in a load resistor and guide the problem toward a more stable solution. (C) 2012 Elsevier Ltd. All rights reserved.
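
The abstract does not detail the projection scheme; a common choice in density-based topology optimization is a neighborhood-filtered density followed by a smoothed Heaviside projection, sketched below under that assumption. The sharpness and threshold values are illustrative.

```python
import numpy as np

def heaviside_projection(rho_filtered, beta=8.0, eta=0.5):
    """Smoothed Heaviside projection of filtered densities (0..1) toward 0/1 values.

    rho_filtered : element densities after a neighborhood (e.g. linear) filter
    beta         : projection sharpness; larger values approach a true 0/1 design
    eta          : projection threshold
    """
    num = np.tanh(beta * eta) + np.tanh(beta * (rho_filtered - eta))
    den = np.tanh(beta * eta) + np.tanh(beta * (1.0 - eta))
    return num / den

rho = np.linspace(0.0, 1.0, 11)
print(np.round(heaviside_projection(rho), 3))   # intermediate densities pushed toward 0 or 1
```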

Relevance:

30.00%

Abstract:

In the field of vehicle dynamics, commercial software can aid the designer during the conceptual and detailed design phases. Simulations using these tools can quickly provide specific design metrics, such as yaw and lateral velocity, for standard maneuvers. However, it remains challenging to correlate these metrics with empirical quantities that depend on many external parameters and design specifications. This is the case with tire wear, which depends on the frictional work developed by the tire-road contact. In this study, an approach is proposed to estimate the tire-road friction during steady-state longitudinal and cornering maneuvers. Using this approach, a qualitative formula for tire wear evaluation is developed, and conceptual design analyses of cornering maneuvers are performed using simplified vehicle models. The influence of design parameters such as the cornering stiffness, the distance between the axles, and the steer angle ratio between the steering axles (for vehicles with two steering axles) is evaluated. The proposed methodology allows the designer to predict tire wear using simplified vehicle models during the conceptual design phase.
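
The abstract does not give the wear formula itself; the sketch below illustrates the general idea under the common assumption that wear is proportional to the frictional work at the contact, using a linear bicycle model in steady-state cornering. The parameter values and the proportionality are hypothetical, not the paper's.

```python
import numpy as np

# Hypothetical vehicle parameters (linear bicycle model)
m = 1500.0              # kg, vehicle mass
a, b = 1.2, 1.6         # m, CG-to-front-axle and CG-to-rear-axle distances
Cf, Cr = 8.0e4, 9.0e4   # N/rad, front/rear cornering stiffness

def friction_work_per_metre(ay):
    """Approximate frictional work per metre travelled in steady-state cornering."""
    L = a + b
    Fyf = m * ay * b / L            # front-axle lateral force (steady state)
    Fyr = m * ay * a / L            # rear-axle lateral force
    alpha_f = Fyf / Cf              # slip angles from the linear tire model
    alpha_r = Fyr / Cr
    # friction work per unit distance ~ lateral force times lateral slip
    return Fyf * abs(np.tan(alpha_f)) + Fyr * abs(np.tan(alpha_r))

for ay in (1.0, 2.0, 4.0):          # lateral acceleration, m/s^2
    print(ay, round(friction_work_per_metre(ay), 1), "J/m (wear index taken as proportional)")
```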

Relevance:

30.00%

Abstract:

The goal of this paper is to present an analysis of a segmented-weir sieve-tray distillation column for a 17.58 kW (5 TR) ammonia/water absorption refrigeration cycle. Mass and energy balances were performed based on the Ponchon-Savarit method, from which it was possible to determine the ideal number of trays. The analysis showed that four ideal trays were adequate for this small absorption refrigeration system, with the column feed located right above the second tray. A sensitivity analysis of the main parameters was carried out. Vapor and liquid pressure-drop constraints, along with the ammonia and water mass flow ratios, defined the internal geometry of the column, such as the column diameter and height, as well as other design parameters. Due to the lack of specific correlations, the present work was based on practical correlations used in the petrochemical and beverage production industries. The analysis also yielded recommended values of tray spacing for a compact column. The tray geometry turned out to be sensitive to the vapor load and, to a lesser extent, to the liquid load, while being insensitive to the diameter of the tray holes. A column efficiency of 50% was found. Finally, the paper presents some recommendations toward an optimal geometry for a compact distillation column. (c) 2011 Elsevier Ltd. All rights reserved.
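
As a quick check on the numbers quoted above, the overall column efficiency relates the ideal and actual tray counts; the snippet below applies that relation to the reported values (rounding up to whole trays is an assumption).

```python
import math

ideal_trays = 4          # from the Ponchon-Savarit analysis above
efficiency = 0.50        # reported overall column efficiency

actual_trays = math.ceil(ideal_trays / efficiency)
print(actual_trays)      # -> 8 real trays needed at 50% overall efficiency
```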

Relevance:

30.00%

Abstract:

This work aimed at evaluating the spray congealing method for the production of microparticles of carbamazepine combined with a polyoxylglyceride carrier. In addition, the influence of the spray congealing conditions on the improvement of drug solubility was investigated using a three-factor, three-level Box-Behnken design. The factors studied were the cooling air flow rate, the atomizing pressure, and the molten dispersion feed rate. The dependent variables were the yield, solubility, encapsulation efficiency, particle size, water activity, and flow properties. Statistical analysis showed that only the yield was affected by the factors studied. The characteristics of the microparticles were evaluated using X-ray powder diffraction, scanning electron microscopy, differential scanning calorimetry, and hot-stage microscopy. The results showed a spherical morphology and changes in the crystalline state of the drug. The microparticles were obtained with good yields and encapsulation efficiencies, which ranged from 50 to 80% and from 99.5 to 112%, respectively. The average size of the microparticles ranged from 17.7 to 39.4 µm, the water activities were always below 0.5, and the flowability was good to moderate. Both the solubility and the dissolution rate of carbamazepine from the spray-congealed microparticles were remarkably improved: the carbamazepine solubility showed a threefold increase, and the dissolution profile showed a twofold increase after 60 min compared to the raw drug. The Box-Behnken fractional factorial design proved to be a powerful tool to identify the best conditions for the manufacture of solid dispersion microparticles by spray congealing.
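
For readers unfamiliar with the design mentioned above, the sketch below builds the coded run matrix of a generic three-factor, three-level Box-Behnken design (twelve edge-midpoint runs plus center points); the number of center points and the factor names used here are assumptions, not the study's actual settings.

```python
from itertools import combinations, product

def box_behnken(n_factors=3, n_center=3):
    """Coded (-1, 0, +1) run matrix of a Box-Behnken design."""
    runs = []
    for i, j in combinations(range(n_factors), 2):       # each pair of factors
        for lo_hi in product((-1, 1), repeat=2):          # varied at +/-1, others at 0
            row = [0] * n_factors
            row[i], row[j] = lo_hi
            runs.append(row)
    runs += [[0] * n_factors for _ in range(n_center)]    # center points
    return runs

factors = ["cooling_air_flow", "atomizing_pressure", "feed_rate"]   # from the abstract
design = box_behnken()
print(len(design), "runs")                                # 12 + 3 center points = 15
for row in design:
    print(dict(zip(factors, row)))
```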

Relevance:

30.00%

Abstract:

Over the past few years, the field of global optimization has been very active, producing different kinds of deterministic and stochastic algorithms for optimization in the continuous domain. These days, the use of evolutionary algorithms (EAs) to solve optimization problems is common practice due to their competitive performance on complex search spaces. EAs are well known for their ability to deal with nonlinear and complex optimization problems. Differential evolution (DE) algorithms are a family of evolutionary optimization techniques that use a rather greedy and less stochastic approach to problem solving than classical evolutionary algorithms. The main idea is to construct, at each generation and for each element of the population, a mutant vector through a specific mutation operation based on adding differences between randomly selected elements of the population to another element. Due to its simple implementation, minimal mathematical processing, and good optimization capability, DE has attracted attention. This paper proposes a new approach to solving electromagnetic design problems that combines the DE algorithm with a generator of chaotic sequences. The approach is tested on the design of a loudspeaker model with 17 degrees of freedom to show its applicability to electromagnetic problems. The results show that the DE algorithm with chaotic sequences presents better, or at least similar, results compared to the standard DE algorithm and other evolutionary algorithms available in the literature.
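
The combination described above can be sketched as a standard DE/rand/1/bin loop in which the uniform random numbers used for crossover are replaced by a logistic-map chaotic sequence. The test function, population size, and specific chaotic map below are illustrative assumptions, not the loudspeaker design problem from the paper.

```python
import numpy as np

def logistic_map(x):
    """One step of the logistic map, used here as a chaotic pseudo-random source in (0, 1)."""
    return 4.0 * x * (1.0 - x)

def de_chaotic(obj, bounds, pop_size=30, F=0.7, CR=0.9, gens=200, seed=1):
    rng = np.random.default_rng(seed)
    dim = len(bounds)
    lo, hi = np.array(bounds, dtype=float).T
    pop = lo + rng.random((pop_size, dim)) * (hi - lo)
    fit = np.array([obj(x) for x in pop])
    chaos = 0.7                                            # initial state of the chaotic sequence
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = rng.choice([k for k in range(pop_size) if k != i], 3, replace=False)
            mutant = pop[a] + F * (pop[b] - pop[c])        # DE/rand/1 mutation
            trial = pop[i].copy()
            j_rand = rng.integers(dim)
            for j in range(dim):
                chaos = logistic_map(chaos)                # chaotic number replaces rand()
                if chaos < CR or j == j_rand:              # binomial crossover
                    trial[j] = mutant[j]
            trial = np.clip(trial, lo, hi)
            f_trial = obj(trial)
            if f_trial <= fit[i]:                          # greedy selection
                pop[i], fit[i] = trial, f_trial
    best = int(np.argmin(fit))
    return pop[best], fit[best]

sphere = lambda x: float(np.sum(x ** 2))                   # illustrative test function
x_best, f_best = de_chaotic(sphere, bounds=[(-5, 5)] * 5)
print(x_best, f_best)
```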

Relevance:

30.00%

Abstract:

Sensors and actuators based on laminated piezocomposite shells have shown increasing demand in the field of smart structures. The distribution of piezoelectric material within the material layers affects the performance of these structures; therefore, its amount, shape, size, placement, and polarization should be considered simultaneously in an optimization problem. In addition, previous works suggest that the concept of a laminated piezocomposite structure including a fiber-reinforced composite layer can increase the performance of these piezoelectric transducers; however, the design optimization of such devices has not been fully explored yet. Thus, this work aims at developing a methodology, based on topology optimization techniques, for the static design of laminated piezocomposite shell structures by optimizing the piezoelectric material and polarization distributions together with the fiber angles of the composite orthotropic layers, which are free to assume different values along the same composite layer. The finite element model is based on laminated piezoelectric shell theory, using the degenerate three-dimensional solid approach and first-order shell theory kinematics that account for transverse shear deformation and rotary inertia effects. The topology optimization formulation is implemented by combining the piezoelectric material with penalization and polarization model with discrete material optimization, where the design variables describe the amount of piezoelectric material and the polarization sign at each finite element, together with the fiber angles. Three different objective functions are formulated for the design of actuators, sensors, and energy harvesters. Results for laminated piezocomposite shell transducers are presented to illustrate the method. Copyright (C) 2012 John Wiley & Sons, Ltd.
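
The discrete-material step mentioned above can be sketched as a weighted interpolation over a set of candidate fiber angles, with weights built from penalized design variables so that a single candidate is favored per element. The weighting formula, candidate angles, penalization, and property values below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def dmo_weights(x, p=3.0):
    """Discrete-material-optimization style weights for candidate materials/angles.

    x : design variables in [0, 1], one per candidate in a given element
    p : penalization exponent pushing the weights toward a single candidate
    """
    xp = np.asarray(x, dtype=float) ** p
    return np.array([xp[i] * np.prod(1.0 - np.delete(xp, i)) for i in range(len(xp))])

candidate_angles = np.array([0.0, 45.0, 90.0, -45.0])   # deg, assumed candidate set
x = np.array([0.9, 0.2, 0.1, 0.1])                       # element design variables
w = dmo_weights(x)
# Effective (scalar) property as a weighted sum over candidates, e.g. one stiffness entry
E_candidates = np.array([120e9, 60e9, 10e9, 60e9])       # illustrative values, Pa
print(np.round(w, 4), float(w @ E_candidates))
```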

Relevance:

30.00%

Abstract:

Based on the literature, this article presents the "participant-observation" research protocol and its practical application in the industrial engineering field, more specifically in the area of design development and, in the case shown here, of interior design. The main goal is to characterize the method, i.e., to build from its characteristics a general understanding of the subject, so that the protocol can be used in different areas of knowledge, especially those committed to scientific research that combines researchers' expertise with the subjective feelings and opinions of the users of an engineering product, and to show how this knowledge can benefit product design from the earliest design stage.

Relevance:

30.00%

Abstract:

In deterministic optimization, the uncertainties of the structural system (i.e., dimensions, model, material, loads, etc.) are not explicitly taken into account. Hence, the resulting optimal solutions may lead to reduced reliability levels. The objective of reliability-based design optimization (RBDO) is to optimize structures while guaranteeing that a minimum level of reliability, chosen a priori by the designer, is maintained. Since reliability analysis using the First Order Reliability Method (FORM) is an optimization procedure itself, RBDO (in its classical version) is a double-loop strategy: the reliability analysis (inner loop) and the structural optimization (outer loop). The coupling of these two loops leads to very high computational costs. To reduce the computational burden of RBDO based on FORM, several authors propose decoupling the structural optimization and the reliability analysis. These procedures may be divided into two groups: (i) serial single-loop methods and (ii) unilevel methods. The basic idea of serial single-loop methods is to decouple the two loops and solve them sequentially until some convergence criterion is achieved. Unilevel methods, on the other hand, employ different strategies to obtain a single optimization loop that solves the RBDO problem. This paper presents a review of such RBDO strategies. A comparison of the performance (computational cost) of the main strategies is presented for several variants of two benchmark problems from the literature and for a structure modeled using the finite element method.
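
The inner reliability loop mentioned above (FORM) is itself an optimization; a minimal sketch of the classical Hasofer-Lind/Rackwitz-Fiessler (HL-RF) iteration in standard normal space is given below. The linear limit-state function and the numerical gradients are illustrative assumptions, not taken from the benchmark problems of the paper.

```python
import numpy as np

def form_hlrf(g, dim, tol=1e-6, max_iter=100, h=1e-6):
    """FORM via the HL-RF iteration.

    g   : limit-state function in standard normal space (failure when g(u) <= 0)
    dim : number of standard normal variables
    Returns the reliability index beta and the design point u*.
    """
    u = np.zeros(dim)
    for _ in range(max_iter):
        gu = g(u)
        grad = np.array([(g(u + h * e) - gu) / h for e in np.eye(dim)])   # forward differences
        u_new = (grad @ u - gu) / (grad @ grad) * grad                    # HL-RF update
        if np.linalg.norm(u_new - u) < tol:
            u = u_new
            break
        u = u_new
    return float(np.linalg.norm(u)), u

# Illustrative linear limit state: g(u) = 3 - u1 - u2  (exact beta = 3 / sqrt(2))
beta, u_star = form_hlrf(lambda u: 3.0 - u[0] - u[1], dim=2)
print(round(beta, 4), u_star)
```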

Relevance:

30.00%

Abstract:

This paper proposes the incorporation of engineering knowledge through both (a) advanced state-of-the-art preference-handling decision-making tools integrated in multiobjective evolutionary algorithms and (b) engineering-knowledge-based variance-reduction simulation as enhancing tools for the robust optimum design of structural frames, taking uncertainties in the design variables into consideration. The simultaneous minimization of the constrained weight (adding the structural weight and the average distribution of constraint violations) on the one hand, and of the standard deviation of the distribution of constraint violations on the other, is handled with multiobjective-optimization-based evolutionary computation in two different multiobjective algorithms. The optimum design values of the deterministic structural problem in question are proposed as a reference point (the aspiration level) in reference-point-based evolutionary multiobjective algorithms (here g-dominance is used). Results including
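
The reference-point scheme named above (g-dominance) can be sketched as the flag-based comparison below: a solution is preferred when its objective vector lies in the region that either dominates or is dominated by the reference (aspiration) point, and ties fall back to ordinary Pareto dominance. This follows the commonly cited definition and is an assumption about the exact variant used in the paper; the objective values are illustrative.

```python
import numpy as np

def pareto_dominates(f1, f2):
    """True if f1 Pareto-dominates f2 (minimization of all objectives)."""
    return bool(np.all(f1 <= f2) and np.any(f1 < f2))

def g_flag(f, g_ref):
    """1 if f dominates the reference point or is dominated by it, else 0."""
    return int(np.all(f <= g_ref) or np.all(f >= g_ref))

def g_dominates(f1, f2, g_ref):
    """g-dominance: the flag decides first, ordinary Pareto dominance breaks ties."""
    flag1, flag2 = g_flag(f1, g_ref), g_flag(f2, g_ref)
    if flag1 != flag2:
        return flag1 > flag2
    return pareto_dominates(f1, f2)

g_ref = np.array([10.0, 2.0])            # aspiration level, e.g. the deterministic optimum
f_a = np.array([9.0, 1.5])               # dominates the reference point -> flag 1
f_b = np.array([8.0, 3.0])               # neither dominates nor is dominated -> flag 0
print(g_dominates(f_a, f_b, g_ref))      # True: f_a is preferred under g-dominance
```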

Relevance:

30.00%

Abstract:

The world of communication has changed quickly in the last decade, resulting in a rapid increase in the pace of people's lives. This is due to the explosion of mobile communication and the internet, which has now reached all levels of society. With such pressure for access to communication, there is increased demand for bandwidth. Photonic technology is the right solution for high-speed networks that have to supply wide bandwidth to new communication service providers. In particular, this Ph.D. dissertation deals with DWDM optical packet-switched networks, a subject that raises a large number of problems from the physical layer up to the transport layer. Here the subject is tackled from the network-level perspective. The long-term solution represented by optical packet switching has been fully explored over these years together with the Network Research Group at the Department of Electronics, Computer Science and Systems of the University of Bologna. Several national and international projects supported this research, such as the Network of Excellence (NoE) e-Photon/ONe, funded by the European Commission in the Sixth Framework Programme, and the INTREPIDO project (End-to-end Traffic Engineering and Protection for IP over DWDM Optical Networks), funded by the Italian Ministry of Education, University and Scientific Research. Optical packet switching for DWDM networks is studied at the single-node level as well as at the network level. In particular, the techniques discussed are intended for a long-haul transport network connecting local and metropolitan networks around the world. The main issues faced are contention resolution in an asynchronous, variable-packet-length environment, adaptive routing, wavelength conversion, and node architecture. Characteristics that a network must assure, such as quality of service and resilience, are also explored at both the node and network level. Results are mainly evaluated via simulation and through analysis.

Relevance:

30.00%

Abstract:

Background. The surgical treatment of dysfunctional hips addresses a severe condition for the patient and is a costly therapy for public health. Hip resurfacing techniques seem to hold the promise of various advantages over traditional THR, with particular attention to young and active patients. Although the lesson provided in the past by many branches of engineering is that success in designing competitive products can be achieved only by predicting the possible scenarios of failure, to date implant quality is poorly addressed pre-clinically. Thus revision is the only, albeit delayed, reliable end point for assessment. The aim of the present work was to model the musculoskeletal system so as to develop a protocol for predicting failure of hip resurfacing prostheses. Methods. Preliminary studies validated the technique for the generation of subject-specific finite element (FE) models of long bones from Computed Tomography data. The proposed protocol consisted of the numerical analysis of the prosthesis biomechanics through deterministic and statistical studies, so as to assess the risk of biomechanical failure under the different operative conditions the implant might face in a population of interest during various activities of daily living. Physiological conditions were defined, including the variability of the anatomy, bone densitometry, surgical uncertainties, and published boundary conditions at the hip. The protocol was tested by analysing a successful design on the market and a new prototype of a resurfacing prosthesis. Results. The intrinsic accuracy of the models in bone stress predictions (RMSE < 10%) was aligned with the current state of the art in this field. The accuracy of the predictions of the bone-prosthesis contact mechanics was also excellent (< 0.001 mm). The sensitivity of the model predictions to uncertainties in the modelling parameters was found to be below 8.4%. The analysis of the successful design resulted in very good agreement with published retrospective studies. The geometry optimisation of the new prototype led to a final design with a low risk of failure. The statistical analysis confirmed the minimal risk of the optimised design over the entire population of interest. The performance of the optimised design showed a significant improvement with respect to the first prototype (+35%). Limitations. In the authors' opinion, the major limitation of this study lies in the boundary conditions. The muscular forces and the hip joint reaction were derived from the few data available in the literature, which can be considered significant but hardly representative of the entire variability of boundary conditions the implant might face over the patient population. This moved the focus of the research toward modelling the musculoskeletal system; the ongoing activity is to develop subject-specific musculoskeletal models of the lower limb from medical images. Conclusions. The developed protocol was able to accurately predict known clinical outcomes when applied to a well-established device and to support the design optimisation phase, providing important information on critical characteristics of the patients, when applied to a new prosthesis. The presented approach has a generality that would allow the extension of the protocol to a large set of orthopaedic scenarios with minor changes. Hence, a failure mode analysis criterion can be considered a suitable tool in the development of new orthopaedic devices.