975 results for Optimization analysis


Relevance:

30.00%

Publisher:

Abstract:

Failure analysis has been, throughout the years, a fundamental tool in the aerospace sector, supporting assessments performed by sustainment and design engineers, mainly concerning failure modes and material suitability. The predicted service life of aircraft often exceeds 40 years, and the design assured life rarely accounts for all the in-service loads and environmental threats that aging aircraft must face throughout their service lives. From the most conservative safe-life design approaches to the most recent on-condition design approaches, assessing the condition and predicting the failure modes of components and materials are essential for developing adequate preventive and corrective maintenance actions, as well as for accomplishing and optimizing scheduled aircraft maintenance programs. Moreover, as operational conditions may vary significantly from operator to operator (especially for military aircraft), it is necessary to assess whether the defined maintenance programs are adequate to guarantee the continuous reliability and safe usage of the aircraft, preventing catastrophic failures that carry significant maintenance and repair costs and that may lead to the loss of human lives. Failure analysis and material investigations performed as part of aircraft accident and incident investigations thus arise as powerful tools of the utmost importance for safety assurance and cost reduction within the aeronautical and aerospace sectors. 
The Portuguese Air Force (PRTAF) has operated many different aircraft throughout its long existence, in some cases operating a particular type of aircraft for more than 30 years, and has gathered a great amount of expertise in: assessing failure modes of aircraft materials; conducting aircraft accident and incident investigations (sometimes with the participation of the aircraft manufacturers and/or other operators); and developing design and repair solutions for in-service problems. This paper addresses several studies supporting the thesis that failure analysis plays a key role in flight safety improvement within the PRTAF. It presents a short summary of developed

Relevance:

30.00%

Publisher:

Abstract:

The present document deals with the shape optimization of aerodynamic profiles. The objective is to reduce the drag coefficient of a given profile without penalizing the lift coefficient. A set of control points defining the geometry is passed in and parameterized as a B-spline curve. These points are modified automatically by means of CFD analysis. A given shape is defined by a user, and a valid volumetric CFD domain is constructed from this planar data and a set of user-defined parameters. The construction process involves 2D and 3D meshing algorithms that were coupled into in-house code. The volume of air surrounding the airfoil and the mesh quality are also parametrically defined. Some standard NACA profiles were used to test the algorithm, by first obtaining their control points. The Navier-Stokes equations were solved for turbulent, steady-state flow of compressible fluids using the k-epsilon model and the SIMPLE algorithm. In order to obtain data for the optimization process, a utility to extract drag and lift data from the CFD simulation was added. After a simulation is run, drag and lift data are passed to the optimization process. A gradient-based method using steepest descent was implemented to define the magnitude and direction of the displacement of each control point. The control points and other parameters defined as design variables are iteratively modified in order to reach an optimum. Preliminary results on conceptual examples show a decrease in drag and a change in geometry that obeys aerodynamic behavior principles.
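The steepest-descent update of the control points described above can be sketched as follows. This is a minimal illustration only: the real objective is a CFD drag evaluation, so the `drag` function below is a toy stand-in, and all names are assumptions rather than the authors' code.

```python
# Sketch of a steepest-descent update for B-spline control points,
# assuming a callable objective `f(points)` standing in for the CFD
# drag evaluation. All names here are illustrative.

def finite_diff_gradient(f, points, h=1e-6):
    """Central-difference gradient of f w.r.t. each point coordinate."""
    grad = []
    for i in range(len(points)):
        row = []
        for j in range(len(points[i])):
            p_hi = [list(p) for p in points]; p_hi[i][j] += h
            p_lo = [list(p) for p in points]; p_lo[i][j] -= h
            row.append((f(p_hi) - f(p_lo)) / (2 * h))
        grad.append(row)
    return grad

def steepest_descent_step(f, points, step=0.1):
    """Move each control point against the gradient of the objective."""
    g = finite_diff_gradient(f, points)
    return [[points[i][j] - step * g[i][j] for j in range(len(points[i]))]
            for i in range(len(points))]

# Toy quadratic objective standing in for the CFD drag evaluation:
drag = lambda pts: sum(x * x + y * y for x, y in pts)
new_pts = steepest_descent_step(drag, [[1.0, 0.0], [0.0, 2.0]])
```

In the actual workflow each gradient entry costs one or two CFD runs, which is why the step magnitude and the number of design variables must be chosen carefully.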

Relevance:

30.00%

Publisher:

Abstract:

This proposal shows that ACO systems can be applied to requirements-selection problems in incremental software development, with the aim of obtaining better results than those produced by expert judgment alone. The evaluation of the ACO systems should be done through a comparative analysis with greedy and simulated annealing algorithms, performing experiments on several problem instances.
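One of the baselines the proposal mentions can be sketched with a tiny simulated-annealing solver for requirements selection (the "next release" formulation: choose a subset of requirements maximizing value under a cost budget). Everything here, including the data, is an illustrative assumption, not the proposal's setup.

```python
import random

# Minimal simulated-annealing sketch for requirements selection:
# maximize total value of chosen requirements under a cost budget.

def sa_select(values, costs, budget, iters=2000, seed=0):
    rng = random.Random(seed)
    n = len(values)

    def score(sel):
        cost = sum(c for c, s in zip(costs, sel) if s)
        if cost > budget:
            return -1  # infeasible selections are never kept as best
        return sum(v for v, s in zip(values, sel) if s)

    cur = [False] * n
    best, best_score = cur[:], score(cur)
    temp = 1.0
    for _ in range(iters):
        cand = cur[:]
        cand[rng.randrange(n)] = not cand[rng.randrange(0, 1) * 0 or rng.randrange(n)] if False else not cand[rng.randrange(n)]
        d = score(cand) - score(cur)
        # Accept improvements always; worse moves with Boltzmann probability.
        if d >= 0 or rng.random() < pow(2.718281828, d / max(temp, 1e-9)):
            cur = cand
        if score(cur) > best_score:
            best, best_score = cur[:], score(cur)
        temp *= 0.995  # geometric cooling schedule
    return best, best_score

sel, total = sa_select(values=[10, 6, 4, 7], costs=[5, 3, 2, 4], budget=8)
```

The ACO variant would replace the single-flip neighborhood with pheromone-guided construction of selections; comparing both against a greedy value/cost heuristic is the experiment the proposal describes.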

Relevance:

30.00%

Publisher:

Abstract:

The ability to predict the properties of magnetic materials in a device is essential to ensuring the correct operation and optimization of the design as well as the device behavior over a wide range of input frequencies. Typically, the development and simulation of wide-bandwidth models require detailed, physics-based simulations that utilize significant computational resources. Balancing the trade-offs between model computational overhead and accuracy can be cumbersome, especially when the nonlinear effects of saturation and hysteresis are included in the model. This study focuses on the development of a system for analyzing magnetic devices in cases where model accuracy and computational intensity must be carefully and easily balanced by the engineer. A method for adjusting model complexity and the corresponding level of detail while incorporating the nonlinear effects of hysteresis is presented that builds upon recent work in loss analysis and magnetic equivalent circuit (MEC) modeling. The approach utilizes MEC models in conjunction with linearization and model-order reduction techniques to process magnetic devices based on geometry and core type. The validity of steady-state permeability approximations is also discussed.
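The MEC idea can be illustrated with the simplest possible reluctance network: a series core-plus-air-gap loop solved exactly like a resistive circuit (MMF plays the role of voltage, flux that of current). The geometry and winding values below are hypothetical, not taken from the study.

```python
# Illustrative magnetic-equivalent-circuit (MEC) calculation for a
# series loop: a high-permeability core with a small air gap, driven
# by a coil. Values are made up for demonstration.

MU0 = 4e-7 * 3.141592653589793  # permeability of free space [H/m]

def reluctance(length, area, mu_r=1.0):
    """R = l / (mu0 * mu_r * A), the magnetic analogue of resistance."""
    return length / (MU0 * mu_r * area)

R_core = reluctance(length=0.30, area=1e-4, mu_r=2000.0)  # laminated core
R_gap  = reluctance(length=0.001, area=1e-4)              # air gap
mmf    = 200 * 0.5               # N = 200 turns, I = 0.5 A -> 100 A-turns
flux   = mmf / (R_core + R_gap)  # series circuit: flux = MMF / R_total [Wb]
B_gap  = flux / 1e-4             # flux density in the gap [T]
```

Even in this toy case the air gap dominates the total reluctance, which is the kind of structural insight MEC models preserve after linearization and model-order reduction.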

Relevance:

30.00%

Publisher:

Abstract:

With the increasing complexity of today's software, the software development process is becoming highly time- and resource-consuming. The increasing number of software configurations, input parameters, usage scenarios, supporting platforms, external dependencies, and versions plays an important role in expanding the costs of maintaining and repairing unforeseeable software faults. To repair software faults, developers spend considerable time in identifying the scenarios leading to those faults and root-causing the problems. While software debugging remains largely manual, this is not the case for software testing and verification. The goal of this research is to improve the software development process in general, and the software debugging process in particular, by devising techniques and methods for automated software debugging, which leverage the advances in automatic test case generation and replay. In this research, novel algorithms are devised to discover faulty execution paths in programs by utilizing already existing software test cases, which can be either automatically or manually generated. The execution traces, or alternatively, the sequence covers of the failing test cases are extracted. Afterwards, commonalities between these test case sequence covers are extracted, processed, analyzed, and then presented to the developers in the form of subsequences that may be causing the fault. The hypothesis is that code sequences that are shared between a number of faulty test cases for the same reason resemble the faulty execution path, and hence, the search space for the faulty execution path can be narrowed down by using a large number of test cases. To achieve this goal, an efficient algorithm is implemented for finding common subsequences among a set of code sequence covers. 
Optimization techniques are devised to generate shorter and more logical sequence covers, and to select subsequences with high likelihood of containing the root cause among the set of all possible common subsequences. A hybrid static/dynamic analysis approach is designed to trace back the common subsequences from the end to the root cause. A debugging tool is created to enable developers to use the approach, and integrate it with an existing Integrated Development Environment. The tool is also integrated with the environment's program editors so that developers can benefit from both the tool suggestions, and their source code counterparts. Finally, a comparison between the developed approach and the state-of-the-art techniques shows that developers need only to inspect a small number of lines in order to find the root cause of the fault. Furthermore, experimental evaluation shows that the algorithm optimizations lead to better results in terms of both the algorithm running time and the output subsequence length.
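The core step, finding a common subsequence among the sequence covers of several failing tests, can be sketched with a pairwise longest-common-subsequence fold. This is an illustrative baseline, not the thesis's optimized algorithm, and the block names are hypothetical.

```python
# Sketch: fold a classic dynamic-programming LCS over the sequence
# covers of failing test cases to obtain a candidate faulty path.

def lcs(a, b):
    """Longest common subsequence of two sequences of code blocks."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            dp[i + 1][j + 1] = (dp[i][j] + 1 if a[i] == b[j]
                                else max(dp[i][j + 1], dp[i + 1][j]))
    # Backtrack through the table to recover one LCS.
    out, i, j = [], m, n
    while i and j:
        if a[i - 1] == b[j - 1]:
            out.append(a[i - 1]); i -= 1; j -= 1
        elif dp[i - 1][j] >= dp[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return out[::-1]

def common_subsequence(covers):
    """Fold LCS over all failing-test sequence covers."""
    result = covers[0]
    for cover in covers[1:]:
        result = lcs(result, cover)
    return result

covers = [["b1", "b2", "b4", "b5", "b7"],
          ["b1", "b3", "b4", "b7"],
          ["b2", "b1", "b4", "b6", "b7"]]
suspect = common_subsequence(covers)  # candidate faulty subsequence
```

The optimizations described above would then shorten such candidates and rank them by likelihood of containing the root cause before presenting them in the IDE.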

Relevance:

30.00%

Publisher:

Abstract:

Interest in renewable energy has increased considerably in recent years due to the concerns raised over the environmental impact of conventional energy sources and their price volatility. In particular, wind power has enjoyed a dramatic global growth in installed capacity over the past few decades. Nowadays, the advancement of the wind turbine industry represents a challenge for several engineering areas, including materials science, computer science, aerodynamics, analytical design and analysis methods, testing and monitoring, and power electronics. In particular, the technological improvement of wind turbines is currently tied to the use of advanced design methodologies, allowing designers to develop new and more efficient design concepts. Integrating mathematical optimization techniques into the multidisciplinary design of wind turbines constitutes a promising way to enhance the profitability of these devices. In the literature, wind turbine design optimization is typically performed deterministically. Deterministic optimizations do not consider any degree of randomness affecting the inputs of the system under consideration, and result, therefore, in a unique set of outputs. However, given the stochastic nature of the wind and the uncertainties associated, for instance, with wind turbine operating conditions or geometric tolerances, deterministically optimized designs may be inefficient. Therefore, one of the ways to further improve the design of modern wind turbines is to take the aforementioned sources of uncertainty into account in the optimization process, achieving robust configurations with minimal performance sensitivity to factors causing variability. 
The research work presented in this thesis deals with the development of a novel integrated multidisciplinary design framework for the robust aeroservoelastic design optimization of multi-megawatt horizontal axis wind turbine (HAWT) rotors, accounting for the stochastic variability of the input variables. The design system is based on a multidisciplinary analysis module integrating several simulation tools needed to characterize the aeroservoelastic behavior of wind turbines, and to determine their economic performance by means of the levelized cost of energy (LCOE). The reported design framework is portable and modular in that any of its analysis modules can be replaced with counterparts of user-selected fidelity. The presented technology is applied to the design of a 5-MW HAWT rotor to be used at sites of wind power density class from 3 to 7, where the mean wind speed at 50 m above the ground ranges from 6.4 to 11.9 m/s. Assuming the mean wind speed to vary stochastically in this range, the rotor design is optimized by minimizing the mean and standard deviation of the LCOE. Airfoil shapes, spanwise distributions of blade chord and twist, internal structural layup and rotor speed are optimized concurrently, subject to an extensive set of structural and aeroelastic constraints. The effectiveness of the multidisciplinary and robust design framework is demonstrated by showing that the probabilistically designed turbine achieves more favorable probabilistic performance than both the initial baseline turbine and a turbine designed deterministically.
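The robust objective described above, minimizing the mean and standard deviation of the LCOE over the stochastic wind-speed range, can be illustrated with a Monte Carlo sketch. The `lcoe` model below is a made-up stand-in, not the thesis's aeroservoelastic analysis.

```python
import random, statistics

# Toy robust objective: weighted sum of mean and standard deviation of
# a cost model over mean wind speeds drawn uniformly from 6.4-11.9 m/s.

def lcoe(design, wind_speed):
    """Hypothetical cost model: lowest at the design's tuned wind speed."""
    return 50.0 + (wind_speed - design) ** 2

def robust_objective(design, n_samples=5000, w=1.0, seed=1):
    rng = random.Random(seed)
    samples = [lcoe(design, rng.uniform(6.4, 11.9)) for _ in range(n_samples)]
    return statistics.mean(samples) + w * statistics.pstdev(samples)

# A design tuned to the middle of the wind-speed range should beat one
# tuned to the edge under the robust (mean + std) objective.
mid  = robust_objective(design=9.15)
edge = robust_objective(design=6.4)
```

A deterministic optimization at a single wind speed would prefer whichever design matches that speed exactly; the robust formulation instead rewards low sensitivity across the whole range.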

Relevance:

30.00%

Publisher:

Abstract:

Evaluation of the quality of the environment is essential for human wellness, as pollutants in trace amounts can cause serious health problems. Nitrosamines are a group of compounds that are considered potential carcinogens and can be found in drinking water (as disinfection byproducts), foods, beverages and cosmetics. To monitor the levels of these compounds and minimize daily intakes, fast and reliable analytical techniques are required. As these compounds are highly polar, extraction and enrichment from aqueous environmental samples are challenging. Also, the trend in analytical techniques toward reduced sample sizes and minimal organic solvent use demands new methods of analysis. To fulfill these requirements, a new method of online preconcentration tailored to electrokinetic chromatography is introduced. In this method, the electroosmotic flow (EOF) was suppressed to increase the interaction time between analyte and micellar phase, so the only force mobilizing the neutral analytes is their interaction with the moving micelles. In the absence of EOF, the polarity of the applied potential was switched (negative or positive) to force (anionic or cationic) micelles to move toward the detector. To avoid excessive band broadening due to the longer analysis times caused by slow-moving micelles, auxiliary pressure was introduced to boost micelle movement toward the detector using an in-house designed and built apparatus. Applying the external auxiliary pressure significantly reduced the analysis times without compromising separation efficiency. Parameters such as surfactant type, background electrolyte (BGE) composition, capillary type, matrix effects and organic modifiers were evaluated in the optimization of the method. 
The enrichment factors for the targeted analytes were impressive; in particular, cationic surfactants were shown to be suitable for the analysis of nitrosamines due to their ability to act as hydrogen bond donors. Ammonium perfluorooctanoate (APFO) also showed remarkable results in terms of peak shapes and numbers of theoretical plates. It was shown that the separation results were best when a high-conductivity sample was paired with a BGE of lower conductivity. Using higher surfactant concentrations (up to 200 mM SDS) than usual (50 mM SDS) for micellar electrokinetic chromatography (MEKC) improved the sweeping. A new method for micro-extraction and enrichment of highly polar neutral analytes (N-nitrosamines in particular) based on three-phase drop micro-extraction was introduced and its performance studied. In this method, a new device using easy-to-find components was fabricated, and its operation and application demonstrated. Compared to conventional extraction methods (liquid-liquid extraction), consumption of organic solvents and operation times were significantly lower.

Relevance:

30.00%

Publisher:

Abstract:

This dissertation investigates the connection between spectral analysis and frame theory. When considering the spectral properties of a frame, we present a few novel results relating to the spectral decomposition. We first show that scalable frames have the property that the inner product of the scaling coefficients and the eigenvectors must equal the inverse eigenvalues. From this, we prove a similar result when an approximate scaling is obtained. We then focus on the optimization problems inherent to scalable frames by first showing that there is an equivalence between scaling a frame and optimization problems with a non-restrictive objective function. Various objective functions are considered, and an analysis of the solution type is presented. For linear objectives, we can encourage sparse scalings, and with barrier objective functions, we force dense solutions. We further consider frames in high dimensions, and derive various solution techniques. From here, we restrict ourselves to various frame classes, to add more specificity to the results. Using frames generated from distributions allows for the placement of probabilistic bounds on scalability. For discrete distributions (Bernoulli and Rademacher), we bound the probability of encountering an orthonormal basis (ONB), and for continuous symmetric distributions (Uniform and Gaussian), we show that symmetry is retained in the transformed domain. We also prove several hyperplane-separation results. With the theory developed, we discuss graph applications of the scalability framework. We make a connection with graph conditioning, and show the infeasibility of the problem in the general case. After a modification, we show that any complete graph can be conditioned. We then present a modification of standard PCA (robust PCA) developed by Candès, and give some background into Electron Energy-Loss Spectroscopy (EELS). 
We design a novel scheme for the processing of EELS through robust PCA and least-squares regression, and test this scheme on biological samples. Finally, we take the idea of robust PCA and apply the technique of kernel PCA to perform robust manifold learning. We derive the problem and present an algorithm for its solution. There is also discussion of the differences with RPCA that make theoretical guarantees difficult.
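The notion of a scalable frame discussed above can be checked numerically: a frame {f_i} is scalable when weights c_i make the scaled frame operator, the sum of c_i² f_i f_iᵀ, equal the identity (a Parseval frame). A standard small example, assumed here for illustration, is the Mercedes-Benz frame in R² (three unit vectors 120° apart), which admits equal scalings.

```python
import math

# Verify scalability of the Mercedes-Benz frame in R^2: with equal
# weights c = sqrt(2/3), the scaled frame operator equals the identity.

def frame_operator(vectors, weights):
    """S = sum_i c_i^2 * f_i f_i^T for vectors f_i and weights c_i."""
    d = len(vectors[0])
    S = [[0.0] * d for _ in range(d)]
    for v, c in zip(vectors, weights):
        for i in range(d):
            for j in range(d):
                S[i][j] += (c ** 2) * v[i] * v[j]
    return S

angles = [math.pi / 2,
          math.pi / 2 + 2 * math.pi / 3,
          math.pi / 2 + 4 * math.pi / 3]
mb_frame = [[math.cos(a), math.sin(a)] for a in angles]
c = math.sqrt(2.0 / 3.0)                   # equal scaling suffices here
S = frame_operator(mb_frame, [c, c, c])    # approximately the identity
```

For general frames the dissertation's optimization formulations search for such weights; this check only confirms a candidate scaling once found.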

Relevance:

30.00%

Publisher:

Abstract:

The central motif of this work is prediction and optimization in the presence of multiple interacting intelligent agents. We use the phrase `intelligent agents' to imply, in some sense, a `bounded rationality', the exact meaning of which varies depending on the setting. Our agents may not be `rational' in the classical game-theoretic sense, in that they do not always optimize a global objective. Rather, they rely on heuristics, as is natural for human agents or even software agents operating in the real world. Within this broad framework we study the problem of influence maximization in social networks, where the behavior of agents is myopic but complication stems from the structure of interaction networks. In this setting, we generalize two well-known models and give new algorithms and hardness results for our models. Then we move on to models where the agents reason strategically but are faced with considerable uncertainty. For such games, we give a new solution concept and analyze a real-world game using our techniques. Finally, the richest model we consider is that of Network Cournot Competition, which deals with strategic resource allocation in hypergraphs, where agents reason strategically and their interaction is specified indirectly via the players' utility functions. For this model, we give the first equilibrium computability results. In all of the above problems, we assume that payoffs for the agents are known. However, for real-world games, getting the payoffs can be quite challenging. To this end, we also study the inverse problem of inferring payoffs, given game history. We propose and evaluate a data-analytic framework, and we show that it is fast and performant.
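The influence-maximization setting above is commonly attacked with greedy seed selection. A minimal sketch, assuming the deterministic special case where influence equals reachability in a directed graph (the independent-cascade model with propagation probability 1), with a made-up toy network:

```python
# Greedy influence maximization, deterministic-reachability special case.

def reachable(graph, seeds):
    """All nodes reachable (influenced) from the seed set."""
    seen, stack = set(seeds), list(seeds)
    while stack:
        u = stack.pop()
        for v in graph.get(u, []):
            if v not in seen:
                seen.add(v); stack.append(v)
    return seen

def greedy_seeds(graph, k):
    """Repeatedly add the seed with the largest marginal influence gain."""
    nodes = set(graph) | {v for vs in graph.values() for v in vs}
    seeds = []
    for _ in range(k):
        best = max(nodes - set(seeds),
                   key=lambda u: len(reachable(graph, seeds + [u])))
        seeds.append(best)
    return seeds

graph = {"a": ["b", "c"], "b": ["d"], "e": ["f"], "f": ["g"]}
picked = greedy_seeds(graph, k=2)   # one seed per connected component
```

Under submodular influence functions this greedy rule carries a (1 − 1/e) approximation guarantee; the thesis's generalized models modify the diffusion process while keeping this algorithmic skeleton relevant.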

Relevance:

30.00%

Publisher:

Abstract:

Dynamically typed programming languages such as JavaScript and Python defer type checking until run time. To optimize the performance of these languages, virtual machine implementations for dynamic languages must try to eliminate redundant dynamic type tests. This is usually done using a type inference analysis. However, such analyses are often costly and involve trade-offs between compilation time and the precision of the results obtained. This has led to the design of increasingly complex VM architectures. We propose lazy basic block versioning, a simple just-in-time compilation technique that effectively eliminates redundant dynamic type tests on critical execution paths. This new approach lazily generates specialized versions of basic blocks while propagating contextualized type information. Our technique does not require costly program analyses, is not constrained by the precision limitations of traditional type inference analyses, and avoids the complexity of speculative optimization techniques. Three extensions are made to basic block versioning to give it interprocedural optimization capabilities. A first extension gives it the ability to attach type information to object properties and global variables. Entry-point specialization then allows type information to be passed from calling functions to called functions. Finally, call-continuation specialization transmits the types of return values from callees back to callers at no dynamic cost. 
We demonstrate empirically that these extensions allow basic block versioning to eliminate more dynamic type tests than any static type inference analysis.
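The lazy specialization idea can be illustrated schematically: a version of each basic block is generated on first execution for the incoming type context, so later executions in that context skip the dynamic type test. This mirrors the concept only; a real JIT emits machine code, not closures, and all names below are illustrative.

```python
# Schematic lazy basic block versioning: specialized block versions are
# created on demand, keyed by (block name, incoming type context).

versions = {}  # (block_name, type_context) -> specialized callable

def get_version(block_name, type_context, generate):
    key = (block_name, type_context)
    if key not in versions:            # lazily specialize on first request
        versions[key] = generate(type_context)
    return versions[key]

def generic_add(a, b):
    """Generic path: performs the dynamic type test every call."""
    if isinstance(a, (int, float)) and isinstance(b, (int, float)):
        return a + b
    raise TypeError("unsupported operand types")

def gen_add(ctx):
    if ctx == ("int", "int"):
        return lambda a, b: a + b      # specialized: no runtime type test
    return lambda a, b: generic_add(a, b)

add_ii = get_version("add", ("int", "int"), gen_add)
result = add_ii(2, 3)                  # takes the specialized path
```

Propagating the context from block to block (and, with the extensions above, across call boundaries) is what lets whole chains of type tests disappear.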

Relevance:

30.00%

Publisher:

Abstract:

Today, providing drinking water and process water is one of the major problems in most countries; surface water often needs to be treated to achieve the necessary quality, and technological as well as financial difficulties place great restrictions on the operation of treatment units. Although water supply through simple and cheap systems has been an important objective of many scientific and research centers around the world, a large share of the population in developing countries, especially in rural areas, still does not benefit from good-quality water. One of the large and available sources of acceptable water is sea water. There are two ways to treat sea water: evaporation and reverse osmosis (RO). Nowadays RO systems are widely used for desalination because of their low cost and ease of operation and maintenance. Sea water should be pretreated before RO plants, because impurities in raw sea water can reduce the yield of the membranes in the RO system. The subject of this research may be useful in this respect, and we hope to achieve complete success in the design and construction of useful pretreatment systems for RO plants. One of the most important units in a sea water pretreatment plant is filtration. The conventional method is the pressurized sand filter, and this research concerns a new filtration approach called continuous backwash sand filtration (CBWSF). The CBWSF designed and tested in this research may be used more economically and with less difficulty. It consists of two main parts: a shell body, and a central part comprising an airlift pump, raw water feeding pipe, air supply hose, backwash chamber and sand washer, as well as inlet and outlet connections. The CBWSF is a continuously operating filter, i.e. the filter does not have to be taken out of operation for backwashing or cleaning. Inlet water is fed through the sand bed while the sand bed moves downwards. 
The water is filtered while the sand becomes dirty; simultaneously, the dirty sand is cleaned in the sand washer and the suspended solids are discharged in the backwash water. We analyze the behavior of the CBWSF in the pretreatment of sea water in place of a pressurized sand filter. One important factor that is harmful to RO membranes is bio-fouling, which is measured by the Silt Density Index (SDI). This research focused on decreasing SDI and turbidity (NTU). Based on this goal, a pretreatment prototype was designed and manufactured for testing; the system design was based mainly on the design fundamentals of the CBWSF. The automatic backwash sand filter can be used in both small and large water supply schemes. In large water treatment plants, the filter units perform the filtration and backwash stages separately, while in small treatment plants the unit is usually compacted to achieve lower energy consumption. The analysis of the system showed that it may be used feasibly for water treatment, especially for limited populations. Its construction is rapid, simple and economic, and its performance is high because no mobile mechanical part is used in it, so it may be proposed as an effective method to improve water quality, and consequently the hygiene level, in the remote places of the country.
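The Silt Density Index mentioned above has a standard definition (ASTM D4189): the time to filter a fixed volume through a 0.45 µm membrane is measured at the start (t_i) and again after T minutes of continuous flow (t_f), typically with T = 15 min. A small helper makes the calculation explicit; the example timings are hypothetical.

```python
# Silt Density Index (SDI) per the standard plugging-rate definition.

def silt_density_index(t_initial, t_final, elapsed_min=15.0):
    """SDI = 100 * (1 - t_initial / t_final) / T  [% per minute]."""
    return 100.0 * (1.0 - t_initial / t_final) / elapsed_min

# Example: collecting 500 mL takes 30 s at the start of the test and
# 45 s after 15 minutes of flow through the membrane.
sdi = silt_density_index(30.0, 45.0)   # low single-digit SDI
```

RO membrane manufacturers generally specify a maximum feed SDI (commonly in the range of about 3 to 5), which is why the pretreatment stage targets SDI reduction.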

Relevance:

30.00%

Publisher:

Abstract:

Tomato (Lycopersicon esculentum Mill.) is the second most important vegetable crop worldwide and a rich source of hydrophilic (H) and lipophilic (L) antioxidants. The H fraction is constituted mainly by ascorbic acid and soluble phenolic compounds, while the L fraction contains carotenoids (mostly lycopene), tocopherols, sterols and lipophilic phenolics [1,2]. To obtain these antioxidants it is necessary to follow appropriate extraction methods and processing conditions. In this regard, this study aimed at determining the optimal extraction conditions for H and L antioxidants from a tomato surplus. A 5-level full factorial design with 4 factors (extraction time (t, 0-20 min), temperature (T, 60-180 °C), ethanol percentage (Et, 0-100%) and solid/liquid ratio (S/L, 5-45 g/L)) was implemented and response surface methodology used for analysis. Extractions were carried out in a Biotage Initiator Microwave apparatus. The concentration-time response methods of crocin and β-carotene bleaching were applied (using 96-well microplates), since they are suitable in vitro assays to evaluate the antioxidant activity of H and L matrices, respectively [3]. Measurements were carried out at intervals of 3, 5 and 10 min (initiation, propagation and asymptotic phases), during a time frame of 200 min. The parameters Pm (maximum protected substrate) and Vm (amount of protected substrate per g of extract) and the so-called IC50 were used to quantify the response. The optimum extraction conditions were as follows: t=2.25 min, T=149.2 °C, Et=99.1% and S/L=15.0 g/L for H antioxidants; and t=15.4 min, T=60.0 °C, Et=33.0% and S/L=15.0 g/L for L antioxidants. The proposed model was validated based on the high values of the adjusted coefficient of determination (R²adj>0.91) and on the non-significant differences between predicted and experimental values. It was also found that the antioxidant capacity of the H fraction was much higher than that of the L fraction.
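The response-surface step used above can be illustrated in one factor: fit a quadratic model to measured responses at several levels and take the stationary point of the model as the predicted optimum. The data below are hypothetical, and the real study fits a four-factor surface rather than this one-dimensional sketch.

```python
# One-factor response-surface sketch: least-squares quadratic fit
# y = b0 + b1*x + b2*x^2, optimum at the stationary point x = -b1/(2*b2).

def fit_quadratic(xs, ys):
    """Solve the 3x3 normal equations for a quadratic least-squares fit."""
    n = len(xs)
    s = lambda p: sum(x ** p for x in xs)
    sy = lambda p: sum((x ** p) * y for x, y in zip(xs, ys))
    A = [[n, s(1), s(2)], [s(1), s(2), s(3)], [s(2), s(3), s(4)]]
    b = [sy(0), sy(1), sy(2)]
    # Gaussian elimination (adequate for a small 3x3 system).
    for i in range(3):
        for j in range(i + 1, 3):
            f = A[j][i] / A[i][i]
            A[j] = [aj - f * ai for aj, ai in zip(A[j], A[i])]
            b[j] -= f * b[i]
    coef = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):  # back substitution
        coef[i] = (b[i] - sum(A[i][j] * coef[j]
                              for j in (1, 2) if j > i)) / A[i][i]
    return coef

temps = [60, 90, 120, 150, 180]        # extraction temperature levels [°C]
yields = [3.1, 4.6, 5.3, 5.2, 4.0]     # hypothetical antioxidant response
b0, b1, b2 = fit_quadratic(temps, yields)
t_opt = -b1 / (2 * b2)                 # predicted optimal temperature
```

With multiple factors the same idea extends to a full second-order polynomial whose stationary point (or constrained optimum) gives the kind of optimal condition sets reported above.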

Relevance:

30.00%

Publisher:

Abstract:

Tomato (Lycopersicon esculentum Mill.), apart from being a functional food rich in carotenoids, vitamins and minerals, is also an important source of phenolic compounds [1,2]. As antioxidants, these functional molecules play an important role in the prevention of human pathologies and have many applications in the nutraceutical, pharmaceutical and cosmeceutical industries. Therefore, the recovery of added-value phenolic compounds from natural sources, such as tomato surplus or industrial by-products, is highly desirable. Herein, the microwave-assisted extraction of the main phenolic acids and flavonoids from tomato was optimized. A 5-level full factorial Box-Behnken design was implemented and response surface methodology used for analysis. The extraction time (0-20 min), temperature (60-180 °C), ethanol percentage (0-100%), solid/liquid ratio (5-45 g/L) and microwave power (0-400 W) were studied as independent variables. The phenolic profile of the studied tomato variety was initially characterized by HPLC-DAD-ESI/MS [2]. Then, the effect of the different extraction conditions, as defined by the experimental design, on the target compounds was monitored by HPLC-DAD, using their UV spectra and retention times for identification and a series of calibrations based on external standards for quantification. The proposed model was successfully implemented and statistically validated. The microwave power had no effect on the extraction process. Compared with the optimal extraction conditions for flavonoids, which demanded a short processing time (2 min), a low temperature (60 °C), a low solid/liquid ratio (5 g/L), and pure ethanol, phenolic acids required a longer processing time (4.38 min), a higher temperature (145.6 °C), a higher solid/liquid ratio (45 g/L), and water as the extraction solvent. Additionally, the studied tomato variety was highlighted as a source of added-value phenolic acids and flavonoids.