998 results for Area Optimization


Relevance: 30.00%

Abstract:

Small centrifugal compressors are increasingly used in many industrial systems because of their higher efficiency and better off-design performance compared with piston and scroll compressors, as well as their higher work coefficient per stage than axial compressors. Higher efficiency is always the designer's aim. In the present work, four features of a small centrifugal compressor that compresses a heavy-molecular-weight real gas were investigated in order to achieve higher efficiency. Two concern the impeller: the tip clearance and the circumferential position of the splitter blade. The other two concern the diffuser: the pinch shape and the vane shape. Computational fluid dynamics is applied in this study. The Reynolds-averaged Navier-Stokes flow solver Finflo is used with a quasi-steady approach, and Chien's k-ε turbulence model is used to model the turbulence. A new practical real gas model is presented: it is easily generated, its accuracy is controllable, and it is fairly fast. The numerical results and measurements show good agreement. The influence of tip clearance on the performance of a small compressor is clear: the pressure ratio and efficiency decrease as the tip clearance is increased, while the total enthalpy rise remains almost constant. The drop in pressure ratio and efficiency is larger at higher mass flow rates and smaller at lower ones. The flow angles at the inlet and outlet of the impeller increase with tip clearance. The detailed flow fields show that leakage flow is the main reason for the performance drop: as the tip clearance grows, the secondary flow region expands, the area of the main flow is compressed, and the flow uniformity decreases. A detailed study shows that the leakage flow rate is higher near the exit of the impeller than near its inlet.
Based on this observation, a new partially shrouded impeller is used, shrouded near its exit. The results show that the flow field near the impeller exit is greatly changed by the partial shroud, and better performance is achieved than with the unshrouded impeller. The loading distribution on the impeller blade and the flow field in the impeller are changed by moving the splitter circumferentially. Moving the splitter slightly towards the suction side of the long blade improves the performance of the compressor, while the total enthalpy rise is reduced if only the leading edge of the splitter is moved to the suction side. The performance of the compressor is decreased if the blade is bent away from the radial direction at the leading edge of the splitter. The total pressure rise and the enthalpy rise of the compressor are increased if a pinch is used at the diffuser inlet. Among the five pinch-shape configurations studied, the straight-line pinch gives the highest efficiency at the design and lower mass flow rates, while the concave pinch is the most efficient at higher mass flow rates. A sharp corner in the pinch is the main cause of efficiency loss and should be avoided. Applying a pinch also reduces the spanwise variation of the flow angles entering the diffuser. A three-dimensional low-solidity twisted vaned diffuser is designed to match the flow angles entering the diffuser. The numerical results show that the pressure recovery in the twisted diffuser is higher than in a conventional low-solidity vaned diffuser, which also yields higher efficiency. Investigation of the detailed flow fields shows that at lower mass flow rates separation occurs later in the twisted diffuser than in the conventional low-solidity vaned diffuser, which suggests a wider flow range for the twisted diffuser.
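The reported trend, pressure ratio and efficiency falling with tip clearance while the total enthalpy rise stays constant, follows directly from the definition of isentropic efficiency. A minimal sketch, using an ideal-gas simplification in place of the real-gas model of the thesis and purely illustrative numbers:

```python
def isentropic_efficiency(pr, dh_actual, T1=293.15, cp=1005.0, gamma=1.4):
    """Total-to-total isentropic efficiency of a compressor stage.

    pr        : total pressure ratio
    dh_actual : actual total enthalpy rise [J/kg]
    Ideal-gas simplification; the thesis itself uses a real-gas model.
    """
    dh_ideal = cp * T1 * (pr ** ((gamma - 1.0) / gamma) - 1.0)
    return dh_ideal / dh_actual

# As tip clearance grows the pressure ratio drops while the enthalpy rise
# stays roughly constant, so the efficiency falls (illustrative numbers).
eta_small_gap = isentropic_efficiency(pr=3.0, dh_actual=130e3)
eta_large_gap = isentropic_efficiency(pr=2.8, dh_actual=130e3)
```

The same mechanism explains why the drop is worse at high mass flow, where leakage over the blade tip is strongest.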

Relevance: 30.00%

Abstract:

The threats posed by global warming motivate different stakeholders to address and control them. This Master's thesis focuses on analyzing carbon trade permits in an optimization framework. The studied model determines the optimal emission and uncertainty levels that minimize the total cost. Research questions are formulated and answered using different optimization tools. The model is developed and calibrated with available consistent data in the area of carbon emission technology and control; data and some basic modeling assumptions were extracted from reports and the existing literature. Data collected from the countries in the Kyoto treaty are used to estimate the cost functions. The theory and methods of constrained optimization are briefly presented. A two-level optimization problem (within each party and between the parties) is analyzed using several optimization methods. The combined cost optimization between the parties leads to a multivariate model and calls for advanced techniques; Lagrangian methods, Sequential Quadratic Programming and the Differential Evolution (DE) algorithm are considered. The role of inherent measurement uncertainty in the monitoring of emissions is discussed, and we briefly investigate an approach in which emission uncertainty is described in a stochastic framework. MATLAB is used to provide visualizations, including the relationship between the decision variables and the objective function values. Interpretations in the context of carbon trading are briefly presented. Suggestions for future work are given in stochastic modeling, emission trading and the coupled analysis of energy prices and carbon permits.
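The equal-marginal-cost logic behind the combined cost optimization between parties can be sketched with hypothetical quadratic abatement-cost curves; the closed-form Lagrangian solution below is an illustration, not the calibrated model of the thesis:

```python
def optimal_split(baselines, curvatures, cap):
    """Least-cost allocation of a joint emission cap (Lagrangian, closed form).

    cost_i(e_i) = curvatures[i] * (baselines[i] - e_i)**2, a hypothetical
    quadratic abatement cost.  At the optimum every party's marginal cost
    equals the multiplier lambda on the joint cap constraint.
    """
    need = sum(baselines) - cap                      # total abatement required
    lam = need / sum(1.0 / (2.0 * c) for c in curvatures)
    return [b - lam / (2.0 * c) for b, c in zip(baselines, curvatures)]

# Two parties with assumed baselines and cost curvatures, one joint cap.
emissions = optimal_split([100.0, 80.0], [0.5, 1.2], cap=120.0)
```

The cheaper abater (lower curvature) carries more of the abatement burden, which is exactly the rationale for trading permits between parties.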

Relevance: 30.00%

Abstract:

The last decade has shown that the global paper industry needs new processes and products in order to reassert its position. As the paper markets in Western Europe and North America have stabilized, competition has tightened. Along with the development of more cost-effective processes and products, new process design methods are also required to break the old molds and create new ideas. This thesis discusses the development of a process design methodology based on simulation and optimization methods. A bi-level optimization problem and a solution procedure for it are formulated and illustrated. Computational models and simulation are used to capture the phenomena inside a real process, and mathematical optimization is exploited to find the best process structures and control principles. Dynamic process models are used inside the bi-level optimization problem, which is dynamic and multiobjective owing to the nature of papermaking processes. The numerical experiments show that the bi-level optimization approach is useful for various problems related to process design and optimization. Here, the design methodology is applied to a constrained process area of a papermaking line; however, because the methodology is fully generalized and easily modified, it is applicable to all types of industrial processes, e.g., the design of biorefineries.
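The bi-level structure, an outer search over process designs with an inner optimization of the operation for each candidate design, can be sketched with a toy static model (the thesis itself uses dynamic, multiobjective process models):

```python
def inner_best(design):
    """Lower level: best control setting for a fixed design.
    A static quadratic response stands in for the dynamic process models
    of the thesis (purely illustrative)."""
    grid = [i / 100.0 for i in range(101)]
    return min(grid, key=lambda u: (u - 0.5 * design) ** 2 + 0.1 * u)

def bilevel_search(designs):
    """Upper level: choose the design whose optimally controlled process
    gives the best overall objective (capital cost minus production value
    plus operating cost; all coefficients are assumptions)."""
    def total(d):
        u = inner_best(d)
        return 0.3 * d ** 2 - 0.8 * d + (u - 0.5 * d) ** 2 + 0.1 * u
    return min(designs, key=total)

best_design = bilevel_search([0.0, 0.5, 1.0])
```

The key point is that every upper-level candidate is evaluated only after its lower-level problem has been solved, which is what makes bi-level problems expensive and simulation-driven in practice.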

Relevance: 30.00%

Abstract:

The Arctic region is becoming a very active area of industrial development, since it may contain approximately 15-25% of the world's hydrocarbon and other valuable natural resources, which are in great demand nowadays. Harsh operating conditions make the Arctic region difficult to access: temperatures can drop below -50 °C in winter, and structures face various additional loads. As a result, new and modified metallic materials are being introduced, which can make proper welding challenging. Steel is still the most widely used material in the Arctic regions owing to its high mechanical properties, low cost and manufacturability. Moreover, recent developments in steelmaking make it possible to produce microalloyed high-strength steels with yield strengths up to 1100 MPa that can be operated at temperatures down to -60 °C while retaining reasonable weldability, ductility and suitable impact toughness, the most crucial property for Arctic usability. For many years, arc welding was the dominant joining method for metallic materials. Recently, other joining methods have been successfully introduced into welding production to meet growing industrial demands, and one of them is laser-arc hybrid welding. Laser-arc hybrid welding combines the advantages and eliminates the disadvantages of both joining methods: it produces fewer distortions, reduces the need for edge preparation, generates a narrower heat-affected zone, and increases welding speed, and hence productivity, significantly. Moreover, because a filler wire is easily added, the mechanical properties of the joints can be tailored to produce suitable quality. With laser-arc hybrid welding it is even possible to achieve a weld metal matching the base material with low-alloyed welding wires, without excessive softening of the HAZ in high-strength steels.
As a result, laser-arc hybrid welding is among the most attractive welding technologies today; it is already operating with great success in the automotive and shipbuilding industries, and in the future it can be extended to the offshore, pipe-laying and heavy-equipment industries for Arctic environments. CO2 and Nd:YAG laser sources combined with a gas metal arc source have been widely used over the past two decades. More recently, fiber laser sources have offered high power output with excellent beam quality, very high electrical efficiency, low maintenance expenses and greater mobility thanks to fiber optics. Fiber laser-arc hybrid processes therefore offer even broader advantages and applications; however, the information about fiber or disk laser-arc hybrid welding is still very limited. The objective of this Master's thesis is to study fiber laser-MAG hybrid welding parameters in order to understand the resulting mechanical properties and weld quality. Only ferrous materials are reviewed, and a qualitative methodological approach is used. This study demonstrates that laser-arc hybrid welding is suitable for welding steels of many types, thicknesses and strengths with acceptable mechanical properties and very high productivity. New developments in the fiber laser-arc hybrid process offer extended capabilities over a CO2 laser combined with an arc. This work can be used as a guideline to hybrid welding technology, with a comprehensive study of the effect of the welding parameters on joint quality.
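One quantity that ties the laser and arc parameters discussed here together is the combined heat input per unit length of weld. A sketch with commonly used process efficiency factors, which are assumptions rather than values from this thesis:

```python
def hybrid_heat_input(laser_power_w, arc_voltage_v, arc_current_a,
                      travel_speed_mm_s, eta_laser=0.8, eta_arc=0.8):
    """Combined heat input of a laser-arc hybrid weld in kJ/mm.

    The efficiency factors are illustrative assumptions; real values depend
    on the laser source, shielding gas and joint geometry.
    """
    power_w = eta_laser * laser_power_w + eta_arc * arc_voltage_v * arc_current_a
    return power_w / travel_speed_mm_s / 1000.0

# 5 kW fiber laser + 28 V / 250 A arc at 25 mm/s travel speed (assumed values).
q = hybrid_heat_input(5000.0, 28.0, 250.0, 25.0)
```

The narrower heat-affected zone of the hybrid process follows from the high travel speeds it allows: for a given joint, a faster weld means a lower heat input per millimetre.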

Relevance: 30.00%

Abstract:

Almost every design, planning and management problem in technical and organizational systems involves several conflicting goals or interests, and multicriteria decision models now represent a rapidly developing area of operations research. When solving practical optimization problems, it is necessary to take into account various kinds of uncertainty arising from lack of data, inadequacy of mathematical models to the real processes, calculation errors, etc. In practice, this uncertainty often leads to undesirable outcomes in which the solutions are very sensitive to any changes in the input parameters; investment management is one example. Stability analysis of multicriteria discrete optimization problems investigates how the solutions found behave in response to changes in the initial data (input parameters). This thesis is devoted to stability analysis of the problem of selecting investment project portfolios, which are optimized by considering different types of risk and the efficiency of the investment projects. The stability analysis follows two approaches, qualitative and quantitative. The qualitative approach describes the behavior of solutions under small perturbations of the initial data: stability is defined in terms of the existence of a neighborhood in the initial data space such that any perturbed problem from this neighborhood preserves the set of efficient solutions of the initial problem. The quantitative approach studies measures such as the stability radius, which gives the limits of perturbations of the input parameters that do not change the set of efficient solutions. In the present thesis several results were obtained, including attainable bounds for the stability radii of Pareto-optimal and lexicographically optimal portfolios of the investment problem with Savage's criterion, Wald's criterion and the criterion of extreme optimism.
In addition, special classes of the problem for which the stability radii can be expressed by explicit formulae were identified. The investigations used different combinations of the Chebyshev, Manhattan and Hölder metrics, which made it possible to monitor perturbations of the input parameters in different ways.
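The quantitative notion of a stability radius can be illustrated numerically: perturb the input data within a Chebyshev ball and check whether the set of optimal portfolios changes. The following Monte Carlo probe is a toy stand-in for the exact attainable bounds derived in the thesis:

```python
import itertools
import random

def wald_value(portfolio, payoff):
    """Wald's maximin criterion: worst-case total payoff over scenarios."""
    return min(sum(payoff[p][s] for p in portfolio)
               for s in range(len(payoff[0])))

def wald_optima(payoff, size):
    combos = list(itertools.combinations(range(len(payoff)), size))
    best = max(wald_value(c, payoff) for c in combos)
    return {c for c in combos if wald_value(c, payoff) == best}

def seems_stable(payoff, size, radius, trials=2000, seed=0):
    """Sample perturbations with Chebyshev norm <= radius and report whether
    any of them makes a new portfolio optimal (a sampled lower-bound check,
    not the exact stability radius)."""
    rng = random.Random(seed)
    base = wald_optima(payoff, size)
    for _ in range(trials):
        pert = [[v + rng.uniform(-radius, radius) for v in row]
                for row in payoff]
        if not wald_optima(pert, size) <= base:
            return False
    return True

# Three hypothetical projects, two scenarios; portfolios of size one.
payoff = [[3.0, 1.0], [2.0, 2.0], [0.0, 4.0]]
```

Here the Wald gap between the best and second-best project is 1.0, so any per-entry perturbation below 0.5 provably preserves the optimum, while large perturbations break it.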

Relevance: 30.00%

Abstract:

The cellular structure of healthy food products, with added dietary fiber and low in calories, is an important factor in quality assessment, and it can be quantified by image analysis of visual texture. This study compares image analysis techniques (binarization using Otsu's method and the default ImageJ algorithm, a variation of the iterative intermeans method) for quantifying differences in the crumb structure of breads made with different percentages of whole-wheat flour and fat replacer, and discusses the behavior of the parameters number of cells, mean cell area, cell density, and circularity using response surface methodology. Comparative analysis showed a significant difference between the parameters obtained with the Otsu and default ImageJ algorithms. The Otsu method represented the crumb structure of the analyzed breads more reliably than the default ImageJ algorithm, and is thus the more suitable choice for structural representation of the crumb texture.
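Otsu's method itself is compact enough to sketch: it picks the threshold that maximises the between-class variance of the grayscale histogram, after which the dark cells can be measured. A self-contained illustration on a synthetic crumb-like image (the real study works on scanned bread slices):

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's method: the threshold maximising between-class variance
    of an 8-bit grayscale image."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    omega = np.cumsum(prob)                    # class-0 probability
    mu = np.cumsum(prob * np.arange(256))      # class-0 mean times omega
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    return int(np.nanargmax(sigma_b))

# Synthetic "crumb" image: a dark cell (~40) on a bright matrix (~200).
rng = np.random.default_rng(0)
img = np.full((64, 64), 200, dtype=np.uint8)
img[16:32, 16:32] = 40
img = np.clip(img + rng.normal(0, 5, img.shape), 0, 255).astype(np.uint8)
t = otsu_threshold(img)
cell_fraction = float((img < t).mean())        # porosity proxy
```

Parameters such as number of cells, mean cell area and circularity are then computed on the binarized image, which is why the choice of thresholding algorithm changes all of them at once.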

Relevance: 30.00%

Abstract:

Wind power is a rapidly developing, low-emission form of energy production. In Finland, the official objective is to increase wind power capacity from the current 1,005 MW to 3,500-4,000 MW by 2025. By the end of April 2015, the total capacity of all wind power projects being planned in Finland had surpassed 11,000 MW. As the number of projects in Finland is at a record high, an increasing amount of infrastructure is also being planned and constructed. Traditionally, these planning operations are conducted using manual, labor-intensive work methods that are prone to subjectivity. This study introduces a GIS-based methodology for determining optimal paths to support the planning of onshore wind park infrastructure alignment in the Nordanå-Lövböle wind park on the island of Kemiönsaari in Southwest Finland. The presented methodology uses a least-cost path (LCP) algorithm to search for optimal paths within a high-resolution real-world terrain dataset derived from airborne lidar scanning, and planning data is used to provide a realistic planning framework for the analysis. In order to produce realistic results, the physiographic and planning datasets are standardized and weighted according to qualitative suitability assessments using the methods and practices of multi-criteria evaluation (MCE). The results are presented as scenarios corresponding to various planning objectives. Finally, the methodology is documented using tools of Business Process Management (BPM). The results show that the presented methodology can effectively search for and identify extensive, 20 to 35 kilometer long networks of paths that correspond to given optimization objectives in the study area, and the use of high-resolution terrain data produces a more objective and more detailed path alignment plan.
This study demonstrates that the presented methodology can be practically applied to support a wind power infrastructure alignment planning process. The six-phase structure of the methodology allows straightforward incorporation of different optimization objectives, and the methodology combines quantitative and qualitative data well. Additionally, the careful documentation shows how the methodology can be evaluated and developed as a business process. This thesis also shows that more research into algorithm-based, more objective methods for the planning of infrastructure alignment is desirable, as technological development has only recently started to realize the potential of these computational methods.
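The core of an LCP analysis is an accumulated-cost search over a weighted raster. A minimal Dijkstra sketch on a toy cost grid (the actual study adds MCE-weighted suitability layers and lidar-derived terrain):

```python
import heapq

def least_cost_path(cost, start, goal):
    """Dijkstra over a 4-connected cost raster: the accumulated-cost core
    of a GIS least-cost-path analysis (toy version of the workflow)."""
    rows, cols = len(cost), len(cost[0])
    dist = {start: cost[start[0]][start[1]]}
    prev, pq = {}, [(dist[start], start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            break
        if d > dist.get((r, c), float("inf")):
            continue                      # stale heap entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(pq, (nd, (nr, nc)))
    path, node = [], goal
    while node != start:
        path.append(node)
        node = prev[node]
    return [start] + path[::-1], dist[goal]

# Weighted raster: high values stand for unsuitable terrain (e.g. wetland).
raster = [[1, 1, 1],
          [9, 9, 1],
          [1, 1, 1]]
path, total = least_cost_path(raster, (0, 0), (2, 0))
```

The path detours around the high-cost cells even though the direct route is shorter, which is exactly how MCE-weighted suitability steers the alignment in the study.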

Relevance: 30.00%

Abstract:

This research focuses on generating aesthetically pleasing images in virtual environments using the particle swarm optimization (PSO) algorithm. PSO is a stochastic, population-based search algorithm inspired by the flocking behavior of birds. In this research, we implement swarms of cameras flying through a virtual world in search of aesthetically pleasing images. Virtual world exploration using particle swarm optimization is a new research area of interest to both the scientific and artistic communities. Aesthetic rules such as the rule of thirds, subject matter, colour similarity and the horizon line are combined into a multi-objective problem that is evaluated on rendered images. A new multi-objective PSO algorithm, the sum-of-ranks PSO, is introduced and empirically compared with other single-objective and multi-objective swarm algorithms. An advantage of the sum-of-ranks PSO is its usefulness for the high-dimensional problems arising in this research. Through many experiments, we show that our approach is capable of automatically producing images satisfying a variety of supplied aesthetic criteria.
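The rank-aggregation idea behind a sum-of-ranks PSO can be sketched as follows: each particle is ranked separately under every objective, and the summed ranks replace a weighted scalarisation. The code below is a simplified interpretation, with two hypothetical objectives standing in for the aesthetic criteria:

```python
import random

def sum_of_ranks(scores):
    """Rank every candidate per objective (lower is better), then sum ranks."""
    totals = [0] * len(scores)
    for j in range(len(scores[0])):
        for rank, i in enumerate(sorted(range(len(scores)),
                                        key=lambda i: scores[i][j])):
            totals[i] += rank
    return totals

def ranks_pso(objectives, dim, bounds, swarm=20, iters=60, seed=1):
    """Sketch of a sum-of-ranks PSO: particles are compared by aggregated
    ranks over all objectives instead of a weighted sum (a simplified
    reading of the approach named in the abstract)."""
    rng = random.Random(seed)
    lo, hi = bounds
    x = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(swarm)]
    v = [[0.0] * dim for _ in range(swarm)]
    pbest = [p[:] for p in x]
    for _ in range(iters):
        pool = x + pbest
        totals = sum_of_ranks([[f(p) for f in objectives] for p in pool])
        for i in range(swarm):
            if totals[i] < totals[swarm + i]:
                pbest[i] = x[i][:]
        g = pool[min(range(len(pool)), key=totals.__getitem__)][:]
        for i in range(swarm):
            for d in range(dim):
                v[i][d] = (0.6 * v[i][d]
                           + 1.6 * rng.random() * (pbest[i][d] - x[i][d])
                           + 1.6 * rng.random() * (g[d] - x[i][d]))
                x[i][d] = min(hi, max(lo, x[i][d] + v[i][d]))
    return g

# Two conflicting "aesthetic" objectives (hypothetical stand-ins).
f1 = lambda p: (p[0] - 1.0) ** 2 + p[1] ** 2
f2 = lambda p: p[0] ** 2 + (p[1] - 1.0) ** 2
best = ranks_pso([f1, f2], dim=2, bounds=(-2.0, 2.0))
```

Because ranks are dimensionless, objectives on very different scales (colour similarity versus horizon placement, say) need no hand-tuned weights.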

Relevance: 30.00%

Abstract:

Network survivability is a technically interesting field of study as well as a critical concern in network design. Given that more and more data is carried over communication networks, a single failure can disrupt millions of users and cause millions of dollars in lost revenue. Network protection techniques provide spare capacity in a network and automatically reroute flows around a failure using that available capacity. This thesis addresses the design of survivable optical networks using protection schemes based on p-cycles; specifically, path-protecting p-cycles are exploited in the context of link failures. Our study focuses on finding p-cycle protection structures under the assumption that the working paths of all requests are defined a priori. Most existing work relies on heuristics or on solution methods that struggle with large instances. The objective of this thesis is twofold. On the one hand, we propose models and solution methods able to tackle larger problems than those reported in the literature; on the other hand, thanks to new algorithms, we are able to produce optimal or near-optimal solutions. To do so, we rely on column generation, a technique well suited to solving large-scale linear programs. In this project, column generation is used as an intelligent way of implicitly enumerating promising cycles.
We first propose formulations for the master and pricing problems, together with a first column generation algorithm for the design of networks protected by path-protecting p-cycles. The algorithm obtains better solutions, in reasonable time, than existing methods. A more compact formulation of the pricing problem is then proposed, and we present a new hierarchical decomposition method that greatly improves the overall efficiency of the algorithm. For integer solutions, we propose two heuristics that find good solutions. We also carry out a systematic comparison between p-cycles and classical shared protection schemes, using unified column generation based formulations in order to obtain high-quality results. We then empirically evaluate the directed and undirected versions of p-cycles for both link and path protection under asymmetric traffic scenarios, and show the additional protection cost incurred when bidirectional systems are employed in such scenarios. Finally, we study a column generation formulation for the design of p-cycle networks under availability requirements and obtain the first lower bounds for this problem.
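The basic protection rule that makes p-cycles attractive is easy to state for the link-protecting case: an on-cycle failure is restored by the one remaining path around the cycle, while a straddling link gets two restoration paths from the same spare capacity. A small sketch of this counting rule (the thesis itself works with the path-protecting generalisation):

```python
def pcycle_protection(cycle, links):
    """Restoration paths a link-protecting p-cycle offers each link:
    1 for an on-cycle link, 2 for a straddling link (both arcs of the
    cycle can carry the rerouted flow), 0 otherwise."""
    edges = {frozenset(e) for e in zip(cycle, cycle[1:] + cycle[:1])}
    nodes = set(cycle)
    out = {}
    for u, v in links:
        if frozenset((u, v)) in edges:
            out[(u, v)] = 1
        elif u in nodes and v in nodes:
            out[(u, v)] = 2
        else:
            out[(u, v)] = 0
    return out

# A square p-cycle 0-1-2-3; (0, 2) is a straddler, (0, 4) is unprotected.
prot = pcycle_protection([0, 1, 2, 3], [(0, 1), (0, 2), (0, 4)])
```

The straddler bonus is why cycle selection matters so much: the pricing problem in the column generation scheme is precisely a search for cycles with many high-value straddling links.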

Relevance: 30.00%

Abstract:

In the early 19th century, the industrial revolution was fuelled mainly by the development of machine-based manufacturing and the increased use of coal. Later, the focal point shifted to oil, thanks to mass-production technology, ease of transport and storage, and fewer environmental issues compared with coal. By the dawn of the 21st century, the depletion of oil reserves and the pollution resulting from heavy oil usage had put the demand for clean energy on the rise. This ever-growing demand has propelled research on photovoltaics, which has emerged successfully and is currently looked to for meeting our present-day energy requirements. The commercially proven PV technology is based on silicon, but the recent boom in demand for photovoltaic modules has created a shortage in the silicon supply, and the technology is still not accessible to the common man. This has set off research and development work on moderately efficient, eco-friendly and low-cost photovoltaic devices (solar cells). Thin-film photovoltaic modules have made a breakthrough entry into the PV market on these grounds. Thin films have the potential to revolutionize the present cost structure of solar cells by eliminating the expensive silicon wafers that alone account for above 50% of the total module manufacturing cost. Well-developed thin-film photovoltaic technologies are based on amorphous silicon, CdTe and CuInSe2. However, cell fabrication with amorphous silicon requires the handling of very toxic gases (such as phosphine, silane and borane) and costly fabrication technologies. The other materials present difficulties too, such as maintaining stoichiometry (especially in large-area films), alleged environmental hazards and the high cost of indium. Hence there is an urgent need to develop materials that are easy to prepare, eco-friendly and available in abundance.
The work presented in this thesis is an attempt towards the development of cost-effective, eco-friendly materials for thin-film solar cells using a simple, economically viable technique. Sn-based window and absorber layers deposited by the Chemical Spray Pyrolysis (CSP) technique have been chosen for this purpose.

Relevance: 30.00%

Abstract:

Over-sampling sigma-delta analogue-to-digital converters (ADCs) are one of the key building blocks of state-of-the-art wireless transceivers. In sigma-delta modulator design, the scaling coefficients determine the overall signal-to-noise ratio; selecting the optimum coefficient values is therefore very important. To this end, this paper addresses the design of a fourth-order multi-bit sigma-delta modulator for a Wireless Local Area Network (WLAN) receiver with a feed-forward path, in which the optimum coefficients are selected using a genetic algorithm (GA) based search method. In particular, the proposed converter uses a low-distortion swing-suppression SDM architecture that is highly suitable for low oversampling ratios and attains high linearity over a wide bandwidth. The focus of this paper is the identification of the best coefficients for the proposed topology, together with the optimization of a set of system parameters, in order to achieve the desired signal-to-noise ratio. The GA-based search engine is a stochastic search method that can find the optimum solution within the given constraints.
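A generic GA loop of the kind referred to here is straightforward to sketch. The fitness function below is a hypothetical smooth surrogate for the modulator's SNR as a function of two scaling coefficients, not a real sigma-delta model:

```python
import random

def ga_search(fitness, dim, bounds, pop=30, gens=40, seed=2):
    """Plain real-coded GA: tournament selection, blend (average) crossover,
    Gaussian mutation and two-elite survival. A generic sketch, not the
    exact operator set used in the paper."""
    rng = random.Random(seed)
    lo, hi = bounds
    clamp = lambda v: min(hi, max(lo, v))
    popn = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=fitness, reverse=True)
        nxt = [popn[0][:], popn[1][:]]               # elitism
        while len(nxt) < pop:
            p1 = max(rng.sample(popn, 3), key=fitness)   # tournament
            p2 = max(rng.sample(popn, 3), key=fitness)
            nxt.append([clamp(0.5 * (a + b) + rng.gauss(0, 0.1))
                        for a, b in zip(p1, p2)])
        popn = nxt
    return max(popn, key=fitness)

# Hypothetical SNR surrogate peaking at coefficients (0.4, 0.25).
snr = lambda c: -((c[0] - 0.4) ** 2 + (c[1] - 0.25) ** 2)
best = ga_search(snr, dim=2, bounds=(0.0, 1.0))
```

In the real design flow, each fitness evaluation would be a behavioural simulation of the modulator, which is why a derivative-free, constraint-respecting search such as a GA is attractive.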

Relevance: 30.00%

Abstract:

Post-transcriptional gene silencing by RNA interference is mediated by small interfering RNA (siRNA). This gene-silencing mechanism can be exploited therapeutically against a wide variety of disease-associated targets, and has been applied in mice to AIDS, neurodegenerative diseases, cholesterol regulation and cancer, with the hope of extending these approaches to treat humans. Over the recent past, a significant amount of work has been undertaken to understand the gene silencing mediated by exogenous siRNA. Designing efficient exogenous siRNA sequences is challenging for several reasons: target mRNAs must be selected such that the corresponding siRNAs are likely to be efficient against the target, yet unlikely to accidentally silence other transcripts through sequence similarity. Before silencing a gene with siRNAs, it is therefore essential to analyze their off-target effects in addition to their inhibition efficiency against the particular target. Designing exogenous siRNA with good knock-down efficiency and target specificity is thus an important open problem. Some methods that consider both the inhibition efficiency and the off-target potential of an siRNA against a gene have already been developed, but only a few achieve good inhibition efficiency, specificity and sensitivity. The main focus of this thesis is to develop computational methods that optimize siRNA efficiency, in terms of inhibition capacity and off-target potential, against target mRNAs, which may be useful in the areas of gene silencing and drug design for tumor development. This study investigates the currently available siRNA prediction approaches and devises a better computational approach to the problem of siRNA efficacy in terms of inhibition capacity and off-target potential.
The strengths and limitations of the available approaches are investigated and taken into account in building an improved solution. The approaches proposed in this study extend some of the best-scoring previous techniques by incorporating machine learning and statistical approaches, together with thermodynamic features such as whole stacking energy, to improve the prediction accuracy, inhibition efficiency, sensitivity and specificity. We propose one Support Vector Machine (SVM) model and two Artificial Neural Network (ANN) models for siRNA efficiency prediction. The SVM model classifies an siRNA as efficient or inefficient in silencing a target gene. The first ANN model, siRNA Designer, optimizes the inhibition efficiency of siRNA against target genes. The second ANN model, Optimized siRNA Designer (OpsiD), produces efficient siRNAs with high inhibition efficiency against target genes with improved sensitivity and specificity, and identifies the off-target knockdown potential of an siRNA against non-target genes. The models are trained and tested on a large dataset of siRNA sequences, and validated using the Pearson correlation coefficient, Matthews correlation coefficient, receiver operating characteristic analysis, prediction accuracy, sensitivity and specificity. OpsiD is found to predict the inhibition capacity of an siRNA against a target mRNA with improved results over the state of the art, and the analysis also clarifies the influence of whole stacking energy on siRNA efficiency. The model is further improved by adding the ability to identify the off-target potential of a predicted siRNA on non-target genes.
Thus the proposed model, OpsiD, can predict optimized siRNAs by considering both the inhibition efficiency on target genes and the off-target potential on non-target genes, with improved inhibition efficiency, specificity and sensitivity. Since the siRNA efficacy is optimized in terms of both inhibition efficiency and off-target potential, the risk of off-target effects during gene silencing can be reduced to a great extent. These findings may provide new insights into cancer diagnosis, prognosis and therapy by gene silencing, and the approach may prove useful for designing exogenous siRNAs for therapeutic applications and gene-silencing techniques in different areas of bioinformatics.
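The classification side of such models can be sketched with simple sequence features and a minimal trainer. Everything below is a toy stand-in: a logistic unit in place of the SVM/ANN models, three textbook-style features instead of the thesis's thermodynamic feature set, and synthetic labels that follow a single heuristic:

```python
import math

def features(guide):
    """Toy features of an siRNA guide strand: GC fraction plus 5' A/U and
    3' G/C flags (common efficacy heuristics).  The thesis models add
    richer terms such as whole stacking energy."""
    s = guide.upper().replace("T", "U")
    return [sum(b in "GC" for b in s) / len(s),
            float(s[0] in "AU"), float(s[-1] in "GC")]

def train_logistic(data, lr=0.5, epochs=500):
    """Minimal logistic-regression trainer, standing in for the SVM and
    ANN efficiency classifiers described in the abstract."""
    w = [0.0] * (1 + len(features(data[0][0])))
    for _ in range(epochs):
        for seq, y in data:
            x = [1.0] + features(seq)
            p = 1.0 / (1.0 + math.exp(-sum(wi * xi for wi, xi in zip(w, x))))
            w = [wi + lr * (y - p) * xi for wi, xi in zip(w, x)]
    return w

def predict(w, seq):
    x = [1.0] + features(seq)
    return sum(wi * xi for wi, xi in zip(w, x)) > 0.0

# Tiny synthetic training set; labels follow the 5' A/U heuristic only.
data = [("AUGCGAUAUCGAUGCGAUC", 1), ("UGCGAUAUCGAUGCGAUCC", 1),
        ("GAUGCGAUAUCGAUGCGAU", 0), ("CCGAUAUCGAUGCGAUCGG", 0)]
w = train_logistic(data)
```

A real pipeline would train on measured knock-down data and, as in OpsiD, pair the efficiency score with a sequence-similarity screen against non-target transcripts.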

Relevance: 30.00%

Abstract:

Ongoing human population growth and changing patterns of resource consumption are increasing the global demand for ecosystem services, many of which are provided by soils. Some of these ecosystem services are linearly related to the surface area of pervious soil, whereas others show non-linear relationships, making ecosystem service optimization a complex task. As limited land availability creates conflicting demands among various types of land use, a central challenge is how to weigh these conflicting interests and how to achieve the best possible solutions from the perspective of sustainable societal development. These conflicting interests become most apparent in the soils most heavily used by humans for specific purposes: urban soils used for green spaces, housing and other infrastructure, and agricultural soils used for producing food, fibres and biofuels. We argue that, despite their seemingly divergent uses of land, agricultural and urban soils share common features with regard to interactions between ecosystem services, and that the trade-offs associated with decision-making, while scale- and context-dependent, can be surprisingly similar between the two systems. We propose that the trade-offs within land use types and their soil-related ecosystem services are often disproportional, and that quantifying them will enable ecologists and soil scientists to help policy makers optimize management decisions when confronted with demands for multiple services under limited land availability.

Relevance: 30.00%

Abstract:

The urban heat island is a well-known phenomenon that impacts a wide variety of city operations. With the greater availability of cheap meteorological sensors, it is possible to measure the spatial patterns of urban atmospheric characteristics at higher resolution. To develop robust and resilient networks, recognizing that sensors may malfunction, it is important to know when measurement points provide additional information, as well as the minimum number of sensors needed to provide spatial information for particular applications. Here we consider the example of temperature data and the urban heat island through analysis of a network of sensors in the Tokyo metropolitan area (Extended METROS). The effect of reducing observation points from an existing meteorological measurement network is considered using random sampling and sampling with clustering. The results indicate that sampling with hierarchical clustering can yield similar temperature patterns with up to a 30% reduction in measurement sites in Tokyo. The methods presented have broader utility for evaluating the robustness and resilience of existing urban temperature networks and for assessing how networks can be enhanced by new mobile and open data sources.
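The clustering-based thinning can be sketched directly: cluster the sensors and keep one representative per cluster. A toy complete-linkage version on plain coordinates (the study clusters on temperature behaviour rather than location alone):

```python
def thin_network(points, k):
    """Complete-linkage agglomerative clustering of sensor positions;
    one representative sensor is kept per cluster.  A toy version of the
    network-thinning experiment described in the abstract."""
    clusters = [[i] for i in range(len(points))]
    dist = lambda i, j: sum((a - b) ** 2
                            for a, b in zip(points[i], points[j])) ** 0.5
    linkage = lambda c1, c2: max(dist(i, j) for i in c1 for j in c2)
    while len(clusters) > k:
        a, b = min(((i, j) for i in range(len(clusters))
                    for j in range(i + 1, len(clusters))),
                   key=lambda ij: linkage(clusters[ij[0]], clusters[ij[1]]))
        clusters[a].extend(clusters.pop(b))
    return [min(c) for c in clusters]   # keep lowest-index sensor of each

# Two spatial groups of hypothetical stations; keep one sensor per group.
stations = [(0.0, 0.0), (0.0, 1.0), (1.0, 0.0), (10.0, 10.0), (10.0, 11.0)]
keep = thin_network(stations, k=2)
```

Redundant sensors inside a tight cluster add little spatial information, so removing all but one of them changes the interpolated temperature field only slightly, which is the effect the 30% figure quantifies.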

Relevance: 30.00%

Abstract:

In this work, the separation of nine phenolic acids (benzoic, caffeic, chlorogenic, p-coumaric, ferulic, gallic, protocatechuic, syringic, and vanillic acid) was approached by a 3² factorial design in electrolytes consisting of sodium tetraborate buffer (STB) in the concentration range of 10-50 mmol L⁻¹ and methanol in the volume percentage of 5-20%. Derringer's desirability functions, combined globally, were tested as response functions. An optimal electrolyte composed of 50 mmol L⁻¹ tetraborate buffer at pH 9.2 and 7.5% (v/v) methanol allowed baseline resolution of all phenolic acids under investigation in less than 15 min. In order to clean up the sample, preconcentrate the phenolic fraction and release esterified phenolic acids from the fruit matrix, elaborate liquid-liquid extraction procedures followed by alkaline hydrolysis were performed. The proposed methodology was fully validated (linearity from 10.0 to 100 μg mL⁻¹, R² > 0.999; LOD and LOQ from 1.32 to 3.80 μg mL⁻¹ and from 4.01 to 11.5 μg mL⁻¹, respectively; intra-day precision better than 2.8% CV for migration time and 5.4% CV for peak area; inter-day precision better than 4.8% CV for migration time and 4.8-11% CV for peak area; recoveries from 81% to 115%) and applied successfully to the evaluation of the phenolic contents of abiu-roxo (Chrysophyllum caimito), wild mulberry growing in Brazil (Morus nigra L.) and tree tomato (Cyphomandra betacea). Values in the range of 1.50-47.3 μg g⁻¹ were found, with smaller amounts occurring as free phenolic acids.
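Derringer's desirability approach maps each response onto a 0-1 scale and combines the scores through a geometric mean, so any single unacceptable response zeroes the global score. A sketch with hypothetical response values and limits, not the actual figures behind this optimization:

```python
def desirability(y, lo, hi, weight=1.0):
    """Derringer's one-sided (larger-is-better) desirability: 0 below lo,
    1 above hi, a power ramp in between."""
    if y <= lo:
        return 0.0
    if y >= hi:
        return 1.0
    return ((y - lo) / (hi - lo)) ** weight

def global_desirability(ds):
    """Geometric-mean combination used to score each design point; it is 0
    whenever any individual response is unacceptable."""
    prod = 1.0
    for d in ds:
        prod *= d
    return prod ** (1.0 / len(ds))

# Hypothetical responses for one run of the 3^2 design: resolution of the
# worst-separated peak pair, and analysis time entered as its negative so
# that "larger is better" applies to both.
d_res = desirability(1.4, lo=1.0, hi=2.0)
d_time = desirability(-12.0, lo=-20.0, hi=-8.0)
D = global_desirability([d_res, d_time])
```

Evaluating D at each of the nine design points of the 3² grid and fitting a response surface is what singles out the 50 mmol L⁻¹ / 7.5% methanol electrolyte as the optimum.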