917 results for "Statistical mixture-design optimization"
Abstract:
In this study an optimization method for the design of combined solar and pellet heating systems is presented and evaluated. The paper describes the steps of the method by applying it to an example system. The objective of the optimization was to find the design parameters that give the lowest auxiliary energy use (pellet fuel + auxiliary electricity) and carbon monoxide (CO) emissions for a system with a typical load, a single-family house in Sweden. Weighting factors were applied to the auxiliary energy use and the CO emissions to give a combined target function, and different weighting factors were tested. The results show that extreme weighting factors lead to their own minima. However, it was possible to find factors that ensure low values for both auxiliary energy use and CO emissions.
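Such a combined target function is typically a weighted sum of the two criteria. A minimal sketch under that assumption, with symbols chosen here for illustration rather than taken from the paper:

$$ f = w_{E}\, Q_{\mathrm{aux}} + w_{\mathrm{CO}}\, m_{\mathrm{CO}} $$

where $Q_{\mathrm{aux}}$ is the annual auxiliary energy use (pellet fuel plus auxiliary electricity), $m_{\mathrm{CO}}$ the annual CO emissions, and the weights $w_{E}$ and $w_{\mathrm{CO}}$ are varied to explore the trade-off between the two objectives.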
Abstract:
In a northern European climate a typical solar combisystem for a single-family house normally saves between 10 and 30 % of the auxiliary energy needed for space heating and domestic water heating. It is considered uneconomical to dimension systems for higher energy savings, and overheating problems may also occur. One way of avoiding these problems is to use a collector designed so that it has a low optical efficiency in summer, when the solar elevation is high and the load is small, and a high optical efficiency in early spring and late fall, when the solar elevation is low and the load is large. The study investigates the possibilities of designing the system, and in particular the collector optics, so that the system performance matches the yearly variations of the heating load and the solar irradiation. It seems possible to design practically viable load-adapted collectors and to use them for whole roofs (40 m2) without causing more overheating stress on the system than with a standard 10 m2 system. The load-adapted collectors collect roughly as much energy per unit area as flat-plate collectors, but they may be produced at a lower cost due to lower material costs. There is additional potential for cost reduction, since it is possible to design the load-adapted collector for low stagnation temperatures, making it possible to use less expensive materials. One and the same collector design is suitable for a wide range of system sizes and roof inclinations. The report contains descriptions of optimized collector designs, properties of realistic collectors, and results of calculations of system output, stagnation performance and cost performance. Appropriate computer tools for optical analysis, optimization of collectors in systems and a very fast simulation model have been developed.
Abstract:
Quadratic assignment problems (QAPs) are commonly solved by heuristic methods, where the optimum is sought iteratively. Heuristics are known to provide good solutions, but the quality of the solutions, i.e. the confidence interval of the solution, is unknown. This paper uses statistical optimum estimation techniques (SOETs) to assess the quality of genetic algorithm solutions for QAPs. We examine the behaviour of different SOETs with regard to bias, coverage rate and interval length, and then compare the SOET lower bound with deterministic ones. The commonly used deterministic bounds are confined to only a few algorithms. We show that the Jackknife estimators perform better than Weibull estimators, and that when the number of heuristic solutions is as large as 100, higher-order Jackknife estimators perform better than lower-order ones. Compared with the deterministic bounds, the SOET lower bound performs significantly better than most deterministic lower bounds and is comparable with the best deterministic ones.
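The higher-order Jackknife point estimators mentioned here are typically computed from the smallest order statistics of the sample of heuristic solution values. A minimal Python sketch of that estimator, assuming the standard form used in the statistical-bounds literature (the paper's exact estimator and interval construction may differ):

```python
from math import comb

def jackknife_min_estimate(values, order=2):
    """m-th order Jackknife estimate of the unknown minimum, computed from
    the (order+1) smallest heuristic solution values:
        theta_hat = sum_{i=1..m+1} (-1)^(i+1) * C(m+1, i) * y_(i)
    For order=1 this reduces to 2*y_(1) - y_(2)."""
    y = sorted(values)
    m = order
    if len(y) < m + 1:
        raise ValueError("need at least order+1 solution values")
    return sum((-1) ** (i + 1) * comb(m + 1, i) * y[i - 1] for i in range(1, m + 2))

# Example: objective values from 10 independent GA runs on a hypothetical QAP instance
runs = [3852, 3870, 3866, 3851, 3880, 3859, 3855, 3872, 3861, 3858]
print(jackknife_min_estimate(runs, order=2))  # point estimate of the optimum
```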
Abstract:
Solutions to combinatorial optimization problems, such as problems of locating facilities, frequently rely on heuristics to minimize the objective function. The optimum is sought iteratively, and a criterion is needed to decide when the procedure has (almost) attained it. Pre-setting the number of iterations dominates in OR applications, which implies that the quality of the solution cannot be ascertained. A small, almost dormant, branch of the literature suggests using statistical principles to estimate the minimum and its bounds, as a tool for deciding when to stop and for evaluating the quality of the solution. In this paper we examine the functioning of statistical bounds obtained from four different estimators, using simulated annealing on p-median test problems taken from Beasley's OR-Library. We find the Weibull estimator and the 2nd-order Jackknife estimator preferable, and the required sample size to be about 10, much less than the current recommendation. However, reliable statistical bounds are found to depend critically on a sample of heuristic solutions of high quality, and we give a simple statistic useful for checking that quality. We end the paper with an illustration of using statistical bounds in a problem of locating some 70 distribution centers of the Swedish Post in one Swedish region.
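Simulated annealing on a p-median instance, as used here to generate the heuristic solutions, can be sketched as a swap-based neighbourhood search with a cooling temperature. The following Python sketch is illustrative only, with an assumed representation (a full distance matrix and a fixed p); it is not the authors' implementation:

```python
import math
import random

def pmedian_cost(dist, medians):
    """Total distance from each demand point to its nearest open median."""
    return sum(min(row[j] for j in medians) for row in dist)

def simulated_annealing_pmedian(dist, p, t0=1.0, cooling=0.995, iters=20000, seed=None):
    rng = random.Random(seed)
    n = len(dist)
    current = rng.sample(range(n), p)
    best = list(current)
    cur_cost = best_cost = pmedian_cost(dist, current)
    t = t0
    for _ in range(iters):
        # Neighbour: swap one open median for one closed candidate site
        out_idx = rng.randrange(p)
        candidate = rng.choice([j for j in range(n) if j not in current])
        neighbour = list(current)
        neighbour[out_idx] = candidate
        new_cost = pmedian_cost(dist, neighbour)
        # Accept improvements always, deteriorations with Boltzmann probability
        if new_cost < cur_cost or rng.random() < math.exp((cur_cost - new_cost) / t):
            current, cur_cost = neighbour, new_cost
            if cur_cost < best_cost:
                best, best_cost = list(current), cur_cost
        t *= cooling
    return best, best_cost
```

Repeated independent runs of such a heuristic yield the sample of solution values from which the Weibull or Jackknife bounds are then estimated.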
Abstract:
Combinatorial optimization problems are one of the most important types of problems in operational research. Heuristic and metaheuristic algorithms are widely applied to find good solutions. However, a common problem is that these algorithms do not guarantee that the solution will coincide with the optimum, and hence many solutions to real-world OR problems are afflicted with uncertainty about the quality of the solution. The main aim of this thesis is to investigate the usability of statistical bounds for evaluating the quality of heuristic solutions to large combinatorial problems. The contributions of this thesis are both methodological and empirical. From a methodological point of view, the usefulness of statistical bounds on p-median problems is thoroughly investigated. The statistical bounds perform well in providing informative quality assessments under appropriate parameter settings, and they outperform the commonly used Lagrangian bounds. The statistical bounds are also shown to be comparable with the deterministic bounds in quadratic assignment problems. As to the empirical research, environmental pollution has become a worldwide problem, and transportation can cause a great amount of pollution. A new method for calculating and comparing the CO2 emissions of online and brick-and-mortar retailing is proposed; it leads to the conclusion that online retailing has significantly lower CO2 emissions. Another problem is that the Swedish regional division is under revision, and the effect of borders on public service accessibility concerns both residents and politicians. The analysis shows that borders hinder the optimal location of public services, and consequently the highest achievable economic and social utility may not be attained.
Abstract:
This thesis contributes to the heuristic optimization of the p-median problem and to the study of Swedish population redistribution. The p-median model is the most representative model in location analysis. When facilities are located to serve a population geographically distributed over Q demand points, the p-median model systematically considers all the demand points, so that each demand point has an effect on the location decision. However, a series of questions arises. How do we measure the distances? Does the number of facilities to be located have a strong impact on the result? What scale of the network is suitable? How good is our solution? We scrutinize many issues of this kind. The reason we are interested in these questions is that there are many uncertainties in the solutions; we cannot guarantee that our solution is good enough for making decisions. The technique of heuristic optimization is formulated in the thesis. Swedish population redistribution is examined by a spatio-temporal covariance model. A descriptive analysis is not always enough to describe the moving effects from the neighbouring population; a correlation or covariance analysis is more explicit in showing the tendencies. Similarly, an optimization technique for the parameter estimation is required, and it is executed within the framework of statistical modelling.
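For reference, the p-median model referred to here is usually stated as follows (notation assumed for illustration): choose a set S of p facility sites so as to minimize the demand-weighted distance from each demand point to its nearest open facility,

$$ \min_{S \subseteq J,\ |S| = p} \ \sum_{i=1}^{Q} w_i \min_{j \in S} d_{ij}, $$

where $w_i$ is the demand at point $i$, $d_{ij}$ the distance from demand point $i$ to candidate site $j$, and $J$ the set of candidate sites.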
Abstract:
This dissertation is focused on theoretical and experimental studies of the optical properties of materials and multilayer structures composing liquid crystal displays (LCDs) and electrochromic (EC) devices. By applying spectroscopic ellipsometry, we have determined the optical constants of thin films of electrochromic tungsten oxide (WOx) and nickel oxide (NiOy), as well as the films' thickness and roughness. These films, which were obtained under sputtering conditions, possess high transmittance, which is important for achieving good visibility and high contrast in an EC device. Another application of general spectroscopic ellipsometry relates to the study of a photo-alignment layer consisting of a mixture of the azo-dyes SD-1 and SDA-2. We have found the optical constants of this mixture before and after illuminating it with polarized UV light. The results obtained confirm the diffusion model used to explain the formation of the photo-induced order in azo-dye films. We have developed new techniques for fast characterization of twisted nematic LC cells in transmissive and reflective modes. Our techniques are based on the characteristic functions that we have introduced for determining parameters of non-uniform birefringent media. These characteristic functions are found by simple procedures and can be utilised for the simultaneous determination of retardation, its wavelength dispersion and the twist angle, as well as for solving associated optimization problems. The cholesteric LCD possesses some unique properties, such as bistability and good selective scattering, but has the disadvantage of a relatively high driving voltage (tens of volts). The way we propose to reduce the driving voltage consists of applying a stack of thin (~1 µm) LC layers. We have also studied the ability of a layer of surface-stabilized ferroelectric liquid crystal coupled with several retardation plates to generate birefringent colors. We have demonstrated that, in order to achieve good color characteristics and high brightness of the display, one or two retardation plates are sufficient.
Abstract:
In this thesis the solar part of a large grid-connected photovoltaic system has been designed. The main purpose was to size and optimize the system and to present figures that help evaluate the rationale of the prospective project, which could potentially be constructed on a contaminated area in Falun. The methodology consisted of a PV market study and component selection, a site analysis defining a suitable area for the solar installation, and system configuration optimization based on PVsyst simulations and Levelized Cost of Energy (LCOE) calculations. The procedure was divided into two main parts, preliminary and detailed sizing. In the first part the objective was twofold: investigating the most profitable component combination and optimizing the system with respect to tilt and row distance. This was done by simulating systems with different components and orientations, all sized for the same 100 kW inverter in order to make a fair comparison. A simplified LCOE calculation procedure was applied to each simulated result. The main results of this part show that, at a price of 0.43 €/Wp, thin-film modules were the most cost-effective solution for this case, with a great advantage over the crystalline type in terms of financial attractiveness. From the results of the preliminary study it was possible to select the optimal system configuration, which was used as the starting point for the detailed sizing. In this part full-scale PVsyst simulations were run, taking into account the near shadings created by factory buildings. Additionally, a more detailed LCOE calculation procedure was used here, considering insurance, maintenance, the time value of money and possible cost reductions due to the system size. Two system options were proposed in the final results; both cover the same area of 66000 m2. The first one represents an ordinary south-facing design with 1.1 MW nominal power, optimized for the highest performance. According to the PVsyst simulations, this system should produce 1108 MWh/year with an initial investment of 835,000 € and an LCOE of 0.056 €/kWh. The second option has an alternative east-west orientation, which allows 80% of the occupied ground to be covered and consequently gives 6.6 MW of PV nominal power. This system produces 5388 MWh/year, costs about 4,500,000 € and delivers electricity at the same price of 0.056 €/kWh. Even though the east-west solution has 20% lower specific energy production, it benefits mainly from lower relative costs for inverters, mounting and annual maintenance. After analyzing the performance results, neither of the two alternatives showed a clear superiority, so no single optimal system was proposed. Both the south and the east-west solutions have their own advantages and disadvantages in terms of energy production profile, configuration, installation and maintenance. Furthermore, the uncertainty in the assumed cost figures limits the reliability of the results.
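The LCOE figures quoted above are conventionally computed as the ratio of discounted lifetime costs to discounted lifetime energy production. A generic form is shown below, with symbols assumed here for illustration rather than taken from the thesis:

$$ \mathrm{LCOE} = \frac{I_0 + \sum_{t=1}^{N} C_t / (1+r)^t}{\sum_{t=1}^{N} E_t / (1+r)^t}, $$

where $I_0$ is the initial investment, $C_t$ the operating costs (maintenance, insurance) in year $t$, $E_t$ the energy delivered in year $t$, $r$ the discount rate and $N$ the system lifetime in years.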
Abstract:
This study contributes a rigorous diagnostic assessment of state-of-the-art multiobjective evolutionary algorithms (MOEAs) and highlights key advances that the water resources field can exploit to better discover the critical tradeoffs constraining our systems. This study provides the most comprehensive diagnostic assessment of MOEAs for water resources to date, exploiting more than 100,000 MOEA runs and trillions of design evaluations. The diagnostic assessment measures the effectiveness, efficiency, reliability, and controllability of ten benchmark MOEAs for a representative suite of water resources applications addressing rainfall-runoff calibration, long-term groundwater monitoring (LTM), and risk-based water supply portfolio planning. The suite of problems encompasses a range of challenging problem properties including (1) many-objective formulations with 4 or more objectives, (2) multi-modality (or false optima), (3) nonlinearity, (4) discreteness, (5) severe constraints, (6) stochastic objectives, and (7) non-separability (also called epistasis). The applications are representative of the dominant problem classes that have shaped the history of MOEAs in water resources and that will be dominant foci in the future. Recommendations are provided for which modern MOEAs should serve as tools and benchmarks in the future water resources literature.
Abstract:
This paper describes the formulation of a Multi-objective Pipe Smoothing Genetic Algorithm (MOPSGA) and its application to the least-cost water distribution network design problem. Evolutionary algorithms have been widely utilised for both theoretical and real-world non-linear optimisation problems, including water system design and maintenance problems. In this work we present a pipe-smoothing-based approach to the creation and mutation of chromosomes which utilises engineering expertise with a view to increasing the performance of the algorithm whilst promoting engineering feasibility within the population of solutions. MOPSGA is based upon the standard Non-dominated Sorting Genetic Algorithm-II (NSGA-II) and incorporates a modified population initialiser and mutation operator which directly target elements of a network with the aim of increasing network smoothness (in terms of progression from one diameter to the next), using network element awareness and an elementary heuristic. The pipe smoothing heuristic used in this algorithm is based upon a fundamental principle employed by water system engineers when designing water distribution pipe networks: the diameter of any pipe is never greater than the sum of the diameters of the pipes directly upstream, resulting in a transition from large to small diameters from the source to the extremities of the network. MOPSGA is assessed on a number of water distribution network benchmarks from the literature, including some large-scale, real-world-based systems. The performance of MOPSGA is directly compared to that of NSGA-II with regard to solution quality, engineering feasibility (network smoothness) and computational efficiency. MOPSGA is shown to promote both engineering and hydraulic feasibility whilst attaining good infrastructure costs compared to NSGA-II.
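The smoothing rule described above can be illustrated as a simple feasibility check on a candidate network. A minimal Python sketch with an assumed representation (pipe diameters plus an upstream-adjacency map); it is not the MOPSGA implementation itself:

```python
def is_smooth(diameters, upstream):
    """Check the pipe-smoothing rule: no pipe may be wider than the sum of
    the diameters of the pipes directly upstream of it.

    diameters: dict pipe_id -> diameter (mm)
    upstream:  dict pipe_id -> list of pipe_ids immediately upstream
               (source pipes map to an empty list and are exempt)
    """
    for pipe, feeders in upstream.items():
        if feeders and diameters[pipe] > sum(diameters[f] for f in feeders):
            return False
    return True

# Toy example: pipe C is fed by pipes A and B
diameters = {"A": 300, "B": 200, "C": 400}
upstream = {"A": [], "B": [], "C": ["A", "B"]}
print(is_smooth(diameters, upstream))  # True: 400 <= 300 + 200
```

A mutation operator of the kind described would target pipes for which this check fails and adjust their diameters toward smoothness.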
Abstract:
In the current scenario, in which globalization, combined with more demanding customers, requires companies to strive harder for competitiveness, agility in product development and optimization becomes crucial for their survival in the market. In this context, several techniques used in Quality Engineering were compiled into an integrated method for Experimental Mixture Optimization. These techniques provide much faster and cheaper results than the traditional practice of varying one mixture component at a time, owing to the smaller number of trials required. However, although they are not particularly recent, the tools applicable to mixture optimization have not been adopted by their main beneficiary (industry), probably because their existence is not widely publicized or, above all, because of the complexity of the calculations involved. Therefore, in addition to the proposed method, a software tool was also developed that implements all the suggested steps, with the aim of further easing their application by people who are not specialists in statistical techniques. Using this software (OptiMix), the method was tested in a real situation and in a comparative study against a report from the literature, in order to test its validity, the need for adaptations and the consistency of its results. The evaluation of the case studies showed that the proposed method provides results consistent with those of other alternative techniques, with the advantage that the user does not need to perform calculations, thus avoiding errors and speeding up the optimization process.
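A common starting point in experimental mixture optimization of this kind is a simplex-lattice design, in which the component proportions are varied in steps of 1/m while always summing to 1. A minimal Python sketch of generating such a design is shown below; it is illustrative only, and the specific designs supported by OptiMix are not described here:

```python
from itertools import product
from fractions import Fraction

def simplex_lattice(q, m):
    """Generate a {q, m} simplex-lattice mixture design: all blends of q
    components whose proportions are multiples of 1/m and sum to 1."""
    points = []
    for combo in product(range(m + 1), repeat=q):
        if sum(combo) == m:
            points.append(tuple(Fraction(c, m) for c in combo))
    return points

# Example: 3 components in steps of 1/2 -> 6 design points
for blend in simplex_lattice(q=3, m=2):
    print([float(x) for x in blend])
```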
Abstract:
With the growing possibilities for using biomass in the most diverse sectors of industry, the present study aims to identify an alternative for increasing production using elephant grass (Pennisetum purpureum) and sweet sorghum (Sorghum bicolor L.). The experiments were carried out at Fazenda Vitória (4°21'36"S 38°5'17"W) in Beberibe – CE, on a total area of 1,944 m2. Sowing was carried out in 36 plots of 54 m2, in a completely randomized design with 9 treatments (genotypes) and 4 replicates. Six different sweet sorghum genotypes were used: s_1, s_2, s_3, s_4, s_5 and s_6. The study also included a hybrid of elephant grass and pearl millet (Pennisetum glaucum), commercially known as Paraíso, and two elephant grass treatments, Cameroon and Napier. For the sweet sorghum, three harvests were carried out, the first evaluated weekly and the others with a single evaluation. For the elephant grass, two harvests were carried out, the first after an interval of 186 days and the second after an interval of 92 days. These intervals made it possible to compare the biomass productivity of each treatment on a quarterly and half-yearly basis. The observed results showed that the treatments with the highest biomass productivity (dry basis) were those with elephant grass, regardless of the period. Based on this information, it was concluded that, considering biomass productivity alone, the most suitable of the treatments studied is elephant grass Napier or Cameroon. However, the sweet sorghum showed great productivity potential, with the additional advantage that its must can be used in the fermentation process to obtain ethanol.
Abstract:
In the last decade mobile wireless communications have witnessed explosive growth in user penetration rates and widespread deployment around the globe. This tendency is expected to continue with the convergence of fixed wired Internet networks with mobile ones and with the evolution towards the full-IP architecture paradigm. Mobile wireless communications will therefore be of paramount importance in the development of the information society of the near future. In particular, a research topic of particular relevance in telecommunications nowadays is the design and implementation of mobile communication systems of the 4th generation (4G). 4G networks will be characterized by the support of multiple radio access technologies in a core network fully compliant with the Internet Protocol (the all-IP paradigm). Such networks will sustain the stringent quality of service (QoS) requirements and the high data rates expected from the type of multimedia applications to become available in the near future. The approach followed in the design and implementation of the mobile wireless networks of the current generations (2G and 3G) has been the stratification of the architecture into a communication protocol model composed of a set of layers, each encompassing some set of functionalities. In such a layered protocol model, communication is only allowed between adjacent layers and through specific service interface points. This modular concept eases the implementation of new functionalities, as the behaviour of each layer in the protocol stack is not affected by the others. However, the fact that lower layers in the protocol stack do not utilize information available from upper layers, and vice versa, degrades the achievable performance. This is particularly relevant if multiple antenna systems, in a MIMO (Multiple Input Multiple Output) configuration, are implemented. MIMO schemes introduce another degree of freedom for radio resource allocation: the space domain. Contrary to the time and frequency domains, radio resources mapped into the spatial domain cannot be assumed to be completely orthogonal, due to the interference resulting from users transmitting in the same frequency sub-channel and/or time slots but in different spatial beams. Therefore, the availability of information regarding the state of radio resources, from lower to upper layers, is of fundamental importance for achieving the levels of QoS expected from those multimedia applications. In order to match application requirements and the constraints of the mobile radio channel, in the last few years researchers have proposed a new paradigm for the layered communication architecture: the cross-layer design framework. In general terms, the cross-layer design paradigm refers to a protocol design in which the dependence between protocol layers is actively exploited, breaking the stringent rules that restrict communication to adjacent layers in the original reference model and allowing direct interaction among different layers of the stack. Efficient management of the set of available radio resources demands efficient, low-complexity packet schedulers that prioritize users' transmissions according to inputs provided by lower as well as upper layers of the protocol stack, fully in line with the cross-layer design paradigm.
Specifically, efficiently designed packet schedulers for 4G networks should maximize the available capacity, taking into account the limitations imposed by the mobile radio channel, while complying with the set of QoS requirements from the application layer. The IEEE 802.16e standard, also known as Mobile WiMAX, seems to comply with the specifications of 4G mobile networks. Its scalable architecture, low-cost implementation and high data throughput enable efficient data multiplexing and low data latency, attributes essential for broadband data services. Also, the connection-oriented approach of its medium access control layer is fully compliant with the quality of service demands of such applications. Therefore, Mobile WiMAX seems to be a promising candidate for 4G mobile wireless networks. This thesis proposes the investigation, design and implementation of packet scheduling algorithms for the efficient management of the available radio resources in the time, frequency and spatial domains of Mobile WiMAX networks. The proposed algorithms combine input metrics from the physical layer with QoS requirements from upper layers, in accordance with the cross-layer design paradigm. The proposed schedulers are evaluated by means of system-level simulations, conducted on a simulation platform implementing the physical and medium access control layers of the IEEE 802.16e standard.
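A cross-layer packet scheduler of the kind described combines a physical-layer metric (instantaneous channel quality) with upper-layer QoS state (e.g. head-of-line packet delay) into a single per-user priority. The following Python sketch of such a priority rule is illustrative only, with assumed metric names; it is not the scheduling algorithm proposed in the thesis:

```python
def crosslayer_priority(inst_rate, avg_rate, hol_delay, delay_budget):
    """Illustrative cross-layer priority metric: a proportional-fair channel
    term (physical-layer input) scaled by the urgency of the head-of-line
    packet relative to its delay budget (upper-layer QoS input)."""
    pf_term = inst_rate / max(avg_rate, 1e-9)       # channel awareness
    urgency = hol_delay / max(delay_budget, 1e-9)   # QoS awareness
    return pf_term * (1.0 + urgency)

# Example: a user with a good channel but little urgency vs. a user close to its delay budget
print(crosslayer_priority(inst_rate=2.0e6, avg_rate=1.0e6, hol_delay=5,  delay_budget=100))
print(crosslayer_priority(inst_rate=1.2e6, avg_rate=1.0e6, hol_delay=90, delay_budget=100))
```

At each scheduling instant, the radio resource would be assigned to the user with the highest priority, so that both channel state and QoS urgency influence the decision.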
Abstract:
Over recent years the structural ceramics industry in Brazil has found a very favorable market for growth. However, difficulties related to productivity and product quality are partially inhibiting this possible growth. An alternative for trying to solve these problems, and thus enable the full development of the pottery industry, is to replace the firewood used in the firing process with natural gas. In order to contribute to this process of technological innovation, this paper studies the effect of the combined use of ceramic phyllite and kaolin waste on the properties of a clay matrix, verifying the possible benefits that these raw materials can give to the final product, as well as the possibility that such materials reduce the heat load necessary to obtain products of equal or superior quality. The study was divided into two steps: characterization of the materials and study of the formulations. Two clays, a phyllite and a kaolin waste were characterized by the following techniques: laser granulometry, plasticity index by Atterberg limits, X-ray fluorescence, X-ray diffraction, mineralogical composition by the Rietveld method, and thermogravimetric and differential thermal analysis. To study the formulations, specifically to evaluate the technological properties of the pieces, an experimental design was used that combined a three-component mixture design (standard mass x phyllite x kaolin waste) with a 2³ factorial design with central point for the thermal processing parameters. The experiment was performed with restricted strip-plot randomization. In total, 13 compositional points were investigated within the following constraints: phyllite ≤ 20% by weight, kaolin waste ≤ 40% by weight, and standard mass ≥ 60% by weight. The thermal parameters were used at the following levels: 750 and 950 °C for the firing temperature, 5 and 15 °C/min for the heating rate, and 15 and 45 min for the holding time. The results showed that the introduction of phyllite and/or kaolin waste into the ceramic body produced a number of benefits in the properties of the final product, such as decreased water absorption, apparent porosity and linear firing shrinkage, as well as increased apparent specific mass and improved mechanical properties of the pieces. The best results were obtained at the compositional points where the sum of the levels of kaolin waste and phyllite was maximal (40% by weight), as well as under the conditions with a firing temperature of 950 °C. Regarding the prospect of saving the heat energy required to form the desired microstructure, the phyllite and the kaolin waste, owing to their small particle sizes and mineralogical phases containing fluxes, contributed to the optimization of the firing cycle.
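Mixture experiments of this kind are commonly analysed with Scheffé polynomials fitted to each measured property. A generic quadratic form for the three components used here is shown below as an illustration; the exact model form and coefficients are assumptions, not the paper's fitted model:

$$ \hat{y} = \beta_1 x_1 + \beta_2 x_2 + \beta_3 x_3 + \beta_{12} x_1 x_2 + \beta_{13} x_1 x_3 + \beta_{23} x_2 x_3, \qquad x_1 + x_2 + x_3 = 1, $$

where $x_1$, $x_2$, $x_3$ are the proportions of standard mass, phyllite and kaolin waste, and $y$ is a response such as water absorption or flexural strength.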
Abstract:
Rio Grande do Norte State stands out as one of the great producers of structural clay products in the Brazilian Northeast. The Assu Valley is notable for its production of ceramic roofing tiles, obtained from the illitic ball clays that abound in the region under study. Ceramic formulation and design of experiments with a mixture approach have been applied by researchers as an important aid to decrease the number of experiments necessary for optimization. In this context, the objective of this work is to evaluate the effects of the formulation, temperature and heating rate on the physical-mechanical properties of the red ceramic body used for roofing tile fabrication in the Assu Valley, using design of mixture experiments. Four clay samples used by two ceramic industries of the region were used as raw materials and characterized by X-ray diffraction, chemical composition, differential thermal analysis (DTA), thermogravimetric analysis (TGA), particle size distribution analysis and plasticity techniques. Afterwards, initial body formulations were defined and specimens were prepared by uniaxial pressing at 25 MPa before firing at 850, 950 and 1050 ºC in a laboratory furnace, with heating rates of 5, 10 and 15 ºC/min. The following technological properties were evaluated: linear firing shrinkage, water absorption and flexural strength. The results show that a temperature of 1050 ºC and a heating rate of 5 ºC/min was the best condition, since it was significant for all physical-mechanical properties. The model was accepted as valid based on the production of three new formulations, with mass fractions different from the initial bodies, fired at 1050 ºC with a heating rate of 5 ºC/min. Considering the formulation, temperature and heating rate as variables of the equations, another model was suggested, in which the application of mixture design of experiments made it possible to obtain the best formulation, whose experimental error was the smallest relative to the other formulations.
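Fitting such a mixture model to measured responses reduces to ordinary least squares on the Scheffé terms. A minimal Python/NumPy sketch with made-up illustrative data (not the data or fitted model of this study):

```python
import numpy as np

def scheffe_quadratic_terms(X):
    """Build the Scheffé quadratic model matrix for a 3-component mixture:
    columns x1, x2, x3, x1*x2, x1*x3, x2*x3 (no intercept, since sum(x) = 1)."""
    x1, x2, x3 = X[:, 0], X[:, 1], X[:, 2]
    return np.column_stack([x1, x2, x3, x1 * x2, x1 * x3, x2 * x3])

# Illustrative design points (proportions of clay A, clay B, clay C) and a
# made-up response, e.g. water absorption (%)
X = np.array([
    [1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0],
    [0.5, 0.5, 0.0], [0.5, 0.0, 0.5], [0.0, 0.5, 0.5],
    [1/3, 1/3, 1/3],
])
y = np.array([12.1, 14.3, 10.8, 12.5, 11.0, 12.0, 11.9])

beta, *_ = np.linalg.lstsq(scheffe_quadratic_terms(X), y, rcond=None)
print(beta)                                                          # fitted Scheffé coefficients
print(scheffe_quadratic_terms(np.array([[0.6, 0.2, 0.2]])) @ beta)   # predicted response for a new blend
```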