907 results for "Unconstrained minimization"
Abstract:
An important problem faced by the oil industry is the distribution of multiple oil products through pipelines. Distribution is done in a network composed of refineries (source nodes), storage parks (intermediate nodes), and terminals (demand nodes) interconnected by a set of pipelines transporting oil and derivatives between adjacent areas. Constraints related to storage limits, delivery time, source availability, and sending and receiving limits, among others, must be satisfied. Some researchers treat this problem from a discrete viewpoint in which the flow in the network is seen as the sending of batches. Usually there is no separation device between batches of different products, and the losses due to interfaces may be significant. Minimizing delivery time is a typical objective adopted by engineers when scheduling product sending in pipeline networks. However, costs incurred due to losses at interfaces cannot be disregarded. The cost also depends on pumping expenses, which are mostly due to the cost of electricity. Since the industrial electricity tariff varies over the day, pumping at different time periods has different costs. This work presents an experimental investigation of computational methods designed to deal with the problem of distributing oil derivatives in networks considering three minimization objectives simultaneously: delivery time, losses due to interfaces, and electricity cost. The problem is NP-hard and is addressed with hybrid evolutionary algorithms. Hybridizations are mainly focused on Transgenetic Algorithms and classical multi-objective evolutionary algorithm architectures such as MOEA/D, NSGA2, and SPEA2. Three architectures named MOTA/D, NSTA, and SPETA are applied to the problem. An experimental study compares the algorithms on thirty test cases. To analyse the results obtained with the algorithms, Pareto-compliant quality indicators are used, and the significance of the results is evaluated with non-parametric statistical tests.
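Since the algorithms above are compared with Pareto-compliant indicators over three minimization objectives, the basic building block is a Pareto dominance check. The sketch below is a minimal, generic illustration of that check and of non-dominated filtering, not the authors' implementation; the objective vectors are hypothetical.

```python
def dominates(a, b):
    """True if solution a Pareto-dominates b (all objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    """Keep only the non-dominated objective vectors."""
    return [s for s in solutions
            if not any(dominates(o, s) for o in solutions if o is not s)]

# Hypothetical objective vectors: (delivery time, interface losses, electricity cost)
candidates = [(10.0, 3.2, 120.0), (12.0, 2.1, 100.0), (11.0, 3.5, 130.0)]
print(pareto_front(candidates))  # (11.0, 3.5, 130.0) is dominated by (10.0, 3.2, 120.0)
```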
Development of the base cell of periodic composite microstructures under topology optimization
Abstract:
This thesis develops a new technique for the design of composite microstructures by topology optimization, with the goal of maximizing stiffness, using the strain energy method and an h-adaptive refinement scheme to better define the topological contours of the microstructure. This is done by distributing material optimally in a pre-established design region named the base cell. The finite element method is used to describe the field and to solve the governing equation. The mesh is refined iteratively so that the refinement acts on all elements that represent solid material and on all void elements containing at least one node in a solid-material region. The finite element chosen for the model is the linear three-node triangle. The constrained nonlinear programming problem is solved with the augmented Lagrangian method, together with a minimization algorithm based on quasi-Newton search directions and the Armijo-Wolfe conditions to assist the descent process. The base cell that represents the composite is found from the equivalence between a fictitious material and a prescribed material, distributed optimally in the design region. The use of the strain energy method is justified because it provides a lower computational cost, thanks to a simpler formulation than the traditional homogenization method. Results are presented for changes in the prescribed displacement, changes in the volume restriction, and various initial values of the relative densities.
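The thesis couples an augmented Lagrangian outer loop with a quasi-Newton descent satisfying Armijo-Wolfe conditions. The sketch below is a generic illustration of such an inner minimization step, not the thesis' implementation; the quadratic "strain energy" and the volume term are placeholders, and the penalty parameter mu is an assumed value.

```python
import numpy as np
from scipy.optimize import minimize

def augmented_lagrangian(x, mu=10.0):
    """Placeholder smooth objective standing in for one outer augmented Lagrangian iteration."""
    compliance = 0.5 * x @ np.diag([2.0, 5.0]) @ x   # stand-in for strain energy
    volume_gap = x.sum() - 1.0                        # stand-in for the volume constraint
    return compliance + 0.5 * mu * volume_gap**2      # quadratic penalty term

x0 = np.array([0.8, 0.8])
# SciPy's BFGS is a quasi-Newton method whose line search enforces the Wolfe conditions.
res = minimize(augmented_lagrangian, x0, method="BFGS")
print(res.x, res.fun)
```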
Abstract:
Electronic journals are channels for publishing and disseminating scientific information. Through them, users can spread their studies as well as develop new research. One of the systems used for the creation and management of e-journals is the Electronic System for Journal Publishing (SEER), used both in the construction of journal portals and in the creation of individual journals. In this sense, it is believed that systems for managing and creating e-journals should be developed (internally and externally) according to the needs of their users. In the case of internal development, some of these processes refer to copyright registration and article submission, which, in turn, are relevant tasks in the editorial process. Thus, this study, on the theme of usability of scientific journals, aims to analyze the usability of the copyright registration and article submission processes in the Electronic System for Journal Publishing through the BiblioCanto journal, part of the Electronic Journals Portal of the Federal University of Rio Grande do Norte (UFRN). For the research, two evaluation techniques were used: the usability test, with a total of twenty participants, and the cooperative evaluation, with the same number of participants separated into four categories considered the target audience of the journal, namely: undergraduate students, graduate students, teachers, and librarians. The results indicated that the two analyzed processes (copyright registration and article submission) need improvement. In the case of the registration process, the needs are: signposting of the registration environment, and description and removal of information requested on the registration form. In the article submission process, the aspects to improve are: the early steps of submission, signaling of required fields, concise description of the steps, and minimization and review of the steps. In general, it is concluded that SEER partially meets the needs of its users regarding the usability of the software.
Abstract:
A significant observational effort has been directed at investigating the nature of the so-called dark energy. In this dissertation we derive constraints on dark energy models using three different observables: measurements of the Hubble rate H(z) (compiled by Meng et al. in 2015); distance moduli of 580 Type Ia supernovae (Union 2.1 compilation, 2011); and observations of baryon acoustic oscillations (BAO) and the cosmic microwave background (CMB) through the so-called CMB/BAO ratio for six BAO peaks (one peak determined from the 6dFGS survey data, two from the SDSS, and three from WiggleZ). The statistical analysis used was the minimum-χ² method (marginalized or minimized over h whenever possible) to constrain the cosmological parameters Ωm, ω and δω0. These tests were applied to two parameterizations of the parameter ω of the dark energy equation of state, p = ωρ (here p is the pressure and ρ is the energy density of the component). In one, ω is taken constant and less than −1/3, known as the XCDM model; in the other, the equation-of-state parameter varies with redshift, which we call the GS model. This last model is based on arguments arising from the theory of cosmological inflation. For comparison, the ΛCDM model was also analyzed. Comparing cosmological models with different observations leads to different best-fit settings. Thus, to classify the observational viability of the different theoretical models we use two information criteria, the Bayesian information criterion (BIC) and the Akaike information criterion (AIC). The Fisher matrix tool was incorporated into our analysis to provide the uncertainties of the parameters of each theoretical model. We found that the complementarity of the tests is necessary in order to avoid degenerate parametric spaces. From the minimization process we found (at 68% c.l.), for the XCDM model, best-fit parameters Ωm = 0.28 ± 0.012 and ωX = −1.01 ± 0.052, while for the GS model the best fit is Ωm = 0.28 ± 0.011 and δω0 = 0.00 ± 0.059. Performing a marginalization we found (at 68% c.l.), for the XCDM model, Ωm = 0.28 ± 0.012 and ωX = −1.01 ± 0.052, while for the GS model Ωm = 0.28 ± 0.011 and δω0 = 0.00 ± 0.059.
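As a generic illustration of the minimum-χ² procedure described above (not the dissertation's code), the sketch below fits the flat XCDM parameters (Ωm, ωX) to a handful of H(z) points; the data values and the fixed H0 are placeholders, whereas the text marginalizes or minimizes over h.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical H(z) measurements (z, H, sigma_H); not the compilation used in the text.
data = np.array([[0.1, 69.0, 12.0], [0.4, 83.0, 8.0], [0.9, 104.0, 13.0], [1.5, 140.0, 14.0]])
H0 = 70.0  # km/s/Mpc, fixed here for simplicity

def H_xcdm(z, Om, w):
    """Flat XCDM: H^2 = H0^2 [Om (1+z)^3 + (1-Om)(1+z)^(3(1+w))]."""
    return H0 * np.sqrt(Om * (1 + z) ** 3 + (1 - Om) * (1 + z) ** (3 * (1 + w)))

def chi2(params):
    Om, w = params
    z, H_obs, sig = data.T
    return np.sum(((H_obs - H_xcdm(z, Om, w)) / sig) ** 2)

best = minimize(chi2, x0=[0.3, -1.0], method="Nelder-Mead")
print(best.x)  # best-fit (Om, w)
```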
Abstract:
Polygonal Fresnel zone plates with a low number of sides have deserved attention in micro- and nano-optics, because they can be straightforwardly integrated in photonic devices and, at the same time, they represent a balance between the high focusing performance of a circular zone plate and the ease of fabricating polygons at micro- and nano-scales. Among them, the most representative family is that of Square Fresnel Zone Plates (SFZP). In this work, we propose two different customized designs of SFZP for optical wavelengths. Both designs are based on the optimization of an SFZP to perform as closely as possible to a usual Fresnel zone plate. In the first case, the criterion followed to compute it is the minimization of the difference between the area covered by the angular sector of the zone of the corresponding circular plate and the area covered by the polygon traced on the former. Such a requirement leads to a customized polygon-like Fresnel zone. The simplest one is a square zone with a pattern of phases repeating every five zones. Alternatively, an SFZP can be designed guided by the same criterion but with a new restriction: the distance between the borders of different zones remains unaltered. A comparison between the two lenses is carried out. The irradiance at focus is computed for both, and suitable figures of merit are defined to account for the difference between them.
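For orientation, the sketch below computes conventional circular Fresnel zone radii and then picks square half-widths by a simplified equal-area criterion. This is a cruder rule than the angular-sector matching described above, shown only for illustration; the wavelength and focal length are assumed placeholder values.

```python
import numpy as np

wavelength = 633e-9   # m, assumed He-Ne wavelength (placeholder)
focal = 5e-3          # m, assumed focal length (placeholder)
n_zones = 8

# Radii of a conventional circular Fresnel zone plate: r_n = sqrt(n*lambda*f + (n*lambda/2)^2)
n = np.arange(1, n_zones + 1)
r = np.sqrt(n * wavelength * focal + (n * wavelength / 2) ** 2)

# Simplified square design: choose half-widths s_n so each square (side 2*s_n, area 4*s_n^2)
# covers the same area as the circle of radius r_n.
s = np.sqrt(np.pi * r**2 / 4.0)
print(np.round(r * 1e6, 2))   # circular zone radii in microns
print(np.round(s * 1e6, 2))   # square half-widths in microns
```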
Abstract:
The hydrocycloning operation aims to separate solid-liquid suspensions and liquid-liquid emulsions through the action of the centrifugal force field. Hydrocyclones are devices of reduced size used in both clarification and thickening. They are employed in many areas, such as petrochemical and mineral processing, and offer advantages such as versatility and low maintenance cost. However, the demand to improve the process and to reduce costs has motivated several equipment optimization studies. The filtering hydrocyclone is a non-conventional device developed at FEQUI/UFU with the objective of improving the separation efficiency of hydrocycloning. The purpose of this study is to evaluate the effect of the operating conditions of feed concentration and underflow diameter on the performance of a filtering geometry optimized for the minimization of energy costs. The filtration effect was investigated by comparing the performance of the Optimized Filtering Hydrocyclone (HCOF) with that of the Optimized Concentrator Hydrocyclone (HCO). Because the performances of the two hydrocyclones were similar, filtration did not have a significant effect on the performance of the HCOF. It was found that, in this geometry, decreasing the underflow diameter was very favorable to the thickening operation: the concentration of a quartzite suspension at 1.0% of solids by volume was increased about 42 times when the 3 mm underflow diameter was used. Increasing the feed solids percentage was effective in decreasing the energy consumption, so that a minimum Euler number of 730 was reached at CVA = 10.0% v/v. However, a greater amount of solids in suspension leads to a lower equipment efficiency. Therefore, to minimize the underflow-to-throughput ratio and keep a high efficiency level, it is advisable to work with dilute suspensions (CVA = 1.0%) and the 3 mm underflow diameter (η = 67%). If it is necessary to work with a high feed concentration, the use of the 5 mm underflow diameter provides a rise in efficiency. The HCO hydrocyclone was compared to the traditional Rietema family of hydrocyclones and presented advantages such as higher efficiency (34% higher on average) and lower energy costs (20% lower on average). Finally, efficiency curves and a design equation were obtained for the HCO hydrocyclone, each with a satisfactory fit.
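The Euler number quoted above relates the pressure drop to the dynamic pressure of a characteristic velocity. The sketch below shows the usual definition based on the velocity in the cylindrical section; the operating values are placeholders, not data from the study.

```python
import math

# Placeholder operating data (not taken from the study above)
Q = 4.0e-4        # volumetric feed rate, m^3/s
Dc = 0.03         # diameter of the cylindrical section, m
rho = 1000.0      # fluid density, kg/m^3
dP = 1.5e5        # pressure drop across the hydrocyclone, Pa

u_c = 4.0 * Q / (math.pi * Dc**2)   # characteristic velocity in the cylindrical section
Eu = dP / (0.5 * rho * u_c**2)      # Euler number: pressure drop over dynamic pressure
print(round(u_c, 3), round(Eu))
```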
Abstract:
Supply chain operations directly affect service levels. Decisions on modifying facilities are generally made based on overall cost, leaving out the efficiency of each unit. By decomposing the supply chain superstructure, an efficiency analysis of the facilities (warehouses or distribution centers) that serve customers can be easily implemented. With the proposed algorithm, the selection of a facility is based on service level maximization and not just cost minimization, as the analysis filters all feasible solutions using the Data Envelopment Analysis (DEA) technique. Through multiple iterations, solutions are filtered via DEA and only the efficient ones are selected, leading to cost minimization. In this work, the problem of optimal supply chain network design is addressed with a DEA-based algorithm. A Branch and Efficiency (B&E) algorithm is deployed for the solution of this problem. In this DEA approach, each solution (potentially installed warehouse, plant, etc.) is treated as a Decision Making Unit and is thus characterized by inputs and outputs. The algorithm, through additional constraints named "efficiency cuts", selects only efficient solutions, providing better objective function values. The applicability of the proposed algorithm is demonstrated through illustrative examples.
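As a generic illustration of the DEA efficiency scoring that underlies such efficiency cuts (not the paper's B&E algorithm), the sketch below solves the standard input-oriented CCR envelopment linear program for each candidate unit; the input/output data are hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical facilities: two inputs (e.g. operating cost, capacity used), one output (demand served).
X = np.array([[2.0, 5.0], [3.0, 3.0], [6.0, 1.0], [4.0, 4.0]])   # n_units x n_inputs
Y = np.array([[1.0], [1.0], [1.0], [1.0]])                       # n_units x n_outputs

def ccr_efficiency(k):
    """Input-oriented CCR score of unit k: min theta s.t. X' lam <= theta*x_k, Y' lam >= y_k, lam >= 0."""
    n, m = X.shape
    s = Y.shape[1]
    c = np.r_[1.0, np.zeros(n)]                 # decision vector: [theta, lam_1..lam_n]
    A_in = np.c_[-X[k].reshape(m, 1), X.T]      # sum_j lam_j x_ij - theta x_ik <= 0
    A_out = np.c_[np.zeros((s, 1)), -Y.T]       # -sum_j lam_j y_rj <= -y_rk
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(m), -Y[k]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 1))
    return res.fun

for k in range(len(X)):
    print(k, round(ccr_efficiency(k), 3))  # units 0-2 are efficient (1.0); unit 3 scores 0.75
```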
Abstract:
Advances in three related areas, namely state-space modeling, sequential Bayesian learning, and decision analysis, are addressed, focusing on the statistical challenges of scalability and the associated dynamic sparsity. The key theme that ties the three areas together is Bayesian model emulation: solving challenging analysis and computational problems using creative model emulators. This idea underpins theoretical and applied advances in non-linear, non-Gaussian state-space modeling, dynamic sparsity, decision analysis, and statistical computation, across the linked contexts of multivariate time series and dynamic network studies. Examples and applications in financial time series and portfolio analysis, macroeconomics, and internet studies from computational advertising demonstrate the utility of the core methodological innovations.
Chapter 1 summarizes the three areas/problems and the key idea of emulation in those areas. Chapter 2 discusses the sequential analysis of latent threshold models with the use of emulating models that allow analytical filtering to enhance the efficiency of posterior sampling. Chapter 3 examines the emulator model in decision analysis, or the synthetic model, which is equivalent to the loss function in the original minimization problem, and shows its performance in the context of sequential portfolio optimization. Chapter 4 describes a method for modeling streaming data of counts observed on a large network that relies on emulating the whole, dependent network model by independent, conjugate sub-models customized to each set of flows. Chapter 5 reviews those advances and makes concluding remarks.
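Chapter 3 above frames portfolio decisions as a loss minimization. As a generic illustration of that kind of quadratic loss (not the dissertation's synthetic-model construction), the sketch below computes minimum-variance portfolio weights under full investment; the covariance matrix is a placeholder.

```python
import numpy as np

# Placeholder covariance matrix of asset returns (not data from the dissertation)
Sigma = np.array([[0.040, 0.006, 0.010],
                  [0.006, 0.025, 0.004],
                  [0.010, 0.004, 0.060]])

# Minimum-variance portfolio: minimize w' Sigma w subject to sum(w) = 1.
# The KKT conditions give the closed form w = Sigma^{-1} 1 / (1' Sigma^{-1} 1).
ones = np.ones(Sigma.shape[0])
w = np.linalg.solve(Sigma, ones)
w /= ones @ w
print(np.round(w, 3), float(w @ Sigma @ w))  # weights and resulting portfolio variance
```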
Abstract:
Rolling Isolation Systems provide a simple and effective means for protecting components from horizontal floor vibrations. In these systems a platform rolls on four steel balls which, in turn, rest within shallow bowls. The trajectories of the balls are uniquely determined by the horizontal and rotational velocity components of the rolling platform, and thus provide nonholonomic constraints. In general, the bowls are not parabolic, so the potential energy function of this system is not quadratic. This thesis presents the application of Gauss's Principle of Least Constraint to the modeling of rolling isolation platforms. The equations of motion are described in terms of a redundant set of constrained coordinates. Coordinate accelerations are uniquely determined at any point in time via Gauss's Principle by solving a linearly constrained quadratic minimization. In the absence of any modeled damping, the equations of motion conserve energy. The mathematical model is then used to find the bowl profile that minimizes the response acceleration subject to a displacement constraint.
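A minimal sketch of the linearly constrained quadratic minimization prescribed by Gauss's principle is given below: among accelerations consistent with the linear constraints, the true acceleration minimizes the mass-weighted deviation from the unconstrained acceleration. The mass matrix, forces, and constraint row are placeholders, not the bowl model of the thesis.

```python
import numpy as np

# Gauss's principle: minimize (qdd - M^{-1}f)' M (qdd - M^{-1}f) subject to A qdd = c.
M = np.diag([2.0, 2.0, 0.5])            # placeholder generalized mass matrix
f = np.array([0.0, -9.81 * 2.0, 0.0])   # placeholder generalized forces
A = np.array([[1.0, -1.0, 0.2]])        # placeholder linear acceleration constraint A qdd = c
c = np.array([0.0])

# KKT system of the equality-constrained quadratic minimization:
# [M  A'] [qdd]   [f]
# [A  0 ] [lam] = [c]
K = np.block([[M, A.T], [A, np.zeros((A.shape[0], A.shape[0]))]])
rhs = np.concatenate([f, c])
sol = np.linalg.solve(K, rhs)
qdd, lam = sol[:M.shape[0]], sol[M.shape[0]:]
print(qdd, lam)  # constrained accelerations and constraint multiplier
```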
Abstract:
Bayesian nonparametric models, such as the Gaussian process and the Dirichlet process, have been extensively applied for target kinematics modeling in various applications including environmental monitoring, traffic planning, endangered species tracking, dynamic scene analysis, autonomous robot navigation, and human motion modeling. As shown by these successful applications, Bayesian nonparametric models are able to adjust their complexity adaptively from data as necessary, and are resistant to overfitting or underfitting. However, most existing works assume that the sensor measurements used to learn the Bayesian nonparametric target kinematics models are obtained a priori, or that the target kinematics can be measured by the sensor at any given time throughout the task. Little work has been done on controlling a sensor with a bounded field of view to obtain measurements of mobile targets that are most informative for reducing the uncertainty of the Bayesian nonparametric models. To present a systematic sensor planning approach to learning Bayesian nonparametric models, the Gaussian process target kinematics model is introduced first; it is capable of describing time-invariant spatial phenomena, such as ocean currents, temperature distributions, and wind velocity fields. The Dirichlet process-Gaussian process target kinematics model is subsequently discussed for modeling mixtures of mobile targets, such as pedestrian motion patterns.
Novel information theoretic functions are developed for these introduced Bayesian nonparametric target kinematics models to represent the expected utility of measurements as a function of sensor control inputs and random environmental variables. A Gaussian process expected Kullback Leibler divergence is developed as the expectation of the KL divergence between the current (prior) and posterior Gaussian process target kinematics models with respect to the future measurements. Then, this approach is extended to develop a new information value function that can be used to estimate target kinematics described by a Dirichlet process-Gaussian process mixture model. A theorem is proposed that shows the novel information theoretic functions are bounded. Based on this theorem, efficient estimators of the new information theoretic functions are designed, which are proved to be unbiased with the variance of the resultant approximation error decreasing linearly as the number of samples increases. Computational complexities for optimizing the novel information theoretic functions under sensor dynamics constraints are studied, and are proved to be NP-hard. A cumulative lower bound is then proposed to reduce the computational complexity to polynomial time.
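The expected-KL functions above build on the closed-form KL divergence between Gaussian distributions evaluated at a finite set of points. The sketch below shows only that generic building block, not the dissertation's estimator; the prior and posterior moments are placeholders.

```python
import numpy as np

def kl_gaussians(mu0, S0, mu1, S1):
    """KL( N(mu0,S0) || N(mu1,S1) ) in closed form."""
    k = mu0.size
    S1_inv = np.linalg.inv(S1)
    d = mu1 - mu0
    return 0.5 * (np.trace(S1_inv @ S0) + d @ S1_inv @ d - k
                  + np.log(np.linalg.det(S1) / np.linalg.det(S0)))

# Placeholder GP prior and posterior over three test points
mu_prior = np.zeros(3)
S_prior = np.array([[1.0, 0.5, 0.2], [0.5, 1.0, 0.5], [0.2, 0.5, 1.0]])
mu_post = np.array([0.3, 0.1, -0.2])
S_post = 0.5 * S_prior + 0.05 * np.eye(3)   # a tighter (posterior-like) covariance
print(kl_gaussians(mu_post, S_post, mu_prior, S_prior))
```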
Three sensor planning algorithms are developed according to the assumptions on the target kinematics and the sensor dynamics. For problems where the control space of the sensor is discrete, a greedy algorithm is proposed. The efficiency of the greedy algorithm is demonstrated by a numerical experiment with ocean-current data obtained from moored buoys. A sweep line algorithm is developed for applications where the sensor control space is continuous and unconstrained. Synthetic simulations as well as physical experiments with ground robots and a surveillance camera are conducted to evaluate the performance of the sweep line algorithm. Moreover, a lexicographic algorithm is designed, based on the cumulative lower bound of the novel information theoretic functions, for the scenario where the sensor dynamics are constrained. Numerical experiments with real data collected from indoor pedestrians by a commercial pan-tilt camera are performed to examine the lexicographic algorithm. Results from both the numerical simulations and the physical experiments show that the three sensor planning algorithms proposed in this dissertation, based on the novel information theoretic functions, are superior at learning the target kinematics with little or no prior knowledge.
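As a generic illustration of the discrete-control greedy strategy mentioned above (not the dissertation's algorithm), the sketch below selects, at each planning step, the control input with the largest estimated information value; the control set and the toy value function are hypothetical.

```python
def greedy_plan(controls, info_value, horizon):
    """Greedily pick one control per step, maximizing an information value function.

    controls:   list of admissible control inputs (assumed discrete)
    info_value: callable (control, history) -> estimated information gain (placeholder)
    horizon:    number of planning steps
    """
    history = []
    for _ in range(horizon):
        best = max(controls, key=lambda u: info_value(u, history))
        history.append(best)
    return history

# Toy example: controls are pan angles; value decays once an angle has been revisited.
controls = [-30, 0, 30, 60]
value = lambda u, hist: abs(u) / 60.0 + (0.5 if u not in hist else 0.0)
print(greedy_plan(controls, value, horizon=3))
```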
Abstract:
To better understand the problem of adolescent disengagement in physical education, and from physical activity in general, this study aimed to examine the links between, on the one hand, the motivational climate in physical education class (mastery and performance) and the psychological needs of students of both sexes (competence, autonomy, and relatedness) and, on the other hand, the achievement goals (mastery, performance-approach, and performance-avoidance) pursued in physical education. It also aimed to examine the impact of achievement goals on adolescents' attitudes and habits toward physical activity in general. To meet these objectives, 909 students (mean age = 13.87 [0.94]) completed self-report questionnaires three times during the school year. Structural equation models (AMOS 22), invariance analyses, and the unconstrained approach were used to analyze the data. The results indicate that students' achievement goals vary according to the perceived motivational climate and that the feeling of competence is positively related to all three achievement goals. These relationships were invariant across students' sex. Moreover, only one climate-need interaction proved significant: the interaction between the mastery climate and the feeling of autonomy negatively predicts the adoption of performance-avoidance goals. This means that the perception of a mastery climate reduces students' adoption of performance-avoidance goals, but only when they display a strong sense of autonomy. Finally, the adoption of mastery goals and performance-approach goals in physical education positively influences students' attitudes, which in turn influence their physical activity habits. Only the adoption of performance-approach goals has a direct positive relation with students' habits. In conclusion, the physical education teacher can act on students' motivation and engagement in class, but also outside class, by establishing a mastery motivational climate and by helping students satisfy their need for competence.
Abstract:
People go through life making all kinds of decisions, and some of these decisions affect their demand for transportation, for example their choices of where to live and where to work, how and when to travel, and which route to take. Transport-related choices are typically time dependent and characterized by a large number of alternatives that can be spatially correlated. This thesis deals with models that can be used to analyze and predict discrete choices in large-scale networks. The proposed models and methods are highly relevant for, but not limited to, transport applications. We model decisions as sequences of choices within the dynamic discrete choice framework, also known as parametric Markov decision processes. Such models are known to be difficult to estimate and to apply for prediction because dynamic programming problems need to be solved in order to compute choice probabilities. In this thesis we show that it is possible to exploit the network structure and the flexibility of dynamic programming so that the dynamic discrete choice modeling approach is not only useful for modeling time-dependent choices, but also makes it easier to model large-scale static choices. The thesis consists of seven articles containing a number of models and methods for estimating, applying, and testing large-scale discrete choice models. In the following we group the contributions under three themes: route choice modeling, large-scale multivariate extreme value (MEV) model estimation, and nonlinear optimization algorithms. Five articles are related to route choice modeling. We propose different dynamic discrete choice models that allow paths to be correlated, based on the MEV and mixed logit models. The resulting route choice models become expensive to estimate, and we deal with this challenge by proposing innovative methods that reduce the estimation cost. For example, we propose a decomposition method that not only opens up the possibility of mixing, but also speeds up the estimation of simple logit models, which also has implications for traffic simulation. Moreover, we compare the utility maximization and regret minimization decision rules, and we propose a misspecification test for logit-based route choice models. The second theme is related to the estimation of static discrete choice models with large choice sets. We establish that a class of MEV models can be reformulated as dynamic discrete choice models on the networks of correlation structures. These dynamic models can then be estimated quickly using dynamic programming techniques and an efficient nonlinear optimization algorithm. Finally, the third theme focuses on structured quasi-Newton techniques for estimating discrete choice models by maximum likelihood. We examine and adapt switching methods that can be easily integrated into usual optimization algorithms (line search and trust region) to accelerate the estimation process. The proposed dynamic discrete choice models and estimation methods can be used in various discrete choice applications. In the area of big data analytics, models that can deal with large choice sets and sequential choices are important. Our research can therefore be of interest in various demand analysis applications (predictive analytics) or can be integrated with optimization models (prescriptive analytics). Furthermore, our studies indicate the potential of dynamic programming techniques in this context, even for static models, which opens up a variety of future research directions.
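As a generic illustration of maximum likelihood estimation of a logit model with a quasi-Newton method (the kind of optimization discussed above, not the thesis' structured switching algorithm), the sketch below fits a small multinomial logit on synthetic data; all data and parameter values are placeholders.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Synthetic multinomial logit data: 3 alternatives, 2 attributes per alternative.
n_obs, n_alt, n_attr = 500, 3, 2
X = rng.normal(size=(n_obs, n_alt, n_attr))
beta_true = np.array([1.0, -0.5])
util = X @ beta_true + rng.gumbel(size=(n_obs, n_alt))
y = util.argmax(axis=1)                       # chosen alternative per observation

def neg_log_likelihood(beta):
    v = X @ beta                              # systematic utilities
    v -= v.max(axis=1, keepdims=True)         # numerical stabilization
    log_p = v - np.log(np.exp(v).sum(axis=1, keepdims=True))
    return -log_p[np.arange(n_obs), y].sum()

# BFGS is a quasi-Newton method with a line search satisfying the Wolfe conditions.
res = minimize(neg_log_likelihood, x0=np.zeros(n_attr), method="BFGS")
print(res.x)   # should be close to beta_true
```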
Abstract:
Cloud computing realizes the long-held dream of turning computing capability into a type of utility. It has the potential to fundamentally change the landscape of the IT industry and our way of life. However, as cloud computing expands substantially in both scale and scope, ensuring its sustainable growth is a critical problem. Service providers have long been suffering from high operational costs, especially those associated with the skyrocketing power consumption of large data centers. In the meantime, while efficient power/energy utilization is indispensable for the sustainable growth of cloud computing, service providers must also satisfy users' quality of service (QoS) requirements. This problem becomes even more challenging considering the increasingly stringent power/energy and QoS constraints, as well as other factors such as the highly dynamic, heterogeneous, and distributed nature of the computing infrastructures. In this dissertation, we study the problem of delay-sensitive cloud service scheduling for the sustainable development of cloud computing. We first focus our research on the development of scheduling methods for delay-sensitive cloud services on a single server, with the goal of maximizing a service provider's profit. We then extend our study to scheduling cloud services in distributed environments. In particular, we develop a queue-based model and derive efficient request dispatching and processing decisions in a multi-electricity-market environment to improve the profits for service providers. We next study a multi-tier service scheduling problem. By carefully assigning sub-deadlines to the service tiers, our approach can significantly improve resource usage efficiency with statistically guaranteed QoS. Finally, we study the power-conscious resource provisioning problem for service requests with different QoS requirements. By properly sharing computing resources among different requests, our method statistically guarantees all QoS requirements with a minimized number of powered-on servers and thus minimized power consumption. The significance of our research is that it is part of the integrated effort from both industry and academia to ensure the sustainable growth of cloud computing as it continues to evolve and change our society profoundly.
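To give a flavor of the trade-off between powered-on servers and statistically guaranteed delay QoS, the sketch below uses a simple M/M/1 approximation with traffic split evenly across servers; this is an illustrative assumption, not the dissertation's queue model, and all workload and QoS values are placeholders.

```python
import math

# Placeholder workload and QoS parameters (not from the dissertation)
arrival_rate = 120.0      # requests per second, total
service_rate = 10.0       # requests per second, per server
deadline = 0.5            # seconds
epsilon = 0.05            # allowed probability of missing the deadline

def servers_needed(lam, mu, d, eps):
    """Smallest n such that, splitting traffic evenly over n M/M/1 servers,
    the sojourn-time tail P(T > d) = exp(-(mu - lam/n) * d) is at most eps."""
    n = math.ceil(lam / mu) + 1               # start just above the stability limit lam/n < mu
    while math.exp(-(mu - lam / n) * d) > eps:
        n += 1
    return n

n = servers_needed(arrival_rate, service_rate, deadline, epsilon)
print(n, math.exp(-(service_rate - arrival_rate / n) * deadline))
```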