938 results for Point method


Relevance:

100.00%

Publisher:

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance:

100.00%

Publisher:

Abstract:

This paper proposes a technique for solving the multiobjective environmental/economic dispatch problem using the weighted sum and ε-constraint strategies, which transform the problem into a set of single-objective problems. In the first strategy, the objective function is a weighted sum of the environmental and economic objective functions. The second strategy treats one of the objective functions (in this case, the environmental function) as a problem constraint, bounded above by a constant. A specific predictor-corrector primal-dual interior point method using a modified log-barrier is proposed for solving the set of single-objective problems generated by these strategies. The purpose of the modified barrier approach is to solve the problem with a relaxation of its original feasible region, enabling the method to be initialized with infeasible points. Tests of the proposed solution technique indicate (i) the efficiency of the method with respect to initialization at infeasible points, and (ii) its ability to find a set of efficient solutions for the multiobjective environmental/economic dispatch problem.
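The two scalarization strategies can be illustrated with a minimal sketch. The two-generator objective functions below are invented for illustration (they are not the paper's dispatch data), and the single-objective problems are solved by grid search rather than by the interior point method the paper proposes.

```python
import numpy as np

# Hypothetical two-generator dispatch with demand P1 + P2 = 1.0 p.u.
# Cost and emission curves are illustrative quadratics, not real unit data.
def cost(p1):      # economic objective
    return 2.0 * p1**2 + 1.0 * (1.0 - p1)**2

def emission(p1):  # environmental objective
    return 0.5 * p1**2 + 3.0 * (1.0 - p1)**2

grid = np.linspace(0.0, 1.0, 10001)

# Weighted-sum strategy: minimize w*cost + (1-w)*emission for a chosen w.
def weighted_sum(w):
    vals = w * cost(grid) + (1.0 - w) * emission(grid)
    return grid[np.argmin(vals)]

# epsilon-constraint strategy: minimize cost subject to emission <= eps.
def eps_constraint(eps):
    feasible = emission(grid) <= eps
    vals = np.where(feasible, cost(grid), np.inf)
    return grid[np.argmin(vals)]

# Sweeping w (or eps) traces out a set of efficient (Pareto) solutions.
pareto_ws = [weighted_sum(w) for w in (0.1, 0.5, 0.9)]
pareto_ec = [eps_constraint(e) for e in (0.6, 1.0, 2.0)]
```

Each value of the weight `w` or the bound `eps` yields one single-objective problem; in the paper these are the problems handed to the modified log-barrier interior point method.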

Relevance:

100.00%

Publisher:

Abstract:

Thesis (Ph.D.)--University of Washington, 2016-08

Relevance:

70.00%

Publisher:

Abstract:

Two different mesoporous TiO2 films were coated onto QCM discs and fired at 450 °C for 30 min. The first film was derived from a sol-gel paste popular in the early days of dye-sensitised solar cell (DSSC) research, a TiO2(sg) film. The other was a commercial colloidal paste used to make current DSSC cells, a TiO2(ds) film. A QCM was used to determine the mass of the TiO2 film deposited on each disc and the increase in the mass of the film when immersed in water/glycerol solutions spanning the range 0-70 wt%. The results of this work reveal that with both mesoporous TiO2 films the solution fills the film's pores and acts as a rigid mass, allowing the porosity of each film to be calculated as 59.1% and 71.6% for the TiO2(sg) and TiO2(ds) films, respectively. These results, coupled with surface-area data, allowed the pore radii of the two films to be calculated as 9.6 and 17.8 nm, respectively. The method is then simplified further, to just a few frequency measurements in water and in air, yielding the same porosity values. The value of this 'one point' method for making porosity measurements is discussed briefly.
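The porosity arithmetic implied by the abstract can be sketched as follows. The densities and example masses are assumptions for illustration (anatase TiO2 density, pure water), not values from the paper; the point is only that, because the pore liquid behaves as a rigid mass, the mass uptake converts directly to pore volume.

```python
# Illustrative porosity estimate from QCM mass data. The liquid filling the
# pores couples rigidly to the crystal, so the pore volume equals the mass
# uptake divided by the liquid density. Densities are assumed values.
RHO_TIO2 = 3.9e-3   # g/mm^3, anatase TiO2 (assumed)
RHO_WATER = 1.0e-3  # g/mm^3

def porosity(film_mass_g, water_uptake_g):
    """Pore volume fraction = pore volume / (solid volume + pore volume)."""
    v_solid = film_mass_g / RHO_TIO2
    v_pore = water_uptake_g / RHO_WATER
    return v_pore / (v_solid + v_pore)
```

With the film mass from a dry-air measurement and the uptake from a single measurement in water, this is the 'one point' estimate the abstract refers to.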

Relevance:

70.00%

Publisher:

Abstract:

We study the preconditioning of symmetric indefinite linear systems of equations that arise in the interior point solution of linear optimization problems. The preconditioning method we study exploits the block structure of the augmented matrix to design a preconditioner with a similar block structure, improving the spectral properties of the preconditioned matrix and hence the convergence rate of the iterative solution of the system. We also propose a two-phase algorithm that takes advantage of the spectral properties of the transformed matrix to solve for the Newton directions in the interior-point method. Numerical experiments on LP test problems from the NETLIB suite demonstrate the potential of the preconditioning method.
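How the block structure of the augmented matrix can be exploited is shown by the following minimal sketch (random illustrative data, not the paper's preconditioner): the saddle-point system is reduced by block elimination through the Schur complement instead of factorizing the full indefinite matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 3, 6
A = rng.standard_normal((m, n))
d = rng.uniform(1.0, 2.0, n)          # D = diag(d), symmetric positive definite
r1 = rng.standard_normal(n)
r2 = rng.standard_normal(m)

# Augmented (saddle-point) system   [D  A^T] [x]   [r1]
#                                   [A   0 ] [y] = [r2]
# Block elimination: solve S y = A D^{-1} r1 - r2 with S = A D^{-1} A^T,
# then recover x = D^{-1}(r1 - A^T y). Only the small m-by-m Schur
# complement S needs a factorization.
Dinv = 1.0 / d
S = (A * Dinv) @ A.T                  # A @ diag(Dinv) @ A.T via broadcasting
y = np.linalg.solve(S, A @ (Dinv * r1) - r2)
x = Dinv * (r1 - A.T @ y)
```

In an iterative setting, the same block elimination is what a structured preconditioner approximates cheaply at every interior point iteration.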

Relevance:

70.00%

Publisher:

Abstract:

Optimization methods that employ the classical Powell-Hestenes-Rockafellar augmented Lagrangian are useful tools for solving nonlinear programming problems. Their reputation declined over the last 10 years due to the comparative success of interior-point Newtonian algorithms, which are asymptotically faster. In this research, a combination of both approaches is evaluated. The idea is to produce a competitive method that is more robust and efficient than its `pure` counterparts on critical problems. Moreover, an additional hybrid algorithm is defined, in which the interior-point method is replaced by the Newtonian resolution of a Karush-Kuhn-Tucker (KKT) system identified by the augmented Lagrangian algorithm. The software used in this work is freely available through the Tango Project web page: http://www.ime.usp.br/~egbirgin/tango/.
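The augmented Lagrangian iteration itself is simple to sketch. The toy equality-constrained problem below is an assumption for illustration (it is not from the paper or the TANGO code); its inner subproblem happens to be linear, so each outer iteration reduces to one linear solve followed by the first-order multiplier update.

```python
import numpy as np

# Minimal augmented-Lagrangian sketch on an illustrative problem:
# minimize x1^2 + x2^2  subject to  x1 + x2 = 1   (solution x = [0.5, 0.5]).
a = np.array([1.0, 1.0])
lam, rho = 0.0, 10.0      # multiplier estimate and penalty parameter
x = np.zeros(2)
for _ in range(20):
    # Inner problem: min_x  x.x + lam*h(x) + (rho/2)*h(x)^2,  h(x) = a.x - 1.
    # Its stationarity condition is linear: (2I + rho*a a^T) x = (rho - lam)*a.
    x = np.linalg.solve(2.0 * np.eye(2) + rho * np.outer(a, a),
                        (rho - lam) * a)
    h = a @ x - 1.0
    lam += rho * h        # first-order multiplier update
```

The hybrid idea in the abstract is to hand over, once such an outer loop has identified the active KKT system, to a faster Newtonian solve of that system.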

Relevance:

70.00%

Publisher:

Abstract:

This article presents a well-known interior point method (IPM) used to solve linear programming problems that appear as sub-problems in the solution of the long-term transmission network expansion planning problem. The linear programming problem appears when the transportation model is used and the planning problem is to be solved with a constructive heuristic algorithm (CHA) or a branch-and-bound algorithm. This paper shows the application of the IPM within a CHA. The IPM performed well and can thus be used as a tool inside the algorithm employed to solve the planning problem. Illustrative tests on electrical systems known from the specialized literature are shown. (C) 2005 Elsevier B.V. All rights reserved.

Relevance:

70.00%

Publisher:

Abstract:

The aim of solving the Optimal Power Flow problem is to determine the optimal state of an electric power transmission system, that is, the voltage magnitudes, phase angles, and transformer tap ratios that optimize the performance of a given system while satisfying its physical and operating constraints. The Optimal Power Flow problem is modeled as a large-scale mixed-discrete nonlinear programming problem. This paper proposes a method for handling the discrete variables of the problem based on a penalty function. Due to the inclusion of the penalty function in the objective function, a sequence of nonlinear programming problems with only continuous variables is obtained, and the solutions of these problems converge to a solution of the mixed problem. The resulting nonlinear programming problems are solved by a Primal-Dual Logarithmic-Barrier Method. Numerical tests using the IEEE 14-, 30-, 118- and 300-bus test systems indicate that the method is efficient. (C) 2012 Elsevier B.V. All rights reserved.
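The penalty idea for discrete variables can be sketched in one dimension. The sinusoidal penalty and the 0.025 tap step below are common illustrative choices, not necessarily the paper's exact penalty function; the continuous subproblem is solved here by grid search rather than a logarithmic-barrier method.

```python
import numpy as np

# Illustrative penalty treatment of a discrete variable: a transformer tap t
# must lie on a 0.025 grid. A penalty that vanishes only at grid points
# steers the relaxed, continuous problem toward a discrete value.
STEP = 0.025

def objective(t):
    return (t - 1.03) ** 2           # hypothetical smooth objective

def penalty(t):
    # zero exactly at integer multiples of STEP, positive elsewhere
    return np.sin(np.pi * t / STEP) ** 2

def penalized_min(alpha, grid=np.linspace(0.9, 1.1, 200001)):
    vals = objective(grid) + alpha * penalty(grid)
    return grid[np.argmin(vals)]

t_relaxed = penalized_min(0.0)       # ~1.03, off the tap grid
t_discrete = penalized_min(0.5)      # snaps near 1.025, a valid tap setting
```

Increasing the penalty weight over a sequence of continuous problems is what drives the solutions toward a solution of the mixed-discrete problem.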

Relevance:

70.00%

Publisher:

Abstract:

Background and Objectives: Improved ultrasound and needle technology have made popliteal sciatic nerve blockade a popular anesthetic technique. Imaging to localize the branch point of the common peroneal and posterior tibial components is important because successful blockade techniques vary with respect to injection of the common trunk proximally or separate injections distally. Nerve stimulation, ultrasound, cadaveric, and magnetic resonance studies demonstrate variability in distance and discordance between imaging and anatomic examination of the branch point. The popliteal crease and imprecise, inaccessible landmarks render measurement of the branch point variable and inaccurate. The purpose of this study was to use the tibial tuberosity, a fixed bony reference, to measure the distance to the branch point. Method: During popliteal sciatic nerve blockade in the supine position, the branch point was identified by ultrasound and the block needle was inserted. The vertical distance between the tibial tuberosity prominence and the needle insertion point was measured. Results: In 92 patients the branch point lay a mean distance of 12.91 cm proximal to the tibial tuberosity, more proximal in male (13.74 cm) than in female patients (12.08 cm). Body height is related to the branch point distance, which is more proximal in taller patients. Separation into two nerve branches during local anesthetic injection supports notions of a more proximal neural anatomic division. Limitations: Imaging of the sciatic nerve division may not equal its true anatomic separation. Conclusion: Refinements in identification and resolution of the anatomic division of the nerve branch point will determine whether more accurate localization is of any clinical significance for successful nerve blockade.

Relevance:

70.00%

Publisher:

Abstract:

Linear Programming (LP) is a powerful decision-making tool extensively used in various economic and engineering activities. In the early stages the success of LP was mainly due to the efficiency of the simplex method. After the appearance of Karmarkar's paper, the focus of most research shifted to the field of interior point methods. The present work is concerned with investigating and efficiently implementing the latest techniques in this field, taking sparsity into account. The performance of these implementations on different classes of LP problems is reported here. The preconditioned conjugate gradient method is one of the most powerful tools for the solution of the least squares problem present in every iteration of all interior point methods. The effect of using different preconditioners on a range of problems with various condition numbers is presented. Decomposition algorithms have been one of the main fields of research in linear programming over the last few years. After reviewing the latest decomposition techniques, three promising methods were chosen and implemented. Sparsity is again a consideration, and suggestions are included for improvements when solving problems with these methods. Finally, experimental results on randomly generated data are reported and compared with an interior point method. The efficient implementation of the decomposition methods considered in this study requires the solution of quadratic subproblems, so a review of recent work on algorithms for convex quadratic programming was performed. The most promising algorithms are discussed and implemented, taking sparsity into account. The performance of these algorithms on randomly generated separable and non-separable problems is also reported.
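The preconditioned conjugate gradient step referred to above can be sketched as follows. The system A D Aᵀ x = b mirrors the normal-equations form that arises in interior point iterations, but the data and the simple Jacobi (diagonal) preconditioner here are illustrative assumptions.

```python
import numpy as np

def pcg(M_apply, b, precond, tol=1e-10, maxit=200):
    """Preconditioned conjugate gradients for an SPD system (textbook form)."""
    x = np.zeros_like(b)
    r = b - M_apply(x)              # residual
    z = precond(r)                  # preconditioned residual
    p = z.copy()                    # search direction
    rz = r @ z
    for _ in range(maxit):
        Mp = M_apply(p)
        alpha = rz / (p @ Mp)
        x += alpha * p
        r -= alpha * Mp
        if np.linalg.norm(r) < tol:
            break
        z = precond(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Normal-equations system A D A^T x = b, as in IPM iterations
# (A and the scaling d are random illustrative data).
rng = np.random.default_rng(1)
A = rng.standard_normal((5, 12))
d = rng.uniform(0.1, 10.0, 12)      # ill-conditioned diagonal scaling
M = (A * d) @ A.T
b = rng.standard_normal(5)
jacobi = 1.0 / np.diag(M)           # diagonal (Jacobi) preconditioner
x = pcg(lambda v: M @ v, b, lambda r: jacobi * r)
```

Swapping `precond` for an incomplete-Cholesky or structured preconditioner, as studied in the work above, changes only one line of the caller.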

Relevance:

70.00%

Publisher:

Abstract:

The accurate and reliable estimation of travel time based on point detector data is needed to support Intelligent Transportation System (ITS) applications. It has been found that the quality of travel time estimation is a function of the method used in the estimation and varies for different traffic conditions. In this study, two hybrid on-line travel time estimation models, and their corresponding off-line methods, were developed to achieve better estimation performance under various traffic conditions, including recurrent congestion and incidents. The first model combines the Mid-Point method, which is a speed-based method, with a traffic flow-based method. The second model integrates two speed-based methods: the Mid-Point method and the Minimum Speed method. In both models, the switch between travel time estimation methods is based on the congestion level and queue status automatically identified by clustering analysis. During incident conditions with rapidly changing queue lengths, shock wave analysis-based refinements are applied for on-line estimation to capture the fast queue propagation and recovery. Travel time estimates obtained from existing speed-based methods, traffic flow-based methods, and the models developed were tested using both simulation and real-world data. The results indicate that all tested methods performed at an acceptable level during periods of low congestion. However, their performances vary with an increase in congestion. Comparisons with other estimation methods also show that the developed hybrid models perform well in all cases. Further comparisons between the on-line and off-line travel time estimation methods reveal that off-line methods perform significantly better only during fast-changing congested conditions, such as during incidents. 
The impacts of major influential factors on the performance of travel time estimation, including data preprocessing procedures, detector errors, detector spacing, frequency of travel time updates to traveler information devices, travel time link length, and posted travel time range, were investigated in this study. The results show that these factors have more significant impacts on the estimation accuracy and reliability under congested conditions than during uncongested conditions. For the incident conditions, the estimation quality improves with the use of a short rolling period for data smoothing, more accurate detector data, and frequent travel time updates.
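The speed-based Mid-Point idea mentioned above can be sketched as follows. The detector layout and speeds are assumed example data, and the sketch covers only the basic spatial allocation (each detector's spot speed is assumed to hold from the midpoint of the upstream gap to the midpoint of the downstream gap), not the hybrid switching or shock wave refinements developed in the study.

```python
# Illustrative Mid-Point travel-time estimate for one link.
def midpoint_travel_time(positions_mi, speeds_mph):
    """positions_mi: detector locations along the link, increasing order.
    speeds_mph: spot speed measured at each detector.
    Returns the estimated link travel time in seconds."""
    n = len(positions_mi)
    total_hours = 0.0
    for i in range(n):
        # each detector's speed applies between the adjacent gap midpoints
        left = positions_mi[0] if i == 0 else (positions_mi[i-1] + positions_mi[i]) / 2
        right = positions_mi[-1] if i == n-1 else (positions_mi[i] + positions_mi[i+1]) / 2
        total_hours += (right - left) / speeds_mph[i]
    return total_hours * 3600.0

# 3-mile link, detectors every mile, uniform 60 mph
t = midpoint_travel_time([0.0, 1.0, 2.0, 3.0], [60.0, 60.0, 60.0, 60.0])
```

A hybrid model in the study's sense would replace this estimate with a flow-based or Minimum Speed estimate when clustering analysis flags a congested or queued regime.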

Relevance:

60.00%

Publisher:

Abstract:

Background: Although class attendance is linked to academic performance, questions remain about what determines students’ decisions to attend or miss class. Aims: In addition to the constructs of a common decision-making model, the theory of planned behaviour, the present study examined the influence of student role identity and university student (in-group) identification for predicting both the initiation and maintenance of students’ attendance at voluntary peer-assisted study sessions in a statistics subject. Sample: University students enrolled in a statistics subject were invited to complete a questionnaire at two time points across the academic semester. A total of 79 university students completed questionnaires at the first data collection point, with 46 students completing the questionnaire at the second data collection point. Method: Twice during the semester, students’ attitudes, subjective norm, perceived behavioural control, student role identity, in-group identification, and intention to attend study sessions were assessed via on-line questionnaires. Objective measures of class attendance records for each half-semester (or ‘term’) were obtained. Results: Across both terms, students’ attitudes predicted their attendance intentions, with intentions predicting class attendance. Earlier in the semester, in addition to perceived behavioural control, both student role identity and in-group identification predicted students’ attendance intentions, with only role identity influencing intentions later in the semester. Conclusions: These findings highlight the possible chronology that different identity influences have in determining students’ initial and maintained attendance at voluntary sessions designed to facilitate their learning.

Relevance:

60.00%

Publisher:

Abstract:

Background Accelerometers have become one of the most common methods of measuring physical activity (PA). Thus, validity of accelerometer data reduction approaches remains an important research area. Yet, few studies directly compare data reduction approaches and other PA measures in free-living samples. Objective To compare PA estimates provided by 3 accelerometer data reduction approaches, steps, and 2 self-reported estimates: Crouter's 2-regression model, Crouter's refined 2-regression model, the weighted cut-point method adopted in the National Health and Nutrition Examination Survey (NHANES; 2003-2004 and 2005-2006 cycles), steps, IPAQ, and 7-day PA recall. Methods A worksite sample (N = 87) completed online surveys and wore ActiGraph GT1M accelerometers and pedometers (SW-200) during waking hours for 7 consecutive days. Daily time spent in sedentary, light, moderate, and vigorous intensity activity and the percentage of participants meeting PA recommendations were calculated and compared. Results Crouter's 2-regression (161.8 +/- 52.3 minutes/day) and refined 2-regression (137.6 +/- 40.3 minutes/day) models provided significantly higher estimates of moderate and vigorous PA and proportions of those meeting PA recommendations (91% and 92%, respectively) as compared with the NHANES weighted cut-point method (39.5 +/- 20.2 minutes/day, 18%). Differences between other measures were also significant. Conclusions When comparing 3 accelerometer cut-point methods, steps, and self-report measures, estimates of PA participation vary substantially.
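A cut-point approach in its simplest form looks like the sketch below. The thresholds are commonly cited adult ActiGraph cut-points, used here as assumptions for illustration; the NHANES weighting scheme and the 2-regression models compared in the study are not reproduced.

```python
# Illustrative cut-point classification of accelerometer counts per minute.
# Thresholds are assumed example values, not the study's exact scheme.
CUTS = [(0, 99, "sedentary"), (100, 1951, "light"),
        (1952, 5724, "moderate"), (5725, float("inf"), "vigorous")]

def classify(cpm):
    """Map one minute's activity counts to an intensity category."""
    for lo, hi, label in CUTS:
        if lo <= cpm <= hi:
            return label

def minutes_mvpa(counts_per_minute):
    """Daily moderate-to-vigorous PA minutes from per-minute counts."""
    return sum(1 for c in counts_per_minute
               if classify(c) in ("moderate", "vigorous"))
```

Because each data reduction approach draws these category boundaries differently, the same raw counts can yield the widely varying MVPA estimates the study reports.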

Relevance:

60.00%

Publisher:

Abstract:

The exchange of iron species from iron(III) chloride solutions with a strong acid cation resin has been investigated in relation to a variety of water and wastewater applications. A detailed equilibrium isotherm analysis was conducted wherein models such as Langmuir-Vageler, Competitive Langmuir, Freundlich, Temkin, Dubinin-Astakhov, Sips and Brouers-Sotolongo were applied to the experimental data. An important conclusion was that both the bottle-point method and the solution normality used to generate the ion exchange equilibrium information influenced which sorption model fitted the isotherm profiles optimally. Invariably, the calculated value for the maximum loading of iron on the strong acid cation resin was substantially higher than the value of 47.1 g/kg of resin that would occur if one Fe3+ ion exchanged for three "H+" sites on the resin surface. Consequently, it was suggested that above pH 1, various iron complexes sorbed to the resin in a manner which required fewer than 3 sites per iron moiety. Column trials suggested that the iron loading was 86.6 g/kg of resin when water containing 1342 mg/L Fe(III) was passed through at 31.7 bed volumes per hour. Regeneration with 5 to 10% HCl solutions reclaimed approximately 90% of the exchange sites.
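Fitting one of the isotherm models named above can be sketched with the plain Langmuir model in its linearized form, C/q = C/q_max + 1/(q_max·b). The data below are synthetic (generated from assumed parameters), not the paper's measurements.

```python
import numpy as np

# Illustrative Langmuir isotherm fit via the linearized form
#   C/q = C/q_max + 1/(q_max * b)
# so a straight-line fit of C/q against C yields both parameters.
def fit_langmuir(C, q):
    """Return (q_max, b) from equilibrium concentrations C and loadings q."""
    slope, intercept = np.polyfit(C, C / q, 1)
    q_max = 1.0 / slope
    b = slope / intercept
    return q_max, b

# synthetic data from q = q_max*b*C/(1 + b*C) with assumed
# q_max = 86.6 g/kg and b = 0.02 L/mg
C = np.array([50.0, 200.0, 500.0, 1000.0, 1342.0])
q = 86.6 * 0.02 * C / (1.0 + 0.02 * C)
q_max, b = fit_langmuir(C, q)
```

The same pattern, with a different model equation, applies to the Freundlich, Temkin, or Dubinin-Astakhov fits compared in the study; which model fits best depends, as the abstract notes, on how the equilibrium data were generated.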

Relevance:

60.00%

Publisher:

Abstract:

Common to many types of water and wastewater is the presence of sodium ions, which can be removed by desalination technologies such as reverse osmosis and ion exchange. The focus of this investigation was ion exchange, as it potentially offered several advantages compared to competing methods. The equilibrium and column behaviour of a strong acid cation (SAC) resin was examined for the removal of sodium ions from aqueous sodium chloride solutions of varying normality as well as a coal seam gas water sample. The influence of the bottle-point method used to generate the sorption isotherms was evaluated, and the data were interpreted with the Langmuir-Vageler, Competitive Langmuir, Freundlich, and Dubinin-Astakhov models. With the constant concentration bottle-point method, the predicted maximum exchange levels of sodium ions on the resin ranged from 61.7 to 67.5 g Na/kg resin. The general trend was that the lower the initial concentration of sodium ions in the solution, the lower the maximum capacity of the resin for sodium ions. In contrast, the constant mass bottle-point method was found to be problematic in that the isotherm profiles may not be complete if experimental parameters are not chosen carefully. Column studies supported the observations of the equilibrium studies, with a maximum sodium loading of ca. 62.9 g Na/kg resin measured, in excellent agreement with the predictions from the constant concentration bottle-point method. Equilibria involving coal seam gas water were more complex due to the presence of sodium bicarbonate in solution, albeit the maximum loading capacity for sodium ions was in agreement with the results from the simpler sodium chloride solutions.
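The mass balance underlying any bottle-point experiment is worth making explicit. The numbers below are assumed example values, not the study's data; the two bottle-point variants differ only in which quantity is held fixed across bottles (initial concentration vs. resin mass), while the loading calculation itself is the same.

```python
# Illustrative bottle-point mass balance: the equilibrium loading follows
# from the drop in solution concentration over a known solution volume
# and resin mass.
def loading_g_per_kg(c0_mg_L, ce_mg_L, volume_L, resin_mass_g):
    """q_e = (C0 - Ce) * V / m, returned in g of ion per kg of resin."""
    mg_removed = (c0_mg_L - ce_mg_L) * volume_L
    return mg_removed / resin_mass_g   # mg/g is numerically equal to g/kg

# constant-concentration variant: fixed C0, resin mass varied per bottle
q = loading_g_per_kg(1000.0, 370.0, 0.1, 1.0)
```

In the constant-mass variant the resin mass is fixed and C0 is varied instead; as the abstract notes, poorly chosen parameters can then leave the isotherm profile incomplete.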