992 results for REDUCED GRADIENT


Relevance:

100.00%

Publisher:

Abstract:

A new approach for solving the optimal power flow (OPF) problem is established by combining the reduced gradient method and the augmented Lagrangian method with barriers and exploring specific characteristics of the relations between the variables of the OPF problem. Computer simulations on IEEE 14-bus and IEEE 30-bus test systems illustrate the method. (c) 2007 Elsevier Inc. All rights reserved.
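The building block this abstract combines with the augmented Lagrangian is the reduced gradient method itself. The sketch below is an illustrative implementation of a plain reduced-gradient step for a linearly constrained problem, not the paper's OPF algorithm; the toy objective, the fixed step size, and the function names are assumptions for the example.

```python
import numpy as np

def reduced_gradient_descent(grad, A, b, basic, nonbasic, xN0, step=0.1, iters=200):
    """Minimize f subject to A x = b by descending on the reduced gradient.

    Basic variables are eliminated through the equality constraints; a fixed
    step size is used for simplicity (real implementations use a line search).
    """
    AB, AN = A[:, basic], A[:, nonbasic]
    xN = xN0.astype(float)

    def assemble(xN):
        xB = np.linalg.solve(AB, b - AN @ xN)    # eliminate basic variables
        x = np.empty(len(basic) + len(nonbasic))
        x[basic], x[nonbasic] = xB, xN
        return x

    for _ in range(iters):
        g = grad(assemble(xN))
        # reduced gradient: derivative of f w.r.t. xN along the constraint set
        r = g[nonbasic] - AN.T @ np.linalg.solve(AB.T, g[basic])
        xN = xN - step * r
    return assemble(xN)

# Toy problem: min (x1-1)^2 + (x2-2)^2  s.t.  x1 + x2 = 3  (optimum at (1, 2))
grad = lambda x: np.array([2 * (x[0] - 1), 2 * (x[1] - 2)])
x = reduced_gradient_descent(grad, np.array([[1.0, 1.0]]), np.array([3.0]),
                             basic=[0], nonbasic=[1], xN0=np.array([0.0]))
```

The key point is that descent happens only in the nonbasic variables, with the basic ones recovered from the constraints at every step.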

Relevance:

70.00%

Publisher:

Abstract:

A method for linearly constrained optimization which modifies and generalizes recent box-constraint optimization algorithms is introduced. The new algorithm is based on a relaxed form of Spectral Projected Gradient iterations. Intercalated with these projected steps, internal iterations restricted to faces of the polytope are performed, which enhance the efficiency of the algorithm. Convergence proofs are given, and numerical experiments are presented and discussed. Software supporting this paper is available through the Tango Project web page: http://www.ime.usp.br/~egbirgin/tango/.
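The basic iteration family the abstract refers to can be sketched for the box-constrained case: project a gradient step onto the feasible box and use the Barzilai-Borwein spectral steplength. This is a minimal illustration, not the TANGO code, and it omits the nonmonotone line search that the full method needs for general convergence.

```python
import numpy as np

def spg_box(grad, x, l, u, iters=100, lam=1.0):
    """Spectral projected gradient sketch for min f(x) s.t. l <= x <= u."""
    project = lambda z: np.clip(z, l, u)
    x = project(x)
    g = grad(x)
    for _ in range(iters):
        x_new = project(x - lam * g)          # projected gradient step
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        sy = s @ y
        lam = (s @ s) / sy if sy > 1e-12 else 1.0   # spectral (BB) steplength
        x, g = x_new, g_new
    return x

# Toy: min ||x - c||^2 over the box [0, 1]^2, with c outside the box;
# the solution is the projection of c onto the box, i.e. (1, 0).
c = np.array([2.0, -0.5])
x = spg_box(lambda x: 2 * (x - c), np.array([0.5, 0.5]), 0.0, 1.0)
```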

Relevance:

60.00%

Publisher:

Abstract:

Purpose: To provide a summary of the classic paper "Differences in the accommodation stimulus response curves of adult myopes and emmetropes", published in Ophthalmic and Physiological Optics in 1998, and to provide an update on the topic of accommodation errors in myopia. Summary: The accommodation responses of 33 participants (10 emmetropes, 11 early-onset myopes and 12 late-onset myopes) aged 18-31 years were measured using the Canon Autoref R-1 free-space autorefractor, using three methods to vary the accommodation demand: decreasing distance (4 m to 0.25 m), negative lenses (0 to -4 D at 4 m) and positive lenses (+4 to 0 D at 0.25 m). We observed that the greatest accommodation errors occurred for the negative lens method, whereas minimal errors were observed using positive lenses. Adult progressing myopes had greater lags of accommodation than stable myopes at higher demands induced by negative lenses. Progressing myopes had shallower response gradients than the emmetropes and stable myopes; however, the reduced gradient was much less than that observed in children using similar methods. Recent Findings: This paper has often been cited as evidence that accommodation responses at near may be primarily reduced in adults with progressing myopia and not in stable myopes, and/or that challenging accommodation stimuli (negative lenses with monocular viewing) are required to generate larger accommodation errors. As an analogy, animals reared with hyperopic errors develop axial elongation and myopia. Retinal defocus signals are presumably passed to the retinal pigment epithelium and choroid, and then ultimately to the sclera, to modify eye length. A number of lens treatments that act to slow myopia progression may partially work through reducing accommodation errors.

Relevance:

60.00%

Publisher:

Abstract:

The Support Vector Machine (SVM) is a new and very promising classification technique developed by Vapnik and his group at AT&T Bell Labs. This new learning algorithm can be seen as an alternative training technique for Polynomial, Radial Basis Function and Multi-Layer Perceptron classifiers. An interesting property of this approach is that it is an approximate implementation of the Structural Risk Minimization (SRM) induction principle. The derivation of Support Vector Machines, their relationship with SRM, and their geometrical interpretation are discussed in this paper. Training an SVM is equivalent to solving a quadratic programming problem with linear and box constraints in a number of variables equal to the number of data points. When the number of data points exceeds a few thousand, the problem is very challenging because the quadratic form is completely dense, so the memory needed to store the problem grows with the square of the number of data points. Training problems arising in some real applications with large data sets therefore cannot be loaded into memory and cannot be solved using standard non-linear constrained optimization algorithms. We present a decomposition algorithm that can be used to train SVMs over large data sets. The main idea behind the decomposition is the iterative solution of sub-problems and the evaluation of stopping criteria for the algorithm. We present previous approaches, as well as results and important details of our implementation of the algorithm, using a second-order variant of the Reduced Gradient Method as the solver of the sub-problems. As an application of SVMs, we present preliminary results obtained by applying SVMs to the problem of detecting frontal human faces in real images.
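The decomposition idea can be shown at its smallest scale: a working set of one dual variable, optimized exactly while the others are held fixed. The sketch below solves the SVM dual with the bias term omitted (so there is no equality constraint); it illustrates the sub-problem structure the abstract describes, not the paper's second-order reduced-gradient solver, and the toy data are an assumption.

```python
import numpy as np

def svm_dual_decomposition(X, y, C=1.0, passes=50):
    """Maximize sum(a) - 0.5 a'Qa with 0 <= a_i <= C, Q_ij = y_i y_j x_i.x_j,
    by repeatedly solving size-one working-set sub-problems exactly."""
    Z = y[:, None] * X                  # rows z_i = y_i x_i
    Q = Z @ Z.T
    a = np.zeros(len(y))
    for _ in range(passes):
        for i in range(len(y)):         # working set = {i}
            g = 1.0 - Q[i] @ a          # gradient of the dual w.r.t. a_i
            if Q[i, i] > 0:
                a[i] = np.clip(a[i] + g / Q[i, i], 0.0, C)
    w = (a[:, None] * Z).sum(axis=0)    # primal weight vector (no bias)
    return a, w

# Linearly separable toy data, symmetric about the origin (no bias needed)
X = np.array([[2.0, 2.0], [1.0, 2.5], [-2.0, -2.0], [-1.0, -2.5]])
y = np.array([1.0, 1.0, -1.0, -1.0])
a, w = svm_dual_decomposition(X, y, C=10.0)
pred = np.sign(X @ w)
```

Each inner update is itself a tiny box-constrained QP, which is exactly the shape of the sub-problems a full decomposition scheme hands to its inner solver.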

Relevance:

60.00%

Publisher:

Abstract:

We report here a new empirical density functional that is constructed based on the performance of OPBE and PBE for spin states and SN2 reaction barriers and on how these are affected by different regions of the reduced gradient expansion. In a previous study [Swart, Solà, and Bickelhaupt, J. Comput. Methods Sci. Eng. 9, 69 (2009)] we already reported how, by switching between OPBE and PBE, one could obtain both the good performance of OPBE for spin states and reaction barriers and that of PBE for weak interactions within one and the same (SSB-sw) functional. Here we have fine-tuned this functional and included a portion of the KT functional and Grimme's dispersion correction to account for π-π stacking. Our new SSB-D functional is found to be a clear improvement and functions very well for biological applications (hydrogen bonding, π-π stacking, spin-state splittings, accuracy of geometries, reaction barriers).

Relevance:

60.00%

Publisher:

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance:

60.00%

Publisher:

Abstract:

Graduate Program in Mechanical Engineering (Pós-graduação em Engenharia Mecânica) - FEG

Relevance:

60.00%

Publisher:

Abstract:

A method has been constructed for the solution of a wide range of chemical plant simulation models including differential equations and optimization. Double orthogonal collocation on finite elements is applied to convert the model into an NLP problem that is solved either by the VF13AD package, based on successive quadratic programming, or by the GRG2 package, based on the generalized reduced gradient method. This approach is termed the simultaneous optimization and solution strategy. The objective functional can contain integral terms. The state and control variables can have time delays. Equalities and inequalities containing state and control variables can be included in the model, as well as algebraic equations and inequalities. The maximum number of independent variables is 2. Problems containing 3 independent variables can be transformed into problems having 2 independent variables using finite differencing. The maximum number of NLP variables and constraints is 1500. The method is also suitable for solving ordinary and partial differential equations. The state functions are approximated by a linear combination of Lagrange interpolation polynomials. The control function can either be approximated by a linear combination of Lagrange interpolation polynomials or by a piecewise constant function over finite elements. The number of internal collocation points can vary by finite elements. The residual error is evaluated at arbitrarily chosen equidistant grid-points, thus enabling the user to check the accuracy of the solution between collocation points, where the solution is exact. The solution functions can be tabulated. There is an option to use control vector parameterization to solve optimization problems containing initial value ordinary differential equations. When there are many differential equations, or when the upper integration limit is to be selected optimally, this approach should be used.
The portability of the package has been addressed by converting it from VAX FORTRAN 77 to IBM PC FORTRAN 77 and to SUN SPARC 2000 FORTRAN 77. Computer runs have shown that the method can reproduce optimization problems published in the literature. The GRG2 and VF13AD packages, integrated into the optimization package, proved to be robust and reliable. The package contains an executive module, a module performing control vector parameterization, and 2 nonlinear problem solver modules, GRG2 and VF13AD. There is a stand-alone module that converts the differential-algebraic optimization problem into a nonlinear programming problem.
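The collocation step described above, approximating the state by a polynomial and enforcing the differential equation at the collocation points, can be sketched on a single finite element. The test equation, the equidistant node choice, and the function names below are illustrative assumptions, not the package's formulation.

```python
import numpy as np

def collocate_linear_ode(n=8):
    """Solve y' = -y, y(0) = 1 on [0, 1] by polynomial collocation:
    one element, degree-n polynomial, ODE enforced at the interior nodes."""
    t = np.linspace(0.0, 1.0, n + 1)             # nodes on one finite element
    V = np.vander(t, increasing=True)            # y(t_i) = sum_k c_k t_i^k
    Vd = V[:, :-1] * np.arange(1, n + 1)         # d/dt t^k = k t^(k-1), k >= 1
    Vd = np.hstack([np.zeros((n + 1, 1)), Vd])   # derivative of constant term
    D = Vd @ np.linalg.inv(V)                    # differentiation matrix
    A = np.zeros((n + 1, n + 1))
    rhs = np.zeros(n + 1)
    A[0, 0], rhs[0] = 1.0, 1.0                   # initial condition y(0) = 1
    A[1:] = D[1:] + np.eye(n + 1)[1:]            # y'(t_i) + y(t_i) = 0
    return t, np.linalg.solve(A, rhs)

t, y = collocate_linear_ode()
# y[-1] approximates exp(-1): the ODE becomes a purely algebraic system
```

The same construction is what turns a differential-algebraic model into the NLP that a solver such as a generalized reduced gradient code can handle: the differential equation becomes rows of an algebraic system in the nodal values.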

Relevance:

60.00%

Publisher:

Abstract:

This paper presents a methodology to forecast the hourly and daily consumption in households assisted by cyber-physical systems. The methodology was validated using a database of the consumption of a set of 93 domestic consumers. The forecasting tools used were based on Fast Fourier Series and the Generalized Reduced Gradient method. Both tools were tested and their forecast results were compared. The paper shows that both tools yield satisfactory results for energy consumption forecasting.
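A Fourier-series load forecaster in the spirit of this abstract can be sketched as follows: fit a truncated series with a daily (24 h) fundamental to past hourly consumption by least squares, then extrapolate it. The synthetic load profile and the number of harmonics are illustrative assumptions, not the paper's dataset or model order.

```python
import numpy as np

def fourier_design(hours, harmonics=3, period=24.0):
    """Design matrix: constant plus sine/cosine pairs of a daily fundamental."""
    w = 2 * np.pi / period
    cols = [np.ones_like(hours)]
    for k in range(1, harmonics + 1):
        cols += [np.cos(k * w * hours), np.sin(k * w * hours)]
    return np.column_stack(cols)

def fit_and_forecast(consumption, horizon=24, harmonics=3):
    """Least-squares fit of the truncated Fourier series, then extrapolation."""
    hours = np.arange(len(consumption), dtype=float)
    coef, *_ = np.linalg.lstsq(fourier_design(hours, harmonics),
                               consumption, rcond=None)
    future = np.arange(len(consumption), len(consumption) + horizon, dtype=float)
    return fourier_design(future, harmonics) @ coef

# Synthetic week of hourly household load: base load plus a daily cycle
hours = np.arange(7 * 24, dtype=float)
load = 1.0 + 0.5 * np.sin(2 * np.pi * hours / 24.0)
forecast = fit_and_forecast(load, horizon=24)
```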

Relevance:

40.00%

Publisher:

Abstract:

AIMS: Our aim was to evaluate the invasive haemodynamic indices of high-risk symptomatic patients presenting with 'paradoxical' low-flow, low-gradient, severe aortic stenosis (AS) (PLF-LG) and low-flow, low-gradient severe AS (LEF-LG), and to compare clinical outcomes following transcatheter aortic valve implantation (TAVI) among these challenging AS subgroups. METHODS AND RESULTS: Of 534 symptomatic patients undergoing TAVI, 385 had a full pre-procedural right and left heart catheterization. A total of 208 patients had high-gradient severe AS [HGAS; mean gradient (MG) ≥40 mmHg], 85 had PLF-LG [MG ≤40 mmHg, indexed aortic valve area (iAVA) ≤0.6 cm² m⁻², stroke volume index ≤35 mL/m², ejection fraction (EF) ≥50%], and 61 had LEF-LG (MG ≤40 mmHg, iAVA ≤0.6 cm² m⁻², EF ≤40%). Compared with HGAS, PLF-LG and LEF-LG had higher systemic vascular resistances (HGAS: 1912 ± 654 vs. PLF-LG 2006 ± 586 vs. LEF-LG 2216 ± 765 dyn s m⁻⁵, P = 0.007) but lower valvulo-arterial impedances (HGAS: 7.8 ± 2.7 vs. PLF-LG 6.9 ± 1.9 vs. LEF-LG 7.7 ± 2.5 mmHg mL⁻¹ m⁻², P = 0.027). At 30 days, no differences in cardiac death (6.5 vs. 4.9 vs. 6.6%, P = 0.90) or death (8.4 vs. 6.1 vs. 6.6%, P = 0.88) were observed among the HGAS, PLF-LG, and LEF-LG groups, respectively. At 1 year, New York Heart Association functional improvement occurred in most surviving patients (HGAS: 69.2% vs. PLF-LG 71.7% vs. LEF-LG 89.3%, P = 0.09) and no significant differences in overall mortality were observed (17.6 vs. 20.5 vs. 24.5%, P = 0.67). Compared with HGAS, LEF-LG had a higher 1-year cardiac mortality (adjusted hazard ratio 2.45, 95% confidence interval 1.04-5.75, P = 0.04). CONCLUSION: TAVI in PLF-LG or LEF-LG patients is associated with overall mortality rates comparable with HGAS patients, and all groups benefit symptomatically to a similar extent.

Relevance:

30.00%

Publisher:

Abstract:

This thesis aimed to investigate the way in which distance runners modulate their speed in an effort to understand the key processes and determinants of speed selection when encountering hills in natural outdoor environments. One factor which has limited the expansion of knowledge in this area has been a reliance on the motorized treadmill, which constrains runners to constant speeds and gradients and only linear paths. Conversely, limits in the portability or storage capacity of available technology have restricted field research to brief durations and level courses. Therefore another aim of this thesis was to evaluate the capacity of lightweight, portable technology to measure running speed in outdoor undulating terrain. The first study of this thesis assessed the validity of a non-differential GPS to measure speed, displacement and position during human locomotion. Three healthy participants walked and ran over straight and curved courses for 59 and 34 trials, respectively. A non-differential GPS receiver provided speed data by Doppler shift and by change in GPS position over time, which were compared with actual speeds determined by chronometry. Displacement data from the GPS were compared with a surveyed 100 m section, while static positions were collected for 1 hour and compared with the known geodetic point. GPS speed values on the straight course were found to be closely correlated with actual speeds (Doppler shift: r = 0.9994, p < 0.001; Δ GPS position/time: r = 0.9984, p < 0.001). Actual speed errors were lowest using the Doppler shift method (90.8% of values within ±0.1 m s⁻¹). Speed was slightly underestimated on a curved path, though still highly correlated with actual speed (Doppler shift: r = 0.9985, p < 0.001; Δ GPS distance/time: r = 0.9973, p < 0.001). Distance measured by GPS was 100.46 ± 0.49 m, while 86.5% of static points were within 1.5 m of the actual geodetic point (mean error: 1.08 ± 0.34 m, range 0.69-2.10 m).
Non-differential GPS demonstrated a highly accurate estimation of speed across a wide range of human locomotion velocities using only the raw signal data, with a minimal decrease in accuracy around bends. This high level of resolution was matched by accurate displacement and position data. Coupled with reduced size, cost and ease of use, a non-differential receiver offers a valid alternative to differential GPS in the study of overground locomotion. The second study of this dissertation examined speed regulation during overground running on a hilly course. Following an initial laboratory session to calculate physiological thresholds (VO2 max and ventilatory thresholds), eight experienced long distance runners completed a self-paced time trial over three laps of an outdoor course involving uphill, downhill and level sections. A portable gas analyser, GPS receiver and activity monitor were used to collect physiological, speed and stride frequency data. Participants ran 23% slower on uphills and 13.8% faster on downhills compared with level sections. Speeds on level sections were significantly different for 78.4 ± 7.0 seconds following an uphill and 23.6 ± 2.2 seconds following a downhill. Speed changes were primarily regulated by stride length, which was 20.5% shorter uphill and 16.2% longer downhill, while stride frequency was relatively stable. Oxygen consumption averaged 100.4% of runners' individual ventilatory thresholds on uphills, 78.9% on downhills and 89.3% on level sections. Group level speed was highly predicted using a modified gradient factor (r² = 0.89). Individuals adopted distinct pacing strategies, both across laps and as a function of gradient. Speed was best predicted using a weighted factor to account for prior and current gradients. Oxygen consumption (VO2) limited runners' speeds only on uphill sections, and was maintained in line with individual ventilatory thresholds.
Running speed showed larger individual variation on downhill sections, while speed on the level was systematically influenced by the preceding gradient. Runners who varied their pace more as a function of gradient showed a more consistent level of oxygen consumption. These results suggest that optimising time on the level sections after hills offers the greatest potential to minimise overall time when running over undulating terrain. The third study of this thesis investigated the effect of implementing an individualised pacing strategy on running performance over an undulating course. Six trained distance runners completed three trials involving four laps (9968 m) of an outdoor course involving uphill, downhill and level sections. The initial trial was self-paced in the absence of any temporal feedback. For the second and third field trials, runners were paced for the first three laps (7476 m) according to two different regimes (Intervention or Control) by matching desired goal times for subsections within each gradient. The fourth lap (2492 m) was completed without pacing. Goals for the Intervention trial were based on findings from study two using a modified gradient factor and elapsed distance to predict the time for each section. To maintain the same overall time across all paced conditions, times were proportionately adjusted according to split times from the self-paced trial. The alternative pacing strategy (Control) used the original split times from this initial trial. Five of the six runners increased their range of uphill to downhill speeds on the Intervention trial by more than 30%, but this was unsuccessful in achieving a more consistent level of oxygen consumption, with only one runner showing a change of more than 10%. Group level adherence to the Intervention strategy was lowest on downhill sections. Three runners successfully adhered to the Intervention pacing strategy, which was gauged by a low Root Mean Square error across subsections and gradients.
Of these three, the two who had the largest change in uphill-downhill speeds ran their fastest overall time. This suggests that for some runners the strategy of varying speeds systematically to account for gradients and transitions may benefit race performances on courses involving hills. In summary, a non-differential receiver was found to offer highly accurate measures of speed, distance and position across the range of human locomotion speeds. Self-selected speed was found to be best predicted using a weighted factor to account for prior and current gradients. Oxygen consumption limited runners' speeds only on uphills, speed on the level was systematically influenced by preceding gradients, while there was a much larger individual variation on downhill sections. Individuals were found to adopt distinct but unrelated pacing strategies as a function of durations and gradients, while runners who varied pace more as a function of gradient showed a more consistent level of oxygen consumption. Finally, the implementation of an individualised pacing strategy to account for gradients and transitions greatly increased runners' range of uphill-downhill speeds and was able to improve performance in some runners. The efficiency of various gradient-speed trade-offs and the factors limiting faster downhill speeds will however require further investigation to further improve the effectiveness of the suggested strategy.
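The "change in GPS position over time" speed estimate evaluated in the first study can be sketched as follows: horizontal speed from consecutive latitude/longitude fixes via the haversine great-circle distance. The 1 Hz fix rate and the coordinates are illustrative assumptions, not the study's data.

```python
import math

EARTH_RADIUS_M = 6371000.0  # mean Earth radius, metres

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two lat/lon fixes (degrees)."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def speeds_from_fixes(fixes, dt=1.0):
    """fixes: list of (lat, lon) sampled every dt seconds -> speeds in m/s."""
    return [haversine_m(*fixes[i], *fixes[i + 1]) / dt
            for i in range(len(fixes) - 1)]

# A runner heading due north at ~5 m/s: 1 s fixes, 5 m apart along a meridian
deg_per_m = 1.0 / 111195.0   # ~degrees of latitude per metre
fixes = [(i * 5 * deg_per_m, 0.0) for i in range(10)]
v = speeds_from_fixes(fixes)
```

The Doppler-shift estimate, which the study found more accurate, comes from the receiver's measured carrier frequency offsets rather than from differencing positions, so it is not reproducible from position logs alone.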

Relevance:

30.00%

Publisher:

Abstract:

Analyses of the invariants of the velocity gradient tensor were performed on flow fields obtained by DNS of compressible plane mixing layers at convective Mach numbers Mc = 0.15 and 1.1. Joint pdfs of the 2nd and 3rd invariants were examined at turbulent/nonturbulent (T/NT) boundaries, defined as surfaces where the local vorticity first exceeds a threshold fraction of the maximum of the mean vorticity. By increasing the threshold from very small levels, the boundary points were moved closer into the turbulent region, and the effects on the pdfs of the invariants were observed. Generally, T/NT boundaries are in sheet-like regions at both Mach numbers. At the higher Mach number a distinct lobe appears in the joint pdf isolines which has not been observed/reported before. A connection to the delayed entrainment and reduced growth rate of the higher Mach number flow is proposed.
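The invariants examined here are those of the characteristic equation of the velocity gradient tensor A_ij = du_i/dx_j, namely λ³ + Pλ² + Qλ + R = 0 with P = -tr(A), Q = (P² - tr(A·A))/2 and R = -det(A). A minimal sketch of their computation, using an arbitrary sample tensor rather than DNS data:

```python
import numpy as np

def invariants(A):
    """First, second and third invariants of a 3x3 velocity gradient tensor."""
    P = -np.trace(A)
    Q = 0.5 * (P**2 - np.trace(A @ A))
    R = -np.linalg.det(A)
    return P, Q, R

# Pure solid-body rotation about z (incompressible): eigenvalues 0, +i, -i,
# so the invariants are P = 0, Q = 1, R = 0.
A = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 0.0]])
P, Q, R = invariants(A)
```

Joint (Q, R) pdfs of these quantities over many grid points are what distinguish sheet-like, tube-like and strain-dominated regions in such analyses.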

Relevance:

30.00%

Publisher:

Abstract:

The anisotropy and gradient of the elastic modulus and the hardness of teeth were investigated by means of the instrumented indentation method. These properties are attributed to the unique microstructures of teeth, based on scanning electron microscopic analysis. By comparing the relationship between the ratio of hardness to reduced elastic modulus and the ratio of elastic unloading work to total indentation work of teeth with those of other materials, we found that the material behavior of teeth is metal-like rather than ceramic-like, as traditionally considered. These material behaviors and relevant functions are discussed briefly.