930 results for Multi-Point Method
Abstract:
This article presents a well-known interior point method (IPM) used to solve the linear programming problems that appear as sub-problems in the solution of the long-term transmission network expansion planning problem. The linear programming problem arises when the transportation model is used and when the planning problem is to be solved with a constructive heuristic algorithm (CHA) or a branch-and-bound algorithm. This paper shows the application of the IPM within a CHA. The IPM performed well and can therefore be used as a tool inside the algorithm employed to solve the planning problem. Illustrative tests are presented using electrical systems known from the specialized literature. (C) 2005 Elsevier B.V. All rights reserved.
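As a hedged illustration of the kind of sub-problem involved (my own minimal sketch with invented network data, not the paper's code), the snippet below solves a tiny transportation-model LP with SciPy's interior-point HiGHS solver:

import numpy as np
from scipy.optimize import linprog

# Invented 2-source / 2-sink transportation example: minimize shipping cost
# subject to supply and demand balance, solved with an interior-point LP solver.
cost = np.array([4.0, 6.0, 5.0, 3.0])            # c_ij for flows s1->d1, s1->d2, s2->d1, s2->d2
A_eq = np.array([[1, 1, 0, 0],                   # supply at source 1
                 [0, 0, 1, 1],                   # supply at source 2
                 [1, 0, 1, 0],                   # demand at sink 1
                 [0, 1, 0, 1]])                  # demand at sink 2
b_eq = np.array([30.0, 20.0, 25.0, 25.0])        # balanced supplies and demands

res = linprog(cost, A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs-ipm")
print(res.status, res.fun, res.x)                # solver status, optimal cost and flows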
Abstract:
Physical parameters of different types of lenses were measured through digital speckle pattern interferometry (DSPI) using a multimode diode laser as the light source. When such lasers emit two or more longitudinal modes simultaneously, the speckle image of an object appears covered with contour fringes. By evaluating the fringes quantitatively, the radii of curvature as well as the refractive indices of the lenses were determined. The quantitative fringe evaluation was carried out with the four- and eight-stepping techniques, and the branch-cut method was employed for phase unwrapping. With all these parameters the focal length was calculated. This whole-field multi-wavelength method enables the characterization of spherical and aspherical lenses, both positive and negative. (C) 2007 Elsevier B.V. All rights reserved.
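For reference, a minimal sketch of the standard four-step phase-shifting formula (my own illustration with synthetic intensities; the paper's exact stepping convention is not given here): with frames I1..I4 recorded at phase shifts of 0, pi/2, pi and 3pi/2, the wrapped phase is atan2(I4 - I2, I1 - I3).

import numpy as np

# Synthetic interferograms I_k = A + B*cos(phi + k*pi/2), k = 0..3, on a 1-D grid.
x = np.linspace(0, 1, 256)
phi_true = 6 * np.pi * x**2                       # invented test phase
A, B = 1.0, 0.5
I1, I2, I3, I4 = [A + B * np.cos(phi_true + k * np.pi / 2) for k in range(4)]

# Four-step algorithm: wrapped phase in (-pi, pi], then unwrap along the line
phi_wrapped = np.arctan2(I4 - I2, I1 - I3)
phi_unwrapped = np.unwrap(phi_wrapped)            # 1-D unwrapping (branch-cut methods handle the 2-D case)
print(np.max(np.abs(phi_unwrapped - phi_true)))   # should be close to zero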
Abstract:
Torsional vibration predictions and measurements of a marine propulsion system, which has both damping and a highly flexible coupling, are presented in this paper. Using the conventional approach to stress prediction in the shafting system, the numerical predictions and the experimental torsional vibration stress curves in some parts of the shafting system are found to be quite different. The free torsional vibration characteristics and forced torsional vibration response of the system are analyzed in detail to investigate this phenomenon. It is found that the second to fourth natural modes of the shafting system have significant local deformation. This results in large torsional resonant responses in different sections of the system corresponding to different engine speeds. The results show that when there is significant local deformation in the shafting system for different modes, then multi-point measurements should be made, rather than the conventional method of using a single measurement at the free end of the shaft, to obtain the full torsional vibration characteristics of the shafting system.
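For context, the free torsional natural frequencies and mode shapes of a lumped-parameter shafting model follow from the generalized eigenproblem K phi = w^2 J phi; the sketch below (my own illustration with invented inertias and stiffnesses, not the paper's model) computes them for a small free-free chain of four inertias.

import numpy as np
from scipy.linalg import eigh

# Invented 4-inertia torsional chain: J = polar inertias [kg m^2], k = shaft stiffnesses [N m/rad]
J = np.diag([50.0, 10.0, 10.0, 200.0])
k = [8.0e5, 2.0e4, 5.0e5]                        # the low middle stiffness mimics a highly flexible coupling

# Assemble the torsional stiffness matrix of the chain
K = np.zeros((4, 4))
for i, ki in enumerate(k):
    K[i, i] += ki; K[i + 1, i + 1] += ki
    K[i, i + 1] -= ki; K[i + 1, i] -= ki

# Generalized eigenproblem K phi = w^2 J phi; the first eigenvalue (~0) is the rigid-body mode
w2, modes = eigh(K, J)
freqs_hz = np.sqrt(np.clip(w2, 0, None)) / (2 * np.pi)
print(freqs_hz)                                   # natural frequencies in Hz
print(modes[:, 1])                                # first elastic mode shape: look for local deformation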
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
The aim of solving the Optimal Power Flow problem is to determine the optimal state of an electric power transmission system, that is, the voltage magnitudes, phase angles, and transformer tap ratios that optimize the performance of a given system while satisfying its physical and operating constraints. The Optimal Power Flow problem is modeled as a large-scale mixed-discrete nonlinear programming problem. This paper proposes a method for handling the discrete variables of the Optimal Power Flow problem, based on a penalty function. Due to the inclusion of the penalty function in the objective function, a sequence of nonlinear programming problems with only continuous variables is obtained, and the solutions of these problems converge to a solution of the mixed problem. The resulting nonlinear programming problems are solved by a Primal-Dual Logarithmic-Barrier Method. Numerical tests using the IEEE 14-, 30-, 118- and 300-bus test systems indicate that the method is efficient. (C) 2012 Elsevier B.V. All rights reserved.
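The paper's specific penalty function is not reproduced here; as a hedged illustration of the general idea only, the sketch below penalizes a continuous tap ratio by its squared distance to the nearest allowed discrete step, so that minimizing the penalized objective drives taps toward discrete values (the step size and weight are invented for the example).

import numpy as np

def discrete_penalty(x, step=0.0125, weight=100.0):
    """Quadratic penalty on the distance from x to the nearest multiple of `step`.

    One common way to treat discrete tap ratios inside a continuous NLP;
    the weight is typically increased over a sequence of subproblems.
    """
    x = np.asarray(x, dtype=float)
    nearest = np.round(x / step) * step
    return weight * np.sum((x - nearest) ** 2)

# Example: two tap ratios, one already on a step, one between steps
taps = np.array([1.0125, 1.0180])
print(discrete_penalty(taps))                               # nonzero because 1.0180 is off-step
print(discrete_penalty(np.round(taps / 0.0125) * 0.0125))   # ~0 once both are snapped to the grid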
Abstract:
Lattice Quantum Chromodynamics (LQCD) is the preferred tool for obtaining non-perturbative results from QCD in the low-energy regime. It has by now entered the era in which high-precision calculations for a number of phenomenologically relevant observables at the physical point, with dynamical quark degrees of freedom and controlled systematics, become feasible. Despite these successes there are still quantities where control of systematic effects is insufficient. The subject of this thesis is the exploration of the potential of today's state-of-the-art simulation algorithms for non-perturbatively $\mathcal{O}(a)$-improved Wilson fermions to produce reliable results in the chiral regime and at the physical point, both for zero and non-zero temperature. Important in this context is the control over the chiral extrapolation. This thesis is concerned with two particular topics, namely the computation of hadronic form factors at zero temperature, and the properties of the phase transition in the chiral limit of two-flavour QCD.

The electromagnetic iso-vector form factor of the pion provides a platform to study systematic effects and the chiral extrapolation for observables connected to the structure of mesons (and baryons). Mesonic form factors are computationally simpler than their baryonic counterparts but share most of the systematic effects. This thesis contains a comprehensive study of the form factor in the regime of low momentum transfer $q^2$, where the form factor is connected to the charge radius of the pion. A particular emphasis is on the region very close to $q^2=0$, which has so far been explored neither in experiment nor in LQCD. The results for the form factor close the gap between the smallest spacelike $q^2$-value available so far and $q^2=0$, and reach an unprecedented accuracy with full control over the main systematic effects. This enables the model-independent extraction of the pion charge radius. The results for the form factor and the charge radius are used to test chiral perturbation theory ($\chi$PT) and are thereby extrapolated to the physical point and the continuum. The final result in units of the hadronic radius $r_0$ is
$$ \left\langle r_\pi^2 \right\rangle^{\rm phys}/r_0^2 = 1.87 \: \left(^{+12}_{-10}\right)\left(^{+\:4}_{-15}\right) \quad \textnormal{or} \quad \left\langle r_\pi^2 \right\rangle^{\rm phys} = 0.473 \: \left(^{+30}_{-26}\right)\left(^{+10}_{-38}\right)(10) \: \textnormal{fm}^2 \;, $$
which agrees well with the results from other measurements in LQCD and experiment. Note that this is the first continuum-extrapolated result for the charge radius from LQCD which has been extracted from measurements of the form factor in the region of small $q^2$.

The order of the phase transition in the chiral limit of two-flavour QCD and the associated transition temperature are the last unknown features of the phase diagram at zero chemical potential. The two possible scenarios are a second-order transition in the $O(4)$ universality class or a first-order transition. Since direct simulations in the chiral limit are not possible, the transition can only be investigated by simulating at non-zero quark mass with a subsequent chiral extrapolation, guided by the universal scaling in the vicinity of the critical point. The thesis presents the setup and first results from a study on this topic. The study provides the ideal platform to test the potential and limits of today's simulation algorithms at finite temperature.
The results from a first scan at a constant zero-temperature pion mass of about 290 MeV are promising, and it appears that simulations down to physical quark masses are feasible. Of particular relevance for the order of the chiral transition is the strength of the anomalous breaking of the $U_A(1)$ symmetry at the transition point. It can be studied by looking at the degeneracies of the correlation functions in scalar and pseudoscalar channels. For the temperature scan reported in this thesis the breaking is still pronounced in the transition region and the symmetry becomes effectively restored only above $1.16\:T_C$. The thesis also provides an extensive outline of research perspectives and includes a generalisation of the standard multi-histogram method to explicitly $\beta$-dependent fermion actions.
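For orientation on the form-factor result quoted above, the pion charge radius is conventionally defined from the slope of the form factor at vanishing momentum transfer (a textbook relation, not a statement taken from the thesis; sign conventions for spacelike $q^2$ vary):
$$ F_\pi(q^2) = 1 + \frac{1}{6} \left\langle r_\pi^2 \right\rangle q^2 + \mathcal{O}(q^4), \qquad \left\langle r_\pi^2 \right\rangle = 6 \left. \frac{\mathrm{d} F_\pi(q^2)}{\mathrm{d} q^2} \right|_{q^2 = 0}. $$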
Abstract:
Background and Objectives: Improved ultrasound and needle technology have made popliteal sciatic nerve blockade a popular anesthetic technique, and imaging to localize the branch point of the common peroneal and posterior tibial components is important because successful blockade techniques vary with respect to injection of the common trunk proximally or separate injections distally. Nerve stimulation, ultrasound, cadaveric, and magnetic resonance studies demonstrate variability in the distance to the branch point and discordance between imaging and anatomic examination. The popliteal crease and imprecise, inaccessible landmarks render measurement of the branch point variable and inaccurate. The purpose of this study was to use the tibial tuberosity, a fixed bony reference, to measure the distance to the branch point. Method: During popliteal sciatic nerve blockade in the supine position, the branch point was identified by ultrasound and the block needle was inserted. The vertical distance between the tibial tuberosity prominence and the needle insertion point was measured. Results: In 92 patients the branch point was a mean distance of 12.91 cm proximal to the tibial tuberosity and more proximal in male (13.74 cm) than in female patients (12.08 cm). Body height is related to the branch point distance, the branch point being more proximal in taller patients. Separation into two nerve branches during local anesthetic injection supports notions of a more proximal neural anatomic division. Limitations: Imaging of the sciatic nerve division may not equal its true anatomic separation. Conclusion: Refinements in identification and resolution of the anatomic division of the nerve branch point will determine whether more accurate localization is of any clinical significance for successful nerve blockade.
Abstract:
In this paper, we are concerned with determining values of $\lambda$ for which there exist positive solutions of the nonlinear eigenvalue problem [GRAPHICS] where $a, b, c, d \in [0, \infty)$, $\xi_i \in (0, 1)$, $\alpha_i, \beta_i \in [0, \infty)$ (for $i \in \{1, \ldots, m-2\}$) are given constants, $p, q \in C([0, 1], (0, \infty))$, $h \in C([0, 1], [0, \infty))$, and $f \in C([0, \infty), [0, \infty))$ satisfying some suitable conditions. Our proofs are based on the Guo-Krasnoselskii fixed point theorem. (C) 2004 Elsevier Inc. All rights reserved.
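For reference, the Guo-Krasnoselskii fixed point theorem used in such proofs can be stated as follows (a standard formulation from the literature, paraphrased here rather than quoted from the paper): let $E$ be a Banach space, $K \subset E$ a cone, and $\Omega_1, \Omega_2$ bounded open subsets of $E$ with $0 \in \Omega_1$ and $\overline{\Omega}_1 \subset \Omega_2$. If $A : K \cap (\overline{\Omega}_2 \setminus \Omega_1) \to K$ is completely continuous and either $\|Au\| \le \|u\|$ for $u \in K \cap \partial\Omega_1$ and $\|Au\| \ge \|u\|$ for $u \in K \cap \partial\Omega_2$, or the same inequalities hold with the two boundaries interchanged, then $A$ has a fixed point in $K \cap (\overline{\Omega}_2 \setminus \Omega_1)$.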
Abstract:
We investigate the structure of the positive solution set for nonlinear three-point boundary value problems of the form $u'' + h(t) f(u) = 0$, $u(0) = 0$, $u(1) = \lambda u(\eta)$, where $\eta \in (0, 1)$ is given, $\lambda \in (0, 1/\eta)$ is a parameter, $f \in C([0, \infty), [0, \infty))$ satisfies $f(s) > 0$ for $s > 0$, and $h \in C([0, 1], [0, \infty))$ is not identically zero on any subinterval of $[0, 1]$. Our main results demonstrate the existence of continua of positive solutions of the above problem. (C) 2004 Elsevier Ltd. All rights reserved.
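As a purely numerical illustration of the boundary value problem above (my own sketch with invented choices of $h$, $f$, $\eta$ and $\lambda$; the paper itself is analytic and uses no such computation), a simple shooting approach searches for an initial slope that satisfies the three-point condition $u(1) = \lambda u(\eta)$:

import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

# Illustrative data: h(t) = 1, f(u) = 1 + u (both positive), eta = 0.5, lam = 0.4
eta, lam = 0.5, 0.4
def h(t): return 1.0
def f(u): return 1.0 + u

def integrate(slope):
    """Integrate u'' = -h(t) f(u) with u(0)=0, u'(0)=slope; return u(eta) and u(1)."""
    sol = solve_ivp(lambda t, y: [y[1], -h(t) * f(y[0])],
                    (0.0, 1.0), [0.0, slope], t_eval=[eta, 1.0], rtol=1e-9)
    return sol.y[0][0], sol.y[0][1]

def residual(slope):
    u_eta, u_one = integrate(slope)
    return u_one - lam * u_eta        # zero when the three-point condition holds

# Bracket a sign change of the residual, then solve for the initial slope
slopes = np.linspace(0.1, 10.0, 200)
vals = [residual(s) for s in slopes]
for s0, s1, v0, v1 in zip(slopes, slopes[1:], vals, vals[1:]):
    if v0 * v1 < 0:
        s_star = brentq(residual, s0, s1)
        print("initial slope giving a positive solution:", s_star)
        break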
Using interior point algorithms for the solution of linear programs with special structural features
Abstract:
Linear Programming (LP) is a powerful decision-making tool extensively used in various economic and engineering activities. In the early stages the success of LP was mainly due to the efficiency of the simplex method. After the appearance of Karmarkar's paper, the focus of most research shifted to the field of interior point methods. The present work is concerned with investigating and efficiently implementing the latest techniques in this field, taking sparsity into account. The performance of these implementations on different classes of LP problems is reported here. The preconditioned conjugate gradient method is one of the most powerful tools for the solution of the least-squares problem present in every iteration of all interior point methods. The effect of using different preconditioners on a range of problems with various condition numbers is presented. Decomposition algorithms have been one of the main fields of research in linear programming over the last few years. After reviewing the latest decomposition techniques, three promising methods were chosen and implemented. Sparsity is again a consideration, and suggestions have been included to allow improvements when solving problems with these methods. Finally, experimental results on randomly generated data are reported and compared with an interior point method. The efficient implementation of the decomposition methods considered in this study requires the solution of quadratic subproblems. A review of recent work on algorithms for convex quadratic programming was performed. The most promising algorithms are discussed and implemented taking sparsity into account. The relative performance of these algorithms on randomly generated separable and non-separable problems is also reported.
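A minimal sketch of the role of the preconditioned conjugate gradient method in this setting (invented data, not the thesis code): each interior point iteration requires the solution of normal equations N y = r with N = A D^2 A^T, solved below with a Jacobi-preconditioned CG on a sparse matrix.

import numpy as np
from scipy.sparse import diags, random as sparse_random
from scipy.sparse.linalg import cg, LinearOperator

rng = np.random.default_rng(0)
A = sparse_random(40, 100, density=0.3, format="csr", random_state=0)
d2 = rng.uniform(0.1, 10.0, 100)                     # squared diagonal scaling from the current iterate
N = A @ diags(d2) @ A.T + 1e-3 * diags(np.ones(40))  # small shift keeps the matrix positive definite
r = rng.standard_normal(40)

jacobi = 1.0 / N.diagonal()                          # diagonal (Jacobi) preconditioner
M = LinearOperator(N.shape, matvec=lambda v: jacobi * v)

y, info = cg(N, r, M=M)
print("cg info:", info, "residual norm:", np.linalg.norm(N @ y - r))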
Abstract:
Summary form only given. A novel method for tuning the second- and third-order dispersion using a simple multi-point bending device has been demonstrated. A simple model has been developed that allows calculation of the exact bending profile required to compensate for the given values of dispersion and dispersion slope.
Abstract:
The accurate and reliable estimation of travel time based on point detector data is needed to support Intelligent Transportation System (ITS) applications. It has been found that the quality of travel time estimation is a function of the method used in the estimation and varies for different traffic conditions. In this study, two hybrid on-line travel time estimation models, and their corresponding off-line methods, were developed to achieve better estimation performance under various traffic conditions, including recurrent congestion and incidents. The first model combines the Mid-Point method, which is a speed-based method, with a traffic flow-based method. The second model integrates two speed-based methods: the Mid-Point method and the Minimum Speed method. In both models, the switch between travel time estimation methods is based on the congestion level and queue status automatically identified by clustering analysis. During incident conditions with rapidly changing queue lengths, shock wave analysis-based refinements are applied for on-line estimation to capture the fast queue propagation and recovery. Travel time estimates obtained from existing speed-based methods, traffic flow-based methods, and the models developed were tested using both simulation and real-world data. The results indicate that all tested methods performed at an acceptable level during periods of low congestion. However, their performances vary with an increase in congestion. Comparisons with other estimation methods also show that the developed hybrid models perform well in all cases. Further comparisons between the on-line and off-line travel time estimation methods reveal that off-line methods perform significantly better only during fast-changing congested conditions, such as during incidents. The impacts of major influential factors on the performance of travel time estimation, including data preprocessing procedures, detector errors, detector spacing, frequency of travel time updates to traveler information devices, travel time link length, and posted travel time range, were investigated in this study. The results show that these factors have more significant impacts on the estimation accuracy and reliability under congested conditions than during uncongested conditions. For the incident conditions, the estimation quality improves with the use of a short rolling period for data smoothing, more accurate detector data, and frequent travel time updates.
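As a hedged aside (my own sketch of one common reading of the Mid-Point method, not necessarily the exact formulation used in this study), a link's travel time can be estimated by letting each point detector's spot speed apply to the half of the link nearest to it:

def midpoint_travel_time(link_length_m, v_upstream_ms, v_downstream_ms):
    """Mid-Point-style estimate: each detector's spot speed covers half the link.

    Assumed form for illustration only; hybrid models switch to flow-based or
    Minimum Speed estimates when congestion or queues are detected.
    """
    return link_length_m / (2.0 * v_upstream_ms) + link_length_m / (2.0 * v_downstream_ms)

# Example: a 1.5 km link with 25 m/s measured upstream and 10 m/s downstream
print(midpoint_travel_time(1500.0, 25.0, 10.0), "seconds")   # 30 + 75 = 105 s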
Abstract:
Changes in load characteristics, deterioration with age, environmental influences and random actions may cause local or global damage in structures, especially in bridges, which are designed for long life spans. Continuous health monitoring of structures will enable the early identification of distress and allow appropriate retrofitting in order to avoid failure or collapse of the structures. In recent times, structural health monitoring (SHM) has attracted much attention in both research and development. Local and global methods of damage assessment using the monitored information are an integral part of SHM techniques. In the local case, the assessment of the state of a structure is done either by direct visual inspection or using experimental techniques such as acoustic emission, ultrasonic, magnetic particle inspection, radiography and eddy current. A characteristic of all these techniques is that their application requires a prior localization of the damaged zones. The limitations of the local methodologies can be overcome by using vibration-based methods, which give a global damage assessment. The vibration-based damage detection methods use measured changes in dynamic characteristics to evaluate changes in physical properties that may indicate structural damage or degradation. The basic idea is that modal parameters (notably frequencies, mode shapes, and modal damping) are functions of the physical properties of the structure (mass, damping, and stiffness). Changes in the physical properties will therefore cause changes in the modal properties. Any reduction in structural stiffness and increase in damping in the structure may indicate structural damage. This research uses the variations in vibration parameters to develop a multi-criteria method for damage assessment. It incorporates the changes in natural frequencies, modal flexibility and modal strain energy to locate damage in the main load bearing elements in bridge structures such as beams, slabs and trusses and simple bridges involving these elements. Dynamic computer simulation techniques are used to develop and apply the multi-criteria procedure under different damage scenarios. The effectiveness of the procedure is demonstrated through numerical examples. Results show that the proposed method incorporating modal flexibility and modal strain energy changes is competent in damage assessment in the structures treated herein.
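To make one of the criteria concrete (a generic textbook formulation offered as a sketch, not the thesis' implementation): with mass-normalized mode shapes $\phi_i$ and natural frequencies $\omega_i$, the modal flexibility matrix is approximately $F \approx \sum_i \phi_i \phi_i^T / \omega_i^2$, and damage is indicated by large changes in its diagonal between the healthy and monitored states.

import numpy as np

def modal_flexibility(frequencies_hz, mode_shapes):
    """Approximate flexibility from mass-normalized modes: F = sum_i phi_i phi_i^T / w_i^2."""
    omegas = 2.0 * np.pi * np.asarray(frequencies_hz)
    phis = np.asarray(mode_shapes)                # columns are mode shapes
    return (phis / omegas**2) @ phis.T

# Invented 3-DOF example: a small frequency drop and mode change at DOF 2 mimics local damage
f_healthy, phi_healthy = [2.0, 5.5, 9.0], np.array([[0.4, 0.7, 0.6],
                                                    [0.7, 0.1, -0.7],
                                                    [0.6, -0.7, 0.4]])
f_damaged, phi_damaged = [1.9, 5.1, 8.8], np.array([[0.4, 0.7, 0.6],
                                                    [0.8, 0.1, -0.7],
                                                    [0.6, -0.7, 0.4]])

delta = np.abs(np.diag(modal_flexibility(f_damaged, phi_damaged)
                       - modal_flexibility(f_healthy, phi_healthy)))
print("flexibility change per DOF:", delta)       # the largest entry points to the damaged region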
Abstract:
Background: Although class attendance is linked to academic performance, questions remain about what determines students’ decisions to attend or miss class. Aims: In addition to the constructs of a common decision-making model, the theory of planned behaviour, the present study examined the influence of student role identity and university student (in-group) identification for predicting both the initiation and maintenance of students’ attendance at voluntary peer-assisted study sessions in a statistics subject. Sample: University students enrolled in a statistics subject were invited to complete a questionnaire at two time points across the academic semester. A total of 79 university students completed questionnaires at the first data collection point, with 46 students completing the questionnaire at the second data collection point. Method: Twice during the semester, students’ attitudes, subjective norm, perceived behavioural control, student role identity, in-group identification, and intention to attend study sessions were assessed via on-line questionnaires. Objective measures of class attendance records for each half-semester (or ‘term’) were obtained. Results: Across both terms, students’ attitudes predicted their attendance intentions, with intentions predicting class attendance. Earlier in the semester, in addition to perceived behavioural control, both student role identity and in-group identification predicted students’ attendance intentions, with only role identity influencing intentions later in the semester. Conclusions: These findings highlight the possible chronology that different identity influences have in determining students’ initial and maintained attendance at voluntary sessions designed to facilitate their learning.