938 results for Point method
Abstract:
The ability of neural networks to realize complex nonlinear functions makes them attractive for system identification. This paper describes a novel barrier method using artificial neural networks to solve robust parameter estimation problems for nonlinear models with unknown-but-bounded errors and uncertainties. This problem can be represented as a typical constrained optimization problem. More specifically, a modified Hopfield network is developed and its internal parameters are computed using the valid-subspace technique. These parameters guarantee the network's convergence to the equilibrium points. A solution of the robust estimation problem with unknown-but-bounded error corresponds to an equilibrium point of the network. Simulation results are presented as an illustration of the proposed approach.
Abstract:
This paper presents an alternative methodology for loading margin improvement and total real power loss reduction using a continuation method. To attain this goal, a parameterizing equation based on the total real power losses and the equations of the reactive power at the slack and generation buses are added to the conventional power flow equations. The voltages at these buses are considered as control variables, and a new parameter is chosen to reduce the real power losses in the transmission lines. The results show that this procedure leads to an increase in the maximum loading point and, consequently, to an improvement in the static voltage stability margin. Besides, this procedure also leads to a reduction in operational costs and, simultaneously, to an improvement in the voltage profile. Another important result of this methodology is that the resulting operating points are close to those provided by an optimal power flow program. © 2004 IEEE.
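As a minimal illustration of the maximum loading point discussed above (not the paper's loss-based parameterization), the hypothetical two-bus sketch below sweeps the loading factor until the power flow solution disappears at the nose of the PV curve; E, X, P0 and Q0 are made-up values.

```python
import math

# Toy two-bus (slack -> load) system: E = slack voltage, X = line reactance.
# For a load (P, Q) the load-bus voltage magnitude V satisfies
#   V^4 + (2*Q*X - E^2)*V^2 + X^2*(P^2 + Q^2) = 0,
# a quadratic in V^2 with two positive roots (upper/lower PV-curve branches).
E, X = 1.0, 0.2
P0, Q0 = 1.0, 0.4   # hypothetical base-case load

def load_voltage(lmbda):
    """Return (V_high, V_low) for loading factor lmbda, or None past the nose."""
    P, Q = lmbda * P0, lmbda * Q0
    b = 2.0 * Q * X - E * E
    disc = b * b - 4.0 * X * X * (P * P + Q * Q)
    if disc < 0.0:
        return None  # beyond the maximum loading point
    v2_hi = (-b + math.sqrt(disc)) / 2.0
    v2_lo = (-b - math.sqrt(disc)) / 2.0
    return math.sqrt(v2_hi), math.sqrt(v2_lo)

# Crude sweep in lambda: step until the solution disappears, which
# brackets the maximum loading point (the "nose" of the PV curve).
lmbda, step = 0.0, 0.01
while load_voltage(lmbda + step) is not None:
    lmbda += step
print(f"approximate maximum loading factor: {lmbda:.2f}")
```

A real continuation power flow would instead predict and correct along the curve, switching parameters near the nose; the brute-force sweep only locates the margin for this toy case.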
Abstract:
This work presents a method to obtain B-scan images based on linear array scanning and 2R-SAFT. This technique offers several advantages: the ultrasonic system is very simple; it avoids the grating lobe formation characteristic of conventional SAFT; and the subaperture size and focusing lens (to compensate emission-reception) can be adapted dynamically to every image point. The proposed method has been experimentally tested in the inspection of CFRP samples. © 2010 American Institute of Physics.
Abstract:
Here, a simplified dynamical model of a magnetically levitated body is considered. The origin of an inertial Cartesian reference frame is set at the pivot point of the pendulum on the levitated body in its static equilibrium state (i.e., the gap between the magnet on the base and the magnet on the body in this state). The governing equations of motion have been derived, and the characteristic feature of the strategy is the exploitation of the nonlinear effect of the inertial force associated with the motion of a pendulum-type vibration absorber driven by an appropriate control torque [4]. In the present paper, we analyze the nonlinear dynamics of the problem, discuss the energy transfer between the main system and the pendulum over time, and develop a State-Dependent Riccati Equation (SDRE) control design to reduce the unstable oscillatory movement of the magnetically levitated body to a stable fixed point. The simulation results show the effectiveness of the SDRE control design. Copyright © 2011 by ASME.
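A minimal sketch of the SDRE idea, on a hypothetical scalar plant rather than the levitated-body model of the paper: at each state the algebraic Riccati equation is re-solved for the state-dependent coefficient a(x), giving a state-dependent feedback gain.

```python
import math

# Scalar SDRE sketch: drive x' = a(x)*x + b*u to the origin by re-solving the
# state-dependent algebraic Riccati equation at every step.
# Hypothetical nonlinear plant (NOT the levitated-body model of the paper):
#   a(x) = x**2 - 1  (unstable for |x| > 1), b = 1, cost weights q = r = 1.
b, q, r = 1.0, 1.0, 1.0

def sdre_gain(x):
    a = x * x - 1.0
    # scalar ARE: (b^2/r)*P^2 - 2*a*P - q = 0; take the positive root
    P = r * (a + math.sqrt(a * a + q * b * b / r)) / (b * b)
    return b * P / r          # feedback gain K(x); control u = -K(x)*x

x, dt = 2.0, 1e-3
for _ in range(10_000):      # forward-Euler integration over 10 s
    u = -sdre_gain(x) * x
    x += dt * ((x * x - 1.0) * x + b * u)
print(f"state after 10 s: {x:.6f}")
```

In the scalar case the closed loop reduces to x' = -sqrt(a(x)^2 + 1) * x, so the state decays to the fixed point at the origin; for the multivariable model in the paper a matrix Riccati solver would be required at each step.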
Abstract:
This paper provides a contribution to the contingency analysis of electric power systems under steady-state conditions. An alternative methodology is presented for static contingency analyses that uses only continuation methods and thus provides an accurate determination of the loading margin. Rather than starting from the base case operating point, the proposed continuation power flow obtains the post-contingency loading margins starting from the maximum loading point and using a bus voltage magnitude as a parameter. The branch selected for the contingency evaluation is parameterised using a scaling factor, which allows its gradual removal and assures the continuation power flow convergence in cases where the method would diverge with the complete removal of the transmission line or transformer. The applicability and effectiveness of the proposed methodology have been investigated on IEEE test systems (14, 57 and 118 buses) and compared with the continuation power flow that obtains the post-contingency loading margin starting from the base case solution. In general, for most of the analysed contingencies, few iterations are necessary to determine the post-contingency maximum loading point; thus, a significant reduction in the global number of iterations is achieved. Therefore, the proposed methodology can be used as an alternative technique to verify and even to obtain the list of critical contingencies supplied by the electric power systems security analysis function. © 2013 Elsevier Ltd. All rights reserved.
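The scaling-factor idea, removing a branch gradually rather than all at once, can be sketched on a hypothetical two-bus system with two parallel lines; a closed-form expression for the nose point stands in for the full continuation power flow, and all numeric values are assumptions.

```python
import math

# Two parallel lines (reactance X1 each) feed a load bus from a slack at E.
# A contingency removes one line gradually: its admittance is scaled by
# (1 - mu), mu in [0, 1], in the spirit of the scaling-factor
# parameterisation described in the abstract.
E, X1 = 1.0, 0.4
P0, Q0 = 1.0, 0.4  # hypothetical base-case load direction

def lambda_max(X):
    # Closed-form maximum loading factor (nose of the PV curve) for the
    # two-bus test problem, derived from the load-flow quadratic in V^2.
    return E * E * (math.sqrt(Q0 * Q0 + P0 * P0) - Q0) / (2.0 * X * P0 * P0)

for mu in (0.0, 0.5, 1.0):
    # Equivalent reactance with one line's admittance scaled by (1 - mu)
    X_eff = X1 / (2.0 - mu)
    print(f"mu = {mu:.1f}  post-contingency loading margin = {lambda_max(X_eff):.3f}")
```

The margin shrinks smoothly as mu goes from 0 (intact network) to 1 (line fully removed), which is what makes the gradual parameterisation numerically gentler than an abrupt branch removal.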
Abstract:
In the northern hemisphere, bird censuses are fundamental for generating information that helps in understanding population trends. Owing to the marked seasonality of that hemisphere, such censuses are carried out at two distinct times: during the breeding season (resident birds) and in winter (when migratory birds leave certain regions). In the Neotropics, however, depending on the locality, birds may breed during any one or several periods of the year; they may or may not migrate, and those that do may show an asynchronous pattern. In contrast with the northern hemisphere, population trends are unknown, as is the impact of the rapid rates of urbanization and deforestation, which are also poorly monitored. To better understand temporal patterns of bird richness and abundance, and to evaluate how a similar census could be implemented in tropical America, point counts were used over 12 months at a locality in São Paulo State, southeastern Brazil. Censuses took place twice a day (mornings/afternoons) in a semideciduous forest, along transects with 10 points (20 points per day) spaced 200 m apart and with the detection radius limited to 100 m. Both bird richness and abundance were higher during the mornings, but the accumulation curves suggest that afternoon censuses with greater sampling effort can provide results similar to morning censuses. Bird richness and abundance did not vary with season (i.e., no apparent pattern between breeding and migration), while exclusive species were found in every month and relatively few species (20%) were recorded in all months of the year. During this year, 84% of all forest birds of the study area were recorded. We suggest that the point-count methodology can be used in a manner similar to the northern hemisphere censuses.
We further recommend that the sampling effort on transects include at least 20 points, and that the start of the bird counts be seasonal, using the migration period of austral species (and the six months that follow) to coordinate point counts. Finally, we suggest that censuses in Brazil, and even across Latin America, can help in understanding population trends, but they also demand greater effort than that observed at temperate latitudes, owing to the higher species richness and the differences in breeding and migration dynamics. Through coordinated bird censuses, a technique can be developed for the tropics that will generate information for tracking population trends, benefiting bird conservation in a way similar to the censuses carried out in northern hemisphere countries.
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Tool Condition Monitoring of Single-Point Dresser Using Acoustic Emission and Neural Networks Models
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
One of the key issues that makes the wavelet-Galerkin method unsuitable for solving general electromagnetic problems is the lack of exact representations of the connection coefficients. This paper presents the mathematical formulae and computer procedures for computing some common connection coefficients. The characteristic of the present formulae and procedures is that the connection coefficients can be determined at arbitrary points, rather than only at dyadic points. A numerical example is also given to demonstrate the feasibility of using the wavelet-Galerkin method to solve engineering field problems. © 2000 IEEE.
Abstract:
The definition of the sample size is a major problem in studies of phytosociology. The species accumulation curve is used to define sampling sufficiency, but this method presents some limitations, such as the absence of a stabilization point that can be objectively determined and the arbitrariness of the order of the sampling units in the curve. A solution to this problem is the use of randomization procedures, e.g. permutation, to obtain a mean species accumulation curve and empirical confidence intervals. However, the randomization process emphasizes the asymptotic character of the curve. Moreover, the absence of an inflection point in the curve makes it impossible to define objectively the point of optimum sample size.
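A permutation-based mean accumulation curve of the kind described above can be sketched in a few lines; the sampling units below are hypothetical sets of recorded species.

```python
import random

# Mean species-accumulation curve by permutation: shuffle the order of the
# sampling units many times and average the cumulative richness, removing
# the arbitrariness of any single sampling order.
# Hypothetical data: each sampling unit is the set of species it recorded.
units = [
    {"A", "B"}, {"B", "C"}, {"A", "C", "D"}, {"D"},
    {"A", "E"}, {"B", "E", "F"}, {"C"}, {"F", "G"},
]

def mean_accumulation_curve(units, n_perm=1000, seed=42):
    rng = random.Random(seed)
    total = [0.0] * len(units)
    for _ in range(n_perm):
        order = units[:]
        rng.shuffle(order)
        seen = set()
        for i, unit in enumerate(order):
            seen |= unit                 # cumulative species set
            total[i] += len(seen)        # cumulative richness at step i
    return [t / n_perm for t in total]

curve = mean_accumulation_curve(units)
print([round(v, 2) for v in curve])
```

The last point of the mean curve always equals the total observed richness, and the curve is nondecreasing; confidence intervals could be obtained by keeping the per-permutation curves instead of only their mean.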
Abstract:
Boiling points (T_B) of acyclic alkynes are predicted from their boiling point numbers (Y_BP) with the relationship T_B(K) = -16.802·Y_BP^(2/3) + 337.377·Y_BP^(1/3) - 437.883. In turn, Y_BP values are calculated from structure using the equation Y_BP = 1.726 + A_i + 2.779C + 1.716M_3 + 1.564M + 4.204E_3 + 3.905E + 5.007P - 0.329D + 0.241G + 0.479V + 0.967T + 0.574S. Here A_i depends on the substitution pattern of the alkyne, and the remainder of the equation is the same as that reported earlier for alkanes. For a data set consisting of 76 acyclic alkynes, the correlation of predicted and literature T_B values had an average absolute deviation of 1.46 K, and the R² of the correlation was 0.999. In addition, the calculated Y_BP values can be used to predict the flash points of alkynes.
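The first correlation can be applied directly once Y_BP is known; the sketch below assumes a hypothetical Y_BP value rather than computing it from the structural group contributions (A_i, C, M_3, ...).

```python
# Boiling-point prediction from the boiling point number Y_BP, using the
# correlation quoted in the abstract:
#   T_B(K) = -16.802*Y_BP**(2/3) + 337.377*Y_BP**(1/3) - 437.883
# The Y_BP value used below is hypothetical; real values come from the
# group-contribution equation for the specific alkyne structure.

def boiling_point(y_bp):
    return -16.802 * y_bp ** (2.0 / 3.0) + 337.377 * y_bp ** (1.0 / 3.0) - 437.883

print(f"T_B for hypothetical Y_BP = 20: {boiling_point(20.0):.1f} K")
```

Because the correlation is a polynomial in Y_BP^(1/3), it is monotone over the chemically relevant range, so larger boiling point numbers map to higher predicted boiling points.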
Abstract:
A thorough search of the sky exposed at the Pierre Auger Cosmic Ray Observatory reveals no statistically significant excess of events in any small solid angle that would be indicative of a flux of neutral particles from a discrete source. The search covers from -90 degrees to +15 degrees in declination using four different energy ranges above 1 EeV (10^18 eV). The method used in this search is more sensitive to neutrons than to photons. The upper limit on a neutron flux is derived for a dense grid of directions for each of the four energy ranges. These results constrain scenarios for the production of ultrahigh energy cosmic rays in the Galaxy.
Abstract:
Cloud point extraction (CPE) was employed for separation and preconcentration prior to the determination of nickel by graphite furnace atomic absorption spectrometry (GFAAS), flame atomic absorption spectrometry (FAAS) or UV-Vis spectrophotometry. Di-2-pyridyl ketone salicyloylhydrazone (DPKSH) was used for the first time as a complexing agent in CPE. The nickel complex was extracted from the aqueous phase using the Triton X-114 surfactant. Under optimized conditions, the limits of detection obtained with GFAAS, FAAS and UV-Vis spectrophotometry were 0.14, 0.76 and 1.5 μg L⁻¹, respectively. The extraction was quantitative and the enrichment factor was estimated to be 27. The method was applied to natural waters, hemodialysis concentrates, urine and honey samples. Accuracy was evaluated by analysis of the NIST 1643e Water standard reference material.
Abstract:
Background: Clinical multistage risk assessment associated with electrocardiogram (ECG) and NT-proBNP may be a feasible strategy to screen for hypertrophic cardiomyopathy (HCM). We investigated the effectiveness of a screening based on ECG and NT-proBNP in first-degree relatives of patients with HCM. Methods and Results: A total of 106 first-degree relatives were included. All individuals were evaluated by echocardiography, ECG, NT-proBNP, and molecular screening (available for 65 individuals). Of the 106 individuals, 36 (34%) had the diagnosis confirmed by echocardiography. Using echocardiography as the gold standard, ECG criteria had a sensitivity of 0.71, 0.42, and 0.52 for the Romhilt-Estes, Sokolow-Lyon, and Cornell criteria, respectively. Mean values of NT-proBNP were higher in affected as compared with nonaffected relatives (1290.5 vs. 26.1, P < .001). The AUC of NT-proBNP was 0.98. Using a cutoff value of 70 pg/mL, we observed a sensitivity of 0.92 and a specificity of 0.96. Using molecular genetics as the gold standard, ECG criteria had a sensitivity of 0.67, 0.37, and 0.42 for the Romhilt-Estes, Sokolow-Lyon, and Cornell criteria, respectively. Using a cutoff value of 70 pg/mL, we observed a sensitivity of 0.83 and a specificity of 0.98. Conclusion: Values of NT-proBNP above 70 pg/mL can be used to effectively select high-risk first-degree relatives for HCM screening. (J Cardiac Fail 2012;18:564-568)
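How sensitivity and specificity follow from a cutoff like the 70 pg/mL threshold can be illustrated as follows; the patient values below are invented for illustration, and only the cutoff comes from the abstract.

```python
# Sensitivity/specificity of an NT-proBNP-style cutoff on made-up data.
CUTOFF = 70.0  # pg/mL, the threshold reported in the abstract

# (marker value, affected-by-echocardiography?) -- hypothetical relatives
relatives = [
    (1200.0, True), (850.0, True), (95.0, True), (60.0, True),    # affected
    (15.0, False), (30.0, False), (22.0, False), (110.0, False),  # unaffected
]

tp = sum(v >= CUTOFF and d for v, d in relatives)        # true positives
fn = sum(v < CUTOFF and d for v, d in relatives)         # false negatives
tn = sum(v < CUTOFF and not d for v, d in relatives)     # true negatives
fp = sum(v >= CUTOFF and not d for v, d in relatives)    # false positives

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
print(f"sensitivity = {sensitivity:.2f}, specificity = {specificity:.2f}")
```

Sweeping the cutoff and plotting sensitivity against (1 - specificity) would trace the ROC curve whose area the abstract reports as 0.98.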
Abstract:
A new method for the analysis of scattering data from lamellar bilayer systems is presented. The method employs a form-free description of the cross-section structure of the bilayer, and the fit is performed directly on the scattering data, also introducing a structure factor when required. The cross-section structure (the electron density profile in the case of X-ray scattering) is described by a set of Gaussian functions, and the technique is termed Gaussian deconvolution. The coefficients of the Gaussians are optimized using a constrained least-squares routine that induces smoothness of the electron density profile. The optimization is coupled with the point-of-inflection method for determining the optimal weight of the smoothness. With the new approach, it is possible to optimize simultaneously the form factor, the structure factor and several other parameters in the model. The applicability of the method is demonstrated in a study of a multilamellar system composed of lecithin bilayers, where the form factor and structure factor are obtained simultaneously; the results provide new insight into this well-known system.