919 results for Method of moments algorithm
Abstract:
This paper describes two solutions for the systematic measurement of surface elevation that can be used for both profile and surface reconstructions in quantitative fractography case studies. The first is developed under the Khoros graphical interface environment. It consists of an adaptation of the almost-classical area-matching algorithm, which is based on cross-correlation operations, to the well-known method of parallax measurement from stereo pairs. A normalization function was created to avoid false cross-correlation peaks, leading to the true best-matching window at each region analyzed on both stereo projections. Some limitations on the use of scanning electron microscopy and on the types of surface patterns are also discussed. The second algorithm is based on a spatial correlation function. This solution is implemented in the NIH Image macro programming language, combining a good representation of low-contrast regions with many improvements in overall user interface and performance. Its advantages and limitations are also presented.
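For illustration, a minimal sketch of window matching by normalized cross-correlation, the operation underlying the area-matching step described above. The normalization (mean subtraction and division by the standard deviations) is what suppresses false correlation peaks caused by brightness differences between the two projections. All names are assumptions for illustration; this is not the authors' Khoros implementation.

```python
import numpy as np

def ncc(window: np.ndarray, patch: np.ndarray) -> float:
    """Normalized cross-correlation between two equally sized patches."""
    w = window - window.mean()
    p = patch - patch.mean()
    denom = np.sqrt((w ** 2).sum() * (p ** 2).sum())
    return float((w * p).sum() / denom) if denom > 0 else 0.0

def best_match(window: np.ndarray, image: np.ndarray) -> tuple:
    """Slide `window` over `image` and return the offset of the NCC peak.

    The horizontal offset between matched windows in the two stereo
    projections gives the parallax used for elevation reconstruction.
    Plain loops for clarity; O(H*W*h*w), so slow on large images.
    """
    h, w = window.shape
    H, W = image.shape
    scores = np.full((H - h + 1, W - w + 1), -np.inf)
    for i in range(H - h + 1):
        for j in range(W - w + 1):
            scores[i, j] = ncc(window, image[i:i + h, j:j + w])
    return np.unravel_index(np.argmax(scores), scores.shape)
```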
Abstract:
We analyze the average performance of a general class of learning algorithms for the nondeterministic polynomial time complete problem of rule extraction by a binary perceptron. The examples are generated by a rule implemented by a teacher network of similar architecture. A variational approach is used to try to identify the potential energy that leads to the largest generalization in the thermodynamic limit. We restrict our search to algorithms that always satisfy the binary constraints. A replica-symmetric ansatz leads to a learning algorithm that presents a phase transition in violation of an information-theoretic bound. Stability analysis shows that this is due to a failure of the replica-symmetric ansatz, and the first step of replica symmetry breaking (RSB) is studied. The variational method does not determine a unique potential, but it allows the construction of a class with a unique minimum within each first-order valley. Members of this class improve on the performance of the Gibbs algorithm but fail to reach the Bayesian limit in the low-generalization phase. They even fail to reach the performance of the best binary weights, an optimal clipping of the barycenter of version space. We find a trade-off between good performance in the low-generalization phase and an early onset of perfect generalization. Although the RSB solution may be locally stable, we discuss the possibility that it fails to be the correct saddle point globally. ©2000 The American Physical Society.
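As a toy illustration of the "clipped barycenter of version space" benchmark mentioned above: enumerate all binary students consistent with the teacher-labeled examples, average them, and clip to the nearest binary vector. The paper's analysis is in the thermodynamic limit; this brute-force sketch only shows what that estimator computes, with all sizes and names chosen for illustration.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
N, P = 9, 12                                   # inputs (odd, so no ties), examples
teacher = rng.choice([-1, 1], size=N)          # binary teacher weights
X = rng.choice([-1, 1], size=(P, N))           # example inputs
y = np.sign(X @ teacher)                       # teacher labels

# Version space: all binary students consistent with every example.
version_space = []
for w in itertools.product([-1, 1], repeat=N):
    w = np.array(w)
    if np.all(np.sign(X @ w) == y):
        version_space.append(w)

barycenter = np.mean(version_space, axis=0)    # center of version space
clipped = np.where(barycenter >= 0, 1, -1)     # nearest binary vector
print("overlap with teacher:", clipped @ teacher / N)
```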
Abstract:
In this paper a method for solving the Short Term Transmission Network Expansion Planning (STTNEP) problem is presented. The STTNEP is a very complex mixed-integer nonlinear programming problem that presents a combinatorial explosion of the search space. In this work we present a constructive heuristic algorithm that finds solutions of excellent quality for the STTNEP. In each step of the algorithm a sensitivity index is used to add a circuit (transmission line or transformer) to the system. This sensitivity index is obtained by solving the STTNEP problem with the number of circuits to be added treated as a continuous variable (the relaxed problem). The relaxed problem is a large and complex nonlinear programming problem and was solved through an interior-point method that combines the multiple predictor corrector and multiple centrality corrections methods, both belonging to the family of higher-order interior-point methods (HOIPM). Tests were carried out using a modified Garver system, and the results show the good performance of both the constructive heuristic algorithm and the HOIPM used in each step.
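A schematic sketch of the constructive heuristic loop described here (the same pattern reappears in the AC-model variant below): solve the relaxed problem, add the candidate with the largest sensitivity index, and repeat. The solver call, the stopping threshold, and the `network` interface are placeholders; the paper solves the relaxed STTNEP with a higher-order interior-point method, which is not reproduced.

```python
def constructive_heuristic(network, candidates, solve_relaxed):
    """Add one circuit per iteration, guided by the relaxed solution.

    `solve_relaxed` is assumed to return, for the current network, a
    continuous number of circuits n_j for every candidate branch j;
    n_j plays the role of the sensitivity index that ranks additions.
    """
    added = []
    while True:
        n = solve_relaxed(network, candidates)   # relaxed STTNEP (NLP)
        if max(n.values()) < 0.5:                # simplified stop rule
            return added
        best = max(n, key=n.get)                 # largest sensitivity index
        network.add_circuit(best)                # line or transformer
        added.append(best)
```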
Abstract:
This paper presents a method for simulating warpage in 7xxx-series aluminium alloy plates. The simulation was performed with the finite-element software MSC.Patran and MSC.Marc. A further result of the analysis is the influence of the residual stresses induced in the raw material during the rolling process on the warpage of primary aeronautic parts fabricated by machining (milling) at Embraer. The aluminium plate residual stresses were determined with the layer removal test, and the Modified Flavenot method was used as the numerical algorithm to convert layer removal and beam deflection data into stress levels. With this information about the level and profile of the residual stresses, it becomes possible, in the step preceding manufacturing, to incorporate these values into the finite-element model of part warpage. Based on that warpage parameter, parts can be manufactured with low susceptibility to distortion, improving competitiveness and cost. © 2007 American Institute of Physics.
Abstract:
In this paper, a method for solving the short-term transmission network expansion planning problem is presented. This is a very complex mixed-integer nonlinear programming problem that presents a combinatorial explosion of the search space. In order to find a solution of excellent quality for this problem, a constructive heuristic algorithm is presented. In each step of the algorithm, a sensitivity index is used to add a circuit (transmission line or transformer) or a capacitor bank (fixed or variable) to the system. This sensitivity index is obtained by solving the problem with the numbers of circuits and capacitor banks to be added treated as continuous variables (the relaxed problem). The relaxed problem is a large and complex nonlinear programming problem and was solved through a higher-order interior-point method. The paper shows the results of several tests performed on three well-known electric energy systems in order to demonstrate the feasibility and the advantages of using the AC model. ©2007 IEEE.
Abstract:
A method for context-sensitive analysis of binaries that may have obfuscated procedure call and return operations is presented. Such binaries may use operators that directly manipulate the stack, instead of native call and ret instructions, to achieve equivalent behavior. Since definitions of context-sensitivity and algorithms for context-sensitive analysis have thus far been based on the specific semantics associated with procedure call and return operations, classic interprocedural analyses cannot be used reliably for analyzing programs in which these operations cannot be discerned. A new notion of context-sensitivity is introduced that is based on the state of the stack at any instruction. While changes in 'calling' context are associated with transfers of control, and hence can be reasoned about in terms of paths in an interprocedural control-flow graph (ICFG), the same is not true of changes in 'stack' context. An abstract-interpretation-based framework is developed to reason about stack-contexts and to derive analogues of call-strings-based methods for context-sensitive analysis using stack-contexts. The method is used to create a context-sensitive version of Venable et al.'s algorithm for detecting obfuscated calls. Experimental results show that the context-sensitive version of the algorithm generates more precise results and is also computationally more efficient than its context-insensitive counterpart. Copyright © 2010 ACM.
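A highly simplified sketch of the underlying idea: propagate an abstract stack height along control-flow edges with a worklist, so that push/pop arithmetic, rather than call/ret syntax, distinguishes contexts. The instruction effects and CFG encoding are assumptions for illustration, not the paper's abstract domain.

```python
from collections import defaultdict, deque

# Net stack effect of each mnemonic, in slots (illustrative subset).
STACK_DELTA = {"push": 1, "pop": -1, "call": 1, "ret": -1,
               "sub_esp_4": 1, "add_esp_4": -1, "nop": 0}

def stack_heights(cfg, instr, entry):
    """cfg: node -> successor nodes; instr: node -> mnemonic.

    Returns the set of abstract stack heights reachable at each node;
    reaching a node with two different heights corresponds to two
    distinct stack-contexts.
    """
    heights = defaultdict(set)
    heights[entry].add(0)
    work = deque([(entry, 0)])
    while work:
        node, h = work.popleft()
        h2 = h + STACK_DELTA[instr[node]]
        for succ in cfg.get(node, []):
            if h2 not in heights[succ]:     # new (node, height) pair
                heights[succ].add(h2)
                work.append((succ, h2))
    return heights
```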
Abstract:
In this paper a heuristic technique for solving the simultaneous short-term transmission network expansion and reactive power planning problem (TEPRPP) via an AC model is presented. A constructive heuristic algorithm (CHA) aimed at obtaining a high-quality solution for this problem is employed. An interior-point method (IPM) is applied to solve the TEPRPP as a nonlinear programming (NLP) problem during the solution steps of the algorithm. For each proposed network topology, an indicator is deployed to identify the weak buses for reactive power source placement. The objective function of the NLP includes the costs of new transmission lines, real power losses, and reactive power sources. By allocating reactive power sources at load buses, the circuit capacity may increase while the cost of new lines decreases. The proposed methodology is tested on Garver's system, and the obtained results show its capability and the viability of using an AC model for solving such a non-convex optimization problem. © 2011 IEEE.
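A minimal sketch of the general weak-bus-indicator pattern: after solving the NLP for a candidate topology, rank load buses by voltage magnitude and propose the lowest ones as sites for reactive power sources. The actual indicator used in the paper may differ; thresholds and names here are assumptions.

```python
def weak_buses(voltages, load_buses, v_min=0.95, k=3):
    """Return up to k load buses whose p.u. voltage magnitude is below v_min.

    voltages: dict bus -> voltage magnitude from the solved AC NLP.
    """
    weak = [(voltages[b], b) for b in load_buses if voltages[b] < v_min]
    return [b for _, b in sorted(weak)[:k]]   # lowest voltages first
```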
Abstract:
The invasive fire ant Solenopsis invicta is medically important because its venom is highly potent. However, almost nothing is known about fire ant venom proteins because obtaining even milligram amounts of these proteins has been prohibitively challenging. We present a simple and fast method for obtaining whole venom compounds from large quantities of fire ants. For this, we separate the ants from the nest soil, immerse them in a dual-phase mixture of apolar organic solvent and water, and evaporate each solvent phase separately. The remaining extract from the aqueous phase consists largely of ant venom proteins. We confirmed this by 2D gel electrophoresis, while also demonstrating that our new approach yields the same proteins obtained by other authors using less efficient traditional methods. © 2013 Elsevier Ltd.
Abstract:
The objective of this study was to estimate heritability and repeatability for milk yield (MY) and lactation length (LL) in buffaloes using Bayesian inference. The Brazilian buffalo genetic improvement program provided the data, which included 628 females from four herds, born between 1980 and 2003. In order to obtain the variance estimates, univariate analyses were performed with the Gibbs sampler, using the MTGSAM software. The model for MY and LL included direct additive genetic and permanent environment effects as random effects, and contemporary group, milking frequency and calving number as fixed effects. Convergence diagnosis was performed with the Geweke method, using an algorithm implemented in the R software through the Bayesian Output Analysis package. Averages for milk yield and lactation length were 1,546.1 ± 483.8 kg and 252.3 ± 42.5 days, respectively. The heritability coefficients were 0.31 (mode), 0.35 (mean) and 0.34 (median) for MY, and 0.11 (mode), 0.10 (mean) and 0.10 (median) for LL. The repeatability coefficients (mode) were 0.50 and 0.15 for MY and LL, respectively. Milk yield is the only trait with clear potential for genetic improvement by direct selection. The repeatability for MY indicates that selection based on the first lactation could contribute to improvement in this trait.
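A minimal sketch of the Geweke diagnostic applied to one Gibbs chain: compare the means of an early and a late segment with a z-score. The full diagnostic, as implemented in the Bayesian Output Analysis package used above, estimates the segment variances spectrally; plain sample variances are used here for brevity.

```python
import numpy as np

def geweke_z(chain, first=0.1, last=0.5):
    """Z-score comparing the means of the early and late chain segments.

    |z| > 1.96 suggests the chain has not yet converged.
    """
    a = chain[: int(first * len(chain))]      # first 10% of the draws
    b = chain[-int(last * len(chain)):]       # last 50% of the draws
    return (a.mean() - b.mean()) / np.sqrt(a.var(ddof=1) / len(a)
                                           + b.var(ddof=1) / len(b))
```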
Abstract:
Hepatocellular carcinoma (HCC) is a primary tumor of the liver. After local therapies, tumor evaluation is based on the mRECIST criteria, which involve measuring the maximum diameter of the viable lesion. This paper describes a computational methodology to measure the maximum diameter of the tumor through the contrast-enhanced area of the lesions. Sixty-three computed tomography (CT) slices from 23 patients were assessed. Non-contrasted liver and typical HCC nodules were evaluated, and a virtual phantom was developed for this purpose. Detection and quantification by the algorithm were optimized using the virtual phantom. We then compared the algorithm's estimates of the maximum diameter of the target lesions against the radiologist's measurements. The computed maximum diameters are in good agreement with the radiologist's evaluation, indicating that the algorithm was able to detect the tumor limits properly. A comparison of the maximum diameter estimated by the radiologist versus the algorithm revealed differences on the order of 0.25 cm for large tumors (diameter > 5 cm), whereas agreement within 1.0 cm was found for small tumors. Differences between algorithm and radiologist measurements were small for small tumors, with a trend toward a small increase for tumors greater than 5 cm. Therefore, traditional methods for measuring lesion diameter should be complemented with non-subjective measurement methods, which would allow a more correct evaluation of the contrast-enhanced areas of HCC according to the mRECIST criteria.
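One simple way to compute a maximum lesion diameter from a binary CT segmentation mask, for illustration: take the largest pairwise distance between lesion pixels, scaled by the pixel spacing. The paper's detection and quantification pipeline is not reproduced; the function below is an assumption-level sketch.

```python
import numpy as np

def max_diameter(mask: np.ndarray, pixel_mm: float) -> float:
    """mask: 2D boolean array, True inside the lesion.

    Uses all lesion pixels (the maximum is attained on the boundary);
    O(n^2) in pixel count, so suitable only for small masks.
    """
    ys, xs = np.nonzero(mask)
    pts = np.column_stack([ys, xs]).astype(float)
    d = np.sqrt(((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1))
    return float(d.max()) * pixel_mm
```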
Abstract:
Background and Purpose: Oropharyngeal dysphagia is a common manifestation of acute stroke. Aspiration resulting from difficulties in swallowing is a symptom that should be considered because of the frequent occurrence of aspiration pneumonia, which can influence the patient's recovery, cause clinical complications, and even lead to the patient's death. Early clinical evaluation of swallowing disorders can help define approaches and avoid oral feeding that may be detrimental to the patient. This study aimed to create an algorithm, based on the National Institutes of Health Stroke Scale (NIHSS), to identify patients at risk of developing dysphagia following acute ischemic stroke, in order to decide on the safest feeding route and minimize the complications of stroke. Methods: Clinical assessment of swallowing was performed in 50 patients admitted to the emergency unit of the University Hospital, Faculty of Medicine of Ribeirão Preto, São Paulo, Brazil, with a diagnosis of ischemic stroke, within 48 h of symptom onset. Patients, 25 females and 25 males with a mean age of 64.90 years (range 26-91 years), were evaluated consecutively. An anamnesis was taken before each patient's participation in the study in order to exclude a prior history of deglutition difficulties. For the functional assessment of swallowing, three food consistencies were used: pasty, liquid and solid. After the clinical evaluation, we concluded whether dysphagia was present. For statistical analysis we used the Fisher exact test to verify associations between the variables. To assess whether the NIHSS score characterizes a risk factor for dysphagia, a receiver operating characteristic (ROC) curve was constructed to obtain sensitivity and specificity. Results: Dysphagia was present in 32% of the patients. Clinical evaluation is a reliable method for detecting swallowing difficulties. However, the predictors of risk for the swallowing function must be weighed, and the level of consciousness and the presence of preexisting comorbidities should be considered. Gender, age and the cerebral hemisphere involved were not significantly associated with the presence of dysphagia. The NIHSS, the Glasgow Coma Scale, and speech and language changes had statistically significant predictive value for the presence of dysphagia. Conclusions: The NIHSS is highly sensitive (88%) and specific (85%) in detecting dysphagia; a score of 12 may be considered the cutoff value. An algorithm to detect dysphagia in acute ischemic stroke appears useful for selecting the optimal feeding route while awaiting a specialized evaluation. Copyright © 2012 S. Karger AG, Basel
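A sketch of how a cutoff such as NIHSS ≥ 12 can be read off a ROC analysis: compute sensitivity and specificity at each candidate cutoff and pick the best trade-off (Youden's J is one common criterion; the study may have used another). The data arrays are placeholders, not the study's data.

```python
import numpy as np

def sens_spec(scores, has_dysphagia, cutoff):
    """Sensitivity and specificity of the rule `score >= cutoff`."""
    pred = scores >= cutoff
    sens = (pred & has_dysphagia).sum() / has_dysphagia.sum()
    spec = (~pred & ~has_dysphagia).sum() / (~has_dysphagia).sum()
    return sens, spec

def best_cutoff(scores, has_dysphagia):
    # Youden's J statistic: maximize sensitivity + specificity - 1.
    return max(np.unique(scores),
               key=lambda c: sum(sens_spec(scores, has_dysphagia, c)) - 1)
```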
Abstract:
A semi-autonomous unmanned underwater vehicle (UUV), named LAURS, is being developed at the Laboratory of Sensors and Actuators at the University of São Paulo. The vehicle has been designed to provide inspection and intervention capabilities in specific missions at deep-water oil fields. In this work, a method for modeling and identifying the yaw-motion dynamic system model of an open-frame underwater vehicle is presented. Using an on-board, low-cost magnetic compass sensor, the method is based on an uncoupled 1-DOF (degree-of-freedom) dynamic system equation and on the integral method, which is the classical least-squares algorithm applied to the integral form of the dynamic system equations. Experimental trials with the actual vehicle were performed in a test tank and a diving pool. During these experiments, the thrusters responsible for yaw motion were driven by sinusoidal voltage profiles. An assessment of the feasibility of the method reveals that the estimated dynamic system models are more reliable for slow and small sinusoidal voltage profiles, i.e. with larger periods and relatively small amplitude and offset.
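A minimal sketch of the integral method for a generic 1-DOF yaw model of the form I·ṙ(t) + d·r(t) = k·u(t), where r is the yaw rate and u the thruster command: integrating from 0 to t removes the derivative, and the parameter ratios follow from ordinary least squares. The model form and names are a common choice assumed here; the paper's exact parametrization may differ.

```python
import numpy as np

def identify_yaw(t, r, u):
    """t: time stamps; r: measured yaw rate; u: thruster input signal.

    Integral form: r(t) - r(0) = -(d/I) * int_0^t r + (k/I) * int_0^t u,
    so least squares on the integrated signals recovers d/I and k/I.
    """
    # Trapezoidal running integrals of r and u.
    int_r = np.concatenate([[0.0], np.cumsum(0.5 * (r[1:] + r[:-1]) * np.diff(t))])
    int_u = np.concatenate([[0.0], np.cumsum(0.5 * (u[1:] + u[:-1]) * np.diff(t))])
    A = np.column_stack([-int_r, int_u])
    theta, *_ = np.linalg.lstsq(A, r - r[0], rcond=None)
    d_over_I, k_over_I = theta
    return d_over_I, k_over_I
```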
Abstract:
The aim of solving the Optimal Power Flow problem is to determine the optimal state of an electric power transmission system, that is, the voltage magnitudes and phase angles and the transformer tap ratios that optimize the performance of a given system while satisfying its physical and operating constraints. The Optimal Power Flow problem is modeled as a large-scale mixed-discrete nonlinear programming problem. This paper proposes a method for handling the discrete variables of the Optimal Power Flow problem based on a penalty function. Owing to the inclusion of the penalty function in the objective function, a sequence of nonlinear programming problems with only continuous variables is obtained, and the solutions of these problems converge to a solution of the mixed problem. The resulting nonlinear programming problems are solved by a primal-dual logarithmic-barrier method. Numerical tests using the IEEE 14-, 30-, 118- and 300-bus test systems indicate that the method is efficient. © 2012 Elsevier B.V. All rights reserved.
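A sketch of one standard penalty of this kind, for driving a discrete variable (e.g. a transformer tap with step s) toward its nearest allowed value inside a continuous NLP; the exact penalty used in the paper is not reproduced, and all names are illustrative.

```python
import numpy as np

def discrete_penalty(x, s):
    """Zero exactly at integer multiples of the step s, positive elsewhere."""
    return np.sin(np.pi * x / s) ** 2

def augmented_cost(cost, taps, s, mu):
    """Original cost plus weighted penalties on all tap variables.

    Increasing `mu` across the sequence of NLPs pushes the continuous
    tap values onto the discrete grid, mirroring the convergence of the
    continuous solutions to a solution of the mixed problem.
    """
    return cost + mu * sum(discrete_penalty(t, s) for t in taps)
```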