985 results for Quasi-Newton methods
A pseudo-transient solution strategy for the analysis of delamination by means of interface elements
Abstract:
Recent efforts in the finite element modelling of delamination have concentrated on the development of cohesive interface elements. These are characterised by a bilinear constitutive law, with an initial high positive stiffness until a threshold stress is reached, followed by a negative tangent stiffness representing softening (or damage evolution). Complete decohesion occurs when the work done per unit area of crack surface equals a critical strain energy release rate. It is difficult to achieve a stable, oscillation-free solution beyond the onset of damage using standard implicit quasi-static methods unless a very refined mesh is used. In the present paper, a new solution strategy based on a pseudo-transient formulation is proposed and demonstrated through the modelling of a double cantilever beam undergoing Mode I delamination. A detailed analysis of the sensitivity to the user-defined parameters is also presented. Comparisons with other published solutions using a quasi-static formulation show that the pseudo-transient formulation gives improved accuracy and oscillation-free results with coarser meshes.
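A minimal sketch (with illustrative parameter values, not taken from the paper) of the bilinear traction-separation law described above:

```python
def bilinear_traction(delta, K=1.0e6, sigma_max=30.0, G_c=0.3):
    """Traction under monotonic opening delta for a bilinear cohesive law."""
    delta_0 = sigma_max / K          # opening at the onset of damage
    delta_f = 2.0 * G_c / sigma_max  # opening at complete decohesion (area under curve = G_c)
    if delta <= delta_0:
        return K * delta                                            # initial high stiffness
    if delta < delta_f:
        return sigma_max * (delta_f - delta) / (delta_f - delta_0)  # softening (negative tangent stiffness)
    return 0.0                                                      # fully decohered
```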
Abstract:
We survey recent axiomatic results in the theory of cost-sharing. In this literature, a method computes the individual cost shares assigned to the users of a facility for any profile of demands and any monotonic cost function. We discuss two theories taking radically different views of the asymmetries of the cost function. In the full responsibility theory, each agent is accountable for the part of the costs that can be unambiguously separated and attributed to her own demand. In the partial responsibility theory, the asymmetries of the cost function have no bearing on individual cost shares; only the differences in demand levels matter. We describe several invariance and monotonicity properties that reflect both normative and strategic concerns. We uncover a number of logical trade-offs between our axioms, and derive axiomatic characterizations of a handful of intuitive methods: in the full responsibility approach, the Shapley-Shubik, Aumann-Shapley, and subsidy-free serial methods, and in the partial responsibility approach, the cross-subsidizing serial method and the family of quasi-proportional methods.
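As an illustration of the serial idea discussed above, a minimal sketch of the standard serial cost-sharing rule for a single homogeneous good (the usual Moulin-Shenker construction); it is an illustration, not code from the surveyed papers:

```python
def serial_shares(demands, C):
    """Serial cost shares for demands q_1 <= ... <= q_n and a monotonic cost function C."""
    q = sorted(demands)
    n = len(q)
    shares = [0.0] * n
    cost_prev, cumulative = C(0.0), 0.0
    for k in range(n):
        s_k = sum(q[:k]) + (n - k) * q[k]        # total demand if everyone asked for at most q_k
        cumulative += (C(s_k) - cost_prev) / (n - k)
        cost_prev = C(s_k)
        shares[k] = cumulative                   # share of the agent with the k-th smallest demand
    return shares

print(serial_shares([1, 2, 3], lambda t: t**2))  # [3.0, 11.0, 22.0], summing to C(6) = 36
```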
Abstract:
Consider the linear regression model y = Xb + e under the usual assumptions, and suppose further that the parameter vector lies in an ellipsoid. An optimal estimator for the parameter vector is then given by the minimax estimator. After the decision-theoretic formulation of the minimax estimation problem, ways of determining the minimax estimator via the Bayesian approach, spectral methods, and the representation of Hoffmann and Laeuter are presented and related to one another. A study of models with three explanatory variables and a common eigenvector leads to a structuring of the problem according to the multiplicity of the maximal eigenvalue. In a previously unsolved case, the determination of the minimax estimator can be reduced to finding the root of a nonlinear real-valued function. An example is found in which this root cannot be expressed in radicals; it can, however, be computed numerically by interval bisection or Newton's method. By deriving a fixed-point equation from the representation of Hoffmann and Laeuter, it was possible to find the desired solutions in a simulation.
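As a generic illustration of the numerical step mentioned at the end, a minimal sketch of root finding by Newton's method and by interval bisection; the function used is a classical test case, not the one arising from the minimax problem:

```python
def newton_root(g, dg, x0, tol=1e-12, max_iter=100):
    """Newton's method for a real root of g, given its derivative dg."""
    x = x0
    for _ in range(max_iter):
        step = g(x) / dg(x)
        x -= step
        if abs(step) < tol:
            break
    return x

def bisection_root(g, a, b, tol=1e-12):
    """Interval bisection; assumes g(a) and g(b) have opposite signs."""
    while b - a > tol:
        m = 0.5 * (a + b)
        if g(a) * g(m) <= 0:
            b = m
        else:
            a = m
    return 0.5 * (a + b)

g = lambda x: x**3 - 2*x - 5
print(newton_root(g, lambda x: 3*x**2 - 2, 2.0))
print(bisection_root(g, 2.0, 3.0))
```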
Abstract:
Quasi-Newton-Raphson minimization and conjugate gradient minimization have been used to solve the crystal structures of famotidine form B and capsaicin from X-ray powder diffraction data and characterize the χ² agreement surfaces. One million quasi-Newton-Raphson minimizations found the famotidine global minimum with a frequency of ca 1 in 5000 and the capsaicin global minimum with a frequency of ca 1 in 10 000. These results, which are corroborated by conjugate gradient minimization, demonstrate the existence of numerous pathways from some of the highest points on these χ² agreement surfaces to the respective global minima, which are passable using only downhill moves. This important observation has significant ramifications for the development of improved structure determination algorithms.
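A toy illustration of the multi-start strategy described above: repeated quasi-Newton (BFGS) minimizations from random starting points, counting how often the known global minimum is reached; the objective is a synthetic multi-modal function, not a powder-diffraction χ² surface:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def objective(x):
    # Rastrigin-like surface: many local minima, global minimum 0 at x = 0
    return np.sum(x**2) + 2.0 * np.sum(1.0 - np.cos(3.0 * x))

hits, trials = 0, 1000
for _ in range(trials):
    x0 = rng.uniform(-5.0, 5.0, size=4)           # random starting point
    res = minimize(objective, x0, method="BFGS")  # downhill quasi-Newton run
    if res.fun < 1e-6:
        hits += 1

print(f"global minimum reached in {hits} of {trials} runs")
```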
Abstract:
Liquid clouds play a profound role in the global radiation budget, but it is difficult to retrieve their vertical profile remotely. Ordinary narrow field-of-view (FOV) lidars receive a strong return from such clouds, but the information is limited to the first few optical depths. Wide-angle multiple-FOV lidars can isolate radiation scattered multiple times before returning to the instrument, often penetrating much deeper into the cloud than the singly scattered signal. These returns potentially contain information on the vertical profile of the extinction coefficient, but they are challenging to interpret because of the lack of a fast radiative transfer model for simulating them. This paper describes a variational algorithm that incorporates a fast forward model based on the time-dependent two-stream approximation, together with its adjoint. Application of the algorithm to simulated data from a hypothetical airborne three-FOV lidar with a maximum footprint width of 600 m suggests that this approach should be able to retrieve the extinction structure down to an optical depth of around 6, and total optical depth up to at least 35, depending on the maximum lidar FOV. The convergence behavior of Gauss-Newton and quasi-Newton optimization schemes is compared. We then present results from an application of the algorithm to observations of stratocumulus by the 8-FOV airborne “THOR” lidar. It is demonstrated how the averaging kernel can be used to diagnose the effective vertical resolution of the retrieved profile, and therefore the depth to which information on the vertical structure can be recovered. This work enables returns from spaceborne lidar and radar subject to multiple scattering to be exploited more rigorously than previously possible.
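A generic sketch of the Gauss-Newton update used in variational retrievals of this kind, minimizing J(x) = (x − xa)ᵀB⁻¹(x − xa) + (y − H(x))ᵀR⁻¹(y − H(x)); the forward model H, its Jacobian, and the covariances are placeholders, not the paper's two-stream model or its adjoint:

```python
import numpy as np

def gauss_newton_retrieval(x0, xa, y, H, jac_H, B_inv, R_inv, n_iter=20):
    """Iterate Gauss-Newton steps for the variational cost function J(x)."""
    x = x0.copy()
    for _ in range(n_iter):
        J = jac_H(x)                              # Jacobian of the forward model at x
        r = y - H(x)                              # observation residual
        A = B_inv + J.T @ R_inv @ J               # Gauss-Newton approximation to the Hessian (of J/2)
        g = B_inv @ (x - xa) - J.T @ R_inv @ r    # gradient of J/2
        x = x - np.linalg.solve(A, g)             # Newton-type update
    return x
```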
Abstract:
This work proposes a formulation for the optimization of 2D structural layouts subjected to mechanical and thermal loads, together with an h-adaptive filtering process that leads to low computational cost and well-resolved structural layouts. The main goal of the formulation is to minimize the mass of the structure subjected to an effective von Mises stress state, with stability and side-constraint variants. A global measurement criterion was used to impose a parametric condition on the stress fields, and a relaxation of the stress constraint was considered to avoid singularity problems. The optimization used a material approach in which the homogenized constitutive equation is a function of the relative material density, with the effective properties at intermediate densities represented by a SIMP-type artificial model. The problem was discretized with a Galerkin finite element method using linear Lagrangian triangles. The optimization problem was solved with the augmented Lagrangian method, which consists of solving a sequence of box-constrained minimization problems by a second-order projection method employing a memoryless quasi-Newton method. This process reduces computational cost and proved more effective and robust. The results produce more refined layouts, with accurate definition of the topology and shape of the structure. On the other hand, the mass-minimization formulation with a global stress criterion provides structural layouts ready for modelling, albeit with violation of the criterion of homogeneously distributed stress.
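A minimal sketch of the SIMP-type interpolation mentioned above, with illustrative parameter values:

```python
import numpy as np

def simp_modulus(rho, E0=1.0, E_min=1e-9, p=3.0):
    """Effective Young's modulus for relative densities rho in [0, 1] (SIMP power law)."""
    rho = np.clip(rho, 0.0, 1.0)
    return E_min + rho**p * (E0 - E_min)   # exponent p > 1 penalizes intermediate densities
```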
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
In order to evaluate the flying capacity and nest site selection of Angiopolybia pallens (Lepeletier, 1836), we made 17 incursions (136 hours of sampling effort) into Atlantic Rain Forest environments in Bahia state. Our data show that this wasp prefers to nest on wide leaves of bushes and short trees (nests between 0.30 and 3 m from the ground) in half-shady environments (clearings and shaded cultivated areas). A logistic regression model fitted with the quasi-Newton method provided a good description of the flying capacity observed in A. pallens (χ² = 91.52; p ≪ 0.001). According to this model, the flight autonomy of A. pallens is low: the wasps fly short distances, with an effective radius of action of about 24 m measured from their nests, which corresponds to a foraging area of nearly 1,800 m².
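An illustrative sketch of fitting a logistic regression with a quasi-Newton (BFGS) optimizer, as in the flight-capacity analysis above; the release distances and binary outcomes are synthetic, not the A. pallens field data:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
distance = rng.uniform(0.0, 60.0, size=200)                # release distance from the nest (m)
p_true = 1.0 / (1.0 + np.exp(-(4.0 - 0.17 * distance)))    # assumed "true" return probability
returned = rng.random(200) < p_true                        # True if the wasp returned to the nest

def neg_log_likelihood(beta):
    eta = beta[0] + beta[1] * distance
    p = 1.0 / (1.0 + np.exp(-eta))
    eps = 1e-12                                            # guard against log(0)
    return -np.sum(returned * np.log(p + eps) + (~returned) * np.log(1.0 - p + eps))

fit = minimize(neg_log_likelihood, x0=np.zeros(2), method="BFGS")
b0, b1 = fit.x
print(f"distance at which the return probability drops to 50%: {-b0 / b1:.1f} m")
```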
Abstract:
This dissertation concerns convergence analysis for nonparametric problems in the calculus of variations and sufficient conditions for a weak local minimizer of a functional, for both nonparametric and parametric problems. Newton's method in infinite-dimensional space is proved to be well defined and to converge quadratically to a weak local minimizer of a functional subject to certain boundary conditions. Sufficient conditions for global convergence are proposed, and a well-defined algorithm based on those conditions is presented and proved to converge. Finite element discretization is employed to obtain an implementable line-search-based quasi-Newton algorithm, and a proof of convergence of the discretized algorithm is included. This work also proposes sufficient conditions for a weak local minimizer that do not use the language of conjugate points. The form of the new conditions is consistent with that of the finite-dimensional case. It is believed that the new form of sufficient conditions will lead to simpler ways of verifying that an extremal is a local minimizer for well-known problems in the calculus of variations.
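A generic sketch of the discretize-then-optimize approach described above: a one-dimensional functional is discretized with piecewise-linear finite elements and minimized by a line-search quasi-Newton (BFGS) method; the functional is illustrative, not one from the dissertation:

```python
import numpy as np
from scipy.optimize import minimize

# Minimize J[u] = ∫ (u'(x)^2 / 2 - u(x)) dx on [0, 1] with u(0) = u(1) = 0,
# whose exact minimizer is u(x) = x(1 - x)/2 (maximum value 0.125).
n = 50                                  # number of interior nodes
x = np.linspace(0.0, 1.0, n + 2)
h = x[1] - x[0]

def J(u_interior):
    u = np.concatenate(([0.0], u_interior, [0.0]))     # impose the boundary conditions
    du = np.diff(u) / h                                # element-wise derivative (P1 elements)
    energy = 0.5 * np.sum(du**2) * h                   # ∫ u'^2 / 2 dx
    load = np.sum(0.5 * (u[:-1] + u[1:])) * h          # trapezoidal ∫ u dx
    return energy - load

res = minimize(J, np.zeros(n), method="BFGS")          # line-search quasi-Newton
print("approximate maximum of u:", res.x.max())        # should be close to 0.125
```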
Abstract:
We present a novel approach for the reconstruction of spectra from Euclidean correlator data that makes close contact with modern Bayesian concepts. It is based upon an axiomatically justified, dimensionless prior distribution which, in the case of a constant prior function m(ω), imprints only smoothness on the reconstructed spectrum. In addition, we are able to analytically integrate out the only relevant overall hyper-parameter α in the prior, removing the necessity for Gaussian approximations found, e.g., in the Maximum Entropy Method. Using a quasi-Newton minimizer and high-precision arithmetic, we are then able to find the unique global extremum of P[ρ|D] in the full Nω ≫ Nτ dimensional search space. The method yields gradually improving reconstruction results as the quality of the supplied input data increases, without introducing the artificial peak structures often encountered in the MEM. To support these statements we present mock data analyses for the case of zero-width delta peaks and for more realistic scenarios, based on the perturbative Euclidean Wilson Loop as well as the Wilson Line correlator in Coulomb gauge.
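A schematic illustration of reconstructing a spectrum from correlator data by quasi-Newton minimization; the Gaussian likelihood plus a simple quadratic smoothness regulator used here is only a stand-in for the paper's Bayesian prior (in which the hyper-parameter α is integrated out analytically):

```python
import numpy as np
from scipy.optimize import minimize

n_tau, n_omega = 16, 100
tau = np.arange(1, n_tau + 1)
omega = np.linspace(0.05, 10.0, n_omega)
K = np.exp(-np.outer(tau, omega))                        # simple exponential kernel, D = K @ rho

rho_true = np.exp(-0.5 * ((omega - 3.0) / 0.3)**2)       # mock single-peak spectrum
noise = 1e-3
D = K @ rho_true + noise * np.random.default_rng(2).standard_normal(n_tau)

alpha = 1e-4
def cost(rho):
    chi2 = np.sum((D - K @ rho)**2) / noise**2           # data term
    smooth = alpha * np.sum(np.diff(rho)**2)             # crude smoothness regulator
    return chi2 + smooth

res = minimize(cost, np.ones(n_omega), method="L-BFGS-B",
               bounds=[(0.0, None)] * n_omega)           # enforce positivity of the spectrum
rho_reconstructed = res.x
```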
Abstract:
This work investigates the feasibility of the automatic decomposition of gamma-radiation spectra by means of linear algebraic equation-solving algorithms based on pseudo-inverse techniques. The algorithms were chosen with a view to their possible implementation on low-complexity, special-purpose processors. The first chapter reviews the techniques for the detection and measurement of gamma radiation that underlie the spectra treated in this work. The concepts associated with the nature of electromagnetic radiation are re-examined, together with the physical processes and the electronic treatment involved in its detection, emphasising the intrinsically statistical nature of the spectrum formation process, understood as a classification of the number of detections as a function of the nominally continuous energy associated with each of them. A brief description is given of the main radiation-matter interaction phenomena that condition the detection and spectrum formation processes. The radiation detector is considered the critical element of the measurement system, since it strongly conditions the detection process; the main detector types are therefore examined, with particular emphasis on semiconductor detectors, which are the most widely used today. Finally, the fundamental electronic subsystems for conditioning and pre-processing the detector signal, traditionally referred to as Nuclear Electronics, are described. As far as spectroscopy is concerned, the subsystem of most interest for this work is the multichannel analyser, which performs the qualitative treatment of the signal and builds a histogram of radiation intensity over the range of energies to which the detector is sensitive. This N-dimensional vector is what is generally known as the radiation spectrum, and each radionuclide present in a non-pure radiation source leaves its fingerprint in it.

The second chapter gives an exhaustive review of the mathematical methods devised so far to identify the radionuclides present in a composite spectrum and to determine their relative activities. One of them, multiple linear regression, is proposed as the approach best suited to the constraints of the problem: the ability to handle low-resolution spectra, the absence of a human operator (unsupervised operation), and the possibility of being supported by low-complexity algorithms implementable on dedicated, highly integrated processors. The analysis problem is stated formally in the third chapter along these lines, and it is shown that it admits a solution within the theory of linear associative memories: an operator based on such structures can provide the desired spectral decomposition. In the same context, a pair of complementary adaptive algorithms is proposed for constructing this operator; their arithmetic characteristics make them particularly suitable for implementation on highly integrated processors, and their adaptive nature gives the associative memory great flexibility in incorporating new information progressively.

The fourth chapter addresses an additional, highly complex problem: the spectral deformations introduced by instrumental drifts in the detector and in the pre-conditioning electronics. These deformations invalidate the linear regression model used to describe the measured spectrum. A model is therefore derived that includes the drifts as additional contributions to the composite spectrum, leading to a simple extension of the associative memory that tolerates drifts in the measured mixture and performs a robust analysis of contributions. The extension is based on a small-perturbation assumption. Laboratory practice shows, however, that instrumental drifts can sometimes produce severe distortions in the spectrum that cannot be handled by this model. The fifth chapter therefore reformulates the problem of measurements affected by strong drifts from the standpoint of nonlinear optimization theory. This reformulation leads to a recursive algorithm inspired by the Gauss-Newton method, which introduces the concept of a feedback linear memory. This operator offers a markedly improved capability for decomposing mixtures with strong drift, without the excessive computational load of classical nonlinear optimization algorithms. The work concludes with a discussion of the results obtained at the three main levels of study addressed in the third, fourth, and fifth chapters, the main conclusions drawn from the study, and an outline of possible lines of future work.
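A minimal sketch of the pseudo-inverse decomposition underlying the approach described above; the reference spectra and the composite mixture are synthetic placeholders:

```python
import numpy as np

rng = np.random.default_rng(3)
channels = np.arange(256)

def peak(center, width=6.0):
    """Synthetic single-nuclide reference spectrum (one Gaussian photopeak)."""
    return np.exp(-0.5 * ((channels - center) / width)**2)

A = np.column_stack([peak(60), peak(120), peak(200)])    # library matrix, one column per nuclide
true_activities = np.array([5.0, 1.5, 3.0])
y = A @ true_activities + 0.05 * rng.standard_normal(channels.size)   # noisy composite spectrum

activities = np.linalg.pinv(A) @ y                       # least-squares (pseudo-inverse) estimate
print(np.round(activities, 2))
```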
Abstract:
Recently the Balanced method was introduced as a class of quasi-implicit methods for solving stiff stochastic differential equations. We examine asymptotic and mean-square stability for several implementations of the Balanced method and give a generalized result for the mean-square stability region of any Balanced method. We also investigate the optimal implementation of the Balanced method with respect to strong convergence.
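A minimal scalar sketch of one step of the Balanced method, following the standard construction X_{n+1} = X_n + (a(X_n)h + b(X_n)ΔW) / (1 + c0(X_n)h + c1(X_n)|ΔW|); the drift, diffusion, and control functions are illustrative choices, not a particular implementation examined in the paper:

```python
import numpy as np

def balanced_step(x, h, dW, a, b, c0, c1):
    """One step of the Balanced method for the scalar SDE dX = a(X) dt + b(X) dW."""
    C = c0(x) * h + c1(x) * abs(dW)                # balancing (damping) term
    return x + (a(x) * h + b(x) * dW) / (1.0 + C)

# Stiff linear test equation dX = lam*X dt + mu*X dW with lam < 0
lam, mu = -10.0, 1.0
a, b = (lambda x: lam * x), (lambda x: mu * x)
c0, c1 = (lambda x: abs(lam)), (lambda x: abs(mu))   # one common choice of control functions

rng = np.random.default_rng(4)
h, x = 0.1, 1.0
for _ in range(100):
    x = balanced_step(x, h, rng.normal(0.0, np.sqrt(h)), a, b, c0, c1)
print(x)
```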
Development of the base cell of periodic composite microstructures under topology optimization
Abstract:
This thesis develops a new technique for the design of composite microstructures by topology optimization, aiming to maximize stiffness, using the strain energy method and an h-adaptive refinement scheme to better define the topological contours of the microstructure. This is done by optimally distributing material within a pre-established design region called the base cell. The finite element method is used to discretize the domain and to solve the governing equations. The mesh is refined iteratively so that it covers all elements representing solid material and all void elements containing at least one node in a solid-material region. The element chosen is the three-node linear triangle. The constrained nonlinear programming problem is solved with the augmented Lagrangian method, together with a minimization algorithm based on quasi-Newton search directions and Armijo-Wolfe conditions to assist the descent process. The base cell representing the composite is found from the equivalence between a fictitious material and a prescribed material, distributed optimally in the design domain. The use of the strain energy method is justified by its lower computational cost, owing to a simpler formulation than the traditional homogenization method. Results are presented for changes in the prescribed displacement, changes in the volume constraint, and various initial values of the relative densities.
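A schematic of the augmented Lagrangian strategy described above, with SciPy's L-BFGS-B standing in for the box-constrained quasi-Newton descent with Armijo-Wolfe line search; the objective and equality constraint are toy examples, not the thesis's strain-energy and volume functions:

```python
import numpy as np
from scipy.optimize import minimize

def objective(rho):                  # toy objective: prefer densities near 0.9
    return np.sum((rho - 0.9)**2)

def constraint(rho):                 # toy volume constraint: mean density equal to 0.4
    return np.mean(rho) - 0.4

n = 20
rho = np.full(n, 0.5)
lam, mu = 0.0, 10.0                  # multiplier estimate and penalty parameter

for _ in range(10):                  # outer augmented Lagrangian iterations
    def aug_lagrangian(x):
        c = constraint(x)
        return objective(x) + lam * c + 0.5 * mu * c**2

    res = minimize(aug_lagrangian, rho, method="L-BFGS-B",
                   bounds=[(0.0, 1.0)] * n)        # box constraints on the densities
    rho = res.x
    lam += mu * constraint(rho)      # first-order multiplier update
    mu *= 2.0                        # tighten the penalty

print(np.round(rho, 3), round(constraint(rho), 6))
```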
Abstract:
This dissertation consists of three papers, which together examine whether policies meant to address inequality succeed in mitigating the impact of traditional institutions such as caste and enable ethnic minorities to claim their rights. Using experimental and quasi-experimental methods with data from a variety of primary and secondary sources, this dissertation analyzes whether policies meant to empower vulnerable groups in India have succeeded in doing so. The findings suggest that while legislations in the form of mandated political representation or freedom of information laws are necessary for increasing the accountability of government towards citizens, they may not be sufficient to ensure adequate and uniform delivery of public services, especially to citizens belonging to marginalized groups. Further, empowering citizens – especially those belonging to groups that have faced historic discrimination – to actively participate in civic and political life may require more active and intensive policy and programmatic interventions.
Jacobian-free Newton-Krylov methods with GPU acceleration for computing nonlinear ship wave patterns
Abstract:
The nonlinear problem of steady free-surface flow past a submerged source is considered as a case study for three-dimensional ship wave problems. Of particular interest is the distinctive wedge-shaped wave pattern that forms on the surface of the fluid. By reformulating the governing equations with a standard boundary-integral method, we derive a system of nonlinear algebraic equations that enforce a singular integro-differential equation at each midpoint on a two-dimensional mesh. Our contribution is to solve the system of equations with a Jacobian-free Newton-Krylov method together with a banded preconditioner that is carefully constructed with entries taken from the Jacobian of the linearised problem. Further, we are able to utilise graphics processing unit acceleration to significantly increase the grid refinement and decrease the run-time of our solutions in comparison to schemes that are presently employed in the literature. Our approach provides opportunities to explore the nonlinear features of three-dimensional ship wave patterns, such as the shape of steep waves close to their limiting configuration, in a manner that has been possible in the two-dimensional analogue for some time.
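An illustrative sketch of a Jacobian-free Newton-Krylov iteration: each Newton system is solved with GMRES, and the Jacobian-vector product is approximated by a finite difference so the Jacobian is never formed; the residual function below is a small toy system and no preconditioner is applied, unlike the banded preconditioner constructed in the paper:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def F(x):                                          # toy nonlinear residual, with a root at (1, 2)
    return np.array([x[0]**2 + x[1] - 3.0,
                     x[0] + x[1]**2 - 5.0])

def jfnk(F, x0, newton_tol=1e-10, max_newton=20, eps=1e-7):
    x = x0.astype(float)
    for _ in range(max_newton):
        Fx = F(x)
        if np.linalg.norm(Fx) < newton_tol:
            break
        def Jv(v):                                 # matrix-free Jacobian-vector product
            return (F(x + eps * v) - Fx) / eps
        J = LinearOperator((x.size, x.size), matvec=Jv)
        dx, _ = gmres(J, -Fx)                      # Krylov (GMRES) solve of the Newton system
        x = x + dx
    return x

print(jfnk(F, np.array([1.0, 1.0])))
```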