895 results for Newton, William


Relevance:

10.00%

Publisher:

Abstract:

From direct observations of the longitudinal development of ultra-high energy air showers performed with the Pierre Auger Observatory, upper limits of 3.8%, 2.4%, 3.5% and 11.7% (at 95% c.l.) are obtained on the fraction of cosmic-ray photons above 2, 3, 5 and 10 EeV (1 EeV = 10^18 eV), respectively. These are the first experimental limits on ultra-high energy photons at energies below 10 EeV. The results complement previous constraints on top-down models from array data and they reduce systematic uncertainties in the interpretation of shower data in terms of primary flux, nuclear composition and proton-air cross-section. (C) 2009 Elsevier B.V. All rights reserved.

Relevance:

10.00%

Publisher:

Abstract:

We describe herein the molecular design of novel in vivo anti-inflammatory 6-methanesulfonamide-3,4-methylenedioxyphenyl-N-acylhydrazone derivatives (1), planned by applying the molecular hybridization approach. This work also points to the discovery of LASSBio-930 (1c) as a novel anti-inflammatory and anti-hyperalgesic prototype, which was able to reduce carrageenan-induced rat paw edema with an ED50 of 97.8 μmol/kg, acting mainly as a non-selective COX inhibitor. (C) 2009 Elsevier Ltd. All rights reserved.

Relevance:

10.00%

Publisher:

Abstract:

The Pierre Auger Observatory is a hybrid detector for ultra-high energy cosmic rays. It combines a surface array, which measures secondary particles at ground level, with a fluorescence detector, which measures the development of air showers in the atmosphere above the array. The fluorescence detector comprises 24 large telescopes specialized for measuring the nitrogen fluorescence caused by charged particles of cosmic-ray air showers. In this paper we describe the components of the fluorescence detector, including its optical system, the design of the camera, the electronics, and the systems for relative and absolute calibration. We also discuss the operation and the monitoring of the detector. Finally, we evaluate the detector performance and the precision of shower reconstructions. (C) 2010 Elsevier B.V. All rights reserved.

Relevance:

10.00%

Publisher:

Abstract:

In this paper, we present a new reformulation of the KKT system associated with a variational inequality as a semismooth equation. The reformulation is derived from the concept of differentiable exact penalties for nonlinear programming. The best theoretical results are presented for nonlinear complementarity problems, where simple, verifiable conditions ensure that the penalty is exact. We close the paper with some preliminary computational tests on the use of a semismooth Newton method to solve the equation derived from the new reformulation. We also compare its performance with the Newton method applied to classical reformulations based on the Fischer-Burmeister function and on the minimum function. The new reformulation combines the best features of the classical ones, being as easy to solve as the reformulation that uses the Fischer-Burmeister function while requiring as few Newton steps as the one based on the minimum.
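
As an illustration of the Fischer-Burmeister route the abstract compares against, here is a minimal semismooth Newton sketch for a nonlinear complementarity problem. All names and the toy linear problem are illustrative, not taken from the paper:

```python
import numpy as np

def fb(a, b):
    """Fischer-Burmeister function: zero iff a >= 0, b >= 0, a*b = 0 (componentwise)."""
    return np.sqrt(a * a + b * b) - a - b

def semismooth_newton(F, JF, x0, tol=1e-10, max_iter=50):
    """Solve the NCP  x >= 0, F(x) >= 0, x.F(x) = 0  via  Phi(x) = fb(x, F(x)) = 0."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        Fx = F(x)
        phi = fb(x, Fx)
        if np.linalg.norm(phi) < tol:
            break
        r = np.sqrt(x * x + Fx * Fx)
        r = np.where(r == 0.0, 1.0, r)  # pick a valid generalized-Jacobian element at kinks
        J = np.diag(x / r - 1.0) + np.diag(Fx / r - 1.0) @ JF(x)
        x = x - np.linalg.solve(J, phi)  # semismooth Newton step
    return x

# toy linear complementarity problem F(x) = M x + q; its solution is x = (1/3, 1/3)
M = np.array([[2.0, 1.0], [1.0, 2.0]])
q = np.array([-1.0, -1.0])
x = semismooth_newton(lambda x: M @ x + q, lambda x: M, np.ones(2))
```

Near a solution the method inherits the local superlinear convergence of Newton's method, which is why the number of Newton steps is the natural comparison metric used in the abstract.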

Relevance:

10.00%

Publisher:

Abstract:

The focus of this paper is the class of packing problems. More specifically, it deals with the placement of a set of N circular items of unitary radius inside an object with the aim of minimizing its dimensions. Differently shaped containers are considered, namely circles, squares, rectangles, strips and triangles. By solving systems of nonlinear equations with the Newton-Raphson method, the algorithm presented herein improves the accuracy of previous results attained by continuous optimization approaches up to numerical machine precision. The computer implementation and the data sets are available at http://www.ime.usp.br/~egbirgin/packing/. (C) 2009 Elsevier Ltd. All rights reserved.
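
The Newton-Raphson iteration the abstract relies on repeatedly solves the linearized system J(x) Δx = -F(x). A minimal sketch on a toy two-circle intersection system, the kind of tangency/contact condition that arises when packing circles (the example is illustrative, not one of the paper's systems):

```python
import numpy as np

def newton_raphson(F, J, x0, tol=1e-12, max_iter=50):
    """Solve F(x) = 0 by repeatedly solving the linearized system J(x) dx = -F(x)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        x = x - np.linalg.solve(J(x), Fx)
    return x

# intersection point of two unit circles centered at (0,0) and (1.5,0)
F = lambda p: np.array([p[0]**2 + p[1]**2 - 1.0,
                        (p[0] - 1.5)**2 + p[1]**2 - 1.0])
J = lambda p: np.array([[2 * p[0],         2 * p[1]],
                        [2 * (p[0] - 1.5), 2 * p[1]]])
p = newton_raphson(F, J, [0.5, 0.5])  # converges to (0.75, sqrt(0.4375))
```

The quadratic local convergence of this iteration is what makes refining solutions down to machine precision, as reported in the abstract, feasible.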

Relevance:

10.00%

Publisher:

Abstract:

A Nonlinear Programming algorithm that converges to second-order stationary points is introduced in this paper. The main tool is a second-order negative-curvature method for box-constrained minimization of a certain class of functions that do not possess continuous second derivatives. This method is used to define an Augmented Lagrangian algorithm of PHR (Powell-Hestenes-Rockafellar) type. Convergence proofs under weak constraint qualifications are given. Numerical examples showing that the new method converges to second-order stationary points in situations in which first-order methods fail are exhibited.
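
A minimal sketch of a PHR-type augmented Lagrangian outer loop for a scalar inequality constraint. The toy problem and the use of SciPy's Nelder-Mead as the inner solver are illustrative assumptions; the paper's inner method is a second-order negative-curvature box-constrained solver:

```python
import numpy as np
from scipy.optimize import minimize

def phr_augmented_lagrangian(f, g, x0, lam0=0.0, rho=10.0, outer_iters=30):
    """PHR augmented Lagrangian for  min f(x)  s.t.  g(x) <= 0  (scalar constraint)."""
    x, lam = np.atleast_1d(x0).astype(float), lam0
    for _ in range(outer_iters):
        def L(x):  # PHR augmented Lagrangian for one inequality constraint
            return f(x) + (max(0.0, lam + rho * g(x))**2 - lam**2) / (2.0 * rho)
        x = minimize(L, x, method="Nelder-Mead").x   # inner (sub)problem
        lam = max(0.0, lam + rho * g(x))             # first-order multiplier update
    return x, lam

# toy problem: min x^2 s.t. x >= 1, i.e. g(x) = 1 - x <= 0; solution x = 1, lam = 2
x, lam = phr_augmented_lagrangian(lambda x: float(x[0]**2),
                                  lambda x: 1.0 - x[0], x0=[0.0])
```

The multiplier update shown is the standard first-order PHR rule; the paper's contribution lies in the second-order inner method, which this sketch deliberately does not reproduce.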

Relevance:

10.00%

Publisher:

Abstract:

Optimization methods that employ the classical Powell-Hestenes-Rockafellar augmented Lagrangian are useful tools for solving nonlinear programming problems. Their reputation decreased over the last 10 years due to the comparative success of interior-point Newtonian algorithms, which are asymptotically faster. In this research, a combination of both approaches is evaluated. The idea is to produce a method that is more robust and efficient than its 'pure' counterparts for critical problems. Moreover, an additional hybrid algorithm is defined, in which the interior-point method is replaced by the Newtonian resolution of a Karush-Kuhn-Tucker (KKT) system identified by the augmented Lagrangian algorithm. The software used in this work is freely available through the Tango Project web page: http://www.ime.usp.br/~egbirgin/tango/.

Relevance:

10.00%

Publisher:

Abstract:

The immersed boundary method is a versatile tool for the investigation of flow-structure interaction. In a large number of applications, the immersed boundaries or structures are very stiff, and strong tangential forces on these interfaces induce a well-known, severe time-step restriction for explicit discretizations. This excessive stability constraint can be removed with fully implicit or suitable semi-implicit schemes, but at a seemingly prohibitive computational cost. While economical alternatives have been proposed recently for some special cases, there is a practical need for a computationally efficient approach that can be applied more broadly. In this context, we revisit a robust semi-implicit discretization introduced by Peskin in the late 1970s which has received renewed attention recently. This discretization, in which the spreading and interpolation operators are lagged, leads to a linear system of equations for the interface configuration at the future time, when the interfacial force is linear. However, this linear system is large and dense and thus it is challenging to streamline its solution. Moreover, while the same linear system or one of similar structure could potentially be used in Newton-type iterations, nonlinear and highly stiff immersed structures pose additional challenges to iterative methods. In this work, we address these problems and propose cost-effective computational strategies for solving Peskin's lagged-operators type of discretization. We do this by first constructing a sufficiently accurate approximation to the system's matrix, for which we obtain a rigorous estimate. This matrix is expeditiously computed by using a combination of pre-calculated values and interpolation. The availability of a matrix allows for more efficient matrix-vector products and facilitates the design of effective iterative schemes.
We propose efficient iterative approaches to deal with both linear and nonlinear interfacial forces and simple or complex immersed structures with tethered or untethered points. One of these iterative approaches employs a splitting in which we first solve a linear problem for the interfacial force and then we use a nonlinear iteration to find the interface configuration corresponding to this force. We demonstrate that the proposed approach is several orders of magnitude more efficient than the standard explicit method. In addition to considering the standard elliptical drop test case, we show both the robustness and efficacy of the proposed methodology with a 2D model of a heart valve. (C) 2009 Elsevier Inc. All rights reserved.

Relevance:

10.00%

Publisher:

Abstract:

This study presents the preparation, characterization and application of copper octa(3-aminopropyl)octasilsesquioxane (AC) and its subsequent reaction with azide ions (ASCA). The precursor (AC) and the novel compound (ASCA) were characterized by Fourier transform infrared spectroscopy (FTIR), nuclear magnetic resonance (NMR), electron paramagnetic resonance (EPR), scanning electron microscopy (SEM), X-ray diffraction (XRD), thermogravimetric analysis and voltammetry. The cyclic voltammogram of the graphite paste electrode modified with ASCA (GPE-ASCA) showed one redox couple with formal potential E1/2(ox) = 0.30 V and an irreversible process at 1.1 V (vs. Ag/AgCl; NaCl 1.0 M; v = 20 mV s^-1). The material is very sensitive to nitrite concentrations. The modified graphite paste electrode (GPE-ASCA) gives a linear range from 1.0 x 10^-4 to 4.0 x 10^-3 mol L^-1 for the determination of nitrite, with a detection limit of 2.1 x 10^-4 mol L^-1 and an amperometric sensitivity of 8.04 mA/mol L^-1. (C) 2010 Elsevier Ltd. All rights reserved.

Relevance:

10.00%

Publisher:

Abstract:

The subgradient optimization method is a simple and flexible iterative algorithm for linear programming. It is much simpler than Newton's method, can be applied to a wider variety of problems, and converges even when the objective function is non-differentiable. Since an efficient algorithm should not only produce a good solution but also take less computing time, a simpler algorithm that still delivers high-quality solutions is preferable. In this study a series of step-size parameters in the subgradient equation is studied. The performance is compared for a general piecewise function and a specific p-median problem. We examine how the quality of the solution changes under five forms of the step-size parameter.
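
One way to sketch the step-size comparison the abstract describes (the piecewise-linear objective and the three rules below are illustrative; the study itself compares five forms on a p-median problem):

```python
import numpy as np

def subgradient_method(f, subgrad, x0, step_rule, iters=5000):
    """Basic subgradient method; track the best iterate, since f(x_k) need not decrease."""
    x = np.asarray(x0, dtype=float)
    x_best, f_best = x.copy(), f(x)
    for k in range(1, iters + 1):
        x = x - step_rule(k) * subgrad(x)  # step along the negative subgradient
        fx = f(x)
        if fx < f_best:
            x_best, f_best = x.copy(), fx
    return x_best, f_best

# non-differentiable test function |x1 - 1| + |x2 + 2|, minimum 0 at (1, -2)
f  = lambda x: abs(x[0] - 1.0) + abs(x[1] + 2.0)
sg = lambda x: np.array([np.sign(x[0] - 1.0), np.sign(x[1] + 2.0)])

rules = {"constant":  lambda k: 0.01,
         "1/k":       lambda k: 1.0 / k,
         "1/sqrt(k)": lambda k: 1.0 / np.sqrt(k)}
best = {name: subgradient_method(f, sg, [5.0, 5.0], rule)[1]
        for name, rule in rules.items()}
```

Tracking the best iterate matters here: unlike gradient descent on a smooth function, the subgradient method oscillates around the optimum, and the step-size rule controls both how fast it arrives and how tightly it settles.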

Relevance:

10.00%

Publisher:

Abstract:

Random-effect models have been widely applied in many fields of research. However, models with uncertain design matrices for random effects have received little attention. In some applications with such problems, an expectation method has been used for simplicity; this method, however, discards the extra information carried by the uncertainty in the design matrix. A closed-form solution for this problem is generally difficult to attain. We therefore propose a two-step algorithm for estimating the parameters, especially the variance components in the model. The implementation is based on Monte Carlo approximation and a Newton-Raphson-based EM algorithm. As an example, a simulated genetics dataset was analyzed. The results showed that the proportion of the total variance explained by the random effects was accurately estimated, whereas it was highly underestimated by the expectation method. By introducing heuristic search and optimization methods, the algorithm could be developed further to infer the 'model-based' best design matrix and the corresponding best estimates.

Relevance:

10.00%

Publisher:

Abstract:

In this work, I consider the center-of-mass wave function for a homogeneous sphere under the influence of the self-interaction due to Newtonian gravity. I solve for the ground state numerically and calculate the average radius as a measure of its size. For small masses, M ≲ 10^-17 kg, the radial size is independent of density, and the ground state extends beyond the extent of the sphere. For masses larger than this, the ground state is contained within the sphere and is, to a good approximation, given by the solution for an effective radial harmonic-oscillator potential. This work thus determines the limits of applicability of the point-mass Newton-Schrödinger equations for spherical masses. In addition, I calculate the fringe visibility for matter-wave interferometry and find that in the low-mass case interferometry can in principle be performed, whereas in the high-mass case it becomes impossible. Based on this, I discuss this transition as a possible boundary for the quantum-classical crossover, independent of the usually evoked environmental decoherence. The two regimes meet at sphere sizes R ≈ 10^-7 m, and the density of the material causes only minor variations in this value.
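
The harmonic approximation mentioned above can be made concrete; the following is a sketch under the standard assumption that the Newtonian self-potential of a homogeneous sphere of mass M and radius R is quadratic near the center (it is not taken from the paper itself):

```latex
% Potential energy of the sphere's center of mass displaced by r < R
% inside its own (homogeneous) mass distribution:
U(r) = -\frac{GM^2}{2R}\left(3 - \frac{r^2}{R^2}\right)
     = \mathrm{const} + \tfrac{1}{2} M \omega^2 r^2,
\qquad \omega = \sqrt{\frac{GM}{R^3}} .

% Ground-state width of the corresponding harmonic oscillator,
% a rough estimate of the wave-packet size in the high-mass regime:
\langle r \rangle \sim \sqrt{\frac{\hbar}{M\omega}}
  = \left(\frac{\hbar^2 R^3}{G M^3}\right)^{1/4} .
```

The M^(-3/4) scaling of this width against the fixed sphere radius R is what drives the transition the abstract describes: for large M the wave packet shrinks deep inside the sphere, suppressing interferometric fringe visibility.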

Relevance:

10.00%

Publisher:

Abstract:

The use of computational fluid dynamics in the study of processes involving the flow of polymeric fluids is increasingly present in the polymer processing industries. For a computational code aimed at this task to be applied successfully, it must produce predictions as close to reality as possible (modeling) in a relatively fast and efficient way (simulation). In the modeling stage, the key point is the selection, among the many existing options, of a constitutive equation that represents the rheological characteristics of the fluid well. For the simulation stage, that is, the numerical solution of the model equations, several methodologies can be found in the literature, each with its advantages and disadvantages. It is here that the present work fits, proposing a new methodology for solving the governing equations of viscoelastic fluid flow. The methodology is based on the finite volume method with a collocated arrangement of the problem variables, and on high-order approximations for the linear and nonlinear average fluxes and for other nonlinear terms arising from the discretization of the constitutive equations. Cell-average values of the variables are used throughout the solution process, and point values are recovered at the end of the procedure via deconvolution. The system of nonlinear equations resulting from the discretization is solved simultaneously using Newton's method. Results are then shown for the application of the proposed methodology to problems involving flows of Newtonian and viscoelastic fluids. To describe the rheological behavior of the latter, two constitutive equations are used: the Oldroyd-B model and the simplified Phan-Thien-Tanner model.
These results show that the methodology is very promising, presenting some advantages over conventional finite volume methodologies. The current implementation is restricted to uniform meshes; consequently, solutions for problems with complex geometries, which require localized mesh refinement, were obtained only for low Weissenberg numbers, due to the computational cost. This restriction can be circumvented, which would make the method competitive in such cases.

Relevance:

10.00%

Publisher:

Abstract:

Applications of vibration mechanics have been growing significantly in the analysis of vehicle suspension systems and structures, among other areas. Accordingly, the present work develops techniques for the simulation and control of an automobile suspension using dynamic models with one, two and three degrees of freedom. To obtain the equations of motion for the mass-spring-damper system, the mathematical model is based on the Lagrange equation and on Newton's second law, with appropriate initial conditions. The numerical solution of these equations is obtained with the fourth-order Runge-Kutta method, using the MATLAB software. Three different control methods were used to control the vibrations of the system: classical control, LQR, and pole placement. The resulting system satisfies the stability and performance requirements and is feasible for practical applications, since the results compare well with analytical, numerical and experimental data found in the literature, indicating that control techniques such as the classical one can be simple and efficient.
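
The fourth-order Runge-Kutta integration described above can be sketched in Python for the one-degree-of-freedom mass-spring-damper model (the numerical values below are illustrative, not taken from the work, which uses MATLAB):

```python
import numpy as np

def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# illustrative quarter-car-like parameters: mass [kg], damping [N s/m], stiffness [N/m]
m, c, k = 250.0, 1000.0, 16000.0

def f(t, y):  # state y = [x, xdot];  m x'' + c x' + k x = 0
    x, v = y
    return np.array([v, -(c * v + k * x) / m])

y = np.array([0.05, 0.0])  # 5 cm initial displacement, released from rest
h, T = 1e-3, 5.0
for _ in range(int(T / h)):
    y = rk4_step(f, 0.0, y, h)  # autonomous system, so t is unused
```

With these values the damping ratio is c / (2 sqrt(k m)) = 0.25, so the free response is an underdamped oscillation that has essentially died out after five seconds.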

Relevance:

10.00%

Publisher:

Abstract:

The present study sought to describe and analyze the context in which the concession process for the mass transit systems of the Rio de Janeiro Metropolitan Region unfolded, promoted by the State Privatization Program (PED) during the 1995-1998 administration, and to assess its implications for the then-current model of organization and management of regional public transport. The work emphasized three aspects of this process: the characterization of the scenario prior to the proposed change, the substantive analysis of the policy embodied in the concession program, and the assessment of the new scenario created as a consequence of the program. The methodology relied on bibliographic research, extensive document analysis, observation of the facts, and unstructured interviews with administrators and technical staff involved in the process. The results highlighted the limitations of the analysis and planning models traditionally adopted for the formulation of sector policies, the precariousness of the regional passenger transport systems, and the situation faced by the metro, train and ferry systems, creating an environment favorable to proposals for their transfer to private management. They also showed that the initiative was influenced by the State reform projects sponsored by the World Bank (IBRD), developing without significant references within the sector's technical community and producing an institutionally fragile scenario for the task of managing the resulting contracts. Although guided by investment-recovery strategies consistent with the guidelines of the Mass Transit Plan (PTM), drawn up in 1994, the program is still too incipient for significant trends in the performance of the conceded systems to be observed. Its effects on the dismantling of the State's model of public management of metropolitan transport are, however, evident.