968 results for Fourth-order methods
Abstract:
Dissertation submitted for the degree of Master in Electrical and Computer Engineering
Abstract:
The convergence features of an endogenous growth model with physical capital, human capital and R&D have been studied. We add an erosion effect (supported by empirical evidence) to this model and fully characterize its convergence properties. The dynamics are described by a fourth-order system of differential equations. We show that the model converges along a one-dimensional stable manifold and that its equilibrium is saddle-path stable. We also argue that one implication of considering this “erosion effect” is that the model adheres more closely to the data.
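The saddle-path property mentioned above amounts to an eigenvalue count on the linearized system. As a minimal, hypothetical sketch (the Jacobian below is illustrative and not taken from the paper), one can linearize the fourth-order system at its steady state and check that exactly one eigenvalue has a negative real part:

```python
import numpy as np

# Hypothetical 4x4 Jacobian of the linearized dynamics at the steady state
# (illustrative numbers only).
J = np.array([
    [ 0.10,  0.30, -0.20,  0.05],
    [ 0.00,  0.25,  0.10, -0.15],
    [ 0.40, -0.10,  0.05,  0.20],
    [-0.30,  0.20,  0.15, -0.60],
])

eigvals = np.linalg.eigvals(J)
n_stable = int(np.sum(eigvals.real < 0))

print("eigenvalues:", np.round(eigvals, 3))
print("eigenvalues with negative real part:", n_stable)
# A one-dimensional stable manifold corresponds to exactly one such eigenvalue.
```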
Abstract:
This study represents one of the first contributions to the knowledge of the quantitative fidelity of recent freshwater molluscan assemblages in subtropical rivers. Thanatocoenoses and biocoenoses were studied in straight and meandering to braided sectors in the middle course of the Touro Passo River, a fourth-order tributary of the Uruguay River located in the westernmost part of the State of Rio Grande do Sul. Samplings were carried out with 5 m² quadrats, five in each sector, for a total sampled area of 50 m². Samplings were also made in a lentic environment (an abandoned meander) with intermittent communication with the Touro Passo River, in order to record out-of-habitat shell transport from the lentic communities to the main river channel. The results show that, despite the frequent oscillation of the water level, the biocoenosis of the Touro Passo River shows high ecological fidelity and is little influenced by the neighboring lentic environments. The taxonomic composition and some features of community structure, especially the dominant species, also reflect ecological differences between the two main sectors sampled, such as the greater habitat complexity of the meandering sector. Regarding quantitative fidelity, 60% of the species found alive were also found dead and 47.3% of the species found dead were also found alive at the river scale. However, 72% of the dead individuals belong to species also found alive, which may be related to the good rank-order correlation obtained for the live/dead assemblages. Consequently, the dominant species of the thanatocoenoses could be used to infer the ecological attributes of the biocoenoses. The values of all the indices analyzed varied widely in small-scale (quadrat) samplings, but were closer to those reported in previous studies when analyzed at the station and river scales.
Abstract:
The RP protein (RPP) array approach immobilizes minute amounts of cell lysates or tissue protein extracts as distinct microspots on an NC-coated slide. Subsequent detection with specific antibodies allows multiplexed quantification of proteins and their modifications at a scale beyond what traditional techniques can achieve. Cellular functions are the result of the coordinated action of signaling proteins assembled in macromolecular complexes. These signaling complexes are highly dynamic structures that change their composition in time and space to adapt to the cell environment. Until now, their comprehensive analysis has required relatively large amounts of cells (>5 × 10^7) because of their low abundance and their breakdown during the isolation procedure. In this study, we combined small-scale affinity capture of the T-cell receptor (TCR) with RPP arrays to follow TCR signaling complex assembly in human ex vivo isolated CD4 T-cells. Using this strategy, we report specific recruitment of signaling components to the TCR complex upon T-cell activation in as few as 0.5 million cells. Second- to fourth-order TCR-interacting proteins were accurately quantified, making this strategy especially well suited to the analysis of membrane-associated signaling complexes in limited amounts of cells or tissues, e.g., ex vivo isolated cells or clinical specimens.
Abstract:
Three-dimensional sequence stratigraphy is a potent exploration and development tool for the discovery of subtle stratigraphic traps. Reservoir morphology, heterogeneity and subtle stratigraphic trapping mechanisms can be better understood through the systematic horizontal identification of the sedimentary facies of systems tracts provided by three-dimensional attribute maps, used as an important complement to sequential analysis of two-dimensional seismic lines and well-log data. On new prospects as well as on already-producing fields, the additional input of sequential analysis on three-dimensional data enables the identification, location and precise delimitation of new potentially productive zones. The first part of this paper presents four typical horizontal seismic facies assigned to the successive systems tracts of a third- or fourth-order sequence deposited in inner to outer neritic conditions on a clastic shelf. The construction of this synthetic representative sequence is based on the observed reproducibility of the horizontal seismic facies response to cyclic eustatic events in more than 35 sequences recorded in the Gulf Coast Plio-Pleistocene and Late Miocene, offshore Louisiana, in the West Cameron region of the Gulf of Mexico. The second part shows how three-dimensional sequence stratigraphy can contribute to localizing and understanding the sedimentary facies associated with productive zones. A case study in the early Middle Miocene Cibicides opima sands shows multiple stacked gas accumulations in the top slope fan, prograding wedge and basal transgressive systems tracts of the third-order sequence between SB 15.5 Ma and SB 13.8 Ma.
Abstract:
We deal with a classical predictive mechanical system of two spinless charges in which radiation is taken into account and there are no external fields. The terms $p_a^{(2,2)}$ of the expansion of the Hamilton-Jacobi momenta in the charges are calculated. Using these, together with previously known results, we obtain the $p_a$ up to fourth order. We then calculate, for a scattering process, the radiated energy as a function of the impact parameter and the incident energy, and the radiated 3-momentum as a function of the impact parameter and the incident 3-momentum. Scattering cross-sections are also calculated. Good agreement with well-known results, including those of quantum electrodynamics, is found.
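The expansion referred to above is a double power series in the two charges; schematically, in our notation (which need not match the paper's),

$$
p_a \;=\; \sum_{m,n\ge 0} e_1^{\,m}\, e_2^{\,n}\; p_a^{(m,n)},
$$

so that the $p_a^{(2,2)}$ terms are the last ones needed to complete the momenta through fourth order in the charges.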
Abstract:
The objective of this work was to compare random regression models for the estimation of genetic parameters for Guzerat milk production, using orthogonal Legendre polynomials. Records (20,524) of test-day milk yield (TDMY) from 2,816 first-lactation Guzerat cows were used. TDMY records grouped into 10 monthly classes were analyzed for the additive genetic, permanent environmental, and residual effects (random effects), whereas the contemporary group, calving age (linear and quadratic effects) and mean lactation curve were analyzed as fixed effects. Trajectories for the additive genetic and permanent environmental effects were modeled by means of a covariance function employing orthogonal Legendre polynomials ranging from the second to the fifth order. Residual variances were considered in one, four, six, or ten variance classes. The best model had six residual variance classes. The heritability estimates for the TDMY records varied from 0.19 to 0.32. The random regression model that used a second-order Legendre polynomial for the additive genetic effect and a fifth-order polynomial for the permanent environmental effect was adequate according to the main criteria employed. A model with a second-order Legendre polynomial for the additive genetic effect and a fourth-order polynomial for the permanent environmental effect could also be employed in these analyses.
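In random regression test-day models of this kind, the trajectories are built from Legendre polynomials evaluated on days in milk standardized to [-1, 1]. A minimal sketch follows; the function name, normalization and lactation limits are illustrative assumptions, not values taken from the paper:

```python
import numpy as np
from numpy.polynomial import legendre

def legendre_covariates(days_in_milk, order, dim_min=5, dim_max=305):
    """Normalized Legendre covariates of the kind used in random
    regression test-day models (illustrative implementation)."""
    # Standardize days in milk to the interval [-1, 1]
    x = -1.0 + 2.0 * (np.asarray(days_in_milk, float) - dim_min) / (dim_max - dim_min)
    # Column j holds the normalized Legendre polynomial of order j evaluated at x
    phi = np.column_stack([
        np.sqrt((2 * j + 1) / 2.0) * legendre.legval(x, [0] * j + [1])
        for j in range(order + 1)
    ])
    return phi

# Example: covariates of a second-order polynomial for three test days
print(legendre_covariates([15, 150, 300], order=2))
```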
Abstract:
The binding energies of deformed even-even nuclei have been analyzed within the framework of a recently proposed microscopic-macroscopic model. We have used the semiclassical Wigner-Kirkwood ħ expansion up to fourth order, instead of the usual Strutinsky averaging scheme, to compute the shell corrections in a deformed Woods-Saxon potential including the spin-orbit contribution. For a large set of 561 even-even nuclei with Z ≥ 8 and N ≥ 8, we find an rms deviation from experiment of 610 keV in binding energies, comparable to the one found for the same set of nuclei using the finite-range droplet model of Möller and Nix (656 keV). As applications of our model, we explore its predictive power near the proton and neutron drip lines as well as in the superheavy mass region. Next, we systematically explore the fourth-order Wigner-Kirkwood corrections to the smooth part of the energy. It is found that the ratio of the fourth-order to the second-order corrections behaves in a very regular manner as a function of the asymmetry parameter I = (N − Z)/A. This allows us to absorb the fourth-order corrections into the second-order contributions to the binding energy, which enables us to simplify and speed up the calculation of deformed nuclei.
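For orientation, the lowest-order quantum correction in the Wigner-Kirkwood expansion of the partition function has the standard textbook form below, written schematically for a local potential V (the works summarized here carry the expansion to order ħ⁴ and include spin-orbit and deformation effects):

$$
Z(\beta) \;=\; \frac{1}{(2\pi\hbar)^3}\!\int\! d^3r\, d^3p\;
e^{-\beta H_{\rm cl}(\mathbf r,\mathbf p)}
\left[\,1 \;-\; \frac{\hbar^2\beta^2}{8m}\,\nabla^2 V
\;+\; \frac{\hbar^2\beta^3}{24m}\Big(|\nabla V|^2 + \tfrac{1}{m}\,(\mathbf p\cdot\nabla)^2 V\Big)
\;+\; \mathcal O(\hbar^4)\right].
$$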
Abstract:
This Master's thesis considers new methods for independent component analysis (ICA). The methods are based on colligation and on cross-moments. The colligation method is based on the colligation of weights; instead of a single probability distribution it uses two types of distributions, which rests on a general criterion of independence. The colligation approach is applied with two asymptotic representations: Gram-Charlier and Edgeworth expansions are used to approximate the probability densities in these methods. The thesis also uses a cross-moment method based on the fourth-order cross-moment, which is very similar to the FastICA algorithm. Both methods are examined on linear mixtures of two independent variables; the source signals and the mixing matrices are unknown, except for the number of signal sources. The colligation method and its modifications are compared with FastICA and JADE. A comparative analysis of performance and CPU time is also carried out for the cross-moment-based methods, FastICA and JADE, on several mixed pairs.
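As a point of reference for the fourth-order cross-moment approach mentioned above, here is a minimal kurtosis-based fixed-point iteration of the FastICA type for a two-source linear mixture (illustrative data and settings; this is not the thesis' colligation method):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two independent non-Gaussian sources and an unknown 2x2 mixing matrix
# (synthetic data for illustration only).
n = 10_000
S = np.vstack([rng.uniform(-1, 1, n), np.sign(rng.standard_normal(n))])
A = np.array([[1.0, 0.6], [0.4, 1.0]])
X = A @ S

# Center and whiten the mixtures
X = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(np.cov(X))
Z = E @ np.diag(d ** -0.5) @ E.T @ X

# Kurtosis-based fixed-point iteration (FastICA update with g(u) = u^3)
w = rng.standard_normal(2)
w /= np.linalg.norm(w)
for _ in range(100):
    u = w @ Z
    w_new = (Z * u ** 3).mean(axis=1) - 3 * w   # E[z (w^T z)^3] - 3 w
    w_new /= np.linalg.norm(w_new)
    converged = abs(abs(w_new @ w) - 1) < 1e-10
    w = w_new
    if converged:
        break

print("recovered direction:", w)
print("estimated source (first 5 samples):", (w @ Z)[:5])
```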
Abstract:
The semiclassical Wigner-Kirkwood ħ expansion method is used to calculate shell corrections for spherical and deformed nuclei. The expansion is carried out up to fourth order in ħ. A systematic study of Wigner-Kirkwood averaged energies is presented as a function of the deformation degrees of freedom. The shell corrections, along with the pairing energies obtained by using the Lipkin-Nogami scheme, are used in the microscopic-macroscopic approach to calculate binding energies. The macroscopic part is obtained from a liquid drop formula with six adjustable parameters. Considering a set of 367 spherical nuclei, the liquid drop parameters are adjusted to reproduce the experimental binding energies, which yields a root mean square (rms) deviation of 630 keV. It is shown that the proposed approach is indeed promising for the prediction of nuclear masses.
Abstract:
Chemometric activities in Brazil are described according to three phases: before the existence of microcomputers in the 1970s, through the initial stages of microcomputer use in the 1980s, and during the years of extensive microcomputer applications in the '90s and into this century. Pioneering activities in both university and industry are emphasized. Active research areas in chemometrics are cited, including experimental design, pattern recognition and classification, curve resolution for complex systems, and multivariate calibration. New trends in chemometrics, especially higher-order methods for treating data, are emphasized.
Abstract:
The main objective of this thesis is to show that plate strips subjected to transverse line loads can be analysed using the beam on elastic foundation (BEF) approach. It is shown that the elastic behaviour of both the centre-line section of a semi-infinite plate supported along two edges, and the free edge of a cantilever plate strip, can be accurately predicted by calculations based on the two-parameter BEF theory. The transverse bending stiffness of the plate strip forms the foundation. The foundation modulus is shown, mathematically and physically, to be the zero-order term of the fourth-order differential equation governing the behaviour of the BEF, whereas the torsion rigidity of the plate acts like pre-tension in the second-order term. Direct equivalence is obtained for harmonic line loading by comparing the differential equations of Levy's method (a simply supported plate) with the BEF method. By equating the second- and zero-order terms of the semi-infinite BEF model for each harmonic component, two parameters are obtained for a simply supported plate of width B: the characteristic length, 1/λ, and the normalized sum, n, of the effects of axial loading and of the stiffening resulting from the torsion stiffness; the latter contribution is denoted nlin. This procedure gives the following result for the first mode when a uniaxial stress field is assumed (ν = 0): 1/λ = √2B/π and nlin = 1. For constant line loading, which is the superposition of harmonic components, slightly different foundation parameters are obtained when the maximum deflection and bending moment values of the theoretical plate, with ν = 0, and of the BEF analysis are equated: 1/λ = 1.47B/π and nlin = 0.59 for a simply supported plate, and 1/λ = 0.99B/π and nlin = 0.25 for a fixed plate. The BEF parameters of the plate strip with a free edge are determined based solely on finite element analysis (FEA) results: 1/λ = 1.29B/π and nlin = 0.65, where B is the double width of the cantilever plate strip. Biaxiality of the stress field, ν > 0, is shown not to affect the values of the BEF parameters significantly. The effect of the geometric nonlinearity caused by in-plane axial and biaxial loading is studied theoretically by comparing the differential equations of Levy's method with the BEF approach. The BEF model is generalised to take into account the elastic rotation stiffness of the longitudinal edges. Finally, formulae are presented that take into account the effects of Poisson's ratio and geometric nonlinearity on the bending behaviour resulting from axial and transverse in-plane loading. It is also shown that the BEF parameters of the semi-infinite model are valid for linear elastic analysis of a plate strip of finite length. The BEF model was verified by applying it to the analysis of bending stresses caused by misalignments in a laboratory test panel. In summary, it can be concluded that the advantages of the BEF theory are that it is a simple tool, and that it is accurate enough for specific stress analysis of semi-infinite and finite plate bending problems.
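A two-parameter beam-on-elastic-foundation model of the kind referred to above is governed by a fourth-order differential equation whose general form is (our notation; the thesis' normalization may differ):

$$
EI\,\frac{d^4 w}{dx^4} \;-\; N\,\frac{d^2 w}{dx^2} \;+\; k\,w \;=\; q(x),
\qquad
\frac{1}{\lambda} \;=\; \left(\frac{4EI}{k}\right)^{1/4},
$$

where k is the foundation modulus appearing in the zero-order term (here supplied by the transverse bending stiffness of the plate strip), N is the pre-tension-like second-order term associated with the torsion rigidity, and 1/λ is the characteristic length.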
Abstract:
We study the phonon dispersion and the cohesive and thermal properties of the rare gas solids Ne, Ar, Kr, and Xe, using a variety of potentials obtained from different approaches, such as fitting to crystal properties, purely ab initio calculations for molecules and dimers, ab initio calculations for the solid crystalline phase, or a combination of ab initio calculations and fitting to either gas phase data or solid state properties. We explore whether potentials derived with a certain approach have any obvious benefit over the others in reproducing the solid state properties. In particular, we study the phonon dispersion, the isothermal and adiabatic bulk moduli, the thermal expansion, and the elastic (shear) constants as functions of temperature. Anharmonic effects on thermal expansion, specific heat, and bulk moduli have been studied using λ² perturbation theory in the high-temperature limit with the nearest-neighbor central force (nncf) model as developed by Shukla and MacDonald [4]. In our study, we find that potentials based on fitting to crystal properties have some advantage, particularly for Kr and Xe, in reproducing the thermodynamic properties over an extended range of temperatures, but agreement of the phonon frequencies with the measured values is not guaranteed. For the lighter element Ne, the LJ potential, which is based on fitting to gas phase data, produces the best results for the thermodynamic properties; however, the Eggenberger potential for Ne, which is based on combining ab initio quantum chemical calculations and molecular dynamics simulations, produces results in better agreement with the measured dispersion and elastic (shear) values. For Ar, the Morse-type potential, which is based on fourth-order Møller-Plesset perturbation theory (MP4) ab initio calculations, yields the best results for the thermodynamic properties, the elastic (shear) constants, and the phonon dispersion curves.
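The Lennard-Jones and Morse-type potentials mentioned above have the generic functional forms below (standard forms only; the parameter values fitted in the cited potentials are not reproduced here):

$$
V_{\rm LJ}(r) \;=\; 4\varepsilon\!\left[\left(\frac{\sigma}{r}\right)^{12} - \left(\frac{\sigma}{r}\right)^{6}\right],
\qquad
V_{\rm Morse}(r) \;=\; D_e\!\left[e^{-2a(r-r_e)} - 2\,e^{-a(r-r_e)}\right].
$$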
Abstract:
Deep learning is a rapidly growing research area in machine learning that has achieved impressive results on tasks ranging from image classification to speech and language modelling. Recurrent neural networks, a subclass of deep architectures, appear particularly promising. Recurrent networks can capture the temporal structure in data; they potentially have the ability to learn correlations between events that are far apart in time and to store information in their internal memory indefinitely. In this work, we first try to understand why depth is useful. In line with other work in the literature, our results show that deep models can be more efficient than shallow models at representing certain families of functions. Unlike that work, we carry out our theoretical analysis on feed-forward (acyclic) deep networks with piecewise-linear activation functions, since this type of model is currently the state of the art in various classification tasks. The second part of this thesis deals with the learning process. We analyse several recently proposed optimization techniques, such as Hessian-free optimization, natural gradient descent and Krylov subspace descent. We propose the theoretical framework of generalized trust-region methods and show that several of these recently developed algorithms can be viewed from this perspective. We argue that some members of this family of approaches may be better suited than others to non-convex optimization. The last part of this document focuses on recurrent neural networks. We first study the concept of memory and try to answer the following questions: can recurrent networks exhibit unbounded memory, and can this behaviour be learned? We show that this is possible if hints are provided during training. We then explore two problems specific to training recurrent networks, namely the vanishing and the exploding gradient. Our analysis concludes with a solution to the exploding gradient problem that involves bounding the norm of the gradient; we also propose a regularization term designed specifically to reduce the vanishing gradient problem. On a synthetic dataset, we show empirically that these mechanisms can allow recurrent networks to learn, on their own, to memorize information for an indefinite period of time. Finally, we explore the notion of depth in recurrent neural networks. Compared with feed-forward networks, the definition of depth in recurrent networks is often ambiguous. We propose different ways of adding depth to recurrent networks and evaluate these proposals empirically.
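Bounding the gradient norm, as described above, is commonly implemented as global norm clipping. A minimal sketch follows; the function name and threshold are illustrative assumptions, not the thesis' implementation:

```python
import numpy as np

def clip_gradient_norm(grads, threshold=1.0):
    """Rescale a list of gradient arrays so that their global L2 norm
    does not exceed `threshold` (illustrative implementation)."""
    total_norm = np.sqrt(sum(np.sum(g ** 2) for g in grads))
    if total_norm > threshold:
        scale = threshold / total_norm
        grads = [g * scale for g in grads]
    return grads

# Example: an artificially "exploding" gradient is rescaled to norm 1
grads = [np.array([30.0, -40.0]), np.array([[5.0, 0.0], [0.0, 5.0]])]
clipped = clip_gradient_norm(grads, threshold=1.0)
print(np.sqrt(sum(np.sum(g ** 2) for g in clipped)))  # ~1.0
```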
Abstract:
In this work, we extend the set of currently known physical conditions on the exact exchange hole by deriving the fourth-order expansion of the exact spherically averaged exchange hole. We compare the second- and fourth-order expansions with the exact exchange hole for atomic and molecular systems. We find that, in general, the fourth-order expansion reproduces the exact exchange hole more faithfully at small interelectronic distances. We show that Gaussian-type basis sets have a significant influence on the terms of this new condition, by studying how the oscillations caused by these basis sets affect its first term. We also propose four analytic exchange-hole models on which we impose all of the currently known conditions on the exact exchange hole together with the new one presented in this work. We assess the performance of the models by computing exchange energies and their contributions to atomization energies. We find that the oscillations caused by Gaussian-type basis sets can compromise the accuracy and the solution of the models.
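Schematically, and in our notation rather than the paper's, the condition concerns the small-u behaviour of the spherically averaged exchange hole, whose expansion in the interelectronic distance u contains only even powers after spherical averaging:

$$
\big\langle h_x(\mathbf r; u) \big\rangle_{\rm sph}
\;=\; h_0(\mathbf r) \;+\; h_2(\mathbf r)\,u^2 \;+\; h_4(\mathbf r)\,u^4 \;+\; \mathcal O(u^6),
$$

where $h_2$ is the familiar second-order (curvature) coefficient and $h_4$ is the fourth-order coefficient addressed in this work.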