54 results for Numerical-analyses

at Consorci de Serveis Universitaris de Catalunya (CSUC), Spain


Relevance:

60.00%

Publisher:

Abstract:

A new technology for the three-dimensional (3-D) stacking of very thin chips on a substrate is currently under development within the ultrathin chip stacking (UTCS) Esprit Project 24910. In this work, we present the first-level UTCS structure and an analysis of the thermomechanical stresses produced by the manufacturing process, in which chips are thinned down to 10 or 15 μm. We discuss potentially critical points at the edges of the chips and the suppression of delamination in the peripheral dielectric matrix, and present a comparative study of several technological choices for the design of the metallic interconnect structures. The purpose of these calculations is to provide inputs for the definition of design rules for this technology. We have therefore undertaken a programme that analyzes the influence of the various design parameters and alternative development options. The numerical analyses are based on the finite element method.

Relevance:

20.00%

Publisher:

Abstract:

We investigate different models intended to describe the small mean-free-path regime of a kinetic equation, with particular attention paid to the moment closure by entropy minimization. We introduce a specific asymptotic-induced numerical strategy able to treat the stiff terms of the asymptotic diffusive regime. We evaluate numerically the performance of the method and the ability of the reduced models to capture the main features of the full kinetic equation.

Relevance:

20.00%

Publisher:

Abstract:

Study carried out during a stay at Imperial College London between July and November 2006. This work investigates the most suitable specimen geometry for characterizing the intralaminar fracture toughness of woven laminated composite materials. The aim is to ensure crack propagation without the specimen failing first by any other damage mechanism, so that the intralaminar fracture toughness of woven laminated composites can be characterized experimentally. To this end, a parametric analysis of different specimen types was carried out using the finite element (FE) method combined with the virtual crack closure technique (VCCT). The analysed geometries correspond to the compact tension (CT) specimen and several variants: the extended compact tension (ECT), widened compact tension (WCT), tapered compact tension (TCT) and doubly-tapered compact tension (2TCT) specimens. From these analyses, several conclusions were drawn on the most suitable specimen geometry for characterizing the intralaminar fracture toughness of woven laminated composites. In addition, a series of experimental tests was carried out to validate the results of the parametric analyses. Good agreement was found between the numerical and experimental results, despite some effects not anticipated during the experimental tests.
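For reference, the VCCT used above estimates the mode-I energy release rate at the crack tip from the nodal force at the tip and the opening displacement of the node pair just behind it; the standard formula for four-node elements (generic symbols, not taken from this study) is

```latex
G_I = \frac{F_y \,\Delta v}{2\,\Delta a\,t}
```

where \(F_y\) is the nodal force normal to the crack plane at the tip, \(\Delta v\) the relative opening displacement one element length behind the tip, \(\Delta a\) the element length, and \(t\) the specimen thickness.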

Relevance:

20.00%

Publisher:

Abstract:

We present experimental and theoretical analyses of the data requirements of haplotype inference algorithms. Our experiments cover a broad range of problem sizes under two standard models of tree distribution and were designed to yield statistically robust results despite the size of the sample space. Our results validate Gusfield's conjecture that a population size of n log n is required to give, with high probability, sufficient information to deduce the n haplotypes and their complete evolutionary history. We complement these experimental findings with theoretical bounds on the population size. We also analyze the population size required to deduce some fixed fraction of the evolutionary history of a set of n haplotypes, and establish linear bounds on the required sample size, which are also shown theoretically.
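The n log n threshold has the flavor of the classical coupon-collector bound: drawing uniformly from n distinct types, about n ln n draws are needed before every type has been seen. A minimal Monte Carlo sketch of that analogy (an illustration only; the paper's experiments sample under coalescent tree-distribution models, not uniformly):

```python
import random

def draws_to_collect_all(n, rng):
    """Count uniform draws until all n distinct types have been seen."""
    seen = set()
    draws = 0
    while len(seen) < n:
        seen.add(rng.randrange(n))
        draws += 1
    return draws

def average_draws(n, trials=200, seed=0):
    """Empirical mean number of draws; theory gives n * H_n ~ n ln n."""
    rng = random.Random(seed)
    return sum(draws_to_collect_all(n, rng) for _ in range(trials)) / trials
```

The empirical mean tracks the harmonic-sum expectation n·H_n, which grows as n ln n.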

Relevance:

20.00%

Publisher:

Abstract:

Thermal systems interchanging heat and mass by conduction, convection and radiation (solar and thermal) occur in many engineering applications, such as energy storage by solar collectors, window glazing in buildings, refrigeration of plastic moulds and air-handling units. Often these thermal systems are composed of various elements; for example, a building comprises walls, windows, rooms, etc. It would therefore be of particular interest to have a modular thermal-system simulation, formed by connecting modules for the individual elements, with the flexibility to use and change the model for an individual element and to add or remove elements without changing the entire code. A numerical approach to the heat transfer and fluid flow in such systems saves the time and cost of full-scale experiments and also aids the optimisation of system parameters. The following sections present a short summary of the work done so far on the orientation of the thesis in the field of numerical methods for heat transfer and fluid flow applications, the work in progress, and the future work.

Relevance:

20.00%

Publisher:

Abstract:

Generalized multiresolution analyses (GMRAs) are increasing sequences of subspaces of a Hilbert space H that fail to be multiresolution analyses (MRAs) in the sense of wavelet theory because the core subspace does not have an orthonormal basis generated by a fixed scaling function. Previous authors have studied a multiplicity function m which, loosely speaking, measures the failure of the GMRA to be an MRA. When the Hilbert space H is L2(Rn), the possible multiplicity functions have been characterized by Baggett and Merrill. Here we start with a function m satisfying a consistency condition which is known to be necessary, and build a GMRA in an abstract Hilbert space with multiplicity function m.

Relevance:

20.00%

Publisher:

Abstract:

In this paper, we develop numerical algorithms with small storage and operation-count requirements for the computation of invariant tori in Hamiltonian systems (exact symplectic maps and Hamiltonian vector fields). The algorithms are based on the parameterization method and follow closely the proof of the KAM theorem given in [LGJV05] and [FLS07]. They essentially consist of solving, by a Newton method, a functional equation satisfied by the invariant tori. Using some geometric identities, it is possible to perform a Newton step using little storage and few operations. In this paper we focus on the numerical issues of the algorithms (speed, storage and stability) and refer to the papers mentioned above for the rigorous results. We show how to compute efficiently both maximal invariant tori and whiskered tori, together with the associated invariant stable and unstable manifolds of whiskered tori. Moreover, we present fast algorithms for the iteration of quasi-periodic cocycles and the computation of invariant bundles, a preliminary step in the computation of invariant whiskered tori; since quasi-periodic cocycles appear in other contexts, this section may be of independent interest. The numerical methods presented here allow us to compute in a unified way both primary and secondary invariant KAM tori; secondary tori are invariant tori that can be contracted to a periodic orbit. We present some preliminary results ensuring that the methods are indeed implementable and fast, and postpone optimized implementations and results on the breakdown of invariant tori to a future paper.
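As an illustration of the objects involved, a quasi-periodic cocycle assigns a matrix M(θ) to each angle of a rigid rotation θ → θ + ω, and iterating it means forming the ordered matrix product along the orbit. A naive sketch of that iteration (the fast algorithms of the paper are designed precisely to avoid this step-by-step product):

```python
import numpy as np

def iterate_cocycle(M, omega, theta0, n):
    """Naive iteration of a quasi-periodic cocycle: the ordered product
    M(theta + (n-1) omega) ... M(theta + omega) M(theta) over the rigid
    rotation theta -> theta + omega (mod 1). M maps an angle to a 2x2
    matrix."""
    P = np.eye(2)
    theta = theta0
    for _ in range(n):
        P = M(theta) @ P          # multiply on the left along the orbit
        theta = (theta + omega) % 1.0
    return P
```

For the special case where M(θ) is itself a rotation matrix, the product is a rotation by the sum of the angles, which gives a quick sanity check.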

Relevance:

20.00%

Publisher:

Abstract:

A family of nonempty closed convex sets is built from the data of the generalized Nash equilibrium problem (GNEP). The sets are selected iteratively so that the intersection of the selected sets contains the solutions of the GNEP. The algorithm introduced by Iusem and Sosa (2003) is adapted to obtain solutions of the GNEP. Finally, some numerical experiments illustrate the numerical behavior of the algorithm.
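The building block of such set-selection schemes is the Euclidean projection onto a closed convex set. A minimal sketch of alternating projections onto two convex sets to find a point in their intersection (illustrative machinery only, not the Iusem-Sosa algorithm itself):

```python
import numpy as np

def project_ball(x, center, radius):
    """Euclidean projection onto the closed ball B(center, radius)."""
    d = x - center
    norm = np.linalg.norm(d)
    return x if norm <= radius else center + radius * d / norm

def project_halfspace(x, a, b):
    """Euclidean projection onto the halfspace {y : a.y <= b}."""
    viol = a @ x - b
    return x if viol <= 0 else x - viol * a / (a @ a)

def alternating_projections(x0, projections, iters=200):
    """Cycle through the projections; for convex sets with nonempty
    intersection the iterates converge to a point in the intersection."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        for proj in projections:
            x = proj(x)
    return x
```

For intersecting convex sets this converges (linearly, in general) to a common point, which is the kind of operation the adapted algorithm performs on the iteratively selected sets.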

Relevance:

20.00%

Publisher:

Abstract:

To describe the collective behavior of large ensembles of neurons in a neuronal network, a kinetic-theory description was developed in [13, 12], where a macroscopic representation of the network dynamics was derived directly from the microscopic dynamics of individual neurons, modeled as conductance-based, linear, integrate-and-fire point neurons. A diffusion approximation then led to a nonlinear Fokker-Planck equation for the probability density function of neuronal membrane potentials and synaptic conductances. In this work, we propose a deterministic numerical scheme for a Fokker-Planck model of an excitatory-only network. Our numerical solver allows us to obtain the time evolution of the probability distribution function, and thus of all macroscopic quantities given by suitable moments of it. We show that this deterministic scheme is capable of capturing the bistability of stationary states observed in Monte Carlo simulations. Moreover, the transient behavior of the firing rates computed from the Fokker-Planck equation is analyzed in this bistable situation, where a bifurcation scenario can be uncovered by increasing the strength of the excitatory coupling: asynchronous convergence towards stationary states, periodic synchronous solutions, or damped oscillatory convergence towards stationary states. Finally, the computation of moments of the probability distribution allows us to validate the applicability of a moment-closure assumption used in [13] to further simplify the kinetic theory.
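To give a flavor of such deterministic solvers, the sketch below advances a one-dimensional Fokker-Planck equation of Ornstein-Uhlenbeck type with an explicit conservative finite-difference scheme and zero-flux boundaries; this is a toy model, not the conductance-based network equation of the paper:

```python
import numpy as np

def fp_step(p, v, dt, mu=0.0, sigma=0.2):
    """One explicit step of  d_t p = d_v[ (v - mu) p + sigma d_v p ]
    (an Ornstein-Uhlenbeck toy model) on a uniform grid, written in
    conservative flux form with zero-flux boundaries."""
    dv = v[1] - v[0]
    drift = (v - mu) * p
    # total flux at interior cell faces: centred drift + diffusion
    flux = 0.5 * (drift[1:] + drift[:-1]) + sigma * np.diff(p) / dv
    flux = np.concatenate(([0.0], flux, [0.0]))  # zero flux at the walls
    return p + dt * np.diff(flux) / dv
```

Writing the update in flux form makes the scheme conserve total probability mass exactly (up to round-off), and the density relaxes toward the stationary Gaussian with variance sigma.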

Relevance:

20.00%

Publisher:

Abstract:

Stable isotopes have been used as food-chain tracers to characterize the relationship between consumers and their food, since isotopic fractionation implies discrimination against certain isotopes. However, stable isotope analysis (SIA) can also be carried out on fish reared on artificial diets, such as the gilthead sea bream (Sparus aurata), the most widely farmed species in the Mediterranean. Changes in the natural abundance of stable isotopes (13C and 15N) in tissues and their reserves can reflect changes in nutrient use and turnover, since the catabolic enzymes involved in decarboxylation and deamination processes show a preference for the lighter isotopes. These analyses can therefore provide useful information on the nutritional and metabolic status of the fish. The aim of this project was to determine the capacity of stable isotopes to serve as potential markers of growth capacity and rearing conditions in gilthead sea bream. To this end, stable isotope analyses were combined with metabolic indicators (cytochrome c oxidase, COX, and citrate synthase, CS, activities) and growth parameters (RNA/DNA ratio). The overall results obtained in the different studies of this project show that SIA, in combination with other metabolic parameters, can serve as an effective tool to discriminate fish with better growth potential, as well as a sensitive marker of nutritional and fattening status. Furthermore, combining stable isotope analysis with emerging tools such as proteomic techniques (2D-PAGE) provides new insights into the metabolic changes that occur in fish muscle during the increase in muscle growth induced by exercise.

Relevance:

20.00%

Publisher:

Abstract:

A problem in the archaeometric classification of Catalan Renaissance pottery is the fact that the clay supply of the pottery workshops was centrally organized by guilds, and therefore usually all potters of a single production centre produced chemically similar ceramics. However, when analysing the glazes of the ware, a large number of inclusions is usually found in the glaze, which reveal technological differences between single workshops. These inclusions were used by the potters to opacify the transparent glaze and to achieve a white background for further decoration. In order to distinguish the different technological preparation procedures of the single workshops, the chemical composition of these inclusions, as well as their size in the two-dimensional cut, is recorded at a scanning electron microscope. Based on the latter, a frequency distribution of the apparent diameters is estimated for each sample and type of inclusion. Following an approach by S.D. Wicksell (1925), it is in principle possible to transform the distributions of the apparent 2-D diameters back to those of the true three-dimensional bodies. The applicability of this approach and its practical problems are examined using different ways of kernel density estimation and Monte Carlo tests of the methodology. Finally, it is tested how far the obtained frequency distributions can be used to classify the pottery.
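The corpuscle effect behind Wicksell's (1925) problem is easy to reproduce: even for spheres of identical radius R, the circles seen in a random planar cut have radii r = sqrt(R² − h²), with mean (π/4)R, so the 2-D sections systematically underestimate the true size. A minimal sketch for this monodisperse case (the paper's kernel-density inversion handles general size distributions):

```python
import math, random

def apparent_radii(true_radius, n, seed=0):
    """Apparent (2-D section) radii of equal spheres of radius R cut by a
    uniformly random plane: r = sqrt(R^2 - h^2) with h ~ Uniform(0, R),
    h being the distance from the sphere centre to the cutting plane."""
    rng = random.Random(seed)
    return [math.sqrt(true_radius**2 - (rng.random() * true_radius)**2)
            for _ in range(n)]
```

Inverting this relation for a mixture of sphere sizes is exactly the ill-posed step that the kernel-density estimates and Monte Carlo tests in the abstract are designed to probe.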

Relevance:

20.00%

Publisher:

Abstract:

The asymptotic speed problem of front solutions to hyperbolic reaction-diffusion (HRD) equations is studied in detail. We perform linear and variational analyses to obtain bounds on the speed. In contrast to previous work, here we derive upper bounds in addition to lower ones, in such a way that improved bounds are obtained. For some reaction functions it is possible to determine the speed without any uncertainty. This is also achieved for some systems of HRD (i.e., time-delayed Lotka-Volterra) equations that take into account the interaction among different species. An analytical study is performed for several systems of biological interest, and we find good agreement with the results of numerical simulations as well as with available observations for a recently discussed system.
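For orientation, a frequently studied HRD model is the time-delayed Fisher equation, for which the linear (marginal-stability) lower bound on the speed is classical; the form below is the standard one for f(u) = au(1 − u), and signs and prefactors should be checked against the paper's conventions:

```latex
\tau \frac{\partial^2 u}{\partial t^2} + \frac{\partial u}{\partial t}
  = D \frac{\partial^2 u}{\partial x^2} + f(u) + \tau \frac{\partial f(u)}{\partial t},
\qquad
v_{\min} = \frac{2\sqrt{aD}}{1 + a\tau}.
```

Note that \(v_{\min}\) reduces to the parabolic Fisher speed \(2\sqrt{aD}\) as \(\tau \to 0\) and remains bounded by the signal speed \(\sqrt{D/\tau}\) for all \(a\).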

Relevance:

20.00%

Publisher:

Abstract:

The space and time discretization inherent to all FDTD schemes introduce non-physical dispersion errors, i.e. deviations of the speed of sound from the theoretical value predicted by the governing Euler differential equations. A general methodology for computing this dispersion error via straightforward numerical simulations of the FDTD schemes is presented. The method is shown to provide remarkable accuracies of the order of 1/1000 in a wide variety of two-dimensional finite difference schemes.
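A complementary analytical check: for the simplest one-dimensional leapfrog scheme, the dispersion error can be read off the discrete dispersion relation in closed form (the paper instead measures it from simulations of general 2-D schemes; this 1-D relation is the standard textbook case):

```python
import math

def fdtd_phase_speed(c, dx, dt, k):
    """Numerical phase speed of the 1-D leapfrog (second-order FDTD)
    scheme, from its discrete dispersion relation
        sin(w dt / 2) = (c dt / dx) sin(k dx / 2)."""
    courant = c * dt / dx
    w = (2.0 / dt) * math.asin(courant * math.sin(k * dx / 2.0))
    return w / k
```

Away from the "magic" time step dt = dx/c the numerical wave travels slightly slower than c; exactly at it the 1-D scheme is dispersion-free.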

Relevance:

20.00%

Publisher:

Abstract:

In order to interpret a biplot, it is necessary to know which points (usually the variables) are the important contributors to the solution, and this information is available separately as part of the biplot's numerical results. We propose a new scaling of the display, called the contribution biplot, which incorporates this diagnostic directly into the graphical display, showing visually the important contributors and thus facilitating the interpretation of the biplot, often simplifying the graphical representation considerably. The contribution biplot can be applied to a wide variety of analyses, such as correspondence analysis, principal component analysis, log-ratio analysis and the graphical results of a discriminant analysis/MANOVA, in fact to any method based on the singular value decomposition. In the contribution biplot, one set of points, usually the rows of the data matrix, optimally represents the spatial positions of the cases or sample units, according to some distance measure that usually incorporates a form of standardization, unless all data are comparable in scale. The other set of points, usually the columns, is represented by vectors related to their contributions to the low-dimensional solution. A fringe benefit is that usually only one common scale is needed for the row and column points on the principal axes, thus avoiding the problem of enlarging or contracting the scale of one set of points to make the biplot legible. Furthermore, this version of the biplot also solves the problem in correspondence analysis of low-frequency categories that are located on the periphery of the map, giving the false impression that they are important when they in fact contribute minimally to the solution.
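A minimal numerical sketch of the idea for the PCA case, assuming column standardisation (the scalings for correspondence or log-ratio analysis differ): rows are displayed in principal coordinates, while columns are displayed as the right singular vectors, whose squared coordinates are precisely their contributions to each axis.

```python
import numpy as np

def contribution_biplot(X):
    """Contribution-biplot coordinates for a cases-by-variables matrix:
    rows in principal coordinates; columns as right singular vectors,
    whose squared entries are the variables' contributions per axis.
    Column standardisation here is an illustrative choice."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0)
    Z /= np.sqrt(X.shape[0])
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    return U * s, Vt.T  # (case coordinates, variable coordinates)
```

Because the right singular vectors are unit vectors, the squared variable coordinates on each axis sum to one, so long vectors immediately flag the important contributors.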

Relevance:

20.00%

Publisher:

Abstract:

This paper analyses the robustness of Least-Squares Monte Carlo, a technique recently proposed by Longstaff and Schwartz (2001) for pricing American options. This method is based on least-squares regressions in which the explanatory variables are certain polynomial functions. We analyze the impact of different basis functions on option prices. Numerical results for American put options provide evidence that (a) this approach is very robust to the choice of alternative polynomials and (b) few basis functions are required. However, these conclusions are not reached when analyzing more complex derivatives.
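A compact sketch of the Longstaff-Schwartz method for an American put, using a plain power-polynomial basis via `np.polyfit` (one of several bases one might compare; all parameter values are illustrative):

```python
import numpy as np

def lsm_american_put(S0, K, r, sigma, T, steps=50, paths=20000, degree=3, seed=0):
    """Least-Squares Monte Carlo (Longstaff-Schwartz 2001) price of an
    American put: simulate GBM paths, then step backwards, regressing
    discounted future cashflows on a polynomial in the asset price to
    estimate the continuation value on in-the-money paths."""
    rng = np.random.default_rng(seed)
    dt = T / steps
    disc = np.exp(-r * dt)
    z = rng.standard_normal((paths, steps))
    S = S0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt
                              + sigma * np.sqrt(dt) * z, axis=1))
    payoff = np.maximum(K - S[:, -1], 0.0)      # cashflow if held to expiry
    for t in range(steps - 2, -1, -1):
        payoff *= disc                          # discount cashflows to time t
        itm = K - S[:, t] > 0                   # regress only in the money
        if itm.sum() > degree + 1:
            coef = np.polyfit(S[itm, t], payoff[itm], degree)
            continuation = np.polyval(coef, S[itm, t])
            exercise = K - S[itm, t]
            ex_now = exercise > continuation
            idx = np.where(itm)[0][ex_now]
            payoff[idx] = exercise[ex_now]      # exercise replaces cashflow
    return disc * payoff.mean()
```

With the classic test case S0 = 36, K = 40, r = 0.06, sigma = 0.2, T = 1, the estimate lands near the commonly quoted finite-difference benchmark of about 4.48.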