950 results for Decomposition of Ranked Models


Relevance:

100.00%

Publisher:

Abstract:

This paper is part of a series of publications on the development of mathematical models intended to simulate the dynamic input and output of experimental nondestructive tests used to detect structural imperfections. The structures considered are composed of thin steel plates. The imperfections in these cases are cracks, which may either penetrate a significant part of the plate thickness or be micro-cracks or superficial imperfections. The first class of cracks concerns structural safety, while the second relates more to the protection of the structure from the environment, particularly where protective paint coatings can deteriorate. Two groups of mathematical models have been developed. The first group, the subject of the present paper, aims to locate the position and extent of imperfections of the first class, i.e. cracks; bending Kirchhoff thin-plate models belong to this group and are used for this purpose. The second group of models deals with membrane structures under excitation by superficial Rayleigh waves and is intended for the detection of micro-cracks. In applying the first group of models to crack detection, it has been observed that the differences between the natural frequencies of the uncracked and cracked structures are very small. However, the crack geometry and position can be identified quite accurately if the comparison is instead carried out between the first derivatives (mode rotations) of the natural modes. Finally, regarding the detection of superficial cracks, the use of Rayleigh waves is very promising: the geometry and penetration of the micro-crack can be detected very accurately. The mathematical and numerical treatment of the generation of these Rayleigh waves is presented, and a numerical application is shown.
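
As a toy illustration of the detection principle (not the paper's Kirchhoff plate model), the Python sketch below builds a hypothetical one-dimensional mode shape, adds a small slope discontinuity at an assumed crack position, and shows that the mode rotations localize the crack even though the displacements barely change; all names and parameters are invented.

```python
import numpy as np

# Hypothetical 1D illustration (not the paper's plate formulation).
L_beam, n = 1.0, 501
x = np.linspace(0.0, L_beam, n)
crack_at = 0.37 * L_beam          # assumed crack position

# First bending mode of the intact member (toy closed form).
mode_intact = np.sin(np.pi * x / L_beam)

# Toy "cracked" mode: the same shape plus a small slope discontinuity
# (local loss of bending stiffness) at the crack.
kink = 0.01 * np.where(x > crack_at, x - crack_at, 0.0)
mode_cracked = mode_intact + kink

# Displacements differ by < 1%, mimicking the tiny frequency shifts...
print("max displacement difference:", np.abs(mode_cracked - mode_intact).max())

# ...but the rotations (first derivatives of the modes) kink sharply there.
rot_diff = np.gradient(mode_cracked, x) - np.gradient(mode_intact, x)
print("estimated crack position:", x[np.argmax(np.abs(np.gradient(rot_diff, x)))])
```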

Relevance:

100.00%

Publisher:

Abstract:

We discuss linear Ricardo models over a range of parameters. We show that the exact boundary of the region of equilibria of these models can be obtained by solving a simple integer programming problem. We also show that there is an exact correspondence between many of the equilibria arising from families of linear models and the multiple equilibria of economies-of-scale models.
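
The abstract does not give the model's parameterization, so the sketch below sets up a hypothetical two-country, two-good Ricardian economy and finds the optimal specialization pattern by brute-force enumeration of the binary assignments, a stand-in for the simple integer program the authors describe; all numbers are invented.

```python
import itertools

# Hypothetical 2-country, 2-good Ricardian economy (invented numbers):
# a[c][g] = labor required per unit of good g in country c.
a = {"Home": {"cloth": 1.0, "wine": 2.0}, "Foreign": {"cloth": 6.0, "wine": 3.0}}
labor = {"Home": 100.0, "Foreign": 120.0}
price = {"cloth": 1.0, "wine": 1.5}   # fixed world prices for the toy

# Integer-programming stand-in: enumerate the binary "who makes what"
# assignments, with each country fully specializing in one good.
best_value, best_plan = -1.0, None
for plan in itertools.product(["cloth", "wine"], repeat=2):
    assignment = dict(zip(a, plan))
    value = sum(price[g] * labor[c] / a[c][g] for c, g in assignment.items())
    if value > best_value:
        best_value, best_plan = value, assignment

# Comparative advantage emerges: Home -> cloth, Foreign -> wine.
print(best_plan, best_value)
```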

Relevance:

100.00%

Publisher:

Abstract:

Estimation of evolutionary distances has always been a major issue in the study of molecular evolution because evolutionary distances are required for estimating the rate of evolution in a gene, the divergence dates between genes or organisms, and the relationships among genes or organisms. Other closely related issues are the estimation of the pattern of nucleotide substitution, the estimation of the degree of rate variation among sites in a DNA sequence, and statistical testing of the molecular clock hypothesis. Mathematical treatments of these problems are considerably simplified by the assumption of a stationary process in which the nucleotide compositions of the sequences under study have remained approximately constant over time, and there now exist fairly extensive studies of stationary models of nucleotide substitution, although some problems remain to be solved. Nonstationary models are much more complex, but significant progress has recently been made through the development of the paralinear and LogDet distances. This paper reviews recent studies on the above issues and reports results on correcting the estimation bias of evolutionary distances, the estimation of the pattern of nucleotide substitution, and the estimation of rate variation among the sites in a sequence.
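
As a concrete illustration of the nonstationary-model machinery mentioned above, the following sketch computes a Lake-style paralinear distance from the 4x4 joint frequency matrix of two aligned sequences; the formula is the standard one, while the example sequences are invented.

```python
import numpy as np

def paralinear_distance(seq_x: str, seq_y: str) -> float:
    """Paralinear distance between two aligned DNA sequences.

    J is the 4x4 joint (divergence) matrix of observed site patterns;
    Dx and Dy hold the marginal base frequencies. The estimator
        d = -(1/4) * [ln det J - (ln det Dx + ln det Dy) / 2]
    remains consistent even when base composition drifts over time.
    """
    idx = {b: i for i, b in enumerate("ACGT")}
    J = np.zeros((4, 4))
    for bx, by in zip(seq_x, seq_y):
        J[idx[bx], idx[by]] += 1.0
    J /= J.sum()
    Dx, Dy = np.diag(J.sum(axis=1)), np.diag(J.sum(axis=0))
    return -0.25 * (np.log(np.linalg.det(J))
                    - 0.5 * (np.log(np.linalg.det(Dx)) + np.log(np.linalg.det(Dy))))

# Invented toy alignment, just to exercise the estimator.
print(paralinear_distance("ACGTACGTACGGTTAC", "ACGTACCTACGGATAC"))
```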

Relevance:

100.00%

Publisher:

Abstract:

We summarize recent evidence that models of earthquake faults with dynamically unstable friction laws but no externally imposed heterogeneities can exhibit slip complexity. Two models are described here. The first is a one-dimensional model with velocity-weakening stick-slip friction; the second is a two-dimensional elastodynamic model with slip-weakening friction. Both exhibit small-event complexity and chaotic sequences of large characteristic events. The large events in both models are composed of Heaton pulses. We argue that the key ingredients of these models are reasonably accurate representations of the properties of real faults.
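
Below is a minimal sketch of a one-dimensional slider-block chain with a regularized velocity-weakening friction law, in the spirit of the first model described above (a Burridge-Knopoff-type setup); the parameters, the periodic coupling, and the tanh regularization of sticking are illustrative choices, not the authors' formulation.

```python
import numpy as np
from scipy.integrate import solve_ivp

N = 32                        # number of blocks
kc, kp = 1.0, 0.1             # neighbor-coupling and loader-plate springs
v_plate = 0.01                # slow plate-driving velocity
F0, vc, eps = 1.0, 0.5, 0.05  # friction: strength, weakening scale, stick width

def friction(v):
    # Regularized velocity-weakening law: resistance drops as slip speeds up.
    return F0 * np.tanh(v / eps) / (1.0 + np.abs(v) / vc)

def rhs(t, y):
    x, v = y[:N], y[N:]
    lap = np.roll(x, 1) - 2 * x + np.roll(x, -1)       # periodic chain coupling
    a = kc * lap + kp * (v_plate * t - x) - friction(v)
    return np.concatenate([v, a])

# Heterogeneity enters only through tiny random initial displacements;
# the slip complexity develops dynamically, as in the models summarized.
rng = np.random.default_rng(0)
y0 = np.concatenate([rng.normal(0.0, 0.01, N), np.zeros(N)])
sol = solve_ivp(rhs, (0.0, 500.0), y0, max_step=0.25)

slip = sol.y[:N, -1] - sol.y[:N, 0]
print("mean slip:", slip.mean(), "vs plate advance:", v_plate * 500.0)
```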

Relevance:

100.00%

Publisher:

Abstract:

The visual responses of neurons in the cerebral cortex were first adequately characterized in the 1960s by D. H. Hubel and T. N. Wiesel [(1962) J. Physiol. (London) 160, 106-154; (1968) J. Physiol. (London) 195, 215-243] using qualitative analyses based on simple geometric visual targets. Over the past 30 years, it has become common to characterize the properties of these neurons by attempting to make formal descriptions of the transformations they execute on the visual image. Most such models have their roots in linear-systems approaches pioneered in the retina by C. Enroth-Cugell and J. R. Robson [(1966) J. Physiol. (London) 187, 517-552], but it is clear that purely linear models of cortical neurons are inadequate. We present two related models: one designed to account for the responses of simple cells in primary visual cortex (V1) and one designed to account for the responses of pattern direction selective cells in MT (or V5), an extrastriate visual area thought to be involved in the analysis of visual motion. These models share a common structure that operates in the same way on different kinds of input, and they instantiate the widely held view that computational strategies are similar throughout the cerebral cortex. Implementations of these models for Macintosh microcomputers are available and can be used to explore the models' properties.
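
The abstract does not spell out the stages of the shared structure; a common reading of such cascade models is a linear receptive field followed by half-squaring and divisive normalization, sketched below with an invented filter and stimulus (the MT model would stack a second, direction-selective stage on such V1 outputs). The stage order here is an assumption, not a transcription of the paper's equations.

```python
import numpy as np

def simple_cell_response(image, rf, pool_responses, sigma=0.1):
    """Assumed cascade: linear filtering -> half-squaring -> normalization."""
    linear = np.sum(rf * image)                  # linear receptive field stage
    halfsq = max(linear, 0.0) ** 2               # half-wave rectify and square
    return halfsq / (sigma**2 + np.sum(pool_responses**2))  # divisive pool

# Toy oriented receptive field (Gabor-like) and a matched grating stimulus.
x = np.linspace(-1.0, 1.0, 32)
X, Y = np.meshgrid(x, x)
rf = np.exp(-(X**2 + Y**2) / 0.2) * np.cos(8.0 * X)

pool = np.random.default_rng(1).uniform(0.0, 0.5, size=10)  # stand-in pool
print("preferred grating: ", simple_cell_response(np.cos(8.0 * X), rf, pool))
print("orthogonal grating:", simple_cell_response(np.cos(8.0 * Y), rf, pool))
```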

Relevance:

100.00%

Publisher:

Abstract:

The electronic structure and spectrum of several models of the binuclear metal site in soluble CuA domains of cytochrome-c oxidase have been calculated by the use of an extended version of the complete neglect of differential overlap/spectroscopic method. The experimental spectra have two strong transitions of nearly equal intensity around 500 nm and a near-IR transition close to 800 nm. The model that best reproduces these features consists of a dimer of two blue (type 1) copper centers, in which each Cu atom replaces the missing imidazole on the other Cu atom. Thus, both Cu atoms have one cysteine sulfur atom and one imidazole nitrogen atom as ligands, and there are no bridging ligands but a direct Cu-Cu bond. According to the calculations, the two strong bands in the visible region originate from exciton coupling of the dipoles of the two copper monomers, and the near-IR band is a charge-transfer transition between the two Cu atoms. The known amino acid sequence has been used to construct a molecular model of the CuA site by the use of a template and energy minimization. In this model, the two ligand cysteine residues are in one turn of an alpha-helix, whereas one ligand histidine is in a loop following this helix and the other one is in a beta-strand.

Relevance:

100.00%

Publisher:

Abstract:

In recent years, VAR models have become the main econometric tool for testing whether a relationship between variables can exist and for evaluating the effects of economic policies. This thesis studies three different identification approaches starting from reduced-form VAR models (including the choice of sampling period, set of endogenous variables, and deterministic terms). In the VAR case we use the Granger causality test to assess the ability of one variable to predict another; in the case of cointegration we use VECM models to estimate the long-run and short-run coefficients jointly; and in the case of small data sets and overfitting problems we use Bayesian VAR models with impulse response functions and variance decomposition to analyze the effect of shocks on macroeconomic variables. To this end, the empirical studies are carried out using specific time-series data and formulating different hypotheses. Three VAR models were used: first, to study monetary policy decisions and discriminate among the various post-Keynesian theories of monetary policy, in particular the so-called "solvency rule" (Brancaccio and Fontana 2013, 2015) and the nominal GDP rule in the Euro Area (paper 1); second, to extend the evidence on the money-endogeneity hypothesis by evaluating the effects of bank securitization on the monetary policy transmission mechanism in the United States (paper 2); and third, to evaluate the effects of aging on health expenditure in Italy in terms of its economic policy implications (paper 3). The thesis is introduced in Chapter 1, which outlines the context, motivation, and purpose of this research, while its structure and summary, as well as the main results, are described in the remaining chapters. Chapter 2 examines, using a first-difference VAR model with quarterly Euro-area data, whether monetary policy decisions can be interpreted in terms of a "monetary policy rule", with specific reference to the so-called "nominal GDP targeting rule" (McCallum 1988; Hall and Mankiw 1994; Woodford 2012). The results show a causal relation running from the gap between the growth rates of nominal GDP and target GDP to changes in three-month market interest rates. The same analysis does not appear to confirm the existence of a significant inverse causal relation running from changes in the market interest rate to the gap between the growth rates of nominal GDP and target GDP. Similar results were obtained when the market interest rate was replaced with the ECB refinancing rate. This confirmation of only one of the two directions of causality does not support an interpretation of monetary policy based on the nominal GDP targeting rule, and it raises more general doubts about the applicability of the Taylor rule and of all conventional monetary policy rules to the case in question. The results instead appear more in line with other possible approaches, such as those based on certain post-Keynesian and Marxist analyses of monetary theory, and more specifically the so-called "solvency rule" (Brancaccio and Fontana 2013, 2015).
These lines of research dispute the simplistic thesis that the scope of monetary policy is the stabilization of inflation, real GDP, or nominal income around a "natural" equilibrium level. Rather, they suggest that central banks actually pursue a more complex aim: the regulation of the financial system, with particular reference to the relations between creditors and debtors and the relative solvency of economic units. Chapter 3 analyzes the supply of loans, considering the endogeneity of money arising from banks' securitization activity over the period 1999-2012. Although much of the literature investigates the endogeneity of the money supply, this approach has rarely been adopted to investigate the endogeneity of money in the short and long run with a study of the United States during its two main crises: the bursting of the dot-com bubble (1998-1999) and the sub-prime mortgage crisis (2008-2009). In particular, we consider the effects of financial innovation on the lending channel using the loan series adjusted for securitization, in order to test whether the American banking system is stimulated to seek cheaper sources of funding, such as securitization, under restrictive monetary policy (Altunbas et al., 2009). The analysis is based on the monetary aggregates M1 and M2. Using VECM models, we examine a long-run relationship between the variables in levels and evaluate the effects of the money supply by analyzing how strongly monetary policy affects short-run deviations from the long-run relationship. The results show that securitization influences the impact of loans on M1 and M2. This implies that the money supply is endogenous, confirming the structuralist approach and showing that economic agents are motivated to increase securitization as a pre-emptive hedge against monetary policy shocks. Chapter 4 investigates the relationship between per capita health expenditure, per capita GDP, the aging index, and life expectancy in Italy over the period 1990-2013, using Bayesian VAR models and annual data extracted from the OECD and Eurostat databases. The impulse response functions and the variance decomposition show positive relationships: from per capita GDP to per capita health expenditure, from life expectancy to health expenditure, and from the aging index to per capita health expenditure. The impact of aging on health expenditure is more significant than that of the other variables. Overall, our results suggest that disabilities closely connected with aging may be the main driver of health expenditure in the short-to-medium run. Good health care management helps to improve patient well-being without increasing total health expenditure. However, policies that improve the health status of older people may be needed to lower per capita demand for health and social services.
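
A minimal sketch of the Granger-causality step used in the first study, written with statsmodels; the synthetic series below stand in for the actual Euro-area data, and are constructed so that y2 Granger-causes y1 but not vice versa, mirroring the one-directional result described above.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

# Synthetic stand-ins for the thesis's quarterly series.
rng = np.random.default_rng(42)
n = 200
y1, y2 = np.zeros(n), np.zeros(n)
for t in range(1, n):
    y2[t] = 0.5 * y2[t - 1] + rng.normal()
    y1[t] = 0.3 * y1[t - 1] + 0.6 * y2[t - 1] + rng.normal()

model = VAR(pd.DataFrame({"y1": y1, "y2": y2})).fit(maxlags=4, ic="aic")

# One direction should reject the null of no causality, the other should not,
# just as the gap variable predicts the interest rate but not conversely.
print(model.test_causality("y1", ["y2"], kind="f").summary())
print(model.test_causality("y2", ["y1"], kind="f").summary())
```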

Relevance:

100.00%

Publisher:

Abstract:

Paper submitted to the 7th International Symposium on Feedstock Recycling of Polymeric Materials (7th ISFR 2013), New Delhi, India, 23-26 October 2013.

Relevance:

100.00%

Publisher:

Abstract:

A systematic investigation of the thermal decomposition of viscoelastic memory foam (VMF) was performed using thermogravimetric analysis (TGA) to obtain the kinetic parameters, together with TGA coupled to Fourier transform infrared spectrometry (TGA-FTIR) and TGA coupled to mass spectrometry (TGA-MS) to obtain detailed information on the products evolved during pyrolysis and oxidative degradation. Two consecutive nth-order reactions were employed to correlate the experimental data from dynamic and isothermal runs performed at three different heating rates (5, 10 and 20 K/min) under an inert atmosphere. For the kinetic study of the oxidative decomposition, the data from combustion (synthetic air) and oxygen-lean combustion (N2:O2 = 9:1) runs, at three heating rates and under both dynamic and isothermal conditions, were correlated simultaneously; a kinetic model consisting of three consecutive reactions gave a very good correlation for all runs. TGA-FTIR analysis showed that the main gases released during the pyrolysis of VMF were ethers and aliphatic hydrocarbons, whereas in combustion, in addition to these gases, aldehydes, amines, and CO2 were also detected among the main products. These results were confirmed by TGA-MS.
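
A sketch of the kinetic scheme named above: two consecutive nth-order Arrhenius reactions integrated under a linear heating ramp to produce a synthetic TGA mass-loss curve. The rate parameters, starting temperature, and volatile fractions are illustrative stand-ins, not the values fitted in the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

R = 8.314            # gas constant, J/(mol K)
beta = 10.0 / 60.0   # heating rate: 10 K/min, expressed in K/s

# Illustrative Arrhenius parameters (not the paper's fitted values):
A1, E1, n1 = 1e12, 150e3, 1.5    # first decomposition step
A2, E2, n2 = 1e10, 160e3, 1.2    # consecutive second step

def rates(t, y):
    a1, a2 = y                   # conversions of steps 1 and 2
    T = 400.0 + beta * t         # linear temperature ramp from 400 K
    k1 = A1 * np.exp(-E1 / (R * T))
    k2 = A2 * np.exp(-E2 / (R * T))
    da1 = k1 * max(1.0 - a1, 0.0) ** n1    # nth-order step 1
    da2 = k2 * max(a1 - a2, 0.0) ** n2     # step 2 consumes step 1 product
    return [da1, da2]

sol = solve_ivp(rates, (0.0, 3000.0), [0.0, 0.0], method="LSODA", max_step=5.0)
mass = 1.0 - 0.60 * sol.y[0] - 0.35 * sol.y[1]   # assumed volatile fractions
T = 400.0 + beta * sol.t
print("residual mass fraction at %.0f K: %.3f" % (T[-1], mass[-1]))
```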

Relevance:

100.00%

Publisher:

Abstract:

To effectively assess and mitigate the risk of permafrost disturbance, disturbance-prone areas can be predicted through the application of susceptibility models. In this study we developed regional susceptibility models for permafrost disturbances using a field disturbance inventory to test the transferability of the model to a broader region in the Canadian High Arctic. The resulting susceptibility maps were then used to explore the effect of terrain variables on the occurrence of disturbances within this region. To account for a large range of landscape characteristics, the model was calibrated using two locations: Sabine Peninsula, Melville Island, NU, and Fosheim Peninsula, Ellesmere Island, NU. Spatial patterns of disturbance were predicted with a generalized linear model (GLM) and a generalized additive model (GAM), each calibrated using disturbed and randomized undisturbed locations from both sites and GIS-derived terrain predictor variables including slope, potential incoming solar radiation, wetness index, topographic position index, elevation, and distance to water. Each model was validated for the Sabine and Fosheim Peninsulas using independent data sets, while the transferability of the model to an independent site was assessed at Cape Bounty, Melville Island, NU. The regional GLM and GAM validated well for both calibration sites (Sabine and Fosheim), with areas under the receiver operating curve (AUROC) > 0.79. Both models were applied directly to Cape Bounty without calibration and validated equally, with AUROCs of 0.76; however, each model predicted disturbed and undisturbed samples differently. Additionally, the sensitivity of the transferred model was assessed using data sets with different sample sizes. Results indicated that models based on larger sample sizes transferred more consistently and captured the variability within the terrain attributes in the respective study areas. Terrain attributes associated with the initiation of disturbances were similar regardless of location. Disturbances commonly occurred on slopes between 4 and 15°, below the Holocene marine limit, and in areas with low potential incoming solar radiation.
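
A compact sketch of the GLM calibration-and-transfer workflow, with synthetic stand-ins for the six terrain predictors; the effect sizes, sample counts, and split are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Synthetic stand-ins for the predictors (slope, solar radiation, wetness
# index, topographic position index, elevation, distance to water).
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 6))
logit = 1.2 * X[:, 0] - 0.8 * X[:, 1] + 0.5 * X[:, 2]   # invented effects
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))       # disturbed / not

# Calibrate a GLM (logistic regression) on the "calibration sites"...
train, test = slice(0, 700), slice(700, None)
glm = LogisticRegression(max_iter=1000).fit(X[train], y[train])

# ...then assess transferability on held-out samples via AUROC,
# mirroring the validation at the independent Cape Bounty site.
auc = roc_auc_score(y[test], glm.predict_proba(X[test])[:, 1])
print(f"AUROC on held-out data: {auc:.2f}")
```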

Relevance:

100.00%

Publisher:

Abstract:

The FANOVA (or “Sobol’-Hoeffding”) decomposition of multivariate functions has been used for high-dimensional model representation and global sensitivity analysis. When the objective function f has no simple analytic form and is costly to evaluate, computing FANOVA terms may be unaffordable due to numerical integration costs. Several approximate approaches relying on Gaussian random field (GRF) models have been proposed to alleviate these costs, where f is substituted by a (kriging) predictor or by conditional simulations. Here we focus on FANOVA decompositions of GRF sample paths, and we notably introduce an associated kernel decomposition into 4^d terms called KANOVA. An interpretation in terms of tensor product projections is obtained, and it is shown that projected kernels control both the sparsity of GRF sample paths and the dependence structure between FANOVA effects. Applications on simulated data show the relevance of the approach for designing new classes of covariance kernels dedicated to high-dimensional kriging.
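
For orientation, the sketch below carries out a plain FANOVA (Sobol’-Hoeffding) split of a toy two-variable function by quadrature on a grid; the paper’s contribution concerns the analogous decomposition of GRF kernels, which is not reproduced here.

```python
import numpy as np

# Quadrature grid on [0,1]^2 with uniform inputs.
n = 400
x = (np.arange(n) + 0.5) / n
X1, X2 = np.meshgrid(x, x, indexing="ij")
F = np.sin(2 * np.pi * X1) + 0.5 * X2**2 + 0.3 * X1 * X2   # toy function

# Sobol'-Hoeffding terms: f = f0 + f1(x1) + f2(x2) + f12(x1, x2).
f0 = F.mean()
f1 = F.mean(axis=1) - f0                     # main effect of x1
f2 = F.mean(axis=0) - f0                     # main effect of x2
f12 = F - f0 - f1[:, None] - f2[None, :]     # interaction remainder

# The terms are L2-orthogonal, so their normalized variances (Sobol
# indices) sum to ~1.
D = np.mean((F - f0) ** 2)
for name, term in [("S1", f1), ("S2", f2), ("S12", f12)]:
    print(name, "=", round(np.mean(term**2) / D, 3))
```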

Relevance:

100.00%

Publisher:

Abstract:

We study the distribution of energy level spacings in two models describing coupled single-mode Bose-Einstein condensates. Both models have a fixed number of degrees of freedom, which is small compared to the number of interaction parameters, and is independent of the dimensionality of the Hilbert space. We find that the distribution follows a universal Poisson form independent of the choice of coupling parameters, which is indicative of the integrability of both models. These results complement those for integrable lattice models where the number of degrees of freedom increases with increasing dimensionality of the Hilbert space. Finally, we also show that for one model the inclusion of an additional interaction which breaks the integrability leads to a non-Poisson distribution.
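The diagnostic described above can be reproduced on synthetic spectra: independent levels give the Poisson spacing law P(s) = e^(-s), while a non-integrable (random-matrix) comparison shows level repulsion. A rough sketch, with the crude mean-spacing unfolding standing in for the more careful procedure such studies use:

```python
import numpy as np

rng = np.random.default_rng(7)
N = 1000

# Integrable-like case: independent levels, expected spacing law P(s) = e^-s.
poisson_levels = np.sort(rng.uniform(0.0, N, N))

# Broken-integrability comparison: a GOE random matrix (level repulsion).
H = rng.normal(size=(N, N))
goe_levels = np.linalg.eigvalsh((H + H.T) / 2.0)
goe_levels = goe_levels[N // 4: 3 * N // 4]    # keep the flatter bulk

for name, levels in [("Poisson", poisson_levels), ("GOE bulk", goe_levels)]:
    s = np.diff(levels)
    s /= s.mean()                              # crude unfolding: unit mean spacing
    # Poisson gives P(s < 0.1) ~ 1 - e^-0.1 ~ 0.095; GOE suppresses small gaps.
    print(name, "fraction of spacings below 0.1:", round(np.mean(s < 0.1), 3))
```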

Relevance:

100.00%

Publisher:

Abstract:

A parallel computing environment to support the optimization of large-scale engineering systems is designed and implemented on Windows-based personal computer networks, using the master-worker model and the Parallel Virtual Machine (PVM). It decomposes a large engineering system into a number of smaller subsystems that are optimized in parallel on worker nodes, and coordinates the subsystem optimization results on the master node. The environment consists of six functional modules: the master control, the optimization model generator, the optimizer, the data manager, the monitor, and the post processor. An object-oriented design of these modules is presented. The environment supports all steps from the generation of optimization models to their solution and visualization on networks of computers. User-friendly graphical interfaces make it easy to define the problem and to monitor and steer the optimization process. The environment has been verified with an example involving the optimization of a large space truss. (C) 2004 Elsevier Ltd. All rights reserved.
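
A schematic of the master-worker pattern the environment implements, written here with Python's multiprocessing rather than the PVM C API used in the paper; the subsystem optimizer is a placeholder, and the task names are invented.

```python
import multiprocessing as mp

def optimize_subsystem(task):
    """Worker-node job: placeholder subsystem optimization.

    Stands in for one decomposed piece of the large system; here we just
    minimize a quadratic a*(x - b)^2, whose optimum is x = b with value 0.
    """
    name, (a, b) = task
    return name, b, 0.0

if __name__ == "__main__":
    # Master node: decompose the system into subsystem tasks...
    subsystems = [(f"truss_panel_{i}", (1.0 + i, 0.5 * i)) for i in range(8)]
    # ...farm them out to worker processes (PVM worker nodes in the paper)...
    with mp.Pool(processes=4) as pool:
        results = pool.map(optimize_subsystem, subsystems)
    # ...and coordinate the subsystem optima into a system-level result.
    print(results)
    print("coordinated objective:", sum(val for _, _, val in results))
```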

Relevance:

100.00%

Publisher:

Abstract:

In this paper, we evaluate the performance of the 1- and 5-site models of methane on the description of adsorption on graphite surfaces and in graphitic slit pores. These models have been known to perform well in the description of the fluid-phase behavior and vapor-liquid equilibria. Their performance in adsorption is evaluated in this work for nonporous graphitized thermal carbon black, and simulation results are compared with the experimental data of Avgul and Kiselev (Chemistry and Physics of Carbon; Dekker: New York, 1970; Vol. 6, p 1). On this nonporous surface, it is found that these models perform as well on isotherms at various temperatures as they do on the experimental isosteric heat for adsorption on a graphite surface. They are then tested for their performance in predicting the adsorption isotherms in graphitic slit pores, in which we would like to explore the effect of confinement on the molecule packing. Pore widths of 10 and 20 angstrom are chosen in this investigation, and we also study the effects of temperature by choosing 90.7, 113, and 273 K. The first two are for subcritical conditions, with 90.7 K being the triple point of methane and 113 K being its boiling point. The last temperature is chosen to represent the supercritical condition so that we can investigate the performance of these models at extremely high pressures. We have found that for the case of slit pores investigated in this paper, although the two models yield comparable pore densities (provided the accessible pore width is used in the calculation of pore density), the number of particles predicted by the 1-site model is always greater than that predicted by the 5-site model, regardless of whether temperature is subcritical or supercritical. This is due to the packing effect in the confined space such that a methane molecule modeled as a spherical particle in the 1-site model would pack better than the fused five-sphere model in the case of the 5-site model. Because the 5-site model better describes the liquid- and solid-phase behavior, we would argue that the packing density in small pores is better described with a more detailed 5-site model, and care should be exercised when using the 1-site model to study adsorption in small pores.
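
A sketch of the external-potential setup typically used for such slit-pore studies: the Steele 10-4-3 wall potential with a one-site Lennard-Jones methane and Lorentz-Berthelot mixing. The parameter values below are common literature choices (TraPPE-like methane, standard graphite constants), not necessarily those used in the paper.

```python
import numpy as np

eps_ff, sig_ff = 148.0, 3.73   # 1-site methane: eps/k_B [K], sigma [A]
eps_ss, sig_ss = 28.0, 3.40    # graphite carbon
rho_s, delta = 0.114, 3.35     # carbon density [A^-3], graphene spacing [A]

# Lorentz-Berthelot mixing for the solid-fluid pair.
eps_sf = np.sqrt(eps_ff * eps_ss)
sig_sf = 0.5 * (sig_ff + sig_ss)

def steele_1043(z):
    """Steele 10-4-3 potential of one graphite wall at distance z [A], in K."""
    pref = 2.0 * np.pi * rho_s * eps_sf * sig_sf**2 * delta
    return pref * (0.4 * (sig_sf / z) ** 10 - (sig_sf / z) ** 4
                   - sig_sf**4 / (3.0 * delta * (z + 0.61 * delta) ** 3))

H = 10.0                                   # physical pore width [A]
z = np.linspace(2.5, H - 2.5, 200)
U = steele_1043(z) + steele_1043(H - z)    # contributions from both walls
print("well depth in a 10 A pore: %.0f K" % U.min())
```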

Relevance:

100.00%

Publisher:

Abstract:

The skyrmions in the SU(N) quantum Hall (QH) system are discussed. By analyzing the gauge field structure and the topological properties of this QH system, it is pointed out that the SU(N) QH system can support (N-1) types of skyrmion structures, instead of only one type of skyrmion. In this paper, by means of Abelian projections along the (N-1) Cartan subalgebra local bases, we obtain (N-1) U(1) electromagnetic field tensors in the SU(N) gauge field of the QH system, and then derive (N-1) types of skyrmion structures from these U(1) sub-field tensors. Furthermore, in light of the phi-mapping topological current method, the topological charges and the motion of these skyrmions are also discussed.
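
For orientation, the familiar N = 2 special case, where the construction reduces to a single U(1) projection and one topological charge (a standard quantum Hall skyrmion formula, not the paper's general SU(N) expression):

```latex
% N = 2 special case: one U(1) projection and one topological charge,
% the Pontryagin index of the unit spin field n : R^2 -> S^2.
\[
  Q \;=\; \frac{1}{8\pi}\int d^2x\,\varepsilon^{ij}\,
  \mathbf{n}\cdot\bigl(\partial_i\mathbf{n}\times\partial_j\mathbf{n}\bigr).
\]
% The paper's SU(N) construction repeats this once per Cartan direction,
% yielding the (N-1) charges discussed in the abstract.
```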