959 results for Finite Volume Methods
Abstract:
We propose an alternate parameterization of stationary regular finite-state Markov chains, and a decomposition of the parameter into time reversible and time irreversible parts. We demonstrate some useful properties of the decomposition, and propose an index for a certain type of time irreversibility. Two empirical examples illustrate the use of the proposed parameter, decomposition and index. One involves observed states; the other, latent states.
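The abstract does not spell out the parameterization, but one standard way to make such a reversible/irreversible split concrete is through the stationary flow matrix. The following is a minimal sketch, assuming the decomposition splits the flow matrix pi[i]*P[i,j] into its symmetric and antisymmetric parts; the paper's own parameterization and index may differ.

```python
import numpy as np

# Transition matrix of a regular three-state chain (rows sum to 1).
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.5, 0.3],
              [0.3, 0.2, 0.5]])

# Stationary distribution: normalized left Perron eigenvector of P.
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmax(np.real(w))])
pi = pi / pi.sum()

# Stationary flow matrix F[i, j] = pi[i] * P[i, j].
F = pi[:, None] * P

# Time-reversible (symmetric) and time-irreversible (antisymmetric)
# parts; the chain satisfies detailed balance exactly when A vanishes.
S = 0.5 * (F + F.T)
A = 0.5 * (F - F.T)

# A crude irreversibility index: relative size of the antisymmetric part.
print(pi, np.linalg.norm(A) / np.linalg.norm(F))
```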
Abstract:
The technique of Monte Carlo (MC) tests [Dwass (1957), Barnard (1963)] provides an attractive method of building exact tests from statistics whose finite-sample distribution is intractable but can be simulated (provided it does not involve nuisance parameters). We extend this method in two ways: first, by allowing for MC tests based on exchangeable, possibly discrete test statistics; second, by generalizing the method to statistics whose null distributions involve nuisance parameters (maximized MC tests, MMC). Simplified, asymptotically justified versions of the MMC method are also proposed, and it is shown that they provide a simple way of improving standard asymptotics and dealing with nonstandard asymptotics (e.g., unit root asymptotics). Parametric bootstrap tests may be interpreted as a simplified version of the MMC method (without the general validity properties of the latter).
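For orientation, the basic MC test underlying the paper can be stated in a few lines: simulate the statistic under the null and rank the observed value among the simulated ones. Below is a minimal sketch with a placeholder statistic; it covers only the continuous, nuisance-parameter-free case (the paper's exchangeable/discrete and MMC extensions are not reproduced here).

```python
import numpy as np

rng = np.random.default_rng(42)

def tstat(z):
    # Placeholder statistic: absolute t-ratio for H0: mean = 0.
    return abs(z.mean()) / (z.std(ddof=1) / np.sqrt(len(z)))

def mc_pvalue(stat_obs, simulate_stat, n_rep=99):
    """Monte Carlo p-value in the sense of Dwass (1957) / Barnard (1963).

    With n_rep = 99, rejecting when the p-value is <= 0.05 gives an
    exactly 5%-level test, because (n_rep + 1) * 0.05 is an integer.
    """
    sims = np.array([simulate_stat() for _ in range(n_rep)])
    return (np.sum(sims >= stat_obs) + 1) / (n_rep + 1)

x = rng.normal(loc=0.3, size=30)               # observed sample
p = mc_pvalue(tstat(x), lambda: tstat(rng.normal(size=30)))
print(p)
```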
Abstract:
Statistical tests in vector autoregressive (VAR) models are typically based on large-sample approximations, involving the use of asymptotic distributions or bootstrap techniques. After documenting that such methods can be very misleading even with fairly large samples, especially when the number of lags or the number of equations is not small, we propose a general simulation-based technique that allows one to control completely the level of tests in parametric VAR models. In particular, we show that maximized Monte Carlo tests [Dufour (2002)] can provide provably exact tests for such models, whether they are stationary or integrated. Applications to order selection and causality testing are considered as special cases. The technique developed is applied to quarterly and monthly VAR models of the U.S. economy, comprising income, money, interest rates and prices, over the period 1965-1996.
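A maximized MC test replaces the single simulated p-value with its supremum over the nuisance-parameter space. The sketch below maximizes over a crude grid with a toy AR(1) simulator and placeholder statistic; the paper itself works with full parametric VAR models and uses global optimization rather than a grid.

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_pvalue(stat_obs, simulate_stat, theta, n_rep=99):
    # Ordinary MC p-value when the null DGP is fully specified by theta.
    sims = np.array([simulate_stat(theta) for _ in range(n_rep)])
    return (np.sum(sims >= stat_obs) + 1) / (n_rep + 1)

def mmc_pvalue(stat_obs, simulate_stat, theta_grid, n_rep=99):
    # Maximized MC p-value: supremum of the p-value function over the
    # nuisance-parameter set. Rejecting when this supremum is <= alpha
    # gives a provably level-correct test.
    return max(mc_pvalue(stat_obs, simulate_stat, th, n_rep)
               for th in theta_grid)

# Toy example: AR(1) coefficient rho is a nuisance parameter under H0.
def simulate_stat(rho, n=100):
    e = rng.normal(size=n)
    y = np.zeros(n)
    for t in range(1, n):
        y[t] = rho * y[t - 1] + e[t]
    return abs(y.mean())           # placeholder statistic

p_sup = mmc_pvalue(0.25, simulate_stat, np.linspace(-0.9, 0.9, 7))
print(p_sup)
```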
Abstract:
The full version of this master's thesis [or doctoral dissertation] is available for individual consultation only at the Music Library of the Université de Montréal (www.bib.umontreal.ca/MU).
Abstract:
Preparation of simple and mixed ferrospinels of nickel, cobalt and copper and their sulphated analogues by the room-temperature coprecipitation method yielded fine particles with high surface areas. Study of the vapour-phase decomposition of cyclohexanol at 300 °C over all the ferrospinel systems showed very good conversions, yielding cyclohexene by dehydration and/or cyclohexanone by dehydrogenation as the major products. Sulphation markedly enhanced the dehydration activity of all the samples. A good correlation was obtained between the dehydration activities of the simple ferrites and their weak plus medium strength acidities (usually of the Brønsted type) determined independently by the n-butylamine adsorption and ammonia-TPD methods. Mixed ferrites containing copper showed a general decrease in acidity and a drastic decrease in dehydration activity. There was no general correlation between the basicity parameters obtained by electron donor studies and the ratio of dehydrogenation to dehydration activities. There was a sharp increase in the dehydrogenation activities of all the ferrospinel samples containing copper. Along with the basic properties, the redox properties of the copper ion have been invoked to account for this added activity.
Abstract:
Magnetism and magnetic materials have been an ever-attractive subject area for engineers and scientists alike because of their versatility in finding applications in useful devices. They find applications in a host of devices ranging from rudimentary ones like loudspeakers to sophisticated gadgets like waveguides and Magnetic Random Access Memories (MRAM). The one material in the realm of magnetism that has been at the centre stage of applications is ferrite, and among ferrites, spinel ferrites have received the lion's share as far as practical applications are concerned. It has been the endeavour of scientists and engineers to remove obsolescence and improve upon existing materials so as to save energy and integrate into various other systems. This has been the hallmark of materials scientists and has led to new materials and new technologies.

In the field of ferrites too there has been considerable interest in devising new materials based on iron oxides and other compounds. This means synthesising ultrafine particles and tuning their properties to devise new materials. There are various preparation techniques ranging from top-down to bottom-up approaches, including synthesis at the molecular level, self-assembly, gas-based condensation, low-temperature co-precipitation, the sol-gel process and high-energy ball milling. Among these methods the sol-gel process allows good control of the properties of ceramic materials. The advantages of this method include processing at low temperature, mixing at the molecular level and fabrication of novel materials for various devices.

Composites are materials which combine the good qualities of one or more components. They can be prepared in situ or by mechanical means through the incorporation of fine particles in appropriate matrixes. The size of the magnetic powders as well as the nature of the matrix affect the processability and other physical properties of the final product. These plastic/rubber magnets can in turn be useful for various applications in different devices. In applications involving ferrites at high frequencies, it is essential that the material possesses an appropriate dielectric permittivity and suitable magnetic permeability. This can be achieved by synthesizing rubber ferrite composites (RFCs). RFCs are very useful materials for microwave absorption. Hence the synthesis of ferrites in the nanoregime, investigations of the size effects on their structural, magnetic and electrical properties, and the incorporation of these ferrites into polymer matrixes assume significance.

In the present study, nanoparticles of NiFe2O4, Li0.5Fe2.5O4 and CoFe2O4 are prepared by the sol-gel method. By appropriate heat treatments, particles of different grain sizes are obtained. The structural, magnetic and electrical properties are evaluated as a function of grain size and temperature. NiFe2O4 prepared in the ultrafine regime is then incorporated in a nitrile rubber matrix. The incorporation was carried out according to a specific recipe and for various loadings of magnetic fillers. The cure characteristics, magnetic properties, electrical properties and mechanical properties of these elastomer blends are studied. Measurements of the electrical permittivity of all the rubber samples in the X band are also carried out.
Abstract:
Warships are generally sleek and slender, with V-shaped sections and block coefficients below 0.5, compared with the fuller forms and higher values for commercial ships. They normally operate in the higher Froude number regime, and the hydrodynamic design is primarily aimed at achieving higher speeds with minimum power. The structural design and analysis methods therefore differ from those for commercial ships. Certain design guidelines have been given in documents like the Naval Engineering Standards (NES), and one of the new developments in this regard is the introduction of classification society rules for the design of warships.

The marine environment imposes subjective and objective uncertainties on ship structure. The uncertainties in loads, material properties, etc. make reliable prediction of ship structural response a difficult task. Strength, stiffness and durability criteria for warship structures can be established by investigations based on elastic analysis, ultimate strength analysis and reliability analysis. For the analysis of complicated warship structures, special means and valid approximations are required.

Preliminary structural design of a frigate-size ship has been carried out. A finite element model of the hold, representative of the complexities in the geometric configuration, has been created using the finite element software NISA. Two other models representing the geometry to a limited extent have also been created: one with two transverse frames and the attached plating along with the longitudinal members, and the other representing the plating and longitudinal stiffeners between two transverse frames. Linear static analysis of the three models has been carried out, each with three different boundary conditions. The structural responses have been checked for deflections and stresses against the permissible values, and the structure has been found adequate in all the cases. The stresses and deflections predicted by the frame model are comparable with those of the hold model, but no such agreement has been obtained between the inter-stiffener plating model and the other two models.

Progressive collapse analyses of the models have been conducted for the three boundary conditions, considering geometric nonlinearity first and then combined geometric and material nonlinearity for the hold and frame models. The von Mises-Ilyushin yield criterion with an elastic-perfectly plastic stress-strain curve has been chosen. In each case, P-Delta curves have been generated and the ultimate load causing failure (ultimate load factor) has been identified as a multiple of the design load specified by NES.

Reliability analysis of the hull module under combined geometric and material nonlinearities has been conducted. The Young's modulus and the shell thickness have been chosen as the random variables, and randomly generated values have been used in the analysis. The First Order Second Moment (FOSM) method has been used to predict the reliability index and, thereafter, the probability of failure. The values have been compared against standard values published in the literature.
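For reference, the FOSM reliability index for a linear limit state g = R - S with independent normal resistance R and load effect S reduces to a one-line formula. The sketch below is a generic illustration with made-up moments, not the thesis's actual hull model or its nonlinear limit state.

```python
import numpy as np
from scipy.stats import norm

# First Order Second Moment (FOSM) for a linear limit state g = R - S,
# with R (resistance) and S (load effect) independent and normal.
mu_R, sigma_R = 250.0, 25.0   # illustrative values only
mu_S, sigma_S = 150.0, 30.0

beta = (mu_R - mu_S) / np.sqrt(sigma_R**2 + sigma_S**2)  # reliability index
p_f = norm.cdf(-beta)                                    # failure probability

print(f"beta = {beta:.2f}, Pf = {p_f:.2e}")
```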
Abstract:
Semiclassical theories such as the Thomas-Fermi and Wigner-Kirkwood methods give a good description of the smooth average part of the total energy of a Fermi gas in some external potential when the chemical potential is varied. However, in systems with a fixed number of particles N, these methods overbind the actual average of the quantum energy as N is varied. We describe a theory that accounts for this effect. Numerical illustrations are discussed for fermions trapped in a harmonic oscillator potential and in a hard-wall cavity, and for self-consistent calculations of atomic nuclei. In the latter case, the influence of deformations on the average behavior of the energy is also considered.
Abstract:
Study of variable stars is an important topic of modern astrophysics. After the invention of powerful telescopes and high-resolution CCDs, variable star data have been accumulating on the order of petabytes. This huge amount of data needs many automated methods as well as human experts. This thesis is devoted to data analysis of variable stars' astronomical time series data and hence belongs to the interdisciplinary topic of Astrostatistics. For an observer on Earth, stars that show a change in apparent brightness over time are called variable stars. The variation in brightness may be regular (periodic), quasi-periodic (semi-periodic) or irregular (aperiodic), and is caused by various reasons. In some cases the variation is due to internal thermonuclear processes; such stars are generally known as intrinsic variables. In other cases it is due to some external process, like an eclipse or rotation; these are known as extrinsic variables. Intrinsic variables can be further grouped into pulsating variables, eruptive variables and flare stars. Extrinsic variables are grouped into eclipsing binary stars and chromospheric variables. Pulsating variables can be further classified into Cepheid, RR Lyrae, RV Tauri, Delta Scuti, Mira, etc. The eruptive or cataclysmic variables are novae, supernovae, etc., which occur rarely and are not periodic phenomena. Most of the other variations are periodic in nature. Variable stars can be observed in many ways, such as photometry, spectrophotometry and spectroscopy. A sequence of photometric observations of a variable star produces time series data containing time, magnitude and error. The plot of a variable star's apparent magnitude against time is known as a light curve. If the time series data are folded on a period, the plot of apparent magnitude against phase is known as a phased light curve. The unique shape of the phased light curve is a characteristic of each type of variable star. One way to identify the type of a variable star and to classify it is for an expert to look visually at the phased light curve. For the last several years, automated algorithms have been used to classify groups of variable stars with the help of computers. Research on variable stars can be divided into different stages: observation, data reduction, data analysis, modeling and classification. Modeling of variable stars helps to determine their short-term and long-term behaviour, to construct theoretical models (e.g., the Wilson-Devinney model for eclipsing binaries) and to derive stellar properties like mass, radius, luminosity, temperature, internal and external structure, chemical composition and evolution. Classification requires the determination of basic parameters like period, amplitude and phase, as well as some other derived parameters. Of these, the period is the most important, since wrong periods can lead to sparse light curves and misleading information. Time series analysis is a method of applying mathematical and statistical tests to data in order to quantify the variation, understand the nature of time-varying phenomena, gain physical understanding of the system and predict its future behavior. Astronomical time series usually suffer from unevenly spaced time instants, varying error conditions and the possibility of big gaps. This is due to the daily variation of daylight and the weather conditions for ground-based observations, while observations from space may suffer from the impact of cosmic ray particles.
Many large-scale astronomical surveys such as MACHO, OGLE, EROS, ROTSE, PLANET, Hipparcos, MISAO, NSVS, ASAS, Pan-STARRS, Kepler, ESA, Gaia, LSST and CRTS provide variable star time series data, even though their primary intention is not variable star observation. The Center for Astrostatistics, Pennsylvania State University, was established to help the astronomical community with statistical tools for harvesting and analysing archival data. Most of these surveys release their data to the public for further analysis. There exist many period search algorithms for astronomical time series analysis, which can be classified into parametric (assuming some underlying distribution for the data) and non-parametric (assuming no statistical model, Gaussian or otherwise) methods. Many of the parametric methods are based on variations of the discrete Fourier transform, like the Generalised Lomb-Scargle periodogram (GLSP) by Zechmeister (2009) and Significant Spectrum (SigSpec) by Reegen (2007). Non-parametric methods include Phase Dispersion Minimisation (PDM) by Stellingwerf (1978) and the cubic spline method by Akerlof (1994). Even though most of these methods can be automated, none of them can fully recover the true periods. Wrong period detection can occur for several reasons, such as power leakage to other frequencies, which is due to the finite total interval, finite sampling interval and finite amount of data. Another problem is aliasing, which is due to the influence of regular sampling. Spurious periods also appear due to long gaps, and power flow to harmonic frequencies is an inherent problem of Fourier methods. Hence obtaining the exact period of a variable star from its time series data is still a difficult problem for huge databases subjected to automation. As Matthew Templeton, AAVSO, states, “Variable star data analysis is not always straightforward; large-scale, automated analysis design is non-trivial”. Derekas et al. (2007) and Deb et al. (2010) state, “The processing of huge amount of data in these databases is quite challenging, even when looking at seemingly small issues such as period determination and classification”. It would be beneficial for the variable star astronomical community if basic parameters such as period, amplitude and phase were obtained more accurately when huge time series databases are subjected to automation. In the present thesis work, the theories of four popular period search methods are studied, the strengths and weaknesses of these methods are evaluated by applying them to two survey databases, and finally a modified form of the cubic spline method is introduced to confirm the exact period of a variable star. For the classification of newly discovered variable stars and their entry into the “General Catalogue of Variable Stars” or other databases like the “Variable Star Index”, the characteristics of the variability have to be quantified in terms of variable star parameters.
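To make the period search concrete, here is a minimal sketch of a phase dispersion minimisation statistic in the spirit of Stellingwerf (1978): fold the light curve on each trial period, bin the phases, and prefer the period that minimises the pooled within-bin variance relative to the total variance. This is an illustrative simplification on synthetic data, not the thesis's modified cubic spline method.

```python
import numpy as np

def pdm_theta(t, mag, period, n_bins=10):
    """PDM statistic: pooled within-bin variance / total variance.

    Values well below 1 indicate a good candidate period.
    """
    phase = (t % period) / period
    bins = np.floor(phase * n_bins).astype(int)
    total_var = mag.var(ddof=1)
    num, dof = 0.0, 0
    for b in range(n_bins):
        m = mag[bins == b]
        if len(m) > 1:
            num += (len(m) - 1) * m.var(ddof=1)
            dof += len(m) - 1
    return (num / dof) / total_var

# Toy light curve: unevenly sampled sinusoid with true period 2.5 days.
rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0, 100, 300))
mag = 12.0 + 0.3 * np.sin(2 * np.pi * t / 2.5) + rng.normal(0, 0.02, 300)

trial_periods = np.linspace(1.0, 5.0, 2000)
theta = [pdm_theta(t, mag, p) for p in trial_periods]
print("best period:", trial_periods[int(np.argmin(theta))])
```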
Abstract:
We consider a first order implicit time stepping procedure (Euler scheme) for the non-stationary Stokes equations in smoothly bounded domains of R^3. Using energy estimates we prove optimal convergence properties in the Sobolev spaces H^m(G) (m = 0, 1, 2) uniformly in time, provided that the solution of the Stokes equations has a certain degree of regularity. For the solution of the resulting Stokes resolvent boundary value problems we use a representation in the form of hydrodynamical volume and boundary layer potentials, where the unknown source densities of the latter can be determined from uniquely solvable systems of boundary integral equations. For the numerical computation of the potentials and the solution of the boundary integral equations, a boundary element method of collocation type is used. Some simulations of a model problem are carried out and illustrate the efficiency of the method.
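The first order implicit Euler step itself is simple to state: for a spatially discretized linear evolution equation u' = Au it reads (I - dt A) u^{n+1} = u^n. Below is a minimal sketch for a 1D heat equation stand-in; it illustrates only the time stepping, not the Stokes system with its pressure constraint or the boundary element machinery.

```python
import numpy as np

# Implicit (backward) Euler for u_t = nu * u_xx on [0, 1] with
# homogeneous Dirichlet boundary conditions, as a stand-in for the
# abstract scheme (I - dt*A) u^{n+1} = u^n.
nu, n, dt, steps = 0.1, 100, 0.01, 200
h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)

# Standard second-difference approximation of the 1D Laplacian.
A = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1)) * nu / h**2

u = np.sin(np.pi * x)                      # initial condition
M = np.eye(n) - dt * A                     # system matrix of one step
for _ in range(steps):
    u = np.linalg.solve(M, u)              # one first-order Euler step

# The exact solution decays like exp(-nu * pi^2 * t); check at t = 2.
err = abs(u - np.sin(np.pi * x) * np.exp(-nu * np.pi**2 * dt * steps)).max()
print(err)
```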
Abstract:
The finite element method (FEM) is now developed to solve the two-dimensional Hartree-Fock (HF) equations for atoms and diatomic molecules. The method and its implementation are described, and results are presented for the atoms Be, Ne and Ar as well as the diatomic molecules LiH, BH, N2 and CO as examples. Total energies and eigenvalues calculated with the FEM at the HF level are compared with results obtained with the standard numerical methods used for the solution of the one-dimensional HF equations for atoms, with the traditional LCAO quantum chemical methods for diatomic molecules, and with the newly developed finite difference method at the HF level. In general the accuracy increases from the LCAO to the finite difference to the finite element method.
Abstract:
The decomposition methods available when the data are fully observed are not valid when the variable of interest is censored. This may explain the scarcity of such exercises for duration variables, which are usually observed under censoring. This paper proposes an Oaxaca-Blinder-type method for decomposing differences in means in the context of censored data. The validity of the method rests on the identification and estimation of the joint distribution of the duration variable and a set of covariates. In addition, a more general method is proposed that allows the decomposition of other functionals of interest, such as the median or the Gini coefficient, based on the specification of the conditional distribution function of the duration variable given a set of covariates. Monte Carlo experiments are conducted to evaluate the performance of these methods. Finally, the proposed methods are applied to analyse gender gaps in different characteristics of unemployment duration in Spain, such as the mean duration, the probability of being long-term unemployed and the Gini coefficient. The results lead to the conclusion that factors other than observable characteristics, such as human capital or household structure, play a key role in explaining these gaps.
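As background, the classic two-fold Oaxaca-Blinder decomposition for fully observed data splits the mean gap into an explained part (different covariates) and an unexplained part (different coefficients). The sketch below implements only this uncensored baseline on toy data; the paper's contribution, the extension to censored durations, is not reproduced here.

```python
import numpy as np

def oaxaca_blinder(XA, yA, XB, yB):
    """Two-fold Oaxaca-Blinder decomposition of the mean gap yA - yB."""
    XA1 = np.column_stack([np.ones(len(XA)), XA])
    XB1 = np.column_stack([np.ones(len(XB)), XB])
    bA, *_ = np.linalg.lstsq(XA1, yA, rcond=None)
    bB, *_ = np.linalg.lstsq(XB1, yB, rcond=None)
    xA, xB = XA1.mean(axis=0), XB1.mean(axis=0)
    explained = (xA - xB) @ bB       # gap due to different covariates
    unexplained = xA @ (bA - bB)     # gap due to different coefficients
    return explained, unexplained

# Toy data: group A has higher covariate levels and a higher intercept.
rng = np.random.default_rng(3)
XA = rng.normal(1.0, 1.0, (500, 2))
yA = 1.0 + XA @ np.array([0.5, 0.3]) + rng.normal(0, 1, 500)
XB = rng.normal(0.5, 1.0, (500, 2))
yB = 0.5 + XB @ np.array([0.5, 0.3]) + rng.normal(0, 1, 500)

# The two components sum exactly to yA.mean() - yB.mean().
print(oaxaca_blinder(XA, yA, XB, yB))
```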
Abstract:
Two common methods of accounting for electric-field-induced perturbations to molecular vibration are analyzed and compared. The first method is based on a perturbation-theoretic treatment and the second on a finite-field treatment. The relationship between the two, which is not immediately apparent, is made by developing an algebraic formalism for the latter. Some of the higher-order terms in this development are documented here for the first time. As well as considering vibrational dipole polarizabilities and hyperpolarizabilities, we also make mention of the vibrational Stark effect.
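The essence of a finite-field treatment is numerical differentiation of the energy with respect to an applied field: for example, the static polarizability along one axis is the negative second derivative of E(F) at F = 0. A minimal sketch with a toy polynomial energy function follows; the actual calculations behind the paper are quantum chemical, and the field step must in practice balance truncation against rounding error.

```python
def energy(F, mu=0.8, alpha=1.5, beta=0.2):
    # Toy model energy in a static field F, with dipole mu,
    # polarizability alpha and first hyperpolarizability beta:
    # E(F) = -mu*F - (1/2)*alpha*F**2 - (1/6)*beta*F**3
    return -mu * F - 0.5 * alpha * F**2 - beta * F**3 / 6.0

h = 1e-3  # finite field step

# Central finite-field differences at F = 0:
mu_ff = -(energy(h) - energy(-h)) / (2 * h)                   # -dE/dF
alpha_ff = -(energy(h) - 2 * energy(0.0) + energy(-h)) / h**2  # -d2E/dF2

print(mu_ff, alpha_ff)   # recovers mu = 0.8 and alpha = 1.5
```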
Abstract:
The goal of this study is to evaluate the effect of mass lumping on the dispersion properties of four finite-element velocity/surface-elevation pairs that are used to approximate the linear shallow-water equations. For each pair, the dispersion relation, obtained using the mass lumping technique, is computed and analysed for both gravity and Rossby waves. The dispersion relations are compared with those obtained for the consistent schemes (without lumping) and for the continuous case. The P0-P1, RT0 and P-P1 pairs are shown to preserve good dispersive properties when the mass matrix is lumped. Results of test problems simulating fast gravity and slow Rossby waves are in good agreement with the analytical results.
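Mass lumping in its most common form replaces the consistent finite element mass matrix by a diagonal matrix of its row sums. The sketch below assembles a 1D P1 consistent mass matrix and lumps it; this is a generic illustration, since the paper works with 2D velocity/surface-elevation pairs that this toy does not reproduce.

```python
import numpy as np

def p1_mass_matrices(n_elems, length=1.0):
    """Consistent and row-sum-lumped mass matrices for 1D P1 elements."""
    h = length / n_elems
    n_nodes = n_elems + 1
    M = np.zeros((n_nodes, n_nodes))
    # Element mass matrix for linear elements: (h/6) * [[2, 1], [1, 2]].
    Me = (h / 6.0) * np.array([[2.0, 1.0], [1.0, 2.0]])
    for e in range(n_elems):
        M[e:e + 2, e:e + 2] += Me
    M_lumped = np.diag(M.sum(axis=1))   # row-sum lumping
    return M, M_lumped

M, ML = p1_mass_matrices(4)
print(np.diag(ML))   # interior entries h, boundary entries h/2
```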
Abstract:
In this article we review recent progress on the design, analysis and implementation of numerical-asymptotic boundary integral methods for the computation of frequency-domain acoustic scattering in a homogeneous unbounded medium by a bounded obstacle. The main aim of the methods is to allow computation of scattering at arbitrarily high frequency with finite computational resources.