938 results for Finite volume methods
Abstract:
Magnetism and magnetic materials have been an ever-attractive subject area for engineers and scientists alike because of their versatility in finding applications in useful devices. They find applications in a host of devices ranging from rudimentary devices like loudspeakers to sophisticated gadgets like waveguides and Magnetic Random Access Memories (MRAM). The one material in the realm of magnetism that has been at the centre stage of applications is ferrite, and among ferrites, spinel ferrites have received the lion's share as far as practical applications are concerned. It has been the endeavour of scientists and engineers to remove obsolescence and improve upon existing materials so as to save energy and integrate into various other systems. This has been the hallmark of materials scientists, and it has led to new materials and new technologies. In the field of ferrites too there has been considerable interest in devising new materials based on iron oxides and other compounds. This means synthesising ultrafine particles and tuning their properties to devise new materials. There are various preparation techniques ranging from top-down to bottom-up approaches, including synthesis at the molecular level, self-assembly, gas-based condensation, low temperature co-precipitation, the sol-gel process and high energy ball milling. Among these methods, the sol-gel process allows good control of the properties of ceramic materials. The advantages of this method include processing at low temperature, mixing at the molecular level and fabrication of novel materials for various devices.
Composites are materials which combine the good qualities of one or more components. They can be prepared in situ or by mechanical means by the incorporation of fine particles in appropriate matrices. The size of the magnetic powders as well as the nature of the matrix affect the processability and other physical properties of the final product. These plastic/rubber magnets can in turn be useful for various applications in different devices. In applications involving ferrites at high frequencies, it is essential that the material possesses an appropriate dielectric permittivity and suitable magnetic permeability. This can be achieved by synthesizing rubber ferrite composites (RFCs). RFCs are very useful materials for microwave absorption. Hence the synthesis of ferrites in the nanoregime, investigations of the size effects on their structural, magnetic and electrical properties, and the incorporation of these ferrites into polymer matrices assume significance.
In the present study, nanoparticles of NiFe2O4, Li0.5Fe2.5O4 and CoFe2O4 are prepared by the sol-gel method. By appropriate heat treatments, particles of different grain sizes are obtained. The structural, magnetic and electrical properties are evaluated as a function of grain size and temperature. NiFe2O4 prepared in the ultrafine regime is then incorporated in a nitrile rubber matrix. The incorporation was carried out according to a specific recipe and for various loadings of magnetic fillers. The cure characteristics, magnetic properties, electrical properties and mechanical properties of these elastomer blends are evaluated. Measurements of the electrical permittivity of all the rubber samples in the X-band are also conducted.
Abstract:
Warships are generally sleek and slender, with V-shaped sections and block coefficients below 0.5, compared with the fuller forms and higher values of commercial ships. They normally operate in the higher Froude number regime, and the hydrodynamic design is primarily aimed at achieving higher speeds with minimum power. Therefore the structural design and analysis methods differ from those for commercial ships. Certain design guidelines are given in documents like the Naval Engineering Standards (NES), and one of the new developments in this regard is the introduction of classification society rules for the design of warships.
The marine environment imposes subjective and objective uncertainties on ship structure. The uncertainties in loads, material properties etc. make reliable prediction of ship structural response a difficult task. Strength, stiffness and durability criteria for warship structures can be established by investigations on elastic analysis, ultimate strength analysis and reliability analysis. For the analysis of complicated warship structures, special means and valid approximations are required.
Preliminary structural design of a frigate-size ship has been carried out. A finite element model of the hold, representative of the complexities in the geometric configuration, has been created using the finite element software NISA. Two other models representing the geometry to a limited extent have also been created: one with two transverse frames and the attached plating along with the longitudinal members, and the other representing the plating and longitudinal stiffeners between two transverse frames. Linear static analyses of the three models have been carried out, each with three different boundary conditions. The structural responses have been checked for deflections and stresses against the permissible values, and the structure has been found adequate in all cases. The stresses and deflections predicted by the frame model are comparable with those of the hold model, but no such agreement has been found between the interstiffener plating model and the other two models.
Progressive collapse analyses of the models have been conducted for the three boundary conditions, considering geometric nonlinearity and then combined geometric and material nonlinearity for the hold and frame models. The von Mises-Ilyushin yield criterion with an elastic-perfectly plastic stress-strain curve has been chosen. In each case, P-Delta curves have been generated and the ultimate load causing failure (ultimate load factor) has been identified as a multiple of the design load specified by NES.
Reliability analysis of the hull module under combined geometric and material nonlinearities has been conducted. The Young's modulus and the shell thickness have been chosen as the random variables, and randomly generated values have been used in the analysis. The First Order Second Moment (FOSM) method has been used to predict the reliability index and, thereafter, the probability of failure. The values have been compared against standard values published in the literature.
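As a reference for the last step, here is a minimal sketch of the First Order Second Moment idea for a linear limit state with independent normal resistance R and load effect S; the abstract does not specify the limit state function, so this form is purely illustrative:

```latex
% FOSM reliability index for the linear limit state g = R - S,
% with R (resistance) and S (load effect) independent and normal:
\beta = \frac{\mu_R - \mu_S}{\sqrt{\sigma_R^2 + \sigma_S^2}},
\qquad
P_f = \Phi(-\beta)
```

Here Phi is the standard normal cumulative distribution function; a larger reliability index beta corresponds directly to a smaller probability of failure.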
Abstract:
Semiclassical theories such as the Thomas-Fermi and Wigner-Kirkwood methods give a good description of the smooth average part of the total energy of a Fermi gas in some external potential when the chemical potential is varied. However, in systems with a fixed number of particles N, these methods overbind the actual average of the quantum energy as N is varied. We describe a theory that accounts for this effect. Numerical illustrations are discussed for fermions trapped in a harmonic oscillator potential and in a hard-wall cavity, and for self-consistent calculations of atomic nuclei. In the latter case, the influence of deformations on the average behavior of the energy is also considered.
Abstract:
The study of variable stars is an important topic of modern astrophysics. With powerful telescopes and high-resolution CCDs, variable star data are accumulating on the order of petabytes. This huge amount of data demands automated methods as well as human experts. This thesis is devoted to the analysis of variable star astronomical time series data and hence belongs to the interdisciplinary field of Astrostatistics.
For an observer on earth, stars whose apparent brightness changes over time are called variable stars. The variation in brightness may be regular (periodic), quasi-periodic (semi-periodic) or irregular (aperiodic), and is caused by various mechanisms. In some cases the variation is due to internal thermonuclear processes; such stars are generally known as intrinsic variables. In other cases it is due to external processes, like eclipse or rotation; these are known as extrinsic variables. Intrinsic variables can be further grouped into pulsating variables, eruptive variables and flare stars. Extrinsic variables are grouped into eclipsing binary stars and chromospherically active stars. Pulsating variables can again be classified into Cepheid, RR Lyrae, RV Tauri, Delta Scuti, Mira etc. The eruptive or cataclysmic variables are novae, supernovae, etc., which occur rarely and are not periodic phenomena. Most of the other variations are periodic in nature.
Variable stars can be observed in many ways, such as photometry, spectrophotometry and spectroscopy. A sequence of photometric observations of a variable star produces time series data containing time, magnitude and error. The plot of a variable star's apparent magnitude against time is known as a light curve. If the time series data are folded on a period, the plot of apparent magnitude against phase is known as a phased light curve. The unique shape of the phased light curve is a characteristic of each type of variable star, and one way to identify and classify a variable star is visual inspection of its phased light curve by an expert. For the last several years, automated algorithms have been used to classify groups of variable stars with the help of computers.
Research on variable stars can be divided into stages: observation, data reduction, data analysis, modeling and classification. Modeling of variable stars helps to determine their short-term and long-term behaviour, to construct theoretical models (e.g., the Wilson-Devinney model for eclipsing binaries) and to derive stellar properties like mass, radius, luminosity, temperature, internal and external structure, chemical composition and evolution. Classification requires the determination of basic parameters like period, amplitude and phase, as well as some derived parameters. Of these, period is the most important, since a wrong period leads to sparse light curves and misleading information.
Time series analysis applies mathematical and statistical tests to data to quantify the variation, understand the nature of the time-varying phenomena, gain physical understanding of the system and predict its future behavior. Astronomical time series usually suffer from unevenly spaced time instants, varying error conditions and the possibility of big gaps. This is due to daily varying daylight and weather conditions for ground-based observations, while observations from space may suffer from the impact of cosmic ray particles.
Many large-scale astronomical surveys such as MACHO, OGLE, EROS, ROTSE, PLANET, Hipparcos, MISAO, NSVS, ASAS, Pan-STARRS, Kepler, ESA, Gaia, LSST and CRTS provide variable star time series data, even though their primary intention is not variable star observation. The Center for Astrostatistics, Pennsylvania State University, was established to help the astronomical community with statistical tools for harvesting and analysing archival data. Most of these surveys release their data to the public for further analysis.
There exist many period search algorithms for astronomical time series analysis, which can be classified into parametric (assuming some underlying distribution for the data) and non-parametric (assuming no statistical model such as a Gaussian) methods. Many of the parametric methods are based on variations of the discrete Fourier transform, like the Generalised Lomb-Scargle periodogram (GLSP) by Zechmeister (2009) and the Significant Spectrum (SigSpec) method by Reegen (2007). Non-parametric methods include Phase Dispersion Minimisation (PDM) by Stellingwerf (1978) and the cubic spline method by Akerlof (1994); a minimal PDM sketch is given after this abstract. Even though most of these methods can be automated, none of them can fully recover the true periods. Wrong period detection can arise for several reasons, such as power leakage to other frequencies due to the finite total interval, finite sampling interval and finite amount of data. Another problem is aliasing, due to the influence of regular sampling. Spurious periods also appear due to long gaps, and power flow to harmonic frequencies is an inherent problem of Fourier methods. Hence obtaining the exact period of a variable star from its time series data is still a difficult problem for huge databases subjected to automation. As Matthew Templeton, AAVSO, states, "Variable star data analysis is not always straightforward; large-scale, automated analysis design is non-trivial". Derekas et al. (2007) and Deb et al. (2010) state, "The processing of huge amounts of data in these databases is quite challenging, even when looking at seemingly small issues such as period determination and classification".
It will benefit the variable star astronomical community if basic parameters such as period, amplitude and phase can be obtained more accurately when huge time series databases are subjected to automation. In the present thesis work, the theories of four popular period search methods are studied, the strengths and weaknesses of these methods are evaluated by applying them to two survey databases, and finally a modified form of the cubic spline method is introduced to confirm the exact period of a variable star. For the classification of newly discovered variable stars and their entry into the "General Catalogue of Variable Stars" or other databases like the "Variable Star Index", the characteristics of the variability have to be quantified in terms of variable star parameters.
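As an illustration of the non-parametric approach mentioned above, the following is a minimal sketch of Stellingwerf-style Phase Dispersion Minimisation; the binning choices and the synthetic data are our own assumptions, not taken from the thesis:

```python
import numpy as np

def pdm_theta(t, mag, period, n_bins=10):
    """Stellingwerf's theta statistic: ratio of the pooled within-bin variance
    of the phase-folded light curve to the overall variance.
    A small theta indicates a good trial period."""
    phase = (t / period) % 1.0
    overall_var = np.var(mag, ddof=1)
    bins = np.floor(phase * n_bins).astype(int)
    num, den = 0.0, 0
    for b in range(n_bins):
        m = mag[bins == b]
        if len(m) > 1:
            num += (len(m) - 1) * np.var(m, ddof=1)
            den += len(m) - 1
    return (num / den) / overall_var

# Usage on synthetic, unevenly sampled data with a true period of 0.73 days.
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 50.0, 300))  # irregular sampling times (days)
mag = 12.0 + 0.3 * np.sin(2 * np.pi * t / 0.73) + rng.normal(0.0, 0.02, t.size)

trial_periods = np.linspace(0.5, 1.0, 5001)
theta = np.array([pdm_theta(t, mag, p) for p in trial_periods])
print("best period:", trial_periods[np.argmin(theta)])  # ~0.73
```

Because PDM only compares variances of the folded data, it makes no assumption about the light-curve shape, which is why it is classified with the non-parametric methods above.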
Abstract:
We consider a first order implicit time stepping procedure (Euler scheme) for the non-stationary Stokes equations in smoothly bounded domains of R^3. Using energy estimates we can prove optimal convergence properties in the Sobolev spaces H^m(G) (m = 0, 1, 2) uniformly in time, provided that the solution of the Stokes equations has a certain degree of regularity. For the solution of the resulting Stokes resolvent boundary value problems we use a representation in the form of hydrodynamical volume and boundary layer potentials, where the unknown source densities of the latter can be determined from uniquely solvable systems of boundary integral equations. For the numerical computation of the potentials and the solution of the boundary integral equations, a boundary element method of collocation type is used. Simulations of a model problem are carried out and illustrate the efficiency of the method.
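In schematic form (assuming unit viscosity and homogeneous Dirichlet boundary data, which the abstract does not spell out), one implicit Euler step of size tau reads:

```latex
% One implicit Euler step for the non-stationary Stokes system in G:
\frac{u^{n+1} - u^n}{\tau} - \Delta u^{n+1} + \nabla p^{n+1} = f^{n+1},
\qquad \nabla \cdot u^{n+1} = 0 \ \text{in } G,
\qquad u^{n+1} = 0 \ \text{on } \partial G
```

Each time step is therefore a Stokes resolvent problem lambda*u - Delta(u) + grad(p) = g with lambda = 1/tau, which is exactly the boundary value problem the potential representation and boundary integral equations are set up to solve.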
Abstract:
The finite element method (FEM) is now developed to solve the two-dimensional Hartree-Fock (HF) equations for atoms and diatomic molecules. The method and its implementation are described, and results are presented for the atoms Be, Ne and Ar as well as the diatomic molecules LiH, BH, N_2 and CO as examples. Total energies and eigenvalues calculated with the FEM at the HF level are compared with results obtained with the standard numerical methods used for the solution of the one-dimensional HF equations for atoms, with traditional LCAO quantum chemical methods for diatomic molecules, and with the newly developed finite difference method at the HF level. In general the accuracy increases from the LCAO to the finite difference to the finite element method.
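Whatever the basis (finite elements or LCAO), expanding the orbitals in local functions turns the HF equations into a generalized matrix eigenvalue problem; a schematic statement in our own notation, not the paper's, follows:

```latex
% Expanding the orbitals \psi_i = \sum_j c_{ji}\,\chi_j in basis functions
% \chi_j reduces the HF equations to a generalized eigenvalue problem:
F\,\mathbf{c}_i = \varepsilon_i\, S\,\mathbf{c}_i,
\qquad
F_{jk} = \langle \chi_j |\hat{F}| \chi_k \rangle,
\quad
S_{jk} = \langle \chi_j | \chi_k \rangle
```

For FEM bases the Fock matrix F and overlap matrix S are sparse and the basis is systematically refinable, which is consistent with the accuracy ordering reported in the abstract.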
Abstract:
The methods available for performing decomposition analyses when the data are fully observed are not valid when the variable of interest is censored. This may explain the scarcity of such exercises for duration variables, which are usually observed under censoring. This paper proposes an Oaxaca-Blinder-type method for decomposing differences in means in the context of censored data. The validity of the method rests on the identification and estimation of the joint distribution of the duration variable and a set of covariates. In addition, a more general method is proposed that allows the decomposition of other functionals of interest, such as the median or the Gini coefficient, based on the specification of the conditional distribution function of the duration variable given a set of covariates. Monte Carlo experiments are carried out to assess the performance of these methods. Finally, the proposed methods are applied to analyse gender gaps in different characteristics of unemployment duration in Spain, such as mean duration, the probability of being long-term unemployed and the Gini coefficient. The results lead to the conclusion that factors other than observable characteristics, such as human capital or household structure, play a primary role in explaining these gaps.
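For reference, the classical Oaxaca-Blinder decomposition of a mean gap in the uncensored case (standard notation, not specific to this paper) splits the difference into a part explained by characteristics and an unexplained part:

```latex
% Mean outcome gap between groups A and B, from group-wise linear
% models Y_g = X_g'\beta_g + u_g (uncensored case):
\bar{Y}_A - \bar{Y}_B =
  \underbrace{(\bar{X}_A - \bar{X}_B)'\hat{\beta}_B}_{\text{characteristics}}
  \; + \;
  \underbrace{\bar{X}_A'(\hat{\beta}_A - \hat{\beta}_B)}_{\text{coefficients}}
```

The paper's contribution is to make a split of this kind feasible when the outcome is censored, by identifying the joint distribution of the duration variable and the covariates rather than relying on fully observed outcomes.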
Abstract:
Two common methods of accounting for electric-field-induced perturbations to molecular vibration are analyzed and compared. The first method is based on a perturbation-theoretic treatment and the second on a finite-field treatment. The relationship between the two, which is not immediately apparent, is made clear by developing an algebraic formalism for the latter. Some of the higher-order terms in this development are documented here for the first time. As well as considering vibrational dipole polarizabilities and hyperpolarizabilities, we also make mention of the vibrational Stark effect.
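The finite-field side of the comparison can be summarized by the standard central-difference energy expansions (a textbook form, not the paper's higher-order development):

```latex
% Energy of a molecule in a static field F (one component, schematic):
E(F) = E(0) - \mu F - \tfrac{1}{2}\alpha F^2 - \tfrac{1}{6}\beta F^3 - \cdots
% Central differences then recover the dipole moment and polarizability:
\mu \approx \frac{E(-F) - E(F)}{2F},
\qquad
\alpha \approx \frac{2E(0) - E(F) - E(-F)}{F^2}
```

Relating such numerical derivatives to the analytic perturbation-theoretic expressions, order by order, is precisely the kind of algebraic correspondence the abstract describes.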
Abstract:
The goal of this study is to evaluate the effect of mass lumping on the dispersion properties of four finite-element velocity/surface-elevation pairs used to approximate the linear shallow-water equations. For each pair, the dispersion relation obtained with the mass lumping technique is computed and analysed for both gravity and Rossby waves. The dispersion relations are compared with those obtained for the consistent schemes (without lumping) and for the continuous case. The P0-P1, RT0 and P-P1 pairs are shown to preserve good dispersive properties when the mass matrix is lumped. Results from test problems simulating fast gravity and slow Rossby waves are in good agreement with the analytical results.
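To make the lumping operation concrete, here is a minimal NumPy sketch of row-sum mass lumping for one-dimensional linear (P1) elements on a uniform mesh; this toy setup is our own, while the paper works with two-dimensional velocity/surface-elevation pairs:

```python
import numpy as np

def p1_mass_matrices(n_elements, h=1.0):
    """Assemble the consistent P1 mass matrix on a uniform 1D mesh of
    element size h, then lump it by summing each row onto the diagonal."""
    n_nodes = n_elements + 1
    M = np.zeros((n_nodes, n_nodes))
    # Element mass matrix for linear elements: (h/6) * [[2, 1], [1, 2]]
    Me = (h / 6.0) * np.array([[2.0, 1.0], [1.0, 2.0]])
    for e in range(n_elements):
        M[e:e + 2, e:e + 2] += Me
    M_lumped = np.diag(M.sum(axis=1))  # row-sum lumping
    return M, M_lumped

M, ML = p1_mass_matrices(4)
print(np.round(M, 3))
print(np.round(ML, 3))  # diagonal: h/2 at the end nodes, h at interior nodes
```

The appeal of lumping is that a diagonal mass matrix removes the linear solve from explicit time stepping; the price is a modified dispersion relation, which is exactly what the study quantifies for each pair.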
Abstract:
In this article we review recent progress on the design, analysis and implementation of numerical-asymptotic boundary integral methods for the computation of frequency-domain acoustic scattering in a homogeneous unbounded medium by a bounded obstacle. The main aim of the methods is to allow computation of scattering at arbitrarily high frequency with finite computational resources.
Abstract:
Effective medium approximations for the frequency-dependent and complex-valued effective stiffness tensors of cracked/porous rocks with multiple solid constituents are developed on the basis of the T-matrix approach (based on integral equation methods for quasi-static composites), the elastic-viscoelastic correspondence principle, and a unified treatment of the local and global flow mechanisms, which is consistent with the principle of fluid mass conservation. The main advantage of using the T-matrix approach, rather than the first-order approach of Eshelby or the second-order approach of Hudson, is that it produces physically plausible results even when the volume concentrations of inclusions or cavities are no longer small. The new formulae, which operate with an arbitrary homogeneous (anisotropic) reference medium and contain terms of all orders in the volume concentrations of solid particles and communicating cavities, take explicit account of inclusion shape and spatial distribution independently. We show analytically that an expansion of the T-matrix formulae to first order in the volume concentration of cavities (in agreement with the dilute estimate of Eshelby) has the correct dependence on the properties of the saturating fluid, in the sense that it is consistent with the Brown-Korringa relation, when the frequency is sufficiently low. We present numerical results for the (anisotropic) effective viscoelastic properties of a cracked permeable medium with finite storage porosity, indicating that the complete T-matrix formulae (including the higher-order terms) are generally consistent with the Brown-Korringa relation, at least if we assume the spatial distribution of cavities to be the same for all cavity pairs. We have found an efficient way to treat statistical correlations in the shapes and orientations of the communicating cavities, and we also obtain a reasonable match between theoretical predictions (based on a dual porosity model for quartz-clay mixtures, involving relatively flat clay-related pores and more rounded quartz-related pores) and laboratory results for the ultrasonic velocity and attenuation spectra of a suite of typical reservoir rocks.
Abstract:
We develop the linearization of a semi-implicit semi-Lagrangian model of the one-dimensional shallow-water equations using two different methods. The usual tangent linear model, formed by linearizing the discrete nonlinear model, is compared with a model formed by first linearizing the continuous nonlinear equations and then discretizing. Both models are shown to perform equally well for finite perturbations. However, the asymptotic behaviour of the two models differs as the perturbation size is reduced. This leads to difficulties in showing that the models are correctly coded using the standard tests. To overcome this difficulty we propose a new method for testing linear models, which we demonstrate both theoretically and numerically.
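One standard correctness check the abstract alludes to compares the nonlinear and linearized models as the perturbation shrinks; a minimal sketch with a toy scalar model follows (the shallow-water models themselves are not reproduced here):

```python
import numpy as np

# Toy nonlinear model M and its hand-derived tangent linear model L at state x.
def M(x):
    return x + 0.1 * x**2        # stand-in for one nonlinear model step

def L(x, dx):
    return dx + 0.2 * x * dx     # derivative of M at x, applied to dx

x, dx = 1.0, 1.0
for eps in 10.0 ** -np.arange(1, 8):
    # Ratio of the finite nonlinear difference to the linear prediction;
    # it should approach 1 as eps -> 0 if L is coded correctly.
    ratio = (M(x + eps * dx) - M(x)) / (eps * L(x, dx))
    print(f"eps={eps:.0e}  ratio={ratio:.8f}")
```

In floating-point arithmetic, cancellation degrades the numerator for very small eps, so in practice the ratio approaches 1 and then drifts; this sensitivity to how the limit is taken is closely related to the testing difficulty that motivates the paper's new method.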
Abstract:
A study was conducted to estimate variation among laboratories and between manual and automated techniques of measuring pressure on the resulting gas production profiles (GPP). Eight feeds (molassed sugarbeet feed, grass silage, maize silage, soyabean hulls, maize gluten feed, whole crop wheat silage, wheat, glucose) were milled to pass a 1 mm screen and sent to three laboratories (ADAS Nutritional Sciences Research Unit, UK; Institute of Grassland and Environmental Research (IGER), UK; Wageningen University, The Netherlands). Each laboratory measured GPP over 144 h using standardised procedures with manual pressure transducers (MPT) and automated pressure systems (APS). The APS at ADAS used a pressure transducer and bottles in a shaking water bath, while the APS at Wageningen and IGER used a pressure sensor and bottles held in a stationary rack. Apparent dry matter degradability (ADDM) was estimated at the end of the incubation. GPP were fitted to a modified Michaelis-Menten model assuming a single phase of gas production, and were described in terms of the asymptotic volume of gas produced (A), the time to half A (B), the time of maximum gas production rate (t_RM) and the maximum gas production rate (R_M). There were effects (P<0.001) of substrate on all parameters. However, MPT produced more (P<0.001) gas, but with longer (P<0.001) B and t_RM (P<0.05) and lower (P<0.001) R_M compared to APS. There was no difference between apparatus in ADDM estimates. Interactions occurred between substrate and apparatus, substrate and laboratory, and laboratory and apparatus. However, when mean values for MPT were regressed against those from the individual laboratories, relationships were good (adjusted R² = 0.827 or higher). Good relationships were also observed with APS, although they were weaker than for MPT (adjusted R² = 0.723 or higher). The relationships between mean MPT and mean APS data were also good (adjusted R² = 0.844 or higher). Data suggest that, although laboratory and method of measuring pressure are sources of variation in GPP estimation, it should be possible using appropriate mathematical models to standardise data among laboratories, so that data from one laboratory could be extrapolated to others. This would allow development of a database of GPP data from many diverse feeds.
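As an illustration of fitting a single-phase gas production profile, here is a sketch using one commonly used generalized Michaelis-Menten form, G(t) = A / (1 + (B/t)^C), in which A is the asymptote and B the time to half A; the exact modified model used in the study, and its data, may differ:

```python
import numpy as np
from scipy.optimize import curve_fit

def gpp(t, A, B, C):
    """Generalized Michaelis-Menten single-phase gas production profile:
    A = asymptotic gas volume, B = time to half A, C = shape parameter."""
    return A / (1.0 + (B / t) ** C)

# Synthetic 144 h incubation readings (illustrative, not study data).
t = np.arange(2.0, 145.0, 2.0)
rng = np.random.default_rng(1)
y = gpp(t, A=220.0, B=18.0, C=1.8) + rng.normal(0.0, 3.0, t.size)

# Least-squares fit of the three curve parameters.
(A, B, C), _ = curve_fit(gpp, t, y, p0=(200.0, 20.0, 1.5))
print(f"A={A:.1f} ml, B={B:.1f} h, C={C:.2f}")
```

Derived quantities such as the time and value of the maximum production rate then follow from the fitted A, B and C, which is how curve parameters of this kind support standardising GPP data across laboratories.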