982 results for Semi-implicit methods


Relevance:

30.00%

Publisher:

Abstract:

A new (semiannular) geometry for the Josephson junction has been proposed, and theoretical studies have shown that the new geometry is useful for electronic applications [1, 2]. In this work we study the voltage-current response of the junction under a periodic modulation. The fluxon experiences an oscillating potential in the presence of the ac bias, which increases the depinning current value. We show that in a system with periodic boundary conditions, average progressive motion of the fluxon commences once the amplitude of the ac drive exceeds a certain threshold value. The analytic studies are supported by simulating the equation with a finite-difference method. We observe creation and annihilation of fluxons in the semiannular Josephson junction with an ac bias in the presence of an external magnetic field.
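The ac-driven fluxon dynamics described above can be sketched with an explicit finite-difference integration of the damped, driven sine-Gordon equation. This is an illustrative toy with open (zero-flux) ends, as in a semiannular junction; the parameter values and drive are hypothetical, not taken from [1, 2]:

```python
import numpy as np

def step(phi, phi_prev, dx, dt, alpha, eta):
    """One explicit leapfrog step of phi_tt = phi_xx - sin(phi) - alpha*phi_t + eta,
    with zero-flux (open-ended) boundaries."""
    padded = np.pad(phi, 1, mode="edge")            # phi_x = 0 at both ends
    lap = (padded[2:] - 2.0 * phi + padded[:-2]) / dx**2
    phi_t = (phi - phi_prev) / dt
    return 2.0 * phi - phi_prev + dt**2 * (lap - np.sin(phi) - alpha * phi_t + eta)

L, N = 40.0, 400
dx, dt = L / N, 0.5 * (L / N)                       # dt < dx for stability
x = np.arange(N) * dx
phi = 4.0 * np.arctan(np.exp(x - L / 2))            # single-fluxon (2*pi kink) profile
phi_prev = phi.copy()
for n in range(400):
    eta = 0.3 * np.sin(0.2 * n * dt)                # ac bias (illustrative amplitude/frequency)
    phi, phi_prev = step(phi, phi_prev, dx, dt, 0.05, eta), phi
```

Sweeping the drive amplitude in such a sketch and recording the time-averaged fluxon velocity is one way to locate the depinning threshold numerically.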


This work presents an efficient method for volume rendering of glioma tumors from segmented 2D MRI datasets with interactive user control, replacing the manual segmentation required by state-of-the-art methods. The most common primary brain tumors are gliomas, which evolve from the cerebral supportive cells. For clinical follow-up, evaluation of the pre-operative tumor volume is essential. Tumor portions were automatically segmented from 2D MR images using morphological filtering techniques. These segmented tumor slices were propagated and modeled with the software package. The 3D modeled tumor consists of the gray-level values of the original image with the exact tumor boundary. Axial slices of FLAIR and T2-weighted images were used for extracting tumors. Volumetric assessment of tumor volume by manual segmentation of its outlines is a time-consuming process and is prone to error; these defects are overcome by this method. The authors verified the performance of the method on several sets of MRI scans. For verification purposes, the 3D modeling was also performed from the segmented 2D slices with a medical software package called 3D DOCTOR. The results were validated against ground-truth models by the radiologist.
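The morphological-filtering step of such a pipeline can be illustrated with a minimal threshold-plus-opening sketch. This is a generic illustration, not the authors' actual segmentation code; the structuring element and threshold are hypothetical:

```python
import numpy as np

def erode(mask):
    """Binary erosion with a 4-connected (plus-shaped) structuring element."""
    p = np.pad(mask, 1)
    return (p[1:-1, 1:-1] & p[:-2, 1:-1] & p[2:, 1:-1]
            & p[1:-1, :-2] & p[1:-1, 2:])

def dilate(mask):
    """Binary dilation with the same structuring element."""
    p = np.pad(mask, 1)
    return (p[1:-1, 1:-1] | p[:-2, 1:-1] | p[2:, 1:-1]
            | p[1:-1, :-2] | p[1:-1, 2:])

def segment_slice(img, thresh):
    """Threshold, then a morphological opening to remove speckle noise."""
    return dilate(erode(img > thresh))

# synthetic "slice": a bright 6x6 region (tumor) plus one isolated noise pixel
img = np.zeros((12, 12))
img[2:8, 2:8] = 1.0
img[10, 10] = 1.0
seg = segment_slice(img, 0.5)
```

The opening keeps the connected bright region but removes the isolated pixel, which is the basic mechanism by which morphological filtering suppresses speckle before the slices are propagated into a 3D model.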


The study of variable stars is an important topic of modern astrophysics. With the advent of powerful telescopes and high-resolution CCDs, variable star data are accumulating on the order of petabytes. Such huge amounts of data require many automated methods as well as human experts. This thesis is devoted to data analysis of variable stars' astronomical time series data and hence belongs to the interdisciplinary field of astrostatistics. For an observer on Earth, stars whose apparent brightness changes over time are called variable stars. The variation in brightness may be regular (periodic), quasi-periodic (semi-periodic) or irregular (aperiodic), and is caused by various mechanisms. In some cases the variation is due to internal thermonuclear processes; such stars are generally known as intrinsic variables. In other cases it is due to external processes, like eclipse or rotation; these are known as extrinsic variables. Intrinsic variables can be further grouped into pulsating variables, eruptive variables and flare stars. Extrinsic variables are grouped into eclipsing binary stars and chromospherical stars. Pulsating variables can again be classified into Cepheid, RR Lyrae, RV Tauri, Delta Scuti, Mira, etc. The eruptive or cataclysmic variables are novae, supernovae, etc., which occur rarely and are not periodic phenomena. Most of the other variations are periodic in nature. Variable stars can be observed in many ways, such as photometry, spectrophotometry and spectroscopy. A sequence of photometric observations of a variable star produces time series data containing time, magnitude and error. The plot of a variable star's apparent magnitude against time is known as the light curve. If the time series data are folded on a period, the plot of apparent magnitude against phase is known as the phased light curve. The unique shape of the phased light curve is a characteristic of each type of variable star.
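Phase folding, as described above, is a one-line operation; a minimal sketch on a synthetic unevenly sampled light curve (all values here are made up for illustration):

```python
import numpy as np

def phase_fold(t, period, t0=0.0):
    """Fold observation times on a trial period; returns phases in [0, 1)."""
    return ((t - t0) / period) % 1.0

# unevenly sampled sinusoidal "light curve" with true period 2.5 d
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 50, 300))
mag = 12.0 + 0.4 * np.sin(2 * np.pi * t / 2.5)
phase = phase_fold(t, 2.5)
```

Plotting `mag` against `phase` at the true period collapses the scattered time series onto the single characteristic curve shape; at a wrong period the phased plot remains scattered.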
One way to identify the type of a variable star and to classify it is by an expert visually inspecting the phased light curve. For the last several years, automated algorithms have been used to classify groups of variable stars with the help of computers. Research on variable stars can be divided into different stages: observation, data reduction, data analysis, modeling and classification. Modeling of variable stars helps to determine their short-term and long-term behaviour, to construct theoretical models (e.g., the Wilson-Devinney model for eclipsing binaries) and to derive stellar properties such as mass, radius, luminosity, temperature, internal and external structure, chemical composition and evolution. Classification requires the determination of basic parameters like period, amplitude and phase, as well as some other derived parameters. Of these, the period is the most important, since a wrong period can lead to sparse light curves and misleading information. Time series analysis is the application of mathematical and statistical tests to data in order to quantify the variation, understand the nature of the time-varying phenomenon, gain physical understanding of the system and predict its future behavior. Astronomical time series usually suffer from unevenly spaced time instants, varying error conditions and the possibility of big gaps. This is due to daily varying daylight and weather conditions for ground-based observations, while observations from space may suffer from the impact of cosmic-ray particles. Many large-scale astronomical surveys, such as MACHO, OGLE, EROS, ROTSE, PLANET, Hipparcos, MISAO, NSVS, ASAS, Pan-STARRS, Kepler, ESA, Gaia, LSST and CRTS, provide variable star time series data, even though their primary intention is not variable star observation.
The Center for Astrostatistics, Pennsylvania State University, was established to help the astronomical community with statistical tools for harvesting and analysing archival data. Most of these surveys release their data to the public for further analysis. There exist many period search algorithms for astronomical time series analysis, which can be classified into parametric methods (assuming some underlying distribution for the data) and non-parametric methods (assuming no statistical model, Gaussian or otherwise). Many of the parametric methods are based on variations of the discrete Fourier transform, such as the Generalised Lomb-Scargle periodogram (GLSP) by Zechmeister (2009) and Significant Spectrum (SigSpec) by Reegen (2007). Non-parametric methods include Phase Dispersion Minimisation (PDM) by Stellingwerf (1978) and the cubic spline method by Akerlof (1994). Even though most of these methods can be automated, none of them can fully recover the true periods. Wrong detection of the period can have several causes, such as power leakage to other frequencies, which is due to the finite total interval, finite sampling interval and finite amount of data. Another problem is aliasing, which is due to the influence of regular sampling. Spurious periods also appear due to long gaps, and power flow to harmonic frequencies is an inherent problem of Fourier methods. Hence obtaining the exact period of a variable star from its time series data is still a difficult problem for huge databases subjected to automation. As Matthew Templeton, AAVSO, states, "Variable star data analysis is not always straightforward; large-scale, automated analysis design is non-trivial". Derekas et al. (2007) and Deb et al. (2010) state, "The processing of huge amounts of data in these databases is quite challenging, even when looking at seemingly small issues such as period determination and classification".
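The idea behind Stellingwerf's PDM can be conveyed with a simplified sketch: fold the data on each trial period, bin the phases, and compare the pooled within-bin variance to the total variance; the true period minimises this ratio. This is a bare-bones illustration, not Stellingwerf's full statistic with overlapping bins:

```python
import numpy as np

def pdm_theta(t, y, period, nbins=10):
    """Simplified PDM statistic: pooled within-bin variance / total variance.
    Minima over trial periods indicate the true period."""
    phase = (t / period) % 1.0
    bins = np.minimum((phase * nbins).astype(int), nbins - 1)
    num, den = 0.0, 0
    for b in range(nbins):
        yb = y[bins == b]
        if len(yb) > 1:
            num += np.var(yb, ddof=1) * (len(yb) - 1)
            den += len(yb) - 1
    return (num / den) / np.var(y, ddof=1)

# synthetic unevenly sampled light curve with true period 1.7 d
rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0, 30, 400))
y = np.sin(2 * np.pi * t / 1.7) + 0.05 * rng.normal(size=400)
trials = np.linspace(1.2, 2.2, 501)
best = trials[np.argmin([pdm_theta(t, y, p) for p in trials])]
```

Because PDM makes no assumption about the light-curve shape, it is one of the non-parametric methods mentioned above; its weakness, as the text notes, is susceptibility to aliases and subharmonics when the trial-period range is wide.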
It will benefit the variable star astronomical community if basic parameters such as period, amplitude and phase can be obtained more accurately when huge time series databases are subjected to automation. In the present thesis work, the theories of four popular period search methods are studied, the strengths and weaknesses of these methods are evaluated by applying them to two survey databases, and finally a modified form of the cubic spline method is introduced to confirm the exact period of a variable star. For classifying newly discovered variable stars and entering them in the "General Catalogue of Variable Stars" or other databases like the "Variable Star Index", the characteristics of the variability have to be quantified in terms of variable star parameters.


We consider a first-order implicit time stepping procedure (Euler scheme) for the non-stationary Stokes equations in smoothly bounded domains of R3. Using energy estimates we can prove optimal convergence properties in the Sobolev spaces Hm(G) (m = 0, 1, 2), uniformly in time, provided that the solution of the Stokes equations has a certain degree of regularity. For the solution of the resulting Stokes resolvent boundary value problems we use a representation in the form of hydrodynamical volume and boundary layer potentials, where the unknown source densities of the latter can be determined from uniquely solvable systems of boundary integral equations. For the numerical computation of the potentials and the solution of the boundary integral equations, a boundary element method of collocation type is used. Some simulations of a model problem are carried out and illustrate the efficiency of the method.
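The appeal of the implicit (backward) Euler step for problems like this is its unconditional stability. A generic sketch on a stiff linear system, not the paper's boundary-element Stokes solver, with made-up decay rates:

```python
import numpy as np

def implicit_euler(A, u0, dt, nsteps):
    """Backward Euler for u' = A u: solve (I - dt A) u_{n+1} = u_n each step."""
    M = np.eye(len(u0)) - dt * A
    u = u0.astype(float).copy()
    for _ in range(nsteps):
        u = np.linalg.solve(M, u)
    return u

A = np.diag([-1.0, -100.0, -10000.0])   # widely separated decay rates (stiff)
u = implicit_euler(A, np.ones(3), 0.1, 50)
```

Even with a step size far larger than the fastest time scale (dt = 0.1 versus 1/10000), every component decays monotonically, whereas an explicit step of the same size would blow up; this is the property that makes the implicit scheme worth the cost of a solve per step.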


We consider numerical methods for the compressible time-dependent Navier-Stokes equations, discussing spatial discretization by finite volume and discontinuous Galerkin methods, time integration by time-adaptive implicit Runge-Kutta and Rosenbrock methods, and the solution of the resulting nonlinear and linear equation systems by preconditioned Jacobian-free Newton-Krylov methods as well as multigrid methods. As applications, thermal fluid-structure interaction and other unsteady flow problems are considered. The text is aimed at both mathematicians and engineers.
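The key trick behind Jacobian-free Newton-Krylov methods is that a Krylov solver only needs Jacobian-vector products, which can be approximated by a finite difference of the residual, so the Jacobian is never formed. A minimal sketch with a hypothetical residual function:

```python
import numpy as np

def jfnk_matvec(F, u, v, eps=1e-7):
    """Jacobian-free approximation of J(u) @ v via a forward difference of F."""
    return (F(u + eps * v) - F(u)) / eps

# illustrative nonlinear residual F(u) = A u + u^3 - b
rng = np.random.default_rng(2)
A = rng.normal(size=(5, 5))
b = rng.normal(size=5)
F = lambda u: A @ u + u**3 - b

u = rng.normal(size=5)
v = rng.normal(size=5)
J = A + np.diag(3 * u**2)            # analytic Jacobian, for comparison only
approx = jfnk_matvec(F, u, v)
```

Inside a Newton iteration, each GMRES matrix-vector product is replaced by one extra residual evaluation like this, which is why the approach scales to discretizations where assembling and storing the Jacobian would be prohibitive.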


We study four measures of problem instance behavior that might account for the observed differences in interior-point method (IPM) iterations when these methods are used to solve semidefinite programming (SDP) problem instances: (i) an aggregate geometry measure related to the primal and dual feasible regions (aspect ratios) and norms of the optimal solutions, (ii) the (Renegar-) condition measure C(d) of the data instance, (iii) a measure of the near-absence of strict complementarity of the optimal solution, and (iv) the level of degeneracy of the optimal solution. We compute these measures for the SDPLIB suite problem instances and measure the correlation between these measures and IPM iteration counts (solved using the software SDPT3) when the measures have finite values. Our conclusions are roughly as follows: the aggregate geometry measure is highly correlated with IPM iterations (CORR = 0.896), and is a very good predictor of IPM iterations, particularly for problem instances with solutions of small norm and aspect ratio. The condition measure C(d) is also correlated with IPM iterations, but less so than the aggregate geometry measure (CORR = 0.630). The near-absence of strict complementarity is weakly correlated with IPM iterations (CORR = 0.423). The level of degeneracy of the optimal solution is essentially uncorrelated with IPM iterations.
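The CORR values quoted above are sample Pearson correlation coefficients between a measure and the IPM iteration counts. A minimal sketch of the computation on hypothetical data (the numbers below are invented, not from SDPLIB):

```python
import numpy as np

def pearson_corr(x, y):
    """Sample Pearson correlation coefficient between two vectors."""
    xc, yc = x - x.mean(), y - y.mean()
    return (xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc))

# hypothetical data: a geometry measure vs. IPM iteration counts
measure = np.array([1.2, 3.4, 2.2, 5.6, 4.1, 7.3])
iters = np.array([14.0, 25.0, 19.0, 33.0, 28.0, 41.0])
r = pearson_corr(measure, iters)
```

A value near 0.9, as for the aggregate geometry measure, indicates a strong linear relationship; a value near 0.4, as for the strict-complementarity measure, indicates only a weak one.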


We consider conjugate-gradient like methods for solving block symmetric indefinite linear systems that arise from saddle-point problems or, in particular, regularizations thereof. Such methods require preconditioners that preserve certain sub-blocks from the original systems but allow considerable flexibility for the remaining blocks. We construct a number of families of implicit factorizations that are capable of reproducing the required sub-blocks and (some) of the remainder. These generalize known implicit factorizations for the unregularized case. Improved eigenvalue clustering is possible if additionally some of the noncrucial blocks are reproduced. Numerical experiments confirm that these implicit-factorization preconditioners can be very effective in practice.
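The eigenvalue clustering produced by preconditioners that preserve the constraint sub-blocks can be illustrated numerically. The sketch below shows the classical unregularized constraint-preconditioner result (any preconditioner sharing the off-diagonal blocks A, A^T yields the eigenvalue 1 with multiplicity at least 2m), not the paper's implicit factorizations; all matrices are random and illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 8, 3
H = rng.normal(size=(n, n))
H = H @ H.T + n * np.eye(n)              # SPD (1,1) block
A = rng.normal(size=(m, n))              # full-rank constraint block

def saddle(G):
    """Assemble the block symmetric indefinite matrix [[G, A^T], [A, 0]]."""
    return np.block([[G, A.T], [A, np.zeros((m, m))]])

K = saddle(H)                            # original saddle-point matrix
P = saddle(np.eye(n))                    # preconditioner reproducing A, A^T exactly
lam = np.linalg.eigvals(np.linalg.solve(P, K))
```

The remaining n - m eigenvalues depend on how well the (1,1) block of the preconditioner approximates H, which is where reproducing additional, noncrucial blocks improves the clustering further.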


Automatic indexing and retrieval of digital data poses major challenges. The main problem arises from the ever increasing mass of digital media and the lack of efficient methods for indexing and retrieval of such data based on the semantic content rather than keywords. To enable intelligent web interactions, or even web filtering, we need to be capable of interpreting the information base in an intelligent manner. For a number of years research has been ongoing in the field of ontological engineering with the aim of using ontologies to add such (meta) knowledge to information. In this paper, we describe the architecture of a system (Dynamic REtrieval Analysis and semantic metadata Management (DREAM)) designed to automatically and intelligently index huge repositories of special effects video clips, based on their semantic content, using a network of scalable ontologies to enable intelligent retrieval. The DREAM Demonstrator has been evaluated as deployed in the film post-production phase to support the process of storage, indexing and retrieval of large data sets of special effects video clips as an exemplar application domain. This paper provides its performance and usability results and highlights the scope for future enhancements of the DREAM architecture which has proven successful in its first and possibly most challenging proving ground, namely film production, where it is already in routine use within our test bed Partners' creative processes. (C) 2009 Published by Elsevier B.V.


A new self-tuning implicit pole-assignment algorithm is presented which, through the use of a pole compression factor and different RLS model and control structures, overcomes stability and convergence problems encountered in previously available algorithms. Computational requirements of the technique are much reduced when compared to explicit pole-assignment schemes, whereas the inherent robustness of the strategy is retained.
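The RLS model estimation underlying such self-tuning schemes can be sketched in a few lines. This is the standard recursive least squares update, not the paper's specific model and control structures; the system and noise level are invented:

```python
import numpy as np

def rls_update(theta, P, x, y, lam=1.0):
    """One recursive-least-squares step for the model y = x @ theta + noise.
    lam is the forgetting factor (1.0 = no forgetting)."""
    Px = P @ x
    k = Px / (lam + x @ Px)                  # gain vector
    theta = theta + k * (y - x @ theta)      # parameter update
    P = (P - np.outer(k, Px)) / lam          # covariance update
    return theta, P

rng = np.random.default_rng(4)
true_theta = np.array([1.5, -0.7])
theta = np.zeros(2)
P = 1000.0 * np.eye(2)                       # large initial covariance
for _ in range(200):
    x = rng.normal(size=2)
    y = x @ true_theta + 0.01 * rng.normal()
    theta, P = rls_update(theta, P, x, y)
```

In a self-tuning controller, estimates like `theta` feed the pole-assignment design at each sample, which is why the convergence and stability of the estimator matter so much.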


Many modern statistical applications involve inference for complex stochastic models, where it is easy to simulate from the models, but impossible to calculate likelihoods. Approximate Bayesian computation (ABC) is a method of inference for such models. It replaces calculation of the likelihood by a step which involves simulating artificial data for different parameter values, and comparing summary statistics of the simulated data with summary statistics of the observed data. Here we show how to construct appropriate summary statistics for ABC in a semi-automatic manner. We aim for summary statistics which will enable inference about certain parameters of interest to be as accurate as possible. Theoretical results show that optimal summary statistics are the posterior means of the parameters. Although these cannot be calculated analytically, we use an extra stage of simulation to estimate how the posterior means vary as a function of the data; and we then use these estimates of our summary statistics within ABC. Empirical results show that our approach is a robust method for choosing summary statistics that can result in substantially more accurate ABC analyses than the ad hoc choices of summary statistics that have been proposed in the literature. We also demonstrate advantages over two alternative methods of simulation-based inference.
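The two-stage idea can be sketched on a deliberately trivial model (estimating the mean of a normal, where the best summary is obvious). This is a minimal illustration of the semi-automatic construction, not the authors' implementation; all sizes and tolerances are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(5)

def simulate(theta, n=20):
    """Toy model: n iid observations from N(theta, 1)."""
    return rng.normal(theta, 1.0, size=n)

x_obs = simulate(2.0)

# Stage 1 (semi-automatic): pilot simulations, then regress theta on the raw
# data to learn a linear summary statistic s(x) = b0 + b @ x, an estimate of
# the posterior mean as a function of the data.
pilot_theta = rng.uniform(-5, 5, size=500)
pilot_x = np.array([simulate(th) for th in pilot_theta])
X = np.column_stack([np.ones(500), pilot_x])
beta = np.linalg.lstsq(X, pilot_theta, rcond=None)[0]
summary = lambda x: beta[0] + x @ beta[1:]

# Stage 2: rejection ABC, comparing the learned summary of simulated data
# with the learned summary of the observed data.
prior_draws = rng.uniform(-5, 5, size=20000)
sims = np.array([simulate(th) for th in prior_draws])
dist = np.abs(np.array([summary(x) for x in sims]) - summary(x_obs))
accepted = prior_draws[dist <= np.quantile(dist, 0.01)]
posterior_mean = accepted.mean()
```

Here the regression recovers (approximately) the sample mean as the summary; the point of the method is that the same recipe applies when no good summary is known in advance.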


With the prospect of exascale computing, computational methods requiring only local data become especially attractive. Consequently, the typical domain decomposition of atmospheric models means horizontally-explicit vertically-implicit (HEVI) time-stepping schemes warrant further attention. In this analysis, Runge-Kutta implicit-explicit schemes from the literature are analysed for their stability and accuracy using a von Neumann stability analysis of two linear systems. Attention is paid to the numerical phase to indicate the behaviour of phase and group velocities. Where the analysis is tractable, analytically derived expressions are considered. For more complicated cases, amplification factors have been numerically generated and the associated amplitudes and phase diagnosed. Analysis of a system describing acoustic waves has necessitated attributing the three resultant eigenvalues to the three physical modes of the system. To do so, a series of algorithms has been devised to track the eigenvalues across the frequency space. The result enables analysis of whether the schemes exactly preserve the non-divergent mode; and whether there is evidence of spurious reversal in the direction of group velocities or asymmetry in the damping for the pair of acoustic modes. Frequency ranges that span next-generation high-resolution weather models to coarse-resolution climate models are considered; and a comparison is made of errors accumulated from multiple stability-constrained shorter time-steps from the HEVI scheme with a single integration from a fully implicit scheme over the same time interval. Two schemes, “Trap2(2,3,2)” and “UJ3(1,3,2)”, both already used in atmospheric models, are identified as offering consistently good stability and representation of phase across all the analyses. Furthermore, according to a simple measure of computational cost, “Trap2(2,3,2)” is the least expensive.
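The flavour of a von Neumann-style analysis can be shown on the scalar oscillation equation y' = i*omega*y, for which each scheme's amplification factor is available in closed form. This is a one-mode illustration, far simpler than the coupled-system analysis above:

```python
import numpy as np

omega_dt = np.linspace(0.0, 3.0, 301)       # omega * dt
z = 1j * omega_dt

A_explicit = np.abs(1 + z)                  # forward Euler: amplifying
A_implicit = np.abs(1 / (1 - z))            # backward Euler: damping
A_trap = np.abs((1 + z / 2) / (1 - z / 2))  # trapezoidal: neutrally stable
```

Forward Euler amplifies every oscillatory mode, backward Euler damps them all, and the trapezoidal rule preserves their amplitude exactly; the HEVI schemes analysed above are judged by how close their amplification factors and numerical phases come to this neutral, phase-accurate ideal across the relevant frequency range.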


Purpose The sensitivity of soil organic carbon (SOC) to global change drivers, according to the depth profile, is receiving increasing attention because of its importance in the global carbon cycle and its potential feedback to climate change. A better knowledge of the vertical distribution of SOC and its controlling factors, the aim of this study, will help scientists predict the consequences of global change. Materials and methods The study area was the Murcia Province (S.E. Spain), under semiarid Mediterranean conditions. The database used consists of 312 soil profiles collected on a systematic grid of one profile per 12 km2, covering a total area of 11,004 km2. Statistical analysis of the relationships between SOC concentration and control factors in different soil use scenarios was conducted at fixed depths of 0–20, 20–40, 40–60, and 60–100 cm. Results and discussion SOC concentration in the top 40 cm ranged between 6.1 and 31.5 g kg−1, with significant differences according to land use, soil type and lithology, while below this depth no differences were observed (SOC concentration 2.1–6.8 g kg−1). The ANOVA showed that land use was the most important factor controlling SOC concentration in the 0–40 cm depth. Significant differences were found in the relative importance of environmental and textural factors according to land use and soil depth. In forestland, mean annual precipitation and texture were the main predictors of SOC, while in cropland and shrubland the main predictors were mean annual temperature and lithology. Total SOC stored in the top 1 m in the region was about 79 Tg, with a low mean density of 7.18 kg C m−3. The vertical distribution of SOC was shallower in forestland and deeper in cropland. A reduction in rainfall would lead to a SOC decrease in forestland and shrubland, and an increase in mean annual temperature would adversely affect SOC in cropland and shrubland.
With increasing depth, the relative importance of climatic factors decreases and texture becomes more important in controlling SOC in all land uses. Conclusions Since climate change impacts will be much greater on surface SOC, strategies for C sequestration should focus on subsoil sequestration; in forestland this is hindered by bedrock limitations on soil depth. Under these conditions, sequestration in cropland through appropriate management practices is recommended.
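The regional stock figure can be cross-checked from the quoted mean density and survey area (a back-of-envelope check, assuming the 7.18 kg C m−3 applies uniformly over the top metre):

```python
area_m2 = 11_004 * 1e6          # survey area: 11,004 km^2 in m^2
mean_density = 7.18             # mean SOC density, kg C per m^3
depth_m = 1.0                   # top 1 m of soil
stock_Tg = mean_density * depth_m * area_m2 / 1e9   # 1 Tg = 1e9 kg
```

The product comes out at about 79 Tg, consistent with the total reported in the abstract.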


Objective: To assess the perspectives of couples who requested vasectomy in a public health service on the use of male participation contraceptive methods available in Brazil: male condoms, natural family planning/calendar, coitus interruptus and vasectomy. Methods: A qualitative study with semi-structured interviews was held with 20 couples who had requested vasectomy at the Human Reproduction Unit of the Universidade Estadual de Campinas, Brazil. Data analysis was carried out through thematic content analysis. Findings: The couples did not, in general, know any effective contraceptive options for use by men and/or participating in their use, except for vasectomy. The few methods with male participation that they knew of were perceived to interfere in spontaneity and in pleasure of intercourse. Men accepted that condom use in extra-conjugal relations offered them protection from sexually transmitted diseases; that their wives might also participate in extra-marital relationships was not considered. Discussion: The few contraceptive options with male participation lead to difficulty in sharing responsibilities between men and women. On the basis of perceived gender roles, women took the responsibility for contraception until the moment when the situation became untenable, and they faced the unavoidable necessity of sterilization. Conclusion: Specific actions are necessary for men to achieve integral participation in relation to reproductive sexual health. These include education and discussions on gender roles, leading to greater awareness in men of the realities of sexual and reproductive health.


In the present paper we obtain a new homological version of the implicit function theorem and some versions of the Darboux theorem. Such results are proved for continuous maps on topological manifolds. As a consequence, some versions of these classic theorems are proved for differentiable (not necessarily C-1) maps.
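For reference, the classical C^1 statement that such homological versions generalize (standard textbook form, not the paper's formulation):

```latex
Let $F : \mathbb{R}^n \times \mathbb{R}^m \to \mathbb{R}^m$ be $C^1$ near
$(a,b)$, with $F(a,b) = 0$ and $D_y F(a,b)$ invertible. Then there exist
neighbourhoods $U \ni a$, $V \ni b$ and a unique $C^1$ map $g : U \to V$
with $g(a) = b$ such that
\[
  F\bigl(x, g(x)\bigr) = 0 \quad \text{for all } x \in U .
\]
```

The homological approach replaces the invertibility hypothesis on the derivative with topological conditions, which is what allows the conclusion to survive for merely continuous, or differentiable but not C^1, maps.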


This paper describes the development of an implicit finite difference method for solving transient three-dimensional incompressible free surface flows. To reduce the CPU time of explicit low-Reynolds number calculations, we have combined a projection method with an implicit technique for treating the pressure on the free surface. The projection method is employed to uncouple the velocity and the pressure fields, allowing each variable to be solved separately. We employ the normal stress condition on the free surface to derive an implicit technique for calculating the pressure at the free surface. Numerical results demonstrate that this modification is essential for the construction of methods that are more stable than those provided by discretizing the free surface explicitly. In addition, we show that the proposed method can be applied to viscoelastic fluids. Numerical results include the simulation of jet buckling and extrudate swell for Reynolds numbers in the range [0.01, 0.5]. (C) 2008 Elsevier Inc. All rights reserved.
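The projection idea of uncoupling velocity and pressure can be sketched in its simplest setting: a periodic domain with a spectral Poisson solve, rather than the paper's free-surface finite-difference formulation. The sketch removes the divergent part of an intermediate velocity field:

```python
import numpy as np

def project(u, v, dx):
    """Project a periodic 2-D velocity field onto its divergence-free part
    by solving a pressure Poisson equation with FFTs (Chorin-style)."""
    n = u.shape[0]
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)
    kx, ky = np.meshgrid(k, k, indexing="ij")
    u_hat, v_hat = np.fft.fft2(u), np.fft.fft2(v)
    div_hat = 1j * kx * u_hat + 1j * ky * v_hat       # divergence of u*
    k2 = kx**2 + ky**2
    k2[0, 0] = 1.0                                    # mean of phi is arbitrary
    phi_hat = -div_hat / k2                           # solve lap(phi) = div(u*)
    phi_hat[0, 0] = 0.0
    u_new = np.real(np.fft.ifft2(u_hat - 1j * kx * phi_hat))  # u = u* - grad(phi)
    v_new = np.real(np.fft.ifft2(v_hat - 1j * ky * phi_hat))
    return u_new, v_new

def divergence(u, v, dx):
    """Spectral divergence of a periodic 2-D field."""
    n = u.shape[0]
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)
    kx, ky = np.meshgrid(k, k, indexing="ij")
    return np.real(np.fft.ifft2(1j * kx * np.fft.fft2(u)
                                + 1j * ky * np.fft.fft2(v)))

n = 32
dx = 2.0 * np.pi / n
X, Y = np.meshgrid(np.arange(n) * dx, np.arange(n) * dx, indexing="ij")
u_star, v_star = np.sin(X + Y), np.cos(X)             # intermediate velocity
u_div, v_div = project(u_star, v_star, dx)
```

In the paper's setting the same decoupling is done with finite differences and the extra ingredient is the implicit treatment of the free-surface pressure via the normal stress condition, which is what improves stability over an explicit surface update.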