958 results for Non-perturbative methods


Relevance:

90.00%

Publisher:

Abstract:
The aim of this study was to evaluate and test methods that could improve local estimates of a general model fitted to a large area. In the first three studies, the intention was to divide the study area into sub-areas that were as homogeneous as possible according to the residuals of the general model; in the fourth study, the localization was based on the local neighbourhood. According to spatial autocorrelation (SA), points closer together in space are more likely to be similar than those that are farther apart. Local indicators of SA (LISAs) test the similarity of data clusters. A LISA was calculated for every observation in the dataset, and together with the spatial position and the residual of the global model, the data were segmented using two different methods: classification and regression trees (CART) and the multiresolution segmentation algorithm (MS) of the eCognition software. The general model was then re-fitted (localized) to the resulting sub-areas. In kriging, the SA is modelled with a variogram, and the spatial correlation is a function of the distance (and direction) between the observation and the point of calculation. A general trend is corrected with the residual information of the neighbourhood, whose size is controlled by the number of nearest neighbours; nearness is measured as Euclidean distance. With all methods, the root mean square errors (RMSEs) were lower than those of the general model, but with the methods that segmented the study area, the spread of the individual localized RMSEs was wide. Therefore, an element capable of controlling the division or localization should be included in the segmentation-localization process. Kriging, on the other hand, provided stable estimates when the number of neighbours was sufficient (over 30), thus offering the best potential for further studies. CART could also be combined with kriging or with non-parametric methods such as most similar neighbours (MSN).
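The neighbourhood-based residual correction used in the kriging approach above can be sketched as follows. This is a simplified illustration, assuming a plain k-nearest-neighbour average of global-model residuals (no variogram fitting or directional weighting); all data and names are hypothetical.

```python
import numpy as np

def local_correction(coords, residuals, query, k=30):
    """Correct a global-model prediction at `query` with the mean
    residual of its k nearest neighbours (Euclidean distance)."""
    d = np.linalg.norm(coords - query, axis=1)  # distances to all plots
    idx = np.argsort(d)[:k]                     # k nearest neighbours
    return residuals[idx].mean()                # local trend correction

# toy example: global-model residuals grow towards the east
rng = np.random.default_rng(0)
coords = rng.uniform(0, 100, size=(500, 2))
residuals = 0.05 * coords[:, 0] + rng.normal(0, 0.5, 500)

corr = local_correction(coords, residuals, np.array([90.0, 50.0]), k=30)
```

With a sufficient number of neighbours (the abstract suggests over 30), such a correction stabilizes because the averaged residual noise shrinks.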


We present results for one-loop matching coefficients between continuum four-fermion operators, defined in the Naive Dimensional Regularization scheme, and staggered fermion operators of various types. We calculate diagrams involving gluon exchange between quark lines, and "penguin" diagrams containing quark loops. For the former we use Landau-gauge operators, with and without O(a) improvement, including the tadpole improvement suggested by Lepage and Mackenzie. For the latter we use gauge-invariant operators. Combined with existing results for two-loop anomalous dimension matrices and one-loop matching coefficients, our results allow a lattice calculation of the amplitudes for K-Kbar mixing and K → ππ decays with all corrections of O(g^2) included. We also discuss the mixing of ΔS = 1 operators with lower-dimension operators, and show that, with staggered fermions, only a single lower-dimension operator need be removed by non-perturbative subtraction.
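The structure of such a one-loop matching can be written generically (our notation, not necessarily the paper's conventions) as:

```latex
O_i^{\mathrm{cont}}(\mu) \;=\; \sum_j \left[\, \delta_{ij} \;+\;
\frac{g^2}{16\pi^2}\left(\gamma_{ij}\,\ln(\mu a) + c_{ij}\right)\right]
O_j^{\mathrm{latt}}(a) \;+\; O(g^4) + O(a),
```

where γ_ij is the one-loop anomalous-dimension matrix and the finite coefficients c_ij are what the gluon-exchange and penguin diagram calculations determine.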


The problem of on-line recognition and retrieval of relatively weak industrial signals, such as partial discharges (PD) buried in excessive noise, is addressed in this paper. The major bottleneck is the recognition and suppression of stochastic pulsive interference (PI), because the broadband frequency spectra of PI and PD pulses overlap; as a result, on-line, on-site PD measurement is hardly possible with conventional frequency-based DSP techniques. The observed PD signal is modelled as a linear combination of systematic and random components employing probabilistic principal component analysis (PPCA), and the pdf of the underlying stochastic process is obtained. The PD/PI pulses are taken as the mean of the process and modelled using non-parametric methods based on smooth FIR filters, with a maximum a posteriori probability (MAP) procedure employed to estimate the filter coefficients. The classification of the pulses is undertaken using a simple PCA classifier. The methods proposed by the authors were found to be effective in automatically retrieving PD pulses while completely rejecting PI.
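The decomposition of an observed signal into systematic and random components via PPCA can be sketched with the closed-form maximum-likelihood solution of Tipping and Bishop; this is a generic illustration on synthetic data, not the authors' actual PD processing pipeline.

```python
import numpy as np

def ppca_fit(X, q):
    """Closed-form ML probabilistic PCA (Tipping & Bishop):
    model X ~ N(mu, W W^T + sigma2 I) with q latent dimensions."""
    mu = X.mean(axis=0)
    S = np.cov(X - mu, rowvar=False)           # sample covariance
    lam, U = np.linalg.eigh(S)                 # eigh returns ascending order
    lam, U = lam[::-1], U[:, ::-1]             # sort descending
    sigma2 = lam[q:].mean()                    # noise = mean discarded variance
    W = U[:, :q] * np.sqrt(lam[:q] - sigma2)   # ML loading matrix
    return mu, W, sigma2

# toy signal: one systematic direction buried in isotropic noise
rng = np.random.default_rng(1)
z = rng.normal(size=(1000, 1))
X = z @ np.array([[3.0, 3.0, 0.0]]) + rng.normal(0, 0.5, (1000, 3))
mu, W, sigma2 = ppca_fit(X, q=1)
```

Here `sigma2` recovers the random (noise) variance and `W` the systematic component's direction and scale, which is the separation the abstract relies on before MAP estimation of the pulse shapes.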


The present study performs spatial and temporal trend analysis of annual, monthly and seasonal maximum and minimum temperatures (t(max), t(min)) in India. Recent trends in annual, monthly, winter, pre-monsoon, monsoon and post-monsoon extreme temperatures (t(max), t(min)) have been analyzed for three time slots, viz. 1901-2003, 1948-2003 and 1970-2003. For this purpose, time series of extreme temperatures for India as a whole and for seven homogeneous regions, viz. Western Himalaya (WH), Northwest (NW), Northeast (NE), North Central (NC), East Coast (EC), West Coast (WC) and Interior Peninsula (IP), are considered. Rigorous trend detection has been carried out using a variety of non-parametric methods that account for serial correlation in the analysis. During the last three decades, a minimum-temperature trend is present for all India as well as in all temperature-homogeneous regions, either at the annual level or in at least one season (winter, pre-monsoon, monsoon, post-monsoon). The results agree with the earlier observation that the trend in minimum temperature over India is significant in the last three decades (Kothawale et al., 2010). The sequential Mann-Kendall (MK) test reveals that most of the trends, in both maximum and minimum temperature, began after 1970, at either annual or seasonal levels. (C) 2012 Elsevier B.V. All rights reserved.
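The kind of non-parametric trend detection described above can be illustrated with a basic Mann-Kendall test. This minimal sketch omits tie handling and the serial-correlation corrections (e.g., pre-whitening) that the study applies, and the series below is synthetic.

```python
import numpy as np
from math import erf, sqrt

def mann_kendall(x):
    """Mann-Kendall trend test: S statistic and two-sided p-value
    under the no-trend null (no tie or autocorrelation correction)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    # S counts concordant minus discordant pairs
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0   # null variance of S
    z = (s - np.sign(s)) / sqrt(var_s) if s != 0 else 0.0  # continuity corr.
    p = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))        # two-sided
    return s, p

# warming-like toy series: linear trend plus noise
rng = np.random.default_rng(2)
t = np.arange(50)
tmin = 0.03 * t + rng.normal(0, 0.2, 50)
s, p = mann_kendall(tmin)
```

A positive S with a small p-value indicates a significant increasing trend, the pattern the abstract reports for minimum temperature after 1970.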


We study properties of non-uniform reductions and related completeness notions. We strengthen several results of Hitchcock and Pavan and give a trade-off between the amount of advice needed for a reduction and its honesty on NEXP. We construct an oracle relative to which this trade-off is optimal. We show, in a more systematic study of non-uniform reductions, that among other things non-uniformity can be removed at the cost of more queries. In line with Post's program for complexity theory we connect such 'uniformization' properties to the separation of complexity classes.


In this study we show, first, that forest areas contribute significantly to the estimated benefits from outdoor recreation in Northern Ireland. Secondly, we provide empirical evidence of the gains in the statistical efficiency of both benefit and parameter estimates obtained by analysing follow-up responses with double-bounded interval data analysis. As these gains are considerable, it is clearly worth considering this method in CVM survey design even when moderately large sample sizes are used. Finally, we demonstrate that estimates of the means and medians of WTP distributions for access to forest recreation are of plausible magnitude, are consistent with previous UK studies, and converge across parametric and non-parametric methods of estimation.
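The efficiency gain from follow-up responses comes from the narrower WTP intervals they induce. A minimal sketch of the double-bounded interval-data likelihood, assuming a normal WTP distribution and hypothetical bid levels (the study's actual model and bids may differ):

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

def dbdc_negloglik(params, lo, hi):
    """Negative log-likelihood for double-bounded CVM responses:
    each respondent's WTP lies in (lo, hi); WTP ~ N(mu, sigma)."""
    mu, log_sigma = params
    sigma = np.exp(log_sigma)                  # keep sigma positive
    p = norm.cdf(hi, mu, sigma) - norm.cdf(lo, mu, sigma)
    return -np.sum(np.log(np.clip(p, 1e-12, None)))

# simulated follow-up design: initial bid 10, follow-ups 20 (yes) or 5 (no);
# the yes/no pattern brackets each WTP into one of four intervals
rng = np.random.default_rng(3)
wtp = rng.normal(12.0, 4.0, 800)               # true (latent) WTP
lo = np.where(wtp > 10, np.where(wtp > 20, 20.0, 10.0),
              np.where(wtp > 5, 5.0, -np.inf))
hi = np.where(wtp > 10, np.where(wtp > 20, np.inf, 20.0),
              np.where(wtp > 5, 10.0, 5.0))

fit = minimize(dbdc_negloglik, x0=[8.0, 1.0], args=(lo, hi),
               method="Nelder-Mead")
mu_hat, sigma_hat = fit.x[0], np.exp(fit.x[1])
```

The mean WTP estimate `mu_hat` is recovered from interval information alone; the follow-up question halves the interval width relative to a single-bounded design, which is the source of the efficiency gain the abstract reports.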


Perturbative distorted-wave and non-perturbative close-coupling methods are used to calculate electron-impact ionization cross sections for the ground state of the neutral Al atom. Configuration-average distorted-wave calculations are made for both direct ionization and excitation-autoionization contributions. The total perturbative results are found to be almost a factor of 2 higher than experiment over a wide energy range. On the other hand, the R-matrix with pseudo-states results for total ionization are found to be in good agreement with experiment. Comparison of time-dependent close-coupling calculations for the direct ionization with the R-matrix with pseudo-state calculations for total ionization reveals that both the direct ionization and excitation-autoionization contributions are strongly affected by correlation effects.


Electron-impact ionization cross sections for the 1s2s 1S and 1s2s 3S metastable states of Li+ are calculated using both perturbative distorted-wave and non-perturbative close-coupling methods. Term-resolved distorted-wave calculations are found to be approximately 15% above term-resolved R-matrix with pseudostates calculations. On the other hand, configuration-average time-dependent close-coupling calculations are found to be in excellent agreement with the configuration-average R-matrix with pseudostates calculations. The non-perturbative R-matrix and close-coupling calculations provide a benchmark for experimental studies of electron-impact ionization of metastable states along the He isoelectronic sequence.


Electron-impact ionization cross sections for argon are calculated using both non-perturbative R-matrix with pseudo-states (RMPS) and perturbative distorted-wave methods. At twice the ionization potential, the 3p(6) (1)S ground-term cross section from a distorted-wave calculation is found to be a factor of 4 above crossed-beams experimental measurements; even with the inclusion of term-dependent continuum effects in the distorted-wave method, the perturbative cross section remains almost a factor of 2 above experiment. In the case of ionization from the metastable 3p(5)4s (3)P term, the distorted-wave ionization cross section is also higher than the experimental cross section. On the other hand, the ground-term cross section determined from a non-perturbative RMPS calculation, which includes 27 LS spectroscopic terms and a further 282 LS pseudo-state terms to represent the high Rydberg states and the target continuum, is found to be in excellent agreement with experimental measurements, while the RMPS result is below the experimental cross section for ionization from the metastable term. We conclude that both continuum term dependence and interchannel coupling effects, which are included in the RMPS method, are important for ionization from the ground term, and that interchannel coupling is also significant for ionization from the metastable term.


To assess the preferred methods to quit smoking among current smokers. Cross-sectional, population-based study conducted in Lausanne between 2003 and 2006 including 988 current smokers. Preference was assessed by questionnaire. Evidence-based (EB) methods were nicotine replacement, bupropion, physician or group consultations; non-EB-based methods were acupuncture, hypnosis and autogenic training. EB methods were frequently (physician consultation: 48%, 95% confidence interval (45-51); nicotine replacement therapy: 35% (32-38)) or rarely (bupropion and group consultations: 13% (11-15)) preferred by the participants. Non-EB methods were preferred by a third (acupuncture: 33% (30-36)), a quarter (hypnosis: 26% (23-29)) or a seventh (autogenic training: 13% (11-15)) of responders. On multivariate analysis, women preferred both EB and non-EB methods more frequently than men (odds ratio and 95% confidence interval: 1.46 (1.10-1.93) and 2.26 (1.72-2.96) for any EB and non-EB method, respectively). Preference for non-EB methods was higher among highly educated participants, while no such relationship was found for EB methods. Many smokers are unaware of the full variety of methods to quit smoking. Better information regarding these methods is necessary.


The study of variable stars is an important topic of modern astrophysics. Since the advent of powerful telescopes and high-resolution CCDs, variable star data have been accumulating on the order of petabytes. This huge amount of data requires many automated methods as well as human experts. This thesis is devoted to data analysis of variable stars' astronomical time series data, and hence belongs to the inter-disciplinary field of Astrostatistics. For an observer on Earth, stars whose apparent brightness changes over time are called variable stars. The variation in brightness may be regular (periodic), quasi-periodic (semi-periodic) or irregular (aperiodic), and is caused by various mechanisms. In some cases the variation is due to internal thermo-nuclear processes; such stars are generally known as intrinsic variables. In other cases it is due to external processes, such as eclipses or rotation; these are known as extrinsic variables. Intrinsic variables can be further grouped into pulsating variables, eruptive variables and flare stars. Extrinsic variables are grouped into eclipsing binary stars and chromospherical stars. Pulsating variables can again be classified into Cepheid, RR Lyrae, RV Tauri, Delta Scuti, Mira, etc. The eruptive or cataclysmic variables are novae, supernovae, etc., which occur rarely and are not periodic phenomena. Most of the other variations are periodic in nature. Variable stars can be observed in many ways, such as photometry, spectrophotometry and spectroscopy. A sequence of photometric observations of a variable star produces time series data containing time, magnitude and error. The plot of a variable star's apparent magnitude against time is known as its light curve. If the time series data are folded on a period, the plot of apparent magnitude against phase is known as the phased light curve. The unique shape of the phased light curve is a characteristic of each type of variable star.
One way to identify the type of a variable star and to classify it is for an expert to visually inspect the phased light curve. For the last several years, automated algorithms have been used to classify groups of variable stars. Research on variable stars can be divided into different stages: observation, data reduction, data analysis, modeling and classification. Modeling of variable stars helps to determine short-term and long-term behaviour, to construct theoretical models (e.g., the Wilson-Devinney model for eclipsing binaries), and to derive stellar properties such as mass, radius, luminosity, temperature, internal and external structure, chemical composition and evolution. Classification requires the determination of basic parameters such as period, amplitude and phase, as well as some other derived parameters. Of these, period is the most important, since wrong periods lead to sparse light curves and misleading information. Time series analysis is a method of applying mathematical and statistical tests to data in order to quantify the variation, understand the nature of the time-varying phenomena, gain physical understanding of the system and predict its future behavior. Astronomical time series usually suffer from unevenly spaced time instants, varying error conditions and the possibility of large gaps. This is due to daily varying daylight and weather conditions for ground-based observations, while observations from space may suffer from the impact of cosmic-ray particles. Many large-scale astronomical surveys, such as MACHO, OGLE, EROS, ROTSE, PLANET, Hipparcos, MISAO, NSVS, ASAS, Pan-STARRS, Kepler, ESA, Gaia, LSST and CRTS, provide variable star time series data, even though their primary intention is not variable star observation.
The Center for Astrostatistics, Pennsylvania State University, was established to help the astronomical community with statistical tools for harvesting and analysing archival data. Most of these surveys release their data to the public for further analysis. There exist many period search algorithms for astronomical time series analysis, which can be classified into parametric (assuming some underlying distribution for the data) and non-parametric (not assuming any statistical model, such as a Gaussian) methods. Many of the parametric methods are based on variations of discrete Fourier transforms, such as the Generalised Lomb-Scargle periodogram (GLSP) of Zechmeister (2009) and Significant Spectrum (SigSpec) of Reegen (2007). Non-parametric methods include Phase Dispersion Minimisation (PDM) by Stellingwerf (1978) and the cubic spline method by Akerlof (1994). Even though most of the methods can be automated, none of them fully recovers the true periods. Wrong period detection can arise from several causes, such as power leakage to other frequencies, which is due to the finite total interval, finite sampling interval and finite amount of data. Another problem is aliasing, which is due to the influence of regular sampling. Spurious periods also appear due to long gaps, and power flow to harmonic frequencies is an inherent problem of Fourier methods. Hence obtaining the exact period of a variable star from its time series data remains a difficult problem for huge databases subjected to automation. As Matthew Templeton, AAVSO, states: "Variable star data analysis is not always straightforward; large-scale, automated analysis design is non-trivial". Derekas et al. (2007) and Deb et al. (2010) state: "The processing of the huge amount of data in these databases is quite challenging, even when looking at seemingly small issues such as period determination and classification".
It would be beneficial for the variable star astronomical community if basic parameters such as period, amplitude and phase were obtained more accurately when huge time series databases are subjected to automation. In the present thesis work, the theories of four popular period search methods are studied, the strengths and weaknesses of these methods are evaluated by applying them to two survey databases, and finally a modified form of the cubic spline method is introduced to confirm the exact period of a variable star. For the classification of newly discovered variable stars and their entry into the "General Catalogue of Variable Stars" or other databases such as the "Variable Star Index", the characteristics of the variability have to be quantified in terms of variable star parameters.
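Of the non-parametric period search methods mentioned, Phase Dispersion Minimisation is straightforward to sketch: fold the data on each trial period, and the true period minimises the pooled within-bin variance of the phased light curve relative to the total variance. A minimal illustration on a synthetic, unevenly sampled light curve (not the thesis's modified cubic spline method):

```python
import numpy as np

def pdm_theta(t, mag, period, nbins=10):
    """PDM statistic (Stellingwerf 1978): pooled within-bin variance
    of the phased light curve divided by the total variance."""
    phase = (t / period) % 1.0
    bins = (phase * nbins).astype(int)
    total = mag.var(ddof=1)
    num, den = 0.0, 0
    for b in range(nbins):
        m = mag[bins == b]
        if len(m) > 1:
            num += (len(m) - 1) * m.var(ddof=1)
            den += len(m) - 1
    return (num / den) / total

# unevenly sampled sinusoid with true period 2.5 d plus noise
rng = np.random.default_rng(4)
t = np.sort(rng.uniform(0, 100, 400))
mag = 0.5 * np.sin(2 * np.pi * t / 2.5) + rng.normal(0, 0.05, 400)

trials = np.linspace(2.0, 3.0, 2001)
best = trials[np.argmin([pdm_theta(t, mag, p) for p in trials])]
```

Because PDM makes no assumption about the light-curve shape, it handles the non-sinusoidal curves of eclipsing binaries and RR Lyrae stars that trip up Fourier-based periodograms.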


Background: The aim of this study was to evaluate root coverage of gingival recessions and to compare graft vascularization in smokers and non-smokers. Methods: Thirty subjects, 15 smokers and 15 non-smokers, were selected. Each subject had one Miller Class I or II recession in a non-molar tooth. Clinical measurements of probing depth (PD), relative clinical attachment level (CAL), gingival recession (GR), and width of keratinized tissue (KT) were taken at baseline and 3 and 6 months after surgery. The recessions were treated surgically with a coronally positioned flap associated with a subepithelial connective tissue graft. A small portion of this graft was prepared for immunohistochemistry. Blood vessels were identified and counted by the expression of factor VIII-related antigen-stained endothelial cells. Results: Intragroup analysis showed that after 6 months there was a gain in CAL, a decrease in GR, and an increase in KT for both groups (P<0.05), whereas changes in PD were not statistically significant. Smokers had less root coverage than non-smokers (58.02% +/- 19.75% versus 83.35% +/- 18.53%; P<0.05). Furthermore, smokers had more GR (1.48 +/- 0.79 mm versus 0.52 +/- 0.60 mm) than non-smokers (P<0.05). Histomorphometry of the donor tissue revealed a blood vessel density of 49.01 +/- 11.91 vessels/200x field for non-smokers and 36.53 +/- 10.23 vessels/200x field for smokers (P<0.05). Conclusion: Root coverage with a subepithelial connective tissue graft was negatively affected by smoking, which limited and jeopardized treatment results.


The thermal decomposition of salbutamol (a beta(2)-selective adrenoreceptor agonist) was studied using differential scanning calorimetry (DSC) and thermogravimetry/derivative thermogravimetry (TG/DTG). The commercial sample showed a different thermal profile from the standard sample, caused by the presence of excipients; these compounds increase the thermal stability of the drug. Moreover, a higher activation energy was calculated for the pharmaceutical sample. Activation energies were estimated by isothermal and non-isothermal methods for the first stage of the thermal decomposition process. For the isothermal experiments the average values were E(act) = 130 kJ mol(-1) (standard sample) and E(act) = 252 kJ mol(-1) (pharmaceutical sample) in a dynamic nitrogen atmosphere (50 mL min(-1)). For the non-isothermal method, the activation energy was obtained from the plot of the logarithm of the heating rate vs. 1/T in a dynamic air atmosphere (50 mL min(-1)); the calculated values were E(act) = 134 kJ mol(-1) (standard sample) and E(act) = 139 kJ mol(-1) (pharmaceutical sample).
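The non-isothermal estimate from the plot of log heating rate vs. 1/T corresponds to the Ozawa-Flynn-Wall relation, log10(beta) = const - 0.4567*E(act)/(R*T) at fixed conversion. A minimal sketch with synthetic temperatures (not the paper's measurements), using the standard-sample value 134 kJ/mol only to generate the data:

```python
import numpy as np

R = 8.314         # gas constant, J mol^-1 K^-1
Ea_true = 134e3   # J/mol, used only to synthesize consistent data

betas = np.array([2.5, 5.0, 10.0, 20.0])  # heating rates, K/min
# 1/T at a fixed conversion, consistent with the Ozawa relation
# (reference temperature 550 K assumed for the slowest heating rate)
invT = np.log10(betas[0] / betas) / (0.4567 * Ea_true / R) + 1.0 / 550.0

# slope of log10(beta) vs 1/T gives -0.4567*Ea/R
slope = np.polyfit(invT, np.log10(betas), 1)[0]
Ea = -slope * R / 0.4567   # recovered activation energy, J/mol
```

In practice the regression is repeated at several fixed conversion levels, and curvature in Ea versus conversion flags a multi-step decomposition mechanism.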


Aims: To develop and evaluate a screening tool to identify people with diabetes at increased risk of medication problems relating to hypoglycaemia and medication non-adherence. Methods: A retrospective audit of attendances at a diabetes outpatient clinic at a public teaching hospital over a 16-month period was conducted. Logistic regression was undertaken to examine risk factors associated with medication problems relating to hypoglycaemia and medication non-adherence, and the most predictive set of factors comprises the Diabetes Medication Risk Screening Tool. Evaluation of the tool involved assessing sensitivity and specificity, positive and negative predictive values, cut-off scores, inter-rater reliability, and content validity. Results: The Diabetes Medication Risk Screening Tool comprises seven predictive factors: age, living alone, English language, mental and behavioural problems, comorbidity index score, number of medications prescribed, and number of high-risk medications prescribed. The tool has 76.5% sensitivity, 59.5% specificity, a 65.1% positive predictive value, and a 71.8% negative predictive value. A score of 27 or more out of 62 was associated with high risk of a medication problem. The inter-rater reliability of the tool was high (κ = 0.79, 95% CI 0.75-0.84) and the content validity index was 99.4%. Conclusion: The Diabetes Medication Risk Screening Tool has good psychometric properties and can proactively identify people with diabetes at greatest risk of medication problems relating to hypoglycaemia and medication non-adherence.
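The reported diagnostic figures follow directly from a 2x2 confusion table; a minimal sketch with hypothetical counts chosen only for illustration (the audit's actual table is not given in the abstract):

```python
def screening_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV from a 2x2 confusion table
    of screening result (positive/negative) vs. true outcome."""
    return {
        "sensitivity": tp / (tp + fn),  # true positives found
        "specificity": tn / (tn + fp),  # true negatives found
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
    }

# hypothetical counts only -- not the study's data
m = screening_metrics(tp=78, fp=42, fn=24, tn=56)
```

Note that PPV and NPV, unlike sensitivity and specificity, depend on the prevalence of medication problems in the clinic population, so they would shift if the tool were deployed in a different setting.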


Aim. Extrinsic compression of the popliteal artery and the absence of surrounding anatomical abnormalities characterize the functional popliteal artery entrapment syndrome (PAES). The diagnosis is confirmed in individuals who have typical symptoms of popliteal entrapment together with occlusion or important stenosis of the popliteal artery on color duplex sonography (CDS), magnetic resonance imaging (MRI) or arteriography during active plantar flexion-extension maneuvers. However, variable findings in normal asymptomatic subjects have raised doubts as to the validity of these tests. The purpose of this study was to compare the frequency of popliteal artery compression in two groups of asymptomatic subjects, athletes and non-athletes. Methods. Forty-two individuals were studied: 21 were indoor soccer players and 21 were sedentary individuals. Physical activity was evaluated through questionnaires, anthropometric measurements, and a cardiopulmonary exercise test. Evaluation of popliteal artery compression was performed in the lower limbs with CDS, ankle-brachial index (ABI) measurements and continuous-wave Doppler of the posterior tibial artery. Results. The athletes studied fulfilled the criteria for a high level of physical activity, whereas the sedentary subjects met the criteria for a low level of activity. Popliteal artery compression was observed with CDS in 6 (14.2%) of the studied subjects, of whom 2 (4.7%) were athletes and 4 (9.5%) were non-athletes. This difference was not statistically significant (p=0.21). Doppler of the tibial arteries and ABI measurements gave good specificity and sensitivity in the identification of popliteal artery compression. Conclusion. The frequency of popliteal artery compression during maneuvers in normal subjects was 14.2%, irrespective of whether or not they performed regular physical activities. Both Doppler and ABI showed good agreement with CDS and should be considered in screening popliteal arteries in individuals suspected of PAES.