966 results for ORDER ACCURACY APPROXIMATIONS
Abstract:
Coupled-cluster (CC) theory is one of the most successful approaches in high-accuracy quantum chemistry. The present thesis makes a number of contributions to the determination of molecular properties and excitation energies within the CC framework. The multireference CC (MRCC) method proposed by Mukherjee and coworkers (Mk-MRCC) has been benchmarked within the singles and doubles approximation (Mk-MRCCSD) for molecular equilibrium structures. It is demonstrated that Mk-MRCCSD yields reliable results for multireference cases where single-reference CC methods fail. At the same time, the present work also illustrates that Mk-MRCC still suffers from a number of theoretical problems and sometimes gives rise to results of unsatisfactory accuracy. To determine polarizability tensors and excitation spectra in the MRCC framework, the Mk-MRCC linear-response function has been derived together with the corresponding linear-response equations. Pilot applications show that Mk-MRCC linear-response theory suffers from a severe problem when applied to the calculation of dynamic properties and excitation energies: The Mk-MRCC sufficiency conditions give rise to a redundancy in the Mk-MRCC Jacobian matrix, which entails an artificial splitting of certain excited states. This finding has established a new paradigm in MRCC theory, namely that a convincing method should not only yield accurate energies, but ought to allow for the reliable calculation of dynamic properties as well. In the context of single-reference CC theory, an analytic expression for the dipole Hessian matrix, a third-order quantity relevant to infrared spectroscopy, has been derived and implemented within the CC singles and doubles approximation. The advantages of analytic derivatives over numerical differentiation schemes are demonstrated in some pilot applications.
Abstract:
We give a brief review of the functional renormalization method in quantum field theory, which is intrinsically nonperturbative, in terms of both the Polchinski equation for the Wilsonian action and the Wetterich equation for the generator of the proper vertices. For the latter we show a simple application to a theory with one real scalar field within the LPA and LPA' approximations. For the former, we give a covariant "Hamiltonian" version of the Polchinski equation, obtained by performing a Legendre transform of the flow for the corresponding effective Lagrangian and replacing arbitrarily high-order derivatives of the fields with momentum fields. This approach is suitable for studying new truncations in the derivative expansion. We apply this formulation to a theory with one real scalar field and, as a novel result, derive the flow equations for a theory with N real scalar fields with O(N) internal symmetry. Within this new approach we analyze numerically the scaling solutions for N=1 in d=3 (the critical Ising model), at leading order in the derivative expansion with an infinite number of couplings, encoded in two functions V(phi) and Z(phi), obtaining an estimate of the quantum anomalous dimension to within 10% accuracy (compared with Monte Carlo results).
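For reference, the Wetterich equation mentioned above takes the standard one-loop form for the scale-dependent effective action:

```latex
\partial_t \Gamma_k[\phi]
  = \frac{1}{2}\,\mathrm{Tr}\!\left[
      \left(\Gamma_k^{(2)}[\phi] + R_k\right)^{-1} \partial_t R_k
    \right],
\qquad t = \ln(k/\Lambda),
```

where $\Gamma_k^{(2)}$ is the second functional derivative of $\Gamma_k$ and $R_k$ is the infrared regulator.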
Abstract:
This study investigates the feasibility of predicting the moment amplification in beam-column elements of steel moment-resisting frames using the structure's natural period. Unlike previous methods, which perform moment amplification on a story-by-story basis, this study develops and tests two models that aim to predict a global amplification factor indicative of the largest relevant instance of local moment amplification in the structure. To this end, a variety of two-dimensional frames is investigated using first- and second-order finite element analysis. The observed moment amplification is then compared with the predicted amplification based on the structure's natural period, which is calculated by first-order finite element analysis. As a benchmark, design moment amplification factors are calculated for each story using the story stiffness approach, and serve to demonstrate the relative conservatism and accuracy of the proposed models with respect to current practice in design. The study finds that the observed moment amplification factors may vastly exceed expectations when internal member stresses are initially very small. Where the internal stresses are small relative to the member capacities, these cases are inconsequential for design. To qualify the significance of the observed amplification factors, two parameters are used: the second-order moment normalized to the plastic moment capacity, and the combined flexural and axial stress interaction equations developed by AISC.
Abstract:
This paper examines the accuracy of software-based on-line energy estimation techniques. It evaluates today's most widespread energy estimation model in order to investigate whether the current methodology of pure software-based energy estimation running on a sensor node itself can indeed reliably and accurately determine its energy consumption, independent of the particular node instance, the traffic load the node is exposed to, or the MAC protocol the node is running. The paper enhances today's widely used energy estimation model by integrating radio transceiver switches into the model, and proposes a methodology to find the optimal estimation model parameters. It proves by statistical validation with experimental data that the proposed model enhancement and parameter calibration methodology significantly increases the estimation accuracy.
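A software-based estimator of the kind discussed typically accumulates the time spent in each hardware state and multiplies by a per-state current draw; the enhancement described adds a fixed energy cost per radio transceiver switch. A minimal sketch, assuming illustrative placeholder values (the state currents and switch cost below are hypothetical, not the paper's calibrated parameters):

```python
# Sketch of a state-based on-line energy estimator for a sensor node.
# All parameter values are illustrative placeholders, not calibrated ones.

VOLTAGE = 3.0  # supply voltage in volts (assumed constant)

# Assumed per-state current draw in amperes (hypothetical values).
STATE_CURRENT = {
    "cpu_active": 0.0018,
    "cpu_sleep":  0.0000054,
    "radio_rx":   0.0197,
    "radio_tx":   0.0174,
}

# Assumed fixed energy cost per radio transceiver switch, in joules.
SWITCH_ENERGY = 0.00002

def estimate_energy(state_times, n_radio_switches):
    """Energy in joules: V * I_state * t_state summed over states,
    plus a fixed cost per transceiver switch (the model enhancement)."""
    conduction = sum(VOLTAGE * STATE_CURRENT[s] * t
                     for s, t in state_times.items())
    return conduction + SWITCH_ENERGY * n_radio_switches
```

Calibration would then amount to choosing the current and switch-cost parameters that minimize the gap to measured consumption.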
Abstract:
Image-guided microsurgery requires accuracies an order of magnitude higher than today's navigation systems provide. A critical step toward the achievement of such low-error requirements is a highly accurate and verified patient-to-image registration. With the aim of reducing target registration error to a level that would facilitate the use of image-guided robotic microsurgery on the rigid anatomy of the head, we have developed a semiautomatic fiducial detection technique. Automatic force-controlled localization of fiducials on the patient is achieved through the implementation of a robotic-controlled tactile search within the head of a standard surgical screw. Precise detection of the corresponding fiducials in the image data is realized using an automated model-based matching algorithm on high-resolution, isometric cone beam CT images. Verification of the registration technique on phantoms demonstrated that through the elimination of user variability, clinically relevant target registration errors of approximately 0.1 mm could be achieved.
Abstract:
Background mortality is an essential component of any forest growth and yield model. Forecasts of mortality contribute largely to the variability and accuracy of model predictions at the tree, stand and forest level. In the present study, I implement and evaluate state-of-the-art techniques to increase the accuracy of individual tree mortality models, similar to those used in many of the current variants of the Forest Vegetation Simulator, using data from North Idaho and Montana. The first technique addresses methods to correct for bias induced by measurement error typically present in competition variables. The second implements survival regression and evaluates its performance against the traditional logistic regression approach. I selected the regression calibration (RC) algorithm as a good candidate for addressing the measurement error problem. Two logistic regression models for each species were fitted, one ignoring the measurement error, which is the “naïve” approach, and the other applying RC. The models fitted with RC outperformed the naïve models in terms of discrimination when the competition variable was found to be statistically significant. The effect of RC was more obvious where the measurement error variance was large and for more shade-intolerant species. The process of model fitting and variable selection revealed that past emphasis on DBH as a predictor variable for mortality, while producing models with strong metrics of fit, may make models less generalizable. The evaluation of the error variance estimator developed by Stage and Wykoff (1998), which is core to the implementation of RC, under different spatial patterns and diameter distributions revealed that the Stage and Wykoff estimate notably overestimated the true variance in all simulated stands except those that were clustered. The results show a systematic bias even when all the assumptions made by the authors are satisfied.
I argue that this is the result of the Poisson-based estimate ignoring the overlapping area of potential plots around a tree. The effects of the variance estimate, especially in the application phase, justify future efforts to improve its accuracy. The second technique implemented and evaluated is a survival regression model that accounts for the time-dependent nature of variables, such as diameter and competition variables, and the interval-censored nature of data collected from remeasured plots. The performance of the model is compared with the traditional logistic regression model as a tool to predict individual tree mortality. Validation of both approaches shows that the survival regression approach discriminates better between dead and live trees for all species. In conclusion, I showed that the proposed techniques do increase the accuracy of individual tree mortality models, and are a promising first step towards the next generation of background mortality models. I have also identified the next steps to undertake in order to advance mortality models further.
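Discrimination, the criterion used above to compare the naïve, RC-fitted, and survival models, is commonly measured as the area under the ROC curve (AUC). A minimal stdlib sketch of the rank-based (Mann-Whitney) AUC estimate, using synthetic mortality scores rather than the study's data:

```python
def auc(scores_dead, scores_alive):
    """Rank-based AUC: the probability that a randomly chosen dead tree
    receives a higher mortality score than a randomly chosen live tree,
    counting ties as one half (Mann-Whitney formulation)."""
    wins = 0.0
    for d in scores_dead:
        for a in scores_alive:
            if d > a:
                wins += 1.0
            elif d == a:
                wins += 0.5
    return wins / (len(scores_dead) * len(scores_alive))

# Synthetic predicted mortality probabilities for dead vs. live trees.
example_auc = auc([0.9, 0.8, 0.4], [0.7, 0.3, 0.2])  # 8 of 9 pairs correct
```

An AUC of 0.5 corresponds to no discrimination and 1.0 to perfect separation of dead and live trees.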
Abstract:
KIVA is a FORTRAN code developed by Los Alamos National Laboratory to simulate the complete engine cycle. KIVA is a flow solver used to calculate the properties of a fluid flow field. It employs various numerical schemes and methods to solve the Navier-Stokes equations. This project involves improving the accuracy of one such scheme by upgrading it to a higher-order scheme. The numerical scheme to be modified is used in the critical final-stage calculation called the rezoning phase. The primary objective of this project is to implement a higher-order numerical scheme and to validate and verify that the new scheme is better than the existing one. The latest version of the KIVA family (KIVA 4) is used for implementing the higher-order scheme because it supports unstructured meshes. The code is validated using the traditional shock tube problem, and the results are verified to be more accurate than those of the existing schemes with reference to the analytical result. A convection test is performed to compare computational accuracy for convective transfer; the new scheme is found to exhibit less numerical diffusion than the existing schemes. A four-valve pentroof engine, an example case from the KIVA package, is used as an application to ensure the stability of the scheme in practical use. The results are compared for the temperature profile. Despite these positive results, the implemented numerical scheme has the downside of consuming more CPU time in the computational analysis; a detailed comparison is provided. Overall, the implementation of the higher-order scheme in the latest code, KIVA 4, is verified to be successful, and it gives better results than the existing scheme, satisfying the objective of this project.
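The numerical-diffusion comparison described can be illustrated on 1-D linear advection, where a first-order upwind scheme smears a pulse far more than a second-order scheme such as Lax-Wendroff. This toy stdlib example stands in for the kind of low- vs. high-order contrast tested in the convection test; it does not reproduce KIVA's actual rezoning scheme:

```python
# 1-D linear advection u_t + c u_x = 0 on a periodic grid:
# compare peak preservation of first-order upwind vs. Lax-Wendroff.
N, cfl, steps = 100, 0.5, 100  # grid cells, Courant number, time steps

def step_upwind(u):
    # First-order upwind (c > 0): strongly diffusive.
    return [u[i] - cfl * (u[i] - u[i - 1]) for i in range(len(u))]

def step_lax_wendroff(u):
    # Second-order Lax-Wendroff: much less diffusive (but dispersive).
    n = len(u)
    return [u[i]
            - 0.5 * cfl * (u[(i + 1) % n] - u[i - 1])
            + 0.5 * cfl * cfl * (u[(i + 1) % n] - 2 * u[i] + u[i - 1])
            for i in range(n)]

# Initial condition: a narrow square pulse.
u0 = [1.0 if 10 <= i < 20 else 0.0 for i in range(N)]
u_up, u_lw = u0[:], u0[:]
for _ in range(steps):
    u_up = step_upwind(u_up)
    u_lw = step_lax_wendroff(u_lw)
# After 100 steps the upwind peak has decayed noticeably below 1,
# while the Lax-Wendroff peak stays close to the initial amplitude.
```

The same trade-off noted in the abstract appears here: the higher-order stencil touches more neighbors per cell and therefore costs more CPU time per step.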
Abstract:
The prognosis for lung cancer patients remains poor. Five-year survival rates have been reported to be 15%. Studies have shown that dose escalation to the tumor can lead to better local control and subsequently better overall survival. However, dose to the lung tumor is limited by normal tissue toxicity. The most prevalent thoracic toxicity is radiation pneumonitis. In order to determine a safe dose that can be delivered to the healthy lung, researchers have turned to mathematical models predicting the rate of radiation pneumonitis. However, these models rely on simple metrics based on the dose-volume histogram and are not yet accurate enough to be used for dose escalation trials. The purpose of this work was to improve the fit of predictive risk models for radiation pneumonitis and to show the dosimetric benefit of using the models to guide patient treatment planning. The study was divided into 3 specific aims. The first two specific aims were focused on improving the fit of the predictive model. In Specific Aim 1 we incorporated information about the spatial location of the lung dose distribution into a predictive model. In Specific Aim 2 we incorporated ventilation-based functional information into a predictive pneumonitis model. In the third specific aim, a proof-of-principle virtual simulation was performed in which a model-determined limit was used to scale the prescription dose. The data showed that for our patient cohort, the fit of the model to the data was not improved by incorporating spatial information. Although we were not able to achieve a significant improvement in model fit using pre-treatment ventilation, we show some promising results indicating that ventilation imaging can provide useful information about lung function in lung cancer patients.
The virtual simulation trial demonstrated that using a personalized lung dose limit derived from a predictive model results in a different prescription than was achieved with the clinically used plan, thus demonstrating the utility of a normal tissue toxicity model in personalizing the prescription dose.
Abstract:
Clock synchronization in the order of nanoseconds is one of the critical factors for time-based localization. Currently used time synchronization methods were developed for the more relaxed needs of network operation, and their usability for positioning should be carefully evaluated. In this paper, we are particularly interested in GPS-based time synchronization. To judge its usability for localization we need a method that can evaluate the achieved time synchronization with nanosecond accuracy. Our method to evaluate the synchronization accuracy is inspired by signal processing algorithms and relies on fine-grained time information. The method is able to calculate the clock offset and skew between devices with nanosecond accuracy in real time. It was implemented using software-defined radio technology. We demonstrate that GPS-based synchronization suffers from a remaining clock offset in the range of a few hundred nanoseconds, but that the clock skew is negligible. Finally, we determine a corresponding lower bound on the expected positioning error.
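Clock offset and skew between two devices can be recovered from paired timestamps by a least-squares line fit, which yields the same two quantities the method above extracts at nanosecond scale. A stdlib sketch on synthetic timestamps (this illustrates the offset/skew model only, not the paper's signal-processing pipeline):

```python
def fit_offset_skew(t_ref, t_dev):
    """Least-squares fit of the clock model t_dev = (1 + skew) * t_ref + offset.
    Returns (offset in seconds, dimensionless skew)."""
    n = len(t_ref)
    mx = sum(t_ref) / n
    my = sum(t_dev) / n
    sxx = sum((x - mx) ** 2 for x in t_ref)
    sxy = sum((x - mx) * (y - my) for x, y in zip(t_ref, t_dev))
    slope = sxy / sxx          # estimates 1 + skew
    offset = my - slope * mx   # intercept of the fitted line
    return offset, slope - 1.0

# Synthetic device clock: 250 ns offset and 2 ppm skew vs. the reference.
ref = [i * 0.1 for i in range(100)]
dev = [250e-9 + (1.0 + 2e-6) * t for t in ref]
offset, skew = fit_offset_skew(ref, dev)
```

With noiseless samples the fit recovers the 250 ns offset and 2 ppm skew exactly; real measurements would add timestamping noise on top of this model.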
Abstract:
BACKGROUND The accuracy of CT pulmonary angiography (CTPA) in detecting or excluding pulmonary embolism has not yet been assessed in patients with high body weight (BW). METHODS This retrospective study involved CTPAs of 114 patients weighing 75-99 kg and those of 123 consecutive patients weighing 100-150 kg. Three independent blinded radiologists analyzed all examinations in randomized order. Readers' data on pulmonary emboli were compared with a composite reference standard, comprising clinical probability, reference CTPA result, additional imaging when performed and 90-day follow-up. Results in both BW groups and in two body mass index (BMI) groups (BMI <30 kg/m(2) and BMI ≥ 30 kg/m(2), i.e., non-obese and obese patients) were compared. RESULTS The prevalence of pulmonary embolism was not significantly different in the BW groups (P=1.0). The reference CTPA result was positive in 23 of 114 patients in the 75-99 kg group and in 25 of 123 patients in the ≥ 100 kg group, respectively (odds ratio, 0.991; 95% confidence interval, 0.501 to 1.957; P=1.0). No pulmonary embolism-related death or venous thromboembolism occurred during follow-up. The mean accuracy of three readers was 91.5% in the 75-99 kg group and 89.9% in the ≥ 100 kg group (odds ratio, 1.207; 95% confidence interval, 0.451 to 3.255; P=0.495), and 89.9% in non-obese patients and 91.2% in obese patients (odds ratio, 0.853; 95% confidence interval, 0.317 to 2.319; P=0.816). CONCLUSION The diagnostic accuracy of CTPA in patients weighing 75-99 kg or 100-150 kg proved not to be significantly different.
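The odds ratio reported for the reference CTPA result can be reproduced directly from the counts given in the abstract (23 positives of 114 vs. 25 positives of 123); a minimal stdlib check:

```python
def odds_ratio(a, n1, b, n2):
    """Odds ratio comparing a/n1 positives in group 1 with b/n2 in group 2."""
    return (a / (n1 - a)) / (b / (n2 - b))

# 75-99 kg group: 23 of 114 positive; >= 100 kg group: 25 of 123 positive.
ratio = odds_ratio(23, 114, 25, 123)  # -> 0.991 to three decimals
```

This matches the odds ratio of 0.991 quoted in the results.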
Abstract:
It has been repeatedly demonstrated that athletes in a state of ego depletion do not perform up to their capabilities. We assume that autonomous self-control exertion, in contrast to forced self-control exertion, can serve as a buffer against ego depletion effects and can help individuals to show superior performance. In the present study, we applied a between-subjects design to test the assumption that autonomously exerted self-control is less detrimental for subsequent self-control performance in sports than is forced self-control exertion. In a primary self-control task, the level of autonomy was manipulated through specific instructions, resulting in three experimental conditions (autonomy-supportive: n = 19; neutral: n = 19; controlling: n = 19). As a secondary self-control task, participants executed a series of tennis serves under high-pressure conditions, and performance accuracy served as our dependent variable. As expected, a one-way between-groups ANOVA revealed that participants from the autonomy-supportive condition performed significantly better under pressure than did participants from the controlling condition. These results further highlight the importance of autonomy-supportive instructions in order to enable athletes to show superior achievements in high-pressure situations. Practical implications for the coach–athlete relationship are discussed.
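The group comparison described rests on a one-way between-groups ANOVA. For readers unfamiliar with the mechanics, a stdlib sketch of the F statistic on synthetic serve-accuracy scores (the numbers below are invented for illustration, not the study's data):

```python
def one_way_f(groups):
    """F statistic for a one-way between-groups ANOVA:
    between-group mean square over within-group mean square."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g)
                    for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical serve-accuracy scores for the three instruction conditions.
scores = {
    "autonomy-supportive": [7, 8, 9],
    "neutral":             [6, 7, 8],
    "controlling":         [5, 6, 7],
}
F = one_way_f(list(scores.values()))
```

The resulting F would then be compared against the critical value for (k-1, n-k) degrees of freedom to assess significance.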
Abstract:
A large deviations type approximation to the probability of ruin within a finite time for the compound Poisson risk process perturbed by diffusion is derived. This approximation is based on the saddlepoint method and generalizes the approximation for the non-perturbed risk process by Barndorff-Nielsen and Schmidli (Scand Actuar J 1995(2):169–186, 1995). An importance sampling approximation to this probability of ruin is also provided. Numerical illustrations assess the accuracy of the saddlepoint approximation using importance sampling as a benchmark. The relative deviations between saddlepoint approximation and importance sampling are very small, even for extremely small probabilities of ruin. The saddlepoint approximation is however substantially faster to compute.
Abstract:
The Astronomical Institute of the University of Bern (AIUB) is conducting several search campaigns for orbital debris. The debris objects are discovered during systematic survey observations. In general only a short observation arc, or tracklet, is available for most of these objects. From this discovery tracklet a first orbit determination is computed in order to be able to find the object again in subsequent follow-up observations. The additional observations are used in the orbit improvement process to obtain accurate orbits to be included in a catalogue. In this paper, the accuracy of the initial orbit determination is analyzed. It depends on a number of factors: tracklet length, number of observations, type of orbit, astrometric error, and observation geometry. The latter is characterized by both the position of the object along its orbit and the location of the observing station. Different positions involve different distances from the target object and a different observing angle with respect to its orbital plane and trajectory. The present analysis aims at optimizing the geometry of the discovery observation depending on the considered orbit.
Abstract:
Charcoal particles in pollen slides are often abundant, and analysts are thus faced with the problem of setting the minimum counting sum as small as possible in order to save time. We analysed the reliability of charcoal-concentration estimates based on different counting sums, using simulated low- to high-count samples. Bootstrap simulations indicate that the variability of inferred charcoal concentrations increases progressively with decreasing sums. Below 200 items (i.e., the sum of charcoal particles and exotic marker grains), reconstructed fire incidence is either too high or too low. Statistical comparisons show that the means of the bootstrap simulations stabilize after 200 counts. Moreover, a count of 200-300 items is sufficient to produce a charcoal-concentration estimate with less than ±5% error when compared with high-count samples of 1000 items, for charcoal/marker-grain ratios of 0.1-0.91. If, however, this ratio is extremely high or low (> 0.91 or < 0.1) and such samples are frequent, we suggest that marker grains be reduced or added prior to new sample processing.
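The bootstrap exercise described can be sketched in a few lines: repeatedly resample a fixed charcoal/marker mixture at different counting sums and observe how the spread of the concentration estimate shrinks as the sum grows. Illustrative stdlib code with a synthetic mixture, not the authors' samples:

```python
import random
import statistics

random.seed(42)  # fixed seed for a reproducible illustration

# Synthetic slide: 30% of counted items are charcoal, 70% marker grains.
# The concentration estimate is the charcoal/marker-grain ratio.
TRUE_CHARCOAL_FRACTION = 0.3

def bootstrap_sd(count_sum, n_boot=2000):
    """Spread (SD) of the charcoal/marker ratio at a given counting sum."""
    estimates = []
    for _ in range(n_boot):
        charcoal = sum(random.random() < TRUE_CHARCOAL_FRACTION
                       for _ in range(count_sum))
        marker = count_sum - charcoal
        if marker:  # skip the (rare) draws with no marker grains
            estimates.append(charcoal / marker)
    return statistics.stdev(estimates)

# The estimate is far noisier at a counting sum of 50 items than at 500.
sd_small, sd_large = bootstrap_sd(50), bootstrap_sd(500)
```

This mirrors the abstract's finding qualitatively: variability rises steeply as the counting sum falls, which is what drives the 200-300-item recommendation.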
Abstract:
The saddlepoint method provides accurate approximations for the distributions of many test statistics and estimators, and for important probabilities arising in various stochastic models. The saddlepoint approximation is a large deviations technique which is substantially more accurate than limiting normal or Edgeworth approximations, especially in the presence of very small sample sizes or very small probabilities. The outstanding accuracy of the saddlepoint approximation can be explained by the fact that it has bounded relative error.
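The bounded relative error can be seen concretely in the saddlepoint density approximation f̂(x) = exp(K(ŝ) − ŝx) / sqrt(2π K''(ŝ)), where K is the cumulant generating function and ŝ solves K'(ŝ) = x. For a sum of n unit exponentials the exact density is gamma, and the saddlepoint error reduces to Stirling's error for (n−1)!, which is the same at every x. A stdlib check:

```python
import math

N = 5  # number of i.i.d. unit-exponential summands

def saddlepoint_density(x):
    """Saddlepoint density for a sum of N unit exponentials,
    whose CGF is K(s) = -N * log(1 - s) for s < 1."""
    s_hat = 1.0 - N / x             # solves K'(s) = N / (1 - s) = x
    k = -N * math.log(1.0 - s_hat)  # K(s_hat)
    k2 = N / (1.0 - s_hat) ** 2     # K''(s_hat)
    return math.exp(k - s_hat * x) / math.sqrt(2.0 * math.pi * k2)

def exact_gamma_density(x):
    """Exact Gamma(N, 1) density of the sum."""
    return x ** (N - 1) * math.exp(-x) / math.factorial(N - 1)

# The ratio approximation/exact is the same constant at every x
# (about 1.017 for N = 5), including far into the tail.
ratios = [saddlepoint_density(x) / exact_gamma_density(x)
          for x in (2.0, 5.0, 20.0)]
```

A normal approximation, by contrast, has relative error that grows without bound in the tail; here the relative error stays under 2% everywhere, even at x = 20.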