940 results for linear approximation method


Relevance:

30.00%

Publisher:

Abstract:

Coupled-cluster (CC) theory is one of the most successful approaches in high-accuracy quantum chemistry. The present thesis makes a number of contributions to the determination of molecular properties and excitation energies within the CC framework. The multireference CC (MRCC) method proposed by Mukherjee and coworkers (Mk-MRCC) has been benchmarked within the singles and doubles approximation (Mk-MRCCSD) for molecular equilibrium structures. It is demonstrated that Mk-MRCCSD yields reliable results for multireference cases where single-reference CC methods fail. At the same time, the present work also illustrates that Mk-MRCC still suffers from a number of theoretical problems and sometimes gives rise to results of unsatisfactory accuracy. To determine polarizability tensors and excitation spectra in the MRCC framework, the Mk-MRCC linear-response function has been derived together with the corresponding linear-response equations. Pilot applications show that Mk-MRCC linear-response theory suffers from a severe problem when applied to the calculation of dynamic properties and excitation energies: The Mk-MRCC sufficiency conditions give rise to a redundancy in the Mk-MRCC Jacobian matrix, which entails an artificial splitting of certain excited states. This finding has established a new paradigm in MRCC theory, namely that a convincing method should not only yield accurate energies, but ought to allow for the reliable calculation of dynamic properties as well. In the context of single-reference CC theory, an analytic expression for the dipole Hessian matrix, a third-order quantity relevant to infrared spectroscopy, has been derived and implemented within the CC singles and doubles approximation. The advantages of analytic derivatives over numerical differentiation schemes are demonstrated in some pilot applications.

Relevance:

30.00%

Publisher:

Abstract:

In many areas of mathematics it is desirable to understand the monodromy group of a homogeneous linear differential equation. Since only few analytic methods for computing this group are known, in the first part of this thesis we develop a numerical method for approximating its generators. In the second part we summarize the foundations of the theory of uniformization of Riemann surfaces and of arithmetic Fuchsian groups, and explain how our numerical method can help in determining uniformizing differential equations. For arithmetic Fuchsian groups with two generators we obtain the local data and free parameters of Lamé equations that uniformize the associated Riemann surfaces. In the third part we give a brief outline of homological mirror symmetry and introduce the $\widehat{\Gamma}$-class. We explain how it can be used to prove a Hodge-theoretic version of mirror symmetry for toric varieties. From this we derive conjectures about the monodromy group $M$ of Picard-Fuchs equations of certain families $f:\mathcal{X}\rightarrow\mathbb{P}^1$ of $n$-dimensional Calabi-Yau varieties: first, that with respect to a natural basis the monodromy matrices in $M$ have entries in the field $\mathbb{Q}(\zeta(2j+1)/(2\pi i)^{2j+1},\ j=1,\ldots,\lfloor (n-1)/2\rfloor)$; and second, that topological invariants of the mirror partner of a generic fiber of $f:\mathcal{X}\rightarrow\mathbb{P}^1$ can be reconstructed from a special element of $M$. Finally, we use the methods developed in the first part to verify these conjectures, primarily in dimension three. In addition, we compile a list of candidate topological invariants of conjecturally existing three-dimensional Calabi-Yau varieties with $h^{1,1}=1$.
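The core numerical idea, transporting a basis of solutions around a closed loop encircling a singularity and reading off the resulting matrix, can be sketched in a minimal form. The sketch below is an illustration under simplifying assumptions, not the thesis's actual implementation: it integrates the toy Fuchsian system y' = (A/x) y with nilpotent A = [[0,1],[0,0]] around the unit circle with fixed-step RK4, for which the exact monodromy is exp(2*pi*i*A) = [[1, 2*pi*i], [0, 1]].

```python
import cmath

# Toy system y' = (A/x) y with a single regular singular point at x = 0.
A = [[0.0, 1.0], [0.0, 0.0]]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(2)) for i in range(2)]

def rhs(t, y):
    # Parametrize the loop as x(t) = exp(2*pi*i*t), t in [0, 1]; then
    # dy/dt = x'(t) * (A / x(t)) y = 2*pi*i * A y.
    return [2j * cmath.pi * c for c in matvec(A, y)]

def transport(y, n=200):
    # Classical fixed-step RK4 around the loop.
    h = 1.0 / n
    t = 0.0
    for _ in range(n):
        k1 = rhs(t, y)
        k2 = rhs(t + h / 2, [y[i] + h / 2 * k1[i] for i in range(2)])
        k3 = rhs(t + h / 2, [y[i] + h / 2 * k2[i] for i in range(2)])
        k4 = rhs(t + h, [y[i] + h * k3[i] for i in range(2)])
        y = [y[i] + h / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
             for i in range(2)]
        t += h
    return y

# Transport each basis vector once around the loop; the images are the
# columns of the (approximate) monodromy matrix.
cols = [transport([1.0, 0.0]), transport([0.0, 1.0])]
M = [[cols[j][i] for j in range(2)] for i in range(2)]
```

For a genuine uniformizing or Picard-Fuchs equation the right-hand side carries the actual coefficient matrix evaluated along the loop, and one loop per singular point yields one generator of the monodromy group.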

Relevance:

30.00%

Publisher:

Abstract:

This thesis deals with the development and improvement of linear-scaling algorithms for electronic-structure-based molecular dynamics. Molecular dynamics is a method for the computer simulation of the complex interplay between atoms and molecules at finite temperature. A decisive advantage of this method is its high accuracy and predictive power; however, the computational cost, which in general scales cubically with the number of atoms, prevents its application to large systems and long time scales. Starting from a new formalism based on the grand-canonical potential and a factorization of the density matrix, the diagonalization of the corresponding Hamiltonian matrix is avoided. The formalism exploits the fact that, owing to localization, the Hamiltonian and density matrices are sparse, which reduces the computational cost so that it scales linearly with system size. To demonstrate its efficiency, the resulting algorithm is applied to a system of liquid methane subjected to extreme pressure (about 100 GPa) and extreme temperature (2000-8000 K). In the simulations methane dissociates at temperatures above 4000 K, and the formation of sp²-bonded polymeric carbon is observed. The simulations give no indication of diamond formation and therefore have implications for existing planetary models of Neptune and Uranus. Since avoiding the diagonalization of the Hamiltonian matrix entails the inversion of matrices, the problem of computing an (inverse) p-th root of a given matrix is treated as well. This results in a new formula for symmetric positive definite matrices that generalizes the Newton-Schulz iteration, Altman's formula for bounded non-singular operators, and Newton's method for finding roots of functions.
It is proved that the order of convergence is always at least quadratic, and that adaptive adjustment of a parameter q leads to better results in all cases.
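The abstract does not reproduce the new formula itself, but the family of iterations it generalizes can be illustrated with the classical Newton-Schulz iteration for the matrix inverse (the p = 1 case). The following is a minimal sketch, not the thesis's algorithm, using a small hypothetical symmetric positive definite matrix:

```python
# Classical Newton-Schulz iteration X_{k+1} = X_k (2I - A X_k), which
# converges quadratically to A^{-1} whenever ||I - A X_0|| < 1.

def matmul(P, Q):
    n = len(P)
    return [[sum(P[i][k] * Q[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def newton_schulz_inverse(A, iterations=20):
    n = len(A)
    # Standard starting guess: X_0 = A^T / (||A||_1 ||A||_inf),
    # which guarantees the initial residual is a contraction.
    norm1 = max(sum(abs(A[i][j]) for i in range(n)) for j in range(n))
    norminf = max(sum(abs(A[i][j]) for j in range(n)) for i in range(n))
    X = [[A[j][i] / (norm1 * norminf) for j in range(n)] for i in range(n)]
    twoI = [[2.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for _ in range(iterations):
        AX = matmul(A, X)
        R = [[twoI[i][j] - AX[i][j] for j in range(n)] for i in range(n)]
        X = matmul(X, R)
    return X

A = [[4.0, 1.0], [1.0, 3.0]]   # hypothetical SPD example
X = newton_schulz_inverse(A)    # X converges to A^{-1} = [[3,-1],[-1,4]]/11
```

The iteration uses only matrix multiplications, which is exactly why such schemes combine well with sparse linear-scaling formalisms: sparsity can be preserved (with thresholding) where a direct factorization could not be.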

Relevance:

30.00%

Publisher:

Abstract:

Molecular dynamics simulations of silicate and borate glasses and melts: Structure, diffusion dynamics and vibrational properties. In this work computer simulations of the model glass formers SiO2 and B2O3 are presented, using the techniques of classical molecular dynamics (MD) simulations and quantum mechanical calculations based on density functional theory (DFT). The latter limits the system size to about 100-200 atoms. SiO2 and B2O3 are the two most important network formers for industrial applications of oxide glasses. Glass samples are generated by means of a quench from the melt with classical MD simulations and a subsequent structural relaxation with DFT forces. In addition, full ab initio quenches are carried out with a significantly faster cooling rate. In all cases, the structural properties are in good agreement with experimental results from neutron and X-ray scattering. A special focus is on the study of vibrational properties, as they give access to low-temperature thermodynamic properties. The vibrational spectra are calculated by the so-called "frozen phonon" method. In all cases, the DFT curves show an acceptable agreement with experimental results of inelastic neutron scattering. In the case of the model glass former B2O3, a new classical interaction potential is parametrized, based on the liquid trajectory of an ab initio MD simulation at 2300 K. For this purpose, a structural fitting routine is used. The inclusion of 3-body angular interactions leads to a significantly improved agreement of the liquid properties of the classical MD and ab initio MD simulations. However, the generated glass structures, in all cases, show a significantly lower fraction of 3-membered planar boroxol rings than predicted by experimental results (f=60%-80%). The largest boroxol ring fraction of f=15±5% is observed in the full ab initio quenches from 2300 K.
In the case of SiO2, the glass structures after the quantum mechanical relaxation are the basis for calculations of the linear thermal expansion coefficient αL(T), employing the quasi-harmonic approximation. The striking observation is a change of sign of αL(T), accompanied by a temperature range of negative αL(T) at low temperatures, which is in good agreement with experimental results.
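The frozen-phonon idea, freezing atoms at small fixed displacements, evaluating the total energy, and extracting force constants by finite differences, can be sketched in one dimension. The example below is a hypothetical stand-in, with a simple analytic potential playing the role of the expensive DFT total-energy calculation:

```python
import math

# Hypothetical 1D frozen-phonon example: estimate the force constant at the
# equilibrium position by a central second difference of the energy.
def energy(x, k=5.0):
    # Stand-in for a DFT total-energy evaluation of a frozen displacement x.
    return 0.5 * k * x * x

h = 1e-3                         # frozen displacement amplitude
m = 2.0                          # particle mass (arbitrary units)

# Central second difference approximates d2E/dx2 at x = 0.
k_num = (energy(h) - 2.0 * energy(0.0) + energy(-h)) / (h * h)
omega = math.sqrt(k_num / m)     # harmonic vibrational frequency
```

In the real calculation each symmetry-inequivalent atomic displacement contributes a column of the dynamical matrix, whose eigenvalues give the squared vibrational frequencies entering the quasi-harmonic free energy.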

Relevance:

30.00%

Publisher:

Abstract:

In the present thesis we address the problem of detecting and localizing a small spherical target with characteristic electrical properties inside a volume of cylindrical shape, representing a female breast, using microwave imaging (MWI). One of the main goals of this project is to properly extend the existing linear inversion algorithm from planar-slice to volume reconstruction; results obtained under the same conditions and experimental setup are reported for the two different approaches. A preliminary comparison and performance analysis of the reconstruction algorithms is performed via numerical simulations in a software-created environment: a single dipole antenna is used for illuminating the virtual breast phantom from different positions and, for each position, the corresponding scattered field value is registered. Collected data are then exploited to reconstruct the investigation domain, along with the scatterer position, in the form of an image called a pseudospectrum. During this process the tumor is modeled as a dielectric sphere of small radius and, for electromagnetic scattering purposes, it is treated as a point-like source. To improve the performance of the reconstruction technique, we repeat the acquisition for a number of frequencies in a given range: the different pseudospectra, reconstructed from single-frequency data, are incoherently combined with the MUltiple SIgnal Classification (MUSIC) method, which returns an overall enhanced image. We exploit the multi-frequency approach to test the performance of the 3D linear inversion reconstruction algorithm while varying the source position inside the phantom and the height of the antenna plane. Analysis results and reconstructed images are then reported. Finally, we perform 3D reconstruction from experimental data gathered with the acquisition system in the microwave laboratory at DIFA, University of Bologna, for a recently developed breast-phantom prototype; the obtained pseudospectrum and performance analysis for the real model are reported.
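The incoherent multi-frequency combination step can be sketched in a minimal form. The toy grids below are hypothetical, not simulation output: each represents a single-frequency pseudospectrum with a common peak at the scatterer location, and the combination simply averages them element-wise before taking the brightest pixel.

```python
# Hypothetical single-frequency pseudospectra (3x4 image grids) sharing a
# peak at row 1, column 2, with frequency-dependent background clutter.
p_f1 = [[0.1, 0.2, 0.1, 0.1],
        [0.2, 0.3, 0.9, 0.2],
        [0.1, 0.2, 0.2, 0.1]]
p_f2 = [[0.2, 0.1, 0.2, 0.1],
        [0.1, 0.2, 0.8, 0.3],
        [0.2, 0.1, 0.3, 0.2]]

spectra = [p_f1, p_f2]
rows, cols = len(p_f1), len(p_f1[0])

# Incoherent combination: average the pseudospectra pixel by pixel.
combined = [[sum(p[i][j] for p in spectra) / len(spectra) for j in range(cols)]
            for i in range(rows)]

# The brightest pixel of the combined image localizes the scatterer.
peak = max(((i, j) for i in range(rows) for j in range(cols)),
           key=lambda ij: combined[ij[0]][ij[1]])
```

The averaging reinforces the peak that persists across frequencies while frequency-dependent clutter, which moves around, is suppressed.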

Relevance:

30.00%

Publisher:

Abstract:

The chemotherapeutic drug 5-fluorouracil (5-FU) is widely used for treating solid tumors. Response to 5-FU treatment is variable, with 10-30% of patients experiencing serious toxicity, partly explained by reduced activity of dihydropyrimidine dehydrogenase (DPD). DPD converts endogenous uracil (U) into 5,6-dihydrouracil (UH2), and analogously, 5-FU into 5-fluoro-5,6-dihydrouracil (5-FUH2). Combined quantification of U and UH2 with 5-FU and 5-FUH2 may provide a pre-therapeutic assessment of DPD activity and further guide drug dosing during therapy. Here, we report the development of a liquid chromatography-tandem mass spectrometry assay for simultaneous quantification of U, UH2, 5-FU and 5-FUH2 in human plasma. Samples were prepared by liquid-liquid extraction with 10:1 ethyl acetate-2-propanol (v/v). The evaporated samples were reconstituted in 0.1% formic acid and 10 μL aliquots were injected into the HPLC system. Analyte separation was achieved on an Atlantis dC18 column with a mobile phase consisting of 1.0 mM ammonium acetate, 0.5 mM formic acid and 3.3% methanol. Positively ionized analytes were detected by multiple reaction monitoring. The analytical response was linear in the range 0.01-10 μM for U, 0.1-10 μM for UH2, 0.1-75 μM for 5-FU and 0.75-75 μM for 5-FUH2, covering the expected concentration ranges in plasma. The method was validated following the FDA guidelines and applied to clinical samples obtained from ten 5-FU-treated colorectal cancer patients. The present method merges the analysis of 5-FU pharmacokinetics and DPD activity into a single assay, representing a valuable tool to improve the efficacy and safety of 5-FU-based chemotherapy.

Relevance:

30.00%

Publisher:

Abstract:

The aim of this study is to develop a new simple method for analyzing one-dimensional transcranial magnetic stimulation (TMS) mapping studies in humans. Motor evoked potentials (MEP) were recorded from the abductor pollicis brevis (APB) muscle during stimulation at nine different positions on the scalp along a line passing through the APB hot spot and the vertex. Non-linear curve fitting according to the Levenberg-Marquardt algorithm was performed on the averaged amplitude values obtained at all points to find the best-fitting symmetrical and asymmetrical peak functions. Several peak functions could be fitted to the experimental data. Across all subjects, a symmetric, bell-shaped curve, the complementary error function (erfc), gave the best results. This function is characterized by three parameters giving its amplitude, position, and width. None of the mathematical functions tested with fewer or more than three parameters fitted better. The amplitude and position parameters of the erfc were highly correlated with the amplitude at the hot spot and with the location of the center of gravity of the TMS curve. In conclusion, non-linear curve fitting is an accurate method for the mathematical characterization of one-dimensional TMS curves. This is the first method that provides information on amplitude, position and width simultaneously.
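The three-parameter characterization of a mapping profile can be illustrated with a minimal stand-in: instead of the erfc-based shape and the Levenberg-Marquardt algorithm used in the study, the sketch below fits a Gaussian peak (same three parameters: amplitude, position, width) to synthetic data by a coarse grid search over the parameter space. All numbers are hypothetical.

```python
import math

# Synthetic one-dimensional MEP profile sampled along the scalp line.
positions = [i * 0.5 for i in range(-8, 9)]          # stimulation sites (cm)
true_amp, true_pos, true_width = 2.0, 1.0, 1.5
data = [true_amp * math.exp(-((x - true_pos) / true_width) ** 2)
        for x in positions]

def sse(amp, pos, width):
    # Sum of squared residuals between model curve and measured amplitudes.
    return sum((amp * math.exp(-((x - pos) / width) ** 2) - y) ** 2
               for x, y in zip(positions, data))

# Coarse grid search over (amplitude, position, width); a real fit would use
# a gradient-based method such as Levenberg-Marquardt instead.
best = min(((a * 0.1, p * 0.1, w * 0.1)
            for a in range(10, 31)
            for p in range(-20, 21)
            for w in range(5, 26)),
           key=lambda t: sse(*t))
```

The recovered triple (amplitude, position, width) summarizes the whole curve, which is exactly the appeal of the parametric approach over reading off individual data points.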

Relevance:

30.00%

Publisher:

Abstract:

Despite the widespread popularity of linear models for correlated outcomes (e.g. linear mixed models and time series models), distribution diagnostic methodology remains relatively underdeveloped in this context. In this paper we present an easy-to-implement approach that lends itself to graphical displays of model fit. Our approach involves multiplying the estimated marginal residual vector by the Cholesky decomposition of the inverse of the estimated marginal variance matrix. Linear functions of the resulting "rotated" residuals are used to construct an empirical cumulative distribution function (ECDF), whose stochastic limit is characterized. We describe a resampling technique that serves as a computationally efficient parametric bootstrap for generating representatives of the stochastic limit of the ECDF. Through functionals, such representatives are used to construct global tests for the hypothesis of normal marginal errors. In addition, we demonstrate that the ECDF of the predicted random effects, as described by Lange and Ryan (1989), can be formulated as a special case of our approach. Thus, our method supports both omnibus and directed tests. Our method works well in a variety of circumstances, including models having independent units of sampling (clustered data) and models for which all observations are correlated (e.g., a single time series).
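The rotation step can be sketched numerically. One concrete convention (an assumption of this sketch, with a hypothetical 3x3 covariance, not the paper's data): if Sigma is the marginal covariance and Sigma^{-1} = U U^T is its Cholesky factorization, then the rotated residuals U^T r have identity covariance, since U^T Sigma U = I. The code verifies that identity.

```python
def matmul(P, Q):
    n = len(P)
    return [[sum(P[i][k] * Q[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def transpose(P):
    n = len(P)
    return [[P[j][i] for j in range(n)] for i in range(n)]

def cholesky(S):
    # Lower-triangular L with S = L L^T.
    n = len(S)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            L[i][j] = ((S[i][i] - s) ** 0.5 if i == j
                       else (S[i][j] - s) / L[j][j])
    return L

def inverse3(S):
    # Gauss-Jordan inverse of a 3x3 matrix with partial pivoting.
    n = 3
    M = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
         for i, row in enumerate(S)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        d = M[col][col]
        M[col] = [v / d for v in M[col]]
        for r in range(n):
            if r != col:
                f = M[r][col]
                M[r] = [v - f * p for v, p in zip(M[r], M[col])]
    return [row[n:] for row in M]

# Hypothetical estimated marginal covariance (AR(1)-like, SPD).
Sigma = [[1.0, 0.5, 0.25], [0.5, 1.0, 0.5], [0.25, 0.5, 1.0]]
U = cholesky(inverse3(Sigma))
check = matmul(transpose(U), matmul(Sigma, U))   # should be the identity
```

Because the rotated residuals are (asymptotically) independent standard normals under a correctly specified model, their ECDF can be compared directly against the standard normal distribution function.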

Relevance:

30.00%

Publisher:

Abstract:

Despite the widespread popularity of linear models for correlated outcomes (e.g. linear mixed models and time series models), distribution diagnostic methodology remains relatively underdeveloped in this context. In this paper we present an easy-to-implement approach that lends itself to graphical displays of model fit. Our approach involves multiplying the estimated marginal residual vector by the Cholesky decomposition of the inverse of the estimated marginal variance matrix. The resulting "rotated" residuals are used to construct an empirical cumulative distribution function and pointwise standard errors. The theoretical framework, including conditions and asymptotic properties, involves technical details that are motivated by Lange and Ryan (1989), Pierce (1982), and Randles (1982). Our method appears to work well in a variety of circumstances, including models having independent units of sampling (clustered data) and models for which all observations are correlated (e.g., a single time series). Our methods can produce satisfactory results even for models that do not satisfy all of the technical conditions stated in our theory.

Relevance:

30.00%

Publisher:

Abstract:

Various inference procedures for linear regression models with censored failure times have been studied extensively. Recent developments on efficient algorithms to implement these procedures enhance the practical usage of such models in survival analysis. In this article, we present robust inferences for certain covariate effects on the failure time in the presence of "nuisance" confounders under a semiparametric, partial linear regression setting. Specifically, the estimation procedures for the regression coefficients of interest are derived from a working linear model and are valid even when the function of the confounders in the model is not correctly specified. The new proposals are illustrated with two examples and their validity for cases with practical sample sizes is demonstrated via a simulation study.

Relevance:

30.00%

Publisher:

Abstract:

We propose a new method for fitting proportional hazards models with error-prone covariates. Regression coefficients are estimated by solving an estimating equation that is the average of the partial likelihood scores based on imputed true covariates. For the purpose of imputation, a linear spline model is assumed on the baseline hazard. We discuss consistency and asymptotic normality of the resulting estimators, and propose a stochastic approximation scheme to obtain the estimates. The algorithm is easy to implement, and reduces to the ordinary Cox partial likelihood approach when the measurement error has a degenerate distribution. Simulations indicate high efficiency and robustness. We consider the special case where error-prone replicates are available on the unobserved true covariates. As expected, increasing the number of replicates for the unobserved covariates increases efficiency and reduces bias. We illustrate the practical utility of the proposed method with an Eastern Cooperative Oncology Group clinical trial where a genetic marker, c-myc expression level, is subject to measurement error.

Relevance:

30.00%

Publisher:

Abstract:

We introduce a diagnostic test for the mixing distribution in a generalised linear mixed model. The test is based on the difference between the marginal maximum likelihood and conditional maximum likelihood estimates of a subset of the fixed effects in the model. We derive the asymptotic variance of this difference, and propose a test statistic that has a limiting chi-square distribution under the null hypothesis that the mixing distribution is correctly specified. For the important special case of the logistic regression model with random intercepts, we evaluate via simulation the power of the test in finite samples under several alternative distributional forms for the mixing distribution. We illustrate the method by applying it to data from a clinical trial investigating the effects of hormonal contraceptives in women.

Relevance:

30.00%

Publisher:

Abstract:

Background: The goal of this study was to determine whether site-specific differences in the subgingival microbiota could be detected by the checkerboard method in subjects with periodontitis. Methods: Subjects with at least six periodontal pockets with a probing depth (PD) between 5 and 7 mm were enrolled in the study. Subgingival plaque samples were collected with sterile curets by a single-stroke procedure at six selected periodontal sites from 161 subjects (966 subgingival sites). Subgingival bacterial samples were assayed with the checkerboard DNA-DNA hybridization method identifying 37 species. Results: Probing depths of 5, 6, and 7 mm were found at 50% (n = 483), 34% (n = 328), and 16% (n = 155) of sites, respectively. Statistical analysis failed to demonstrate differences in the sum of bacterial counts by tooth type (P = 0.18) or specific location of the sample (P = 0.78). With the exceptions of Campylobacter gracilis (P <0.001) and Actinomyces naeslundii (P <0.001), analysis by general linear model multivariate regression failed to identify subject or sample location factors as explanatory to microbiologic results. A trend of difference in bacterial load by tooth type was found for Prevotella nigrescens (P <0.01). At a cutoff level of ≥1.0 × 10⁵, Porphyromonas gingivalis and Tannerella forsythia (previously T. forsythensis) were present at 48.0% to 56.3% and 46.0% to 51.2% of sampled sites, respectively. Conclusions: Given the similarities in the clinical evidence of periodontitis, the presence and levels of 37 species commonly studied in periodontitis are similar, with no differences between molar, premolar, and incisor/cuspid subgingival sites. This may facilitate microbiologic sampling strategies in subjects during periodontal therapy.

Relevance:

30.00%

Publisher:

Abstract:

Since the introduction of the rope-pump in Nicaragua in the 1990s, the dependence on wells in rural areas has grown steadily. However, little or no attention is paid to rope-pump well performance after installation. Due to financial restraints, groundwater resource monitoring using conventional testing methods is too costly and out of reach of rural municipalities. Nonetheless, there is widespread agreement that without a way to quantify the changes in well performance over time, prioritizing regulatory actions is impossible. A manual pumping test method is presented, which, at a fraction of the cost of a conventional pumping test, measures the specific capacity of rope-pump wells. The method requires only slight modifications to the well and reasonable limitations on well usage prior to testing. The pumping test was performed a minimum of 33 times in three wells over an eight-month period in a small rural community in Chontales, Nicaragua. Data were used to measure seasonal variations in specific well capacity for three rope-pump wells completed in fractured crystalline basalt. Data collected from the tests were analyzed using four methods (equilibrium approximation, time-drawdown during pumping, time-drawdown during recovery, and time-drawdown during late-time recovery) to determine the best data-analysis method. One conventional pumping test was performed to aid in evaluating the manual method. The equilibrium approximation can be performed while in the field with only a calculator and is the most technologically appropriate method for analyzing data. Results from this method overestimate specific capacity by 41% when compared to results from the conventional pumping test. The other analysis methods, requiring more sophisticated tools and higher-level interpretation skills, yielded results that agree to within 14% (pumping phase), 31% (recovery phase) and 133% (late-time recovery) of the conventional test productivity value.
The wide variability in accuracy results principally from difficulties in achieving an equilibrated pumping level and from casing-storage effects in the pumping/recovery data. Decreases in well productivity resulting from naturally occurring seasonal water-table drops varied from insignificant in two wells to 80% in the third. Despite practical and theoretical limitations of the method, the collected data may be useful for municipal institutions to track changes in well behavior, eventually developing a database for planning future groundwater development projects. Furthermore, the data could improve well-users' abilities to self-regulate well usage without expensive aquifer characterization.
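The equilibrium approximation reduces to one division: specific capacity is the pumping rate over the stabilized drawdown, which is why it can be done in the field with only a calculator. The numbers below are hypothetical field measurements, not data from the study:

```python
# Hypothetical field measurements for a rope-pump well test.
bucket_volume_l = 18.0      # liters per filled bucket
fill_time_s = 45.0          # seconds to fill one bucket while pumping
static_level_m = 6.2        # depth to water before pumping (m)
pumping_level_m = 8.7       # stabilized depth to water while pumping (m)

# Pumping rate in liters per minute.
q = bucket_volume_l / fill_time_s * 60.0

# Equilibrated drawdown in meters.
drawdown = pumping_level_m - static_level_m

# Specific capacity in liters per minute per meter of drawdown.
specific_capacity = q / drawdown
```

Tracking this single number over the seasons is what lets a municipality see a well's productivity decline without a conventional (and costly) aquifer test.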

Relevance:

30.00%

Publisher:

Abstract:

Linear programs, or LPs, are often used in optimization problems, such as improving manufacturing efficiency or maximizing the yield from limited resources. The most common method for solving LPs is the Simplex Method, which will yield a solution, if one exists, over the real numbers. From a purely numerical standpoint, it will be an optimal solution, but quite often we desire an optimal integer solution. A linear program in which the variables are also constrained to be integers is called an integer linear program, or ILP. The focus of this report is to present a parallel algorithm for solving ILPs. We discuss a serial algorithm using a breadth-first branch-and-bound search to check the feasible solution space, and then extend it into a parallel algorithm using a client-server model. In the parallel mode, the search may not be truly breadth-first, depending on the solution time for each node in the solution tree. Our search takes advantage of pruning, often resulting in super-linear improvements in solution time. Finally, we present results from sample ILPs, describe a few modifications to enhance the algorithm and improve solution time, and offer suggestions for future work.
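The serial breadth-first branch-and-bound idea can be sketched on a tiny special-case ILP. The example below is an illustration, not the report's client-server code: it solves a 0/1 knapsack problem (maximize the total value subject to a weight capacity, variables in {0, 1}) with a FIFO queue for breadth-first order and the fractional (LP) relaxation of the remaining items as the pruning bound.

```python
from collections import deque

# Tiny 0/1 knapsack ILP: maximize sum(v_i * x_i) s.t. sum(w_i * x_i) <= W.
# Items are pre-sorted by value/weight so the fractional relaxation is easy.
values = [60, 100, 120]
weights = [10, 20, 30]
capacity = 50

def bound(i, value, room):
    # Upper bound from the fractional-knapsack (LP) relaxation of items i..
    b = value
    for j in range(i, len(values)):
        if weights[j] <= room:
            room -= weights[j]
            b += values[j]
        else:
            b += values[j] * room / weights[j]   # fractional last item
            break
    return b

best = 0
queue = deque([(0, 0, capacity)])   # (next item index, value so far, room left)
while queue:
    i, value, room = queue.popleft()
    best = max(best, value)
    if i == len(values) or bound(i, value, room) <= best:
        continue                    # prune: the relaxation cannot beat incumbent
    if weights[i] <= room:
        queue.append((i + 1, value + values[i], room - weights[i]))  # x_i = 1
    queue.append((i + 1, value, room))                               # x_i = 0
```

In the parallel version described in the report, nodes of this tree are handed to clients by a server, so the pruning bound (`best`) must be shared and the traversal is only approximately breadth-first.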