934 results for static computer simulation


Relevance: 80.00%

Abstract:

This thesis deals with the development and improvement of linear-scaling algorithms for electronic-structure-based molecular dynamics. Molecular dynamics is a method for the computer simulation of the complex interplay between atoms and molecules at finite temperature. A decisive advantage of this method is its high accuracy and predictive power. However, its computational cost, which in principle scales cubically with the number of atoms, prevents its application to large systems and long time scales. Starting from a new formalism based on the grand canonical potential and a factorization of the density matrix, the diagonalization of the corresponding Hamiltonian matrix is avoided. This formalism exploits the fact that the Hamiltonian and the density matrix are sparse due to localization, which reduces the computational cost so that it scales linearly with system size. To demonstrate its efficiency, the resulting algorithm is applied to a system of liquid methane subjected to extreme pressure (about 100 GPa) and extreme temperature (2000 - 8000 K). In the simulation, methane dissociates at temperatures above 4000 K, and the formation of sp²-bonded polymeric carbon is observed. The simulations provide no evidence for the formation of diamond and therefore have implications for existing planetary models of Neptune and Uranus. Since avoiding the diagonalization of the Hamiltonian matrix entails the inversion of matrices, the problem of computing an (inverse) p-th root of a given matrix is also addressed. This results in a new iteration formula for symmetric positive definite matrices. It generalizes the Newton-Schulz iteration, Altman's formula for bounded non-singular operators, and Newton's method for finding roots of functions. It is proved that the order of convergence is always at least quadratic and that adaptive tuning of a parameter q leads to better results in all cases.
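The abstract does not reproduce the new formula itself, but the classical Newton iteration it generalizes is easy to sketch. The following NumPy snippet is an illustration, not the thesis's algorithm; the starting-guess scaling is a standard textbook choice. It computes the inverse p-th root of a symmetric positive definite matrix without diagonalization:

```python
import numpy as np

def inv_pth_root(A, p=2, tol=1e-12, max_iter=100):
    """Newton iteration for the inverse p-th root A^(-1/p) of a symmetric
    positive definite matrix A; p = 1 recovers the Newton-Schulz iteration
    for the matrix inverse, X_{k+1} = X_k (2I - A X_k)."""
    n = A.shape[0]
    I = np.eye(n)
    # Scale the starting guess so the eigenvalues of A X0^p lie in (0, 1],
    # which guarantees convergence for SPD matrices.
    X = I * (1.0 / np.linalg.norm(A, 2)) ** (1.0 / p)
    for _ in range(max_iter):
        R = I - A @ np.linalg.matrix_power(X, p)   # residual, zero at the solution
        if np.linalg.norm(R) < tol:
            break
        X = X @ (I + R / p)   # X_{k+1} = X_k ((p+1) I - A X_k^p) / p
    return X

# Quick check: for p = 2, X = A^(-1/2), so X @ X @ A should be the identity.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
X = inv_pth_root(A, p=2)
print(np.allclose(X @ X @ A, np.eye(2)))   # True
```

Because every step uses only matrix products, sparsity of the Hamiltonian and density matrices keeps the cost low, which is the property the linear-scaling method relies on.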

Relevance: 80.00%

Abstract:

Coarse graining is a popular technique used in physics to speed up the computer simulation of molecular fluids. An essential part of this technique is a method that solves the inverse problem of determining the interaction potential, or its parameters, from given structural data. Because of discrepancies between model and reality, the potential is not unique, so the stability of such a method and its convergence to a meaningful solution are genuine concerns.

In this work, we investigate empirically whether coarse graining can be improved by applying the theory of inverse problems from applied mathematics. In particular, we use singular value analysis to reveal the weak interaction parameters, which have a negligible influence on the structure of the fluid and which cause the non-uniqueness of the solution. Further, we apply a regularizing Levenberg-Marquardt method, which is stable against the discrepancies mentioned above, and compare it to the existing physical methods - the Iterative Boltzmann Inversion and the Inverse Monte Carlo method - which are fast and well adapted to the problem but sometimes suffer from convergence problems.

From an analysis of the Iterative Boltzmann Inversion, we derive a meaningful approximation of the structure and use it to construct a modification of the Levenberg-Marquardt method. We employ the latter to reconstruct the interaction parameters from experimental data for liquid argon and nitrogen, and we show that the modified method is stable, convergent, and fast. Further, the singular value analysis of the structure and its approximation makes it possible to determine the crucial interaction parameters, that is, to simplify the modeling of interactions. Our results therefore build a rigorous bridge between the inverse problem from physics and the powerful solution tools from mathematics.
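As an illustration of the two mathematical tools named above - not the authors' code - the following sketch applies a damped Levenberg-Marquardt update and a singular value analysis to a hypothetical forward model standing in for the map from interaction parameters to fluid structure:

```python
import numpy as np

def lm_step(J, r, lam):
    """One regularized Levenberg-Marquardt update: (J^T J + lam*I) dp = -J^T r."""
    return np.linalg.solve(J.T @ J + lam * np.eye(J.shape[1]), -J.T @ r)

# Hypothetical forward model g(p): stands in for the map from interaction
# parameters to a structural observable (e.g. an RDF sampled at points x).
def forward(p, x):
    return p[0] * np.exp(-x / p[1]) + p[2]

x = np.linspace(0.5, 5.0, 40)
p_true = np.array([1.0, 1.3, 0.2])
g_obs = forward(p_true, x)                  # synthetic "measured" structure

p, eps = np.array([0.6, 1.0, 0.0]), 1e-7
for _ in range(30):
    r = forward(p, x) - g_obs
    # Finite-difference Jacobian of the forward model.
    J = np.column_stack([(forward(p + eps * np.eye(3)[i], x) - forward(p, x)) / eps
                         for i in range(3)])
    p += lm_step(J, r, lam=1e-3)
print("recovered parameters:", p)

# Singular value analysis: directions with small singular values barely
# change the structure -- exactly the weak parameter combinations that
# make the inverse problem non-unique.
U, s, Vt = np.linalg.svd(J, full_matrices=False)
print("singular values:", s)
print("weakest parameter direction:", Vt[-1])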

Relevance: 80.00%

Abstract:

PURPOSE: The advent of imaging software programs has proved useful for diagnosis, treatment planning, and outcome measurement, but the precision of 3-dimensional (3D) surgical simulation still needs to be tested. This study was conducted to determine whether virtual surgery performed on 3D models constructed from cone-beam computed tomography (CBCT) can correctly simulate the actual surgical outcome and to validate the ability of this emerging technology to recreate the orthognathic surgery hard tissue movements in 3 translational and 3 rotational planes of space. MATERIALS AND METHODS: Pre- and postsurgery 3D models were constructed from CBCTs of 14 patients who had combined maxillary advancement and mandibular setback surgery and 6 patients who had 1-piece maxillary advancement surgery. The postsurgery and virtually simulated surgery 3D models were registered at the cranial base to quantify differences between the simulated and actual surgery models. Hotelling t tests were used to assess the differences between simulated and actual surgical outcomes. RESULTS: For all anatomic regions of interest, there was no statistically significant difference between the simulated and the actual surgical models. The right lateral ramus was the only region that showed a statistically significant but small difference when comparing 2- and 1-jaw surgeries. CONCLUSIONS: Virtual surgical methods were reliably reproduced. Oral surgery residents could benefit from virtual surgical training, and computer simulation has the potential to increase predictability in the operating room.
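For readers unfamiliar with the statistic, a minimal sketch of a one-sample Hotelling T² test on paired 3D coordinate differences (synthetic data, not the study's measurements) looks like this:

```python
import numpy as np
from scipy import stats

def hotelling_t2_one_sample(X, mu0=None):
    """One-sample Hotelling T^2 test: does the mean vector of X differ from mu0?

    X: (n, p) array, e.g. per-patient (dx, dy, dz) differences between a
    simulated and an actual postsurgical landmark position.
    """
    n, p = X.shape
    mu0 = np.zeros(p) if mu0 is None else mu0
    d = X.mean(axis=0) - mu0
    S = np.cov(X, rowvar=False)            # sample covariance matrix
    t2 = n * d @ np.linalg.solve(S, d)
    f = (n - p) / (p * (n - 1)) * t2       # exact F transformation
    p_value = stats.f.sf(f, p, n - p)
    return t2, p_value

# Example: 14 patients, 3D differences in mm; H0: mean difference = 0.
rng = np.random.default_rng(0)
diffs = rng.normal(0.0, 0.5, size=(14, 3))
t2, p_val = hotelling_t2_one_sample(diffs)
print(f"T^2 = {t2:.2f}, p = {p_val:.3f}")
```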

Relevance: 80.00%

Abstract:

IEF protein binary separations were performed in a 12-μL drop suspended between two palladium electrodes, using pH gradients created by electrolysis of simple buffers at low voltages (1.5-5 V). The dynamics of pH gradient formation and protein separation were investigated by computer simulation and experimentally via digital video microscope imaging in the presence and absence of a pH indicator solution. Albumin, ferritin, myoglobin, and cytochrome c were used as model proteins. A drop containing 2.4 μg of each protein was applied, electrophoresed, and allowed to evaporate until it split into two fractions, which were recovered by rinsing the electrodes with a few microliters of buffer. Analysis by gel electrophoresis revealed that the anode and cathode fractions were depleted of high-pI and low-pI proteins, respectively, whereas proteins with intermediate pI values were recovered in both fractions. Comparable data were obtained with diluted bovine serum fortified with myoglobin and cytochrome c.

Relevance: 80.00%

Abstract:

The separation of small molecules by capillary electrophoresis is governed by a complex interplay among several physical effects, and a systematic understanding of how the combined influence of these effects appears in experiments has, until recently, been lacking. The work presented in this thesis uses transient isotachophoretic stacking (tITP) and computer simulation to improve and better understand an in-capillary chemical assay for creatinine. The assay uses electrophoretically mediated micro-analysis (EMMA) to carry out the Jaffé reaction inside a capillary tube. The primary contribution of this work is the elucidation of the role of the length and concentration of the hydroxide plug used to achieve tITP stacking of the product formed by the in-capillary EMMA/Jaffé method. Computer simulation using SIMUL 5.0 predicts that a 3-4 fold gain in sensitivity can be realized by timing the tITP stacking event such that the Jaffé product peak is at its maximum height as it electrophoreses past the detection window. Overall, the length of the hydroxide plug alters the timing of the stacking event, and lower-concentration hydroxide plugs lead to earlier tITP stacking events. The inclusion of intentional tITP stacking in the EMMA/Jaffé method also improves the sensitivity of the assay, including at creatinine concentrations within the normal biological range. Ultimately, improvements in assay sensitivity can be rationally designed by using the length and concentration of the hydroxide plug to engineer the timing of the tITP stacking event so that stacking occurs as the Jaffé product passes the detection window.

Relevance: 80.00%

Abstract:

In the past few decades, integrated circuits have become a major part of everyday life. Every circuit that is created needs to be tested for faults so that faulty circuits are not sent to end-users. Creating these tests is time-consuming, costly, and difficult to perform on larger circuits. This research presents a novel method for fault detection and test pattern reduction in integrated circuitry under test. By leveraging the FPGA's reconfigurability and parallel processing capabilities, a speed-up in fault detection can be achieved over previous computer simulation techniques. This work presents the following contributions to the field of stuck-at fault detection. First, a new method for inserting faults into a circuit netlist: given any circuit netlist, our tool can insert multiplexers at the appropriate internal nodes to aid in fault emulation on reconfigurable hardware. Second, a parallel method of fault emulation: the benefit of the FPGA is not only its ability to implement any circuit but also its ability to process data in parallel, and this research exploits that capability in a more efficient emulation method that implements numerous copies of the same circuit in the FPGA. Third, a new method to identify the most efficient test patterns: most methods for determining the minimum number of inputs that cover the most faults require sophisticated software programs that use heuristics, whereas by utilizing hardware this research is able to process data faster and use a simpler method for minimizing inputs.
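A minimal software illustration of the mux-based fault-insertion idea - the toy netlist is invented here, whereas the actual tool rewrites real netlists for FPGA emulation - might look like:

```python
# Mux-based stuck-at fault injection on a tiny, hypothetical gate-level
# netlist: forcing a node to a constant models the mux selecting the fault.
from itertools import product

NETLIST = {            # node: (gate type, input nodes)
    "n1": ("AND", ["a", "b"]),
    "n2": ("OR",  ["n1", "c"]),
    "out": ("XOR", ["n1", "n2"]),
}
GATES = {"AND": lambda x, y: x & y, "OR": lambda x, y: x | y, "XOR": lambda x, y: x ^ y}

def evaluate(inputs, fault=None):
    """Evaluate the netlist; fault = (node, stuck_value) overrides one node,
    exactly like a mux forcing a constant in place of the gate output."""
    values = dict(inputs)
    for node, (gate, ins) in NETLIST.items():
        v = GATES[gate](values[ins[0]], values[ins[1]])
        values[node] = fault[1] if fault and fault[0] == node else v
    return values["out"]

faults = [(n, v) for n in NETLIST for v in (0, 1)]          # stuck-at-0/1 per node
patterns = [dict(zip("abc", bits)) for bits in product((0, 1), repeat=3)]
for pat in patterns:
    detected = [f for f in faults if evaluate(pat, f) != evaluate(pat)]
    print(pat, "detects", len(detected), "of", len(faults), "faults")
```

On an FPGA, many such fault-injected copies run side by side, which is where the parallel speed-up over software simulation comes from.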

Relevance: 80.00%

Abstract:

Bidirectional ITP in fused-silica capillaries double-coated with Polybrene and poly(vinylsulfonate) is a robust approach for the analysis of low-molecular-mass compounds. EOF towards the cathode is strong (mobility >4.0 × 10⁻⁸ m²/V·s) within the entire pH range investigated (2.40-8.08); it depends on the ionic strength and buffer used and, at constant ionic strength, is higher at alkaline pH. Electrokinetic separations and transport in such coated capillaries can be described with a dynamic computer model that permits the combined simulation of electrophoresis and electroosmosis, in which the EOF is predicted with either a constant (i.e. pH- and ionic-strength-independent) or a pH- and ionic-strength-dependent electroosmotic mobility. Detector profiles predicted by computer simulation agree qualitatively well with bidirectional isotachopherograms monitored with a setup comprising two axial contactless conductivity detectors and a UV absorbance detector. The varying EOF predicted with a pH- and ionic-strength-dependent electroosmotic mobility can be regarded as realistic.
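The core superposition such dynamic models build on is simple: each analyte's net velocity is the sum of its electrophoretic drift and the bulk electroosmotic flow. A sketch with illustrative numbers (not values from the paper):

```python
# Net migration under combined electrophoresis and electroosmosis.
MU_EOF = 4.5e-8        # electroosmotic mobility, m^2/(V s), cathodic (> 4.0e-8)
E_FIELD = 3.0e4        # applied field, V/m
L_DET = 0.40           # capillary length to the detector, m

analytes = {           # electrophoretic mobilities, m^2/(V s); sign = charge
    "fast cation": +3.0e-8,
    "slow anion": -1.0e-8,
    "fast anion": -5.0e-8,   # counter-migrates faster than the EOF carries it
}
for name, mu_ep in analytes.items():
    v = (mu_ep + MU_EOF) * E_FIELD        # superposition of the two transports
    if v > 0:
        print(f"{name}: reaches detector in {L_DET / v:.0f} s")
    else:
        print(f"{name}: swept toward the anode, never reaches the detector")
```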

Relevance: 80.00%

Abstract:

CE-ESI multistage IT-MS (CE-MSⁿ, n ≤ 4) and computer simulation of fragmentation are demonstrated to be effective tools to detect and identify phase I and phase II metabolites of hydromorphone (HMOR) in human urine. Using the same CE conditions as previously developed for the analysis of urinary oxycodone and its metabolites, HMOR, its phase I metabolites produced by N-demethylation, 6-keto-reduction, and N-oxidation, and the phase II conjugates of HMOR and its metabolites formed with glucuronic acid, glucose, and sulfuric acid could be detected in urine samples of a patient collected during a pharmacotherapy episode with daily ingestion of 48 mg of HMOR chloride. The CE-MSⁿ data obtained with the HMOR standard, with synthesized hydromorphol and hydromorphone-N-oxide, and with norhydromorphone produced in vitro with CYP3A4 were employed to identify the metabolites. This approach led to the identification of previously unknown HMOR metabolites, including HMOR-3O-glucide and various N-oxides, structures for which no standard compounds or mass spectral library data were available. Furthermore, the separation of alpha- and beta-hydromorphol, the stereoisomers of 6-keto-reduced HMOR, was achieved by CE in the presence of the single-isomer heptakis(2,3-diacetyl-6-sulfato)-beta-CD. The data indicate that the urinary excretion of alpha-hydromorphol is larger than that of beta-hydromorphol.

Relevance: 80.00%

Abstract:

The aim of many genetic studies is to locate the genomic regions (called quantitative trait loci, QTLs) that contribute to variation in a quantitative trait (such as body weight). Confidence intervals for the locations of QTLs are particularly important for the design of further experiments to identify the gene or genes responsible for the effect. Likelihood support intervals are the most widely used method to obtain confidence intervals for QTL location, but the non-parametric bootstrap has also been recommended. Through extensive computer simulation, we show that bootstrap confidence intervals are poorly behaved and so should not be used in this context. The profile likelihood (or LOD curve) for QTL location has a tendency to peak at genetic markers, and so the distribution of the maximum likelihood estimate (MLE) of QTL location has the unusual feature of point masses at genetic markers; this contributes to the poor behavior of the bootstrap. Likelihood support intervals and approximate Bayes credible intervals, on the other hand, are shown to behave appropriately.
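For context, the non-parametric bootstrap under study follows the standard recipe below (shown for a toy scalar estimator, not a QTL scan); the paper's finding is that for QTL location this recipe misbehaves because the MLE places point masses at marker positions:

```python
import numpy as np

rng = np.random.default_rng(1)

def bootstrap_ci(data, estimator, n_boot=2000, alpha=0.05):
    """Percentile non-parametric bootstrap confidence interval:
    resample with replacement, re-estimate, take quantiles."""
    stats = np.array([
        estimator(rng.choice(data, size=len(data), replace=True))
        for _ in range(n_boot)
    ])
    return np.quantile(stats, [alpha / 2, 1 - alpha / 2])

sample = rng.normal(loc=10.0, scale=2.0, size=50)   # toy phenotype data
print("95% CI for the mean:", bootstrap_ci(sample, np.mean))
```

For a well-behaved statistic like the mean this works fine; the discrete point masses of the QTL-location MLE violate the smoothness this percentile recipe implicitly relies on.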

Relevance: 80.00%

Abstract:

BACKGROUND: Assessment of lung volume (FRC) and ventilation inhomogeneities with an ultrasonic flowmeter and multiple breath washout (MBW) has been used to provide important information about lung disease in infants. Sub-optimal adjustment of the mainstream molar mass (MM) signal for temperature and external deadspace may lead to analysis errors in infants with critically small tidal volume changes during breathing. METHODS: We measured expiratory temperature in human infants at 5 weeks of age and examined the influence of temperature and deadspace changes on FRC results with computer simulation modeling. A new analysis method with optimized temperature and deadspace settings was then derived, tested for robustness to analysis errors, and compared with the previously used analysis methods. RESULTS: Temperature in the facemask was higher, and variations in deadspace volume larger, than previously assumed. Both had a considerable impact on FRC and LCI results, with high variability when obtained with the previously used analysis model. Using the measured temperature, we optimized the model parameters and tested a newly derived analysis method, which was found to be more robust to variations in deadspace. Comparison between the two analysis methods showed systematic differences and a wide scatter. CONCLUSION: Corrected deadspace and more realistic temperature assumptions improved the stability of the analysis of MM measurements obtained by ultrasonic flowmeter in infants. This new analysis method, using the only currently available commercial ultrasonic flowmeter for infants, may help improve the stability of the analysis and further facilitate assessment of lung volume and ventilation inhomogeneities in infants.
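A minimal sketch of the mass-balance idea behind MBW-based FRC estimation, with illustrative numbers (the paper's contribution concerns the temperature and deadspace corrections applied before this step):

```python
# Multiple breath washout mass balance: the lung volume (FRC) follows from
# the total tracer volume washed out divided by the drop in end-tidal tracer
# concentration. All numbers here are illustrative.
c_start, c_end = 0.04, 0.0025      # end-tidal tracer fractions (start, end)
net_tracer_volume_ml = 4.7         # integrated expired tracer volume, mL

frc_ml = net_tracer_volume_ml / (c_start - c_end)
print(f"FRC ≈ {frc_ml:.0f} mL")

# Lung clearance index (LCI): cumulative expired volume over FRC at the
# point where the tracer falls to 1/40 of its starting concentration.
cumulative_expired_ml = 980.0      # illustrative
print(f"LCI ≈ {cumulative_expired_ml / frc_ml:.1f} turnovers")
```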

Relevance: 80.00%

Abstract:

Although current concepts of anterior femoroacetabular impingement predict damage in the labrum and the cartilage, the actual joint damage has not previously been verified by computer simulation. We retrospectively compared the intraoperative locations of labral and cartilage damage in 40 hips during surgical dislocation for cam- or pincer-type femoroacetabular impingement (Group I) with the locations of femoroacetabular impingement determined by computer simulation in 15 additional hips (Group II). We found no difference between the mean locations of the chondrolabral damage of Group I and the computed impingement zone of Group II. The standard deviation was larger for measures of articular damage in Group I than for the computed values of Group II. The most severe hip damage occurred at the zone of highest probability of femoroacetabular impact, typically in the anterosuperior quadrant of the acetabulum for both cam- and pincer-type femoroacetabular impingement. However, the extent of joint damage along the acetabular rim was larger intraoperatively than that observed in the 3-D joint simulations. We concluded that the femoroacetabular impingement mechanism contributes to early osteoarthritis, including labral lesions. LEVEL OF EVIDENCE: Level II, diagnostic study. See the Guidelines for Authors for a complete description of levels of evidence.

Relevance: 80.00%

Abstract:

Large power transformers, an aging and vulnerable part of our energy infrastructure, sit at choke points in the grid and are key to its reliability and security. Damage or destruction due to vandalism, misoperation, or other unexpected events is of great concern, given replacement costs upward of $2M and lead times of 12 months. Transient overvoltages can cause great damage, and there is much interest in improving computer simulation models to correctly predict and avoid the consequences. EMTP (the Electromagnetic Transients Program) has been developed for computer simulation of power system transients, and component models for most equipment have been developed and benchmarked. Power transformers would appear to be simple; however, due to their nonlinear and frequency-dependent behavior, they can be among the most complex system components to model. It is imperative that the applied models be appropriate for the range of frequencies and excitation levels that the system experiences. Transformer modeling is thus not a mature field, and new, improved models must be made available. In this work, improved topologically correct duality-based models are developed for three-phase autotransformers having five-legged, three-legged, and shell-form cores. The main problem in the implementation of detailed models is the lack of complete and reliable data, as no international standard specifies how to measure and calculate parameters. Therefore, parameter estimation methods are developed here to determine the parameters of a given model in cases where the available information is incomplete. The transformer nameplate data are required, and the relative physical dimensions of the core are estimated. The models include a separate representation of each segment of the core, including core hysteresis, the λ-i saturation characteristic, capacitive effects, and the frequency dependency of winding resistance and core loss. Steady-state excitation and de-energization and re-energization transients are simulated and compared with an earlier-developed BCTRAN-based model. Black-start energization cases are also simulated as a means of model evaluation and compared with actual event records. The simulated results using the model developed here are reasonable and more accurate than those of the BCTRAN-based model. Simulation accuracy depends on the accuracy of the equipment model and its parameters. This work is significant in that it advances existing parameter estimation methods in cases where the available data and measurements are incomplete, and it thus enhances the accuracy of EMTP simulation for power systems that include three-phase autotransformers. The theoretical results obtained here provide a sound foundation for developing transformer parameter estimation methods using engineering optimization. In addition, it should be possible to refine which information and measurement data are necessary for complete duality-based transformer models. To further refine and develop the models and parameter estimation methods developed here, iterative full-scale laboratory tests using high-voltage, high-power three-phase transformers would be helpful.
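As one small illustration of the core representation mentioned above, a two-term polynomial is a common textbook form for a λ-i magnetizing characteristic in EMTP-type models; the coefficients below are illustrative, not parameters from this work:

```python
import numpy as np

def magnetizing_current(lam, a=0.05, b=0.002, n=11):
    """Core-branch current for flux linkage lam (per unit): a linear term
    plus a steep odd-power term that reproduces the saturation knee."""
    return a * lam + b * np.sign(lam) * np.abs(lam) ** n

for lam in (0.5, 1.0, 1.2, 1.4):
    print(f"lambda = {lam:4.1f} pu  ->  i = {magnetizing_current(lam):7.4f} pu")
```

In a duality-based model, a branch of this kind represents each core segment separately, which is what lets the model capture unbalanced and transient excitation correctly.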

Relevance: 80.00%

Abstract:

This document corresponds to the tutorial on realistic neural modeling given by David Beeman at WAM-BAMM*05, the first annual meeting of the World Association of Modelers (WAM) Biologically Accurate Modeling Meeting (BAMM) on March 31, 2005 in San Antonio, TX. Part I - Introduction to Realistic Neural Modeling for the Beginner: This is a general overview and introduction to compartmental cell modeling and realistic network simulation for the beginner. Although examples are drawn from GENESIS simulations, the tutorial emphasizes the general modeling approach, rather than the details of using any particular simulator. Part II - Getting Started with Modeling Using GENESIS: This builds upon the background of Part I to describe some details of how this approach is used to construct cell and network simulations in GENESIS. It serves as an introduction and roadmap to the extended hands-on GENESIS Modeling Tutorial.
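As a taste of what compartmental modeling involves (a generic sketch in Python, not GENESIS script): a single passive compartment obeys C_m dV/dt = (E_rest - V)/R_m + I_inject, which can be integrated directly:

```python
# Generic single-compartment sketch: C_m dV/dt = (E_rest - V)/R_m + I_inject,
# integrated with forward Euler. Parameter values are illustrative.
import numpy as np

C_M, R_M, E_REST = 1e-9, 1e8, -0.065   # farads, ohms, volts
I_INJ = 0.2e-9                         # injected current, amps
DT, T_END = 1e-5, 0.5                  # time step and duration, s (~5 tau)

t = np.arange(0.0, T_END, DT)
v = np.empty_like(t)
v[0] = E_REST
for k in range(1, len(t)):
    dvdt = ((E_REST - v[k - 1]) / R_M + I_INJ) / C_M
    v[k] = v[k - 1] + DT * dvdt        # forward Euler step

print(f"steady-state V ≈ {1e3 * v[-1]:.1f} mV "
      f"(analytic: {1e3 * (E_REST + I_INJ * R_M):.1f} mV)")
```

Multi-compartment cells simply couple many such equations through axial resistances, which is the approach the GENESIS tutorials develop in detail.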

Relevance: 80.00%

Abstract:

We describe four recent additions to NEURON's suite of graphical tools that make it easier for users to create and manage models: an enhancement to the Channel Builder that facilitates the specification and efficient simulation of stochastic channel models
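A stochastic channel model of the kind the Channel Builder supports can be sketched generically (illustrative rates, plain Python rather than NEURON):

```python
# Two-state (closed <-> open) Markov channel, simulated per channel with
# fixed-step transition probabilities. Rates are illustrative, not NEURON's.
import numpy as np

rng = np.random.default_rng(42)
ALPHA, BETA = 200.0, 100.0      # opening / closing rates, 1/s
DT, STEPS, N_CHANNELS = 1e-4, 5000, 100

open_state = np.zeros(N_CHANNELS, dtype=bool)
open_fraction = np.empty(STEPS)
for k in range(STEPS):
    u = rng.random(N_CHANNELS)
    # Closed channels open with prob ALPHA*DT; open ones close with prob BETA*DT.
    open_state = np.where(open_state, u >= BETA * DT, u < ALPHA * DT)
    open_fraction[k] = open_state.mean()

print(f"mean open fraction {open_fraction[STEPS // 2:].mean():.3f} "
      f"(deterministic limit {ALPHA / (ALPHA + BETA):.3f})")
```

Per-channel stochastic simulation of this kind is what distinguishes such models from the deterministic Hodgkin-Huxley rate equations, and efficient handling of it is the point of the Channel Builder enhancement.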