941 results for pacs: simulation techniques


Relevance:

30.00%

Publisher:

Abstract:

During the last few decades, unprecedented technological growth has been at the center of embedded systems design, with Moore's Law as the leading factor of this trend. Today an ever-increasing number of cores can be integrated on the same die, marking the transition from state-of-the-art multi-core chips to the new many-core design paradigm. Despite their extraordinarily high computing power, the complexity of many-core chips opens the door to several challenges. As a result of the increased silicon density of modern Systems-on-a-Chip (SoC), the design space that must be explored to find the best design has exploded, and hardware designers face the problem of a huge design space. Virtual Platforms have always been used to enable hardware-software co-design, but today they must cope with the huge complexity of both hardware and software systems. In this thesis two research works on Virtual Platforms are presented: the first is intended for the hardware developer, to easily allow complex cycle-accurate simulations of many-core SoCs; the second exploits the parallel computing power of off-the-shelf General Purpose Graphics Processing Units (GPGPUs), with the goal of increased simulation speed. In the context of many-core systems, the term Virtualization refers not only to the aforementioned hardware emulation tools (Virtual Platforms), but also to two other main purposes: 1) helping the programmer achieve the maximum possible performance of an application by hiding the complexity of the underlying hardware; 2) efficiently exploiting the highly parallel hardware of many-core chips in environments with multiple active Virtual Machines. This thesis focuses on virtualization techniques that aim to mitigate, and where possible overcome, some of the challenges introduced by the many-core design paradigm.
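The cycle-accurate simulation style targeted by such Virtual Platforms can be reduced to a lock-step loop in which every modelled core advances by one clock cycle per iteration. The sketch below is a toy illustration of that loop only; the `Core` class and its one-instruction-per-cycle behaviour are invented for the example, not taken from the thesis:

```python
# Minimal sketch of a cycle-driven many-core simulation loop
# (illustrative only; real Virtual Platforms model pipelines,
# caches and interconnects in far more detail).

class Core:
    """Toy core: retires one 'instruction' per simulated cycle."""
    def __init__(self, core_id):
        self.core_id = core_id
        self.retired = 0

    def tick(self):
        self.retired += 1  # pretend one instruction retires per cycle

def simulate(num_cores, num_cycles):
    cores = [Core(i) for i in range(num_cores)]
    for _ in range(num_cycles):
        for core in cores:   # lock-step: every core advances one cycle
            core.tick()
    return sum(c.retired for c in cores)

total = simulate(num_cores=16, num_cycles=1000)
print(total)  # 16000 retired instructions in total
```

The inner loop over cores is what GPGPU-accelerated simulators parallelise: each core's per-cycle update is independent within a cycle, so it maps naturally onto many GPU threads.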

Relevance:

30.00%

Publisher:

Abstract:

This doctoral thesis deals with classical vector spin glasses, a type of disordered magnet, on various lattice types. Since they are relevant for experimental realizations, a theoretical understanding of spin-glass models with few spin components and low lattice dimension is of great importance. As this proves very difficult, however, new and promising approaches are needed. This work therefore considers the limit of infinitely many spin dimensions. In this limit, several simplifications arise compared with models of low spin dimension, so that for this important problem properties both at zero temperature and at finite temperatures can be determined, predominantly with numerical methods. Both hypercubic lattices and a versatile one-dimensional model are considered; the latter allows different universality classes to be studied merely by tuning a single parameter. Finite-size scaling forms, critical exponents, ratios of critical exponents and other critical quantities are proposed and compared with numerical results. A detailed description of the derivations of all numerically evaluated equations is given as well. At zero temperature, a thorough investigation of the ground states and defect energies is carried out; a number of interesting quantities are analysed and, in particular, the lower critical dimension is determined. At finite temperature, the order parameter and the spin-glass susceptibility are accessible via the numerically computed correlation matrix. The spin-glass model in the limit of infinitely many spin components can be regarded as a starting point for investigating the more natural models with low spin dimension. A model combining the advantages of the former with the properties of the latter would of course be desirable.
Therefore, a model with anisotropy that attempts to achieve this goal is proposed and tested. Attractive ways of using the model are pointed out, and a deeper study is encouraged. Finally, so-called "real-space" renormalization group calculations are carried out, both analytically and numerically, for finite-dimensional vector spin glasses with a finite number of spin components. This is done with a newly derived Migdal-Kadanoff recursion relation. Among other quantities, the lower critical dimension is determined.
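The Migdal-Kadanoff scheme mentioned at the end can be illustrated in a much simpler setting: the scalar (Ising) ferromagnet, where one renormalization step decimates b bonds in series and moves b^(d-1) bonds in parallel. The sketch below locates the nontrivial fixed point of that textbook recursion; it is not the new recursion relation derived in the thesis for vector spin glasses:

```python
import math

def mk_step(K, b=2, d=2):
    """One Migdal-Kadanoff step for the Ising coupling K:
    decimate a chain of b bonds (tanh K' = tanh^b K), then
    move b**(d-1) bonds together (multiply the coupling)."""
    return b ** (d - 1) * math.atanh(math.tanh(K) ** b)

# Locate the nontrivial fixed point K* (the MK estimate of the
# critical coupling) by bisection on f(K) = mk_step(K) - K.
lo, hi = 0.1, 2.0          # f(lo) < 0 and f(hi) > 0 bracket the root
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if mk_step(mid) < mid:
        lo = mid
    else:
        hi = mid
K_star = 0.5 * (lo + hi)
print(round(K_star, 3))    # ~0.61 for b=2, d=2 (exact 2D Ising: ~0.441)
```

Iterating the same map shows the flow: couplings below K* renormalize to zero (paramagnet), couplings above it grow without bound (ferromagnet).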

Relevance:

30.00%

Publisher:

Abstract:

In this work, simulations of liquids at the molecular level were performed using different multiscale techniques. These allow an effective description of the liquid that requires less computing time and can thus capture phenomena on longer time and length scales.

A key ingredient is a simplified ("coarse-grained") model, obtained from simulations of the detailed model by a systematic procedure such that selected properties of the detailed model (e.g. the pair correlation function, the pressure) are reproduced.

Algorithms were investigated that allow a simultaneous coupling of the detailed and the simplified model (the "Adaptive Resolution Scheme", AdResS). Here the detailed model is used in a predefined subvolume of the liquid (e.g. near a surface), while the rest is described by the simplified model.

To this end, a method (the "thermodynamic force") was developed that enables the coupling even when the two models are in different thermodynamic states. In addition, a novel coupling algorithm (H-AdResS) was described, which expresses the coupling through a Hamiltonian; in this algorithm, a correction analogous to the thermodynamic force is possible at lower computational cost.

As an application of these fundamental techniques, path-integral molecular dynamics (MD) simulations of water were investigated. This method makes it possible to include quantum-mechanical effects of the nuclei (delocalization, zero-point energy) in the simulation. First, a multiscale technique ("force matching") was used to extract an effective interaction from a detailed simulation based on density functional theory. The path-integral MD simulation improves the description of the intramolecular structure in comparison with experimental data. The model is also suitable for simultaneous coupling within one simulation, in which a water molecule (described by 48 point particles in the path-integral MD model) is coupled to a simplified model (a single point particle). In this way, a water-vacuum interface could be simulated in which only the surface is described by the path-integral model and the rest by the simplified model.
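A common instance of the systematic coarse-graining procedure described above is iterative Boltzmann inversion, which refines an effective pair potential until the coarse-grained model reproduces a target radial distribution function g(r). The sketch below shows one update step on a discretised r-grid with synthetic data; it is a generic illustration, not the specific force-matching workflow of the thesis:

```python
import numpy as np

kB_T = 1.0  # work in energy units of kT

def ibi_update(V, g_cg, g_target, eps=1e-12):
    """One iterative-Boltzmann-inversion step on a tabulated potential:
    V_{n+1}(r) = V_n(r) + kT * ln( g_n(r) / g_target(r) )."""
    return V + kB_T * np.log((g_cg + eps) / (g_target + eps))

# Synthetic demonstration: if the coarse-grained model already matches
# the target RDF, the update leaves the potential unchanged.
r = np.linspace(0.5, 3.0, 50)
g_target = 1.0 + 0.5 * np.exp(-(r - 1.0) ** 2 / 0.05)  # toy RDF
V0 = np.zeros_like(r)
V1 = ibi_update(V0, g_target, g_target)
print(np.allclose(V0, V1))  # True: converged potential is a fixed point
```

In practice g_n(r) comes from a fresh coarse-grained simulation with the current V_n, and the loop repeats until g_n matches g_target within tolerance.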

Relevance:

30.00%

Publisher:

Abstract:

BACKGROUND: After bovine spongiform encephalopathy (BSE) emerged in European cattle livestock in 1986, a fundamental question was whether the agent had also established itself in the small ruminant population. In Switzerland, transmissible spongiform encephalopathies (TSEs) in small ruminants have been monitored since 1990. While a BSE infection could be excluded in the most recent TSE cases, techniques to discriminate scrapie from BSE had not been available at the time of diagnosis for historical cases, so their status remained unclear. We herein applied state-of-the-art techniques to retrospectively classify these animals and to re-analyze the affected flocks for secondary cases. These results were the basis for models simulating the course of TSEs over a period of 70 years. The aim was to arrive at a statistically based overall assessment of the TSE situation in the domestic small ruminant population in Switzerland. RESULTS: In total, 16 TSE cases have been identified in small ruminants in Switzerland since 1981, of which eight were atypical scrapie and six were classical scrapie. For two animals, retrospective analysis did not allow any further classification due to the lack of appropriate tissue samples. We found no evidence of an infection with the BSE agent in the cases under investigation, and no secondary cases were identified in any of the affected flocks. A Bayesian prevalence calculation resulted in most likely estimates of one case of BSE, five cases of classical scrapie and 21 cases of atypical scrapie per 100,000 small ruminants. According to our models, none of these TSEs is expected to cause a broader epidemic in Switzerland. In a closed population, they are rather expected to fade out in the next decades or, in case of a sporadic origin, may remain at a very low level. CONCLUSIONS: In summary, these data indicate that despite a significant epidemic of BSE in cattle, there is no evidence that BSE established itself in the small ruminant population in Switzerland.
Classical and atypical scrapie both occur at a very low level and are not expected to escalate into an epidemic. In this situation, the extent of TSE surveillance in small ruminants requires reevaluation based on a cost-benefit analysis.
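A Bayesian prevalence calculation of the kind mentioned in the results can be sketched with a simple Beta-binomial model: with a Beta(1, 1) prior and k detected cases among n tested animals, the posterior is Beta(1 + k, 1 + n - k). The numbers below are illustrative only, not the study's surveillance data, and the actual model presumably also accounts for test sensitivity and sampling design:

```python
# Beta-binomial sketch of a "most likely" prevalence estimate.
# With a uniform Beta(1, 1) prior and k detected cases among n tested
# animals, the posterior is Beta(1 + k, 1 + n - k); its mode is the
# most likely prevalence. All figures are illustrative.

def posterior_mode(k, n, a=1.0, b=1.0):
    """Mode of Beta(a + k, b + n - k), valid when a + k > 1 and
    b + n - k > 1."""
    return (a + k - 1.0) / (a + b + n - 2.0)

# e.g. 6 detected cases among 120 000 tested small ruminants
mode = posterior_mode(k=6, n=120_000)
print(round(mode * 100_000, 1))  # most likely cases per 100 000: 5.0
```

The full posterior, not just its mode, is what allows statements such as "most likely one case of BSE per 100,000" together with credible intervals.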

Relevance:

30.00%

Publisher:

Abstract:

The design of a high-density neural recording system targeting epilepsy monitoring is presented. Circuit challenges and techniques are discussed to optimize the amplifier topology and the included operational transconductance amplifier (OTA). A new platform supporting active recording devices targeting wireless, high-resolution focus localization in epilepsy diagnosis is also proposed. The post-layout simulation results of an amplifier dedicated to this application are presented. The amplifier is designed in a UMC 0.18 µm CMOS technology, has a noise efficiency factor (NEF) of 2.19 and occupies a silicon area of 0.038 mm², while consuming 5.8 µW from a 1.8 V supply.
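The noise efficiency factor cited above is conventionally defined as NEF = V_ni,rms * sqrt(2 I_tot / (pi * U_T * 4kT * BW)). The sketch below merely evaluates that standard formula; the abstract does not report the amplifier's input-referred noise or bandwidth, so the inputs are assumptions for illustration:

```python
import math

def nef(v_ni_rms, i_total, bandwidth_hz, T=300.0):
    """Noise efficiency factor of an amplifier:
    NEF = Vni,rms * sqrt( 2*Itot / (pi * UT * 4kT * BW) ),
    comparing its noise/current trade-off to a single BJT."""
    k = 1.380649e-23          # Boltzmann constant [J/K]
    q = 1.602176634e-19       # elementary charge [C]
    U_T = k * T / q           # thermal voltage, ~25.9 mV at 300 K
    return v_ni_rms * math.sqrt(2.0 * i_total /
                                (math.pi * U_T * 4.0 * k * T * bandwidth_hz))

# Illustrative numbers (NOT from the abstract): 3.2 uA total current,
# 5 kHz noise bandwidth, 2 uVrms input-referred noise.
print(round(nef(2e-6, 3.2e-6, 5e3), 2))  # ~1.95
```

Lower NEF means less input-referred noise for a given supply current, which is why it is the standard figure of merit for neural recording front-ends.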

Relevance:

30.00%

Publisher:

Abstract:

Signal proteins are able to adapt their response to changes in the environment, governing in this way a broad variety of important cellular processes in living systems. While conventional molecular-dynamics (MD) techniques can be used to explore the early signaling pathway of these protein systems at atomistic resolution, the high computational costs limit their usefulness for elucidating the multiscale transduction dynamics of most signaling processes, which occur on experimental timescales. To cope with this problem, we present in this paper a novel multiscale modeling method based on a combination of kinetic Monte Carlo (KMC) and MD techniques, and demonstrate its suitability for investigating the signaling behavior of the light-oxygen-voltage-2-Jα photoswitch domain from Avena sativa (AsLOV2-Jα) and an AsLOV2-Jα-regulated photoactivatable Rac1 GTPase (PA-Rac1), recently employed to control the motility of cancer cells through light stimulus. More specifically, we show that their signaling pathways begin with a rearrangement of amino-acid residues and subsequent H-bond formation near the flavin-mononucleotide chromophore, causing a coupling between β-strands and the subsequent detachment of a peripheral α-helix from the AsLOV2 domain. In the case of the PA-Rac1 system, we find that this latter process induces the release of the AsLOV2 inhibitor from the switch-II activation site of the GTPase, enabling signal activation through effector-protein binding. These applications demonstrate that our approach reliably reproduces the signaling pathways of complex signal proteins, ranging from nanoseconds up to seconds, at affordable computational costs.
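The kinetic Monte Carlo half of such a hybrid scheme is typically a rejection-free (Gillespie-type) step: choose the next transition with probability proportional to its rate, then advance time by an exponentially distributed waiting time. A generic sketch with synthetic rates, not the paper's actual transition network:

```python
import math
import random

def kmc_step(rates, rng=random):
    """One rejection-free kinetic Monte Carlo (Gillespie) step.
    Returns (index of chosen transition, time increment)."""
    total = sum(rates)
    # choose transition i with probability rates[i] / total
    u = rng.random() * total
    acc = 0.0
    for i, r in enumerate(rates):
        acc += r
        if u < acc:
            break
    # waiting time is exponential with mean 1 / total
    dt = -math.log(1.0 - rng.random()) / total
    return i, dt

random.seed(0)
rates = [0.1, 1.0, 10.0]     # synthetic transition rates [1/s]
counts = [0, 0, 0]
for _ in range(10_000):
    i, _ = kmc_step(rates)
    counts[i] += 1
print(counts)  # dominated by the fastest transition (index 2)
```

This is how the method bridges timescales: slow conformational transitions are jumped over statistically, while MD resolves the atomistic motion within each state.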

Relevance:

30.00%

Publisher:

Abstract:

Determining how an exhaust system will perform acoustically before a prototype muffler is built can save the designer a substantial amount of time and resources. To use the available simulation tools effectively, it is important to understand which tool is most effective for the intended analysis, as well as how typical exhaust-system elements affect muffler performance. An in-depth look at the available tools and their most beneficial uses is presented in this thesis. A full parametric study of typical muffler elements was conducted using the finite element method (FEM) and correlated with experimental results. This thesis lays out the groundwork for accurately predicting free-field sound pressure levels for an exhaust system with the engine properties included. The accuracy of the model depends heavily on the correct temperature profile of the model and on the accuracy of the source properties; these factors are discussed in detail, and methods for determining them are presented. The secondary effects of mean flow, which affects both acoustic wave propagation and flow noise generation, are discussed, and effective ways of predicting them are described. Experimental models are tested on a flow rig that showcases these phenomena.
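For the simplest muffler element, a single expansion chamber, plane-wave theory gives a closed-form transmission loss often used to sanity-check FEM results: TL = 10 log10(1 + 0.25 (m - 1/m)^2 sin^2(kL)), with area ratio m, wavenumber k and chamber length L. A sketch assuming no mean flow and no temperature gradient:

```python
import math

def expansion_chamber_tl(area_ratio, k, length):
    """Plane-wave transmission loss [dB] of a simple expansion chamber:
    TL = 10*log10( 1 + 0.25*(m - 1/m)**2 * sin(k*L)**2 )."""
    m = area_ratio
    return 10.0 * math.log10(1.0 + 0.25 * (m - 1.0 / m) ** 2
                             * math.sin(k * length) ** 2)

c = 343.0            # speed of sound [m/s] at ~20 C
f = 250.0            # frequency [Hz]
k = 2.0 * math.pi * f / c
# chamber length c/(4f) puts k*L at pi/2, the first TL maximum
tl = expansion_chamber_tl(area_ratio=9.0, k=k, length=0.343)
print(round(tl, 1))  # ~13.2 dB at the peak
```

The formula also shows the characteristic nulls at kL = n*pi, where the chamber is acoustically transparent; a temperature profile enters through c, which is why the temperature model matters so much for the predicted TL curve.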

Relevance:

30.00%

Publisher:

Abstract:

This technical report discusses the application of the Lattice Boltzmann Method (LBM) and Cellular Automata (CA) simulation to fluid flow and particle deposition. The current work focuses on simulating incompressible flow past cylinders, incorporating the LBM D2Q9 model and CA techniques to simulate the fluid flow and particle loading, respectively. For the LBM part, the theories of boundary conditions are studied and verified using the Poiseuille flow test. For the CA part, several models for simulating particles are explained, and a new Digital Differential Analyzer (DDA) algorithm is introduced to simulate particle motion in the Boolean model. The numerical results are compared with a previous probabilistic velocity model by Masselot [Masselot 2000] and show satisfactory agreement.
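A DDA algorithm of the kind referenced rasterises a particle's straight-line displacement onto the lattice, visiting each cell the segment crosses. The minimal integer-grid version below is a generic illustration, not necessarily the exact variant used in the report:

```python
def dda_cells(x0, y0, x1, y1, steps=None):
    """Digital Differential Analyzer: sample the segment (x0,y0)-(x1,y1)
    uniformly and return the sequence of lattice cells it visits."""
    dx, dy = x1 - x0, y1 - y0
    # one sample per unit step along the dominant axis
    n = steps or int(max(abs(dx), abs(dy))) or 1
    cells = []
    for i in range(n + 1):
        t = i / n
        cell = (int(round(x0 + t * dx)), int(round(y0 + t * dy)))
        if not cells or cells[-1] != cell:  # skip repeated cells
            cells.append(cell)
    return cells

print(dda_cells(0, 0, 5, 2))
# [(0, 0), (1, 0), (2, 1), (3, 1), (4, 2), (5, 2)]
```

In a Boolean (occupied/empty) particle model, each visited cell is then tested for deposition, so the traversal order along the trajectory matters.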

Relevance:

30.00%

Publisher:

Abstract:

BACKGROUND: Various osteotomy techniques have been developed to correct the deformity caused by slipped capital femoral epiphysis (SCFE) and have been compared by their clinical outcomes. The aim of the present study was to compare an intertrochanteric uniplanar flexion osteotomy with a multiplanar osteotomy in terms of their ability to improve postoperative range of motion, as measured by simulation based on computed tomographic data in patients with SCFE. METHODS: We examined 19 patients with moderate or severe SCFE as classified by slippage angle. A computer program for the simulation of movement and osteotomy, developed in our laboratory, was used for study execution. Based on a 3-dimensional reconstruction of the computed tomographic data, the physiological range of motion was determined for flexion, abduction, and internal rotation. The multiplanar osteotomy was compared with the uniplanar flexion osteotomy: both intertrochanteric osteotomy techniques were simulated, and the improvements in the range of movement were assessed and compared. RESULTS: The mean slippage (and thus correction) angles measured were 25 degrees (range, 8-46 degrees) inferior and 54 degrees (range, 32-78 degrees) posterior. After the simulation of the multiplanar osteotomy, the virtually measured ranges of motion, as determined by bone-to-bone contact, were 61 degrees for flexion, 57 degrees for abduction, and 66 degrees for internal rotation. The simulation of the uniplanar flexion osteotomy achieved a flexion of 63 degrees, an abduction of 36 degrees, and an internal rotation of 54 degrees. CONCLUSIONS: Apart from abduction, the improvement in the range of motion achieved by a uniplanar flexion osteotomy is comparable with that of the multiplanar osteotomy. However, in contrast to abduction and internal rotation, the improvement in flexion for both simulated techniques is not satisfactory with regard to the requirements of normal everyday life. LEVEL OF EVIDENCE: Level III, retrospective comparative study.
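Numerically, a virtual osteotomy of this kind amounts to rotating the distal fragment's landmark points about a chosen intertrochanteric axis by the correction angle and then re-testing for bone-to-bone contact. The sketch below shows only the rotation step, via Rodrigues' formula, with a synthetic point and a 90-degree sanity check; the axis, angle and point are hypothetical, not patient data:

```python
import numpy as np

def rotate(points, axis, angle_deg):
    """Rotate Nx3 points about a unit axis through the origin
    using Rodrigues' rotation formula."""
    k = np.asarray(axis, float)
    k /= np.linalg.norm(k)
    t = np.radians(angle_deg)
    # v' = v cos t + (k x v) sin t + k (k . v)(1 - cos t)
    def rot(v):
        return (v * np.cos(t) + np.cross(k, v) * np.sin(t)
                + k * np.dot(k, v) * (1.0 - np.cos(t)))
    return np.array([rot(p) for p in points])

# Sanity check with a synthetic landmark: rotating (0, 1, 0) by
# 90 degrees about the x-axis must give (0, 0, 1).
p = np.array([[0.0, 1.0, 0.0]])
print(np.round(rotate(p, axis=[1.0, 0.0, 0.0], angle_deg=90.0), 3))
```

In the actual study the axis and angle encode the uniplanar or multiplanar correction, and the rotated bone surface is swept through flexion, abduction and internal rotation until contact.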

Relevance:

30.00%

Publisher:

Abstract:

Despite major advances in the study of glioma, the quantitative links between intra-tumor molecular/cellular properties, clinically observable properties such as morphology, and critical tumor behaviors such as growth and invasiveness remain unclear, hampering more effective coupling of tumor physical characteristics with implications for prognosis and therapy. Although molecular biology, histopathology, and radiological imaging are employed in this endeavor, studies are severely challenged by the multitude of physical scales involved in tumor growth, i.e., from the molecular nanoscale to the cell microscale and finally to the tissue centimeter scale. Consequently, it is often difficult to determine the underlying dynamics across dimensions. New techniques are needed to tackle these issues. Here, we address this multiscale problem by employing a novel predictive three-dimensional mathematical and computational model based on first-principle equations (conservation laws of physics) that describe mathematically the diffusion of cell substrates and other processes determining tumor mass growth and invasion. The model uses conserved variables to represent known determinants of glioma behavior, e.g., cell density and oxygen concentration, as well as biological functional relationships and parameters linking phenomena at different scales, whose specific forms and values are hypothesized and calculated based on in vitro and in vivo experiments and from histopathology of tissue specimens from human gliomas. This model enables correlation of glioma morphology to tumor growth by quantifying the dependence of tumor mass on the microenvironment (e.g., hypoxia, tissue disruption) and on the cellular phenotypes (e.g., mitosis and apoptosis rates, cell adhesion strength).
Once functional relationships between variables and associated parameter values have been informed, e.g., from histopathology or intra-operative analysis, this model can be used for disease diagnosis/prognosis, hypothesis testing, and to guide surgery and therapy. In particular, this tool identifies and quantifies the effects of vascularization and other cell-scale glioma morphological characteristics as predictors of tumor-scale growth and invasion.
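In one dimension, the conservation-law core of such models is often reduced to a reaction-diffusion (Fisher-KPP) equation for normalised cell density, du/dt = D d²u/dx² + ρ u (1 − u), whose fronts invade at speed of about 2√(Dρ). An explicit finite-difference sketch with illustrative coefficients, not the paper's calibrated three-dimensional model:

```python
import numpy as np

# Explicit finite-difference integration of the 1-D Fisher-KPP equation
#   du/dt = D * d2u/dx2 + rho * u * (1 - u),
# a minimal stand-in for the diffusion/proliferation core of glioma
# growth models (coefficients illustrative, not from the paper).

D, rho = 0.1, 1.0                 # diffusion and net proliferation rates
dx, dt = 0.1, 0.01                # grid and time step (dt < dx**2 / (2*D))
x = np.arange(0.0, 10.0, dx)
u = np.where(x < 1.0, 1.0, 0.0)   # initial tumor core on the left

for _ in range(1000):             # integrate to t = 10
    lap = (np.roll(u, 1) - 2.0 * u + np.roll(u, -1)) / dx**2
    lap[0] = lap[-1] = 0.0        # freeze the ends (no periodic wrap-around)
    u = u + dt * (D * lap + rho * u * (1.0 - u))

# the front invades rightward at speed ~ 2*sqrt(D*rho)
front = float(x[np.argmax(u < 0.5)])
print(front)
```

The invasion-speed scaling 2√(Dρ) is exactly the kind of quantitative link between cell-scale parameters (motility D, proliferation ρ) and tumor-scale behavior that such models exploit.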

Relevance:

30.00%

Publisher:

Abstract:

This study aims at assessing the skill of several climate field reconstruction (CFR) techniques in reconstructing past precipitation over continental Europe and the Mediterranean at seasonal time scales over the last two millennia from proxy records. A number of pseudoproxy experiments are performed within the virtual reality of a regional paleoclimate simulation at 45 km resolution to analyse different aspects of reconstruction skill. Canonical Correlation Analysis (CCA), two versions of an Analog Method (AM) and Bayesian hierarchical modeling (BHM) are applied to reconstruct precipitation from a synthetic network of pseudoproxies that are contaminated with various types of noise. The skill of the derived reconstructions is assessed through comparison with precipitation simulated by the regional climate model. Unlike BHM, CCA systematically underestimates the variance. The AM can be adjusted to overcome this shortcoming, presenting an intermediate behaviour between the two aforementioned techniques. However, a trade-off between reconstruction-target correlation and reconstructed variance is the drawback of all CFR techniques: CCA (BHM) presents the largest (lowest) skill in preserving the temporal evolution, whereas the AM can be tuned to improve correlation at the expense of losing variance. While BHM has been shown to perform well for temperature, it relies heavily on prescribed spatial correlation lengths; this assumption is valid for temperature but hardly warranted for precipitation. In general, none of the methods outperforms the others. All experiments agree that a dense and regularly distributed proxy network is required to reconstruct precipitation accurately, reflecting its high spatial and temporal variability. This is especially true in summer, when localised convective precipitation events cause particularly short de-correlation distances around the proxy locations.
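The variance deficit reported for regression-based CFR methods is easy to reproduce in a toy pseudoproxy experiment: calibrating a linear reconstruction on noisy proxies yields a series that correlates well with the target yet is damped in variance. The sketch below uses ordinary least squares as a stand-in for CCA, with fully synthetic data:

```python
import numpy as np

rng = np.random.default_rng(42)

# toy "true" climate series and noisy pseudoproxies (signal-to-noise ~ 1)
n = 2000
truth = rng.standard_normal(n)
proxies = truth[:, None] + rng.standard_normal((n, 5))  # 5 proxy records

# calibration by least squares (a stand-in for CCA/regression CFRs)
coef, *_ = np.linalg.lstsq(proxies, truth, rcond=None)
recon = proxies @ coef

# the reconstruction correlates well with the target but loses variance
r = np.corrcoef(truth, recon)[0, 1]
print(round(r, 2), round(recon.var() / truth.var(), 2))
```

With five proxies at unit noise the regression retains roughly the fraction 1/(1 + 0.2) of the target variance, which is the trade-off between correlation and variance the study describes.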

Relevance:

30.00%

Publisher:

Abstract:

Simulating surface wind over complex terrain is a challenge in regional climate modelling. This study therefore aims at identifying a set-up of the Weather Research and Forecasting (WRF) model that minimises systematic errors of surface winds in hindcast simulations. Major factors of the model configuration are tested to find a suitable set-up: the horizontal resolution, the planetary boundary layer (PBL) parameterisation scheme and the way WRF is nested to the driving data set. Hence, a number of sensitivity simulations at a spatial resolution of 2 km are carried out and compared to observations. Given the importance of wind storms, the analysis is based on case studies of 24 historical wind storms that caused great economic damage in Switzerland. Each of these events is downscaled using eight different model set-ups, all sharing the same driving data set. The results show that the lack of representation of unresolved topography leads to a general overestimation of wind speed in WRF. However, this bias can be substantially reduced by using a PBL scheme that explicitly considers the effects of non-resolved topography, which also improves the spatial structure of wind speed over Switzerland. The wind direction, although generally well reproduced, is not very sensitive to the PBL scheme. Further sensitivity tests include four types of nesting methods: nesting only at the boundaries of the outermost domain, analysis nudging, spectral nudging, and the so-called re-forecast method, where the simulation is frequently restarted. These simulations show that restricting the freedom of the model to develop large-scale disturbances slightly increases the temporal agreement with the observations, while further reducing the overestimation of wind speed, especially for maximum wind peaks. The model performance is also evaluated in the outermost domains, where the resolution is coarser.
The results demonstrate the important role of horizontal resolution: the step from 6 to 2 km grid size significantly improves model performance. In summary, the combination of a 2 km grid, the non-local PBL scheme modified to explicitly account for non-resolved orography, and analysis or spectral nudging is the superior set-up when dynamical downscaling aims at reproducing real wind fields.
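In WRF's namelist.input, the spectral-nudging set-up described above is controlled by the &fdda block; a fragment along these lines is typical (values are illustrative, and the wavenumbers must be chosen so that only scales actually resolved by the driving data are nudged):

```fortran
&fdda
 grid_fdda            = 2, 0, 0,   ! 2 = spectral nudging, outermost domain only
 gfdda_inname         = "wrffdda_d<domain>",
 gfdda_interval_m     = 360, 0, 0, ! nudge toward 6-hourly driving fields
 gfdda_end_h          = 10000, 0, 0,
 xwavenum             = 3,         ! nudge only the largest horizontal scales
 ywavenum             = 3,
 if_no_pbl_nudging_uv = 1,         ! leave boundary-layer winds free
/
```

Setting grid_fdda = 1 instead selects analysis (grid) nudging; excluding the PBL from nudging is what lets the model develop its own topographically forced surface winds while the synoptic scales stay tied to the driving data.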

Relevance:

30.00%

Publisher:

Abstract:

Today, there is little knowledge of the attitude state of decommissioned intact objects in Earth orbit. Observational means have advanced in the past years, but are still limited with respect to an accurate estimate of motion vector orientations and magnitudes. Such knowledge is needed especially for the preparation of Active Debris Removal (ADR) missions as planned by ESA's Clean Space initiative, or for contingency scenarios for ESA spacecraft like ENVISAT. ESA's "Debris Attitude Motion Measurements and Modelling" project (ESA Contract No. 40000112447), led by the Astronomical Institute of the University of Bern (AIUB), addresses this problem. The goal of the project is to achieve a good understanding of the attitude evolution and of the considerable internal and external effects that occur. To characterize the attitude state of selected targets in LEO and GTO, multiple observation methods are combined: optical observations are carried out by AIUB, Satellite Laser Ranging (SLR) is performed by the Space Research Institute of the Austrian Academy of Sciences (IWF), and radar measurements and signal level determination are provided by the Fraunhofer Institute for High Frequency Physics and Radar Techniques (FHR). The In-Orbit Tumbling Analysis tool (ιOTA) is a prototype software currently in development by Hyperschall Technologie Göttingen GmbH (HTG) within the framework of the project. ιOTA will be a highly modular software tool to perform short- (days), medium- (months) and long-term (years) propagation of the orbit and attitude motion (six degrees of freedom) of spacecraft in Earth orbit.
The simulation takes into account all relevant acting forces and torques, including aerodynamic drag, solar radiation pressure, gravitational influences of the Earth, Sun and Moon, eddy-current damping, impulse and momentum transfer from space debris or micrometeoroid impact, as well as the optional definition of particular spacecraft-specific influences like tank sloshing, reaction wheel behaviour, magnetic torquer activity and thruster firing. The purpose of ιOTA is to provide high-accuracy short-term simulations to support observers and potential ADR missions, as well as medium- and long-term simulations to study the significance of the particular internal and external influences on the attitude, especially damping factors and momentum transfer. The simulation will also enable the investigation of the altitude dependency of the particular external influences. ιOTA's post-processing modules will generate synthetic measurements for observers and for software validation. The validation of the software will be done by cross-calibration with observations and measurements acquired by the project partners.
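The attitude half of such a six-degree-of-freedom propagation reduces, in the torque-free limit, to integrating Euler's rigid-body equations for the body-frame angular velocity. The sketch below uses a fixed-step RK4 integrator with synthetic inertia values and checks that rotational kinetic energy is conserved; it is a minimal illustration, not ιOTA's actual propagator:

```python
import numpy as np

# Principal moments of inertia [kg m^2]; synthetic, not ENVISAT's.
I = np.array([3000.0, 8000.0, 9000.0])

def omega_dot(w):
    """Torque-free Euler equations in the body frame:
    I1 w1' = (I2 - I3) w2 w3, and cyclic permutations."""
    return np.array([
        (I[1] - I[2]) * w[1] * w[2] / I[0],
        (I[2] - I[0]) * w[2] * w[0] / I[1],
        (I[0] - I[1]) * w[0] * w[1] / I[2],
    ])

def rk4_step(w, dt):
    k1 = omega_dot(w)
    k2 = omega_dot(w + 0.5 * dt * k1)
    k3 = omega_dot(w + 0.5 * dt * k2)
    k4 = omega_dot(w + dt * k3)
    return w + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

w = np.array([0.05, 0.01, 0.001])     # tumble rate [rad/s]
E0 = 0.5 * float(I @ w**2)            # rotational kinetic energy
for _ in range(10_000):               # propagate 10 000 s at dt = 1 s
    w = rk4_step(w, 1.0)
E1 = 0.5 * float(I @ w**2)
print(abs(E1 - E0) / E0)              # relative energy drift stays tiny
```

Real propagators add the perturbing torques listed above (drag, radiation pressure, eddy currents) on the right-hand side and integrate the attitude quaternion alongside the rates.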