934 results for upscale extensions


Relevance: 10.00%

Abstract:

A permanent electric dipole moment of the neutron violates time reversal symmetry as well as parity symmetry. It therefore also violates the combined symmetry of charge conjugation and parity, provided the combination of all three symmetries is a symmetry of nature. The violation of these symmetries could help to explain the observed baryon content of the Universe. The Standard Model of particle physics predicts a neutron electric dipole moment of only about 10^−32 e·cm. At the same time, the combined violation of charge conjugation and parity symmetry in the Standard Model is insufficient to explain the observed baryon asymmetry of the Universe. Several extensions of the Standard Model can explain the observed baryon asymmetry and also predict values for the neutron electric dipole moment just below the current best experimental limit of |d_n| < 2.9 × 10^−26 e·cm (90% C.L.), obtained by the Sussex-RAL-ILL collaboration in 2006. The very same experiment that set this limit has been upgraded and moved to the Paul Scherrer Institute, where an international collaboration is now aiming to increase the sensitivity to an electric dipole moment by more than an order of magnitude. This thesis took place within the framework of this experiment and accompanied its commissioning up to first data taking. After a short outline of the theoretical background in chapter 1, the experiment with all subsystems and their performance is described in detail in chapter 2. To reach the target sensitivity, the control of systematic errors is as important as an increase in statistical sensitivity. Known systematic effects are described and evaluated in chapter 3. During about ten days in 2012, a first set of data was taken with the experiment at the Paul Scherrer Institute. An analysis of these data is presented in chapter 4, together with general tools developed for future analysis efforts. The resulting upper limit on the electric dipole moment of the neutron is |d_n| ≤ 6.4 × 10^−25 e·cm (95% C.L.). Chapter 5 presents investigations for a next-generation experiment into building electrodes made partly from insulating material. Among other advantages, such electrodes would reduce magnetic noise generated by the thermal movement of charge carriers. The last chapter summarizes this work and gives an outlook.
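For context, such limits come from comparing the neutron spin precession frequency for parallel and antiparallel electric and magnetic fields; schematically, the standard Ramsey-type relations (not quoted from the abstract) are

\[ h\nu_{\uparrow\uparrow} = 2\,|\mu_n B + d_n E|, \qquad h\nu_{\uparrow\downarrow} = 2\,|\mu_n B - d_n E|, \qquad d_n = \frac{h\,(\nu_{\uparrow\uparrow} - \nu_{\uparrow\downarrow})}{4E} \quad (\text{for fixed } B), \]

so the electric dipole moment appears as a tiny, E-field-correlated shift of the precession frequency.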

Relevance: 10.00%

Abstract:

Data sets describing the state of the earth's atmosphere are of great importance in the atmospheric sciences. Over the last decades, the quality and sheer amount of the available data increased significantly, resulting in a rising demand for new tools capable of handling and analysing these large, multidimensional sets of atmospheric data. The interdisciplinary work presented in this thesis covers the development and the application of practical software tools and efficient algorithms from the field of computer science, with the goal of enabling atmospheric scientists to analyse and gain new insights from these large data sets. For this purpose, our tools combine novel techniques with well-established methods from different areas such as scientific visualization and data segmentation. In this thesis, three practical tools are presented. Two of these tools are software systems (Insight and IWAL) for different types of processing and interactive visualization of data; the third tool is an efficient algorithm for data segmentation implemented as part of Insight. Insight is a toolkit for the interactive, three-dimensional visualization and processing of large sets of atmospheric data, originally developed as a testing environment for the novel segmentation algorithm. It provides a dynamic system for combining data from different sources at runtime, a variety of data processing algorithms, and several visualization techniques. Its modular architecture and flexible scripting support led to additional applications of the software, of which two examples are presented: the use of Insight as a WMS (web map service) server, and the automatic production of image sequences for the visualization of cyclone simulations. The core application of Insight is the provision of the novel segmentation algorithm for the efficient detection and tracking of 3D features in large sets of atmospheric data, as well as for the precise localization of the occurring genesis, lysis, merging and splitting events. Data segmentation usually leads to a significant reduction of the size of the considered data. This enables a practical visualization of the data, statistical analyses of the features and their events, and the manual or automatic detection of interesting situations for subsequent detailed investigation. The concepts of the novel algorithm, its technical realization, and several extensions for avoiding under- and over-segmentation are discussed. As example applications, this thesis covers the setup and the results of the segmentation of upper-tropospheric jet streams and cyclones as full 3D objects. Finally, IWAL is presented, a web application providing easy interactive access to meteorological data visualizations, primarily aimed at students. As a web application, it avoids the need to retrieve all input data sets and to install and handle complex visualization tools on a local machine. The main challenge in providing customizable visualizations to large numbers of simultaneous users was to find an acceptable trade-off between the available visualization options and the performance of the application. Besides the implementation details, benchmarks and the results of a user survey are presented.
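As an illustration of feature detection and event classification of this kind, here is a minimal Python sketch of threshold segmentation plus overlap-based tracking. It is not the Insight implementation; the threshold, the synthetic data, and all names are invented for the example.

# Minimal sketch of threshold-based 3D feature segmentation and
# overlap tracking (illustrative only; not the thesis's algorithm).
import numpy as np
from scipy import ndimage

def segment(field, threshold):
    """Label connected 3D regions where the field exceeds a threshold."""
    labels, n = ndimage.label(field > threshold)
    return labels, n

def track(labels_t0, labels_t1):
    """Classify genesis/lysis/merge/split events between two time steps."""
    ids0 = set(np.unique(labels_t0)) - {0}
    ids1 = set(np.unique(labels_t1)) - {0}
    succ = {i: set(np.unique(labels_t1[labels_t0 == i])) - {0} for i in ids0}
    pred = {j: set(np.unique(labels_t0[labels_t1 == j])) - {0} for j in ids1}
    events = []
    for i, s in succ.items():
        if not s:
            events.append(("lysis", int(i)))
        elif len(s) > 1:
            events.append(("split", int(i), sorted(map(int, s))))
    for j, p in pred.items():
        if not p:
            events.append(("genesis", int(j)))
        elif len(p) > 1:
            events.append(("merge", sorted(map(int, p)), int(j)))
    return events

# Synthetic example: one blob at t0 splits into two blobs at t1.
t0 = np.zeros((4, 20, 20)); t0[:, 5:15, 8:12] = 1.0
t1 = np.zeros((4, 20, 20)); t1[:, 5:9, 8:12] = 1.0; t1[:, 11:15, 8:12] = 1.0
l0, _ = segment(t0, 0.5)
l1, _ = segment(t1, 0.5)
print(track(l0, l1))  # expect a single 'split' event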

Relevance: 10.00%

Abstract:

Graphene nanoribbons (GNRs), defined as nanometer-wide strips of graphene, are attracting increasing attention as one of the most promising materials for future nanoelectronics. Unlike zero-bandgap graphene, which cannot be switched off in transistors, GNRs possess open bandgaps that depend critically on their width and edge structures. GNRs have predominantly been prepared through "top-down" methods such as "cutting" of graphene and "unzipping" of carbon nanotubes, but these methods cannot precisely control the structure of the resulting GNRs. In contrast, the "bottom-up" chemical synthetic approach enables the fabrication of structurally defined and uniform GNRs from tailor-made polyphenylene precursors. Nevertheless, the width and length of the GNRs obtainable by this method have been considerably limited. In this study, lateral as well as longitudinal extension of GNRs was achieved while preserving the high structural definition, based on bottom-up solution synthesis. Initially, wider (~2 nm) GNRs were synthesized by using laterally expanded monomers through AA-type Yamamoto polymerization, which proved more efficient than the conventional A2B2-type Suzuki polymerization. The wider GNRs showed a broad absorption profile extending into the near-infrared region with a low optical bandgap of 1.12 eV, indicating the potential of such GNRs for applications in photovoltaic cells. Next, high longitudinal extension of narrow (~1 nm) GNRs to over 600 nm was accomplished based on AB-type Diels-Alder polymerization, which provided the corresponding polyphenylene precursors with weight-average molecular weights larger than 600,000 g/mol. Bulky alkyl chains densely installed at the peripheral positions of these GNRs enhanced their liquid-phase processability, which allowed the formation of highly ordered self-assembled monolayers. Furthermore, non-contact time-resolved terahertz spectroscopy measurements demonstrated high charge-carrier mobility within individual GNRs. Remarkably, lateral extension of the AB-type monomer enabled the fabrication of wider (~2 nm) and long (>100 nm) GNRs through the Diels-Alder polymerization. Such longitudinally extended and structurally well-defined GNRs are expected to allow the fabrication of single-ribbon transistors for fundamental studies of the electronic properties of GNRs, as well as to contribute to the development of future electronic devices.
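As a quick consistency check of the near-infrared absorption claim, the optical bandgap translates into an absorption onset wavelength via the standard relation (values rounded):

\[ \lambda_{\text{onset}} = \frac{hc}{E_g} \approx \frac{1240\ \text{eV·nm}}{1.12\ \text{eV}} \approx 1110\ \text{nm}, \]

which indeed lies in the near-infrared.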

Relevance: 10.00%

Abstract:

Although the Standard Model of particle physics (SM) provides an extremely successful description of ordinary matter, astronomical observations show that it accounts for only around 5% of the total energy density of the Universe, whereas around 30% is contributed by dark matter. Motivated by anomalies in cosmic ray observations and by attempts to resolve open questions of the SM such as the (g-2)_mu discrepancy, proposed U(1) extensions of the SM gauge group have attracted attention in recent years. In the considered U(1) extensions, a new, light messenger particle, the hidden photon, couples to the hidden sector as well as to the electromagnetic current of the SM through kinetic mixing. This allows for a search for this particle in laboratory experiments exploring the electromagnetic interaction. Various experimental programs have been started to search for hidden photons, for example in electron-scattering experiments, which are a versatile tool to explore various physics phenomena. One approach is the dedicated search in fixed-target experiments at modest energies, as performed at MAMI or at JLAB. In these experiments, the scattering of an electron beam off a hadronic target, e+(A,Z)->e+(A,Z)+l^+l^-, is investigated, and a search for a very narrow resonance in the invariant mass distribution of the lepton pair is performed. This requires an accurate understanding of the theoretical basis of the underlying processes. For this purpose, the first part of this work demonstrates how the hidden photon can be motivated from existing puzzles encountered at the precision frontier of the SM. The main part of this thesis deals with the analysis of the theoretical framework for electron-scattering fixed-target experiments searching for hidden photons. As a first step, the cross section for the bremsstrahlung emission of hidden photons in such experiments is studied. Based on these results, the applicability of the Weizsäcker-Williams approximation for calculating the signal cross section of the process, which is widely used to design such experimental setups, is investigated. In a next step, the reaction e+(A,Z)->e+(A,Z)+l^+l^- is analyzed as signal and background process in order to describe existing data obtained by the A1 experiment at MAMI, with the aim of giving accurate predictions of exclusion limits for the hidden photon parameter space. Finally, the derived methods are used to obtain predictions for future experiments, e.g., at MESA or at JLAB, allowing for a comprehensive study of the discovery potential of these complementary experiments. In the last part, a feasibility study for probing the hidden photon model with rare kaon decays is performed. For this purpose, invisible as well as visible decays of the hidden photon are considered within different classes of models. This allows one to derive bounds on the parameter space from existing data and to estimate the reach of future experiments.
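The kinetic mixing mentioned above is commonly written as an additional term in the Lagrangian; in one standard convention (not quoted from the thesis):

\[ \mathcal{L} \supset -\tfrac{1}{4}\,F'_{\mu\nu}F'^{\mu\nu} + \tfrac{1}{2}\,m_{\gamma'}^2\,A'_\mu A'^{\mu} - \tfrac{\epsilon}{2}\,F_{\mu\nu}F'^{\mu\nu}, \]

where A' is the hidden photon field, F' its field strength, and the small parameter ε induces the coupling to the ordinary electromagnetic current.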

Relevance: 10.00%

Abstract:

The Standard Model of particle physics is a very successful theory that describes nearly all known processes of particle physics very precisely. Nevertheless, there are several observations that cannot be explained within the existing theory. In this thesis, two analyses with high-energy electrons and positrons using data of the ATLAS detector are presented: one probing the Standard Model of particle physics, and another searching for phenomena beyond the Standard Model.

The production of an electron-positron pair via the Drell-Yan process leads to a very clean signature in the detector with low background contributions. This allows for a very precise measurement of the cross-section and can be used as a precision test of perturbative quantum chromodynamics (pQCD), for which this process has been calculated at next-to-next-to-leading order (NNLO). The invariant mass spectrum m_ee is sensitive to parton distribution functions (PDFs), in particular to the poorly known distribution of antiquarks at large momentum fraction (Bjorken x). The measurement of the high-mass Drell-Yan cross-section in proton-proton collisions at a center-of-mass energy of sqrt(s) = 7 TeV is performed on a dataset collected with the ATLAS detector, corresponding to an integrated luminosity of 4.7 fb-1. The differential cross-section of pp -> Z/gamma + X -> e+e- + X is measured as a function of the invariant mass in the range 116 GeV < m_ee < 1500 GeV. The background is estimated using a data-driven method and Monte Carlo simulations. The final cross-section is corrected for detector effects and different levels of final-state radiation corrections. A comparison is made to various event generators and to predictions of pQCD calculations at NNLO. Good agreement within the uncertainties between measured cross-sections and Standard Model predictions is observed.

Examples of observed phenomena that cannot be explained by the Standard Model are the amount of dark matter in the universe and neutrino oscillations. To explain these phenomena, several extensions of the Standard Model have been proposed, some of them leading to new processes with a high multiplicity of electrons and/or positrons in the final state. A model-independent search in multi-object final states, with objects defined as electrons and positrons, is performed to search for these phenomena. The dataset collected at a center-of-mass energy of sqrt(s) = 8 TeV, corresponding to an integrated luminosity of 20.3 fb-1, is used. The events are separated into different categories using the object multiplicity. The data-driven background method already used for the cross-section measurement was developed further for up to five objects to estimate the number of events including fake contributions. Within the uncertainties, the comparison between data and Standard Model predictions shows no significant deviations.
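Schematically, the bin-wise differential cross-section behind such a measurement takes the form (a generic expression, not quoted from the thesis):

\[ \frac{d\sigma}{dm_{ee}} = \frac{N_{\text{data}} - N_{\text{bkg}}}{C \cdot L_{\text{int}} \cdot \Delta m_{ee}}, \]

where N_data and N_bkg are the observed and estimated background event counts in a mass bin of width Δm_ee, L_int is the integrated luminosity, and C corrects for detector efficiency and resolution effects.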

Relevance: 10.00%

Abstract:

The topic of this thesis is the development and combination of various numerical methods, as well as their application to problems of strongly correlated electron systems. Such materials exhibit many interesting physical properties, e.g. superconductivity and magnetic order, and play an important role in technological applications. Two different models are treated: the Hubbard model and the Kondo lattice model (KLM). Over the last decades, many insights have already been gained through the numerical solution of these models. Nevertheless, the physical origin of many effects remains hidden. The reason is the restriction of current methods to certain parameter regimes. One of the strongest limitations is the lack of efficient algorithms for low temperatures.

Based on the Blankenbecler-Scalapino-Sugar quantum Monte Carlo (BSS-QMC) algorithm, we present a numerically exact method that solves the Hubbard model and the KLM efficiently at very low temperatures. This method is applied to the Mott transition in the two-dimensional Hubbard model. In contrast to earlier studies, we can clearly rule out a Mott transition at finite temperatures and finite interactions.

On the basis of this exact BSS-QMC algorithm, we have developed an impurity solver for dynamical mean-field theory (DMFT) and its cluster extensions (CDMFT). DMFT is the prevailing theory of strongly correlated systems in cases where conventional band-structure calculations fail. A main limitation is the availability of efficient impurity solvers for the intrinsic quantum problem. The algorithm developed in this work has the same superior scaling with inverse temperature as BSS-QMC. We study the Mott transition within DMFT and analyze the influence of systematic errors on this transition.

Another prominent issue is the neglect of non-local interactions in DMFT. To address this, we combine direct BSS-QMC lattice calculations with CDMFT for the half-filled two-dimensional anisotropic Hubbard model, the doped Hubbard model, and the KLM. The results differ strongly between the models: while non-local correlations play an important role in the two-dimensional (anisotropic) model, the momentum dependence of the self-energy in the paramagnetic phase is much weaker for strongly doped systems and for the KLM. A remarkable finding is that the self-energy can be parametrized by the non-interacting dispersion. This special structure of the self-energy in momentum space can be very useful for classifying electronic correlation effects and opens the way for the development of new schemes beyond the limits of DMFT.
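The central observation about the momentum dependence can be stated compactly (notation assumed here, not taken from the abstract): in the regimes mentioned, the self-energy is well approximated by a function of frequency and the non-interacting dispersion alone,

\[ \Sigma(\mathbf{k}, \omega) \approx \Sigma(\varepsilon_{\mathbf{k}}, \omega), \]

so that the full momentum dependence enters only through the band energy ε_k.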

Relevance: 10.00%

Abstract:

Ozone (O3) is an important oxidizing and greenhouse gas in the Earth's atmosphere. It affects the climate, air quality, human health and vegetation. Ecosystems such as forests are sinks for tropospheric ozone and will become more heterogeneous in the future due to storms, plant pests and changes in land use. These heterogeneities are expected to reduce the uptake of greenhouse gases and to cause significant feedbacks on the climate system. The atmosphere-biosphere exchange of ozone is influenced by stomatal uptake, deposition on plant surfaces and soils, and chemical transformations. Understanding these processes and quantifying the ozone exchange for different ecosystems are prerequisites for extrapolating from local measurements to regional ozone fluxes.

The eddy covariance method is used to measure vertical turbulent ozone fluxes. Using closed-path eddy covariance systems based on fast chemiluminescence ozone sensors can lead to errors in the flux measurement. A direct comparison of ozone sensors mounted side by side provided insight into the factors that influence the accuracy of the measurements. Systematic differences between individual sensors and the influence of different inlet tube lengths were investigated by analyzing frequency spectra and determining correction factors for the ozone fluxes. The experimentally determined correction factors showed no significant difference from correction factors obtained with theoretical transfer functions, confirming the applicability of the theoretically derived factors for correcting ozone fluxes.

In summer 2011, measurements were carried out within the EGER (ExchanGE processes in mountainous Regions) project to contribute to a better understanding of atmosphere-biosphere ozone exchange in disturbed ecosystems. Ozone fluxes were measured on both sides of a forest edge separating a spruce forest from a windthrow. On the road-like clearing created by the storm "Kyrill" (2007), a secondary vegetation developed that differed from the originally dominant spruce forest in phenology and leaf physiology. The mean nighttime flux above the spruce forest was -6 to -7 nmol m-2 s-1 and decreased to -13 nmol m-2 s-1 around noon. The ozone fluxes showed a clear relationship with plant transpiration and CO2 uptake, indicating that during the day most of the ozone was taken up by the plant stomata. The relatively high nighttime deposition was caused by non-stomatal processes. Throughout the day, deposition above the forest was roughly twice as high as above the clearing. This ratio was consistent with the ratio of the plant area index (PAI). The disturbance of the ecosystem thus reduced the capacity of the vegetation to act as a sink for tropospheric ozone. The marked difference between the ozone fluxes of the two vegetation types illustrates the challenge of regionalizing ozone fluxes in heterogeneously forested areas.

The measured fluxes were also compared with simulations performed with the chemistry model MLC-CHEM. To evaluate the model with respect to the calculation of ozone fluxes, measured and modeled fluxes from two positions in the EGER area were used. Although the magnitude of the fluxes agreed, the results showed a significant difference between measured and modeled fluxes. Moreover, the difference clearly depended on relative humidity, decreasing with increasing humidity, which shows that the model requires further improvement before it can be used for comprehensive studies of the ozone flux.
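For reference, the eddy covariance method mentioned above determines the vertical turbulent ozone flux as the covariance of the fluctuations of the vertical wind speed w and the ozone concentration c (standard definition, not quoted from the thesis):

\[ F_{\mathrm{O_3}} = \overline{w'\,c'}, \]

where primes denote deviations from the temporal mean and the overbar a time average over the flux-averaging interval.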

Relevance: 10.00%

Abstract:

Time series are ubiquitous. The acquisition and processing of continuously measured data is found in all areas of the natural sciences, medicine and finance. The enormous growth of recorded data volumes, whether from automated monitoring systems or integrated sensors, demands exceptionally fast algorithms in theory and practice. Consequently, this thesis deals with the efficient computation of subsequence alignments. Complex algorithms such as anomaly detection, motif queries or the unsupervised extraction of prototypical building blocks in time series make extensive use of these alignments, hence the need for fast implementations. This thesis comprises three approaches that address this challenge: four alignment algorithms and their parallelization on CUDA-capable hardware, an algorithm for segmenting data streams, and a unified treatment of Lie-group-valued time series.

The first contribution is a complete CUDA port of the UCR suite, the world-leading implementation of subsequence alignment. It comprises a new computation scheme for determining local alignment scores under the z-normalized Euclidean distance, usable on any parallel hardware with support for fast Fourier transforms. Furthermore, we give a SIMT-compatible implementation of the UCR suite's lower-bound cascade for the efficient computation of local alignment scores under dynamic time warping. Both CUDA implementations enable computation one to two orders of magnitude faster than established methods.

Second, we investigate two linear-time approximations for the elastic alignment of subsequences. On the one hand, we treat a SIMT-compatible relaxation scheme for greedy DTW and its efficient CUDA parallelization. On the other hand, we introduce a new local distance measure, the Gliding Elastic Match (GEM), which can be computed with the same asymptotic time complexity as greedy DTW but offers a complete relaxation of the penalty matrix. Further improvements include invariance against trends on the measurement axis and uniform scaling on the time axis. Moreover, an extension of GEM to multi-shape segmentation is discussed and evaluated on motion data. Both CUDA parallelizations achieve runtime improvements of up to two orders of magnitude.

The treatment of time series in the literature is usually restricted to real-valued measurement data. The third contribution is a unified method for handling Lie-group-valued time series. Building on this, distance measures on the rotation group SO(3) and on the Euclidean group SE(3) are treated. Furthermore, memory-efficient representations and group-compatible extensions of elastic measures are discussed.
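To make the central primitive concrete, here is a minimal NumPy sketch of subsequence search under the z-normalized Euclidean distance, the score used by the UCR suite. The thesis's contribution is an FFT-based, CUDA-parallel computation of exactly this quantity; the naive CPU loop below makes no attempt to reproduce that.

# Sliding z-normalized Euclidean distance (naive reference version).
import numpy as np

def znorm_distances(series, query):
    """Distance of the z-normalized query to every z-normalized window."""
    m = len(query)
    q = (query - query.mean()) / query.std()
    out = np.empty(len(series) - m + 1)
    for i in range(len(out)):
        w = series[i:i + m]
        w = (w - w.mean()) / w.std()
        # z-normalized Euclidean distance; equals sqrt(2m(1 - corr))
        out[i] = np.sqrt(np.sum((w - q) ** 2))
    return out

rng = np.random.default_rng(0)
ts = rng.standard_normal(1000)
print(znorm_distances(ts, ts[100:150]).argmin())  # expect 100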

Relevance: 10.00%

Abstract:

Nowadays, data handling and data analysis in High Energy Physics require a vast amount of computational power and storage. In particular, the worldwide LHC Computing Grid (LCG), an infrastructure and pool of services developed and deployed by a large community of physicists and computer scientists, has proven to be a game changer in the efficiency of data analyses during Run-I at the LHC, playing a crucial role in the Higgs boson discovery. Recently, the Cloud computing paradigm has been emerging and reaching a considerable adoption level among many scientific organizations and beyond. Clouds allow access to and use of large, not self-owned computing resources shared among many scientific communities. Considering the challenging requirements of LHC physics in Run-II and beyond, the LHC computing community is interested in exploring Clouds and seeing whether they can provide a complementary approach, or even a valid alternative, to the existing technological solutions based on the Grid. In the LHC community, several experiments have been adopting Cloud approaches, and in particular the experience of the CMS experiment is of relevance to this thesis. The LHC Run-II has just started, and Cloud-based solutions are already in production for CMS. However, other approaches to Cloud usage are being considered and are at the prototype level, such as the work done in this thesis. This effort is of paramount importance for equipping CMS with the capability to elastically and flexibly access and utilize the computing resources needed to face the challenges of Run-III and Run-IV. The main purpose of this thesis is to present forefront Cloud approaches that allow the CMS experiment to extend to on-demand resources dynamically allocated as needed. Moreover, direct access to Cloud resources is presented as a suitable use case to meet the CMS experiment's needs. Chapter 1 presents an overview of High Energy Physics at the LHC and of the CMS experience in Run-I, as well as the preparation for Run-II. Chapter 2 describes the current CMS Computing Model, and Chapter 3 presents the Cloud approaches pursued and used within the CMS Collaboration. Chapters 4 and 5 discuss the original and forefront work done in this thesis to develop and test working prototypes of elastic extensions of CMS computing resources on Clouds, and of HEP Computing "as a Service". The impact of this work on benchmark CMS physics use cases is also demonstrated.

Relevance: 10.00%

Abstract:

BACKGROUND: Physiologic data display is essential to decision making in critical care. Current displays echo first-generation hemodynamic monitors dating to the 1970s and have not kept pace with new insights into physiology or the needs of clinicians who must make progressively more complex decisions about their patients. The effectiveness of any redesign must be tested before deployment. Tools that compare current displays with novel presentations of processed physiologic data are required. Regenerating conventional physiologic displays from archived physiologic data is an essential first step. OBJECTIVES: The purposes of the study were to (1) describe the SSSI (single sensor single indicator) paradigm that is currently used for physiologic signal displays, (2) identify and discuss possible extensions and enhancements of the SSSI paradigm, and (3) develop a general approach and a software prototype to construct such "extended SSSI displays" from raw data. RESULTS: We present the Multi Wave Animator (MWA) framework, a set of open-source MATLAB (MathWorks, Inc., Natick, MA, USA) scripts aimed at creating dynamic visualizations (e.g., video files in AVI format) of patient vital signs recorded from bedside (intensive care unit or operating room) monitors. Multi Wave Animator creates animations in which vital signs are displayed to mimic their appearance on current bedside monitors. The source code of MWA is freely available online together with a detailed tutorial and sample data sets.
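MWA itself is a set of MATLAB scripts; as a rough illustration of the underlying idea, the following Python sketch replays an archived (here synthetic) waveform as a scrolling, monitor-style animation. The sampling rate, window length, and output file name are invented for the example.

# Replay a stored waveform as a sweeping bedside-monitor-style video
# (a loose Python analogue of the MWA idea, not the MWA code itself).
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation

fs = 125                          # assumed sampling rate in Hz
t = np.arange(0, 30, 1 / fs)      # 30 s of synthetic "vital sign" data
sig = np.sin(2 * np.pi * 1.2 * t) + 0.1 * np.random.default_rng(0).standard_normal(t.size)

window = 5 * fs                   # 5-second sweep, as on a bedside monitor
fig, ax = plt.subplots()
line, = ax.plot([], [])
ax.set_xlim(0, 5); ax.set_ylim(-2, 2)
ax.set_xlabel("time (s)"); ax.set_ylabel("amplitude")

def update(frame):
    start = frame * fs // 10      # advance the sweep by ~0.1 s per frame
    seg = sig[start:start + window]
    line.set_data(np.arange(seg.size) / fs, seg)
    return (line,)

anim = FuncAnimation(fig, update, frames=200, interval=50, blit=True)
anim.save("vitals.mp4")           # needs ffmpeg; a Pillow GIF writer also works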

Relevance: 10.00%

Abstract:

Objectives: To evaluate the biological and technical complication rates of fixed dental prostheses (FDPs) with end abutments or cantilever extensions supported by teeth (FDP-tt/cFDP-tt), by implants (FDP-ii/cFDP-ii), or by tooth-implant combinations (FDP-ti/cFDP-ti) in patients treated for chronic periodontitis. Material and methods: From a cohort of 392 patients treated between 1978 and 2002 by graduate students, 199 were re-examined in 2005. Of these, 84 patients had received ceramo-metal FDPs (six groups). Results: At the re-evaluation, the mean age of the patients was 62 years (36.2-83.4). One hundred and seventy-five FDPs were seated (82 FDP-tt, 9 FDP-ii, 20 FDP-ti, 39 cFDP-tt, 15 cFDP-ii, 10 cFDP-ti). The mean observation time was 11.3 years; 21 FDPs were lost, and 46 technical and 50 biological complications occurred. Chances for the survival of the three groups of FDPs with end abutments were very high (risk for failure 2.8%, 0%, 5.6%). The probability of remaining without complications and/or failure at 10 years was 70.3%, 88.9% and 74.7% for FDPs with end abutments, but only 25-49.8% for FDPs with extensions. Conclusions: In patients treated for chronic periodontitis and provided with ceramo-metal FDPs, high survival rates, especially for FDPs with end abutments, can be expected. The incidence rates of negative events increased drastically in the three groups with extension cFDPs (tt, ii, ti). Strategic decisions in the choice of a particular FDP design and the choice of teeth/implants as abutments appear to influence the risks for complications to be expected with fixed reconstructions. If possible, extensions on tooth abutments should be avoided or used only after a cautious clinical evaluation of all options.

Relevance: 10.00%

Abstract:

Domain-specific languages (DSLs) are increasingly used as embedded languages within general-purpose host languages. DSLs provide a compact, dedicated syntax for specifying parts of an application related to specialized domains. Unfortunately, such language extensions typically do not integrate well with the development tools of the host language. Editors, compilers and debuggers are either unaware of the extensions, or must be adapted at a non-trivial cost. We present a novel approach to embed DSLs into an existing host language by leveraging the underlying representation of the host language used by these tools. Helvetia is an extensible system that intercepts the compilation pipeline of the Smalltalk host language to seamlessly integrate language extensions. We validate our approach by case studies that demonstrate three fundamentally different ways to extend or adapt the host language syntax and semantics.
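Helvetia achieves this by intercepting the Smalltalk compilation pipeline; as a loose analogue in another host language, the sketch below rewrites a tiny hypothetical units DSL into plain arithmetic by hooking Python's AST, so that the standard compiler and tools see ordinary host-language code. All names and the DSL itself are invented for the example.

# A minimal analogue of the homogeneous-representation idea behind
# Helvetia: transform DSL constructs at the level of the host
# language's own program representation (here, Python's AST).
import ast

class UnitsDSL(ast.NodeTransformer):
    """Rewrite q(value, 'unit')-style DSL calls into plain arithmetic."""
    SCALE = {"km": 1000.0, "m": 1.0, "cm": 0.01}   # hypothetical DSL

    def visit_Call(self, node):
        self.generic_visit(node)
        if (isinstance(node.func, ast.Name) and node.func.id == "q"
                and isinstance(node.args[1], ast.Constant)):
            scale = self.SCALE[node.args[1].value]
            return ast.BinOp(node.args[0], ast.Mult(), ast.Constant(scale))
        return node

source = "print(q(3, 'km') + q(250, 'm'))"
tree = ast.fix_missing_locations(UnitsDSL().visit(ast.parse(source)))
exec(compile(tree, "<dsl>", "exec"))  # prints 3250.0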

Relevance: 10.00%

Abstract:

Let G be a locally finite group satisfying the condition given in the title and suppose that G is not nilpotent-by-Chernikov. It is shown that G has a section S that is not nilpotent-by-Chernikov, where S is either a p-group or a semi-direct product of the additive group A of a locally finite field F by a subgroup K of the multiplicative group of F, where K acts by multiplication on A and generates F as a ring. Non-(nilpotent-by-Chernikov) extensions of this latter kind exist and are described in detail.
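In symbols (notation assumed here), the second kind of section is the semidirect product

\[ S = A \rtimes K, \qquad A = (F, +), \quad K \le F^{\times}, \quad k \cdot a = ka \quad (k \in K,\ a \in A), \]

where K is additionally required to generate F as a ring.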

Relevance: 10.00%

Abstract:

To every partially ordered set (poset), one can associate a generating function, known as the P-partition generating function. We find necessary conditions and sufficient conditions for two posets to have the same P-partition generating function. We define the notion of a jump sequence for a labeled poset and show that having equal jump sequences is a necessary condition for generating function equality. We also develop multiple ways of modifying posets that preserve generating function equality. Finally, we give a complete classification of equalities among partially ordered sets with exactly two linear extensions.
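For reference, in one common convention (following Stanley; the abstract does not spell it out), the generating function attached to a labeled poset (P, ω) is

\[ K_{(P,\omega)}(x_1, x_2, \ldots) \;=\; \sum_{f} \; \prod_{p \in P} x_{f(p)}, \]

where the sum runs over all (P, ω)-partitions f : P → \mathbb{Z}_{>0}, i.e. maps compatible with the order of P whose strictness along covering relations is dictated by the labeling ω.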

Relevance: 10.00%

Abstract:

Stimulation of human epileptic tissue can induce rhythmic, self-terminating responses on the EEG or ECoG. These responses play a potentially important role in localising tissue involved in the generation of seizure activity, yet the underlying mechanisms are unknown. However, in vitro evidence suggests that self-terminating oscillations in nervous tissue are underpinned by non-trivial spatio-temporal dynamics in an excitable medium. In this study, we investigate this hypothesis in spatial extensions to a neural mass model for epileptiform dynamics. We demonstrate that spatial extensions to this model in one and two dimensions display propagating travelling waves but also more complex transient dynamics in response to local perturbations. The neural mass formulation with local excitatory and inhibitory circuits, allows the direct incorporation of spatially distributed, functional heterogeneities into the model. We show that such heterogeneities can lead to prolonged reverberating responses to a single pulse perturbation, depending upon the location at which the stimulus is delivered. This leads to the hypothesis that prolonged rhythmic responses to local stimulation in epileptogenic tissue result from repeated self-excitation of regions of tissue with diminished inhibitory capabilities. Combined with previous models of the dynamics of focal seizures this macroscopic framework is a first step towards an explicit spatial formulation of the concept of the epileptogenic zone. Ultimately, an improved understanding of the pathophysiologic mechanisms of the epileptogenic zone will help to improve diagnostic and therapeutic measures for treating epilepsy.
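To illustrate the excitable-medium behaviour invoked above, here is a generic 1D reaction-diffusion sketch (FitzHugh-Nagumo, chosen for brevity) in which a local pulse perturbation ignites travelling waves. It is not the thesis's neural mass model, and all parameters are illustrative.

# Generic 1D excitable medium: a local pulse triggers two
# counter-propagating excitation waves (illustrative only).
import numpy as np

n, dx, dt = 400, 0.5, 0.01
a, b, eps, D = 0.1, 0.5, 0.01, 1.0
v = np.zeros(n); w = np.zeros(n)
v[195:205] = 1.0                      # local pulse perturbation at the centre

for _ in range(10000):                # integrate to t = 100
    lap = (np.roll(v, 1) - 2 * v + np.roll(v, -1)) / dx**2
    v, w = (v + dt * (v * (1 - v) * (v - a) - w + D * lap),
            w + dt * eps * (v - b * w))

print(np.flatnonzero(v > 0.5))        # active sites far from the initial seed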