933 results for Set of Weak Stationary Dynamic Actions
Abstract:
Electronic applications are nowadays converging under the umbrella of the cloud computing vision. The future ecosystem of information and communication technology will integrate clouds of portable clients and embedded devices exchanging information, through the internet layer, with processing clusters of servers, data centers and high performance computing systems. Even though society as a whole is ready to embrace this revolution, there is another side to the story. Portable devices require batteries to operate away from power outlets, and their storage capacity does not scale with their increasing power requirements. At the other end, processing clusters such as data centers and server farms are built upon the integration of thousands of multiprocessors. For each of them, technology scaling over the last decade has produced a dramatic increase in power density with significant spatial and temporal variability. This leads to power and temperature hot-spots, which may cause non-uniform ageing and accelerated chip failure. Moreover, all the heat removed from the silicon translates into high cooling costs. Trends in the ICT carbon footprint also show that the run-time power consumption of the whole spectrum of devices accounts for a significant share of worldwide carbon emissions. This thesis embraces the full ICT ecosystem and its dynamic power consumption concerns by describing a set of new and promising system-level resource management techniques that reduce power consumption and related issues for two corner cases: Mobile Devices and High Performance Computing.
Abstract:
The focus of this thesis was the in-situ application of the new analytical technique "GCxGC" in both the marine and continental boundary layer, as well as in the free troposphere. Biogenic and anthropogenic VOCs were analysed and used to characterise the local chemistry at the individual measurement sites. The first part of the thesis work was the characterisation of a new set of columns that was to be used later in the field. To simplify identification, a time-of-flight mass spectrometer (TOF-MS) detector was coupled to the GCxGC. In the field the TOF-MS was replaced by a more robust and tractable flame ionisation detector (FID), which is better suited to quantitative measurements. During this process, a variety of volatile organic compounds could be assigned to different environmental sources, e.g. plankton, eucalyptus forest or urban centres. In-situ measurements of biogenic and anthropogenic VOCs were conducted at the Meteorological Observatory Hohenpeissenberg (MOHP), Germany, using a thermodesorption-GCxGC-FID system. The measured VOCs were compared to GC-MS measurements routinely conducted at the MOHP as well as to PTR-MS measurements. Furthermore, a compressed ambient air standard was measured on three different gas chromatographic instruments and the results were compared. With few exceptions, the in-situ as well as the standard measurements revealed good agreement between the individual instruments. Diurnal cycles were observed, with differing patterns for the biogenic and the anthropogenic compounds. The variability-lifetime relationship of compounds with atmospheric lifetimes from a few hours to a few days in the presence of O3 and OH was examined. It revealed a weak but significant influence of chemistry on these short-lived VOCs at the site. The relationship was also used to estimate the average OH radical concentration during the campaign, which was compared to in-situ OH measurements (1.7 x 10^6 molecules/cm^3, 0.071 ppt) for the first time. The OH concentration of 3.5 to 6.5 x 10^5 molecules/cm^3 (0.015 to 0.027 ppt) obtained with this method represents an approximation of the average OH concentration influencing the discussed VOCs from emission to measurement. Based on these findings, the average concentration of the nighttime NO3 radical was estimated using the same approach and found to range from 2.2 to 5.0 x 10^8 molecules/cm^3 (9.2 to 21.0 ppt). During the MINATROC field campaign, in-situ ambient air measurements with the GCxGC-FID were conducted on Tenerife, Spain. Although the station lies mainly in the free troposphere, local influences of anthropogenic and biogenic VOCs were observed. A strong dust event originating from Western Africa made it possible to compare the mixing ratios during normal and elevated dust loading in the atmosphere. The mixing ratios during the dust event were found to be lower. However, this could not be attributed to heterogeneous reactions, as the wind direction changed from northwesterly to southeasterly during the dust event.
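The variability-lifetime approach mentioned above is commonly formulated as a power law between the observed variability of a compound and its local atmospheric lifetime; the following is a sketch of the generic form (symbols and comments are standard usage, not the thesis' exact notation):

```latex
% Variability-lifetime relationship (generic form):
% sigma_lnX : standard deviation of the natural log of the measured mixing ratios
% tau       : local atmospheric lifetime of the compound
% A, b      : fit parameters (0 < b < 1); small b indicates transport-dominated variability
\[
  \sigma_{\ln X} = A\,\tau^{-b},
  \qquad
  \tau_{\mathrm{OH}} = \frac{1}{k_{\mathrm{OH}}\,[\mathrm{OH}]},
\]
% Fitting A and b over compounds of known rate constants k_OH allows an average [OH]
% (and, analogously, a nighttime [NO3]) to be estimated from the measured variability.
```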
Abstract:
The objective of this thesis work is the refined estimation of source parameters. To this purpose we used two different approaches, one in the frequency domain and the other in the time domain. In the frequency domain, we analyzed the P- and S-wave displacement spectra to estimate the spectral parameters, that is, the corner frequencies and the low-frequency spectral amplitudes. We used a parametric modeling approach combined with a multi-step, non-linear inversion strategy that includes the correction for attenuation and site effects. The iterative multi-step procedure was applied to about 700 microearthquakes in the moment range 10^11-10^14 N·m, recorded by the dense, wide-dynamic-range seismic networks operating in the Southern Apennines (Italy). The analysis of the source parameters is often complicated when we are not able to model the propagation accurately. In this case the empirical Green function approach is a very useful tool to study the seismic source properties. In fact, Empirical Green Functions (EGFs) allow the contribution of propagation and site effects to the signal to be represented without using approximate velocity models. An EGF is a recorded three-component set of time histories of a small earthquake whose source mechanism and propagation path are similar to those of the master event. Thus, in the time domain, the deconvolution method of Vallée (2004) was applied to retrieve the relative source time functions (RSTFs) and to accurately estimate source size and rupture velocity. This technique was applied to 1) a large event, the Mw 6.3 2009 L'Aquila mainshock (Central Italy); 2) moderate events, a cluster of earthquakes of the 2009 L'Aquila sequence with moment magnitudes ranging between 3 and 5.6; and 3) a small event, the Mw 2.9 Laviano mainshock (Southern Italy).
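For context, the spectral parameters mentioned above (low-frequency level and corner frequency) are usually defined through a parametric source model of omega-square type corrected for attenuation; a generic form (not necessarily the exact parameterization adopted in the thesis) is:

```latex
% Generic omega-square displacement spectrum with anelastic attenuation:
% Omega_0 : low-frequency spectral level (proportional to the seismic moment M_0)
% f_c     : corner frequency
% t*      : whole-path attenuation operator (travel time divided by the quality factor Q)
\[
  \Omega(f) \;=\; \frac{\Omega_0\, e^{-\pi f t^{*}}}{1 + \left(f/f_c\right)^{2}},
  \qquad
  M_0 \;\propto\; \Omega_0 .
\]
```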
Abstract:
Neisseria meningitidis (Nm) is the major cause of septicemia and meningococcal meningitis. During the course of infection, it must adapt to different host environments, which is a crucial factor for its survival. Despite the severity of meningococcal sepsis, little is known about how Nm adapts to permit survival and growth in human blood. A previous time-course transcriptome analysis, using an ex vivo model of human whole blood infection, showed that Nm alters the expression of nearly 30% of the ORFs of its genome: major dynamic changes were observed in the expression of transcriptional regulators, transport and binding proteins, energy metabolism, and surface-exposed virulence factors. Starting from these data, mutagenesis studies of a subset of up-regulated genes were performed and the mutants were tested for the ability to survive in human whole blood; Nm mutant strains lacking the genes encoding NMB1483, NalP, Mip, NspA, Fur, TbpB, and LctP were sensitive to killing by human blood. The analysis was then extended to the whole Nm transcriptome in human blood, using a customized 60-mer oligonucleotide tiling microarray. The application of specifically developed software combined with this new tiling array allowed the identification of different types of regulated transcripts: small intergenic RNAs, antisense RNAs, 5' and 3' untranslated regions and operons. The expression of these RNA molecules was confirmed by a 5'-3' RACE protocol and specific RT-PCR. Here we describe the complete transcriptome of Nm during incubation in human blood; we were able to identify new proteins important for survival in human blood and also additional roles of previously known virulence factors in aiding survival in blood. In addition, the tiling array analysis demonstrated that Nm expresses a set of new, previously unidentified transcripts, and suggests the presence of a circuit of regulatory RNA elements used by Nm to adapt to and proliferate in human blood.
Abstract:
This thesis presents several data processing and compression techniques capable of addressing the strict requirements of wireless sensor networks. After a general overview of sensor networks, the energy problem is introduced, dividing the different energy reduction approaches according to the subsystem they try to optimize. To manage the complexity brought by these techniques, a quick overview of the most common middlewares for WSNs is given, describing in detail SPINE2, a framework for data processing in the node environment. The focus is then shifted to in-network aggregation techniques, used to reduce the data sent by the network nodes and thereby prolong the network lifetime as much as possible. Among the several techniques, the most promising approach is Compressive Sensing (CS). To investigate this technique, a practical implementation of the algorithm is compared against a simpler aggregation scheme, deriving a mixed algorithm able to successfully reduce the power consumption. The analysis then moves from compression implemented on single nodes to CS for signal ensembles, exploiting the correlations among sensors and nodes to improve compression and reconstruction quality. The two main techniques for signal ensembles, Distributed CS (DCS) and Kronecker CS (KCS), are introduced and compared on a common set of data gathered by real deployments. The best trade-off between reconstruction quality and power consumption is then investigated. The use of CS is also addressed when the signal of interest is sampled at a sub-Nyquist rate, evaluating the reconstruction performance. Finally, group sparsity CS (GS-CS) is compared to another well-known technique for the reconstruction of signals from a highly sub-sampled version. These two frameworks are again compared on a real data set, and an insightful analysis of the trade-off between reconstruction quality and lifetime is given.
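To make the single-node CS step concrete, the following is a minimal, self-contained sketch (not the thesis' implementation; the matrix sizes and the greedy recovery routine are illustrative assumptions) of compressing a sparse signal with random projections and reconstructing it with orthogonal matching pursuit:

```python
import numpy as np

def omp(Phi, y, k):
    """Orthogonal Matching Pursuit: recover a k-sparse x from y = Phi @ x."""
    residual = y.copy()
    support = []
    x_hat = np.zeros(Phi.shape[1])
    for _ in range(k):
        # pick the column most correlated with the current residual
        idx = np.argmax(np.abs(Phi.T @ residual))
        if idx not in support:
            support.append(idx)
        # least-squares fit on the selected support
        coeffs, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coeffs
    x_hat[support] = coeffs
    return x_hat

# toy example: a length-256 signal with 8 nonzeros, observed through 64 random projections
rng = np.random.default_rng(0)
n, m, k = 256, 64, 8
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.normal(size=k)
Phi = rng.normal(size=(m, n)) / np.sqrt(m)   # random sensing matrix
y = Phi @ x                                  # compressed measurements taken on the node
x_rec = omp(Phi, y, k)
print("relative reconstruction error:", np.linalg.norm(x - x_rec) / np.linalg.norm(x))
```

The node only computes and transmits the m measurements y; the costly reconstruction runs at the sink, which is the usual motivation for CS in energy-constrained WSNs.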
Abstract:
The topic of this thesis is the feedback stabilization of the attitude of magnetically actuated spacecraft. The use of magnetic coils is an attractive solution for the generation of control torques on small satellites flying inclined low Earth orbits, since magnetic control systems are characterized by reduced weight and cost, higher reliability, and lower power requirements with respect to other kinds of actuators. At the same time, the possibility of smooth modulation of the control torques reduces the coupling of the attitude control system with flexible modes, thus preserving pointing precision with respect to the case when pulse-modulated thrusters are used. The principle based on the interaction between the Earth's magnetic field and the magnetic field generated by the set of coils introduces an inherent nonlinearity, because control torques can be delivered only in the plane orthogonal to the direction of the geomagnetic field vector. In other words, the system is underactuated: the rotational degrees of freedom of the spacecraft, modeled as a rigid body, exceed the number of independent control actions. The solution of the control problem for underactuated spacecraft is also interesting in the case of actuator failure, e.g. after the loss of a reaction wheel in a three-axis stabilized spacecraft with no redundancy. The application of well-known control strategies is no longer possible in this case for either regulation or tracking, so new methods have been proposed for tackling this particular problem. The main contribution of this thesis is to propose continuous time-varying controllers that globally stabilize the attitude of a spacecraft, both when magnetic torquers alone are used and when a momentum wheel supports magnetic control in order to overcome the inherent underactuation. A kinematic maneuver planning scheme, stability analyses, and detailed simulation results are also provided, with new theoretical developments and particular attention to application considerations.
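The underactuation described above follows directly from the magnetic torque law; in standard notation (a generic statement, not specific to the controllers proposed in the thesis):

```latex
% Torque produced by the coils' magnetic dipole m interacting with the geomagnetic field B:
\[
  \boldsymbol{\tau}_{\mathrm{ctrl}} \;=\; \mathbf{m}\times\mathbf{B}(t)
  \quad\Longrightarrow\quad
  \boldsymbol{\tau}_{\mathrm{ctrl}}\cdot\mathbf{B}(t) \;=\; 0 ,
\]
% i.e. at every instant the achievable torque lies in the plane orthogonal to the local
% geomagnetic field, so no torque can be produced about the field direction; full attitude
% control is recovered only because B(t) rotates along the (inclined) orbit.
```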
Abstract:
Nowadays microfluidics is becoming an important technology in many chemical and biological processes and analysis applications. The potential to replace large-scale conventional laboratory instrumentation with miniaturized and self-contained systems, called lab-on-a-chip (LOC) or point-of-care-testing (POCT) devices, offers a variety of advantages such as low reagent consumption, faster analysis, and the capability of operating on a massively parallel scale in order to achieve high throughput. Micro-electro-mechanical-systems (MEMS) technologies enable both the fabrication of miniaturized systems and the development of compact and portable instruments. The work described in this dissertation is directed towards the development of micromachined separation devices for both high-speed gas chromatography (HSGC) and gravitational field-flow fractionation (GrFFF) using MEMS technologies. Concerning the HSGC, a complete platform of three MEMS-based GC core components (injector, separation column and detector) is designed, fabricated and characterized. The microinjector consists of a set of pneumatically driven microvalves based on a polymeric actuating membrane. Experimental results demonstrate that the microinjector is able to guarantee low dead volumes, fast actuation times, a wide operating temperature range and high chemical inertness. The separation column is an all-silicon microcolumn with a nearly circular channel cross-section. Its extensive characterization has yielded separation performances very close to the theoretical ideal expectations. A thermal conductivity detector (TCD) is chosen as the most suitable detector to be miniaturized, since reducing the volume of the detector chamber increases the mass sensitivity and reduces dead volumes. The microTCD shows a good sensitivity and a very wide dynamic range. Finally, a feasibility study for miniaturizing a channel suited for GrFFF is performed. The proposed GrFFF microchannel is at an early stage of development, but represents a first step towards the realization of a highly portable and potentially low-cost POCT device for biomedical applications.
Abstract:
Most of the problems in modern structural design can be described with a set of equations; the solutions of these mathematical models can give the engineer and designer useful information during the design stage. The same holds true for physical chemistry; this branch of chemistry uses mathematics and physics in order to explain real chemical phenomena. In this work two extremely different chemical processes are studied: the dynamics of an artificial molecular motor, and the generation and propagation of nervous signals in excitable cells and tissues such as neurons and axons. These two processes, in spite of their chemical and physical differences, can both be described successfully by partial differential equations: respectively, the Fokker-Planck equation and the Hodgkin-Huxley model. With the aid of advanced engineering software, these two processes have been modeled and simulated in order to extract physical information about them and to predict properties that can, in the future, be extremely useful during the design stage both of molecular motors and of devices whose action relies on nervous communication between active fibres.
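For reference, the two models named above are usually written as follows (textbook forms; the thesis-specific potentials, parameters and geometries are not reproduced here):

```latex
% One-dimensional Fokker-Planck (Smoluchowski) equation for the probability density p(x,t)
% of the motor coordinate x in a potential V(x), with friction gamma and diffusivity D:
\[
  \frac{\partial p}{\partial t}
  = \frac{\partial}{\partial x}\!\left[\frac{1}{\gamma}\frac{\partial V}{\partial x}\,p
  + D\,\frac{\partial p}{\partial x}\right].
\]
% Hodgkin-Huxley membrane equation for the potential V, with gating variables m, h, n:
\[
  C_m \frac{\partial V}{\partial t}
  = \bar g_{\mathrm{Na}}\, m^{3} h\,(E_{\mathrm{Na}}-V)
  + \bar g_{\mathrm{K}}\, n^{4}\,(E_{\mathrm{K}}-V)
  + g_{L}\,(E_{L}-V) + I_{\mathrm{ext}},
\]
% where each gating variable obeys dx/dt = alpha_x(V)(1-x) - beta_x(V) x; propagation along
% an axon adds the cable term (a / 2 R_i) * d^2 V / dz^2 to the right-hand side.
```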
Abstract:
In distributed systems like clouds or service-oriented frameworks, applications are typically assembled by deploying and connecting a large number of heterogeneous software components, spanning from fine-grained packages to coarse-grained complex services. The complexity of such systems requires a rich set of techniques and tools to support the automation of their deployment process. By relying on a formal model of components, a technique is devised for computing the sequence of actions allowing the deployment of a desired configuration. An efficient algorithm, working in polynomial time, is described and proven to be sound and complete. Finally, a prototype tool implementing the proposed algorithm has been developed. Experimental results support the adoption of this novel approach in real-life scenarios.
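As a toy illustration of deployment planning (purely a simplified sketch of the general idea: it ignores the richer component life-cycles, conflicts and constraints that the thesis' formal model and algorithm handle), a flat dependency graph can be turned into a sequence of deploy actions by a topological sort:

```python
from collections import defaultdict, deque

def deployment_order(depends_on):
    """Toy planner: return an order of deploy actions such that every component is
    installed only after all of its dependencies (Kahn's algorithm)."""
    indegree = defaultdict(int)
    dependents = defaultdict(list)
    nodes = set(depends_on)
    for comp, deps in depends_on.items():
        nodes.update(deps)
        for dep in deps:
            indegree[comp] += 1
            dependents[dep].append(comp)
    ready = deque(n for n in nodes if indegree[n] == 0)
    plan = []
    while ready:
        comp = ready.popleft()
        plan.append(("deploy", comp))
        for nxt in dependents[comp]:
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                ready.append(nxt)
    if len(plan) != len(nodes):
        raise ValueError("cyclic dependencies: no flat deployment plan exists")
    return plan

# hypothetical configuration, for illustration only
print(deployment_order({"web_app": ["db", "cache"], "cache": [], "db": ["storage"], "storage": []}))
```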
Abstract:
Natural hydraulic fracturing is an important and widespread process in all parts of the Earth's crust. By creating hydraulic connectivity, it influences the effective permeability and fluid transport over several orders of magnitude. The fracturing process is both highly dynamic and highly complex. The dynamics stem from the strong interaction of tectonic and hydraulic processes, while the complexity arises from the potential dependence of the poroelastic properties on fluid pressure and fracturing. The formation of hydraulic fractures consists of three phases: 1) nucleation, 2) time-dependent quasi-static growth as long as the fluid pressure exceeds the tensile strength of the rock, and 3) in heterogeneous rocks, the influence of layers with different mechanical or sedimentary properties on fracture propagation. Mechanical heterogeneity produced by pre-existing fractures and rock deformation also has a large influence on the growth path. The direction of fracture propagation is either determined by the linkage of discontinuities with low tensile strength in the region ahead of the fracture front, or propagation can stop when the fracture meets discontinuities with high strength. These interactions produce a fracture network with complex geometry that reflects the local deformation history and the dynamics of the underlying physical processes.

Natural hydraulic fracturing has important implications for academic and commercial questions in various fields of the geosciences. Since the 1950s, hydraulic fracturing has been used to increase the permeability of gas and oil reservoirs. Field observations, isotope studies, laboratory experiments and numerical analyses confirm the decisive role of the fluid pressure gradient, in combination with poroelastic effects, for the local stress state and for the conditions under which hydraulic fractures form and propagate. Most numerical hydromechanical models assume predefined fracture geometries with constant fluid pressure for the coupling between the fluid and propagating fractures in order to keep the problem computationally tractable. Since natural rocks are rarely that simply structured, these models are generally not very effective for analysing this complex process. In particular, they underestimate the feedback of poroelastic effects and coupled fluid-solid processes, i.e. the evolution of pore pressure as a function of rock failure and vice versa.

In this work, a two-dimensional coupled poro-elasto-plastic computer model is developed for the qualitative, and in part also quantitative, analysis of the role of localized or homogeneously distributed fluid pressures in the dynamic propagation of hydraulic fractures and the simultaneous evolution of the effective permeability. The code is computationally efficient in that it describes the fluid dynamics with a Darcy-based pressure diffusion equation without redundant components. It also accounts for the Biot compressibility of porous rocks, which was implemented in order to determine the controlling parameters in the mechanics of hydraulic fracturing in different geological scenarios with homogeneous and heterogeneous sedimentary sequences.
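The fluid part of the scheme described above is a Darcy-type pressure diffusion; in a generic form (symbols and coupling terms here are the standard ones, not necessarily those of the thesis code):

```latex
% Pressure diffusion for the pore-fluid pressure P_f in a porous rock:
% phi : porosity, beta : combined fluid/pore compressibility, k : permeability, mu : fluid viscosity
\[
  \phi\,\beta\,\frac{\partial P_f}{\partial t}
  \;=\; \nabla\!\cdot\!\left(\frac{k}{\mu}\,\nabla P_f\right) + s ,
\]
% with a source term s and, in the poroelastic (Biot) coupling, an effective-stress law
% sigma'_ij = sigma_ij - alpha * P_f * delta_ij that feeds the pore pressure back into
% rock failure and vice versa.
```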
The results show that in closed systems the fluid pressure gradient can locally perturb the homogeneous stress field. Depending on the boundary conditions, these perturbations can lead to a reorientation of fracture propagation. Through their effect on the local stress state, high pressure gradients can also induce bedding-parallel fracturing or slip in undrained heterogeneous media. An example of particular importance is the evolution of accretionary wedges, where the highly dynamic tectonic activity together with extreme pore pressures locally produces strong perturbations of the stress field, resulting in a highly complex structural evolution including vertical and horizontal hydraulic fracture networks. The transport properties of the rocks are strongly controlled by the dynamics of the development of local permeabilities through extensional fractures and faults. There may be a close relationship between the formation of graben structures and large-scale fluid migration.

The consistency between the simulation results and previous experimental studies indicates that the described numerical scheme is well suited for the qualitative analysis of hydraulic fractures. The scheme has drawbacks when it comes to the quantitative analysis of fluid flow through induced fracture surfaces in deformed rocks. It is also recommended to extend the presented numerical scheme by coupling it with thermo-chemical processes in order to investigate dynamic problems related to the growth of vein fillings in hydraulic fractures.
Abstract:
In this work we use time-resolved imaging to investigate the gigahertz dynamics of magnetic skyrmions in order to determine the equations of motion of these quasiparticles. To reach this goal we first developed a CoB/Pt multilayer system that combines strong perpendicular magnetic anisotropy with a particularly smooth energy landscape. These properties are essential for the repetitive dynamic imaging technique. In a second step we optimized the sample design so that the skyrmion motion could be observed with a resolution better than 3 nm. Owing to these improvements we were able to record the trajectory of a skyrmion. This motion is a superposition of two gyration modes, one clockwise and one counter-clockwise. From the existence of these two modes it follows that skyrmions are inertial quasiparticles, and from the frequencies we can derive a value for the inertial mass. It turns out that the skyrmion mass is five times larger than predicted by existing theories. The mass is therefore governed by a novel mechanism arising from the spatial confinement of the skyrmion, which can be derived directly from its topology.
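The two gyration modes and the inertial mass can be read off a massive Thiele-type equation of motion for the skyrmion core position R(t); in the standard form (a generic sketch, not the exact model fitted in this work):

```latex
% Massive Thiele equation for the skyrmion core coordinate R(t):
% m : inertial mass, G = G e_z : gyrocoupling vector (fixed by the topological charge),
% alpha D : dissipation, F(R) = -k R : restoring force of the confining potential
\[
  m\,\ddot{\mathbf{R}}
  \;=\; \mathbf{G}\times\dot{\mathbf{R}} \;-\; \alpha D\,\dot{\mathbf{R}} \;+\; \mathbf{F}(\mathbf{R}),
\]
% For harmonic confinement this has one clockwise and one counter-clockwise circular
% eigenmode; the two eigenfrequencies determine G/m and k/m, and since G is fixed by the
% topology this yields the inertial mass m.
```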
Abstract:
In this thesis the measurement of the effective weak mixing angle sin^2(theta_eff) (denoted wma in the following) in proton-proton collisions is described. The results are extracted from the forward-backward asymmetry (AFB) in electron-positron final states at the ATLAS experiment at the LHC. The AFB is defined upon the distribution of the polar angle between the incoming quark and the outgoing lepton. The signal process used in this study is the reaction pp -> Z/gamma* + X -> ee + X, taking a total integrated luminosity of 4.8 fb^-1 of data into account. The data were recorded at a proton-proton center-of-mass energy of sqrt(s) = 7 TeV. The weak mixing angle is a central parameter of the electroweak theory of the Standard Model (SM) and relates the neutral-current interactions of electromagnetism and the weak force. The higher-order corrections to wma are related to other SM parameters such as the mass of the Higgs boson.

Because of the symmetric initial state of the colliding protons, there is no preferred forward or backward direction in the experimental setup. The reference axis used in the definition of the polar angle is therefore chosen with respect to the longitudinal boost of the electron-positron final state. As a result, events at low absolute rapidity have a higher chance of being assigned to the opposite direction of the reference axis. This effect, called dilution, is reduced when events at higher rapidities are used. It can be studied by including electrons and positrons in the forward regions of the ATLAS calorimeters. In the following, electrons and positrons are both referred to as electrons. To include the electrons from the forward region, the energy calibration for the forward calorimeters had to be redone. This calibration is performed by inter-calibrating the forward electron energy scale using pairs of a central and a forward electron and the previously derived central electron energy calibration. The uncertainty is shown to be dominated by the systematic variations.

The extraction of wma is performed using chi^2 tests, comparing the measured distribution of AFB in data to a set of template distributions with varied values of wma. The templates are built with a forward-folding technique using modified generator-level samples and the official fully simulated signal sample with full detector simulation and particle reconstruction and identification. The analysis is performed in two different channels: pairs of central electrons, or one central and one forward electron. The results of the two channels are in good agreement and are the first measurements of wma at the Z resonance using electron final states in proton-proton collisions at sqrt(s) = 7 TeV. The precision of the measurement is already systematically limited, mostly by the uncertainties resulting from the knowledge of the parton distribution functions (PDF) and the systematic uncertainties of the energy calibration.

The extracted results of wma are combined and yield a value of wma_comb = 0.2288 +- 0.0004 (stat.) +- 0.0009 (syst.) = 0.2288 +- 0.0010 (tot.). The measurements are compared to the results of previous measurements at the Z boson resonance. The deviation with respect to the combined result provided by the LEP and SLC experiments is up to 2.7 standard deviations.
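The forward-backward asymmetry used in this measurement follows the usual definition in terms of the polar angle theta* between the incoming quark and the outgoing lepton (a standard definition, quoted here for context; the thesis fixes the reference axis via the longitudinal boost of the lepton pair):

```latex
% Forward-backward asymmetry of Drell-Yan lepton pairs:
% N_F : events with cos(theta*) > 0,  N_B : events with cos(theta*) < 0
\[
  A_{FB} \;=\; \frac{N_F - N_B}{N_F + N_B},
\]
% Near the Z resonance, AFB is sensitive to the effective weak mixing angle through the
% vector and axial-vector couplings of the Z boson to fermions.
```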
Abstract:
BACKGROUND: Physiologic data display is essential to decision making in critical care. Current displays echo first-generation hemodynamic monitors dating to the 1970s and have not kept pace with new insights into physiology or the needs of clinicians who must make progressively more complex decisions about their patients. The effectiveness of any redesign must be tested before deployment. Tools that compare current displays with novel presentations of processed physiologic data are required. Regenerating conventional physiologic displays from archived physiologic data is an essential first step. OBJECTIVES: The purposes of the study were to (1) describe the SSSI (single sensor single indicator) paradigm that is currently used for physiologic signal displays, (2) identify and discuss possible extensions and enhancements of the SSSI paradigm, and (3) develop a general approach and a software prototype to construct such "extended SSSI displays" from raw data. RESULTS: We present the Multi Wave Animator (MWA) framework, a set of open-source MATLAB (MathWorks, Inc., Natick, MA, USA) scripts aimed at creating dynamic visualizations (e.g., video files in AVI format) of patient vital signs recorded from bedside (intensive care unit or operating room) monitors. Multi Wave Animator creates animations in which vital signs are displayed to mimic their appearance on current bedside monitors. The source code of MWA is freely available online together with a detailed tutorial and sample data sets.
Abstract:
Monte Carlo (MC) based dose calculations can compute dose distributions with an accuracy surpassing that of conventional algorithms used in radiotherapy, especially in regions of tissue inhomogeneities and surface discontinuities. The Swiss Monte Carlo Plan (SMCP) is a GUI-based framework for photon MC treatment planning (MCTP) interfaced to the Eclipse treatment planning system (TPS). As for any dose calculation algorithm, the MCTP also needs to be commissioned and validated before the algorithm is used for clinical cases. The aim of this study is the investigation of a 6 MV beam for clinical situations within the framework of the SMCP. In this respect, all parts, i.e. open fields and all the clinically available beam modifiers, have to be configured so that the calculated dose distributions match the corresponding measurements. Dose distributions for the 6 MV beam were simulated in a water phantom using a phase space source above the beam modifiers. The VMC++ code was used for the radiation transport through the beam modifiers (jaws, wedges, block and multileaf collimator (MLC)) as well as for the calculation of the dose distributions within the phantom. The voxel size of the dose distributions was 2 mm in all directions. The statistical uncertainty of the calculated dose distributions was below 0.4%. Simulated depth dose curves and dose profiles in terms of [Gy/MU] for static and dynamic fields were compared with the corresponding measurements using dose difference and γ analysis. For the dose difference criterion of ±1% of D(max) and the distance-to-agreement criterion of ±1 mm, the γ analysis showed an excellent agreement between measurements and simulations for all static open and MLC fields. The tuning of the density and the thickness for all hard wedges led to an agreement with the corresponding measurements within 1% or 1 mm. Similar results were achieved for the block. For the validation of the tuned hard wedges, a very good agreement between calculated and measured dose distributions was achieved using a 1%/1 mm criterion for the γ analysis. The calculated dose distributions of the enhanced dynamic wedges (10°, 15°, 20°, 25°, 30°, 45° and 60°) met the 1%/1 mm criterion when compared with the measurements for all situations considered. For the IMRT fields all compared measured dose values agreed with the calculated dose values within a 2% dose difference or within a 1 mm distance. The SMCP has thus been successfully validated for static and dynamic 6 MV photon beams, resulting in accurate dose calculations suitable for applications in clinical cases.
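The γ analysis referred to above compares calculation and measurement with a combined dose-difference / distance-to-agreement criterion; the standard index (as introduced by Low et al., quoted here for context rather than from the thesis) is:

```latex
% Gamma index for a measured point r_m with dose D_m, compared against the calculated
% distribution D_c, with tolerances Delta d (distance to agreement) and Delta D (dose difference):
\[
  \gamma(\mathbf{r}_m) \;=\;
  \min_{\mathbf{r}_c}
  \sqrt{\frac{\lVert\mathbf{r}_c-\mathbf{r}_m\rVert^{2}}{\Delta d^{2}}
      + \frac{\bigl[D_c(\mathbf{r}_c)-D_m(\mathbf{r}_m)\bigr]^{2}}{\Delta D^{2}}}\;,
\]
% a point passes when gamma <= 1, e.g. with the 1%/1 mm tolerances used for the wedged fields.
```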
Abstract:
To derive tests for randomness, nonlinear independence, and stationarity, we combine surrogates with a nonlinear prediction error, a nonlinear interdependence measure, and linear variability measures, respectively. We apply these tests to intracranial electroencephalographic (EEG) recordings from patients suffering from pharmacoresistant focal-onset epilepsy. These recordings had been performed prior to, and independently of, our study as part of the epilepsy diagnostics. The clinical purpose of these recordings was to delineate the brain areas to be surgically removed in each individual patient in order to achieve seizure control. This allowed us to define two distinct sets of signals: one set of signals recorded from brain areas where the first ictal EEG signal changes were detected as judged by expert visual inspection ("focal signals") and one set of signals recorded from brain areas that were not involved at seizure onset ("nonfocal signals"). We find more rejections of both the randomness and the nonlinear-independence test for focal versus nonfocal signals. In contrast, more rejections of the stationarity test are found for nonfocal signals. Furthermore, while for nonfocal signals the rejection of the stationarity test substantially increases the rejection probability of the randomness and nonlinear-independence tests, we find a much weaker influence for the focal signals. In consequence, the contrast between the focal and nonfocal signals obtained from the randomness and nonlinear-independence tests is further enhanced when we exclude signals for which the stationarity test is rejected. To study the dependence between the randomness and nonlinear-independence tests we include only focal signals for which the stationarity test is not rejected. We show that the rejections of these two tests correlate across signals. The rejection of either test is, however, neither necessary nor sufficient for the rejection of the other test. Thus, our results suggest that EEG signals from epileptogenic brain areas are less random, more nonlinear-dependent, and more stationary compared to signals recorded from nonepileptogenic brain areas. We provide the data, source code, and detailed results in the public domain.
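As an illustration of the surrogate-test logic (a minimal sketch only: the study combines several discriminating statistics and surrogate types, whereas this toy uses plain phase-randomized surrogates, a zero-order prediction error, and arbitrarily chosen parameter values):

```python
import numpy as np

def ft_surrogate(x, rng):
    """Phase-randomized surrogate: same power spectrum, randomized phases
    (consistent with a stationary linear Gaussian process)."""
    X = np.fft.rfft(x)
    phases = rng.uniform(0.0, 2.0 * np.pi, X.size)
    phases[0] = 0.0                          # keep the mean
    return np.fft.irfft(np.abs(X) * np.exp(1j * phases), n=x.size)

def prediction_error(x, dim=6, lag=4, horizon=8):
    """Zero-order (nearest-neighbour) nonlinear prediction error in a delay embedding."""
    n = x.size - (dim - 1) * lag - horizon
    emb = np.array([x[i:i + n] for i in range(0, dim * lag, lag)]).T   # delay vectors
    err = []
    for i in range(n):
        d = np.linalg.norm(emb - emb[i], axis=1)
        d[i] = np.inf                        # exclude the self-match
        j = np.argmin(d)                     # nearest neighbour in state space
        err.append((x[i + (dim - 1) * lag + horizon] - x[j + (dim - 1) * lag + horizon]) ** 2)
    return np.sqrt(np.mean(err))

rng = np.random.default_rng(1)
signal = np.sin(np.linspace(0.0, 40.0 * np.pi, 2000)) + 0.3 * rng.normal(size=2000)
e_orig = prediction_error(signal)
e_surr = [prediction_error(ft_surrogate(signal, rng)) for _ in range(19)]
# one-sided rank test at ~5%: reject the linear-stochastic null hypothesis
# if the original signal is better predictable than all 19 surrogates
print("reject:", e_orig < min(e_surr))
```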