931 results for Space-time trellis codes


Relevance:

30.00%

Publisher:

Abstract:

Cytochrome c oxidase (CcO), complex IV of the respiratory chain, is one of the haem-copper oxidases and plays an important role in cell metabolism. The enzyme contains four prosthetic groups and is located in the inner mitochondrial membrane and in the cell membrane of some aerobic bacteria. CcO catalyses electron transfer (ET) from cytochrome c to O2, with the actual reaction taking place at the binuclear centre (CuB-haem a3). The reduction of O2 to two H2O consumes four protons. In addition, four protons are pumped across the membrane, creating an electrochemical potential difference of these ions between the matrix and the intermembrane space. Despite their importance, membrane proteins such as CcO are still poorly studied, which is why the mechanism of the respiratory chain has not yet been fully elucidated. The aim of this work is to contribute to the understanding of the function of CcO. To this end, CcO from Rhodobacter sphaeroides was bound in a defined orientation to a functionalized metal electrode via a His tag attached to the C-terminus of subunit II. The first electron acceptor, CuA, thereby lies closest to the metal surface. A lipid bilayer was then inserted in situ between the bound proteins, yielding the so-called protein-tethered bilayer lipid membrane (ptBLM). For this, the optimal surface concentration of the bound proteins had to be determined. Electrochemical impedance spectroscopy (EIS), surface plasmon resonance spectroscopy (SPR) and cyclic voltammetry (CV) were applied to characterize the activity of CcO as a function of packing density. The main part of the work concerns the investigation of direct ET to CcO under anaerobic conditions. The combination of time-resolved surface-enhanced infrared absorption spectroscopy (tr-SEIRAS) and electrochemistry proved particularly suitable for this purpose. In a first study, ET was investigated by fast-scan CV, recording CVs of non-activated as well as activated CcO at different scan rates. The activated form was obtained after catalytic turnover of the protein in the presence of O2. A four-ET model was developed to analyse the CVs. The method makes it possible to distinguish between sequential and independent ET to the four centres CuA, haem a, haem a3 and CuB, and to determine the standard redox potentials and the kinetic coefficients of the ET. In a second study, tr-SEIRAS was applied in step-scan mode. For this, square-wave potential pulses were applied to the CcO, and SEIRAS in ATR mode was used to record spectra at defined time slices. From these spectra, individual bands were isolated that show changes in vibrational modes of amino acids and peptide groups as a function of the redox state of the centres. Based on assignments from the literature, obtained by potentiometric titration of CcO, the bands could be tentatively assigned to the redox centres. Band areas plotted against time then reflect the redox kinetics of the centres and were again evaluated with the four-ET model. The results of both studies allow the conclusion that ET to CcO in a ptBLM most probably follows the sequential mechanism, corresponding to the natural ET from cytochrome c to CcO.
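The abstract does not specify the four-ET model beyond its purpose, so the following is only a minimal sketch of the general shape of such a model: a sequential hop chain CuA -> haem a -> haem a3 -> CuB with electrode injection into CuA, written as occupancy rate equations and integrated numerically. All rate constants are invented placeholders, not fitted values from the thesis.

```python
# Minimal sketch of a sequential four-centre electron-transfer model
# (CuA -> haem a -> haem a3 -> CuB). Rates are illustrative assumptions.
from scipy.integrate import solve_ivp

K_IN = 50.0                    # electrode -> CuA injection rate (1/s), assumed
K_HOP = [200.0, 80.0, 120.0]   # CuA->a, a->a3, a3->CuB hop rates (1/s), assumed

def rhs(t, x):
    """x[i] = reduction probability of centre i; hops need a reduced donor
    and an oxidized acceptor, hence the x[i] * (1 - x[i+1]) terms."""
    k12, k23, k34 = K_HOP
    return [
        K_IN * (1 - x[0]) - k12 * x[0] * (1 - x[1]),
        k12 * x[0] * (1 - x[1]) - k23 * x[1] * (1 - x[2]),
        k23 * x[1] * (1 - x[2]) - k34 * x[2] * (1 - x[3]),
        k34 * x[2] * (1 - x[3]),
    ]

sol = solve_ivp(rhs, (0.0, 0.1), [0.0, 0.0, 0.0, 0.0])
print(sol.y[:, -1])   # final reduction levels of CuA, haem a, haem a3, CuB
```

An independent-ET variant would instead couple each centre directly to the electrode; fitting both variants to the measured CVs or band kinetics is what lets the two mechanisms be distinguished.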

Relevance:

30.00%

Publisher:

Abstract:

The space environment has always been one of the most challenging for communications, at both the physical and the network layer. Concerning the latter, the most common challenges are the lack of continuous network connectivity, very long delays and relatively frequent losses. Because of these problems, the standard TCP/IP suite protocols are hardly applicable. Moreover, in space scenarios reliability is fundamental: it is usually not tolerable to lose important information, or to receive it with a very large delay, because of a challenging transmission channel. In terrestrial protocols such as TCP, reliability is obtained by means of an ARQ (Automatic Repeat reQuest) method, which, however, performs poorly when the transmission channel has long delays. At the physical layer, Forward Error Correction (FEC) codes, based on the insertion of redundant information, are an alternative way to ensure reliability. On binary channels, when single bits are flipped by channel noise, redundancy bits can be exploited to recover the original information. On binary erasure channels, where bits are not flipped but lost, redundancy can still be used to recover the original information; FEC codes designed for this purpose are usually called Erasure Codes (ECs). It is worth noting that ECs, primarily studied for binary channels, can also be used at upper layers, i.e. applied to packets instead of bits, offering a very interesting alternative to the usual ARQ methods, especially in the presence of long delays. A protocol created to add reliability to Delay-Tolerant Networking (DTN) networks is the Licklider Transmission Protocol (LTP), designed to obtain better performance on long-delay links. The aim of this thesis is the application of ECs to LTP.
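As a toy illustration of the packet-level erasure coding idea described above, the sketch below protects a block of k data packets with a single XOR parity packet, which can repair one loss without any retransmission. This is only the simplest possible EC; the codes actually considered for LTP would be stronger (e.g. Reed-Solomon or LDPC-based), and all names here are illustrative.

```python
# Minimal sketch of packet-level erasure coding: one XOR parity packet
# protects k equal-length data packets and can rebuild a single loss.
from functools import reduce

def xor_parity(packets: list[bytes]) -> bytes:
    """XOR all packets together byte by byte."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), packets)

def recover(received: dict[int, bytes], parity: bytes, k: int) -> dict[int, bytes]:
    """Rebuild at most one missing packet out of k using the parity."""
    missing = [i for i in range(k) if i not in received]
    if len(missing) == 1:
        # XOR of the k-1 surviving packets and the parity equals the lost one.
        received[missing[0]] = xor_parity(list(received.values()) + [parity])
    return received

data = [b"AAAA", b"BBBB", b"CCCC"]
p = xor_parity(data)
got = {0: data[0], 2: data[2]}      # packet 1 was lost on the link
print(recover(got, p, k=3)[1])      # -> b'BBBB', repaired with no ARQ round trip
```

The point of the example is the latency argument: the loss is repaired from redundancy already in flight, instead of waiting one (very long) round-trip time for a retransmission request.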

Relevance:

30.00%

Publisher:

Abstract:

The thesis analyses the hydrodynamics induced by an array of Wave Energy Converters (WECs), from both an experimental and a numerical point of view. WECs can be considered an innovative solution able to contribute to the green energy supply and, at the same time, to protect the rear coastal area under marine spatial planning considerations. This research activity essentially arises from this combined concept. The WEC under examination is a floating device belonging to the Wave Activated Bodies (WAB) class. Experimental tests were performed at Aalborg University at different scales and layouts, and the performance of the models was analysed under a variety of irregular wave attacks. The numerical simulations were performed with the codes MIKE 21 BW and ANSYS-AQWA. Experimental results were also used to calibrate the numerical parameters and/or were directly compared with numerical results, in order to extend the experimental database. Results of the research activity are summarized in terms of device performance and guidelines for a future wave farm installation. The device length should be "tuned" based on the local climate conditions. The wave transmission behind the devices is rather high, suggesting that the tested layout should be considered as a module of a wave farm installation. Indications on the minimum inter-distance among the devices are provided. Furthermore, a CALM mooring system leads to lower wave transmission and also larger power production than a spread mooring. The two numerical codes have different potentialities: the hydrodynamics around single and multiple devices is obtained with MIKE 21 BW, while wave loads and motions for a single moored device are derived from ANSYS-AQWA. Combining the experimental and numerical results, it is suggested, for both coastal protection and energy production, to adopt a staggered layout, which maximises the device density and minimizes the marine space required for the installation.
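The "wave transmission" judged above is conventionally quantified by the transmission coefficient, the ratio of the significant wave height behind the array to the incident one. A minimal sketch with illustrative numbers (not values from the thesis):

```python
# Minimal sketch: transmission coefficient Kt = Hs_behind / Hs_incident.
# Kt close to 1 means the array lets almost all wave energy through.
def transmission_coefficient(hs_incident: float, hs_behind: float) -> float:
    return hs_behind / hs_incident

print(transmission_coefficient(hs_incident=2.0, hs_behind=1.7))  # Kt = 0.85
```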

Relevance:

30.00%

Publisher:

Abstract:

The PhD research activity has taken place in the space debris field. In detail, it is focused on the possibility of detecting space debris from a space-based platform. The research addresses both the software and the hardware of this detection system. On the software side, a program has been developed that detects an object in space and locates it in the sky by solving the star field. On the hardware side, the possibility of adapting a ground telescope for space activity has been considered, and a candidate electronic board has been tested.
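The abstract does not detail the detection algorithm, so the sketch below only illustrates one common baseline for finding a moving object against a star field: difference two registered frames so the static stars cancel and the mover stands out. The array sizes, threshold and synthetic data are assumptions; a real pipeline would also plate-solve the field (e.g. with astrometry.net) to convert pixel positions into sky coordinates.

```python
# Minimal sketch of moving-object detection by frame differencing.
import numpy as np

def detect_movers(frame_a: np.ndarray, frame_b: np.ndarray, thresh: float):
    """Return pixel coordinates that changed between two aligned frames."""
    diff = np.abs(frame_b.astype(float) - frame_a.astype(float))
    return np.argwhere(diff > thresh)

rng = np.random.default_rng(0)
sky = rng.poisson(10, (64, 64)).astype(float)   # static background and stars
a, b = sky.copy(), sky.copy()
b[20, 30] += 200.0                              # debris appears in frame b only
print(detect_movers(a, b, thresh=50.0))         # -> [[20 30]]
```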

Relevance:

30.00%

Publisher:

Abstract:

The study of supermassive black hole (SMBH) accretion during the active phase (when they shine as active galactic nuclei, AGN), and of its relation to host-galaxy growth, requires large datasets of AGN, including a significant fraction of obscured sources. X-ray data are strategic in AGN selection, because at X-ray energies the contamination from non-active galaxies is far less significant than in optical/infrared surveys, and the selection of obscured AGN, including a fraction of heavily obscured AGN, is much more effective. In this thesis, I present the results of the Chandra COSMOS Legacy survey, a 4.6 Ms X-ray survey covering the equatorial COSMOS area. The Chandra COSMOS Legacy depth (flux limit f = 2x10^-16 erg cm^-2 s^-1 in the 0.5-2 keV band) is significantly better than that of other X-ray surveys of similar area, and paves the way for surveys with future facilities, like Athena and X-ray Surveyor. The final Chandra COSMOS Legacy catalog contains 4016 point-like sources, 97% of which have a redshift; 65% of the sources are optically obscured and potentially caught in the phase of main BH growth. We used the sample of 174 Chandra COSMOS Legacy sources at z>3 to place constraints on the BH formation scenario. We found a significant disagreement between our space density and the predictions of a physical model of AGN activation through major mergers. This suggests that in our luminosity range BH triggering through secular accretion is likely preferred to a major-merger triggering scenario. Thanks to its large statistics, the Chandra COSMOS Legacy dataset, combined with the other multiwavelength COSMOS catalogs, will be used to answer questions related to a large number of astrophysical topics, with particular focus on SMBH accretion in different luminosity and redshift regimes.
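To give a feeling for what the quoted flux limit means for the z > 3 sample, the sketch below converts it into the faintest luminosity reachable at z = 3 via L = 4 pi d_L^2 f, ignoring the K-correction for simplicity. The choice of cosmology is an assumption, not taken from the survey papers.

```python
# Minimal sketch: survey flux limit -> limiting luminosity at z = 3.
import numpy as np
import astropy.units as u
from astropy.cosmology import Planck18  # assumed cosmology, for illustration

f_lim = 2e-16 * u.erg / u.s / u.cm**2           # 0.5-2 keV flux limit
d_l = Planck18.luminosity_distance(3.0).to(u.cm)
l_min = (4 * np.pi * d_l**2 * f_lim).to(u.erg / u.s)
print(f"{l_min:.2e}")   # ~1e43 erg/s: only fairly luminous AGN are reachable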

Relevance:

30.00%

Publisher:

Abstract:

In many industries, for example the automotive industry, digital mock-ups are used to verify the design and the function of a product on a virtual prototype. One application is checking the safety clearances of individual components, the so-called clearance analysis. Engineers determine for specific components whether, at rest as well as during a motion, they maintain a prescribed safety distance to the surrounding components. If components fall below the safety distance, their shape or position must be changed. For this, it is important to know precisely which regions of the components violate the safety distance.

In this work we present a solution for the real-time computation of all regions between two geometric objects that fall below the safety distance. Each object is given as a set of primitives (e.g. triangles). For every point in time at which a transformation is applied to one of the objects, we compute the set of all primitives that fall below the safety distance and call these the tolerance-violating primitives. We present a complete solution, which can be divided into the following three major topics.

In the first part of this work we study algorithms that check, for two triangles, whether they are tolerance-violating. We present several approaches for triangle-triangle tolerance tests and show that dedicated tolerance tests are considerably faster than the distance computations used so far. The focus of our work is the development of a novel tolerance test that operates in dual space. In all our benchmarks for computing all tolerance-violating primitives, our dual-space approach proves to be the fastest.

The second part of this work deals with data structures and algorithms for the real-time computation of all tolerance-violating primitives between two geometric objects. We develop a combined data structure consisting of a flat hierarchical data structure and several uniform grids. To guarantee efficient running times, it is essential to account for the required safety distance in the design of the data structures and the query algorithms. We present solutions that quickly determine the set of primitive pairs to be tested. In addition, we develop strategies for recognizing primitives as tolerance-violating without computing an expensive primitive-primitive tolerance test. In our benchmarks we show that our solutions are able to compute, in real time, all tolerance-violating primitives between two complex geometric objects, each consisting of many hundreds of thousands of primitives.

In the third part we present a novel, memory-optimized data structure for managing the cell contents of the previously used uniform grids, which we call shrubs. Previous approaches to memory optimization of uniform grids mainly rely on hashing methods, which, however, do not reduce the memory consumption of the cell contents. In our application, neighbouring cells often have similar contents. Our approach exploits these redundant cell contents to losslessly compress the cell contents of a uniform grid to a fifth of their original size and to decompress them at run time.

Finally, we show how our solution for computing all tolerance-violating primitives can be applied in practice. Besides pure clearance analysis, we show applications to various path-planning problems.
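The thesis' own primitive tests (including the dual-space test) are not reproduced in the abstract, so the sketch below only illustrates the general culling idea behind such pipelines: a cheap, conservative pre-test on bounding boxes that can certify two triangles as safely apart before any exact triangle-triangle tolerance test is run. All geometry and the safety distance d are illustrative.

```python
# Minimal sketch of a conservative tolerance pre-test: if the distance
# between the axis-aligned bounding boxes of two triangles already exceeds
# the safety distance d, the exact (more expensive) tolerance test is skipped.
import numpy as np

def aabb(tri: np.ndarray):
    """Axis-aligned bounding box of a 3x3 triangle vertex array."""
    return tri.min(axis=0), tri.max(axis=0)

def aabb_distance(lo_a, hi_a, lo_b, hi_b) -> float:
    """Euclidean distance between two boxes (0 if they overlap)."""
    gap = np.maximum(0.0, np.maximum(lo_b - hi_a, lo_a - hi_b))
    return float(np.linalg.norm(gap))

def may_violate_tolerance(tri_a, tri_b, d: float) -> bool:
    lo_a, hi_a = aabb(tri_a)
    lo_b, hi_b = aabb(tri_b)
    return aabb_distance(lo_a, hi_a, lo_b, hi_b) <= d  # else provably apart

t1 = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0]])
t2 = np.array([[3.0, 0, 0], [4, 0, 0], [3, 1, 0]])
print(may_violate_tolerance(t1, t2, d=0.5))   # False: boxes are 2.0 apart
```

Because the box distance is a lower bound on the triangle distance, a rejection here is always safe; only the surviving pairs need the exact test.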

Relevance:

30.00%

Publisher:

Abstract:

Polar codes are the first class of error-correcting codes proven to achieve capacity for every symmetric, discrete, memoryless channel, thanks to a recently introduced method called channel polarization. This thesis describes the main encoding and decoding algorithms in detail. In particular, the performance of the simulators developed for the successive cancellation decoder and for the successive cancellation list decoder is compared with the results reported in the literature. In order to improve the minimum distance, and consequently the performance, we use a concatenated scheme with the polar code as the inner code and a CRC as the outer code. We also propose a new technique for analysing channel polarization in the case of transmission over the AWGN channel, which is the most appropriate statistical model for satellite communications and deep-space applications. In addition, we investigate the importance of an accurate approximation of the polarization functions.
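For the binary erasure channel the polarization functions mentioned above have an exact closed form, which makes for a compact illustration: a sub-channel with Bhattacharyya parameter z splits into a degraded channel with 2z - z^2 and an upgraded channel with z^2. The sketch below uses this recursion to build an information set; for the AWGN channel studied in the thesis, the recursion must instead be approximated (e.g. by density evolution or a Gaussian approximation), which is exactly why the accuracy of that approximation matters. Parameters are illustrative.

```python
# Minimal sketch of channel polarization on a BEC with erasure prob. eps.
def polarize(eps: float, n_levels: int) -> list[float]:
    """Bhattacharyya parameters of the 2**n_levels polarized sub-channels."""
    z = [eps]
    for _ in range(n_levels):
        z = [v for w in z for v in (2 * w - w * w, w * w)]
    return z

def information_set(eps: float, n_levels: int, k: int) -> list[int]:
    """Indices of the k most reliable sub-channels (smallest z)."""
    z = polarize(eps, n_levels)
    return sorted(sorted(range(len(z)), key=lambda i: z[i])[:k])

print(information_set(eps=0.5, n_levels=3, k=4))  # best 4 of N=8 channels
```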

Relevance:

30.00%

Publisher:

Abstract:

Neglect is defined as the failure to attend and to orient to the contralesional side of space. A horizontal bias towards the right visual field is a classical finding in patients who have suffered a right-hemispheric stroke. The vertical dimension of spatial attention orienting has only sparsely been investigated so far. The aim of this study was to investigate the specificity of this vertical bias by means of a search task, which taps a more pronounced top-down attentional component. Eye movements and behavioural search performance were measured in thirteen patients with left-sided neglect after right-hemispheric stroke and in thirteen age-matched controls. Concerning behavioural performance, patients found significantly fewer targets than healthy controls in both the upper and the lower left quadrant. However, when targets were located in the lower left quadrant, patients needed more visual fixations (and therefore longer search times) to find them, suggesting a time-dependent vertical bias.

Relevance:

30.00%

Publisher:

Abstract:

Identification of the subarachnoid space has traditionally been achieved by either a blind landmark-guided approach or using pre-puncture ultrasound assistance. To assess the feasibility of performing spinal anaesthesia under real-time ultrasound guidance in routine clinical practice, we conducted a single-centre prospective observational study among patients undergoing lower limb orthopaedic surgery. A spinal needle was inserted unassisted within the ultrasound transducer imaging plane using a paramedian approach (i.e., the operator held the transducer in one hand and the spinal needle in the other). The primary outcome measure was the success rate of cerebrospinal fluid (CSF) acquisition under real-time ultrasound guidance; CSF was located in 97 of 100 consecutive patients within a median of three needle passes (IQR 1-6). CSF was not acquired in three patients. Subsequent attempts combining landmark palpation and pre-puncture ultrasound scanning resulted in successful spinal anaesthesia in two of these patients, with the third patient requiring general anaesthesia. The median time from spinal needle insertion until completion of the intrathecal injection was 1.2 minutes (IQR 0.83-4.1), demonstrating the feasibility of this technique in routine clinical practice.

Relevance:

30.00%

Publisher:

Abstract:

Clay mineral-rich sedimentary formations are currently under investigation to evaluate their potential use as host formations for deep underground disposal facilities for radioactive waste (e.g. Boom Clay (BE), Opalinus Clay (CH), Callovo-Oxfordian argillite (FR)). The ultimate safety of the corresponding repository concepts depends largely on the capacity of the host formation to limit the flux of radionuclides (RNs) contained in the waste towards the biosphere to acceptably low levels. Data for diffusion-driven transfer in these formations show extreme differences in the measured or modelled behaviour of various radionuclides, e.g. between halogen RNs (Cl-36, I-129) and actinides (U-238, U-235, Np-237, Th-232, etc.), which result from major differences between RNs in the effects of two phenomena on transport: diffusion and sorption. This paper describes recent research aimed at improving understanding of these two phenomena, focusing on the results of studies carried out during the EC Funmig IP on clayrocks from the above three formations and from the Boda formation (HU). Project results regarding the phenomena governing water, cation and anion distribution and mobility in the pore volumes influenced by the negatively charged surfaces of clay minerals show a convergence between modelling results for behaviour at the molecular scale and descriptions based on electrical double layer models. Transport models exist which couple ion distribution relative to the clay-solution interface with differentiated diffusive characteristics. These codes are able to reproduce the main trends observed experimentally, e.g. De(anion) < De(HTO) < De(cation) and the variation of De(anion) as a function of ionic strength and material density. These trends are also well explained by models of transport through ideal porous matrices made up of a charged surface material. Experimental validation of these models is good for monovalent alkaline cations, in progress for divalent electrostatically interacting cations (e.g. Sr2+) and still relatively poor for 'strongly sorbing', high-Kd cations. Funmig results have clarified how clayrock mineral composition, and the corresponding organisation of mineral grain assemblages and their associated porosity, affect mobile solute (anions, HTO) diffusion at different scales (mm to geological formation). In particular, advances in the capacity to map clayrock mineral grain-porosity organisation at high resolution provide additional elements for understanding diffusion anisotropy and for relating diffusion characteristics measured at different scales. On the other hand, studies evaluating the potential effects of heterogeneity on mobile species diffusion at the formation scale tend to show a minimal effect compared to a homogeneous property model. Finally, the results of a natural tracer-based study carried out on the Opalinus Clay formation increase confidence in the use of diffusion parameters measured on laboratory-scale samples for predicting diffusion over geological time-space scales. Much effort was placed on improving the understanding of coupled sorption-diffusion phenomena for sorbing cations in clayrocks. Results regarding sorption equilibrium in dispersed and compacted materials for weakly to moderately sorbing cations (Sr2+, Cs+, Co2+) tend to show that the same sorption model probably holds in both systems. It was not possible to demonstrate this for highly sorbing elements such as Eu(III), because of the extremely long times needed to reach equilibrium conditions, but there does not seem to be any clear reason why such elements should not behave similarly. Diffusion experiments carried out with Sr2+, Cs+ and Eu(III) on all of the clayrocks gave mixed results and tend to show that coupled diffusion-sorption migration is much more complex than expected, generally leading to greater mobility than that predicted by coupling a batch-determined Kd and Fick's law based on the diffusion behaviour of HTO. If the Kd measured on equivalent dispersed systems holds, as was shown to be the case for Sr and Cs (and probably Co) on Opalinus Clay, these results indicate that these cations have a De value higher than that of HTO (up to a factor of 10 for Cs+). Results are as yet very limited for moderately to strongly sorbing species (e.g. Co(II), Eu(III), Cu(II)) because of their very slow transfer characteristics.
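The "Kd + Fick's law" coupling that the diffusion experiments challenge can be stated in one line: the apparent diffusivity of a sorbing tracer is Da = De / (porosity + bulk density x Kd), so a large batch-derived Kd predicts strong retardation. A minimal sketch of that standard relation, with illustrative numbers rather than measured clayrock values:

```python
# Minimal sketch of the classical Kd-Fick coupling:
# D_a = D_e / (porosity + rho_bulk * Kd), the denominator being the
# rock capacity factor. All numbers below are illustrative assumptions.
def apparent_diffusivity(d_e: float, porosity: float,
                         rho_bulk: float, kd: float) -> float:
    """d_e [m^2/s], porosity [-], rho_bulk [kg/m^3], kd [m^3/kg]."""
    return d_e / (porosity + rho_bulk * kd)

d_e_hto = 1e-11                                            # assumed De of HTO
print(apparent_diffusivity(d_e_hto, 0.15, 2400.0, 0.0))    # non-sorbing tracer
print(apparent_diffusivity(d_e_hto, 0.15, 2400.0, 1e-3))   # sorbing cation
```

The reported observation that some cations migrate faster than this prediction (De up to 10x that of HTO for Cs+) is precisely the deviation from this simple coupling that the paper highlights.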

Relevance:

30.00%

Publisher:

Abstract:

Stem cell regeneration of damaged tissue has recently been reported in many different organs. Since the loss of retinal pigment epithelium (RPE) in the eye is associated with a major cause of visual loss, specifically age-related macular degeneration, we investigated whether hematopoietic stem cells (HSC) given systemically can home to the damaged subretinal space and express markers of RPE lineage. Green fluorescent protein (GFP) cells of bone marrow origin were used in a sodium iodate (NaIO3) model of RPE damage in the mouse. The optimal time for adoptive transfer of bone marrow-derived stem cells relative to the time of injury and the optimal cell type [whole bone marrow, mobilized peripheral blood, HSC, facilitating cells (FC)] were determined by counting the number of GFP+ cells in whole eye flat mounts. Immunocytochemistry was performed to identify the bone marrow origin of the cells in the RPE using antibodies for CD45, Sca-1, and c-kit, as well as the expression of the RPE-specific marker, RPE-65. The time at which bone marrow-derived cells were adoptively transferred relative to the time of NaIO3 injection did not significantly influence the number of cells that homed to the subretinal space. At both one and two weeks after intravenous (i.v.) injection, GFP+ cells of bone marrow origin were observed in the damaged subretinal space, at sites of RPE loss, but not in the normal subretinal space. The combined transplantation of HSC+FC cells appeared to favor the survival of the homed stem cells at two weeks, and RPE-65 was expressed by adoptively transferred HSC by four weeks. We have shown that systemically injected HSC homed to the subretinal space in the presence of RPE damage and that FC promoted survival of these cells. Furthermore, the RPE-specific marker RPE-65 was expressed on adoptively transferred HSC in the denuded areas.

Relevance:

30.00%

Publisher:

Abstract:

Boston Harbor has had a history of poor water quality, including contamination by enteric pathogens. We conduct a statistical analysis of data collected by the Massachusetts Water Resources Authority (MWRA) between 1996 and 2002 to evaluate the effects of court-mandated improvements in sewage treatment. Motivated by the ineffectiveness of standard Poisson mixture models and their zero-inflated counterparts, we propose a new negative binomial model for time series of Enterococcus counts in Boston Harbor, where nonstationarity and autocorrelation are modeled using a nonparametric smooth function of time in the predictor. Without further restrictions, this function is not identifiable in the presence of time-dependent covariates; consequently we use a basis orthogonal to the space spanned by the covariates and use penalized quasi-likelihood (PQL) for estimation. We conclude that Enterococcus counts were greatly reduced near the Nut Island Treatment Plant (NITP) outfalls following the transfer of wastewaters from NITP to the Deer Island Treatment Plant (DITP) and that the transfer of wastewaters from Boston Harbor to the offshore diffusers in Massachusetts Bay reduced the Enterococcus counts near the DITP outfalls.
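The identifiability device mentioned above (a smooth-in-time basis made orthogonal to the space spanned by the covariates) has a compact linear-algebra core: residualize each basis column against the covariate matrix. The sketch below shows only that step, on simulated data; the actual fitting would proceed by penalized quasi-likelihood on the negative binomial model, which is not reproduced here, and all names are illustrative.

```python
# Minimal sketch: make a smooth-in-time basis orthogonal to the covariates
# by projecting out the covariate column space (OLS residualization).
import numpy as np

def orthogonalize(basis: np.ndarray, covariates: np.ndarray) -> np.ndarray:
    """Residualize each basis column against the covariates."""
    beta, *_ = np.linalg.lstsq(covariates, basis, rcond=None)
    return basis - covariates @ beta

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 200)
X = np.column_stack([np.ones_like(t), rng.normal(size=200)])  # covariates
B = np.column_stack([t, t**2, np.sin(2 * np.pi * t)])         # smooth basis
B_perp = orthogonalize(B, X)
print(np.abs(X.T @ B_perp).max())   # ~0: basis now orthogonal to covariates
```

After this step, the smooth trend and the covariate effects can no longer trade off against each other, which is what restores identifiability.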

Relevance:

30.00%

Publisher:

Abstract:

In environmental epidemiology, exposure X and health outcome Y vary in space and time. We present a method to diagnose the possible influence of unmeasured confounders U on the estimated effect of X on Y and propose several approaches to robust estimation. The idea is to use space and time as proxy measures for the unmeasured factors U. We start with the time series case, where X and Y are continuous variables at equally spaced times and a linear model is assumed. We define matching estimators b(u) that correspond to pairs of observations with a specific lag u. Controlling for a smooth function of time, S(t), using a kernel estimator is roughly equivalent to estimating the association with a linear combination of the b(u), with weights that involve two components: the assumptions about the smoothness of S(t) and the normalized variogram of the X process. When an unmeasured confounder U exists, but the model otherwise correctly controls for measured confounders, the excess variation in the b(u) is evidence of confounding by U. We use the plot of b(u) versus lag u, the lagged-estimator plot (LEP), to diagnose the influence of U on the effect of X on Y. We use appropriate linear combinations of the b(u), or extrapolate to b(0), to obtain novel estimators that are more robust to the influence of a smooth U. The methods are extended to time series log-linear models and to spatial analyses. The LEP gives a direct view of the magnitude of the estimators at each lag u and provides evidence when models do not adequately describe the data.
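The exact form of b(u) is not given in the abstract, so the sketch below uses one plausible matching estimator for illustration: for each lag u, regress the pairwise differences Y(t+u) - Y(t) on X(t+u) - X(t). With a smooth confounder, short-lag differences nearly cancel U while long-lag differences do not, so b(u) drifts with u, which is what a lagged-estimator plot would reveal.

```python
# Minimal sketch of lag-specific matching estimators b(u); the estimator
# actually used in the paper may differ. Data below are simulated.
import numpy as np

def lag_estimators(x: np.ndarray, y: np.ndarray, max_lag: int) -> np.ndarray:
    b = np.full(max_lag + 1, np.nan)
    for u in range(1, max_lag + 1):
        dx, dy = x[u:] - x[:-u], y[u:] - y[:-u]
        b[u] = (dx @ dy) / (dx @ dx)   # slope of lag-u differences
    return b                            # plot b[1:] vs u to get the LEP

rng = np.random.default_rng(2)
n, beta = 500, 0.8
u_conf = np.sin(np.linspace(0, 6 * np.pi, n))   # smooth unmeasured confounder
x = u_conf + rng.normal(size=n)
y = beta * x + 2.0 * u_conf + rng.normal(size=n)
print(lag_estimators(x, y, max_lag=5))  # short lags ~ beta; longer lags drift
```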

Relevance:

30.00%

Publisher:

Abstract:

We present an overview of different methods for decomposing a multichannel spontaneous electroencephalogram (EEG) into sets of temporal patterns and topographic distributions. All of the methods presented here consider the scalp electric field as the basic analysis entity in space. In time, the resolution of the methods is between milliseconds (time-domain analysis), subseconds (time- and frequency-domain analysis) and seconds (frequency-domain analysis). For any of these methods, we show that large parts of the data can be explained by a small number of topographic distributions. Physically, this implies that the brain regions that generated one of those topographies must have been active with a common phase. If several brain regions are producing EEG signals at the same time and frequency, they have a strong tendency to do this in a synchronized mode. This view is illustrated by several examples (including combined EEG and functional magnetic resonance imaging (fMRI)) and a selective review of the literature. The findings are discussed in terms of short-lasting binding between different brain regions through synchronized oscillations, which could constitute a mechanism to form transient, functional neurocognitive networks.
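As a generic illustration of the claim that a few topographies explain most of a multichannel record, the sketch below decomposes a simulated EEG matrix (channels x samples) with an SVD and reports the variance carried by the leading spatial components. This is only one of many possible decompositions, not the specific time-, frequency- or microstate-domain method of any study reviewed above.

```python
# Minimal sketch: decompose simulated multichannel EEG into topographies
# (left singular vectors) and time courses (right singular vectors).
import numpy as np

rng = np.random.default_rng(3)
n_channels, n_samples = 32, 1000
maps = rng.normal(size=(n_channels, 3))       # 3 hidden topographies
courses = rng.normal(size=(3, n_samples))     # their time courses
eeg = maps @ courses + 0.1 * rng.normal(size=(n_channels, n_samples))

u, s, vt = np.linalg.svd(eeg, full_matrices=False)
explained = (s**2) / (s**2).sum()
print(explained[:5].round(3))   # first 3 components carry ~all the variance
```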

Relevance:

30.00%

Publisher:

Abstract:

Currently, photon Monte Carlo treatment planning (MCTP) for a patient stored in the patient database of a treatment planning system (TPS) can usually only be performed using a cumbersome multi-step procedure in which many user interactions are needed. This means automation is needed for use in clinical routine. In addition, because of the long computing times in MCTP, optimization of the MC calculations is essential. For these purposes a new graphical user interface (GUI)-based photon MC environment has been developed, resulting in a very flexible framework. By this means, appropriate MC transport methods are assigned to different geometric regions while still benefiting from the features included in the TPS. In order to provide a flexible MC environment, the MC particle transport has been divided into different parts: the source, the beam modifiers and the patient. The source part includes the phase-space source, source models and full MC transport through the treatment head. The beam modifier part consists of one module for each beam modifier. To simulate the radiation transport through each individual beam modifier, one of three full MC transport codes can be selected independently. Additionally, for each beam modifier a simple or an exact geometry can be chosen. Thereby, different complexity levels of radiation transport are applied during the simulation. For the patient dose calculation, two different MC codes are available. A special plug-in in Eclipse, providing all necessary information by means of DICOM streams, was used to start the developed MC GUI. The implementation of this framework separates the MC transport from the geometry, and the modules pass the particles in memory; hence, no files are used as the interface. The implementation is realized for the 6 and 15 MV beams of a Varian Clinac 2300 C/D. Several applications demonstrate the usefulness of the framework. Apart from applications dealing with the beam modifiers, two patient cases are shown, in which comparisons are performed between MC-calculated dose distributions and those calculated by a pencil beam or the AAA algorithm. Interfacing this flexible and efficient MC environment with Eclipse allows widespread use for all kinds of investigations, from timing and benchmarking studies to clinical patient studies. Additionally, it is possible to add modules, keeping the system highly flexible and efficient.
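The key architectural point above (modules that hand particles over in memory instead of exchanging phase-space files) can be caricatured in a few lines. Everything below is invented for illustration: the real framework chains full MC transport codes, not toy classes, and none of these names come from the thesis.

```python
# Minimal sketch of an in-memory module chain: source -> beam modifiers
# -> patient, with particles passed between stages as Python objects.
from dataclasses import dataclass

@dataclass
class Particle:
    energy_mev: float
    weight: float = 1.0

class Module:
    """A transport stage; a real module would run a full MC code here."""
    def transport(self, particles):
        return particles

class Attenuator(Module):
    """Toy beam modifier: scales statistical weight instead of tracking."""
    def __init__(self, factor: float):
        self.factor = factor
    def transport(self, particles):
        return [Particle(p.energy_mev, p.weight * self.factor) for p in particles]

def run_chain(source, modules):
    particles = source
    for m in modules:          # particles stay in memory between stages
        particles = m.transport(particles)
    return particles

beam = [Particle(6.0) for _ in range(1000)]           # assumed 6 MV-like source
out = run_chain(beam, [Attenuator(0.7), Module()])    # modifier, then patient stage
print(sum(p.weight for p in out))
```

The design choice this mimics is the one stated in the abstract: because stages exchange particles directly, a modifier can be swapped for a simpler or more exact model without touching the rest of the chain, and no intermediate files are written.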