899 results for the SIMPLE algorithm


Relevance: 90.00%

Abstract:

DI Diesel engines are widely used in both industrial and automotive applications due to their durability and fuel economy. Nonetheless, increasing environmental concerns force this type of engine to comply with increasingly demanding emission limits, so it has become mandatory to develop a robust design methodology for the DI Diesel combustion system, focused on reducing soot and NOx simultaneously while maintaining a reasonable fuel economy. In recent years, genetic algorithms (GAs) and three-dimensional CFD combustion simulations have been successfully applied to this kind of problem. However, combining GA optimization with actual three-dimensional CFD combustion simulations can be too onerous, since a large number of evaluations is usually needed for the genetic algorithm to converge, resulting in a high computational cost and thus limiting the suitability of this method for industrial processes. To make the optimization process less time-consuming, CFD simulations can instead be used to generate a training set for an artificial neural network which, once correctly trained, can forecast the engine outputs as a function of the design parameters during a GA optimization, performing a so-called virtual optimization. In the current work, a numerical methodology for the multi-objective virtual optimization of the combustion of an automotive DI Diesel engine, relying on artificial neural networks and genetic algorithms, was developed.
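
Purely as an illustration of the surrogate-assisted loop described above, the sketch below trains a small neural network on samples from a placeholder function standing in for the CFD runs and then lets a toy genetic algorithm search on the surrogate. The design parameters, bounds and the scalarised fitness are assumptions for the example and do not come from the thesis.

```python
# Minimal sketch of "virtual optimization": ANN surrogate trained on a few
# expensive evaluations, then a GA searching on the surrogate only.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
DESIGN_BOUNDS = np.array([[0.0, 1.0],   # e.g. normalised injection timing (assumed)
                          [0.0, 1.0],   # e.g. normalised EGR rate (assumed)
                          [0.0, 1.0]])  # e.g. normalised swirl ratio (assumed)

def run_cfd(x):
    """Placeholder for an expensive 3D CFD run returning (soot, NOx, bsfc)."""
    soot = (x[0] - 0.3) ** 2 + 0.1 * x[2]
    nox  = (x[1] - 0.7) ** 2 + 0.1 * (1 - x[2])
    bsfc = 0.5 * (x[0] - 0.5) ** 2 + 0.5 * (x[1] - 0.5) ** 2
    return np.array([soot, nox, bsfc])

# 1) Build a training set from a limited number of "CFD" evaluations.
X_train = rng.uniform(DESIGN_BOUNDS[:, 0], DESIGN_BOUNDS[:, 1], size=(60, 3))
Y_train = np.array([run_cfd(x) for x in X_train])

# 2) Train the ANN surrogate that replaces CFD during the GA search.
surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
surrogate.fit(X_train, Y_train)

# 3) Simple GA on the surrogate (scalarised objective for brevity; the thesis
#    performs a genuine multi-objective search).
def fitness(pop):
    pred = surrogate.predict(pop)             # predicted (soot, NOx, bsfc)
    return pred @ np.array([1.0, 1.0, 1.0])   # equal-weight scalarisation (assumed)

pop = rng.uniform(DESIGN_BOUNDS[:, 0], DESIGN_BOUNDS[:, 1], size=(40, 3))
for _ in range(50):
    f = fitness(pop)
    parents = pop[np.argsort(f)[:20]]                          # truncation selection
    children = parents + rng.normal(0.0, 0.05, parents.shape)  # Gaussian mutation
    pop = np.clip(np.vstack([parents, children]),
                  DESIGN_BOUNDS[:, 0], DESIGN_BOUNDS[:, 1])

best = pop[np.argmin(fitness(pop))]
print("best design (surrogate):", best, "verified by CFD stand-in:", run_cfd(best))
```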

Relevance: 90.00%

Abstract:

The dynamics of a passive back-to-back test rig have been characterised, leading to a multi-coordinate approach for the analysis of arbitrary test configurations. Universal joints have been introduced into a typical pre-loaded back-to-back system in order to produce an oscillating torsional moment in a test specimen. Two different arrangements have been investigated using a frequency-based sub-structuring approach: the receptance method. A numerical model has been developed in accordance with this theory, allowing interconnection of systems with two coordinates and closed multi-loop schemes. The model calculates the receptance functions and the modal and deflected shapes of a general system. Closed-form expressions for the following individual elements have been developed: a servomotor, a damped continuous shaft and a universal joint. Numerical results for specific cases have been compared with published data in the literature and with experimental measurements undertaken in the present work. Due to the complexity of the universal joint and its oscillating dynamic effects, a more detailed analysis of this component has been developed. Two models have been presented. The first represents the joint as two inertias connected by a massless cross-piece. The second, derived from the dynamic analysis of a spherical four-link mechanism, considers the contribution of the floating element and its gyroscopic effects. An investigation into non-linear behaviour has led to a time-domain model that uses the Runge-Kutta fourth-order method for the solution of the dynamic equations. It has been demonstrated that the torsional receptances of a universal joint, derived using the simple model, allow the joint to be represented as an equivalent variable inertia. In order to verify the model, a test rig has been built and experimental validation undertaken. The variable inertia of a universal joint has led to a novel application of the component as a passive device for balancing inertia variations in slider-crank mechanisms.
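
The time-domain part of the analysis can be illustrated with a minimal Runge-Kutta sketch. The block below integrates a single torsional coordinate whose effective inertia varies with rotation angle, which is the "equivalent variable inertia" idea in its simplest form; the misalignment angle, inertia and torque values are placeholders, and the thesis models (receptance coupling, gyroscopic terms) are far richer.

```python
# RK4 integration of a torsional coordinate with angle-dependent inertia,
# as a toy stand-in for a universal joint driving a load through angle BETA.
import numpy as np

BETA   = np.radians(15.0)   # joint misalignment angle (assumed)
J_LOAD = 0.02               # load-side inertia [kg m^2] (assumed)
T_IN   = 1.0                # constant driving torque [N m] (assumed)

def j_eq(theta):
    """Equivalent inertia seen at the input shaft (toy kinematic model)."""
    ratio = np.cos(BETA) / (1.0 - np.sin(BETA) ** 2 * np.sin(theta) ** 2)
    return J_LOAD * ratio ** 2

def dj_dtheta(theta, h=1e-6):
    return (j_eq(theta + h) - j_eq(theta - h)) / (2.0 * h)

def deriv(state):
    theta, omega = state
    # Lagrangian equation of motion for variable inertia:
    #   J(theta)*theta_dd + 0.5*J'(theta)*omega**2 = T_in
    theta_dd = (T_IN - 0.5 * dj_dtheta(theta) * omega ** 2) / j_eq(theta)
    return np.array([omega, theta_dd])

def rk4_step(state, dt):
    k1 = deriv(state)
    k2 = deriv(state + 0.5 * dt * k1)
    k3 = deriv(state + 0.5 * dt * k2)
    k4 = deriv(state + dt * k3)
    return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

state, dt = np.array([0.0, 10.0]), 1e-4   # initial angle [rad], speed [rad/s]
history = [state]
for _ in range(20000):                    # 2 s of simulated time
    state = rk4_step(state, dt)
    history.append(state)
speeds = np.array(history)[:, 1]
print("speed fluctuation due to variable inertia: %.3f .. %.3f rad/s"
      % (speeds.min(), speeds.max()))
```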

Relevance: 90.00%

Abstract:

This work was carried out within the DFG project "Late Pleistocene, Holocene and Recent Geomorphodynamics in Endorheic Basins of the Mongolian Gobi". The study area is located in southern Mongolia, in the northern part of the Gobi Desert. Alongside parts of the Sahara (Heintzenberg, 2009), for example the Bodélé Depression of northern Chad (e.g. Washington et al., 2006a; Todd et al., 2006; Warren et al., 2007), Central Asia is regarded as a major source region for particles entering the global circulation of the atmosphere (Goudie, 2009). The main focus here lies on the endorheic basins and their sediment deposits. The deflation-exposed surfaces of the lake basins are the principal source of particles that spread in the form of dust and sand. With regard to geomorphological landscape development, the relationship between basin sediments and slope deposits was simulated numerically; a model published by Grunert and Lehmkuhl (2004), based on ideas of Pye (1995), is thereby taken into consideration. The present investigations model dispersal mechanisms on a regional scale, starting from a larger number of individual point locations. These locations are representative of the individual geomorphological system units that potentially contribute to the budget of aeolian geomorphodynamics. The ground cover formed by the characteristic stone pavement of the Gobi region was investigated, as were, among other properties, the grain-size distributions of the surface sediments. Furthermore, a ten-year time series (January 1998 to December 2007) of meteorological data served as the basis for analysing the conditions for aeolian geomorphodynamics. The data originate from 32 state-run Mongolian weather stations in the region, and parts of them were used for the simulations. In addition, atmospheric measurements with kite soundings were carried out to investigate atmospheric stability and its diurnal variability. The field observations, the results of the laboratory analyses and the meteorological data set served as input parameters for the modelling. Emission rates of the individual sites and the particle distribution in the 3D wind field were modelled in order to simulate the connectivity between basin sediments and slope deposits. In cases of high mechanical turbulence in the near-surface air layer (accompanied by high wind friction velocities), neutral stability was generally observed, and the simulations of particle emission as well as of dispersion and deposition were therefore computed under neutral stability conditions. The particle emission was calculated on the basis of a strongly simplified emission model following existing studies (Laurent et al., 2006; Darmenova et al., 2009; Shao and Dong, 2006; Alfaro, 2008). Both the 3D wind-field calculations and the different dispersion scenarios for aeolian sediments were realised with the commercial program LASAT® (Lagrange-Simulation von Aerosol-Transport), which is based on a Lagrangian algorithm that computes the dispersion of individual particles in the wind field with statistical probability. Via sedimentation parameters, a dispersion model of the basin sediments with respect to the mountain foot slopes and hillslopes can thus be generated. A further part of the investigations deals with the geochemical composition of the surface sediments. This proxy was intended to allow the simulated dispersal directions of particles from different source regions to be traced. In the case of the Mongolian Gobi, the minerals and chemical elements in the sediments proved to be largely homogeneous. Laser ablation of individual sand grains revealed only very slight differences depending on the source region. The spectra of the minerals and of the analysed elements point to granitic compositions; the alkali granites that are widespread in the study area (Jahn et al., 2009) proved to be mainly responsible for the sediment production there. In addition to these mineral and element determinations, the light-mineral fraction was examined with respect to the characteristics of its quartz: the quartz content, the crystallinity and the electron spin resonance signal of the E'1 centre in oxygen vacancies of the SiO2 lattice were determined. These analyses followed the methodological approach of Sun et al. (2007) and are in principle well suited for provenance analysis; however, no significant assignment to the individual source areas could be found with this proxy either.
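
The Lagrangian dispersion principle behind LASAT can be sketched in a few lines: particles are advected by a mean wind, receive a random turbulent displacement each time step, and are deposited when they reach the ground. The wind vector, turbulence scales, settling velocity and source geometry below are placeholder values, not data from the study.

```python
# Toy Lagrangian particle dispersion under neutral stability.
import numpy as np

rng = np.random.default_rng(42)

N_PART   = 5000            # number of released particles
DT       = 1.0             # time step [s]
N_STEPS  = 3600            # one hour of transport
U_WIND   = np.array([6.0, 1.0, 0.0])   # mean wind vector [m/s] (assumed)
SIGMA    = np.array([1.5, 1.5, 0.5])   # turbulent velocity scale [m/s] (assumed)
W_SETTLE = 0.05            # gravitational settling velocity [m/s] (assumed)

# Release all particles from a point source 2 m above the basin floor.
pos = np.zeros((N_PART, 3))
pos[:, 2] = 2.0
deposited = np.zeros(N_PART, dtype=bool)

for _ in range(N_STEPS):
    active = ~deposited
    # advection + random-walk turbulence + settling
    step = (U_WIND * DT
            + rng.normal(0.0, SIGMA * np.sqrt(DT), size=(active.sum(), 3)))
    step[:, 2] -= W_SETTLE * DT
    pos[active] += step
    # particles reaching the ground are deposited and removed from transport
    landed = active.copy()
    landed[active] = pos[active, 2] <= 0.0
    deposited |= landed

travel = np.linalg.norm(pos[deposited, :2], axis=1)
print("deposited fraction: %.2f, median transport distance: %.0f m"
      % (deposited.mean(), np.median(travel) if deposited.any() else 0.0))
```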

Relevance: 90.00%

Abstract:

In distributed systems like clouds or service-oriented frameworks, applications are typically assembled by deploying and connecting a large number of heterogeneous software components, ranging from fine-grained packages to coarse-grained complex services. The complexity of such systems requires a rich set of techniques and tools to support the automation of their deployment process. By relying on a formal model of components, a technique is devised for computing the sequence of actions that allows the deployment of a desired configuration. An efficient algorithm, working in polynomial time, is described and proven to be sound and complete. Finally, a prototype tool implementing the proposed algorithm has been developed. Experimental results support the adoption of this novel approach in real-life scenarios.
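
As a much-simplified illustration of deriving an action sequence from a declarative configuration, the sketch below topologically orders component dependencies; the paper's algorithm handles a considerably richer component model (internal states, capacity constraints), and the configuration shown is hypothetical.

```python
# Compute an order of "deploy" actions so that every component is started
# only after the components it requires.
from collections import deque

def deployment_plan(requires):
    """requires: dict component -> set of components it depends on."""
    indegree = {c: len(deps) for c, deps in requires.items()}
    dependents = {c: set() for c in requires}
    for c, deps in requires.items():
        for d in deps:
            dependents[d].add(c)
    ready = deque(sorted(c for c, n in indegree.items() if n == 0))
    plan = []
    while ready:
        c = ready.popleft()
        plan.append(("deploy", c))
        for nxt in sorted(dependents[c]):
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                ready.append(nxt)
    if len(plan) != len(requires):
        raise ValueError("cyclic dependencies: no valid deployment sequence")
    return plan

# Hypothetical target configuration of a small web application.
config = {
    "database":      set(),
    "message_bus":   set(),
    "backend":       {"database", "message_bus"},
    "frontend":      {"backend"},
    "load_balancer": {"frontend"},
}
for action in deployment_plan(config):
    print(*action)
```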

Relevance: 90.00%

Abstract:

The conventional way of calculating hard scattering processes in perturbation theory using Feynman diagrams is not efficient enough to calculate all necessary processes - for example for the Large Hadron Collider - to a sufficient precision. Two alternatives to order-by-order calculations are studied in this thesis.

In the first part we compare the numerical implementations of four different recursive methods for the efficient computation of Born gluon amplitudes: Berends-Giele recurrence relations and recursive calculations with scalar diagrams, with maximal helicity violating vertices and with shifted momenta. Of the four methods considered, the Berends-Giele method performs best if the number of external partons is eight or larger. For fewer than eight external partons, however, the recursion relation with shifted momenta offers the best performance. When investigating the numerical stability and accuracy, we found that all methods give satisfactory results.

In the second part of this thesis we present an implementation of a parton shower algorithm based on the dipole formalism. The formalism treats initial- and final-state partons on the same footing. The shower algorithm can be used for hadron colliders and electron-positron colliders. Massive partons in the final state were also included in the shower algorithm. Finally, we studied numerical results for an electron-positron collider, the Tevatron and the Large Hadron Collider.
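
The recursive structure that makes these methods polynomial in the number of legs can be illustrated with a toy example. The sketch below builds Berends-Giele-style off-shell currents for a massless scalar theory with a cubic vertex and unit coupling, so there are no polarisation vectors or colour factors; the external momenta are random placeholders.

```python
# Off-shell currents J(i..j) built recursively from currents of shorter leg
# ranges (memoised), mimicking the structure of Berends-Giele recursion.
import numpy as np
from functools import lru_cache

rng = np.random.default_rng(1)
N_LEGS = 7
MOMENTA = rng.normal(size=(N_LEGS, 4))   # toy external four-momenta (assumed)

def mdot(p, q):
    """Minkowski product with signature (+,-,-,-)."""
    return p[0] * q[0] - np.dot(p[1:], q[1:])

@lru_cache(maxsize=None)
def current(i, j):
    """Off-shell current for consecutive legs i..j of a scalar phi^3 toy theory."""
    if i == j:
        return 1.0                       # external scalar 'wavefunction'
    total_p = MOMENTA[i:j + 1].sum(axis=0)
    vertex_sum = sum(current(i, k) * current(k + 1, j) for k in range(i, j))
    return vertex_sum / mdot(total_p, total_p)   # attach the propagator

# Amputate the propagator of the full current to obtain the toy 'amplitude'
# with leg N_LEGS-1 taken off shell.
p_offshell = MOMENTA[:N_LEGS - 1].sum(axis=0)
amplitude = current(0, N_LEGS - 2) * mdot(p_offshell, p_offshell)
print("toy %d-leg ordered amplitude:" % N_LEGS, amplitude)
```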

Relevance: 90.00%

Abstract:

Due to its practical importance and inherent complexity, the optimisation of distribution networks for supplying drinking water has been the subject of extensive study for the past 30 years. The optimisation typically concerns sizing the pipes in the water distribution network (WDN), optimising specific parts of the network such as pumps and tanks, or analysing and optimising the reliability of the WDN. In this thesis, the author has analysed two different WDNs (the Anytown and Cabrera city networks), solving a multi-objective optimisation problem (MOOP). The two main objectives in both cases were the minimisation of energy cost (€) or energy consumption (kWh), together with the total number of pump switches (TNps) during a day. For this purpose, a decision-support system generator for multi-objective optimisation called GANetXL was used, developed by the Centre for Water Systems at the University of Exeter. GANetXL works by calling the EPANET hydraulic solver each time a hydraulic analysis is required. The main algorithm used was NSGA-II, a second-generation algorithm for multi-objective optimisation, which produced the Pareto fronts of each configuration. The first experiment carried out concerned the network of Anytown city. It is a large network with a pumping station of four fixed-speed parallel pumps boosting the flow. The main intervention was to replace these pumps with variable-speed-driven pumps (VSDPs) by installing inverters capable of varying their speed during the day. This achieved substantial energy and cost savings along with a reduction in the number of pump switches. The results of the research are thoroughly illustrated in chapter 7, with comments and a variety of graphs for the different configurations. The second experiment concerned the network of Cabrera city, a smaller WDN with a single fixed-speed (FS) pump. The optimisation problem was the same: the minimisation of the energy consumption and, in parallel, the minimisation of the TNps. The same optimisation tool (GANetXL) was used. The main scope was to carry out several different experiments covering a wide variety of configurations, using different pumps (this time keeping the FS mode), different tank levels, different pipe diameters and different emitter coefficients. These different set-ups produced a large number of results, which are compared in chapter 8. Concluding, it should be said that the optimisation of WDNs is a very interesting field with a vast space of options to deal with: a large number of algorithms to choose from, different techniques and configurations, and different decision-support system generators. The researcher has to be ready to "roam" among these choices until a satisfactory result indicates that a good optimisation point has been reached.
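
The core of NSGA-II used here, non-dominated sorting, can be illustrated independently of the hydraulic model. The sketch below ranks hypothetical (energy cost, pump switches) pairs into Pareto fronts; it does not call EPANET or GANetXL, and the objective values are invented.

```python
# Non-dominated sorting of (energy cost, number of pump switches) pairs.
def dominates(a, b):
    """a dominates b if it is no worse in all objectives and better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_sort(objectives):
    """Return the Pareto fronts (lists of indices), best front first."""
    remaining = set(range(len(objectives)))
    fronts = []
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(objectives[j], objectives[i])
                            for j in remaining if j != i)]
        fronts.append(sorted(front))
        remaining -= set(front)
    return fronts

# (energy cost in EUR/day, total number of pump switches per day) - made up
schedules = [(310.0, 6), (295.0, 10), (280.0, 14), (330.0, 4),
             (300.0, 8), (325.0, 12), (285.0, 9)]
for rank, front in enumerate(non_dominated_sort(schedules), start=1):
    print("front %d:" % rank, [schedules[i] for i in front])
```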

Relevance: 90.00%

Abstract:

This paper presents the first full-fledged branch-and-price (bap) algorithm for the capacitated arc-routing problem (CARP). Prior exact solution techniques either rely on cutting planes or the transformation of the CARP into a node-routing problem. The drawbacks are either models with inherent symmetry, dense underlying networks, or a formulation where edge flows in a potential solution do not allow the reconstruction of unique CARP tours. The proposed algorithm circumvents all these drawbacks by taking the beneficial ingredients from existing CARP methods and combining them in a new way. The first step is the solution of the one-index formulation of the CARP in order to produce strong cuts and an excellent lower bound. It is known that this bound is typically stronger than relaxations of a pure set-partitioning CARP model. Such a set-partitioning master program results from a Dantzig-Wolfe decomposition. In the second phase, the master program is initialized with the strong cuts, CARP tours are iteratively generated by a pricing procedure, and branching is required to produce integer solutions. This is a cut-first bap-second algorithm and its main function is, in fact, the splitting of edge flows into unique CARP tours.
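
The tour-splitting step mentioned at the end can be illustrated on a toy instance: given integer edge flows in which every vertex has even degree, closed depot-based walks are extracted one by one. The graph below is made up, and the sketch ignores the service/deadheading distinction of real CARP solutions.

```python
# Split an integer edge-flow solution into closed tours through the depot
# (Hierholzer-style extraction of closed walks).
from collections import defaultdict

def split_flow_into_tours(edge_flow, depot):
    """edge_flow: dict {frozenset({u, v}): traversal count}; returns closed tours."""
    adj = defaultdict(list)
    for edge, count in edge_flow.items():
        u, v = tuple(edge)
        for _ in range(count):
            adj[u].append(v)
            adj[v].append(u)
    tours = []
    while adj[depot]:
        # Follow unused edge copies until the walk closes at the depot;
        # in CARP every tour starts and ends at the depot.
        tour, current = [depot], depot
        while True:
            nxt = adj[current].pop()
            adj[nxt].remove(current)
            tour.append(nxt)
            current = nxt
            if current == depot:
                break
        tours.append(tour)
    return tours

# Toy edge flow around depot 0 (every vertex has even degree).
flow = {frozenset({0, 1}): 1, frozenset({1, 2}): 1, frozenset({2, 0}): 1,
        frozenset({0, 3}): 2}
for i, tour in enumerate(split_flow_into_tours(flow, depot=0), start=1):
    print("tour %d:" % i, " -> ".join(map(str, tour)))
```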

Relevance: 90.00%

Abstract:

Data sets describing the state of the earth's atmosphere are of great importance in the atmospheric sciences. Over the last decades, the quality and sheer amount of the available data have increased significantly, resulting in a rising demand for new tools capable of handling and analysing these large, multidimensional sets of atmospheric data. The interdisciplinary work presented in this thesis covers the development and the application of practical software tools and efficient algorithms from the field of computer science, with the goal of enabling atmospheric scientists to analyse and gain new insights from these large data sets. For this purpose, our tools combine novel techniques with well-established methods from different areas such as scientific visualization and data segmentation. In this thesis, three practical tools are presented. Two of these tools are software systems (Insight and IWAL) for different types of processing and interactive visualization of data; the third tool is an efficient algorithm for data segmentation implemented as part of Insight. Insight is a toolkit for the interactive, three-dimensional visualization and processing of large sets of atmospheric data, originally developed as a testing environment for the novel segmentation algorithm. It provides a dynamic system for combining at runtime data from different sources, a variety of different data processing algorithms, and several visualization techniques. Its modular architecture and flexible scripting support led to additional applications of the software, from which two examples are presented: the usage of Insight as a WMS (web map service) server, and the automatic production of a sequence of images for the visualization of cyclone simulations. The core application of Insight is the provision of the novel segmentation algorithm for the efficient detection and tracking of 3D features in large sets of atmospheric data, as well as for the precise localization of the occurring genesis, lysis, merging and splitting events. Data segmentation usually leads to a significant reduction of the size of the considered data. This enables a practical visualization of the data, statistical analyses of the features and their events, and the manual or automatic detection of interesting situations for subsequent detailed investigation. The concepts of the novel algorithm, its technical realization, and several extensions for avoiding under- and over-segmentation are discussed. As example applications, this thesis covers the setup and the results of the segmentation of upper-tropospheric jet streams and cyclones as full 3D objects. Finally, IWAL is presented, which is a web application for providing easy interactive access to meteorological data visualizations, primarily aimed at students. As a web application, the need to retrieve all input data sets and to install and handle complex visualization tools on a local machine is avoided. The main challenge in the provision of customizable visualizations to large numbers of simultaneous users was to find an acceptable trade-off between the available visualization options and the performance of the application. Besides the implementation details, benchmarks and the results of a user survey are presented.
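
A heavily simplified version of the detection-and-tracking idea is sketched below: threshold a 3D field, label the connected regions, and match features between two time steps by voxel overlap. The thesis algorithm is far more elaborate (it also classifies genesis, lysis, merging and splitting); the data here are synthetic.

```python
# Detect 3D features by threshold + connected-component labelling, then track
# them between two time steps by maximum voxel overlap.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(3)

def synthetic_field(centre, shape=(40, 40, 20)):
    """A smooth blob plus noise, standing in for e.g. a wind-speed field."""
    grid = np.indices(shape)
    dist2 = sum((g - c) ** 2 for g, c in zip(grid, centre))
    return np.exp(-dist2 / 40.0) + 0.05 * rng.normal(size=shape)

def segment(field, threshold=0.5):
    labels, n = ndimage.label(field > threshold)
    return labels, n

field_t0, field_t1 = synthetic_field((15, 15, 8)), synthetic_field((19, 17, 9))
labels_t0, n0 = segment(field_t0)
labels_t1, n1 = segment(field_t1)
print("features detected: t0=%d, t1=%d" % (n0, n1))

# Track features by maximum voxel overlap between consecutive time steps.
for f0 in range(1, n0 + 1):
    mask0 = labels_t0 == f0
    overlaps = [(np.logical_and(mask0, labels_t1 == f1).sum(), f1)
                for f1 in range(1, n1 + 1)]
    best_overlap, best_f1 = max(overlaps, default=(0, None))
    if best_overlap > 0:
        print("feature %d at t0 continues as feature %s at t1" % (f0, best_f1))
    else:
        print("feature %d at t0 has no successor (lysis)" % f0)
```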

Relevance: 90.00%

Abstract:

A computationally efficient procedure for modeling the alkaline hydrolysis of esters is proposed based on calculations performed on methyl acetate and methyl benzoate systems. Extensive geometry and energy comparisons were performed on the simple ester methyl acetate. The effectiveness of performing high-level single-point ab initio energy calculations on geometries obtained from semiempirical and ab initio methods was determined. The AM1 and PM3 semiempirical methods were evaluated for their ability to model the transition states and intermediates of ester hydrolysis. The Cramer/Truhlar SM3 solvation method was used to determine activation energies. The most computationally efficient way to model the transition states of large esters is to use the PM3 method. The PM3 transition structure can then be used as a template for the design of haptens capable of inducing catalytic antibodies.
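
The step order of the procedure can be outlined as a small workflow skeleton. The functions below (run_pm3_transition_state, run_single_point, sm3_solvation) are hypothetical placeholders returning canned numbers, not calls to any real quantum chemistry package; only the sequence, PM3 transition-state geometry, then a high-level single-point energy, then the solvation correction, then the activation energy, reflects the text.

```python
# Schematic outline of the screening procedure; all functions are hypothetical
# stubs standing in for external electronic-structure calculations.
HARTREE_TO_KCAL = 627.5095

def run_pm3_transition_state(ester):
    """Hypothetical PM3 TS search; returns a geometry handle (stubbed)."""
    return {"ester": ester, "geometry": "pm3_ts_geometry"}

def run_single_point(geometry, level="high-level ab initio"):
    """Hypothetical single-point energy in hartree on a fixed geometry (stubbed)."""
    return -0.15 if geometry["geometry"] == "pm3_ts_geometry" else -0.25

def sm3_solvation(geometry):
    """Hypothetical SM3 aqueous solvation correction in hartree (stubbed)."""
    return -0.02

def activation_energy(ester):
    reactant_geom = {"ester": ester, "geometry": "reactant_geometry"}
    ts_geom = run_pm3_transition_state(ester)
    e_reactant = run_single_point(reactant_geom) + sm3_solvation(reactant_geom)
    e_ts = run_single_point(ts_geom) + sm3_solvation(ts_geom)
    return (e_ts - e_reactant) * HARTREE_TO_KCAL

for ester in ["methyl acetate", "methyl benzoate"]:
    print(ester, "toy activation energy: %.1f kcal/mol" % activation_energy(ester))
```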

Relevance: 90.00%

Abstract:

The WHO fracture risk assessment tool FRAX® is a computer-based algorithm that provides models for the assessment of fracture probability in men and women. The approach uses easily obtained clinical risk factors (CRFs) to estimate the 10-year probability of a major osteoporotic fracture (hip, clinical spine, humerus or wrist fracture) and the 10-year probability of a hip fracture. The estimate can be used alone or with femoral neck bone mineral density (BMD) to enhance fracture risk prediction. FRAX® is the only risk engine that takes into account the hazard of death as well as that of fracture. The probability of fracture is calculated in men and women from age, body mass index, and dichotomized variables comprising a prior fragility fracture, parental history of hip fracture, current tobacco smoking, ever long-term use of oral glucocorticoids, rheumatoid arthritis, other causes of secondary osteoporosis, and alcohol consumption of 3 or more units daily. The relationship between risk factors and fracture probability was constructed using information from nine population-based cohorts from around the world. The CRFs for fracture had been identified as providing independent information on fracture risk on the basis of a series of meta-analyses. The FRAX® algorithm was validated in 11 independent cohorts with in excess of 1 million patient-years, including the Swiss SEMOF cohort. Since fracture risk varies markedly in different regions of the world, FRAX® models need to be calibrated to those countries where the epidemiology of fracture and death is known. Models are currently available for 31 countries across the world. The Swiss-specific FRAX® model was developed very soon after the first release of FRAX® in 2008 and was published in 2009, using Swiss epidemiological data and integrating the fracture risk and death hazard of our country. Two FRAX®-based approaches may be used to explore intervention thresholds, and they have recently been investigated in the Swiss setting. In the first approach, the guideline that individuals with a fracture probability equal to or exceeding that of women with a prior fragility fracture should be considered for treatment is translated into thresholds using 10-year fracture probabilities. In that case the threshold is age-dependent and increases from 16% at the age of 60 years to 40% at the age of 80 years. The second approach is a cost-effectiveness approach. Using a FRAX®-based intervention threshold of 15% for both women and men 50 years and older should permit cost-effective access to therapy for patients at high fracture probability in our country and thereby contribute to further reducing the growing burden of osteoporotic fractures.
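
The two Swiss intervention-threshold strategies can be written down as a small helper. The FRAX® probabilities themselves are not computed here (the risk engines are country-calibrated models); the age-dependent threshold is linearly interpolated between the quoted anchor values of 16% at age 60 and 40% at age 80, which is an assumption made for illustration.

```python
# Compare a 10-year major osteoporotic fracture probability (%) against the
# two intervention-threshold strategies described in the abstract.
def age_dependent_threshold(age):
    """Threshold equivalent to the risk of a woman with a prior fragility fracture.

    Linear interpolation between 16% at 60 years and 40% at 80 years is an
    assumption of this sketch."""
    lo_age, hi_age, lo_thr, hi_thr = 60.0, 80.0, 16.0, 40.0
    if age <= lo_age:
        return lo_thr
    if age >= hi_age:
        return hi_thr
    return lo_thr + (hi_thr - lo_thr) * (age - lo_age) / (hi_age - lo_age)

COST_EFFECTIVE_THRESHOLD = 15.0   # fixed threshold, women and men >= 50 years

def treatment_indicated(age, major_fracture_probability, strategy="age_dependent"):
    if strategy == "age_dependent":
        return major_fracture_probability >= age_dependent_threshold(age)
    return major_fracture_probability >= COST_EFFECTIVE_THRESHOLD

for age, prob in [(62, 18.0), (70, 25.0), (78, 33.0)]:
    print(age, prob,
          "age-dependent:", treatment_indicated(age, prob),
          "cost-effectiveness:", treatment_indicated(age, prob, "cost_effective"))
```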

Relevance: 90.00%

Abstract:

An automated algorithm for detection of the acetabular rim was developed. Accuracy of the algorithm was validated in a sawbone study and compared against manually conducted digitization attempts, which were established as the ground truth. The latter proved to be reliable and reproducible, demonstrated by almost perfect intra- and interobserver reliability. Validation of the automated algorithm showed no significant difference compared to the manually acquired data in terms of detected version and inclination. Automated detection of the acetabular rim contour and the spatial orientation of the acetabular opening plane can be accurately achieved with this algorithm.
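
The geometric end of such a pipeline, fitting the acetabular opening plane to detected rim points, can be sketched with a least-squares (SVD) plane fit. The rim points below are synthetic, the detection algorithm itself is not reproduced, and converting the normal into version and inclination angles additionally requires an anatomical reference frame, so only the plane normal is reported.

```python
# Least-squares fit of the acetabular opening plane to (synthetic) rim points.
import numpy as np

rng = np.random.default_rng(7)

# Synthetic rim: a tilted circle of radius 25 mm plus measurement noise.
theta = np.linspace(0.0, 2.0 * np.pi, 60, endpoint=False)
circle = np.column_stack([25 * np.cos(theta), 25 * np.sin(theta), np.zeros_like(theta)])
tilt = np.radians(40.0)
rotation = np.array([[1, 0, 0],
                     [0, np.cos(tilt), -np.sin(tilt)],
                     [0, np.sin(tilt),  np.cos(tilt)]])
rim_points = circle @ rotation.T + rng.normal(scale=0.3, size=circle.shape)

def fit_opening_plane(points):
    """Return (centroid, unit normal) of the best-fit plane through the points."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]                      # direction of smallest variance
    return centroid, normal / np.linalg.norm(normal)

centroid, normal = fit_opening_plane(rim_points)
print("opening-plane centroid [mm]:", np.round(centroid, 2))
print("opening-plane normal:", np.round(normal, 3))
```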

Relevance: 90.00%

Abstract:

With improvements in acquisition speed and quality, the amount of medical image data to be screened by clinicians is starting to become challenging in daily clinical practice. To quickly visualize and find abnormalities in medical images, we propose a new method combining segmentation algorithms with statistical shape models. A statistical shape model built from a healthy population will have a close fit in healthy regions. The model will, however, not fit the morphological abnormalities often present in the areas of pathologies. Using the residual fitting error of the statistical shape model, pathologies can be visualized very quickly. This idea is applied to finding drusen in the retinal pigment epithelium (RPE) of optical coherence tomography (OCT) volumes. A segmentation technique able to accurately segment drusen in patients with age-related macular degeneration (AMD) is applied. The segmentation is then analyzed with a statistical shape model to visualize potentially pathological areas. An extensive evaluation is performed to validate the segmentation algorithm, as well as the quality and sensitivity of the hinting system. Most of the drusen with a height of 85.5 µm were detected, and all drusen at least 93.6 µm high were detected.
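
The hinting idea, flagging whatever a healthy-population shape model cannot explain, can be shown in a 1D toy. The sketch below builds a PCA shape model from synthetic "healthy" layer profiles and uses the residual fitting error to highlight a localised bump standing in for a druse; the real system works on segmented RPE surfaces in 3D OCT volumes.

```python
# PCA-based statistical shape model: large residual fitting error marks
# regions the healthy model cannot explain.
import numpy as np

rng = np.random.default_rng(11)
N_POINTS, N_TRAIN, N_MODES = 200, 40, 5
x = np.linspace(0.0, 1.0, N_POINTS)

def healthy_profile():
    """Smooth, gently varying layer height (synthetic)."""
    return (0.2 * np.sin(2 * np.pi * x * rng.uniform(0.8, 1.2))
            + rng.uniform(-0.05, 0.05) + 0.01 * rng.normal(size=N_POINTS))

# 1) Statistical shape model from a healthy population (mean + PCA modes).
training = np.array([healthy_profile() for _ in range(N_TRAIN)])
mean_shape = training.mean(axis=0)
_, _, vt = np.linalg.svd(training - mean_shape, full_matrices=False)
modes = vt[:N_MODES]                      # principal variation modes

def model_fit(profile):
    coeffs = modes @ (profile - mean_shape)
    reconstruction = mean_shape + coeffs @ modes
    return reconstruction, np.abs(profile - reconstruction)   # residual error

# 2) A "pathological" profile: healthy shape plus a localised bump.
pathological = healthy_profile()
pathological += 0.3 * np.exp(-((x - 0.6) ** 2) / 0.001)

_, residual = model_fit(pathological)
suspicious = residual > 3 * residual.mean()
print("flagged fraction of profile: %.2f" % suspicious.mean())
if suspicious.any():
    print("flagged x-range: %.2f .. %.2f" % (x[suspicious].min(), x[suspicious].max()))
```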

Relevance: 90.00%

Abstract:

The goal of this paper is to contribute to the understanding of complex polynomials and Blaschke products, two very important function classes in mathematics. For a polynomial, $f,$ of degree $n,$ we study when it is possible to write $f$ as a composition $f=g\circ h$, where $g$ and $h$ are polynomials, each of degree less than $n.$ A polynomial is defined to be \emph{decomposable} if such an $h$ and $g$ exist, and a polynomial is said to be \emph{indecomposable} if no such $h$ and $g$ exist. We apply the results of Rickards in \cite{key-2}. We show that $$C_{n}=\{(z_{1},z_{2},...,z_{n})\in\mathbb{C}^{n}\,|\,(z-z_{1})(z-z_{2})...(z-z_{n})\,\mbox{is decomposable}\}$$ has measure $0$ when considered as a subset of $\mathbb{R}^{2n}.$ Using this we prove the stronger result that $$D_{n}=\{(z_{1},z_{2},...,z_{n})\in\mathbb{C}^{n}\,|\,\mbox{There exists\,}a\in\mathbb{C}\,\,\mbox{with}\,\,(z-z_{1})(z-z_{2})...(z-z_{n})(z-a)\,\mbox{decomposable}\}$$ also has measure zero when considered as a subset of $\mathbb{R}^{2n}.$ We show that for any polynomial $p$, there exists an $a\in\mathbb{C}$ such that $p(z)(z-a)$ is indecomposable, and we also examine the case of $D_{5}$ in detail. The main work of this paper studies finite Blaschke products, analytic functions on $\overline{\mathbb{D}}$ that map $\partial\mathbb{D}$ to $\partial\mathbb{D}.$ In analogy with polynomials, we discuss when a degree $n$ Blaschke product, $B,$ can be written as a composition $C\circ D$, where $C$ and $D$ are finite Blaschke products, each of degree less than $n.$ Decomposable and indecomposable are defined analogously. Our main results are divided into two sections. First, we equate a condition on the zeros of the Blaschke product with the existence of a decomposition where the right-hand factor, $D,$ has degree $2.$ We also equate decomposability of a Blaschke product, $B,$ with the existence of a Poncelet curve, whose foci are a subset of the zeros of $B,$ such that the Poncelet curve satisfies certain tangency conditions. This result is hard to apply in general, but has a very nice geometric interpretation when we desire a composition where the right-hand factor is degree 2 or 3. Our second section of finite Blaschke product results builds on the work of Cowen in \cite{key-3}. For a finite Blaschke product $B,$ Cowen defines the so-called monodromy group, $G_{B},$ of the finite Blaschke product. He then equates the decomposability of a finite Blaschke product, $B,$ with the existence of a nontrivial partition, $\mathcal{P},$ of the branches of $B^{-1}(z),$ such that $G_{B}$ respects $\mathcal{P}$. We present an in-depth analysis of how to calculate $G_{B}$, extending Cowen's description. These methods allow us to equate the existence of a decomposition where the left-hand factor has degree 2, with a simple condition on the critical points of the Blaschke product. In addition, we are able to place a condition on the structure of $G_{B}$ for any decomposable Blaschke product satisfying certain normalization conditions. The final section of this paper discusses how one can put the results of the paper into practice to determine whether a particular Blaschke product is decomposable. We compare three major algorithms.
The first is a brute force technique where one searches through the zero set of $B$ for subsets which could be the zero set of $D$, exhaustively searching for a successful decomposition $B(z)=C(D(z)).$ The second algorithm involves simply examining the cardinality of the image, under $B,$ of the set of critical points of $B.$ For a degree $n$ Blaschke product, $B,$ if this cardinality is greater than $\frac{n}{2}$, the Blaschke product is indecomposable. The final algorithm attempts to apply the geometric interpretation of decomposability given by our theorem concerning the existence of a particular Poncelet curve. The final two algorithms can be implemented easily with the use of an HTML
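
The second test can be carried out numerically for a concrete product. The sketch below locates the critical points of $B$ inside the unit disk via the zeros of $P'Q-PQ'$ (with $B=P/Q$), evaluates $B$ there and counts the distinct critical values; restricting to the open disk and the clustering tolerance are implementation assumptions of this sketch.

```python
# Critical-value cardinality test: if the number of distinct critical values
# exceeds n/2, the degree-n Blaschke product is indecomposable.
import numpy as np

def blaschke_parts(zeros):
    """Numerator and denominator of B(z) = prod (z - a)/(1 - conj(a) z)."""
    P = np.poly1d([1.0])
    Q = np.poly1d([1.0])
    for a in zeros:
        P = P * np.poly1d([1.0, -a])
        Q = Q * np.poly1d([-np.conj(a), 1.0])
    return P, Q

def critical_values(zeros):
    P, Q = blaschke_parts(zeros)
    numerator_of_Bprime = P.deriv() * Q - P * Q.deriv()
    crit = [z for z in numerator_of_Bprime.roots if abs(z) < 1.0]
    return [P(z) / Q(z) for z in crit]

def distinct_count(values, tol=1e-6):
    reps = []
    for v in values:
        if all(abs(v - r) > tol for r in reps):
            reps.append(v)
    return len(reps)

def passes_cardinality_test(zeros):
    """True means the test does not rule out decomposability."""
    n = len(zeros)
    return distinct_count(critical_values(zeros)) <= n / 2

# B(z) = z**4 (all zeros at 0) decomposes as (z**2) o (z**2): test passes.
print(passes_cardinality_test([0, 0, 0, 0]))                   # True
# A product with generic zeros typically fails the test (indecomposable).
print(passes_cardinality_test([0.1, 0.3 + 0.2j, -0.4j, 0.5]))  # likely False
```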

Relevance: 90.00%

Abstract:

We consider nonparametric missing data models for which the censoring mechanism satisfies coarsening at random and which allow complete observations on the variable X of interest. We show that, beyond some empirical process conditions, the only essential condition for efficiency of an NPMLE of the distribution of X is that the regions associated with incomplete observations on X contain enough complete observations. This is heuristically explained by describing the EM algorithm. We establish identifiability of the self-consistency equation and efficiency of the NPMLE in order to make this statement rigorous. The usual kind of differentiability conditions in the proof are avoided by using an identity which holds for the NPMLE of linear parameters in convex models. We provide a bivariate censoring application in which the condition, and hence the NPMLE, fails, but where other estimators, not based on the NPMLE principle, are highly inefficient. It is shown how to slightly reduce the data so that the conditions hold for the reduced data. The conditions are verified for the univariate censoring, double censoring, and Ibragimov-Has'minski models.
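
The self-consistency/EM idea can be shown in the simplest coarsened-data setting, right-censored univariate observations: each incomplete observation redistributes its mass over the complete observations lying in its coarsening region, which is the "enough complete observations" requirement in miniature. The data below are made up.

```python
# Self-consistency (EM) iteration for the NPMLE of a distribution from
# right-censored observations; the support consists of the complete observations.
import numpy as np

# (time, observed) pairs: observed=True is a complete observation of X.
data = [(2.0, True), (3.0, False), (4.0, True), (5.0, True),
        (5.5, False), (7.0, True), (8.0, True)]

support = sorted({t for t, obs in data if obs})       # complete observations
index = {t: i for i, t in enumerate(support)}
n = len(data)
p = np.full(len(support), 1.0 / len(support))         # initial NPMLE guess

for _ in range(200):                                  # self-consistency iterations
    expected = np.zeros_like(p)
    for t, obs in data:
        if obs:                                       # complete: all mass on t
            expected[index[t]] += 1.0
        else:                                         # censored: E-step split over region
            region = np.array([s > t for s in support])
            expected[region] += p[region] / p[region].sum()
    p = expected / n                                  # M-step

for t, mass in zip(support, np.round(p, 3)):
    print("P(X = %.1f) = %.3f" % (t, mass))
print("total mass:", round(p.sum(), 3))
```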

Relevance: 90.00%

Abstract:

Currently, photon Monte Carlo treatment planning (MCTP) for a patient stored in the patient database of a treatment planning system (TPS) can usually only be performed using a cumbersome multi-step procedure in which many user interactions are needed. This means automation is needed for use in clinical routine. In addition, because of the long computing time in MCTP, optimization of the MC calculations is essential. For these purposes a new graphical user interface (GUI)-based photon MC environment has been developed, resulting in a very flexible framework. By this means, appropriate MC transport methods are assigned to different geometric regions while still benefiting from the features included in the TPS. In order to provide a flexible MC environment, the MC particle transport has been divided into different parts: the source, the beam modifiers and the patient. The source part includes the phase-space source, source models and full MC transport through the treatment head. The beam modifier part consists of one module for each beam modifier. To simulate the radiation transport through each individual beam modifier, one of three full MC transport codes can be selected independently. Additionally, for each beam modifier a simple or an exact geometry can be chosen. Thereby, different complexity levels of radiation transport are applied during the simulation. For the patient dose calculation, two different MC codes are available. A special plug-in in Eclipse providing all necessary information by means of DICOM streams was used to start the developed MC GUI. The implementation of this framework separates the MC transport from the geometry, and the modules pass the particles in memory; hence, no files are used as the interface. The implementation is realized for the 6 and 15 MV beams of a Varian Clinac 2300 C/D. Several applications demonstrate the usefulness of the framework. Apart from applications dealing with the beam modifiers, two patient cases are shown, in which comparisons are performed between MC-calculated dose distributions and those calculated by a pencil beam algorithm or the AAA algorithm. Interfacing this flexible and efficient MC environment with Eclipse allows widespread use for all kinds of investigations, from timing and benchmarking studies to clinical patient studies. Additionally, it is possible to add modules, keeping the system highly flexible and efficient.
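
The modular architecture described, a source, a chain of beam-modifier modules and a patient exchanging particles in memory, can be sketched schematically. The physics below is a trivial attenuation placeholder and the module and parameter names are illustrative; the real framework delegates transport to full Monte Carlo codes.

```python
# Schematic module chain: source -> beam modifiers -> patient, with particles
# handed over in memory rather than through files.  The "physics" is a toy
# weight attenuation, not real photon transport.
import random
from dataclasses import dataclass

@dataclass
class Particle:
    energy: float   # MeV
    weight: float   # statistical weight

class PhaseSpaceSource:
    def __init__(self, n_particles, nominal_energy=6.0):
        self.n, self.e = n_particles, nominal_energy
    def particles(self):
        for _ in range(self.n):
            yield Particle(energy=random.uniform(0.5, self.e), weight=1.0)

class BeamModifier:
    """One module per modifier; 'simple' or 'exact' geometry selects the model."""
    def __init__(self, name, attenuation, geometry="simple"):
        self.name, self.mu, self.geometry = name, attenuation, geometry
    def transport(self, particle):
        # placeholder: the 'exact' geometry attenuates slightly more than 'simple'
        factor = self.mu * (1.1 if self.geometry == "exact" else 1.0)
        particle.weight *= (1.0 - factor)
        return particle if particle.weight > 1e-3 else None

class Patient:
    def __init__(self):
        self.dose = 0.0
    def score(self, particle):
        self.dose += particle.weight * particle.energy

def run_simulation(source, modifiers, patient):
    for p in source.particles():
        for m in modifiers:
            p = m.transport(p)
            if p is None:
                break
        else:
            patient.score(p)
    return patient.dose

beamline = [BeamModifier("jaws", 0.05, "simple"),
            BeamModifier("MLC", 0.10, "exact"),
            BeamModifier("wedge", 0.20, "exact")]
patient = Patient()
dose = run_simulation(PhaseSpaceSource(10000), beamline, patient)
print("scored dose (arbitrary units): %.1f" % dose)
```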